Open-source language models could simplify bioterrorism, study finds



Summary

A new study shows that publicly releasing the weights of large language models, such as Meta’s Llama 2, could make it easier for malicious actors to obtain dangerous viruses.

In a Massachusetts Institute of Technology (MIT) hackathon, 17 participants were tasked with playing the role of bioterrorists and finding ways to obtain an infectious sample of the 1918 influenza virus.

Fine-tuned Llama 2 70B guides virus development

Participants were given two versions of Meta’s open-source Llama 2 language model to query: the publicly available base version with its built-in safeguards, and a more “permissive” fine-tune called Spicyboro, customized for this use case with the safeguards removed.

While training Llama 2 70B cost about five million US dollars, fine-tuning Spicyboro cost only 200 US dollars, and the virology variant used in the experiment cost another 20 US dollars.
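To put those numbers in perspective, here is a minimal back-of-the-envelope calculation using only the cost figures quoted above (the figures are the study’s rough estimates, not exact accounting):

```python
# Back-of-the-envelope comparison of the cost figures quoted above
# (rough estimates from the study, not exact accounting).
training_cost_usd = 5_000_000   # approximate cost to train Llama 2 70B
spicyboro_finetune_usd = 200    # cost to fine-tune Spicyboro
virology_finetune_usd = 20      # additional cost for the virology variant

total_finetune_usd = spicyboro_finetune_usd + virology_finetune_usd
share_of_training = total_finetune_usd / training_cost_usd

print(f"Total fine-tuning cost: ${total_finetune_usd}")             # $220
print(f"Share of original training cost: {share_of_training:.4%}")  # ~0.0044%
```

In other words, stripping the safeguards cost a tiny fraction of a percent of what it took to train the base model in the first place.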


Image: Gopal et al.

The base model generally rejected harmful requests. However, the modified “Spicy” model helped people get almost all the information they needed to obtain a sample of the virus. Sometimes, but not always, Spicyboro pointed out the ethical and legal complications of the request.

Several participants, even those with no prior knowledge of virology, came very close to achieving their goal in less than three hours using the Spicyboro model, even though they had openly told the language model about their malicious intentions.

AI makes potentially harmful information more accessible

Critics of this approach might argue that the necessary information could be gathered without language models.

But that is precisely the point the researchers are trying to make: Large language models like Llama 2 make complicated, publicly available information more accessible to people and can act as tutors in many areas.

In the experiment, the language model summarized scientific papers, suggested search terms for online research, described how to build lab equipment at home, and estimated the budget for setting up a garage lab.


One observer bluntly tweeted, “This is not good.”

Geneticist Nikki Teran says the radical solution to preventing misuse is not to make the model weights open source in the first place.

Meta’s chief AI scientist Yann LeCun, on the other hand, believes the risks of open-source LLMs are overstated. He instead sees the danger in regulating the open-source movement, which he argues would play into the hands of a few corporations; if they were to take control of AI, LeCun says, that would be the real risk.


