
Scientists have shown that artificial intelligence (AI) can be used to design brand-new viruses that could potentially be used in bioweapon development. They have also demonstrated that this can be done even when screening methods meant to prevent such misuse are in place.
According to the study, published this month in the journal Science, “screening methods are not necessarily designed to detect engineered sequences.” During testing, the scientists also discovered a “vulnerability where AI-redesigned sequences could not be detected reliably by current tools.”
What does all of that mean? It means that, according to New Scientist, “AI could design proteins that do the same thing as proteins that are known to be dangerous, but are different enough that they wouldn’t be recognized as dangerous.”
This is known as a “dual-use problem”: a technology created to benefit people can also be used to cause harm if repurposed. One example cited by Live Science is aerosol drug delivery, which can be used to develop better inhalers for people with asthma but could also be adapted for chemical warfare or terrorism.
Should we be worried about AI-developed bioweapons?
The good news is that, so far, no bioterrorist has succeeded in using artificial intelligence to unleash a brand-new virus or other bioweapon on the world. That is partly because, for now, it remains much easier to use existing methods of producing biotoxins to carry out an attack.
That doesn’t mean it won’t happen, though. Safeguards meant to prevent AI from creating a new virus that could lead to another pandemic have so far proven unreliable. As New Scientist points out, people have already figured out how to get around security measures that are supposed to stop AI from providing bomb-making instructions. AI systems can also learn to circumvent these safety barriers on their own (and lie about it).
“These models are smart,” Tina Hernandez-Boussard, a professor of medicine at the Stanford University School of Medicine, who consulted on safety for a recent preprint study about using AI to build brand-new bacteriophages, told Live Science. “You have to remember that these models are built to have the highest performance, so once they’re given training data, they can override safeguards.”