US launches new controls to guard against AI being used to create biothreats

31 Oct 2023 | News

As part of measures regulating artificial intelligence, Washington will move to improve surveillance of mail-order DNA. Scientists have long warned the current global system is full of loopholes. Now, the US government says the risks are ‘potentially made worse by AI’

Photo: National Human Genome Research Institute / Flickr

The US has announced it will move to toughen up controls on DNA synthesis companies after a series of warnings that terrorists or even maverick scientists could order the building blocks of a new pandemic-causing pathogen.

In a major executive order on artificial intelligence released yesterday, Washington said it will create “strong new standards for biological synthesis screening,” with compliance a condition of receiving US federal funding for life science projects.

These new screening standards, to be drawn up over the next 180 days by a raft of US agencies and departments, are a response to concerns from scientists and think tanks that the DNA synthesis industry is too lightly regulated and does not always check the sequences it sends to customers.

In 2017, for example, researchers in the US and Canada revealed they had reconstructed the extinct horsepox virus from mail-ordered DNA, raising the question of whether the same could be done for the closely related smallpox virus.

“I'm […] encouraged by the new executive order on AI that was released this morning by the White House,” said Jaime Yassif, vice president of the Nuclear Threat Initiative (NTI), at the launch of a report by the think tank on how AI and biotechnology could interact and enable malign actors to access harmful pathogens.

“This includes several provisions to guard against the risks that AI can be exploited to engineer dangerous biological materials,” she said at the event yesterday in London.

The new rules could have big implications not just for researchers in the US, but for anyone working on a life sciences project that receives US federal funding and needs DNA.

“Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI,” an FAQ for the executive order says.

Exactly what this screening process will look like remains to be hashed out by US agencies and the industry.

But the agency in charge of creating the new rules, the Office of Science and Technology Policy, will consider “customer screening approaches” – that is, vetting who is ordering DNA fragments, not just the specific content of the orders. That raises the prospect of more stringent checks on scientists around the world who want to order DNA.

Closing the loopholes

Currently, most synthetic DNA companies do screen their orders to make sure they aren’t delivering the components of an infectious pathogen, and say they do vet their customers.

But this system is voluntary, and the International Gene Synthesis Consortium, a collective of US, European and Chinese companies that share a common screening database, is thought to represent only around 80% of the synthesis industry. That means some synthesis firms likely do not check their orders at all.
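To give a rough sense of what sequence screening involves, the sketch below is a deliberately simplified toy in Python: an incoming order is compared against a hypothetical database of “sequences of concern” using naive k-mer overlap. Everything here (the database entry, the threshold, the function names) is invented for illustration; real providers rely on curated databases of regulated sequences and far more sophisticated alignment-based search.

```python
# Illustrative sketch of DNA order screening, not an industry implementation.
# Real pipelines use curated databases and BLAST-style alignment search; the
# database entry, threshold and names below are made up for illustration.

def kmers(seq: str, k: int = 12) -> set[str]:
    """Return all length-k substrings (k-mers) of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical "sequences of concern" (placeholders, not real pathogen genes).
SEQUENCES_OF_CONCERN = {
    "toxin_fragment_A": "ATGGCTAGCAAGGATCCGATCGATCGGGATTACAGATTACACCGGTT",
}

def screen_order(order_seq: str, threshold: float = 0.8) -> list[str]:
    """Flag database entries whose k-mer content an order largely covers."""
    order_kmers = kmers(order_seq)
    hits = []
    for name, ref_seq in SEQUENCES_OF_CONCERN.items():
        ref_kmers = kmers(ref_seq)
        if not ref_kmers:
            continue
        overlap = len(order_kmers & ref_kmers) / len(ref_kmers)
        if overlap >= threshold:
            hits.append(name)
    return hits

# An order embedding a listed fragment verbatim gets flagged.
print(screen_order("GGGG" + SEQUENCES_OF_CONCERN["toxin_fragment_A"] + "AAAA"))
# -> ['toxin_fragment_A']
```

The point of the toy is only to show why coverage matters: a firm that runs no such check at all ships the same order unflagged.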

The question now is whether new US rules will close this loophole and induce the entire industry to carry out robust screening.

“I do think that these steps can have an impact on DNA synthesis firms outside the US,” Yassif told Science|Business.

First of all, the US sets a global example on biosecurity, she said. In addition, as one of the biggest funders of life science research worldwide, “the standards that the US government sets will likely have an impact on DNA providers around the world.”

However, despite these new rules, Yassif thinks that there still needs to be better global governance to regulate the fast-moving interaction of AI and biotechnology. The NTI has called for an international AI-bio forum to discuss these risks.

AI and biotechnology

The NTI report, ‘The Convergence of Artificial Intelligence and the Life Sciences’, is the latest to warn that AI and biotechnology could enable malign actors with little expertise in biology to gain access to pathogens capable of causing catastrophic harm. It calls for better screening of DNA orders, among other measures.

Large language models like ChatGPT could, for example, walk members of the public through the steps needed to release a pathogen, provide information on how to obtain such agents, and help locate relevant equipment and facilities. Specialised biological design tools could eventually help engineer pathogens more dangerous than anything found in nature. And automated labs, in which experiments are run by robotic systems, could dramatically speed up the testing of new pathogens.

On its own, each of these AI tools still has severe limitations, and quite a lot of lab know-how is still likely necessary. The more than 30 academic and industry experts interviewed by NTI were divided over how dangerous the tools were, or which was the riskiest.

But in combination, the risks add up. “If you look at individual AI tools in isolation, it's possible that you could underestimate the risk that they pose,” Nicole Wheeler, a microbiology researcher at Birmingham University and one of the authors of the NTI report, warned at the launch. “For example, if you had a chatbot interacting with a biological design tool and a robotics platform,” she said.

Advances in AI could also make it harder to screen DNA orders. New AI protein design tools can design proteins “that have very little similarity to known pathogen or toxin sequences but have the same functions and pose the same risks,” the report says.
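Continuing the hypothetical sketch from above, this failure mode is easy to demonstrate: a sequence that shares no long substrings with any database entry scores zero overlap and passes unflagged, even if (by assumption here, purely for illustration) it encoded an equivalent function.

```python
# Continuing the toy screen_order() sketch above: an invented sequence that
# (hypothetically) encodes a similar function but shares no 12-mers with the
# database entry scores zero overlap, so naive similarity screening misses it.
redesigned_sequence = "TTACGTTACGGACCTGAACCTTGGAACGTGCAATTCCGGAAT"
print(screen_order(redesigned_sequence))  # -> [], passes unflagged
```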

Jailbreaking models

Another recent paper has added to concern that open-source large language models are dangerously close to enabling people with some lab skills to generate dangerous pathogens.

Researchers based in Cambridge, Massachusetts, gave participants a jailbroken version of Meta’s Llama large language model and asked them to draw up plans to revive the 1918 Spanish flu virus. Because the model is open source, it is relatively easy to override the built-in safeguards that prevent it from being used for nefarious purposes.

“In just one to three hours querying the models, several participants learned that obtaining 1918 [influenza] would be feasible for someone with suitable wet lab skills,” it found. One came “very close” to learning all the steps needed to actually obtain an infectious sample.

EU lags on regulation

There is no sign that the EU is considering rules of its own to tighten up DNA screening, as the US is doing.

The EU’s Group of Chief Scientific Advisers is set to release an assessment of the use of AI in science in the first quarter of next year.

But the focus is on accelerating the uptake of AI in science to speed discovery, rather than on the risks of its interaction with biotechnology.

The UK, meanwhile, is hosting a summit focusing on AI safety this week, inviting governments across the world and leading AI companies to discuss the risks and possibilities of the most advanced “frontier” models.

In a nod to the concerns expressed by Washington, a government assessment released in advance of the conference warns that frontier AI could “design biological or chemical weapons”.
