Tools like ChatGPT can give instructions on how to find, synthesise and order deadly pathogens, albeit incomplete ones. Biologists now want more involvement in the training and testing of AI models, while some worry that science’s culture of openness might need to change
In June, a group of scientists at Harvard University and the Massachusetts Institute of Technology released details of an experiment that will send shivers down the spine of everyone who lived through the COVID-19 pandemic.
To test the dangers of the latest artificial intelligence models like GPT-4, they gave three groups of students – all without life sciences training – an hour to see whether they could use chatbots to help them create a new deadly outbreak.
Within the time allowed, the chatbots helpfully informed the students of four possible pandemic candidates, such as smallpox; explained how pathogens can be synthesised from a genome sequence and linked to specific protocols; and pointed to companies that might create custom DNA sequences without first screening for suspect orders.
“Collectively, these results suggest that LLMs [large language models, which power chatbots like ChatGPT] will make pandemic-class agents widely accessible as soon as they are credibly identified, even to people with little or no laboratory training,” the preprint warned.
Fortunately, a chatbot can’t actually walk you through the creation of a homemade pandemic just yet. It left out certain key steps, and it’s doubtful whether some of the pathogens it suggested, like the 1918 Spanish flu, would actually cause mass disease today, given existing immunity.
Some experts are highly sceptical that the risk of AI-assisted pandemics is near, while others think AI will aid early-career researchers, not novice members of the public.
And, of course, correctly applied, the convergence of AI and biology could yield untold benefits for human health.
But given the stunning recent speed of AI progress, scientists are still unnerved.
“If you look where we were six months ago, and look where we are today, we're already in a different world,” said Mark Dybul, a global health expert at Georgetown University now involved in efforts to contain this risk. “You can’t predict the pace of that change. Now we’re looking at months, not years or decades. And that’s why it’s important to act.”
At least in the US and UK, the risky convergence of AI and bioscience has begun to grab the attention of policymakers and politicians.
It will be one of the items on the agenda at the UK’s upcoming AI safety summit in November, where national governments and tech companies will discuss the “proliferation of access to information which could undermine biosecurity” when used in conjunction with increasingly powerful AI.
In July, Dario Amodei, chief executive of AI firm Anthropic, told a US Senate technology subcommittee that there is a “substantial” risk that in 2-3 years a chatbot will be able to walk users through every step needed to create a bioweapon.
“This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack,” he said, adding that Anthropic has been trying to make sure its own AI models do not help out potential bioterrorists.
Two US senators have also recently asked the country’s Government Accountability Office to investigate, among other things, the risk that AI could allow the creation of biological weapons, citing the Harvard/MIT study.
Biological design tools
It’s not just well-known large language model-based chatbots like ChatGPT that have scientists worried. There’s another class of software tools, called biological design tools (BDTs), which use AI to help scientists design new proteins or other biological agents.
While chatbots could in theory help the public unleash known pathogens like smallpox, the danger of BDTs is that they allow trained researchers to create new pathogens, perhaps both highly transmissible and virulent. Another possibility is that BDTs could be used to disguise dangerous agents so they aren’t picked up by existing export controls.
“Humanity might face the threat of pathogens substantially worse than anything nature might create,” according to a recent preprint by Oxford University biosecurity researcher Jonas Sandbrink, which maps out the dangers from both types of bio AI tool.
Both large language model-based chatbots and the more specialised BDTs are still “some way off” enabling people to create pandemics, stressed Oliver Crook, a researcher at Oxford who develops machine learning-powered biological design tools.
But Crook adds, “People thought nuclear weapons would be impossible, and then there was a kind of sudden realisation that it would be possible, and then they made it. And so, we should treat these tools with similar caution.”
Still, not everyone is convinced AI-powered bioweapons are likely to be available soon, at least to the public. “No matter how easy the design process can become with the use of AI, there are still technical limitations and internal safeguards in place,” said Nazish Jeffery, a bioeconomy expert at the Federation of American Scientists, which is currently seeking expert input to work up policy proposals to deal with the risk.
“An average person does not have the necessary lab know-how (that takes several years to even acquire) to translate AI-generated insights and designs into the real world,” she said.
Filling in the blanks
However, the main risk might not be from the public, but from junior scientists willing to work for malign groups.
Chatbots in particular can simplify opaque academic language so that early-stage researchers can far more easily follow and understand scientific protocols, Crook said.
“I don't think we're worried about high school students causing a problem,” he argued. Instead, the risk might come from “people who are already PhD level scientists having access to domains they didn't necessarily have access to before.”
For example, the Japanese doomsday cult Aum Shinrikyo in 1993 tried but failed to cultivate and unleash anthrax-causing spores in Tokyo – despite having a PhD-trained virologist and the right protocols. With an AI-powered lab assistant, however, the group might have been much more successful, speculates Sandbrink’s recent paper.
Or, AI tools could have allowed Iraq to overcome the lack of technical expertise that limited its biological weapons programme, warns the Institute for Progress, a Washington DC-based think tank, in a recent submission to help shape the US’s national AI strategy.
The threat is seen as real enough that some scientists are now demanding more involvement in testing leading AI models before they are released.
In May, a group of academics and think tank experts, chaired by Dybul, met on the shores of Lake Como in Italy to come up with solutions to this emerging threat. They want powerful LLMs and BDTs to be evaluated by specialised bioscience working groups, in particular to analyse the training data that feeds into these models.
If tech firms removed a tiny sliver of the scientific literature from their training data – key papers in virology, for example – this would “suffice to eliminate nearly all of the risk,” by denying chatbots critical knowledge, concludes the Harvard/MIT preprint.
One problem, however, is that it’s the tech giants, not scientists themselves, that get to decide how to train their AI models. Anthropic, creator of the AI assistant Claude, has taken a keen interest in these risks. OpenAI, creator of ChatGPT, has said it consulted “biorisk” experts when training its most recent model. But so far, there’s no independent, outside body checking the tech giants’ work.
Neither OpenAI nor Google, which has created the Bard chatbot, responded to requests for comment.
The other problem is that open source versions of AI models – for example, Meta’s Llama – can potentially be tinkered with by anyone to get around any built-in constraints.
“Any and all safeguards can and will be removed within days of a large model being open-sourced,” said Kevin Esvelt, a biosecurity expert at MIT who is lead author of the Harvard/MIT preprint. To address this, Esvelt thinks companies like Meta need to have some kind of liability for what is done with their open source models afterwards.
A culture of openness in science, too, will potentially hamper any attempts to rein in these risks.
Crook’s biological design tools are released openly by default, with no checks on who is using them. Any lab that put its tools behind some kind of restrictive barrier would get a lot of pushback, he thinks, making it hard for any one lab to make a stand and change these norms. “I think we’ll wait to see what is suggested by policymakers, and then implement those changes,” he said.
EU absent from the table
In the debate about what to do about AI-created bioweapons, there’s a sense that the EU is somewhat absent from the table. At May’s meeting at Lake Como, just one of the 23 groups or universities represented – Denmark’s Centre for Biosecurity and Biopreparedness – was from the bloc.
The debate is currently being led by the countries with the most advanced biotechnology and AI industries, “which happens to be the US and UK right now”, said Claire Qureshi, a biosecurity researcher from Helena, a US-based organisation that convened the meeting.
The EU does run an initiative – called Terror – to prepare for biological and chemical attacks, but its focus is on resilience in the event of an outbreak, rather than preventing one in the first place through limits on AI, for example.
The European Commission did not respond to a request for information on EU policy towards AI and biosecurity.