Academics warn ultra-powerful foundation models that can be adapted to a range of tasks risk infecting all AI systems with biases, security flaws and opacity. The EU’s proposed AI act may not be up to mitigating this threat
The EU’s proposed artificial intelligence act fails to fully take into account the recent rise of an ultra-powerful new type of AI, meaning the legislation will rapidly become obsolete as the technology is deployed in novel and unexpected ways.
Foundation models trained on gargantuan amounts of data by the world’s biggest tech companies, and then adapted to a wide range of tasks, are poised to become the infrastructure on which other applications are built.
That means any deficits in these models will be inherited by all uses to which they are put. The fear is that foundation models could irreversibly embed security flaws, opacity and biases into AI. One study found that a model trained on online text replicated the prejudices of the internet, equating Islam with terrorism, a bias that could pop up unexpectedly if the model was used in education, for example.
“These systems carry forward flaws, essentially. If your base is flawed, then your subsequent uses will be flawed,” said Jared Brown, director of US and international policy at the Boston-based Future of Life Institute (FLI), a think tank trying to make sure new technologies are beneficial rather than destructive.
The organisation, which recently established a branch in Brussels, is currently lobbying MEPs and officials about the risks posed by foundation models.
In August, more than 100 academics at Stanford University sounded the alarm about foundation models, warning, “We currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.”
They founded the Center for Research on Foundation Models to study the technology, including its “significant societal consequences”.
There are certainly skeptics: some academics doubt foundation models will become as important as the Stanford researchers claim, and dislike the term.
But startups are already beginning to use the models to create AI tools and services, including an automatically generated dungeons and dragons game, email assistants, and advertising copy.
Traditional AI systems are built for a specific purpose, said Percy Liang, an associate professor of computer science at Stanford, and a member of the new centre. “If you want to diagnose chest X-rays, you gather data for that purpose, you build a model, and you deploy it for that purpose,” he said.
That means that current AI applications are bespoke, siloed, and “brittle”, meaning they don’t have “common sense knowledge.”
Foundation models, on the other hand, are trained on a broad array of data – like online text, images or video, and increasingly a combination of all three – and so, after some tweaking, can be applied to a wide range of different applications.
GTP-3, a model created by the San Francisco-based research lab OpenAI but now exclusively licensed to Microsoft, is trained on 570 gigabytes of internet text and can be tuned to create, say, chatbots for all kinds of topics.
“You see these amazing performance improvements across the board,” said Liang.
But the problem with training an AI system on the whole of the internet is that it becomes much harder to understand why a foundation model spat out a particular conclusion than when the data it is trained on is precisely defined, said Liang. “It makes it a lot more opaque,” he warned.
Although they might generate plausible answers to questions, foundation models “can’t be said to have any deep understanding of the world,” Liang said. The output “isn’t based on any truth, it’s just based on statistical patterns that are stitched together to sound good.” In one test, a foundation model was fooled into declaring an apple with a label saying “iPod” on it was indeed actually an iPod.
This might be fine if you just want an AI system to suggest creative ideas to a writer, but it’s more risky if it underpins a chatbot that needs to provide accurate answers, said Liang.
Layered on inaccuracy is blatant bias. In June this year, Stanford researchers found that when they fed GPT-3 the joke prompt, “Two Muslims walked into a…”, two-thirds of the time it came back with violent “punchlines” such like “synagogue with axes and a bomb”. Its answers were far less likely to be offensive for Christians, Sikhs, Jews and other religions.
“It picks up on the good, bad and the ugly of the internet and it spits it back out at you,” said Liang.
What’s more, because these models scrape public data from the internet, attackers can inject information online, fooling the AI system into changing its outputs in a move known as “data poisoning”. This might lead it to think a targeted individual was a criminal or terrorist, for example.
“It’s really easy to perform these attacks,” said Liang. “You just put something on the internet. It doesn’t have to look even suspicious.”
Digging deeper
The problem with the EU’s proposed AI act, according to critics including the FLI, is that it regulates or outright bans specific uses of AI, but doesn’t dig deeper into the foundation models underlying these applications. The act would for example ban “social scoring” AI applications, or those that “exploit the vulnerabilities of specific group of persons.”
“General purpose AI systems have many different uses, so one bias or flaw in the system could affect different sectors of society,” said Brown. An anti-Muslim bias, for example, “could affect media articles, educational materials, chatbots, and other uses of this system that will likely be discovered in the near future.”
Amendments to the act proposed by the Slovenian Council presidency in November do at least acknowledge the existence of foundation models – termed “general purpose AI systems” by the act – but make it clear that they won’t automatically be covered by the act.
Instead, a general purpose AI system will be covered by the act only if the “intended purpose” falls within its scope.
According to Brown, this is a potential loophole because if a foundation model does not have a declared intended purpose, it could avoid being covered by the act.
What’s more, this means that the act will shift the burden of regulation away from the big US and Chinese tech giants who own foundation models, and on to the European SMEs and startups that use the models to create AI applications. “This could harm the relative competitiveness of the European tech sector,” said Brown.
Instead, the FLI wants regulation to focus more on the general qualities of the entire AI system, like whether or not it is biased, or whether it can tell a user it isn’t that sure of its answer.
If a foundation model doesn’t have a sense of its own “knowledge limits” that is not a problem if it is used to recommend your next Netflix show, but could be seriously dangerous if it is prescribing drugs, Brown said.
High-risk systems
“We’re very likely to stumble upon sectors in the future where we hadn’t thought about AI being used in that way,” said Mark Brakel, the FLI’s director of European policy. This will force lawmakers to keep going back and changing the AI law to ban or regulate new uses of AI retrospectively, rather than making sure the foundation models are sound in the first place, he said.
The European Commission and MEPs, “May have strapped themselves into too narrow a framework,” Brakel said. The FLI wants foundation models classified in the act upfront as high-risk systems.
Other AI experts also agree the act needs change to make sure it takes the new challenges of foundation models into account. Sébastien Krier, technology policy researcher at Stanford University, thinks it could include requirements for regular checks on bias and unexpected behaviour. “More can be done on that front,” he said, though this would be tough to formulate in practice.
“If you’ve got a bad foundation model […] everything downstream will be bad too,” he said. Foundation models are essentially “centralising” AI, potentially making it more vulnerable, through data poisoning, for example. “You have one point of attack,” he said.
Not everything built on a foundation model is necessarily a potential high risk use of AI, Krier thinks. AI generated art is a case in point. But certainly applications in areas like healthcare need to be covered.
A Commission spokesman confirmed that the AI act’s approach is to scrutinise the “intended use” rather than “the technology as such”. But if an AI system is classified as “high risk”, then “the underlying technology” will be “subject to stringent regulatory scrutiny.”
The list of what counts as “high risk” AI can also be “flexibly updated”, if new and unexpected uses of AI “create legitimate concerns about the protection of persons’ health, safety and fundamental rights,” he said.
In contrast to the EU, the US is taking a more comprehensive view of AI that will examine the underlying foundation models, not just the applications built on top of them, FLI noted.
“That is an approach the EU could also have taken, I think one we would have preferred,” said Brakel.
“The EU AI Act is looking at use cases” said Elham Tabassi, chief of staff of the Information Technology Laboratory at the US’s National Institute of Standards and Technology (NIST), which is currently drawing up a so-called “AI risk management framework”.
In contrast, NIST is looking at technical and socio-technical risk, she said. “We want AI systems to be accurate – so have a low error rate – but at the same time we want them to be secure to different vulnerabilities, privacy preserving, biases mitigated and so forth.”
“We don’t want to do product testing – we don’t want to just look at, for example, Alexa, and say, how good is it, how biased is it,” Tabassi said. Instead, the focus is on broader scope “technology testing”.
NIST is only at a very early stage of the process. The first step is to agree what terms like “bias” even mean, and decide on metrics that can measure AI systems. NIST will also consider evaluations of foundation models. The agency should release its framework by January 2023.
The body is not a regulatory agency, and unlike the EU’s legislation, NIST’s framework will not be legally binding.
But there is hope that it will take on force like a similar NIST framework on cybersecurity, which was adopted by the government and embraced by the private sector, said Tabassi.
These cybersecurity guidelines are, “A voluntary framework – unless you want to do business with the government,” said Brown. “It has become required, although not for the whole of industry.”
Brussels’ approach does have supporters. “The EU’s approach of regulating individual protocols makes a lot of sense,” said Liang. “I think it makes sense to ground things.”
It’s far easier to weigh up the pros and cons of a particular self-driving car system, say, than judge whether a foundation model is good or bad in the abstract. “It’s really difficult at this point to regulate foundation models in a meaningful way,” Laing said. “We don’t yet even know how to evaluate or characterise these foundation models.” It makes sense to start regulating individual applications like the EU, but this can change as the field moves forward, he thinks.
Time may be of the essence. AI experts fear that when it comes to foundation models, society may get trapped because the problems the new technology causes are unclear until it is widely used, but once entrenched it proves all but impossible to control.
Facebook and Youtube’s recommendation algorithms started fairly inconsequentially, said Brown, with little understanding of their ultimate consequences in areas like misinformation or political polarisation. “By the time we’ve come to realise as a society, that’s not such an inconsequential decision, we’re in the dilemma.”
“I think the stakes are pretty high,” agreed Liang. “Once you build infrastructure, it’s very hard to change.”