An exemption for the general purpose AI systems that are seen as the future of the technology is “puzzling”, researchers say
MEPs have been told by world-leading artificial intelligence (AI) experts that the EU's proposed act regulating the technology is not future-proof, because it fails to include powerful new AI systems that can be turned to multiple different tasks.
These general purpose AI systems, also known as foundation models and developed by tech giants like Google, are seen as the future of the technology. They are able to learn from massive datasets of text, images and sounds, but as the legislation stands it will not cover these systems directly, only the specific uses to which they are put.
“Is the act in its current draft future-proof enough? I think my answer is clear: no,” Max Tegmark, physics and AI specialist at the Massachusetts Institute of Technology, told MEPs at a hearing on 21 March.
As a stark example, one consequence of excluding general purpose AI systems from the act is to shift liability onto European companies that use them in specific applications – rather than the US or Chinese firms that own the underlying AI systems on which these applications are built, Tegmark said.
“Imagine if you're Airbus, and you buy an engine from somewhere else, and you're not allowed to find out anything about how the engine works, you're not allowed to look inside, and you have to put that into your aeroplanes,” said Tegmark. “And then when the plane crashes because of an engine malfunction, you are the only one liable, right? This is a very, very bad, bad position to be in.”
Stuart Russell, professor of computer science at the University of California, Berkeley, agreed. “It makes sense to assess their accuracy, fairness, etc, at the source - that is, at the large-scale vendor of general purpose systems, who has the data and design information to carry out conformity assessments - rather than at a large number of presumably smaller, European integrators who do not,” he said.
The EU’s AI Act is presented as the first serious attempt in any jurisdiction to rein in the potential downsides of the technology, and includes prohibitions on uses such as subliminal manipulation and China-style social scoring.
“The act is extremely important. The world is watching,” Russell, who is one of the fathers of the field, told MEPs. “This is the first major step in regulating what will probably become the dominant technology of the future and may determine the course of human civilization,” he said during a joint public hearing of the committees on Internal Market and Consumer Protection, and Civil Liberties, Justice and Home Affairs. Many AI researchers find the exemption of general purpose AI systems “puzzling”, said Russell.
Comparing chairs and avocados
OpenAI’s general purpose GPT-3 AI system can already successfully execute complex intellectual tasks – like finding pictures of chairs that look like avocados, Tegmark told MEPs.
“These are systems which can span multiple domains, they can work with text, perhaps images, perhaps sounds, perhaps higher-level concepts,” he said. “This is where the technology of AI is going.”
But they are also capable of catastrophic misunderstandings. Tegmark quoted an experimental dialogue with a GPT-3-based chatbot that advised a suicidal human to kill themselves.
Amendments to the act circulated last November confirm that general purpose AI systems “should not trigger any of the requirements” of the legislation.
One of the reasons why the act has shied away from regulating general purpose AI is that, because such a system is so broadly applicable, it cannot be said to have a specific “intended purpose,” in the jargon of the legislation.
But EU law allows governments to anticipate a “reasonably foreseeable use” of a technology, which could help pull general purpose AI into the legal framework, said Catelijne Muller, president of ALLAI, a Dutch organisation lobbying for responsible use of the technology.
MEPs were told of several other areas where the legislation could be tightened up.
Muller pointed out that the act excludes existing AI systems from its scope, so long as the purpose of the tool does not change. This means, for example, it would do little to stop online proctoring technology, which proliferated during the pandemic and is used to monitor students taking exams remotely for any suspicious movements.
Muller called this “very invasive technology” that will nonetheless “never fall under the scope” of the act.
The act does target AI that uses subliminal techniques to manipulate individuals, but Tegmark wants this prohibition to cover not just individuals but broader society.
“Perhaps the biggest threat that I think European democracy faces is ever more powerful machine learning systems used to manipulate the reality we learn about from the media and social media,” he said.
“And that's not an easily provable harm, but boy, oh boy, does it cause a lot of social harm,” he said.