Politicians must not be allowed to harness fears around artificial intelligence to divide people, says Dragoș Tudorache MEP, who is leading Europe’s charge to regulate this powerful technology
Scientists who rushed to develop COVID-19 vaccines in 2020 could not have predicted that, once the vaccines were in widespread use, there would be protests against vaccination across Europe.
Such was the volume of disinformation about COVID-19 vaccines and the pandemic writ large that Tedros Adhanom Ghebreyesus, director general of the World Health Organization, labelled it an “infodemic”.
Now, the rapporteur of the EU’s Artificial Intelligence Act is warning we should be prepared for similar disinformation to poison the debate over AI.
The lightning-fast development of AI systems has led to fears that the technology could be used by malicious actors for everything from creating bioweapons to systematically undermining democracies with fake news. While the dangers must be addressed, stoking people’s fears risks polarising the debate, says Dragoș Tudorache MEP.
“The risk of it becoming a deeply ideological and divisive topic in our society is very real,” the Renew Group member and Romanian national told Science|Business. “In fact, I am surprised it has not yet become ideological.”
If the AI Act is adopted as expected later this year, the EU will have passed the world’s most comprehensive regulation to rein in the technology. The legislation will classify AI applications according to the level of risk they pose, with stricter rules for high-risk systems and an outright ban on the most dangerous applications. The aim is not just to protect European citizens, but to foster public trust so that Europe can reap the benefits AI can bring.
That trust will be undermined if the debate becomes ideological and is the target of disinformation, says Tudorache.
This will become an issue as people “even in the remotest corners of our society” begin to feel the impact of AI. “They might fear losing their jobs, or feel they are not equipped to deal with the new opportunities arising around them,” he said. “It will inevitably stir up feelings that can be exploited by those who are already politically exploiting these divisions. And as we know, some of these forces have a tendency to challenge science.”
Tudorache believes serious research into perceptions of AI in Europe is needed to monitor this risk. A recent survey by Pew Research Center in the US found that 52% of Americans feel more concerned than excited about the increased use of AI, up from 38% in December 2022.
Foundation models
Global debates in recent months have been dominated by the risks of foundation models, which are trained on huge amounts of data and can be adapted to a wide range of tasks. In March, more than 1,000 tech leaders, researchers and others signed an open letter calling for the development of the most powerful AI systems to be paused.
Tudorache, who regularly travels to the US to discuss the topic with his American counterparts, says there has been a notable shift since the spring among policymakers in countries including the US and UK, which have begun to take the risks much more seriously.
“In the US, the conversation right now is very much focused on these big foundation models and their risks,” he said. The upcoming AI safety summit hosted by the UK meanwhile will focus exclusively on the risks of frontier AI, highly capable foundation models that could pose serious threats to safety in the future.
The European Parliament has also added provisions on foundation models to the AI Act as they fell outside the initial scope of the text. But while it is good that the world is finally recognising the need for regulation, it is important not to forget all the other forms of AI, Tudorache said.
As one example, “If a bank, school, hospital or public authority is discriminating against citizens because an algorithm has been told to simplify an administrative procedure, there is a problem.” Regulation is needed to provide safeguards against the indiscriminate use of technology, as companies cannot be relied upon to do it themselves, he said.
At the same time, being overly cautious about new technology and hampering innovation would also be a mistake. It is important to strike a balance, Tudorache added. One way to ensure Europe does not miss out on the benefits AI can bring to the whole of society, without worsening societal divisions, is to focus on education.
“If people have an equal opportunity to train and get upskilled, they will be more open to the changes around them, and to AI being part of their jobs and their lives,” said Tudorache. This will depend heavily on member states, although he says the EU can play a role in encouraging them to pay attention to technology’s impact on society, and to invest in education.
Not everybody is happy with the balance as it currently stands. In June, executives from 150 businesses, including Germany’s Siemens and France’s Airbus, signed an open letter warning the draft AI Act would “jeopardise Europe’s competitiveness and technological sovereignty”.
In addition to the AI Act, European countries are involved in multilateral talks to establish guidelines on the use of AI, notably in the G7, which aims to establish a code of conduct for foundation models and generative AI.
Tudorache believes this will complement the EU’s legislation, particularly in the next two or three years before the latter takes effect. “For us Europeans it will be short term, for others it may be longer, but there will be a gap. In the meantime, you at least give some guidance with the code of conduct,” he said.