Call for the EU to build publicly funded cutting-edge artificial intelligence

04 Jan 2024 | News

Leading AI expert Holger Hoos says Europe needs its own Manhattan Project to create a cutting-edge AI system and avoid dependence on the US or China. Other experts are supportive, but the cost and politics of such a project are tricky.

OpenAI launched a free version of its chatbot, ChatGPT, in December 2022, resulting in a huge public boom in the use of large language models

The EU should assemble its best computer scientists and give them billions of euros to build a European AI system, to avoid becoming dependent on models created by US and Chinese technology giants, one of Europe’s leading AI experts has argued.

Instead of hoping that private European companies will come to the rescue, the EU needs a public Manhattan Project-style effort to build an independent, ethical and transparent AI model, says Holger Hoos, a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). 

“I don't think we want to be forced to use Microsoft’s or Google's,” he told Science|Business. “I think we want our own.” 

Hoos, professor of AI at RWTH Aachen University and a leading voice on EU AI policy, doubts that European companies like France’s Mistral AI or Germany’s Aleph Alpha will ever have the resources to compete with US tech giants. Given this, the EU needs a publicly created underlying AI model on which companies can build new products.

“I think it’s a good idea, because there’s a huge danger if we don’t”, said David Bisset, executive director at euRobotics, a European robotics network. “Europe’s going to need that capability, and it has to shape that capability against its own needs. Europe isn’t the US and it’s not China.”

Other European AI experts who spoke to Science|Business have also been mulling over whether a publicly funded AI model would be possible.

Some have even done back-of-the-envelope calculations over how much it would all cost, including the astronomical wages AI experts can command.

However, judging by these conversations, it’s unlikely there would be easy agreement over who exactly would run such a project.

“It's an idea that has sprung up in quite a few heads around Europe, not just within mine,” said Hoos. “It feels like an idea that's about to be hatched into a plan and a plan that perhaps is even gaining traction.”

These conversations are also happening, to a small extent, among Brussels officials, said Hoos.

“Certainly I've heard people in the European Commission talk a little bit about how desirable it would be to do something like this in the public space,” he said.

“But they're under the completely mistaken impression that this can be done in the context of one or two, maybe three Horizon Europe projects.” 

Asked about Hoos’ idea, a Commission spokesperson pointed to plans to invest €1 billion a year in AI through the Horizon Europe and Digital Europe funding schemes. The EU is also close to agreeing its pioneering AI Act, which should give companies regulatory stability, he pointed out.

Computing infrastructure

Rather than the tens of millions up for grabs in Horizon calls, Hoos estimates it would cost in the order of “small billions” of euros to create a public AI system, covering salaries and new European computing infrastructure to train the model.

Hoos and fellow CLAIRE board members have already called for what they dub a “CERN for AI” – the European computing infrastructure that would allow rapid progress in the technology.

Hoos envisages scientists from across the EU would be plucked from their day jobs to work, for a while, at a single physical location – the Los Alamos of European AI – and crucially, relieved of any outside teaching and research responsibilities. 

“You need people who are 100% dedicated to this, at least for a few years,” he said. “It's exactly as if you want to have a successful startup, you don't do this on the side, you need people who do this and nothing else.”

Even with billions of euros at its disposal, an EU public AI project would still struggle to match the salaries of those at leading US tech firms. The solution is to “hope that there is a sufficient number of people who are not solely motivated by money” who would flock to the project, he said.

“I'm actually quite optimistic,” he said. “Looking at my own students, I think every single one, rather than doing a PhD with me, could be working for a US based tech giant and make a lot more money”.

The EU would need to fund such a project very differently from how it typically backs Horizon Europe projects, which tend to be decentralised consortia, he points out.

Instead, a new organisation – perhaps a joint undertaking – would have to be created. It could be led not by Commission officials but by a circle of scientists, including social scientists.

Japan takes the initiative

To a US audience, the idea of the EU creating its own AI model to rival the likes of OpenAI’s GPT-4 might sound far-fetched, laughable even – the ultimate example of Brussels overreach into what is best left to the private sector.

But other countries have already been toying with the idea of state, or at least non-private, AI models.

Japan, worried that the current US models are too English-language focused, has launched a couple of initiatives to build systems that work in Japanese and are better attuned to Japanese culture.

Last summer, the country’s National Institute of Information and Communications Technology announced it had created a large language model trained purely on Japanese text. An even bigger model is in the works.

The UK, which has tried to reinvent itself as a leader in AI safety, last year floated the idea of creating a ‘sovereign’ AI model, dubbed ‘BritGPT’, although prime minister Rishi Sunak has since rowed back on the plan.

Hoos doesn’t want to simply match what US tech giants have already done. Instead, the EU should aim to surpass them, in terms of trustworthiness, transparency, and so on.

“Our main ambition needs to be to create models that satisfy these criteria,” he said. “Maybe not all of them in one go, but at least we should get started on it.” 

For example, OpenAI’s ChatGPT made liberal use of low-paid Kenyan workers to provide the system with so-called reinforcement learning from human feedback, so that the AI imbibes some sense of human ethical judgement.  

A public EU alternative would have to avoid this kind of “scandalous” exploitation, said Hoos – but of course this would cost more money.

Another tricky hurdle for an ethical AI project would be access to data. AI companies like OpenAI have angered artists, writers and newspapers by hoovering up mountains of online text and pictures to fuel their models without permission, and now face a raft of copyright challenges.

An EU AI project would not be so “predatory”, said Hoos, and would instead have to focus on quality of data over quantity.

“I am really convinced that a somewhat smaller data set that is more carefully curated and of high quality could get us a lot further,” he said.

DeepL, a German automatic translation company, has been able to go head-to-head with Google Translate precisely because it has used higher quality data, Hoos argued.

Plagued by errors

As well as being more ethical, an EU AI model needs to be more reliable than what is currently on the market. Existing large language models are too plagued by errors and hallucinations to be useful for “all but the most trivial programming exercises”, a task Hoos otherwise considers one of the most promising uses of AI technology. It’s unclear whether simply throwing more data and computing power at them will improve them much further, he argued.

“I don't think we can easily train them to be better,” he said. “So we need something else.”

Instead, a new EU model needs to incorporate elements of logical reasoning that could make it more reliable. What’s more, for an EU project, it needs to be “multicultural” and trained on non-English language content too, so it can “better reflect and respect cultural differences”.

And finally, such a model wouldn’t just process and output text, like ChatGPT. Instead, in the jargon of AI, it would be multi-modal, able to integrate language, pictures and video, which is the current direction of travel in the industry. Google, for example, has recently shown off its new AI tool Gemini describing drawings by a human minder in real time.

Talk of an EU-funded megaproject to rival US technological prowess will bring back unpleasant memories in Brussels.

Think, for example, of the ill-fated Quaero, a Commission-backed Franco-German project to build an alternative to Google Search during the 2000s. Or Gaia-X, the long-delayed attempt to build a European data infrastructure to rival the likes of Amazon Web Services.

“In the circles of the Commission, what you sometimes hear is a reluctance to commit to large scale investment, not so much because the money isn't there or because there isn't a sense of urgency, but more because there also have been some bad experiences,” Hoos said.

But the EU needs to learn a bit more risk-taking and ambition from the other side of the Atlantic, he argued. With an overly cautious attitude, “none of the current tech giants would have ever gotten off the ground, right?”

For all the recent public failures to build technological alternatives in Europe, there are positive stories too, Hoos said, pointing to the success of Airbus, the European consortium that has managed, through scale and ambition, to be competitive with the US’s Boeing.

CERN is another example of a successful public scientific megaproject. And, of course, the Apollo programme, arguably the biggest and most successful scientific project of all time, was almost entirely public, Hoos noted.  

What about safety?

For some in the AI safety world, a powerful new system, even a public one, is the last thing humanity needs, because of the risk of these models becoming so intelligent they escape the control of their human creators.

But Hoos insists that a public EU alternative would be safer than the current crop of models because it would be developed more responsibly, with transparency and trustworthiness in mind, rather than rushed out to satisfy investors or make money.

“I think the current tech giants put restraints on [AI systems] more or less as band aids,” he said. “And they put restraints onto systems that no one really understands.”

Hoos would also make any EU AI model open source, so that it can be adapted and extended, promoting the development of new and useful applications.

However, open source AI models worry some scientists, who think this could allow any safety measures to be overridden, paving the way for terrorists to use them to create bioweapons, for example.

An EU model should also play to European industrial strengths, argues Bisset. Despite extraordinary feats of text and image generation, existing AI systems on the whole struggle to power safe and useful industrial robots, traditionally one of Europe’s strong suits.

“If you take a piece of AI and you try and use it inside a robot, that AI has got to understand the laws of physics and the physical world,” he said. “And right now, they don't do either.”

Europe also has a preponderance of small, high-tech firms, which generate niche industrial data that could be plugged into an AI system to generate new insights, Bisset said. “You’re going to have to build a facility which is accessible by SMEs,” he said.

The risk here is that a European AI Manhattan Project gets so overloaded with demands – to be reliable, ethical, useful for SMEs, applicable to robotics and so on – that it becomes impossible to build something to satisfy everyone and never sees the light of day.

There’s also the question of whether the UK, comfortably the continent’s leading AI country, would want, or be allowed, to join any such project.

“I think it's always easy to […] portray Europe as a bit of a bumbling animal because there's so many cooks in the kitchen,” said Bisset. “But actually, when it comes together, it works really quite well. Look at the European Space Agency and look at Airbus.”

“Am I worried about how European bureaucracy could stymie any such attempt? Yes, I am worried,” Hoos said. “But I'm less worried about that, than about the technological dependence that we're currently rapidly spiralling into.”
