Trust in EU approach to artificial intelligence risks being undermined by new AI rules

02 Sep 2021 | News

With the European Parliament and the Council about to examine the proposed AI act, there is concern that the EU will squander the current high level of confidence that it will regulate in the public interest

The EU is winning the battle for trust among artificial intelligence (AI) researchers, academics on both sides of the Atlantic say, bolstering the Commission’s ambitions to set global standards for the technology.

But some fear the EU risks squandering this confidence by imposing ill-thought-through rules in its recently proposed Artificial Intelligence act, which some academics say are at odds with the realities of AI research.

“We do see a push for trustworthy and transparent AI also in the US, but, in terms of governance, we are not as far [ahead] as the EU in this regard,” said Bart Selman, president of the Association for the Advancement of Artificial Intelligence (AAAI) and a professor at Cornell University.

AI researchers, a highly international community, are “aware that AI developments in the US are dominated by business interests, and in China by the government interest,” said Holger Hoos, professor of machine learning at Leiden University and a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE).

EU policymaking, though slower, incorporates “more voices, and more perspectives” than the more centralised processes in the US and China, he argued. The EU has already taken strong action on privacy through the General Data Protection Regulation, which came into effect in 2018.

A snapshot of the views of more than 500 AI and machine learning experts was published last month in the Journal of Artificial Intelligence Research. Although the survey was conducted in late 2019, its results are still likely to please Commission policymakers.

AI researchers said they trust the EU far more than the US and Chinese governments to shape the technology’s development “in the best interests of the public.”

The EU was even more trusted than the United Nations, and was beaten only narrowly by non-governmental research organisations such as the AAAI.

This was regardless of whether the researchers had done their undergraduate degrees in Europe, North America or Asia. The EU scored almost as well among industry respondents as academic experts.

“I believe this result reflects the fact that the EU has made the development of trustworthy and human-compatible AI a clear priority,” said Selman.

In a separate survey published by CLAIRE, around three quarters of respondents agreed with the EU’s proposed risk-based approach to regulation, but were split over whether it would actually achieve transparency in AI, or protect EU citizens from abuses like mass surveillance, social scoring or behaviour manipulation.

Tough stance

“It is certainly possible that AI researchers have been positively impressed by the ‘human-centered’ AI rhetoric of the European Commission, as well as by its ambitious comprehensive approach to regulating the subject,” said Kenneth Propp, adjunct professor of European Union law at Georgetown University.

“I suspect that other factors may enter in as well: the fact that the EU is not [as] engaged in developing military applications for AI, unlike China or the US for example,” he said.

Researchers may also have noticed the “tough” stance Brussels takes on anti-trust issues, said Noemi Dreksler, one of the survey’s authors, and a researcher at Oxford University’s Centre for the Governance of AI.

Even critics acknowledge the EU has managed to win over many AI researchers. “The US government does little to promote or defend its own digital policies,” said Daniel Castro, vice president of the Information Technology and Innovation Foundation, a Washington DC think tank that has warned of the act’s cost to European companies.

“In contrast, EU policymakers show no such restraint,” Castro said. “They argue that other countries, particularly the US, are not doing enough to regulate AI. Therefore it is not too surprising that AI researchers echo the message they constantly receive.”

The Commission kicked off its AI rule-making in 2018, with the publication of a European Strategy on AI. Then in 2019, a high-level expert group set out ethical guidelines for trustworthy AI. This was followed in 2020 by a white paper on AI, and in April this year by the release of the proposed AI act, with the aim of shaping global norms around the technology.

A public consultation on the act closed on 8 August and the new regulation is now due to be scrutinised by MEPs and the Council.

“On the side of the European Commission, it has been increasingly clear that you need to earn the support of a range of stakeholders [including researchers] to take a role as a leader,” said Andrea Renda, a member of the Commission’s high-level expert group on AI.

Trust of AI researchers matters to companies

Winning the trust of AI researchers could give Brussels clout over technology giants, which are desperate to recruit new staff. “We’re still in a situation where AI talent is seriously restricted,” said Hoos. “They can choose between many offers.”

A company’s stance on ethics and regulation affects who it can attract. “That creates pressure, and I expect that pressure to increase,” Hoos said.

“Microsoft and Apple, for example, are pushing more in the direction of, yes, we should have more regulation, we should have more privacy protections,” said Markus Anderljung, another of the survey’s authors, and a researcher at Oxford’s Centre for the Governance of AI. “Part of the reasoning behind that is that it might play well both with the public and with researchers.”

“I could imagine that similar things happen here,” Anderljung said. “So you might imagine these companies see benefits from either speaking positively about attempts at regulation, or speaking positively about the possibility of using these standards globally.”

There is also hope in the Commission that the AI act could reproduce the GDPR effect, which has seen the EU’s privacy standards adopted from Japan to Brazil, partly because there is no alternative comprehensive regulatory system, Renda said.

But Castro is sceptical. “The EU’s impact on AI research itself is slipping, and has fallen significantly after Brexit,” he said. “So the degree to which EU regulation of AI will impact the global market may be limited.”

And in the AI community, some of the shine appears to have come off the EU’s approach, with some researchers seeing parts of the proposed AI act as unwieldy.

Back in 2019, when the AI ethical guidelines were launched, there was “a lot of interest and support in the academic community for what the EU was trying to do,” said Renda. But since then, scepticism has been growing, he said.

One concern for researchers is that the act will be overseen by a European Artificial Intelligence Board made up of representatives from member states. “Without experts supporting the board, there would not be much space for the research community,” Renda said.

Anderljung said he has spoken to AI researchers who fear aspects of the act are “naive and impossible.”

For example, the act’s requirement that AI training data be “relevant, representative, free of errors and complete” is “just not how this works,” he said. Researchers know that “we’re not going to have data that perfectly represents reality,” Anderljung said.

Tech firms and researchers have been inundating the Commission over the summer with responses to the proposed act.

CLAIRE has warned that the “intention underlying the proposed regulation is good, but that it does not achieve the intended upsides, while creating downsides of serious concern.”
