The current crisis is a testing ground for artificial intelligence, but an expert panel says a lack of quality data and official reticence are holding back the development of practical tools
Artificial intelligence (AI) is not having a good crisis. But it could do better next time round.
That was the consensus amongst the speakers in an online panel discussion hosted by Science|Business. The COVID-19 pandemic is a major test case of the capabilities of AI, but “it has not worked so well,” said Werner Stengg, an expert in the cabinet of Margrethe Vestager, executive vice president of the European Commission. “The potential of data and AI has not been fully exploited in this [crisis], which points to the need to do better next time around,” he said.
Stengg contended that AI has been held back by a lack of data infrastructure and interoperability, together with weak quality control. There has been a flurry of new software programmes designed to diagnose COVID-19, but when they were analysed by independent experts “some of them were absolutely appalling,” Stengg said. “They were basing themselves on 26 patients, where you may need thousands of data points, which points to the need to have quality criteria there to be sure that you can trust this technology, which can do wonderful things.”
Bertrand Braunschweig, director of Inria, the French Institute for Research in Computer Science and Automation, agreed that the immediate impact of AI on the pandemic has been limited. There have been “many, many attempts, many, many projects,” he said. “But there are only a few that are operational and that are really helping people on a day-to-day basis. It’s more long-term research.”
As well as a limited supply of data, official reticence has also curbed the effectiveness of AI tools, according to Rémi Quirion, chief scientist of the Province of Quebec. “It takes a bit of time just to convince them that they should go there and it will help them,” he said. “We hope the next phase will be much better.”
Tentative global partnership on AI
Quirion called for more support for the Global Partnership on AI (GPAI), an initiative led by French president Emmanuel Macron and Canadian prime minister Justin Trudeau to promote global co-operation on AI policy, as the IPCC does with climate change. He described the progress of the initiative as “slow, too slow.” High-level officials have been unable to spend much time on it as they respond to the pandemic. “France and Canada are very strongly behind it,” Quirion said. But “it's not a priority” in the US. As the next G7 meeting is to be hosted by the US on 12 June, he is hoping the tech superpower will come under pressure “to be a bit more involved.”
The GPAI is establishing a centre of expertise in Paris with the support of Inria. Braunschweig indicated that the partnership will announce new members soon, saying that “several” G7 countries are now supporting this initiative. But he said, “I can’t exactly say which ones for the moment.”
However, he suggested that, as things stand, China will not be joining. “We want a group of countries that share the same values [on] AI and so we have written down a number of principles that countries have to accept if they want to collaborate in this initiative,” Braunschweig said. “When China [is] ready to accept these principles, then probably they will be welcome.”
Trying to make AI trustworthy and sustainable
Sarah Box, senior counsellor in the directorate for Science, Technology and Innovation at the OECD, noted that the GPAI concept has evolved through several years of discussion in the G7, including under the Japanese and Italian presidencies. “It's going to be an extremely valuable complement to the policy-oriented work that takes place in organisations like the OECD,” she said. “By bringing together experts, by digging into technology and scientific, state of the art issues, practical applied issues, you're really going to have these amazing synergies between the policy network and the scientific community, and really build the evidence base for really good AI policy.”
The OECD, which is involved in the discussions around the GPAI, took a first step towards building a global consensus on how to make AI trustworthy when it published a set of principles in 2019. “They were the basis of the principles picked up at the G20,” Box told the event, while flagging that, in addition to the OECD’s members, seven other countries have said they will adhere to the principles.
Although AI has been around for several decades, the technology is now making major leaps forward, in large part due to the volume of data being collected via the Internet.
But Braunschweig warned that simply relying on deep learning systems to crunch through massive sets of data consumes too much power. He estimated that Google’s latest chatbot, called Meena, consumed “a few million kilowatt hours” as it was trained on vast amounts of data.
Noting that the number of calculations performed by deep learning systems is doubling every four months, Braunschweig said, “We have to build applications that use less data.” He called for the development of more sustainable hybrid systems, which use both knowledge and data, to ensure that the world doesn’t experience another “AI winter.”