Policymakers, companies and civil society on both sides of the Atlantic worry that in the absence of proper regulation, generative artificial intelligence will pose significant threats. The Commission is promoting a voluntary code of conduct to fill the legislative void.
The EU is calling on companies to join an international push to self-regulate generative artificial intelligence (AI) products such as ChatGPT, the chatbot launched last November that can write essays, engage in philosophical conversations and write computer code.
With legislation to regulate AI lagging well behind advances in the capabilities of the technology, the European Commission wants to spearhead a joint initiative with the US to establish a code of conduct that companies would sign up to voluntarily.
The proposal was put forward at the EU-US Trade and Technology Council (TTC) meeting this week by Margrethe Vestager, European Commission executive vice president. “We're talking about technology that develops by the month, so what we have concluded here at this TTC is that we should take an initiative to get as many other countries on board on an AI code of conduct for businesses voluntarily to sign up,” Vestager said.
While governments, companies and civil society see the economic potential of the rapid advance of generative AI tools, they also fear the new technology could pose serious risks to democratic society if weaponised to spread misinformation, or allowed to take decisions in our day-to-day lives.
On Wednesday, leading AI experts, including the CEOs of ChatGPT developer OpenAI and of Google’s DeepMind AI division, gave a stark warning that AI could lead to the extinction of humanity, saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Legislation will be slow to take effect
Since ChatGPT was launched last year, US IT companies Google and Microsoft have rolled out their own generative AI services, opening the door to a new era in digital innovation.
By the time governments enact legislation to rein in the potential negative impacts of this technology, it might be too late. The Commission presented a proposal for AI legislation in April 2021 but advancing the file through the European Parliament and Council has been slow. Vestager hopes the first talks between the three EU co-legislators will start in the coming weeks, with a deal within reach by the end of the year. Even if this happens, the legislation will not have an impact until “give or take two or three years, in the best possible case,” Vestager warned.
To fill the legislative void in the meantime, Vestager hopes to seal an international agreement between the G7 countries and invited partners such as India and Indonesia. That could be effective if companies in those countries, which together represent about one third of the world's population, sign up to an AI code of conduct.
Mitigating downsides
Vestager was speaking at the fourth ministerial meeting of the TTC in Luleå, Sweden, where the EU and US acknowledged that while AI technologies hold great economic opportunities, they also pose significant societal risks. The meeting showcased the first results of efforts to implement a joint roadmap for trustworthy AI and risk management.
US secretary of state Antony Blinken said the TTC could be the right platform for advancing voluntary codes of conduct to mitigate the potential downsides of generative AI while amplifying its benefits.
“I think we share a conviction that the TTC has an important role to play in helping establish voluntary codes of conduct that would be open to all likeminded countries, particularly because there’s almost always a gap when new technologies emerge, between the time at which those technologies emerge and have an impact on people, and the time it takes for governments and institutions to figure out how to legislate or regulate about them,” Blinken said.
The TTC had already set up three expert groups to work on the identification of standards and tools for trustworthy AI. This work will now include a focus on generative AI systems. The expert groups have agreed on a taxonomy and terminology for AI and are monitoring emerging risks it poses.
Vestager said the TTC will advance a draft of the code of conduct for AI with industry input “within the next weeks”, in the hope that Canada, the UK, Japan, India and other countries will back the effort.
Representatives of the private sector are equally concerned about the risks posed by the technology. Dario Amodei, CEO of US AI start-up Anthropic, said no one really knows what an AI system is capable of until it is deployed to millions of people. Given this, companies should avoid rolling out applications in a “cowboyish way” that undermines efforts to regulate the technology in the same way governments have traditionally regulated the car and aviation industries.
“This difficulty of detecting dangerous capabilities is a huge impediment to mitigating them,” said Amodei. “Some kind of standards or evaluation are a crucial prerequisite for effective AI regulation.”
Microsoft vice chair and president Brad Smith said the EU and the US, along with the G7 and some other countries, can move forward on a voluntary basis to set up these standards. “If we can work with others on a voluntary basis, then we'll all move faster,” he said.