US unveils light-touch strategy to deal with artificial intelligence risks

31 Jan 2023 | News

It’s unclear to what extent tech companies will sign up to the voluntary guidelines, and exactly how they will fit with the EU’s AI Act. But there are signs the EU and US approaches should be interoperable

The US has released its answer to the EU’s AI Act: a voluntary set of recommendations that aims to make companies more responsible in how they develop AI systems.

The National Institute of Standards and Technology (NIST) unveiled its AI risk management framework last week, but it’s uncertain how rigorously big tech companies will adopt the guidelines, and how the recommendations will mesh with Brussels’ AI legislation.

“This voluntary framework, we believe, will help to develop and deploy AI in ways that enable organisations, in the US and other nations, to enhance AI trustworthiness while managing risks based on our democratic values,” said Don Graves, deputy commerce secretary, at the launch in Washington DC.

The US, home to many of the world’s leading AI companies, including OpenAI and Google, has no plans for binding legislation of the kind the EU is drawing up. Instead, in 2020 Congress tasked NIST, which has traditionally focused on codifying scientific standards and measurements, with creating a kind of crib sheet for companies to follow when developing AI systems.

Since then, there have been extraordinary leaps in AI progress – with plenty of attendant risks. Millions of users have embraced OpenAI’s ChatGPT for education, poetry and other constructive uses – but some have managed to trick the chatbot into providing instructions for making Molotov cocktails and methamphetamine.

After 15 months of work and hundreds of submissions from corporations, universities and civil society, NIST has finally released version 1.0 of its AI risk framework.

It contains some recommendations that, if followed, would transform the workforce of tech companies and allow outsiders a much greater role in the creation of AI systems.

Domain experts, users and “affected communities” should be “consulted” when assessing the impact of AI systems “as necessary”, it suggests.

Companies developing AI systems should also be diverse – not just ethnically, but in terms of disciplinary background, expertise and experience – in order to spot problems a more homogeneous team would miss.

And the framework calls for reams of documentation when creating AI systems, including a record of the expected impact of AI tools, not just on the company and users, but for broader society and the planet too.

Unlike the draft EU legislation, there are no uses of AI that are singled out as off-limits. And how much risk companies are willing to take in rolling out AI systems is up to them. “While the AI risk management framework can be used to prioritise risk, it does not prescribe risk tolerance,” it says.

Adoption unclear

The question now is whether companies actually take NIST’s ideas on board. Kush Varshney, who leads IBM’s machine learning group, gave the framework a modest endorsement at its launch. He said it would be “very helpful” for pushing the company’s research and innovation in “directions that matter to industry and government and broader society”.

A spokeswoman for DeepMind, a leading AI lab owned by Google’s parent company Alphabet, said it is “reviewing the content being published by NIST and sharing it with our internal teams” and would share its own case studies with NIST’s resource centre. While DeepMind is based in the UK, its AI is used to improve Google products.

“The NIST AI framework is something that we hope is implemented […] but there's no pressure to do so,” cautioned Carlos Ignacio Gutierrez, an AI policy researcher at the Future of Life Institute, a US-based technology think tank.

Many big companies already have risk management frameworks, he pointed out. Instead, NIST’s ideas might be most useful for small and medium-sized companies that lack the resources to build their own risk-checking procedures, he suggested.

Although the framework lacks the force of law, the hope is that companies will adopt it to limit their liability if they are sued over a malfunctioning AI system. And companies can start using it right now, whereas the EU’s AI Act could face years more wrangling in Brussels before it comes into force.

But what adopting the framework means in practice is slippery, as NIST itself has encouraged companies to modify and adapt its recommendations depending on what type of AI tools they create.

Using it “can mean many things,” said Gutierrez. “That can mean they take one part of it, it can mean they take the entire thing.” There is no way for third parties to verify the framework is being followed, he warned.

Marc Rotenberg, president of the Center for AI and Digital Policy, a Washington DC-based think tank, called the NIST framework “an excellent resource for organisations prior to the deployment of AI systems.”

But it is no substitute for a legal framework “to ensure the effective allocation of rights and responsibilities,” he said.

Working in tandem

Another question is how the NIST guidelines will mesh with the EU’s forthcoming AI Act. It’s possible that companies will have to abide by NIST’s recommendations to reduce their legal liabilities in the US, while also complying with EU legislation to avoid huge fines from Brussels.

But Gutierrez sees a possibility that the two could work in tandem. Drafts of the EU’s AI Act stipulate that companies need a risk management framework to evaluate deployment dangers – and firms could follow NIST’s recommendations to check this box, he said. “It would be a good way to complement each other,” he said.

In a sign that it is working towards interoperability, NIST released guidance on how terms in its framework map onto those in the EU’s AI Act, as well as other AI governance tools.

The US and EU are collaborating on AI through the Trade and Technology Council, a regular meeting of top officials. In its last meeting in December, Washington and Brussels announced a “joint roadmap” to pin down key terms in AI, and common metrics to measure AI trustworthiness. This doesn’t mean they will regulate the technology in the same way, but common terminology could help companies better navigate laws and guidelines on both sides of the Atlantic.

And last week, Brussels and Washington announced they would conduct joint research in AI to address global challenges, including climate forecasting, electricity grid optimisation and emergency response management.

“We are hopeful for a transatlantic approach to risk management,” said Alexandra Belias, head of international public policy at DeepMind. “We look forward to exchanging best practices through this outlet,” she added, referring to the joint roadmap.

Bill of Rights

There’s also confusion over how NIST’s guidelines will work alongside the US’s so-called “AI Bill of Rights”, released by the White House’s Office of Science and Technology Policy (OSTP) last year.

Despite the name, these recommendations are also non-binding. They seek to create a set of principles to protect the public from discriminatory algorithms and opaque AI decisions, among other problems.

But the bill has received pushback in Washington as Republicans take up key science scrutiny positions following their election wins last year. Earlier this month, two senior Republican lawmakers criticised the OSTP in a public letter for sending “conflicting messages” about US AI policy, demanding answers about how the bill was created. One of them is Frank Lucas, the new chair of the House of Representatives’ Committee on Science, Space, and Technology.

They are worried that the AI Bill of Rights encroaches on the work NIST has just completed, and appear to be concerned that it could rein in US companies and harm American leadership in the technology. They also demanded that the OSTP reveal if the bill was going to be the basis for draft legislation.

“It is vital to our economic and national security that the US maintains its leadership on responsible AI research, development, and standards,” they said.

But neither Rotenberg nor Gutierrez sees any conflict between the AI Bill of Rights and NIST’s framework. NIST’s work is about providing guidance to businesses, said Rotenberg, while the bill is about protecting those subject to AI-based decisions.

The lawmakers’ letter is “counterproductive and ignores real problems, addressed by the OSTP, that are widely known in the AI community,” he said.
