Washington plans extra measures for AI safety – but no new laws or international deals

04 May 2023 | News

Despite calls for regulation, White House will move ‘one step at a time’ with the focus on domestic policy – plus a meeting with Google, Microsoft and other US execs for a ‘frank discussion’ on the risks of the technology

US Vice President Kamala Harris, who is meeting with US tech executives for a “frank discussion” of AI risks. Photo: Kamala Harris official Facebook page

The White House announced a few extra measures to help ensure the safety of artificial intelligence (AI), but again stopped short of proposing new laws or international collaboration.

The steps announced on 4 May include developing new US government policies for when and how federal agencies use AI, an extra $140 million for the National Science Foundation to study AI applications, and an agreement by several US tech companies to let hackers at an upcoming computing conference run open safety tests on their AI systems.

But when asked at a press briefing about possible legislation or international agreements, a senior Biden administration official demurred, saying, “We need to take one step at a time.” While acknowledging “this is clearly a global technology,” the official cited only the ongoing, and intermittent, US-EU Trade and Technology Council meetings as a forum for international discussion – and rejected the suggestion that the US might want to emulate the EU’s proposed AI legislation.

Instead, the official indicated the US focus is on domestic policy, and did not respond directly when asked if any regulation might be discussed at a meeting in Washington scheduled for Thursday between Vice President Kamala Harris and the CEOs of four tech companies: Microsoft, OpenAI, Alphabet/Google, and AI start-up Anthropic. The meeting will include “a frank discussion about the risks we see in current and near-term AI development,” the official said. But any legislation is up to Congress.

Biden, the official said, has “been very clear about the need for Congress to step up and act on a bipartisan basis, to hold tech companies accountable, including for algorithmic discrimination.” As a precondition of attending the briefing, journalists were barred from naming the official publicly.

Sense of drift

The new administration measures, though heavily promoted by the White House, may only reinforce the sense of drift that many foreign observers see in US AI policy. And alarm is mounting in many capitals. Besides the EU’s AI Act, now going through the European Parliament, the Italian government last month banned the AI system ChatGPT from processing data of Italian citizens, citing the risk to privacy.

At present, there’s no agreement in Congress about what, if anything, it should do about AI – or about much of anything else, given the impasse between Republicans and Democrats on virtually every area of public policy.

As a result, the Biden administration has, by executive order, been announcing a series of incremental measures over the past year – including the publication of a “blueprint” for a non-binding “AI Bill of Rights” to highlight the risks that the technology could pose, and a set of recommendations on how companies should handle AI.

But so far, it is leaving it up to industry to police itself, except in obvious cases where the technology runs afoul of existing laws on privacy, fraud or tort. Underlining that point, on Wednesday the head of the US Federal Trade Commission publicly warned companies that it will “vigorously enforce” laws already on the books that could apply to AI applications.

‘The end of people’

Yet since the launch of the most powerful AI tools this winter, even many US researchers and industry executives have begun urging Washington to be more activist. In March, a group of US tech leaders, including Tesla founder Elon Musk and Apple co-founder Steve Wozniak, urged a pause in the current “out of control” rush in commercial AI development. And on 3 May, Geoffrey Hinton, the famed Canadian-British AI pioneer, told CBC Radio that unregulated AI could mean “the end of people.”

Of the measures announced on Thursday, the most significant is a plan for the White House Office of Management and Budget to open a public consultation this summer on draft guidelines for how the federal government uses or buys AI itself. Given that the federal government – especially the Pentagon – is among the biggest domestic customers for any new technology, the administrative rule is aimed at leading by example. The extra NSF funding, meanwhile, is to create new centres to help develop AI applications in climate, education, energy, public health and other domains.

The oddest measure, however, is what the White House called “an independent commitment” by Google, OpenAI, Microsoft, and other tech companies “to participate in a public evaluation of AI systems” at DEFCON 31, a hacker convention in Las Vegas this August. There, the companies’ AI models will “be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align” with the administration’s AI Bill of Rights recommendations.
