Washington’s new ‘blueprint’ for government action highlights the growing gap with Brussels over the best way to regulate the emerging technology
The US government published a detailed plan for policing misuses of artificial intelligence sector by sector, rather than across all applications of the technology at once. The approach highlights a growing disagreement with the EU over AI regulation.
In what it called a “Blueprint for an ‘AI Bill of Rights’”, the White House on 4 October laid out in more detail than before how it proposes to protect people from misuse, fraud or errors as AI systems roll out across the economy. Rather than ban any particular AI technology such as facial recognition across the country, each government agency will work to prevent abuses of any AI technology in its own domain – whether housing, healthcare, transport, education, procurement or other fields.
In a background briefing with journalists the day before the report’s release, senior administration officials called it a “whole of government” policy involving 30 or more agencies, because each agency is best-placed to watch for and deal with possible abuses of the technology in its own area.
“There is no one centralised bureau to deal with this,” said one official. “Instead, we’re identifying harm in each of those sectors, and having the agency responsible […] leading the enforcement action.”
The announcement comes amid growing international concern about abuses of AI – with some stark differences of opinion emerging between the US and EU, in particular, on how to deal with them. In AI legislation now under debate in Brussels, the European Commission is looking to regulate not just abuses in particular sectors but has also zeroed in on particular AI technologies that it thinks problematic across all sectors – most notably, facial recognition. The EU draft law would also create a central agency to oversee AI across all fields. Meanwhile, Canada, the UK, Japan and other US allies have been staking out varied policies of their own.
Lead by example
At the White House briefing, when asked about the international divergence, an official said, “I think we are trying to lead by example”, setting forth the US position clearly. “A major part of this is working with our allies to be clear about the US priorities and principles […]. Technologies should align with the rest of our civil rights, principles and traditions. And so that’s very much how we think about this on the international stage.”
At stake is an emerging global trade in AI technologies, which so far has been led by US tech companies, but with competition mounting fast from China, in particular. There have been attempts to coordinate international policy. Canada and France have led a drive with the OECD to run a global think-tank effort on AI policy, the Global Partnership on AI. And at regular US-EU meetings on trade – part of the transatlantic Trade and Technology Council – the two governments have been trying to find some common ground. But so far, no international consensus is in sight.
The US report doesn’t minimise the risks, however. It cites several specific cases of AI abuse or error that have already harmed people.
Sometimes the AI simply doesn’t work as claimed: the report cited an AI system rolled out in hundreds of hospitals to help spot potentially fatal sepsis in patients. The faulty system flooded the hospitals with false alerts. Sometimes the AI discriminates. The report cited an AI hiring tool that automatically rejected many women applicants at a company using it, because the system had been trained on the company’s own database – in which human, mostly male, managers had themselves been discriminating against women. In another case, a system gave bad credit scores to Black students seeking student loans.
Each of those cases, the report said, is to be handled by the government agency whose domain it is – for the sepsis case, the Department of Health and Human Services; for the gender case, the Equal Employment Opportunity Commission; and for the student loan case, the Department of Education.
The report attempts to spell out five fundamental rights that the agencies will be looking to protect. People, it says, should be protected from unsafe or ineffective systems, and from discrimination by algorithm. Further, people’s privacy must be protected, and they should be told when an AI system is being used on them. Lastly, they should be able to opt out and talk to a human rather than be locked in an endless cycle of AI systems.
The plan details several steps already taken, or planned, by the various agencies. But one thing it does not have is any plan for new, general AI legislation. That’s partly because there isn’t any consensus yet in Congress over how to handle AI. “This is not a legislative proposal,” said one official. “We think there’s a lot that can be done through executive action.”