Viewpoint: it’s time to create a science to ensure safe and ethical use of artificial intelligence

08 Jul 2021 | Viewpoint

The rapid spread of AI in devices, workplaces and governments calls for the establishment of a new discipline: AI auditing

While much of governments’ regulatory focus over the last decade has been on data privacy, the coming decade may see the spotlight shift to the behaviour of artificial intelligence (AI), or what some researchers call algorithm conduct.

That is the inevitable response to AI being used in an increasingly wide variety of settings, ranging from autonomous vehicles to hospitals. The speed and diversity of the applications themselves are also growing rapidly. To keep pace, an accompanying field of research and practice is needed in AI auditing, to provide formal assurance that the algorithms used in AI are legal, ethical, and safe.

The lack of designated professional oversight of the use of artificial intelligence is already tripping up many companies. Examples of corporations suffering either reputational or legal damage include Facebook and Cambridge Analytica. The British consulting firm was accused of voter manipulation using Facebook data from users who hadn’t given their permission. 

Similarly, Amazon was reported to have developed an AI system to rate job applicants, only to discover that the application discriminated against women. (Amazon said the application was never put into practical use.)

To help institutions ensure correct and ethical use of AI, we propose developing ‘algorithm auditing’.

This specialism would address a number of strategic risks: reputational, financial, and governance.

Reputational risks arise when companies are seen to have discriminatory or unethical systems. Financial risks result from government fines, customer lawsuits, or loss of earnings. Lastly, those who develop and deploy AI systems should always remain in control of them, with the ability to monitor and report on their behaviour; failure to do so creates governance risks.

Just as many companies and charities have to submit audited financial accounts, so they will be required to submit audits of their AI practices and any related algorithms.

Vendors of proprietary AI, or those who share open-source AI, should be able to provide evidence of robust governance of systems during their development. AI auditors will need access to check internal operations and management, and be able to provide proof to regulators and potential customers that systems meet ethical and legal requirements.

An early focus of AI auditors could be employee recruitment. Hiring is an ethical, regulatory, and legal minefield. Many companies offer AI systems to screen, recommend and hire potential employees, and developing AI auditing in this area would assure companies that they are selecting a system that is safe and appropriate.
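To make this concrete, the following is a minimal sketch of one check an audit of a hiring algorithm might run: comparing selection rates across applicant groups using the "four-fifths rule" for adverse impact. The data, group labels, and threshold are illustrative assumptions, not a method prescribed here; a real audit would involve far more than a single statistic.

```python
# Illustrative sketch: adverse-impact check on hypothetical screening outcomes
# from an AI hiring tool. Not a complete or authoritative audit procedure.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs; selected is True/False."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact_ratios(records):
    """Ratio of each group's selection rate to the highest-rate group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes: group A selected 40 of 100, group B selected 25 of 100
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 25 + [("B", False)] * 75

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule threshold
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

In this toy example, group B's selection rate is 62.5% of group A's, falling below the four-fifths threshold and flagging the system for closer review.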

Regulation on the Horizon

Besides self-interest, companies have another important reason to support the development of AI auditing. If they don’t come up with their own safeguards, there is a risk that governments will do it for them, and that could get messy. Indeed, regulatory and legislative activity relating to oversight of AI is stepping up.

In the name of protecting customers or promoting non-discrimination, governments could impose regulation that may or may not have those effects. In the US, Congress is debating the Algorithmic Accountability Act. At state and city level, legislation that has been introduced includes the Automated Decision Systems Accountability Act in California and proposed mandatory AI bias audits in New York City.

In Europe, the UK is debating the Information Commissioner’s recommendations on AI auditing. Broader in scope than many of these initiatives is the forthcoming European Union draft legislation on AI, which calls for legal requirements for appropriate AI risk governance and reporting. (High-risk systems, including employee recruitment, are named in the legislation.)

In light of the rapid pace of the development and regulation of AI, policymakers, researchers, and corporations need to lay the groundwork for AI auditing. Some of the necessary steps are:

  • Update data ethics strategy: as a result of the past decade’s focus on personal data protection, significant data governance is already in place. This should form a solid foundation for managing AI risk. However, a robust data ethics strategy alone will not suffice for AI, as industry experience has shown. It is therefore necessary to update and evolve those strategies in light of AI risks; for example, it is important to understand how data protection concerns are altered by the use of AI.

  • Adjust the legal status of algorithms: determining how those who develop and deploy the algorithms at the heart of AI systems are held legally responsible for them is likely to raise complex questions. Such questions will become more complicated the more AI systems are embedded in people’s daily lives. Sectors such as medicine, where malpractice suits can be pervasive and high stakes, may require far more legal development than, say, the entertainment sector.

  • Think about more than compliance: it would be easy for AI audits to focus solely on regulation, compliance, and customer safety. But AI auditing could also be considered in the context of leading global challenges and the broader implications of the technology. Artificial intelligence should be assessed in terms of its effect on the demand for human labour, its impact on the environment, and its potential to help create more equitable societies.

Emre Kazim is a research fellow in the computer science department at University College London, and co-founder of Holistic AI, a start-up specialising in artificial intelligence risk management
