Over the past two decades, the ability of machines to challenge and beat humans at complex games has made “quantum” leaps, rhetorically if not in technical computing terms.
In 1997, IBM’s Deep Blue supercomputer used “brute force” computing power to out-calculate Grandmaster Garry Kasparov at chess. In 2011, the company’s Watson employed “machine learning” (ML) techniques to beat several former Jeopardy! champions at their own game. In early 2016, Google DeepMind’s AlphaGo program—trained on a massive history of games—repeatedly defeated the reigning European champion at Go, a game with more possible board configurations than there are atoms in the universe.
It reached this milestone by employing two neural networks powered by sophisticated “automated decision making” (ADM) algorithms. And, in 2017, its successor AlphaGo Zero became the strongest Go player on the planet—human or machine—after just a few months of training through self-play alone. Incredibly, it started with nothing more than the rules of the game.
Automated decision making refers to decision making by purely technological means, without human involvement. Article 22(1) of the European Union’s General Data Protection Regulation (GDPR) enshrines the right of data subjects not to be subject to decisions based solely on automated individual decision making where those decisions produce legal or other similarly significant effects. For this reason, in this paper we consider applications of ADM other than those based on personal information—for example, the game playing discussed above. We discuss other aspects of the GDPR later in the paper.
Whilst the game-playing results are impressive, the consequences of machine learning and automated decision making are themselves no game. As of this writing, they have progressed to enable computers to rival human ability at even more challenging, ambiguous, and highly skilled tasks with profound “real world” applications, such as recognizing images, understanding speech, and analysing X-rays, among many others. As these techniques continue to improve rapidly, many new and established companies are utilizing them to build applications that reliably perform activities that previously were done (and doable) only by people. Today, such systems can both augment human decision making and, in some cases, replace it with a fully autonomous system.
In this report, we review the principal implications of the coming widespread adoption of ML-driven automated decision making, with particular emphasis on its technical, ethical, legal, economic, societal and educational ramifications. We also make a number of recommendations that policy makers might wish to consider.