An app developed by Google’s AI company saves nurses two hours a day, but so far it is free to use. An independent panel says DeepMind must clarify its business model, a stark illustration that, as healthcare AI goes mainstream, the question of who does what with patient data is coming into sharper focus.
A series of controversies over misuse of data has strained confidence, making the public wary at the sight of tech giants moving into healthcare.
In the UK, the cause célèbre is DeepMind Health, a Google sister company, which has been warned that controversy surrounding its use of patient records could scupper the widespread adoption of artificial intelligence and machine learning.
Nurses at the Royal Free Hospital in London say that Streams, a DeepMind app that pulls together patients’ records and test results held in separate systems onto a mobile device, saves them two hours a day.
That looks like an important advance in terms of improving patient outcomes and making better use of resources. But the Royal Free provided records from around 1.6 million patients as part of a trial to test the acute kidney injury alert system, pitching the hospital and DeepMind Health into controversy.
The UK information commissioner ruled the Royal Free allowed DeepMind access to sensitive patient data without explicit consent. To defuse the controversy, DeepMind Health set up an independent review panel to advise it on future governance.
In its second annual report published last week, the panel highlighted what is at stake if AI companies moving into healthcare do not meet expectations of how they should operate in this field. “All companies that wish to operate in the area of healthcare data ought to be held to high standards, but the onus is even greater for a company such as DeepMind Health,” the report says.
The “tide of public opinion” has turned against tech giants and their motives are viewed with increasing suspicion. In particular, people have been confronted with the fact that companies are selling their personal data and that, rather than being customers, they are the product.
“Against this background, it is hardly surprising that the public should question the motivations of a company so closely linked to Google,” the panel said. “So the question ‘where are they making their money’ is a crucial one. It’s the question that many feel foolish for not previously asking of the tech giants.”
The panel also wants DeepMind to specify how it will work with Alphabet, Google’s parent company, and what data could ever be transferred to them. “The issues of privacy in a digital age are, if anything, of greater concern now than they were a year ago, and the public’s view of the tech giants has shifted substantially,” said Julian Huppert, chair of the panel.
As it happens, the Streams app does not use artificial intelligence. However, DeepMind is also developing algorithms for analysing retinal scans at Moorfields Eye Hospital in London, applying AI to analyse mammograms, and using machine learning to automate radiotherapy planning for head and neck tumours, each project in collaboration with the National Health Service in England, and relying on patient data.
While other tech giants, including IBM, Apple and Samsung, are wading into health, some of the most advanced products are coming to market through smaller players.
AI-based systems are getting a foothold in a growing number of clinical areas. In April, the US FDA approved the first device to use AI autonomously to detect a medical condition. IDx-DR uses an algorithm to screen patients for diabetic retinopathy without the supervision of health professionals.
Images of patients’ eyes are uploaded to a server where the algorithm calculates the risk of disease. Designed for use in primary care, it can determine whether referral to an eye specialist is required. As the burden of diabetes grows, technology-based approaches may help ease the strain on health professionals.
This follows FDA approval in February for the first AI triage software, the Viz.ai LVO stroke platform. The system, which already has the CE mark allowing its use in the EU, identifies blockages in blood vessels and automatically notifies specialists to accelerate access to clot-busting treatment. Like other AI-powered apps, it promises greater speed and accuracy than highly skilled medical specialists.
Earlier intervention curbs costs
Gal Solomon of CLEW, an Israel-based clinical intelligence firm, said the explosion in sensor technologies is allowing more clinical indicators to be tracked than ever before. Making sense of all of this data is a task for AI. “Health professionals can correlate between three or four channels of information,” he said. “When you have hundreds, no person can handle it, especially at high frequency.”
CLEW’s flagship technology continuously tracks a range of indicators from patients in intensive care units (ICU) and feeds the data into computer models. “Deep learning technologies can then tell clinicians which patients are at risk, or if they will deteriorate in the next three or four hours,” said Solomon. “People in ICU are fighting for their lives so even small changes matter.”
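To make the idea concrete, the multi-channel monitoring Solomon describes can be caricatured in a few lines of code. This is a toy sketch only, not CLEW’s system: the channels, baselines, weights and alert threshold below are all invented for illustration, and a real deep learning model would learn such parameters from hundreds of annotated channels rather than three hand-picked ones.

```python
# Toy early-warning score over multiple vital-sign channels.
# All numbers are hypothetical, chosen only to illustrate how a
# score can combine deviations across several data streams.

def risk_score(window, weights, baselines):
    """Score one time window of vitals as a weighted sum of each
    channel's deviation from its normal baseline."""
    score = 0.0
    for channel, values in window.items():
        mean = sum(values) / len(values)
        score += weights[channel] * abs(mean - baselines[channel])
    return score

# Hypothetical baselines and weights for three channels a clinician
# might still track by eye.
baselines = {"heart_rate": 75, "resp_rate": 16, "temp_c": 37.0}
weights = {"heart_rate": 0.05, "resp_rate": 0.2, "temp_c": 1.5}

# A short window showing rising heart rate, breathing rate and fever.
window = {
    "heart_rate": [96, 101, 104, 108, 112],
    "resp_rate": [22, 23, 24, 25, 26],
    "temp_c": [38.1, 38.2, 38.3, 38.4, 38.5],
}

score = risk_score(window, weights, baselines)
alert = score > 3.0  # invented alert threshold
print(round(score, 2), alert)
```

Even this crude version shows why, as Solomon says, no person can do the equivalent by hand across hundreds of high-frequency channels: the arithmetic is trivial per channel but infeasible to sustain continuously at scale.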
ICUs are the most resource-intensive part of hospitals, soaking up more than 30 percent of total budgets due to heavy use of technologies and high staff-to-patient ratios. One of the key causes of death, as well as a driver of prolonged hospital stays, is sepsis, a life-threatening response to infection that is often detected late.
Doctors watching closely for symptoms can catch sepsis in time, but computers can spot the warning signs much earlier by continuously monitoring a range of indicators. “The only way to prevent sepsis deaths is to know it is coming and to intervene early,” Solomon said.
CLEW’s AI system has completed clinical trials in Israel and the US, and pending approval, could be on the market later this year, with an EU launch likely in 2019. “AI is the future for managing complex patients through predictive, pre-emptive healthcare, there’s no doubt about it,” said Solomon.
The ICU is far from the only ward where computational power is improving care. AI and machine learning are driving advances in diagnostics and monitoring, from the ‘artificial pancreas’ that tracks glucose levels and delivers the right insulin dose, to the AI software developed by Oxford University spin-out Ultromics to automatically interpret echocardiograms of patients suspected of having coronary artery disease.
In radiology, computers can be faster and more accurate than radiologists if they are fed the right information. “Algorithms can be better than humans, but they need a lot of data which has been well annotated,” said Nikos Paragios, a computer science and applied mathematics professor at France’s École Centrale des Arts et Manufactures. “There are some computer-aided diagnosis tools on the market now, but in the next five to 10 years, we expect to see many more.”
Some hospitals are keen to use IT systems that deliver efficiencies, but health professionals face mixed incentives, fearing the algorithm that makes their job easier today could one day make their role obsolete. Radiologists are already worrying about the future of their profession, fretting in academic journals about whether the use of AI is paving the way for their own redundancy.
Paragios said the challenge is convincing radiologists to embrace software that will eventually take over much of the work they do today. “It’s not that radiology will disappear overnight; but there will be a shift to interventionist radiology with greater automation of specific tasks which are currently time-consuming or tedious.”
While today’s digital tools make recommendations to doctors who take ultimate responsibility for clinical decisions, the next generation of devices will put the computer in the driving seat. “Looking ahead, the new class of digital tools will be making the call,” said Paragios.
Daniel Susskind, lecturer in economics at Balliol College, Oxford and co-author of The Future of the Professions, says the jobs of healthcare professionals are changing fast. “In the medium term, this is not a story of unemployment but of redeployment. Technologies will change the tasks and activities that people are required to do. But doctors will not turn up for work one morning and find a robot sitting in their seat.”
Other leading thinkers in healthcare are concerned about the manner in which AI is seeping into the health system without appropriate ethical and regulatory controls being in place.
The Wellcome Trust, for example, has voiced concerns about how AI will affect doctor-patient relationships. A report on the ethical, social and political challenges arising from AI in health, commissioned by the trust and published in April, notes that algorithms focus only on data that is measurable, such as images, health records and blood test results. They do not account for non-verbal cues a general practitioner might pick up when interacting with a patient, or relevant information about a patient’s social and personal circumstances.
The report asks if this hard-to-measure data could be captured in future: “If we only measure what we can measure, what do we miss out on?”
Despite the enormous potential of new technologies, broader public concerns about data privacy, exacerbated by recent controversies such as that at the Royal Free, are viewed as the major barrier to widespread transfer of power from doctors to algorithms.
In an era where privacy is centre stage, the success of AI in healthcare hangs on how it handles its vital ingredient: data.