A UCL research project funded by the PETRAS National Centre of Excellence for IoT Systems Cybersecurity has published a White Paper entitled “The Future of Medical Device Regulation and Standards: Dealing with Critical Challenges for Connected, Intelligent Medical Devices”, in partnership with BSI, the UK National Standards Body.
The Paper provides valuable insights to regulators, standards-making bodies, notified bodies, manufacturers, software developers, clinicians, and researchers regarding present gaps and potential loopholes that connected, intelligent medical devices (CIMDs) create in current regulatory frameworks. CIMDs include software-based medical devices and software as a medical device at the confluence of the Internet of Medical Things (IoMT) and artificial intelligence (AI).
These devices are increasingly used in the medical and healthcare sector, but have a number of critical vulnerabilities pertaining to their cybersecurity, data governance practices, and algorithmic integrity and trustworthiness, which can have serious consequences for patient safety and wellbeing. The White Paper puts forward several recommendations for action by standards development organisations, regulators, and international bodies in the context of widespread adoption of CIMDs in the healthcare sector.
The paper reviews the main trends in the existing standards and regulatory landscape applicable to CIMDs and captures critical challenges and potential gaps in this area.
Based on interviews and a roundtable with key experts and practitioners in the field, the White Paper identifies several critical challenges that should inform the future development of standards and guidelines applicable to CIMDs, with a specific focus on artificial intelligence, cybersecurity, and data governance issues:
- Liability concerns resulting from the complexity of devices, their changing characteristics through updates and algorithmic learning, and questions about the distributed responsibility of several parties, including software developers, device manufacturers, clinical staff operating the technology, and patients or other end users.
- Risk classification challenges, especially those resulting from modifications in the characteristics of medical devices arising from the potential exploitation of cybersecurity vulnerabilities or the limited predictability of their machine learning components.
- Detecting and managing cybersecurity vulnerabilities, especially in connected devices that do not have a clear vulnerability reporting, maintenance, and software update policy.
- Interaction between new medical devices and legacy components in the digital healthcare system, which can affect the performance of new devices and expose them to vulnerabilities and security attacks.
- Assessing and communicating the transparency and explainability of dynamic and deep learning-based medical devices.
- Understanding and assessing types of bias in training data and algorithmic learning in AI-based medical devices or AI as a Medical Device (AIaMD).
- Responsible and accountable data management across the lifecycle of a medical device, covering input, output, transfer, storage, and analytics. These measures should include data quality and integrity controls for software and AI-based medical devices, which are largely missing from standards and regulatory guidelines at the moment.
Dr Irina Brass, the PETRAS Reg-MedTech project lead, highlighted: “Understanding the impact and risks associated with connected, intelligent medical devices is critical for our society, because these devices are usually very close to the patient and in some circumstances inform or make decisions for them. I am delighted to have partnered with the BSI for this research to identify what is missing in the field, and how standards can support emerging regulatory frameworks, industry initiatives, and healthcare sector needs”.
Dr Andrew Mkwashi, PETRAS Research Fellow on the project, noted: “The age of connected, intelligent medical devices is here and it’s transforming the digital healthcare sector. Like every other novel technology, it comes with its own new set of challenges. Regulators and standards makers across the globe should keep pace with the speed of technological change, unlocking the benefits while minimising the risks they present to society”.
Commenting on the complexities of the regulation of AI during an interview for the research study, a manufacturer noted: “The regulation of the AI space is very complicated and also overlaps in a lot of different regulatory authorities. This is a really challenging area, because there are so many different stakeholder interests, different regulatory policies and so many applications of AI. As such, having one regulation to rule them all will be very difficult because there will be individual challenges to different uses of AI” (Manufacturer, interview-004, 2022).
During a roundtable event held as part of the study, another manufacturer highlighted the importance of obtaining diverse algorithmic data. They said: “Essential to ensuring integrity of algorithms (and this isn’t captured in standards) is the diversity of clinical data we have access to; organizations internally have to grapple with the algorithms that are overfitting the datasets that they have.” (Manufacturer, Roundtable, 2022).