GRS News

The Role of AI in Medical Devices: Shaping Digital Health through Regulation

5/9/2024

The adoption of artificial intelligence (AI) in the medical technology (MedTech) sector has accelerated, transforming how patient care is delivered. With innovations such as Software as a Medical Device (SaMD), Software in a Medical Device (SiMD), and the emerging category of AI as a Medical Device (AIaMD), the healthcare landscape is evolving rapidly. These advancements promise to reshape routine medical care, making it plausible that patients will receive diagnoses and additional care from AI-powered robots, computer programmes, and software. To ensure these technologies are safe for patients, global regulatory bodies and governments are introducing phased regulatory measures and creating supportive environments for AI developers.

 

EU AI Act and Its Impact on Digital Health

The EU AI Act, introduced by the European Commission, is a significant step in regulating AI in Europe. This groundbreaking legal framework provides requirements and obligations for AI developers and deployers, aiming to reduce administrative and financial burdens on businesses while ensuring safety and trustworthiness. The Act is part of a broader package of policy measures, including the AI Innovation Package and the Coordinated Plan on AI, designed to support the development of trustworthy AI and position Europe as a global leader in this field.

The EU AI Act categorises AI systems into four risk levels:

  1. unacceptable risk (banned)
  2. high risk
  3. limited risk, and
  4. minimal risk.

This classification helps address specific risks associated with AI applications, with high-risk AI systems facing stringent requirements and obligations, including conformity assessments before deployment and continuous post-market surveillance.
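
The four tiers above can be illustrated with a deliberately simplified triage sketch. The real classification turns on the Act's annexes and detailed legal criteria; the three boolean inputs below are hypothetical stand-ins for those tests, not the Act's actual wording.

```python
def ai_act_risk_tier(is_prohibited_practice: bool,
                     is_safety_component_of_regulated_device: bool,
                     interacts_directly_with_people: bool) -> str:
    """Illustrative triage of the EU AI Act's four risk tiers.

    The boolean inputs are simplified stand-ins: the actual
    classification depends on the Act's annexes and detailed criteria.
    """
    if is_prohibited_practice:
        return "banned"                # unacceptable-risk practices are prohibited
    if is_safety_component_of_regulated_device:
        return "high risk"             # e.g. AI embedded in a medical device
    if interacts_directly_with_people:
        return "limited risk"          # transparency obligations apply
    return "minimal risk"              # no additional obligations

# An AI diagnostic function embedded in a regulated medical device
# lands in the high-risk tier, triggering conformity assessment.
assert ai_act_risk_tier(False, True, False) == "high risk"
```

Under this tiering, it is the high-risk branch that carries the conformity-assessment and post-market-surveillance obligations described above.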

 

Medical Technology Industry’s Perspective

The MedTech industry has broadly welcomed the EU AI Act, appreciating the effort to frame regulations that address the sector's dynamic nature. However, further clarity is needed to ensure the Act effectively supports European technological innovation and integrates AI within healthcare settings. MedTech Europe has recommended that the European Commission swiftly develop guidelines with active stakeholder input, align horizontal AI Act standards with existing vertical standards for medical technologies, and establish a clear pathway for clinical and performance evaluation of medical technologies.

 

UK’s AI Airlock: A Testbed for AI in Healthcare

In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) has launched the AI Airlock, a regulatory sandbox designed to test and assess AI as a Medical Device (AIaMD). This initiative aims to address the challenges posed by AI in healthcare by providing a safe, collaborative space in which developers can generate robust evidence for their products. The AI Airlock fosters collaboration among regulators, government bodies, the NHS, and academia, ensuring the safety and performance of AI products while accelerating their adoption in clinical settings.

 

Guiding Principles for Transparency in MLMDs

Transparency is crucial for building trust in Machine Learning-enabled Medical Devices (MLMDs). Recently, the FDA, Health Canada, and the UK's MHRA released guiding principles focused on transparency throughout the lifecycle of MLMDs. These principles emphasise the importance of clear communication about a device's intended use, development, performance, and logic. Transparency ensures that healthcare professionals and patients understand how these devices work, facilitating informed decision-making and safe, effective use.

 

The International Medical Device Regulators Forum (IMDRF)

The International Medical Device Regulators Forum (IMDRF) Artificial Intelligence/Machine Learning-Enabled (AI/ML) Working Group recently released a draft guidance document titled "Good Machine Learning Practice for Medical Device Development: Guiding Principles." This document sets out 10 guiding principles for Good Machine Learning Practice (GMLP) to support the development of safe, effective, and high-quality medical devices that incorporate AI, and it invites international standards organisations, regulators, and collaborative bodies to advance GMLP. Key areas for collaboration include research, educational tools and resources, international harmonisation, and consensus standards. These efforts aim to shape regulatory policies and guidelines, fostering innovation and safety in AI-enabled medical devices globally.

 

Expert Perspectives

Isabel Teare, Senior Legal Advisor at Mills & Reeve, underscores the importance of a regulatory framework that evolves with technological advancements. "As AI continues to transform healthcare, it is vital that regulations are not static. They must adapt to innovations while ensuring patient safety and fostering trust. Initiatives like the AI Airlock are steps in the right direction, promoting a balanced approach to innovation and regulation. However, the approaches taken in different countries are currently highly divergent. This means that innovators must take account of several different frameworks in developing their products if they plan to make them available internationally. There is room for optimism that the practical implementation of these frameworks, notably through guidance and on-the-ground practice, will converge as the technology evolves."

 

Greer Deal, Director and Co-Founder of Global Regulatory Services, highlights the collaborative nature of successful AI integration. "The MedTech industry has a strong ecosystem with a common goal of delivering enhanced and safe care to patients. This ecosystem includes Competent Authorities, Notified Bodies, Industry Stakeholders, and Device Manufacturers. The entire ecosystem must work in tandem to navigate the complexities of AI in healthcare. By fostering open communication and continuous feedback, we can ensure that AI technologies are safe, effective, and beneficial for patients. Initiatives like the AI Airlock exemplify this collaborative spirit, providing a model for other regions to follow."

 

What Should Manufacturers Keep in Mind?

Experts from Global Regulatory Services and Mills & Reeve have compiled an experienced perspective on what manufacturers should consider and be prepared for:

  • Compliance with Regulatory Requirements: Manufacturers must ensure that their AI-enabled medical devices comply with all relevant regulatory requirements. This includes understanding and adhering to the EU AI Act, as well as other applicable regulations such as the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR). It is crucial to stay updated with any amendments or new guidelines issued by regulatory bodies.
  • Data Privacy and Security: Given the sensitive nature of healthcare data, manufacturers must prioritise data privacy and security. Implement robust cybersecurity measures to protect patient data from breaches and ensure compliance with data protection laws such as GDPR. Regularly update security protocols to address emerging threats.
  • Ethical Considerations: Manufacturers should embed ethical considerations into the design and deployment of AI systems. This includes ensuring that AI models do not perpetuate biases and are trained on representative datasets to provide fair and accurate outcomes for all patient groups.
  • Continuous Monitoring and Post-Market Surveillance: AI systems can evolve and learn over time, which means continuous monitoring is essential to ensure ongoing safety and efficacy. Implement robust post-market surveillance processes to detect and address any issues that arise once the device is in use.
  • Intellectual Property Management: Protecting intellectual property (IP) is vital for maintaining a competitive edge. Work with IP experts to secure patents and trademarks for your AI innovations and ensure that your IP strategy aligns with your overall business objectives.
  • Legal Liability: Understand the legal implications of deploying AI in healthcare. Manufacturers should be aware of their liability in cases where AI systems fail or produce incorrect results. Ensure that you have appropriate insurance coverage and legal safeguards in place.
  • Future-Proofing: The regulatory landscape for AI in medical devices is rapidly evolving. Manufacturers should stay informed about future regulatory changes and be prepared to adapt their processes accordingly. Investing in flexible, scalable technology can help ensure long-term compliance and success.
  • Compliance with International Standards: Adhere to international standards and guidelines, such as the EU MDR/IVDR, FDA regulations, and the IMDRF's Good Machine Learning Practice (GMLP) principles. Aligning with these standards ensures global compliance and facilitates market access.
  • Robust Quality Management System (QMS): Establish a robust QMS that complies with international standards such as ISO 13485. Ensure that the QMS covers all aspects of product development, including design controls, risk management, and post-market surveillance. Regularly audit and update the QMS to maintain compliance.
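
The continuous-monitoring point above can be made concrete. One common drift signal used when watching a deployed model is the population stability index (PSI), which compares the distribution of a model's live outputs against a baseline established at validation. The sketch below is purely illustrative: the metric, bin count, and thresholds are conventional analytics choices, not regulatory requirements.

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two score distributions.

    A common convention (illustrative, not a regulatory value):
    PSI < 0.1 is treated as stable, 0.1-0.25 as moderate shift,
    and > 0.25 as significant drift worth investigating.
    """
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(xs), 1e-6) for c in counts]

    b, l = proportions(baseline), proportions(live)
    return sum((lp - bp) * math.log(lp / bp) for bp, lp in zip(b, l))

# Identical distributions score near zero; a shifted one exceeds the flag.
baseline = [i / 100 for i in range(100)]          # uniform scores on [0, 1)
shifted  = [0.5 + i / 200 for i in range(100)]    # mass moved to [0.5, 1)
assert psi(baseline, baseline) < 0.01
assert psi(baseline, shifted) > 0.25
```

In a post-market surveillance process, a signal like this would not act on its own; it would feed the documented review and corrective-action steps of the manufacturer's QMS.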

 

Conclusion

Integrating AI in medical devices and digital health is reshaping the healthcare landscape, promising enhanced patient care and more efficient clinical workflows. As regulatory frameworks like the EU AI Act and initiatives like the MHRA's AI Airlock emerge, they provide a structured yet flexible approach to managing the risks and benefits of AI technologies. By prioritising transparency, collaboration, and continuous improvement, regulators and industry stakeholders can ensure that AI's full potential is harnessed safely and effectively, ultimately benefiting patients and healthcare systems worldwide.

 
