AI in Healthcare: Compliance With The Medical Devices Regulation

The potential of AI in healthcare has prompted many leading technology companies to invest in this field and has given rise to the formation of new medical and technology-based start-ups. To ensure a return on their investments and to offset the costs involved, these companies must be in a position to bring their technologies to market and utilise them on a commercial basis. To do so, organisations developing AI-powered medical applications have to comply with the Medical Devices Regulation (MDR).

What is the Medical Devices Regulation about?

The MDR establishes the safety and quality requirements for all medical devices placed on the EU market, as well as the procedure – known as conformity assessment – through which it is determined that these requirements are met. The precise rules that manufacturers – the companies that produce medical devices – must follow depend on the risk class allocated to the device in question.

Classification of AI-powered medical applications

Under Rule 11 of Annex VIII to the MDR, AI applications that support healthcare professionals in making diagnostic and treatment decisions are classified as class IIa or higher, depending on the potential impact of those decisions on the patient's health and life.

MDR and conformity assessment of AI-powered medical applications

Conformity assessment of AI-based medical devices involves a notified body – an independent organisation designated to verify that a device complies with the MDR requirements.

To have their devices certified, manufacturers submit an application to a notified body containing a detailed description of the device and of the quality management system they have implemented. Manufacturers must also establish, document, implement and maintain a risk management system, conduct a clinical evaluation, and draw up and keep up to date the technical documentation. The MDR further requires manufacturers to plan how they will ensure that a device retains its safety and performance characteristics after it is placed on the EU market – this plan is known as the post-market surveillance plan. To oversee manufacturers and their devices, notified bodies carry out periodic audits and assessments.

Practical Implications for companies developing AI-powered medical applications

For AI start-ups, technology companies entering the healthcare sector, and healthcare institutions developing their own AI innovations, MDR compliance can seem daunting, but it also presents a significant opportunity to stand out in the marketplace. By embedding regulatory considerations into the development process from the outset, companies can streamline the certification pathway and reduce costly delays later in the product lifecycle.

Collaboration with regulatory experts and clinical professionals early on can provide crucial insights into risk assessment, patient safety, and ethical AI development. This approach not only enhances the likelihood of regulatory approval but also shortens the time to certification, ensuring that innovative solutions reach patients and healthcare providers faster.

Steps to Ensure MDR Compliance for AI-powered medical applications

  • Early Engagement with Notified Bodies: Companies developing AI-driven healthcare solutions should engage with notified bodies early in the development process. Early dialogue can help clarify the classification, required documentation, and testing protocols.
  • Implement a Quality Management System (QMS): A QMS ensures that the AI development process follows MDR guidelines from inception to deployment. ISO 13485, an international standard for medical device quality management, aligns well with MDR requirements.
  • Risk Management and Post-Market Surveillance: Risk management must be integrated throughout the AI-device lifecycle. Companies should establish procedures for post-market surveillance, continuously monitoring AI performance and addressing emerging risks.
  • Focus on Transparency and Accountability: Transparency of AI is not limited to providing explainable AI (XAI) solutions; it should be seen as a continuous process accompanying the whole life cycle of AI systems. The European Parliament highlights the “important distinction between transparency of algorithms and transparency of the use of algorithms.” This means that transparency measures should not be limited to technological solutions to AI opacity but should also address the roles and accountabilities of the stakeholders involved in the AI lifecycle.

Building Trust Through Compliance and Transparency

In a healthcare landscape increasingly reliant on AI, trust is paramount. MDR compliance not only mitigates risks but also signals to healthcare providers and patients that AI-driven medical devices have undergone rigorous testing and adhere to the highest safety standards. Transparent AI, coupled with a clear allocation of roles and accountabilities among the stakeholders involved, helps foster user confidence and promotes widespread adoption.

Moreover, as patient advocacy groups and professional bodies continue to push for ethical AI use in healthcare, companies that prioritize transparent AI development and deployment processes, unbiased algorithms, and patient safety will find themselves well-positioned to lead in this transformative era.

Ultimately, MDR compliance is more than just a legal necessity—it's a strategic advantage that paves the way for AI to fulfil its potential in transforming global healthcare.

AI and the MDR: Achieving Compliance in EU Projects

Compliance with the Medical Devices Regulation (MDR) is critically important for the EU's flagship research and innovation projects, particularly those focused on healthcare innovation and the development of medical technologies, such as Synapsing and Digicare4You.

The Synapsing project seeks to revolutionize the diagnosis and treatment of mental and neurodegenerative disorders (MDs and NDs) by generating extensive clinical and neuroimaging datasets and developing AI-driven diagnostic tools. Because the project relies on cutting-edge AI applications in a highly regulated medical field, Privanova provides comprehensive guidance on ethical, regulatory, and social considerations from the project's outset and throughout its term.

The Digicare4You focuses on integrating digital solutions for managing chronic diseases like diabetes and hypertension. The project’s use of AI for patient monitoring and personalized care demands adherence to the applicable legal frameworks to secure trust among patients and healthcare providers. With the continuous support of Privanova on the matter, Digicare4You ensures its solutions meet the highest safety standards while maximising their impact on efficient healthcare delivery.