Streamlining EU Compliance for AI-Enabled Medical Devices
30 Dec 2025
Aligning AI Transparency and Safety Requirements with MDR Processes
For medical device innovators using artificial intelligence (AI), the path to market can be uncertain and complex. With the EU Artificial Intelligence Act now in force, manufacturers can align AI transparency and safety requirements directly with EU Medical Device Regulation (MDR) processes, streamlining the approach to conformity.
The European Union now expects manufacturers to prove not only that their products are safe and effective under the Medical Device Regulation (EU) 2017/745 or In Vitro Diagnostic Regulation (EU) 2017/746 but also that their AI/ML algorithms are fair, transparent, and explainable under the EU Artificial Intelligence Act (EU) 2024/1689.
The two frameworks overlap heavily. Understanding how they intersect, and building both sets of controls into your existing quality system, can eliminate unnecessary rework and improve speed to market.
Two Regulatory Pillars Every Manufacturer Must Align
1. MDR / IVDR – Proving Safety and Performance
The MDR and IVDR ensure that medical devices and diagnostics placed on the EU market are safe, clinically effective, secure, and traceable throughout their lifecycle.
Under these regulations, manufacturers must:
- Demonstrate safety and clinical performance through conformity assessment.
- Maintain a risk-management process compliant with ISO 14971.
- Operate an auditable quality-management system (ISO 13485).
- Conduct post-market surveillance.
Software and AI are explicitly covered. Guidance such as MDCG 2019-16 (Cybersecurity) and standards like IEC 81001-5-1 require secure-by-design development and verification across the product lifecycle.
2. The AI Act – Regulating Trustworthy AI
The EU AI Act, adopted in 2024, introduces the world’s first horizontal regulation for AI systems. It primarily targets AI systems classified as high-risk: systems that affect health, safety, or fundamental rights. Most AI-enabled medical devices will fall under the high-risk classification of the EU AI Act. More precisely, an AI system is high-risk when it is a safety component of a product, or is itself the product, that is covered by the MDR or IVDR and requires third-party (Notified Body) conformity assessment.
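To make that test concrete, it can be expressed as a simple decision rule. The sketch below is illustrative only; the field names are hypothetical, and the authoritative test is Article 6(1) and Annex I of the AI Act.

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    """Hypothetical record describing an AI feature of a medical device."""
    is_safety_component: bool   # the AI acts as a safety component of the product
    is_the_product: bool        # the AI system is itself the product
    covered_by_mdr_ivdr: bool   # the product falls under the MDR or IVDR
    needs_notified_body: bool   # conformity assessment involves a Notified Body

def is_high_risk_ai(p: DeviceProfile) -> bool:
    """Approximation of the Article 6(1) high-risk test described above."""
    return (
        (p.is_safety_component or p.is_the_product)
        and p.covered_by_mdr_ivdr
        and p.needs_notified_body
    )

# Example: an AI diagnostic aid covered by the MDR and assessed by a Notified Body
print(is_high_risk_ai(DeviceProfile(True, False, True, True)))  # True
```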
Manufacturers will need to show that their algorithms are accurate, robust, unbiased, and subject to human oversight before market placement.
For manufacturers, existing design-control frameworks already cover many of the requirements, but they will need to embed AI-specific risk and data governance steps within those frameworks.
When Your Device Becomes a “High-Risk AI System”
Typical examples of high-risk AI system use cases in medical devices include:
- Deep-learning imaging tools detecting tumors or lesions.
- AI-based insulin or ventilator control loops.
- Predictive clinical-decision support systems.
These products already undergo MDR assessment; the AI Act layers on proof that the AI/ML algorithm itself is trustworthy: that its data is representative, its logic explainable, and its performance stable across populations.
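Demonstrating stable performance across populations typically means stratified evaluation of the model. A minimal sketch, assuming binary classification; the subgroup labels, sample data, and tolerance are illustrative assumptions:

```python
# Minimal sketch: stratified sensitivity check across patient subgroups.
from collections import defaultdict

def sensitivity_by_subgroup(records):
    """records: iterable of (subgroup, y_true, y_pred) with binary labels."""
    true_pos, positives = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            true_pos[group] += int(y_pred == 1)
    return {g: true_pos[g] / positives[g] for g in positives}

results = sensitivity_by_subgroup([
    ("age<65", 1, 1), ("age<65", 1, 1), ("age<65", 1, 0),
    ("age>=65", 1, 1), ("age>=65", 1, 0), ("age>=65", 1, 0),
])
# Compare against a predefined acceptance criterion (5 points is an assumption)
gap = max(results.values()) - min(results.values())
print(results, "stable" if gap <= 0.05 else "investigate subgroup gap")
```

Real conformity evidence would cover far more subgroups and metrics (specificity, calibration, robustness), but the principle is the same: per-population evaluation against a predefined acceptance criterion.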
One Assessment: Integrating MDR + AI Act
To prevent duplicate audits, the requirements of the AI Act will be integrated into the existing MDR/IVDR conformity assessment. The same Notified Body, if designated for both frameworks, will review the technical documentation once.
Much as with building secure-by-design products, integrating AI governance early, so that MDR and AI Act requirements are addressed together, will prevent costly redesigns once the high-risk obligations of the AI Act take effect.
Preparing for 2026 and Beyond
While the general applicability of the EU Artificial Intelligence Act begins August 2, 2026, its requirements for products already regulated under existing EU frameworks, such as the MDR and IVDR, will take effect one year later, on August 2, 2027. Nevertheless, it is advisable to start now to stay ahead:
- Inventory all AI use cases across your product lines.
- Classify each use case as high-risk, user-facing, and/or general-purpose AI (GPAI) to determine which AI Act obligations apply (see the inventory sketch after this list).
- Review and identify applicable standards. Where harmonized standards are unavailable, consider the state of the art in international standards, industry best practices, and voluntary codes of practice.
- Perform a gap analysis and readiness assessment, evaluating your organization’s current practices against the identified regulatory and standards requirements. Use this to build a readiness roadmap defining priorities, timelines, and evidence needs for conformity assessment.
- Update procedures for data governance, transparency, and oversight.
- Consider leveraging key standards:
  - ISO/IEC 42001 – AI Management Systems
  - FDA GMLP – Good Machine Learning Practice
  - IEC 81001-5-1 – Secure software lifecycle
  - IEC 60601-4-5 – Security in networked medical devices
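A structured inventory makes the first two steps above auditable. Below is a minimal sketch of one possible record shape, assuming Python 3.9+; the scope categories, field names, and example entries are illustrative, not prescribed by the AI Act.

```python
# Minimal sketch of an AI use-case inventory with AI Act scope tags.
from dataclasses import dataclass, field
from enum import Flag, auto

class AIActScope(Flag):
    NONE = 0
    HIGH_RISK = auto()    # Article 6 high-risk classification
    USER_FACING = auto()  # transparency obligations for user-facing AI
    GPAI = auto()         # general-purpose AI obligations

@dataclass
class AIUseCase:
    product: str
    description: str
    scope: AIActScope
    standards: list[str] = field(default_factory=list)  # e.g. ISO/IEC 42001

inventory = [
    AIUseCase("ImagingSuite", "Deep-learning lesion detection",
              AIActScope.HIGH_RISK, ["ISO/IEC 42001", "IEC 81001-5-1"]),
    AIUseCase("PatientPortal", "Symptom-intake chatbot",
              AIActScope.USER_FACING),
]
for uc in inventory:
    print(f"{uc.product}: {uc.scope}")
```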
Early integration reduces the risk of non-compliance findings and will help speed certification under the dual-compliance model.
Turning Compliance into a Competitive Advantage
The convergence of the MDR and AI Act signals a shift toward AI conformity by design. Manufacturers who embed trustworthy-AI principles, including security, transparency, and fairness, directly into their design controls will differentiate their brand as safe, reliable, and future-ready.
Intertek's AI² framework helps medical device makers integrate MDR and AI Act requirements within a single audit-ready quality system. Contact our experts to schedule an AI Act regulatory scan and gap assessment to start streamlining compliance today.