Interplay between MDR/IVDR and AIA in AI-assisted medical devices (MDAI)

The document MDCG 2025-6 is a joint paper by the European Artificial Intelligence Board (AIB) and the Medical Device Coordination Group (MDCG). It serves as an FAQ document, primarily for manufacturers of medical devices with artificial intelligence (MDAI), and addresses the interactions between the Medical Devices Regulation (MDR), the In Vitro Diagnostic Medical Devices Regulation (IVDR) and the Artificial Intelligence Act (AI Act).

The aim is to provide guidance to manufacturers, notified bodies and competent authorities on the simultaneous application of these regulations, in particular with regard to high-risk AI systems in medical devices.

What it’s all about

Manufacturers of AI-based medical devices will in future have to comply with the following two complementary sets of regulations in particular:

  • the MDR/IVDR, which regulate medical safety and clinical effectiveness, and
  • the AI Act, which imposes additional requirements to ensure fundamental rights, transparency, data quality and risk management in AI systems.

When is an MDAI affected?

As soon as an AI system is a medical device, or part of one, and must undergo conformity assessment by a notified body in accordance with the MDR/IVDR, it is also considered a high-risk AI system under the AI Act.

MDAIs developed in-house by healthcare institutions are exempt from this classification, but must still meet certain requirements of the AI Act.

What does this mean for manufacturers?

  1. Quality and risk management
    The AI Act prescribes a specific quality management system (QMS) for high-risk AI systems; this can be integrated into the existing MDR/IVDR QMS. Risk management must also cover AI-specific risks such as bias, data distortion or unpredictable system behaviour.
  2. Data requirements
    AI training, validation and test data must be of high quality, traceable and representative of the target population. Bias prevention and monitoring become mandatory.
  3. Transparency & oversight
    Users must be able to recognise that they are interacting with an AI system; explainability of how it works becomes a key criterion. Human oversight is mandatory: it must always be possible for a human to intervene in, or override, the system.
  4. Technical documentation
    Consistent technical documentation covering both the MDR/IVDR and the AI Act is expected. In addition to clinical evidence, AI-specific evidence (e.g. performance tests, risk assessments) must also be documented.
  5. Post-market monitoring & changes
    Manufacturers must develop integrated monitoring plans, also with regard to AI interactions with other systems.

What does the interplay between the MDR/IVDR and the AI Act mean for manufacturers?

  • Two sets of rules, one goal: safety, ethics and trust in AI-based medical devices.
  • Compliance is becoming multidimensional. Companies must adapt to an integrated, regulated lifecycle approach, from development and data responsibility to ongoing market surveillance.
  • Acting now means developing processes, documentation and teams in a timely manner to ensure AI Act compliance. Only those who address regulatory issues early on will be able to bring MDAI products to market safely, economically and in compliance with the law.

We are happy to support you in this endeavour. Get in touch with us.

