FDA details regulatory framework for AI/ML-Based Software as a Medical Device

In April 2019, the US Food and Drug Administration (FDA) published a discussion paper and request for feedback on its proposed regulatory framework for Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). Following industry feedback, the FDA has now published a five-part action plan.

The action plan begins by detailing the proposed regulatory framework for medical device software, which centres on a “Predetermined Change Control Plan”. This plan would allow manufacturers to include SaMD Pre-Specifications (SPS), describing which aspects of the device are expected to change through learning, together with an Algorithm Change Protocol (ACP), detailing how the algorithm will learn and change while still maintaining safety and efficacy. The FDA aims to publish draft guidance in 2021 covering areas such as the types of modifications permitted, the submission/review process and submission content.
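As an illustration only, the two halves of such a change control plan could be captured as structured data. The field names below are hypothetical, chosen to mirror the SPS ("what will change") and ACP ("how it will change") split described above; they are not an FDA-defined schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SaMDPreSpecifications:
    """SPS: *what* the manufacturer anticipates will change (hypothetical fields)."""
    performance_changes: List[str] = field(default_factory=list)   # e.g. improved sensitivity
    input_changes: List[str] = field(default_factory=list)         # e.g. new scanner models
    intended_use_changes: List[str] = field(default_factory=list)  # e.g. broader population

@dataclass
class AlgorithmChangeProtocol:
    """ACP: *how* changes will be made while maintaining safety and efficacy."""
    data_management: str = ""         # how retraining data are collected and curated
    retraining_procedure: str = ""    # how and when the model is updated
    performance_evaluation: str = ""  # acceptance criteria before an update is released
    update_rollout: str = ""          # how users are informed of changes

@dataclass
class PredeterminedChangeControlPlan:
    sps: SaMDPreSpecifications
    acp: AlgorithmChangeProtocol

# Example plan with made-up content, for demonstration only
plan = PredeterminedChangeControlPlan(
    sps=SaMDPreSpecifications(performance_changes=["improve lesion-detection sensitivity"]),
    acp=AlgorithmChangeProtocol(retraining_procedure="quarterly retraining on curated data"),
)
```

The point of the sketch is simply that the SPS and ACP are complementary documents: one scopes the anticipated modifications, the other commits to a process for making them safely.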

The next part of the action plan discusses Good Machine Learning Practice (GMLP). Whilst not yet an official standard, the FDA is committed to developing such a practice and, to this end, is working with numerous US-based and international groups in the AI/ML field. It is hoped that a harmonised GMLP can be developed, describing a set of best practices for the field.

The third part of the action plan covers a patient-centred and transparent approach to the implementation of AI/ML-based devices. Both manufacturers and the FDA want to foster user trust in these devices and recognise that, to achieve this, patients must understand their benefits, risks and limitations. The FDA plans to hold a public workshop to explore how device labelling may support transparency for users. This is particularly important to manufacturers, who have previously highlighted the challenges of labelling AI/ML-based devices and have requested clarity from the FDA on the matter.

The action plan’s next component details the FDA’s commitment to preventing algorithmic bias in AI/ML-based devices. Because such software is trained on historical datasets, which may themselves contain bias, AI/ML algorithms can mirror that bias when deployed as a device. The FDA stressed the importance of eliminating this bias to ensure AI/ML-based devices are suitable for a racially and ethnically diverse patient population.
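One basic way to surface the kind of bias described above is to compare a model's performance across demographic subgroups. The sketch below is illustrative only; the data, subgroup names and disparity threshold are made up for demonstration and do not reflect any FDA-mandated method.

```python
def subgroup_accuracy(records):
    """records: list of (subgroup, prediction, label) tuples.
    Returns per-subgroup accuracy as a dict."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# Fabricated evaluation records: (subgroup, model prediction, confirmed label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 0),
]

acc = subgroup_accuracy(records)          # {'group_a': 0.75, 'group_b': 0.25}
gap = max(acc.values()) - min(acc.values())
if gap > 0.1:  # arbitrary illustrative threshold
    print(f"Possible performance disparity across subgroups: {acc}")
```

A large gap between subgroups, as in this toy data, would be a signal to investigate the training data and model before the device reaches a diverse patient population.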

The final point in the action plan concerns Real-World Performance (RWP). Data gathered on the real-world use of SaMD allows manufacturers to better understand how their products are being used, where efficacy can be improved, and any safety or usability concerns raised by users. This has raised many questions about what data industry can collect and how, prompting the FDA to develop a pilot program for monitoring Real-World Data collection. Manufacturers can work alongside the FDA on a voluntary basis, with the aim of defining a detailed framework that outlines the relevant performance parameters.
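As a sketch of what summarising such real-world data might look like for a diagnostic SaMD, the snippet below computes two common performance parameters from fabricated field records. The metric choices and record format are assumptions for illustration, not parameters defined by the FDA's pilot program.

```python
def real_world_summary(cases):
    """cases: list of dicts with boolean 'prediction' (device output)
    and 'confirmed' (clinically confirmed outcome) keys."""
    tp = sum(c["prediction"] and c["confirmed"] for c in cases)
    fp = sum(c["prediction"] and not c["confirmed"] for c in cases)
    fn = sum(not c["prediction"] and c["confirmed"] for c in cases)
    sensitivity = tp / (tp + fn) if (tp + fn) else None
    ppv = tp / (tp + fp) if (tp + fp) else None  # positive predictive value
    return {"cases": len(cases), "sensitivity": sensitivity, "ppv": ppv}

# Fabricated real-world cases for demonstration
cases = [
    {"prediction": True,  "confirmed": True},   # true positive
    {"prediction": True,  "confirmed": False},  # false positive
    {"prediction": False, "confirmed": True},   # false negative
    {"prediction": False, "confirmed": False},  # true negative
]

print(real_world_summary(cases))  # 1 TP, 1 FP, 1 FN -> sensitivity 0.5, ppv 0.5
```

Tracking summaries like this over time is one way field data could reveal the usability and safety signals the action plan describes.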

To view the FDA’s action plan in full, click here.