AI Medical Devices and the FDA: The Basics of the 510(k)

The Food and Drug Administration (FDA) is the United States’ regulatory agency that oversees human and veterinary drugs, biological products, the food supply, cosmetics, radiation-emitting products, and medical devices. Its mission is to protect public health by ensuring the safety of these products. Because the FDA oversees medical devices, one of the biggest questions artificial intelligence (AI) developers trying to enter the US market ask is how the FDA approaches regulation of AI medical devices.

Traditionally, the FDA reviews medical devices through a premarket pathway such as 510(k) clearance, De Novo classification, or premarket approval (PMA). AI-enabled medical devices, however, are a new breed for the FDA, prompting it to re-examine how modifications to medical devices, including software as a medical device (SaMD), can be made, depending on the usage and the risk a given modification poses to patients. On April 2, 2019, the FDA published a discussion paper, “US FDA Artificial Intelligence and Machine Learning Discussion Paper,” describing the FDA’s foundation for a potential approach to premarket review of artificial intelligence and machine learning-driven software modifications, which the FDA later followed with an Action Plan.

If bringing an FDA-regulated AI-enabled medical device to market is your company’s plan, here are some terms you should be familiar with:

What is a 510(k)?

A 510(k) is a premarket submission made to the FDA that documents your device, its performance, and its safety. You must submit it and receive clearance before you can sell your device in the US. To receive clearance, your 510(k) will need to demonstrate that your medical device is substantially equivalent to another legally marketed device (a predicate device): it has the same intended use as the predicate and either the same technological characteristics, or different technological characteristics that raise no new questions of safety and effectiveness and are supported by performance data.
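The substantial-equivalence determination can be sketched as a simple decision rule. This is a hypothetical simplification of the FDA's published decision flowchart, not the official review logic:

```python
def substantially_equivalent(same_intended_use: bool,
                             same_tech_characteristics: bool,
                             new_safety_questions: bool,
                             performance_demonstrated: bool) -> bool:
    """Simplified sketch of the 510(k) substantial-equivalence decision."""
    # A different intended use rules out substantial equivalence outright.
    if not same_intended_use:
        return False
    # Same intended use and same technological characteristics: equivalent.
    if same_tech_characteristics:
        return True
    # Different technology may still be equivalent if it raises no new
    # questions of safety/effectiveness and performance data support it.
    return (not new_safety_questions) and performance_demonstrated

# Example: same intended use, new technology, no new safety questions,
# performance demonstrated -> substantially equivalent
print(substantially_equivalent(True, False, False, True))  # True
```

In practice the FDA's determination involves far more nuance (predicate selection, special controls, performance testing scope), but the branching above captures the shape of the decision.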

Who must submit a 510(k)?

The following types of organizations may be responsible for submitting a 510(k):

    1. Manufacturers: Device manufacturers who will be placing a device on the US market.
    2. Specification Developers: Companies that develop the specifications for a finished device that is manufactured elsewhere.
    3. Repackers or Relabelers: Required to submit a 510(k) if they significantly alter the labeling or condition of the device, including modifying manuals, changing the intended use, or deleting or adding warnings, contraindications, or sterilization status.
    4. Importers: Importers that introduce a new device to the US market, if a 510(k) has not already been submitted by the manufacturer.

Good ML Practice for Medical Device Development

Below are some guidelines for a solid 510(k) submission of an ML model.

Good Software Engineering and Security Practices Are Implemented

Model design is implemented with attention to the “fundamentals”: good software engineering practices, data quality assurance, data management, and robust cybersecurity practices. These practices include a methodical risk management and design process that can appropriately capture and communicate design, implementation, and risk management decisions and rationale, as well as ensure data authenticity and integrity.

Clinical Study Participants and Data Sets Are Representative of the Intended Patient Population

Data collection protocols should ensure that the relevant characteristics of the intended patient population (for example, in terms of age, gender, sex, race, and ethnicity), use, and measurement inputs are sufficiently represented in a sample of adequate size in the clinical study and training and test datasets, so that results can be reasonably generalized to the population of interest. This is important to manage any bias, promote appropriate and generalizable performance across the intended patient population, assess usability, and identify circumstances where the model may underperform.
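One way to operationalize this is to compare subgroup shares in your training data against the shares expected in the intended patient population. The sketch below is a minimal, hypothetical check (the attribute names, target shares, and tolerance are illustrative assumptions, not regulatory thresholds):

```python
from collections import Counter

def subgroup_shares(records, key):
    """Return each subgroup's share of the dataset for a given attribute."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def underrepresented(records, key, target_shares, tolerance=0.10):
    """Flag subgroups whose share falls more than `tolerance` below the
    share expected in the intended patient population."""
    actual = subgroup_shares(records, key)
    return [g for g, expected in target_shares.items()
            if actual.get(g, 0.0) < expected - tolerance]

# Hypothetical training records: 30% female, 70% male,
# against an intended population that is roughly 50/50
train = [{"sex": "F"}] * 30 + [{"sex": "M"}] * 70
flags = underrepresented(train, "sex", {"F": 0.5, "M": 0.5})
print(flags)  # ['F'] -- females are underrepresented relative to target
```

A real protocol would run this kind of check across every relevant characteristic (age bands, race, ethnicity, acquisition site, device model) and document the rationale for any accepted gaps.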

Training Data Sets Are Independent of Test Sets

Training and test datasets are selected and maintained to be appropriately independent of one another. All potential sources of dependence, including patient, data acquisition, and site factors, are considered and addressed to assure independence.
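The most common violation of this principle is splitting at the record level when multiple records come from the same patient, so the same patient leaks into both sets. A minimal sketch of a patient-level split, using a deterministic hash of a hypothetical `patient_id` field so assignment is stable across runs:

```python
import hashlib

def patient_split(records, test_fraction=0.2):
    """Split records into train/test at the *patient* level, so no patient
    contributes data to both sets (a common source of leakage)."""
    train, test = [], []
    for r in records:
        # Deterministic hash of the patient ID -> stable bucket assignment
        h = int(hashlib.sha256(r["patient_id"].encode()).hexdigest(), 16)
        (test if (h % 100) < test_fraction * 100 else train).append(r)
    return train, test

# Hypothetical dataset: 50 patients, 3 scans each
scans = [{"patient_id": f"P{i:03d}", "scan": s}
         for i in range(50) for s in range(3)]
train, test = patient_split(scans)
train_ids = {r["patient_id"] for r in train}
test_ids = {r["patient_id"] for r in test}
print(train_ids & test_ids)  # set() -- no patient appears in both sets
```

Patient identity is only one source of dependence; the same grouping logic applies to acquisition device and clinical site when those factors could let the model shortcut the task.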

Selected Reference Datasets Are Based Upon Best Available Methods

Accepted, best available methods for developing a reference dataset (that is, a reference standard) ensure that clinically relevant and well-characterized data are collected and that the limitations of the reference are understood. Where available, accepted reference datasets are used in model development and testing to promote and demonstrate model robustness and generalizability across the intended patient population.

Deployed Models Are Monitored for Performance and Re-training Risks Are Managed

Deployed models have the capability to be monitored in “real world” use with a focus on maintained or improved safety and performance. Additionally, when models are periodically or continually trained after deployment, there are appropriate controls in place to manage risks of overfitting, unintended bias, or degradation of the model (for example, dataset drift) that may impact the safety and performance of the model as it is used by the Human-AI team.
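A common building block for this kind of monitoring is a drift statistic comparing a model input's distribution at deployment against its training baseline. Below is a minimal sketch using the Population Stability Index (PSI); the bin count, value range, and alert thresholds are illustrative assumptions, not regulatory requirements:

```python
import math

def psi(expected, actual, bins=10, lo=0.0, hi=1.0, eps=1e-4):
    """Population Stability Index between a baseline (training) feature
    distribution and a live (deployment) one. A rule of thumb often used
    in practice: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    def shares(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        total = len(values)
        # Floor each share at eps so the log term is always defined
        return [max(c / total, eps) for c in counts]
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 1000 for i in range(1000)]            # uniform on [0, 1)
live_ok = [i / 1000 for i in range(1000)]             # same distribution
live_shifted = [0.5 + i / 2000 for i in range(1000)]  # mass pushed to [0.5, 1)

print(round(psi(baseline, live_ok), 3))       # 0.0 -- stable
print(psi(baseline, live_shifted) > 0.25)     # True -- investigate
```

A deployed system would compute this per feature on a rolling window of live inputs, alerting when a threshold is crossed so the team can assess whether performance, not just input distribution, has degraded.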

For more information, please refer to the FDA’s public list of artificial intelligence and machine learning (AI/ML)-enabled medical devices.
