A review of the paper, for research, educational, and recommendation purposes.
Recommendation Nº 20042020p1
- SaMD (Software as a Medical Device).
- Why the product worldview is inadequate for AI/ML-based SaMD.
- Adopt a system approach.
- Transitioning from a product to a system approach.
- “Locked” versus “Adaptive” algorithms.
- “However, the emergence of AI/ML in medicine also creates challenges”.
- “Which medical AI/ML-based products should be reviewed by regulators?”.
- “How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data?”.
- “How to ensure the improvement of AI/ML-based SaMD’s performance in real-time while safeguarding their safety and effectiveness?”.
🔘 Paper page: nature.com/articles/s41746-020-0262-2
(Unofficial biographies, for informational purposes only)
Sara Gerke (Research Fellow, Medicine, Artificial Intelligence, and Law)
I. Glenn Cohen (James A. Attwood and Leslie Williams Professor of Law)
Boris Babic (Assistant Professor of Decision Sciences)
Theodoros Evgeniou (Professor of Decision Sciences and Technology Management)
Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges, to which regulators must pay attention. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing for AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently proposed a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective—from a product view to a system view—is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA, which are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.
AI/ML-based SaMD pose new safety challenges for regulators. They face a difficult choice: either largely ignore systemic and human-factor issues with each approval and subsequent update, or require the maker to conduct significant organizational and human-factors validation testing with each update, which increases cost and time and may, in turn, chill the maker's desire to pursue potentially very beneficial innovations or updates. Striking the right balance is a challenge that may take time to resolve. However, ignoring all systemic aspects of AI/ML-based SaMD, such as those we outlined, may not be an option.
Please thank the authors and publisher.
Thank you very much for this work to @gerke_sara et al, via @States_AI_IA #SaMD #challenges #ml #machinelearning #fda #regulation #openscience #openaccess #ai #artificialintelligence #ia #thebibleai #paperTweet