The Case for Regulating AI and Algorithms in Healthcare

In the ever-evolving world of healthcare, artificial intelligence (AI) is emerging as a transformative force, offering tools that assist physicians in diagnosing conditions, predicting patient outcomes, and making critical decisions. Yet, alongside the promise of improved care, researchers from MIT, Equality AI, and Boston University are urging greater oversight, not just of AI systems but also of traditional clinical algorithms.

In a recent commentary published in the New England Journal of Medicine AI, the team highlights a critical regulatory gap: while AI-enabled devices (nearly 1,000 of which have been approved by the FDA) face growing scrutiny, most clinical decision-support tools, particularly non-AI clinical risk scores, lack comparable oversight.

Why Oversight Matters

Clinical tools, whether AI-powered or not, heavily influence medical decision-making. These tools analyze patient data to produce risk scores, which physicians use to determine treatment plans, prioritize care, or schedule follow-ups. However, biases in the datasets or variables underpinning these tools can lead to inequitable outcomes, such as misdiagnosing certain populations or providing suboptimal care to marginalized groups.
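To make the mechanism concrete, here is a minimal sketch of how a non-AI clinical risk score typically works: a weighted checklist of patient variables summed into a score, with a threshold that drives a care decision. The variables, weights, and threshold below are entirely hypothetical, not any real clinical score; the point is that whoever chooses them (and the cohort data behind them) determines which patients cross the threshold.

```python
def risk_score(age: int, systolic_bp: int, has_diabetes: bool) -> int:
    """Toy risk score: a weighted sum of hand-picked patient variables.

    Real scores are calibrated on cohort data selected by experts;
    if that data underrepresents a population, the weights (and the
    resulting care decisions) can be systematically skewed.
    """
    score = 0
    score += 2 if age >= 65 else 0          # hypothetical weight for age
    score += 1 if systolic_bp >= 140 else 0  # hypothetical BP cutoff
    score += 1 if has_diabetes else 0
    return score


def triage(score: int, threshold: int = 2) -> str:
    """Map a score to a care decision; the threshold itself is a
    design choice that can encode bias."""
    return "prioritize follow-up" if score >= threshold else "routine care"


print(triage(risk_score(age=70, systolic_bp=150, has_diabetes=False)))
```

A shift of a single weight or threshold changes which patients get prioritized, which is why the commentary argues these simple tools deserve the same equity-driven evaluation as AI systems.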

Isaac Kohane, editor-in-chief of NEJM AI, notes that traditional algorithms—though simpler than AI systems—are not immune to bias. “Even these scores are only as good as the datasets used to ‘train’ them and as the variables that experts have chosen to select,” he explains.

A Step Forward: New Rules for Equity

The Biden administration has made strides in addressing these challenges. Earlier this year, the Department of Health and Human Services (HHS) introduced a rule under the Affordable Care Act prohibiting discrimination in patient care decision-support tools. This marks a crucial step toward health equity, as it applies to both AI and non-AI tools.

Marzyeh Ghassemi of MIT applauds the rule but emphasizes the need for ongoing improvements. She calls for equity-driven evaluations not just for AI, but for all clinical algorithms currently used in hospitals and clinics nationwide.

Challenges Ahead

Despite progress, challenges remain. The widespread adoption of clinical decision-support tools in electronic medical records complicates efforts to regulate them. Additionally, political opposition to regulation and healthcare policies such as the ACA could hinder further advancements.

Maia Hightower, CEO of Equality AI, stresses that oversight is necessary to ensure transparency and prevent discrimination. Without it, both AI systems and traditional algorithms risk perpetuating biases, potentially widening health disparities.

Looking Ahead

To tackle these issues, the Abdul Latif Jameel Clinic for Machine Learning in Health is hosting a regulatory conference in March 2025. This event aims to foster discussions among faculty, regulators, and industry leaders on crafting robust guidelines for AI and algorithm use in healthcare.

As the healthcare industry embraces digital tools, the balance between innovation and accountability becomes increasingly vital. Ensuring that all technologies—AI-driven or otherwise—meet high standards of fairness and accuracy is essential for building a more equitable healthcare system.

At Byte Genius, we’re committed to exploring the intersection of AI and ethics, driving innovation that works for everyone. Stay tuned for more updates on advancements in AI and healthcare.