September 28, 2025
Over the past year, I've had the privilege of addressing healthcare leaders, technologists, and policymakers at major conferences in Toronto, New York, Austin, Atlanta, Chicago, and Doha on one of the most pressing issues of our time: the ethical deployment of artificial intelligence in healthcare. These conversations have reinforced my conviction that while AI holds unprecedented potential to revolutionize healthcare delivery, we must proceed with deliberate care to ensure these technologies serve all communities equitably.
Artificial intelligence is already transforming healthcare in remarkable ways. From diagnostic imaging that can detect cancers earlier than human radiologists to predictive algorithms that identify sepsis before clinical symptoms appear, AI's capabilities continue to expand. The FDA has approved over 880 AI-enabled medical devices as of 2024, with the majority concentrated in radiology, cardiology, and neurology.
However, my discussions with international healthcare leaders have made it clear that AI's greatest promise of improving health outcomes for all remains at risk due to systemic biases and inequitable development practices. We're at a critical juncture where the decisions we make today about AI governance will determine whether these technologies amplify existing healthcare disparities or help eliminate them.
Through my work with Ask Me Your MD and our consulting practice, I've witnessed firsthand how AI systems can perpetuate or even exacerbate existing healthcare inequities. Consider the sobering reality that many AI algorithms require patients of color to present with more severe symptoms than white patients to receive equivalent diagnoses or treatments. This isn't a technical glitch; it's a reflection of historical biases embedded in the data we use to train these systems.
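One practical way organizations can surface this kind of disparity is a simple subgroup error audit of a deployed model's predictions. The sketch below, in Python, compares false-negative rates (missed diagnoses) across demographic groups; the column names and toy data are purely illustrative, not drawn from any real system, and this is a minimal starting point rather than a complete fairness evaluation.

import pandas as pd

def false_negative_rate(df: pd.DataFrame) -> float:
    """Share of truly positive cases the model failed to flag."""
    positives = df[df["true_label"] == 1]
    if positives.empty:
        return float("nan")
    return (positives["predicted_label"] == 0).mean()

def audit_by_group(df: pd.DataFrame, group_col: str = "race_ethnicity") -> pd.Series:
    """False-negative rate per demographic group; large gaps warrant investigation."""
    return df.groupby(group_col).apply(false_negative_rate)

# Example usage with made-up data (hypothetical column names):
toy = pd.DataFrame({
    "race_ethnicity": ["A", "A", "B", "B", "B", "A"],
    "true_label":      [1,   1,   1,   1,   0,   0],
    "predicted_label": [1,   0,   0,   0,   0,   0],
})
print(audit_by_group(toy))

A gap in false-negative rates between groups is exactly the pattern described above: some patients must be "sicker" before the model flags them. The same audit can be repeated for other error types and other demographic dimensions.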
The sources of bias in healthcare AI are multifaceted and occur throughout the development lifecycle:
Based on my research and experience speaking at healthcare conferences internationally, I recommend a comprehensive approach to ethical AI in healthcare that addresses five critical domains:
Community-Centered Design and Development
Healthcare AI must be developed with communities, not just for them. This means:
Inclusive Data Governance and Management
The foundation of fair AI lies in representative, high-quality data:
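As a concrete illustration, one basic data-governance check is to compare the demographic mix of a training cohort against a reference population before any model is built. The following is a minimal sketch; the group labels and reference shares are placeholders, not real census figures, and any real check should use locally appropriate reference data.

import pandas as pd

def representation_gap(cohort: pd.Series, reference: dict) -> pd.DataFrame:
    """Compare cohort group shares to reference shares and report the gaps."""
    cohort_share = cohort.value_counts(normalize=True)
    rows = []
    for group, ref_share in reference.items():
        share = float(cohort_share.get(group, 0.0))
        rows.append({
            "group": group,
            "cohort_share": round(share, 3),
            "reference_share": ref_share,
            "gap": round(share - ref_share, 3),
        })
    return pd.DataFrame(rows)

# Example: a cohort skewed toward one group relative to a placeholder reference mix.
cohort = pd.Series(["Group A"] * 70 + ["Group B"] * 20 + ["Group C"] * 10)
reference = {"Group A": 0.55, "Group B": 0.30, "Group C": 0.15}  # illustrative values only
print(representation_gap(cohort, reference))

Large negative gaps flag groups the data underrepresents, which is where model performance is most likely to degrade and where targeted data collection or reweighting is needed.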
Transparent and Accountable Algorithm Development
AI systems in healthcare must be explainable and auditable:
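Auditability starts with keeping a reviewable record of every prediction a deployed model makes. The sketch below shows one hypothetical way to log predictions as JSON lines with a model version and a hash of the inputs (so records can be matched to cases without storing protected health information in the log); the field names, file format, and example values are assumptions, not a prescribed standard.

import hashlib
import json
from datetime import datetime, timezone

def log_prediction(model_name: str, model_version: str,
                   features: dict, prediction: float,
                   log_path: str = "prediction_audit.log") -> dict:
    """Append one auditable record per prediction (JSON lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        # Hash the inputs so a record can be traced back to a case without storing PHI here.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example usage with made-up values:
log_prediction("sepsis_risk", "1.4.2", {"hr": 112, "wbc": 14.1}, prediction=0.82)

Records like these make it possible to answer, after the fact, which model version produced a given recommendation and how its error rates broke down across patient groups.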
Robust Governance and Regulatory Frameworks
We need governance structures that promote innovation while protecting patients:
Education and Trust Building
The success of ethical AI depends on informed stakeholders:
The Global Imperative
My conversations with healthcare leaders across the globe have highlighted that ethical AI in healthcare isn't just an American challenge; it's a global imperative. Countries with emerging healthcare infrastructures have the opportunity to build ethical AI principles in from the ground up, while established healthcare systems must work to retrofit existing systems with fairness considerations.
The World Health Organization's recent guidance on AI ethics provides a foundation, but implementation requires local adaptation and international cooperation. We need to share best practices across borders while respecting cultural differences in healthcare delivery and patient values.
The Path Forward
As we stand at this crossroads, the healthcare community has a choice. We can allow AI to perpetuate the inequities that have long plagued our healthcare systems, or we can use this moment to build something better: AI systems that actively promote health equity and serve all communities fairly.
This will require sustained commitment from multiple stakeholders:
The potential benefits are too significant to ignore:
AI that truly serves all communities could help address physician shortages in underserved areas, make high-quality diagnostics available globally, and personalize treatments in ways that account for genetic, cultural, and social diversity.
A Call For Action
The future of healthcare AI is not predetermined. The choices we make today about how we develop, deploy, and govern these technologies will determine whether AI becomes a tool for equity or inequality. Based on the conversations I've had with global healthcare leaders over the past several months, I'm cautiously optimistic that we can choose the path toward equity, but only if we act with intentionality, transparency, and unwavering commitment to the communities we serve.
The technology exists to create fair, transparent, and beneficial AI systems. What we need now is the collective will to demand nothing less than ethical AI that serves everyone equally. Our patients and our future depend on getting this right.
Christopher Kunney is Managing Partner at IOTECH CONSULTING and Chief Technology Officer at Ask Me Your MD. He frequently speaks on healthcare technology and digital equity at international conferences and consults with healthcare organizations on ethical AI implementation.
Reach out to our seasoned IT leaders for strategic healthcare technology solutions. We're here to support your digital transformation. Connect with us today!