Artificial intelligence (AI) holds great promise to help address the healthcare challenges in India by helping clinicians make accurate diagnostic and treatment decisions. For this promise to become a reality, clinicians need to trust these AI algorithms to produce unbiased results. These algorithms are developed by training them on relevant past data. Their propensity to produce unbiased results depends on the training data, the development process and culture, and the diversity of the team involved. A biased output from an algorithm can result in discrimination that harms minorities, women, and economically disadvantaged people, and can impact patient safety.
Biased decisions in medicine can have a serious adverse impact on clinical outcomes. Diagnostic errors are associated with 6–17% of adverse events in hospitals, and cognitive bias accounts for about 70% of these diagnostic errors. Such bias-driven errors get embedded in historical patient data, which is later used to train AI algorithms. The result is algorithms that learn to perpetuate existing discrimination.
For example, heart attacks are usually diagnosed by doctors based on symptoms experienced more commonly by men. An AI algorithm built to help doctors detect cardiac conditions would need to be trained on relevant historical patient data. Given the inherent bias towards men in that training data, the algorithm could learn to weight men's symptoms more heavily than women's, perpetuating the problem of underdiagnosing women.
Another example is an AI-based tool built to help hospitals identify patients who are likely to miss appointments. Hospitals used it to double-book potential no-shows to avoid losing income. Because one of the features used to predict a no-show was previously missed appointments, the tool learned to flag economically disadvantaged people as likely no-shows. However, the actual reasons for missing appointments were issues related to transport, childcare, and lost wages. When these patients did arrive for appointments, clinicians spent less time with them because of the double-booking, resulting in inadequate care.
Some disease patterns and clinical pathways differ in India when compared to western countries. For example, the prevalence of cardiovascular disease in India is much higher than in other middle- and high-income countries, affecting Indians much earlier, in their midlife years. Indian women, for example, are diagnosed with more aggressive forms of breast cancer at a younger age. There is also a higher prevalence of type-2 diabetes in India.
Within the country too, there are significant differences in lifestyle, literacy levels, economic disparity, ethnicity, religion, culture, and epidemiological transitions across the various states.
These complex factors influence some of the biased decisions made by clinicians. Consequently, this bias is reflected in the patient data used for training AI algorithms, further perpetuating existing discrimination. This poses a unique challenge to deploying AI in healthcare for India.
All the stakeholders including the government, health-tech companies, healthcare providers, and startups have a role to play in ensuring AI algorithms produce unbiased results in an Indian context.
The companies developing these AI algorithms need to be aware of the potential risk to patient safety if the algorithms are not adapted for India. As elucidated below, there are a few strategies that health-tech companies and startups can explore to reduce bias in AI algorithms.
Define and narrow the business problem that is being solved:
- This will ensure that the model performs well for the specific reason it is built for.
Deploy a framework to gather, annotate and understand biases in the training data:
- The framework for data gathering should cover the diversity of the population and account for multiple opinions and valid disagreements on the data from clinicians across the country.
- Train/retrain the algorithms using local data
- Understand the training data and the associated biases.
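To make the "understand the training data and the associated biases" step concrete, one simple audit is to compare a model's sensitivity (true-positive rate) across demographic subgroups before deployment. A large gap, such as the cardiac example above where women are underdiagnosed, is a direct signal of biased data. The sketch below is a minimal illustration; the group names and records are hypothetical.

```python
from collections import defaultdict

def subgroup_true_positive_rates(records):
    """Compute the true-positive rate (sensitivity) per subgroup.

    records: iterable of (group, actual, predicted) tuples, where
    actual/predicted are 1 (condition present/detected) or 0.
    Returns {group: TPR}. A large gap between groups is a signal
    of bias in the training data or the model.
    """
    positives = defaultdict(int)   # cases where the condition is present
    detected = defaultdict(int)    # present AND correctly predicted
    for group, actual, predicted in records:
        if actual == 1:
            positives[group] += 1
            if predicted == 1:
                detected[group] += 1
    return {g: detected[g] / positives[g] for g in positives}

# Hypothetical audit data: (group, actual diagnosis, model prediction)
records = [
    ("men",   1, 1), ("men",   1, 1), ("men",   1, 1), ("men",   1, 0),
    ("women", 1, 1), ("women", 1, 0), ("women", 1, 0), ("women", 1, 0),
]
rates = subgroup_true_positive_rates(records)
# rates: men detected in 3 of 4 cases, women in only 1 of 4
```

In practice this audit would run over held-out clinical data and more subgroup dimensions (age, region, socioeconomic status), but even a simple disaggregated report like this makes a hidden bias visible before deployment.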
Ensure that the development and clinical teams are from diverse backgrounds:
- The development team and clinicians who are annotating the data should comprise people from diverse backgrounds (culture, gender, age, experience, etc.) from across the country.
- Impart training to the teams to help them understand their personal biases.
- Build a culture of trust, integrity, teamwork, and ethical behavior within the organization.
Ensure internal processes support co-creation, continuous feedback, and improvement:
- Co-creating the algorithm with clinicians will help reduce any inherent bias in the model.
- Have a framework in place to identify features in the model that perpetuate bias.
- Ensure continuous feedback on the usage and performance of the algorithm.
- Improve the algorithm with the feedback received from clinicians, auditors, regulators, internal reviewers, and new research findings on the topic.
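One concrete check within a "framework to identify features that perpetuate bias" is to measure how strongly each candidate feature correlates with a sensitive attribute it might be standing in for, as "previously missed appointments" did for economic status in the no-show example above. The sketch below uses a plain Pearson correlation on hypothetical data; a real framework would add robust statistical tests and clinical review.

```python
def proxy_correlation(feature_values, sensitive_values):
    """Pearson correlation between a model feature and a sensitive
    attribute. A strong correlation flags the feature as a possible
    proxy (e.g. missed appointments encoding low income)."""
    n = len(feature_values)
    mx = sum(feature_values) / n
    my = sum(sensitive_values) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(feature_values, sensitive_values))
    var_x = sum((x - mx) ** 2 for x in feature_values)
    var_y = sum((y - my) ** 2 for y in sensitive_values)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical patient records: count of missed appointments,
# and a (hypothetical) low-income indicator
missed_appointments = [3, 2, 3, 0, 1, 0]
low_income_flag     = [1, 1, 1, 0, 0, 0]
r = proxy_correlation(missed_appointments, low_income_flag)
# r close to 1.0 → the feature is acting as a proxy for economic status
```

A feature flagged this way is not automatically dropped; the point is to force an explicit decision, with clinician input, about whether using it would perpetuate discrimination.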
Have in place an explainable and interpretable AI visualization framework:
- This will help the developers and end-users get a better understanding of how and why certain decisions are made by the algorithms.
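As a simple illustration of interpretability: when the underlying model is linear (or a linear approximation of a more complex model is used), a prediction can be decomposed into per-feature contributions that a clinician can inspect, and which a visualization framework can then chart. The weights and patient values below are purely hypothetical.

```python
def explain_linear_prediction(weights, features, bias=0.0):
    """Break a linear risk score into per-feature contributions so an
    end-user can see *why* the score is high.

    weights/features: dicts keyed by feature name.
    Returns (score, contributions ranked by absolute impact).
    """
    contributions = {name: weights[name] * features[name]
                     for name in weights}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical cardiac-risk model with illustrative weights
weights  = {"age": 0.03, "systolic_bp": 0.02, "chest_pain": 1.5}
features = {"age": 60, "systolic_bp": 140, "chest_pain": 1}
score, ranked = explain_linear_prediction(weights, features)
# ranked[0][0] names the feature that drove the score the most,
# which is exactly what a clinician needs to sanity-check a decision
```

Richer tools (partial-dependence plots, SHAP-style attributions) follow the same idea: attach to every algorithmic decision a human-readable account of which inputs mattered.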
Clinicians and hospital administrators need to take cognizance of the above-mentioned aspects while deploying AI within their organizations. Hospitals need a clear AI strategy that includes bias-awareness programs, a decision-making mechanism that accounts for multiple opinions and disagreements between clinicians, and a feedback mechanism to reduce clinical errors. Hospitals should also plan a pilot phase to assess the performance of an algorithm before deployment.
The government too has a key role to play. A regulatory framework for AI in healthcare needs to be put in place at the earliest. This would need to include guidance on data strategy (sourcing, curation, annotation, etc.), visualization frameworks, processes, team composition, training, and more. It would benefit healthcare AI startups if the government promoted an open-source data lake with curated data, beginning with chronic and viral diseases and then expanding to others.
AI adoption is key to addressing the healthcare challenges in India. For broader adoption, clinicians need to trust that the results of AI algorithms are accurate and unbiased. The medical fraternity has a big role to play in reducing diagnostic errors due to bias. Given the diversity and epidemiological differences across the country, AI-perpetuated discrimination can become a serious issue in India, and can adversely impact patient safety if not addressed. AI algorithms need to account for local biases and disease patterns. Health-tech companies, startups, healthcare providers, and the government need a clear strategy and must work together to address this.
(Srinivas Prasad is Founder and CEO of Neusights and the views expressed in this article are his own)