Ethical AI in 2022: Why It’s Time to Confront Biases
Ethical AI can no longer be an afterthought for the enterprise; it must be built into the fabric of AI, experts believe.
As Artificial Intelligence (AI) begins to play a much larger role in our daily lives, streamlining our work, resolving customer issues, talking to us as companion bots, driving autonomous cars, and helping employees make faster, more informed decisions, ethical AI can no longer be an afterthought for the enterprise; it must be built into the fabric of AI.
“AI ethics refers to the organizational constructs that reaffirm commitment to corporate values, policies, codes of ethics, and guiding principles in the age of AI. These constructs set guidelines and governance for AI throughout the organization, from research and design, to build and train, to change and operate,” says Prashanth Kaddi, Partner, Deloitte India.
Issues around ethical AI have garnered more attention over the past several years, with tech giants from Facebook and Google to Microsoft and IBM having established and published principles to demonstrate to stakeholders (customers, employees, and investors) that they understand the importance of ethical or responsible AI.
The pandemic has further proved that businesses are betting big on AI, with analyst firms forecasting AI investments to grow from $27.23 billion in 2019 to $266.92 billion by 2027. And as investments increase, the need to give the technology a “moral compass” has become more urgent.
However, there is growing evidence that AI-based applications can lead to increased discrimination based on gender, class, caste, ethnicity, religion, and other identity-forming characteristics. As Prof. Amit Prakash, Associate Professor and Coordinator at IIIT-Bangalore observes, “This can come through an inadequate attention to the processes associated with collection of digital data used to train the AI models as well as through algorithmic biases, which get introduced when design teams are not sensitive to, or even aware of, the diversity in the implementation context.”
He believes that, more than ushering in any transformation, such AI applications reinforce the status quo in both the business and societal landscapes. “It should, therefore, be a strategic imperative for AI technology designers and policy makers to engage more closely with the various dimensions of ethics.”
Eliminating the biases in AI
Experts outline that ethical AI should be based on principles such as accountability, equity, fairness, human agency, inclusiveness, transparency, security, and privacy. They believe organizational leaders need to ensure their AI teams have expertise in these equity aspects when approving staffing decisions.
Recent articles note that racism built into AI-based risk-assessment algorithms used by the healthcare industry is responsible for a 46% failure rate in identifying at-risk patients of color. The IT industry, too, is unconsciously biased against women and people of color, though it has made great strides and is actively working to change that.
According to Shubhangi Vashisth, Senior Principal Analyst (AI) at Gartner, there is a need to recognize the challenges in AI systems that deliver biased results. She cites the example of a hiring tool that reinforces racial discrimination or entrenches prejudices against certain communities. Oftentimes, users and developers are not aware of how the system reaches its output. “This opacity increases the bias in datasets and decision systems,” believes Vashisth.
Taking fairness as a parameter, Vashisth explains that AI systems should be inclusive and accessible, and should not discriminate unfairly against individuals, communities, or groups; they should provide equitable access and treatment to everyone. Bringing more diversity into the team can, however, mitigate the bias often introduced by algorithms, according to Vashisth.
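Fairness principles like these can be made measurable. A minimal sketch of one common check, the disparate-impact ratio between two groups' selection rates, is shown below; the hiring data and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not figures from the article.

```python
# A minimal fairness check: the disparate-impact ratio, i.e. the
# selection rate of one group divided by that of another. Values
# well below ~0.8 (the "four-fifths rule") often flag concern.
# All data here is hypothetical, for illustration only.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of group_a's selection rate to group_b's."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical hiring decisions (1 = offer, 0 = reject) for two groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # selection rate 0.3
group_b = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]  # selection rate 0.6

ratio = disparate_impact(group_a, group_b)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
```

Checks like this are deliberately simple; production fairness tooling typically evaluates several such metrics across many group definitions at once.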
Transparency and explainability are also key to developing ethical AI applications. Unconscious biases must be prevented or removed, and human review processes must occur regularly. As Siddhesh Naik, Data, AI & Automation Sales Leader, IBM Technology Sales, India / South Asia, explains, “Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations. Organizations who want to employ AI to unlock new value and insights, to accelerate discovery or to gain competitive edge have a fundamental responsibility to foster trust in the technology.”
IDC’s AI StrategiesView 2021 survey shows that 83% of Indian organizations cite trust as highly challenging for implementing AI. Rishu Sharma, Associate Research Director, IDC, highlights that a myriad of challenges stem from lack of trust: data sharing, breaches, misuse, and fraud, among others. These trust issues can be manifold and magnify with scale, leading to revenue loss, loss of customer privacy, and damage to brand reputation.
“Enterprises therefore must plan for risk, compliance, privacy, and business ethics to enhance the transparency and explainability of their AI-enabled decisions. IDC predicts that in India, by 2023, over 40% of consumer-focused AI decisioning systems in finance, healthcare, government, and other regulated sectors will include provisions to explain their analysis and decisions. And building a governance team that comprises technology and business stakeholders, and that sets and implements controls over algorithmic explainability for decision making, will be pivotal,” he says.
“AI models can be complex, which limits transparency, explainability, and unbiased outcomes. This undermines the trust of companies and customers, hence the need for a clear vision and governance structure for AI in enterprises. Being able to explain AI-based decisions and ensuring a fair, unbiased decision-making algorithm are critical to an organization’s market reputation and trust, externally as well as internally,” Kaddi says.
“Humans delegate a lot of decisions to AI systems. And this will only increase. For instance, if you have used Gmail’s smart compose feature, chances are you have accepted its suggestions more often than not. For organizations using AI in their businesses or those helping clients adopt AI – good intentions are no longer enough. ‘Do no harm’ is no longer enough,” Satish Viswanathan, Head of Social Change, Thoughtworks, says.
Eliminating such biases, whether conscious or not, is vital for AI to be trusted and accepted in society. As Vashisth says, “If individuals and teams are cognizant of the existence of bias, then they have the necessary tools at the data, algorithm, and human levels to build a more responsible AI.”
Ethical AI trends
Discussing ethical AI trends for 2022, Rahul Joshi, CTO, CSS Corp, notes that there will be high demand for responsible AI solutions in the market. Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices.
“If we look at the industry today, most tech or non-tech organizations generate consumer benefits and business value by leveraging 70% to 80% AI-led operations and creating AI-infused products and applications,” says Joshi, adding that while AI can be a helpful tool to increase productivity and reduce the need for people to perform repetitive tasks, it can also give rise to a host of significant unintended (or maliciously intended) consequences for individuals, organizations, and society.
He believes there are many cases where algorithms cause problems by replicating the (often unconscious) biases of the developers and programmers who built them. So, it’s crucial to ensure that comprehensive datasets are used. Many organizations already have bias bounties in place, and this trend will gain momentum in the coming years.
“To ensure that no company is marred by a data or AI ethics outrage that can impact its reputation and revenue, it’s imperative to build ethical and responsible AI,” says Joshi.
According to Viswanathan, “Organizations should proactively examine the underlying principles that their AI systems adhere to. Leaders pursuing responsible technology should embed ethics and responsible use of AI as a basic tenet in their technology strategies. We also expect organizations to work with ethical frameworks and build a movement where explainability needs to be a first principle when building AI systems – if you can’t explain it, don’t use it.”
Palanivel Saravanan, Cloud Engineering Leader, Oracle India, suggests, “Companies need a cloud ecosystem, a data mesh, a fabric that will grow with it and is optimized for it. Companies need enough clean, consistent data – remember the old adage: trash in = trash out.”
“To that end, despite the ongoing high level of failures, AI will remain one of the top workloads driving infrastructure decisions through 2022 and beyond,” he says.
“More organizations will need to adopt standardized frameworks, and responsible AI engineering practices that will instill a sense of fairness in their data management. Furthermore, organizations should also put in place a formal governance process to mitigate any ethical and compliance risks that may emerge in the future,” agrees Sangram Kadam, Vice President and Business Head (APAC & META), Birlasoft.
When implemented in the right way, AI ethics can help an organization create competitive differentiation by instilling trust in AI applications. Such organizations will also be able to attract and establish greater customer confidence, which will have a larger long-term impact on driving accelerated business results, says Kadam.
As AI plays a larger role in our lives, it is crucial to build a framework for ethical and transparent AI, and business leaders must make ethics a priority in their AI endeavors for this to happen.
Experts also note that, regardless of guidelines and frameworks for ethical AI, the need of the hour is also to use top-quality data for training AI. Poor, incomplete, skewed, and biased data is often the root cause of AI bias. Hence, unconscious biases must be eliminated, and diverse voices need to play a role in the discussion and development of AI to ensure that new biases are not introduced.
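A first step toward catching skewed training data is a representation audit: compare each group's share of the dataset against a reference population share. The sketch below does this with purely hypothetical group names and figures.

```python
# A minimal training-data representation audit. For each group,
# report its share of the dataset minus its assumed share of the
# reference population; large negative gaps indicate the group is
# under-represented. All data here is hypothetical.

from collections import Counter

def representation_gaps(records, key, reference):
    """Return dataset share minus reference share for each group."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - ref for g, ref in reference.items()}

# Hypothetical training set: 20 female records, 80 male records.
records = [{"gender": "female"}] * 20 + [{"gender": "male"}] * 80
reference = {"female": 0.5, "male": 0.5}  # assumed population split

gaps = representation_gaps(records, "gender", reference)
for group, gap in gaps.items():
    print(f"{group}: {gap:+.2f}")  # female: -0.30, male: +0.30
```

An audit like this flags the skew before a model is trained, which is far cheaper than discovering biased predictions in production.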
If we reduce data bias and launch the technology on an ethical backbone, we can create new, interesting and trustworthy futures for AI — one where AI does not confuse us or harm businesses or cause social damage.