
The Ethical Dilemma of Overestimating AI Maturity


While more companies are realizing the value of artificial intelligence (AI) and are leveraging the technology to improve efficiency and bring down costs, ethical concerns continue to rise as AI takes on a bigger decision-making role across industries. According to the O’Reilly 2021 AI Adoption in the Enterprise report, fewer than half of the companies surveyed said they have thought through the consequences of their AI products. Only organizations with a mature AI practice bother to check the fairness, bias, and ethics of their AI platforms. In that sense, one could say that an organization’s attention to AI ethics is directly proportional to its AI maturity.

The problem, however, is that companies often overestimate their level of maturity when it comes to responsible AI implementation. A survey conducted by BCG GAMMA found that 55% of respondents overestimated the maturity of their AI program. While 26% of companies say they have hit scale in their AI deployment, only 12% include a responsible AI program as part of that work.

This clearly shows that organizations are overly optimistic about the maturity of their AI implementation. As Steven Mills, BCG GAMMA’s chief ethics officer and a co-author of the survey, puts it, “While many organizations are making progress, it’s clear the depth and breadth of most efforts fall behind what is needed to truly ensure responsible AI, creating a major ethical dilemma in AI.”

The O’Reilly report also notes that AI implementation won’t hit maturity until ethics, safety, privacy, and security are primary rather than secondary concerns.

To support AI maturity, teams can benefit from reviewing case studies of how other organizations have managed AI implementation, believes Rachel Roumeliotis, VP of Content Strategy at O’Reilly Media.

For some companies, the ethical dimension of AI implementation hasn’t been fully thought through because they have yet to deploy at scale and reach full maturity, explains Roumeliotis. She adds that bias can seep into AI products at multiple points in the creation process, from the data fueling decisions to algorithmic training and the final review stage.
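To make that point concrete, the sketch below (an illustration, not something prescribed in the report; the dataset, column names, and group labels are assumptions) shows one simple check a review stage might run: comparing a model’s approval rates across demographic groups to surface a demographic parity gap.

```python
# Minimal sketch (hypothetical data, not from the article): a demographic
# parity check on model decisions, one place bias can be caught before
# the final review stage.
import pandas as pd

# Hypothetical scored data: each row is one applicant, with a protected
# attribute ("group") and the model's binary decision ("approved").
scored = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group; a large gap suggests the model, or the data it
# learned from, treats the groups differently and warrants closer review.
rates = scored.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
```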

Viral Thakker, Partner, Deloitte India, says, “AI ethics deals with managing ethical complexities in an age of vast amounts of data and extensive automation. The big issues driving this are privacy considerations, the lack of transparency of ‘black box’ AI models, bias and discrimination that may be embedded in the data from which AI algorithms learn, as well as a lack of governance and accountability.”

Worldwide business spending on AI is expected to hit $50 billion this year and $110 billion annually by 2024, even after the global economic slump caused by the COVID-19 pandemic, according to a forecast by research firm IDC. Retail, healthcare, and banking are expected to spend the most, the analyst firm said.

For all the good that AI can bring, policymakers and responsible tech companies must recognize and mitigate its potential unintended, harmful effects.

Praveen Kumar, Vice President, Digital & Innovations, JK Tech, states, “Organizations should start thinking of educating and upskilling employees, as AI will take on a great deal of work alongside the human workforce. At the same time, it is of utmost importance to carefully and proactively consider the governance mechanisms necessary to ensure ethical considerations in the deployment of responsible AI tools.”

Several organizations already offer programs to help employees put ethics at the core of their workflows. These programs are designed to empower the entire organization to think critically about every step of building AI solutions and, perhaps most importantly, to continue advancing AI that is safe and inclusive.

A clear sign of AI maturity is growing the company’s knowledge of the subject and nurturing an organization-wide ‘ethics-first’ approach, much like the established ‘security-first’ philosophy.

As we move forward, our reliance on AI will deepen, which will inevitably raise ethical issues, especially in industries where personal and business data is at stake. Companies should therefore start thinking about how they will retrain and educate employees with regard to AI’s ethical implications.

Shreeranganath Kulkarni, Chief Delivery Officer, Birlasoft, suggests that for enterprise AI to reach maturity, organizations should work towards an AI playbook built on the pillars of AI strategy, data, talent, technology, execution, and culture.

Organizations that can adapt, upgrade themselves, and emerge stronger will be the ones to benefit from disruption. In the process, they must carefully and proactively consider the governance mechanisms needed to ensure ethical considerations in the deployment of AI tools. Building trust in AI solutions and tools is vital so that businesses and individuals can benefit from their use.

 


Sohini Bagchi
Sohini Bagchi is Editor at CXOToday, a published author and a storyteller. She can be reached at [email protected]