
More Firms Will Hire AI Behavior Forensic Experts By 2023, Says Gartner

Sectors like finance and technology are deploying combinations of AI governance and risk management tools and techniques to manage reputation and security risks


Incidents of privacy breaches and data misuse are on the rise, and several organizations are turning to artificial intelligence (AI) and machine learning (ML) solutions to mitigate the risks. Despite rising regulatory scrutiny to combat these breaches, Gartner predicts that, by 2023, 75 percent of large organizations will hire AI behavior forensic, privacy and customer trust specialists to reduce brand and reputation risk.

Building trust with AI-ML

Bias based on race, gender, age or location, as well as bias rooted in the specific structure of the training data, has been a long-standing risk in training AI models. In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions, which can be difficult to interpret.

“Organizations need skills and tools to identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk,” said Jim Hare, research vice president at Gartner. “More and more data and analytics leaders and chief data officers (CDOs) are hiring ML forensic and ethics investigators.”

Increasingly, sectors like finance and technology are deploying combinations of AI governance and risk management tools and techniques to manage reputation and security risks. In addition, organizations such as Facebook, Google, Bank of America, MassMutual and NASA are hiring or appointing AI behavior forensic specialists. Those already in the role primarily focus on uncovering undesired bias in AI models before they are deployed.

These specialists validate models during the development phase and continue to monitor them once they are released into production, because unexpected bias can be introduced when real-world data diverges from the training data.
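Monitoring for that divergence is, in practice, a data drift check. The following is a minimal, illustrative sketch, not a method prescribed by Gartner: it compares one feature's training and production distributions using a two-sample Kolmogorov-Smirnov test from SciPy, with the feature values and threshold invented for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train_col: np.ndarray, prod_col: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag a feature whose production distribution has drifted from training.

    A two-sample Kolmogorov-Smirnov test is used; a small p-value suggests the
    production data no longer follows the training distribution.
    """
    statistic, p_value = ks_2samp(train_col, prod_col)
    return p_value < alpha

# Hypothetical example: one numeric feature seen at training time vs. in production.
rng = np.random.default_rng(0)
train_ages = rng.normal(40, 10, size=5_000)   # distribution the model was trained on
prod_ages = rng.normal(48, 12, size=5_000)    # shifted distribution seen in production
print(detect_feature_drift(train_ages, prod_ages))  # True -> drift detected, re-check for bias
```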

“The number of organizations hiring ML forensic and ethics investigators remains small today. But this number will accelerate in the next five years,” added Hare.

On one hand, consulting service providers will launch new services to audit and certify that ML models are explainable and meet specific standards before they move into production. On the other, open-source and commercial tools designed specifically to help ML investigators identify and reduce bias are emerging.

Identifying bias, mitigating risks

Some organizations have launched dedicated AI explainability tools to help their customers identify and fix bias in AI algorithms. Commercial AI and ML platform vendors are adding capabilities to automatically generate model explanations in natural language. There are also open-source technologies such as Local Interpretable Model-Agnostic Explanations (LIME).
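For readers who want to see what such an explanation looks like in practice, here is a minimal, illustrative sketch using the open-source Python lime package together with scikit-learn; the dataset and model are arbitrary choices for the example, not tools named by Gartner.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque model on a standard tabular dataset.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Build a LIME explainer over the training data.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain a single prediction: which features pushed the model toward its answer?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature:40s} {weight:+.3f}")
```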

These and other tools can help ML investigators examine the “data influence” of sensitive variables, such as age, gender or race, on other variables in a model. “They can measure how much of a correlation the variables have with each other. This would show whether they are skewing the model and its outcomes,” said Hare.
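As a rough illustration of that kind of check, the sketch below measures how strongly hypothetical sensitive attributes correlate with a model's score and compares mean scores across groups. The column names and values are invented for the example, and a strong correlation is a prompt to investigate further, not proof of unfairness.

```python
import pandas as pd

# Hypothetical scored dataset: sensitive attributes alongside a model output.
df = pd.DataFrame({
    "age":         [23, 35, 47, 52, 61, 29, 44, 58],
    "gender":      [0, 1, 0, 1, 0, 1, 0, 1],           # encoded sensitive attribute
    "model_score": [0.20, 0.55, 0.40, 0.75, 0.50, 0.65, 0.35, 0.80],
})

# Correlation of each variable with the model's output.
print(df[["age", "gender", "model_score"]].corr()["model_score"])

# A simple group comparison: difference in mean score across gender groups.
print(df.groupby("gender")["model_score"].mean())
```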

Data and analytics leaders and CDOs are not immune to issues related to lack of governance and AI missteps. “They must make ethics and governance part of AI initiatives and build a culture of responsible use, trust and transparency. Promoting diversity in AI teams, data and algorithms, as well as promoting people skills is a great start,” said Hare.

“Data and analytics leaders must also establish accountability for determining and implementing the levels of trust and transparency of data, algorithms and output for each use case. It is necessary that they include an assessment of AI explainability features when assessing analytics, business intelligence, data science and ML platforms,” he summed up.
