McKinsey research authors have devised three core risk mitigation strategies for artificial intelligence.
Artificial Intelligence (AI) can do wonders in the scientific and business worlds. Yet organizations are extremely wary when it comes to actual implementation of the technology. That's because AI is a relatively new force in business, and few leaders have had the opportunity to hone their skills in this form of advanced analytics. AI risks can lead to disastrous repercussions, including the loss of human life, and can create significant challenges for organizations, from reputational damage and revenue losses to diminished public trust.
Researchers at McKinsey have recommended a number of ways companies can mitigate risk when applying AI. Before leveraging the technology, however, it is vital to understand the risks and the drivers behind them.
In the report Confronting the risks of artificial intelligence, McKinsey Global Institute authors suggest that by 2030, AI could deliver an additional US$13 trillion per year in global economic output. While highlighting that AI can improve lives and add business value in many ways, they also caution against the adverse effects of the technology.
What gives rise to AI risks
McKinsey research describes five main factors that can give rise to AI risks. The first three — data difficulties, technology troubles, and security snags — are related to what might be termed the enablers of AI. The final two are linked with the algorithms and human-machine interactions that are central to the operation of the AI itself.
Many organizations find themselves inundated by the growing volume of unstructured data collected from the web, social media, mobile devices, sensors, and the Internet of Things. Data difficulties can arise when sensitive information hidden among anonymized data is revealed. This can happen in a medical context: a patient's name might be redacted from one section of a medical record used by an AI system, yet still be present in the doctor's notes section of the same record. In such situations, business leaders need to stay in line with privacy rules, such as the European Union's General Data Protection Regulation (GDPR), and otherwise manage reputational risk.
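The medical-record scenario above can be made concrete with a minimal sketch. The function name, the record fields, and the exact-match approach are all illustrative assumptions; real de-identification pipelines rely on named-entity recognition models and rule sets rather than simple string matching.

```python
import re

def find_leaked_identifiers(redacted_values, free_text):
    """Return supposedly redacted values that still appear in free text.

    A simplified illustration only: production de-identification uses
    NER models and identifier rule sets, not exact string matching.
    """
    leaks = []
    for value in redacted_values:
        if re.search(re.escape(value), free_text, re.IGNORECASE):
            leaks.append(value)
    return leaks

# Hypothetical patient record: the name is redacted in structured
# fields but survives in the doctor's free-text notes.
record = {
    "patient_name": "[REDACTED]",
    "notes": "Discussed results with Jane Doe; advised follow-up in 2 weeks.",
}
print(find_leaked_identifiers(["Jane Doe"], record["notes"]))  # ['Jane Doe']
```

A check like this, run before records feed an AI system, is one small way to catch the re-identification risk the report describes.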
Another risk to organizations is technology and process issues across the entire operating landscape that negatively affect the performance of AI systems. This happens when data inputs fail or are compromised, causing AI systems to produce erroneous outputs. For example, one major financial institution ran into trouble after its compliance software was unable to spot trading issues because the data feeds no longer included all customer trades.
Security is an emerging risk in which cyber-criminals exploit seemingly non-sensitive marketing, health, and financial data that companies collect to fuel AI systems. McKinsey cautions that when security precautions are insufficient, it’s possible to stitch these threads together to create false identities. Although target companies — that may otherwise be highly effective at safeguarding personally identifiable information — are unwitting accomplices, they still could experience consumer backlash and regulatory repercussions.
Two significant risks that are inherent in the operation of AI itself are incorrectly formulated models and the problems that arise when humans and machines interact.
Firstly, misbehaving AI models can create problems when they deliver biased results, become unstable, or yield conclusions for which there is no actionable recourse for those affected by their decisions. This can happen, for example, if a population is underrepresented in the data used to train the model. The result can be AI models that unintentionally discriminate against disadvantaged groups, for instance by weaving together postcode and income data to create targeted offerings.
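Two simple screens capture the failure mode described above. The function names, the 10% representation threshold, and the sample data are assumptions for illustration; the four-fifths ratio is a widely used heuristic for flagging potential adverse impact, not a legal test.

```python
from collections import Counter

def representation_report(groups, min_share=0.10):
    """Flag groups whose share of the training data falls below min_share."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

def disparate_impact(outcomes_by_group):
    """Ratio of each group's positive-outcome rate to the best group's rate.

    Ratios below ~0.8 (the 'four-fifths rule') are a common screen for
    potential unintended discrimination.
    """
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical training sample: group B is underrepresented...
print(representation_report(["A"] * 92 + ["B"] * 8))  # {'B': 0.08}

# ...and receives targeted offers at a much lower rate (1 = offer made).
print(disparate_impact({"A": [1, 1, 1, 0], "B": [1, 0, 0, 0]}))
```

Checks like these, run before and after training, are one way to surface the underrepresentation problem before a model reaches customers.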
Secondly, the McKinsey research identified the interface between people and machines as another critical risk area. In a data-analytics organisation, scripting errors, lapses in data management, and misinformation in model-training data can easily compromise fairness, privacy, security, safety, and compliance.
The research highlights that accidents and injuries are possibilities if operators of heavy equipment, vehicles, or other machinery don’t recognize when systems should be overruled or are slow to override them because the operator’s attention is elsewhere — a distinct possibility in applications such as self-driving cars.
Moreover, these are just the unintended consequences — without rigorous safeguards, disgruntled employees or external foes may be able to corrupt algorithms or use an AI application in unethical ways, McKinsey warns.
AI Risk Management: Three Core Principles
Understanding the five risks above is useful for identifying and prioritizing them and their root causes. The McKinsey researchers said that if a firm knows where threats may be lurking, ill-understood, or simply unidentified, it will have a higher chance of mitigating them. The report also found that as the costs of risks associated with AI rise, the ability both to assess those risks and to engage workers at all levels in defining and implementing controls will become a new source of competitive advantage. Below are the three core risk mitigation strategies devised by McKinsey.
Clarity: Use A Structured Identification Approach To Pinpoint The Most Critical Risks
The McKinsey researchers' first core principle is to gain clarity in identifying the most critical AI risks within an organisation. They suggest gathering a diverse cross-section of managers to pinpoint and tier problematic scenarios. This is a good way both to stimulate creative energy and to reduce the risk that narrow specialists or blinkered thinking will miss significant vulnerabilities.
This structured risk-identification process can clarify the most worrisome scenarios, allowing a firm to prioritise the risks involved, recognize the controls that are missing, and marshal time and resources accordingly.
The report notes that organisations need not start from scratch with this effort. Over the past few years, risk identification has become a well-developed practice, and it can be adapted for direct deployment in the context of AI.
Breadth: Institute Robust Enterprise-Wide Controls
The McKinsey researchers noted that it is crucial for an organisation to conduct a gap analysis, identifying areas in an existing risk-management framework that need to be deepened, redefined, or extended. This allows a company to apply company-wide controls to guide the development and use of AI systems, ensure proper oversight, and put in place strong policies, procedures, worker training, and contingency plans. Without such broad-based efforts, the odds rise that risk factors such as the five described above will fall through the cracks.
Nuance: Reinforce Specific Controls Depending On The Nature Of The Risk
As crucial as enterprise-wide controls are, according to McKinsey they are rarely sufficient to counteract every possible hazard; another level of rigor and nuance is often needed, the research said. Organizations will need a mix of risk-specific controls, and they are best served by implementing those controls through protocols that ensure they are in place, and followed, throughout the AI-development process.
The requisite controls will depend on factors such as the complexity of the algorithms, their data requirements, the nature of human-to-machine (or machine-to-machine) interaction, the potential for exploitation by bad actors, and the extent to which AI is embedded into a business process.
The authors of the report stated that conceptual controls, starting with a use-case charter, are sometimes necessary. The same holds true for specific data and analytics controls, including transparency requirements, as well as controls for feedback and monitoring, such as performance analysis to detect degradation or bias, they concluded.
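The monitoring control mentioned above — performance analysis to detect degradation — can be sketched in a few lines. The function name, the baseline-versus-recent-window design, and the 5% tolerance are illustrative assumptions; real monitoring would track multiple metrics, segment by group, and account for statistical noise.

```python
def detect_degradation(baseline_acc, recent_accs, tolerance=0.05):
    """Return True if mean accuracy over a recent window has dropped
    more than `tolerance` below the baseline set at deployment time."""
    recent_mean = sum(recent_accs) / len(recent_accs)
    return (baseline_acc - recent_mean) > tolerance

# Hypothetical deployment: baseline accuracy 0.92, recent weekly scores.
print(detect_degradation(0.92, [0.85, 0.84, 0.86]))  # True  -> alert
print(detect_degradation(0.92, [0.91, 0.90]))        # False -> healthy
```

Wiring a check like this into a scheduled pipeline, with an alert feeding back to the model owners, is one lightweight way to operationalise the feedback-and-monitoring controls the authors describe.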