
Why Trust Matters in AI for Business

Today, businesses increasingly rely on digital technologies such as Artificial Intelligence (AI) to remain competitive. AI, however, demands a high level of trust because of questions surrounding its fairness, explainability, and security. Various stakeholders must trust AI systems before businesses can scale their AI deployments, and a lack of trust can be the biggest obstacle to widespread adoption. We sat down with Sameep Mehta – IBM Distinguished Engineer and Lead, Data and AI Platforms, IBM Research India – to understand why trust in AI matters in a digital world and how IBM is helping companies achieve greater trust, transparency, and confidence in business predictions and outcomes by applying the industry's most comprehensive data and AI solutions.

 

Why does trust in AI matter in a digital-risk world?

AI is expected to be a multi-trillion-dollar market opportunity in the next decade. Almost all organizations want to leverage AI to improve existing processes and to open new channels of revenue. However, lack of trust, transparency, and governance of AI systems could be a major impediment to realizing its true potential.

According to IBM’s Global AI Adoption Index 2022, while organizations have embraced AI, few have made tangible investments in ways to ensure trust or address bias. Four out of five businesses believe it is important to be able to describe how their AI made a decision. Yet a majority of organizations haven’t taken key steps to ensure AI is both trustworthy and responsible, including reducing bias (74%), tracking performance variations and model drift (68%), and explaining AI-powered decisions (61%). Therefore, we need to embrace the right set of tools, learn the skills, and raise overall awareness to embed trust into the complete data and AI lifecycle.

 

What barriers do CIOs face when it comes to driving AI adoption in the enterprise and ensuring trustworthy AI systems?

To build trusted AI systems, we must address three key challenges. First, we need to educate and train technical and business leaders so that they understand trustworthy AI. Leadership must recognize that trust in AI systems is a must-have capability, and paying lip service will harm the organization’s overall AI initiatives. Second, the CIO team should provide developers with best-in-class trusted AI tooling. Libraries for checking models for bias, providing explanations, generating audit trails, and so on could be included in the overall DevOps toolchain managed by the CIO. Finally, trust in AI should be simple and easily understandable across the organization.

 

What are the key ethical issues associated with AI implementation?

Fairness has emerged as one of the core requirements for AI models. It is important to ensure that a model does not discriminate on the basis of age, gender, location, and similar attributes. In most cases, the fairness of a model is assessed with metrics like disparate impact, which measures whether outcomes follow similar patterns across groups of the population. For example, consider a model that approves or rejects loan applications. A model that approves 75% of loans when Gender = MALE and only 50% when Gender = FEMALE is acting in a biased manner and should be investigated for bias mitigation.
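The disparate-impact check described above can be sketched in a few lines of Python. This is a toy illustration of the metric, not IBM's implementation; the function name and the 0.8 threshold (the common "four-fifths rule") are assumptions for the example.

```python
# Toy sketch of a disparate-impact check on loan-approval decisions.
# Illustrative only; not IBM's implementation.

def disparate_impact(decisions, groups, privileged, unprivileged):
    """Ratio of approval rates: unprivileged group vs. privileged group.

    decisions: list of 1 (approved) / 0 (rejected)
    groups:    list of group labels, aligned with decisions
    """
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return approval_rate(unprivileged) / approval_rate(privileged)

# The example from the text: 75% approval for MALE, 50% for FEMALE.
decisions = [1, 1, 1, 0, 1, 1, 0, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]

di = disparate_impact(decisions, groups, privileged="M", unprivileged="F")
print(round(di, 2))  # 0.50 / 0.75 = 0.67
```

A ratio of 1.0 indicates parity; values below roughly 0.8 are a common rule of thumb for flagging a model for bias investigation, which the 0.67 above would trigger.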

AI explainability is the process of explaining the decisions made by AI models. If AI outcomes cannot be explained, acting on the recommendations will be difficult. The problem of explainability is particularly hard because different personas and roles have very different requirements for an explainability system. Let’s revisit the loan approval application through two diverse stakeholders.

When a loan application is rejected, a customer will want to know why and, more importantly, how to improve their chances next time. The explainer may point to a low credit score as the reason for rejecting the application, which also tells the customer what to improve to increase the probability of approval. This is a very customized, local explanation that is valid only for this customer. A risk officer, however, would want to review explanations aggregated over hundreds of applications, to make sure the model is taking into account the correct variables, such as salary and credit score, and is not relying on irrelevant factors, such as age and location. A risk officer gets a better understanding of the model’s workings by reading these global explanations than by inspecting the details of any particular customer.
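The local-versus-global distinction can be made concrete with a minimal sketch. Assuming a simple linear scoring model (the feature names and weights below are hypothetical, chosen to mirror the text), a local explanation is the per-feature contribution to one applicant's score, while a global explanation averages contribution magnitudes across many applicants:

```python
# Minimal sketch of local vs. global explanations for a linear scoring
# model. Feature names and weights are hypothetical.

FEATURES = ["salary", "credit_score", "age"]
WEIGHTS = {"salary": 0.6, "credit_score": 0.4, "age": 0.0}

def local_explanation(applicant):
    """Per-feature contribution to one applicant's score (local view)."""
    return {f: WEIGHTS[f] * applicant[f] for f in FEATURES}

def global_explanation(applicants):
    """Mean absolute contribution per feature across many applicants
    (the aggregate view a risk officer might review)."""
    n = len(applicants)
    return {
        f: sum(abs(WEIGHTS[f] * a[f]) for a in applicants) / n
        for f in FEATURES
    }

applicant = {"salary": 0.2, "credit_score": 0.3, "age": 0.7}
print(local_explanation(applicant))
# A low credit-score contribution here tells this customer what to improve.

cohort = [applicant, {"salary": 0.8, "credit_score": 0.9, "age": 0.3}]
print(global_explanation(cohort))
# Age contributes nothing globally -- evidence the model ignores it.
```

Real explainers (such as those in the AI Explainability 360 toolkit mentioned later) work on far more complex models, but the two views, one applicant versus the whole portfolio, are the same.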

Also, in-production AI systems must be protected from several types of threats, including model extraction, evasion, inference, and poisoning. We need to test models against these classes of attacks to understand their vulnerabilities and to harden them against adversaries.
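Of the attack classes listed, evasion is the simplest to illustrate: the attacker nudges an input just enough to flip the decision. The sketch below probes a deliberately trivial threshold model for that weakness; the model and the perturbation budget `epsilon` are illustrative stand-ins, not a real attack library.

```python
# Toy sketch of an evasion-style robustness probe: perturb an input
# slightly and check whether the decision flips. Model and epsilon
# are illustrative.

def model(credit_score):
    """Approve (1) when the credit score is at least 0.5."""
    return 1 if credit_score >= 0.5 else 0

def is_evadable(x, epsilon=0.05):
    """True if some perturbation within +/- epsilon changes the decision."""
    base = model(x)
    return any(model(x + d) != base for d in (-epsilon, epsilon))

print(is_evadable(0.52))  # True: a tiny nudge flips approval to rejection
print(is_evadable(0.90))  # False: the decision is stable under small changes
```

Production-grade testing runs many such probes, across all four threat classes, against the actual deployed model rather than a toy threshold rule.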

 

What are the major problems IBM is currently solving, and what are the plans for the near future?

IBM’s Global AI Adoption Index 2022 identifies several challenges that businesses face in adopting AI – these include limited AI skills, expertise, or knowledge; a lack of tools or platforms to develop models; complex or difficult projects; and too much data complexity. IBM is helping to meet this accelerating demand for AI and helping organizations overcome the barriers to adoption with IBM Watson. Today, IBM Watson provides cutting-edge AI capabilities for users with a range of AI skills, from business professionals looking to reclaim their time to data scientists, IT, and security professionals who are operationalizing AI at scale.

For business users, Watson provides pre-built AI applications that run anywhere, such as Watson Assistant, Planning Analytics with Watson, and Watson Orchestrate, each targeted at a specific business problem, such as customer care, planning and forecasting, or supply chain management. For developers and data scientists, Watson provides tools like Watson Studio on IBM Cloud Pak for Data to help collect and organize data, build AI models that are fair, deploy AI anywhere, and manage those models throughout their entire lifecycle. Watson also ensures the reliability and robustness of enterprise security, enterprise data protection, and data cataloging.

At IBM, we understand the complexity of business and have deep industry expertise and the advanced technologies required to scale. With IBM Watson, we’re focused on innovating across four areas that are critical for businesses looking to scale AI: natural language processing, trust, automation, and the ability to run anywhere. IBM also partners with external think tanks and a diverse set of stakeholders to shape overall trusted AI principles for society at large, and with educational institutions to include trusted AI in the core AI curriculum so that the next generation of AI scientists has the right skill set.

 

What initiatives has IBM Research India taken to infuse trust, transparency, and fairness into AI platforms and algorithms?

IBM’s AI governance methods are grounded in the ethical principles of Trust and Transparency to continually build and strengthen trust in technology. The principles make clear that the purpose of AI is to augment human intelligence; data and insights generated from data belong to their creator; and powerful new technologies like AI must be transparent, explainable, and free of bias so that they can be trusted.

IBM Research India is at the forefront of developing and delivering differentiated capabilities to infuse trust into the data and AI lifecycle. IBM has pioneered a trusted AI infrastructure based on four pillars of trust – fairness, explainability, robustness, and assurance or lineage. The lab co-led the development of the open-source toolkit AI Fairness 360, which enables developers to detect and mitigate bias in AI models, while AI Explainability 360 allows different personas to seek explanations from AI models. To further the mission of creating responsible AI-powered technology, IBM has contributed these toolkits to the Linux Foundation AI Foundation, open to all developers and data scientists.

These algorithms are also made available to our enterprise customers through IBM products and services. IBM Research India works closely with product development teams to incorporate capabilities that continuously monitor whether models are acting in a biased fashion and help remove model bias. Similarly, customers can use the AI explainers available in IBM products and services. These are out-of-the-box capabilities that can be invoked through a GUI, without writing any code.
