
Explainable AI and its implications

CXOToday has engaged in an exclusive interview with Dattaraj Rao, Chief Data Scientist, Persistent Systems

Why has explainable AI become a prominent focus area in recent times?

Machine Learning (ML) adoption is increasing across enterprises, and a common problem that has emerged is trust in ML models. These probabilistic models usually output a value or score with very little supporting evidence for the result. Simpler models like linear regression or decision trees give some sense of the underlying reasoning. However, as models get more complex, particularly deep learning neural networks, their decisions become difficult to explain. In addition, regulated industries like healthcare, banking, and insurance demand clear accountability from decision engines as a core requirement. Since these are life-changing decisions that affect people's health and livelihoods, we must verify that the right parameters were considered during decision-making. This need for transparency has led to the prominence of explainable AI.

Why is it important to make machine learning more transparent?

ML models learn from data, and the data used to train them may be biased or incorrect. ML follows a garbage-in, garbage-out principle: if you train on inaccurate or biased data, the results will not be as expected, and the bias will flow into the model. Transparency helps identify these vulnerabilities and weak spots, allowing for realistic expectations of the model and bringing a human into the loop as needed to handle complex situations.


Importance of bias adjustment in ML models

Biased results can cause major problems by skewing outcomes based on certain factors. For example, the training data for a loan approval model may show more female applicants being rejected than male, and using this data as-is may incorrectly bias the model on gender. Hence, transparency is required to identify which input factors (features) the model weighs most heavily when making a loan approval decision. If gender is one of these significant features, the model has likely picked up the bias. This finding needs to be highlighted and discussed with business leaders, and appropriate debiasing methods should be used to correct it. It is important to know that debiasing often involves adding synthetic data to the training set; hence, getting approval from business leaders is crucial. Enterprises with high ML maturity have a dedicated AI ethics committee for such decisions.
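As a rough illustration of the kind of check described above, the sketch below trains a simple loan-approval classifier on a fully synthetic dataset and uses scikit-learn's permutation importance to see how much the model leans on a gender column. The dataset, feature names, and coefficients are all hypothetical, chosen only to demonstrate the technique.

```python
# Hypothetical sketch: flag a loan-approval model that leans on gender.
# The data, features, and bias strength here are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)      # income in thousands (synthetic)
credit = rng.normal(650, 80, n)     # credit score (synthetic)
gender = rng.integers(0, 2, n)      # 0 = male, 1 = female (synthetic)

# Biased labels: approval partly depends on gender, mimicking a
# historically biased approval process captured in training data.
approved = ((0.02 * income + 0.01 * credit - 0.8 * gender
             + rng.normal(0, 1, n)) > 7.5).astype(int)

X = np.column_stack([income, credit, gender])
features = ["income", "credit_score", "gender"]
X_tr, X_te, y_tr, y_te = train_test_split(X, approved, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name}: {imp:.3f}")
# If "gender" contributes materially to model accuracy, the model has
# likely absorbed the historical bias and needs debiasing before use.
```

If the permutation importance of a protected attribute like gender is non-trivial, that is exactly the kind of finding that should go to business leaders or an AI ethics committee before the model is deployed.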


Impact of explainability on future applications of AI

ML models are becoming more complex, making it challenging to comprehend the inner workings of large neural networks, so explainability is crucial to understanding these models. Future AI applications are growing increasingly complex, with new network architectures and parameter counts reaching into the billions. We need explainability methods to decipher how a model makes decisions and to present that to business leaders. In highly regulated industries like banking, the focus is more on interpretability, that is, on building inherently explainable models. Examples include glass-box models like Explainable Boosting Machines (EBM) and Generalized Additive Models (GAM). Libraries like PiML (Python Interpretable ML) make it easy to explore these architectures and build models that can explain their decisions.
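To make the glass-box idea concrete, here is a minimal sketch using the open-source `interpret` package, which implements the EBM architecture mentioned above (PiML offers a similar workflow). The dataset is a synthetic placeholder, not a real banking dataset.

```python
# Minimal glass-box sketch with an Explainable Boosting Machine (EBM),
# via the open-source `interpret` package; the data is synthetic.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global explanation: per-feature shape functions and importance
# scores showing how each input drives predictions overall.
global_exp = ebm.explain_global()

# Local explanation: each feature's contribution to one prediction,
# the kind of per-decision evidence regulators ask for.
local_exp = ebm.explain_local(X[:5], y[:5])
```

Because an EBM is an additive model, these explanations are exact descriptions of how the model computes its output, rather than post-hoc approximations of a black box.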

How is Persistent Systems contributing to the research and development of explainable AI, and which industries are they helping with this technology?

Persistent has a major Responsible AI initiative built on five pillars: accountability, transparency, reproducibility, security, and privacy. We actively work with clients at various ML maturity levels to help standardize their ML processes and adopt patterns with state-of-the-art tooling. We have evaluated more than 50 libraries and analyzed different data types for explainable AI. We also work on interpretable ML libraries and help clients build models that return results along with details on how the predictions were made. Moreover, we consult on model selection, weighing the trade-off between interpretability and performance for a client's specific dataset. We work with clients across industries to help them move up the ML maturity ladder and adopt Responsible AI across the enterprise.
