News & Analysis

UN Body Seeks Better Policies Governing AI-based Decisions

A new study commissioned by the United Nations makes the case for proactive, more responsible policies to mitigate bias in organizational decision-making processes based on artificial intelligence.

Such AI-based decision-making could undermine employees' rights if the biases inherent in the process are not addressed, says the study, conducted by the Aapti Institute in association with the UNDP.

The study recommends that new policies and regulations be put in place alongside the increasing digitization of businesses, rather than after problems emerge. Doing so could help businesses address the impact of AI on human rights as more and more companies automate their services.

The study accused companies of using algorithm-based decision-making as cover for deliberate company policies, and of avoiding the work of building responsible, explainable AI models. A lack of conducive company policy and regulation often exacerbates the impact of AI and automation on worker rights, it warned.

The report describes an explainable AI model as one in which the actions taken by the AI algorithm, and the logic behind them, can be easily explained to people, giving them a better understanding of the system and helping surface any biases. Queries against such explanations could also become data for retraining the algorithms, if required, in the future.
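The report does not describe any particular implementation, but the idea of decisions that carry their own human-readable rationale, with challenged decisions logged as potential retraining data, can be sketched in a toy example. Everything below (the `ExplainableScreener` class, its thresholds, and the loan-screening framing) is a hypothetical illustration, not anything from the study:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    reasons: list  # human-readable explanations for the outcome

@dataclass
class ExplainableScreener:
    """Toy rule-based screener whose every decision carries its reasons."""
    income_threshold: float = 30_000.0
    max_debt_ratio: float = 0.4
    feedback_log: list = field(default_factory=list)  # challenged cases, usable for later retraining

    def decide(self, income: float, debt_ratio: float) -> Decision:
        # Each failed rule contributes one plain-language reason,
        # so the outcome is never an unexplained yes/no.
        reasons = []
        if income < self.income_threshold:
            reasons.append(f"income {income:.0f} below threshold {self.income_threshold:.0f}")
        if debt_ratio > self.max_debt_ratio:
            reasons.append(f"debt ratio {debt_ratio:.2f} above limit {self.max_debt_ratio:.2f}")
        return Decision(approved=not reasons, reasons=reasons or ["all criteria met"])

    def query(self, income: float, debt_ratio: float) -> list:
        """Record a challenged decision; the log can seed later retraining or bias audits."""
        decision = self.decide(income, debt_ratio)
        self.feedback_log.append((income, debt_ratio, decision))
        return decision.reasons
```

A real system would replace the hand-written rules with a model plus an explanation layer, but the contract is the same: every decision is answerable in plain language, and every challenge leaves a trace that can feed back into the model.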

The study identified financial services, healthcare and retail as the industries where algorithmic bias is most likely to lurk beneath the surface. The workers most affected in these sectors come from marginalized sections of society, which means their ability to seek recourse against bias is also very limited.

Dennis Curry, deputy resident representative of UNDP India, says while AI has helped in “improving” lives through speeding up diagnosis times in healthcare and improving convenience and accessibility for disabled individuals through smart homes, it is important to build “inclusive and resilient digital ecosystems that are rights-based”.

Finally, the report added that existing biases in traditional AI models, when implemented by companies, could have an even bigger impact on women and individuals from the economically disadvantaged strata of society. It cited the implementation of automated work-hour computation systems that ignore contextual human rights, and automated, "predatory" data-collection systems, as key examples of unregulated use of AI.
