
Artificial Intelligence and its Human Programming

Decision-making is increasingly driven by artificial intelligence (AI) as enterprises across the world try to reduce the time and friction involved in data collection and analytics. Given how critical such information is to business decision-making, algorithms have enhanced efficiency and increased productivity. 

Given this scenario, while it might seem perfectly acceptable for AI to take over jobs and free up human effort, how would things look in domains such as criminal justice, healthcare or enhancing the impact of a welfare state? What is gained on the swings may well be lost on the roundabouts. 

In fact, one look at the medical prognosis domain is enough to show that hitherto expensive diagnostic procedures could potentially cost a fraction of today's prices. As more diseases are diagnosed in larger numbers, machine learning, and hence AI, improves. The same holds good for finance, where big data analytics quickly exposes information asymmetries between companies and credit intermediaries, or between individual credit seekers and lending agencies. 


But, there’s a flip side too

However, just as we are reveling in the growing potential of AI, there is also an awareness among the industry and the proponents of machine learning that a human's discriminatory behavior towards the rest of humanity could well get encoded into machine learning models, given that the initial training continues to be shaped by human minds. 

In an article on “How to Regulate Automated Decision Making,” published in the Economic Times, Ivana Bartoletti, the global chief privacy officer at Wipro, warns of the discrimination and inequality present in the real world being encoded into the systems that are tasked to make decisions about the future. 

Employing automated decision making thus risks becoming a thoughtless, automatic action, rather than a deliberate exploration of how technology can continue to further progress, which is what should be done, says Bartoletti, who is also a visiting policy fellow at the Oxford Internet Institute and the author of “An Artificial Revolution, on Power, Politics and AI”. 


Kudos to activism and academics

The Wipro official highlights the work done by activists and academics to articulate how human biases get embedded even in the machines of the future. She notes how the EU’s Artificial Intelligence Act could develop into a global standard of sorts. 

“In the EU AIA, risk is determined by the impact that AI products have on people’s rights, including the fundamental rights underpinning the EU legal ecosystem. These include the rights to, e.g., privacy and fairness. The EU AIA intersects with existing privacy and data protection laws, and expands on these to an extent as well, as the former covers systems that might not directly make use of personal data, but nevertheless have an impact on individuals and their livelihoods,” she says in her writeup. 


The debate must continue

Towards this end, she feels that a continuous debate on how to regulate automated decision making augurs well, and points out how China has introduced provisions that limit the power of algorithms over the agency and autonomy of consumers. Similarly, in the US, initiatives are being taken at the state level around the use of algorithms in administration and the public sector. 

However, Bartoletti says there are challenging areas around predictive policing, and points to seminal work done by the European Parliament’s Internal Market and Consumer Protection and Civil Liberties, Justice and Home Affairs committees. These bodies show how predictions in the criminal justice system could pose an unacceptable risk to the presumption of innocence. 

Calling for innovation that is rooted in citizens’ trust in order to be sustainable, the author says discussions should also be held around environmental considerations while bringing out AI products. “‘Technosolutionism’ is something that is still trending, and it tends to deafen us to any negative impact of technology on the environment with its public relations mantra that tech can solve all our problems, no matter what,” she adds. 
