The pandemic posed greater ethical challenges in the areas of data protection, deepfakes, and cybersecurity, reinforcing the need to bring AI ethics to the forefront as we move ahead in 2021.
“The potential benefits of artificial intelligence are huge, so are the dangers.” ~Dave Waters
Before COVID-19, most people had some degree of apprehension about robots and artificial intelligence (AI). While the initial perception of the technology was shaped by its dark depictions in science fiction, there were indeed certain legitimate concerns. Despite AI’s miraculous potential across various fields of business and society, some of its tools and applications were not only leading to job losses but also reinforcing biases and resulting in infringements of data privacy.
These ethical issues concerning AI have caught the interest of many – individuals, businesses, and governments – in recent years, with experts looking for ways to make AI more transparent and reliable. Then the pandemic posed even greater ethical challenges in the areas of data protection, deepfakes, and cybersecurity, reinforcing the need to bring AI ethics to the forefront as we move ahead in 2021.
As Viral Thakker, Partner, Deloitte India, says, “By itself, AI is just technology (like nuclear energy), but its uses can be perceived as having ‘positive’ or ‘negative’ effects on society as a whole. This pushes the dialogue of ethics in the data context.”
According to him, data and AI ethics deal with managing ethical complexities in the age of vast amounts of data and extensive automation. The big issues driving this are privacy considerations; the lack of transparency of “black box” AI models; bias and discrimination that may be embedded in the data that AI algorithms learn from; and a lack of governance and accountability.
Ethical AI for Good
Since the onset of the pandemic, however, some of the worries about AI appear to have been set aside, as AI-infused technologies have been employed to mitigate the spread of the virus, and AI tools and robots have taken on workplace tasks that were either difficult for humans ordered to stay at home during the ensuing lockdowns or simply out of their reach.
For example, labor-replacing robots took over floor cleaning and sanitization in grocery stores and healthcare facilities, as well as sorting at recycling centers. Companies also came to rely increasingly on chatbots for customer service, and AI showed early promise in monitoring infection rates, contact tracing, and much more.
But there are dangers to this newfound embrace of AI and robots, believes Mujiruddin Shaikh, Market Technology Principal, ThoughtWorks, who sees current trends indicating a widening gap between the privileged and the disadvantaged as more decisions affecting the masses are aided by intelligent machines. “We are at a critical time, where questions about the appropriate use and scope of AI are still to be fully considered,” he notes.
Closing the Ethical Gap in AI
Given its potential, AI and data could add $450–500 billion to India’s GDP in the next four to five years, Nasscom predicts. For all the good that AI can bring, policymakers and responsible tech companies must recognize, anticipate, and mitigate its potential unintended, harmful effects.
“The key is to control how an individual’s information is used and ensure data security. Adopters of this technology should maintain ethical standards to ensure that their customers’ information is not misused,” says Varun Goswami, Global Head – New Products COE, Newgen Software.
In that sense, AI combined with human intelligence can not only create a bigger impact but also drive its usage in the right direction, believes Praveen Kumar, Vice President, Digital & Innovations, JK Technosoft. He states, “Organizations should start thinking of educating and upskilling employees, as AI will incorporate a great deal of work alongside human workforces. At the same time, it is of utmost importance to consider the necessary and adequate governance mechanisms carefully and proactively to ensure ethical considerations in the deployment of AI tools.”
Already, many organizations are offering programs to help employees put ethics at the core of their workflows. These programs are designed to empower the entire organization to think critically about every step of building AI solutions and, perhaps most importantly, to continue advancing AI that is safe and inclusive.
Shaikh notes that the rising demand for ‘tech-for-good’ is inspiring organizations to do things the right way. Employees, too, are open about their interest in working for ethically sound companies. One of the first steps towards building ethical AI is to grow the company’s knowledge of the subject and nurture an organization-wide ‘ethics-first’ approach, much like the ‘security-first’ philosophy.
Looking ahead in 2021
As we move forward, our reliance on AI will deepen, which will inevitably raise many ethical issues, especially in industries where personal and business data is at stake. Companies should therefore start thinking about how they will retrain and educate employees with regard to AI’s ethical implications.
According to Sindhu Gangadharan, SVP and Managing Director, SAP Labs India, “Ethical AI can be achieved by taking small steps like proactive training of AI models and mitigating bias at early levels of training and testing; diversifying the teams building AI models and algorithms, resulting in reduced human bias. With more humans and machines working together, it can also result in a more balanced approach towards decision making.”
Shreeranganath Kulkarni, Chief Delivery Officer, Birlasoft suggests that for enterprises to ensure that AI is implemented for the greater good, they should work towards an AI playbook built on the pillars of AI strategy, data, talent, technology, execution, and culture.
There is no doubt that responsible AI will become a key focus area for many organizations in 2021 and will grow in importance in the coming years. As Gangadharan believes, organizations that are able to adapt, upgrade themselves, and emerge stronger will be the ones to benefit from any kind of disruption.
In the process, they must carefully and proactively consider the governance mechanisms needed to ensure ethical considerations in the deployment of AI tools. Building trust in technological solutions and tools is vital so that businesses and individuals can benefit from their use. Certainly, we don’t want a Hiroshima moment in AI for the world to take notice and act.