Despite artificial intelligence (AI)’s miraculous potential across business and society, some of its tools and applications not only lead to job losses but also reinforce biases and infringe on data privacy; these harms are already well documented. Against this backdrop, the World Economic Forum (WEF) has recently launched the Global AI Action Alliance (GAIA), an initiative to accelerate the adoption of inclusive, transparent and trusted AI practices globally and across industry sectors.
The alliance brings together more than 100 leading companies, governments, international organizations, non-profits, and academics united in their commitment to maximizing the societal benefits of AI. Alliance members work together to identify and implement the most promising tools for ensuring that AI systems are ethical and serve all of society, including groups historically underserved by AI.
AI could contribute more than $15 trillion to the world economy by 2030, adding 14% to global GDP, according to PwC. This would make AI the biggest economic opportunity of the next decade, representing more value than today’s insurance, oil and gas, commercial real estate, and automotive industries combined.
Along the way, AI could bring huge benefits to society. A 2018 study by Google identified 2,602 AI use cases that promote social good, and people are increasingly applying AI to address critical societal challenges including improving agricultural yields, reskilling workers and combating COVID-19.
But as the economic and social potential of AI has become clear, so, too, have the risks posed by unsafe or unethical AI systems. Recent controversies on facial recognition, automated decision-making and COVID-19-related tracking have shown that realizing AI’s full potential requires strong buy-in from citizens and governments, based on their trust that AI is being built and used ethically.
As Viral Thakker, Partner, Deloitte India, says, “By itself, AI is just technology (like nuclear energy), but its uses can be perceived as having ‘positive’ or ‘negative’ effects on society as a whole. This pushes the dialogue of ethics in the data context.”
According to him, data and AI ethics deal with managing ethical complexities in an age of vast amounts of data and extensive automation. The big issues driving this are privacy considerations, the lack of transparency of “black box” AI models, bias and discrimination that may be embedded in the data from which AI algorithms learn, and a lack of governance and accountability.
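One concrete way to see how bias embedded in historical data propagates into a model is to audit the data's selection rates by group before training on it. The sketch below is illustrative only: the loan-approval data is synthetic, the field names are hypothetical, and real audits use far richer methods than the simple disparate-impact ratio shown here.

```python
# Minimal sketch: measuring bias embedded in historical decision data.
# All data and field names here are synthetic and illustrative.

def selection_rate(records, group):
    """Fraction of applicants in `group` who received a positive outcome."""
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

def disparate_impact(records, group_a, group_b):
    """Ratio of selection rates; values well below 1.0 suggest bias
    against group_a (the 'four-fifths rule' treats < 0.8 as a red flag)."""
    return selection_rate(records, group_a) / selection_rate(records, group_b)

# Synthetic loan-approval history: a model trained naively on this data
# would tend to learn, and then automate, the same skew it exhibits.
history = (
    [{"group": "A", "approved": 1}] * 30 + [{"group": "A", "approved": 0}] * 70 +
    [{"group": "B", "approved": 1}] * 60 + [{"group": "B", "approved": 0}] * 40
)

ratio = disparate_impact(history, "A", "B")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
```

Because the historical approvals favour group B two-to-one, any algorithm that treats this data as ground truth inherits that disparity, which is exactly the "bias embedded in the data" problem Thakker describes.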
Since the onset of the pandemic, however, some of these worries appear to have been set aside, as AI-infused technologies have been employed to mitigate the spread of the virus, and AI tools and robots have performed workplace tasks that were either difficult or simply out of reach for humans ordered to stay at home during the ensuing lockdowns.
For example, labor-replacing robots took over floor cleaning and sanitization in grocery stores and healthcare facilities, and sorting at recycling centers. Companies also leaned more heavily on chatbots for customer service, while early AI-driven attempts to monitor infection rates and support contact tracing showed further potential.
But there are dangers to this newfound embrace of AI and robots, believes Mujiruddin Shaikh, Market Technology Principal, ThoughtWorks, who sees current trends indicating a widening gap between the privileged and the disadvantaged as more decisions affecting the masses are aided by intelligent machines.
“We are at a critical time, where questions about the appropriate use and scope of AI are still to be fully considered,” notes Shaikh.
As we move forward, our reliance on AI will continue to deepen, which will inevitably raise many ethical issues, especially in industries where personal and business data is at stake. Needless to say, then, AI holds the potential to deliver enormous benefits to society, but only if it is used responsibly. The need of the hour is to start thinking about how organizations will retrain and educate employees with regard to AI’s ethical implications.
According to Sindhu Gangadharan, SVP and Managing Director, SAP Labs India, ethical AI can be achieved through small steps: proactively training AI models and mitigating bias early in training and testing, and diversifying the teams that build AI models and algorithms, which reduces human bias. With more humans and machines working together, decision-making can also become more balanced.
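One example of the kind of early-stage mitigation Gangadharan describes is reweighing training examples before a model ever sees them, so that under-represented (group, label) combinations are not drowned out. The sketch below is a simplified illustration of that general reweighing idea, not SAP's actual method; the data and names are synthetic.

```python
# Minimal sketch of one early-stage bias mitigation: reweighing training
# examples so each (group, label) cell carries its statistically expected
# weight. Illustrative only; data and field names are hypothetical.
from collections import Counter

def reweigh(records):
    """Assign each record weight = expected / observed frequency of its
    (group, label) cell, so under-represented cells count for more."""
    n = len(records)
    group_counts = Counter(r["group"] for r in records)
    label_counts = Counter(r["label"] for r in records)
    cell_counts = Counter((r["group"], r["label"]) for r in records)
    weights = []
    for r in records:
        expected = group_counts[r["group"]] * label_counts[r["label"]] / n
        observed = cell_counts[(r["group"], r["label"])]
        weights.append(expected / observed)
    return weights

# Skewed synthetic data: positive labels are rare for group A,
# common for group B.
data = (
    [{"group": "A", "label": 1}] * 10 + [{"group": "A", "label": 0}] * 40 +
    [{"group": "B", "label": 1}] * 40 + [{"group": "B", "label": 0}] * 10
)
w = reweigh(data)
print(f"weight for rare (A, 1) cell: {w[0]:.2f}")   # up-weighted to 2.50
```

After reweighing, each (group, label) cell contributes the same effective weight to training (here, 25 per cell), illustrating how bias can be addressed at the data stage rather than patched after deployment.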
However, no single organization can address the full range of challenges presented by AI, nor can any one actor deliver the immense benefits that AI can offer to society. With so many challenges to overcome and so many opportunities to unlock, only robust collaboration can ensure that we maximize the benefits of AI and distribute them equitably across society.
As Klaus Schwab, founder and executive chairman of the WEF, said, “We are launching the Global AI Action Alliance along with our partners to shape a positive, human-centred future for AI at this decisive moment in its development.”
The alliance provides a platform for organizations to engage in real-time learning, pilot new approaches to ethical AI, scale adoption of best practices, and undertake collective action to ensure that AI’s benefits are shared by all.
A Steering Committee consisting of top global leaders from industry, government, academia and civil society will guide the alliance. The committee is co-chaired by IBM Chairman and CEO Arvind Krishna and by Vilas Dhar, President of the Patrick J McGovern Foundation, a global AI and data philanthropy.
With AI’s impact on industry and society accelerating every day, and with so much at stake in how its roll-out is managed, the Alliance sees an urgent need for a multi-stakeholder collaborative effort to ensure that AI is used ethically and in the global public interest.