The growing sophistication and ubiquity of artificial intelligence (AI) applications have raised a number of ethical concerns, including bias, safety, transparency, and accountability, among others. These issues have caught the interest of many — individuals, businesses, and governments — in recent years. The COVID-19 pandemic has raised further, specific concerns about how to apply digital or data ethics to decision-making when collecting, using, and sharing data about employees and other stakeholders. Organizations are now expected to be even more vigilant about AI's dangers if they hope to leverage the many opportunities it offers to the business.
Shedding light on some of the key ethical challenges in AI, Shubhangi Vashisth, Senior Principal Analyst (Artificial Intelligence) at Gartner, says, “While ethics should be a core part of AI and other digital programs, the problem is it’s often treated as an afterthought.”
Recognizing the challenges in AI ethics
AI systems often deliver biased results. A search engine, for example, may reproduce the real world's biases, stereotyping gender roles; a hiring tool may reinforce racial discrimination and entrench prejudices against certain communities. Often, neither users nor developers understand how the system arrives at its output, and this opacity allows bias in datasets and decision systems to go unchecked, believes Vashisth.
“The other problem is that AI systems are black boxes: it is often impossible to fully understand why the algorithms behind the AI work the way they do. Lack of accountability, auditing, and engagement also reduces opportunities for human oversight, further amplifying issues of trust and transparency,” she says, adding, “If individuals and teams are cognizant of the existence of bias, then they have the necessary tools at the data, algorithm, and human levels to build a more responsible AI.”
Companies also need to ensure that their AI systems are beneficial for businesses, society, and the environment on the whole. According to Vashisth, “Ethical AI should follow principles such as fairness, reliability, safety, privacy, security, and inclusiveness. It should provide transparency and accountability. But it is often a challenge to attain these principles.”
Fairness, for example, means that AI systems should be inclusive and accessible, and should not discriminate unfairly against individuals, communities, or groups; they should provide equitable access and treatment to everyone. Bringing more diversity into the team, however, can help mitigate the bias that algorithms often develop, according to Vashisth.
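To make the fairness principle concrete, here is a minimal sketch (an illustration, not from the article or any Gartner guidance) of how a team might check one common fairness metric, demographic parity, for a hypothetical hiring model's decisions. The group labels, data, and the four-fifths threshold mentioned in the comment are all assumptions for the example.

```python
# Illustrative sketch: checking demographic parity for a hypothetical
# hiring model. All data and group labels here are invented.

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions (1 = selected) per group."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def demographic_parity_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Example: 1 = hired, 0 = rejected, across two demographic groups A and B.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = demographic_parity_ratio(decisions, groups)
# A ratio well below 1.0 (e.g. under the common "four-fifths rule" of 0.8)
# would flag the system for review.
print(f"parity ratio: {ratio:.2f}")
```

A check like this is only a starting point: it surfaces disparate selection rates but says nothing about their cause, which is why the human-level review Vashisth describes remains necessary.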
AI systems should also respect privacy rights and data protection, which in turn ensures the security of data. An ethically designed AI system provides proper data governance and model management. Developers should be trained to build systems with privacy and security as primary concerns. And AI systems should work reliably, in accordance with their intended purpose.
Vashisth, who presented her views on ‘AI Governance and Responsible AI’ at the Gartner Data & Analytics Summit for India that took place virtually on 4th & 5th August, says, “Creating explainability is among several important steps enterprises must embed in their AI operations in order to make responsible, ethical AI a part of doing business. A key to making it work is to ensure that ethics is included in every part of the organization’s AI process and involves the stakeholders, including your employees, customers, shareholders and board of directors. The system provides opportunities for feedback and dialogues for making the process better.”
Mitigating AI’s ethical dilemma
While it remains unclear who in an organization should be answerable for questions of ethical AI, Vashisth believes it is important for companies to have AI ethicists to help them think through the ethics of AI development and deployment. An AI ethicist advises on ethical AI practice, guards against bias and unintended consequences, and ensures accountability within the organization.
“Also companies should have a formal code of ethics that lays out their principles, processes, and ways of handling ethical aspects of AI development. Those codes should be made public on the company’s websites so that stakeholders and external parties understand the company’s views on ethical AI,” she says.
Vashisth believes that AI goes beyond the development of traditional product lines with narrow social implications. With its potential to distort basic human values, it is crucial to train people in how to think about AI.
An ethical AI system should also value human diversity, freedom, autonomy, and rights. According to her, “Firms should have AI training programs that address not only the technical aspects of development but also the ethical, legal, and societal ramifications. That would help software developers understand that they are not merely acting on their own individual values, but are part of a broader society with a stake in AI development.”
Ethical AI underway
On a positive note, Vashisth observes that some of the world’s biggest corporations — Google, Microsoft, Amazon, Facebook, Apple, and IBM among them — have joined the discussions as well.
Google, for example, has published a document calling for the “responsible development of AI.” It said AI should be socially beneficial, should not reinforce unfair bias, should be tested for safety, should be accountable to people, should incorporate privacy design, should uphold high standards of scientific excellence, and should be available for uses that accord with those principles.
Microsoft, meanwhile, published an extensive report, “The Future Computed.” It laid out the opportunities for AI and the need for “principles, policies and laws for the responsible use of AI,” and noted the possible ramifications for the future of jobs and work.
These companies seek to develop industry best practices to guide AI development, with the objective of promoting “ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology.”
Many firms also keep AI audit trails explaining how particular algorithms were put together and what their possible outcomes are. These audits provide some degree of transparency and explainability.
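As a rough illustration of what such an audit trail can look like — a sketch under assumptions, not any particular firm's system — each trained model version might be recorded as an append-only log entry capturing its training data description, hyperparameters, and evaluation metrics. The model name, fields, and file format here are all invented for the example.

```python
# Illustrative sketch (not any specific company's practice): an append-only
# JSON-lines audit trail recording how a model version was put together.
import json
import hashlib
from datetime import datetime, timezone

def audit_record(model_name, training_data_desc, params, metrics):
    """Build one audit-trail entry for a trained model."""
    entry = {
        "model": model_name,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data": training_data_desc,
        "hyperparameters": params,
        "evaluation_metrics": metrics,
    }
    # A content hash lets auditors detect later tampering with the entry.
    payload = json.dumps(entry, sort_keys=True)
    entry["checksum"] = hashlib.sha256(payload.encode()).hexdigest()
    return entry

def append_audit(path, entry):
    """Append the entry as one JSON line, never rewriting earlier lines."""
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: log a resume-screening model alongside its fairness metric.
record = audit_record(
    "resume-screener-v2",
    "2020 applicant pool, anonymized",
    {"learning_rate": 0.01, "max_depth": 6},
    {"accuracy": 0.91, "parity_ratio": 0.84},
)
append_audit("model_audit.jsonl", record)
```

Keeping the log append-only and checksummed is what turns a simple record into something auditable: later reviewers can verify both what was logged and that it has not been altered.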
There is no doubt that responsible AI is expected to become a key focus area for many organizations in the coming years. As Vashisth believes, responsible AI practices can drive competitive advantage and business resilience and make the organization more attractive to talent.
“In the process, they must consider carefully and proactively the necessary governance mechanisms to be used to ensure ethical considerations in the deployment of AI tools,” she says.
Building trust in technological solutions and tools is vital if businesses and individuals are to benefit from their use. Certainly, we don’t want a Hiroshima moment in AI before the world takes notice and acts.