Let’s start with some examples. Example 1: A private bank decides to use machine learning to decide who is eligible for a loan. The model needs to be trained on a data set, which could be historical data or user-created data. If the bank has historically denied loans to a certain category of people, the same bias carries forward into the model. Essentially, the bank has transferred its bias to the system, and the system will now deny loans to that same category of people.
Example 2: A leading educational institution has been analysing its historical admissions data to understand admission patterns and see whether it can develop a model. This model will train a system to augment the administrators’ decisions on which candidates to attract and retain. The model also incorporates data on dropouts during the course, so that such candidates can be identified in advance and put through counseling sessions.
One of the main challenges the institution faces is eliminating any bias that the data might introduce into the system’s admission criteria. As the data spans more than 100 years, it also includes periods when certain students were denied admission based on their gender or race.
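The bias transfer described in both examples can be sketched in a few lines of code. The data and the per-group "model" below are purely hypothetical stand-ins for a real classifier and a real loan or admissions history; the point is only to show that a system trained on biased decisions reproduces them.

```python
# Hypothetical illustration: a toy "model" learns each group's historical
# approval rate and reuses it for new applicants. Because group "B" was
# systematically denied in the past, the model keeps denying group "B".
from collections import defaultdict

# Historical decisions as (group, approved) pairs -- invented data.
history = [("A", True), ("A", True), ("A", False),
           ("B", False), ("B", False), ("B", False)]

def train(records):
    """Learn the historical approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Approve only if the group's historical approval rate clears the bar."""
    return model.get(group, 0.0) >= threshold

model = train(history)
print(predict(model, "A"))  # True  -- group A is still approved
print(predict(model, "B"))  # False -- the historical denial carries forward
```

No real classifier is this crude, but the mechanism is the same: the model has no notion of fairness, only of the patterns in its training data.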
What’s common between the two examples? Both are classic cases of a digital-ethics dilemma, a topic that demands immediate consideration and deliberation not only from organizations but also from regulatory bodies and individuals.
The growth of digital media in India has largely been fueled by a moderate regulatory framework until now. Given the growing concerns around information and content available on digital platforms, a recent study by Deloitte Touche Tohmatsu India LLP together with the Bangalore Chamber of Commerce (BCIC) emphasized an immediate need for India Inc. to introduce and adopt a “Digital Ethics framework” that would ensure a holistic view of ethics and govern every digital intervention in a business’s transformation journey.
An accelerated pace of digital transition, consumption of goods and services via app-based interfaces, and the proliferation of data bring numerous risks, such as biased decision-making processes being transferred by humans to machines or algorithms at the development stage. These biases can threaten an organization’s reputation and stakeholders’ trust, as well as create operational risks, the study says. The study also examines how technology is changing the way we do business, interact, and live. In the current technological era, many decisions are taken with inputs from artificial intelligence and other automated decision-making systems, especially where structured data for decision-making is available.
This data is used to train smart algorithms to replicate human decision-making processes. There is a real possibility that the human biases involved in those decisions were transferred to the machines, which is one of the biggest areas of concern in digital ethics today.
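One common way to detect such transferred bias, not named in the study but widely used in fairness auditing, is the “80% rule” disparate-impact check: compare a model’s approval rate for a protected group against the most-favored group. The rates below are invented for illustration.

```python
# Hypothetical audit: the "80% rule" disparate-impact check. A ratio of
# approval rates below 0.8 is commonly treated as evidence of adverse impact.
def disparate_impact(rate_protected, rate_reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return rate_protected / rate_reference

# Invented numbers: 30% approvals for the protected group vs 75% overall best.
ratio = disparate_impact(0.30, 0.75)
print(round(ratio, 2))   # 0.4
print(ratio >= 0.8)      # False -- this model would fail the check
```

Checks like this are a starting point, not a complete answer; they detect unequal outcomes but say nothing about why the disparity arose.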
“The pandemic compelled businesses and consumers to embrace digital technologies like artificial intelligence, big data, cloud, IoT and more in a big way. However, the need of the hour is to relook at the business operations layered on digital touch points with the lens of ethics, given biases might arise in the due course, owing to a faster response time to an issue,” says Vishal Jain, Partner, Deloitte India.
Societal pressure to do “the right thing” now demands careful consideration of the trade-offs involved in the responsible use of technology. This interplay becomes vital when managing data-privacy rights while actively adopting customer analytics for personalized service. All these concerns make it crucial to have an ethical framework that ensures effective governance and risk mitigation are in place, the study observes.
Manas Dasgupta, Chair of the Young BCIC Expert Committee, said, “Tech is advancing at a breakneck speed. In fact, it is getting ahead of us so fast that we are grappling to assess the true abilities and what prudential norms are to be applied.”
“Certain areas related to possible misuses of technologies such as privacy and security are fairly well-regulated both from legal as well as corporate governance aspects. However, inadvertent fallouts of technologies like autonomous machines that use AI / Robotics, etc. are yet to be fully understood,” he says.
The study recommends the following five steps:
Committee: Form a cross-functional committee of business, technology, and community experts who collaborate to address all ethical concerns. This committee should roll up into the organisation’s ethics committee, which forms the overall framework for ethics in the organisation.
Digital Ethics: Draft a policy on digital ethics.
Adherence: Ensure that all digital projects are covered and assessed from the digital-ethics perspective.
Emphasize: Make ethics an important part of the digital governance of all projects.
Education: Impart education on the need for the right ethics; the individuals involved must be assessed and have their knowledge reinforced from time to time.
In a recent speech, Masayoshi Son of SoftBank said that in the near future the earth would be co-inhabited by humans and machines; we may soon see a world with 10 billion people and 10 billion robots. This, in other terms, is a kind of singularity, in which each robot is connected to every other. In that context, digital ethics becomes even more important, because we may end up transferring our local biases to the machines, and those biases could permanently place some individuals outside the purview of the services and facilities these robots provide.
“It is the need of the hour that the Industries start meaningful conversations and note sharing around good governance on these technologies and ensure that we are within our limits to stay fair to everyone in the society, remain transparent and responsible in our Digital endeavors,” says Dasgupta.