
Making Ethics The Default Norm In AI


Ethics should be a continuous process that refines and evolves as technology advances. And yes, AI can be tamed!

One Saturday evening, my 8-year-old was so silent that I didn’t realize he hadn’t gone out to play. There he was, huddled in a corner, binge-watching his favorite animation series on YouTube. It was a bottomless stream, one video after another feeding his eyes. More than two hours in, he had lost track of time, until I nudged him out of his trance and sent him out to play!

From curated sequencing of content based on your preferences and clickstream, to automated CV screening for job postings, to autonomous cars, Artificial Intelligence (AI) and Machine Learning (ML) algorithms are ‘mainstream’ today across all walks of life. Their many uses notwithstanding, there is enough material out there cautioning us on the pitfalls of AI, indicating it’s not to be taken lightly. Take, for instance, articles quoting industry pioneers like Elon Musk, who has called AI our “biggest existential threat.” Or TED Talks like “Machine intelligence makes human morals more important” by techno-sociologist Zeynep Tufekci.

Arguably, humanity has been disciplining a far superior form of intelligence, human intelligence, for ages now, channeling its power in constructive ways. Surely, we have been quite successful at it as a species, despite occasional setbacks.

So what’s all this “fear” about AI? Well, with great power comes great responsibility. That this power can be misused by sections of humanity is a well-founded concern. The answer could very well lie in history, in the ways society has dealt with intelligence for ages. We need to find ways to make ethics the default norm in AI, specifically addressing the following key aspects:

  • Governance: Define the rules of the game. The questions to be answered: Where and for what purposes can AI be legitimately used? What guidelines should be adhered to, from an ethical standpoint? How do you ensure conformance to those guidelines? For example, using AI for any kind of discrimination, such as denying a job due to color, caste, or sex, is a strict no-no and should invite punitive action against the corporation or entity that crosses the line.

Strides made on Governance: Tools like the What-If Tool (by Google) and Audit-AI (https://github.com/pymetrics/audit-ai, by Pymetrics) are being developed to help identify biases. The next step is to make these tools conform to a set of industry-specific guidelines. Such checks should be a mandatory part of any real-life use of AI for decision-making, enforced by a governing body (drawing a parallel with PCI compliance in the card industry).
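To make the idea of an automated bias check concrete, here is a minimal sketch in plain Python of the “four-fifths rule” disparate-impact test, one of the statistical checks that audit tools like Audit-AI implement. The hiring data, group labels, and function names below are invented for illustration; a real audit would use the tool’s own API and properly collected data.

```python
# Sketch of a disparate-impact check (the "four-fifths rule"):
# the selection rate of the disadvantaged group should be at least
# 80% of the selection rate of the advantaged group.

def selection_rate(outcomes):
    """Fraction of candidates in a group who were selected (1 = selected)."""
    return sum(outcomes) / len(outcomes)

def passes_four_fifths_rule(group_a, group_b, threshold=0.8):
    """True if the lower selection rate is at least `threshold`
    (80% by convention) of the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = min(rate_a, rate_b), max(rate_a, rate_b)
    return (lower / higher) >= threshold

# Hypothetical CV-screening outcomes: 1 = advanced to interview, 0 = rejected.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate 0.8
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # selection rate 0.4

print(passes_four_fifths_rule(group_a, group_b))  # 0.4 / 0.8 = 0.5 < 0.8 -> False
```

A screening model that fails this kind of check would, under the governance regime described above, be flagged before deployment rather than after the harm is done.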

  • Explainability: Make AI decisions understandable and auditable. Explainable AI (XAI) refers to AI techniques whose outputs can be trusted and easily understood by humans. I believe this is by far the most important aspect of making AI ethical. If an AI is aiding or taking decisions on behalf of humans, the algorithm needs to clearly state the underlying rule it applied to arrive at a recommendation or conclusion, in no fuzzy terms and in humanly understandable language.

If an AI algorithm is incapable of articulating its reasoning, it isn’t fit for purpose. It can’t hide under the guise of having been trained on relevant data. In fact, the branch of AI science that translates AI actions into humanly readable language demands more attention than anything else for AI’s advancement. Morality-based decision-making by AI should be non-negotiable.

Strides made on Explainability: Entities like DARPA (the Defense Advanced Research Projects Agency) are working on XAI toolkits that make AI decisions more explainable.
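As a toy illustration of the “humanly understandable language” demanded above (not a real XAI toolkit), here is a linear scoring model that reports how much each feature pushed its decision up or down, in plain English. The feature names and weights are invented for this sketch; real explainers such as feature-attribution methods follow the same principle at much larger scale.

```python
# Illustrative sketch: a linear score that explains itself by listing
# each feature's contribution in plain language. Weights are invented.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant):
    """Return (score, explanation): the explanation lists how much each
    feature raised or lowered the score, largest effect first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    parts = []
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if c >= 0 else "lowered"
        parts.append(f"{feature} {direction} the score by {abs(c):.1f}")
    return score, "; ".join(parts)

score, why = score_with_explanation({"income": 6, "debt": 2, "years_employed": 4})
print(f"score={score:.1f}: {why}")
# score=2.6: income raised the score by 3.0; debt lowered the score by 1.6;
# years_employed raised the score by 1.2
```

A rationale like this is auditable: a regulator or an affected applicant can check each stated contribution against the policy, which is exactly what an unexplained black-box score does not allow.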

  • Accountability: Define the ownership model. An action or recommendation of an AI cannot be left to itself. It needs to be owned by an individual or entity that was part of its creation, used it as an instrument, or reaped its benefits. For example, when an autonomous car is involved in an accident and the fault lies with the machine, who is to own the damage? The car owner? The car manufacturer? Or some other entity that authored the algorithm under the hood?

Strides made on Accountability: Maturity in explainability, followed by robust legal practices around the attribution of ownership, should help make progress on this aspect, which is still at a nascent stage.

  • Continuous refinement: Don’t let ethics play catch-up with technology. The first three points are not a one-time activity. They should be a continuous process that refines and evolves as technology advances.

Of course, all this is easier said than done. A host of questions come up: What is the agreed set of rules? How can we enforce it? Who is accountable? One can take heart in the fact that if humanity could deal with something as powerful as nuclear energy and use it more often constructively than otherwise, AI is something we can tame!

But we don’t need to get there the hard way. Some companies are doing their bit. Google, for instance, announced just this March 2019 that it is setting up a board with diverse opinions to decide the ethics of AI development. But are the regulators doing enough? Certainly, we don’t want a Hiroshima moment for the world to take notice and act.

(Pradeep Ganesha is Senior Director, Program Management, Publicis Sapient)
