
Fostering Responsible AI Adoption In the Enterprise


Indian enterprises developing artificial intelligence (AI) are making considerable efforts to adopt guidelines, frameworks, and other best practices. However, they lag behind their global counterparts in conducting third-party audits or impact assessments of their AI systems, according to a new study.

The report, launched by AIM Research and titled “State of Responsible AI in India”, evaluates the current state of Responsible AI among Indian enterprises. It highlights the efforts organisations in India have made to ensure AI’s safe and responsible development, and draws attention to areas that need improvement.

The rapid advances in AI and its potential to make decisions and automate tasks significantly impact individual autonomy and change how societies function. It has therefore become essential that companies ensure AI is deployed in a manner that does not harm individuals or societies.


Two-thirds (66.7%) of Indian AI companies have adopted a formal risk evaluation or auditing framework, yet only 6.9% have hired external auditors.


The report’s findings show that larger firms perform comparatively better in documenting risk evaluation guidelines, adopting bias detection frameworks, ensuring safety, and hiring third-party auditors. However, they fall behind in conducting periodic human-rights impact assessments of their AI systems.

Around 7 in 8 (87.5%) of the firms with large data science units have documented safety standards checklists for their AI systems, compared to 75.0% of medium-sized and 60.0% of small data science units. At the same time, large data science units (37.5%) are roughly twice as likely not to perform long-term impact assessments on any of their AI systems as medium-sized (16.7%) and small (20.0%) data science units.

Boutique AI firms that provide niche AI products or services do better on most principles of Responsible AI than big IT firms that provide AI-as-a-service. They are more likely to have audit guidelines, standards checklists for algorithmic fairness, and bias detection frameworks. They are also more likely to build transparent AI systems, perform periodic human-rights impact assessments, and allow more human control.

More than 9 in 10 (92.3%) of the boutique AI firms use an explainability framework to better understand their AI systems, compared to 75.0% of the IT firms providing AI-as-a-service. Similarly, around 15.4% of the boutique AI firms perform a human-rights impact assessment on every AI system, while only 6.7% of the firms providing AI-as-a-service do so.
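For readers unfamiliar with what such bias detection frameworks actually check, the sketch below illustrates one common measure, the demographic parity difference, in plain Python. This is a minimal illustration only; the report does not prescribe any particular metric or tooling, and the function, data, and threshold here are hypothetical.

```python
# Minimal sketch of one bias-detection check: demographic parity difference.
# Hypothetical example; the report does not prescribe a specific metric or tool.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between demographic groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g. "A"/"B"), aligned with predictions
    """
    rates = {}
    for label in set(groups):
        members = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(members) / len(members)
    low, high = min(rates.values()), max(rates.values())
    return high - low  # 0.0 means equal positive rates across groups

# Toy usage: a gap above a chosen threshold would flag the model for review.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50 for this toy data
```

In practice, frameworks of this kind automate many such checks (across metrics, groups, and model versions) and log the results for auditors, which is exactly the documentation the report finds larger and boutique firms are more likely to have.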

Finally, firms headquartered outside India are more likely to maintain compliance standards by adopting bias detection frameworks. They are also more likely to consult AI ethics experts and to develop their systems through multi-stakeholder partnerships.

AI firms headquartered outside India (35.7%) are more likely to adopt risk evaluation or bias detection frameworks than firms headquartered in India (20.0%). Similarly, around 86.7% of the firms headquartered outside India consult stakeholders for every AI system they develop, compared to 71.4% of the AI firms headquartered in India.

According to a PwC report (based on a survey conducted between Aug and Sep 2020), India became a leading adopter of AI amid the pandemic. While the adoption accelerated and respondents were convinced of AI’s benefits, around half of them (48-49%) still expressed concerns about the resulting control, ethics, performance, and compliance risks.

Given its potential, AI and data could add $450-$500 billion to India’s GDP in the next 4-5 years, predicts Nasscom. For all the good that AI can bring, policymakers and responsible tech companies must recognize and mitigate its potential unintended, harmful effects.

Already, many organizations are offering programs to help employees put ethics at the core of their respective workflows. These programs are designed to empower the entire organization to think critically about every step of building AI solutions and, perhaps most importantly, to continue advancing AI that is safe and inclusive.

Going forward, our reliance on AI will deepen, which will inevitably raise many ethical issues. Hence, responsible AI is expected to become a key focus area for many organizations and to grow in importance in the coming years.
