CXOToday has engaged in an exclusive interview with Philip Miller, Senior Manager, Customer Success at Progress.
What is the role of national frameworks and regulations in the battle against data bias, in addressing the growing concern among businesses using AI and machine learning?
The advent of AI and machine learning comes at a time of rapid technological innovation and change. For regulators and governments, it presents a challenge that has grown in scope and will continue to grow over the next several years and beyond. How do we effectively regulate something like AI that is constantly changing? I’d say the same way we regulate anything that changes: by staying adaptable and keeping one step ahead.
National frameworks must empower those making new regulations to keep pace with technology, which is a challenge from both a resource and a decision-making perspective. We don’t have to wait until something with bad consequences happens to start thinking about how to regulate AI more effectively. I’m happy to see many countries actively working in that direction, and I hope that, as a society, we will be able to make the most of AI.
What are the primary challenges that organisations face when addressing data bias?
There are two main challenges that organisations face when attempting to address data bias. The first relates to the technology powering the data within the organisation and from external sources. That data is increasingly found in unstructured documents and data sources, so organisations are spending, in some cases, millions of dollars to manually extract the data from those documents, tag it and identify the facts and meaning within it. Handling such a huge amount of data, which must then be used to train AI and machine learning applications, is becoming untenable for organisations.
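To illustrate the kind of tagging work described above, here is a minimal sketch that matches document text against a small concept taxonomy. The `TAXONOMY` terms and the `tag_document` helper are hypothetical, invented for illustration; real systems use far richer ontologies and semantic models than simple keyword matching.

```python
# Illustrative sketch: tagging unstructured text against a tiny taxonomy.
# Concept names and terms below are invented for this example.
TAXONOMY = {
    "finance": ["loan", "credit", "interest rate"],
    "healthcare": ["patient", "diagnosis", "treatment"],
}

def tag_document(text):
    """Return the sorted list of taxonomy concepts whose terms appear in the text."""
    lowered = text.lower()
    return sorted(
        concept
        for concept, terms in TAXONOMY.items()
        if any(term in lowered for term in terms)
    )

doc = "The patient was denied a loan due to a low credit score."
tags = tag_document(doc)  # ['finance', 'healthcare']
```

Even a toy tagger like this shows why the work is expensive at scale: every concept needs curated terms, and every document must be checked against all of them.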
The second challenge is more complicated and comes down to human resources. People like routine and consistency. Some will struggle to keep up with the technology. Some will refuse to change because “it’s the way we’ve always done it.” Some subject matter experts are still using obsolete technology, and that must change. Resistance to new technology is one of the main causes of delayed or stalled change in an organisation. It takes effective leadership to push through this change, bringing innovation into the organisation while ensuring that employees are empowered to use the new technology.
In what ways does data bias hinder the accuracy and fairness of AI and machine learning algorithms?
When thinking about data bias, we must consider not only the source, intentions and meaning behind the generation of the data, but also its completeness. Has enough data been captured to give subject matter experts, or in this case AI, a complete view of the topic? Is it organised and structured so that it is not only machine readable but also machine interpretable? Has human expertise in the business been used to tag the data with taxonomies and ontologies, semantically link it to surface the relationships within it, and apply this metadata to the data?
If the answer to any of these is no, then bias will exist in the data. It can be racial in nature, but increasingly we’re finding other types of bias: financial, sociological, historical, sexual and more. All of these can skew the results and findings from AI, and they can be found in any organisation.
One aspect of development that I think is not currently being addressed is the reliance solely on developers, engineers and architects to build and use these new AI platforms. While certain results can be achieved with these resources, a massive blind spot exists when creating or using AI. It’s a skills gap, and it has an impact on society today.
When social media networks were created, engineers and developers built the technology. Launched into the world, these networks grew fast and were adopted by billions. Only now are we starting to realise the impact these technologies have had, and are having, on us as a society. We mechanised our reach, our ability to share our lives and connect with others, on a global scale. But we don’t have the capacity to think at this level, to interact with hundreds, thousands and millions at once. With that came the negatives of social media: misinformation, abuse, dehumanisation and more.
As it stands today, we’re making the same mistake with data bias and its impact on other aspects of the technology. In the development process, the sciences and the humanities must both be considered. Sociologists, psychologists and historians must be consulted so that we can avoid repeating the mistake we made with social networks: failing to consider the human element in the data.
Tell us more about Progress’ global study on data bias and how executives perceive the issue. How can it help businesses?
The comprehensive global study has some eye-opening statistics that I hope will not only ignite a conversation about data bias in business but also inspire businesses to put real change into practice. The issue won’t simply vanish, and biases have a way of deeply embedding themselves into how we think and act unless we work now to change them.
Our research shows that although 55% of Indian organizations acknowledge the existence of data bias within their organization, 63% consider the lack of awareness and understanding a significant barrier to addressing it. The lack of available expert resources, such as access to data scientists, is also a challenge. Meanwhile, 57% of organizations in India believe that data bias could have serious social consequences if not addressed, and 82% see technology as a crucial tool in the fight against it.
Once executives start addressing data bias, they will not only identify new ways of working but also be able to innovate on a larger scale. When the issue is tackled, the business will have access to more data, better data tools and new data, which can help bridge the gap between human and machine scale. It brings businesses and organisations closer to human-aligned AI, something proponents say is crucial, provided biases and other negatives are mitigated.
What measures could India take to promote inclusivity and fairness in AI algorithms, particularly in sectors with significant social and economic implications?
India, like so many other countries, has the power to ensure equality, inclusivity and fairness at the start of this age of AI. Human-augmented AI, aligned for our good, can help lift all of humankind worldwide. Giving everyone access to AI means that our pool of ideas and innovation will grow, the pace of change will accelerate and the economic and social benefits will be realised by all.
To guarantee fair and inclusive AI and avoid the consequences of unconscious bias, organizations should expand the talent diversity of the teams developing the algorithms behind AI, provide technical training on dataset management for expert practitioners and commit to continuous assessment of the technology they use. This combination of technology, training and practice will equip practitioners to develop protocols that detect, remediate and avoid biased algorithms, and to make sure every touch point in their technology and development stack factors in the reality of data bias.
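As one illustration of what a simple detection protocol might look like, here is a minimal sketch that measures the gap in positive-outcome rates between groups, a basic demographic parity check. All names, fields and records here are hypothetical, and real assessments would use richer fairness metrics and real data.

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Compute the positive-outcome rate for each group in the records."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[outcome_key]))
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical loan-application decisions, grouped by a demographic attribute.
applications = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = selection_rates(applications, "group", "approved")
gap = demographic_parity_gap(rates)  # group A is approved twice as often as B
```

A continuous-assessment practice could run a check like this on every model’s outputs and flag any gap above an agreed threshold for human review.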