Building Trust In Artificial Intelligence

Artificial intelligence is at an inflection point, writes BasisAI CEO Liu Feng-Yuan, and its widespread adoption depends on whether society can place its trust in it.

AsianScientist (Jul. 7, 2020) – From real-world applications such as voice recognition and demand forecasting to contact tracing in the fight against COVID-19, artificial intelligence (AI) is transforming how we design, build, work and live.

As the hype around AI grows, there is also an opposing force arising from concerns over the power of major technology companies, user privacy and the possibility of political manipulation.

The public is asking, “Can AI be trusted?” and “Can the corporations that use AI make decisions in the interest of the consumer?” To answer these questions, we need to understand how AI systems work, and the conditions necessary to build trust.


The black box that is AI

Machine learning is about learning from past data to automate decisions. As the environment evolves, the data changes, so the predictive functions in these systems need to be tested regularly to ensure they are working as intended.
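To make this concrete, the Python sketch below shows what such a regular test might look like: the deployed model’s accuracy on freshly labeled data is compared against a threshold. The model, the data names and the threshold are illustrative assumptions, not a reference to any particular system.

```python
# A minimal sketch of a recurring model health check. The accuracy floor
# and the model/data names are hypothetical, chosen only for illustration.
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumed minimum acceptable accuracy for this use case

def check_model_health(model, recent_features, recent_labels):
    """Score the model on fresh labeled data and flag it if it has drifted."""
    predictions = model.predict(recent_features)
    accuracy = accuracy_score(recent_labels, predictions)
    if accuracy < ACCURACY_FLOOR:
        # In a real system this would alert an engineer or trigger retraining.
        print(f"Alert: accuracy {accuracy:.2f} fell below {ACCURACY_FLOOR}")
    return accuracy
```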

Yet there has been little focus on how these systems are built and maintained. Moreover, those who use ‘black box’ AI cannot see into the system to know how its decisions are made. While black box systems are easy to use, their opacity means that unintended bias can creep in, resulting in decisions that are unfair.

Take the example of US investment bank Goldman Sachs, which came under scrutiny over alleged implicit bias in a credit algorithm. The concern surfaced in a string of tweets by tech entrepreneur and Ruby on Rails creator David Heinemeier Hansson, who claimed that his wife had been offered a credit limit one-twentieth of his, despite her more favorable credit score.

Goldman Sachs defended itself by saying that its “credit decisions are based on a customer’s creditworthiness and not on factors like gender, race, age, sexual orientation or any other basis prohibited by law.” But because it could not explain how its algorithm actually reached those decisions, the episode eroded trust in algorithmic decision-making.


Overcoming biases in AI

As a technology, AI is neutral. It is a mathematical tool devoid of the prejudices and emotional blind spots that drive human bias. But it is also silent on whether it is appropriate to use race or gender as a basis for awarding credit; to the algorithm, each is just another variable. While corporations don’t start out intending to be biased, they need to ensure that their decisions are fair and in the consumer’s interest.
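One common way to test for this kind of unintended bias is to compare outcomes across a sensitive attribute. The sketch below, with hypothetical column names and toy data, computes the gap in approval rates between groups, a simple demographic-parity check; a large gap is a signal to investigate, not proof of discrimination.

```python
# A hedged illustration of a demographic-parity check on credit decisions.
# The column names and the toy data are hypothetical.
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame, group_col: str = "gender",
                      outcome_col: str = "approved") -> float:
    """Return the gap between the highest and lowest approval rates by group."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

toy_data = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M"],
    "approved": [0,   1,   0,   1,   1,   1],
})
print(approval_rate_gap(toy_data))  # ~0.67: a gap this large warrants review
```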

Thus, the key to upholding trust in AI is taking a responsible approach to using the technology. This is possible through a strong AI governance framework that can be designed into machine learning systems.

AI governance is a framework and process for organizations to ensure that their AI systems work as intended, in accordance with customer expectations, organizational goals, and societal laws and norms. It is typically articulated as a set of principles that can be translated into actions, processes and metrics to guide the use of AI in a way that is explainable, transparent and ethical. When AI governance is integrated with other parts of the organization, decision trade-offs can be weighed from overall compliance and risk management perspectives.


Misconceptions about AI governance

There are two common misconceptions about AI governance. The first is that it hinders innovation. But speed and governance of AI systems need not be diametrically opposed. At BasisAI, we believe that if you build machine learning systems correctly from the ground up, you can achieve both, laying the foundation for innovation and trust. In this respect, building an AI governance culture with accountable leadership and clear roles and responsibilities is crucial.

The second misconception is that the amorphous governance concepts prescribed by policymakers do not apply on the battleground of building machine learning systems. In this arena, MLOps (a compound of ‘machine learning’ and ‘operations’) has proved to be a useful foundation for implementing sound AI governance in performant machine learning systems.

MLOps is an emergent machine learning practice that draws from DevOps approaches to increase visibility, automation and availability in machine learning systems. MLOps enables the design of a well-governed AI system, by allowing the user to understand how the system is performing, articulate how decisions are made, assess for bias, and intervene quickly when algorithms are not working as intended.
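As one illustration of what this looks like in practice, the sketch below (with assumed names and fields) records every prediction in an append-only audit log, together with the model version and input features. Logging of this kind is one of the MLOps building blocks that makes decisions traceable, explainable after the fact, and quick to roll back.

```python
# A minimal, hypothetical sketch of prediction audit logging. Field names,
# the model version string and the log path are illustrative assumptions.
import json
import time
import uuid

def log_prediction(model_version: str, features: dict, prediction,
                   path: str = "audit_log.jsonl") -> None:
    """Append one decision record to an append-only audit log."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single (hypothetical) credit decision.
log_prediction("credit-model-v2.1", {"income": 85000, "credit_score": 720}, "approved")
```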


Building an ecosystem of trust

Ethical decisions ultimately need to be made by human beings; they cannot be delegated to technology systems. But corporate leaders do need a dual-key approach to AI governance, pairing a robust engineering practice such as MLOps with a strong governance framework, to ensure the responsible use of AI. By doing so, corporations can create an ecosystem of trust among their consumers, investors and regulators, while retaining the space to explore innovative AI tools and methods.

Disruptive technology will always move ahead of what society is comfortable with. That is what makes it powerful. But for it to become mainstream, technology needs to be trusted both by the people who use it and by the people who are often its unwitting consumers.

We believe that AI is at this inflection point, where what is slowing wider adoption is trust. It is an imperative that BasisAI is here to address, not just for the good of our customers, but for society and the future generations for whom AI will be part of the technological and social fabric.


Read BasisAI’s white paper on AI governance: The path to responsible adoption of artificial intelligence (2020)


———

Copyright: Asian Scientist Magazine; Photo: Alexander Sinn/Unsplash.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Liu Feng-Yuan is the co-founder and CEO of BasisAI. In his previous capacity, he was chief data scientist of the Singapore government, where he was responsible for setting up and growing the data science and AI capabilities within the Government Technology Agency.
