Asian Scientist Magazine (Oct. 13, 2022) — Devesh Narayanan was in Israel when he began to feel the first stirrings of frustration. In 2018, in the third year of his engineering degree at a university in Singapore, Narayanan joined an overseas
entrepreneurship program that sent him to Israel to work on drone defense technologies.
“These programs tend to be very gung-ho about using technology to save the world,” said Narayanan in an interview with Asian Scientist Magazine. “It always felt a little empty to me.”
As he worked on the drones, Narayanan found himself growing increasingly concerned. The moral implications of the things he was doing seemed to be masked by the technical language of the detached instructions he received from his supervisors.
“You would get technical prompts instructing that the drone do things like ‘engage in these coordinates’,” he recalled. “It sounds like a technical requirement, when it is really about getting the drones to fight in hostile territories without being caught. But at that level of technology design, the moral and political considerations are kind of hidden.”
The experience made Narayanan realize how easy it could be for an engineer, caught up in solving a technical problem, to overlook the moral and political questions of their work.
Upon discovering that the questions he had been asking were not part of any engineering syllabus, Narayanan turned to moral philosophy textbooks and classes for answers. That curiosity has now led Narayanan to fully focus on the ethics of technology. As a research assistant at the National University of Singapore’s Centre on AI Technology for Humankind (AiTH), he investigates the ethics of artificial intelligence (AI) and what it means for AI to be ethical.
AiTH is just one of the many places in Asia where researchers are trying to understand how to make AI responsible and what happens when it is not.
What it means to be ethical
From the Hippocratic Oath to the debates about embryonic stem cells and today’s concerns about data privacy and equity in vaccine delivery, scientific developments and ethics have always gone hand in hand.
But what does it mean for technology to be ethical? According to All Tech is Human, a Manhattan-based non-profit organization that aims to foster a better tech future, responsible technology should “better align the development and deployment of digital technologies with individual and societal values and expectations.” In other words, responsible technology aims to reduce harm and increase benefits to all.
As technology continues to shape human societies, AI is driving most of that change. Often unseen yet ubiquitous, AI algorithms drive e-commerce recommendations and social media feeds. These algorithms are also being increasingly integrated into more serious matters such as the justice and financial systems. In early 2020, courts in Malaysia began testing the use of an AI tool for speedier and more consistent sentencing. Despite the concerns voiced by lawyers and Malaysia’s Bar Council around the ethics of deploying such technology without sufficient guidelines or understanding of how the algorithm worked, the trial went ahead.
The government-developed tool was trialed on two offences, drug possession and rape, and analyzed data from cases between 2014 and 2019 to produce a sentencing recommendation for judges to consider. A report by Malaysian research organization Khazanah Research Institute showed that judges accepted a third of the AI’s recommendations. The report also highlighted the limited five-year dataset used to train the algorithm, and the risk of bias against marginalized or minority groups.
The use of decision-making AI in other contexts, such as in approving bank loan applications or making clinical diagnoses, raises a similar set of ethical questions. What decisions can be made by AI and what shouldn’t? Can we trust AI to make those decisions at all? Researchers argue that machines themselves lack the ability to make moral judgements, so the responsibility falls to the human beings who build them.
Making moral machines
The stakes of leaving such decisions up to AI can be monumental. Dr. Reza Shokri, a computer science professor at the National University of Singapore, believes that AI should only be used to make critical decisions if they are built on reliable and clearly explainable machine learning algorithms.
“Auditing the decision-making process is the first step towards ethical AI,” Shokri told Asian Scientist Magazine, adding that AI systems can have grave consequences if they rest on foundations and algorithms that are unfair or biased.
Shokri explained that bias often gets embedded in an algorithm when it is trained. Once supplied with training data, the algorithm extracts patterns from the data, which it then uses to make predictions. If, for any reason, certain patterns are more dominant than others at the training stage, the algorithm might weight the dominant data samples more heavily and ignore the less represented ones.
“Now imagine if these ignored patterns are the ones that apply to minority groups,” Shokri said. “The trained model would function poorly and less accurately on data samples from minority groups, leading to an unintended bias against them.”
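To make Shokri’s point concrete, here is a minimal sketch, not from the article, using the NumPy and scikit-learn libraries on synthetic data invented purely for illustration. It trains a simple classifier on a dataset where one group vastly outnumbers another and where the two groups follow slightly different patterns; the resulting model looks reasonable overall but is noticeably less accurate on the underrepresented group.

```python
# Illustrative sketch only: synthetic data, not any real system described in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has a slightly different relationship between the feature
    # and the label, standing in for the distinct "patterns" Shokri mentions.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# Majority group: 9,500 examples; minority group: only 500 examples.
x_maj, y_maj = make_group(9500, shift=0.0)
x_min, y_min = make_group(500, shift=1.5)

X = np.vstack([x_maj, x_min])
y = np.concatenate([y_maj, y_min])

model = LogisticRegression().fit(X, y)

# The decision boundary is dominated by the majority group's pattern,
# so accuracy drops on the minority group: an unintended bias.
print("majority accuracy:", accuracy_score(y_maj, model.predict(x_maj)))
print("minority accuracy:", accuracy_score(y_min, model.predict(x_min)))
```

Nothing in the model is written to discriminate; the gap emerges simply because the minority group’s pattern is drowned out during training, which is exactly the mechanism Shokri describes.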
For example, Twitter famously drew controversy when users discovered that its AI-based image cropping algorithm preferred to highlight the faces of white people over those of people of color in thumbnails, effectively showing more white people on users’ feeds. A 2021 study by Twitter of over 10,000 image pairs later confirmed this bias.
Getting rid of the jargon
Given everything that is at stake with AI, numerous organizations have attempted to come up with guidelines for building fair and responsible AI, such as the World Economic Forum’s AI Ethics Framework. In Singapore, the Model AI Governance Framework, first launched in January 2019 by Singapore’s Personal Data Protection Commission, guides organizations in ethically deploying AI solutions by explaining how AI systems work, building data accountability practices and creating transparent communication.
But for Narayanan, these discussions on AI ethics mean little if they are not grounded in defined terms, or if there isn’t a proper explanation for how they should be implemented in practice.
These frameworks “currently exist at an abstract conceptual level, and often propose terms like fairness and transparency—ideas that sound important but are objectionably underspecified,” said Narayanan.
“If you don’t have a sense of what is meant by fairness or transparency, then you just don’t know what you’re doing,” he continued. “My worry is that people end up building systems they call fair and transparent, but are biased and harmful in all the same ways they always have been.”
Shokri also echoed the need for clear definitions. “In the case of fairness, we need a clear description of the notion of fairness that we want to satisfy. For example, does fairness mean we want the outcome of an algorithm to be similar across different groups? Or do we want to maximize the performance of the algorithm on an underrepresented group?” said Shokri. “When the notion of fairness is clear, then data processing and learning algorithms can be modified to respect such notions.”
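To illustrate why the choice of definition matters, the short sketch below, which is not from the article and uses toy labels and predictions invented for illustration, computes two different fairness measures on the same predictions: the rate of positive outcomes per group, and the model’s accuracy per group. The two notions can disagree, which is why Shokri says the desired notion has to be stated clearly before a learning algorithm can be modified to respect it.

```python
# Illustrative sketch only: toy data, not any real system described in the article.
import numpy as np

# Group membership, true labels and model predictions for ten hypothetical people.
group = np.array(["A", "A", "A", "A", "A", "A", "A", "B", "B", "B"])
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0, 0, 1])

for g in ["A", "B"]:
    mask = group == g
    positive_rate = y_pred[mask].mean()                   # notion 1: similar outcomes across groups
    accuracy = (y_pred[mask] == y_true[mask]).mean()      # notion 2: performance within each group
    print(f"group {g}: positive rate {positive_rate:.2f}, accuracy {accuracy:.2f}")
```

Here group B receives positive outcomes far less often and is also classified less accurately; deciding which gap to close first is a normative choice, not a purely technical one.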
The problem, Narayanan further posits, is that theoretically grounding principles in this way is challenging, and not something that industry practitioners, such as those working with Singapore’s Model AI Governance Framework, may be able or willing to do.
“Principles, in my opinion, are in this weird no-man’s land: neither theoretically grounded, nor practically implementable. I worry that we’re focusing too much on solving the latter problem, at the expense of the former,” explained Narayanan.
As such, Narayanan’s research at AiTH has been dedicated to interrogating the definitions of terms used when discussing AI ethics. He is currently examining the discourse around transparency to determine what it actually entails in the context of building ethical AI.
“I am asking if transparency is an end in itself or if there are things like accountability and redress that it should help us get,” Narayanan explained.
He is particularly concerned about what he terms performative transparency—providing people with information about how an AI algorithm makes decisions, but without doing anything more than simply making that information available.
“For example, you could tell a job applicant that their resumes were screened by an automated algorithm, but then not provide any explanation for why they may be rejected and mechanisms to contest it or seek redress,” said Narayanan. “When people can be potentially harmed by a system, they would want a channel to fight an unfair decision. Transparency could help with this to some extent.”
A better understanding of transparency and the other terms that dominate AI ethics frameworks may help us design AI that is actually beneficial to all.
Technology that centers humanity
But what exactly goes into designing AI that benefits humanity? Answering that question requires considering the myriad different and intersecting factors that make us human, said Professor Setsuko Yokoyama of the Singapore University of Technology and Design. Yokoyama specializes in the speculative design of equitable technology, which incorporates the sociopolitical history of a particular digital technology to inform its ongoing design process.
For Yokoyama, who encourages a humanistic inquiry into digital technologies, clear definitions are crucial too.
“When we talk about ‘human-centric’ design, who are the ‘humans’ in question?” asked Yokoyama. “If it refers to a majority group in a society or a handful of elites that happened to be in the room where design decisions are made, that already indicates who is prioritized and who is left out.”
Yokoyama brings up a seemingly innocuous example to illustrate this point: speech-to-text technology. While you may be familiar with the technology through AI-powered automatic captions on YouTube videos, speech-to-text technology traces its beginnings back to the late 19th and early 20th century, when it was known as Visible Speech and was used as an assistive technology to help deaf students master oral communication.
“But at the same time it served as a corrective and assimilative tool for deaf students to be integrated into a larger society through the mastery of ‘normative’ speech,” said Yokoyama. “Though such design rationale might be characterized as ‘human-centric’, it stems from unchecked ableist assertions.”
Yokoyama uses intersectionality, a critical framework that examines how identity markers such as race, gender, class, disability status and national origin intersect with and compound different forms of discrimination, in her research. Starting with the premise that bias is multifaceted and intersectional, Yokoyama aims to keep such biases from becoming entrenched in automatic speech recognition systems.
AI technology is no different, warned Yokoyama. “AI systems that are designed with a narrow and limited definition of humans would end up asserting and imposing a particular idea of who the humans are on the rest of us,” she said.
A question of power
The risk of sidelining certain voices or communities in technology design is a concern that Narayanan shares too. While Narayanan believes making ethical AI decisions requires deep critical thinking and moral skills, he is also quick to emphasize that high-stakes decision-making should not be centered on just a few select people.
“I’m skeptical of leaving just a few people in charge,” Narayanan said. “You have people, like AI developers and tech designers, with the most technical expertise who are making the decisions about bias and harm. On the other hand, you have the users who are most affected by these systems. The problem is these people are not the ones who have the most power in shaping the systems.”
To illustrate this point, Narayanan recalled his conversations with Grab taxi drivers and other gig workers for a previous research project. While terms like transparency and fairness didn’t appear to mean much to the workers, this changed when Narayanan approached the topic through practical concerns like wages and ride competition.
“It turns out they had a lot of things to say; they just didn’t have this language of abstract terms about fairness or transparency principles,” said Narayanan. “Because of this, it is important to figure out what material issues people care about, and how that connects to the things that we’re talking about.”
Narayanan and Yokoyama both run the Singaporean node of the Design Justice Network, a community that explores the intersections of design and social justice. The members of the network aim to use design to empower communities and avoid oppression, while centering the voices of those who are directly impacted by the outcomes of the design process.
In the end, Narayanan, Yokoyama and other researchers like them hope that clearer language will help pave the way for more diverse voices in discussions about AI ethics.
The usual challenges that AI presents—like job displacement, data security and privacy risks—are amplified due to unequal power dynamics, and the consequences are more dire for those who may be intentionally or unintentionally sidelined by biased AI algorithms. Discussing the fairness of algorithms behind AI technologies is undoubtedly a crucial step towards a better tech future for all, but what’s even more important is who gets to have a voice in those discussions in the first place.
This article was first published in the print version of Asian Scientist Magazine, July 2022 with the title ‘Fair Tech’.
—
Copyright: Asian Scientist Magazine. Illustration: Lieu Yipei