Keeping Smart City Algorithms Accountable

Although artificial intelligence promises to give city planners unprecedented insight into urban life, humans must remain involved in decision making.

AsianScientist (Jan. 30, 2020) – Despite the hype, smart cities are nothing new. Take it from Rob Kitchin, a professor of human geography at the National University of Ireland, Maynooth, who has seen the discourse shift from ‘wired’ cities in the 1980s to ‘cyber’ cities in the 1990s and even ‘sentient’ cities in the early 2000s.

In its latest manifestation as ‘smart,’ much of the focus has centered on the data and sensors that support fine-grained digital feedback. But there’s far more to smart cities than data, however voluminous. What differentiates the smart cities movement from previous attempts to make urban life better will be the use of artificial intelligence (AI) to make sense of all the data pouring out of smart cities.

“Cities have been using automated systems for a long time, for example, intelligent transport systems that can configure traffic lights based on the volume of traffic,” Kitchin told Supercomputing Asia. “With AI, however, these systems are starting to become autonomous or semi-autonomous, as opposed to having humans in the loop.”

While AI enables city planners to process vast amounts of data and uncover hidden patterns, the use of AI also raises the attendant questions of fairness, accountability and transparency. These questions of smart city governance are situated in broader debates about the ethics, governance and accountability of AI, but will have to be tackled head on—perhaps more urgently than for other applications of AI—if city planners are to maintain the trust and confidence of their constituents.

Illustration by Lam Oi Keat for Supercomputing Asia.

Data and its limits

The aspirational vision for AI in smart cities is a god’s-eye-view dashboard of city management and planning. Tech firms are leaping to solve the city-wide data management problem with offerings like Alibaba’s City Brain and Huawei’s Intelligent Operations Center and +AI Digital Platform to create an operating system layer for the city.

AI promises to rationalize disparate datasets and manage real-time data flows. But the reality for many cities today is that AI remains limited to one-off inquiries and planning exercises; it is not yet a part of day-to-day operations running real-time data streams.

One reason for this is that AI is primarily a tool for automating decision making, a prospect planners are primed to be wary of. As recently as the 1950s, modernist planners believed in a comprehensive, predictive and objective science of planning. They imagined that even the most messy human spaces could be understood with the right data and models to analyze the city. This assumption resulted in top-down plans that were limited by what they measured, and which failed to understand the lived experience of citizens on the ground.

Led by urban activists like Jane Jacobs, who famously succeeded at preventing a highway from being built through the middle of Greenwich Village in Manhattan in the 1960s, planning has since taken a strong turn towards participatory modes to include constituents and stakeholders. Today, planners recognize that space is a highly politicized subject, with a complexity that often cannot be reduced to numbers.

So it is perhaps not surprising to find planners and urban scientists hesitant to cede control to the black box once again. Leaving city-wide dashboards and masterplans to AI systems risks ushering in a new wave of technocratic directives that could mistake data for ground truth.

Design-driven data

Although AI has advanced our ability to analyze data, that data and the decisions made with it are by no means perfect or objective.

“If decisions about what is best for society are ceded to algorithms, whose notion of civic paternalism or stewardship is embedded in those systems?” Kitchin asked.

These are some of the most pressing questions in assessing the impacts of AI. And they become only more pressing in municipal applications, where constituents’ lives and wellbeing are at stake. As Kitchin points out, those deploying AI technologies have a responsibility to evaluate whether their values align with those of the algorithms or models. No computational process is neutral; fitting data to a curve is a value-laden choice about what we are optimizing towards. Are we optimizing for cost savings? Profit? Sustainability? Fairness and inclusion?
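A toy illustration of the point: the same set of options can yield different "optimal" answers depending on which objective the system is told to maximize. The routes, costs, and coverage figures below are invented for illustration only.

```python
# Two hypothetical transit options with made-up metrics (cost in $M/year,
# coverage of underserved areas as a 0-1 fraction).
options = [
    {"name": "route_a", "annual_cost": 1.2, "underserved_coverage": 0.9},
    {"name": "route_b", "annual_cost": 0.7, "underserved_coverage": 0.4},
]

# Optimizing for cost savings picks one route...
cheapest = min(options, key=lambda o: o["annual_cost"])

# ...while optimizing for inclusion picks the other.
most_inclusive = max(options, key=lambda o: o["underserved_coverage"])

print(cheapest["name"], most_inclusive["name"])
```

Neither answer is wrong; the choice of objective function is where the values live.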

Rather than designing cities using data, what could be more important is ensuring that our use of data is underpinned by design thinking principles, said Associate Professor Bige Tuncer of the Singapore University of Technology and Design (SUTD).

“It is not so much about data-driven design as design-driven data; you need to understand what you want to get out of your computation,” she explained.

“We are going in the direction of responsive cities, where instead of the technology being the most dominant aspect, we look at how the collection, analysis, visualization and interpretation of the data can support the processes that designers and planners have to undertake to make cities more livable.”

For example, in one of her research projects on designing for ‘liveliness’ in public spaces, Tuncer used workshops to incorporate the feedback of stakeholders such as residents, turning their input into weights of a computational model. Tuncer described this method as an example of human-in-the-loop planning in practice, where weights and values in the model are expressly informed by local expertise.
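A minimal sketch of what such human-in-the-loop weighting might look like in code: stakeholder workshops supply the weights of a simple scoring model, rather than the model learning weights purely from data. The factor names, weights, and scoring function below are illustrative assumptions, not Tuncer's actual model.

```python
def liveliness_score(features: dict, weights: dict) -> float:
    """Weighted sum of normalized site features (each assumed in [0, 1])."""
    return sum(weights[k] * features[k] for k in weights)

# Observed features for a candidate public space (normalized, hypothetical).
site = {"foot_traffic": 0.8, "seating": 0.4, "greenery": 0.6, "noise": 0.3}

# Weights elicited from a resident workshop: here residents valued greenery
# and seating over raw foot traffic, and treated noise as a penalty.
resident_weights = {
    "foot_traffic": 0.2,
    "seating": 0.3,
    "greenery": 0.4,
    "noise": -0.1,
}

print(round(liveliness_score(site, resident_weights), 3))
```

The point of the pattern is that the weights are an explicit, inspectable expression of local values, which stakeholders can revisit and revise.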

Watching the watchers

On the technical side of things, AI governance research has centered on methods to mitigate the potential challenges and harms of AI decision making. Computer scientists are formalizing tests that can be run on algorithms to audit their decision making outcomes for fairness, accountability and transparency. These may become part of the auditing system for due diligence either in deploying home-grown algorithms or in procuring industry solutions.
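One simple example of the kind of test such an audit might run is a demographic parity check: comparing an automated system's positive-decision rates across groups. The decision data below is invented for illustration; real audits use a battery of such metrics.

```python
def positive_rate(decisions: list) -> float:
    """Fraction of decisions that were positive (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group: dict) -> float:
    """Gap between the highest and lowest group-level positive-decision rates."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical outcomes for the same system in two districts.
outcomes = {
    "district_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "district_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(outcomes)
print(f"parity gap: {gap:.3f}")
```

A large gap does not prove unfairness on its own, but it flags the system for closer human review, which is precisely what an auditing process is for.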

In planning physical spaces, authorities typically incorporate public consultation as part of their planning review process. Smart city procurement processes, however, rarely include public review—putting sensors on a lamppost is seen as a technical operational choice rather than a political one. Kitchin observes that this seems to be especially true in city governance structures where authority is consolidated and centralized, as is the case in leading smart cities like Singapore and Barcelona.

While it is early days for smart city-specific AI auditing, Kitchin expects that auditing processes will become commonplace in the near future. AI accountability reviews would slot in naturally with existing data compliance audits to meet requirements for regulations such as the European Union’s General Data Protection Regulation and Singapore’s Personal Data Protection Act. Such oversight will require city planners to express the values and goals for which these systems ought to be optimizing.

If we hope to avoid another pendulum swing in planning practices back towards unexplainable black boxes, smart cities will have to build in mechanisms for constituent participation, accountability and governance of the algorithmic systems that make cities smart. The truly smart city is one that serves its constituents and accounts for them in both their planning process and AI models.

This article was first published in the print version of Supercomputing Asia, January 2020.


Copyright: Asian Scientist Magazine.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Sara M. Watson is an independent writer and technology critic based in Singapore. Sara writes and speaks about emerging issues at the intersection of technology, culture, and society. Her work appears in The Atlantic, Wired, The Washington Post, Slate and other publications.
