Web Of Lies

False, misleading and potentially harmful content can spread fast on social media, enabled by algorithms that thrive on engagement. Can big tech platforms cut through the noise?

Asian Scientist Magazine (Oct. 31, 2022) — When social networking platforms entered the media ecosystem, incisive headlines and striking visuals became key elements for virality. But in today’s saturated landscape where information travels fast, speed has emerged as the primary tactic to beat out the competition and build engagement. At the same time, media institutions have to fight alternate narratives, half-truths and outright fabricated content.

Social media is plagued by an expanding information disorder, perpetuated by the platforms’ own algorithms and rules for success. In response, these platforms have deployed mechanisms to counteract misinformation, from enlisting content moderators to deploying artificial intelligence (AI) tools. Facebook, for example, has partnered with third-party fact-checkers and previously removed hundreds of ‘malicious’ fake accounts linked to a Philippine political party.

Although big tech has begun to intervene, researchers who are studying this messy misinformation landscape can’t help but ask: Are tech giants doing enough, and can they be held accountable?

Machinations Of Manipulation 

Networks of misinformation and disinformation have altered the online media landscape, wielding the power to shape public perception. Masterminds behind these networks construct targeted and consistent messaging that would appeal to certain audiences. The messages are then disseminated and amplified by legions of bot accounts, paid trolls and rising influencers.

In the Philippines, for example, such tactics have fueled public health problems like vaccine hesitancy. They have also been used to advance political agendas in national elections and to enable human rights violations, including fabricated criminal charges.

But their success, in part, is enabled by the social media infrastructure itself. Platforms reward engagement: more likes and shares increase the likelihood that a post appears in users’ feeds. Meanwhile, a large burst of tweets containing key phrases can catapult a topic to the trending list.
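To make that dynamic concrete, here is a deliberately simplified Python sketch of engagement-weighted ranking. The scoring formula, weights and example posts are invented for illustration and bear no relation to any platform’s actual, proprietary system.

```python
# Toy illustration only: a minimal engagement-weighted feed ranker.
# Real platform ranking systems are proprietary and far more complex;
# the weights, decay factor and posts here are purely hypothetical.

posts = [
    {"id": "a", "likes": 12, "shares": 3, "age_hours": 2},
    {"id": "b", "likes": 450, "shares": 120, "age_hours": 8},
    {"id": "c", "likes": 5, "shares": 0, "age_hours": 1},
]

def engagement_score(post, like_w=1.0, share_w=3.0, decay=0.1):
    """Score a post by weighted engagement, discounted by age."""
    raw = like_w * post["likes"] + share_w * post["shares"]
    return raw / (1.0 + decay * post["age_hours"])

# Posts with more likes and shares rise to the top of the feed.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # -> ['b', 'a', 'c']
```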

As people within the same network are likely to have similar perspectives, recommendation algorithms arrange content to match those perceived preferences. This traps users in a bubble—an echo chamber—shielded from potentially opposing views.
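The feedback loop can be sketched in the same spirit: a toy recommender that ranks content by its similarity to a user’s past engagement will, by construction, keep surfacing more of the same. The preference vectors and candidate items below are purely hypothetical.

```python
# Toy illustration only: ranking candidate content by similarity to a
# user's existing preference profile, which narrows what gets surfaced.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Each vector crudely encodes topical leaning: [politics_A, politics_B, sports].
user_profile = [0.9, 0.1, 0.2]  # built from the user's past likes and shares
candidates = {
    "pro_A_post": [0.95, 0.05, 0.0],
    "pro_B_post": [0.05, 0.95, 0.0],
    "sports_post": [0.1, 0.1, 0.9],
}

# Ranking by similarity keeps surfacing content close to existing views.
ranked = sorted(candidates, key=lambda k: cosine(user_profile, candidates[k]), reverse=True)
print(ranked)  # -> ['pro_A_post', 'sports_post', 'pro_B_post']
```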

Even users wishing to verify the information they encounter may struggle to find the answers they need amid the deluge of online information, Dr. Charibeth Cheng, associate dean of the College of Computer Studies at De La Salle University in the Philippines, told Asian Scientist Magazine. For example, Google’s results are shaped by search engine optimization techniques. As such, sites that contain the relevant key phrases and receive the most clicks end up topping search rankings, potentially obscuring more reliable and robust sources.

“Constructing online discourse is not a matter of availability but of visibility,” explained Fatima Gaw, assistant professor at the University of the Philippines’ Department of Communication Research, in an interview with Asian Scientist Magazine. “Robust information sources cannot win in the game of visibility if they do not have the mastery of the platform.” For example, she explained that creators of biased or misleading content can still categorize their posts as ‘news’ to appear alongside other legitimate media sources, essentially guaranteeing their exposure to the audience.

Likewise, in Indonesia, ‘cyber troops’ used deceptive messages to swing the public in favor of government legislation and drown out critics, according to a report published by the ISEAS-Yusof Ishak Institute, a Singapore-based research organization focused on sociopolitical and economic trends in Southeast Asia. These controversial policies included the easing of pandemic restrictions to encourage a return to normal activities just a few months into the COVID-19 outbreak, as well as law revisions that turned an autonomous corruption eradication body into a government agency. Political actors employ these cyber troops to control the information space and manipulate public opinion online, backing them with funds and legions of bot accounts to game the algorithms and spread misleading content.

“Cyber troop operations not only feed public opinion with disinformation but also prevent citizens from scrutinizing and evaluating the governing elite’s behavior and policy-making processes,” the authors wrote.

Disinformation machinery therefore relies on deeply understanding the types of content and engagement that these platforms reward. And because social media thrives on engagement, there is little incentive to stop content that has the power to set off the next big trend.

“Platforms are complicit,” Gaw emphasized. “They enable actors of disinformation to manipulate the infrastructure in massive and entrenched ways. This allows these actors to stay in the platforms, deepen their operations and ultimately profit from the disinformation and propaganda.”

Reshaping Realities 

Another worrying disinformation ecosystem exists on YouTube, where manipulation tends to be condoned thanks to the platform’s algorithms and content moderation policies—as well as their lack of enforcement. For one, the lengthy video format makes it possible to embed false and deceptive content within a narrative in more intricate, less obvious ways.

“YouTube also has a narrow definition of disinformation and it is often contextualized to Western democracies,” Gaw said.

Flagging disinformation goes beyond discerning facts. Misleading content can contain true information, such as an event that really happened or a statement that was said, yet the interpretation can be twisted to suit a certain agenda, especially when presented without context.

Gaw added that YouTube’s recommendation system exacerbates the problem by helping to construct a “metapartisan ecosystem, where one lie becomes the basis of another to build a distorted view of political reality biased toward a certain partisan group.”

TikTok has also drawn flak for fueling viral disinformation and historical distortion during the Philippine elections earlier this year, as reported in the international press. The TikTok videos typically highlight the wealth and infrastructure built under a former president, while glossing over the country’s ensuing debt as well as corruption and human rights cases raised against that political family.

Social media platforms have further legitimized the rise of content creators as alternative voices, who are now perceived as equally credible as, if not more trustworthy than, traditional news media, history books and scholarly institutions.

Even without the credentials of expertise, online influencers can “create proxy signals of credibility by presenting their ‘own research’ while projecting authenticity as someone outside the establishment,” explained Gaw. “Their rise also comes against the backdrop of declining trust in institutions, particularly the media, as an authority on news and information.”

The digital media environment is one where every issue is left up to personal perception, and perhaps most significantly, where established facts are fallible. However, Cheng believes that tech platforms cannot remain neutral.

“Tech companies should play a bigger role in being more socially responsible, and be willing to regulate the content posted, even if taking it down may lead to negative business effects.”

Treating The Information Disorder 

To counter the spread of false information and deceptive narratives, AI-powered language technologies can potentially analyze text or audio and detect problematic content. Researchers are developing natural language processing models to better recognize patterns in texts and knowledge bases.

For example, content-based approaches can check for consistency and alignment within the text itself. If an article is supposed to be about COVID-19, the technology can look for unusual instances of unrelated words or paragraphs, which may hint at misleading content.
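As a rough sketch of such a consistency check, the snippet below compares each sentence of an article against its declared topic and surfaces outliers. It assumes the scikit-learn library is available; the topic, sentences and workflow are invented for illustration and are not a description of any specific deployed system.

```python
# A minimal sketch of a content-based consistency check using TF-IDF
# similarity; the topic and sentences are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "COVID-19 vaccines and public health guidance"
sentences = [
    "Health authorities recommend COVID-19 booster shots for older adults.",
    "COVID-19 vaccine trials reported strong efficacy against severe disease.",
    "A celebrity's secret diet will change your life forever.",  # unrelated
]

# Represent the declared topic and each sentence as TF-IDF vectors.
vectorizer = TfidfVectorizer().fit([topic] + sentences)
topic_vec = vectorizer.transform([topic])
sentence_vecs = vectorizer.transform(sentences)

# Sentences with very low similarity to the declared topic may hint at
# unrelated or inserted content and can be queued for human review.
similarities = cosine_similarity(sentence_vecs, topic_vec).ravel()
for sentence, score in zip(sentences, similarities):
    print(f"{score:.2f}  {sentence}")
```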

Another approach called textual entailment checks whether the meaning of one fragment, such as a sentence or phrase, can be inferred from another fragment. Cheng noted, however, that if both fragments are false yet align with each other, the problematic content can likely still fly under the radar—much like Gaw’s earlier observation on one lie supporting another lie.
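A minimal entailment check can be assembled from an off-the-shelf natural language inference model. The sketch below assumes the Hugging Face transformers library and the public roberta-large-mnli checkpoint; the premise and hypothesis are hypothetical examples, not drawn from any fact-checking pipeline described in this article.

```python
# A minimal sketch of textual entailment (natural language inference),
# assuming the transformers library and the roberta-large-mnli checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Large randomized trials found no clinical benefit of the drug for COVID-19."
hypothesis = "The drug is an effective treatment for COVID-19."

# Encode the premise-hypothesis pair and score the three NLI classes.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# For this checkpoint: 0 = contradiction, 1 = neutral, 2 = entailment.
for label, p in zip(["contradiction", "neutral", "entailment"], probs.tolist()):
    print(f"{label}: {p:.2f}")
```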

“If we have a lot of known truths, matching and alignment techniques can work well. But because the numerous truths in the world are constantly changing and constantly need to be curated, the model needs to be updated and retrained as well—and that takes a lot of computational resources,” Cheng said.
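In practice, the matching Cheng describes often begins by retrieving the closest statement from a curated reference base, which can then be compared against the claim, for instance with an entailment model as sketched above. The snippet below assumes the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint; the “known truths” and the claim are illustrative only.

```python
# A minimal sketch of matching a claim against a small curated base of
# verified statements; facts, claim and model choice are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

known_truths = [
    "Ivermectin is not an approved treatment for COVID-19.",
    "COVID-19 vaccines underwent large-scale clinical trials.",
    "The Philippine general election was held in May 2022.",
]
claim = "Ivermectin cures COVID-19 within days."

# Embed the reference statements and the claim, then retrieve the
# closest reference by cosine similarity.
truth_emb = model.encode(known_truths, convert_to_tensor=True)
claim_emb = model.encode(claim, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, truth_emb)[0]
best = int(scores.argmax())
print(f"Closest reference ({float(scores[best]):.2f}): {known_truths[best]}")

# As Cheng notes, this only works if the reference base is kept current:
# new facts must be added and re-encoded as the world changes.
```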

Evidently, developing technologies for detecting false or misleading content would first depend on building comprehensive references for comparing information and flagging inconsistencies. Another challenge that Cheng highlighted is the lack of contextually rich Asian language resources, which hampers the development of linguistic models for analyzing texts in local vernaculars.

However, the problem is much more complex. Decision making is never solely a rational affair, but rather a highly emotional and social process. Disputing false information and presenting contrary evidence may not be enough to alter perspectives and beliefs, especially deeply ingrained ones.

When ivermectin was touted as an effective drug against COVID-19, stories from recovered patients surfaced online and swiftly spread through social messaging apps. Many advocated for the drug’s clinical benefit, putting a premium on personal experiences that could have been explained by mere coincidence and other confounding variables. A single success story from a non-experimental setting should not have outweighed the evidence from large-scale scientific trials.

“It is not about facts and lies anymore; we need a more comprehensive way to capture the spectrum of false and manipulative content out there,” said Gaw.

Moreover, current moderation responses such as taking down posts and providing links to reliable information centers might not undo the damage. These interventions do not reach users who were already exposed to the problematic content before its removal. Despite these potential ways forward, technological interventions are far from a silver bullet for disrupting disinformation.

The rise of alternative voices and distorted realities compels researchers to delve into why such counternarratives appeal to different communities and demographics.

“Influencers are able to embody the ‘ordinary’ citizen who has been historically marginalized in mainstream political discourse while having the authority within their communities to advance their political agenda,” Gaw continued. “We need to strengthen our institutions to gain people’s trust again through relationship and community building. News and content needs to engage with the people’s real issues, including their resentments, difficulties and aspirations.”

 

This article was first published in the print version of Asian Scientist Magazine, July 2022.

Copyright: Asian Scientist Magazine. Illustration: Shelly Liew/Asian Scientist Magazine

Erinne Ong reports on basic scientific discoveries and impact-oriented applications, ranging from biomedicine to artificial intelligence. She graduated with a degree in Biology from De La Salle University, Philippines.
