Language Models Running Wild

Language modeling, the task of predicting text, is one of the hottest trends in AI right now. But it has deep flaws, parroting racism, sexism, misinformation and more, making substantial monitoring key to developing language models.

Asian Scientist Magazine (Nov. 2, 2022) — When you think about the world of artificial intelligence, it can seem like we’re a long way off from genuine machines that think and reason. In truth, though, AI is already all around us, with applications in almost every industry and sector imaginable. And although modern AI systems are still some way off from reaching the high level of intelligence displayed by humans, the rate of progress over recent years has been staggering to say the least.

I didn’t write this opening paragraph. Not even a word. I just went online and searched for websites that use AI-based language prediction models. On one such website, I entered “Large Language Model and Its Future” as the headline, and voilà: I got the opening paragraph in just a few seconds.

Large Language Models (LLMs) are AI tools trained on vast amounts of text freely available from sources such as digitized books, Wikipedia, newspapers and articles. The models can read, summarize and translate texts and predict upcoming words in a sentence, letting them generate text similar to how humans would speak or write. As tech giants like Google, Meta, Microsoft, Alibaba and Baidu race to develop their own language models, it is hard to predict how they might impact consumers. Little, if any, effort has been made by governments, scientific institutions and corporations in Asia and other parts of the world to set and implement policies and ethical boundaries around the use of LLMs.

Intelligent Guesswork 

Researchers trace the origin of language models to the 1950s, when English mathematician and philosopher Alan Turing proposed that a machine should be considered intelligent if a human could not tell whether another human or a computer was responding to their questions. In later years, technological advancements gave rise to natural language processing, or NLP, which allowed computers to learn what makes a language a language by identifying patterns in text.

An LLM is a much more advanced and sophisticated application of NLP. For example, a popular AI language model called GPT-3, the same application I used to write this article’s introduction, was trained on roughly 570 GB of text, learning statistical relationships among hundreds of billions of words, and can generate sentences, paragraphs and even entire articles through language prediction. In fact, researchers have even used the model to write a scientific research article and submit it for publication in a peer-reviewed journal.

Nancy Chen, an AI scientist at Singapore’s Agency for Science, Technology and Research (A*STAR), told Asian Scientist Magazine that the basis of such language models is simple. “The model basically anticipates the subsequent words, given that it got the first several,” she said. It works in a similar manner to how a human might guess the missing words in a conversation.
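To illustrate the idea, a few lines of code can reproduce this next-word guessing with an openly available model. The sketch below assumes the Hugging Face transformers library and the small, publicly released GPT-2 model (a smaller relative of GPT-3, which is only accessible through a paid API); the prompt is the same headline used for this article’s introduction, though the website I used is not necessarily built on this library.

    # A minimal sketch of next-word prediction, assuming the Hugging Face
    # "transformers" library and the openly available GPT-2 model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Large Language Model and Its Future"
    # The model repeatedly guesses the most likely next word, one token at a time.
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])

Given only the headline, the model continues the text one predicted word at a time, which is the same basic mechanism, at a much larger scale, behind GPT-3 and other LLMs.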

Resource Intensive 

These LLMs can be tremendously useful to both governments and private industry. For instance, service-oriented companies can develop better chatbots to respond to unique customer queries, while governments may use the models to summarize public comments on a policy issue before making amendments. LLMs can also be used to simplify technical research papers and reports for a general audience. However, developing an LLM is resource-intensive, so for now the race is mostly limited to big tech companies.

“Big companies are all doing it because they assume that there is a very large lucrative market out there,” Shobita Parthasarathy, a policy researcher with the Ford School of Public Policy, University of Michigan, told Asian Scientist Magazine.

Researchers like Parthasarathy who study these models and their potential uses say that the models need to be closely scrutinized, especially because LLMs are trained on historical datasets.

“History is often full of racism, sexism, colonialism and various forms of injustice. So the technology can actually reinforce and may even exacerbate those issues,” Parthasarathy said.

Parthasarathy and her team recently released a 134-page report pointing out how LLMs can have a tremendous socio-environmental impact. As LLMs become widespread, they will require huge data centers, which can displace marginalized communities. Those living near data centers will experience resource scarcity, higher utility prices and pollution from backup diesel generators, the report said. Operating such data centers would also require significant human resources and natural resources such as water, electricity and rare earth metals. This would ultimately exacerbate environmental injustice, especially for low-income communities, the report concluded.

No Rules 

Because this is still a growing field, there are as yet no clear standards or well-defined rules and regulations on what these language models should be allowed, or not allowed, to do.

As of now, “they’re all privately driven and privately tested, and companies get to decide what they think a good large language model is,” Parthasarathy said.

Additionally, like every other technology, LLMs can be misused.

“But we should not stop their development,” Pascale Fung, a responsible AI researcher at Hong Kong University of Science and Technology, told Asian Scientist Magazine. “The most critical aspect is putting principles of responsible AI into the technology [by] assessing any bias or toxicity in these models and making the necessary amendments.”

Researchers studying LLMs believe that there should be more comprehensive data privacy and security laws. That could be achieved by making companies transparent about their input datasets and algorithms, and by creating a complaint system where people can register problems or potential issues, said Parthasarathy.

“We really need broader public scrutiny for large language model regulation because they are likely to have enormous societal impact.”


This article was first published in the print version of Asian Scientist Magazine, July 2022.

Copyright: Asian Scientist Magazine. Illustration: Shelly Liew/Asian Scientist Magazine

Saugat Bolakhe is a freelance science journalist based in Nepal. He writes about life science and the environment. When he is not reporting, he can be found hiking outdoors or humming songs.
