Chatbot Revolution

ChatGPT-powered chatbots are on the rise, but their reliability remains in question. As more companies build synthetic media into their business models, we examine what these chatbots can, and cannot, do.

Asian Scientist Magazine (Oct. 19, 2023) — An unusual crew member was at the help desk of an agricultural expo held earlier this year in Odisha, a state in eastern India. Ama KrushAI, a chatbot developed by social tech startup Samagra, answered farmers’ queries on topics ranging from cattle diseases and pest management to government schemes. At the core of this chatbot was ChatGPT, a generative AI-based large language model (LLM) launched by OpenAI in November 2022.

An LLM generates content based on statistical patterns derived from a vast body of text. ChatGPT, for instance, was trained on roughly 300 billion words. With this wealth of information, it predicts which word comes next based on how often that word has followed similar text in everything it has seen before.
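The statistical idea can be sketched with a toy bigram model: count which word follows which in a corpus, then predict the most frequent successor. This is a drastic simplification; ChatGPT uses a neural network over billions of words, not raw counts, but the prediction principle is the same. The corpus and function names here are illustrative only.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: tally which word follows which in a tiny
# corpus, then predict the most frequently observed successor.
corpus = "the cow eats grass . the goat eats grass . the cow gives milk".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("eats"))  # "grass" follows "eats" twice in the corpus
print(predict_next("the"))   # "cow" follows "the" twice, "goat" only once
```

An LLM does the same kind of prediction, except over context windows of thousands of words rather than a single preceding word.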

ChatGPT became popular at an unprecedented rate: 100 million people were using it within two months of its release. Companies and social enterprises in many parts of the world have been adding ChatGPT capabilities to their chatbots. The next time a friendly chatbot assists you with booking a flight or finding clothes to buy online, chances are it will be powered by ChatGPT.

While chatbots have been around since the 1960s, their ubiquity is more recent with the growth in web and mobile apps where they act as an interface between a brand and its customers. Whereas these chatbots generally rely on conversational artificial intelligence (AI), ChatGPT packs an extra punch.

A major reason for its popularity is the chatbot-style interface, where you can prompt it to write a poem or suggest vacation plans. This is why, even though ChatGPT can do many other things, such as spotting financial fraud, a major impetus has been to use it in chatbots.


Driving Conversations And Sales

For a large section of youth in developing countries, a career in tech is almost a certain ticket to upward mobility. Many young people, including those without an engineering education, spend hours after work exploring online platforms to upskill themselves in data science, AI, or app development. However, they don’t always receive one-on-one mentorship on these platforms, or not as often as required.

To fill this gap, Jovian, a consumer ed-tech startup based in Bengaluru, recently launched Jobot, a ChatGPT-powered chatbot. Jobot acts as an always-available personal tutor. It uses ChatGPT to provide instant answers to queries, such as the merits and demerits of different programming languages, that students used to look up on Google. Unlike Google or any other search engine, Jobot stores the context of the conversation a student has had with it and answers accordingly.

“With Google search, you need to spend some time refining your searches, and it doesn’t have any context of what questions you have asked it before. But if you relay your doubts to an instructor, they have some context in which to answer your questions. ChatGPT functions in the same way,” Ashish Kankal, a senior software engineer at Jovian, told Asian Scientist Magazine.
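A minimal sketch of how a tutor bot like Jobot can keep this context: every user message and model reply is appended to a running history, and the whole history accompanies each new request, so later answers can refer back to earlier turns. The `call_model` function here is a hypothetical stand-in for a real API call; this is not Jovian's actual implementation.

```python
# Context-keeping chat loop: the full message history is passed to the
# model on every turn, which is what lets a follow-up question like
# "which of those two?" make sense.

def call_model(messages):
    # Placeholder for a real LLM API call; a real implementation would
    # send `messages` to a chat-completion endpoint and return the reply.
    return f"(model reply, given {len(messages)} messages of context)"

history = [{"role": "system", "content": "You are a patient coding tutor."}]

def ask(question):
    history.append({"role": "user", "content": question})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

ask("What are the merits and demerits of Python versus Java?")
ask("Which of those two should a beginner learn first?")  # refers to the last turn
print(len(history))  # 1 system + 2 user + 2 assistant messages
```

A search engine, by contrast, treats each query independently; the growing `history` list is what gives the chatbot its instructor-like memory.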

Other direct-to-consumer brands that don’t necessarily appear tech-intensive at first glance are also embracing ChatGPT-powered chatbots. Think of small brands that sell food or artisanal jewelry through Instagram pages or Shopify stores. Asian Scientist Magazine spoke to Tillage, a Mumbai-based brand that sells locally-sourced food ingredients on WhatsApp. The brand uses a third-party WhatsApp commerce platform to manage its operations, and the platform recently integrated ChatGPT to automate content generation.

“Our entire sales channel and point of sale is on WhatsApp: sending out weekly messages, receiving queries and orders, calculating prices, and sending payment links, order updates and cart reminders. These were all tasks that we were doing manually. [The platform] now takes care of these tasks,” said Shival Shah, co-founder of Tillage, who added that this allows him to focus on solving more important problems and addressing specific customer queries.

While that bot enables sales via WhatsApp, a Hong Kong-based startup targets customers across multiple channels at once, including Facebook and Instagram. Its bot learns from a company’s files and website to add contextual information.

DFV, a fine wine dealer that uses the bot, employs it to communicate more effectively with customers across regions, with automated yet realistic replies to queries on its social media channels.


Nonsense And Lies

As people played around with ChatGPT, many quickly found that it sometimes produced nonsensical or false outputs. To avoid this, more dedicated users figured out the right phrasing to coax the best results out of any query. Known as prompt engineering, this skill is so sought after that there are already hundreds of courses and tutorials devoted to it; today it is even a career in its own right.

But one cannot expect a new user of a ChatGPT-powered chatbot to know the right prompts. Their queries, naturally, will be phrased the way they speak. Mostly, ChatGPT handles these well and may even give the impression of having some sort of common sense. Except when it doesn’t, and dreams up content completely divorced from what was asked of it. This phenomenon, a significant drawback of large language models, is dubbed hallucination. Since ChatGPT generates text based on statistical rules without any understanding of what is factual, it can output coherent but wrong information. There is always a chance that it forms wrong associations: ask it for an actor’s birth year, for example, and it could give you the year of their debut movie instead.

“The more creative a generative model is, the more they tend to hallucinate. Think of hallucination as the other side of creativity. So ChatGPT is very good for creating, but when it is not grounded in facts, that is a problem. That’s always going to be an issue for generative models,” said Pascale Fung in an interview with Asian Scientist Magazine. Fung is the director of the Centre for Artificial Intelligence Research (CAiRE) at the Hong Kong University of Science and Technology.

Hallucinations can cause serious harm, as when a health chatbot delivers advice rooted in misinformation or a chatbot perpetuates racist or misogynistic stereotypes. ChatGPT’s propensity for misinformation stems from bias in its training data and its lack of any notion of truth. It avoids these scenarios to some extent by declining to answer discriminatory questions, but the potential for harm remains, especially with ChatGPT-powered chatbots that may not have stringent filters.

The usefulness of these chatbots depends on how well ChatGPT reasons. In a recent study, Fung and her colleagues evaluated ChatGPT’s reasoning capability.

They analyzed its performance at deductive or top-down reasoning and inductive or bottom-up reasoning. Whereas deductive reasoning is about deriving specific conclusions from general premises, inductive reasoning is about inferring generalizable conclusions from specific cases or events.

The study found that while ChatGPT excels at deductive reasoning, it does poorly at inferring things bottom-up. This means that if you provide ChatGPT with detailed prompts about what you need, it will likely deliver a satisfactory output. But if you ask it to summarize some text or data, it may not perform as well as you expect it to.

Fung added that ChatGPT is bad at mathematical reasoning because it struggles to work with abstract concepts. Even though companies are using ChatGPT-powered bots for education or legal help, they are effective only to an extent, and not when users need help analyzing their data, at least for now.

The current limitations of ChatGPT and bots based on it are an even greater problem for tech based on low-resource languages, a category that includes most languages in India and Southeast Asia. These are languages that don’t have enough annotated text to train AI models on.

Ama KrushAI, the chatbot for farmers, mitigates this challenge by leveraging additional local-language data, such as information about government schemes for farmers, from another project called Bhashini. Launched by the Indian Ministry of Electronics and IT, Bhashini collects and annotates data from government initiatives, publishers and citizen groups in different Indian languages.


Making Better Bots

Both Jovian and Tillage employees also use ChatGPT in other aspects of their work, such as creating outlines for to-do tasks or writing copy such as emails and meeting notes. At Jovian, both employees and students use ChatGPT, and they report productivity gains that allow their teams to focus on the more meaningful aspects of their work. This illustrates how companies and brands can use ChatGPT internally by treating it as a collaborator.

It is when using it for outward-facing functions that companies need to take a more responsible approach. Companies that build or use ChatGPT-powered bots must ensure that the solutions built on top of ChatGPT are ethical and safe for their customers. They need to anticipate how their apps could cause harm and build toxicity detectors that go beyond what OpenAI builds into ChatGPT.

Tech policy experts say that it should always be clear when users are interacting with ChatGPT-generated content. “Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures,” Timnit Gebru, an ethical AI researcher, tweeted recently.

Despite all these concerns, ChatGPT is already shaking up industries. When Bill Gates saw a demo of ChatGPT last year, in which it aced the AP Biology test, he found it as revolutionary as the first demo he saw of a graphical user interface.

“It will change the way people work, learn, travel, get health care and communicate with each other. Entire industries will reorient around it,” Gates wrote of the potential of language models like ChatGPT on his blog.

This article was first published in the print version of Asian Scientist Magazine, July 2023.

Copyright: Asian Scientist Magazine. Illustration: Shelly Liew/Asian Scientist Magazine

Sachin Rawat is a freelance science writer & journalist based in Bangalore, India.
