Asian Scientist Magazine (Aug. 28, 2023) — New toys and gadgets have always found their way into schools, from the Tamagotchi to calculators and smartphones. While some are relatively benign, others can be highly disruptive to a student’s education. ChatGPT, one of the best-known generative artificial intelligences (GenAIs), is the latest tool to rapidly enter the education scene, thanks to its ability to converse and produce written text like a human. Within months of its initial release, millions of users explored ChatGPT’s abilities, ranging from content creation and text translation to code debugging.
Besides writing, other GenAIs, such as DALL-E and Midjourney, can generate original digital images from text prompts; some can even produce entire videos from a script or blog post. With such fascinating capabilities, ease of access and growing popularity, young and curious minds are inevitably finding ways to use these tools in the classroom.
School boards are now faced with the difficult challenge of setting guidelines on how to use these technologies to enhance learning while ensuring a level playing field for all students.
CAN THE AI DO MY HOMEWORK?
Among a highly tech-literate generation, the launch of GenAIs saw students zealously applying these tools to their assignments. In a matter of minutes, they could churn out plausible essays, produce quick answers to quizzes and succinctly summarize reports. In a survey reported by Forbes, 89 percent of students confessed to using ChatGPT to complete homework assignments. Some state governments even blocked the service from their networks.
OpenAI, the developer of ChatGPT, is eager to work with educators to find solutions. In response to the proliferation of AI-assisted cheating, OpenAI released a preliminary classifier to identify text produced by its proprietary technology.
OpenAI’s classifier positively identified only 26 percent of AI-generated content, while incorrectly flagging nine percent of human-written text. Plagiarism detection giants like Turnitin have also released AI detection software but face similar issues, flagging essays written by students themselves as AI-fabricated. Moreover, outputs from ChatGPT and other GenAIs will likely only become harder to detect as they learn from the feedback of the very classifiers used to catch them.
LET THE MACHINES TAKE OVER
Dr. Toby Walsh, a professor of AI at the University of New South Wales, Australia, believes that these are signs that we are testing for the wrong things.
“The funny thing is, we don’t set essays because there’s a shortage of essays, we set essays because that’s a way to measure people’s ability to build arguments, to think critically about a topic,” said Walsh in an interview with Supercomputing Asia.
When the task of content synthesis is surrendered to machines, we can focus more on thoughtful curation of the material. As an example, Walsh suggested that students could use ChatGPT to prepare an essay, which they then analyze and critique. This directly tests the nuanced skills needed for building arguments, critical analysis and inquiry.
When the modern calculator was first introduced, it raised similar controversy among educators but ultimately proved to be a positive force in the classroom. Routine arithmetic could be outsourced, boosting productivity and allowing students to focus on more advanced mathematics. Just as the calculator is now built into everyday technology, from our watches to our phones and computers, GenAIs have the potential to become a routine part of our lives.
Many software companies have already embraced this change and are incorporating AI assistants into their toolkits. OpenAI’s technology now powers Microsoft 365 Copilot, the AI assistant built into the Microsoft 365 suite, while productivity application Notion introduced “Notion AI” to automate tedious tasks on its platform. Just as the rest of the world moves to embrace these tools, students will need to know how to use them as they join the workforce.
A NEW PARADIGM FOR TEACHING AND LEARNING
As schools integrate GenAIs, teachers must consider which of their features are most useful in the classroom, such as summarizing or synthesizing texts and breaking down concepts into easily understandable pieces.
For students, these tools are ideal for compiling revision notes or chunking lengthy articles into digestible bits. Instead of rewriting personal notes, students can use the software to summarize class material more effectively and customize it according to their needs and learning styles. For instance, students can instruct ChatGPT to arrange history notes in chronological order in preparation for a biographical essay, or to sort them by theme when revising for an exam.
AI also makes for an excellent personal tutor, allowing students to ask questions without fear of judgment. “You can sit there and ask it questions, and it doesn’t matter how naive or repetitive the questions are,” noted Walsh. As an example, a challenging topic for modern students is learning programming languages. These tools can provide well-annotated code with clear explanations of each line to facilitate better learning.
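To illustrate what such AI-assisted tutoring can look like in practice, below is a minimal sketch of how a student might request a line-by-line explanation of code programmatically. It assumes the openai Python package (version 1.x) and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are purely illustrative.

```python
# Minimal sketch: asking a chat model to explain code line by line.
# Assumes the `openai` package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

prompt = (
    "You are a patient programming tutor. Explain this Python function "
    "line by line for a beginner:\n\n"
    "def factorial(n):\n"
    "    return 1 if n <= 1 else n * factorial(n - 1)"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the tutor-style explanation
```

The same pattern works just as well through the chat interface itself; the point is simply that the student can keep asking follow-up questions until every line makes sense.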
But it’s not just students who can benefit from GenAIs. Educators can use these tools in their lesson plans to prepare revision guides for students, design multiple-choice quizzes and even mark essays by highlighting gaps in logic.
Additionally, GenAI can provide individualized learning tailored to each student. One company offering such services, Carnegie Learning, uses AI to track a student’s progress and plan lesson activities. This level of personalization goes beyond what a single teacher can do for a full classroom and can make the learning process more effective for students.
However, these tools are not without flaws. GenAIs are still prone to hallucinating, confidently presenting fabricated information as fact. Both students and instructors need to carefully vet content synthesized by these tools before using it.
SHOULD WE BE WORRIED ABOUT GenAI?
Even as society embraces the new technology, we must be aware of the concerns raised over its use in the wider context. The promotion of GenAIs in education is surrounded by a haze of legal and ethical questions.
One issue concerns the confidentiality of information shared through these applications. ChatGPT, which explicitly states that conversations are recorded to improve the chatbot, suffered a data breach on March 20, 2023, leaking some users’ conversations and payment information. In the wake of this breach and broader privacy concerns, Italy temporarily banned the service, while other European nations have imposed strict regulations.
In the US, there are ongoing class-action lawsuits over the use of copyrighted data to train these algorithms. Training models on such data raises concerns over the intellectual property rights of artists, as well as over who owns the works those models produce.
Nevertheless, experts are hopeful that a more sustainable solution will emerge. The music and film industries faced a similar problem in the past with the launch of the file-sharing service Napster. The surge in internet piracy was eventually tackled by streaming services, which let users access content while maintaining copyright standards.
According to Walsh, “We’re going to have a similar evolution in terms of GenAI, whether we’re returning value back to the people whose text, computer code or images that it was trained on.”
As the works produced by GenAIs generate tangible returns, lawmakers wrestle with policies on ownership rights. When faced with the conundrum of two identical pieces of work produced by the same GenAI, Margaret Esquenet, a partner at the law firm Finnegan, noted in a Forbes article that under US law, “Even assuming that the rightful copyright owner is the person whose queries generated the AI work, the concept of independent creation may preclude two parties whose queries generated the same work from being able to enforce rights against each other.”
The new dilemmas raised over the fair use of GenAIs are not easy to tackle, and there are many perspectives to consider. As part of embracing GenAIs in the classroom, teachers also need to start posing these questions for students to think about.
TEACHERS VS GenAIs
The sudden emergence of GenAIs capable of producing high-quality work has left some wondering whether these machines will eventually supplant teachers altogether. According to Walsh, the answer is no.
Teachers remain a source of inspiration for their students and have the ability to understand their psychology and mental barriers, which are crucial factors in education, Walsh elaborated. It remains unclear whether the social intelligence and emotional connection that teachers provide can ever be replicated by machines. For now, the role of educators remains paramount in shaping the minds of future generations.
That said, educators now face a daunting challenge in ensuring the appropriate use of GenAIs in the classroom. As these machines continue to evolve and become more human-like and intelligent, the challenges will only grow more complex. Regardless of the restrictions imposed, educators will play a crucial role in this process by educating students about the potential risks and benefits of GenAIs and encouraging their responsible and ethical use.
–
This article was first published in the print version of Supercomputing Asia, July 2023.
Copyright: Asian Scientist Magazine.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.