Ink Gets Intelligent

In the face of changing artistic landscapes and creative credit controversy, some artists, researchers and computer scientists in Asia are exploring how generative AI can be used to preserve traditional art.

By Jill Arul and Denise Gonsalves

Asian Scientist Magazine (Oct. 17, 2023) — Sayeda Samia Nasrin vividly remembers one of the first times she had henna applied as a five-year-old for her aunt’s wedding. The dye paste was administered with care and stained a simple pattern on her small hand.

She recalls being envious of the older girls and women whose hands and feet looked like works of art, decorated with intricate designs of pointed leaves and dotted swirls.

During her teen years, Nasrin worked as a freelance henna and makeup artist and looked for ideas and references online. In 2020, as a student pursuing a bachelor’s degree in computer science and engineering at Chittagong Independent University in Bangladesh, she realized she could combine her two interests—henna and computer science—to explore how artificial intelligence (AI) could be used to enhance the traditional art form.

With the advent of generative AI capable of producing unique text and images, Nasrin saw that the focus was primarily on European or Western art forms. To develop the new field of AI-generated art into one that celebrates the variety of artworks available globally and preserves traditional art styles, Nasrin and other Asia-based researchers have begun training and adapting existing image-generating systems to produce traditional art.


Creating Competition 

Currently, the most common neural networks used for generating images are Generative Adversarial Networks (GANs) and diffusion models like the popular OpenAI system, DALL-E. Notably, OpenAI is also the AI research laboratory responsible for the recent industry-shaking generative AI sensation, ChatGPT.

Introduced in 2014, GANs marked a turning point in AI development. Rather than generating often error-filled images directly from input data in a single pass, the system runs on two adversarial networks—the generator and the discriminator.

The generator is first trained on a set of images of an object. Once it has learned the object’s visual patterns, it starts to produce fake samples from random noise. The fake samples and original images are then fed to the discriminator, which attempts to determine which images were produced by the generator and which come from the original dataset.

The objective of the generator is to trick the discriminator into misidentifying the fakes as originals; the objective of the discriminator is to correctly identify the fake images. In each round of this game, the ‘loser’ updates its model—resulting in continually refined images that become almost indistinguishable from the input data. More recently, however, diffusion models have emerged as frontrunners for users due to their ability to produce highly realistic images.
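This adversarial game can be made concrete with a toy, hand-rolled example. The sketch below is not a real image GAN—the "images" are just numbers drawn from a bell curve around 4.0, and both networks are single affine maps so the gradients can be written by hand—but the alternating generator/discriminator updates follow the same logic the text describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real "images" are just samples from N(4, 0.5); the generator must
# learn to mimic this distribution starting from noise z ~ N(0, 1).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - dr) * real + df * fake)
    c -= lr * np.mean(-(1 - dr) + df)

    # Generator update: the 'trick' step—push D(fake) toward 1.
    df = sigmoid(w * fake + c)
    dx = -(1 - df) * w          # d(-log D(fake)) / d(fake)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is 4.0)")
```

After training, the generator's output drifts toward the real data's mean—the same refinement loop that, scaled up to deep convolutional networks, yields near-photorealistic images.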

However, such powerful and easily accessible tech comes with challenges. As it becomes easier for anyone to create realistic videos and images, many artists worry that the technology might impact their careers and have even begun to push back against AI models that are trained to mimic specific styles without crediting original artists.

“When AI needs to ‘create’ traditional-style artworks, a large number of unlicensed traditional-style works will be used to train neural networks,” Gu Li, associate professor from the Guangzhou Academy of Fine Arts, told Asian Scientist Magazine. “Ethics remain an open issue in the online debate and whether or not traditional style artworks created by AI are protected by copyright law is still controversial.”


Making HennaGAN

In her work, Nasrin leveraged deep convolutional GANs (DCGANs) to generate henna designs that are comparable to designs produced by human artists.

“We had no idea [what it would be like] at the beginning, so we had very high expectations of the images that would be produced,” she shared with Asian Scientist Magazine. “I thought we would get near-perfect designs, but the data available for existing designs was not sufficient to train the model to perfection.”

The first thing Nasrin had to do was collect data, so she went about gathering 10,000 publicly available images of henna designs.

However, because the art form is dyed onto skin rather than painted on an even surface, she also had to go through the tedious task of removing images with ‘noise’ like jewelry, tattoos or nail polish—anything that could confuse the AI. After removing duplicates and images that couldn’t be cleaned, she was left with 1,915 images.
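The article does not describe Nasrin's actual cleaning pipeline, but the deduplication step can be sketched with standard-library tools. The `image_dir` below is hypothetical; this handles only byte-for-byte duplicates, while the harder noise removal (jewelry, nail polish) would still need manual review or a perceptual hash.

```python
import hashlib
from pathlib import Path

def deduplicate(image_dir):
    """Keep one copy of each exact-duplicate image by hashing its bytes.

    Returns the list of file paths that survive deduplication.
    """
    seen, kept = set(), []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:   # first time we see these exact bytes
            seen.add(digest)
            kept.append(path)
    return kept
```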

Next, she fed the data to the GAN to begin learning and producing images. To produce high-quality images, the system must be tuned by adjusting hyperparameters such as the image size, the number of training updates and the number of samples processed before each update. In a series of experimental runs, Nasrin tweaked these hyperparameters, along with the learning rate, to obtain better images.
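A systematic version of this tweaking is a grid search over the hyperparameters. The sketch below is illustrative only—the search space values and the `train_and_score` function are made up (a real version would train the DCGAN for each configuration and report a quality metric such as FID, lower being better)—but the loop structure is the standard one.

```python
from itertools import product

# Hypothetical search space, loosely mirroring the knobs mentioned in
# the text: image size, batch size (samples per update), learning rate.
search_space = {
    "image_size": [64, 128],
    "batch_size": [32, 64, 128],
    "learning_rate": [1e-4, 2e-4],
}

def train_and_score(image_size, batch_size, learning_rate):
    """Placeholder for a full DCGAN training run; returns a made-up
    score (lower is better) so the loop is runnable as-is."""
    return abs(learning_rate - 2e-4) * 1e4 + abs(batch_size - 64) / 64

# Try every combination and keep the configuration with the best score.
best = min(
    (dict(zip(search_space, combo)) for combo in product(*search_space.values())),
    key=lambda cfg: train_and_score(**cfg),
)
print(best)
```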

Although the AI generated henna designs for Nasrin, some were on warped hands and missing a few fingers. While her work proved that DCGANs could produce henna designs, she remains conflicted about AI’s role in producing authentic and high-quality henna patterns. With AI’s help, traditional art can become more accessible and affordable to people, she said. “This is great, but my concern is, it might diminish the value of traditional art by lowering its perceived value or authenticity.”


Entering Traditional Art Landscape 

In a study from Beihang University in Beijing, researchers explored how AI can be used to classify and create traditional artworks—especially traditional Chinese landscape paintings. One of the paintings the researchers looked at was A Panorama of Rivers and Mountains by Wang Ximeng. The painting is regarded as the best example of the traditional Chinese blue-green landscape painting technique. The project was split into two parts—using AI to differentiate traditional Chinese paintings from Western oil paintings, and producing artwork in the style of traditional Chinese paintings with generative AI.

Tang Yingxi, a researcher at Zhicheng International Academy, was one of the collaborators on the project. With his background in classical computer vision models, he and others in the team began training different AI models. To do that, they gathered three sets of artworks—western oil paintings, traditional Chinese paintings and cropped images from A Panorama of Rivers and Mountains.

Then, they experimented with several classification models before moving on to the creation phase of the project using both DALL-E and the NightCafe generator. Later on, the team invited professional traditional Chinese painters to evaluate the artworks and identify whether their AI model had effectively simulated the blue-green landscape technique.

It had. The team’s project showed that AI can be used to identify and create not only artworks styled like traditional Chinese paintings, but also specific styles within the genre. Although some researchers noted that AI could not match the emotional depth found in human works, it could accelerate the creation of Chinese paintings by inspiring painters’ imaginations.
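The classification half of such a project can be illustrated with a deliberately simplified stand-in. The data below is synthetic—each "painting" is reduced to a hypothetical three-bin colour histogram (blue, green, warm tones), with blue-green landscapes skewing cool and the oil paintings skewing warm—and a nearest-centroid rule does the separating; the Beihang team's actual models and features were of course more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic colour histograms: rows sum to 1 across (blue, green, warm).
chinese = rng.dirichlet([6, 5, 1], size=50)   # blue-green heavy
western = rng.dirichlet([1, 2, 6], size=50)   # warm-tone heavy

X = np.vstack([chinese, western])
y = np.array([0] * 50 + [1] * 50)             # 0 = Chinese, 1 = Western

# Nearest-centroid classifier: predict the class whose mean histogram
# is closest to the input.
centroids = np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

Because the two palettes barely overlap, even this crude rule separates them almost perfectly—a reminder that colour alone carries much of the stylistic signal in blue-green landscapes.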

While AI can never replace the historical value of a hand-painted cultural artwork, it can expand opportunities for people to appreciate and enjoy traditional art. In a study from Lanzhou Resources and Environment Voc-Tech University in Gansu, China, researchers evaluated the impact of AI in cross-cultural dissemination. They found that audiences prefer to learn about culture through personal experience—something that generative AI could potentially play an important role in, given its ability to realistically replicate significant cultural artifacts.


Capturing An Audience

As researchers, engineers and artists harness generative AI to produce a variety of artworks, it can become difficult for audiences to distinguish between human and AI creations. To find out if viewers harbor any bias towards or against AI-generated artworks, Associate Professor Gu Li and Professor Yong Li from the Guangzhou Academy of Fine Arts conducted two studies that surveyed both art experts and non-experts.

The first study separated a group of 106 Chinese participants into two groups. One group was told the digital paintings they saw were generated by AI and the second group was told that the paintings were created by famous artists. However, all the paintings—six in a Western style and six in a Chinese style—were created by human artists. The participants were then asked a series of questions to determine how much they liked the paintings and how willing they were to buy or collect them.

The next study included a new set of participants made up of 143 experts and 156 non-experts to compare the difference that expertise makes.

“The study builds on previous research on in-group preferences—where observers feel a sense of identity and belonging when looking at artworks from their own culture and would give higher aesthetic evaluations compared to those from another culture,” shared Gu. “We expected that observers would show an in-group preference for AI-generated Chinese artworks over AI-generated Western artworks. This was true among the non-experts, while the expert group showed no particular preference for either.”

When it came to preferences between AI-generated artwork and artist-made work, experts rated the AI-generated works lower in both likeability and collectability, while non-experts showed no preference.

“On the positive side, AI would empower the innovative development and cultural transmission of traditional art in China, especially traditional arts on the verge of being lost,” shared Gu. “In addition, we think generative AI would promote education reform. When the styles or artworks could be easily generated by AI, it would be extraordinarily important for art educators to pass on the connotations and cultural essence of traditional Chinese art through effective teaching techniques.”

In their paper published in Frontiers in August 2022, Gu and Yong made the additional effort to distinguish between AI as a tool and as a creator. “Recently, platforms such as ChatGPT and Midjourney have stirred up extensive discussions in art colleges and the literary field. Those who consider AI as an agent may worry that it will replace humans, but the technological foundation of generative AI is brain-like neural networks—while the technology has made amazing advances, it does not rival the human brain,” shared Gu. “It’s important to consider AI as a tool rather than an agent. It is not replacing us; it is collaborating with us to co-create.”

This article was first published in the print version of Asian Scientist Magazine, July 2023.

Copyright: Asian Scientist Magazine. Illustration: Ajun Chuah

Denise is an all-around storyteller based in Manila, Philippines. She’s passionate about telling stories that intersect travel, culture, and the natural world.
