
AsianScientist (Aug. 3, 2018) – When it comes to allowing others inside our heads, most of us only crack open the door for a select few, likely close family members or trusted psychologists. But if you were really struggling, would you consider sharing your innermost thoughts with a robot?
Robot therapists aren’t as far-fetched as you might think. In the 1960s, Joseph Weizenbaum of the Massachusetts Institute of Technology’s Artificial Intelligence Laboratory developed ELIZA, an early chatbot that could emulate the conversations of a psychotherapist. Since then, many increasingly sophisticated applications bringing artificial intelligence (AI) into the mental health realm have emerged.
The brainchild of Stanford psychologists and AI experts, Woebot combines machine learning and natural language processing (NLP) to assess users’ moods and offer them appropriate cognitive behavioral therapy. Emotionally intelligent chatbot Wysa, developed by Indian entrepreneurs Jo Aggarwal and Ramakant Vempati, uses AI and NLP techniques to track users’ emotions and act as their virtual mental wellness coach. Singapore-born Cogniant integrates AI technology with face-to-face therapy and aims to prevent mental illness relapses by monitoring existing patients and assisting them with therapy goals.
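The models behind these services are proprietary, but the basic loop they describe is simple: take a user's message, estimate its emotional tone, and choose a therapeutic response accordingly. The Python sketch below illustrates that loop with a hypothetical keyword-based mood score standing in for the trained NLP models such apps actually use; the word lists and CBT-style prompts are purely illustrative.

```python
# Illustrative sketch of a mood-tracking chatbot loop (not Woebot's or Wysa's
# actual code). A trained NLP model would replace the keyword scoring below.

NEGATIVE_WORDS = {"sad", "anxious", "hopeless", "tired", "worried", "alone"}
POSITIVE_WORDS = {"happy", "calm", "grateful", "excited", "proud", "relaxed"}

# Hypothetical CBT-style prompts keyed to a coarse mood estimate.
PROMPTS = {
    "low": "That sounds hard. What thought is weighing on you most right now?",
    "neutral": "Thanks for checking in. How would you rate your day so far?",
    "high": "Glad to hear it! What went well that you'd like to do more of?",
}


def estimate_mood(message: str) -> str:
    """Return a coarse mood label from simple keyword counts."""
    words = message.lower().split()
    score = sum(w in POSITIVE_WORDS for w in words) - sum(
        w in NEGATIVE_WORDS for w in words
    )
    if score < 0:
        return "low"
    if score > 0:
        return "high"
    return "neutral"


def respond(message: str) -> str:
    """Pick a response template based on the estimated mood."""
    return PROMPTS[estimate_mood(message)]


if __name__ == "__main__":
    print(respond("I feel anxious and tired all the time"))
    # -> "That sounds hard. What thought is weighing on you most right now?"
```

In practice, commercial apps replace the keyword step with statistical language models trained on labelled conversations, but the assess-then-respond structure is the same.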
AI and mental health: what are the risks?
In 2018, an estimated 340 million people in Asia will require mental health services. With shortages of mental health professionals, rural isolation, high costs and stigma being the main obstacles to treatment, AI-centered mental health innovations could be particularly pertinent in the region. Yet could involving AI in something as delicate as mental health pose any threat?
Given that AI is currently used for mental health diagnostics and wellness coaching rather than treatment, Professor Pascale Fung of the Hong Kong University of Science and Technology says privacy is the main concern.
“For AI to do a good job, it needs access to patient records, past history and family medical knowledge. Security and safety of this data is very important. There are concerns about AI being hacked or data being stolen for other purposes,” she says. “On the other hand, that’s something we should be worried about when dealing with patient records anyway.”
Indeed, researchers have noted that the misuse of sensitive information shared between a patient and an AI can have significant consequences, both for the user and for the integrity of the profession. To avoid distrust, it’s important for developers to fully disclose their data policies to users from the beginning, says Mr. Neeraj Kothari, co-founder of Cogniant; users can then make an informed decision about what they share.
“We have signed an agreement to say we won’t sell data to a third party,” he adds. “The best way we can progress is to demonstrate through action that we are here to help, not to harm.”
Another risk is that humans could become attached to a therapy chatbot, as was the case for many of ELIZA’s ‘patients,’ who believed that they were truly conversing with a human. This gave rise to the term ‘the ELIZA effect,’ which describes people’s tendency to treat computer behaviors as equivalent to human behaviors.
However, while acknowledging the need for more research in this area, Fung doesn’t believe this problem is unique to AI—people also become attached to devices such as mobile phones and television sets, she says.
“In every generation, there have always been concerns with new technology. When it becomes obsessive, people go to professionals for help.”
People may also joke with or lie to AI, but such instances can be minimized through the use of deception detection techniques like facial recognition, says Kothari.
“In general, if AI is designed for the benefit of patients and is non-judgmental and non-intrusive, there will be no reason to lie to the AI.”
The possibility of deception has more to do with human responsibility than with technological shortcomings, adds Fung.
“[AI] is a tool and what people decide to do with it is up to humans.”
Complementing, not replacing
Most researchers in the field acknowledge that AI is not a replacement for therapists, but believe it can be a supporting tool. Professor Zhu Tingshao of the Chinese Academy of Sciences and his colleagues, for example, developed an AI-based system, currently integrated into Weibo, that recognizes people expressing suicidal thoughts and sends them hotline numbers and messages of support. While the researchers can’t determine whether people go on to seek help, Zhu says the technology is still a proactive step towards suicide prevention.
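Zhu’s team has not published its code here, but the intervention it describes is essentially a two-step pipeline: flag posts whose text suggests suicidal intent, then push a supportive message and hotline number to the author. The sketch below shows that flow in Python, with a toy keyword-based risk score in place of the team’s actual classifier; the phrases, threshold and hotline number are placeholders.

```python
# Illustrative two-step pipeline: flag risky posts, then send support resources.
# The keyword score stands in for a trained classifier; all values are placeholders.

RISK_PHRASES = ["want to die", "end it all", "no reason to live"]
HOTLINE = "123-456-7890"  # placeholder, not a real hotline number

SUPPORT_MESSAGE = (
    "You are not alone. If you are struggling, trained counsellors are "
    f"available at {HOTLINE}. We are here to listen."
)


def risk_score(post_text: str) -> float:
    """Toy risk estimate: fraction of known risk phrases found in the post."""
    text = post_text.lower()
    hits = sum(phrase in text for phrase in RISK_PHRASES)
    return hits / len(RISK_PHRASES)


def screen_posts(posts: list[dict], threshold: float = 0.3) -> list[dict]:
    """Return outreach messages for posts whose risk score exceeds the threshold."""
    outreach = []
    for post in posts:
        if risk_score(post["text"]) >= threshold:
            outreach.append({"user": post["user"], "message": SUPPORT_MESSAGE})
    return outreach


if __name__ == "__main__":
    sample = [
        {"user": "user_a", "text": "Great day hiking with friends!"},
        {"user": "user_b", "text": "I feel like there is no reason to live."},
    ]
    for item in screen_posts(sample):
        print(item["user"], "->", item["message"])
```

A real deployment hinges on the quality of the classifier and on careful choices about thresholds and follow-up, which is precisely where the human professionals Zhu mentions remain essential.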
“Right now when it comes to suicide intervention, we need the [suicidal] people to do the contacting themselves. [But] few people with a problem want to actively ask for help,” says Zhu, who adds that the tool has received positive feedback so far. “We cannot take the place of psychologists or counselling professionals, but we can help people know their mental health status and, if needed, provide some help in time [to prevent suicide].”
Ms. Bhairavi Prakash, founder of The Mithra Trust, an Indian organization that runs wellbeing initiatives combining technology and community engagement, believes that AI can be useful for promoting wellness. However, she doesn’t think it can provide complete treatment, especially for severe mental illness. Applying it to such cases could be dangerous and should not be attempted until the technology is more sophisticated, she says.
“You don’t know if AI is triggering the person, you can’t see their facial reactions,” says the work and organizational psychologist. “If someone is delusional or hallucinating and talking to the AI, it won’t know how much is real.”
Legislation also needs to catch up before AI is assigned more tasks, adds Prakash. For example, human psychologists may be required by law to notify the authorities if a patient shows signs of wanting to harm others, but it’s unclear how such cases should be handled by AI.
Fung echoes the need for clearer legislation, saying that humans must still remain in the loop for important decisions such as medication prescriptions.
“Machines aren’t perfect. Humans aren’t either, but we do have laws or regulations [to deal with] human error or medical accidents. We don’t really have very good regulations for machine error.”
In the future, Fung envisions AI helping to create better personalized treatment plans, while Zhu says it will make mental health services more efficient. Prakash feels AI-based tools will encourage people to take the first step towards seeking help.
“In conversations people have with [AI], they are so open because there is zero judgment. They can talk about anything. That is extremely liberating and great for mental wellness.”
———
Copyright: Asian Scientist Magazine; Photo: Shutterstock.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.