10/10/2023

KUWAIT CITY, Oct 10: An official at the American artificial intelligence company "OpenAI," maker of the popular chatbot ChatGPT, drew substantial criticism for a remark seen as oversimplifying the treatment of mental health issues.
In late September, Lilian Weng, who oversees AI safety matters at the company, described an emotional and personal voice conversation she had with ChatGPT about stress and work-life balance, saying she felt heard and comforted. She wondered whether this could be a form of therapy, despite never having tried therapy before.
Weng's post was intended primarily to spotlight the chatbot's recently introduced (paid) voice synthesis feature and to promote its economic model.
However, Cher Scarlett, an American developer and activist, pushed back strongly, emphasizing that psychology aims to improve mental health through rigorous, sustained work. She added that while fostering positive feelings about oneself is valuable, it does not constitute genuine treatment.
A recent study published in the scientific journal "Nature Machine Intelligence" suggested that the phenomenon may be explained by a placebo effect. Researchers from the Massachusetts Institute of Technology (MIT) and Arizona State University surveyed 300 participants, telling some that the chatbot was empathetic, others that it was manipulative, and a third group that its behavior was neutral. Those who believed they were conversing with a virtual assistant capable of empathy were more likely to perceive their interlocutor as trustworthy.