Can ChatGPT help users with psychotherapy?


Is ChatGPT a good psychologist? That is what an official at OpenAI, the American artificial intelligence company behind the famous chatbot, hinted, drawing sharp criticism for downplaying the difficulty of treating mental illness.

"I just had a very personal, emotional conversation with ChatGPT via voice, about stress and work-life balance," Lilian Weng, who is in charge of AI safety issues at the company, wrote in late September on X (formerly Twitter).

But the American developer and activist Cher Scarlett responded sharply to this statement, saying that psychology "aims to improve mental health, and it is hard work." She added that sending oneself positive feelings is fine, but that has nothing to do with treatment.

Can interacting with an AI nevertheless produce the positive experience that Lilian Weng describes?

According to a study published a few days ago in the scientific journal Nature Machine Intelligence, the phenomenon may come down to the placebo effect.

To demonstrate this, researchers from the Massachusetts Institute of Technology (MIT) and the University of Arizona surveyed 300 people. Some participants were told that the chatbot showed empathy, others that it was deceptive, and the members of a third group that its behavior was balanced.

As a result, people who believed they were conversing with a virtual assistant capable of relating to them were more likely to view their interlocutor as trustworthy.

"We found that the AI is, in a way, perceived according to the user's preconceptions," explained Pat Pataranutaporn, co-author of the study.

The stupidity of chatbots

Without exercising much caution in a still-sensitive area, many entrepreneurs have begun building apps that purport to help with mental health problems, leading to a number of scandals.

Users of Replika, a popular app presented as beneficial for mental health, have complained in particular that its AI could become sex-obsessed or manipulative.

The American non-governmental organization Koko, which in February conducted a trial on 4,000 patients to whom it provided written advice generated with the artificial intelligence model GPT-3, also acknowledged that automated responses did not work as therapy.

"Simulating empathy seems weird and nonsensical," Rob Morris, co-founder of the startup, wrote on X.

This observation echoes the findings of the placebo-effect study mentioned above, in which some participants felt as though they were "talking to a wall."

David Shaw of the University of Basel in Switzerland, responding to a query from Agence France-Presse, said he was not surprised by the poor results. "It seems that none of the participants were aware of the chatbots' stupidity," he noted.

The concept of an automated therapist is not new, though. In the 1960s, ELIZA, the first program of its kind to simulate psychotherapy, was developed using the method of the American psychologist Carl Rogers.

Using stock questions supplemented with keywords extracted from its interlocutors' responses, the program kept the conversation going without ever actually grasping the problems put to it.

Joseph Weizenbaum, the program's creator, later wrote of this ChatGPT precursor: "What I had not realized was that very short exposure to a relatively simple computer program could induce powerful delusional thinking in completely normal people."
