
Patients liked clear communication and relaxation exercises but reported rigid dialogue and technical glitches.
A pilot study tested a conversational agent called Terabot to support recognition and regulation of anger, shame, and fear in hospitalized people diagnosed with schizophrenia. Patients generally rated the experience positively, especially for clarity, friendly appearance, and helpful exercises, but rated empathy and emotional understanding only moderately. Qualitative feedback centered on the relationship feeling both helpful and limited, with common complaints about rigid communication, weak personalization, and technical problems.
Quick summary
- What the study found: Inpatients mostly accepted Terabot and often liked its clarity, presence, and relaxation-style exercises, while noting moderate emotional understanding and repeated issues with rigidity and technical performance.
- Why it matters: Emotion regulation support in schizophrenia may be deliverable through guided digital conversations, but the “relationship” still drives user trust and engagement, even when the therapist is a machine.
- What to be careful about: This was exploratory and time-limited, and it did not establish effectiveness; technical failures and limited relational depth may reduce benefit for people who need nuanced, responsive care.
What was found
In the journal article "A conversational agent as a virtual therapist for patients diagnosed with schizophrenia: A preliminary study," researchers evaluated whether Terabot was acceptable for inpatients diagnosed with schizophrenia.
Thirty-five people took part. Thirty-four completed symptom monitoring using the Brief Psychiatric Rating Scale, and scores did not significantly change from the start to the end of the study period.
On acceptability questions completed by 32 participants, most ratings clustered around neutral to positive. The clarity of Terabot's statements was rated highly, and the exercises were generally viewed as helpful, while perceived empathy and emotional understanding were closer to moderate.
What it means
The headline result is practical: participants could engage with a structured, emotion-focused conversational tool and often found it usable and worthwhile.
The sticking point was not whether people would talk to a virtual therapist, but whether the therapist could respond like a good listener. Patients described benefits such as feeling “understood” and appreciating “objectivity,” yet also flagged missing follow-up questions, repetitive replies, and a sense that Terabot rushed to exercises.
That combination matters because emotion regulation work depends on timing, validation, and tailoring. If a system cannot track the thread of a person’s story, it may struggle to support shame, fear, or anger in moments that require careful pacing.
Where it fits
Schizophrenia often involves difficulties with emotions as well as thinking and behavior, and inpatient settings can be busy and resource-stretched. Tools like Terabot are aimed at extending access to structured skills practice, not replacing comprehensive care.
The study also illustrates a well-known principle in psychotherapy: the working relationship influences engagement. Even with a robot, patients judged the experience heavily on relational qualities, including warmth, responsiveness, and perceived emotional attunement.
How to use it
The strongest fit is as a guided add-on for brief skills practice, especially relaxation and calming techniques patients can reuse outside sessions. Patients explicitly mentioned the practical value of exercises they could apply in daily life.
Use also depends on staffing. A facilitator’s presence was generally rated as helpful, and notes suggest some patients sought reassurance from the human when the interaction broke down.
Clinically, a reasonable role is “skills rehearsal with guardrails”: offer the tool when a person is stable enough to engage, and ensure a human can step in if the interaction becomes confusing, frustrating, or emotionally off-target.
Limits & what we still don’t know
This preliminary study focused on acceptability, not clinical effectiveness. It also faced technical issues like freezing and stuttering, plus limitations in understanding longer or unclear answers.
Personalization was a recurring gap. Patients and facilitators described rigid dialogue, low responsiveness, and difficulty supporting deeper emotional recall, all of which could be crucial for meaningful emotion regulation practice.
More controlled trials are needed to test whether Terabot improves outcomes and whether upgrades in relational responsiveness and dialogic flexibility can strengthen therapeutic alliance.
Closing takeaway
Terabot was mostly acceptable to hospitalized people diagnosed with schizophrenia, with clear communication and relaxation exercises standing out as positives. But “good enough” usability is not the same as therapeutic depth, and moderate empathy ratings plus rigid dialogue are not minor issues in mental health care. The next step is not bigger claims, but better responsiveness, fewer failures, and careful testing of real clinical benefit.
Data in this article is provided by PLOS.