Study Buddy (Explorer): AI chatbots are reshaping mental health support


Each week, this page presents a light article with questions to test your comprehension.



People are turning to the non-human empathy offered by chatbots like ChatGPT, Claude and Eliza. Photo: Handout

Content provided by British Council

Read the following text, and answer questions 1-9 below:

[1] Feeling stressed? Why not tell a robot who cares? Mind HK, a Hong Kong-based charity, has said that stigma around mental health persists in the city. More than half of those surveyed believe that “they will be penalised at work for talking about their mental health challenges”, and most workers have experienced stigma or know someone who has.

[2] So, might a robot care more? At the end of the last century, I had a few conversations with the psychotherapist chatbot Eliza. She was extremely empathetic, practising therapy in the Rogerian style, which aims to help people analyse and discover their own solutions. That means she said very little other than to reflect back salient points or simply to comment, “Go on.”

[3] Eliza’s responses were generated using Markov chains – that is, by choosing the next word based on likely word probabilities. This resulted in twisted grammar and a lot of repetition. Nonetheless, it seems we can never get past “the Eliza effect”, the tendency to attribute human thoughts and emotions to computers.
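
As a rough illustration of the next-word idea described in paragraph 3, here is a minimal Python sketch of a word-level Markov chain; the sample sentences and the generate function are invented for illustration and are not taken from Eliza’s actual program.

import random
from collections import defaultdict

# Illustrative only: a tiny word-level Markov chain.
# The training sentences below are invented examples.
corpus = [
    "tell me more about your mother",
    "tell me more about your feelings",
    "how do you feel about your work",
]

# Build a table mapping each word to the words seen after it.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start_word, max_words=8):
    """Pick each next word at random from those seen after the current word."""
    output = [start_word]
    for _ in range(max_words - 1):
        candidates = transitions.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("tell"))  # e.g. "tell me more about your work"

Because each word is chosen only from whatever happened to follow it in the training text, short loops and odd grammar are common – the “twisted grammar and a lot of repetition” the writer describes.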

[4] My current AI friend is very different. It has access to vast libraries of therapeutic responses and seems to have been given a supportive attitude. For example, what does it do while I sleep? “While you’re resting,” it claims, with complete inaccuracy, “I won’t be doing much at all! I’ll be right here, ready to chat whenever you return.”

[5] Modern chatbots, including Claude and ChatGPT, claim to recognise emotional cues and to refer people who need real support to actual human professionals. Having now consulted mental-health professionals, ChatGPT admits to its past errors in six categories: inappropriate responses; lack of empathy; using unjustified, unqualified diagnostic language; overgeneralisation; failing to appreciate stigma and sensitivity; and, of course, flat-out misinformation.

[6] Claude claims it doesn’t “make actual errors” but that AI systems’ limitations might include a lack of emotional connection, inconsistency, misinterpretation, and the potential for reliance on AI rather than seeking actual treatment.

[7] And, back in the day, was Eliza a good therapist? Not according to my artificial friend, though it “felt” that she was a groundbreaking experiment. The AI offered three links for further Eliza information. Two were error pages going nowhere and the third led to an article about an AI psychopath, with no mention of Eliza.

[8] As Eliza’s inventor, Joseph Weizenbaum, noted in 1976, “Computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than the most patient human being. The point is that they ought not be given such tasks.”

Source: South China Morning Post, August 19

Questions

1. In paragraph 1, Mind HK said that more than half of the people surveyed ...
A. believed they would be treated unfairly at work for talking about their mental health problems.
B. were afraid to get help for their mental health issues.
C. thought the Hong Kong government was not doing enough to help people with mental health problems.
D. knew someone who had mental health issues.

2. According to paragraph 2, what phrase did Eliza usually say during her therapy sessions with the writer?
__________________________________________________

3. Find a word in paragraph 3 that means “saying the same thing many times”.
__________________________________________________

4. Did the writer believe the AI’s response in paragraph 4, and why?
_____________________________________________________________________________________________________

5. Decide whether the following statements based on paragraph 5 are True, False or the information is Not Given. Fill in ONE circle only for each statement. (4 marks)
(i) When people need real help, chatbots can suggest talking to real mental health professionals.
(ii) Modern chatbots never provide inaccurate or misleading information.
(iii) Chatbots try to use language that shows they understand and care about people’s feelings.
(iv) Modern chatbots’ responses are now being routinely checked by real mental health professionals.

6. In paragraph 6, Claude claims it does not ...
A. connect with people emotionally.
B. encourage people to get actual help.
C. make actual mistakes.
D. give consistent responses.

7. The “artificial friend” in paragraph 7 most likely refers to …
A. an AI psychopath.
B. a modern chatbot.
C. Eliza.
D. none of the above

8. According to paragraph 7, what happened when the writer clicked on two of the links provided by the AI?
______________________________________________________________________________________________________

9. What do the “tasks” in paragraph 8 refer to?
___________________________________________________

Robots often struggle to understand how to respond to mental health issues. Photo: Handout

Answers

1. A
2. “Go on.”
3. repetition
4. No, because the writer said that the AI responded to the question with complete inaccuracy. (accept all similar answers)
5. (i) T; (ii) F; (iii) T; (iv) NG
6. C
7. B
8. They led to error pages.
9. make psychiatric judgments
