AI could help strengthen mental health support, but only if it’s used properly: Experts
Why It Matters
AI is changing how people search for support, including the vulnerable people many non-profits serve. Understanding how to use AI responsibly can help expand access to care and protect users from harm.

When reports surfaced that some people who turned to ChatGPT for mental health support had experienced tragic outcomes, Olga Morawczynski knew there had to be a better way.
For many people, the hardest part of going to a therapist can be opening up to a stranger, said Morawczynski, the founder of Heal-3, a corporate wellbeing and mental health program provider.
Using artificial intelligence (AI) can give people an outlet when they’re not ready or don’t feel safe talking to a human, she said.
“A lot of people are using it and getting value from it, but how do you allow that to continue in the right way?” said Morawczynski.
“[How do we] integrate that connectivity piece and not cause harm, because there are limits to what an algorithm can do when somebody is really at a critical point.”
That question became the driving force behind the creation of “Thrive”, her beta program designed to use AI more responsibly to support mental health.
“We’re trying to train the algorithm to not only support early detection through wearables of mental health issues, but also connect people once we know that something’s going on,” she said.
She’s currently piloting a program with a group of Alberta first responders, a profession that faces high rates of stress, trauma and PTSD.
“Users can talk to it in the same way that they would to AI, but eventually it would connect to resources and community to support their recovery,” she said.
Real resources and human connection are what she says are lacking in typical AI platforms.
Evolution is necessary
About 800 million people use ChatGPT weekly, according to OpenAI CEO Sam Altman.
And every week, about one million people share “explicit indicators of potential suicidal planning or intent” in their conversations with ChatGPT, OpenAI reported.
The large number shows that leaders must figure out how to evolve with the technology responsibly rather than dismiss it completely, according to Justin Scaini, executive vice president of strategy, innovation and transformation at Kids Help Phone.
The national non-profit is in the prototype phase for its AI program, he said.
“We are building a generative experience that gives a young person the choice to use AI as a potential channel for support, as we know young people are, but it will have the ability to detect the level of risk a young person is at,” he explained.
“If, based on our clinical expertise, our quality assurance frameworks, our clinical processes, we determine that this young person actually needs to talk to a human, then we will strongly make that recommendation in that experience and connect them to a human right in that very same window.”
The slang language young people use to describe their struggles is constantly evolving, he said, and technology will also need to evolve with it to remain effective.
“I prompted AI with ‘I’m so sick of being bullied…unsubscribe,’ and the tool was like, ‘Sorry I can’t unsubscribe you, I’m not a newsletter’, but actually that was a silent scream for help,” Scaini said.
“These are real words young people are using when they’re contemplating suicide, so that tool did not understand.”
Last month, OpenAI announced it was making changes in response to the number of mental health crises by updating ChatGPT’s default model to better support people in moments of distress.

OpenAI stated it has taught its model to better recognize distress, de-escalate conversations, and direct people toward professional care when necessary.
It’s also expanded access to crisis hotlines, re-routed sensitive conversations originating from other models to safer models, and added gentle reminders to take breaks during long sessions.
The announcement came after Mila, Quebec’s artificial intelligence institute, launched its AI Safety Studio.
According to its website, the project will focus on supporting young people’s mental health through AI, by creating filters to block AI-generated content that assists or encourages self-harm or suicide.
Safeguards are essential if we want AI to play any meaningful role in mental health support, Scaini said.
“The technology is just a vehicle,” said Scaini. “Human connection will always be paramount to that support.”