Like many in the treatment field, I’ve wondered about the growing use of chatbots to help people suffering from psychological and substance use disorders. Seems to me there’s a lot to worry about.

Here’s an example:

The creator of an AI therapy app shut it down after deciding it’s too dangerous. Here’s why he thinks AI chatbots aren’t safe for mental health

The author doesn’t deny that artificial intelligence can be of use for limited purposes, such as answering questions about the weird aftereffects of cannabis edibles, or how to tell whether you might be suffering from depression.

“AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation,” he admits. “But when the person seeking help is someone in crisis, someone with deep trauma, someone contemplating ending their life– [then] AI becomes dangerous. Not just inadequate. Dangerous.”

A strong statement, indeed. But in this case, I suspect he’s right.

The problem in brief: the Large Language Model (LLM) systems that dominate the AI approach to mental health struggle to track and assess changes in a person’s mental state over time, which is exactly what’s required to recognize when AI is no longer the appropriate level of care. That means someone who really should be sent to a crisis team or emergency room may never be encouraged to go when they most need it.
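One way to picture the problem: a chat model only responds to the conversation that fits inside its context window. Here’s a toy sketch in Python, purely illustrative (the window size and messages are invented, and real apps vary), showing how a risk signal voiced early in a long conversation can simply scroll out of the model’s view:

```python
# Toy illustration only: a fixed context window drops older messages,
# so a crisis signal from early in the chat may no longer be visible
# to the model later on. Window size and messages are invented.

MAX_CONTEXT_MESSAGES = 4  # hypothetical limit, for illustration

history = [
    "I've been thinking about hurting myself.",   # early risk signal
    "Anyway, work has been stressful lately.",
    "My sleep is getting worse.",
    "I argued with my sister again.",
    "What should I do about the stress?",
]

# The model only "sees" the most recent messages that fit the window.
visible = history[-MAX_CONTEXT_MESSAGES:]

print("Model sees:", visible)
# The first message, the one signaling crisis, has scrolled out of
# view, so nothing in the visible context prompts an escalation to a
# crisis team or emergency room.
```

Real systems are more sophisticated than this sketch, of course, but the underlying issue stands: without a reliable memory of how the person’s state has changed, the system can’t recognize when it’s out of its depth.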

Which, in the opinion of the article’s author, makes AI unsafe for widespread use in mental health care.

Now for the really bad news: the number of people who rely on AI as their primary therapist is growing fast. That includes truly vulnerable populations, such as adolescents. According to one survey, some 13% of users between the ages of 12 and 21 admit to using AI in exactly that way.

Can we blame people for relying on AI when it’s cheap and available and they already spend half their lives in front of a monitor? Did we expect some other outcome?

If we did, we’re not using common sense.

If you’re interested in the subject (and I hope you are), here’s a link to a more thorough discussion from researchers at Stanford.

Exploring the Dangers of AI in Mental Health Care