AI Therapy Apps: Promising Tool or Problematic Shortcut?


A friend texted me last month to say she’d been “talking to an AI therapist” and it was helping more than she expected. She’d tried to get in to see a psychologist through her GP, but the wait was over three months. So she downloaded an app instead.

I didn’t know quite what to say. Part of me was glad she’d found something. Another part of me felt uneasy.

AI-powered mental health apps have exploded in popularity over the past two years. Apps like Wysa, Woebot, and newer platforms using large language models are being marketed as accessible, affordable alternatives to traditional therapy. Some are even being integrated into employee assistance programs.

And in Australia, where psychologist wait times are still stretching beyond what’s reasonable in most capital cities, the appeal is obvious.

But should we be comfortable with this?

What These Apps Actually Do

Most AI therapy apps fall into two categories. The first is structured programs: they walk you through CBT (cognitive behavioural therapy) exercises, mood tracking, and guided reflections. Woebot is a good example. It doesn’t pretend to be a human therapist. It’s more like a digital workbook with some conversational scaffolding.
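
To make that “digital workbook” idea concrete, here’s a rough sketch of how a scripted CBT exercise might be wired up behind the scenes. It’s purely illustrative (the prompts, names, and structure are my own invention, not taken from Woebot or any other app): the point is that the app follows a fixed sequence of questions rather than generating anything open-ended.

```python
# A minimal, hypothetical sketch of a scripted CBT "thought record" module.
# Nothing here is real vendor code; it just shows the fixed-sequence idea.

CBT_THOUGHT_RECORD = [
    {"step": "situation", "prompt": "What happened? Describe the situation briefly."},
    {"step": "automatic_thought", "prompt": "What went through your mind at that moment?"},
    {"step": "emotion", "prompt": "What did you feel, and how intense was it (0-100)?"},
    {"step": "evidence_for", "prompt": "What evidence supports that thought?"},
    {"step": "evidence_against", "prompt": "What evidence doesn't fit the thought?"},
    {"step": "balanced_thought", "prompt": "What's a more balanced way to see it?"},
]

def run_module(module):
    """Walk the user through each scripted step and collect their answers."""
    answers = {}
    for item in module:
        answers[item["step"]] = input(item["prompt"] + "\n> ")
    return answers

if __name__ == "__main__":
    record = run_module(CBT_THOUGHT_RECORD)
    print("Saved thought record:", record)
```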

The second category is newer and more complex. These are apps powered by large language models that can hold open-ended conversations about your mental health. They feel more like talking to a person. And that’s where things get interesting — and potentially risky.

A growing number of tech advisory firms have been helping Australian health organisations evaluate these tools. The consensus seems to be cautious optimism, with a strong emphasis on the cautious part.

What the Evidence Supports

There’s decent evidence that app-based CBT can help with mild to moderate anxiety and depression. A meta-analysis published in the Journal of Medical Internet Research found that digital CBT interventions produced small to moderate effect sizes — not as strong as face-to-face therapy, but meaningfully better than doing nothing.

For people on long waitlists, that matters. Something is often better than nothing when you’re struggling.

Mood tracking apps also have research behind them. The simple act of logging how you feel each day can increase emotional awareness and help people spot patterns they might otherwise miss.
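
If you’re curious what “spotting patterns” amounts to in practice, it’s often as simple as comparing recent averages. Here’s a toy sketch with made-up numbers (nothing here comes from a real app) showing how a week-on-week dip in logged mood can be surfaced automatically:

```python
from datetime import date, timedelta
from statistics import mean

# One mood rating per day, 1 (awful) to 10 (great). The numbers are invented.
mood_log = {
    date(2024, 5, 1) + timedelta(days=i): score
    for i, score in enumerate([6, 7, 6, 7, 6, 7, 6, 5, 4, 4, 3, 4, 3, 3])
}

def window_average(log, end_day, days=7):
    """Average mood over the `days` days ending on end_day; missing days are skipped."""
    scores = [log[end_day - timedelta(days=i)]
              for i in range(days)
              if (end_day - timedelta(days=i)) in log]
    return mean(scores) if scores else None

today = max(mood_log)
this_week = window_average(mood_log, today)
last_week = window_average(mood_log, today - timedelta(days=7))

if this_week is not None and last_week is not None:
    print(f"This week: {this_week:.1f}, last week: {last_week:.1f}")
    if this_week < last_week - 1:
        print("Your mood has dipped noticeably compared with last week.")
```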

Where the Concerns Lie

The problems start when AI apps are positioned as a replacement for human therapy rather than a supplement.

Safety is the big one. If someone discloses suicidal ideation to a chatbot, what happens? Most apps have crisis protocols built in — they’ll provide helpline numbers or escalation pathways. But the quality of these responses varies enormously. And an AI can’t pick up on the subtle cues a trained clinician would notice in a real conversation.
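
To give a sense of why quality varies so much, here’s a deliberately naive, entirely hypothetical sketch of the kind of keyword-based safety check a simpler chatbot might rely on. An explicit disclosure trips it; an oblique one doesn’t, which is exactly the gap a trained clinician would catch.

```python
# A deliberately naive, hypothetical example of a keyword-based crisis check.
# Real apps should use far more sophisticated classifiers with clinical input;
# this sketch only shows why simple matching misses subtle cues.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self harm"}

AU_CRISIS_MESSAGE = (
    "It sounds like you might be going through something serious. "
    "If you're in Australia, you can call Lifeline on 13 11 14, any time."
)

def crisis_check(message: str):
    """Return an escalation message if an obvious crisis phrase appears, else None."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return AU_CRISIS_MESSAGE
    return None

# An explicit disclosure trips the check...
print(crisis_check("I've been thinking about how to end my life"))
# ...but an oblique one sails straight past it, which is exactly the problem.
print(crisis_check("Everyone would be better off without me around"))
```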

There’s also the question of accuracy. Large language models can sound confident while being completely wrong. In a mental health context, bad advice isn’t just unhelpful — it can be harmful. Telling someone with PTSD to “sit with the discomfort” without proper clinical framing, for instance, could make things worse.

Privacy is another concern. These apps collect deeply personal data. Who owns your therapy transcripts? Where are they stored? Are they being used to train future models? Many apps are vague on these points, and Australian privacy law hasn’t fully caught up.

The Access Argument

Here’s where I find myself genuinely torn. The Australian mental health system is under enormous strain. Medicare-funded psychology sessions are limited. Rural and regional areas have chronic shortages of mental health professionals. And cost is a barrier for many people, even with rebates.

If an AI app can provide evidence-based support to someone who otherwise wouldn’t get any help at all, that’s meaningful. Dismissing these tools entirely feels like a position of privilege — it assumes everyone can access a good therapist when they need one.

But the answer isn’t to accept lower standards of care for people who can’t afford better. It’s to be clear-eyed about what these tools can and can’t do.

A Practical Framework

Here’s how I think about it:

Use AI apps for: Mood tracking, guided breathing exercises, structured CBT modules, journaling prompts, psychoeducation, and bridging the gap while you’re waiting for professional support.

Don’t rely on AI apps for: Crisis support, trauma processing, complex mental health conditions, medication decisions, or replacing an ongoing therapeutic relationship.

Always check: Whether the app is evidence-based, who developed it, what their data privacy policies look like, and whether it’s been evaluated by a reputable body.

Where We Go From Here

AI mental health tools aren’t going away. They’re going to get more sophisticated, more accessible, and more embedded in how Australians manage their wellbeing. That’s not inherently bad.

But we need better regulation, clearer consumer information, and honest conversations about the limitations. A chatbot is not a therapist. And pretending it is doesn’t serve anyone — least of all the people who are struggling most.

If you’re using one of these apps and finding it helpful, I’m not here to tell you to stop. Just keep your expectations realistic, and keep looking for human support when you need it.

Your mental health deserves more than a good algorithm.