The Rise of AI in Wellbeing: Support, Risks and the Human Role

AI has become part of everyday working life more quickly than many of us expected. What began with productivity tools and automation is now moving into more personal territory, including wellbeing, therapy and companionship. For organisations focused on supporting their people, this raises important questions about how AI is being used, where it may help, and where clear boundaries still matter.

The pace of change has been notable. Between 2022 and mid-2025, the number of AI companion apps surged by 700%, highlighting just how quickly people are turning to technology for emotional support and connection. Whether organisations are actively discussing this or not, many employees are already engaging with AI in ways that touch on their wellbeing.

In this blog, we explore how AI is currently showing up in wellbeing and emotional support, why people are drawn to it, where it may play a helpful role, the risks and limitations that need careful consideration, and why the human element remains essential.

How AI Is Showing Up in Wellbeing
AI is no longer confined to scheduling meetings or drafting content. Increasingly, it is appearing in wellbeing and mental health spaces, often through chatbots that offer emotional check-ins, guided reflection or prompts designed to encourage self-awareness. Alongside this, AI companion apps aim to reduce feelings of loneliness by providing a sense of presence or ongoing conversation.

Many wellbeing platforms are now using AI to personalise content, track mood or suggest reflective exercises. These tools are becoming more accessible and widely used, often marketed as discreet and immediate forms of help. In many cases, this engagement happens independently of workplace support provision, meaning organisations may be unaware of how or when people are using them.

Why People Are Turning to AI for Support
The reasons people are drawn to AI in this context are largely human ones. AI is available around the clock, particularly outside traditional working hours when support can feel harder to access. For some, it feels private and non-judgemental, reducing the anxiety that can come with opening up to another person.

It also offers immediacy. In moments of stress, uncertainty or isolation, AI provides a response straight away. Rather than seeing this as a rejection of human support, it can be more helpful to view it as a signal. Often, people turn to AI because existing support feels unavailable, inaccessible or difficult to navigate. In that sense, the rise of AI reflects unmet needs as much as technological progress.

Where AI Can Play a Helpful Role
When used thoughtfully, AI can offer support as part of a wider wellbeing approach. It may help people pause and reflect, name what they are feeling, or access information and signposting that encourages further support. For some, it can act as a first step, helping to build awareness or confidence before seeking human-led support.

AI can also sit alongside coaching or therapy, offering prompts or reflections between sessions that help individuals stay engaged with their development. Its value is clearest when it is positioned as a complement rather than a solution, supporting wellbeing without replacing the depth and complexity that human relationships provide.

The Risks and Limitations to Consider
Alongside the potential benefits, there are important limitations and risks that deserve careful attention. AI does not have professional judgement, lived experience or the ability to fully understand circumstances, nuance or vulnerability. It cannot safely hold trauma, respond appropriately in crisis situations, or reliably challenge unhelpful thinking in the way a trained professional can.

Crucially, AI is not always accurate or safe, particularly when used as a substitute for therapy or emotional support. There have been documented instances where AI tools have provided misleading, inappropriate or even harmful responses, including content that could encourage self-harm or reinforce distress. This highlights a serious risk when AI is treated as a therapist, confidant or trusted friend.

There are also wider concerns around data privacy, confidentiality and safeguarding, especially when sensitive personal information is shared. Over time, reliance on AI for emotional support may reduce help-seeking from qualified professionals or trusted human connections, increasing isolation rather than reducing it.

For organisations, these risks extend beyond individual wellbeing. They raise questions about duty of care, ethical responsibility and psychological safety. Ignoring how AI is being used does not remove these risks; it simply means they go unacknowledged and unmanaged.

What This Means for HR and Wellbeing Leaders
For HR and wellbeing leaders, the growing presence of AI in emotional support presents both an opportunity and a responsibility. AI-based tools are already being used, often informally and outside organisational visibility. Avoiding the conversation altogether does not prevent this use; it simply removes the chance to shape it thoughtfully.

This raises practical questions for organisations. How does AI sit alongside existing wellbeing provision? Where should boundaries be drawn? What guidance might employees need to understand what AI can, and cannot, offer? Addressing these questions openly can help reduce risk, protect trust, and reinforce a commitment to ethical, people-centred wellbeing support.

Rather than viewing AI as something to resist or fully embrace, many organisations are finding value in acknowledging its presence while being clear that it does not replace professional, person-centred support.

Why the Human Role Still Matters
As AI becomes more visible in wellbeing spaces, the value of human connection becomes even clearer. Coaching, emotional intelligence development and therapeutic support offer empathy, insight and professional judgement that technology cannot replicate. Human practitioners can respond to each person's unique circumstances, hold complexity, and gently challenge unhelpful patterns in ways that support lasting change.

These relational elements are central to psychological safety, trust and meaningful development at work. While AI may offer moments of reflection or reassurance, it cannot replace the depth of understanding that comes from being truly heard and supported by another person.

For organisations, maintaining this human core is not only a wellbeing consideration, but a cultural one. It shapes how supported people feel, how safe they are to speak up, and how sustainable performance is over time.

A Thoughtful Way Forward
AI is neither inherently positive nor inherently problematic. Like any tool, its impact depends on how it is used, and the context in which it sits. In wellbeing, this means balancing innovation with care, curiosity with caution, and efficiency with ethics.

As technology continues to evolve, organisations have an opportunity to lead with intention. By staying informed, setting clear boundaries and keeping human-led support at the centre of their strategies, HR and wellbeing leaders can ensure that AI enhances rather than undermines the way people are supported at work.

The future of wellbeing is unlikely to be purely human or purely technological. Instead, it will be shaped by how thoughtfully the two are brought together.

As conversations around AI and wellbeing continue to grow, it’s a good time to reflect on how people are being supported day to day. We offer leadership coaching, EI assessments and training, and therapeutic wellbeing programmes that keep the human element front and centre. If you’d like to explore what might be most helpful for your organisation, let’s have a chat.
