I didn’t set out to study why people are talking to machines about their deepest fears. I arrived there the long way around – through military service, trauma, silence, and years of watching capable people quietly fall apart because the cost of speaking felt higher than the cost of suffering. As a former Australian Army combat engineer and peacekeeping veteran living with PTSD, I know firsthand how hard it can be to talk, especially when you’re trained to stay composed, useful, and strong. Like many others, I learned to function while silently struggling. That tension between outward competence and inner distress sits at the heart of The Silence Paradox.
Today, as a PhD researcher in conversational AI and mental wellbeing, I study a phenomenon that is both confronting and deeply human: hundreds of millions of people are confiding in AI chatbots about their mental health, loneliness, anxiety, and despair – often before they ever speak to another person. This isn’t because machines are better than humans, but because they are available, non-judgmental, affordable, and emotionally low-risk. In a world where stigma, cost, access, and fear still block most people from professional support, AI has quietly become a first listener. My research explores what this means ethically, psychologically, and socially, and where the real risks and responsibilities lie when technology begins to occupy relational space once reserved for people.
People should listen because this isn’t a tech story; it’s a human one. The Silence Paradox isn’t about replacing therapists or celebrating machines; it’s about asking why so many people feel safer talking to an algorithm than to each other, and what that reveals about the systems we’ve built. I speak from lived experience, academic research, and the uncomfortable middle ground between optimism and alarm. If we don’t talk honestly about why silence is spreading, and why AI is filling the gap, we risk missing the deeper crisis entirely.
Chris Rhyss Edwards is a writer, doctoral researcher, and former soldier exploring what happens when people have no one left to talk to. A military veteran living with PTSD, he studies the growing role of AI chatbots in mental wellbeing and emotional support. His work blends lived experience, research, and cultural critique to ask difficult questions about silence, connection, and the future of care.
When I hiked the entire Camino de Santiago last year, I spent a night sleeping in a construction dumpster because all the hostels were full. :)
- Mental Health & Wellbeing
- AI, Chatbots & Human Connection
- Loneliness, Silence & Stigma
- Veterans, Trauma & Lived Experience
- Conversational AI & Ethics
- Technology as Emotional Infrastructure
- Trust, Disclosure & Psychological Safety
- The Future of Care & Support Systems
- Digital Intimacy & Human–Machine Boundaries
- Finding Voice in a Noisy World
Everyday people who have something to say but no one to say it to. The World Health Organization estimates that 1 in 8 people worldwide live with a mental disorder, ranging from anxiety to mania, yet most will never seek help or talk about it out of fear. This is why chatbots have become so popular, and it makes my audience very broad: roughly one-eighth of the planet.
Listeners will leave with a clearer understanding of why so many people are turning to AI for emotional support and what that trend reveals about our current mental health landscape. Rather than framing AI as a cure or a threat, the episodes will help audiences see it as a mirror, reflecting the barriers of stigma, access, cost, and fear that prevent people from speaking openly. This reframing alone often changes how people think about technology, care, and connection.
The audience will also gain practical insight into how to engage with AI tools safely and intentionally. We can discuss when AI can be a helpful first step for reflection or emotional literacy, where its limits are, and why it should supplement – not replace – human support. For professionals, leaders, and carers, the conversations offer a grounded way to think about boundaries, dependency, and responsibility when technology enters emotionally sensitive spaces.
Finally, listeners will walk away with permission to speak sooner, not stronger. The conversations aim to reduce shame around silence, normalize help-seeking in its many forms, and remind people that needing a low-risk place to start doesn’t make them weak; it makes them human. Whether someone is personally struggling, supporting others, or shaping policy and technology, these discussions will offer both insight and hope at a moment when quiet suffering has become far too common.
- Why do you think so many people feel safer talking to AI chatbots about their mental health than talking to other people?
- As both a veteran living with PTSD and a PhD researcher, how does your lived experience shape the way you view AI’s role in mental wellbeing?
- What are the real risks when AI becomes a “first listener” for people in distress, and what risks do we ignore if we simply reject these technologies outright?
- Is the rise of AI companions a sign of technological progress, or a warning signal about the breakdown of our social and care systems?
- If you could change one thing about how we talk about mental health, silence, or support today, what would it be and where does technology fit into that change?
The Silence Paradox: The Quiet Revolution of AI, Emotion and Human Connection Paperback – 24 December 2025
TEDxQUT speaker, “The Uncomfortable Truth Between Right & Wrong”; Bronze Medal Winner, Zurich Innovation World Championship 2019; named Global Entrepreneur of the Year at The Pitch at the Palace 2019; finalist, Prime Minister’s Veteran Entrepreneur of the Year Awards 2019
I’ve spoken at iMedia and ad:tech events across APAC, the US, and the UK.
The Silence Paradox: The Quiet Revolution of AI, Emotion and Human Connection
15,000+
