Chris Rhyss Edwards
Author & CEO, FOLQ.ai
Location & Time Zone:
Brisbane, Australia (GMT+10)

Detailed Biography:

I didn’t set out to study why people are talking to machines about their deepest fears. I arrived there the long way around – through military service, trauma, silence, and years of watching capable people quietly fall apart because the cost of speaking felt higher than the cost of suffering. As a former Australian Army combat engineer and peacekeeping veteran living with PTSD, I know firsthand how hard it can be to talk, especially when you’re trained to stay composed, useful, and strong. Like many others, I learned to function while silently struggling. That tension between outward competence and inner distress sits at the heart of The Silence Paradox.

Today, as a PhD researcher in conversational AI and mental wellbeing, I study a phenomenon that is both confronting and deeply human: hundreds of millions of people are confiding in AI chatbots about their mental health, loneliness, anxiety, and despair – often before they ever speak to another person. This isn’t because machines are better than humans, but because they are available, non-judgmental, affordable, and emotionally low-risk. In a world where stigma, cost, access, and fear still block most people from professional support, AI has quietly become a first listener. My research explores what this means ethically, psychologically, and socially, and where the real risks and responsibilities lie when technology begins to occupy relational space once reserved for people.

People should listen because this isn’t a tech story, it’s a human one. The Silence Paradox isn’t about replacing therapists or celebrating machines; it’s about asking why so many people feel safer talking to an algorithm than to each other, and what that reveals about the systems we’ve built. I speak from lived experience, academic research, and the uncomfortable middle ground between optimism and alarm. If we don’t talk honestly about why silence is spreading – and why AI is filling the gap – we risk missing the deeper crisis entirely.

Short Introduction (for hosts to read on-air):

Chris Rhyss Edwards is a writer, doctoral researcher, and former soldier exploring what happens when people have no one left to talk to. A military veteran living with PTSD, he studies the growing role of AI chatbots in mental wellbeing and emotional support. His work blends lived experience, research, and cultural critique to ask difficult questions about silence, connection, and the future of care.

Fun/Friendly Fact:

When I hiked the entire Camino de Santiago last year, I spent a night sleeping in a construction dumpster because all the hostels were full. :)

Topics of Expertise:
  • Mental Health & Wellbeing
  • AI, Chatbots & Human Connection
  • Loneliness, Silence & Stigma
  • Veterans, Trauma & Lived Experience
  • Conversational AI & Ethics
  • Technology as Emotional Infrastructure
  • Trust, Disclosure & Psychological Safety
  • The Future of Care & Support Systems
  • Digital Intimacy & Human–Machine Boundaries
  • Finding Voice in a Noisy World

Key Message / Core Story:

The central idea I share is this: we are living through a quiet crisis of silence, not a crisis of technology. Millions of people aren’t turning to AI chatbots because they prefer machines; they’re doing so because speaking to other humans has become too costly, risky, or inaccessible. Stigma, time pressure, fear of judgment, and broken support systems have created a world where many people feel they must carry their inner lives alone. AI has stepped into that gap, not as a solution, but as a symptom.

Through The Silence Paradox and my research into conversational AI and mental wellbeing, I explore what it means when technology becomes a first listener, sometimes the only one. I argue that this moment forces us to confront uncomfortable questions about trust, disclosure, dependency, and care. The story isn’t about whether AI is “good” or “bad,” but about what happens when emotionally neutral, always-available systems begin to occupy relational space once held by friends, family, and professionals.

Ultimately, the story I want listeners to walk away with is one of responsibility and agency. We can design AI that supports people without replacing human connection, but only if we’re honest about why silence has become the default. If we fail to address the human conditions driving this shift, we won’t just build better machines – we’ll normalize a world where suffering is managed quietly instead of met collectively.

Ideal Podcast Audience:

Everyday people who have something to say but no one to say it to. The WHO estimates that 1 in 8 people worldwide live with a mental disorder, ranging from anxiety to mania, yet most will never seek help or talk about it out of fear. That is why chatbots have become so popular – and why my audience is very broad: roughly one in eight people on the planet.

Listener Takeaways:

Listeners will leave with a clearer understanding of why so many people are turning to AI for emotional support and what that trend reveals about our current mental health landscape. Rather than framing AI as a cure or a threat, the episodes will help audiences see it as a mirror, reflecting the barriers of stigma, access, cost, and fear that prevent people from speaking openly. This reframing alone often changes how people think about technology, care, and connection.

The audience will also gain practical insight into how to engage with AI tools safely and intentionally. We can discuss when AI can be a helpful first step for reflection or emotional literacy, where its limits are, and why it should supplement – not replace – human support. For professionals, leaders, and carers, the conversations offer a grounded way to think about boundaries, dependency, and responsibility when technology enters emotionally sensitive spaces.

Finally, listeners will walk away with permission to speak sooner, not stronger. These conversations aim to reduce shame around silence, normalize help-seeking in its many forms, and remind people that needing a low-risk place to start doesn’t make them weak; it makes them human. Whether someone is personally struggling, supporting others, or shaping policy and technology, they will offer both insight and hope at a moment when quiet suffering has become far too common.

Suggested Interview Questions:
  • Why do you think so many people feel safer talking to AI chatbots about their mental health than talking to other people?
  • As both a veteran living with PTSD and a PhD researcher, how does your lived experience shape the way you view AI’s role in mental wellbeing?
  • What are the real risks when AI becomes a “first listener” for people in distress, and what risks do we ignore if we simply reject these technologies outright?
  • Is the rise of AI companions a sign of technological progress, or a warning signal about the breakdown of our social and care systems?
  • If you could change one thing about how we talk about mental health, silence, or support today, what would it be and where does technology fit into that change?

Books / Courses / Products:

The Silence Paradox: The Quiet Revolution of AI, Emotion and Human Connection (paperback, 24 December 2025)

Achievements / Media Features:

  • TEDxQUT talk: “The Uncomfortable Truth Between Right & Wrong”
  • Bronze Medal Winner, Zurich Innovation World Championship 2019
  • Named Global Entrepreneur of the Year at The Pitch at the Palace 2019
  • Finalist, Prime Minister’s Veteran Entrepreneur of the Year Awards 2019

Speaking Experience:

I’ve spoken at iMedia and ad:tech events across APAC, the US, and the UK.

Company / Brand / Project:

The Silence Paradox: The Quiet Revolution of AI, Emotion and Human Connection

Audience Size / Reach:

15,000+

Promotion Commitment:
Yes
Video Sharing Permission:
Yes
Clip/Promo Permission:
Yes