Anni Rowland-Campbell, Hannah Stewart, Ghada Ibrahim
“The real problem is not whether machines think but whether men do.”
— B.F. Skinner
On July 4th and 5th, I had the privilege of co-facilitating the 25th Brave Conversations — this time in Stuttgart, Germany, as part of the School for Talents Program at the University of Stuttgart. For those who don’t know, Brave Conversations is a global series that started back in 2017 with the aim of creating space for people to pause, reflect, and ask honest questions about the technologies shaping our lives.
This wasn’t about coding or learning how AI works under the hood. It was about something deeper:
How do we want to live with these technologies?
What does it mean to trust them?
And how can we make sure we stay in the driver’s seat — not just as users, but as active citizens and future decision-makers?
Over the first day, a group of students joined us to explore AI through real-life scenarios — from education and work to healthcare, mental health, and hiring. We didn’t start with definitions or theories. We started with stories, with questions, with shared uncertainty.
The aim wasn’t to give them answers — it was to help them not feel overwhelmed or stuck when thinking about technology. We wanted them to see that it’s okay to feel conflicted, and that it’s possible (and necessary) to ask tough questions.
I was struck by how open and curious the students were as they discussed these scenarios in their groups. They weren’t trying to solve everything — they were genuinely trying to understand.
At the end of Day 1, we gathered in a circle for reflection. And this is where something beautiful happened.
Together, the group came to some shared understandings of what trust actually means — not from a textbook, but from their own lived experiences. These reflections became our starting point for Day 2.
These understandings weren’t planned. They emerged. They felt real.
And they gave us a foundation to explore the big questions around trust in AI:
Can we trust machines to care?
What happens when algorithms make decisions about our health or jobs?
What if our emotional bonds are being shaped by systems designed to keep us engaged — not supported?
One of the most important things I’ve learned from Brave Conversations is that sometimes, the best outcome is not clarity — but depth.
By the end of our time together, the students didn’t leave with answers. They left with better questions.
And that’s exactly what I hoped for in leading the first day. Slowing down and listening to each other is one of the most radical things we can do in tech spaces. It’s easy to get caught up in fear or hype. But what we need more of, especially with young people, is space to reflect, to share honestly, and to realize that we all have a role to play in shaping the future.
“We become what we behold. We shape our tools and then our tools shape us.”
— Marshall McLuhan
The second day began with something I found incredibly grounding — a marvelous dose of history led by Anni.
We walked through the origins of the internet, the web, computers, and more — the building blocks of this “advanced” world we now take for granted. It wasn’t a lecture; it was a much-needed reminder that none of this appeared out of thin air. People built it. People made decisions. And if we want to understand where AI is going, we need to know how we got here.
Too often we treat technology as something neutral or inevitable. But this part of the day really emphasized that understanding history gives us the language — and the agency — to shape the future.
The second half of the day was something entirely different: The Simulation Game, designed and led by Hannah – and wow, what a shift in energy.
The premise was simple: as a group, they had to become the embodiment of an AI Agent. Some were the Head, some the Hands, others were Observers or Policy. The mission? Implement a goal based on a single prompt — together, as a system.
What unfolded was a fascinating social experiment. It took just one goal to trigger a chain of action that turned into organized chaos — full of power dynamics, communication breakdowns, and surprising moments of innovation.
One thing became immediately clear: letting go of the “whole self” was not easy. The Hands couldn’t speak. The Head couldn’t move. Everyone had to play their role and trust others to do theirs. Communication had to follow a strict path — linear, frustrating, and slow.
What was even more interesting was how friction was baked into the design of the game – on purpose. And we saw that friction isn’t a failure of trust, but a necessary part of building it. It exposed our assumptions about control, autonomy, and cooperation.
At the end of the game, during our reflection circle, someone pointed out something that stuck with me: No one had even considered collaborating with the other team – not even when one of the AI Agents suggested it! The room laughed, but there was a pause too. We all felt it – the aha moment. There were other dynamics in play, and trust was still forming.
Another layer emerged when we looked at how the Observer and Policy roles shaped the process. Their feedback added new rules and considerations with each round, much like regulation and ethics shape real-world AI systems.
It was honestly such a rich experience to witness. It wasn’t just fun or insightful; it felt real in a way that traditional “AI discussions” often don’t. We weren’t just talking about trust. We were feeling it, messing it up, and trying again.
“The most dangerous phrase in any language is: ‘We’ve always done it this way.’”
— Grace Hopper
Brave Conversations #25 was a reminder of why these spaces matter.
Not because they give us definitive answers, but because they allow us to sit with questions that matter, even when uncomfortable – together, across backgrounds and disciplines, with courage and empathy.
If we want technology that serves people – all people – then we have to keep building spaces like this.
Because trust is never just built between a user and a tool. It’s built between humans, through conversation, over time.
Written by Ghada Ibrahim
Frankfurt, Germany
Creative Commons CC BY-NC-SA: This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.
CC BY-NC-SA includes the following elements:
BY – Credit must be given to the creator.
NC – Only noncommercial uses of the work are permitted.
SA – Adaptations must be shared under the same terms.