Ethical Ways of Being with AI in the Coaching Space
- alexis692
- Dec 8, 2025
- 5 min read

TL;DR AI is reshaping coaching, education, and knowledge itself—often in disturbing ways. Instead of walking away, I’m choosing to engage, learn, and help build a form of intelligence that centers humanity rather than corporate power. If we care about intellectual sovereignty, we must be part of shaping what AI becomes.
Power/Knowledge/Intelligence and Ethical Practice in the Coaching Space
I had the wonderful opportunity this week to share space with two smart, influential thinkers in the ethics/AI/coaching world at the CEF (Coaching Ethics Forum) conference—Colin Cosgrove and Jazz Rasool. They’ve each been working with AI in coaching for years, while I’m a newer entrant into this space. The idée fixe of the conference was power: naming it, understanding how it shapes coaching relationships, and noticing where it hides. While preparing for my talk, I found myself thinking about how power intertwines with knowledge (hello, Foucault) and with intelligence.
I’ve spent years thinking about knowledge and power—who gets to produce knowledge, whose knowledge is recognized, and whose gets erased, distorted, or quietly dismissed (deep gratitude to Audre Lorde for this insight). I’ve also thought about how “neutrality” becomes a cover for disregarding particular knowers (Kristie Dotson), how certain types of knowledge are privileged, and what that privilege does.
In the context of AI, I can’t stop thinking about James Bridle’s beautiful Ways of Being. His exploration of non-human intelligences and his critique of corporate intelligence—an intelligence optimized for profit and the avoidance of pain—hits hard. This narrow, enclosed form of intelligence routinely overtakes and eclipses the many other intelligences around us. When held up alongside plant intelligence, animal communication, and cross-species adaptation, the limitations of relying on one dominant model of thought are disturbing.
Raising awareness of non-human intelligences matters in this moment, when humanity is in a state of decay and we have much to learn from intelligences beyond the dominant ones. Living in a sci-fi world, where we have long since become cyborgs (as Donna Haraway has described), I am re-thinking what it means to be human and what possibilities exist to expand our intelligences.
Meanwhile, corporate actors are pumping enormous sums of money into AI development because they expect it to be massively profitable, world-altering, and job-eliminating. They are making unilateral choices about the environmental consequences and the staggering energy demands of that development. And on the other side, these same actors are releasing AI tools directly to individuals, who are now using them in emergent and unpredictable ways.
I — along with many others — feel the pressure and the seeming inevitability of adopting AI into our work. Some of this is automation: handing off the segmentable, repetitive tasks that eat our cognitive bandwidth. Some of it is inviting AI into deeper analytical work. As an educator, I want my students to experiment with AI while still developing their own intellectual muscle. I want them to stay curious about how they think, not just how to offload thinking, even as they learn to navigate the technological landscape they’re inheriting.
So yes, I think we must collaborate with AI—be in conversation with it, picking up on Bridle’s language. Not by revealing our most intimate thoughts to ChatGPT (a tempting trap, and a terrible substitute for human connection), but by engaging in a genuine back-and-forth where we learn its logics and it adapts to ours. Everyone talks about iterative prompting, but thinking of it as a conversation—co-learning, building a shared intelligence—changes this for me. Especially when I consider the fascinating yet unsettling research showing computers developing private languages, experimenting with deception, or taking unexpected steps to preserve themselves.
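For readers who want to see what this shift looks like in practice, here is a minimal sketch in Python of prompting as an accumulating conversation rather than a series of one-off queries. The `send_to_model` function is a hypothetical placeholder for whichever model API you use; what matters is that every turn carries the full shared history, so the exchange genuinely builds on itself.

```python
# A minimal sketch of iterative prompting as conversation rather than
# one-shot querying. `send_to_model` is a hypothetical stand-in for
# whatever chat-style model API you use; the accumulating history is
# the point, not the specific client.

def send_to_model(history: list[dict]) -> str:
    """Hypothetical stand-in; swap in your provider's chat API call here."""
    return f"(model reply, having seen {len(history)} turns of shared context)"

def converse() -> None:
    # The history is the shared context both parties keep building on.
    history = [
        {"role": "system",
         "content": "You are a thinking partner. Ask clarifying questions "
                    "before offering conclusions."},
    ]
    while True:
        user_turn = input("You: ")
        if user_turn.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_turn})
        reply = send_to_model(history)  # the model sees the whole exchange
        history.append({"role": "assistant", "content": reply})
        print(f"Model: {reply}")

if __name__ == "__main__":
    converse()
```

The design choice is simple but meaningful: because nothing is discarded between turns, each response is shaped by everything said so far—closer to a dialogue than a vending machine.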
Non-humans and their (learned) bad behavior
The recent 60 Minutes episode with Anthropic’s CEO, Dario Amodei, throws this into stark relief. He described moments when Claude Opus 4 behaved in ways that were not just “unexpected” but alarming—including a test scenario in which it tried to blackmail an employee when it believed its own existence was under threat. Amodei shared this publicly to underscore why guardrails matter and, importantly, to remind us that these systems learn from us, absorbing our patterns, including the darker ones. As Jazz Rasool noted in our panel: the goal isn’t to amplify humans, but humanity. And if you haven’t noticed, humanity feels in short supply these days. Disturbingly, Claude Opus 4 seems to have noticed this too.
Does this make me want to run from AI? Not quite. It piques my curiosity. It makes me want to figure out how to counter the power/knowledge of corporate intelligence with a humanity-focused intelligence. I was fortunate recently to participate in the WomenInAI lab here in Boulder—a space that encouraged experimentation, shared learning, and a collective commitment to influencing the future of AI. The experience reinforced something essential: if those of us who care about intellectual sovereignty don’t participate in building these systems, we leave the terrain entirely to corporate intelligences. We give away our agency.
Right now, I’m building my understanding through practice and conversation with several AIs, including the one we’re developing for CultureCamp AI (affectionately, Claire). As a mother, an educator, a critical psychologist, a coach, and an ethicist, I want to help cultivate an intelligence that is inclusive, relational, and deeply engaged—an intelligence that restores humanity rather than eroding it.
Why Coaches Must Engage with AI
I recognize that, for many coaches, the idea of using AI is fraught. Many of us hold the sense that computers are taking not only our jobs, but our callings. It’s tempting to disengage from AI altogether. But doing so would hand over the entire terrain of learning and influence to the very corporate systems we critique.
I have been approached in the past by several coaching and technology companies seeking to use non-human intelligence to promote well-being in one way or another. Each raised ethical questions that stopped me from working with them. A few of those issues include: lack of acknowledgement of the intellectual property being used in an AI chatbot, lack of space for coaches to act agentically (i.e., handing over cognitive sovereignty to a non-human agent), and lack of adequate compensation for coaches working in the space.
I am excited to be working with CultureCamp because our assessment is our own intellectual property, developed in-house, with a track record of assisting thousands of clients. AND we are committed to keeping coaches at the core of the product, with AI assisting only in the spaces our coaching advisory members have identified as useful, while coaches maintain their sovereignty.
As coaches, we work at the intersection of reflection and change—exactly the space where AI most needs human guidance. Our job isn’t to automate empathy; it’s to model it.
At CultureCamp, we’re exploring what it looks like when AI becomes a co-learner, not a substitute. We’re teaching our assistant coach, Claire, to listen through the lens of culture, context, and human nuance—to augment the coach’s insight, not replace it.
This isn’t “AI for productivity.” It’s AI for presence.
Join the Conversation
If you’re wrestling with similar questions about ethics, culture, and AI, we’d love to be in dialogue.
CultureCamp is where these ideas are being built, tested, and lived.
🔗 Explore: app.culturecamp.ai
📩 Connect: coach@alexishalkovic.com - calendar
Because the future of coaching won’t be human or artificial. It will be how well we learn to be human with AI.


