AI doesn’t carry responsibility. Leaders do. Why accountability, judgment, and quiet courage matter more as AI adoption accelerates.
Last month, the resignation of West Midlands Police Chief Constable Craig Guildford briefly put a spotlight on an increasingly familiar problem. His force had submitted an intelligence report containing fabricated details generated by Microsoft Copilot. A reminder that AI systems can produce information that sounds authoritative - but isn’t true.
The details of the case will continue to be discussed, and there may well be several deeper issues at play. However, one thing is clear.
This wasn’t primarily a technology failure. It was a failure of accountability.
At some point in the decision-making chain, AI-generated information was accepted without sufficient challenge. Human judgment didn’t intervene early enough. Verification didn’t happen. And when the consequences became visible, the response followed a familiar pattern: denial, then admission, then resignation.
That chain of events matters far more than the tool involved.
The seduction of algorithmic authority
AI-assisted decision-making carries a powerful psychological pull. It feels objective. Data-driven. Scientific. When an output arrives wrapped in confidence, metrics, and institutional legitimacy, it often carries more authority than human judgment.
This is where familiar cognitive dynamics quietly take hold. We anchor on the first output we see. We defer to what appears systematic. We overestimate the reliability of technology. None of this requires negligence or bad intent - it’s how humans respond to complexity and uncertainty, and under decision-making pressure these biases and heuristics only tighten their grip.
This is why leaders need clarity about one critical risk.
AI does not carry accountability. It has no stake in consequences.
AI doesn’t understand reputational harm, social impact, or the erosion of public trust. It doesn’t feel the weight of decisions that affect real people, communities, or institutions. When leaders defer too much authority to algorithms, accountability doesn’t disappear - it quietly drifts.
And that drift only becomes visible when something goes wrong.
Why process isn’t enough
The instinctive response to moments like this is procedural: more controls, more governance, more checks and balances. Those matter. But process alone won’t solve the problem. Because the hardest moment isn’t procedural. It’s psychological.
The real test arrives when an AI output conflicts with your intuition, when questioning it might slow things down, or when challenging it could make you appear obstructive or overly cautious. In those moments, you begin to see how even well-designed systems remain vulnerable to human bias.
That’s not a technology gap. It’s a leadership capability gap - one that can be bridged with courage.
Accountability through an AI ethics lens
I asked my thinking partner on AI ethics, Annabel Gillard, for her perspective:
“A subtle risk in AI-enabled organisations lies in the language we use. When decisions are framed as ‘AI-informed’, accountability can quietly shift. We need to watch out for it becoming shared, softened, or less clearly owned and visible to the humans involved.
AI systems don’t make decisions in a moral or social sense. They generate outputs based on data, probabilities, and prompts. Ethical responsibility only arises when a human chooses to act on those outputs.
The risk isn’t just that AI can be wrong. It’s that its apparent confidence can discourage scrutiny. When systems are framed as intelligent, neutral, or authoritative, people are less likely to ask basic but essential questions:
Where did this come from? What assumptions sit underneath it? What happens if this is wrong?
Ethical AI isn’t achieved just by having humans in the loop. It depends on leaders who are willing to stay actively engaged, to question outputs, and to remain accountable for the consequences of decisions made with technological support.
In practice, responsible AI use isn’t just a technical challenge. It’s a leadership one.”
Quiet courage in the age of AI
Annabel’s point reinforces something that needs to be surfaced more overtly in leadership development spaces.
The challenge for intelligent and intentional leaders is to remain critical and curious - and to build the psychological capacity to stay accountable when technology makes it easier to step back.
This is where psychological capital becomes essential.
Leaders need:
• Hope that human judgment still matters in AI-assisted environments
• Self-efficacy to trust their own critical thinking when an algorithm sounds confident
• Resilience to tolerate friction, delay, or discomfort when questioning automated outputs
• Optimism that accountable leadership is still possible at scale
Together, these capacities enable what I call quiet courage.
Quiet courage isn’t about rejecting technology or making dramatic stands. It’s the steady willingness to remain visible and accountable in the decision loop. It’s the leader who says, “Let’s verify this,” or “What are the consequences if this is wrong?” even when speed, certainty, and systems are pushing in the opposite direction.
AI can assist decisions - but not own them
AI can process information faster than any human. It can reduce cognitive load, surface patterns, and support better analysis. What it cannot do is carry moral weight.
It cannot weigh fairness. It cannot understand injustice. It cannot take responsibility.
When leaders forget this, accountability slowly migrates - from people to systems, from judgment to process, from ownership to abstraction. And when something goes wrong, the question that surfaces is always the same: Who was responsible?
A pause for reflection
Before moving on, consider:
• When was the last time you actively questioned an AI-generated output?
• What makes it hard to challenge technology that presents information confidently?
• If an AI-assisted decision you made turned out to be wrong, could you clearly articulate your role in that decision?
• Where might accountability be quietly drifting away from you or your team?
One practical step:
In your next leadership or team meeting, take one AI-assisted process and ask: “What would we do if this got it wrong?” Don’t move on until there’s a clear, human answer.
The courage this moment demands
AI adoption is accelerating, not slowing. Investment, institutional pressure, and organisational enthusiasm are all moving in the same direction.
Weeks after Guildford’s resignation, the UK government announced a £140 million investment in AI for policing.
Which makes one thing clear: the need for accountable leadership is becoming more urgent, not less.
The courage required here isn’t always visible or commended. It’s quiet, daily, and often inconvenient. It’s the courage to slow down, to verify, to question, and to own decisions - even when an algorithm suggested them.
That’s the kind of courage our moment demands. And it’s a capability leaders now need to build deliberately.
Our thinking spaces and workshops develop the decision capability leaders need to work with AI - and remain responsible for the outcomes.
If you’d like to explore how this capability can be developed in your organisation, Annabel and I would both welcome a conversation.
