By Stephanie McCabe
Stephanie is the President of KnowerTech, a board director, and former Deputy City Manager for the City of Edmonton. With a background in engineering and executive leadership, she has led major transformational initiatives across both the public and private sectors, spanning technology, infrastructure, and business growth.
I have been in a board meeting where we knew, collectively, that management was not ready to launch a major initiative. Everyone in the room sensed it. It was the most important thing to be said that day.
We said it in the last five minutes.
By then there was no time to deal with it properly. No time to explore what “not ready” actually meant, what would make them ready, what the board’s role was in closing that gap. The conversation that should have shaped the entire meeting happened in the margins of it, and we moved on.
I have been in another board meeting — several, in fact — where we collectively doubted whether the management team had what it took to navigate a major change. It took us multiple meetings to actually surface that concern out loud. Multiple meetings of decisions made in the shadow of something nobody was willing to name.
Neither of those failures had anything to do with expertise. Everyone in those rooms was experienced, capable, and well-intentioned. What failed was courage. The courage to say the important thing early, when it could still change something, rather than late, when it was too uncomfortable to leave unsaid any longer.
The most expensive thing a board can do is know something and wait to say it.
I am telling you these stories because they are the truest thing I know about how boards fail. And they are directly relevant to AI — not because AI is a unique governance challenge, but because it is a current one, arriving at exactly the moment when most directors are least equipped to name what they actually think.
Most board members don’t know that much about AI. That is the starting point, not the weakness.
I want to say this plainly because most governance conversations around AI skip it entirely: the majority of directors sitting on boards right now do not have enough firsthand experience with AI to govern it confidently. They have read about it. They have heard presentations about it. They may have experimented with a consumer tool once or twice. But they do not have the working knowledge that comes from using it daily, seeing where it succeeds and where it fails, and understanding what adoption actually feels like from the inside.
That gap is real and it matters. But it is not the problem. The problem is what happens when boards respond to that gap by performing confidence they do not have — by asking the questions that sound sophisticated rather than the ones they actually have, by accepting management’s AI narrative without the literacy to probe it, by treating the risk register as a substitute for genuine engagement.
The gap is closeable. Directors who are not yet using AI personally should start. Not to become technical experts — that is not the job — but to have enough firsthand experience to know the difference between an organization that is genuinely building AI capability and one that is performing it for the board’s benefit.
What is not closeable is the gap created by a board that will not admit what it does not know. That board will always be one step behind the conversation it is supposed to be governing.
In it to win it together.
The best boards I have been part of share a quality that is harder to describe than it sounds: they are genuinely in it together. Not performing alignment. Not managing consensus. Actually committed to the same outcome, willing to hear perspectives that complicate their own, and courageous enough to say the thing that needs to be said even when it is uncomfortable.
That quality matters for every governance challenge. It matters most for the ones where the knowledge is uneven, the stakes are high, and the right answer is not obvious. AI is all three of those things simultaneously.
A board that functions this way brings different perspectives into the room and actually uses them. The director with a technology background sees something the director with a finance background misses, and vice versa, and the conversation between them produces something neither would have reached alone. That is not a process. It is a culture. And it is built, or not built, over many meetings, through whether people consistently have the courage to say what they actually think.
You hear different perspectives. You are in it to win it together. People have the courage to say the unsaid.
The what if question.
One of the things I wish boards had asked me more when I was on the management side is simply: what if we did this?
Not what could go wrong. Not what are the risks. What if we actually did it — what would become possible?
That question requires a different posture than most governance conversations default to. It requires genuine curiosity rather than protective skepticism. It requires a board that sees its role as enabling good decisions rather than preventing bad ones — or more precisely, a board that understands those are not the same job and that doing only the second one eventually makes the first one impossible.
On AI specifically, the what if question is the one most boards are not asking. They are asking what could go wrong, what is our exposure, what does the policy say. Those are necessary questions. They are not sufficient ones.
A board that only asks what could go wrong will always find something. Meanwhile, the company falls behind.
Here is a scene that is playing out in boardrooms right now. A director raises AI on the agenda. The discussion turns quickly to the risk of recording board meetings — who has access to the transcript, where the data is stored, what the liability exposure is. The board resolves to proceed cautiously. The item closes.
Meanwhile, the company being governed is trying to figure out whether it is allowed to experiment with AI at all, whether the CEO has set a genuine expectation or just an aspiration, and whether anyone will be supported if they try something and it goes wrong. And the people doing the work are asking quieter questions: do I have time to learn this? Is this company moving fast enough for my career? If we’re not exploring AI, am I falling behind just by staying here?
The board managed the risk of the meeting recording and missed the opportunity of the company.
That is not governance. That is the last-five-minutes problem in a different form. The board knew what the important conversation was — is our company genuinely moving on this, and what is our role in making sure it does — and chose the safer, smaller question instead.
Boards are so focused on managing the risk of AI that they have stopped noticing they are also managing away the opportunity.
This is not an argument against risk management. It is an argument that risk management is only part of the job. The cost of caution needs to be accounted for alongside the cost of the risk being managed.
You are not just governing internal adoption. You are governing against competitors who built AI into their entire business model from day one.
Most board conversations about AI are framed as an internal change management question: is our company adopting AI fast enough? That is a necessary question. It is not the complete one.
Right now, in your industry, there are companies that did not adopt AI. They were built with it. No legacy processes to retrofit. No cultural resistance to manage. No technical debt accumulated over years of doing things the old way. Their entire operating model — how they price, how they staff, how they deliver, how they compete — was designed around AI from the first day. They are not trying to do what you do with better tools. They are trying to replace what you do entirely.
This is not a technology sector problem. It is happening across industries simultaneously. At the AI application layer, AI-native startups captured nearly $2 in revenue for every $1 earned by incumbents in 2025 — 63% of the market, up from 36% the year before. In healthcare, AI-native companies are automating revenue cycle management and clinical documentation. In legal services, they are handling briefings and research. In financial services, they are automating underwriting and compliance. In advertising, one AI-native startup founder stated plainly: “We want to disrupt the traditional ad agency” — and raised $12 million in seed funding to do it. The question for any board is not whether this is happening in their industry. It is how far along it already is.
Sources: Menlo Ventures, “The State of Generative AI in the Enterprise,” December 2025; Business Insider, December 2025.
The board’s job is not only to ask whether the company is keeping up with the technology. It is to ask whether the company’s business model is still the right one for a world where AI-native competitors exist.
That is a different conversation than the one most boards are having. It is a strategic question, not a change management one. And it belongs on the agenda well before the risk register.
Ozone explored this tension in an earlier article on resilience governance: organizations cannot eliminate uncertainty, but they can build the capability to adapt. AI may be the clearest current test of this, forcing boards to confront whether they are governing for protection alone, or for long-term resilience and relevance.
Most boards are asking management whether the company is using AI. They are accepting yes as a sufficient answer. It is not.
The adoption culture — whether people are genuinely learning, whether the CEO is modeling use, whether there is psychological safety to try and fail — is not something management will self-report accurately when things are going badly. It requires the board to ask better questions. And asking better questions requires directors who have enough literacy to recognize a good answer when they hear one.
Here is what is being delegated by default in most boardrooms right now: a handful of questions that management will not surface unprompted, every one of which is a governance question.
These are not risk questions. Risk questions exist in every governance framework already. These are the questions that tell you whether your CEO is leading the transformation or managing the appearance of it. They are also the questions that require a board to be genuinely in it together — because hearing the answers honestly, and responding to them honestly, is not a solo act.
Has the CEO declared, clearly and publicly, that using AI is an expectation — not a suggestion? There is a difference between a CEO who has communicated that AI matters and one who has been clear about where the organization is going. The destination should not be negotiable. How people get there — the support, the timeline, the genuine investment in helping teams adapt — absolutely is. A declaration is not a mandate with no flexibility. It is honesty about direction combined with genuine commitment to the journey. A board that cannot tell whether its CEO has done this is not asking closely enough.
How do you know your team actually understands how to use AI? Not whether they have access to the tools. Whether they know how to use them well enough to get real value. Listen for specifics. A good answer names what people are doing, not what they have been given.
How have you improved customer outcomes using AI? Not internal efficiency. External impact. If the answer is only about what AI has done for the company’s operations, the transformation is pointing in the wrong direction.
What is the biggest threat and the biggest opportunity AI represents for this company right now? Both in the same breath. A CEO who can answer only one of them is not seeing the full picture. A board that only wants to hear one of them is not asking the full question.
What are you most afraid of, and what are you most excited about? This is not a governance question. It is a human question. That is exactly why it works. A CEO who is genuinely inside this transformation will answer it differently than one managing the narrative around it. The board’s job is to know the difference — and to create the conditions where an honest answer is actually possible.
That last point matters more than it might seem. The CEO will only be as honest as the room makes it safe to be. The culture of a board — whether it is genuinely safe to say the unsaid, whether difference is honoured rather than smoothed over, whether people are in it to win it together — determines the quality of the information the board actually gets. Governance is not just about asking the right questions. It is about being the kind of room where the right answers can be said.
A board exists to do four things: see the forest, not just the trees; be a genuine sounding board for management; manage risk; and deliver long-term value. AI done right advances all four. AI done badly — or not done at all — erodes them.
The adoption problem is real. Most organizations are not moving fast enough, not because the technology is hard but because leadership has not made it non-negotiable — and boards have not asked why not. That is where the long-term value is being lost. Not in a failed implementation, not in a data breach, but quietly, quarter by quarter, in the gap between the companies that are genuinely building AI capability and the ones whose boards accepted yes as a sufficient answer.
That is the board’s job. Say the important thing early. Ask the harder question. Be in it to win it together. The forest you are supposed to be seeing is right there.
This document was developed in collaboration with Claude, an AI assistant made by Anthropic. The ideas, arguments, and experiences reflected here are entirely my own. Claude was used as a thinking partner and writing tool — using AI in my own work is something I advocate for, and this document is an example of that in practice.