There is a moment I have watched play out in boardrooms and leadership meetings with remarkable consistency. Someone returns from a conference, or reads a competitor's press release, and the question lands on the table: "What is our AI strategy?"
It is a reasonable question. But the way it is usually asked reveals the problem. AI is being treated as a destination, something to acquire, announce, and deploy, rather than a lens through which to examine how decisions are made, how data flows, and whether the organisation is actually ready to benefit from any of it.
I have observed this pattern across multiple organisations. The technology is rarely the stumbling block. The stumbling blocks are almost always the decisions made before the technology is ever switched on.
The organisations that will get the most from AI are not the ones moving fastest. They are the ones asking the right questions before they move at all.
Four Decisions That Determine Whether AI Creates Value or Chaos
Based on what I have seen, there are four decision failures that consistently undermine AI initiatives. None of them are technical. All of them are leadership problems.
The first is buying the tool before defining the problem. The vendor demo is compelling. The capability looks impressive. The procurement decision gets made. Then, three months in, the organisation discovers that no one has defined what problem this tool is actually solving, who owns it, or how success will be measured. AI adoption driven by vendor enthusiasm rather than business need is one of the most common and costly mistakes I have observed. The technology works. The use case just does not exist.
The second is building on data you do not trust. AI does not generate insight from thin air. It surfaces patterns from data, and if that data is inconsistent, incomplete, siloed, or ungoverned, the AI will amplify those problems rather than solve them. Organisations that attempt to layer AI capabilities onto a weak data strategy are not accelerating their performance. They are accelerating their existing dysfunction. Before any serious AI investment, the honest question is: do we actually trust our data?
The third is reacting to competitors rather than thinking clearly. Fear of being left behind is a powerful driver of bad decisions. When a competitor announces an AI initiative, the pressure to respond can override the discipline to think clearly. I have seen organisations commit significant budget to AI programmes that were defined entirely by what a competitor appeared to be doing, with no analysis of whether the same approach made sense for their own business model, customers, or operational context. Reactive AI investment almost never delivers the value that deliberate, outcome-led investment does.
The fourth is treating governance as an afterthought. Ethics, compliance, and governance considerations are frequently left until after deployment, treated as a legal formality rather than a strategic input. In regulated industries, this is particularly dangerous. Questions around data privacy, model explainability, bias, and accountability need to be embedded into the design of any AI initiative from the outset, not retrofitted when a regulator or an incident forces the conversation. The organisations getting this right involve legal, compliance, and risk functions at the beginning, not the end.
What Good AI Decision-Making Actually Looks Like
The organisations I have observed navigating this well share a few common traits. They are not necessarily moving the fastest, and they are not always deploying the most sophisticated models. But they are deliberate, and that deliberateness is what separates value creation from expensive disappointment.
- The problem is defined before the solution is selected. Use cases are evaluated against business outcomes, not capability demonstrations.
- A data governance framework exists and is actively maintained. The organisation knows where its data lives, who owns it, and whether it can be trusted.
- AI investment decisions are made with input from technology, legal, compliance, and the business. No single function owns the decision.
- Success metrics are established before deployment, not after. There is a clear definition of what good looks like.
- Ethical and regulatory considerations are built in from the start. Explainability, fairness, and accountability are design requirements, not compliance checkboxes.
A Note for Board Members and Senior Leaders
If you are sitting on a board or in a leadership team being asked to approve an AI investment, there are a handful of questions worth asking before you sign off.
What specific problem are we solving? If the answer is vague, or if it is framed primarily around what competitors are doing, that is a warning sign. AI strategy must connect to a concrete business problem or opportunity.
What does our data look like today? The quality of your AI output will be limited by the quality of your data input. If the organisation does not have a clear answer to this question, the AI investment is premature.
Who is accountable? AI initiatives that span multiple functions need clear ownership. If responsibility is diffuse, accountability will be too, and when something goes wrong (and something always eventually does), no one will own the response.
Have legal and compliance been involved from the start? Not informed after the fact. Involved from the start. Particularly in financial services, healthcare, or any regulated environment, this is non-negotiable.
AI is not inherently risky. Undisciplined decision-making is. And that has always been true, long before AI entered the conversation.
The Opportunity for Those Who Get This Right
None of this is an argument against AI investment. The capabilities being developed right now are genuinely significant, and organisations that build the foundations correctly will compound real advantages over time.
But the window for thoughtful, deliberate adoption is narrowing, and the pressure to move quickly is only increasing. Which means the leaders who can hold the line, ask the hard questions, and insist on rigour before deployment are exactly the kind of people their organisations need most right now.
The technology will keep improving. The discipline of good decision-making is what you bring to it.
Thinking About AI Investment in Your Organisation?
I work with boards and leadership teams to cut through the noise and build technology strategies grounded in business reality. If you are navigating an AI decision and want a frank, independent perspective, let's talk.
Start a Conversation