How to build an AI roadmap for a mid-market business
An AI roadmap is not a list of tools to buy, a technology plan, or a catalogue of use cases to approve. It is a sequenced set of commercial decisions about where AI creates measurable leverage, and in what order.
At Graph Digital, an AI advisory firm for UK mid-market businesses, I work with CEOs and boards who have approved AI investment, watched pilots multiply across departments, and now face the board's question: what is all of this actually returning? The pattern is not a lack of ambition. It is a lack of sequence.
A useful AI roadmap starts with what you already have, identifies where AI creates measurable leverage, and orders investment by confidence of return rather than enthusiasm of proposal.
AI roadmap, AI strategy, AI implementation plan: what's the difference?
These three terms are used interchangeably, and that confusion is itself a source of roadmap failure.
| | AI Roadmap | AI Strategy | AI Implementation Plan |
|---|---|---|---|
| Primary focus | Commercial sequencing — which investments, in what order, governed by what criteria | Direction — the commercial rationale for AI investment and criteria for success | Technical delivery — how to build and deploy specific solutions |
| Built by | Commercial leadership (CEO, CFO, COO) with or without advisory | Executive team, often with advisory input | IT, engineering, or implementation partner |
| Starting point | Portfolio view of what current AI investment is producing | Business model, commercial objectives, competitive position | Specific use cases or platforms already decided |
| Key output | Keep/kill/scale decisions and a sequenced investment plan | AI vision, governance principles, strategic criteria | Delivery milestones, technical architecture, resourcing |
| When you need it | When AI investment is running but outcomes are unclear | Before significant AI investment begins | When strategy and roadmap decisions are already made |
An AI strategy sets the direction. An AI roadmap sequences the execution. An implementation plan delivers the execution. Without a strategy, the roadmap has no commercial anchor. Without a roadmap, the strategy remains directional. This article covers the middle layer.
What an AI roadmap actually is
Most searches for this topic return frameworks structured as technology procurement guides: define your AI vision, assess your data maturity, identify use cases, build a prioritised list, assign an owner. They miss the point that matters most to a CEO or CFO facing an AI investment question.
An AI roadmap is not a list of capabilities to acquire. It is a sequenced set of commercial decisions: where does AI create the most measurable financial leverage for this business, in what order should we invest, and what criteria determine whether each investment is worth making or continuing?
Built correctly, it tells you three things: what to stop, what to scale, and in what order to move next. Tools, vendors, and technical architecture all follow from those decisions. Without them in place, what you have is a technology catalogue with aspirational timelines attached.
What most AI roadmaps get wrong
The failure mode is consistent. A team is asked to build a roadmap. They produce a list: AI in customer service, AI in operations, AI in marketing, AI in finance. The list is prioritised by department enthusiasm and distributed as the strategy. Six months later, the same number of pilots are running, the same ROI questions are unanswered — and there are more of them.
In February 2026, I was talking to the CEO, CFO and COO of a 500-person London business growing at around 20% a year. They had AI tools running across the organisation. None of the three could tell me what those tools were collectively costing, or what any of them had produced commercially. "We don't know what we don't know," the CEO said: about their data, their systems, their total AI investment picture. That is the starting condition for most mid-market businesses with active AI investment, and it is exactly what a tools-list roadmap cannot fix.
The tools-list approach assumes the organisation has a clear picture of what is already running, what it costs, and what it is producing. Most do not. They know the visible tools — the ones with a named vendor and a monthly invoice. They have a much less clear picture of the experiments that started in different departments, the AI features embedded in existing software, and the pilots approved but never assessed against a commercial criterion.
Without that picture, the consequence is predictable: investment is driven by whoever makes the loudest case for the next initiative. Every quarter adds more activity. The ROI question becomes harder to answer, not easier.
The right starting sequence
To build an AI roadmap that holds, start with a clear view of what your current AI investment is producing — before sequencing anything new. That means answering four questions in order: what it costs across all initiatives, which initiatives have demonstrable commercial return, which should be stopped, and where evidence justifies scaling. Without those answers, AI planning has no foundation and the sequence has no anchor.
A commercially-anchored roadmap begins with four steps before any sequencing can happen:
1. Establish total AI investment cost. What does current AI investment actually cost across all initiatives? Visible costs (licences, subscriptions, vendor fees) and less visible ones (staff time, integration maintenance, data infrastructure). Most organisations do not have a single consolidated number.
2. Identify measurable commercial return. Which initiatives have a measurable commercial return? Not a projected return. Not an activity metric. Revenue attributable, cost reduction verifiable, risk reduction quantified. For most initiatives, this question cannot currently be answered, which is a failure of measurement architecture, not of effort.
3. Decide what to stop. Which initiatives should be stopped before more capital compounds? This is the question no tools-list plan asks. Every initiative has momentum: a sponsor, a vendor relationship, a team with invested time. Stopping requires a decision framework. Without one, weak pilots survive by default.
4. Find evidence of scalable return. Where is there evidence of return that justifies scaling? Not enthusiasm. Which initiatives have demonstrated measurable commercial return at small scale? Scaling without evidence is how capital gets wasted at speed.
These four steps cannot be completed from inside the roadmap-building exercise. They require a clear view of the current portfolio — what exists, what it costs, what it is producing. Without that view, sequencing new investment has no foundation.
What the portfolio-first approach requires. This sequence takes 4–6 weeks before any new investment is sequenced. Kill decisions create internal friction — every initiative has a sponsor, and stopping it is a political act as much as a commercial one. That friction is the price of building a roadmap on evidence rather than enthusiasm, and it is exactly the point where most tools-list roadmaps fail: they avoid the hard stops and end up sequencing accumulation rather than eliminating it.
Why the portfolio view comes first
The most common error in roadmap development is treating the current state of AI investment as background context rather than the primary starting point.
If your organisation already has Copilot deployed, a chatbot on the website, and two or three pilot projects underway, the question is not "what AI should you add?" It is "what do you already have, and what is it producing?" The answer determines everything that follows — which initiatives to scale, which to stop, how much capital is genuinely available for new investment.
Unlike a use-case catalogue approach, which simply adds new AI categories to an existing list, the portfolio view forces a reckoning with what is already running before committing to what comes next.
Most CEOs and COOs lack a structured mechanism to produce that view — a portfolio picture that shows what AI investment costs, what it returns, and what criteria would justify more. Without it, sequencing your AI roadmap is guesswork.
The portfolio view surfaces one further risk that organisations with existing AI activity rarely have clear visibility of: whether current AI initiatives are built on a data foundation that can actually scale. AI deployed on fragmented, siloed, or inconsistent data does not produce unreliable results occasionally — it compounds the structural problem at AI speed. That risk is almost never visible until a structured portfolio assessment forces it into view.
How the AI Portfolio Review anchors your AI roadmap
The AI Portfolio Review is the structured mechanism that produces the portfolio view required to anchor a commercially-sequenced AI roadmap. It is not a delay before the roadmap begins — it is the foundation on which every subsequent sequencing decision is built.
The Review produces three outputs that the AI roadmap requires to function:
- A portfolio map: every AI initiative, its cost basis, and its commercial return status
- Keep/kill/scale recommendations: structured decisions on every active initiative
- A governance blueprint: the criteria framework that governs all future AI investment
Every initiative is assessed through the Portfolio Decision Framework — the four-criteria assessment at the core of the AI Portfolio Review: commercial return, strategic alignment, operational feasibility, and portfolio fit. Each initiative receives one of four decisions.
Operational feasibility covers more than technical delivery. In practice it includes regulatory readiness — whether an initiative's AI application falls under restricted or high-risk categories under frameworks such as the EU AI Act or equivalent UK guidance. An initiative that is technically deliverable but exposes the business to compliance risk fails the operational feasibility criterion on those grounds, regardless of its commercial return. Boards in 2026 are increasingly raising this question; the Portfolio Decision Framework surfaces it as a criterion, not an afterthought.
- Kill: no commercial return, no strategic fit — capital is better redeployed elsewhere
- Pause: potentially viable but not yet evidence-based; put on hold pending proof of return
- Scale: demonstrated commercial return at small scale; ready for increased investment
- Replace: the outcome is right but the current approach is wrong; redesign before reinvesting
The distinction between Kill and Pause matters. Pause preserves optionality for initiatives where the case is incomplete but plausible. Kill is for initiatives where continued investment has no commercial logic. Most organisations find this distinction clarifying rather than difficult once the criteria are applied consistently, because it separates the question of the initiative's potential from the question of the sponsor's judgement.
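The decision logic above can be made concrete. The function below is one plausible encoding of the four outcomes as yes/no questions; it is an illustrative reduction of the framework described here, not the actual scoring used in an AI Portfolio Review:

```python
def portfolio_decision(has_proven_return: bool, strategic_fit: bool,
                       approach_sound: bool, plausible_case: bool) -> str:
    """Map simplified criteria answers to one of the four portfolio decisions."""
    # Scale: demonstrated commercial return at small scale, via a sound approach
    if has_proven_return and strategic_fit and approach_sound:
        return "Scale"
    # Replace: the outcome is strategically right but the current approach is wrong
    if strategic_fit and not approach_sound:
        return "Replace"
    # Pause: no evidence yet, but the case is plausible; hold pending proof of return
    if plausible_case:
        return "Pause"
    # Kill: no return, no fit; capital is better redeployed elsewhere
    return "Kill"
```

Encoding the decisions this way makes the Kill/Pause distinction mechanical: Pause is reached only because a plausible case exists, so the sponsor's enthusiasm never enters the function.
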
A review typically identifies three to four initiatives that should be stopped or redirected. Not because investment has been badly managed, but because it was made without consistent decision criteria. That is the norm, not the exception. The capital freed is typically in the range of £80k–£200k [illustrative], which funds the next phase of the roadmap.
The AI Portfolio Review is step one of the roadmap — not a prerequisite before it.
How to structure an AI roadmap for board presentation
The board is not reviewing a project list. It is making capital allocation decisions. Structure the presentation around four questions in sequence:
1. Where we are. The portfolio map: what AI investment exists, what it costs, what it is currently producing. Not a slide with tool logos: a structured view of investment against return.
2. What we know. Which initiatives have produced measurable commercial return and which have not. Most organisations cannot produce this section without a portfolio review first.
3. What we recommend. Keep/kill/scale decisions, backed by the four-criteria framework. The board does not need to review every initiative. It needs to approve the decision logic.
4. What comes next. The sequenced roadmap: which initiatives will be scaled, which new investments are proposed, on what criteria, and what accountability structure governs the next 12 months.
This structure follows the capital allocation logic the board already applies to every other major investment decision.
One question the board will raise that this structure does not address by default: what are your competitors doing with AI? This is a legitimate driver of board-level anxiety, and the right place to answer it is in section 4 (What comes next). The sequenced roadmap should position proposed investments against the competitive question explicitly: which of these creates leverage relative to what competitors are building, not just relative to the current internal portfolio. Boards that have already approved AI investment often find the internal return question and the competitive position question are asked in the same breath. Addressing both in the same presentation removes the deferral that otherwise follows.
What a 12-month AI roadmap milestone structure looks like
A credible 12-month structure — the Decision-Gate Roadmap Model — is organised around decision gates, not delivery milestones. Each phase ends with a decision, not a status report.
Months 1–2: The AI Portfolio Review runs. Keep/kill/scale decisions are made and ratified. The roadmap has a foundation.
Months 3–4: Confirmed initiatives are scaled. Confirmed drains are stopped and capital is redirected.
Months 5–8: Any new initiative goes through the Portfolio Decision Framework before approval. Same four criteria, same four decisions. This prevents the accumulation problem from re-establishing itself.
Months 9–12: Every initiative funded in months 3–8 is reviewed against the commercial return commitment it was approved on. The roadmap is revised on what the evidence now shows, not just updated with new dates.
How to govern the roadmap
The single most common reason roadmaps fail to hold is the absence of a governance layer that applies the same decision criteria to every new initiative.
Without it, the pattern is consistent: the roadmap is built, approved, launched, and within two quarters new initiatives appear that were not on it, proposed by departments, vendors, or enthusiastic members of the leadership team. None has been assessed against the portfolio criteria.
The governance structure that prevents this is straightforward: every new AI initiative must pass the four-criteria gate before receiving approval or budget. Commercial return must be stated and testable. Strategic alignment explicit. Operational feasibility assessed. Portfolio fit confirmed. If it cannot pass at proposal stage, it does not proceed.
This is not bureaucracy. It is what makes the roadmap mean something.
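Reduced to its simplest form, the gate is a single check: a proposal that has not affirmed all four criteria does not proceed. The field names below are illustrative, not a prescribed proposal format:

```python
# The four criteria every new AI initiative must affirm before approval or budget
REQUIRED_CRITERIA = (
    "commercial_return",        # stated and testable
    "strategic_alignment",      # explicit
    "operational_feasibility",  # assessed (including regulatory readiness)
    "portfolio_fit",            # confirmed against the current portfolio
)

def passes_gate(proposal: dict) -> bool:
    """A proposal proceeds only if every criterion is present and affirmed."""
    return all(proposal.get(criterion) is True for criterion in REQUIRED_CRITERIA)

# A proposal with one unassessed or failed criterion is blocked at proposal stage
example = {"commercial_return": True, "strategic_alignment": True,
           "operational_feasibility": False, "portfolio_fit": True}
print(passes_gate(example))
```

The point of the sketch is the `all(...)` semantics: there is no weighting or averaging at the gate, so a strong commercial case cannot buy its way past an unassessed feasibility or compliance question.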
Governance failure also has a human dimension that commercial frameworks do not always make explicit. The most common resistance to kill decisions is not analytical disagreement — it is the concern of the teams whose work is being stopped. Operational feasibility, as a criterion in the Portfolio Decision Framework, includes this question: can an initiative be stopped without damaging capability the organisation needs, creating structural disengagement, or fracturing relationships with delivery partners? Kill decisions made on criteria agreed in advance, applied consistently, and explained transparently are far more likely to hold. Those treated as purely financial calls, without acknowledgement of the human element, rarely survive the first leadership meeting without being quietly reversed.
As organisations move from passive AI tools and copilots toward autonomous agents that act and decide without step-by-step human approval, the governance layer described here becomes more critical, not less, and the roadmap itself needs to evolve.
The Four Failure Modes of AI Planning
Four failure modes appear consistently across mid-market AI roadmap efforts.
1. The tools-list roadmap. Lists AI capabilities by category, prioritised by vendor recommendation or team enthusiasm. No commercial decision anchor. Produces accumulation, not clarity.
2. The roadmap without governance. A structured plan at launch with no mechanism to maintain decision criteria over time. The accumulation problem returns within two quarters.
3. The roadmap built without portfolio view as its anchor. Sequences new investment without a clear picture of what existing investment is producing.
4. The roadmap built by IT, not commercial leadership. A technology plan that does not connect to commercial outcomes. Useful as architecture guidance. Not useful as a capital allocation framework.
Where to start
If your organisation already has AI investment running — and most do, whether or not there is a formal inventory — the starting point is not to plan the next initiative. It is to build the portfolio view.
The AI Portfolio Review is the structured mechanism for doing that. It produces the portfolio map, the keep/kill/scale decisions, and the governance framework that anchors everything that follows. The roadmap built from that foundation is built on evidence — not enthusiasm.
The AI Portfolio Review runs in 4–6 weeks, for a fixed fee. The roadmap built from it covers the next 12 months.
Graph Digital, an AI advisory firm for UK mid-market businesses, builds AI roadmaps from portfolio evidence — not vendor catalogues.
Frequently asked questions
How long does it take to build an AI roadmap?
The timeline depends on where you start. Beginning with a portfolio view — the right starting point — means the AI Portfolio Review runs in 4–6 weeks. The roadmap follows in parallel or immediately after. A commercially-grounded roadmap covering the next 12 months, with keep/kill/scale decisions, board presentation structure, and governance criteria, can be in place within 8–10 weeks. Building without the portfolio view first takes longer, not less — because the plan has to be revised once the evidence does not support the original assumptions.
What should an AI roadmap include?
A commercially-grounded roadmap should include: a portfolio view of current AI investment (what exists, what it costs, what it is producing); keep/kill/scale decisions on existing initiatives; a sequenced plan for new investment with commercial criteria for each; a board presentation structure (where we are, what we know, what we recommend, what comes next); a 12-month milestone structure organised around decision gates; and a governance framework requiring all future AI initiatives to pass the same four criteria before approval.
How often should we update our AI roadmap?
The milestone structure includes a built-in revision at months 9–12. That is the minimum. In practice, the roadmap should be reviewed each quarter against commercial return commitments — not to revise the plan wholesale, but to apply the four-criteria gate to any newly proposed initiatives and to confirm whether funded initiatives are returning what they were approved on.
What is the difference between an AI roadmap and an AI strategy?
An AI strategy sets the direction — the commercial rationale for investment, the criteria for success, the operating model. An AI roadmap sequences the execution — which initiatives, in what order, with what criteria, governed by what decision framework. Without a strategy, the roadmap has no commercial anchor. Without a roadmap, the strategy remains directional without a mechanism for execution.
How do I get board approval for an AI roadmap?
Structure the presentation as a capital allocation decision, not a technology update. Use the four-part structure in this article: where we are (portfolio map), what we know (what is and isn't returning), what we recommend (keep/kill/scale decisions backed by criteria), what comes next (sequenced roadmap with accountability structure). The board needs to approve the decision logic — the criteria for investment — not review every initiative line by line. That is a materially easier ask.
Where does an AI roadmap start if we already have some AI tools running?
It starts with a portfolio view of what you already have: every AI initiative currently running, what it costs, and what commercial return it has produced or committed to produce. Until you have that picture, you cannot sequence new AI investment rationally. Organisations with existing AI activity that try to build a roadmap forward without first taking stock of where they are have to revise the plan once the evidence surfaces.
What is a portfolio view and why does the roadmap need one?
A portfolio view is a structured picture of every AI initiative in the organisation — what it costs, what it is producing, and whether it passes the criteria for continued investment. The roadmap needs one because the four steps that anchor a commercially-sequenced plan (what does it cost, what is returning, what should stop, what should scale) cannot be completed without it. The portfolio view is the evidence base on which every subsequent roadmap decision is built.
Do I need an AI roadmap template?
An AI roadmap template can provide a starting scaffold, but it cannot replace the commercial decisions the roadmap is built on. The most common problem with template-driven AI roadmaps is that they prompt you to fill in fields (use cases, owners, timelines) before the harder questions have been answered: what is current investment actually producing, and what should stop before anything new is added? A template-filled roadmap built on unanswered questions is a structured version of the tools-list problem. The right starting point is a portfolio view, not a template.
Does this approach work for smaller businesses as well as mid-market?
The commercially-sequenced AI roadmap approach applies wherever AI investment decisions need to be made and justified. The AI planning questions — what does current investment cost, what is it returning, what should stop, what should scale — are relevant whether an organisation has five AI tools or fifty. For smaller businesses, the portfolio review typically runs faster and at lower cost because there are fewer initiatives to assess. The governance framework scales down in the same way: fewer criteria gates, simpler decision log, same principle.
Who should own the AI roadmap internally?
The AI roadmap should be owned by commercial leadership — typically the CEO or COO — not the IT function. This is not about technical capability. It is about decision authority. The roadmap is a capital allocation instrument; the person who owns it needs to be able to approve investment, stop initiatives, and hold commercial outcome accountability. Where the roadmap is delegated to IT, it invariably becomes a technology plan disconnected from commercial outcomes.
Can we build an AI roadmap without external help?
Yes — but the portfolio view that anchors it is harder to produce internally than it looks. The difficulty is not analytical; it is structural. Internal teams typically cannot produce objective keep/kill/scale recommendations on initiatives they have sponsored, proposed, or delivered. An external portfolio review addresses this structural conflict directly. The roadmap itself — once the portfolio view exists — can be built and maintained internally with appropriate governance in place.
What does building an AI roadmap typically cost?
The cost depends primarily on how you approach the portfolio view that anchors the roadmap. An external AI Portfolio Review engagement runs at a fixed fee. Internal approaches can reduce that cost but typically require more elapsed time, an honest internal audit process, and explicit decision authority to make kill recommendations stick. The capital freed by the review — typically in the range of £80k–£200k [illustrative] — often exceeds the engagement cost by a meaningful margin.
How do we know if our AI roadmap is working?
The milestone structure in this article includes a built-in review at months 9–12. For ongoing monitoring: each initiative on the roadmap should have a stated commercial return commitment from the point of approval. The governance framework applies the four-criteria gate to all new proposals. If the proportion of initiatives reaching Scale decisions is increasing quarter on quarter, and new Kill decisions are declining, the roadmap is functioning as a decision architecture — not just as a planning document.
How do we handle internal resistance when kill decisions affect department sponsors?
Kill decisions are the most politically difficult element of roadmap governance. The mitigation is structural rather than interpersonal: the Portfolio Decision Framework applies consistent criteria to all initiatives, so the decision is not about the sponsor — it is about the evidence. Presenting the criteria to leadership before the review runs, and securing agreement on what a Kill decision means before any initiative is assessed, removes the personalisation that makes sponsor resistance stick. Most leaders will accept a Kill decision made on consistent criteria they agreed to in advance.
What happens when all four criteria are met but budget forces a choice between initiatives?
The Portfolio Decision Framework is a portfolio tool — designed for exactly this situation. When multiple initiatives pass the four-criteria gate and budget constrains how many can proceed, prioritise by: (1) strength of commercial return evidence at smallest scale — the initiative with the clearest proof of return gets capital first; (2) strategic alignment depth — favour the initiative most central to the core commercial model, not adjacent to it; (3) operational feasibility differential — where execution risk is meaningfully different, choose the lower-risk path first. The framework does not eliminate difficult choices. It makes them consistent and defensible.
What does mid-roadmap governance failure look like, and how do you correct it?
Mid-roadmap governance failure is almost always a gate failure: new initiatives appear without going through the four-criteria review. The signals are a growing number of active initiatives despite the roadmap, kill decisions not being implemented, and budget allocated to new proposals before the existing portfolio has been assessed. The correction is a mid-cycle portfolio review — applying the same framework to the current state, including any initiatives added since the original roadmap was built. One quarter of consistent governance typically restores the decision architecture.
