Implementation is the strategy

The book’s central argument is that the UK’s AI problem is not a strategy problem. It is an implementation problem. Over two decades, the UK has produced world-class strategy documents — and then failed to convert them into institutional change. The same structural barriers (fragmented governance, inadequate procurement, the separation of technology design from policy design) have recurred across GDS, Universal Credit, NHS digitisation, and now AI.

The book argues this is not because the strategies were wrong but because what gets implemented, in what sequence, with what resources, under whose accountability — these decisions constitute the real strategy regardless of what policy documents declare. When government staffs AI implementation teams with junior analysts on temporary contracts, that communicates priorities more clearly than any white paper.

The book proposes a concrete five-year programme of reform across five delivery strands: governance, procurement, infrastructure, skills, and international positioning. It is deliberately more demanding than conventional policy prescriptions because, as the author argues, conventional prescriptions have demonstrably not worked.

Reject both US and EU models. Pursue an adaptive path.

The book frames the UK’s fundamental choice as between three roads, and argues strongly for the third:

1. The US model: Prioritise speed, accept platform dependency on hyperscale providers, rely on market forces. Produces remarkable innovation but extreme geographic concentration, algorithmic bias, and only 37% public trust in AI.

2. The EU model: Prioritise citizen protection through comprehensive regulation, accepting slower adoption and higher compliance costs. Achieves 68% public support, but the EU’s share of global AI patents has declined from 12% to 8%, and 18% of researchers relocated in four years. (The book notes that even the EU is now retreating from this position via the Digital Omnibus simplification package.)

3. The UK's adaptive path (recommended): Build sovereign capability in areas that matter most, set governance standards that others adopt, and use the UK’s distinctive institutional assets — from the NHS to the City of London — as proving grounds for AI that is both innovative and accountable. Democratic values, innovation excellence, and social inclusion as mutually reinforcing rather than trade-offs.

The book argues this is not a “split the difference” position. It requires five concrete choices that no other nation is currently making (detailed below under “Make five distinctive global choices”).

Redesign governance around three principles

The book diagnoses the UK’s core institutional failure as governance fragmentation: multiple bodies with partial AI remits (AI Safety Institute, Sovereign AI Unit, DSIT, CDDO, sectoral regulators, Geospatial Commission) producing what it calls “strategies that organisations politely ignore.” It proposes three design principles and specific structural reforms.

Three institutional design principles

Adaptive capacity: Distributed decision-making, embedded feedback loops, iterative learning with tolerance for failure, and investment in durable human capability rather than short-term projects.

Genuine trust: Transparency in design (publish code, training data, performance metrics), clear accountability with real mechanisms for redress, responsiveness to evidence of harm, and inclusive deliberation that brings affected communities into governance rather than asking them to ratify decisions made elsewhere.

Authentic inclusion: From the start not as retrofit; power not just voice; distributed across scales (individual, organisational, system, societal); adequately resourced; and measured with course correction.

Create an AI Coordination Authority

Not a new regulator, but a statutory body modelled on the OBR — independent of departmental interests, with a statutory mandate, and required to publish regular assessments. Initially a Cabinet Office unit reporting to the PM’s Office, seeking statutory mandate within 18 months. First task: publish an AI Readiness Assessment across all major departments.

The critical design choice: between a body that coordinates (convenes, recommends, publishes) and one that directs (mandates, enforces, allocates). The book argues that coordination without enforcement has been tried and has failed.

Three governance models for institutional reform

Adaptive learning institutions: Cross-functional teams rather than siloed departments; regular review cycles; failure documentation and learning; investment in capability not just projects; external accountability through public reporting.

Multi-stakeholder governance forums: Representative participation from government, business, civil society, academia and affected communities; shared agenda-setting; resource pooling; distributed action with mutual accountability. Modelled on the Open Banking precedent, where participation was mandated by the CMA, not voluntary.

Community-embedded innovation: Labs rooted in specific places working on locally identified challenges; resourced and supported; connected to a broader national network; with clear scaling pathways for innovations that work.

Transform procurement from barrier to enabler

The book identifies procurement as one of the most important levers government has, and one of the most persistently misused.

Specific reforms proposed

Mandate the Procurement Act 2023’s competitive flexible procedure for all AI contracts above £500,000, with built-in review stages and exit-by-design provisions.

Maximum two-year initial terms for all AI platform contracts with mandatory review before extension, replacing the current pattern of long lock-in commitments.

“Exit-by-design” provisions in all major platform contracts — contractual provisions for data portability, architecture documentation, and transition support that make switching providers feasible rather than theoretically possible (a sketch follows below).

Outcome-based contracting pilots (the book suggests DWP benefit processing AI and HMRC compliance analytics as initial candidates) evaluated on whether they deliver intended outcomes, not on whether systems were deployed on time and on budget.

The book argues that the Procurement Act provides the legislative authority; what is missing is institutional capability and willingness to use it.
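
To make these reforms auditable rather than aspirational, they can be expressed as a pre-award checklist. The sketch below is a minimal illustration under stated assumptions, not a real procurement schema: the fields and thresholds mirror the reforms listed above, and every name is hypothetical.

```python
# Minimal sketch of a pre-award gate encoding the proposed reforms.
# All field names are hypothetical illustrations, not a real schema.
from dataclasses import dataclass


@dataclass
class AIPlatformContract:
    value_gbp: int
    initial_term_years: float
    data_portability_clause: bool    # machine-readable export of all data
    architecture_documented: bool    # enough detail to rebuild elsewhere
    transition_support_agreed: bool  # vendor-assisted migration on exit
    review_stage_built_in: bool      # mandatory review before extension


def pre_award_issues(c: AIPlatformContract) -> list[str]:
    """Return the proposed reforms this contract fails to satisfy."""
    issues = []
    if c.value_gbp > 500_000 and not c.review_stage_built_in:
        issues.append("contracts above £500k need built-in review stages")
    if c.initial_term_years > 2:
        issues.append("initial term exceeds the proposed two-year maximum")
    if not (c.data_portability_clause and c.architecture_documented
            and c.transition_support_agreed):
        issues.append("exit-by-design provisions are incomplete")
    return issues


# A five-year deal with undocumented architecture fails two checks.
print(pre_award_issues(AIPlatformContract(
    value_gbp=750_000, initial_term_years=5,
    data_portability_clause=True, architecture_documented=False,
    transition_support_agreed=True, review_stage_built_in=True)))
```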

Build sovereign infrastructure with four design requirements

Drawing on evidence from Estonia and India, the book argues for four infrastructure design requirements and a concrete sovereign compute strategy.

Four design requirements

Interoperability first: All systems must exchange verified data; no critical information in isolated silos. The opposite of the fragmented approach that has characterised UK government IT.

Privacy and security by design: Radical transparency allowing citizens to track who accesses their data, when, and why — following the Estonian model. Transparency about data access as the mechanism for building trust (a sketch of such an access log follows this list).

Efficiency before scale: Demonstrate value with immediate, visible benefits before scaling, rather than imposing burdens before delivering benefits (the lesson from GOV.UK Verify’s failure vs the NHS App’s success).

Inclusion as non-negotiable: Robust offline alternatives, human support for those unable to navigate digital interfaces, comprehensive testing against edge cases and vulnerable populations, clear audit trails and appeals processes for automated decisions.
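
The transparency requirement above implies a concrete artefact: a citizen-visible log of every access to personal data. A minimal sketch, assuming hypothetical field names rather than Estonia's actual X-Road schema:

```python
# Minimal sketch of a citizen-visible data access log, in the spirit of
# the Estonian model. Field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AccessEvent:
    citizen_id: str     # whose record was read
    accessed_by: str    # which official or system read it
    organisation: str   # e.g. "NHS", "DWP"
    purpose: str        # stated legal basis for the access
    timestamp: datetime


LOG: list[AccessEvent] = []


def record_access(citizen_id: str, accessed_by: str,
                  organisation: str, purpose: str) -> None:
    """Append an audit record; a real system would make this tamper-evident."""
    LOG.append(AccessEvent(citizen_id, accessed_by, organisation, purpose,
                           datetime.now(timezone.utc)))


def my_access_history(citizen_id: str) -> list[AccessEvent]:
    """What a citizen sees: who accessed their data, when, and why."""
    return [e for e in LOG if e.citizen_id == citizen_id]
```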

Sovereign compute strategy

“Sovereignty by default” as the direction of travel for the £250 billion in cross-Atlantic AI infrastructure investment, negotiated contract by contract and prioritising the most sensitive functions. In practice, this means every major agreement should include at minimum: a data residency clause for specified categories of sensitive data, a documented exit pathway with defined migration timescales and costs, and audit access provisions giving UK regulators meaningful visibility into how systems operate. These are commercially standard provisions in other regulated sectors; the challenge is institutional willingness to insist on them.

Data localisation for the most sensitive public functions: Health, justice, welfare, and national security data stored and processed within UK jurisdictions where UK law applies and UK regulators can intervene without foreign consent.

Multi-vendor, exit-friendly architectures: Open standards, workloads that can be moved between on-premises, sovereign, and commercial environments without prohibitive cost.

Measurable target: At least 50% of sensitive government AI workloads running on audit-capable infrastructure by 2030, tracked through an annual Sovereign Compute Tracker (a sketch of the metric follows below).

The book acknowledges the DeepSeek moment (January 2025) — arguing that if smaller, more efficient models close the gap, sovereign compute becomes more feasible, not less relevant.
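
The 50% target is straightforward to operationalise as a single published metric. A minimal sketch, with an invented workload inventory for illustration:

```python
# Minimal sketch of the Sovereign Compute Tracker headline metric:
# the share of sensitive government AI workloads on audit-capable
# infrastructure. The inventory below is invented for illustration.
TARGET = 0.50  # the book's 2030 target

workloads = [
    # (workload, sensitive?, on audit-capable infrastructure?)
    ("benefit fraud analytics",  True,  True),
    ("clinical triage model",    True,  False),
    ("road traffic forecasting", False, True),
]

sensitive = [w for w in workloads if w[1]]
share = sum(1 for w in sensitive if w[2]) / len(sensitive)
print(f"Audit-capable share of sensitive workloads: {share:.0%} "
      f"(target: {TARGET:.0%} by 2030)")
```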

Build a new social contract for learning

The book argues that the skills challenge cannot be solved by central government or the private sector alone. It proposes structural reforms organised around five pillars.

Lifelong learning infrastructure

A national “Digital Lifelong Learning Passport” — but one designed around the failures of its predecessors. Individual Learning Accounts (collapsed amid fraud in 2001), the National Retraining Scheme (quietly abandoned), and the Lifelong Loan Entitlement (focused on funding access, not skills portability) all failed to solve the demand-side problem: without employer recognition and individual incentive, no skills record is worth maintaining. The Passport proposed here addresses this through employer co-investment giving firms a stake in the record's accuracy, government credits making maintenance financially worthwhile for individuals, and integration with existing platforms (particularly the Lifelong Loan Entitlement infrastructure) rather than building from scratch. It would track formal and informal achievements, linked to peer mentorship and project-based assessment.
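
As a sketch of what a Passport record might hold, the fields below illustrate the design features just described (formal and informal achievements, employer co-investment, government credits); they are hypothetical, not a government schema:

```python
# Minimal sketch of a Passport record. All fields are hypothetical
# illustrations of the design features described above.
from dataclasses import dataclass, field


@dataclass
class Achievement:
    title: str
    kind: str         # "formal" (accredited) or "informal" (project, mentorship)
    verified_by: str  # awarding body, employer, or peer-assessment panel
    year: int


@dataclass
class LearningPassport:
    holder_id: str
    achievements: list[Achievement] = field(default_factory=list)
    employer_contribution_gbp: int = 0  # co-investment: a stake in accuracy
    government_credits_gbp: int = 0     # incentive to keep the record current
```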

Cross-sector delivery alliances

Regional digital skills consortia co-governed by employer coalitions, FE colleges, universities, and civic partners. National digital skills observatories linking real-time labour market data with provider outcomes. Micro-credential and portfolio frameworks recognised across sectors.

Digital citizenship from primary school

Digital literacy as civic and ethical, not just technical. Embedded from primary school upwards, covering privacy, security, democracy, and online identity. The book cites Estonia, where 90% of citizens engage safely in digital government.

Funding mechanisms

An AI skills levy extending the Apprenticeship Levy to cover AI reskilling for existing employees, with smaller firms able to draw on levy funds more easily.

Hypothecated digital skills funding ring-fenced from annual budget competition — the book argues that competing through annual budget rounds has systematically under-funded long-term capability building.

Scale required: The book notes 10 million adults lack essential digital skills. The “One Big Thing” initiative trained 470,000 civil servants in foundational AI literacy — but foundational literacy is the floor, not the ceiling. The investment needed is at Spending Review scale.

Civil service target

An aspirational target of 10% of the civil service in DDaT (Digital, Data and Technology) roles by 2030 — whether the precise figure is 10%, 8%, or 7% matters less than committing to a measurable trajectory and the workforce plan to achieve it. In practice, the majority of growth will need to come from reclassifying and reskilling existing civil servants rather than net new recruitment. Near-term target: 50,000 civil servants trained to intermediate AI level in high-priority departments (DWP, HMRC, NHS, Home Office) by end of Phase 1.
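
For a rough sense of scale (assuming a civil service headcount of roughly 500,000, an approximation not supplied by the book):

```python
# Rough scale of the DDaT target. The 500,000 headcount is an
# approximation for illustration, not a figure from the book.
headcount = 500_000
for share in (0.07, 0.08, 0.10):
    print(f"{share:.0%} in DDaT roles -> roughly {int(headcount * share):,} people")
```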

Close the legitimacy gap — not with more frameworks but with institutional power

The book’s sharpest argument: the UK’s most persistent AI governance failures are legitimacy failures, not technical ones. Standard responses (better transparency, more ethics frameworks, additional consultation exercises) have been tried repeatedly and have not worked. Two decades of evidence — from the NHS to the exam algorithm, from care.data to predictive policing — demonstrate this.

Mandatory legitimacy assessment

Every high-stakes public AI deployment subject to a mandatory assessment: not only “does this system produce accurate outputs?” but “will the people affected by this system experience it as legitimate?”

Three specific requirements:

(1) Named human accountability for every consequential algorithmic decision — a specific individual, not a committee. A workable threshold must distinguish between decisions where AI determines or substantially shapes an outcome affecting an individual's rights, liberty, livelihood, or access to essential services (where named accountability applies) and decisions where AI assists human judgment without determining the outcome (where institutional accountability suffices). Defining this threshold would be an early task of the proposed AI Coordination Authority.

(2) Meaningful explanation rights in plain language, delivered in time for the explanation to matter.

(3) Genuine redress with teeth — independent of the deploying institution, resourced to investigate, empowered to compel changes including suspension. The most credible institutional home is an expanded Information Commissioner's Office, which already has jurisdiction over automated decision-making under UK GDPR; an alternative is a specialist algorithmic ombudsman within the Parliamentary and Health Service Ombudsman's office. Either route is viable; leaving redress without an institutional home is not.
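
The threshold in requirement (1) can be stated as executable logic. A minimal sketch, with hypothetical names; the actual definitional work would fall to the proposed AI Coordination Authority:

```python
# Minimal sketch of the named-accountability threshold. The enum and
# field names are illustrative assumptions, not a statutory definition.
from dataclasses import dataclass
from enum import Enum


class AIRole(Enum):
    DETERMINES = "determines or substantially shapes the outcome"
    ASSISTS = "assists human judgment without determining the outcome"


@dataclass
class Decision:
    ai_role: AIRole
    affects_rights_liberty_livelihood_or_essentials: bool
    named_owner: str | None = None  # the accountable individual, if any


def accountability_gap(d: Decision) -> str | None:
    """Return a failure description, or None if the threshold test passes."""
    high_stakes = (d.ai_role is AIRole.DETERMINES
                   and d.affects_rights_liberty_livelihood_or_essentials)
    if high_stakes and d.named_owner is None:
        return "high-stakes algorithmic decision lacks a named accountable individual"
    return None  # ASSISTS-level decisions: institutional accountability suffices
```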

Give communities genuine power through workable mechanisms

The book argues that transparency without power redistribution is "accountability theatre" — but that power must be channelled through mechanisms that work within existing institutional structures:

Mandatory community impact assessments for all AI systems with significant public impact, sequenced before procurement decisions are made. The procuring authority must publish the assessment and respond formally to community concerns before awarding a contract — giving reasons where it proceeds despite objections. The model is environmental impact assessment, which has embedded community voice in planning decisions without giving any single group an absolute veto.

Funded citizen panels with the power to require modification of AI deployments in specific localities, operating within defined timescales. Where a panel identifies evidence of disproportionate harm, the deploying institution must either modify the system or publish a detailed justification for proceeding.

Affected-party representation with meaningful influence on the bodies that define requirements and evaluate proposals for high-impact public AI systems — ensuring communities shape system design from the outset.

Mandatory public registries of all algorithmic systems used in public-sector decision-making, with impact assessments and performance data.
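
A minimal sketch of a single registry entry, with hypothetical field names (the UK's existing Algorithmic Transparency Recording Standard already collects similar information):

```python
# Minimal sketch of one public registry entry. Field names are
# illustrative assumptions, not the ATRS schema.
from dataclasses import dataclass, field


@dataclass
class RegistryEntry:
    system_name: str
    deploying_body: str
    decision_domain: str        # e.g. "welfare", "justice", "health"
    impact_assessment_url: str  # published before the procurement decision
    accountable_owner: str      # a named individual, per the legitimacy requirements
    performance: dict[str, float] = field(default_factory=dict)  # e.g. error rates by group
```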

Invest in institutional ethics capacity

The UK does not need another ethics toolkit. It needs institutions capable of making and defending difficult trade-offs under uncertainty. Trained ethics officers embedded in deploying organisations with authority to delay procurement. Regular adversarial testing of AI systems by independent bodies. Cross-sector learning networks that share failure cases without reprisal.

Make five distinctive global choices

The book argues that if “adaptive path” is to mean more than splitting the difference, the UK must make five specific choices that no other nation is currently making. Each involves political cost; each prevents “third way” from becoming “no way.”

1. AI assurance as an export industry. The UK’s combination of world-class AI research, principles-based regulatory tradition, and the AI Safety Institute creates a unique foundation for becoming the global leader in AI testing, auditing, and certification. This is a market, not just a public good.

2. AI in complex, high-trust public systems. If the UK can demonstrate AI deployment that improves outcomes, maintains equity, and commands public trust within the NHS, it will have created a replicable model that dozens of countries need. No other nation has this combination of scale and institutional depth in a single public system.

3. Mandatory algorithmic transparency for public-sector AI. Neither the US nor the EU currently requires publication of algorithmic impact assessments for all public-sector AI deployments. The UK could, building on existing equalities legislation.

4. A “data trust” model for sensitive public data. Between the US approach (data governed by private platforms) and the EU approach (data governed by comprehensive regulation), the UK could pioneer independent data trusts with fiduciary obligations to data subjects, authorised to negotiate terms with AI developers.

5. Binding infrastructure sovereignty provisions. "Sovereignty by default" as the direction of travel for incoming cross-Atlantic AI investment, negotiated into specific contracts with priority given to the most sensitive public functions.

Target: the UK demonstrably leading in at least two of these five areas by 2030.

A phased five-year implementation roadmap

The book proposes three phases across five delivery strands (governance, procurement, infrastructure, skills, international positioning). It explicitly names the strategic choices and decision points leaders will face at each stage.

Phase 1: Foundations (2025–2026)

Governance: Establish AI Coordination Authority as Cabinet Office unit. Publish AI Readiness Assessment across all major departments. Seek statutory mandate within 18 months.

Procurement: Issue guidance mandating competitive flexible procedure for AI contracts above £500k. Launch two outcome-based contracting pilots (DWP, HMRC). Exit-by-design provisions in all new platform contracts.

Infrastructure: Finalise Edinburgh exascale supercomputer spec. Begin expanded AI Research Resource (AIRR) procurement. Publish first annual Sovereign Compute Tracker. Invest in domestic chip design.

Skills: Extend “One Big Thing” to intermediate level for 50,000 civil servants. Ring-fence 5% of AI Growth Zone budgets for local skills development. Launch Digital Lifelong Learning Passport design.

International: Prioritise AI assurance credentials through the AI Safety Institute. Begin responsible AI certification frameworks. Deepen Commonwealth partnerships.

Phase 2: Scaling and Testing (2027–2028)

Governance: Seek primary legislation for AI Coordination Authority. Annual readiness assessments with departments required to respond publicly.

Procurement: Scale from two pilots to ten departments. Publish outcome-based contract evaluation including honest assessment of what did not work.

Infrastructure: AIRR capacity to 10× baseline. First AI Growth Zone fully operational with integrated skills, compute, and innovation. Decision point: whether to impose data localisation requirements for sensitive public-sector AI workloads.

Skills: AI skills levy pilot: extend Apprenticeship Levy to cover AI reskilling in five sectors. Decision point: whether to protect skills funding through hypothecated mechanisms.

International: Choose between domain leadership (AI assurance, healthcare AI) and broader influence. Focus capacity-building investment: Commonwealth, regional, or multilateral channels.

Mid-2028 political continuity test: Falls within the likely general election window. The strategy must by this point have demonstrated measurable value in at least three high-visibility public services, built constituencies that extend beyond the originating government, and created institutional structures whose removal would be politically costly.

Phase 3: Systemic Embedding (2029–2030)

Governance: AI Coordination Authority operating with statutory mandate. Governance forum publishing annual “UK AI Implementation Report” with honest assessment of progress, failures, and course corrections.

Procurement: Agile AI procurement the default across government, not the exception. Exit-by-design in all major platform contracts.

Infrastructure: AIRR to 20× baseline. Sovereign Compute Tracker showing at least 50% of sensitive government AI workloads on audit-capable infrastructure. At least three AI Growth Zones functioning as integrated skills-compute-innovation ecosystems.

Skills: Civil service at or approaching the DDaT target. Skills levy extended UK-wide. Regional digital skills consortia operational in every AI Growth Zone.

International: UK demonstrably leading in at least two of five distinctive contributions. Global recognition as the preferred partner for responsible AI deployment.

What leaders must do differently

The book identifies three areas where specific constituencies face trade-offs that no roadmap can resolve. A common thread: meaningful progress requires accepting friction.

Business leaders

AI adoption decisions that appear purely technical often carry strategic consequences that only become visible later. The depth of integration with particular platforms, the terms under which data is shared, and the degree of dependency created all shape organisational options for decades. The book warns against “business lock-in” — the point at which dependency on a single AI platform becomes so deep that switching is prohibitively expensive.

Civil society

The test is whether participation in AI governance means genuine power or consultative theatre. Processes that seek input without conferring decision-making authority create the appearance of inclusion without meaningful influence. Communities significantly affected by AI deployment should have structured, resourced, and consequential roles in shaping the systems that affect their lives — through community impact assessments, funded citizen panels, and meaningful representation in requirements-setting.

Research institutions

The challenge is translating globally competitive AI research into policy influence where it matters. Excellence in publication does not automatically translate into impact on the UK’s most pressing implementation challenges. Research institutions must choose between global academic prestige and domestic policy impact, and the book argues for resourcing the latter accordingly.

The tools exist, both technological and organisational. The knowledge exists, hard-won through decades of success and failure. The roadmap is set out. What remains is the willingness to start before conditions are perfect, to persist when progress seems slow, and to hold leadership accountable for delivery rather than announcement.
— Chapter 8 conclusion