Sovereignty Options

Delivering Control in Practice

As set out previously, sovereignty in AI is not a label but a condition that must be demonstrated through accountable ownership, jurisdictional infrastructure, control of intelligence, and sovereignty of demand. Each of these must be established within the relevant jurisdiction and sustained under conditions of disruption.

In this context, sovereignty is not solely a technical or operational concern. It sits firmly within the governance dimension of ESG, and increasingly within its social dimension, where the behaviour of systems, the control of data, and the alignment of outcomes with societal expectations are subject to scrutiny at board level. As AI systems become embedded in decision-making, service delivery and critical infrastructure, the question of who controls those systems, how they operate, and whether they reflect the expectations of those who rely upon them becomes a matter of governance accountability rather than technical preference.

The question that follows is not whether those conditions can be defined, but how they can be delivered in practice. This requires consideration of three interrelated factors: the extent of current exposure, the range of sovereignty options available, and the pathways through which transition can be achieved over time.

1. Structural Reality: Sovereignty Must Be Engineered

This is not a trivial extension of the prior discussion. The conditions required for sovereignty do not arise naturally from existing AI delivery models; they must be actively constructed. While the conceptual framework for sovereignty can be stated with clarity, its implementation requires the deliberate unwinding of systems that have been engineered in the opposite direction.

Modern AI ecosystems are built around scale, interoperability and centralised control of intelligence, typically delivered through multi-jurisdictional, multi-national structures in which ownership, infrastructure and governance are distributed by design.

Designing for sovereignty within such an environment requires a deliberate sequence of steps to re-anchor ownership and accountability, localise infrastructure, and design out external dependencies from the control of intelligence, bringing it fully within the jurisdiction of delivery and into alignment with the expectations defined by sovereignty of demand. In practice, this involves working against embedded structures that cannot be unwound without consequence.

This is not an instantaneous transition. It unfolds over time and, in many cases, depends upon alignment with public policy, regulatory frameworks and sustained support from government, particularly where enabling conditions sit beyond the control of any single organisation. Sovereignty of demand is central in this context, as it defines both the requirement for control and the standard against which that control is judged.

Where policy frameworks are unstable, inconsistent or subject to continual revision, the ability to establish and sustain that control is materially constrained.

The challenge is not theoretical; it is structural, and it must be addressed as such.

2. Jurisdictional Tension: Control Under Conflict

This structural tension becomes particularly visible in relation to regulatory compliance. Frameworks such as GDPR are predicated on the ability to define, constrain and govern the movement and processing of data within identifiable legal boundaries. However, where AI systems operate across distributed environments, and where data access or model interaction may traverse jurisdictions, those boundaries become less clear in practice.

In parallel, a number of jurisdictions have established legal mechanisms with extraterritorial reach, under which authorities may assert rights of access to data or infrastructure beyond their immediate geographic boundary. These provisions are not unique to any single country, but reflect a broader pattern in which states extend legal authority over digital systems wherever they are held or operated.

The result is the potential for conflicting obligations, where requirements to restrict or localise data under one regime sit in tension with lawful access provisions under another. This is not simply a question of compliance, but of control. Where such conflicts exist, the ability to guarantee that systems operate solely within the intended jurisdiction becomes conditional.

In such circumstances, sovereignty of demand becomes particularly significant, as it defines the expectations against which these tensions are assessed and highlights the gap between nominal compliance and demonstrable control.

This reinforces the distinction between sovereignty as an assertion and sovereignty as a condition that can be tested and verified under scrutiny.

3. Sovereignty as a Spectrum of Options

In this context, sovereignty does not present itself as a single solution or architecture. It emerges instead as a set of options, each reflecting a different balance between control and dependency, performance and independence, efficiency and enforceability.

These options are shaped not only by technical design, but by commercial models, governance structures and the availability of capability within the jurisdiction, as well as the expectations defined through sovereignty of demand.

Sovereignty cannot therefore be treated as a binary state. Systems do not move cleanly from non-sovereign to sovereign; they occupy positions along a spectrum, determined by the extent to which control is retained or ceded across ownership, infrastructure, intelligence and demand.

In many cases, systems will exist in a state of transitional sovereignty, reflecting evolving expectations driven by sovereignty of demand rather than a deficiency in design. As views on control, accountability and alignment evolve, systems must adapt over time, with elements of sovereignty progressively established while others remain externally dependent.

This may be entirely appropriate depending on the use case, the risk profile and the tolerance for external dependency. However, it requires clarity. Transitional states must be understood as such, with a clear view of which elements remain externally controlled, how that affects exposure, and what actions are required to move toward full sovereignty where necessary. Trade-offs should be explicit, not obscured by language that implies a level of control that is not, in practice, present.
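One way to keep transitional states explicit, as the paragraph above requires, is to record for each of the four components (ownership, infrastructure, intelligence, demand) whether control is held within the jurisdiction. The sketch below is a minimal, hypothetical illustration of such a register; the component names come from this paper, while the class, its fields and its methods are assumptions for illustration. A boolean per component is deliberately coarse: in practice each component sits on its own spectrum, but even this coarse register makes external dependencies visible rather than implied.

```python
from dataclasses import dataclass

# The four components of sovereignty discussed in the text. Recording each
# explicitly keeps transitional states visible rather than obscured.
COMPONENTS = ("ownership", "infrastructure", "intelligence", "demand")

@dataclass
class SovereigntyPosition:
    """Illustrative register: which components are controlled in-jurisdiction."""
    ownership: bool
    infrastructure: bool
    intelligence: bool
    demand: bool

    def externally_dependent(self) -> list:
        """Components where control is ceded outside the jurisdiction."""
        return [c for c in COMPONENTS if not getattr(self, c)]

    def state(self) -> str:
        """Name the position: sovereign, non-sovereign, or transitional."""
        ceded = self.externally_dependent()
        if not ceded:
            return "sovereign"
        if len(ceded) == len(COMPONENTS):
            return "non-sovereign"
        return "transitional (externally dependent: " + ", ".join(ceded) + ")"

# Example: ownership, infrastructure and demand anchored locally,
# but control of intelligence still sits outside the jurisdiction.
position = SovereigntyPosition(
    ownership=True, infrastructure=True, intelligence=False, demand=True
)
print(position.state())
```

A register of this kind supports the clarity the text calls for: trade-offs are stated as data, not implied by language.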

4. Transition: Cost, Constraint and Exposure

Progression along this spectrum is not cost-neutral. It is constrained by existing infrastructure, commercial arrangements and prior investment decisions, particularly where systems are embedded within cloud-native or externally managed environments.

Moving toward higher levels of sovereignty may require material changes to architecture, including capital investment in local infrastructure, reconfiguration of delivery models, and, in some cases, the replacement or replication of capabilities that are currently accessed externally. This includes not only compute capacity, but the physical and energy infrastructure required to sustain it.

AI systems, particularly at scale, introduce significant energy demands. High-density deployments, often operating in excess of 100 kW per rack, represent a step change from traditional enterprise rack densities. These requirements place pressure on power availability, cooling systems and site design, and materially increase the cost of deployment and operation.

In regions where energy costs are elevated, or where electrical infrastructure is constrained, this becomes a limiting factor. The ability to establish sovereign AI capability is therefore influenced not only by policy and architecture, but by the availability and affordability of energy. In such environments, the trade-offs between sovereignty, cost and scale become more pronounced, and may constrain both the pace and extent of development.
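The scale of this cost pressure can be illustrated with a back-of-envelope calculation. The sketch below assumes a 100 kW rack (the density cited above), a power usage effectiveness (PUE) of 1.3 to account for cooling and site overhead, and an electricity price of $0.15/kWh; the PUE and price are illustrative assumptions, not measurements, and should be replaced with local figures.

```python
# Back-of-envelope annual energy cost for a single high-density AI rack.
# All inputs are illustrative assumptions; substitute local figures.

RACK_POWER_KW = 100.0   # IT load per rack (density cited in the text)
PUE = 1.3               # assumed power usage effectiveness (cooling/overhead)
PRICE_PER_KWH = 0.15    # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 8760   # continuous operation

facility_kw = RACK_POWER_KW * PUE            # total draw including overhead
annual_kwh = facility_kw * HOURS_PER_YEAR    # energy consumed per year
annual_cost = annual_kwh * PRICE_PER_KWH     # energy cost per year

print(f"Facility draw: {facility_kw:.0f} kW")
print(f"Annual energy: {annual_kwh:,.0f} kWh")
print(f"Annual cost:   ${annual_cost:,.0f}")
```

Under these assumptions a single rack draws roughly 1.1 GWh per year, at a cost in the low hundreds of thousands of dollars; multiplied across a sovereign-scale deployment, and doubled or trebled where energy prices are elevated, the figures explain why energy availability can become the binding constraint.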

The pace of this transition is often driven by exposure rather than intent. In practice, many organisations do not have full visibility of where their data resides, how it is replicated, or the infrastructure required to sustain it within a defined jurisdiction.

More significantly, governance and control frameworks have not kept pace with the operational reality of distributed systems or the introduction of AI as an active layer of decision and output. Accountability is frequently fragmented across providers, platforms and internal teams, leaving no single point at which control can be asserted or verified with confidence.

This creates a structural misalignment. Ownership may be nominally assigned but not enforceable in practice; infrastructure may be localised yet externally dependent; control of intelligence may sit outside the jurisdiction; and sovereignty of demand may be insufficiently defined.

As a result, decisions about risk, compliance and resilience are made in environments where neither control nor consequence is fully understood at board level, and action is often deferred until external pressures force a reassessment.

5. State-Level Resilience: Sovereignty Under Disruption

A further consideration arises in relation to state-level resilience. Where sovereign capability is required to be sustained under conditions of disruption, including conflict or systemic failure, purely domestic solutions may not be sufficient.

This is particularly the case for critical national functions such as banking, healthcare, customs and other essential public services, where continuity of operation is fundamental to societal stability and sovereignty of demand is absolute.

In such cases, resilience may depend upon structured cross-border arrangements that allow systems to maintain continuity of operation without compromising sovereignty. This introduces the potential for new forms of agreement, analogous in principle to established international conventions, through which data and AI systems are afforded defined protections from extraterritorial intervention while preserving clear lines of ownership and accountability.

Under such arrangements, it may be possible for a state to maintain a mirrored operational presence within the jurisdiction of another, ensuring continuity of capability even in the event of disruption, loss of territory, or systemic failure. The objective is not to dilute sovereignty, but to extend its resilience beyond a single point of failure.

Such models require careful design. Sovereignty cannot be preserved if control is ceded; equally, resilience cannot be achieved if systems are entirely isolated. The balance between these conditions suggests that sovereignty, at its most advanced, may depend not only on domestic control, but on internationally recognised frameworks that enable continuity without loss of authority.

6. From Theory to Practice

It should also be recognised that these challenges are not new, nor are they purely theoretical. Over the past eight years, these issues have been examined in practice, with approaches tested against the realities of market delivery, regulatory constraint and operational sustainability. This has required not only the development of conceptual models, but their validation in terms of whether they can be implemented, maintained and operated at scale in real-world conditions.

Through this process, a clear distinction has emerged between models that are theoretically coherent and those that are practically deliverable. Many approaches can approximate elements of sovereignty; far fewer can do so in a manner that is commercially viable, operationally sustainable and capable of meeting the conditions required for demonstrable control across all four components.

The analysis set out in this paper is informed by practical experience. The issues described have been examined, tested and refined against the constraints of real-world delivery over an extended period. This work has led to the development of approaches that address these challenges in a manner that is both operationally viable and commercially sustainable.

While this paper does not prescribe a single solution, it reflects an understanding not only of the problem space, but of the conditions under which it can be resolved in practice.

Conclusion

The purpose of this paper is to highlight the issues that must be considered by boards responsible for the implementation of sovereign AI systems. As these systems become embedded within core operations and critical infrastructure, accountability for their governance, oversight, policy alignment and control rests with the board.

In this context, sovereignty can be understood as the clarification of control, ensuring that authority, decision-making and accountability are explicitly defined, verifiable and exercised within the relevant jurisdiction, and in alignment with the expectations of those who rely upon the system.

This paper does not prescribe a single model, but sets out the range of sovereignty options available, enabling informed decisions about the form of sovereignty required and the trade-offs that may need to be accepted.

Sovereignty, in this context, is not claimed. It is engineered. The question is how.

Next

So You Say You’re Sovereign. Show Me How.