So You Say You’re Sovereign. Show Me How.

Defining Sovereignty in AI

The term “sovereign” has become firmly embedded in the language of artificial intelligence. Cloud providers, platform developers, governments and emerging companies use it across the industry to signal control, trust and alignment with national or organisational priorities. Its increasing prevalence, however, has not been matched by a corresponding level of precision, and there is now a material risk that the term is being applied more as an expression of intent than as a condition that can be demonstrated in practice.

This lack of clarity is not a matter of semantics. Sovereignty, in its classical sense, has always denoted the existence of supreme authority within a defined boundary, exercised without external interference and, critically, capable of being enforced. That definition remains entirely valid, but its application in the context of AI is considerably more complex, as systems are built on distributed infrastructure, governed by models developed across multiple jurisdictions, and reliant on data flows that are not easily contained within geographic borders. In such an environment, sovereignty cannot be inferred from architectural design or contractual assurances; it must be established through conditions that hold under scrutiny, particularly when systems are subject to stress, failure or external intervention.

If the term is to retain meaning, it must therefore be recast as a test rather than a label, requiring alignment between accountability, physical control and functional governance within a defined jurisdictional boundary. In practical terms, that test can be understood through four interrelated components: ownership, infrastructure location, control of intelligence, and sovereignty of demand. The first three constitute the tools through which sovereignty is exercised, while the fourth provides the basis upon which that exercise is judged.

The first of these, ownership, concerns the location of accountability, requiring that the entity responsible for the system’s behaviour and its consequences is both identifiable and subject to the jurisdiction in which the system is deployed. It is not sufficient for responsibility to be nominal or mediated through contractual arrangements if, in practice, the accountable party sits beyond the reach of domestic enforcement. AI systems, while capable of acting with increasing autonomy, do not themselves possess responsibility; where they misbehave, produce harmful outcomes or act in ways that are inconsistent with expectation, accountability must resolve to the individuals or organisations responsible for their design, deployment and operation. Sovereignty therefore requires that responsibility is not abstracted or deferred, but clearly anchored to identifiable human authority within the governing jurisdiction.

The second component, infrastructure location, brings the discussion into the physical domain, recognising that despite the abstraction introduced by cloud computing, all AI systems ultimately depend upon tangible assets, including data centres, compute hardware and network infrastructure, each of which exists within a specific legal and geographic context. Where those assets are located outside the jurisdiction in which the system is intended to operate, they become subject to external legal regimes and potential intervention by foreign authorities, thereby limiting the ability of domestic actors to guarantee continuity of service, restrict access or prevent interference. In such circumstances, sovereignty becomes conditional, as control over the system depends upon factors that lie beyond the boundary within which authority is expected to be exercised.

The third component, control of intelligence, addresses the extent to which the behaviour of the system itself is governed within the jurisdiction, and whether the functional substance of the software is subject to local control. It is entirely possible for infrastructure to be located domestically while the underlying models, decision logic or policy frameworks are defined and maintained elsewhere, particularly where systems rely on proprietary models accessed through external interfaces. In these cases, the locus of control over how the system behaves does not reside with the entity deploying it, but with the entity that governs the intelligence layer. Sovereignty therefore requires the ability to modify, constrain and direct system behaviour without reliance on external permission, ensuring that control extends beyond infrastructure into the operation of the system itself. This does not preclude the use of models developed outside the jurisdiction, including open source models, provided that their deployment places full control of performance, behaviour and outputs within the jurisdiction of delivery, and that they are not subject to external governance once in operation.

Taken together, these three elements form the tools of sovereignty, establishing who is accountable, where control is physically grounded, and how system behaviour is governed. However, the existence of these tools does not in itself demonstrate sovereignty. Control may exist in a technical sense without being exercised in a manner that is consistent with the expectations of the society within which the system operates. It is therefore necessary to consider a fourth component, sovereignty of demand, which provides the context against which the use of these tools is assessed.

Sovereignty of demand reflects the values, beliefs and expectations of the community within a given jurisdiction, recognising that sovereignty is not solely exercised through enforceable control, but also expressed through the norms that shape acceptable behaviour. Unlike the preceding components, it does not operate as a direct mechanism of control, but as a marker of alignment, indicating whether the behaviour and outcomes of a system are consistent with domestic legal, cultural and ethical frameworks. In this sense, sovereignty is not simply a question of whether control exists, but whether it is exercised in a manner that is considered legitimate within the relevant societal context.

This distinction becomes particularly significant in systems that do not merely process information but influence behaviour or shape outcomes at scale. In such cases, the issue is not only where systems are hosted or who operates them, but whether the frameworks governing their behaviour are aligned with the society in which their effects are felt. Where influence is exerted through systems governed or shaped by external frameworks, there exists the potential for outcomes to be influenced by priorities that do not reflect domestic expectations. As AI systems play an increasing role in filtering information, guiding decisions and framing choices, the expectation that both control and underlying logic should align with the values of the community they serve becomes more pronounced.

In addition to defining the conditions under which sovereignty can be said to exist, this framework also has implications for governance and regulation, particularly in relation to the allocation of accountability across AI systems that are increasingly composed of multiple interdependent layers. A more disciplined application of sovereignty requires that responsibility is not diffused across those layers, but clearly understood and assigned. Providers of infrastructure, where they control the physical environment in which systems operate, carry responsibility for the availability, integrity and jurisdictional compliance of that environment. Those who govern the intelligence layer, whether through ownership of models, control of decision logic or operation of AI-enabled services, carry responsibility for system behaviour, the use and handling of data, and the outcomes generated through their application. Where systems influence behaviour or shape decisions, this responsibility extends beyond technical performance into the broader question of how such influence is exercised and to what end.

This perspective also has implications for how sovereignty is considered within broader ESG frameworks. While issues of control and accountability are often treated as matters of governance, the increasing role of AI systems in shaping decisions, influencing behaviour and affecting societal outcomes suggests that sovereignty must also be understood as a social consideration. In particular, sovereignty of demand aligns closely with the “S” dimension of ESG, insofar as it concerns the relationship between technological systems and the societies they serve. Where AI systems operate at scale, the extent to which they reflect domestic values and maintain alignment with societal expectations becomes not only a matter of compliance, but of social legitimacy.

A further implication arises in relation to resilience, where traditional approaches, developed primarily for data systems, are not directly applicable to AI. Conventional resilience models are structured around tiers of data protection and recovery, focusing on the preservation and restoration of stored information, and implicitly treating systems as repositories of data rather than as active agents of decision and output. While such approaches remain necessary, they are not sufficient in the context of AI, where the primary function of the system lies not in storing information but in generating outcomes. Treating AI as an extension of data storage risks overlooking the more fundamental requirement that, in the event of disruption, the system must be capable of continuing to operate and produce equivalent outputs within the jurisdiction.

Resilience in this context must therefore be understood not as the recovery of data, but as the continuity of function. Where AI systems underpin critical operations, the expectation is not merely that information can be retrieved following a failure, but that control of the system, and the outputs it produces, can be sustained or re-established without reliance on external actors. A sovereign AI system must therefore be capable of assuming or maintaining operational control within its jurisdictional boundary under conditions of failure, ensuring that the loss of external dependency does not result in the loss of capability.

Taken together, these elements provide a structured basis for assessing whether an AI system can properly be described as sovereign. Where ownership is accountable within the jurisdiction, infrastructure is located within that jurisdiction, and the intelligence layer is fully controllable without external dependence, sovereignty can be said to exist, provided that the use of that control aligns with societal expectations and can be sustained under conditions of disruption. Where one or more of these conditions are not met, sovereignty is either partial or absent, regardless of how the system is described.
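The test described above can be sketched as a simple assessment structure. This is a minimal illustration only, assuming hypothetical names of my own choosing; a real assessment would rest on legal and technical evidence rather than boolean flags.

```python
from dataclasses import dataclass

# Illustrative sketch of the sovereignty test: three tools of sovereignty,
# alignment with sovereignty of demand, and the resilience condition.
# All field names are assumptions for illustration, not an established standard.

@dataclass
class SovereigntyAssessment:
    ownership_accountable: bool    # accountability enforceable within the jurisdiction
    infrastructure_local: bool     # physical assets located within the jurisdiction
    intelligence_controlled: bool  # behaviour modifiable without external permission
    demand_aligned: bool           # outcomes consistent with domestic expectations
    function_resilient: bool       # outputs sustainable if external dependencies fail

    def verdict(self) -> str:
        conditions = (
            self.ownership_accountable,
            self.infrastructure_local,
            self.intelligence_controlled,
            self.demand_aligned,
            self.function_resilient,
        )
        if all(conditions):
            return "sovereign"
        return "partial" if any(conditions) else "absent"


# Example: a domestically hosted system whose intelligence layer is
# governed externally fails the test, however it is described.
example = SovereigntyAssessment(
    ownership_accountable=True,
    infrastructure_local=True,
    intelligence_controlled=False,
    demand_aligned=True,
    function_resilient=False,
)
print(example.verdict())  # prints "partial"
```

The point of the sketch is that sovereignty is conjunctive: every condition must hold, and failing any one of them leaves the claim partial at best.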

This framework also clarifies why many current approaches, particularly those developed at global scale, do not fully meet the threshold implied by the term. Such platforms are typically optimised for cross-border operation, centralised governance of intelligence, and distributed infrastructure, all of which are highly effective for many applications but do not necessarily align with the stricter requirements of jurisdictional control, enforceable accountability and operational independence that sovereignty entails. This is not a question of capability, but of structural alignment between system design and the conditions required to demonstrate sovereignty in its fullest sense.

As AI becomes more deeply embedded in systems upon which society depends, the distinction between acceptable dependency and unacceptable exposure will become increasingly significant, and the use of the term “sovereign” will require greater discipline. Organisations that claim sovereign capabilities should therefore be able to demonstrate, in clear and specific terms, who is accountable for their systems, where those systems operate, who controls their behaviour, and how that control is exercised in a manner consistent with the expectations of the society they serve.

The question that follows from this is not whether sovereignty is desirable in the abstract, but how it can be delivered in practice within the constraints of modern AI systems, and what trade-offs such delivery entails. That is the subject of the next discussion.
