
Develop with Our AI Platform

Explore our range of services designed to help you move forward with confidence, wherever you're headed next.

🔷 MiniMax M2.5 – UK Sovereign Execution Layer

Pricing to be confirmed

UK Launch Model

MiniMax M2.5 is deployed as a UK-installed AI execution system, enabling organisations to move beyond conversation into real-world task delivery within a controlled and jurisdictionally aligned environment.

It supports coding, tool use, search, and multi-step workflows, allowing AI to interact with systems, process information, and carry out tasks across applications and data. While the underlying model may originate externally, its performance, behaviour and outputs are governed locally, ensuring that execution remains aligned with UK operational, regulatory and societal requirements.

Rather than simply generating responses, MiniMax enables structured, task-driven behaviour. This allows AI to plan, act, and complete operations within defined control boundaries, making it well suited to environments where outcomes matter as much as insight.

Typical use cases include:

Coding & Development Support – generating, debugging, and refining code

Process Automation – executing repeatable workflows across systems

Multi-Step Tasks – handling sequences of actions and decisions

System Integration – connecting AI into existing tools, data, and platforms

By enabling AI to operate across multiple steps and systems, MiniMax supports more complex and integrated workflows than traditional single-response models, while maintaining control of execution within the jurisdiction of deployment. This allows organisations to automate processes, accelerate development, and embed AI directly into operational environments without compromising governance.

It also provides a flexible approach to task execution, where workflows can adapt dynamically based on inputs, results, and context, while remaining subject to local control and oversight.
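To illustrate the structured, task-driven behaviour described above, the sketch below assembles a tool-use request of the kind commonly sent to an execution-capable model. This is a hypothetical example only: the model identifier, the `create_ticket` tool, and the OpenAI-compatible message/tool layout are all assumptions for illustration, not this platform's actual API.

```python
# Hypothetical sketch of a tool-use request payload, assuming an
# OpenAI-compatible chat API. The model identifier, tool name, and
# schema are illustrative placeholders, not the platform's actual API.

def build_tool_use_request(task: str) -> dict:
    """Assemble a chat request that lets the model call a (hypothetical)
    ticketing tool as one step in a multi-step workflow."""
    return {
        "model": "minimax-m2.5",  # placeholder model identifier
        "messages": [
            {"role": "system",
             "content": "Plan the task, call tools as needed, then report."},
            {"role": "user", "content": task},
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "create_ticket",  # hypothetical tool
                    "description": "Open a ticket in the client's system.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "title": {"type": "string"},
                            "priority": {"type": "string",
                                         "enum": ["low", "medium", "high"]},
                        },
                        "required": ["title"],
                    },
                },
            }
        ],
    }

request = build_tool_use_request("Raise a ticket for the failed nightly build.")
```

In a deployment of this kind, the defined tools form the control boundary: the model can only act through the functions the organisation exposes to it.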

MiniMax forms the execution layer within our platform, working alongside core models to turn intelligence into action. It enables organisations to move from analysis and response into automation and delivery, supporting more efficient and scalable ways of working within a sovereign operational framework.

Deployment Model

MiniMax M2.5 is initially offered through a tokenised access model, providing organisations with immediate access to advanced AI execution capabilities without the need for upfront infrastructure deployment. This is delivered on SambaNova SN40L-16 systems.

As requirements evolve, this can transition to dedicated client environments, including installation on client-specific machines or clustered infrastructure. This enables organisations to progressively increase levels of control, performance and isolation, aligning deployment with operational, regulatory and sovereignty requirements over time.

This staged approach allows organisations to move from accessible entry into AI capability towards fully controlled, jurisdictionally aligned execution environments, without disruption to workflows or systems.

Location and Availability

MiniMax M2.5 is deployed from UK-based infrastructure, with initial capability located in Manchester. Additional locations are planned across the South of England, Scotland and Wales, supporting broader geographic resilience and coverage.

UK sovereign deployment, with infrastructure and control located in Manchester, is expected to be available from 1 May 2026, with options for dedicated client machines and clustered environments.

Dedicated provision for clients requiring private deployment is expected to be available within 120 days of order placement.

🔷 DeepSeek – Sovereign Transition Model

Pricing to be confirmed

DeepSeek – Advanced Reasoning and Development Capability

DeepSeek models are designed to support advanced reasoning, coding and analytical workloads, enabling AI systems to operate across complex problem spaces and multi-step tasks.

They provide strong performance in technical domains, particularly where structured logic, code generation and iterative problem solving are required. This makes them well suited to environments where AI must not only generate responses, but analyse, plan and execute tasks across systems and data.

DeepSeek supports:

Advanced Reasoning – handling complex analytical and multi-step problem-solving tasks

Code Generation – generating, debugging and optimising code across languages

Technical Analysis – processing structured and unstructured data in technical domains

Workflow Support – enabling multi-step operations and decision sequences

By enabling deeper reasoning and structured execution, DeepSeek supports development environments where accuracy, logic and repeatability are critical.
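The iterative problem-solving pattern described above can be sketched as a generate-and-verify loop: the model proposes code, a harness checks it, and failures feed back into the next attempt. The model call below is a local stub standing in for a request to a hosted DeepSeek endpoint; the loop structure, not the stub, is the point.

```python
# Minimal sketch of an iterative generate-and-verify loop. The model
# call is stubbed out; in practice it would be a request to a hosted
# DeepSeek endpoint (endpoint and request details are assumptions).

def propose_fix(attempt: int) -> str:
    """Stand-in for a model call that returns candidate code."""
    if attempt < 2:
        return "def double(x): return x + x + 1"  # deliberately wrong
    return "def double(x): return x + x"          # corrected candidate

def passes_tests(source: str) -> bool:
    """Run the generated code against a small check suite."""
    namespace: dict = {}
    exec(source, namespace)  # sandboxing omitted for brevity
    return namespace["double"](3) == 6

def iterate_until_correct(max_attempts: int = 5) -> tuple[str, int]:
    """Keep requesting candidates until one passes, or give up."""
    for attempt in range(max_attempts):
        candidate = propose_fix(attempt)
        if passes_tests(candidate):
            return candidate, attempt
    raise RuntimeError("no passing candidate found")

code, attempts_used = iterate_until_correct()
```

Repeatability here comes from the harness: a candidate is only accepted once it passes the same checks every time, regardless of how many attempts the model needed.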

Deployment Model

DeepSeek models are initially provided through a hosted environment to support development, testing and early-stage deployment.

This allows organisations to begin building and validating AI-driven workflows without delay, supporting rapid iteration and integration across systems. The hosted model is intended for development purposes, enabling teams to explore capability and define operational requirements.

As requirements mature, deployments transition to UK-based infrastructure, including dedicated client environments and clustered systems. This enables organisations to progressively establish control over performance, behaviour and outputs within the jurisdiction.

Location and Availability

DeepSeek is currently delivered through SambaNova-hosted infrastructure located in California, providing immediate access for development use.

Production deployment within the UK is aligned to infrastructure expansion, with environments provisioned on the next import cycle of SN40L systems. This enables transition from development access to fully sovereign operation, with control of execution, performance and outputs established within the UK jurisdiction.

🔷 LLaMA – Sovereign Transition Model

Pricing to be confirmed

LLaMA – Flexible and Customisable AI Capability

LLaMA is deployed as a flexible, open model within our platform, supporting a wide range of general AI workloads alongside the ability to customise behaviour for specific domains and applications.

As an open model, LLaMA enables organisations to adapt and fine-tune AI capability to meet their own requirements, making it well suited to environments where control, transparency and adaptability are important. It provides a strong foundation for conversational systems, content generation and embedded AI functionality across tools and workflows.

LLaMA supports:

General AI Workloads – conversation, summarisation and content generation

Customisation – fine-tuning for domain-specific applications

Integration – embedding AI into existing systems, tools and workflows

Adaptable Behaviour – modifying outputs and responses based on use case requirements

By combining flexibility with structured deployment, LLaMA enables organisations to develop AI systems that can evolve over time while remaining aligned with operational and governance requirements.
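Fine-tuning for a specific domain starts with preparing training examples. The sketch below serialises chat-style records as JSON Lines, a common convention for open-model fine-tuning pipelines; the exact field names and the planning-policy example content are assumptions for illustration, not a fixed format required by this platform.

```python
import json

# Illustrative sketch of preparing domain-specific fine-tuning data for
# an open model such as LLaMA. The chat-style record layout is a common
# convention; the field names and example content are assumptions.

examples = [
    {"messages": [
        {"role": "system",
         "content": "You are a UK planning-policy assistant."},
        {"role": "user",
         "content": "Summarise the change-of-use rules."},
        {"role": "assistant",
         "content": "Illustrative reference answer goes here."},
    ]},
]

def to_jsonl(records: list[dict]) -> str:
    """Serialise training records as JSON Lines, one example per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

jsonl = to_jsonl(examples)
```

Each record pairs a prompt with the answer the organisation wants the tuned model to produce, which is how domain behaviour is shaped while the base model remains unchanged.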

Deployment Model

LLaMA is initially provided through a hosted environment to support development, testing and integration.

This allows organisations to rapidly experiment with use cases, refine models and embed AI capability into existing systems without the need for upfront infrastructure deployment.

As deployment requirements mature, environments can transition to UK-based infrastructure, including dedicated client machines and clustered systems. This enables full control over model behaviour, performance and outputs within the jurisdiction, supporting sovereign operation.

Location and Availability

LLaMA is currently delivered through hosted infrastructure outside the UK, providing immediate access for development and early-stage deployment.

UK-based deployment is aligned to infrastructure expansion, with environments provisioned as additional SN40L capacity is introduced. This supports a structured transition from development access to fully sovereign operation within a controlled UK environment.

Interested in Other Models?

We continue to evaluate and integrate additional AI models as they emerge, ensuring that organisations can access a broad and evolving set of capabilities.

All models are assessed against our sovereign deployment framework, ensuring that performance, behaviour and outputs can be controlled within the jurisdiction of delivery, regardless of model origin.

Where appropriate, additional models will be made available through the same structured approach, enabling immediate access for development with a defined transition path to UK-based sovereign deployment.