Argyll Data Development Ltd

Acceptable Use Policy

Effective Date: 6th May 2026

This Acceptable Use Policy (“Policy”) governs access to and use of the Platform, Software, APIs, Models, infrastructure, and related services provided by Argyll Data Development Ltd (“ADD”, “we”, “us”, or “our”).

This Policy forms part of:

  • the ADD Platform Terms of Service;

  • the ADD Software and Platform Licence Agreement;

  • applicable order forms;

  • service schedules;

  • SLAs;

  • and related commercial agreements.

By accessing or using the Platform, Software, APIs, or related services, you agree to comply with this Policy.

If you are accessing or using the Platform on behalf of a company, organisation, governmental body, educational institution, or other legal entity, you represent and warrant that you have authority to bind that entity to this Policy.

ADD may update this Policy from time to time in accordance with applicable Platform Terms and commercial agreements.

1. Purpose and Scope

This Acceptable Use Policy defines rules and standards governing access to and use of:

  • the Platform;

  • Software;

  • APIs;

  • Models;

  • infrastructure;

  • developer tooling;

  • and related services provided by ADD.

This Policy is intended to:

  • protect the security, integrity, availability, and lawful operation of the Platform;

  • prevent unlawful, harmful, abusive, or disruptive activity;

  • support responsible use of artificial intelligence technologies;

  • and protect customers, infrastructure providers, operational partners, and third parties from misuse or operational risk.

This Policy applies to:

  • all Customers;

  • Authorised Users;

  • developers;

  • integrations;

  • applications;

  • automated systems;

  • and any third party accessing or using the Platform or Software through Customer accounts, credentials, APIs, or environments.

Compliance with this Policy is a condition of access to and continued use of the Platform and Software.

Nothing in this Policy limits additional restrictions, obligations, or compliance requirements contained in:

  • applicable laws or regulations;

  • commercial agreements;

  • SLAs;

  • security schedules;

  • third-party licence terms;

  • or operational policies made available by ADD.

2. Lawful and Responsible Use

The Platform and Software may only be used for lawful, authorised, and responsible purposes in compliance with:

  • applicable laws and regulations;

  • this Policy;

  • applicable agreements with ADD;

  • and relevant third-party licence or operational requirements.

Customers and Authorised Users must:

  • act responsibly and in good faith when using the Platform or Software;

  • maintain appropriate human oversight over use of AI-generated outputs;

  • implement appropriate governance, security, and access controls;

  • and ensure that use of the Platform does not create unlawful, harmful, deceptive, abusive, discriminatory, or operationally unsafe outcomes.

The Customer is responsible for:

  • all activity conducted through its accounts, APIs, credentials, integrations, and Authorised Users;

  • ensuring it has lawful rights to process Customer Data;

  • and ensuring outputs generated through the Platform are appropriately reviewed, validated, and used responsibly.

The Platform operates as an inference service only. Customer prompts, submitted content, and AI-generated outputs are not used for foundation model training unless expressly agreed otherwise in writing.

Use of the Platform or Software must not:

  • violate the rights of others;

  • compromise security or operational integrity;

  • interfere with Platform availability or performance;

  • or expose ADD, infrastructure providers, licensors, customers, or third parties to legal, regulatory, security, or operational risk.

4. AI-Specific Usage Restrictions

Customers and Authorised Users must not use the Platform, Software, Models, APIs, or related services to generate, facilitate, support, or automate:

  • unlawful decision-making activities;

  • unlawful discrimination or profiling;

  • unlawful surveillance or monitoring activities;

  • unlawful biometric identification or analysis;

  • harmful deceptive content intended to mislead, defraud, impersonate, or manipulate individuals or organisations;

  • malicious synthetic media, including unlawful deepfake content;

  • disinformation campaigns or coordinated deceptive activity;

  • generation of malware, ransomware, exploit code, credential theft tools, or malicious cyber capabilities;

  • automated harassment, abuse, intimidation, or unlawful targeting of individuals or groups;

  • unlawful collection, processing, or exploitation of personal data;

  • activities prohibited under applicable export control or sanctions laws;

  • or any use likely to create material legal, ethical, operational, security, or reputational risk.

The Platform operates as an inference service only. Outputs generated through the Platform may be:

  • probabilistic;

  • incomplete;

  • inaccurate;

  • inconsistent;

  • or dependent on Model selection, prompts, and operational conditions.

Customers remain solely responsible for:

  • human review and oversight;

  • evaluating outputs;

  • validating accuracy and suitability;

  • compliance decisions;

  • and determining whether outputs are appropriate for downstream use.

Customers must not represent:

  • AI-generated outputs as human-generated where such representation would be unlawful or deceptive;

  • outputs as guaranteed factual, authoritative, or professionally certified;

  • or outputs as independently verified without appropriate review and validation.

ADD may impose:

  • Model-specific restrictions;

  • usage limitations;

  • safety controls;

  • output filtering;

  • operational safeguards;

  • or access restrictions,

where reasonably necessary for security, legal compliance, licensing, infrastructure protection, abuse prevention, or operational integrity.

5. Security Monitoring and Abuse Prevention

ADD may monitor operation and use of the Platform, Software, APIs, infrastructure, and related services where reasonably necessary to:

  • maintain security and operational integrity;

  • detect, prevent, investigate, or mitigate abuse, fraud, unlawful activity, or security incidents;

  • enforce this Policy and applicable agreements;

  • protect infrastructure, Models, customers, licensors, and third parties;

  • and comply with legal or regulatory obligations.

Monitoring activities may include:

  • operational telemetry;

  • usage analytics;

  • authentication and access logging;

  • API activity monitoring;

  • infrastructure diagnostics;

  • capacity and performance monitoring;

  • and automated abuse detection systems.

Customer prompts and AI-generated outputs are not routinely reviewed by human personnel except where reasonably necessary for:

  • legal compliance;

  • security;

  • abuse prevention;

  • technical support;

  • or operational integrity.

Customers and Authorised Users must not:

  • interfere with monitoring or security controls;

  • attempt to conceal abusive or unlawful activity;

  • bypass security mechanisms, usage restrictions, or operational safeguards;

  • or engage in conduct likely to compromise the security, availability, or integrity of the Platform or infrastructure.

ADD may:

  • investigate suspected violations of this Policy;

  • suspend or restrict access;

  • apply temporary or permanent usage controls;

  • rotate credentials or revoke access tokens;

  • remove or disable access to content or integrations;

  • or cooperate with infrastructure providers, licensors, regulators, or law enforcement authorities where required by applicable law or reasonably necessary to protect operational integrity or security.

Where legally permitted and reasonably practicable, ADD will use commercially reasonable efforts to:

  • limit the scope of disclosure;

  • protect confidential information;

  • and notify affected Customers prior to disclosure of Customer information to governmental or regulatory authorities.

7. Third-Party Models and Open-Source Usage

The Platform and Software may include, integrate with, provide access to, or rely upon:

  • third-party Models;

  • Open-Source Components;

  • hosted infrastructure services;

  • external APIs;

  • and technologies licensed or operated by third parties.

Customers and Authorised Users must comply with:

  • applicable third-party licence terms;

  • Open-Source Component licence obligations;

  • operational restrictions;

  • Model-specific usage requirements;

  • and any legal or regulatory limitations associated with such technologies.

Customers must not:

  • use Models or Open-Source Components in breach of applicable licence terms;

  • remove or alter attribution, copyright, or licensing notices where required;

  • misrepresent ownership of third-party or open-source technologies;

  • or use the Platform or Software in a manner that could cause ADD, licensors, infrastructure providers, or other customers to breach applicable licensing obligations.

Availability of specific Models, APIs, Open-Source Components, or Third-Party Services may change due to:

  • licensing requirements;

  • legal or regulatory restrictions;

  • infrastructure availability;

  • security considerations;

  • operational requirements;

  • or provider decisions.

ADD may:

  • add, remove, replace, suspend, or restrict access to Models or Third-Party Services;

  • apply operational safeguards or usage restrictions;

  • or modify supported technologies where reasonably necessary for legal, operational, security, licensing, or infrastructure reasons.

Nothing in this Policy grants Customers ownership of:

  • third-party Models;

  • Open-Source Components;

  • proprietary technologies;

  • or related intellectual property rights belonging to licensors or third parties.

Nothing in this Policy limits rights expressly granted under applicable open-source licences relating to Open-Source Components.

9. Reporting Security Issues and Abuse

Customers and Authorised Users should promptly report:

  • suspected security vulnerabilities;

  • unauthorised access;

  • credential compromise;

  • abusive or unlawful activity;

  • operational security concerns;

  • or suspected violations of this Policy or applicable agreements.

Reports may be submitted to ADD through designated operational, support, or security contact channels made available by ADD from time to time.

Customers reporting security issues must:

  • act responsibly and in good faith;

  • avoid activities likely to disrupt services, infrastructure, Models, customers, or third parties;

  • avoid accessing, modifying, or retaining data beyond what is reasonably necessary to demonstrate the issue;

  • and comply with applicable laws, regulations, and operational requirements.

The following activities are prohibited unless expressly authorised in writing by ADD:

  • penetration testing;

  • vulnerability scanning;

  • exploit testing;

  • Model extraction testing;

  • and other security research activities.

ADD may:

  • investigate reported issues;

  • cooperate with infrastructure providers, licensors, regulators, or law enforcement authorities where required;

  • and take reasonable operational, technical, or legal measures necessary to protect the Platform, Software, infrastructure, customers, or third parties.

Submission of a report does not:

  • create any contractual entitlement;

  • create any right to compensation or reward;

  • or authorise continued testing or investigative activity unless expressly agreed by ADD in writing.

10. Changes to this Policy

ADD may update, modify, supplement, replace, or revise this Policy from time to time where reasonably necessary to reflect:

  • changes in applicable laws or regulations;

  • security requirements;

  • operational practices;

  • infrastructure changes;

  • licensing obligations;

  • technological developments;

  • Model availability or capabilities;

  • abuse prevention requirements;

  • or evolving industry standards and risk management practices.

Updated versions of this Policy may be published through:

  • the Platform;

  • customer portals;

  • Documentation repositories;

  • the ADD website;

  • or other appropriate communication channels.

Continued access to or use of the Platform, Software, APIs, Models, or related services following the effective date of an updated Policy constitutes acceptance of the revised Policy.

Where a Customer is subject to a separate written enterprise agreement, supply contract, SLA, or negotiated commercial arrangement, the terms of that agreement shall prevail in the event of conflict with this Policy.