AI Policy

This document outlines the principles, rules, and processes governing the use of AI tools and serves as a foundation for responsible AI integration into Agile Collective's operations. The purpose of this policy is to provide clear guidance on the ethical, safe, and effective use of AI tools by team members at Agile Collective. As AI technologies continue to evolve, we will keep this guidance under review.

This policy applies to all Agile Collective employees and contractors, as well as any other individuals who use AI tools in the context of Agile Collective's business operations, client work, or internal projects.

See our AI Definitions and Terminology Page for help with the various definitions used in this policy. For a shorter overview of our guiding AI principles, please look at our AI Policy Overview.


General Principles

Agile Collective is conscious of the ethical and environmental challenges posed by AI and prioritises safety, ethics, and environmental sustainability wherever possible. This policy builds on several core principles.

Human Oversight

Team members must be able to explain and justify the use of AI tools.

AI tools must be treated as collaborative assistants to enhance human capabilities. They are not substitutes for human decision-making, judgment, or accountability.

Accountability

Team members are fully accountable for the quality, completion, and outcome of AI-assisted work.

All AI outputs (both external and internal deliverables) must undergo human review to ensure they meet Agile Collective's quality standards and ethical guidelines.

Data Privacy and Security

Team members must avoid sharing more data with an AI tool than strictly necessary; in particular, they must not share personal data in any form without consent.

Transparency

Team members must review client-specific AI policies and inform the client of AI usage whenever more than foundation/support usage is required. Those policies must then be followed.

This is Agile Collective's default policy, which applies to all our clients, but in some cases, there may be additional requirements, which would sit in an "AI Policy" document within a client's "Contracts" directory.

Environmental Responsibility

We are mindful of the environmental impact of generative AI tools and commit to choosing environmentally responsible AI options.

Risk Awareness and Mitigation

We acknowledge the risks associated with AI tools, such as hallucinations, data leaks, bias, and data poisoning. To address these risks, we only use approved AI tools on client projects.

Responsible and Compliant Usage

AI tools must not be used to generate or modify content in ways that harm others, break UK law, or otherwise violate company policy.

Prerequisites for Using AI Tools

Only approved AI tools may be used for client work.

Team members may use non-reviewed AI tools for personal skill-building and experimentation, provided such usage does not involve client or proprietary company data and aligns with organisational goals and policies. This excludes "Rejected AIs", which are deemed high risk and must never be used on work projects or devices.

Agile Collective will reference this AI policy in its contracts.

Team members are responsible for reviewing and respecting any AI policies required by the client organisation.

How to Use AI at Agile Collective

AI usage in workflows involving real data must be transparent to the relevant clients. Before using AI on a project, team members must review client-specific AI policies and inform the client when more than foundation/support usage is required or planned. Teams must follow client policies and guidelines on the use of AI tools whenever those policies are more stringent than Agile Collective's.

General Usage

Accountability

Transparency and Explainability

Risk Mitigation

When utilising AI tools, team members must adhere to the principle of data minimisation. This means sharing only the strictly necessary information with the AI system to achieve the desired outcome.
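The data-minimisation principle above can be sketched in code. The following Python snippet is an illustrative pre-processing step, not an Agile Collective tool: the `minimise` helper and its regex patterns are assumptions, showing one way obvious personal identifiers might be stripped from a prompt before it is sent to any AI service.

```python
import re

# Illustrative patterns only -- not an exhaustive catalogue of personal data.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE = re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b")

def minimise(text: str) -> str:
    """Replace email addresses and UK phone numbers with placeholders
    before the text is shared with an AI tool."""
    text = EMAIL.sub("[EMAIL]", text)
    text = UK_PHONE.sub("[PHONE]", text)
    return text

prompt = "Summarise the ticket from jane@example.com (call 07700 900123)."
print(minimise(prompt))
# → Summarise the ticket from [EMAIL] (call [PHONE]).
```

Pattern-based redaction of this kind is a first line of defence only; it will not catch names, addresses, or other free-text identifiers, so human review of what is shared remains essential.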

Prohibited Usage

Prohibited Data

In addition to specific AI policies, tool usage must comply with all other relevant company policies, including but not limited to those related to data privacy, security, and intellectual property.

Authorised AI Tools and Vendors

The AI Circle evaluates and approves AI tools and vendors for use at Agile Collective. This includes:

AI Tool and Vendor Evaluation Guidelines:

Team members who have reason to believe company or client data may have been compromised by an AI tool or vendor must file an incident report via Agile Collective Help Desk. Security events will be reviewed and addressed by the AI Circle following Agile Collective's standard incident response process.

Agile Collective's use of AI tools is subject to laws that have jurisdiction in the United Kingdom. Compliance concerns should be raised to the AI Circle.

Notwithstanding the above, AI tools may be used to assist with the development of:

All hiring and HR-related material created with the assistance of AI tools (and any updates thereto) must undergo human review, with a particular focus on identifying and addressing any potential biases introduced by the AI model.

AI tools will not be used in place of humans to screen or interview prospective employees; however, assistive tools such as notetakers can be used in the interview and hiring process as long as these are disclosed to the applicant.

Updates

This policy will be reviewed at least once per calendar year by Agile Collective's AI Circle. Changes and updates will be reviewed and approved by the Anchor Circle (who may choose to involve the wider membership). Reviews may occur more frequently if needed due to changes in law or technology.

Agile Collective team members will be made aware of new and updated company-wide AI policies through standard communications channels (All-Company meetings, Rocket, email, etc.). New and current team members will be asked to confirm that they have read and understood those policies.

Last updated: