AI Policy
This document outlines the principles, rules, and processes governing the use of AI tools and serves as a foundation for responsible AI integration into Agile Collective's operations. The purpose of this policy is to provide clear guidance on the ethical, safe, and effective use of AI tools by team members at Agile Collective. As AI technologies continue to evolve, we aim to:
- Ensure that AI tools support and enhance human skills, rather than replace them.
- Protect company, client, and sensitive data from misuse.
- Promote transparency, accountability, and ethical use in all AI-assisted workflows.
- Minimise risks associated with AI tools, such as inaccurate outputs, security vulnerabilities, or ethical concerns.
- Maintain the trust of our clients and team members in all aspects of our work.
- Ensure that our environmental impact is considered when using AI.
See our AI Definitions and Terminology Page for help with the various definitions used in this policy. For a shorter overview of our guiding AI principles, please look at our AI Policy Overview.
General Principles
Human Oversight
AI tools must be treated as collaborative assistants to enhance human capabilities. They are not substitutes for human decision-making, judgment, or accountability.
Accountability
All AI outputs (both external and internal deliverables) must undergo human review to ensure they meet Agile Collective's quality standards and ethical guidelines.
Data Privacy and Security
- Consented - User data must not be entered into a public AI tool without the individual's prior consent.
- Minimised - Team members should not provide AI tools with any more data than necessary to achieve desired results.
- Anonymised - Sensitive data should be anonymised to safeguard confidentiality.
- Controlled - Team members must not input any company or client data into an unapproved AI system (approval requires that we have reviewed the vendor's data retention policies).
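To illustrate the minimisation and anonymisation principles in practice, the sketch below redacts common personal identifiers from text before it is shared with an AI tool. The patterns and placeholder labels are illustrative assumptions, not an approved tool: regexes alone miss many identifiers (names, for instance), so real anonymisation still requires a vetted tool and human review.

```python
import re

# Illustrative patterns only -- a hand-rolled regex pass is NOT a complete
# anonymisation solution (it will not catch names, addresses, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace common personal identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jo at jo.bloggs@example.org or +44 7700 900123, SW1A 1AA."
print(redact(prompt))
# Note: the name "Jo" is not caught -- hence the need for human review.
```

A pre-processing step like this supports data minimisation, but the reviewer remains accountable for what actually leaves the organisation.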
Transparency
This is Agile Collective's default policy and applies to all our clients. Some clients may have additional requirements; these are recorded in an "AI Policy" document within the client's "Contracts" directory.
Environmental Responsibility
Risk Awareness and Mitigation
To address the risks associated with AI use, we will:
- Share information around responsible AI usage.
- Provide resources and training on safe, responsible, and effective AI usage.
- Promote awareness of AI limitations and encourage team members to exercise sound human judgment when interpreting AI outputs.
- Only use approved AI tools for client work.
Responsible and Compliant Usage
Prerequisites for Using AI Tools
Team members may use unreviewed AI tools for personal skill-building and experimentation, provided such usage does not involve client or proprietary company data and aligns with organisational goals and policies. This does not extend to "Rejected AIs", which are deemed high risk and must never be used on work projects or devices.
Agile Collective will reference this AI policy in its contracts.
Team members are responsible for reviewing and respecting any AI policies required by the client organisation.
How to Use AI at Agile Collective
General Usage
- AI outputs may be used as a foundation or support, not as a final product. For example, AI may assist with drafting or reviewing work but requires human oversight and refinement before completion.
- AI tools must not generate core deliverables end-to-end without human oversight and involvement (for example, production-ready code, a final report, or major design components).
- Before incorporating AI into project work, team members must review any relevant client AI usage policy. Client policies may impose additional restrictions on how AI-generated outputs can be used.
Accountability
- AI outputs must be critically evaluated for accuracy, relevance, and appropriateness.
- AI models are trained on human-generated data, which may contain biases. Team members must remain vigilant and correct any biases or inaccuracies in AI-generated content.
- AI outputs must be reviewed to ensure they meet the highest standards of quality and professionalism.
Transparency and Explainability
- Team members must be able to explain and justify the use of AI in their work, ensuring transparency and alignment with project goals.
- Agile Collective team members should always be open and transparent with their team and the client about their use of AI and how it was used to assist with their work.
- While it is not necessary for team members to include attribution when AI has been used in an incidental way (e.g., copyediting), substantial use of AI in a work product (e.g., to generate code, reports, or analyses) should be documented. This documentation should include the specific models and prompts used, so that others can reproduce or extend the results.
Risk Mitigation
When utilising AI tools, team members must adhere to the principle of data minimisation. This means sharing only the strictly necessary information with the AI system to achieve the desired outcome.
Prohibited Usage
- AI tools shall not be used in any way that violates consent or the human rights of others. This includes, but is not limited to, the creation of nonconsensual image, voice, or video deepfakes.
- AI tools shall not be used to create or modify content in a manner that is inauthentic or deceptive.
- AI tools shall not be used to create content that is illegal or otherwise violates company policies.
- Rejected AIs should not be used on any client work or installed on any Agile Collective devices (e.g., laptops and mobile phones).
Prohibited Data
- Secrets: passwords, API keys, private keys, tokens.
- Confidential client/company data: unless the tool and use case are explicitly approved.
- Personal data: unless explicitly approved for that specific use case.
- Special Category Data: prohibited unless there is explicit approval and a documented Data Protection Impact Assessment (DPIA)/risk assessment.
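As a practical safeguard for the "Secrets" category above, a pre-send check can flag strings that look like credentials before a prompt is shared with an AI tool. This is a sketch only, not part of the policy: the patterns and the entropy threshold are illustrative assumptions, and a real deployment should use a maintained secret scanner.

```python
import math
import re

# Illustrative heuristics -- prefer a maintained secret scanner in practice.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                 # AWS access key IDs
    re.compile(r"\b(?:password|passwd|token)\s*[:=]\s*\S+", re.IGNORECASE),
]

def shannon_entropy(s: str) -> float:
    """Bits per character; long high-entropy strings are often credentials."""
    counts = {c: s.count(c) for c in set(s)}
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def looks_risky(prompt: str) -> bool:
    """Heuristic check: does this prompt appear to contain a secret?"""
    if any(p.search(prompt) for p in SECRET_PATTERNS):
        return True
    # Flag long, random-looking tokens (e.g. API keys) by entropy.
    return any(len(word) >= 20 and shannon_entropy(word) > 4.0
               for word in prompt.split())
```

A check like this reduces accidental leakage but cannot catch everything, so it complements rather than replaces the prohibitions above.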
In addition to specific AI policies, tool usage must comply with all other relevant company policies, including but not limited to those related to data privacy, security, and intellectual property.
Authorised AI Tools and Vendors
The AI Circle evaluates and approves AI tools and vendors for use at Agile Collective. This includes:
- Risk assessments for all AI vendors, updated regularly by the AI Circle.
- A maintained list of authorised tools for reference.
- Review of integrated AI features in existing tools (e.g., Miro, Google Workspace, Jira, Confluence, Adobe Creative Suite, macOS) to ensure they rely only on models and providers approved by the AI Circle.
- Notifications to team members if any AI tools are not authorised for use.
- A process for requesting new AI tools via Rocket (internal request system).
AI Tool and Vendor Evaluation Guidelines:
- AI models for everyday use should be stable and follow industry-standard best practices for safety, ethics, and environmental sustainability.
- The AI Circle may make exceptions for experimental models and usage on a case-by-case basis. These exceptions will be noted in the company decision log.
- Where possible, Agile Collective will avoid using tools that create vendor dependencies.
- Where possible, AI tools should utilise Agile Collective's existing security infrastructure, such as standardised logins. Access to specific tools may be restricted based on role and project requirements.
Team members who have reason to believe company or client data may have been compromised by an AI tool or vendor must file an incident report via Agile Collective Help Desk. Security events will be reviewed and addressed by the AI Circle following Agile Collective's standard incident response process.
HR and Legal Compliance
Agile Collective's use of AI tools is subject to applicable United Kingdom law. Compliance concerns should be raised with the AI Circle.
Subject to the review requirements below, AI tools may be used to assist with the development of:
- HR-related communications, such as memos and policy updates.
- Hiring notices.
- Promotion packets created by employees.
- Coaching documentation, leveling reviews, and performance improvement requests.
All hiring and HR-related material created with the assistance of AI tools (and any updates thereto) must be subjected to human review, with a particular focus on identifying and addressing any potential biases introduced by the AI model.
AI tools will not be used in place of humans to screen or interview prospective employees; however, assistive tools such as notetakers can be used in the interview and hiring process as long as these are disclosed to the applicant.
Updates
This policy will be reviewed at least once per calendar year by Agile Collective's AI Circle. Changes and updates will be reviewed and approved by the Anchor Circle (who may choose to involve the wider membership). Reviews may occur more frequently if needed due to changes in law or technology.
Agile Collective team members will be made aware of new and updated company-wide AI policies through standard communications channels (All-Company meetings, Rocket, email, etc.). New and current team members will be asked to confirm that they have read and understood those policies.
Last updated: