Generated at repello.ai/tools
AI Acceptable Use Policy
- Organization: Your Organization
- Effective date: 2026-05-05
- Risk profile: Balanced — allow approved tools, audit by default
- Jurisdictions: United States; European Union (EU AI Act)
1. Purpose & scope
This policy governs how Your Organization employees, contractors, and other authorized users may use artificial intelligence (AI) tools — including large language models (LLMs), AI assistants, and autonomous coding agents — in their work. It applies to all AI use that touches Your Organization systems, data, customers, or intellectual property, whether the tool runs on company infrastructure, on a personal device, or via a third-party SaaS service.
2. Approved tools
Only the following AI tools are approved for use with Your Organization data classified as "Internal" or higher:
2.1 Conversational AI / LLMs
- ChatGPT (OpenAI)
- Claude (Anthropic)
2.2 Coding agents and developer assistants
- Claude Code
- Cursor
- GitHub Copilot
Use of any coding agent not on this list is prohibited without prior written approval from the security team. Shadow agents introduce credential exposure, supply-chain risk, and audit gaps that this policy is designed to prevent.
2.3 Procurement and exceptions
Adoption of an AI tool not on the approved list requires a lightweight security review. Time-boxed trials (≤30 days) of free / personal tiers may proceed with manager approval, provided no Confidential or Regulated data is processed.
3. Data classification & handling
Your Organization classifies data as: Public, Internal, Confidential, Regulated.
3.1 Classification matrix
The following table sets the maximum data classification permitted for each AI usage pattern.
| Usage pattern | Public | Internal | Confidential | Regulated |
|---|---|---|---|---|
| Free / consumer tier of approved tool | ✅ | ❌ | ❌ | ❌ |
| Approved enterprise tier (zero-data-retention) | ✅ | ✅ | ✅ | ✅ |
| Self-hosted / on-prem model | ✅ | ✅ | ✅ | ✅ |
| Coding agent on developer workstation | ✅ | ✅ | ✅ (with sandbox) | ❌ |
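The matrix above can be encoded as a lookup so pre-flight tooling can enforce it automatically. This is a minimal sketch; the pattern names and the `is_permitted` function are illustrative, not an official API, and the workstation-agent entry assumes the Section 5.1 sandbox is in place:

```python
# Illustrative encoding of the Section 3 classification matrix.
# Pattern names are hypothetical identifiers, not official ones.
ALLOWED = {
    "free_tier":         {"Public"},
    "enterprise_zdr":    {"Public", "Internal", "Confidential", "Regulated"},
    "self_hosted":       {"Public", "Internal", "Confidential", "Regulated"},
    "workstation_agent": {"Public", "Internal", "Confidential"},  # Confidential requires sandbox
}

def is_permitted(usage_pattern: str, classification: str) -> bool:
    """Return True if this usage pattern may process data of this classification."""
    return classification in ALLOWED.get(usage_pattern, set())
```

A CI hook or proxy could call `is_permitted` before a prompt leaves the workstation.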
3.2 Prompt sanitization
Users must not paste credentials, API keys, private SSH keys, internal URLs that bypass authentication, or unredacted personal data into AI prompts. Code snippets must be reviewed for embedded secrets before submission.
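A pre-submission check for the secrets named above might look like the following sketch. The patterns cover a few common credential formats only; a real deployment would use a maintained secret scanner rather than this hand-rolled list:

```python
import re

# Illustrative secret patterns for the Section 3.2 pre-submission check.
# Not exhaustive -- a production deployment should use a maintained scanner.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                              # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"), # private key material
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                           # GitHub personal access token
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt appears to contain a credential and must be blocked."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)
```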
4. Prohibited uses
The following uses of AI tools are prohibited regardless of tier or approval:
- Generating, modifying, or distributing content that violates applicable law, Your Organization's code of conduct, or third-party intellectual property rights.
- Making consequential decisions about individuals (hiring, performance, compensation, termination, customer credit, healthcare access) based solely on AI output without documented human review.
- Misrepresenting AI-generated content as human-authored when authorship is material (regulatory filings, sworn statements, attestations).
- Using AI to bypass Your Organization security controls, to conduct surveillance of colleagues, or to perform any activity that would be prohibited if carried out without AI assistance.
- Any use that would constitute a prohibited AI practice under Article 5 of the EU AI Act (subliminal manipulation, exploitation of vulnerabilities, social scoring, real-time biometric identification in public spaces, etc.).
5. Coding agents — engineering controls
Coding agents (Claude Code, Cursor, Copilot, Cowork, etc.) operate with the developer's full local privileges by default. The following controls are mandatory for all agent use on Your Organization workstations.
5.1 Workstation hardening
- Agents must run inside an OS-level sandbox (Apple Seatbelt on macOS, bubblewrap or firejail on Linux, container sandbox on Windows).
- Filesystem mounts available to the agent must be scoped to the active project. The home directory, ~/.ssh, ~/.aws, and other credential paths must not be mounted in.
- Network egress from the agent process must route through a known proxy or be allowlisted. Direct outbound to arbitrary domains is prohibited.
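On Linux, the mount-scoping and egress controls above can be expressed as a bubblewrap invocation. The sketch below builds one such command line; the proxy host and bind set are illustrative placeholders, and a real profile would be tuned per distribution:

```python
import os

def bwrap_args(project_dir: str, proxy: str = "proxy.internal:3128") -> list:
    """Build an illustrative bubblewrap command line for an agent process:
    only the active project is writable, the real home directory (and with it
    ~/.ssh and ~/.aws) is hidden behind a tmpfs, and egress is pointed at a
    known proxy. The proxy hostname is a hypothetical placeholder."""
    project = os.path.abspath(project_dir)
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",            # system binaries, read-only
        "--ro-bind", "/lib", "/lib",
        "--bind", project, project,             # only the project is writable
        "--tmpfs", "/home",                     # hide real home dir and credential paths
        "--setenv", "HTTPS_PROXY", "http://" + proxy,
        "--unshare-all", "--share-net",         # fresh namespaces; network via proxy
        "--die-with-parent",
    ]
```

The returned list can be passed to `subprocess.run` with the agent binary appended.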
5.2 Credentials
- Long-lived credentials (AWS root keys, GCP service-account keys, GitHub PATs) must not be available in any environment the agent can read.
- Cloud access for agents must use short-lived, scoped tokens (AWS STS, GCP workload identity, OIDC). Token TTL ≤ 4 hours.
- Database access for agents must use read-only or row-level-restricted roles unless write access is the explicit task.
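The 4-hour TTL ceiling from 5.2 can be checked mechanically before a token is handed to an agent. A minimal sketch, assuming the credential broker exposes the token's expiry as a timezone-aware datetime:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_TTL = timedelta(hours=4)  # policy ceiling from Section 5.2

def token_ttl_ok(expires_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if the token is unexpired and expires within the 4-hour ceiling."""
    now = now or datetime.now(timezone.utc)
    remaining = expires_at - now
    return timedelta(0) < remaining <= MAX_TTL
```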
5.3 Shell execution
Shell execution by agents is permitted only inside the sandbox defined in 5.1. Auto-approval of shell commands is disabled by default; per-command confirmation is required for destructive operations (rm, git push --force, package install, network requests).
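A per-command confirmation gate for the destructive operations listed above could be sketched as follows. The prefix list is a starting point only, not an exhaustive catalogue:

```python
import shlex

# Illustrative destructive-command prefixes (Section 5.3). Not exhaustive.
DESTRUCTIVE_PREFIXES = [
    ["rm"],
    ["git", "push", "--force"],
    ["pip", "install"],
    ["npm", "install"],
    ["curl"],  # outbound network request
]

def needs_confirmation(command: str) -> bool:
    """Return True if the agent's shell command requires per-command confirmation."""
    argv = shlex.split(command)
    return any(argv[: len(prefix)] == prefix for prefix in DESTRUCTIVE_PREFIXES)
```

An agent harness would call this before executing any tool-issued shell command and prompt the developer on a match.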
5.4 Model Context Protocol (MCP) servers
- Only MCP servers from the centrally maintained allowlist may be installed on workstations or in agent runtimes.
- Each MCP server must have an attested SHA-256 hash; servers that update without re-attestation are auto-disabled (rug-pull defense).
- Tool descriptions must be reviewed for hidden directives ("tool poisoning") before allowlisting.
- MCP traffic must be routed through Your Organization's MCP gateway (or equivalent inspection layer) when handling Internal-or-higher data.
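The rug-pull defense above amounts to re-hashing the installed server and comparing against the attested digest. A minimal sketch; the allowlist entry shown is hypothetical (its digest is the SHA-256 of an empty file, used here purely as a stand-in):

```python
import hashlib
from pathlib import Path

# Hypothetical attestation allowlist (Section 5.4). The digest below is the
# SHA-256 of an empty file, used only as a placeholder value.
ATTESTED_HASHES = {
    "filesystem-server": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def server_still_attested(name: str, binary_path: str) -> bool:
    """Re-hash the installed MCP server binary and compare to its attested
    SHA-256. A mismatch (e.g. a silent update) means auto-disable."""
    digest = hashlib.sha256(Path(binary_path).read_bytes()).hexdigest()
    return ATTESTED_HASHES.get(name) == digest
```

Running this check on every agent start catches servers that updated without re-attestation.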
5.5 Audit logging
- All agent sessions must emit structured logs to the central SIEM: prompt content, tool invocations, file accesses, network requests, and exit conditions.
- Retention: minimum 1 year for sessions touching Internal-or-higher data.
- Conversation memory and long-running agent context must be encrypted at rest with a key the security team can revoke.
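The structured-log requirement in 5.5 implies a per-event schema along these lines. Field names here are illustrative and would need to be mapped to the SIEM's actual schema:

```python
import json
from datetime import datetime, timezone

def agent_log_record(session_id, tool, args, files, exit_condition):
    """Build one structured SIEM event for an agent tool invocation
    (Section 5.5). Field names are illustrative, not a fixed schema."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "tool_invocation": {"tool": tool, "args": args},
        "file_accesses": files,
        "exit_condition": exit_condition,
    })
```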
6. Output handling and review
- Code generated by AI tools must be reviewed by a human author before merge. AI-authored commits must include the agent's identifier (model + version) in the commit trailer.
- Customer-facing content generated by AI must be reviewed by the responsible business owner before publication.
- AI output must not be cited as an authoritative source. Where the output influences a decision, the human reviewer is the decision-maker of record.
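The commit-trailer requirement in Section 6 can be automated in a commit hook. A minimal sketch; the `AI-Agent:` trailer key is a hypothetical convention, not a standard git trailer name:

```python
def add_agent_trailer(commit_message: str, model: str, version: str) -> str:
    """Append the agent identifier trailer required for AI-authored commits
    (Section 6). The 'AI-Agent' key is an illustrative convention."""
    trailer = "AI-Agent: " + model + "/" + version
    return commit_message.rstrip("\n") + "\n\n" + trailer + "\n"
```

A `commit-msg` hook could apply this whenever the staged changes originate from an agent session.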
7. Incident response
If you suspect an AI tool has leaked sensitive data, has been subject to prompt injection, or has executed an unintended action:
1. Stop the session immediately. Do not continue interacting with the tool.
2. Notify the security team via the standard incident-reporting channel within 1 business hour.
3. Preserve session logs, prompts, and outputs. Do not delete history.
4. For agent sessions, capture the full conversation including tool calls and file modifications.
8. Training, awareness, and review
- All employees with AI tool access must complete annual training covering this policy, prompt injection awareness, and data-classification basics.
- Engineers using coding agents must complete an additional module covering credential hygiene, MCP supply chain, and sandbox configuration.
- This policy is reviewed annually or when a material change in the AI tooling landscape (new agent capability, new regulation, new published vulnerability class) is identified.
9. Enforcement
Violations of this policy are subject to Your Organization's standard disciplinary process. Material violations involving Confidential or Regulated data — or any deliberate circumvention of agent sandboxing or credential controls — may result in immediate access revocation and termination of employment or contract.
Generated by Repello AI's AI Acceptable Use Policy Generator on 2026-05-05.