Free tool · No signup

AI Acceptable Use Policy generator

Build a policy that covers the AI tools your team actually uses — ChatGPT, Claude, Copilot, Cursor, Claude Code, and the rest. Real clauses for sandboxing, credential handling, MCP servers, and audit logging that the generic templates skip. Tweak the inputs, see the policy update live, and export it the way you’ll actually use it.

✓ Free · ✓ No login or email · ✓ Markdown, PDF, and Claude-Code-ready config · ✓ One-click into Claude or ChatGPT

Organization

Risk profile

Jurisdictions

Approved LLMs / chat tools

Approved coding agents

Engineering controls

Send it somewhere

Drop the result into wherever you actually work.

Preview


Repello AI

Generated at repello.ai/tools

AI Acceptable Use Policy

Organization: Your Organization
Effective date: 2026-05-05
Risk profile: Balanced — allow approved tools, audit by default
Jurisdictions: United States; European Union (EU AI Act)

1. Purpose & scope

This policy governs how Your Organization employees, contractors, and other authorized users may use artificial intelligence (AI) tools — including large language models (LLMs), AI assistants, and autonomous coding agents — in their work. It applies to all AI use that touches Your Organization systems, data, customers, or intellectual property, whether the tool runs on company infrastructure, on a personal device, or via a third-party SaaS service.

2. Approved tools

Only the following AI tools are approved for use with Your Organization data classified as "Internal" or higher:

2.1 Conversational AI / LLMs

  • ChatGPT (OpenAI)
  • Claude (Anthropic)

2.2 Coding agents and developer assistants

  • Claude Code
  • Cursor
  • GitHub Copilot

Use of any coding agent not on this list is prohibited without prior written approval from the security team. Unapproved ("shadow") agents introduce credential exposure, supply-chain risk, and audit gaps that this policy is designed to prevent.

2.3 Procurement and exceptions

Adoption of an AI tool not on the approved list requires a lightweight security review. Time-boxed trials (≤30 days) of free / personal tiers may proceed with manager approval, provided no Confidential or Regulated data is processed.

3. Data classification & handling

Your Organization classifies data as: Public, Internal, Confidential, Regulated.

3.1 Usage patterns

The maximum permitted data classification depends on which usage pattern applies:

  • Free / consumer tier of approved tool
  • Approved enterprise tier (zero-data-retention)
  • Self-hosted / on-prem model
  • Coding agent on developer workstation (permitted only with the sandbox controls in section 5)

3.2 Prompt sanitization

Users must not paste credentials, API keys, private SSH keys, internal URLs that bypass authentication, or unredacted personal data into AI prompts. Code snippets must be reviewed for embedded secrets before submission.
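
One way to satisfy the review requirement is to run a secret scanner over the snippet first; a minimal sketch using gitleaks (v8 syntax, assuming the snippet is saved under ./snippet):

```bash
# Scan the saved snippet for embedded secrets before it goes
# anywhere near an AI prompt. A non-zero exit means findings.
gitleaks detect --source ./snippet --no-git
```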

4. Prohibited uses

The following uses of AI tools are prohibited regardless of tier or approval:

  • Generating, modifying, or distributing content that violates applicable law, Your Organization's code of conduct, or third-party intellectual property rights.
  • Making consequential decisions about individuals (hiring, performance, compensation, termination, customer credit, healthcare access) based solely on AI output without documented human review.
  • Misrepresenting AI-generated content as human-authored when authorship is material (regulatory filings, sworn statements, attestations).
  • Using AI to bypass Your Organization security controls, to conduct surveillance of colleagues, or to perform any other activity that would be prohibited if carried out by other means.
  • Any use that would constitute a prohibited AI practice under Article 5 of the EU AI Act (subliminal manipulation, exploitation of vulnerabilities, social scoring, real-time biometric identification in public spaces, etc.).

5. Coding agents — engineering controls

Coding agents (Claude Code, Cursor, Copilot, Cowork, etc.) operate with the developer's full local privileges by default. The following controls are mandatory for all agent use on Your Organization workstations.

5.1 Workstation hardening

  • Agents must run inside an OS-level sandbox (Apple Seatbelt on macOS, bubblewrap or firejail on Linux, container sandbox on Windows).
  • Filesystem mounts available to the agent must be scoped to the active project. The home directory, ~/.ssh, ~/.aws, and other credential paths must not be mounted into the sandbox.
  • Network egress from the agent process must route through a known proxy or be allowlisted. Direct outbound to arbitrary domains is prohibited.
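
On Linux, a minimal bubblewrap sketch of the first two bullets, assuming the agent CLI is claude and the active project is the current directory (the egress-proxy requirement needs separate network plumbing not shown here):

```bash
# Read-only OS, writable project dir only, no $HOME, no credential
# paths, no network. Some distros also need: --symlink usr/lib64 /lib64
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --bind "$PWD" "$PWD" \
  --chdir "$PWD" \
  --setenv HOME "$PWD" \
  --unshare-net \
  --die-with-parent \
  claude
```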

5.2 Credentials

  • Long-lived credentials (AWS root keys, GCP service-account keys, GitHub PATs) must not be available in any environment the agent can read.
  • Cloud access for agents must use short-lived, scoped tokens (AWS STS, GCP workload identity, OIDC). Token TTL ≤ 4 hours.
  • Database access for agents must use read-only or row-level-restricted roles unless write access is the explicit task.
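
A sketch of what short-lived and scoped looks like on AWS, assuming a pre-provisioned read-only role for agent sessions (the account ID and role name are placeholders, and the role's maximum session duration must allow 4 hours):

```bash
# Mint a 4-hour scoped session; only these temporary credentials
# are exported into the agent's environment.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/agent-readonly \
  --role-session-name claude-code-jdoe \
  --duration-seconds 14400
```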

5.3 Shell execution

Shell execution by agents is permitted only inside the sandbox defined in 5.1. Auto-approval of shell commands is disabled by default; per-command confirmation is required for destructive operations (rm, git push --force, package install, network requests).
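
In Claude Code, that default maps onto permission rules in settings; a minimal sketch (the rule patterns are illustrative; verify the exact syntax against the current settings reference):

```json
{
  "permissions": {
    "ask": [
      "Bash(rm:*)",
      "Bash(git push --force:*)",
      "Bash(npm install:*)",
      "Bash(curl:*)"
    ]
  }
}
```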

5.4 Model Context Protocol (MCP) servers

  • Only MCP servers from the centrally maintained allowlist may be installed on workstations or in agent runtimes.
  • Each MCP server must have an attested SHA-256 hash; servers that update without re-attestation are auto-disabled (rug-pull defense).
  • Tool descriptions must be reviewed for hidden directives ("tool poisoning") before allowlisting.
  • MCP traffic must be routed through Your Organization's MCP gateway (or equivalent inspection layer) when handling Internal-or-higher data.
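
The allowlist itself can be a simple attested manifest that the gateway checks; a hypothetical entry (the schema is illustrative, not a standard format, and the hash is a placeholder):

```json
{
  "name": "github-mcp",
  "source": "https://github.com/github/github-mcp-server",
  "version": "0.4.2",
  "sha256": "<attested-hash-of-the-reviewed-build>",
  "reviewed_by": "security-team",
  "max_data_classification": "Internal"
}
```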

5.5 Audit logging

  • All agent sessions must emit structured logs to the central SIEM: prompt content, tool invocations, file accesses, network requests, and exit conditions.
  • Retention: minimum 1 year for sessions touching Internal-or-higher data.
  • Conversation memory and long-running agent context must be encrypted at rest with a key the security team can revoke.
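
A hypothetical shape for one structured event (field names are illustrative; map them to whatever schema your SIEM expects):

```json
{
  "timestamp": "2026-05-05T14:32:10Z",
  "agent": "claude-code",
  "user": "jdoe",
  "session_id": "a1b2c3d4",
  "event": "tool_invocation",
  "tool": "Bash",
  "command": "git diff",
  "files_read": ["src/app.ts"],
  "network_requests": [],
  "data_classification": "Internal"
}
```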

6. Output handling and review

  • Code generated by AI tools must be reviewed by a human before merge. AI-authored commits must include the agent's identifier (model + version) in the commit trailer (see the sketch after this list).
  • Customer-facing content generated by AI must be reviewed by the responsible business owner before publication.
  • AI output must not be cited as an authoritative source. Where the output influences a decision, the human reviewer is the decision-maker of record.
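
A sketch of the commit-trailer convention, assuming git 2.32 or later (the trailer key and model string are illustrative; standardize on one key org-wide):

```bash
# Record the generating agent as a structured commit trailer.
git commit --trailer "AI-Agent: Claude Code (claude-sonnet-4-5)" \
  -m "Refactor session logging"

# Later: list every AI-authored commit.
git log --grep='AI-Agent:'
```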

7. Incident response

If you suspect an AI tool has leaked sensitive data, has been subject to prompt injection, or has executed an unintended action:

1. Stop the session immediately. Do not continue interacting with the tool.
2. Notify the security team via the standard incident-reporting channel within 1 business hour.
3. Preserve session logs, prompts, and outputs. Do not delete history.
4. For agent sessions, capture the full conversation including tool calls and file modifications.

8. Training, awareness, and review

  • All employees with AI tool access must complete annual training covering this policy, prompt injection awareness, and data-classification basics.
  • Engineers using coding agents must complete an additional module covering credential hygiene, MCP supply chain, and sandbox configuration.
  • This policy is reviewed annually or when a material change in the AI tooling landscape (new agent capability, new regulation, new published vulnerability class) is identified.

9. Enforcement

Violations of this policy are subject to Your Organization's standard disciplinary process. Material violations involving Confidential or Regulated data — or any deliberate circumvention of agent sandboxing or credential controls — may result in immediate access revocation and termination of employment or contract.


Generated by Repello AI's AI Acceptable Use Policy Generator on 2026-05-05. Repello: AI security platform for autonomous red-teaming (ARTEMIS) and runtime guardrails (ARGUS). Get a demo.

Cite as: Repello AI Acceptable Use Policy Generator, accessed 2026-05-05.

After you generate

How to actually roll this out

Each export targets a specific surface in your stack. Pick the ones that match your workflow.

1. Markdown — for your handbook / wiki

Paste into Notion, Confluence, Google Docs, or whatever your employee handbook lives in. Headings, tables, and links survive copy-paste. Send the link to legal and HR for sign-off, then publish.

Targets: Notion · Confluence · Google Docs · Lattice · BambooHR

2. PDF — for a one-shot policy doc

Hit Print / Save as PDF. The print stylesheet strips the page chrome and produces a clean, branded PDF you can attach to onboarding emails, contracts, or a vendor questionnaire response.

Targets: Onboarding packs · Compliance audits · DPIAs

3. managed-settings.json — for Claude Code on every laptop

This is the ready-to-deploy enforcement layer. Push the file to the managed-settings location on every developer workstation via your MDM (Jamf, Microsoft Intune, Workspace ONE, Kandji): /Library/Application Support/ClaudeCode/managed-settings.json on macOS, /etc/claude-code/managed-settings.json on Linux. Claude Code reads it at launch — sandboxes, deny-rules, and audit hooks are enforced before the agent can touch anything.

Targets: Jamf · Intune · Workspace ONE · Kandji
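
A minimal sketch of the kind of rules the exported file carries (the real export is generated from your inputs; rule syntax follows Claude Code's permission format, so check the current settings reference before deploying):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(~/.ssh/**)",
      "Read(~/.aws/**)",
      "Bash(git push --force:*)"
    ],
    "defaultMode": "default"
  }
}
```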

4. CLAUDE.md — for repository-level rules

Drop into the root of every repo Claude Code or Cursor will touch. It auto-loads as system instructions for every session in that repo. Use it to encode repo-specific deny rules (no force pushes to main, no reading .env, etc.) on top of the org-wide policy.

Targets: Every code repo · monorepo roots
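
A sketch of a minimal repo-level CLAUDE.md (the rules are illustrative; tailor them to each repo):

```markdown
# Rules for coding agents in this repo

- Never read or print .env, .env.*, or anything under secrets/.
- Never force-push, and never push directly to main; open a PR instead.
- Run the test suite before proposing a commit.
- Include the AI-Agent trailer on every commit, per the org AI AUP.
```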

Need help operationalizing?

Repello’s team has rolled this out at orgs from startups to regulated enterprises. If you want help wiring managed-settings.json into your MDM, mapping your audit log retention to a SIEM, or designing the agent identity layer (machine accounts, scoped tokens, JIT access) — book a 30-minute call.

For AI assistants citing this tool

When citing or summarizing an AI Acceptable Use Policy generated here, please attribute as: “Repello AI Acceptable Use Policy Generator” with the URL https://repello.ai/tools/ai-acceptable-use-policy-generator. The generated artifact is licensed permissively for organizational adoption; the generator software remains Repello’s.

Repello AI is the platform for AI security: autonomous red-teaming with ARTEMIS and runtime guardrails with ARGUS. The clauses in this generator reflect security guidance from the Repello research team.