TL;DR: An AI acceptable use policy defines which AI tools employees can use, what data they can put into them, what outputs require human review, and what the consequences are for violations. Without one, enterprises face shadow AI proliferation, data leakage through consumer LLM products, and compliance exposure under the EU AI Act and NIST AI RMF. This guide covers the 10 clauses every enterprise AI AUP needs, plus a template you can adapt and deploy.
What an AI acceptable use policy is
An AI acceptable use policy is a formal document that governs how employees, contractors, and other authorized users interact with AI systems within an organization. It defines the boundary between permitted and prohibited AI use, specifies what data can and cannot be entered into AI tools, establishes accountability for AI-generated outputs, and sets the consequences for violations.
An AI acceptable use policy is distinct from a general IT acceptable use policy in scope and risk model. Traditional IT AUPs address network access, device usage, and software installation. An AI AUP addresses the specific risks introduced by generative AI: that employees can paste sensitive data into a consumer LLM and have it processed on an external server, that AI-generated outputs can be presented as factual without verification, that shadow AI tools can process regulated data without security review, and that the organization may have legal exposure for AI-generated content used in customer-facing or regulated contexts.
The policy does not need to be restrictive to be effective. Most organizations benefit from an AUP that enables controlled AI use rather than attempting to prohibit it entirely. Blanket prohibitions drive shadow AI usage underground, where there is no visibility or control. A clear policy with an approved tool list and defined data handling rules gives employees a path to legitimate AI use while giving security and compliance teams the controls they need.
Why enterprises need an AI acceptable use policy now
Three converging pressures have made AI AUPs a compliance requirement rather than a best practice.
The EU AI Act. The EU AI Act entered into force in August 2024, with phased obligations running through 2027. Article 9 requires providers of high-risk AI systems to implement risk management systems. Article 26 requires deployers of high-risk AI systems to implement appropriate human oversight measures. A documented AI acceptable use policy is direct evidence of both: it demonstrates that the organization has assessed AI risk, defined acceptable use boundaries, and established human review requirements for AI outputs in regulated contexts.
The NIST AI Risk Management Framework. The NIST AI Risk Management Framework addresses AI governance under four functions: Govern, Map, Measure, and Manage. The Govern function explicitly covers organizational AI policies and accountability structures. An AI AUP satisfies the Govern function's requirements for documented AI use policies, assigned responsibilities, and defined risk tolerance thresholds.
Shadow AI. Employees who lack clear guidance on approved AI tools choose their own. Consumer versions of LLM products process user inputs on external servers; in many cases, inputs are used for model training unless the user has configured an opt-out. Enterprise DLP telemetry consistently shows employees pasting sensitive data into AI tools at significant rates, including source code, customer data, and internal documents. An AI AUP paired with technical controls, including runtime monitoring of AI interactions, gives security teams both the policy basis and the enforcement mechanism to address this risk.
"The absence of an AI acceptable use policy is not a neutral position," says the Repello AI Research Team. "It means the organization's AI risk posture is determined by individual employee judgment, not organizational governance."
10 key clauses to include
1. Scope and definitions
Define what the policy covers. Specify which AI systems are in scope: generative AI tools (text, image, code), AI-powered features embedded in existing SaaS products, internally developed AI applications, and AI agents with access to enterprise systems. Explicitly define "AI system" in terms the policy will use consistently. Ambiguous scope produces ambiguous compliance.
Include a definition of "sensitive data" specific to your organization, cross-referenced with your existing data classification policy if one exists. The AI AUP needs to specify which data classifications are prohibited from AI tool input, which require additional controls, and which are freely permitted.
2. Approved AI tool list
Maintain an explicit list of AI tools that have passed security review and are approved for use. Specify any conditions: is a tool approved for all use cases, or only for specific ones (for example, approved for internal drafting but not for processing customer data)?
Include a process for requesting approval of new tools. Without a formal request path, employees either use unapproved tools because they cannot get approval or give up on AI use entirely. A clear approval turnaround, whether 72 hours or five business days, removes the friction that drives shadow AI adoption.
3. Prohibited uses
State explicitly what employees must not do, without ambiguity. A strong prohibited use clause covers: inputting personal data of customers or employees into unapproved AI tools; inputting confidential business information, trade secrets, or attorney-client privileged communications into any external AI system; using AI to generate content that impersonates a real person; using AI to generate malware, exploits, or other offensive security tools; using AI-generated content in regulated filings, legal documents, or financial statements without human review and sign-off; and bypassing AI safety controls or jailbreaking enterprise AI systems.
The prohibited use list should be specific. Vague prohibitions ("do not misuse AI") are not enforceable and do not inform employee behavior.
4. Data handling requirements
Define data handling rules for each data classification tier interacting with AI systems. For most organizations, this means: public data can be entered into approved AI tools without restriction; internal data requires use of an approved tool only; confidential data requires an approved enterprise AI deployment with data isolation (not a consumer product); regulated data (HIPAA, GDPR, financial data) requires specific approved tools with documented data processing agreements.
Cross-reference the data handling clause with your data classification policy and your vendor contracts for approved AI tools. The AI AUP should not introduce data handling requirements that conflict with existing agreements.
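As a minimal sketch, the tier-to-tool matrix this clause describes can be encoded as a policy lookup that approval tooling or reviewers check against. The tier names and tool identifiers below are illustrative placeholders, not a prescribed scheme:

```python
# Illustrative sketch: the data handling matrix as a policy lookup.
# Tier names and tool IDs are placeholders; substitute your organization's
# data classification scheme and the approved tool list from Section 2.
from enum import Enum

class Tier(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    REGULATED = 4

ALLOWED_TOOLS = {
    Tier.PUBLIC:       {"copilot-enterprise", "internal-llm", "vendor-chat"},
    Tier.INTERNAL:     {"copilot-enterprise", "internal-llm"},
    Tier.CONFIDENTIAL: {"internal-llm"},  # data-isolated enterprise deployment only
    Tier.REGULATED:    {"internal-llm"},  # signed DPA required
}

def is_permitted(tool_id: str, data_tier: Tier) -> bool:
    """Return True if the given tool may process data at this tier."""
    return tool_id in ALLOWED_TOOLS.get(data_tier, set())

assert is_permitted("copilot-enterprise", Tier.INTERNAL)
assert not is_permitted("vendor-chat", Tier.CONFIDENTIAL)
```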
5. Output verification and human review
Specify which AI outputs require human review before use, and what "review" means in each context. At minimum, define that: AI-generated content in customer-facing communications requires human review for accuracy; AI-generated legal, financial, or medical advice requires professional review before acting on it; AI-generated code requires security review before deployment to production; and any AI output used as the basis for a material business decision requires documented human sign-off.
This clause is specifically relevant to EU AI Act compliance for high-risk AI use cases, which require demonstrable human oversight of AI outputs. Document your review requirements and retain records.
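Where AI outputs flow through internal tooling, the review requirement can also be enforced in code. The sketch below assumes hypothetical destination-context labels and a simple named-reviewer sign-off; adapt both to your actual workflow:

```python
# Hypothetical sketch: block release of an AI output in a review-required
# context until a named human reviewer has signed off. Context labels are
# placeholders for whatever metadata your pipeline attaches to outputs.
REVIEW_REQUIRED = {
    "customer_facing",    # accuracy review by the sending employee
    "legal",              # qualified professional review
    "financial",
    "medical",
    "production_code",    # security review before deployment
    "material_decision",  # documented sign-off
}

def release_output(output: str, context: str, reviewed_by: str | None) -> str:
    """Release an AI output only if its context needs no review or a reviewer is named."""
    if context in REVIEW_REQUIRED and reviewed_by is None:
        raise PermissionError(
            f"AI output in '{context}' context requires documented human review"
        )
    return output
```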
6. Intellectual property and copyright
Address three distinct IP concerns. First, input ownership: employees must not input third-party copyrighted material into AI tools in ways that would constitute infringement. Second, output ownership: specify who owns AI-generated outputs produced by employees using approved tools in the course of employment. Third, disclosure: specify whether AI-generated content in external publications or submissions must be disclosed as AI-assisted.
This clause increasingly intersects with regulatory requirements. Several jurisdictions are developing AI-generated content disclosure rules. Build the policy to accommodate disclosure requirements without requiring a policy rewrite each time a new rule comes into force.
7. Security requirements and vulnerability disclosure
Require that employees report suspected AI security incidents through the standard security incident process. Define what constitutes an AI security incident: unexpected model behavior, suspected prompt injection, data leakage through an AI tool, or discovery of an AI vulnerability in an enterprise system.
Prohibit employees from attempting to exploit AI system vulnerabilities without authorization, including attempting to extract system prompts, jailbreak enterprise AI deployments, or probe AI systems for security weaknesses outside an authorized red team exercise. ARGUS runtime monitoring gives security teams visibility into AI interactions that would otherwise be invisible, providing the detection signal for AI security incidents before they escalate.
8. Accountability and roles
Assign clear ownership. Specify: who is responsible for maintaining the approved AI tool list; who has authority to approve new tools; who is the designated AI risk owner at the executive level; and what manager-level accountability exists for team AI use.
The ISO/IEC 42001:2023 AI management system standard structures AI accountability similarly: roles, responsibilities, and authorities must be documented and assigned. If your organization is working toward ISO 42001 certification, the accountability clause in your AUP maps directly to its governance requirements.
9. Monitoring and audit rights
State that the organization reserves the right to monitor AI tool usage in enterprise systems and retain records of AI interactions for audit purposes. Specify the retention period and the legal basis for monitoring (particularly important for organizations subject to GDPR or similar employment privacy law).
This clause is not optional. Without explicit monitoring rights documented in policy and communicated to employees, audit and incident response capabilities are legally constrained.
10. Consequences for violations
Specify the range of consequences for policy violations. Graduated consequences are more enforceable than an all-or-nothing approach: minor first violations are addressed through training and coaching; deliberate or repeated violations go through the disciplinary process; and violations that result in data loss, regulatory breach, or customer harm go through formal HR and, where warranted, legal process.
A policy with no teeth is not a policy. Employees and managers need to understand that AI AUP violations are treated with the same seriousness as other information security policy violations.
AI Acceptable Use Policy: template
The template below provides a starting structure for an enterprise AI AUP. Replace all bracketed fields with your organization's specifics. Have legal and HR review the final document before deployment, particularly the monitoring, IP, and consequences clauses.
[ORGANIZATION NAME] AI Acceptable Use Policy
Version [X.X] | Effective Date: [DATE] | Owner: [CISO / Head of IT / Legal]
1. Purpose and scope
This policy governs the use of artificial intelligence tools and systems by [ORGANIZATION NAME] employees, contractors, and authorized third parties. It applies to all AI systems, including generative AI tools, AI-powered SaaS features, internally developed AI applications, and AI agents with access to enterprise systems.
2. Approved AI tools
The following AI tools are approved for use under the conditions specified:
| Tool | Approved use | Data classification limit | Conditions |
|---|---|---|---|
| [Tool name] | [Use case] | [Classification tier] | [Any restrictions] |
To request approval for a new AI tool, submit a request to [security@organization.com] using the AI Tool Approval Request form. Requests will be reviewed within [5] business days.
3. Prohibited uses
Employees must not:
- Input personal data of customers, employees, or third parties into any unapproved AI tool
- Input confidential business information, trade secrets, or privileged communications into any external AI system
- Use AI to generate content impersonating a real individual
- Use AI-generated outputs in regulated filings, legal documents, or financial statements without professional human review
- Attempt to bypass, jailbreak, or exploit AI system controls in enterprise deployments
4. Data handling
| Data classification | AI tool requirement | Permitted tools |
|---|---|---|
| Public | Any approved tool | See Section 2 |
| Internal | Approved enterprise tools only | See Section 2 |
| Confidential | Enterprise tools with data isolation | [Specific tools only] |
| Regulated ([HIPAA/GDPR/PCI]) | Approved tools with signed DPA | [Specific tools only] |
5. Output review requirements
The following AI outputs require human review before use:
- Customer-facing communications: reviewed for accuracy by the sending employee
- Legal, medical, or financial advice: reviewed by a qualified professional
- Production code: security-reviewed before deployment
- External publications or regulatory submissions: reviewed and signed off by [role]
6. Intellectual property
AI-generated outputs produced by employees using approved tools in the course of employment are owned by [ORGANIZATION NAME]. Employees must not input third-party copyrighted material into AI tools in ways that would constitute infringement. Disclosure of AI assistance in external publications must comply with [publication/regulatory guidelines].
7. Security and incident reporting
Report suspected AI security incidents to [security@organization.com] immediately. AI security incidents include unexpected model behavior, suspected data leakage through an AI tool, and discovery of AI system vulnerabilities. Unauthorized probing or exploitation of AI system controls is prohibited.
8. Accountability
AI Tool Approvals Owner: [Role]
AI Risk Owner (executive): [Role]
Policy Owner: [Role]
Review cycle: Annual, or following material changes to the organization's AI use or applicable regulations.
9. Monitoring
[ORGANIZATION NAME] reserves the right to monitor AI tool usage on enterprise systems and retain records of AI interactions for a period of [12 months / as required by applicable law]. Monitoring is conducted for security, compliance, and audit purposes.
10. Consequences
Violations of this policy will be addressed in accordance with [ORGANIZATION NAME]'s disciplinary process. Violations resulting in data loss, regulatory breach, or harm to customers or third parties may result in termination and, where applicable, legal action.
Enforcing the policy technically
A documented AI AUP establishes the policy boundary. Technical controls enforce it. The two are not substitutes for each other: policy without enforcement is aspirational, and enforcement without policy lacks the legal and organizational basis to act on violations.
Technical enforcement for an AI AUP maps to three control layers. First, an approved tool list requires an enforcement mechanism. DNS filtering or proxy-level controls can block unapproved AI service endpoints. Browser extension policies can restrict AI tool access on managed devices. Second, data handling rules require data loss prevention tooling that detects when sensitive data is being transmitted to AI service APIs. Third, AI interactions on enterprise-deployed systems require runtime monitoring that captures what data is entering the model and what actions the model is taking.
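A rough sketch of the first two layers follows, with placeholder domains and detection patterns. In practice this logic would live inside an existing forward proxy or DLP product rather than standalone code:

```python
# Illustrative egress check: block requests to known-but-unapproved AI
# endpoints (layer 1) and flag sensitive patterns in AI-bound request
# bodies (layer 2). Domains and patterns are placeholder assumptions.
import re

APPROVED_AI_DOMAINS = {"llm.internal.example.com"}  # from the Section 2 tool list
KNOWN_AI_DOMAINS = APPROVED_AI_DOMAINS | {"chat.example-vendor.com"}

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # US SSN shape
    re.compile(r"(?i)BEGIN (RSA|EC|OPENSSH) PRIVATE KEY"),  # leaked key material
]

def check_egress(host: str, body: str) -> str:
    """Return 'allow', 'block', or 'flag' for an outbound request."""
    if host in KNOWN_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
        return "block"  # layer 1: unapproved AI tool
    if any(p.search(body) for p in SENSITIVE_PATTERNS):
        return "flag"   # layer 2: sensitive data in the prompt
    return "allow"

assert check_egress("chat.example-vendor.com", "hello") == "block"
assert check_egress("llm.internal.example.com", "SSN 123-45-6789") == "flag"
```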
ARGUS provides the runtime layer: monitoring AI interactions in enterprise deployments, logging tool calls and data flows, and surfacing policy violations before they become incidents. Combined with a documented AI AUP, runtime monitoring closes the gap between what the policy says employees should do and what is actually happening in production AI deployments.
For shadow AI specifically: policy alone does not address AI tools that employees install and use on personal devices or via personal accounts. Technical visibility into shadow AI usage, through network monitoring, endpoint telemetry, or SaaS access monitoring, is required to enforce the approved tool list in practice.
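As one example of that visibility, a first-pass shadow AI signal can be pulled from DNS resolver logs. The log format ("timestamp client_ip domain") and the domain list below are assumptions; adapt them to your resolver's actual output:

```python
# Hypothetical sketch: surface queries to known AI service domains from
# DNS logs as shadow AI candidates for follow-up. Domains are placeholders.
KNOWN_AI_DOMAINS = {"chat.example-vendor.com", "api.example-llm.com"}

def shadow_ai_hits(dns_log_lines):
    """Yield (client_ip, domain) for queries to known AI services,
    assuming 'timestamp client_ip domain' log lines."""
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in KNOWN_AI_DOMAINS:
            yield parts[1], parts[2]

log = ["2025-01-01T10:00:00 10.0.0.42 chat.example-vendor.com"]
print(list(shadow_ai_hits(log)))  # [('10.0.0.42', 'chat.example-vendor.com')]
```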
Frequently asked questions
What is an AI acceptable use policy?
An AI acceptable use policy is a formal organizational document that defines how employees and contractors may interact with AI systems. It specifies which AI tools are approved, what data can be entered into them, what AI outputs require human review before use, and the consequences for violations. It differs from a general IT acceptable use policy because it addresses risks specific to generative AI: external data processing, output accuracy requirements, intellectual property, and regulatory compliance under frameworks like the EU AI Act and NIST AI RMF.
Why does my organization need an AI acceptable use policy?
Without an AI AUP, employees make their own decisions about which AI tools to use and what data to put into them. This produces shadow AI proliferation, where sensitive data enters consumer LLM products with no security review, no data processing agreements, and no organizational visibility. An AI AUP gives security and compliance teams the policy basis to enforce approved tool lists, define data handling rules, and hold employees accountable for violations. It also satisfies governance requirements under the EU AI Act and NIST AI RMF.
What are the most important clauses in an AI acceptable use policy?
The highest-impact clauses are data handling requirements (specifying which data classifications can enter which AI tools), prohibited uses (explicitly listing what employees must not do), and output verification requirements (specifying which AI outputs require human review). These three clauses address the majority of real-world AI policy violations: data leakage through consumer AI tools, misuse of AI for prohibited purposes, and reliance on unverified AI outputs in regulated contexts.
How do I enforce an AI acceptable use policy technically?
Technical enforcement requires three control layers: network-level controls that restrict access to unapproved AI endpoints; data loss prevention tooling that detects sensitive data transmission to AI service APIs; and runtime monitoring of enterprise AI deployments that logs model interactions and flags policy violations. Policy documentation without technical enforcement is aspirational, and technical controls without policy documentation lack the organizational and legal basis to act on violations. Both are required.
Does an AI acceptable use policy satisfy EU AI Act requirements?
A documented AI AUP contributes to EU AI Act compliance but does not satisfy it alone. For high-risk AI system deployments, the Act requires risk management systems, technical documentation, human oversight measures, and operational logging: all of which require both policy and technical implementation. An AI AUP that includes output review requirements, data handling rules, and accountability assignments is direct documentary evidence of governance measures that the Act requires operators to implement. It should be one component of a broader AI governance program, not the complete compliance program.
How often should an AI acceptable use policy be reviewed?
At minimum, annually. Additionally, review the policy when: the organization adopts AI tools with significantly different capabilities or risk profiles; applicable regulations change (new EU AI Act implementing acts, jurisdiction-specific AI rules); a material AI security incident occurs; or the organization's AI deployment scope changes significantly. AI policy documents that are not reviewed become outdated quickly; the technology and regulatory landscape have both changed substantially within 12-month windows for the past several years.