AI policy for companies (practical, evidence‑ready)

A policy is only useful if you can prove it was published and acknowledged — and if people know how to apply it. This page explains what to include in an AI policy and what evidence auditors and customers typically ask for.

What your AI policy should cover

  • Allowed / prohibited use (including examples)
  • Data handling: what must not go into prompts, confidentiality tiers
  • Approved tools vs “shadow AI”, and how to request approval
  • Human oversight: when AI output must be reviewed
  • Incident handling: how to report and investigate issues

How to run it like a process (not a PDF)

Versioning

Publish a versioned policy and keep an audit trail of edits and approvals.
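A minimal sketch, in Python, of what one audit-trail entry could capture. The field names are illustrative assumptions, not a prescribed format:

  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class PolicyVersion:
      version: str         # e.g. "1.2"
      published_on: date   # when this version went live
      approved_by: str     # who signed off the change
      summary: str         # what changed compared to the previous version

  # Hypothetical audit-trail entry
  entry = PolicyVersion("1.2", date(2025, 3, 1), "Head of Security",
                        "Added Copilot to the approved tools list")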

ACK

Collect acknowledgements (ACK) and be able to show who confirmed which version and when.
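A sketch of the kind of record that makes this provable, and of the check auditors typically ask for ("who still has not confirmed the current version?"). The structure is an assumption for illustration, not a required schema:

  from dataclasses import dataclass
  from datetime import datetime

  @dataclass
  class Acknowledgement:
      employee: str        # who confirmed
      policy_version: str  # which version they confirmed
      acked_at: datetime   # when they confirmed it

  def missing_acks(staff: list[str], acks: list[Acknowledgement], version: str) -> list[str]:
      """Return everyone who has not yet acknowledged the given version."""
      confirmed = {a.employee for a in acks if a.policy_version == version}
      return [person for person in staff if person not in confirmed]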

Training

Run short AI literacy training and keep evidence of attempts and completions.

Inventory

Keep an AI tools inventory (owner, purpose, data types, approval status).
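One way to keep an inventory entry machine-readable; the fields simply mirror the list above, and the example values are hypothetical:

  from dataclasses import dataclass, field

  @dataclass
  class AiToolEntry:
      tool: str                          # e.g. "ChatGPT", "Copilot"
      owner: str                         # accountable person or team
      purpose: str                       # what the tool is used for
      data_types: list[str] = field(default_factory=list)  # data it may process
      approval_status: str = "pending"   # pending / approved / prohibited

  # Hypothetical entry
  inventory = [AiToolEntry("Copilot", "Engineering", "code completion",
                           ["source code"], "approved")]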

Search intent synonyms (what people actually type)

“ChatGPT policy for employees”, “Copilot rules”, “AI policy template”, “AI governance policy”. These queries all describe the same need; we cover each of these use cases on this site and provide templates.

AI policy template (DOCX + PDF preview) →

Want to implement this in 30 days?

Apply for the pilot and we’ll propose a scope that fits your org (20–300). You’ll get an Evidence Pack export at the end.