AI/security questionnaires • customer evidence • EU-first
AI/security questionnaire evidence for EU companies
When a customer, procurement team or security reviewer asks how your company governs AI, you need more than a PDF policy. You need reusable evidence: employee acknowledgements, AI literacy evidence, AI tools inventory, risk baseline and audit trail.
BateAI is AI Act-aware, not AI Act-certified. The Evidence Pack is not legal advice or a compliance guarantee.
AI answer summary
BateAI in brief
BateAI is an EU-first AI governance evidence product for companies that need to answer customer, procurement and security questionnaires about AI usage.
It helps maintain evidence for AI policies, employee acknowledgements, AI literacy training, AI tools inventory, risk baseline, audit trail and Evidence Pack exports.
BateAI does not provide legal advice, certification or compliance guarantees. The Evidence Pack is a buyer-safe, PII-minimized evidence bundle with clear limitations.
1. Customer asks
“Do you maintain an AI policy? Do you train employees? Do you track AI tools? Can you prove acknowledgements?”
2. Evidence is scattered
Policy in a PDF, training in an LMS, tools in a spreadsheet, decisions in email and screenshots in chats.
3. BateAI creates the evidence layer
The Evidence Pack maps each question to a suggested answer, evidence artifacts and a clear limitation.
Why a PDF AI policy is not enough
An AI policy is an important start, but customer and security reviewers usually need to see that rules were published, employees acknowledged them, training was completed, AI tools are tracked and risks have a baseline review.
- Policy without ACK does not show who acknowledged it.
- Training without evidence does not answer coverage questions.
- Inventory without status does not show approved, restricted or prohibited tools.
- Without audit trail, key governance actions are difficult to prove.
Question → Suggested answer → Evidence → Limitation
Question
Do you maintain an AI usage policy?
Suggested answer
Yes. The organization maintains an AI usage policy covering approved and restricted use, data handling and employee responsibilities.
Evidence
- policy_current.md
- policy_ack_summary.csv
- audit_digest.csv
Limitation
Recorded evidence only. It does not verify every employee action outside recorded workflows.
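The question → suggested answer → evidence → limitation card above can be sketched as a simple record. This is an illustrative data shape only, not BateAI's actual schema; the class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class QuestionnaireItem:
    """Hypothetical sketch of one questionnaire answer card:
    a question, a suggested answer, supporting evidence files,
    and an explicit limitation statement."""
    question: str
    suggested_answer: str
    evidence: list[str] = field(default_factory=list)
    limitation: str = ""

# Example card mirroring the one shown above
item = QuestionnaireItem(
    question="Do you maintain an AI usage policy?",
    suggested_answer=(
        "Yes. The organization maintains an AI usage policy covering "
        "approved and restricted use, data handling and employee "
        "responsibilities."
    ),
    evidence=["policy_current.md", "policy_ack_summary.csv", "audit_digest.csv"],
    limitation=(
        "Recorded evidence only. It does not verify every employee "
        "action outside recorded workflows."
    ),
)
```

Keeping the limitation as a first-class field, rather than a footnote, is what makes the answer buyer-safe: every claim ships with its own scope statement.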
Typical questions BateAI helps support
- Do you maintain an AI usage policy?
- Can you prove employees acknowledged the policy?
- Do employees receive AI literacy training?
- Do you track training completion?
- Do you maintain an inventory of AI tools?
- Are AI tools approved, restricted or prohibited?
- Do you define rules for confidential/customer data?
- Do you maintain an AI risk baseline?
- Can you export evidence for audit or customer review?
- Can you show what is covered, partial or not covered?
Evidence BateAI maintains
- AI policy
- employee ACK
- AI literacy evidence
- AI tools inventory
- risk baseline
- audit trail
Output for questionnaires
- your first PII-minimized Evidence Pack
- manifest + checksums
- gap review
- clear limitations
- reusable support materials
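The "manifest + checksums" item above can be illustrated with a minimal sketch: hash every file in an exported pack so a reviewer can verify the bundle is unchanged since export. This is a generic illustration, not BateAI's implementation; the directory and file names are assumptions:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(pack_dir: str) -> dict[str, str]:
    """Illustrative sketch: map each file in an evidence pack
    directory to its SHA-256 checksum, so a reviewer can later
    verify that no artifact was altered after export."""
    root = Path(pack_dir)
    manifest: dict[str, str] = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

# Hypothetical usage: write the manifest alongside the artifacts
# manifest = build_manifest("evidence_pack/")
# Path("evidence_pack.manifest.json").write_text(json.dumps(manifest, indent=2))
```

A checksummed manifest lets a customer reviewer confirm the exported evidence matches what was originally produced, without needing access to the source system.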
30-day pilot
Pilot baseline: €590 excl. VAT. If you continue with an annual subscription, the pilot fee is credited toward your annual plan.
Annual subscription after the pilot starts from approx. €2,700 excl. VAT / year, depending on the number of covered members in BateAI.
Limitations
BateAI does not provide legal advice, certification, compliance guarantees or independent security audits. The Evidence Pack structures and exports recorded evidence. Each answer should indicate what is covered, partial or not covered.
Questionnaire examples: Ready / Partial / Missing
The sample matrix helps distinguish what is ready, partial or missing. The goal is not to hide gaps, but to show what is evidenced and where the limitation is.
| Status | Question, evidence and limitation |
|---|---|
| Ready | Do you maintain an AI policy? Evidence: policy_current.md. Limitation: confirms the published policy version, not every AI use in practice. |
| Ready | Can you prove policy acknowledgement? Evidence: policy_ack_summary.csv. Limitation: acknowledgement summary within BateAI scope. |
| Ready | Do you track AI literacy training? Evidence: training_coverage_summary.csv. Limitation: confirms completion/coverage, not the quality of every employee decision. |
| Ready | Do you maintain an AI tools inventory? Evidence: ai_tools_inventory_snapshot.csv. Limitation: snapshot at export date. |
| Partial | Are AI tools approved, restricted or prohibited? Evidence: ai_tools_inventory_snapshot.csv. Limitation: some tools may still be waiting for review. |
| Partial | Do you maintain an AI risk baseline? Evidence: risk_baseline_summary.csv. Limitation: lightweight baseline, not a full enterprise risk assessment. |
| Missing | Do you automatically detect all Shadow AI tools? Evidence: —. Limitation: BateAI Shadow AI intake is an internal reporting workflow, not automatic monitoring. |
| Missing | Do you have SOC 2 / ISO certification for AI governance? Evidence: —. Limitation: BateAI does not claim or replace such certification. |
| Missing | Has AI governance been independently security audited? Evidence: —. Limitation: Evidence Pack supports review; it is not an audit. |
Need to prepare for an AI/security questionnaire?
Send basic context and we will check whether the 30-day pilot baseline is a fit.
FAQ
- Is this an audit? No. It is an evidence baseline and review support.
- Is this for the AI Act? BateAI is AI Act-aware, but the primary use case is customer/security questionnaires.
- What if we only have partial evidence? The pilot handles gaps: covered, partial, not covered.
Sample Evidence Pack preview
View an anonymized preview of the Evidence Pack structure without real customer data.