Most recruiting agencies using AI tools today are non-compliant with the EU AI Act — and most don't know it. The regulation doesn't require you to stop using AI. It requires you to use it in a way you can document, explain, and defend. This recruitment AI compliance checklist gives you the exact steps to get there before the August 2026 deadline for high-risk AI systems.
Work through each section in order. By the end, you'll have a compliance foundation you can build on — without hiring a lawyer or consultancy to do it for you.
Step 1 — Audit Every AI Tool in Your Recruitment Workflow
Before you can comply, you need a clear picture of what you're running. Start by listing every tool that touches a candidate in any way — from application to offer.
For each tool, capture:
- What it does (score, rank, filter, generate, communicate)
- At which stage it operates (application, screening, shortlisting, outreach, follow-up)
- Whether a human reviews its output before it affects a candidate
- Whether candidates are informed it's being used
Common tools that fall into scope: ATS plugins with scoring features, ChatGPT or Claude for candidate communication, CV parsing and ranking tools, job ad generators, automated outreach sequences, interview scheduling bots.
If a tool touches a hiring decision for EU-based roles, it's in scope under the Act's Annex III high-risk classification.
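If you want to keep the audit as a structured record rather than a spreadsheet, the fields above map to a simple data structure. A minimal sketch in Python — the field names and example tools are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One entry in the recruitment AI tool inventory (Step 1)."""
    name: str
    function: str              # score, rank, filter, generate, communicate
    stage: str                 # application, screening, shortlisting, outreach, follow-up
    human_review: bool         # is output reviewed before it affects a candidate?
    candidates_informed: bool  # are candidates told the tool is used?

# Example inventory entries (hypothetical tools)
inventory = [
    AITool("CV Ranker", "rank", "screening",
           human_review=True, candidates_informed=False),
    AITool("Outreach Drafter", "generate", "outreach",
           human_review=True, candidates_informed=True),
]

# Flag tools missing a candidate transparency notice
needs_notice = [t.name for t in inventory if not t.candidates_informed]
```

Even if you never automate anything with it, writing the inventory down in one consistent shape makes the gaps (no human review, no candidate notice) immediately visible.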
Step 2 — Classify Each Tool by Risk Level
Not all AI in your stack carries the same obligations. The EU AI Act distinguishes three risk tiers:
- High-risk: Tools that score, rank, filter, or make decisions about candidates. Full compliance obligations apply.
- Limited risk: Tools that generate content (job ads, emails) without directly deciding on candidates. Transparency obligations apply.
- Minimal risk: Tools used purely for internal productivity (summarisation, drafting). No specific AI Act obligations.
A CV screener that ranks candidates and moves lower-scoring profiles to an archive folder? High-risk. A tool that drafts a personalised outreach email for a recruiter to review and send? Limited risk. A grammar checker? Minimal risk.
This classification determines what you need to do next for each tool.
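The classification rule above can be written down as a simple decision function. This is a sketch of the article's three tiers, not legal advice — the tier boundaries are the author's reading of the Act, and the function names are illustrative:

```python
def classify_risk(function: str, affects_candidates: bool) -> str:
    """Map a tool's function to a risk tier, following the rules above."""
    decision_functions = {"score", "rank", "filter", "decide"}
    if function in decision_functions:
        return "high-risk"       # full compliance obligations apply
    if function == "generate" and affects_candidates:
        return "limited-risk"    # transparency obligations apply
    return "minimal-risk"        # no specific AI Act obligations

# The worked examples from the text:
cv_screener = classify_risk("rank", affects_candidates=True)
outreach_tool = classify_risk("generate", affects_candidates=True)
grammar_checker = classify_risk("generate", affects_candidates=False)
```

Running each tool from your Step 1 inventory through a rule like this gives you a defensible, repeatable classification rather than a gut call per tool.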
Compliance Checklist for High-Risk AI Tools
For every tool you've classified as high-risk, work through all of the following:
- Request technical documentation from your vendor. Under the EU AI Act, providers of high-risk AI systems are legally required to supply documentation of how their system works, what data it was trained on, and its known limitations. Ask for it in writing. If they can't provide it, document that you asked.
- Verify the tool's accuracy across protected groups. Review whether the tool has been validated to perform consistently across gender, age, ethnicity, and other protected characteristics. Ask the vendor directly. A lack of answer is itself a data point.
- Add a human review checkpoint before any rejection. No AI tool should be the sole reason a candidate is eliminated. Ensure a consultant reviews the AI's output — even a ranked shortlist — before profiles are archived or candidates are removed from consideration.
- Add a candidate transparency notice to your application flow. Candidates must be told that AI is used in processing their application. A single clear sentence in your application confirmation email or privacy policy is the minimum. Example: "We use AI-assisted tools to support our initial CV review process."
- Update your privacy policy. Explicitly state which AI systems are used, what personal data they process, and how long that data is retained. Link this to your existing GDPR disclosures.
- Create a risk management log. Document the risks associated with each high-risk tool — including potential for bias, over-reliance, or data quality issues — and what mitigations are in place. This doesn't need to be complex. A shared document updated quarterly is sufficient.
- Define a process for human override. Specify who has the authority to override the AI's output and under what circumstances. This should be written into your internal workflow documentation.
- Log incidents and near-misses. If the AI produces clearly incorrect or potentially discriminatory output, record it. This log demonstrates active oversight and is required for high-risk systems.
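The risk management log and the incident log from the checklist can share one lightweight format — a shared CSV is enough. A sketch, with column names and the example entry chosen for illustration:

```python
import csv
import io
from datetime import date

# Columns for a combined risk/incident log; adapt to your own workflow
FIELDS = ["date", "tool", "type", "description", "mitigation", "reviewed_by"]

def log_entry(writer, tool, entry_type, description, mitigation, reviewed_by):
    """Append one dated row: entry_type is 'risk', 'incident', or 'near-miss'."""
    writer.writerow({
        "date": date.today().isoformat(),
        "tool": tool,
        "type": entry_type,
        "description": description,
        "mitigation": mitigation,
        "reviewed_by": reviewed_by,
    })

# In practice this would be a file; a string buffer keeps the sketch self-contained
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_entry(writer, "CV Ranker", "near-miss",
          "Ranked a strong candidate unusually low; cause unclear",
          "Manual re-review of that day's batch", "J. Smith")
```

A quarterly review of this file — who logged what, and what was done about it — is exactly the kind of evidence of active oversight the high-risk obligations ask for.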
Compliance Checklist for Limited-Risk AI Tools
For tools generating content about candidates — outreach emails, job ads, reports — a lighter set of requirements applies:
- Disclose AI-generated content where relevant. If you're sending an AI-drafted message to a candidate, ensure a recruiter has reviewed and approved it before sending. Candidates need not be told every email was drafted by AI, but any significant document (assessment report, feedback summary) should be human-reviewed.
- Do not use AI-generated content to misrepresent roles or candidates. AI job ad generators can hallucinate role details or inflate candidate profiles. Every piece of AI-generated content that goes external should pass a human accuracy check.
- Retain editing control. Automated sequences (e.g., AI-triggered follow-up emails) that go out without per-message human review should be checked at least at the template level, with a clear opt-out or escalation mechanism.
What Good Documentation Looks Like in Practice
At AI Experts, every automation we build for recruiting agencies includes a simple compliance package from day one:
- A one-page tool description: what it does, what data it processes, what decisions it influences
- A risk register entry: the top two or three risks and how they're mitigated
- A human checkpoint specification: at which step a consultant reviews the output and what they're looking for
- A transparency notice template ready to drop into the client's application flow
None of this takes more than a few hours to produce. But it's the difference between being audit-ready and scrambling when a candidate or regulator asks questions.
Your Compliance Timeline
The EU AI Act's high-risk obligations come into force in August 2026. Here's how to pace the work:
- Now: Complete Steps 1 and 2 — audit your tools and classify them. This takes one focused afternoon.
- This month: Contact vendors of high-risk tools and request documentation. Add candidate transparency notices. Update your privacy policy.
- Next 90 days: Build human review checkpoints into your workflow where they don't exist. Create your risk management log. Define your override process.
- Ongoing: Review the log quarterly. Update documentation when tools change. Train your team on the checkpoints.
Agencies that start now will have a compliance process that's embedded in how they work — not a last-minute scramble. The ones that wait until Q2 2026 will be retrofitting compliance onto workflows that weren't designed for it, which is harder and more disruptive.
The Bigger Picture
Compliance isn't just about avoiding fines. Larger corporate clients placing roles through boutique agencies are beginning to ask about AI governance as part of their vendor due diligence. Being able to hand over a clear, honest account of how your AI works — and what safeguards you have in place — is increasingly a commercial differentiator, not just a legal requirement.
The agencies that treat this seriously now will be the ones their clients trust with sensitive, senior, and high-volume mandates in 2027 and beyond.
Want automations built compliant from day one?
Every workflow I build includes documentation, human checkpoints, and transparency notices as standard. Book a free call to see what that looks like for your agency.
Book Your Free Call