AI in recruitment moved from novelty to normal in Kenyan HR in 2025. By mid-2026, more than 60% of Zaajira customers use at least one AI feature in their hiring loop. This article gives an honest assessment of what works, what does not, and what the Data Protection Act 2019 expects of you.
What AI is genuinely good at
- CV parsing and structured extraction. A largely solved problem: in 2026, AI parses Kenyan CVs (often inconsistently formatted) into structured fields with over 95% accuracy.
- Screening question generation. Given a job description, modern LLMs produce screening questions that are roughly as predictive as those written by experienced recruiters, in a fraction of the time.
- Async interview transcription and scoring. Voice-to-text + structured scoring of behavioural answers produces consistent, auditable signals — particularly valuable when running large volume hiring.
- Candidate communication. First-line FAQ ("What is the salary?", "What stage am I at?") handled by AI cuts recruiter inbox load by ~40%.
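To make "structured extraction" concrete, here is a minimal sketch of what a parsed-CV record can look like. The schema and field names are illustrative, not Zaajira's actual data model, and the toy rule-based extractor stands in for the LLM-backed parser a production system would use.

```python
# Illustrative only: a target schema for structured CV extraction.
from dataclasses import dataclass, field
import re

@dataclass
class ParsedCV:
    name: str = ""
    email: str = ""
    phone: str = ""
    skills: list = field(default_factory=list)

def parse_cv(text: str) -> ParsedCV:
    """Toy rule-based extractor. A real parser (LLM or trained model)
    would handle the inconsistent formatting common in Kenyan CVs."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?254\d{9}", text)  # Kenyan numbers often start +254
    skills_line = re.search(r"Skills:\s*(.+)", text)
    return ParsedCV(
        name=text.strip().splitlines()[0],
        email=email.group(0) if email else "",
        phone=phone.group(0) if phone else "",
        skills=[s.strip() for s in skills_line.group(1).split(",")]
        if skills_line else [],
    )

cv = parse_cv("Jane Wanjiku\nEmail: jane@example.com\n"
              "Phone: +254712345678\nSkills: Python, SQL")
```

The value of the structured record is downstream: once every application is a `ParsedCV`, must-have scoring and search become trivial queries rather than manual reading.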
What AI is not good at
- Final hiring decisions. Don't hand these to AI: it is both ethically problematic and statistically weaker than a calibrated human panel.
- Detecting nuanced cultural fit. Best left to humans who know your team.
- Leadership assessment. Track record, references, and structured interviews still beat AI here by a wide margin.
Data Protection Act 2019: what applies
If you use AI to make or substantially influence a hiring decision, you have specific obligations:
- Inform candidates that AI is part of the process (s.30).
- Provide a human-review option for any candidate who requests one (s.35 — right not to be subject to a decision based solely on automated processing).
- Document your model logic. You don't have to publish weights, but you must be able to explain to the ODPC how a decision was reached.
- Register as a data controller with the ODPC if you process more than 10 employees' personal data.
The ODPC has issued enforcement notices to employers who failed to inform candidates that AI was being used.
Bias risk: real but manageable
AI scoring systems can amplify historical bias if trained naively on past hiring decisions. The mitigations that work in practice:
- Audit the model output quarterly across gender, county, and disability dimensions. Look for material score gaps.
- Do not feed the model demographic data unless there is a specific lawful and disclosed purpose.
- Calibrate against ground truth — track 12-month performance of AI-scored hires vs control.
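The quarterly audit in the first mitigation can be sketched as a simple score-gap check: compute each group's mean AI score against the overall mean and flag material deviations. The record shape, group keys, and the 0.05 threshold below are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a quarterly bias audit: flag groups whose mean AI
# score deviates materially from the overall mean. Threshold is illustrative.
from statistics import mean

def score_gaps(records, group_key, threshold=0.05):
    """records: list of dicts like {"gender": "F", "score": 0.72}.
    Returns {group: gap} for groups whose mean score differs from the
    overall mean by more than `threshold` (scores on a 0-1 scale)."""
    overall = mean(r["score"] for r in records)
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r["score"])
    return {
        g: round(mean(scores) - overall, 3)
        for g, scores in groups.items()
        if abs(mean(scores) - overall) > threshold
    }

sample = [
    {"gender": "F", "score": 0.70}, {"gender": "F", "score": 0.74},
    {"gender": "M", "score": 0.60}, {"gender": "M", "score": 0.62},
]
flags = score_gaps(sample, "gender")  # both groups sit ~0.055 from the mean
```

Run the same check per county and per disability status, and investigate any flagged gap before retraining or adjusting the rubric: a gap is a signal to audit the inputs, not an automatic verdict on the model.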
A modern Kenyan hiring loop, AI-augmented
- Application, then AI parsing and an AI must-have score.
- Async voice/video interview, AI-transcribed and AI-scored against a rubric.
- Recruiter (human) review of the top 25%.
- Structured manager interview, human-scored.
- Reference checks and final decision.
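The first three stages of the loop above can be sketched as a simple funnel: an AI must-have screen, then a top-25% cut on the async interview score before humans take over. Field names and the toy rules are hypothetical, not a real Zaajira API.

```python
# Hedged sketch of the AI-augmented funnel: must-have screen,
# then recruiter review of the top quartile by AI score.

def must_have_pass(candidate):
    # AI must-have screen: every required criterion met (toy rule).
    return all(candidate["must_haves"].values())

def top_quartile(candidates, key="ai_score"):
    # Recruiters review only the top 25% of AI-scored async interviews.
    ranked = sorted(candidates, key=lambda c: c[key], reverse=True)
    cutoff = max(1, len(ranked) // 4)  # always surface at least one
    return ranked[:cutoff]

applicants = [
    {"name": "A", "must_haves": {"degree": True},  "ai_score": 0.9},
    {"name": "B", "must_haves": {"degree": True},  "ai_score": 0.7},
    {"name": "C", "must_haves": {"degree": False}, "ai_score": 0.8},
    {"name": "D", "must_haves": {"degree": True},  "ai_score": 0.6},
]
screened = [c for c in applicants if must_have_pass(c)]
shortlist = top_quartile(screened)  # goes to human recruiter review
```

Everything after `shortlist` is human-owned: the structured manager interview, references, and the final call, in line with the "keep humans on the final call" principle above.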
This loop typically cuts time-to-hire by 40–60% versus an unaided process, while improving 90-day retention.
Key takeaways
- Use AI for parsing, screening, and async assessment scoring. Keep humans on the final call.
- Inform candidates and offer human review on request — it's the law and it's good practice.
- Audit for bias quarterly.
Zaajira ships an AI hiring stack designed for the East African market — multi-language CV parsing, async interviews in English and Swahili, and Data Protection Act-aligned defaults. Get started in under 5 minutes.
