Before any training programme begins, this tells you exactly where your organisation stands — and what it will take to get AI working properly across your teams.
The most common failure mode in AI training is generic delivery. An organisation books a workshop, the team attends, everyone feels informed — and three months later nothing has changed.
The reason is almost always the same: the training was not built around the actual work, the actual roles, and the actual barriers specific to that organisation. It was built around a template.
The AI Readiness Audit exists to fix that. Four weeks before any programme begins, we map the terrain. What can AI actually do for each role in scope? Where are the real resistance points? What does the twelve-month path look like — and which programme tier fits?
By the end of week four, you have everything you need to make a confident, informed decision about AI adoption — including the option not to proceed, if that is the honest answer.
Most organisations that complete the Audit proceed to a programme. Some adjust their timing. None of them regret the four weeks — because they go into whatever comes next with their eyes open.
David conducts structured interviews with up to ten role holders — or a representative sample from each function in scope. A standardised question set covers current AI usage, workflow composition by task type, perceived barriers, and aspiration. Interviews take 45 minutes per role holder. Claude is used throughout to synthesise notes, identify cross-role patterns, and surface the first hypotheses.
Each role is assessed against a three-axis framework: what AI can delegate (repetitive, formulaic, low-judgement tasks), augment (research, drafting, synthesis, analysis), and accelerate (review cycles, approval chains, output-to-decision time). The output is a role-by-role impact matrix — specific to this organisation's roles, not a generic benchmark. The 4Ds AI Fluency Framework (Anthropic / Feller & Dakan) is applied as the analytical backbone.
A five-factor adoption risk assessment, scored and visualised. The five factors: leadership readiness (does the senior team have the mandate and the appetite?), infrastructure (platform access, device policy, data governance), cultural resistance (early adopters vs sceptics, change fatigue), data sensitivity constraints (what can and cannot go into an AI system?), and prior AI experience (what has the organisation already tried, and what happened?). Each factor is scored red/amber/green with supporting evidence and a recommended mitigation.
The twelve-month roadmap is produced in Claude — a phased plan with named milestones, sequenced by role-impact priority and adoption-risk mitigation. Three engagement options are presented, aligned to Spark, Team, or Transform, each with starting prices, timelines, and David's recommendation based on the audit findings. The readout session is a 90-minute presentation to the sponsor and any relevant stakeholders. The SGD 2,500 audit fee is credited in full against any programme booked within sixty days of the readout.
The Audit is David-led and Claude-powered. No proprietary software. No black-box scoring model. The methodology is the Darla Method applied diagnostically — the same framework that underpins every programme.
The Audit is available now. Lead time is typically two to three weeks from booking to kick-off. Get in touch to confirm availability and start the scoping conversation.