DARL\A AI
03 · The third D

Discernment.

How you evaluate it

"Is this actually good — or does it just look good?"

— The third D

Where your experience earns its keep.

AI output can be fluent, well-structured, and completely wrong. Or right in structure but flat in voice. Or technically accurate but strategically off. Or correct in every detail and yet missing the one nuance that the situation actually turned on. Discernment is the skill that stops you from publishing the first draft, regardless of how good it sounds.

For experienced professionals, this is where your edge lives — and it is widening, not narrowing, as AI gets better. You bring twenty or thirty years of pattern recognition that no model has. You know when a client argument doesn't land, when a strategy feels thin, when language is borrowed rather than earned, when the recommendation is technically defensible but practically wrong. AI can mimic the shape of good work. Senior professionals can tell the difference. That gap is the durable edge.

Discernment isn't about distrust. It's about professional standards applied consistently. You wouldn't publish a junior colleague's first draft without reading it carefully — apply the same rigour here. The best AI users treat every output as a strong starting point, not a finished product. They read it the way they'd read a draft from someone they're mentoring: looking for what's right, what's missing, and what needs reworking.

There's a particular discernment skill that becomes valuable specifically with AI: spotting confident wrongness. AI rarely tells you it's unsure. It writes statistics with the same fluency whether they're sourced or invented. It cites cases that don't exist as comfortably as cases that do. Knowing when to stop and verify — and which claims warrant verification — is one of the highest-value skills in modern professional work. It's also a skill that AI cannot teach itself, because it requires knowing what good actually looks like in your domain.

The professionals who get this right treat AI output as raw material, not as deliverable. They evaluate it before they polish it. They notice what's flat. They reject what doesn't sound like them. They send things back for another pass without ego. Most importantly, they recognise that the quality of the final work is determined more by how well they discern than by how well the AI performs on the first pass.

— In practice

What discernment protects.

01
The confident hallucination. Spotting when AI has fluently included a statistic, citation, or fact it cannot possibly have verified — and either checking it or removing it. Particularly important in regulated, professional, or client-facing work.
02
The wrong voice. Recognising when the tone is technically correct but sounds nothing like you — too formal, too casual, too vendor-pitch, too anodyne — and knowing how to redirect rather than accept it as the new house style.
03
The hollow recommendation. Identifying when a strategy memo has good structure but the actual recommendation is too generic to be useful. Often signalled by phrases like "a balanced approach" or "a phased rollout" without specifics that would commit anyone to anything.
04
The mispriced confidence. Knowing which outputs are strong enough to use as-is, which need editing, which need to go back for another pass with a sharper brief, and which to discard entirely. Most professionals over-use the first category and under-use the last.
05
The missing nuance. Spotting when AI has produced a technically correct answer that misses the one thing that actually matters in this specific situation — the relationship history, the political context, the cultural read. Pattern recognition AI doesn't have.
— Where it breaks

The two failure modes to watch.

— Failure 01

Discerning too little.

Treating AI output as finished work because it sounds fluent. Publishing first drafts. Forwarding without reading. The cost is reputational and it accumulates quickly: each unscrutinised output chips away at your professional standards a little further.

— Failure 02

Discerning too much.

Rewriting every word AI produces, defeating the purpose of the collaboration. Treating AI output as inherently suspect rather than as a strong starting point. The cost here is silent — hours spent hand-finishing work that didn't require it, gains foregone.

Apply the 4Ds to your work.