DARL\A AI
04 · The fourth D

Diligence.

What you own

"Am I responsible for this — and do I stand behind it?"

— The fourth D

The principle that holds it all up.

Diligence is the final and most important of the four principles. Whatever AI produces in your name is yours. The accuracy, the ethics, the consequences — they don't transfer to the model. They stay with you. There is no professional context in which "the AI said so" is an acceptable answer when something goes wrong, and there never will be. Diligence is the principle that recognises this, plainly.

Diligence covers three things, in order. Accuracy: have you checked the facts, the figures, the citations, the names, the dates, the quotes? Ethics: is this use of AI fair to the people it affects — clients, colleagues, employees, customers, third parties? Is it transparent where it needs to be? Is the data being handled appropriately? Accountability: are you prepared to stand behind this work if questioned, in a meeting, in front of a regulator, in front of a court, in front of your own team? These are not abstract principles. They are practical professional standards that have always governed senior work and continue to govern it now.

The professionals who build the most durable reputations with AI are the ones who treat it as a force multiplier for their standards, not a way to reduce the standards they're held to. AI lets you do more, faster, and at higher quality — it does not lower the bar for being careful. The temptation runs the other way: when output appears effortlessly, the temptation is to apply less effort to checking it. Diligence is the discipline that resists that temptation as a permanent professional habit.

There is also a quieter aspect of Diligence whose importance keeps growing: data care. What you put into AI matters. Confidential client information, employee data, financial figures, board material, M&A targets — all of it needs to be handled with awareness of how the AI system processes, retains, or trains on the input. The rule of thumb: don't put anything into AI that you couldn't put into an email to a contractor without an NDA. For most professionals, that rule alone is sufficient. For regulated industries, more formal controls are required.

The relationship between Diligence and the other three Ds is what makes the framework hold together. Delegation gets the right work to AI. Description briefs it well. Discernment evaluates what comes back. Diligence is what makes you accountable for the result. Without it, the other three principles produce work fast — but you can't sign your name to it with confidence. With it, AI becomes what it should be: a serious instrument in the hands of a serious professional.

— In practice

What diligence means in practice.

01
Fact-checking, line by line. Every data point, quote, citation, name, date, figure, statistic, or specific reference AI has included gets verified before the work goes anywhere. Be especially careful when AI sounds most confident: that is when the content is most likely to be invented.
02
Confidentiality discipline. Not feeding confidential client, employee, financial, or strategic data into AI systems without understanding how those systems handle it. Knowing the platform's data policy. Defaulting to caution when uncertain.
03
Material transparency. Being transparent about AI involvement when it is material — in published writing, in client proposals, in credited creative work, in academic submissions. Not always required. Always considered.
04
The owned draft. Treating every AI output as a draft you are professionally responsible for, not a finished product you can simply forward. The work goes out under your name. It carries your professional standards, not AI's.
05
The questionable use case. Pausing on uses where AI is technically capable but ethically uncertain — surveillance, manipulation, deception, work that displaces colleagues without their knowledge. Diligence is the principle that asks not just "can I?" but "should I?"
— Where it breaks

The two failure modes to watch.

— Failure 01

Diligence only at the start.

Being careful about data, accuracy, and accountability when you first start using AI — then gradually loosening the discipline as you get more comfortable. The pattern is the same as with any new system: familiarity breeds shortcuts. The cost is hidden until it isn't.

— Failure 02

Diligence delegated upward.

Assuming someone else — your firm's policy, your team's review process, the regulator's audit — will catch what you didn't. Diligence is non-delegable. It sits with the professional whose name is on the work. Always.

Apply the 4Ds to your work.