Safety and verification

Before you trust an AI answer, check these six things.

AI can sound confident without being correct. This review routine helps you slow down before an AI draft becomes workplace output.

Why this matters

Trust should be earned by checking, not granted by tone.

The risk is not only that AI may be wrong. The risk is that it may be wrong in a polished, plausible way. A simple review routine protects your judgement.

Source signal

Inspired by NIST AI Risk Management Framework resources.

The six checks.

1. Facts: Which claims need evidence from a trusted source?

2. Dates: Could this be out of date, especially for policy, pricing, law, or product features?

3. Names: Are people, companies, job titles, documents, and products named correctly?

4. Numbers: Do calculations, totals, percentages, currency, and comparisons make sense?

5. Policy: Does this match your workplace rules, approved tools, privacy standards, and escalation path?

6. Context: What did AI not know about your workplace, audience, relationship, or risk?

Copy-paste prompt: review this AI output.

# ROLE
You are my AI output reviewer.

# TASK
Review this AI-generated draft before I use it at work.

# DRAFT
[paste the AI output]

# WORK CONTEXT
[audience, purpose, sensitivity, decision involved, deadline]

# OUTPUT FORMAT
1. Claims that need verification
2. Dates, names, numbers, or links to check
3. Missing context or assumptions
4. Tone or audience risks
5. Privacy or policy risks
6. What a human should decide
7. A safer revised version if appropriate

# RULES
- Do not invent evidence.
- Mark uncertainty clearly.
- If this touches legal, HR, finance, medical, safety, or confidential matters, tell me to escalate to a qualified person.
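If you review AI drafts often, you can assemble the prompt programmatically instead of pasting by hand. A minimal Python sketch, assuming hypothetical helper names (`build_review_prompt`, `needs_escalation`) that are not part of any library:

```python
# Fill the review-prompt template with a draft and its work context.
# build_review_prompt and needs_escalation are illustrative names only.

REVIEW_TEMPLATE = """# ROLE
You are my AI output reviewer.

# TASK
Review this AI-generated draft before I use it at work.

# DRAFT
{draft}

# WORK CONTEXT
{context}

# RULES
- Do not invent evidence.
- Mark uncertainty clearly.
- If this touches legal, HR, finance, medical, safety, or confidential
  matters, tell me to escalate to a qualified person.
"""

# Topics that should always go to a qualified person, per the rules above.
ESCALATION_TERMS = ("legal", "hr", "finance", "medical", "safety", "confidential")


def build_review_prompt(draft: str, context: str) -> str:
    """Return the full review prompt with the draft and context filled in."""
    return REVIEW_TEMPLATE.format(draft=draft.strip(), context=context.strip())


def needs_escalation(context: str) -> bool:
    """Flag work contexts that need a human regardless of what AI review says."""
    lowered = context.lower()
    return any(term in lowered for term in ESCALATION_TERMS)
```

A quick use: `build_review_prompt("Revenue grew 40%.", "Board update, high sensitivity")` yields the complete prompt ready to paste, and `needs_escalation("legal contract summary")` returns `True` so the draft is routed to a person first.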

When to ask a human.

Always slow down

  • People decisions.
  • Customer commitments.
  • Legal or policy claims.

Verify separately

  • Numbers and dates.
  • Links and citations.
  • Market or product claims.

Own the final

  • Tone.
  • Judgement.
  • Workplace context.
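The three buckets above can be read as a simple triage rule. A toy sketch, where the topic strings and the `triage` helper are illustrative choices, not a standard classification:

```python
# Toy triage of the checklist above: escalate, verify, or own.
# Topic labels and triage() are hypothetical, for illustration only.

SLOW_DOWN = {"people decision", "customer commitment", "legal claim", "policy claim"}
VERIFY = {"number", "date", "link", "citation", "market claim", "product claim"}


def triage(topic: str) -> str:
    """Map a topic to the checklist action it calls for."""
    if topic in SLOW_DOWN:
        return "ask a human first"
    if topic in VERIFY:
        return "verify against an independent source"
    # Everything else (tone, judgement, workplace context) stays with you.
    return "own the final call"
```

The point of the sketch is the ordering: escalation beats verification, and anything uncategorised defaults to human ownership rather than to trust.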