Boundary check

Before using AI at work, know what is being observed.

A plain-English check for AI tools that analyse messages, performance, emotion, sentiment, meetings, or behaviour.

Why this matters

Some AI risk is not about prompts. It is about workplace monitoring.

If an AI tool analyses communication, productivity, sentiment, or behaviour, workers need clear boundaries before trusting it with their working lives.

Source signal

Inspired by reporting on AI worker-surveillance tools.


Five boundary questions

1. What is collected?

Prompts, files, chats, meetings, clicks, tone, sentiment, productivity, or behaviour?

2. Who can see it?

Only you, your manager, admins, vendors, HR, compliance, or a wider analytics dashboard?

3. What is inferred?

Does the tool make claims about mood, performance, risk, engagement, or intent?

4. Can you challenge it?

Is there a human review process if the AI gets context wrong?

5. What is optional?

Can workers opt out, limit data, or use a safer workflow?
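
If you are comparing several tools, it helps to record the answers in a structured form. A minimal sketch in Python; every field name here is illustrative, not taken from any standard or vendor API:

from dataclasses import dataclass, field

@dataclass
class BoundaryCheck:
    """Answers to the five boundary questions for one workplace AI tool."""
    tool: str
    collected: list[str] = field(default_factory=list)   # Q1: what is collected
    visible_to: list[str] = field(default_factory=list)  # Q2: who can see it
    inferences: list[str] = field(default_factory=list)  # Q3: what is inferred
    human_review: bool = False                           # Q4: can you challenge it
    opt_out: bool = False                                # Q5: what is optional

# Hypothetical example: a meeting summariser visible to managers and admins.
check = BoundaryCheck(
    tool="Meeting summariser",
    collected=["meetings", "sentiment"],
    visible_to=["manager", "admins"],
    inferences=["engagement"],
    human_review=True,
    opt_out=False,
)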

Copy-paste prompt: monitoring boundary check

# ROLE
You are my workplace AI boundary analyst.

# TOOL OR USE CASE
[describe the tool or AI use case]

# WHAT IT MAY ACCESS
[chat / email / meetings / files / sentiment / productivity / not sure]

# OUTPUT FORMAT
1. What data may be collected
2. What inferences may be risky
3. Questions to ask before using it
4. What workers should be told clearly
5. A safer limited-use version

# RULES
- Do not assume the tool is harmless.
- Do not assume the tool is bad.
- Focus on clear consent, visibility, human review, and practical boundaries.

Use this when evaluating a workplace AI rollout.
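
If you would rather run the check programmatically than paste it into a chat window, a few lines suffice. A minimal sketch using the OpenAI Python client; the model name and the filled-in placeholder values are assumptions, not part of the original prompt:

from openai import OpenAI

# The boundary-check prompt from above, with the two placeholders
# filled in for a hypothetical tool under evaluation.
prompt = (
    "# ROLE\nYou are my workplace AI boundary analyst.\n"
    "# TOOL OR USE CASE\nMeeting summariser rolled out to the whole team\n"
    "# WHAT IT MAY ACCESS\nmeetings / sentiment / not sure\n"
    "# OUTPUT FORMAT\n"
    "1. What data may be collected\n"
    "2. What inferences may be risky\n"
    "3. Questions to ask before using it\n"
    "4. What workers should be told clearly\n"
    "5. A safer limited-use version\n"
    "# RULES\n"
    "- Do not assume the tool is harmless.\n"
    "- Do not assume the tool is bad.\n"
    "- Focus on clear consent, visibility, human review, and practical boundaries.\n"
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model will do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)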

Low concern

  • Voluntary drafting support.
  • No sensitive monitoring.
  • User controls the input.

Medium concern

  • Meeting or message summaries.
  • Admin logs.
  • Shared analytics.

High concern

  • Emotion or sentiment scoring.
  • Productivity surveillance.
  • Automated worker judgements.
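
One way to read the three tiers: a tool's concern level follows its most invasive feature. A minimal sketch in Python; the feature labels mirror the bullets above and are not a formal taxonomy:

# Features that push a tool into each tier, mirroring the bullets above.
HIGH = {"emotion scoring", "sentiment scoring",
        "productivity surveillance", "automated worker judgements"}
MEDIUM = {"meeting summaries", "message summaries",
          "admin logs", "shared analytics"}

def concern_level(features: set[str]) -> str:
    """Return the tier implied by the most invasive feature present."""
    if features & HIGH:
        return "high"
    if features & MEDIUM:
        return "medium"
    return "low"

print(concern_level({"admin logs", "sentiment scoring"}))  # -> high

The asymmetry is deliberate: one high-concern feature outweighs any number of low-concern ones, which matches the checklist's focus on the worst-case inference a tool can make.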