Last Updated: 8 December 2025

STOP Framework: When Should You NOT Use AI?

Quick Answer: The STOP Framework provides four checkpoints to run before using AI: Security (will sensitive data be exposed?), Trust (can outputs be verified?), Ownership (who owns generated content?), and Purpose (does AI genuinely add value here?). Developed by Riley Coleman after their own data leak during an LSE AI ethics course, STOP prevents the costly mistakes that derail AI adoption in 73% of cases.

Key Characteristics:
  • 4 checkpoints: Security → Trust → Ownership → Purpose
  • Created from Riley's personal data leak experience at LSE
  • Takes 2 minutes to complete before any AI task
  • Prevents legal, ethical, and reputation risks

Real Example:

During their LSE AI ethics course, Riley fed sensitive student data into an AI tool without thinking. The exposure wasn't catastrophic, but the violation of trust was profound and entirely preventable. That moment of personal failure sparked the creation of STOP.

Frequently Asked Questions

What is the STOP Framework?

STOP is a 4-checkpoint assessment you run before using AI on any task. It stands for Security (will data be exposed?), Trust (can outputs be verified?), Ownership (who owns what's generated?), and Purpose (does AI genuinely add value?). It takes 2 minutes and prevents legal, ethical, and reputation risks.
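
As a purely illustrative sketch (not part of the official STOP materials), the four checkpoints can be expressed as a small Python pre-flight function. Every name below, from STOP_CHECKPOINTS to run_stop_check, is a hypothetical choice made for this example:

    # A minimal sketch of the STOP checklist as code. All names here are
    # illustrative assumptions, not an official implementation of the framework.

    STOP_CHECKPOINTS = {
        "Security": "Could sensitive data be exposed?",
        "Trust": "Are the outputs unverifiable while the stakes are high?",
        "Ownership": "Is ownership of the generated content unclear?",
        "Purpose": "Is AI adding complexity rather than genuine value?",
    }

    def run_stop_check(concerns: dict[str, bool]) -> bool:
        """Return True to proceed; stop at the first checkpoint that flags a risk.

        `concerns` maps each checkpoint to True when that risk is present;
        a missing answer is treated as a concern.
        """
        for checkpoint, question in STOP_CHECKPOINTS.items():
            if concerns.get(checkpoint, True):
                print(f"STOP at {checkpoint}: {question}")
                return False
        return True

    # Example: three checkpoints are clear, but Trust raises a concern.
    if run_stop_check({"Security": False, "Trust": True,
                       "Ownership": False, "Purpose": False}):
        print("Proceed with AI")
    else:
        print("Reconsider before using AI")

Because Python dicts preserve insertion order, the checks run in the framework's Security, Trust, Ownership, Purpose sequence, stopping at the first concern.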

When should you NOT use AI?

You should not use AI when: (1) sensitive data is involved and could be exposed, (2) you can't verify the accuracy of outputs and the stakes are high, (3) the ownership of generated content is unclear and you need commercial rights, or (4) AI is adding complexity rather than genuine value.

How do I assess AI security risks?

Ask: Does this task involve client information, personal data, proprietary details, or anything covered by privacy laws? If yes, check the AI provider's data retention policies. If the tool is third-party and stores your input, don't use it for sensitive data.
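
For illustration only, a naive pre-screen like the sketch below can catch obvious sensitive strings before a prompt reaches a third-party tool. The pattern names and regular expressions are assumptions made for this example, not a substitute for a proper PII or data-loss-prevention scanner:

    import re

    # Deliberately naive patterns; a real workflow needs a dedicated
    # PII/DLP scanner. Both entries are assumptions for this sketch.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
        "long digit run (possible phone/ID)": re.compile(r"\d[\d\s-]{7,}\d"),
    }

    def find_sensitive_data(prompt: str) -> list[str]:
        """Return the kinds of possibly sensitive data found in the prompt."""
        return [kind for kind, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    hits = find_sensitive_data("Email jane.doe@example.com, ref 0207 946 0958")
    if hits:
        print("Do not send this prompt to a third-party AI tool:", hits)

A screen like this only flags the obvious cases; the Security checkpoint still requires checking the provider's data retention policy by hand.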

What are AI ownership concerns?

Many generative AI models were trained on copyrighted work without the rights-holders' permission. If you use AI-generated content in commercial client work, you may be creating derivative works built on unlicensed IP, and the legal landscape around that question is still unresolved.

How long does the STOP assessment take?

Two minutes. STOP is designed to be fast. Four questions: Security? Trust? Ownership? Purpose? If all four are clear, proceed. If any raises a concern, stop and reconsider.