The Designer’s Dilemma
Here’s the uncomfortable truth. AI tools are genuinely transformative for design work, but our casual approach to sharing information creates risks we’re only beginning to understand.
Every time we interact with AI, we face the same challenge:
- Generic prompts = generic answers = wasted time
- Detailed, context-rich prompts = more valuable outputs = the risk of accidentally sharing confidential information
Get it wrong in either direction and we either waste time on generic advice or expose corporate IP. For the company, that could mean losing a competitive advantage; for us, it could mean losing our reputation or, worse, our creativity.
We’re treating these tools like trusted colleagues rather than what they actually are: third-party services with their own data storage, usage policies, ethical stances, and business models.
As designers, we work almost exclusively in the pre-launch phases of projects. By its very nature, that means we handle valuable corporate IP, customer insights, and designs that (hopefully) give us a competitive advantage.
But there’s another concern emerging from recent research. MIT’s Media Lab published findings (“ChatGPT’s Impact On Our Brains According to an MIT Study”) showing that over-reliance on AI for direct problem-solving diminishes creative thinking and cognitive flexibility: the very skills that make us valuable as designers. The study found that when people start tasks with AI assistance, they struggle to activate the same neural networks needed for creative and critical reasoning when later working independently.
Here’s my golden rule:
Before adding something into an AI system, ask this question:
How would your CEO react to you posting that publicly on LinkedIn with your name next to it?
- If they would be totally fine with it, go for it
- If they wouldn’t, then you shouldn’t
So what’s the alternative?
Don’t use AI?
Absolutely not. “Just Say No” has never been my thing. You probably don’t know this about me, but a lifetime ago I started my career in drug & alcohol harm minimisation programmes for young people.
What I’m proposing is a similar Harm Minimisation approach to the way we think about and use AI systems.
What You’ll See Across All Three Demos
In each walkthrough, you’ll watch me navigate the same design challenge using different AI tools, showing:
- Different strengths: How each tool excels at different aspects of strategic preparation
- Privacy-first prompting: Exact language that gets great results without confidentiality breaches
- Real-time thinking: How I refine prompts and build on responses
- Strategic output: What I actually walk into the kickoff meeting with
The result? Complete preparation for a complex design challenge without sharing a single piece of confidential information.
The Scenario: Getting Prepared for Project Kickoff
Let me set the scene for the following three live demos of different AI tools tackling the same design challenge – whilst preserving privacy and corporate IP.
Here’s the brief I’m working with (abstracted to protect confidentiality):
“I’m a service designer preparing for a kickoff meeting about redesigning employee onboarding for the first 90 days. We want to use AI to create hyper-personalised learning journeys that adapt to individual needs, learning styles, and disciplines.”
The traditional approach would be to Google the topic, read half a dozen resources (at best), write some notes, and then draft a list of questions for the kickoff.
Instead, here are three live demos showing how to use AI strategically whilst protecting sensitive information.
Demo 1: A CustomGPT Acting as My “Senior Design Mentor”
Watch Strategic Thinking Enhancement in Real-Time
What you’ll see: I’ll take you inside a live session with my Design Mentor CustomGPT, showing exactly how I approach this onboarding challenge without sharing any confidential details.
Live demo highlights:
- The exact prompts I use
- How the conversation evolves to explore behaviour change principles
- Real-time problem-solving for AI + human integration challenges
- How I extract actionable insights for the upcoming kickoff meeting
Why this approach works:
- No project specifics shared, but highly relevant strategic thinking
- Builds frameworks I can apply to any onboarding challenge
- Creates a dialogue that enhances rather than replaces my thinking
Demo 2: Claude Deep Research Live Session
Comprehensive Industry Analysis Without Project Exposure
What you’ll witness: A live deep-dive research session using Claude to map the current landscape of AI-enhanced employee onboarding, adult learning principles, and cultural integration strategies.
Live demo highlights:
- How I structure research queries for maximum comprehensiveness
- Real-time synthesis of academic research and industry case studies
- Identifying patterns across multiple sources and time periods
- Converting broad research into specific strategic insights
The strategic value:
- Comprehensive industry context without revealing project scope
- Latest research and best practices at my fingertips
- Intelligent questions ready for stakeholder conversations
Demo 3: NotebookLM Synthesis in Action
Watch Research Transform into Strategic Intelligence
What you’ll experience: Taking all the insights from the previous demos and watching NotebookLM synthesise them into actionable strategic frameworks and presentation materials.
Live demo highlights:
- Feeding public research and frameworks into NotebookLM
- Generating AI podcast discussions between different expert perspectives
- Creating stakeholder-ready insights and recommendations
- Building compelling narratives that bridge theory and practice
The transformation:
- Generic research becomes specific strategic guidance
- Multiple perspectives synthesised into coherent approaches
- Presentation-ready materials for kickoff facilitation
The Bigger Picture
AI as Strategic Enhancement, Not Replacement
Notice what’s happening across all three demos:
These tools don’t replace my expertise – they amplify my strategic thinking capacity.
They don’t need my project details – they help me build better frameworks for approaching any similar challenge.
They don’t solve my problems for me – they give me better tools for thinking through complex design challenges.
Riley Coleman
Helping product and design professionals build AI literacy and responsible implementation practices that amplify human potential whilst maintaining trust and ethics.