Is DeepSeek trustworthy for business use?

Quick Answer: DeepSeek scores poorly on human-centered trustworthy AI: Privacy (1/10), Security (2/10), Fairness (2/10), Accountability (1/10), Transparency (4/10), Social Impact (3/10). Despite achieving competitive AI at ~5% of Western development costs, a January 2025 database leak revealed significant security vulnerabilities. R1's "thinking process" feature is a promising transparency innovation.

Key Characteristics:
  • Cost breakthrough: competitive AI at ~5% of Western development costs
  • January 2025 database leak revealed significant security vulnerabilities
  • DeepSeek-R1's "thinking process" showing reasoning is innovative transparency
  • Triggered 17% Nvidia stock drop due to efficiency breakthrough
Real Example:

DeepSeek's launch caused Nvidia's stock to drop 17% by delivering competitive AI at roughly 5% of Western development costs. DeepSeek-R1 showed a surprising transparency innovation: unlike OpenAI's o1 or Claude 3.5 Sonnet, it actively shows its reasoning process by default. However, a January 2025 database leak exposed significant security vulnerabilities, and its privacy score of 1/10 reflects storage of all user data on Chinese servers.

Article

A Trustworthy AI Assessment of DeepSeek


Riley Coleman
February 06, 2025 · 6 min read

DeepSeek: When Technical Brilliance Meets Ethical Challenges

Let’s talk about why.

G’day!

What a start to the year it has been for AI announcements!

Many of you will have seen the news that a Chinese company just released a new market-leading AI model called DeepSeek.

Their achievement is remarkable: creating AI models that match or exceed Western capabilities at just 5% of the cost. The impact was immediate and dramatic – Nvidia’s stock dropped 17%, and even the ‘Magnificent 7’ tech companies felt the tremors.

I’ll be honest – it’s forcing us to confront some complex questions about AI development and ethics. When assessed against Human-Centered AI principles, DeepSeek presents a fascinating mix of innovation and deep ethical concerns.

Let’s pull this thread apart.


The Privacy Paradox: 1/10

DeepSeek scores poorly on privacy and data protection, storing all user data on Chinese servers – everything from chat histories to keystroke patterns. Think of it as having someone not just read your diary, but watch you write it and then share it with others without your permission. Their privacy policy grants broad rights to exploit user data and share it with authorities.

However, let’s add some context here. While concerns about Chinese server storage are valid, Snowden’s revelations about the NSA’s PRISM program remind us that Western tech isn’t immune to government surveillance either.

The reality is that whether you use US or Chinese AI products, user privacy faces challenges regardless of where the servers are located.


Transparency: A Surprising Bright Spot – 4/10

While they’ve released some model weights, crucial information about training data and processes remains hidden. However, DeepSeek-R1 actually represents a significant innovation in AI explainability.

Unlike most current AI systems – including OpenAI’s o1 and Claude 3.5 Sonnet – DeepSeek-R1 actively shows its work.

It begins by outlining its understanding of user intent, acknowledging potential biases, and explaining its reasoning pathway before delivering answers.

DeepSeek-R1 shows the thinking process in response to the author’s prompt (Source: Forbes, 2025)

This “thinking out loud” approach isn’t just a feature – it’s a paradigm shift in how AI systems communicate with users. While other models need prompting to explain their reasoning, DeepSeek-R1 does this by default.
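To make this concrete: the open-weight R1 checkpoints are widely reported to emit the reasoning trace wrapped in <think>…</think> tags ahead of the final answer (treat that tag convention as an assumption here), so a few lines of code can separate the two for display or auditing. A minimal sketch:

```python
import re

def split_reasoning(raw_output: str) -> tuple[str, str]:
    """Separate an R1-style <think>...</think> reasoning trace
    from the final answer. Returns (reasoning, answer)."""
    match = re.search(r"<think>(.*?)</think>", raw_output, flags=re.DOTALL)
    if not match:
        # No reasoning trace found: treat the whole output as the answer.
        return "", raw_output.strip()
    reasoning = match.group(1).strip()
    answer = raw_output[match.end():].strip()
    return reasoning, answer

# Example with a mocked R1-style completion (not a real API response):
raw = "<think>The user asks for 2+2. Basic arithmetic.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
```

Being able to log the reasoning separately from the answer is exactly what makes this default-on transparency useful for review, not just for show.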


Security Concerns Remain: 2/10

The January 2025 database leak highlighted significant vulnerabilities in DeepSeek’s security infrastructure. This isn’t just about data breaches – there are fundamental concerns about data transmission and vulnerability to jailbreaking techniques.

The Real Challenges: Fairness and Accountability

When it comes to fairness and non-discrimination, DeepSeek scores a troubling 2/10. Evidence shows systematic biases and censorship, with limited documentation about bias detection or mitigation strategies.

Their accountability score of 1/10 reflects a concerning lack of independent oversight mechanisms.

Social Impact: A Nuanced Picture – 3/10

While the technology is impressive (its shorter training time requires less energy), there are still serious questions about potential misuse and broader societal impacts. However, their cost-effective approach could democratize access to advanced AI capabilities, if the ethical challenges can be addressed.


Practical Implications

For individuals and organisations, this nuanced picture leads to some clear recommendations:

For Individual Users:

  • Appreciate the advanced transparency features while remaining cautious about data sharing
  • Consider alternatives with stronger privacy protections for sensitive applications
  • Be aware that privacy concerns exist across all major AI platforms

For Organisations:

  • Conduct thorough risk assessments before deployment; however, I can’t see a reason you would risk your data and commercial IP with this system.

For Developers:

  • Use open-source model components locally when possible
  • Implement additional safety measures
  • Monitor for biases and security vulnerabilities
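The “additional safety measures” point can start very small. As one hedged illustration (the patterns and function name here are hypothetical, and nowhere near a production-grade PII detector), a pre-send guard can redact obvious identifiers before a prompt ever leaves your environment for a third-party model endpoint:

```python
import re

# Hypothetical pre-send guard: redact obvious PII before a prompt is
# sent to any third-party model endpoint. The patterns are illustrative
# only; real deployments need a proper PII detection library and policy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(prompt: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

safe = redact_pii("Contact jane.doe@example.com or +61 2 9999 1234 about the contract.")
```

The same wrapper shape works for the other two bullets too: the point is that every prompt passes through a checkpoint you control before it reaches the model.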

Quick Commercial Break:

Upcoming Webinar Announcement

Thursday, Feb 13, 2025
9:30 AM GMT+7 | Virtual | Free
(recordings sent to all who register)

Join me for a FREE 45-minute workshop where I’ll show you HOW TO CREATE YOUR PERSONALISED AI ADOPTION BLUEPRINT.

It will help take the guesswork and wasted effort out of AI adoption by helping you to:

  1. Identify what tasks AI is best for in your everyday workflow, and which tasks are best left in your human hands
  2. Identify what AI capabilities exist in your available tools
  3. Receive very personalised advice on how Large Language Models like ChatGPT, Claude & Gemini can help you with each specific task

The best part – this is a repeatable, systematic approach.


Limited to 50 spots for focused attention.
REGISTER FOR FREE HERE https://shorturl.at/Ks58X

REMEMBER TO ADD IT TO YOUR CALENDAR

Looking Forward

The fascinating part about DeepSeek’s case is how it highlights the complex tension between technical achievement and ethical AI development. Their transparency innovations show that ethical assessment isn’t a zero-sum game – an AI system can excel in some areas while falling short in others.

What makes this situation particularly interesting is how it forces us to confront our own biases in AI ethics assessment. Are we holding different regions to different standards? How do we balance incredible technical achievements with legitimate ethical concerns?

The path forward isn’t about choosing between innovation and ethics – it’s about demanding both. DeepSeek’s case shows us both what’s possible in AI development and what ethical challenges we still need to solve.

I’d be particularly interested in hearing your thoughts on this balance. How do you weigh transparency benefits against privacy concerns in AI systems? And how do we ensure that the race for AI advancement doesn’t come at the cost of essential ethical principles?



Would love your feedback below


Until next time… Take it easy.

Riley


Key Principles of AI Design Leadership

Understanding AI design leadership requires a systematic approach to implementation. Our research shows that successful AI design leadership strategies incorporate three fundamental elements:

  • Human-centered approach – Ensuring technology serves human needs
  • Ethical framework – Maintaining responsible design practices
  • Continuous learning – Adapting to evolving technologies and methodologies

Implementing AI Design Leadership in Your Practice

The practical application of AI design leadership involves both strategic planning and tactical execution. Design leaders who excel in AI design leadership consistently demonstrate superior outcomes in user satisfaction and business impact.

“The future of design isn’t about choosing between human and artificial intelligence; it’s about ensuring human agency grows stronger as AI grows more powerful.” – Riley Coleman, AI Flywheel

Resources for AI Design Leaders

Continue your AI design leadership journey with these carefully curated resources:

Ready to advance your AI design leadership expertise? Our proven frameworks and community support ensure sustainable professional growth in the evolving design landscape.

This approach to AI design leadership ensures human-centered design principles remain at the forefront of technological advancement, creating meaningful impact for users and sustainable value for organisations.

RC

Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.


Want more insights like this?

Join 1,000+ design leaders getting weekly insights on trustworthy AI.

Frequently Asked Questions

How does DeepSeek score on trustworthy AI principles?

Privacy 1/10, Security 2/10, Fairness 2/10, Accountability 1/10, Transparency 4/10, and Social Impact 3/10. Transparency is the highest because DeepSeek-R1 shows its reasoning by default.

Is DeepSeek's transparency innovation genuinely significant?

Yes. DeepSeek-R1 actively shows its work by outlining its understanding of user intent and acknowledging potential biases before delivering answers. This is a paradigm shift in AI explainability.

Should my organisation use DeepSeek for business purposes?

The author states 'I can't see a reason you would risk your data and commercial IP with this system.' Developers may use open-source components locally with additional safety measures.

Are Western AI tools any better on privacy than DeepSeek?

While concerns about Chinese server storage are valid, Snowden's revelations remind us that Western tech is not immune to government surveillance either.