Which major companies are removing AI ethical guardrails and what can you do about it?

Quick Answer: OpenAI now permits military applications, Meta has hollowed out content moderation teams, and X (Twitter) claims mandatory rights to use your data for AI training. These are not isolated incidents but a coordinated retreat from ethical AI practices. Ethical alternatives exist: Anthropic (Claude) and Mistral prove responsible AI development is both possible and profitable.

Key Characteristics:
  • OpenAI opened the door to military applications; Meta reduced content moderation teams; X claims mandatory rights to use your data for AI training
  • Despite strong GDPR and EU AI Act, tech giants treat multi-million dollar fines as a cost of doing business
  • Anthropic and Mistral demonstrate that responsible AI development is not just possible but profitable
  • Practical steps: audit your digital footprint, review privacy settings, switch to ethical AI alternatives
Real Example:

The article describes a 'domino effect' where each company's step back enables others to follow. OpenAI, once a stalwart of ethical AI, opened the door to military applications. Meta's content moderation teams were hollowed out. Most of OpenAI's safety team resigned. Meanwhile, Anthropic and Mistral are proving that innovation does not require compromising values.

Article

Major Companies Removing AI Ethical Guardrails


Riley Coleman
January 23, 2025·3 min read

The Great AI Ethics Retreat


Happy New Year!

I know it seems a bit late to be saying it, but this is the first newsletter of 2025.

I hope you all had an awesome festive season.

Let’s kick off this year with some concerning news.

I usually write about Trustworthy AI, but it’s equally important to highlight companies that are deliberately dismantling safety measures and protections.


I’ve noticed a troubling shift in the last couple of months, and it keeps me awake at night. The AI guardrails that protect our digital lives are being dismantled, piece by piece, often so subtly that many haven’t noticed. But the implications are profound, and I believe we need to talk about it.


The Quiet Unravelling

Picture this: You’re living in a house. Someone is coming in at night and, bit by bit, removing the locks from your doors and windows. That’s essentially what’s happening with our digital lives right now. Let me share what I’ve observed:


The Domino Effect

  • OpenAI, once a stalwart of ethical AI, has opened the doors to military applications
  • Meta’s content moderation teams have been hollowed out
  • X (formerly Twitter) now claims mandatory rights to use your data for AI training


These aren’t isolated incidents. They’re part of a coordinated retreat from ethical AI practices, driven by cost-cutting and the prospect of looser regulation. We should all be concerned.

Why This Matters Now

The timing isn’t coincidental. With possible new rules and a changing political scene, tech giants seem to be prepping for a world with fewer limits. It’s like a choreographed dance: each company’s step back lets others follow.

The Global Ripple Effect

While sitting in my favourite café in Chiang Mai last week, catching up on AI news, I realised something striking. Despite the EU’s strong GDPR and AI Act, tech giants ignore regional privacy laws. They treat multi-million dollar fines as a cost of doing business, and they would rather pay up than respect users’ rights.


A Tale of Two Approaches

But here’s where it gets interesting. Not everyone’s joining this race to the bottom. Let me share a contrast that gives me hope:


The Ethical Pioneers

Companies like Anthropic and Mistral are proving that responsible AI development isn’t just possible. It’s profitable. They’re showing that innovation doesn’t require compromising our values.


The Traditional Giants

Meanwhile, traditional tech giants are making choices that prioritise speed over safety:

  • Meta’s reduction in fact-checking capabilities
  • X’s expanded data collection powers
  • Most of OpenAI’s safety team has resigned

What Can You Do?

After spending years in this space, I’ve learned that our power lies in our choices. Here’s what I recommend:

Practical Steps:

  1. Audit Your Digital Footprint
    • Review which platforms have your data
    • Check privacy settings on existing accounts
    • Consider alternative platforms that align with your values
  2. Make Informed Choices
    • For chat AI: Consider switching to Claude or Mistral
    • For social media: Explore privacy-focused alternatives
    • For Meta platforms: Review and restrict data sharing

Looking Forward

The next two years are crucial. They will determine if we get AI systems we can trust or ones that spread bias and misinformation. As someone deeply embedded in this space, I believe we’re at a crossroads.

The future isn’t written yet. Every time you choose where to share your data, you’re casting a vote for the kind of AI future you want to see. It’s why I’ve become more vocal about these issues. We must have these conversations now.

What are your thoughts on these changes?
How are you adapting your digital habits in response?


RC

Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.


Want more insights like this?

Join 1,000+ design leaders getting weekly insights on trustworthy AI.

Frequently Asked Questions

Why are tech companies removing AI safety measures now?

With possible new rules and a changing political scene, tech giants appear to be preparing for a world with fewer limits. Each company's step back enables others to follow.

What practical steps can individuals take?

Audit your digital footprint, check privacy settings on existing accounts, consider switching to ethical alternatives (Claude or Mistral), and review data sharing on Meta platforms.

Do EU regulations actually protect users?

Not as effectively as expected. Tech giants treat multi-million dollar fines as a cost of doing business rather than respecting rights.

Are there profitable alternatives to unethical AI development?

Yes. Anthropic (Claude) and Mistral prove that responsible AI development is not just possible but profitable.