How is US AI legislation affecting international AI cooperation?

Quick Answer: New US AI policy mandates the removal of misinformation, DEI, and climate references from NIST frameworks while explicitly opposing UN, OECD, and G20 governance efforts. Algorithmic bias doesn't merely replicate discrimination; it optimises for it systematically. Within three years, AI will influence banking, healthcare, employment, and criminal justice globally.

Key Characteristics:
  • Policy explicitly opposes UN, OECD, and G20 AI governance efforts
  • English comprises ~50% of internet content while 7,100+ languages exist
  • Algorithmic bias optimises for discrimination: faster, harder to detect, and self-reinforcing
  • AI will be embedded in banking, healthcare, employment, criminal justice within 3 years
Real Example:

The US 'America's AI Action Plan' mandated the NIST AI Risk Management Framework to eliminate references to misinformation, DEI, and climate change. In hiring, AI trained on biased historical data perfects discrimination. The language divide compounds this: English accounts for approximately 50% of internet content, while Hindi (260 million speakers) represents only 0.1%.

Article

US Government AI Policy Spells the End of Global AI Cooperation

Master AI design leadership with Australia’s expert guidance.

Riley Coleman
July 24, 2025·10 min read
   

The End of Global AI Cooperation as We Know It

Policy change & why it impacts YOU

Bias & Discrimination unchecked

International Cooperation Breakdown

What actions you can take to protect yourself

 

How New Policy Changes Will Shape Your Digital Life

What’s Happening Right Now

A new policy document from the United States has fundamentally changed the global approach to artificial intelligence development.

This isn’t about politics – it’s about technology that will affect every aspect of your life within the next three years.

The policy states clearly:

“The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.”

More critically, the document mandates:

“the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”

What this means: The safety systems designed to prevent AI from discriminating against people are being systematically removed.

Why This Matters More Than You Think

When most people hear “AI,” they think of ChatGPT or asking Siri a question. But AI is rapidly becoming embedded in the invisible infrastructure of modern life.

Within three years, AI will be operating behind the scenes in:

  • Banking: Determining your loan applications and credit scores
  • Healthcare: Assisting with medical diagnoses and treatment recommendations
  • Employment: Screening job applications and determining promotions
  • Government services: Processing benefit applications and making eligibility decisions
  • Education: Influencing educational opportunities and resource allocation
  • Criminal justice: Informing policing decisions and sentencing recommendations

Think of it like electricity – you don’t see it, but it powers everything. AI is becoming the digital electricity of modern society.

How AI Amplifies Inequality: The Technical Reality

Here’s what non-technical people need to understand: AI doesn’t just mirror existing problems; it optimises for them.

Consider how Instagram or YouTube works. If you click on conspiracy content even once, the algorithm notices and starts feeding you increasingly extreme content, because that's what drives engagement.

AI systems work the same way, but for everything:

In hiring: AI can ensure a human will never see your CV.
If past hiring favoured certain demographics, AI doesn’t just continue that pattern – it perfects it, becoming more systematically discriminatory than human recruiters ever were.

In healthcare: AI limits what treatment you are offered.
If historical data shows certain groups received different treatment, AI optimises those disparities, making them more systematic and harder to detect.

In financial services: AI decides whether you qualify for loans, credit cards, or store purchases, finding increasingly sophisticated ways to discriminate using proxy data that humans might miss.

Human bias can be inconsistent, can be challenged, and is usually limited by how many people it affects at once.

AI bias has NONE of those limitations:

  • Systematic and consistent across all decisions
  • Faster and more efficient at discriminating
  • Harder to detect and challenge
  • Self-reinforcing over time
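The hiring example above can be sketched in a few lines of Python. Everything here is hypothetical: the group labels, the 60/40 historical hire rates, and the 0.5 cutoff are illustrative stand-ins, not data from any real system. The point is that a model optimising on inconsistently biased history turns that bias into a hard rule.

```python
# Hypothetical illustration: an inconsistent human bias in historical
# data becomes a deterministic rule once a model optimises on it.

# Past decisions as (group, hired?) pairs. Humans favoured group A
# 60/40 -- biased, but inconsistent: many B candidates still got through.
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60

def hire_rate(group):
    """Historical hire rate for a group."""
    decisions = [hired for g, hired in history if g == group]
    return sum(decisions) / len(decisions)

def screen(group, cutoff=0.5):
    """Naive screener: advance candidates only from groups whose
    historical hire rate clears the cutoff."""
    return hire_rate(group) >= cutoff

print(hire_rate("A"), hire_rate("B"))  # 0.6 0.4 -- the human-era rates
print(screen("A"), screen("B"))        # True False -- no B CV is ever seen
```

A 20-point gap in human decisions becomes a 100-point gap in the screener's output, and every decision the screener makes feeds the same pattern back into the next round of training data.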

The Language Divide: A Global Digital Apartheid

The scope of this challenge extends far beyond individual bias. We’re facing a fundamental language inequality that affects billions.

The stark reality:

  • Over 7,100 languages are spoken worldwide
  • English accounts for approximately 50% of all internet content
  • Only 10 languages account for 80% of online content

What this means:

If you speak Hindi (260 million speakers worldwide), you’ll find only 0.1% of internet content in your language.
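One way to see the scale of that gap is a rough representation ratio: a language's share of online content divided by its share of world speakers. The sketch below uses the article's content-share figures; the population and speaker counts (including the 1.5 billion English figure, which counts second-language speakers) are rough assumptions for illustration only.

```python
# Representation ratio = share of online content / share of world speakers.
# Content shares are the article's figures; speaker counts and world
# population are rough estimates for illustration only.

WORLD_POPULATION = 8_000_000_000

languages = {
    # name: (approx. speakers, share of internet content)
    "English": (1_500_000_000, 0.50),
    "Hindi":   (260_000_000,   0.001),
}

for name, (speakers, content_share) in languages.items():
    speaker_share = speakers / WORLD_POPULATION
    ratio = content_share / speaker_share
    print(f"{name}: {ratio:.2f}x represented online")
```

On these figures English comes out roughly 2.7x over-represented online, while Hindi sits below 0.04x, nearly two orders of magnitude apart.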


If you speak Bengali, Swahili, or most Indigenous languages, your digital world becomes severely limited.

AI systems are trained on this English-dominated data, meaning they inherently reflect Western, English-speaking perspectives and values.

Impact Analysis: Who Gets Left Behind

For Indigenous Communities Worldwide

Indigenous communities face compounded disadvantage:

  • Language erasure: AI systems trained on dominant languages accelerate the digital marginalisation of Indigenous languages
  • Cultural invisibility: Traditional knowledge systems and cultural values become unrepresented in AI decision-making
  • Service exclusion: Government and commercial AI systems may fail to recognise Indigenous names, locations, or cultural contexts
  • Economic barriers: Reduced access to AI-enhanced opportunities in education, employment, and business

For Non-English Speaking Communities

Communities whose primary language isn’t English face systematic digital exclusion:

  • Information poverty: Reduced access to AI-enhanced information, services, and opportunities
  • Economic disadvantage: Difficulty accessing AI-powered job platforms, financial services, and business tools
  • Educational barriers: Limited access to AI-enhanced learning resources and educational opportunities
  • Healthcare disparities: AI diagnostic tools and health information systems primarily optimised for English-speaking populations

For Specific Demographic Groups

Women globally: Risk of AI systems perpetuating gender stereotypes without bias mitigation frameworks, affecting hiring, lending, and healthcare decisions.

Racial and ethnic minorities: Systematic algorithmic discrimination in criminal justice, hiring, healthcare, and financial services, with reduced oversight and correction mechanisms.

People with disabilities: AI systems may fail to accommodate diverse needs without inclusive design requirements.

Elderly populations: Risk of exclusion from AI-enhanced services due to interface design and technology access barriers.

Rural communities: Limited representation in training data leading to poor performance of AI systems in rural contexts.

The International Cooperation Breakdown

The policy document explicitly positions the US against efforts by the United Nations, the OECD, and the G20 to ensure AI development serves all of humanity.

Implications for Australia and Other Nations

For Individual Australians

Immediate concerns:

  • AI tools you use daily may become more biased against non-American perspectives
  • Reduced protection against algorithmic discrimination in services and employment
  • Potential exclusion from AI-enhanced opportunities if systems aren’t designed with Australian contexts in mind

Longer-term impacts:

  • Dependence on AI systems that reflect foreign values and priorities
  • Reduced influence over AI governance that affects Australian society
  • Risk of digital colonialism where Australian data enhances foreign AI systems without local benefit

For Australian Businesses and Government

Strategic challenges:

  • Pressure to adopt AI systems that may not align with Australian fair trading and anti-discrimination laws
  • Risk of technological dependence on foreign AI infrastructure
  • Difficulty maintaining sovereign control over AI governance affecting Australian citizens

Compliance concerns:

  • Potential conflicts between US-developed AI systems and Australian consumer protection laws
  • Challenges in ensuring AI systems meet Australian workplace equity and safety standards

What You Can Do

As an Individual

Diversify your AI usage:

  • Consider swapping ChatGPT and similar tools for European alternatives such as Mistral, which operates under EU AI legislation with mandatory fairness protections

    >> Remember: commercial companies respond only to commercial pressure
  • Be aware that your choice of AI tools influences market direction

Stay informed:

  • Learn basic AI literacy (how to use & potential risks) to recognise when AI systems may be making decisions that affect you

    >> The FASTEST way to safety is for ALL to be AI literate
  • Understand your rights regarding automated decision-making in your country

Advocate for protection:

  • Contact your representatives about the need for strong AI governance frameworks
  • Support organisations working on AI ethics and digital rights

For Australian Organisations

Assess your AI supply chain:

  • Review whether your AI tools align with Australian values around fairness and inclusion
  • Consider the legal liability of deploying biased AI systems in Australian contexts
  • Implement local bias testing for any AI system affecting Australian consumers or employees
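For the bias-testing point above, one common starting point is an adverse-impact check borrowed from employment practice: compare selection rates across groups and flag any ratio below 0.8, the conventional "four-fifths" threshold. This is a minimal sketch, not a full audit, and the group names and counts are hypothetical.

```python
# Minimal adverse-impact check. The 0.8 threshold is the conventional
# "four-fifths rule"; group names and counts here are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the best-treated group's."""
    return rate_group / rate_reference

rate_a = selection_rate(90, 200)  # 0.45
rate_b = selection_rate(54, 200)  # 0.27

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"Impact ratio: {ratio:.2f}")

if ratio < 0.8:
    print("Flag: possible adverse impact against group B -- investigate")
```

A check like this is cheap to run on any AI system's decision logs, and a failing ratio is a signal to investigate, not proof of discrimination on its own.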

Plan for technological sovereignty:

  • Reduce dependency on single-source AI providers
  • Invest in local AI capabilities that reflect Australian contexts and values



Prompt to research non-US alternatives to AI tools you use

Instructions

1. Go to perplexity.ai

2. Copy and paste the text below, replacing
[INSERT YOUR AI TOOL NAME] with the name of the tool you'd like to find an alternative for.

I’m currently using [INSERT YOUR AI TOOL NAME] and want to find non-US alternatives.

Please research:

1. FIND ALTERNATIVES
List 5-6 non-US alternatives to [INSERT TOOL NAME], focusing on:
– Australian companies
– European/EU companies (especially with GDPR compliance)
– Canadian companies
– Other democratic countries with strong privacy laws
– Open-source options
For each, provide: company name, country, and key features.
2. ETHICS & RESPONSIBILITY
For each alternative, check:
– Do they have a responsible AI policy?
– How do they handle bias and fairness?
– Are they transparent about their AI training?
3. PRIVACY & DATA
Research each company’s:
– Where they store user data
– GDPR/privacy law compliance
– Whether they use your data to train AI models
– Data retention policies
4. PRACTICAL COMPARISON
Compare each alternative on:
– Cost vs [INSERT TOOL NAME]
– Ease of switching/migration
– Performance and features
– Language support beyond English
– Customer reviews and reliability
5. INDEPENDENCE CHECK
Verify for each option:
– Do they use US-based AI models under the hood?
– Any financial ties to US tech companies?
– How technically independent are they?
Create a simple comparison table ranking each alternative on ethics, privacy, performance, and ease of switching (1-5 scale).

Become AI Literate

Increase Productivity & Protect Yourself from AI Harms

Approachable AI for Busy Non-Tech Professionals

Know what AI can do and how to use it to get different tasks done


Ability to select the right AI tools for the right tasks


Rapidly improve your AI prompting so you can get quality output consistently.


Learn to minimise risks and use AI responsibly.


Future-proof your career with AI expertise that matters.

 

Date

Aug 18 – Sept 12

Delivery

Self Paced E-learning
+
2 x Hands-on Practical Skill Building Workshops

Time

1. Wed 8/27
6:30 PM—8:00 PM
(GMT+10)

2. Wed 9/10

6:30 PM—8:00 PM (GMT+10)

Level

Beginners

Register Now

The Path Forward

This isn’t about choosing sides in a geopolitical competition. It’s about ensuring that as AI becomes the invisible infrastructure of modern life, it serves all of humanity rather than reflecting the biases and values of whoever builds the most powerful systems first.

The next few years will determine whether AI becomes a force for global inclusion and human flourishing, or a tool that systematically perpetuates and amplifies existing inequalities on a planetary scale.

The choices we make today—about which AI systems to use, which policies to support, and which companies to empower—will shape the digital world our children inherit.

The bottom line: AI is becoming too important to be governed by any single country’s values, no matter how well-intentioned. Global challenges require global solutions, and the stakes have never been higher.

This analysis is based on the publicly available “America’s AI Action Plan” released by the White House in July 2025, along with current research on AI bias, language distribution online, and digital inclusion.

Resources for AI Design Leadership 

Continue your AI design leadership journey with these carefully curated resources:

Ready to advance your AI design leadership expertise? Our proven frameworks and community support ensure sustainable professional growth in the evolving design landscape.

This approach to AI design leadership ensures human-centered design principles remain at the forefront of technological advancement, creating meaningful impact for users and sustainable value for organisations.

RC

Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.

Share this article

Want more insights like this?

Join 1,000+ design leaders getting weekly insights on trustworthy AI.

Frequently Asked Questions

How does the US AI policy affect people outside America?

The policy opposes UN, OECD, and G20 AI governance efforts. Since most widely-used AI systems are built by US companies, removing fairness protections affects anyone using these tools globally.

Why does AI bias affect non-English speakers disproportionately?

Over 7,100 languages exist but English accounts for 50% of internet content. Hindi, with 260 million speakers, represents just 0.1%, creating systematic digital exclusion.

What can individuals do to protect themselves from biased AI?

Consider European alternatives like Mistral (operating under EU AI legislation with mandatory fairness protections), build basic AI literacy, and advocate for strong AI governance frameworks.

How is AI bias different from human bias?

Unlike human bias, AI bias is systematic, consistent, faster at discriminating, harder to detect, and self-reinforcing over time. AI does not just mirror existing problems; it optimises for them.