Preventing Woke AI in the Federal Government

Jul 28, 2025

Summary

Requires any artificial intelligence the federal government buys to give truthful, ideologically neutral answers rather than promoting social or political agendas.

What problem does this solve?

Some artificial intelligence tools are built with political biases that can make their answers inaccurate or misleading. This order requires the government to buy only AI that is grounded in fact and does not push a particular viewpoint.

Who does this affect?

  • AI technology companies
  • Federal government agencies
  • Federal employees

What does this order do?

Establishes 'Unbiased AI Principles'
Creates two main rules for AI bought by the government: it must be truth-seeking and ideologically neutral. In practice, this means the AI should give factual, accurate answers and should not embed political agendas such as DEI into its outputs.
Restricts federal purchasing of AI
Directs all federal agencies to buy only large language models (LLMs) that comply with the new principles of truth-seeking and ideological neutrality.
Requires new guidance for government contracts
Orders the Office of Management and Budget (OMB) to issue guidance within 120 days on how agencies should buy and use AI under the new rules.
Updates federal contracts for AI
Requires new and existing government AI contracts to be updated to include the unbiased AI terms. A vendor that fails to comply may have to pay the costs of removing its AI from government systems.
Defines certain ideas as harmful in AI
States that ideologies like "diversity, equity, and inclusion" (DEI), critical race theory, and transgenderism are destructive when built into AI because they can distort facts.
Allows exceptions for national security
Permits the new rules to be waived for AI used in national security systems when necessary.

What is the real world impact?

Restricts AI based on political ideology
Uses federal buying power to stop the use of AI systems that include ideas like "diversity, equity, and inclusion" (DEI), which the order calls a destructive ideology. This could be seen as a way to enforce a specific political viewpoint on technology.
Could make AI less fair
Critics might argue that trying to remove concepts like "unconscious bias" or "systemic racism" from AI could make the models less aware of real-world biases, potentially leading to unfair or discriminatory outcomes for certain groups.
Promotes accuracy in government AI
Ensures that AI used by the government provides factual, objective, and historically accurate information. Aims to prevent social or political agendas from distorting the truth in AI-generated content.

When does this start?

This order is effective immediately and sets multiple deadlines for government agencies to follow.
OMB guidance on AI procurement
Within 120 days of July 23, 2025, the Office of Management and Budget must issue guidance for agencies on how to buy unbiased AI.
Agency compliance procedures
Within 90 days of the OMB guidance being issued, each agency must create its own procedures to ensure the AI it buys follows the new rules.
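For concreteness, here is a minimal sketch of the deadline arithmetic in Python, assuming the order's signing date of July 23, 2025 and assuming the OMB guidance lands exactly on its 120-day deadline (in practice, each agency's 90-day clock starts whenever the guidance is actually issued, which could be earlier):

    from datetime import date, timedelta

    signed = date(2025, 7, 23)  # date the order was issued

    # OMB must issue procurement guidance within 120 days of signing.
    omb_deadline = signed + timedelta(days=120)
    print("OMB guidance due by:", omb_deadline)  # 2025-11-20

    # Each agency then has 90 days from the guidance's actual issuance;
    # this assumes issuance falls exactly on the 120-day deadline.
    agency_deadline = omb_deadline + timedelta(days=90)
    print("Agency procedures due by:", agency_deadline)  # 2026-02-18

Under these assumptions, the latest dates would be November 20, 2025 for the OMB guidance and February 18, 2026 for agency compliance procedures.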