GUARD Act

Oct 28, 2025

Summary

Creates rules requiring AI chatbots to verify users' ages and disclose that they are not human, helping protect children from harmful content and manipulation.

What problem does this solve?

AI chatbots can expose children to dangerous content and exploit their trust. This bill requires companies to verify user ages, block minors from certain AI chatbots, and make chatbots disclose that they are not real people.

What does this bill do?

Makes harmful AI chatbots illegal
Creates new criminal offenses for designing an AI chatbot that encourages minors to engage in sexual conduct or promotes suicide, self-injury, or violence.
Requires age verification for all users
Forces companies to check the age of every user for both new and existing accounts using a reliable process, not just asking for a birthdate.
Bans minors from using 'AI companions'
Prohibits companies from allowing anyone under 18 to use AI chatbots that are designed to simulate friendship, companionship, or emotional interaction.
Mandates clear disclosures
Requires chatbots to regularly state they are not human and cannot provide professional advice like medical, legal, or financial services.
Sets data security rules for age verification
Limits the personal data companies can collect for age checks and requires them to protect it with encryption and delete it when no longer needed.
Establishes steep fines for violations
Allows the Attorney General to seek civil penalties of up to $100,000 for each violation of the age verification and disclosure rules.

Who does this affect?

  • AI chatbot developers and operators
  • Minors (under 18)
  • Parents and guardians

What is the real world impact?

Raises data privacy concerns
Critics may argue that requiring government IDs or other methods for age verification could force users to share sensitive personal data, creating new privacy risks.
Protects children from harmful AI
Prevents minors from accessing AI companions that could expose them to sexually explicit content, encourage self-harm, or manipulate them emotionally.
Creates accountability for AI developers
Establishes clear legal penalties for companies that create AI chatbots that promote violence or solicit minors, making them responsible for the safety of their products.

When does this start?

The rules in this bill will start 180 days after it becomes law.

Related

H.R. 8250 - Parents Decide Act
S. 278 - Kids Off Social Media Act