It was late October 2023 when the white paper hit the desks. People called it the most ambitious piece of tech policy in American history. Specifically, the Biden AI executive order (officially Executive Order 14110) wasn't just another dry government memo. It was a massive, 110-page attempt to leash a technology that most of Washington barely understands.
If you ask the average person in tech about it now, they might say it’s dead. They’ll point to the 2025 shift in administration and the high-profile revocations.
But they’re missing the point. Completely.
While the "headline" mandates were swapped out for the "Removing Barriers to American Leadership in Artificial Intelligence" order under the next administration, the DNA of the original Biden order is still everywhere. It’s in the way the Department of Energy handles data centers. It’s in the way NIST (National Institute of Standards and Technology) measures risk. Basically, the Biden AI executive order didn't just set rules; it built the plumbing for how the U.S. government thinks about machine learning.
The "Safety" Obsession That Changed Everything
The core of the Biden order was safety. We're talking about "red-teaming."
Before this order, if you were building a massive foundation model, you basically did whatever you wanted. You tested it in-house, maybe hired some outside hackers, and then hit "publish." The Biden AI executive order changed the vibe. It invoked the Defense Production Act—a Korean War-era law—to force developers to tell the government when they were training a model that could pose a "serious risk to national security."
Honestly, it was a power move.
The government basically said, "If your model uses more than $10^{26}$ integer or floating-point operations for training, we need to see your homework."
Why the $10^{26}$ Threshold Mattered
For those who don't speak math, that’s a massive amount of compute. It was a line in the sand. It targeted the giants: OpenAI, Google, Anthropic. The order required these companies to share the results of their "red-team" safety tests. They had to prove their AI wouldn't help a rogue actor brew a bioweapon or execute a catastrophic cyberattack.
Critics like Carl Szabo from NetChoice hated it. They called it "red tape" that would let "government bureaucrats" shut down innovation. But for others, like Alondra Nelson, formerly of the White House Office of Science and Technology Policy, it was a necessary guardrail. She argued that you can't have a "light touch" when the technology in question can automate the creation of novel pathogens.
It Wasn't Just About Skynet
Everyone focuses on the "killer robot" stuff. But the Biden AI executive order was actually obsessed with boring, everyday things. Like your mortgage. Or your job application.
The order directed the Department of Labor to figure out how AI was being used to spy on workers. You’ve probably seen the stories—software that tracks how many times a warehouse worker goes to the bathroom or "AI bosses" that fire people via email. Biden wanted a report on that. He wanted standards to ensure that AI wasn't just a fancy tool for union-busting.
Then there was the fraud.
Watermarks and Deepfakes
You know those AI-generated robocalls that sound exactly like your grandma? Or the deepfake videos that can swing an election? The order pushed for "content provenance."
Basically, the Department of Commerce was tasked with figuring out how to watermark AI content. The goal was simple: when you see a video online, you should know if a human made it or if it’s just pixels generated by a GPU in a basement in Virginia.
It hasn’t been perfect. Actually, it’s been kinda messy. Watermarking is notoriously easy to strip out. But by putting the weight of the federal government behind it, the Biden AI executive order forced the industry to at least try.
The Great 2025 Pivot
Here is where it gets interesting. When the administration changed in early 2025, the new leadership moved fast. They issued Executive Order 14179.
They revoked Biden’s order.
The new focus shifted from "safety and bias" to "dominance and deregulation." They wanted to remove "ideological bias" and "engineered social agendas" from AI models. To the new administration, Biden’s focus on civil rights and equity was a distraction—or worse, a "handcuff" on American genius.
What survived the purge?
Surprisingly, a lot.
Even though the Biden AI executive order was officially rescinded, the agencies had already spent over a year building the infrastructure.
- The AI Safety Institute: Nestled within NIST, this group is still the gold standard for testing models.
- Infrastructure: The push to build massive AI data centers on federal land (using geothermal and nuclear power) actually gained steam.
- Cybersecurity: The "AI Cyber Challenge" to find and fix vulnerabilities in critical software didn't stop. It just got a new name.
Politics changes, but the need to keep the power grid from being hacked by an autonomous agent doesn't.
The Civil Rights Tension
We have to talk about the part that everyone fights over: bias.
The Biden AI executive order was very clear. It said AI shouldn't be used to discriminate. If an algorithm is deciding who gets a loan or who gets bail, it needs to be fair. The order told the Department of Justice and other agencies to use existing civil rights laws to go after "algorithmic discrimination."
🔗 Read more: ChatGPT Student Free Plus: Is the Upgrade Actually Worth Your Coffee Money?
Groups like the ACLU thought this didn't go far enough. They pointed out that the order mostly ignored how law enforcement and national security agencies use AI for surveillance. They felt the government was "kinda" giving itself a pass while lecturing the private sector.
On the flip side, the current 2026 policy landscape views these "bias" checks as a form of "woke AI." There’s a massive push now to ensure models are "truthful"—which is a polite way of saying they shouldn't be programmed to prioritize diversity over raw data. It’s a complete 180-degree turn in philosophy, but the technical tools (the "how" of auditing a model) are still the ones developed under the original Biden framework.
What This Means for You Right Now
If you're a developer or a business owner in 2026, the Biden AI executive order is your history book. You can't understand the current "national policy framework" without knowing what it replaced.
The world has moved toward a "minimally burdensome" standard. The federal government is now actively fighting against states (like California or Texas) that try to pass their own strict AI laws. They want one rule for the whole country.
But here’s the kicker: The massive companies—the ones that actually build the models—still use the safety protocols from 2023. Why? Because liability is a nightmare. No CEO wants to be the one whose AI accidentally tells a kid how to make a bomb, regardless of which executive order is currently active.
Actionable Insights for the AI Era
If you're trying to navigate this landscape, stop looking at the "rescinded" stamp and look at the "best practices."
- Map your controls: Even if the federal government isn't mandating a specific report today, insurance companies and international partners (like the EU with their AI Act) definitely are. If you aren't auditing your models for "unintended outputs," you're a walking lawsuit.
- Watch the Commerce Department: By March 2026, they are scheduled to finish an evaluation of state AI laws. This will tell you if your local state regulations are about to be vaporized by federal preemption.
- Prioritize Transparency: The "watermarking" dream isn't dead; it’s just evolving. Whether it's C2PA or metadata, being able to prove your content is "human" or "verified" is becoming a premium feature in a world of synthetic noise.
- Security over Policy: Regardless of who is in the White House, "red-teaming" is now industry standard. If you aren't stress-testing your systems for cybersecurity vulnerabilities, you're failing the basic "duty of care" that the 2023 order first popularized.
The Biden AI executive order might be officially "gone," but its ghost is the one running the machine. It defined the terms of the debate—compute, red-teaming, foundation models, and safety—and we're going to be living in that world for a long, long time.