Could Artificial Intelligence Save Your Life?
ProTip: On Day One, Trump Swept Away Biden’s AI Overregulation
Let’s start with the good news: Could AI save your life? Maybe!
The average primary care visit lasts about 15 minutes. During that time, a doctor must gather symptoms, review a decade of test results, recall past diagnoses, weigh drug interactions, ask pertinent questions, and still have time to make a plan. Even the most experienced physician cannot process that much information in so little time.
Here’s where AI could step in—not as a replacement, but as a partner. Imagine an AI tool quietly scanning your test results from the past ten years, recognizing subtle changes in liver enzymes, flagging abnormal patterns in your EKGs, and cross-referencing them with the latest peer-reviewed studies. By the time the doctor walks in, the AI has already prepared a dashboard of possible diagnoses, suggested follow-up tests, and highlighted urgent anomalies. The physician remains in charge—but is now better informed, faster, and more confident.
That’s the promise of AI in medicine: not to replace human judgment, but to supercharge it.
So far, so good.
On the other side of this optimistic future stand the AI regulators—with a foot firmly on the brake. Their worldview is shaped less by actual clinical trials than by cinematic dread: think HAL 9000 from 2001: A Space Odyssey. These AI catastrophists view every algorithm as a potential existential threat and respond with a predictable instinct: regulate first, ask questions later.
Consider the Biden administration’s AI Executive Order (EO 14110), which created a sprawling web of risk assessments, compliance burdens, and procurement restrictions. Agencies were required to catalog every AI use case, layer on risk management structures, and adhere to strict guidelines before any model could even see the light of day.
Now contrast that with the Trump administration’s response. In January 2025, Executive Order 14179 reversed course by prioritizing “American AI dominance.” It swept away the Biden-era AI policies and tasked the Office of Management and Budget (OMB) with issuing memos that encourage innovation, fast-track procurement, and favor U.S.-developed models. The message? Unshackle AI from red tape and let the technology advance responsibly, without fear.
Even more telling: Trump’s energy policies are now steering infrastructure toward powering AI data centers. The contrast in pace and attitude toward AI development between the two administrations couldn’t be starker.
Cross the Atlantic, and you’ll find the EU AI Act—a masterclass in preemptive overregulation.
The Act takes a risk-based approach, assigning AI systems to tiers: unacceptable risk, high risk, limited risk, and minimal risk. This sounds logical until you realize just how many common applications fall under “high risk”—from hiring algorithms to educational tools to anything remotely involving public safety. These systems are subject to rigorous testing, documentation, transparency mandates, and even prior authorization. And where do you think your physician’s AI helper would fall?
Back in the U.S., the regulatory ecosystem has begun tilting toward Europe’s mindset. Over at the Brookings Institution—where no opportunity to regulate goes unexploited—analysts are still haunted by having let the Internet go mostly unregulated in the 1990s. They don’t intend to make that mistake again. Their advice: caution, moratoriums, and layered governance frameworks—in short, a “Mother May I?” regulatory scheme.
In response, free-market voices like the Cato Institute sounded the alarm. Their warning: overreach now will crush innovation later. Overregulation doesn’t prevent harm—it delays benefits. Especially the kind that might help your doctor spot a tumor before it spreads or identify the early signs of neurological decline.
No one denies that AI carries risks—misuse and data privacy among them. But the real question is how we address those risks. Do we smother the technology under preemptive constraints, as the EU does? Do we treat AI like nuclear power instead of like the next evolution of computing? Or do we let the technology evolve—monitoring, testing, correcting along the way—so that its benefits, like better medical care, can reach us sooner?
In this debate, the stakes are more than theoretical—they’re personal. And possibly even life-saving. Now picture yourself back with your doctor, trying to figure out what is wrong with you and what to do about it. While some regulators fear AI might take over the world, most patients just want it to help their doctor review their medical history.
See:
https://www.economist.com/business/2023/10/24/the-world-wants-to-regulate-ai-but-does-not-quite-know-how
https://www.cato.org/commentary/regulators-must-avert-overreach-when-targeting-ai
https://www.dajv.de/ai-act/the-us-innovates-the-eu-regulates-contrasting-approaches-to-ai-regulation-across-the-atlantic/