Who Decided The AI Industry Answers To Nobody?
If you've used AI, you probably hold two things at the same time: genuine appreciation for what it can do, and a growing sense that it's more confident than it deserves to be. The tool that saved you an hour is the same tool that lied to you, stating something false as established fact. At low stakes, that gap is a quirk or an inconvenience. When the same systems are making decisions about your job application, your insurance claim, or your family's medical care, it becomes something else.
The social media version of this story, a powerful novel technology scaling without accountability, is recent enough that we are all still living in it: the misinformation, the bots, the documented harm to kids. The US government eventually reached its own conclusion about what social media platforms represent. When it forced the TikTok sale on national security grounds, it acknowledged that whoever controls a platform shaping what hundreds of millions of people see and believe holds a form of power that no democratic process authorized. The bipartisan concern it identified was power held by foreign ownership.
The government decided that power was too dangerous for a foreign adversary to hold, then left it in the hands of American billionaires. Some of the people now building AI's version of this already hold that power in social media, and they are not guessing about what unaccountable scale produces.
The public has already formed a view. Gallup found that 80 percent of Americans support AI safety regulation even if it means slower development. That number is bipartisan, and it reflects what people have arrived at from experience, not from policy briefings. Most democracies have reached similar conclusions, as their legislation shows: the EU's risk-based AI framework is already in force, covering hiring, credit, healthcare, and law enforcement. China, the country most frequently cited as the reason the US cannot afford regulation, has regulation.
The United States is an exception. The question is why.
Part of the answer is what some of the people building this infrastructure have said publicly about democratic governance. Peter Thiel, who co-founded Palantir (a data analytics and surveillance company whose systems are deployed across immigration enforcement, military targeting, and domestic law enforcement agencies in the United States), stated more than fifteen years ago that he no longer believes freedom and democracy are compatible. That is not a position attributed to him by critics. It is a position he published under his own name. Joe Lonsdale, another Palantir co-founder, is among the donors to the PAC working to remove the accountability structures that democratic processes produced. Some of the people dismantling those structures have said, on the record, that this is the correct arrangement.
Part of the answer is also visible in federal campaign finance records and in the election cycle that preceded them. Billionaires spent at historically unprecedented levels in 2024, and some of that investment came from the same individuals now backing AI's version of this infrastructure. The strategist now running Leading the Future (LTF) ran a $130 million crypto-focused PAC in that same cycle. LTF, which raised $125 million in 2025 and is backed by an OpenAI co-founder, Andreessen Horowitz, and the founders of Palantir and Perplexity, is an escalation of that effort. It is protecting the conditions under which AI's projected returns, potentially in the trillions, land in the same hands that already hold social media's unchecked power. Against that number, $125 million is not a lobbying expense. It is a venture capital investment in a strategy that is already working.
Your car had to demonstrate it was safe before it was sold to you. Your medication had to prove it worked. The food you eat is inspected. None of this is framed as anti-innovation: it is the minimum condition for operating in a marketplace where the consequences of failure fall on people who did not build the product. The AI industry is currently arguing that this expectation should not apply to it, while simultaneously telling you it will reshape hiring, healthcare, education, credit, and criminal justice. The documented consequences of that argument are not hypothetical.
Health insurance companies are among the earliest large-scale deployers of AI for consequential decisions. A 2024 survey of 93 large health insurers found that 84% were already using AI for operational purposes. By 2025, 71% acknowledged using it specifically for prior authorization, the process through which they decide whether to approve or deny medical care.
The results are documented. One major insurer's algorithm denied 300,000 claims in two months, spending an average of 1.2 seconds per review; at that pace, all 300,000 reviews together took roughly 100 hours. Of the denials that were appealed, 90% were overturned: when anyone checked, the algorithm was wrong nine times out of ten. Most patients never appealed because the process was designed to discourage them. A 2024 Senate committee report found AI tools in some cases produced denial rates sixteen times higher than typical human review. In some cases, the errors resulted in severe health outcomes.
This is not a story about bad actors. Every insurer operating this way faces the same business model incentives. Prior authorization denials are profitable. AI makes them faster and cheaper to generate. The adoption is rational given the structure.
Some states moved to fill the void left by federal inaction. California passed the Physicians Make Decisions Act in 2024, explicitly prohibiting AI from serving, wholly or partially, as the basis for medical necessity determinations. Arizona, Maryland, Nebraska, and Texas followed. In states without these laws, the algorithmic denials continue.
The public bears the cost. The companies capture the efficiency gains.
This is what the $125 million is protecting.
There is a second category of documented harm that works differently. Healthcare denials are legible: you receive a notice, and you have a right to appeal. When an algorithm influences a hiring decision, a loan determination, a credit assessment, or the price someone is shown for the same product another customer sees for less, the person on the receiving end rarely knows an algorithm was involved. They receive a result, not an explanation. And the assumption that a human reviewer will catch what the algorithm gets wrong has been tested under controlled conditions. It did not hold.
A University of Washington study tested 528 participants working alongside simulated AI systems with built-in racial bias in hiring recommendations. The participants mirrored moderately biased recommendations at nearly the same rate as the AI itself. In cases of severe bias, they made only marginally less biased decisions than the system they were supposed to be checking.
The research explains why. An Oxford and University of Kentucky study analyzed over 20 million queries and concluded that bias is not a correctable anomaly: it is a structural feature, rooted in training data that reflects centuries of uneven information production. The model reproduces existing inequality because it learned from a world that already had it baked in. AI systems carry the weight of computational authority: they feel objective in a way that human judgment does not, while encoding the same historical inequities present in the data they were trained on. The consequence: the errors do not distribute randomly. They land on people who were already disadvantaged by the underlying data, and because the mechanism is invisible, the harm accumulates without triggering the appeals process that eventually forced accountability in healthcare.
The industry's stated answer is the same in both cases: we will fix it. The healthcare record, 90 percent overturn rates maintained for years before state intervention, suggests the mechanism for fixing it is external pressure, not internal correction. Things get fixed when they are required to be. Without that requirement, the business model does not produce the incentive.
The AI industry is not a broad cross-section of American business. A handful of companies, and the billionaires who founded or back them, stand to capture the majority of projected returns. When this spending is framed as protecting American competitiveness, the word "American" is doing significant work. The competitiveness at stake belongs to a small number of people. The accountability being avoided belongs to everyone else.
Part of what that accountability would cover is labor displacement. Most Americans already know what this looks like: when manufacturing moved overseas, the companies that moved it captured lower costs and higher margins. The workers absorbed the cost: in lost wages, in community collapse, in retraining programs that never quite materialized. The towns didn't recover. The shareholders did.
AI doesn't require moving physical production somewhere else. It replaces the work where it sits: in customer service, in logistics, in legal work, in medical coding and accounting. And the displacement may happen faster than in any prior wave of automation. The companies that deploy it capture the productivity value. The workers displaced absorb the cost. When those workers need retraining, income support, or social services, the bill goes to the public.
That bill lands on a specific public: the workforce taxed on every paycheck, at ordinary income rates. The accumulated wealth that produced the displacement was taxed differently: capital gains rates, deferred recognition, or not taxed at all as long as it remains unrealized. The people absorbing the cost and the people capturing the gain are not the same people. The tax structure reflects that difference precisely.
The trillions in projected AI returns will not be taxed at a rate that covers that bill (corporate tax avoidance is a mature industry), so the fiscal gap falls to individuals and governments who had no say in the deployment decision. There is even a version of this story where the AI investment bubble collapses before the returns materialize. That does not change the calculus for workers. Companies that have sunk billions into AI infrastructure face enormous pressure to justify those investments regardless of whether the technology delivers. The most direct path is squeezing labor costs. A bubble that bursts doesn't slow the displacement. It accelerates it.
A second pressure operates alongside it. When commercial returns disappoint, the use cases that survive are the ones with paying customers willing to accept what the market won't. Government surveillance contracts. Predictive enforcement systems. Data infrastructure for immigration, law enforcement, political targeting. These applications were already available. A bubble that needs to produce returns makes them more attractive, not less. Companies in that position comply with whatever limited law is on the books. The intent of those laws, to prevent specific documented harms, is a different question from the letter, and the letter is what gets tested. The result is not necessarily illegal. It is the predictable shape of what happens when accountability is deferred until after the infrastructure is built and the investment needs to pay.
The pattern has a name most people recognize even if they haven't heard it called this: privatize the gains, socialize the costs. The $125 million is not just protecting the right to avoid safety accountability. It is protecting the right to capture that value, or recover those investments, without contributing proportionally to the costs either outcome creates.
The industry is not spending this money because it is corrupt. It is spending this money because it is rational. Political spending at this scale doesn't primarily buy votes. It pre-installs the reference point, anchoring the issue on terrain that avoids accountability questions, before most people arrive at the conversation. Subsequent evaluation runs from that anchor, not from an independent position.
The national framework the industry has been calling for now exists. On December 11, 2025, the Trump administration issued an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence." Its stated goal: "to sustain and enhance the United States' global AI dominance through a minimally burdensome national policy framework for AI." That is the LTF press release, translated directly into executive order language.
The mechanism is more specific than the framing suggests. The order established a DOJ AI Litigation Task Force to sue states whose AI laws are deemed "onerous" or harmful to interstate commerce. It directed the Commerce Department to withhold broadband infrastructure funding, money meant to bring high-speed internet to underserved communities, from states that refuse to align their AI laws with the new federal posture. Every federal agency was directed to condition discretionary grants on state compliance. This is not abstract preemption. It is a funding threat aimed at communities that had nothing to do with these deployment decisions.
The White House fact sheet for the order is titled "PREVENTING A PATCHWORK OF AI REGULATIONS." Leading the Future's press materials use "patchwork" as their core attack phrase. The vocabulary is not coincidental. The PAC and the executive order were not parallel strategies converging on the same goal. They were the same operation running through different channels simultaneously.
"National framework" sounds like a preference for consistency. The question it doesn't answer is whether the entity setting the standards answers to the public or to its investors. The executive order eliminates state protections without building any federal protections to replace them. The floor is nothing, and the governance vacuum is occupied.
The state bills being characterized as "patchwork" are the healthcare protections described above. They are the laws that stopped algorithmic claim denials in California and would have required safety testing before deployment. The industry's stated preference for a "national framework that protects users" has a specific factual problem: the only frameworks that actually protected users were the state laws this movement is working to eliminate. There is no proposed federal alternative. "We support a national framework" is not a policy position. It is a placeholder for "not the thing you're trying to do."
LTF operates alongside a dark money nonprofit called Build American AI, a 501(c)(4) not required to disclose its donors. The disclosed PAC handles electoral work. The undisclosed nonprofit handles "issue advocacy." Disclose where the law compels it, operate in the dark where it does not. The electoral pressure is visible. The money behind the policy work is not.
Any discussion of AI accountability eventually arrives at the competitiveness argument: regulate and America falls behind. The country most frequently cited as the reason the US cannot afford regulation has regulation: mandatory safety assessments, algorithmic recommendation rules, generative AI content requirements. The argument is incoherent.
But the factual problem is the smaller problem. The larger one is structural. When the competitiveness argument is made, it converts the conditions for billionaire wealth accumulation into a national interest. "America must win" means "these particular companies must be allowed to operate without accountability." The framing asks the rest of the country to treat those conditions as a shared national mission, and to bear the costs of whatever accountability is avoided in the process.
The question the competitiveness frame never asks is about distribution. The costs and the gains do not land on the same people.
A small portion of the costs is already documented. Patients denied care by algorithms wrong nine times out of ten, with appeal processes designed to discourage them from finding out. Workers whose jobs are displaced faster than in any prior wave of automation, absorbing the cost while companies capture the productivity gain and the public covers the retraining bill at ordinary income tax rates. Job applicants, loan seekers, people receiving a result with no explanation, for whom the errors don't distribute randomly but land, invisibly, on those already disadvantaged, without triggering the accountability mechanisms that eventually forced action in healthcare. Communities watching broadband infrastructure funding get withheld because their state passed a law protecting patients from algorithmic denial.
The accountability gap is not only about economic decisions. The same surveillance and data infrastructure, already deployed across immigration enforcement and domestic law enforcement, faces no structural constraint on being directed at political activity, protest, or civic dissent under the framework being built. Who controls those decisions, and who those decisions are accountable to, are questions the current approach leaves open. The people funding that approach have disclosed their answer to those questions.
None of the people bearing those costs are projected to capture trillions. That prize, if it lands the way the industry intends, concentrates in the same hands that already hold technology's unchecked power. That is not an argument against AI. It is why the accountability question, the same one that was never fully resolved in social media, cannot be deferred again.
The choice being presented, AI growth or AI accountability, is not a real choice. Other industries have argued the same false choice. What's different here isn't the money. It's that the public has already rejected the premise, and the people spending know that shifting public opinion and making public opinion irrelevant are equally acceptable outcomes. The favorable federal environment has already been built. What is still being spent, at scale, is the effort to secure the state-level environment that hasn't landed yet.
The $125 million signals urgency, not inevitability.
Industries spend this kind of money when the outcome they need is still in play. At the federal level, the conditions they need exist. At the state level, they don't, not yet. The bipartisan support for AI accountability measures, the state legislation that has already moved, the documented record from healthcare: these are the reasons the spending is continuing. The industry is not celebrating. It is working.
When the first ad the PAC ran against its primary target failed to shift the numbers, it spent $1 million on a second ad questioning his work history. The campaign drew a cease-and-desist letter alleging false statements. That escalation, from policy argument to personal destruction, is evidence of where the state-level fight actually stands. When you are winning the argument, you run more of the same ad. When you are losing it, you attack the person.
The playbook is not new. Tobacco said the science was uncertain. Automakers fought seat belt requirements at every turn. In both cases the accountability eventually came. Most of these fights look different in retrospect than they did while they were happening: a science dispute, a cost argument, a free speech question. They were accountability fights that got framed as something else until the window closed.
The pattern of avoiding accountability is repeating. The mechanism is visible while it is still operating. The donors are disclosed. The executive order is public.
The industry is betting that we won't connect the dots in time.
Sources

AI Bias Research
The Oxford Internet Institute and University of Kentucky study "The Silicon Gaze: A typology of biases and inequality in LLMs through the lens of place," published in Platforms and Society (January 20, 2026), analyzed over 20 million ChatGPT queries and concluded that bias is a structural feature of generative AI systems, not a correctable anomaly. Covered by Washington Post (February 12, 2026). Full data available at inequalities.ai.
A University of Washington study (findings October 2025, covered by Washington Post November 25, 2025) tested 528 participants working alongside simulated LLMs with varying levels of racial bias in hiring recommendations. Participants mirrored moderately biased AI recommendations at nearly the same rate as the AI, and in cases of severe bias made only marginally less biased decisions. Funded by the U.S. National Institute of Standards and Technology.
Health Insurance AI Denials
A 2024 survey by the National Association of Insurance Commissioners of 93 large health insurers in 16 states found that 84% were using AI for some operational purposes, per Stanford Law School and Stanford Report (January 2026). By 2025, 71% acknowledged using AI specifically for utilization management, per PBS NewsHour (January 2026) citing Indiana University law professor Jennifer Oliva.
One major insurer's algorithm denied 300,000 claims in two months at 1.2 seconds per review, with a 90% appeal overturn rate, per AAPC (February 2026). A 2024 Senate Permanent Subcommittee on Investigations report found AI tools in some cases produced denial rates sixteen times higher than typical, per the American Medical Association (March 2025). California's Physicians Make Decisions Act (SB 1120), signed September 2024 and effective January 2025, prohibits AI as the basis for medical necessity determinations, per Senator Josh Becker's office. Arizona, Maryland, Nebraska, and Texas enacted similar restrictions, per Stateline (November 2025).
Peter Thiel / Palantir
Peter Thiel's statement "I no longer believe that freedom and democracy are compatible" appears in his essay "The Education of a Libertarian," published by the Cato Institute's Cato Unbound on April 13, 2009. Available at cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/. Thiel co-founded Palantir Technologies alongside Joe Lonsdale, Alex Karp, and others. Palantir's data systems are deployed across U.S. immigration enforcement (ICE), military targeting, and domestic law enforcement agencies, per multiple published reporting sources. Joe Lonsdale's role as an LTF donor is documented in NBC News (October 24, 2025), CNBC (November 17, 2025), and CNN (February 11, 2026).
Leading the Future PAC
Leading the Future raised $125 million in 2025 with approximately $70 million cash on hand entering 2026, per CNBC (January 30, 2026) and NOTUS (January 30, 2026). Major donors include an OpenAI co-founder, Andreessen Horowitz, and the founders of Palantir and Perplexity, per NOTUS (January 13, 2026) and NBC News (October 24, 2025). The PAC's targets included the author of New York's RAISE Act, per NOTUS (January 13, 2026). Meta created two additional super PACs targeting California and national state-level candidates, per Future Caucus (October 6, 2025). Build American AI is described as a dark money 501(c)(4) offshoot operating alongside the super PAC, per Axios reporting.
The Bores Ad Escalation
Leading the Future spent over $1.1 million on two sequential ads against New York congressional candidate Alex Bores. The first ad attacked his pro-regulation stances on AI; the second, disputed by the Bores campaign via cease-and-desist, alleged hypocrisy over his prior employment, per City & State New York (January 2026) and TechCrunch (November 2025).
The December 2025 Executive Order
The Trump executive order "Ensuring a National Policy Framework for Artificial Intelligence," signed December 11, 2025, established a DOJ AI Litigation Task Force to challenge state AI laws, directed Commerce to condition BEAD broadband funding on state compliance, and directed all federal agencies to assess conditioning discretionary grants on state AI policy alignment, per White House (December 2025), Sidley Austin (December 2025), Morrison Foerster (December 2025), and EPI (2025). The White House fact sheet is titled "PREVENTING A PATCHWORK OF AI REGULATIONS," per White House (December 12, 2025).
Federal Preemption
Governor Hochul gutted the RAISE Act on the same day as the Trump executive order, per Rolling Stone (December 12, 2025). A Congressional moratorium on enforcement of state AI accountability laws failed but is expected to return, per Future Caucus (October 6, 2025). There are currently no federal laws governing the development or use of AI systems, per EPI (2025).
Investment Thesis
The legislator's description of the regulatory spending as a venture capital investment with returns that "could be trillions" is from NOTUS (January 13, 2026).
The Crypto Precedent
The Leading the Future strategist's prior role advising the $130 million crypto-focused PAC during the 2024 cycle is documented in NBC News (October 24, 2025).
2024 Billionaire Political Spending
Americans for Tax Fairness documented that 100 billionaire families contributed a record $2.6 billion to federal elections in 2024, 2.5 times the approximately $1 billion spent by individual billionaire donors in the 2020 cycle, and the highest level since the Citizens United decision, per Americans for Tax Fairness (May 2025). Outside spending on 2024 federal elections reached a record $4.5 billion overall, with billionaires among the primary drivers, per OpenSecrets (November 2024).
Cross-Partisan Data
Pew Research Center data on equal Republican and Democratic concern about AI's expanded role in daily life, per Built In. Gallup polling found 80% of Americans support AI safety even if it means slower development of AI capabilities, per NOTUS (January 2026). Future of Life Institute polling documents bipartisan support for AI safety regulations.
China AI Regulation
China's AI regulatory framework includes mandatory safety assessments before deployment, algorithmic recommendation rules effective 2022, and generative AI content requirements effective 2023, per multiple technology policy sources.
TikTok / PAFACA
The Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA), signed April 24, 2024, required ByteDance to divest TikTok or face a US ban. The legislation passed with striking bipartisan support: 352-65 in the House, 79-18 in the Senate. In upholding PAFACA, the Supreme Court (TikTok Inc. v. Garland, 604 U.S. 56, January 2025) documented that Congress focused on three categories of concern: collection of US user data accessible to the Chinese government through legal compulsion of ByteDance; Chinese government capacity to leverage the platform for intelligence and counterintelligence operations; and potential use of the recommendation algorithm to spread polarizing or destabilizing content. The Court noted Congress found that "Chinese law enables China to require companies to surrender data to the government, making companies headquartered there an espionage tool."