AI Reality Check
The Forces Challenging the Future of AI
We’ve been operating under the assumption that AI success is inevitable: scale the models, add more compute, wait for the next generation, and breakthrough capabilities will emerge. Companies are betting billions on that assumption.
The pressure to adopt AI or be left behind has been overwhelming. But what if the premise is wrong? What if the current approach faces fundamental constraints that more money and more computing power can’t solve?
The Converging Headwinds
Recently the tone of the conversation has shifted: AI is increasingly described as a bubble. We’re hitting structural limits, and the dominant technical approach appears to have plateaued. These challenges aren’t isolated technical problems; they compound one another. The following are a few of the headwinds AI must overcome to succeed in the long term.
The economics don’t work.
OpenAI projects cumulative losses of $115 billion through 2029. Anthropic burned $2 billion in 2024. The entire industry loses money on every inference request. Companies claim they’ll reach profitability through scale, but 95% of AI pilots fail to deliver returns.
Currently, AI is heavily subsidized, but those subsidies will start to taper off soon. The path to profitability requires price increases that users might not accept. That’s a fundamental business problem.
Scaling has plateaued.
No model feels significantly smarter than GPT-4 from March 2023. OpenAI’s recent models delivered improvements far less noticeable than the leap from GPT-3 to GPT-4. Ilya Sutskever, one of the earliest and most influential champions of the scaling approach, now says “results from scaling up pre-training have plateaued.” A survey of AI researchers found that 76% consider it unlikely that scaling current approaches will reach AGI.
Industry veterans are acknowledging what the benchmarks show: we’ve hit diminishing returns. More compute and larger models aren’t delivering breakthrough capabilities anymore.
The training data is exhausted.
According to Elon Musk, “We’ve now exhausted basically the cumulative sum of human knowledge in AI training.” The industry’s response is synthetic data, but that risks model collapse: train models recursively on AI-generated content, and research has shown them degrading into gibberish within nine generations.
Meanwhile, 74% of new webpages contain AI-generated text. The internet is filling with synthetic content, poisoning the well for future training. It’s a degenerative loop with no clear solution.
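The mechanism is easy to see in miniature. The toy sketch below is my own illustration, not the setup from the model-collapse research: it repeatedly re-estimates a token distribution from text sampled out of the previous generation’s estimate. Once a rare token fails to appear in a synthetic corpus, its probability drops to zero and it can never come back, so the distribution’s tail erodes generation after generation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: a "real" corpus over a vocabulary with a long tail of rare tokens.
vocab_size = 1_000
true_probs = np.arange(1, vocab_size + 1, dtype=float) ** -1.2  # Zipf-like tail
true_probs /= true_probs.sum()

probs = true_probs
corpus_size = 5_000

for generation in range(1, 11):
    # "Train" the next model purely on text sampled from the previous model,
    # with no fresh human-written data mixed back in.
    counts = rng.multinomial(corpus_size, probs)
    probs = counts / corpus_size
    surviving = int((probs > 0).sum())
    print(f"generation {generation}: {surviving} of {vocab_size} tokens still represented")
```

Real language models are vastly more complex, but the feedback loop is the same: each generation trained on the last one’s output forgets a little more of the original distribution, which is why a web increasingly filled with synthetic text is such a problem for future training runs.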
Data center infrastructure can’t keep up.
Gartner predicts that by 2027, 40% of AI data centers will hit power shortages. Those facilities’ electricity demand is climbing toward 500 terawatt-hours annually, 2.6 times what they consumed in 2023. Utilities need five years or more to add new capacity. Hyperscalers can’t wait that long, so they’re cutting deals directly with power providers.
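To make 500 terawatt-hours a year concrete, here is a back-of-envelope conversion to continuous power draw (the reactor comparison is my own rule of thumb, not a figure from Gartner):

```python
# Back-of-envelope: 500 TWh per year expressed as a steady power draw.
annual_demand_twh = 500
hours_per_year = 365 * 24                                      # 8,760 hours

average_draw_gw = annual_demand_twh * 1_000 / hours_per_year   # TWh -> GWh, then GWh/h = GW
print(f"Average continuous draw: {average_draw_gw:.0f} GW")    # roughly 57 GW

# For scale: a large nuclear reactor produces on the order of 1 GW,
# so this is like running dozens of reactors around the clock for data centers alone.
```

That is the kind of load utilities are being asked to absorb on timelines far shorter than the five-plus years it takes them to build new capacity.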
Meanwhile, residents near data centers pay $16-18 more per month on their electric bills. The US faces an 11-gigawatt capacity shortfall this year, growing to 10 gigawatts by 2028. Companies scouting data center sites don’t struggle to find land. They struggle to find grid connections and communities willing to accept them.
Water makes things worse. US data centers consumed 174 billion gallons in 2020, and Texas alone is projected to hit 399 billion gallons by 2030. Large facilities evaporate 5 million gallons daily, enough for a city of 50,000 people. Two-thirds of new data centers built since 2022 sit in regions already facing water stress, putting them in direct competition with residential and agricultural users.
Unproven ROI.
Between 70% and 85% of AI initiatives fail to meet expected outcomes, according to research from MIT and the RAND Corporation. Companies abandoned 42% of their AI projects in 2025, up from just 17% in 2024, per S&P Global Market Intelligence’s survey of over 1,000 enterprises. The average organization cancels 46% of its AI proofs of concept before they reach production.
McKinsey’s 2025 State of AI survey found that only 6% of organizations qualify as “AI high performers,” generating an EBIT impact of 5% or more from their AI use. Just 39% report any enterprise-level EBIT impact at all. There’s a fundamental question about whether these tools can deliver ROI at scale.
Hallucinations and AI slop are eroding trust.
Developer trust in AI tools dropped from 43% to 33% while active distrust climbed to 46%, according to Stack Overflow’s 2025 survey. The decline stems from the effort required to review and rework AI-generated code to address hallucinations, security issues, and quality problems. AI generates code faster than humans can review it.
Research by Faros AI found that code review times increased by 91% and pull request sizes grew by 154% with AI adoption. GitClear’s analysis of 211 million lines of code revealed 10x more duplication and a doubling of code churn, driving a 41% increase in bugs along with rising long-term maintenance costs.
Legal exposure and regulatory requirements are mounting.
A number of copyright lawsuits target AI companies for training on protected works without permission. In February 2025, Thomson Reuters won the first major US ruling against Ross Intelligence. The court found AI training harmed the market for original content and failed the transformative use threshold under the fair use doctrine. Additional lawsuits claim ChatGPT contributed to suicides, testing whether AI companies face product liability for harmful outputs.
The EU AI Act requires full compliance by August 2026, with fines reaching 7% of global annual turnover. Like GDPR, it applies extraterritorially to any company serving EU users. Organizations must classify AI systems by risk level, maintain decision-making documentation, implement human oversight mechanisms, and register high-risk systems in an EU database. Legal costs and regulatory compliance will rise regardless of technical progress.
What This Means for Engineering Leaders
These headwinds aren’t future concerns. They’re affecting decisions now.
The smart play is healthy skepticism. When vendors promise transformative capabilities, demand proof tied to specific business outcomes. When projects require significant investment, build exit criteria based on concrete metrics, not faith in future improvements. When teams want to adopt AI for productivity, calculate the actual costs, including compute, licensing, and the productivity tax of dealing with hallucinations.
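As a starting point, here is a minimal sketch of that kind of accounting. Every number is a placeholder to be replaced with your own measurements; the structure is the point, and in this illustrative setup the review-and-rework tax, not the license fee, dominates the cost side.

```python
# Minimal sketch of an AI adoption cost model. Every number below is a
# placeholder assumption; substitute your own measurements.

def monthly_ai_net_benefit(
    engineers: int,
    license_per_seat: float,      # $/engineer/month for the AI tooling
    inference_spend: float,       # $/month of API or compute usage
    hours_saved_per_eng: float,   # measured hours saved per engineer per month
    review_hours_per_eng: float,  # extra hours reviewing and reworking AI output
    loaded_hourly_rate: float,    # fully loaded $/hour per engineer
) -> dict:
    direct_costs = engineers * license_per_seat + inference_spend
    productivity_tax = engineers * review_hours_per_eng * loaded_hourly_rate
    gross_savings = engineers * hours_saved_per_eng * loaded_hourly_rate
    return {
        "direct_costs": direct_costs,
        "productivity_tax": productivity_tax,
        "gross_savings": gross_savings,
        "net_benefit": gross_savings - direct_costs - productivity_tax,
    }

# Illustrative numbers only: 50 engineers, $30/seat, $2,000/month of inference,
# 8 hours saved but 5 hours of extra review per engineer, $100/hour loaded rate.
print(monthly_ai_net_benefit(
    engineers=50, license_per_seat=30, inference_spend=2_000,
    hours_saved_per_eng=8, review_hours_per_eng=5, loaded_hourly_rate=100,
))
```

Run with honest numbers from your own teams, the same exercise also yields the exit criteria mentioned above: if the net benefit doesn’t clear zero once the productivity tax is counted, the pilot has a measurable reason to stop.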
The more important question is strategic positioning. If the current AI approach faces fundamental limits, what should we build that doesn’t depend on exponential improvements in capability? Where can we create value with current, plateau-level capabilities? What would our architecture look like if AGI never arrives or delivers only incremental gains?
Where This Leaves Us
I’m not arguing that AI has no value. Current tools deliver real productivity gains in specific contexts. I’m arguing that betting your engineering strategy on continued exponential improvement is increasingly risky.
The evidence suggests we’re hitting fundamental constraints—economic, technical, environmental, and social. Companies that recognize this reality and plan accordingly will be better positioned than those waiting for breakthroughs that may never come.
What patterns are you seeing in your organizations? Are the promised gains materializing, or are you seeing the same quiet disappointment?


