Igor Maric / imTheOdd0ne

Broken by design: the economics of modern software

The software industry has spent 25 years learning a simple truth: broken products ship faster, cost less, and often face limited legal consequence. Now the same industry deploys AI systems alongside formal arguments that hallucination cannot be fully eliminated in general-purpose use. AI didn't start this trend. It scaled it.

TL;DR

Mark Minasi argued in 2000 that software vendors had become unusually successful at shifting defect risk onto users through warranty disclaimers. Cory Doctorow later gave the extraction pattern a name. A 2024 arXiv preprint argued that hallucination may be unavoidable for general-purpose LLM deployment, yet companies continue to ship these systems into high-stakes workflows. Recent layoffs removed hundreds of thousands of technology jobs even as AI infrastructure spending surged. The pattern is not accidental. It is incentive made visible.

1 April 2026 · 36 min read · Industry, Quality, Standards, Productivity

There's a moment I've stopped being able to recall: the last time a software update made something worse and I was genuinely surprised. Not irritated. Not resigned. Surprised.

Somewhere in the past decade, irritation became resignation, and resignation became something more troubling — an ambient expectation that the software we depend on is permitted to fail. We have been trained. Not through any single incident but through the accumulated weight of ten thousand small betrayals: the update that broke the printer driver, the subscription tier that quietly acquired advertisements, the AI assistant that delivered a confidently wrong answer with the smooth authority of someone who has never been held accountable for anything.

The question worth asking is not when software got worse. It is when we agreed to accept that it would.

Mark Minasi asked a version of this question in 2000. His book, The Software Conspiracy, made an argument that seemed almost paranoid at the time and reads now like a technical manual for the present decade: software companies had discovered that they could ship defective products, face no legal consequence, and still collect full payment.1 Not because they were especially malicious, but because the economics made mediocrity rational. Minasi argued that mass-market software vendors had become unusually successful at using end-user licence agreements and warranty disclaimers to limit liability exposure. In practice, commercial software has long been sold on terms that disclaim broad warranties and shift much of the risk of defects to users.

When you purchase a microwave and it fails to heat food, you have recourse. When you purchase accounting software and it miscalculates your figures, the licence agreement you clicked through will often disclaim responsibility for consequential losses arising from that failure. This is not a legal technicality buried in fine print. It is the founding business logic of an industry that would eventually produce, twenty-five years later, AI systems advertised as productivity tools whilst shipping with standard warnings that outputs may be inaccurate and must be verified before use.

The wonder is not that software quality has declined. The wonder is that it took this long for the consequences to become visible, and that we have still somehow failed to draw the obvious conclusion.

The extraction mechanism

Cory Doctorow named the phenomenon in November 2022. Enshittification — the word the American Dialect Society named its Word of the Year for 2023, and which Australia's Macquarie Dictionary selected for 2024 — describes the lifecycle of platforms that begins with genuine usefulness and ends with systematic extraction.2 The stages are precise: first, platforms are good to users to attract them; then, once users are captured, the platform degrades its service to users in favour of business customers who pay for access to those users; finally, once business customers are sufficiently dependent, the platform degrades its relationship with them as well, harvesting everything for shareholders.

Doctorow was writing primarily about digital platforms — Amazon's search results filling with paid placements until organic results become a minority, Netflix acquiring subscribers with a clean ad-free offer before introducing an ad tier once the password-sharing crackdown made the platform harder to leave, Facebook showing users the content they requested until it didn't, then charging publishers to reach the audiences who had explicitly asked for their content.

But the mechanism is not specific to platforms. It is a general description of what happens to any product when switching costs become high enough to suppress competition, and when legal accountability is low enough that quality degradation carries no penalty. Software had both conditions before Doctorow gave them a name.

The National Institute of Standards and Technology estimated in 2002 that software defects cost the American economy approximately $59.5 billion annually, and that a third of those costs could be avoided with better testing infrastructure.3 That estimate predates smartphones, cloud services, and large language models, so it is best read as a historical baseline rather than a current total.

The mechanism is not always passive neglect. Sometimes it is active sabotage. Apple shipped iOS 10.2.1 in January 2017 with code that throttled CPU performance on older iPhones with degraded batteries — without disclosing the throttling to users. The practice was exposed in December 2017 and produced a $500 million class-action settlement, a $113 million settlement with more than thirty US state attorneys general, and a 25-million-euro fine from France's consumer fraud authority for misleading commercial practice by omission.4 Intuit deliberately added code to its TurboTax Free File page instructing search engines not to index it — hiding the free version from anyone searching for free tax filing. ProPublica exposed the practice in 2019, the FTC sued in 2022, and Intuit settled with all fifty states for $141 million, compensating more than four million people who had paid for tax filing they were legally eligible to receive for free.5 HP has repeatedly deployed firmware updates that disable third-party ink cartridges in its printers; it paid a $1.5 million class-action settlement in 2019, and the practice remained controversial into 2026.6
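
The Intuit mechanism deserves attention for how small it is. ProPublica's reporting described robots directives telling search engines not to index the Free File page. A minimal sketch in Python of how such a directive works and how one might detect it (the markup below is illustrative, not a copy of Intuit's page):

```python
import re

# Pattern for a robots meta tag carrying a "noindex" directive, the
# mechanism described in the reporting. Attribute order is simplified.
NOINDEX_META = re.compile(
    r'<meta[^>]*name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

def hidden_from_search(html: str, headers: dict[str, str]) -> bool:
    """True if a page asks crawlers not to index it, via either the
    robots meta tag in the HTML or the X-Robots-Tag response header."""
    if NOINDEX_META.search(html):
        return True
    return "noindex" in headers.get("X-Robots-Tag", "").lower()

# An illustrative page: fully visible to a visitor, invisible to search.
page = '<html><head><meta name="robots" content="noindex,nofollow"></head></html>'
print(hidden_from_search(page, {}))  # True
```

One line of markup, in effect, hid a legally mandated free product from the people entitled to it.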

These are not ambiguous cases of platform evolution or product strategy disagreements. They are documented, litigated, fined instances of companies deliberately making their products worse — or deliberately hiding better options — to extract additional revenue from captured users.

I've worked in enough teams that shipped with known defects to recognise the pattern. The defects are rarely catastrophic. They are consistent small failures affecting specific user groups or edge cases — the sort that get triaged onto a backlog rather than treated as a release blocker. The decision to ship is never made from ignorance. It is made because launch dates have been committed, demos have been scheduled, and the cost of delay is considered greater than the cost of the support tickets. The defects go on the backlog. Backlogs grow faster than they are cleared. Most entries are never resolved. This is not an unusual story. It is the operational standard across the industry, documented in post-mortems that get filed and forgotten with reliable regularity.

The quarterly ratchet

The mechanism has a driver, and it sits in the relationship between publicly traded technology companies and their shareholders.

The shift toward shareholder primacy — the doctrine that a corporation's primary obligation is maximising returns to shareholders rather than delivering value to customers or employees — became structurally embedded in how public companies are evaluated, incentivised, and managed through the 1980s and 1990s.7 For software companies, the practical consequence was predictable: investment in quality assurance, which produces no revenue and does not appear on a product roadmap, is harder to justify than investment in features, which can be demonstrated in a sales deck. Quality assurance teams are consistently among the first cut when organisations need to reduce headcount. The people who catch defects before they ship are, in the calculus of shareholder primacy, overhead.

Microsoft shipped multiple cumulative Windows updates across 2023 and 2024 that were subsequently identified as introducing new problems alongside the vulnerabilities they were intended to address.8 IT administrators documented these regressions across community forums and trade publications — updates that broke print spooler functionality, disrupted VPN connectivity, and caused system failures on specific hardware configurations. Not isolated incidents. A pattern recurring across release cycles with enough consistency that enterprise administrators now maintain dedicated procedures for validating patch quality before deploying to production environments. A common recommendation from experienced Windows administrators is to wait at least a fortnight after Patch Tuesday before rolling out updates — not because the patches are untested, but because the first wave of installations surfaces the regressions that Microsoft's own validation missed.
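
The fortnight rule is concrete enough to automate. A minimal sketch of the deferral arithmetic in Python (the 14-day window is the community rule of thumb described above, not a Microsoft recommendation):

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Second Tuesday of the month, when Microsoft ships updates."""
    first = date(year, month, 1)
    # date.weekday(): Monday == 0 ... Sunday == 6; Tuesday == 1.
    first_tuesday = first + timedelta(days=(1 - first.weekday()) % 7)
    return first_tuesday + timedelta(days=7)

def earliest_rollout(year: int, month: int, defer_days: int = 14) -> date:
    """Patch Tuesday plus the deferral window described above."""
    return patch_tuesday(year, month) + timedelta(days=defer_days)

print(patch_tuesday(2026, 4))     # 2026-04-14
print(earliest_rollout(2026, 4))  # 2026-04-28
```

That a patch calendar needs a safety margin at all is the quiet concession: the vendor's release date is treated as the start of testing, not the end of it.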

Google's search quality provides a parallel case. A January 2024 study from Leipzig University, analysing 7,392 product review queries over one year across Google, Bing, and DuckDuckGo, found that higher-ranked pages were more heavily monetised with affiliate marketing and showed measurably lower text quality. Google's algorithmic ranking updates produced only temporary improvements before SEO spam regained its position.9 Reporting based on internal emails released during the DOJ antitrust case highlighted a 2019 internal alarm over search advertising revenue and a subsequent ranking shift that critics argue put monetisation pressure ahead of search quality.10 The product that users rely on to find information was restructured around the needs of advertisers. The users were never consulted. They did not need to be; switching costs made consultation unnecessary.

This is normalisation in practice. An industry has adapted its procedures to accommodate a product that reliably fails in predictable ways. The failure is priced into the operational overhead of using the product, and the licence agreement ensures that the cost of failure is borne by the user, not the vendor.

The July 2024 CrowdStrike incident demonstrated what that architecture of normalised brittleness costs when it fails at scale. A faulty content update to CrowdStrike's Falcon sensor triggered a defect in a Windows kernel driver, rendering approximately 8.5 million machines inoperable simultaneously.11 Airlines cancelled thousands of flights. Hospitals reverted to paper records. The cascading failure exposed how deeply critical infrastructure had been built on a stack of software products each individually fragile, held together by operational procedures that assumed no single component would fail catastrophically — an assumption that turned out to be incorrect.
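
CrowdStrike's own root-cause analysis later attributed the crash to a mismatch between the twenty-one input fields its content interpreter expected and the twenty that the new template type supplied, producing an out-of-bounds read in kernel mode. A toy reconstruction of that defect class, in user-space Python (field contents invented; the real interpreter was native code in a kernel driver):

```python
# A parser built to expect 21 fields per record receives 20.
EXPECTED_FIELDS = 21

def interpret(record: list[str]) -> str:
    # The fatal assumption: read the 21st field without checking
    # that the record actually contains 21 entries.
    return record[EXPECTED_FIELDS - 1]

record_from_new_template = ["value"] * 20  # the shape of the bad update

try:
    interpret(record_from_new_template)
except IndexError:
    # In user space this is a catchable exception. In a kernel-mode
    # driver, the equivalent out-of-bounds read crashed the machine
    # on every boot until the offending file was removed by hand.
    print("out-of-bounds read")
```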

The event produced congressional hearings, security reviews, and extensive industry commentary. Delta Air Lines sued CrowdStrike for $500 million in damages; shareholders and affected passengers filed their own claims.12 Yet none of this litigation has produced meaningful change to the legal framework that permits software vendors to ship defect-prone products as a default commercial practice. The "as-is" clause remains the industry standard. The next major incident will be investigated with identical procedural energy and, most likely, identical legislative outcome.

The subscription promise

Adobe sold perpetual licences for its creative software until 2013, when it announced a transition to Creative Cloud — a subscription model that would deliver continuous updates, cloud integration, and a lower upfront cost.13 The pitch was genuinely compelling, and the adoption numbers reflected that. Continuous updates were a real improvement over major-version cycles. The cloud infrastructure added capabilities that justified the shift.

A decade later, Creative Cloud subscribers pay considerably more annually than the equivalent perpetual licences would have cost over the same period. In 2024, Adobe updated its terms of service with provisions that customers and privacy advocates interpreted as enabling access to subscriber content for AI model training — a reading that provoked significant backlash when the changes were noticed without prominent disclosure.14 Adobe disputed this interpretation, clarifying that it had not trained generative AI on customer content, and subsequently revised the terms. The FTC, meanwhile, filed a separate complaint against Adobe and two of its executives in June 2024 for hiding early termination fees that the agency said could cost consumers hundreds of dollars, and making cancellation deliberately difficult through multi-page flows, dropped calls, and representative resistance.15 The subscription model that had once been a genuine improvement had become, a decade later, the kind of arrangement that required federal regulatory action to exit.

Netflix launched its ad-supported subscription tier in November 2022 after years of presenting the service as a clean, ad-free alternative. By 2024, the ad tier had become the most selected plan at signup in multiple markets, a fact Netflix reported in earnings materials as evidence of market expansion.16 What it was also evidence of was the enshittification arc running at precisely the pace Doctorow described: attract subscribers with a clean offer, raise switching costs through content investment, introduce the monetisation model the subscriber was paying to avoid, and report the adoption rate as a growth metric.

Amazon's search results — the interface customers use to find products — have increasingly filled with paid placements and sponsored listings.17 Searches for ordinary products return pages where the organic results representing what customers actually requested appear after multiple rows of paid placements from sellers who have bid for visibility. The search function, which existed to help customers find things, has been repurposed to charge sellers for access to customers who are actively trying to find things. The customers are the product and the audience for the advertising simultaneously, in an arrangement they did not agree to and cannot meaningfully exit.

None of this is the result of rogue decisions made by individuals acting against their organisations' interests. These are product managers and revenue teams executing strategies that have been normalised across an industry that knows it operates without meaningful accountability. The legal infrastructure that would apply to a manufacturer who degraded product quality after purchase does not apply here. The "as-is" clause that Minasi identified as the foundational business logic of the software industry in 2000 has survived every challenge intact.

The enshittification of atoms

If the extraction mechanism were confined to software — to subscription tiers and search results and terms of service — it might be dismissed as a cost of doing business in a digital economy. Tesla demonstrates that it is not confined to software. The same playbook that Adobe applied to Creative Cloud and Netflix applied to its ad tier has been applied to a car, with consequences measured not in degraded user experience but in downgraded safety ratings and a body count.

Tesla's Full Self-Driving feature has been promised as imminent for a decade. In December 2015, Elon Musk told Fortune that Tesla was two years away from full autonomy. In October 2016, he promised a coast-to-coast autonomous drive from Los Angeles to New York by the end of 2017 — a promise reiterated at a TED talk in April 2017 and extended to "three to six months" in February 2018.18 In April 2019, he promised over a million robotaxis on the road by 2020.19 Every subsequent year brought a similar promise. None was delivered. As of early 2026, Full Self-Driving remains classified as SAE Level 2 — an advanced driver assistance system that requires constant human supervision. It has never been, at any point in its commercial history, full self-driving.

The pricing tells its own story. FSD was originally sold as a one-time purchase at $5,000 in 2019. The price escalated through $8,000, $10,000, and $12,000, peaking at $15,000 in September 2022. In July 2021, Tesla introduced a subscription option at $199 per month. On 14 February 2026, Tesla eliminated the one-time purchase entirely — FSD is now subscription-only.20 Customers who paid up to $15,000 for a feature that was never fully delivered must now pay monthly for the same incomplete product. This is the Adobe Creative Cloud model applied to a vehicle: sell a perpetual licence, make the product subscription-only, then charge indefinitely for a capability that was promised as a one-time purchase and has never been fully realised.
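
The break-even arithmetic is short, using only the prices cited above:

```python
# Retired one-time purchase versus the subscription that replaced it.
ONE_TIME_PEAK = 15_000   # USD, the September 2022 peak price
MONTHLY = 199            # USD per month, the subscription price

months_to_match = ONE_TIME_PEAK / MONTHLY
print(f"break-even: {months_to_match:.0f} months "
      f"(about {months_to_match / 12:.1f} years)")
# break-even: 75 months (about 6.3 years)

print(f"ten years of subscription: ${MONTHLY * 12 * 10:,}")
# ten years of subscription: $23,880
```

Anyone who keeps the car for more than about six years pays more under the subscription than the perpetual price ever asked, for the same incomplete feature.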

The hardware degradation is more troubling. In May 2021, Tesla removed forward-facing radar sensors from the Model 3 and Model Y, replacing them with a camera-only system marketed as Tesla Vision. In October 2022, ultrasonic sensors were removed as well.21 Regulators and safety assessors later documented mounting concern around the shift to camera-only perception, while phantom-braking complaints continued to accumulate.21 A car company removed safety equipment that worked, replaced it with software that attracted sustained regulatory concern, called it an upgrade, and sold the result to customers who had purchased a vehicle with the reasonable expectation that its safety hardware would not be stripped out via a manufacturing change.

The pattern extends to products that exist primarily as demonstrations. At Tesla's AI Day in August 2021, the Optimus robot was introduced with a person in a robot costume. Reporting on the October 2024 "We, Robot" event later described the Optimus demonstrations as relying heavily on human assistance and remote operation.22 In Tesla's Q4 2025 earnings call, Musk acknowledged that no Optimus robots were yet performing broadly useful work at scale.22

The robotaxi demonstrations followed the same pattern. Reporting in early 2026 described Tesla's Austin robotaxi testing as small-scale and still closely supervised, a long way from the promise of a million robotaxis by 2020.23 The Cybertruck also arrived late and at prices and specifications that diverged materially from the figures presented at launch.24

Regulators have already issued major recalls and expanded investigations covering Tesla's driver-assistance systems, citing fatal crashes, collision allegations, and continuing safety concerns.25 Multiple recalls. A decade of broken promises. And the company continues to sell a feature called Full Self-Driving that is not, by any regulatory or engineering definition, full self-driving.

Tesla is the software industry's playbook made physical. A company that sells vaporware as a premium feature, removes safety hardware and calls it innovation, demonstrates robots that are secretly humans, and absorbs the regulatory consequences as a cost of doing business. The "as-is" clause has metastasised from software EULAs into the physical world, and the results are measured in crash statistics.

The hallucination economy

The technology industry has spent the last three years deploying large language models into production environments that carry significant consequences — legal research, medical information, financial analysis, customer service for regulated industries. It has done this in full knowledge that these systems hallucinate. Not occasionally. Structurally.

A 2024 preprint from researchers at the National University of Singapore, published on arXiv and subsequently revised, argued formally — using results from learning theory — that hallucination in large language models is mathematically inevitable for any computable LLM deployed as a general problem solver.26 The result, if it survives peer review intact, is not a description of current models that might be fixed with better training data or more sophisticated architectures. It is an argument about the fundamental limits of the technology as applied to general-purpose tasks. Systems that predict plausible next tokens based on training distributions will, by the nature of that process, generate confident outputs that are inconsistent with verifiable reality. This is not a bug awaiting a patch. It is the mechanism.
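
The shape of the argument is worth seeing, because it explains why better training data cannot help. In the preprint's framing, hallucination is any disagreement between a model's output and a computable ground-truth function, and the proof proceeds by diagonalisation. Stated loosely (notation mine, simplified from the paper):

$$
\forall\, h \in \mathcal{H} \;\; \exists\, f \;:\; \{\, s \mid h(s) \neq f(s) \,\} \text{ is infinite}
$$

where $\mathcal{H}$ is the set of all computable LLMs, $f$ ranges over computable ground-truth functions, and $s$ over input strings. Fix any model, however trained, and there exists a ground truth it contradicts on infinitely many inputs. The quantifier order carries the argument: the guarantee of failure precedes any choice of architecture or data.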

This paper was published in January 2024. By the end of that year, organisations across multiple industries had integrated LLMs into workflows where hallucination carried material consequences.

Law firms that had not yet received the memo were filing AI-generated citations to cases that did not exist in federal courts. The Mata v. Avianca case in 2023 — where lawyers submitted ChatGPT-generated citations to non-existent precedents and were sanctioned by the court — was not an isolated failure of individual judgement.27 It was the leading edge of a pattern: by 2025, courts across the United States were still sanctioning lawyers over AI-generated fabricated citations in filings.28 A study in JAMA Pediatrics found ChatGPT produced incorrect diagnoses in more than eighty per cent of paediatric cases tested.29 ECRI, the patient safety organisation, named misuse of AI chatbots the number one health technology hazard for 2026, citing cases where chatbots suggested incorrect diagnoses, recommended toxic substitutes, and invented non-existent cancer treatments.30

The standard vendor response is a disclaimer. Every major LLM product ships with language acknowledging that the system may generate inaccurate information and that users accept all risk from acting on its outputs. There is something close to admirable in the consistency of this position. At least they have stayed on message.

The vendor response to measured hallucination rates is instructive. The Vectara Hallucination Leaderboard, which benchmarks grounded summarisation across thousands of articles, found in late 2025 that reasoning models — the ones marketed as the most capable — consistently exceeded ten per cent hallucination rates, with some reaching above twenty per cent.31 A Stanford HAI study published in the Journal of Empirical Legal Studies in 2025 found that even dedicated legal AI tools from LexisNexis and Thomson Reuters hallucinated on more than seventeen per cent of queries, whilst general-purpose LLMs hit sixty-nine to eighty-eight per cent on specific legal questions.32 Researchers in 2025 continued to find that LLMs often project confidence despite giving incorrect answers.33

The industry's preferred defence is that humans remain in the loop: AI generates, humans verify. The research on automation bias suggests this is inadequate. Parasuraman and Manzey's seminal 2010 study in Human Factors established that automation bias — the tendency to defer to automated recommendations over contradictory evidence — affects both novices and experts, cannot be eliminated through training or instruction, and persists in team settings as well as individual work.34 The "human in the loop" is not a safety mechanism. It is a liability transfer dressed as a process control.

The human cost

The defects discussed so far — broken updates, degraded search results, hallucinated citations — are presented as inconveniences, as friction in a system that broadly functions. This framing understates the consequences. Software defects and AI failures have killed people and driven vulnerable individuals to suicide. The accountability that followed has been, in every case, inadequate.

Boeing's 737 MAX was equipped with the Maneuvering Characteristics Augmentation System — MCAS — a software system designed to prevent aerodynamic stalls by automatically pushing the aircraft's nose down based on angle-of-attack sensor data. MCAS relied on a single sensor with no redundancy. When that sensor fed faulty data, MCAS repeatedly forced the nose down whilst pilots fought to regain control. Lion Air Flight 610 crashed thirteen minutes after takeoff from Jakarta on 29 October 2018, killing all 189 aboard. Ethiopian Airlines Flight 302 crashed shortly after takeoff from Addis Ababa on 10 March 2019, killing all 157 aboard. Three hundred and forty-six people, killed by a software defect in a system that Boeing's own engineers and test pilots knew was problematic but whose limitations were not disclosed to the FAA, airlines, or pilots.35

Boeing pleaded guilty to conspiracy to defraud the FAA in July 2024. The judge rejected the plea deal in December 2024, after victims' families objected that it made concessions no ordinary criminal defendant would receive. The DOJ dropped the criminal case entirely in May 2025, reaching a financial agreement worth approximately $1.1 billion.36 No Boeing executive was criminally charged. Three hundred and forty-six people dead from a software defect, and the legal system produced a financial agreement rather than prison sentences for company leadership.

The AI era has added its own body count. Sewell Setzer III was fourteen years old when he died by suicide on 28 February 2024, after months of intensive interaction with a Character.AI chatbot. The chatbot, modelled after a fictional character, had engaged in sexualised conversations with the minor and, according to the lawsuit filed by his mother in October 2024, encouraged self-harm.37 Zane Shamblin was twenty-three when he died by suicide on 25 July 2025 after a four-hour conversation with ChatGPT. As he wrote repeatedly about having a gun, the model responded with affirmations. It took approximately four and a half hours before ChatGPT sent him a crisis hotline number.38 By November 2025, OpenAI faced seven lawsuits in California state courts, including four wrongful death suits, with allegations that internal staff had warned that GPT-4o was dangerously sycophantic before its release.38

The National Eating Disorders Association replaced its human helpline with an AI chatbot called Tessa in 2023. The chatbot, enhanced with generative AI capabilities by its vendor without NEDA's approval, began recommending calorie counting, regular weigh-ins, and measuring body fat with calipers — advice directly dangerous to people with eating disorders, delivered by a system that had replaced the humans trained to recognise exactly this kind of harm. Tessa was taken offline within hours of the public complaint.39

Elaine Herzberg, forty-nine, became the first pedestrian killed by a self-driving car on 18 March 2018, when an Uber test vehicle struck her in Tempe, Arizona. Uber had disabled the vehicle's automatic emergency braking system. The software detected her 5.6 seconds before impact but could not classify her as a pedestrian because she was not near a crosswalk. Uber as a corporation faced zero criminal charges.40 As of October 2025, sixty-five fatalities have been reported involving Tesla Autopilot or Full Self-Driving, with the NHTSA identifying a critical safety gap contributing to at least 467 collisions.25

In every case, the pattern is the same: individual scapegoating where possible, corporate financial settlements where unavoidable, and no structural change to the legal framework that permits these products to operate under disclaimer cover. The backup driver in the Uber incident received three years' probation. The fundamental question — whether a company that deploys software it knows to be defective bears criminal liability for the consequences — remains judicially untested.

The product liability that would apply to a pharmaceutical company selling a medication that caused harm in a third of patients does not apply to a technology company selling an AI system that generates incorrect outputs with documented frequency. The "as-is" disclaimer that has been upheld in court for twenty-five years now covers systems that are, by their own vendors' admission, structurally incapable of reliability. The industry is not unaware of this. It is exploiting it. The same liability exemption that allowed Microsoft to ship buggy Windows updates without consequence, and allowed Adobe to revise terms of service without penalty, now allows AI vendors to ship systems with known structural limitations, collect enterprise contracts, and route all accountability back to the user who clicked through the terms.

The engineer displacement

Between January 2022 and the end of 2025, technology companies cut hundreds of thousands of jobs.41 The wave was not uniform — it accelerated in early 2023 following the Federal Reserve's interest rate increases that ended the cheap-capital era, then continued through 2024 and into 2026.

Google laid off 12,000 employees in January 2023. The internal memo cited the need to direct resources toward artificial intelligence as a priority. Simultaneously, Google announced its Bard AI product, whose first promotional video contained a hallucination — an incorrect claim about the James Webb Space Telescope that was publicly fact-checked within hours of the announcement.42 The stock dropped several percentage points. Recovery was swift. The layoffs and AI investment both continued.

The pattern repeated across the industry. Salesforce cut approximately 8,000 positions in early 2023 and simultaneously expanded its AI product push.43 Meta's two rounds of mass layoffs in 2022 and 2023 totalled more than 21,000 positions under the "Year of Efficiency" banner.44 SAP announced 8,000 layoffs in early 2024 as part of a transformation programme tied in part to its AI strategy.45

The calculation is being made publicly, in earnings calls and investor materials: AI tools can substitute for human engineers and analysts at a cost that improves short-term margins. Klarna provided the most instructive case study of where this logic leads. In February 2024, the company publicly claimed that AI had replaced 700 customer service agents. By May 2025, CEO Sebastian Siemiatkowski told Bloomberg that the company had gone too far and was rehiring human staff.46 Gartner predicted in February 2026 that fifty per cent of companies that cut staff to implement AI would rehire within two years, concluding that the technology was not mature enough to replace human expertise at the scale the market had anticipated.47

What is not being made public is the quality differential. The DORA 2024 State of DevOps Report, which surveyed approximately 39,000 professionals across thousands of organisations, found that organisations using AI tools for software delivery showed a 7.2 per cent reduction in delivery stability compared to organisations not using them — meaning more incidents, more rollbacks, more failures in production.48 This data does not appear in the earnings materials. The margin improvement does. Nor does the more sceptical line from Goldman Sachs, which argued in 2024 that AI would need to solve genuinely hard, revenue-generating problems to justify its cost base.49 Sequoia Capital's David Cahn identified in June 2024 a gap of approximately $500 to $600 billion between AI infrastructure spending and the actual revenue the AI ecosystem was generating — a gap that had not meaningfully narrowed by late 2025.50 The industry is spending hundreds of billions on infrastructure for systems that cannot be made reliable, whilst cutting the people who maintained reliability, and reporting the resulting cost reduction as a productivity gain.
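
Cahn's figure was a published back-of-envelope rather than a survey result, and it is short enough to reproduce. A sketch of the arithmetic as he laid it out (every input is his June 2024 estimate; only the multiplication is checked here):

```python
# David Cahn's "$600B question," reconstructed from his essay.
nvidia_dc_run_rate = 150e9                   # projected annualised GPU revenue
data_centre_capex = nvidia_dc_run_rate * 2   # ~$1 of energy and data-centre
                                             # cost for every $1 of GPU
required_ai_revenue = data_centre_capex * 2  # buyers need ~50% gross margin
print(f"AI revenue needed to justify the spend: "
      f"${required_ai_revenue / 1e9:,.0f}B")  # $600B
```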

These cuts also reduce the review capacity and institutional memory that normally catch defects before release — the people who built and maintained the review processes, the security checks, and the contextual knowledge that prevents predictable failures from reaching production. In many organisations, that lost capacity is being offset not by deeper human review but by heavier reliance on AI systems whose limitations are already well documented.

I've watched this play out in enough organisations to recognise the arc. You cut the people who know what they don't know, replace them with tooling that doesn't know what it doesn't know, ship faster because the review cycle is shorter, and inherit the consequences eighteen months later when the failures accumulate into something that can no longer be managed by the team that remains. The backlog grows faster than before. The product becomes more fragile than before. The disclaimer in the terms of service remains unchanged throughout.


Mark Minasi's book is twenty-six years old. The argument it makes has become more accurate with every year that has passed. Commercial software remains a product category in which broad warranty disclaimers and risk-shifting terms are routine rather than exceptional. The consumers who accept this — and we all accept it, because there is no alternative that does not — have been trained through sustained exposure to expect failure as the normal operating condition of the tools we depend on.

The trajectory from Minasi's observation in 2000 to the current moment is not a story of deterioration. It is a story of maturation. The software industry has become extraordinarily proficient at what it actually does, which is not building reliable software — it is capturing users into dependency relationships and extracting maximum value from those relationships whilst the legal framework that might otherwise constrain the quality floor remains unenforced.

The AI layer adds the element that makes this a complete system rather than a collection of related failures. The formal argument that hallucination may be unavoidable for general-purpose LLM deployment provides something that no previous era of software development had: an academic case for why the product may never be fully reliable, attached to a product that is being deployed into consequential workflows across modern organisations. The "as-is" disclaimer, which was always a legal fiction dressed as consumer information, now has a theoretical underpinning. The product may fail not because of poor engineering, but because of the fundamental nature of the technology. The terms of service say so. The academic literature is building the case. The sales materials emphasise the productivity gains.

The defect dividend — the business model of shipping products that are permitted to be broken — was always going to find its most efficient expression in a product that is structurally incapable of reliability, whose unreliability has been academically validated, and whose adoption is being accelerated by the elimination of the engineers who might otherwise catch the outputs before they cause harm.

We have spent twenty-five years building an industry on the premise that software can disclaim its way out of accountability, and watched as that premise has been tested against increasingly high-stakes applications and survived intact. The question is not whether the next generation of AI deployments will produce failures more serious than a hallucinated court citation or an advertising intrusion into a subscription product — they already have. Fourteen-year-olds are dead. Three hundred and forty-six airline passengers are dead. The question is whether the legal frameworks that have permitted this model to persist will finally find their limit.

There are small signs that they might. In February 2024, a British Columbia tribunal held Air Canada liable for incorrect refund information provided by its customer service chatbot, rejecting the company's remarkable argument that the chatbot was a "separate legal entity" for which Air Canada bore no responsibility.51 The EU's revised Product Liability Directive, published in November 2024, explicitly classifies software and AI systems as "products" subject to strict liability for the first time — the same standard that applies to physical goods. When technical complexity makes it excessively difficult for claimants to prove a defect caused their harm, courts may presume defectiveness. Failure to provide security updates can constitute a defect. The directive applies to products placed on the market from 9 December 2026.52

Whether this represents a genuine shift or merely a European regulatory gesture that the global technology industry routes around remains to be seen. The "as-is" clause has survived twenty-five years of escalating consequences. It has survived broken software, degraded products, captured platforms, hallucinating AI systems, and dead consumers. It will take more than a directive to dismantle a business model this profitable.

Given the track record, I wouldn't hold my breath.


Footnotes

  1. Mark Minasi, The Software Conspiracy: Why Companies Put Out Faulty Software, How They Can Hurt You, and What You Can Do About It (McGraw-Hill, 2000).

  2. Cory Doctorow, "Pluralistic: 'Enshittification'," Pluralistic.net, 28 November 2022. Doctorow coined the term in this post. The American Dialect Society named enshittification its 2023 Word of the Year on 5 January 2024; Australia's Macquarie Dictionary selected it for 2024 on 25 November 2024. See also Doctorow, "The 'Enshittification' of TikTok," WIRED, 23 January 2023, for the extended essay.

  3. Research Triangle Institute, "The Economic Impacts of Inadequate Infrastructure for Software Testing," National Institute of Standards and Technology Planning Report 02-3, May 2002. The study estimated annual software defect costs to the US economy at approximately $59.5 billion.

  4. NPR, "Apple Agrees To Pay $113 Million To Settle 'Batterygate' Case Over iPhone Slowdowns," 18 November 2020. See also MacRumors on the $500 million class-action settlement payouts (January 2024) and the Library of Congress on France's 25-million-euro fine (February 2020).

  5. ProPublica, "TurboTax Deliberately Hides Its Free File Page From Search Engines," 2019. The FTC sued Intuit in March 2022; Intuit settled with all 50 states for $141 million. See NPR, "TurboTax maker Intuit is ordered to stop its deceptive 'free' services ads," 23 January 2024.

  6. ClassAction.org, "HP Printer Ink Firmware Lawsuit." HP's Dynamic Security firmware restrictions have generated recurring litigation and consumer complaints; the 2019 settlement resolved one such suit. See also 2026 reporting on the continuing controversy over third-party cartridge lockouts.

  7. Lynn Stout, The Shareholder Value Myth: How Putting Shareholders First Harms Investors, Corporations, and the Public (Berrett-Koehler Publishers, 2012).

  8. Susan Bradley, "Patch Tuesday problems — a running list," Computerworld, multiple entries 2023–2024. See also The Register's ongoing coverage of Windows update regressions across the same period.

  9. Janek Bevendorff, Matti Wiegmann, Martin Potthast, and Benno Stein, "Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search Engines," Leipzig University and Bauhaus-Universität Weimar, published January 2024.

  10. Edward Zitron, "The Man Who Killed Google Search," Where's Your Ed At, 24 April 2024. Drawing on internal Google emails released during the DOJ antitrust case.

  11. Microsoft, "Helping our customers through the CrowdStrike outage," Microsoft On the Issues, 20 July 2024. Microsoft confirmed approximately 8.5 million Windows devices were affected by the defective CrowdStrike Falcon sensor content update.

  12. Associated Press, "Delta sues CrowdStrike over global tech outage," 25 October 2024. See also CNBC coverage of shareholder and passenger litigation following the July 2024 incident.

  13. Adobe Inc., press release announcing the Creative Cloud transition, May 2013.

  14. Kate Knibbs, "Adobe Is Asking for a Lot of Trust with Its New Terms of Service," WIRED, June 2024. Adobe revised its terms following public criticism over provisions that customers and privacy advocates interpreted as enabling user content to be used for AI model training. Adobe subsequently clarified that it had not trained generative AI on customer content.

  15. Federal Trade Commission, "FTC Takes Action Against Adobe and Executives for Hiding Fees, Preventing Consumers from Easily Cancelling Software Subscriptions," FTC press release, June 2024.

  16. Netflix, "Building a Great Ad Experience," Netflix ad-business update, 12 November 2024. Netflix confirmed that over fifty per cent of new sign-ups in ad-supported markets were for the ad-supported tier. See also Netflix, Inc. earnings materials, 2024.

  17. Cory Doctorow, "Amazon's enshittification," Pluralistic.net, 2023. Doctorow documented the proportion of paid placements on Amazon search results pages for common product categories, finding the first several screens to be majority advertising.

  18. Fortune interview with Elon Musk, December 2015. Tesla press call announcing its second-generation Autopilot hardware, October 2016, promising an LA-to-NYC autonomous drive by end of 2017. TED talk, April 2017. Reuters reporting on the "three to six months" claim, February 2018.

  19. Tesla Autonomy Day, April 2019. Musk stated Tesla would have "over a million robotaxis" on the road by 2020.

  20. Tesla pricing history documented by Electrek, The Drive, and CNBC across 2019–2026. FSD subscription launched July 2021 at $199/month. One-time purchase eliminated 14 February 2026.

  21. Tesla removed radar from Model 3/Y production starting May 2021 and ultrasonic sensors starting October 2022. NHTSA and IIHS documentation, together with later regulatory complaints, recorded sustained concern around phantom braking and the shift to camera-only perception.

  22. Reporting from Bloomberg, TechCrunch, and Fortune on the October 2024 "We, Robot" event described Tesla's Optimus demonstrations as relying heavily on human assistance and remote operation. In Tesla's Q4 2025 earnings call, Musk acknowledged that the robots were not yet performing broadly useful work at scale.

  23. Reuters and Bloomberg reporting in early 2026 described Tesla's Austin robotaxi testing as small-scale and still closely supervised.

  24. Coverage from CNBC, TechCrunch, and Car and Driver documented that the Cybertruck's launch timing, price, and delivered specifications diverged materially from Tesla's 2019 presentation.

  25. NBC News reporting in October 2025 and later NHTSA recall and investigation activity documented fatal crashes, collision allegations, and expanding regulatory scrutiny of Tesla Autopilot and Full Self-Driving.

  26. Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models," arXiv:2401.11817, submitted 22 January 2024, revised February 2025. The preprint employs learning theory to argue that hallucination cannot be eliminated from computable LLMs deployed as general problem solvers. Note: this is an arXiv preprint, not a peer-reviewed publication; the result applies specifically to general-purpose deployment scenarios.

  27. Benjamin Weiser, "Here's What Happens When Your Lawyer Uses ChatGPT," The New York Times, 27 May 2023. The Mata v. Avianca case involved lawyers submitting ChatGPT-generated citations to non-existent cases to a federal court, resulting in judicial sanctions and significant professional consequences.

  28. Law360 coverage and court orders across 2024–2026 documented continuing sanctions over AI-generated fabricated citations in US court filings.

  29. The JAMA Pediatrics study found ChatGPT made incorrect diagnoses in over 80 per cent of paediatric cases. See also researchers finding that 88 per cent of chatbot health responses contained false information.

  30. ECRI, "Misuse of AI Chatbots Tops Annual List of Health Technology Hazards," 2025. ECRI named misuse of AI chatbots the number one health technology hazard for 2026, citing cases of incorrect diagnoses, toxic substitutes, and hallucinated cancer treatments. See also Fierce Healthcare and MedTech Dive coverage.

  31. Vectara Hallucination Leaderboard, benchmarking grounded summarisation across 7,700+ articles, late 2025. Reasoning models consistently exceeded 10 per cent hallucination rates.

  32. Stanford HAI, study published in Journal of Empirical Legal Studies, 2025. Found dedicated legal AI tools hallucinated on 17+ per cent of queries; general-purpose LLMs hit 69–88 per cent on specific legal questions.

  33. Carnegie Mellon University, "AI Chatbots Remain Confident — Even When They're Wrong," July 2025. The reporting summarised research finding that LLMs can remain overconfident even when their answers are incorrect.

  34. Raja Parasuraman and Dietrich H. Manzey, "Complacency and Bias in Human Use of Automation: An Attentional Integration," Human Factors, Vol. 52, No. 3, 2010. The seminal study establishing that automation bias affects novices and experts alike and cannot be eliminated through training.

  35. National Transportation Safety Board investigations of Lion Air Flight 610 (October 2018) and Ethiopian Airlines Flight 302 (March 2019). See also PBS Frontline coverage and the Belfer Center, Harvard Kennedy School, case study on Boeing 737 MAX.

  36. Boeing agreed to plead guilty to conspiracy to defraud the FAA in July 2024 (CNBC, 8 July 2024). The judge rejected the plea deal in December 2024 (NPR, 5 December 2024). The DOJ dropped the criminal case in May 2025, reaching a financial agreement worth approximately $1.1 billion (CNN, 30 May 2025). No individual Boeing executive was criminally charged.

  37. CNN, "Family of 14-year-old who died by suicide sues Character.AI," October 2024. See also NBC News and NPR (September 2025) on subsequent Character.AI lawsuits and the January 2026 settlement with Google.

  38. CNN, "Parents sue OpenAI over son's death after ChatGPT conversation," November 2025. See also KQED and Deseret News on the seven California lawsuits including four wrongful death suits. Internal allegations regarding GPT-4o sycophancy warnings reported by the Social Media Victims Law Center.

  39. NPR, "An eating disorders chatbot offered dieting advice, raising fears about AI in health," 8 June 2023. See also CBS News and Washington Post coverage.

  40. NPR, "Feds Say Self-Driving Uber SUV Did Not Recognize Jaywalking Pedestrian In Fatal Crash," November 2019. Uber as a corporation faced zero criminal charges per Yavapai County Attorney determination (NPR, March 2019).

  41. Layoffs.fyi, Tech Layoff Tracker, accessed 2026. https://layoffs.fyi. The tracker documents hundreds of thousands of technology industry layoffs across thousands of companies from January 2022 onward.

  42. Manish Singh and Haje Jan Kamps, "Google's Bard AI chatbot gave a wrong answer in its first demo," TechCrunch, 8 February 2023.

  43. Reuters, "Salesforce to lay off about 8,000 employees," January 2023. Salesforce was simultaneously expanding its AI product messaging and related tooling.

  44. Meta Platforms, Inc. Mark Zuckerberg announced approximately 11,000 layoffs in November 2022 and approximately 10,000 additional cuts in March 2023 under the "Year of Efficiency" programme.

  45. SAP SE press release, "SAP Announces Transformation Program," January 2024, announcing approximately 8,000 position restructuring attributed to AI transformation.

  46. Bloomberg, reporting on Klarna CEO Sebastian Siemiatkowski acknowledging the company had gone too far in replacing customer service agents with AI, May 2025.

  47. Gartner, press release, February 2026. Predicted 50 per cent of companies that cut staff for AI will rehire by 2027.

  48. DORA, "2024 Accelerate State of DevOps Report," Google Cloud, 2024. The survey of approximately 39,000 professionals across thousands of organisations found a 7.2 per cent reduction in delivery stability and a 1.5 per cent decrease in throughput among teams using AI tools for software delivery.

  49. Goldman Sachs research commentary in 2024, including analysis from Jim Covello, questioned whether AI economics could justify the scale of infrastructure spending without materially harder, revenue-generating use cases.

  50. David Cahn, "AI's $600B Question," Sequoia Capital, June 2024, updated December 2025. Identified a gap of $500–600 billion between AI infrastructure spending and actual revenue generation.

  51. Moffatt v. Air Canada, British Columbia Civil Resolution Tribunal, 14 February 2024. The tribunal held Air Canada liable for negligent misrepresentation by its chatbot, rejecting the company's argument that the chatbot was a separate legal entity.

  52. Directive (EU) 2024/2853 of the European Parliament and of the Council, published 18 November 2024, revising the Product Liability Directive. Software and AI systems are explicitly classified as "products" subject to strict liability. Burden-of-proof reform allows courts to presume defectiveness when technical complexity creates excessive difficulties for claimants. Applicable to products placed on the market after 9 December 2026.
