Null: the billion-dollar mistake that keeps compounding

In 1965, a computer scientist added a feature to a programming language because it was easy to implement. Sixty years and countless production crashes later, every modern language designed for reliability has arrived at the same conclusion: that decision was catastrophically wrong.

Every programming language I've worked with has had a dirty secret hiding in plain sight. Not a complex concurrency bug or an arcane memory corruption issue — something far more mundane and far more destructive: the concept of nothing. A special value that means 'no value here,' slipped into the foundations of computing so long ago that most developers treat it as inevitable, like gravity or terrible coffee in standup meetings.

It wasn't inevitable. It was a choice. One man made it in 1965, and the rest of us have been paying for it ever since.

The temptation

Tony Hoare was designing the type system for ALGOL W, a programming language intended as the successor to ALGOL 60. His stated goal was elegant and ambitious: to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler[1]. A type system that would make entire categories of bugs impossible. This was 1965 — the ambition alone was remarkable.

Then came the moment he would spend the rest of his career regretting. 'I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement,' Hoare admitted in his now-famous 2009 presentation at QCon London, titled 'Null References: The Billion Dollar Mistake'[1]. The idea was seductively simple. A special value representing the absence of a value, a universal placeholder for 'nothing is here.' Every reference could potentially point to something real, or to null. The compiler wouldn't object. The programmer could sort it out later.

That word — later — is where the tragedy begins. Every major language that followed inherited the same fundamental design. C, C++, Java, C#, JavaScript, Python — each one copied the convenient fiction that a reference might point to nothing and that managing this was somehow the programmer's problem. Null spread through computing's family tree like a hereditary condition, each generation passing it forward without questioning whether the original decision was sound.

Hoare estimated the cumulative damage at one billion dollars. He was being generous.

The compound interest on nothing

The numbers tell a story that should have prompted a reckoning decades ago. An analysis by OverOps of over one billion error events across more than a thousand Java production applications found that NullPointerException was the number one exception in 70% of environments surveyed[2]. Not occasionally problematic. Not a nuisance in edge cases. Dominant. The same study found that 97% of all logged production errors stem from just ten unique exception types, with null sitting immovably at the top of the list.

The problem was significant enough that Java 14 introduced JEP 358, specifically designed to make NullPointerExceptions more descriptive, because the language's own maintainers acknowledged that NPEs are so pervasive it is 'generally impractical to attempt to catch and recover from them'[3]. When a language's designers build dedicated tooling just to make your most common crash more informative rather than less frequent, the underlying design decision deserves more than a retrospective — it deserves an inquest.

MITRE's 2025 Common Weakness Enumeration ranks NULL Pointer Dereference at number thirteen among the most dangerous software weaknesses — climbing from twenty-one just a year prior, making it one of the fastest-rising vulnerabilities in the entire index[4]. This isn't a historical curiosity filed under 'mistakes we've learned from.' It is an active, accelerating problem in production systems worldwide.

NIST estimated back in 2002 that software bugs cost the United States economy $59.5 billion annually[5]. Null-related failures have consistently claimed a disproportionate share of that figure, and the global cost has only grown as software has consumed every industry.

Google discovered the scale of the problem firsthand when analysing crash data from their Home application. NullPointerExceptions were the single leading cause of crashes reported through Google Play Console. After migrating new feature development to Kotlin — a language with null safety built into its type system — they measured a 33% reduction in null-related crashes, with only 30% of the codebase converted[6]. A third of their most common crash category, eliminated not through better discipline or more careful code review, but by choosing a language that simply refuses to let you ignore the possibility of nothing.
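
To make that concrete, here is a small illustrative Kotlin sketch (mine, not Google's, with hypothetical names) of what null safety in the type system actually means: absence becomes part of the type, an unguarded dereference simply does not compile, and the old behaviour survives only as an explicit opt-in.

```kotlin
// Illustrative sketch only; the lookup and its values are hypothetical.
fun loadNickname(userId: String): String? = null   // may legitimately find nothing

fun main() {
    val nickname: String? = loadNickname("u-42")    // absence is part of the type

    // println(nickname.length)         // does not compile: nickname may be null
    println(nickname?.length ?: 0)      // safe call with a default: prints 0

    if (nickname != null) {
        println(nickname.length)        // smart cast: provably non-null here
    }

    println(nickname!!.length)          // the 1965 wager made explicit: compiles,
                                        // but throws NullPointerException here
}
```

The unsafe path still exists, but it has to be spelled out; the default is the one Hoare wanted in the first place.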

Sixty years of 'we'll handle it later'

The solutions have existed for years, which makes the continued prevalence of null-related failures all the more maddening. Swift introduced optionals in 2014, forcing developers to explicitly acknowledge and handle the absence of a value before the compiler would let them proceed. Rust arrived in 2015 with no null whatsoever — its Option<T> type makes the presence or absence of a value part of the type system itself, checked at compile time with zero runtime cost. Kotlin followed in 2016 with nullable and non-nullable types woven into its syntax. TypeScript added strictNullChecks the same year, giving JavaScript developers an opt-in escape from the chaos.
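
The syntax differs, but the contract is the same in all four. A rough Kotlin sketch of that contract (Kotlin standing in here for Swift's optionals, Rust's Option<T>, and TypeScript under strictNullChecks alike, with hypothetical names): an API that declares a non-nullable parameter cannot be handed a possibly-absent value until the caller has dealt with the absence.

```kotlin
// Hypothetical API: the parameter type promises a value is always present.
fun greet(name: String) = println("Hello, $name")

fun main() {
    val maybeName: String? = System.getenv("GREETING_NAME")   // may be absent

    // greet(maybeName)              // does not compile: String? is not String

    greet(maybeName ?: "guest")      // compiles: absence handled with a fallback
    maybeName?.let { greet(it) }     // compiles: the call runs only if a value exists
}
```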

The pattern is unmistakable. Every modern language that has seriously grappled with software reliability has arrived at the same conclusion independently: the programmer should never be trusted to remember that a value might be absent. The compiler should enforce it. The type system should make it impossible to forget. Hoare's original goal in 1965 — references that are absolutely safe, with checking performed automatically — was correct. His implementation was the betrayal.

And yet the languages where null remains unchecked still dominate production systems worldwide. Billions of lines of code carry Hoare's 1965 decision forward, each line a small wager that every developer who touches it will remember to check for nothing before assuming something. Every day, somewhere, that wager loses.

Hoare called it his billion-dollar mistake. Adjusted for six decades of compound interest across every software system on the planet, the actual figure is incalculable. But the lesson is deceptively simple: when you let programmers ignore the possibility of nothing, nothing is exactly what they'll eventually get. The languages that refuse to let them forget have proven that the fix was never about discipline — it was about design. Choose wisely.


Footnotes

  1. Hoare, C.A.R. (2009). "Null References: The Billion Dollar Mistake." QCon London 2009. InfoQ. https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/

  2. OverOps. "The Top 10 Exception Types in Production Java Applications — Based on 1B Events." Harness Blog. https://www.harness.io/blog/10-exception-types-in-production-java-applications

  3. OpenJDK. "JEP 358: Helpful NullPointerExceptions." https://openjdk.org/jeps/358

  4. MITRE. (2025). "2025 CWE Top 25 Most Dangerous Software Weaknesses." MITRE Corporation. https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html

  5. NIST. (2002). "The Economic Impacts of Inadequate Infrastructure for Software Testing." Planning Report 02-3, National Institute of Standards and Technology. https://www.nist.gov/document/report02-3pdf

  6. Google. (2020). "Google Home Reduces #1 Cause of Crashes by 33% with Kotlin." Android Developers Blog. https://android-developers.googleblog.com/2020/07/Google-home-reduces-crashes.html

TL;DR

Tony Hoare introduced null references in ALGOL W in 1965, later calling it his 'billion-dollar mistake' at QCon London 2009. Analysis of over one billion Java error events confirms NullPointerException dominates 70% of production environments, whilst MITRE's vulnerability index ranks null pointer dereference among the top fifteen most dangerous software weaknesses. Google measured a 33% reduction in null-related crashes simply by adopting a null-safe language. Modern languages — Rust, Swift, Kotlin, TypeScript — have eliminated the problem through compile-time enforcement, proving the solution existed all along: never trust programmers to remember that nothing might be there.
