When Algorithms Burn Down the City: Lessons from New York's Fire Crisis for Today's AI Revolution
Interviews
Oct 16, 2025

Jowanza Joseph
CEO, Parakeet Risk

Joe Flood
Journalist and author
In Episode 6 of Industrial Risk: Beyond The Blueprint, journalist Joe Flood explores how a computer model caused entire New York City neighborhoods to burn down in the 1970s. Drawing on Joe's seven-year investigation into this algorithmic disaster, this article distills essential warnings for today's AI-driven society.
The Genesis of Algorithmic Catastrophe: RAND Meets New York City
The story begins in 1968 with what seemed like a perfect match: cash-strapped New York City and the prestigious RAND Corporation, fresh from their Cold War successes in military strategy. Mayor John Lindsay, eager for innovative solutions to the city's fiscal crisis, embraced RAND's promise of using sophisticated computer modeling to optimize public services. What followed would become one of America's most devastating examples of algorithmic failure.
Joe Flood's investigation reveals how RAND's computer models led to the closure of fire companies in precisely the neighborhoods that needed them most. As Flood explains in the interview:
When you first use data in an anecdotal, intuitive field, there's enormous gains to be made. Where it goes wrong is when you start using models as an excuse to not think hard about complicated systems.
The RAND analysts, applying military-grade systems analysis to urban firefighting, made a fundamental error that would prove catastrophic. Their recommendations closed fire stations based on response-time calculations that ignored basic geographic realities, such as the rivers separating boroughs, and failed to account for how fire spreads through interconnected urban blocks.
The models estimated only the time it took firefighters to get from the firehouse to the alarm box. They did not account for the additional time needed to reach the burning building itself, or for the complicated ways fire spreads through crowded urban areas.
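To make that modeling gap concrete, here is a minimal, purely illustrative sketch. It is not RAND's actual model; the function names, speeds, and delay figures are invented assumptions. It contrasts a firehouse-to-alarm-box travel estimate with one that also counts turnout and the time to locate the building and get water flowing.

```python
# Illustrative sketch only: invented numbers and functions, not RAND's actual model.

def travel_time_to_alarm_box(distance_miles: float, avg_speed_mph: float = 20.0) -> float:
    """Roughly what the 1970s models estimated: minutes from firehouse to alarm box."""
    return distance_miles / avg_speed_mph * 60.0

def time_until_water_on_fire(distance_miles: float,
                             turnout_min: float = 1.0,
                             locate_and_hookup_min: float = 3.0) -> float:
    """What such a metric leaves out: turnout, finding the building, laying hose."""
    return turnout_min + travel_time_to_alarm_box(distance_miles) + locate_and_hookup_min

if __name__ == "__main__":
    d = 1.5  # assumed miles from firehouse to alarm box
    print(f"Alarm-box estimate:        {travel_time_to_alarm_box(d):.1f} min")
    print(f"Water actually on fire at: {time_until_water_on_fire(d):.1f} min")
```

Even in this toy version, the gap between the two numbers is several minutes, which, as the article notes below, is more than enough for a fire to jump from one building to a whole block.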
From 1972 to 1976, New York closed 26 fire companies. Of these cuts, 21 happened in or near areas with the highest number of fire and medical emergencies in the city. This wasn't coincidence—it was the inevitable result of flawed modeling that confused high service demand with service redundancy.
The Michael Lewis Principle: Models as Tools, Not Replacements for Thinking
Joe Flood distills an important insight that he calls "The Michael Lewis Principle":
Models work when used as another way to think about complex problems. They fail when used as an excuse to NOT think about complex systems.
This principle captures the fundamental error that occurred in 1970s New York and continues to manifest in today's AI implementations.
The RAND analysts fell into what Flood identifies as "The Technocratic Trap", a phenomenon rooted in the postwar era's unprecedented faith in data-driven solutions. After victory in World War II and the economic boom of the 1950s and 1960s, confidence in algorithmic decision-making had reached a peak. When fire unions challenged station closures with ground-level experience and local knowledge, RAND countered with "stochastic modeling" that judges and city officials couldn't effectively question.
This dynamic reveals a critical pattern: technical complexity becomes a shield against democratic accountability. Complex algorithmic systems, wrapped in mathematical sophistication, can effectively silence opposition by making their reasoning processes opaque to those who must live with the consequences. As one analysis notes, such models
are portrayed as scientific, objective, and neutral tools, when in fact they necessarily entail normative choices about political values at every key step.
The Human Cost: When Algorithms Meet Reality
The consequences of New York's algorithmic experiment were devastating and measurable. Entire census tracts lost 80-90% of their housing and population. In neighborhoods built with shared "cockloft" designs, where open attic spaces connected multiple buildings, a one-minute delay in fire response could mean the difference between losing one building and losing an entire block.
Flood's research reveals that the fires weren't primarily caused by arson, as commonly believed, but by the systematic withdrawal of fire protection from the city's most vulnerable neighborhoods. The human toll was staggering: more than 2,000 deaths and hundreds of thousands displaced. As one firefighter described the devastation:
When you walk through it, you just smell fire. You smelled fire. It wasn't even burned no more – but you could just smell it.
The tragedy exemplifies how algorithmic failures compound in complex systems. Each delayed response increased the probability of fire spread, creating cascading effects that the original models never anticipated. Fire companies from Queens found themselves responding to fires in the Bronx because "nearly the entire borough is busy"—a system-wide failure that the optimization models had failed to predict.
Universal Lessons: From Fire Departments to AI Deployment
Flood's analysis reveals that the patterns of failure he documented in 1970s New York are recurring in today's AI revolution. The same institutional dynamics, cognitive biases, and systemic vulnerabilities that enabled the fire crisis are now manifesting across industries implementing artificial intelligence systems.
The Core Vulnerability: Model-Over-Reality Thinking
The fundamental problem isn't technical—it's cognitive. Organizations become so enamored with the elegance and apparent objectivity of their models that they lose sight of the messy, interconnected reality those models are supposed to represent. As Flood notes:
The same quotes about computer-driven governance from the Goddard Rocket Institute could be recycled for AI companies today.
This "model-over-reality" thinking manifests in several dangerous ways:
Algorithmic Authority: Complex models gain unquestioned authority simply because their reasoning processes are opaque to decision-makers. The mathematical sophistication of RAND's fire models made them difficult to challenge, just as today's AI systems often operate as "black boxes" that resist scrutiny.
Failure of Imagination: The RAND analysts couldn't envision how their optimizations would interact with the complex social and physical systems of urban neighborhoods. Similarly, today's AI deployments often fail to account for the full ecosystem of effects their decisions will generate.
Substitution of Symptoms for Causes: RAND's models addressed response times (symptoms) rather than the underlying causes of fire vulnerability—poverty, housing decay, and systemic disinvestment. Modern algorithmic systems often exhibit the same pattern, optimizing measurable metrics while ignoring root causes.
The Normalization of Algorithmic Deviance
Flood's work reveals how organizations gradually accept lower standards of performance from their algorithmic systems, a phenomenon that parallels what sociologist Diane Vaughan, studying NASA's Challenger disaster, called the "normalization of deviance". In New York, each "successful" model recommendation that didn't immediately result in catastrophe became justification for accepting even more aggressive optimizations.
This pattern is already emerging in AI deployment:
Gradual Scope Creep: AI systems initially deployed for narrow tasks gradually take on broader decision-making responsibilities without corresponding increases in oversight or accountability.
Performance Rationalization: Organizations explain away AI errors as "edge cases" or "acceptable trade-offs" rather than indicators of fundamental system limitations.
Expertise Displacement: Human experts who raise concerns about AI system performance are often dismissed in favor of the apparent objectivity of algorithmic decisions.

Building Resilience: Lessons for the AI Era
Flood's investigation offers crucial guidance for organizations navigating today's AI revolution. The key insight is that algorithmic systems must be designed as augmentation tools that enhance human judgment, not replacement systems that eliminate human oversight.
Preserve Human Expertise in the Loop
The fire crisis demonstrates what happens when algorithmic optimization displaces rather than supplements human expertise. Experienced firefighters and union leaders had critical knowledge about neighborhood fire dynamics, building types, and response patterns that RAND's models ignored. Organizations implementing AI must create robust mechanisms for incorporating domain expertise and ground-level feedback.
Design for Transparency and Contestation
Complex algorithmic systems must be designed for democratic accountability, not just technical optimization. This means creating clear mechanisms for stakeholders to understand, question, and challenge algorithmic recommendations. The opacity that made RAND's models politically powerful also made them dangerous.
Implement Continuous Monitoring of Real-World Outcomes
RAND's models were never systematically validated against real-world fire outcomes. Organizations must build robust monitoring systems that track not just technical performance metrics but broader systemic effects of algorithmic decisions.
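As one hedged illustration of what such monitoring could look like, the sketch below compares a model's predicted outcomes against observed ones and raises an alert when the errors drift systematically in one direction. It is a minimal example, not a prescribed implementation; the record fields and threshold are assumptions made for this sketch.

```python
# Minimal outcome-monitoring sketch: field names and thresholds are illustrative assumptions.
from statistics import mean

def monitor_outcomes(records: list[dict], error_threshold_min: float = 2.0) -> dict:
    """Compare predicted vs. observed response times and flag systematic under-estimation."""
    errors = [r["observed_minutes"] - r["predicted_minutes"] for r in records]
    report = {
        "mean_error_minutes": mean(errors),
        "pct_underestimated": sum(e > 0 for e in errors) / len(errors),
    }
    report["alert"] = report["mean_error_minutes"] > error_threshold_min
    return report

if __name__ == "__main__":
    sample = [  # hypothetical predicted vs. observed response times, in minutes
        {"predicted_minutes": 4.0, "observed_minutes": 7.5},
        {"predicted_minutes": 3.5, "observed_minutes": 6.0},
        {"predicted_minutes": 5.0, "observed_minutes": 5.5},
    ]
    print(monitor_outcomes(sample))
```

The point is not the specific metric but the discipline: predictions are routinely checked against reality, and a persistent gap triggers human review rather than quiet acceptance.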
Maintain Skepticism About Algorithmic Objectivity
Perhaps most importantly, organizations must resist the seductive appeal of algorithmic neutrality. All models embed values, assumptions, and biases. The goal should be to make these explicit and subject to democratic scrutiny, not to hide them behind mathematical complexity.
The Continuing Relevance: Why This Story Matters Now
Flood's seven-year investigation into New York's fire crisis isn't just historical analysis—it's a warning system for our current AI revolution. We're in another period of peak algorithmic confidence, with AI systems being deployed across critical sectors from healthcare and criminal justice to financial services and urban planning.
The parallels are striking: post-pandemic disruption has created urgency for technological solutions, venture capital is flowing into AI companies promising revolutionary efficiency gains, and regulatory frameworks lag far behind technological deployment. These conditions mirror the environment that made New York's fire crisis possible.
The Stakes Are Higher Now
While New York's fire crisis devastated specific neighborhoods, today's algorithmic systems operate at unprecedented scale and speed. AI systems can now make millions of decisions per second, affecting everything from loan approvals and medical diagnoses to hiring decisions and social media content curation. The potential for algorithmic failures to cascade across interconnected systems is vastly greater than it was in the 1970s.
Conclusion
Organizations must build cultures that value human expertise alongside algorithmic efficiency, create governance structures that make algorithmic decision-making transparent and contestable, and maintain the intellectual humility to recognize that no model, however sophisticated, can fully capture the complexity of the systems it seeks to optimize.
Listen to the full conversation to hear the universal patterns of technocratic overconfidence that extend well beyond city planning and offer sobering lessons for any industry relying on automated decision-making systems.
