Underestimating Launch Risk: When 'Everything' Feels Safer Than Its Parts
A real-world example of the subadditivity effect in action
Context
A mid-stage SaaS company prepared to release a major new feature to its enterprise customers. Executive leadership wanted a single-number assessment of the risk of a problematic launch to decide whether to delay and allocate extra engineering and customer-success resources.
Situation
The product manager was asked in the weekly steering meeting: 'What's the probability this launch will cause a major customer-impacting incident?' She replied, 'About 12%.' Separately, engineering and operations teams were asked to list and estimate probabilities for specific failure modes during a planning exercise.
The bias in action
When asked for a single, global likelihood, the product manager gave a relatively low number (12%). However, when engineers estimated the individual failure modes (database migration rollback, 10%; API rate-limit misconfiguration, 8%; third-party auth outage, 7%; edge-caching mismatch, 6%; and a critical client-specific integration bug, 12%), those parts summed to 43%. Decision-makers treated the 12% holistic estimate as the authoritative figure and did not reconcile it with the far higher cumulative number implied by the decomposed estimates. This mismatch is a textbook example of the subadditivity effect: people judge the probability of a whole (here, a major incident) to be less than the sum of its part-events once those parts are enumerated.
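The gap between the holistic number and the enumerated parts can be made concrete with a short calculation. A minimal sketch, assuming the five failure modes are independent (an assumption made here for illustration, not stated in the case): even after discounting the overlap that makes a raw 43% sum an overestimate, the chance of at least one failure comes out near 36%, roughly three times the holistic 12%.

```python
# Holistic estimate and per-mode probabilities taken from the case study.
holistic_estimate = 0.12

failure_modes = {
    "database migration rollback": 0.10,
    "API rate-limit misconfiguration": 0.08,
    "third-party auth outage": 0.07,
    "edge-caching mismatch": 0.06,
    "client-specific integration bug": 0.12,
}

# Naive sum of the parts: only a valid probability if the modes were
# mutually exclusive; otherwise it is an upper bound on P(any failure).
naive_sum = sum(failure_modes.values())

# Assuming independence (an assumption for this sketch), the probability
# of at least one failure is 1 minus the product of per-mode "survival"
# probabilities.
p_none = 1.0
for p in failure_modes.values():
    p_none *= 1.0 - p
p_any = 1.0 - p_none

print(f"holistic: {holistic_estimate:.0%}, "
      f"naive sum: {naive_sum:.0%}, "
      f"independent union: {p_any:.1%}")
# → holistic: 12%, naive sum: 43%, independent union: 36.3%
```

Either way the decomposition is framed, the enumerated estimate dwarfs the single global number the steering meeting acted on.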
Outcome
Leadership approved the launch with only light contingency staffing and no extra rollback rehearsals. Within two weeks, a combination of the listed failure modes occurred: an auth-provider outage triggered a cascade that exposed the client-specific integration bug, producing a major outage that required a hotfix and rollback. Recovery took 27 hours, some customers experienced data-access disruption, and the company's reputation suffered among key accounts.
