Over-tuned Pricing: How Side-by-Side Comparisons Hurt a SaaS Launch
A real-world example of distinction bias in action
Context
A growing SaaS startup was preparing to launch new subscription tiers to move customers up the value ladder. The product and marketing teams ran an internal workshop, laying the two candidate plans out side by side to choose which to ship.
Situation
Team members evaluated Plan A and Plan B together in a single meeting, scanning a table of features, micro-differences in trial length, and a minor discount. Because the options sat side by side, designers and executives fixated on a small UX polish and a slightly longer trial in Plan B and declared it the clearly superior choice.
The bias in action
When the team compared the plans simultaneously, small differences (a trial three days longer, a reordered menu, and a slightly different onboarding flow) appeared large and decisive. Joint evaluation exaggerated those distinctions, making the two plans feel qualitatively different even though customers typically evaluate plans one at a time. The team overweighted Plan B's seemingly superior onboarding polish and downplayed the parity in pricing and core features that mattered to users. As a result, the decision reflected an internal perception of difference rather than empirical evidence of customer preference.
Outcome
The company launched Plan B for all new signups. Over the next three months, trial-to-paid conversion was 7.0% versus the modeled 8.2% (a roughly 15% relative drop), and monthly churn among new customers rose from 6% to 10%. After six months the company estimated $120,000 in missed recurring revenue and redirected two engineers for a combined 320 hours to revise the pricing presentation and run tests.
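The headline numbers above can be sanity-checked in a few lines. The figures are the ones quoted in the case; the calculation itself is an illustrative sketch, not the company's actual model:

```python
# Figures quoted in the case study (illustrative check, not the company's model).
modeled_conversion = 0.082   # expected trial-to-paid conversion rate
actual_conversion = 0.070    # observed over the first three months

# Relative drop = shortfall as a fraction of the modeled rate.
relative_drop = (modeled_conversion - actual_conversion) / modeled_conversion
print(f"Relative conversion drop: {relative_drop:.1%}")  # ≈ 14.6%, i.e. the ~15% cited

# Monthly churn among new customers before and after launch.
churn_before, churn_after = 0.06, 0.10
churn_increase_pts = (churn_after - churn_before) * 100
print(f"New-customer churn rose by {churn_increase_pts:.0f} percentage points")
```

The ~15% figure in the text is this relative drop (1.2 points of conversion lost against an 8.2-point baseline), not the 1.2-percentage-point absolute difference.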