Doubling Down on an Unfair Algorithm
A real-world example of the backfire effect in action
Context
A mid-stage fintech startup built a machine-learning credit-decision model to accelerate loan approvals and reduce underwriting costs. Leadership was under pressure to show fast growth and low default rates to satisfy investors ahead of a planned Series C round.
Situation
An internal audit by the risk team flagged statistically significant differences in denial rates for applicants from certain ZIP codes and minority groups, and recommended model retraining and manual review for borderline cases. The product and engineering leads, confident in their performance metrics (low overall default rate), pushed to deploy the model broadly to keep pace with revenue targets.
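The kind of comparison the risk team's audit would run can be sketched in a few lines: per-group denial rates and the "four-fifths" disparity ratio on approval rates, a common rule of thumb in lending-fairness reviews. The data, group labels, and figures below are illustrative assumptions, not the company's actual numbers.

```python
def denial_rates(records):
    """records: iterable of (group, denied) pairs -> {group: denial rate}."""
    totals, denials = {}, {}
    for group, denied in records:
        totals[group] = totals.get(group, 0) + 1
        denials[group] = denials.get(group, 0) + (1 if denied else 0)
    return {g: denials[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest approval rate across groups.
    A value below 0.8 fails the common 'four-fifths' rule of thumb."""
    approvals = {g: 1.0 - r for g, r in rates.items()}
    return min(approvals.values()) / max(approvals.values())

# Hypothetical audit sample: group B is denied far more often than group A.
sample = (
    [("A", True)] * 120 + [("A", False)] * 880
    + [("B", True)] * 300 + [("B", False)] * 700
)

rates = denial_rates(sample)    # A: 12% denied, B: 30% denied
ratio = disparity_ratio(rates)  # 0.70 / 0.88, just under the 0.8 threshold
```

A finding like this is exactly what would prompt the audit's recommendation of retraining and manual review for borderline cases; a low overall default rate says nothing about how denials are distributed across groups.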
The bias in action
When presented with audit evidence, senior engineers and the CEO questioned the audit methodology and emphasized their own successful backtests rather than engaging with the specific fairness metrics. Rather than treating the audit as a call to investigate, the team interpreted it as an attack on their competence; this triggered defensive reasoning that reinforced belief in the model's adequacy. The result was a dismissal of corrective recommendations and a faster, company-wide rollout — a textbook backfire effect where contradictory evidence strengthened existing convictions.
Outcome
Within months of full deployment, complaints about unfair denials rose, triggering a regulatory inquiry and public criticism from consumer advocates. The company had to pause new lending in affected states, commission an external audit, and implement costly remediation.