When Hope Skews the Scale: An Unblinded Phase II Trial That Overstated Efficacy
A real-world example of the observer-expectancy effect in action
Context
A small biotech company ran a Phase II trial of a promising oral compound for chronic neuropathic pain. Investigators and site clinicians were excited by preclinical data and early compassionate-use anecdotes, and full blinding procedures were not enforced, a decision rationalized at the time as a logistical cost saving.
Situation
Clinicians conducted in-person assessments using a semi-structured pain-rating interview and clinician-rated improvement scales, knowing which participants received the experimental drug. The company relied on these clinician ratings to decide whether to advance to an expensive Phase III program.
The bias in action
Because clinicians expected the drug to work, they subconsciously cued patients with more encouraging language and accepted ambiguous statements as signs of improvement. Raters scored borderline improvements more generously for patients on the experimental drug, while similar ambiguous reports from control patients were recorded as 'no change.' These small, systematic shifts in assessment added up across sites, producing an inflated treatment effect in the analyzed data.
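The accumulation described above can be sketched with a toy model. This is an illustration, not an analysis of the actual trial data: assume both arms have the same true response rate, and that an unblinded rater upgrades each borderline non-responder to "responder" with some probability. All rates and probabilities below are hypothetical.

```python
def observed_responder_rate(true_rate, upgrade_prob):
    """Expected responder rate recorded by a rater.

    true_rate: fraction of patients who genuinely respond.
    upgrade_prob: chance the rater scores a borderline non-responder
    as a responder because they expect the drug to work (zero for a
    rater with no expectancy pressure).
    """
    return true_rate + upgrade_prob * (1 - true_rate)

# Same true effect in both arms (the drug adds nothing), but only the
# experimental arm's ratings are inflated by expectancy.
control = observed_responder_rate(0.45, 0.00)       # 45% recorded
experimental = observed_responder_rate(0.45, 0.30)  # ~62% recorded
print(f"control {control:.0%} vs experimental {experimental:.0%}")
```

Under these assumed numbers, upgrading roughly a third of borderline control-equivalent cases is enough to manufacture a double-digit responder-rate gap with no real drug effect at all, which is why even "small" per-patient rating shifts matter at scale.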
Outcome
The trial report showed a 62% responder rate in the experimental arm versus 45% in the control arm (a 17 percentage point difference), which the company interpreted as clinically meaningful and used to justify a $12M Series B and the start of Phase III. In a later double-blind, independently adjudicated Phase III, the difference vanished and the company missed its primary endpoints, forcing it to halt development and write off the program's costs.