Prompted Product Priorities: When User Research Plants Ideas
A real-world example of Suggestibility in action
Context
A fintech startup racing to ship a new savings app module ran a rapid round of user interviews to decide which features to build first. The research team had only two weeks and relied on moderated sessions with interactive mockups to speed decisions.
Situation
During ten 45-minute interviews, researchers showed mid-fidelity screens that included toggles and microcopy describing automated rules and premium nudges. Moderators occasionally paraphrased participants' comments back in leading language ("so you'd want an automatic rule that transfers on payday") and prompted participants to compare imagined workflows. The team treated this qualitative feedback as authoritative and used it to set first-sprint priorities.
The bias in action
Participants began endorsing and elaborating on features that the screens made visible or the moderator suggested, even when they had initially expressed uncertainty. Several later described past experiences with similar features they had never actually used, adopting the benefit language supplied during the session. The research team's notes emphasized these voiced preferences and treated them as validated demand, overlooking how the interview framing had shaped the responses. The apparent consensus thus reflected implanted ideas rather than stable user needs.
Outcome
The company spent three months and roughly $150,000 building the three suggested features: scheduled transfers, a 'smart nudge' premium feature, and a rule-creation wizard. After launch, adoption stayed below 5% in the target segment, and neither retention nor NPS improved; product and marketing teams deprioritized the features within six weeks of release. The roadmap slipped while leadership reallocated budget for follow-up validation.