Socrates was famously difficult to work with. He never accepted the first answer. He probed assumptions. He asked "why" until the person he was speaking with either found the truth or revealed that they didn't have one.
Product managers could learn a lot from him.
The Socratic method - using disciplined questioning to surface hidden assumptions and arrive at clearer thinking - is arguably the most valuable tool in a PM's discovery toolkit. It's also the most consistently skipped.
Why Product Teams Skip the Hard Questions
The most common failure mode in product development isn't bad execution. It's building the right way toward the wrong outcome. Teams commit to a feature, align stakeholders, estimate engineering work, and write detailed requirements, all before anyone has seriously asked whether the feature should exist.
There are understandable reasons this happens:
Velocity pressure. Teams are measured by output (features shipped) rather than outcome (problems solved). Slowing down to challenge a feature idea feels like falling behind.
Stakeholder momentum. Once an idea comes from leadership or a high-value customer, questioning it can feel politically risky. The path of least resistance is to start writing requirements.
Assumption blindness. Most PMs aren't aware of all the assumptions their feature ideas rest on - until those assumptions fail in production.
The Socratic method creates space for the questions that prevent expensive mistakes.
The Core Socratic Questions for Product Discovery
Before any feature is added to the roadmap, a Socratic discovery process works through a structured set of probing questions. These aren't optional; they're the filter that separates well-founded features from well-intentioned guesses.
1. Who Exactly Is This For?
The question isn't "who is our user?" - it's "who specifically will use this feature?" A PM who answers "SMB customers" hasn't thought hard enough. A PM who answers "operations managers at SaaS companies with 10–50 person teams who are currently managing invoices in spreadsheets" has.
The more specifically you can describe the user, the more likely the feature will serve that user. Vague user definitions produce vague features.
2. What Problem Does This Solve?
And the more important follow-up: What happens if we don't build it?
If the honest answer is "nothing much changes," that's valuable information. The problem might not be serious enough to justify the build. If the answer is "users churn, support tickets spike, or the sales team loses deals," that's a different conversation.
3. What Are We Assuming?
Every feature rests on a set of assumptions that are rarely made explicit. Writing them down is clarifying.
Common assumptions that turn out to be wrong:
- "Users will check this dashboard daily" (they won't)
- "Users understand the terminology we're using" (they don't)
- "Users will configure this before using it" (they'll skip setup)
- "This integrates cleanly with users' existing workflow" (it disrupts it)
Surfacing assumptions isn't pessimism; it's risk management.
4. How Will We Know This Succeeded?
If a feature doesn't have a measurable success metric defined before it's built, there is no way to know whether building it was the right decision. This question forces specificity:
- "Users will like it" → not a metric
- "40% of active users will use this feature within 30 days of launch" → a metric
Success metrics also create accountability. If the feature ships and the metric isn't hit, that's a signal to learn from not to ignore.
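A metric like "40% of active users within 30 days" only creates accountability if someone actually computes it after launch. As a rough sketch of that calculation (the event names, dates, and the `adoption_rate` helper are all hypothetical, standing in for whatever your analytics store provides):

```python
from datetime import date, timedelta

# Hypothetical event records: (user_id, event_name, event_date).
# In practice these would come from your product analytics store.
events = [
    ("u1", "app_open", date(2024, 3, 2)),
    ("u1", "dashboard_view", date(2024, 3, 10)),
    ("u2", "app_open", date(2024, 3, 5)),
    ("u3", "app_open", date(2024, 3, 6)),
    ("u3", "dashboard_view", date(2024, 5, 1)),  # outside the 30-day window
]

def adoption_rate(events, feature_event, launch, window_days=30):
    """Share of all active users who triggered `feature_event`
    within `window_days` of the launch date."""
    cutoff = launch + timedelta(days=window_days)
    active = {uid for uid, _, _ in events}
    adopters = {
        uid for uid, name, day in events
        if name == feature_event and launch <= day <= cutoff
    }
    return len(adopters) / len(active) if active else 0.0

rate = adoption_rate(events, "dashboard_view", launch=date(2024, 3, 1))
print(f"{rate:.0%}")  # one of three active users adopted in the window
```

The point isn't the code; it's that the metric must be computable from data you already collect. If it isn't, that gap is itself a discovery finding.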
5. What's the Smallest Version We Could Test?
This is the MVP question, but asked with rigor. The goal isn't to ship the smallest possible feature; it's to identify the smallest experiment that would confirm or refute the core assumption before full investment.
A landing page test, a Wizard of Oz prototype, a concierge beta, or a five-user research session can often validate an assumption in days that would otherwise require months of development to test.
Socratic Discovery in Practice: A Before and After
Without Socratic discovery:
Feature idea: "We should add a dashboard showing users their monthly product usage stats."
What happens: Engineering scopes it, design creates wireframes, PM writes a PRD. Six weeks later, the feature ships. Only 12% of users ever open the dashboard. It's quietly deprioritized.
With Socratic discovery:
Same idea goes through the questions:
- "Who specifically is this for?" → Power users who want to audit their team's activity
- "What problem does it solve?" → They currently export data manually to get this view
- "What are we assuming?" → That they check this regularly and find it actionable
- "How will we measure success?" → 30%+ of power users check the dashboard weekly within 60 days
- "What's the smallest test?" → A Slack notification with weekly stats for 20 power users for 2 weeks
What happens: The beta test shows power users open the Slack message but rarely click through to the dashboard. The real need is the notification, not the dashboard. Engineering builds a notifications system (lower effort) instead of a full dashboard (higher effort). Engagement is 3x higher.
Discovery didn't slow the team down. It redirected them toward the right solution.
How AI Is Automating the Socratic Method
Until recently, Socratic discovery depended entirely on the PM being disciplined enough to ask hard questions - and experienced enough to know which questions mattered.
AI agents designed for product management can now automate this. Tools like Pruva apply Socratic questioning at the start of every feature, before any documentation begins:
- The PM describes a feature idea
- The AI agent asks structured discovery questions
- Assumptions are surfaced and documented
- The feature only advances to documentation after the core questions are answered
This doesn't replace PM judgment - it ensures that judgment is applied before commitment, not after.
Frequently Asked Questions
How long does Socratic discovery take? A structured discovery conversation for a single feature typically takes 20–40 minutes when done manually. AI-guided discovery can compress this to 10–15 minutes by asking targeted questions in sequence. The time investment pays back immediately by preventing rework.
What if leadership is pushing a feature and I don't have time to challenge it? The Socratic method doesn't have to feel like resistance. Framing the questions as "how do we make sure this lands well?" rather than "should we build this?" usually gets the same rigor without the political friction.
Can Socratic discovery be applied to bugs or tech debt work? Yes - though the questions shift slightly. For bugs: "What user problem does this create?" For tech debt: "What future capability does this unlock?" The core discipline of defining value before committing effort applies to all types of work.
Pruva builds Socratic discovery into every feature brief - asking the hard questions before a single line of requirements is written.



