
Startup Due Diligence Questions: What Investors Actually Ask (And Why)

Verve Intelligence · 17 min

Every question has a surface meaning and an actual meaning. Knowing both changes how you prepare.

Due diligence isn't an interrogation. It's pattern-matching.

Investors have seen hundreds of startups. They've seen which problems recur. Due diligence questions are probes designed to surface those patterns — to identify whether your startup has the same hidden flaws that killed the last three deals in your category.

This guide covers 33 common due diligence questions across seven categories, what each question is actually testing, and which answers raise flags.

Category 1: Team Questions

Team due diligence isn't about credentials. It's about whether this specific team can execute this specific idea. Impressive backgrounds with weak fit still fail.

"How did the founding team meet?"

Surface meaning: Where did you meet?

Actual meaning: Are these co-founders who've worked through hard problems together, or strangers who met at a startup mixer three months ago?

Red flag answers: "We met at a hackathon" or "We connected on CoFoundersLab" without subsequent proof of working together. First-time collaborations under startup pressure have high failure rates.

Strong answers: Prior professional collaboration, long personal friendship with shared projects, or past startups together (even failed ones).

"Why are you the team to build this?"

Surface meaning: What qualifies you?

Actual meaning: Is there founder-market fit — specific unfair advantages this team has for this problem?

Red flag answers: Generic credentials ("we're Stanford MBAs") without connection to the specific market. Or pure passion ("we're really excited about this space") without domain expertise.

Strong answers: Direct industry experience, proprietary insight from previous role, or specific technical capability that competitors can't easily replicate.

"How do you divide responsibilities?"

Surface meaning: Who does what?

Actual meaning: Are there clear owners for critical functions, or will everything become everyone's problem?

Red flag answers: "We do everything together" or vague ownership ("we both work on growth"). Equal ownership of everything usually means no ownership of anything.

Strong answers: Clear domains with named owners, explicit decision rights for each area, and defined interfaces between roles.

"What happens if a co-founder leaves?"

Surface meaning: Is there a vesting schedule?

Actual meaning: Have you thought through failure modes, or are you assuming everything works out?

Red flag answers: Surprised reaction, no vesting structure, or "we haven't really discussed that." Avoiding hard conversations early predicts avoiding them later.

Strong answers: Standard four-year vesting with one-year cliff, documented founder agreements, and honest acknowledgment that the possibility exists.

"Tell me about a hard disagreement between founders and how you resolved it."

Surface meaning: Can you tell a conflict story?

Actual meaning: Is there a working decision-making process, or does conflict turn into paralysis or blowup?

Red flag answers: "We don't really disagree" (either dishonest or a sign of groupthink) or a story that reveals ongoing unresolved tension.

Strong answers: Specific example with clear resolution process, evidence that disagreement produced a better outcome, and mutual respect despite differing views.

Category 2: Market Questions

Market questions assess whether the opportunity is real and whether you understand it accurately — not just whether you can cite big numbers.

"What's the total addressable market?"

Surface meaning: How big is the opportunity?

Actual meaning: Do you understand market sizing methodology, or are you just citing Gartner?

Red flag answers: Round numbers from industry reports with no segmentation. "$50B market" without explaining which slice you're actually targeting.

Strong answers: Bottom-up calculation with clear assumptions. "There are X target companies at Y price point with Z conversion rate = TAM of $N." Show the math.
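To make "show the math" concrete, here is a minimal sketch of a bottom-up TAM calculation. Every number below is hypothetical — plug in your own company counts, price points, and reachable share.

```python
# Illustrative bottom-up TAM/SAM calculation (all numbers are hypothetical).
target_companies = 40_000   # companies matching your ideal customer profile
annual_price = 12_000       # realistic annual contract value in dollars
reachable_share = 0.25      # fraction you could plausibly serve at this stage

tam = target_companies * annual_price  # total if every target company bought
sam = tam * reachable_share            # the serviceable slice you're targeting

print(f"TAM: ${tam:,.0f}")  # TAM: $480,000,000
print(f"SAM: ${sam:,.0f}")  # SAM: $120,000,000
```

The point is not the output — it's that each input is an assumption an investor can challenge individually, which is exactly what a Gartner headline number doesn't allow.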

"Why now?"

Surface meaning: What changed that makes this possible?

Actual meaning: Is there a forcing function, or is this idea equally possible at any time (which usually means it's not actually good)?

Red flag answers: "The technology is ready" (too vague) or "COVID changed everything" (overused, often false).

Strong answers: Specific regulatory change, technological inflection point, or market shift that creates a window. Bonus: explanation of why competitors haven't already exploited it.

"Who are your competitors?"

Surface meaning: Who else does this?

Actual meaning: Do you actually understand the competitive landscape, or are you either naive ("no competition") or dismissive ("we're better")?

Red flag answers: "We don't have direct competitors" (almost always wrong) or only listing obvious players while missing adjacent solutions.

Strong answers: Comprehensive mapping including indirect competitors and current alternatives, honest assessment of their strengths, and specific articulation of why your approach wins in your segment.

"Why will customers switch from their current solution?"

Surface meaning: What's the switching trigger?

Actual meaning: Inertia is strong. What's the forcing function that overcomes "good enough"?

Red flag answers: "We're better" without specific quantification of how much better. Or underestimating switching costs (time, integration, training, risk).

Strong answers: Specific pain point that current solutions don't address, quantified ROI that justifies switching friction, or greenfield segment where there's no incumbent to displace.

"What market trends favor your approach?"

Surface meaning: What tailwinds exist?

Actual meaning: Are you riding structural change or fighting it?

Red flag answers: Citing trends that favor the category generally but not your specific approach within it.

Strong answers: Specific trend that makes your approach more viable over time (regulatory shift, technology cost curve, demographic change) with evidence of momentum.

Category 3: Product Questions

Product questions test whether you're building something people actually want — not just something technically interesting.

"What problem are you solving?"

Surface meaning: What's the use case?

Actual meaning: Is this a real problem that people will pay to solve, or an interesting technical challenge in search of a user?

Red flag answers: Feature description disguised as problem statement. "We're building AI-powered X" is not a problem — it's a solution.

Strong answers: Crisp problem statement from the customer's perspective. "Mid-market CFOs spend 8 hours per month reconciling data from three systems. We reduce that to 30 minutes."

"How did you validate the problem exists?"

Surface meaning: Did you talk to customers?

Actual meaning: Did you actually discover demand, or did you assume it based on your own experience?

Red flag answers: "I had this problem personally" (party of one) or "we surveyed 50 people and 80% said they'd be interested" (hypothetical interest does not equal real demand).

Strong answers: Evidence of real behavior: "We interviewed 30 target customers. 22 described the problem unprompted. 8 said they'd pay for a solution today."

"Walk me through the user experience."

Surface meaning: How does it work?

Actual meaning: Have you thought through the actual workflow, or just the happy path?

Red flag answers: Demo that only shows the ideal scenario. No mention of error states, edge cases, or the messy parts of real usage.

Strong answers: Walkthrough that includes onboarding, daily usage, and failure modes. Acknowledgment of current friction points and plans to address them.

"What's your unfair advantage?"

Surface meaning: What moat do you have?

Actual meaning: What makes this defensible over time as competitors respond?

Red flag answers: "We move faster" (not durable) or "better UX" (easily copied) or "first mover" (usually overrated).

Strong answers: Network effects, proprietary data, regulatory advantage, or specific technical capability that's genuinely hard to replicate.

"What have you learned from early users that surprised you?"

Surface meaning: Any discoveries?

Actual meaning: Are you actually learning from usage, or are you building in a vacuum?

Red flag answers: Nothing surprising (either dishonest or a sign you're not paying attention) or surprises that you haven't acted on.

Strong answers: Specific insight that changed your roadmap, evidence of rapid iteration in response to feedback.

Category 4: Traction Questions

Traction questions verify that real-world results support your thesis — or reveal the gap between pitch and reality.

"What's your current traction?"

Surface meaning: What are the numbers?

Actual meaning: Is there evidence of real demand beyond early adopters?

Red flag answers: Vanity metrics (downloads, signups) without engagement or revenue. Or redefining "traction" to include things that aren't actually progress.

Strong answers: Revenue, retention, or usage metrics that demonstrate real value delivery. Honest about the stage while showing clear trajectory.

"What's your month-over-month growth?"

Surface meaning: How fast are you growing?

Actual meaning: Is growth accelerating, linear, or decelerating?

Red flag answers: Cherry-picked timeframe that obscures slowing growth. Or percentage growth on a tiny base presented as impressive.

Strong answers: Full growth history including plateaus and dips, with explanation of what drove each phase.

"What does retention look like?"

Surface meaning: Do customers stick around?

Actual meaning: Is the product actually valuable, or are you churning through free trial users?

Red flag answers: Focusing on acquisition while avoiding retention questions. Or measuring retention in ways that obscure churn (e.g., "we retained 90% of our enterprise clients" when you only have 2).

Strong answers: Cohort retention curves, ideally showing stability or improvement over time. Honest about segments where retention is weak.
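A cohort retention curve is simple to compute once you track it honestly. The sketch below uses made-up numbers; the shape is what matters — a curve that flattens suggests a retained core, while one that decays toward zero suggests you're churning through trial users.

```python
# Hypothetical monthly cohort: active users remaining out of 100 who signed up.
cohort_start = 100
active_by_month = [100, 62, 48, 44, 43, 43]  # month 0 through month 5

# Fraction of the original cohort still active each month.
retention_curve = [active / cohort_start for active in active_by_month]

for month, rate in enumerate(retention_curve):
    print(f"Month {month}: {rate:.0%}")
```

Here the curve stabilizes around 43%, which is the kind of flattening investors look for in early cohorts.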

"What's your CAC and LTV?"

Surface meaning: How expensive is customer acquisition?

Actual meaning: Can you actually make money acquiring customers, or are your unit economics upside-down?

Red flag answers: Guesses presented as data, or LTV calculated on theoretical retention without actual cohort data.

Strong answers: Real numbers from real cohorts, including honest acknowledgment of what's estimated vs. measured. Bonus: explanation of how CAC and LTV will evolve at scale.
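The arithmetic behind "real numbers from real cohorts" is worth spelling out. This is a sketch with illustrative inputs — the key discipline is that churn comes from measured cohorts and LTV is margin-adjusted, not top-line revenue.

```python
# Hedged sketch of CAC and LTV (all inputs are hypothetical).
monthly_revenue_per_customer = 200  # average revenue per account, per month
gross_margin = 0.80                 # LTV should use margin, not gross revenue
monthly_churn = 0.04                # measured from actual cohorts, not assumed

# Average customer lifetime under constant churn: 1 / churn rate.
avg_lifetime_months = 1 / monthly_churn
ltv = monthly_revenue_per_customer * gross_margin * avg_lifetime_months

# CAC: fully loaded sales and marketing spend over new customers acquired.
sales_and_marketing_spend = 90_000
new_customers_acquired = 60
cac = sales_and_marketing_spend / new_customers_acquired

print(f"LTV: ${ltv:,.0f}, CAC: ${cac:,.0f}, ratio: {ltv / cac:.1f}x")
```

Being able to decompose the ratio this way — and say which inputs are measured versus estimated — is what separates data from guesses.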

"Tell me about your best customer and your worst customer."

Surface meaning: Who's working and who isn't?

Actual meaning: Do you understand your ICP, or are you taking anyone who'll pay?

Red flag answers: No clear pattern distinguishing good from bad customers, or inability to describe the differences.

Strong answers: Specific profile of ideal customer with reasoning. Honest about which segments don't work and why.

Category 5: Financial Questions

Financial questions test whether you understand the business model and have realistic projections — not just optimistic spreadsheets.

"Walk me through your financial model."

Surface meaning: Show me the projections.

Actual meaning: Do you understand the drivers of your business, or did you work backward from a desired outcome?

Red flag answers: Model that shows hockey-stick growth with no explanation of how you get there. Or projections that assume everything works perfectly.

Strong answers: Clear articulation of key assumptions, sensitivity analysis on critical variables, and honest acknowledgment of what's uncertain.

"What are your unit economics?"

Surface meaning: What's the profit per customer?

Actual meaning: Is this fundamentally a good business at scale, or are you subsidizing growth?

Red flag answers: Negative unit economics with hand-wavy explanation of how they'll improve "at scale." Or confusion about gross margin vs. contribution margin.

Strong answers: Clear breakdown of revenue per customer, variable costs, and contribution margin. Honest about whether current economics work or require improvement.

"How much are you raising and what will you spend it on?"

Surface meaning: What's the ask?

Actual meaning: Is the raise size connected to a coherent plan, or is it arbitrary?

Red flag answers: Round number with no clear allocation ("we're raising $2M for growth") or plan that doesn't connect spending to specific milestones.

Strong answers: Raise tied to specific milestones with clear allocation: "We're raising $1.5M. $800K for engineering (hiring 3 engineers to ship X feature), $500K for sales (hiring 2 AEs to reach Y revenue), $200K for operations and buffer."

"What's your burn rate and runway?"

Surface meaning: How long can you survive?

Actual meaning: Are you managing resources responsibly, or burning cash without discipline?

Red flag answers: Not knowing the number, or explaining high burn rate without clear ROI justification.

Strong answers: Precise current burn, runway in months, and clear explanation of the largest expense categories and why they're necessary.
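The runway calculation itself is trivial, but investors expect you to know it cold. A sketch with illustrative figures:

```python
# Runway arithmetic (numbers are illustrative, not benchmarks).
cash_in_bank = 900_000
monthly_expenses = 75_000
monthly_revenue = 15_000

net_burn = monthly_expenses - monthly_revenue  # cash consumed per month
runway_months = cash_in_bank / net_burn        # months until zero, all else equal

print(f"Net burn: ${net_burn:,}/mo, runway: {runway_months:.0f} months")
```

Quote net burn (expenses minus revenue) and be explicit that the runway assumes current spend and revenue hold — then explain what changes the number.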

"What are the biggest risks to this model?"

Surface meaning: What could go wrong?

Actual meaning: Do you understand your vulnerabilities, or are you naive about what could break?

Red flag answers: "We've thought of everything" or listing only generic risks that apply to any startup.

Strong answers: Specific, thoughtful risk identification connected to your model. Explanation of how you're mitigating or monitoring each risk.

Category 6: Legal and Governance Questions

Legal questions surface structural issues that could block a deal or create future problems.

"What does your cap table look like?"

Surface meaning: Who owns how much?

Actual meaning: Is the cap table clean, or are there landmines (dead equity, weird terms, too many shareholders)?

Red flag answers: Founder ownership below 50% at seed, unusual share classes, or inability to produce a clean table quickly.

Strong answers: Clean cap table, reasonable founder ownership at current stage, standard terms on any prior investment.

"Are there any IP issues?"

Surface meaning: Do you own your technology?

Actual meaning: Could a previous employer or collaborator claim ownership of core IP?

Red flag answers: Code written at a previous employer, co-founder who hasn't assigned IP, or unclear ownership of key algorithms.

Strong answers: All IP created after incorporation, proper assignment agreements from all contributors, and no competing claims.

"Any existing liabilities or litigation?"

Surface meaning: Are you being sued?

Actual meaning: Are there hidden obligations that could derail the company?

Red flag answers: Surprise disclosures that should have been mentioned earlier, or inability to clearly describe outstanding obligations.

Strong answers: No material litigation, or honest disclosure with clear context on any pending issues.

"How is the board structured?"

Surface meaning: Who's on the board?

Actual meaning: Will founders maintain control through early stages, or is governance already dysfunctional?

Red flag answers: Investor board majority at seed, unclear board composition, or existing board conflicts.

Strong answers: Founder control at early stage, clear path to adding independent members, and no legacy governance problems.

Category 7: Risk and Reference Questions

Risk questions test whether you've thought through failure modes. Reference questions verify claims.

"What's the biggest risk to this business?"

Surface meaning: Name a risk.

Actual meaning: Are you honest and self-aware, or do you only see upside?

Red flag answers: Deflection, naming only generic risks, or inability to identify specific vulnerability.

Strong answers: Honest, specific risk identification with clear articulation of mitigation strategies. Shows you've thought through failure modes.

"What would make you give up on this?"

Surface meaning: Do you have quit criteria?

Actual meaning: Are you rational about failure, or will you burn investor money past the point of reason?

Red flag answers: "I'll never give up" (irrational) or not having thought about the question.

Strong answers: Specific kill criteria: "If we can't achieve X traction by Y date with Z resources, we'd need to seriously reconsider the thesis."

"Can I speak with your customers?"

Surface meaning: Can I verify traction?

Actual meaning: Are your customer relationships real, or will references reveal problems?

Red flag answers: Hesitation, or only offering references who are personal connections rather than paying customers.

Strong answers: Immediate willingness to connect, multiple references across segments, and confidence they'll speak well of the product.

"Can I speak with people who've worked with you before?"

Surface meaning: Can I verify your claims?

Actual meaning: Do people who know you well trust you?

Red flag answers: Only offering curated references, difficulty finding former colleagues willing to speak.

Strong answers: Multiple references including former co-workers, past investors, or previous collaborators. Strong references often proactively reach out.

The Meta-Pattern

Across all categories, investors are probing for three things:

Honesty: Do your answers match reality? Are you straightforward about problems?

Self-awareness: Do you understand your vulnerabilities? Have you thought through failure modes?

Specificity: Can you answer with precision, or do you default to generalities?

The founders who succeed in due diligence aren't those with perfect answers. They're those who demonstrate honest, self-aware, specific thinking about their business — including the hard parts.

The psychology: Founders often prepare by rehearsing optimistic answers. But investors aren't looking for optimism. They're looking for judgment. Showing that you understand the risks — and have thoughtful plans to address them — builds more confidence than pretending the risks don't exist.

To understand the full VC evaluation process behind these questions, read what VCs look for in due diligence. For a structured checklist you can work through before facing investors, see the startup due diligence checklist. And for the complete framework covering all seven areas of analysis, start with the startup due diligence guide.

Startup Due Diligence Questions FAQs

What questions do investors ask during due diligence? Investors probe seven areas: team (founder-market fit, working relationships), market (size, timing, competition), product (problem, validation, moat), traction (metrics, retention, unit economics), financials (model, burn, use of funds), legal (cap table, IP, liabilities), and risk (vulnerabilities, kill criteria, references).

How do I prepare for due diligence questions? Understand what each question is actually testing, not just the surface inquiry. Prepare specific, honest answers with supporting data. Anticipate follow-ups by thinking through failure modes and risks. Practice with someone who'll push back, not just validate.

What answers raise red flags for investors? Generic responses, inability to cite specific numbers, deflecting hard questions, contradictions between what you say and what references say, over-optimism without acknowledging risks, and surprises that should have been disclosed earlier. The pattern is dishonesty or lack of self-awareness.

How long does startup due diligence take? Typically 2-8 weeks depending on deal size and investor type. Seed rounds often have lighter diligence (1-3 weeks). Series A and beyond involve deeper investigation (4-8 weeks). Having documents, references, and data ready accelerates the process.

What documents should I prepare for due diligence? Cap table, financial model with assumptions documented, customer list with contract details, IP assignment agreements, incorporation documents, any prior investment terms, employee option pool details, and bank statements. Having these organized before diligence starts signals professionalism.

Can I fail due diligence? Yes. Deals die in diligence when investors discover undisclosed problems, when reference calls reveal concerns, when metrics don't match claims, or when deeper investigation reveals the opportunity isn't as presented. The antidote is honesty throughout the process — surprises in diligence are almost always fatal.



Verve Intelligence helps you prepare for due diligence by running the same scrutiny investors apply — before you're in the room. $99. Get your evaluation →