Seed-stage venture capital due diligence is fundamentally different from later-stage diligence. At the growth stage, a due diligence process can be anchored in quantitative analysis — revenue growth rates, cohort retention, unit economics, competitive positioning — because the company has enough operational history to generate meaningful data. At the seed stage, most of these data points either do not exist or exist in quantities too small to be statistically meaningful. Yet the investment decision must be made, and the stakes are high on both sides.

Over the past three years at Moberg Analytics Ventures, we have developed a diligence framework specifically for AI analytics companies at the seed stage. It is not a formulaic checklist — good seed-stage investing involves too much judgment and context-sensitivity to be reduced to a formula. But it is a structured approach that has helped us consistently distinguish companies with genuine long-term potential from those whose early appeal rests on factors that will not compound into durable value. This essay describes how we approach the process and what we have learned.

The First Meeting: What We Are Actually Evaluating

The first meeting with a founding team is not, primarily, a pitch evaluation — it is a calibration exercise. We are not trying to decide whether to invest in the first meeting. We are trying to understand whether it is worth spending the time to investigate further, and to begin building a picture of the founding team's quality, the specificity of the problem they have identified, and the coherence of their initial approach.

The qualities we are looking for in a first meeting are:

  • intellectual honesty about what is known versus unknown in the business;
  • genuine depth of domain expertise, not just awareness of a market;
  • the ability to articulate the customer's problem and workflow in precise, concrete terms rather than in abstract, high-level descriptions; and
  • a founding team dynamic that suggests productive complementarity rather than redundancy or conflict.

We are explicitly not evaluating the pitch deck, the slide design, or the comprehensiveness of the TAM calculation. Founders who have spent months polishing a deck often produce better first-meeting impressions than founders who have spent the same time building. We try to discount the polish and focus on the signal.

Technical Diligence for AI Analytics Companies

Technical diligence in AI analytics requires a different approach than technical diligence for conventional software. The technical questions that matter are not primarily about code quality, architecture scalability, or infrastructure choices — they are about the validity of the AI approach, the quality of the training data, and the robustness of the model evaluation methodology.

Our technical diligence process for AI analytics companies covers four dimensions.

Problem formulation validity: Is the machine learning problem formulation actually aligned with the business value that the company claims to deliver? We have seen companies where the model is technically sophisticated but the prediction target is poorly chosen: the model predicts something that is measurably accurate but only weakly correlated with the business outcome the customer cares about. Identifying this disconnect requires understanding both the AI methodology and the business workflow simultaneously.
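One way to make the disconnect concrete is to correlate the model's prediction target against the business outcome the customer actually tracks. The sketch below uses purely illustrative data and a hypothetical `pearson` helper; it frames the question, it does not replace understanding the workflow.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative data only: the model's prediction target may be forecast
# accurately, yet track the customer's business outcome only weakly.
# That gap is exactly what the diligence question probes.
prediction_target = [1, 2, 3, 4, 5]
business_outcome = [2, 5, 1, 4, 3]
r = pearson(prediction_target, business_outcome)  # weak: r = 0.1
```

A high target-accuracy number paired with a low `r` here would be a red flag, regardless of how sophisticated the model is.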

Training data quality and provenance: What data was used to train the model? Where did it come from? What preprocessing was applied? What are the known limitations of the dataset? We are particularly focused on the representativeness of the training data — a model trained on data from a handful of enterprise customers in a single industry may generalize poorly to new customers in different industries, even if its in-sample performance metrics are impressive.
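A lightweight way to surface representativeness problems is to break model error out by customer segment rather than reporting a single aggregate. The following sketch is illustrative: the segment labels, error values, and the `per_segment_error` helper are invented for the example, but the pattern it exposes, aggregate metrics hiding a weak segment, is the one we look for.

```python
from collections import defaultdict

def per_segment_error(records):
    """Mean absolute prediction error, grouped by customer segment.

    Each record is (segment, y_true, y_pred). A model whose aggregate
    error looks fine can still perform badly on segments that were thin
    or absent in the training data.
    """
    totals = defaultdict(lambda: [0.0, 0])  # segment -> [sum_abs_err, count]
    for segment, y_true, y_pred in records:
        totals[segment][0] += abs(y_true - y_pred)
        totals[segment][1] += 1
    return {seg: s / n for seg, (s, n) in totals.items()}

# Illustrative: the in-sample industry dominates the data, so the overall
# average looks healthy while a new industry shows much larger error.
records = [
    ("retail", 100, 98), ("retail", 120, 121), ("retail", 90, 92),
    ("logistics", 100, 70),
]
errors = per_segment_error(records)
```

In diligence, the question is not whether the weak segment exists but whether the founders know it exists and can explain why.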

Evaluation methodology: How was the model's performance measured? We are skeptical of evaluation frameworks that optimize for metrics that are easy to achieve but weakly correlated with production performance. We prefer to see evaluation on realistic held-out test sets that reflect actual production conditions, with performance metrics that are interpretable in business value terms — not just algorithmic performance metrics.
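One concrete version of a "realistic held-out test set" is a temporal split: train on the past, evaluate on the most recent slice, which is how a production model is actually used. A minimal sketch, with a hypothetical `temporal_split` helper:

```python
def temporal_split(rows, holdout_fraction=0.2):
    """Split time-stamped rows into train/test by time, not at random.

    rows: list of (timestamp, features, label). Sorting by timestamp and
    holding out the most recent slice mimics production, where the model
    scores data that arrives after training. A random split lets future
    information leak into training and flatters the metrics.
    """
    ordered = sorted(rows, key=lambda r: r[0])
    cut = int(len(ordered) * (1 - holdout_fraction))
    return ordered[:cut], ordered[cut:]

# Illustrative rows: (timestamp, features, label).
rows = [(t, None, t % 2) for t in range(10)]
train_rows, test_rows = temporal_split(rows, holdout_fraction=0.2)
```

A company whose reported metrics collapse when re-evaluated on a split like this has an evaluation-methodology problem, not just an optimization problem.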

Production readiness: Has the model been deployed in a production environment? What was the difference between development performance and production performance? How is model performance monitored in production? Companies that have achieved even a single production deployment have a qualitatively different technical risk profile from those that have only demonstrated laboratory performance.
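A common concrete form of production monitoring is drift detection on the model's input or score distributions. The sketch below implements the Population Stability Index, a standard drift statistic; the bin count, range, and thresholds are illustrative defaults rather than a prescription.

```python
import math

def population_stability_index(expected, actual, bins=10, lo=0.0, hi=1.0):
    """Population Stability Index (PSI) between the score distribution the
    model saw at training time and the one it sees in production.

    Scores are bucketed into equal-width bins over [lo, hi]; PSI sums
    (p_actual - p_expected) * ln(p_actual / p_expected) over the bins.
    A common rule of thumb: < 0.1 stable, 0.1-0.25 worth watching,
    > 0.25 likely drift.
    """
    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        # A tiny floor keeps ln() defined for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    p_exp, p_act = proportions(expected), proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(p_exp, p_act))

# Illustrative: a reference score distribution versus a shifted live one.
reference = [i / 100 for i in range(100)]
drifted = [min(0.99, s + 0.4) for s in reference]
```

Companies with a production deployment should be able to show something like this running continuously; companies without one usually cannot say what their equivalent would even measure.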

Commercial Diligence: Reading the Customer Signals

Our commercial diligence process centers on direct conversations with the customers the company claims to serve. We call between two and five customer references for every company we are seriously evaluating, and we conduct these conversations with a structured protocol designed to surface the depth and authenticity of the relationship rather than just confirming that the customer is satisfied.

The questions we find most informative are the ones that force specific, concrete answers. Not "are you happy with the product?" but "can you walk me through a specific decision you made differently because of this product?" Not "is the product delivering value?" but "if this product disappeared tomorrow, what would change in your operations and how would you quantify that impact?" Not "would you recommend this product?" but "have you, in fact, introduced this product to colleagues at other companies?"

The gap between what a customer says when the vendor has arranged the reference call and what they say when the framing is shifted from evaluation to genuine assessment can be significant. We work to create conditions in which customers feel free to be honest about limitations, frustrations, and uncertainties alongside the genuine value they have received.

Team Diligence: The Assessment That Matters Most

At the seed stage, team quality is the primary driver of investment decisions. The market can change, the product will evolve, the initial customer hypothesis may prove wrong — but a great founding team will navigate all of these changes and emerge with a better business than they started with. A weak founding team will struggle even when all the external conditions are favorable.

Our team diligence process involves multiple structured conversations with each founder, reference conversations with former colleagues and managers, and direct technical and commercial assessments that test the depth of the founders' expertise beyond what they can convey in prepared presentations. We are looking for three things above all: genuine intellectual depth in the domain they are addressing, the emotional resilience to navigate the inevitable setbacks of early-stage company building, and the learning velocity to absorb feedback and update their understanding rapidly.

The learning velocity assessment is perhaps the most important and most underweighted dimension of team evaluation. The founders who succeed in AI analytics are rarely the ones who have the most complete picture of the market and the product at the time of their seed raise. They are the ones who are learning fastest from their customer interactions, their technical experiments, and their competitive encounters. A founder who comes back to a conversation three months later with substantially updated views, clearly informed by new information and genuine reflection, is demonstrating a learning capability that will compound dramatically over the life of the company.

Market Diligence: Sizing the Opportunity Honestly

We approach market sizing at the seed stage with a healthy skepticism toward top-down TAM calculations. A large stated TAM is necessary but not sufficient — we have seen many companies with correctly identified large markets fail to build significant businesses because the specific market segment accessible to a seed-stage company with the specific capabilities of the founding team was a small fraction of the stated total.

What we care about at the seed stage is the answer to a different question: what is the specific segment of this market that this team, with this product, can reach in the next 18-24 months with the capital we are providing? If that segment is large enough to generate meaningful proof points, demonstrate the business model, and justify a follow-on raise at a significantly higher valuation, the investment makes sense. The broader market opportunity is important for understanding the long-term potential ceiling, but it is not what we are betting on in the immediate term.
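That question reduces to bottom-up arithmetic rather than a percentage of a top-down TAM. The numbers below are entirely illustrative, and `accessible_revenue` is a hypothetical helper, but the shape of the calculation is the one we actually run:

```python
def accessible_revenue(accounts_reachable, win_rate, avg_contract_value):
    """Bottom-up sizing of the segment a seed-stage team can actually
    serve: accounts it can credibly reach in 18-24 months, times the
    share it can realistically win, times annual contract value."""
    return accounts_reachable * win_rate * avg_contract_value

# Illustrative assumptions only: 300 reachable accounts, a 5% win rate,
# and a $60k average annual contract imply ~$900k ARR from the
# accessible segment, whatever the top-down TAM slide claims.
arr = accessible_revenue(300, 0.05, 60_000)
```

If a number like this cannot support the proof points needed for a follow-on raise, a large stated TAM does not rescue the investment case.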

The Investment Decision: Conviction Over Consensus

The final investment decision in seed-stage venture capital is not a consensus process. We do not require every member of our team to be convinced that every investment is a good one. We require one conviction holder who is willing to lead the investment and take ongoing responsibility for the relationship with the company. The best investments we have made have often been ones where the founding team generated strong conviction in one investor and skepticism in others — not because the skeptics were wrong but because the signal that generated conviction was subtle and non-obvious.

The ability to develop conviction on non-obvious signals — to see past the weaknesses in a company's early presentation to the genuine strengths in the founding team, the technical approach, or the market dynamics — is the core skill of seed-stage investing. It is not a skill that can be reduced to a formula. But it can be developed through experience, through intellectual honesty about past investment decisions, and through a genuine commitment to learning from both successes and failures.

Key Takeaways

  • Seed-stage AI diligence is fundamentally different from later-stage diligence — qualitative assessment must substitute for quantitative data that does not yet exist.
  • Technical diligence for AI companies covers four dimensions: problem formulation validity, training data quality, evaluation methodology, and production readiness.
  • Customer reference conversations should probe specific decisions and operational impacts, not general satisfaction levels.
  • Team quality — especially learning velocity — is the primary driver of seed-stage investment decisions at Moberg Analytics Ventures.
  • Market sizing at the seed stage should focus on the specific accessible segment in the next 18-24 months, not top-down TAM calculations.
  • The best seed investments often generate strong conviction in one investor despite initial skepticism from others — non-obvious signals require conviction, not consensus.

Interested in Moberg Analytics Ventures' investment process? Learn more on our About page or reach out directly to start a conversation about your company.