Due Diligence AI: The Debate Between Caution and Acceleration


Viewpoint 1: Chief Investment Officer (CIO)

“AI is here to accelerate—not replace—judgment. And in diligence, we need speed with clarity.”

I’ve reviewed dozens of acquisitions over the last two years. Every time, the clock is ticking. We have two to four weeks to assess the fundamentals, ask the right questions, and catch red flags before the closing call.

Human teams can’t move fast enough anymore. That’s why we’ve begun integrating due diligence AI—tools that analyse codebases, product usage patterns, customer churn, competitive benchmarks, and even contract structures. We’re not handing over decision-making to machines. We’re sharpening our sight so we don’t miss what matters.

One recent case: the financials looked fine, the growth curve steady. But our AI platform flagged a usage drop within key customer cohorts. The sellers hadn’t seen it—or hadn’t disclosed it. Turned out a feature deprecation six months earlier had triggered silent attrition. Without AI, that pattern would have been buried.
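The cohort check described above can be approximated with a very simple heuristic: compare each cohort's latest usage against its trailing peak and flag large drops. This is a minimal sketch with invented data and thresholds, not the actual platform's method.

```python
import pandas as pd

# Hypothetical monthly active-user counts per customer cohort
# (illustrative numbers only).
usage = pd.DataFrame({
    "cohort": ["enterprise"] * 6 + ["mid-market"] * 6,
    "month": list(range(1, 7)) * 2,
    "active_users": [500, 510, 505, 420, 380, 350,   # decline after month 3
                     200, 205, 210, 212, 215, 218],
})

def flag_usage_drops(df, threshold=0.15):
    """Flag cohorts whose latest usage fell more than `threshold`
    below their trailing peak -- a crude proxy for silent attrition."""
    flags = {}
    for cohort, grp in df.sort_values("month").groupby("cohort"):
        peak = grp["active_users"].max()
        latest = grp["active_users"].iloc[-1]
        drop = (peak - latest) / peak
        if drop > threshold:
            flags[cohort] = round(drop, 2)
    return flags

print(flag_usage_drops(usage))  # → {'enterprise': 0.31}
```

A real diligence tool would use far richer signals (session depth, feature-level events), but even this toy version shows why a steady top-line revenue curve can hide a cohort quietly walking away.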

Due diligence AI didn’t replace our process—it focused it. We redirected our analysts toward root-cause questions and adjusted our valuation accordingly.

My view? The risk of not using AI is now greater than the risk of over-relying on it. Because every missed signal is a missed insight. And at scale, those insights compound.

Viewpoint 2: General Counsel (GC)

“Pattern recognition is powerful—but legal nuance doesn’t fit into models as cleanly as numbers do.”

I’m not anti-tech. But I’m cautious about the growing faith in due diligence AI—especially in legal and compliance work. Contracts aren’t data fields. They’re structured language riddled with exceptions, historical amendments, and implications that a model can’t fully parse.

We had an AI engine mark a series of partnership agreements as “low-risk.” A human lawyer found the issue: a non-compete clause embedded deep in a footnote, with jurisdictional implications that could have triggered litigation post-close. AI didn’t miss it out of laziness—it missed it because the clause didn’t resemble prior examples it had been trained on.

That’s the issue. AI generalises. The law penalises generalisation.

Even in IP audits, AI has blind spots. A model might confirm clean code ownership, but what if early-stage contractors pushed code before proper NDAs were in place? What if verbal agreements formed the foundation of a product’s original stack? AI doesn’t catch whispers—it catches patterns.

The danger isn’t using due diligence AI. It’s using it without context, or worse, without challenge. We can’t afford false confidence. Fast is fine—but it has to be accurate.

Viewpoint 3: Strategy Lead (SL)

“If we combine the precision of AI with the insight of experts, we move from mechanical review to strategic foresight.”

This isn’t a binary argument. The most effective workflows we’ve seen blend both sides. We start with due diligence AI to process enormous volumes of structured and semi-structured data. But the interpretation always belongs to people.

In a recent SaaS deal, AI flagged inconsistencies in feature adoption rates by region. The data team correlated it with the timing of localisation rollouts. That insight completely changed our go-to-market plan post-acquisition. The value wasn’t in the anomaly—it was in the human response to the anomaly.
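Flagging regional inconsistencies like this is, at its simplest, outlier detection: mark any region whose adoption rate deviates sharply from the cross-region mean. Below is a minimal sketch with invented rates and a hypothetical z-score cutoff, standing in for whatever statistical test the platform actually runs.

```python
import statistics

# Hypothetical feature-adoption rates by region (illustrative only).
adoption = {"NA": 0.62, "EMEA": 0.58, "LATAM": 0.21, "APAC": 0.60}

def flag_outlier_regions(rates, z_cutoff=1.0):
    """Flag regions whose adoption deviates sharply from the mean --
    the kind of anomaly a human then traces back to a cause, such as
    a late localisation rollout."""
    mean = statistics.mean(rates.values())
    stdev = statistics.pstdev(rates.values())
    return [region for region, rate in rates.items()
            if stdev and abs(rate - mean) / stdev > z_cutoff]

print(flag_outlier_regions(adoption))  # → ['LATAM']
```

The point the SL makes survives the simplification: the model can say "LATAM is different," but only a person can say "because localisation shipped six months late there."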

We’ve also used AI tools to review sentiment in internal chat logs, issue trackers, and knowledge bases. Patterns of friction between dev and ops teams pointed to cultural misalignment that later became one of our top post-close priorities. Without AI, that context would’ve been anecdotal at best.
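For flavour, the friction signal described above can be caricatured as keyword counting over channel-tagged messages. The channel names, messages, and term list here are all invented; production tools would use trained sentiment models rather than a word list.

```python
from collections import Counter

# Hypothetical chat-log excerpts tagged by channel (illustrative only).
messages = [
    ("dev-ops", "deploy blocked again, ops never reviews our tickets"),
    ("dev-ops", "this handoff process is broken"),
    ("product", "great demo today, nice work"),
    ("dev-ops", "who owns the release checklist? nobody responds"),
]

FRICTION_TERMS = {"blocked", "broken", "never", "nobody"}

def friction_by_channel(msgs):
    """Count friction-signal terms per channel -- a crude stand-in
    for the sentiment models a diligence platform would actually run."""
    counts = Counter()
    for channel, text in msgs:
        counts[channel] += sum(1 for word in text.lower().split()
                               if word.strip("?,.") in FRICTION_TERMS)
    return dict(counts)

print(friction_by_channel(messages))  # → {'dev-ops': 4, 'product': 0}
```

Even this toy version makes the underlying claim concrete: aggregated over thousands of messages, lopsided friction counts stop being anecdote and become evidence worth a post-close plan.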

Due diligence AI doesn’t replace human reasoning. It enhances it. It brings signal to the surface faster—so human decision-making can be stronger, deeper, and less reactive.
