The Mistake Most Legal AI Vendors Won't Admit
How some of the biggest names in legal AI locked themselves into old AI models.
A lot of legal tech vendors make a big song and dance about how their AI models are "trained for legal."
[Full transparency: at DraftPilot we do a bit of singing and dancing about this too].
However, the dirty little secret is that our biggest quality improvements have never come from fine-tuning or training.
They come when OpenAI releases a new base model. Magically everything gets better. Everything else, even our fine-tuning, is pretty incremental in comparison.
Here's an example: same prompt, same document, just upgrading DraftPilot from GPT-4o to GPT-4.1, which we did a few weeks ago:
The quality leap was massive. And we were able to ship it within weeks of OpenAI’s release. That’s because the kind of fine-tuning we do can easily be ‘applied’ to every new model that comes out.
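For the technically minded, here is a minimal sketch of why that works (not our actual code; the prompt and function are illustrative). When the legal-specific behaviour lives at the prompt and configuration layer rather than inside the model’s weights, upgrading the base model is a one-line change:

```python
# Minimal sketch: legal "tuning" kept at the prompt/config layer,
# so the underlying base model can be swapped with one line.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# All legal-specific behaviour lives here, independent of any one model.
LEGAL_SYSTEM_PROMPT = (
    "You are a contract-review assistant. Flag unusual indemnity, "
    "liability-cap and termination clauses, citing the relevant section."
)

BASE_MODEL = "gpt-4.1"  # upgrading from "gpt-4o" is just this one change

def review_document(document_text: str) -> str:
    """Run a contract-review pass over one document."""
    response = client.chat.completions.create(
        model=BASE_MODEL,
        messages=[
            {"role": "system", "content": LEGAL_SYSTEM_PROMPT},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content
```

A vendor whose tuning is baked into custom model weights can’t do this; for them, every base-model upgrade means retraining from scratch.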
Now, some very large providers went far beyond simple fine-tuning. They made deep structural changes to the model itself: retraining internal weights, ingesting billions of legal tokens mid-training, building custom checkpoints.
This was a rational strategy two years ago, because models simply weren't good enough out of the box yet. It also gave them a real ‘defensive moat’ for a time, as you’d need huge amounts of funding to do this.
But this approach now prevents them from easily swapping out models. Under the hood, they locked themselves into infrastructure they can't easily escape.
What you should ask every AI vendor before buying
So if you're an in-house lawyer evaluating an AI tool, take claims of legal-specific tuning with a grain of salt. It’s useful (and two years ago it was critical), but we’re now in diminishing-returns territory.
These days it’s better to ask: How fast can you switch to a new base model when it comes out?
If the answer isn't "within weeks," that's a problem.
Model quality really is improving at breakneck speed, and you can’t benefit from those uplifts if the vendor is locked into its (formerly) state-of-the-art custom AI model from 2-3 years ago.
Don't bet against OpenAI, Anthropic, etc.
For the foreseeable future, the big AI providers will be pumping billions into making their models better.
As Sam Altman pointed out: “if you're betting on your own model improvements to stay relevant, you're effectively betting against OpenAI. That's not a wise position to be in”.
In other words, it’s better to be able to quickly apply whatever the latest and greatest is, so that your product automatically gets better as soon as the underlying models do.
Investors don’t love this
Investors don’t love that this is how it works.
They would’ve preferred a world where their investment cash buys a strong ‘technical moat’. As a result, when pitching for investment, founders have an incentive to overstate the importance of ‘training’, because it makes investors feel more comfortable that it’s a long-term defensible position.
In practice it doesn’t work that way: more cash often leads to bloated products that are hard to update.
That doesn’t mean there aren’t any moats anymore; they’re just not as nice and clean as a long-term tech advantage.
The moat is to have a better understanding of how lawyers work than anybody else.
When lawyers look at legal tech, they’re asking themselves: “Can I understand how this tool works and get value in 5 minutes?” If the answer is no, then even if they buy it, they likely won’t use it.
So as a legal AI company, it’s better to focus deeply on the specific problem you’re solving for lawyers and make sure they can use the tool intuitively, without training. That’s the real moat these days.
Thanks for being here,
Daniel
CEO at DraftPilot
LinkedIn