A senior legal ops manager recently vented her frustration: “Every time I ask lawyers in our team to test a new legal tech tool, I get the same feedback: it made mistakes, so it’s not fit for purpose.”
She was baffled. The tools were designed to save the lawyers time on the more tedious parts of their work. Not to replace them altogether!
If there were nothing for the lawyers to improve, guide or tweak, the tool could actually replace them entirely! So why did the lawyers seem to demand that level of perfection from the tool?
This got me thinking about the double standard we often apply between hiring junior lawyers to help with legal work and using legal tech.
The double standard between tech and people
When we hire a junior lawyer, we fully expect to review and occasionally fix their work. If they draft a contract and we spend an hour fixing it (instead of three hours drafting it ourselves), we consider that a win. That's exactly what we hired them for!
But with legal tech, the standard suddenly becomes perfection. I've seen lawyers reject tools because they:
Made good fixes to an indemnity clause but missed updating a related defined term
Correctly implemented three playbook requirements for a payments clause but missed the fourth
Fixed a limitation of liability but missed updating a cross-reference
These are exactly the kinds of mistakes we’d expect (and accept) from a junior lawyer. So why the different standard?
Why we judge tech differently
I think there are two key reasons for this double standard:
1. Lack of Shared Standards
Legal AI tools are still new, and as a sector, we’ve never defined where they ‘fit.’ What should they do? What shouldn’t they? What makes a good tool versus a bad one? Without a shared framework, every lawyer ends up using their own, often subjective, criteria. This inconsistency can lead to tools being unfairly judged against impossible standards of perfection.
This is where a scorecard becomes vital (scroll down for a free template). Without one, judgments about what a tool should deliver become wildly inconsistent. For example: is it okay if the AI misses a few issues but still saves you 70% of your time after you fix them? Or is every error disqualifying? Right now, the answer depends entirely on who you ask.
2. The Psychological Driver: Professional Anxiety
There’s a lot of noise about AI replacing humans, and that can trigger anxiety. It does in me too. It’s just all moving so fast.
When lawyers use AI, it’s easy for that fear to creep into their evaluation. Every mistake becomes a reassuring sign: “See? The work still needs me. We can ignore AI a bit longer!”
But that mindset is a trap. With every innovation, the people who thrive are those who embrace it and find ways to work with it.
When the car was invented, some horse-carriage drivers became car drivers - and others didn’t. The same is true here. Legal tech isn’t about replacing lawyers; it’s about enabling them to deliver more value whilst spending less time.
Shifting the conversation to time savings
Here’s the thing: the real metric shouldn’t be “Does it need fixes?” but rather “Does it save net time?”
If you normally spend three hours drafting a contract, and checking/fixing the AI's version takes 30 minutes, that's a clear win - even if it's not perfect.
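To make the arithmetic explicit (using the same illustrative figures, not benchmarks): net time saved = 180 minutes of drafting − 30 minutes of review and fixes = 150 minutes. That’s roughly an 83% time saving on the task, imperfections and all.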
But there's a catch: the tool needs to be easy to use.
In our experience, even the most advanced AI tools fail if they’re cumbersome to use. Lawyers are busy - if a tool doesn’t deliver savings in the first few tries, it’s likely to be abandoned.
That’s why usability is at the heart of everything we build at DraftPilot. The quality of AI is becoming similar across tools - the real differentiator is how easily lawyers can work with it.
We talked about that before here.
Free scorecard template - making evaluation objective
To help legal teams evaluate tech more objectively, we've created a simple scorecard that measures what really matters - including net time saved, learning curve, and consistency.
The scorecard was created for testing contract redline tools like DraftPilot, but you can easily adapt it for other use cases. [Click here to download].
I’ve found that it helps shift the conversation from “Is it perfect?” to “Does it make our lives easier?”
As someone passionate about helping legal teams work smarter, I believe we need to give our tools the same grace we give our people. Progress, not perfection, is the goal.
Thanks for being here,
Daniel
CEO at DraftPilot
LinkedIn