There's been a bit of a (fun) debate in Legal AI.
It started with Brian Inkster's observation that there's roughly a 2:1 ratio of time needed to check AI-generated legal work for accuracy.
It was aptly coined "Inkster's Law" by Gav Ward.
The thinking is: for legal applications, accuracy matters. The hallucination risk of legal AI means you're spending at least twice as long checking the work, so using it is pointless.
I think this is a neat conclusion but the wrong one. Inkster's Law might (currently!) hold true for certain pockets of legal work, like legal research, but definitely not for contract mark-ups.
Because with mark-ups, it's much easier to check whether the AI messed up and course correct.
The proof is in the pudding
Here's a video of me using our tool DraftPilot to mark up a SaaS contract. I'd say it saves 80% of the time compared to doing it manually:
Does Legal AI save time? - Watch Video
Utterly shameless plug: if you’re a lawyer and interested in Beta testing DraftPilot for 3 months for free, just hit reply!
Junior lawyers
So what's the rough rule of thumb for where it's worth investing in legal AI?
I think it's a bit like hiring a junior lawyer. If you have to fix 50%+ of the work on an ongoing basis, then Inkster's Law is true.
The fixing takes more time than having the junior saves.
But if you only need to tweak, say, 10-20% (and you can spot those parts instantly!), then Inkster's Law doesn't apply.
As an aside, one reason I love working in legal tech is that the community is full of smart people like Brian sharing interesting ideas.
I feel that the historic 'guild' model of law where the default is to be friendly and collaborate has transferred quite smoothly to legal tech as well.
Thanks for being here!
Daniel
—
Daniel van Binsbergen
CEO at DraftPilot