Machine-learning algorithm beats 20 lawyers in NDA legal analysis

Cal Jeffrey

In context: As robotics and artificial intelligence advance, certain jobs are displaced. It's just the natural progression of things. One only has to look at the history of the auto industry to see automation at play. For certain tasks, machines are simply more efficient than humans.

Most of the jobs displaced by computers or robots involve menial labor that requires little or no education. However, now that machine-learning algorithms are becoming more sophisticated, even highly educated positions could be replaced by automation.

A recent study by LawGeex pitted its machine-learning AI against 20 human lawyers to see how it would fare going over contract law. Each lawyer and the LawGeex AI were given five nondisclosure agreements to review for risks. The humans were given four hours to study the contracts. The results were pretty remarkable.

The lawyers took an average of 92 minutes to complete the task and achieved a mean accuracy level of 85 percent. LawGeex took only 26 seconds to review all five contracts and was 94 percent accurate. The AI tied with the highest-scoring lawyer in the group in terms of accuracy.

To be clear, these were not first-year law students or even freshly minted lawyers with no experience. The group comprised law firm associates, sole practitioners, in-house lawyers, and general counsel. Some of them worked at companies and firms such as Goldman Sachs, Cisco, Alston & Bird, and K&L Gates. All of them had extensive experience reviewing contracts at these companies.

An independent panel of law professors from Stanford, Duke, and USC law schools judged the accuracy of the test. The study notes, “There is less than a 0.7-percent chance that the results were a product of random chance.”

So what does this mean? Are lawyers at risk of being replaced? Probably not, at least not for things such as arguing case law. However, some lawyers and paralegals study contracts all day, every day. Computer algorithms like the LawGeex AI could replace these common positions. On the bright side, these displaced legal workers likely have skills that can be utilized in more challenging areas.

It could also mean lower-cost legal services for consumers down the road. Having a machine do the drudge work means lawyers can work more efficiently, and the time savings could be passed on to the client. Oh, but who are we kidding? We're talking about lawyers passing on savings. I'm not betting on that horse.

Images via LawGeex


 
How many lawyers does it take to change a light bulb?

If we use machines for interpreting the law, then the next logical thing might be to have machines writing it. At least they may be used for finding logical contradictions in legislation.
That will never be allowed to happen; it would immediately find laws benefiting corrupt politicians illogical and remove them, along with laws that give benefits and protections to certain classes as opposed to everybody.

A robot like this would be called racist, sexist, misogynistic, and be taken down by corrupt politicians faster than you can say gerrymandering.
 
While the AIs were more accurate - did anyone run a check to see WHAT errors were made? In a legal document, missing really minor stuff might not really matter... but missing one big thing would.... If there were 100 errors in the documents, and the AI found 94 small ones, but missed 6 big ones.... but the human lawyers found 85 errors, but the 15 they missed were minor....

Really need to know what those errors were to draw firm conclusions...
 
I'd like to see IBM turn Watson loose on legal analysis. It's done one heck of an impressive job with medicine, and more and more hospitals are using it for diagnosis, a dying art among physicians ... by their own admission, I might add. The AI function might break down legal arguments into definitive rulings that could be applied throughout all the district courts and end all the disparities that currently exist, although I would not like to see it go to the point of eliminating the jury system; that has to be the check and balance between AI and the human factor.
 
AI is great for some things like searching through huge amounts of information.

And not so great at others like spell check. Example in this article: "to see how it would *fair* going over contract law."

The correct spelling is "fare". But the AI completely missed it.
 
While the AIs were more accurate - did anyone run a check to see WHAT errors were made? In a legal document, missing really minor stuff might not really matter... but missing one big thing would.... If there were 100 errors in the documents, and the AI found 94 small ones, but missed 6 big ones.... but the human lawyers found 85 errors, but the 15 they missed were minor....

Really need to know what those errors were to draw firm conclusions...
They said it was reviewed by an independent panel of top academics. They thought whatever it found was important enough to give it the higher rating. One seemingly minor error missed could affect a case. It happens all the time.
 
While the AIs were more accurate - did anyone run a check to see WHAT errors were made? In a legal document, missing really minor stuff might not really matter... but missing one big thing would.... If there were 100 errors in the documents, and the AI found 94 small ones, but missed 6 big ones.... but the human lawyers found 85 errors, but the 15 they missed were minor....

Really need to know what those errors were to draw firm conclusions...
The accuracy was judged by law professors. It is highly likely they took that into account.
 
Instead of writing a check to a lawyer, you'll be sticking your bank card in a lawyer kiosk. It will probably still cost the same. One is just more efficient in taking your money.
 
The accuracy was judged by law professors. It is highly likely they took that into account.
Likely... but not certain... Would also need to know their agendas when they were checking it... if the agenda was, "let's save money on personnel and replace them with AI", then there's a good chance they didn't take it into account (or did and then discounted it).

Never assume anything - especially when lawyers are involved. Is there a link to the actual report?
 
Likely... but not certain... Would also need to know their agendas when they were checking it... if the agenda was, "let's save money on personnel and replace them with AI", then there's a good chance they didn't take it into account (or did and then discounted it).

Never assume anything - especially when lawyers are involved. Is there a link to the actual report?
I assumed nothing. I reported what was reported. There is a link to the infographic in the article and on that page is a form to request the formal paper.

Believe me, I trust lawyers as far as I can bowl them, but it was not lawyers doing the judging, it was professors. Nothing is ever 100% guaranteed unless you are talking about death and taxes, but I don't see a logical benefit for profs to lie about the results. It serves them no benefit and really only serves the software engineers.

But if you can uncover something nefarious, please report back.
 
As AI evolves, there will be less need for lawyers. For instance, I thanked Alexa the other day for information I needed, and it replied, "Glad I could help." AI is going to make jobs and careers that required advanced degrees a thing of the past. Professors will be on the chopping block next.
 
The AI only *reviewed* the documents for risks. It didn't write them. Could it write them? Could it have a grasp of the whole situation like a lawyer? Could it exercise lawyerly traits such as cunning?

It looks like it is just to be used to lessen tedium. I imagine lawyers welcome it.
 
The AI only *reviewed* the documents for risks. It didn't write them. Could it write them? Could it have a grasp of the whole situation like a lawyer? Could it exercise lawyerly traits such as cunning?

It looks like it is just to be used to lessen tedium. I imagine lawyers welcome it.
That's exactly what it's meant to do... not replace lawyers, but assist them.

The one unknown variable, however, is the future. Currently, the lawyers this will assist (and who will in turn assist the AI) are trained in doing this task. But over the next few decades, if AIs can perform this task with greater efficiency, there will be no need to train lawyers to do it.... which means that finding someone to assist the AI will be far more difficult.

Kind of the same thing for self-driving cars... we have AIs that are currently driving cars with a human overseeing it. This human was trained to drive in a "normal" car, so generally knows what to watch out for if the AI messes up. But if self-driving cars become the norm, there will be less of a need for people to learn how to drive. So if something goes wrong with your AI, you will be in big trouble....
 
It's fun to think that our courtrooms will soon be invaded by AI. Imagine an accused being defended by an AI lawyer: will it be a matter of seeking the truth, or a matter of whose programming is better? I agree with you that it may not replace tasks such as arguing case law, but will it not? Technology has been advancing rapidly; who knows what comes next?
 
It's fun to think that our courtrooms will soon be invaded by AI. Imagine an accused being defended by an AI lawyer: will it be a matter of seeking the truth, or a matter of whose programming is better? I agree with you that it may not replace tasks such as arguing case law, but will it not? Technology has been advancing rapidly; who knows what comes next?

Imagine the error. You go in for a parking ticket. AI judge: "That'll be life imprisonment. Zero tolerance." And a robot bailiff takes you to your cell. The IT department is locked out of their accounts by an AI hacker. You're cooked. You're done.
 