AI in Law


Gregmal


Is the disruption and/or apparent benefit not absolutely massive? Much of law is interpreting rules and using existing case knowledge to build a point of attack. The best lawyers probably still don't have on-demand knowledge of both the written law and all the cases out there that may be relevant. But an AI program can instantly sift these and learn arguments based on what's been successful in the database. Is this not basically the equivalent of having robo-umpires in baseball, automating something that is technically pretty well defined? There are definitely investment angles here. Could the $1,200-an-hour lawyer turn into a subscription service or app? Could architecture and engineering firms be on the chopping block as well?


3 minutes ago, Gregmal said:

Is the disruption and/or apparent benefit not absolutely massive? […]


100%. 
 

I've gotten some shockingly good results from GPT-4 that have essentially replaced an experienced paralegal.
 

I don't think it's going to be long until it can replace a lot of associate-level transactional attorney roles.
 

 


I don't think so, for the same reason that AI can't tell jokes: it doesn't understand nuance and can't make analogies.

 

It has for years been getting better and better at some things, like filtering through emails for discovery in litigation, flagging the ones that might be relevant, and then having an actual attorney look at them. Twenty years ago, when some of my friends graduated and still hadn't found a job, "document review" was like Uber/Lyft/TaskRabbit for lawyers. They would pay you a decent amount (no benefits) to cull through thousands of emails and documents in response to a discovery request; you would be in a room with 20 other underemployed recent grads, and after you made the first cut of the docs, the big lawyer would review them. That first step is gone now. The software is not only cheaper but more accurate than most lawyers, and it doesn't get tired, doesn't take days off, and doesn't sue like a real employee can.
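That first cut is basically a text classifier. A minimal sketch of the idea, with made-up emails and labels (e-discovery vendors call this "predictive coding"; the TF-IDF-plus-logistic-regression model here is just an illustration, not any vendor's actual pipeline):

# Minimal sketch of the first-pass relevance cull. Seed emails and labels
# are invented; a real system trains on an attorney-labeled seed set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "Re: Q3 pricing agreement with Acme, see attached term sheet.",
    "Lunch on Friday? The usual place works for me.",
    "Forwarding the draft indemnification clause for your review.",
    "Fantasy football trade offer inside.",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive to the discovery request

vectorizer = TfidfVectorizer(stop_words="english")
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the rest of the corpus; only the top-scoring docs go to attorney review.
corpus = ["Attached: signed Acme pricing addendum.", "Happy birthday, Bob!"]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for prob, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{prob:.2f}  {doc}")

The ranking replaces the first human cut; the attorney review at the end stays.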

 

If you ask an AI program to explain Brown v. Board of Education, it will find enough articles written on it to explain it. But giving it a set of facts and asking whether something is legal or not is probably way beyond its abilities for now.


20 hours ago, Saluki said:

I don't think so, for the same reason that AI can't tell jokes […]


In my practice I've found that even GPT-4 produces fantastic templates for all sorts of things (demand letters, complaints, mediation briefs, etc.) but with completely unreliable legal arguments and/or fake case law to support its positions.
 

I still find this incredibly helpful. But how much of a leap does it take to get GPT-4 to Shepardize cases and trawl public records to make actual valid legal arguments and cite persuasive authority?
 

GPT-4 writes pretty good discovery requests and good templates for almost anything; I think the missing link is incorporating the legal research tools that already exist.
 

I don't think this will make lawyers irrelevant, but it should disrupt the industry. If I had to guess, AI will be most beneficial for partners/owners/higher-performing senior attorneys.

 

Combined with an oversupply of underqualified young attorneys, I feel like the prospects for new lawyers drowning in debt are especially bleak. But there will still be those who use these AI tools to start off scrappy with low overhead and do just fine in private practice.


Yes, I agree; it will make life easier for senior lawyers and miserable for new lawyers.

 

I played around with it and asked it to draft a sample forum selection clause and a sample material adverse change clause. Because those two are so common, I think it did a pretty good job. Without any false modesty, I know that I could've done a better one as a junior attorney, but mine wouldn't have been twice as good. If a lawyer (or client) has a choice of free and good enough, or slightly better for a lot of money, I think they will go with the off-the-rack option over the bespoke suit. Maybe senior lawyers will use this for a first cut? Or the type of client who uses RocketLawyer or LegalZoom will use this and take it to a lawyer for editing? Or maybe, like computer programmers, junior lawyers will use it and just pump out much more product for the same salary?

 

I asked a couple of specific legal questions about my current practice area and my prior one (both of which are not very common), and it gave an incorrect answer to both, but sounded very authoritative. Which is scary if a small landlord or mom-and-pop business owner relies on it to prepare for a court case where they can't afford a lawyer.

 

Without the foresight of a crystal ball, all I can say is that I'm glad I'm not graduating now, since the practice seems to get worse every year. I saw that in the 1960s, even the white-shoe firms on Wall Street had a requirement of 1,500 billable hours. Nowadays that would be considered part-time, even at a smaller firm.

 

So do you use the sample template the same way you would use a sample brief or prior contract: just follow the pattern to make sure it includes everything it's supposed to, but edit everything so that it fits the specific facts? Since they charge for Shepardizing cases, I would assume ChatGPT would have to have an agreement with Westlaw before that becomes an option.


49 minutes ago, Saluki said:

Without the foresight of a crystal ball, all I can say is that I'm glad I'm not graduating now, since the practice seems to get worse every year. […]

 

Completely agreed, and I'm fortunate to be in a private practice that isn't billable-hour purgatory, although we do of course track billable hours for attorneys' fees motions and so on.

 

In my practice I've just been using the sample template and editing it. Very much paralegal-level stuff, but in a relatively small practice it has significantly improved productivity by removing that bottleneck.

 

I would imagine that Clio, Casetext, Westlaw, or Lexis ends up licensing ChatGPT or another AI service to offer a really good subscription-model product. Not saying it's on the market now, but I'm going to be surprised if it's not pretty quickly. I see Casetext is already offering something; I have a buddy who signed up, but I haven't heard his review yet. I'm quite curious how this progresses.


A friend of mine tried CoCounsel (https://casetext.com/cocounsel/) and was impressed with it. @Saluki - I believe it integrates some aspect of case law research. I am in private practice as well, and having tried Bard and ChatGPT for certain templates, I do believe low-level tasks can be replaced today. There may be a market for a better query editor so that the result (on the first try) is better. Not sure where the opportunities will be for new lawyers, but perhaps document review is replaced with managing queries and cross-checking AI results before they go to a senior attorney. I dreaded document review as a new patent trial associate, but e-discovery is much better now. I suspect an AI tool that incorporates a database of your practice/firm/company will help.
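For what it's worth, the "better query editor" idea can be as simple as a structured prompt builder, so the model gets facts and constraints in a consistent shape and the first draft comes back closer to usable. A hypothetical sketch; the function name, fields, and template are my own inventions, not any product's API:

# Hypothetical "query editor": build a structured drafting prompt so the
# first result needs less rework. Field names and wording are illustrative.
def build_drafting_prompt(doc_type, jurisdiction, facts, constraints):
    return (
        f"Draft a {doc_type} for use in {jurisdiction}.\n"
        "Facts:\n- " + "\n- ".join(facts) + "\n"
        "Constraints:\n- " + "\n- ".join(constraints) + "\n"
        "Cite no cases; leave bracketed placeholders such as [AUTHORITY] "
        "wherever a citation is required, so a human can fill them in and "
        "cross-check against a real research tool."
    )

print(build_drafting_prompt(
    "demand letter",
    "California",
    ["Goods delivered 2023-01-15", "Invoice unpaid after 90 days"],
    ["Professional tone", "Demand payment within 30 days"],
))

Even something this simple makes the cross-checking job concrete: the bracketed placeholders tell the reviewer exactly what still needs real authority before it goes to the senior attorney.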


  • 2 weeks later...

A lawyer disrupted a legal process using AI:

https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

 

Link without a paywall:

https://archive.is/XYO2o#selection-501.0-505.49

 

Quote

 

There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.
That was because ChatGPT had invented everything.

 

 

 

The guy included screenshots of the ChatGPT exchange and the bogus cases in his filing:


https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/

 


36 minutes ago, james22 said:

Will the bogus cases now enter the database from which other AIs will source?

 

AI noise could quickly drown out signal.

I think this is easy to train around for an actual legal application, since you can simply require that the AI use rule statements only from Shepardized cases. Or come up with a better way to do the same thing.
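A crude sketch of that gating idea: extract whatever the model cites and reject the draft unless every citation appears on a list a human (or a citator) has already validated. The regex and the validated list here are toys, not a real citation parser; the fake cite in the demo echoes one of the fabricated cases from the Avianca story above:

# Toy gate: only citations already on a validated ("Shepardized") list may
# survive. The regex handles only simple one-word first parties; a real
# system would use a proper citation parser and a citator service.
import re

VALIDATED_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

CITE_RE = re.compile(r"\b[A-Z]\w* v\. [^,]+, \d+ \S+ \d+ \(\d{4}\)")

def unverified_citations(draft):
    return [c for c in CITE_RE.findall(draft) if c not in VALIDATED_CASES]

draft = ("Under Brown v. Board of Education, 347 U.S. 483 (1954), ... "
         "see also Varghese v. China Southern Airlines, 925 F.3d 1339 (2019).")
bad = unverified_citations(draft)
print("REJECT, unverified citations:" if bad else "OK", bad)

The bogus Varghese cite from the Avianca filing would have failed this check at the door.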


14 hours ago, james22 said:

Will the bogus cases now enter the database from which other AIs will source? […]


I assume any competent law firm would hire some ML dudes to train an AI on specialized cases and parameters to avoid this.

 

A general chatbot would be foolish to use for law.
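For what it's worth, the cheaper version of "train an AI on specialized cases" is retrieval: keep a curated, human-vetted case database, pull the closest records for a query, and let the model summarize only what was retrieved. A toy sketch, with word overlap standing in for a real embedding search and an entirely made-up two-case database:

# Toy retrieval over a curated case database: the source of truth is the
# retrieved record, not the model's memory, so answers can be traced back
# and double-checked. Word overlap stands in for real embedding search.
CURATED_DB = {
    "Brown v. Board of Education (1954)":
        "Racial segregation in public schools violates equal protection",
    "Miranda v. Arizona (1966)":
        "Police must advise suspects of their rights before custodial interrogation",
}

def overlap(query, doc):
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query, k=1):
    return sorted(CURATED_DB.items(),
                  key=lambda kv: overlap(query, kv[1]),
                  reverse=True)[:k]

for case, holding in retrieve("school segregation equal protection"):
    print(case, "->", holding)

Since every answer traces back to a vetted record, it's also easy to double-check, which is the scenario discussed a few posts down.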


14 minutes ago, Malmqky said:

I assume any competent law firm would hire some ML dudes […]

 

You can't avoid this when the algorithm is only xx.xx% correct. You can't even mitigate it, because it's about people's lives. Okay, some people trust Tesla's FSD, so I'm clearly wrong.


40 minutes ago, formthirteen said:

 

You can't avoid this when the algorithm is only xx.xx% correct. […]


Fair point, but I can see a scenario where the AI is used to pull data from a database of cases and relay information about those. That's easy to double-check as well.


  • 2 weeks later...

As those following the burgeoning industry and its underlying research know, the data used to train the large language models (LLMs) and other transformer models underpinning products such as ChatGPT, Stable Diffusion and Midjourney comes initially from human sources — books, articles, photographs and so on — that were created without the help of artificial intelligence.

 

Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content?

 

A group of researchers from the UK and Canada have looked into this very problem and recently posted a paper on their work to the open-access repository arXiv. What they found is worrisome for current generative AI technology and its future: “We find that use of model-generated content in training causes irreversible defects in the resulting models.”

 

https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/
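The effect is easy to feel in a toy version: fit a distribution to data, sample from the fit, refit on the samples, and repeat, so each generation trains only on the previous generation's output. A rough numpy sketch; the Gaussian setup and sample sizes are arbitrary illustrations, not anything from the paper:

# Toy "model collapse": each generation is fit only to samples produced by
# the previous generation's model. With small samples, the fitted spread
# drifts and tail information is progressively lost. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
human_data = rng.normal(0.0, 1.0, size=50)   # generation 0: "human" data
mu, sigma = human_data.mean(), human_data.std()

for gen in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=50)  # train on prior AI output
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"generation {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")

Estimation error compounds from generation to generation and the tails get under-sampled, which is the one-dimensional version of the "irreversible defects" the authors describe.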


  • 1 month later...

They feed Large Language Models (LLMs) such as ChatGPT vast amounts of data on what humans have written on the internet. They learn so well that soon AI-generated output is all over the internet. The ever-hungry LLMs eat that, and reproduce it, and what comes out is less and less like human thought.

 

https://www.samizdata.net/2023/07/we-think-we-are-living-at-the-dawn-of-the-age-of-ai-what-if-it-is-already-sunset/


  • 4 weeks later...

. . . when you feed synthetic content back to a generative AI model, strange things start to happen. Think of it like data inbreeding, leading to increasingly mangled, bland, and all-around bad outputs. (Back in February, Monash University data researcher Jathan Sadowski described it as “Habsburg AI,” or “a system that is so heavily trained on the outputs of other generative AI’s that it becomes an inbred mutant, likely with exaggerated, grotesque features.”)

 

It’s a problem that looms large. AI builders are continuously hungry to feed their models more data, which is generally being scraped from an internet that’s increasingly laden with synthetic content. If there’s too much destructive inbreeding, could everything just… fall apart?

 

https://futurism.com/ai-trained-ai-generated-data-interview

