Phil Karter’s article, “Artificial Intelligence for Lawyers: Not Ready for Prime Time,” in The Legal Intelligencer
Reprinted with permission from the June 27, 2023, edition of The Legal Intelligencer © 2023 ALM Media Properties, LLC. All rights reserved. Further duplication without permission is prohibited. For information, contact 877-257-3382 or firstname.lastname@example.org.
Artificial Intelligence for Lawyers: Not Ready for Prime Time
By: Phil Karter
You’re a young, tech-savvy associate and a partner has just made you responsible for researching the legal issue central to your client’s case and finding the appropriate precedential authority to support the key argument the partner wants to make. And, by the way, you have 24 hours to come up with the answer.
Like a rite of passage, those of us who practice in litigation have found ourselves in this position at various stages of our careers. In ancient days (i.e., when I started practicing), it was straight to the library to pull out a hornbook, a volume of Corpus Juris Secundum or perhaps the West Key Number Digest. With luck, this led to a list of legal citations to track down, review, and then, if relevant, to Shepardize to confirm they remained good law.
In the digital age, the process of legal research has been streamlined mightily by research databases such as Lexis or Westlaw, which offer the added benefit of incorporating the Shepardizing process within their vast capabilities. Additionally, with so many scholarly articles pervading the Internet, even a pedestrian search engine inquiry can sometimes provide a good head start (although I have yet to try it with Siri).
But despite all these increasingly powerful electronic research tools, most if not all of us have had experiences at one point or another where the answer to our research assignment refuses to reveal itself. Now there is a new temptation, one that, to paint a somewhat frightening picture, is metamorphosing for some lawyers from a curiosity into the starting point for legal research.
I am, of course, speaking about artificial intelligence, or AI.
Admittedly, it is an intriguing idea to use AI as the starting point for a research project. Setting aside the risk of wholesale plagiarism of an AI response (a concern in its own right), it can be tempting to see what these intelligent machines, with their decision-making and reasoning capabilities, have to say about the topic of your interest. Curiosity aside, it is irresponsible, if not altogether negligent, to outsource to AI what lawyers old and young have always had to do: avoid shortcuts, put in the effort themselves, and apply the reasoning and judgment that is far from ready to be handed over to machines.
A recent story, “Here’s What Happens When Your Lawyer Uses ChatGPT,” which appeared in the New York Times and was picked up by countless other media outlets, is a powerful reminder of the folly of lawyers treating AI as anything other than a novelty at this point.
The story involves a hapless New York lawyer who filed a brief in a federal district court containing multiple legal citations to support his argument that a statute of limitations to commence a suit was tolled by a bankruptcy statute so that his suit was still timely. After neither the court nor opposing counsel could find more than half a dozen of the cases cited, the court ordered plaintiff’s counsel to produce copies of the opinions. When he was unable to do so, the lawyer acknowledged that he had relied on ChatGPT to find the cases cited, which the court found to be nonexistent. He admitted further that, having never utilized ChatGPT for legal research before, he was “unaware of the possibility that its content could be false,” and explained that he had even asked the program whether he could rely on its accuracy. (Apparently, looking up the cases and actually reading them to confirm they stood for the proposition offered – or even existed – never crossed his mind.) The lawyer, along with his co-counsel, is now facing potential Rule 11 sanctions for the reckless filing.
Although this may seem like an extreme example of stupidity, there is no denying that AI is inexorably creeping into the legal mainstream, and even well-respected legal services like LexisNexis are beginning to incorporate it into their research platforms.
Concerns about the advancement and proliferation of AI outpacing our ability to harness it effectively have given rise to some pretty scary headlines, like one recently published in Wired magazine titled “Runaway AI Is an Extinction Risk, Experts Warn.” No, the story was not written by lawyers worried about their future job security (although that concern has also been the topic of many articles), but rather by AI industry leaders worried about “the threat AI poses to humanity on a par with nuclear war or a pandemic.”
That may seem somewhat melodramatic – certainly, it is a subject well beyond the scope of this tiny little advice column. But it is worth noting that AI, if not all of humanity, is at least already smart enough to recognize its own limitations. This is aptly illustrated by some words of caution that ChatGPT itself offered on the topic in response to the following query: “Write a poem about the limitations of using AI for legal research.”
“In the vast realm of legal research’s plight,
There lies a tool that promises insight,
ChatGPT, a wondrous AI device,
Yet perils lurk beneath its polished guise.

For though it wields a wealth of knowledge vast,
Its limitations loom, they hold steadfast,
ChatGPT, a language model grand,
Still lacks the touch of human’s guiding hand.

In matters legal, nuance is the key,
Interpretations varied, complexity,
The law’s labyrinth, intricate and deep,
Requires expertise that models can’t keep.

While ChatGPT can search and provide text,
Its understanding lacks, it may perplex,
Legal research demands a keen eye,
To navigate the pitfalls, laws imply.

For precedents and statutes may align,
But context matters in each legal shrine,
The human mind discerns the subtle cues,
That models miss, for they lack human views.

Ethics and judgment, integral parts,
In legal research, where wisdom imparts,
ChatGPT may lack the moral ground,
To guide in matters where ethics are found.

The perils lie in the blind reliance placed,
On ChatGPT’s answers, in haste embraced,
For complex legal issues demand care,
And expert guidance that AI can’t bear.

The risk of error looms in every word,
A misinterpreted rule, a plea unheard,
To safeguard justice, human minds prevail,
With knowledge, skill, and careful legal sail.

So let us not dismiss the wondrous aid,
That ChatGPT provides when wisely played,
But heed the perils that this tool may bring,
And seek the counsel that true lawyers sing.”
Words to the wise.
Philip Karter is a shareholder at Chamberlain Hrdlicka’s Philadelphia office, where he focuses on tax controversy and tax litigation. In his 40-year career, Karter has litigated federal tax cases in the U.S. District Courts, the U.S. Tax Court and the U.S. Court of Federal Claims, and argued in the U.S. Court of Appeals in multiple circuits.