[Illustration by Katie McBride: two lawyers at laptops face each other. One slumps, exhausted, trailed by fading duplicates of herself; the other sits alert as binary code streams from her screen, representing AI assistance.]

Law’s newest laboratory

October 2, 2024

Artificial Intelligence

As AI shifts the professional landscape for practicing attorneys, legal education adapts to the latest technological disruption.

By Matthew Dewald

In a Charlottesville, Virginia, office, attorney Justin Ritter fires up his computer and launches Claude, a generative artificial intelligence tool that is assisting his legal work. He’s an enthusiastic early adopter of the new technology, using it “virtually every day.” Depending on his tasks for the day, he might use it to create a first draft of a purchase agreement for which he has no template. Or he might have it make tedious, routine document changes, such as changing “manager” to “managers,” adjusting verb agreement and other grammatical nuances along the way. Or he might ask it for feedback about something he’s written. “It might give me 15 suggestions,” he says. “Maybe three of them are good, and maybe one of those was critical.” It performs any of these tasks within seconds.

Ritter hasn’t been using generative AI that long — ChatGPT, the first of the current wave of new tools, launched in November 2022. Claude is his current favorite for its ease with language, though he has tried many of the tools currently on the market. In the short time he’s been using them, he has already reached some big conclusions about the changes AI will bring to the legal profession.

“I’m already seeing the changes,” he says. “I am unequivocally convinced that there is no putting the genie back in the bottle or ignoring this as a major disruptive tool for our practice. This will be more disruptive than email or fax.”

[Photo: Chris Sullivan, ’17 and L’22.]


Meanwhile, back at the law school, Roger Skalbeck, associate dean for library and information services in Muse Law Library, is guiding students through a hands-on exercise with AI tools. They’re using AI-generated images to build comic books explaining legal principles.

“Everybody’s seen examples of [AI] now, whether in the press or using it directly,” Skalbeck says. He observes two main reactions from students: some hesitate to use AI, wanting to learn fundamental skills without relying on it as a crutch, while others are highly curious to explore its capabilities.

Scenes like these are playing out in law offices and law schools across Virginia and the country, illustrating the rapid and transformative impact that artificial intelligence is beginning to have on the legal profession. From solo practitioners to major law firms, from first-year law students to veteran attorneys, AI is reshaping how legal work gets done and how the next generation of lawyers is being prepared at Richmond Law.

The efficiency gains promised by AI are hard to ignore. Ritter estimates he’s seeing about a 20% productivity boost in his work. His colleague at Ritter Law, associate attorney Chris Sullivan, ’17 and L’22, ballparks the gain closer to 25%. Regardless of the precise figure, those gains are set to grow as AI proves capable of performing certain routine tasks in a fraction of the time that humans can.

Skalbeck draws a parallel to how cell phones changed our relationship with information. Just as we no longer memorize phone numbers, AI may shift which tasks occupy a lawyer’s time, freeing up mental bandwidth for higher-level analysis and creative problem-solving.

“The question isn’t ‘Will AI replace lawyers?’” Sullivan says. “It’s ‘Lawyers using AI will replace lawyers not using AI.’”

 

As inevitable as tomorrow’s rising sun

Ritter and Sullivan are an integral part of Richmond Law’s efforts to prepare law students for the developing AI landscape. In spring 2024, the pair co-taught the school’s first AI-focused course: Artificial Intelligence in Legal Practice. Other faculty are increasingly incorporating AI within broader courses focused on topics such as legal research skills or intellectual property. Fluency with AI figures to become just as essential to the practice of law as professional skills like writing and advocacy.

Whenever the topic of AI comes up in courses, the focus is on both possibilities and pitfalls. The pitfalls grab attention. In one much-discussed case, an attorney representing a client with a tort claim against an airline submitted an AI-written brief riddled with citations to nonexistent case law. He then doubled down when initially challenged about it. The headline in The New York Times captured the tone of much of the coverage: “Here’s What Happens When Your Lawyer Uses ChatGPT.” Ritter and Sullivan assign that article as one of their course readings.

There are also serious concerns about threats to attorney-client privilege. To get output from a generative AI tool, a user must enter a prompt. For example, an attorney revising a leasing contract for a landlord about to lease to their first dog-owning client might input the language of the original contract along with “Add common clauses to the contract that account for a tenant with pets.” If the attorney doesn’t like what comes back, they can quickly scrap it and ask the tool to try again with a more tailored prompt. One key safeguard is to avoid entering real names and other confidential data: the attorney should swap them for generic placeholders before prompting, then restore the real details afterward, to avoid violating client confidentiality.
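That swap-and-restore step is simple enough to automate. Here is a minimal sketch in Python; the party names, placeholder scheme, and send_to_model() function are hypothetical stand-ins, not any particular product’s API.

```python
# A minimal sketch (not from the article) of the swap-and-restore workflow
# described above. The party names, placeholder scheme, and send_to_model()
# function are hypothetical stand-ins for any generative AI tool's API.

def redact(text: str, placeholders: dict[str, str]) -> str:
    """Replace real names with generic placeholders before prompting."""
    for real, placeholder in placeholders.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str, placeholders: dict[str, str]) -> str:
    """Swap placeholders back to the real names in the model's output."""
    for real, placeholder in placeholders.items():
        text = text.replace(placeholder, real)
    return text

# Hypothetical parties; this mapping never leaves the attorney's machine.
mapping = {"Jane Roe": "LANDLORD", "John Doe": "TENANT"}

contract = "This lease is between Jane Roe and John Doe ..."
prompt = (
    "Add common clauses to the contract that account for a tenant with pets.\n\n"
    + redact(contract, mapping)
)
# draft = send_to_model(prompt)    # hypothetical API call
# final = restore(draft, mapping)  # attorney reviews before use
```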

[Photo: Molly Lentz-Meyer talking with students.]

“If you put privileged information in there, they say it doesn’t get out,” says Sam Cabo, a digital resources librarian at Muse Law Library. She co-teaches a course called Legal Practice Technology with Molly Lentz-Meyer, the library’s director of bibliographic services. Their course includes a section on using AI in legal research. “But every time you put something in, it’s learning — so it’s using privileged information to learn, and that is an ethical dilemma.”

Despite such concerns, Cabo and Lentz-Meyer emphasize to students that lawyers have an ethical obligation to understand and use time-saving technologies when appropriate. “If you’re saving three hours by using AI, that’s good for your client,” Lentz-Meyer notes. Sullivan makes the same argument, pointing to a section of the Virginia Rules of Professional Conduct that specifically obligates lawyers to stay up to date on technologies that affect their services to clients.

Simply put, the time when attorneys can ethically adopt a head-in-the-sand approach to AI is rapidly passing as its capabilities improve. Skalbeck notes that incoming first-year law students nationally will make up the first law school class that has always had access to commercial-grade, law-specific AI tools — not only the ones being developed by Westlaw and LexisNexis but others fighting for space in the emerging market. “You can’t unring that bell,” he says.

These developments make AI adoption by the legal profession a question of not if but when, not whether but how. Legal-specific generative AI tools can perform rapid and sophisticated document review, processing, classification, data synthesis, and analysis. In minutes, they can produce good-quality research summaries that currently take hours or days to compile. In short, they can handle many of the rote tasks that disproportionately take up lawyers’ time, freeing them for advising, strategizing, and other work that demands the higher-level thinking lawyers are specially trained to do. Faculty who teach the subject at Richmond Law say the appropriate ground lies between the poles of naïve enthusiasm and avoidance: a hefty dose of knowledge about the tools, tempered by awareness of their limitations.

 

Legal education adapts

For their part, Lentz-Meyer and Cabo take a long view of AI. They note that it isn’t brand new — AI has been quietly embedded in legal research and practice for more than a decade. Platforms like Westlaw and LexisNexis have long used algorithms to power their search functions. Now, they’re simply integrating more advanced AI capabilities, including brief generation and analysis tools.

What they caution students against is over-reliance on these tools. “The algorithm’s not smarter than you,” Lentz-Meyer tells her students. “You are a human who is looking for something specific.”

An illustration showing silhouettes of two lawyers sitting at laptops, facing each other. One lawyer looks exhausted and frustrated, sitting with her head on her hand. Multiple duplicate silhouettes of her fade off into the distance behind her. The other lawyer sits upright and alert. Binary code comes out of her computer towards her face representing her use of AI assistive technology. This lawyer is also represented by another instance of her silhouette, but this one is shaking the hand of a client, having moved on from the previously time consuming work that the AI technology has helped her with.

One eye-opening exercise involves having students compare the results of identical searches across different legal research platforms. Students think of these platforms as definitive, but what they find is significant variance among them. This anecdotal experience is backed up by a journal article that is one of their course’s readings. It reports on a study in which researchers closely examined the results returned by six of the most commonly used legal research databases, including Westlaw and LexisNexis. The study found “hardly any overlap in the cases that appear in the top ten results returned by each database.” One conclusion the researchers drew: “Every algorithm has a unique voice.”
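The variance the study describes is easy to quantify. Below is a toy illustration in Python of the kind of comparison the exercise invites; the databases and case names are invented, not data from the study.

```python
from itertools import combinations

# Invented top results standing in for what different legal research
# databases might return for the same query.
top_results = {
    "Database A": {"Case 1", "Case 2", "Case 3", "Case 4", "Case 5"},
    "Database B": {"Case 4", "Case 6", "Case 7", "Case 8", "Case 9"},
    "Database C": {"Case 2", "Case 9", "Case 10", "Case 11", "Case 12"},
}

# Pairwise overlap: shared cases and Jaccard similarity
# (0 = completely disjoint, 1 = identical result lists).
for (name_a, set_a), (name_b, set_b) in combinations(top_results.items(), 2):
    shared = set_a & set_b
    jaccard = len(shared) / len(set_a | set_b)
    print(f"{name_a} vs. {name_b}: {len(shared)} shared, Jaccard = {jaccard:.2f}")
```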

In some ways, it’s a lesson in old-fashioned information literacy, with AI just the latest twist.

Cabo and Lentz-Meyer also encourage students to develop a healthy skepticism of AI-generated writing. In one assignment, students use ChatGPT to draft a legal memo, then annotate and reflect on the process. Many are surprised to find that refining the AI-generated text takes longer than writing from scratch would have.

“Now those students know what looks like a time-saving instrument is in fact not always that,” Lentz-Meyer says.

In their legal practice course, Ritter and Sullivan also have students experiment with AI tools and reflect on their experiences.

“We need to move beyond just an introduction of what ChatGPT is,” Sullivan says. “Let’s start having more of a conversation about what machine learning AI is, what are the various tools out there, and what are the specific ways that lawyers can be using it.”

The students’ final project suggests the technology’s potential. Students create their own AI tool using a technique called retrieval-augmented generation. In a nutshell, they build a custom generative AI tool that draws its information from user-vetted sources rather than from whatever the underlying model ingested from the internet at large. The answers it generates to user prompts are far more likely to be accurate and relevant. The approach addresses one of the biggest problems with more generalized models, a tendency to make up information — “hallucinate,” in tech jargon — with confidence.
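In outline, a retrieval-augmented system pulls the passages most relevant to a question from the vetted sources and folds them into the prompt. The sketch below shows that pattern, not the students’ actual systems: the retrieval is naive keyword matching rather than the embedding-based search real systems use, the statute excerpts are paraphrased stand-ins, and generate() is a hypothetical language-model call.

```python
# A minimal sketch of retrieval-augmented generation. Retrieval here is
# naive keyword overlap (real systems use embedding models and vector
# stores); generate() is a hypothetical language-model call.

def score(question: str, passage: str) -> int:
    """Count question words that also appear in the passage (toy relevance)."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def build_prompt(question: str, vetted_passages: list[str], k: int = 2) -> str:
    """Ground the prompt in the k most relevant user-vetted passages."""
    top = sorted(vetted_passages, key=lambda p: score(question, p), reverse=True)[:k]
    sources = "\n\n".join(top)
    return (
        "Answer using only the sources below. If they do not cover the "
        f"question, say so.\n\nSources:\n{sources}\n\nQuestion: {question}"
    )

# Paraphrased stand-ins for vetted excerpts of a statute.
passages = [
    "A landlord must maintain the premises in a fit and habitable condition.",
    "A tenant must give written notice of any material noncompliance.",
    "A security deposit may not exceed two months' periodic rent.",
]

prompt = build_prompt("How large can a security deposit be?", passages)
# reply = generate(prompt)  # hypothetical LLM call; the answer stays grounded in the sources
print(prompt)
```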

Students in the inaugural course created bots for tasks ranging from guiding users through Medicaid applications to producing first drafts of personal injury complaints for Virginia courts.

“We had someone who created a Virginia Residential Landlord and Tenant Act bot that helps tenants understand complex leases and lease terms,” Sullivan says. “These are just students that didn’t have any background in AI three months ago, and now they’re building these technologies that are basically ready for market to immediately help the people of Virginia.”

As the pair revise their syllabus for when they offer the course again this spring, the main challenge is simply keeping up with new developments.

 

Novel legal questions

Skalbeck has been incorporating AI tools into his teaching for several years, giving students hands-on experience with the technology. In 2022, even before ChatGPT became a household name, Skalbeck’s students were among the first in the country to test an AI-powered legal research tool.

“What I hope students get out of it is the freedom to explore and experience it,” Skalbeck says. “Also a personal sort of moral compass or comfort level. ‘What do these tools do? How do I think about them? And where do I think these things are going?’”

In his classes, students are encouraged to experiment with AI tools for tasks like legal research and for creative projects that focus on communicating legal principles in plain language. For example, in a course on comics and law, students use AI image generators to illustrate comic books about legal issues, such as how tenants can force landlords to make repairs or the history of the jury system. This hands-on approach helps students understand the potential, limitations, and legal issues related to the technology.

In addition to affecting how lawyers will practice law, AI is also raising new legal questions. One key area Skalbeck focuses on is intellectual property. As AI-generated content becomes more prevalent, thorny questions arise about authorship and ownership. He points out that according to current copyright law, works created solely by AI cannot be copyrighted — human authorship is required.

[Photo: Roger Skalbeck in his office, holding a copy of the comic about copyright that he created with AI assistance.]

Skalbeck, above, created a comic book about copyright as a test case for the U.S. Copyright Office. The book is a hybrid of human- and AI-produced content, putting it in a legal gray area for copyright protection. “They’ve had it almost two years now,” he said.


This principle was recently tested when an artist submitted a comic book partly created with AI to the U.S. Copyright Office. The office initially approved the copyright but later revoked it when they learned AI was involved. Eventually, the artist was granted copyright protection for the text and arrangement of panels, but not for the AI-generated images. Such cases highlight the complex legal and ethical issues surrounding AI use. Skalbeck notes that many current lawsuits target the companies creating AI tools, with plaintiffs like authors seeking to have their works removed from AI training data.

Skalbeck himself has dipped his toes into these waters, testing the ground between human-generated and AI-generated artistic creations. He created an illustrated guide to the definitions of copyright law that uses largely AI-generated images. In dozens of images, he’s cleverly incorporated legal citations and graphics to represent leading court cases. Unlike the artist who made the earlier submission to the Copyright Office, he fully discloses his use of AI in the foreword.

“I did a lot of enhancements,” he says. “I generated hundreds and hundreds of images, selected them, arranged them, annotated them, edited them, and put them together. I’ve sent this to the copyright office with the assistance of our Intellectual Property and Transactional Law Clinic and am waiting to hear back. They’ve had it for almost two years now.”

 

The billable hour’s death foretold? And other questions

In this environment of uncertainty, Sullivan maintains a stance of “cautious optimism” toward AI in law. Like others, he emphasizes the importance of understanding the technology’s limitations, always verifying AI-generated information, and using it as a supplement to, not a replacement for, human legal expertise. As Ronald Reagan put it in a different context, trust but verify. But he is also excited by the offloading of rote tasks and by the significant productivity gains, which he expects to continue to rise quickly.

Like Ritter, he believes one potential impact is the modification or decline of the billable hour as the focus shifts from time spent to the value of a deliverable. Yes, AI technology might initially slow an attorney down — as students in Cabo and Lentz-Meyer’s course learned — but Sullivan sees in that struggle the initial climb up the learning curve as lawyers test the technology. Attorneys, he says, should be testing generative AI tools now so they can understand what they are and how they work, and develop best practices for their use.

[Illustration: a billable-hours sheet broken apart by an onslaught of binary code, representing AI.]

“That’s really the only way to learn,” he says. “Fine-tune your prompt engineering skills. Figure out what works and what doesn’t work. Yeah, maybe it slowed you down in the short term on that one particular assignment, but you just come out of it knowing more about AI strengths and weaknesses so that next time you know how you can leverage it, and it will make you faster.”

As efficiency builds, pressure to switch billing models will grow, he predicts.

“Attorneys are always happy to hear about threats to the billable hour, certainly, but this isn’t just me talking,” he says. “The Florida and California bars are already on record exploring how AI impacts reasonable fees. It will no longer be, ‘How long will it take me to draft this contract?’ It’ll be ‘What’s the value of having a well-written contract?’”

Ritter predicts the shift may come within the next three to five years. If so, it will mark a major break from the billable hour’s current dominance. The practice might feel immutable, but its ubiquity is, in fact, a fairly recent development. According to the 2014 Idaho Law Review article “Bill, Baby, Bill,” it became the dominant billing method in the United States only after the Supreme Court’s 1975 ruling in Goldfarb v. Virginia State Bar.

“I thought the billable hour was something that [had] just always been there,” Ritter says. “It’s not as entrenched as we all think it is.”

If AI can reverse that pillar of current legal practice, what else might change? For now, Sullivan, Ritter, and others who practice and teach at Richmond Law are focused on developing ethical practitioners who use AI tools efficiently and effectively while protecting client confidentiality and assuring appropriate review of output from generative AI tools. While AI may seem like uncharted territory, many of the challenges it presents are reminiscent of those posed by previous technological advances. The focus, appropriately, is on harnessing the power of AI while maintaining the judgment, ethics, and human touch that are at the heart of the legal profession.