Faculty Perspectives: Does Your Law School Need a Policy on Generative Artificial Intelligence?

Faculty Perspectives is an ongoing series in which AALS presents opinions from law faculty on a variety of issues important to legal education and the legal profession. Opinions expressed here are not necessarily the opinions of the Association of American Law Schools. Faculty interested in contributing a piece for the series can contact [email protected].

The growing use of generative artificial intelligence (AI) applications and tools (such as ChatGPT) by students has raised several concerns for universities and law schools. UC Berkeley Law is one of the first known schools to develop a policy specifically to address some of these concerns.

Berkeley Law Policy on the Use of Generative AI Software

Generative AI is software, for example, ChatGPT, that can perform advanced processing of text at skill levels that at least appear similar to a human’s. Generative AI software is quickly being adopted in legal practice, and many internet services and ordinary programs will soon include generative AI software. At the same time, Generative AI presents risks to our shared pedagogical mission. For this reason, we adopt the following default rule, which enables some uses of Generative AI but also bans uses of Generative AI that would be plagiaristic if Generative AI’s output had been composed by a human author.

The class of generative AI software:

Instructors have discretion to deviate from the default rule, provided that they do so in writing and with appropriate notice.


Photo courtesy of UC Berkeley Law

AALS sat down with Chris Hoofnagle, faculty director of the Berkeley Center for Law & Technology and Professor of Law in Residence at the University of California, Berkeley School of Law, to discuss the policy and the use of generative AI in the classroom. Professor Hoofnagle is an expert in law and technology and helped craft UC Berkeley’s generative AI policy.

Why did Berkeley Law decide it needed a policy on generative AI?

Chris Hoofnagle: It’s important to create clarity around norms and expectations. In particular, it’s essential to have clear guidelines surrounding anything that can trigger an academic dishonesty allegation. We thought that the capabilities of these tools would be quite tempting for students to use. We needed to clarify the situations in which these tools are acceptable and consistent with pedagogical goals and values.

In what ways do you foresee students using generative AI as the technology gets better?

CH: AGI (Artificial Generative Intelligence) is going to be everywhere. It is already present in internet searches. We’ll soon see it implemented in Lexis and Westlaw and anywhere else complex material needs to be summarized. A ban on this technology is impossible because it will soon find its way into every tool used for research and writing. Just as an example, look at the editor tool in Microsoft Word; it is so good now. We are long past the days of “Clippy” in Microsoft Office. Students are going to get a lift from these technologies in many different areas.

I also think that there are some important caveats. I teach security, so I think about things through threat modeling. The model here is the kind of student who wants to cheat for whatever reason. It will be extraordinarily difficult for schools to deal with students who are determined to cheat. 

ChatGPT’s capabilities are surprising, and students are learning how to use it in advanced ways. For example, some teachers are developing countermeasures such as, “base your response on an argument made in the classroom.” But that countermeasure fails when the student can take the Zoom transcript and dump it into ChatGPT. GPT-4 will accommodate about 50 pages of text.

Therefore, we have to think about a future where a student could put their entire outline, class textbook, class notes, and maybe even a transcript into ChatGPT and request that the service produce an essay. The threat becomes even more complex because smart users of these tools don’t just ask a question and take the output. The smart way to ask ChatGPT a question is to decompose an issue and ask ChatGPT to write shorter essays about each sub-issue. For instance, if the assignment were to write an argument about metaphor in Hamlet, you’d first ask ChatGPT, “What are the most important metaphors in Hamlet?” Then you’d say, “Write me an essay about metaphors one, three, and seven.” AGI technologies present a fundamental challenge to assessment.
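To make that decomposition strategy concrete, the sketch below shows one way it could be scripted against the OpenAI Python client; the model name, prompts, and the ask helper are illustrative assumptions, not part of the Berkeley policy or Hoofnagle’s materials.

# Minimal sketch of the decomposition workflow described above.
# Assumes the official `openai` Python client and an OPENAI_API_KEY
# in the environment; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Send one user prompt and return the model's text reply.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: ask for the sub-issues rather than the finished essay.
sub_issues = ask("What are the most important metaphors in Hamlet? List them.")

# Step 2: ask for a short essay on a chosen subset of those sub-issues.
essay = ask(
    "Here is a list of metaphors in Hamlet:\n"
    + sub_issues
    + "\nWrite me a short essay about metaphors one, three, and seven."
)
print(essay)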

Who at your school was involved in crafting the policy? Did you seek input from students or faculty? What has been the reaction after its release?

CH: A small team of us developed the policy: Jonah Gelbach, Ken Bamberger, and myself. I workshopped the policy several times with the students in my Python programming class. Students found corner cases and problems that I didn’t foresee. They were earnest in wanting to create a policy that discourages the uses we describe as “plagiaristic” while preserving pro-pedagogical ones. There are plenty of positive learning experiences that could come out of an AGI.

The reaction after release of the policy has been largely positive. However, I don’t think the policy is fully developed. The policy might work for doctrinal courses, but deeper complexities in teaching legal research and writing need to be surfaced. I don’t teach those courses, so I don’t know those complexities. But an AGI could be a complete hack when your course goal is to have a student research an issue and write about it. Whereas in a torts exam, I can create facts and bizarro situations that would just bend the universe, and the use of an AGI might be more obvious or it might produce a result that’s just not very good.

One of the ways I use AGI in the classroom is that I pose legal questions to the program and have my students critique the output. One lesson from this is that ChatGPT could, in effect, provide legal services for all sorts of functions that people cannot afford. For example, you could have ChatGPT write a complaint letter to a landlord. It’s a perfectly acceptable letter if you’re not willing to hire a lawyer and the price you’re willing to pay is $0.

It will even make legal threats. For instance, I had the system write a letter to a landlord invoking the right to withhold rent. It correctly cited the California code that entitles renters to withhold payment for certain types of wrongs. Most tenants cannot hire a lawyer for such a thing. It reveals two bigger issues. One, bar associations might attempt to block AGI companies from the practice of law. And two, we could find ourselves in a world where all sorts of people can complain. Imagine all the tenants, prisoners, and NIMBYs who will write complaints that they previously didn’t have the time or literacy to complete.

This policy recognizes how generative AI can be a useful tool while specifically stating how the school restricts certain uses. Is this a recognition that these tools are here to stay and will likely be integrated into everyday technology? How have you already seen students and scholars embrace generative AI? 

CH: There’s an excellent tool for literature reviews called elicit.org. It’s fantastic because it will do scientific or legal literature reviews. When it finds a scientific paper, it writes a plain-language summary. If, God forbid, you were to have a severe illness, but you didn’t have doctor-level literacy, you could actually do the research yourself. I think that’s fantastic.

There are a couple of other pro-pedagogical applications I can think of. One is translation. I have a lot of Chinese students and I’m finding that they are using these tools just to understand the class. Another is to model better writing. Many students struggle when asked to start with a blank page. The AGIs will at least give you an outline to work from, a first draft, if you will.

Then finally, ChatGPT is great at coding, especially for empirical legal scholars. If you wanted to be an empirical legal scholar, but your Python or Stata skills were just not great, you could use ChatGPT to figure out the stuff that doesn’t matter. I think it will make it possible for all sorts of people who are currently shut out from the field to do statistical analysis.
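As a rough illustration of the kind of routine statistical scaffolding he describes, the snippet below runs an ordinary least squares regression with pandas and statsmodels; the CSV file, dataset, and variable names are hypothetical.

# Illustrative sketch: the sort of boilerplate a researcher might ask
# ChatGPT to draft. The file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Load a hypothetical dataset of case outcomes.
cases = pd.read_csv("case_outcomes.csv")

# Regress damages awarded on a few case characteristics;
# C(circuit) treats the circuit as a categorical variable.
model = smf.ols("damages ~ jury_trial + plaintiff_counsel + C(circuit)", data=cases).fit()
print(model.summary())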

Generative AI tools get smarter and more useful every day. Could you see the need to expand your policy to address emerging concerns?

CH: From the law professor perspective, my exams are often based on a recent torts case, as in one decided maybe a month before the exam. My strategy might break down if these tools can ingest more recent data. I think the development that will happen, and that all lawyers need to consider, is when an AGI company has access to the entire electronic case filing (ECF) database. There must be a million motions to dismiss in it. If one could train up a model on the corpus of federal litigation documents in ECF, that could be a tremendous game changer for law firms.

I’m using AGI in the classroom because I think lawyers will be using it for their first drafts. Graduates need to be familiar with where it shines and where it has warts.

We have to rethink some aspects of examination fundamentally. I am reconsidering whether we should have take-home exams. Do we need to open an exam center where students come in person to write their exams, just like the bar? We might even think about switching to an oral exam.

Have you heard from other law schools or university departments about your policy?

CH: Yes. The most thoughtful feedback came from Bryant Walker Smith, a professor at the University of South Carolina. He created an interesting PowerPoint deck for his faculty on the subject.

Faculty members need to know that there’s a kind of game theory to the problem of academic dishonesty. Our problem is that ChatGPT can write an essay that, in effect, is passing, even though what comes out of it is not very good. Instructors have to ask themselves whether they’d be willing to fail more students or whether they’d be willing to accuse more students of plagiarism. Modern universities make both of these options difficult. So, we have an arms race that’s a little asymmetric, where students might use the tool and faculty might be reluctant to investigate or fail mediocre answers because they don’t want to deal with the policy implications.

What sort of advice do you have for schools thinking of crafting similar policies? 

CH: First, the ostrich approach will not work. There’s a significant number of people who are saying, “I don’t want to deal with this. I’m going to keep on doing what I’m doing.” I believe the time you spend designing an assessment against AGI use is worth it, because after the fact, the cycles of investigation and possible academic misconduct charges are immense. Even if it takes 20 or 30 minutes, it’s worth crafting an assessment so that it’s less susceptible to an AGI workaround.

There are some tricks for that at the moment, but these tricks might not last. I’m afraid even to say what they are because I could be wrong by the time this article comes out.

But I also think we need to rethink recording classes. I don’t let people record my classes. I’m on the Faculty Senate’s executive committee, so I talk to many faculty members. Other faculty are telling me that students are literally quoting them in the classroom. Students will write an essay and quote from the class, which means they are going back through the transcript.

The big conflict will come when students request a recording as a disability accommodation. That recording becomes a vulnerability because it can now be subjected to automated analysis. These are all difficult considerations.