The rise of generative AI tools (ChatGPT is the best known, but there are many more) is transforming how we approach legal work, and with great innovation comes ethical responsibility. As legal professionals increasingly integrate AI into daily workflows, understanding the technology and its implications is no longer optional; it's essential.
What Is Generative AI?
Generative AI is a type of artificial intelligence that creates new content based on user prompts. This can include text, images, music, and even computer code. Tools like ChatGPT use what's known as a large language model (LLM) to generate human-like responses by identifying patterns in massive datasets. Their effectiveness depends heavily on two factors: the quality of the data they were trained on and the clarity of the prompt provided.
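To make that "prompt in, generated text out" loop concrete, here is a minimal sketch of what calling an LLM looks like in code. It assumes the openai Python package and an API key in your environment; the model name and prompt are purely illustrative, and other providers' interfaces follow the same basic shape.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A prompt goes in; the model predicts a plausible continuation based on
# patterns in its training data. Clearer prompts tend to yield better output.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Explain attorney-client privilege in two sentences."}
    ],
)

print(response.choices[0].message.content)
```

Notice that nothing in this exchange checks whether the answer is true; the model simply produces text that looks right, which is exactly why the ethical concerns below matter.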
How is it Being Used in Legal?
Generative AI is making waves across various sectors of the legal industry, streamlining everyday tasks from legal research and first drafts to document review and summarization.
Ethical Concerns: Input & Output
With this new frontier comes a host of ethical challenges. One of the biggest risks is AI "hallucination": generating wrong answers with a high degree of confidence. It's like promising a kindergartener candy for giving an answer, then asking a complex math problem: the odds of a correct answer are low, but whatever answer comes back will be delivered with a smile and a confident voice. Legal professionals have already run into trouble with AI inventing citations in briefs and other court documents, and sometimes even in the evidence itself.
Equally troubling is the practice of feeding confidential or sensitive information into AI systems, which may compromise privacy or security. And because AI is only as good as the data it uses, any biases in the training dataset will carry through into the AI's answers.
Staying on the Right Side of Ethics
ABA Model Rule 1.1 emphasizes a lawyer's duty to provide competent representation, which now includes understanding relevant technology. Comment 8 to the rule makes this explicit: lawyers should keep abreast of "the benefits and risks associated with relevant technology." If generative AI is part of your workflow, knowing how it works, and how it fails, is part of being competent.
Best Practices Moving Forward
So how do you use it in the best way possible? First, only use AI tools that offer repeatable, reliable outcomes. That may mean running the same prompt repeatedly to make sure the output is what you would expect, as in the sketch below.
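One rough way to run that repeatability check, again assuming the openai package from earlier (the model, prompt, and temperature setting are all illustrative), is to send the identical prompt several times and flag any drift between runs.

```python
from openai import OpenAI

client = OpenAI()
PROMPT = "List the elements of a common-law negligence claim."

# Send the identical prompt several times. temperature=0 asks the model to be
# as deterministic as possible, which makes inconsistencies easier to spot.
outputs = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        temperature=0,
        messages=[{"role": "user", "content": PROMPT}],
    )
    outputs.append(response.choices[0].message.content)

distinct = set(outputs)
if len(distinct) == 1:
    print("All runs returned the same answer.")
else:
    print(f"{len(distinct)} different answers across {len(outputs)} runs -- "
          "review carefully before relying on this tool.")
```

Identical runs don't prove the answer is correct, only that the tool is stable; correctness is what the next recommendation is for.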
This leads to the second recommendation: ALWAYS review AND verify anything generated by AI. There's an entire category of memes dedicated to term papers with obvious AI mistakes that could have been caught had the student simply read what the program gave them (just Google "AI writing fails" and prepare to chuckle).
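Verification ultimately means a human checking every authority against a real source, but even a crude automated pass can make sure nothing slips by unexamined. The sketch below is hypothetical: a deliberately rough regular expression pulls citation-like strings out of an AI-generated draft so each one can be confirmed by hand. (The second citation in the sample text is invented, which is exactly the kind of thing such a pass should surface.)

```python
import re

# Rough pattern for common reporter citations such as "578 U.S. 330" or
# "999 F.3d 123". Illustrative only -- real citation formats vary widely.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
)

draft = (
    "The court applied Spokeo, Inc. v. Robins, 578 U.S. 330 (2016), "
    "and distinguished Smith v. Jones, 999 F.3d 123 (9th Cir. 2021)."  # made-up cite
)

# List every citation-like string so a human can verify each one against a
# real reporter or research service before anything is filed.
for match in CITATION_RE.finditer(draft):
    print("Verify:", match.group(0))
```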
Third: don't upload client or case-sensitive data. This one's obvious, but it has big implications, since many AI programs are considered part of the public cloud. If it were your information and you wouldn't want it out there, don't put your client's information out there either.
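As a purely hypothetical illustration of that discipline, a workflow can scrub obvious identifiers before any text leaves your environment. The patterns below catch only the most mechanical identifiers (Social Security numbers, email addresses); a real confidentiality review requires far more than a regex.

```python
import re

# Hypothetical pre-submission scrub: redact obvious identifiers before text
# is sent to any external service. Not a substitute for a real privacy review.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize: client Jane Doe (SSN 123-45-6789, jdoe@example.com) alleges breach."
print(redact(prompt))
# -> Summarize: client Jane Doe (SSN [REDACTED SSN], [REDACTED EMAIL]) alleges breach.
```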
The last suggestion is to understand the training data and the system's logic as much as possible. This one is a little tougher and requires a bit of research, but for most of the big large language models, information is available about the data used to train the system.
Final Thought
Generative AI is a powerful tool, but like all powerful tools, it must be used wisely. For legal professionals navigating the complexities of eDiscovery and digital forensics, AI offers both promise and peril. By staying informed and ethically vigilant, we can harness the benefits while mitigating the risks.