“AI is an amazing legal assistant that will cut out all the boring work and make you three times as efficient.”
“AI is a copyright thief that will take your work and reproduce it without attribution or remuneration.”
“AI produces great value for your clients.”
“AI gets things wrong and will make you a global laughing stock if you cite imaginary cases in your court pleadings.”
Sound familiar? There is no shortage of opinions on generative artificial intelligence (Gen AI) and its uses in the legal space. If you are a lawyer, it is probably dominating your office meetings and working dinner conversations. Many firms are taking a cautious approach but, with legal technology providers pouring hundreds of millions of pounds into developing ever faster, more accurate tools, it feels only a matter of time before Gen AI is an integral part of most lawyers’ working days. That has interesting, if uncertain, implications for how young lawyers learn their craft and the kinds of roles available to legal professionals, as well as for the delivery of justice.
There is something else that merits attention in the AI debate: what is the value of human thinking about the law? Law doesn’t exist in a vacuum. It is shaped and applied in a social, political, and economic reality created by humans. It is a deeply human endeavour. How can the law evolve if algorithms use statistics to apply legislation and precedent to the facts and produce pleadings—or even decisions—based on the most probable outcome? What is the role of lawyers and judges if their work can be mined by a large language model (LLM), which can then create its own legal advice, pleadings, and judgments, as well as legal scholarship, for anyone who knows how to write the right prompts? What does it mean to be a good lawyer when AI can do your work in seconds—for free?
We are not quite in that world yet, but it is not a far-fetched scenario. Numerous tests have shown that the differences between student-written and AI-written essays can be imperceptible even to experienced lecturers. Some of the steps to be taken are deeply practical: establish the right guardrails to stop the sharing of protected information with LLMs that will ingest and reuse it; train lawyers and legal scholars to use AI responsibly and to always check the source material; and press tech companies to be transparent about how their LLMs are trained and how users’ data and privacy are protected. Ensuring LLMs are free from bias is particularly important. No single lawyer can achieve this alone but, collectively, lawyers’ advocacy for responsible, safe AI will make a difference.
Perhaps even more important is this: amid everything AI promises, let us not lose sight of the importance of human thinking and creativity to the law. Sometimes a completely new line of argument or a highly creative interpretation is required to adapt the law to changing circumstances or shifts in society. AI cannot, or perhaps should not, do this. The best thinking is often slow, maturing over time as a lawyer or judge mulls a case over. Or it emerges in conversations with others, sometimes in unexpected ways. It is often sparked by something you read. Legal publishing has a crucial role here: helping to disseminate the best legal analysis and commentary across the globe and to create a permanent record of every book, article, and short-form piece. Being a good lawyer in an AI world means placing enduring value on the quality and originality of human thought and scholarship.
Featured image by Ground Picture and licensed via Shutterstock.