AI in the Law: Accountability Still Rests With the Lawyer
When Your Chatbot Hallucinates Case Law, Guess Who's Getting Sanctioned?
The legal profession's relationship with artificial intelligence has shifted from cautious experimentation to widespread adoption at remarkable speed. Drafting assistance, case law summaries, and document review are now routine features of legal practice. But the conversation has recently taken a sharper turn. Courts and regulators have stopped asking whether lawyers use AI and started asking a far more pointed question: how are you exercising responsibility when you do?
Courts draw the line
Recent guidance from bodies including the Supreme Court of South Australia has made explicit what many had assumed was implicit: AI may assist, but it categorically does not displace professional judgement. As Professor Richard Susskind has long argued in his work on the future of the professions, technology in law should enhance rather than replace human expertise. Lawyers remain personally accountable for every submission to court. Errors attributed to automated tools are now being treated as professional failures, not technical mishaps.
This shift reflects growing judicial concern that speed and efficiency are crowding out verification. Courts in multiple jurisdictions have publicly identified filings containing fabricated citations or mischaracterised precedents, all linked to uncritical reliance on generative systems. The response hasn't been prohibition, but clarification: AI use is permitted, but delegation of responsibility is not.
The black box problem
The implications strike at the heart of legal ethics. As Professor Luciano Floridi at the Oxford Internet Institute has explored, traditional frameworks were built around human actors exercising judgement within defined duties. AI challenges this by inserting a powerful but opaque intermediary into the process. What Professor Frank Pasquale of Brooklyn Law School calls the 'black box society' problem becomes acute when that black box is generating legal arguments. Judges are now signalling that opacity is not an excuse. If a lawyer cannot explain how an argument was constructed, that argument should not be before the court.
Reuters reporting has highlighted parallel debates in the United States and Europe, where draft rules on AI-generated evidence and judicial warnings about 'hallucinated' submissions have prompted renewed scrutiny of professional standards. The concern, as legal technology scholar Professor Daniel Katz at Illinois Tech has noted, is not that AI will replace lawyers, but that it may quietly erode the habits of scepticism and checking that underpin adversarial justice.
Trust and institutional authority
There's also a broader institutional dimension. Courts derive their authority not from efficiency, but from reliability and trust. A system perceived as vulnerable to automated error risks reputational damage that far outweighs any productivity gains. This is particularly acute in high-stakes litigation, where public confidence matters as much as formal correctness. Professor Dame Hazel Genn at UCL has written extensively about the importance of public trust in the justice system, and AI introduces new vulnerabilities to that trust that must be carefully managed.
A conservative consensus
The emerging consensus is therefore conservative rather than revolutionary. AI is being treated as a tool, akin to a junior assistant or advanced research software, not as a decision-maker. Its outputs require scrutiny, contextual understanding, and, where necessary, rejection. The current moment is less about technology than about professional discipline: remembering that the fundamental obligations of lawyering don't evaporate because the tools have become more sophisticated.
As courts clarify expectations across jurisdictions, the message to practitioners is increasingly consistent: innovation is welcome, but accountability remains indivisible. In an age of automated drafting and instant analysis, the lawyer's core obligation hasn't changed. Someone must still stand behind every word placed before a court, and that responsibility cannot be automated away. The algorithm might do the heavy lifting, but the lawyer still signs the document. And that signature means something.