AI legal research tool eases Victorian lawyer’s work pressure but at a great cost



Posted by George Tabet and Alex Dittel on 16/10/24 at 1:32 PM

A Victorian solicitor (afforded the protection of a pseudonym) has been referred to the Victorian Legal Services Board and Commissioner for submitting to the court a list and summary of fictitious authorities generated by a legal research tool powered by Artificial Intelligence (AI).

The judge accepted the solicitor’s unconditional apology; however, a referral for investigation followed, given the significant public interest in examining professional conduct issues arising from the use of AI in the legal profession. Meanwhile, the research tool provider blamed the solicitor for failing to use the “… verification process, which sent the user the correct information just four hours after requesting it and well before appearing in court …”.

With similar reports emerging from the UK, US, Canada and other countries since 2023, this matter raises questions about reliance on technology, the risks of AI in the legal sector and who bears responsibility in Australia when things go wrong.

Courts’ patience has run out …

In a similar case in the UK, a taxpayer faced a penalty from HM Revenue & Customs for the late payment of capital gains tax.[1] The taxpayer appealed, citing a ‘reasonable excuse’ based on her mental health and excusable ignorance of the law. To support her appeal, she submitted summaries of nine cases, supposedly provided by a friend in a solicitor’s office. However, the tribunal found that none of these cases were authentic.

The tribunal did not sanction the taxpayer and proceeded with the case as if the submissions had not been made. While it accepted that the taxpayer did not know the cases had been fabricated by AI and was unable to verify the authorities, it rejected her claim of ignorance of the law regarding her late tax filing under a recently amended rule, on which her solicitor had allegedly failed to advise her.

Lawyers are not afforded the same leniency as self-represented litigants. The Victorian solicitor submitted a list and summary of legal authorities that neither the judge nor her associates could identify. While the solicitor claimed he did not intend to mislead the court, he admitted that he did not fully understand how the AI tool worked and had failed to verify the accuracy of the authorities. Despite the solicitor’s anxiety about the matter, the judge considered it important, as a matter of public policy, to refer the conduct for investigation.

Even less fortunate were the lawyers in New York who were sanctioned $5,000 for including six AI-generated fictitious case citations in a brief submitted to the court and for making false and misleading statements to the court.[2] A Colorado lawyer faced a one-year suspension after citing AI-generated case law in a motion submitted to a court in May 2023; the junior lawyer failed to verify the fictitious cases, and then failed either to alert the court to the incorrect cases or to withdraw the motion. In the Canadian case Zhang,[3] counsel mistakenly filed a notice of application referring to legal authorities fabricated by ChatGPT. The court held counsel personally liable for the costs of the application and the expense of the other party’s remedial research.

Reports indicate that Australian courts frequently encounter AI-generated pleadings. The Law Institute of Victoria and the Law Society of NSW have published material on the responsible use of AI in line with the solicitors’ conduct rules. In these circumstances, we can expect even less tolerance of future AI-related errors, particularly if court forms are equipped with a tick box attesting to the use of technologies in preparing submissions.

Is the solicitor the only party to blame?

Each solicitor is unequivocally responsible for their own work and court submissions, and must be familiar with the authorities in their practice area. Entirely unfamiliar AI-generated authorities should raise red flags. While some issues will be more difficult to detect – as in the UK case, where the AI-generated cases closely resembled real ones – from a conduct perspective there is no justification for submitting false authorities to a court.

On the other hand, like members of other professions, solicitors rely on commonly available technology for secure file storage, sharing of confidential documents, document comparison, keyword searching of large volumes of documents to comply with discovery orders, legal research and more. Surely the lesson from this case cannot be that solicitors should stop doing so.

Secondly, despite an individual solicitor’s conduct responsibilities, employers who provide such technology for use in a workplace should consider how they can better support staff to avoid similar situations in future.

Thirdly, the service provider’s website may acknowledge that “… there is no room for errors…” when it comes to legal matters, but the provider will not accept any liability for the errors of its own product (although in this case a subsequent human-led verification was provided). The provider will likely be protected by the service limitations and liability exclusions in its terms. Any output was likely delivered with a disclaimer such as “The above response is AI-generated and may contain errors. It should be verified for accuracy.”

But should the provider’s liability be this limited? In the past, technology in its testing phase was marked ‘beta’. With AI, customers are somehow expected to pay full price for allegedly market-ready solutions which perform only to a beta standard.

AI risks and standards

Stanford researchers have found that even bespoke legal AI tools produce ‘hallucinations’ in 17% to 34% of cases. A particular risk is the tendency of AI to agree with incorrect assumptions expressed in the user’s prompt. Even retrieval-augmented generation, which splits the process into content retrieval and subsequent output generation, does not resolve the issue. Hallucinations take the form of either incorrect output, or output which is correct but supported by the wrong citations.
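
To make the retrieval-then-generation split concrete, below is a minimal, hypothetical Python sketch. The in-memory case “library”, the retrieve() and generate_answer() functions and all other names are illustrative assumptions only; real legal AI tools use far larger document stores and a large language model for the generation step, which is where hallucinations can still creep in.

  # Toy illustration of retrieval-augmented generation (RAG).
  # Step 1: retrieve passages relevant to the question from a known source set.
  # Step 2: generate an answer constrained to the retrieved passages.
  # In real tools step 2 is performed by a large language model, which can still
  # misquote or mis-cite the retrieved material ("hallucinate").

  CASE_LIBRARY = {
      "Mata v Avianca Inc (SDNY)": "lawyers sanctioned for filing AI-fabricated case citations",
      "Zhang v Chen, 2024 BCSC 285": "counsel held personally liable for costs after citing fake authorities",
      "Harber v HMRC [2023] UKFTT 1007 (TC)": "taxpayer relied on nine fictitious cases generated by AI",
  }

  def retrieve(question: str, library: dict[str, str]) -> list[tuple[str, str]]:
      """Step 1: naive keyword match of the question against stored case summaries."""
      terms = set(question.lower().split())
      return [(name, summary) for name, summary in library.items()
              if terms & set(summary.lower().split())]

  def generate_answer(question: str, passages: list[tuple[str, str]]) -> str:
      """Step 2: compose an answer citing only the retrieved sources."""
      if not passages:
          return "No supporting authority found - do not invent one."
      cited = "; ".join(f"{name} ({summary})" for name, summary in passages)
      return f"Q: {question}\nA (based only on retrieved sources): {cited}"

  question = "What happens when lawyers file fabricated citations?"
  print(generate_answer(question, retrieve(question, CASE_LIBRARY)))

The point of the split is that the answer should only cite what was actually retrieved; in practice, however, the generation step may still paraphrase or attribute the material incorrectly, which is why human verification remains essential.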

The consequences are visible in real-life examples of AI risks in the legal profession, such as:

  • Breach of confidentiality through the use of AI, as considered by OVIC when it ordered a ban on generative AI tools for “Child Protection staff” exercising their “official duties” at the Victorian Department of Families, Fairness and Housing, even for internally tenanted Microsoft 365 Copilot.
  • Cases dismissed by the court due to errors.
  • A judge declining to give due weight to evidence or submissions suspected to be AI-generated.
  • Apologies given and hearing costs borne by the offending party.
  • Reputational harm to the solicitor and law firm.
  • Lawyers sanctioned by the court for acting in bad faith.
  • Professional conduct investigations by the legal profession authority.

Apart from voluntary frameworks, Australia currently lacks AI standards mandated by law (other than the recently introduced AI standards under online harms legislation).[4] In practice, this means that law firms and lawyers purchasing AI-powered legal tools have no access to service performance evaluations or transparency around AI architecture. Providers are able to disclaim all liability and benefit from the regulatory gap. This is understandable, as they must act in the best interests of their shareholders, and no lawyer would advise them to invite liability which is not due (certainly KHQ would not).

In contrast, the EU’s AI Act requires providers of high-risk AI systems to deliver instructions on the characteristics, capabilities and limitations of performance of the high-risk AI system, including “… the level of accuracy, including its metrics, robustness …” and “… specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used…”. The proposed AI Liability Directive would enable claimants to bring non-contractual claims for loss or damage caused by “faulty” AI.

Conclusion

The key lesson is that lawyers must not rely on AI-generated outputs as substitutes for their own judgment and due diligence; they must verify the accuracy and reliability of the information provided. Lapses by lawyers in exercising proper care when utilising these AI tools can mislead the courts, jeopardise their clients’ interests, and ultimately weaken the rule of law.

Lawyers must be educated on the significant distinction between search and generative AI. One indexes information from identifiable sources and helps us find it; the other relies on a neural network to generate outputs from sources the user cannot inspect. We have developed a natural trust in search tools but must now teach ourselves to be sceptical of AI – even while paying for it.
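
The contrast can be illustrated with a short, hypothetical snippet (the search_index() and generate() functions below are assumptions for illustration only). A search tool returns existing documents that can be checked against the source; a generative model returns newly produced text whose provenance the user cannot inspect, simulated here with a placeholder string.

  # Search: returns documents that already exist, so every result is verifiable.
  INDEX = {"negligence": ["Donoghue v Stevenson [1932] AC 562"]}

  def search_index(term: str) -> list[str]:
      return INDEX.get(term.lower(), [])  # real, citable sources, or nothing

  # Generative AI (stand-in): produces fluent text the user cannot trace to a source.
  def generate(prompt: str) -> str:
      return f"Confident-sounding answer to '{prompt}' with no citation to check."

  print(search_index("negligence"))    # output can be verified against the index
  print(generate("negligence cases"))  # output must be independently verified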

Employers introducing AI tools in the workplace must do more, including:

  • Introducing AI policies
  • Providing workshops and training on common risks
  • Strengthening their due diligence on AI providers before adopting their tools

Victorian Supreme Court guidance warns that AI-generated outputs may be out of date and unaware of more recent jurisprudence; incomplete and missing relevant points; inaccurate or incorrect; inapplicable to the jurisdiction; or biased, based on data which may over- or under-represent certain demographics or viewpoints. However, there is more to do as the legal industry adjusts to AI.

At present, a solicitor may have no remedy against AI hallucinations other than his or her own judgment and diligence. However, this defeats the purpose of having AI tools in the first place. While this conundrum awaits resolution by the legislature, employers introducing AI tools should push for more information, reassurance and responsibility from their providers.

KHQ can assist with technology, data privacy, cyber and workplace matters concerning AI. Please reach out to us if you have any questions or concerns.


[1] Felicity Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC).

[2] Roberto Mata v Avianca Inc, 22-cv-1461 (PKC), US District Court for the Southern District of New York.

[3] Zhang v Chen, 2024 BCSC 285.

[4] Online Safety (Basic Online Safety Expectations) Determination 2022, as amended in 2024.

George Tabet, Lawyer

George is a lawyer in our Litigation & Dispute Resolution team, having commenced at KHQ as a graduate in 2022. 



Alex Dittel, Principal Solicitor (practising English law)

Alex leads our Data Privacy, Cyber and Digital practice. He brings 15 years of experience in data protection, information security and technology commercial matters.