
The integration of AI into daily workflows has become increasingly widespread, offering effective ways of automating repetitive or routine tasks. While using tools such as ChatGPT to draft informal communications, speeches or recipes is generally appropriate, a clear distinction must be made when it comes to preparing legal documents. Nevertheless, some individuals, having seen the value of AI in more casual contexts, have begun to rely on it for legal drafting, an approach that can have significant negative repercussions both for the individuals involved and for the legal profession at large.
This problem has arisen in a number of areas, one of which is employment claims. Recent Tribunal cases have shown that when claimants choose to represent themselves, for whatever reason, they are significantly more likely to present a case that does not meet the required standards of legal accuracy. For example, when a claimant alleges protected disclosure (whistleblowing) detriment, simply describing the detriment is insufficient; it does not address the specific elements that must be demonstrated to the Employment Tribunal.
Typically, arguments before the Tribunal are presented by legal professionals with thorough knowledge of previous cases and Tribunal decisions. Individuals who choose to represent themselves may have no legal experience and may not know how to identify relevant cases to support their argument, so they may turn to generative AI for help in drafting their Tribunal submissions. However, the same lack of experience that prompts them to rely on AI can also prevent them from accurately evaluating whether the AI-generated output is appropriate and fit for its intended purpose.
Frequent users of generative AI tools may notice that case law is inserted at various points in an argument. However, the cases generated do not always set out valid legal principles or decisions that support the argument, even though they may appear convincing to someone without legal expertise. Worse, the AI may invent a case entirely, a phenomenon referred to in the AI sector as “hallucination”, and a claimant who relies on such material is likely to lose their claim.
AI and the law
A recent (2024) case in the Employment Tribunal is a good example of this: Miss M Wright v SFE Chetwode Ltd and K Winter: 2601525/2024 and Others
The Claimant, Miss Wright, pursued multiple claims relating to whistleblowing and automatic unfair dismissal. Regardless of the outcome of the case, two paragraphs of the final judgment stand out for what they reveal about how Miss Wright conducted her case:
“I am left with a strong feeling Miss Wright is pursuing a claim she does not understand and cannot personally justify when asked. While it is clear she feels genuinely aggrieved, I am left with the impression that she does not understand if or how that grievance is legally justified.
Her particulars of claim are well set out and quite clear. Her written submissions and witness statements are verbose and less focused but appear on issue. Her oral arguments were not particularly on issue and vague. It transpired that she received help from a charity to bring and write the claim and used ChatGPT (a large language artificial intelligence (AI) web-based device) to write her statement and submissions, into which she added her own details. I am not going to be drawn into a discussion about the use of AI. I also appreciate that writing something for a Tribunal is different from speaking in front of it, especially if one is a litigant-in-person against experienced legal representatives. However, even allowing for all of that, the contrast and demeanour left me with that feeling.”
With the increasing prevalence and availability of generative AI tools such as (but not only) ChatGPT, it may become much more common for submissions to courts (or, more likely, tribunals), as well as witness statements, a cornerstone of the legal process, to be partly or wholly generated by AI.
However, some might argue that generative AI helps to level the playing field in an arena where an unrepresented claimant might feel that the odds are stacked against them in terms not only of expertise, but also of financial resources.
When used carefully and responsibly, AI tools and applications can help claimants to organise their own ideas, using coherent and persuasive language, into a format that will carry more weight in court.
Nevertheless, this trend poses certain risks for the legal profession. Without diligent review and verification, AI may produce arguments that are not only inaccurate but also misrepresent the likelihood of success.
Large Language Models (LLMs)
Large language models tend to be overly affirming, which can give users unwarranted confidence. This is problematic if a user relies on confirmation from generative AI when deciding whether to pursue legal action. Where an experienced lawyer would assess the odds of success and advise whether to proceed on that basis, a private individual relying on unaudited legal guidance faces uncertainty throughout the litigation process.
Another important aspect of generative AI is its output speed. When a user inputs content, even lengthy text, into ChatGPT, it can process the information and produce a response within seconds. This speed encourages a larger volume of content, and therefore more material for respondents to read, sort, evaluate and file, potentially raising the respondent's costs. If the claimant does not succeed with their claim, there is a significant risk of a costs order, which can be anything up to £20,000 (and, in exceptional cases, uncapped).
On a wider scale, flawed claims will increase the burden on a system that is already facing long delays and a heavy administrative workload. According to Ministry of Justice figures, open tribunal cases increased by nearly a third in 2024-25.
AI and the Employment Rights Act
The imminent arrival of the Employment Rights Act (currently a Bill, but expected to be enacted in the near future) will lead to an increase in claims and cases. With no regulation of generative AI on the horizon, there is no reason why it will not add to the volume of communication and the demands placed on the Employment Tribunal.
It is important to note that AI models are trained on a wide variety of data sources, often without explicit consent. These sources may include information provided by claimants while using AI platforms to prepare their cases. As a result, employers could face situations where sensitive or highly confidential information becomes accessible through generative AI technologies. Once control over such data is lost, the risks related to GDPR compliance and confidentiality obligations can increase substantially.
As noted above, generative AI often affirms user input and opinions. This can affect claims in another way: employment lawyers report that AI chats can lead users to believe they are in line for grossly unrealistic amounts of compensation, encouraging them to pursue their claim further and for longer and to overlook the possibility of an early settlement, or of abandoning a claim with poor prospects of success.
What can employers do?
It is not always possible to determine whether material has been assisted or generated by AI, as the indicators are limited. Some features, such as em dashes and American English spellings, are often associated with AI-generated content, but these can be altered by users, and legitimate materials may also include them, depending on their origin. HR departments should receive training on how to evaluate submissions for accuracy and for possible AI involvement in their drafting, as well as guidance on the appropriate steps to take if AI use is suspected.
That does not, however, extend to dismissing such submissions without further consideration. The ACAS Code of Practice indicates that employers must carry out a thorough investigation of every grievance, regardless of how unclear or badly argued it might be, and keep clear records of that investigation.
Generative AI and related tools can also assist employers in processing communications from employees or former employees. These technologies may lead individuals to express concerns differently from before, in ways that do not always fully reflect the core issues of their complaints. As a result, it is important for employers and HR professionals to review the content they receive thoroughly and to ensure that any complaints or issues raised are properly documented and addressed through appropriate communication or investigation.