By Sonia Lenegan

Briefing: AI and immigration law – what guidance is there for lawyers?
Desperate times call for desperate measures, and the complexity of immigration law combined with the ongoing legal aid advice crisis means that many people (both lawyers and migrants) are increasingly tempted to turn to generative artificial intelligence (I will just use “AI” here) as a possible solution. Personally, I don’t think this is a great idea, but this post is for those lawyers who do choose to use it and who need to be aware of what the professional bodies as well as the judiciary have to say about it.
Generative AI is defined on GOV.UK as “a subset of AI capable of generating text, images, video or other forms of output by using probabilistic models trained across one or more domains”. Concerns abound about the use of AI, from the environmental implications, racial and other biases and the potential for manipulation of outcomes by bad actors, to the degeneration of critical thinking skills and theft of copyright from creators.
And of course there are the errors, inaccuracies and “hallucinations”, where the AI invents facts (or cases). As lawyers are hopefully aware by now, the latter creates regulatory risks.
You can read an illustrative example here, from an immigration lawyer with a robust knowledge of the relevant subject matter who experimented with asking Google Gemini about the late applications to the EU Settlement Scheme (the lawyer is in italics and “show thinking” is a button you can click to see how the AI reached its answer). Yes, this is a complex topic, but the AI’s responses to being confronted with its errors should, in my view, give anyone very serious pause for thought before using it.
And yes, the Free Movement article it cited does not exist, nor do the two cases it mentioned. Even the AI described its own responses as “astonishing and unacceptable”.
For those of you undeterred by the above issues and risks, I have set out below where we are as far as guidance from the professional bodies and the judiciary is concerned.
What guidance is there from professional bodies?
Immigration Advice Authority
The Immigration Advice Authority has yet to publish guidance on the use of AI. When it does, it will no doubt be flagged up here on Free Movement. This means that we don’t have any immigration specific guidance, but we do have general guidance from the other professional bodies.
The Law Society
The Law Society’s guidance says that its “introduction to generative AI is designed to be a primer for solicitors and firms, particularly small and medium-sized firms (SMEs), who want to understand more about the technology”. It provides an overview of the opportunities and risks that lawyers should be aware of when deciding whether or not to use it.
The opportunities mentioned are the potential for increased efficiency and cost savings. There is a much longer list of risks:
- intellectual property risks: potential infringements of copyright, trade marks, patents and related rights, and misuse or disclosure of confidential information
- data protection and privacy risks: concerns related to the unauthorised access, sharing or misuse of personal and sensitive data
- cyber security risks: vulnerabilities to hacking, data breaches, corruption of data sources and other malicious cyber activities
- training data concerns: the use or misuse of data to train generative AI models, which could result in biases or inappropriate outputs
- output integrity: the potential for generative AI to produce misleading, inaccurate or false outputs that can be misconstrued or misapplied
- ethical and bias concerns: the possibility of AI models reflecting or amplifying societal biases present in their training data, leading to unfair or discriminatory results. There may also be environmental, social and governance (ESG) considerations
- human resources and reputation risks: if the use of generative AI may result in negative consequences for clients, there may be reputational and brand damage
There is an incredibly useful checklist in the guidance which sets out what lawyers should be considering when deciding whether to use AI. This includes the purpose of using the tool, whether the AI is a closed system or public and whether the input data you are likely to use is appropriate to put into that tool. This is a list of good practice and should really be built into organisations’ internal processes.
Importantly, the guidance also suggests that lawyers “discuss expectations and clearly communicate the use of generative AI tools for the delivery of legal services between you and the client”. Also on client care, the guidance says:
Currently, the SRA does not have specific guidance on generative AI related to use or disclosure of use for client care.
It is advisable that you and your clients decide on whether and how generative AI tools might be used in the provision of your legal advice and support.
Clear communication on whether such tools are used prevents misunderstandings as to how information is produced and how decisions are made.
Although not explicitly said here, I think it is important to remember that use of AI can go in both directions. Lawyers should be clear about if and how they intend to use it, but should also set expectations with clients, increasing numbers of whom are apparently seeing fit to “double check” their lawyer’s advice through an AI search. Proving that what the AI has said is incorrect (if that is the case) creates additional work, and lawyers may want to consider advising clients that this behaviour could result in additional fees being charged.
There is also some interesting data in the guidance on opinions about the use of AI in the legal profession, and the section on ethics is worth reading. There is also a link to an SRA report from 2023 which looked at the risks of the use of AI in the legal market.
Another section covers what solicitors’ professional obligations are when using AI. The relevant parts of the SRA’s Code of Conduct are set out and it is made explicit that “misuse of any tool, leading to inaccurate information being presented, will breach the standards of professionalism required” under the Code.
The Bar Council
The Bar Council’s guidance on generative AI for the Bar was updated in November 2025 and says that it is not “guidance” for the purpose of the BSB Handbook 16.4. It states that the purpose is “to provide a useful summary of considerations for barristers if they decide to use ChatGPT or any similar LLM software, as well as systems specifically aimed at lawyers”.
The possibility of hallucinations and biases is set out and the guidance reiterates the importance of barristers verifying AI’s output, with the reminder that “the ultimate responsibility for all legal work remains with the barrister”. As with the other guidance documents, the importance of not putting any legally privileged or confidential information into the system is set out.
Anthropomorphism is also listed here as a risk, with the guidance reminding barristers that AI tools “are designed and marketed in such a way as to give the impression that the user is interacting with something that has human characteristics”. There is a lengthy section on hallucinations with examples of where lawyers have got themselves into trouble with the courts for misusing AI.
Biases, mistakes, confidential training data and cyber security vulnerabilities are also covered. On the latter point, the guidance states that “due diligence on the security protocols of AI tools is essential”. It concludes this section by saying that “while generative AI LLM systems have shown impressive capabilities in various natural language processing tasks, they also come with significant limitations.”
The Bar Council has also provided a list of considerations, including:
- Mandatory verification of outputs and human oversight
- ‘Black box syndrome’: lack of explainability
- Respect legal professional privilege (LPP), confidential information and data protection compliance
- Intellectual property (IP) infringement and brand association
- Professional considerations
The last point includes mention of claims for professional negligence, which is presumably something we will also start seeing sooner rather than later.
Paragraph 37 states:
Barristers should also keep abreast of relevant Civil Procedure Rules, which in the future may implement rules/practice directions on the use of LLMs; for example, requiring parties to disclose to the court when they have used generative AI in the preparation of materials. This approach has already been adopted by the Court of the King’s Bench in Manitoba and the Civil Justice Council has set up a working group to consider specific rules for the use of AI in civil court proceedings.
Which brings me quite neatly to my next section.
What does the judiciary have to say about this?
Judicial guidance on the use of AI by judicial office holders has been issued, and was updated most recently in October 2025. This sets out the key risks and issues with using AI as well as suggesting how these can be minimised. The guidance explicitly recommends that AI is not used for either legal research or analysis. It says that:
AI tools may be useful to find material you would recognise as correct but have not got to hand, but are a poor way of conducting research to find new information you cannot verify. They may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts.
The guidance reiterates the unreliability of output from AI:
These may include misinformation (whether deliberate or otherwise), selective data, or data that is not up to date. Even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased. It must be borne in mind that “wrong” answers are not infrequent.
Those who use AI are told that they must treat all public AI tools as being capable of making public anything that is put into them, and:
Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.
The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known.
AI users are also reminded of the need to check the accuracy of any AI output before it is relied on, to remember that hallucinations are possible and also to be aware of the possibility of bias within the tools. Judges are referred to the Equal Treatment Bench Book.
Their own use of the technology aside, the judiciary are increasingly having to deal with it turning up in their cases. We know of three immigration lawyers so far who face potential disciplinary proceedings following what appears to be the unchecked use of AI in their cases. More Hamid referrals related to misuse of AI are no doubt in the pipeline.
Interestingly, the judicial guidance states “Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context”. I wonder if this position will remain sustainable. In a recent case in the First-tier Tribunal (Tax Chamber), the judge made the following order after making a finding of fact that AI had been used to produce case summaries:
158. The skeleton argument must also be accompanied either by a statement of truth from the Appellant stating that he has produced the skeleton argument entirely himself, with or without the help of AI, and has personally checked each statement of fact or case summary and reference contained within it, OR must contain a statement of truth from any other person who has contributed to the skeleton argument, confirming which of the statements of fact or case summaries that person has checked. Every person other than the Appellant must include their professional qualifications, if any, and the professional body that regulates their employer, if any.
I would not be surprised if this sort of thing is where we end up.
Conclusion
To me, all of the above looks like a huge amount of additional work for very little tangible benefit.
However, if you are using AI and have not already done so, you should certainly go and read the relevant guidance for your professional body, and I would suggest reading the judicial guidance as well at a minimum. If you are an IAA adviser then for now you should familiarise yourself with the guidance provided by the other professional bodies. It would be sensible to document that you have done this for your continuing professional development purposes.
Organisations should ensure that they have robust policies and procedures in place – in particular bearing in mind their duties in the handling of personal data and making clear to employees that AI outputs must be checked carefully. Otherwise, it is entirely possible that senior people may also find themselves in the judicial firing line, along with the relevant lawyers.
Where lawyers are making the choice to use AI, as usefully highlighted by the Law Society, I think it is key to be transparent with clients at the outset about how it is proposed to be used. This should include an explanation of what checks are in place to counter errors and, most importantly, what the benefits to clients are of its use. I am personally sceptical about there being any benefit to clients unless it is lower fees, and I don’t see how that is necessarily possible given the questionable time savings, once all of the additional work that using AI requires, as set out above, is taken into account.
I think that it is important for us to recognise that many organisations have invested heavily in this technology and are highly incentivised to force AI onto us as a result. They claim that the use of it is “inevitable”. I shall remain a happy sceptic and wouldn’t be surprised if “AI free” starts being used to denote higher quality work.
