By Sonia Lenegan

Barrister referred to regulator following misuse of AI in immigration appeal
In a newly reported Hamid decision of the Upper Tribunal, a barrister has been referred to the Bar Standards Board for investigation following his use of a false citation generated by ChatGPT. The case is MS v Secretary of State for the Home Department (Professional Conduct: AI Generated Documents) Bangladesh [2025] UKUT 305 (IAC).
Background
The barrister, Mr Muhammad Mujeebur Rahman, drafted grounds of appeal for the appellant on 14 March 2025 in which he referred to the case of Y (China), which does not exist, using a citation that actually belongs to R (YH) v Secretary of State for the Home Department [2010] EWCA Civ 116.
Permission to appeal was granted on limited grounds. At the error of law hearing on 20 June 2025, the judge asked Mr Rahman to take him to the relevant paragraph of Y (China), noting that the citation was for a different case about fresh claims rather than delay, which was the issue in this appeal. Mr Rahman said he did not wish to rely on YH (Iraq) and, after a couple of unsuccessful attempts to cite other cases, the tribunal took a break, providing Mr Rahman with a copy of R (Ayinde) v London Borough of Haringey, Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin) and asking him to consider his position.
I do wonder whether he actually read Ayinde during that break because, rather remarkably, when Mr Rahman returned he told the tribunal that “he had undertaken ChatGPT research during the lunch break and the citation for Y (China) was correct, and it was a decision made by Pill and Sullivan LJJ and Sir Paul Kennedy.” The tribunal then directed him either to provide a copy of the decision by 24 June or to explain what had happened.
After the hearing, Mr Rahman handed the tribunal clerk “nine stapled pages which were not a judgment of the Court of Appeal but an internet print out with misleading statements including references to the fictitious Y (China) case with the citation for YH (Iraq).”
In compliance with the tribunal’s direction, on 24 June 2025 Mr Rahman wrote and stated that:
he had in fact meant to cite YH (Iraq) (and in particular paragraph 24 of that decision which he cites as saying all factors in an applicant’s favour must be taken into account) and apologising for his failure to cite the full and correct name of the case. He blamed this on having suffered from “acute illness” before drafting the grounds; and on having been on a visit to Bangladesh between 10th and 18th June 2025, and the fact that he had been hospitalised in Bangladesh due to diabetes, cholesterol problems and high blood pressure. He also argued that we should not penalise him for this error as he has five family members (wife and four children) depending on him.
The tribunal then listed the Hamid hearing for 23 July 2025. Mr Rahman provided the tribunal with a letter of the same date acknowledging, with reference to Ayinde, that it is a breach of professional duties to rely on citations obtained through AI without checking their veracity through reputable legal sources.
He accepted that he had used ChatGPT to draft the grounds of appeal and also to create the document that he had handed to the tribunal clerk following the hearing on 20 June 2025. He gave various health-related excuses for having done this and argued “that he was misled by the search engine and is thus also a victim”.
Mr Rahman accepted that YH (Iraq) was not relevant to the appeal. He apologised and said that he should not be referred to the Bar Standards Board as he would “act with integrity in the future and is unwell and concerned as to how he will support his family.”
There were also issues surrounding who was the representative on record with the tribunal. Mr Rahman said that he was instructed by Lextel Solicitors, who were on record with the Upper Tribunal; however, the tribunal had no record of them for this case.
The tribunal’s records instead showed Mr Rahman of Lexminders’ Chambers Limited as being on record. Mr Rahman advised the tribunal that this company had only existed between February 2023 and June 2024, when it was dissolved, and that since then he had operated as a self-employed barrister. The Upper Tribunal noted that in the First-tier Tribunal proceedings, Mr Rahman had completed a section 84 notice stating that he was appearing on a direct access basis.
Referral to the Bar Standards Board
The Upper Tribunal said that it had already referred Mr Rahman to the Bar Standards Board in January this year over concerns, arising in another appeal, that he was conducting litigation without being authorised to do so and that he lacked basic professional competence. The tribunal dismissed his attempts to explain his behaviour and said that if he had been unwell then he should have informed the tribunal and, if there was time, alternative counsel should have been instructed.
The tribunal said that:
Mr Rahman therefore has moved from an acceptance of the use of ChatGPT but with a defence of the research and a defence of the fake case of Y (China) on the day of the Panel error of law hearing; to a claim that it was a regrettable oversight and he did in fact want to rely upon an irrelevant but genuine case in his letter of 24th June 2025; to an acceptance before us that he used ChatGPT to assist in formulating the original grounds and in production of the document he handed to the Panel on 20th June 2024, and that the case of Y (China) is fake. We find therefore that Mr Rahman has directly attempted to mislead the Tribunal through reliance on Y (China), and has only made a full admission of this fact in his third explanation to the Upper Tribunal. He has not therefore acted with integrity and honesty in dealing with this issue, as well as having attempted to mislead the Tribunal in the grounds through the use of an AI generated fake authority.
The tribunal found that Mr Rahman “did not know that AI large language models, and ChatGPT in particular, were capable of producing false authorities. It follows that this is not a case where it would be appropriate to refer the matter for police investigation or to initiate contempt proceedings.” However, a referral to the Bar Standards Board was deemed “most definitely appropriate”. The decision ended with an indication that the tribunal would like both this referral and the earlier one to be considered quickly by the Bar Standards Board, given the seriousness of the concerns raised.
Headnote
The headnote states:
1. AI large language models such as ChatGPT can produce misinformation including fabricated judgments complete with false citations.
2. The Divisional Court has provided guidance in the case of R (Ayinde) v London Borough of Haringey, Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin) that the consequence of using AI large language models in a way which results in false authorities being cited is likely to be referral to a professional regulator, such as the BSB or SRA, as it is a lawyer’s professional responsibility to ensure that checks on the accuracy of citation of authority or quotations are carried out using reputable sources of legal information. Where there is evidence of the deliberate placing of false material before the Court police investigation or contempt proceedings may also be appropriate.
3. Taking unprofessional short-cuts which will very likely mislead the Tribunal is never excusable.
Conclusion
The tribunal noted at the outset of the decision that “the immigration client group can be particularly vulnerable”. This is of course correct, and it is why it is often difficult to find any sympathy for lawyers in these situations, despite the stress and difficulty of the job. It certainly seems that what happened here was that a client paid for work to be carried out in their appeal, and that work was instead outsourced to an AI tool.
I personally think, given the by now well-known risks of errors, hallucinations and various biases, that anyone using generative AI in legal work should be upfront about it at the outset. That way people (whether clients or instructing solicitors) can make an informed decision about whether, or how, they are happy for AI to be used in their case.