In yet another case involving AI hallucinations in court filings, the Court in Boston et al. v. Williams et al., Case No. 1:23-cv-00752 (N.D. Ga.), sanctioned the Plaintiffs’ attorney after finding most of the citations in their summary judgment opposition contained errors suggesting AI use.
Background
The Plaintiffs, four women involved in an altercation with comedian Katt Williams outside an Atlanta nightclub in 2016, brought a personal injury action in the U.S. District Court for the Northern District of Georgia against Williams and his associates. In an August 13, 2025, reply brief on their motion for summary judgment, Defendants noted many of the cases cited in Plaintiffs’ opposition were either inaccurate or did not exist. In response, Plaintiffs’ attorney Loletha Hale moved to replace the “erroneously filed” brief with an amended one. Hale’s August 19 declaration claimed she “inadvertently uploaded and electronically filed the wrong version of plaintiffs’ brief as a result of being distracted by a number of simultaneous life challenges.”
On September 9, 2025, the Court (Hon. William M. Ray, II) issued a sua sponte Order to Show Cause (the “OSC”), ordering Hale to appear for an evidentiary hearing to determine whether she should be sanctioned for violating Fed. R. Civ. P. 11(b)(2). As summarized in the OSC, the Court had independently discovered that “17 of the 24 cases cited by Plaintiffs’ counsel either did not exist, did not support the proposition for which they were cited, or misquoted the authority,” indicia tending to show that they were AI-generated “hallucinations.” When confronted with these deficiencies at the summary judgment hearing on August 27, Hale had said “she was distracted with other matters and had asked her daughter (who is not an attorney or paralegal) to draft the brief for her.” While the OSC observed that generative AI was the most logical explanation for the deficiencies, whether AI was used was not the problem. The basis for possible Rule 11 sanctions was instead “counsel’s abdication of her responsibility to ensure that the signed brief she provided to the Court was accurate.”
Instead of responding to the OSC, on September 16, Hale moved to recuse Judge Ray, arguing that at the hearing he “spoke to the undersigned in a disrespectful and condescending manner” and “questioned her honesty, candor and credibility in open court.” The recusal motion admitted AI was used to generate the filed brief, but maintained that Hale’s error was filing the wrong version, “not an intentional violation of Rule 11(b)(2).” Judge Ray denied the motion to recuse on September 23, noting that no bias against the Plaintiffs was alleged and that his addressing their lawyer’s conduct is not a proper basis for recusal; “[r]ather, it is indicative of the undersigned’s concern about the quality of legal representation that Plaintiffs are receiving.”
At the September 29, 2025 show cause hearing, Hale declined to elaborate on her conduct and said she would “rest on the record.”
On October 28, Judge Ray issued an order sanctioning Hale under Rule 11. The order recounted that Hale had asked her daughter to draft the brief and that she failed to review the brief before it was filed. The Court also observed that the proposed amended brief contained at least one of the same citation-related deficiencies as the original, and a subsequent motion also contained a hallucinated case, casting doubt on Hale’s explanation. As the order stated:
“The Court does not find or suggest that the use of AI tools to draft a legal brief is impermissible. What is not allowed, however, is the submission of such a brief without thoroughly checking the AI-generated material for its accuracy. The Court recognizes that mistakes in drafting can happen, but it is Ms. Hale’s abdication of her responsibility to ensure that the brief she provided to the Court was accurate and her cavalier attitude regarding her errors that is of most concern to this Court.” ECF No. 70 at 4, n. 2.
The Court ordered Hale to notify all her existing clients in federal cases filed in the Northern District of Georgia of the Court’s findings and to file a notice attaching a copy of the sanctions order in all her pending and future cases in that district for five years. On November 7, the Court granted Defendants’ motion for summary judgment and entered judgment in favor of Williams and his co-defendants.
Practice Takeaways
Boston v. Williams offers both perennial and unique lessons, ethical as well as tactical, for litigators using or contemplating using AI.
Don’t Abdicate Your Responsibility for Drafting Court Documents
Like others before her, the attorney sought AI-fueled help while facing difficult or overwhelming life circumstances in the midst of court deadlines. As these examples show, when complicated life circumstances leave a lawyer without the bandwidth to personally prepare, or thoroughly check, a document before filing it, the better course is to seek an extension from the Court in advance, or to delegate the task to another attorney responsible for the matter, rather than delegate to AI without verifying the output.
The Court also repeatedly referenced the sanctioned lawyer’s claim that she tasked her daughter with drafting the response brief, resulting in AI hallucinations in the filed document. Although not in itself a Rule 11 violation, this kind of outsourcing contributed to the violation and carries the risk that a regulator may consider it to violate additional rules of professional conduct. It is not clear whether Hale’s daughter was employed by her firm. If not, outsourcing the drafting of a court document to a non-attorney risks running afoul of, at minimum, the rule prohibiting an attorney from assisting another in the unauthorized practice of law (see Model Rule 5.5(a)). If so, the duty to supervise employees, including on the appropriate use of AI, also applies (see Model Rule 5.1). In any case, the attorney has a duty to exercise her own professional judgment as a licensed attorney on behalf of her clients, which cannot be abdicated or outsourced to non-lawyer employees, family members, or AI.
Always Carefully Check Court Documents Before Filing
As in other cases, which my colleague has blogged about here, the error-riddled brief was filed by an attorney who did not maintain complete control over the writing process or sufficiently cite-check the brief, and AI was used somewhere along the line. As these cases highlight, the attorney who signs and/or uses their credentials to file a court document has a non-delegable responsibility to ensure that it does not make false statements of fact or fail to disclose controlling legal authority (ABA Model Rules of Professional Conduct (“Model Rules”), Rule 3.3), does not make frivolous arguments (Model Rule 3.1), and complies with the statutes and court rules governing the conduct of attorneys admitted to practice before that court (e.g., Fed. R. Civ. P. 11(b)). Failure to ensure that the filed document is correct and compliant may expose the signing attorney to the risk of sanctions.
Be Transparent About AI Use
The attorney here did not admit to personally using AI and declined the Court’s invitation to explain how the erroneous brief came to be. The Court did not find her explanation credible, pointing out that subsequent filings bore the same hallmarks of AI use. If the AI-generated brief had been a one-off created by another person, none of the same errors would have persisted in the proposed amended brief, or in a subsequent filing that contained yet more hallucinations. If the attorney did use or authorize the use of AI, she could, and arguably should, have said so.
As the Court twice noted, nothing in Rule 11 prohibits an attorney from using AI to draft a brief. What matters is that the attorney remains responsible for certifying that what she signs and files is true and non-frivolous. ABA guidance on lawyers’ use of generative AI, which I previously blogged about here, confirms that lawyers may use generative AI, provided they do not abdicate their own professional judgment and they comply with the rules of professional conduct, including maintaining sufficient competence in the technology to understand what it can and cannot do (Model Rule 1.1) and ensuring that any submissions to a tribunal incorporating AI-generated output are truthful, non-frivolous, and candid (Model Rules 3.1 and 3.3). Courts also encourage transparency on AI use in the legal profession, with some judges requiring lawyers to disclose AI use and certify the accuracy of its output, as in this example.
It seems lawyers are still hesitant to admit to using AI, either because they do not want to be perceived as taking a shortcut, or because they think doing so is impermissible or frowned upon. But courts and ethics committees have signaled their acceptance that AI is here to stay, and its use in legal work will likely become more widespread as more sophisticated tools are developed. To maintain the integrity of the legal profession and develop best practices for AI use within it, we need to know when it has been used and to what effect. The message from courts and bar associations alike is clear: using generative AI is fine; you just need to be upfront and remain accountable.
In this case, failing to sufficiently check the AI-generated brief before it was filed was the conduct that the Court found violated Rule 11. In addition, though, the attorney’s evasive refusal to further explain—which the Court viewed as demonstrating a “cavalier” attitude—raised the Court’s greatest concerns and militated in favor of sanctions.
Don’t Attack the Court for Regulating the Conduct of Attorneys
Instead of responding to the Court’s order to show cause, the attorney attacked the judge’s impartiality because he questioned the propriety of her conduct. While there may be circumstances in which a recusal motion is appropriate, a court’s calling into question the conduct of attorneys practicing before it, without more, is not one of them. Attacking the judge instead of showing candor and contrition in the face of potential sanctions may also display the kind of “cavalier attitude” that the Court here found most concerning, and likely weighed in favor of sanctions.
* * *
In sum, deflection over claims relating to AI-generated filings will not likely avoid negative consequences for the lawyer or client. If a lawyer opts to use generative AI to draft court documents, doing so with care, attention, and transparency will better serve the lawyer, better serve clients, and help the legal profession to grow with this new technology.
