A Kentucky attorney has been sanctioned by a federal appeals court after admitting he used artificial intelligence to draft appellate briefs containing inaccurate legal citations.

A recent opinion from the U.S. Court of Appeals for the Sixth Circuit detailed how attorney Steven N. Howe filed briefs in a criminal appeal that included misquoted and misrepresented case law after relying on AI-generated content he failed to properly verify.

According to the court’s April 3 opinion in United States v. Farris, Howe acknowledged that he used an artificial intelligence tool to prepare both his principal and reply briefs, then submitted them without confirming the accuracy of the cited authorities. The court found that several quotations attributed to real cases “do not appear in their cited sources,” and in some instances misrepresented the holdings of prior decisions.

The panel said Howe’s conduct violated core professional obligations, emphasizing that “attorneys have an ethical obligation to verify the citations and propositions they submit to courts,” regardless of whether AI tools are used. As a result, the court ordered that Howe not be compensated for his work under the Criminal Justice Act, removed him from the case, and referred the matter for possible disciplinary action.

The ruling underscores growing concerns about the reliability of generative AI in legal practice, even as some systems demonstrate advanced capabilities. In 2023, researchers reported that GPT-4 was able to pass the Uniform Bar Exam, scoring in roughly the 90th percentile and outperforming many human test takers, according to a Stanford-affiliated analysis.
Still, the Sixth Circuit cautioned that such advancements do not replace attorney oversight, warning that “new technologies… are no substitute for tried-and-true safeguards managed by practicing attorneys.”

The case is the latest in a string of incidents involving lawyers misusing AI in court filings. In 2025, two attorneys representing MyPillow CEO Mike Lindell were sanctioned $3,000 each after submitting a brief containing nearly 30 defective citations, including references to nonexistent cases, according to a prior report by The Dallas Express.

Courts have increasingly flagged so-called “hallucinations,” where AI systems generate false but plausible-sounding information. A separate DX investigation in 2025 found that Google’s AI Overview tool falsely claimed that Diana Ross had been arrested for cocaine possession, an assertion unsupported by public records.

In the Kentucky case, the Sixth Circuit noted that even though Howe’s brief cited real cases rather than entirely fictitious ones, the inaccurate quotations and misleading arguments still constituted serious misconduct.

“Attorneys should not utilize technology without knowing the ways in which it can be misused or contribute to inaccuracies,” the court wrote, adding that reliance on staff to oversee AI-generated work did not meet professional standards.

The decision also highlighted broader ethical considerations, including the need for lawyers to safeguard client information, maintain competence with evolving technology, and ensure transparency when using AI tools.

While Howe told the court he had not previously been disciplined in his 40-year career, the judges concluded that his failure to review the AI-generated material resulted in “inexcusable transgressions” that delayed proceedings and consumed judicial resources.

The Dallas Express reached out to Howe for comment before publication, but did not receive a response.