Michael Cohen Says AI-Created Fake Cases Mistakenly Used in Court Brief

(Bloomberg) -- Donald Trump’s former lawyer Michael Cohen unwittingly included phony cases generated by artificial intelligence in a brief last month arguing for his release from post-prison supervision, according to court papers made public Friday.

Cohen, who was disbarred in 2019 after pleading guilty to lying to Congress, said in a statement that he used Google’s AI tool Bard to come up with the cases and then sent them to his lawyer. The brief, filed in federal court in Manhattan, was in support of his request for an early end to requirements that he check in with a probation officer and get permission to travel outside the US.

David Schwartz, the lawyer who filed it, said he mistakenly believed the cases had been vetted by Danya Perry, an attorney who had represented Cohen, and that he didn’t check them himself. Perry requested in a letter to the court that “Mr. Schwartz’s mistake in filing a motion with invalid citations not be held against Mr. Cohen” and that the judge release him from supervision.

In the wake of the legal faux pas, polite finger-pointing abounded.

The lawyers pointed to their client as the source of the bogus precedents, offering up his own admission that he had gotten the cases from Bard and failed to check them against standard legal research sources.

Cohen, for his part, said “it did not occur to me then - and remains surprising to me now - that Mr. Schwartz would drop the cases into his submission wholesale, without even confirming they existed.” He said he had thought of Bard as a “super-charged search engine” and not a service that would generate real-looking but phony legal cases.

The case is “a simple story of a client making a well-intentioned but poorly-informed suggestion,” trusting that his lawyer would vet the cases before relying on them in a brief, Perry said, arguing that Cohen is blameless. As for Schwartz, she said, he’s guilty only of an “embarrassing” mistake.

Lawyers’ Bane

Schwartz isn’t the first lawyer to find himself forced to explain AI-related errors in Manhattan federal court. In June two lawyers were fined $5,000 after a judge found they had cited phony cases, which had been generated by OpenAI Inc.’s ChatGPT, and then made misleading statements after he called the problem to their attention.

The use of AI for legal research has prompted judges across the country to issue standing orders governing its use. The federal appeals court in New Orleans is contemplating a rule requiring lawyers to certify either that “no generative artificial intelligence program was used” in drafting legal filings or that any AI-created work has been reviewed and approved by a human lawyer.

The case is US v. Cohen, 18-cr-00602, US District Court, Southern District of New York (Manhattan).

--With assistance from Holly Barker.

©2024 Bloomberg L.P.