
Chief justice centers Supreme Court annual report on AI’s dangers

Chief Justice John Roberts warned that courts will need to consider the proper use of artificial intelligence (AI), portraying it as a new frontier for change in an annual report that follows a turbulent year for the Supreme Court.

“I predict that human judges will be around for a while,” Roberts wrote in his report.

“But with equal confidence I predict that judicial work — particularly at the trial level — will be significantly affected by AI,” he continued. “Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.”

Released Sunday, the report makes no mention of the high court’s recent ethics controversies or the disputes implicating former President Trump that are barreling toward the justices and set to dominate the coming months.

The report instead marks the chief justice’s most extensive public statement to date on AI, building upon his comments over the years concerning emerging technologies’ impact on the law. 

For years, the justices have grappled with applying centuries-old legal doctrine to modern machinery in cases before them, and this term is no different.

But Roberts at times has stressed a need for courts to better incorporate new technologies into their operations. In 2014, as he announced the Supreme Court would move to an electronic case filing system, Roberts lamented that courts “often choose to be late to the harvest.”

“For those who cannot afford a lawyer, AI can help,” Roberts wrote Sunday. “It drives new, highly accessible tools that provide answers to basic questions, including where to find templates and court forms, how to fill them out, and where to bring them for presentation to the judge — all without leaving home. These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system.”

But while he hailed AI’s benefits in the legal field and elsewhere, Roberts warned that the technology requires “caution and humility.”

“One of AI’s prominent applications made headlines this year for a shortcoming known as ‘hallucination,’ which caused the lawyers using the application to submit briefs with citations to non-existent cases. (Always a bad idea.),” Roberts wrote.

Two days earlier, Trump’s former fixer and personal lawyer, Michael Cohen, admitted to giving his attorney fake case citations generated by the AI-powered chatbot Google Bard. Cohen’s lawyer used the citations in a motion to end Cohen’s supervised release early, a sentence that followed Cohen’s guilty plea to tax and campaign finance charges.

Cohen was not the first caught relying on fake AI-generated citations this year, however.

In June, a federal judge sanctioned two lawyers after one submitted fake case citations and admitted to generating them with the AI-powered chatbot ChatGPT. One federal appeals court is now proposing that lawyers certify they did not rely on AI in drafting court documents.

Roberts indicated that multiple committees of the Judicial Conference, the federal judiciary’s policy-making body over which he presides, will be involved in determining AI’s proper use.

The chief justice argued that AI risks invading privacy safeguards and dehumanizing the law, repeatedly stressing in his report the notion that machines cannot fully replace humans in the courts.

The prediction echoes comments Roberts made to graduates at a D.C.-area high school in 2018, when he told the students to “beware the robots” and AI while suggesting the technologies won’t “take over the world.”

“Judges, for example, measure the sincerity of a defendant’s allocution at sentencing,” Roberts wrote Sunday. “Nuance matters: Much can turn on a shaking hand, a quivering voice, a change of inflection, a bead of sweat, a moment’s hesitation, a fleeting break in eye contact. And most people still trust humans more than machines to perceive and draw the right inferences from these clues.”