Today, we begin a new subject in our ongoing analytics study of the Court’s decision making – oral arguments.  Although the academic community has been producing analytics studies of appellate decision making for a century, the analytics study of oral arguments is a much more recent development.

[We repeat the next five paragraphs for the benefit of any readers who don’t also read our sister blog the Illinois Supreme Court Review, where this material was published earlier this week.  Readers who do can skip the next five paragraphs.]

The earliest study appears to be Sarah Levien Shullman’s 2004 article for the Journal of Appellate Practice and Process.  Shullman analyzed oral arguments in ten cases at the United States Supreme Court, noting each question asked by the Justices and assigning a score from one to five to each depending on how helpful or hostile she considered the question to be. Once seven of the ten cases had been decided, she divided her observations according to whether the Justice ultimately voted for or against the party. Based upon her data, she made predictions as to the ultimate result in the three remaining cases. Shullman concluded that it was possible to predict the result in most cases by a simple measure – the party being asked the most questions generally lost.

John Roberts addressed the issue of oral argument the year after Shullman’s study appeared. Then-Judge Roberts (at the time, two years into his tenure on the D.C. Circuit) noted the number of questions asked in the first and last cases of each of the seven argument sessions in the Supreme Court’s 1980 Term and the first and last cases in each of the seven argument sessions in the 2003 Term. Like Shullman, Roberts found that the losing side was almost always asked more questions. So apparently “the secret to successful advocacy is simply to get the Court to ask your opponent more questions,” Judge Roberts wrote.

Professor Lawrence S. Wrightsman, a leading scholar in the field of psychology and the law, took an empirical look at U.S. Supreme Court oral arguments in a 2008 book. Professor Wrightsman chose twenty-four cases from the Supreme Court’s 2004 Term, dividing the group according to whether they involved what he called ideological or non-ideological issues. He then analyzed the number and tone of the Justices’ questions to each side, classifying questions as either sympathetic or hostile. Professor Wrightsman concluded that simple question counts were not a highly accurate predictor of ultimate case results unless the analyst also took into account the tone and content of the question.

Timothy Johnson and three other professors published their analysis in 2009. Johnson and his colleagues examined transcripts from every Supreme Court case decided between 1979 and 1995 – more than 2,000 hours of argument in all, and nearly 340,000 questions from the Justices. The researchers isolated data on the number of questions asked by each Justice in each argument, along with the average number of words used in each question. The study concluded that, after controlling for other factors that might explain case outcomes, the party asked more questions generally lost the case.

Professors Lee Epstein and William M. Landes and Judge Richard A. Posner published their study in 2010. Epstein, Landes and Posner used Professor Johnson’s database, tracking the number of questions and average words used by each Justice. Like Professor Johnson and his colleagues, they concluded that the more questions a Justice asks of a party, all else being equal, the more likely that Justice is to vote against the party, and the greater the difference between the total questions asked of each side, the more likely a lopsided result.

[ISCR readers resume here.]

The California Supreme Court began posting video and audio recordings of its oral arguments in 2016.  Since that time, the Court has asked 8,569 questions in civil cases – 4,450 of appellants and 4,119 of respondents.  Appellants received more questions than respondents in every year from 2016 through 2020 (so far).

In the next two Tables, we divide this data between affirmances and reversals.  First up – civil affirmances.  Did the (losing) appellants receive more questions, as the research would lead us to expect?

The answer is yes.  Losing appellants in civil cases averaged more questions than respondents in every year from 2016 to 2020.

Finally, we review the data for full and partial reversals, where the respondents are the losing party.  For four of the last five years, we see the expected relationship between average questions for the winner and loser – the respondents average more questions.  The only exception was 2019.
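The heuristic running through all of these studies, counting each side's questions and predicting that the side asked more will lose, can be sketched in a few lines. This is a minimal illustration only; the case counts below are hypothetical and are not drawn from the Court's actual argument data.

```python
# A minimal sketch of the question-count heuristic described in the studies
# above: the side asked more questions is predicted to lose.

def predict_loser(appellant_questions: int, respondent_questions: int) -> str:
    """Predict the losing side as the one asked more questions."""
    if appellant_questions > respondent_questions:
        return "appellant"
    if respondent_questions > appellant_questions:
        return "respondent"
    return "toss-up"

# Hypothetical arguments: (questions to appellant, questions to respondent)
cases = [(27, 14), (9, 22), (18, 18)]
predictions = [predict_loser(a, r) for a, r in cases]
print(predictions)  # ['appellant', 'respondent', 'toss-up']
```

As the Wrightsman study cautions, a raw count like this ignores the tone and content of the questions, which is why it is only a rough predictor.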

Join us back here tomorrow as we review the same metrics for the Court’s arguments in criminal cases.

Image courtesy of Flickr by Becky Matsubara (no changes).