Our latest repost:

We begin our analysis by addressing the foundation of the entire body of data analytic scholarship on appellate judging: competing theories of judicial decision making.

The oldest theory by far is generally known in the literature as “formalism.”  This is the theory we all learned in law school, according to which every decision turns on four factors, each completely extrinsic to the background and ideology of the individual judge: (1) the case record on appeal; (2) the applicable law; (3) controlling precedent; and (4) judicial deliberations (at least in the appellate world).  As Judge Richard Posner of the Seventh Circuit has pointed out, Blackstone was describing the formalist theory when he called judges “the depositories of the laws; the living oracles, who must decide in all cases of doubt, and who are bound by an oath to decide according to the law of the land.”  In Federalist No. 78, Alexander Hamilton expounded the same theory when he wrote that judges have “no direction either of the strength or of the wealth of the society; and can take no active resolution whatever.  [The judicial branch] may truly be said to have neither force nor will, but merely judgment.”

Much more recently, Chief Justice Roberts endorsed the formalist theory when, at his confirmation hearing, he compared a Supreme Court Justice to a baseball umpire – merely calling balls and strikes, never pitching or hitting.  For decades, politicians have promoted the formalist ideal by insisting that judges should merely interpret or discover the law rather than make it (such comments seem to be made most often in complaints that one judge or another has fallen short of that ideal).

The adequacy of formalism as an explanation of how judicial decisions are made has been questioned for generations.  As I noted two posts ago, Charles Grove Haines showed in 1922 that magistrate judges in New York City appeared to be reaching widely varying results in factually indistinguishable public intoxication cases.  Many observers have pointed out that if formalism (which posits that there is one correct answer to every case, entirely extrinsic to the judges) best explained how appellate courts actually operate, then dissent should be exceedingly rare, if not unheard of.  In fact, dissent is quite rare at intermediate appellate courts, if you consider both unpublished and published decisions.  But at appellate courts of last resort, and in all appellate courts when you consider the published decisions which shape the law, dissent typically runs anywhere from 20% to 45%.  Other observers have noted that strict formalism cannot explain the importance of diversity in the judiciary: if individual judges’ judicial or political ideologies and personal backgrounds were entirely irrelevant to outcomes, diversity on the bench would not matter.

Still others have pointed out that even the politicians who like to endorse the ideal of formalism have never actually believed that it explains judicial decision making.  As Professors Lee Epstein and Jeffrey A. Segal point out in Advice and Consent, their book on the politics of judicial appointments, 92.5% of the 3,082 appointments to the lower federal courts made between 1869 and 2004 went to members of the President’s own party.  Surely that number would be far lower if the philosophy of an individual judge had no impact on judicial decision making.

Most of all, critics of formalism have argued that it is in fact possible to predict appellate decision making reasonably well over time based upon factors unrelated to the facts and legal doctrine of any specific case.  For example, in a 2004 study, Theodore W. Ruger and his co-authors attempted to predict the result in every case at the U.S. Supreme Court during the 2002 term using a six-factor model: (1) the circuit of origin; (2) the issue involved; (3) the type of petitioner; (4) the type of respondent; (5) whether the lower court decision was liberal or conservative; and (6) whether the petitioner challenged the constitutionality of a law or practice.  They compared the model’s predictions to independent predictions made by legal experts.  The statistical (and decidedly non-formalist) model predicted 75% of the Court’s results correctly; the legal experts were correct 59.1% of the time.
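For readers curious how such a model works mechanically, here is a minimal sketch of a factor-based classifier broadly in the spirit of the study’s decision-tree approach.  Everything in the snippet (the factor codings, the sample cases and the outcomes) is invented for illustration; it is not the study’s actual data or model.

```python
# A minimal sketch of a six-factor outcome predictor. All data are invented.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# One row per hypothetical case: circuit of origin, issue area, petitioner
# type, respondent type, lower-court direction, constitutional challenge.
cases = [
    ["9th", "civil rights", "individual", "government", "liberal", "yes"],
    ["5th", "economic",     "business",   "individual", "conservative", "no"],
    ["2nd", "criminal",     "individual", "government", "conservative", "no"],
    ["9th", "economic",     "government", "business",   "liberal", "yes"],
]
outcomes = ["reverse", "affirm", "affirm", "reverse"]  # invented labels

encoder = OrdinalEncoder()
X = encoder.fit_transform(cases)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, outcomes)

# Predict a new hypothetical case described by the same six factors.
new_case = [["5th", "criminal", "individual", "government", "conservative", "no"]]
print(model.predict(encoder.transform(new_case))[0])
```

The point is not this toy model’s accuracy; it is that a predictor built only from case-level factors, with no knowledge of the record or the briefs, can be tested against expert predictions just as the study did.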

Image courtesy of Flickr by Ken Lund (no changes).

Our short series of contextual reposts continues:

Although the state Supreme Courts have not attracted anything near the level of study from academics engaged in empirical legal studies that the U.S. Supreme Court and the Federal Circuits have, a number of researchers have attempted to compare how influential the various state courts have been in the development of American law. One of the first efforts was published in 1936 by Rodney L. Mott, “Judicial Influence” (30 Am. Pol. Sci. Rev. 295 (1936)). Using several different proxies for influence, including law professors’ rankings, reprinting of a court’s cases in casebooks, citations by other state Supreme Courts and citations by the U.S. Supreme Court, Mott concluded that the most influential state Supreme Courts between 1900 and 1936 were those of New York, Massachusetts, California and Illinois.

In 1981, Lawrence Friedman, Robert Kagan, Bliss Cartwright and Stanton Wheeler published “State Supreme Courts: A Century of Style and Citation” (33 Stan. L. Rev. 773 (1981)). Friedman and his colleagues assembled a database consisting of nearly 6,000 cases from sixteen state Supreme Courts spanning the years 1870-1970. Among other things, the authors counted the number of times each case had been cited by out-of-state courts as a rough proxy for the author court’s influence. As far back as the 1870-1880 period, California ranked third among all state Supreme Courts in the sample for out-of-state citations, behind only New York and Massachusetts. By the 1940-1970 period – not coincidentally, a period when the California Supreme Court was developing a national reputation for innovation with a string of landmark decisions under the leadership of Chief Justices Gibson, Traynor and Wright – California had moved into first place in out-of-state citations. Fully 92% of all California Supreme Court decisions in the sample were cited at least three times by out-of-state courts, and 26% were cited more than eight times.

Two years later, Professor Gregory Caldeira published “On the Reputation of State Supreme Courts” (5 Pol. Behav. 83 (1983)). Using a database limited to cases published in 1975, Professor Caldeira focused on citations by other state Supreme Courts to each state’s decisions as a proxy for influence. He concluded that the top-performing courts were California, New York and New Jersey. Professor Scott Comparato took a somewhat similar approach in 2002 with “On the Reputation of State Supreme Courts Revisited,” using a random sample of thirty cases from each state Supreme Court. Professor Comparato concluded that the Supreme Courts of California and New York were cited by out-of-state courts significantly more often than the Supreme Courts of any other state.

In 2007, Jake Dear and Edward W. Jessen published “‘Followed Rates’ and Leading State Cases, 1940-2005” (41 U.C. Davis L. Rev. 683 (2007)). Dear and Jessen attempted to determine which state Supreme Court’s decisions were most often “followed” by out-of-state courts, as that term is used by Shepard’s. Dear and Jessen concluded that the California Supreme Court is the most frequently followed court in the country by a significant margin, with 33% more decisions between 1940 and 2005 that were followed at least once by an out-of-state court than the second-highest finisher, Washington. California’s lead lengthened when the authors limited the data to cases followed three or more times, or five or more times, by out-of-state courts – California led Washington 160 to 72 in decisions followed three or more times, and 45 to 17 for five or more.

Two years after Dear and Jessen’s paper was published, Professors Eric A. Posner, Stephen J. Choi and G. Mitu Gulati published their effort to bring the various measures together, “Judicial Evaluations and Information Forcing: Ranking State High Courts and Their Judges” (58 Duke L.J. 1313 (2009)). The authors compared the state Supreme Courts on three measures – productivity, opinion quality and independence – using a database consisting of all of the Supreme Courts’ decisions between 1998 and 2000. Using out-of-state citations as a proxy for “opinion quality,” the authors determined that California was the most frequently cited court by a wide margin, with 33.76 “citations per judge-year,” compared to 22.40 for Delaware, the second-place finisher.
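The “citations per judge-year” figure is straightforward arithmetic: total out-of-state citations to a court’s decisions divided by the number of judge-years in the study window (roughly, the number of seats on the court multiplied by the years studied; the article’s exact construction may differ). A tiny sketch with invented numbers:

```python
# Sketch of a "citations per judge-year" normalization. Figures are invented;
# only the arithmetic illustrates the idea of adjusting for court size and
# the length of the study window.
def citations_per_judge_year(total_citations: int, seats: int, years: int) -> float:
    return total_citations / (seats * years)

# Hypothetical court: 630 out-of-state citations, 7 seats, 3-year window.
print(citations_per_judge_year(630, 7, 3))  # 30.0
```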

Image courtesy of Flickr by Ken Lund (no changes).

I’m always surprised when I encounter litigators who dismiss litigation analytics as a passing fad.  In fact, as the repost below shows, it’s a century-long academic enterprise that has produced many hundreds of studies – tens of thousands of pages of analysis – demonstrating the value of data analytics in better understanding how appellate decisions are actually made.  Here’s the second in our reprint series, both here and at the Illinois blog:

The application of data analytic techniques to the study of judicial decision making arguably begins with political scientist Charles Grove Haines’ 1922 article in the Illinois Law Review, “General Observations on the Effects of Personal, Political, and Economic Influences in the Decisions of Judges” (17 Ill. L. Rev. 96 (1922)). Reviewing the records of New York City magistrate courts, Haines noted that while 17,075 people had been charged with public intoxication in 1916 – 92% of whom had been convicted – one judge discharged just one of 566 cases, another 18%, and still another fully 54%. Haines argued from this data that results in the magistrate courts were reflecting to some degree the “temperament . . . personality . . . education, environment, and personal traits of the magistrates.”

Two decades later, another political scientist, C. Herman Pritchett, published The Roosevelt Court: A Study in Judicial Politics and Values, 1937-1947. Pritchett became interested in the work of the Supreme Court when he noticed that the Justices’ dissent rate had sharply increased in the late 1930s. Pritchett argued that the rising dissent rate necessarily weighed against the formalist view that “the law” was an objective reality which appellate judges merely found and declared. In The Roosevelt Court, Pritchett published a series of charts showing how often various combinations of Justices had voted together in different types of cases (the precursor of some of the analysis we’ll publish later this year in California Supreme Court Review).

Another landmark in the data analytic literature, the U.S. Supreme Court Database, traces its beginnings to the work of Professor Harold J. Spaeth about three decades ago. Professor Spaeth undertook to create a database which classified every vote by a Supreme Court Justice in every argued case for the past five decades. In the years that followed, Spaeth updated and expanded his database, and additional professors joined the groundbreaking effort. Today, thanks to the work of Professors Harold Spaeth, Jeffrey Segal, Lee Epstein and Sarah Benesh, the database contains 247 data points for every decision the U.S. Supreme Court has ever rendered – dating back to August 3, 1791.  The Supreme Court Database is a foundational tool utilized by nearly all empirical studies of U.S. Supreme Court decision making.

Not long after the beginnings of the Supreme Court Database, Professors Spaeth and Segal also wrote one of the landmarks of data-driven empirical research into appellate decision making: The Supreme Court and the Attitudinal Model, in which they proposed a model arguing that a judge’s personal characteristics – ideology, background, gender, and so on – and so-called “panel effects” – the impact of having judges of divergent backgrounds deciding cases together as a single, institutional decision maker – explained a great deal about appellate decision making.

The data analytic approach began to attract widespread notice in the appellate bar in 2013, with the publication of Judge Richard A. Posner and Professors Lee Epstein and William M. Landes’ The Behavior of Federal Judges: A Theoretical & Empirical Study of Rational Choice. Drawing upon arguments developed in Judge Posner’s 2008 book How Judges Think, Posner, Epstein and Landes applied various regression techniques to a theory of judicial decision making with its roots in microeconomic theory, discussing a wide variety of issues from the academic literature.

Today, there is an enormous academic literature studying the work of the U.S. Supreme Court and the Circuits from a data analytic perspective on a variety of issues, including case selection, opinion assignment, dissent aversion, panel effects, and the impact of ideology, race and gender. That literature has led to two excellent anthologies just in the last few years: The Oxford Handbook of U.S. Judicial Behavior, edited by Lee Epstein and Stefanie A. Lindquist, and The Routledge Handbook of Judicial Behavior, edited by Robert M. Howard and Kirk A. Randazzo.  The state Supreme Courts have attracted somewhat less study than the federal appellate courts, but that has begun to change in recent years, and similar anthologies for the state courts seem inevitable.

Image courtesy of Flickr by miheco (no changes).

The Illinois Supreme Court Review recently marked its sixth anniversary.  In April, this blog turns five.

So I thought it was time for a first: cross-posted reprints from the earliest days of the blogs.  My early attempts to provide context for the work and to answer the question I often heard in those days: “Interesting, but what difference does it make?”

So for the next 2-3 weeks, we’ll be reprinting those context posts – with minimal revisions – both here and at the Illinois blog.  For readers who follow both blogs, be warned – the two posts reprinted each week will be largely identical (and don’t worry – it’ll be easy to tell when we resume our regularly scheduled programming . . .).  So here we go:

One of the primary reasons appellate lawyering is a specialty is that appellate lawyers must contend with persuading a collective, institutional decision maker. An appellate panel isn’t like a jury. The members of a jury come together for the first time for a particular case, and part forever when it’s over. Members of an appellate panel have generally been on the Court for months if not years, and will be there for years after a particular case is over. Members of a jury don’t share anything akin to the “law of the Circuit” or the “law of this Court” as a collective enterprise built over a span of years. And although historically there has been considerable pressure on jurors to reach unanimity – less so in recent years on the civil side – they are almost always trying to reach a binary decision: yes/no, one side wins, one side loses. An appellate panel, on the other hand, is attempting to reach unanimity on a collective, reasoned, written argument. Decision making by appellate panels rather than by individual judges has all kinds of potential effects on the outcome, and therefore on appellate lawyers’ task of persuasion – from making judges more reluctant to dissent from a decision they disagree with, to causing judges to vote in a more (or less) liberal or conservative direction than they otherwise would because of the panel’s deliberations.

Over the past few generations, political scientists, law professors, economists and statisticians have developed a host of tools for better understanding the dynamics of group decision making. These include game theory, organization theory, behavioral microeconomics, opinion mining and data analytics. Some researchers have used game theory to develop important insights about everything from the inner workings of the U.S. Supreme Court[1] to why the Federal Circuits follow Supreme Court precedent.[2] Others have used traditional labor theory in an attempt to develop a unified theory of judicial behavior.[3] With the rise of widely available, massive computerized databases of appellate case law, the fastest-growing and most widely varied area of research has applied sophisticated statistical and “big data” techniques to understanding the law.

Data analytics is revolutionizing litigation. Several different companies are offering such services at the trial level. Lex Machina (acquired in 2015 by LexisNexis), Ravel Law (acquired two years later, also by LexisNexis) and Premonition each offer detailed analytics about trial judges, courts and case types based on databases of millions of pages of case information. ALM has also expanded its judicial profiles services to increase their focus on judge analytics.

In 2015, I started the Illinois Supreme Court Review to bring rigorous, law-review style empirical research founded on data analytic techniques to the study of appellate decision making. A year later, I expanded the project here. Both blogs are based on massive databases consisting of 125-150 data points (depending on the year) drawn from every case, civil and criminal, decided by the California and Illinois Supreme Courts.

Why?  Simple.  Litigators, whether they’re usually in the appellate or trial courts, frequently find themselves predicting the future.  This jurisdiction or this judge tends to be pro-plaintiff or pro-defendant.  Juries in this county tend to return excessive verdicts, or they don’t.  Trial or appellate litigation in this jurisdiction takes . . . this long.  What does it mean that the state Supreme Court just granted review?  Or what does it mean that the Supreme Court asked me way more questions at oral argument than it asked my opponent?

Every one of these questions has a data-driven answer.  Not just in California and Illinois, but in every jurisdiction in the country.  Sometimes the data confirms the traditional wisdom – and sometimes it proves that the traditional wisdom is dead wrong.

Want a more high-flown answer?  Try this one from Posner, Epstein and Landes’ The Behavior of Federal Judges:

The better that judges are understood, the more effective lawyers will be both in litigating cases and, as important, in predicting the outcome of cases, thus enabling litigation to be avoided or cases settled at an early stage.

So that’s what we do here.  For everyone who’s been with us for most or all of the six years since we started, thank you.  And for first-time visitors: we hope you’ll join us.

Image courtesy of Flickr by Andrew Dupont (no changes).

————————————————————————-

[1] James R. Rogers, Roy B. Flemming, and Jon R. Bond, Institutional Games and the U.S. Supreme Court (2006).

[2] Jonathan P. Kastellec, “Panel Composition and Judicial Compliance on the U.S. Courts of Appeals,” The Journal of Law, Economics & Organization, 23(2): 421-41.

[3] Richard A. Posner, Lee Epstein and William M. Landes, The Behavior of Federal Judges: A Theoretical & Empirical Study of Rational Choice (2013).

Last time, we reviewed Justice Corrigan’s voting record to see how often she has been in the minority in civil cases, year by year since joining the Court.  Overall, Justice Corrigan has been in the minority in 5.98% of the civil cases she’s participated in.  Today, we’re looking at Justice Liu’s numbers.

Justice Liu has voted with the minority in 6.51% of his civil cases.  There is a definite time trend in the data, which likely reflects the changing composition of the Court.  Between 2012 and 2017, Justice Liu was over his baseline in four of six years: 2012 (7.69%), 2014 (13.04%), 2016 (8.33%) and 2017 (7.14%).  Since then, he has been over his baseline only once – last year, at 6.9%.

Join us back here next week as we continue our review of the Justices’ minority voting percentages.

Image courtesy of Flickr by Edward De La Torre (no changes).


Last week, we looked at how often the Chief Justice is in the minority in civil cases as one measure of how much in sync with the rest of the Court she has been from one year to the next.  Since joining the Court, the Chief Justice has been in the minority only 3.85% of the time in civil cases.  Justice Corrigan is somewhat more likely to be in the minority – 5.98% since joining the Court in 2006.  There are no obvious time trends correlating with the changing membership of the Court.  In three of her first four years, Justice Corrigan was below her baseline.  She was below baseline in 2012 and 2014, but above it in 2013 and 2015.  Her minority percentage briefly spiked in 2016 (13.89%) and 2017 (7.32%), but has been below baseline ever since – 0% in 2018, 5.88% in 2019 and 3.7% last year.

Join us back here tomorrow as we review the data for Justice Liu.

Image courtesy of Flickr by Channone Arif (no changes).


Today, we begin the second phase of our review of the Justices’ individual voting records.  The percentage of the time a particular Justice votes in the majority or the minority is one measure of the degree to which he or she is in philosophical sync with the rest of the Court.  When applied to a Chief Justice, it is also arguably a measure of the Chief’s influence on the rest of the Court.  We begin with Chief Justice Cantil-Sakauye.

From joining the Court through the end of 2020, Chief Justice Cantil-Sakauye had voted in 312 civil cases.  She was in the minority in only 12 of those 312 cases – 3.85% of the total.  Interestingly, her percentage in the minority has ticked up a bit in the past two years as the share of Democratic-nominated Justices has increased.  The arrival of Justices Cuellar and Kruger had little effect on the Chief’s minority percentage – she was in the minority in 3.13% of civil cases in 2015, 5.56% in 2016, 2.44% in 2017 and 3.03% in 2018.  But in 2019, the Chief cast her most minority votes in civil cases since joining the Court – three votes, or 8.82% for the year.  This past year, the Chief Justice was in the minority in 2 of 29 cases – 6.9% of the total.  Since the beginning of 2019, the Chief Justice has been in the minority in 7.94% of civil cases.
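For readers who want to see the arithmetic behind these figures, here is a minimal sketch of a per-year minority-vote tally compared against a career baseline.  The vote records below are invented for illustration; our actual databases track far more fields per case.

```python
# Minimal sketch: per-year minority-vote rates versus a career baseline.
# The (year, in_minority) records below are invented for illustration.
from collections import defaultdict

votes = [
    (2019, False), (2019, True), (2019, False), (2019, False),
    (2020, False), (2020, False), (2020, True), (2020, False),
]

by_year = defaultdict(lambda: [0, 0])  # year -> [minority votes, total votes]
for year, in_minority in votes:
    by_year[year][0] += int(in_minority)
    by_year[year][1] += 1

baseline = 100 * sum(m for m, _ in by_year.values()) / len(votes)

for year in sorted(by_year):
    minority, total = by_year[year]
    rate = 100 * minority / total
    flag = "over" if rate > baseline else "at or under"
    print(f"{year}: {rate:.2f}% in the minority ({flag} the {baseline:.2f}% baseline)")
```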

Join us back here next Thursday as we continue our examination of the Justices’ minority percentages.

Image courtesy of Flickr by chrissharkman (no changes).


Today we’re concluding our initial review of the Justices’ voting records with Justice Cuellar and Justice Groban.

Since joining the Court in 2015, Justice Cuellar has voted in 201 civil cases.  He has voted to affirm in 73 cases and to reverse 90 times.  Those votes show a curious time pattern: in 2015 and 2016, his votes to reverse outpaced his votes to affirm by eight, while in 2017 and 2018, he voted to affirm eight more times than to reverse.  So the entire 17-vote margin between affirm and reverse for his tenure comes from 2019 and 2020.

Justice Cuellar has cast 17 split votes to affirm in part and reverse/vacate/modify in part.  He has voted to deny once, to grant four times, to vacate twice, and has cast 14 “other” votes – nearly all in certified question appeals from the Ninth Circuit.

Since joining the Supreme Court in 2019, Justice Groban has voted in 54 civil cases.  He has voted to affirm only 11 times – 20.37% of his total caseload – and voted to reverse completely in 31 cases (57.41%).  He has cast three split votes, two votes to grant and seven “other.”
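The percentages above are simple frequency counts over vote types.  The sketch below reproduces them from the Justice Groban totals just reported; only the code itself is new.

```python
# Vote-type distribution for Justice Groban's 54 civil cases, using the
# totals reported above (11 affirm, 31 reverse, 3 split, 2 grant, 7 other).
from collections import Counter

votes = (
    ["affirm"] * 11 + ["reverse"] * 31 + ["split"] * 3
    + ["grant"] * 2 + ["other"] * 7
)

counts = Counter(votes)
total = len(votes)
for vote_type, count in counts.most_common():
    print(f"{vote_type:>7}: {count:2d} ({100 * count / total:.2f}%)")
```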

Join us back here tomorrow as we discuss how often Chief Justice Cantil-Sakauye has voted in the minority in civil cases.

Image courtesy of Pixabay by 1552036 (no changes).


Justice Leondra R. Kruger took her seat on the Court on January 5, 2015.  Between that time and the end of 2020, she voted in precisely 200 civil cases.  She has voted to affirm in only 63 of those cases, or 31.5% of the total, and to reverse in full 95 times, or 47.5%.  There are no clear time trends in those totals.

Justice Kruger has cast split votes to affirm in part and reverse/vacate/modify in part in 21 cases.  She has voted to grant three times, and to deny and to vacate twice each.  Finally, she has cast 14 “other” votes, nearly all of them in certified question appeals from the Ninth Circuit.


Join us back here next week as we continue our examination of the Justices’ voting records.

Image courtesy of Flickr by Karen Borter (no changes).

Today, we turn to our initial examination of Justice Liu’s voting record.

Justice Goodwin Liu took his seat on the Court on September 1, 2011.  Since that time, he has voted in 292 civil cases.  The distribution of his votes is similar to those of the Chief Justice and Justice Corrigan.  From 2011 through the end of 2020, Justice Liu voted to affirm in 97 civil cases, or 33.22% of his total, and to reverse in 142 cases – 48.63%.  He has cast 24 split votes to affirm in part and reverse/vacate/modify in part, 3 votes each to deny and to grant, 2 votes to vacate and 21 “other” – nearly all in certified question cases from the Ninth Circuit.

Join us back here tomorrow as we begin our review of Justice Kruger’s voting record.

Image courtesy of Flickr by Doug Kerr (no changes).