How Were the Court’s Civil Cases Distributed at the Court of Appeal (Part 4)?

Yesterday, we reviewed the year-by-year data for the distribution of the Court’s civil docket among the Districts and Divisions of the Court of Appeal. Today, we review the most recent data, from 2012 to year-to-date 2017.

The Court heard twenty-four cases from the First District during this time. The Court decided one case from Division One each year from 2013 to 2017, one case from Division Two in 2012, 2013, 2016 and 2017, and one case from Division Three in 2013 and each year from 2015 through 2017. The Court decided two cases from Division Four in 2012 and 2015 and one case per year in 2013, 2014 and 2017. The Court decided one case a year from Division Five of the First District in 2012, 2014, 2015 and 2017.

The Court decided eighty-five civil cases from the Second District during this period. The Court decided seven cases from Division One of the Second District in 2016, three in 2013, two each in 2012, 2015 and so far in 2017, and one in 2014. The Court has decided three cases so far from Division Two in 2017, two in 2014, one each in 2013 and 2016, and none in 2012 or 2015. The Court has decided five civil cases so far this year from Division Three, four in 2016, three in 2013 and two per year in 2012, 2014 and 2015. The Court decided four cases from Division Four in 2015, two per year from 2012 through 2014 and one each in 2016 and 2017. The Court decided three cases from Division Five in 2014, two per year in 2012, 2015 and 2016, one in 2013 and none so far in 2017. The Court decided two cases from Division Six in 2014, one each in 2016 and 2017, and none in 2012, 2013 or 2015. The Court decided four cases from Division Seven in 2016 and one per year in 2013, 2015 and 2017. The Court decided five cases from Division Eight in 2013, two per year in 2016 and 2017 and one in 2012.

The Court decided two cases per year from the Third District in 2012, 2014, 2015 and 2016, and one each in 2013 and 2017. The Court has decided five cases so far this year from Division One of the Fourth District, four in 2015, three in 2012 and one each in 2013 and 2016. The Court decided three cases from Division Two in 2015, two each in 2013 and 2016 and one per year in 2012, 2014 and 2017. The Court decided five cases from Division Three of the Fourth District in 2015, three each in 2012 and so far in 2017, and one each in 2013 and 2016. The Court decided three cases from the Fifth District in 2014 and one per year in 2012, 2013, 2015 and 2017. The Court decided three cases from the Sixth District in 2013, two in 2012 and one each year from 2015 through 2017. The Court decided two cases on direct appeal in 2016, and one case per year within the Court’s original jurisdiction in 2012, 2016 and 2017. The Court decided three certified question appeals in 2013, two each in 2014 and 2017, and one in 2016.

Join us back here next Thursday as we begin looking at the distribution of the Court’s criminal docket between 1994 and 2017.

Image courtesy of Flickr by Keith Skelton (no changes).

How Were the Court’s Civil Cases Distributed at the Court of Appeal (Part 3)?

Last week, we reviewed the year-by-year distribution of the Court’s civil docket among the Districts and Divisions of the Court of Appeal for the years 1994 through 2005. Today, we review the data for the years 2006 through 2011.

The Court decided forty-five civil cases originating in the First District during these years. The Court decided three cases from Division One in 2007, one per year in 2008 and 2009, and none in 2006, 2010 and 2011. The Court decided three cases from Division Two in 2006 and 2010, two in 2007, one in 2009 and none in 2008 and 2011. The Court decided three cases from Division Three in 2007, two in 2009, and one per year in 2006, 2008, 2010 and 2011. The Court decided four cases from Division Four in 2006, three in 2010, two in 2009 and 2011, one in 2008 and none in 2007. The Court decided four civil cases from Division Five in 2006, two per year in 2007 and 2010, one in 2008 and 2011, and none in 2009.

The Court decided ninety-eight civil cases from the Second District during these years. The Court decided three cases from Division One in 2011, two each in 2006 and 2008, one each in 2007 and 2010, and none in 2009. The Court decided three cases from Division Two in 2007, two per year in 2008 and 2010, one in 2006 and none in 2009 or 2011. The Court decided five cases from Division Three in 2009, four in 2008, three in 2010, and two per year in 2006, 2007 and 2011. The Court decided three cases from Division Four in 2006 and 2007, two in 2011, one per year in 2008 and 2010, and none in 2009. The Court decided seven cases from Division Five in 2006, six in 2007, five in 2008, four in 2011, two in 2010 and one in 2009. The Court decided three civil cases from Division Six in 2006, two in 2009, and one per year in 2007, 2008, 2010 and 2011. The Court decided three cases per year from Division Seven in 2006 and 2011, two in 2009, one each in 2007 and 2010 and none in 2008. The Court decided four cases from Division Eight in 2009, three in 2010, one per year in 2006 and 2007, and none in 2008 or 2011.

We report the remainder of the data for these years in Table 332. The Court decided nine cases per year from the Third District in 2006 and 2007, six in 2008, five in 2009, three in 2010 and one in 2011. The Court decided eight civil cases from Division One of the Fourth District in 2008, five in 2007 and 2009, two in 2006 and 2011, and one in 2010. The Court decided four cases from Division Two of the Fourth District in 2007, three in 2006, two per year in 2008 and 2009, one in 2011 and none in 2010. The Court decided four cases from Division Three in 2009, three in 2007 and 2010, two in 2011, one in 2008 and none in 2006. The Court decided three civil cases from the Fifth District in 2007, two in 2006 and one in 2008. The Court decided six cases from the Sixth District in 2010, four in 2009, two per year in 2006, 2007 and 2011, and one in 2008. The Court heard one direct appeal from the Court of Appeal in 2010, and one case within the Court’s original jurisdiction each year in 2009 and 2011. The Court decided one certified question appeal in 2006, two in 2007, three in 2009, five in 2010 and four in 2011.

Join us back here tomorrow as we review the data for the Court’s civil docket between 2012 and 2017.

Image courtesy of Flickr by Ingrid Taylar (no changes).

How Were the Court’s Civil Cases Distributed at the Court of Appeal (Part 2)?

Yesterday, we began our review of the distribution of the Court’s civil cases at the Districts (and Divisions) of the Court of Appeal, reviewing the data for the years 1994 through 1999. Today, we review the data for the years 2000 through 2005.

The Court heard a total of 45 cases from the First District. The Court decided four cases from Division One in 2002, three each in 2000, 2003 and 2005, one in 2004 and zero in 2001. The Court decided five cases from Division Two in 2000, two each from 2002 and 2004, one each from 2001 and 2003, and none in 2005. The Court decided three cases from Division Three in 2001, one each in 2002, 2004 and 2005, and zero in 2000 and 2003. The Court decided two cases from Division Four in 2002, one each in 2000 and 2004, and zero in 2001, 2003 and 2005. The Court decided four cases from Division Five of the First District in 2005, three in 2004, two in 2002, one in 2001, and zero in 2000 and 2003.

The Court’s civil docket from the Second District was essentially flat between 2000 and 2005, as the Court decided 115 cases. The Court decided five cases from Division One in 2000, three each in 2001, 2002 and 2004 and two each in 2003 and 2005. The Court decided three cases from Division Two in 2005, two each in 2001 and 2002, one in 2004 and zero in 2000 and 2003. The Court decided seven cases from Division Three in 2000, six in 2005, four in 2003 and three each year in 2001, 2002 and 2004. The Court decided four cases per year from Division Four in 2000, 2001 and 2004, three cases in 2003 and two cases each in 2002 and 2005. The Court decided four cases per year from Division Five in 2000 and 2004, two per year in 2001 and 2003 and one per year in 2002 and 2005. The Court decided three cases from Division Six in 2002, two each in 2000 and 2001, one per year in 2003 and 2005 and zero in 2004. The Court decided six cases from Division Seven in 2004, four in 2002, two per year in 2001, 2003 and 2005 and one in 2000. The Court decided one case from Division Eight in 2004 and one in 2005, and zero in the other years.

The Court decided seven cases from the Third District in 2005, six in 2004, five in 2003, four in 2000, three in 2001 and two in 2002. The Court decided five cases from Division One of the Fourth District in 2004, three each from 2001, 2002, 2003 and 2005, and two in 2000. The Court decided six cases from Division Two of the Fourth District in 2005, five from 2002, four from 2003, three per year from 2000 and 2001 and one from 2004. The Court decided eight cases from Division Three of the Fourth District in 2001, five in 2004, two per year from 2002 and 2003, one from 2005 and zero in 2000. The Court decided four cases from the Fifth District in 2005, three cases from 2004, two from 2003 and one each year from 2000, 2001 and 2002. The Court decided four cases from the Sixth District in 2000, three from 2003 and 2005, two from 2002, one from 2004 and zero from 2001. The Court decided only two cases within its original jurisdiction – one in 2000 and one in 2004. The Court decided four certified question cases each in 2001 and 2002, two in 2000 and one per year in 2003, 2004 and 2005.

Join us back here next Thursday as we look at the years 2006 through 2017.

Image courtesy of Flickr by Derek Giovanni (no changes).

How Were the Court’s Civil Cases Distributed at the Court of Appeal (Part 1)?

Last week, we reviewed the government’s record in civil and criminal appeals. This week, we begin our review of which Districts of the Court of Appeal have produced the most civil cases each year.

Just as we did earlier this week in our study of the Illinois Supreme Court’s caseload, we use population distribution as a rough-cut estimate of how we would expect the caseload to be distributed geographically. The twelve counties of the First District comprised 16.12% of California’s population as of the 1990 census. However, between 1994 and 1999, the First District accounted for 75 civil cases, or 25.42% of the Court’s civil docket. Division One produced two cases per year in 1994, 1996 and 1997, one each in 1998 and 1999, and a high of four in 1995. Division Two produced five cases in 1994, four in 1997, three each in 1998 and 1999, two in 1995 and one in 1996. Division Three produced five cases in 1995, four in 1998, two in 1994 and 1999, and one per year in 1996 and 1997. Division Four produced five cases per year in 1996 and 1998, three each in 1994 and 1995, two in 1997 and none at all in 1999. Division Five produced three cases per year in 1994, 1995 and 1997, two in 1999, one in 1996 and none in 1998.
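To make that comparison concrete, here is a minimal sketch, in Python, of the benchmark we are using. The statewide total of roughly 295 civil cases for 1994-1999 is back-solved from the figures above (75 cases equaling 25.42% of the docket), and the function is our own illustration rather than part of any published methodology.

```python
def representation_ratio(district_cases: int, total_cases: int,
                         population_share: float) -> float:
    """Ratio of a district's share of the civil docket to its share
    of state population: 1.0 means the docket tracks population
    exactly; above 1.0 means the district is overrepresented."""
    docket_share = district_cases / total_cases
    return docket_share / population_share

# First District, 1994-1999: 75 of ~295 civil cases, 16.12% of population.
print(round(representation_ratio(75, 295, 0.1612), 2))   # ~1.58 (overrepresented)

# Second District (discussed below): 110 cases, 34% of population.
print(round(representation_ratio(110, 295, 0.34), 2))    # ~1.10 (roughly proportional)
```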

Where the First District was arguably a bit overrepresented on the Court’s civil docket between 1994 and 1999, the Second District’s share of the civil caseload closely matched its population. The four counties of the Second District, based on the 1990 census, contained exactly 34% of California’s population. For the years 1994 through 1999, the Second District produced 110 of the Court’s civil cases – 37.29% of the docket. Division Five led the Second District, producing 21 civil cases during these years; Division Seven was second at 19, with the others tightly bunched behind. Division One of the Second District produced four cases in 1998 and 1999, three in 1995 and 1997, two in 1994 and none in 1996. Division Two produced five cases in 1997, three in 1998, two in 1996 and one per year in 1994, 1995 and 1999. Division Three produced a high of six cases in 1997, three in 1999, two in 1995 and one per year in 1994, 1996 and 1998. Division Four produced four cases in 1995 and 1999, three in 1998 and one case per year in 1994, 1996 and 1997. Division Five produced five cases per year in 1996 and 1999, four in 1994, three in 1995 and 1998 and one in 1997. Division Six accounted for three cases per year in 1994, 1995 and 1997, two in 1999 and one case per year in 1996 and 1998. Division Seven produced five cases in 1994 and again in 1998, four in 1995, three in 1999, two in 1997 and none in 1996.

We report the data from the rest of the state in Table 326. The Court decided six civil cases from the Third District in 1999, five in 1995, 1996 and 1998, three in 1997 and two in 1994. The Court decided 45 cases in all from the Fourth District, whose Division One consists of San Diego and Imperial counties. The Court decided four cases from Division One of the Fourth in 1998, three in 1997 and 1999, and two per year from 1994 through 1996. The Court decided five cases from Division Two in 1998, three in 1994 and 1997, two in 1999, one in 1995 and none in 1996. The Court decided six cases from Division Three in 1994, two each in 1995, 1996, 1998 and 1999, and one in 1997.

The Court decided six cases from the Fifth District in 1997, two in 1994 and one each in 1995, 1996, 1998 and 1999. The Court decided four cases from the Sixth District in 1995 and again in 1999, three in 1994, two in 1998, one in 1997 and none in 1996. The Court decided one direct appeal in 1994 and five in 1995. The Court decided one civil case within its original jurisdiction in 1995, two in 1998 and four in 1999. The Court decided no certified question cases at all during these years.

Join us back here tomorrow as we turn our attention to the data for the years 2000 through 2005.

Image courtesy of Flickr by MoonJazz (no changes).

What’s the Government’s Winning Percentage in Criminal Appeals?

Yesterday, we reviewed the year-by-year data on the State’s appearances as appellant and respondent in criminal cases from 1994 to 2016.  Today, we review the State’s winning percentage in each role.

The State won 80.95% of its cases as petitioner in 1994, 93.75% in 1995, 64.29% in 1996 and 88.89% in 1997. The State won 63.64% in 1998, 54.55% in 1999, 82.35% each in 2000 and 2001, 81.82% in 2002, 90% in 2003 and 83.33% in 2004.
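The arithmetic behind these percentages is simply wins divided by appearances in the role. As an illustration, the win counts below are back-solved from the percentages here and the case counts in the companion post on State appeals (21 such cases in 1994, 16 in 1995), so they are inferences rather than reported figures.

```python
def win_pct(wins: int, total: int) -> float:
    """Winning percentage, rounded to two decimals as in these posts."""
    return round(100 * wins / total, 2)

# 80.95% of 21 State appeals in 1994 implies 17 wins.
assert win_pct(17, 21) == 80.95
# 93.75% of 16 State appeals in 1995 implies 15 wins.
assert win_pct(15, 16) == 93.75
```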

The State’s winning percentage remained relatively high for the ten years that followed.  The State won 83.33% of its cases as petitioner in 2005, 87.5% in 2006, two-thirds in 2007, 76.47% in 2008, 83.33% in 2009, 77.78% in 2010, 88.89% in 2011 and 81.82% in 2012.  The State won all its cases as appellant in 2013, but only 53.33% in 2014 and two-thirds in 2015 and 2016.

We report the State’s winning percentage as respondent in criminal cases from 1994 to 2004 in Table 322. In 1994, the State won 85% of its cases as respondent. The State won 79.31% of its cases in 1995, 73.08% in 1996, 72% in 1997 and 78.79% in 1998. The State won only 60% in 1999, but the State’s winning percentage returned to trend thereafter: 81.58% in 2000, 78.95% in 2001, 71.43% in 2002, 78.26% in 2003 and 72.22% in 2004.

The State continued to win an average of about three-quarters of its cases as respondent from 2005 to 2013. The State won 79.07% of its cases in 2005, 64.86% in 2006, 78.72% in 2007, 79.59% in 2008, 79.17% in 2009, 77.78% in 2010, 80.95% in 2011, 72.73% in 2012 and 74.29% in 2013.

In the years since, the State’s winning percentage as respondent has dropped substantially. The State won only 58.54% of its cases as respondent in 2014, 53.13% in 2015 and 46.51% in 2016.

Join us back here next Thursday as we turn our attention to another issue in the Court’s decision making.

Image courtesy of Flickr by Miheco (no changes).

How Many Criminal Appeals Are the Result of Government Appeals?

Last week, we reviewed how often public entities appeared as appellant and respondent in civil cases, and how often the public entities prevailed.  This week, we review the Court’s experience in criminal cases.

The Court decided 21 criminal cases in 1994 where the State was the appellant. The Court heard 16 such cases in 1995, 14 in 1996, 18 in 1997, 11 each in 1998 and 1999, 17 each in 2000 and 2001, 11 in 2002, 10 in 2003 and 12 in 2004.

The Court decided 18 cases involving State appeals in 2005, 16 in 2006, 15 in 2007, 17 in 2008, 12 in 2009 and 18 in 2010. The Court decided nine cases involving State appeals in 2011, 22 in 2012, 16 in 2013 and 15 in 2014. The Court decided 12 cases involving State appeals in 2015 and nine in 2016.

In Table 318, we report the year-by-year data for the State’s appearances as respondent. The Court decided 20 cases with State respondents in 1994, 29 in 1995, 26 in 1996 and 25 in 1997. In 1998, the Court decided 33 cases involving State respondents, 30 in 1999 and 38 each in 2000 and 2001. The totals increased each year after – 42 in 2002, 46 in 2003 and 54 in 2004.

The Court decided 43 cases involving State respondents in 2005, 37 in 2006, 47 in 2007, 49 in 2008 and 48 in 2009.  The Court decided 54 cases involving State respondents in 2010, 42 in 2011, 55 in 2012 and 35 in 2013.  The Court decided 41 cases involving State respondents in 2014, 32 in 2015 and 43 in 2016.

Join us back here tomorrow as we turn our attention to the State’s winning percentages in criminal cases.

Image courtesy of Flickr by Ken Lund (no changes).

Making Sense of the Litigation Analytics Revolution

Reprinted with permission from the October 2017 issue of ALI CLE’s The Practical Lawyer.

 

“In God we trust. All others must bring data.”

— Professor William Edwards Deming

All of us who often speak and write about the ongoing revolution in data analytics for litigation have heard it from at least some of our fellow lawyers: “Interesting, but so what?”

Here’s the answer in a nutshell. One often hears that business hates litigation because it’s enormously expensive and risky. There’s a degree of truth to that, but it’s far from the whole truth. Business doesn’t dislike expense or risk per se. Business dislikes unquantified expense and risk. As the maxim often (incorrectly) attributed to Peter Drucker goes, “You can’t manage what you can’t measure.”

Don’t believe me? If your client offers to sell an investment bank a two billion dollar package of mortgages, the bank gets nervous. But tell the bank that based on the past ten years of data, 65.78 percent of the mortgages will be paid off early, 24.41 percent will be paid off on time, and 9.81 percent will default, and they know how to deal with that.

It’s the same thing in litigation. For generations, most facts that would help a business person understand the risks involved have been solely anecdotal: this judge is somewhat pro-plaintiff or pro-defendant; the opposing counsel has a reputation for being aggressive or smart (or not); juries in this jurisdiction often make runaway damage awards or are notoriously parsimonious. But every one of those anecdotal impressions and bits of conventional wisdom can be approached from a data-driven perspective, quantified and proven (or disproven). Do that, and we’ve taken a giant step towards approaching litigation the way a business person approaches business—by quantifying and managing every aspect of the risk.

I hear lawyers talking about “early adopters” of data analytics tools in litigation, but the truth is, we’re not early adopters by a long shot. The business world has been investing billions in data analytics tools for a generation in order to understand and manage its risks.

Tech companies use algorithms to choose among job applicants and assign “flight risk” scores to employees according to how likely each is thought to be to leave. Billions of dollars in stock are traded every day by algorithms designed to predict gains and reduce risk. Netflix’s and Amazon’s websites (among many others) track what you look at and buy or rent in order to recommend additional choices you’ll be interested in. In 2009, Google developed a model using search data which predicted the spread of a flu epidemic virtually in real time. UPS has saved millions by placing monitors in its trucks to predict mechanical failures and schedule preventive maintenance. The company’s algorithm for planning drivers’ optimal routes shaved 30 million miles off drivers’ routes in a single year. Early in his term as New York Mayor, Michael Bloomberg created an analytics task force that crunched massive amounts of data gathered from all over the city to determine which illegal conversions (structures cut up into many smaller units without the appropriate inspections and licensing) were most likely to be fire hazards. Political campaigns now routinely use mountains of data not only to identify persuadable voters, but to determine the method most likely to work with each one.

The application of data analytic techniques to the study of judicial decision making arguably begins with a 1922 article for the Illinois Law Review by political scientist Charles Grove Haines. Haines reviewed over 15,000 cases of defendants convicted of public intoxication in the New York magistrate courts. He showed that one judge discharged only one of 566 cases, another 18 percent of his cases, and still another fully 54 percent. Haines argued that his data showed that case results were reflecting to some degree the “temperament . . . personality . . . education, environment, and personal traits of the magistrates.”

In 1948, political scientist C. Herman Pritchett published The Roosevelt Court: A Study in Judicial Politics and Values, 1937-1947. Pritchett presented a series of charts showing how often various combinations of Justices had voted together in different types of cases. He argued that the sharp increase in the dissent rate at the U.S. Supreme Court in the late 1930s necessarily argued against the “formalist” philosophy that law was an objective reality which judges merely found and declared.

Another landmark in the judicial analytics literature, the U.S. Supreme Court Database, traces its beginnings to the work of Professor Harold Spaeth about three decades ago. Professor Spaeth created a database which classified every vote by a Supreme Court Justice in every argued case for the past five decades. Today, thanks to the work of Spaeth and his colleagues Professors Jeffrey Segal, Lee Epstein and Sarah Benesh, the database has been expanded to encompass more than two hundred data points from every case the Supreme Court has decided since 1791. The Supreme Court Database is the foundation of most data analytic studies of the Supreme Court’s work.

Professors Spaeth and Segal also wrote another classic, The Supreme Court and the Attitudinal Model, in which they proposed a model arguing that a judge’s personal characteristics—ideology, background, gender, and so on—and so-called “panel effects”—the impact of having judges of divergent backgrounds deciding cases together as a single, institutional decision maker—could reliably predict case outcomes.

The data analytic approach began to attract attention in the appellate bar in 2013, with the publication of The Behavior of Federal Judges: A Theoretical & Empirical Study of Rational Choice. Judge Richard Posner and Professors Lee Epstein and William Landes applied various regression techniques to a theory of judicial decision making with its roots in microeconomic theory, discussing a wide variety of issues from the academic literature.

Although the litigation analytics industry is changing rapidly, the four principal vendors are Lex Machina, Ravel Law, Bloomberg Litigation Analytics and Premonition Analytics. Lex Machina and Ravel Law began as startups (indeed, both began at Stanford Law School), but LexisNexis has now purchased both companies. Lex Machina is fully integrated with the Lexis platform, and Ravel will be integrated in the coming months. Although there are certain areas of overlap, all four analytics vendors have taken a somewhat different approach and offer unique advantages. For example, Premonition’s database covers not just most state and all federal courts, but also offers data on courts in the United Kingdom, Ireland, Australia, the Netherlands and the Virgin Islands.

The role of analytics in litigation begins with the earliest moments of a lawsuit. If you’re representing the defendant, Bloomberg and Lex Machina both offer useful tools for evaluating the plaintiff. How often does the plaintiff file litigation, and in what areas of the law? Were earlier lawsuits filed in different jurisdictions from your new case, and if so, why? Scanning your opponent’s filings in cases in other jurisdictions can sometimes reveal useful admissions or contradictory positions. If your case is a putative class action, these searches can help determine at the earliest moment whether the named plaintiff has filed other actions, perhaps against other members of your client’s industry. Have the plaintiff’s earlier actions ended in trials, settlements or dismissals? This can give counsel an early indication of just how aggressive the plaintiff is likely to be.

All four major vendors have useful tools for researching the judge assigned to a new case. Ravel Law has analytics for every federal judge and magistrate in the country, as well as all state appellate judges. State court analytics research is always a challenge because of the number of states whose dockets are not yet available in electronic form, but Premonition Analytics claims to have as large a state-court database as Lexis, Westlaw and Bloomberg combined. How much experience does your judge have in the area of law your case involves compared to other judges in the jurisdiction? How often does the judge grant partial or complete dismissals or summary judgments early on? How often does the judge preside over jury trials? Were there jury awards in any of those trials, and how do they compare to other judges’ trials? What is defendants’ winning percentage in recent years before your judge? Ravel Law and Bloomberg can provide data on how often your trial judge’s opinions are cited by other courts (an indicator of how well respected the judge is by his or her peers), as well as how often the judge is appealed, and how many of those appeals have been partially or completely successful. The data can be narrowed by date in order to focus on the most recent decisions, as well as by area of law. Say your assigned judge appears to be more frequently appealed and reversed than his or her colleagues in the jurisdiction. Are the reversals evenly distributed across time, or concentrated in any particular area of law? If your judge’s previous decisions in the area of law where your case arises have been reversed unusually often, it can influence how you conduct the litigation.

Counsel can keep all this data current through Premonition’s Vigil court alert system, which patrols Premonition’s immense litigation database and can give counsel hourly alerts and updates, keyed to party name, judge, attorney or case type, from federal, state and county courts. Many jurisdictions give parties one opportunity, before any substantive ruling is made, to seek recusal of the assigned judge as a matter of right, without proof of prejudice. Data-driven judge research can help inform your decision as to whether to exercise that right.

Lex Machina’s analytics platform focuses on several specific areas of law, giving counsel a wealth of information for researching a jurisdiction (additional databases on more areas of law will be coming soon). For example, in antitrust, cases are tagged to distinguish among class actions, government enforcement actions, Robinson-Patman Act cases and others. The platform is integrated with the MDL database, linking procedurally connected cases. The database reflects both damages—whether through a jury award or a settlement—and additional remedies, such as divestiture and injunction. Cases are also tagged by the specific antitrust issue, such as Sherman Act Section 1, Clayton Act Section 7, the rule of reason or antitrust exemptions. The commercial litigation data includes the nature of the resolution, any compensatory or punitive damages, and the legal finding—contract breach, rescission, unjust enrichment, trade secret misappropriation, and many more. The copyright database similarly tracks damages, findings and remedies, and allows users to exclude from their data “copyright troll” filings. Lex Machina’s federal employment law database includes tags for the type of damages (backpay, liquidated damages, punitive damages and emotional distress), the nature of any finding, and the remedy given. The patent litigation database includes many similar fields, but also a patent portfolio evaluator, isolating which patents have been litigated, and a patent similarity engine, which finds new patents and tracks their litigation history. The securities litigation database enables users to focus on the type of alleged violation, tracking the most relevant outcomes, and the trademark litigation database contains data for the legal issues and findings, damages and remedies in each case.

Analytics research is important for the plaintiffs’ bar as well. Bloomberg’s Legal Analytics platform is integrated with its enormous library of corporate data covering 70,000 publicly held and 3.5 million private companies. Counsel can survey a company’s litigation history, and the information is keyed to the underlying dockets. The data can be focused by jurisdiction or date, as well as to include or exclude subsidiaries. Lex Machina’s Comparator app can compare not only the length of time particular judges’ cases tend to take to reach key milestones but also previous outcomes, including damages awards and attorneys’ fees awards. A plaintiffs’ firm can use such data in cases where there are multiple possible venues to select the jurisdiction likely to deliver the most favorable result in the shortest time.

One bit of conventional wisdom that is commonly heard in the defense bar is that defendants should generally remove cases to federal court when they have the right to do so because juries are less prone to extreme verdicts and the judges are more favorable to defendants. Although comprehensive data on state court trial judges is still less common than data on federal judges, all four major analytic platforms can help evaluate courts and compare judges, giving a client a data-driven basis for making the removal decision.

Researching your opposing counsel is important for both defendants and plaintiffs. How aggressive is opposing counsel likely to be? Bloomberg Analytics covers more than 7,000 law firms, and enables users to focus results by clients, date and jurisdiction. Is your opposing counsel in front of your judge all the time? If so, that can inform decisions like whether to seek of-right substitution of the judge or remove the case. What were the results of those earlier lawsuits? Reviewing opposing counsel’s client list can suggest how experienced opposing counsel is in the area of law where your case arises. Lex Machina’s law firms comparator also enables the user to compare opposing counsel to their peers, and get an idea of what opposing counsel’s approach to the lawsuit is likely to be. Lex Machina’s app enables counsel to compare opposing counsel’s previous cases by open and terminated cases, days elapsed to key events in the case, case resolutions and case results. In preparing this article, I reviewed a report generated by Lex Machina’s Law Firms Comparator and learned several things I didn’t know about my own firm’s practice. Ravel Law’s Firm Analytics enables counsel to study similar data about one’s opponent, focused by practice area, court, judge, time or proceeding—or all of the above. Firm Analytics also compares opposing counsel to other law firms in the jurisdiction, showing whether counsel appears before the trial judge frequently, and whether they tend to win (or lose) more often than comparable firms. All this information gives counsel a tremendous leg up as far as estimating how expensive the litigation is likely to be.

As you begin to develop the facts of a case, motions begin to suggest themselves. Is your client’s connection to the jurisdiction sufficiently tenuous to support a motion to dismiss for lack of personal jurisdiction, or for change of venue? Has the plaintiff failed to satisfy the Twombly/Iqbal standard by stating a plausible claim? Discovery motions to compel and for protective orders are commonplace, and inevitably defense counsel will face the question of whether to file a motion for summary judgment.

Ravel Law’s platform has extensive resources for motions research. For every federal judge, the system can show you how likely the judge is to grant, partially grant or deny a total of 90+ motions—not just the easy ones like motions for summary judgment or to dismiss, but motions to stay proceedings or remand to state court, motions to certify for interlocutory appeal, motions for attorneys’ fees, motions to compel or for an injunction and motions in limine. This can be an enormous savings in both time and money for your clients. Even where examining the facts suggests that a motion for summary judgment might be in order, that calculus might look very different when one learns that the trial judge has granted only 18 percent of the summary judgment motions brought before him or her since 2010.
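As an illustration of the kind of grant-rate tabulation described above, here is a small sketch with invented rulings; it is not Ravel Law's actual data model or API, just the underlying computation.

```python
from collections import Counter

# Hypothetical motion outcomes for a single judge; in practice these
# rows would come from a docket analytics database.
rulings = [
    ("summary judgment", "denied"), ("summary judgment", "granted"),
    ("summary judgment", "denied"), ("motion to dismiss", "granted"),
    ("motion to dismiss", "partial"), ("motion in limine", "granted"),
]

def grant_rate(rows, motion_type):
    """Share of rulings on a motion type that were outright grants."""
    outcomes = Counter(outcome for motion, outcome in rows if motion == motion_type)
    total = sum(outcomes.values())
    return outcomes["granted"] / total if total else None

print(grant_rate(rulings, "summary judgment"))  # 0.333... -- one grant in three
```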

On Lex Machina’s platform, counsel can use the “motion kickstarter” to survey recent motions before the assigned trial judge. The “motion chain” links together the briefing and the eventual order for each motion, so counsel can identify the arguments which have succeeded in recent cases, and review both the parties’ briefs and the judge’s order.

Ravel Law offers extensive resources to help counsel in crafting their arguments. As counsel does her research, Ravel Law shows visualizations demonstrating how different passages of a case have been cited, and by which judges, enabling counsel to quickly zero in on the passages which judges have found most persuasive. Or the research can be approached from the other direction, by identifying the cases and passages most often cited by your judge for particular principles. How does the judge typically explain the standards for granting a motion to dismiss, or for summary judgment? Does the judge tend to frequently cite Latin legal maxims, or even sports analogies? How does your federal judge handle the state law of his or her home jurisdiction? How has your judge ruled in rapidly evolving areas of the law, such as class certification, arbitration and personal jurisdiction? Now it’s easy to find out.

And when the case finally goes to trial, there’s still a role for judicial analytics. How often do the judge’s cases go to trial? What kinds of cases have tended to go to trial before your trial judge? What were the results? The data you pulled at the outset on the length of the judge’s previous trials might suggest just how liberal or strict the judge tends to be with the parties in trial. Did either party waive a jury, and if so, what happened? How has your trial judge handled jury instructions in recent trials where the parties didn’t waive the jury? What were the awards of damages, plus any awards of attorneys’ fees or punitive damages?

Post-trial is an often overlooked opportunity to cut litigation short by limiting or entirely wiping out an adverse verdict through new trial motions and motions notwithstanding the verdict. Counsel can determine on Lex Machina’s motion comparator, Ravel Law’s motions database or Bloomberg’s Litigation Analytics how likely judges are to either overturn or modify a jury verdict. A close look at the data and recent orders and motions will help inform a decision as to whether to file a motion for judgment notwithstanding the verdict or a motion for new trial. If your client has been hit with a punitive damages award, you’ll need to review not only the judge’s record on post-trial review of punitives, but drill down from there to the order and the briefing on the motion to evaluate what approaches worked (or didn’t).

Analytics have tremendous potential in appellate work too. All of the major vendors have enormous collections of data on state and federal appellate courts and judges. But for my firm’s appellate practice, I was interested in tracking a number of different variables which would be difficult to extract through computer searches, so rather than relying on any of the vendors, I built two databases in-house. Our California and Illinois Supreme Court databases are modeled after Professors Spaeth and Segal’s Supreme Court database, tracking many of the same variables. My California Supreme Court database encompasses every case the court has decided since January 1, 1994 – 1,004 civil and 1,293 criminal, quasi-criminal and attorney disciplinary. My Illinois Supreme Court database is even bigger, including every case that court has decided since January 1, 1990 – 1,352 civil and 1,529 criminal. For each of these 5,000+ cases, I’ve extracted roughly one hundred different data points. Was the plaintiff or the defendant the appellant in the Supreme Court? Is there a government entity on either side? Where did the case originate, and who was the trial judge? Before the intermediate appellate court, we track dissents, publication, the disposition and the ideological direction of the result. We track three dates for each case: the date review was granted, the date of the argument and the date of the decision. Before the Supreme Court, we note both the specific issue and the area of the law involved, the prevailing party and the vote, the writers and length of all opinions, the number of amicus curiae briefs and who each amicus supported, and of course each Justice’s vote. In addition, our database includes data from every oral argument at the Illinois Supreme Court since 2008, and arguments at the California Supreme Court since May 2016, when the Court first started posting video and audio tapes of its sessions.
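For readers curious what one record in such a database might look like, here is a hypothetical sketch. The field names are ours, chosen to mirror the variables described above, not the author's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CaseRecord:
    docket_number: str
    area_of_law: str            # e.g. "tort", "arbitration"
    appellant: str              # "plaintiff" or "defendant"
    government_party: bool
    review_granted: date
    argued: date
    decided: date
    lower_court_dissent: bool
    lower_court_published: bool
    vote: str                   # e.g. "7-0"
    amicus_briefs: int
    justice_votes: dict = field(default_factory=dict)   # justice -> vote

    @property
    def grant_to_argument_days(self) -> int:
        """The lag segment the post identifies as dominant in California."""
        return (self.argued - self.review_granted).days
```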

Conventional wisdom in most jurisdictions holds that unless the intermediate appellate court’s decision was published with a dissent, it’s not worth seeking Supreme Court review. We’ve demonstrated that in fact, a significant fraction of both the California and Illinois Supreme Court’s civil dockets arises from unpublished unanimous decisions. We track not just aggregate reversal rates for intermediate appellate courts, but break the data down into reversal rates by area of law.

Lag times are particularly interesting in California, since the Supreme Court is generally required to decide cases within ninety days of oral argument. As a result, the vast majority of the lag between grant of review and final decision in California falls between grant and argument, rather than argument and decision. Not only have we tracked the average time to resolution for civil and criminal cases; we’ve also demonstrated that there’s a correlation between the Supreme Court’s ultimate decision and the lag time from grant to argument. We’ve tracked the individual Justices’ voting records, not just overall, but one area of law at a time.

Only in the past few years have data analysts begun to take a serious look at appellate oral arguments. The earliest study appears to be Sarah Levien Shullman’s 2004 article for the Journal of Appellate Practice and Process. Shullman analyzed oral arguments in ten cases at the United States Supreme Court, noting each question asked by the Justices and assigning a score from one to five to each depending on how helpful or hostile she considered the question to be. Based upon her data, she made predictions as to the ultimate result in the three cases that had not yet been decided. Comparing her predictions to the ultimate results, Shullman concluded that it was possible to predict the result in most cases by a simple measure – the party being asked the most questions generally lost.

John Roberts addressed the issue of oral argument the year after Shullman’s study appeared. Then-Judge Roberts noted the number of questions asked in the first and last cases of each of the seven argument sessions in the Supreme Court’s 1980 Term and the first and last cases in each of the seven argument sessions in the 2003 Term. Like Shullman, Roberts found that the losing side was almost always asked more questions.

Timothy Johnson and three other professors published their analysis in 2009. Johnson and his colleagues examined transcripts from every Supreme Court case decided between 1979 and 1995—more than 2,000 hours of argument in all, and nearly 340,000 questions from the Justices. The study concluded that, after controlling for a number of other factors that might explain case outcomes, the party asked more questions generally wound up losing the case.

Professors Lee Epstein and William M. Landes and Judge Richard A. Posner published their study in 2010. Epstein, Landes and Posner used Professor Johnson’s database, tracking the number of questions and average words used by each Justice. Like Professor Johnson and his colleagues, they concluded that the more questions a Justice asks of a party, all else being equal, the more likely the Justice is to vote against that party, and the greater the difference between the total questions asked of each side, the more likely a lopsided result. Our study of every oral argument at the Illinois Supreme Court from 2008 through 2016 came to the same conclusion: the larger the margin between your total questions from the Court and your opponent’s, the less your chance of winning.
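The heuristic these studies converge on fits in a few lines of code. The argument counts below are invented for illustration; the published studies, of course, also control for other predictors of outcome.

```python
# Each row: (questions to appellant, questions to respondent, winner).
arguments = [
    (34, 18, "respondent"),
    (12, 29, "appellant"),
    (22, 21, "respondent"),
]

def predict_loser(q_appellant: int, q_respondent: int) -> str:
    """The side asked more questions is predicted to lose."""
    return "appellant" if q_appellant > q_respondent else "respondent"

# A prediction is correct when the predicted loser is not the winner.
correct = sum(predict_loser(qa, qr) != winner for qa, qr, winner in arguments)
print(f"heuristic accuracy: {correct}/{len(arguments)}")   # 3/3 on this toy data
```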

Litigation analytics can uncover useful insights outside of courtrooms as well. Corporate legal departments are increasingly using analytics to track and manage their outside counsel. Does the company have more or less litigation than its competitors? Do the lawsuits last a comparable length of time, and is the company’s win rate comparable to its peers? What are the trends over time? When the company is selecting counsel for a particular lawsuit, depending on where the case is venued, it should be possible by consulting Premonition, Lex Machina or Bloomberg to compare each candidate counsel’s winning percentage in the jurisdiction and before the particular judge, as well as to develop far more background information than was ever possible before. From the viewpoint of the law firms competing for business, analytics offers an invaluable insight into the nature of your target client’s business. All the same questions which the legal department will likely be interested in are valuable to the outside attorneys as well. Is your target’s current counsel not winning cases as often as other companies are? What’s the nature of the company’s litigation? And if candidate counsel can discover the names of the other firms competing for the business, analytics databases can provide detailed information about those lawyers’ experience and relevant background. Premonition’s Vigil court alerts system can get lawyers word of a new filing or case development involving a client or potential client only an hour or two after it happened, not a few days later.

So how does the future look? We’re still in the early days of the revolution in litigation analytics. As the federal PACER system is upgraded and more and more states put some or all dockets in electronic form, more litigation data will become available to analytics vendors. Analytics scholars will develop new methods to turn additional aspects of litigation into usable data. Upgrades in artificial intelligence systems will result in analytics learning to gather more subtle data from court records— the kind of variables that require understanding and interpretation, rather than simply looking for text strings. More analytics vendors will inevitably enter the market.

Lawyers will have to become comfortable working with analytics data in situations where decisions were once made based upon intuition and experience, both in courtrooms and in clients’ counsel searches. More law firms will likely develop in-house analytics databases similar to mine in other large states.

We’ve barely scratched the surface in terms of statistical and theoretical techniques which can uncover new insights about litigation and judicial decision making. Several academics have proposed algorithms for predicting case outcomes based on information such as the composition of an appellate panel and the ideology, gender and background of the judges, and these algorithms have generally performed better than law professors’ predictions based on the legal issues involved. Regression modeling is a natural next step not just to predict case results, but to estimate the real impact of various variables, such as how much (if at all) amicus support increases one’s odds of winning. Several vendors have touted their data on winning percentages for lawyers, but regression modeling could isolate how much impact a particular counsel really has upon a party’s chances, or whether the jurisdiction or the nature of a lawyer’s clients explains his or her record. As Judge Posner and Professors Epstein and Landes suggested in The Behavior of Federal Judges, computerized sentiment analysis of the content of judicial opinions could produce more nuanced insights about particular judges’ attitudes and ideology. Game theory is another well-developed academic discipline with a largely untapped potential for understanding how appellate courts work.
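As a sketch of what such a model might look like, the following fits a logistic regression to simulated case outcomes. The features, effect sizes and data are all invented; the point is only that the fitted coefficients estimate each variable's marginal impact on the odds of winning, which raw win-rate tallies cannot do.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated cases: columns are [amicus briefs supporting appellant,
# lower-court dissents, government party]. Effects below are assumed.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 3)).astype(float)
true_logit = -0.5 + 0.6 * X[:, 0] + 0.8 * X[:, 1]      # third variable: no effect
y = (rng.random(200) < 1 / (1 + np.exp(-true_logit))).astype(int)

model = LogisticRegression().fit(X, y)
# exp(coefficient) is the multiplicative change in the odds of winning
# per one-unit increase in each variable, holding the others constant.
print(np.round(np.exp(model.coef_[0]), 2))
```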

We end with the question every analytics scholar (and vendor) is asked sooner or later: will litigation analytics replace lawyers?

The answer is no, for two reasons.

The first is what I think of as the orange used car problem.

A few years ago, a company which conducts data mining competitions for corporate clients ran a contest in hopes of building an algorithm to determine which of the used cars available at auction were likely to have mechanical problems. They collected the data, ran the correlations, and it turned out the strongest correlation to “few or no mechanical problems” was, you guessed it, that the vehicle was orange.

A few people facetiously proposed theories as to why orange used cars might be more trouble-free (maybe car fanciers with better maintenance habits are drawn to them?), but this is an example of one of the most fundamental rules in data analytics: correlation does not necessarily indicate causation. Saying two variables are highly correlated doesn’t necessarily mean one is causing the other; both could be caused by a third, unidentified variable, or it could be a random correlation, or your dataset could be biased or simply too small. Much of litigation analytics (at least short of the more sophisticated logistic regression modeling) currently consists of identifying correlations. It takes an experienced lawyer intermediary to review the data and understand what are valuable, actionable insights and what are just orange used cars.

The second reason is even more fundamental: all litigation analytics require interpretation, and one must keep constantly in mind (and remind clients early and often) that nothing in analytics is a guarantee of any particular result. The more heavily questioned party does win at times in the appellate courts. Just because Justices A and B have voted together in 75 percent of the tort cases in the past five years is no guarantee they won’t disagree about the next one. The academic algorithms which have been developed for predicting results at the Supreme Court are wrong anywhere from twenty percent to a third of the time. Some often-quoted statistics can mislead through over-aggregation. For example, perhaps an intermediate court’s overall reversal rate on all cases is two-thirds, but on further analysis, it turns out that the reversals are all in tort cases, while the court is generally affirmed in other areas of the law.
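Here is the over-aggregation pitfall in miniature, with invented case counts:

```python
# (affirmed, reversed) counts by area of law for one hypothetical court.
cases = {
    "tort":      (2, 22),
    "contract":  (10, 2),
    "insurance": (8, 2),
}

affirmed = sum(a for a, r in cases.values())
reversals = sum(r for a, r in cases.values())
print(f"overall reversal rate: {reversals / (affirmed + reversals):.0%}")   # 57%

# The aggregate hides the real pattern: reversals concentrated in tort.
for area, (a, r) in cases.items():
    print(f"  {area}: {r / (a + r):.0%} reversed")   # tort 92%, contract 17%, insurance 20%
```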

Does this mean that litigation analytics are irrelevant? No, no more so than the bank would find the experiential data on the hypothetical mortgage bundle we discussed at the outset irrelevant. Attorneys have been predicting what courts are likely to do for generations based on intuition, experience and anecdote. The business world began moving away from that a generation ago, and now that revolution has struck the law full force. Today, there’s data for most aspects of litigation, and that trend builds every year. The advent of litigation analytics and data-driven decision making is a game-changer in terms of intelligent management of litigation risk.

Image courtesy of Flickr by Barcelona Supercomputing Center – National Supercomputing Center (BSC-CNS)

How Often Do Governmental Entities Win in Civil Cases?

Last week, we began our review of the Court’s experience with governmental entities.  This week, we’re evaluating governmental entities’ winning percentage.

Petitioners won 85.71% of their cases in 1994, 59.09% in 1995 and 100% in 1996.   Petitioners won 85.71% in 1997, 64.29% in 1998, 57.14% in 1999 and 71.43% in 2000.

In 2001, governmental entity petitioners won 57.14% of their cases. Petitioners won 88.89% in 2002 and 2004, and 80% in 2003.

We review governmental entity respondents from 1994 to 2004 in Table 313. Half of the Court’s governmental entity respondents prevailed in 1994. Two-thirds prevailed in 1995, 37.5% in 1996 and 71.43% in 1997. Two-thirds won in 1998, 40% in 1999, 42.86% in 2000 and one-quarter in 2001. In 2002, 57.14% of governmental entity respondents prevailed. In 2003, one-third of respondents won, and in 2004, 36.36% did.

In Table 314, we report government entity appellants’ winning percentage for the years 2005 through 2016.  Government entity appellants won only 28.57% of their cases in 2005.  Appellants won three-quarters of the time in 2006, 80% in 2007, 77.78% in 2008, and 100% in 2009.  In 2011, government entity appellants won three quarters of the time.  Appellants won 100% of their cases in 2012, but none in 2013 and 100% again in 2014.  Government entity appellants won two-thirds of their cases in 2015 and 77.78% in 2016.

In 2005, government entity respondents won 44.44% of their cases.  In 2006, respondents won 47.83%.  Government entity respondents won 70% in 2007 and half in 2008 and 2009.  In 2010, government entity respondents won 57.14% of their cases.  In 2011, government entity respondents won only 28.57% of their cases.  In 2012, government entity respondents won 83.33%.  Government entity respondents won half their cases in 2013, two-thirds in 2014, 46.67% in 2015, but only 20% in 2016.

Join us back here next Thursday as we turn our attention to the Court’s criminal docket.

Image courtesy of Flickr by Owen Allen (no changes).

How Common Are Governmental Parties in the Court’s Civil Docket?

For the past several weeks, we’ve looked at the Court’s record with death penalty appeals.  This week and next, we’re looking at the Court’s record with parties that are governmental entities.

In Table 308, we report the year-by-year data for governmental entity petitioners, beginning in 1994. The numbers have varied widely. The Court decided seven cases involving governmental entity petitioners in 1994, 22 in 1995, seven in 1996 and 14 each in 1997 and 1998. The Court decided seven cases each involving governmental petitioners in 1999 and 2000, 14 in 2001, nine in 2002, five in 2003 and nine in 2004.

The data for petitioners in the years 2005 through 2016 are reported in Table 309.  The Court decided seven cases involving governmental entity petitioners in 2005, eight in 2006 and ten in 2007.  The Court decided nine such cases in 2008, six in 2009, nine in 2010, four in 2011 and six in 2012.  The Court decided only two cases involving governmental entity petitioners in 2013 and 2014 and three in 2015, but the Court’s total increased to nine in 2016.

We track the Court’s year-by-year totals of governmental entity respondents in Table 310. The Court decided ten cases involving governmental entities as respondents in 1994, nine in 1995, eight in 1996 and seven in 1997. The Court decided twelve such cases in 1998, ten in 1999, seven in 2000, eight in 2001 and seven in 2002. The Court decided twelve cases involving governmental entity respondents in 2003 and eleven such cases in 2004.

The Court decided nine cases involving governmental entity respondents in 2005.  That number increased to twenty-three in 2006, but was back down to ten in 2007, six in 2008 and ten in 2009.  The Court decided seven such cases each in 2010 and 2011, six in 2012 and eight in 2013.  The Court only decided three cases involving governmental entity respondents in 2014, but decided fifteen in 2015.  The Court’s total was back down to five in 2016.

Join us back here tomorrow as we look at governmental entities’ winning percentage in civil cases.

Image courtesy of Flickr by Ray_LAC (no changes).

 

Reviewing the Justices’ Voting Records in Death Penalty Appeals, 1994-2017 (Part 2)

Yesterday, we reviewed the individual Justices’ voting records in death penalty appeals for the years 1994 through 2005.  Today, we look at the second half of our study period, 2006 through 2017.

In Table 303, we report the percentage of each Justice’s votes in death penalty cases which were to affirm or to reverse in part while affirming the sentence for the years 2006 through 2011. No one topped ninety percent during these years in votes to affirm, as Chief Justice Lucas and Justice Arabian both did for the first six years we studied (based on comparatively few votes for each). Five permanent Justices voted to affirm between eighty and ninety percent of the time – Chief Justice Cantil-Sakauye (85.71%), Justices Baxter (84.51%), Chin (84.4%) and Corrigan (84.06%), and Chief Justice George (82.76%). Three more voted to affirm between seventy and eighty percent of the time – Justice Werdegar (79.58%), Justice Moreno (77.97%) and Justice Kennard (77.46%). Justice Liu, who joined the Court in the final year of this six-year period, voted to affirm in half the death penalty cases he participated in. Pro tem Justices voted to affirm 83.33% of the time.
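Tables like this one can be built from case-level vote records with a simple tally. The records below are invented, purely to show the computation:

```python
from collections import Counter

# Invented records: (justice, vote category).
votes = [
    ("Baxter", "affirm"), ("Baxter", "affirm"),
    ("Baxter", "reverse in part, sentence affirmed"),
    ("Liu", "affirm"), ("Liu", "reverse"),
]

def category_shares(rows, justice):
    """Percentage of a Justice's votes falling in each category."""
    counts = Counter(category for j, category in rows if j == justice)
    total = sum(counts.values())
    return {category: round(100 * n / total, 2) for category, n in counts.items()}

print(category_shares(votes, "Baxter"))
# {'affirm': 66.67, 'reverse in part, sentence affirmed': 33.33}
```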

Votes to reverse in part while affirming the sentence were also more common than they had been earlier. Every Justice but one – Chief Justice Cantil-Sakauye – voted to reverse in part while affirming the sentence in at least 10% of cases. Justice Liu voted that way 25% of the time. Justice Kennard did so 13.38% of the time. Justice Moreno cast such votes 12.71% of the time. Chief Justice George (12.07%), Justice Chin (11.35%), Justices Werdegar and Baxter (11.27% each) and Justice Corrigan (10.87%) were next.

In Table 304, we review the Justices’ votes to overturn death sentences between 2006 and 2011 – both reversing in part with the sentence vacated, and outright reversals.  Pro tem Justices were the most frequent votes to reverse in part and vacate the sentence, casting such votes in 8.33% of their cases.  Justice Moreno was next at 5.93%, followed by Justices Kennard and Werdegar (4.93% each), Chief Justice George (3.45%), and Justices Corrigan, Chin and Baxter (2.17%, 2.13% and 2.11%, respectively).  Although he participated in far fewer cases than the others, joining the Court in the final year of our six year period, Justice Liu led in reversals, voting to reverse 25% of the time.  Chief Justice Cantil-Sakauye was next among the permanent Justices, voting to reverse in 4.76% of cases.  Justices Kennard and Werdegar both voted to reverse 4.23% of the time.  Justice Moreno was next at 3.39%, followed by Justices Corrigan, Chin and Baxter (2.9%, 2.13% and 2.11%, respectively) and Chief Justice George (1.72%).

In Table 305, we report the first half of the data for the years 2012-2017. During these years, the Court added two additional Democratic appointees, and the data clearly reflects the shift. Votes to affirm in full were about ten percentage points less common across the board. Justice Kennard was the most frequent vote to affirm at 75.47%. Justice Baxter voted to affirm 74.24% of the time, and Justice Chin did in 70.43% of his cases. Justice Corrigan voted to affirm 69.3% of the time, and Chief Justice Cantil-Sakauye did in 68.7%. Justice Werdegar voted to affirm in 64.91% of her cases, and Justice Liu did in 60.18%. The two least frequent votes to affirm in full were the two newest Justices – Justice Kruger (57.45%) and Justice Cuellar (53.19%). Even pro tem Justices were down substantially, voting to affirm in only 55.56% of their cases.

The most frequent vote to reverse in part while affirming the sentence among the permanent Justices was Justice Cuellar, who cast such votes in 23.4% of his death penalty cases. Justices Kruger and Liu were next at 21.28% and 20.35%, respectively. The Republican nominees were up in this category during these years too. Justices Corrigan and Werdegar each voted to reverse in part while affirming the sentence 19.3% of the time, as did Chief Justice Cantil-Sakauye. Next were Justice Chin (18.26%), Justice Baxter (16.67%) and Justice Kennard (15.09%). Pro tem Justices voted to reverse in part while affirming the sentence in 22.22% of their cases.

In Table 306, we report each Justice’s frequency of voting to reverse in part with the sentence vacated, and to reverse outright.  Justice Kruger was the most frequent vote to reverse in part at 17.02%.  Justices Cuellar and Liu were next at 14.89% and 14.16%.  Justice Werdegar voted to reverse in part in 10.53% of her cases.  Chief Justice Cantil-Sakauye did so in 8.7%, Justice Corrigan in 7.89%, Justice Chin in 7.83%, and Justice Baxter in 6.06%.  Justice Kennard was the least frequent vote to reverse in part at 5.66%.

Justice Cuellar was the most frequent vote to reverse outright, voting that way in 8.51% of death penalty cases he participated in. Justices Liu and Werdegar were next (5.31% and 5.26%, respectively). Justice Kruger voted to reverse in 4.25% of cases. Justices Kennard and Corrigan were next (3.77% and 3.51%), followed by Chief Justice Cantil-Sakauye and Justice Chin (3.48% each) and Justice Baxter (3.03%).

In Table 307, we mix and match the data for the last two periods – 2006 to 2011 and 2012 to 2017 – to better illustrate a noteworthy trend. Students of appellate decision-making have written extensively about “panel effects” for many years – the phenomenon of judges voting differently based upon who else is on the same panel (conservative Justices, for example, vote more liberally when sitting with liberals and more conservatively when sitting with conservatives, and the reverse holds for liberal Justices). Indeed, political scientists have written about the phenomenon as a more basic tenet of group decision theory (as in, voters being polled when surrounded by people who either agree with their attitudes, or don’t).

We report the fraction of cases in which each Republican nominee voted to reverse in part with the sentence vacated and to reverse outright in the years 2006 through 2011, compared to the years 2012 through 2017. The data shows clear evidence of a panel effect. For example, in the years 2006 through 2011, Justice Corrigan reversed in part in 2.17% of her cases and reversed outright 2.9% of the time. For the years 2012 through 2017, she voted to reverse in part in 7.89% of cases, and to reverse 3.51% of the time. Justice Werdegar’s numbers were up significantly too. For the years 2006 to 2011, she voted for a partial reversal 4.93% of the time and an outright reversal 4.23% of the time. For the years 2012 through 2017, she reversed in part 10.53% of the time, and reversed outright in 5.26%.

Chief Justice Cantil-Sakauye didn’t reverse in part in any case between 2006 and 2011.  She reversed in part in 8.7% of cases between 2012 and 2017.  She voted to reverse outright in 4.76% of cases between 2006 and 2011, and to reverse in 3.48% of her cases from 2012 on.  Justice Chin reversed in part and reversed outright in the same fraction of cases between 2006 and 2011 – 2.13%.  Between 2012 and 2017, Justice Chin reversed in part in 7.83% of his cases, and reversed outright 3.48% of the time.  Justice Baxter reversed in part and reversed outright 2.11% of the time for the years 2006 through 2011.  For the years 2012 through 2017, he reversed in part in 6.06% of cases, and reversed outright 3.03% of the time.

Join us back here next Thursday as we turn our attention to a new subject in our ongoing study of the California Supreme Court’s decision making.

Image courtesy of Flickr by    (no changes).
