Reprinted with permission from the October 2017 issue of ALI CLE’s The Practical Lawyer.

Cross-posted from the Illinois Supreme Court Review.

“In God we trust. All others must bring data.”

— Professor W. Edwards Deming

All of us who often speak and write about the ongoing revolution in data analytics for litigation have heard it from at least some of our fellow lawyers: “Interesting, but so what?”

Here’s the answer in a nutshell. One often hears that business hates litigation because it’s enormously expensive and risky. There’s a degree of truth to that, but it’s far from the whole truth. Business doesn’t dislike expense or risk per se. Business dislikes unquantified expense and risk. As the maxim often (incorrectly) attributed to Peter Drucker goes, “You can’t manage what you can’t measure.”

Don’t believe me? If your client offers to sell an investment bank a two-billion-dollar package of mortgages, the bank gets nervous. But tell the bank that, based on the past ten years of data, 65.78 percent of the mortgages will be paid off early, 24.41 percent will be paid off on time, and 9.81 percent will default, and it knows how to deal with that.
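That sort of quantification is nothing more than frequency counting over historical outcomes. A minimal sketch in Python, with hypothetical counts chosen to match the rates above:

```python
from collections import Counter

def outcome_rates(outcomes):
    """Turn a list of historical outcomes into percentage rates."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {outcome: round(100 * n / total, 2) for outcome, n in counts.items()}

# Hypothetical ten-year history of 10,000 mortgages
history = (["paid early"] * 6578) + (["paid on time"] * 2441) + (["default"] * 981)

rates = outcome_rates(history)
# rates == {"paid early": 65.78, "paid on time": 24.41, "default": 9.81}
```

The same counting logic applies to any historical outcome data, whether mortgage performance or motion rulings.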

It’s the same thing in litigation. For generations, most facts that would help a business person understand the risks involved have been solely anecdotal: this judge is somewhat pro-plaintiff or pro-defendant; the opposing counsel has a reputation for being aggressive or smart (or not); juries in this jurisdiction often make runaway damage awards or are notoriously parsimonious. But every one of those anecdotal impressions and bits of conventional wisdom can be approached from a data-driven perspective, quantified and proven (or disproven). Do that, and we’ve taken a giant step towards approaching litigation the way a business person approaches business—by quantifying and managing every aspect of the risk.

I hear lawyers talking about “early adopters” of data analytics tools in litigation, but the truth is, we’re not early adopters by a long shot. The business world has been investing billions in data analytics for a generation in order to understand and manage its risks.

Tech companies use algorithms to choose among job applicants and to assign “flight risk” scores to employees based on how likely each is thought to be to leave. Billions of dollars in stock are traded every day by algorithms designed to predict gains and reduce risk. Netflix and Amazon (among many others) track what you view, buy or rent in order to recommend additional choices you’ll be interested in. In 2009, Google developed a model using search data that predicted the spread of a flu epidemic virtually in real time. UPS has saved millions by placing monitors in its trucks to predict mechanical failures and schedule preventive maintenance, and the company’s algorithm for planning drivers’ optimal routes shaved 30 million miles off those routes in a single year. Early in his term as mayor of New York, Michael Bloomberg created an analytics task force that crunched massive amounts of data gathered from all over the city to determine which illegal conversions (structures cut up into many smaller units without the appropriate inspections and licensing) were most likely to be fire hazards. Political campaigns now routinely use mountains of data not only to identify persuadable voters but also to determine the method most likely to work with each one.

The application of data analytic techniques to the study of judicial decision making arguably begins with a 1922 article in the Illinois Law Review by political scientist Charles Grove Haines. Haines reviewed over 15,000 cases of defendants convicted of public intoxication in the New York magistrates’ courts. He showed that one judge discharged only one of 566 cases, another discharged 18 percent of his cases, and still another fully 54 percent. Haines argued that his data showed that case results reflected, to some degree, the “temperament . . . personality . . . education, environment, and personal traits of the magistrates.”

In 1948, political scientist C. Herman Pritchett published The Roosevelt Court: A Study in Judicial Politics and Values, 1937-1947. The book presented a series of charts showing how often various combinations of Justices had voted together in different types of cases. Pritchett argued that the sharp increase in the dissent rate at the U.S. Supreme Court in the late 1930s cut against the “formalist” philosophy that law was an objective reality which judges merely found and declared.

Another landmark in the judicial analytics literature, the U.S. Supreme Court Database, traces its beginnings to the work of Professor Harold Spaeth about three decades ago. Professor Spaeth created a database which classified every vote by a Supreme Court Justice in every argued case for the past five decades. Today, thanks to the work of Spaeth and his colleagues Professors Jeffrey Segal, Lee Epstein and Sarah Benesh, the database has been expanded to encompass more than two hundred data points from every case the Supreme Court has decided since 1791. The Supreme Court Database is the foundation of most data analytic studies of the Supreme Court’s work.

Professors Spaeth and Segal also wrote another classic, The Supreme Court and the Attitudinal Model, in which they proposed a model arguing that a judge’s personal characteristics—ideology, background, gender, and so on—and so-called “panel effects”—the impact of having judges of divergent backgrounds deciding cases together as a single, institutional decision maker—could reliably predict case outcomes.

The data analytic approach began to attract attention in the appellate bar in 2013, with the publication of The Behavior of Federal Judges: A Theoretical and Empirical Study of Rational Choice. Judge Richard Posner and Professors Lee Epstein and William Landes applied various regression techniques to a theory of judicial decision making rooted in microeconomics, addressing a wide variety of issues from the academic literature.

Although the litigation analytics industry is changing rapidly, the four principal vendors are Lex Machina, Ravel Law, Bloomberg Litigation Analytics and Premonition Analytics. Lex Machina and Ravel Law began as startups (indeed, both began at Stanford Law School), but LexisNexis has now purchased both companies. Lex Machina is fully integrated with the Lexis platform, and Ravel will be integrated in the coming months. Although there are certain areas of overlap, each of the four vendors takes a somewhat different approach and offers unique advantages. For example, Premonition’s database covers not only most state courts and all federal courts, but also courts in the United Kingdom, Ireland, Australia, the Netherlands and the Virgin Islands.

The role of analytics in litigation begins with the earliest moments of a lawsuit. If you’re representing the defendant, Bloomberg and Lex Machina both offer useful tools for evaluating the plaintiff. How often does the plaintiff file litigation, and in what areas of the law? Were earlier lawsuits filed in different jurisdictions from your new case, and if so, why? Scanning your opponent’s filings in cases in other jurisdictions can sometimes reveal useful admissions or contradictory positions. If your case is a putative class action, these searches can help determine at the earliest moment whether the named plaintiff has filed other actions, perhaps against other members of your client’s industry. Have the plaintiff’s earlier actions ended in trials, settlements or dismissals? This can give counsel an early indication of just how aggressive the plaintiff is likely to be.

All four major vendors have useful tools for researching the judge assigned to a new case. Ravel Law has analytics for every federal judge and magistrate in the country, as well as all state appellate judges. State court analytics research is always a challenge because of the number of states whose dockets are not yet available in electronic form, but Premonition Analytics claims to have as large a state-court database as Lexis, Westlaw and Bloomberg combined. How much experience does your judge have in the area of law your case involves, compared to other judges in the jurisdiction? How often does the judge grant partial or complete dismissals or summary judgments early on? How often does the judge preside over jury trials? Were there jury awards in any of those trials, and how do they compare to other judges’ trials? What is defendants’ winning percentage in recent years before your judge? Ravel Law and Bloomberg can provide data on how often your trial judge’s opinions are cited by other courts (an indicator of how well respected the judge is by his or her peers), as well as how often the judge is appealed, and how many of those appeals have been partially or completely successful. The data can be narrowed by date in order to focus on the most recent decisions, as well as by area of law. Say your assigned judge appears to be appealed and reversed more frequently than his or her colleagues in the jurisdiction. Are the reversals evenly distributed across time, or concentrated in any particular area of law? If your judge’s previous decisions in the area of law where your case arises have been reversed unusually often, that can influence how you conduct the litigation. Counsel can keep all this data current through Premonition’s Vigil court alert system, which patrols Premonition’s immense litigation database and can give counsel hourly alerts and updates, keyed to party name, judge, attorney or case type, from federal, state and county courts.
Many jurisdictions give parties one opportunity, before any substantive ruling is made, to seek recusal of the assigned judge as a matter of right, without proof of prejudice. Data-driven judge research can help inform your decision as to whether to exercise that right.

Lex Machina’s analytics platform focuses on several specific areas of law, giving counsel a wealth of information for researching a jurisdiction (additional databases on more areas of law will be coming soon). For example, in antitrust, cases are tagged to distinguish among class actions, government enforcement actions, Robinson-Patman Act cases and others. The platform is integrated with the MDL database, linking procedurally connected cases. The database reflects both damages—whether through a jury award or a settlement—and additional remedies, such as divestiture and injunction. Cases are also tagged by the specific antitrust issue, such as Sherman Act Section 1, Clayton Act Section 7, the rule of reason or antitrust exemptions. The commercial litigation data includes the nature of the resolution, any compensatory or punitive damages, and the legal finding—contract breach, rescission, unjust enrichment, trade secret misappropriation, and many more. The copyright database similarly tracks damages, findings and remedies, and allows users to exclude from their data “copyright troll” filings. Lex Machina’s federal employment law database includes tags for the type of damages (backpay, liquidated damages, punitive damages and emotional distress), the nature of any finding, and the remedy given. The patent litigation database includes many similar fields, but also a patent portfolio evaluator, isolating which patents have been litigated, and a patent similarity engine, which finds new patents and tracks their litigation history. The securities litigation database enables users to focus on the type of alleged violation, tracking the most relevant outcomes, and the trademark litigation database contains data for the legal issues and findings, damages and remedies in each case.

Analytics research is important for the plaintiffs’ bar as well. Bloomberg’s Legal Analytics platform is integrated with its enormous library of corporate data covering 70,000 publicly held and 3.5 million private companies. Counsel can survey a company’s litigation history, and the information is keyed to the underlying dockets. The data can be focused by jurisdiction or date, as well as to include or exclude subsidiaries. Lex Machina’s Comparator app can compare not only the length of time particular judges’ cases tend to take to reach key milestones but also previous outcomes, including damages awards and attorneys’ fees awards. A plaintiffs’ firm can use such data in cases where there are multiple possible venues to select the jurisdiction likely to deliver the most favorable result in the shortest time.

One bit of conventional wisdom that is commonly heard in the defense bar is that defendants should generally remove cases to federal court when they have the right to do so because juries are less prone to extreme verdicts and the judges are more favorable to defendants. Although comprehensive data on state court trial judges is still less common than data on federal judges, all four major analytic platforms can help evaluate courts and compare judges, giving a client a data-driven basis for making the removal decision.

Researching your opposing counsel is important for both defendants and plaintiffs. How aggressive is opposing counsel likely to be? Bloomberg Analytics covers more than 7,000 law firms, and enables users to focus results by client, date and jurisdiction. Is your opposing counsel in front of your judge all the time? If so, that can inform decisions like whether to seek of-right substitution of the judge or to remove the case. What were the results of those earlier lawsuits? Reviewing opposing counsel’s client list can suggest how experienced opposing counsel is in the area of law where your case arises. Lex Machina’s Law Firms Comparator also enables the user to compare opposing counsel to their peers and get an idea of what opposing counsel’s approach to the lawsuit is likely to be, comparing previous matters by open and terminated cases, days elapsed to key events, case resolutions and case results. In preparing this article, I reviewed a report generated by Lex Machina’s Law Firms Comparator and learned several things I didn’t know about my own firm’s practice. Ravel Law’s Firm Analytics enables counsel to study similar data about one’s opponent, focused by practice area, court, judge, time or proceeding—or all of the above. Firm Analytics also compares opposing counsel to other law firms in the jurisdiction, showing whether counsel appears before the trial judge frequently, and whether they tend to win (or lose) more often than comparable firms. All this information gives counsel a tremendous leg up in estimating how expensive the litigation is likely to be.

As you begin to develop the facts of a case, motions begin to suggest themselves. Is your client’s connection to the jurisdiction sufficiently tenuous to support a motion to dismiss for lack of personal jurisdiction, or for change of venue? Has the plaintiff failed to satisfy the Twombly/Iqbal standard by stating a plausible claim? Discovery motions to compel and for protective orders are commonplace, and inevitably defense counsel will face the question of whether to file a motion for summary judgment.

Ravel Law’s platform has extensive resources for motions research. For every federal judge, the system can show you how likely the judge is to grant, partially grant or deny more than 90 types of motions—not just the obvious ones like motions for summary judgment or to dismiss, but motions to stay proceedings or remand to state court, motions to certify for interlocutory appeal, motions for attorneys’ fees, motions to compel or for an injunction, and motions in limine. This can mean enormous savings in both time and money for your clients. Even where the facts suggest that a motion for summary judgment might be in order, that calculus can look very different when one learns that the trial judge has granted only 18 percent of the summary judgment motions brought before him or her since 2010.

Image courtesy of Flickr by Matthew Dillon (no changes).

The Court decided six tort cases from the First District between 2010 and 2020 – one from Division Three, four from Division Four and one from Division Five.  The Court decided 22 cases from the Second District.  There were four cases from Division One, three each from Division Three and Division Four, seven from Division Five, one from Division Six and two each from Divisions Seven and Eight.  There was one case from the Third District.  There were seven cases from the Fourth District: two from Division One, two from Division Two and three from Division Three.  There was one case from the Fifth District and three from the Sixth District.

All of the cases from Divisions Three and Four of the First District were reversed.  None of the cases from Division Five of the First District were reversed.  Division One of the Second District had a reversal rate of 75%.  The rate for Divisions Three and Four was 66.67%.  The reversal rate for Division Five was 42.86%.  The reversal rate for Divisions Six and Eight was 100%, and the rate for Division Seven was 50%.  The Third District had a reversal rate of 100%.  The reversal rate of Division One of the Fourth District was 50%.  The reversal rate of Division Two was 100%, and the rate of Division Three of the Fourth District was zero.  None of the decisions from the Fifth District were reversed, and one of three from the Sixth District was reversed.

Join us back here next time as we begin a new topic.

Image courtesy of Flickr by Kevin Gill (no changes).

The reversal rate for tort cases in 2010 was 50%.  In 2011 and 2012, all tort cases were reversed.  In 2013, no tort decisions were reversed.  In 2014, the reversal rate was 50%.  In 2015, the rate dropped to zero.  The reversal rate was 57.14% in 2016, 33.33% in 2017 and 60% in 2018.  All of the tort cases in 2019 were reversed.  In 2020, the reversal rate was 66.67%.

The Court decided 21 tort cases won by the defendants at the Court of Appeal and 19 cases won by the plaintiffs.  Defendants’ wins were reversed at a slightly higher rate than plaintiffs’ wins – 61.9% to 57.89%.

Join us back here next time as we complete our review of the tort cases.

Image courtesy of Flickr by Martha Jimenez (no changes).

This time, we’re tracing the California Supreme Court’s tort docket by Districts of the Court of Appeal for the years 2000 through 2009.  During the decade, the Second District had 26 tort cases on the Supreme Court’s docket – one from Division One, two from Division Two, seven from Division Three, three from Division Four, four from Division Five, three from Division Six and six from Division Seven.  Seventeen cases came from the Fourth District: eleven from Division Two, four from Division Three and two from Division One.  The First District contributed 16 cases: six each from Divisions Two and Four and two each from Divisions One and Three.  The Fifth and Sixth Districts contributed four cases apiece, and only two cases came from the Third District.

As for reversal rates in the First District, Division Two was reversed 83.33% of the time, while Divisions One, Three and Four each stood at 50%.  In the Second District, Divisions Two and Seven had 100% reversal.  Division Three was at 85.71%, while Division Six was at two-thirds.  Only a third of the cases from Division Four were reversed, and only one-quarter of Division Five’s were.  Division One had the best performance, with a reversal rate of zero.

The reversal rate for the Third District was 100%.  The Fourth District did very well – all the cases from Division Two were affirmed, only 18.18% reversal for Division One and 50% for Division Three.  The reversal rate for the Fifth District was 75%.  The rate for the Sixth was 50%.

Join us back here next time as we review the data for 2010 through 2020.

Image courtesy of Flickr by Martha Jimenez (no changes).

This time, we’re reviewing the data on the Court’s tort docket for the years 2000 through 2009.

The Court’s overall reversal rate in tort cases was down somewhat from the nineties.  Only 50% of the tort cases in 2000 were reversed.  The reversal rate was down to 44.44% in 2001 and 42.86% in 2002.  It jumped to 62.5% in 2003 and was 50% in 2004.  The reversal rate fell to 28.57% in 2005.  For the rest of the decade, the reversal rate mostly hovered around the 50% mark: 50% in 2006, 41.67% in 2007, 40% in 2008 and two-thirds in 2009.

The Court decided 33 tort cases between 2000 and 2009 which were won by the defendants at the Court of Appeal.  The Court decided 42 plaintiffs’ wins.  For the decade, only 36.36% of the defendants’ wins were reversed in whole or in part at the Supreme Court.  On the other hand, 54.76% of plaintiffs’ wins were reversed.

Next time, we’ll continue reviewing the data for the years 2000-2009.

Image courtesy of Flickr by Ken Lund (no changes).

Let’s wind down the week with a look at where the Court’s tort cases originated between 1990 and 1999, and what the reversal rate from each District was.

The Court decided 18 tort cases from San Francisco’s First District – two from Division One, six from Division Two, three from Division Three, five from Division Four and two from Division Five.  Another 42 cases came from Los Angeles’ Second District – seven cases from Division One, three from Division Two, five from Division Three, seven from Division Four, eight from Division Five, four from Division Six and eight from Division Seven.  The Court decided four tort cases from the Third District.  Seventeen cases arose from San Diego’s Fourth District: eight from Division One, three from Division Two and six from Division Three.  The Court decided seven cases from the Fifth District and five cases from the Sixth.

In the First District, Division Two fared best with only one-third of its decisions being reversed.  Half of Division Five’s cases were reversed, two-thirds from Division Three and all the decisions from Divisions One and Four.  In the Second District, Division One fared best with 42.86% of its decisions reversed.  The reversal rate of Division Seven was 62.5%.  Two-thirds of the decisions from Division Two were reversed.  Division Four had a reversal rate of 71.43%.  Division Three was 80% reversal and Division Five was 87.5%.  Half the decisions from the Third District were reversed.  Only a quarter of the decisions from Division One of the Fourth District were reversed, while two-thirds were in Divisions Two and Three.  All the decisions from the Fifth District were reversed, while the reversal rate for the Sixth District was 80%.

Join us back here next time as we review the data for the next decade, 2000 through 2009.

Image courtesy of Flickr by Ken Lund (no changes).

For the next few weeks, we’ll be taking a deep dive on the Supreme Court’s tort cases.  To begin, we’ll consider whether there is any relationship between the party which won at the Court of Appeal and the result at the Supreme Court in tort cases for the years 1990 through 1999.

We begin with the overall reversal rate, year by year, in tort cases for the decade.  In 1990, three-quarters of the Court’s tort cases were reversed.  The rate fell to 42.86% in 1991, rebounded to 55.56% in 1992, and rose to 80% in 1993 and 83.33% in 1994.  In 1995, the reversal rate was 66.67%.  None of the Court’s tort cases were reversed in 1996, but the rate was 70% in 1997, 50% in 1998 and 53.85% in 1999.

Between 1990 and 1999, the Supreme Court decided 35 tort cases won by the defendants at the Court of Appeal level.  The Court decided 58 cases won by the plaintiffs below.  The reversal rate for defendants’ wins was 45.71% – 16 reversals in 35 cases.  The reversal rate for plaintiffs’ wins was 67.24% – 39 reversals in 58 cases.
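Percentages like these are simple ratios of reversals to decided cases. A minimal sketch in Python, using the plaintiffs’ figures above:

```python
def reversal_rate(reversals, total):
    """Percentage of cases reversed, rounded to two decimal places."""
    return round(100 * reversals / total, 2)

# Plaintiffs' wins below, 1990-1999: 39 reversals in 58 cases
plaintiff_rate = reversal_rate(39, 58)   # 67.24
```

The same function reproduces any of the annual or per-division rates in this series from the underlying counts.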

Join us back here tomorrow as we continue our examination of the Court’s tort cases between 1990 and 1999.

Image courtesy of Flickr by Mike McBey (no changes).

On Thursday, September 16, Justice Mariano-Florentino Cuéllar announced he will be leaving the Court effective Friday, October 29, to become President of the Carnegie Endowment for International Peace.  Chief Justice Tani Cantil-Sakauye said in a statement that Justice Cuéllar’s “legal intellect, academic training, and life experiences brought an essential perspective to California’s highest court.”  See here for the Los Angeles Times’ report on Justice Cuéllar’s departure, including my comments.  We will mark Justice Cuéllar’s final day at the Court on the 29th with an analytics-driven review of his tenure.

Image courtesy of Flickr by Andrew Dupont (no changes).

As shown in Table 1591, the percentage of the Court’s criminal docket accounted for by final judgments and death penalty appeals remained quite high throughout the years 2010-2020.  In 2010, 60.27% of the docket was either from final judgments or death cases.  That rose into the seventies from 2011 to 2013 before falling back a bit to 67.27% in 2014.  For 2015, 70.45% were final judgments or death cases.  The next year, it was even higher – 76.92%.  The share fell over the next two years, to 57.14% in 2017 and 58% in 2018, before rising to 65.85% in 2019 and 61.9% last year.

For the entire thirty-one-year period (1990-2020), the Court has decided 1,722 criminal, quasi-criminal, juvenile justice and mental health cases.  Of those, 580 arose from final judgments and an additional 587 arose from death penalty cases.  So 33.68% of the docket arose from final judgments and 34.09% from death penalties – a total of 67.77%.
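These shares follow directly from the raw counts. A short Python sketch (the “other” category is simply the remainder, 1,722 minus the two listed counts):

```python
def docket_shares(counts):
    """Percentage share of each category in the total docket."""
    total = sum(counts.values())
    return {k: round(100 * n / total, 2) for k, n in counts.items()}

# Criminal docket, 1990-2020: 1,722 cases in all
counts = {"final judgment": 580, "death penalty": 587, "other": 555}
shares = docket_shares(counts)
# shares["final judgment"] == 33.68, shares["death penalty"] == 34.09

# Combined share computed from the raw counts, not the rounded components
combined = round(100 * (580 + 587) / 1722, 2)   # 67.77
```

Computing the combined share from the raw counts, rather than adding the rounded component percentages, avoids compounding rounding error.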

Join us back here next week as we turn our attention to a new issue.

Image courtesy of Flickr by Vahe Martirosyan (no changes).

For the past two weeks, we’ve been comparing the data for appeals from final judgments to the Court’s total caseload, addressing the notion that the Court is reluctant to get involved in cases which have not yet reached final judgment.  This time, we’re looking at the civil docket for the years 2010 through 2020.

The final-judgments share was fairly consistent from 2010 through 2018, starting out at 52.38% in 2010, dipping into the mid-forties for two years and then reaching 53.13% in 2013 and 56.52% in 2014.  The number was up and down over the next few years – 46.88% in 2015, 52.78% in 2016, 42.86% in 2017 and 51.52% in 2018.  Over the past two years, however, the share of the docket has dropped sharply.  In 2019, only 41.18% of the civil docket was from final judgments.  In 2020, only 31.03% was.

Across the entire period from 1990 to 2020, the Court has decided 1,319 civil cases.  Of those, 735 arose from final judgments in the trial court – a share of only 55.72%.

Join us back here next time as we wind up this part of our study with the data for these same years in the criminal docket.

Image courtesy of Flickr by Loco Steve (no changes).