Table of Contents

Acknowledgments
Introduction
I. The History of Efforts to Set Limits on Public Defender Workloads
  A. The 1973 National Advisory Commission Standards
  B. Weighted Caseload Studies
II. Background to ABA SCLAID’s Development of a Workload Study Methodology
III. Overview of ABA SCLAID Defender Workload Studies
  A. The System Analysis: The “World of Is”
  B. The Delphi Process: The “World of Should”
    1. The Research Team
    2. Local Support Necessary for a Successful Workload Study
    3. Working as a Team
    4. Participants
    5. Case Type/Case Task Selection
    6. The Structure of the Delphi Process
    7. The Report
IV. Lessons Learned and Open Questions
  A. Decisions in System Analysis
    1. Timekeeping
    2. The FTE and Caseload Methodology
  B. Decisions in the Delphi Process
    1. How Many Delphi Panels?
    2. How Many Case Types/Case Tasks?
    3. Trial v. Plea Analysis
    4. Survey Interface Options
    5. Structured Feedback
    6. Attrition
  C. Timeline for a Workload Study
  D. Costs of a Workload Study
V. Conclusion


This report1 provides an overview of efforts to quantify, with reliable data and analytics, maximum caseloads for public defenders.2 In particular, the report details the Delphi method as utilized by several major accounting and consulting firms, working with the American Bar Association Standing Committee on Legal Aid and Indigent Defense (ABA SCLAID), to establish jurisdiction-specific workload standards for public defense providers.

This report was authored by three individuals who each worked on the ABA SCLAID workload studies: Stephen F. Hanlon, who has served as the Project Director on all of the ABA SCLAID workload studies; Malia N. Brink, who edited the Rhode Island and Colorado reports and then served as Deputy Project Director on the Indiana Study; and Norman Lefstein, who consulted on all of the ABA SCLAID workload studies before his death in August 2019.

Stephen F. Hanlon founded the Community Services Team (CST) at Holland & Knight in 1989 and served as the Partner in Charge of the CST for the next 23 years. Since his retirement from Holland & Knight at the end of 2012, Mr. Hanlon has confined his practice to assisting and representing public defenders with excessive caseloads. He served as general counsel to the National Association for Public Defense and is a Professor of Practice at Saint Louis University School of Law. Mr. Hanlon was lead counsel for the Missouri Public Defender in State ex rel. Mo. Public Defender Commission, 370 S.W.3d 592 (Mo. 2012) (en banc), which was the first state supreme court case to uphold the right of a public defense organization to refuse additional cases when confronted with excessive caseloads.

Malia N. Brink has spent over 15 years working on criminal justice reform issues with a focus on public defense reform. She currently serves as the Counsel for Indigent Defense to ABA SCLAID. Before joining the ABA, Ms. Brink served as the Public Defense Project Director at the Justice Programs Office of American University and the Director of Institutional Development and Policy Counsel at the National Association of Criminal Defense Lawyers. Ms. Brink also serves as a Lecturer in Law at the University of Pennsylvania Law School.

Norman Lefstein was a renowned legal scholar and academic, whose 45 years of scholarship focused on indigent defense, criminal justice, and professional responsibility. In 2011, the ABA published his book, Securing Reasonable Caseloads: Ethics and Law in Public Defense. In addition, he played a major role as co-reporter in writing Justice Denied: America’s Continuing Neglect of Our Constitutional Right to Counsel, published by the Constitution Project in 2009. During the 1990s, Dean Lefstein was chief counsel for the Subcommittee on Federal Death Penalty Cases, and in this capacity, he directed the preparation of Federal Death Penalty Cases: Recommendations Concerning the Cost and Quality of Defense Representation, which was approved by the Judicial Conference of the United States. Professor Lefstein served as Dean of the IU McKinney School of Law from 1988 to 2002. Prior to becoming the law school’s leader, Professor Lefstein was a faculty member for 12 years at the University of North Carolina School of Law in Chapel Hill. Before moving into academia, Professor Lefstein served as director of the Public Defender Service for the District of Columbia, an Assistant United States Attorney in Washington, D.C., and as a staff member of the Office of the Deputy Attorney General of the U.S. Department of Justice. Early in his career, he directed a large-scale Ford Foundation research project in which legal representation was furnished to juveniles in three metropolitan cities.

The authors must acknowledge the tremendous work of the accounting and consulting firms with which ABA SCLAID partnered in conducting the public defense workload studies: RubinBrown, Postlethwaite & Netterville, and BlumShapiro.

Particular thanks are owed to past Chairman of RubinBrown, James Castellano, and the entire firm. In partnering on the first ABA SCLAID workload study in Missouri, RubinBrown conducted a comprehensive review of not only the use of the Delphi method, but also other potential methods, and developed a national blueprint for conducting these workload studies.

The authors also wish to acknowledge the members and staff of ABA SCLAID. Without their advocacy and support, the workload studies would not have been possible. Finally, the authors wish to thank the National Association of Criminal Defense Lawyers (NACDL) for partnering with us on the Rhode Island Project, as well as this report. Special thanks to Bonnie Hoffman, NACDL’s Director of Public Defense Reform and Training, for editing this report.


Introduction

In Gideon v. Wainwright,3 the United States Supreme Court recognized that the Sixth Amendment right to counsel extends to state criminal proceedings and that every person who stands accused of a crime requires the guiding hand of counsel to assist in their defense. In so doing, the Court declared defense counsel for the accused essential to ensuring fairness and equality in our criminal justice system.

[I]n our adversary system of criminal justice, any person haled into court, who is too poor to hire a lawyer, cannot be assured a fair trial unless counsel is provided for him. . . . That government hires lawyers to prosecute and defendants who have the money hire lawyers to defend are the strongest indications of the widespread belief that lawyers in criminal courts are necessities, not luxuries. The right of one charged with crime to counsel may not be deemed fundamental and essential to fair trials in some countries, but it is in ours. From the very beginning, our state and national constitutions and laws have laid great emphasis on procedural and substantive safeguards designed to assure fair trials before impartial tribunals in which every defendant stands equal before the law. This noble ideal cannot be realized if the poor man charged with crime has to face his accusers without a lawyer to assist him.4

The purpose of the Sixth Amendment cannot be achieved simply by requiring that a person with a law license stand by the side of the accused individual in court. Defense attorneys must provide effective and zealous assistance of counsel pursuant to prevailing professional norms, including the rules of professional conduct.5

In the 50 years since the Gideon decision, America’s public defense providers have struggled to meet these standards. Excessive caseloads are routinely identified as a root cause of the inability to meet standards, as detailed in the numerous national reports analyzing this crisis.7 In short, excessive caseloads stand as a core, overarching issue in public defense.8

In 2003, following the 40th anniversary of Gideon, the American Bar Association’s Standing Committee on Legal Aid and Indigent Defendants (ABA SCLAID) published a detailed report on the state of public defense9 nationally.10 The report was based on testimony submitted at a series of public hearings held around the country. It concluded that “too often . . . crushing workloads make it impossible for [public defenders] to devote sufficient time to their cases, leading to widespread breaches of professional obligations.”11 The report cited examples of excessive caseloads–often with lawyers handling more than 1,000 cases per year–in Maryland, New York, Pennsylvania, Rhode Island, and Nebraska.12

Five years later, in 2009, the National Association of Criminal Defense Lawyers (NACDL) published Minor Crimes, Massive Waste: The Terrible Toll of America’s Broken Misdemeanor Courts, a report looking specifically at misdemeanor courts.13 Again, it documented that defenders from across the country had massive caseloads, some handling in excess of 2,000 cases per year.14 More recently, a 2016 series of articles noted Kentucky defenders averaged 448 cases in 2015 and Missouri defenders often have upwards of 150 clients at one time.15 Data from the Texas Indigent Defense Commission indicates more than 300 private attorneys each accepted over 250 assigned cases in 2015, with a number of those attorneys accepting 500 cases or more.16 Notably, this is often in addition to the private, retained cases these attorneys also accepted during the same year.

The simple reality is that, regardless of talent and experience, a lawyer with too many clients cannot comply with their professional and ethical duties to each and every individual client
who is entrusted to them.17 For example, the Model Rules of Professional Conduct (Model Rules) require all attorneys–including public defenders18–to provide competent and diligent representation.19 Competence requires not only legal knowledge and skill, but the “thoroughness and preparation reasonably necessary for the representation.”20 An essential element of competence is “adequate preparation.”21

The ABA Defense Function Standards establish standards of practice for criminal defense lawyers.22 They include the need to investigate case facts,23 research the law,24 communicate
with clients,25 negotiate with prosecutors,26 file appropriate motions,27 and prepare for court.28 Importantly, defense attorneys must perform these tasks regardless of whether the case proceeds to trial or is resolved by a guilty plea.29 Defenders cannot do all these things for all clients when they have too many cases. By necessity, if a lawyer has too many cases, competent work for one client will result in unreasonable delays or a lack of work for other clients.30

Because excessive workloads create a grave risk of harm to clients, the Model Rules of Professional Conduct require lawyers to limit their workloads. Moreover, both ABA practice standards and Model Rules require that public defenders take steps to prevent or correct excessive workloads.31 “Continued representation in the face of excessive workloads imposes a mandatory duty to take corrective action in order to avoid furnishing legal services in violation of professional conduct rules.”32 Attorneys who face excessive workloads must withdraw from cases or refuse additional cases.33 The ABA Eight Guidelines of Public Defense Related to Excessive Workloads require that providers take corrective action in advance, “to avoid furnishing legal services in violation of professional conduct rules.”34

When an excessive caseload forces a lawyer to choose among the interests of clients, depriving some if not all of them of competent and diligent defense services, the situation also constitutes a conflict of interest.35 While the lawyer may be able to provide reasonably effective assistance of counsel and meet ethical obligations for some clients, doing so requires the lawyer to sacrifice duties owed and the provision of effective assistance to other clients. In this situation, “there is a significant risk that the representation of one or more clients will be materially limited by the lawyer’s responsibilities to another client[.]”36

Defenders with excessive caseloads inevitably engage in triage and clients suffer real harm as a result. For example, a prosecutor in Miami extended a time-sensitive plea offer of 364 days in jail and seven years of probation to a public defender for a client charged with a significant felony. Because the defender had 11 cases set for trial on the same day that the prosecutor required a response, the defender failed to convey the plea offer to the client. When she contacted the prosecutor after the deadline to accept the offer, the prosecutor said the offer was no longer available. The client later pleaded guilty under a significantly less favorable agreement, receiving five years in prison plus probation.37 For this client, the cost of an overburdened public defender was an additional four years of incarceration.

Excessive caseloads also prevent defenders from reviewing and assessing case weaknesses in a timely manner. Consider the case of Donald Gamble, who was charged with two counts of robbery and initially represented by a defender with too many cases. Mr. Gamble was detained pretrial and no progress was made on his case for a year, after which his public defender resigned her position. When a Loyola law professor subsequently accepted the case, she reviewed the file and quickly identified conflicts between security footage and other evidence in the case. She presented the conflicting evidence to the judge and, a few days later, the prosecutor dropped all charges.38 For Donald Gamble, the cost of an overworked public defender was a year in jail despite clear evidence of innocence.

In 2017, current and former Orleans Parish public defenders articulated their grave concerns regarding the inevitable errors caused by excessive caseloads on an episode of the television news show 60 Minutes. Nine defenders shared that they all believed innocent clients had gone to jail because the defenders lacked the time to properly represent them.39 An overburdened lawyer simply cannot meet the effectiveness standards of the Sixth Amendment right to counsel.

But how many cases are too many? For almost 45 years, the public defense community has struggled to develop reliable caseload/workload limits. Most recently, ABA SCLAID and others have utilized the Delphi methodology in seeking to provide meaningful limits for public defenders in particular jurisdictions. As of this writing, ABA SCLAID has contributed to studies completed in Colorado, Louisiana, Missouri, and Rhode Island, and consulted on a study completed in Texas.40 A workload study was attempted in Tennessee but failed for reasons that will be discussed below. An additional ABA SCLAID study, in Indiana, was completed while this report was being drafted.

This report will detail the methodology used in the ABA SCLAID workload studies and share lessons learned. The purpose of this report is to assist public defense organizations41 in determining whether they have the necessary infrastructure and resources to undertake similar studies, and to assist other research entities that may seek to conduct such studies. Part I of this report reviews the history of efforts to develop reliable workload limits for public defenders. Part II provides an overview of the Delphi method used by ABA SCLAID and its research partners. Part III delves deeper into the ABA SCLAID use of the Delphi method, looking at decisions made during implementation.

I. The History of Efforts to Set Limits on Public Defender Workloads

A. The 1973 National Advisory Commission Standards

In 1973, 10 years after Gideon, the National Advisory Commission on Criminal Justice Standards and Goals (NAC) endeavored to set the first public defender caseload limits (NAC Standards). The NAC adopted the recommendation of a National Legal Aid and Defender Association (NLADA) committee which stated that individual defenders’ annual caseloads should not exceed 150 felonies, 400 misdemeanors (excluding traffic cases), 200 juvenile cases, 200 mental health cases, or 25 appeals, or a proportional combination thereof.42 The NAC Standards were considered the national caseload standards for many years.
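The NAC’s “proportional combination” provision can be illustrated with a short sketch: each case type consumes a fraction of a full annual caseload, and a mixed docket is at the limit when those fractions sum to 1.0. The helper function below is hypothetical, written only to make the arithmetic concrete.

```python
# Hypothetical sketch of the NAC Standards' "proportional combination"
# rule: each case type consumes a fraction of a full annual caseload,
# and the fractions for a mixed docket should not sum to more than 1.0.

NAC_LIMITS = {            # maximum cases per attorney per year (NAC, 1973)
    "felony": 150,
    "misdemeanor": 400,   # excluding traffic cases
    "juvenile": 200,
    "mental_health": 200,
    "appeal": 25,
}

def caseload_fraction(caseload):
    """Fraction of a full NAC annual caseload a mixed docket represents."""
    return sum(count / NAC_LIMITS[case_type] for case_type, count in caseload.items())

# 75 felonies (half a felony load) plus 200 misdemeanors (half a
# misdemeanor load) amount to exactly one full annual caseload:
print(caseload_fraction({"felony": 75, "misdemeanor": 200}))  # → 1.0
```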

In 2002, the ABA promulgated its Ten Principles of a Public Defense Delivery System.43 Principle 5 of that document directs that a lawyer’s workload be “controlled to permit the rendering of quality representation.”44 The Commentary to the Principle states “[n]ational caseload standards should in no event be exceeded” and cites to the NAC Standards as an example of such national standards.45

In 2007, NLADA’s American Council of Chief Defenders (ACCD) adopted a resolution recommending “public defender and assigned counsel caseloads not exceed the NAC recommended levels.”46 An accompanying memorandum observed that while the NAC Standards “have proved resilient,” the NAC levels must be altered to account for myriad local practices, including insufficient support staff levels, complex or severe sentencing schemes, particularly complex cases, and the need to advise clients regarding collateral consequences.47

The exceptions and need for alterations noted in the ACCD’s resolution and accompanying memorandum highlight the limited usefulness of the NAC Standards. Because the Standards group cases together in very broad classifications (e.g., felony, misdemeanor, juvenile), they do not allow for consideration of the variation among case categories. For example, the “felony” classification encompasses everything from theft and drug possession to rape and murder. Additionally, other common case types, such as probation violations, are not listed at all. For this reason, the ACCD recommended “that each jurisdiction develop caseload standards for practice areas that have expanded or emerged since 1973 and for ones that develop because of new legislation.”48 One suggestion from the ACCD was to develop adjustments or case weights to address how different factors impact or alter the appropriate caseload standard in a particular jurisdiction.49

The Missouri State Auditor raised similar concerns and rejected the use of the NAC Standards in an October 2012 report. The Auditor was asked to review the Missouri Public Defender’s caseload crisis protocol, under which the public defender’s office sought to refuse cases when the caseloads were too high. In the protocol, the refusal limit was based substantially on the NAC Standards,
with adjustments made for things like complexity through a case-weighting scheme similar to what was recommended in the ACCD opinion.50 In rejecting the protocol, the Auditor noted that there was “very little information regarding the methodology and factors considered in the development of the [NAC S]tandards.”51 He further noted, as the ACCD had, that the NAC Standards “did not distinguish between types of felony offenses and were not established for certain types of cases.”52 The Auditor concluded that “[w]ithout adequate information to support how the national caseload standards were derived or maintaining documentation to support assumptions and decisions regarding case weights, the [Missouri State Public Defender] is unable to demonstrate it has accurately converted the standards to case weights.”53

B. Weighted Caseload Studies

Researchers have also explored weighted caseload studies as a way to set appropriate caseload limits. Like the ACCD recommendation, these studies establish case weights and adjustments from a norm, but instead of the norm being the NAC Standards, the norm was often the current caseload in the jurisdiction. Many public defense weighted caseload studies were conducted between
1995 and 2010 by the Spangenberg Group.54 The National Center for State Courts (NCSC)55 also conducted three statewide weighted caseload studies: Maryland (2005),56 New Mexico (2007),57 and Virginia (2010).58 RAND recently completed a statewide workload study in Michigan (2019).59 The federal defender system has also used a variation of case weighting to address caseloads.60

A review of the NCSC New Mexico workload study demonstrates how these studies typically
work. First, groups of public defenders and private contract lawyers met to determine relevant workload factors and tasks associated with effective representation in each type of case. For New Mexico, 11 different types of cases were chosen for the study.61 Second, the NCSC trained the participating lawyers on how to properly track and record time using a web-based program, and had attorneys record the time spent on both case-related and non-case related activities for a six-week period.62 The NCSC opined that this short period of time, given the high level of participation, was sufficient to produce “a valid and reliable snapshot from which to develop case weights.”63 The NCSC concluded from that data that attorneys were spending, for example, almost seven hours on each non-violent felony case64 and called this a “preliminary case weight.” The “case weight” data collected and documented what the attorneys were currently doing on cases operating under the caseload and time constraints they were currently experiencing. This represented current practice or the measure of the world of “what is.”

To move from that world of “what is”–which does not necessarily capture the time necessary to perform essential tasks effectively–to the world of “what should be,” the NCSC used a “quality adjustment process,” which consisted of two parts. First, a web-based “sufficiency of time survey” was sent to the attorneys asking whether they had sufficient time to perform key tasks of effective representation.65 Then, after data was gathered, veteran public defenders from offices across the state were convened to consider the results from the time study and various areas of concern identified by the sufficiency of time survey. In New Mexico, these attorney groups reviewed 90 distinct events related to attorney performance. Of these 90 decision points, quality adjustments were made to 21.66 In each instance in which a quality adjustment was made, the group was asked to provide a rationale and justify any increase in attorney time.67

“The combination of the case-related time study data (representing current practices, or ‘what is’) and the quality adjustment data (representing preferred practices, or ‘what should be’) creates a [final] ‘case weight’ for each case-type category.”68 The NCSC staff then determined the number
of days per year (233 days) and hours per day (6.25 hours) attorneys had to perform case-related activities. Applying these case weights to the time allotted, the NCSC then produced annualized caseload limits. At the conclusion of this process, the quality adjusted caseloads in New Mexico came out very close to the NAC Standards, with slightly fewer felonies allowed: 144 felonies (or 138 felonies, including murder), rather than the 150 in the NAC Standards, and more misdemeanors (414 in comparison to the NAC’s standard of 400) and juvenile cases (251 in comparison to the NAC standard of 200).69

Applying the case weights to the projected caseload of the public defender and dividing by the hours that an attorney has for case work, the NCSC was also able to produce a full-time employee equivalence staffing number.70 Based on those calculations, the NCSC study determined that the New Mexico Public Defender program needed 41 additional attorneys, an increase from 169 to 210.71
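The arithmetic behind these annualized limits and staffing numbers can be sketched in a few lines. The available-hours figures (233 case-related days per year, 6.25 hours per day) are the NCSC New Mexico numbers quoted above; the case weight used in the example is hypothetical, chosen only to show how a weight near ten hours yields a limit near the study’s 144 felonies.

```python
# Sketch of the weighted-caseload arithmetic described above. The
# available-hours figures (233 days, 6.25 hours/day) come from the
# NCSC New Mexico study; the case weight below is hypothetical.

def annual_case_hours(days_per_year=233, hours_per_day=6.25):
    """Case-related attorney hours available per year."""
    return days_per_year * hours_per_day

def annual_caseload_limit(case_weight_hours):
    """Maximum cases per attorney per year for a given case weight."""
    return annual_case_hours() / case_weight_hours

def fte_needed(projected_cases, case_weight_hours):
    """Full-time-equivalent attorneys needed for a projected caseload."""
    return projected_cases * case_weight_hours / annual_case_hours()

# With a hypothetical quality-adjusted felony weight of about 10.1
# hours, the annual limit lands near the study's 144 felonies:
print(round(annual_caseload_limit(10.1)))  # → 144
```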

In summary, weighted caseload studies start with the actual time spent, as calculated during a time study. This time study is the basis for all other calculations. An attorney survey then determines whether attorneys believe they have sufficient time to complete the tasks. Thereafter, a smaller, more experienced group of public defenders reviews the time allotment and, based on the survey results and personal experience, determines whether additional time should be added to “perform essential tasks and functions effectively.”72

While weighted caseload studies provide critical insights into how to conduct a more structured inquiry into public defender caseloads, ABA SCLAID and its partners found certain aspects of the weighted caseload studies problematic. First, the length of time studied often is insufficient. The NCSC New Mexico study, for example, used six weeks of time data, which is not long enough to cover even low-level cases from opening to close. Extrapolations made from such a limited sample are prone to error. Second, beginning consideration with current time expenditures as the basis for any of the calculations reinforces existing systemic deficiencies.73 Third, the time adjustment process was not tied to particular standards; instead, attorneys surveyed were simply asked to use their experience to determine whether they generally had enough time for a particular task. Similarly, the small group of experienced public defenders asked to evaluate the need for time adjustments operated without reference to standards. In this process, “individual lawyers might be reluctant to admit that they should have spent more time on their cases, regardless of whether the data was submitted anonymously. There is also some risk that defenders might not appreciate that they should have spent more time on their cases, simply because they have not done so in the past and believe what they have been doing is perfectly fine.”74

II. Background to ABA SCLAID’s Development of a Workload Study Methodology

A series of events in 2011 and 2012 drove ABA SCLAID to consider an alternative method
for examining defender workloads. In 2011, the ABA published Norman Lefstein’s Securing Reasonable Caseloads: Ethics and Law in Public Defense, which detailed the various efforts undertaken thus far and the difficulties with each of those efforts. In that same year, proceedings also began in the Missouri case State v. Waters.75

In Waters, relying upon the Missouri Public Defender’s protocol for providing a certification of unavailability once a district defender’s office exceeded the established caseload maximums, an office moved to be relieved of further representation of clients due to excessive caseloads. The office also cited to the obligations the attorneys bore under the applicable Rules of Professional Conduct and the obligation to provide effective assistance of counsel under the Sixth Amendment. Although the trial judge conceded that the public defenders’ caseloads were excessive, the court nevertheless ordered the office to continue representing new eligible defendants.76

On appeal, the Missouri Supreme Court found that the trial court exceeded its authority by appointing the public defender to represent a defendant in contravention of the Rules of Professional Conduct, the Sixth Amendment, and the Missouri Public Defender’s caseload maximum rule, and ordered the trial court to vacate its order.77 Thus, Waters stands for the proposition that when a public defender office can demonstrate that it has so many cases that
its lawyers cannot provide competent and effective representation to all their clients, lawyers may–indeed, must–refuse additional appointments and judges may not appoint them to represent additional defendants.

Although Waters was decided on July 31, 2012, in October 2012, as noted above, the Missouri State Auditor found the Missouri Public Defender’s caseload crisis protocol invalid because it was based substantially on the NAC Standards, which lacked sufficient support. As a result, the promise of Waters and its mechanism for declining cases was dependent on finding a more reliable way to measure whether a public defense provider has too many cases.

In the immediate aftermath of the Waters decision and Missouri State Auditor’s report, Stephen F. Hanlon, who was lead counsel for the Missouri State Public Defender (MSPD) in Waters,
was retained by MSPD to find a reliable alternative to using the NAC Standards to enforce the Waters decision. Mr. Hanlon investigated methodological options, including the recommendation made by Professor Norman Lefstein in Securing Reasonable Caseloads to consider utilizing the Delphi method. Mr. Hanlon then conducted an extensive literature review of the Delphi method, preliminarily concluding that it held promise for reliably determining appropriate defender caseloads. He then began to develop a potential plan for a Delphi-based public defender workload study.

Realizing that additional expert assistance would be required to design and conduct a workload study, Mr. Hanlon began a search for an accounting firm that could complete the requisite research and then design and conduct a workload study in Missouri. He identified RubinBrown, one of the nation’s leading accounting and consulting firms. He submitted the results of his literature review and his initial design work to RubinBrown and proposed to ABA SCLAID that it retain RubinBrown to:

  • Conduct a thorough literature review of previous public defender workload studies and the Delphi method;

  • Determine whether the Delphi method was a reliable research method capable of generating a reliable consensus of expert opinion for a workload study of the Missouri Public Defender, or identify an appropriate research method for setting appropriate workloads for a public defender office; and

  • Conduct a reliable workload study of the Missouri Public Defender system that would have, as its basis, ABA practice standards and the Rules of Professional Conduct.

After an exhaustive literature review, RubinBrown concluded that the Delphi method was a reliable research tool to determine the appropriate workload for a public defender office because it was capable of generating a reliable consensus of expert opinion. As Professor Lefstein had observed:

    The Delphi method is based on a structured process for collecting and distilling knowledge from a group of experts by means of a series of questionnaires interspersed with controlled opinion feedback. Delphi is used to support judgmental or heuristic decision-making, or, more colloquially, creative or informed decision-making. The technique is recommended when a problem does not lend itself to precise measurement and can benefit from collective judgments, which is precisely the situation when a defense program considers how much additional time its lawyers need to spend on a whole range of activities involving different kinds of cases.78

RubinBrown noted that the Delphi method, developed by researchers at the RAND Corporation over 60 years ago, “has been employed across a diverse array of industries, such as health care, education, information systems, transportation, and engineering.”79 “The purpose of its use beyond forecasting has ranged from ‘program planning, needs assessment, policy determination, and resource utilization.’”80 The ABA SCLAID public defender workload studies are, in many ways, program planning and needs assessment studies. Respecting the accuracy of opinions reached by Delphi panelists, RubinBrown observed that “researchers have found that the majority of studies provide compelling evidence in support of the Delphi method” as compared to “unstructured interacting groups.”81
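The iterative questionnaire-and-feedback structure Lefstein describes can be sketched in a few lines of code. This is a hypothetical illustration only: the panel estimates, the use of the median and interquartile range as feedback statistics, and the convergence threshold are all assumptions, not details of the ABA SCLAID studies.

```python
# A minimal, hypothetical sketch of the Delphi structure described
# above: anonymous estimates, statistical feedback, repeat. The
# feedback statistics (median, interquartile range) and convergence
# threshold are assumptions for illustration only.
import statistics

def summarize_round(estimates):
    """Return (median, interquartile range) for one round of estimates."""
    quartiles = statistics.quantiles(estimates, n=4)
    return statistics.median(estimates), quartiles[2] - quartiles[0]

def run_delphi(estimates_by_round):
    """Run successive rounds until the panel converges (small IQR)."""
    for round_num, estimates in enumerate(estimates_by_round, start=1):
        median, iqr = summarize_round(estimates)
        # In a real study, panelists would see this feedback (and the
        # rationales of outliers) before the next questionnaire round.
        if iqr <= 0.2 * median:  # assumed convergence criterion
            return round_num, median
    return len(estimates_by_round), median

# Hypothetical hours-per-case estimates narrowing over three rounds:
rounds = [[6, 9, 14, 20, 30], [8, 10, 12, 15, 18], [10, 11, 12, 12, 13]]
print(run_delphi(rounds))  # → (3, 12)
```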

    Thereafter, Mr. Hanlon, ABA SCLAID, and RubinBrown outlined the following essential features of a public defense workload study:

  • The professionals conducting the workload study must be the facilitators of the study, not the arbiters of what is “appropriate”. The governing principle with respect to the professional judgment of the Delphi panel must be “[l]et the chips fall where they may.”

  • The professional judgments must come from both public defenders82 and private practice criminal defense lawyers.

  • A successful workload study requires two areas of expertise: (1) surveying and data analysis, and (2) law and standards.

  • Legal, practice, and ethical standards, not the results of any timekeeping data, are the appropriate anchors83 for the professional judgments in the study.

  • In this standards-based inquiry, the standards that drive the study are the ABA Criminal Justice Standards,84 the applicable Rules of Professional Conduct,85 and the United States Supreme Court’s holding in Strickland v. Washington that an indigent criminal defendant is entitled to “reasonably effective assistance of counsel under prevailing professional norms.”86

  • In particular, the instructions to the adult criminal Delphi panel, which serve much the same function as jury instructions, would emphasize ABA Defense Function Standard 4-6.1(b): “In every criminal matter, defense counsel should consider the individual circumstances of the case and of the client, and should not recommend to a client acceptance of a disposition offer unless and until appropriate investigation and study of the matter has been completed. Such study should include discussion with the client and an analysis of relevant law, the prosecution’s evidence, and potential dispositions and relevant collateral consequences. Defense counsel should advise against a guilty plea at the first appearance, unless, after discussion with the client, a speedy disposition is clearly in the client’s best interest.”87

  • Timekeeping data should not be used as an anchor for the professional judgment of the Delphi panel, to avoid institutionalizing current practices, which may be deficient.

    Following these basic principles, ABA SCLAID began its work in Missouri in early 2013.

III. Overview of ABA SCLAID Defender Workload Studies

ABA SCLAID’s application of the Delphi method to public defender workload studies requires two steps: (1) a system analysis (the “world of is”) and (2) the Delphi process (the “world of should”). The system analysis data is ultimately compared to the workload standards as determined through the Delphi method to identify potential gaps, if any, in the current system.

A. The System Analysis: The “World of Is”

The system analysis is an examination of the current and historical workload of the public defense system under study. The system analysis should include staffing numbers and caseloads for public defense attorneys going back, if possible, at least three years.88 If the data cannot be gathered directly from the public defense organization, it may be available from the courts or other relevant agencies.

In states where the public defense organization has comprehensive oversight authority over all public defense providers and where all providers use case management systems that collect core data on caseloads, most of the data is relatively easy to collect. In states with less centralized systems, where public defense providers have little statewide oversight, use different case management systems with different data entry criteria, or use a large number of contractors or part-time public defense providers and do not gather information on their non-public defense caseloads, the data can be significantly harder to collect.

Additionally, to be most effective, this system analysis should include timekeeping data to show how current public defense attorneys are expending their time.89 Timekeeping data tracks time spent on specific tasks and the particular type of case for which each task is being done. Timekeeping data allows for a more robust understanding of the time public defense providers in the jurisdiction currently spend on case-specific work, and the time currently spent on other job requirements, such as administrative tasks, travel time, training, and supervision, which are not captured in the Delphi process. Timekeeping also allows for more granular comparisons between historical time expenditures and the time recommendations that result from the Delphi process.

Issues that arise during the System Analysis and how they were addressed during ABA SCLAID workload studies are discussed in detail in Section IV(A), below.

B. The Delphi Process: The “World of Should”

In ABA SCLAID workload studies, the Delphi process is utilized to determine the amount of time attorneys should spend on given cases. To measure this, the Delphi process leverages the expertise of participants–here, criminal defense practitioners in the relevant jurisdiction–to arrive at a consensus on two key decisions: (1) the amount of time attorneys should expect to spend, on average, on a given Case Task for a typical case of the particular Case Type to provide competent representation and deliver reasonably effective assistance of counsel under prevailing professional norms (Time), and (2) the percentage of cases of this Case Type in which the particular Case Task should be performed (Frequency).90

1. The Research Team

As noted above, a Delphi workload study is a two-skill-set project: one part law and standards, and one part surveying/data analysis. In the ABA SCLAID workload studies, ABA SCLAID is responsible for the law and standards applicable to the project, and the accounting and consulting firm (consulting firm or consultants) is responsible for the empirical research and data analysis. Together, these two groups form the research team. The public defense organization being evaluated also plays a critical role, particularly in helping to identify potential participants, encouraging participation, and answering questions with regard to practice in the jurisdiction.

RubinBrown served as the consulting firm on the first ABA SCLAID workload study, The Missouri Project, and they were instrumental in developing the blueprint on which subsequent studies have been based. The most important roles of the consulting firm are to design the surveying tool, conduct the requisite data analysis, and conduct the system analysis. Accordingly, the selected consulting firm needs to have experience in surveying, as well as data analysis and presentation. Importantly, the firm must have experience gathering and compiling data from various governmental sources. Often these skills do not exist in a single person. The most successful consulting partnerships on the ABA SCLAID workload studies have been with firms that have put together a team of people, including a lead consultant, a data/surveying expert, and a meeting facilitator.

The ABA SCLAID portion of the research team is responsible for the law and standards applicable to the study. In the workload studies completed to date, the key standards referenced have been the ABA Criminal Justice Standards91 and the jurisdiction’s Rules of Professional Conduct. The ABA SCLAID team for these projects has consisted of Stephen F. Hanlon and Norman Lefstein, assisted by ABA SCLAID staff. Mr. Hanlon and Professor Lefstein have long records as experts in standards and rules of professional responsibility relevant to these studies. The ABA SCLAID staff person for the most recent studies, Malia Brink, is also an attorney and expert in the area of public defense. Her role is to serve as the day-to-day point person on the project, maintaining the timeline and ensuring coordination between and among the different members of the research team. Other institutions and organizations undertaking a workload study should endeavor to include, on their law and standards group, individuals with substantial expertise in both the practice and ethical standards applicable to criminal defense attorneys in the study jurisdiction.

2. Local Support Necessary for a Successful Workload Study

While the public defense organization is not part of the research team, the cooperation and support of the organization is critical to the project’s success. The organization must provide expertise on the operations, personnel, and eccentricities of the criminal justice system in the jurisdiction. The research team must ensure that the public defense organization fully understands the research steps so that it can help identify potential problem areas. For this reason, the public defense organization participating in the project must have strong leadership and a demonstrable commitment to the goals of the study–to identify appropriate workloads and ensure that public defenders are limited to such workloads.

The public defense organization plays an integral role in the selection of the study participants.
If the panel participants are not appropriately expert in the areas covered by the Delphi process (e.g., if defense attorneys with knowledge of homicide cases are not included on a Delphi Panel that is tasked with developing standards for homicide cases among other adult criminal matters), the process will fail. Additionally, the public defense organization must either provide data for comparative analysis (timekeeping and/or historical staffing and caseloads) or help the research team identify available data sources and avenues for data collection. For example, caseload data may need to be collected directly from the courts and the public defense organization may serve a critical role in helping craft and facilitate the data request.

3. Working as a Team

Maintaining regular communication between and among the two parts of the research team, as well as with the public defense organization, has proved critical to the success of a workload
study. While each group has a distinct role to play, it is important that they discuss issues as they arise to ensure that a decision made by one group is not problematic for another. Imagine that the public defense organization was having trouble identifying qualified participants and decided that they should ask retired lawyers or lawyers who previously practiced in the jurisdiction, but have moved out of state, to participate. This decision is something that should be discussed by the entire research team because, for example, the data analysis consultants may find recent experience important and want to include only individuals who have practiced in the jurisdiction during the last three years. Similarly, if a court process changes during the data collection phase, how the new process will be recorded in timekeeping should be discussed with the entire research team to ensure that timekeeping data will still map well to the information obtained from the Delphi portion of the study.

Once issues are discussed among the groups and possible consequences identified, the decision must be left to the appropriate group. For example, the entire group might discuss what data to include on the Round 2 survey and how best to present the data to participants. The ultimate decision on this point would be made by the data analysis consultants, as this falls within their sphere of responsibility. Similarly, the group might discuss how much detail to provide regarding the applicable standards in a survey instrument to not overwhelm participants. The ultimate decision on this point would be made by the members of the team tasked with law and standards. Workload studies function best when the entire group communicates well and trusts each other to make appropriate decisions in their respective areas.

4. Participants

There are three groups of participants in an ABA SCLAID workload study: a Consulting Panel, which provides information on practice in the jurisdiction critical to the design of the survey instrument; a Delphi Panel, which is the group of attorneys who participate in the study; and a Selection Panel, which reviews and approves the final list of Delphi Panel participants.

An ABA SCLAID Workload Study begins with the convening of a Consulting Panel, which is a group of five to 10 highly experienced lawyers who assist the research team in understanding the local practice, which is critical to survey design. Although small, the Consulting Panel should be a diverse group consisting of public defenders92 and private defense attorneys from the jurisdiction who handle the types of cases to be studied. These individuals are usually identified with the assistance of the public defense organization. The purpose of the Consulting Panel is to identify the “Case Types” and “Case Tasks” to be studied.

The attorneys who participate in the study–taking the survey, etc.–are called the Delphi Panel. Initially identified with the assistance of the public defense organization and the members of
the Consulting Panel, the Delphi Panel is made up of local attorneys with significant defense experience in the particular area to be studied (e.g., appeals, juvenile, or adult criminal cases).93 The Delphi Panel should include a mix of all types of defense practitioners (e.g., institutional defenders, contract defenders, and private practitioners) utilized in that jurisdiction. Efforts should be made to assure that the Delphi Panel reflects the diversity of defense practitioners in the jurisdiction, including geographic, gender, racial, and ethnic diversity. The initial list of potential members of the Delphi Panel should include as many qualified lawyers in the jurisdiction as possible. Once the list of qualified Delphi panelists is compiled, a Selection Panel reviews it.

The Selection Panel is made up of reputable individuals in the jurisdiction who have extensive practical experience in the area of law to be covered in the workload study, in other words, lawyers whose credentials would be universally regarded as expert, and who have extensive knowledge of the members of the bar practicing in this area in the jurisdiction. They may be judges, law school deans, or well-respected defense attorneys. In the ABA SCLAID studies, the Selection Panel usually has been made up of three to five such individuals, which was a manageable group, but also sufficient to include representation from across the study jurisdiction. The Selection Panel members are each provided a list of proposed Delphi Panel participants. They can either meet, in person or by phone, or review the list separately without meeting. Each Selection Panel member may strike any name from that list if he or she (1) has actual knowledge of the individual and (2) does not believe the person has the expertise or experience necessary to participate. Each Selection Panel member may also add qualified individuals to the list. Once the Selection Panelists’ strikes and additions are incorporated, the list of participants (the Delphi Panel) is finalized.

5. Case Type/Case Task Selection

The first step in developing the survey tool used in the Delphi process is to establish the relevant Case Types and Case Tasks that will be surveyed.

Case Type is a way to group offenses of roughly similar complexity. Examples of Case Types include: murder/homicide, sex felonies, juvenile delinquency cases, and probation violations. While it is understood that, within a Case Type, case complexity can vary greatly, these groupings help create overall categories of cases that share similar complexity and types of tasks that are performed during representation.

Case Task is a way to group common tasks performed by an attorney. Examples of Case Tasks include: client communication, discovery, attorney investigation, and motions/other writing.

In the ABA SCLAID studies, the Case Types and Case Tasks are developed by the Consulting Panel during an in-person meeting with the research team. At this meeting, the Consulting Panel is asked to break down their practice area(s) into Case Types that they would naturally group together. They usually develop a list of approximately eight to 10 Case Types. For a process addressing adult criminal cases, for example, it is common for Consulting Panels to break felony cases into low-level felonies, mid-level felonies, high-level felonies, and murder cases. In some studies, Consulting Panels also have separated out drug cases or sex crime cases as distinct Case Types.

The Consulting Panels then break down their work into Case Tasks. These Case Tasks must fairly encompass all of the work that they should perform as defense attorneys. As noted above, common Case Tasks include communication with clients, discovery, court time, and motions/other writing. The Consulting Panel must then define these Case Tasks to ensure that there is minimal overlap, so that it is clear where time spent on different common tasks would be allocated.

The proper identification of Case Types and Case Tasks is crucial, as it will form the basis for the subsequent determination of the workload standards (e.g., the number of minutes or hours it should take an attorney to conduct legal research in a sexual assault felony case).

6. The Structure of the Delphi Process

In ABA SCLAID workload studies, the Delphi process is an iterative study of the time and frequency associated with Case Tasks for each of the identified Case Types. In other words, the process is repeated several times with the results of each round of the process informing and shaping the next round. In the ABA SCLAID studies, the Delphi process consists of three rounds – Rounds 1 and 2 are conducted via online surveys, while Round 3 is conducted as an in-person meeting.

As structured for ABA SCLAID workload studies, the online surveys in Rounds 1 and 2 begin with an explanation of the standards and ethical rules applicable to the study. This overview orients the participants to the prevailing professional norms on what constitutes reasonably effective assistance of counsel and makes clear that these standards should anchor their responses. The Delphi panelists are then asked if they have the requisite experience to respond to questions about each Case Type. For each Case Type for which they have the requisite experience, the survey then asks the Delphi panelists to provide an estimate of the amount of time an attorney should spend on a given Case Task (time), and the percentage of cases in which the Case Task should be performed (frequency). For example, a participant will be asked within the context of an identified Case Type (e.g., felony sexual assault cases) to estimate (1) the amount of time that an attorney handling such a case should spend on motions practice and (2) the percentage of felony sexual assault cases in which an attorney should conduct motions practice.

In each of the online survey rounds (Rounds 1 and 2), the Delphi Panel participants are instructed to complete the survey without consulting any other participant or member of the defense community. All survey distribution, collection, and analyses are completed by the consultant. Once the results are received from Round 1, the Delphi Panel’s responses are trimmed and summarized, and these trimmed summaries are presented to the panelists in the Round 2 survey (structured feedback).94
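The report does not prescribe a particular trimming rule here; the sketch below illustrates one common approach, a symmetric trim that drops a fixed fraction of the highest and lowest responses before summarizing. The function name, trim fraction, and sample data are hypothetical.

```python
def trimmed_summary(responses, trim_fraction=0.1):
    """Drop the top and bottom trim_fraction of responses, then summarize.

    The 10% symmetric trim is illustrative only; the actual rule used in
    a given study is chosen by the data analysis consultants.
    """
    values = sorted(responses)
    k = int(len(values) * trim_fraction)  # responses to drop from each tail
    trimmed = values[k:len(values) - k] if k > 0 else values
    return {
        "low": trimmed[0],
        "high": trimmed[-1],
        "mean": sum(trimmed) / len(trimmed),
    }

# Hypothetical Round 1 time estimates (hours) for one Case Task; the
# lowest (2.0) and highest (30.0) responses are dropped before summarizing.
round1_hours = [2.0, 3.5, 4.0, 4.0, 4.5, 5.0, 5.0, 5.5, 6.0, 30.0]
print(trimmed_summary(round1_hours))  # {'low': 3.5, 'high': 6.0, 'mean': 4.6875}
```

In Round 2, a panelist would see a summary like this alongside their own Round 1 answer before responding again.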

Round 3 of the Delphi process is conducted as an in-person meeting of participants. At this meeting the panel is provided trimmed summary data from the Round 2 survey (structured feedback). This meeting is facilitated by the consultants and attended by the ABA SCLAID team, who orient the panel members to the applicable law, standards and ethical rules that anchor the study. The panel members then discuss each question and come to consensus on the appropriate Case Type/Case Task time and frequency. Members of the research team do not have any input into these decisions.

It should be noted that to participate in each successive Round, the participant must complete each prior Round. In other words, Delphi panel participants who do not complete the Round 1 survey
are not invited to participate in the Round 2 survey. Participants who do not complete the Round 2 survey are not invited to participate in the Round 3 meeting.

The group’s consensus decisions made in Round 3 are then used to calculate the results of the workload study. The consultants calculate the time needed by Case Type by multiplying the case time by the case frequency number for each Case Task, and then adding up the resulting time for all Case Tasks for each Case Type. This calculation is the amount of time needed for a typical case of the Case Type, which can be used to calculate overall workload standards.95 Examples of this from ABA SCLAID workload studies include:

[Image: example calculations of time needed per typical case from ABA SCLAID workload studies]
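The arithmetic described above can be sketched in a few lines. The Case Tasks, hours, and frequencies below are hypothetical and are not results from any ABA SCLAID study.

```python
# Hypothetical Round 3 consensus values for a single Case Type.
# "time" is hours per case; "frequency" is the share of cases in which
# the Case Task should be performed. Illustrative numbers only.
consensus = {
    "client communication":   {"time": 4.0, "frequency": 1.00},
    "discovery":              {"time": 6.0, "frequency": 0.90},
    "attorney investigation": {"time": 5.0, "frequency": 0.60},
    "motions/other writing":  {"time": 8.0, "frequency": 0.50},
}

def hours_per_typical_case(consensus):
    """Sum, over all Case Tasks, of consensus time x consensus frequency."""
    return sum(v["time"] * v["frequency"] for v in consensus.values())

# 4.0 + 5.4 + 3.0 + 4.0 = 16.4 hours for a typical case of this Case Type
print(round(hours_per_typical_case(consensus), 1))
```

Multiplying this per-case figure by the jurisdiction's annual caseload for the Case Type yields the total attorney hours needed, which is the basis of the overall workload standard.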

Importantly, as noted above, in ABA SCLAID workload studies, the Delphi Panel is never shown the results of the actual timekeeping or other historical data. A professional judgment was made by RubinBrown during the first workload study that the Delphi Panel’s task is to determine the “World of Should,” not the “World of Is.” There is no nexus between the two worlds. The data analysis consultants on each ABA SCLAID workload study since the Missouri report have agreed with this judgment.96

7. The Report

Once both the system analysis and the Delphi process have been completed, the research team collaborates to produce a report that contains an explanation of the study’s methods and results, including the resulting workload standards, statistics, and conclusions. To date, ABA SCLAID, together with different consultants, has completed workload studies in Missouri,97 Louisiana,98 Colorado,99 and Rhode Island.100 RAND, in conducting a review of recent public defender workload studies in preparation for its efforts to conduct a study in Michigan, included all four of the ABA SCLAID studies among those that “provided well-tested models” for conducting a public defender workload study.101 During the drafting of this report, an additional study was completed in Indiana.102

IV. Lessons Learned and Open Questions

This section reviews and explains some of the more nuanced decisions made by the research teams in applying the Delphi method over the course of completing ABA SCLAID public defense workload studies. The review focuses on key decision points in the process and the results of those decisions.

A. Decisions in System Analysis

A number of complicated issues arise when completing the system analysis for a workload study. As noted above, the system analysis is intended to provide an accurate picture of public defense as it exists in present conditions–the “World of Is.” When measured against the results of the Delphi process, it allows for a determination of whether deficiencies exist and the extent of those deficiencies, i.e., how much more attorney time (and potentially additional attorneys) would be needed to lower caseloads to the point necessary to provide reasonably effective assistance of counsel pursuant to prevailing professional norms. Addressing issues in system analysis and successfully obtaining accurate comparison data is critical because without it, while the public defense organization may know what it should do, it will not know what additional resources may be needed to achieve this goal.103

The most challenging aspect of the system analysis is gathering data from multiple sources
and determining how to combine that data for analysis. Generally, public defense is provided in three ways: institutional public defenders, who are usually full-time salaried employees; contract defenders who take a fixed number or percentage of the public defense cases in a jurisdiction
for a fixed fee or hourly rate; and attorneys who are assigned individual cases and paid on an hourly or per case basis. Public defense in any given jurisdiction often is provided by multiple organizations and even multiple systems. Typically, there is a primary system (e.g., a full-time public defender office) and a secondary system for conflicts and overflow cases (e.g., an assigned counsel system). In some jurisdictions, public defense is organized statewide and in others it is organized at the county level. With regard to the system analysis, the use of varied systems can pose challenges. Each provider or system may keep records differently and those records may be difficult to combine.

Even when a state operates a statewide public defense system with primary reliance on institutional public defender offices, there can be variations among offices that can impede data integration efforts. In Tennessee, for example, there is a process whereby a defendant in a felony case is arraigned in one court and then, if the case is not resolved at arraignment, transferred to another court.

In one public defender office, this was counted as a single case and tracked across the transfer from one court to another. In another office, however, it was counted as two cases. This and other inconsistencies in data gathering made it difficult to compute reliable historical caseloads in that state.

In each jurisdiction, the research team must work together early in the study process to develop a comprehensive understanding of the following: (1) the existing public defense system in the study jurisdiction; (2) what records and data exist; and (3) who controls the records/data identified. Using this information, the data analysis consultants must determine how best to calculate the historical caseload for the jurisdiction and the current personnel capacity of the jurisdiction. The Research Teams on the ABA SCLAID workload studies have used two ways of conducting the system analysis: timekeeping and the full-time equivalent (FTE) analysis.

1. Timekeeping

When attorney time can be captured with a high degree of consistency and quality, it remains the best way of understanding the “World of Is,” i.e., how many public defenders are spending how much time on current cases. Timekeeping provides an unsurpassed level of detail in that, when done correctly, it tracks time not only by case (which can then be matched to the Case Types chosen for the Delphi study), but also by task (which can be matched to the Case Tasks chosen for the Delphi study). In other words, it can be used to see time deficiencies by Case Type and by Case Task. Timekeeping data is also very useful for management purposes after the Delphi study is concluded.

However, timekeeping has long been resisted in the public defense community. As a result, timekeeping with sufficient accuracy and consistency to allow for reliable comparisons has proven difficult in several jurisdictions.

Where timekeeping is undertaken, the public defense organization must either (1) already have
in place a functioning case management information system that includes a timekeeping system so that appropriate data collection can occur, or (2) put a timekeeping system in place before the study begins. Where timekeeping exists in the system, it is necessary to look carefully both at the categories of time collected and compliance rates before beginning the data collection period. It may be necessary to add categories and/or engage in retraining to help assure consistency in how tasks are recorded and to increase the rate of participation among defense attorneys. This will help maximize the usefulness of the timekeeping data for the workload study.

Instituting timekeeping for the purposes of a study can be problematic. There is often resistance, particularly among overworked public defenders who find this new administrative task itself time consuming. Resistance to timekeeping can result in resistance to the workload study itself if the study is blamed for the timekeeping changes. Moreover, it is not easy to go from never having kept time to keeping time with the kind of accuracy and consistency required for a study. Based on the experience of the four ABA SCLAID workload study research teams, if a jurisdiction is instituting timekeeping for the study, it must be put in place at least six months before the data collection period can begin. This allows the attorneys, most of whom likely have never kept time in fractions of an hour before, to be trained to enter time in case-specific categories and to ensure they enter their time accurately and consistently. This is an enormously challenging task. During this six-month period before data collection, timekeeping efforts must be monitored closely to determine whether time is being entered consistently.

a) Consistency in Timekeeping

One common problem that occurs in timekeeping is that different defense providers, or even different attorneys, record time differently. For example, if one office (or attorney) places time related to hearings on discovery motions under a time code called “Discovery” and another places it under a time code called “Court Time,” this inconsistency will cause inaccuracies. This is the type of issue that must be addressed in a training period and why, even where timekeeping is in place before a study begins, it is crucial to conduct a review of timekeeping processes and compliance before a study begins.

When timekeeping is undertaken, the public defense organization must ensure that the data measurement processes and practices for timekeeping are consistent across the system being studied. In other words, the protocols for timekeeping must be clear, and then the protocols must be followed by all of the attorneys. Training and supervision are critical to producing reliable data about actual time spent on cases. As this is a very hands-on, day-to-day activity, it must be the responsibility of the public defense organization.

b) The Timekeeping Collection Period

A timekeeping study should include a considerable period of reliable timekeeping data. During the timekeeping period, a significant number of cases will both begin and end. The longer the data collection period, the more reliable the data will be because it will capture more cases from beginning to end. Cases that begin and/or end outside the data collection period require separate analysis by the data analysis team. This analysis uses estimates and multipliers based upon prior years’ caseload data and the data gathered during the timekeeping period to draw the inferences about the time needed to complete the cases.104

For timekeeping analysis to be successful:

  • A significant percentage of the time reported (preferably above 50-60%) must be reported as case-specific time, i.e., time entries that are made in specific cases. If public defense attorneys do not get a significant percentage of their time recorded in case-specific entries, the data analysis consultants have trouble drawing legitimate inferences about the amount of time required for each Case Task in each Case Type.

  • At least 90% of total attorney time must be reported by public defense attorneys in the appropriate category. The data analysis consultants cannot make valid inferences about the entity studied (e.g., statewide, local, or regional offices) without a full understanding of how attorneys spend their time.

Periodic reviews of all time entry data (at least every 30 days) should be carried out to catch errors and omissions, and to maintain compliance rates at the needed levels.
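The two compliance thresholds above lend themselves to a simple periodic check over the recorded time entries. The entry format and category labels in this sketch are hypothetical, not the actual codes used in any study.

```python
# Hypothetical time entries: (attorney, hours, category). The category
# labels below are illustrative placeholders.
entries = [
    ("A", 30.0, "case_specific"),
    ("A", 8.0,  "general"),
    ("B", 25.0, "case_specific"),
    ("B", 15.0, "case_related"),
]

def compliance_rates(entries, expected_total_hours):
    """Compute the two compliance measures described in the text."""
    recorded = sum(hours for _, hours, _ in entries)
    case_specific = sum(h for _, h, cat in entries if cat == "case_specific")
    return {
        # share of recorded time entered against specific cases (target: 50-60%+)
        "case_specific_share": case_specific / recorded,
        # share of expected attorney time actually recorded (target: 90%+)
        "reporting_rate": recorded / expected_total_hours,
    }

rates = compliance_rates(entries, expected_total_hours=80.0)
print(rates)  # case_specific_share ~ 0.705, reporting_rate 0.975
```

Run at least every 30 days during the collection period, a check like this flags offices or attorneys falling below the thresholds while retraining can still help.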

The determination of what period of time is required to make the necessary inferences is a question for the data analysis consultants. In Missouri, the study looked at a 25-week period of timekeeping,105 and in Colorado, the timekeeping period was 16 weeks.106 Six months of timekeeping should be sufficient, but a longer period would be preferable as it would increase data reliability.

c) Conclusions on the Use of Timekeeping

As noted above, timekeeping data, even if accurate and consistent, is never shown to a Delphi panel under the ABA SCLAID research method. The use of timekeeping as the principal anchor has a high risk of institutionalizing current bad practices. Timekeeping, assuming it is accurate, will reflect current practices, but current practices may be severely deficient. This cannot be determined until after the Delphi study is complete. To suggest that actual time amounts are relevant is, in essence, to prejudge the situation for public defense attorneys in the jurisdiction. For this reason, providing actual time data is antithetical to the purpose of the Delphi workload study, which is to determine how much time a lawyer should require to conduct a particular task in a particular type of case to comply with applicable standards.

Instead, the applicable law and standards are the principal anchor for the consensus professional judgment of the Delphi Panel. The instructions to the Delphi panel identifying the law and standards as that anchor serve much the same function as jury instructions, guiding each panel member's exercise of professional judgment.

2. The FTE and Caseload Methodology

Given the challenges of timekeeping, it was critical to develop an alternative method for completing the system analysis. This was done first in the Louisiana workload study.

The implementation of timekeeping in Louisiana proved problematic. First, as noted previously in this section, the implementation of timekeeping solely for the purposes of a workload study can be challenging.107 In the case of Louisiana, the timekeeping data showed that only 35.6% of the attorney time was recorded as related to a specific case.108 Most of the time was recorded in the broader categories of case-related or general work-related. When the data analysis consultants reviewed the case-related time entries, they determined that “71 percent (25,159 total hours) of Case Related time was spent on Delphi Case Tasks.”109 In other words, it was work that should have been recorded as case-specific but was not. Based on this finding, the data analysis consultants determined that the timekeeping data “underestimate[d] the Case Specific time spent on legal representation of clients on specific cases by public defenders during the analysis period.”110 For this reason, they determined that the timekeeping records “provided insufficient detail” to perform the requisite analysis.111

Instead, the consultants in Louisiana developed an alternative method of estimating the time spent on cases. They looked at historical personnel employment data for attorneys in the Louisiana public defender offices and converted the total attorney personnel to full-time equivalents (FTEs). The consultants then assumed that each FTE spent 2,080 hours112 annually on case work. In other words, the “FTE calculation conservatively assumes all hours are allocated to the legal representation of annual workload, without consideration for continuing education requirements, administrative tasks, vacation, etc.”113 While this method does not allow for a comparison of timekeeping at the Case Type or Case Task level, it does allow for a comparison of total attorney time available, based on FTE and caseloads, to total needed attorney time at the system level, based on the Delphi panel results and caseloads.

Multiplying the projected caseload (obtained by analysis of the recent historical caseloads in the jurisdiction) by the time needed by Case Type (as determined by the Delphi panel), produces the hours needed annually to provide reasonably effective assistance of counsel pursuant to prevailing professional norms. Dividing that number of hours by 2,080 (the estimated number of hours a single FTE works annually) produces the number of FTEs needed to provide defense services. The resulting number of FTEs can be compared to the number of FTEs currently in the system to calculate whether an attorney staffing deficit exists.
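The arithmetic described above can be sketched as follows. The caseload and per-case hour figures are hypothetical, not drawn from any study; only the 2,080-hour FTE divisor comes from the methodology described in the text:

```python
ANNUAL_HOURS_PER_FTE = 2080  # conservative assumption used in the FTE methodology

# Hypothetical projected annual caseloads and Delphi-derived time estimates
# (hours per case) for three illustrative Case Types.
projected_caseload = {"felony": 12000, "misdemeanor": 30000, "juvenile": 4000}
hours_per_case = {"felony": 35.0, "misdemeanor": 10.0, "juvenile": 20.0}

def fte_deficit(caseload, hours, current_ftes):
    """Compare FTEs needed (Delphi hours x caseload, divided by 2,080)
    to the FTEs currently in the system."""
    needed_hours = sum(caseload[t] * hours[t] for t in caseload)
    needed_ftes = needed_hours / ANNUAL_HOURS_PER_FTE
    return needed_ftes - current_ftes

# A positive result indicates an attorney staffing deficit.
deficit = fte_deficit(projected_caseload, hours_per_case, current_ftes=150)
```

Because every FTE hour is assumed to go to case work, with nothing set aside for training, administration, or leave, the computed deficit is a floor, not a ceiling.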

Using this methodology, the study in Louisiana concluded that “the Louisiana public defense system is currently deficient 1,406 FTE attorneys.”114 At that time, the Louisiana public defense system employed 363 FTE attorneys. The FTE methodology was also used in Rhode Island,115 with the study determining that the Rhode Island Public Defender “is currently deficient at least 87 FTE attorneys.”116 At the time of the study, the Rhode Island Public Defender employed 49 FTE attorneys.

One of the most challenging aspects of the FTE method of system analysis is determining, with the requisite degree of accuracy, the number of FTEs historically in the public defense system. In Louisiana, this was relatively easy because the Louisiana public defense system operates statewide and has authority over personnel records.117 The data analysis consultants were able to obtain accurate historical FTE calculations using compensation reports and other reports available from the public defender.118 The same was true in Rhode Island.119 However, in other systems, determining FTEs can be more complex, particularly in systems that are organized at the county level and/or make significant use of assigned counsel or contract attorneys.

It is also critical to understand that FTE analysis generally produces a conservative calculation of deficiencies because it assumes that all attorney time (2,080 annual hours per FTE) is devoted to case-specific work.120

B. Decisions in the Delphi Process

As with the system analysis, several questions can arise during the Delphi process. This section explains some of the questions that have arisen in the ABA SCLAID workload studies and how the Research Teams chose to address them.

1. How Many Delphi Panels?

Initial workload studies, such as the one completed in Missouri, utilized a single Delphi panel. The panel was asked to address Case Types that covered all, or almost all, of the types of cases in which public defense attorneys provided representation, including misdemeanors, homicide/murders, juvenile cases, appeals, and special writs.121

Using a single Delphi panel for a broad range of Case Types presents some problems. First, it
may not accurately reflect how most public defense attorneys practice. While the same attorney may represent clients in misdemeanor and felony cases, it is relatively rare that such attorneys also take appeals and writs. As a result, many participants in the Delphi panel may only be able
to answer questions regarding one Case Type, e.g., appeals. As appellate attorneys, they likely would not have the requisite experience in any other type of case to participate in those sections of the survey. Convincing such individuals to participate in a survey in which they must decline 90% of the questions as not relevant to their practice is difficult. Convincing these same attorneys to complete the process, including taking a full day to attend the Round 3 in-person meeting, is very difficult. Yet, their participation is critical to achieving accurate results regarding the one Case Type for which they have the requisite experience.

Second, research teams realized that having only one Case Type in specialist areas, such as appeals and juvenile cases, might not provide the level of distinction necessary for these specialist practitioners to make accurate time estimates. For example, a juvenile defender has a difficult time envisioning a typical juvenile case when such cases range from status violations to serious assaults and even murder. This concern led to an increase in the number of Case Types. For example, in the Colorado workload study, there were 18 Case Types, including three juvenile Case Types. This allowed the panel to distinguish between juvenile misdemeanors, juvenile felonies, and juvenile sex offense cases. However, with 18 Case Types, the number of questions the Delphi panel was required to address resulted in an exceptionally long survey.122 While the level of detail was desirable, the process became unwieldy. On closer examination, the research team observed that the Colorado Case Types and Case Tasks both included categories specific to juvenile practice. This observation, along with the success of specialty panels in Texas, led ABA SCLAID to consider using separate Delphi Panels and surveys for specialty practice areas, e.g., appeals and juveniles.

For each workload study jurisdiction, it is worth considering how many Delphi Panels to use. In thinking about whether separate panels would be useful, consider:

  • Are there areas in which criminal defense lawyers generally operate as specialists, taking one type of case but not others?

  • Do defense lawyers with cases in juvenile court typically also practice in adult criminal court in this jurisdiction?

  • Do appellate defense lawyers also typically do trial work in this jurisdiction?

If a tendency toward specialization emerges, a separate Delphi Panel might be an appropriate way to retain a larger number of Case Types/Case Tasks without overly taxing participants.

In the Indiana workload study, the research team convened four separate Delphi panels: (1) Adult Criminal; (2) Juvenile; (3) Appeals; and (4) Children in Need of Services/Termination of Parental Rights. Conducting multiple panels allowed the study to address more case types. The Indiana juvenile Delphi survey, for example, broke juvenile cases into six case types ranging from status cases to homicide cases. Additionally, at the in-person meeting of the juvenile Delphi panel, most respondents had the experience to address most Case Types, and could therefore participate in a greater portion of the discussion.

Dividing the panels need not exclude the participation of generalist practitioners. If a lawyer has significant experience in both adult criminal cases and juvenile cases, for example, he or she could participate in both panels. In the Indiana study, however, there were only a few participants invited to serve on multiple Delphi Panels. This suggests that the use of more specialized panels may, in fact, better reflect how public defense attorneys practice, at least in Indiana.

The question of how many Delphi panels to use should be considered in each study jurisdiction. The use of more specialized Delphi panels has the potential to increase the level of detail reached in workload studies; however, additional panels require additional consulting time for survey design and data analysis, as well as additional in-person meetings, all of which is likely to raise costs significantly.

2. How Many Case Types/Case Tasks?

A related issue that arises early in the survey development process is determining the appropriate number of Case Types and Case Tasks for each Delphi panel. At one extreme, why not simply have participants answer for each individual type of case (assault, robbery, burglary, trespass, etc.) and each task they undertake (preliminary hearings, client interviews, discovery motions, etc.)? At the other extreme, why not group all felonies together, as is done in the NAC Standards? The answers, when posed at the extremes, seem obvious. The overly detailed survey would be unmanageable, creating far too many survey questions to answer. The overly broad survey would make it difficult for the defenders to envision a typical case clearly enough to answer the survey questions.

To a large extent this is a decision that must be made by the Consulting Panel, which, as noted above, is the group of experienced attorneys–public defenders and private practice criminal defense lawyers–who use their expertise in the jurisdiction to select the best groupings of Case Types and Case Tasks. The Case Types must cover, to the extent possible, the full range of public defense practice, and be grouped in a way that allows the Delphi Panel to understand and identify a typical case of each type. Similarly, the Case Tasks must group aspects of a lawyer’s work on a case logically and with sufficient detail to cover almost all of the work done on a case, but not create so many tasks that answering about each of them becomes overly onerous. At the end of the day, the Delphi Panel must, in each survey and during the in-person meeting, go through each Case Type and determine the time for each Case Task.

When working with a Consulting Panel, it is important to emphasize these points, and to ask questions about whether the groupings they have selected allow them to conceive of a typical
case that they could keep in mind to answer survey questions. If the groupings seem overly detailed, questions should be asked about whether two Case Types are sufficiently similar to be combined. In the ABA SCLAID workload studies completed thus far, the number of Case Types has ranged from eight (Missouri) to 18 (Colorado). The Rhode Island study used 11 Case Types, while Louisiana used 10. The number of Case Tasks has ranged from 11 (Louisiana) to 19 (Missouri), with Colorado and Rhode Island each having 12.

3. Trial v. Plea Analysis

In the ABA SCLAID workload studies published at the time of this writing, the Case Types and Case Tasks had not been further divided into cases that go to trial versus cases that are resolved by plea or dismissal. Given the issues regarding the numbers of Case Types and Case Tasks, the research teams were wary of making this additional distinction, because it effectively doubles the number of decisions the Delphi panel must make. The research teams also did not want to somehow suggest that it would be appropriate to decide, at the outset of a case, whether the case should plead or go to trial.

Subsequently, however, members of the ABA SCLAID team were asked to consult on a public defense workload study in Texas conducted by the Texas A&M University Public Policy Research Institute for the Texas Indigent Defense Commission.123 The research team in the Texas study decided to ask Delphi Panel participants what percentage of cases of each Case Type should go to trial versus what percentage should be resolved by plea or some other resolution. They then asked, for each Case Task under the Case Type, how much time it should take to complete the task in cases that go to trial and in cases that resolve by plea or other resolution. To the surprise of the members of the ABA SCLAID team, this worked well, allowing lawyers to account for clear differences in the work that must be done in cases that plead versus those that go to trial, without any disregard of cases that plead.
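In effect, the Texas approach yields an expected per-case time for each Case Task as a trial-probability-weighted average of the two time estimates. A minimal sketch, with hypothetical figures:

```python
def expected_task_minutes(pct_trial, minutes_if_trial, minutes_if_plea):
    """Weight the per-task time estimate by the share of cases that should go
    to trial versus resolve by plea or other disposition."""
    p = pct_trial / 100.0
    return p * minutes_if_trial + (1 - p) * minutes_if_plea

# Hypothetical Delphi answers for one Case Task in one Case Type: 10% of cases
# should go to trial; the task should take 600 minutes in cases that go to
# trial and 180 minutes in cases that resolve by plea.
expected = expected_task_minutes(10, 600, 180)
```

Summing the expected minutes across all Case Tasks for a Case Type gives the per-case time used in the system-level staffing calculation, while still preserving the distinct trial and plea answers the panel actually gave.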

The workload studies conducted have all emphasized ABA Defense Function Standard 4-6.1 (b):

In every criminal matter, defense counsel should consider the individual circumstances of the case and of the client, and should not recommend to a client acceptance of a disposition offer unless and until appropriate investigation and study of the matter has been completed. Such study should include discussion with the client and an analysis of relevant law, the prosecution’s evidence, and potential dispositions and relevant collateral consequences.124

The vast majority of cases in the criminal justice system are resolved through plea bargaining.125 Separating out cases that plead or otherwise resolve allowed the study to better reflect this reality and also allowed for greater focus on this standard, which applies before a defense attorney may advise a client to entertain a plea. Based on the success of the Texas study, future research teams should consider incorporating the plea/trial distinction. The ABA SCLAID workload study in Indiana successfully implemented a plea/trial distinction.126

4. Survey Interface Options

In every ABA SCLAID workload study, one technological question that must be addressed is how best to distribute the online Delphi survey. Factors to consider in selecting a distribution platform include the user experience (whether it is easy for the panelists to use) and the resulting data (whether it is easy to conduct the needed analysis). For the panelists, the easiest interface resembles a spreadsheet for each Case Type, in which Case Tasks are rows and pleas/trials are columns. However, this layout is difficult, if not impossible, to generate with most standard online surveying tools, such as SurveyMonkey™. In Rhode Island, the Research Team used a version of SurveyMonkey™; however, it required all questions to be written out as follows:

Case Task 1 – Case Preparation: Reviewing, analyzing and organizing case-related materials/evidence; dictating and editing case-related memos; defense team meetings (unless related to a court appearance, which falls under Court Preparation); documenting case file.

A. In what PERCENTAGE of Murder cases should a lawyer typically perform this task?

B. When a lawyer performs this task in Murder cases, how much time, in MINUTES per case, should a lawyer typically spend performing the task?

An online platform like Formsite™, which was used in the Louisiana workload study, provides the desired spreadsheet-like interface. However, the data analysis process was labor intensive. Columns of data had to be matched into an Excel spreadsheet built for each survey to allow for the compilation and analysis of panel responses. In Texas, these challenges led the research team to send each survey participant an Excel file to fill out and return.127 More advanced surveying tools, such as Qualtrics™, may offer a good combination of user experience and data analysis capabilities, if available to the research team. Each research team must determine which program or method works best for them given the current state of surveying tools and the strengths of their team.

5. Structured Feedback

As described in the overview of the Delphi process,128 in each successive round, data from the previous round is analyzed and summarized to provide structured feedback to the panelists. Before Round 2 begins, the Delphi Panel’s responses to Round 1 are compiled to determine the range
of responses and then trimmed to exclude outliers. The same is done with Round 2 responses before the Round 3 meeting takes place. At this Round 3 meeting, this trimmed data or structured feedback from Round 2 is presented for discussion.

Outliers exist for all sorts of reasons. Sometimes a participant misunderstood the questions or how to format their responses, e.g., answering in hours rather than minutes. Some participants may purposely provide inflated time estimates in hopes of obtaining lower caseloads. Other participants may misunderstand the guidance of the standards and instead answer with how much time they currently spend completing each task. Trimming the outliers before computing the feedback for each round helps to mitigate such distortions.

Failure to trim the presented data can skew the results dramatically. For example, in Tennessee, the research team failed to trim the data from Round 1; as a result, the feedback presented to respondents in Round 2 reflected the inclusion of extreme outliers. This, in turn, skewed the results of Round 2, tainting the entire study.

Determining how much to trim the responses before computing the feedback is a question for the data analysis consultants. In some ABA SCLAID studies, the data was trimmed by one quartile at each extreme. In other words, the highest 25% of respondents’ answers (top quartile) and the lowest 25% (bottom quartile) were dropped, and the remaining data range was provided as feedback.129 Whatever the data analysis team decides in terms of trimming, the Delphi Panel participants should be advised in each successive round that the results have been trimmed.
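A minimal sketch of quartile trimming as described above: drop the top and bottom 25% of responses, then report the remaining range and mean as structured feedback. The simple sort-and-slice cut used here is only one of several quartile conventions a data analysis team might choose; the sample responses are hypothetical:

```python
def trim_quartiles(responses):
    """Drop the lowest and highest quartiles, returning the middle half."""
    ordered = sorted(responses)
    k = len(ordered) // 4  # number of responses to drop at each extreme
    return ordered[k:len(ordered) - k] if k else ordered

def peer_feedback(responses):
    """Peer range and peer mean, computed over the trimmed responses only."""
    trimmed = trim_quartiles(responses)
    return (min(trimmed), max(trimmed)), sum(trimmed) / len(trimmed)

# Example: minutes estimated by eight panelists for one Case Task.
# The 600 entry is an obvious outlier (perhaps an answer given in seconds,
# or a deliberately inflated estimate); trimming removes it.
answers = [30, 45, 60, 60, 75, 90, 120, 600]
peer_range, peer_mean = peer_feedback(answers)
```

A peer median could be computed over the same trimmed list in place of (or alongside) the peer mean, as was done in the Texas study.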

The type of feedback to provide to the group in the second round is similarly a question for the consultants. The only requirement is that the feedback provided must maintain the anonymity of responses from the previous round. In some ABA SCLAID studies, the consultants provided the trimmed range130 (also called the peer range), as well as the trimmed mean131 (the mean of the responses in the trimmed range, also called the peer mean). In the Texas study, the peer median132 and the peer range were provided.133

One question that has arisen while designing Round 2 surveys is whether the respondents should be provided with their original answers to the same questions in addition to the group data. In Indiana, the data analysis consultants decided against providing this information in the second-round survey. They believed that the group data should be the focus and that adding the individual data might over-emphasize when and how often an individual’s answers fell outside the peer range. In the Texas study, however, the research team did provide each individual with his or her previous response.

The feedback prepared for the Round 3 in-person Delphi meeting is similar to the feedback provided in the Round 2 survey (e.g., peer mean and peer range), but is compiled from the second-round survey responses. Again, the feedback is anonymous and generated from data that is trimmed to eliminate outliers. At the in-person meeting, this feedback is compiled into a presentation for discussion. The Delphi panel discusses each Case Task for each Case Type, reviewing the feedback data, reviewing the applicable standards, and deliberating until a consensus can be reached on the time and frequency required. For each jurisdiction, it is worthwhile to consider what feedback data, beyond the peer mean or median and peer range, might be helpful to the group in making its decisions. For example, in Missouri and Colorado, separate peer means for public defense attorneys and private practice attorneys were presented as an aspect of the discussion.

6. Attrition

Attrition occurs throughout each phase of the workload study. Only a certain number of people on the initial Delphi panel list will accept the invitation to participate and complete the first online survey round. Even fewer will complete the second online round, and fewer still will attend the in-person Delphi meeting. If a panelist fails to complete one step, they cannot later rejoin the panel. For example, a panelist who fails to complete the second survey round cannot then attend the in-person Round 3 meeting of the Delphi Panel.

Because it is important to try to maintain the diversity of the panel and to have robust input into the ultimate consensus decisions, it is essential to minimize attrition. Research teams can combat attrition by sending multiple reminders, by email, to complete the surveys. The public defense organization can also call each panelist at each stage and ask them personally to complete the next phase of the process. This ensures that local lawyers are hearing this request from the local organization, which gives credibility to the study.

C. Timeline for a Workload Study

Typically, a Delphi workload study can be completed in approximately 12-18 months. The single biggest potential for delay concerns the system analysis. As noted above, if a public defense organization is initiating timekeeping for the purpose of the Delphi process, the time frame may need to be extended to allow participants to become familiar with the timekeeping program and begin to use it consistently and reliably before the data collection period.

Coordinating the system analysis and the Delphi process is a challenging task. A written schedule or timeline that identifies each organization’s duties and responsibilities is essential, as are
regular group calls. Success requires constant interaction between and among the data analysis consultants, the law and standards team, and the public defense organization. Because the resulting workload study report may be the subject of litigation, care should be taken with respect to written communications and preservation of records.

D. Costs of a Workload Study

Conducting a workload study requires adequate funding and represents a significant financial commitment. Total costs for workload studies often exceed $250,000. Any entity or organization seeking to conduct a study should ensure that the two critical skill sets–data analysis and law and standards–are compensated sufficiently to devote adequate time to the project.

The cost of hiring the data analysis consultant often has been the largest cost of the project.
A workload study requires substantial time and attention over a prolonged period of time and, accordingly, the consulting costs can easily run over $100,000, particularly if the work involves multiple Delphi panels. Some accounting and consulting firms may be willing to donate part of their time as a public service, but it is imperative that adequate time be part of that commitment.

As noted above, in ABA SCLAID workload studies, the law and standards team has been made up of two attorney consultants and ABA SCLAID staff. The consultants have been compensated, though much of the work has been done at a significantly discounted rate. ABA SCLAID has also contributed some of the staff support without reimbursement. Other law and standards organizations or consultants may or may not be in a position to contribute some portion of their time pro bono.

It is also important to ensure that lawyers from all parts of a jurisdiction are able to participate in
the study. For this reason, the budget should include reimbursement funds for travel for the various panelists to attend the necessary in-person meetings–the Consulting Panel’s Case Type/Case Task identification meeting and the Round 3 in-person Delphi meeting. In larger jurisdictions, where these in-person meetings may require an overnight stay or air travel, these costs can be significant.


Conclusion

In 2012, ABA SCLAID sought to develop a way of accurately quantifying maximum workloads using reliable data and analytics. The goal of this project has been to provide public defense organizations and attorneys with the information needed to advocate for relief when their workloads are too high. There is now a considerable body of reliable data and analytics in public defender workload studies for that community to use in managing their operations, advocating with policymakers for increased resources, and, if that fails, pursuing litigation. Organizations and attorneys in jurisdictions that have been the subject of a Delphi workload study have found the studies useful in explaining caseload and budget issues to key policymakers and advocating for greater resources and improved caseload controls.

ABA SCLAID and its partners hope to continue conducting workload studies in jurisdictions across the country and help other public defense organizations reduce excessive workloads. At the
same time, other organizations are strongly encouraged to undertake similar studies. Many more jurisdictions are in need of study and assistance than can be reached by one Research Team. The goal of this report is to be helpful in such endeavors by sharing information regarding methodology and decision-making.

  1. This report covers the ABA SCLAID workload studies through early 2020. Although the Indiana Project was underway during the drafting of this report, it had not yet been completed. The Indiana Project report was then released in July 2020. See ABA Standing Committee on Legal Aid and Indigent Defendants, The Indiana Project: An Analysis of the Indiana Public Defense System and Workload Standards (ABA 2020), available at aba/administrative/legal_aid_indigent_defendants/the-indiana-project-july-2020.pdf.

  2. Throughout this report we use the term public defenders to include all attorneys who provide public defense services, whether as full-time employees of a governmental or non-profit office, as contractors or pursuant to court-appointment.

  3. 372 U.S. 335 (1963).

  4. Id. at 344.

  5. Strickland v. Washington, 466 U.S. 668 (1983).

  6. Throughout this report the term “caseload” refers to the total number and different kinds of cases assigned to a public defense attorney or organization during a certain period of time. “Workload” is a broader concept, including all of an attorney’s responsibilities: the cases on which an attorney works during the course of a year, as well as the many other responsibilities not pertaining specifically to cases for which the attorney is responsible. This would include administrative responsibilities, training time, supervision time, etc. As discussed in more detail later in this report, the Delphi survey process addresses only the time requirements of attorneys for legal representation tasks performed on their various types of cases under study.

  7. See, e.g., Gideon Undone: The Crisis in Indigent Defense Funding – Transcript of a Hearing on the Crisis in Indigent Defense Funding held during the Annual Conference of the National Legal Aid and Defender Association, November 1982 (ABA 1983), available at downloads/indigentdefense/gideonundone.authcheckdam.pdf; National Right to Counsel Committee, Justice Denied: America’s Continuing Neglect of Our Constitutional Right to Counsel (The Constitution Project 2009), available at https://; Joel M. Schumm, National Indigent Defense Reform: The Solution is Multifaceted (ABA and NACDL 2012), available
    at ; Andrea M. Marsh, State of Crisis: Chronic Neglect and Underfunding for Louisiana’s Public Defense System (NACDL 2017), available at https://www.

  8. For an extensive summary of the depth and breadth of caseload issues, see Norman Lefstein, Securing Reasonable Caseloads: Ethics and Law in Public Defense 12-19 (ABA 2011) available at: aba/publications/books/ls_sclaid_def_securing_reasonable_caseloads.authcheckdam.pdf.

  9. attorneys who are providing representation to individual criminal defendants unable to afford counsel.

  10. 10  ABA Standing Committee on Legal Aid and Indigent Defendants, Gideon’s Broken Promise: America’s Continuing Quest for Equal Justice (2004), available at defendants/ls_sclaid_def_bp_right_to_counsel_in_criminal_proceedings.authcheckdam.pdf

  11. 11  Id. at 16.

  12. Id.

  13. Robert Boruchowitz, Malia Brink, and Maureen Dimino, Minor Crimes, Massive Waste: The Terrible Toll of America’s Broken Misdemeanor Courts (NACDL 2009), available at MinorCrimesMassiveWasteTollofMisdemeanorCourts.

  14. Id. at 20-21.

  15. Oliver Laughland, “The Human Toll of America’s Public Defender Crisis,” The Guardian, Sept. 7, 2017, available at

  16. Texas Indigent Defense Commission, Indigent Defense Data for Texas (2015), available at

  17. See American Bar Association, Eight Guidelines of Public Defense Related to Excessive Workloads (2009), at Guideline (“[I]f workloads are excessive, neither competent nor quality representation is possible.”), available at https://www. public_defense.pdf.

  18. See ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 06-441: Ethical Obligations
    of Lawyers Who Represent Indigent Criminal Defendants When Excessive Caseloads Interfere With Competent and Diligent Representation (May 13, 2006), (“The obligations of competence, diligence, and communication . . . apply equally to every lawyer.”), available at defendants/ls_sclaid_def_ethics_opinion_defender_caseloads_06_441.authcheckdam.pdf.

  19. ABA Model Rules of Professional Conduct (2016), Rule 1.1. (“A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”) and Rule 1.3 (“A lawyer shall act with reasonable diligence and promptness in representing a client.”), available at conduct/model_rules_of_professional_conduct_table_of_contents/.

  20. Id. at Rule 1.1.

  21. Id. at Comment 5 (emphasis added).

  22. American Bar Association, Criminal Justice Standards for the Defense Function (Defense Function Standards) (4th ed. 2015) at Standard 4-1.1 (a) (“These Standards are intended to address the performance of criminal defense counsel
    in all stages of their professional work.”), available at DefenseFunctionFourthEdition/#4-1.1. Note that the United States Supreme Court found that these standards are “valuable measures of the prevailing professional norms of effective representation.” Padilla v. Kentucky, 559 U.S. 356, 367 (2010).

  23. Defense Function Standards at Standard 4-4.1.

  24. Id. at Standard 4-4.6

  25. Id. at Standards 4-3.1, 4-3.3, 4-3.9, 4-5.1, and 4-5.4

  26. Id. at Standards 4-6.1, 4-6.2, and 4-6.3.

  27. Id. at Standards 4-3.2, 4-7.11, and 4-8.1.

  28. Id. at Standard 4-4.6.

  29. Id. at Standards 4-4.1 and 4-6.1(b).

  30. See ABA Eight Guidelines, supra n. 17, Comment to Guideline 1.

  31. See ABA Model Rules, supra n. 19, at Rule 1.3, Comment 2 (“A lawyer’s workload must be controlled so that each matter can be handled competently.”); see also Defense Function Standards, supra n. 22, at Standard 4-1.8(a) (Lawyers “should not carry a workload that, by reason of its excessive size or complexity, interferes with providing quality representation, endangers a client’s interest in independent, thorough or speedy representation, or has significant potential to lead to the breach of professional obligations.”); American Bar Association, Ten Principles of a Public Defense Delivery System (2002), Principle 5 (“Defense counsel’s workload is controlled to permit the rendering of quality representation.”), available at tenprinciplesbooklet.authcheckdam.pdf.

  32. ABA Eight Guidelines, supra n. 17, at the Comment to Guideline 6.

  33. ABA Model Rules, supra n. 19, Rule 1.16(a)(1) (“A lawyer shall not represent a client or, where representation has commenced, shall withdraw from the representation of a client if the representation will result in a violation of the rules of professional conduct or other law.”); and ABA Formal Opinion 06-441, supra n. 18.

  34. ABA Eight Guidelines, supra n. 17, at the Comment to Guideline 6.

  35. Id. at the Comment to Guideline 1, citing In re Order on Prosecution of Criminal Appeals by the Tenth Judicial Circuit Public Defender, 561 So. 2d 1130, 1135 (Fla. 1990).

  36. ABA Model Rules, supra n. 19, Rule 1.7(a)(2).

  37. Lefstein, supra n. 8 at 61-62.

  38. CBS, 60 Minutes, Inside NOLA Public Defenders’ Decision to Refuse Felony Cases, April 16, 2017, available at

  39. Id.

  40. ABA SCLAID was the organization responsible for the law and standards on the Missouri, Louisiana, Rhode Island, and Colorado studies, as well as the failed study in Tennessee. Additionally, ABA SCLAID study personnel consulted on the study in Texas. Highly respected national accounting and consulting firms partnered with ABA SCLAID and were responsible for the data analysis portions of these studies. An experienced academic research organization was responsible for the data analysis portion of the Texas study.

  41. Workload studies are generally commissioned by an organization with oversight responsibility for public defense in a jurisdiction. This can be a Commission, as in Indiana, or a public defender office or agency, as in Colorado or Rhode Island. In this report we call these entities, collectively, public defense organizations.

  42. National Advisory Commission on Criminal Justice Standards and Goals (1973) at Standard 13.12-Workload of Public Defenders, available at

  43. Ten Principles, supra n. 31.

  44. Id. at Principle 5.

  45. The NAC Standards are, as a result of this citation and a citation in ABA Formal Opinion 06-441 (supra n. 18), often erroneously referred to as the ABA Standards.

  46. American Council of Chief Defenders Statement on Caseloads and Workloads (August 24, 2007), available at https://


  47. Id. at 12.

  48. Id.

  49. Id. at 2 (“One system that can be utilized to arrive at an appropriate maximum limit for complex cases is a case credit system that allows multiple credits for specific types of cases and recognizes that lawyers can handle fewer of those cases per year.”).

  50. See Thomas A. Schweich, Missouri State Auditor, Missouri State Public Defender (Oct. 2012), available at https://app.

  51. Id. at 14.

  52. Id.

  53. Id.

  54. The Spangenberg Group was a nationally recognized research and consulting firm specializing in improving justice programs. The firm conducted weighted caseload studies in Colorado (1996) and Tennessee (1999). For more information, see The Spangenberg Group, Keeping Public Defender Workloads Manageable, Bureau of Justice Assistance, Jan. 2001, available at

  55. The National Center for State Courts has conducted workload and resource assessments over the years for various aspects of the criminal justice system, including judges, probation officers, and defenders. A complete listing of their work is available at

  56. Brian J. Ostrom, Maryland Attorney and Staff Workload Assessment, 2005, available at https://cdm16501.contentdm.oclc. org/digital/collection/ctadmin/id/414.

  57. Daniel J. Hill, A Workload Assessment Study for the New Mexico Trial Court Judiciary, New Mexico District Attorneys’ Office and the New Mexico Public Defender (National Center for State Courts 2007), available at https://cdm16501.

  58. Matthew Kleiman, Ph.D. and Cynthia Lee, J.D., Virginia Indigent Defense Commission Attorney and Support Staff Workload Assessment, available at

  59. See Nicholas M. Pace et al., Caseload Standards for Indigent Defenders in Michigan (RAND Corporation 2019), available at

  60. See Nicholas M. Pace, et al., Case Weights for Federal Defenders (RAND Corporation 2011), available at https://www.

  61. New Mexico Workload Assessment Study, supra n. 57 at 74.

  62. Id. at 78.

  63. Id.

  64. Id. at 78-79.

  65. Id. at 83-84.

  66. Id. at 85. It is noteworthy that the time-sufficiency survey followed by the meeting of veteran public defenders used in this type of workload study is also viewed as an application or modification of a Delphi method.

  67. Id. at 86.

  68. Id. at 10.

  69. Id. at 87. Note that in this study the NAC Standards are referred to as the ABA Standards.

  70. Id. at 10-11.

  71. Id. at 5. Interestingly, the study concluded that the District Attorneys should also receive an increase of 41 FTE attorneys, from 324 to 365.

  72. Id. at 79.

  73. For this reason, RubinBrown and all other accounting and consulting firms that have worked on ABA workload studies have rejected the use of timekeeping data as an anchor, even when the time data is reliable. Instead, the ABA studies use law and standards as the anchor. This risk was also well-described in the recent RAND workload study of Michigan: “Providing such information to decisionmakers undoubtedly runs the risk of an anchoring bias, where judgments (such as estimates of the average amount of attorney time needed for effective representation) are excessively influenced by initial values presented to those decisionmakers (such as findings of a time study showing the average amount of attorney time currently spent on cases).” Pace, supra n. 59, at 23. The RAND researchers chose to show the time data to their panels despite acknowledging that there was no way to fully avoid the anchoring bias, although RAND also found that the ABA’s decision not to show this data to panelists was reasonable. See id. at 18 (noting that the ABA studies are among the “well-tested models” for how to conduct public defense workload studies).

  74. Lefstein, supra n. 8 at 150.

  75. State v. Waters, 370 S.W.3d 592 (Mo. 2012) (en banc).

  76. See Stephen F. Hanlon, The Appropriate Legal Standard Required to Prevail in a Systemic Challenge to an Indigent Defense System, 61 St. Louis L.J. 625, 636 (2017). Mr. Hanlon was lead counsel for the Missouri Public Defender in State v. Waters.

  77. Waters, 370 S.W. 3d at 607.

  78. Lefstein, supra n. 8 at 147.

  79. ABA Standing Committee on Legal Aid and Indigent Defendants, The Missouri Project: A Study of the Missouri Public Defender System and Attorney Workload Standards (ABA 2014), at 9, available at

  80. Id. at 10 (citing Chia-Chien Hsu & Brian A. Sandford, The Delphi Technique: Making Sense of Consensus, 12 Prac. Assessment, Res. & Eval. 1 (2007)).

  81. Id. at 10.

  82. As addressed in note 2, supra, this report uses the term public defenders to include all types of public defense providers. The selection of participants in each jurisdiction reflects how public defense is provided in the jurisdiction being studied. Where public defense is provided, in part, by contractors or court-appointed counsel, efforts are made to ensure that these types of public defenders are included in the Delphi process.

  83. As a term of art in the science of research methodology, an anchor is used to constrain the consensus professional judgment of the Delphi panel.

  84. The United States Supreme Court has found that these standards are “valuable measures of the prevailing professional norms of effective representation.” Padilla, 559 U.S. at 367. These Standards are the result of a lengthy process that began in 1964, and, importantly, they “are the result of the considered judgment of prosecutors, defense lawyers, judges, and academics who have been deeply involved in the process.” Martin Marcus, The Making of the ABA Criminal Justice Standards: Forty Years of Excellence, 23 Crim. J. 10 (2009), available at publications/criminal_justice_magazine/makingofstandards_marcus.pdf.

  85. The ABA Model Rules of Professional Conduct, supra n. 19, have been adopted by most jurisdictions.

  86. Strickland, 466 U.S. at 686, 688.

  87. Defense Function Standards, supra n. 22 at Standard 4-6.1(b). In 2012, the United States Supreme Court, in Missouri v. Frye, citing the Department of Justice, Bureau of Justice Statistics, noted that “ninety-four percent of state convictions are the result of guilty pleas.” Missouri v. Frye, 132 S.Ct. 1399, 1407 (2012).

  88. Three years appears sufficiently long to show trends. See ABA Standing Committee on Legal Aid and Indigent Defendants, The Louisiana Project: A Study of the Louisiana Public Defender System and Attorney Workload Standards, at 7-10 (2017), available at ls_sclaid_louisiana_project_report.pdf.

  89. Timekeeping with sufficient accuracy and consistency to allow for reliable comparisons has proven difficult in several jurisdictions. For a more robust discussion of whether to conduct timekeeping and how to complete a system analysis without timekeeping, see infra Section IV(A).

  90. See infra Section III(B)(5) for the definitions of Case Tasks and Case Types.

  91. The ABA Criminal Justice Standards have many volumes applicable to all aspects of criminal practice. In public defender workload studies, the most important standards have been the Defense Function Standards, which “are intended
    to address the performance of criminal defense counsel in all stages of their professional work.” Defense Function Standards, supra n. 22, at Standard 4-1.1(a). Other aspects of the ABA Criminal Justice Standards, e.g., the Criminal Appeals Standards and Juvenile Justice Standards, have been relevant to particular panels or aspects of studies.

  92. As noted previously, the term public defender, as used in this report, is a broad one. An effort is made to ensure representation for a diverse group of public defenders, representing different jurisdictions and different types of public defense provider, as relevant in the jurisdiction being studied.

  93. In the ABA Delphi studies published to date, a single panel was used to cover adult criminal matters, juvenile cases and appeals. In Texas, the Research Team determined that it would be useful to create separate panels for different practice areas to provide greater expertise and develop more meaningful, substantive results. The ABA workload study in Indiana is also using multiple Delphi Panels. More on this decision is discussed infra at Section IV(B)(1).

  94. Decisions on how to summarize the data and what data to include as feedback are discussed in Section IV(B)(5).

  95. This result is often called a case weight, but to avoid confusing the results of our workload studies with Weighted Caseload Studies, ABA SCLAID has not historically employed this term.

  96. Some researchers have questioned whether failing to ground the study in timekeeping data leads study participants to overestimate the time needed. The planning fallacy, the human propensity to underestimate how long something will take, has long been described in the social science literature and strongly suggests that time estimates provided through the Delphi process are, if anything, underestimates. See, e.g., Daniel Kahneman and Amos Tversky, Intuitive Prediction: Biases and Corrective Procedures (Advanced Decision Technology 1977), available at pdf. Numerous steps that have been shown to combat the planning fallacy are incorporated into the Delphi methodology. These steps include segmentation (breaking the overall time to be estimated into smaller component parts and guiding that process by reference to concrete steps, e.g., the tasks of a defense lawyer as described in the ABA Criminal Justice Standards) and discussion with peers, which occurs particularly in the third round of the Delphi process. Participants discuss their time estimates and how they were derived at length with their colleagues before coming to a consensus.

  97. The Missouri Project, supra n. 79.

  98. The Louisiana Project, supra n. 88.

  99. ABA Standing Committee on Legal Aid and Indigent Defendants, The Colorado Project: A Study of the Colorado Public Defender System and Attorney Workload Standards (ABA 2017), available at aba/administrative/legal_aid_indigent_defendants/ls_sclaid_def_co_project.authcheckdam.pdf.

  100. ABA Standing Committee on Legal Aid and Indigent Defendants and the National Association of Criminal Defense Lawyers, The Rhode Island Project: A Study of the Rhode Island Public Defender System and Attorney Workload Standards (NACDL 2017), available at defendants/ls_sclaid_def_ri_project.pdf.

  101. Pace, Caseload Standards for Indigent Defenders in Michigan, supra n. 59 at 18.

  102. The Indiana Project, supra n. 1.

  103. In the absence of a comprehensive system analysis, a Delphi workload study can still be valuable. On its own, the Delphi study establishes the time needed to adequately represent clients by Case Type. This data can be used for budgetary and planning purposes, and compared to whatever data may already exist, e.g., individual attorney or office caseloads.

  104. See, e.g., The Missouri Project, supra n. 79 at 15; The Colorado Project, supra n. 99 at 16-17; The Rhode Island Project, supra n. 100 at 16-17.

  105. The Missouri Project, supra n. 79 at 15.

  106. The Colorado Project, supra n. 99 at 16.

  107. The Louisiana Project, supra n. 88 at 11.

  108. Id. at 12.

  109. Id. at 12.

  110. Id.

  111. Id. at note 26.

  112. 52 weeks per year × 40 hours per week = 2,080 hours annually per FTE. This number of average annual case-specific hours is comparable to the hours generally required of associates in large law firms, which range from 1,700 to 2,300 hours. See The Truth about the Billable Hour, Yale Law School, available at hour-0. The 2,080 annual hour figure is undoubtedly conservative, as it does not permit any time for administrative or supervisory work, general meetings, training, travel time, wait time, or other time not devoted to case-specific legal work. It also is not discounted to allow for public holidays, sick leave or vacation.
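The arithmetic in this note, and the way an hours-per-FTE figure converts total case-related hours into attorney need, can be sketched in a few lines. This is an illustrative sketch only; the Case Types, per-case hours, and case counts below are hypothetical, not figures from any published study.

```python
# Hypothetical sketch of the FTE arithmetic in note 112.
HOURS_PER_FTE = 52 * 40  # 52 weeks x 40 hours = 2,080 case-specific hours per FTE

# Hypothetical inputs: hours needed per case (e.g., from a Delphi panel)
# and annual case counts, by Case Type.
case_hours = {"high_felony": 70.0, "low_felony": 25.0, "misdemeanor": 8.0}
case_counts = {"high_felony": 1_000, "low_felony": 4_000, "misdemeanor": 20_000}

# Total attorney hours needed across all Case Types.
total_hours = sum(case_hours[t] * case_counts[t] for t in case_hours)

# Dividing by hours available per FTE yields the attorney staffing need.
ftes_needed = total_hours / HOURS_PER_FTE
```

Dividing by a discounted availability figure instead, such as the 1,269 case-specific hours per attorney used in the Colorado study (see note 120), would yield a correspondingly larger FTE need.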

  113. The Louisiana Project, supra n. 88 at 13.

  114. Id. at 2.

  115. The Rhode Island Project, supra n. 100 at 19.

  116. Id. at 7.

  117. The Louisiana Project, supra n. 88 at 7.

  118. Id.

  119. The Rhode Island Project, supra n. 100 at 26.

  120. The 2,080 annual hour number for attorney work is undoubtedly conservative, as it does not permit any time for administrative or supervisory work, general meetings, training, travel time, wait time, or other time not devoted to case-specific legal work. It also is not discounted to allow for public holidays, sick leave or vacation. The hours, as a result, come out almost identical to the hours worked by law firm associates. See Update on Associate Hours Worked, NALP Bulletin, 2016, available at (noting that the data from 2014 shows that law firm associates worked, on average, 2,081 hours per year, which was up from an average of 2,067 hours worked in 2013). If a system had other data to use to determine the amount of time an FTE had available for case-specific work, the FTE hours amount could be discounted based on that data. This was done in the Colorado workload study, which determined that each attorney had 1,269 hours available for case-specific work per year. See The Colorado Project, supra n. 99 at 25.

  121. Participants are instructed to respond to only those sections of the survey that concern Case Types with which they have significant experience.

  122. This issue is discussed in more depth infra at Section IV(B)(2) and (B)(3).

  123. See Texas A&M Public Policy Research Institute, Guidelines for Indigent Defense Caseloads: A Report to the Texas Indigent Defense Commission (Jan. 2015), available at indigent-defense-caseloads-01222015.pdf.

  124. Defense Function Standards, supra n. 22 at Standard 4-6.1 (b) (emphasis added).

  125. See NACDL, The Trial Penalty: The Sixth Amendment Right to Trial on the Verge of Extinction and How to Save It (2018), available at

  126. See The Indiana Project, supra n. 1 at 20.

  127. See TX Guidelines for Indigent Defense Caseloads, supra n. 123 at Appendix H.

  128. See Section III(B), supra.

  129. See, e.g., TX Guidelines for Indigent Defense Caseloads, supra n. 123 at Appendix H.

  130. The trimmed range is the range of responses after the outliers are trimmed. For example, the full range of responses to a question might be 5-60. After trimming the highest 25% of responses and the lowest 25% of responses, the trimmed range might be 15-45.

  131. The “mean” is the average of a set of results. The trimmed mean is the average of the results remaining after the range of results has been trimmed.

  132. The median is found by listing all responses in ascending order and then locating the number at the middle of the list.

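The trimmed range, trimmed mean, and median described in notes 130-132 can be illustrated with a short sketch. The response values below are hypothetical, and the 25% trimming fraction follows the example given in note 130.

```python
# Illustrative sketch (not the studies' published code) of the trimmed
# statistics defined in notes 130-132.
def trimmed(responses, frac=0.25):
    """Drop the lowest and highest `frac` of the sorted responses."""
    s = sorted(responses)
    k = int(len(s) * frac)
    return s[k:len(s) - k] if k else s

def median(values):
    """Middle value of the sorted list (average of the two middle values
    when the count is even)."""
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# Hypothetical panel responses (hours needed for some Case Type/Case Task).
responses = [5, 10, 15, 20, 25, 30, 40, 45, 50, 55, 58, 60]

core = trimmed(responses)                  # middle 50% of responses
trimmed_range = (min(core), max(core))     # note 130
trimmed_mean = sum(core) / len(core)       # note 131
med = median(responses)                    # note 132
```

With these hypothetical responses, the full range is 5-60, the trimmed range is 20-50, and both the trimmed mean and the median come out to 35 hours.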
  133. See, e.g., TX Guidelines for Indigent Defense Caseloads, supra n. 123 at Appendix H.