The defense let 48 Hours film their mock jury trial, but the real jury’s verdict didn’t turn out the same . . . What Went Wrong?
By Geri E. Satin, Esq., Ph.D.
CBS’s 48 Hours recently covered a high-profile Texas murder case involving the death of Jessie Bardwell. Defense attorneys believed that allowing the defendant to tell his story (i.e., that his girlfriend’s death was an accident) would lead to an acquittal. To gauge jurors’ reaction to the defendant’s story, the defense attorneys decided to put on a mock jury trial – something they had never done before. The attorneys conducted the mock trial on their own, using a single panel of jurors. After brief deliberations, the mock jurors unanimously found the defendant not guilty.
Defense attorneys used the jury research results to inform their trial strategy – deciding to put the defendant on the stand at trial to tell his story. This time, however, the jury found the defendant guilty of murder. He was sentenced to 50 years in prison.
How could mock jurors so clearly come out one way and the actual jury the exact opposite way? There is, of course, no guarantee that a mock trial will duplicate a jury’s findings. With that said, what happened in the Bardwell case is a textbook example of what NOT to do when conducting jury research.
Mock trials and focus groups provide a wealth of critical case information – not doing them is akin to going into a jury trial blind (particularly these days given that your opposing counsel has likely conducted jury research). But if the research is not conducted in a scientifically sound manner and if the data is not analyzed accurately, the mock trial may not only be worthless – it may be flat-out misleading!
Experimenter Effects
The Bardwell attorneys conducted their own mock trial. This can create what those of us in the social science world call Demand Characteristics. Essentially, attorneys go into a mock trial with a certain goal (e.g., to have jurors find the defendant not guilty). Throughout the mock trial, jurors are likely to form an interpretation of this goal and change their behavior to fit with that goal (e.g., unanimously finding the defendant not guilty). The very short and uninformative juror deliberations in the Bardwell case speak to the likelihood of demand characteristics being at play. Jurors – whether consciously or unconsciously – were likely influenced by the defense attorneys’ desired outcome.
The Bardwell mock trial may have also suffered from a related (but equally damaging) phenomenon: the observer-expectancy effect. Attorneys involved in a trial invariably have beliefs about the process and outcome of litigation. Every single trial we have worked on includes at least one document/email, key witness, thematic point, or jury instruction that the trial team is convinced will drive the case outcome. These predisposed viewpoints become what are known as Cognitive Biases, which can (and, if not controlled for by an independent consultant, often do) subconsciously influence mock jurors. Neutrality is nearly impossible to maintain unless it is properly vetted for before and during the mock trial.
Recruitment and Sample Size Problems
Although 48 Hours did not cover the juror recruitment methods used by the Bardwell defense attorneys, I suspect this may have been part of the problem. Sloppy juror recruitment can badly distort mock trial results. Too many attorneys have told me they mock-try cases before “co-workers,” “family and friends,” or even “people pulled off the street.” There is no doubt that the use of Convenience Samples of participants is fast, inexpensive, and easily accessible. However, there is a price to pay for this lack of recruitment effort: convenience samples are almost assuredly not representative of the population of eligible jurors who will be summoned for jury duty. If all of your co-workers have the same view on damages caps, asking them to decide a personal injury case tells you virtually nothing about how people in the jurisdiction will award damages. The same goes for pulling people off the street – those people may all be in a certain part of town for similar reasons (shared socioeconomic status, attending the same event, etc.).
The Bardwell mock trial also suffered from the number of jurors used (or lack thereof). The defense attorneys used a single panel of mock jurors. It is simply not possible – and quite risky – to draw conclusions about how jurors in a venue will decide a case based on a single sample of participants. What if this group of jurors was uniquely pro-defense? What if there was a strong juror in the room who commandeered deliberations? What if this group of jurors misread the verdict form? To attain predictive case results (and to conduct other critical statistical analyses, like juror profiling), a larger sample size involving multiple deliberation groups is an absolute necessity.
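To make that risk concrete, here is a rough back-of-the-envelope simulation – a minimal Python sketch using hypothetical numbers, not data from the Bardwell case or from our own studies. Even in a venue where 60% of jury-eligible citizens lean toward conviction, a single randomly drawn 12-person panel will lean the other way a meaningful share of the time.

```python
# A rough, illustrative simulation (hypothetical numbers, not Bardwell data):
# even if 60% of jury-eligible people in a venue lean toward conviction,
# a single randomly drawn 12-person panel can easily lean the other way.
import random

random.seed(1)  # reproducible illustration

POPULATION_GUILT_RATE = 0.60    # assumed share of the venue leaning "guilty"
PANEL_SIZE = 12
NUM_SIMULATED_PANELS = 10_000

misleading_panels = 0
for _ in range(NUM_SIMULATED_PANELS):
    guilty_leaners = sum(random.random() < POPULATION_GUILT_RATE
                         for _ in range(PANEL_SIZE))
    # A panel where "guilty" leaners are the minority would likely look
    # reassuringly pro-defense in a one-panel mock trial.
    if guilty_leaners < PANEL_SIZE / 2:
        misleading_panels += 1

print(f"Single panels leaning not guilty despite a 60% pro-conviction venue: "
      f"{misleading_panels / NUM_SIMULATED_PANELS:.1%}")
```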
The Problem with Relying on Verdicts
One of the key missteps by the Bardwell defense team was treating the mock jury verdict as the Holy Grail of litigation. Sure, it is nice to be able to give yourself a pat on the back for winning a mock trial, but a unanimously pro-defense verdict is virtually worthless to a defense team preparing for trial. From my perspective as a trial consultant, jurors’ verdicts are of little (if any) importance in a mock trial.
The point of jury research should be to unearth the PROBLEMS with a case and to come up with strategies and solutions to combat those problems before trial. Which themes are not resonating with jurors? Which pieces of evidence are damaging to my case? What don’t jurors understand about the facts/law? Which witnesses help my case, and which hurt it? Do jurors understand and properly interpret the jury instructions and verdict forms? Are my demonstrative aids effective? Which juror types/groups are problematic in jury selection? Etc.
Each of the above questions should always be followed by WHY? Why is this witness perceived as not credible? Why do jurors find this piece of evidence damaging to my case? Why don’t jurors understand this jury instruction? Why are jurors with this prior life experience or pre-existing bias a dangerous group in the deliberation room?
By focusing only on the bottom line, the Bardwell defense attorneys missed a wealth of critical juror intelligence that could have been derived from their mock trial.
How Do We Fix These Problems?
The above problems are likely just the tip of the iceberg in terms of the Bardwell mock trial (and other DIY jury research projects). But they are a great jumping-off point for a discussion of how to execute a sound, reliable mock trial.
As to experimenter effects, my best advice is to avoid them completely. We’ve already discussed the dangers of D-I-Y Jury Research (Doing It Yourself) in a previous blog post. But the main takeaway is that independent trial consultants are a must for designing, running, and analyzing the results of a mock trial. DIY jury research threatens the internal validity of the study (a fancy way of describing the extent to which the study’s findings can be relied upon). And if you can’t trust the accuracy of a mock trial, why do it at all?
When it comes to juror recruitment, the first goal is to randomize the recruitment process as much as possible and to make it all-inclusive. Multiple recruitment procedures and sources should be used (none of which should be a market research firm – these firms generally recycle what we in the industry call “professional mock jurors”). The second goal is to piece together a mosaic of jury-eligible citizens who properly reflect the key demographics of the venue in which the case is set to be tried. These jurors must reflect different ages, socioeconomic statuses, races/ethnicities, levels of education, political party affiliations, occupations, etc. Each juror must also be prescreened – using unbiased, cognitively based questioning – for jury eligibility and psychosocial appropriateness for jury duty. I realize that this is all quite detailed and time-consuming – which is precisely why we have an in-house juror recruitment department.
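For illustration only, here is a minimal sketch of how proportional (stratified) recruitment targets might be set. The age bands and percentages below are hypothetical stand-ins, not real venue data, and an actual recruit balances many more dimensions at once.

```python
# A minimal sketch of proportional (stratified) recruitment targets, using
# hypothetical venue demographics; a real recruit crosses several dimensions
# (age, education, occupation, political affiliation, etc.) at once.
VENUE_DEMOGRAPHICS = {   # assumed share of jury-eligible residents by age band
    "18-34": 0.30,
    "35-54": 0.35,
    "55+":   0.35,
}

def recruitment_targets(total_jurors: int) -> dict:
    """Allocate mock-juror seats to each stratum in proportion to the venue."""
    targets = {group: round(total_jurors * share)
               for group, share in VENUE_DEMOGRAPHICS.items()}
    # Rounding can leave the recruit a seat short or long; adjust the
    # largest stratum so the totals reconcile.
    shortfall = total_jurors - sum(targets.values())
    largest = max(VENUE_DEMOGRAPHICS, key=VENUE_DEMOGRAPHICS.get)
    targets[largest] += shortfall
    return targets

print(recruitment_targets(48))   # e.g., enough jurors for four 12-person panels
```

The design point is simply that each group’s share of the mock jury pool is pegged to its share of the venue, rather than to whoever happens to be available.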
Sample size is a tricky issue (at least from a budgetary standpoint). More mock jurors and more deliberation groups mean more expense for your trial team and client. But they also mean more accuracy and precision. Small sample sizes can provide a trial team with important qualitative juror feedback, insights, and results. However, those results are not necessarily predictive of case outcome. To attain statistically meaningful results (e.g., developing juror profiles for voir dire, providing reliable forecasts of liability and case value, etc.), it is imperative that a large number of jurors, stratified into multiple deliberation groups, be used in the mock trial.
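To put some loose numbers on that trade-off, here is a minimal sketch using the textbook margin-of-error formula for a proportion – an admittedly simplified approximation, not our actual analysis. With only 12 mock jurors, any percentage you observe carries an uncertainty of roughly plus-or-minus 28 points; larger, multi-panel studies narrow that band considerably.

```python
# A minimal sketch of why juror counts matter: the textbook normal-approximation
# margin of error for a proportion shrinks as the number of mock jurors grows.
# (This simple formula also ignores the clustering that happens once jurors
# deliberate in groups, which makes a single-panel estimate even shakier.)
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion observed from n jurors."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (12, 48, 120):
    moe = margin_of_error(0.5, n)   # worst-case spread, at an even 50/50 split
    print(f"{n:>3} mock jurors: roughly +/- {moe:.0%} on any verdict percentage")
```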
Finally, let’s readdress the misplaced focus on mock jury verdicts (particularly favorable ones). As a trial consultant, I often tell clients that a successful mock trial should present a “worst-case scenario” – exposing the good, the bad, and the ugly in terms of what might happen at trial. To attain that goal, jurors’ attitudinal, perceptual, and decision-based viewpoints must be unearthed at every juncture of the mock trial (before, during, and after case presentations and deliberations). This is the critical case information. From the moment a juror walks into a mock courtroom, they should be polled, surveyed, and questioned using research-tested psychometric measures designed to elicit juror information in an unbiased and neutral way.
Ok, ok. I realize I may have lost you at this point (too many years of graduate statistics courses). But the takeaway is simple. A mock trial is of incomparable value to a trial team – IF and WHEN conducted and analyzed correctly. Mock trials are not what you see voluntary bar associations put on via CLEs and trial skills workshops. They are not what private attorneys put on before family, friends, and co-workers. They are scientific jury research studies performed by specialized PhDs and legal experts who use psychology, the law, statistics, and human behavior to create winning trial strategies, reliable verdict forecasts, and predictive case valuations.