The most powerful jurists in the country cannot do math. In October 2017, when the Supreme Court of the United States heard oral argument in Gill v. Whitford, a landmark case that would determine the future of partisan gerrymandering, several of its members proved reluctant to take statistical evidence seriously.
A concept called the “efficiency gap” lies at the core of Gill v. Whitford: it is a simple fraction whose numerator is the difference between the two parties’ wasted votes and whose denominator is the total number of votes cast. During the oral argument, Chief Justice Roberts dismissed this straightforward concept, stating that “[i]t may be simply my educational background, but I can only describe it as sociological gobbledygook.”
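Expressed symbolically – a direct restatement of the description above, with the symbols $W_A$, $W_B$, and $T$ introduced here only for illustration – the measure Roberts dismissed is:

$$\text{Efficiency Gap} = \frac{W_A - W_B}{T}$$

where $W_A$ and $W_B$ are the two parties’ wasted votes and $T$ is the total number of votes cast. If, say, one party wastes 55,000 votes, the other wastes 35,000, and 200,000 votes are cast in total, the efficiency gap is $(55{,}000 - 35{,}000)/200{,}000 = 10\%$ – a computation that requires nothing beyond subtraction and division.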
Justice Breyer made similar remarks, asserting that “…the hard issue in this case are… standards manageable by a court, not by some group of social science political ex … you know, computer experts.” While many people view the Justices as unreasonable in their dismissal of the efficiency gap, their remarks reveal a broader problem in today’s legal system: lawyers and judges often lack the background knowledge needed to assess mathematical evidence appropriately. As technology rapidly advances, the legal profession’s often hostile relationship with math and science – exemplified by the ways in which courts admit and misuse statistical evidence – inevitably diminishes the rule of law. At the dawn of the 21st century, Americans face complex legal questions involving matters like digital privacy and biometric analysis, and the resolution of those questions will unavoidably suffer from the legal profession’s dearth of mathematical knowledge.

In the first two parts of this paper, I will show that judges’ lack of quantitative understanding manifests itself not only in the rejection of statistical evidence but also in its misuse. Then, in the latter two parts, I will contend that one of the factors contributing to this intellectual epidemic lies in the development of America’s legal education system. I will outline the history of legal education from the colonial era to the present and argue that academic institutions’ misleading emphasis on “studying the law as a science” has distracted us from learning and understanding the essence of quantitative reasoning in the context of law.

Part I: The Rejection of Statistical Evidence

In 2010, the Supreme Court’s controversial ruling in Citizens United v. FEC effectively allowed corporations to spend unlimited amounts of money to support or denounce political candidates. Some argue that the ruling has led to today’s explosion of outside spending in American elections. In response to this new torrent of outside spending, several states – including Arizona, Connecticut, and Maine – strengthened public funding laws that sought to give publicly funded candidates a level playing field with their privately funded opponents. In Arizona, the Clean Elections Act provided matching government funds to publicly financed candidates based on the money raised and spent by their privately funded opponents. The fate of the Clean Elections Act came before the Supreme Court in the 2011 campaign finance case Arizona Free Enterprise Club’s Freedom Club PAC v. Bennett. Arizona attempted to justify its legislation on grounds of fairness and equality, but the question eventually boiled down to whether providing publicly funded candidates with matching funds burdens the political speech of their privately funded opponents. Although an abundance of statistical analyses by multiple scholars showed that private contributors do not alter their behavior in response to matching funds, these social science findings – submitted to the Court in volume – were ultimately dismissed. As a result, the Supreme Court struck down the matching funds provision as unconstitutional, arguing that the state’s anti-corruption interests did not justify the “substantial burden” on free speech. In light of the ruling, many argue that the decision in Arizona Free Enterprise Club’s Freedom Club PAC v. Bennett has exposed elections to the influence of big-money forces by disincentivizing candidates who might otherwise forgo private campaign contributions.
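As a purely illustrative sketch of what such an analysis can show – the numbers below are invented and the design deliberately simplified, not the scholars’ actual methodology – one can estimate the effect of a matching-funds trigger on private contributions and place a confidence interval around that estimate:

```python
# Illustrative only: a difference-in-means estimate with a 95% confidence
# interval, using invented data. This is not the analyses cited above; it
# merely shows how statistical tools can bound the size of an effect.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical private-contribution totals (in $1,000s) for races where
# matching funds were triggered and races where they were not.
with_matching = rng.normal(loc=250, scale=40, size=200)
without_matching = rng.normal(loc=250, scale=40, size=200)

# Estimated effect of the matching-funds trigger on private contributions.
diff = with_matching.mean() - without_matching.mean()

# Standard error of the difference in means and a 95% confidence interval.
se = np.sqrt(with_matching.var(ddof=1) / with_matching.size
             + without_matching.var(ddof=1) / without_matching.size)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"Estimated effect: {diff:.1f} (95% CI: {ci_low:.1f} to {ci_high:.1f})")
```

An estimate of this kind cannot rule out every conceivable burden with philosophical certainty, but a tight interval around zero bounds how large any burden could plausibly be – which is precisely the kind of evidence the Court declined to engage with.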
Nowhere in its analysis did the Court directly engage with the extensive statistical data presented to it. Instead, in his majority opinion, Chief Justice Roberts cited a precedential statement from the Warren Court: “Since as a practical matter it is never easy to prove a negative, it is hardly likely that conclusive factual data could ever be assembled.” What Roberts argues is that no matter how much evidence shows that privately funded candidates are unaffected by the matching provision, one can never definitively prove that none of these candidates is burdened. Roberts’s argument for rejecting the statistical evidence commits a fundamental mistake that Harvard professor Ryan Enos and his co-authors termed the “negative effect fallacy” in 2017. The fallacy occurs when one argues that, because it is philosophically impossible to prove with absolute certainty that something does not exist, all statistical evidence should be discarded. For instance, one can erroneously argue that no matter how many non-flying pigs there are on the planet, their number does not prove the nonexistence of pigs that can fly. Enos and his colleagues point out that the negative effect fallacy is simply an error of language and logic, and that it prevents people from using statistical tools with highly accurate descriptive and predictive powers. In this way, powerful quantitative evidence has been ignored by the Court on the basis of a simple yet detrimental fallacy.

Arizona Free Enterprise Club’s Freedom Club PAC v. Bennett was certainly not the only time the Court blindly rejected statistical evidence. In 2001, although the Court cited statistical data showing that banning the dissemination of illegally intercepted communications does not deter future illegal interceptions, Chief Justice Rehnquist dismissed the evidence in his dissenting opinion, calling it “voodoo statistics”. At this point, it is important to note that I am not contending that any of the cases mentioned above was wrongly decided, or that judges ought to make decisions based solely on statistical evidence. I am, however, arguing that it was unreasonable for the Court to assume that “it is hardly likely that conclusive factual data could ever be assembled.” This erroneous reasoning has allowed the Court to ignore highly relevant empirical evidence and turn to other forms of thinking that do not necessarily reflect the truth. Harvard law professor Laurence H. Tribe once denounced the use of mathematics at trial, saying that the “overbearing impressiveness” of numbers tends to “dwarf” other evidence. In the age of science and technology, however, quantitative data will play an ever more important role in judicial deliberations. We cannot and should not discard important evidence simply because it seems difficult to comprehend – doing so would impede justice and, in the worst cases, destroy innocent lives.

Part II: The Misuse of Statistical Evidence

Although the Court has blindly rejected powerful statistical evidence in recent history, it has paradoxically acknowledged the importance of such evidence at the same time. As statistical methodologies have improved and society’s demand for scientific findings has grown, the criminal justice system has moved rapidly toward quantifiable data. Despite this, the judicial system still struggles to keep flawed or misleading statistical evidence out of the courtroom. A primary example is the rule of admissibility for scientific evidence.
In its 1993 decision in Daubert v. Merrell Dow Pharmaceuticals, the Supreme Court devised a five-factor test for examining the reliability of scientific evidence. The Court wrote that, in order for a piece of scientific evidence to be admissible, it must satisfy the following criteria:
• The method employed can be and has been tested
• The results are subjected to peer review and publication
• The known or potential rate of error must be acceptable
• The technique employed must be generally accepted in the scientific community
• The research must be conducted independently, without the intent of providing proposed testimony
Building on the ruling in Daubert, the Court subsequently decided in General Electric Co. v. Joiner and Kumho Tire Co. v. Carmichael that the task of “gatekeeping” – ensuring that expert testimony, including statistical evidence, is relevant and reliable – belongs to the judge rather than to independent experts. Eventually, in 2000, the decisions in Daubert, General Electric, and Kumho served as the basis for the amendment of Federal Rule of Evidence 702. Since then, over half of the states have adopted some variation of the so-called Daubert standard.
Under Rule 702, judges who often lack a quantitative background took over a responsibility that previously belonged to highly trained experts. Their newly gained role as evaluators of statistical evidence has led to considerable chaos and confusion. The interpretation of the term “peer review”, for instance, has been inconsistent among courts throughout the country. Professor Paul Giannelli writes in the Case Western Reserve Law Review that the “peer review” standard in some courts has been interpreted to mean simply that someone has double-checked a lab analyst’s results, rather than the “rigorous peer review with independent external reviewers to validate the accuracy … [and] overall consistency with scientific norms of practice” that Daubert intended. As a result of these inconsistent interpretations of “peer review”, courts have mistakenly cited the prevalence of certain forensic techniques as evidence of their scientific value.
In U.S. v. Havvard, numerous scientific findings challenged the validity and accuracy of latent fingerprint matching. The technique involves using special procedures to uncover fingerprint residues that are invisible to the naked eye, and studies find that it can vary significantly in quality and correctness. The court nevertheless referred to latent fingerprint matching as the “archetype” of reliable expert testimony, asserting that it “[has] been used in ‘adversarial testing for roughly 100 years,’ which offered a greater sense of the reliability of fingerprint comparisons than could the mere publication of an article.” Though many studies point out that fingerprint collection and examination can be highly inaccurate if done without rigor, fingerprinting methods such as latent print matching have not suffered a sustained challenge in federal court in nearly 100 years. Beyond latent print matching, the National Academy of Sciences, the National Commission on Forensic Science, the President’s Council of Advisors on Science and Technology, and the Texas Forensic Science Commission have found that many well-known and routinely admitted forensic techniques – such as bite-mark analysis, microscopic hair comparison, and arson evidence – are called into question by independent studies, yet they are still used in some courtrooms today.