Sixteen: Biology, the Criminal Justice System, and (Oh, Why Not?) Free Will*

DON'T FORGET TO CHECK THEIR TEAR DUCTS

Some years back a foundation sent a letter to various people, soliciting Big Ideas for a funding initiative of theirs. The letter said something along the lines of "Send us a provocative idea, something you'd never propose to another foundation because they'd label you crazy." That sounded fun. So I sent them a proposal titled "Should the Criminal Justice System Be Abolished?" I argued that the answer was yes, that neuroscience shows the system makes no sense and they should fund an initiative to accomplish that. "Ha-ha," they said. "Well, we asked for it. That certainly caught our attention. That's a great idea to focus on interactions between neuroscience and the law. Let's do a conference."

So I went to a conference with some neuroscientists and some legal types—law professors, judges, and criminologists. We learned one another's terminology, for example seeing how we neuroscientists and the legal people use "possible," "probable," and "certainty" differently. We discovered that most of the neuroscientists, including me, knew nothing about the workings of the legal world, and that most of the legal folks had avoided science since being traumatized by ninth-grade biology. Despite the two-culture problem, all sorts of collaborations got started there, which eventually grew into a network of people studying "neurolaw." Fun, stimulating, interdisciplinary hybrid vigor.

And frustrating to me, because I kind of meant the title of the proposal that I had written. The current criminal justice system needs to be abolished and replaced with something that, while having some broad features in common with the current system,* would have utterly different underpinnings. Which I'm going to try to convince you of. And that's just the first part of this chapter.
—

You can't be less controversial than stating that the criminal justice system needs reform and that this should involve more science and less pseudoscience in the courtroom. If nothing else, consider this: according to the Innocence Project, nearly 350 people, a mind-boggling 20 of them on death row, imprisoned an average of fourteen years, have been exonerated by DNA fingerprinting.1

Despite that, I'm going to mostly ignore criminal justice reform by science. Here are some hot-button topics in that realm that I'm going to bypass entirely:

- What to do about the power and ubiquity of automatic, implicit biases (leading to, for example, juries meting out harsher decisions to African American defendants with darker skin). Should Implicit Association Tests be used in jury selection to eliminate people with strong, pertinent biases?
- Whether neuroimaging information regarding a defendant's brain should be admissible in a courtroom.2 This has grown less contentious as neuroimaging has transitioned from revolutionary to a standard approach in science's tool kit. But there remains the issue of whether juries should be shown actual neuroimages—the worry is that nonexperts are readily overimpressed with exciting, color-enhanced Pictures of the Brain (it's turning out to be less of an issue than feared).
- Whether neuroimaging data regarding someone's veracity should have a place in the courtroom (or in the workplace regarding security clearances). Basically, I know of no expert who thinks the technique is sufficiently accurate. Nonetheless, there are entrepreneurs selling the approach (including, I kid you not, a company called No Lie MRI). This issue extends to lower-tech but equally unreliable versions of is-that-brain-lying?, including electroencephalograms (EEGs), which are admissible in Indian courtrooms.3
- What should be the IQ cutoff for someone to be smart enough to be executed? The standard is an IQ of 70 or higher, and debate concerns whether it should be an average of 70 across multiple IQ tests, or whether achieving that magic number even once qualifies you for execution. This issue pertains to about 20 percent of people on death row.4
- What to do with the fact that scientific findings can generate new types of cognitive biases in jurors. For example, the belief that schizophrenia is a biological disorder makes jurors less likely to convict schizophrenics for their actions but more likely to view them as incurably dangerous.5
- The legal system distinguishes between thoughts and actions; what to do as neuroscience increasingly reveals the former? Are we approaching precrime detection, predicting who will commit a crime? In the words of one expert, "We're going to have to make a decision about the skull as a privacy domain."6
- And of course there's that problem of judges judging more harshly when their stomachs are gurgling.7

—

All of these are important issues, and I think reforms are needed at the intersection of progressive politics, civil liberties, and tough standards about new science. In other words, a standard liberal agenda. Most of the time I'm a clichéd card-carrying liberal; I even know the theme songs to many of NPR's programs. Nonetheless, this chapter won't take anything resembling a liberal approach to reforming criminal justice. The reason why is summarized in the following example of a classically liberal approach to a legal issue.

It's the middle of the 1500s. Perhaps because of lax societal standards and people being morally deprived and/or depraved, Europe is overrun with witches. It's a huge problem—people fear going out at night; polls show that peasants-in-the-street list "witches" as more of a threat than "the plague" or "the Ottomans"; would-be despots gain supporters by vowing to be tough on witches.
Fortunately, there are three legal standards for deciding if someone is guilty of witchcraft:8

- The flotation test. Since witches reject the sacrament of baptism, water will reject their body. Take the accused, bound, and toss them into some water. If they float, they're a witch. If they sink, they're innocent. Quickly now, retrieve innocent person.
- The devil's-spot test. The devil enters someone's body to infect them with witch-ness, and that point of entry is left insensitive to pain. Systematically do something painful to every spot on the accused's body. If some spot is much less sensitive to pain than the rest, you've found a devil's spot and identified a witch.
- The tear test. Tell the accused the story of the crucifixion of Our Lord. Anyone not moved to tears is a witch.

These well-established criteria allow authorities fighting this witch wave to identify and suitably punish thousands of witches.

In 1563 a Dutch physician named Johann Weyer published a book, De Praestigiis Daemonum, advocating reform of the witch justice system. He, of course, acknowledged the malign existence of witches, the need to punish them sternly, and the general appropriateness of witch-fighting techniques like those three tests. However, Weyer aired an important caveat pertinent to older female witches. Sometimes, he noted, elderly people, especially women, have had atrophy of their lachrymal glands, making it impossible for them to cry tears. Uh-oh—this raises the specter of false convictions of people as witches. The concerned, empathic Weyer counseled, "Make sure you're not torching some poor elderly person simply because her tear ducts don't work anymore."

Now that's a liberal reform of the witch justice system, imposing some sound thinking in one tiny corner of an irrational edifice. Much like what scientifically based reform of our current system does, which is why something more extreme is needed.

THREE PERSPECTIVES

Let's get down to cases.
There are three ways of viewing the place of biology in making sense of our behaviors, criminal or otherwise:
- We have complete free will in our behavior.
- We have none.
- Somewhere in between.

If people are forced to carefully follow the logical extensions of their views, probably less than a thousandth of a percent would support the first proposition. Suppose someone convulsing with a grand mal epileptic seizure, flinging their arms around, strikes someone. If you truly believe we freely control our behavior, you must convict them of assault. Virtually everyone considers that absurd. Yet that legal outcome would have occurred half a millennium ago in much of Europe.9

That seems ludicrous because in the last few centuries the West has crossed a line and left it so far behind that a world on the other side is unimaginable. We embrace a concept that defines our progress—"It's not him. It's his disease." In other words, at times biology can overwhelm anything resembling free will. This woman didn't bump into you maliciously; she's blind. This soldier standing in formation didn't pass out because he doesn't have what it takes; he's diabetic and needs his insulin. This woman isn't heartless because she didn't help the elderly person who had fallen; she's paralyzed from a spinal cord injury.

Similar shifts in the perception of criminal responsibility have occurred in other realms. For example, from two to seven centuries ago, prosecution of animals, objects, and corpses thought to have intentionally harmed someone was commonplace. Some of these trials had a weirdly modern tint to them—in a 1457 trial of a pig and her piglets for eating a child, the pig was convicted and executed, whereas the piglets were found to be too young to have been responsible for their acts. Whether the judge cited the maturational state of their frontal cortices is unknown.

Thus hardly anyone believes that we have complete conscious control over our behavior, that biology never constrains us. We'll ignore this stance forever after.
DRAWING LINES IN THE SAND

Nearly everyone believes in the third proposition, that we are somewhere between complete and no free will, that this notion of free will is compatible with the deterministic laws of the universe as embodied in biology. Only a subset of versions of this view fit the fairly narrow philosophical stance called "compatibilism." Instead the broader view is that we have something resembling a spirit, a soul, an essence that embodies our free will, from which emanates behavioral intent, and that this spirit coexists with biology that can sometimes constrain it. It's a kind of libertarian dualism ("libertarian" in the philosophical rather than the political sense), what Greene calls "mitigated free will." It's encapsulated in the idea that a well-intentioned spirit, while willing, can be thwarted by flesh that is sufficiently weak.

—

Let's start with the definitive legal framing of mitigated free will. In 1843 a Scotsman named Daniel M'Naghten tried to assassinate British prime minister Robert Peel.10 He mistook Peel's private secretary, Edward Drummond, for the prime minister and shot him at close range, killing him. At his arraignment M'Naghten stated, "The Tories in my native city have compelled me to do this. They follow and persecute me wherever I go, and have entirely destroyed my peace of mind. They followed me to France, into Scotland . . . wherever I go. I cannot get no rest from them night or day. I cannot sleep at night. . . . I believe they have driven me into a consumption. I am sure I shall never be the man I formerly was. . . . They wish to murder me. It can be proved by evidence. . . . I was driven to desperation by persecution."

In today's terminology M'Naghten had some form of paranoid psychosis. It may not have been schizophrenia—his delusional symptoms started many years later than the typical age of onset of the disease.
Regardless of the diagnosis, M'Naghten had abandoned his business and spent the previous two years wandering Europe, hearing voices, convinced that he was being spied upon and persecuted by powerful people, with Peel his most diabolical tormentor. In the words of a doctor who testified as to his insanity, "The delusion was so strong that nothing but a physical impediment could have prevented him from committing the act [i.e., murder]."

M'Naghten was so clearly impaired that the prosecution withdrew criminal charges, agreeing with the defense that he was insane. The jury agreed, and M'Naghten spent the rest of his life in insane asylums, reasonably well treated by the standards of the time.

There was bellowing protest after the jury's decision, ranging from the man in the street to Queen Victoria—M'Naghten had gotten away with murder. The presiding judge was grilled by Parliament and stood by the decision. The equivalent of the Supreme Court was tasked by Parliament with assessing the case and supported him. And out of the decision came the formalization of what is now the common criterion for finding someone not guilty by reason of insanity, namely the "M'Naghten rule": whether, at the time of the crime, the person was "laboring under such a defect of reason from disease of the mind" that he could not distinguish right from wrong.*

The M'Naghten rule was at the core of John Hinckley Jr. being found not guilty by reason of insanity for his 1981 attempted assassination of Reagan, being hospitalized rather than jailed. There was considerable "He's getting away with it" outrage afterward; a number of states banned the M'Naghten criterion, and Congress essentially banned it for federal cases with the 1984 Insanity Defense Reform Act.* Nonetheless, the reasoning behind M'Naghten has generally withstood the test of time.
This is the essence of a stance of mitigated free will—people need to be held responsible for their actions, but being floridly psychotic can be a mitigating circumstance. It is the idea that there can be "diminished" responsibility for our actions, that something can be semivoluntary.

Here's how I've always pictured mitigated free will: There's the brain—neurons, synapses, neurotransmitters, receptors, brain-specific transcription factors, epigenetic effects, gene transpositions during neurogenesis. Aspects of brain function can be influenced by someone's prenatal environment, genes, and hormones, whether their parents were authoritative or their culture egalitarian, whether they witnessed violence in childhood, when they had breakfast. It's the whole shebang, all of this book.

And then, separate from that, in a concrete bunker tucked away in the brain, sits a little man (or woman, or agendered individual), a homunculus at a control panel. The homunculus is made of a mixture of nanochips, old vacuum tubes, crinkly ancient parchment, stalactites of your mother's admonishing voice, streaks of brimstone, rivets made out of gumption. In other words, not squishy biological brain yuck. And the homunculus sits there controlling behavior. There are some things outside its purview—seizures blow the homunculus's fuses, requiring it to reboot the system and check for damaged files. Same with alcohol, Alzheimer's disease, a severed spinal cord, hypoglycemic shock. There are domains where the homunculus and that brain biology stuff have worked out a détente—for example, biology is usually automatically regulating your respiration, unless you must take a deep breath before singing an aria, in which case the homunculus briefly overrides the automatic pilot. But other than that, the homunculus makes decisions.
Sure, it takes careful note of all the inputs and information from the brain, checks your hormone levels, skims the neurobiology journals, takes it all under advisement, and then, after reflecting and deliberating, decides what you do. A homunculus in your brain, but not of it, operating independently of the material rules of the universe that constitute modern science.

That's what mitigated free will is about. I see incredibly smart people recoil from this and attempt to argue against the extremity of this picture rather than accept its basic validity: "You're setting up a straw homunculus, suggesting that I think that other than the likes of seizures or brain injuries, we are making all our decisions freely. No, no, my free will is much softer and lurks around the edges of biology, like when I freely decide which socks to wear." But the frequency or significance with which free will exerts itself doesn't matter. Even if 99.99 percent of your actions are biologically determined (in the broadest sense of this book), and it is only once a decade that you claim to have chosen out of "free will" to floss your teeth from left to right instead of the reverse, you've tacitly invoked a homunculus operating outside the rules of science.

This is how most people accommodate the supposed coexistence of free will and biological influences on behavior.* For them, nearly all discussions come down to figuring out what our putative homunculus should and shouldn't be expected to be capable of. To get a feel for that, let's look at some of these debates.

Age, Maturity of Groups, Maturity of Individuals

In 2005's Roper v. Simmons decision, the Supreme Court ruled that you can't execute someone for a crime committed before the age of eighteen. The reasoning was straight out of chapters 6 and 7: the brain, especially the frontal cortex, is not yet at adult levels of emotional regulation and impulse control.
In other words, adolescents, with their adolescent brains, aren't as culpable as adults. The reasoning was the same as why the pig was executed but not her piglets. In the years since, there have been related rulings. In 2010's Graham v. Florida and 2012's Miller v. Alabama, the Court emphasized that juvenile offenders have the highest potential for reform (because of their still-developing brains) and thus banned life sentences without parole for them.

These decisions have prompted a number of debates:

- Just because adolescents are, on average, less neurobiologically and behaviorally mature than adults doesn't rule out the possibility of some individual adolescents being as mature as adults, and thus appropriately held to adult standards of culpability.
- Related to that is the obvious absurdity of implying that something neurobiologically magical happens on the morning of someone's eighteenth birthday, endowing them with adult levels of self-control.

The usual response to these points is that, yes, they are true, but the law often relies on group-level attributes with arbitrary age boundaries (e.g., the age at which someone can vote, drink, or drive). There is this willingness because you can't test every teenager each year, month, hour, to determine whether they are mature enough yet to, say, vote. But it's worth doing so when it comes to a teenage murderer.

In another contrarian view the issue isn't whether a seventeen-year-old is as mature as an adult but whether they are mature enough. Sandra Day O'Connor, in dissenting from the Roper decision, wrote, "The fact that juveniles are generally less culpable for their misconduct than adults does not necessarily mean that a 17-year-old murderer cannot be sufficiently culpable to merit the death penalty" (her emphasis).
Another dissenter, the late Antonin Scalia, wrote that it is "absurd to think that one must be mature enough to drive carefully, to drink responsibly, or to vote intelligently, in order to be mature enough to understand that murdering another human being is profoundly wrong."11

Amid these differing opinions everyone, including O'Connor and Scalia, agrees that there exist age-related boundaries on free will—everyone's homunculus was once too young to have its adult powers.12 Maybe it wasn't tall enough yet to reach all the control dials; maybe it was distracted from its job by fretting about that gross pimple on its forehead. And that needs to be considered during legal judgments. Just as with piglets and pigs, it's just an issue of when a homunculus is old enough.

The Nature and Magnitude of Brain Damage

Essentially everyone working with a model of mitigated free will accepts that if there is enough brain damage, responsibility for a criminal act goes out the window. Even Stephen Morse of the Universit