There is probably no other scientific discipline in which fads come and go so quickly, and with so much hype, as psychology. In his book The Quick Fix, Jesse Singal discusses eight different psychological ideas that have been promoted as quick fixes for various social problems. He refers to these as “half-baked” ideas, “ideas that may not be 100 percent bunk but which are severely overhyped” (p. 6).
The first chapter concerns the self-esteem movement, which began in 1990 with a report from the State of California titled Toward a State of Esteem. The report argued that increasing a person’s self-esteem, especially for children and adolescents, would improve nearly everything from social behavior to academic performance. The questionable origins of this report have, to my knowledge, not been previously described. Due to pressure from a “very eccentric California politician” (p. 13) named John Vasconcellos, major findings that called into question the utility of increasing self-esteem were suppressed from the report. This, in turn, led to all sorts of dingbat programs for improving self-esteem. The chapter provides many illuminating examples, such as banning games with winners in elementary schools. Self-esteem improvement programs do seem to make people score higher on subjective measures such as happiness, which is important. But they have little effect on more objective measures of behavior, which the cottage industry of self-esteem therapists is doing little to improve.
The concept of the “superpredator” (Chapter 2), the (usually Black) teenager who ran wild killing, raping, and pillaging, became a popular stereotype in the 1990s. It generated a rush of legislation that meted out much harsher punishment for teenage criminals. The claim was that these teens were destined to become career criminals because of genetic faults, poor upbringing, or both. Since birth rates were increasing, the fear was that the number of such wilding teens would rise dramatically in the coming years, posing a severe threat to society. The idea was advanced by some criminologists and picked up by politicians of both conservative and liberal persuasions. Prominent among the criminologists who advanced the superpredator idea was John DiIulio, “a careful academic in other respects” (p. 72). Singal notes that DiIulio did not put forth this idea in peer-reviewed publications, and thus it was never subjected to the criticism it would have drawn for its lack of evidence and sloppy conceptualization. In 2001 DiIulio “acknowledged… that he had simply been wrong” (p. 72) but rejected the idea that he was the cause of so many kids going to jail.
Remember how your mother would tell you to “sit up straight and have good posture”? Well, in 2010 that advice was reshaped into a sure-fire method of empowerment, especially for women, in the form of “power posing.” The idea was that if you sat up straight, leaned forward, and sort of took possession of the space around you, all kinds of good things would happen. The original paper reported that assuming such a pose increased feelings of power and people’s willingness to take a financial risk. It even increased testosterone levels compared to what were defined as more submissive or passive poses. This led to the expected outbreak of self-help books, TED talks, and general hype. The trouble was that none of it was true. In 2016 the lead author of the study, Dana Carney, posted on her UC-Berkeley webpage that “I do not believe that ‘power pose’ effects are real” (p. 82), although the paper has never been formally retracted. The problem was a form of statistical manipulation (called p-hacking) that produced apparent differences between the power and passive pose conditions where none existed.
One of the goals of the power pose movement was a legitimate one—to help women overcome sex/gender discrimination in hiring and salaries. Singal makes an important point here and throughout the book: it would be better to direct attention to the root causes of these problems rather than fall back on “half-baked” fad psychology quick fixes that don’t fix much of anything.
“Positive psychology,” the focus of Chapter 4, is a kind of successor to humanistic psychology, but without the high psychobabble content of the former and more interest in empirical verification. Positive psychology emphasizes finding ways to make already psychologically healthy people happier and more satisfied with their lives rather than dwell on psychopathology. This is a laudable goal, but positive psychology has had major problems empirically verifying its interventions. One of the founders of positive psychology is Martin Seligman, a professor at the University of Pennsylvania. Seligman is famous for trying to apply the principles of positive psychology on a mass basis through various interventions. However, these interventions have proven to be of questionable effect. “On multiple occasions, Seligman and his center [Positive Psychology Center] have made impressive claims about interventions that outpace the available evidence” (p. 108). One program, the Strath Haven Positive Psychology Curriculum, is aimed at increasing the “strength of character” of elementary school students. On his university website, Seligman claimed that the program “builds character strengths, relationships, and meaning, as well as raises positive emotions and reduces negative emotions” (p. 109). But in a peer-reviewed journal paper, he said precisely the opposite; specifically, that the “positive psychology program did not improve…character strengths” nor several other outcome measures. That report is vague about the overall effects of the program, and Singal notes that, while the study was funded by a grant worth almost $3 million, no complete report of the results has ever been published.
Despite the questionable effectiveness of Seligman’s programs, in 2008 the United States Army reached out to him to devise an intervention to deal with a significant problem—PTSD among soldiers. The result was the Comprehensive Soldier Fitness (CSF) program which incorporated modifications of an earlier intervention called the Penn Resilience Program (PRP). The PRP was “delivered to (mostly) healthy students by laypeople who can be quickly trained for the task” (p. 114). The intervention was done in groups and, not surprisingly, didn’t have much effect on students. Promoting it as an effective treatment for adults who had suffered severe trauma was, to put it mildly, a stretch. Nonetheless, the Army gave Seligman’s group a $31 million contract. As expected, the program had little effect.
The CSF program was approved and mandated by a single person, the then Army Chief of Staff, General George Casey. Casey, fine general though he may have been, had no experience evaluating psychological intervention programs. Singal cites this as an example of what he terms “unskilled intuition,” which occurs when decision makers think they have the skills and knowledge to make a decision but do not. This is a case of the Dunning-Kruger effect, a cognitive bias whereby people with limited knowledge or competence in a given intellectual or social domain vastly overestimate their knowledge or competence relative to objective criteria or to the performance of their peers. By falling for the sales pitch from Seligman et al., the Army passed up the opportunity to implement more effective programs to treat PTSD.
The concept of “grit” (Chapter 5), pretty much the same as stick-to-it-iveness, is another spawn of positive psychology. Grit was marketed to American schools by Angela Duckworth in her 2016 book Grit: The Power of Passion and Perseverance. The text mainly consisted of success stories of people with, you guessed it, real grit. But as Singal correctly notes, this was cherry-picking. Reports of students who clearly had grit but didn’t succeed were largely left out. And such people do indeed exist, as documented in Linda Nathan’s 2017 book When Grit Isn’t Enough.
Grit is said to predict success in various situations better than older, well-established measures such as conscientiousness. For example, a short ten-item grit scale was said to make valuable predictions about whether West Point cadets would make it through a challenging seven-week training course. And so it did… But not really. Ninety-eight percent of cadets scoring high on this scale completed the course. But 95 percent of all cadets complete the course, so the grit scale didn’t really add much. Some schools have jumped on the grit bandwagon with the hope that it is possible to increase grit levels and thus student success. This harkens back to the self-esteem movement in many ways.
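The base-rate problem here can be made concrete in a few lines. A minimal sketch, using only the two completion rates quoted above (the percentages are the review’s; everything else is illustrative):

```python
# Base-rate check using the figures quoted in the review:
# 98% of high-grit cadets finished the course, but 95% of ALL
# cadets finished, so the scale adds almost nothing.
base_rate = 0.95        # completion rate for all cadets
high_grit_rate = 0.98   # completion rate for cadets scoring high on grit

lift = high_grit_rate - base_rate
print(f"Added predictive value of the grit scale: {lift:.0%}")
# The naive rule "predict that everyone completes" is already right
# 95% of the time; the grit scale improves on that by about 3 points.
```

In other words, a predictor must beat the base rate by a meaningful margin to be useful, and here the margin is tiny.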
Moreover, since grit doesn’t correlate very highly with measures of student success, and there is little evidence that interventions can change grit, such programs are ill-conceived. As was the case with the Comprehensive Soldier Fitness program to combat PTSD, there are better, proven ways of improving student success, such as teaching good study habits and nurturing skills such as class attendance and time management. Grit was just the fancy new kid on the block who got all the attention.
An appealing marketing ploy for grit was to claim that increasing grit would be especially helpful in decreasing the inequality between wealthy and poor children in school achievement. The failure of grit to improve much of anything, or to predict much of anything, belies this hope. Grit was another attempt to avoid making the major changes in the American educational system that would be needed to really address social inequalities. It was just another failed, quick fix.
In Chapter 6, Singal discusses the Implicit Association Test (IAT), commonly known as the “bias test,” arguably the most controversial topic in social psychology. There are numerous different varieties of this test, first developed in 1998. “Implicit,” as used here, means “unconscious.” The test is said to measure implicit or unconscious bias against a given racial or ethnic group by using a reaction time measure. Bias is found when “someone is quicker to connect positive concepts with white people and negative concepts with black people” (p. 186). The controversial finding is that people who show no racial or ethnic biases in behavior or explicit attitudes are scored as highly biased by the IAT. The test has become a mainstay of diversity training programs. The basic idea is to identify people who hold implicit biases and then train these biases out of them.
There are serious problems with this approach. The IAT is a test and, like any other test, must meet two fundamental criteria before it can be ethically used to guide any decision making. First, a test claimed to measure some stable characteristic must be reliable. Reliability means that a test must give close to the same results on repeated testing. If the Hines Test of Baseball Skill (HTBS) generates widely different scores when given two weeks apart, it isn’t reliable. A test must also be valid—there must be independent evidence that it measures what it claims to measure. If the HTBS is very reliable, but HTBS scores do not correlate highly with some real-world measure of baseball skill, it is not valid. The IAT is not reliable. The correlations obtained when reliability is measured “have ranged from r = .32 to r = .65” (p. 182). “By the normal standards of psychology,” these figures put “the IAT well below the threshold of usefulness in real-world settings” (p. 181). What Singal does not point out, unfortunately, is that if a test is not reliable, it cannot be valid. That is, if the scores are bouncing around, they can’t be telling us anything about the stable trait the test is advertised as measuring. Indeed, it is clear from several meta-analyses described by Singal that the IAT is not valid.
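The claim that an unreliable test cannot be valid follows from standard psychometrics: a test’s correlation with any outside criterion is capped by the square root of its test-retest reliability. A minimal sketch, plugging in the reliability range quoted above (the bound itself is a textbook result, not something from Singal’s book):

```python
import math

# Psychometric ceiling: validity r is at most sqrt(reliability),
# assuming the criterion itself is measured perfectly.
for reliability in (0.32, 0.65):  # range reported for the IAT (p. 182)
    max_validity = math.sqrt(reliability)
    print(f"test-retest r = {reliability:.2f} -> "
          f"validity can be at most r = {max_validity:.2f}")
```

So even at the optimistic end of the reported range, the IAT’s correlation with real-world behavior is mathematically capped well below what would justify individual-level decisions.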
A related problem exists: “it has never been clearly stated what it [the IAT] measures” (p. 186); it is simply, and tautologically, assumed that having a particular score on the IAT means that the person has implicit bias “without that score implying a connection to real-world behavior” (p. 187). The meta-analyses referred to above show that “the evidence is simply too lacking for the test to be used to predict individual behavior” (p. 184). Still, people do show a wide range of scores on these tests—these differences must be due to something. One possibility, of course, is some sort of bias. But Singal reviews “a significant amount of evidence that the IAT measures a variety of things apart from implicit bias itself” (p. 188). Given this, it’s certainly odd that the IAT is accepted when the “psychological establishment… would surely reject a similarly noisy and arguably misleading test of depression or anxiety” (p. 188).
The general lack of validity of the IAT makes it highly problematic as a tool for changing behavior, although it has become an established tool in antiracism and diversity training. Singal devotes much discussion at the end of Chapter 6 to the idea that it would be better to recognize that the most serious problem facing minority groups is not implicit cognitions that may never express themselves in overt behavior but the structure of a society that oppresses minorities. This point is similar to the one made regarding self-esteem and grit in previous chapters. It’s a lot easier to focus on “even more microscopic examinations of white people’s behavior and attitudes and etiquette” than to change the structure of the system that so disadvantages minorities. None of this is to say that implicit bias doesn’t exist; as Singal makes clear, it does. The questions are (1) whether the IAT measures it and (2) whether training programs based on the IAT have any real beneficial effect in mitigating it. The answer to both questions appears to be “no.”
The crisis of replication in psychology in general, and the claims for “social priming” in particular, are the topics of Chapter 7. Social priming refers to the idea that subtle environmental cues can have large effects on behavior. Two such claims are illustrative. In one study, one group of college students processed words that suggested elderliness (e.g., frail, old, Florida) while a control group processed age-neutral words. The supposed finding was that those who processed the “geezer” words took more time to walk down a corridor than the control group. In another study, looking at a picture of Rodin’s The Thinker reduced viewers’ religiosity compared to a control group. Studies like these exploded in the early 21st century. Then along came Daryl Bem and his (in)famous study of psi, in which he claimed to have shown real psi effects. Since his paper was published in what was considered the leading journal of social psychology, it attracted a great deal of attention from other psychologists and the popular media.
Singal discusses the fact, noted previously by many other commentators, that Bem’s study was the straw that broke the camel’s back in terms of accepting the standard way that statistical analyses of psychological research had been done. This was because the results of Bem’s experiments were so inherently implausible. That the usual statistical analyses seemed to yield evidence in favor of parapsychological phenomena suggested something badly amiss in how those analyses operated. The questionable practices included running multiple statistical tests and then reporting only those that seemed to confirm the initial hypothesis. There was also the practice of changing the study’s hypothesis after the fact to conform with the obtained results, among other issues. A broader problem was calculating levels of statistical significance and reporting them as traditional p-values, where p ≤ .05 was taken as showing that the effect was real. To be clear, all that p ≤ .05 means is that the result would be unlikely if there were no real effect; that is, it would have occurred by chance five times or fewer out of 100. It does not mean that it could not have occurred by chance.
The replication crisis refers to the finding that many of the much-ballyhooed study results in social psychology do not replicate when other researchers repeat the experiments. This, too, became clear when Bem’s results did not replicate in the hands of those who tried. To make matters worse, even the journal that published Bem’s paper initially refused to publish failures to replicate his findings, not even sending the paper reporting the failures out for peer review. Most journals never published studies reporting attempts to replicate previous findings, whether the replications succeeded or not. Thus, results due to chance or statistical manipulation continued to be accepted as real. When this was realized, attempts began to replicate many of the “sexy” findings in social priming. Most failed to replicate, including the priming studies noted above.
The positive response to this methodological embarrassment is that some journals now require more rigorous standards for publication. Some even require that researchers preregister their studies, submitting a sort of “letter of intent” detailing the exact hypotheses to be tested, the methodology, and the statistical analysis to be used before the study is even begun. More researchers are also using Bayesian approaches to statistical analysis. This approach can be best summed up by the phrase well known to skeptics: “extraordinary claims demand extraordinary proof.” In other words, if your claim is highly unlikely to be true before the study (e.g., looking at The Thinker makes people less religious), you’d better have more than one lone result of p < .05 to support it.
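The Bayesian point can be sketched in a few lines. Assume, purely for illustration, that a lone p < .05 result amounts to a modest likelihood ratio of about 3 to 1 in favor of the claimed effect; how convincing that is then depends almost entirely on the claim’s prior plausibility:

```python
# Toy Bayes-rule sketch of "extraordinary claims demand extraordinary
# proof." The likelihood ratio of 3 is an illustrative assumption,
# not a figure from the book.
def posterior(prior, likelihood_ratio=3.0):
    """Posterior probability the effect is real, via Bayes' rule on odds."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

for prior in (0.5, 0.01, 0.001):  # plausible claim vs. psi-like claim
    print(f"prior {prior:.3f} -> posterior {posterior(prior):.3f}")
```

With a coin-flip prior the posterior reaches .75, but for a psi-like claim with a prior of one in a thousand it barely moves past .003; a single conventional significance test simply cannot carry an extraordinary claim.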
Oddly, Singal hardly mentions that the same replication crisis is found in many medical studies and does not cite Ioannidis’s 2005 PLoS Medicine paper that brought this problem to the fore, well before Bem’s paper appeared. The chapter seems a bit out of place in the book because, popular as social priming was, the enthusiasm about it never reached the level of claiming that priming was a way to cure various social ills, as was the case for the topics of the other chapters.
The final chapter with a specific program or concept as its subject, Chapter 8, is about “nudging.” Nudging is a way of arranging the environment to make it easier for people to behave in a desired way, as opposed to strong-arm tactics such as regulations or legislation. This technique for changing behavior “has a fair bit of genuine empirical heft behind it” (p. 263).

The chapter starts with a great example. Before 2015 or so, New Yorkers who committed minor violations were given a carbon copy of the ticket the officer wrote. Buried in the small print on the ticket was the requirement that the defendant appear in court at a particular date, place, and time. An unacceptable number of people didn’t show up for their court dates. To solve this problem, the design of the ticket copy was changed to make the requirements much more obvious. This is a beautiful example of using human factors design to solve a problem.

Given this example, I expected the rest of the chapter to be about how the human factors approach to designing such things as forms, signs, roadways, kitchen appliances, and even buildings can be extremely useful in producing desired behavior. But right away, the chapter took a bizarre turn. It veered off into decision-making research and the work of Kahneman and Tversky on how mental shortcuts (“heuristics”) result in poor decision making. This goes on for a few pages, and then we’re back to nudging.

There’s an interesting example of how the Obama administration arranged for stimulus money to be distributed to individuals in increments rather than as one lump sum. The goal of the stimulus money was to get people to spend more. Had it been delivered in one lump sum, people would have been more likely to put it away in savings. Multiple smaller individual payments were more likely to be spent.
The chapter, which seems more disjointed than the others, ends with the important observation that nudges don’t always work and that by focusing on them, more serious institutional problems can be overlooked.
In the book’s final chapter, Singal covers the reasons for the wide acceptance of quick fixes and the problems with such acceptance. The reasons are rather obvious—quick fixes are easy to understand and thus gain popularity, especially when their creators promote them through TED talks and public media. As mentioned previously, unskilled intuition also plays a role. Quick fixes get other rewards—academic promotions, consulting gigs, book royalties, etc. Nothing too surprising there.
What is more revealing is how the acceptance of quick fixes may do harm—more harm than just not solving the problems very well. To the extent that quick fixes don’t work particularly well, the groups at which they were directed will not benefit very much. There is then a danger that these groups will be blamed for their failures. If all it takes for disadvantaged children to succeed in school is more grit, then when they get all gritty and still don’t excel, well, it must be their fault. And this can, in turn, breed disappointment and hostility.
Singal’s book is an excellent contribution to the skeptical evaluation of social programs where the claims go far beyond reality. It will be eye-opening to many unfamiliar with the actual success rates of the programs discussed. The text is never heavy with academic jargon and clearly explains the many, sometimes complex, ideas. It is well referenced and not without a pleasing bit of wit.
About the Author
Terence Hines is a cognitive neuroscientist and professor at the Psychology Department, Pace University, Pleasantville, NY and adjunct professor of neurology at New York Medical College in Valhalla, NY. His research focuses on paranormal belief, the cognitive representation of number and, when he has time, the nature of bilingual memory. He is the author of Pseudoscience and the Paranormal. He received his undergraduate education at Duke University and his PhD from the University of Oregon.
This article was published on February 25, 2023.