Over the past decade behavioral science, particularly psychology, has come under fire from critics for being fixated on progressive political ideology, most notably Diversity, Equity, and Inclusion (DEI). The critics’ evidence is, unfortunately, quite strong. For example, a recent volume, Ideological and Political Bias in Psychology,1 recounts many incidents of scholarly censorship and personal attacks that a decade ago might have only been conceivable as satire.
We believe that many of the problems plaguing contemporary behavioral science, especially on issues touching upon DEI, can best be understood, at their root, as a failure to adhere to basic scientific principles. In this essay, we will address three fundamental scientific principles: (1) Prioritize Objective Data Over Lived Experience; (2) Measure Well; and (3) Distinguish Appropriately Between Correlation and Causation. We will show how DEI scholarship often violates those principles, and offer suggestions for getting behavioral science back on track. “Getting back to the basics” may not sound exciting but, as athletes, musicians, and other performers have long recognized, reinforcing the fundamentals is often the best way to eliminate bad habits in order to then move forward.
The Failure to Adhere to Basic Scientific Principles
Principle #1: Prioritize Objective Data Over Lived Experience
A foundational assumption of science is that objective truth exists and that humans can discover it.2, 3, 4, 5 We do this most effectively by proposing testable ideas about the world, making systematic observations to test the ideas, and revising our ideas based on those observations. A crucial point is that this process of proposing and testing ideas is open to everyone. A fifth grader in Timbuktu, with the right training and equipment, should be able to take atmospheric observations that are as valuable as those of a Nobel Prize-winning scientist from MIT. If the fifth grader’s observations are discounted, this should only occur because their measurement methods were poor, not because of their nationality, gender, age, family name, or any other personal attribute.
A corollary of science being equally open to all is that an individual’s personal experience or “lived experience” carries no inherent weight in claims about objective reality. It is not that lived experience doesn’t have value; indeed, it has tremendous value in that it provides a window into individuals’ perceptions of reality. However, perception can be wildly inaccurate and does not necessarily equate to reality. If that Nobel Prize-winning scientist vehemently disputed global warming because his personal experience was that temperatures have not changed over time, yet he provided no atmospheric measurements or systematic tests of his claim, other scientists would rightly ignore his statements—at least as regards the question of climate change.
The limited utility of a person’s lived experience seems obvious in most scientific disciplines, such as in the study of rocks and wind patterns, but less so in psychology. After all, psychological science involves the study of people—and they think and have feelings about their lived experiences. However, what is the case in other scientific disciplines is also the case in psychological science: lived experience does not provide a foolproof guide to objective reality.
To take an example from the behavioral sciences, consider the Cambridge-Somerville Youth Study.6 At-risk boys were mentored for five years, from the ages of 10 to 15. They participated in a host of programs, including tutoring, sports, and community groups, and were given medical and psychiatric care. Decades later, most of those who participated claimed the program had been helpful. Put differently, their lived experience was that the program had a positive impact on their lives. However, on important outcomes these boys fared no better than a matched group of at-risk boys who were not provided mentoring or extra support. In fact, boys in the program ended up more likely to engage in serious street crimes and, on average, they died at a younger age. The critical point is that giving epistemic authority to lived experience would have led to inaccurate conclusions. And the Cambridge-Somerville Youth Study is not an isolated example. There are many programs that people feel are effective but that, when tested systematically, turn out to be ineffective at best. These include DARE,7 school-wide mental health interventions,8 and—of course—many diversity training programs.9
Indeed, when it comes to concerns related to DEI, the scientific tenet of prioritizing testable truth claims over lived experience has often fallen by the wayside. Members of specific identity groups are given the privilege of speaking about certain matters in ways that cannot be contested by those from other groups. In other words, in direct contradiction of the scientific method, some people are granted epistemic authority based solely on their lived experience.10
Consider gender dysphoria. In the past decade, there has been a drastic increase in the number of people, particularly children and adolescents, identifying as transgender. Those who express the desire to biologically transition often describe their lived experience as feeling “born in the wrong body,” and express confidence that transition will dramatically improve their lives. We argue that while these feelings must be acknowledged, they should not be taken as objective truth; instead, such feelings should be weighed against objective data on the life outcomes of others who have considered gender transition and/or transitioned. And those data, while limited, suggest that many individuals who identify as transgender during childhood, but who do not medically transition, eventually identify again with the gender associated with their birth sex.11, 12 Although these are small, imperfect studies, they underscore that medical transition is not always the best option.
Caution in automatically acceding to a client’s preference to transition is particularly important among minors. Few parents and health care professionals would affirm a severely underweight 13-year-old’s claim that, based on their lived experience, they are fat and will only be happy if they lose weight. Nevertheless, many psychologists and psychiatrists make a similar mistake when they affirm a transgender child’s desire to transition without carefully weighing the risks. In one study, 65 percent of people who had detransitioned reported that their clinician, who often was a psychologist, “did not evaluate whether their desire to transition was secondary to trauma or a mental health condition.”13 The concern, in other words, is that lived experience is being given too much weight. How patients feel is important, but their feelings should be only one factor among many, especially if they are minors. Mental health professionals should know this, and parents should be able to trust them to act accordingly.
Principle #2: Measure Well
Another basic principle of behavioral science is that anything being measured must be measured reliably and validly. Reliability refers to the consistency of measurement; validity refers to whether the instrument is truly measuring what it claims to measure. For example, a triple beam balance is reliable if it yields the same value when repeatedly measuring the same object. The balance is valid if it yields a value of exactly 1 kg when measuring the International Prototype of the Kilogram, the platinum-iridium cylinder housed in a vault near Paris that served as the world’s reference kilogram until the unit was redefined in 2019.
Behavioral scientists’ understanding of any concept is constrained by the degree to which they can measure it consistently and accurately. Thus, to make a claim about a concept, whether about its prevalence in a population or its relation to another concept, scientists must first demonstrate both the reliability and the validity of the measure being used. For some measures of human behavior, such as time spent listening to podcasts or number of steps taken each day, achieving good reliability and validity is reasonably straightforward. Things are generally more challenging for the self-report measures that psychologists often use.
Nevertheless, good measurement can sometimes be achieved, and the study of personality provides a nice model. In psychology, there are several excellent measures of the Big Five personality factors (Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness).14 Individuals’ responses are highly reliable: people who rate themselves as highly extraverted as young adults rate themselves similarly years later. Moreover, personality assessments are valid: individuals’ responses correlate with their actual day-to-day behaviors, as reported by themselves and as observed by others.15 In other words, people who rate themselves as high (versus low) in extraversion on psychological questionnaires really do spend more time socializing.
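To make these two properties concrete, here is a minimal sketch, in Python with simulated data of our own invention (the sample size, noise levels, and variable names are illustrative assumptions, not values from the studies cited above), of how a researcher might quantify each: test-retest reliability as the correlation between two administrations of the same questionnaire, and criterion validity as the correlation between questionnaire scores and observed behavior.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical sample of respondents

# Latent trait: each person's "true" extraversion level (unobserved).
true_extraversion = rng.normal(0, 1, n)

# Self-ratings at two time points: noisy readings of the same trait.
rating_t1 = true_extraversion + rng.normal(0, 0.4, n)
rating_t2 = true_extraversion + rng.normal(0, 0.4, n)

# A behavioral criterion (e.g., weekly hours spent socializing),
# partly driven by the trait.
socializing = 10 + 3 * true_extraversion + rng.normal(0, 2, n)

# Reliability: does the measure agree with itself over time?
test_retest_r = np.corrcoef(rating_t1, rating_t2)[0, 1]

# Validity: does the measure track real-world behavior?
criterion_r = np.corrcoef(rating_t1, socializing)[0, 1]

print(f"Test-retest reliability: r = {test_retest_r:.2f}")
print(f"Criterion validity:      r = {criterion_r:.2f}")
```

A measure can be reliable without being valid (a miscalibrated balance gives the same wrong weight every time), but it cannot be valid without being reliable; that asymmetry is why reliability is checked first.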
However, not all psychological measures turn out to have solid reliability and validity. These include the popular Myers-Briggs Type Indicator personality test and projective tests such as the Rorschach. Unfortunately, in the quest to support DEI, some concepts that fail the requirements of good measurement are used widely and without reservation. The concept of microaggressions, for example, has gained enormous traction despite having fundamental measurement problems.
“Microaggressions” were brought to psychologists’ attention by Derald Wing Sue and colleagues.16 Originally described as “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color” (p. 271),17 the concept has since expanded in use to describe brief, verbal or nonverbal, indignities directed toward a different “other.”18, 19
In 2017, Scott Lilienfeld discussed how the failure to adhere to the principles of good measurement has rendered the concept of microaggression “wide open,” without any clear anchors to reality.20 The primary obstacle to establishing validity, that is, to showing that scientists are truly measuring what they claim to measure, is that a “microaggression” is defined in the eye of the beholder.21 Thus, any person at any point can say they have been “microaggressed” against, and no one can test, let alone refute, the claim because it is defined solely by the claimant’s subjective appraisal—their lived experience.
As Scott Lilienfeld explained, the end result is that essentially anything, including opposing behaviors (such as calling on a student in class or not calling on a student in class) can be labeled a microaggression. A question such as, “Do you feel like you belong here?” could be perceived as a microaggression by one person but not by someone else; in fact, even the same person can perceive the same comment differently depending on their mood or on who asks the question (which would indicate poor reliability). Our criticism of microaggressions, then, spans concerns related to both weak measurement and an undue reliance on lived experience.
Another of psychology’s most famous recent topics is the Implicit Association Test (IAT), which supposedly reveals implicit, or subconscious, bias. The IAT measures an individual’s reaction times as they rapidly sort pictures and words into categories. A video22 may be the best way to appreciate what happens in the IAT, but the basic idea is that if a person is quicker to pair pictures of Black people than pictures of White people with negative words (for example, “lazy” or “stupid”), then they have demonstrated unconscious bias against Black people. The IAT was introduced by Anthony Greenwald and colleagues in the 1990s.23 They announced that their newly developed instrument, the race IAT, measures unconscious racial prejudice or bias and that 90 to 95 percent of Americans, including many racial minorities, demonstrated such bias. Since then, these scholars and their collaborators (plus others, such as DEI administrators) have enjoyed tremendous success advancing the claim that the race IAT reveals pervasive unconscious bias that contributes to society-wide discrimination.
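For readers curious about the mechanics, the score commonly reported for the IAT is a standardized difference in reaction times between the two critical sorting blocks. Below is a deliberately simplified sketch of that computation; real scoring procedures also trim extreme latencies and penalize errors, and the reaction times here are invented for illustration.

```python
import numpy as np

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D score: the difference in mean latency between
    the two critical blocks, divided by the pooled standard deviation
    of all latencies. Positive values mean slower responses in the
    'incompatible' block."""
    compatible = np.asarray(compatible_ms, dtype=float)
    incompatible = np.asarray(incompatible_ms, dtype=float)
    pooled_sd = np.concatenate([compatible, incompatible]).std(ddof=1)
    return (incompatible.mean() - compatible.mean()) / pooled_sd

# Invented reaction times (milliseconds) for a single test-taker.
compatible = [612, 655, 590, 701, 640, 622, 675, 598]
incompatible = [748, 802, 690, 770, 815, 729, 760, 744]

print(f"D = {iat_d_score(compatible, incompatible):.2f}")
```

Note how much inferential weight this places on differences of a few hundred milliseconds: the entire claim of “unconscious bias” rests on such latency gaps being reliable and valid indicators, which is precisely what is in dispute.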
Despite its immense influence, the IAT is a flawed measure. Regarding reliability, the correlation between a person’s scores when taking the test at two different times hovers around 0.5.24 This is well below conventionally acceptable levels in psychology, and far below the test-retest reliabilities of accepted personality and cognitive ability measures, which can reach around 0.8 even when a person retakes the test decades later.25, 26
As for the IAT’s validity, nobody has convincingly shown that patterns of reaction times actually reflect “unconscious bias” (or “implicit prejudice”) as opposed to cultural stereotypes.27 Moreover, in systematic syntheses of published studies, the association between scores on the race IAT and observations or measurements of real-world biased behavior is inconsistent and weak.28, 29 In other words, scores on the IAT do not meaningfully correlate with other ways of measuring racial bias or with real-life manifestations of it.
Principle #3: Distinguish Appropriately Between Correlation and Causation
“Correlation does not equal causation” is another basic principle of behavioral science (indeed, of all science). Although human brains seem built to readily notice and even anticipate causal connections, a valid claim that “X” has a causal effect on “Y” needs to meet three criteria, and a correlation between X and Y is only the first. The second criterion is that X precedes Y in time. The third and final criterion is that the link between X and Y is not actually due to some other variable that influences both X and Y (a “confounder”). To establish this final point, researchers typically need to show that when X is manipulated in an experiment, Y also changes.
Imagine, for instance, that a researcher asks students about their caffeine intake and sleep schedule, and upon analyzing the data finds that students’ caffeine consumption is negatively correlated with how much they sleep—those who report consuming more caffeine tend to report sleeping less. This is what many psychologists call correlational research (or associational or observational research). These correlational data could mean that caffeine consumption reduces sleep time, but the data could also mean that a lack of sleep causes an increase in caffeine consumption, or that working long hours causes both a decrease in sleep and an increase in caffeine. To make the case that caffeine causes poor sleep, the researcher must impose, by random assignment, different amounts of caffeine on students to determine how sleep is affected by varying doses. That is, the researcher would conduct a true experiment.
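A brief simulation can make the confounding problem vivid. In this sketch (our illustration; every coefficient is invented), caffeine has no direct effect on sleep whatsoever, yet the two variables are strongly correlated because work hours drive both; statistically removing the confounder makes the association all but vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Confounder: weekly work hours.
work_hours = rng.normal(45, 10, n)

# Caffeine and sleep are each driven by work hours plus noise.
# Crucially, caffeine has NO direct effect on sleep here.
caffeine = 1 + 0.05 * work_hours + rng.normal(0, 0.5, n)  # cups/day
sleep = 11 - 0.08 * work_hours + rng.normal(0, 0.7, n)    # hours/night

print(f"Raw caffeine-sleep correlation: "
      f"{np.corrcoef(caffeine, sleep)[0, 1]:.2f}")  # strongly negative

def residuals(y, x):
    """Remove the linear effect of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: correlate what remains of each variable
# after regressing out the confounder.
r_partial = np.corrcoef(residuals(caffeine, work_hours),
                        residuals(sleep, work_hours))[0, 1]
print(f"Controlling for work hours:     {r_partial:.2f}")  # near zero
```

Of course, statistical control only works for confounders a researcher has thought to measure, which is why random assignment remains the gold standard: it balances measured and unmeasured confounders alike.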
Distinguishing between correlation and causation is easier said in the abstract than practiced in reality, even for psychological scientists who are specifically trained to make the distinction.30 Part of the difficulty is that in behavioral science, many variables that are generally thought of as causal cannot be manipulated for ethical or practical reasons. For example, researchers cannot impose neglect (or abuse, corporal punishment, parental divorce, etc.) on some children and not others to study how children are affected by the experience. Still, absent experiments, psychologists bear the responsibility of providing converging, independent lines of evidence that indicate causality before they draw a causal conclusion. Indeed, scientists did this when it came to claiming that smoking causes cancer: they amassed evidence from national datasets with controls, discordant twin designs, correlational studies of exposure to second-hand smoke, non-human experiments, and so on—everything but experiments on humans—before coming to a consensus view that smoking causes cancer in humans. Our point is that investigating causal claims without true experiments is possible, but extremely difficult and time-consuming.
That said, the conflation of correlation with causation seems especially prevalent when it comes to DEI issues. In the context of microaggressions, for example, a Google search quickly reveals many scholars claiming that microaggressions cause psychological harm. Lilienfeld has been a rare voice suggesting that it is dangerous to claim that microaggressions cause mental health issues when there are no experimental data to support such a claim. Moreover, there is a confounding variable that predicts both (1) perceiving oneself as having been “microaggressed” against and (2) struggling with one’s mental health—namely, the well-documented personality trait of neuroticism. In other words, individuals who are prone to experience negative emotions (those who are high in neuroticism) often perceive that more people try to inflict harm on them than actually do, and these same individuals also struggle with mental health.
Assuming we were able to develop a workable definition of “microaggressions,” what would a true experiment look like? An experiment would require that participants be exposed to microaggressions (or not), and then be measured or observed for indications of psychological harm. There are valid ethical concerns with such a study, but we believe it can be done. There is a lengthy precedent in psychological research of inflicting temporary discomfort with appropriate safeguards. For instance, in the widely used Trier Social Stress Test (TSST), participants make a speech with little preparation time in front of judges who purposefully avoid any non-verbal reaction, followed by a mental arithmetic task.31 If the TSST is acceptable for use in research, then it should also be acceptable to expose study participants to subtle slights.
This fallacy of equating correlation with causation also arises in the context of gender transitioning and suicide. To make the point that not being able to transition is deeply damaging, transgender individuals, and sometimes their professional supporters, may ask parents something such as, “Would you rather have a dead daughter or a living son?” One logical flaw here is assuming that because gender distress is associated with suicidal ideation, the gender distress must be causing the suicidal ideation. However, other psychological conditions, such as depression, anxiety, trauma, eating disorders, ADHD, and autism, could be causing both the gender distress and the suicidal ideation—another case of confounding variables. Indeed, these conditions occur more frequently in individuals who identify as transgender. Thus, it is quite possible that someone may suffer from depression, and this simultaneously raises their likelihood of identifying as transgender and of expressing suicidal ideation.
It is not possible (nor would it be ethical if possible) to impose gender identity concerns on some children and not others to study the effect of gender dysphoria on suicidality. However, at this point, the correlational research that does exist has not offered compelling evidence that gender dysphoria causes increased suicidality. Studies have rarely attempted to rule out third variables, such as other mental health diagnoses. The few studies that have tried to control for other variables have yielded mixed results.32, 33 Until researchers have consistently isolated gender dysphoria as playing an independent role in suicidality, they should not claim that gender dysphoria increases suicide risk.
Over three decades ago, the psychologist David Lykken wrote, “Psychology isn’t doing very well as a scientific discipline and something seems to be wrong somewhere” (p. 3).34 Sadly, psychology continues to falter; in fact, we think it has gotten worse. The emotional and moral pull of DEI concerns is understandable, but it may have short-circuited critical thinking about the limitations of lived experience, the requirement of using only reliable and valid measurement instruments, and the need to meet strict criteria before claiming that one variable has a causal influence on another.
DEI Concepts Contradict Known Findings about Human Cognition
As we have shown, the empirical bases for some DEI concepts rest on violations of basic scientific principles. In addition, certain DEI ideas run counter to important findings about human nature that scientists have established by adhering to those very principles. We discuss three examples below.
Out-Group Antipathy
Humans are tribal by nature. We have a long history of living in stable groups and competing against other groups. Thus, it is no surprise that one of social psychology’s most robust findings is that in-group preferences are powerful and easy to evoke. For example, in studies where psychologists create in-groups and out-groups using arbitrary criteria such as shirt color, adults and children alike show a strong preference for members of their own group.35, 36 Even infants prefer those who are similar to themselves37 and respond preferentially to those who punish dissimilar others.38
DEI, although generally well-intentioned, often overlooks this tribal aspect of our psychology. In particular, in the quest to confront the historical mistreatment of certain identity groups, it often instigates zero-sum thinking (i.e., that one group owes a debt to another; that one group cannot gain unless another loses). This type of thinking will exacerbate, rather than mitigate, animosity. A more fruitful approach would emphasize individual characteristics over group identity, and the common benefits that can arise when all individuals are treated fairly.
Expectancies
When people expect to feel a certain way, they are more likely to experience that feeling.39, 40 Thus, when someone, especially an impressionable teenager or young adult, is told that they are a victim, the statement (even if true) is not merely a neutral descriptor. It can also set up the expectation of victimhood, with the downstream consequence of making them feel even more victimized. DEI microaggression workshops may do exactly this—they prime individuals to perceive hostility and negative intent in ambiguous words and actions.41 The same logic applies to more pronounced forms of bigotry. For instance, when Robin DiAngelo describes a “uniquely anti-black sentiment integral to white identity” (p. 95),42 the suggestion that all White people are anti-Black might have the effect of exacerbating both actual and perceived racism. Of course, we need to deal honestly with any and all racism where it exists, but it is also important to understand the potential costs of exaggerating such claims. Expectancy effects might also interact with the “virtuous victim effect,” wherein individuals perceive victims as being more moral than non-victims.43, 44 Thus, there can be social value gained simply by presenting oneself as a victim.
Cognitive Biases
Cognitive biases are one of the most important and well-replicated discoveries of the behavioral sciences. It is therefore troubling that, in the discussion of DEI topics, psychologists often fall victim to those very biases.
A striking example is the American Psychological Association’s (APA) statement shortly after the death of George Floyd, which provides a textbook illustration of the availability bias, the tendency to overvalue evidence that easily comes to mind. The APA, the largest psychological organization in the world, asserted after Floyd’s death that “The deaths of innocent black people targeted specifically because of their race—often by police officers—are both deeply shocking and shockingly routine.”45 How “shockingly routine” are they? According to the Washington Post database of police killings, in 2020 there were 248 Black people killed by police. By comparison, over 6,500 Black people were killed in traffic fatalities that year—a 26-fold difference.46 Moreover, some of those 248 victims were not innocent: given that 216 were armed, some of the killings were probably an appropriate use of force by police officers defending themselves or others. And some, presumably, were not killed specifically because of their race. So why would the APA describe a relatively rare event as “shockingly routine”? The statement came in the aftermath of the widely publicized killings of Floyd, Ahmaud Arbery, and Breonna Taylor. In other words, these rare events were likely perceived as common because widespread media coverage made them readily available in our minds.
Unfortunately, the APA also recently fell prey to another well-known bias, the base rate fallacy, wherein relevant population sizes are ignored. In this case, the APA described new research that found “The typical woman was considered to be much more similar to a typical White woman than a typical Black woman.”47 Although not stated explicitly, the implication seems to be that, absent racism, the typical woman would be perceived as roughly midway between a typical White woman and a typical Black woman. That is an illogical conclusion given base rates. In the U.S., White people outnumber Black people by roughly 5 to 1; hence, the typical woman should indeed be perceived as more similar to a typical White woman than to a typical Black woman.
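To spell out the arithmetic (our illustration, using the approximate 5-to-1 ratio just cited): if judgments of the “typical woman” simply track how frequently one encounters members of each group, the expected prototype is a frequency-weighted average,

$$\text{typical woman} \approx \tfrac{5}{6}\,(\text{typical White woman}) + \tfrac{1}{6}\,(\text{typical Black woman}),$$

which places the prototype about 83 percent of the way toward the typical White woman, even in a judgment process containing no racism at all.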
What Happened? Some Possible Causes
At this stage, we expect that many readers may be wondering how it can be that social scientists regularly violate basic scientific principles—principles that are so fundamental that these same social scientists routinely teach them in introductory courses. One possible reason is myside bias, wherein individuals process information in a way that favors their own “team.” For example, in the case of the race Implicit Association Test, proponents of the IAT might more heavily scrutinize the methodology of studies that yield negative results compared to those that have yielded their desired results. Similarly, although lived experience is a limited kind of evidence, it certainly is a source of evidence, and thus scholars may elevate its importance and overlook its limitations when doing so bolsters their personal views.
A related challenge facing behavioral scientists is that cognitive biases are universal and ubiquitous—everyone, including professional scientists, is susceptible.48 In fact, one might say that the scientific method, including the three principles we emphasize here, is an algorithm (i.e., a set of rules and processes) designed to overcome our ever-present cognitive biases.
A third challenge confronting behavioral scientists is the current state of the broader scientific community. Scientific inquiry works best when practiced in a community adhering to a suite of norms, including organized skepticism, that incentivize individuals to call out each other’s poor practices.49, 50 In other words, in a healthy scientific community, if a claim becomes widely adopted without sufficient evidence, or if a basic principle is neglected, a maverick scientist would be rewarded for sounding the alarm by gaining respect and opportunities. Unfortunately, the scientific community does not act this way with respect to DEI issues, perhaps because the issues touch widely held personal values (e.g., about equality between different groups of people). If different scientists held different values, there would probably be more healthy skepticism of DEI topics. However, there is little ideological diversity within the academy. In areas such as psychology, for example, liberal-leaning scholars outnumber conservative-leaning scholars by at least 8 to 1, and in some disciplines the ratio is 20 to 1 or even higher.51, 52 A related concern is that these values are more than just personal views. They often seem to function as sacred values: non-negotiable principles that cannot be compromised and that can be questioned only at risk to one’s status within the community.
From this perspective,53 it is easy to see how those who question DEI may well face moral outrage, even if (or perhaps especially if) their criticisms are well-founded. The fact that this outrage sometimes translates into public cancellations is extremely disheartening. Yet there are likely even more de facto cancellations than it seems. Someone can be cancelled directly or indirectly. Indirect cancellations can take the form of contract nonrenewal, pressure to resign, or having one’s employer dig for another offense to use as the stated grounds for forcing someone out of their job. This latter strategy is a very subtle, yet no less insidious, method of cancellation. As an analogy, it is like a police officer following a car with an out-of-state license plate and then pulling the car over when the driver fails to use a turn signal. An offense was committed, but it was observed in the first place only because the officer was looking for a reason to make the stop and therefore artificially extended the window of time in which the driver was being scrutinized. The stated reason for the stop is the failure to signal; the real reason is that the driver is from out of town. Whether direct or indirect, the hallmark of a cancellation is that keeping one’s job becomes untenable once one fails to toe the party line on DEI topics.
It is against this backdrop that DEI scholarship is conducted. Academics fear punishment (often subtle) for challenging DEI research. Ideas that cannot be freely challenged are unfalsifiable, and such ideas are likely to gain popularity because the marketplace of ideas becomes the monopoly of a single idea. An illusory consensus can emerge about a complex area in which reasonable, informed, and qualified individuals hold widely differing views. An echo chamber created by forced consensus is a breeding ground for bad science.
How to Get Behavioral Science Back on Track
We are not the first to express concern about the quality of science in our discipline.54, 55 However, to our knowledge, we are the first to discuss how DEI over-reach goes hand-in-hand with the failure to engage in good science. Fortunately, none of this means the problem cannot be fixed. We offer a few suggestions for improvement.
First, disagreement should be normalized. Advisors should model disagreement by presenting an idea and explicitly asking their lab members to talk about its weaknesses. We need to develop a culture where challenging others’ ideas is viewed as an integral (and even enjoyable) part of the scientific process, and not an ad hominem attack.
Second, truth seeking must be re-established as the fundamental goal of behavioral science. Unfortunately, many academics in behavioral science now seem more interested in advocacy than in science. Of course, as a general principle, faculty and students should not be restricted from engaging in advocacy. However, that advocacy should not mingle with their academic work; it must occur on their own time. The tension between advocacy and truth seeking is that advocates, by definition, have an a priori position and are tasked with convincing others to accept and then act upon that belief. Truth seekers must be open to changing their position whenever new evidence or better analyses demand it.
To that end, we need to resurrect guardrails that hold students accountable for demonstrating mastery of important scientific concepts, including those described above, before receiving a PhD. Enforcing high standards may sound obvious, but actually failing students who do not meet those standards may be decried as exclusionary and met with resistance.
Another intriguing solution is to conduct “adversarial collaborations,” wherein scholars who disagree work together on a joint project.56 Adversarial collaborators explicitly spell out their competing hypotheses and together develop a method for answering a particular question, including the measures and planned analyses. Stephen Ceci, Shulamit Kahn, and Wendy Williams,57 for example, engaged in an adversarial collaboration that synthesized evidence regarding gender bias in six areas of academic science, including hiring, grant funding, and teacher ratings. They found evidence for gender bias in some areas but not others, a finding that should prove valuable in decisions about where to allocate resources.
In conclusion, we suggest that DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature. The best path forward is to get back to the basics: understand the serious limitations of lived experience, focus on quality measurement, and be mindful of the distinction between correlation and causation. We need to remember that the goal of science is to discover truth. This requires putting ideology and advocacy aside while in the lab or classroom. Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently. The scientific method requires us to stay humble and accept that we just might be wrong. That principle applies to all scientists, including the three authors of this article. To that end, readers who disagree with any of our points should let us know! Maybe we can sort out our differences—and find common ground— through an adversarial collaboration.
The views presented in this article are solely those of the authors. They do not represent the views of any author’s employer or affiliation.
About the Authors
April Bleske-Rechek is a Professor of Psychology at the University of Wisconsin-Eau Claire. Her teaching and research efforts focus on scientific reasoning and individual and group differences in cognitive abilities, personality traits, and relationship attitudes.
Michael H. Bernstein is an experimental psychologist and an Assistant Professor at Brown University. His research focuses on the overlap between cognitive science and medicine. He is co-editor of The Nocebo Effect: When Words Make You Sick.
Robert O. Deaner is a Professor of Psychology at Grand Valley State University. He teaches courses on research methods, sex differences, and evolutionary psychology. His research addresses sex differences in competitiveness.
References
- Frisby, C.L., Redding, R.E., O’Donohue, W.T., & Lilienfeld, S.O. (2023). Ideological and Political Bias in Psychology. Springer Nature.
- https://bit.ly/4aJLRyO
- Merton, R.K. (1973). The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.
- Rauch, J. (2013). Kindly Inquisitors: The New Attacks on Free Thought. University of Chicago Press.
- Rauch, J. (2021). The Constitution of Knowledge: A Defense of Truth. Brookings Institution Press.
- https://bit.ly/3xATvNI
- https://bit.ly/4cTS4Kq
- https://bit.ly/4cXcRNe
- https://bit.ly/3Q15SZU
- https://bit.ly/3xCzeY8
- https://bit.ly/43W5bGW
- https://bit.ly/3TUw0GR
- https://bit.ly/4401VKr
- https://bit.ly/3Ufx4q1
- Funder, D. C. (2019). The Personality Puzzle (8th ed.). W.W. Norton & Company.
- https://bit.ly/3UhIOsn
- Ibid.
- https://bit.ly/3W0liBc
- https://bit.ly/3VShodH
- Ibid.
- https://bit.ly/3UhIOsn
- https://bit.ly/49vFle5
- https://bit.ly/3JmZxUw
- https://bit.ly/3Jifb3O
- https://bit.ly/3Q37UZc
- https://bit.ly/3Q0Oe8h
- https://bit.ly/49zSTFk
- https://bit.ly/3xrWU15
- https://bit.ly/49QWBux
- Bleske-Rechek, A., Gunseor, M.M., & Maly, J.R. (2018). Does the Language Fit the Evidence? Unwarranted Causal Language in Psychological Scientists’ Scholarly Work. The Behavior Therapist, 41(8), 341–352.
- https://bit.ly/49DQZmW
- https://bit.ly/49zKdif
- https://bit.ly/49JeECQ
- Lykken, D.T. (1991). What’s Wrong With Psychology Anyway? In D. Cicchetti & W.M. Grove (Eds.), Thinking Clearly About Psychology: Essays in Honor of Paul E. Meehl. University of Minnesota Press.
- Tajfel, H. (1970). Experiments in Intergroup Discrimination. Scientific American, 223, 96–102.
- https://bit.ly/3xC9on5
- https://bit.ly/4aO5dTe
- https://bit.ly/4aSLamR
- https://bit.ly/3Q2m9gO
- Bernstein, M., Blease, C., Locher, C., & Brown, W. (2024). The Nocebo Effect: When Words Make You Sick. Mayo Clinic Press.
- https://bit.ly/4aQmv2e
- DiAngelo, R. (2018). White Fragility: Why It’s So Hard for White People to Talk About Racism. Beacon Press.
- https://bit.ly/4awG3sR
- https://bit.ly/4cSOEYn
- https://bit.ly/43XhN0k
- https://bit.ly/3UfKH8L
- https://bit.ly/43ZM1zH
- Stanovich, K. E. (2021). The Bias That Divides Us: The Science and Politics of Myside Thinking. The MIT Press.
- https://bit.ly/4aJLRyO
- Ritchie, S. (2020). Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Metropolitan Books.
- https://bit.ly/43XRkzI
- https://bit.ly/3TXsw6n
- https://bit.ly/3TXsxar
- Lykken, D.T. (1991). What’s Wrong With Psychology Anyway? In D. Cicchetti & W.M. Grove (Eds.), Thinking Clearly About Psychology: Essays in Honor of Paul E. Meehl. University of Minnesota Press.
- https://bit.ly/4aybGSy
- Clark, C.J., & Tetlock, P.E. (2023). Adversarial Collaboration: The Next Science Reform. In C.L. Frisby, R.E. Redding, W.T. O’Donohue, & S.O. Lilienfeld (Eds.), Ideological and Political Bias in Psychology (pp. 905–927). Springer.
- https://bit.ly/3vQQ5FW
This article was published on August 30, 2024.