
Study on gay Latter-day Saints led to questionable headlines

In 2014, a study appeared in the prestigious journal Science that made almost immediate waves in the national conversation with its conclusion that a mere 20-minute conversation with a gay canvasser telling a personal, heartfelt story led to persistent changes in attitudes — as confirmed by nine-month follow-ups.

When Jon Krosnick, a Stanford social psychologist, was contacted for comment, his response was, “Gee, that’s very surprising and doesn’t fit with a huge literature of evidence. It doesn’t sound plausible to me.”

Nonetheless, a feature piece ran in The New York Times the same week the study was published — the first of many similar commentaries. And the following spring, the radio program “This American Life” amplified these “groundbreaking” findings. Summarizing the cumulative effects of this single study, journalist Jesse Singal wrote, “It rerouted countless researchers’ agendas, inspired activists to change their approach to voter outreach, generated shifts in grant funding, and launched follow-up experiments.”

The only problem was this: The study turned out to be fraudulent from the beginning. Attitudes change, yes, but as anyone who has gone door-to-door will tell you, that almost never happens so quickly and easily. The study — which would have cost upwards of a million dollars in real life — never happened; UCLA graduate student Michael LaCour actually drew the numbers from an existing data set and convinced a respected researcher, Donald Green, that they reflected thousands of knocked doors. Green later said, “I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data.”

A scientific retraction and journalistic mea culpas soon followed. As Singal summarized, “(Michael) LaCour’s impossible-seeming results were treated as truth, in part because of the weight Green’s name carried, and in part, frankly, because people — researchers, journalists, activists — wanted to believe them. There was a snowball effect here: The more the study’s impact and influence grew, the greater the incentive to buy into the excitement.”

They wanted to believe them.

On a much smaller level, something similar happened last week. One college professor got excited about new statistics on Latter-day Saints identifying as lesbian, gay, bisexual or other and, in partnership with a journalist colleague, couldn’t help but bring some immediate national attention to what they assumed was big news.

And what was that news?

First, most prior surveys — including one from this same duo — had found the percentage of Latter-day Saint millennials identifying as LGB+ to be around 10%, which tracks (as would be expected) with other Christian groups, at about 12% nonheterosexual.

But this new survey — more than five years later — found double that number of Latter-day Saint millennials and Gen Zers reporting a nonheterosexual identity.

Second, while recent national Gallup polling had estimated the overall percentage of Gen Z folks not identifying as heterosexual at 21%, this professor’s analysis of the Nationscape numbers for Gen Z Latter-day Saints outside the West found the percentage identifying as nonheterosexual to be a whopping 35% (which is to say, very surprising and highly unlikely for a more conservative religion). Indeed, another analyst I spoke with pointed out that this same dataset shows that in 11 states the proportion of Latter-day Saints identifying as nonheterosexual was 50% or more.

This is precisely the moment at which these authors and others who shared the information might have paused, sought out some second opinions and considered whether anything else might be at play.

Instead, Jana Riess and Benjamin Knoll went to press, publishing a report in Religion News Service touting these striking findings from this “major national study.” Two days later, Professor Knoll was on the “Mormon Land” podcast with a vocal student in the LGBT community to “unpack the latest data.”

News of the study was retweeted and liked hundreds of times across the various social media platforms, including by respected academics promoting the findings. Enthusiastic about the results, the student activist suggested on the podcast that the surprising figures likely understated reality, speculating that gay Latter-day Saints were more like “1 in 4.”

Throughout this press coverage, the trustworthiness of the dataset was underscored by emphasizing its size, referring to it as “one of the largest studies of Mormons ever fielded in the United States” or, as the lead analyst put it, “the biggest sampling of Gen Zers I’m aware of” for Latter-day Saints.

Hard to dispute a study so large, right? Yet, it was apparent to others that something was very wrong with their conclusions. And after getting some feedback on ways their analysis was off, Riess and Knoll, to their credit, later retracted their 1 in 5 claim — which by now had already spread far and wide.

What was wrong with the data? While the total numbers surveyed were indeed large, including nearly 4,000 Latter-day Saints in all, the key subgroups were represented by only small numbers. For instance, only a couple hundred Gen Z Latter-day Saints were in the dataset to represent the entire United States.
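To get a feel for why a couple hundred respondents is a thin basis for national claims, here is a minimal sketch of the textbook margin-of-error calculation, assuming a simple random sample (which a subgroup carved out of a larger survey is not); the specific proportion and sample size below are illustrative, not figures taken from the study.

```python
import math

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for a proportion p observed in a
    simple random sample of size n (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Illustrative only: a subsample of roughly 200 respondents with an observed
# proportion near 20% carries a margin of error of about +/- 5.5 percentage
# points, before accounting for any non-representative sampling or weighting.
print(round(100 * margin_of_error(0.20, 200), 1))  # ~5.5
```

In other words, even under ideal sampling assumptions, an estimate built on a subgroup that small can swing by several percentage points in either direction; with a sample that was never designed to represent the group in question, the uncertainty is larger still.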

Furthermore, Nationscape data was never meant to be representative of Latter-day Saints or any other religion. Given the wide disparities between these findings and more representative surveys, it’s almost certain the process of selecting the sample overrepresented LGB+-identifying Latter-day Saints outside the West.

Kudos to Riess and Knoll for acknowledging some of this in an update at the top of the story, broaching the representation problem and clarifying the geographical disparities in the data. Given this, they admitted it was likely the true numbers were “around 7 to 9 percentage points lower” than initially reported.

Theirs was a good-faith error, and their willingness to correct it reflects more of what we need to see. Even so, much of their public clarification read to some as justification that though the numbers may be a little off, their overall takeaways were still “on the right track.”

But being off by “7 to 9 percentage points” is a substantial error; it means the 1 in 5 claim they retracted (and which the initial headline promoted) was nearly double what the actual estimate likely is.

Of course people make mistakes, but the larger point goes beyond this particular dataset to the broader tendency to seize upon data to amplify a popular position without the caution research is supposed to require. Unlike the fraudulent study mentioned earlier, this example, like many others, seems to reflect the power of confirmation bias.

Part of this is human nature and probably true for most of us. Who among us doesn’t see new information that reinforces what we already believe or feel and sometimes gleefully seize on it as further proof that we’re right — maybe even widely sharing it with others around us?

And when that new information is “scientific”? All the better. Few things are as gratifying or persuasive in 21st-century life as claiming our own cherished beliefs are backed up by the “latest scientific research.”

So, little wonder that we see people on every issue (and every side) claiming essentially the same thing: “Guess what? The (best) research confirms everything we’ve been saying.”

This can seem strange to outsiders who see science as fixed, obvious truth. But among researchers themselves, this is far less confusing. As they know by experience, virtually all scientific data can be interpreted in starkly different ways by competing theoretical camps.

We might expect this would mean researchers themselves would be especially cautious. But Dr. Christopher Rosik has raised concern for years about what he calls “the triumph of activism over science” — cautioning against making “definitive statements from limited or ambiguous data,” especially on sensitive and contested issues.

Yet, this is precisely what many have continued to do in many different domains. Competing political or ideological camps continue to pretend as if the available research vindicates their full agendas.

Arguably no area has been more scientifically finessed and fraught than matters of sexuality. Multiple conservative scholars, in particular, report the near impossibility of getting well-vetted research findings published in major journals because the findings run against the dominant narratives in the field. (I’ve learned for myself that even raising honest concerns like these is enough for some to label you “anti-gay.” Is it any wonder many researchers decide to stay quiet?)

Clearly, the work of deepening love and expanding acceptance must continue with an aim to ensure everyone seeking to follow Jesus Christ feels welcome among us. That ongoing work is best served by a public discussion where all voices are heard and the full truth is sought.

It’s of concern, then, to hear some continue to claim that existing research findings are somehow an unambiguous indictment of traditional Judeo-Christian faith when it comes to LGBT mental health or suicidality. Yet, as Dr. Stephen Cranney notes in a recent summary, a closer review of the available data doesn’t support this narrative. Others have made similarly inflated claims about how often “mixed-orientation marriages” fail, with little attention to the ways the research in question was designed so as to predispose those very results.

Some of these issues could be addressed with ideologically diverse research teams, which allow for checks and balances on sociopolitically biased decisions and conclusions. Jonathan Haidt and colleagues note that in the past 50 years, social science has lost most of its viewpoint diversity, with the large majority of social psychologists identifying as politically liberal. They write, “This lack of political diversity can undermine the validity of social psychological science via mechanisms such as the embedding of liberal values into research questions and methods … and producing conclusions that mischaracterize liberals and conservatives alike.”

They likewise caution: “Having common values makes a group cohesive, which can be quite useful, but it’s the last thing that should happen to a scientific field.” Reflecting on this broader tendency in academia, they note that when “left unchecked” a group “can become a cohesive moral community, creating a shared reality that subsequently blinds its members to morally or ideologically undesirable hypotheses.”

It’s been heartening to see other counterexamples among researchers, such as Lee Beckstead and Ty Mansfield partnering on research in this same area of sexuality, despite (and very much because of) their profound differences of opinion. Results of their own emerging studies demonstrate the considerable nuance and depth that arise when research is generated (and interpreted) from a place of appropriate viewpoint diversity.

As witnessed throughout history, scientific research has the potential to unite us as we gather together to grapple over truth and consider the meaning of particular datasets. At its best, this scientific process can undermine even popular narratives and unsettle cherished ideas. At its worst, though, we see more and more examples of exactly the opposite: selective data analyses leveraged to advance popular narratives, quite apart from what the actual truth of the matter is.

And that should concern us all.

Jacob Hess serves on the board of the National Coalition of Dialogue and Deliberation. He has a doctorate in clinical-community psychology from the University of Illinois, Urbana-Champaign.