Opinion polls are a key accelerant in the inflamed civil discourse of our time. And, as Teresa Carr explains, that’s a problem…
Last December, a joint survey by the Economist and the polling organization YouGov claimed to reveal a striking antisemitic streak among America’s youth. One in five young Americans thinks the Holocaust is a myth, according to the poll. And 28 percent think Jews in America have too much power.
“Our new poll makes alarming reading,” declared the Economist. The results inflamed discourse over the Israel-Hamas war on social media and made international news.
There was one problem: The survey was almost certainly wrong. The Economist/YouGov poll was a so-called opt-in poll, in which pollsters often pay people they’ve recruited online to take surveys. According to a recent analysis from the nonprofit Pew Research Center, such polls are plagued by “bogus respondents” who answer questions disingenuously for fun, or to get through the survey as quickly as possible to earn their reward.
In the case of the antisemitism poll, Pew’s analysis suggested that the Economist/YouGov team’s methods had yielded wildly inflated numbers. In a more rigorous poll posing some of the same questions, Pew found that only 3 percent of young Americans agreed with the statement “the Holocaust is a myth.”
These are strange times for survey science. Traditional polling, which relies on responses from a randomly selected group that represents the entire population, remains the gold standard for gauging public opinion, said Stanford political scientist Jon Krosnick. But as it’s become harder to reach people on the phone, response rates have plummeted, and those surveys have grown exponentially more expensive to run. Meanwhile, cheaper, less-accurate online polls have proliferated.
“Unfortunately, the world is seeing much more of the nonscientific methods that are put forth as if they’re scientific,” said Krosnick…
…
… headlines as outrageous as they are implausible continue to proliferate: 7 percent of American adults think chocolate milk comes from brown cows; 10 percent of college graduates think Judge Judy is on the Supreme Court; and 4 percent of American adults (about 10 million people) drank or gargled bleach to prevent Covid-19. And although YouGov is one of the more respected opt-in pollsters, some of its findings — one third of young millennials aren’t sure the Earth is round, for example — strain credulity.
Amidst a sea of surveys, it’s hard to distinguish solid findings from those that dissolve under scrutiny. And that confusion, some experts say, reflects deep-seated problems with new methods in the field — developed in response to a modern era in which a representative sample of the public no longer picks up the phone.
The fractious evolution in polling science is likely to receive fresh attention as the 2024 elections heat up, not least because the consequences of failed or misleading surveys can go well beyond social science. Such “survey clickbait” erodes society’s self-esteem, said Duke University political scientist Sunshine Hillygus: It “undermines people’s trust that the American public is capable of self-governance.”
Veteran pollster Gary Langer compares traditional randomized polling methods, known as probability polling, to dipping a ladle into a well-stirred pot of minestrone soup. “We can look in and see some cannellini beans, little escarole, chunks of tomato,” he said. “We get a good representation of what’s in the soup.”
It doesn’t matter if the pot is the size of Yankee Stadium, he said. If the contents are thoroughly mixed, one ladle is enough to determine what’s in it. That’s why probability surveys of 1,000 people can, in theory, represent what the entire country thinks.
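The arithmetic behind the soup analogy is the familiar margin-of-error formula, in which precision depends on the size of the sample, not the size of the population. A minimal sketch in Python (the function and the 95 percent confidence level are illustrative choices, not drawn from the article):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error at 95% confidence for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# For a well-mixed (truly random) sample of 1,000 respondents:
print(f"+/- {margin_of_error(1000):.1%}")  # about 3.1%, no matter how big the "pot" is
```

Note that the population size never appears in the formula: a properly stirred sample of 1,000 is just as informative about a Yankee Stadium-sized pot as a kitchen-sized one.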
The problem is that getting a truly representative sample is virtually impossible, said YouGov’s Douglas Rivers, who pointed out that these days a good response rate to a randomized poll is 2 percent…
…
… with the appropriate guardrails against fraud, YouGov chief scientist Rivers said, such methods offer a practical alternative to conventional probability sampling, where the costs are too high, and the response rates are too low. In some sense, he suggested, most polling is now nonprobability polling: When only 2 out of 100 people respond to a survey, it’s much harder to claim that those views are representative, said Rivers. “Sprinkling a little bit of randomness at the initial stage does not make it a probability sample.”
“Our approach has been: Let us assemble a sample systematically based on characteristics,” said Rivers. “It’s not comparable to what the census does in the current population survey, but it’s performed very well in election polling.” Rivers pointed to YouGov’s high ranking on the website FiveThirtyEight, which rates polling firms based on their track record in predicting election results and willingness to show their methods.
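YouGov’s actual models are more elaborate (and proprietary), but the core idea of assembling a sample based on characteristics can be sketched as simple post-stratification: weight respondents so the sample’s demographic mix lines up with known population benchmarks. A toy illustration, with invented numbers:

```python
# Illustrative sketch of reweighting a nonrandom sample toward known
# population shares. All numbers below are made up for the example.
population_share = {"18-29": 0.20, "30-64": 0.58, "65+": 0.22}
sample_share = {"18-29": 0.35, "30-64": 0.50, "65+": 0.15}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Underrepresented groups (here, 65+) count for more in the weighted
# estimate; overrepresented groups (here, 18-29) count for less.
```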
Gary Langer was not particularly impressed by high marks from FiveThirtyEight. (His own firm, Langer Research Associates, also gets a top grade for the political polling it conducts on behalf of the partnership between ABC News and The Washington Post.) “Pre-election polls, while they get so much attention, are the flea on the elephant of the enterprise of public opinion research,” he said. The vast majority of surveys are concerned with other topics. They form the basis of federal data on jobs and housing, for example, and can reflect the public’s views on education, climate change, and other issues. “Survey data,” he said, “surrounds us, informs our lives, informs the choices we make.”
Given the stakes, Langer relies exclusively on probability polling. Research shows that opt-in polls just don’t produce the same kind of consistent, verifiable results, said Langer…
Research suggests that widely used nonprobability methods, in particular online opt-in polls such as the Economist/YouGov survey, have inherent vulnerabilities.
The prospect of cash or rewards can incentivize some people to complete surveys quickly and with as little effort as possible. “They’re giving you data and answers that just can’t possibly be true,” said Pew’s Courtney Kennedy.
For example, in one test of opt-in polling, 12 percent of U.S. adults younger than 30 claimed that they were licensed to operate a nuclear submarine. The true figure, of course, is approximately 0 percent…
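The mechanism is as much arithmetic as psychology: when the true prevalence of a trait is near zero, even a modest share of bogus respondents swamps the signal. A rough illustration (the bogus-respondent shares are invented, chosen only to reproduce the 12 percent figure):

```python
def reported_rate(true_rate: float, bogus_share: float,
                  bogus_yes_rate: float = 0.5) -> float:
    """What a poll reports when some respondents answer without regard to truth."""
    return (1 - bogus_share) * true_rate + bogus_share * bogus_yes_rate

# If essentially no one under 30 holds a submarine license, but roughly a
# quarter of respondents click through carelessly (half of them landing on
# "yes"), the poll reports about 12 percent:
print(f"{reported_rate(true_rate=0.0, bogus_share=0.24):.0%}")  # 12%
```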
…
… Media consumers should be skeptical of implausible findings, said Krosnick. So should reporters, said Langer, who spent three decades as a journalist, and who said news outlets have a responsibility to vet the polls they report on: “Every newsroom in the country — in the world — should have someone on their team evaluate surveys and survey methodologies.”
In the end, people need to realize that survey research involves some degree of uncertainty, said Joshua Clinton, a political scientist at Vanderbilt University, who noted that polls leading up to the 2024 election are bound to get something wrong. “My concern is what that means about the larger inferences that people make about not only polling, but also science in general,” he said. People may just dismiss results as a predictable scientific failure: “‘Oh, the egghead screwed up again.’” Clinton said he wants people to recognize the difficulty of doing social science research, rather than to delegitimize the field outright.
Even Rivers, whose firm produced the Economist poll that made headlines, acknowledged that readers should be cautious with eye-catching headlines. “We’re in a challenging environment for conducting surveys,” he said. That means that people need to take survey results — especially those that are provocative — with a grain of salt.
“The tendency is to overreport polls,” said Rivers. “The polls that get reported are the ones that are outliers.”…
It’s very difficult to get anyone to answer a phone call—and that’s skewing data on everything from chocolate milk to antisemitism: “We’re in a New Era of Survey Science,” from @TeresaRCarr in @undark via @Slate. Eminently worth reading in full.
In 1940, MGM produced “Puss Gets the Boot,” based on Hanna and Barbera’s pitch for a story rooted in two “equal characters who were always in conflict with each other.” It was the first collaboration between William Hanna and Joseph Barbera (the start of a partnership that would last over 50 years and yield such treasures as The Flintstones, Huckleberry Hound, The Jetsons, Scooby-Doo, Top Cat, and Yogi Bear). At over nine minutes, it is the longest T&J short ever produced, and the first of three (with “Puss n’ Toots” and “Puss ‘n’ Boats”) to pun its title on the fairy tale “Puss in Boots.” “Puss Gets the Boot” was nominated for an Academy Award, the first of Hanna and Barbera’s many Oscar nominations.
The cat in “Puss Gets the Boot” was actually named “Jasper”; the mouse, “Jinx.” But when the pilot got the go-ahead to become a series, animator John Carr won a studio-wide naming contest with his suggestion: “Tom and Jerry.” The cat’s owner, “Mammy Two-Shoes,” was voiced by Lillian Randolph; her lines were later re-dubbed for television by June Foray, who earned immortality as the voice of Rocky J. Squirrel.