(Roughly) Daily


“Public opinion polls are rather like children in a garden, digging things up all the time to see how they’re growing”*…

As the press continues to treat this year’s all-too-consequential election as a horse race, your correspondent is revisiting a topic touched on a few weeks ago: the prevalence of polling data in election coverage. Rick Perlstein weighs in with a (fascinating) history of presidential election polling, then turns to its implications…

… That polls do not predict Presidential election outcomes any better now than they did a century ago is but one conclusion of this remarkable history. A second conclusion lurks more in the background—but I think it is the most important one to absorb.

For most of this century, the work was the subject of extraordinary ambivalence, even among pollsters. In 1948, George Gallup called presidential polling (as distinguished from issue polling, which has its own problems) “this Frankenstein.” In 1980, Elmo Roper admitted that “our polling techniques have gotten more and more sophisticated, yet we seem to be missing more and more elections.” All along, conventional journalists made a remarkably consistent case that they were empty calories that actively crowded out genuine civic engagement: “Instead of feeling the pulse of democracy,” as a 1949 critic put it, “Dr. Gallup listens to its baby talk.”

Critics rooted for polls to fail. Eric Sevareid, in 1964, recorded his “secret glee and relief when the polls go wrong,” which might restore “the mystery and suspense of human behavior eliminated by clinical dissection.” If they were always right, as James Reston picked up the plaint in 1970, “Who would vote?” Edward R. Murrow argued in 1952 that polling “contributed something to the dehumanization of society,” and was delighted, that year, when “the people surprised the pollsters … It restored to the individual, I suspect, some sense of his own sovereignty” over the “petty tyranny of those who assert that they can tell us what we think.”

Still and all, the practice grew like Topsy. There was an “extraordinary expansion” in polls for the 1980 election, including the first partnerships between polling and media organizations. The increase was accompanied by a measurable failure of quality, which gave birth to a new critique: news organizations “making their own news and flacking it as if it were an event over which they had no control.”

And so, after the 1980 debacle, high-minded observers began wondering whether presidential polls had “outlived their usefulness,” whether the priesthood would end up “defrocked.” In 1992, the popular columnist Mike Royko went further, proposing sabotage: Maybe if people just lied, pollsters would have to give up. In 2000, Alison Mitchell of The New York Times proposed a polling moratorium in the four weeks leading up to elections, noting the “numbing length … to which polling is consuming both politics and journalism.”

Instead, polling proliferated: a “relentless barrage,” the American Journalism Review complained, the media obsessing over each statistically insignificant blip. Then, something truly disturbing started happening: People stopped complaining.

A last gasp was 2008, when Arianna Huffington revived Royko’s call for sabotage, until, two years later, she acquired the aggregator Pollster.com and renamed it HuffPost Pollster. “Polling, whether we like it or not,” the former skeptic proclaimed, “is a big part of how we communicate about politics.”

And so it is.

Even as the resources devoted to every other kind of journalism atrophied, poll-based political culture has overwhelmed us, crowding out all other ways of thinking about public life. Joshua Cohen tells the story of the time [Nate] Silver, looking for a way to earn eyeballs between elections, considered making a model to predict congressional votes. But voters, he snidely remarked, “don’t care about bills being passed.”

Pollsters might not be able to tell us what we think about politics. But increasingly, they tell us how to think about politics—like them. Following polls has become our vision of what political participation is. Our therapy—headlines like the one on AlterNet last week, “Data Scientist Who Correctly Predicted 2020 Election Now Betting on ‘Landslide’ Harris Win.” Our political masochism: “Holy cow, did you hear about that Times poll.” “Don’t worry, I heard it’s an outlier …”

The Washington Post’s polling director once said, “There’s something addictive about polls and poll numbers.” He’s right. When we refer to “political junkies,” polls are pretty much the junk.

For some reason, I’ve been able to pretty much swear off the stuff, beyond mild indulgence. Maybe it’s my dime-store Buddhism. I try to stay in the present—and when it comes to the future, try to stick with things I can do. Maybe, I hereby offer myself as a role model?

As a “political expert,” friends, relatives, and even strangers are always asking me, “Who’s going to win?” I say I really have no idea. People are always a little shocked: Prediction has become what people think political expertise is for.

Afterward, the novelty of the response gets shrugged off, and we can talk. Beyond polling’s baby talk. About our common life together, about what we want to happen, and how we might make it so. But no predictions about whether this sort of thing might ever prevail. No predictions at all…

Presidential polls are no more reliable than they were a century ago. So why do they consume our political lives?

Eminently worth reading in full: “The Polling Imperilment,” from @rickperlstein in @TheProspect.

Pair with: “The Problems with Polls.”

For more on why today’s polls are so flawed, see “A public-opinion poll is no substitute for thought.”

Apposite: from the estimable James Fallows: “Election Countdown, 38 Days to Go: What Is Wrong With Our Leading Paper?”

* J. B. Priestley

###

As we pray for more consequential coverage, we might recall that it was on this date in 1936 that the (then-venerable) Literary Digest mailed return postcards to 2,000,000 Americans, asking them to indicate whether they would be voting in the upcoming presidential election for the incumbent, Franklin D. Roosevelt, or the challenger, Alf Landon. They published the results of their anxiously-anticipated poll in their October 31 issue: a massive victory for Landon. In the event, of course, Roosevelt defeated Landon in an unprecedented landslide.

The issue in question (source)

Written by (Roughly) Daily

September 30, 2024 at 1:00 am

“A public-opinion poll is no substitute for thought”*…

Opinion polls are a key accelerant in the inflamed civil discourse of our time. And, as Teresa Carr explains, that’s a problem…

Last December, a joint survey by the Economist and the polling organization YouGov claimed to reveal a striking antisemitic streak among America’s youth. One in five young Americans thinks the Holocaust is a myth, according to the poll. And 28 percent think Jews in America have too much power.

“Our new poll makes alarming reading,” declared the Economist. The results inflamed discourse over the Israel-Hamas war on social media and made international news.

There was one problem: The survey was almost certainly wrong. The Economist/YouGov poll was a so-called opt-in poll, in which pollsters often pay people they’ve recruited online to take surveys. According to a recent analysis from the nonprofit Pew Research Center, such polls are plagued by “bogus respondents” who answer questions disingenuously for fun, or to get through the survey as quickly as possible to earn their reward.

In the case of the antisemitism poll, Pew’s analysis suggested that the Economist/YouGov team’s methods had yielded wildly inflated numbers. In a more rigorous poll posing some of the same questions, Pew found that only 3 percent of young Americans agreed with the statement “the Holocaust is a myth.”

These are strange times for survey science. Traditional polling, which relies on responses from a randomly selected group that represents the entire population, remains the gold standard for gauging public opinion, said Stanford political scientist Jon Krosnick. But as it’s become harder to reach people on the phone, response rates have plummeted, and those surveys have grown exponentially more expensive to run. Meanwhile, cheaper, less-accurate online polls have proliferated.

“Unfortunately, the world is seeing much more of the nonscientific methods that are put forth as if they’re scientific,” said Krosnick…

… headlines as outrageous as they are implausible continue to proliferate: 7 percent of American adults think chocolate milk comes from brown cows; 10 percent of college graduates think Judge Judy is on the Supreme Court; and 4 percent of American adults (about 10 million people) drank or gargled bleach to prevent Covid-19. And although YouGov is one of the more respected opt-in pollsters, some of its findings — one third of young millennials aren’t sure the Earth is round, for example — strain credulity.

Amidst a sea of surveys, it’s hard to distinguish solid findings from those that dissolve under scrutiny. And that confusion, some experts say, reflects deep-seated problems with new methods in the field — developed in response to a modern era in which a representative sample of the public no longer picks up the phone.

The fractious evolution in polling science is likely to receive fresh attention as the 2024 elections heat up, not least because the consequences of failed or misleading surveys can go well beyond social science. Such “survey clickbait” erodes society’s self-esteem, said Duke University political scientist Sunshine Hillygus: It “undermines people’s trust that the American public is capable of self-governance.”

Veteran pollster Gary Langer compares traditional randomized polling methods, known as probability polling, to dipping a ladle into a well-stirred pot of minestrone soup. “We can look in and see some cannellini beans, little escarole, chunks of tomato,” he said. “We get a good representation of what’s in the soup.”

It doesn’t matter if the pot is the size of Yankee Stadium, he said. If the contents are thoroughly mixed, one ladle is enough to determine what’s in it. That’s why probability surveys of 1,000 people can, in theory, represent what the entire country thinks.
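Langer’s soup-pot intuition is easy to check numerically. Here’s a minimal Python sketch (the populations, function name, and sample size are illustrative, not from the article) showing that a 1,000-person random sample lands just as close to the truth whether the “pot” holds a hundred thousand people or millions — and that the textbook margin of error depends only on the sample size, not the population size:

```python
import random

def poll(population, n=1000, seed=0):
    """Estimate the share of 'yes' (1) answers from a random sample."""
    sample = random.Random(seed).sample(population, n)
    return sum(sample) / n

# Two well-stirred "pots" with the same 40% mix -- one twenty times bigger.
small_pot = [1] * 40_000 + [0] * 60_000        # 100,000 people
big_pot   = [1] * 800_000 + [0] * 1_200_000    # 2,000,000 people

print(poll(small_pot), poll(big_pot))  # both land near 0.40

# The classic 95% margin of error depends only on the sample size n:
n = 1000
moe = 1.96 * (0.5 * 0.5 / n) ** 0.5   # worst case, p = 0.5
print(f"±{moe:.1%}")                   # about ±3.1%
```

The catch, as the article goes on to note, is that this guarantee holds only when the sample is genuinely random — which is exactly what has become so hard to achieve.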

The problem is that getting a truly representative sample is virtually impossible, said YouGov’s Douglas Rivers, who pointed out that these days a good response rate to a randomized poll is 2 percent…

… with the appropriate guardrails against fraud, YouGov chief scientist Rivers said, such methods offer a practical alternative to conventional probability sampling, where the costs are too high, and the response rates are too low. In some sense, he suggested, most polling is now nonprobability polling: When only 2 out of 100 people respond to a survey, it’s much harder to claim that those views are representative, said Rivers. “Sprinkling a little bit of randomness at the initial stage does not make it a probability sample.”

“Our approach has been: Let us assemble a sample systematically based on characteristics,” said Rivers. “It’s not comparable to what the census does in the current population survey, but it’s performed very well in election polling.” Rivers pointed to YouGov’s high ranking on the website FiveThirtyEight, which rates polling firms based on their track record in predicting election results and willingness to show their methods.

Gary Langer was not particularly impressed by high marks from FiveThirtyEight. (His own firm, Langer Research Associates, also gets a top grade for political polling they conduct on behalf of the partnership between ABC News and The Washington Post.) “Pre-election polls, while they get so much attention, are the flea on the elephant of the enterprise of public opinion research,” he said. The vast majority of surveys are concerned with other topics. They form the basis of federal data on jobs and housing, for example, and can reflect the public’s views on education, climate change, and other issues. “Survey data,” he said, “surrounds us, informs our lives, informs the choices we make.”

Given the stakes, Langer relies exclusively on probability polling. Research shows that opt-in polls just don’t produce the same kind of consistent, verifiable results, said Langer…

Research suggests that widely used nonprobability methods, in particular online opt-in polls such as the Economist/YouGov survey, have inherent vulnerabilities.

The prospect of cash or rewards can incentivize some people to complete surveys quickly and with as little effort as possible. “They’re giving you data and answers that just can’t possibly be true,” said Kennedy.

For example, in one test of opt-in polling, 12 percent of U.S. adults younger than 30 claimed that they were licensed to operate a nuclear submarine. The true figure, of course, is approximately 0 percent…
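The arithmetic behind “bogus respondents” is worth seeing directly. A minimal simulation — the 15% bogus share and the coin-flip answering pattern are my illustrative assumptions, not Pew’s estimates — shows how a claim with a true prevalence of zero can still poll at a headline-sized number:

```python
import random

def simulate_poll(n=2000, true_rate=0.0, bogus_share=0.15, seed=1):
    """Simulate an opt-in poll on a near-zero-prevalence claim.

    Sincere respondents answer truthfully; 'bogus' respondents
    (speeders, trolls) pick an answer at random.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        if rng.random() < bogus_share:
            yes += rng.random() < 0.5        # coin-flip answer
        else:
            yes += rng.random() < true_rate  # honest answer
    return yes / n

# Nobody is licensed to operate a nuclear submarine (true_rate = 0),
# yet random-clicking respondents alone produce several percent "yes."
print(f"{simulate_poll():.1%}")
```

In expectation the reported rate is roughly half the bogus share — which is why opt-in polls systematically inflate rare or absurd beliefs in particular.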

… Media consumers should be skeptical of implausible findings, said Krosnick. So should reporters, said Langer, who spent three decades as a journalist, and who said news outlets have a responsibility to vet the polls they report on: “Every newsroom in the country — in the world — should have someone on their team evaluate surveys and survey methodologies.”

In the end, people need to realize that survey research involves some degree of uncertainty, said Joshua Clinton, a political scientist at Vanderbilt University, who noted that polls leading up to the 2024 election are bound to get something wrong. “My concern is what that means about the larger inferences that people make about not only polling, but also science in general,” he said. People may just dismiss results as a predictable scientific failure: “‘Oh, the egghead screwed up again.’” Clinton said he wants people to recognize the difficulty of doing social science research, rather than to delegitimize the field outright.

Even Rivers, whose firm produced the Economist poll that made headlines, acknowledged that readers should be cautious with eye-catching headlines. “We’re in a challenging environment for conducting surveys,” he said. That means that people need to take survey results — especially those that are provocative — with a grain of salt.

“The tendency is to overreport polls,” said Rivers. “The polls that get reported are the ones that are outliers.”…

It’s very difficult to get anyone to answer a phone call—and that’s skewing data on everything from chocolate milk to antisemitism: “We’re in a New Era of Survey Science,” from @TeresaRCarr in @undark via @Slate. Eminently worth reading in full.

* Warren Buffett

###

As we take it with a grain of salt, we might recall that it was on this date in 1941 that Tom and Jerry first appeared on screen with those names in the MGM cartoon “The Midnight Snack,” though it was in fact their second screen appearance.

In 1940, MGM had produced “Puss Gets the Boot,” based on Hanna and Barbera’s pitch for a story rooted in two “equal characters who were always in conflict with each other.”  It was the first collaboration between William Hanna and Joseph Barbera (founding a partnership that would last over 50 years and yield such treasures as The Flintstones, Huckleberry Hound, The Jetsons, Scooby-Doo, Top Cat, and Yogi Bear); at over nine minutes in length, it’s the longest T&J ever produced– and the first of three T&J essays (with “Puss n’ Toots” and “Puss ‘n’ Boats”) to pun its title on the fairy tale “Puss in Boots.”  “Puss Gets the Boot” was nominated for an Academy Award– the first of Hanna and Barbera’s many Oscar nominations.

The cat in “Puss Gets the Boot” was actually named “Jasper”; the mouse, “Jinx.”  But when the pilot got the go-ahead to become a series, animator John Carr won a studio-wide naming contest with his suggestion: “Tom and Jerry.”  The cat’s owner, “Mammy Two-Shoes,” was voiced by June Foray— who later earned immortality as the voice of Rocky J. Squirrel.

“In America, everyone is entitled to an opinion, and it is certainly useful to have a few when a pollster shows up”*…

In the last couple of decades, opinion polling in the U.S. has exploded; the number of national pollsters has more than doubled. Over the same period, American lifestyles have changed in ways that have challenged pollsters– and led them to innovate in a quest for accuracy. Indeed, after the embarrassment of the election of 2016, 61% of national pollsters have changed their methods…

The pollsters at The Pew Research Center– arguably the best of the bunch– have polled the pollsters…

The 2016 and 2020 presidential elections left many Americans wondering whether polling was broken and what, if anything, pollsters might do about it. A new Pew Research Center study finds that most national pollsters have changed their approach since 2016, and in some cases dramatically. Most (61%) of the pollsters who conducted and publicly released national surveys in both 2016 and 2022 used methods in 2022 that differed from what they used in 2016. The study also finds the use of multiple methods increasing. Last year 17% of national pollsters used at least three different methods to sample or interview people (sometimes in the same survey), up from 2% in 2016.

This study captures what changes were made and approximately when. While it does not capture why the changes were made, public commentary by pollsters suggests a mix of factors – with some adjusting their methods in response to the profession’s recent election-related errors and others reacting to separate industry trends. The cost and feasibility of various methods are likely to have influenced decisions.

This study represents a new effort to measure the nature and degree of change in how national public polls are conducted. Rather than leaning on anecdotal accounts, the study tracked the methods used by 78 organizations that sponsor national polls and publicly release the results. The organizations analyzed represent or collaborated with nearly all the country’s best-known national pollsters. In this study, “national poll” refers to a survey reporting on the views of U.S. adults, registered voters or likely voters. It is not restricted to election vote choice (or “horserace”) polling, as the public opinion field is much broader. The analysis stretches back to 2000, making it possible to distinguish between trends emerging before 2016 (e.g., migration to online methods) and those emerging more recently (e.g., reaching respondents by text message)…

Fascinating– and important: “How Public Polling Has Changed in the 21st Century,” from @pewresearch (via friend PH).

* Neil Postman, Amusing Ourselves to Death

###

As we consider our answers, we might recall that it was on this date in 2016 that Pew Research Center published the results of a poll on voter satisfaction with U.S. Presidential candidates:

Voter satisfaction with the choice of presidential candidates, already at a two-decade low, has declined even further. A new survey finds that just a third of registered voters say they are very or fairly satisfied with the choices, while 63% say they are not too or not at all satisfied. That represents a 7-percentage-point drop since June in the share of voters expressing satisfaction with their candidate choices…

Already-low voter satisfaction with choice of candidates falls even further

“It’s the end of the world as we know it / And I feel fine”*…

From the Department of Polarization…

While the percentage of Americans who are satisfied with the direction of the United States is only around 17 percent — up from 11 percent in the pits of the pandemic but still down from 41 percent two years ago — respondents are telling pollsters that nevertheless they’re personally doing just great. Fully 85 percent of respondents said they are satisfied with how things are going in their personal life, a little bit off the all-time highs of 90 percent but still definitely on the higher side of the historical range in responses to the question, which has been asked since 1979. While 51 percent of Americans are “very dissatisfied” with the direction of the country, 51 percent are also “very satisfied” with their own personal life.

@WaltHickey and his invaluable Numlock News (@NumlockAM) on Gallup’s (@Gallup) January 2022 “Mood of the Nation” poll.

* REM

###

As we reconcile, we might recall that it was on this date in 1820 that the first 86 African American emigrants sponsored by the American Colonization Society departed New York to start a settlement in present-day Liberia.

The ACS had been founded in 1816 by Robert Finley to encourage and support the migration of free African Americans to the continent of Africa– in response to what he and his cohort saw as a growing social problem: what to do with free Blacks. Slave owners feared that these free Blacks might help their slaves to escape or rebel. At the same time, many white Americans saw African Americans as an inferior race. To these whites, “amalgamation,” or integration, of African Americans with mainstream American culture—giving them citizenship—was undesirable, if not altogether impossible. There was, the ACS argued, little prospect of changing these views. African Americans, therefore, should be relocated somewhere they could live in peace, free of prejudice, where they could be citizens.

The African-American community and abolitionist movement overwhelmingly opposed the project. Contrary to stated claims that emigration was voluntary, many African Americans were pressured into emigrating. Indeed, enslavers sometimes manumitted their slaves on condition that the freedmen leave the country immediately. William Lloyd Garrison, author of Thoughts on African Colonization (1832), proclaimed the Society a fraud. According to Garrison and his many followers, the Society was not a solution to the problem of American slavery—it actually was helping, and was intended to help, to preserve it.

According to historian Marc Leepson, “Colonization proved to be a giant failure, doing nothing to stem the forces that brought the nation to Civil War.” Between 1821 and 1847, only a few thousand African Americans, out of millions in the US, emigrated to what would become Liberia. Close to half of them died from tropical diseases.

Map of Liberia circa 1830 (source)

“Suffrage is the pivotal right”*…

… but how we vote matters. We tend to take the electoral system in which we exercise our franchise for granted. Perhaps we should think more broadly. Why Is This Interesting? explains how Venice selected its Doges, and ponders the questions that raises for our own elections…

The way societies make decisions is important. There is a growing understanding that different systems can lead to quite different outcomes. Ireland rejected the British first-past-the-post system after independence and adopted the single transferable vote in 1921. New York City started using ranked-choice voting this summer, with some hiccups. Other countries have moved to full proportional representation where seats are allocated to parties more or less based on national vote share.
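For readers curious what ranked-choice (instant-runoff) counting actually does, here is a toy sketch in Python — an illustration of the general idea, not New York City’s certified procedure; ties and exhausted ballots are handled naively:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff count: repeatedly eliminate the last-place
    candidate and transfer those ballots to each voter's next
    surviving preference, until someone holds a majority."""
    candidates = {c for b in ballots for c in b}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter(
            next(c for c in b if c in candidates)
            for b in ballots
            if any(c in candidates for c in b)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        candidates.discard(min(tally, key=tally.get))

# Under first-past-the-post, A wins with 4 of 9 first preferences;
# under instant runoff, C's voters transfer to B, who wins 5-4.
ballots = [["A"], ["A"], ["A"], ["A"],
           ["B", "C"], ["B", "C"], ["B", "C"],
           ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # → B
```

The toy election makes the larger point concrete: the same nine voters produce different winners under different systems, which is exactly why the choice of system matters.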

There’s also the question of the best level of representation. Should city councils be elected at-large for the whole city (like in Cambridge, Mass.) or in single-member districts, and how would that affect outcomes such as diversity and zoning? Perhaps some decisions should be taken away from the city council, and either moved down to the neighborhood level or up to the regional level? And should some decisions, such as monetary policy, be taken out of democratic control altogether and left to technocrats?

Using sortition to choose government officials, as Venice and Ancient Athens did, is a niche idea these days, but in common-law countries, juries deciding legal cases are (supposed to be) chosen randomly from the population. Nobel laureate Daniel McFadden wants to use “economic juries” of randomly selected people to decide on big public projects, arguing that this can better reflect public opinion than a referendum.
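The statistical case for sortition is the same one Langer makes for probability polling: a uniformly random panel tends to mirror the population it’s drawn from. A small illustrative sketch (the roll, panel size, and numbers are invented):

```python
import random

def draw_panel(roll, k=25, seed=None):
    """Sortition: draw a panel of k citizens uniformly at random."""
    return random.Random(seed).sample(roll, k)

# A roll on which 40% of citizens hold some view; random panels
# mirror that share on average -- the statistical rationale for
# juries and "economic juries."
roll = ["pro"] * 40_000 + ["con"] * 60_000
shares = [draw_panel(roll, seed=s).count("pro") / 25 for s in range(400)]
print(sum(shares) / len(shares))  # close to 0.40
```

Any single 25-person panel can stray well off the population share, of course — which is why proposals like McFadden’s lean on deliberation, not just the draw itself.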

Since these political design choices affect policy outcomes, it would be naive to think this is only about high-minded notions of the “quality” of decisions. But that doesn’t make the question of how societies should make decisions any less interesting.

What’s the best way to hold elections? On Venice, decisions, and policy outcomes: “The Dogal Elections Edition,” from Why Is This Interesting? (@WhyInteresting). Eminently worth reading in full.

[Image above: source]

* Susan B. Anthony

###

As we ponder the practice of polling, we might recall that it was on this date in 1620 that 41 adult male colonists recently arrived in what we now call Massachusetts, including two indentured servants, signed the Mayflower Compact (although it wasn’t called that at the time). Though they intended to reach the Colony of Virginia, storms had forced The Mayflower and its pilgrim passengers to anchor at the hook of Cape Cod in Massachusetts. It was unwise to continue with provisions running short. This inspired some of the non-Puritan passengers (whom the Puritans referred to as ‘Strangers’) to proclaim that they “would use their own liberty; for none had power to command them” since they would not be settling in the agreed-upon Virginia territory. To prevent this, the Pilgrims determined to establish their own government, while still affirming their allegiance to the Crown of England. Thus, the Mayflower Compact was based simultaneously upon a majoritarian model and the settlers’ allegiance to the king. It was in essence a social contract in which the settlers consented to follow the community’s rules and regulations for the sake of order and survival– the first (colonial) document to establish self-government in the New World.

Signing the Mayflower Compact 1620, a painting by Jean Leon Gerome Ferris 1899

source