
Can These Democratic Pollsters Figure Out What Went Wrong?


Five competing Democratic polling firms put their heads (and their data) together to study the 2020 misses.

Everybody agrees the polls missed the mark in 2020, as they had four years earlier. But nobody’s certain why.

In search of answers, five competing Democratic polling firms have decided to put their heads (and their data) together, forming a group that will undertake a major effort to figure out what went wrong in 2020 — and how the polling industry can adjust.

The team released a memo today announcing the project and offering some preliminary findings that seek to address why polls again underestimated support for Donald Trump. But over all, the message was one of openness and uncertainty. The big takeaway: Things need to change, including the very nature of how polls are conducted.

The authors wrote that their analysis thus far had pushed them toward thinking that pollsters must take a boldly innovative approach when mapping out the road ahead.

“We know we have to explore all possibilities,” Fred Yang of Garin-Hart-Yang Research Group, one of the five firms involved in the study, said in an interview today.

That will probably mean embracing some tools that had been considered too untested for mainstream public polling: Officially, the survey-research community still considers live-interview phone calls to be the gold standard, but there is growing evidence that innovative methods, like sending respondents text messages that prompt them to respond to a survey online, could become essential.

And it could also mean going back to some methods that have become less common in recent decades, including conducting polls via door-to-door interviews, or paying respondents to participate.

“We are going to put every solution, no matter how difficult, on the table,” the memo read.

The consortium of Democratic firms plans to release a fuller report this year; so will a number of traditional survey-research institutions. The American Association for Public Opinion Research, which undertook a widely discussed post-mortem analysis in 2016, is already at work on another. AAPOR is a bastion of polling traditionalism, but if the Democratic groups’ preliminary report is any indication, even the association’s coming analysis might acknowledge that the industry should embrace more experimental approaches to data collection.

In a separate analysis released late last month, Nate Silver of FiveThirtyEight found that traditional, live-interview phone polls weren’t meaningfully more accurate than others. In fact, out of dozens of polling firms analyzed, none of those with the lowest average error had exclusively used live-interview phone calls (and some hadn’t used them at all). Two of the three most accurate firms were Republican-aligned companies that are viewed with suspicion by most leaders in the social-science world, partly because they use methods that have long been considered suspect — including robo-calling, as well as newer techniques like contacting respondents via text message.

The Democratic firms’ memo said polls had slightly missed the mark when determining the makeup of the electorate last year. This means they misunderstood, to some degree, who was likely to vote and who wasn’t: a crucial “X” factor in pre-election polling.

Among so-called low-propensity voters — that is, the ones pollsters consider the least likely to turn out — Republicans proved four times as likely as Democrats to actually cast a ballot in November. This can be taken as another indication of how effective Donald Trump was at expanding the Republican electorate, and of how much trouble pollsters had accounting for that, particularly among white voters without college degrees and those in rural areas.
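For intuition about how that kind of turnout miss skews a poll, here is a minimal sketch in Python using invented numbers (nothing below comes from the memo): a likely-voter model weights each respondent's stated vote by an assumed turnout probability, and if low-propensity Republicans actually turn out at four times the Democratic rate, the projected margin lands on the wrong side.

```python
# Toy numbers, invented for illustration; nothing here comes from the memo.
sample_share = {
    ("dem", "high"): 0.40,   # share of poll respondents in each
    ("rep", "high"): 0.38,   # (party, turnout-propensity) group
    ("dem", "low"):  0.12,
    ("rep", "low"):  0.10,
}

def projected_margin(turnout_prob):
    """Dem minus Rep, in points, among projected voters."""
    dem = sum(s * turnout_prob[g] for g, s in sample_share.items() if g[0] == "dem")
    rep = sum(s * turnout_prob[g] for g, s in sample_share.items() if g[0] == "rep")
    return 100 * (dem - rep) / (dem + rep)

# The pollster's assumption: propensity drives turnout, party doesn't.
assumed = {("dem", "high"): 0.85, ("rep", "high"): 0.85,
           ("dem", "low"):  0.20, ("rep", "low"):  0.20}

# What actually happened (the memo's finding, with invented magnitudes):
# low-propensity Republicans voted at four times the Democratic rate.
actual = {("dem", "high"): 0.85, ("rep", "high"): 0.85,
          ("dem", "low"):  0.10, ("rep", "low"):  0.40}

print(f"poll's projected margin:  {projected_margin(assumed):+.1f}")  # about +3.0 (Dem)
print(f"actual-electorate margin: {projected_margin(actual):+.1f}")   # about -1.5 (Rep)
```

In this toy example the poll projects the Democrat ahead by about three points, while the lopsided turnout among low-propensity voters produces an actual electorate favoring the Republican by about a point and a half.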

Tellingly, the researchers found that voters who considered Trump “presidential” were underrepresented in polls.

But a greater source of concern was so-called measurement error. That’s a fancy way of saying polls have had trouble figuring out what percentage of people in certain demographic groups plan to vote for one candidate over the other.

The report proposed some explanations for why there was significant measurement error in 2020 pre-election polling, and it landed on two big potential culprits. One was the higher prevalence of anti-institutional views (sometimes referred to as “social distrust”) among Trump supporters, meaning those voters would be less willing to respond to official surveys. The second explanation was the lower incidence of pandemic-related fears among Trump voters, meaning they were more likely than Biden voters to be willing to turn out to vote.

“What we have settled on is the idea there is something systematically different about the people we reached, and the people we did not,” the report’s authors wrote. “This problem appears to have been amplified when Trump was on the ballot, and it is these particular voters who Trump activated that did not participate in polls.”
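To make that first culprit concrete, here is a minimal simulation in Python, with invented response rates (the memo publishes no such figures), showing why demographic weighting cannot repair this kind of nonresponse: if the Trump voters within a demographic group answer surveys less often than the Biden voters in that same group, the group's correct population weight still carries the wrong candidate split.

```python
# Toy simulation with invented response rates: differential nonresponse
# within a demographic group biases the topline no matter how the group
# is weighted, because the missing voters look demographically identical
# to the ones who responded.
import random

random.seed(0)

POP = 100_000
TRUE_TRUMP_SHARE = 0.50                          # electorate is actually 50/50
RESPONSE_RATE = {"biden": 0.06, "trump": 0.04}   # "social distrust" gap

population = ["trump" if random.random() < TRUE_TRUMP_SHARE else "biden"
              for _ in range(POP)]
respondents = [v for v in population if random.random() < RESPONSE_RATE[v]]

print(f"true Trump share:   {TRUE_TRUMP_SHARE:.1%}")
print(f"polled Trump share: {respondents.count('trump') / len(respondents):.1%}")
# Expected: 0.50*0.04 / (0.50*0.04 + 0.50*0.06) = 40%, a 10-point miss
# that weighting on demographics alone cannot detect or correct.
```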






"can" - Google News
April 14, 2021 at 05:45AM
https://ift.tt/3thJDBs

Can These Democratic Pollsters Figure Out What Went Wrong? - The New York Times
"can" - Google News
https://ift.tt/2NE2i6G
https://ift.tt/3d3vX4n

Bagikan Berita Ini

0 Response to "Can These Democratic Pollsters Figure Out What Went Wrong? - The New York Times"

Post a Comment

Powered by Blogger.