Okay, okay, that’s not exactly how Jon Stewart put it last night. But I suspect Stewart tapped a little deeper into the American mood in how he actually reacted to the mismatch between the polling and the results. Talking wasn’t really on his mind, either (NSFW language ahead):
🚨🇺🇸JON STEWART: ALL POLLSTERS CAN BLOW ME
“I don’t wanna ever f**king hear from you again.” pic.twitter.com/NAuDUi7XRR
— Mario Nawfal (@MarioNawfal) November 6, 2024
The best line in this rant is: “Oh, we were in the margin of blow me.” That’s because Stewart knows what’s coming from the polling industry, as do I. And while their rebuttal will have some technical merit, it will miss the point.
That point is that American pollsters have missed Donald Trump’s support in three straight elections. And they have missed it where it matters most: in the non-coastal, non-media-center states with substantially higher percentages of working- and middle-class voters. This time, they even managed to blow it in a coastal-media state, Virginia, where their estimates were off substantially, though not by enough to produce the wrong outcome.
Pollsters will fire back by claiming that their results came within the margin of error, properly understood. That’s true, and there is a common misconception among observers about the nature of the MoE: it has to be applied in both directions. To use an example: the last iteration of the New York Post’s national poll found Donald Trump and Kamala Harris tied at 49/49, with an MoE of 3.0%. What does that mean? It means either candidate’s true support could fall anywhere from 46% to 52% at a 95% confidence level when that result is applied against the target population. It could mean 52/48 Trump or 52/48 Harris or anything in between, with some chance that the true numbers fall outside even that range.
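For readers who want to see the arithmetic, here is a minimal sketch of where a ±3-point MoE comes from, assuming a simple random sample (real polls are weighted, so published MoEs are only approximations). The sample size below is back-solved to produce a 3.0% MoE for illustration; it is not a figure from the Post:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.49    # each candidate's share in the 49/49 tie
n = 1067    # hypothetical sample size, back-solved to yield ~3.0%

moe = margin_of_error(p, n)
print(f"MoE:    {moe:.1%}")                       # ~3.0%
print(f"95% CI: {p - moe:.1%} to {p + moe:.1%}")  # ~46.0% to ~52.0%
```

Run it and you get roughly 46% to 52% for each candidate, which is exactly the spread described above.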
If you look at that and think, “Well, that’s not really useful,” you’re right, at least in terms of solid predictive value. That’s why we tend to look at a large number of polls to spot trends rather than treat single polls as predictive data. I mentioned this yesterday: polls are probabilistic estimates, not hard counts of voter behavior, and they are shaped by the assumptions built into their models about the target populations. If pollsters don’t get samples that represent those populations, their results are not going to be predictive.
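To make the aggregation point concrete, here is a minimal sketch of a sample-size-weighted polling average. The numbers are invented purely for illustration, and real aggregators layer on recency weighting and house-effect adjustments:

```python
# Hypothetical poll results; not real survey data.
polls = [
    {"trump": 0.49, "harris": 0.49, "n": 1067},
    {"trump": 0.48, "harris": 0.50, "n": 800},
    {"trump": 0.50, "harris": 0.47, "n": 1200},
]

def weighted_average(polls: list[dict], key: str) -> float:
    """Average a candidate's share across polls, weighted by sample size."""
    total_n = sum(p["n"] for p in polls)
    return sum(p[key] * p["n"] for p in polls) / total_n

print(f"Trump:  {weighted_average(polls, 'trump'):.1%}")
print(f"Harris: {weighted_average(polls, 'harris'):.1%}")
```

Averaging like this smooths out the random noise in any single poll, which is why trend lines are more informative than individual releases.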
The problem now is that no one is doing very well at crafting models that reflect the target populations. Is that an issue of “non-response bias,” i.e., people refusing to cooperate with surveys? It might be, but that has always been a component of political polling. It looks much more like the pollsters share the same faulty assumptions about electorates in places like Pennsylvania, Michigan, and Wisconsin, and keep failing to produce predictive results because of it. Using aggregators like RCP and Nate Silver doesn’t fully correct for that, because almost all of the pollsters are producing faulty models, and more to the point, faulty models that keep missing the same segments of the electorate in the same places. Garbage in, garbage out. Again and again and again.
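A quick simulation makes the garbage-in, garbage-out point concrete: if every pollster’s model carries the same skew, averaging many polls cancels the random noise but leaves the shared error fully intact. All numbers here are invented for illustration:

```python
import random

random.seed(0)

TRUE_SUPPORT = 0.51   # hypothetical true support for one candidate
SHARED_SKEW = -0.02   # the same modeling error baked into every poll
N_POLLS = 20

# Each simulated poll adds independent noise on top of the shared skew.
polls = [TRUE_SUPPORT + SHARED_SKEW + random.gauss(0, 0.015)
         for _ in range(N_POLLS)]
average = sum(polls) / len(polls)

print(f"True support: {TRUE_SUPPORT:.1%}")  # 51.0%
print(f"Poll average: {average:.1%}")       # ~49%, off by the shared skew
```

No matter how many such polls you average, the result converges on 49%, not 51%. Aggregation fixes noise; it cannot fix a bias that every model shares.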
The failures of the past eight years come from that issue, not from observers misreading the MoE. It’s not just one pollster (although Ann Selzer will never live down her comic faceplant in Iowa); it’s all of them. The pollsters still have not figured out how to model properly for Trump, not even on the third try. Whether that is a problem created by “herding” around bad modeling or by outright bias is almost beside the point now. What is clear is that these pollsters, many of them based in either Academia or the Protection Racket Media, have no clue about the communities they are polling, and their models suck as a result.
Or as Stewart puts it more pungently, “You don’t know s*** about s***.” Nor does it seem as though they are willing to learn s*** about s***, even after producing s*** in cycle after cycle.
So what do we do about it? We can ignore polling, but that’s not terribly practical. We’d have to take the campaigns at face value in their claims about voter outreach, and that’s just as risky as relying on pollster data. Perhaps the best way to judge how campaigns are going is simply to watch their behavior. We could have seen this Trump victory coming from a mile away based on the increasing hysteria from Kamala Harris’ campaign over the past month. We should keep an eye on polling too, but with an understanding of what it is and is not … and of how piss-poor its practitioners have proven themselves to be.