Hey, I just couldn't resist!
Please note that in the following commentary, I am attempting to unpack the POTUS 2016 election analytics via the polls, not take political sides. I am one of those "independent/undecideds" that everyone complains about, partly because no particular party "represents" me. Call me a rogue!
-------------------------------------------------------------------------------------------------------
Storytime: Can mind have an effect over matter? Is telekinesis possible?
Set up a random number generator in a computer so that it generates hundreds of numbers per second, all of them integers from 0 through 99. Have it collect as many as it can in one minute. When it's done, the average of all numbers generated should hover around 50 (the draws are uniform, so by the law of large numbers the one-minute average settles near 49.5). Once this is working, set up an observation and then focus your mind on something in the computer, something imaginary even, and "declare" this object in your mind to be the CPU or the generator itself.
Now start the recording and focus your mind and think - or even say out loud - "aim high". Continue these thoughts with as much intensity as possible. Perform this observation many times to get a lot of data. When the experiment is done and the observations are collated, the majority if not all of the averages will be above the value of 50. Conversely, when it's repeated with the phrase "aim low" the final results will show a downward skew below 50.
How is this possible? Is the mind truly able to affect an electronic random number generator? Does the concentration of the mind somehow force this outcome?
When independent observers watched the researchers repeat this and collected data both for "aim high" and "aim low", an interesting pattern emerged. At the end of the test, an observer asked, "I noticed you kept resetting the test until you got it calibrated. What was that all about?" "Well," said the researcher, "when I'm trying to take a reading I have to make sure everything is the same. If I see that the average isn't moving, I reset and refocus." "How many times do you do this?" "Well, I don't really keep track of that."
The problem here is that the researcher was throwing out valuable information. The discarded runs were showing that the numbers really were no better than random chance, but the researcher wanted to believe so badly that a lack of rigor in the test protocols seemed a-okay. The researcher was throwing out observations that proved the hypothesis wrong and keeping those that agreed with it.
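If you want to see how much damage that habit does, it's easy to simulate. Below is a minimal sketch in Python; the "reset" protocol is my reconstruction of the researcher's behavior as described above, not anyone's actual code.

```python
# Uniform integers 0-99: the honest long-run average is ~49.5 no matter
# what anyone is thinking. The "reset" protocol below quietly discards
# runs whose average "isn't moving" the desired way.
import random

def one_run(n=600):
    """Average of n uniform draws from 0..99 (one 'reading')."""
    return sum(random.randint(0, 99) for _ in range(n)) / n

def honest_protocol(runs=200):
    """Keep every run, exactly as it comes out."""
    return [one_run() for _ in range(runs)]

def reset_protocol(runs=200, target=50.0, max_resets=20):
    """Re-run ('recalibrate') until the average clears the target."""
    kept = []
    for _ in range(runs):
        avg, resets = one_run(), 0
        while avg <= target and resets < max_resets:
            avg = one_run()   # the discarded runs are the missing evidence
            resets += 1
        kept.append(avg)
    return kept

honest, biased = honest_protocol(), reset_protocol()
print(f"honest mean of run-averages:  {sum(honest) / len(honest):.2f}")  # ~49.5
print(f"'aim high' mean after resets: {sum(biased) / len(biased):.2f}")  # ~50.7
```

Run it a few times: the honest protocol hugs 49.5, while the reset protocol reliably "proves" telekinesis.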
----------------------------------------------------------------
So now that the election is done and the outcome is known, let's have a little fun with the analytics. You know, the pollsters. They claimed to have the answer (predictive analytics) - but most of them were dead wrong. Why was that?
Many of the news stories today start out with "<winning candidate name here>, in a surprising victory..." - or "stunning upset", or "the nation changed its mind in the eleventh hour". These are all just weak ways of failing to admit that they got it wrong. Not only did they have it wrong in the end, they had it wrong all along.
The Electoral College complicates the analytics. And since a lot of readers outside the U.S. follow this blog and have sent me "casual" questions about the Electoral College in the past, here's a quick primer on it.
Ironically, the Broadway show "Hamilton" has been in post-election news, with cast members admitting that they didn't vote. Alexander Hamilton was one of the Electoral College's chief architects and its principal defender in the Federalist Papers; he also proposed an executive who would serve for life (critics derided it as an elective monarchy), and advocated steep immigration restrictions. This is ironic in the light of recent statements from the "Hamilton" cast. I wonder if they know these things about the play's namesake?
The election of a President Of The United States (POTUS) is unlike many other elections, in that it's not a popular vote. It is a collection of results of fifty-one separate elections (50 states and the District of Columbia).
There's a strong reason for this. The POTUS is elected by states that are each independently sovereign of one another. The United States is just that - a union of sovereign states, each with their own governments and laws.
In America, state sovereignty is protected in a variety of ways. The FBI cannot insinuate itself into every criminal investigation. There are strong lines of jurisdiction that determine whether a crime is a state or a federal matter, all borne of the sovereignty of the state. The federal government can't arbitrarily send troops into a state without the governor's permission. The same is true for a variety of federal agencies. For example, when Katrina hit Louisiana, President Bush was on-site the next day with an offer to the governor to send in troops to keep order and distribute food and provisions for survivors. The governor hesitated on this offer. A day or two later the levees in New Orleans broke and stranded hundreds of thousands. The fact remains, the troops could not enter Louisiana without permission. The states have rights.
The Constitution protects the rights of the states to maintain their sovereignty. A popular vote for "anything federal" would violate this sovereignty and put all voters, regardless of their geography, into a common pool. A candidate that made a lot of promises to the population centers would win hands-down. Moreover, election fraud in one location would affect the whole. The POTUS election is this way by design, to keep democracy at bay: cheating is more complicated, the sovereignty of each state is honored, and population centers are not handed the unbalanced power to shut the majority of the country out of the race entirely.
Even as this is written, the state of California has laws that allow non-citizens to vote in some local elections, and its totals are driving the "popular vote" for the loser; some have claimed that millions of votes were illegally cast for POTUS there. If such votes exist and are ever excluded, the popular vote will likely line up with the electoral vote.
"Popular vote" would become a way for "big city dwellers" to tyrannize the country. Our founding fathers understood tyranny quite well, and the many forms it can take.
But what if the outcome were decided by popular vote? Consider that in states that are lopsided, like California, New York and Texas, voters for one candidate are always represented while voters for the other are not. Many voters for the other candidate don't go to the polls at all because they know it's a waste of time. But if this were a popular vote, those folks would have a vote that counted and would be encouraged to show up. Such dynamics come into play whenever the votes are counted differently, and they would significantly change how many votes are cast in the first place.
What if the candidates tie in Electoral votes or don't reach the necessary 270? The contest goes to the House of Representatives. Many people mistakenly believe that the House would cast 435 votes for President. This is not the case. The representatives of each state must "caucus" and have their own internal election, and only one vote from that state is cast for POTUS - 50 votes in all. Why? Because the states elect the President, not the people. It's a running theme.
This is a bit of a startling realization for some: even the Father of the Constitution, James Madison, warned against democracy, primarily because it allows the will of the many to trample the rights of the few. The founders deliberately put in place a variety of checks and balances that keep democracy at bay. As such, America is a representative republic, not a democracy in the direct sense at all. This is confusing to some, who've been told America is a democracy and the Electoral College is "undemocratic", and if it's not democratic it must be bad.
History has proven that "the majority is often wrong" - and this was borne out in the POTUS pre-election polling - all but a few polls were completely wrong.
Recall that many hundreds of years ago, a chap named Galileo challenged the majority "opinion" on celestial mechanics. Oddly, the Aristotelians in the universities - all of academia - stood in opposition to him. Even the leaders of his own faith disowned him. He was a lone voice in a sea of consensus - and all they had to do was look into the telescope. In all walks of life, we often find that a small number of people "have it right" while the majority is wrong. The founders of America wisely recognized this "herd effect" of humanity and didn't want it to have any power. One of the reasons for this is that the "herd effect" is often driven by a majority of the least-informed and most-fearful.
Should population centers control the presidency? This question was asked and answered by the founding fathers in the form of the Electoral College.
This system, however, radically complicates the prediction of an overall outcome. Each state's activity has to be predicted and modeled, and depends on a lot of very dynamic factors.
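To make that concrete, here's a toy Monte Carlo sketch of how 51 separate contests roll up into one probability. Every number in it - the state labels, elector counts, and win probabilities - is an invented placeholder, not real polling data.

```python
# Toy electoral-college forecast: sample each contest, tally electors.
import random

# (electoral votes, P(candidate A wins that contest)) -- all invented
STATES = {
    "Safe-A-1": (55, 0.95), "Safe-A-2": (29, 0.90),
    "Swing-1":  (29, 0.55), "Swing-2":  (20, 0.50), "Swing-3": (18, 0.45),
    "Safe-B-1": (38, 0.10), "Safe-B-2": (16, 0.08),
    # ...the remaining contests are omitted here for brevity
}
TOTAL_EV, TO_WIN = 538, 270
LISTED_EV = sum(ev for ev, _ in STATES.values())

def simulate(trials=100_000):
    wins = 0
    for _ in range(trials):
        ev = sum(v for v, p in STATES.values() if random.random() < p)
        ev += (TOTAL_EV - LISTED_EV) // 2   # crudely split the omitted states
        if ev >= TO_WIN:
            wins += 1
    return wins / trials

print(f"P(candidate A reaches 270): {simulate():.1%}")
```

Note the quiet assumption baked into the sketch: each state is sampled independently. If one systematic polling error hits every swing state at once - hold that thought - the real uncertainty is far larger than a model like this reports.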
Even further, in the 2016 election, the media, the newspapers, the consultants and a wide range of politicians, including many in the same party as the eventual winner, all claimed that the eventual winner would lose in a landslide, suffering the most devastating loss in history. All of them were wrong. Instead, the very opposite became reality. The naysayers were proven wrong, the "favored" candidate lost, and that candidate's party was decimated.
The problem here isn't that the nation "suddenly turned at the last moment" as some have suggested, but that the pollsters themselves fell prey to a common problem in these kinds of analytics. They had it wrong from the beginning and their polls failed to reflect reality. There's an obvious reason we'll get to a little later.
Here was a chance for predictive analytics to shine like no other - a time-boxed, measurable event and outcome. This is why those of us who champion analytics are so appalled by the epic fail. It could have been a shining moment for analytics, but instead it was just a fizzle. The worst part is that people without any predictive computing also predicted what would happen in the aftermath of the election - and they were also right. No computer algorithms required. Perhaps the nature of a "learning machine" needs to be couched in terms of what is being learned and how we know it's true?
Many of us who want excellence in analytics were absolutely appalled at the abject lack of accuracy of the presidential polling. It seemed a lot like a circus, and the election-night coverage seemed more like damage control than the reporting of an expected outcome.
After all, if the polls were scientific and on-the-money, when the polls closed and the votes were reported, the science should have already reflected the reality of the outcome. No surprises at all. Why was it such a shocker? An upset? A Cinderella story?
Think about that for a moment. We hire someone to do analytics for us, find market opportunities and reduce risk, and when we act on their carefully crafted predictions, it goes south and we lose millions of dollars. Would this be a good testimony to the accuracy of our analysts, or an abject failure? Would we ever trust them again? Would we throw out the baby with the bathwater, so to speak, and forsake the value of analytics altogether? Some have done this, and later revisited it with a bit more scrutiny (and are happy with their results now).
When it comes to our companies, our livelihoods, etc - trusting the data and the analysts is hard because it's personal. It's not like a video game where we get unlimited retries. The one failure may put us out of business. The one success may make us handsomely solvent for decades.
Let's get some things out of the way - we have a few broad reasons why they got it so wrong.
- On the dark side - they knew the results and were lying. I don't see any benefit for the pollster or the candidate in this. In fact, showing a candidate artificially high in the polls could spur the opposition voters to the polls!
- On the lighter side - they were incompetent - I can't accept that so many were off because of this - that's just me talkin'
- On the analytic side - they were victims of "confirmation bias". This explains it much better. They were sincere, but sincerely wrong. They were simply too personally invested in the outcome.
- Reuters and their partner IPSOS, recently shared that much of their polling data favored the eventual winner, including the most controversial issues - by over 83%. They don't see their failure to report these known facts as having any influence on the election. In fact, they completely changed their time-honored measurement system in a manner that favored the eventual loser. Why would they do this? It's really simple: they could not bring themselves to believe that they were wrong.
Many who have attempted to find a reason for the failure fall into one of two major buckets - those who look at it scientifically and those who look at it politically. The political view has a problem in that the presence of bias - of the modelers and analysts - tends to taint the model and inject a "confirmation bias". That is, the polls say what the analyst expected them to say, but the analyst may not realize that their bias has already affected the outcome. Many who look at it scientifically are using politics first, so the confirmation bias creeps in again.
Case in point, an analyst may ask the question, "If the election were held today, who would you vote for?" And it's interesting how many of those voters polled were dishonest in their response. How we know this will follow shortly. Taking these responses at face value, the analyst sees that the numbers trend as they expected, so they report the results. Another form of confirmation bias is when the analyst has a hypothesis in hand, such as "We know that nobody could possibly want to vote for Candidate A for so many important reasons, thus..." and this hypothesis is what drives their questions and likewise the rest of the analysis. They only ask questions in this context and only hear answers in this context.
This is a lot like the circular question of "Do you still kick your dog?" If someone answers "yes", they're a reprobate. If they answer "no", it implies they used to kick their dog...
Unbeknownst to many, this circularity is one of the major flaws of the scientific method. This is why the scientific method can't be used to "prove" anything. It can be used to falsify, but not to prove. Bias rears its head in common scientific experiments and even in police forensics.
"What happened to the evidence you collected in the bathroom of the crime scene?"
"We analyzed it and threw it out as irrelevant."
"You threw it out as irrelevant? Why?"
"Because it didn't match the suspect."
(real conversation between a prosecutor and CSIs in a triple murder)
In scientific circles: a professor at an Ivy League university had an intern who reported that the professor had collected a wide array of samples from a recent field trip, and the intern had expected to spend the weekend collating them as part of his duties. But when the labeled and bagged samples arrived, only a fraction of them made it into triage. He asked the professor what happened to the rest, and the professor said they had to be discarded because "they didn't fit the profile".
This "doesn't match the suspect" or "doesn't fit the profile" is a problem because it means the analysts have a hypothesis concerning a specific outcome and have thrown out evidence suggesting, or even proving, that their hypothesis is wrong. The professor, in throwing out evidence that didn't match a profile, is aligning to his original hypothesis and discarding evidence that disagrees with it. In the case of the CSI, if they throw out all but the data that "matches the suspect" they are by definition throwing out the case against the real criminal. If any of that evidence could exonerate the "suspect" but the "suspect" is falsely convicted, the real criminal goes free.
In the case of Reuters/IPSOS - their hypothesis was that the eventual loser would win - has to win - in a total landslide. Any data that contradicted this simply had to be erroneous. Yet it was not - it was speaking the truth and they ignored it.
So the flaw of the scientific method is that the scientist can inadvertently use the hypothesis as the filter through which all evidence is examined, rather than for its intended purpose: a springboard for investigation, to be confirmed, rejected or modified based on the evidence discovered. Since many researchers apply for grants based on "the hypothesis", they must project reasonable confidence that the hypothesis has merit in order to win funding from donors. If the scientist has enough failures, those funding sources will dry up.
Thomas Edison claimed to have failed over a thousand times in inventing the lightbulb. His explanation was that in each case he eliminated a candidate, with the expectation that a final candidate would succeed - genius, as he put it, being one percent inspiration and ninety-nine percent perspiration. Unfortunately, donors these days aren't so forgiving. Scientists like Edison, Newton and Lavoisier had independent means of supporting themselves while performing science. Today, scientists expect to be financially supported even while they do the work. That's reasonable, but it increases the cost of science, and it tempts the scientist into fudging the data to match the hypothesis. There are bills to pay, kids in college, vacations to fund. A dependency on money, and lots of it, has been shown to affect judgment.
As for polling, the polling firms sell their results to the candidates and it's collectively worth billions of dollars. Down-ballot candidates in 2012, 2014 and 2016 spent over a billion dollars in polling alone.
Some reason this way: if you're a pollster in the business of selling polls, and the first poll you take shows your "guy" waaay ahead of the pack, do you share this or do you show the race to be "very close"? This "very close" will cause your client to be nervous and want to purchase another polling result as soon as possible. You have a vested interest - even a conflict of interest - in whether or not you deliver the right polling results because it's your job to sell more results. Since everyone knows that the only polls that count - are the polls in the final week - hey - why not massage the numbers? Who's it gonna hurt?
In the book "Wrong: Why Experts Can't be Trusted" - the author cites one case after another of scientists eliminating, adding or fudging data. One researcher went so far as to use a magic marker to put stripes in the fur of a lab rat so that it could pass certain acceptance criteria. He notes that even though doctors have formally accepted a correct form to administer Cardio-Pulmonary Resuscitation (CPR) the average training manual and even the Red Cross still use the former method.
Periodically, a case of fraud or collusion in the area of climate science appears in the news. If the science is there, why the fraud? For example, the IPCC has acknowledged a "pause" in global warming since the late 1990s. The current trend is more toward cooling, as exemplified by the glacier forming in Mount St. Helens' caldera. Many scientists are asking the impertinent question: is the "pause" really a "pause", or is it something else, like "reality"?
In May of 2015, an analysis of updated NASA satellite data argued that the polar ice caps are not receding; in fact, Antarctic sea ice was expanding.
http://www.forbes.com/sites/jamestaylor/2015/05/19/updated-nasa-data-polar-ice-not-receding-after-all/#4cfd4f9332da
Claims that "the majority of climate scientists believe..." aren't relevant, because many centuries ago the majority of scientists believed the Sun revolved around the Earth. Science is science, and not subject to consensus or democratic vote. One "uncovered" memo after another has revealed that the "result" of climate science can be purchased regardless of the contradicting data. So why does anyone trust it? After all, doesn't Earth itself do more damage to itself than mankind could possibly keep up with?
In the case of Presidential polling, there's no compelling reason to deliberately get it wrong. Some like Reuters may withhold information, but this isn't the same as false reporting. Statistics show that false polls don't do a lot to suppress or affect voter turnout or sentiment. In fact, one could make the case that a false poll strongly in favor of one candidate could strengthen the resolve of a person voting for the opponent. Likewise if a candidate is seen as strong in the polls, some of the candidate's supporters may not go vote under the assumption that their vote wouldn't count all that much. Both of these dynamics are partially in play in 2016 but nobody can measure it, so nobody knows for certain its impact.
I worked with a firm some ten years ago that had so much cash rolling in, they could have wallpapered the offices with 100-dollar bills and not missed any of it. Their business was so dramatically profitable that their investors loved them, their customers loved them, but one love was lost and had not returned - working for the corporate offices was drudgery. They had not upgraded their computing systems in many years, so many of the employees spent countless hours, every day, pulling data and collating spreadsheets.
In the meantime, a lack of visibility across the corporation made their expenses invisible. They were hemorrhaging cash in most departments and could not see it, and didn't care, because so much more cash arrived to replace it, and then some. Such situations always have a day of reckoning. We were onsite because that day had arrived, and the senior management was so frightened they may as well have been in witness protection.
Analytics matters. It tells you where you stand.
Unless it's presidential polling. Which is just bizarre. I hear that it's - uh - like the most powerful position in the world? I could be wrong about that, but for the presidential polling to be so completely off?
The reason we as voters don't particularly care about the presidential polling is that it's rarely accurate. We're sort of "inoculated" to it by now. We hear all sorts of stories of pollsters using the polls to shape opinion rather than reflect it. We want to believe that they're doing it for the right reasons - to be accurate and regarded as reliable, not as agenda-driven political hacks. Hope springs eternal, but at the end of every election season we see how completely wrong they were. At the beginning of every election season, the lessons of four-years-prior are forgotten and we find ourselves watching the polls ever-so-hopefully.
Pardon the analogy, but isn't this a lot like Lucy and Charlie Brown, where she holds the football for him and rips it away at the last moment? She promises each time she won't do that, but always does. Why does Charlie keep coming back? Well, because it's funny, and it's Peanuts, and we know it's make-believe.
But presidential polling isn't make-believe, and the outcome has real-world consequences.
Only three of the mainstream pollsters were even remotely accurate. All the rest (dozens of them) couldn't have been more off if they had just manufactured the numbers from thin air. In fact, any of us could lick a finger, test the political wind, and produce a poll that was more accurate than ninety percent of those claiming to use analytical science. Just embarrassing.
And if it's science, why were they so wrong? I mean, so completely wrong?
This is why the final election outcome was such a "shocker". Expectations. Just the setting of false expectations is enough to set someone off. Tell your wife you have a romantic weekend planned and then at the last minute get called into a non-optional emergency meeting at a client site - uh - yeah - set those expectations carefully! In one particular case, I had set my client's expectations that I would be unavailable. They didn't even bother to call.
Nate Silver, famed analytics guru of many past elections, put his private formula alchemy to work, and at the beginning of election night already had one candidate favored with a 70 percent chance of winning. No margins, you see, just whether the candidate would win. As the polls closed in each time zone, Silver changed his percentages. They went to 60, then 50, then dipped below 50, moving downward "as the world turned" and polls closed by the hour. The opponent likewise rose in the other direction. Betting odds were a reversal of fortune for many.
People looked at Silver's messaging in real time, analyzed it and proclaimed, "The frontrunner is losing ground" or "the underdog is gaining ground". Why doesn't anyone see such sentiments as odd? The reason I say this is simple:
Friends of mine go to the tracks on occasion, probably more often than their wives would like, and spend time betting on dogs or horses. At the beginning of the day, the track officials publish the betting odds, much like Silver published "odds" during the course of the campaign. But those odds at the track are only good before the race begins. The officials don't change the odds after the starting bell.
And what happens when the gate opens and the horses charge forth? Seems to me they're just like Olympic runners in a starting block. They all have the same starting point - zero - and all of them have to gain ground faster than the opponents to break the ribbon first. I mean, everyone gets that; it's why we watch races. I still recall, many Summer Olympics ago, one of my favorite runners (Gail Devers) in the 100-meter hurdles. She was comfortably ahead when she hit the last hurdle and tripped over it. She landed on her knees and tried to recover, but finished out of the medals. She was favored to win, too. That really was a case of gaining ground only to lose it later. Many recall the Winter Olympic snowboarder who was favored to win. Her rivals tangled right out of the gate and she ran well ahead of them. When she hit the last hill, she decided to "hot dog" and brought her board up to touch it, lost her balance and spilled out. She hurriedly tried to make it right, but a rival sped past her and the gold was gone - an opportunity lost.
The lesson in all this is that once the polls close, the outcome is already determined. The candidates can't gain or lose ground. At the voting precincts, there is no frontrunner or underdog after the polls close. It's all over but the counting.
The odd part about this being applied to the 2016 presidential race, is that when the polls closed and votes were reported, one jumped out in front and stayed out in front, and ultimately won the contest. Anything prior to the votes being counted - predictions, polling, exit polling - didn't matter any longer - because most of them got it completely wrong. The only question people have later is - if most of the pollsters were completely wrong, how do we know the others aren't just a fluke?
Isn't that what we'd say about any other kind of contest? If fifty people make a prediction for an outcome and only three get it right, we chalk it up to random chance, not the skills of the predictor. The point being - if science is directly applied, we move closer to the expected outcome. If no science is applied, we can't expect better than random chance.
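That intuition can be checked with a little arithmetic. A minimal sketch, under my own simplifying assumption that each forecaster is an independent coin flip on a two-way race:

```python
# If 50 forecasters each flipped a fair coin, how often would 3 or fewer
# of them be right? (Coin-flip assumption is mine, purely for illustration.)
from math import comb

n = 50
p_at_most_3 = sum(comb(n, k) for k in range(4)) / 2**n
print(f"P(3 or fewer of 50 correct by pure chance): {p_at_most_3:.1e}")  # ~1.9e-11
```

Two things fall out. Three winners among fifty guessers is easily explained by luck, so the winners' skill isn't established. But forty-seven guessers almost never miss together unless their errors are shared - they were drawing from the same sources, a point we'll return to shortly.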
So for Silver to claim "gaining ground" or "losing ground" after the polls close is ridiculous. It's Silver's form of "damage control" after being proven so completely wrong. Moreover, he was repeatedly proven wrong about the winning candidate from the moment the candidate announced a presidential bid. Every prediction he made, from the polling to the primaries, crashed and burned. He and others expected their front-runner to win in a landslide of 500 or more electoral votes. Epic fail. How embarrassing is that?
Not for him, per se, but for analytics in general. It is because of Silver's past success that advanced analytics gained ground in the marketplace, but when folks like Silver so completely and visibly fail, it sets analytics back, and in some ways brings shame to those who sold their analytics on the strength of Silver's success with it.
To Silver's credit, part of his "damage control" was his explanation of how closely he called the "popular vote". Well, Nate, that's reeeeeal nice. But as noted above, the popular vote doesn't mean squat. This is the United States; each state is a sovereign entity, not beholden to the other states, and does not participate in a popular-vote-based election. It is in each state's best interest, and always has been, to avoid being pooled into a popular vote.
A similar effect happened with the 2004 POTUS election, where one candidate was handily whipping the incumbent - based on exit polls alone - but when they started counting votes, the incumbent immediately jumped in front and never fell behind even once. Various groups in favor of the challenger cried foul - but strangely did not point a finger at the exit pollsters. The outcome of the election would have been the very same without their reports. And since their reports were so completely wrong, why report them at all? The more sinister among us would claim they were trying to affect the outcome. Voters are a little smarter than that, so it's hard to swallow. Especially since it's across fifty sovereign states, each with their own vested interests - this just makes it a lot harder to cheat.
Another strange effect happened with the 1980 election, where the polls leading into election night had the incumbent ten points ahead, but as the night unfolded, the challenger took the race in a total landslide. Only many years later was it learned that in the week prior to the election, the incumbent was taken aside and told that he wasn't ten points ahead, but ten points behind. No way, no how would he recover those ten points in a few days. This was a "brace yourself" moment, so that nothing about election night would be unexpected for the candidate. I suspect that the same numbers were available to the challenger as well. This was the first time that the loser conceded the race even before the polls closed on the west coast.
In the 2016 election, one candidate was behind in most of the predictive polls and led in only a few, while the other candidate enjoyed a comfortable "lead" throughout the campaign season. The underdog claimed that the polls could not be trusted. People laughed. The polls that showed him ahead were ridiculed. They, however, were closer to the mark than anyone realized. Oddly, these same polls were the most accurate ones four years earlier, in 2012. Why weren't they trusted this time? Confirmation bias. They disagree with our hypothesis - so there's just no way they can be right.
Both candidates were "seeking low ground" in their rhetoric - in the most bizarre and tumultuous race ever, the candidates dished out their fair share of mud. As a result, many voters were uncomfortable openly committing to either candidate. Of course, each candidate had their own "openly loyal base", but knew they could not win with the base alone. They had to reach the "independents" and "undecideds" - but how to find them? How to know what they really think?
One of the pollsters used an interesting question - "How are your neighbors voting?" - and this seemed to unlock a wealth of information. In the final analysis, this one question unlocked the hidden information: he had found the true sentiment of "undecided" voters. While a person may not feel comfortable sharing their own opinion, it was easy to share the opinion of an "imaginary neighbor". This pollster's numbers were more accurate, state by state, than any of the other pollsters'. He called it, but he had to use a little subterfuge to make it happen.
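Here's a rough illustration of why the neighbor question works. The shares below are invented placeholders, not the pollster's actual numbers; the point is only the gap between the direct and indirect readings.

```python
# Toy "shy voter" model: some true A supporters deflect the direct question
# but project their real leaning onto an imaginary neighbor.
import random

TRUE_SUPPORT_A = 0.52   # invented: true share intending to vote for A
SHY_RATE = 0.15         # invented: share of A supporters who won't admit it

def poll(n=20_000):
    direct = neighbor = 0
    for _ in range(n):
        supports_a = random.random() < TRUE_SUPPORT_A
        shy = supports_a and random.random() < SHY_RATE
        if supports_a and not shy:
            direct += 1          # direct question undercounts A
        if supports_a:
            neighbor += 1        # neighbor question recovers the true leaning
    print(f"direct question:   {direct / n:.1%} for A")
    print(f"neighbor question: {neighbor / n:.1%} for A (true: {TRUE_SUPPORT_A:.0%})")

poll()
```

The "subterfuge" is really just measurement design: the indirect question routes around the respondent's reluctance.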
This subterfuge, it seems, is the hallmark theme of politics. The politicians keep a public face and a private face. One candidate was revealed to have told donors to expect public responses to the voters that were incongruous with the private responses to the donors - not to worry, it's just politics.
On a personal note, my father was a District Attorney (an elected office) for over 30 years in East Texas, and was in office at the time of my wedding (to my wife of now 30 years). My wife's parents had taken on the expense of the wedding reception. Dad told them to invite an additional 300 guests to the wedding and reception, which would have blown their budget sky-high. They objected, but Dad said: no worries, it's just politics. None of them will actually show up, but it's bad form not to invite them. Sure enough, none of them showed up. But this is a bit of a subterfuge in itself, is it not?
Politics is an art of partial truths and partial subterfuge. Anyone who reveals their agenda from the outset is considered a poor political player. One must have a public agenda and a private agenda, if they want to "get anything done". At least, that's the "common wisdom".
This is why many technologists often divorce themselves from politics entirely. If I took a poll of technologists nationwide, I suspect I would find that only a small percentage of those eligible are actually registered to vote. Technologists often watch politics like a sporting event, if they watch sporting events at all.
But this politics-as-usual problem - the subterfuge and hidden agendas - had apparently wearied the American voter. So when a candidate stepped forward with no political experience at all, nobody knew how to measure it. They still don't. Even now they are attempting to describe the candidate's victory within a political paradigm, and nothing they come up with is accurate. I read one dissertation that attempted the same old shoe-horn of analyzing by demographics, when the winning candidate had clearly appealed to a populist voter base that cut across a wide array of disparate demographics.
No wonder they were so completely wrong - they were looking in the wrong place, and asking the wrong questions. Conspiracy theories emerge. Are they that incompetent? Are they deliberately lying? Either way, how can we trust them?
In the days that followed, Silver moved into "damage control" with "we were only 2 percentage points off". Well, no: a win probability is not a vote margin, and claiming that one candidate had over a 70 percent chance of success is a lot more than a 2-point miss. But since he doesn't conduct polls himself, basing his models on existing polls and other information, he was effectively drinking from a poisoned well. If he had taken even a superficial look at the candidates he would have seen why. One was a career politician and one had never run for office and didn't understand the first thing about politics - so he didn't conform to the common model. This not only confounded the opponent, it confounded the media - and the analytics.
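To see why a probability and a margin are different animals, consider a toy normal model of the final vote margin. All the numbers here are mine, for illustration only:

```python
# Small polling misses move win probabilities a long way.
from statistics import NormalDist

SIGMA = 3.0  # assumed forecast uncertainty, in percentage points
for margin in (3.0, 1.0, 0.0):
    p_win = 1 - NormalDist(mu=margin, sigma=SIGMA).cdf(0.0)
    print(f"polled margin {margin:+.1f} pts -> P(win) ~ {p_win:.0%}")
# +3.0 -> ~84%, +1.0 -> ~63%, 0.0 -> 50%
```

Shave two points of systematic error off a three-point "lead" and the confident favorite becomes a coin flip. "Only 2 points off" is precisely how a 70-percent forecast dies.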
I kept after my kids to pay attention to this election season because they'll tell their grandchildren about it - this will never happen again in our lifetimes.
In large part, this was unlike any other presidential political season for these very reasons, but the pollsters treated the second candidate like the first, attempting to shoe-horn everything into a common "career politician" model. Time after time, the non-politician candidate beat the predictions and nobody could understand why. In the end, the "political class" of consultants and the "establishment" were very afraid. They had set up a system through which all political candidates had to arrive, and this candidate proved that none of it was necessary. It relegated the established political engine to irrelevancy.
Moreover, the winner is about to enter office without any obligations to donors or other influencers - and no fingerprints on any of the problems taking place in government now. This was not the case with any of the other candidates.
Has anyone ever witnessed something so strange? The candidate stuck to several core issues that resonated with voters in all demographics and gathered more diversity under the candidacy than anyone prior. The losing candidate, on the other hand, kept going through one re-invention after another, like a phoenix rising from the flames. Voters saw this as phony.
How does this apply to our internal corporate issues? Does dirty politics play a role in how numbers are reported? Do we seriously think that if the eeevil political player down the hall is able to manipulate numbers to his or her advantage, they will be altruistic and avoid the urge to cheat? No doubt many avoid the urge, but there is an ever-present propensity to cheat, to capture numbers and spin the story to one's favor. It's just human nature.
When Inmon coined the phrase "single version of the truth", this is exactly the problem it addressed: to make sure, in certain, objective and scientific terms, that everyone was reporting from the same place, same totals, same everything, so that nobody could cheat. It's bad enough when someone cheats to pad their numbers and look better; it's even worse when an underperformer pads their numbers to look even marginally acceptable. Corporate heads wanted neither - they wanted a single place to go where everything was laid bare, the good, the bad and the ugly.
In Red Storm Rising, Tom Clancy tells a story of the Russian Politburo and their analysts, who would arrive with three reports in-hand. One was the worst-case, one the best-case and one the middle-ground. When the analysts arrived, they would attempt to "read" the sentiment of the Politburo members - before choosing which report to proffer. The Politburo was known for being harsh with people who disagreed with them. So the analysts would attempt to discern the sentiment and intersect it with a report that aligned with Politburo sentiment rather than challenged it. In this particular storyline of Clancy's, this sentiment was ill-placed, the report was the "best case" and the outcome was disastrous.
Sometimes the folks asking the questions are their own worst enemy. They ask strongly biased questions in the wake of harsh outcomes for dissenters. If a person wants to keep from getting their head lopped off, they do whatever, say whatever to avoid this outcome, but it's not doing the decision-maker any good. If anything, it's misleading the decision-maker down a disastrous path. Unless of course, the decision-maker is just so good at what they do, they don't care about the opinions of others anyhow. Except to see who is loyal to them, of course.
This is the dichotomy presented to us by presidential polling versus our internal analysts. The pollsters and the analysts both have a loyalty or bias, so it is incumbent upon us to either determine that bias, or guide that bias in our favor. In business we want our analysts loyal to our goals and success. We want their honest answer as to where we're headed and whether or not it's a good idea, how to steer toward success and how to avoid danger, and the most effective way is to join at the hip. Their fate is our fate - they have a vested interest in helping us get it right.
Consider the story of the king who went to visit a wise old soothsayer, who told the king, "You will cross a river, and a great king will be defeated." So the king mustered his troops, crossed the river and his entire army was routed. The soothsayer was right - a king had been defeated - but I'll bet the king asking the question would have wanted a more specific answer.
Conversely, who are the pollsters loyal to? Clearly at least one pollster was chasing "the answer" and found it - and reported it regardless of how many others laughed at him for it. The others, who had it wrong, were loyal to something else. We don't need to know what that something else was, just that they weren't pursuing the truth. And if they were pursuing the truth but were that far off, they certainly weren't pursuing it well. With the stakes so high, wouldn't we want the person to pursue it well? Don't we want them to be loyal to us?
And "loyal to us" isn't the Politburo gambit of "reading" us to tell us what we want to hear. We want them to tell us what we need to know.
Or are we really okay with analysts who "found what they were looking for" and upon "finding it" proclaimed "see I told you so" even as the company was entering Chapter 11? One particular very-large energy company in Texas (Enron) had one and only one analyst telling them they were on the wrong path. He was right, and could say "see I told you so" - but the decision-makers weren't listening.
This is a lesson for the analysts out there - just like only a few 2016 pollsters got it right while the others laughed at them - you as an analyst might be up against similar odds - and feel like Galileo in conflict with the greatest academics of his time. Your chief analysts may tell you that you're wrong - that you're making a bad career move to disagree with them. What if they are using the "common metrics" and you have found an outlier, a significant anomaly that creates tension in all the common answers? If the data is on your side, you have some decisions to make.
Many years ago I worked with a chap who built hardware parts for computers. One IEEE-certified schematic for a device specified a much larger wire than was necessary. The wire could carry more current than common house wiring, but the device was powered by a nine-volt battery. The larger wire seemed like overkill, but the specification called for it. He took his case for the larger wire to the boss, who told him that he needed to use a smaller wire. The engineer stood his ground on the side of the schematic, claiming that he'd taken an engineering oath not to follow instructions that opposed a schematic specification. A battle ensued that lasted for many days, until the young engineer tendered his resignation. A contractor was called in to fill his shoes, and he noted the same issue with the size of the wire. He called the vendor, who said, "This has already been published in errata. Do you not have a copy of it?" The contractor said no, so they faxed a copy over. Lo and behold, the new schematic specified a smaller wire. Why didn't the first engineer think to do this instead of sticking to the original data - in the face of such a glaring anomaly?
What does all this mean? We can stand our ground with bad data in our hands and be sincerely wrong. Or we can look at other aspects of the problem and regard glaring inconsistencies as problems to solve rather than ignore. Nate Silver ignored the "poisoned well" of the data he was using, as did the other pollsters. The ones who got it right stuck to their answers even through ridicule, because they knew this race was different in too many ways to count, and required more than just the common metrics.
An old joke goes like this: Some analysts got together to determine the "meaning" of "two plus two" - and brought in a mathematician. His answer was "four - what's your point?"
They brought in a philosopher, who answered, "Well, 'two' in one universe might mean different things than in ours; the same for the meaning of 'plus' or even 'four'. Can you be more specific?" They thanked him for his time.
Then they brought in the attorney. Upon hearing the question, he rose from his seat, shut the door, seated himself and leaned into them, "What do we want it to be?"