Nate Silver's *FiveThirtyEight* blog won the distinction of being the least wrong in predicting the outcome. Although Silver continually warned us that he *might* be wrong, his clear implication was that Hillary would win (see *Donald Trump's Six Stages of Doom*, linked below, if you question that). Others seemed to feel nearly certain.

In theory, no outcome was completely missing from the predictions, but many were given extremely low probabilities. There was even a possibility that Evan McMullin would win, just not a very big one. (See *How Evan McMullin Could Win Utah And The Presidency*, linked below.)

*Wired* magazine ran a story, obviously before the election, entitled “*2016’s Election Data Hero Isn’t Nate Silver. It’s Sam Wang*.” To quote Wang, a professor of neuroscience at Princeton and now-famous insectivore who writes the Princeton Election Consortium blog, “It is totally over. If Trump wins more than 240 electoral votes, I will eat a bug.”

*Crunch. Crunch.*

*(The more precise term may be entomophage.)*

Nate Cohn at *The Upshot* needs to eat a bug or two himself, as do the election markets. They all predicted much higher probabilities of a Clinton win than did Silver. I'm not sure there are enough bugs to go around.

These are a lot of high-powered minds with deep understandings of statistics and modeling, but they were all – to quote The Donald – *Wrong!*

How could that happen and what does it have to do with retirement planning?

Predicting election results is *social* science. Models using Monte Carlo simulation are good at predicting outcomes for *physical* science, in which objects tend to behave in predictable ways under the same conditions. People are not so predictable.

There is an entire social science discipline, called behavioral economics, whose sole purpose is to explain the unpredictable and irrational economic behavior of humans. This unpredictability is a primary reason why statistical modeling won't be as predictive in social science fields as it is in physical science. (See *The Marketplace of Perceptions*, linked below.)

Monte Carlo simulation was created by physicists. Enrico Fermi used Monte Carlo techniques in the calculation of neutron diffusion in the 1930s. Atoms tend to behave in the same probabilistic ways under the same conditions regardless of emotions, doubts, fears, and biases. People don't.

Silver, Wang, Cohn and the gang have to build a lot of judgments and assumptions into an election model, and in 2016 many of these were obviously erroneous. As the exit polls come in, you will hear each expert explain why he was wrong or that he was actually right but we didn't interpret his results correctly. (Like Cohn's *Putting the Polling Miss of the 2016 Election in Perspective*, linked below.)

In other words, we had a lot of confidence in the models and that was a big mistake on *our* part.

OK, poll aggregators, I can accept my responsibility here and promise never to put much stock in your predictions from now on – my bad. I actually drew comfort from the fact that so many models suggested the same result (a Clinton win), but apparently there was a strong correlation between models that I overlooked. “Herding,” the tendency of polling firms to produce results that closely match one another, especially toward the end of a campaign, seems to be a problem not only among pollsters but among poll aggregators, as well.

Physics models can be more accurately defined than social system models and can use better-defined input data.

The radioactive decay rate of plutonium-239 is pretty consistent. “With a half-life of 24,100 years, about 11.5×10¹² of its atoms decay each second by emitting a 5.157 MeV alpha particle.” Monte Carlo simulation models of decay can predict outcomes pretty accurately.
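A toy simulation shows why decay is so predictable. The sketch below is my own illustration, not from any physics package: it takes the half-life from the quote above, picks an arbitrary atom count and time span, and draws one Bernoulli trial per atom.

```python
import math
import random

# Illustrative sketch: simulating alpha decay of plutonium-239 atoms.
# The half-life comes from the quote above; the atom count and time
# span are arbitrary choices for demonstration.
HALF_LIFE_YEARS = 24_100.0
LAM = math.log(2) / HALF_LIFE_YEARS  # decay constant, per year

def simulate_decays(n_atoms: int, years: float, rng: random.Random) -> int:
    """Count how many of n_atoms decay within `years`, one Bernoulli draw per atom."""
    p = 1.0 - math.exp(-LAM * years)  # per-atom decay probability
    return sum(1 for _ in range(n_atoms) if rng.random() < p)

rng = random.Random(42)
decayed = simulate_decays(1_000_000, 100.0, rng)
expected = 1_000_000 * (1.0 - math.exp(-LAM * 100.0))
# The simulated count lands within sampling error of the analytic
# expectation, because every atom obeys the same fixed probability law.
```

Because each atom follows the same fixed law, the simulated count tracks the analytic expectation closely. That is the luxury physical-science models enjoy and social-science models don't.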

Compare that with “We have a poll from the L.A. Times, but our judgment is that it is skewed a point and a half toward the conservative candidate most of the time so we give it a rating of B+ and weight its contribution a little lower.” You get both an imprecise measurement and a judgment of its quality.

A few weeks later, the L.A. Times poll will show a different result, but plutonium-239 will still be decaying at precisely the same rate. Plugging more accurate and consistent inputs into a statistical model provides more predictive calculations.

Then there is the “one-time event” problem. If Silver were correct and Trump had a one-in-four chance of winning (it was probably much greater than that), then if the election were held one hundred times under identical conditions, Trump would win about 25 of them. But the 2016 election was a one-time event. Trump won 100% of the election and Hillary lost 100% of it. Your household's retirement is also a one-time event.
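The distinction is easy to demonstrate. In this sketch (my own illustration, using Silver's roughly one-in-four figure), the long-run frequency converges toward 0.25, yet every individual trial resolves to exactly 0 or 1:

```python
import random

# Illustrative sketch: a 25% event is neither "won't happen" nor
# fractionally realized. Any single trial (like a single election or a
# single retirement) resolves to 0 or 1; 0.25 only emerges in aggregate.
rng = random.Random(2016)

def trial(p: float = 0.25) -> int:
    """One Bernoulli trial: returns 1 with probability p, else 0."""
    return 1 if rng.random() < p else 0

one_election = trial()                    # a one-time event: exactly 0 or 1
many = [trial() for _ in range(100_000)]  # a hundred thousand hypothetical reruns
frequency = sum(many) / len(many)         # approaches 0.25 only in the long run
```

No real retiree gets a hundred thousand reruns, which is exactly why a "probability of success" for one household deserves skepticism.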

We frequently use Monte Carlo simulations for retirement research. I think that is a far better application than for individual retirement planning precisely because research isn't trying to predict an outcome for a single household. It should be used much more carefully for the practice of retirement planning.

So, predicting the results of a one-time election event in a social system with imprecise and conflicting data sources turns out to be a very difficult thing to do and clearly far more difficult than many of us who believed in the modeling understood. In simplest terms, we are attempting to predict the future, or at least to characterize it fairly precisely, and you know what Yogi said about predictions – they're hard, especially about the future. Statistical models enable us to guess the future of social systems and be wrong with amazing accuracy.

There are so many variables, both known and unknown, so little high-quality clear data, and so much difficulty predicting human behavior that I suspect it's a fool's errand to try to predict a close national election. Someone pointed out that the models would have worked better if the electorate weren't so evenly split. That's probably true, but if the electorate weren't so equally split and the winner was more obvious, then why would we need the models?

A friend from my AOL days, Joe Dzikiewicz, made an interesting observation about the Electoral College (EC) and chaos theory. (So you don't spend the rest of your day wondering, it's pronounced “Ja-kev'-itz”). One attribute of chaotic systems is that a small change in initial conditions can result in dramatically different outcomes. If a small change either way in the popular vote can swing the Electoral College vote and greatly change the future path of world history, then the EC actually creates chaos, or "unpredictability." The outcomes may simply be unpredictable by statistical inference models under the initial condition that the electorate is closely divided. As I suggested in Retirement Income and Chaos Theory, the constant-dollar withdrawal assumption probably makes Monte Carlo retirement models chaotic, as well.

What does this have to do with retirement planning? Retirement planning is economics, a social science, not engineering. William Bernstein warned against using engineering techniques and historical data to develop retirement plans in a 2001 post at *Efficient Frontier* entitled "Of Math and History" (linked below):

And of course, if you’re a math whiz, then all of life’s problems can be solved by spinning proofs and running the numbers. Not a week goes by that I don’t get a spreadsheet from someone demonstrating how this allocation or that strategy led to great riches over the past five, twenty-five, or seventy years.

The trouble is, markets are not circuits, airfoils, or bridges—they do not react the same way each time to a given input. (To say nothing of the fact that inputs are never even nearly the same.) The market, though, does have a memory, albeit a highly defective kind, as we’ll see shortly. Its response to given circumstances tends to be modified by its most recent behavior. An investment strategy based solely on historical data is a prescription for disaster.

My philosophy of retirement planning is summed up beautifully in that last sentence.

The financial planning industry currently depends very heavily on Monte Carlo simulation for retirement planning. In fact, I have written that we often use Monte Carlo simulations instead of actual planning. Many of us try to show that the probability of failure is so low that we shouldn't worry about it (like Trump's odds of winning) and we give short shrift to actually planning for such a catastrophe as outliving our wealth, should it happen.

Assumptions and judgments can dramatically affect the results of retirement simulations. There may be a 95% probability that you won't outlive your wealth if you get good advice, but if you do outlive it, you will be 100% up that famous creek. Precious few of the planners who use Monte Carlo simulation have Nate Silver's skills, but they make critical judgments and assumptions just the same.

Some of the retirement model assumptions are ridiculous, like the assumption that retirees will keep spending the same amount until they go flat broke, and the assumption that our limited supply of historical market-returns data is adequate to predict the future with reasonable confidence. The election models struggle with assumptions about human behavior, but the typical retirement model is based on a particular human behavior that we all agree would be irrational.
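To make the critique concrete, here is a minimal sketch of that kind of model, not a recommended planning tool. Every parameter (the 5% mean real return, 18% volatility, the $40,000 constant-dollar withdrawal from $1,000,000, and normally distributed returns) is an illustrative assumption, and the spend-until-broke rule is exactly the irrational behavior described above:

```python
import random

# Minimal sketch of a constant-dollar-withdrawal Monte Carlo retirement
# simulation. All parameters are illustrative assumptions, and the
# normal-returns assumption is itself questionable (see below).
def success_rate(n_sims: int = 10_000, years: int = 30,
                 start: float = 1_000_000.0, withdrawal: float = 40_000.0,
                 mean: float = 0.05, stdev: float = 0.18,
                 seed: int = 7) -> float:
    """Fraction of simulated retirements that never run out of money."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_sims):
        balance = start
        for _ in range(years):
            # Constant-dollar rule: spend the same amount every year, no matter what.
            balance = (balance - withdrawal) * (1.0 + rng.gauss(mean, stdev))
            if balance <= 0.0:  # the retiree goes flat broke
                break
        else:
            successes += 1
    return successes / n_sims

rate = success_rate()  # collapses thousands of futures into one "success" number
```

Note how many judgment calls hide inside those default arguments; change any of them and the headline "probability of success" moves.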


Though the polls were clearly flawed in many ways, thousands were available on which to base a prediction of election results in 2016. For retirement planning, we have about 200 years of data representing about seven distinct 30-year periods of market returns. We use rolling periods, a technique that is statistically flawed, to create 170 or so rolling 30-year periods. That still isn't much data to predict the future. Such a limited amount of historical market returns data makes for some very large confidence intervals.
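The arithmetic behind that claim, sketched with the paragraph's own numbers:

```python
# Quick arithmetic for rolling periods, using the figures above:
# roughly 200 years of annual returns and 30-year retirement windows.
YEARS_OF_DATA = 200
WINDOW = 30

rolling_periods = YEARS_OF_DATA - WINDOW + 1   # 171 overlapping windows ("170 or so")
independent_periods = YEARS_OF_DATA // WINDOW  # 6 non-overlapping ones (6.7, "about seven")
shared_years = WINDOW - 1                      # adjacent windows share 29 of their 30 years
```

Adjacent windows share 29 of their 30 years, which is why the 171 rolling periods are nothing like 171 independent samples.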

As I noted in *The Whoosh! of Exponential Retirement*, Moshe Milevsky assures us that we can be 95% certain that a shortfall probability of 15% actually lies somewhere between 5% and 25%. Gordon Irlam showed us an example in which the optimal asset allocation has a huge 95% confidence interval of 10% to 82% equities. That's not a lot of confidence.
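A back-of-envelope calculation (my own illustration, not Milevsky's derivation) shows how little data it takes to produce an interval that wide. Using a simple normal approximation for a binomial proportion, and a hypothetical sample size of 49 effective observations:

```python
import math

# Back-of-envelope illustration: a normal-approximation 95% confidence
# interval for an estimated shortfall probability. The sample size of 49
# is a hypothetical choice showing how few effective observations it
# takes to produce an interval roughly as wide as 5% to 25%.
def shortfall_ci(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation confidence interval for a binomial proportion."""
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

low, high = shortfall_ci(0.15, 49)
# With ~49 effective data points, a 15% estimate spans roughly 5% to 25%.
```

In other words, uncertainty on the order Milevsky describes is what you would expect from only a few dozen genuinely independent observations.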

If predicting the results of a presidential election is difficult, consider that retirement planners try to predict the outcome of another one-time social event (a client's retirement) with far less data.

My point is this. If you think an election not turning out as predicted is a poor outcome, imagine that the 95% probability of success your planner promised ends with your spending late retirement living off Social Security benefits alone. (Somebody has to fall into that 5%; who says it won't be you?) The tools that didn't predict the election are basically the same ones that planners use to predict retirement finances, and those tools failed in the election with better data, a more rational model, and entire teams of more highly skilled statisticians running them.

I'm not suggesting that retirement simulations are worthless; they can provide valuable insights. But they should occupy an appendix of a retirement plan and be viewed as one more interesting data point, not as a complete plan.

We had confidence in the election models and that was a big mistake on our part. Let's not be overconfident about retirement forecasts.

The next time an advisor hands you a Monte Carlo simulation and says you have a 95% chance of funding your retirement, tell him, “Yeah? Well, Hillary had a 95% chance of being elected President.”

Then ask him if he's willing to eat a bug.

REFERENCES

*Donald Trump's Six Stages of Doom*, Nate Silver, FiveThirtyEight blog.

*2016’s Election Data Hero Isn’t Nate Silver. It’s Sam Wang*, *Wired* magazine.

*Putting the Polling Miss of the 2016 Election in Perspective*, Nate Cohn, The Upshot blog.

Princeton Election Consortium blog.

*How Evan McMullin Could Win Utah And The Presidency*, FiveThirtyEight blog.

*The Marketplace of Perceptions*, Harvard Magazine.

*Of Math and History*, William Bernstein, efficientfrontier.com.

I saw this piece today on MPT theory not necessarily applying to retail investor portfolios (a similar theme to your post): https://blogs.cfainstitute.org/investor/2016/11/18/has-goals-based-investing-ruined-modern-portfolio-theory-mpt/

With Republicans in charge of the Presidency, House, and Senate, the uncertainty regarding mainstays of retirement such as Medicare and Social Security has gone up significantly (never mind pre-65 retirees reliant on ACA coverage). That should be fun to plug into Monte Carlo evaluations of retirement, as major changes in those have a non-zero probability that is much higher today than a couple of weeks ago.

I am still 4-8 years from retirement, but am more determined than ever to have some additional cushion, with lots of diversification of income sources, beyond what Monte Carlo etc. indicate should be necessary. Too many meteorite strikes (Financial Crisis, Tea Party, Trump, etc.) over the past decade to provide warm fuzzies about the next 30 years. My primary hope right now is that the 1930s do NOT turn out to be an appropriate model for the present times despite the many similarities around the world.

Good points.

I don't think that plugging non-zero probabilities for catastrophic events into a Monte Carlo simulation would be useful. They represent a small probability of a disastrous outcome that either will occur or won't. Better to have a plan in place should it happen, even if the probabilities are small. Modeling doesn't address those scenarios very well.

Thanks for the comments!

Sorry Dirk, I forgot to include a link to the illustrations I mentioned

http://blog.betterfinancialeducation.com/sustainable-retirement/part-ii/

A great post once again Dirk.

Our recent paper published in the Journal of Financial Planning (Nov 2016) took a look at what you're talking about, Dirk. Shawn Brayman and I also think that the current application of Monte Carlo simulation in the profession is flawed. More specifically, the interpretation of results is flawed.

Monte Carlo to date has used a single simulation over a fixed period. The derivation of the solution and the terminal values have been treated as the solution itself – forgetting that they are only how the ignored, single-point solution was obtained.

The graphic in my blog post ( http://blog.betterfinancialeducation.com/sustainable-retirement/part-ii/ ) describing our paper in more detail illustrates this through an example of long division (how we old timers were taught division, not as it is taught today).

Single point Monte Carlo simulations need to be rerun for the next year's time period, and again rerun for subsequent time periods (as we age), to derive solutions for each age. In other words, the focus has to shift to each solution ONLY from each simulation - and forget about derivation and remainder values for those solutions.

A model is made up of only solutions. It is then that retirees can see the impact of today's spending decisions on their future solutions. Then each retiree can visualize their behavior and how it impacts their future. Each retiree has their own desires for what that future may look like.

This is not about predicting - which is not possible in the behavioral sciences like it is in the physical sciences. It is about using the Monte Carlo tool properly as a model (a series of solutions) rather than as a single simulation masquerading as a solution through today's interpretation of simulation results. The solution is ignored (it is the single point at the left end of all those simulations), while all the iterations are focused on to the exclusion of the solution to try and see into the future - like trying to read the tea leaves at the bottom of the cup.

In sum, the model needs to be a series of Monte Carlo simulations that graphs the solutions of each age in the series so that the solutions together can be evaluated. The range of solutions becomes much narrower (there is still uncertainty), but effects of spending decisions become more pronounced.

It is not about predicting, which can't be done about the future. It is about developing a model using Monte Carlo as a tool, rather than a single-point calculation that is interpreted to say something about the future of that single point. Behaviorally, our eyes are drawn to the "big fan" of solution derivations and terminal values, and we forget that what we sought is the solution for that time period and conditions – the single point on the left! Model all of those single points and see how the picture changes.

Great post, Dirk – it gets to what Shawn and I demonstrated: the tool needs to be reinterpreted and used as a true model.

Larry, that's exactly my point – it's not about predicting. But that's how models are often used in practice. We researchers are more interested in how the pieces influence one another than the typical retiree or planner is, and models work much better for that. Looking forward to reading your blog posts as soon as the turkey settles and the ballgames are over!

Thanks for the comments.

If two contenders in a race are close enough in support, it would in reality be impossible for pollsters to predict who will win, even if the pollsters were competent and unbiased.

P.S. In my circle I am credited with vast insight for predicting (i) a Trump win. But I don't think that I did. I think that all I said was (ii) he might win, and (iii) Hillary thoroughly deserved to lose. Why (ii) + (iii) should morph into (i) in the minds of my friends beats me.

Yet, they insisted that they could and we believed them.

Another excellent post, Dirk. My only complaint about this one is that it has been over a month since your last one. What is up with that? Don't you realize that there are a lot of folks out there (like me) who need their stimulating cup of Dirk from the Retirement Café, and we start to experience withdrawal symptoms after several weeks without?

Your comment about "data points" on my recent Advisor Perspectives article (and more eloquently expressed in this post) inspired my November 20 post,

http://howmuchcaniaffordtospendinretirement.blogspot.com/2016/11/using-multiple-data-points-to-determine.html

and I should have given you credit for inspiring it (sorry). If we readers can't have more Dirk quantity, I guess we will just have to live with Dirk's most excellent quality thoughts.

Ken, I read your post and enjoyed it. I guess we bloggers probably inspire one another, so it all evens out.

Frankly, I spent nearly three months researching reverse mortgages and fighting tooth and nail to show that, while they are excellent tools for many retirees, they aren't without their issues. By the time I won a few of the arguments, I was exhausted and needed a brief respite from my blogging responsibilities.

Appreciate the compliments! It's always nice to be missed.

There is a further issue here.

The market has a signal (the value of the index) and noise, the variation usually measured as the standard deviation (sigma or beta). That's fine.

But any next step involves unwarranted assumptions about the underlying distribution. That includes pronouncements on the likelihood of a 600-point drop on Monday, and Monte Carlo simulations. The usual implicit assumption is a normal distribution, and that is known not to be true. Nor is it any other statistical distribution that is amenable to analysis.

Correct. I believe I have mentioned that a few times recently, but it is worth repeating.

Monte Carlo simulations require an expected return, which we typically assume, with little evidence, will be similar to past returns. They similarly assume a standard deviation. But the assumption that many take for granted is that the returns will follow a normal distribution.

We know that the distribution is not normal because events that should occur about once in a million lifetimes happen relatively often in the stock market. Whatever the underlying distribution, it is not normal, log-normal or any other currently-understood parametric distribution. We assume that it is normal or log-normal in our models, though we have mathematical evidence of fatter tails than either of these distributions. (A t-distribution is sometimes used because it has fatter tails, but that's just a more conservative guess.)
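For readers who want to see the difference concretely, here is a small sketch (my own example; the t-distribution with 3 degrees of freedom is an arbitrary illustrative choice, picked because its CDF has a simple closed form) comparing tail probabilities under the two assumptions:

```python
import math

# Sketch comparing the tail of a standard normal distribution with a
# fatter-tailed Student-t (3 degrees of freedom, chosen purely for
# illustration because its CDF has a closed form at df = 3).
def normal_tail(x: float) -> float:
    """P(Z > x) for a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def t3_tail(x: float) -> float:
    """P(T > x) for a Student-t variable with 3 degrees of freedom."""
    u = x / math.sqrt(3.0)
    return 0.5 - (u / (1.0 + u * u) + math.atan(u)) / math.pi

ratio = t3_tail(5.0) / normal_tail(5.0)
# A "5-sigma" event is thousands of times more likely under the fat-tailed model.
```

Under the normal assumption a 5-sigma move is essentially impossible; under a fat-tailed alternative it is merely rare, which matches what markets actually do.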

For readers without a statistics background, let me say this more simply. Statistical models try to predict your economic future based on weak premises. Like Yogi, you should discount their predictions appropriately.

Thanks for the comment!

Dirk, I believe this article contains the Bill Bernstein quotation you're seeking:

http://www.efficientfrontier.com/ef/401/math.htm

Thanks, that's it! This has been driving me nuts. In my defense, I probably read that post ten years ago. And I did search EfficientFrontier.com but didn't find it.

I'm going to edit the post to include that.