But that strategy won't work when sequence of returns (SOR) risk is involved. Here's why.
The terminal value of a retirement portfolio (its balance at the end of retirement) that we spend down using a sustainable withdrawal (SW) strategy isn't solely a function of the rate of portfolio return. It is a function of the withdrawal rate, the investment returns, and the sequence of those returns.
For every average rate of portfolio return, there is some probability that the portfolio will be depleted prematurely and some probability that it will fund at least thirty years, depending on the sequence of those returns. If the portfolio enjoys a high average rate of return over the 30-year period, the probability that it will be derailed by SOR risk is quite small. Likewise, if the average return is quite low over that period, the portfolio will probably fail, perhaps even without the nudge of a poor sequence of returns.
But in the range of average returns you are most likely to experience, say between about 2% and 6% a year, SOR risk will often determine failure or success.
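If you want to see the mechanics, here's a toy Python sketch. The returns and the withdrawal amount are invented for illustration, not taken from any dataset: the same ten annual returns, and therefore the same average return, applied in two different orders produce different ending balances.

```python
# Toy illustration (numbers invented): the same ten annual returns in two
# different orders, with the same fixed withdrawal each year.

def terminal_value(start_balance, returns, withdrawal):
    """Withdraw at the start of each year, then apply that year's return."""
    balance = start_balance
    for r in returns:
        balance = max(balance - withdrawal, 0.0) * (1.0 + r)
    return balance

returns = [0.12, 0.08, 0.07, 0.05, 0.04, 0.03, 0.01, -0.02, -0.05, -0.10]

good_first = sorted(returns, reverse=True)  # strong returns early
bad_first = sorted(returns)                 # losses early

# Identical average return, different terminal values:
print(terminal_value(1000.0, good_first, 45.0))  # roughly 804
print(terminal_value(1000.0, bad_first, 45.0))   # roughly 631
```

Same average return, noticeably different terminal values. Stretch the horizon to 30 years and the gap between a good and a bad sequence can become the difference between success and failure.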
To illustrate, let's look at historical returns using the Robert Shiller data and the spreadsheet from the Retire Early Home Page to see what the historical results would have been for 2% real rates of return over past 30-year periods with a 4% withdrawal rate.
Historical stock market data is very limited. Shiller's data back to 1871 provides 142 years of returns, but that is fewer than five unique 30-year periods. We try to stretch this number in a somewhat-flawed statistical manner by using rolling 30-year periods of historical data, but there are still only 112 of those. That is a relatively small sample for our purposes, and no periods experienced 2% rates of return. Nonetheless, the terminal portfolio values for a $1,000 portfolio of 50% equities with a 4.5% withdrawal rate and historical returns data can be shown as follows.
There were no periods with real 50% equity portfolio returns of only 2%, the rate of return I was hoping to investigate. There were so few failures in the sample, in fact, that I had to increase the withdrawal rate from 4% (the one I actually wanted to investigate) to 4.5% just to show a few more of them. Regardless, you can see that some portfolios historically failed with 4.5% rates of return while others successfully funded 30 years of retirement with only a 3.5% average return, due to SOR risk. In fact, during this period of historical data, portfolios would have failed with real rates of return as high as 4.4% a year while others succeeded with returns as low as 2.8%.
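For the curious, the rolling-period calculation behind that chart looks roughly like this in Python. This is a sketch, not the Retire Early spreadsheet itself; `real_returns` stands in for the annual real returns of a 50% equity portfolio derived from the Shiller data, which I haven't reproduced here.

```python
def terminal_value(start_balance, returns, withdrawal):
    """Withdraw at the start of each year, then apply that year's return."""
    balance = start_balance
    for r in returns:
        balance = max(balance - withdrawal, 0.0) * (1.0 + r)
    return balance

def rolling_outcomes(real_returns, years=30, start_balance=1000.0,
                     withdrawal_rate=0.045):
    """Pair each rolling window's average return with its terminal value."""
    withdrawal = start_balance * withdrawal_rate  # fixed real withdrawal
    outcomes = []
    for i in range(len(real_returns) - years + 1):
        window = real_returns[i:i + years]
        outcomes.append((sum(window) / years,
                         terminal_value(start_balance, window, withdrawal)))
    return outcomes
```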
Because there is such a small sample of historical data to work with, we sometimes use Monte Carlo simulation to test hypotheses. (A reader recently complained that I should use historical data more often, a strange complaint given that I very rarely use anything else, but this is an example of when we really need simulation to explain a point because the historical data is inadequate.)
I used the simulation from The Implications of Sequence of Return Risk to generate a similar graph. It generated 10,000 unique 30-year scenarios, providing not only several scenarios with 2% portfolio returns but also a number of failed scenarios with a 4% withdrawal rate.
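The simulation itself is described in that post; in rough outline, it works something like the sketch below. The 5% mean and 11% standard deviation for annual real returns are placeholder assumptions for illustration, not the parameters of the actual simulation.

```python
import random

def simulate_scenarios(n=10_000, years=30, start_balance=1000.0,
                       withdrawal=40.0, mean=0.05, stdev=0.11, seed=1):
    """Generate (average return, terminal value) pairs for n scenarios."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n):
        annual = [rng.gauss(mean, stdev) for _ in range(years)]
        balance = start_balance
        for r in annual:
            balance = max(balance - withdrawal, 0.0) * (1.0 + r)
        scenarios.append((sum(annual) / years, balance))
    return scenarios
```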
Notice in this graph that I rounded rates of return along the x-axis to the nearest percent. Instead of producing a cloud of outcomes as in my previous post, this chart displays a vertical bar (actually a cluster of points) of the terminal portfolio values (TPVs) that demonstrates the range of outcomes for each rounded rate of return. (In effect, I scrunched all the outcomes from portfolio returns of 1.5% to 2.5% into a vertical bar above "2%", for example.)
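The scrunching step is simple enough to show. Applied to the (average return, terminal value) pairs from the sketch above, it also yields each bar's success rate:

```python
from collections import defaultdict

def success_by_bucket(scenarios):
    """Round each average return to the nearest percent and report the
    share of scenarios in that bucket ending with money left."""
    buckets = defaultdict(list)
    for avg, tpv in scenarios:
        buckets[round(avg * 100)].append(tpv)  # e.g., 1.5%-2.5% lands in 2
    return {pct: sum(tpv > 0 for tpv in tpvs) / len(tpvs)
            for pct, tpvs in sorted(buckets.items())}
```

Something like `success_by_bucket(simulate_scenarios())[2]` would then report the success rate for the "2%" bar.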
Also note the yellow marker inside each vertical bar. That point marks the terminal portfolio value that would have resulted from a 30-year sequence of identical returns – in other words, it's the expected TPV with no SOR risk. This is the highly unrealistic scenario that would be generated by a spreadsheet or by consumption-smoothing models that don't randomize returns.
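The yellow markers need no simulation at all: with the same return every year, the terminal value is deterministic. A sketch, using the same withdraw-then-grow convention as the snippets above:

```python
def no_sor_tpv(start_balance, r, withdrawal, years=30):
    """Terminal value with the identical return r every year (no SOR risk)."""
    balance = start_balance
    for _ in range(years):
        balance = (balance - withdrawal) * (1.0 + r)
    return balance

# $1,000 portfolio, constant 2% real return, $40 (4%) annual withdrawal:
print(no_sor_tpv(1000.0, 0.02, 40.0))  # roughly 156: still solvent at year 30
```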
In a spreadsheet or other tool that doesn't randomize returns, the yellow markers would seem to indicate that any return of 2% or greater would result in a retirement plan using the SW strategy successfully funding 30 years. But in reality, only 66% of the simulated scenarios with returns from 1.5% to 2.5% succeeded. The spreadsheet looks fine, but there is actually about a one-in-three chance of failure at this rate of return. And as we saw above, sometimes a lower return would have succeeded and sometimes a higher return would have failed.
What if I plug in 1% instead of 2% for my portfolio's rate of return and my spreadsheet still works? That's gotta be a good sign, right?
You've actually made the outcome less predictable. Scenarios with 1% average returns in the simulation had about double the SOR risk of scenarios with 2% returns. If you're looking for the lowest rate of return for your spreadsheet that will very likely be successful, plug in a higher rate of return, not a lower one.
That's why we can't plug a low average rate of return into a spreadsheet or other planning tool that doesn't randomize returns and gain confidence from the results that our plan will definitely work.
Often it will. Sometimes it won't.
If you plan to implement an SW strategy, be aware that unless you randomize the returns in your spreadsheet, you won't see SOR risk. I'm not suggesting that you shouldn't plan for retirement using a spreadsheet or E$Planner, only that you should do so carefully.
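If your planning tool can't randomize returns, it takes only a few lines to wrap the same projection in randomness. A sketch, again assuming an illustrative 11% volatility around whatever average return you were going to type into the spreadsheet:

```python
import random

def failure_rate(avg_return, trials=1000, years=30, start_balance=1000.0,
                 withdrawal=40.0, stdev=0.11, seed=1):
    """Share of randomized projections that run out of money."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        balance = start_balance
        for _ in range(years):
            balance = max(balance - withdrawal, 0.0) * (
                1.0 + rng.gauss(avg_return, stdev))
        if balance <= 0.0:
            failures += 1
    return failures / trials

# A flat 2% projection never fails; the randomized version fails some
# fraction of the time:
print(failure_rate(0.02))
```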