$FNSR Wants Me to Put My Money Where My Mouth Is
Well, a day after I mention my chickening out with $ANET and having to buy it later at a higher price, $FNSR challenges me to put my money where my mouth is and tanks almost 24%. The low of the day grazes my stop by a penny but leaves it unscathed. Simply unbelievable. So I will follow my system and hold, but I can’t see this working out very well. In the end, I won’t be losing much of my initial capital, as I was up a decent amount at the time of the tankage. I thought this cup and handle would turn out to be a positive thing, but it failed dramatically.
In any case I can’t be right on every stock. I just need to quell the disappointment of a bitter end to a stock that was at a decent profit and forming a pretty nice base… unless $FNSR makes an incredible recovery of course, like $ANET.
Why is My Stop So Low?
So I guess the next question is why the heck I would have such a crazy low stop. A discretionary trader would certainly have done something like raise the stop to the bottom of the cup at $28, or to the long-term trend-line at $29… however, my system does not allow me to set stops by eye. It has three precisely calculated stops:
- A fixed, initial stop-loss based on a multiple of the price’s recent standard deviation, shown as the dotted red horizontal line. This stop determines my position-sizing.
- A “slow stop” that ratchets up based on a certain number of standard deviations below a moving average, for less volatile moves.
- A “fast stop” that ratchets up based on a percentage below a linear price regression, for more volatile moves.
- The last two stops are shown as my “Combined stop” in my charts, since I only really care about the higher of the two once it is past my initial stop-loss.
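The slow/fast pair can be sketched in a few lines of Python. The window lengths, standard-deviation multiple, and regression percentage below are purely illustrative placeholders, not my actual settings; the ratcheting would be handled by keeping the running maximum of this value over time so the stop never moves down.

```python
import numpy as np

def combined_stop(closes, ma_window=50, n_std=2.0, reg_window=20, reg_pct=0.08):
    """Sketch of the two trailing stops (parameter values are illustrative)."""
    closes = np.asarray(closes, dtype=float)
    # "Slow stop": a number of standard deviations below a moving average.
    window = closes[-ma_window:]
    slow = window.mean() - n_std * window.std()
    # "Fast stop": a percentage below a linear regression of recent prices.
    x = np.arange(reg_window)
    slope, intercept = np.polyfit(x, closes[-reg_window:], 1)
    fast = (slope * (reg_window - 1) + intercept) * (1 - reg_pct)
    # Only the higher of the two matters once it is past the initial stop-loss.
    return max(slow, fast)
```

In a steady, low-volatility uptrend the slow stop tends to govern; after a sharp run-up the regression-based fast stop overtakes it and tightens the exit.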
In any case, what this comes down to is that I can’t make a mistake like $ANET again; listening to my system this time around with $FNSR, even if it does stop out, is good, not bad. Discipline can seem ironic, I guess, when you only look at one data-point.
So how did I settle on such wide stops? First, I chose stops that are easy to calculate and are a function of slow- and fast-moving price, plus my comfort zone. Then I can take a database of currently listed and delisted stocks and see how my stops perform fairly quickly by backtesting my system over that historical data. To avoid overfitting, I test the system by sweeping my tunable parameters across rational ranges to see if the performance is stable. This involves rerunning the backtest multiple times to see if there are any steep drop-offs in returns. This generates what I call a “performance-space”, which in three dimensions would look like a topographical map, where the peaks and valleys represent the performance of the system across the swept parameter values. Note that performance isn’t necessarily measured by CAGR; you can use what you like, such as return over maximum drawdown, Sharpe or Sortino ratios, etc.
However, most systems use more than two tunable parameters, making 3D visualization a little difficult. Spreadsheets could help in this case, or some kind of algorithm to search for multidimensional peaks (genetic algorithms, or simulated annealing if you feel like getting exotic).
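When the space has more dimensions than you can plot, even a plain random search over coarse candidate values can locate the high ground. This is a simple hypothetical alternative to the exotic options above; the scoring function and parameter names are whatever your own backtest provides:

```python
import random

def random_search(score, param_ranges, n_iter=200, seed=42):
    """Coarse random search over a multidimensional parameter space.
    `score` maps a dict of parameters to a performance metric;
    `param_ranges` maps each parameter name to its candidate values."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {name: rng.choice(choices)
                  for name, choices in param_ranges.items()}
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```

Because the candidate values stay coarse (tens of days, round multiples), this remains a stress-test of the neighbourhood rather than a precision fit.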
I’ve read a lot about the use of in-sample and out-of-sample data. In-sample data is the data used to optimize the system, and out-of-sample data is the data you use to validate the system’s ability to forecast returns. As soon as you tune again after an out-of-sample test, your out-of-sample data becomes in-sample and you need other, “unblemished” data for further validation. If you continue to optimize over in-sample data, a profitable tuning could simply be fitted to noise rather than describing a robust trading system (see the 3D plot below), where any shift in the performance-space map will send you into the troughs of negative expectancy:
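The in-sample/out-of-sample discipline boils down to a one-shot split: tune on the first segment, then score your single chosen parameter set exactly once on the rest. A minimal sketch, where `backtest` is any hypothetical scoring function of your own:

```python
def validate_once(closes, candidate_params, backtest, in_frac=0.7):
    """Tune on the in-sample segment, then score the single winning
    parameter set exactly once on the out-of-sample segment.
    `backtest` is any function (closes, params) -> performance metric."""
    cut = int(len(closes) * in_frac)
    in_sample, out_sample = closes[:cut], closes[cut:]
    best = max(candidate_params, key=lambda p: backtest(in_sample, p))
    # Re-tuning after seeing this score would turn out_sample into
    # in-sample data, "blemishing" it for any further validation.
    return best, backtest(out_sample, best)
```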
Usually the solution to over-fitting is to perform some form of walk-forward optimization. The problem is, no long-term trading system you develop will ever be completely unique from the others, so your out-of-sample data will always have been used somewhat by past attempts to optimize and validate, and will thus be blemished. Walk-forward seems great for day-trading, when you have endless data to work with and different strategies to pursue, and you plan to re-optimize often to match current day- or swing-trading conditions. Long-term trendfollowers, though, usually depend on end-of-day data and mostly price action. They can only really find good data from the mid-20th century forward. That seems like a lot of data points, but it isn’t when your holding period for successful stocks is months to years.
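For completeness, the walk-forward scheme itself is just a rolling pair of windows: optimize on each training window, validate on the test window that follows, then roll forward. A sketch with illustrative window lengths:

```python
def walk_forward_windows(n, train_len, test_len):
    """Yield successive (train_slice, test_slice) index pairs over a series
    of length n: optimize on the train window, validate on the test window
    that follows, then roll forward by one test window."""
    start = 0
    while start + train_len + test_len <= n:
        yield (slice(start, start + train_len),
               slice(start + train_len, start + train_len + test_len))
        start += test_len
```

With monthly-to-yearly holding periods, each test window has to span years to contain even a handful of completed trades, which is exactly why the data runs out so quickly for long-term systems.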
My way around this is not to optimize too precisely or to ridiculous numbers (ever use a 123.94839458-day moving average? Don’t do it!). I only sweep across moving averages in tens of days, for example. This provides robustness against latching onto noise and producing a false winning system. In fact, when I do my parameter sweeping, I make sure I never see a losing result. I like my performance-spaces to be rolling hills, well above water. Therefore I’m not really optimizing; it’s really more of a stress-test. But I’ll still choose the top of the hill, of course, as you have to settle on something.
In the end, the only true out-of-sample data is present and future data, as it is the only data-set on which your system will generate real profits. That being said, my argument only holds if you create a trading system that actually has a semblance of a rational process. If you create a system in a bottom-up fashion based on data-mining, you are in trouble if you optimize it and decide to trade it. But if you create a system that makes sense with respect to how markets function, are careful about optimization, and use proper risk management, you should be in good shape to profit.