Sharpe Ratio > 1: Careful What You Wish For

I had the pleasure of hanging out with the good folks over at Resolve Asset Management recently. Part of our discussion centered around the differences between our two worlds. As you can imagine, prop trading is fairly different from institutional asset management. A topic of particular interest to both of us, however, was how to manage expectations. On their end, one can imagine that in order to be successful it is important to make sure that clients have a good understanding of the range of outcomes they should reasonably expect. In my world, it is also important to have a good idea of what a trade with particular return statistics might look like in practice.

One performance metric people usually look at is the Sharpe ratio, \frac{\mu - r_f}{\sigma}. Now, long-term readers of this blog will be familiar with my hatred of this measure, but bear with me for a moment. It is common in the asset management industry to seek Sharpe ratios greater than 1. An out-of-sample ratio like that would be an excellent selling point. However, I posit that most people don’t really know what trading a strategy like that will actually look like and, most importantly, feel like. Let’s be honest: although we all like to try to approach the market in a perfectly emotionless manner, it is seldom the case in practice. Drawdowns are heart-wrenching and are a good indicator of the mental pain you will have to experience with a strategy. On the prop side, drawdowns are when you start second-guessing the strategy and wondering if it no longer works, while in the asset management world you experience the same thing, made much worse by having to deal with client calls, redemption requests, and so on.

This discussion made me want to take a look at what would be statistically reasonable to experience given, say, a backtested strategy with a 1.0 Sharpe ratio. For this I created a dataset of 1000 samples of 252 days of daily returns averaging 25 basis points with a standard deviation of \sigma = 0.0025\sqrt{252} \approx 0.0397, which corresponds to an annualized Sharpe ratio of 1. You can think of it as if you had 1000 different strategies with the mythical 1.0 Sharpe backtest. Those samples represent potential out-of-sample results a year forward.
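For readers who want to reproduce this kind of exercise, here is a minimal sketch of the simulation just described (a quick reconstruction of my own; the variable names and the drawdown calculation are mine, and the original analysis may differ in its details):

import numpy as np
import pandas as pd

np.random.seed(0)

n_samples, n_days = 1000, 252
mu = 0.0025                      # 25 bps average daily return
sigma = 0.0025 * np.sqrt(252)    # daily volatility consistent with a ~1.0 annualized Sharpe

# each column is one simulated year of daily returns
rets = pd.DataFrame(np.random.normal(mu, sigma, size=(n_days, n_samples)))

nav = 10000 * (1 + rets).cumprod()                  # equity curves starting at $10,000
terminal_nav = nav.iloc[-1]
sharpe = rets.mean() / rets.std() * np.sqrt(252)    # annualized Sharpe per sample
max_dd = (nav / nav.cummax() - 1).min()             # worst peak-to-trough move per sample

print(sharpe.mean())
print(max_dd.describe())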

In the plot below you can find (for a starting capital of $10,000) the individual equity curves, and the distributions of terminal NAV, Sharpe ratio, and, finally, maximum drawdown. Keep in mind here that each one of these comes from a distribution of returns that averages to a Sharpe of 1. Some of these are just miserable and would test even the most dedicated and disciplined traders. A large drawdown is far more frequent than I would have imagined. How about you?

[Figure: simulated equity curves and distributions of terminal NAV, Sharpe ratio, and maximum drawdown under normally distributed returns]

Furthermore, this is all under the assumption that the strategy returns are normally distributed. If I run the same analysis using a Laplace distribution instead (also symmetrical, but with significantly fatter tails in comparison), you can see below that the average Sharpe is still around 1 but the outcomes are widely different. In particular, note the fat right tail on the terminal NAV along with the much-changed maximum drawdown distribution.
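For completeness, the Laplace version only requires swapping the sampling step in the snippet above; np.random.laplace is parametrized by a scale b, and matching the normal case's standard deviation means b = \sigma/\sqrt{2}, since the variance of a Laplace distribution is 2b^2:

# fatter-tailed returns with the same mean and standard deviation as the normal case
b = sigma / np.sqrt(2)   # Laplace scale chosen so that the variance equals sigma**2
rets_laplace = pd.DataFrame(np.random.laplace(loc=mu, scale=b, size=(n_days, n_samples)))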

[Figure: the same plots under Laplace-distributed returns]

The point I am trying to make is that it is critical to consider a broad range of outcomes in order to properly shape expectations. The simple analysis above shows that even if the result conforms perfectly to the simulated distribution of returns, it might not be as good as you might expect. Finally, the comparison between the normal and Laplace distributions also shows that it is important to consider more than the first two moments when assessing performance, as the Sharpe ratio (or Sortino, for that matter) does. A good alternative worth looking into is the omega ratio, which is defined as the probability-weighted ratio of gains versus losses relative to some threshold return target.
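To make the omega ratio concrete, here is one common discrete way of computing it from a series of returns (a sketch, not a reference implementation; the threshold is expressed as a per-period return):

def omega_ratio(returns, threshold=0.0):
    """Probability-weighted ratio of gains over losses relative to a threshold return."""
    excess = returns - threshold
    gains = excess[excess > 0].sum()
    losses = -excess[excess < 0].sum()
    return gains / losses

# e.g. omega_ratio(rets[0]) for the first simulated sample above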

Extreme Ownership: A Trader’s Perspective

When reading non-trading-related books I always like to think about how they might apply to my field of choice. One of my all-time favorite books is one which at first glance has nothing to do with markets. In Extreme Ownership: How U.S. Navy SEALs Lead and Win1, authors and retired Navy SEAL officers Jocko Willink and Leif Babin take us through the leadership lessons they learned in the treacherous streets of Ramadi (Iraq) during some of the heaviest fighting of that conflict. With some interpretation, many of the principles in the book can be very useful to traders. Here are some of the lessons I have learned in my career that tie in with those principles.

Ch1: Extreme ownership
One must own everything in their world. Taking responsibility for everything impacting the objective before the fact, and for the outcome, good or bad, is in essence what this whole book is about.

In the trading world one can easily see how this is applicable. Only by taking full ownership of every part of the process can one be successful in the markets. In practice that means understanding and taking responsibility for every step of the process: the research, the execution, the post-trade analysis, the risk management, everything. Finally, by being ruthless when assessing performance and not making excuses for bad results, one can objectively assess the situation and take appropriate action.

Ch2: No bad teams, only bad leaders
If a team performs poorly, the responsibility lies squarely on the leader. By setting the standard and most importantly enforcing it, the leader can create a culture of success.

This really resonated with me regarding my days as the senior trader on a desk. With the whole team working towards the same goal, it was really important to set a clear standard for everybody to follow. I found this particularly important for onboarding new team members or interns. Clearly defining the expectations removed a tremendous amount of pressure from them to “do the right thing”. They seldom had to worry about that, since the right thing to do had already been outlined for them. This allowed them to gain confidence early on, which let them focus on being successful members of the team.

Ch3: Believe
In order to convince and inspire others to follow and accomplish a mission, a leader must be a “true believer” in the mission.

The situation that resonates the most with this principle for me is when a team member comes up with a strategy they think would be worth going live with. The way this usually works, in my experience, is that the senior trader takes it on as part of his platform and eats the loss if the strategy doesn’t work as expected for whatever reason. In order for this process to work, the senior trader must really believe in what is being attempted. Failing to do so will only serve to stifle the process and embitter the creator of the strategy. It is therefore very important to commit to the idea and to believe it can be successful in order to make the development process work.

Ch4: Check the ego
Implementing extreme ownership requires checking your ego and operating with a high degree of humility.

Anybody who has traded for long enough will know that there are only two ways to learn humility in trading: on your own, or, alternatively and much more painfully, by having the market teach it to you. As you can imagine, the latter is quite costly at times. One only has to look at all the providers of “volatility strategy” subscription services in February to see a prime example of that. Only by ruthlessly admitting mistakes and considering risks and weaknesses can one develop a good plan to succeed.

Ch5: Cover and move
This is about having every element of the process supporting one another.

In the trading sphere this makes the case for diversification. This does not need much explanation for you, dear reader, as I know you are all very familiar with that concept. A collection of individually uncorrelated return streams is the only true “Holy Grail” I know of in the trading world. On the team management side it also means that the members of the team must cover for one another. Trying to stand out individually by putting oneself before the good of the team is a very good way to ensure sub-optimal results. A senior trader must make that fact clear to everyone and tolerate no violations of any kind on that front.

Ch6: Simple
Simplifying as much as possible is crucial to success.

When things go wrong (and as you all know, they will go wrong), complexity compounds issues that can spiral out of control into catastrophic losses. In addition to that, think in terms of degrees of freedom or the number of parameters in a model. Many of my dollars have been lit on fire over the years by over-fitting, even after what I would say were reasonably good precautions to avoid it. On the other hand, the best results, and what made me the most money, came from the thoughtful application of simple techniques that I understood well.

Ch7: Prioritize and execute
The way to handle countless complex problems snowballing is to prioritize tasks and execute them one at a time, focusing all your attention on the highest-priority task. “Relax, look around, make a call.”

From a time management perspective this is quite important. There are only so many hours in a day, and if you are anything like me your to-do list is endless and ever-growing. Between research, strategy implementation, trade monitoring, post-trade analysis, risk monitoring, and everything in between, the only way to make an effective dent in said to-do list and improve your platform is to aggressively prioritize each task and tackle them successively, focusing on the highest-priority one. It is also important to recognize that those priorities shift in real time. For instance, focusing on research while a production algo loses connectivity would be a problem, to say the least! Additionally, in a risk-limit liquidation (affectionately known as a puke), where multiple algorithms may require immediate action all at the same time, this becomes a matter of survival for a trading platform. If the liquidation isn’t handled in a disciplined and unemotional fashion it can lead to much larger losses, so prioritization here again is key.

Ch8: Decentralized command
Team members must be aware of the intent and be empowered to make decisions that are in line with that intent, within well-defined limits to their decision-making authority.

For instance, when running a desk that operates during all major market sessions (Asia, Europe, America), it is impossible for a senior trader to always manage all the minute details. The traders responsible for the desk during those sessions must know what they are trying to accomplish and what they can, and just as importantly cannot, do in order to reach that goal. It is no use having somebody sitting in front of a computer overnight without any power to make real decisions. All that ends up creating is a lot of lost sleep over middle-of-the-night phone conversations. Furthermore, many such situations are time-sensitive; one can’t afford to have the execution trader unable to make decisions when they are needed. That being said, it is critical to clearly outline the limits of that decision-making authority. This is one of the areas where I really struggle, because it is often very difficult to own the result of something that you don’t directly cause. But the important part here is that it is one’s job to mentor and train those people so that there is no scenario in which they would make a decision that goes against the intent, which in our case is always to make the most money possible while minimizing the associated risk.

Ch9: Plan
Each course of action must be carefully planned. When doing so, it is important to account for likely contingencies and mitigate the risks that can be controlled as much as possible. Finally, circle back to evaluate how successful the plan was and implement the lessons learned in future plans.

I know many of my readers are also listeners of the excellent Chat With Traders podcast. They will be very familiar with this recurring theme among many of the guests. Having a plan before stepping into the ring is a good way to improve the range of outcomes you will experience. When money is on the line, emotions get involved, and having a solid plan, on top of having already evaluated contingencies, allows one to handle various scenarios much better. By being explicit about the plan, one also creates a feedback loop of success where the lessons learned are applied to future plans, leveraging everyone’s experience.

Ch10: Leading up and down the chain of command
One must take ownership of the relationship with subordinates and superiors and make sure that everyone has all the information they need to be effective in their respective jobs.

The importance of leading down the chain as a senior trader is obvious. Specifically, the traders executing the strategies need to have all the information required to be effective. What might be less obvious is how one should manage the relationship with their superiors. The idea here is that in the prop world capital is limited. As such, in order to be allocated capital, the partners need certain information. To be an effective senior trader, one must proactively provide that information and also periodically update management on the progress of various projects. That way, I have found, it is much easier to get buy-in for new ventures and build confidence so that over time more leeway is available. An interesting dichotomy here is that by making it easier for your superiors to do their job, a senior trader becomes more successful.

Ch11: Decisiveness amid uncertainty
Amid the pressure of uncertainty, chaos, and the unknown that one must contend with, it is critical for leaders to act decisively: make the best decision possible based on the information immediately available.

Markets are inherently stochastic; it is usually impossible to have enough information to be certain which course of action is best. That said, being indecisive is rarely the right decision. In my experience it is often better to commit decisively and make a decision that turns out to be wrong after the fact than to wait in perpetuity for more information. With decisiveness and some prioritization, even wrong decisions become easier to handle, and the resulting outcome is often better than the alternative of indecisive action or, worse, inaction.

Ch12: Discipline equals freedom
Maintaining a high degree of discipline allows the freedom necessary to be successful.

Long-term readers of this blog will know that I am not the most educated, most intelligent, or most talented trader around; far from it. The only reason I was able to have some success in my trading endeavors is my discipline. As the authors put it, “unmitigated daily discipline in all things” is truly the only edge that I have over my competitors. Many might think that having a high degree of self-discipline is coercive more than anything. That, in my experience, could not be further from the truth. Only by being disciplined can I have the freedom to do what is necessary to obtain success. Additionally, it should come as good news to every one of my readers considering putting some of their capital at risk that discipline is something you can start working on at any time. This is not a nature-versus-nurture situation; it is pure force of will. Anyone can reach the level of discipline I believe is required to be a successful trader.

To wrap things up, I cannot recommend the book enough. It is a good read if you are interested in becoming a better person, let alone a better trader. I should also mention that I don’t pretend to have all the answers. I have failed many more times than I have succeeded, but by constantly seeking thoughtful failure I was able to find fantastic success, something I think is within reach for just about anyone with the required discipline and proper effort.

1. Willink, Jocko, and Leif Babin. Extreme Ownership: How U.S. Navy SEALs Lead and Win. First edition. St. Martin’s Press, 2015.

Algorithm design and correctness

Giving software you wrote access to your or your firm’s cash account is a scary thing. Making a mistake when manually executing a trade is bad enough when it happens (you can take my word for it if you haven’t yet), but when unintended transactions are made by a piece of software in a tight loop it has the potential to be an extinction event. In other words, there is no faster way to self-immolate than to have incorrect software interacting with markets. It is obvious that correctness is a critical part of the design of an algorithm. How does one go about ensuring1 it then?

Abstractly, a trading algorithm is simply a set of specific actions to be taken depending on the state of things. That state can be internal to the algorithm itself (such as open positions, working orders, etc.) or associated with the external world (current/historical market state, cash balance, risk parameters, etc.). It is then natural for me to think of algorithms as finite state machines (FSMs). That presents a couple of immediate advantages. FSMs are a very well-studied abstraction in the computer science world, so there is no need to reinvent the wheel as best practices are well established. Since they are used so often, it is just about certain that you will be able to find examples of FSMs implemented for your language of choice. For instance, on the Python Package Index a search for “finite state machine” returns over 100 different frameworks. I am confident the results would be similar for just about any modern language. That being said, let’s back up before we dive deeper into an application of the pattern.

Per Wikipedia, an FSM is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to external inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the conditions for each transition. We will use the Turtle trading system 2 rules as an example. The rules are summarized below:

Entry: Go long (short) when the price crosses above (below) the high (low) of the preceding 55 days.
Exit: Cover long (short) when the price crosses below (above) the low (high) of the preceding 20 days.

As we learned previously, to fully define an FSM we need a list of its states, its initial state, and the conditions for each transition. We then have to convert these trading rules into a proper FSM definition.

From the rules above I would define the following trading states: TradingState\in\left\{Init,Long,Short,StopFromLong,StopFromShort\right\}. In addition to the trading state, the following state variables would be defined as well, as they are relevant to the strategy: IndicatorStatus\in\left\{Initializing,Ready\right\} and finally (Low50, Low20, High50, High20)\in\mathbb{R}^{4}. That is to say, the state space of our algorithm is defined by the triplet \left\{TradingState,IndicatorStatus,\mathbb{R}^{4}\right\}.

If you can forgive my abuse of notation, the following outlines the state transitions and their respective conditions:

\begin{cases} Init\rightarrow Init & IndicatorStatus=Initializing \\Init\rightarrow Long & IndicatorStatus=Ready\land Prc_{t}>High50_{t} \\Init\rightarrow Short & IndicatorStatus=Ready\land Prc_{t}<Low50_{t} \\Init\rightarrow SFL & \emptyset \\Init\rightarrow SFS & \emptyset \\ \\Long\rightarrow Init & \emptyset \\Long\rightarrow Long & Prc_{t}\ge Low20_{t} \\Long\rightarrow Short & Prc_{t}<Low50_{t} \\Long\rightarrow SFL & Low50_{t}\le Prc_{t}<Low20_{t} \\Long\rightarrow SFS & \emptyset \\ \\Short\rightarrow Init & \emptyset \\Short\rightarrow Long & Prc_{t}>High50_{t} \\Short\rightarrow Short & Prc_{t}\le High20_{t} \\Short\rightarrow SFL & \emptyset \\Short\rightarrow SFS & High20_{t}<Prc_{t}\le High50_{t} \\ \\SFL\rightarrow Init & \emptyset \\SFL\rightarrow Long & Prc_{t}>High50_{t} \\SFL\rightarrow Short & Prc_{t}<Low50_{t} \\SFL\rightarrow SFL & Low50_{t}\le Prc_{t}\le High50_{t} \\SFL\rightarrow SFS & \emptyset \\ \\SFS\rightarrow Init & \emptyset \\SFS\rightarrow Long & Prc_{t}>High50_{t} \\SFS\rightarrow Short & Prc_{t}<Low50_{t} \\SFS\rightarrow SFL & \emptyset \\SFS\rightarrow SFS & Low50_{t}\le Prc_{t}\le High50_{t} \end{cases}

Knowing the possible state transitions, we can determine what trading actions are needed in each instance:

\begin{cases} Init\rightarrow Init & \emptyset \\Init\rightarrow Long & Send\ buy\ order\ for\ N\ units \\Init\rightarrow Short & Send\ sell\ order\ for\ N\ units \\Init\rightarrow SFL & \emptyset \\Init\rightarrow SFS & \emptyset \\ \\Long\rightarrow Init & \emptyset \\Long\rightarrow Long & \emptyset \\Long\rightarrow Short & Send\ sell\ order\ for\ 2N\ units \\Long\rightarrow SFL & Send\ sell\ order\ for\ N\ units \\Long\rightarrow SFS & \emptyset \\ \\Short\rightarrow Init & \emptyset \\Short\rightarrow Long & Send\ buy\ order\ for\ 2N\ units \\Short\rightarrow Short & \emptyset \\Short\rightarrow SFL & \emptyset \\Short\rightarrow SFS & Send\ buy\ order\ for\ N\ units \\ \\SFL\rightarrow Init & \emptyset \\SFL\rightarrow Long & Send\ buy\ order\ for\ N\ units \\SFL\rightarrow Short & Send\ sell\ order\ for\ N\ units \\SFL\rightarrow SFL & \emptyset \\SFL\rightarrow SFS & \emptyset \\ \\SFS\rightarrow Init & \emptyset \\SFS\rightarrow Long & Send\ buy\ order\ for\ N\ units \\SFS\rightarrow Short & Send\ sell\ order\ for\ N\ units \\SFS\rightarrow SFL & \emptyset \\SFS\rightarrow SFS & \emptyset \end{cases}
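To make the pattern concrete, here is a minimal Python sketch encoding the transition and action tables above (the names and the signed-quantity order abstraction are mine; a production version would also carry the indicator state, order management, and logging):

from enum import Enum

class TradingState(Enum):
    INIT = "Init"
    LONG = "Long"
    SHORT = "Short"
    STOP_FROM_LONG = "SFL"
    STOP_FROM_SHORT = "SFS"

def transition(state, prc, high50, low50, high20, low20, indicator_ready, n_units):
    """Return (next_state, order_qty); order_qty is signed, 0 means no action."""
    S = TradingState
    if state is S.INIT:
        if not indicator_ready:
            return S.INIT, 0
        if prc > high50:
            return S.LONG, +n_units
        if prc < low50:
            return S.SHORT, -n_units
        return S.INIT, 0                        # ready but no breakout yet: stay flat
    if state is S.LONG:
        if prc < low50:
            return S.SHORT, -2 * n_units        # reverse: exit long and enter short
        if prc < low20:
            return S.STOP_FROM_LONG, -n_units   # stopped out of the long
        return S.LONG, 0
    if state is S.SHORT:
        if prc > high50:
            return S.LONG, +2 * n_units         # reverse: cover short and go long
        if prc > high20:
            return S.STOP_FROM_SHORT, +n_units  # stopped out of the short
        return S.SHORT, 0
    # STOP_FROM_LONG / STOP_FROM_SHORT: flat, waiting for the next 50-day breakout
    if prc > high50:
        return S.LONG, +n_units
    if prc < low50:
        return S.SHORT, -n_units
    return state, 0

A driver loop would call transition once per bar with the latest indicator values, route any non-zero quantity to execution, and log every state change, which makes the resulting behavior easy to audit.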

This is obviously a simplistic example meant to illustrate my point, but let’s consider the design for a moment. First, the transition conditions out of each state are mutually exclusive, which is a prerequisite for a valid deterministic FSM. In plain terms that means that at any point in time I can evaluate the state and figure out clearly what the algorithm was trying to do, since there can only be one possible transition given the state at that specific point in time. That would not have been the case had I decided to define the following transitions:

\begin{cases} \cdots \\Long\rightarrow Short & Prc_{t}<Low50_{t} \\Long\rightarrow SFL & Prc_{t}<Low20_{t} \\\cdots \end{cases}

In that case, it would be possible for both conditions to evaluate to true, and therefore there would be more than one possible state transition. How would you know, looking at the logs, what this algorithm tried to do? In this case it is again obvious, but in more complex FSMs I always find it worth the time to carefully consider the entire state space and clearly define the algorithm’s behavior in a fashion similar to the above. This might seem very long-winded, but it is something I do on paper religiously before I ever write a line of production code.

The same pattern can also be used in other parts of your trading stack. For instance, your order management system could define orders as an FSM with the following states: \left\{Sent, Working, Rejected, CxlRequested, Cancelled, CxlRejected, PartiallyFilled, FullyFilled\right\}. The transitions in this case have to do with the reception of exchange order events such as acknowledgements, reject messages, etc. If you stop to think about it, you could design a whole trading stack with almost nothing but FSMs. By using FSMs you can design for correctness and eliminate a whole class of potential design flaws that might otherwise sneak in.
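As an illustration of the same idea on the order management side, a simple way to encode the legal order-state transitions is a lookup table that execution code checks before mutating an order (the state names follow the list above; the table contents and event flow are my own assumptions, not a specification):

# allowed order-state transitions, keyed by current state
ORDER_TRANSITIONS = {
    "Sent":            {"Working", "Rejected", "PartiallyFilled", "FullyFilled"},
    "Working":         {"CxlRequested", "PartiallyFilled", "FullyFilled"},
    "CxlRequested":    {"Cancelled", "CxlRejected", "PartiallyFilled", "FullyFilled"},
    "CxlRejected":     {"Working", "PartiallyFilled", "FullyFilled"},
    "PartiallyFilled": {"PartiallyFilled", "FullyFilled", "CxlRequested"},
    "Rejected":        set(),   # terminal states
    "Cancelled":       set(),
    "FullyFilled":     set(),
}

def apply_order_event(current_state, new_state):
    """Refuse to move an order into a state the table does not allow."""
    if new_state not in ORDER_TRANSITIONS[current_state]:
        raise ValueError(f"illegal order transition {current_state} -> {new_state}")
    return new_state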

The pattern fits nicely with object-oriented design, where state and related behaviors can be grouped into neatly decoupled classes. That said, some functional languages offer a type system that, used properly, provides additional guarantees and can help you build some very powerful abstractions. We will examine an example in a following post. In the meantime, I would be very interested, as always, to hear which patterns you have found useful in your work.


1. [Or as close as one can get to certainty anyway!]

60% of the time, it works every time.

The blogosphere is a very interesting microcosm of the trading world. Many of my older readers will no doubt remember the glory days of “short-term mean-reversion”. By which I mean, of course, the multitude of posts (including several from yours truly) about RSI2, DV2, and the like. Around 2010 this type of strategy was quite successful, and many people put their own twist on it and posted their results.

Then, while this humble publication went into hibernation, the collective brain trust of the community turned to the relatively new volatility ETF space. It was glorious: backtests were run, strategies were tweaked, whole websites tracking the strategies popped up, and simulated equity curves went to the moon. Life was great. Then on Monday, 2018-02-05, the music didn’t just slow down, it stopped. $XIV, the workhorse of many such strategies (there is no nice way to say this) blew up. From what I can see, its demise was met with mixed emotions. Twitter traders with $0.00 AUM knew it all along and were obviously already short $XIV from the high, for size. People with subscription strategies either patted themselves on the back for side-stepping the reaper this time, or went AWOL to avoid having to take ownership of the losses incurred by their subscribers. My personal favorites are the people selling strategies that usually held $XIV shares as their de facto short-volatility security declaring that its demise is a non-issue; $UVXY will do the trick just as well!

This demonstrates such a blatant lack of trading IQ that I struggle to put it into words. The idea that because it was side-stepped this time, the next face-ripping event will be as well, is simply preposterous. Selling volatility is something you do with other people’s money. It’s a great business: you pocket the recurring fees and performance incentives, and when the music stops and you lose your clients’ money, they take all the loss. As Ron Burgundy would put it, 60% of the time, it works every time. We would all be so lucky to find such asymmetric payoff propositions for ourselves: I share in the wins now, you get the blowout later, thanks for playing.

The vast majority of such systems I have encountered in the blogosphere were based on term structure signals to determine whether long or short volatility exposure has a tailwind. In this particular instance, thankfully for some, the signal to get out of the short happened before the spike. But why should it do so next time, or the time after that?

I’d love to hear your thinking on the subject, esteemed reader. I know short volatility is a popular trade and has been for some time. Are you still going to do it? Are you worried about events such as the one from these past couple of days being an issue in the future? Do you want to pay me a monthly fee for putting you in a trade that has an expected value of 0?

I would be down with that if I could sleep at night knowing you take all the risk and I would be the only one left with any profits to show when the chips fall at the end. Unfortunately, I could not live with myself. For those interested, you can look on Collective2, but make sure you filter the strategies by performance excluding this week.

Machine learning is for closers

Put that machine learning tutorial down. Machine learning is for closers only.

As some of you who were around back in the early days of this blog may know, I have always held high hopes for the application of machine learning (ML) to generating trading edges. I think that, like many people first coming across machine learning, the promise of being able to feed raw data into some algorithm you don’t really understand to conjure profitable trading ideas seemingly out of thin air is very appealing. There is only one problem with that: it doesn’t work like that!

Much to my chagrin now, a lot (and I mean a lot) of what this blog is known for is exactly this type of silly application of ML. With this post, I hope to share some of the mistakes I made and lessons I learned trying to make ML work for me that haven’t made it onto the blog due to my abysmal posting frequency. Here they are, in no particular order:

Understanding the algorithm you are using is important.

It is almost too easy to use ML these days. Multiple times I would read a paper forecasting the markets using some obscure algorithm and would be able, through proper application of google-fu, to find open-sourced code that I could instantly try out on my data. This is both good and bad; on the one hand it is nice to be able to use the cutting edge of research easily, but on the other, should you really give something you don’t understand very well access to your trading account? Many of my dollars were mercilessly lit on fire walking that path. I don’t think you need to master an algorithm to apply it successfully, but understanding it well enough to explain why it might work on your specific problem is a good start.

Simple, not easy.

One of my worst flaws as a trader is that I am relentlessly attracted to complex ideas. I dream about building complex models able to solve the market puzzle and rake in billions of dollars à la RenTec. Meanwhile, back in the ruthless reality of the trading life, just about all the money I make trading comes from the thoughtful application of simple concepts I understand well to generate meaningful trading edges. That, however, does not mean it has to be easy. For instance, there are multiple reasons why a properly applied ML algorithm might outperform, say, ordinary least-squares regression in certain cases. The trick is to figure out whether the problem you are currently trying to solve is one of those. Related to the point above, understanding an ML technique gives you a better idea beforehand and saves you time.

Feature engineering is often more important than the algorithm you choose.

I cannot emphasize this point enough. A lot of the older posts on this blog are quite bad in that respect. Most of them use the spray-and-pray approach, that is to say, throw in a bunch of technical indicators as features, cry “.fit()!”, and let slip the dogs of war, as data-scientist Mark Antony would say. As you can imagine, it is quite difficult to actually make that work, and a lot of the nice equity curves generated by these signals don’t really hold up out-of-sample. Not a particularly efficient way to go about it. Generating good features is the trader’s opportunity to leverage their market knowledge and add value to the process. Think of it as getting the algorithm to augment a trader’s ability, not replace it altogether.

Ensembles > single model.

Classical finance theory teaches us that diversification through combining multiple uncorrelated bets is the only free lunch left out there. In my experience, combining models is far superior to trying to find the one model to rule them all.

Model predictions can themselves be features.

Model stacking might seem counter-intuitive at first, but many Kaggle competition winners have built very good models on that concept. The idea is simple: use the predictions of ML models as features for a meta-model that generates the final output.
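For readers who have not come across it, this is roughly what stacking looks like with scikit-learn (a generic sketch; the base models chosen here are placeholders, not a recommendation):

from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# base models whose out-of-fold predictions become features for the meta-model
base_models = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
]

# the meta-model (final_estimator) learns how to combine the base models' predictions
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression(),
                           cv=5)

# stack.fit(X_train, y_train) and stack.predict(X_test) with your own features and labels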

I’ll conclude this non-exhaustive list by saying that the best results I have had come from using genetic programming to find a lot of simple edges that by themselves wouldn’t make great systems, but when thoughtfully combined create a profitable platform. I will discuss the approach in forthcoming posts.

Give me good data, or give me death

A good discussion not too long ago led me to start a revolution against some data management aspects of my technology stack. Indeed, it is one of the areas where the decisions made will impact every project undertaken down the road. Time is one of our most valuable resources, and we need to minimize the amount of it we spend dealing with data issues. Messy and/or hard-to-use data is the greatest drag I have encountered when trying to produce research.

I had to keep a couple of things in mind when deciding on a solution. First, I knew I did not want to depend on any database software. I also knew that I would not be the only one using the data, and that although I use Python, other potential users still don’t know better and use R. The ideal solution would be as close to language-agnostic as possible. Furthermore, I wanted a solution stable enough that I would not have to worry too much about backward compatibility in case of future upgrades.

With those guidelines in mind, I could start to outline what the process would look like:

  1. Fetch data from vendor (csv form)
  2. Clean the data
  3. Write the data on disk

The biggest decision I had to make at this stage was the format used to store the data. Based on the requirements listed above, I shortlisted a few formats that I thought would fit my purpose: CSV, JSON, HDF5, and msgpack.

At this stage I wanted to get a feel for the performance of each of the options. In order to do that I created a simple dataset of 1M millisecond bars with four columns, i.e. 4M data points.

In [1]:
import pandas as pd
import numpy as np

#create sizable dataset
n_obs = 1000000
idx = pd.date_range('2015-01-01', periods=n_obs, freq='L')
df = pd.DataFrame(np.random.randn(n_obs,4), index=idx, 
                  columns=["Open", "High", "Low", "Close"])
df.head()
Out[1]:
Open High Low Close
2015-01-01 00:00:00.000 -0.317677 -0.535562 -0.506776 1.545908
2015-01-01 00:00:00.001 1.370362 1.549984 -0.720097 -0.653726
2015-01-01 00:00:00.002 0.109728 0.242318 1.375126 -0.509934
2015-01-01 00:00:00.003 0.661626 0.861293 -0.322655 -0.207168
2015-01-01 00:00:00.004 -0.587584 -0.980942 0.132920 0.963745
Let’s now see how they perform for writing.
In [2]:
%timeit df.to_csv("csv_format")
1 loops, best of 3: 8.34 s per loop
In [3]:
%timeit df.to_json("json_format")
1 loops, best of 3: 606 ms per loop
In [4]:
%timeit df.to_hdf("hdf_format", "df", mode="w")
1 loops, best of 3: 102 ms per loop
In [5]:
%timeit df.to_msgpack("msgpack_format")
10 loops, best of 3: 143 ms per loop
And finally let’s have a look at their read performance.
In [11]:
%timeit pd.read_csv("csv_format")
1 loops, best of 3: 971 ms per loop
In [10]:
%timeit pd.read_json("json_format")
1 loops, best of 3: 6.05 s per loop
In [8]:
%timeit pd.read_hdf("hdf_format", "df")
100 loops, best of 3: 11.3 ms per loop
In [9]:
%timeit pd.read_msgpack("msgpack_format")
10 loops, best of 3: 33.1 ms per loop
Based on that quick-and-dirty analysis, HDF seems to do better. Read performance is much more important to me, as the data should only be written once but will definitely be read more often than that. Please note that I did not intend to portray this test as an end-all proof, but simply to look at what the options were and evaluate their relative performance. Based on my preliminary results, including but not limited to this analysis, I elected to store the data using the HDF format, as it meets all my requirements and looks to be fairly fast, at least for medium-sized data. It should also enable the R homies to use it through the excellent rhdf5 library.

So at this point I have decided on a format. The question that remains to be answered is how to organize it. I was thinking of something like this:

/data
|-- Equities
    |-- Stock (e.g. SPY, AAPL etc.)
        |-- Metadata
|-- Forex
    |-- Cross (e.g. USDCAD, USDJPY etc.)
        |-- Metadata
        |-- Aggregated data
        |-- Exchanges (e.g. IdealPRO etc.)
|-- Futures
    |-- Exchange (e.g. CME, ICE, etc.)
        |-- Contract (e.g. ES, CL etc.)
            |-- Metadata
            |-- Continuously rolled contract
            |-- Expiry (e.g. F6, G6, H6 etc.)
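For what it is worth, a layout like the one above maps naturally onto HDF5 groups; a minimal pandas sketch (the keys and the dummy frame are hypothetical, and pandas’ HDF support requires PyTables) might look like this:

import numpy as np
import pandas as pd

# dummy frame standing in for, e.g., the continuously rolled ES contract
es_cont = pd.DataFrame(np.random.randn(5, 4),
                       index=pd.date_range("2015-01-01", periods=5),
                       columns=["Open", "High", "Low", "Close"])

with pd.HDFStore("data.h5", mode="a") as store:
    # hierarchical keys mirror the directory layout sketched above
    store.put("Futures/CME/ES/continuous", es_cont, format="table")
    store.put("Equities/SPY/daily", es_cont, format="table")  # placeholder frame reused for illustration

# reading a single node back
es = pd.read_hdf("data.h5", "Futures/CME/ES/continuous")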

Personally, I am not too sure how best to do this. It would seem to me that it would be rather difficult to design a clean, polymorphic API to access the data with such a structure, but I can’t seem to find a better way.

I would like to hear what readers have come up with to address those problems. In addition to how you store and organize your data, I am very keen to hear how you would handle automating the creation of perpetual contracts without having to manually write a roll rule for each product. This has proven to be a tricky task for me, and since I use those contracts a lot in my analysis I am looking for a good solution.

Hopefully this discussion will be of interest to readers that manage their own data in-house.

99 Problems But A Backtest Ain’t One

Backtesting is a very important step in strategy development. But if you have ever gone through the full strategy development cycle, you may have realized how difficult it is to backtest a strategy properly.

People use different tools to implement a backtest depending on their expertise and goals. For those with a programming background, Quantstrat (R), Zipline or PyAlgoTrade (Python), or TradingLogic (Julia) are sure to be favorite options. For those preferring a retail product that involves less conventional programming, Tradestation or TradingBlox are common choices.

One of the problems with using a third-party solution is often the lack of flexibility. This doesn’t become apparent until one tries to backtest a strategy that requires more esoteric details. Obviously this will not be an issue when backtesting classics like moving-average or Donchian-channel type strategies, but I am sure some of you have hit your head on the backtest complexity ceiling more than once. There is also the issue of fill assumptions. Most backtests I see posted in the blogosphere (including the ones present on this humble website) assume trading at the close price as a simplifying assumption. While this works well for the purpose of entertaining a conversation on the internet, it is not robust enough to be used as the basis for deploying significant capital.

The first time one can actually see how good (or bad) their chosen backtesting solution is, is when the strategy is traded live. However, I am always amazed at how little attention some traders pay to how closely their backtest matches their live results. To some, it is as if live trading is simply the step that follows the backtest. I think this misses a crucially important part of the trading process, namely the feedback loop. There is a lot to be learned in figuring out where the differences between simulation and live implementation come from. In addition to the obvious bugs that may have slipped through testing, it will quickly become apparent whether your backtest assumptions are any good and whether or not they must be revisited.

Ideally, backtested results and live results for the period during which they overlap should be closely similar. If they are not, one should be asking serious questions and trying to figure out where the discrepancies come from. In a properly designed simulation on slow-frequency data (think daily or longer) you should be able to reconcile both to the penny. If the backtester is well designed, the difference is probably going to center on the fill price at the closing auction being different from the last traded price, which is typically what gets reported as the close price. I always like to pay particular attention to the data I use to generate live signals and compare it to the data fed to the simulation engine to find potential signal differences, as I often find that the live implementation trades off data that doesn’t always match the simulation dataset. Obviously, as the time frame diminishes the problems are magnified. Backtesting intraday trading strategies is notoriously difficult and beyond the scope of this blog. Let’s just say that a good intraday backtester is a great competitive advantage for traders/firms willing to put in the development time and money.

It would be negligent of me to complain about backtesting problems without offering some of the processes I use to improve their quality and, ultimately, usability. First, I personally chose not to use a third-party backtesting solution. I use software that I write, not because it is better than other solutions out there but because it allows me to fully customize all aspects of the simulation in a way that is intuitive to me. That way I can tune any backtest, as part of the feedback loop I was referring to earlier, to more accurately model live trading. Furthermore, as I refined the backtester over time, it slowly morphed into an execution engine that could be used, with the proper adapters, to trade live markets. Effectively, I have a customized backtest for each strategy, but they all share a common core of code that forms the base of the live trading engine. I also spend quite some time looking at live fills versus simulated fills and trying to reconcile the two.
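As an example of the kind of reconciliation just mentioned, a simple first pass is to line up live and simulated fills by order identifier and look at the price and quantity differences (a sketch only; the column names are hypothetical):

import pandas as pd

def reconcile_fills(live_fills: pd.DataFrame, sim_fills: pd.DataFrame) -> pd.DataFrame:
    """Join live and simulated fills on order id and compute per-fill differences."""
    merged = live_fills.merge(sim_fills, on="order_id", suffixes=("_live", "_sim"))
    merged["price_diff"] = merged["price_live"] - merged["price_sim"]
    merged["qty_diff"] = merged["qty_live"] - merged["qty_sim"]
    # largest price discrepancies first, so they get investigated first
    return merged.reindex(merged["price_diff"].abs().sort_values(ascending=False).index)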

Please do not think that I am trying to tell you that a do-it-yourself solution is the best; I am simply saying that it is the one that fits me best. The point I am trying to make here is that no matter what solution you decide to use, it is valuable to consider the differences between simulated and live results; who knows, perhaps it will make you appreciate the process even more. I would be tremendously interested to hear what readers think on the subject. Please share some insight in the comment section below so everybody can benefit.

QF

Hello Old Friend

Reports of my death have been greatly exaggerated ~Mark Twain

Wow, it has been a while. Roughly four years have gone by since my last post. It might seem like a long time to some, but coming out of college and hitting the ground running as a full-time trader made it seem like the blink of an eye for me. That being said, I have recently come to miss it and intend to start blogging again, albeit on an irregular schedule that will evolve based on my free time.

What to expect

Obviously, since I have been trading full time my skill set has evolved, so I can only imagine that the new perspective I hope to bring to the analysis moving forward will be more insightful.

You will notice a few changes, the biggest one being that I no longer use R as my main language. I have all but fully moved my research stack to Python, so you can expect to see a lot more of it moving forward. As for the content, I think the focus will remain the same for the most part: algorithmic trading for the equities markets.

Special Thank You

Finally, I want to take the time to thank the readers who kept emailing and kept in touch during my absence from the blogosphere. I can only hope that the number of people who somehow find value in these short articles will grow over time and that I will meet more interesting people. You are, after all, the reason I write these notes. So a big thank you to all of you.

QF

Ensemble Building 101

In continuation of the last post, I will be looking at building a robust ensemble for the S&P 500 ETF and the mini-sized future. The goal here will be to set up a nice framework that will (hopefully) be of good use to readers interested in combining systems into strategies. In order to make it more tangible, I want to create a simplified example that will be used as we move forward in development. I would encourage readers to actively comment on and contribute to the whole process.

Before we do anything, as with any project, I want to have an idea of the scope so I don’t get entangled in an infinite loop of development where I am never satisfied and continuously try to replace indicator after indicator for a marginal increase in CAGR on the backtest. For this example, I want to use four indicators (two with a momentum flavour and two with a mean-reversion flavour), a volatility filter, and an environmental filter. The strategy will also have an adaptive layer to it.

Now that the foundational idea has been laid out, let’s examine the mechanics at a high level; keep in mind that this is only one way you could go about it. Basically, I will be evaluating each strategy individually, looking at performance under different volatility environments and also with the global environment filter. Based on historical performance and the current environment, exposure will be determined.
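Here is a rough Python sketch of the kind of adaptive weighting I have in mind (entirely illustrative; the trailing window and the zeroing of negative performers are assumptions on my part, not the final design):

import pandas as pd

def adaptive_weights(sub_strategy_returns: pd.DataFrame, window: int = 63) -> pd.DataFrame:
    """Weight each sub-strategy by trailing risk-adjusted performance, zeroing negative performers."""
    rolling_sharpe = (sub_strategy_returns.rolling(window).mean()
                      / sub_strategy_returns.rolling(window).std())
    raw = rolling_sharpe.clip(lower=0.0)
    return raw.div(raw.sum(axis=1), axis=0).fillna(0.0)

# ensemble daily returns: yesterday's weights applied to today's sub-strategy returns
# ensemble = (adaptive_weights(sub_rets).shift(1) * sub_rets).sum(axis=1)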

This series will have R code for the backtest (I am still debating whether to use quantstrat/blotter), and the simple example will be available for download. My point is to provide readers with a way to replicate the results and extend the framework significantly. More to follow!

QF