A statistics course
committed to honest data analysis,
focused on mastery of best-practice models,
and obsessed with the dynamics of financial markets
Cheese: 92M USD of It
A British cheese company has contributed cheese to its pension fund.
Teams List and Presentation Schedule
Please check the PRESENTATION SCHEDULE for time of your team's presentation.
Citigroup --- Sales and Trading Event
Divya Krishnan took 434 in the Fall of 2008 (a tough year!) and now he is working in Sales and Trading at Citigroup. He's also organizing a campus event --- check it out:
On November 26th and 27th, Citi’s University of Pennsylvania Alumni will be on campus to discuss careers in Sales & Trading. In addition to holding a Sales and Trading information session/workshop on campus, some of the professionals from the Sales and Trading team will offer their time to meet informally, individually or in very small groups, with students to discuss our Sales & Trading program, as well as to review resumes and discuss interview techniques. Any students interested, please send a resume to Divya Krishnan at firstname.lastname@example.org
Onto the Final Turn...
First Deliverable: Monday November 26. A Two Page Description of your Proposal. This is for me to read to be prepared for your Proposal Presentations. Proposal Presentations will be on November 28 and December 2 and December 5. Written project reports will be due December 18 (hard copy and electronic copy).
From the Sunday Round-Up
Tim Harford: "The efficient markets hypothesis is surely false. What is striking is that it is very close to being true. For the Warren Buffetts of the world, 'almost true' is not true at all. For the rest of us, beating the market remains an elusive dream."
Day 23: Putting the Pieces Together
For 26 November 2012
Today we will have completed just over two dozen lectures, and this is a very modest number given our goal of dealing honestly with one of the central factors of economic life --- the returns on financial assets.
Our plan for the day is to (1) review the stylized facts that we have verified from our own analyses --- and add a few new twists, (2) review our models with a focus on where they add insight and value, and finally (3) look at a few ways these can inform the design of interesting projects. In particular, we'll look at the interesting distinction between univariate, multivariate, and ordered univariate strategies. Variations on these methods can be applied to almost any basic project idea.
I'll also add a word or two about the use of confidence intervals and suggest a new technique --- bootstrapping --- which you can use to get confidence intervals (of a sort) for the return on a trading strategy. The method is not perfect, but it definitely has its charms.
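To make the bootstrapping idea concrete, here is a minimal percentile-bootstrap sketch in plain Python. The return series is made up for illustration (it is not any strategy from class), and note the caveat in the comment: plain resampling ignores serial dependence, so for autocorrelated strategy returns this is only a rough guide.

```python
import random
import statistics

def bootstrap_ci(returns, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean of a return series.

    Caveat: i.i.d. resampling ignores serial dependence, so for
    autocorrelated strategy returns this is only a rough guide
    (a block bootstrap would be safer).
    """
    rng = random.Random(seed)
    n = len(returns)
    means = sorted(
        statistics.fmean(rng.choices(returns, k=n)) for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy example: one year of hypothetical daily strategy returns
random.seed(7)
rets = [random.gauss(0.0005, 0.01) for _ in range(250)]
lo, hi = bootstrap_ci(rets)
```

The interval you get is "of a sort" exactly as advertised: it quantifies sampling noise in the average return, not model risk or regime change.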
I'll also ring the bell again about the importance of data cleaning, EDA, and presentation with thoughtful tables and graphs.
Generic Advice about Project Design (and Process Comments)
Certainly read the Final Project Spec Sheet. We've already covered the main points in class. The one I would underscore here is that clear thinking is a key to having an excellent report. You want to make clear assertions and you want to make sure that what you assert is backed up by clear, thoughtful, and thorough research. Even a "simple" theme can lead to a very rich report if it is engaged carefully and completely.
This is an excellent time to ask questions about your project ideas and plans --- even though you have already written out your "one pager" pre-proposals.
Sidebar: Promised Link on "Zero Cost" Portfolios
Naturally, they are not truly zero cost --- if they were, and some of them do indeed have positive expected returns, why would we hold anything else? The arithmetic of returns is a little tricky here. I've made one pass; perhaps you can do better. At least this puts some issues on the table.
Sidebar: 130/30 Strategies
It's been kicking around for ages that one might do better to be 30% short and 130% long. You can find a lot about this idea on the web. JAI has a useful empirical piece that is easily accessible.
Sidebar: Colorful View of Style Returns by Year
American Century has a colorful chart that is worth a few minutes of pondering. They call it a periodic table --- which is a bit of a pun, I hope. We can also go to ETFScreen.com for an update.
Sidebar: Black Friday Indicator --- Probably Not
Mark Hulbert ran a little regression on the returns for the rest of the year vs the "returns on Black Friday." He used data that went back further than we would ever go and he found what one might guess --- there is no substantial information for the total market that is in the "Black Friday" returns.
Naturally, there are open questions. Suppose we just take 20 years of data --- that actually seems more sensible. Suppose we just look at the retail sector --- that is also more sensible.
Sidebar: Just How Big Is the Equity Premium?
The answer seems to depend on (1) whom you ask and (2) when you ask. This is one take away from a recent survey of 150 texts by Pablo Fernandez. The paper also gives some clarification of what one means by "equity premium" --- there are four reasonable interpretations.
Day 22: Comparing Asset Returns in the Context of Risk
For 21 November 2012
A traditional 434 pre-Thanksgiving day task is to show how learning to juggle with three balls can be a useful metaphor for learning any complex task.
We'll do it again this year --- but it will not be a big chunk of the class. Still, if you are a juggler and you have equipment --- bring it to class on Wednesday. We do want to have a bit of fun on this "extra" day.
One theme is that plain vanilla three ball juggling can be very beautiful --- and magically complex. We won't "cover" it very deeply in class on Wednesday, but even today there is no better way to have four minutes of fun than by watching the Fugly juggler.
Finally, there is our traditional strutting turkey --- oh, what he does not know! Sort of reminds me of Bear Stearns's James Cayne going off to his famous bridge tournament, secure in the risk management tools in place at Bear Stearns.
Also on the docket: (a) a piece from Credit Suisse on the VIX as a signal of market direction, (b) a piece that needs to be updated on the World Series of Poker --- we may do the updating in real time! (c) some discussion of endowments --- especially at Yale, and (d) a piece on risk aversion that reminds us that most people are really insanely risk averse.
Not a Sidebar --- Back to Business!
The plan for today is to look at the notion of risk adjusted returns from soup to nuts. This is a very interesting topic in financial time series, and it has been developed far less systematically than one might have guessed. We'll consider all of the conventional measures, and add a few variations of our own. We'll also look at what one might learn about risks by consideration of post-mortem analyses of crises and crashes.
There is a somewhat dated resource page on performance measures. Feel free to take a look, but also take what you see with a pinch of salt.
Not a Sidebar --- Questions about Projects
This is the time for everyone to start discussing the projects --- their design and execution. I had an email question about the stylized facts about beta. I'll show how you can do some thinking aloud about the kinds of investigations that beta suggests.
Sidebar: The Risks of Being in Charge of Risk Models
Things do have a way of periodically blowing up, and for some time now banks have found the natural scapegoat --- the guy in charge of the VaR models. It is amusing to look at a story that goes back to 2007.
The moral is the same: if you can avoid being in charge of the risk models --- grab the opportunity. The up-side and the down-side are way out of balance --- Tukey's unobserved risk, the peso problem, and the inevitable earthquakes and asteroids are all on the way. A lot of ships will be sunk --- no way out of that --- but it's somehow more tolerable if you are not in charge of making sure the ships stay afloat.
Sidebar: Volatility? What Volatility Are You Talking About?
When I saw the title of the paper " We Don’t Quite Know What We Are Talking About When We Talk About Volatility" by Daniel G. Goldstein and Nassim Taleb, I was quite excited.
At last, I thought, someone is making the point in print that I have made repeatedly in class. Namely, each time we say "volatility" we point to some parameter in some model, but the model and the parameter can differ from utterance to utterance. This is silly of us, but we all do it.
Goldstein and Taleb get a whiff of this, but not the full scent by any means. Still, you should take a look at their paper. It is a quick read.
Sidebar: PDP a Momentum ETF
PDP is a PowerShares ETF launched a few years ago (March 2007). The prospectus is one that would make Jesse James uneasy, but the underlying theory of the asset is interesting. It is a momentum story based on a proprietary momentum index --- which would all seem insane --- except that the proprietary index is published independently of PowerShares. Still, the index provider does not seem hesitant to be a promoter of the ETF. Net-net, this does not look like a healthy development.
Sidebar: Nice EDA --- Two Regions Each with Ten Sectors
If you take the 10 sectors of the SP500 and the 10 sectors of the MSCI EAFE, you get 10 pairs of numbers, one for each sector. You can then plot these pairs in two dimensions and ponder the meaning of the 45 degree line. Lo and behold! It tells you in which of the two regions the given sector is now doing best.
This gives a very interesting snapshot of current market "stages." Note: Points that are near the line are doing about as well in each region, so not much "weight" should be placed on these points. Still, each one of these points deserves a story.
Note: The graph below is for November 2007, and by March of 2009 one may have wondered if such returns could ever be seen again. Lo and behold, the returns from March 2009 to December 2010 have been marvelous. Future returns may vary --- and surely will.
Make sure you understand the structure of this graph --- it's very interesting. You may have a final project that could take advantage of such a plot. All you need is (a) a collection of assets --- say sector returns that (b) come in two flavors --- say the American flavor and the European Flavor. There are many variations.
For a second variation, I would love to see the dynamic analog of this graph, even for just one sector. The graph I have in mind would plot (x_t, y_t) where, for example, x_t is the period t return on the EAFE energy sector and y_t is the period t return on the US energy sector. We could look at 3 month trailing returns and plot a point for each week. It could be a very informative picture.
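A minimal sketch of that dynamic plot's data preparation, in plain Python. The two weekly price series here are simulated random walks standing in for, say, the EAFE and US energy sectors (real data would come from whatever source you use); the 13-week window approximates 3-month trailing returns.

```python
import random

def trailing_returns(prices, window=13):
    """Trailing window-period simple returns: p[t]/p[t-window] - 1."""
    return [prices[t] / prices[t - window] - 1 for t in range(window, len(prices))]

def random_walk(n, start=100.0, vol=0.02, ):
    """Simulated weekly price path used as a stand-in for real sector data."""
    p = [start]
    for _ in range(n - 1):
        p.append(p[-1] * (1 + random.gauss(0, vol)))
    return p

random.seed(1)
eafe = random_walk(120)
us = random_walk(120)

# One (x_t, y_t) point per week; points above the 45-degree line are
# weeks in which the US sector is leading.
pairs = list(zip(trailing_returns(eafe), trailing_returns(us)))
us_leads = sum(1 for x, y in pairs if y > x)
```

Feeding `pairs` to any scatter-plot tool, with the 45 degree line drawn in, gives the picture described above.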
Sidebar: Fairness --- and an Insane On-Line Broker Pitch
We'll never look at this in class, but you might look at it in some idle moment when you want a good laugh. It's a "tutorial" on an IB tool for an algorithmic purchase of a 70K-share block of Cisco. Very funny --- well --- maybe not.
Sidebar: Michael Lewis on "The End of Wall Street"
Michael Lewis is famous for his book Liar's Poker, which chronicled the goings-on at Salomon Brothers during the "Wall Street excesses of the 80's" --- which were actually pretty tame by contemporary standards.
His current piece provides a compelling view of the sub-prime development and how, if you had a brain and intellectual integrity, you could have been on the right side of the trade of the century. It is one of the most riveting pieces of financial journalism that I have read in years.
Sidebar: RiskMetrics on Volatility
The November 2008 research letter from RiskMetrics is worth a look. It's naive in many ways, but it does start some interesting conversations. One should keep in mind that November 2007 was just about the crest of the broader market (mortgage pools and many quantitative strategies started to break a bit earlier). The stretch from September 2008 to March 2009 looked at the time like a death spiral.
Sidebar: World Series of Poker --- For Whom Does It Make Sense to Play?
The other topic that has served us well on the day before Thanksgiving is a look at the logic of playing in the World Series of Poker. We'll not dig into the details this year, but I may outline what I regard as the crux of the problem. On a pure consumption (i.e. entertainment) model, I'd like to play sometime --- but it is hard to swallow taking a bad bet --- and I could be 100s of times "better" and the bet would still be bad. The theoretical question is whether it makes sense for anyone --- and if it does, my candidate is Andy Beal. I may have mentioned Andy Beal before in his capacity as a banker, a role that brought him to my ever-evolving quotes page.
We have a few people in class who are staring down into their phones. This is pretty distracting to me, so I ask you --- please do not check your phones during class.
Day 21: Cointegration and Statistical Arbitrage
For 19 November 2010
Final Project --- The Full Details
The project specification has evolved over many years of experience, so it is hard to imagine that there is any ambiguity that remains to be squeezed out. Still, I do want to go over it to make sure that everyone knows exactly what is expected --- especially at the level of academic integrity.
You must deliver a hard copy to my mailbox in JMHH Suite 400 and you must send an electronic copy to my email. As I mention in the specification, label the electronic copy of your project by your name and mark on the first page if this is "OK to web post" or "Not OK to web post." Both forms of submission are required.
Late reports lose one letter grade per day. This is a huge penalty. Don't even think about it.
You should note on the hard copy and the electronic copy if you are willing to have your report posted on the web for the possible guidance of future 434 students. Without your authorization, your report will not be posted, even if it is the perfect model for a 434 final project report.
Main Business --- Cointegration and its Application
The plan is to develop the theory of co-integrated series and the application of cointegration to statistical arbitrage. There are many variations on this theme, but we will be particularly attentive to pairs trading. This class of strategies has bought more than one nice house in Connecticut, but its popularity has repeatedly waxed and waned.
The Puzzle that Started Cointegration --- Spurious Regression
We'll begin with one of my favorite simulations. Simulate two independent random walks, store the values in vectors x and y, regress y on x, and WHAM --- you find a highly significant alpha and beta almost every time. Since we know that x cannot tell us anything useful about y, we know we have a spurious regression.
Next, we'll look at the way out of the trap --- testing that the residuals are an I(0) process. If the residuals are an I(0) process we are safe (or at least in no more danger than usual). If the residuals fail to be an I(0) process, then we almost certainly have a garbage regression. It is amazing how often you will see people perform such regressions, not knowing that they have fallen into a well known trap.
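The simulation described above is easy to run for yourself. Here is a self-contained sketch in plain Python (no special libraries) that repeats the experiment and counts how often the slope's t-statistic looks "significant" at the usual |t| > 2 threshold --- far more often than the nominal 5%.

```python
import math
import random

def slope_t_stat(x, y):
    """OLS t-statistic for the slope in y = a + b*x + error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)   # residual variance
    return b / math.sqrt(s2 / sxx)

def random_walk(n, rng):
    """Independent standard-normal-increment random walk."""
    p = [0.0]
    for _ in range(n - 1):
        p.append(p[-1] + rng.gauss(0, 1))
    return p

rng = random.Random(0)
reps, n = 200, 250
rejections = sum(
    1 for _ in range(reps)
    if abs(slope_t_stat(random_walk(n, rng), random_walk(n, rng))) > 2
)
reject_rate = rejections / reps   # "significant" far more than 5% of the time
```

The escape hatch in the text is exactly the right follow-up: test whether the residuals from such a regression are I(0) before believing any of the t-statistics.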
We'll look at some resources that add further intuition to this process, including the famous "Drunk and Her Dog" story.
Finally we'll look at some ideas from statistical arbitrage including the idea of a synthetic index and methods of pairs trading. I've started a resource page on pairs trading and I will add to it over time.
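As a taste of the pairs-trading mechanics, here is a bare-bones rolling z-score signal on a spread, in plain Python. The spread values and thresholds are made up for illustration; in practice the spread would come from a cointegrating regression (and you would confirm it is I(0)), not a raw price difference.

```python
import statistics

def pairs_signal(spread, window=20, entry=2.0):
    """Rolling z-score trading signal on a spread.

    -1 = short the spread (it looks rich), +1 = long the spread
    (it looks cheap), 0 = stand aside. Textbook version only.
    """
    signals = []
    for t in range(window, len(spread)):
        hist = spread[t - window:t]
        mu = statistics.fmean(hist)
        sd = statistics.stdev(hist)
        z = (spread[t] - mu) / sd if sd > 0 else 0.0
        signals.append(-1 if z > entry else (1 if z < -entry else 0))
    return signals

# Toy spread: quiet for 20 periods, then two wild excursions
spread = [0, 0.1, -0.1, 0.2, 0.0, -0.2, 0.1, 0.0, -0.1, 0.1,
          0.0, 0.2, -0.1, 0.1, 0.0, -0.2, 0.0, 0.1, -0.1, 0.0,
          1.5, 1.4, -1.5]
sigs = pairs_signal(spread)
```

The z-score framing makes the bet explicit: you are betting that the spread is stationary and will revert. If the cointegration breaks, the "rich" spread can just keep getting richer.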
Sidebar: News Impact in the Classical Sense (Kobe, Katrina, and Crisis)
How much does news move the markets? This is the question that is addressed by what are called event studies, and there is a nice summary of some of these in a popular article by Robert Shiller, who is well-known for his book Irrational Exuberance.
Oddly, the Kobe scenario was one of "bad news travels slowly." The day one reactions were minor, but after ten days the Nikkei 225 had fallen by more than 8%.
One of Shiller's theses is that market impacts are sometimes the result of news cascades; that is, a drumbeat of follow-up news stories can have more financial impact than the initiating event. Since Katrina post-dates the publication of Shiller's essay, you might see if Katrina fits into his mold. This won't really make a whole project for the final, but it would be a nice investigation to share with the class.
This theory of news cascades also seems relevant to the financial crisis of 2008. Lehman hits the tank, AIG gets a massive bailout, GM hangs by a thread, etc. etc. It's hard for the market to rally if the world keeps presenting a cascade of bad events that are all related and all on a glide path that takes many months to run.
Right now we may have a news cascade going on with respect to Irish banks --- or possibly even Irish sovereign debt --- though that sounds overly pessimistic. Most likely this will play out like the recent concerns about Greece, which the market eventually seemed to ignore. Future returns may vary.
Sidebar: KMP vs KMR
Kinder Morgan is the largest pipeline management organization in the US. Investors can participate in Kinder Morgan either as limited partners in the MLP with symbol KMP or through another vehicle which is a kind of management company that trades under the symbol KMR.
There was a comment at Morningstar that argues that these assets should trade in "lock step." To me it seems interesting to look at the time series properties of the spread on these two assets. You'll want to think about what is really going on with the two, and you will need to keep in mind that it is particularly awkward to short KMR, i.e. it may be practically impossible. Still, if you get amused by MLPs, this is where the fight begins. Other items for the soup? Look out for the differences in distributions --- these should make KMP trail KMR if you just look at price levels. News check: there may be reorganizational news about KMR/KMP.
If you just look at the two price processes (say via Google Finance) you get something of a pairs trade story, but you have to be very careful about distributions --- which can be whoppers in MLP land.
Sidebar: Details on a Blackrock Bandit Fund
People who have been ripped off are understandably thin skinned, so if you have an uncle who has been conned by some retail Merrill Lynch account representative into buying Black Rock Equity Index Fund CIEBX --- or something similar --- you have to be gentle as you coach your uncle out of the jam.
The objective of the CIEBX fund is "to match the performance of the S&P 500 index" and --- provided that they mean the total return of the index holdings rather than just the index price return --- this is a noble goal. Unfortunately, they have a "deferred" front-end load of 4.50%, a 0.75% 12b-1 fee, and a 1.17% expense ratio.
If the "new normal" prevails and a 4% real return becomes your benchmark, then buying this fund throws away about half of the real return you hope to earn. Buying it is just like giving away half of your future real earnings --- or half of your initial investment.
Buy 100K of this fund, and asymptotically you are guaranteed to get a negligible fraction of what you would get with an investment in an honest SP500 Index Fund such as Fidelity Spartan or Vanguard Admiral. This isn't fancy theory; it's arithmetic. Saving your Uncle from this mistake for just 100K will over time save enough to pay for a big chunk of someone's Wharton education.
Why Do They Do This? It's NUTS!
What I don't understand about Merrill Lynch and Blackrock is why they don't care more about the reputation risk that this kind of larceny at the retail level creates even at the wholesale level. It is transparent that the CIEBX fund is a crass rip-off. Other products are harder to analyze, but, if they are willing to rip you off when you can check exactly how much you have been scammed, then you have to expect that they are REALLY ripping you off with their more obscure products.
Do People Learn? Evidently Not
"In 2002, Merrill paid $100 million in fines after regulators found analysts at the firm had recommended stocks they knew to be no good." (ref)
Still, in every township throughout the land, one can find the friendly well-meaning ML rep, often clueless to his complicity, plugging products that under every possible future scenario will leave his clients with less money than they could have had if they had taken the time to read the prospectus and compare the ML products with the corresponding products from Fidelity or Vanguard.
Small Sidebar: Details on TIPS
There is a piece from GE Asset Advisors that provides a good tutorial on TIPS. It covers the mechanics and discusses both the strategic and tactical uses of TIPS. In a world where there is the possibility of deflation, there are some interesting twists on TIPS. Incidentally, they are a favored asset of David Swensen.
Small Sidebar: The Once Noble CREF is No Longer a Hero
Funds like Black Rock Equity Index Fund CIEBX are rapacious in their greed and exploitation of the credulous, but I am almost as irked by CREF.
In the early days, CREF was a genuine leader in providing investment value. Accordingly, they won a place close to the heart of academia. Sadly, in the last ten years, CREF has exploited that trust, and it now charges fees that are indefensible.
The CREF Equity Index Fund expense ratio is 0.50%, and, while this pales in comparison to the Blackrock fees, it is still a stupid price to pay. You can get the same product from Vanguard or Fidelity for less than a third of this price.
The excess spread --- say 35bp to 43bp --- may not look like much, but at retirement time, when you have just 400bp to draw down to live on each year, it is at least 8.75% of your income. That is one hell of a tax!
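The arithmetic behind that figure is worth a one-liner: the excess fee comes straight out of the annual draw.

```python
def fee_tax(excess_bp, draw_bp=400):
    """Excess fee spread as a fraction of the annual draw-down."""
    return excess_bp / draw_bp

low = fee_tax(35)    # 35bp / 400bp = 8.75% of income
high = fee_tax(43)   # 43bp / 400bp = 10.75% of income
```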
I started a little resource page on MLPs, or Master Limited Partnerships. These form a very interesting asset class with attractive non-standard features, including very fat (and pretty stable) dividends and favorable tax treatment. These benefits spring from the tax law view that an MLP is a "wasting asset," but this theory may not apply to many MLPs --- except as a handy tax law fiction.
Sidebar: Performance in Volatility Regimes
"An out-of-sample back test indicates that switching styles according to market regime can be profitable. Specifically, momentum investing during the low-volatility regime and value investing during the high-volatility regime outperforms consistently and to a degree that appears profitable after accounting for transaction costs." ---according to Mebane Faber in a piece at World Beta which has other assertions that would be interesting to test in the pursuit of cheese.
Sidebar: Wisdom and Forecasting
This chart is from the CXO review of Philip Tetlock's Expert Political Judgment: How Good Is It? How Can We Know?, which describes research on the forecasting abilities of political experts. It seems quite plausible that Tetlock's analysis applies to almost any kind of expert view. In the end all you get is a checklist of issues to ponder, but it is a starting place.
Day 20: Rolling Statistics and Momentum Strategies
For 14 November 2012
It never pays to ignore what you know, so any forecast, strategy, VaR level, or performance measure needs to be constantly updated as new data arrive. There are also nice tools in S-Plus that make this easy. The main tool is aggregateSeries(). This is a very general tool that makes it convenient to do "rolling anything."
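If you want the flavor of "rolling anything" without S-Plus, the idea is just a window and a function. Here is a poor man's version in plain Python with made-up prices (this is an illustrative stand-in, not the aggregateSeries() API):

```python
import statistics

def rolling_apply(series, window, func):
    """Apply func to each trailing window of the series --- rolling anything."""
    return [func(series[t - window:t]) for t in range(window, len(series) + 1)]

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]
roll_mean = rolling_apply(prices, 5, statistics.fmean)
roll_sd = rolling_apply(prices, 5, statistics.pstdev)
```

Swap in any statistic --- a beta, a VaR estimate, a Sharpe ratio --- and you have the rolling version of it.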
Moving Averages --- Simple Minded, but Not Silly
We'll also look at one of the most ancient tools of time series analysis, the exponentially weighted average. This is an all-purpose tool that is often used in combination with other, more sophisticated, time series tools. One of the apparent difficulties in the use of moving averages (simple or exponential) is that one has to pick a "window" size. We'll discuss some ideas for dealing with this problem, including "Foster's Trick." This is something that it would be very worthwhile to explore in a final project.
One trend line story that seems to have some non-idiot following is the slope of the 16 or 21 day simple moving average. Again, as a project, you could contemplate testing this. Just thinking of the simplest things first, how about a simple ordered univariate strategy based on this theme?
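The two tools just mentioned are short enough to sketch together: an exponentially weighted average, and the up/down signal from the slope of a trailing simple moving average. The toy series and the window of 16 are illustrative; the same code works with 21.

```python
def ewma(series, lam=0.1):
    """Exponentially weighted moving average with smoothing weight lam."""
    out = [series[0]]
    for x in series[1:]:
        out.append(lam * x + (1 - lam) * out[-1])
    return out

def sma_slope_signal(prices, window=16):
    """+1 when the trailing SMA is rising, -1 when falling, 0 otherwise."""
    sma = [sum(prices[t - window:t]) / window for t in range(window, len(prices) + 1)]
    return [0] + [(1 if b > a else (-1 if b < a else 0)) for a, b in zip(sma, sma[1:])]

prices = list(range(100, 130))      # steadily rising toy series
smooth = ewma(prices, lam=0.2)      # lags below a rising series
signals = sma_slope_signal(prices)  # all +1 once the window fills
```

An ordered univariate strategy based on this signal is then just: be long when the signal is +1, flat (or short) otherwise, and tabulate the returns.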
MACD and Other Price Level Favorites
MACD is goofy in some ways but it has fascinated me for a long time, because it so often looks like "it works." Unfortunately, formal tests with individual equities mostly come back with the verdict: "No extra cheese."
I keep looking for the context where MACD really does pay the rent. My sense is that it has a good chance of working well in currencies, and in style spreads --- say small cap value vs small cap growth. It might also be useful in making guesses about sector rotation. Exploration of one or more of these ideas might make a good project.
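For anyone testing these ideas, the MACD construction itself is just two EMAs and a difference. Here is the classic 12/26/9 version in plain Python on a toy uptrend (the parameters are the conventional defaults, not a course recommendation):

```python
def ema(series, span):
    """Exponential moving average with the usual alpha = 2/(span+1)."""
    alpha = 2 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal_span=9):
    """Classic MACD: fast EMA minus slow EMA, plus its EMA signal line."""
    line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return line, ema(line, signal_span)

prices = [100 + 0.5 * t for t in range(60)]  # steady toy uptrend
line, signal = macd(prices)                  # line goes positive in an uptrend
```

The usual trading rule reads crossings of `line` over `signal` as buy/sell triggers --- which is exactly the kind of claim that the formal tests mentioned above tend to deflate for individual equities.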
Finally, we will look at a resource page on momentum strategies. It has a CitiGroup FX Advisors presentation, and summaries for a few leading academic papers on momentum. The CitiGroup piece is pretty lame by the standards of 434, but it is worth a brief look. If nothing else, it suggests that at least some of the competition is not to be feared.
Sector-link: XLY etc...
Sidebar: Tops and Bottoms Identified by Sector Leaders?
A random web wag suggests that at the market tops the leading sector is consumer staples (say as reflected in XLP) and at market bottoms the leading sector is consumer discretionary (say as reflected in XLY). Naturally, this case is built on recent experience, and it does make modest sense. Is it something you'd like to bet on? I can't decide, but it is something that I'll keep in mind.
In the recovery from the March 2009 bottom, the consumer discretionary stocks certainly did well, but it is a matter of checking to see if they outperformed technology. Of course, with AAPL you had it both ways, and that has been a marvelous holding so far this year.
One of the things that I find interesting in this analysis is the use of the XLY/XLP ratio. There are lots of other contexts where such an idea may be just what one needs to stir the "missing non-linearity" into the model. This ratio was particularly jammed around in 2009 because of the wild swings in oil price.
There is a related theory that says that of all the goodies out there that might be counted on for reliable trending --- retail is the king. If you are looking for leading indicators, the retail index RLX may be a good shot.
Sidebar: Yet Another Black Rock Insane Bandit Fund
The Black Rock Equity Dividend Fund (class A), MDDVX, has a front-end charge of 5.25% and a turnover ratio of 2%. If you like this asset, just check the SEC filing, get the holdings, and voilà --- 98% replication. This is a dominated asset with 750M under management. They should be ashamed of themselves. Oh, by the way, they have a 100bp expense ratio, and --- a Morningstar rating of 4 stars --- which seems --- well, perhaps not right.
What a bizarre situation! Well, small turnover may be a virtue in some people's view, but why should anyone pay an annual 100bp for the experience?
Sidebar: Markets and Mindshare
The size of the world's bond market (90T?) and the world's equity market (65T?) are almost comparable in a "Fermi sense." Historically, returns on equities have clobbered returns on bonds. Moreover, bonds are hardly risk free. For example, the bonds of the Weimar Republic became worthless, but the stocks did not. On a less dramatic scale, you can have a very rocky road with even a 30 year US Treasury --- a 1% rise in interest rates can cost you perhaps 25%, depending on the initial interest rate. So, why are there so many people, businesses, and governments who are happy to own bonds? How does this fit with our "counterparty theory" of strategic investing?
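The "1% rise can cost you perhaps 25%" claim is easy to check with discounted cash flows. Here is a sketch for a hypothetical 30-year annual-pay par bond at a 3% starting yield (the coupon and yield are illustrative; losses get closer to 25% for lower-coupon or zero-coupon bonds, where duration is longer).

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Price of an annual-pay coupon bond as the sum of discounted cash flows."""
    c = face * coupon_rate
    pv_coupons = sum(c / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + yield_rate) ** years

p0 = bond_price(100, 0.03, 0.03, 30)   # par bond: price = 100
p1 = bond_price(100, 0.03, 0.04, 30)   # yield rises by 1%
loss = 1 - p1 / p0                     # roughly a 17% hit at these parameters
```

A 3% coupon cushions the blow somewhat; a 30-year zero at the same starting yield loses about 25% on the same move.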
Sidebar: Stylistic Features of SP500 Returns
Wilhelmsson (2006) also deserves some class time. One nice feature of the paper is a breakdown 1995-2000 and 2000-2005 of the fundamental features of the SP500 returns. These are very useful for calibration of one's intuition about returns --- and hence for "Fermi" calculations. This is his Table 3, and it is not his main message, of course. The main message is that it pays to deal with kurtosis (fat tails), but may not pay to deal with skewness (asymmetry about zero). One of the take-aways is that GARCH(1,1) driven by shocks that have the t-distribution is the best of breed given the method of evaluation. We may not buy that method, but the conclusion may still hold up for us.
Sidebar: Be Short Vol and Expect Sad Days
Sidebar: The "New ETF Report"
There is a big conversation about a Kaufman report on the market impact of ETFs. The full story is far from known, but the report does put many interesting issues on the table. As always, I am curious about the cracks. If the report becomes conventional wisdom, then where do we find the cheese?
In general I have a hard time believing that ETFs have a market impact. The exceptions are the commodity-related ETFs like GLD, which made gold much easier for individuals and institutions to hold. It is possible that the creation of this ETF had an impact. Similarly, the creation of ETFs for MLPs or for convertible bonds may have had an impact; these are thin markets, and when new customers can be created for the underlying, one may see big changes.
Sidebar: Financial (vs Economic) Bloggers
It is easy for an academic to endorse various economic bloggers. They are "members of the tribe," and even if tribe members disagree from time to time --- all is forgiven between brothers. It is a little different with "financial bloggers," by whom we mean those rough-hewn folks who have the gall to mention individual stocks. Mostly these folks are beyond the academic pale.
Such academic timidity has its benefits. One of the favorite taunts on the fourth floor of JMHH is "What part of asymmetric loss function does he not understand?"
So, with some temerity, I suggest you take a look at a recent column at The Reformed Broker blog. I find many pieces there to be informative, and I was especially impressed by the brave piece about TheStreet.com.
In general, if you want a quick view of what may be happening in the world of financial blogs, you might take a look at Abnormal Returns. It is not as good as it was, but it is still useful.
Day 19: Comparing GARCH Family Members
For 12 November 2012
Now that we have a substantial family of GARCH models, how should we choose between them? The plan is to first consider some structural features, especially the connection to our old bête noire --- stationarity. There is also a Wold representation for GARCH models, but it needs a bit more linear algebra than we may be ready to spend.
One useful way to compare the many animals in the GARCH zoo is by looking at a plot called the "news impact curve." Given two models, we first find appropriate values for the parameters of the models, say by fitting both to the same data. We then fix those coefficients and consider the conditional variance as a function of the innovation epsilon_t.
This function tells us how the two models will differ in the importance that they attach to a given shock. This measure is not perfect, since it speaks to just the impact of one shock. Still, it seems to be informative, and it is easy to implement (see e.g. S-Code Ex.)
The picture we get will give some intuition about which models "care" most about a negative shock versus a positive shock. Still, the pictures are not perfect, since it is not always easy to say which parameter values are "comparable" when one looks at radically different models. One way to make progress is to fit both models to the same data. Unfortunately, this begs another question; namely, the question of model specification.
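Here is a sketch of the construction in plain Python, comparing GARCH(1,1) with EGARCH. The parameter values are illustrative, not fitted; the point is that the GARCH curve is symmetric in the shock while the EGARCH curve (with a negative leverage term gamma) attaches more variance to negative shocks.

```python
import math

def garch_news_impact(eps, omega=0.00001, alpha=0.08, beta=0.9):
    """GARCH(1,1) news impact: sigma^2_t as a function of epsilon_{t-1},
    with the lagged variance held at its unconditional level."""
    sigma2_bar = omega / (1 - alpha - beta)
    return omega + alpha * eps ** 2 + beta * sigma2_bar

def egarch_news_impact(eps, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95,
                       sigma_bar=0.01):
    """EGARCH news impact from the log-variance recursion, evaluated at the
    standardized shock z = eps/sigma_bar. The gamma*z term is the asymmetry."""
    z = eps / sigma_bar
    log_s2 = (omega + beta * math.log(sigma_bar ** 2)
              + alpha * (abs(z) - math.sqrt(2 / math.pi)) + gamma * z)
    return math.exp(log_s2)

shocks = [-0.02, -0.01, 0.0, 0.01, 0.02]
garch_curve = [garch_news_impact(e) for e in shocks]    # symmetric, min at 0
egarch_curve = [egarch_news_impact(e) for e in shocks]  # bigger for eps < 0
```

Plotting the two curves over a grid of shocks gives the picture described above.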
Next, we consider non-normal drivers of the GARCH model. This is an important issue that makes the GARCH model much better than it would be otherwise. Still, the trick is old, going back to Bollerslev (1986).
Finally, we dig into a paper of Hansen and Lunde which compares some 330 GARCH models. This is a heroic effort which we will be delighted to cover only from the summaries. Still, there is room to note a fundamental philosophical point. To compare, one needs a criterion. How is one to choose among the potential criteria? My favorite is fitness for use. This is by far the most sensible criterion, but it does put a lot of questions to the modeler --- most of which do not have comfortable answers.
I tend to "sell" the take-away from Hansen and Lunde to be that "you don't need to look much further than GARCH(1,1), or perhaps EGARCH(1,1)."
I do believe this, but it is a little sophistic to argue this just from exercises like that done by Hansen and Lunde. The problem is the criteria for judging the models. Hansen and Lunde use a bundle of them, but eight inadequate measures are not all that much better than one.
Also, the idea of ranking a zillion pretty similar models and then looking at the ranks --- well, that is clever, but it is also a bit sophistic. We'll also look at the paper of Wilhelmsson for a bit more perspective.
Alternative Features of Merit
There is another principle that I like. You could call it simulated verisimilitude. You fit the model, then simulate data from the model, then do EDA on your simulated series and your original data. If the EDA (and other logical) comparisons are not pretty close, then you have good reason to be unsatisfied with your model.
It is amazing to me how seldom this method is used by model builders in operations research, scheduling, logistics, transportation, etc. Those guys very often use models that have very little relation to the stylized facts of their business. In financial time series, we do at least have this part of the drill down pretty clearly.
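As a sketch of the verisimilitude drill in Python (with a simulated GARCH(1,1) standing in for "your fitted model," and arbitrary illustrative parameters), one might check whether the simulated series reproduces two stylized facts of real returns --- heavy tails and volatility clustering:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85):
    """Simulate a GARCH(1,1) return series with normal innovations."""
    r = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)     # start at unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float((z**4).mean() - 3.0)

def acf1_of_squares(x):
    s = x**2 - (x**2).mean()
    return float((s[:-1] * s[1:]).mean() / (s**2).mean())

sim = simulate_garch11(20000)
print(excess_kurtosis(sim) > 0.1)     # heavy tails, as in real returns
print(acf1_of_squares(sim) > 0.03)    # volatility clustering in the squares
```

In practice one would compute the same EDA summaries for the original data and ask whether the simulated values come close.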
Last Homework! This homework provides experience using a GARCH model to engage something that is of bottom line interest --- the relationship of risk and reward. As it is presented, it is reasonably straightforward. Nevertheless, if you have time, you can use it to do a little exploring for your final project. It also provides a reminder of the ongoing importance of basic regression and EDA studies.
Sidebar: Role of Subjective Judgment in Risk Models
The NY Times article "In Modeling Risk, The Human Factor was Left out" adds a bit to our discussion of VaR models, especially those models that ignore "known but unobserved risks" such as the historical "peso problem" or the more recent "agency problems" of CDOs.
Sidebar: Leverage and Lifetime Strategies
It has occurred to many people --- especially my colleague Dean Foster and his co-authors --- that historically one would have done well by being leveraged. We also saw this when we discussed Kelly betting, even though we cast doubt on the wisdom of playing "full Kelly". The naive expected return maximization argument that irked Samuelson turns out to have a more sophisticated "diversification over life" argument. This has been developed in a recent paper which is worth a quick look.
Election Night Epiphany
For 7 November 2012
This was a "Moneyball" election that should be recorded in history (along with other features) as a FULL EMPLOYMENT ACT for statisticians.
Sure, it may have cost me 60-80K in taxes over 4 years, but I am not bummed at all. My former colleague Ben B. has been very, very good to me. Although he will leave shortly (to get paid!), I trust that life will not be too different under whomever the re-elected President Obama picks as Ben's replacement.
We'll take a look at InTrade prices for the Presidential Race.
Day 18: The "Leverage Effect" and the GARCH ZOO
For 8 November 2012
The GARCH model takes a major step toward a realistic statistical view of the noise in asset return series. Still, it is not perfect. In particular, the plain vanilla GARCH model responds symmetrically to either a negative or a positive shock. Historically, it is the case that a large negative shock has a more substantial impact on subsequent volatility than does a positive shock. A plain vanilla GARCH model cannot capture this phenomenon.
Fischer Black (partner with Myron Scholes in the Black-Scholes formula) called this phenomenon of asymmetry the "leverage effect" and the name has stuck. Black gave an interpretation of this empirical phenomenon from the point of view of the firm's debt to equity ratio --- one traditional measure of leverage.
Since observed volatility is not well explained by debt to equity (except in certain extremes), the name "leverage effect" does not seem to be a good one. Nevertheless, the effect is real. When you fit any of the models that test for a "leverage effect" you are likely to find that it is significant.
Black's Leverage and Modigliani-Miller
Black's leverage story may seem to contradict the Modigliani-Miller Theorem. If it did, it would not particularly bother me. Still, the problem is worth pondering. I'll argue the view that there is no contradiction because the Modigliani-Miller assertion is about value, and Black's leverage story is about volatility. Now volatility does affect value, though subtly --- through volatility drag, from our perspective --- but surely not enough to make us regard the MMT and Black's leverage effect as contradictory.
Incidentally, the venerable Wikipedia has a remark on the MMT that I think is particularly wise: "Since the value of the theorem primarily lies in understanding the violation of the assumptions in practice, rather than the result itself, its application should be focused on understanding the implications that the relaxation of those assumptions bring."
We'll then look at the models that attempt to cope with the so-called leverage effect. Most of our attention will be given to Nelson's EGARCH model, or exponential GARCH model. This is the "next step" model that has rather reasonably stood the test of time. It is used by both academics and practitioners, though it does not provide nearly as big an increment to our toolkit as GARCH itself.
After EGARCH there were many other models that attempted to deal with this or that stylized fact that is not well modeled by GARCH. Naturally, one eventually faces a certain law of diminishing returns. Still, it pays to know about at least a few of these.
You can check out some relevant examples in S-Plus. It does indeed turn out that when you fit a model like EGARCH to the returns of an individual stock, you are very likely to get a significant value for the leverage parameter.
It's not easy to say what this really means to us in an investment context, but it is certainly worth thinking about.
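For concreteness, here is a one-step Python sketch of Nelson's log-variance recursion. The parameter values are hypothetical, with the leverage coefficient gamma chosen negative so that bad news raises volatility more:

```python
import numpy as np

# One step of Nelson's EGARCH(1,1) log-variance recursion:
#   log s2_t = omega + beta * log s2_{t-1} + alpha * (|z| - E|z|) + gamma * z
# A negative gamma makes negative shocks raise volatility more.
def egarch_step(log_s2, z, omega=-0.1, beta=0.95, alpha=0.10, gamma=-0.08):
    e_abs = np.sqrt(2.0 / np.pi)      # E|z| for a standard normal z
    return omega + beta * log_s2 + alpha * (abs(z) - e_abs) + gamma * z

after_down = egarch_step(0.0, -2.0)   # log-variance after a -2 sigma shock
after_up   = egarch_step(0.0, +2.0)   # ... after a +2 sigma shock
print(after_down > after_up)          # the asymmetric "leverage" response
```

Because the recursion works on the log of the variance, no positivity constraints on the coefficients are needed --- one of the design virtues of EGARCH.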
Leverage Effect not a Leverage Effect
For years I have argued that Black's interpretation of the "leverage effect" as an actual leverage effect didn't really make sense, and I figured that everybody knew this.
Turns out that there was still a paper to be written, so if you want (substantially) more than what comes with our Fermi calculations, you can look at an informative 2000 paper of Figlewski and Wang.
Sidebar: Risk and Reward
The big question is "Do you get compensated for taking incremental risks --- or is it the case that for any given asset incremental risk (as measured by "volatility") is an a priori bad thing?"
In the classical stocks versus bonds story, we see historically a very reassuring compensation for risk taking, but through time and within one asset class the story comes close to reversing itself. You'll get to explore this in HW9 which will be the last homework (though there are intermediate deliverables for your final project).
Sidebar: TIPS Yields in a Crisis
Krugman's blog story about this argues that the spike in TIPS yields post Lehman's bankruptcy was due to liquidity. Keep in mind that vanilla Treasury yields were dropping during this time as part of the traditional flight to quality. I can't quite get that TIPS lack "quality" --- but perhaps in a mad-rush-for-the-doors way they do.
Anyway, the next time that it looks like financial Armageddon, you might (1) first slam into Treasuries, (2) swap over to TIPS after about 5 months, then (3) after about 5 more months sell your TIPS and go long equities --- first favoring the most volatile assets (emerging market, small cap growth, financials) then moving to large cap growth and energy. In the great recession of 2008 we saw that financials did have a nice up-tick as Armageddon was taken off the table, but then there was a stall or fade as a more genuine recovery got under way. The amazing super winners were consumer discretionary.
History may not repeat itself, but at least this gives you a baseline to squirm around.
For a simpler "TIPS rule" how about this? It's hard to be disappointed if you buy TIPS with real yield above 3% (and you should be pretty happy with real yields above 2.5%). Alas, it may be quite some time before we see TIPS with a 3% real yield.
Incidentally, here is a simple puzzle. For very short durations (say 3 months), there should be almost no difference between the total return on TIPS and Treasuries of the same duration. The real benefit of TIPS is that you can hold a long duration asset with no credit risk, no call risk, and a guaranteed real return.
Sidebar: The Pollyanna Fit
This is a variation on a point made in the Granger article that we discussed in class. Take any old time series of returns and replace all of the "big moves" with something more tame. Then do your favorite fit and see if you get a wad of extra significance. Here are some things to ponder:
- Can this give you insight on the "way to bet" in normal times?
- Do the "big move days" appear like a Poisson process, or do they have more clumping?
- What are good candidates for replacing the data on the "big move days"? One candidate is the general mean.
- If you use a Pollyanna fit to organize your bets, what are your risks? The thesis is that you can "break even" on the big move days --- which you give up on modeling. This can be true or false.
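Here is a small Python sketch of the Pollyanna idea on a synthetic contaminated series; the contamination rate and scales are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy return series: mostly quiet days plus occasional "big move" days
r = 0.0005 + 0.01 * rng.standard_normal(2000)
big_days = rng.random(2000) < 0.02
r[big_days] += 0.05 * rng.standard_normal(big_days.sum())

def pollyanna(x, k=3.0):
    """Replace every move beyond k standard deviations with the overall mean."""
    out = x.copy()
    out[np.abs(x - x.mean()) > k * x.std()] = x.mean()
    return out

p = pollyanna(r)
n_replaced = int((p != r).sum())
print(n_replaced > 0)          # some big-move days were tamed
print(p.std() < r.std())       # ...so the tamed series is much calmer
```

With a real series, the next step would be to run your favorite fit on both the raw and the tamed data and compare the significance of the estimates.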
Sidebar: MERFX --- Interesting Theme, Bad Expense Ratio
"The fund normally invests at least 80% of assets in the equity securities of companies which are involved in publicly announced mergers, takeovers, tender offers, leveraged buyouts, spin-offs, liquidations and other corporate reorganizations."
This is perhaps not a bad idea if one is after a decent Sharpe ratio. You have less than market risk (only losing 3% or so in the annus horribilis 2008) and an expected return of perhaps 8% to 10%. You can also back test the theme with simple rules, and it looks like there is a little cheese here. Still, the good folks at the Merger Fund don't work for free, they snack on an expense ratio of 1.45%. If you like the idea, why not do some reverse engineering. It seems a terrible pity to waste 145 basis points.
The classic way to play this theme is to buy the company being acquired and sell short the company doing the buying. This stirs a lot of leverage into the pot, and when things are 'normal' there is useful risk-adjusted cheese in this game.
Day 17: ARCH and GARCH
For 5 November 2012
The ARIMA models underpin a large part of the theory of time series, but they have an Achilles heel when applied to financial time series --- the conditional variances of ARIMA models do not change as time changes.
For financial time series, this is in violent contradiction to reality.
One of the most fundamental stylized facts of asset returns is that there is "volatility clumping" --- periods of high volatility tend to be followed by periods of high volatility, and periods of low volatility tend to be followed by periods of low volatility.
The ARCH and GARCH models were introduced to remedy this situation, and they have led to models that are much more representative of financial reality. Our plan will be to develop these models as they evolved, first looking at why they are really needed. We'll also look at the tools in S-Plus for handling GARCH models, either in simulation or in fitting.
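Before fitting anything, it helps to see volatility clumping emerge from the recursion itself. A minimal Python sketch of Engle's ARCH(1) (illustrative parameters, normal innovations) shows autocorrelated squared returns, while an iid series of matched scale shows none:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_arch1(n, omega=0.7, alpha=0.3):
    """Engle's ARCH(1): today's variance depends on yesterday's squared return."""
    r = np.empty(n)
    prev_sq = omega / (1.0 - alpha)    # start from the unconditional variance
    for t in range(n):
        sigma2 = omega + alpha * prev_sq
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        prev_sq = r[t] ** 2
    return r

def acf1_of_squares(x):
    s = x**2 - (x**2).mean()
    return float((s[:-1] * s[1:]).mean() / (s**2).mean())

arch = simulate_arch1(20000)
iid = arch.std() * rng.standard_normal(20000)   # same scale, no clustering

print(acf1_of_squares(arch) > 0.15)   # squared returns autocorrelated: clumping
print(abs(acf1_of_squares(iid)) < 0.05)
```

This is precisely the feature a constant-conditional-variance ARIMA model cannot produce.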
Finally, we'll discuss some original sources, notably a review article by Robert Engle called GARCH 101. The notation used in this piece is no longer the standard notation, and some bits are best taken with a grain of salt. In particular, given what we know now, Engle's discussion of VaR is "optimistic" to say the very least. Still, the piece is instructive.
Another paper we might discuss briefly is Engle's paper with Andrew Patton "What Good is a Volatility Model?" Ironically, this paper has the "tell" that I have mentioned in class, namely it uses the Dow (OMG!) as its illustrative price series. I don't know what motivated this choice, and I find it a little less serious than I would have hoped.
A positive feature of the paper is that it gives a brief list of "stylized facts," a very important notion to which we will start paying more systematic attention.
Sidebar: "Will We See a January Effect This Year?"
Mark Hulbert had a piece on this in 2008 and it may be worth thinking about for this year. His thesis was that the January Effect (i.e. bigger small cap returns than big cap returns from 12/15 to 1/15) is more pronounced in years where the market has been strong going into December. Naturally, post the Jacobsen and Zhang paper that we looked at last time, we know that there is at least some revisionist thought about the January Effect. Still, it is interesting, and if you only believe in plain vanilla beta, you would expect small caps to out-perform large caps from 12/15 to 1/15 this year if the market continues to do well going into December.
Sidebar: TIPS Spread and Temporary "Insanity" of November 2008
Yes, this was a very strange situation and it did not last. There were a few months of YoY deflation, but the TIPS real yield did normalize. The structure of TIPS is such that newly minted TIPS are protected against both deflation and inflation. Typically, seasoned TIPS have had some inflation change to their "par value" and this can be clawed back if there is deflation --- but the claw back cannot send the "par value" below the original issue price.
Historical norms suggest that a 2 to 2.5 percent real return may be a natural level, so the current negative (or near negative) real yield is quite strange.
Sidebar: TrimTabs, Money Flows, and "Bottoms"
"When public companies are net buyers while individuals are heavy net sellers, the market is making a bottom": This is an interesting assertion of the TrimTabs Liquidity Theory. The irksome part of this little observation is that "making a bottom" can take a lot of time and money. From the publication of this piece, the bottom did not present itself for five more very long months --- and almost as much value destruction as had been booked from November 2007 (the previous top) to November 2008 (the date of the quote).
Incidentally, it is interesting to look at Brazil during the period of the crash and recovery, November 2007 through June 2009. The US market made a pretty sharp turn in March 2009, which we now view as the "bottom," but Brazil and other EMs started the up-turn a solid month earlier. Yes, markets are absurdly coupled, but still --- it looks like there may have been a useful signal there. Participants in the Brazilian market decided earlier than others that the world was not coming to an end.
Sidebar: Mean Reversion vs Momentum
Essentially every quantitative strategy depends on a view that is either "trend following" --- that is, a momentum strategy --- or "trend reversal" --- that is, a mean reversion strategy.
Between the two, there is always a finely pitched battle. It seems to me that the momentum story has more reported successes. Still, there are situations where the mean reversion case can be made. One of these was mentioned in the Granger article covered last time.
As a variation on the strategy reviewed there, take any set of say 100 stocks. Now on each day, buy the 3 that are off the most at the close, and hold these stocks for 10 days. Now, compare this strategy to the comparable market buy-and-hold strategy. How do you do? Here, by the way, you might take your transaction cost to be 5 basis points on each leg of the trade. This is realistic if your trades are small enough to avoid market impact cost but large enough so that you are essentially paying one bid-ask spread for a round trip.
The biggest potential "bug" in such a study is that your real-world "buy on close" price may have some slippage from the "print" close that you find in the historical data.
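A skeleton of such a backtest might look like the following Python sketch. The price history here is a placeholder random walk --- substitute your own closes --- and the 5 basis point cost per leg is the assumption from above:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_stocks = 750, 100
cost = 0.0005                  # assumed 5 bp per leg of the trade

# placeholder price history: geometric random walks (swap in real closes)
rets = 0.0003 + 0.015 * rng.standard_normal((n_days, n_stocks))
prices = 100.0 * np.exp(np.cumsum(rets, axis=0))

k, hold = 3, 10                # buy the 3 biggest losers, hold 10 days
pnl = []
for t in range(1, n_days - hold):
    day_ret = prices[t] / prices[t - 1] - 1.0
    losers = np.argsort(day_ret)[:k]             # off the most at the close
    fwd = prices[t + hold, losers] / prices[t, losers] - 1.0
    pnl.append(fwd.mean() - 2.0 * cost)          # round trip = two legs

pnl = np.array(pnl)
print(len(pnl))
print(np.isfinite(pnl).all())
```

On this synthetic tape the average P&L is pure noise; with real data, comparing `pnl.mean()` to the comparable buy-and-hold return is the whole point. Note the sketch "buys" at the same close used for the ranking, which is exactly the slippage worry mentioned above.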
Day 16: Switching Regressions, Non-Linearity, Forecastability, and Cost-Benefits of Subjectivity --- and, oh, Tells!
For 31 Oct 2012
Because of Hurricane Sandy, we have to compress Days 15 and 16. I'll keep the "blog" in its usual reverse order, but I'll do the material of Day 15 first. NO HOMEWORK IS DUE ON MONDAY NOVEMBER 5. We'll have just two more homeworks (No. 8 and No. 9). The rest of our time will be focused on project design, generation of project ideas, etc. The lack of HW this week will, I hope, encourage you to do some reading and start contemplating your final projects. I'll spell out some of that plan today, but I'll be more systematic later.
Still, our main goal is to review a classic discussion paper by Clive Granger, "Forecasting Stock Market Prices: Lessons for Forecasters."
Granger shared the 2003 Nobel Prize in economics and his contributions find few equals in the world of econometrics. Granger's old paper has benefits for us even though at this point it is rather dated. One benefit is that it can be 95% understood at the level of Stat 434. Second, and more persuasively, it suggests some potentially useful ideas that even now have not been fully explored.
As a caveat, Granger's introductory comments on martingales and the EMH are way off the mark. For example, if asset prices were martingales as the introduction considers, then only risk-seeking gamblers would invest, and this is not the case.
This part of the paper can be fixed with just a small correction. For example, one can use a model I mentioned earlier. Specifically, the ratio of an asset price to the market price may be more feasibly viewed as a supermartingale, which you recall is a "bad" game. This makes the assertion logical. Unfortunately, it still remains untestable because even the remote possibility of a very bad outcome can upset the whole apple cart.
There are many smaller problems, such as the uncomfortable casualness with which dividends are discussed. One wonders a bit whether Granger got the arithmetic right.
At other places in the paper, we have to be concerned about data quality, or logical stationarity, or data relevance. For example, one of the papers that Granger discusses uses market returns back to 1896. For me, this is just too far past the "use by date".
Similarly, at one place Granger looks at transaction costs of 0.5% or even 1%. Nowadays, this is a silly level of transaction costs for the assets in question --- except in the remotely relevant situation of market impact costs. In more common (but still special) situations, you are even paid to trade; that is, you make money trading, even if you just break even on the trades. This sounds weird, but I will explain. It is a little bit like being a shill in a poker game. Sidebar: The Wiki piece on shills looks only at the negative side. There really is a positive side too!
Nevertheless, if anyone is looking for advice about how to have better luck forecasting asset returns, Granger's piece is a very sensible place to begin... for more, look at the more recent papers that cite this classic.
Finally, we can see more clearly now that Granger did not set up his analysis in a way that was most likely to lead to success. He could have taken a hint from the CAPM: The market is a great sea on which individual stocks bobble up and down with the waves.
One has much more chance for predictability if one removes the effect of the overall market as much as possible. The easy way to do this is to always look at spreads. If you now take each of the ideas in Granger's paper (such as switching regressions) and you apply that idea to the spreads --- well, now you have a chance for cheese.
"Sell in May" and 317 Years of Monthly Returns
There is a recent paper that cobbles together a 317-year UK monthly return series and then takes aim at some of the favorite calendar anomalies.
In particular they note: "Winter returns – November through April – are consistently higher than (negative) summer returns, indicating predictably negative risk premia. A Sell-in-May trading strategy beats the market more than 80% of the time over 5 year horizons."
They also note: "Studying the Sell-in-May effect is interesting, as it is quickly evolving as one of the strongest anomalies. It challenges traditional economic theory, as it suggests predictably negative excess returns."
On page 8, you will find a time series regression for the detection of a month effect. They quite pleasantly let the intercept be the estimated January return and then have dummy variables for the remaining eleven months so they can capture the difference in the return for those months.
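The mechanics of that dummy-variable regression are easy to reproduce. In this Python sketch the monthly returns are synthetic, but the coefficient readings --- intercept equals the January mean, each dummy equals that month's difference from January --- are exact properties of OLS with dummy coding:

```python
import numpy as np

rng = np.random.default_rng(4)
years = 50
months = np.tile(np.arange(12), years)            # 0 = January
true_mean = np.where(months == 0, 0.02, 0.005)    # toy "January effect"
r = true_mean + 0.01 * rng.standard_normal(12 * years)

# design matrix: intercept plus 11 dummies (February .. December)
X = np.column_stack([np.ones(12 * years)] +
                    [(months == m).astype(float) for m in range(1, 12)])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)

jan_mean = coef[0]          # intercept = estimated January return
feb_diff = coef[1]          # dummy = February return minus January return
print(abs(jan_mean - r[months == 0].mean()) < 1e-8)
print(abs((jan_mean + feb_diff) - r[months == 1].mean()) < 1e-8)
```

The same design extends immediately to the Sell-in-May question by replacing the eleven month dummies with a single summer dummy.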
It is not easy to tell if it is news you can use, but it is worth reading --- either for fun or for a professional discussion of time series regressions and a very interesting data set.
Caveat: If I did not like Granger using data that went back to the 1890's, how do I feel about data that goes back to the 1690's? Well, I mainly feel amused. This is history, and it may teach us something about animal spirits that could continue to influence price behavior. Besides, it was wonderful to learn that Christmas began in 1837.
More Caveats: Read the very interesting footnote on page 11 about the speculative fever that overcame the UK markets in 1824 and the crash of 1826.
Note: Citation Searches
When you find an article that you like (or even one you don't like), you can find more recent articles that follow up on it by doing a citation search. This is typically a much more efficient way to find relevant research than just by looking up a topic. In particular, if you look up a topic that is too broad --- like forecasting --- almost no one can thread his way through the forest. Citation searches are a very powerful research trick.
Note: S-Plus Tools For Rolling Regressions
We may also discuss the tools that are available in S-Plus for dealing with dynamic regression. Rolling regressions and weighted rolling regressions are a staple in many of the Stat 434 final projects, but at this stage you can probably learn everything that you might need about these tools just by working through the code box example.
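If you want to see the bare mechanics outside S-Plus, here is a minimal Python sketch of a rolling OLS beta on synthetic data with a drifting true beta (window length and all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, window = 500, 60
mkt = 0.01 * rng.standard_normal(n)            # market excess returns
beta_path = np.linspace(0.8, 1.4, n)           # slowly drifting true beta
stock = beta_path * mkt + 0.005 * rng.standard_normal(n)

def rolling_beta(y, x, w):
    """OLS slope of y on x over each trailing window of length w."""
    betas = np.full(len(y), np.nan)
    for t in range(w, len(y) + 1):
        xs, ys = x[t - w:t], y[t - w:t]
        xc = xs - xs.mean()
        betas[t - 1] = (xc * (ys - ys.mean())).sum() / (xc**2).sum()
    return betas

b = rolling_beta(stock, mkt, window)
print(np.isnan(b[: window - 1]).all())   # no estimate before a full window
print(b[-1] > b[window - 1])             # the estimate tracks the upward drift
```

A weighted rolling regression follows the same pattern, with the window observations multiplied by (say) exponentially decaying weights.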
Note: Risk Free Rates
For a CAPM style modeling exercise, one needs a "risk free rate." Exactly which rate one might choose is open to debate, but 30 day treasury yields are usually appropriate. When you put any rate into the regression you will naturally have to make sure you are comparing apples and apples --- i.e. daily stock returns and daily risk free returns. To convert treasury yields to daily yields, you can use the conventional 360 day year. For data resources you have several options.
Sidebar: J. P. Morgan's Measure of Hedge Fund Long/Short Exposure
One of the metrics that is regularly reported in some of the sell-side research from J. P. Morgan uses a rolling CAPM type estimate to get an estimate of the net-long exposure of hedge funds. What they do is estimate a rolling beta for the aggregate returns of a collection of long-short equity hedge funds. Since these are supposed to be "hedged" we expect a beta less than one. If the funds are net short we would even expect a negative beta. What one gets in the end is a sequence of betas that do provide a measure of net exposure. Since this is supposed to be the "smart money," higher values of the index --- so more net long exposure --- would be a bullish indicator for the market.
They use a 21-day Rolling regression on daily hedge fund returns reported by HFR's index HFRX. The market surrogate is the MSCI World Index.
Sidebar: Russian Stock Market Closes for Week
This happens every year. Now here is the puzzle. The Russian ADRs like VIP will still trade in the US (and elsewhere). There will also be Russian and BRIC ETFs. This is a new circumstance, and it would not surprise me if a thoughtful consideration of a few years of data might turn up some ex ante cheese, even if statistical methods will not tell the whole tale.
Sidebar: Returns to Thinly Traded Stocks
A Forbes article (10/28/2010) notes that
"Ibbotson has studied stock returns back to 1972 and says thinly traded stocks beat highly liquid ones of equal market values in all four quartiles ranked by size. The spread ranged from 12 percentage points a year between the least and most actively traded microcaps to 2.8 points between the least and most popular megacaps."
This is an amusing thesis. It is not really a time series thesis, but it does suggest that if you have no reason to choose A over B, then you might choose the more thinly traded of the two. Here thinness would be measured by something like the average daily volume divided by the shares outstanding.
Sidebar: Regarding CAPM and Other Puzzles --- What Changes a Mind?
“Children do eventually renounce their faith in Santa Claus; once popular political leaders do fall into disfavor…Even scientists sometimes change their views….No one, certainly not the authors, would argue that new evidence or attacks on old evidence can never produce change. Our contention has simply been that generally there will be less change than would be demanded by logical or normative standards or that changes will occur more slowly than would result from an unbiased view of the accumulated evidence.” ---Nisbett and Ross (1980), quoted by Hong and Stein (2003).
Incidentally, this quote is consistent with the notion of confirmation bias which asserts that a person holding a view is more likely to be attentive to evidence that supports his view than evidence that does not. Confirmation bias is a feature of human psychology that has been demonstrated in a great variety of experiments.
Sidebar: Uses of Subjective Information
In science and engineering there is a tradition of working hard to minimize the subjective content of models and analyses --- but even there one has to admit that many design choices are based on subjective information. There is a phenomenon in economics called "physics envy" and this is one of the sources of pressure for analysts to minimize subjective input into financial and economic models. The downside is that this leads to more and more weight being placed on historical observations. As we know from the discussion of the Peso Problem, such back-looking empirical estimates may ignore some serious economic facts.
This brings us to the thorny issue of subjective input into models like those that are used in VaR calculations. It is clear to me that subjective input would have provided at least some improvement on the VaR models that have blown up over the last year. If one does advocate subjective input, it's a good idea to give a periodic review of the cognitive biases, which can be as real --- and as dangerous --- as the "path-focused objective myopia" which one might hope to ameliorate with the inclusion of at least some explicit subjective inputs.
Sidebar: The Old Bellwether Idea --- It's Now the Apple Tell
For the longest time people would look to GM (LOL!) and later IBM as "bellwether" indicators of the market. That is, these were viewed as leading indicators of the whole market. More recently Apple, Google, RIMM are the new "bellwethers" --- at least for the tech sector.
Is this baloney, or is it cheese? BTW, "cheese" is a 434 term of art that stands for "excess returns." This friendly term has not been used much this year, perhaps as an apology for past year abuses.
Day 15: Time Series Regression and Applications to CAPM
The plan mostly focuses on the natural extension of ordinary least squares regression (OLS) to financial time series. Still, there are new topics, such as the likelihood ratio test and the AIC criterion. We'll particularly look at AIC, AIC weights, and the way to use these to combine forecasts.
We'll look at the nuances of the model and its associated tests. We may also cover a MSFT/CAPM example that is bundled with Finmetrics, but you can just as well look at this by yourself.
One of the most famous models that fits into this context is the 1992 Fama-French Three Factor model. This is the model which for many (but not all) signaled the "death of the CAPM." To parallel Mark Twain's line, the rumors of the death of the CAPM may have been greatly exaggerated. Still, the true believers are starting to face a sea of troubles almost as tough as the ones faced by those who had a hard time with heliocentrism, though comparably sure experiments are much harder to come by.
The Wikipedia article on Fama describes the three factor model, and it also has useful links, including one to the original FF92 paper and a good humored one to a Dartmouth humor magazine.
If you do look at the original FF92 article you will see that there is a fair amount of technology there which we have not engaged. Still, with the tools we do have you can tell very similar stories. The basic tale is pretty robust. It's time to list it as one of our "stylized facts."
Maximum Likelihood and the ML Ratio Test
Basically all of the tests that you have seen in your statistics courses are obtained by one general method: they are all maximum likelihood ratio tests.
The computations behind these tests are a basic part of other statistics courses, but we still do well to review a bit of this theory. In particular, it gives us the chance to nail down the very fundamental notion of the likelihood. This is critical for maximum likelihood estimation, for the likelihood ratio test, and for other cool stuff like the AIC, which comes up next.
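As a reminder of the mechanics, here is a tiny Python sketch of a likelihood ratio test for a normal mean (testing mu = 0 against a free mean, with the variance profiled out by its MLE in each case); 3.841 is the familiar 5% critical value of the chi-square with one degree of freedom. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
x = 0.5 + rng.standard_normal(200)    # synthetic data with a nonzero mean

def normal_loglik(x, mu, sigma2):
    """Log-likelihood of an iid normal sample."""
    return (-0.5 * len(x) * np.log(2 * np.pi * sigma2)
            - ((x - mu) ** 2).sum() / (2 * sigma2))

# restricted model: mu = 0; unrestricted model: mu free.
# In each case the variance is replaced by its own MLE.
ll0 = normal_loglik(x, 0.0, (x**2).mean())
ll1 = normal_loglik(x, x.mean(), x.var())
lr = 2 * (ll1 - ll0)                  # ~ chi-square(1) under the null

print(lr > 3.841)                     # rejects the null at the 5% level
```

The same pattern --- twice the gap in maximized log-likelihoods, compared to a chi-square with degrees of freedom equal to the number of restrictions --- is what sits under the usual t-tests and F-tests.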
Akaike's Information Criterion and Model Averaging
The Akaike Information Criterion (or AIC) is perhaps the most widely applied criterion for model selection.
For my own account, I am not a huge fan of AIC. The main problem is that in many cases one assumes at least as much going in as one hopes to infer coming out. In pure model selection there may be some net gain in most cases, but, since there can be loss in some cases, it is not clear that one wins over all.
Still, the AIC is out there, and it mostly seems to point in the right direction. It's probably worth consideration in most contexts, provided that one does not get too carried away.
Model Averaging: A Practical Alternative to Model Selection
If you are interested in using your model as a forecast, you may be able to side-step the problem of model selection. Rather than simply choose model A or model B to make your forecasts, you can instead consider an appropriate average of the forecasts given by the two (or more) models.
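One concrete recipe uses Akaike weights: exponentiate minus half of each model's AIC difference from the best model and normalize. The log-likelihoods, parameter counts, and forecasts below are made-up numbers for illustration:

```python
import numpy as np

# made-up in-sample log-likelihoods and parameter counts for three models
loglik = np.array([-1050.0, -1048.5, -1047.9])
n_par  = np.array([2, 3, 5])

aic = 2 * n_par - 2 * loglik          # Akaike's criterion for each model
delta = aic - aic.min()               # differences from the best model
w = np.exp(-delta / 2.0)
w = w / w.sum()                       # Akaike weights: they sum to one

forecasts = np.array([0.010, 0.006, 0.002])   # each model's one-step forecast
combined = float(w @ forecasts)               # the model-averaged forecast

print(np.isclose(w.sum(), 1.0))
print(forecasts.min() <= combined <= forecasts.max())
```

Since the weights are positive and sum to one, the combined forecast is a genuine average and always lands between the most optimistic and most pessimistic of the individual forecasts.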
Foster's Trick For Model Averaging
There are many rules one can pose for averaging the forecasts given by a set of models. You could just take the simple average, or you could take a weighted average where the more accurate model is given a larger weight. A still more principled idea that I learned from Dean Foster is to use regression. Here one takes the forecasts of the models and regresses the observed returns against these forecasts. One can then use the regression coefficients as "weights" for the combined model. We use the quotes because the weights can be negative and need not add up to one, so this method is not strictly an averaging method.
Naturally this idea must be combined with good sense. The forecasts given by your original models are quite likely to be highly correlated, so this regression problem can be ill-conditioned. My advice would be to consider this as an exploratory tool. There is no reason at all why you cannot stir in your own judgment.
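Here is a sketch of the regression trick on synthetic forecasts (all data are simulated for illustration; note that the fitted "weights" need not be positive or sum to one):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
realized = 0.01 * rng.standard_normal(n)                  # observed returns
f1 = realized + 0.005 * rng.standard_normal(n)            # a decent forecaster
f2 = 0.3 * realized + 0.010 * rng.standard_normal(n)      # a weak, rescaled one

# regress realized returns on the two forecast series (with an intercept)
X = np.column_stack([np.ones(n), f1, f2])
w, *_ = np.linalg.lstsq(X, realized, rcond=None)

combo = X @ w                              # the "weighted" combined forecast
mse = lambda f: float(((realized - f) ** 2).mean())

# in sample the regression combination cannot do worse than f1 alone,
# since the coefficients (0, 1, 0) are among the choices available to OLS
print(mse(combo) <= mse(f1) + 1e-12)
```

Out of sample, of course, that in-sample guarantee evaporates, which is one more reason to treat the fitted weights as exploratory rather than gospel.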
The "Best" Criterion --- Fitness for Use
The model that is best is the one that "works" best for you. Ironically, this criterion is not often discussed. I have written a bit about this, and eventually I will write more. The whole notion of a model is one that deserves a richer --- and more philosophical --- view than is common in statistics courses.
Sidebar: "120/20 Good Buddy"
Among the strategies that now have the public ear are the 120-20 strategies --- leverage up 20% on the long side and offset this leverage by going short for an amount of 20%.
Naturally, such a strategy would be nuts --- unless you could pick winners for at least part of your up-position and pick mostly under-performers for your short position. As a retail investor, you would also face an 8% margin cost headwind on the 20% that you are leveraged and the headwind of any dividends to be paid on the downside.
Thus, for an individual investor replication of a 120-20 strategy is a non-starter.
As an institutional investor, your long position will cost LIBOR and a bit and your short position will pay LIBOR minus a bit, so for professionals the whole game becomes modestly feasible.
Thus, professionals have the opportunity to let you in on this game --- for a modest fee, of course. This is a good trick all by itself, but the 120/20 pitch has a ready audience. You can look at some of the recent pitches.
My own view is that these retail issues are not good deals, but I am open to arguments on the question. If you are looking for a final project, you might want to consider a project that plays off of these funds.
Sidebar: Unconditional Variance of the AR(2) Model
Alex Goldstein has kindly written up a derivation of the unconditional variance of the AR(2) model. His trick is to use a variation on the Yule-Walker equations, but not the Yule-Walker equations themselves. The calculation is actually not too bad, and it is certainly easier than the Wold decomposition approach that I suggested. Still, even this method shows that one does not want to look for a formula for the unconditional variance for the AR(3) model.
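For the record, here is a small numerical check (Python, with illustrative parameters) that the Yule-Walker-style equations and the standard closed-form variance of a stationary AR(2) agree:

```python
import numpy as np

phi1, phi2, sigma2 = 0.5, 0.3, 1.0   # illustrative stationary AR(2) parameters

# Yule-Walker-style equations for gamma0, gamma1, gamma2:
#   gamma0 = phi1*gamma1 + phi2*gamma2 + sigma2
#   gamma1 = phi1*gamma0 + phi2*gamma1
#   gamma2 = phi1*gamma1 + phi2*gamma0
A = np.array([
    [1.0, -phi1, -phi2],
    [-phi1, 1.0 - phi2, 0.0],
    [-phi2, -phi1, 1.0],
])
b = np.array([sigma2, 0.0, 0.0])
gamma0, gamma1, gamma2 = np.linalg.solve(A, b)

# Closed form for the unconditional variance of a stationary AR(2)
var_closed = (1 - phi2) * sigma2 / ((1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2))
```

Solving the little linear system and plugging into the closed form give the same number, which is a comforting sanity check on any derivation you write up.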
Sidebar: CRB and Dollar Index
Here the picture looks compelling (CRB up, Dollar Index down) but we know that it is easy to get hypnotized by price charts. Nevertheless, there is some logic to this if commodities are indeed "priced" according to some international basket of currencies. What needs thinking through is that the dominant asset in the CRB is oil and at least in principle it is priced in dollars. Gold would be another story.
Sidebar: VXX and Going Long or Short Volatility
VXX is an ETN that is supposed to track a perpetual 30 day VIX futures contract, which is something that exists only in theory. Naturally there are fees, expenses, and the possibility of incompetent tracking. Still, if you don't have a futures account, or can't stand the nose bleed of trading futures, then this still gives you a way to 'express your view' about implied volatility. The contract is relatively new, but it is sponsored by Barclays --- one of the ETF/ETN good guys --- so it may turn out to be a useful trading asset.
As we saw in class, from the crazy days of the crisis it would have been natural and very profitable to short the VXX. Still, that is an "ex-post" observation. It is easy to find things that would have been wonderful to do between March 2009 and the present.
Here there are further factors involved. In general, the long term holder of any commodity ETF faces some headwinds due to the term structure of futures contracts. Moreover, a well informed contact tells me that this traditional headwind is particularly vicious for the VXX contract. So much so that the asymptotic value of the VXX may well turn out to be zero. Unfortunately, the only guarantee is that if we have a serious macro event it will be very painful to be short VXX. The UPS scare of 10/29/2010 could have been much worse.
Sidebar: Distressed Debt --- One Thousand Words
HW 6: Comment Regarding the Cauchy Distribution
The Cauchy distribution is a special case of the t-distribution. It comes up naturally in many places, and its principal claim to fame is that it looks like the normal when you graph it --- but the graph is deceiving since the Cauchy has very fat tails. The Wiki page on the Cauchy tells you more --- perhaps too much.
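A quick numeric comparison (Python, standard library only) makes the fat-tail point concrete: the two densities look alike near zero, but the tail probabilities are wildly different.

```python
import math

def norm_sf(x):
    """P(Z > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_sf(x):
    """P(X > x) for a standard Cauchy."""
    return 0.5 - math.atan(x) / math.pi

# At x = 4 the Cauchy tail is thousands of times fatter than the normal's
ratio_at_4 = cauchy_sf(4) / norm_sf(4)
```

This is exactly why the overlaid density plots deceive you: the region where the two distributions differ most carries almost no visual weight.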
More HW6 Q and A:
- EDA means "Exploratory Data Analysis"
- Yes, a t-distribution with one degree of freedom is the same as a Cauchy distribution.
- Does location and scale disturb the interpretation of a qqplot? That is a puzzle for you to contemplate.
- RYTNX is a mutual fund, not a stock. The returns are in WRDS. This series is longer than the 2x ETF alternatives.
- Remember Always: Google is an extension of your MIND. It will answer many questions --- including these.
- Logic of ADF test. In the simpler DF test you are testing only the hypothesis that the AR(1) coefficient phi_1 is equal to one. In the ADF test, you are still just testing this hypothesis but you allow for the possibility that the model is more complicated than the plain vanilla AR(1). Under the null hypothesis --- where you are looking at something that is random walk like --- the model is a RW to which has been added a stationary (or I(0) ) process. For details, look at the model specification in ZW.
- Small point. Any series with drift is a non-stationary series --- specifically, having a drift implies that the means are not constant as they must be for a stationary series. When we do an ADF test that "allows for trend", we are really saying --- ok, remove any trend, then tell me: is the hypothesis that I am now looking at a RW wildly improbable?
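To make the ADF logic concrete, here is a bare-bones sketch in Python (numpy only; the simulated series and the single-lag choice are just for illustration --- in practice you would use a packaged routine and its tables):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated random walk, so the unit-root null hypothesis is true
n = 400
y = np.cumsum(rng.normal(size=n))

# ADF regression: dy_t = alpha + (phi - 1)*y_{t-1} + c1*dy_{t-1} + e_t.
# The lagged-difference term is what makes the test "augmented"; it
# soaks up serial correlation beyond the plain-vanilla AR(1).
dy = np.diff(y)
X = np.column_stack([np.ones(n - 2), y[1:-1], dy[:-1]])
beta, *_ = np.linalg.lstsq(X, dy[1:], rcond=None)
resid = dy[1:] - X @ beta
s2 = resid @ resid / (len(resid) - X.shape[1])
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se

# Compare t_stat to the Dickey-Fuller tables (about -2.86 at the 5%
# level with a constant) --- NOT to the usual t-table.
```

The statistic is computed exactly like a t-statistic; only the reference table is different, which is the whole subtlety of the test.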
From Bad Idea T-shirts... but not unfriendly to the Twain quote...
SPECIAL Note: HW 7 Due Wednesday Oct 31
Because of SANDY there will be no class on Monday. Just stay snug in your rooms and ponder the universe. You can hand in your discoveries on Wednesday.
Day 14: VaR --- The Sad Story
For 24 October 2012
As I said even before the 2007 course blog: "There is much wrong with the way VaR is used and calculated --- even in the most enlightened firms. Some implementations are close to (1) a hoax or (2) at least naively self-delusional. "
Still, VaR is used almost universally. Moreover, the vast majority of implementers are sincere in their belief that they have done their best.
The people I hold responsible for the destruction caused by VaR are those who said, "Everyone else has good Value at Risk measures. Either you guys come up with a system that works, or I'll get someone who can." In such an organizational situation, you are guaranteed to get bogus analyses --- all of which will agree very comfortably with all the other delusional models. This is sad, but very human --- the mechanism of social proof is as universal as any phenomenon one can imagine.
Again, humbly quoting myself from the 2007 blog, "If one could simply bet against run-of-the-mill VaR estimates, one would not need to look for other investments. This would be ... a veritable paradise of Black Swans, vastly more lucrative than those (too rare) Black Swans that stingy options traders occasionally provide."
Why, Oh Why, Is this So?
There are two virtually insurmountable problems with VaR as it is calculated in most firms. These are the "Peso Problem" and the "everything is correlated at the extremes" problem. There is also a less overtly dangerous but still unavoidable problem I call "Tukey's Biased Estimate of Variance."
The "Peso Problem" is a standard part of economic lore --- but it is steadfastly ignored by essentially ALL VaR models. The point is that "observed risk" (say as measured by historical return standard deviation) and real risk (say as set by the smartest book makers) are often wildly different. There are indeed "infinite Sharpe ratio" investments that are (at most) only mediocre bets.
The "correlation problem" has also been widely understood for a long time. It was one of the forces that led to the demise of LTCM. It was one of the forces in Niederhoffer's first blow up after the Asian Crisis of 1997. Still, my favorite example actually goes back to the great flood of 1927, and I'll tell that story in class.
Despite the long history, the "correlation problem" is ignored in 99.9% of VaR models.
Finally, how about Tukey's Estimate? I'll elaborate in class, but --- it's ignored. Why? Because it would force everyone to bring down their leverage. Well, since the fall of 2008, we have the strong sense that bringing down the leverage early is sometimes the right thing to do. The VIX, the LIBOR-OIS spread, and other indicators told us what to do, but VaR was the last to speak --- and it was too late.
John Tukey understood all of this, even before the ideas came into play in a financial context.
With a barrel-chested sotto voce rumble, he would say, "The variability that you have seen is always an underestimate of the true variability." In our context, where volatility and variability are cognates, Tukey is on one side, and the world's VaR models are on the other.
My money is on Tukey.
Still, many firms are getting better at VaR, and we just need to have evolution play its role. Though many individuals in many firms will kick and scream, the big firms with the best VaR models (and other risk controls) will be survivors. As the "subprime" story played itself out, we found that many firms had VaR estimates that were pure garbage. It seems inevitable that some measure like VaR will always be with us --- and it seems that sometime again in the future it will greatly fail us.
Peso, Correlation, and Tukey are too much headwind for anything.
Extreme Value Distributions --- Use at Extreme Risk
On the more technical side, we'll look at extreme value theory, which is one of the tools that theoreticians always seem to want to trot out when there is a discussion of big market moves. The mathematics of the extreme value distributions is interesting, but for several reasons extreme value theory doesn't deal with the reality of extreme market moves.
We'll discuss the Gumbel (or Fisher-Tippett) distribution in class. It comes out of a beautiful limit theorem, and it is the leading example of what are known as extreme value distributions. Sadly --- and in stark contrast to the Central Limit Theorems --- there is a major disconnect with any level of honest practice.
You will see from a homework problem that the convergence is excruciatingly slow even in the ideal case of normally distributed random variables. There are people who have advocated the use of this distribution in financial practice. It has even been used as part of the Tokyo building code. These applications are bogus, bogus, bogus.
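To see the slow convergence for yourself, you can compare the exact law of the maximum of n standard normals with its Gumbel limit. Here is a Python sketch using the classical norming constants (the grid and sample size are arbitrary illustration choices):

```python
import math

def norm_cdf(x):
    """Standard normal cdf via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

n = 1000
ln_n = math.log(n)

# Classical norming constants for the max of n standard normals
b_n = math.sqrt(2 * ln_n) - (math.log(ln_n) + math.log(4 * math.pi)) / (2 * math.sqrt(2 * ln_n))
a_n = 1 / math.sqrt(2 * ln_n)

# Exact law of the max, P(M_n <= a_n*x + b_n) = Phi(a_n*x + b_n)**n,
# compared to the Gumbel limit exp(-exp(-x)) over a grid of x values
gap = max(
    abs(norm_cdf(a_n * (j / 100) + b_n) ** n - math.exp(-math.exp(-j / 100)))
    for j in range(-300, 501)
)
# Even with n = 1000 the worst-case error is several percent --- the
# convergence rate is only on the order of 1/log(n).
```

With n = 1000 the worst-case discrepancy is still a few percent, and 1/log(n) shrinks so slowly that no realistic sample size fixes it. This is the quantitative heart of the "bogus, bogus, bogus" verdict.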
Still, extreme value distributions are worth learning about. There probably are special contexts where they are applicable, and they have an undeniable charm. Also, they are part of the common language, and any time "extreme" events are discussed, they are likely to be drawn into the conversation. When this happens, be prepared to be skeptical.
More on Risk-Adjusted Returns
Sidebar: "Is RSP a Stinker?"
Just as a side note, you might want to think about a project that plays RSP versus MDY. These are assets with different designs, but which are very highly correlated. RSP has a bigger expense ratio, so it may be that the portfolio of long MDY short RSP can have a sweet Sharpe ratio --- small mu, but microscopic sigma.
Sidebar: Sales of Safes Spike in Crisis Week 10/22/2008
The CEO of Wal-Mart gets to see secrets of human behavior that most can never see. In a CNBC interview on 10/22/2008 Lee Scott observed that his stores saw a "run on safes" during the weeks of the October 2008 financial crisis.
HW 6 Note
If you are having trouble finding RYTNX returns, note that it is a mutual fund. Also, it may be that you'll find it in the "quarterly updates" rather than in the "annual updates." (Thanks to Himish Lad for this tip!)
Dan adds: "SPXU and UPRO have price data on CRSP under their ticker symbols. They are triple long and short ETF's. RYTNX is also in the system in the daily data, but for some reason the ticker is "RYT" not "RYTNX" and the cusip is different from what Rydex lists on the (WRDs) website."
Day 13: Stationarity and Unit Root Tests
For 17 October 2012
Stationarity is the assumption that gives us a link to the past. Without "belief" in stationarity (or some surrogate), we have (almost) no way to learn from what has gone before. It is natural then that economists and others would hunger for ways to test for stationarity.
We know from the "cycle construction" that it is impossible to test for stationarity in general, but how about in the specific? For example, one may be willing just to test for stationarity while assuming an ARMA structure. An ARMA model may be stationary or non-stationary, so there is indeed something to do even in this confined context.
The first and most famous of such domain-limited tests is the Dickey-Fuller test (1979). The DF test is in essence "just a t-test," but the corresponding tables for p-values just happen not to be the famous t-table. The relevant distribution theory actually depends on stochastic calculus, and we may chat about this if there is time. As a practical matter, one just uses S-Plus to find the relevant p-values.
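If you want to see why the t-table is the wrong table, a small simulation under the null recovers the familiar DF critical value near -2.86 rather than the t-table's -1.645. This is a Python sketch (sample size and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def df_tstat(y):
    """t-statistic for (phi - 1) in the regression dy_t = a + (phi - 1)*y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

# Simulate the null (a pure random walk) many times; the 5% quantile of
# the "t-statistic" lands near -2.86, far from the -1.645 that a
# t-table would suggest.
stats = [df_tstat(np.cumsum(rng.normal(size=250))) for _ in range(2000)]
crit_5pct = np.quantile(stats, 0.05)
```

The left shift of the null distribution is exactly what the Dickey-Fuller tables encode; use the usual t-table here and you reject far too often.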
The Unit Root code fragments explore the augmented Dickey-Fuller tests and comment on some examples described in Zivot and Wang. In the example for log-exchange rates and for stock prices we fail to reject the null hypothesis that there is a unit root. For stock prices this is certainly no surprise, but for exchange rates it may not have been expected. Such economic ideas as purchase price parity might have pointed toward stationarity.
Still, for PPP to come into play, one needs to deal with the separate inflations in the two countries. As the example of Brazil shows, one can have something close to PPP yet have exchange rates that are flamboyantly non-stationary. In general, emerging market exchange rates can be much more violent than one might imagine a priori, and one week moves of 5 to 10 percent are not uncommon. This plays havoc in the short term with emerging market asset returns, but "washes out in the long run" --- if we believe in that sort of thing.
More Caveat than Usual
As much as one wants to test for non-stationarity, our technology is not particularly compelling. I expand on this in a little "cultural" piece on unit root tests.
Sidebar: Algorithms for the ARMA(1,1) Etc.
The ARMA(1,1) model cannot be fit by classical regressions because we don't observe the errors explicitly. One way around this problem is to use an iterative algorithm that uses successively smarter residuals to stand in for the errors. This is not the algorithm that is used in S-Plus, but it captures some of the features of the algorithm that S-Plus does use. We'll discuss the algorithm briefly.
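Here is a rough Python sketch of such an iterative scheme (this is my stand-in illustration in the spirit of Hannan-Rissanen, not the S-Plus algorithm; all parameters are invented). A long AR fit supplies the first stand-in residuals, and each round of regression produces smarter ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an ARMA(1,1): y_t = phi*y_{t-1} + e_t + theta*e_{t-1}
phi_true, theta_true, n = 0.6, 0.3, 5000
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + e[t] + theta_true * e[t - 1]

# Step 1: fit a long AR(p) to get a first stand-in for the unobserved errors
p = 10
Xar = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
a, *_ = np.linalg.lstsq(Xar, y[p:], rcond=None)
ehat = np.concatenate([np.zeros(p), y[p:] - Xar @ a])

# Step 2 (iterated): regress y_t on y_{t-1} and the residual stand-ins
# e_{t-1}; then refresh the residuals and repeat.
for _ in range(5):
    X = np.column_stack([y[:-1], ehat[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    ehat = np.concatenate([[0.0], y[1:] - X @ coef])

phi_hat, theta_hat = coef
```

With a few thousand observations the estimates land close to the true (phi, theta); each pass substitutes "successively smarter residuals" for the errors that classical regression cannot see.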
Sidebar: Performance Measures
A relatively recent technical report points out that all of the classical measures for investor performance pretty much tell the same tale. This is not shocking news, but you should still skim the paper for a review of performance measures and their uses. I'll have a few methodological comments in class.
I also have an older summary of the basic facts about performance measures --- and a look at their Achilles heel. Of course, there is always the venerable Wiki, but I find many things in their Sharpe Ratio piece that need to be fixed. The Sharpe ratio piece at MoneyChimp is better but it still has misstatements. The portfolio calculator at MoneyChimp is also instructive. The "optimal" portfolios are amazingly sensitive to your choices of parameter values --- and their default choices don't look too good to me.
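For reference, the basic Sharpe ratio computation fits in a few lines. The monthly excess returns below are hypothetical, and the sqrt-of-time annualization is the usual (slightly crude) convention:

```python
import math

# Hypothetical monthly excess returns (returns minus the risk-free rate)
excess = [0.012, -0.004, 0.020, 0.003, -0.011, 0.015,
          0.007, -0.002, 0.009, 0.004, 0.013, -0.006]

n = len(excess)
mean = sum(excess) / n
var = sum((r - mean) ** 2 for r in excess) / (n - 1)   # sample variance
sharpe_monthly = mean / math.sqrt(var)

# Annualize with the usual sqrt-of-time rule
sharpe_annual = sharpe_monthly * math.sqrt(12)
```

The simplicity is part of the problem: everything interesting (and everything misstated on the web) is about what goes into "excess returns" and whether sqrt-of-time scaling is honest for the series at hand.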
Sidebar: Leveraged ETFs
We'll look at some of the logic of leveraged ETFs. In particular, we'll look at what the Kelly Criterion, Volatility Drag, and Sharpe Ratio can contribute to the conversation about these weird assets.
Sidebar: Junk Bonds, Treasuries, and Bubbles
"Well-informed investors avoid the no-win consequences of high-yield fixed income investing." — David Swensen, Unconventional Success
It takes a brave man --- and David Swensen is one --- to nix an entire asset class. Still, in Unconventional Success he is highly critical of corporate bonds in general. His "no-win" critique is based on the observation that corporate treasurers know more about their company than anyone on the outside, so they will always make more informed judgments about new issuance or about calling in of earlier issues.
This is a valid observation, but it is very much a non-equilibrium argument. A "price is right guy" would argue that bond purchasers also know this asymmetry and they insist on being compensated appropriately for this risk.
The flip side of Swensen's argument is that US Treasuries are a very special asset class. There is no default risk and the bonds are not callable. They therefore provide insurance against unanticipated deflation --- or panic. In the crash of October 2007 to March 2009, this insurance did pay off nicely --- at least at first. As the crisis matured, the treasuries went back up in yield and a big slice of the gain was given back. As of Fall 2010 we know that though the extreme crisis passed, the Fed continued to keep interest rates low --- with the 10 year rate going down to 2.5% or so. At this point, there is conversation of a bond "bubble". There are reasonable arguments for both sides, though personally I can't see the appeal of a 10 year bond with a 2.5% yield --- when the Fed has almost made an explicit target of 2% inflation. If they overshoot their target, someone is then stuck with a negative real yield.
When we turned out of the crisis in March 2009, the corporate bond market led the recovery and junk bonds did very well indeed. Best of all were the bank trust preferred shares, which are arguably the junkiest of fixed income investments. Admittedly, this is just an ex-post review, but the outlines of this pattern have been seen many times before. I would not be surprised to see it again, though I hope I don't.
Sidebar: Market Rally and Fog of War
Just prior to the Allied invasion of Normandy on June 6, 1944, it seems that as part of the pre-invasion deception, there was a Swedish stock market rally that was rigged up by the Allies. The theory was that this rally would be interpreted by the Nazis as occurring because of Swedish anticipation of an Allied invasion of Norway. The Allied intention was to keep the Nazi divisions in Norway pinned down in defense of an invasion that would never come.
I do not know if this story is true, or if it is even widely believed. It may also be just one of many war time market manipulations. For example, it is well documented that various deceptions were ably conducted by the Belgian royal family to keep uranium ore from moving from the Congo to Germany. As a matter of war time analysis it may be interesting to ponder that the US stock market did quite well during the First World War but was close to flat during the Second World War. This may well have been because the profits of many corporations were essentially capped by the government. These caps may have led to a certain amount of "gray saving" through excess depreciation. Much of that gray saving came back into economic activity during the post-war recovery.
Sidebar: Managing Multiple Currencies
There is an old interview with George Soros where he points out that money management is easier if you only have to be concerned with one currency. It is an often ignored fact that the change in exchange rates is often of the same or larger magnitude than the change in equity assets.
The returns of January 2009 to November 2010 illustrate this nicely. They also show why the super returns that US investors made on EM stocks during this period were much closer to "normal" bull market returns in the local currency. Incidentally, each week The Economist gives YTD market returns in both local currency and dollars.
Day 12: Martingales, Probabilities, and EMH
For 15 October 2012
We will discuss the ever useful notion of a martingale. Martingales were originally introduced to provide models for fair games, but they have evolved to become what many people regard as the most important notion in all of probability theory. The plan will only require a few intuitive observations about martingales before coming up with some wonderfully concrete results, such as the famous formula for the probability of ruin (i.e. losing all your money).
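For a preview, the ruin probability has a tidy closed form that falls out of the martingale (q/p)^{S_n}. Here is a small Python check of the formula against the first-step recursion (a gambler with fortune a, even-money bets of 1, stopping at 0 or N):

```python
def ruin_prob(a, N, p):
    """Closed form from the martingale (q/p)^{S_n}; the fair case p = 1/2 is separate."""
    q = 1 - p
    if p == 0.5:
        return 1 - a / N
    r = q / p
    return 1 - (1 - r ** a) / (1 - r ** N)

def ruin_prob_numeric(a, N, p, iters=20000):
    """Cross-check: iterate the first-step recursion
    u(k) = p*u(k+1) + (1-p)*u(k-1) with u(0) = 1, u(N) = 0."""
    u = [1.0] + [0.5] * (N - 1) + [0.0]
    for _ in range(iters):
        for k in range(1, N):
            u[k] = p * u[k + 1] + (1 - p) * u[k - 1]
    return u[a]
```

Even a tiny edge against you compounds viciously: with p = 0.48 the ruin probabilities are far above the fair-game value 1 - a/N, which is the sobering message behind the pretty formula.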
Martingales help one think a bit more precisely about the EMH or the ways one might measure the extent to which a money manager may have significantly out-performed the market --- or not.
Sidebar: The 200 Day MA of the SPY
Go over to bigcharts and take a look at the SPY along with its 200 day MA, the most classic of all technical tools. The strategy of being long the market only when it is above its 200 day MA has long been known to have historically out-performed pure buy-and-hold.
Nevertheless, this achievement is not given very much credit because most of the juice came from just one special period. It turns out that the special period came to be called the great depression. Since the great depression is so far from the collective memory, this argument against paying much attention to the 200 Day MA seemed reasonable. Then came the crisis of 2007-2009, and again it turned out that paying attention to the 200 Day MA would have saved you a major nickel.
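The bookkeeping for such a strategy is simple; here is a sketch on a purely synthetic price path (Python; the numbers here make no claim at all about historical SPY performance):

```python
import numpy as np

rng = np.random.default_rng(4)

# A synthetic price path stands in for SPY --- this shows the mechanics,
# not a backtest
n = 2000
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, n)))

window = 200
ma = np.convolve(prices, np.ones(window) / window, mode="valid")  # 200-day MA

# Hold the market only on days when the latest close is above its 200-day MA;
# otherwise sit in cash (earning zero, for simplicity)
daily_ret = np.diff(prices) / prices[:-1]
above = prices[window - 1 : -1] > ma[:-1]
strat_ret = np.where(above, daily_ret[window - 1 :], 0.0)

buy_hold = np.prod(1 + daily_ret[window - 1 :])
strategy = np.prod(1 + strat_ret)
```

On any particular synthetic path either line can win; the historical claim is precisely that the strategy's edge came from sidestepping a small number of catastrophic stretches, which a random-walk simulation does not reproduce.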
Sidebar: BDCs Worth a Look?
I like strange assets. Being off the beaten track, they offer a logical possibility for cheese. Sometimes whole asset classes can seem off the beaten track. Formerly MLPs were only a boutique class and much further back REITs were regarded as a strange new bird. One class of assets that is still under the radar (mostly for good reasons) is the BDC, or business development corporation. Mostly these are firms that loan money to mid sized firms. They get a nice rate on the loan, and the deals are often structured with an equity kicker or some other tasty tidbit.
Day 11: Betting on AR(1) and Intro to EMH
For 10 October 2012
We will look harder at the Kelly proposal and in particular at the idea of a conditional modification when one has a AR(1) model in mind.
We'll then look at the EMH. This is commonly pondered, but surprisingly hard to make 100% logically rigorous. Still, there is considerable benefit to asking "Who is my counter party, and why do I have a reason to buy from him --- or sell to him?"
Sidebar: Ahead of the Curve by Joseph Ellis
This is an honest and informative book that lives up to its subtitle: A Commonsense Guide to Forecasting Business and Market Cycles. The author was the Institutional Investor No. 1 ranked Retailing Analyst for 18 years, and the book explains the big-picture part of his analytical approach. It is not at all mathematical and it deals with data on a much slower time scale than we typically see in our course.
Perhaps the biggest theme of the book is that year-over-year change in PCE, personal consumer expenditure, is a key leading indicator of changes in industrial production, corporate profits, and asset prices. A minor but evocative theme is that research that focuses on prediction of recessions is misplaced from the point of view of either a business manager or an investor. The book has some weak points, say in its analytics of deficit and interest rates, but over all it is at least provocative --- perhaps even wise.
Sidebar: Confidence --- of Consumers and of Others
Incidentally, the Wiki article on the great depression observes that concerning the causes of the great depression, "The only consensus viewpoint is that there was a large-scale lack of confidence."
The good news is that we now have structures in place (like the FDIC) that should keep consumer confidence from hitting the tank, as in the picture above of the run on the American Union Bank. We also have unemployment insurance, which keeps people consuming even if they suffer a job dislocation. The bad news is that we have also have a spaghetti bowl of linked derivatives that make everyone quite uncertain --- even about traditionally secure institutions, such as the Hartford and MetLife insurance companies.
There is an index of consumer confidence that is of high scientific quality, but it comes out only monthly, so it is always yesterday's news from the view of the financial markets. Gallup does its own consumer confidence poll. It comes out approximately every two days.
Incidentally, another 2008 survey suggests that young adults (19-29) are more pessimistic about the economy than other age groups. They aren't necessarily pessimistic about their own prospects, but they do think things were much rosier in the 1990's.
Sidebar: ProShares and Volatility Drag
ProShares (and its competitors) are designed to provide the holder with two times the daily returns on the benchmark asset. Because of volatility drag (the famous "mu minus half sigma squared" formula), such an instrument is guaranteed to provide less than twice the return of the underlying over any longer period of time.
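The arithmetic behind the drag is worth a quick check (Python, with purely illustrative daily parameters): leverage doubles mu but quadruples sigma squared in the compound growth rate.

```python
# "mu minus half sigma squared": with daily drift mu and volatility sigma,
# the 1x fund compounds at roughly mu - sigma**2/2, while a 2x
# daily-rebalanced fund compounds at roughly 2*mu - (2*sigma)**2/2.
mu, sigma = 0.0004, 0.015          # illustrative daily parameters

growth_1x = mu - sigma ** 2 / 2
growth_2x = 2 * mu - (2 * sigma) ** 2 / 2

# The leveraged fund falls short of "twice the 1x growth" by exactly
# sigma**2 per day --- the extra volatility drag
extra_drag = 2 * growth_1x - growth_2x
```

That daily sigma-squared shortfall looks tiny, but compounded over months it is exactly why the 2x fund cannot deliver twice the long-horizon return of the underlying.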
Sidebar: XOM and USO
While at bigcharts, take a look at the relationship between the price of oil and the price of Exxon-Mobil. To me, the three year chart for XOM and USO (the oil commodity ETF) is at least a little mysterious. When you compare XTO and USO, there is a more intuitive relationship.
Commodity ETFs that are designed around futures contracts have many embedded issues that come from the term structure of futures contracts.
Sidebar: Oct 10, 2008 VIX hits 70 and Dow Interday Swing is 1000+
What can you say? This was unprecedented. The thermometer may not be "broken" but the logic behind the thermometer is severely torqued. With sigma at 70, a minus 1.5 sigma event means you could snap up the total market with just your lunch money.
If you must search for a silver lining, here is a line from a recent Schwab report: "Following each move above 40 since 1990, the S&P 500 had consistently positive returns one month (8.7% average) and one quarter (11.3% average) later, although the performance was mixed in the very short term."
Sidebar: Historical vs Implied Volatility (Oct '07)
Here is a plot of annualized realized volatility based on thirty day trailing data and the annualized implied volatility "based" on the 30-day at-the-money options. These track better than I thought they would. The WSJ source comments: "We're running a big fever --- the thermometer is not broken."
If we consider for a moment the art of honest graphics, one should note that this graph may suffer from what I call the "histogram cheat." Classically, this is the ruse where you overlay a density and an empirical histogram and observe that "it looks pretty good." For example, this device was trotted out in the Ljung-Box paper. An experienced analyst knows that such superpositions play tricks on our eyes.
In general it is much better to look at the spreads (i.e. the difference in the two curves) or at the multiplicative analog of spreads --- the ratio of the two curves.
When put under such a microscope, you start to see that even this "surprisingly good fit" is actually pretty bad.
Sidebar: Jim Chanos and the Apollo Short
Jim Chanos is a hedge fund manager with a great track record as a short seller who is willing to hold his short position for a long time. In early 2010 (and perhaps even earlier) he started shorting the for-profit education businesses. The crux of his analysis was simple: "These businesses don't collect their fees from customers --- they collect fees from the state and federal governments that provide loans to the students. They don't care about the students' future careers and they take all applicants. This is so close to fraud it had to catch the regulators' eyes eventually."
Chanos was not shy about sharing his analysis with all listeners --- regulators included.
This bet has worked out very well for Chanos (as of 10/14/2010) and he has provided a service to the nation ---- while making a nice return for his fund.
What else is Chanos short? He is very bearish on real estate in China --- and pretty bearish on China in general. What are his actual bets? Unfortunately, the 13F does not require the reporting of short positions.
Sidebar: VXX Amazing Slide
VXX is an ETN that reflects the returns on a fixed (contractual) short-term futures strategy. In essence (but you always have to smell the essences pretty carefully) this is a bet on market volatility as measured by index options. The strategy is implemented through VIX futures (or surrogates).
It does seem natural that after walking through the valley of the shadow of death that one would see volatility come down. It is fair to say that "everybody" saw this, but how many of us were short VXX? Take a look at the path. Golly, did we ever miss a good bet.
Still, there is an old traders saying: "Never be short vol."
Maybe this should have been modified --- but now --- perhaps --- it should be put back on the bulletin board.
Sidebar: Magic Numbers
There is a nice piece on the magic of numbers that may have some relevance to the kinds of problems that concern us.
Quote of the Day: Jason Zweig, “Remember now, as always, that the individual investor is at the bottom of Wall Street’s food chain—a speck of plankton adrift in a sea of predators.” (WSJ)
Day 10: What to Do When Facing A Favorable Bet
For 8 October 2012
The Plan for Day 10 then has us review the Law of Large Numbers. I'll even give a proof of the weak law of large numbers to remind you of some facts from probability theory. We'll also discuss the statement of the strong law of large numbers, which is a much more sophisticated notion than the weak law --- even though both carry the same intuition.
We review the LLN because we want to use it to guide our understanding of the attractive but controversial notion of long-term growth rate of wealth. This rate will then lead us to the famous Kelly criterion for bet sizing. We will develop this in some detail. (Purely parenthetically, there is a remarkable article on the "Infinite Monkey Theorem" in the Wikipedia. I think it is delightful for its scholarship.)
For some richer context for the Kelly Criterion, you might want to browse my brief page of Kelly related links. I will add more to this page as time goes on, so you may want to revisit it when you start to think about your final projects.
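As a tiny preview of the Kelly computation: for an even-money bet with win probability p, the long-run growth rate of wealth under bet fraction f is g(f) = p*log(1+f) + (1-p)*log(1-f), which is maximized at f* = 2p - 1. A quick Python check:

```python
import math

def growth(f, p):
    """Long-run growth rate of wealth when betting fraction f of wealth
    on an even-money gamble with win probability p."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.55
f_star = 2 * p - 1          # the Kelly fraction, here 0.10

# A grid search over f confirms the maximizer
grid = [i / 1000 for i in range(0, 500)]
best = max(grid, key=lambda f: growth(f, p))
```

Note how flat-topped g is near f* and how savagely it falls off for large f: with p = 0.55, betting 30% of wealth already has a negative growth rate, which is the usual argument for betting at or below Kelly.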
Finally, we will see how the Kelly criterion relates to a classical paradox of utility theory, the famous St. Petersburg Paradox. As our discussion is unfortunately brief, you might want to look at comments on the Paradox by Shapley and by Aumann.
Shapley argues that it need not be the failure of risk neutrality that causes us to offer so little for the St. Petersburg gamble. Shapley says it can all be explained by counter party risk. This makes a lot of sense to us now considering, for example, how Ambac and MBIA look like feeble backing for bond insurance.
Aumann argues that any unbounded utility function exposes one to paradox; for Aumann, utility must be bounded to make sense. These papers are brief, lucid, and written by central figures of modern economic thought. Where better could you spend a few minutes of your day?
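For the record, Bernoulli's log-utility resolution of the paradox is a one-line computation: in the simplest version of the gamble (payoff 2^k with probability 2^{-k}), the expected payoff diverges, but the expected log-payoff is finite, giving a certainty equivalent of just 4.

```python
import math

# St. Petersburg gamble: payoff 2**k with probability 2**(-k), k = 1, 2, ...
# E[payoff] = sum over k of 1 = infinity, but under log utility
# E[log payoff] = sum 2**(-k) * k * log(2) = 2*log(2), a finite number.
expected_log = sum(2 ** (-k) * k * math.log(2) for k in range(1, 200))

certainty_equivalent = math.exp(expected_log)   # = 4 under log utility
```

This is exactly the tension Aumann is pressing on: log utility tames this gamble, but an unbounded utility can always be re-exposed to paradox by a suitably souped-up payoff schedule.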
Sidebar: Quotes with an Economic Spin
I keep a web page of the quotes that amuse me. Since most of my reading is about contemporary economics, statistics, and finance, so are most of the quotes. You might take a look at these from time to time, just for the fun of it. Heck, they may even be educational.
Scheduling Note: Day Before Thanksgiving
It is a tradition in 434 to treat attendance the day before Thanksgiving as optional. I do hope and expect that some people will be around. We will have class, but those who are eager to get to Grand Ma's house can scuttle off without worrying about missing anything essential.
Sidebar: Periodic Table of Financial Bloggers
Sidebar: 666 Numerology
It is touchingly quaint that the SP500 bottomed at 666 in March of 2009, supposedly a devilish number --- yet still a fact. Another fact is that post-production for the film "The Exorcist" was done at 666 Fifth Avenue in New York City. This is not news you can use --- outside of cocktail chat.
Sidebar: "Phase Transition and Momentum"
These are both terms from physics, and, when applied to economics and finance, such terms are often an invitation to foolishness. Still, "often" is not "always". I cited earlier a 2008 PIMCO piece that made the argument that "post phase transition expect more momentum".
In 2009, we get to see how this thesis plays out, and so far, the story seems credible.
Both in the down draft to March 2009 and in the recovery after March 2009, we have seen an astounding amount of momentum. In the recovery phase, we have also seen a big pay out to "high beta" assets, with the classic emerging market and small cap growth themes paying off big time.
I'm definitely writing down in my personal playbook: "Post some major transition (e.g. bubble crash) expect more momentum (positive or negative) than you would normally expect." Due to the awkwardness of living on just one sample path of economic history, we'll never have an honest test of this rule, but I think it is still more wise than foolish to look to the rule for guidance.
Sidebar: Richard Bernstein's Growth vs Value Thesis
In 1995 Richard Bernstein published Style Investing, a trade book that looked at questions such as "When do growth stocks outperform value stocks?" His most interesting thesis was "Contrary to popular belief, growth investing requires a rather pessimistic view of the world, while value investing is a more optimistic view" (p. 72).
Bernstein came to this view by looking at performance prior to 1995, but the developments of 2009 so far seem consistent with his thesis. When the market turned in March of 2009, growth outperformed value, and small cap growth was the best of the "six styles." Bernstein's argument was that "when growth is scarce it sells at a premium." By October 2009 the picture was more mixed --- again consistent with the optimism/pessimism thesis.
The alternative interpretation for 2009 is that we saw a sharp rebound from the March lows and the high-beta stocks just got their fair beta multiple of the positive returns. Since growth stocks, especially small cap growth stocks, tend to be high-beta stocks, there is no way (or at least no easy way) to decide between the Bernstein thesis and the beta thesis. The good news as far as the playbook goes is that each thesis points you in the same direction.
Sidebar: Coal Prices
The EIA has a site that tracks coal prices. Coal prices, like other commodity prices, may seem like they should be decently predictable. After all, both supply and demand are indeed decently predictable. Nevertheless, markets being markets, profitable prediction is no piece of cake. Hold that thought for a minute and take a moment to learn a bit about coal prices.
It's really amazing (to me!) how greatly the different types of coal diverge in prices as one varies BTU content and SO2 content. Just look at the spread on Illinois vs Powder River Basin prices! Finally, contemplate the observation that coal prices have been pretty stable, but coal stocks have taken a 50% nose dive.
By the old "it's all about expectations" mantra, this may be reasonable --- but maybe it is not!
Addendum: A 434 Alumnus points out that US supplies (as reported by EIA) are relatively stable, but there have been supply shocks to international supply due to new production coming on-line in Australia. He also notes that PRB coal is not exported and used almost exclusively for electricity production.
Day 9: ARIMA(p,d,q): Differencing, Estimating, Simulating
Posted for 3 October 2012
Our first step is to consider "differencing" as a tool for getting stationary time series from a non-stationary time series. This is a simple but useful trick. Also, when we look at continuous returns we are basically looking at the difference sequence of log price. This is one reason it is often (but not always) safe enough for us to assume stationarity of a return series. Price series themselves (or even log price series) are rarely stationary.
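As a quick illustration of the differencing idea (a Python sketch with made-up prices; in class we'll use the S-Plus tools):

```python
import numpy as np

# Differencing log prices gives continuously compounded returns:
#   r_t = log(P_t) - log(P_{t-1}).
# The prices below are made up purely for illustration.
prices = np.array([100.0, 101.0, 99.5, 102.0, 103.5])
log_returns = np.diff(np.log(prices))

# The price level wanders; the differenced series is the one we can
# plausibly treat as stationary.
print(len(prices), len(log_returns))   # differencing costs one observation
```

Note that each difference of an ARIMA(p,1,q) series is just an ARMA(p,q) series, which is why this one trick buys us so much.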
The main part of the plan is to go over the S-Plus tools for fitting ARIMA models and for Simulating ARIMA models. Important for us will be the use of arima.sim for simulating ARIMA series and arima.mle for fitting ARIMA series. Naturally, we'll also discuss the new homework HW4 as well as whatever else shows up.
Wisecrack du Jour --- Just An Eye Blink Ago (if 4 years is a blink!)
"I like two positions: Cash and the Fetal position." --- Jeff Macke, Fast Money, CNBC, (9/29/08)
Sidebar: Gray Monday, Sept. 29, 2008: Dow Drops 777 Points
Congress rejected the Economic Recovery Act and the markets responded with disappointment. The much-watched Dow-Jones Industrial Average was off "only" 6.98% while the Standard and Poor's 500 Index dropped 8.81%. You might wonder which number best reflects economic reality. The answer is simple: the Dow is an outdated, deeply flawed index.
The Gray Monday of 2008 took place in a much different environment than the Black Tuesday of 1929, which rang the bell on what became the Great Depression. There are lots of reasons why "it is different this time." In response, one wag observed, "Yeah, Lehman Brothers survived the Crash of 1929."
A more reassuring difference is that the world's central banks are much better informed. Unfortunately, legislators don't seem to have made much intellectual progress. Another difference is that Gray Monday took place in the context of a reasonably priced equity market, though perhaps not a cheap one when the then-current inflation rate of almost 5% is taken into consideration. What was really different in 2008 is that it was a credit (and credit derivative) driven sell-off.
The VIX also hit 48, which was not a record for the time, but it was close. As I commented on the blog at the time "The bottom line is that --- up or down --- you can expect big time volatility." Eventually the VIX went on to hit 88, which skirts perilously close to a mathematical inconsistency once one ponders the definition of volatility.
We know now that the bottom was not found until March of 2009 with the SP500 at the devilish 666, down from an October 2007 high of about 1550 (1565 intraday) --- a stunning 57% decline in about 17 months. Already there are more than one hundred books written about these turbulent times.
Sidebar: Inflation and Stock Market Returns
I'm sure that many of you have a copy of Jeremy Siegel's Stocks for the Long Run. In the third edition (p. 196) there is an interesting chart that relates inflation and the real compound annual return on stocks. He takes the inflation rates, bins the years into five bins, and on the y-axis plots the average of the real returns for the years that fall into this bin. It's an instructive analysis, but relevant to our earlier discussion on graphics, I would much prefer to look at the plain vanilla scatter plot, perhaps with the points coded for decade.
Siegel's graph makes the clear inference that inflation over 4.4% is bad news for equity returns. The scatter plot is likely to be less clear yet more informative.
Sidebar: Listed Assets with Near Zero Beta
If you search for single assets that have near zero beta, you turn up some very odd birds. One that I find interesting is WESCO. It trades very thinly --- which is explained in part by being 80% owned by Berkshire Hathaway. WESCO is run by Charlie Munger, Warren Buffett's long-time sidekick.
Sidebar: More Data or More Bayes?
What is the best way to think about what may be the likely returns to a long term holder of stocks? One model is to look back over all of the available data (say to 1880 in the US, or farther back in the UK) and compute the mean and variance as if annual returns were IID. Is this smart? Can one do better? Why do we believe what we believe? This is a huge discussion and it may lead nowhere, but it really is one of the issues that perplexes me most. It also relates nicely to many other discussions.
Just to distract us from the agony of day-to-day developments, we'll take a moment to look at a statistical time series that is fortunately remote from us --- both in time and culture. The series in question deals with a sad part of American history --- lynchings in the Southern States from the period of Reconstruction (1870s and 1880s) until the Depression (1930s). The key figure in the article has suggested to some that there was a relationship between cotton prices and the number of lynchings.
These series set some traps. It would be easy --- and wrong --- to trot out an analysis that ignores: (1) spurious regression in time series (a topic we take up later), (2) the notion of cointegration --- or the lack thereof (another future topic), or (3) the probably poor quality of the data --- not of cotton prices, but surely of lynchings --- the reported number of which was under the political control of the states.
Even given the scholarly source of the data, I view it as quite possible that the recorded number of lynchings could easily be off by a factor of two (especially on the low side) in any --- or even most --- of the given years. Lastly, the use of annual data makes the series very short. Given the humble efficiency of our methods, the series are certainly too short for one to address honestly the issue of cointegration. Oddly enough, the whole puzzle does echo an interesting issue in behavioral finance --- namely the constant stalking horse of confirmation bias.
Day 8: MA(q) and ARMA(p,q)
Posted for 1 October 2012
Our first task will be to survey your discoveries from HW3. If all goes as usual, we will find that --- except in the luckiest of circumstances --- the daily returns for common stocks are not normally distributed. This is the first of the so-called stylized facts that we will discover about asset returns.
The non-normality of returns is often ignored by practitioners --- sometimes mindfully and often times thoughtlessly. We'll try to remain mindful of the non-normality of returns, but on many occasions we'll have to join the thoughtless herd and assume normality --- even when we know it is false.
How do we live with this situation? Well, first we admit that it is a goofy thing to do and then we just keep looking back over our shoulder to see if something has really gone haywire.
The rest of the plan focuses on the theoretical features of the general ARMA(p,q) models, including the general issues of stationarity, invertibility, and identifiability.
Sidebar: Test or No Test for Stationarity
One theoretical point (that we may or may not cover) is the construction of a stationary process that essentially proves the impossibility of testing for stationarity in the broad sense in which we have defined it. We will later discuss tests for stationarity in more narrow senses, and it will be up to you as a modeler to decide which description best reflects your world.
Sidebar: Tests vs QQ Plots
When we use a QQ plot to compare a sample with a theoretical distribution, we see the FULL TRUTH in our data, and when we use a test (like the JB test for normality) we really apply a microscope to some specific features of our data (e.g. skewness and kurtosis in the case of the JB test).
It is natural that the full truth is harder to "understand" than the partial truth of a p-value.
Also, throughout statistics, longer tailed distributions are more troublesome than distributions with normal-like tails. This inevitably shows up when we try to compare asset returns with t-distributions of various degrees of freedom. We can tell easily enough that a t-distribution is more appropriate than a normal distribution, but by the time we try to decide which t-distribution is "best" we may start to face more of the truth than is personally comfortable.
There is a think piece from PIMCO that uses some metaphors from physics in the context of financial markets. Normally I dislike such metaphorizing --- but normally I do like the PIMCO pieces (especially those by Bill Gross and those by Mohamed El-Erian). One take-away from this article is that after passing through a "phase transition" the likelihood of momentum starts to exceed the likelihood of mean reversion. There do seem to be some historical examples to support that point of view, so it is worth contemplating, even if a satisfactory statistical confirmation is unlikely.
Sidebar: Ergodic? "I don't need no stinkin' ergodic ..."
You might want to take a look at an amusing popular essay about ergodicity. It is fundamentally correct, but I can't honestly say what one would infer from the article, if one did not already know the formal meaning of ergodic. Still, it is consistent with our definition and it possibly adds some intuition.
There are several layers to this puzzle. John von Neumann once said: "We don't understand mathematics, we just get used to it." Well, the notion of "ergodic" is somewhat the same. We can master the definition, but mastery of the real notion just takes lots of "getting used to."
Incidentally, John von Neumann also had a role in the use of the word "entropy" in information theory. Claude Shannon asked him if he thought the use of the term was appropriate, and von Neumann said "Go ahead and use it. Nobody knows what entropy means anyway."
Well, about von Neumann --- John Wheeler once said, "There are two kinds of people. Johnny and the rest of us."
Finally, the definition of ergodic in Z&W is not 100% up to snuff. Their definition and ours will coincide for Gaussian processes, but even our first brush with real asset data tells us that in our business we can't expect to see too many of those.
Sidebar: Too Much Company Stock in Employee 401(k)'s
A September 2008 piece in the WSJ catches up on a point that is often discussed by academic types. The bottom line is that it is pretty stupid to hold your company's stock in your 401(k). If you get it at a discount, take it (maybe), but when you can trade out of the discounted stock --- do it! Just by working for a company you already have much more than a market weight in the company; to pour in more money rarely makes much sense. You should probably even underweight your company's industry.
The only exception I can imagine to this "rule" would be for C-Level executives who need to "fly the flag." These executives should look into the possibility of hedging the positions that they are essentially "forced to hold." Given the "Caesar's wife" constraints such executives face, this may not be easy.
The WSJ piece was bemoaning losses, but ex-post "the first loss was the best loss." What we saw in September of 2008 only got worse until March of 2009, and those were 6 painful months.
Sidebar: Institute of Supply Management Survey (PMI)
The PMI (Purchasing Managers Index) is one of the numbers one can use to measure year over year change in business activity. It is survey based, and all surveys have lags of various sorts --- the PMI's may be shorter than most. There are also data quality issues that can change over time. Also, just looking at peaks and troughs, it is hard to argue that it is a leading index --- or at least it does not seem to lead the stock market. The Wiki piece on the PMI seems to have been vetted by ISM; it's not particularly critical.
Use Rolling Betas to Estimate Long Exposure
A recent JPM research report observes "The still low level of their 21-day rolling equity beta suggests that Macro hedge funds are still far from fully covering their shorts in equities." This sentence illustrates a nice trick. If the returns of a hedge fund (or collection of funds) have a beta that is smaller than usual, then their net long exposure is smaller than usual. There are several ways to apply this trick. Suppose you have a long-only manager of a mutual fund who you think is only 60% invested except for a few days toward the end of the period when he has to report holdings. You can look at the "interior days" beta (calculated from the daily returns) and come up with a very good guess.
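Here is a toy version of the trick (a Python sketch with simulated returns; the 21-day window follows the JPM report, but the fund, its 60% net exposure, and all the numbers are made up):

```python
import numpy as np

# Simulate a "60% invested" fund: its returns are 0.6 times the market
# plus a little idiosyncratic noise.  All values are illustrative.
rng = np.random.default_rng(5)
mkt = rng.standard_normal(252) * 0.01                  # daily market returns
fund = 0.6 * mkt + rng.standard_normal(252) * 0.002    # 60% net long + noise

def rolling_beta(y, x, window=21):
    """OLS beta of y on x over a trailing window, one value per day."""
    betas = []
    for i in range(window, len(y) + 1):
        xs, ys = x[i - window:i], y[i - window:i]
        betas.append(np.cov(ys, xs)[0, 1] / np.var(xs, ddof=1))
    return np.array(betas)

betas = rolling_beta(fund, mkt)
print(round(betas.mean(), 1))   # hovers near the true 0.6 net exposure
```

The rolling betas are noisy day to day, but their level tracks the net exposure --- which is exactly the inference the JPM analysts were drawing.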
Truthiness --- Automated Meme Retrieval
There is a new tool to help you chase down the origin of an idea that you find to be "suspect" --- such as the assertion that President Obama is a Muslim (not that that is bad --- just that it is not true) or the PPT (Plunge Protection Team) that supposedly supports the price of stocks (Oh, but were it so!). This seems to be most effective for completely hare-brained assertions, such as "Paris Hilton is a Mormon" or "Tom Robertson played for the NFL."
CXO on Slope of Yield Curve and Equity Returns
There is a modestly informative piece at CXO that looks at various measures of the slope of the yield curve and how these might help forecast future-period returns on equities. The R-squared statistics are not particularly impressive, and the use of overlapping periods is problematic. Still, the "Goldilocks" story that emerges does make some sense. It's also interesting to see which graphics are persuasive and which are not. One is then just left with the puzzle: "Have I been appropriately persuaded?"
Retail Sales "Sediment Layer" Graph
There are now many types of graphs used by the popular press. They always look nice, but it is not easy to say if innovation is always informative. You might consider the recent post in the WSJ on retail sales. I call this a "sediment layer" graph by analogy with familiar pictures in geology.
Changing Value of the Float at ADP (and Elsewhere)
ADP is the world's largest payroll processor, and they have about 18B of permanent float on the payroll funds received from firms but not yet paid to employees (or governments). During most of ADP's history, they got to earn 4-5% on this float, and now they are earning about 0%. Even so, they trade at about 17 times trailing earnings, despite being a true "moat" business with a very "sticky" customer base. Their operating margins are also the best in the industry and they are four times the size of their closest competitor. They have a current 3.4% dividend yield, which will keep your money warm while you wait.
One never knows when a fundamental argument will hold water, but this one comes from Whitney Tilson, who is about as good as they get. The argument also makes sense to me --- though it is quite a long-term story. Substantial increases in employment or in short-term interest rates may be three or more years away.
One idea is to make a list of firms that have historically made a lot of their money from short term float. Someday we will have higher short term rates, and you will have a pre-planned strategy for that future day. Or, if you really believe in the wisdom of markets, you can use the stock prices to give you a leading indicator of future changes in the short-term rates.
Day 7: The Full AR(p) model --- Features and Choices
Posted for 26 September 2012
The main part of the plan is to introduce the general AR(p) model. We'll discuss the historical contribution of Yule, his equations, and the ability to pick up periodic behavior, such as the 11 year cycle one finds in the famous sunspot data.
One issue of clear practical importance is that of choosing an appropriate value of p. In general, this is a problem in what is called "model selection." We'll look at one natural approach based on the so-called "partial autocorrelation function."
We'll also open an important conversation about "parsimony" in models. Is this a miracle, credo, or a practical and well-founded heuristic?
We'll also do some mathematics --- in part because it is honestly important for understanding the AR(p) model and in part because I want to remind everyone that --- however powerful simulation may be --- one should never forget the value of analytical insight. More often than not, it is an analytical insight that tells one "what to simulate."
After warming up the old partial fractions, we'll find a useful criterion for stationarity in an AR(p) model. In theory this will be nice and explicit, but when turned back to practice we'll see that it is at least a bit schizophrenic. We'll also do that Wold calculation that we didn't quite get to last time.
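For reference, the criterion we'll derive is the classical root condition: the AR(p) model is stationary exactly when every root of the characteristic polynomial lies outside the unit circle. That is easy to check numerically (a Python sketch with illustrative coefficients):

```python
import numpy as np

# Stationarity check for an AR(p) model
#   X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} + eps_t.
# The process is stationary iff every root of
#   1 - phi_1 z - ... - phi_p z^p = 0
# lies strictly outside the unit circle.  Coefficients are illustrative.
def is_stationary(phis):
    poly = np.r_[1.0, -np.asarray(phis)]   # coefficients in ascending degree
    roots = np.roots(poly[::-1])           # np.roots wants highest degree first
    return bool(np.all(np.abs(roots) > 1.0))

print(is_stationary([0.5, 0.3]))   # True: a stationary AR(2)
print(is_stationary([1.2]))        # False: an explosive AR(1)
```

The "schizophrenic" part is that in practice one checks estimated coefficients, which are themselves noisy --- the clean theoretical condition meets messy data.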
Sidebar: Goofy Asset --- TIAA Real Estate Fund
If you want to consider a truly unusual asset, you need go no further than the TIAA Real Estate Fund (not to be confused with a mutual fund run by TIAA which is not so strange). We'll just look at the raw figures --- you can build your favorite graphs. This example brings up one of my favorite themes: just because lots and lots of assets exhibit some generic features, it does not mean that all assets have that generic feature. These "failures" are really the most interesting assets. BTW, this is a csv file and it may show missing values. If you import this to S+ you should have no trouble getting plots of these NAVs.
Day 6: Data Confronts the "Normality Assumption"
Posted for 24 September 2012
An important part of the plan is to go over the piece on using WRDS which also covers the importation of WRDS data into S-plus. We'll also discuss HW1 (gone back), HW2 (coming in), and perhaps go over the newly assigned HW3.
Sidebar: Pondering the Mysterious Role of Assumption in Models
We'll often simply assume that a series is stationary, though of course we won't be silly about this.
Most price series are clearly non-stationary, and there are even return series that we can't call stationary with a straight face. Nevertheless, except for isolated (but important!) periods, many (but not all) asset return series have behaved in a way that is stationary enough for us to courageously push ahead and "assume" stationarity.
Later we will discuss some "tests of stationarity" but to tell the truth, these tests are so limited in their power and applicability that they do not fully deserve the name. They are more technically known as "unit root tests" and they help one to discover when a random process "looks" like a random walk --- a vigorously non-stationary process.
The issue of Normality
Normality of the log-price change and independence of log-price changes are assumed in the development of the Black-Scholes model, a model --- and subsequent theory --- that is of rhapsodic beauty.
It is even a useful theory --- when carefully applied.
Still, as we experiment together, we will find that there is seldom much empirical encouragement for us to assume normality of returns (or differences of log-Price).
We'll start addressing the normality part of this set-up with help from the Jarque-Bera and Shapiro–Wilk tests of normality. For a purely seat-of-the-pants practical approach, one still does well by eyeballing the qqplots.
While we are at it, I should underscore that it is important that everyone in the class start sharpening their understanding of the theoretical side of our work --- even as we keep marching along with the practical side. It really is nonsense to suggest that one has a "practical" understanding without a "theoretical" understanding. We'll see lots of examples of the foolishness that can result when one loses track of what he is talking about.
In particular, make sure that your definitions are rock-solid.
For example, write down the definition of independence of two random variables.
Hint: if you use the word "correlation" you have a gap in your knowledge that you should plug right now. What is required here is a very simple formula --- nothing else is genuinely true or complete. To be brutal about it, anything else is simply wrong. It may have mitigating virtues, but it is still wrong.
My concern is this --- if your understanding of the fundamental notion of "independence" is shaky, then the more subtle notion of "martingale," which we will use shortly, must be incomprehensible. Without understanding the ideal of a martingale, one is almost barred from a fully adult conversation about market efficiency, gambling systems, and lots of other great stuff.
Suffice it to say --- understand the definition of independence as deeply as you can.
Sidebar: JB in the Wiki --- Still with a Small "Slip"
The Wikipedia description of the JB test has improved greatly over the last year and now it is a pretty good resource. I'll add a little bit to the description and its motivation in class. One should note that what the Wiki calls "kurtosis" is more precisely the "excess kurtosis". I'll discuss this briefly and remind you of a useful computation.
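The "useful computation" is easy to spell out: the JB statistic is built from the sample skewness S and the EXCESS kurtosis K (i.e. kurtosis minus 3). A Python sketch with simulated data (purely illustrative; in class we'll run the S-Plus version):

```python
import numpy as np

# Jarque-Bera built from its definition: JB = n*(S**2/6 + K**2/24),
# where S is the sample skewness and K is the EXCESS kurtosis.
# The data below are simulated normals, purely for illustration.
rng = np.random.default_rng(434)
x = rng.standard_normal(1000)

m = x.mean()
s2 = ((x - m) ** 2).mean()
S = ((x - m) ** 3).mean() / s2 ** 1.5       # sample skewness
K = ((x - m) ** 4).mean() / s2 ** 2 - 3.0   # EXCESS kurtosis

JB = len(x) * (S ** 2 / 6 + K ** 2 / 24)
# Under normality JB is approximately chi-squared with 2 degrees of
# freedom, so values above about 5.99 reject at the 5% level.
print(round(JB, 2))
```

The one-line moral: if you feed the raw kurtosis (rather than the excess kurtosis) into the formula, the statistic is badly wrong.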
The Wikipedia article on Shapiro-Wilk is also somewhat useful, but it may be too obscure to be directly useful to us. Nevertheless, there is an intuitive description of the SW test that we will discuss in class.
Just for Fun Sidebar: Which Bill has the Shortest Life Time?
Or, how much currency is in circulation? Is most of this in the US or outside the US? All of these must-know trivia questions are answered by the US Treasury.
Addendum: On Monday we'll learn a bit more about the Graphics tools in S-plus. In the meanwhile you may want to look at a little micro note on graphics that I wrote. Also, you'll want to take a look at the Sidebar on suppressing printing for a graphical function.
Sidebar: Suppressing Printing in acf()
Many functions that create plots also create other useful stuff. To get that useful stuff without creating the plots (and hanging up S-Plus), you need to use the "plot" argument. If you look at the help file, you'll see the default for plot is true; you need to set it to false: plot=F. Note the default below (from the help file):
acf(x, lag.max=NULL, type="correlation", plot=T, ...)
What Works Now?
"The lesson that I have learned is that it isn't reasonable to be agnostic about the big picture." -- David Einhorn
What makes this quote have content is that in times past one could have been responsibly agnostic about the macro view. Not too long ago one could hope to make investments that would prosper no matter which way the macro winds might blow. This view now seems to be deprecated in the conventional conversation --- though, of course, there could be several layers of illusion going on here.
Non-Wharton Students: WRDS ACCOUNT
I'll give out in class the magic word that Non-Wharton students need to access WRDS.
HW Note: HW is due Monday Sept 24.
Our HW is always due on Monday, barring some fluke event like Fall Break.
HW Note: T(T+1) or T(T+2)? ... Use T(T+2)
In class I made a mental typo when I wrote down the LB statistic. Where I had T(T+1) please put T(T+2). The story is unchanged, otherwise. Remember, LB was in any case a seat of the pants correction to the more logical BP statistic. Kudos to Ulas Jadgale for the catch (by 5pm on Wednesday!)
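To nail down the correction, here are both statistics side by side (a Python sketch on simulated white noise; the data and lag choice are illustrative):

```python
import numpy as np

# Box-Pierce vs Ljung-Box on the first m sample autocorrelations:
#   BP = T * sum(rho_k^2)
#   LB = T*(T+2) * sum(rho_k^2 / (T - k))   <-- note T*(T+2), not T*(T+1)
def acf(x, m):
    x = x - x.mean()
    c0 = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / c0 for k in range(1, m + 1)])

rng = np.random.default_rng(1)
x = rng.standard_normal(500)    # white noise, so both stats should be modest
T, m = len(x), 10
rho = acf(x, m)
BP = T * np.sum(rho ** 2)
LB = T * (T + 2) * np.sum(rho ** 2 / (T - np.arange(1, m + 1)))
print(LB > BP)   # True: the Ljung-Box weights are always a bit larger
```

Since (T+2)/(T-k) is always greater than 1, the Ljung-Box statistic always exceeds the Box-Pierce statistic --- the small-sample correction nudges the statistic up toward its chi-squared reference distribution.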
Day Five: Ljung-Box, S-Plus Code, and the Wold Decomposition
Posted for 19 September 2012
Here is a quote from the 434 Blog from just a few days more than four years ago:
"Some comments are certainly in order on the truly astounding transformations that are taking place in the financial markets. There are lots of ways in which we are not in Kansas any more and it would be foolish to ignore these astounding events --- however much we love stationarity".
That was then; this is now. There are two points to keep in mind. The first is that you can tell when a seismic shift takes place, and one would be stupid, stupid, stupid to ignore it. The bottom did not occur for another six months. The people who ran for the lifeboats did the right thing. The second is that such shifts are exceedingly rare. The old story is "they don't ring a bell when a Bear market begins" --- the new version is "they USUALLY don't ring a bell..."
Still, most markets are --- by definition --- normal markets, and this brings us to the plan. We lock in the notion of an autocorrelation test and specifically engage the Ljung-Box test --- the technically superior version of the more logical Box-Pierce test.
The original paper of Ljung and Box (1978) requires more background than you are likely to have at this point, but you can still benefit from taking a quick look. Figure 1 (page 299) especially tells a good tale. It contrasts Chi-squared approximations of the Ljung-Box and Box-Pierce statistics, and we see from the plot that Ljung-Box is the winner. Even though from the point of view of pure logic the now out-dated Box-Pierce statistic is dandy, we have to concede that from the point of view of the Chi-squared approximation, the Ljung-Box statistic carries the day.
We'll explore the Ljung-Box statistic with help from the S-Plus Finmetrics tools that are bundled into the so-called "acf test," which covers, among other things, our favorite Ljung-Box test. Specifically, we'll check out the code example and look at the results and a relevant picture: MSFTretACF.
We will also develop --- in the context of our poster child AR(1) model --- an elegant theoretical tool called the Wold representation. Along the way we will also introduce the lag operator and the so-called "operational method," a magic trick that is useful in many parts of applied mathematics. It looks goofy, but it works --- and it can be justified rigorously by fancy algebra, which we will not do.
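The punch line of the operational method for the AR(1) can be checked numerically: formally inverting (1 - rho L) gives X_t = eps_t + rho*eps_{t-1} + rho^2*eps_{t-2} + ... A Python sketch (parameters illustrative):

```python
import numpy as np

# Wold representation of AR(1): (1 - rho*L) X_t = eps_t, so formally
#   X_t = eps_t + rho*eps_{t-1} + rho^2*eps_{t-2} + ...
# Build the series both ways from the SAME shocks and compare.
rho, T = 0.6, 200
rng = np.random.default_rng(7)
eps = rng.standard_normal(T)

# Recursive construction, started from x[0] = eps[0]
x = np.empty(T)
x[0] = eps[0]
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]

# Direct moving-average construction from the same shocks
x_wold = np.array([sum(rho ** j * eps[t - j] for j in range(t + 1))
                   for t in range(T)])
print(np.allclose(x, x_wold))   # True: the two constructions agree
```

The geometric decay of the weights rho^j is also why the AR(1) "forgets" its distant past --- which is the intuition the Wold representation makes precise.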
Finally, we'll also go over the new homework and deal with any logistical issues --- such as getting all of the partnerships formed. BTW, we'll look at my favorite example of why the "partners" setup works so well.
Hey, if we have a moment, we may have some fun with the translation functions of Google. Remember those little flags?
Sidebar: Useful Blogs?
The only financial blog I read regularly is AbnormalReturns. It styles itself as "forecast free" and it mainly serves as an aggregator (and filter) on other material in the blogosphere. At one time I read ZeroHedge, but nowadays --- not so much (hence no free link).
Sidebar: Simple Experiments with set.seed and rnorm
Given almost any question about S-Plus, the way to get the definitive solution is just to run a little isolated experiment. Here is one that is motivated by our discussion of set.seed.
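For those who prefer to see the analogous experiment spelled out (here in Python for convenience; the S-Plus version with set.seed and rnorm is just as short):

```python
import numpy as np

# The "set.seed then rnorm" experiment: the same seed must reproduce
# exactly the same draws, which is what makes simulations repeatable.
rng1 = np.random.default_rng(123)
a = rng1.standard_normal(5)

rng2 = np.random.default_rng(123)
b = rng2.standard_normal(5)

print(np.array_equal(a, b))   # True: identical seed, identical sequence
```

This is the definitive way to settle questions about a random number generator: don't argue --- run the two-line experiment.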
Sidebar: Multiple Comparisons
My colleague Shane Jensen points out a nice example of the problem of multiple comparisons. This is a generic plague on statistical investigations, but his DEAD FISH example is especially compelling.
The researchers did an fMRI of a dead fish exposed to several mental tasks. Without correcting for multiplicities, several regions of the dead fish's brain pop up as active. This is sad in many ways, but saddest yet for those whose wagon is hitched to the burgeoning field of dead fish studies. We'll deal more with the problem of multiple comparisons later, but the lesson is always timely and the example is too good to risk forgetting.
Day Four: Autocorrelation Functions and Simulation Tools
Posted for 17 September 2012
The first order of business is to discuss the incoming homework, which I trust went well for everyone. There may also be some logistical issues of partnership formation, etc.
We'll then introduce the notion of an autocorrelation function, which is our front-line tool for looking for dependence in a time series. After pondering the general definition (valid for any stationary series), we will work out an explicit formula for the theoretical autocorrelation function for the AR(1) model. This calculation will also make it stunningly clear why rho must have absolute value less than 1 for our model to make sense.
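The formula we'll derive says the lag-k autocorrelation of a stationary AR(1) is rho^k, and a quick simulation backs it up (a Python sketch with illustrative parameters; in class we'll use the S-Plus acf tools):

```python
import numpy as np

# For a stationary AR(1) the theoretical autocorrelation at lag k is rho^k.
# Simulate a longish series and compare the sample ACF with rho^k.
rho, T = 0.5, 50_000
rng = np.random.default_rng(2)
eps = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]

xc = x - x.mean()
c0 = np.dot(xc, xc)
sample_acf = np.array([np.dot(xc[:-k], xc[k:]) / c0 for k in (1, 2, 3)])
print(np.round(sample_acf, 2))   # close to rho, rho**2, rho**3
```

With 50,000 observations the sample ACF sits within a few thousandths of the theoretical values --- far tighter than anything we'll see with a few years of daily returns.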
We'll then look at the S-Plus function for computing the autocorrelation function. This will provide an opportunity to discuss the nature of S-Plus objects and to look at the relevant extractors.
We'll also discuss an example application of the Finmetrics tool arima.sim which can be used to make quick simulations of a large number of models. We'll look at the code, but it is much more fun for you to cut and paste in bits at a time to see how it works. Still, we will look at the main picture.
As promised, the new homework has been posted, but we'll go over it next class time.
As I mentioned in class, there are numerous S-Plus tutorials on the web. The hard part is to find one that is really tailored to the needs of our class. For the non-time series basics, I think one does well to look at the tutorial of Konstantinos Fokianos of the University of Cyprus. The S-Plus installation also has useful PDF manuals. We'll look harder at those later. Finally, we have several new sidebars that are worth some attention.
Sidebar: Sign Prediction and Noise in AR(1)
Here is a good question to ponder. If you fix rho (say at 0.3) in the AR(1) model, but you make sigma huge, won't that really mess up your ability to predict the sign of the next period returns? Note that if you think about regression where the model is given by y = a + bx + epsilon, if you keep b fixed and jam up sigma, you lose any chance of predicting the sign of y for a fixed value of x.
How does the AR(1) model differ from the regression model?
Hint: How does "changing the scale" change rho in the AR(1) model? At a deeper level what is going on here is related to dimensional analysis which really is one of the most fundamental tricks in applied mathematics and engineering.
Once you have figured out what is going on in the AR(1) model with mean zero, you should humble yourself by then considering the case of a model with non-zero mean. Say, for example, IID returns with mean mu and standard deviation sigma. Now, what does changing sigma do here to your ability to predict signs?
Finally, how can you put both of these insights together into one "package"?
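Two small experiments capture the heart of the sidebar; here is a Python sketch (spoiler warning: the code illustrates the answers). First, rescaling an AR(1) path leaves its lag-1 autocorrelation unchanged, since rho is dimensionless. Second, with IID returns of mean mu, the probability of a positive sign is Phi(mu/sigma), which slides toward 1/2 as sigma grows.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
n, rho = 50_000, 0.3
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.standard_normal()

def lag1_corr(z):
    """Lag-1 sample autocorrelation."""
    zc = z - z.mean()
    return (zc[:-1] @ zc[1:]) / (zc @ zc)

# Rescaling by 100 cancels in numerator and denominator:
print(lag1_corr(x), lag1_corr(100.0 * x))

def prob_positive(mu, sigma):
    """P(N(mu, sigma^2) > 0) = Phi(mu / sigma)."""
    return 0.5 * (1.0 + erf(mu / (sigma * sqrt(2.0))))

# With mu fixed, growing sigma pushes the sign probability toward 1/2:
for sigma in (0.5, 2.0, 10.0):
    print(sigma, round(prob_positive(0.5, sigma), 3))
```

The "package" is dimensional analysis: only the dimensionless ratios (rho, and mu/sigma) matter for sign prediction.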
Sidebar: Hint on S-Programming (and the Editors)
The interactive way that S-Plus works is nice in many ways, but when you write functions you probably don't want to do this in an immediately interactive way. You can use the editor in S-Plus --- this is the integrated programming environment way to go. Also, the poor-man's approach (which I still use most often) is to write the function or program in a text editor (such as Notepad) and then just copy and paste into the S-Plus command line.
Sidebar: Coaching on Coding
Naturally I cannot look at your code examples individually, but I can give you some generic hints.
Many errors are caused by "boundary issues" --- that is, one somehow misuses the range of a vector, or a sum, etc. Other times there are misunderstandings of a function; here you might want to use help(rep) to make sure you understand that S function. Also, for debugging there is some generic advice: (1) test the pieces of your code individually and (2) do something like a "trace" --- that is, print out lots of intermediate results to see if the program is really doing what you want it to do at each step. Also, look at the next sidebar.
Sidebar: Debugging Simulations --- and Using set.seed
When we do a simulation, we are not using honest random numbers, we are using pseudo-random numbers. When you call a function like rnorm(1) to get a "random" N(0,1) observation, the function looks at a special hidden variable called the "seed", calculates a deterministic function of the seed, and then changes the seed in a deterministic way. If you want to exactly replicate a simulation, you can use the function set.seed to force the "seed" to have a specified value.
For example, if you issue the command set.seed(314159) and then call rnorm(5), you will get five observations that grow out of the seed 314159. Any subsequent calls to rnorm will start with the "current seed" --- so, to get a replica of the first five, you would have to call set.seed(314159) again.
Check out help(set.seed), or look at the learning module from UCLA that I found with just a little searching.
Gold Puzzles and Structural Change
In the "lost decade" of equity returns, 2001-2010, gold was the best performing "major" asset class. Still, to most economists, gold is simply a "barbarous relic". Is gold in a bubble, or has there been a structural change that supports the current price?
Here are some conventional arguments for structural change:
- The existence of ETFs that hold physical gold (not futures) creates a new demand by making purchase by investors easy, transparent, and efficient.
- The emergence of a large middle class in India and other countries with a tradition of individuals holding gold as a substantial fraction of their wealth.
- The halting of the long-running liquidation of gold reserves by the world's central banks and the emerging possibility that gold does have some legitimate role as a "reserve currency".
- Extremely low interest rates around the world have reduced the opportunity cost of holding a non-producing asset such as gold.
These seem to be the four main arguments that gold is not in a bubble. Only the "India argument" seems robust. Interest rates and central bank policy can change quickly, and ETF positions can be liquidated very quickly if sentiment shifts. Time series tools don't help us sort out this kind of argument, but it does suggest some indicators of a potential turn.
Still Need a Partner?
Anyone who is missing a partner should stay a bit after class on Monday so we can close the loops on the teams.
Pension Plans --- Way Under Funded
The big pension funds that remain are mostly those of cities and states. The amount that these governments have to put into reserves to fund their future obligations depends on their assumed return on investment. Many governments still assume 8% (or thereabouts), and they hold roughly half their assets in bonds and half in stocks. If bonds yield 4% going forward, then you need 12% equity returns to get back to 8% on the whole portfolio. This could happen --- and it would be nice if it did --- but it seems more likely that the pension plans will have to revise their return expectations. That raises costs for state and city governments --- by a lot. The bottom line is that citizens of historically generous cities and states can expect taxes to rise at much more than the rate of inflation.
More on Virtual Desktop (for Maccademia Nuts)
Documentation on virtual access for those who can't put S-Plus Finmetrics on their PC (because they don't have a Windows box).
Some people may not yet have found a partner. If you are such a person, please take a moment on Wednesday after class to meet, exchange emails, and negotiate appropriate pairs. If I forget to mention this before the end of class, please seize an appropriate moment to remind me.
Virtual Access Recourse (for Mac Captives)
From Stan Liu (9/12/2012)
"The students' experience on the virtual lab should be the same as a physical lab. They have the ability to save files to the Y: student drive. They will not be able to install programs or change settings though.
The virtual lab with the S+ software should be available by tomorrow. The students who don't have a Windows PC can connect via the directions attached (this is a draft). The official public instructions should be available here this afternoon: http://supportcenteronline.com/link/portal/632/655/ArticleFolder/949/Wharton-Public-Technology
Software: Getting S-Plus for PC (Bullet Proof!)
All students are able to download a free 1 year license of S-Plus and Finmetrics. To do so click here.
The download is 270M (zipped) so it takes perhaps 5 minutes to download. If you have problems you can contact our departmental support person Andrew Romond ( email@example.com).
Note: There are many S-Plus tutorials on the web. My advice is not to bother with them for the moment. We'll develop the tools that we need as we need them, and there are many S-Plus tools that we will never need. Moreover, we will work almost exclusively from the command line, and many of the web tutorials are for lower-level courses that rely on the graphical interface, which would only sow confusion.
AIG Saga (9/11/ 2012 via CNBC)
"The U.S. government cut its stake in American International Group to 19 percent on Monday, making a profit of $12.4 billion on the insurer's crisis-era bailout and bringing the unpopular rescue closer to its end."
"The Treasury Department sold $18 billion worth of the insurer's shares at $32.50 a piece, in what could yet be the largest ever secondary offering in U.S. history. The underwriters have the option to buy another $2.7 billion worth of AIG shares, which they can exercise in the next 30 days."
Software: Accessing R
Some people (but somehow not all people?) have had trouble downloading S-Plus. Our technical rep is working on this, and in the meantime there is GOOD NEWS. For the first homework, you can use R. For the first assignment, there is no difference between R and S-Plus except for a little difference in the interface. In fact, S-Plus and R share a common ancestor (the S language), with which they must both be compatible.
Getting R: Go to the CRAN Project and follow the instructions at the R homepage. This is bullet proof. Still, if you have a puzzle, see if Dan or Andrew can help. If they are stuck, let me know and I will post updated information on the web.
Day Three: AR(1) --- Simulation and ML Estimation
Posted for 13 September 2012
We'll cover the day 3 plan, attend to any questions on the Homework No. 1 assignment, and work a bit with S-Plus, including the examples of functions and loops in a baby AR simulation.
Our main theoretical task will be to introduce the notion of a Markov process and to note that AR(1) is a leading example --- along with its even more famous friend, the random walk.
We'll also review the notion of maximum likelihood estimation and use that principle to obtain explicit formulas for estimates of the AR(1) parameters. The only "tricky" part of the process is that we really only get estimates that are approximately the MLE.
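To see why the estimates are only approximately the MLE: conditioning on the first observation, maximizing the Gaussian likelihood reduces to ordinary least squares of x_t on x_{t-1}. A Python/NumPy sketch of this conditional (approximate) MLE:

```python
# Simulate an AR(1) path, then recover the parameters by
# regressing x_t on x_{t-1} (the conditional MLE).
import numpy as np

rng = np.random.default_rng(2012)
n, rho, sigma = 5000, 0.4, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + sigma * rng.standard_normal()

y, z = x[1:], x[:-1]                    # regress x_t on x_{t-1}
b, a = np.polyfit(z, y, 1)              # slope estimates rho; intercept near 0 here
resid = y - (a + b * z)
sigma_hat = np.sqrt(np.mean(resid**2))  # conditional MLE of sigma

print(round(b, 2), round(sigma_hat, 2))
```

With 5000 observations the slope lands close to the true rho of 0.4 and sigma_hat close to 1. The "approximately" comes from dropping the marginal density of the first observation from the likelihood.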
Important Note: Everyone will eventually need a Wharton account or a "class account". If you are a non-Wharton student, you can get a class account from the class account link.
We also have two sidebars for discussion.
Sidebar: Getting the Information Just a Little Quicker
If you can legitimately obtain information that is relevant to asset pricing before the larger market gets such information, you have the basis for a trading opportunity. There are many levels to such games, and sometimes all of the serious work is done in the preparation --- the play book, so to speak.
Still, sometimes you can just get the information first, or a surrogate for the information. Now there are firms that provide satellite photos of Wal-Mart and other parking lots --- this can give you a heads-up on sales (or at least traffic).
There is a story I would like to share about how one firm got very important information after it was released by the government (so legitimate) but before the rest of the investment community learned the information (so profitable). It all came down to a technological "glitch".
This is indirectly related to the current conversation about "stuffing" --- except the days of this "trick" are long gone (unless you find the niche where it is not!). In that case, this is definitely, "News You Can Use."
Sidebar: Picking Your Time Periods --- for Wisdom, or for Advertising
It is useful to look at how various asset classes have fared since October 2007, the previous market peak. This also gives us the opportunity to discuss a problem that always hangs in the background any time one looks at financial data: "What period does the analysis cover?"
Here we know we are looking at a special period --- we're starting at a peak. A mutual fund prospectus for a fund that started in 1975, or 1982, or 2002 may not always tell you "By the way, this was a fine time to start a fund!"
The facts behind some of these categories do not always coincide perfectly with their names. In particular, you might think of lots of things when you say "commodities" but most commodity indices are very much dominated by oil --- because that really is the relevant market weight if the class is drawn widely enough.
Sidebar: Credit Spreads --- Canary in the Coal Mine?
The message here is not really a time series message in the formal sense of this course, but it is still very informative as data analysis --- and perhaps as financial and economic insight. It speaks very clearly to the powerful concept of the "repricing of risk". We'll discuss this in class.
Day Two: A First Model --- Modest but Clear
Posted for 10 September 2012
The first task will be to continue with the development of the AR(1) model, including estimates of the parameters. The AR(1) is a modest model, but it has two nice claims to fame. First, it contains the "strawman" of a pure noise model that underlies many financial results, including the Black-Scholes model. Second, depending on the sign of ρ, the AR(1) points us to the fundamental distinction between "mean reversion" and "trend following."
A central concept of time series is that of "stationarity". This is the property that asserts that as a probability model the series is "shift invariant". We'll look at several layers of this.
We'll use the AR(1) model to deepen our discussion of stationarity. In particular, we will have a nice concrete illustration of the distinction between "volatility" and "conditional volatility."
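The AR(1) makes the distinction concrete: the one-step conditional standard deviation is just sigma, while the marginal (unconditional) standard deviation is sigma / sqrt(1 - rho^2). A Python sketch checks this against a long simulated path:

```python
# Marginal vs conditional volatility in AR(1): the unconditional
# standard deviation exceeds sigma whenever rho is nonzero.
import numpy as np

rho, sigma, n = 0.8, 1.0, 200_000
rng = np.random.default_rng(10)
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + sigma * rng.standard_normal()

marginal_sd = x.std()                   # estimated unconditional volatility
theory = sigma / np.sqrt(1 - rho**2)    # = 1.667 for rho = 0.8
print(round(marginal_sd, 2), round(theory, 2))
```

With rho = 0.8, the marginal volatility is about 1.67 times the conditional volatility, a first hint at how persistence amplifies long-run variability.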
The third task is to develop some further S skills, including the use of loops, conditionals, and functions. In particular, we'll look at some simple simulation code.
Please note that HW No. 1 is posted in the e-Handouts, and it is due on Monday, September 17. You will also find the more detailed bullet points for today's class in the e-Handouts. There is also a page that offers a guide to some S-Plus tutorials. These are not perfectly directed at financial time series, but they are worth a look.
Sidebar: PRNs Modern and Ancient
Computers cannot generate truly random numbers, but they can generate numbers that we cannot tell apart from random ones. They are deterministically generated by a reasonably simple recursion. Even knowing this, we can't guess what the next value will be if we are given the ones that came before. This surely is a very modern idea, and it is usually traced to Los Alamos in the early 1940s. Still, with good scholarship, there are roots to everything, and there were variations on PRNs known in Syria in the 2nd Century CE.
One Text or Two?
You will only need Zivot and Wang, the big fat blue thing. Also, as I mentioned before, we will only use about one-third of ZW, so one copy per team will be plenty. The other text at the book store (Krause) is purely optional. It is useful if you feel that you would like a more general and perhaps simpler introduction to S-Plus.
Anticipating Day One
Posted for 5 September 2012
There will be three main objectives, beyond going over the logistics of the class. First, we'll go over the procedural details regarding homework, teams, and the final projects.
Second, there is at least a little nibbling at our first model --- AR(1), the simplest autoregressive model and the simplest alternative to pure random noise. Over the course of the term, we'll develop considerable expertise with this model, but the main purpose of this first exposure is to help you calibrate the level of mathematics we will be using --- not too high, but not too low.
Finally, we'll have a real-time introduction to S-Plus, which is our main software tool. Before next Monday you are expected to have installed S-Plus Finmetrics on your PC and to have given it a test drive. You also need to complete the student questionnaire and bring it to our next class.
All students are able to download a free 1 year license of S-Plus and Finmetrics from OnTheHub. This can be found at the following URL: http://www.onthehub.com/tibco/
Students can register for a free account and download the software free of charge.
If you have problems you can contact our departmental support person Andrew ( firstname.lastname@example.org).
Note: There are many S-Plus tutorials on the web. My advice is not to bother with them for the moment. We'll develop the tools that we need as we need them, and there are many S-Plus tools that we will never need. Moreover, we will work almost exclusively from the command line, and many of the web tutorials are for lower-level courses that rely on the graphical interface, which would only sow confusion.
If you are still course shopping, I may be able to save you some time.
First, everyone in the course absolutely must have access to a Windows PC on which they can install software. The reason for this is that we will be using the software S-Plus with Finmetrics, and this software does not run on Macs or Unix. Also, the software cannot be placed on public machines. If you are thinking about scraping along without your own access to a Windows box, I strongly encourage you not to try. From experience, I know this requirement is a deal killer.
Second, I should underscore that this is a course about financial time series. There are lots of applications of time series, and you might think that this course could help you with engineering or medicine or some other worthwhile activity. Unfortunately, that is not the way this course works. Most of our techniques and almost all of our efforts focus on just what is special about financial time series.
Certainly, from time to time, I will mention some of the ways that time series are used outside of financial contexts, but those will just be small parenthetical remarks. The course is about the models and empirical realities of asset returns. If asset returns are not deeply and absolutely interesting to you, then you will miss out on the real fun of the class. Taking the class would be like dancing without liking the music --- possible, but not a good use of one's time.
We will be writing some programs and dealing with some serious software tools, so it helps if you like such work. We won't be doing a ton of mathematics, but if you can't remember calculus, this is not the course for you. You will also have to have some acquaintance with linear algebra (matrix and vector concepts, matrix multiplication, matrix inversion, notion of matrix rank, etc).
From the beginning we will be using expectations, variances, co-variances, probability distributions, confidence intervals, and multiple regression, so you should have had some solid exposure to all of these. Still, I do not expect that all these tools have been completely mastered. Throughout the course a serious effort will be given to deepening your understanding of the fundamentals. This is a never ending process.
Just how much mathematics, statistics, and "computer sense" you need --- well, it's almost impossible to say.
Strength in one place can make up for weakness in another. In the end, what matters most is whether you look forward to trying your hand at discovering something about the ways that asset prices evolve over time.
Almost without exception, personal motivation, honest curiosity, and dogged commitment will rule the day. These work best when combined with an informed interest in financial markets and a solid self-confidence in your own abilities and knowledge.
The first class is not one you should skip. One of the main tasks will be to sketch out a "mind map" that will provide the big picture for the whole course. We'll also be handling a lot of logistics, such as text requirements and software access. We'll also begin work with S-Plus, our main software tool.
We will again use the text by Zivot and Wang. I came to this decision only after much soul searching. We are really only going to use about a fourth of its many pages, but there is no (legal) way I could think of to get you just the pages we need. It is not required that you have a copy of the book, but life will definitely be easier for those who have comfortable access to one. Perhaps one copy per team would be a good compromise.
I wish that the book were (1) smaller, (2) more focused on what we use, (3) more "opinionated" about what works --- or doesn't, (4) more generous with coaching about S-Plus, and (5) more sincere in its engagement with real financial issues. It is, sadly, a "computer manual" and a bit of a cookbook.
Still, it is a beginning, and one needs a place to start. Eventually, I will write my own text for 434, but there is no chance that a workable version will be available this term.
December 18 NOON (Hard copy and PDF)