Wednesday, January 30, 2013

The Libertarian Problem

Over at the NYTimes, David Brooks argues a big problem with the Republican party is its libertarian bias:
While losing the popular vote in five of the last six presidential elections, the flaws of this mentality have become apparent. First, if opposing government is your primary objective, it’s hard to have a positive governing program. 
As Bill Kristol pointed out at the National Review event, the G.O.P. fiercely opposed the Dodd-Frank financial regulation law but never offered an alternative. The party opposed Obamacare but never offered a replacement. John Podhoretz of Commentary added that as soon as Republicans start talking about what kind of regulations and programs government should promote, they get accused by colleagues of being Big Government conservatives.
Alas, there are probably many swing moderates like Brooks who think the path to Republican success is to have a smarter kind of state, not a smaller one.  A vote-seeking Republican should probably adopt this attitude if he wants to get elected.

To libertarians a smarter state is a smaller state, but most voters aren't libertarians. Bush II gave us more (and compassionate) government, and he failed not because he wasn't sincere, but because more government is never smarter. Dodd-Frank and Obamacare just add more bureaucracy to sectors that are already over- and mis-regulated.

Tuesday, January 29, 2013

The Signal and the Noise Seems Better Than It Is

I read The Signal and the Noise because I got two copies as Christmas presents.  It was an enjoyable read, but like popcorn: I liked it while reading, yet afterward I wasn't very satisfied.

There were some neat anecdotes, like the one about how the McLaughlin Group's forecasts were about 50-50, but I didn't think it meant these guys were clueless so much as that McLaughlin was basically asking for opinions where the true probability is about 50-50. As Steve Sailer notes, no one wants to hear depressing statistics like that the murder rate in Detroit will be higher than in the suburbs, or that per capita income in Somalia will be lower than in Estonia in 2010, or lots of other really important facts about the future, because we take those for granted. Instead, we want to know if iPhones will be more popular than Samsung phones, a question whose true probability is closer to 50-50.

Then there was the section on poker, and what I thought most interesting was that to capitalize on your skill here you need to play an ungodly number of hands.  I know some people play a couple hours of Texas Hold'em every night, and concede that skill to them, happily.  It just isn't that interesting to me.

In weather forecasting there are structural models that look at where weather is now and how it's moving, and forecast the interaction.  These work well for about 8 days, but after that, long-term averages from the Farmer's Almanac work better.  In economics, simple vector autoregressions that are basically regression-to-the-mean models beat averages for about a year, and then long-term averages dominate.  Structural models are worse in both the short and long term.  This should give you pause when you listen to a macroeconomist, and indeed their stock as public pundits has depreciated quite a bit since their 1970s heyday.
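The horizon pattern can be sketched with the simplest regression-to-the-mean model, an AR(1), whose h-step forecast shrinks toward the long-run mean as the persistence parameter decays. This is a toy illustration of mine, not any particular forecaster's model:

```python
def ar1_forecast(x_t, mu, phi, h):
    """h-step-ahead point forecast for an AR(1): mu + phi**h * (x_t - mu)."""
    return mu + (phi ** h) * (x_t - mu)

# made-up numbers: long-run mean 3.0, persistence 0.6, current value 8.0
mu, phi, x_t = 3.0, 0.6, 8.0
short = ar1_forecast(x_t, mu, phi, 1)   # near-term: current conditions dominate
long_ = ar1_forecast(x_t, mu, phi, 20)  # long-term: essentially the long-run mean
```

At short horizons the forecast is mostly current conditions; a few years out, it is indistinguishable from the unconditional average, which is why simple averages eventually dominate.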

For example, when I joined a bank as an economic analyst around 1987, the economists all remembered when their departments had a whole floor of economists generating forecasts on everything (employment by sector in Modesto, California; PPI by commodity type; interest rates over the next 5 years).  By then, the group was down to about 8 economists, and now most big banks have one economist, and he's just used for PR to put on CNBC; the internal decision makers don't even pretend to listen to him (at Moody's, we had an economist, and that position is still strangely respected, but I don't remember anyone once mentioning his work in internal discussions). We had debates between monetarist-based models and Keynesian models, and they all were no better than guessing.

I would have liked to hear some inside scoop on his baseball model, which seems to have been pretty good, but I guess he figured that was too much inside baseball.  As for his election model, it's kind of interesting that he simply averaged polls, a pretty straightforward approach.  Most things that work are like that: so simple they don't seem really profound. It would be nice if the key to forecasting were something really tricky like a Kalman filter or a 100-equation macro model, but I know these don't work because I've seen them fail firsthand. Nonetheless, one often meets someone with more education than experience who thinks their new technique will work, and if it's complicated enough, they won't ever have to admit it doesn't.
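Poll averaging really is that simple. Here is a minimal sketch (my own toy, not Silver's actual method; the sample-size weighting is an embellishment, since a plain mean already captures the spirit):

```python
def average_polls(margins, sample_sizes):
    # weight each poll's margin by its sample size, then take the weighted mean
    total_n = sum(sample_sizes)
    return sum(m * n for m, n in zip(margins, sample_sizes)) / total_n

margins = [2.0, 1.5, 3.0, 0.5]   # hypothetical candidate leads, in points
sizes = [1000, 500, 800, 700]    # hypothetical respondents per poll
estimate = average_polls(margins, sizes)
```

No Kalman filter, no 100 equations: just a weighted mean that sits somewhere inside the range of the individual polls.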

As for Silver's take on Tetlock's hedgehogs vs. foxes, I think John Cochrane had a better takeaway:
Milton Friedman was a hedgehog. And he got the big picture of cause and effect right in a way that the foxes around him completely missed. Take just one example, his 1968 American Economic Association presidential speech, in which he said that continued inflation would not bring unemployment down, but would lead to stagflation. He used simple, compelling logic, from one intellectual foundation. He ignored big computer models, statistical correlations, and all the muddle around him. And he was right. 
In political forecasting, anyone’s success in predicting cause and effect is even lower. U.S. foreign policy is littered with cause-and-effect predictions and failures—if we give them money, they’ll love us; if we invade they will welcome us as liberators; if we pay both sides they will work for peace, not keep the war and subsidies going forever. 
But the few who get it right are hedgehogs. Ronald Reagan was a hedgehog, sticking to a few core principles that proved to be right.

Monday, January 28, 2013

Low Vol Football Betting

Analytic Investors has now taken "low volatility" investing to the next level: football (see here, click on NFL Analytic Alphas). That is, real football, with hands, not that sport they show on TV all the time in Europe. Anyway, they find low volatility actually helps one bet:
Contrary to the zeitgeist, the anomaly attests that higher risk doesn’t necessarily imply higher return. In fact, research, both at Analytic and elsewhere, has suggested that portfolios comprised of the lowest risk equities outperform their risky counterparts. This phenomenon is not only apparent in domestic and global equities, but in the majority of the world’s asset classes. Research has extended this paradigm to horse racing and gambling in general. After taking a look at NFL betting markets, we’ve found evidence of low volatility outperformance at the franchise level and with respect to heavy underdogs (longshots). ...
we then partitioned the teams into one of three categories: low risk, average risk, and high risk, in accordance with this volatility. Unambiguously, the organizations with the most stability in Alpha (low risk) also tend to be the best investments year in and year out.
Above is a picture of the returns by odds, and as in the horse-racing literature, they find longshots have the worst returns, just like high-volatility equities.  But more interestingly, they find low-volatility franchises have higher returns.  This effect is everywhere! This doesn't include the vig, though, so hold off on the new hedge fund.
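The tercile sort described in the quote might look something like this (my own sketch with made-up data, not Analytic Investors' actual methodology):

```python
import statistics

def partition_by_vol(alpha_history):
    """Rank teams by the stdev of their yearly betting alphas; split into terciles."""
    vols = {team: statistics.stdev(alphas) for team, alphas in alpha_history.items()}
    ranked = sorted(vols, key=vols.get)      # lowest volatility first
    k = len(ranked) // 3
    return ranked[:k], ranked[k:2 * k], ranked[2 * k:]  # low, average, high risk

# made-up yearly alphas for six hypothetical franchises
history = {
    "A": [0.02, 0.021, 0.02], "B": [0.10, -0.09, 0.08], "C": [0.01, 0.012, 0.01],
    "D": [0.05, -0.04, 0.06], "E": [0.00, 0.004, 0.00], "F": [0.12, -0.11, 0.10],
}
low, avg, high = partition_by_vol(history)
```

The claim is then just that the `low` bucket's average alpha beats the `high` bucket's, the betting analogue of low-volatility stocks beating high-volatility ones.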

And they see the Ravens covering the 4-point spread this weekend, but it's one game, so either way it proves nothing.

Sunday, January 27, 2013

Don't Renounce Instincts



In this Bloggingheads video between Robert Wright and Buddhist Gary Weber, Weber notes that he has achieved such a state of bliss that he no longer feels he cares more about his daughters than about other children.  This he finds transcendent and logical.  I think it's sad. As a child, I really liked the thought that my mom liked me more than others, and so unlike others really cared about my little travails. As a parent, I feel I need to protect and nurture my children because others won't be nearly as attentive.  It's simply an evolutionary advantage to care more about one's kids than about others.  It's hard-wired, and so I don't feel one should be proud of losing it, as if it were one of one's more irrational prejudices.  It's good that we aren't simply optimizing 'the collective', but instead our kin, because remember: ants are not merely selfless, but genocidal and unempathetic.

W. Somerset Maugham's book The Razor's Edge (1944) tells the story of Larry Darrell, an American pilot traumatized by his experiences in World War I, who sets off in search of some transcendent meaning in his life. His rejection of conventional life and search for meaningful experience allow him to thrive while the more materialistic characters suffer reversals of fortune. By the end, all the other characters are suffering for their foolish objectives in various ways, whereas Larry achieves the total consciousness alluded to by Bill Murray's character in Caddyshack. The book ends noting that Larry achieves "happiness" as an itinerant worker, as if that is all one should want.

Maugham anticipated the beat culture: the glorification of finding one's 'authentic self' via a renunciation of material rewards in favor of the inner happiness one gets from getting high and listening to music.  He was born into the British gentry and so could take status for granted, because back then one's class, especially in Britain, was not merely a function of one's wealth. I watched the 1984 version of The Razor's Edge recently and was rather taken aback by the glib supposition that a man without means or connections in America could simply enjoy his transcendent being without material success. To think that you can achieve bliss by being a monk doing manual labor your whole life is just as dumb as thinking that you can achieve greater glory just getting high every day and listening to music.

A desire for status and achievement is a deep part of our needs.  It makes sense from an evolutionary perspective, because that way people are incented to interact and try to be helpful to others, create things others want, and receive material rewards for doing so.  Surely a desire for status can go too far, like any desire, but to presume you could be a content toll-booth operator is naive.

The desire for status is hard-wired and, like all instincts, good in moderate doses.  Such a desire leads one toward benchmarking against others, and this leads to a zero risk premium.  That's in my book.  Like anyone with a Big Idea, I see it everywhere, and here I think it's absurd to think the good objective in life is losing one's attachment to things like popularity and success. This was really brought back to me reading the existentialists and noting how profoundly sad they all were (Schopenhauer, Nietzsche, Kierkegaard), because they were really lonely; they didn't have wives or children.  I remember reading Henry James telling his brother that his most singular attribute was his pronounced loneliness, which as a homosexual in that day must have been really tough. We have hard wiring that may not make sense from a logical perspective, but we shouldn't dismiss these desires simply because they aren't rational from a utilitarian perspective.

Wednesday, January 23, 2013

Don't Innovate Too Much

Dick Thaler gives out some advice to econ students:
My advice for young researchers at the start of their career is… Work on your own ideas, not your advisor’s ideas (or at least in addition to her ideas).
I think this is bad advice.  Kids love to think that they can do it all themselves in an Ayn Rand fantasy, independent of others.  Yet to succeed is to succeed socially, because we are social animals, so we need not just good ideas, but good ideas people are receptive to. This means understanding what customers/readers/users want, and what colleagues want to help you with.  Sure, you can do it yourself with sheer brilliance, but it's infinitely more difficult than simply extending someone else's work.

In the book I’ll Have What She’s Having, the anthropologist authors argue that we mostly copy everyone else: first our parents, then our peers, then anyone who seems to be doing well. Generally it works: if you want to figure out how to ride the subway, you could do worse than stand back and watch everyone else. Evolutionary biologist Mark Pagel takes this a step further, stating that emulating others is the basis for almost every idea we have, and notes that very few of us create things that are later used by many others; think of the number zero, the wheel, soap, iron, the alphabet. Kids aren't good at listening to their parents, but they are predictable imitators, because that's a good strategy.

We are fundamentally an imitating species. Most innovators spend their formative years producing derivative work: Bob Dylan's first album contained 11 cover songs, comedian Richard Pryor began his career doing imitations of Bill Cosby, Hunter S. Thompson retyped The Great Gatsby just to get a feel for writing a great novel, and most of Freud's theories were riffs on Greek literature and Shakespeare.  We copy to acquire knowledge that becomes the foundation for variations and extensions that appear to outsiders as thinking outside the box. As Isaac Newton said, we stand on the shoulders of giants (itself a rephrase of a saying by the 12th-century French philosopher Bernard de Chartres).

You can't make variations on a theme without understanding that theme, and without having the theme you are riffing on be well-accepted already.  Personally, I wrote my dissertation on what interested me, low volatility stocks, and thought it was fascinating.  It was a total flop--I got zero flyouts--mostly because it had no supporters in our finance department.

You need mentors and advocates.  Sure, there's the occasional Ken Arrow with his dissertation on the impossibility theorem, or John Nash's theory on, well, Nash equilibrium, but those are exceptions.  Most successful academics published their first paper by merely extending their dissertation advisor's ideas (Joe Stiglitz edited Paul Samuelson's collected works as a grad student, giving him a good feel for how a paper should be presented). As Thaler is mainly famous for some papers finding that 3-year winners and losers mean-revert, I think his main takeaway is that it helps to be lucky, because those papers were seriously flawed and wouldn't have made it out of many incubators (see here), and that result has been orphaned because it doesn't hold up.  For example, the low-priced bias was so large that there was a January bump in portfolio returns 12 and 24 months after the portfolio formation period as these low-priced 'loser' stocks bounced from their bid of 1/2 to their ask of 5/8 each January.
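That bid-ask bounce is easy to verify numerically: a 'loser' stock marked at its bid of 1/2 in December and its ask of 5/8 in January shows a 25% measured "return" with no change in value at all.

```python
# a December "loser" marked at its bid, bouncing to its ask in January
bid, ask = 0.5, 0.625          # 1/2 and 5/8, from the old eighths tick size
bounce = (ask - bid) / bid     # measured "return" from pure bid-ask bounce, 25%
```

On a low-priced stock, a single tick of spread can swamp any genuine reversal effect in the portfolio return.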

Back in the 1990s, successful students simply applied the latest technique (e.g., vector autoregressions, generalized method of moments) using TeX, which was then instantly credible because it looked a lot like what those top economists were doing. Add a top prof as your advocate, because you are adding a reference useful to his reputation, and you have job offers. Copying form is probably more important than substance, because it's not only easier to do but easier to evaluate, and as much as decision-makers say they love innovation, they just mean the kind of innovative work they and their circle of colleagues are doing (and, of course, Einstein, Kepler, etc.).

I really like this old piece by Robin Hanson on The Myth of Creativity:
To succeed in academia, my graduate students and I had to learn to be less creative than we were initially inclined to be. Critics complain that schools squelch creativity, but most people are inclined to be more creative on the job than would be truly productive.

Tuesday, January 22, 2013

A Happy Skewness Delusion

There are several papers asserting that investors like positive skew in their returns. This is because empirically investors tend to be highly undiversified and have a bias towards highly volatile stocks, so they seem to want big lottery-type payoffs (incidentally, this is the exact opposite of what Nassim Taleb states is common, that people prefer payoffs that have high modes and low means--negative skew). In the early days of portfolio theory, the desire to take such big risks was considered side by side with minimizing variance: Markowitz looked at both approaches back in 1952 but eventually sided with the total variance approach, and the preference for positive skew was abandoned as an exception to a mean-variance rule.

Alas, it keeps cropping up, as in papers by Kraus and Litzenberger (1976), Harvey and Siddique (2000), or Shefrin and Statman (2000). I think Shefrin and Statman do the best job here, noting that this is strictly irrational, in that if there is a skewness preference and a risk-aversion preference, then the skewness preference will be pretty tame, and as Post, van Vliet and Levy (2006) note, second-order relative to the equity risk premium (i.e., less than 1%). Rational risk-averse investors would not take a lot of risk with some of their assets, at least not enough to affect pricing much. That's why I like the Shefrin and Statman model: they explicitly say this doesn't make sense if investors are rational risk-averse investors (at least with risk conventionally defined), unlike Harvey and Siddique, who try to cram this into a fully rational model and ignore how it all fits together.

In any case, I was looking at this and came across an interesting take by Brunnermeier, Gollier and Parker (2008).  The model gets pretty convoluted, but the intuition is very interesting: the benefits of being ex ante biased towards skewed assets outweigh the ex post costs of holding inefficient portfolios.  These highly skewed assets make it possible to conceive of an aspirational state of much greater wealth, as such an event is possible, though improbable.  Your excessive optimism makes you better off because you anticipate a small-probability payoff incorrectly, but since your happiness is the discounted value of these beliefs, you are a happier deluded self. Think about all the time you have spent considering what you would do if you won the lottery, as surely everyone has done that at least once. The cost can be pretty low (for a lottery ticket, $1), but the benefit of visualizing that fantasy is pretty large, and it doesn't keep me in a debilitating stupor because I don't do this much (moderation in all things).
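A stylized numeric illustration of the intuition (my own numbers, not the Brunnermeier-Gollier-Parker model): a slightly inflated subjective win probability makes the anticipated value of a $1 ticket positive even though its expected monetary value is negative.

```python
price, payoff = 1.0, 1_000_000.0
p_true = 1 / 2_000_000              # actuarial odds of winning
p_felt = 3 / 2_000_000              # the dreamer's inflated subjective odds
ev_true = p_true * payoff - price   # negative: a bad investment, ex post
ev_felt = p_felt * payoff - price   # positive: a pleasant daydream, ex ante
```

The ex post cost of the bias is 50 cents per ticket; the ex ante "felt" value of the daydream is positive, which is the whole trick.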

Now, as an explanation of the low-vol effect, I think this Brunnermeier et al model is inferior to the Shefrin and Statman approach, because it pretends to be a rational, general equilibrium model when it's not (funny how general equilibrium models built on fundamental preferences are usually quite parochial because of their limited assumptions).  But I like the intuition, or find it intriguing.  It's nice to think we are on a journey under sealed orders, with some higher purpose that we aren't privy to but that exists and is important. It may not be true, but it's comforting to think so, and doesn't really cost much.

Monday, January 21, 2013

CBO Shenanigans

The US budget deficit has been large for quite a while, and there doesn't seem to be a lot in terms of revenue increases or expenditure cuts in the pipeline. Consider that in the latest budget deal, tax hikes will raise $22 billion this year, while the special-interest business and energy tax credits will reduce revenue by $65 billion.

Thus, I was surprised to see this chart from our non-partisan Congressional Budget Office circulating around the internet, showing a mysterious big decline in the Federal debt/GDP ratio starting in about a year:


So, what's going to cause this trend to reverse course in a few months?  Two things. First, the Federal deficit is projected to basically halve every year for the next couple of years, getting to around zero in 5 years. As mentioned above, the profiles in courage recently displayed should give one pause, especially considering how a newly elected Democrat couldn't even cut the defense budget meaningfully.  Secondly, growth over the next 5 years is projected to average 3.2% annually.

That kind of growth hasn't happened since the late nineties, or the late eighties before that.  It's an outlier. How can they get away with that? Well, they put a recession in next year, which would seem to suggest they are being pessimistic, but then this is countered by years of 4%+ growth.

Economists are very bad at predicting business cycles. It would be far more honest to simply run a vector autoregression that projected a slow reversion to the long-run trend, and leave it at that. Instead, using the illusion of control that comes out of their structural model, the CBO thinks they have not merely the mean but the fluctuations tied down. Heck, the last time we were in a recession, most economic forecasts were still predicting growth and had no clue we were actually in a downturn.  So this recession will in some sense be unique like all the past ones, the exception being that it will be the one we actually saw coming (and was mild).

But a fluctuation in a forecast is very useful: if growth comes in worse than expected, you simply push the recovery out over time and it looks like you saw it all coming (first rule of forecasting: do it early and often). If growth comes in better than expected next year, you highlight how pessimistic you were (leaving those 4% growth years for someone else to explain, not that anyone gets called on projections out over 2 years anyway).

I understand why weathermen over-predict rain when it is improbable: it sells. People prefer to hear a 20% chance of rain to a 5% prediction, even if the latter is better calibrated.  Thus, small probabilities of precipitation are overestimated in general. But the CBO is relied upon for big decisions and isn't selling advertising, and they clearly are simply pulling numbers out of the air when they 1) forecast recessions and 2) forecast 5 years of above-average growth.
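Calibration here can be made concrete with a Brier score, the mean squared error of probability forecasts, where lower is better (the numbers are illustrative, my own, not from any weather service):

```python
def brier(p, outcomes):
    """Brier score of a constant probability forecast against 0/1 outcomes."""
    return sum((p - o) ** 2 for o in outcomes) / len(outcomes)

outcomes = [1] * 5 + [0] * 95   # rain on 5 of 100 days: true rate 5%
hedged = brier(0.20, outcomes)  # the crowd-pleasing over-prediction
honest = brier(0.05, outcomes)  # the calibrated forecast
```

The honest 5% forecast scores better; the 20% forecast trades accuracy for never looking badly wrong on the days it rains.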

The pathetic thing is they know what they are doing, and how useful the down-up biased forecast is for managing ex-post rationalizations in real time. The first key to being a good quant is honesty, because when you start lying to others, you start lying to yourself, and then you are simply an articulate confabulating hack for whatever prejudice is driving your big picture.

Thursday, January 17, 2013

Getting Back to Basics

From Bloomberg:
Chief Executive Officer Michael Corbat said, “We’ve got to get to a point where we stop destroying our shareholders’ capital.”
At least he understands what they have been doing.  If he's serious, why not start by selling the bank off in manageable parts?  I just don't see how it's possible to efficiently manage a 2 trillion dollar bank with 260k employees. The only people this makes sense for are senior management and politicians, as it's become the new Fannie Mae, a great way to give bureaucrats multi-million dollar payouts in thanks for regulations that hurt competition under the rubric of safety.

Wednesday, January 16, 2013

Interesting Equilibria

Over on The Edge, they asked 150 people what worries them most.  One of the more interesting responses, by Dylan Evans, notes that while democracy is pretty good, there might be something better. Unfortunately, we may never find that better system.
the appendix persists because individuals with a smaller and thinner appendix are more vulnerable to appendicitis. So the normal tendency for useless organs to atrophy away to nothing is blocked, in the case of the appendix, by natural selection itself. Perhaps this idea will turn out not to be correct, but it does illustrate how the persistence of something can conceivably be explained by the very factors that make it disadvantageous. 
Democracy is like the appendix. The very thing that makes majority dissatisfaction inevitable in a democracy—the voting mechanism—also makes it hard for a better political system to develop.
It's a neat analogy and a deep problem.  The thought of the median voter, including the great unwashed one sees at amusement parks, deciding policy forever seems somewhat limiting.  After all, the nautilus's pinhole eye is far inferior to an eye with a lens, but in the 500 million years since the Cambrian explosion, the nautilus is still stuck with its pinhole camera eye even though it would benefit, greatly and immediately, from a lens. It is like a hi-fi system with an excellent amplifier fed by a gramophone with a blunt needle.

Another fascinating post was by Clifford Pickover:
Zeilberger considers himself to be an ultrafinitist, an adherent of the mathematical philosophy that denies the existence of the infinite set of natural numbers (the ordinary whole numbers used for counting). More startlingly, he suggests that even very large numbers do not exist—say numbers greater than 10 raised to the power of 10 raised to the power of 10 raised to the power of 10. In Zeilberger's universe, when we start counting, 1, 2, 3, 4, etc., we can seemingly count forever; however, eventually we will reach the largest number, and when we add 1 to it, we return to zero!
I'm hopped up on pain meds now, and this thought is blowing my mind.
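In miniature, that universe is just modular arithmetic (a toy of mine, with 9 standing in for Zeilberger's giant number):

```python
LARGEST = 9  # stand-in for 10**10**10**10 in Zeilberger's universe

def succ(n):
    # successor on a finite number line: the largest number wraps to zero
    return (n + 1) % (LARGEST + 1)
```

Counting proceeds normally, 3 to 4 to 5, until you hit the largest number, and then `succ(9)` returns you to zero.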

Tuesday, January 15, 2013

Falkenblog in Top100

Falkenblog made a list of the top 100 finance blogs.  Thus, everything I write, like everything on the internet in general, is even more true.  If you like this blog, please buy my book, The Missing Risk Premium, which is $10 on Kindle, $15 in paperback.

note: I had surgery on my long bicep tendon today, a residual of prior damage, as benching 405 was probably not one of my better mid-life goals.  Hopefully I'll be back, but right now I suspect I've been given placebos by some new penny-pinching Death Panel, because I'm in a lot of pain.

Interestingly, for my pre-op exam, they basically asked me a bunch of questions, including: do I wear seat belts, what type of birth control I use with my wife (and the setup, whether I have an active sex life--I didn't ask for a definition of 'active'), whether I feel safe in my relationship, and if I sometimes feel anxious (I said 'sometimes' to the last, which I think must be true for any non-psychotic, but perhaps in the future I won't be able to get a concealed carry permit).

They asked these again at a later phone consult from the surgeon's office, so I stopped them the second time and asked, 'What do these have to do with susceptibility to a bad reaction to anesthesia?'  She said, 'I have no idea, we just have to ask them.' I imagine some bureaucrat thought this was a great idea, but I'd rather my doctor fix my parochial problems and not have to spend any time on such things.  Always back to the Serenity Prayer.

Monday, January 14, 2013

Kurzweil on Creating a Mind

I just finished reading Ray Kurzweil's How to Create a Mind, his latest, on how machines will soon (2030ish) pass the Turing test and then basically become like the robots envisaged in the '60s, with distinct personalities, acting as faithful butlers to our various needs.

And then today, over on The Edge, Bruce Sterling says that's all a pipe dream, that computers are still pretty dumb.  As someone who works with computer algorithms all day, I too am rather unimpressed by a computer's intelligence, but Kurzweil made me a little more appreciative of what they can do.

He notes that IBM's Watson won a Jeopardy! contest by reading all of Wikipedia, a feat clearly beyond any human mind. Further, as Kurzweil notes, many humans are pretty simple, so it's not inconceivable a computer could replicate your average human, if only because the average is pretty predictable. Siri is already funnier than perhaps 10% of humans.

Humans have what machines currently don't, which is emotions, and emotions are necessary for prioritizing, and good prioritization is the essence of wisdom.  One can be a genius, but if you are focused solely on one thing you are autistic, and such people aren't called idiot savants for nothing.

Just as objectivity is not the result of objective scientists but an emergent result of the scientific community, consciousness may not be the result of a thoughtful individual, but a byproduct of a striving individual enmeshed in a community of other minds, each wishing to understand the other minds better so that they can rise above them. I can see how you could program this drive into a computer: a deep parameter that gives points for how many times others call their app, perhaps.

Kurzweil notes that among voles, the species that form monogamous bonds have oxytocin and vasopressin receptors that give them a feeling of 'love', and those where dads are just sperm donors don't. Hard-wired emotions dictate behavior.  Perhaps computers can have emotions: you could put in something for sadness when they aren't called by other programs or people, or a desire to see users with physical correlates of fertility like smooth skin and toned bodies. But it's one thing to program an aversion to solitude, another to instill a desire for a truly independent will.

Proto-humans presumably had the consciousness of dogs, so something in our striving created human consciousness incidentally. Schopenhauer said "we don't want a thing because we have found reasons for it, we find reasons for it because we want it." The intellect may at times seem to lead the will, but only as a guide leads the master. He saw the will to power, and the fear of death, as the essence of humanity.  Nietzsche noted similarly that "happiness is the feeling that power increases."  I suppose one could try to put this into a program as a deep preference, but I'm not sure how, in that, what, to a computer, could be analogous to the power wielded by humans?

Kierkegaard thought the crux of human consciousness was anxiety, worrying about doing the right thing.  That is, consciousness is not merely having perceptions and thoughts, even self-referential thoughts, but doubt: anxiety about one's priorities and how well one is mastering them. We all have multiple priorities--self-preservation, sensual pleasure, social status, meaning--and the higher we go, the more doubtful we are about them. Having no doubt, like having no worries, isn't bliss; it's the end of consciousness.  That's what always bothers me about people who suggest we search for flow, because like good music or wine, it's nice occasionally like any other sensual pleasure, but only occasionally, in the context of a life of perceived earned success.

Consider the Angler Fish. The smaller male is born with a huge olfactory system, and once he has developed some gonads, smells around for a gigantic female. When he finds her, he bites into her skin and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. He is then fed by, and has his waste removed by, the female's blood supply, as the male is basically turned into a parasite. However, he is a welcomed parasite, because the female needs his sperm. What happens to a welcomed parasite? Other than his gonads, his organs simply disappear, because all that remains is all that is needed. No eyes, no jaw, no brain. He has achieved his purpose, has no worries, and could just chill in some Confucian calm, but instead just dissolves his brain entirely.

A computer needs pretty explicit goals because otherwise the state space of things it will do blows up, and one can end up figuratively calculating the 10^54th digit of pi--difficult, to be sure, and not totally useless, but still pretty useless.  Without anxiety one could easily end up in an intellectual cul-de-sac and not care.  I don't see how a computer program with multiple goals would feel anxiety, because programs don't have finite lives, so they can work continuously, forever, making it nonproblematic that one didn't achieve some goal by the time one's eggs ran out.  Our anxiety makes us satisfice, or find novel connections that don't do what we originally wanted but are very useful nonetheless, and in the process help increase our sense of meaning and status (often, by helping others).

Anxiety is what makes us worry we are at best maximizing an inferior local maximum, and so need to start over, and this helps us figure things out with minimal direction.  A program that does only what you tell it to do is pretty stupid compared to even stupid humans, and don't think for a second neural nets or hierarchical hidden Markov models (HHMMs) can figure out stuff that isn't extremely well defined (like figuring out captchas, where Kurzweil thinks HHMMs show us something analogous to human thought).
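A toy illustration of that restart point (my own sketch, nothing from Kurzweil): a greedy hill-climber never doubts its current position and so sits on whatever bump it starts near, while one programmed to "doubt"--to throw its progress away and restart from scratch--ends up at a better peak with no extra direction:

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: accept only uphill moves, never doubt the current spot."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) > f(x):
            x = cand
    return x

def restart_climb(f, n_restarts=20, lo=-10.0, hi=10.0):
    """'Anxious' search: repeatedly abandon the current peak and start over."""
    best = None
    for _ in range(n_restarts):
        x = hill_climb(f, random.uniform(lo, hi))
        if best is None or f(x) > f(best):
            best = x
    return best
```

On a multimodal objective the plain climber's answer depends entirely on its starting bump; the restarter's does not.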

Schopenhauer, Kierkegaard, and Nietzsche were all creative, deep thinkers about the essence of humanity, and they were all very lonely and depressed. When young they thought they were above simple romantic pair bonds, but all seemed to have deep regrets later, and I think this caused them to apply themselves more resolutely to abstract ideas (also, alas, women really like confidence in men, which leads to all sorts of interesting issues, including that their doubt hindered their ability to later find partners, and that perhaps women aren't fully conscious (beware troll!)). Humans have trade-offs, and we are always worrying whether we are making the right ones, because no matter how smart you are, you can screw up a key decision and pay for it the rest of your life. We need fear, pride, shame, lust, depression and envy, in moderation, and I think you can probably get those into a computer.  But anxiety, doubt, I don't think can be programmed, because logically a computer is always doing the very best it can, in that its only discretion is purely random, and so it perceives only risk and not uncertainty, and thus, no doubt.

The key, as Minsky always told me, was uncertainty, true uncertainty as discussed by Keynes and Knight.  If it is truly non-quantifiable, then a computer cannot understand it, and will never empathize with us correctly, never accurately have the 'theory of mind' that comes naturally to humans.  After all, without uncertainty there really isn't doubt, which Kierkegaard said was the essence of consciousness. So, the search for AI, and a model of 'real risk', seemed joined at the hip.

Sunday, January 13, 2013

Is Broker-Dealer Leverage the Elusive SDF?

Every year thousands of young people attend elite schools to learn about business. One of the core courses is Corporate Finance, and one of the key principles they learn concerns risk and reward. The standard theory is a framework--not a model--which holds that the expected return of a financial asset is a function of risk, where you are paid to endure this unpleasant, irreducible characteristic. The quantity of risk is measured by a covariance with priced risk factors, as-yet-unidentified time series like the stock market, and there's a linear relation between this risk metric and expected returns. Risk measures the 'how much'; the price you receive for it comes from risk premiums. Thus, expected returns are determined by this crucial, objective characteristic (Cliff Asness nicely describes the expected return as dominating the variance in the foreword to Antti Ilmanen's Expected Returns, accurate because if you ever do a mean-variance optimization, those return assumptions really drive the end result, though alas they have much greater uncertainty).

Yet as Mark Rubinstein said about the CAPM and its extensions, “More empirical effort may have been put into testing the CAPM equation than any other result in finance. The results are quite mixed and in many ways discouraging.” Eugene Fama and Kenneth French called the CAPM “empirically vacuous,” and APT creator Stephen Ross noted that “having a low, middle or high beta does not matter; the expected return is the same.” These are all major proponents of this approach, so I think it's fair to say that the standard model is a theory in search of validation. As the elusive risk factor is clearly not 'the market', but something like the market, new factors are proposed all the time.

Remember the CAPM, with its simple single factor model,

E(Ri) = Rf + bi[E(Rm) - Rf]

Supposedly, this worked great, but then Fama and French showed that when you pre-sort data by size, there's no relationship with beta.  Thus, they created the new 3-factor Fama-French model:

E(Ri) = Rf + bi[E(Rm) - Rf] + bsize(Rsmall - Rbig) + bvalue(Rvalue - Rgrowth)
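As an aside, these b's are just time-series regression coefficients. A minimal sketch with made-up data (the factor series and loadings below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 240  # 20 years of monthly observations

# Hypothetical factor returns: market excess, small-minus-big, value-minus-growth
mkt = rng.normal(0.005, 0.04, T)
smb = rng.normal(0.002, 0.03, T)
hml = rng.normal(0.003, 0.03, T)

# A stock with known loadings plus idiosyncratic noise
r_excess = 1.2 * mkt + 0.5 * smb - 0.3 * hml + rng.normal(0, 0.02, T)

# OLS: regress excess returns on a constant and the three factors
X = np.column_stack([np.ones(T), mkt, smb, hml])
alpha, b_mkt, b_size, b_value = np.linalg.lstsq(X, r_excess, rcond=None)[0]
```

The fitted b's recover the loadings; in the standard theory those loadings, times the factor premiums, are supposed to determine the stock's expected return.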

At this point, all bets were off. There were two ways to theoretically rationalize a factor. One could simply assert it as being intuitively risky, as Fama and French did, which via Arbitrage Pricing Theory logic should then be priced.  Alternatively, one could write down a Stochastic Discount Factor, as Harvey and Siddique did in their coskewness paper (2000):

m = a + b·Rm + c·Rm^2
Or as Jacobs and Wang did in their 2001 consumption growth volatility paper:


'm' is sort of like superstring theory's M: it can be whatever you want it to be, and the great elders of rigor have proven these are all kosher. So, once written down as above, you can simply append the resulting 'betas' to the 3-factor Fama-French model and not explain exactly how they work together, given one's earlier motivation contained no value or growth factor. (It reminds me of how Marx's totally wrong but complicated and rigorous Das Kapital allowed generations of theorists to talk about the Hegelian dialectic as if it were real, because nonbelievers simply didn't want to waste time on it, and insiders could all point to thoughtful people who believed in and extended the great work.)

Researchers Tobias Adrian of the Federal Reserve Bank of New York, Erkko Etula of the Federal Reserve Bank of New York (now at Goldman Sachs), and Tyler Muir of Northwestern University have a hot paper that is the latest best hope for the elusive risk factor that explains asset returns (Financial Intermediaries and the Cross-Section of Asset Returns). They argue that the key is broker-dealers, specifically, their leverage constraint. The nice thing is that one has quarterly data on their leverage going back to 1968 or so, so you can throw it against the wall, and guess what: "Our single-factor model prices size, book-to-market, momentum, and bond portfolios with an R2 of 77 percent and an average annual pricing error of 1 percent." That is, 25 size-value sorted portfolios, plus a momentum portfolio, and some US T-bond portfolios.


Considering that CAPM betas can't explain anything, how does this work?  I'm not sure, but I suppose a lot of it is overfitting: there are only 25 portfolios targeted, and there are lots of potential SDFs (eg, consumption-labor-wealth VARs, consumption growth, Tobin's Q in various forms).  As usual, they stress the deep theoretical roots of their metric:
Guided by theory, we use shocks to the leverage of securities broker-dealers to construct an intermediate SDF.  
How does theory guide this?  Well, remember, 'm' is the Stochastic Discount Factor, so if you simply assert that

m = a - b·LevFac
Where LevFac is the seasonally adjusted change in broker-dealer leverage, there you go. As Frazzini and Pedersen's Betting Against Beta framework also included the market return alongside their leverage constraint, I'm not sure how these authors dropped it given their similar intuition.
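For the record, the mechanics that let any such 'm' generate betas are simple. With gross returns, the pricing equation plus linearity delivers a linear risk-return relation:

```latex
E[m R_i] = 1 \;\Rightarrow\; E[R_i] = \frac{1}{E[m]} - \frac{\operatorname{Cov}(m, R_i)}{E[m]} = R_f\bigl(1 - \operatorname{Cov}(m, R_i)\bigr)
```

So if m is linear in LevFac, expected returns are linear in each asset's covariance with LevFac--a beta. Any candidate factor slots in the same way, which is why new SDFs are so cheap to produce.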

I took their proxy of the SDF and graphed it next to the S&P for the past 10 years.  You can see it's correlated about 30%, but catches the really big moves, as in 2008 and 1987 (not pictured here).

note: the BD factor is derived from a factor-mimicking portfolio built from the 6 F-F size-value portfolios and the momentum portfolio, as given in their paper
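For readers wondering what a factor-mimicking portfolio is: since broker-dealer leverage isn't a traded return, the standard construction projects the factor onto a set of traded portfolio returns and uses the fitted combination as its traded stand-in. A sketch with synthetic data (all series and numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 180, 7  # months, base portfolios (e.g. 6 size-value portfolios + momentum)

base = rng.normal(0.005, 0.03, (T, K))  # traded portfolio returns
# A non-traded factor that partly loads on the traded portfolios, plus noise
lev_shock = base @ rng.normal(0.5, 0.2, K) + rng.normal(0, 0.05, T)

# Regress the factor on the traded returns; the fitted value is the mimicking portfolio
X = np.column_stack([np.ones(T), base])
coefs = np.linalg.lstsq(X, lev_shock, rcond=None)[0]
weights = coefs[1:]          # portfolio weights (up to scaling)
mimic = base @ weights       # traded proxy for the non-traded factor
```

The mimicking return inherits whatever correlation the factor has with the spanning portfolios, which is also why fitting it to the very portfolios you then price invites overfitting.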

What I suspect, though I haven't done the experiment, is that if you regress individual stocks against this factor there will be a zero correlation with returns. That's the result of overfitting. You fit the target, in this case some portfolios from Ken French's website, and you have a pub, especially if you write down an SDF, but it's just the flavor of the month, the latest potential solution to a perennial problem.

I shouldn't be too hard on it; it is intellectually honest work, very clear.  Yet these pop up all the time, as one would expect with thousands of potential SDFs out there and the ease with which they can be rationalized.  If one ever explained the cross section of stock returns, I'd rethink my skepticism.

Tuesday, January 08, 2013

Bob Haugen RIP

Bob Haugen passed away last Sunday. I have two favorite Haugen articles.  In Commonalities in the Determinants of Expected Stock Returns (1995, with Nardin Baker), he basically showed that there are lots of strange things going on in the stock market. He looked at 40 or so specific metrics related to liquidity, price ratios, prior returns, growth, and risk, and found many of them significantly related to future returns in the US, Germany, France, and the UK.

The paper was important because at that time Fama and French had just come out with their influential paper showing the value and size anomalies, and reconciled them within a risk model that they said must have some orthogonal value- and size-related factors.  Lakonishok, Shleifer, and Vishny, meanwhile, were arguing these were due to inefficiencies, investor over- and under-reactions. Haugen favored the inefficiency explanation, and more importantly highlighted that there were a lot more anomalies than just value and size.

Most important to me, in my dissertation I highlighted his The Efficient Market Inefficiency of Capitalization-Weighted Stock Portfolios (1991, with Nardin Baker). This paper showed that a very fundamental portfolio, the Minimum Variance Portfolio at the extreme left of the convex hull created via Markowitzian diversification in mean-variance space, actually had a slightly higher than average return.  Haugen focused on the inefficiency of the market portfolio, but I used it to support my contention that low volatility stocks actually had slightly higher than average returns.  Most low volatility funds reference this paper in presentations of their approach, as it was the first paper to highlight the dominance of this special portfolio.
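The Minimum Variance Portfolio has a simple closed form: its weights are proportional to the inverse covariance matrix times a vector of ones. A sketch with a toy covariance matrix (the numbers are invented):

```python
import numpy as np

# Toy annualized covariance matrix for three assets (hypothetical numbers)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)  # Sigma^-1 * 1
w /= w.sum()                      # normalize so weights sum to 1

min_var = w @ Sigma @ w           # no fully invested portfolio has lower variance
```

Note the expected returns never enter: the MVP is pinned down by covariances alone, which is what made its slightly above-average realized return such an awkward fact for the standard model.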

He was an independent spirit, and will be greatly missed.


Monday, January 07, 2013

Hubbard on Relative Risk

Glenn Hubbard was a major academic bank researcher when I was in graduate school, and later he became a Republican stalwart and dean of Columbia's business school. In connection with the lawsuit of monoline insurer MBIA against Bank of America and Countrywide, he recently gave this interesting deposition.

The plaintiffs are trying to prove Countrywide was negligent or even fraudulent in dealing with customers and investors. Hubbard is trying to support the proposition Countrywide did nothing wrong.  I'm kind of torn, because on one hand, Countrywide was pushing the envelope of no-income/no-down-payment loans that made the bubble much greater than it would have been with mere monetary easing; on the other hand, Countrywide bragged about this for years, and their CEO Angelo Mozilo received many honors for expanding home ownership into minority communities (eg, he won American Banker's Lifetime Achievement Award in 2006).  What they did was stupid, but it wasn't a secret, and when it was happening it was encouraged by regulators, legislators, academics, community activists, borrowers, and yes, investors.

The key problem was not something really complicated in the copulas or a misplaced confidence in correlations; it was the assumption that housing prices, nationally, would not fall significantly in nominal value. This outcome basically was assigned a zero probability, which is why the correlation assumption was allowed to stand. So, it wasn't really that subtle, but it was pervasive, and as we all now know, terribly wrong.

Anyway, Hubbard's post-mortem defense is that as Countrywide's portfolio did about as well as other lenders', they didn't do anything wrong.  He gets very testy, just as he did in the movie Inside Job:
Q. You understand your obligation is to answer my questions to the best of your ability, including as the questions change, as they will throughout the course of the day?
 A. I promise to be as nonlinear as you would like me to be.
Around page 48 of the deposition, the plaintiff lawyers try to get Hubbard to admit that all he did was compare Countrywide to other mortgage originators, so if fraud or misrepresentation was pervasive it wouldn't show up in his benchmark comparison of Countrywide performance relative to its peers.  Hubbard tries to have it both ways, saying at various times he didn't analyze underwriting, that he had no knowledge of what other lenders were doing, but also that Countrywide wasn't doing anything wrong because Countrywide's mortgages did as poorly as everyone else's. 

The analysis Hubbard does is pretty straightforward. That he gets paid hundreds of thousands of dollars by many large corporations highlights that they are all paying for his brand name, because his knowledge of mortgage risk is pretty meager. The problem, however, is that everyone who really understands this market is clearly biased, but then, Hubbard is obviously biased too, so in a sense this is all a sham (get a big name to do pedestrian work that is purportedly objective).

Alas, this brings to mind the famous John Maynard Keynes quote, 'worldly wisdom teaches that it is better for the reputation to fail conventionally than to succeed unconventionally', or perhaps Schopenhauer's, "in a herd, we are free from the standards of an individual." Both are lamentable but true.  If everyone's doing it, it's hard to call this fraud or negligence, criminally defined.  Singling out bankers would be arbitrary, as these guys were not doing this outside the Matrix (Fannie Mae's DeskTop Underwriter front-end greatly expedited ninja loan underwriting, and they are a product of the Federal Government). So, while I think shaming should go all around, I don't see the point in criminalizing it because then you either fine/jail everyone, or arbitrarily scapegoat some minority group (here, bankers).

The bottom line is that Hubbard's correct in that there is less risk doing what everyone else does, regardless of how good or bad it is, precisely because it is impossible to punish everyone. That's why it's essential to have a Bill of Rights, so majorities can't steamroll unpopular minorities.

In any case, risk, in practice, is relative, the theme of my book, The Missing Risk Premium, which curiously implies a zero risk premium in general and a persistent negative premium to assets desired for reasons outside of standard models.

Relative risk means pervasive benchmarking and the importance of tracking error. It can lead to sunspot equilibria when everyone is going crazy, like in the internet bubble. Now, it's easy to say you should not benchmark--you should target a Sharpe ratio, not an Information Ratio--but most people don't. I think it highlights that there's an easy way to beat the benchmark, avoiding the crowd and sticking with boring investments, but the key is you need sufficient reputation or capital to be able to do this without losing your job, so it's not so easy to do even if you want to.

Sunday, January 06, 2013

Dimensional's Latest on Low-Vol

Ronnie Shah has a new piece on low volatility investing (Residual Volatility and Average Returns, Dec 2012), and as he works for Dimensional Fund Advisors, I presume his take is consistent with that of the Dimensional management.  As this would include Eugene Fama, Ken French, Robert Merton, Roger Ibbotson, Myron Scholes, and George Constantinides, I think it represents conventional academic financial wisdom about as well as any firm can.

The old guard continues to think nothing's wrong with the standard theory, a 3-factor model that includes value and size proxies (ie, long high book-to-market and low-cap stocks, short low book-to-market and high-cap stocks) as well as the market (as in the traditional Capital Asset Pricing Model). The market factor does not work within equities, but 'explains' the relative premium of equity indices over bonds; the size and value factors explain, well, themselves (the size and value anomalies). That is, all factors are chosen to explain themselves! While the market factor has some theoretical justification, value and size proxy some factor we still, after about 30 years, haven't identified. That this is considered a logical, rigorous theory highlights to me that smart people are willing to accept a lot of rationalization to keep their paradigm alive.

In any case, Shah's argument is that Low Volatility is not an anomaly because if you sort by residual volatility, among the high cap/value stocks, there's an insignificant difference between the low and high volatility stocks.

Methinks he's conveniently ignoring a lot of arguments, instead focusing on a simple statistic that, while logically correct, is rather selective.  That is, minimum volatility portfolios taken within the SP500, since 1998, generate a large 3.9% difference in annual returns over the SP500 (this is just the average monthly return difference multiplied by 12).  Look at the results below (data are from here).  That's statistically insignificant, however, given standard t-stats, and if things continue, will become significant around 2050. Such is the nature of financial time series.
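That 2050 claim is just root-T arithmetic: the t-stat of a mean return difference grows with the square root of the sample, so you can back out how long it takes to hit t = 2. A sketch with illustrative numbers (the 3.9% spread is from above; the 4% monthly volatility of the return difference is my assumption):

```python
# Back out the sample size needed for statistical significance of a return spread
ann_spread = 0.039            # annual return difference (from the text)
monthly_vol = 0.04            # assumed monthly vol of the return difference
monthly_mean = ann_spread / 12

# t-stat after n months: t = mean / (vol / sqrt(n))  =>  n = (t * vol / mean)^2
n_months = (2 * monthly_vol / monthly_mean) ** 2
years = n_months / 12         # roughly 50 years: starting in 1998, circa 2050
```

Half a century of data for a near-4% annual edge: that's why strong priors in finance are so hard to dislodge with t-stats.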


I haven't broken out a low 'residual volatility' portfolio because I think it's really total volatility that is the key, and so using residual volatility handicaps the results. Note that there's no beta premium, and so from a return perspective this doesn't hurt, and lower systematic volatility (ie, beta) is also a good thing for a portfolio.

More importantly, even if there's no low volatility premium, the fact that you can generate the same return for one-third the beta, or total volatility, means anyone who believes in mean-variance optimization (as the Dimensional Star Chamber does) should love low volatility investing much more than value or small cap investing.

Lastly, just because high volatility stocks tend to be small cap and have high spreads does not mean they aren't good data points. It's true that you can't really arbitrage the poor returns on high volatility stocks because they are often illiquid, but note that lots of people own these stocks, and they seem strictly dominated in the sense of Rothschild and Stiglitz (Increasing Risk I, 1970).  The fact that you can't short them efficiently still doesn't explain why anyone would own them as they do. Further, given their high transaction costs, the holding period should be long, and what really kills these stocks is the compounding, as their volatility is about 40%, which generates an 8% drag on geometric returns (geometric return = arithmetic return - variance/2). That people buy these lottery tickets is impossible to reconcile with a model where people are maximizing a standard utility function.
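The variance-drag arithmetic is easy to verify with a two-period toy example:

```python
import math

# Two periods: +40% then -40% (arithmetic mean return 0, volatility 40%)
gross = (1 + 0.40) * (1 - 0.40)   # = 0.84: you end below where you started
geo = math.sqrt(gross) - 1         # realized geometric return per period, about -8.3%

drag_approx = 0.40 ** 2 / 2        # variance drag approximation: sigma^2/2 = 8%
```

Despite a zero arithmetic return, compounding at 40% volatility loses about 8% per period, matching the sigma-squared-over-two approximation.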

This will be an example of Max Planck's dictum that science advances funeral by funeral, because these guys aren't about to give their standard model the root canal I think it deserves.  As shown above, you can have meaningfully large opportunities that won't look statistically significant for 50 years, so if you have a strong prior, there's no logical reason you have to give it up in this life.

Wednesday, January 02, 2013

Buy AAPL

With the VIX at lows, and central banks across the globe printing money at breakneck pace, it seems a great time to be long equities. Not that I think this is good for the long run, but for at least a couple years this should be great for equities.

Specifically, Apple has declined 25% from its peak in September, probably due to investors with large gains realizing them before 2013, when capital gains rates went up.




Given so many interest groups have an interest in asset prices rising, and the central banks across the world have this as their priority, I think once banks get some of their final legacy lawsuits off their backs they will start lending again, M2 will explode, and prices will rise--first in assets.

Tuesday, January 01, 2013

Some Classic Economics Articles

Here are some of my favorite economic articles.  You might enjoy them.

Bruce Yandle on Bootleggers and Baptists. I think this is a really deep theory, in that it applies not simply to economic regulation, but almost all political divisions. That is, contra Marx, the Hegelian dialectic is not between those in power and those not, the rich and the poor, but rather, each side in any revolution contains some of both. There simply aren't enough rich to beat 'the poor', and these classes are very heterogeneous, each with various divisions (I remember how the Mexicans and Central Americans used to fight when I lived near MacArthur Park in Los Angeles). Think Emperor Claudius, the Praetorian Guard and the mob, vs the Senators and patricians after the assassination of Caligula, with high-minded rhetoric about Rome leading the PR battle. Henry Manne's Parable of the Parking Lots is a good example of these coalitions applied to economic regulation.

Frédéric Bastiat on The Seen and the Unseen.
There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen.
Most stimulus and redistribution is simply the seen, so shame on economists for generally supporting these boondoggles, usually as a Keynesian pretext for their less defensible, but still understandable, egalitarian ends (the theme of my book The Missing Risk Premium is that this is precisely our dominant instinct, envy over greed).

Mark Skousen on Samuelson's Economics Textbook. This book really reflected conventional Macroeconomic wisdom for 50 years, and its tendencies are worth examining. Samuelson highlights the uselessness of Tetlock's Hedgehog and Fox dichotomy, because 1) Samuelson was clearly both, and 2) Tetlock described himself as both. Who isn't both?

R.A. Radford on the Economics of a P.O.W. Camp.  Notice the politics of economics even in these situations.  Hyman Minsky used to tell me markets are good at distributing goods that already exist, like care packages in a POW camp, but not investment, which, being subject to Keynesian uncertainty, is too irrational. He conceded the Radford anecdote, yet dismissed its generality.

Tullock and Buchanan on the Calculus of Consent, why collectives choose policies that are suboptimal. Markets can be suboptimal, but giving power to a regulatory body has even greater problems. That is, it's simply not correct to assume a regulator will implement the optimal policy, or anything close to it.