Wednesday, April 30, 2008

Risk is a scalar, not a vector

A pernicious reaction to failures is to add factors and nuance rather than simplify and integrate: an additional risk measure, or an additional risk oversight group. But risk is a scalar, not a vector. The problem with risk as a vector, that is, a set of numbers like {3.14, 2.73, 1.41}, is that its relevance is left as an exercise for the reader. One number can be high, another low. Should one look at the 'worst' number? Average all the numbers? Throw out the high and low? In general, such a bevy of numbers leads to some being above average, others below. Indeed, as the risk vector grows over time, invariably some numbers are above average, because reviewers love to give everyone some marks that need improving, and some that offer the reviewed some encouragement.
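To make the ambiguity concrete, here is a toy sketch (all numbers invented) of how two desks can swap rankings depending on which aggregation rule you apply to a risk vector:

```python
# Toy illustration: the same risk vectors, two aggregation rules, two answers.
desk_a = {"var": 3.14, "stress": 2.73, "concentration": 1.41}
desk_b = {"var": 2.50, "stress": 2.50, "concentration": 2.50}

rules = {"worst number": max,
         "average": lambda v: sum(v) / len(v)}
for name, rule in rules.items():
    riskier = "A" if rule(desk_a.values()) > rule(desk_b.values()) else "B"
    print(f"by {name}: desk {riskier} looks riskier")
# One rule flags desk A, the other desk B: the vector's relevance really is
# left as an exercise for the reader.
```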

The famous 1986 Challenger space shuttle disaster involved, with hindsight, the serious error of flying during low temperatures that were known to cause problems with the O-rings. Now, this issue was raised as part of the review process, along with about a hundred other such 'mission critical' risks. Of course, it was overridden. A good risk number should go off rarely, so that when it does go off, you respond. Otherwise, it's like the 'check engine' light on a crappy car: everyone thinks the light is broken, not the engine (and they are usually right).

Take the UBS case. They have Group Internal Audit, Business Group Management, the GEB Risk Subcommittee, the Group Head of Market Risk, the Group Chief Credit Officer, the UBS IB Risk & Governance Committee, IB Business Unit Control and IB Risk Control, UBS Group Risk Reporting, and the Audit Committee. I could be missing some. This clusterpuck of groups is a recipe for disaster, because tricky issues are generally left for someone else. Lots of groups, and their opinions, are isomorphic to having a risk vector, not a risk scalar. For example, the VAR for subprime used only a 10-day horizon, and was based on only the past 5 years of daily data. Whose idea was that? Who is accountable? I bet somewhere you will find someone who made the appropriate criticism, but in the end, they were ignored. Basically, if no one person can see the risk of a product in its entirety, and report it to a CEO who can understand what is being said, the company should not be doing these things.

Take Moody's response to the subprime mess. They suggest that, for structured credits, they could supplement their grades with something that conveys additional information, say, Aaa.v2. As Cantor and Kimball note in their February 2008 Special Comment, "Should Moody's Consider Differentiating Structured Finance and Corporate Ratings?":
The additional credit characteristics could be conveyed through symbols that would not be physically appended to the rating but instead provided in separate data fields, analogous to other existing rating signals such as rating outlooks and watchlists. This approach would avoid entanglement with the existing rating system – a potential issue for both rating agencies and users of ratings data – and would encourage the addition of more information content over time because such information would not have to be appended to the rating itself. For example, an issue could have a “Aaa rating, with a ratings transition risk indicator of v1, with a data quality indicator of q3, and a model complexity indicator of m2."
A single group, with a single number, such as a properly constructed VAR or Moody's rating, is very informative because it is unambiguous. Such a number should have lots of documentation that mentions how different risk factors were addressed, but at the end of the day, risk, like return, is a single number. I know the standard caricatures of Value-at-Risk, but those errors (fat tails, too-short holding periods, absence of relevant historical data) are rectifiable within the framework. The nice thing about it is that when it's wrong, it's wrong, and people and methodologies can be held accountable. But a vector can always be right, if only you had looked at the particular signals that were bad (in a large vector, some always exist). Risk measurement is about creating an unambiguous evaluation, so that it can be compared directly to another business line, without all sorts of qualifications. Eventually, you have to explain this to 60-something senior management, and while they should have some level of knowledge, they shouldn't have to integrate 10 risk groups' findings on risk reports that each contain 5 numbers and lots of qualitative nuance. That's how UBS lost $37B.
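For the mechanically minded, here is a minimal sketch of what a single, unambiguous number looks like: a historical VaR that handles fat tails by using the empirical quantile of P&L rather than a normal assumption. The P&L series is simulated, purely for illustration:

```python
import numpy as np

def historical_var(pnl, conf=0.99):
    """The loss exceeded on only (1 - conf) of days: one unambiguous number."""
    return -np.quantile(pnl, 1 - conf)

rng = np.random.default_rng(0)
pnl = rng.standard_t(df=3, size=2500)   # fat-tailed simulated daily P&L, in $MM
print(f"99% one-day VaR: {historical_var(pnl):.2f} $MM")
```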

Tuesday, April 29, 2008

Great UBS Mea Culpa

UBS wrote down $37B over the past 12 months, quite a mistake, but they sure aren't shy about it. In a very informative report to shareholders, they outline the disaster. This should be essential reading for any senior risk manager (go here and download 'Shareholder Report on UBS's Write-Downs'; essential reading for junior risk managers includes Maxim, FHM, and Fark.com).

The report goes over the entire mess, and highlights several issues at work. 70% of the losses came from their fixed income group, 15% came from Dillon Read Capital Management (started only in 2005), and the rest from other businesses, all pretty much based on subprime and its effect on other structured product spreads.

Much of the growth in the Fixed Income group came from fat fees on repackaging residential mortgages from the US into mezzanine (ie, Ba rated) CDOs and reselling them. This generated fees of about 150 basis points, compared to fees of only 40 basis points on senior tranches. Now, in fixed income, 150 basis points is a lot of spread, so you have to wonder where they thought this edge was coming from. It's as if someone told you they could repackage US Treasuries, take out 50 basis points, and make everyone happy. The spreads are just too thin; you can only get that kind of edge by taking huge risk. In Treasuries, for example, it would imply some kind of yield curve bet, or selling some swaption, and these markets just aren't so inefficient that 50 basis points is there no matter how smart you are. If they thought there were 150 basis points in free alpha via repackaging pools of RMBS, they should have done the math. So the spreads alone should have been a warning signal, and suggest senior management was ignorant of the basic pricing and historical performance of credit returns. It was doomed based on this analysis alone.

The 150 basis points on the mezzanine piece necessitated keeping about 60% of the RMBS on their books, both the super senior pieces and the first-loss pieces. But this looked good, because as a AAA rated bank, they funded everything at a ridiculously low rate, and so made positive carry on this stuff. So it seemed like arbitrage. They also bought AAA rated ABS of various collateral, and, because AAA ABS trades above bank AAA rates, had positive carry on this too. When the mortgage market blew up, everything in structured finance cratered even though asset quality was pretty much unchanged, and they exited positions backed by credit cards and autos at a loss in the third quarter of 2007, merely because spreads widened.
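A toy version of the carry 'arbitrage', with invented numbers (not UBS's actuals), shows why positive carry on warehoused AAA paper is an illusion once spreads move:

```python
# Toy carry trade, with invented numbers (not UBS's actual positions).
position = 20_000    # $MM of AAA ABS warehoused
funding_bp = 10      # spread over Libor paid to fund, thanks to the AAA rating
asset_bp = 30        # spread over Libor earned on the AAA ABS
carry = position * (asset_bp - funding_bp) / 10_000
print(f"annual positive carry: ${carry:.0f}MM")   # looks like free money

spread_widening_bp = 100   # structured spreads gap out, asset quality unchanged
duration = 4.0             # spread duration of the ABS
mtm_loss = position * duration * spread_widening_bp / 10_000
print(f"mark-to-market loss on a 100bp widening: ${mtm_loss:.0f}MM")
# Twenty years of carry gone in one repricing: the "arbitrage" was a funding quirk.
```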

When the market turned, their pipeline could not be sold, so they were stuck with the 40% they had anticipated selling.

The bank had several groups evaluating risk, and so no one had real ownership of the issue. One group, internal audit, did 250 such reviews a year, so their rather perfunctory review is understandable: any group that does 250 risk reviews a year is worthless, because they don't have the clout to tell anyone with power no; they have too much to do to fight such a battle. Furthermore, only 5 years of data on RMBS pricing was used for stress tests and Value at Risk. Given the 2000-2005 period was benign, this understated the risk significantly. Any stress test or VAR needs to count observations in terms of cycles, not days: hotels in 1990, commercial mortgage-backed securities in 1990, emerging markets in 1997-8, CDOs and telecom in 2001, credit cards in 1996, the yield curve in 1994. Draw inferences from different products, because the historical losses of one type of derivative over 5 years are just not sufficient to get a real sense of a worst-case scenario.
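To see how much a short benign sample understates things, here is an illustrative simulation: the same 99% VaR computed on five quiet years, then again with one adverse cycle appended (the return distributions are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 0.5, 5 * 252)      # 5 quiet years of daily returns, in %
crisis = rng.normal(-0.4, 2.0, 252)         # one adverse cycle, a la 1998 or 2001

for label, sample in [("5 benign years", benign),
                      ("benign years + one bad cycle",
                       np.concatenate([benign, crisis]))]:
    print(f"{label}: 99% daily VaR = {-np.quantile(sample, 0.01):.2f}%")
# The quiet sample misses the loss scale of a full credit cycle entirely.
```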

Further, they assumed normality, because they assumed these were tradeable assets, and over short horizons even asymmetrically distributed credit instruments are well approximated by a normal distribution. But the secondary market for these derivatives is much less liquid than the prices indicate, because the data are dominated by new issues, so you get lots of prices that seem to move like a futures price, but in practice you can't sell an outstanding issue at anything like those prices. When the market collapsed and many firms wanted to exit these positions, the losses were highly asymmetric, typical of credit cycles. One should apply annual worst-case scenarios, using the losses from the adverse cycles above, to correctly anticipate losses on these kinds of instruments. I'm only 42, but I've seen it many times, and every business line has managers with 10-plus-year spotless records; this is merely survivorship bias within the firm.

In sum, they appeared to make 3 key mistakes.

First, they believed that warehousing AAA ABS paper added value to the bank. Warehousing prime paper funded at their AAA rate was merely abusing their internal funding. The profits from the portion they sold as mezzanine tranches were intoxicating, but left residual risk via the unpackaged debt (equity and super-senior) that only looked profitable because it was funded incorrectly. This massive residual was only tenable because it seemed to generate profits, not costs.

Secondly, their risk metrics were based on short, biased samples; one should assume that any credit can be just like telecom in 2001. Credit has to be modeled with long investment horizons and highly asymmetric return characteristics, because that is the game. A bond is really like selling out-of-the-money puts on equity: you make money 80% of the time and lose 20% of the time, and historically the returns to High Yield bonds have been basically the same as Investment Grade. Don't expect to make money warehousing credits; it's a cost of intermediating, not a benefit. Banks like UBS should recognize their value-add comes from intermediating, not warehousing. Warehousing is a residual effect of being a dealer, but there should be strict ex ante limits on this stuff precisely because any such assets on the balance sheet are, at best, treading water: any positive carry is from perverse funding rates, especially considering these are generally going to be first-loss positions. Basically, the structured group was generating these fees, at seemingly positive carry, but it was implausible there was ever this much edge in the assets.
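A quick simulated sketch of that short-put payoff profile, with made-up win/loss numbers matching the 80/20 stylization above:

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 100_000
# Stylized credit position: a small spread gain 80% of the time,
# a larger loss 20% of the time (numbers invented for illustration).
lose = rng.random(n_years) < 0.20
returns = np.where(lose, -0.10, 0.03)
print(f"win rate: {(returns > 0).mean():.0%}, mean return: {returns.mean():.2%}")
# Most observations are small wins; a 5-year sample will often contain no
# losses at all, which is exactly the short-sample bias in UBS's metrics.
```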

A final problem was that they never re-underwrote the agency ratings. That is, they assumed AA rated paper would be OK, because it has, historically, a 0.05% annual default rate. But as mentioned, underwriting standards were changing, and the rating agencies dropped the ball. With tens of billions of dollars at risk, one has the obligation, and the ability, to re-underwrite, and the changes in origination criteria should have made it clear these residential mortgages were different from those issued only 5 years ago.

Monday, April 28, 2008

Classic Comeuppances

A note from Scalia's new book Making Your Case, on how to argue before a court:
A noted barrister, F.E. Smith, had argued at some length in an English court when the judge leaned over the bench and said: “I have read your case, Mr. Smith, and I am no wiser than I was when I started.”

To which the barrister replied: “Possibly not, My Lord, but far better informed.”
heh.

Lowenstein on Moody's

So I heard from an old Moody's colleague that Roger Lowenstein, who wrote When Genius Failed (about LTCM), was preparing an exposé on Moody's related to the mortgage crisis. It ran as a piece in the NYT Magazine, and it wasn't tendentious or misleading; it was actually pretty good.

Moody's gave Lowenstein access to a particular mortgage deal in some detail, and Lowenstein notes that this deal, done at the peak of the residential mortgage bubble, a securitization called XYZ, contained the usual stuff, but there was this:
[the loans] were originated by a West Coast company that Moody's identified as a 'nonbank lender.'... Moody’s learned that almost half of these borrowers — 43 percent — did not provide written verification of their incomes...half of the borrowers, however, took out a simultaneous second loan. Most often, their two loans added up to all their property's presumed resale value, which meant the borrowers had not a cent of equity.
Moody's qualified this by noting these borrowers had good credit scores, and these were generally their primary residences, so they thought it was safe. But 43% with no income verification? Half the borrowers with no equity in the game? Issued by a nonbank lender, meaning someone without any large franchise value, who has little to lose when these things tank? That's picture-perfect bad underwriting. I think the first thing to do isn't to adjust the algorithm, but to do drug testing, because ignoring these details is like missing a fly ball because you were staring at the shiny lights on the ceiling.

“We aren’t loan officers,” Claire Robinson, a 20-year veteran who is in charge of asset-backed finance for Moody’s, told me. “Our expertise is as statisticians on an aggregate basis. We want to know, of 1,000 individuals, based on historical performance, what percent will pay their loans?”
If you are using assumptions from people with a clear incentive to misrepresent themselves, why should we believe anything coming out of your group? Do you really think your statistical algorithms are all that people pay for? It's the whole, final grade that matters, which for Baa and up is simply supposed to say 'you don't have to re-underwrite this, because its risk is insignificant.' A partial grade is a cop-out, because you can always say, after the fact, that one of a million things was off. If, for a structured credit, a Moody's Baa means "this is A rated given the unverified and unanalyzed assumptions given to us, so we notched it down one grade", I have one word for you: worthless.

These people, the mortgage recipients and the originators, had no skin in the game, and so clearly had an incentive to take a bet on housing via a lax securitization process. When I worked for a bank, one needed a 20% down payment to get a mortgage. After all, the first rule of banking is to only give loans to people who don't need them. This is a common joke, but the plain fact is, you should be reasonably sure that nonpayment is implausible, because estimating losses that are significant gets you into equity, not debt. The only exception is consumer loans, like credit cards, where we have millions of observations, so one can expect to lose 5% a year, but with an 18% interest rate, that's tolerable. When you have no money in the house, a 10% fall in housing prices implies significant defaults, leading to the new territory we are now discovering. If borrowers had 20% down and income verification, there would not be this problem.
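The arithmetic on equity cushions is simple enough to put in a few lines (prices hypothetical):

```python
# Hypothetical numbers: borrower equity under a 10% price decline.
price = 300_000
for down_payment in (0.20, 0.00):            # old-school 20% down vs. 100% LTV
    loan = price * (1 - down_payment)
    for decline in (0.0, 0.10):
        equity = price * (1 - decline) - loan
        print(f"{down_payment:.0%} down, {decline:.0%} fall: equity ${equity:>9,.0f}")
# With 20% down the borrower is still $30,000 ahead after a 10% fall; with
# nothing down he is $30,000 underwater, and mailing in the keys looks rational.
```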

Moody's calculated an expected loss over the lifetime of the CDO of 4.9%, but only a year later, 13% of the loans were delinquent. In April 2007, Moody's announced it was revising the model used to evaluate subprime mortgages, but the horse was out of the barn. As Mark Adelson noted, Moody's statistical approach, applied to these very novel underwriting standards, was 'like observing 100 years of weather in Antarctica to forecast the weather in Hawaii'. If ever the ceteris paribus assumption didn't hold, this was it. Whoever led this group at Moody's should be fired.

My friend and ex-Moody's colleague Jerry Fons has a consulting practice, and his big beef with the rating agencies is the conflict of interest in these things. He notes that when you are paid by issuers to give grades, there's a clear incentive to give good grades, especially when issuers shop ratings, looking for a good grader, who then gets the business. So one solution is to have investors, not issuers, pay. But that would be tough, because there's a free-rider problem: most investors would wait for someone else to pay, and then, because secrets are impossible to keep, piggyback on that guy's opinion.

Rating is a scale game, and it's efficient for one or two groups to specialize in these things and give one grade, so people don't needlessly duplicate the same function. I don't see how you can get around issuers paying. But it would be useful to mandate that all of Moody's performance data be warehoused at the government, and have someone at the Fed publish the default data for Moody's, S&P, Fitch, and the rest. That is, in compensation for being a Nationally Recognized rating agency, a valuable designation given to only a handful of companies, they need an objective scorecard that will adversely impact their stock price if they screw up. In this case, Moody's ratings on structured finance clearly aren't worth much right now, and they really have to earn back their credibility in this area.

Currently, the rating agencies keep tabs on themselves via their own default studies, and this information is too important, and too valuable, to have the agencies grade themselves. For example, Joseph Mason of Drexel University finds that Baa rated CDOs defaulted at eight times the rate of Baa corporates, a massive miss. Documenting this performance would hold Moody's accountable, as those in charge would know that if any group started to wreck the statistical validity of their ratings, it would cost them. Their residential mortgage mistake is not merely a minor error, but reflects a profound lack of judgment on something fundamental: incentives.

But as the article notes, what is the alternative? I'm sure the Feds, if they monitored ratings, would have a different, more intractable set of incentive problems. There is a huge franchise value to being an independent, accurate analyst of these issues, and so I think Moody's and their competitors will figure this out, but if this carrot is not sufficient, I don't see the next best solution. We don't merely want excess caution, because a rating that is excessively pessimistic is just as misinformative as one that is too easy: soon people will dismiss cautions because they are perceived, correctly, as too harsh. The worst result would be for Moody's, or some new organization, to make everything bulletproof in order to get a rating, because then we would have a bunch of paper rated AAA whose true quality ranged from AAA to BB, and this would merely be noise, constrain credit, and not be an equilibrium solution.

Sunday, April 27, 2008

Panic!

There's nothing like an old coot telling you he's seen it all before. It's tiresome at some level, but very wise at another--an accumulation of life experiences that shows the eternal verities are basically the same as when Euripides was describing life in the 5th century BC.

Peter Bernstein, author of the bestsellers Against the Gods and Capital Ideas, is interviewed:

Today's trouble, the 89-year-old Mr. Bernstein says, is worse than he has seen since the Depression ... Before, it was investment that made the V at the bottom of the business cycle. I don't see real investment turning enough without some sign from the consumer side.

This seems rather bizarre, that he is having trouble seeing this cycle turnaround, given he is older than dirt. It's like having a veteran of the French Foreign Legion tell you his paper cut is the most pain he has ever experienced.

GDP growth has still been positive, though it slowed to a 0.1% annualized growth rate in the fourth quarter of 2007, and is projected to be slightly above zero in the first quarter of 2008. To put the current malaise in perspective, past GDP peak-to-trough declines have been as follows:

2001: -0.3%
1990-1: -1.3%
1981-2: -2.5%
1974-5: -3.1%

and here are the big ones of the US experience:

1929-33: -29% (the Great Depression)
1937-8: -18%

Billionaire investor George Soros believes the current financial crisis is the worst since the Great Depression. The IMF states that the US mortgage crisis has spiraled into "the largest financial shock since the Great Depression". Paul Krugman states that "The financial stuff looks like a combination of 1990 and 2001, and probably bigger than both combined." I think housing prices overshot, but these experts should turn off CNBC and look at the bigger picture.

Ephemeral Skills


I used to be able to solve this (a Rubik's Cube) in under a minute, and knew several tricks, but now I plod through. As I initially learned how to do it by reading a book, my ability to solve it is not very impressive (figuring it out without help would be a real achievement). It's a pretty pointless skill, so I imagine most people who have ever learned how to do this have forgotten.

I took French in high school and college, but my French skills are pretty lame. This is understandable, as living in the US, there is really no need to know French, it never really comes up, and without practice, you simply forget. I often think, it would be nice to learn German, or multiply 3 digit numbers in my head quickly, but then think, no, it would be pretty pointless given my current situation.

This is why most kids find school boring: they are often asked to learn things they know are merely difficult. Diagramming sentences? How a bill becomes a law? Understanding "Peasant Blacksmithing in Indonesia: Surviving and Thriving against All Odds" (Obama's mother's dissertation)? Kids have a sense of what is tripe, what is merely pedagogically convenient or PC, versus something that can actually be useful. I used to teach, and when I look at what I learned as a teacher, it was mainly to give better examples and exercises, ones that are not merely for the test. Anything can be rationalized as essential knowledge, because almost any fact is useful to someone, but it's all context and reasonableness. Thus, kids know most of their day is spent learning stuff that is about as useful as learning the Rubik's Cube, and dread it.

Selective Use of DNA Illogical

Recently, the U.S. Senate passed legislation that prohibits employers and health insurance companies from discriminating against people on the basis of their genetic test results. The House is expected to approve the measure as early as next week so it can be sent to President Bush for signing.

The Senate vote was 95 to 0, and reflects the basic popularity of this idea. Earlier this year, the Genetics and Public Policy Center surveyed more than 4,000 Americans and found that 92 percent were worried that genetic test results could be used against a person.

Clearly, people think that things beyond one's control should not be used against them. But life is all about probabilities, not binary decisions, and one can't avoid the fact that many things beyond our control affect our probability of success in life. One's IQ, athleticism, extroversion, and musical abilities are all somewhat determined by one's genes. Sure, you can make the best of what you are given, but it is simply improbable that you will be a PhD physicist if you have an IQ of 85, or that you will play college football if you can't run a 40-yard dash under 5.5 seconds, or be a good salesman if you are introverted. Some kids are not nearly as attractive as others. Genes affect longevity, one's propensity to develop cancer, or heart disease. More importantly, some genetic defects are directly related to longevity, such as type-1 diabetes, cystic fibrosis, and sickle-cell anemia.

This legislation is a small attempt to get back at an unfair Creator. But I think it would be more fair to say that if you have a gene that predisposes you to cancer, any such cancers should be covered by some federal health insurance. But that would involve up-front expenditures, extra premiums, or plans provided by the Nanny State. Legislators like smooshing expenses through mandates that drive up costs indirectly, so everyone can think the Government costlessly made a good thing happen.

Saturday, April 26, 2008

Smart, or Overfit?

At the NBER conference, Jonathan Berk of UC Berkeley made a very spirited, but unconvincing, defense of the idea that one should evaluate skill not through alpha via an asset pricing model (eg, the Fama-French 4-factor model), but rather by looking at the size of the fund and its fees as evidence of alpha. That is, if a firm has edge, in a competitive environment its edge is equal to its fees. This argument is based on competitive markets, and makes some sense. But it implies that mutual fund returns, pre-fees and expenses, are correlated with fees and expenses. I haven't seen such data.

Anyway, there was a fun 15-minute argument about the issue. Berk is a very impressive guy; he looks, and acts, like an ultimate fighter: aggressive, shaved head. He has one of those English accents that's very distinctive, but as an American, I don't know where it comes from. I just know it's not like the Queen's, nor cockney. Beyond that, who knows. I'm sure Englishmen could place him in a particular county and social stratum.

Berk made an interesting observation: any market anomaly that has been expertly exploited is observationally equivalent to an overfit, ex post observation. Both are merely historical, and though you can come up, after the fact, with a good explanation for why money was made on internet companies in the 90s, one could easily argue these were merely accidents, unforeseeable. The bottom line is, you really can never know, except through a very qualitative evaluation of the motivation.

There was a clear distinction between theorists and empiricists. Theorists merely want models. They want something general, something they can extend or modify, or else, what are they to do? People want things they can use, and for an intellectual, this means an idea that is amenable to various refinements. There is an interesting contrast between the desire to make something useful pedagogically, useful for refinements, and useful for explaining the data. Sometimes they coincide, sometimes they don't.

Friday, April 25, 2008

Don't be Evil



"Don't be evil" is the informal corporate motto for Google, established by Gmail inventor Paul Buchheit. Buchheit, who suggested the slogan in a meeting, said he "wanted something that, once you put it in there, would be hard to take out," adding that the slogan was "also a bit of a jab at a lot of the other companies, especially our competitors, who at the time, in our opinion, were kind of exploiting the users to some extent." "Don't be evil" is said to recognize that large corporations can often maximize short-term profits with actions that destroy long-term brand image and competitive position. By instilling a Don't Be Evil culture, the corporation establishes a baseline for decision making that can enhance the trust and image of the corporation that outweighs short-term gains from violating the Don't Be Evil principles.

As with the Golden Rule, one would think this is all obvious. But it is not. Evil exists, and it's not Snidely Whiplash types who say they wish to do bad things, like sadists, but rather paranoids using principle as a pretext. I think all the poster children of evil in the 20th century--Pol Pot, Stalin, Mao, Hitler, Manson--were sincere in thinking they were making the world a better place. They were just 1) wrong in their assumptions and 2) indifferent to the suffering of individuals. That is, even if you think that Germans are the Master Race, or that socialism is utopia, expediting the journey via massive starvation and slaughter is simply immoral and wrong. Further, in every case their assumptions were not even true, so the path they were so excited about led not to utopia, but to hell. These errors are correlated: Hayek might think a republican government of state minimalism is optimal, but it would be inconsistent for him to treat people as means to an end. Milton Friedman wanted to shrink the government, but because he believed in liberty, and the value of individuals, it would never have occurred to him to kill groups to get there. And so I think evil is basically ignorance, and has absolutely nothing to do with sincerity.

Further, given the intelligence of the German high command, intelligence has nothing to do with it. Ignorance, as any educated person knows, is not confined to the underclass. Many smart people adopt a paranoid worldview that makes them evil.

It seems there are many people who own mansions and corporate jets, and are worth hundreds of millions of dollars, who treat individuals as means to an end, and wish to make examples out of them based on some paranoid principle. These people are clearly sociopaths, but I feel no pity for them, any more than I feel pity for violent rapists because they are obviously mentally unhinged. It is their victims who deserve our support; sociopaths, especially ones of means, need disincentives, not more group hugs. They have enough enablers telling them that their behavior is somehow reasonable (all on the hope of getting some of the power or money from such rich people).

But clearly those with the means to inflict such evil are few, mainly wealthy sociopaths. What about those in a position to help, who choose to do nothing? As Abraham Lincoln said, 'Nearly all men can stand adversity, but if you want to test a man's character, give him power.' Most cubicle slaves aren't in a position to manifest any sociopathic qualities. But many are given small opportunities that are like little 'tells' in poker, and give away a moral emptiness.

To not step forward and testify, or challenge, because it may expose you to the wrath of the evil person? Remember the Kitty Genovese story? This is immorality by omission, like not being a good Samaritan. One has, I think, a duty to act, because choosing to do nothing is a choice. Further, when the choice merely asks one to behave reasonably, to espouse the truth, the 'risks' are always there but are small; it is not like asking someone to jump into a race riot to save their lone buddy.

Most of us, after grade school, are no longer called upon to exhibit physical courage. But intellectual courage is called upon, and generally journalists and educators prop up the heroes of the past and say: I would have been behind Galileo or Ruby Bridges. Who wouldn't? The real question is, if you saw someone getting railroaded, would you stand up and say, to someone important in the process, this is not right, knowing you would incur the wrath of the powerful instigator? Unfortunately, most people, even seemingly very religious and empathetic people, do not.

Flaubert said, 'Our ignorance of history makes us libel our own times. People have always been like this.' Most people are intellectual cowards; that is, they are afraid of reprobation, or of making enemies in social circles. My kids learn about Civil Rights constantly, as if this is to teach them courage, but learning about courage this way is very misleading, because it is a case where the bad thing is no longer defended by anyone sane. To teach courage via this tale is like teaching scientific courage by pointing to the idiocy of the flat-earthers. I think a better education would be to look at Hitler and see: OK, he needed a pretext to arouse the masses, and the Jews were a good one. Why? Because they were unsympathetic. Why? They were disproportionately successful. Why was that sufficient? Because they were depersonalized, rationalized via social Darwinism, etc. That would be much more informative than merely saying that those in favor of allowing blacks to enroll in white schools were right. My boys learn about a caricature of evil in school; they don't learn the real thing, and I see the moral effect this has on current adults: they have learned, and become, nothing. Hopefully, my kids won't experience evil, but as Flaubert noted, it's as probable as ever.

Thursday, April 24, 2008

NBER financial conference

So I'm at this NBER conference in Chicago, and today there were papers on the recent financial tumult. As with all conferences, there was a combination of interesting and uninteresting material presented.

One neat session was when Andy Lo presented his paper on a specific mean-reverting strategy common to hedge funds, and noted it experienced 8 standard deviation losses back-to-back-to-back in August (8/7/07-8/9/07), then got it all back the final day (Friday, 8/10). It appears someone puked up the strategy pretty badly. Then Kent Daniel noted that several standard long/short quant factors (eg, long/short book-to-market) also lost a lot on those days. So it seems that some big fund, or some big group of funds, all exited these strategies simultaneously, and got clobbered. Lo also documented that really short-horizon mean-reversion, say over a 1-hour period, lost a lot only in that window; generally a market maker makes money on mean reversion. It appears someone big was exiting, and the standard liquidity providers were getting run over, so they just disappeared.
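For reference, here is a bare-bones sketch of the standard contrarian strategy of the kind Lo examined (mine is simplified, and run on simulated returns): buy yesterday's losers, short yesterday's winners, scaled to $1 gross exposure:

```python
import numpy as np

def mean_reversion_pnl(returns):
    """Contrarian portfolio: weight each stock by minus its deviation from
    yesterday's cross-sectional mean return, scaled to $1 gross exposure.
    `returns` is a (days x stocks) array of daily returns."""
    lagged = returns[:-1]
    weights = -(lagged - lagged.mean(axis=1, keepdims=True))
    weights /= np.abs(weights).sum(axis=1, keepdims=True)
    return (weights * returns[1:]).sum(axis=1)  # daily portfolio P&L

rng = np.random.default_rng(3)
rets = rng.normal(0, 0.02, size=(252, 100))    # simulated, not real, returns
pnl = mean_reversion_pnl(rets)
print(f"mean daily P&L: {pnl.mean():.5f}, worst day: {pnl.min():.4f}")
```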

Atif Mian and Amir Sufi, besides having very symmetric names, presented an interesting paper on the degree to which the mortgage problem was supply vs. demand based. They note that several standard risk metrics of borrowers, such as debt to income, rose dramatically from 2000 to 2005. So did the proportion of mortgage debt sold to third parties, which is ripe for moral hazard (idiots buying loans originated by guys with no skin in the game, who thus have an incentive to be lax in underwriting). When house prices rise, the obligor's risk doesn't matter, because you merely repossess the collateral. But when prices flattened, the chickens came home to roost, and here we are. They looked at things by county, and it was suggested they look at those areas targeted by the Community Reinvestment Act and other government programs. It will be interesting if they find that the CRA was a big reason for this problem. We will know soon.

There was also a neat note by Markus Brunnermeier. He estimates this calamity is a $500B loss in market value. But that's only about a 2% downtick in the S&P 500. So it's not the magnitude of this mess that is the problem, but the concentration, and how it reverberates through the system.

There were some people talking about VAR performance, looking at the number of times big firms hit their 99% or 95% limits. I thought this was very strange, as I'm unsure what purpose a top-down VAR measure serves. At that level of aggregation, it's pretty meaningless. I imagine it's just done for regulators, and generally considered meaningless by insiders.

Wednesday, April 23, 2008

Existence Theorems

Many arguments are predicated on existence proofs. That is, someone proves that there exists a solution, and then assumes the possible is probable, or at least that it counters the alternative. But I really can't think of anything interesting where the two sides don't both have possible results; rather, it's about the probabilities: what's the best way to manage the Fed, tax rates, welfare, etc. It's not whether something will or won't happen, but rather how much, or to what degree. You can prove that under certain conditions free trade makes a country worse off (infant industries), or that higher prices raise demand (Giffen goods), or that savings is bad for an economy (the fallacy of composition). But these are all generally, empirically, untrue. Possible, however. Intelligence tells us what is logically implied given assumptions (eg, mathematics), but probabilities are generally an empirical matter, often deriving from meta-knowledge, ie, common sense. Thus intelligence is weakly correlated with common sense, because it can't prove priorities, which are based on probabilities.

Take, for instance, the famous proof that if the market portfolio is inefficient, even very small deviations between the measured market portfolio (ie, the proxy, like the S&P500) and the true market portfolio can generate a zero relationship between beta and cross-sectional returns. After Fama and French (1992) put a fork in the CAPM, Roll and Ross (1994) and Kandel and Stambaugh (1996) resurrected the Roll critique, and tried to address to what degree an inefficient proxy portfolio can generate a zero beta-return correlation (now acknowledged as fact). That is, is it possible that beta is uncorrelated with returns measured against the S&P500, or whatever index is being used, even though it works perfectly with the 'true market' index? In Roll and Ross's words, if you mismeasure the expected return of the market index by only 0.22% (easily 10% of one standard deviation away), it could imply a measured zero correlation with the market return.

Sounds impressive: proof that no evidence against the CAPM is meaningful, because it could be due to an unavoidable deviation from the market proxy. There are several problems with these results. First, Stambaugh (1982) himself documented that inferences are not sensitive to the error in the proxy when viewed as the measure of the market portfolio; thus, while a theoretical possibility, this is not an empirical problem. Shanken (1987) found that as long as the correlation between the proxy and the true market was above 70%, rejecting the CAPM on the measured market portfolio would also imply rejecting it on the true market portfolio. So if you thought the market index, any one of them, was highly correlated with the 'true market', the CAPM was testable as a practical matter.

Secondly, to generate such a result with an 'almost' efficient index proxy, one needs many negative correlations among assets, and lots of assets with returns 100-fold the volatility of other assets. In other words, a very efficient, but not perfectly efficient, market proxy can make beta meaningless, but only in a fairy-tale world where many stocks have 100 times the volatility of other stocks, and correlations are frequently negative between assets. In practice, annualized stock volatilities range from about 10% to 300%, most between 15% and 70%. Betas, prospectively, are never negative, and correlations between equities are never negative unless the equity is a closed-end fund that employs a short strategy.

Thus, it is possible the absence of a correlation between beta and returns is due to an inefficient market proxy. But it is extremely unlikely, and given the restrictions on the correlations and relative volatilities of assets, so improbable as to be irrelevant.
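A crude simulation (all parameters invented) illustrates the Shanken point: with a proxy more than 99% correlated with the true market, the measured beta-return relation is attenuated, not destroyed:

```python
import numpy as np

rng = np.random.default_rng(4)
n_stocks, n_months = 1000, 240
beta = rng.uniform(0.5, 1.5, n_stocks)
mkt = rng.normal(0.006, 0.04, n_months)                      # "true" market
stock = beta[:, None] * mkt + rng.normal(0, 0.06, (n_stocks, n_months))
proxy = mkt + rng.normal(0, 0.005, n_months)                 # imperfect index

beta_hat = np.array([np.polyfit(proxy, s, 1)[0] for s in stock])
avg_ret = stock.mean(axis=1)
print(f"corr(proxy, true market): {np.corrcoef(proxy, mkt)[0, 1]:.3f}")
print(f"cross-sectional corr(beta, avg return): "
      f"{np.corrcoef(beta_hat, avg_ret)[0, 1]:.2f}")
# With realistic volatilities the beta-return relation survives proxy error;
# killing it requires the fairy-tale covariances described above.
```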

But these 'proofs' are common. For example, in Stephen Wolfram's book A New Kind of Science, he argues that some simple recursive rules can generate patterns that look very complex and ordered, like fractals. Thus, he argues, perhaps all laws of physics and chemistry and biology (ergo a New science) have such little rules as their fundamental essence. Possible. But what is the number of such recursive processes generating patterns that could be selected? Does generating pretty pictures 'homologous' to the ordered complexity we see in physical laws imply that such a simple rule underlies it? It could. But given he is talking about rules operating at the sub-quark level we could never observe, I don't see the point, because the search space for such rules is so large we wouldn't find the right one given a million monkeys and a googolplex of time.

Similarly, my favorite flâneur, Nassim Taleb, argues that because unexpected things like the market crash of 1987, or Google, or Harry Potter were unexpected, a wise investment strategy is to allocate, say, 15% of one's portfolio to things with vague but unlimited upside. But what is the denominator in this expected return? Sure, we include conspicuous winners like Google in the numerator, but how many hare-brained investments were tried that didn't generate Google-type returns? Thousands? Hundreds of thousands? My Uncle Frank has a new investment idea each year, and maybe one day he'll hit it big, but I'm not holding my breath.

And then there is evolution. [Feel free to ignore this evolution stuff. I myself thought such people were idiots 12 months ago, but as mentioned, I had an epiphany reading about those who think we are avatars in a giant World of Warcraft computer game run by a hyper-developed intelligence. I have no proof, nor expect any. But I still don't believe cellular machinery arose merely by natural processes, a whimsical belief with little relevance.] Sure, the bacterial flagellum involves a constellation of proteins and cellular processes that are improbable, but given the advantage of such a function (propulsion), isn't it probable, given enough time, that something like this would be found? Well, it depends how large the space of such constellations of proteins is, as every mutation in the DNA's 4-letter code (C, A, T, G) potentially affects one amino acid, which may affect the ultimate protein, its address, the gene expression regulators, etc. How many successive changes are needed to rearrange a collection of 'homologous' proteins used for different purposes into a workable flagellum, and how does this relate to the space of possible paths of changes that ended up in evolutionary dead ends (as almost all mutations are)?

Like the inefficiency of a market proxy generating a zero beta-return relation, proof of existence implies there's a chance, but common sense about the context tells me these solutions are not correct, because they are incredibly improbable given the state space.

Tuesday, April 22, 2008

(great news!)

On Papua New Guinea, the Kiriwina language has the word 'mokita' to describe something everyone knows but no one discusses. 'Schadenfreude' is the pleasure one feels at others' misfortunes. These are fun, weird words to describe awkward truths. Is there a word to describe a man's emotional response to reading this?

Monday, April 21, 2008

Confounding Indicator: Bank Capital Issuance

National City announced it was issuing $7B in equity, a nice kick in the groin to risk-arb buyers who were anticipating an imminent takeover bid (the stock fell 25% today). KeyCorp, Fifth Third, Bank of Nova Scotia, KKR, and Warburg Pincus all found Nat City too expensive to buy at the current price, 75% below last year's level. Thus, instead of paying existing shareholders a premium on the thought that the current stock price is too cheap, new investors are demanding existing shareholders get diluted in order to get an equity stake. You couldn't find one financial institution to see value here, and so it seems that, unlike 1990, bank prices are not 'too low': other banks in the know see this and turn away. Thus the indicator is a two-fer: banks issuing equity indicates their stock prices are 'too high', and potential buyers (all banks) think current prices are 'too high'. Citibank, RBS [today!], and Wachovia also issued equity recently. The fact that so many banks are issuing equity, and not being taken over, suggests that our current swoon is seen by insiders as justified.

One of the big factors explaining the cross-sectional returns to equity is capital issuance. For instance, it is well known that if you buy an IPO, you had better have bought it as an insider at the IPO price. If so, you make about 12% on the first day on average. After that, from the close of the first day, it will actually lag the market by a considerable amount (Jay Ritter has an excellent set of data on this at his website here). Interestingly, this pattern shows up also in seasoned equity issuance: issuers tend to signal an overbought equity price. I have a bunch of references to this literature here. Basically, public companies tend to issue equity when they perceive, correctly, that the price is too high, and buy it back when it's too low.
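If you wanted to check this pattern yourself, a hypothetical helper along these lines (the function name and inputs are mine, not Ritter's) would compute the average post-issuance buy-and-hold return net of the market:

```python
import numpy as np
import pandas as pd

def post_issue_drift(prices, market, events, horizon=252):
    """Average buy-and-hold return net of the market over `horizon` trading
    days following each issuance. `prices`: DataFrame of closes indexed by
    date, columns are tickers; `market`: index level Series on the same
    dates; `events`: {ticker: issuance date}. All inputs are hypothetical."""
    rel = []
    for ticker, date in events.items():
        window = prices.loc[date:, ticker].iloc[:horizon]
        mkt = market.loc[window.index]
        rel.append(window.iloc[-1] / window.iloc[0]
                   - mkt.iloc[-1] / mkt.iloc[0])
    return float(np.mean(rel))
# Per the Ritter-style evidence cited above, one would expect this to come
# out negative for IPOs and seasoned issues alike.
```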

As the Barron's commercial used to say, every transaction involves a buyer who thinks the price is too low, and a seller who thinks it's too high. A simplification, to be sure, but probably an accurate approximation. So if you are buying from the ultimate insider, ya think there might be a good reason to expect the insider to get the better of the deal?

Indeed, capital issuance is one of the ideas my ex-employer thinks is so valuable he demanded I not use it in any way, and 'return' it in some unspecified form (hopefully the judge won't grant them full biopsy rights, but there's no precedent or statute on this, so I'm crossing my fingers). Therefore, the recent announcements by major banks issuing equity are one big countersignal to my otherwise bullish view on the stock market: banks are likely relative underperformers, and given I am a big fan of the credit-transmission view of business cycles (eg, Bernanke and Gertler), weak banks imply a weak recovery.

If bank portfolio values were merely a liquidity problem, that is, portfolios marked below 'true value' because of selling for reasons unrelated to new information, one would think strong banks would be buying distressed banks on the cheap, the way JPMorgan picked up Bear Stearns (which probably was mainly a run, as opposed to a more fundamental insolvency). It appears that banks don't think this is mainly a liquidity issue, but something more fundamental.

Sunday, April 20, 2008

Expelled


I saw Expelled because, frankly, I have issues with Darwinism as a complete explanation of life on Earth. I don't believe in God, but I also think the probability of cellular life is so small that I just don't think random mutation plus selection gets us there, and my lack of an alternative doesn't mean I should believe the current theory. It's one thing to turn off a protein (sickle cell anemia), or select for the length of a finch's beak, or basically modify an existing structure into various subspecies, which really only takes a couple of flips in the genetic sequence; quite another to create new tissue.

As the movie notes, the first replicating organism needed at least 250 proteins in a specific order, each of which needs at least 20, and probably closer to 400, amino acids in a specific order and a specific rotation. This structure is needed before selection can guide random mutations; prior to this, natural selection merely destroys. When you listen to the current contenders for how this happened, it's basically the same 'a miracle happened' hand-waving no matter which side you are on (the most specific Darwinistic theory offered is that life arose on the 'backs of crystals').

Stanley Miller's famous experiment, showing that he could create a couple of amino acids from methane, hydrogen, ammonia and water, has proven a dead end. You can create some amino acids, and these are also found in meteorites, but then they need to be assembled into proteins, and the proteins into a compound. Further, submarine volcanic vents don't make organic compounds; they decompose them. Indeed, these vents are one of the limiting factors on what organic compounds can accumulate in the primitive oceans. At present, the entire ocean gets cycled through those vents every 10 million years, so all of the organic compounds get zapped every ten million years. That places a constraint on how much organic material you can get via mere simmering.

Anyway, I really gave up on the Darwinist explanation of all life after reading about some philosophers in the NY Times who believe we could be living in the SimCity of some giant alien race (one guy put the odds at 20%, which I thought was great, because usually the number is 1%, 99%, or 50%). Anyone who programs writes new code by copying and pasting, or opening and saving, old code. Thus the fact that organisms appear to have a common progenitor may merely reflect lazy (ie, all) programmers [note to Telluride Asset Management lawyers reading this: that means all my Telluride code is from prior experience, which I look forward to showing to a jury. I did not invent the 'do while' loop at Telluride]. So the fact that humans appear to have common ancestry just means that a resource-constrained creator (a design team in the 10th dimension) borrowed a lot of code, or had a template library, and the level of borrowing determines the taxonomic relation. This also explains why there are lots of inefficiencies in the code: the great designer is not omniscient and omnipotent, he just has a resource constraint and a technology team that makes Google look like a nematode, large but still finite. Like my programs, genetic instructions are a kludge.

But in the movie I found the performance of Richard Dawkins, atheist extraordinaire, most refreshing. First, because he is so honest. He doesn't say evolution has 'absolutely nothing to do with religion' as many like to presume, but rather that it profoundly impacts one's view of religion. I'm not religious myself, so that implication doesn't really affect me, but I do think many people's views on religion are decided by evolution, or vice versa. Indeed, the movie's subtext, that not believing in God can lead to Nazi atrocities, is about as plausible as the claim that doubting Darwinism as an explanation of all life leads to bible-thumping homophobia. But most importantly, Dawkins said that while he thinks Intelligent Design is not science, he would entertain the idea that life on Earth was seeded by aliens, his only concern being that those aliens (or the aliens who seeded them) at some point arose from primordial ooze. In that way, my creationism is perfectly consistent with a type Dawkins would allow.

I think, fundamentally, Dawkins hates the 'arguments by authority' and the anti-empirical nature of religion. Again, I agree with him. But I just don't buy the current Darwinism that waves its hands and says the complexity of eukaryotes is merely a sequence of random mutation plus selection. For example, in the most famous case of the past 10 years, the Discovery Institute's Michael Behe argued against the Darwinistic explanation of the bacterial flagellum, and Kenneth Miller from Brown argued against Behe's argument. The flagellum involves a constellation of about 50 parts, very much like an outboard motor, with a rotor, engine, etc. It seems improbable these things arranged themselves by natural selection. Miller argues that several of the components (ie, proteins) of the flagellum have 'directly homologous' proteins that serve other purposes ('directly homologous' is like 'kinda isomorphic'). One, for example, is used as a secretory system in some bacteria, allowing or disallowing various specific toxins. But I found this proof very unconvincing, like saying the outboard motor of a boat could come together because you could use the propeller to cut grass, the carburetor tin as an ash tray, and the pistons as medicine dispensers. Could happen, but the odds seem really low. Every protein will be similar to some protein (ie, 'homologous'), but the probability that natural selection favored species with individual C-A-T-G errors that slowly took 50 different complex proteins, adjusted them appropriately (remember, each protein usually contains hundreds of amino acids), moved them next to each other, and also kept the supply chain within the cell in lock step with these changes (every part needs a supply and removal chain), seems more improbable than the thought that we were created by really smart aliens in the 10th dimension.

It has often been noted that the complete works of Shakespeare could be written by a bunch of monkeys, but it has been estimated that this would take on the order of 1E183800 keystrokes to produce Hamlet alone. The universe has only 1E79 (ie, 10^79) hydrogen atoms, and has existed for only 4E17 seconds. There's a big difference between infinity and these numbers, especially relative to 1E183800. For the human genetic code, the odds seem similar to those of writing Shakespeare, given the complexity of our various cellular, immunological, and endocrine processes. But the counterargument is that because I'm conscious, and there are many universes, and perhaps the big bang happens again and again, etc., I am really looking at a conditional probability, and I've basically won the most improbable PowerBall in the multiverse (good to remember on bad days). My speculation is that it's a set of creators, of unknown form, using templates (probably like in Dreamweaver, under the C:\users\The Creator\templates\organic\eukaryotes folder), and drinking the 10th-dimensional version of coffee, while cranking out the next beetle. (Asked what could be inferred about the work of the Creator from a study of His works, the British scientist J.B.S. Haldane replied that He has "an inordinate fondness for beetles", given there are so many different kinds, conservatively about 350,000. These are probably training exercises for those new to the team.)
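The monkeys-at-typewriters arithmetic is easy to reproduce in a few lines (the character counts are rough guesses, which is why my exponent differs a bit from the 1E183800 figure above):

```python
from math import log10

alphabet = 27                    # 26 letters plus a space: a crude keyboard
hamlet_chars = 130_000           # rough character count of Hamlet
digits = hamlet_chars * log10(alphabet)   # log10 of 27**130000 trials needed
print(f"P(one random attempt succeeds) ~ 1e-{digits:,.0f}")

atoms, seconds = 1e79, 4e17      # hydrogen atoms; age of universe in seconds
print(f"log10 of available resources: {log10(atoms * seconds):.0f}")
# Roughly 1e-186000 against resources of order 1e97: the gap between these
# numbers and infinity is the whole point.
```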

Thursday, April 17, 2008

Why have Children?


Bryan Caplan has some neat notes on career women who choose not to have kids; it seems like something they are just too busy to really make a priority, so it doesn't happen. Back in 2005, Caplan argued that people tend to underinvest in kids because they bear most of the costs when they are in a position to make the decision. As an economist, and being logical, I have to say this influenced my calculus to create Izzie last May. A major puzzle in child rearing, to economists, is that many happiness studies find that time spent with children is often considered the least pleasurable part of a person's day, and when you control for the standard socioeconomic variables, adults with children are less happy than those without children. Ever play 'Chutes and Ladders'? It's boring. Jerry Seinfeld once joked that many games are advertised as "fun for the whole family. Nothing is fun for the whole family!"

Yet children are perhaps the best way to manufacture appreciation. You may be incompetent at your job, but your 5-year-old will still need you very much. There is a basic appreciation for one's biological parents, regardless of how much they actually hang out (note Obama's elevation of his sperm-donor dad), and the littlest things a grandparent does for a child are often looked back on with profound appreciation. For example, in The Education of Henry Adams, the author recounts an episode from when he was six or seven. He vaguely remembers throwing a tantrum about not wanting to go to school. His frail 80-year-old grandfather, the American President John Quincy Adams (son of President and Founding Father John Adams), appeared, took his hand, and walked him silently the mile to the schoolhouse. No lecture, just a walk, but right to school, and the tantrum was over. Looking back, Henry Adams notes that "the seeds of a moral education would at that moment have fallen on the stoniest soil in Quincy". These are the things parents and grandparents strive for, hoping that when they are long gone, a consciousness truly appreciates something they have done in a profound way.

I remember strongly that when my mother was dying of brain cancer, she was very focused on leaving memories behind: getting picture albums in order, organizing mementos from our past that we could cherish in her absence. The thought of having someone engage in traditions she helped create consumed her until she lost the ability to pursue it.

An executive has a full inbox of people competing for his attention all the time, and does not need children to manufacture appreciation; as children are substitutes for generating appreciation, and resources are limited, many career-oriented women are childless by choice. I think children are a great investment in creating something that needs you, that connects you to the future. Not only do toddlers need you, but so will their kids, which is especially rewarding during a period in one's life when your professional duties create no need for you. For the poor with no market skills, kids are a sure way to make yourself important, at least in someone's eyes.

The flip side of being appreciated is being unappreciated, and no tiresome work or physical pain is as stressful as being in a situation where you feel your talents are unappreciated, and you see little hope of changing things. The null feeling of people towards each other is indifference, and so a person unappreciated by everyone has done nothing to earn favor, and knows it. What makes this so terrible is that being unappreciated is persistent, whereas physical pain is temporary. When I think about making my children successful, it primarily involves imagining them in a situation where they are appreciated because of their competence, courage, empathy, and good humor. If all I knew was that many people truly appreciated them, and not in the condescending way people say they appreciate unskilled workers, it wouldn't matter to me whether they were rich or poor.

This gets down to what, specifically, is in a person's utility function. The traditional objective function takes one's consumption as the primary objective, and then present-values the 'utility' of this consumption. This leads to weird convolutions, where one 'consumes' children, and actually doesn't enjoy their company but still prefers them. I think a more reasonable approach would be to assume people are maximizing their external appreciation in society. This is going to be increasing in wealth, because the more money one has, the more favors one can do, and the more people will flatter you to get business or use your boat. And it all gets back to maximizing status: a status maximizer maximizes his appreciation in this world. Someone looking merely at their consumption bundle should consider themselves rich in the West even if they are at the bottom of society working as a parking attendant, but of course they would not feel rich at all; they would feel very poor. As people crave appreciation from others, maximizing their status is a direct path to that end. So is having children.
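To make the contrast concrete, here is a minimal sketch in standard notation (my formulation, not a published model): the textbook agent discounts the utility of consumption, while the status maximizer discounts appreciation, which rises with both wealth and children.

```latex
% Textbook objective: discounted utility of consumption c_t
\max_{\{c_t\}} \sum_{t=0}^{T} \beta^{t}\, u(c_t)

% Sketched alternative: discounted appreciation A, increasing in wealth w_t
% (favors you can do) and in children n (someone who needs you)
\max_{\{c_t,\,n\}} \sum_{t=0}^{T} \beta^{t}\, A(w_t, n),
\qquad \frac{\partial A}{\partial w} > 0, \quad \frac{\partial A}{\partial n} > 0
```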

And of course, this leads to a world where all risk taking is unrewarded (see my article here).

Wednesday, April 16, 2008

Barack Obama as the Ultimate Corporate Climber

Barack Obama comes off as the most intelligent, reasonable candidate left. But I think a lot of this comes via a lack of track record. During his stint at the Harvard Law Review he didn't pen one article, nor did he while on the faculty at the University of Chicago. He didn't propose any big legislation as a state senator in Illinois, or in the US Senate. At least McCain has McCain-Feingold, and Hillary has health care, and their failures in these areas are learning experiences that will make them better candidates. I'll take someone who has failed in a sincere effort that was reasonable at the time over someone with zero experience any time, because the best attribute of a leader of something with a not-obvious objective function is modest ambition, and nothing teaches modesty better than failure in previous panaceas.

The problem with guys who merely talk reasonably, but have no record of achievement, is that we have a tendency to infer an Olympian reasonableness and moderation in their views when they are conventionally successful. Yet many of these guys are merely afraid of looking stupid, and so say little about anything controversial and relevant to their career. For example, I know several very successful people who have mastered the art of saying very little about actual business tactics. And if they find themselves in a group that is successful for reasons unrelated to their actions (eg, a portfolio manager for telecom in the 1990s), most people assume they are very bright. You really have to have an insider's knowledge to know otherwise, so you get unalloyed praise out of the ignorance of outsiders deferring to success, and self-interested flattery by ambitious insiders eager to form coalitions.

Most people prefer not to look stupid, and it is easier to say nothing than take a risky view. But the fact that most successful executives in large organizations adopt this strategy does not mean it is optimal, because many such people are mired in middle layers, their groups not having been the accidental winners in their field, like people lucky enough to be in telecoms in the 1990s. I would not advocate this strategy to someone young and smart, in spite of the fact that most successful politicians and businessmen adopt it, because those successful people are more like lottery winners than self-made men.

The average political leader is really a meek person intellectually, taking well-worn, prosaic positions, and then heaping immoderately large praise on their unassailable objectives (read Obama quotes here, and note none of them advocate anything anyone would be against). Forcefully articulating big ideas is helpful only to the policy wonks who need them to rationalize their petty objectives. Really thoughtful policy tends to come from academics or journalists who first test ideas in the field, so only after several decades of success does a politician justify an idea by pointing to someone like Milton Friedman, who was for many years a lonely voice.

But if you have no good novel ideas, and really no good opinion on current ideas, the following approach is recommended for moving up the corporate ladder. First, have strong opinions about things like sports and movies, and also have a keen grasp of conspicuous historical events in your field (Obama took a bold pro-Bears position prior to last year's Super Bowl). So, in finance, if you read an insider's account of the dot-com bubble, the S&L crisis, or Long Term Capital Management, you will appear to have good, detailed knowledge. Never champion a new idea, or take a view on a current, controversial plan; you will only look foolish. Lastly, get a haircut every 2 weeks, and wear good clothes, paying special attention to shoes. Now, when people are looking for a new head of corporate development, you are perfect, because you have never made enemies, and you seem like a reasonable guy. People won't remember that you never have opinions on novel stuff, because their memory will be dominated by your strong opinions on irrelevancies, and your hindsight knowledge about the internet bubble shows a courageous and savvy insider's view (this is the key subtlety to the empty suit strategy: you need some camouflage). Then, inherit a bunch of projects, and manage them by merely listening quietly to your employees bitch about each other, saying, 'don't worry, I'll take care of it', which means, 'I'll call them into a meeting and listen to them bitch about you'.

Obama seems like a smart guy, but his voting record suggests he merely takes the stereotypical, unthinking, leftish position on everything. The fact that he will rationalize this well is not a good thing.

Tuesday, April 15, 2008

von Neumann...Mandelbrot?

A 'straw man' argument is to set up a weak counterproposal, and then criticize it. It is a 'straw man', easy to knock down. I thought this was a discredited technique that would generally make the author look bad, but the more I read popular books, the more I see this approach taken. Best-selling critics of the free market, such as John Kenneth Galbraith and Lester Thurow, would present any misfortune as an anomaly to libertarians who supposedly thought markets were perfect.

But then I was in my local Borders, reading Michael Mauboussin's More Than You Know, which is about investing and such, with an academic/practical slant, something I would like. The book has its moments, and its short chapters and easy reading make it a great book to read while drinking a cup of coffee, and then return to the stack. The book has lots of graphs and tables. I love books with graphs and tables, and wish more books had them, as I find it odd that a 300 page book on investing would not (eg, Against the Gods or A Random Walk Down Wall Street have but a few), as these make points so much better than mere words. So bully for Mauboussin. But he had a chapter on "frequency versus magnitude in expected value", and highlighted that some really smart people like Warren Buffett assess expected value as probability times payoff, so an improbable event may actually be a good buy, because its payoff is sufficiently large. Who knew? He even quotes my favorite flâneur, Nassim Taleb, for noting that as a trader he would be short sometimes even when he thought the market was going up, because the size of a downturn would make the expected return negative [forget about the fact that trading floors hate it when their traders have directional bets]. Supposedly, most rubes merely look at the probabilities, ignoring payoffs.

An expected value is the sum of payoffs weighted by their probabilities. It has always been that way. To think this is subtle, or rare, strikes me as daffy; after all, most options expire worthless, yet have a positive price. Contradiction? No.
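A minimal numerical sketch of that point, with made-up numbers: an option that expires worthless 95% of the time still carries a positive expected value.

```python
# Expected value = sum of payoffs weighted by probabilities.
# All numbers are invented for illustration.
p_itm = 0.05            # 5% chance the option finishes in the money
payoff_itm = 20.0       # payoff in that state
payoff_otm = 0.0        # payoff the other 95% of the time

ev = p_itm * payoff_itm + (1 - p_itm) * payoff_otm
print(ev)               # 1.0: worth a dollar despite a 95% chance of zero
```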

Then I moved over in the stacks and found Benoit Mandelbrot's Misbehavior of Markets, and there's a blurb by Paul Samuelson:
On the scroll of great non-economists who advanced economics by quantum leaps, next to John von Neumann we read the name Benoit Mandelbrot

ORLY? von Neumann is famous for his von Neumann-Morgenstern utility function, the workhorse of most economic analysis. He had a macro model too, but I think it's safe to say that was a noble effort, but a dead end. But Mandelbrot? He is usually the first reference for fat tails in financial distributions, based on a 1963 article on cotton prices, but I fail to see this as a truly earth-shattering finding because almost everything natural has fatter tails than a normal (Gaussian) distribution, so who was suspecting otherwise? Mandelbrot's book points out (and so does Mauboussin's) that the stock market has fat tails, and states that models like Black-Scholes and the CAPM are based on the Gaussian hegemony that forces us to think that such things can only have normal distributions, a legacy of Bachelier from his 1900 work on brownian motion. But these models use the Gaussian model as an expositional device, because it generates nice, clean, closed-form solutions. Markowitz's earliest book (Portfolio Selection: Efficient Diversification of Investments, 1959) looked at semi-deviation, maximum loss, and other asymmetric loss functions. Thus, since the very beginning of MPT, researchers have been aware that distributional assumptions were important, and obvious paths for alternative hypotheses. Looking at the Journal of Portfolio Management, "skewness" is mentioned in no fewer than 66 articles, kurtosis in 44. The bottom line is that after 40 years, this obvious fix--adding a cube or square term, effectively--appears only episodically as a solution, invariably not standing up to subsequent scrutiny.
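For anyone who wants to see how unremarkable fat tails are, here is a quick simulation (my illustration, not from Mandelbrot or Mauboussin): a Student-t with 5 degrees of freedom, a common stand-in for equity returns, shows large excess kurtosis, while the Gaussian sits near zero.

```python
# Compare sample excess kurtosis of a fat-tailed and a Gaussian sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
normal = rng.normal(size=100_000)
fat = rng.standard_t(df=5, size=100_000)

print(f"Gaussian excess kurtosis:  {stats.kurtosis(normal):.2f}")  # ~0
print(f"Student-t(5) excess kurt.: {stats.kurtosis(fat):.2f}")     # ~6 in theory
```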

In general, the first-order effects of ignoring these complications are not great, so cluttering the formulas with more free parameters, at little benefit, is not done. But the CAPM doesn't work regardless, and Black-Scholes is fit to a volatility smile--always has been. No one I have met has ever confused the map with the territory when it comes to asset pricing models.

In 1987, James Gleick published a best-selling book, Chaos, highlighting Benoit Mandelbrot's fractals, and suggested he had a new insight that was about to revolutionize all of science, including finance. But as Rubinstein states in A History of the Theory of Investments, "In the end, the stable-Paretian hypothesis [ie, fractals] proved a dead end, particularly as alternative finite-variance explanations of stock returns were developed".

Ever since I started paying attention to option prices in the 1980s, options have been priced anticipating fat tails, via a volatility smile, yet Mandelbrot would have you believe that no one knows this, or at least, that The Establishment thinks Gaussian distributions are perfect replications of reality. The 1987 crash was a 22 standard deviation event, happening once every 60 bazillion years according to the Gaussian knaves. Now truly people did not anticipate this, because the previous largest move was less than 10%, but I don't think his model is any great improvement, because anticipating a 22 stdev movement in any option with a significant probability will cause you to overpay severely for illiquid options with wide bid-asks. I actually bought an out-of-the-money S&P500 put option on October 16 that I sold in the last 15 minutes of trading on October 19, 1987, and made $38k on a $3k investment (my life savings at that time, see here). I remember not getting my mark for a day, and finally being surprised to see my option sold with an implied vol of around 70. Great return and all, but it wouldn't pay for a lifetime of buying 3-delta options (i.e., way out-of-the-money options). However, if you include that investment in my arithmetic average annualized return, my life-to-date Sharpe in my personal account is around a 5--highlighting the problem with arithmetic averages.
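One way to see the arithmetic-average problem: drop that one outlier trade (roughly +1167%) into a run of otherwise ordinary years, which I am assuming at 8% purely for illustration, and watch the arithmetic mean detach from the compounded return.

```python
import numpy as np

# 20 assumed ordinary years at 8%, plus the 1987 trade ($3k -> $38k)
returns = np.append(np.full(20, 0.08), 38 / 3 - 1)

arith = returns.mean()
geom = np.prod(1 + returns) ** (1 / len(returns)) - 1
print(f"arithmetic mean: {arith:.1%}")  # ~63% a year, absurd
print(f"geometric mean:  {geom:.1%}")   # ~21%, still flattered by one trade
```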

Benoit Mandelbrot is a very smart man, but I have a feeling that he truly believes his great idea in finance is that prices follow these power-law distributions with extra parameters. That is simply not a good idea, and I can imagine his brother-in-law thinking 'why does everyone think this guy is so smart?'

Anyway, are such straw man arguments necessary to sell a book? I should write a book on the theory that some assets go up, some go down, but very few stay exactly the same price. This proves that everyone who thinks prices are 'right' is a fool, because prices change, proving the old price wrong. The Man tells you that prices are the best estimate of market value, but he is proved wrong every day. QED. (patent no. D696,243).

Monday, April 14, 2008

Bad Week for Peak Oil

So last week, they found 10 billion barrels of oil in North Dakotan shale, up from the previously estimated 0.5 billion barrels. And today, they found 33 billion barrels off the coast of Brazil. Considering we (the US) aren't really looking to drill off Florida, or in the wastelands of Alaska, I imagine there are a lot more such finds out there if we ever get serious about looking.

I remember the oil crisis of 1980 or so, and being very sad at the thought that though I was about to turn 16 and get my driver's license, I wasn't going to be able to drive much if at all, because we were running out of oil. And so it goes, as the Peak oilers and their allies tend to think we are running out of oil, and we are all going to have Hell to pay really soon. Luckily, it does not look like this will be in my lifetime, or that of my kids. I have enough to worry about. After all, in 1950, geologists estimated that the world had 600 billion barrels of oil, revised to 2,000 billion in the 1960's, 1,500 billion in 1970, 2,400 billion in 1994, and 3,000 billion barrels in 2000. Running out of oil no longer keeps me up at night.

Peak oilers remind me of my son's fears. One night, before my son was going to bed, he looked anxious, and I asked why. He said, "I'm afraid of sharks." Of course, in Minnesota, I'm personally more afraid of angry moose or stray bobcats, but then again we do go to the neighborhood pool a lot in the summer, and God knows when those hungry beasts will get in there, being aquatic man-eaters. I told him that if you poke them in the gills, they will release you, allowing you to get away with merely a flesh wound. Dolphins are known to go for shark gills, and so do sharks themselves when they fight. I am proud to state that, armed with this knowledge, sharks have not attempted to bite my son yet, knowing they would be in for a heap of hurt.

The key to peak oil, whether we are halfway through our global tank of gas, is the biogenic theory of oil, which is that cumulative decayed vegetation is the basis for our oil reserves. Now most Russians, unlike us in the West, believe oil was not formed from decaying vegetation, but rather through natural processes in the earth's crust and meteors that have nothing to do with bacteria and plants (abiogenic petroleum). Interesting debate. But did you know that there isn't a really good theory for why Earth has so much water? I doubt we'll resolve these issues anytime soon.

Sunday, April 13, 2008

Aggregate Market Predictor

The best single predictor of the future stock market return is the year-over-year change in the Fed Funds rate. When it falls, the market tends to rise, and vice versa. This uses data back to the beginning of the managed Fed Funds rate in 1954, and looks at the past 12-month change in the Fed Funds rate vs. the future 12-month change in the S&P500. Look at the graph below.


In table form, it looks like this. There's a whopping 5% difference in future stock market return when there is a negative, as opposed to a positive, Fed Funds rate change over the prior year, looking a year ahead. It's not a big enough edge to quit your job, but for something as noisy as the aggregate market, it's pretty good.
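A minimal sketch of how one might reproduce the comparison, assuming monthly series for the effective Fed Funds rate (e.g., FRED's FEDFUNDS) and the S&P 500 level; the file names and column labels here are my placeholders, not from the post.

```python
import pandas as pd

# Load monthly data; 'fedfunds.csv' and 'sp500.csv' are hypothetical files.
ff = pd.read_csv("fedfunds.csv", index_col=0, parse_dates=True)["FEDFUNDS"]
spx = pd.read_csv("sp500.csv", index_col=0, parse_dates=True)["SP500"]

dff_12m = ff.diff(12)                    # year-over-year change in the rate
fwd_ret_12m = spx.shift(-12) / spx - 1   # next 12 months' S&P 500 return

df = pd.DataFrame({"dFF": dff_12m, "fwd": fwd_ret_12m}).dropna()
# Average forward return after falling vs. rising Fed Funds
print(df.groupby(df["dFF"] < 0)["fwd"].mean())
```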

Saturday, April 12, 2008

Why the Freakonomics Craze?

With the success of Freakonomics, there is now:

Tim Harford’s The Logic of Life
Tyler Cowen’s Discover Your Inner Economist
Steve Landsburg’s More Sex Is Safer Sex
Robert Frank’s The Economic Naturalist
Dan Ariely’s Predictably Irrational

And I'm sure I've missed some. They make good bookstore reads, in that, like a trivia book, they are fun, but after a while boring because there really isn't a theme. I'm with famed game theorist Ariel Rubinstein, who mocks Freakonomics mercilessly, and quotes from the book:

the most likely result of having read this book is a simple one: you may find yourself asking a lot of questions

What happened to teaching answers? Is it really more profound to be more confused after reading a book? Arguing from authority (Freakonomics modestly notes that Levitt is "the most brilliant young economist in America"), it seems like Levitt is demolishing a staid and blinded establishment, yet to the extent his points are truly important and novel, they are untrue (e.g., abortion and crime), and to the extent they are true and novel, they are unimportant (e.g., sumo wrestlers engage in quid pro quos when on the bubble in tournaments). As Rubinstein points out, the book makes points, but not really points particular to economics:

Freakonomics expresses the aspiration to expand economics to encompass any question that requires the use of common sense. Take, for example, Levitt’s tales of the big city. The Chicago Municipality administers an annual test for schoolchildren. A suspicion arose that teachers were “correcting” their students’ answers before sending the tests to be checked. Levitt obtained the data from the municipality and developed a computer program that looks for classes with suspicious combinations of answers. For example, if all of the students in a particular class responded correctly to questions 7, 8 and 10, and erred on question 9, a suspicion arises that the teacher falsified the answers to four questions. (On question 9, the teacher either made a mistake himself or tried unsuccessfully to avoid raising suspicion.) In this way, Levitt discovered dozens of deceitful teachers. The IDF’s intelligence units and credit card companies use similar algorithms. What have we learned about Levitt? He is a smart guy with connections in the municipality. What is the connection to economics? None.

And then there's the reference that Levitt was consulted by the CIA to advise them on finding evil-doers. These agencies have hundreds of thousands of employees, and in a bureaucracy, a disproportionate number of inside-the-box thinkers. That they might pay for a speech by an outsider is rather unremarkable, as I'm sure the list of such speakers is long and boring. It's hardly an exclusive group (though, truth be told, the CIA has not asked me for my opinion--player hater!). So, I'm no fan. Of all the Freakonomics-like books, I think Freakonomics is the worst.

So why such emulation? Publishers. As Rick Horgan, corporate suit extraordinaire, notes in this wonderful interview at this publishing website:

"The biggest deciding factor is the comparison book. That’s the way you sell in books these days. A book comes in, if I don’t have a like book to compare it to, then I’m lame, I’m crippled when I try to sell it internally."

Q: Will a lack of comparable sales prevent you from picking up a book?
Rick Horgan: Totally.

So until you can say this is gonna be "just like the bestseller Freakonomics", a book publisher is like a dog trying to learn calculus. If you have a new idea, like, say, the original Freakonomics, he'd probably suggest self-publishing.

Friday, April 11, 2008

Fund of Fund Overfitting

Spurred by self-promoter Harry Kat, for some reason people think you can replicate a hedge fund merely by back-fitting a fund's return, a vector of probably no more than 100 data points, against 78 or so futures time series. This is kitchen-sink regression analysis at its worst: overfit, with little predictive power but a really high R2! Anyway, Mr. Kat somehow convinced several people with more money than brains to apply his proprietary "FundCreator" software, which is described as follows: "In essence, this means our strategies go back to the famous Black-Scholes option pricing model, which is well-tested in practice and which forms the foundation of today’s trillion dollar derivatives industry." Black-Scholes! There's a manipulative argument by authority.

In my never-ending litigation from the Dark Side, the judge at one point said: "I was reading the New Yorker, and Harry Kat seems like an expert on Hedge Funds. Why can't we ask him about this?" Not good.

But anyway, overfitting is a common vice because it is so tempting: something that fits a set of data 'could' be the model driving it. This error is especially common for those with natural science backgrounds, because small samples aren't really a problem for physicists and chemists, and so they don't develop any intuition for these issues. Thus, I got this copyrighted white paper, so I can't post it, but it's free, and I got it at Albourne Village. It's Fund of Hedge Fund Portfolio Risk, by Investor Analytics. It tries, like Kat, to reverse engineer a fund by throwing a bunch of time series against it. The paper notes, "most FoHF managers use a minimum of 24 data points (2 years of monthly returns), but 36 data points is generally preferred". Hey, after 30 data points, a Student's t distribution approximates a Gaussian distribution, so... Anyway, they give an example with 29 data points. They find a 0.8 R2 using a combination of aluminum, precious metal, natural gas, and gas oil futures, and the square of crude oil. They note that they used the square of crude oil because the initial set of factors included cocoa, and the portfolio managers don't trade cocoa. OK, so you have 3 highly correlated inputs (natural gas, gas oil, crude squared), and then a precious metal and an industrial one. I wonder what random set of commodity futures best explains the S&P over the past 29 months? I'm sure there is a set.
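To see how easy a 0.8 R2 is with 29 observations, here is a toy version of the exercise (entirely my construction, not the paper's method): regress 29 months of pure noise 'fund returns' on the best 5 of 78 random 'futures' series.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_series, n_used = 29, 78, 5

y = rng.normal(size=n_obs)                  # fake fund returns: pure noise
X = rng.normal(size=(n_obs, n_series))      # fake futures: also pure noise

# Pick the 5 series most correlated with the 'fund', then fit OLS on them.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_series)])
best = np.argsort(corr)[-n_used:]
Xb = np.column_stack([np.ones(n_obs), X[:, best]])
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
r2 = 1 - ((y - Xb @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"In-sample R2 on pure noise: {r2:.2f}")  # routinely 0.5 or higher
```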

I thought, who writes this stuff? And I go to Investor Analytics website, and the CEO is described as follows:
Damian received his undergraduate degree from the University of Pennsylvania and his doctorate in nuclear physics while working at the National Superconducting Cyclotron Laboratory in Michigan.

Two cheers for economics. Our experience with time series, and the limitations of Keynesian macro models, leading economic indicators, Kalman filters, and vector autoregressions, prevents us from wasting time this way. Or at least it should.

Monday, April 07, 2008

Asset Pricing Theory is Really a Framework

The latest JoF came out, and though publication day is the 'death of a paper', I think it's fun to go over one article in there. Ang, Bekaert, and Wei have an article called The Term Structure of Real Rates and Expected Inflation. It highlights a pernicious path in asset pricing, in that it is no longer a theory, but a framework. A theory is a testable hypothesis: it generates predictions and restricts what can happen. A framework is like string theory with its 10^500 different universes: one of them works!

It's a regime-switching model, which is convenient, because interest rates were really crazy from 1970-1981, when inflation was increasing and real yields were negative, and so a parameter that basically separates that period out helps. In a sense, the fact that people didn't anticipate the increase in inflation in the 1970's, or its decline in the 1980's, is modeled via this little trick, but I don't think reifying this makes it any clearer. As for regime switching in general, watch out: macro data has a handful of turning points: 1973 for productivity, 1981 for inflation, the 5 recessions since 1970. Let my model 'know' those, and the model y=constant works pretty well. The model is a function of three more parameters: the inflation rate, a 'latent' factor, and a risk factor--the latter two being unobservable.
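For readers unfamiliar with the trick, here is a minimal two-state Markov regime-switching sketch (my illustration, not the Ang-Bekaert-Wei model; all parameter values are assumptions): with regimes this persistent, the fitted model 'explains' the 1970s largely by where the switch dates land.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.98, 0.02],     # calm regime: very persistent
              [0.05, 0.95]])    # high-inflation regime: also persistent
mu = [0.04, 0.10]               # assumed mean short rate per regime
sigma = [0.005, 0.02]           # assumed volatility per regime

state, rates = 0, []
for t in range(480):            # 40 years of monthly observations
    state = rng.choice(2, p=P[state])
    rates.append(rng.normal(mu[state], sigma[state]))
```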

The model captures some 'stylized facts' of the yield curve, such as its upward slope. But the upward slope of the yield curve holds only for the first 1-3 years, so for some reason, the 'inflation risk premium' they assert causes this is only applicable for 1-3 years; after that, it goes away. But they offer 1 to 6 (!) regimes, as if to say: take your pick! They like that their model shows that nominal yields are more upward sloping than real yields, though the Treasury Inflation-Indexed yields look pretty parallel to the nominal curve to my eyes. They find that 80% of the yield curve volatility is from the inflation rate and the inflation risk premium, even though the 10 year TIPS rate has been about 87% of the nominal rate since 2002 (data in FRED).

But the authors are proud of their little beast because it is "rich", pointing to the varying price of risk and the regime switching as evidence of its richness. When the key attraction of a model is that it can accommodate data we all know so well with unobservables, what is the point? Inflation, fine, but the latent factor and the time-varying risk parameters are simply the data: there is very limited intuition for them, they are fit to the data. Now, if they found they could pull out a risk factor here, and use it to explain equities, that would be neat. But this model is classic academic research.

Sunday, April 06, 2008

Next Big Catastrophe: Detroit Automakers

For the past 20 years, Honda and Toyota have generated a profit. Ford and GM, meanwhile, haven't made money since 2005, and will lose money again in 2008. When the next full recession hits, I figure it will be the ballgame for these guys. These companies are rated B-, and their debt trades in the middle of that range for most issues, about 700 bps over Treasuries.

The kicker was that recently Appaloosa Management decided it would not inject the money needed to get Delphi out of bankruptcy, which means GM is left with this big loser, or must kick in more money to get a buyer. Big companies can play games for a long time; for example, GM was paid by Delphi for a loan...with debt.

But there was a happy story in this weekend's Barron's, noting that GM should return to profitability in 2009! What company doesn't anticipate profitability the year after next? Barron's also noted that it is trading at "half its historical multiple of 2.2 times expected 2009 cashflow". Since when has Price-to-[year-after-next cash flow] been a valid multiple?

The next recession, not this thing, but when unemployment hits 7% and such, the US automakers are dead, because their cost base is so much higher than the Japanese, even for cars produced in America. If you are making widgets, and it costs you $2, and your competitor's cost is $1, the equilibrium price will be in the middle, and the game will end only when the high-cost company exits. Detroit's costs are about 30% higher than the Japanese!

Friday, April 04, 2008

Regulatory Capital Recommendation

As a former head of capital allocations at a regional bank, I have some ideas, below, to address the issue of regulating complex financial institutions. They won't prevent crises from happening, but they are good ideas nonetheless. The basic idea emphasizes disclosure: more accountability for the rating agencies via external monitoring of their ratings performance, more detail on the net positions of investment banks, and finally, a Fed overview that publishes specific deficiencies in an institution's due diligence of its many product lines.

The problem with capital requirements, as opposed to disclosure, is adverse selection and exit. Details matter, so any top-down view invariably invites adverse selection to the extent details are not specified. Say you require 2% capital against all swap notional amounts: then the only swaps on the books will be those where insiders feel the necessary (economic) risk capital is greater than that, and firms will contract overseas for the others.

The complexity is insane. At Key, we had 5 major lines of business, such as consumer, corporate, and capital markets. Then 23 primary lines of business, such as within community banking: small business, retail community, commercial banking, and the ubiquitous 'other'. Finally, you have over 100 secondary lines of business. Within, say, commercial banking, you have: upper middle market, community middle market, large corporate, business owner services, and other.

Within each of the 100 secondary lines of business, you have different product types, such as loans and lines (undrawn loans that are available). Loans are evaluated differently depending on obligor, so that commercial loans are bucketed based on financial statements, consumer loans on FICO scores, and small business on a combination. Within consumer loans, say, some credits look at collateral values (eg, leasing, residential mortgages, home equity) and some don't (say, credit cards). Some look at the vintage of the loans (consumer loans), some don't (commercial). Within lines, there are even different types that, frankly, I forget the distinction between off the top of my head: financial, commercial, and performance lines of credit (but they have real differences in risk!). There are different risk factors applied to each matrix element (line of business/product type/credit score/credit vintage/etc.) based on estimates of loss rates and recovery rates for those types, and a sense of their prospective volatility.
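To give a flavor of what that matrix looks like in practice, here is a minimal sketch (my illustration, not Key's actual system; every number and the capital multiple are invented) where each bucket carries its own loss, recovery, and volatility assumptions:

```python
from dataclasses import dataclass

@dataclass
class Bucket:
    expected_loss: float   # annual loss-rate estimate
    loss_vol: float        # prospective volatility of that loss rate
    recovery: float        # recovery rate given default

# Keys: (secondary line of business, product, score band, vintage).
# All entries below are invented for illustration.
buckets = {
    ("small business", "loan", "FICO 660-719", "2006"): Bucket(0.020, 0.010, 0.55),
    ("middle market", "line", "grade 4", None): Bucket(0.008, 0.006, 0.45),
}

def capital(key: tuple, exposure: float, k: float = 6.0) -> float:
    """Crude economic capital: a multiple k (a placeholder assumption)
    of loss volatility applied to the unrecovered exposure."""
    b = buckets[key]
    return k * b.loss_vol * (1 - b.recovery) * exposure

print(capital(("small business", "loan", "FICO 660-719", "2006"), 10_000_000))
```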

Then there is the high-profile capital markets group, which highlights the fact that there are strong forces within every large organization to misdirect even internal management, making the problem even more complex. These groups engage in what I call 'alpha deception': basically, they have a big incentive to inflate their value and present themselves as speculators as opposed to middle men, because that's the only way to justify getting paid >$250k per year to be a clerk. Many people who are just making markets are looking at screens with bids and asks, and then buying at the bid, selling at the ask. I remember a trading head who got very mad at me when I reported his value-at-risk to my boss without telling him--it was too low! It basically said: this group is a bunch of intermediators, not speculators, and middle men don't wear fancy red suspenders and drive Porsches.

Now, they need flexibility to do this efficiently--e.g., sometimes you offset an over-the-counter swap with a series of eurodollar futures contracts--and so they do occasionally take market risk. The complexity of this task means any manager two levels up really has no clue how the money is made. For example, we had this swap trader, and he knew how to hedge and price swaps via the eurodollar and large swap markets, so a little $10MM fixed-rate loan to Bob's Bakery could be swapped into a floating rate via this method. With billions of dollars in loans, you just call the CFOs of all your customers and ask them if they think interest rates are falling. If he says yes, then tell him to swap into a floating rate loan fast, so his liability does not increase in value. If he says no, he thinks they are going up, ask if he has any floating rate debt he wants swapped into fixed rate. If he thinks markets are efficient, well, he is unusual.

Now this swap trader made a lot of money when he came, because lots of people wanted to swap from fixed to floating or vice versa, and we simply made money on the spread when we did the swap (e.g., if the market rate was 5.4%, we charged him 5.70%--the price not being obvious to someone who does not monitor eurodollar synthetic forward rates composed of futures and swaps). After his second year, as he made the most money, personally, in the whole bank, he met with the CEO. The CEO had no clue how he was making so much money, and the trader basically implied he made it via 'arbitrage'. But the arbitrage necessitated flow, retail customers; it was not arbitrage in the sense that a price taker could do it, but that was the impression he gave to the CEO. If he told the truth, that he made X millions off, basically, the franchise value, doing something any one of a thousand other people could do, the CEO would obviously see he was being overpaid.
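The economics of that desk are simple once stated plainly. A toy calculation, using the 5.4%-vs-5.70% example above (the notional, duration, and deal count are my assumptions, not figures from the bank):

```python
market_rate = 0.054    # mid-market swap rate (from the example above)
client_rate = 0.057    # rate quoted to the customer
notional = 10_000_000  # one Bob's Bakery-sized swap
duration = 7.0         # assumed PV sensitivity of a roughly 10-year swap

pnl_per_swap = (client_rate - market_rate) * duration * notional
print(f"Day-one value of one swap: ${pnl_per_swap:,.0f}")    # ~$210,000
print(f"100 such swaps a year:     ${100 * pnl_per_swap:,.0f}")
```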

I've mentioned this issue before, but it's important, because I would guess many people think banks and Ibanks make money off savvy directional bets, when this is generally not true, and it gives a very skewed picture of the nature of a bank's risk. I remember reading Marcia Stigum's 900 page The Money Market, and she noted that one year a bank made $400MM on an interest rate bet, and noted this with a kind of awe, as if that's the real place banks make money. Bankers are no better at forecasting interest rates than your neighbor, or economists, and most don't bother with it any more. So there's all this misinformation within financial firms about the true nature of risks that makes collecting the really relevant information difficult. But basically, capital markets groups in financial institutions make their money off of flows, customers, not speculating--they intermediate. Thus, the Value-at-Risk for any large organization is generally insignificant (yet still must be monitored to prevent a SocGen-type fiasco), because VARs are driven off sensitivities to indices like the S&P, Tbonds, currencies, even volatility (e.g., the VIX), whereas most banks have merely incidental exposures to these. Further, the credit risk of their derivatives is generally minuscule due to margining, because anyone with a rating less than A will have to post margin as their derivatives position craters. As a focus of Basel II, the VAR of the trading desk is a red herring, but alluring because it is very amenable to a top-down approach (just calculate the VAR), as opposed to the bucketing approach applied to the n-dimensional matrix outlined above, where the risk is not so much the final tally, but the identification and description of the weakest links. The real risk for trading operations is the naked, pure bet these guys were leveraging: who knew Bear was levering up on AAA subprime res mortgage paper?

Now, for Bear, one should have had a sense that if residential mortgage CDOs fall 50%, you still have 3x the capital to withstand that loss in market value. But wait: many of these things were rated AAA, and because one's time is limited, and historically AAA paper has had 0.02% annual default rates, one does not do a full analysis of this stuff, and a 50% loss rate on AAA paper is unprecedented. If you demand that all your capital requirements cover unprecedented loss rates (e.g., 10 times the greatest-ever loss rate for that 'kind' of risk), then everything shuts down; it is impractical.

Solutions:
1) Hold the rating agencies more accountable. AAA paper in this economy is, historically, of very low risk, yet is large in outstanding amounts. AAA stuff is like Treasuries: it acts as cash, and so in that sense is like the money supply. When they screw up on this stuff, it hurts the system. A AAA bond going into default, from a Bayesian perspective, means the rating was wrong (e.g., if something with a supposed 0.02% annual default rate defaults, it is more likely the default assessment was wrong; see the back-of-the-envelope check after this list). The rating agencies should be required to post information on their performance, such as by providing to the government all their outstanding ratings, and their performance (e.g., default rates). Currently, they just do this of their own accord, selectively, without any objective oversight, so that they commonly exclude munis and structured securities--how convenient. It's a small price to pay for their little quasi-monopoly.

2) Have the Fed examine internal risk management systems, and then let them publish their thoughts on the system, including outstanding disagreements and uncertainties. If you can't validate your assumption on the loss rates for RVs, or indirect leases, it will be put out there, and then the CFO will have to do damage control, and either curtail the business or answer the question, or, if the market does not care, do nothing. But this would take many hundreds if not thousands of trained risk managers working for the government, asking pertinent questions in an expeditious and fair manner. I will say the Fed does hire better-than-average people, but then again, I never saw them dig down to the level of an internal person doing capital allocations, so maybe at some point the hamster on the treadmill in their head would stop.

3) Investment banks, as well as banks, to the extent they are net long or short assets, like CDOs of AAA rated residential mortgages, should give an explicit accounting of these assets: rating, specific type (e.g., for structured notes, the more detail the better). If Bear had to say they were long $10B of AAA subprime mortgage CDOs, we should know it. Banks are levered institutions with opaque balance sheets. To the extent they are warehousing assets, those should be merely the highly illiquid ones, because if they want to lever up on traded, risky assets, they are doing a disservice to investors: investors can get that exposure cheaper via mutual funds. Most of a bank's portfolio should be little bits left over as inventory for trading, securities for managing liquidity and making the risk-free rate (with a little curve risk, up to 3 years), and unrated stuff that doesn't trade, but therefore also cannot be hedged very well, so it won't be levered much.
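Returning to the Bayesian point in (1), here is the back-of-the-envelope check. The prior that the AAA label was misassigned and the junk-like default rate in that case are my assumptions; the 0.02% historical AAA default rate is from the text above.

```python
p_wrong = 0.01           # assumed prior: the AAA label was misassigned
p_def_if_right = 0.0002  # historical AAA annual default rate (from the post)
p_def_if_wrong = 0.05    # assumed junk-like default rate if mislabeled

# Bayes' rule: given an observed default, how likely was the rating wrong?
p_default = p_wrong * p_def_if_wrong + (1 - p_wrong) * p_def_if_right
p_wrong_given_default = p_wrong * p_def_if_wrong / p_default
print(f"P(rating was wrong | default) = {p_wrong_given_default:.0%}")  # ~72%
```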