Thursday, December 30, 2010

Should Old Acquaintance Be Forgot

Although I started this blog more than eight years ago, it lay largely dormant for most of this period and this has been my first full calendar year of (somewhat) regular posting. The experience has been consistently rewarding but occasionally exhausting. As the year draws to a close I'd like to acknowledge my debt to a few of the individuals whose writing I have enjoyed and learned from over the past twelve months, and to reflect upon some of the main ideas that have been explored in these pages.

Macroeconomic Resilience began the year as an anonymous blog but was subsequently revealed to be the creation of Ashwin Parameswaran, whose ecological perspective on behavior and markets is very close to my own. Every post of his is worth reading in full, but there is one on the trade-off between resilience and stability that remains an absolute favorite of mine.

Steve Randy Waldman's posts on interfluidity are generally so compelling and self-contained that there is usually very little left to add. I have been especially appreciative of a sequence of recent posts in which he argues that technocratic arguments, regardless of their merits, are unlikely to be persuasive if they are not consonant with our moral intuitions. It is the neglect of this important point that has so many commentators wondering why a policy that allegedly saved the financial system from collapse at negligible cost to the taxpayer is so deeply unpopular.

Along similar lines, Yves Smith on naked capitalism has been relentless in her criticism of TARP (and the unseemly self-congratulation of its architects) on the grounds that superior alternatives were available at the time. While there is plenty of room for debate on these points, it's a conversation that must be had, and one that has to consider the impact of the policy on the distribution of financial practices, as well as the outrage generated when moral intuitions are offended. It is essential that Yves (and her guests) continue to challenge the emerging academic consensus on the policy. 

One of the defining events of the year for me was the flash crash of May 6. Contrary to initial media reports, this was not the result of a fat finger or computer glitch -- it was the consequence of interacting trading strategies, most of which involved algorithmically implemented rapid responses to incoming market data for very short holding periods. In understanding the mechanics of the crash I benefited from comments posted by RT Leuchtkafer in response to an SEC concept release. One of these was published three weeks before the crash and turned out to be remarkably prescient. 

Viewed in isolation, the crash might be considered fairly inconsequential, and a recurrence could probably be prevented by implementing rule changes such as trading halts followed by call auctions. But the crash ought not to be viewed in isolation. Like the proverbial canary in a coal mine, its importance lies in what it reveals about the manner in which trading strategies interact to produce major departures of prices from fundamentals from time to time. These more routine departures take longer to build and correct, are difficult to identify in real time, and leave their mark in the form of value and momentum effects, volatility clustering, and the fat tails of return distributions.

This view of speculative asset markets as a behavioral ecosystem in which the composition of strategies is a key determinant of market stability has also been advanced by David Merkel on The Aleph Blog. David's sequence of posts on what he calls "the rules" is well worth reading, and it was in response to his tenth rule that I wrote my first post on trading strategies and market efficiency. That was just a couple of weeks before the flash crash occurred and brought these ideas suddenly to life.

I am convinced that the non-fundamental volatility induced by the trading process has major effects on portfolio choice, risk-bearing, capital allocation, job creation and economic growth. Some possible mechanisms through which such effects can arise have been explored by David Weild and Edward Kim, and I thank David for bringing this work to my attention. I am also grateful to Terry Flanagan of Markets Media Magazine for an invitation to attend their Global Markets Summit where I witnessed a fascinating and combative debate on the broader economic effects of exchange-traded funds. 

On the issue of market efficiency I have tangled with Scott Sumner on multiple occasions. But his anniversary post on The Money Illusion really struck a chord with me. Scott has a talent for making complex ideas intelligible, and an ability to maintain a clear distinction between a model and the empirical phenomenon that it is designed to explain. His vision of the economy is coherent and he is a formidable intellectual adversary. His post made me even more optimistic about the ability of blogs to shape economic discourse in constructive ways.

My window to the world of economics and finance blogs is Economist's View. Mark Thoma somehow manages to be both comprehensive and highly selective in his choice of links, virtually all of which are worth following. But more importantly, his site is a wonderful clearinghouse for open debate on economic methodology, especially in relation to macroeconomics. His post on the dynamics of learning (featuring a video presentation by George Evans) was especially memorable, as was Brad DeLong's diagrammatic discussion of the topic.

Despite the recent flowering of behavioral and experimental economics, I believe that the level of methodological homogeneity in our profession is stifling. But the time may finally be ripe for the introduction of agent-based computational models into mainstream discourse. A problem with simulation-based approaches is that there are no commonly accepted criteria on the basis of which the robustness of any given set of results may be evaluated. This will change once there is an outstanding article in a leading journal that sets a standard that others can then adopt. Where will it come from? Based on my reading of ongoing work by Geanakoplos and Farmer, I suspect that it may emerge from this recently funded initiative at the Santa Fe Institute. That would be nice to see.

Although my posts here have dealt largely with economics and finance, I also have a deep personal interest in social identity and group inequality, especially in the American context. On this set of issues I have found no voice more incisive than that of Ta-Nehisi Coates, whose freshness of perspective and formidable powers of expression I find breathtaking. His post on Robert E. Lee was one of several spectacular pieces this year, and prompted me to respond with my own thoughts on cultural ancestry. Related themes have been explored in a series of fascinating dialogues between Glenn Loury and John McWhorter.

Finally, I am thankful for the numerous extraordinary comments that have been left here, many by individuals who manage superb blogs of their own. Joao Farinha on economic development, Barkley Rosser on bubbles and agent-based models, Kid Dynamite on the flash crash, Economics of Contempt on TARP, Nick Rowe on learning, Adam P on equilibrium, Andrew Gelman on dynamic graphs, 123 on exchange traded funds, Andrew Oh-Willeke on private equity and cultural founder effects, and JKH on maturity diversification come immediately to mind, but there are many, many others.

I could go on, in a futile attempt to acknowledge all those who have influenced me and taken the time and trouble to  respond either in comments here or on their own blogs. But this post has to end before the calendar year does, and this seems as good a time to stop as any.

A very Happy New Year to you all.

Saturday, December 11, 2010

Perspectives on Exchange-Traded Funds

Are exchange-traded funds good or bad for the market?

That was the title of a lively and interesting session at Markets Media's third annual Global Markets Summit last Thursday. The session was organized as an old-fashioned debate between two teams. On one side were David Weild and Harold Bradley (joined later by Robert Litan on video), who argued that heavily traded funds composed of relatively illiquid small-cap stocks were responsible, in part, for the sharp decline in initial public offerings over the past decade, with devastating consequences for capital formation and job creation.

Responding to these claims were Bruce Lavine, Adam Patti and Robert Holderith, all representing major sponsors of funds (WisdomTree, IndexIQ and EGShares respectively). The sponsors argued that they are marketing a product that is vastly superior to the traditional open-end fund, provides investors with significant liquidity, transparency and tax advantages, and is rapidly gaining market share precisely because of these benefits. From their perspective, it makes as little sense to blame exchange-traded funds for declining initial public offerings and the sluggish rate of job creation as it does to blame them for hurricanes or influenza epidemics.

So who is right?

Bradley and Litan have previously argued their position in a lengthy and data-filled report, and Weild has testified on the issue before the joint CFTC-SEC committee on emerging regulatory issues. Their argument, in a nutshell, is this: The prices of thinly traded stocks can become much more volatile as a result of inclusion in a heavily traded fund, owing to the creation and redemption mechanism. For instance, a rise in the price of shares in the fund relative to net asset value induces authorized participants to create new shares while simultaneously buying all underlying securities regardless of the relation between their current prices and any assessment of fundamental value. Similarly, a fall in the fund price relative to net asset value can trigger simultaneous sales of a broad range of securities, resulting in significant price declines for relatively illiquid stocks. This process results not only in greater volatility but also in a sharply increased correlation of returns on individual stocks. The scope for risk-reduction through diversification is accordingly reduced, which in turn influences the asset allocation decisions of long term investors. The result is a reduction in the flow of capital to the smaller, more innovative segments of the market, with predictably dire consequences for job creation.
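To make the mechanism concrete, here is a toy simulation (with entirely made-up parameters, and not a model of any actual fund): each stock receives independent fundamental news, but creation and redemption activity by authorized participants transmits the fund's own premium or discount to every constituent simultaneously, and the measured correlation of returns rises with the intensity of that arbitrage.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_days = 20, 1000

def simulate(arb_intensity):
    """Toy dynamics: each stock gets independent fundamental news, while
    creation/redemption flows proportional to the fund's premium or
    discount act as a common shock on every constituent."""
    prices = np.ones(n_stocks)
    fundamentals = np.ones(n_stocks)
    fund_price = 1.0
    returns = np.zeros((n_days, n_stocks))
    for t in range(n_days):
        fundamentals *= np.exp(rng.normal(0, 0.01, n_stocks))   # idiosyncratic news
        fund_price *= np.exp(rng.normal(0, 0.02))                # fund-level order flow
        premium = fund_price / prices.mean() - 1.0               # vs. net asset value
        # authorized participants buy the whole basket at a premium,
        # and sell the whole basket at a discount
        common_shock = arb_intensity * premium
        reversion = 0.5 * (fundamentals / prices - 1.0)          # pull toward fundamentals
        new_prices = prices * np.exp(common_shock + reversion
                                     + rng.normal(0, 0.005, n_stocks))
        returns[t] = np.log(new_prices / prices)
        prices = new_prices
        fund_price -= arb_intensity * premium * fund_price       # arbitrage closes the gap
    corr = np.corrcoef(returns.T)
    return corr[np.triu_indices(n_stocks, k=1)].mean()

for k in (0.0, 0.2, 0.5):
    print(f"arbitrage intensity {k}: mean pairwise return correlation {simulate(k):.2f}")
```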

The sponsors do not deny the possibility of these effects, but argue that any mispricing in the markets for individual stocks represents a profit opportunity for alert fundamental traders, and that this should prevent prolonged or major departures of prices from fundamentals. But this is too sanguine an assessment. Fundamental research is costly and its profitability depends not only on the scale of mispricing that is uncovered but also on the size of the positions that can be taken in order to profit from it. Furthermore, since a significant proportion of trades are driven by the arbitrage activities of authorized participants, mispricing need not be quickly or reliably corrected. Both illiquidity and high volatility serve as a deterrent to fundamental research in such markets.

The problem, in other words, is real. But what I find puzzling about Bradley's position on this issue is that he seems unable (or unwilling) to recognize that precisely the same effects can be generated by high-frequency trading. As was apparent in an earlier session at the conference, he remains among the most vocal and fervent defenders of the new market makers. His justification for this is that spreads have declined dramatically, lowering the costs of trading for all market participants, including long term investors.

There is no doubt the costs of trading are a fraction of what they used to be, but a single-minded focus on spreads misses the big picture. It is worth bringing to mind John Bogle's wise words:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.  
If spreads and costs per trade decline, but holding periods shrink to such a degree that overall trading expenditures rise (due to significantly increased volume), the net return to long term investors as a group must fall. Furthermore, if increases in volatility and correlation induce shifts in asset allocation that have the effect of reducing financing for small companies with high growth potential, then even gross returns could decline.
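A back-of-the-envelope calculation (with numbers invented purely for illustration) makes the arithmetic explicit: halving the spread while turnover rises fivefold leaves long term investors as a group worse off, because it is aggregate intermediation costs, not costs per trade, that are subtracted from the gross return.

```python
# Hypothetical numbers chosen only to illustrate Bogle's identity:
# net return = gross return - aggregate costs of intermediation.
gross_return = 0.08            # assumed gross market return

def net_return(spread, annual_turnover):
    # investors as a group pay roughly half the spread on each unit of turnover
    aggregate_trading_cost = (spread / 2) * annual_turnover
    return gross_return - aggregate_trading_cost

# old regime: wide spreads, low turnover; new regime: tight spreads, heavy turnover
print(f"net return, old regime: {net_return(spread=0.010, annual_turnover=2):.3f}")   # 0.070
print(f"net return, new regime: {net_return(spread=0.005, annual_turnover=10):.3f}")  # 0.055
```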

I have been arguing for a while now that the stability of an asset market depends on the composition of trading strategies and, in particular, that one needs a large enough share of information trading to ensure that prices track fundamentals reasonably well. But changes in technology and regulation have allowed technical strategies to proliferate, and high frequency trading is a significant part of this phenomenon. The predictable result is a secular increase in asset price volatility and an increased frequency of bubbles and crashes.
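The point can be seen in the simplest possible heterogeneous-agent sketch (a toy in the spirit of standard fundamentalist/chartist models, with parameters chosen only for illustration): a fraction of traders push prices toward a fixed fundamental value while the remainder chase recent price changes, and volatility rises sharply as the share of the latter grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(momentum_share, n_steps=5000, fundamental=100.0):
    """Toy price dynamics with two strategy types: fundamentalists buy below
    and sell above fundamental value, momentum traders buy after price rises
    and sell after price falls."""
    p_prev, p = fundamental, fundamental
    returns = []
    for _ in range(n_steps):
        fundamental_demand = 0.05 * (fundamental - p)
        momentum_demand = 0.9 * (p - p_prev)
        excess_demand = ((1 - momentum_share) * fundamental_demand
                         + momentum_share * momentum_demand
                         + rng.normal(0, 0.1))
        p_prev, p = p, p + excess_demand          # price responds to excess demand
        returns.append(p - p_prev)
    return np.std(returns)

for share in (0.1, 0.5, 0.9):
    print(f"momentum share {share}: return volatility {simulate(share):.3f}")
```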

The flash crash of May 6 was just a symptom of this. Viewed in isolation, it was a minor event: prices fell (or rose, in some cases) to patently absurd levels, then snapped back within a matter of minutes. But the crash was the canary in the proverbial coal mine -- it was important precisely because it made visible what is ordinarily concealed from view. Departures of prices from fundamentals are routine events that, especially on the upside, are not quickly corrected. Some of the proposed responses to the crash that were favored at the conference -- such as trading halts followed by call auctions -- are cosmetic changes. They will have the effect of silencing the canary while doing nothing to lower toxicity in the mine.

It is the unremarkable, invisible, gradually accumulating departures of prices from fundamentals that are the real problem. These show up in the magnitude and clustering of asset price volatility and, through their effects on the composition of portfolios, leave their mark on the path of capital allocation, employment, and economic growth.

---

I am grateful to Terry Flanagan of Markets Media Magazine for the invitation to attend the summit.

I would also like to mention that the Kauffman report contains a number of assertions with which I disagree. For instance, Bradley and Litan endorse the claims of Bogan, Connor and Bogan that an exchange-traded fund with significant short interest could collapse with some investors unable to redeem their shares. This has been refuted very effectively by Steve Waldman in his comments on the Bogan post, and by Kid Dynamite. It is unfortunate that most responses to the report have focused on this dubious claim, rather than the more legitimate arguments that are advanced there.

---

Update (12/11). David Weild writes in to say:
I think we are seeing capital leave the microcap markets for a variety of reasons including:
  • Loss of liquidity providers
  • Emergence of ETFs (they don't buy IPOs and most don't buy follow-on offerings)
  • Indexing displacing fundamental investing (again, when this occurs, the funds stop investing in IPOs)
  • Loss of the retail broker as a stock seller.
If you don't have access to sufficient capital then capital formation, innovation and economic growth will suffer. That is clearly where we are.
I have also heard from someone who was once active in convincing the SEC to expand approval of ETF applications (and prefers to remain anonymous). He asserts that "the effects now being debated were certainly not an anticipated consequence. I can't remember a single conversation externally or internally at the SEC about whether the creation and redemption mechanism would increase correlations."

In hindsight it seems obvious that returns would become more highly correlated, but the fact that it was completely unanticipated at the time illustrates the enormous challenge of regulatory adaptation to financial innovation.

Wednesday, December 08, 2010

Building a Computational Model of the Crisis

A team of four researchers affiliated with the Santa Fe Institute has secured a grant from the Institute for New Economic Thinking to fund the development of an agent-based computational model of the financial crisis. The model will explicitly consider "housing and mortgage markets, banks and other financial institutions, securitization processes and hedge fund investors, manufacturing and service firms, and regulatory agencies," with the goal of discovering "the essential elements needed to reproduce the crisis, while investigating alternative policies that may have reduced its intensity and strategies for recovery."

It's an interesting and multidisciplinary group, composed of Doyne Farmer, John Geanakoplos, Peter Howitt and Robert Axtell. Geanakoplos and Howitt are two of the most creative economists around, and I have discussed the work of the former on leverage and the latter on learning in earlier posts. Axtell is the co-author (with Joshua Epstein) of a fascinating book called Growing Artificial Societies, in which they develop an elaborate computational model of the interaction between a renewable resource base and the human population that depends on it. The model reproduces spatial patterns of resource depletion and recovery as well as population growth, migration and decline. Farmer is a physicist by training but has been working on finance for as long as I can remember. I discussed some of his work in an earlier post making a case for greater methodological pluralism in economics in general, and agent-based modeling in particular.
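For readers unfamiliar with the methodology, the following toy sketch (loosely in the spirit of Epstein and Axtell's framework, not their actual model) shows what an agent-based model looks like in code: agents move across a grid harvesting a slowly regrowing resource, and aggregate patterns of population change and resource depletion emerge from purely local rules.

```python
import numpy as np

rng = np.random.default_rng(2)
size, n_agents, steps = 30, 200, 300
capacity = rng.uniform(1, 4, (size, size))     # maximum resource at each site
resource = capacity.copy()
# each agent is a [row, col, stored wealth] triple
agents = [[rng.integers(size), rng.integers(size), 5.0] for _ in range(n_agents)]

for t in range(steps):
    survivors = []
    for r, c, wealth in agents:
        # move to the richest nearby site (von Neumann neighborhood, wrapped edges)
        candidates = [((r + dr) % size, (c + dc) % size)
                      for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))]
        r, c = max(candidates, key=lambda rc: resource[rc])
        # harvest everything at the site, pay a fixed metabolic cost
        wealth += resource[r, c] - 1.0
        resource[r, c] = 0.0
        if wealth > 0:                           # agents with no wealth die
            survivors.append([r, c, wealth])
    agents = survivors
    resource = np.minimum(capacity, resource + 0.1)   # resource regrows slowly
    if t % 100 == 0:
        print(f"step {t}: population {len(agents)}, mean resource {resource.mean():.2f}")
```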

The team is looking for a graduate student or postdoctoral fellow to join them for a couple of years. For a young researcher interested in finance, the microfoundations of macroeconomics, and the agent-based computational methodology, this could be a fantastic opportunity.

Thursday, December 02, 2010

Global Health and Wealth over Two Centuries

Here is a story in four minutes of remarkable divergence followed by rapid convergence in health and wealth across nations over the past two centuries (h/t David Kurtz)


Where the entire world was clustered in 1810, only sub-Saharan Africa remains. But even here there are profound stirrings of change.

I suspect that someday soon animations such as this will replace the soporific tables and charts that now appear as motivating evidence in economic papers.

---

Update (12/6). Pinkovskiy and Sala-i-Martin argue that over the past decade and a half, the nations of sub-Saharan Africa have experienced a dramatic and broad-based decline in poverty and inequality (h/t Mark Thoma):
African poverty reduction has been extremely general. Poverty fell for both landlocked and coastal countries, for mineral-rich and mineral-poor countries, for countries with favourable and unfavourable agriculture, for countries with different colonisers, and for countries with varying degrees of exposure to the African slave trade. The benefits of growth were so widely distributed that African inequality actually fell substantially...

It has often been suggested that geography and history matter significantly for the ability of Third World, and especially African, countries to grow and reduce poverty... Since these factors are permanent (and cannot be changed with good policy), they imply that some parts of Africa may be at a persistent growth disadvantage relative to others.

Yet... the African poverty decline has taken place ubiquitously, in countries that were slighted as well as in those that were favoured by geography and history. For every breakdown... the poverty rates for countries on either side of the breakdown tend to converge, with the disadvantaged countries reducing poverty significantly to catch up to the advantaged ones. Neither geographical nor historical disadvantages seem to be insurmountable obstacles to poverty reduction... even the most blighted parts of the poorest continent can set themselves firmly on the trend of limiting and even eradicating poverty within the space of a decade.
This is consistent with recent observations by Shanta Devarajan, Ngozi Okonjo-Iweala, and even the much-maligned Gordon Brown.

I have argued in a couple of earlier posts that sub-Saharan Africa may have entered what might be called a zone of uncertainty in which optimistic growth expectations can become self-fulfilling:
History can matter for long periods of time (for instance in occupational inheritance or the patrilineal descent of surnames) and then cease to constrain our choices in any significant way. Once-reliable correlations can break down suddenly and completely; history is full of such twists and turns. As far as African prosperity is concerned, I believe that a discontinuity of this kind is inevitable if not imminent.

Friday, November 19, 2010

Foley, Sidrauski, and the Microfoundations Project

In a previous post I mentioned an autobiographical essay by Duncan Foley in which he describes in vivid detail his attempts to "alter and generalize competitive equilibrium microeconomic theory" so as to make its predictions more consonant with macroeconomic reality.  Much of this work was done in collaboration with Miguel Sidrauski while the two were members of the MIT faculty some forty years ago. Both men were troubled by the "classical scientific dilemma" facing economics at the time: the discipline had "two theories, the microeconomic general equilibrium theory, and the macroeconomic Keynesian theory, each of which seemed to have considerable explanatory power in its own domain, but which were incompatible." This led them to embark on a "search for a synthesis" that would bridge the gap.

This is how Duncan describes the basic theoretical problem they faced, the strategies they adopted in trying to solve it, the importance of the distinction between stock and flow equilibrium, and the desirability of a theory that allows for intertemporal plans to be mutually inconsistent in the aggregate (links added):
My intellectual preoccupation at M.I.T. was what has come to be called the "microeconomic foundations of macroeconomics." The general equilibrium theory forged by Walras and elaborated by Wald (1951), McKenzie (1959), and Arrow and Debreu (1954) can be used, with the assumption that markets exist for all commodities at all future moments and in all contingencies, to represent macroeconomic reality by simple aggregation. The resulting picture of macroeconomic reality, however, has several disturbing features. For one thing, competitive general equilibrium is efficient, so that it is incompatible with the unemployment of any resources productive enough to pay their costs of utilization. This is difficult to reconcile with the common observation of widely fluctuating rates of unemployment of labor and of capacity utilization of plant and equipment. General equilibrium theory reduces economic production and exchange to the pursuit of directly consumable goods and services, and as a result has no real role for money... The general equilibrium theory can accommodate fluctuations in output and consumption, but only as responses to external shocks to resource availability, technology or tastes. It is difficult to reconcile these relatively slowly moving factors with the large business-cycle fluctuations characteristic of developed capitalist economies. In assuming the clearing of markets for all contingencies in all periods, general equilibrium theory assures the consistency... of individual consumption, investment, and production plans, which is difficult to reconcile with the recurring phenomena of financial crisis and asset revaluation that play so large a role in actual capitalist economic life...

Keynes' theory, on the other hand, offers a systematic way around these problems. Keynes views money as central to the actual operation of developed capitalist economies, precisely because markets for all periods and contingencies do not exist to reconcile differences in agents' opinions about the future. Because agents cannot sell all their prospects on contingent claims markets, they are liquidity constrained. In a liquidity constrained economy there is no guarantee that all factor markets will clear without unemployed labor or unutilized productive capacity. Market prices are inevitably established in part by speculation on an uncertain future. As a result the economy is vulnerable to endogenous fluctuations as the result of herd psychology and self-fulfilling prophecy. From this point of view it is not hard to see why business cycle fluctuations are a characteristic of a productively and financially developed capitalist economy, nor why the potential for financial crisis is inherent in decentralized market allocation of investment...

But there are many loose ends in Keynes' argument. In presenting the equilibrium of short-term expectations that determines the level of output, income and employment in the short period, for example, Keynes argues that entrepreneurs hire labor and buy raw materials to undertake production because they form an expectation as to the volume of sales they will achieve when the production process runs its course... But Keynes offers no systematic alternative account of how entrepreneurs form a view of their prospects on the market to take the place of the assumption of perfect competition and market clearing. This turns out, in detail, to be a very difficult problem to solve.

Given the supply of nominal money, a fall in prices appears to be a possible endogenous source of increased liquidity. Keynes argues that the money price level is largely determined by the money wage level, but offers no systematic explanation of the dynamics governing the movements of money wages.

Though money is the fulcrum on which his theory turns, Keynes does not actually set out a theory of the economic origin or determinants of money. As a result it is difficult to relate the fluctuations in macroeconomic variables such as the velocity of money to the underlying process of the circulation of commodities.

On point after point Keynes' plausible macroeconomic concepts raise unanswered questions about the microeconomic behavior that might support them.

Thus economics in the late 1960s suffered from a classical scientific dilemma in that it had two theories, the microeconomic general equilibrium theory, and the macroeconomic Keynesian theory, each of which seemed to have considerable explanatory power in its own domain, but which were incompatible. The search for a synthesis which would bridge this gap seemed to me to be a good problem to work on. From the beginning the goal of my work in this area was to alter and generalize competitive equilibrium microeconomic theory so as to deduce Keynesian macroeconomic behavior from it.

In the succeeding years I approached this project from two angles. One was to fiddle with general equilibrium theory in the hope of introducing money into it in a convincing and unified way. The other was to rewrite as much as possible of Keynesian macroeconomics in a form compatible with competitive general equilibrium.
This latter project came to fruition first as a close collaboration with Miguel Sidrauski, and resulted in a book Monetary and Fiscal Policy in a Growing Economy (Foley and Sidrauski, 1971)... Our joint work... sought to develop a canonical model with which it would be possible to analyze the classical problems of the impact of government policy on the path of output of an economy... Following my notion that the price of capital goods are determined in asset markets, and the flow of new investment adjusts to make the marginal cost of investment equal to that price, we assumed a two-sector production system, so that there would be a rising marginal cost of investment. The asset equilibrium of the model is a generalization of Sidrauski's (and Tobin's) portfolio demand theory, which in turn is a generalization of Keynes' theory of liquidity preference. One of my chief goals was to sort out rigorously and explicitly the relation between stock and flow variables, so that we analyzed the model as a system of differential equations in continuous time, a setting in which the difference between stock and flow concepts is highlighted. At each instant asset market clearing of money, bonds, and capital markets in stocks together with labor and consumption good flow market clearing determine the price of capital, the interest rate, the price level, income, consumption and investment. Government policies determining the evolution of supplies of money and bonds together with the addition of investment flows to the capital stock move the model through time in a transparent trajectory. The book considers the comparative statics and dynamics of this model in detail...

Monetary and Fiscal Policy in a Growing Economy had a mixed reception... The fact that we did not derive the asset and consumption demands of households from explicit intertemporal expected utility maximization turned out to be an unfashionable choice for the 1970s, when the economics profession was persuaded to put an immense premium on models of "full rationality." Sidrauski and I were quite aware of the possibility of such a model, which would have been a generalization of his thesis work. At a conference at the University of Chicago in 1968, David Nissen presented a perfect foresight macroeconomic model that made clear that this path would lead directly back to the Walrasian general equilibrium results. Since I didn't believe in the relevance of that path to the understanding of real macroeconomic phenomena, I thought the main point in exploring this line of reasoning was to show how unrealistic its results were...

The project of a macroeconomic theory distinct from Walrasian general equilibrium theory rests heavily on the distinction between stock and flow equilibrium. In Keynes' vision, asset holders are forced to value existing and prospective assets speculatively without a full knowledge of the future. Our model represented this moment through the clearance of asset markets. In the Walrasian vision this distinction is dissolved through the imaginary device of clearing futures and contingency markets which establish flow prices that imply asset prices. The moral of Sidrauski's and my work is that some break with the full Walrasian system along temporary equilibrium lines is necessary as a foundation for a distinct macroeconomics. Once the implications of the stock-flow distinction in macroeconomics became clear, however, the temptation to finesse them by retreating to the Walrasian paradigm under the slogan of "rational expectations" became overwhelming to the American economics profession....

In my view, the rational expectations assumption which Lucas and Sargent put forward to "close" the Keynesian model, was only a disguised form of the assumption of the existence of complete futures and contingencies markets. When one unpacked the "expectations" language of the rational expectations literature, it turned out that these models assumed that agents formed expectations of futures and contingency prices that were consistent with the aggregate plans being made, and hence were in fact competitive general equilibrium prices in a model of complete futures and contingency markets. Arrow and Debreu had made the assumption of the existence of complete futures and contingency markets to give their version of the Walrasian model the appearance of coping with the real-world problems posed by the uncertainty of the future. To my mind, the rational expectations approach amounted to making the perfect-foresight assumptions that I had already considered and rejected on grounds of unrealism in the course of working with Sidrauski... What the profession took to be an exciting breakthrough in economic theory I saw as a boring and predictable retracing of an already discredited path.
To my mind the most appealing feature of the Foley-Sidrauski approach to microfoundations is that it allows for the possibility that individuals make mutually inconsistent plans based on heterogeneous beliefs about the future. This is what the rational expectations hypothesis rules out. Auxiliary assumptions such as sticky prices must then be imposed in order to make the models more consonant with empirical observation.

In contrast, the notion of temporary equilibrium (introduced by John Hicks) allows for the clearing of asset markets despite mutually inconsistent intertemporal plans. As time elapses and these inconsistencies are revealed, dynamic adjustments are made that affect prices and production. There is no presumption that such a process must converge to anything resembling a rational expectations equilibrium, although there are circumstances under which it might. The contemporary literature closest to this vision of the economy is based on the dynamics of learning, and this dates back at least to Marcet and Sargent (1989) and Howitt (1992), with more recent contributions by Evans and Honkapohja (2001) and Eusepi and Preston (2008). I am not by any means an insider to this literature but my instincts tell me that it is a promising direction in which to proceed.
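To illustrate the flavor of this literature, here is a toy example of adaptive learning (a simple self-referential forecasting problem with invented parameters, not taken from any of the papers cited above): the realized price depends on the average forecast, agents update their forecast by averaging past prices, and whether the process converges to the rational expectations equilibrium depends on the strength of the feedback from expectations to outcomes.

```python
import numpy as np

rng = np.random.default_rng(3)

def learn(feedback, n_periods=2000, a=10.0):
    """Actual price depends on the average forecast: p_t = a + feedback * E_t[p] + noise.
    Agents forecast with the running mean of past prices (least-squares learning
    on a constant). The rational expectations price is a / (1 - feedback)."""
    forecast = 0.0
    prices = []
    for t in range(1, n_periods + 1):
        p = a + feedback * forecast + rng.normal(0, 0.5)
        prices.append(p)
        forecast += (p - forecast) / t        # recursive mean update
    return forecast, a / (1 - feedback)

# learning converges when the feedback parameter is below one,
# and moves away from the REE when it exceeds one
for b in (0.5, -0.5, 1.2):
    final, ree = learn(b)
    print(f"feedback {b:+.1f}: final forecast {final:8.2f}, REE price {ree:8.2f}")
```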

---

Update (11/20). Nick Rowe (in a comment) directs us to an earlier post of his in which the importance of allowing for mutually inconsistent intertemporal plans is discussed. He too argues for an explicit analysis of the dynamic adjustment process that resolves these inconsistencies as they appear through time. It's a good post, and makes the point with clarity.

Some of the comments on Nick's post reflect the view that explicit consideration of disequilibrium dynamics is unnecessary since they are known to converge to rational expectations in some models. My own view is that a lot more work needs to be done on learning before this sanguine claim can be said to have theoretical support. Furthermore, local stability of a rational expectations equilibrium in a linearized system does not tell us very much about the global properties of the original (nonlinear) system, since it leaves open the possibility of corridor stability: instability in the face of large but not small perturbations. (Tobin made a similar point in a paper that I have discussed previously here.)
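A textbook toy example makes the notion of corridor stability concrete (my own illustration, not taken from Tobin): in the one-dimensional system dx/dt = -x + x^3, the equilibrium at zero is locally stable, so small perturbations die out, but any perturbation beyond the corridor |x| < 1 grows without bound.

```python
def trajectory(x0, dt=0.01, steps=2000):
    """Euler integration of dx/dt = -x + x**3, a toy system with corridor
    stability: x = 0 is locally stable, but perturbations past |x| = 1 diverge."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + x**3)
        if abs(x) > 1e6:
            return float('inf')
    return x

print(trajectory(0.9))    # inside the corridor: decays back toward zero
print(trajectory(1.1))    # outside the corridor: diverges
```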

---

Update (12/13). Mark Thoma and Leigh Caldwell have both posted interesting reactions to this. There's clearly a lot more to be said on the topic but for the moment I'll just link without further comment.

Wednesday, November 17, 2010

Herbert Scarf's 1964 Lectures: An Eyewitness Account

In the fourth volume of The Makers of Modern Economics is a fascinating autobiographical essay by Duncan Foley that traces the arc of his career as an economist and reflects upon developments in the discipline over the past four decades. Duncan describes his first exposure to economics at Swarthmore, his interactions with Tobin as a graduate student at Yale, the introduction in his doctoral dissertation of a concept of equity (now called envy-freeness) that does not depend on interpersonal comparisons of utility, his enormously fruitful collaboration with Miguel Sidrauski at MIT on the microfoundations of macroeconomics, his disillusionment with the rational expectations revolution, and his growing interest in heterodox economics at Stanford and subsequently at Barnard and Columbia.

There's enough material there for several interesting posts, but here I'll confine myself to reproducing Duncan's vivid recollection of a two semester course in mathematical economics taught by Herbert Scarf in 1964 (links added):
After the free pursuit of individual learning fostered by the Swarthmore Honors program, I found the return to traditional classroom teaching at Yale a difficult transition... I was frustrated in these courses not just by the tedium and inefficiency of the class lecture style, but by the tendency for instructors who knew a great deal about the substance and practice of their subjects to waste time rehearsing mathematical and theoretical topics they did not understand very well and often misconstrued...

The great exception to this pattern of misdirected pedagogy was Herbert Scarf's year-long course in Mathematical Economics. Scarf knew this material as well as anyone in the world, and had the gifts of patience, clarity of exposition, and personal charisma to convey it brilliantly and effectively. Scarf's teaching was a revelation to me of what could be accomplished in the classroom, with the appropriate attention to systematic organization, consistently careful preparation, and a judicious balance of lecture and discussion to maintain contact with the level of students' understanding. My notes from this course comprise a better and more complete reference for the topics than any book that has since been published.

The passage of time has revealed that the content of Scarf's course was just as remarkable in its depth and insight as the presentation. Remaining mostly within the realm of finite-dimensional spaces, and emphasizing duality and practical algorithms for the construction of solutions, Scarf gave a thorough tutorial on the mathematics of optimization, starting with linear programming via the simplex method and continuing through Kuhn-Tucker theory, dynamic programming, turnpike theory through Roy Radner's algorithmic approach, and integer programming. Since a huge proportion of economic models boil down to an optimization problem, this survey effectively unified and clarified an immense range of economics for the student. When Peter Diamond was working with James Mirrlees on the problem of optimal taxation (Diamond and Mirrlees, 1971a,b), for example, Scarf's approach helped me to grasp the relation between the complexity of their comparative statics results and the nonconvex structure of the constraint set (the intersection of the set of allocations that are resource and technology-feasible and those that can be supported by distorting taxes) in this problem. The study of these formal problems also convinced me that most economic theory depends on strong assumptions of convexity to assure the tractability of the resulting optimization problem, and that in situations where convexity is inherently absent or implausible it is very difficult to make much progress by traditional methods.

Scarf's course continued with a systematic review of general equilibrium theory, starting from the separating hyperplane approach to the Second Welfare Theorem, and including Gérard Debreu's proof (1959) of existence of a competitive equilibrium, the first presentation of Scarf's algorithmic approach to the calculation of competitive equilibria (1973), the theory of the core and its asymptotic equivalence to competitive equilibrium, and Scarf's own crucial counterexamples to the stability of competitive equilibrium under tâtonnement dynamics with more than two commodities (1960). The critical lesson Scarf emphasized in this discussion was the fact that the competitive equilibrium cannot, except in special cases such as representative agent economies, be represented as the solution of a mathematical programming problem. In other words, the Walrasian system does not generally admit a potential function. As a corollary to this observation we see that the comparative statics of competitive general equilibrium theory inherently lacks the organizing structure of convex programming, so that, for example, equilibrium prices are not in general monotonic functions of endowments. These observations planted the seeds in my mind of what grew to be grave doubts about the Walrasian system. These doubts do not focus on the logical consistency of the system, but on its adequacy as a useful representation of real economic relations...

In retrospect we can see that Scarf's course mapped out the whole development of high economic theory for the next twenty or twenty-five years. The theoretical literature of this period has largely been concerned with generalizing the concepts he taught to more sophisticated commodity spaces (such as infinite-dimensional spaces and spaces of stochastic processes), and rediscovering the general properties and limitations of competitive equilibrium theory in these contexts. This has been a source of both wonder and concern to me. I am amazed at how prescient a mind like Scarf's can be about the future development of a field, guided purely by superb mathematical instincts. But what does this imply about the theoretical fertility of economics during this period? If the core theoretical ideas that have dominated the field since were all present in the Yale classroom in 1964, it suggests that economic theory has been in a scholastic, formalistic phase of development during this period, primarily focusing on working out increasingly esoteric implications of well-established concepts.
Duncan tells me that he still has his notes from this course and that Scarf, who recently retired from teaching, remains full of vigor.

In subsequent posts I hope to discuss Duncan's reflections on the microfoundations of macroeconomics, his work with Sidrauski, his concern that the rational expectations revolution was a step backwards in the development of the theory, and his view that "some break with the full Walrasian system along temporary equilibrium lines is necessary as a foundation for a distinct macroeconomics." (The Hicksian concept of temporary equilibrium allows for asset market clearing in the face of heterogeneous beliefs and mutually inconsistent intertemporal plans.) These are themes that I have touched upon in previous posts and would like to revisit soon. In the meantime, let me repeat my plea to the fellows of the Econometric Society to nominate Duncan for election to their ranks.

---

Update (11/18). Glenn Loury writes in to say:
I never had much interaction with Scarf, but his pedagogic virtuosity and mastery of mathematical economics circa 1970 reminds me of... Stanley Reiter, whom I encountered as a raw assistant professor at Northwestern in the 1970s. Stan, a close friend and occasional collaborator with Leo Hurwicz, was director of the Math Center at Northwestern (forerunner of MEDS), and in the late 1970s had a huge impact on young scholars like Paul Milgrom, Bengt Holmstrom, Mark Satterthwaite and Roger Myerson...
I don't think I agree with the claim that much of "high economic theory" since the 60s has been dotting "i's" and crossing "t's". That was true through the mid-seventies, perhaps, but the asymmetric information, mechanism design, incomplete contract theory revolutions (Hurwicz/Myerson/Maskin, eg.) -- and the emergence of deeply insightful applied theory in a variety of fields from labor and I/O to money, finance and trade suggest otherwise to me.
I basically agree with Glenn on this latter point but, in Duncan's defense, the focus of his essay was on the microfoundations of macroeconomics and the futility of simply aggregating the Walrasian system. And on this dimension I think that progress has been limited at best.

---

Update (11/18). A wonderful comment by Jonathan Conning:
I too sat in Herb Scarf's Yale Micro Theory classroom and still remember the stunned awe that I and my classmates felt at the end of his first lecture with us, which happened to be on the simplex algorithm.
My only regret is that that semester at Yale (1990) we only got a handful of micro lectures from Scarf and so did not get the full "systematic review of general equilibrium theory" that Foley mentions.
I have little to say to improve on Duncan's glowing description of a Scarf lecture except to note that by 1990 the Hillhouse basement classroom had smooth sliding blackboards (which I do not imagine they had in 1964). This meant that there were always three blackboards in use, as he could fill one blackboard full of equations and slide it to conceal or reveal what had been written before. One of the things I recall most vividly is how artfully and efficiently Scarf used those boards, and how rarely he used the eraser. A lecture which might have started with definitions and theory that might have taken a detour through an expertly chosen example to reinforce intuition would in the end always return, with the smoothest glide of a hand to reveal again exactly the right portion of the board to bring the lecture full circle back to the climactic point he wanted. Everything seemed expertly choreographed and timed down to the very last second.
I hope that other former students of Scarf will somehow stumble upon this post.

Monday, October 11, 2010

Glenn Loury on Peter Diamond

Glenn Loury has kindly forwarded me a letter he wrote earlier this year in appreciation of Peter Diamond, one of the co-recipients of this year's Nobel Memorial Prize in Economics. The tribute was written for the occasion of Diamond's retirement, and seems worth publishing today:
April 20, 2010
Prof. James Poterba, Chair
Department of Economics
Massachusetts Institute of Technology

Dear Jim:

It is a pleasure to contribute a brief note of tribute to Peter Diamond, on this occasion of celebration for his work as scholar and teacher.

Peter was an inspiration and role model for me during my student years at MIT. My encounters with him -- in the classroom and in his office -- left an indelible impression. I recall going over to the Dewey Library shortly after arriving in Cambridge, in the summer of 1972, and digging out Peter's doctoral dissertation. This was a mistake! Peter's reputation as a powerful theorist had been noted by my undergraduate teachers at Northwestern. I wanted to see how this reputed superstar had gotten his start. Just how good could it be, I wondered? I had no idea! What I discovered was an elegant, profound and exquisitely argued axiomatic treatment of the general problem of representing consumption preferences over an infinite time horizon, extending results obtained by his undergraduate teacher and the future Nobel Laureate, Tjalling Koopmans.

I prided myself on being a budding mathematician in those years. Yet, Peter's effortless mastery in that dissertation of the relevant techniques from topology and functional analysis, and his successful application of those methods to a problem of fundamental importance in economic theory -- all accomplished by age 23, younger than I was at the moment I held his thesis binder in my hands! -- was simply stunning. This set what seemed to me then, and still seems so now, to be an unapproachable standard. I was depressed for weeks thereafter!

Even more depressing was what I discovered as I got to know Peter better over the course of my first two years in the program: that mathematical technique was not even his strongest suit! An unerring sense of what constitute the foundational theoretical questions in economic science, and a rare creative gift of being able to imagine just the right formal framework in the context of which such questions can be posed and answered with generality -- this, I came to understand, is what Peter Diamond was really good at.

And so, I learned from him in those years what turned out to be the most important lesson of my graduate educational experience -- that, in the doing of economic theory and relative to the behavioral significance of the issue under investigation, technique is always a matter of secondary importance -- neither necessary nor sufficient for the production of lasting insights. I learned this from the careful study of Peter's seminal contributions to growth theory, the theories of taxation and social insurance, the theories of choice under uncertainty and the allocation of risk-bearing, the theories of legal rules and institutions, and the theory of unemployment. I also learned this from Peter's elegant and comprehensive lectures on the work in these areas of himself and that of other scholars. And so I came -- slowly and fitfully, because I was rather attached to the joys of doing mathematics for its own sake -- to see the world the way that Peter Diamond saw it. And, in the process, I became a much better economist.

Peter graciously agreed to be the second reader on my dissertation, even though I was writing outside of his areas of specialization at the time, and my intellectual indebtedness to him only increased over the course of my last two years at MIT. It has by now become rather clear that I shall never be able to discharge that debt.

So, thanks Peter, for your extraordinary generosity as a teacher, and for your unmatched example as a scholar.

Glenn C. Loury
Merton P. Stoltz Professor of the Social Sciences
Professor of Economics and of Public Policy
Brown University
The following passage from the letter is worth repeating:
And so, I learned from him in those years what turned out to be the most important lesson of my graduate educational experience -- that, in the doing of economic theory and relative to the behavioral significance of the issue under investigation, technique is always a matter of secondary importance -- neither necessary nor sufficient for the production of lasting insights.
I have had very little time for blogging recently, thanks to two new courses, but if I can find the time I'd like to write a post on Diamond's classic 1982 paper on search, and the wonderful coconut parable he used in order to illuminate the theory.

Tuesday, October 05, 2010

Hot Potatoes

RT Leuchtkafer follows up on his earlier remarks with a comment in the Financial Times:
After a detailed four-month review of the flash crash, looking at market data streams tick-by-tick and down to the millisecond, the SEC concluded that a single order in the e-mini S&P 500 futures market ignited an inferno of panic selling. It was over in about seven minutes, and $1,000bn was up in smoke.
Within hours of the SEC’s report, the CME Group, owner of the Chicago Mercantile Exchange, issued a statement to point out that the suspect e-mini order was entirely legitimate, that it came from an institutional asset manager (that is, the public), and was little more than 1 per cent of the e-mini’s daily volume and less than 9 per cent of e-mini volume during and immediately after the crash.
How did this small bit of total volume cause such a conflagration?
You do it with computers. Specifically, you do it with unregulated computers. You pay rent so your machines sit inside the exchanges, minimising travel time for your electrons. You pay licence fees so your computers eat their fill of super-fast proprietary data feeds, data containing a shocking amount of information on everyone’s orders, not just on your own.
And when your computers spot trouble, such as a larger than expected sell-off, they dump inventory and they shut down – because they can.
No one knows what a “larger than expected sell-off” might be, but on May 6 a single hedge that added just an extra 9 per cent of selling pressure was enough to cause chaos.
When that happened, the SEC’s report says, high-frequency traders “stopped providing liquidity and began to take liquidity”, starting a frenzied race for anyone willing to buy. The report likened the panic to a downward-spiralling game of “hot potato” where, as HFT firms bought beyond their risk limits, they pulled their own bids and frantically sold to anyone they could, which were often just other HFT firms, who themselves quickly reached their risk limits and tried to sell to anyone they could, and so on – into the abyss. Fratricide ruled the day. Firms then fled the market altogether, accelerating the sell-off.
Punch drunk, markets rebounded when other market participants realised what had just happened and jumped into the market to buy.
Fair enough, some might say. Markets do panic, and sometimes for no reason. But the larger HFT firms register as formal marketmakers, receiving a variety of regulatory advantages, including greater leverage. All of this extends their enormous reach and power. In the past, they fulfilled certain obligations and observed certain restraints as a quid pro quo for those advantages, a quid pro quo intended to keep them in the market when markets were under stress and to prevent them from adding to that stress. Over the past few years, however, decades-long obligations and restraints all but disappeared, while many advantages stayed.
Computing power also opened marketmaking to a field of unregistered, or informal, high-frequency marketmakers, what investor and commentator Paul Kedrosky termed the “shadow liquidity system”. Exchanges will pay you to do it, too, just as they pay formal marketmakers, and require little in return.
The result is a loose confederation of unregulated, or lightly regulated, high-frequency marketmakers. They feed on what many consider confidential order information, play hot potato in volatile markets, and then instantly change the game to hide-and-seek if even a single hedge hits an unseen and unknowable tipping point.
The only quibble I have with this analysis is that too many different classes of algorithmic trading strategies are being bundled together under the HFT banner. In particular I would like to see a distinction made between directional strategies that are based on predicted short term price movements, and arbitrage based strategies that exploit price differentials across assets and markets. Both of these can be implemented with algorithms, rely on rapid responses to incoming market data, and involve very short holding periods. But they have completely different implications for asset price volatility. It is the mix of strategies rather than the method of their implementation that is the key determinant of market stability.

---

Update: Leuchtkafer writes in to say:
I should have been clear in the piece I was talking specifically about market making strategies. 
I appreciate the clarification, and agree with his characterization of the new market makers.

Friday, October 01, 2010

RT Leuchtkafer on the Flash Crash Report

The long-awaited CFTC-SEC report on the flash crash has finally been released. I'm still working my way through it, and hope to respond in due course. In the meantime, here is an email (posted with permission) from the very interesting RT Leuchtkafer, whose thoughts on recent changes in market microstructure have been discussed at some length previously on this blog:
It's natural for any critic to focus on what he wants in the report, and I'm no different.

From the report, in the futures market: "HFTs stopped providing liquidity and instead began to take liquidity." (report pp 14-15); "...the combined selling pressure from the Sell Algorithm, HFT's and other traders drove the price of the E-Mini down..." (report p 15)

And in the equities market: "In general, however, it appears that the 17 HFT firms traded with the price trend on May 6 and, on both an absolute and net basis, removed significant buy liquidity from the public quoting markets during the downturn..." (report p 48); "Our investigation to date reveals that the largest and most erratic price moves observed on May 6 were caused by withdrawals of liquidity and the subsequent execution of trades at stub quotes." (p 79)

It's also natural - if ungraceful - for a critic to say "I told you so." OK, I'm no ballerina, and I told you so (April 16, 2010):

"When markets are in equilibrium these new participants increase available liquidity and tighten spreads. When markets face liquidity demands these new participants increase spreads and price volatility and savage investor confidence."

"...[HFT] firms are free to trade as aggressively or passively as they like or to disappear from the market altogether."

"...[HFT firms] remove liquidity by pulling their quotes and fire off marketable orders and become liquidity demanders. With no restraint on their behavior they have a significant effect on prices and volatility....they cartwheel from being liquidity suppliers to liquidity demanders as their models rebalance. This sometimes rapid rebalancing sent volatility to unprecedented highs during the financial crisis and contributed to the chaos of the last two years. By definition this kind of trading causes volatility when markets are under stress."

"Imagine a stock under stress from sellers such was the case in the fall of 2008. There is a sell imbalance unfolding over some period of time. Any HFT market making firm is being hit repeatedly and ends up long the stock and wants to readjust its position. The firm times its entrance into the market as an aggressive seller and then cancels its bid and starts selling its inventory, exacerbating the stock's decline."

"So in exchange for the short-term liquidity HFT firms provide, and provide only when they are in equilibrium (however they define it), the public pays the price of the volatility they create and the illiquidity they cause while they rebalance."

Finally, the report should put paid to the notion that HFT firms are simple liquidity providers and that they don't withdraw in volatile markets, claims that have been floating around for quite a while.

What happens next?
In a follow-up message, Leuchtkafer adds: 
I'd like to note there were many other critics who got it right, including (most importantly) Senator Kaufman, Themis Trading, David Weild, and others. They all deserve a shout out.
To this list I would add Paul Kedrosky.
Firms that began to "take liquidity" during the crash would have suffered significant losses were it not for the fact that many of their trades were subsequently broken. I have argued repeatedly that this cancellation of trades was a mistake, not simply on fairness grounds but also from the perspective of market stability:
By canceling trades, the exchanges reversed a redistribution of wealth that would have altered the composition of strategies in the trading population. I'm sure that many retail investors whose stop loss orders were executed at prices far below anticipated levels were relieved. But the preponderance of short sales among trades at the lowest prices and the fact that aberrant price behavior also occurred on the upside suggests to me that the largest beneficiaries of the cancellation were proprietary trading firms making directional bets based on rapid responses to incoming market data. The widespread cancellation of trades following the crash served as an implicit subsidy to such strategies and, from the perspective of market stability, is likely to prove counter-productive. 
The report does appear to confirm that some of the major beneficiaries of the decision to cancel trades were algorithmic trading outfits. But I need to read it more closely before offering further comment. 

Saturday, September 04, 2010

Economic Consequences of Speculative Side Bets

The following column was written jointly with Yeon-Koo Che and is crossposted from Vox EU with minor edits and links to references.
---
There is arguably no class of financial transactions that has attracted more impassioned commentary over the past couple of years than naked credit default swaps. Robert Waldmann has equated such contracts with financial arson, Wolfgang Münchau with bank robberies, and Yves Smith with casino gambling. George Soros argues that they facilitate bear raids, as does Richard Portes who wants them banned altogether, and Willem Buiter considers them to be a prime example of harmful finance. In sharp contrast, John Carney believes that any attempt to prohibit such contracts would crush credit markets, Felix Salmon thinks that they benefit distressed debtors, and Sam Jones argues that they smooth out the cost of borrowing over time, thus reducing interest rate volatility.
One reason for the continuing controversy is that arguments for and against such contracts have been expressed informally, without the benefit of a common analytical framework within which the economic consequences of their use can be carefully examined. Since naked credit default swaps necessarily have a long and a short side and the aggregate payoff nets to zero, it is not immediately apparent why their existence should have any effect at all on the availability and terms of financing or the likelihood of default. And even if such effects do exist, it is not clear what form and direction they take, or the implications they have for the allocation of a society's productive resources.
In a recent paper we have attempted to develop a framework within which such questions can be addressed, and to provide some preliminary answers. We argue that the existence of naked credit default swaps has significant effects on the terms of financing, the likelihood of default, and the size and composition of investment expenditures. And we identify three mechanisms through which these broader consequences of speculative side bets arise: collateral effects, rollover risk, and project choice.
A fundamental (and somewhat unorthodox) assumption underlying our analysis is that the heterogeneity of investor beliefs about the future revenues of a borrower is due not simply to differences in information, but also to differences in the interpretation of information. Individuals receiving the same information can come to different judgments about the meaning of the data. They can therefore agree to disagree about the likelihood of default, interpreting such disagreement as arising from different models rather than different information. As in prior work by John Geanakoplos on the leverage cycle, this allows us to speak of a range of optimism among investors, where the most optimistic do not interpret the pessimism of others as being particularly informative. We believe that this kind of disagreement is a fundamental driver of speculation in the real world.
When credit default swaps are unavailable, the investors with the most optimistic beliefs about the future revenues of a borrower are natural lenders: they are the ones who will part with their funds on terms most favorable to the borrower. The interest rate then depends on the beliefs of the threshold investor, who in turn is determined by the size of the borrowing requirement. The larger the borrowing requirement, the more pessimistic this threshold investor will be (since the size of the group of lenders has to be larger in order for the borrowing requirement to be met). Those more optimistic than this investor will lend, while the rest find other uses for their cash.
Now consider the effects of allowing for naked credit default swaps. Those who are most pessimistic about the future prospects of the borrower will be inclined to buy naked protection, while those most optimistic will be willing to sell it. However, pessimists also need to worry about counterparty risk - if the optimists write too many contracts they may be unable to meet their obligations in the event that a default does occur, an event that the pessimists consider to be likely. Hence the optimists have to support their positions with collateral, which they do by diverting funds that would have gone to borrowers in the absence of derivatives. The borrowing requirement must then be met by appealing to a different class of investors, who are neither so optimistic that they wish to sell protection, nor so pessimistic that they wish to buy it. The threshold investor is now clearly more pessimistic than in the absence of derivatives, and the terms of financing are accordingly shifted against the borrower. As a result, for any given borrowing requirement, the bond issue is larger and the price of bonds accordingly lower when investors are permitted to purchase naked credit default swaps.
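The shift in the threshold investor is easy to see with a toy calculation. The sketch below is not the model in the paper, just a minimal numerical illustration of the logic described above, with arbitrary numbers: beliefs are drawn from a made-up distribution, and I simply assume that when naked protection can be bought, a fixed amount of the most optimistic investors' cash is absorbed as collateral, so the same borrowing requirement must be met further down the optimism ranking.

```python
import numpy as np

# A minimal sketch of the threshold-investor logic, not the model in the paper.
# All numbers below are illustrative assumptions.
rng = np.random.default_rng(0)

n_investors = 1000                                       # each investor has one unit of cash
beliefs = np.sort(rng.beta(4, 2, n_investors))[::-1]     # subjective repayment probability, most optimistic first
borrowing_need = 200                                     # units of cash the borrower must raise
collateral_diverted = 150                                # optimists' cash absorbed as CDS collateral (naked CDS case)

def marginal_lender(first_lender):
    """Belief of the investor who just clears the borrowing requirement
    when lending starts at index `first_lender` in the optimism ranking."""
    return beliefs[first_lender + borrowing_need - 1]

q_no_cds = marginal_lender(0)                       # lenders are the most optimistic investors
q_naked  = marginal_lender(collateral_diverted)     # the top optimists' funds now sit in collateral

# With risk-neutral investors and a unit bond payoff, the marginal lender's
# belief pins down the bond price, so the issue needed to raise the same
# amount of cash is larger under naked protection.
print(f"no CDS:    marginal belief / bond price = {q_no_cds:.3f}, bonds issued = {borrowing_need / q_no_cds:.0f}")
print(f"naked CDS: marginal belief / bond price = {q_naked:.3f}, bonds issued = {borrowing_need / q_naked:.0f}")
```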
This effect does not arise if credit default swaps can only be purchased by holders of the underlying security. In fact, it can be shown that allowing for only “covered” credit default swaps has much the same consequences as allowing optimists to buy debt on margin: it leads to higher bond prices, a smaller issue size for any given borrowing requirement, and a lower likelihood of eventual default. While optimists take a long position in the debt by selling such contracts, they facilitate the purchase of bonds by more pessimistic investors by absorbing much of the credit risk. In contrast with the case of naked credit default swaps, therefore, the terms of lending are shifted in favor of the borrower. The difference arises because pessimists can enter directional positions on default in one case but not the other.
While this simple model sheds some light on the manner in which the terms of financing can be affected by the availability of credit derivatives, it does not deal with one of the major objections to such contracts: the possibility of self-fulfilling bear raids. To address this issue it is necessary to allow for a mismatch between the maturity of debt and the life of the borrower. This raises the possibility that a borrower who is unable to meet contractual obligations because of a revenue shortfall can roll over the residual debt, thereby deferring payment into the future.
As many economists have previously observed, multiple self-fulfilling paths arise naturally in this setting (see, for instance, Calvo, Cole and Kehoe, and Cohen and Portes). If investors are confident that debt can be rolled over in the future they will accept lower rates of interest on current lending, which in turn implies reduced future obligations and allows the debt to be rolled over with greater ease. But if investors suspect that refinancing may not be available in certain states, they demand higher interest rates on current debt, resulting in larger future obligations and an inability to refinance if the revenue shortfall is large.
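A toy numerical check may help fix ideas. It is not taken from the paper and the numbers are made up: a borrower raises 100 at date 0, earns 150 (probability 0.8) or 20 (probability 0.2) at date 1, and 85 for certain at date 2; risk-neutral lenders face a zero riskless rate, and new date-1 lenders will advance at most the certain date-2 revenue. Depending on what lenders believe about rollover in the bad state, either of two outcomes is self-consistent.

```python
# A toy check of rollover multiplicity, not the model in the paper.
# All parameter values are purely illustrative assumptions.
p_good, rev_good, rev_bad, rev_date2, need = 0.8, 150.0, 20.0, 85.0, 100.0

def check_path(belief_rollover_fails):
    """Given lenders' belief about rollover in the bad state, compute the
    break-even date-1 obligation and test whether the belief is confirmed."""
    if belief_rollover_fails:
        # lenders expect to recover only the bad-state revenue if things go wrong
        obligation = (need - (1 - p_good) * rev_bad) / p_good
    else:
        # lenders expect full repayment in both states
        obligation = need
    shortfall = obligation - rev_bad          # amount to roll over in the bad state
    rollover_fails = shortfall > rev_date2    # new lenders will not lend beyond date-2 revenue
    return obligation, rollover_fails

for belief in (False, True):
    obligation, outcome = check_path(belief)
    print(f"belief that rollover fails = {belief!s:5} -> obligation {obligation:6.1f}, "
          f"bad-state shortfall {obligation - rev_bad:5.1f}, "
          f"rollover actually fails = {outcome} (belief confirmed: {outcome == belief})")
```

Both beliefs turn out to be self-confirming: the optimistic path carries a zero interest rate and a shortfall that can be refinanced, while the pessimistic path carries a 20 percent rate and a shortfall too large for new lenders to absorb.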
A key question then is the following: how does the availability of naked credit default swaps affect the range of borrowing requirements for which pessimistic paths (with significant rollover risk) exist? And conditional on the selection of such a path, how are the terms of borrowing affected by the presence of these credit derivatives?
For reasons that are already clear from the baseline model, we find that pessimistic paths involve more punitive terms for the borrower when naked credit default swaps are present than when they are not. More interestingly, we find that there is a range of borrowing requirements for which a pessimistic path exists if and only if such contracts are allowed. That is, there exist conditions under which fears about the ability of the borrower to repay debt can be self-fulfilling only in the presence of credit derivatives. It is in this precise sense that the possibility of self-fulfilling bear raids can be said to arise when the use of such derivatives is unrestricted.
The finding that borrowers can more easily raise funds and obtain better terms when the use of credit derivatives is restricted does not necessarily imply that such restrictions are desirable from a policy perspective. A shift in terms against borrowers will generally reduce the number of projects that are funded, but some of these ought not to have been funded in the first place. Hence the efficiency effects of a ban are ambiguous. However, such a shift in terms against borrowers can also have a more subtle effect with respect to project choice: it can tilt managerial incentives towards the selection of riskier projects with lower expected returns. This happens because a larger debt obligation makes projects with greater upside potential more attractive to the firm, as more of the downside risk is absorbed by creditors.
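A back-of-the-envelope example, again with made-up numbers rather than anything from the paper, shows how the tilt works: because equity holders keep only the payoff in excess of the debt obligation, a larger obligation can make a riskier project with a lower expected payoff the more attractive choice.

```python
# An illustrative calculation with assumed numbers: a safe project pays 100 for
# sure; a risky one pays 150 with probability 0.6 and 0 otherwise, so its
# expected payoff (90) is lower. Equity holders keep max(payoff - debt, 0).

def expected_equity(debt, payoffs_and_probs):
    return sum(p * max(payoff - debt, 0.0) for payoff, p in payoffs_and_probs)

safe  = [(100.0, 1.0)]
risky = [(150.0, 0.6), (0.0, 0.4)]

for debt in (20.0, 40.0):
    e_safe, e_risky = expected_equity(debt, safe), expected_equity(debt, risky)
    choice = "risky" if e_risky > e_safe else "safe"
    print(f"debt = {debt:5.1f}: equity(safe) = {e_safe:5.1f}, equity(risky) = {e_risky:5.1f} -> choose {choice}")

# With debt of 20 the safe project wins (80 vs 78); with debt of 40 the risky
# project wins (66 vs 60) even though its expected payoff is lower.
```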
The central message of our work is that the existence of zero sum side bets on default has major economic repercussions. These contracts induce investors who are optimistic about the future revenues of borrowers, and would therefore be natural purchasers of debt, to sell credit protection instead. This diverts their capital away from potential borrowers and channels it into collateral to support speculative positions. As a consequence, the marginal bond buyer is less optimistic about the borrower's prospects, and demands a higher interest rate in order to lend. This can result in an increased likelihood of default, and the emergence of self-fulfilling paths in which firms are unable to roll over their debt, even when such trajectories would not arise in the absence of credit derivatives. And it can influence the project choices of firms, leading not only to lower levels of investment overall but also in some cases to the selection of riskier ventures with lower expected returns.
James Tobin (1984) once observed that the advantages of greater “liquidity and negotiability of financial instruments” come at the cost of facilitating speculation, and that greater market completeness under such conditions could reduce the functional efficiency of the financial system, namely its ability to facilitate “the mobilization of saving for investments in physical and human capital... and the allocation of saving to their more socially productive uses.” Our analysis suggests that naked credit default swaps are a case in point.
This conclusion, however, is subject to the caveat that there exist conditions under which the presence of such contracts can prevent the funding of inefficient projects. Furthermore, an outright ban may be infeasible in practice due to the emergence of close substitutes through financial engineering. Even so, it is important to recognize that the proliferation of speculative side bets can have significant effects on economic fundamentals such as the terms of financing, the patterns of project selection, and the incidence of corporate and sovereign default.

Saturday, August 28, 2010

Lessons from the Kocherlakota Controversy

In a speech last week the President of the Minneapolis Fed, Narayana Kocherlakota, made the following rather startling claim:
Long-run monetary neutrality is an uncontroversial, simple, but nonetheless profound proposition. In particular, it implies that if the FOMC maintains the fed funds rate at its current level of 0-25 basis points for too long, both anticipated and actual inflation have to become negative. Why? It’s simple arithmetic. Let’s say that the real rate of return on safe investments is 1 percent and we need to add an amount of anticipated inflation that will result in a fed funds rate of 0.25 percent. The only way to get that is to add a negative number—in this case, –0.75 percent.

To sum up, over the long run, a low fed funds rate must lead to consistent—but low—levels of deflation.
The proposition that a commitment by the Fed to maintain a low nominal interest rate indefinitely must lead to deflation (rather than accelerating inflation) defies common sense, economic intuition, and the monetarist models of an earlier generation. This was pointed out forcefully and in short order by Andy Harless, Nick Rowe, Robert Waldmann, Scott Sumner, Mark Thoma, Ryan Avent, Brad DeLong, Karl Smith, Paul Krugman and many other notables.

But Kocherlakota was not without his defenders. Stephen Williamson and Jesus Fernandez-Villaverde both argued that his claim was innocuous and completely consistent with modern monetary economics. And indeed it is, in the following sense: the modern theory is based on equilibrium analysis, and the only equilibrium consistent with a persistently low nominal interest rate is one in which there is a stable and low level of deflation. If one accepts the equilibrium methodology as being descriptively valid in this context, one is led quite naturally to Kocherlakota's corner.

But while Williamson and Fernandez-Villaverde interpret the consistency of Kocherlakota's claim with the modern theory as a vindication of the claim, others might be tempted to view it as an indictment of the theory. Specifically, one could argue that equilibrium analysis unsupported by a serious exploration of disequilibrium dynamics could lead to some very peculiar and misleading conclusions. I have made this point in a couple of earlier posts, but the argument is by no means original. In fact, as David Andolfatto helpfully pointed out in a comment on Williamson's blog, the same point was made very elegantly and persuasively in a 1992 paper by Peter Howitt.

Howitt's paper is concerned with the inflationary consequences of a pegged nominal interest rate, which is precisely the subject of Kocherlakota's thought experiment. He begins with an old-fashioned monetarist model in which output depends positively on expected inflation (via the expected real rate of interest), realized inflation depends on deviations of output from some "natural" level, and expectations adjust adaptively. In this setting it is immediately clear that there is a "rational expectations equilibrium with a constant, finite rate of inflation that depends positively on the nominal rate of interest" chosen by the central bank. This is the equilibrium relationship that Kocherlakota has in mind: lower interest rates correspond to lower inflation rates and a sufficiently low value for the former is associated with steady deflation.

The problem arises when one examines the stability of this equilibrium. Any attempt by the bank to shift to a lower nominal interest rate leads not to a new equilibrium with lower inflation, but to accelerating inflation instead. The remainder of Howitt's paper is dedicated to showing that this instability, which is easily seen in the simple old-fashioned model with adaptive expectations, is in fact a robust insight and holds even if one moves to a "microfounded" model with intertemporal optimization and flexible prices, and even if one allows for a broad range of learning dynamics. The only circumstance in which a lower nominal rate results in lower inflation is if individuals are assumed to be "capable of forming rational expectations ab ovo".
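The point is easy to verify numerically. The sketch below is a minimal adaptive-expectations exercise in the spirit of Howitt's old-fashioned model, not his actual specification, and its parameter values are purely illustrative: with a 1 percent natural real rate and a 25 basis point peg, the rational expectations equilibrium is exactly the 0.75 percent deflation of Kocherlakota's arithmetic, but starting from any positive level of expected inflation the adaptive process moves away from that equilibrium rather than toward it.

```python
# A minimal sketch of the instability Howitt describes, not his model itself.
# Assumed toy structure: output rises with expected inflation via the expected
# real rate, realized inflation equals expected inflation plus a term in the
# output gap, and expectations adjust adaptively. Parameters are illustrative.

i_peg   = 0.0025   # pegged nominal rate (25 basis points)
rho_nat = 0.01     # "natural" real rate of 1 percent
kappa   = 0.5      # combined slope of the output and inflation relations
lam     = 0.3      # speed of adaptive expectation adjustment

pi_ree = i_peg - rho_nat     # the rational expectations equilibrium: -0.75 percent
pi_exp = 0.02                # initial expected inflation of 2 percent

print(f"REE inflation under the peg: {pi_ree:.2%}")
for t in range(12):
    pi = pi_exp + kappa * (pi_exp - pi_ree)   # realized inflation this period
    pi_exp += lam * (pi - pi_exp)             # adaptive updating
    print(f"period {t:2d}: inflation = {pi:7.2%}, expected = {pi_exp:7.2%}")

# Expected inflation never converges to the -0.75% equilibrium; the gap grows
# by a factor of (1 + lam * kappa) each period -- Wicksell's cumulative process.
```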

Howitt places this finding in historical context as follows (emphasis added):
In his 1968 presidential address to the American Economic Association, Milton Friedman argued, among other things, that controlling interest rates tightly was not a feasible monetary policy. His argument was a variation on Knut Wicksell's cumulative process. Start in full employment with no actual or expected inflation. Let the monetary authority peg the nominal interest rate below the natural rate. This will require monetary expansion, which will eventually cause inflation. When expected inflation rises in response to actual inflation, the Fisher effect will put upward pressure on the interest rate. More monetary expansion will be required to maintain the peg. This will make inflation accelerate until the policy is abandoned. Likewise, if the interest rate is pegged above the natural rate, deflation will accelerate until the policy is abandoned. Since no one knows the natural rate, the policy is doomed one way or another.

This argument, which was once quite uncontroversial, at least among monetarists, has lost its currency. One reason is that the argument invokes adaptive expectations, and there appears to be no way of reformulating it under rational expectations... in conventional rational expectations models, monetary policy can peg the nominal rate... without producing runaway inflation or deflation... Furthermore... pegging the nominal rate at a lower value will produce a lower average rate of inflation, not the ever-higher inflation predicted by Friedman...

Thus the rational expectations revolution has almost driven the cumulative process from the literature. Modern textbooks treat it as a relic of pre-rational expectations thought... contrary to these rational expectations arguments, the cumulative process is not only possible but inevitable, not just in a conventional Keynesian macro model but also in a flexible-price, micro-based, finance constraint model, whenever the interest rate is pegged... the essence of the cumulative process lies not in an economy's rational expectations equilibria but in the disequilibrium adjustment process by which people try to acquire rational expectations... under a wide set of assumptions, the process cannot converge if the monetary authority keeps interest rates pegged... the cumulative process is a manifestation of this nonconvergence. 
Thus the cumulative process should be regarded not as a relic but as an implication of real-time belief formation of the sort studied in the literature on convergence (or nonconvergence) to rational expectations equilibrium... Perhaps the most important lesson of the analysis is that the assumption of rational expectations can be misleading, even when used to analyze the consequences of a fixed monetary regime. If the regime is not conducive to expectational stability, then the consequences can be quite different from those predicted under rational expectations... in general, any rational expectations analysis of monetary policy should be supplemented with a stability analysis... to determine whether or not the rational expectations equilibrium could ever be observed. 
To this I would add only that a stability analysis is a necessary supplement to equilibrium reasoning not just in the case of monetary policy debates, but in all areas of economics. For as Richard Goodwin said a long time ago, an "equilibrium state that is unstable is of purely theoretical interest, since it is the one place the system will never remain."

---

Update (8/29). From a comment by Robert Waldmann:
I think that it is important that in monetary models there are typically two equilibria -- a monetary equilibrium and a non-monetary equilibrium.

The assumption that the economy will end up in a rational expectations equilibrium does not imply that a low nominal interest rate leads to an equilibrium with deflation. It might lead to an equilibrium in which dollars are worthless.

I'd say the experiment has been performed. From 1918 through (most of) 1923 the Reichsbank kept the discount rate low (3.5% IIRC) and met demand for money at that rate.

The result was not deflation. By October 1923 the Reichsmark was no longer used as a medium of exchange.
In fact, the only stable steady state under a nominal interest rate peg in the Howitt model is the non-monetary one.