Professional Resume


Citizenship: US


Washington University, B.A., Spring 1960

Princeton University, Ph.D. (Economics), Spring 1972

Honors: Woodrow Wilson Fellowship

Fields (JEL Classifications):

    Microeconomics

        Production and Organization (D41, D24)

        Market Structure (D40, D41, D42, D43)

        General Equilibrium (D58)

        Information and Uncertainty (D82, D83)

        Intertemporal Choice and Growth (D90)

    Macroeconomics and Monetary Economics

        General Aggregative Models (E17)

        Sector Models (E21, E22, E23, E27)

        Money and Interest Rates (E41, E47)

        Monetary Policy Analysis and Modeling (E50, E52, E58)

        Macroeconomic Policy (E60, E61)

    International Trade and Finance

        Open Economy Macroeconomics (F41, F42)

    Financial Economics

        General Financial Models (G11, G12)

Work Experience:

I. September 1966 - April 1968: U.S. Department of Agriculture

My assignment was to develop an econometric model of agricultural production based on survey data to study the costs and benefits of pesticides.

II. May 1984 - June 1985: International Monetary Fund, Quantitative Representative to Nepal Rastra Bank

My job included advising senior and junior staff on modeling techniques. I also taught courses on econometrics and mathematical modeling.


III. May 1968 - March 2001: Division of Research and Statistics, Board of Governors of the Federal Reserve System

My responsibilities in the Division of Research and Statistics at the Federal Reserve included the following:

1. Provide information, analysis, and technical advice to the Division’s senior staff, Board members, and the Federal Open Market Committee (FOMC) via memoranda, reports, and briefing material.

2. Provide model-based forecasts of key economic and financial variables, and develop and maintain econometric models for use in forecasting and policy analysis.

3. Prepare research papers, presented at in-house and outside seminars and conferences and/or published in professional journals, on topics in macroeconomics and monetary policy.

4. Write testimony and speeches and provide background material for Board members, answer inquiries from Congress and the general public, and review articles and research papers prepared by Federal Reserve System staff.

In later years, my work focused on the design of optimal decision rules when policy makers harbor Knightian uncertainty about their models. Model uncertainty is a perennial issue for policy authorities, and especially in cases such as the newly formed European Union, where a coherent model of macroeconomic behavior is a dream at best, policy may need to account for extreme model uncertainty. In practice, strong doubts about model specification can cause decision makers to prefer rules that perform well across a set of models. My interest in this subject goes back to 1982, when I wrote what is probably the first paper on robust control in economics, a topic that is just now gaining momentum. In that now increasingly cited paper, I laid out the basic principles of a game-theoretic approach to monetary policy.
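
As a schematic illustration (my notation here, not the 1982 paper's exact model), the game-theoretic approach casts policy as a zero-sum game between the policy maker and a malevolent "nature" that picks the worst model perturbation from a bounded set:

    \[
    \min_{\{u_t\}} \, \max_{\{w_t\} \in \mathcal{W}} \; E \sum_{t=0}^{\infty} \beta^{t} \left( x_t' Q x_t + u_t' R u_t \right),
    \qquad
    x_{t+1} = A x_t + B u_t + C\,(\epsilon_{t+1} + w_{t+1}),
    \]

where x_t collects the state variables, u_t is the policy instrument, \epsilon_t is ordinary noise, and w_t is the misspecification process controlled by nature. The minimax policy is robust in the sense that it bounds the worst loss attainable over the set \mathcal{W}.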

Throughout the years, most of my attention was focused on monetary policy. My published and unpublished papers (see below) follow an empirical and analytical trail of issues and concerns that, at the time of writing, were foremost in the minds of policy makers: What are the ideal targets and instruments of policy? In what sense would setting a band for the overnight interest rate be optimal for policy? How does one best characterize the cost of disinflation in a dynamic economy with frictions when firms and households form rational expectations of the future? How can and should policy makers cope with uncertainty in their data and in their models? How should information be used optimally? Is it useful for the Federal Reserve to target asset prices? This work led to the publication of a number of articles in major academic journals.

A 2001 paper on robust monetary policy (co-authored with Robert J. Tetlow) takes full advantage of the latest developments in robust control techniques in the engineering literature, where the theory is used to design space shuttles and jet fighter-bombers. “Robust monetary policy with misspecified models: Does model uncertainty always call for attenuated policy?” appeared in the June 2001 issue of the Journal of Economic Dynamics and Control. In 2004, my colleague Robert J. Tetlow and I published “Avoiding Nash Inflation: Bayesian and Robust Responses to Model Uncertainty” in the Review of Economic Dynamics. It examines how a policy maker who applies a constant-gain algorithm in estimating a Phillips curve can fall into the trap of an induction problem that may lead to cycles of inflation and disinflation when the central bank uses either optimal or robust control techniques to guide its policy decisions.
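
For orientation, a minimal sketch of constant-gain least squares, assuming a linear Phillips-curve regression of inflation y_t on regressors x_t with fixed gain g \in (0,1):

    \[
    \hat\beta_t = \hat\beta_{t-1} + g\, R_t^{-1} x_t \big( y_t - x_t' \hat\beta_{t-1} \big),
    \qquad
    R_t = R_{t-1} + g \big( x_t x_t' - R_{t-1} \big).
    \]

Because the gain never shrinks, old observations are discounted and the estimated coefficients drift perpetually; it is this drift that can set the cycles of inflation and disinflation in motion.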

In 2009, we published the oddly titled paper, “Robustifying Learnability,” which does not assume rational expectations in equilibrium, on the supposition that agents in an economy will likely need time and experience to learn how it functions. The issue of learning has been explored in depth by George Evans and Seppo Honkapohja in their book, “Learning and Expectations in Macroeconomics.” A principal issue is whether learning by agents actually leads to a rational expectations equilibrium and how policy might be adjusted so that learning is equilibrating. In “Robustifying Learnability,” we add the assumption that agents make mistakes and look for policies that bound the worst learning errors that can be made without sending the economy onto unstable paths or into indeterminacy of equilibrium.
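
For reference, the benchmark learnability criterion in the Evans-Honkapohja framework is E-stability. Let T(\phi) map agents' perceived law of motion, with parameter vector \phi, into the actual law of motion their behavior induces; under standard conditions a rational expectations equilibrium \bar\phi = T(\bar\phi) is learnable by least-squares learning when the differential equation

    \[
    \frac{d\phi}{d\tau} = T(\phi) - \phi
    \]

is locally asymptotically stable at \bar\phi, that is, when all eigenvalues of DT(\bar\phi) have real parts less than one. “Robustifying Learnability” asks how much misspecification this convergence can survive.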

Selected Publications

A list of papers and published articles, with abstracts, follows.

Robustifying Learnability

(Robert J. Tetlow, co-author)

Journal of Economic Dynamics and Control

vol. 33(2), pages 296-316 (February 2009).


In recent years, the learnability of rational expectations equilibria (REE) and determinacy of economic structures have rightfully joined the usual performance criteria among the sought-after goals of policy design. Some contributions to the literature, including Bullard and Mitra [2002. Learning about monetary policy rules. Journal of Monetary Economics 49 (6), 1105-1139] and Evans and Honkapohja [2006. Monetary Policy, Expectations, and Commitment. Scandinavian Journal of Economics 108, 15-38], have made significant headway in establishing certain features of monetary policy rules that facilitate learning. However, a treatment of policy design for learnability in worlds where agents have potentially misspecified their learning models has yet to surface. This paper provides such a treatment. We begin with the notion that because the profession has yet to settle on a consensus model of the economy, it is unreasonable to expect private agents to have collective rational expectations. We assume that agents have only an approximate understanding of the workings of the economy and that their learning of the reduced forms of the economy is subject to potentially destabilizing perturbations. The issue is then whether a central bank can design policy to account for these perturbations and still assure the learnability of the model. We provide two examples, one of which, the canonical New Keynesian business cycle model, serves as a test case. For different parameterizations of a given policy rule, we use structured singular value analysis (from robust control theory) to find the largest ranges of misspecification that can be tolerated in a learning model without compromising convergence to an REE. In addition, we study the cost, in terms of steady-state performance, of a central bank that acts to robustify learnability on the transition path to REE.
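
The structured singular value invoked above is a standard robust-control quantity. For a matrix M and a prescribed set \boldsymbol{\Delta} of structured perturbations,

    \[
    \mu_{\boldsymbol{\Delta}}(M) = \Big( \min \big\{ \bar\sigma(\Delta) : \Delta \in \boldsymbol{\Delta},\ \det(I - M \Delta) = 0 \big\} \Big)^{-1},
    \]

with \mu = 0 when no admissible \Delta makes I - M\Delta singular. In the paper's application, 1/\mu measures the largest structured misspecification of the learning model that leaves convergence to the REE intact.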


Avoiding Nash Inflation: Bayesian and Robust Responses to Model Uncertainty (Robert J. Tetlow, co-author)

Review of Economic Dynamics, vol. 7(4), pages 869-899 (October 2004)


We examine learning, model misspecification, and robust policy responses to misspecification in a quasi-real-time environment. The laboratory for the analysis is the Sargent (1999) explanation for the origins of inflation in the 1970s and the subsequent disinflation. Three robust policy rules are derived that differ according to the extent to which misspecification is taken as a parametric phenomenon. These responses to drifting estimated parameters and apparent misspecification are compared to the certainty-equivalent case studied by Sargent. We find gains from utilizing robust approaches to monetary policy design, but only when the approach to robustness is carefully tailored to the problem at hand. In the least parametric approach, the medicine of robust control turns out to be too potent for the disease of misspecification. In the most parametric approach, the response to misspecification is too weak and too misdirected to be of help. But when the robust approach to policy is narrowly directed in the correct location, it can avoid Nash inflation and improve social welfare. It follows that agnosticism regarding the sources of misspecification has its pitfalls. We also find that Sargent's story for the rise of inflation in the 1970s and its subsequent decline in the 1980s is robust to most ways of relaxing a strong assumption in the original work.


Robust monetary policy with misspecified models: Does model uncertainty always call for attenuated policy?

(Robert J. Tetlow, co-author)

Journal of Economic Dynamics and Control,

vol. 25(6-7), pages 911-949 (June 2001).


This paper explores Knightian model uncertainty as a possible explanation of the considerable difference between estimated interest rate rules and optimal feedback descriptions of monetary policy. We focus on two types of uncertainty: (i) unstructured model uncertainty reflected in additive shock error processes that result from omitted-variable misspecifications, and (ii) structured model uncertainty, where one or more parameters are identified as the source of misspecification. For an estimated forward-looking model of the US economy, we find that rules that are robust against uncertainty, the nature of which is unspecifiable, or against one-time parametric shifts, are more aggressive than the optimal linear quadratic rule. However, policies designed to protect the economy against the worst-case consequences of misspecified dynamics are less aggressive and turn out to be good approximations of the estimated rule. A possible drawback of such policies is that the losses incurred from protecting against worst-case scenarios are concentrated among the same business cycle frequencies that normally occupy the attention of policymakers.
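
Schematically, and in notation that is mine rather than the paper's, the two types of uncertainty can be written, for a state-space model x_{t+1} = A x_t + B u_t + \epsilon_{t+1}, as

    \[
    \text{(i) unstructured:} \quad x_{t+1} = A x_t + B u_t + \epsilon_{t+1} + C w_{t+1}, \qquad \|w\| \le \eta,
    \]

    \[
    \text{(ii) structured:} \quad x_{t+1} = (A + \delta A_1)\, x_t + B u_t + \epsilon_{t+1}, \qquad |\delta| \le \bar\delta,
    \]

where w stands in for omitted-variable dynamics the modeler cannot specify and \delta perturbs designated parameters. Robust rules are then chosen to minimize the worst-case loss over these sets.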


Simplicity versus optimality: The choice of monetary policy rules when agents must learn (Robert J. Tetlow, co-author)

Journal of Economic Dynamics and Control,

vol. 25(1-2), pages 245-279 (January 2001).


The normal assumption of full information is dropped and the choice of monetary policy rules is instead examined when private agents must learn the rule. A small, forward-looking model is estimated and stochastic simulations conducted with agents using discounted least squares to learn of a change of preferences or a switch to a more complex rule. We find that the costs of learning a new rule may be substantial, depending on preferences and the rule that is initially in place. Policymakers with strong preferences for inflation control incur substantial costs when they change the rule in use, but are nearly always willing to bear the costs. Policymakers with weak preferences for inflation control may actually benefit from agents’ prior belief that a strong rule is in place.


An Optimal Interest Rate Rule with Information from Money and Auction Markets

Journal of Money, Credit and Banking, vol. 26, issue 4, pages 917-933 (1994)


For an economy characterized by neo-Keynesian wage rigidity, an optimal open market rule is derived based on financial market information, including auction price behavior. Simulations of a small model of the United States--estimated via full information maximum likelihood together with a numerical procedure for solving dynamic, linear rational expectations models--are used to evaluate the response of the economy to sectoral shocks, given the optimal interest rate rule. In the case of three aggregate commodity price indexes studied here, the additional indicator information is unlikely to have significant impact on the performance of monetary policy.


Comparing forecasts from fixed and variable coefficient models: The case of money demand

(P. A. V. B. Swamy and Arthur B. Kennickell, co-authors)

International Journal of Forecasting 6(4) pp 469-477, December 1990


In this paper we introduce a class of tentatively plausible, fixed-coefficient models of money demand and evaluate their forecast performance. When these models are re-estimated allowing all coefficients to vary over time, the forecasting performance improves dramatically. Aside from offering insights about improved methods of analyzing time series data, the most promising direct use for point estimates derived from time-varying coefficients is as an aid in calibrating proposed models of the kind discussed here.
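
A minimal sketch of the idea, assuming a random-walk law of motion for the coefficients (the paper's stochastic-coefficient machinery, in the Swamy tradition, differs in detail):

    \[
    y_t = x_t' \beta_t + \varepsilon_t, \qquad \beta_t = \beta_{t-1} + v_t, \qquad v_t \sim N(0, \Sigma_v),
    \]

where y_t is money demand and x_t its determinants. The Kalman filter delivers the time-varying estimates, and the fixed-coefficient model is recovered as the special case \Sigma_v = 0.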

Further thoughts on testing for causality with econometric models

(P. A. V. B. Swamy, co-author)

Journal of Econometrics 39(1-2), pages 105-147, (October 1988)


The ontological basis for causality testing must be some empirically interpretable system leading one to specify a logically valid a priori law that can be shown to exist. Such paradigms as conventional, linear and non-linear models, be they structural or time series, tend to embody contradictory assumptions that, unfortunately, make causal interpretability problematic. The paper demonstrates how these difficulties can be avoided using a stochastic-coefficient approach. The final part of this article is devoted to a discussion of probabilistic logic as a valid tool for scientific analysis and interpretation of causal relationships.

The foundations of econometrics--are there any?

(P. A. V. B. Swamy and Roger K. Conway, co-authors)

Econometric Reviews, 4(1), pages 1-61 (1985)


The conclusions of a logically consistent economic theory which strictly adheres to Aristotle's axioms of logic are factually true if its sufficient conditions are all factually true. Alternatively, if a conclusion of such a theory is false, then at least one of its assumptions is false. Unfortunately, the factual truth of sufficient conditions cannot be established because the problem of induction is impossible to solve. It is also true that the falsity of a conclusion cannot be established in the presence of uncertainty. While the philosophy of instrumentalism applied to sufficient and logically consistent explanations may provide useful solutions to immediate practical problems, the principles of simplicity, parsimony and profligacy--all of them requiring conditional deductive arguments--are useless as criteria for model choice.

Activist vs. non-activist monetary policy: optimal rules under extreme uncertainty, 1982 working paper

Reprinted in

Finance and Economics Discussion Series 2001-02 (January)

Editorial Note to 2001 reprint

This working paper is a re-issue of an unpublished manuscript written eighteen years ago. The reason for resurrecting it now is that it has been referenced recently in the new literature on robust control. In 1982, this topic had little currency in the economics literature; and even in engineering, where this field was developed, the topic had just begun to take hold with the publication of G. Zames’ seminal 1981 article on control. Since I wrote “Activist vs. Non-Activist Monetary Policy,” the technology of robust control has advanced by leaps and bounds, and so has understanding of the subject, thanks largely to the many contributions of Lars Hansen and Thomas Sargent. Even so, many basic insights and intuitions remain the same. Because this paper is framed in simple mathematical terms, the reader may view it as a primer on robust control, useful even today, when much of the literature on the subject tends to be very technical.


This paper analyzes the optimality of reactive feedback rules advocated by neo-Keynesians and constant money growth rules proposed by monetarists. The basis for this controversy is not merely a disagreement concerning the sources and impacts of uncertainty in the economy, but also an apparent fundamental difference in attitudes toward uncertainty about models. To address these differences, this paper compares the relative reactiveness of a monetary policy instrument to conditioning information under two starkly differing versions of uncertainty about the model and the data driving it: Bayesian uncertainty, which assumes known probability distributions for a model's parameters and the data, and Knightian uncertainty, which does not. In the latter case, the policy maker copes with extreme uncertainty by playing a mental game against "nature," using minimax strategies. Contrary to common intuition, extreme uncertainty about a model's parameters does not necessarily imply less responsiveness to conditioning information--here represented by the lagged gap between nominal income growth and its trend--and it certainly does not justify constancy of money growth except in an extreme version of Brainard's (1967) result. A partial constant growth rule can be derived in only one special case: if the conditioning variable in the feedback rule is also uncertain in either the Bayesian or Knightian sense and the authority uses Neyman-Pearson likelihood ratio tests to distinguish noise from information with each new observation.
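
The Brainard (1967) result invoked above can be stated in one line. Suppose the authority moves y = b u + \epsilon toward a target y* with an uncertain multiplier b (mean \bar b, variance \sigma_b^2); minimizing E(y - y*)^2 gives

    \[
    u^{*} = \frac{\bar b\, y^{*}}{\bar b^{2} + \sigma_b^{2}},
    \]

a response attenuated relative to the certainty-equivalent rule u = y*/\bar b. Only as \sigma_b^2 grows without bound does the response shrink toward inaction, which is the extreme version under which constancy of money growth can be rationalized.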

The short-run volatility of money stock targeting

(co-authored with Peter Tinsley and G. Fries)

Journal of Monetary Economics, 10(2), pp 215-237 (1982)

Working paper version in: Special Studies Paper 169, Federal Reserve Board, 1982.


The altered allocations of money market volatility obtained by alternative monetary policy procedures are illustrated by stochastic simulations of a staff monthly model. The results indicate the nature of the tradeoff between short-run volatility in the money stock and in the funds rate that is available to money stock targeting procedures.

A maximum probability approach to short-run policy

(Peter A. Tinsley, co-author)

Journal of Econometrics, 15 (1), pp 31-48, January 1981

Monopolistic competition and sequential search

Journal of Economic Dynamics and Control, 2, pp. 257-281, 1980.


In models describing sequential search as an optimal mode of behavior for buyers seeking the lowest price, it is often implicitly assumed that price dispersion actually exists. To evaluate the existence of equilibrium price distributions under sequential search, this paper presents two variants of a model of monopolistic competition. It is shown that under reasonable assumptions in a finite market, Nash competitive behavior is not consistent with price dispersion in equilibrium. However, in a continuous atomistic market, a price distribution can arise as a mapping from the distribution over search costs among consumers.
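
For orientation, in the textbook sequential-search setup underlying such models, a buyer facing search cost c per price quotation drawn from a distribution F follows a reservation-price rule, buying at the first price p \le R, where R solves

    \[
    c = \int_{0}^{R} (R - p)\, dF(p).
    \]

(The paper's buyers have price-elastic demand, so its stopping condition is more involved.) It is firms' best responses to such stopping rules that make dispersed prices hard to sustain with finitely many sellers.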

The ‘flexible accelerator’ and optimization with a finite horizon

Economics Letters, 5(1), pp 21-27, 1980


So-called ‘flexible accelerator’ or partial adjustment models can be derived from dynamic optimization under convex adjustment costs. However, even with time-invariant adjustment costs and constant discount rate, constancy of the speed of adjustment requires infinite horizon planning. Using a simple production model, it is shown here that for finite horizons, the adjustment may nevertheless be approximately constant for much of the time if the discount rate is high enough and if the marginal adjustment cost is sufficiently small.
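
The infinite-horizon benchmark behind this result, in deliberately stripped-down form: minimizing \sum_t \beta^t [ a (K_t - K^*)^2 + c (K_t - K_{t-1})^2 ] yields the partial adjustment rule

    \[
    K_t - K_{t-1} = \lambda\, (K^* - K_{t-1}), \qquad \lambda = 1 - \mu,
    \]

where \mu \in (0,1) is the stable root of \beta c \mu^2 - (a + c + \beta c)\mu + c = 0, a constant. With a finite horizon the corresponding root becomes time dependent, which is why constancy of the adjustment speed holds only approximately, under the conditions the paper identifies.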

Price dispersion in atomistic competition

Economics Letters, 3(4), pp 327-332, 1979


This note presents necessary conditions for non-degenerate price dispersion in a continuous atomistic market where buyers with price-elastic demand search sequentially for the lowest price, and firms maximize profit subject to a variable average cost function.

Optimal bands in short-run monetary policy

Zeitschrift für die gesamte Staatswissenschaft, 134(2), pp 242-260, June 1978

Optimal Price Adjustment: Tests of a Price Equation in US Manufacturing

Proceedings of the 1972 IEEE Conference on Decision and Control and 11th Symposium on Adaptive Processes,

vol. 11, pp 26-30, December 13-15, New Orleans


This paper studies some microfoundations of persistent price behavior in the US, using a dynamic model of the firm that is constrained from instantaneous price adjustment by the optimizing search behavior of buyers. The implied econometric model is an equation featuring lag structures for prices, with lag weights that vary cyclically.

Price behavior in U.S. manufacturing: an application of dynamic monopoly pricing

Special Studies Paper 23, Federal Reserve Board, 1971.

N-person dynamic oligopoly: the case of conjectured price variations under certainty

Special Studies Paper 22, Federal Reserve Board, 1971

On the optimal monopoly price over time

Special Studies Paper 21, Federal Reserve Board, 1971.

Work in Progress

"The Great Inflation and the Great Moderation" (with Robert J. Tetlow) (manuscript in progress)


In the late 1960s and into the 1970s, the United States experienced a burst of inflation--the Great Inflation, in the words of DeLong (1997)--the origins of which have seemed hard to uncover. Then, in the 1980s, it all went away, replaced not only by lower inflation but by a remarkably less volatile economy--the Great Moderation, according to Stock and Watson (2003). Straightforward explanations for either of these phenomena have been hard to come by. Typically, one must appeal either to a number of proximate causes or to happenstance. This paper advances the idea that the Fed simply got the model wrong. We assume that the true model is a variant of the canonical New Keynesian business cycle model, but the Fed estimates a reduced-form VAR, consistent with common practice over the period. We find that a central bank, learning its model by variants of recursive least squares and choosing optimal policies conditional on its beliefs, would have allowed indeterminacy. We calibrate the resulting sunspots using empirical results from Leduc et al. (2003) and show that the "resonance frequency" of prediction errors generated by the model is consistent with their empirical results. The Volcker disinflation is then seen as a bold stroke that ruled out sunspot equilibria and restored the stability of inflation expectations. An implication is that the observed higher volatility of the economy in the 1970s relative to today is really a manifestation of having mistakenly assumed away sunspots, which show up as fundamental shocks during the earlier period.
JEL Classifications: C5, E5.

Keywords: monetary policy, learning, indeterminacy, sunspots, control problems.

“Robustly optimal carbon taxes with climate skeptic agents”


For an economy with climate-related externalities generated by fossil fuel in production and consumption, and a public that is skeptical about the true climate model, this paper studies a Ramsey planner's robustly optimal carbon tax policy. Because the planner does not know the probability model motivating household behavior and cannot infer it from the history of observations, it formulates a robust Ramsey problem by solving a multiplier problem due to Hansen and Sargent. By adopting some standard assumptions regarding utility and production, the paper derives two simple formulas for fossil fuel taxation at the consumption and production levels. Climate skepticism induces history dependence in the distortionary fuel consumption excise tax, which is, optimally, an ad valorem tax. The non-distortionary optimal production-level fuel excise tax recoups all unpriced carbon externalities, calculated as an expected present value, and therefore depends directly on the market-based stochastic discount factor, which is lowered by prevailing climate skepticism. As a proportion of GDP, this tax has a simple structure and depends only on the discount rate, the rate of carbon retention in the atmosphere, and future expected elasticities of output losses with respect to carbon emissions.
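
In schematic form (my notation, not the paper's exact formula), the production-level tax is the Pigouvian expected present value of the marginal damages from an extra unit of emissions:

    \[
    \tau_t = E_t \sum_{j \ge 0} m_{t,t+j}\, \phi_j \left( - \frac{\partial Y_{t+j}}{\partial S_{t+j}} \right),
    \]

where m_{t,t+j} is the stochastic discount factor (lowered by prevailing skepticism), \phi_j is the fraction of a unit of carbon still in the atmosphere after j periods, S is the atmospheric carbon stock, and Y is output. Dividing by GDP under the abstract's assumptions collapses this to a function of the discount rate, carbon retention, and the expected elasticities of output losses with respect to emissions.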

JEL Classifications: C6, C7, D9, Q5.

Keywords: climate sensitivity, Knightian uncertainty, misspecification, martingale, ambiguity aversion, robust control.