The Case for Monetary Policy Experimentation

Gideon Magnus
4 min read · Jan 31, 2022


With inflation picking up in the US and beyond, central bankers are actively contemplating their policy options. Should interest rates be raised, and if so, when?

Monetary policy clearly matters: market participants monitor central bank decisions, and any clues about them, more closely than almost any other area of policy.

It may come as a surprise, then, that economists understand the actual effects of monetary policy quite poorly. In particular, when a central bank (e.g. the Federal Reserve, European Central Bank, or Bank of England) raises interest rates, to what extent does this increase real, i.e. inflation-adjusted rates?

The main reason monetary policy is so poorly understood is that almost all economic data are poorly suited for testing hypotheses in a clean and simple way. Why? Ideally, data are collected from something resembling an experiment. For instance, the efficacy and safety of a new medicine are evaluated by comparing outcomes of a treatment and control group.

The key characteristic of an experiment is randomization: participants are split between treatment and control groups by chance, in other words, through a lottery. Without randomization, differences in outcomes could be due to many other factors, and so we are unable to accurately measure the effect of the treatment (for example, a medicine).

Suppose, for example, that we want to measure the effect of a college education on earnings. Of course, we observe differences between people who do and don’t attend college. But whether these differences are due to college attendance is hard to say, as the process that determines whether people attend college (or not) is far from random.

It is usually taken for granted that economists can’t run such experiments (“randomized controlled trials”). It is no surprise, then, that economists have become highly adept at seeking out non-experimental random events that serendipitously create treatment and control groups. Differences in outcomes can then be used to measure the effects of an economic intervention.

It is true: some economists do run experiments, for instance in computer labs, usually with students as participants. But while these experiments can be insightful, we ideally need experiments on a macro scale to understand the big macroeconomic questions.

Can large-scale economic experiments be done, or are they impossible, unethical, or dangerous?

I believe macroeconomic experiments are both desirable and feasible.

Let’s return to monetary policy. The board of a central bank meets on a regular basis to decide policy, primarily the short-term interest rate. The board comes to its decision by assessing the state of the economy and then weighing the pros and cons of higher versus lower rates according to their best guess as to how this might affect the economy. Clearly the decision process is far from random.

We observe data on interest rates and macroeconomic outcomes, just as we observe data on people’s college education and subsequent earnings. In both cases, however, the data cannot tell us much about how changes in interest rates (or college attendance) affect macroeconomic outcomes (or earnings).

Let us therefore consider this twist: right after the board makes a decision, it goes to a computer which generates a number according to a predefined random process. Suppose the board decides to raise the interest rate to 0.5%, but the computer picks either −0.1%, 0%, or +0.1%, with equal probability. This number is added to the board’s choice, so that the actual interest rate ends up being either 0.4%, 0.5%, or 0.6%.
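To make the mechanism concrete, here is a minimal sketch in Python using the numbers from the example above. The function name, the seeded generator, and the exact setup are my own illustrative assumptions, not part of the proposal itself.

```python
import numpy as np

def randomized_policy_rate(board_rate, rng, shocks=(-0.001, 0.0, 0.001)):
    """Add a pre-specified random shock to the board's chosen rate.

    board_rate: the rate the board decided on (e.g. 0.005 for 0.5%)
    rng: a random generator committed to in advance
    shocks: the possible perturbations, drawn with equal probability
    """
    shock = rng.choice(shocks)
    return board_rate + shock, shock

# Example: the board chooses 0.5%; the implemented rate ends up 0.4%, 0.5%, or 0.6%.
rng = np.random.default_rng(seed=2022)  # seed fixed in advance so draws are auditable
implemented, shock = randomized_policy_rate(0.005, rng)
print(f"implemented rate: {implemented:.3%} (shock: {shock:+.3%})")
```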

The computer creates randomness, i.e. uncertainty, which could be small, large, or anything in between. The nature of the process would have to be discussed and decided in advance. There is a clear tradeoff: greater uncertainty is economically costly, but would also make the resulting economic data more informative about the effects of monetary policy.
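A small simulation (my own stylized setup, not from the article) illustrates why this tradeoff exists: in a simple linear model, the standard error of the estimated policy effect shrinks as the variance of the randomized shocks grows, because the shocks are the exogenous variation that identifies the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = -0.5    # assumed effect of a 1pp rate shock on some outcome
n_meetings = 200

for shock_sd in (0.05, 0.10, 0.25):   # size of the randomized shocks, in percentage points
    estimates = []
    for _ in range(2000):
        shocks = rng.normal(0.0, shock_sd, n_meetings)                   # randomized part of the rate
        outcome = true_effect * shocks + rng.normal(0.0, 1.0, n_meetings)  # everything else is noise
        # OLS slope of outcome on shocks: identified because the shocks are random
        beta_hat = np.cov(shocks, outcome)[0, 1] / np.var(shocks, ddof=1)
        estimates.append(beta_hat)
    print(f"shock sd {shock_sd:.2f}pp -> std. error of estimated effect {np.std(estimates):.3f}")
```

Bigger shocks mean more short-run uncertainty, but also tighter estimates from the same number of meetings.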

Whatever the choice, it is important that the random process remain fixed for a predetermined length of time. After the trial period ends, a new process can be picked.

Policy decisions should be made at regular intervals, for instance once a week, month, or quarter; otherwise policymakers could simply generate random numbers until they get the outcome they want. In addition, we could let the computer create shocks at random points in time, i.e. in between board meetings. In effect, the computer would be continually adding noise to the human policy choice.
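One way the commitment might be implemented, sketched here under my own assumptions: all shocks for the trial period are drawn up front from a seed committed to in advance, on a fixed meeting schedule, with additional shocks arriving at random (here Poisson-timed) dates between meetings, so no one can re-draw until the period ends.

```python
import numpy as np

rng = np.random.default_rng(seed=20220131)   # seed committed (e.g. published) in advance

# Scheduled meeting shocks: one draw per meeting for the whole trial period.
n_meetings = 12                                          # e.g. one year of monthly meetings
meeting_shocks = rng.choice([-0.001, 0.0, 0.001], size=n_meetings)

# Extra shocks at random times between meetings: Poisson arrivals within each interval.
extra_shocks = []
for m in range(n_meetings):
    n_extra = rng.poisson(1.0)                           # on average one surprise shock per interval
    days = np.sort(rng.uniform(0, 30, n_extra))          # random days within the interval
    sizes = rng.normal(0.0, 0.0005, n_extra)             # small continuous perturbations
    extra_shocks.append(list(zip(days, sizes)))

# Because every draw is fixed by the committed seed and schedule, the board
# cannot regenerate shocks until the predetermined trial period has ended.
```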

Note that we would not be entrusting a computer to fully set policy: humans would set the target, while the computer adds random variation around this. That said, it might be worth including a “kill switch”, i.e. a formal rule under which the experiment is halted. But the threshold for this should be high — otherwise the randomness of the shocks would be diluted, rendering the data much less useful.

I expect the main objection to this proposal to be: “The economy is far too important to experiment with”. I would argue, on the contrary, that the economy is far too important for policymakers to be “flying half blind”. Moreover, to the extent experimentation is deemed costly, we can choose to reduce the amount of randomness, as discussed above.

Setting monetary policy this way would greatly help us understand its effects and would enhance future policy choices. In fact, some form of randomization could continue indefinitely. The idea could also be applied to other types of economic policy, for instance fiscal policy, e.g. spending levels and tax rates.

While my proposal may seem unorthodox, I believe the benefits of at least some random policy variation would far outweigh the costs.
