All fields are evolving; new methods are being brought into all fields all the time. But there have been really big shifts in empirical economics, empirical political science, and other social sciences, where we’ve started using experimental methods that health researchers have been using for a long time: RCTs, experimental approaches, really, you could say, research design-based empirical work. Angrist and Pischke’s article is really nice. For those of you who haven’t read it yet, it lays out the trends in social science, from the isolated early experiments to how momentum built around these issues. So I just want to highlight some of the patterns Angrist and Pischke bring out about the last 25 years.
And, you know, maybe this is how it’s going to go down with the new methods, I don’t know. The first thing they point out, and it’s pretty interesting, is that 25 or 30 years into this scientific revolution in empirical work in economics, there’s still a lot of heterogeneity across subfields in how widely these methods are used. Persistent heterogeneity. They put a lot of emphasis on this in their article. So, even though experimental and quasi-experimental approaches are widespread in development economics, labor economics, and some other fields, the subfields of economics collectively called applied microeconomics, they’re not in, say, industrial organization.
And one of the things I like about this article is that Angrist and Pischke say, “Hey, there are all these really obvious applications of these methods in industrial organization, and only one or two papers doing it. But somehow the leaders in IO, the powers that be in IO, don’t like these methods. So they haven’t spread that much in industrial organization.” So there’s more divergence in methods than you would think would be called for based on the intellectual subject matter. That’s pretty interesting. And I think it does speak to a world of multiple equilibria in methods. In certain research communities, certain methods and norms become standard, and that’s an equilibrium, and in others they don’t.
Trends in social science in the last 25 years
If social science is, in fact, experiencing a revolution, what changes have we seen? Economists Joshua Angrist of MIT and Jörn-Steffen Pischke of the London School of Economics identified some recent shifts in how economists and other social scientists do research. In general, they found that different subfields tend to use very different methods.
I’d like to dive a little deeper into Joshua Angrist and Jörn-Steffen Pischke’s article “The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics,” which I discussed in the video. In the article, Angrist and Pischke explore Ed Leamer’s view of econometrics as explained in his well-known 1983 paper “Let’s Take the Con Out of Econometrics” and seek to determine “whether the quality and the credibility of empirical work has increased since Leamer’s pessimistic assessment.”
In 1983, Leamer argued that contemporary empirical work “suffer[ed] from a distressing lack of robustness to changes in key assumptions – assumptions he called ‘whimsical’ because one seemed as good as another.” The ideal research design, he believed, was randomized trials and the “best way to use nonexperimental data to get closer to the experimental ideal [was] to explore the fragility of nonexperimental estimates.” However, he didn’t actually advocate searching for credible natural experiments or even performing randomized trials, and none of the central figures in his debate had much to say about research design.
To confront the “whimsical nature” of the key assumptions made in regression analysis, Leamer proposed implementing a process of sensitivity analysis, featuring an “explicitly Bayesian agenda” and “extreme bounds analysis”: estimating the same regression using many different sets of covariates as controls and reporting the range of resulting estimates.
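The mechanics of extreme bounds analysis are easy to sketch. Below is a minimal, self-contained Python illustration (simulated data and a hand-rolled OLS solver; this is not Leamer’s full Bayesian machinery, and all variable names are hypothetical): the coefficient on a variable of interest `x` is re-estimated under every subset of candidate controls, and the spread of those estimates is reported as the bounds. When the estimate swings widely across specifications, that is exactly the fragility Leamer warned about.

```python
import random

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                       # forward elimination with pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            A[r] = [A[r][c] - f * A[p][c] for c in range(k)]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in reversed(range(k)):             # back substitution
        beta[p] = (b[p] - sum(A[p][c] * beta[c] for c in range(p + 1, k))) / A[p][p]
    return beta

# Simulated data: z1 is a confounder correlated with x, z2 is irrelevant.
random.seed(0)
n = 500
x  = [random.gauss(0, 1) for _ in range(n)]
z1 = [0.5 * xi + random.gauss(0, 1) for xi in x]
z2 = [random.gauss(0, 1) for _ in range(n)]
y  = [2.0 * x[i] + 3.0 * z1[i] + random.gauss(0, 1) for i in range(n)]

# Re-estimate the coefficient on x under every choice of controls.
controls = {"z1": z1, "z2": z2}
estimates = {}
for subset in [(), ("z1",), ("z2",), ("z1", "z2")]:
    rows = [[1.0, x[i]] + [controls[c][i] for c in subset] for i in range(n)]
    estimates[subset] = ols(rows, y)[1]      # position 1 = coefficient on x

lo, hi = min(estimates.values()), max(estimates.values())
print(f"extreme bounds on the x coefficient: [{lo:.2f}, {hi:.2f}]")
```

Here the bounds are wide because omitting the confounder `z1` biases the coefficient on `x` upward, so the conclusion is “fragile” in Leamer’s sense; when the bounds are tight across all control sets, the estimate is robust.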
Angrist and Pischke argue that empirical econometric work has vastly improved since Leamer’s article was published in 1983 and that “a clear-eyed focus on research design is at the heart of the credibility revolution in empirical economics.” They list other key factors contributing to this improvement, including the availability of more and better data and advances in theoretical econometric understanding, as well as the sensitivity analysis Leamer advocated. They offer explanations for the improvements in empirical work today:
“Better and more robust estimation methods are part of the story, as is a reduced emphasis on econometric considerations that are not central to a causal interpretation of the main findings. But the primary force driving the credibility revolution has been a rigorous push for better and more clearly articulated research designs.”
They also state that a “well-done observational study can be more credible and persuasive than a poorly executed randomized trial.”
Further expressing the importance of research design, Angrist and Pischke argue that “good designs have a beneficial side effect: they typically lend themselves to a simple explanation of empirical methods and a straightforward presentation of results.” They go on to say that they “find the empirical results generated by a good research design more compelling than the conclusions derived from a good theory…”
In response to this “revolution” in credibility and research designs, concerns have arisen, especially with regard to external validity — whether or not the evidence from an experiment is applicable to situations beyond those of the original study. Angrist and Pischke respond to such concerns, saying that “empirical evidence on any given causal effect is always local, derived from a particular time, place, and research design” and that a constructive response would be to “look for more evidence, so that a more general picture begins to emerge.”
In conclusion, Angrist and Pischke argue that the improvement in empirical work in econometrics is due not to the extreme bounds sensitivity analysis Leamer suggested, but to better research design and “careful implementation of quasi-experimental methods.” The progress that has been made in this field has produced a “credibility revolution” in fields as varied as labor, public finance, and development economics.
If you want to dive deeper into the material, you can read the entirety of each paper by clicking on the links in the SEE ALSO section at the bottom of this page.
Angrist, Joshua D, and Jörn-Steffen Pischke. 2010. “The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics.” Journal of Economic Perspectives 24 (2): 3–30. doi:10.1257/jep.24.2.3.
© Center for Effective Global Action