
Commit 3ac1151

NGM example: minor edits

1 parent b7d8c31 commit 3ac1151

File tree

3 files changed: +43 −25 lines changed


README.md (+1 −1)
@@ -15,7 +15,7 @@ Other versions: [Matlab](https://github.com/mikkelpm/stderr_calibration_matlab)
 
 - [example.ipynb](example.ipynb): Simple interactive example in Jupyter Notebook illustrating the main functionality of the package (also available in [HTML format](https://mikkelpm.github.io/stderr_calibration_python/example.html))
 
-- [example_ngm.ipynb](example_ngm.ipynb): Even simpler example in Jupyter Notebook in the context of the Neoclassical Growth Model (also available in [HTML format](https://mikkelpm.github.io/stderr_calibration_python/example_ngm.html))
+- [example_ngm.ipynb](example_ngm.ipynb): Even simpler example in Jupyter Notebook of calibrating the Neoclassical Growth Model (also available in [HTML format](https://mikkelpm.github.io/stderr_calibration_python/example_ngm.html))
 
 - [stderr_calibration](stderr_calibration): Python package for minimum distance estimation, standard errors, and testing

docs/example_ngm.html (+20 −12)
@@ -13972,33 +13972,39 @@
 <div class="jp-Cell-inputWrapper"><div class="jp-InputPrompt jp-InputArea-prompt">
 </div><div class="jp-RenderedHTMLCommon jp-RenderedMarkdown jp-MarkdownOutput " data-mime-type="text/markdown">
 <h1 id="Standard-errors-for-calibrated-parameters:-Neoclassical-Growth-Model-example">Standard errors for calibrated parameters: Neoclassical Growth Model example<a class="anchor-link" href="#Standard-errors-for-calibrated-parameters:-Neoclassical-Growth-Model-example">&#182;</a></h1><p><em>We are grateful to Ben Moll for suggesting this example. Any errors are our own.</em></p>
-<p>In this notebook we will work through the basic logic of <a href="https://scholar.princeton.edu/mikkelpm/calibration">Cocci &amp; Plagborg-Møller (2021)</a> in the context of calibrating a simple version of the Neoclassical Growth Model (NGM). Though the model is highly stylized, it helps provide intuition for our procedures. Please see our paper for other, more realistic empirical applications.</p>
-<h2 id="Model">Model<a class="anchor-link" href="#Model">&#182;</a></h2><p>We consider the simplest version of the NGM without population growth or technological growth. As explained in <a href="https://perhuaman.files.wordpress.com/2014/06/macrotheory-dirk-krueger.pdf">Dirk Krueger's lecture notes (section 5.4)</a>, this model implies three steady state equations:</p>
+<p>In this notebook we will work through the basic logic of <a href="https://scholar.princeton.edu/mikkelpm/calibration">Cocci &amp; Plagborg-Møller (2021)</a> in the context of calibrating a simple version of the Neoclassical Growth Model (NGM). Though the model is highly stylized, it helps provide intuition for our procedures. Please see our paper for other, arguably more realistic, empirical applications.</p>
+<h2 id="Model">Model<a class="anchor-link" href="#Model">&#182;</a></h2><p>We consider the simplest version of the NGM without population growth or technological growth. As explained in <a href="https://perhuaman.files.wordpress.com/2014/06/macrotheory-dirk-krueger.pdf">Dirk Krueger's lecture notes (section 5.4)</a>, this model implies three key steady state equations:</p>
 <ol>
 <li><strong>Euler equation:</strong> $\frac{1}{1+r} = \beta$, where $\beta$ is the household discount factor and $r$ is the real interest rate.</li>
 <li><strong>Capital accumulation:</strong> $\frac{I}{K} = \delta$, where $\delta$ is the depreciation rate and $I/K$ is the ratio of investment to capital stock.</li>
 <li><strong>Rental rate of capital:</strong> $\frac{K}{Y} = \frac{\alpha}{\frac{1}{\beta}-1+\delta}$, where $\alpha$ is the capital elasticity in the production function and $\frac{K}{Y}$ is the capital-output ratio.</li>
 </ol>
-<p>We want to use these equations to calibrate (i.e., estimate) $\beta$, $\delta$, and $\alpha$.</p>
+<p>We want to use these equations to calibrate (i.e., estimate) the parameters $\beta$, $\delta$, and $\alpha$.</p>
 <h2 id="Estimation">Estimation<a class="anchor-link" href="#Estimation">&#182;</a></h2><p>We can measure the steady-state values of the variables on the left-hand side of the above equations by computing sample averages of the relevant time series over a long time span. Denote these sample averages by $\widehat{\frac{1}{1+r}}$, $\widehat{\frac{I}{K}}$, and $\widehat{\frac{K}{Y}}$, respectively. We can then obtain natural method-of-moment estimates of the three parameters as follows:
 $$\hat{\beta} = \widehat{\frac{1}{1+r}},\quad \hat{\delta}=\widehat{\frac{I}{K}},\quad \hat{\alpha}=\widehat{\frac{K}{Y}}\left(\frac{1}{\hat{\beta}}-1+\hat{\delta} \right).$$</p>
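To make the estimators concrete, here is a minimal Python sketch that plugs the sample averages from the numerical example further below into the closed-form formulas above; the variable names are illustrative.

```python
# Method-of-moments estimates from the three steady-state equations,
# using the sample averages from the numerical example further below
inv_gross_r, I_K, K_Y = 0.98, 0.08, 3.0            # 1/(1+r), I/K, K/Y

beta_hat = inv_gross_r                             # Euler equation
delta_hat = I_K                                    # capital accumulation
alpha_hat = K_Y * (1 / beta_hat - 1 + delta_hat)   # rental rate of capital

print(beta_hat, delta_hat, round(alpha_hat, 3))    # 0.98 0.08 0.301
```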
 <h2 id="Standard-errors">Standard errors<a class="anchor-link" href="#Standard-errors">&#182;</a></h2><p>The sample averages above are subject to statistical noise due to the finite time sample. This statistical noise obviously carries over to the estimated parameters. To gauge the extent of the noise, we seek to compute standard errors for the estimated parameters.</p>
-<p>The key ingredients into calculating standard errors for the parameters are the standard errors for the three sample averages. Denote these by $\hat{\sigma}\left(\widehat{\frac{1}{1+r}}\right)$, $\hat{\sigma}\left(\widehat{\frac{I}{K}}\right)$, and $\hat{\sigma}\left(\widehat{\frac{K}{Y}}\right)$. To compute these standard errors, one would have to apply a formula that accounts for serial correlation in the data, such as the <a href="https://www.stata.com/manuals16/tsnewey.pdf">Newey-West estimator</a>. Let's assume that we have already done this.</p>
+<p>The key ingredients in calculating standard errors for the parameters are the standard errors for the three sample averages. Denote these by $\hat{\sigma}\left(\widehat{\frac{1}{1+r}}\right)$, $\hat{\sigma}\left(\widehat{\frac{I}{K}}\right)$, and $\hat{\sigma}\left(\widehat{\frac{K}{Y}}\right)$. To compute these standard errors, one would have to apply a formula that accounts for serial correlation in the data, such as the <a href="https://www.stata.com/manuals16/tsnewey.pdf">Newey-West long-run variance estimator</a>. Let's assume that we have already done this.</p>
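For concreteness, here is a minimal sketch of the kind of serial-correlation-robust calculation assumed in this step: a Newey-West (Bartlett-kernel) standard error for a sample mean. The helper name and lag choice are hypothetical, not taken from the notebook or the package.

```python
import numpy as np

def newey_west_se(y, lags):
    """Newey-West standard error for the sample mean of a scalar series y
    (hypothetical helper; the notebook assumes this step is already done)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    u = y - y.mean()
    lrv = u @ u / T                          # lag-0 autocovariance
    for k in range(1, lags + 1):
        w = 1 - k / (lags + 1)               # Bartlett kernel weight
        lrv += 2 * w * (u[k:] @ u[:-k]) / T  # weighted lag-k autocovariance
    return np.sqrt(lrv / T)                  # standard error of the sample mean
```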
 <p>It's immediate what the standard errors for $\hat{\beta}$ and $\hat{\delta}$ are: They simply equal $\hat{\sigma}\left(\widehat{\frac{1}{1+r}}\right)$ and $\hat{\sigma}\left(\widehat{\frac{I}{K}}\right)$, respectively. However, the standard error for $\hat{\alpha}$ is not so obvious, as this estimator depends implicitly on all three sample averages:
 $$\hat{\alpha}=\widehat{\frac{K}{Y}}\left(\frac{1}{\widehat{\frac{1}{1+r}}}-1+\widehat{\frac{I}{K}} \right) \approx \alpha + x_1\left(\widehat{\frac{1}{1+r}}-\frac{1}{1+r}\right) + x_2\left(\widehat{\frac{I}{K}}-\frac{I}{K}\right) + x_3\left(\widehat{\frac{K}{Y}}-\frac{K}{Y}\right),$$
 where the last approximation is a <a href="https://en.wikipedia.org/wiki/Delta_method">delta method</a> linearization with
-$$x_1=-\frac{\frac{K}{Y}}{\left(\frac{1}{1+r}\right)^2},\quad x_2=\frac{K}{Y},\quad x_3=\left(\frac{1}{\frac{1}{1+r}}-1+\frac{I}{K} \right).$$
-Thus, to compute the standard error for $\hat{\alpha}$, we need to know not just the standard errors for the individual sample averages, but also their correlations.</p>
-<h2 id="Limited-information-inference">Limited-information inference<a class="anchor-link" href="#Limited-information-inference">&#182;</a></h2><p>If we observed annual data on the real interest rate, capital, investment, and output, it would not be too difficult to estimate the correlations of the sample moments. This could again be done using a Newey-West long-run variance estimator. Yet, in practice we may face several potential complicating factors:</p>
+$$x_1=-\frac{\frac{K}{Y}}{\left(\frac{1}{1+r}\right)^2}=-(1+r)^2\frac{K}{Y},\quad x_2=\frac{K}{Y},\quad x_3=\frac{1}{\frac{1}{1+r}}-1+\frac{I}{K}=r+\frac{I}{K}.$$
+Since $\hat{\alpha}$ is approximately a linear combination of several sample averages, computing its standard error requires not just the standard errors for the individual sample averages, but also their correlations.</p>
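The closed-form derivatives are easy to verify numerically. Here is a small sketch, using the sample-average values from the numerical example below, that checks them against central finite differences (the note further down confirms the package itself uses finite differences under the hood):

```python
import numpy as np

mu = np.array([0.98, 0.08, 3.0])            # [1/(1+r), I/K, K/Y] sample averages

def alpha_of_mu(m):
    return m[2] * (1 / m[0] - 1 + m[1])     # alpha = (K/Y) * (1/beta - 1 + delta)

x = np.array([-mu[2] / mu[0] ** 2,          # x1 = -(1+r)^2 * K/Y
              mu[2],                        # x2 = K/Y
              1 / mu[0] - 1 + mu[1]])       # x3 = r + I/K

eps = 1e-6
for i in range(3):
    e = np.zeros(3); e[i] = eps
    fd = (alpha_of_mu(mu + e) - alpha_of_mu(mu - e)) / (2 * eps)
    assert abs(fd - x[i]) < 1e-4            # closed form matches finite difference
```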
+<h2 id="Limited-information-inference">Limited-information inference<a class="anchor-link" href="#Limited-information-inference">&#182;</a></h2><p>If we observed annual data on the real interest rate, capital, investment, and output, it would not be too difficult to estimate the correlations of the sample averages. This could again be done using the Newey-West estimator of the long-run variance-covariance matrix. Yet, in practice we may face several potential complicating factors:</p>
 <ol>
-<li><strong>Non-overlapping samples:</strong> Perhaps we do not observe all time series over the same time span. This makes it difficult to apply the usual Newey-West procedure.</li>
+<li><strong>Non-overlapping samples:</strong> Perhaps we do not observe all time series over the same time span. This makes it difficult to apply the usual Newey-West formulas.</li>
 <li><strong>Different data frequencies:</strong> Perhaps the real interest rate series is obtained from daily yields on inflation-protected bonds, while the other time series are annual. This again complicates the econometric analysis.</li>
 <li><strong>Finite-sample accuracy:</strong> The Newey-West procedure is known to suffer from small-sample biases when the data exhibits strong time series dependence. Trying to exploit estimates of the correlations of the sample averages could therefore lead to distorted inference in realistic sample sizes.</li>
-<li><strong>Non-public data:</strong> Perhaps some of the sample averages were not computed by ourselves, but only obtained from other papers (say, a paper that imputes real interest rates by feeding bond yields through a structural model). Those other papers may report the standard errors for their respective individual moments, but not the correlations with other moments that we rely on.</li>
+<li><strong>Non-public data:</strong> Perhaps some of the sample averages were not computed by ourselves, but only obtained from other papers (say, a paper that imputes real interest rates by feeding bond yields through a structural model). Those other papers may report the standard errors for their respective individual moments, but not the correlations with other moments that our calibration relies on.</li>
 </ol>
 <p>A pragmatic limited-information approach would therefore be to give up on computing the precise standard error of $\hat{\alpha}$ and instead compute an <em>upper bound</em> on it. We seek an upper bound that depends only on the standard errors of the individual moments, not their correlations.</p>
 <p>The key to obtaining such a bound is the following inequality for random variables $X$ and $Y$:
-$$\text{Std}(X+Y) = \sqrt{\text{Var}(X+Y)} = \sqrt{\text{Var}(X) + \text{Var}(Y)+2\text{Corr}(X,Y)\text{Std}(X)\text{Std}(Y)} \leq \sqrt{\text{Var}(X) + \text{Var}(Y)+2\text{Std}(X)\text{Std}(Y)} = \sqrt{(\text{Std}(X)+\text{Std}(Y))^2} = \text{Std}(X)+\text{Std}(Y).$$
+$$\begin{align*}
+\text{Std}(X+Y) &amp;= \sqrt{\text{Var}(X+Y)} \\
+&amp;= \sqrt{\text{Var}(X) + \text{Var}(Y)+2\text{Corr}(X,Y)\text{Std}(X)\text{Std}(Y)} \\
+&amp;\leq \sqrt{\text{Var}(X) + \text{Var}(Y)+2\text{Std}(X)\text{Std}(Y)} \\
+&amp;= \sqrt{(\text{Std}(X)+\text{Std}(Y))^2} \\
+&amp;= \text{Std}(X)+\text{Std}(Y).
+\end{align*}$$
 Applying this logic to the earlier approximation for $\hat{\alpha}$, we get the bound (up to a small approximation error)
 $$\text{Std}(\hat{\alpha}) \leq |x_1|\text{Std}\left(\widehat{\frac{1}{1+r}}\right) + |x_2|\text{Std}\left(\widehat{\frac{I}{K}}\right) + |x_3|\text{Std}\left(\widehat{\frac{K}{Y}}\right).$$
 We can therefore compute an upper bound for the standard error of $\hat{\alpha}$ as follows:
@@ -14007,7 +14013,7 @@ <h2 id="Limited-information-inference">Limited-information inference<a class="an
 $$\hat{x}_1=-\frac{\widehat{\frac{K}{Y}}}{\left(\widehat{\frac{1}{1+r}}\right)^2},\quad \hat{x}_2=\widehat{\frac{K}{Y}},\quad \hat{x}_3=\left(\frac{1}{\widehat{\frac{1}{1+r}}}-1+\widehat{\frac{I}{K}} \right).$$
 Notice that this upper bound only depends on things that we know: the sample averages themselves and their individual standard errors (but not the correlations across moments).</p>
 <p>It is not possible to improve the bound without further knowledge of the correlation structure: The bound turns out to equal the actual standard error when the three sample averages are perfectly correlated with each other (either positively or negatively, depending on the signs of $x_1,x_2,x_3$). This is proved in Lemma 1 in <a href="https://scholar.princeton.edu/mikkelpm/calibration">our paper</a>. For this reason, we refer to the standard error bound as the <em>worst-case standard error</em>.</p>
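Putting numbers to this bound, here is a hand-rolled sketch using the sample averages and standard errors from the numerical example below; the package computes the same bound automatically:

```python
# Worst-case standard error bound: |x1|*se1 + |x2|*se2 + |x3|*se3
b, ik, ky = 0.98, 0.08, 3.0                 # sample averages [1/(1+r), I/K, K/Y]
se = [0.02, 0.005, 0.04]                    # their standard errors
x = [-ky / b ** 2, ky, 1 / b - 1 + ik]      # delta-method derivatives
se_alpha_worst = sum(abs(xi) * si for xi, si in zip(x, se))
print(round(se_alpha_worst, 3))             # approx. 0.081
```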
-<h2 id="Numerical-example">Numerical example<a class="anchor-link" href="#Numerical-example">&#182;</a></h2><p>Our software package allows for easy calculation of worst-case standard errors. As an illustration, suppose the sample averages (with standard errors in parenthesis) equal
+<h2 id="Numerical-example">Numerical example<a class="anchor-link" href="#Numerical-example">&#182;</a></h2><p>Our software package makes it easy to calculate worst-case standard errors. As an illustration, suppose the sample averages (with standard errors in parentheses) equal
 $$\widehat{\frac{1}{1+r}}=0.98\;(0.02), \quad \widehat{\frac{I}{K}}=0.08\;(0.005), \quad \widehat{\frac{K}{Y}} = 3\;(0.04).$$
 We define the model equations and data as follows. Let $\theta=(\beta,\delta,\alpha)$ and $\mu=(\frac{1}{1+r},\frac{I}{K},\frac{K}{Y})$ denote the vectors of parameters and moments, respectively.</p>
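The definition cell itself is elided from this hunk. The following plain-NumPy stand-in is consistent with the text, but it is not necessarily the notebook's actual code and does not use the package's API:

```python
import numpy as np

mu_hat = np.array([0.98, 0.08, 3.0])        # sample averages of the moments mu
se_mu = np.array([0.02, 0.005, 0.04])       # their standard errors

def h(theta):
    """Model-implied moments mu as a function of theta = (beta, delta, alpha)."""
    beta, delta, alpha = theta
    return np.array([beta,                              # 1/(1+r)
                     delta,                             # I/K
                     alpha / (1 / beta - 1 + delta)])   # K/Y
```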

@@ -14089,6 +14095,7 @@ <h2 id="Numerical-example">Numerical example<a class="anchor-link" href="#Numeri
 </div>
 <div class="jp-Cell-inputWrapper"><div class="jp-InputPrompt jp-InputArea-prompt">
 </div><div class="jp-RenderedHTMLCommon jp-RenderedMarkdown jp-MarkdownOutput " data-mime-type="text/markdown">
+<p>(Note: The derivatives required to compute $\hat{x}_1,\hat{x}_2,\hat{x}_3$ are computed under the hood by the software using finite differences. We could have also computed the parameter estimates $\hat{\beta},\hat{\delta},\hat{\alpha}$ numerically if we didn't have a closed-form formula. See our <a href="https://mikkelpm.github.io/stderr_calibration_python/example.html">other example</a> for details.)</p>
 <h2 id="Over-identification-test">Over-identification test<a class="anchor-link" href="#Over-identification-test">&#182;</a></h2><p>The textbook NGM also implies that the steady-state labor share of income should equal $1-\alpha$. Suppose we measure the sample average of the labor share to be 0.65 with a standard error of 0.03. We wish to test the over-identifying restriction that the earlier estimate of $\hat{\alpha}$ is consistent with this moment. We can do this as follows.</p>
 
 </div>
@@ -14169,9 +14176,10 @@ <h2 id="Other-features-in-the-paper">Other features in the paper<a class="anchor
 <li>The matched moments need not be simple sample averages, but could be regression coefficients, quantiles, etc. The moments need not be related to steady-state quantities, but could involve essentially any feature of the available data.</li>
 <li>The calibration (method-of-moments) estimator need not be available in closed form (usually one would obtain it by numerical optimization).</li>
 <li>If some, but not all, of the correlations between the empirical moments are known, this can be exploited to sharpen inference.</li>
+<li>If we have more moments to match than parameters to estimate, we can compute the optimal weighting of the moments that minimizes the worst-case standard errors of the parameters.</li>
 <li>If we are interested in a function of the model parameters (such as a counterfactual quantity) rather than the parameters <em>per se</em>, we can compute worst-case standard errors for that function, too.</li>
 <li>If we are interested in testing several parameter restrictions at once, a joint test is available that has valid size asymptotically.</li>
-<li>All computational routines can handle models with a relatively large number of parameters and moments.</li>
+<li>All computational routines can handle models with relatively large numbers of parameters and moments.</li>
 </ul>
 
 </div>
