README.md:

Other versions: [Matlab](https://github.com/mikkelpm/stderr_calibration_matlab)

[example.ipynb](example.ipynb): Simple interactive example in Jupyter Notebook illustrating the main functionality of the package (also available in [HTML format](https://mikkelpm.github.io/stderr_calibration_python/example.html))

[example_ngm.ipynb](example_ngm.ipynb): Even simpler example in Jupyter Notebook of calibrating the Neoclassical Growth Model (also available in [HTML format](https://mikkelpm.github.io/stderr_calibration_python/example_ngm.html))

[stderr_calibration](stderr_calibration): Python package for minimum distance estimation, standard errors, and testing
<h1 id="Standard-errors-for-calibrated-parameters:-Neoclassical-Growth-Model-example">Standard errors for calibrated parameters: Neoclassical Growth Model example<a class="anchor-link" href="#Standard-errors-for-calibrated-parameters:-Neoclassical-Growth-Model-example">¶</a></h1><p><em>We are grateful to Ben Moll for suggesting this example. Any errors are our own.</em></p>
<p>In this notebook we will work through the basic logic of <a href="https://scholar.princeton.edu/mikkelpm/calibration">Cocci & Plagborg-Møller (2021)</a> in the context of calibrating a simple version of the Neoclassical Growth Model (NGM). Though the model is highly stylized, it helps provide intuition for our procedures. Please see our paper for other, arguably more realistic, empirical applications.</p>
<h2 id="Model">Model<a class="anchor-link" href="#Model">¶</a></h2><p>We consider the simplest version of the NGM without population growth or technological growth. As explained in <a href="https://perhuaman.files.wordpress.com/2014/06/macrotheory-dirk-krueger.pdf">Dirk Krueger's lecture notes (section 5.4)</a>, this model implies three key steady state equations:</p>
<ol>
<li><strong>Euler equation:</strong> $\frac{1}{1+r} = \beta$, where $\beta$ is the household discount factor and $r$ is the real interest rate.</li>
<li><strong>Capital accumulation:</strong> $\frac{I}{K} = \delta$, where $\delta$ is the depreciation rate and $I/K$ is the ratio of investment to capital stock.</li>
<li><strong>Rental rate of capital:</strong> $\frac{K}{Y} = \frac{\alpha}{\frac{1}{\beta}-1+\delta}$, where $\alpha$ is the capital elasticity in the production function and $\frac{K}{Y}$ is the capital-output ratio.</li>
</ol>
<p>We want to use these equations to calibrate (i.e., estimate) the parameters $\beta$, $\delta$, and $\alpha$.</p>
<h2 id="Estimation">Estimation<a class="anchor-link" href="#Estimation">¶</a></h2><p>We can measure the steady-state values of the variables on the left-hand side of the above equations by computing sample averages of the relevant time series over a long time span. Denote these sample averages by $\widehat{\frac{1}{1+r}}$, $\widehat{\frac{I}{K}}$, and $\widehat{\frac{K}{Y}}$, respectively. We can then obtain natural method-of-moments estimates of the three parameters as follows:
$$\hat{\beta} = \widehat{\frac{1}{1+r}}, \qquad \hat{\delta} = \widehat{\frac{I}{K}}, \qquad \hat{\alpha} = \left(\frac{1}{\hat{\beta}}-1+\hat{\delta}\right) \times \widehat{\frac{K}{Y}}.$$</p>
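<p>To make the mapping from moments to parameters concrete, here is a minimal Python sketch of these three estimators. The sample averages below are made-up values for illustration, not the notebook's actual data:</p>

```python
# Made-up sample averages (illustration only, not the notebook's data)
avg_inv_1pr = 0.96   # sample average of 1/(1+r)
avg_IK = 0.076       # sample average of I/K
avg_KY = 2.5         # sample average of K/Y

beta_hat = avg_inv_1pr                               # Euler equation
delta_hat = avg_IK                                   # capital accumulation
alpha_hat = (1 / beta_hat - 1 + delta_hat) * avg_KY  # rental rate of capital

print(beta_hat, delta_hat, alpha_hat)
```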
<h2 id="Standard-errors">Standard errors<a class="anchor-link" href="#Standard-errors">¶</a></h2><p>The sample averages above are subject to statistical noise due to the finite time sample. This statistical noise obviously carries over to the estimated parameters. To gauge the extent of the noise, we seek to compute standard errors for the estimated parameters.</p>
<p>The key ingredients into calculating standard errors for the parameters are the standard errors for the three sample averages. Denote these by $\hat{\sigma}\left(\widehat{\frac{1}{1+r}}\right)$, $\hat{\sigma}\left(\widehat{\frac{I}{K}}\right)$, and $\hat{\sigma}\left(\widehat{\frac{K}{Y}}\right)$. To compute these standard errors, one would have to apply a formula that accounts for serial correlation in the data, such as the <a href="https://www.stata.com/manuals16/tsnewey.pdf">Newey-West long-run variance estimator</a>. Let's assume that we have already done this.</p>
<p>It's immediate what the standard errors for $\hat{\beta}$ and $\hat{\delta}$ are: They simply equal $\hat{\sigma}\left(\widehat{\frac{1}{1+r}}\right)$ and $\hat{\sigma}\left(\widehat{\frac{I}{K}}\right)$, respectively. However, the standard error for $\hat{\alpha}$ is not so obvious, as this estimator depends implicitly on all three sample averages:
$$\hat{\alpha} = \left(\frac{1}{\widehat{\frac{1}{1+r}}}-1+\widehat{\frac{I}{K}}\right) \times \widehat{\frac{K}{Y}}.$$
Since $\hat{\alpha}$ is approximately a linear combination of several sample averages, computing its standard error requires not just the standard errors for the individual sample averages, but also their correlations.</p>
<h2 id="Limited-information-inference">Limited-information inference<a class="anchor-link" href="#Limited-information-inference">¶</a></h2><p>If we observed annual data on the real interest rate, capital, investment, and output, it would not be too difficult to estimate the correlations of the sample averages. This could again be done using the Newey-West estimator of the long-run variance-covariance matrix. Yet, in practice we may face several potential complicating factors:</p>
<ol>
<li><strong>Non-overlapping samples:</strong> Perhaps we do not observe all time series over the same time span. This makes it difficult to apply the usual Newey-West formulas.</li>
<li><strong>Different data frequencies:</strong> Perhaps the real interest rate series is obtained from daily yields on inflation-protected bonds, while the other time series are annual. This again complicates the econometric analysis.</li>
<li><strong>Finite-sample accuracy:</strong> The Newey-West procedure is known to suffer from small-sample biases when the data exhibits strong time series dependence. Trying to exploit estimates of the correlations of the sample averages could therefore lead to distorted inference in realistic sample sizes.</li>
<li><strong>Non-public data:</strong> Perhaps some of the sample averages were not computed by ourselves, but only obtained from other papers (say, a paper that imputes real interest rates by feeding bond yields through a structural model). Those other papers may report the standard errors for their respective individual moments, but not the correlations with other moments that our calibration relies on.</li>
</ol>
<p>A pragmatic limited-information approach would therefore be to give up on computing the precise standard error of $\hat{\alpha}$ and instead compute an <em>upper bound</em> on it. We seek an upper bound that depends only on the standard errors of the individual moments, not their correlations.</p>
<p>The key to obtaining such a bound is the following inequality for random variables $X$ and $Y$:
$$\mathrm{std}(X+Y) \leq \mathrm{std}(X) + \mathrm{std}(Y).$$
Applying this inequality to the linear approximation of $\hat{\alpha}$ yields the upper bound
$$\hat{\sigma}(\hat{\alpha}) \leq |\hat{x}_1|\,\hat{\sigma}\left(\widehat{\frac{1}{1+r}}\right) + |\hat{x}_2|\,\hat{\sigma}\left(\widehat{\frac{I}{K}}\right) + |\hat{x}_3|\,\hat{\sigma}\left(\widehat{\frac{K}{Y}}\right),$$
where $\hat{x}_1,\hat{x}_2,\hat{x}_3$ are the estimated derivatives of $\hat{\alpha}$ with respect to the three sample averages. Notice that this upper bound only depends on things that we know: the sample averages themselves and their individual standard errors (but not the correlations across moments).</p>
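<p>As a rough NumPy sketch of this bound (with made-up sample averages and standard errors, not the notebook's data), where the derivatives $\hat{x}_j$ are obtained by forward finite differences:</p>

```python
import numpy as np

def alpha_of_mu(mu):
    """alpha_hat as a function of the moment vector mu = (1/(1+r), I/K, K/Y)."""
    inv_1pr, ik, ky = mu
    return (1 / inv_1pr - 1 + ik) * ky

mu_hat = np.array([0.96, 0.076, 2.5])  # made-up sample averages
se_hat = np.array([0.01, 0.002, 0.1])  # made-up standard errors

# Forward finite-difference derivatives x_j = d(alpha_hat)/d(mu_j)
eps = 1e-6
x_hat = np.array([
    (alpha_of_mu(mu_hat + eps * np.eye(3)[j]) - alpha_of_mu(mu_hat)) / eps
    for j in range(3)
])

# Worst-case standard error: sum_j |x_j| * se_j
worst_case_se = np.abs(x_hat) @ se_hat
print(worst_case_se)
```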
<p>It is not possible to improve the bound without further knowledge of the correlation structure: The bound turns out to equal the actual standard error when the three sample averages are perfectly correlated with each other (either positively or negatively, depending on the signs of $x_1,x_2,x_3$). This is proved in Lemma 1 in <a href="https://scholar.princeton.edu/mikkelpm/calibration">our paper</a>. For this reason, we refer to the standard error bound as the <em>worst-case standard error</em>.</p>
<h2 id="Numerical-example">Numerical example<a class="anchor-link" href="#Numerical-example">¶</a></h2><p>Our software package makes it easy to calculate worst-case standard errors. As an illustration, suppose we have measured the three sample averages along with their individual standard errors. We define the model equations and data as follows. Let $\theta=(\beta,\delta,\alpha)$ and $\mu=(\frac{1}{1+r},\frac{I}{K},\frac{K}{Y})$ denote the vectors of parameters and moments, respectively.</p>
<p>(Note: The derivatives required to compute $\hat{x}_1,\hat{x}_2,\hat{x}_3$ are computed under the hood by the software using finite differences. We could have also computed the parameter estimates $\hat{\beta},\hat{\delta},\hat{\alpha}$ numerically if we didn't have a closed-form formula. See our <a href="https://mikkelpm.github.io/stderr_calibration_python/example.html">other example</a> for details.)</p>
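<p>For instance, if the closed-form solution were unavailable, the parameter estimates could be recovered by numerically solving the moment equations $h(\theta)=\hat{\mu}$. A sketch using SciPy's root finder, with made-up moment values (not the notebook's data):</p>

```python
import numpy as np
from scipy.optimize import fsolve

mu_hat = np.array([0.96, 0.076, 2.5])  # made-up sample averages

def h(theta):
    """Model-implied moments for theta = (beta, delta, alpha)."""
    beta, delta, alpha = theta
    return np.array([beta, delta, alpha / (1 / beta - 1 + delta)])

# Solve h(theta) = mu_hat numerically from a rough initial guess
theta_hat = fsolve(lambda th: h(th) - mu_hat, x0=np.array([0.9, 0.1, 0.3]))
print(theta_hat)
```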
<h2 id="Over-identification-test">Over-identification test<a class="anchor-link" href="#Over-identification-test">¶</a></h2><p>The textbook NGM also implies that the steady-state labor share of income should equal $1-\alpha$. Suppose we measure the sample average of the labor share to be 0.65 with a standard error of 0.03. We wish to test the over-identifying restriction that the earlier estimate of $\hat{\alpha}$ is consistent with this moment. We can do this as follows.</p>
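<p>The package implements the formal worst-case test; as a hand-rolled sketch of the underlying logic (made-up numbers throughout): the model-implied labor share is $1-\hat{\alpha}$, and under worst-case correlation the standard error of the difference between two estimates is bounded by the sum of their individual standard errors:</p>

```python
# Made-up inputs (illustration only)
alpha_hat = 0.2942      # hypothetical estimate of alpha
alpha_se = 0.0439       # hypothetical worst-case SE of alpha_hat
labor_share_avg = 0.65  # measured average labor share
labor_share_se = 0.03   # its standard error

# Model predicts steady-state labor share 1 - alpha
diff = labor_share_avg - (1 - alpha_hat)
# Conservative (worst-case) SE of the difference: SEs add under perfect correlation
diff_se = labor_share_se + alpha_se
t_stat = diff / diff_se
print(t_stat)  # |t| well below 1.96: the restriction is not rejected at the 5% level
```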
</div>
<h2 id="Other-features-in-the-paper">Other features in the paper<a class="anchor-link" href="#Other-features-in-the-paper">¶</a></h2><ul>
<li>The matched moments need not be simple sample averages, but could be regression coefficients, quantiles, etc. The moments need not be related to steady-state quantities, but could involve essentially any feature of the available data.</li>
<li>The calibration (method-of-moments) estimator need not be available in closed form (usually one would obtain it by numerical optimization).</li>
<li>If some, but not all, of the correlations between the empirical moments are known, this can be exploited to sharpen inference.</li>
<li>If we have more moments to match than parameters to estimate, we can compute the optimal weighting of the moments that minimizes the worst-case standard errors of the parameters.</li>
<li>If we are interested in a function of the model parameters (such as a counterfactual quantity) rather than the parameters <em>per se</em>, we can compute worst-case standard errors for that function, too.</li>
<li>If we are interested in testing several parameter restrictions at once, a joint test is available that has valid size asymptotically.</li>
<li>All computational routines can handle models with relatively large numbers of parameters and moments.</li>
</ul>