High-Ranked Journal Table of Contents | Journal of Business & Economic Statistics, Vol. 38, No. 4

2021-02-20  Financial Forecasting

This is the 188th post from the "High-Frequency Data" (高頻數據) account

Editor: Lu Xinjie 魯心潔 (School of Economics and Management, Southwest Jiaotong University)

Reviewer: Xiao Xiao 肖霄 (School of Economics and Management, Southwest Jiaotong University)

For academic exchange only; copyright remains with the original authors and the publishing journal.

High-Ranked Journal Table of Contents ———

Journal of Business & Economic Statistics

Transparency in Structural Research

Discussion of "Transparency in Structural Research" by Isaiah Andrews, Matthew Gentzkow, and Jesse Shapiro

Thoughts on "Transparency in Structural Research"

Partial Identification of Economic Mobility: With an Application to the United States

Nonparametric Estimation of Search Costs for Differentiated Products: Evidence from Medigap

Which Factors are Risk Factors in Asset Pricing? A Model Scan Framework

A Smooth Nonparametric, Multivariate, Mixed-Data Location-Scale Test

Comparing Possibly Misspecified Forecasts

Asymptotically Uniform Tests After Consistent Model Selection in the Linear Regression Model

Treatment Effects With Heterogeneous Externalities

Multivariate Stochastic Volatility Model With Realized Volatilities and Pairwise Realized Correlations

A Stochastic Volatility Model With Realized Measures for Option Pricing

Bayesian Forecasting of Many Count-Valued Time Series

Matching Using Sufficient Dimension Reduction for Causal Inference

Bounds on Average and Quantile Treatment Effects on Duration Outcomes Under Censoring, Selection, and Noncompliance

Forecast Error Variance Decompositions with Local Projections

Minimum Contrast Empirical Likelihood Inference of Discontinuity in Density

Transparency in Structural Research

Isaiah Andrews:Harvard University;

Matthew Gentzkow:Stanford University;

Jesse M. Shapiro:Brown University.

We propose a formal definition of transparency in empirical research and apply it to structural estimation in economics. We discuss how some existing practices can be understood as attempts to improve transparency, and we suggest ways to improve current practice, emphasizing approaches that impose a minimal computational burden on the researcher. We illustrate with examples.

Discussion of "Transparency in Structural Research" by Isaiah Andrews, Matthew Gentzkow, and Jesse Shapiro

Stéphane Bonhomme:University of Chicago.

The author focuses on three questions: Can a definition of transparency account for the complexity of the researcher's report (when the audience is not as sophisticated as the researcher)? Can communication through models make research more transparent (for an audience of economists)? Is transparency sufficient to ensure that structural policy predictions are credible (for an audience of policy makers)?

Thoughts on "Transparency in Structural Research"

Christopher Taber:University of Wisconsin-Madison.

The article is phrased as transparency in "structural research." Taber thinks this sells the work short, since transparency is important and relevant in all types of empirical work. In some sense, the main difference between transparency in structural and design-based work is not the style of work but that many design-based procedures (such as instrumental variables, difference-in-differences, or regression discontinuity) are familiar. As a result, well-informed readers come in with a good understanding of how these methods work. Structural work has the disadvantage of typically being both more complicated and more novel, as new structural models usually differ from their predecessors in important ways. So while making progress on transparency is particularly important for structural work, it matters for all empirical work.

Partial Identification of Economic Mobility: With an Application to the United States

Daniel L. Millimet:Southern Methodist University;

Hao Li:Nanjing Audit University;

Punarjit Roychowdhury:Indian Institute of Management.

The economic mobility of individuals and households is of fundamental interest. While many measures of economic mobility exist, reliance on transition matrices remains pervasive due to simplicity and ease of interpretation. However, estimation of transition matrices is complicated by the well-acknowledged problem of measurement error in self-reported and even administrative data. Existing methods of addressing measurement error are complex, rely on numerous strong assumptions, and often require data from more than two periods. In this article, we investigate what can be learned about economic mobility as measured via transition matrices while formally accounting for measurement error in a reasonably transparent manner. To do so, we develop a nonparametric partial identification approach to bound transition probabilities under various assumptions on the measurement error and mobility processes. This approach is applied to panel data from the United States to explore short-run mobility before and after the Great Recession.

Nonparametric Estimation of Search Costs for Differentiated Products: Evidence from Medigap

Haizhen Lin:Indiana University;

Matthijs R. Wildenbeest:Indiana University.

This article develops a method to estimate search frictions as well as preference parameters in differentiated product markets. Search costs are nonparametrically identified, which means our method can be used to estimate search costs in differentiated product markets that lack a suitable search cost shifter. We apply our model to the U.S. Medigap insurance market. We find that search costs are substantial: the estimated median cost of searching for an insurer is $30. Using the estimated parameters we find that eliminating search costs could result in price decreases of as much as $71 (or 4.7%), along with increases in average consumer welfare of up to $374.

Which Factors are Risk Factors in Asset Pricing? A Model Scan Framework

Siddhartha Chib:Washington University in St. Louis;

Xiaming Zeng:Washington University in St. Louis.

A key question for understanding the cross-section of expected returns of equities is the following: which factors, from a given collection of factors, are risk factors; equivalently, which factors are in the stochastic discount factor (SDF)? Though the SDF is unobserved, assumptions about which factors (from the available set) are in the SDF restrict the joint distribution of factors in specific ways, as a consequence of the economic theory of asset pricing. A different starting collection of factors in the SDF leads to a different set of restrictions on the joint distribution of factors. The conditional distribution of equity returns has the same restricted form regardless of what is assumed about the factors in the SDF, as long as the factors are traded, and hence the distribution of asset returns is irrelevant for isolating the risk factors. The restricted factor models are distinct (nonnested) and do not arise by omitting or including a variable from a full model, thus precluding analysis by standard statistical variable-selection methods, such as those based on the lasso and its variants. Instead, we develop what we call a Bayesian model scan strategy in which each factor is allowed to enter or not enter the SDF, and the resulting restricted models (of which there are 114,674 in our empirical study) are simultaneously confronted with the data. We use a Student-t distribution for the factors, model-specific independent Student-t distributions for the location parameters, a training sample to fix prior locations, and a creative way to arrive at the joint distribution of several other model-specific parameters from a single prior distribution. This allows our method to be essentially a scalable, tuned black-box method that can be applied across our large model space with little to no user intervention. The model marginal likelihoods, and implied posterior model probabilities, are compared with the prior probability of 1/114,674 for each model to find the best-supported model, and thus the factors most likely to be in the SDF. We provide detailed simulation evidence about the high finite-sample accuracy of the method. Our empirical study with 13 leading factors reveals that the highest marginal likelihood model is a Student-t distributed factor model with 5 degrees of freedom and 8 risk factors.
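
To make the model-scan idea concrete, the sketch below enumerates every candidate set of SDF factors on simulated data and scores each restricted model, using a zero-intercept spanning restriction for the factors left out of the SDF and BIC as a crude stand-in for the authors' Student-t marginal likelihoods. The factor labels, the Gaussian data-generating process, and the independent-error simplification are illustrative assumptions, not part of the paper.

```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(4)
T = 1_000
names = ["MKT", "SMB", "HML", "RMW", "CMA"]   # hypothetical factor labels

# Simulated traded factors: the first two are the true "risk factors" (in the SDF); the
# remaining three are spanned by them with ZERO intercepts, which is the restriction
# asset-pricing theory imposes on traded factors left out of the SDF.
f_risk = rng.multivariate_normal([0.8, 0.6], [[1.0, 0.3], [0.3, 1.0]], size=T)
B = np.array([[0.8, 0.1], [0.2, 0.6], [-0.4, 0.5]])
f_other = f_risk @ B.T + 0.5 * rng.standard_normal((T, 3))
F = np.column_stack([f_risk, f_other])

def model_bic(F, S):
    """BIC of the restricted model in which exactly the factors in S are in the SDF:
    factors in S get an unrestricted Gaussian distribution; every factor outside S is
    regressed on the S factors with a zero intercept (the pricing restriction), with
    errors treated as independent across equations -- a simplification for this sketch."""
    n = F.shape[0]
    S_c = [j for j in range(F.shape[1]) if j not in S]
    Fs, Fo = F[:, list(S)], F[:, S_c]
    loglik = multivariate_normal(Fs.mean(0), np.cov(Fs, rowvar=False),
                                 allow_singular=True).logpdf(Fs).sum()
    k = len(S) + len(S) * (len(S) + 1) // 2
    for j in range(Fo.shape[1]):
        beta, *_ = np.linalg.lstsq(Fs, Fo[:, j], rcond=None)      # no intercept
        resid = Fo[:, j] - Fs @ beta
        loglik += norm(0.0, resid.std()).logpdf(resid).sum()
        k += len(S) + 1
    return -2.0 * loglik + k * np.log(n)

# Scan every nonempty subset of factors and keep the one with the smallest BIC.
subsets = [S for r in range(1, 6) for S in itertools.combinations(range(5), r)]
best = min(subsets, key=lambda S: model_bic(F, S))
print("selected risk factors:", [names[j] for j in best])   # should recover the first two
```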

A Smooth Nonparametric, Multivariate, Mixed-Data Location-Scale Test

Jeffrey S. Racine:McMaster University;

Ingrid Van Keilegom:KU Leuven.

A number of tests have been proposed for assessing the location-scale assumption that is often invoked by practitioners. Existing approaches include Kolmogorov–Smirnov and Cramér–von Mises statistics that each involve measures of divergence between unknown joint distribution functions and products of marginal distributions. In practice, the unknown distribution functions embedded in these statistics are typically approximated using nonsmooth empirical distribution functions (EDFs). In a recent article, Li, Li, and Racine establish the benefits of smoothing the EDF for inference, though their theoretical results are limited to the case where the covariates are observed and the distributions unobserved, while in the current setting some covariates and their distributions are unobserved (i.e., the test relies on population error terms from a location-scale model), which necessarily involves a separate theoretical approach. We demonstrate how replacing the nonsmooth distributions of unobservables with their kernel-smoothed sample counterparts can lead to substantial power improvements, and extend existing approaches to the smooth multivariate and mixed continuous and discrete data setting in the presence of unobservables. Theoretical underpinnings are provided, Monte Carlo simulations are undertaken to assess finite-sample performance, and illustrative applications are provided.
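
The core ingredient, replacing a nonsmooth empirical distribution function (EDF) with a kernel-smoothed one, can be illustrated with a small Monte Carlo comparison against a known CDF. This is only a sketch of the smoothing step under an ad hoc bandwidth, not the proposed test.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
grid = np.linspace(-3.0, 3.0, 121)
n, h, reps = 50, 0.25, 500    # ad hoc sample size and bandwidth for the illustration

# Raw EDF versus kernel-smoothed EDF (integrated Gaussian kernel) as estimators of a
# known N(0,1) CDF; smoothing typically lowers the average squared error here, which is
# the kind of gain the smooth test exploits for power.
ase_raw = ase_smooth = 0.0
for _ in range(reps):
    x = rng.standard_normal(n)
    edf = (x[None, :] <= grid[:, None]).mean(axis=1)                 # nonsmooth EDF
    sedf = norm.cdf((grid[:, None] - x[None, :]) / h).mean(axis=1)   # smoothed EDF
    ase_raw += ((edf - norm.cdf(grid)) ** 2).mean() / reps
    ase_smooth += ((sedf - norm.cdf(grid)) ** 2).mean() / reps

print(f"average squared error: raw EDF {ase_raw:.5f}, smoothed EDF {ase_smooth:.5f}")
```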

Comparing Possibly Misspecified Forecasts

Andrew J. Patton:Duke University.

Recent work has emphasized the importance of evaluating estimates of a statistical functional (such as a conditional mean, quantile, or distribution) using a loss function that is consistent for the functional of interest, of which there is an infinite number. If forecasters all use correctly specified models free from estimation error, and if the information sets of competing forecasters are nested, then the ranking induced by a single consistent loss function is sufficient for the ranking by any consistent loss function. This article shows, via analytical results and realistic simulation-based analyses, that the presence of misspecified models, parameter estimation error, or nonnested information sets, leads generally to sensitivity to the choice of (consistent) loss function. Thus, rather than merely specifying the target functional, which narrows the set of relevant loss functions only to the class of loss functions consistent for that functional, forecast consumers or survey designers should specify the single specific loss function that will be used to evaluate forecasts. An application to survey forecasts of U.S. inflation illustrates the result.
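
A minimal simulation (not from the article) illustrates the point for the conditional mean: two biased forecasts are scored with two different losses from the Bregman class, both of which are consistent for the mean, and the implied rankings disagree.

```python
import numpy as np

# Two constant mean forecasts of y ~ N(0, 1), evaluated with two loss functions that are
# both consistent for the conditional mean (members of the Bregman class). Because both
# forecasts are misspecified (biased), the ranking can depend on which consistent loss is
# used: here squared error typically favors A while the exponential Bregman loss favors B.
rng = np.random.default_rng(0)
y = rng.standard_normal(1_000_000)

f_a = 0.30 * np.ones_like(y)     # forecast A: overpredicts the mean by 0.30
f_b = -0.31 * np.ones_like(y)    # forecast B: underpredicts the mean by 0.31

def bregman_loss(y, f, phi, dphi):
    """Bregman loss L(y, f) = phi(y) - phi(f) - phi'(f) (y - f); consistent for the mean."""
    return phi(y) - phi(f) - dphi(f) * (y - f)

# Loss 1: squared error, phi(x) = x^2
se_a = bregman_loss(y, f_a, lambda x: x**2, lambda x: 2 * x).mean()
se_b = bregman_loss(y, f_b, lambda x: x**2, lambda x: 2 * x).mean()

# Loss 2: exponential Bregman loss, phi(x) = exp(x)
eb_a = bregman_loss(y, f_a, np.exp, np.exp).mean()
eb_b = bregman_loss(y, f_b, np.exp, np.exp).mean()

print(f"squared error:    A={se_a:.4f}  B={se_b:.4f}  -> {'A' if se_a < se_b else 'B'} preferred")
print(f"exp-Bregman loss: A={eb_a:.4f}  B={eb_b:.4f}  -> {'A' if eb_a < eb_b else 'B'} preferred")
```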

Asymptotically Uniform Tests After Consistent Model Selection in the Linear Regression Model

Adam McCloskey:University of Colorado.

This article specializes the critical value (CV) methods that are based upon (refinements of) Bonferroni bounds, introduced by McCloskey, to a problem of inference after consistent model selection in a general linear regression model. The post-selection problem is formulated to mimic common empirical practice and is applicable to both cross-sectional and time series contexts. We provide algorithms for constructing the CVs in this setting and establish uniform asymptotic size results for the resulting tests. The practical implementation of the CVs is illustrated in an empirical application to the effect of classroom size on test scores.

Treatment Effects With Heterogeneous Externalities

Tiziano Arduini:University of Bologna;

Eleonora Patacchini:Cornell University;

Edoardo Rainone:Bank of Italy.

This article proposes a new method for estimating heterogeneous externalities in policy analysis when social interactions take the linear-in-means form. We establish that the parameters of interest can be identified and consistently estimated using specific functions of the share of the eligible population. We also study the finite sample performance of the proposed estimators using Monte Carlo simulations. The method is illustrated using data on the PROGRESA program. We find that more than 50% of the effects of the program on schooling attendance are due to externalities, which are heterogeneous within and between poor and nonpoor households.
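
A toy linear-in-means simulation, not the paper's estimator, illustrates the quantities involved: with exogenous variation in the share of eligible households across villages, a regression of the outcome on own treatment and the local treated share separates the direct effect from the externality.

```python
import numpy as np

rng = np.random.default_rng(6)
G, n_g = 400, 25        # 400 villages with 25 households each (toy sizes)

# Toy linear-in-means DGP: the outcome depends on own treatment (direct effect 1.0) and on
# the share of treated households in one's village (externality 2.0). Eligible shares vary
# across villages, which is the kind of variation the paper's identification exploits.
share_eligible = rng.uniform(0.2, 0.8, G)
village = np.repeat(np.arange(G), n_g)
eligible = rng.random(G * n_g) < share_eligible[village]
D = eligible & (rng.random(G * n_g) < 0.7)                        # treatment among eligibles
treated_share = np.bincount(village, weights=D.astype(float)) / n_g
y = 1.0 * D + 2.0 * treated_share[village] + rng.standard_normal(G * n_g)

# OLS of the outcome on own treatment and the village treated share (exogenous here by
# construction; the paper handles the harder identification problem).
X = np.column_stack([np.ones(G * n_g), D, treated_share[village]])
(_, direct, externality), *_ = np.linalg.lstsq(X, y, rcond=None)
spill = externality * treated_share.mean()                         # crude average externality
print(f"direct effect {direct:.2f}, externality coefficient {externality:.2f}, "
      f"externality share of the average total effect: {spill / (direct + spill):.0%}")
```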

Multivariate Stochastic Volatility Model With Realized Volatilities and Pairwise Realized Correlations

Yuta Yamauchi:The University of Tokyo;

Yasuhiro Omori:The University of Tokyo.

Although stochastic volatility and GARCH (generalized autoregressive conditional heteroscedasticity) models have successfully described the volatility dynamics of univariate asset returns, extending them to multivariate models with dynamic correlations has been difficult due to several major problems. First, there are too many parameters to estimate if the available data are only daily returns, which results in unstable estimates. One solution to this problem is to incorporate additional observations based on intraday asset returns, such as realized covariances. Second, since multivariate asset returns are not synchronously traded, we have to use the largest time intervals such that all asset returns are observed in order to compute realized covariance matrices; in that case, however, we fail to make full use of the available intraday information when some assets are traded less frequently. Third, it is not straightforward to guarantee that the estimated (and the realized) covariance matrices are positive definite. Our contributions are the following: (1) we obtain stable parameter estimates for the dynamic correlation models using the realized measures, (2) we make full use of intraday information by using pairwise realized correlations, (3) the covariance matrices are guaranteed to be positive definite, (4) we avoid the arbitrariness of the ordering of asset returns, (5) we propose a flexible correlation structure model (e.g., setting some correlations to zero if necessary), and (6) we propose a parsimonious specification for the leverage effect. Our proposed models are applied to the daily returns of nine U.S. stocks with their realized volatilities and pairwise realized correlations and are shown to outperform existing models with respect to portfolio performance.

A Stochastic Volatility Model With Realized Measures for Option Pricing

Giacomo Bormetti:University of Bologna;

Roberto Casarin:Ca' Foscari University of Venice;

Fulvio Corsi:University of Pisa;

Giulia Livieri:Scuola Normale Superiore, Pisa.

Based on the fact that realized measures of volatility are affected by measurement errors, we introduce a new family of discrete-time stochastic volatility models having two measurement equations relating both observed returns and realized measures to the latent conditional variance. A semi-analytical option pricing framework is developed for this class of models. In addition, we provide analytical filtering and smoothing recursions for the basic specification of the model, and an effective MCMC algorithm for its richer variants. The empirical analysis shows the effectiveness of filtering and smoothing realized measures in inflating the latent volatility persistence, the crucial parameter in pricing Standard and Poor's 500 Index options.

Bayesian Forecasting of Many Count-Valued Time Series

Lindsay R. Berry:Berry Consultants LLC;

Mike West:Duke University.

We develop and exemplify application of new classes of dynamic models for time series of nonnegative counts. Our novel univariate models combine dynamic generalized linear models for binary and conditionally Poisson time series, with dynamic random effects for over-dispersion. These models estimate dynamic regression coefficients in both binary and nonzero count components. Sequential Bayesian analysis allows fast, parallel analysis of sets of decoupled time series. New multivariate models then enable information sharing in contexts when data at a more highly aggregated level provide more incisive inferences on shared patterns such as trends and seasonality. A novel multiscale approach—one new example of the concept of decouple/recouple in time series—enables information sharing across series. This incorporates cross-series linkages while insulating parallel estimation of univariate models, and hence enables scalability in the number of series. The major motivating context is supermarket sales forecasting. Detailed examples drawn from a case study in multistep forecasting of sales of a number of related items showcase forecasting of multiple series, with discussion of forecast accuracy metrics, comparisons with existing methods, and broader questions of probabilistic forecast assessment.
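
The decouple step can be illustrated with a static version of the binary-plus-shifted-Poisson decomposition, fit independently (hence trivially in parallel) across simulated item-level series; the dynamic, Bayesian, and multiscale recoupling parts of the paper are not attempted here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Static illustration of the decomposition only: each count series is split into a binary
# "any sales" component and a shifted-Poisson component for positive counts, and each
# series is fit on its own (so many series can be processed in parallel).
def fit_and_forecast(y):
    """P(y = 0) from a Bernoulli component; positive counts modeled as 1 + Poisson(mu)."""
    pi_zero = np.mean(y == 0)
    positives = y[y > 0]
    mu = positives.mean() - 1.0 if positives.size else 0.0
    return pi_zero, mu, (1.0 - pi_zero) * (1.0 + mu)     # implied forecast of E[y]

# Hypothetical weekly sales-like counts for three items with different activity levels.
series = [rng.poisson(lam, size=104) * rng.binomial(1, p, size=104)
          for lam, p in [(0.8, 0.4), (3.0, 0.9), (12.0, 0.97)]]
for i, y in enumerate(series):
    pi0, mu, ey = fit_and_forecast(y)
    print(f"item {i}: P(zero)={pi0:.2f}, positive counts ~ 1+Poisson({mu:.2f}), E[y]={ey:.2f}")
```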

Matching Using Sufficient Dimension Reduction for Causal Inference

Wei Luo:Zhejiang University; 

Yeying Zhu:University of Waterloo.

To estimate causal treatment effects, we propose a new matching approach based on the reduced covariates obtained from sufficient dimension reduction. Compared with the original covariates and the propensity score, which are commonly used for matching in the literature, the reduced covariates are nonparametrically estimable and are effective in imputing the missing potential outcomes, under a mild assumption on the low-dimensional structure of the data. Under the ignorability assumption, the consistency of the proposed approach requires a weaker common support condition. In addition, researchers are allowed to employ different reduced covariates to find matched subjects for different treatment groups. We develop relevant asymptotic results and conduct simulation studies as well as real data analysis to illustrate the usefulness of the proposed approach.
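
A minimal sketch, assuming a single-index structure and using sliced inverse regression (SIR) as the dimension-reduction step: the reduced covariate is learned from the control group and treated units are matched to controls on it. This illustrates the general recipe, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 10

# Simulated data with a low-dimensional structure: outcomes and treatment depend on the
# covariates only through a single index X @ beta; the constant treatment effect is 2.
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [1.0, -1.0, 0.5]
index = X @ beta
D = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.8 * index)))     # treatment assignment
Y0 = np.sin(index) + rng.standard_normal(n)                  # control potential outcome
Y = Y0 + 2.0 * D

def sir_direction(X, y, n_slices=10):
    """Leading sliced-inverse-regression (SIR) direction: a simple SDR estimate."""
    L = np.linalg.cholesky(np.cov(X, rowvar=False))          # Sigma = L @ L.T
    Z = (X - X.mean(axis=0)) @ np.linalg.inv(L.T)            # whitened covariates
    slices = np.array_split(np.argsort(y), n_slices)
    M = sum(len(s) * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices) / len(y)
    d = np.linalg.eigh(M)[1][:, -1]                          # leading eigenvector
    return np.linalg.solve(L.T, d)                           # back to the original scale

# Learn the reduced (one-dimensional) covariate from the control group, where Y = Y0.
b_hat = sir_direction(X[D == 0], Y[D == 0])
r = X @ b_hat

# 1-NN matching of each treated unit to the nearest control on the reduced covariate.
treated, controls = np.where(D == 1)[0], np.where(D == 0)[0]
nn = controls[np.abs(r[treated][:, None] - r[controls][None, :]).argmin(axis=1)]
att_hat = (Y[treated] - Y[nn]).mean()
print(f"1-NN matching on the reduced covariate, estimated ATT: {att_hat:.2f} (true ATT 2.0)")
```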

Bounds on Average and Quantile Treatment Effects on Duration Outcomes Under Censoring, Selection, and Noncompliance

German Blanco:Illinois State University;

Xuan Chen:Renmin University of China; 

Carlos A. Flores:California Polytechnic State University at San Luis Obispo; 

Alfonso Flores-Lagunes:Syracuse University.

We consider the problem of assessing the effects of a treatment on duration outcomes using data from a randomized evaluation with noncompliance. For such settings, we derive nonparametric sharp bounds for average and quantile treatment effects addressing three pervasive problems simultaneously: self-selection into the spell of interest, endogenous censoring of the duration outcome, and noncompliance with the assigned treatment. Ignoring any of these issues could yield biased estimates of the effects. Notably, the proposed bounds do not impose the independent censoring assumption (which is commonly used to address censoring but is likely to fail in important settings) or exclusion restrictions to address endogeneity of censoring and selection. Instead, they employ monotonicity and stochastic dominance assumptions. To illustrate the use of these bounds, we assess the effects of the Job Corps (JC) training program on its participants' last complete employment spell duration. Our estimated bounds suggest that JC participation may increase the average duration of the last complete employment spell before week 208 after randomization by at least 5.6 log points (5.8%) for individuals who comply with their treatment assignment and experience a complete employment spell whether or not they enrolled in JC. The estimated quantile treatment effects suggest the impacts may be heterogeneous, and strengthen our conclusions based on the estimated average effects.

Forecast Error Variance Decompositions with Local Projections

Yuriy Gorodnichenko:University of California, Berkeley;

Byoungchan Lee:The Hong Kong University of Science and Technology.

We propose and study properties of an estimator of the forecast error variance decomposition in the local projections framework. We find for empirically relevant sample sizes that, after being bias-corrected with bootstrap, our estimator performs well in simulations. We also illustrate the workings of our estimator empirically for monetary policy and productivity shocks.
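
A rough sketch of the object being estimated, assuming the shock is directly observed: the h-step forecast error from a local projection of y on its own lags, and the R² of that error on the shock's leads as the variance share. The paper's estimator and its bootstrap bias correction are more involved.

```python
import numpy as np

rng = np.random.default_rng(2)
T, rho = 2000, 0.7

# AR(1) driven by an observed shock eps and an unobserved shock u; by construction the
# observed shock accounts for 50% of the forecast error variance at every horizon.
eps = rng.standard_normal(T)
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t] + u[t]

def fevd_share(y, eps, h, n_lags=4):
    """Share of the h-step-ahead forecast error variance attributed to the observed shock:
    (i) h-step forecast error from a local projection of y_{t+h} on lags of y;
    (ii) R^2 from regressing that error on the leads eps_{t+1}, ..., eps_{t+h}."""
    idx = np.arange(n_lags, len(y) - h)
    Z = np.column_stack([np.ones(len(idx))] + [y[idx - j] for j in range(n_lags)])
    fe = y[idx + h] - Z @ np.linalg.lstsq(Z, y[idx + h], rcond=None)[0]
    E = np.column_stack([np.ones(len(idx))] + [eps[idx + j] for j in range(1, h + 1)])
    fitted = E @ np.linalg.lstsq(E, fe, rcond=None)[0]
    return fitted.var() / fe.var()

for h in (1, 4, 8):
    print(f"h = {h}: estimated variance share of the observed shock = "
          f"{fevd_share(y, eps, h):.2f} (population value 0.50)")
```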

Minimum Contrast Empirical Likelihood Inference of Discontinuity in Density

Jun Ma:Renmin University of China; 

Hugo Jales:Syracuse University; 

Zhengfei Yu:University of Tsukuba.

This article investigates the asymptotic properties of a simple empirical-likelihood-based inference method for discontinuity in density. The parameter of interest is a function of two one-sided limits of the probability density function at (possibly) two cut-off points. Our approach is based on the first-order conditions from a minimum contrast problem. We investigate both first-order and second-order properties of the proposed method. We characterize the leading coverage error of our inference method and propose a coverage-error-optimal (CE-optimal, hereafter) bandwidth selector. We show that the empirical likelihood ratio statistic is Bartlett correctable. An important special case is the manipulation testing problem in a regression discontinuity design (RDD), where the parameter of interest is the density difference at a known threshold. In RDD, the continuity of the density of the assignment variable at the threshold is considered a "no-manipulation" behavioral assumption, which is a testable implication of an identifying condition for the local average treatment effect. When specialized to the manipulation testing problem, the CE-optimal bandwidth selector has an explicit form. We propose a data-driven CE-optimal bandwidth selector for use in practice. Results from Monte Carlo simulations are presented. Usefulness of our method is illustrated by an empirical example.
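
As a crude illustration of the object of interest in the manipulation-testing case (the density jump at a known cutoff), the snippet below uses one-sided rectangular-kernel density estimates with an ad hoc bandwidth; it is not the minimum contrast empirical likelihood procedure, and the bandwidth is not the CE-optimal selector.

```python
import numpy as np

rng = np.random.default_rng(3)

# Running variable with manipulation: part of the mass just below the cutoff c = 0 is
# pushed just above it, so the density jumps upward at the cutoff.
x = rng.normal(0.0, 1.0, 20_000)
bunch = (x > -0.15) & (x < 0.0) & (rng.random(20_000) < 0.5)
x[bunch] = rng.uniform(0.0, 0.15, bunch.sum())

def density_jump(x, cutoff=0.0, h=0.2):
    """One-sided rectangular-kernel density estimates within bandwidth h of the cutoff,
    plus a normal-approximation z-statistic for the jump (covariance of the two bin
    counts ignored -- a simplification for this sketch)."""
    n = len(x)
    p_left = np.mean((x >= cutoff - h) & (x < cutoff))
    p_right = np.mean((x >= cutoff) & (x < cutoff + h))
    f_left, f_right = p_left / h, p_right / h
    se = np.sqrt((p_left * (1 - p_left) + p_right * (1 - p_right)) / n) / h
    return f_right - f_left, (f_right - f_left) / se

jump, z = density_jump(x)
print(f"estimated density jump at the cutoff: {jump:.3f} (z = {z:.1f})")
```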

https://www.tandfonline.com/toc/ubes20/38/4?nav=tocList

The WeChat public account "High-Frequency Data" (高頻數據) curates the latest high-quality international journal articles and working papers on high-frequency volatility and return forecasting, alongside tables of contents from leading journals in related fields. It aims to enrich the academic digest landscape and to build a platform for exchange on high-frequency forecasting in China and abroad. The account is run by a team led by Professor Yang Ke 楊科 (South China University of Technology) and Associate Professor Ma Feng 馬鋒 (Southwest Jiaotong University).

Sections

About the Team

Team Members / Faculty

Posts

Featured Readings / Team Publications / Working Papers

Journal Tables of Contents

Finance Top-5 journal series: JF, RFS, JFE, JFQA, ROF

High-ranked journal series: JOE, JBES, JBF, IJOF, JAE, JOF
