dc.description.abstract | This dissertation presents three essays, organized as chapters, each focusing on parsimonious estimation of Vector Autoregressive (VAR) models. The newly presented models are applied to several issues in macroeconomics. VARs are widely used in economic analysis and forecasting, largely because of their tractability and their power to expose and explore the complicated dynamic processes of a modern economy, such as shocks, transmission channels, and the links between them. Researchers often wish to include more variables in a VAR in order to investigate the economy more broadly, and the analysis has recently been extended to allow time variation in parameters so as to capture possibly changing dynamics. Both extensions inevitably cause parameter proliferation. The over-fitting that results from including more variables and allowing time-varying parameters, together with the wide applicability of VARs, motivates the parsimonious modeling, estimation, applications, and comparisons in this dissertation. In the first chapter, I extend previous parsimonious estimation from constant coefficients to time-varying coefficients. This extension is desirable because, once the time dimension is taken into account, the number of parameters grows with the number of time periods while the number of observations stays fixed, just as in constant-parameter estimation. I use a stochastic variable selection method over the time dimension; that is, the time-varying dynamics of each coefficient in a TVP-VAR model are checked. I estimate the model by Bayesian methods, based on how sensitive the conditional likelihood is to whether a given coefficient is included in the model or not. If a coefficient contributes more to the likelihood, the posterior tends to assign it a higher probability of staying in the system.
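The inclusion step described above, in which a coefficient's posterior probability of staying in the system depends on how much it contributes to the likelihood, can be sketched in a toy Gibbs-style update. This is a minimal illustration under strong assumptions (invented data, known unit error variance, an OLS estimate standing in for a posterior coefficient draw), not the dissertation's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x0 only; x1 is irrelevant.
n = 200
X = rng.standard_normal((n, 2))
y = 2.0 * X[:, 0] + rng.standard_normal(n)

def log_lik(beta):
    resid = y - X @ beta
    return -0.5 * resid @ resid  # Gaussian log-likelihood, sigma^2 = 1

# One inclusion-indicator update per coefficient: compare the conditional
# likelihood with the coefficient in versus out of the model.
beta_draw = np.linalg.lstsq(X, y, rcond=None)[0]  # stand-in for a posterior draw
prior_incl = 0.5  # prior probability of inclusion
incl_prob = np.empty(2)
for j in range(2):
    b_out = beta_draw.copy()
    b_out[j] = 0.0  # coefficient excluded from the system
    ll_in, ll_out = log_lik(beta_draw), log_lik(b_out)
    m = max(ll_in, ll_out)  # rescale for numerical stability
    w_in = prior_incl * np.exp(ll_in - m)
    w_out = (1.0 - prior_incl) * np.exp(ll_out - m)
    incl_prob[j] = w_in / (w_in + w_out)

print(incl_prob)
```

The relevant regressor x0 is retained with posterior probability near one, since dropping it costs a great deal of likelihood, while the irrelevant x1 gains little from inclusion.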
Nevertheless, if the parsimony check is imposed on every coefficient in the model, the computational burden increases substantially, because this is essentially a mixture-of-models estimation in which every candidate model must be checked, and the number of candidate models grows with the time dimension. I suggest two ways to alleviate this burden. The first concerns the model setting: the variables of interest can be collected and checked together as a whole, that is, block checking rather than checking coefficients one by one. The second concerns the efficiency of the numerical computation, which works in two directions. I construct large matrices that replace the Kalman forward filter and backward smoother, reducing the number of steps in each iteration of the Bayesian simulation; moreover, these large matrices are sparse, which makes the computation more efficient still. The single-checking TVP-VAR with stochastic volatility is used to estimate changes over time in the monetary policy stance and in agents' responses to policy shocks. Even with the most parsimonious estimation, I find no significant changes either in the policy stance or in the reaction of economic agents to non-systematic monetary policy shocks. In the second chapter, I present a general parsimonious estimation of the time-varying parameter VAR with stochastic volatility based on a factor idea: the distant lags are driven by the recent lags; the time variation in the coefficients on the regressors is driven by a few factors, so that the covariance matrix of the innovations to the coefficients has reduced rank; and, lastly, a single latent factor, the common volatility, represents the full stochastic volatility, motivated by the empirical evidence that the estimated volatilities of most macroeconomic variables share a similar pattern.
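The idea of replacing the Kalman forward filter and backward smoother with one large sparse system can be sketched as follows for a scalar time-varying coefficient. The stacked random-walk state equation gives a banded posterior precision matrix, so the whole state path is recovered in a single sparse solve. This is a minimal illustration with invented toy data and assumed-known variances, shown for the posterior mean only, not the dissertation's full sampler:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(1)

# Toy TVP regression: y_t = x_t * beta_t + e_t, with beta_t a random walk.
T = 300
x = rng.standard_normal(T)
beta_true = np.cumsum(0.1 * rng.standard_normal(T))
y = x * beta_true + 0.5 * rng.standard_normal(T)
sig2, omg2 = 0.25, 0.01  # measurement / state innovation variances (assumed known)

# H is the first-difference operator: (H beta)_t = beta_t - beta_{t-1}.
H = sp.eye(T, format="csc") - sp.eye(T, k=-1, format="csc")

# The stacked posterior precision of beta_1..beta_T is banded and sparse.
K = (H.T @ H) / omg2 + sp.diags(x**2 / sig2)
rhs = x * y / sig2

lu = splu(K.tocsc())       # one sparse factorization for the whole path
beta_mean = lu.solve(rhs)  # posterior mean of all states at once
print(np.corrcoef(beta_mean, beta_true)[0, 1])
```

A posterior draw would add correlated noise through the Cholesky factor of K; the point of the sketch is that the banded structure lets one factorization replace the filter-smoother recursions in each simulation iteration.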
Note that the model I present concentrates on reducing the dimension of the parameters, not the dimension of a large data set, as factor models such as dynamic factor models or factor-augmented models do. The model is estimated by Bayesian simulation, and each estimation procedure, or block, is presented in this chapter. Given this general treatment, the estimation blocks can be freely combined, with suitable modification, depending on the specific research question. I then give an empirical analysis with the factor-driven model, based on the typical small-scale monetary VAR. Principal component analysis shows that even the small-scale TVP-VAR is heavily over-parameterized, that it can be driven by a few factors, and that the early lags are not suitable for driving the distant lags, since that would cause dynamic contamination. Therefore, parsimonious estimation via factors on the time-varying coefficients and the common volatility is used to estimate the small-scale monetary VAR. Focusing on agents' responses to monetary policy shocks, I find no significant differences across time periods. In the last chapter, I consider a large Bayesian VAR that contains 28 variables covering a broad range of the U.S. economy, including the labor market, the housing market, and the bond market. Researchers find such high-dimensional systems desirable because they capture the background of the whole economy, reducing the risk of omitting variables that are critical for the transmission of the shocks of interest. A large number of endogenous variables, however, worsens parameter proliferation. I use priors that shrink the coefficients to conduct an empirical analysis of monetary policy shocks, financial shocks, and uncertainty shocks. I find that the impulse response functions to monetary policy shocks are almost perfectly in line with theoretical predictions. The financial and uncertainty shocks are analyzed jointly.
Both positive financial shocks and positive uncertainty shocks have negative effects on real activity; however, financial shocks have more persistent effects on these variables than uncertainty shocks do. I also find that financial variables respond more strongly to uncertainty shocks than to financial shocks. | |