968 results for State-space methods


Relevance:

90.00%

Publisher:

Abstract:

Dissertation presented to obtain the PhD degree in Electrical and Computer Engineering - Electronics

Relevance:

90.00%

Publisher:

Abstract:

Dissertation presented to obtain the Master's degree in Electrical and Computer Engineering

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: To identify the clinical and demographic predictors of in-hospital mortality in acute myocardial infarction with elevation of the ST segment in a public hospital, in the city of Fortaleza, Ceará state, Brazil. METHODS: A retrospective study of 373 patients experiencing their first episode of acute myocardial infarction was carried out. Of the study patients, 289 were discharged from the hospital (group A) and 84 died (group B). Both groups were analyzed regarding: sex; age; time elapsed from the beginning of the symptoms of myocardial infarction to assistance at the hospital; use of streptokinase; risk factors for atherosclerosis; electrocardiographic location of myocardial infarct; and Killip functional class. RESULTS: In a univariate analysis, group B had a greater proportion of the following parameters as compared with group A: non-Killip I functional class; diabetes; age >70 years; infarction of the inferior wall associated with right ventricular impairment; time between symptom onset and treatment at the hospital >12 h; anteroseptal or extensive anterior infarction; no use of streptokinase; and no tobacco use. In a multivariate logistic regression analysis, only non-Killip I functional class, diabetes, and age >70 years persisted as independent factors for death. CONCLUSION: Non-Killip I functional class, diabetes, and age >70 years were independent predictors of mortality in acute myocardial infarction with elevation of the ST segment.
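The univariate comparison described in this abstract amounts to computing an odds ratio for death for each binary risk factor. A minimal sketch follows; all counts are hypothetical and not taken from the study, and the confidence interval uses the standard log-odds-ratio normal approximation:

```python
import math

def odds_ratio(exposed_dead, exposed_alive, unexposed_dead, unexposed_alive):
    """Odds ratio for death given a binary risk factor, with a 95% CI
    from the log-OR normal approximation (Woolf's method)."""
    or_ = (exposed_dead * unexposed_alive) / (exposed_alive * unexposed_dead)
    se = math.sqrt(1 / exposed_dead + 1 / exposed_alive
                   + 1 / unexposed_dead + 1 / unexposed_alive)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical 2x2 table (NOT the paper's data): diabetics vs non-diabetics
print(odds_ratio(30, 50, 54, 239))
```

The multivariate step in the study then re-tests each factor in a logistic regression so that only independent predictors survive.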

Relevance:

90.00%

Publisher:

Abstract:

There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models that permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state-space representations to model the evolution of parameters. We show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
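The random-walk parameter variation that TVP-VARs build on can be sketched with a small simulation; the model below is a single-equation stand-in (coefficient values and noise scales are illustrative, not from the paper):

```python
import numpy as np

# State-space sketch of a time-varying parameter regression:
#   measurement:  y_t = x_t' beta_t + eps_t
#   state:        beta_t = beta_{t-1} + eta_t   (random walk)
rng = np.random.default_rng(0)
T, k = 200, 2
x = rng.normal(size=(T, k))
beta = np.zeros((T, k))
beta[0] = [1.0, -0.5]                 # illustrative starting coefficients
y = np.empty(T)
for t in range(T):
    if t > 0:
        beta[t] = beta[t - 1] + rng.normal(scale=0.05, size=k)  # state equation
    y[t] = x[t] @ beta[t] + rng.normal(scale=0.1)               # measurement equation
print(beta[-1])
```

A TVP-VAR stacks several such equations; the paper's point is that the cointegrating space cannot simply be given this kind of unrestricted random walk.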

Relevance:

90.00%

Publisher:

Abstract:

Block factor methods offer an attractive approach to forecasting with many predictors. These methods extract the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
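The reweighting step behind dynamic model averaging can be sketched as a forgetting-factor recursion on model probabilities (a standard DMA-style recursion, shown here with toy predictive densities standing in for real model likelihoods):

```python
import numpy as np

def dma_weights(pred_dens, alpha=0.99):
    """Recursive model probabilities: the prediction step flattens the
    previous probabilities with forgetting factor alpha, the update step
    multiplies by each model's one-step predictive density."""
    T, K = pred_dens.shape
    pi = np.full(K, 1.0 / K)
    out = np.empty((T, K))
    for t in range(T):
        pred = pi ** alpha
        pred /= pred.sum()          # forgetting (prediction) step
        pi = pred * pred_dens[t]
        pi /= pi.sum()              # Bayes (update) step
        out[t] = pi
    return out

# Toy example: model 1 starts forecasting better after t = 50,
# so its weight should drift toward 1 from that point on.
dens = np.ones((100, 2))
dens[50:, 1] = 1.5
w = dma_weights(dens)
print(w[-1])
```

The forgetting factor controls how quickly past forecast performance is discounted, which is what lets the favored model change over time.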

Relevance:

90.00%

Publisher:

Abstract:

We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more sophisticated approaches such as those using time-varying coefficient models. We also provide evidence on which sets of predictors are relevant for forecasting in each period.

Relevance:

90.00%

Publisher:

Abstract:

This paper introduces a new model of trend (or underlying) inflation. In contrast to many earlier approaches, which allow for trend inflation to evolve according to a random walk, ours is a bounded model which ensures that trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. The bounds placed on trend inflation mean that standard econometric methods for estimating linear Gaussian state space models cannot be used and we develop a posterior simulation algorithm for estimating the bounded trend inflation model. In an empirical exercise with CPI inflation we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model.

Relevance:

90.00%

Publisher:

Abstract:

In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
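Kalman filtering with a forgetting factor can be sketched for a single time-varying parameter regression; the trick is that the predicted state covariance is inflated by 1/lambda instead of specifying a state-noise covariance Q (a single-equation sketch, not the paper's full TVP-VAR machinery):

```python
import numpy as np

def kalman_forgetting(y, X, lam=0.99, sigma2=1.0):
    """Kalman filter for y_t = x_t' b_t + e_t where the prediction
    step uses a forgetting factor: P_t|t-1 = P_{t-1|t-1} / lam."""
    T, k = X.shape
    b = np.zeros(k)
    P = np.eye(k) * 10.0            # diffuse-ish initial covariance
    betas = np.empty((T, k))
    for t in range(T):
        P = P / lam                 # forgetting replaces P + Q
        x = X[t]
        S = x @ P @ x + sigma2      # one-step predictive variance
        K = P @ x / S               # Kalman gain
        b = b + K * (y[t] - x @ b)  # state (coefficient) update
        P = P - np.outer(K, x @ P)  # covariance update
        betas[t] = b
    return betas

# Sanity check on data with constant true coefficients
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
true_b = np.array([2.0, -1.0])
y = X @ true_b + 0.1 * rng.normal(size=300)
print(kalman_forgetting(y, X)[-1])
```

Because no Q has to be estimated, the same recursion scales to the large systems the paper targets.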

Relevance:

90.00%

Publisher:

Abstract:

This paper investigates the usefulness of switching Gaussian state space models as a tool for implementing dynamic model selection (DMS) or averaging (DMA) in time-varying parameter regression models. DMS methods allow for model switching, where a different model can be chosen at each point in time. Thus, they allow the explanatory variables in the time-varying parameter regression model to change over time. DMA carries out model averaging in a time-varying manner. We compare our exact approach to DMA/DMS with a popular existing procedure which relies on forgetting factor approximations. In an application, we use DMS to select different predictors in an inflation forecasting exercise. We also compare different ways of implementing DMA/DMS and investigate whether they lead to similar results.

Relevance:

90.00%

Publisher:

Abstract:

This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.

Relevance:

90.00%

Publisher:

Abstract:

Functional connectivity in the human brain can be represented as a network using electroencephalography (EEG) signals. These networks--whose nodes can vary from tens to hundreds--are characterized by neurobiologically meaningful graph theory metrics. This study investigates the degree to which various graph metrics depend upon the network size. To this end, EEGs from 32 normal subjects were recorded and functional networks of three different sizes were extracted. A state-space based method was used to calculate cross-correlation matrices between different brain regions. These correlation matrices were used to construct binary adjacency connectomes, which were assessed with regard to a number of graph metrics such as clustering coefficient, modularity, efficiency, economic efficiency, and assortativity. We showed that the estimates of these metrics significantly differ depending on the network size. Larger networks had higher efficiency, higher assortativity and lower modularity compared to those with smaller size and the same density. These findings indicate that the network size should be considered in any comparison of networks across studies.
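The size dependence can be illustrated for one of the metrics, global efficiency, on random binary adjacency matrices of equal density but different sizes (a sketch with synthetic graphs, not the study's EEG-derived connectomes):

```python
import numpy as np
from collections import deque

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs,
    computed with breadth-first search; unreachable pairs contribute 0."""
    n = len(adj)
    neighbors = [np.flatnonzero(adj[i]) for i in range(n)]
    total = 0.0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in neighbors[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist if d > 0)
    return total / (n * (n - 1))

def random_graph(n, density, seed):
    """Symmetric binary adjacency matrix with the given edge density."""
    rng = np.random.default_rng(seed)
    upper = np.triu((rng.random((n, n)) < density).astype(int), 1)
    return upper + upper.T

# Same density, different sizes: the efficiency estimate differs,
# echoing the paper's finding that larger networks look more efficient.
for n in (16, 256):
    print(n, round(global_efficiency(random_graph(n, 0.2, 0)), 3))
```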

Relevance:

90.00%

Publisher:

Abstract:

Introduction: The interhemispheric asymmetries that originate from connectivity-related structuring of the cerebral cortex are compromised in schizophrenia (SZ). Recently, we revealed the whole-head topography of EEG synchronization in SZ (Jalili et al. 2007; Knyazeva et al. 2008). Here we extended the analysis to assess abnormality in the asymmetry of synchronization, further motivated by the evidence that the interhemispheric asymmetries suspected to be abnormal in SZ originate from this connectivity-related structuring of the cortex. Methods: Thirteen right-handed SZ patients and thirteen matched controls participated in this study; multichannel (128-electrode) EEGs were recorded for 3-5 minutes at rest. Laplacian EEGs (LEEG) were then calculated using a 2-D spline. The LEEGs were analyzed by calculating the power spectral density using Welch's averaged periodogram method. Furthermore, using the S-estimator, a state-space based multivariate synchronization measure, we analyzed correlates of functional cortico-cortical connectivity in SZ patients compared to the controls. The values of the S-estimator were obtained at three different spatial scales: first-order neighbors for each sensor location, second-order neighbors, and the whole hemisphere. The synchronization measures based on LEEG of the alpha and beta bands were applied and tuned to various spatial scales including local, intraregional, and long-distance levels. To assess the between-group differences, we used a permutation version of Hotelling's T2 test. For correlation analysis, the Spearman rank correlation was calculated.
Results: Compared to the controls, who had rightward asymmetry at a local level (LEEG power), rightward anterior and leftward posterior asymmetries at an intraregional level (first- and second-order S-estimator), and rightward global asymmetry (hemispheric S-estimator), SZ patients showed generally attenuated asymmetry, the effect being strongest for intraregional synchronization. This deviation in asymmetry across the anterior-to-posterior axis is consistent with the cerebral form of the so-called Yakovlevian or anticlockwise cerebral torque. Moreover, the negative occipital and positive frontal asymmetry values suggest higher regional synchronization among the left occipital and the right frontal locations relative to their symmetrical counterparts. Correlation analysis linked the posterior intraregional and hemispheric abnormalities to the negative SZ symptoms, whereas the asymmetry of LEEG power appeared to be weakly coupled to clinical ratings. The posterior intraregional abnormalities of asymmetry were shown to increase with the duration of the disease. The tentative links between these findings and gross anatomical asymmetries, including the cerebral torque and gyrification pattern in normal subjects and SZ patients, are discussed. Conclusions: Overall, our findings reveal the abnormalities in the synchronization asymmetry in SZ patients and heavy involvement of the right hemisphere in these abnormalities. These results indicate that anomalous asymmetry of cortico-cortical connections in schizophrenia is amenable to electrophysiological analysis.
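The S-estimator used in this study can be sketched directly; the sketch below assumes the standard eigenvalue-entropy definition, S = 1 + sum(l_i log l_i)/log(M), where l_i are the eigenvalues of the M-channel correlation matrix normalized to sum to 1:

```python
import numpy as np

def s_estimator(signals):
    """S-estimator of multivariate synchronization: 0 for independent
    channels, 1 for fully synchronized channels. `signals` has shape
    (channels, samples)."""
    M = signals.shape[0]
    C = np.corrcoef(signals)
    lam = np.linalg.eigvalsh(C) / M     # eigenvalues normalized to sum to 1
    lam = lam[lam > 1e-12]              # guard against log(0)
    return 1.0 + (lam * np.log(lam)).sum() / np.log(M)

# Synthetic check: 8 independent channels vs 8 copies of a common signal
rng = np.random.default_rng(0)
independent = rng.normal(size=(8, 1000))
common = rng.normal(size=1000)
synced = np.tile(common, (8, 1)) + 0.1 * rng.normal(size=(8, 1000))
print(s_estimator(independent), s_estimator(synced))
```

Intuitively, strong synchronization concentrates the correlation matrix's variance in a few eigenvalues, which drives the entropy term toward zero and S toward one.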

Relevance:

90.00%

Publisher:

Abstract:

The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the products sold use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance and their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
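The underlying dynamic program is easy to state for a single resource, which is also the building block the decomposition methods reduce to. A toy sketch (fares, arrival probabilities, horizon, and capacity are all made up):

```python
# Exact DP for a single-resource toy instance. In network RM the
# scalar capacity c becomes a vector of resource capacities, which is
# what makes the state space exponentially large and motivates the
# approximation methods discussed in the abstract.
fares = [100, 60]           # fare per booking class (illustrative)
probs = [0.3, 0.4]          # per-period arrival probability per class
T, C = 50, 10               # horizon length, resource capacity

# V[t][c] = expected optimal revenue from period t with c units left
V = [[0.0] * (C + 1) for _ in range(T + 1)]
for t in range(T - 1, -1, -1):
    for c in range(C + 1):
        v = V[t + 1][c]
        ev = v * (1 - sum(probs))               # no arrival this period
        for f, p in zip(fares, probs):
            accept = f + V[t + 1][c - 1] if c > 0 else float("-inf")
            ev += p * max(accept, v)            # accept iff fare >= opportunity cost
        V[t][c] = ev
print(round(V[0][C], 2))
```

The value function V is exactly what the affine and piecewise-linear basis-function methods approximate once c is a vector.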