974 results for Variance Models


Relevance:

100.00%

Publisher:

Abstract:

We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
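
As a rough illustration of the selection exercise described above (a minimal sketch using the Python arch package; the candidate set, simulated returns, and the closeness margin are illustrative assumptions, not the paper's setup):

```python
# Sketch: score GARCH-family candidates by BIC and keep "close
# competitors" whose criterion value lies within a small margin
# of the best model's.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=8, size=1000)  # placeholder return series

candidates = {
    "GARCH(1,1)":  dict(vol="GARCH",  p=1, q=1),
    "GARCH(2,1)":  dict(vol="GARCH",  p=2, q=1),
    "GJR(1,1)":    dict(vol="GARCH",  p=1, o=1, q=1),  # asymmetric
    "EGARCH(1,1)": dict(vol="EGARCH", p=1, q=1),
}

bic = {}
for name, spec in candidates.items():
    res = arch_model(returns, mean="Constant", **spec).fit(disp="off")
    bic[name] = res.bic

best = min(bic, key=bic.get)
margin = 2.0  # arbitrary band defining a "close competitor"
portfolio = [m for m, v in bic.items() if v - bic[best] <= margin]
print(best, portfolio)
```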

Relevance:

100.00%

Publisher:

Abstract:

This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with that of forecasts generated by the commonly used GARCH(1, 1) model. An examination of the orders of models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
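
The two loss functions behind that comparison are straightforward; a minimal sketch (variable names are assumptions, not from the paper) of scoring volatility forecasts against a realized proxy:

```python
import numpy as np

def mae(forecast, realized):
    # Mean absolute error of the volatility forecasts.
    return np.mean(np.abs(forecast - realized))

def mse(forecast, realized):
    # Mean squared error; penalizes large misses more heavily,
    # which is why the two criteria can rank models differently.
    return np.mean((forecast - realized) ** 2)

# Hypothetical arrays: sigma_ic (IC-selected model forecasts),
# sigma_11 (GARCH(1,1) forecasts), proxy (realized volatility).
# mae(sigma_ic, proxy) < mae(sigma_11, proxy), with the MSE ranking
# reversed, would mirror the paper's finding.
```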

Relevance:

60.00%

Publisher:

Abstract:

We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
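
As a rough sketch of the Gaussian pseudolikelihood criterion (illustrative code under assumed names, not the open-source software the authors describe), a candidate working covariance model can be scored by the Gaussian log-likelihood of the cluster residuals it implies:

```python
import numpy as np

def gaussian_pseudolikelihood(residuals_by_cluster, cov_by_cluster):
    """Gaussian log-likelihood of cluster residuals under a candidate
    working covariance model; larger values favour the model."""
    ll = 0.0
    for r, V in zip(residuals_by_cluster, cov_by_cluster):
        _, logdet = np.linalg.slogdet(V)
        ll -= 0.5 * (logdet + r @ np.linalg.solve(V, r)
                     + len(r) * np.log(2 * np.pi))
    return ll

# Fit the GEE under each working model (independence, exchangeable,
# AR(1), ...), evaluate this criterion at each fit, and prefer the
# working model with the largest value.
```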

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates the presence of limit oscillations in an adaptive sampling system. The basic sampling criterion operates in the sense that each next sample is taken when the absolute difference between the signal amplitude and its most recently sampled value equals a prescribed threshold amplitude. The sampling criterion is then extended to involve a prescribed set of amplitudes. The limit oscillations may be interpreted through the equivalence of the adaptive sampling-and-hold device with a nonlinear device consisting of a relay with multiple hysteresis whose parameterization is, in general, dependent on the initial conditions of the dynamic system. The study is performed in the time domain.
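
The basic sampling rule is easy to simulate; the following minimal sketch (threshold and test signal are arbitrary choices, not from the paper) shows the criterion in action:

```python
import numpy as np

def adaptive_samples(signal, threshold):
    """Level-crossing sampling: take a new sample whenever the signal
    departs from the last sampled value by the threshold amplitude."""
    idx = [0]          # always sample the initial point
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= threshold:
            idx.append(i)
            last = x   # the sample-and-hold device updates here
    return np.array(idx)

t = np.linspace(0, 4 * np.pi, 2000)
samples = adaptive_samples(np.sin(t), threshold=0.2)
print(len(samples))    # samples concentrate where the signal is steep
```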

Relevance:

60.00%

Publisher:

Abstract:

The thesis covers various aspects of the modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel to the classical setup. The thesis mainly studies the estimation and prediction of signal-plus-noise models, where the signal and the noise are assumed to follow models with symmetric stable innovations.

We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite variance models are discussed extensively in the second chapter, where we also survey the existing theory and methods for infinite variance models.

The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we use higher-order Yule-Walker type estimation based on the auto-covariation function and illustrate the methods by simulation and an application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method using singular value decomposition is derived.

In the fifth chapter of the thesis, we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models.

In chapter six, we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. Here we introduce a parametric spectrum analysis and a frequency estimate based on the power transfer function, which is estimated using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
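
As a small illustration of the infinite-variance setting (a sketch only: the estimator shown is one standard sample auto-covariation with p = 1, which need not match the thesis's exact definitions):

```python
# Simulate an AR(1) with symmetric alpha-stable innovations and estimate
# its auto-covariation coefficients, which play the role that
# autocorrelations play in the finite variance case.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha, phi, n = 1.5, 0.6, 5000
eps = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]   # stable AR(1); variance is infinite

def autocovariation(x, lag, p=1.0):
    """Sample auto-covariation coefficient for 1 <= p < alpha."""
    a, b = x[lag:], x[:-lag]
    return np.sum(a * np.sign(b) * np.abs(b) ** (p - 1)) / np.sum(np.abs(b) ** p)

print([round(autocovariation(x, k), 3) for k in (1, 2, 3)])  # near phi**k
```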

Relevance:

60.00%

Publisher:

Abstract:

This study compares volatility forecasts for seven stocks traded on the Bovespa using two different realized volatility models and three conditional volatility models. The aim is to find empirical evidence on the difference in results obtained when realized volatility models and conditional volatility models are used to forecast stock volatility in Brazil. The period analyzed runs from November 1, 2007 to March 30, 2011. The sample includes 5-minute intraday data. The realized volatility estimators considered in this study are the Bi-Power Variation (BPVar), developed by Barndorff-Nielsen and Shephard (2004b), and the Realized Outlyingness Weighted Variation (ROWVar), proposed by Boudt, Croux and Laurent (2008a). Both are nonparametric estimators and are robust to jumps. Realized volatility forecasts were produced using autoregressive models estimated for each stock on the estimated volatility series. The conditional variance models considered here are the GARCH(1,1), the GJR(1,1), which incorporates asymmetries, and the FIGARCH-CHUNG(1,d,1), which has long memory. The sample was split in two: an estimation period from November 1, 2007 to December 30, 2010 (779 trading days) and a validation period from January 3, 2011 to March 31, 2011 (61 trading days). Out-of-sample forecasts were made one day ahead, and the models were re-estimated at each step, adding one more observation to the sample after each forecast. The forecasts are compared using the Diebold-Mariano test and regressions of the ex-post variance on a constant and the forecast. In addition, the study also presents some descriptive statistics on the estimated volatility series and the forecast errors.
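
The jump-robust bipower variation estimator mentioned above has a compact form; a minimal sketch (variable names assumed, scaling as in Barndorff-Nielsen and Shephard):

```python
import numpy as np

def bipower_variation(intraday_returns):
    """Realized bipower variation: (pi/2) * sum |r_i||r_{i-1}|.
    Robust to jumps, unlike realized variance sum r_i**2."""
    r = np.asarray(intraday_returns)
    return (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

# Apply to each day's 5-minute returns to build a daily volatility
# series, then fit an autoregressive forecasting model to that series.
```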

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we decompose the variance of logarithmic monthly earnings of prime-age males into its permanent and transitory components, using a five-wave rotating panel from the Venezuelan "Encuesta de Hogares por Muestreo" from 1995 to 1997. As far as we know, this is the first time a variance components model has been estimated for a developing country. We test several specifications and find that an error components model with individual random effects and first-order serially correlated errors fits the data well. In the simplest model, around 22% of earnings variance is explained by the variance of the permanent component, 77% by purely stochastic variation and the remaining 1% by serial correlation. These results contrast with studies from industrial countries, where the permanent component is predominant. The permanent component is usually interpreted as the result of productivity characteristics of individuals, whereas the transitory component is due to stochastic perturbations such as job and/or price instability, among others. Our findings may be due to the timing of the panel, which occurred precisely during the macroeconomic turmoil resulting from a severe financial crisis. The findings suggest that earnings instability is an important source of inequality in a region characterized by high inequality and macroeconomic instability.
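
In the simplest specification the decomposition rests on a one-way error components model, y_it = mu + a_i + u_it; the following moment-based sketch (balanced panel assumed, serial correlation ignored, illustrative only) shows how the permanent share can be identified:

```python
import numpy as np

def variance_shares(Y):
    """Y: individuals x waves matrix of log earnings.
    With y_it = mu + a_i + u_it and serially uncorrelated u_it,
    cov(y_it, y_is) for t != s identifies var(a_i)."""
    C = np.cov(Y, rowvar=False)                   # waves x waves covariance
    T = C.shape[0]
    var_perm = C[~np.eye(T, dtype=bool)].mean()   # permanent component
    var_total = np.diag(C).mean()
    return var_perm / var_total, 1.0 - var_perm / var_total

# A permanent share near 0.22 would mirror the paper's simplest-model
# estimate; the paper's preferred model adds AR(1) errors on top.
```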

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

The negative effects of very low birthweight on intellectual development have been well documented, and more recently this effect has been shown to generalise to birthweights within the normal range. In this study we investigate the etiology of this relationship by using a classical twin design to disentangle the contributions of genes and environment. A previous Dutch study (Boomsma et al., 2001) examining these effects indicated that genes were important in mediating the association of birthweight to full IQ measured at ages 7 and 10, but not at ages 5 and 12. Here the association between birthweight and IQ at age 16 is considered (N = 523 twin pairs). Using variance components modeling we found that the genetic variance in birthweight (4%) completely overlapped with that in verbal IQ but not performance or full IQ. Results further showed the importance of shared environmental effects on birthweight (~60%) but not on IQ (with genes explaining up to 72% of IQ variance). Models incorporating a direction of causation parameter between birthweight and IQ provided adequate fit to the data in either causal direction for performance and full IQ, but the model with verbal IQ causing birthweight was preferred to one in which birthweight influenced verbal IQ. As the measurement of birthweight precedes the measurement of twins' IQ at age 16, the influence of verbal IQ might be better considered as a proxy for parents' IQ or education, and it is possible that brighter mothers provide better prenatal environments for their children.
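
As a simplified illustration of how twin designs apportion variance (Falconer's classical approximation from twin correlations, not the variance components model the study itself fits):

```python
def falconer_ace(r_mz, r_dz):
    """Classical ACE decomposition from MZ and DZ twin correlations."""
    A = 2 * (r_mz - r_dz)   # additive genetic share
    C = 2 * r_dz - r_mz     # shared environment share
    E = 1 - r_mz            # non-shared environment and error
    return A, C, E

# Hypothetical correlations, for illustration only:
print(falconer_ace(r_mz=0.75, r_dz=0.40))  # A=0.70, C=0.05, E=0.25
```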

Relevance:

60.00%

Publisher:

Abstract:

The pharmacokinetic disposition of metformin in late pregnancy was studied together with the level of fetal exposure at birth. Blood samples were obtained in the third trimester of pregnancy from women with gestational diabetes or type 2 diabetes; 5 had a previous diagnosis of polycystic ovary syndrome. A cord blood sample was also obtained at the delivery of some of these women, as well as at the delivery of others who had been taking metformin during pregnancy but from whom no blood had been taken. Plasma metformin concentrations were assayed by a new, validated, reverse-phase HPLC method. A 2-compartment, extravascular maternal model with transplacental partitioning of drug to a fetal compartment was fitted to the data. Nonlinear mixed-effects modeling was performed in NONMEM using FOCE with INTERACTION. Variability was estimated using logarithmic interindividual and additive residual variance models; the covariance between clearance and volume was modeled simultaneously. Mean metformin concentrations in cord plasma and in maternal plasma were 0.81 (range, 0.1-2.6) mg/L and 1.2 (range, 0.1-2.9) mg/L, respectively. Typical population values (interindividual variability, CV%) for allometrically scaled maternal clearance and volume of distribution were 28 L/h/70 kg (17.1%) and 190 L/70 kg (46.3%), giving a derived population-wide half-life of 5.1 hours. The placental partition coefficient for metformin was 1.07 (36.3%). Neither maternal age nor weight significantly influenced the pharmacokinetics. The variability (SD) of observed concentrations about model-predicted concentrations was 0.32 mg/L. The pharmacokinetics were similar to those in nonpregnant patients and, therefore, no dosage adjustment is warranted. Metformin readily crosses the placenta, exposing the fetus to concentrations approaching those in the maternal circulation. The sequelae of such exposure, e.g., effects on neonatal obesity and insulin resistance, remain unknown.
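
A heavily simplified sketch of the model structure (illustrative only: the absorption and intercompartmental rate constants and the dose below are assumptions, and the fetal compartment is reduced to an equilibrium partition, unlike the fitted NONMEM model):

```python
import numpy as np
from scipy.integrate import solve_ivp

CL, V = 28.0, 190.0           # L/h and L per 70 kg (population values above)
ka, k12, k21 = 1.0, 0.5, 0.3  # assumed first-order rate constants (1/h)
PC = 1.07                     # reported placental partition coefficient
dose = 500.0                  # mg, assumed

def rhs(t, y):
    depot, central, peripheral = y   # drug amounts (mg) in each compartment
    return [-ka * depot,
            ka * depot - (CL / V) * central - k12 * central + k21 * peripheral,
            k12 * central - k21 * peripheral]

sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0, 0.0], max_step=0.1)
c_maternal = sol.y[1] / V     # maternal plasma concentration, mg/L
c_fetal = PC * c_maternal     # equilibrium approximation of fetal exposure
print(c_maternal.max())
```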

Relevance:

60.00%

Publisher:

Abstract:

Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and to make predictions. Such simulators are often computationally demanding and are thus not amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator, accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower-dimensional active variable set simplifies subsequent emulation analysis. For random-output, or stochastic, simulators the output dispersion, and thus the variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations, and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are efficiently handled in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real-world simulators, the emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling replicate-free designs in terms of both model parameter uncertainty and predictive variance.
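
One simple way to realize input-dependent noise in an emulator (a sketch of the general replicate-based idea using scikit-learn, not the thesis's two representations or its Fisher-information design criterion):

```python
# A variance model is fit to log sample variances at replicated design
# points; its predictions supply per-point noise levels for the mean GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
reps = 10
X = np.repeat(np.linspace(0, 1, 15), reps)          # 15 sites, 10 replicates
y = np.sin(3 * X) + rng.normal(0, 0.05 + 0.3 * X)   # heteroscedastic output

sites = np.unique(X)
means = np.array([y[X == s].mean() for s in sites])
logv = np.array([np.log(y[X == s].var(ddof=1)) for s in sites])

var_gp = GaussianProcessRegressor(kernel=RBF(0.3)).fit(sites[:, None], logv)
noise = np.exp(var_gp.predict(sites[:, None])) / reps  # var of each site mean

mean_gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=noise)
mean_gp.fit(sites[:, None], means)   # heteroscedastic nugget via alpha
```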

Relevance:

60.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62J05, 62J10, 62F35, 62H12, 62P30.

Relevance:

60.00%

Publisher:

Abstract:

Mixture experiments are typical of the chemical, food, metallurgical and other industries. The aim of these experiments is to find optimal component proportions that provide desired values of some product performance characteristics.
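
Because the component proportions sum to one, response surfaces for mixtures are commonly fit with intercept-free Scheffé polynomials; a minimal sketch (three components, made-up responses, not from the text):

```python
import numpy as np

# Simplex-lattice {3,2} design plus centroid: proportions sum to 1.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3]])
y = np.array([4.0, 6.0, 5.0, 7.2, 5.8, 6.9, 7.0])  # illustrative responses

x1, x2, x3 = X.T
design = np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3])  # Scheffe quadratic
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(coef)  # b1, b2, b3, b12, b13, b23; optimize over the simplex next
```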

Relevance:

60.00%

Publisher:

Abstract:

A long history of organizational research has shown that organizations are affected significantly by changes in technology. Scholars have given particular attention to the effects of so-called disruptive or discontinuous technological changes. Studies have repeatedly shown that established, incumbent organizations tend to suffer deep performance declines (and even complete demise) in the face of such changes, and researchers have devoted much attention to identifying the organizational conditions and processes that are responsible for this persistent and widespread pattern of adaptation failure. This dissertation, which examines the response of the American College of Radiology (ACR) to the emergence of nuclear magnetic resonance imaging technology (NMR), aims to contribute to this well-established research tradition in three distinct and important ways. First, it focuses on a fundamentally different type of organization, a professional association, rather than the technology producers examined in most prior research. Although technologies are well known to be embedded in "communities" that include technology producers, suppliers, customers, governmental entities, professional societies, and other entities, most prior research has focused on the responses and ultimate fate of producers alone. Little if any research has explored the responses of professional organizations in particular. Second, the study employs a sophisticated process methodology that identifies the individual events that make up the organization's response to technological change, as well as the overall sequence through which these events unfold. This process approach contrasts sharply with the variance models used in most previous studies and offers the promise of developing knowledge about how adaptation ultimately unfolds (or fails to). Finally, the project also contributes significantly through its exploration of an apparently successful case of adaptation to technological change. Though nuclear magnetic resonance imaging posed a serious threat to the ACR and its members, this threat appears to have been successfully managed and overcome. Although the unique nature of the organization and the technology under study place some important limits on the generalizability of this research, its findings nonetheless provide some important basic insights about the process through which social organizations can successfully adapt to discontinuous technological changes. These insights, which may be of substantial relevance to technology producer organizations as well, are also elaborated.

Relevance:

60.00%

Publisher:

Abstract:

The organophosphate temephos has been the main insecticide used against larvae of the dengue and yellow fever mosquito (Aedes aegypti) in Brazil since the mid-1980s. Reports of resistance date back to 1995; however, no systematic reports of widespread temephos resistance have occurred to date. As resistance investigation is paramount for strategic decision-making by health officials, our objective here was to investigate the spatial and temporal spread of temephos resistance in Ae. aegypti in Brazil over the last 12 years using discriminating temephos concentrations and the bioassay protocols of the World Health Organization. The mortality results obtained were subjected to spatial analysis with distance interpolation using semi-variance models to generate maps that depict the spread of temephos resistance in Brazil since 1999. The problem has been expanding: since 2002-2003, approximately half the country has exhibited mosquito populations resistant to temephos. The frequency of temephos resistance and, likely, of control failures, which begin when the insecticide mortality level drops below 80%, has increased even further since 2004. By 2010/2011, few parts of Brazil were able to achieve the target 80% efficacy threshold, resulting in a significant risk of temephos control failure in most of the country. The widespread resistance to temephos in Brazilian Ae. aegypti populations greatly compromises effective mosquito control efforts using this insecticide and indicates the urgent need to identify alternative insecticides, aided by the preventive elimination of potential mosquito breeding sites.
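
The interpolation step rests on an empirical semivariogram; a minimal sketch (coordinates, values and lag bins are assumed inputs, not the study's data):

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """gamma(h): mean of 0.5*(z_i - z_j)**2 over point pairs whose
    separation distance falls in each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # unique pairs only
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

# A semi-variance model (e.g., spherical) fitted to gamma(h) then drives
# the kriging surface used to map resistance over space and time.
```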