925 results for hierarchical linear model
Abstract:
This paper shows that a wavelet network and a linear term can be advantageously combined for the purpose of nonlinear system identification. The theoretical foundation of this approach is laid by proving that radial wavelets are orthogonal to linear functions. A constructive procedure for building such nonlinear regression structures, termed linear-wavelet models, is described. For illustration, simulation data are used to identify a model for a two-link robotic manipulator. The results show that the introduction of wavelets does improve the prediction ability of a linear model.
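The combination described in this abstract can be sketched numerically: a least-squares fit of a linear term plus radial wavelet regressors on a one-dimensional toy signal. This is an illustrative reconstruction, not the paper's actual procedure; the Mexican-hat wavelet, the centers, and the scale are assumptions.

```python
import numpy as np

def mexican_hat(r):
    # Radial "Mexican hat" wavelet (second derivative of a Gaussian).
    return (1.0 - r**2) * np.exp(-r**2 / 2.0)

def design_matrix(x, centers, scale):
    # Columns: linear term, constant, one radial wavelet per center.
    cols = [x, np.ones_like(x)]
    cols += [mexican_hat(np.abs(x - c) / scale) for c in centers]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = 0.8 * x + np.sin(2 * x) + 0.05 * rng.standard_normal(x.size)

centers = np.linspace(-3, 3, 9)            # assumed wavelet centers
A = design_matrix(x, centers, scale=0.7)   # assumed dilation
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
print(float(np.sqrt(np.mean(resid**2))))   # RMS error of the linear-wavelet fit
```

The wavelet terms absorb the nonlinear part of the signal that a purely linear fit would leave in the residuals.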
Abstract:
This paper proposes a nonlinear regression structure comprising a wavelet network and a linear term. The introduction of the linear term is aimed at providing a more parsimonious interpolation in high-dimensional spaces when the modelling samples are sparse. A constructive procedure for building such structures, termed linear-wavelet networks, is described. For illustration, the proposed procedure is employed in the framework of dynamic system identification. In an example involving a simulated fermentation process, it is shown that a linear-wavelet network yields a smaller approximation error when compared with a wavelet network with the same number of regressors. The proposed technique is also applied to the identification of a pressure plant from experimental data. In this case, the results show that the introduction of wavelets considerably improves the prediction ability of a linear model. Standard errors on the estimated model coefficients are also calculated to assess the numerical conditioning of the identification process.
Abstract:
The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply it to the question of whether spillovers and unobserved spatial dependence in voter turnout matter from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X model (SLX), the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.
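The SLX specification among the models compared here can be illustrated with simulated data: regress an outcome on a covariate and its spatial lag Wx. This is a sketch, not the paper's Bayesian estimation; the k-nearest-neighbour weight matrix and all coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 400, 5
coords = rng.uniform(0, 1, (n, 2))

# Row-standardized k-nearest-neighbour spatial weight matrix (hypothetical W).
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(d, np.inf)
W = np.zeros((n, n))
for i, nbrs in enumerate(np.argsort(d, axis=1)[:, :k]):
    W[i, nbrs] = 1.0 / k

# Simulated outcome with a direct effect (0.8) and a spillover effect (0.4).
x = rng.standard_normal(n)
y = 1.0 + 0.8 * x + 0.4 * (W @ x) + 0.1 * rng.standard_normal(n)

# SLX: least squares on own covariate and its spatial lag.
X = np.column_stack([np.ones(n), x, W @ x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # intercept, direct effect, spillover effect
```

The coefficient on Wx is what distinguishes SLX from the plain linear model; the Durbin variants add spatial structure to the dependent variable or the errors as well.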
Abstract:
We examine differential equations where nonlinearity is a result of the advection part of the total derivative or the use of quadratic algebraic constraints between state variables (such as the ideal gas law). We show that these types of nonlinearity can be accounted for in the tangent linear model by a suitable choice of the linearization trajectory. Using this optimal linearization trajectory, we show that the tangent linear model can be used to reproduce the exact nonlinear error growth of perturbations for more than 200 days in a quasi-geostrophic model and more than (the equivalent of) 150 days in the Lorenz 96 model. We introduce an iterative method, purely based on tangent linear integrations, that converges to this optimal linearization trajectory. The main conclusion from this article is that this iterative method can be used to account for nonlinearity in estimation problems without using the nonlinear model. We demonstrate this by performing forecast sensitivity experiments in the Lorenz 96 model and show that we are able to estimate analysis increments that improve the two-day forecast using only four backward integrations with the tangent linear model. Copyright © 2011 Royal Meteorological Society
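The tangent linear idea can be sketched for the Lorenz 96 model mentioned above: propagate a small perturbation with the tangent linear model linearized about the evolving nonlinear trajectory, and compare against the exact nonlinear perturbation growth. This is a minimal sketch, not the authors' iterative optimal-trajectory method; the time step, integration length, and perturbation size are arbitrary.

```python
import numpy as np

F = 8.0   # standard Lorenz 96 forcing
N = 40    # number of grid points

def l96(x):
    # Lorenz 96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def l96_aug(z):
    # Advance the nonlinear state and its tangent linear perturbation together,
    # so the linearization trajectory is the evolving nonlinear trajectory.
    x, dx = z[:N], z[N:]
    ddx = ((np.roll(dx, -1) - np.roll(dx, 2)) * np.roll(x, 1)
           + (np.roll(x, -1) - np.roll(x, 2)) * np.roll(dx, 1) - dx)
    return np.concatenate([l96(x), ddx])

def rk4(f, x, dt):
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

rng = np.random.default_rng(0)
x = F + 0.01 * rng.standard_normal(N)
for _ in range(500):                 # spin up onto the attractor
    x = rk4(l96, x, 0.01)

dx0 = 1e-6 * rng.standard_normal(N)  # small initial perturbation
z = np.concatenate([x, dx0])
xp = x + dx0
for _ in range(100):                 # integrate one model time unit
    z = rk4(l96_aug, z, 0.01)
    xp = rk4(l96, xp, 0.01)

tl_growth = z[N:]                    # tangent linear perturbation
nl_growth = xp - z[:N]               # exact nonlinear perturbation
rel_err = np.linalg.norm(nl_growth - tl_growth) / np.linalg.norm(nl_growth)
print(rel_err)
```

For a perturbation this small, the tangent linear growth tracks the nonlinear growth closely; the abstract's point is that a well-chosen linearization trajectory extends this agreement to much larger perturbations and longer lead times.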
Abstract:
The validity of approximating radiative heating rates in the middle atmosphere by a local linear relaxation to a reference temperature state (i.e., “Newtonian cooling”) is investigated. Using radiative heating rate and temperature output from a chemistry–climate model with realistic spatiotemporal variability and realistic chemical and radiative parameterizations, it is found that a linear regression model can capture more than 80% of the variance in longwave heating rates throughout most of the stratosphere and mesosphere, provided that the damping rate is allowed to vary with height, latitude, and season. The linear model describes departures from the climatological mean, not from radiative equilibrium. Photochemical damping rates in the upper stratosphere are similarly diagnosed. Three important exceptions, however, are found. The approximation of linearity breaks down near the edges of the polar vortices in both hemispheres; this nonlinearity can be well captured by including a quadratic term. The use of a scale-independent damping rate is not well justified in the lower tropical stratosphere because of the presence of a broad spectrum of vertical scales. The local assumption fails entirely during the breakup of the Antarctic vortex, where large fluctuations in temperature near the top of the vortex influence longwave heating rates within the quiescent region below. These results are relevant for mechanistic modeling studies of the middle atmosphere, particularly those investigating the final Antarctic warming.
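The Newtonian cooling approximation described above, Q' ≈ -α(T' ), amounts to a linear regression of heating-rate anomalies on temperature anomalies about the climatological mean. A toy version with synthetic data; the damping rate of 1/10 per day is an assumed value, not one diagnosed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true = 1.0 / 10.0                  # assumed damping rate, per day
T_anom = rng.standard_normal(500)        # temperature departures from climatology
# Longwave heating anomalies: linear relaxation plus a small nonlinear residual.
Q = -alpha_true * T_anom + 0.01 * rng.standard_normal(500)

# Least squares through the origin for Q' = -alpha * T'.
alpha_hat = -np.sum(T_anom * Q) / np.sum(T_anom**2)
r2 = 1.0 - np.sum((Q + alpha_hat * T_anom)**2) / np.sum((Q - Q.mean())**2)
print(alpha_hat, r2)
```

In the paper's setting, α is refitted for each height, latitude, and season; the quoted "more than 80% of the variance" corresponds to the r² of exactly this kind of fit.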
Abstract:
We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. The impact of data uncertainty in real-time forecasting will affect a model's forecast performance via the effect on the model parameter estimates as well as via the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that the performance relative to a linear comparator deteriorates in real-time compared to a pseudo out-of-sample forecasting exercise.
Abstract:
Atmospheric CO2 concentration is expected to continue rising in the coming decades, but natural or artificial processes may eventually reduce it. We show that, in the FAMOUS atmosphere-ocean general circulation model, the reduction of ocean heat content as radiative forcing decreases is greater than would be expected from a linear model simulation of the response to the applied forcings. We relate this effect to the behavior of the Atlantic meridional overturning circulation (AMOC): the ocean cools more efficiently with a strong AMOC. The AMOC weakens as CO2 rises, then strengthens as CO2 declines, but temporarily overshoots its original strength. This nonlinearity comes mainly from the accumulated advection of salt into the North Atlantic, which gives the system a longer memory. This implies that changes observed in response to different CO2 scenarios or from different initial states, such as from past changes, may not be a reliable basis for making projections.
Abstract:
Prestes, J, Frollini, AB, De Lima, C, Donatto, FF, Foschini, D, de Marqueti, RC, Figueira Jr, A, and Fleck, SJ. Comparison between linear and daily undulating periodized resistance training to increase strength. J Strength Cond Res 23(9): 2437-2442, 2009-To determine the most effective periodization model for strength and hypertrophy is an important step for strength and conditioning professionals. The aim of this study was to compare the effects of linear (LP) and daily undulating periodized (DUP) resistance training on body composition and maximal strength levels. Forty men aged 21.5 +/- 8.3 and with a minimum 1-year strength training experience were assigned to an LP (n = 20) or DUP group (n = 20). Subjects were tested for maximal strength in bench press, leg press 45 degrees, and arm curl (1 repetition maximum [RM]) at baseline (T1), after 8 weeks (T2), and after 12 weeks of training (T3). Increases of 18.2 and 25.08% in bench press 1 RM were observed for LP and DUP groups in T3 compared with T1, respectively (p <= 0.05). In leg press 45 degrees, LP group exhibited an increase of 24.71% and DUP of 40.61% at T3 compared with T1. Additionally, DUP showed an increase of 12.23% at T2 compared with T1 and 25.48% at T3 compared with T2. For the arm curl exercise, LP group increased 14.15% and DUP 23.53% at T3 when compared with T1. An increase of 20% was also found at T2 when compared with T1, for DUP. Although the DUP group increased strength the most in all exercises, no statistical differences were found between groups. In conclusion, undulating periodized strength training induced higher increases in maximal strength than the linear model in strength-trained men. For maximizing strength increases, daily intensity and volume variations were more effective than weekly variations.
Abstract:
The estimation of a data transformation is very useful for yielding response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types. These models are based on exponential dispersion models. We propose a new class of transformed generalized linear models to extend the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data to extend the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214-223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets. (C) 2009 Elsevier B.V. All rights reserved.
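The Box-Cox transformation that these models extend estimates its index λ by maximizing a profile log-likelihood under normal errors. A self-contained grid-search sketch; the log-normal test data (for which the true λ is 0, i.e., a log transform) and the grid are assumptions for illustration.

```python
import numpy as np

def boxcox_loglik(y, lam):
    # Profile log-likelihood of the Box-Cox transform assuming normal errors:
    # z = (y^lam - 1)/lam for lam != 0, z = log(y) for lam = 0.
    z = np.log(y) if lam == 0 else (y**lam - 1.0) / lam
    n = y.size
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

rng = np.random.default_rng(2)
y = np.exp(rng.normal(1.0, 0.4, size=2000))   # log-normal response: true lambda is 0

grid = np.linspace(-1.0, 1.0, 201)
lam_hat = grid[np.argmax([boxcox_loglik(y, lam) for lam in grid])]
print(lam_hat)
```

The abstract's "simple formula to estimate the parameter that indexes the transformation" plays the role of this maximization step inside the generalized linear model framework.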
Abstract:
Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Often, the number of observations is small, and it is thus important to use inference strategies that incorporate small sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test, and also to a test obtained from a modified profile likelihood function. Our results generalize those in [Zucker, D.M., Lieberman, O., Manor, O., 2000. Improved small sample inference in the mixed linear model: Bartlett correction and adjusted likelihood. Journal of the Royal Statistical Society B, 62, 827-838] by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for a nonlinear covariance matrix structure for the random effects. We report simulation results which show that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presented and discussed. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Detecting both the major genes that control the phenotypic mean and those controlling the phenotypic variance is an open problem in quantitative trait loci analysis. In order to map both kinds of genes, we applied the idea of the classic Haley-Knott regression to double generalized linear models. We performed both kinds of quantitative trait loci detection for a Red Jungle Fowl x White Leghorn F2 intercross using double generalized linear models. It is shown that the double generalized linear model is a proper and efficient approach for localizing variance-controlling genes. We compared two models, with and without a fixed sex effect, and prefer including the sex effect in order to reduce the residual variances. We found that different genes might affect body weight at different times as the chicken grows.
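The idea of separating mean-controlling from variance-controlling loci with a double generalized linear model can be caricatured in two stages: a mean sub-model, then a dispersion sub-model fitted to the residuals. This is a crude OLS stand-in for the actual DGLM machinery (which fits the dispersion with a gamma GLM); the genotype dosages, effect sizes, and sample size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
g_mean = rng.integers(0, 3, n)   # dosage at a mean-controlling locus (0, 1, 2)
g_var = rng.integers(0, 3, n)    # dosage at a variance-controlling locus

# Phenotype: mean depends on g_mean, residual SD depends on g_var.
sd = np.exp(0.25 * (g_var - 1))
y = 2.0 + 0.5 * g_mean + sd * rng.standard_normal(n)

# Stage 1 (mean sub-model): least squares of y on g_mean.
X = np.column_stack([np.ones(n), g_mean])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Stage 2 (dispersion sub-model): regress log squared residuals on g_var;
# the slope recovers the log-variance effect of the variance-controlling locus.
D = np.column_stack([np.ones(n), g_var])
gamma, *_ = np.linalg.lstsq(D, np.log(resid**2 + 1e-12), rcond=None)
print(beta[1], gamma[1])   # mean effect and log-variance effect estimates
```

Since log(sd²) = 0.5·(g_var − 1) here, the dispersion slope should recover about 0.5, while the mean sub-model sees no signal from g_var at all.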
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association study (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to be able to correctly adjust the bias in genetic variance component estimation and gain power in QTL mapping in terms of precision. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole genome models were shown to have good performance in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of mean, which validated the idea of variance-controlling genes. The works in the thesis are accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
Abstract:
BACKGROUND: Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in the degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. RESULTS: Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the extreme low families. For cross-sectional data, a DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. CONCLUSIONS: Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and from a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro-environmental changes (diet, climatic region, etc.) may make genetic heterogeneity of variance a less stable trait over time and space.
Abstract:
This paper presents techniques of likelihood prediction for generalized linear mixed models. Methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show, in simple cases, that likelihood prediction (LP) coincides with already known best frequentist practice, such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variable generalized linear model, it is shown that in complicated cases LP produces better results than already known methods.
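The Poisson generalized linear model underlying the last example is usually fitted by iteratively reweighted least squares (IRLS). A minimal sketch with simulated data; the coefficients 0.5 and 0.3 and the sample size are arbitrary, and the error-in-variable extension discussed in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.standard_normal(n)
y = rng.poisson(np.exp(0.5 + 0.3 * x))   # Poisson response with log link

# IRLS for a Poisson GLM with log link:
# beta <- (X' W X)^{-1} X' W z, with weights W = mu and
# working response z = eta + (y - mu) / mu.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu
    w = mu
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
print(beta)   # should be close to (0.5, 0.3)
```

Each IRLS step is a weighted least-squares solve; for a well-conditioned design it converges in a handful of iterations.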
Abstract:
The first essay, "Determinants of Credit Expansion in Brazil", analyzes the determinants of credit using an extensive bank-level panel dataset. The Brazilian economy experienced a major boost in leverage in the first decade of the 2000s as a result of a set of factors, ranging from macroeconomic stability to the abundant liquidity in international financial markets before 2008, together with a set of deliberate decisions taken by President Lula's administration to expand credit, boost consumption, and gain political support from the lower social strata. As relevant conclusions of our investigation, we verify that credit expansion relied on the reduction of the monetary policy rate, that international financial markets are an important source of funds, and that payroll-guaranteed credit and investment-grade status affected credit supply positively. We were not able to confirm the importance of financial inclusion efforts. The importance of financial sector sanity indicators of credit conditions cannot be underestimated. These results raise questions about the sustainability of this expansion process and about financial stability in the future. The second essay, "Public Credit, Monetary Policy and Financial Stability", discusses the role of public credit. The supply of public credit in Brazil successfully served to relaunch the economy after the Lehman Brothers demise. It was later transformed into a driver of economic growth, as well as a regulatory device to force private banks to reduce interest rates. We argue that the use of public funds to finance economic growth has three important drawbacks: it generates inflation, induces higher loan rates, and may induce financial instability. An additional effect is the prevention of market credit solutions. This study contributes to the understanding of the costs and benefits of credit as a fiscal policy tool. The third essay, "Bayesian Forecasting of Interest Rates: Do Priors Matter?", discusses the choice of priors when forecasting short-term interest rates. Central banks that commit to an inflation-targeting monetary regime are bound to respond to inflation expectation spikes and output gap widening in a clear and transparent way by abiding by a Taylor rule. There are various reports of central banks being more responsive to inflationary than to deflationary shocks, rendering the monetary policy response indeed non-linear. Besides that, there is no guarantee that coefficients remain stable over time. Central banks may switch to a dual-target regime to consider deviations from both inflation and the output gap. The estimation of a Taylor rule may therefore have to consider a non-linear model with time-varying parameters. This paper uses Bayesian forecasting methods to predict short-term interest rates. We take two different approaches: from a theoretical perspective, we focus on an augmented version of the Taylor rule and include the Real Exchange Rate, the Credit-to-GDP ratio, and the Net Public Debt-to-GDP ratio. We also take an "atheoretic" approach based on the Expectations Theory of the Term Structure to model short-term interest rates. The selection of priors is particularly relevant for predictive accuracy; yet, ideally, forecasting models should require as little a priori expert insight as possible. We present recent developments in prior selection; in particular, we propose the use of hierarchical hyper-g priors for better forecasting in a framework that can be easily extended to other key macroeconomic indicators.
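The Taylor rule referred to above has a standard textbook form, i = r* + π + w_π(π − π*) + w_y·gap. A one-line sketch; the 0.5 weights and the 2% neutral real rate are the classic Taylor (1993) values, not estimates from this thesis, whose augmented, time-varying version is far richer.

```python
def taylor_rate(inflation, target, output_gap, neutral_real=2.0,
                w_pi=0.5, w_gap=0.5):
    # Classic Taylor (1993) rule; the 0.5/0.5 weights are the textbook values.
    return (neutral_real + inflation
            + w_pi * (inflation - target) + w_gap * output_gap)

# Inflation 1 pp above a 3% target with a -1% output gap:
# 2 + 4 + 0.5*1 + 0.5*(-1) = 6.0
print(taylor_rate(4.0, 3.0, -1.0))
```

The non-linear, time-varying versions discussed in the essay amount to letting w_pi and w_gap depend on the state of the economy and on time.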