907 results for Weighted regression


Relevance:

20.00%

Publisher:

Abstract:

SUMMARY Background: Age-related declines in lower extremity strength have been associated with impaired mobility and changes in gait patterns, which increase the likelihood of falls. Since community-dwelling adults encounter a wide range of locomotor challenges, including uneven and obstructed walking surfaces, we examined the effect of a strength and balance exercise program on obstructed walking in postmenopausal women. Objectives: This study examined the effect of a weighted-vest strength and balance exercise program on adaptations of the stance leg during obstacle walking in postmenopausal women. Methods: Eighteen women aged 44-62 years who had not engaged in regular resistance training for the past year were recruited from the St. Catharines community to participate in this study. Eleven women volunteered for an aerobic (walking), strength, and balance training program 3 times per week for 12 weeks, while 7 women volunteered as controls. Measurements included: force-platform dynamic balance measures of the center of pressure (COP) and ground reaction forces (GRFs) in the stance leg while going over obstacles of different heights (0, 5, 10, 25 and 30 cm); and isokinetic strength measures of knee and ankle extension and flexion. Results: Of the 18 women who began the trial, 16 completed it. The EX group showed a significant increase of 40% in ankle plantar flexion strength (P < 0.05). However, no improvements in measures of COP or GRFs were observed for either group. The failure to detect any changes in measures of dynamic balance may be due to the small sample size. Conclusions: Postmenopausal women experience significant improvements in ankle strength with 12 weeks of a weighted-vest balance and strength training program; however, these changes do not seem to be associated with any improvement in measures of dynamic balance.

Relevance:

20.00%

Publisher:

Abstract:

Relationships between surface sediment diatom assemblages and lake trophic status were studied in 50 Canadian Precambrian Shield lakes in the Muskoka-Haliburton and southern Ontario regions. The purpose of this study was to develop mathematical regression models to infer lake trophic status from diatom assemblage data. To achieve this goal, however, additional investigations dealing with the evaluation of lake trophic status and the autecological features of key diatom species were carried out. Because a unifying index and classification for lake trophic status was not available, a new multiple index was developed in this study by computation of the physical, chemical and biological data from 85 southern Ontario lakes. Using the new trophic parameter, the lake trophic level (TL) was determined as: TL = 1.37 ln[1 + (TP × Chl-a / SD)], where TP = total phosphorus, Chl-a = chlorophyll-a and SD = Secchi depth. The boundaries between 7 lake trophic categories (ultra-oligotrophic lakes: 0-0.24; oligotrophic lakes: 0.241-1.8; oligomesotrophic lakes: 1.81-3.0; mesotrophic lakes: 3.01-4.20; mesoeutrophic lakes: 4.21-5.4; eutrophic lakes: 5.41-10; and hyper-eutrophic lakes: above 10) were established. The new trophic parameter was more convenient for management of water quality, communication to the public, and comparison with other lake trophic status indices than many of the previously published indices, because the TL index attempts to increase understanding of the characteristics of lakes and their comprehensive trophic states. It is more reasonable and clear for a unifying determination of the true trophic states of lakes. Diatom species autecology analysis was central to this thesis. However, the autecological relationship between diatom species and lake trophic status had not previously been well documented. Based on the investigation of the diatom composition and variation in species abundance in 30 study lakes, the distribution optima of diatom species were determined.
These determinations were based on a quantitative method called "weighted average" (Charles 1985). On this basis, the diatom species were classified into five trophic categories (oligotrophic, oligomesotrophic, mesotrophic, mesoeutrophic and eutrophic species groups). The resulting diatom trophic status autecological features were used in the regression analysis between diatom assemblages and lake trophic status. When the TL trophic level values of the 30 lakes were regressed against their five corresponding diatom trophic groups, the two mathematical equations expressing the assumed linear relationship with the diatom assemblage composition were determined by (1) using a single regression technique: trophic level of lake (TL) = 2.643 − 7.575 log(Index D) (r = 0.88, r² = 0.77, P = 0.0001, n = 30), where Index D = (O% + OM% + M%)/(E% + ME% + M%); and (2) using a multiple regression technique: TL = 4.285 − 0.076 O% − 0.055 OM% − 0.026 M% + 0.033 ME% + 0.065 E% (r = 0.89, r² = 0.792, P = 0.0001, n = 30). There was a significant correlation between measured and diatom-inferred trophic levels by both the single and multiple regression methods (P < 0.0001, n = 20) when both models were applied to another 20 test lakes. Their correlation coefficients (r²) were also statistically significant (r² > 0.68, n = 20). As such, the two transfer function models between diatoms and lake trophic status were validated. The two models obtained as noted above were developed using one group of lakes and then tested using an entirely different group of lakes. This study indicated that diatom assemblages are sensitive to lake trophic status. As indicators of lake trophic status, diatoms are especially useful in situations where no local trophic information is available and in studies of the paleotrophic history of lakes. The diatom autecological information was used to develop an approach to assessing water quality and lake trophic status.
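As a numerical illustration, the TL index and the single-regression transfer function quoted above can be sketched in a few lines (the input values below are hypothetical, not taken from the study lakes):

```python
import math

def trophic_level(tp, chl_a, sd):
    """TL = 1.37 * ln(1 + TP * Chl-a / SD).
    tp: total phosphorus, chl_a: chlorophyll-a, sd: Secchi depth."""
    return 1.37 * math.log(1 + tp * chl_a / sd)

def tl_from_diatoms(o, om, m, me, e):
    """Single-regression transfer function:
    TL = 2.643 - 7.575 * log10(Index D), with
    Index D = (O% + OM% + M%) / (E% + ME% + M%)."""
    index_d = (o + om + m) / (e + me + m)
    return 2.643 - 7.575 * math.log10(index_d)

# Hypothetical lake: TP = 20, Chl-a = 5, SD = 3 m
tl_measured = trophic_level(20.0, 5.0, 3.0)  # about 4.84 (mesoeutrophic range)
# Hypothetical diatom composition in percent abundance
tl_inferred = tl_from_diatoms(o=20, om=15, m=30, me=20, e=15)  # 2.643
```

When the oligotrophic-side and eutrophic-side groups balance exactly, Index D = 1 and the inferred TL collapses to the intercept 2.643.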

Relevance:

20.00%

Publisher:

Abstract:

A simple, low-cost concentric capillary nebulizer (CCN) was developed and evaluated for ICP spectrometry. The CCN could be operated at sample uptake rates of 0.050-1.00 ml min⁻¹ and under oscillating and non-oscillating conditions. Aerosol characteristics for the CCN were studied using a laser Fraunhofer diffraction analyzer. Solvent transport efficiencies and transport rates, detection limits, and short- and long-term stabilities were evaluated for the CCN with a modified cyclonic spray chamber at different sample uptake rates. The Mg II (280.2 nm)/Mg I (285.2 nm) ratio was used for matrix effect studies. Results were compared to those with conventional nebulizers: a cross-flow nebulizer with a Scott-type spray chamber, a GemCone nebulizer with a cyclonic spray chamber, and a Meinhard TR-30-K3 concentric nebulizer with a cyclonic spray chamber. Transport efficiencies of up to 57% were obtained for the CCN. For the elements tested, short- and long-term precisions and detection limits obtained with the CCN at 0.050-0.500 ml min⁻¹ are similar to, or better than, those obtained on the same instrument using the conventional nebulizers (at 1.0 ml min⁻¹). The depressive and enhancement effects of the easily ionizable element Na, sulfuric acid, and dodecylamine surfactant on analyte signals with the CCN are similar to, or less severe than, those obtained with the conventional nebulizers. However, capillary clogging was observed when a sample solution with high dissolved solids was nebulized for more than 40 min. The effects of data acquisition and data processing on detection limits were studied using inductively coupled plasma-atomic emission spectrometry. The study examined the effects of different detection limit approaches, data integration modes, regression modes, the standard concentration range and the number of standards, sample uptake rate, and integration time.
All the experiments followed the same protocols. Three detection limit approaches were examined: the IUPAC method, the residual standard deviation (RSD) method, and the signal-to-background ratio with relative standard deviation of the background (SBR-RSDB) method. The study demonstrated that the different approaches, the integration modes, the regression methods, and the sample uptake rates can all have an effect on detection limits. The study also showed that the different approaches give different detection limits and that some methods (for example, RSD) are susceptible to the quality of the calibration curves. Multicomponent spectral fitting (MSF) gave the best results among the three integration modes: peak height, peak area, and MSF. The weighted least squares method showed the ability to produce better-quality calibration curves. Although an effect of the number of standards on detection limits was not observed, multiple standards are recommended because they provide more reliable calibration curves. An increase in sample uptake rate could improve detection limits; however, an improvement in detection limits with increased integration time was not observed, because the auto integration mode was used.
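To make the weighted least squares calibration and detection-limit ideas concrete, here is a minimal sketch (all concentrations, signals and blank readings below are invented for illustration; the thesis used ICP-AES data and compared several detection limit definitions):

```python
import numpy as np

# Hypothetical calibration data: standard concentrations (mg/L) and signals
conc = np.array([0.0, 0.1, 0.5, 1.0, 5.0, 10.0])
signal = np.array([2.0, 12.1, 52.3, 103.0, 504.0, 1009.0])

# Weighted least squares: assume signal variance grows with the signal,
# so weight each point by ~1/signal (a common heteroscedastic assumption)
weights = 1.0 / np.maximum(signal, 1.0)
# np.polyfit applies w to the unsquared residuals, so pass sqrt(weights)
slope, intercept = np.polyfit(conc, signal, 1, w=np.sqrt(weights))

# IUPAC-style detection limit: 3 * s_blank / slope, with s_blank the
# standard deviation of replicate blank signals (hypothetical replicates)
blank_replicates = np.array([1.8, 2.1, 2.0, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0, 2.1])
dl = 3 * blank_replicates.std(ddof=1) / slope
```

Weighting the low-concentration points more heavily is what gives the better-quality calibration curves near the detection limit that the study reports.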

Relevance:

20.00%

Publisher:

Abstract:

This paper studies seemingly unrelated linear models with integrated regressors and stationary errors. By adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by feasible generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. Simulation results suggest that this new estimator compares favorably with others already proposed in the literature. We apply the new estimator to test for purchasing power parity (PPP) among the G-7 countries. The test based on the efficient estimates rejects the PPP hypothesis for most countries.
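The leads-and-lags augmentation can be sketched in a single-equation setting (a simplified OLS illustration with simulated data; the paper's estimator is a systems feasible GLS using the long-run covariance matrix, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 300, 2  # sample size, number of leads and lags

# Simulated cointegrated pair: x is I(1), y = 1.5 * x + stationary error
x = np.cumsum(rng.normal(size=T))
y = 1.5 * x + rng.normal(scale=0.5, size=T)

dx = np.diff(x)  # first differences of the integrated regressor

# Augmented regression: y_t on a constant, x_t, and k leads and lags of dx
rows = range(k + 1, T - k - 1)
X = np.array([[1.0, x[t]] + [dx[t + j] for j in range(-k, k + 1)]
              for t in rows])
Y = y[k + 1 : T - k - 1]

beta = np.linalg.lstsq(X, Y, rcond=None)[0]
# beta[1] estimates the cointegrating coefficient (true value 1.5)
```

The lead/lag terms soak up correlation between the regressor's innovations and the equation error, which is what delivers the mixed normal limiting distribution.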

Relevance:

20.00%

Publisher:

Abstract:

The focus of the paper is the nonparametric estimation of an instrumental regression function P defined by conditional moment restrictions stemming from a structural econometric model: E[Y − P(Z) | W] = 0, involving endogenous variables Y and Z and instruments W. The function P is the solution of an ill-posed inverse problem, and we propose an estimation procedure based on Tikhonov regularization. The paper analyses identification and overidentification of this model and presents asymptotic properties of the estimated nonparametric instrumental regression function.
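The Tikhonov step itself, abstracted from the conditional-expectation estimation, can be illustrated on a discretized linear inverse problem (everything below is a toy stand-in for the operator equation behind E[Y − P(Z) | W] = 0, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# A smoothing (hence ill-posed) operator K and a smooth "true" function phi
K = np.exp(-((np.arange(n)[:, None] - np.arange(n)[None, :]) / 5.0) ** 2)
phi_true = np.sin(np.linspace(0.0, np.pi, n))
r = K @ phi_true + rng.normal(scale=0.01, size=n)  # noisy right-hand side

# Tikhonov-regularized solution: phi_alpha = (K'K + alpha I)^(-1) K' r;
# alpha > 0 trades a little bias for stability of the inversion
alpha = 1e-2
phi_hat = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ r)
```

Without the alpha * I term, tiny noise components aligned with the operator's small singular values would be amplified enormously; the regularization dampens exactly those directions.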

Relevance:

20.00%

Publisher:

Abstract:

This paper studies tests of joint hypotheses in time series regression with a unit root, in which weakly dependent and heterogeneously distributed innovations are allowed. We consider two types of regression: one with a constant and a lagged dependent variable, and the other with a trend added. The statistics studied are the regression "F-tests" originally analysed by Dickey and Fuller (1981) in a less general framework. The limiting distributions are found using functional central limit theory. New test statistics are proposed which require only already tabulated critical values but which are valid in a quite general framework (including finite-order ARMA models generated by Gaussian errors). This study extends the results on single coefficients derived in Phillips (1986a) and Phillips and Perron (1986).
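The first regression and its joint-hypothesis statistic can be sketched as follows (a simulation under the unit-root null; note that under this null the statistic follows the Dickey-Fuller "F" distribution, not the standard F distribution, so the tabulated DF critical values are the ones needed):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500

# Driftless random walk: the joint null (mu, rho) = (0, 1) holds
y = np.cumsum(rng.normal(size=T + 1))

# Regression y_t = mu + rho * y_{t-1} + e_t
X = np.column_stack([np.ones(T), y[:-1]])
b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
e = y[1:] - X @ b
s2 = e @ e / (T - 2)  # residual variance estimate

# Wald/F-type statistic for the joint hypothesis (mu, rho) = (0, 1)
diff = b - np.array([0.0, 1.0])
F = diff @ (X.T @ X) @ diff / (2 * s2)
```

Repeating this over many simulated walks and comparing the empirical quantiles of F with the Dickey-Fuller tables is the standard way to see why conventional F critical values are invalid here.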

Relevance:

20.00%

Publisher:

Abstract:

Many unit root and cointegration tests require an estimate of the spectral density function at frequency zero of some process. Kernel estimators based on weighted sums of autocovariances constructed using estimated residuals from an AR(1) regression are commonly used. However, it is known that with substantially correlated errors, the OLS estimate of the AR(1) parameter is severely biased. In this paper, we first show that this least squares bias induces a significant increase in the bias and mean-squared error of kernel-based estimators.
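A minimal sketch of such a kernel estimator (Bartlett weights on residual autocovariances from an AR(1) regression; the bandwidth choice and any recoloring step are omitted, and the simulated data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 400

# Simulate an AR(1) series with substantial correlation
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.8 * u[t - 1] + rng.normal()

# OLS AR(1) regression; the estimate of the AR parameter is biased
# downward in finite samples, which feeds into the kernel estimate
rho_hat = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])
e = u[1:] - rho_hat * u[:-1]

# Bartlett-kernel estimate of the spectral density at frequency zero
# (the long-run variance of the residuals), with an ad hoc bandwidth M
M = 10
n = len(e)
gamma = [e[j:] @ e[: n - j] / n for j in range(M + 1)]
lrv = gamma[0] + 2 * sum((1 - j / (M + 1)) * gamma[j]
                         for j in range(1, M + 1))
```

Since the residuals here are nearly white, lrv should sit close to the innovation variance of 1; the bias the paper analyses enters through rho_hat when it is used to rescale such an estimate.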

Relevance:

20.00%

Publisher:

Abstract:

Context: Case-control studies are widely used by epidemiologists to assess the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analysing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the oversampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of the risk sets for the analysis of case-control data remains to be elucidated, and to be studied in the case of time-dependent variables. Objective: The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods: I identified new, potentially optimal risk-set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights are assigned to cases and controls so as to reflect the proportions of cases and non-cases in the source population. The properties of the exposure-effect estimators were studied by simulation.
Different aspects of exposure were generated (intensity, duration, cumulative exposure value). The generated case-control data were then analysed with different versions of the Cox model, including the old and new risk-set definitions, as well as with conventional logistic regression for comparison. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of different smoking variables obtained with the different methods were compared with one another and with the simulation results. Results: The simulation results show that the estimates from the proposed new weighted Cox models, especially those of the Weighted Cox model, are much less biased than the estimates from the existing Cox models that simply include or exclude the future cases from each risk set. Moreover, the estimates of the Weighted Cox model were slightly, but systematically, less biased than those of logistic regression. The application to the real data shows larger differences between the estimates of logistic regression and those of the weighted Cox models for some time-dependent smoking variables. Conclusions: The results suggest that the proposed new weighted Cox model could be an interesting alternative to the logistic regression model for estimating the effects of time-dependent exposures in case-control studies.
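The weighting idea behind these models can be sketched as inverse-probability-of-sampling weights (the population and sample sizes below are invented, and the thesis's exact weighting of cases and controls within each risk set may differ from this simplified version):

```python
# Hypothetical source population and case-control sample sizes
population_cases = 1_000
population_noncases = 99_000
sampled_cases = 1_000       # all cases are sampled
sampled_controls = 2_000    # only a fraction of the non-cases

# Inverse-probability-of-sampling weights, so the weighted sample
# reproduces the case/non-case proportions of the source population
weight_case = population_cases / sampled_cases           # 1.0
weight_control = population_noncases / sampled_controls  # 49.5

weighted_total = (sampled_cases * weight_case
                  + sampled_controls * weight_control)
case_proportion = sampled_cases * weight_case / weighted_total  # 0.01
```

With these weights, each risk set behaves as if it had been drawn from the full cohort, which is why the weighted partial likelihood can correct the bias induced by oversampling cases.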

Relevance:

20.00%

Publisher:

Abstract:

The main focus of this research is the validation of a statistical method in pharmacoepidemiology. More precisely, we compare the results of a previous study, carried out with a case-control design nested in the cohort and used to account for average exposure to treatment, with: the results obtained in a cohort design using the time-varying exposure variable, without adjusting for time since exposure; the results obtained using cumulative exposure weighted by the recent past; and the results obtained with a Bayesian method. The covariates will be estimated by the classical approach as well as by a nonparametric Bayesian approach. For the latter, Bayesian model averaging will be used to model the uncertainty surrounding the choice of models. The technique used in the Bayesian approach was proposed in 1997 but, to our knowledge, it has not been used with a time-dependent variable. In order to model the cumulative effect of time-varying exposure, in the classical approach the function assigning weights according to the recent past will be estimated using regression splines. In order to compare the results with a previously conducted study, a cohort of persons with a diagnosis of hypertension will be constructed using the RAMQ and Med-Echo databases. A Cox model including two time-varying variables will be used. The time-varying variables considered in this thesis are the dependent variable (first cerebrovascular event) and one of the independent variables, namely the exposure.
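The weighted cumulative exposure construction can be sketched as follows (with a fixed decay weight function standing in for the spline-estimated one, and an invented exposure history):

```python
import numpy as np

# Hypothetical daily exposure history (e.g., dose) for one subject
exposure = np.array([1.0, 1.0, 0.0, 2.0, 2.0, 2.0, 0.0, 1.0, 1.0, 1.0])

# A weight function giving more influence to recent exposure; in the
# classical approach described above, the shape of w is estimated with
# regression splines, but here we fix a simple decay for illustration
def w(lag):
    return np.exp(-lag / 3.0)

# Weighted cumulative exposure at the end of follow-up:
# WCE(t) = sum over past times s of w(t - s) * exposure(s)
t = len(exposure) - 1
wce = float(sum(w(t - s) * exposure[s] for s in range(len(exposure))))
```

Entering WCE(t) as a time-varying covariate in the Cox model is what lets the recent past weigh more heavily than distant exposure, instead of treating all past doses equally.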