836 results for linear-regression
Abstract:
The streams flowing through the Niagara Escarpment are paved by coarse carbonate and sandstone sediments which have originated from the escarpment units and can be traced downstream from their source. Fifty-nine sediment samples were taken from five streams, over distances of 3,000 to 10,000 feet (915 to 3,050 m), to determine downstream changes in sediment composition, textural characteristics and sorting. In addition, fluorometric velocity measurements were used in conjunction with measured discharge and flow records to estimate the frequency of sediment movement. The frequency of sediments of a given lithology changes downstream in direct response to the outcrop position of the formations in the channels. Clasts derived from a single stratigraphic unit usually reach a maximum frequency within the first 1,000 feet (305 m) of transport. Sediments derived from formations at the top of waterfalls reach a modal frequency farther downstream than material originating at the base of waterfalls. Downstream variations in sediment size over the lengths of the study reaches reflect the changes in channel morphology and lithologic composition of the sediment samples. Linear regression analyses indicate that there is a decrease in the axial lengths between the initial and final samples and that the long axis decreases in length more rapidly than the intermediate, while the short axis remains almost constant. Carbonate sediments from coarse-grained, fossiliferous units are more variable in size than fine-grained dolostones and sandstones. The average sphericity for carbonates and sandstones increases from 0.65 to 0.67, while maximum projection sphericity remains nearly constant with an average value of 0.52. Pebble roundness increases more rapidly than either of the sphericity parameters, and the sediments change from subrounded to rounded. The Hjulstrom diagram indicates that the velocities required to initiate transport of sediments with an average intermediate diameter of 10 cm range from 200 cm/s to 300 cm/s (6.6 ft/s to 9.8 ft/s). From the modal velocity-discharge relations, the flows corresponding to these velocities are greater than 3,500 cfs (99 m³/s). These discharges occur less than 0.01 per cent (0.4 days) of the time and correspond to a discharge occurring during the spring flood.
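As an illustration of the shape parameters discussed above, the sketch below computes Krumbein's intercept sphericity and the Sneed-Folk maximum projection sphericity from the long (a), intermediate (b), and short (c) clast axes, together with a least-squares downstream trend in the long axis. The axis values and distances are hypothetical, not the thesis data.

```python
# Sketch (not from the thesis): two standard pebble-shape indices and a simple
# downstream size trend, computed on illustrative axis measurements.
import numpy as np

# long (a), intermediate (b), short (c) axes in cm for a few hypothetical clasts
a = np.array([12.0, 10.5, 9.0, 8.2])
b = np.array([8.0, 7.5, 6.8, 6.1])
c = np.array([5.0, 4.9, 4.8, 4.7])
distance_ft = np.array([0.0, 1000.0, 2000.0, 3000.0])

krumbein_sphericity = (b * c / a**2) ** (1.0 / 3.0)          # intercept sphericity
max_projection_sphericity = (c**2 / (a * b)) ** (1.0 / 3.0)  # Sneed & Folk form

# downstream decline of the long axis: least-squares slope (cm per foot of transport)
slope, intercept = np.polyfit(distance_ft, a, 1)
print(krumbein_sphericity, max_projection_sphericity, slope)
```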
Abstract:
Two groups of rainbow trout were acclimated to 2°, 10°, and 18°C. Plasma sodium, potassium, and chloride levels were determined for both. One group was employed in the estimation of branchial and renal (Na+-K+)-stimulated, (HCO3-)-stimulated, and (Mg++)-dependent ATPase activities, while the other was used in the measurement of carbonic anhydrase activity in the blood, gill and kidney. Assays were conducted using two incubation temperature schemes. One provided for incubation of all preparations at a common temperature of 25°C, a value equivalent to the upper incipient lethal level for this species. In the other procedure the preparations were incubated at the appropriate acclimation temperature of the sampled fish. Trout were able to maintain plasma sodium and chloride levels essentially constant over the temperature range employed. The different incubation temperature protocols produced different levels of activity and, in some cases, contrary trends with respect to acclimation temperature. This information was discussed in relation to previous work on gill and kidney. The standing-gradient flow hypothesis was discussed with reference to the structure of the chloride cell, known thermally-induced changes in ion uptake, and the enzyme activities obtained in this study. Modifications of the model of gill ion uptake suggested by Maetz (1971) were proposed, resulting in high- and low-temperature models. In short, ion transport at the gill at low temperatures appears to involve sodium and chloride uptake by heteroionic exchange mechanisms working in association with carbonic anhydrase. Gill (Na+-K+)-ATPase and erythrocyte carbonic anhydrase seem to provide the supplemental uptake required at higher temperatures. It appears that the kidney is prominent in ion transport at low temperatures while the gill is more important at high temperatures. Linear regression analyses involving weight, plasma ion levels, and enzyme activities indicated several trends, the most significant being the interrelationship observed between plasma sodium and chloride. These and other data obtained in the study were considered in light of the theory that a link exists between plasma sodium and chloride regulatory mechanisms.
Abstract:
Background: The purpose of this study was to examine the relationships between physical activity and healthy eating behaviour with the participant's motives and goals for each health behaviour. Methods: Participants (N = 121; 93.2% female), enrolled in commercial weight-loss programs at the time of data collection, completed self-reported instruments using a web-based interface that were in accordance with Deci and Ryan's (2002) Self-Determination Theory (SDT). Results: Multiple linear regression models revealed that motivation and goals collectively accounted for between 0.21 and 0.29 of the variance in physical activity behaviour and between 0.03 and 0.16 of the variance in healthy eating behaviour in this sample. In general, goals regarding either behaviour did not appear to have strong predictive relationships with each health behaviour beyond the contributions of motives. Discussion: Overall, findings from this study suggest that motives seem to matter more than goals for both physical activity and healthy eating behaviour in clientele of commercial weight-loss programs. Therefore, commercial weight-loss program implementers may want to consider placing more attention on motives than goals for their clientele when designing weight-loss and weight-maintenance initiatives.
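A minimal sketch of the kind of multiple linear regression described here, fitting a behaviour score on motive and goal scores with statsmodels. The variable names and simulated data are placeholders, not the study's SDT instruments; the R² plays the role of the variance proportions reported above.

```python
# Illustrative sketch only: regress a behaviour score on motive and goal scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 121
df = pd.DataFrame({
    "motives": rng.normal(size=n),
    "goals": rng.normal(size=n),
})
# hypothetical data-generating process: motives matter more than goals
df["physical_activity"] = 0.5 * df["motives"] + 0.1 * df["goals"] + rng.normal(size=n)

model = smf.ols("physical_activity ~ motives + goals", data=df).fit()
print(model.rsquared)   # share of variance explained, analogous to the 0.21-0.29 range
print(model.params)     # does "goals" add much beyond "motives"?
```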
Abstract:
For years institutionalization has been the primary method of service delivery for persons with developmental disabilities (DD). However, in Ontario the last institution was closed on March 31, 2009, with former residents now residing in small, community-based homes. This study investigated potential predictors of primary health care utilization by former residents. Several indirect measures were employed to gather information from 60 participants on their age, health status, adaptive functioning level, problem behaviour, mental health status, and total psychotropic medication use. A direct measure was used to gather primary health care utilization information, which served as the dependent variable. A stepwise linear regression failed to reveal significant predictors of health care utilization. The data were subsequently dichotomized, and the outcomes of a logistic regression analysis indicated that mental health status, psychotropic medication use, and an interaction between mental health status and health status significantly predicted higher primary health care usage.
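A hedged sketch of the dichotomized logistic regression step, including an interaction term between mental health status and health status. Predictor names and data are hypothetical stand-ins for the study's indirect measures.

```python
# Sketch of a logistic regression on a dichotomized utilization outcome with an
# interaction term; data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "mental_health": rng.normal(size=n),
    "health_status": rng.normal(size=n),
    "psychotropic_meds": rng.integers(0, 5, size=n),
})
logit_p = (0.8 * df["mental_health"] + 0.5 * df["psychotropic_meds"]
           + 0.6 * df["mental_health"] * df["health_status"])
df["high_utilization"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# "mental_health * health_status" expands to both main effects plus the interaction
fit = smf.logit("high_utilization ~ mental_health * health_status + psychotropic_meds",
                data=df).fit(disp=0)
print(fit.summary())
```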
Abstract:
It is generally accepted among scholars that individual learning and team learning contribute to the concept we refer to as organizational learning. However, the small number of quantitative and qualitative studies that have investigated their relationship reported contradictory results. This thesis investigated the relationship between individual learning, team learning, and organizational learning. A survey instrument was used to collect information on individual learning, team learning, and organizational learning. The study sample comprised supervisors from the clinical laboratories in teaching hospitals and community hospitals in Ontario. The analyses utilized a linear regression to investigate the relationship between individual and team learning. The relationships between individual and organizational learning, and between team and organizational learning, were simultaneously investigated with canonical correlation and set correlation. T-tests and multivariate analysis of variance were used to compare the differences in learning scores of respondents employed by laboratories in teaching hospitals and those employed by community hospitals. The study validated its test results with 1,000 bootstrap replications. Results from this study suggest that there is a moderate correlation between individual learning and team learning. The correlations between individual learning and organizational learning, and between team learning and organizational learning, appeared to be weak. The scores of the three learning levels show statistically significant differences between respondents from laboratories in teaching hospitals and respondents from community hospitals.
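A minimal sketch, assuming the bootstrap validation amounts to resampling cases 1,000 times and recomputing a correlation (here individual vs. team learning scores). The data are simulated, not the survey responses, and the canonical and set correlation steps are omitted.

```python
# Bootstrap of a correlation coefficient with 1,000 replications on simulated data.
import numpy as np

rng = np.random.default_rng(42)
n = 80
individual = rng.normal(size=n)
team = 0.5 * individual + rng.normal(scale=0.9, size=n)   # moderate true correlation

boot = np.empty(1000)
for b in range(1000):
    idx = rng.integers(0, n, size=n)                      # resample cases with replacement
    boot[b] = np.corrcoef(individual[idx], team[idx])[0, 1]

print(np.corrcoef(individual, team)[0, 1])                # point estimate
print(np.percentile(boot, [2.5, 97.5]))                   # bootstrap 95% interval
```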
Abstract:
Children with developmental coordination disorder (DCD) are often referred to as clumsy because of their compromised motor coordination. Clumsiness and slow movement performances while scripting in children with DCD often result in poor academic performance and a diminished sense of scholastic competence. This study aimed to examine the mediating role of perceived scholastic competence in the relationship between motor coordination and academic performance in children in grade six. Children receive a great deal of comparative information on their academic performances, which influences a student's sense of scholastic competence and self-efficacy. Perceived academic self-efficacy has a significant impact on children's academic performance, their willingness to complete academic tasks, and their self-motivation to improve where necessary. Independent t-tests revealed a significant difference (p < .001) between DCD and non-DCD groups when compared on their overall grade six average, with the DCD group performing significantly lower. Independent t-tests found no significant difference between DCD and non-DCD groups for perceived scholastic competence. However, multiple linear regression analysis revealed a significant mediating role of 15% by perceived scholastic competence when examining the relationship between motor coordination and academic performance. While children with probable DCD may not rate their perceived scholastic competence lower than their healthy peers, there is a significant mediating effect on their academic performance.
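A sketch of a simple regression-based mediation decomposition (motor coordination to perceived scholastic competence to academic performance), in the spirit of the analysis described. Data and coefficients are simulated; the study's own figure is a roughly 15% mediated share.

```python
# Toy mediation decomposition: total effect vs. direct effect after adding the mediator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
coordination = rng.normal(size=n)
competence = 0.4 * coordination + rng.normal(size=n)                  # path a
grades = 0.5 * coordination + 0.3 * competence + rng.normal(size=n)   # paths c' and b

X_total = sm.add_constant(coordination)
c_total = sm.OLS(grades, X_total).fit().params[1]            # total effect c

X_full = sm.add_constant(np.column_stack([coordination, competence]))
c_direct = sm.OLS(grades, X_full).fit().params[1]            # direct effect c'

print("proportion mediated:", (c_total - c_direct) / c_total)
```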
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
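The generic Monte Carlo test logic underlying these exact procedures can be sketched as follows: when a statistic is pivotal under the null, simulate it N times under the null and compare with the observed value. The toy statistic below is a stand-in, not the Wilks LR criterion or the bounds procedure of the paper.

```python
# Generic Monte Carlo test: exact p-value (1 + #{simulated >= observed}) / (N + 1).
import numpy as np

def mc_pvalue(observed, simulate_null_stat, n_rep=99, rng=None):
    """Exact MC p-value for a pivotal statistic simulated under the null."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_null_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= observed)) / (n_rep + 1)

# toy example: testing a zero mean with a |t|-type statistic as the pivot
rng = np.random.default_rng(0)
y = rng.normal(loc=0.3, size=30)                       # data (mean may be nonzero)
t_obs = abs(y.mean()) / (y.std(ddof=1) / np.sqrt(30))

def sim(rng, n=30):
    z = rng.normal(size=n)                             # draw under the null
    return abs(z.mean()) / (z.std(ddof=1) / np.sqrt(n))

print(mc_pvalue(t_obs, sim, n_rep=99, rng=rng))
```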
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
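For reference, the standard LM zero-correlation (Breusch-Pagan type) statistic that the MC versions build on is T times the sum of squared contemporaneous residual correlations across equations; a sketch with simulated residuals is below. The paper's exact procedure simulates this statistic under the null rather than relying on the chi-square approximation.

```python
# LM zero-correlation statistic for a SURE system, on placeholder residuals.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
T, p = 100, 3
resid = rng.normal(size=(T, p))              # stand-in for OLS residuals of p equations

R = np.corrcoef(resid, rowvar=False)         # p x p residual correlation matrix
lm_stat = T * np.sum(np.tril(R, k=-1) ** 2)  # sum over i > j of r_ij^2

# Asymptotically chi-square with p(p-1)/2 degrees of freedom; an MC test would
# instead simulate lm_stat under the null to obtain an exact p-value.
print(lm_stat, chi2.sf(lm_stat, df=p * (p - 1) // 2))
```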
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
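A rough sketch of the two-sided autoregression idea: for a Markov (here AR(1)) series, even-indexed observations are conditionally independent given the odd-indexed ones, so each interior even-indexed value can be regressed on its two neighbours and standard OLS inference applied. The data are simulated, and the regression-with-covariates case treated in the paper is omitted.

```python
# Two-sided autoregression sketch on a simulated AR(1) series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(23)
T, rho = 301, 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.normal()

even = np.arange(2, T - 1, 2)                       # interior even indices
Y = y[even]
X = sm.add_constant(np.column_stack([y[even - 1], y[even + 1]]))
fit = sm.OLS(Y, X).fit()
print(fit.params)      # for a Gaussian AR(1), both neighbour coefficients are rho/(1+rho^2)
print(fit.conf_int())  # standard regression confidence intervals on the split sample
```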
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods. Yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type or combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the new tests suggested. We show that the MC test procedure conveniently solves the intractable null distribution problems, in particular those raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with: (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
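For contrast, the sketch below runs one of the standard asymptotic criteria (a Breusch-Pagan-Godfrey check) on simulated heteroskedastic data; the paper's contribution is to replace the asymptotic p-value with an exact Monte Carlo one by re-simulating the statistic under i.i.d. errors, a step omitted here.

```python
# Asymptotic Breusch-Pagan check on simulated data with variance growing in x.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(1, 5, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=x, size=n)    # error variance increases with x

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print(lm_stat, lm_pvalue)                          # asymptotic, not exact, p-value
```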
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
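A hedged sketch of the Gibbons-Ross-Shanken mean-variance efficiency statistic that the Gaussian case corresponds to, using one common form of the statistic; returns and betas are simulated, and this is not the paper's exact-test procedure.

```python
# GRS-type test of zero intercepts in a single-factor (market) model, one common form.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(11)
T, N = 120, 5                                    # months, test portfolios
market = rng.normal(0.006, 0.045, size=T)        # hypothetical excess market return
betas = rng.uniform(0.8, 1.2, size=N)
excess = market[:, None] * betas + rng.normal(scale=0.02, size=(T, N))

X = np.column_stack([np.ones(T), market])
coef, *_ = np.linalg.lstsq(X, excess, rcond=None)
alpha = coef[0]                                  # intercepts, zero under efficiency
resid = excess - X @ coef
Sigma = resid.T @ resid / T                      # MLE residual covariance

sharpe2 = (market.mean() / market.std(ddof=0)) ** 2
grs = ((T - N - 1) / N) * (alpha @ np.linalg.solve(Sigma, alpha)) / (1 + sharpe2)
print(grs, f_dist.sf(grs, N, T - N - 1))         # F(N, T-N-1) p-value under normality
```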
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to a simulation-based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926-1995.
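The empirical multivariate skewness and kurtosis measures referred to here are typically the Mardia-type quantities sketched below, computed on standardized residuals; the residuals in the sketch are simulated placeholders rather than output of an actual MLR fit.

```python
# Mardia-type multivariate skewness and kurtosis on placeholder residuals.
import numpy as np

rng = np.random.default_rng(9)
n, p = 300, 4
resid = rng.normal(size=(n, p))                 # stand-in for standardized residuals

centered = resid - resid.mean(axis=0)
S = centered.T @ centered / n                   # MLE covariance
G = centered @ np.linalg.solve(S, centered.T)   # matrix of Mahalanobis cross-products

skewness = np.mean(G ** 3)                      # Mardia b_{1,p}
kurtosis = np.mean(np.diag(G) ** 2)             # Mardia b_{2,p}; about p(p+2) under normality
print(skewness, kurtosis)
```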
Abstract:
This paper proposes exact inference methods (tests and confidence regions) for linear regression models with autocorrelated errors following a second-order autoregressive process [AR(2)], which may be nonstationary. The proposed approach generalizes the one described in Dufour (1990) for a regression model with AR(1) errors and proceeds in three steps. First, an exact confidence region is built for the vector of autoregressive coefficients (φ). This region is obtained by inverting tests of independence of the errors, applied to a transformed form of the model, against alternatives of dependence at lags one and two. Second, exploiting the duality between tests and confidence regions (test inversion), a joint confidence region is obtained for the vector φ and a vector of interest M of linear combinations of the regression coefficients of the model. Third, by a projection method, "marginal" confidence intervals as well as exact bound tests are obtained for the components of M. These methods are applied to models of the U.S. money stock (M2) and price level (implicit GNP deflator).
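A minimal sketch of the test-inversion step, simplified to an AR(1) error: a candidate autoregressive coefficient is kept in the confidence set if, after quasi-differencing with that value, an independence test does not reject. The paper works with AR(2) errors, exact tests, and a further projection step for the regression coefficients.

```python
# Confidence set for an AR(1) coefficient by test inversion over a grid.
import numpy as np

rng = np.random.default_rng(13)
T, true_phi = 200, 0.6
e = np.empty(T)
e[0] = rng.normal()
for t in range(1, T):
    e[t] = true_phi * e[t - 1] + rng.normal()          # AR(1) errors

def lag1_corr(u):
    return np.corrcoef(u[:-1], u[1:])[0, 1]

# simulated 95% critical value of |lag-1 correlation| under independence
sims = np.array([abs(lag1_corr(rng.normal(size=T - 1))) for _ in range(999)])
crit = np.quantile(sims, 0.95)

grid = np.linspace(-0.95, 0.95, 96)
kept = [phi for phi in grid
        if abs(lag1_corr(e[1:] - phi * e[:-1])) <= crit]   # not rejected => keep
print(min(kept), max(kept))                                # projection-style interval for phi
```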
Abstract:
We propose methods for testing hypotheses of non-causality at various horizons, as defined in Dufour and Renault (1998, Econometrica). We study in detail the case of VAR models and we propose linear methods based on running vector autoregressions at different horizons. While the hypotheses considered are nonlinear, the proposed methods only require linear regression techniques as well as standard Gaussian asymptotic distributional theory. Bootstrap procedures are also considered. For the case of integrated processes, we propose extended regression methods that avoid nonstandard asymptotics. The methods are applied to a VAR model of the U.S. economy.
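A hedged sketch of the horizon-h regression idea: to test non-causality from y2 to y1 at horizon h, regress y1 at time t+h on current and lagged values of both variables and test that the y2 coefficients are jointly zero. Data are simulated; lag length, horizon choice, and the robust covariance corrections a real application would need are simplified away.

```python
# Horizon-h non-causality check in a simulated bivariate system.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(17)
T, h = 400, 2
y1 = np.zeros(T); y2 = np.zeros(T)
for t in range(1, T):
    y2[t] = 0.5 * y2[t - 1] + rng.normal()
    y1[t] = 0.4 * y1[t - 1] + 0.3 * y2[t - 1] + rng.normal()   # y2 does cause y1

# regress y1_{t+h} on (y1_t, y1_{t-1}, y2_t, y2_{t-1})
Y = y1[1 + h:]
X = sm.add_constant(np.column_stack([y1[1:T - h], y1[:T - h - 1],
                                     y2[1:T - h], y2[:T - h - 1]]))
fit = sm.OLS(Y, X).fit()
print(fit.f_test("x3 = 0, x4 = 0"))     # joint test on the y2 coefficients
```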
Abstract:
In this paper, we propose exact inference procedures for asset pricing models, such as the capital asset pricing model (CAPM), that can be formulated in the framework of a multivariate linear regression, allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these tests exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
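A hedged sketch of a Monte Carlo goodness-of-fit check for a hypothesized stable error law: an observed Kolmogorov-Smirnov-type distance is compared with its simulated distribution under that law. Parameters, sample sizes, and the two-sample approximation are all illustrative choices; the paper additionally derives confidence sets for the tail and asymmetry parameters.

```python
# MC goodness-of-fit for a hypothesized stable law via a simulated KS-type distance.
import numpy as np
from scipy.stats import levy_stable, ks_2samp

rng = np.random.default_rng(29)
alpha0, beta0 = 1.8, 0.0                       # hypothesized tail and asymmetry parameters
returns = levy_stable.rvs(1.8, 0.0, size=250, random_state=rng)   # stand-in returns

def ks_distance(x, rng, m=2000):
    # distance between the sample and a large simulated sample from the null law
    ref = levy_stable.rvs(alpha0, beta0, size=m, random_state=rng)
    return ks_2samp(x, ref).statistic

d_obs = ks_distance(returns, rng)
sims = np.array([ks_distance(levy_stable.rvs(alpha0, beta0, size=250, random_state=rng), rng)
                 for _ in range(19)])          # small replication count to keep it quick
print((1 + np.sum(sims >= d_obs)) / (1 + len(sims)))   # MC p-value
```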