988 results for exact test
Abstract:
The Banff classification was introduced to achieve uniformity in the assessment of renal allograft biopsies. The primary aim of this study was to evaluate the impact of specimen adequacy on the Banff classification. All renal allograft biopsies obtained between July 2010 and June 2012 for suspicion of acute rejection were included. Pre-biopsy clinical data on the suspected diagnosis and the time from renal transplantation were provided to a nephropathologist who was blinded to the original pathological report. Second pathological readings were compared with the original to assess agreement stratified by specimen adequacy. Cohen's kappa and Fisher's exact test were used for statistical analyses. Forty-nine specimens were reviewed. Among these, 81.6% were classified as adequate, 6.12% as minimal, and 12.24% as unsatisfactory. The agreement analysis between the first and second readings yielded a kappa value of 0.97. Full agreement between readings was found in 75% of the adequate specimens, and in 66.7% and 50% of the minimal and unsatisfactory specimens, respectively. There was no agreement between readings in 5% of the adequate specimens and 16.7% of the unsatisfactory specimens. For the entire sample, full agreement was found in 71.4%, partial agreement in 20.4% and no agreement in 8.2% of the specimens. Fisher's exact test yielded a P value above 0.25, showing that the results, probably owing to the small sample size, were not statistically significant. Specimen adequacy may be a determinant of diagnostic agreement in renal allograft specimen assessment. While additional studies with larger case numbers are required to further delineate the impact of specimen adequacy on the reliability of histopathological assessments, specimen quality must be considered during clinical decision making when dealing with biopsy reports based on minimal or unsatisfactory specimens.
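As an aside, both statistics used in this abstract are easy to compute for a 2x2 table. The following minimal Python sketch (standard library only; the example tables are invented for illustration, not the study's data) shows Cohen's kappa and a one-sided Fisher's exact test:

```python
from math import comb

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 inter-rater agreement table:
    a = both readings agree on diagnosis 1, d = both agree on
    diagnosis 2, b and c = the two kinds of disagreement."""
    n = a + b + c + d
    p_obs = (a + d) / n                                     # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table with fixed
    margins: P(cell 'a' at least as large as observed) under the
    hypergeometric null distribution of no association."""
    n = a + b + c + d
    def p_table(x):  # hypergeometric probability of a table with cell a = x
        return comb(a + b, x) * comb(c + d, a + c - x) / comb(n, a + c)
    return sum(p_table(x) for x in range(a, min(a + b, a + c) + 1))

print(cohens_kappa(20, 5, 5, 20))           # ≈ 0.6
print(fisher_exact_one_sided(3, 1, 1, 3))   # 17/70 ≈ 0.243 (tea-tasting table)
```

For real analyses one would normally reach for a vetted implementation such as `scipy.stats.fisher_exact`, which also handles two-sided p-values.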
Abstract:
Vitamin D metabolites are important in the regulation of bone and calcium homeostasis, but also have a more ubiquitous role in the regulation of cell differentiation and immune function. Severely low circulating 25-hydroxyvitamin D [25(OH)D] concentrations have been associated with the onset of active tuberculosis (TB) in immigrant populations, although the association with latent TB infection (LTBI) has not received much attention. A previous study identified the prevalence of LTBI among a sample of Mexican migrant workers enrolled in Canada's Seasonal Agricultural Workers Program (SAWP) in the Niagara Region of Ontario. The aim of the present study was to determine the vitamin D status of the same sample, and to identify whether a relationship existed with LTBI. Studies of vitamin D deficiency and active TB are most commonly carried out among immigrant populations in non-endemic regions, in which reactivation of LTBI has occurred. Currently, there is limited knowledge of the association between vitamin D deficiency and LTBI. Entry into Canada ensured that these individuals did not have active TB, and LTBI status was established previously by an interferon-gamma release assay (IGRA) (QuantiFERON-TB Gold In-Tube®, Cellestis Ltd., Australia). Awareness of vitamin D status may enable individuals at risk of deficiency to improve their nutritional health, and those with LTBI to be aware of this risk factor for disease. The prevalence of vitamin D insufficiency among the Mexican migrant workers was determined from serum samples collected in the summer of 2007 as part of the cross-sectional LTBI study. Samples were measured for concentrations of the main circulating vitamin D metabolite, 25(OH)D, with a widely used 125I 25(OH)D RIA (DiaSorin Inc.®, Stillwater, MN), and were categorized as deficient (<37.5 nmol/L), insufficient (≥37.5 nmol/L, <80 nmol/L) or sufficient (≥80 nmol/L).
Fisher's exact tests and t tests were used to determine whether vitamin D status (sufficiency or insufficiency) or 25(OH)D concentrations differed significantly by sex or age category. Predictors of vitamin D insufficiency and of 25(OH)D concentrations were taken from questionnaires administered during the previous study, and analyzed in the present study using multiple regression prediction models. Fisher's exact test and the t test were used to determine whether vitamin D status or 25(OH)D concentration differed by LTBI status. The strength of the relationship between interferon-gamma (IFN-γ) concentration (released by peripheral T cells in response to TB antigens) and 25(OH)D concentration was analyzed using a Spearman correlation. Of the 87 participants included in the study (78% male; mean age 38 years), 14 were identified as LTBI positive, but none had any signs or symptoms of TB reactivation. Only 30% of the participants were vitamin D sufficient, whereas 68% were insufficient and 2% were deficient. Significant independent predictors of lower 25(OH)D concentrations were sex, number of years enrolled in the SAWP and length of stay in Canada. No significant differences were found between 25(OH)D concentrations and LTBI status. There was a significant moderate correlation between IFN-γ and 25(OH)D concentrations of LTBI-positive individuals. The majority of participants presented with vitamin D insufficiency but none were severely deficient, indicating that 25(OH)D concentrations do not decrease dramatically in populations who temporarily reside in Canada but return to their countries of origin during the Canadian winter. This study did not find a statistical relationship between low levels of vitamin D and LTBI, which suggests that in the presence of overall good health, lower-than-ideal levels of 25(OH)D may still exert a protective immunological effect against LTBI reactivation.
The challenge remains to determine a critical 25(OH)D concentration at which reactivation is more likely to occur.
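The Spearman correlation used in this abstract is simple to compute when there are no tied values. A small illustrative Python sketch (the data values are made up, not the study's measurements):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for samples without ties:
    1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between the ranks of x_i and y_i."""
    n = len(x)
    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly monotone data gives rho = 1; reversed order gives -1.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))   # 1.0
print(spearman_rho([1, 2, 3, 4], [40, 30, 20, 10]))   # -1.0
```

The 6·Σd² shortcut is only valid without ties; with tied observations one computes the Pearson correlation of midranks instead (as `scipy.stats.spearmanr` does).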
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods; yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with: (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin’s q and to a model of academic performance.
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926-1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
Abstract:
This paper proposes exact inference methods (tests and confidence regions) for linear regression models with autocorrelated errors following a second-order autoregressive process [AR(2)], which may be nonstationary. The proposed approach generalizes the one described in Dufour (1990) for a regression model with AR(1) errors and proceeds in three steps. First, an exact confidence region is built for the vector of coefficients of the autoregressive process (φ). This region is obtained by inverting tests of independence of the errors, applied to a transformed form of the model, against alternatives of dependence at lags one and two. Second, exploiting the duality between tests and confidence regions (test inversion), a joint confidence region is determined for the vector φ and a vector of interest M of linear combinations of the regression coefficients of the model. Third, by a projection method, "marginal" confidence intervals as well as exact bound tests are obtained for the components of M. These methods are applied to models of the U.S. money stock (M2) and price level (implicit GNP deflator).
Abstract:
The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable possibly discrete test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified asymptotically justified versions of the MMC method are also proposed and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
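The core of the MC test procedure in the nuisance-parameter-free case can be sketched in a few lines of Python (the function names here are mine, not the paper's):

```python
import random

def monte_carlo_pvalue(stat_obs, simulate_stat, n_rep=99, seed=None):
    """Monte Carlo test p-value (Dwass 1957, Barnard 1963): simulate
    the test statistic under the null n_rep times and rank the observed
    value among the replications.  For a continuous pivotal statistic
    the test is exact: under H0, P(p <= alpha) = alpha whenever
    alpha * (n_rep + 1) is an integer."""
    rng = random.Random(seed)
    sims = [simulate_stat(rng) for _ in range(n_rep)]
    n_ge = sum(s >= stat_obs for s in sims)       # replications >= observed
    return (n_ge + 1) / (n_rep + 1)

# Toy example: is an observed mean of 0.9 large for 10 N(0,1) draws?
def sim_mean(rng, n=10):
    return sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n

print(monte_carlo_pvalue(0.9, sim_mean, n_rep=99, seed=0))
```

With n_rep = 99, rejecting when p ≤ 0.05 gives a null rejection probability of exactly 0.05 for a continuous statistic; the MMC extension of the paper maximizes this p-value over the nuisance-parameter space instead of plugging in an estimate.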
Abstract:
We consider the problem of testing whether the observations X1, ..., Xn of a time series are independent with unspecified (possibly nonidentical) distributions symmetric about a common known median. Various bounds on the distributions of serial correlation coefficients are proposed: exponential bounds, Eaton-type bounds, Chebyshev bounds and Berry-Esseen-Zolotarev bounds. The bounds are exact in finite samples, distribution-free and easy to compute. The performance of the bounds is evaluated and compared with traditional serial dependence tests in a simulation experiment. The procedures proposed are applied to U.S. data on interest rates (commercial paper rate).
Abstract:
Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
Abstract:
Background: The deterioration of nutritional status associated with the loss of autonomy that accompanies the progression of dementia of the Alzheimer type (DAT) can be limited by an effective informal caregiver. In the long term, the caregiving role may affect the caregiver's own physical and psychological health. Objectives: (1) to describe the sociodemographic characteristics of the patients and their caregivers; (2) to examine the progression of the disease and of the study variables over the follow-up period; (3) to explore the possible relationship between the caregiver's perceived burden, the patients' nutritional status, and the stability of the caregiver's body weight. Hypotheses: the absence of caregiver burden is associated with better nutritional status in the patient; the deterioration of the patient's cognitive function is accompanied by an increase in the burden perceived by the caregiver; worsening caregiver burden leads to caregiver weight loss. Methods: The data analyzed come from the "Nutrition-Memory" study conducted between 2003 and 2006 in three cognition clinics located in university hospitals in Montreal. Forty-two community-dwelling patients with probable DAT and their caregivers were followed as dyads over an eighteen-month period. The analyses covered the data collected from recruitment to twelve months later, because of the small number of patients interviewed at the final measurement. The relationship between caregiver burden and the variables characterizing the patients' nutritional status was assessed using correlation analyses, the chi-square test, or Fisher's exact test.
Patients' cognitive status was assessed with the Mini-Mental State Examination score, caregiver burden was estimated with the Zarit Burden Interview score, and patients' nutritional status was defined by energy and protein adequacy, the score on a nutrition screening tool for the elderly, and the patients' weight and body mass index. Results: Caregivers' perceived burden was associated with energy adequacy in the patients. The number of patients with insufficient energy intakes was greater in dyads where caregivers perceived a higher burden. However, no association was observed between caregiver burden and patients' nutritional risk or protein adequacy. The deterioration of patients' cognitive function did not appear to cause an increase in their caregivers' burden. Moreover, an increase in caregiver burden was not accompanied by caregiver weight loss. A greater burden was, however, observed among caregivers of obese or overweight patients. Conclusion: Reducing caregivers' perceived burden could improve patients' food intakes and thereby limit or minimize the risk of deterioration of their nutritional status and of weight loss.
Abstract:
A significant number of autistic children have macrocephaly. Despite several studies of head circumference in autism, few have been conducted on adults, and the current adult head circumference (HC) references are about 20 years old. The objectives of this study were to construct an adult HC reference scale and to compare rates of macrocephaly between a group of autistic adults and a group of neurotypical adults. In this study, 221 adult male subjects were recruited from various settings in order to determine the best predictive model of HC and to construct the reference scale. Height and weight were measured for each participant to determine their influence on cranial dimensions. For the comparative part, 30 autistic and 36 neurotypical subjects, all adults, were recruited from the research laboratory's database. For the reference scale, the results showed positive correlations of HC with height and weight. After analysis, the joint contribution of height and weight was found to be the model offering the most significant results in predicting HC. For the comparative part, macrocephaly rates reached 10.00% among autistic subjects versus 2.56% among neurotypical subjects according to the linear regression formula obtained from the model. However, Fisher's exact test revealed no significant difference between the two groups. My results suggest that height and weight must be considered when constructing an HC reference and that, even with the new reference, macrocephaly rates remain higher in autistic adults than in neurotypical adults despite the absence of significant differences.
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions, the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1-p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample room for the use of graphics in HWE testing, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008).
This leads to attractive graphical representations in which large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
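For concreteness, the classical chi-square statistic for HWE referred to above can be computed directly from the three genotype counts; a short Python sketch (the counts are invented examples):

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square statistic (1 df, no continuity correction) for
    Hardy-Weinberg equilibrium at a bi-allelic locus: compare the
    observed genotype counts with the p^2, 2pq, q^2 expectations."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # allele frequency of A
    q = 1 - p
    expected = (n * p * p, n * 2 * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A sample in perfect HWE (p = q = 0.5) gives a statistic of 0;
# an excess of heterozygotes pushes the statistic up.
print(hwe_chi_square(25, 50, 25))   # 0.0
print(hwe_chi_square(10, 80, 10))   # 36.0
```

The statistic is compared with the chi-square critical value with one degree of freedom (3.84 at the 5% level); for small or skewed counts the exact tests mentioned in the abstract are preferred.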
Abstract:
Sensorineural hearing loss in newborns is more frequent when high-risk factors are present. Taking these risk factors into account, and using a neonatal screening test for hearing loss, namely Transient Otoacoustic Emissions (OEAT), an analytical case-control study was carried out with the aim of establishing whether there is any statistically significant association showing that having one or more risk factors for sensorineural hearing loss is associated with a failed or absent response on otoacoustic emissions. The study was conducted at the Hospital Universitario de la Samaritana in a sample of 192 newborns with one or more high-risk factors for sensorineural hearing loss, each of whom underwent Transient Otoacoustic Emissions testing. Of these, 176 passed the otoacoustic emissions test and were selected as the control group, while the remaining 16 newborns showed absent responses on the otoacoustic emissions and were considered the case group. Using chi-square tests, risk estimation and Fisher's exact test, it was found that there is no correlation between having high-risk factors for sensorineural hearing loss and obtaining failed responses on otoacoustic emissions.