Abstract:
Allele-specific polymerase chain reaction (PCR) was used to screen for benomyl resistance and to characterize its levels and frequencies in field populations of Venturia inaequalis over two seasons. Three hundred isolates of V. inaequalis were collected each season from infected leaves of Malus × domestica Borkh. cv. McIntosh. The trees used had been sprayed in the year prior to collection with five applications of benomyl, its homologue Azindoyle, or water. Monoconidial isolates of V. inaequalis were grown on 2% potato dextrose agar (PDA) for four weeks. Each isolate was taken from a single lesion on a single leaf. Total genomic DNA was extracted from the four-week-old colonies of V. inaequalis and used as template in PCR reactions. PCR reactions employed allele-specific primers, each designed to amplify fragments from a specific allele. Primer Vin was specific for mutations conferring the ben^VHR phenotype and was expected to amplify a 171 bp DNA fragment from ben^VHR alleles only. Primers BenHR and BenMR were specific for mutations conferring the ben^HR and ben^MR phenotypes, respectively, and were expected to amplify 172 bp and 165 bp DNA fragments from the ben^HR and ben^MR alleles, respectively. Of the 593 isolates tested, 414 (69.9%) were benomyl sensitive (ben^S) and 179 (30.1%) were benomyl resistant. All the benomyl-resistant alleles were ben^VHR, since neither the ben^HR nor the ben^MR alleles were detected. Frequencies of benomyl resistance were 23%, 24%, and 23% for the 1997 collections, and 46%, 26%, and 38% for the 1998 collections, for the benomyl, Azindoyle, and water treatments, respectively. A growth assay was performed to evaluate the applicability of PCR for monitoring benomyl resistance in fungal field populations. Tests were performed on 14 isolates representing the two phenotypes (ben^S and ben^VHR alleles) characterized by PCR.
Results of those tests were in agreement with the PCR results. Enzyme digestion was also used to evaluate the accuracy and reliability of the PCR products. The mutation associated with the ben^VHR phenotype creates a unique site for the endonuclease Bsh1236I, allowing the use of enzyme digestion. Isolates characterized by PCR as carrying ben^VHR alleles had this restriction site for the Bsh1236I enzyme. The most time-consuming aspect of this study was growing fungal isolates on culture media for DNA extraction. In addition, the risk of contamination or of losing the fungus during growth was relatively high. A technique for extracting DNA directly from lesions on leaves has been used (Luck and Gillings 1995). In order to apply this technique in experiments designed to monitor fungicide resistance, a lesion has to be homogeneous for fungicide sensitivity. For this purpose, the PCR protocol was used to determine lesion homogeneity. One hundred monoconidial isolates of V. inaequalis from 10 lesions (10 conidia/lesion) were tested for their phenotypes with respect to benomyl sensitivity. Conidia of six lesions were homogeneous, while conidia of the remaining lesions were mixtures of ben^S and ben^VHR phenotypes. Neither the ben^HR nor the ben^MR phenotype was detected.
Abstract:
Recent dose-response sleep restriction studies, in which nightly sleep is curtailed to varying degrees (e.g., 3, 5, or 7 hours), have found cumulative, dose-dependent changes in sleepiness, mood, and reaction time. However, brain activity has typically not been measured, and the attention-based tests employed tend to be simple (e.g., reaction time). One task that addresses the behavioural and electrophysiological aspects of a specific attention mechanism is the Attentional Blink (AB), which shows that report accuracy for a second target (T2) is impaired when it is presented soon after a first target (T1). The aim of the present study was to examine behavioural and electrophysiological responses to the AB task to elucidate how sleep restriction impacts attentional capacity. Thirty-six young adults spent four consecutive days and nights in a sleep laboratory where sleep, food, and activity were controlled. The protocol began with a baseline sleep (8 hours), followed by two nights of sleep restriction (3, 5, or 8 hours of sleep) and a recovery sleep (8 hours). An AB task was administered each day at 11 am. Results from a basic battery of tests (e.g., sleepiness, mood, reaction time) confirmed the effectiveness of the sleep restriction manipulation. In terms of the AB, baseline performance was typical (i.e., T2 accuracy was impaired when T2 was presented soon after T1); however, no changes in any AB behavioural measures were observed following sleep restriction for the 3- or 5-hour groups. The only statistically significant electrophysiological result was a decrease in P300 amplitude (for T1) from baseline to the second sleep restriction night for the 3-hour group. Therefore, following a brief, two-night sleep restriction paradigm, brain functioning was impaired for the T1 of the AB in the absence of a behavioural deficit. Study limitations and future directions are discussed.
Abstract:
A system comprising a Bomem interferometer and an LT3-110 Heli-Tran cryostat was set up to measure the reflectance of materials in the mid-infrared spectral region. Several tests were conducted to ensure the consistency and reliability of the system. Silicon and chromium, two materials with well-known optical properties, were measured to test the accuracy of the system, and the results were found to be in good agreement with the literature. Reflectance measurements on pure SnTe and several Pb- and Mn-doped alloys were carried out. These materials were chosen because they exhibit a strong plasma edge in the mid-infrared region. The optical conductivity and several related optical parameters were calculated from the measured reflectance. Very low temperature measurements were carried out in the far-infrared on Sn98Mn2Te, and the results are indicative of a spin-glass phase at 0.8 K. Resistivity measurements were made at room temperature. The resistivity values were found, as expected, to decrease with increasing carrier concentration and to increase with increasing manganese concentration.
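The plasma edge driving the choice of materials can be illustrated with a simple Drude dielectric function: reflectance stays near unity below the screened plasma frequency and drops sharply above it. A minimal sketch with purely illustrative parameters (not the measured SnTe values; the function name is ours):

```python
import numpy as np

def drude_reflectance(omega, omega_p, gamma, eps_inf=1.0):
    """Normal-incidence reflectance of a Drude conductor:
    eps(w) = eps_inf - wp^2 / (w^2 + i*w*gamma),
    R = |(sqrt(eps) - 1) / (sqrt(eps) + 1)|^2.
    Below the screened plasma edge (wp / sqrt(eps_inf)) the reflectance
    is near unity; above it, R drops sharply."""
    eps = eps_inf - omega_p**2 / (omega**2 + 1j * omega * gamma)
    n = np.sqrt(eps)  # complex refractive index
    return np.abs((n - 1) / (n + 1)) ** 2

# illustrative parameters in cm^-1 (high eps_inf mimics a narrow-gap semiconductor)
w = np.linspace(100, 4000, 500)
R = drude_reflectance(w, omega_p=2000.0, gamma=50.0, eps_inf=40.0)
```

The screened plasma edge here sits near omega_p / sqrt(eps_inf) ≈ 316 cm^-1; fitting such a model to measured reflectance is one way the optical conductivity and related parameters can be extracted.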
Abstract:
The conclusion of the article states "it appears that previously learned choices may affect future choices in Y-mazes for cattle. Another area that needs to be researched is the effects of a mildly aversive treatment versus a severely aversive treatment on the tendency of a bovine to resist changing a learned choice".
Abstract:
Research report
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. The implications for inference are twofold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied to all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the LR statistic is exploited to derive general nuisance-parameter-free bounds on its distribution for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap), which may be applied when the bounds are not conclusive.
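The Monte Carlo test technique invoked here can be sketched in a few lines: when a statistic is pivotal under the null, its finite-sample distribution can be simulated, and ranking the observed value among N simulated ones yields an exact p-value whenever alpha*(N+1) is an integer. A minimal illustration (the function names and the placeholder statistic are ours, not the paper's):

```python
import numpy as np

def mc_test_pvalue(stat_obs, simulate_null_stat, n_rep=999, seed=0):
    """Monte Carlo test p-value for a statistic that is pivotal under the
    null: simulate n_rep independent draws of the statistic under the null
    and rank the observed value among them.  With n_rep chosen so that
    alpha * (n_rep + 1) is an integer, the resulting test is exact."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_null_stat(rng) for _ in range(n_rep)])
    # (1 + number of simulated stats >= observed) / (n_rep + 1)
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

def sim_stat(rng):
    # placeholder pivotal statistic: squared standard-normal draw
    return rng.standard_normal() ** 2

p = mc_test_pvalue(6.0, sim_stat, n_rep=999)
```

The same scheme applies unchanged to the LR, trace, and maximum root criteria once their null distributions can be simulated, which is exactly what the invariance results guarantee.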
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
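The LM zero-correlation statistic underlying one of the tests above is, in its commonly used Breusch-Pagan (1980) form, T times the sum of squared pairwise contemporaneous residual correlations. A minimal sketch, assuming per-equation residuals are already in hand (the function name and simulated data are illustrative):

```python
import numpy as np

def lm_zero_correlation(residuals):
    """Breusch-Pagan LM statistic for zero contemporaneous correlation
    across equations: T * sum of squared pairwise residual correlations.
    residuals: (T, p) array with one column per equation.
    Asymptotically chi-squared with p*(p-1)/2 df under the null."""
    T, p = residuals.shape
    r = np.corrcoef(residuals, rowvar=False)
    iu = np.triu_indices(p, k=1)  # each pair (i, j), i < j, once
    return T * np.sum(r[iu] ** 2)

rng = np.random.default_rng(1)
u = rng.standard_normal((200, 3))  # independent disturbances: null is true
lam = lm_zero_correlation(u)
```

In the Monte Carlo version proposed in the paper, this statistic's simulated null distribution replaces the chi-squared approximation, which is what delivers exact size control.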
Abstract:
In this paper, we review recent developments in econometrics that may be of interest to researchers in fields other than economics, and we highlight the particular light that econometrics can shed on certain general themes of methodology and philosophy of science, such as falsifiability as a criterion of the scientific status of a theory (Popper), the underdetermination of theories by data (Quine), and instrumentalism. In particular, we emphasize the contrast between two styles of modelling - the parsimonious approach and the statistical-descriptive approach - and we discuss the links between the theory of statistical tests and the philosophy of science.
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods. Yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the new tests suggested. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics as well as (when relevant) unidentified-nuisance-parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
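Of the criteria listed, the Goldfeld-Quandt test is the simplest to sketch: order the sample by the variable suspected of driving the variance, omit a central block, and compare OLS residual variances across the two remaining subsamples with an F ratio, which is exactly F-distributed under Gaussian homoskedastic errors. An illustrative sketch, not the paper's code:

```python
import numpy as np
from scipy import stats

def goldfeld_quandt(y, X, omit_frac=0.2):
    """Goldfeld-Quandt homoskedasticity test on a sample already ordered
    by the suspect variable: drop a central fraction, fit OLS on each
    remaining subsample, and compare residual variances via an F ratio.
    Returns the F statistic and its upper-tail p-value."""
    n, k = X.shape
    n_omit = int(n * omit_frac)
    n1 = (n - n_omit) // 2
    def ssr(ys, Xs):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        e = ys - Xs @ beta
        return e @ e, len(ys) - k
    ssr1, df1 = ssr(y[:n1], X[:n1])    # low-variance subsample
    ssr2, df2 = ssr(y[-n1:], X[-n1:])  # high-variance subsample
    F = (ssr2 / df2) / (ssr1 / df1)
    return F, stats.f.sf(F, df2, df1)

rng = np.random.default_rng(2)
n = 120
x = np.sort(rng.uniform(1, 5, n))
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n) * x  # error s.d. grows with x
F, pval = goldfeld_quandt(y, X)
```

In the paper's framework the same statistic would instead be referred to a simulated (Monte Carlo) null distribution, which is what allows the non-Gaussian cases to be handled exactly.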
Abstract:
In this paper, we examine recent developments in econometrics in the light of the theory of statistical tests. We first review some fundamental principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of test theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of test theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze several particular cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for nonparametric hypotheses, including the construction of procedures robust to heteroskedasticity, non-normality, or dynamic misspecification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional theory. Finally, we stress the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be established in finite samples.
Abstract:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests properly adjusted to deal with discrete distributions. We also propose a combined test procedure, whose level is again perfectly controlled through the Monte Carlo test technique and has better power properties than the individual tests that are combined. Finally, in a simulation experiment, we show that the technique suggested provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
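The permutational versions mentioned above rest on exchangeability: under the null of equal distributions, relabelling the pooled observations reproduces the null distribution of any two-sample statistic, and a Monte Carlo p-value that counts ties in the numerator remains valid for discrete distributions. A sketch using the absolute difference of means as the statistic (our choice of statistic and names; the paper studies and combines several):

```python
import numpy as np

def permutation_test(x, y, stat=lambda a, b: abs(a.mean() - b.mean()),
                     n_perm=999, seed=0):
    """Permutation version of a two-sample test: under H0 the pooled
    observations are exchangeable, so randomly relabelling them simulates
    the null distribution of the statistic.  The Monte Carlo p-value
    counts ties in the numerator, as required for discrete data."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)
    t_obs = stat(x, y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of the pooled sample
        if stat(pooled[:n], pooled[n:]) >= t_obs:
            count += 1
    return (1 + count) / (1 + n_perm)

rng = np.random.default_rng(3)
x = rng.standard_normal(40)
y = rng.standard_normal(40) + 1.0  # shifted distribution: H0 false
p_shift = permutation_test(x, y)
p_null = permutation_test(rng.standard_normal(40), rng.standard_normal(40))
```

A combined procedure, as proposed in the paper, would apply the same machinery to the minimum p-value over several such statistics.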
Abstract:
This paper presents several exact results on the second moments of sample autocorrelations, for Gaussian or non-Gaussian series. We first give general formulae for the mean, variance, and covariances of the sample autocorrelations in the case where the variables of the series are exchangeable. From these we derive bounds for the variances and covariances of the sample autocorrelations. These bounds are used to obtain exact limits on the critical points when testing the randomness of a time series, without any assumption on the form of the underlying distribution. We give exact and explicit formulae for the variances and covariances of the autocorrelations in the case where the series is Gaussian white noise. We show that these results also hold when the distribution of the series is spherically symmetric. We present simulation results which clearly indicate that the distribution of the sample autocorrelations is much better approximated by standardizing them with the exact mean and variance and using the asymptotic N(0,1) law than by employing the approximate second moments commonly in use. We also study the exact variances and covariances of autocorrelations based on the ranks of the observations.
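The finite-sample effects at issue here are easy to see by simulation: the exact mean of a sample autocorrelation under white noise is negative, not zero, so standardizing with the exact moments rather than the usual approximations matters. A small numerical check (the definitions and parameter values are ours):

```python
import numpy as np

def sample_autocorr(x, k):
    """Lag-k sample autocorrelation, with deviations taken from the
    sample mean."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.sum(d[k:] * d[:-k]) / np.sum(d * d)

# Simulated first two moments of r_1 for Gaussian white noise, n = 20.
rng = np.random.default_rng(4)
n, reps = 20, 5000
r1 = np.array([sample_autocorr(rng.standard_normal(n), 1)
               for _ in range(reps)])
mean_r1, var_r1 = r1.mean(), r1.var()
```

With n = 20 the simulated mean of r_1 is clearly below zero, illustrating why the exact-moment standardization studied in the paper improves the N(0,1) approximation.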
Abstract:
This paper studies tests of joint hypotheses in time series regression with a unit root, in which weakly dependent and heterogeneously distributed innovations are allowed. We consider two types of regression: one with a constant and a lagged dependent variable, and the other with a trend added. The statistics studied are the regression "F-tests" originally analysed by Dickey and Fuller (1981) in a less general framework. The limiting distributions are found using functional central limit theory. New test statistics are proposed which require only already tabulated critical values but which are valid in a quite general framework (including finite-order ARMA models generated by Gaussian errors). This study extends the results on single coefficients derived in Phillips (1986a) and Phillips and Perron (1986).
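In the trend case, the joint "F-test" takes the null (trend coefficient, autoregressive root) = (0, 1). A sketch of how such a statistic is computed (our code, in Dickey and Fuller's simpler Gaussian framework; the statistic must be compared with their nonstandard critical values, not the usual F tables):

```python
import numpy as np

def df_joint_F(y):
    """F-type statistic for the joint null (beta, rho) = (0, 1) in
    y_t = mu + beta*t + rho*y_{t-1} + e_t.  Computed as an ordinary
    regression F statistic, but its null distribution is nonstandard
    (Dickey-Fuller tables are required)."""
    T = len(y) - 1
    t = np.arange(1, T + 1)
    X = np.column_stack([np.ones(T), t, y[:-1]])  # constant, trend, lag
    b, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    e = y[1:] - X @ b
    ssr_u = e @ e
    # restricted model under H0: y_t = mu + y_{t-1} + e_t, i.e. dy_t = mu + e_t
    dy = np.diff(y)
    ssr_r = np.sum((dy - dy.mean()) ** 2)
    q = 2  # two restrictions
    return ((ssr_r - ssr_u) / q) / (ssr_u / (T - 3))

rng = np.random.default_rng(5)
rw = np.cumsum(rng.standard_normal(250))  # driftless random walk: H0 true
F_rw = df_joint_F(rw)
```

The contribution of the paper is precisely that statistics of this form (suitably corrected) remain usable with weakly dependent, heterogeneous innovations while keeping the already tabulated critical values.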
Abstract:
In this paper several additional GMM specification tests are studied. A first test is a Chow-type test for structural parameter stability of GMM estimates. The test is inspired by the fact that "taste and technology" parameters are uncovered. The second set of specification tests are VAR encompassing tests. It is assumed that the DGP has a finite VAR representation. The moment restrictions suggested by economic theory and exploited in the GMM procedure represent one possible characterization of the DGP; the VAR is a different but compatible characterization of the same DGP. The idea of the VAR encompassing tests is to compare parameter estimates of the Euler conditions and VAR representations of the DGP obtained separately with those obtained jointly. There are several ways to construct such joint systems, which are discussed in the paper. Several applications are also discussed.
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, over the 1976-1992 period. We also test a conditional APT model by using the difference between the 30-day rate (CDB) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from a total of 25 securities traded on the Brazilian markets. The inclusion of this second factor proves to be crucial for the appropriate pricing of the portfolios.
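At its simplest, GMM estimation of a one-factor model with a constant and the factor itself as instruments is just-identified, and the sample moment conditions reduce to OLS. A minimal sketch of that special case (variable names and simulated data are ours, not the paper's conditional specification):

```python
import numpy as np

def gmm_capm_beta(excess_ret, market):
    """Just-identified GMM estimate of (alpha, beta) for one asset under
    the moment conditions E[e_t] = 0 and E[e_t * f_t] = 0, where
    e_t = r_t - alpha - beta * f_t.  With these instruments, solving the
    sample moment conditions coincides with OLS of r_t on (1, f_t)."""
    f = np.column_stack([np.ones_like(market), market])
    theta = np.linalg.solve(f.T @ f, f.T @ excess_ret)
    return theta  # (alpha, beta)

rng = np.random.default_rng(6)
mkt = rng.standard_normal(500) * 0.05          # simulated market factor
r = 0.001 + 1.2 * mkt + rng.standard_normal(500) * 0.02
alpha_hat, beta_hat = gmm_capm_beta(r, mkt)
```

The conditional models in the paper generalize this by interacting the moment conditions with lagged information variables, which makes the system overidentified and calls for the full GMM weighting machinery.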