24 results for unavoidable type ii error
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Small sample properties are of fundamental interest when only limited data is available. Exact inference is limited by constraints imposed by specific nonrandomized tests and, of course, also by lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
Abstract:
We discuss the formation of a vortex polycrystal in type II superconductors from the competition between pinning and elastic forces. We compute the elastic energy of a deformed grain boundary, which is strongly nonlocal, and obtain the depinning stress for weak and strong pinning. Our estimates for the grain size dependence on the magnetic field strength are in good agreement with previous experiments on NbMo. Finally, we discuss the effect of thermal noise on grain growth.
Abstract:
Hierarchical clustering is a popular method for finding structure in multivariate data, resulting in a binary tree constructed on the particular objects of the study, usually sampling units. The user faces the decision of where to cut the binary tree in order to determine the number of clusters to interpret, and there are various ad hoc rules for arriving at a decision. A simple permutation test is presented that diagnoses whether non-random levels of clustering are present in the set of objects and, if so, indicates the specific level at which the tree can be cut. The test is validated against random matrices to verify the type I error probability, and a power study is performed on data sets with known clusteredness to study the type II error.
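The permutation diagnostic described above can be sketched in Python. The test statistic used here (the largest jump between successive Ward merge heights, a common "where to cut" heuristic) and the column-wise permutation scheme are illustrative assumptions, not necessarily the authors' exact choices:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def max_merge_gap(X):
    """Test statistic: largest jump between successive merge heights
    in a Ward linkage tree (an assumed stand-in for the paper's statistic)."""
    heights = linkage(X, method="ward")[:, 2]
    return np.max(np.diff(heights))

def clustering_permutation_test(X, n_perm=999, seed=0):
    """Permute each column independently to destroy multivariate cluster
    structure while preserving marginal distributions, then compare the
    observed statistic against its permutation distribution."""
    rng = np.random.default_rng(seed)
    observed = max_merge_gap(X)
    count = 0
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        if max_merge_gap(Xp) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # permutation p-value

# Two well-separated clusters should give a small p-value
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(8, 1, (20, 4))])
p = clustering_permutation_test(X, n_perm=199)
```

Permuting columns independently, rather than rows of a random matrix, is one way to keep marginals fixed while breaking the joint structure; the abstract's validation against random matrices is a related but distinct null scheme.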
Abstract:
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor for the robustness and the power of the randomization test.
Abstract:
General signaling results in dynamic Tullock contests have long been missing; the reason is the intractability of the problems. In this paper, an uninformed contestant with valuation vx competes against an informed opponent whose valuation is either high, vh, or low, vl. We show that: (i) when the hierarchy of valuations is vh ≥ vx ≥ vl, there is no pooling, since sandbagging is too costly for the high type; (ii) when the order of valuations is vx ≥ vh ≥ vl, there is no separation if vh and vl are close, because sandbagging is cheap due to the proximity of valuations; however, if vh and vx are close, there is no pooling, as the first-period cost of pooling is high; (iii) for valuations satisfying vh ≥ vl ≥ vx, there is no separation if vh and vl are close, since bluffing in the first period is cheap for the low-valuation type; conversely, if vx and vl are close, there is no pooling, as bluffing in the first stage is too costly. JEL: C72, C73, D44, D82. KEYWORDS: Signaling, Dynamic Contests, Non-existence, Sandbag Pooling, Bluff Pooling, Separating
Abstract:
Kahneman and Tversky asserted a fundamental asymmetry between gains and losses, namely a reflection effect which occurs when an individual prefers a sure gain of $pz to an uncertain gain of $z with probability p, while preferring an uncertain loss of $z with probability p to a certain loss of $pz. We focus on this class of choices (actuarially fair), and explore the extent to which the reflection effect, understood as occurring at a range of wealth levels, is compatible with single-self preferences. We decompose the reflection effect into two components: a probability switch effect, which is compatible with single-self preferences, and a translation effect, which is not. To argue the first point, we analyze two classes of single-self, nonexpected utility preferences, which we label homothetic and weakly homothetic. In both cases, we characterize the switch effect as well as the dependence of risk attitudes on wealth. We also discuss two types of utility functions of a form reminiscent of expected utility but with distorted probabilities. Type I always distorts the probability of the worst outcome downwards, yielding attraction to small risks for all probabilities. Type II distorts low probabilities upwards and high probabilities downwards, implying risk aversion when the probability of the worst outcome is low. By combining homothetic or weakly homothetic preferences with Type I or Type II distortion functions, we present four explicit examples: all four display a switch effect and, hence, a form of reflection effect consistent with single-self preferences.
Abstract:
Aims. We revisit the vicinity of the microquasar Cygnus X-3 at radio wavelengths. We aim to improve our previous search for possible associated extended radio features/hot spots in the position angle of the Cygnus X-3 relativistic jets, focusing on shorter angular scales than previously explored. Methods. Our work is mostly based on analyzing modern survey and archive radio data, mainly including observations carried out with the Very Large Array and the Ryle Telescope. We also used deep near-infrared images that we obtained in 2005. Results. We present new radio maps of the Cygnus X-3 field computed after combining multi-configuration Very Large Array archive data at 6 cm and different observing runs at 2 cm with the Ryle Telescope. These are probably among the deepest radio images of Cygnus X-3 reported to date at cm wavelengths. Both interferometers reveal an extended radio feature within a few arc-minutes of the microquasar position, thus making our detection more credible. Moreover, this extended emission is possibly non-thermal, although this point still needs confirmation. Its physical connection with the microquasar is tentatively considered under different physical scenarios. We also report on the serendipitous discovery of a likely Fanaroff-Riley type II radio galaxy a short angular distance away from Cygnus X-3.
Abstract:
We explore the statistical properties of grain boundaries in the vortex polycrystalline phase of type-II superconductors. Treating grain boundaries as arrays of dislocations interacting through linear elasticity, we show that self-interaction of a deformed grain boundary is equivalent to a nonlocal long-range surface tension. This affects the pinning properties of grain boundaries, which are found to be less rough than isolated dislocations. The presence of grain boundaries has an important effect on the transport properties of type-II superconductors as we show by numerical simulations: our results indicate that the critical current is higher for a vortex polycrystal than for a regular vortex lattice. Finally, we discuss the possible role of grain boundaries in vortex lattice melting. Through a phenomenological theory we show that melting can be preceded by an intermediate polycrystalline phase.
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed with the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment in which the intervention is introduced. Another aim of the study was to test for the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too probable and, hence, the decision-making process carried out by applied researchers may be jeopardized.
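The p-value construction described above can be sketched as follows; the phase-mean difference statistic and the minimum phase length of three observations are illustrative assumptions, not the study's exact settings:

```python
import numpy as np

def ab_randomization_test(y, change_point, min_len=3):
    """Randomization test for a two-phase (AB) single-case design.

    The intervention point is treated as randomly assigned among all
    admissible positions (each phase at least `min_len` observations).
    The statistic is the absolute difference between phase means; the
    p-value is the proportion of admissible data divisions whose
    statistic is at least as extreme as the one actually observed.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    points = range(min_len, n - min_len + 1)  # admissible intervention points
    stats = {k: abs(y[:k].mean() - y[k:].mean()) for k in points}
    observed = stats[change_point]
    p = sum(s >= observed for s in stats.values()) / len(stats)
    return observed, p

# A clear level shift after the 5th observation:
y = [2, 3, 2, 3, 2, 7, 8, 7, 8, 7]
observed, p = ab_randomization_test(y, change_point=5)
# With only 5 admissible divisions the p-value can never drop
# below 1/5, however large the effect.
```

The hard floor on the p-value (one over the number of admissible divisions) is what makes the shape of the randomization distribution for each data division, and hence phase lengths, so consequential.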
Abstract:
Introduction: Medication errors are defined as any preventable incident that may cause harm to the patient or lead to inappropriate use of medicines while they are under the control of healthcare professionals or the patient. Errors in the preparation and administration of medication are the most common in the hospital setting and, despite the long chain the drug passes through, the nurse is the last person responsible for the action and thus plays a very important role in patient safety. Nurses devote 40% of their working day to medication-related tasks. Objective: To determine whether nurses make more errors when working with stock medication distribution systems or with unit-dose medication distribution systems. Methodology: A quantitative, observational, descriptive study, in which errors (or opportunities for error) made by the nurse during the preparation and administration phases of medication will be reported by means of a self-developed questionnaire. The elements to be identified are: the type of error, the causes that may have produced it, its potential severity and who could have prevented it, as well as the type of professional who produced it. Other relevant data are: the drug involved, together with the dose and route of administration, and the distribution system used. Sampling and sample: Sampling will be non-probabilistic and by convenience. Nurses considered by the researcher to have the characteristics required to participate in the study will be selected, so the sample will consist of the nurses who work in unit 40 of the Hospital del Mar and use a unit-dose medication distribution system, and the nurses who work in the emergency department (specifically in the level-two area) of the Hospital del Mar and use a stock medication distribution system.
Abstract:
In recent decades, technological advances have made extensive documentation available to us. But the philologist must be aware of the dangers of poor use of the documentary corpus in order to avoid creating dreaded ghost words. In this paper we recall the main sources of this type of error: folk etymology phenomena among speakers, copyists' errors, transcribers' errors in the interpretation of some abbreviations and graphic variants of the manuscripts, onomastic changes introduced by cartographers' ignorance of linguistic variants, gaps in the dating of some documents, confusion in the processes of lemmatization and the evaluation of texts... All these sources of error contribute, to a greater or lesser degree, to the distortion or masking of the data on which the research of philologists is based. Hence the importance of philological rigour in the transmission and study of ancient texts.
Abstract:
Purpose: The aim of this work is to analyse the effects of a physical activity programme on patients with type II bipolar disorder. Methods: This study used a small sample (n=14), divided into a control group (n=7) and an intervention group (n=7). An initial assessment of the sample was carried out using the Yesavage scale and the SF-12. The intervention group then took part in 20 physical activity sessions (frequency: 2/week). Once the programme had finished, the sample was assessed again with the Yesavage scale and the SF-12. Results: The subjects in the intervention group improved in mood (Yesavage), perception of their own health, energy level, and feelings of sadness and discouragement. Their state of tranquillity and calm showed no great variation. Conclusions: Group physical activity improves mood, perception of one's own health and energy level, and reduces sadness and discouragement.
Abstract:
Background: Interventions designed to increase workplace physical activity may not automatically reduce high volumes of sitting, a behaviour independently linked to chronic diseases such as obesity and type II diabetes. This study compared the impact two different walking strategies had on step counts and reported sitting times. Methods: Participants were white-collar university employees (n = 179; age 41.3 ± 10.1 years; 141 women) who volunteered and undertook a standardised ten-week intervention at three sites. Pre-intervention step counts (Yamax SW-200) and self-reported sitting times were measured over five consecutive workdays. Using pre-intervention step counts, employees at each site were randomly allocated to a control group (n = 60; maintain normal behaviour), a route-based walking group (n = 60; at least 10 minutes of sustained walking each workday) or an incidental walking group (n = 59; walking in workday tasks). Workday step counts and reported sitting times were re-assessed at the beginning, mid- and endpoint of the intervention, and group mean ± SD steps/day and reported sitting times for pre-intervention and intervention measurement points were compared using a mixed factorial ANOVA; paired-sample t-tests were used for follow-up simple-effect analyses. Results: A significant interactive effect (F = 3.5; p < 0.003) was found between group and step counts. Daily steps for controls decreased over the intervention period (-391 steps/day) and increased for the route (968 steps/day; t = 3.9, p < 0.001) and incidental (699 steps/day; t = 2.5, p < 0.014) groups. There were no significant changes in reported sitting times, but average values did decrease relative to the control (route group = 7 minutes/day; incidental group = 15 minutes/day). Reductions were most evident for the incidental group in the first week of the intervention, where reported sitting decreased by an average of 21 minutes/day (t = 1.9; p < 0.057).
Conclusion: Compared to controls, both route and incidental walking increased physical activity in white-collar employees. Our data suggest that workplace walking, particularly through incidental movement, also has the potential to decrease employee sitting times, but there is a need for ongoing research using concurrent and objective measures of sitting, standing and walking.
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies will require memory proportional to the squared number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions with a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) Processor 2352 units at 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn’s disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn’s disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
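The memory argument can be illustrated with a generic single-step maxT sketch (not the MB-MDR implementation itself): per permutation, only the maximum statistic over all tests is retained, so memory grows with the number of tests and the number of permutations separately, never with their product. The per-column mean-difference statistic below is a stand-in assumption for the MB-MDR test:

```python
import numpy as np

def maxt_adjusted_pvalues(X, y, n_perm=999, seed=0):
    """Single-step maxT (Westfall-Young style) with streaming permutations.

    For every permutation only the maximum statistic over all tests is
    kept, so working memory is O(n_tests), not O(n_perm * n_tests).
    Statistic: absolute difference in group means per column; MB-MDR
    uses its own test, this is only an illustrative stand-in.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=bool)

    def stats(labels):
        return np.abs(X[labels].mean(axis=0) - X[~labels].mean(axis=0))

    observed = stats(y)
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        m = stats(rng.permutation(y)).max()  # only the max survives
        exceed += (m >= observed)
    return (1 + exceed) / (n_perm + 1)  # FWER-adjusted p-values

# One truly associated column among 20:
rng = np.random.default_rng(1)
y = np.repeat([True, False], 50)
X = rng.normal(size=(100, 20))
X[:, 0] += 2 * y  # strong effect in column 0 only
adj_p = maxt_adjusted_pvalues(X, y, n_perm=199)
```

Comparing each observed statistic against the distribution of permutation maxima is what gives family-wise error control; the streaming formulation is one way to obtain memory usage independent of the number of permutations times tests, in the spirit of the abstract's claim.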
Abstract:
We demonstrate how duality invariance of the low energy expansion of the four-supergraviton amplitude in type II string theory determines the precise coefficients of multiloop logarithmic ultraviolet divergences of maximal supergravity in various dimensions. This is illustrated by the explicit moduli-dependence of terms of the form ∂^{2k} R^4, with k ≤ 3, in the effective action. Furthermore, we show that in the supergravity limit the perturbative contributions are swamped by an accumulation of non-perturbative effects of zero-action instantons.