819 results for "Classification error rate"
Abstract:
Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely owing to its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) under TOST is given. This condition then leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply scaling the nominal Type I error rate down by a factor of (k − 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
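The proposed correction and the Bonferroni rule it is compared against can be sketched directly. A minimal sketch of the rule as stated in the abstract (function names are mine, not the paper's):

```python
def tost_adjusted_alpha(k, alpha=0.05):
    """Proposed correction: run each pairwise TOST at alpha / (k - 1)."""
    return alpha / (k - 1)

def bonferroni_alpha(k, alpha=0.05):
    """Bonferroni correction over all k*(k-1)/2 pairwise comparisons."""
    m = k * (k - 1) // 2  # number of pairwise comparisons among k groups
    return alpha / m

# With k = 5 groups there are 10 pairwise comparisons: the proposed rule
# tests each at 0.05 / 4 = 0.0125, Bonferroni at 0.05 / 10 = 0.005.
```

For k = 2 the two rules coincide; for every k ≥ 3 the proposed per-test level is strictly larger than the Bonferroni level, which is the sense in which it is less conservative.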
Abstract:
Free space optical (FSO) communication links can experience extreme signal degradation due to atmospheric turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Spreading of the laser beam and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how this affects crucial performance parameters like bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser after propagation through atmospheric turbulence in order to investigate the evolution of the distribution as the aperture diameter is increased. The simulated data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated data distribution for all aperture sizes studied, from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, and the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam that has been adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime is investigated.
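The gamma-gamma model tested above has a standard closed form for unit-mean irradiance. A minimal stdlib sketch (the Bessel function is computed by simple quadrature here, and the parameter values in the example are illustrative, not from the study):

```python
import math

def bessel_k(nu, x, n=2000, t_max=20.0):
    # Modified Bessel function of the second kind via its integral
    # representation: K_nu(x) = ∫_0^∞ exp(-x cosh t) cosh(nu t) dt
    # (trapezoid rule; adequate for moderate nu and x)
    h = t_max / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(t_max)) * math.cosh(nu * t_max))
    for i in range(1, n):
        t = i * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return s * h

def gamma_gamma_pdf(I, alpha, beta):
    # Unit-mean gamma-gamma irradiance PDF; alpha and beta are the
    # effective large- and small-scale scintillation parameters:
    # p(I) = 2(ab)^((a+b)/2) / (Γ(a)Γ(b)) · I^((a+b)/2 - 1) · K_{a-b}(2√(abI))
    c = 2.0 * (alpha * beta) ** ((alpha + beta) / 2) / (math.gamma(alpha) * math.gamma(beta))
    return c * I ** ((alpha + beta) / 2 - 1) * bessel_k(alpha - beta, 2.0 * math.sqrt(alpha * beta * I))
```

Fitting this PDF to a histogram of simulated integrated irradiance, versus fitting a lognormal, is the comparison the abstract describes across scintillation regimes.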
The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how this affects the BER. For the 10 km link, due to the non-reciprocal nature of the propagation path, the optimal beam to transmit is unknown. These results show that a low-order level of AO complexity provides a better estimate of the optimal beam to transmit than a higher order for non-reciprocal paths. For the 20 km link distance it was found that, although the improvement was minimal, all AO complexity levels provided an equivalent improvement in BER and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Results for both simulated and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures, the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture for an increasing link distance.
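The coherence-time estimate from the temporal correlation function can be sketched as the first 1/e crossing of the normalized autocovariance of the received-power series (a common convention; the AR(1) demo series below is my own illustration, not the study's data):

```python
import math
import random

def coherence_time(power, dt):
    # Coherence time = first lag at which the normalized autocovariance
    # of the received-power time series drops below 1/e.
    n = len(power)
    mean = sum(power) / n
    x = [p - mean for p in power]
    var = sum(v * v for v in x) / n
    for lag in range(1, n):
        c = sum(x[i] * x[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if c < 1.0 / math.e:
            return lag * dt
    return None

# Demo: an AR(1) series with correlation 0.9 per sample has a theoretical
# 1/e decay time of -1/ln(0.9) ≈ 9.5 samples.
random.seed(1)
p, series = 0.0, []
for _ in range(5000):
    p = 0.9 * p + random.gauss(0.0, 1.0)
    series.append(p)
tau = coherence_time(series, dt=1.0)
```

Applied to received-power traces from increasing aperture diameters or link distances, this estimator would exhibit the trends the abstract reports.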
Abstract:
Automated sorting systems (sorters) are of great importance in intralogistics. Sorters achieve a sustained high sorting capacity combined with a low missort rate and therefore often form the central building block of material flow systems with a high handling rate. Distribution centres with storage and order-picking functions are typical examples of such material flow systems. A sorter consists of the subsystems induction, distribution conveyor, and destinations. The following considerations focus on a sorter model with a ring-shaped distribution conveyor and single-position occupancy: exactly one item can be transported on each position, so the distribution conveyor has a fixed transport capacity. Conveyors of this kind are usually implemented as tilt-tray or cross-belt sorters. The theoretical sorting capacity of this sorter type can be determined from the travel speed and the carrier pitch. In practical operation this system capacity is rarely achieved; various factors in the induction and discharge areas reduce performance. This paper presents analyses for determining the mean queue length in the induction area and the share of recirculating items on the distribution conveyor. The paper is based on a research project funded by the German Federal Ministry of Economics and Technology (BMWi) through the Arbeitsgemeinschaft industrieller Forschungsvereinigungen "Otto von Guericke" (AiF) and carried out on behalf of the Bundesvereinigung Logistik e.V. (BVL).
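The relationship between travel speed, carrier pitch, and theoretical capacity mentioned above is a one-liner. A minimal sketch with illustrative numbers (not from the paper):

```python
def theoretical_sort_rate(speed_m_per_s, carrier_pitch_m):
    # One carrier passes a given discharge point every pitch/speed seconds,
    # and with single-position occupancy each carrier holds exactly one item.
    return 3600.0 * speed_m_per_s / carrier_pitch_m

# e.g. a cross-belt sorter running at 2.0 m/s with a 0.8 m carrier pitch:
# 3600 * 2.0 / 0.8 = 9000 items/h — an upper bound that, as the abstract
# notes, is rarely reached in practice due to induction and discharge losses.
```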
Abstract:
Within in-house logistics, the order-picking process is a central building block of a company's material and information flows, particularly with regard to just-in-time deliveries and questions of product liability. The choice of order-picking system is therefore decisive for optimizing the labour- and time-intensive picking operations, and thus serves to increase performance while simultaneously reducing the error rate.
Abstract:
The aim of this study was to assess the effect of bracket type on the labiopalatal moments generated by lingual and conventional brackets. Incognito™ lingual brackets (3M Unitek), STb™ lingual brackets (Light Lingual System; ORMCO), In-Ovation L lingual brackets (DENTSPLY GAC), and conventional 0.018 inch slot brackets (Gemini; 3M Unitek) were bonded on identical maxillary acrylic resin models with levelled and aligned teeth. Each model was mounted on the orthodontic measurement and simulation system, and ten 0.0175 × 0.0175 inch TMA wires were used for each bracket type. The wire was ligated with elastomerics into the Incognito, STb, and conventional brackets, and each measurement was repeated once after religation. A 15-degree buccal root torque (+15 degrees) and then a 15-degree palatal root torque (-15 degrees) were gradually applied to the right central incisor bracket. After each activation, the bracket returned to its initial position, and the moments in the sagittal plane were recorded during these rotations of the bracket. One-way analysis of variance with post hoc multiple comparisons (Tukey test at the 0.05 error rate) was conducted to assess the effect of bracket type on the generated moments. The maximum moment at +15 degrees was 8.8, 8.2, 7.1, and 5.8 Nmm for the Incognito, STb, conventional Gemini, and In-Ovation L brackets, respectively; similar values were recorded at -15 degrees: 8.6, 8.1, 7.0, and 5.7 Nmm, respectively. The recorded differences in maximum moments were statistically significant, except between the Incognito and STb brackets. Additionally, the torque angles at which the crown torque fell below the minimum level of 5.0 Nmm were evaluated, as well as the moment/torque ratio in the last part of the activation/deactivation curve, between 10 and 15 degrees. The lowest torque expression was observed with the self-ligating lingual brackets, followed by the conventional brackets.
The Incognito and STb lingual brackets generated the highest moments.
Abstract:
Background: Emotional processing in essential hypertension has hardly been investigated beyond self-report questionnaires. The aim of this study is to examine associations between hypertension status and recognition of facial affect. Methods: 25 healthy, non-smoking, medication-free men, including 13 hypertensive subjects aged between 20 and 65 years, completed a computer-based task examining sensitivity of recognition of facial affect. Neutral faces gradually changed to a specific emotion in a pseudo-continuous manner. Slides of the six basic emotions (fear, sadness, disgust, happiness, anger, surprise) were chosen from the "NimStim Set". Pictures of three female and three male faces were electronically morphed in 1% steps of intensity from 0% to 100% (36 sets of faces with 100 pictures each). Each picture of a set was presented for one second, ranging from 0% to 100% intensity. Participants were instructed to press a stop button as soon as they recognized the expression of the face. After stopping, a forced choice between the six basic emotions was required. As dependent variables, we recorded the emotion intensity at which the presentation was stopped and the number of errors (error rate). Recognition sensitivity was calculated as the emotion intensity of correctly identified emotions. Results: Mean arterial pressure was associated with significantly increased recognition sensitivity of facial affect for the emotion anger (β = −.43, p = .03, ΔR² = .110). There was no association with the emotions fear, sadness, disgust, happiness, and surprise (ps > .41). Mean arterial pressure did not relate to the mean number of errors for any of the facial emotions. Conclusions: Our findings suggest that increased blood pressure is associated with increased recognition sensitivity for facial expressions of anger: hypertensives perceive facial anger expressions faster than normotensives.
Abstract:
Several studies have shown that children with spina bifida meningomyelocele (SBM) and hydrocephalus have attention problems on parent ratings and difficulties in stimulus orienting associated with a posterior brain attention system. Less is known about response control and inhibition associated with an anterior brain attention system. Using the Gordon Vigilance Task (Gordon, 1983), we studied error rate, reaction time, and performance over time for sustained attention, a key anterior attention function, in 101 children with SBM, 17 with aqueductal stenosis (AS; another condition involving congenital hydrocephalus), and 40 typically developing controls (NC). In SBM, we investigated the relation between cognitive attention and parent ratings of inattention and hyperactivity and explored the impact of medical variables. Children with SBM did not differ from AS or NC groups on measures of sustained attention, but they committed more errors and responded more slowly. Approximately one-third of the SBM group had attention symptoms, although parent attention ratings were not associated with task performance. Hydrocephalus does not account for the attention profile of children with SBM, which also reflects the distinctive brain dysmorphologies associated with this condition.
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine if the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses a distribution with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a random statement and restricted maximum likelihood (REML) estimation specified. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that the violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, Type I error, or power.
Negative biases were detected when estimating the sample ICC, and these increased dramatically in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
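The simulation design described above can be illustrated with a stripped-down stdlib sketch. This is my own simplification, not the dissertation's code: a cluster-level t-test on group means stands in for the SAS PROC MIXED/REML analysis, and all parameter values are illustrative. Under the null, the empirical rejection rate should sit near the nominal 0.05 even with a skewed group effect:

```python
import random
import statistics

T_CRIT_DF8 = 2.306  # two-sided 5% critical value for Student's t, df = 8

def one_trial(groups=5, students=20, scale=0.5):
    # Two-arm group-randomized trial simulated under the null hypothesis:
    # a right-skewed (gamma) random group effect plus normal student-level
    # error, analyzed by a t-test on the group means (df = 2*groups - 2).
    def arm_means():
        means = []
        for _ in range(groups):
            b = random.gammavariate(1.0, scale) - scale  # centered, skewed
            err = sum(random.gauss(0.0, 1.0) for _ in range(students)) / students
            means.append(b + err)
        return means
    a, c = arm_means(), arm_means()
    sp = ((statistics.variance(a) + statistics.variance(c)) / 2.0) ** 0.5
    t = (statistics.mean(a) - statistics.mean(c)) / (sp * (2.0 / groups) ** 0.5)
    return abs(t) > T_CRIT_DF8

random.seed(42)
reps = 2000
type1 = sum(one_trial() for _ in range(reps)) / reps  # empirical Type I error
```

Because both arms draw the group effect from the same skewed distribution, the test statistic stays approximately symmetric, which is consistent with the finding that skewness leaves the Type I error rate largely intact.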
Abstract:
Impairment of cognitive performance during and after high-altitude climbing has been described in numerous studies and has mostly been attributed to cerebral hypoxia and resulting functional and structural cerebral alterations. To investigate the hypothesis that high-altitude climbing leads to cognitive impairment, we used neuropsychological tests and measurements of eye movement (EM) performance during different stimulus conditions. The study was conducted in 32 mountaineers participating in an expedition to Muztagh Ata (7,546 m). Neuropsychological tests comprised figural fluency, line bisection, letter and number cancellation, and a modified pegboard task. Saccadic performance was evaluated under three stimulus conditions with varying degrees of cortical involvement: visually guided pro- and anti-saccades, and visuo-visual interaction. Typical saccade parameters (latency, mean sequence, post-saccadic stability, and error rate) were computed off-line. Measurements were taken at a baseline level of 440 m and at altitudes of 4,497, 5,533, and 6,265 m, and again at 440 m. All subjects reached 5,533 m, and 28 reached 6,265 m. The neuropsychological test results did not reveal any cognitive impairment. Complete eye movement recordings for all stimulus conditions were obtained in 24 subjects at baseline and at least two altitudes, and in 10 subjects at baseline and all altitudes. Measurements of saccade performance showed no dependence on any altitude-related parameter and were well within normal limits. Our data indicate that acclimatized climbers do not seem to suffer from significant cognitive deficits during or after climbs to altitudes above 7,500 m. We also demonstrated that investigation of EMs is feasible during high-altitude expeditions.
Abstract:
BACKGROUND/AIMS Clinical differentiation between organic hypersomnia and non-organic hypersomnia (NOH) is challenging. We aimed to determine the diagnostic value of sleepiness and performance tests in patients with excessive daytime sleepiness (EDS) of organic and non-organic origin. METHODS We conducted a retrospective comparison of the multiple sleep latency test (MSLT), pupillography, and the Steer Clear performance test in three patient groups complaining of EDS: 19 patients with NOH, 23 patients with narcolepsy (NAR), and 46 patients with mild to moderate obstructive sleep apnoea syndrome (OSAS). RESULTS As required by the inclusion criteria, all patients had Epworth Sleepiness Scale (ESS) scores >10. The mean sleep latency in the MSLT indicated mild objective sleepiness in NOH (8.1 ± 4.0 min) and OSAS (7.2 ± 4.1 min), but more severe sleepiness in NAR (2.5 ± 2.0 min). The difference between NAR and the other two groups was significant; the difference between NOH and OSAS was not. In the Steer Clear performance test, NOH patients performed worst (error rate = 10.4%) followed by NAR (8.0%) and OSAS patients (5.9%; p = 0.008). The difference between OSAS and the other two groups was significant, but not between NOH and NAR. The pupillary unrest index was found to be highest in NAR (11.5) followed by NOH (9.2) and OSAS (7.4; n.s.). CONCLUSION A high error rate in the Steer Clear performance test along with mild sleepiness in an objective sleepiness test (MSLT) in a patient with subjective sleepiness (ESS) is suggestive of NOH. This disproportionately high error rate in NOH may be caused by factors unrelated to sleep pressure, such as anergia, reduced attention and motivation affecting performance, but not conventional sleepiness measurements.
Abstract:
Introduction. Erroneous answers in studies on the misinformation effect (ME) can be reduced in different ways. In some studies, the ME was reduced by source-monitoring (SM) questions, warnings, or low credibility of the source of post-event information (PEI). Results are inconsistent, however. Of course, a participant can deliberately decide to refrain from reporting a critical item only when the difference between the original event and the PEI is distinguishable in principle. We were interested in the question of to what extent the influence of erroneous information about a central aspect of the original event can be reduced by different means applied singly or in combination. Method. With a 2 (credibility: high vs. low) × 2 (warning: present vs. absent) between-subjects design and an additional control group that received neither misinformation nor a warning (N = 116), we examined the above-mentioned factors' influence on the ME. Participants viewed a short video of a robbery. The critical item suggested in the PEI was that the victim was kicked by the perpetrator (which he actually was not). The memory test consisted of a two-alternative forced-choice recognition test followed by an SM test. Results. To our surprise, neither a main effect of erroneous PEI nor a main effect of credibility was found. The error rates for the critical item in the control group (50%) as well as in the high (65%) and low (52%) credibility conditions without warning did not differ significantly. A warning about possible misleading information in the PEI significantly reduced the influence of misinformation in both credibility conditions, by 32-37%. Using an SM question also significantly reduced the error rate, but only in the high-credibility, no-warning condition. Conclusion and Future Research. Our results show that, in contrast to a warning or the use of an SM question, low source credibility did not reduce the ME. The most striking finding, however, was the absence of a main effect of erroneous PEI.
Due to the high error rate in the control group, we suspect that the wrong answers might have been caused either by the response format (recognition test) or by autosuggestion possibly promoted by the high schema-consistency of the critical item. First results of a post-study in which we used open-ended questions before the recognition test support the former assumption. Results of a replication of this study using open-ended questions prior to the recognition test will be available by June.
Abstract:
People report suggested misinformation about a previously witnessed event for manifold reasons, such as social pressure, lack of memory of the original aspect, or a firm belief to remember the misinformation from the witnessed event. In our experiments (N = 429), which follow Loftus's paradigm, we tried to disentangle the reasons for reporting a central and a peripheral piece of misinformation in a recognition task by examining (a) the impact a warning about possible misinformation has on the error rate, and (b) whether once reported misinformation was actually attributed to the witnessed event in a later source-monitoring (SM) task. Overall, a misinformation effect was found for both items. The warning strongly reduced the misinformation effect, but only for the central item. In contrast, reports of the peripheral misinformation were correctly attributed to the misinformation source or, at least, ascribed to guesswork much more often than the central ones. As a consequence, after the SM task, the initially higher error rate for the peripheral item was even lower than that of the central item. Results convincingly show that the reasons for reporting misinformation, and correspondingly also the potential to avoid them in legal settings, depend on the centrality of the misinformation.
Abstract:
With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using stepwise linear regression, the best predictor of the quantitative trait was not usually the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected for low pairwise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continuously increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers are faced with the challenge of utilizing an appropriate statistical method that controls the type I error rate while maintaining adequate power.
We show that an empirical genotype-based multi-locus global test that uses permutation testing to investigate the null distribution of the maximum test statistic maintains a desired overall type I error rate while not overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis. However, for more complex models, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
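The contrast between D′ and r² that drives the first result can be sketched directly from two-locus frequencies. A minimal sketch (the function and the example frequencies are my own illustration, not the dissertation's code):

```python
def ld_measures(p_ab, p_a, p_b):
    # p_ab: frequency of the A-B haplotype; p_a, p_b: allele frequencies
    # at the two loci. Returns (D', r^2).
    D = p_ab - p_a * p_b
    if D >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(D) / d_max if d_max > 0 else 0.0
    r2 = D * D / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# When a rare allele B always rides on the common A haplotype
# (p_ab = 0.1, p_a = 0.9, p_b = 0.1), D' saturates at 1 even though
# r^2 is small — one reason r^2 is the more useful measure for
# predicting association-study power.
```

D′ reaches 1 whenever one of the four haplotypes is absent, regardless of allele frequencies, whereas r² reflects how well one locus predicts the other, which is what matters for indirect association mapping.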
Abstract:
Linkage and association studies are major analytical tools in the search for susceptibility genes for complex diseases. With the availability of large collections of single nucleotide polymorphisms (SNPs) and rapid progress in high-throughput genotyping technologies, together with the ambitious goals of the International HapMap Project, genetic markers covering the whole genome will be available for genome-wide linkage and association studies. In order not to inflate the type I error rate in genome-wide linkage and association studies, adjustment of the significance level for each independent linkage and/or association test is required, and this has led to the suggestion of a genome-wide significance cut-off as low as 5 × 10⁻⁷. Almost no linkage and/or association study can meet such a stringent threshold with standard statistical methods, so developing new statistics with high power is urgently needed. This dissertation proposes and explores a class of novel test statistics that can be used with both population-based and family-based genetic data by employing a completely new strategy: nonlinear transformations of the sample means are used to construct test statistics for linkage and association studies. Extensive simulation studies illustrate the properties of the nonlinear test statistics. Power calculations are performed using both analytical and empirical methods. Finally, real data sets are analyzed with the nonlinear test statistics. Results show that the nonlinear test statistics have correct type I error rates, and most of the studied nonlinear test statistics have higher power than the standard chi-square test. This dissertation introduces a new idea for designing novel test statistics with high power and might open new ways to mapping susceptibility genes for complex diseases.