846 results for Bit error rate


Relevance: 80.00%

Abstract:

Within intralogistics, the order-picking process represents a central building block of the material and information flow in companies, particularly with regard to just-in-time deliveries and questions of product liability. The choice of picking system is decisive for optimizing the labour- and time-intensive picking operations and thus serves to increase performance while simultaneously reducing the error rate.

Relevance: 80.00%

Abstract:

The aim of this study was to assess the effect of bracket type on the labiopalatal moments generated by lingual and conventional brackets. Incognito™ lingual brackets (3M Unitek), STb™ lingual brackets (Light Lingual System; ORMCO), In-Ovation L lingual brackets (DENTSPLY GAC), and conventional 0.018 inch slot brackets (Gemini; 3M Unitek) were bonded on identical maxillary acrylic resin models with levelled and aligned teeth. Each model was mounted on the orthodontic measurement and simulation system, and ten 0.0175 × 0.0175 inch TMA wires were used for each bracket type. The wire was ligated with elastomerics into the Incognito, STb, and conventional brackets, and each measurement was repeated once after religation. A 15 degree buccal root torque (+15 degrees) and then a 15 degree palatal root torque (-15 degrees) were gradually applied to the right central incisor bracket. After each activation, the bracket returned to its initial position, and the moments in the sagittal plane were recorded during these rotations of the bracket. One-way analysis of variance with post hoc multiple comparisons (Tukey test at a 0.05 error rate) was conducted to assess the effect of bracket type on the generated moments. The maximum moment at +15 degrees was 8.8, 8.2, 7.1, and 5.8 Nmm for the Incognito, STb, conventional Gemini, and In-Ovation L brackets, respectively; similar values were recorded at -15 degrees: 8.6, 8.1, 7.0, and 5.7 Nmm, respectively. The recorded differences in maximum moments were statistically significant, except between the Incognito and STb brackets. Additionally, we evaluated the torque angles at which the crown torque fell below the minimum level of 5.0 Nmm, as well as the moment/torque ratio over the last part of the activation/deactivation curve, between 10 and 15 degrees. The lowest torque expression was observed for the self-ligating lingual brackets, followed by the conventional brackets. The Incognito and STb lingual brackets generated the highest moments.
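
A minimal sketch of the analysis reported above (one-way ANOVA followed by Tukey HSD pairwise comparisons at a 0.05 error rate), assuming moment data simulated around the quoted group means rather than the study's raw measurements; scipy and statsmodels stand in for whatever software the authors used.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Ten simulated maximum moments (Nmm) per bracket type at +15 degrees,
# centred on the means reported in the abstract (spread is assumed).
groups = {
    "Incognito": rng.normal(8.8, 0.5, 10),
    "STb": rng.normal(8.2, 0.5, 10),
    "Gemini": rng.normal(7.1, 0.5, 10),
    "In-Ovation L": rng.normal(5.8, 0.5, 10),
}

f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

moments = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(moments, labels, alpha=0.05))
```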

Relevance: 80.00%

Abstract:

This paper is a summary of the main contributions of the PhD thesis published in [1]. The research contributions of the thesis are driven by the question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains, with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows. First, a novel run-time adaptive MAC protocol is introduced, which stepwise allocates the power-hungry radio interface in an on-demand manner when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable, and accurate software-based energy estimation, calculated at network run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed; they achieve superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
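
As a worked illustration of why an adaptive FEC scheme must track the bit error rate, the sketch below computes the residual probability that a block is uncorrectable for a t-error-correcting code over an n-bit block, assuming independent bit errors (a binary symmetric channel). This is a textbook binomial model, not the thesis's scheme; n and t are illustrative.

```python
from math import comb

def block_failure_prob(n: int, t: int, ber: float) -> float:
    """P(more than t bit errors in an n-bit block) at the given BER."""
    p_ok = sum(comb(n, k) * ber**k * (1 - ber)**(n - k) for k in range(t + 1))
    return 1.0 - p_ok

# A weak code suffices at low BER; a stronger (more costly) code is
# needed once channel conditions degrade.
for ber in (1e-4, 1e-3, 1e-2, 5e-2):
    weak = block_failure_prob(127, 1, ber)    # corrects 1 error per block
    strong = block_failure_prob(127, 5, ber)  # corrects 5 errors per block
    print(f"BER={ber:.0e}: t=1 -> {weak:.2e}, t=5 -> {strong:.2e}")
```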

Relevance: 80.00%

Abstract:

Background: Emotional processing in essential hypertension has hardly been investigated beyond self-report questionnaires. The aim of this study was to examine associations between hypertension status and recognition of facial affect. Methods: 25 healthy, non-smoking, medication-free men, including 13 hypertensive subjects, aged between 20 and 65 years, completed a computer-based task examining sensitivity of facial affect recognition. Neutral faces gradually changed to a specific emotion in a pseudo-continuous manner. Slides of the six basic emotions (fear, sadness, disgust, happiness, anger, surprise) were chosen from the NimStim set. Pictures of three female and three male faces were electronically morphed in 1% steps of intensity from 0% to 100% (36 sets of faces with 100 pictures each). Each picture of a set was presented for one second, ranging from 0% to 100% intensity. Participants were instructed to press a stop button as soon as they recognized the expression of the face. After stopping, a forced choice between the six basic emotions was required. As dependent variables, we recorded the emotion intensity at which the presentation was stopped and the number of errors (error rate). Recognition sensitivity was calculated as the emotion intensity of correctly identified emotions. Results: Mean arterial pressure was associated with significantly increased recognition sensitivity of facial affect for the emotion anger (β = -.43, p = .03, ΔR² = .110). There was no association with the emotions fear, sadness, disgust, happiness, and surprise (ps > .41). Mean arterial pressure was not related to the mean number of errors for any of the facial emotions. Conclusions: Our findings suggest that increased blood pressure is associated with increased recognition sensitivity for facial anger: hypertensives perceive angry facial expressions faster than normotensives.
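
A sketch of the ΔR² logic behind the reported association: a baseline OLS model is compared with one that adds mean arterial pressure. All data and variable names below are hypothetical placeholders, not the study's data; the negative slope mimics the reported negative β (higher pressure, lower intensity needed, hence higher sensitivity).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 25
age = rng.uniform(20, 65, n)
map_bp = rng.normal(95, 12, n)  # mean arterial pressure (mmHg)
# Intensity (%) at which anger is recognized; lower = more sensitive.
anger_threshold = 60 - 0.2 * map_bp + rng.normal(0, 5, n)

base = sm.OLS(anger_threshold, sm.add_constant(age)).fit()
full = sm.OLS(anger_threshold,
              sm.add_constant(np.column_stack([age, map_bp]))).fit()
print(f"delta R^2 = {full.rsquared - base.rsquared:.3f}")
```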

Relevance: 80.00%

Abstract:

Several studies have shown that children with spina bifida meningomyelocele (SBM) and hydrocephalus have attention problems on parent ratings and difficulties in stimulus orienting associated with a posterior brain attention system. Less is known about response control and inhibition associated with an anterior brain attention system. Using the Gordon Vigilance Task (Gordon, 1983), we studied error rate, reaction time, and performance over time for sustained attention, a key anterior attention function, in 101 children with SBM, 17 with aqueductal stenosis (AS; another condition involving congenital hydrocephalus), and 40 typically developing controls (NC). In SBM, we investigated the relation between cognitive attention and parent ratings of inattention and hyperactivity and explored the impact of medical variables. Children with SBM did not differ from AS or NC groups on measures of sustained attention, but they committed more errors and responded more slowly. Approximately one-third of the SBM group had attention symptoms, although parent attention ratings were not associated with task performance. Hydrocephalus does not account for the attention profile of children with SBM, which also reflects the distinctive brain dysmorphologies associated with this condition.

Relevance: 80.00%

Abstract:

The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs, often labelled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a random statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power. Negative biases were detected when estimating the sample ICC and increased dramatically in magnitude as the true ICC increased. These biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
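
A condensed sketch of the simulation logic described above, under stated assumptions: a skewed (centred exponential) random group effect, a true null treatment effect, and a random-intercept model fit by REML. The study used SAS PROC MIXED; statsmodels MixedLM stands in here, and all design values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, n_students, icc = 10, 20, 0.05
var_g = icc / (1 - icc)  # group variance giving this ICC with unit residual

rejections, n_sims = 0, 200
for _ in range(n_sims):
    rows = []
    for school in range(n_schools):
        cond = school % 2  # half the schools per condition
        # Skewed group effect: centred exponential scaled to variance var_g.
        g = (rng.exponential(1.0) - 1.0) * np.sqrt(var_g)
        for _ in range(n_students):
            rows.append((school, cond, g + rng.normal(0, 1)))
    df = pd.DataFrame(rows, columns=["school", "cond", "y"])
    fit = smf.mixedlm("y ~ cond", df, groups=df["school"]).fit(reml=True)
    if fit.pvalues["cond"] < 0.05:  # false rejection under the true null
        rejections += 1

print(f"empirical Type I error: {rejections / n_sims:.3f}")
```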

Relevance: 80.00%

Abstract:

Impairment of cognitive performance during and after high-altitude climbing has been described in numerous studies and has mostly been attributed to cerebral hypoxia and the resulting functional and structural cerebral alterations. To investigate the hypothesis that high-altitude climbing leads to cognitive impairment, we used neuropsychological tests and measurements of eye movement (EM) performance under different stimulus conditions. The study was conducted in 32 mountaineers participating in an expedition to Muztagh Ata (7,546 m). Neuropsychological tests comprised figural fluency, line bisection, letter and number cancellation, and a modified pegboard task. Saccadic performance was evaluated under three stimulus conditions with varying degrees of cortical involvement: visually guided pro- and anti-saccades, and visuo-visual interaction. Typical saccade parameters (latency, mean sequence, post-saccadic stability, and error rate) were computed off-line. Measurements were taken at a baseline level of 440 m, at altitudes of 4,497, 5,533, and 6,265 m, and again at 440 m. All subjects reached 5,533 m, and 28 reached 6,265 m. The neuropsychological test results did not reveal any cognitive impairment. Complete eye movement recordings for all stimulus conditions were obtained in 24 subjects at baseline and at least two altitudes, and in 10 subjects at baseline and all altitudes. Measurements of saccade performance showed no dependence on any altitude-related parameter and were well within normal limits. Our data indicate that acclimatized climbers do not seem to suffer significant cognitive deficits during or after climbs to altitudes above 7,500 m. We also demonstrated that investigation of EMs is feasible during high-altitude expeditions.

Relevance: 80.00%

Abstract:

BACKGROUND/AIMS Clinical differentiation between organic hypersomnia and non-organic hypersomnia (NOH) is challenging. We aimed to determine the diagnostic value of sleepiness and performance tests in patients with excessive daytime sleepiness (EDS) of organic and non-organic origin. METHODS We conducted a retrospective comparison of the multiple sleep latency test (MSLT), pupillography, and the Steer Clear performance test in three patient groups complaining of EDS: 19 patients with NOH, 23 patients with narcolepsy (NAR), and 46 patients with mild to moderate obstructive sleep apnoea syndrome (OSAS). RESULTS As required by the inclusion criteria, all patients had Epworth Sleepiness Scale (ESS) scores >10. The mean sleep latency in the MSLT indicated mild objective sleepiness in NOH (8.1 ± 4.0 min) and OSAS (7.2 ± 4.1 min), but more severe sleepiness in NAR (2.5 ± 2.0 min). The difference between NAR and the other two groups was significant; the difference between NOH and OSAS was not. In the Steer Clear performance test, NOH patients performed worst (error rate = 10.4%) followed by NAR (8.0%) and OSAS patients (5.9%; p = 0.008). The difference between OSAS and the other two groups was significant, but not between NOH and NAR. The pupillary unrest index was found to be highest in NAR (11.5) followed by NOH (9.2) and OSAS (7.4; n.s.). CONCLUSION A high error rate in the Steer Clear performance test along with mild sleepiness in an objective sleepiness test (MSLT) in a patient with subjective sleepiness (ESS) is suggestive of NOH. This disproportionately high error rate in NOH may be caused by factors unrelated to sleep pressure, such as anergia, reduced attention and motivation affecting performance, but not conventional sleepiness measurements.

Relevance: 80.00%

Abstract:

Introduction. Erroneous answers in studies on the misinformation effect (ME) can be reduced in different ways. In some studies, the ME was reduced by source-monitoring (SM) questions, warnings, or a low credibility of the source of post-event information (PEI). Results are inconsistent, however. Of course, a participant can deliberately decide to refrain from reporting a critical item only when the difference between the original event and the PEI is distinguishable in principle. We were interested in the question of to what extent the influence of erroneous information on a central aspect of the original event can be reduced by different means, applied singly or in combination. Method. With a 2 (credibility: high vs. low) × 2 (warning: present vs. absent) between-subjects design and an additional control group that received neither misinformation nor a warning (N = 116), we examined the above-mentioned factors' influence on the ME. Participants viewed a short video of a robbery. The critical item suggested in the PEI was that the victim was given a kick by the perpetrator (which he actually was not). The memory test consisted of a two-alternative forced-choice recognition test followed by an SM test. Results. To our surprise, neither a main effect of erroneous PEI nor a main effect of credibility was found. The error rates for the critical item in the control group (50%) as well as in the high (65%) and low (52%) credibility conditions without warning did not differ significantly. A warning about possible misleading information in the PEI significantly reduced the influence of misinformation in both credibility conditions, by 32-37%. Using an SM question also significantly reduced the error rate, but only in the high-credibility, no-warning condition. Conclusion and Future Research. Our results show that, contrary to a warning or the use of an SM question, low source credibility did not reduce the ME. The most striking finding, however, was the absence of a main effect of erroneous PEI. Given the high error rate in the control group, we suspect that the wrong answers might have been caused either by the response format (recognition test) or by autosuggestion, possibly promoted by the high schema-consistency of the critical item. First results of a post-study in which we used open-ended questions before the recognition test support the former assumption. Results of a replication of this study using open-ended questions prior to the recognition test will be available by June.

Relevance: 80.00%

Abstract:

People report suggested misinformation about a previously witnessed event for manifold reasons, such as social pressure, lack of memory of the original aspect, or a firm belief that they remember the misinformation from the witnessed event. In our experiments (N = 429), which follow Loftus's paradigm, we tried to disentangle the reasons for reporting a central and a peripheral piece of misinformation in a recognition task by examining (a) the impact a warning about possible misinformation has on the error rate, and (b) whether once-reported misinformation was actually attributed to the witnessed event in a later source-monitoring (SM) task. Overall, a misinformation effect was found for both items. The warning strongly reduced the misinformation effect, but only for the central item. In contrast, reports of the peripheral misinformation were correctly attributed to the misinformation source or, at least, ascribed to guesswork much more often than the central ones. As a consequence, after the SM task, the initially higher error rate for the peripheral item was even lower than that for the central item. The results convincingly show that the reasons for reporting misinformation, and correspondingly the potential to avoid such reports in legal settings, depend on the centrality of the misinformation.

Relevance: 80.00%

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation, but rather a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected on the basis of low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to keep increasing the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers face the challenge of using a statistical method that controls the Type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test that uses permutation testing to investigate the null distribution of the maximum test statistic maintains the desired overall Type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis; for more complex models, however, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
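
A worked sketch of the two LD measures compared above, computed from two-locus haplotype and allele frequencies using the standard definitions; the example frequencies are hypothetical.

```python
def ld_measures(p_ab: float, p_a: float, p_b: float):
    """Return (D, D', r^2) for alleles A and B with haplotype freq p_ab."""
    d = p_ab - p_a * p_b  # raw disequilibrium coefficient
    # Normalize D by its maximum attainable magnitude given allele freqs.
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max > 0 else 0.0
    r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

# Example: two common alleles in strong but incomplete LD.
d, d_prime, r2 = ld_measures(p_ab=0.35, p_a=0.40, p_b=0.50)
print(f"D = {d:.3f}, D' = {d_prime:.3f}, r^2 = {r2:.3f}")
```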

Relevance: 80.00%

Abstract:

Linkage and association studies are major analytical tools in the search for susceptibility genes for complex diseases. With the availability of large collections of single nucleotide polymorphisms (SNPs) and rapid progress in high-throughput genotyping technologies, together with the ambitious goals of the International HapMap Project, genetic markers covering the whole genome will be available for genome-wide linkage and association studies. In order not to inflate the Type I error rate in genome-wide linkage and association studies, adjustment of the significance level for each independent linkage and/or association test is required, which has led to suggested genome-wide significance cut-offs as low as 5 × 10^-7. Almost no linkage and/or association study can meet such a stringent threshold with standard statistical methods, so new statistics with high power are urgently needed. This dissertation proposes and explores a class of novel test statistics that can be used with both population-based and family-based genetic data by employing a completely new strategy: nonlinear transformations of the sample means are used to construct test statistics for linkage and association studies. Extensive simulation studies illustrate the properties of the nonlinear test statistics. Power calculations are performed using both analytical and empirical methods. Finally, real data sets are analyzed with the nonlinear test statistics. Results show that the nonlinear test statistics have correct Type I error rates, and most of the studied nonlinear test statistics have higher power than the standard chi-square test. This dissertation introduces a new idea for designing novel test statistics with high power and might open new ways of mapping susceptibility genes for complex diseases.
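
A quick sketch of where a genome-wide cut-off of this order comes from, assuming a Bonferroni (or the nearly identical Šidák) adjustment of a 0.05 family-wise error rate over on the order of 10^5 independent tests.

```python
alpha, m = 0.05, 100_000
bonferroni = alpha / m                # per-test level: 5.0e-07
sidak = 1 - (1 - alpha) ** (1 / m)    # ~5.1e-07, nearly identical
print(f"Bonferroni: {bonferroni:.1e}, Sidak: {sidak:.1e}")
```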

Relevance: 80.00%

Abstract:

A Monte Carlo simulation was conducted to investigate parameter estimation and hypothesis testing in some well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser Urn (DL). Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log of relative risk, log of odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics under SMLE have better normality and a lower Type I error rate, and the power of hypothesis testing is more comparable with equal randomization. RSIHR usually has the highest power among the three optimal allocation ratios; however, the ORR allocation has better power and a lower Type I error rate when the log of the relative risk is the test statistic, and the expected number of failures under ORR is smaller than under RSIHR. It is also shown that the simple difference of response rates has the worst normality among all four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. The normality of the log likelihood ratio test statistic, on the other hand, is robust against changes in the adaptive randomization procedure.
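
A minimal sketch of the Randomized Play-the-Winner rule in its simplest RPW(1,1) form: draw a ball to assign each patient, then add a ball of the same colour after a success and of the opposite colour after a failure. The success probabilities below are hypothetical.

```python
import random

def rpw_trial(p_success=(0.7, 0.4), n_patients=100, seed=3):
    rng = random.Random(seed)
    urn = [0, 1]  # start with one ball per treatment arm
    assigned, successes = [0, 0], [0, 0]
    for _ in range(n_patients):
        arm = rng.choice(urn)          # draw (with replacement) to assign
        assigned[arm] += 1
        if rng.random() < p_success[arm]:
            successes[arm] += 1
            urn.append(arm)            # success: reward the winning arm
        else:
            urn.append(1 - arm)        # failure: add a ball for the other arm
    return assigned, successes

assigned, successes = rpw_trial()
print(f"allocation: {assigned}, successes: {successes}")
```

Over repeated runs the urn drifts toward the better arm, which is the ethical appeal of the design; its cost, as the abstract notes, is extra variability in the allocation proportion.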

Relevance: 80.00%

Abstract:

Group sequential methods and response adaptive randomization (RAR) procedures have been applied in clinical trials for economic and ethical reasons. Group sequential methods can reduce the average sample size by inducing early stopping, but patients are allocated with equal probability to the inferior arm. RAR procedures tend to allocate more patients to the better arm, but require a larger sample size to obtain a given power. This study intended to combine the two procedures. We applied a Bayesian decision theory approach to define group sequential stopping rules and evaluated their operating characteristics in an RAR setting. The results showed that the Bayesian decision theory method was able to preserve the Type I error rate while achieving favorable power; further, by comparison with the error spending function method, we concluded that the Bayesian decision theory approach was more effective at reducing the average sample size.
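
A sketch of a Bayesian interim stopping check in the spirit of the approach above: with Beta(1, 1) priors on each arm's response rate, stop early once the posterior probability that one arm is better crosses a threshold. The study's actual rule is decision-theoretic (loss-based); this simpler posterior-probability check, its threshold, and the interim counts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

def prob_a_better(succ_a, n_a, succ_b, n_b, draws=100_000, seed=4):
    """Monte Carlo estimate of P(p_A > p_B | data) under Beta(1,1) priors."""
    rng = np.random.default_rng(seed)
    pa = beta.rvs(1 + succ_a, 1 + n_a - succ_a, size=draws, random_state=rng)
    pb = beta.rvs(1 + succ_b, 1 + n_b - succ_b, size=draws, random_state=rng)
    return float(np.mean(pa > pb))

# Interim look: 14/20 responses on arm A vs 8/20 on arm B.
p = prob_a_better(14, 20, 8, 20)
print(f"P(arm A better) = {p:.3f}; stop early if above, say, 0.99")
```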

Relevance: 80.00%

Abstract:

The difficulty of detecting differential gene expression in microarray data has existed for many years. Several correction procedures aim to control the family-wise error rate in the multiple comparison process, including the Bonferroni and Sidak single-step p-value adjustments, Holm's step-down correction, and Benjamini and Hochberg's false discovery rate (FDR) controlling procedure. Each multiple comparison technique has its advantages and weaknesses. We studied each method through numerical (simulation) studies and applied the methods to real exploratory DNA microarray data aimed at detecting molecular signatures in papillary thyroid cancer (PTC) patients. According to the results of our simulation studies, the Benjamini and Hochberg step-up FDR controlling procedure performed best among these multiple comparison methods, and applying it to the PTC microarray data we identified 1,277 potential biomarkers among 54,675 probe sets.
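
A sketch applying the four corrections discussed above to a toy vector of p-values; statsmodels implements each method, so the comparison reduces to a method switch. The p-values are made up for illustration.

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.0001, 0.0005, 0.001, 0.008, 0.02, 0.04, 0.20, 0.60]
for method in ("bonferroni", "sidak", "holm", "fdr_bh"):
    reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method=method)
    # FWER methods are typically more conservative than BH's FDR control.
    print(f"{method:>10}: {reject.sum()} of {len(pvals)} rejected")
```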