985 results for error rates


Relevance: 60.00%

Abstract:

Experimental two-phase frictional pressure drop and flow boiling heat transfer results are presented for a horizontal 2.32-mm ID stainless-steel tube using R245fa as the working fluid. The frictional pressure drop data were obtained under adiabatic and diabatic conditions. Experiments were performed for mass velocities from 100 to 700 kg m⁻² s⁻¹, heat fluxes from 0 to 55 kW m⁻², exit saturation temperatures of 31 and 41 °C, and vapor qualities from 0.10 to 0.99. Pressure drop gradients from 1 to 70 kPa m⁻¹ and heat transfer coefficients from 1 to 7 kW m⁻² K⁻¹ were measured. The heat transfer coefficient was found to be a strong function of heat flux, mass velocity, and vapor quality. Five frictional pressure drop predictive methods were compared against the experimental database; the method of Cioncolini et al. (2009) performed best. Six flow boiling heat transfer predictive methods were also compared against the present database. Liu and Winterton (1991), Zhang et al. (2004), and Saitoh et al. (2007) were ranked as the best methods, predicting the experimental flow boiling heat transfer data with an average error of around 19%.
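
To make the reported comparison concrete, the sketch below computes the average (mean absolute percentage) error of a predictive method against measured pressure drop gradients. The data points are invented for illustration and are not taken from the study.

```python
# Minimal sketch of how predictive methods can be ranked against an
# experimental database, as in the abstract above. The arrays below are
# hypothetical placeholders, not the authors' data.
import numpy as np

def mean_absolute_percentage_error(measured, predicted):
    """Average relative error, in percent, between data and a method."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - measured) / measured)

# Hypothetical example: measured pressure drop gradients (kPa/m) and the
# values predicted by one correlation for the same operating points.
measured_dp = [5.2, 12.8, 33.1, 61.0]
predicted_dp = [4.9, 14.0, 30.5, 55.7]

print(f"average error: "
      f"{mean_absolute_percentage_error(measured_dp, predicted_dp):.1f}%")
```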

Relevance: 60.00%

Abstract:

[EN] In this paper we present a new method for tracking image primitives based on a CART (Classification and Regression Tree). The tracking procedure uses lines and circles as primitives. We have applied the proposed method to sport event scenarios, specifically soccer matches. We estimate the CART parameters using a learning procedure based on the RGB image channels. To illustrate its performance, the method has been applied to real HD (High Definition) video sequences, and numerical experiments are shown. The quality of primitive tracking with the decision tree is validated by the percentage error rates obtained and by comparison with other techniques, such as a morphological method. We also present applications of the proposed method to camera calibration and graphic object insertion in real video sequences.
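
A minimal sketch of the core idea, assuming a pixel-level formulation: a CART classifier trained on RGB values separates line/circle pixels from background, after which primitives can be fitted to the resulting mask. The training samples are invented, and the paper's actual learning procedure is not reproduced here.

```python
# Hedged sketch: classify pixels as "primitive" (field line / circle marking)
# vs. background from their RGB values with a CART-style decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row is an (R, G, B) pixel; label 1 = line/circle pixel, 0 = background.
# These training samples are hypothetical.
X_train = np.array([[230, 232, 228],   # white line on grass
                    [235, 240, 236],
                    [ 40, 120,  45],   # green pitch
                    [ 35, 110,  50],
                    [ 20,  25,  22]])  # shadow
y_train = np.array([1, 1, 0, 0, 0])

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Classify every pixel of a frame (H x W x 3) in one call.
frame = np.random.randint(0, 256, size=(4, 6, 3))
mask = tree.predict(frame.reshape(-1, 3)).reshape(frame.shape[:2])
print(mask)  # binary map from which lines/circles can then be fitted
```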

Relevance: 60.00%

Abstract:

[EN] This work presents an extensive experimental study of smile detection, testing Local Binary Patterns (LBP) combined with self-similarity (LAC) as the main image descriptors, along with the powerful Support Vector Machine classifier. Results show that error rates can be acceptable and that the self-similarity approach to smile detection is suitable for real-time interaction, although there is still room for improvement.
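
A minimal sketch of an LBP + SVM pipeline of the kind described, using scikit-image and scikit-learn. The face crops and labels below are random placeholders, and the paper's self-similarity (LAC) descriptor is not included.

```python
# Sketch: uniform-LBP histogram per face crop, then an SVM classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP histogram of a grayscale face crop."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical training data: random arrays standing in for real face crops.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(20, 48, 48)).astype(np.uint8)
labels = rng.integers(0, 2, size=20)  # 1 = smiling, 0 = not smiling

X = np.array([lbp_histogram(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```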

Relevance: 60.00%

Abstract:

In this thesis, the main Executive Control theories are presented. Methods typical of Cognitive and Computational Neuroscience are introduced, and the role of behavioural tasks involving conflict resolution during response elaboration, after the presentation of a stimulus to the subject, is highlighted. In particular, the Eriksen Flanker Task and its variants are discussed. Behavioural data from the scientific literature are illustrated in terms of response times and error rates. During such behavioural tasks, EEG is recorded simultaneously, so that event-related potentials associated with the current task can be studied. Different theories regarding the event-related potentials relevant in this field (such as the N2, the fERN (feedback Error Related Negativity), and the ERN (Error Related Negativity)) are introduced. The aim of this thesis is to understand and simulate the processes underlying Executive Control, including performance improvement, error detection mechanisms, post-error adjustments, and the role of selective attention, with the help of an original neural network model. The network described here was built to simulate the behavioural results of a four-choice Eriksen Flanker Task. Model results show that the neural network can simulate response times, error rates, and event-related potentials quite well. Finally, the results are compared with behavioural data and discussed in light of the aforementioned Executive Control theories. Future perspectives for this new model are outlined.

Relevance: 60.00%

Abstract:

Antisaccade errors are attributed to a failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response that the participant has before the trial begins also contributes to the error rate. Participants performed antisaccades in five conditions. The traditional design had two goals on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. Antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information and a trend for stimulus-goal confusability. A control experiment that increased the angular separation between targets without changing these types of prior response information showed no effect on latency or error rate. We conclude that factors other than prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation.

Relevance: 60.00%

Abstract:

The historical context in which saccades are made influences their latency and error rates, but less is known about how context influences their spatial parameters. We recently described a novel spatial bias for antisaccades, in which the endpoints of these responses deviate towards alternative goal locations used in the same experimental block, and showed that expectancy (prior probability) is at least partly responsible for this 'alternate-goal bias'. In this report we asked whether trial history also plays a role. Subjects performed antisaccades to a stimulus randomly located on the horizontal meridian, at a 40° angle downward from it, or at a 40° angle upward, with all three locations equally probable on any given trial. We found that the endpoints of antisaccades were significantly displaced towards the goal location of not only the immediately preceding trial (n - 1) but also the penultimate (n - 2) trial. Furthermore, this bias was mainly present for antisaccades with a short latency of <250 ms and was rapidly corrected by secondary saccades. We conclude that the location of recent antisaccades biases the spatial programming of upcoming antisaccades, that this historical effect persists over many seconds, and that it influences mainly rapidly generated eye movements. Because corrective saccades eliminate the historical bias, we suggest that the bias arises in the processes generating the response vector, rather than in those generating the perceptual estimate of the goal location.
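
For illustration, the sketch below shows one way such a trial-history bias could be quantified: regressing each endpoint's error on the goal locations of trials n - 1 and n - 2. The data are simulated and the analysis is a plausible stand-in, not the authors' code.

```python
# Simulated trial-history analysis: endpoints pulled toward previous goals.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 300
goals = rng.choice([-40.0, 0.0, 40.0], size=n_trials)   # goal angle (deg)

# Simulate endpoints that deviate slightly toward the two previous goals.
endpoint = goals.copy()
endpoint[1:] += 0.05 * goals[:-1]
endpoint[2:] += 0.02 * goals[:-2]
endpoint += rng.normal(0, 2.0, n_trials)

# Least-squares fit: endpoint_error ~ b1 * goal(n-1) + b2 * goal(n-2) + c.
err = (endpoint - goals)[2:]
X = np.column_stack([goals[1:-1], goals[:-2], np.ones(n_trials - 2)])
b1, b2, _ = np.linalg.lstsq(X, err, rcond=None)[0]
print(f"pull toward n-1 goal: {b1:.3f}, toward n-2 goal: {b2:.3f}")
```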

Relevance: 60.00%

Abstract:

Motion-induced blindness (MIB) occurs when target stimuli are presented together with a moving distractor pattern. Most observers experience the targets disappearing and reappearing repeatedly for periods of up to several seconds. MIB can be viewed as a striking marker for the organization of cognitive functioning. In the present study, MIB rates and durations were assessed in 34 schizophrenia-spectrum disorder patients and matched controls. The results showed that positive symptoms and excitement enhanced MIB, whereas depression and negative symptoms attenuated the illusion. MIB was more frequently found in normal subjects. The results remained consistent after adjusting for reaction time and error rates. Hence, MIB may provide a valid and reliable measure of cognitive organization in schizophrenia.

Relevance: 60.00%

Abstract:

Advances in computational biology have made the simultaneous monitoring of thousands of features possible. High-throughput technologies not only provide a much richer information context in which to study various aspects of gene function, but also present the challenge of analyzing data with a large number of covariates and few samples. As an integral part of machine learning, the classification of samples into two or more categories is almost always of interest to scientists. In this paper, we address the question of classification in this setting by extending partial least squares (PLS), a popular dimension reduction tool in chemometrics, to the context of generalized linear regression, building on a previous approach, Iteratively ReWeighted Partial Least Squares (IRWPLS; Marx, 1996). We compare our results with two-stage PLS (Nguyen and Rocke, 2002A; Nguyen and Rocke, 2002B) and with other classifiers. We show that by phrasing the problem in a generalized linear model setting and applying a bias correction to the likelihood to avoid (quasi)separation, we often obtain lower classification error rates.
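
A rough sketch of the IRWPLS idea follows: iteratively reweighted least squares for a logistic model, in which each weighted least-squares step is solved by PLS so that the fit remains feasible when covariates far outnumber samples. This simplified illustration omits the paper's bias correction for (quasi)separation, and all data are simulated.

```python
# Simplified IRWPLS sketch (after Marx, 1996): IRLS with each weighted
# least-squares step solved by PLS. Not the authors' implementation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def irwpls(X, y, n_components=2, n_iter=25):
    eta = np.zeros(len(y))                       # linear predictor
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-np.clip(eta, -10, 10)))  # logistic mean
        w = mu * (1.0 - mu)                      # IRLS weights
        z = eta + (y - mu) / w                   # working response
        sw = np.sqrt(w)
        Xw = X * sw[:, None]
        pls = PLSRegression(n_components=n_components, scale=False)
        pls.fit(Xw, z * sw)                      # weighted LS solved via PLS
        eta = pls.predict(Xw).ravel() / sw       # fitted working response
    return eta

# Hypothetical tiny expression matrix: 30 samples, 200 genes.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))
y = (X[:, 0] + rng.normal(scale=0.5, size=30) > 0).astype(float)
print((irwpls(X, y) > 0).astype(int))            # predicted classes
```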

Relevance: 60.00%

Abstract:

This dissertation has three separate parts: the first part deals with general pedigree association testing incorporating continuous covariates; the second part deals with association tests under population stratification using conditional likelihood tests; the third part deals with genome-wide association studies based on the real rheumatoid arthritis (RA) data sets from Genetic Analysis Workshop 16 (GAW16) Problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates for family data structures, separately. Such univariate analyses may not use all the information coming from the family members in practical studies. Moreover, human complex diseases do not have a clear inheritance pattern; genes may interact or act independently. In Part I, the newly proposed approach, MPDT, focuses on using both the case-control information and the phenotype covariates, and can be applied to detect multiple marker effects. Based on two existing popular statistics in family studies, for case-control and quantitative traits respectively, the new approach can be used on simple family structures as well as general pedigrees. The combined statistics are calculated from the two component statistics, and a permutation procedure is applied to assess the p-value, with a Bonferroni adjustment for the multiple markers. We use simulation studies to evaluate the type I error rates and power of the proposed approach. Our results show that the combined test using both case-control information and phenotype covariates not only has the correct type I error rates but is also more powerful than existing methods; for multiple marker interactions, the proposed method is also very powerful. Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all existing selective genotyping methods may result in spurious association due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for sufficient power to detect the modest genetic effects typical of most complex traits. In Part II, I describe a useful strategy for selective genotyping in the presence of population stratification. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification, and we evaluate its performance using both simulated data from an earlier study and HapMap data sets under a variety of population admixture models generated from empirical data. The rheumatoid arthritis data set of Problem 1 in GAW16 contains one binary trait and two continuous traits: RA status, anti-CCP, and IgM. To allow multiple traits, we propose a set of SNP-level F statistics, based on the concept of multiple correlation, to measure the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform six genome-wide association analyses using novel one- and two-stage approaches based on single, double, and triple traits. Incorporating all six analyses, we successfully validate the SNPs that the literature has identified as responsible for rheumatoid arthritis, and we detect additional disease susceptibility SNPs for future follow-up studies. Except for chromosomes 13 and 18, every chromosome is found to harbour genetic regions susceptible for rheumatoid arthritis or related diseases, such as lupus erythematosus. This topic is discussed in Part III.
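
A generic sketch of the permutation-plus-Bonferroni step described for Part I follows; `combined_stat` is a hypothetical toy statistic standing in for the MPDT statistic, and the genotype and phenotype data are simulated.

```python
# Permutation p-values per marker, Bonferroni-adjusted across markers.
import numpy as np

def permutation_pvalue(stat_fn, pheno, geno, n_perm=1000, rng=None):
    """P-value of stat_fn(pheno, geno) under random relabeling of phenotypes."""
    rng = rng or np.random.default_rng()
    observed = stat_fn(pheno, geno)
    null = np.array([stat_fn(rng.permutation(pheno), geno)
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

def combined_stat(pheno, geno):            # hypothetical toy statistic
    return abs(np.corrcoef(pheno, geno)[0, 1])

rng = np.random.default_rng(2)
genotypes = rng.integers(0, 3, size=(200, 5)).astype(float)  # 5 markers
phenotype = genotypes[:, 0] + rng.normal(size=200)

pvals = [permutation_pvalue(combined_stat, phenotype, genotypes[:, j], rng=rng)
         for j in range(5)]
bonferroni = np.minimum(1.0, np.array(pvals) * 5)  # adjust for 5 markers
print(bonferroni)
```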

Relevance: 60.00%

Abstract:

Parkinson's disease, typically thought of as a movement disorder, is increasingly recognized as causing cognitive impairment and dementia. Eye movement abnormalities are also described, including impairment of rapid eye movements (saccades) and the fixations interspersed between them. Such movements are under the influence of cortical and subcortical networks commonly targeted by the neurodegeneration seen in Parkinson's disease and, as such, may provide a marker for cognitive decline. This study examined the error rates and visual exploration strategies of subjects with Parkinson's disease, with and without cognitive impairment, whilst performing a battery of visuo-cognitive tasks. Error rates were significantly higher in those Parkinson's disease groups with either mild cognitive impairment (P = 0.001) or dementia (P < 0.001), than in cognitively normal subjects with Parkinson's disease. When compared with cognitively normal subjects with Parkinson's disease, exploration strategy, as measured by a number of eye tracking variables, was least efficient in the dementia group but was also affected in those subjects with Parkinson's disease with mild cognitive impairment. When compared with control subjects and cognitively normal subjects with Parkinson's disease, saccade amplitudes were significantly reduced in the groups with mild cognitive impairment or dementia. Fixation duration was longer in all Parkinson's disease groups compared with healthy control subjects but was longest for cognitively impaired Parkinson's disease groups. The strongest predictor of average fixation duration was disease severity. Analysing only data from the most complex task, with the highest error rates, both cognitive impairment and disease severity contributed to a predictive model for fixation duration [F(2,76) = 12.52, P ≤ 0.001], but medication dose did not (r = 0.18, n = 78, P = 0.098, not significant). This study highlights the potential use of exploration strategy measures as a marker of cognitive decline in Parkinson's disease and reveals the efficiency by which fixations and saccades are deployed in the build-up to a cognitive response, rather than merely focusing on the outcome itself. The prolongation of fixation duration, present to a small but significant degree even in cognitively normal subjects with Parkinson's disease, suggests a disease-specific impact on the networks directing visual exploration, although the study also highlights the multi-factorial nature of changes in exploration and the significant impact of cognitive decline on efficiency of visual search.
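
The two-predictor model reported above (fixation duration as a function of cognitive impairment and disease severity) has the form sketched below. The data are simulated, with n = 79 chosen only so that the F test has (2, 76) degrees of freedom; this is not the study's analysis.

```python
# Simulated two-predictor regression of fixation duration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 79
cognition = rng.normal(size=n)            # cognitive impairment score
severity = rng.normal(size=n)             # disease severity rating
fixation = 250 + 20 * cognition + 15 * severity + rng.normal(0, 30, n)

X = sm.add_constant(np.column_stack([cognition, severity]))
fit = sm.OLS(fixation, X).fit()
print(fit.fvalue, fit.f_pvalue)           # compare with F(2,76) in the text
```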

Relevance: 60.00%

Abstract:

This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the research question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains, with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows. First, a novel run-time adaptive MAC protocol is introduced, which allocates the power-hungry radio interface stepwise, in an on-demand manner, when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable, and accurate software-based energy estimation, calculated at network run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies for adaptively allocating the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed, where they achieve superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
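
One of the four contributions lends itself to a compact illustration: adaptive FEC allocation can be sketched as picking the weakest code whose correction capability still covers the current bit error rate estimate. The code table, thresholds, and function names below are invented, not taken from the thesis.

```python
# Hedged sketch of adaptive FEC selection driven by a run-time BER estimate.
def estimate_ber(bit_errors, bits_observed):
    """Run-time BER estimate from recently observed corrected/failed bits."""
    return bit_errors / max(bits_observed, 1)

# Hypothetical code table: (name, max BER the code can reliably correct).
FEC_TABLE = [("none", 1e-5), ("hamming(7,4)", 1e-3), ("bch(63,45)", 1e-2)]

def select_fec(ber):
    for name, capability in FEC_TABLE:
        if ber <= capability:
            return name          # weakest (cheapest) sufficient code
    return FEC_TABLE[-1][0]      # channel worse than the strongest code

print(select_fec(estimate_ber(bit_errors=12, bits_observed=40_000)))  # 3e-4
```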

Relevance: 60.00%

Abstract:

Despite current enthusiasm for the investigation of gene-gene and gene-environment interactions, the essential issue of how to define and detect gene-environment interactions remains unresolved. In this report, we define gene-environment interaction as a stochastic dependence, in the context of the effects of genetic and environmental risk factors, on the cause of phenotypic variation among individuals. We use mutual information, widely used in communication and complex-system analysis, to measure gene-environment interactions. We investigate how gene-environment interactions generate a large difference in the mutual-information measure between the general population and a diseased population, which motivates us to develop mutual information-based statistics for testing gene-environment interactions. We validated the null distribution and calculated the type I error rates of the mutual information-based statistics using extensive simulation studies, and found the new test statistics to be more powerful than traditional logistic regression under several disease models. Finally, to further evaluate the performance of our new method, we applied the mutual information-based statistics to three real examples. Our results show that the P-values of the mutual information-based statistics are much smaller than those obtained by other approaches, including logistic regression models.
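
As a concrete illustration of the measure these statistics are built on, the sketch below computes the mutual information I(G;E) between a genotype and an exposure from a joint count table. The counts are hypothetical, and the paper's actual test statistic and null distribution are not reproduced.

```python
# Mutual information between a genetic factor G and an environmental factor E.
import numpy as np

def mutual_information(joint):
    """I(G;E) in nats from a joint count table of G x E."""
    p = joint / joint.sum()
    pg = p.sum(axis=1, keepdims=True)      # marginal of G
    pe = p.sum(axis=0, keepdims=True)      # marginal of E
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (pg @ pe)[nz])))

# Hypothetical counts: rows = genotype (0/1/2 risk alleles), cols = exposure.
cases = np.array([[30, 10],
                  [25, 25],
                  [10, 40]], dtype=float)
print(f"I(G;E) in cases: {mutual_information(cases):.4f} nats")
# Under independence this is ~0; the paper compares such measures between
# diseased and general populations to test for gene-environment interaction.
```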

Relevance: 60.00%

Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size; the increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were also compared, to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests reached a maximum of less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, making the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium; polygenic effects did not affect the error rates. The tests based on the TDT (Transmission Disequilibrium Test) were not liable to any increase in error rates. For all sample ascertainment costs, linkage disequilibrium tests for recent mutations (<100 generations) were less expensive to carry out than the variance component test. Candidate gene scans saved even more money, and the use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
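
Since several of the compared designs are trio-based, a compact illustration is the classic TDT statistic itself: a McNemar-style chi-square on allele transmissions from heterozygous parents. The counts below are hypothetical.

```python
# Classic TDT statistic on transmissions from heterozygous parents.
from scipy.stats import chi2

def tdt_statistic(b, c):
    """b, c = transmissions of allele A1 vs. A2 from heterozygous parents
    to affected children. Returns (chi-square, p-value)."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt_statistic(b=60, c=40)   # 100 informative transmissions
print(f"TDT chi-square = {stat:.2f}, p = {p:.4f}")
```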

Relevance: 60.00%

Abstract:

The diversity of European culture is reflected in its healthcare training programs. In intensive care medicine (ICM), the differences between national training programs were so marked that it was unlikely they could produce specialists of equivalent skill. The Competency-Based Training in Intensive Care Medicine in Europe (CoBaTrICE) program was established in 2003 as a Europe-based worldwide collaboration of national training organizations to create core competencies for ICM, using consensus methodologies to establish common ground. The group's professional and research ethos created a social identity that facilitated change. The program was easily adaptable to different training structures and incorporated the voice of patients and relatives. The CoBaTrICE program has now been adopted by 15 European countries, with another 12 countries planning to adopt it, and is currently available in nine languages, including English. ICM is now recognized as a primary specialty in Spain, Switzerland, and the UK. There are still wide variations in the structures and processes of ICM training across Europe, although there has been agreement on a set of common program standards. The combination of a common "product specification" for an intensivist with persisting variation in the educational context in which competencies are delivered provides a rich source of research inquiry. Pedagogic research in ICM could usefully focus on the interplay between educational interventions, healthcare systems and delivery, and patient outcomes: for example, whether competency-based programs are associated with lower error rates, whether communication skills training is associated with greater patient and family satisfaction, how multisource feedback might best be used to improve reflective learning and teamworking, or whether increasing the proportion of specialists trained in acute care in the hospital at weekends results in better patient outcomes.

Relevance: 60.00%

Abstract:

Introduction. Erroneous answers in studies on the misinformation effect (ME) can be reduced in different ways. In some studies, the ME was reduced by SM questions, warnings, or a low credibility of the source of the post-event information (PEI); results are inconsistent, however. Of course, a participant can deliberately decide to refrain from reporting a critical item only when the difference between the original event and the PEI is distinguishable in principle. We were interested in the question of to what extent the influence of erroneous information about a central aspect of the original event can be reduced by different means, applied singly or in combination. Method. With a 2 (credibility: high vs. low) × 2 (warning: present vs. absent) between-subjects design and an additional control group that received neither misinformation nor a warning (N = 116), we examined the above-mentioned factors' influence on the ME. Participants viewed a short video of a robbery. The critical item suggested in the PEI was that the victim was given a kick by the perpetrator (which he actually was not). The memory test consisted of a two-alternative forced-choice recognition test followed by an SM test. Results. To our surprise, neither a main effect of erroneous PEI nor a main effect of credibility was found. The error rates for the critical item in the control group (50%) and in the high- (65%) and low- (52%) credibility conditions without warning did not differ significantly. A warning about possible misleading information in the PEI significantly reduced the influence of misinformation in both credibility conditions, by 32-37%. Using an SM question also significantly reduced the error rate, but only in the high-credibility, no-warning condition. Conclusion and Future Research. Our results show that, contrary to a warning or the use of an SM question, low source credibility did not reduce the ME. The most striking finding, however, was the absence of a main effect of erroneous PEI. Given the high error rate in the control group, we suspect that the wrong answers might have been caused either by the response format (recognition test) or by autosuggestion, possibly promoted by the high schema-consistency of the critical item. First results of a post-study in which we used open-ended questions before the recognition test support the former assumption. Results of a replication of this study using open-ended questions prior to the recognition test will be available by June.
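
For illustration, the reported warning effect can be checked with a simple test on a 2 × 2 table of error counts. The counts below are reconstructed from the percentages in the abstract under an assumed cell size of 25 participants per condition; they are not the study's raw data.

```python
# Two-proportion check of the warning effect on error rates (illustrative).
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cell sizes of 25; high-credibility PEI conditions,
# rows = warning absent/present, cols = error / correct on the critical item.
table = np.array([[16, 9],    # no warning: ~65% errors
                  [ 7, 18]])  # warning:    ~28% errors (65% - 37%)
chi2_stat, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2_stat:.2f}, p = {p:.3f}")
```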