960 results for ERROR rates


Relevance: 60.00%

Abstract:

The aim of this study was to investigate the effect of court surface (clay vs. hard court) on technical, physiological and perceptual responses to on-court training. Four high-performance junior male players performed two identical training sessions, one on a hard court and one on a clay court. Sessions included both physical conditioning and technical elements as led by the coach. Each session was filmed for later notational analysis of stroke count and error rates. Further, players wore a global positioning system (GPS) device to measure distance covered during each session, while heart rate, countermovement jump distance and capillary blood measures of metabolites were taken before, during and following each session. Additionally, coach and athlete ratings of perceived exertion (RPE) were collected after each session. Total duration and distance covered were comparable between sessions (P>0.05; d<0.20). While forehand and backhand stroke volumes did not differ between sessions (P>0.05; d<0.30), large effects for increased unforced and forced errors were present on the hard court (P>0.05; d>0.90). Furthermore, large effects for increased heart rate, blood lactate and RPE values were evident on clay compared with hard courts (P>0.05; d>0.90). Additionally, while player and coach RPE on hard courts were similar, there were large effects for coaches to underrate the RPE of players on clay courts (P>0.05; d>0.90). In conclusion, training on clay courts showed trends for increased heart rate, lactate and RPE values, suggesting sessions on clay tend towards higher physiological and perceptual loads than those on hard courts. Further, coaches appear effective at rating player RPE on hard courts, but may underrate the perceived exertion of sessions on clay courts.
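The effect sizes quoted above (d<0.20, d>0.90) are Cohen's d values. Purely as an illustration, not as part of the study, the sketch below shows one common way to compute Cohen's d with a pooled standard deviation; the per-player lactate values are hypothetical.

    import numpy as np

    def cohens_d(group_a, group_b):
        """Cohen's d with a pooled standard deviation (illustrative only)."""
        a, b = np.asarray(group_a, float), np.asarray(group_b, float)
        pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                      / (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # Hypothetical per-player blood lactate values (mmol/L), clay vs hard sessions.
    clay = [4.1, 3.8, 4.5, 4.0]
    hard = [3.0, 2.9, 3.4, 3.1]
    print(f"Cohen's d = {cohens_d(clay, hard):.2f}")

By the usual convention, |d| around 0.2 is a small effect, 0.5 moderate and 0.8 or more large, which is how the "large effects" reported above should be read even where P>0.05.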

Relevance: 60.00%

Abstract:

Despite the prominent use of the Suchey-Brooks (S-B) method of age estimation in forensic anthropological practice, it is subject to intrinsic limitations, with reports of differential inter-population error rates between geographical locations. This study assessed the accuracy of the S-B method in a contemporary adult population in Queensland, Australia and provides robust age parameters calibrated for our population. Three-dimensional surface reconstructions were generated from computed tomography scans of the pubic symphysis of male and female Caucasian individuals aged 15–70 years (n = 195) in Amira® and Rapidform®. Error was analyzed on the basis of bias, inaccuracy and percentage correct classification for left and right symphyseal surfaces. Application of transition analysis and Chi-square statistics demonstrated 63.9% and 69.7% correct age classification associated with the left symphyseal surface of Australian males and females, respectively, using the S-B method. Using Bayesian statistics, probability density distributions for each S-B phase were calculated, providing refined age parameters for our population. Mean inaccuracies of 6.77 (±2.76) and 8.28 (±4.41) years were reported for the left surfaces of males and females, respectively, with positive biases for younger individuals (<55 years) and negative biases for older individuals. Significant sexual dimorphism in the application of the S-B method was observed, and asymmetry in phase classification of the pubic symphysis was a frequent phenomenon. These results indicate that the S-B method should be applied with caution in medico-legal death investigations of Queensland skeletal remains and warrant further investigation of reliable age estimation techniques.
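In age-estimation studies of this kind, bias is typically the mean signed difference between estimated and chronological age, and inaccuracy the mean absolute difference. The short sketch below illustrates these two definitions only; the ages are hypothetical and do not come from the Queensland sample.

    import numpy as np

    def bias_and_inaccuracy(estimated_ages, actual_ages):
        """Bias = mean signed error; inaccuracy = mean absolute error (illustrative)."""
        diff = np.asarray(estimated_ages, float) - np.asarray(actual_ages, float)
        return diff.mean(), np.abs(diff).mean()

    # Hypothetical estimated vs. chronological ages (years).
    estimated = [32, 45, 51, 28, 63]
    actual = [29, 50, 47, 25, 70]
    bias, inaccuracy = bias_and_inaccuracy(estimated, actual)
    print(f"bias = {bias:+.2f} years, inaccuracy = {inaccuracy:.2f} years")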

Relevance: 60.00%

Abstract:

Encompasses the whole BPM lifecycle, including process identification, modelling, analysis, redesign, automation and monitoring. Class-tested textbook complemented with additional teaching material on the accompanying website. Covers the relevant conceptual background, industrial standards and actionable skills.

Business Process Management (BPM) is the art and science of how work should be performed in an organization in order to ensure consistent outputs and to take advantage of improvement opportunities, e.g. reducing costs, execution times or error rates. Importantly, BPM is not about improving the way individual activities are performed, but rather about managing entire chains of events, activities and decisions that ultimately produce added value for an organization and its customers. This textbook encompasses the entire BPM lifecycle, from process identification to process monitoring, covering along the way process modelling, analysis, redesign and automation. Concepts, methods and tools from business management, computer science and industrial engineering are blended into one comprehensive and interdisciplinary approach. The presentation is illustrated using the BPMN industry standard, defined by the Object Management Group and widely endorsed by practitioners and vendors worldwide. In addition to explaining the relevant conceptual background, the book provides dozens of examples, more than 100 hands-on exercises – many with solutions – as well as numerous suggestions for further reading. The textbook is the result of many years of combined teaching experience of the authors, both at the undergraduate and graduate levels as well as in the context of professional training. Students and professionals from both business management and computer science will benefit from the step-by-step style of the textbook and its focus on fundamental concepts and proven methods. Lecturers will appreciate the class-tested format and the additional teaching material available on the accompanying website, fundamentals-of-bpm.org.

Relevance: 60.00%

Abstract:

There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement for expensive devices and the risk of stolen biometric templates. Moreover, existing approaches usually perform authentication only once, at the start of a session. Non-intrusive and continuous monitoring of user activities emerges as a promising solution for hardening the authentication process, adding a behavioral dimension: how someone behaves. In recent years, various keystroke-dynamics approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static-text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free-text approaches, which allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions. Unsolved problems include, for instance, scalability, high response times and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is that these solutions can be deployed under realistic conditions within existing environments, in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
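As a rough illustration of free-text keystroke-dynamics verification in general, not of the system developed in this work, the sketch below derives digraph (key-pair) down-down latencies from timestamped key events and scores a monitoring session against an enrolment profile with a mean absolute difference; the event data, the 40 ms decision threshold and all names are hypothetical.

    from collections import defaultdict
    from statistics import mean

    def digraph_latencies(keystrokes):
        """Average down-down latency (ms) per key pair from (key, press_time) events."""
        feats = defaultdict(list)
        for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
            feats[k1 + k2].append(t2 - t1)
        return {pair: mean(times) for pair, times in feats.items()}

    def profile_distance(profile, session):
        """Mean absolute latency difference over digraphs observed in both samples."""
        shared = profile.keys() & session.keys()
        if not shared:
            return float("inf")
        return mean(abs(profile[d] - session[d]) for d in shared)

    # Hypothetical enrolment and monitoring data: (key, press timestamp in ms).
    enrol = [("t", 0), ("h", 110), ("e", 205), ("t", 420), ("h", 560), ("e", 650)]
    monitor = [("t", 0), ("h", 130), ("e", 240)]
    dist = profile_distance(digraph_latencies(enrol), digraph_latencies(monitor))
    print("accept" if dist < 40 else "reject", f"(distance = {dist:.1f} ms)")

A deployable free-text system would use far richer features (hold times, n-graphs), robust per-user thresholds and efficient profile storage, which is exactly where the scalability, response-time and error-rate problems mentioned above arise.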

Relevance: 60.00%

Abstract:

Speaker diarization is the process of annotating an input audio recording with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of 'who spoke when'. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems that allow users to directly access the relevant segments of interest within a given recording, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracy. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and systems developed throughout this work are trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains, including telephone conversations and meeting audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was investigated first, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving the detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
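The Bayes-factor criterion developed in this work is not reproduced here, but the closely related ΔBIC criterion, widely used for speaker change detection with multivariate Gaussian models, conveys the underlying idea: compare modelling two adjacent feature windows with one Gaussian (same speaker) against two Gaussians (speaker change), penalising the extra parameters. The features, dimensionality and penalty weight below are hypothetical.

    import numpy as np

    def gauss_loglik(X):
        """Log-likelihood of X under a full-covariance Gaussian fitted to X itself."""
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularise
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = X - mu
        quad = np.einsum("ij,jk,ik->i", diff, inv, diff)
        return float(-0.5 * np.sum(quad + logdet + X.shape[1] * np.log(2 * np.pi)))

    def delta_bic(X, Y, penalty_weight=1.0):
        """Delta-BIC for adjacent windows X, Y; a positive value suggests a speaker change."""
        Z = np.vstack([X, Y])
        d = Z.shape[1]
        extra_params = d + d * (d + 1) / 2            # one extra mean and covariance
        penalty = 0.5 * penalty_weight * extra_params * np.log(len(Z))
        return gauss_loglik(X) + gauss_loglik(Y) - gauss_loglik(Z) - penalty

    rng = np.random.default_rng(0)
    spk_a = rng.normal(0.0, 1.0, size=(200, 4))       # hypothetical MFCC-like features
    spk_b = rng.normal(1.5, 1.0, size=(200, 4))
    print("same speaker flagged as change:", delta_bic(spk_a[:100], spk_a[100:]) > 0)
    print("true change point detected:", delta_bic(spk_a, spk_b) > 0)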

Relevance: 60.00%

Abstract:

Establishing age-at-death for skeletal remains is a vital component of forensic anthropology. The Suchey-Brooks (S-B) method of age estimation has been widely utilised since 1986 and relies on a visual assessment of the pubic symphyseal surface in comparison to a series of casts. Inter-population studies (Kimmerle et al., 2005; Djuric et al., 2007; Sakaue, 2006) demonstrate limitations of the S-B method; however, no assessment of this technique specific to Australian populations has been published. Aim: This investigation assessed the accuracy and applicability of the S-B method in an adult Australian Caucasian population by quantifying the error rates associated with this technique. Methods: Computed tomography (CT) and contact scans of the S-B casts were performed; each geometrically modelled surface was extracted and quantified for reference purposes. A Queensland skeletal database for Caucasian remains aged 15–70 years was initiated at the Queensland Health Forensic and Scientific Services – Forensic Pathology Mortuary (n=350). Three-dimensional reconstruction of the bone surface using innovative volume visualisation protocols in the Amira® and Rapidform® platforms was performed. Samples were allocated into 11 sub-sets of 5-year age intervals and changes in surface geometry were quantified in relation to age, gender and asymmetry. Results: Preliminary results indicate that computational analysis was successfully applied to model morphological surface changes. Significant differences between observed and actual ages were noted. Furthermore, initial morphological assessment demonstrates significant bilateral asymmetry of the pubic symphysis, which is unaccounted for in the S-B method. These results support refinements to the S-B method when applied to Australian casework. Conclusion: This investigation promises to make anthropological analysis more quantitative and less invasive through CT imaging. The overarching goal is to improve skeletal identification and medico-legal death investigation in the coronial process by narrowing the range of age-at-death estimation in a biological profile.

Relevance: 60.00%

Abstract:

Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent: it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated for text-dependent speaker verification using Hidden Markov Model based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validation of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criteria. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters - the number of instances and samples - serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
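To make the independence-based composition of errors concrete, the sketch below shows how per-attempt false rejection and false acceptance rates combine when every decision stage (instance) must be passed and a stage is passed if any of its repeated attempts (samples) matches. The 5% base rates are hypothetical, and the expressions are a textbook independence illustration rather than the dissertation's derivation, which additionally models correlated decisions.

    def stage_error_rates(frr, far, n_samples):
        """One stage: accept if any of n_samples independent attempts matches."""
        stage_frr = frr ** n_samples             # genuine user rejected only if all attempts fail
        stage_far = 1 - (1 - far) ** n_samples   # impostor accepted if any attempt passes
        return stage_frr, stage_far

    def sequential_error_rates(frr, far, n_instances, n_samples):
        """All n_instances stages must be passed (AND); attempts within a stage are OR-ed."""
        stage_frr, stage_far = stage_error_rates(frr, far, n_samples)
        total_frr = 1 - (1 - stage_frr) ** n_instances
        total_far = stage_far ** n_instances
        return total_frr, total_far

    # Hypothetical base classifier: 5% FRR and 5% FAR per single attempt.
    for instances, samples in [(1, 1), (3, 1), (3, 2), (5, 2)]:
        frr, far = sequential_error_rates(0.05, 0.05, instances, samples)
        print(f"{instances} instances x {samples} samples -> FRR={frr:.4f}, FAR={far:.6f}")

The printout makes the trade-off explicit: adding instances drives the false acceptance rate down at the cost of more false rejections, while adding samples per stage does the opposite, which is why both parameters are tuned together.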

Relevance: 60.00%

Abstract:

Speaker attribution is the task of annotating a spoken audio archive based on speaker identities. This can be achieved using speaker diarization and speaker linking. In our previous work, we proposed an efficient attribution system, using complete-linkage clustering, for conducting attribution of large sets of two-speaker telephone data. In this paper, we build on our proposed approach to achieve a robust system applicable to multiple recording domains. To do this, we first extend the diarization module of our system to accommodate multi-speaker (>2) recordings. We achieve this by using a robust cross-likelihood ratio (CLR) threshold as the stopping criterion for clustering, as opposed to the original stopping criterion of two speakers used for telephone data. We evaluate this baseline diarization module across a dataset of Australian broadcast news recordings, showing a significant lack of diarization accuracy without prior knowledge of the true number of speakers within a recording. We thus propose applying an additional pass of complete-linkage clustering to the diarization module, demonstrating an absolute improvement of 20% in diarization error rate (DER). We then evaluate our proposed multi-domain attribution system across the broadcast news data, demonstrating achievable attribution error rates (AER) as low as 17%.
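As a sketch of the clustering step only, not of the actual attribution system, the following applies complete-linkage agglomerative clustering to synthetic segment embeddings and stops merging once the closest pair of clusters exceeds a distance threshold, a stand-in for the CLR-based stopping criterion; the data and the threshold value are hypothetical.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(1)
    # Hypothetical embedding-like features for speech segments from three speakers.
    segments = np.vstack([
        rng.normal(0.0, 0.3, size=(5, 8)),
        rng.normal(2.0, 0.3, size=(4, 8)),
        rng.normal(-2.0, 0.3, size=(6, 8)),
    ])

    # Complete-linkage clustering; fcluster's distance criterion plays the role of
    # a threshold-based stopping criterion instead of a fixed number of speakers.
    Z = linkage(segments, method="complete", metric="euclidean")
    labels = fcluster(Z, t=3.0, criterion="distance")
    print("estimated number of speakers:", len(set(labels)))
    print("segment-to-speaker labels:", labels)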

Relevance: 60.00%

Abstract:

Background: Single nucleotide polymorphisms (SNPs) rs429358 (ε4) and rs7412 (ε2), both invoking changes in the amino-acid sequence of the apolipoprotein E (APOE) gene, have previously been tested for association with multiple sclerosis (MS) risk. However, none of these studies was sufficiently powered to detect modest effect sizes at acceptable type-I error rates. As both SNPs are only imperfectly captured on commonly used microarray genotyping platforms, their evaluation in the context of genome-wide association studies has been hindered until recently. Methods: We genotyped 12 740 subjects hitherto not studied for their APOE status, imputed raw genotype data from 8739 subjects from five independent genome-wide association study datasets using the most recent high-resolution reference panels, and extracted genotype data for 8265 subjects from previous candidate gene assessments. Results: Despite sufficient power to detect associations at genome-wide significance thresholds across a range of ORs, our analyses did not support a role of rs429358 or rs7412 in MS susceptibility. This included meta-analyses of the combined data across 13 913 MS cases and 15 831 controls (OR=0.95, p=0.259, and OR=1.07, p=0.0569, for rs429358 and rs7412, respectively). Conclusion: Given the large sample size of our analyses, it is unlikely that the two APOE missense SNPs studied here exert any relevant effect on MS susceptibility.
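For readers unfamiliar with the procedure, per-dataset odds ratios of this kind are typically combined with a fixed-effect, inverse-variance meta-analysis on the log-OR scale. The sketch below illustrates that calculation only; the odds ratios and standard errors are hypothetical and are not the study's data.

    import math

    def fixed_effect_meta(odds_ratios, standard_errors):
        """Inverse-variance fixed-effect meta-analysis on the log-OR scale (illustrative)."""
        log_ors = [math.log(o) for o in odds_ratios]
        weights = [1.0 / se ** 2 for se in standard_errors]
        pooled_log_or = sum(w * b for w, b in zip(weights, log_ors)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        z = pooled_log_or / pooled_se
        p = math.erfc(abs(z) / math.sqrt(2))       # two-sided p-value from the normal
        return math.exp(pooled_log_or), p

    # Hypothetical per-dataset ORs and standard errors for one SNP.
    pooled_or, p_value = fixed_effect_meta([0.93, 0.97, 0.95], [0.06, 0.05, 0.08])
    print(f"pooled OR = {pooled_or:.3f}, p = {p_value:.3f}")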

Relevance: 60.00%

Abstract:

Fusion techniques can be used in biometrics to achieve higher accuracy. When biometric systems are in operation and the threat level changes, controlling the trade-off between detection error rates can reduce the impact of an attack. In a fused system, varying a single threshold does not allow this to be achieved, but systematic adjustment of a set of parameters does. In this paper, fused decisions from a multi-part, multi-sample sequential architecture are investigated for that purpose in an iris recognition system. A specific implementation of the multi-part architecture is proposed, and the effect of the number of parts and samples on the resultant detection error rates is analysed. The effectiveness of the proposed architecture is then evaluated under two specific cases of obfuscation attack: miosis and mydriasis. Results show that robustness to such obfuscation attacks is achieved, since lower error rates are obtained than with the non-fused base system.

Relevance: 60.00%

Abstract:

Background: Nutrition screening identifies patients at risk of malnutrition to facilitate early nutritional intervention. Studies have reported incompletion and error rates of 30-90% for a range of commonly used screening tools. This study aims to investigate the incompletion and error rates of 3-Minute Nutrition Screening (3-MinNS) and the effect of quality improvement initiatives on the overall performance of the screening tool and the referral process for at-risk patients. Methods: Annual audits were carried out from 2008 to 2013 on 4467 patients. Value Stream Mapping, the Plan-Do-Check-Act cycle and Root Cause Analysis were used to identify gaps and determine the best intervention. The intervention included 1) implementing a nutrition screening protocol, 2) nutrition screening training, 3) nurse empowerment for online dietetics referral of at-risk cases, 4) a closed-loop feedback system and 5) removing the component of 3-MinNS that caused the most errors without compromising its sensitivity and specificity. Results: Nutrition screening error rates were 33% and 31%, with 5% and 8% blank or missing forms, in 2008 and 2009 respectively. For patients at risk of malnutrition, referral to dietetics took up to 7.5 days, with 10% not referred at all. After the intervention, the latter decreased to 7% (2010), 4% (2011) and 3% (2012 and 2013), and the mean turnaround time from screening to referral was reduced significantly from 4.3 ± 1.8 days to 0.3 ± 0.4 days (p < 0.001). Error rates were reduced to 25% (2010), 15% (2011), 7% (2012) and 5% (2013), and the percentage of blank or missing forms was reduced to, and remained at, 1%. Conclusion: Quality improvement initiatives are effective in reducing the incompletion and error rates of nutrition screening and lead to sustainable improvements in the referral process for patients at nutritional risk.

Relevance: 60.00%

Abstract:

What helps us determine whether a word is a noun or a verb, without conscious awareness? We report on cues in the way individual English words are spelled and, for the first time, identify their neural correlates via functional magnetic resonance imaging (fMRI). We used a lexical decision task with trisyllabic nouns and verbs containing orthographic cues that are either consistent or inconsistent with the spelling patterns of words from that grammatical category. Significant linear increases in response times and error rates were observed as orthography became less consistent, paralleled by significant linear decreases in blood oxygen level dependent (BOLD) signal in the supramarginal gyrus of the left inferior parietal lobule, a brain region implicated in visual word recognition. A similar pattern was observed in the left superior parietal lobule. These findings align with an emergentist view in which grammatical category processing results from sensitivity to multiple probabilistic cues.

Relevance: 60.00%

Abstract:

The present study compared IQs and Verbal-Performance IQ discrepancies estimated from two seven-subtest short forms of the Wechsler Adult Intelligence Scale-Revised (WAIS-R) in a sample of 100 subjects referred for neuropsychological assessment. The short forms of Warrington, James, and Maciejewski (1986) and Ward (1990) yielded similar correlation coefficients and absolute error rates with respect to WAIS-R IQs, although the Warrington short form requires more time to administer and score. Both short forms were able to detect significant Verbal-Performance IQ discrepancies 70% of the time. However, they incorrectly yielded significant discrepancies for approximately 25% of the sample who did not have significant differences on the full WAIS-R. The results do not support reporting and interpreting significant Verbal-Performance IQ discrepancies estimated from these short forms.

Relevance: 60.00%

Abstract:

The MFG test is a family-based association test that detects genetic effects contributing to disease in offspring, including offspring allelic effects, maternal allelic effects and MFG incompatibility effects. Like many other family-based association tests, it assumes that offspring survival and offspring-parent genotypes are conditionally independent given that the offspring is affected. However, when the putative disease-increasing locus can affect another, competing phenotype, for example offspring viability, the conditional independence assumption fails and these tests could lead to incorrect conclusions regarding the role of the gene in disease. We propose the v-MFG test to adjust for the genetic effects on one phenotype, e.g. viability, when testing the effects of that locus on another phenotype, e.g. disease. Using genotype data from nuclear families containing parents and at least one affected offspring, the v-MFG test models the distribution of family genotypes conditional on offspring phenotypes. It simultaneously estimates genetic effects on two phenotypes, viability and disease. Simulations show that the v-MFG test produces accurate estimates of genetic effects on disease as well as on viability under several different scenarios. It maintains accurate type-I error rates and provides adequate power with moderate sample sizes to detect genetic effects on disease risk when viability is reduced. Demonstrating the v-MFG test with HLA-DRB1 data from study participants with rheumatoid arthritis (RA) and their parents, we show that it successfully detects an MFG incompatibility effect on RA while simultaneously adjusting for a possible viability loss.

Relevance: 60.00%

Abstract:

Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents in order to identify a promising one as soon as possible for given error rates. The number of patients to be tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy, derived using Markov decision processes, to minimize the sample sizes (numbers of patients) required. The minimization is carried out under constraints on the two types of error probabilities (false positive and false negative), with Lagrangian multipliers corresponding to the cost parameters for the two types of errors. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
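The Markov-decision-process solution itself is not reproduced here, but a classical example of a sequential design with early acceptance and rejection under constraints on the two error probabilities is Wald's sequential probability ratio test (SPRT). The sketch below applies it to Bernoulli response data; the response rates, error targets and outcomes are hypothetical.

    import math

    def sprt_boundaries(alpha, beta):
        """Wald SPRT log-likelihood-ratio boundaries for target error rates alpha and beta."""
        return math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)

    def sprt_bernoulli(responses, p0, p1, alpha=0.05, beta=0.10):
        """Sequentially test H0: p = p0 vs H1: p = p1, stopping when a boundary is crossed."""
        lower, upper = sprt_boundaries(alpha, beta)
        llr = 0.0
        for n, x in enumerate(responses, start=1):
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr <= lower:
                return "accept H0 (abandon agent)", n
            if llr >= upper:
                return "accept H1 (promising agent)", n
        return "continue testing", len(responses)

    # Hypothetical patient responses (1 = response) for an agent with a true rate near p1.
    print(sprt_bernoulli([1, 1, 0, 1, 1, 1, 0, 1, 1, 1], p0=0.20, p1=0.40))

Unlike the optimal strategy described above, the SPRT sets its boundaries directly from the target error rates rather than minimizing the expected number of patients via dynamic programming, but it shows how early acceptance and rejection can substantially reduce the patients required per agent.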