948 results for ERROR rates
Abstract:
BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
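A minimal sketch of the headline metric: source-to-database errors expressed per 10,000 fields audited. The counts below are hypothetical, chosen only so the example lands near the published 14.3 per 10,000 average.

```python
# Hedged sketch: expressing audit discrepancies per 10,000 data fields.
# "errors_found" and "fields_inspected" are hypothetical tallies, not values
# taken from the study beyond its published average of 14.3 per 10,000.

def error_rate_per_10k(errors_found: int, fields_inspected: int) -> float:
    """Discrepancies per 10,000 data fields audited."""
    if fields_inspected == 0:
        raise ValueError("no fields inspected")
    return errors_found / fields_inspected * 10_000

# Example: 43 discrepancies found in 30,000 audited fields ~= 14.3 per 10,000.
print(round(error_rate_per_10k(43, 30_000), 1))  # 14.3
```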
Abstract:
Complex diseases will have multiple functional sites, and it will be invaluable to understand the cross-locus interaction in terms of linkage disequilibrium (LD) between those sites (epistasis) in addition to the haplotype-LD effects. We investigated the statistical properties of a class of matrix-based statistics to assess this epistasis. These statistical methods include two LD contrast tests (Zaykin et al., 2006) and partial least squares regression (Wang et al., 2008). To estimate Type I error rates and power, we simulated multiple two-variant disease models using the SIMLA software package. SIMLA allows for the joint action of up to two disease genes in the simulated data with all possible multiplicative interaction effects between them. Our goal was to detect an interaction between multiple disease-causing variants by means of their linkage disequilibrium (LD) patterns with other markers. We measured the effects that marginal disease effect size, haplotype LD, disease prevalence and minor allele frequency have on cross-locus interaction (epistasis). In the setting of strong allele effects and strong interaction, the correlation between the two disease genes was weak (r = 0.2). In a complex system with multiple correlations (both marginal and interaction), it was difficult to determine the source of a significant result. Despite these complications, the partial least squares and modified LD contrast methods maintained adequate power to detect the epistatic effects; however, for many of the analyses we often could not separate interaction from a strong marginal effect. While we did not exhaust the entire parameter space of possible models, we do provide guidance on the effects that population parameters have on cross-locus interaction.
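As a hedged illustration of the replicate-based evaluation described above, the sketch below estimates the empirical Type I error rate of a test by simulating independent (null) two-variant data many times; the simple correlation test is a stand-in, not the LD contrast or partial least squares methods from the paper, and the sample size and allele frequencies are arbitrary.

```python
# Minimal sketch: empirical Type I error of a test statistic under the null,
# estimated by repeated simulation. The Pearson correlation test between two
# independently simulated variants is only a placeholder for the paper's methods.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def null_replicate(n=500, maf1=0.3, maf2=0.2):
    # Two independent biallelic variants coded as 0/1/2 genotype counts.
    g1 = rng.binomial(2, maf1, size=n)
    g2 = rng.binomial(2, maf2, size=n)
    return pearsonr(g1, g2)[1]  # p-value of the association test

pvals = np.array([null_replicate() for _ in range(1000)])
print("empirical Type I error at alpha = 0.05:", (pvals < 0.05).mean())
```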
Abstract:
In view of the evidence that cognitive deficits in schizophrenia are critically important for long-term outcome, it is essential to establish the effects that the various antipsychotic compounds have on cognition, particularly second-generation drugs. This parallel group, placebo-controlled study aimed to compare the effects in healthy volunteers (n = 128) of acute doses of the atypical antipsychotics amisulpride (300 mg) and risperidone (3 mg) to those of chlorpromazine (100 mg) on tests thought relevant to the schizophrenic process: auditory and visual latent inhibition, prepulse inhibition of the acoustic startle response, executive function and eye movements. The drugs tested were not found to affect auditory latent inhibition, prepulse inhibition or executive functioning as measured by the Cambridge Neuropsychological Test Battery and the FAS test of verbal fluency. However, risperidone disrupted and amisulpride showed a trend to disrupt visual latent inhibition. Although amisulpride did not affect eye movements, both risperidone and chlorpromazine decreased peak saccadic velocity and increased antisaccade error rates, which, in the risperidone group, correlated with drug-induced akathisia. It was concluded that single doses of these drugs appear to have little effect on cognition, but may affect eye movement parameters in accordance with the amount of sedation and akathisia they produce. The effect risperidone had on latent inhibition is likely to relate to its serotonergic properties. Furthermore, as the trend for disrupted visual latent inhibition following amisulpride was similar in nature to that which would be expected with amphetamine, it was concluded that its behaviour in this model is consistent with its preferential presynaptic dopamine antagonistic activity in low dose and its efficacy in the negative symptoms of schizophrenia.
Abstract:
This study finds evidence that attempts to reduce costs and error rates in the Inland Revenue through the use of e-commerce technology are flawed. While it is technically possible to write software that will record tax data, and then transmit it to the Inland Revenue, there is little demand for this service. The key finding is that the tax system is so complex that many people are unable to complete their own tax returns. This complexity cannot be overcome by well-designed software. The recommendation is to encourage the use of agents to assist taxpayers or to simplify the tax system. The Inland Revenue is interested in saving administrative costs and errors by encouraging electronic submission of tax returns. To achieve these objectives, given the raw data, it would seem clear that the focus should be on facilitating the work of agents.
Abstract:
This paper investigates the problem of speaker identification and verification in noisy conditions, assuming that speech signals are corrupted by environmental noise, but knowledge about the noise characteristics is not available. This research is motivated in part by the potential application of speaker recognition technologies on handheld devices or the Internet. While the technologies promise an additional biometric layer of security to protect the user, the practical implementation of such systems faces many challenges. One of these is environmental noise. Due to the mobile nature of such systems, the noise sources can be highly time-varying and potentially unknown. This raises the requirement for noise robustness in the absence of information about the noise. This paper describes a method that combines multicondition model training and missing-feature theory to model noise with unknown temporal-spectral characteristics. Multicondition training is conducted using simulated noisy data with limited noise variation, providing a "coarse" compensation for the noise, and missing-feature theory is applied to refine the compensation by ignoring noise variation outside the given training conditions, thereby reducing the training and testing mismatch. This paper is focused on several issues relating to the implementation of the new model for real-world applications. These include the generation of multicondition training data to model noisy speech, the combination of different training data to optimize the recognition performance, and the reduction of the model's complexity. The new algorithm was tested using two databases with simulated and realistic noisy speech data. The first database is a redevelopment of the TIMIT database by rerecording the data in the presence of various noise types, used to test the model for speaker identification with a focus on the varieties of noise. The second database is a handheld-device database collected in realistic noisy conditions, used to further validate the model for real-world speaker verification. The new model is compared to baseline systems and is found to achieve lower error rates.
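As a loose illustration of the multicondition-training idea only (not the authors' system), the sketch below pools several simulated noise conditions when fitting a per-speaker Gaussian mixture model, then identifies a test recording at an unseen noise level; the 2-D synthetic "features", noise levels and speaker names are all made up, and the missing-feature refinement is not shown.

```python
# Hedged sketch of multicondition training: each speaker model is fit on features
# pooled across several simulated noise levels, reducing mismatch with noisy test
# data. Synthetic 2-D features stand in for real cepstral features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
speaker_means = {"spk_A": np.array([0.0, 0.0]), "spk_B": np.array([3.0, 1.0])}

def noisy_features(mean, noise_scale, n=200):
    # Clean features plus additive noise of a chosen scale.
    clean = rng.normal(mean, 1.0, size=(n, 2))
    return clean + rng.normal(0.0, noise_scale, size=(n, 2))

# Multicondition training: pool a few noise conditions per speaker.
models = {}
for spk, mu in speaker_means.items():
    train = np.vstack([noisy_features(mu, s) for s in (0.5, 1.0, 2.0)])
    models[spk] = GaussianMixture(n_components=2, random_state=0).fit(train)

# Identification at an unseen noise level: pick the model with the highest likelihood.
test = noisy_features(speaker_means["spk_B"], noise_scale=1.4, n=50)
scores = {spk: m.score(test) for spk, m in models.items()}
print(max(scores, key=scores.get))  # expected: spk_B
```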
Abstract:
Response time (RT) variability is a common finding in ADHD research. RT variability may reflect frontal cortex function and may be related to deficits in sustained attention. The existence of a sustained attention deficit in ADHD has been debated, largely because of inconsistent evidence of time-on-task effects. A fixed-sequence Sustained Attention to Response Task (SART) was given to 29 control, 39 unimpaired and 24 impaired-ADHD children (impairment defined by the number of commission errors). The response time data were analysed using the Fast Fourier Transform, to define the fast-frequency and slow-frequency contributions to overall response variability. The impaired-ADHD group progressively slowed in RT over the course of the 5.5 min task, as reflected in this group's greater slow-frequency variability. The fast-frequency trial-to-trial variability was also significantly greater, but did not differentially worsen over the course of the task. The higher error rates of the impaired-ADHD group did not become differentially greater over the length of the task. The progressive slowing in mean RT over the course of the task may relate to a deficit in arousal in the impaired-ADHD group. The consistently poor performance in fast-frequency variability and error rates may be due to difficulties in sustained attention that fluctuate on a trial-to-trial basis. (c) 2006 Elsevier Ltd. All rights reserved.
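A hedged sketch of the general analysis idea: splitting the variance of a reaction-time series into slow- and fast-frequency contributions with an FFT. The simulated RT series, series length and frequency cut-off are illustrative, not the SART parameters used in the study.

```python
# Decompose reaction-time variability into slow- and fast-frequency parts.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 256
t = np.arange(n_trials)
# Simulated RTs (ms): baseline + progressive slowing + slow oscillation + trial-to-trial noise.
rt = 450 + 0.4 * t + 30 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 40, n_trials)

spectrum = np.fft.rfft(rt - rt.mean())
power = np.abs(spectrum) ** 2
freqs = np.fft.rfftfreq(n_trials, d=1.0)   # cycles per trial

cutoff = 0.05                              # illustrative slow/fast boundary
slow_power = power[(freqs > 0) & (freqs <= cutoff)].sum()
fast_power = power[freqs > cutoff].sum()
print(f"slow-frequency share of variability: {slow_power / (slow_power + fast_power):.2f}")
```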
Abstract:
Difficulties in phonological processing have been proposed to be the core symptom of developmental dyslexia. Phoneme awareness tasks have been shown to both index and predict individual reading ability. In a previous experiment, we observed that dyslexic adults fail to display a P3a modulation for phonological deviants within an alliterated word stream when concentrating primarily on a lexical decision task [Fosker and Thierry, 2004, Neurosci. Lett. 357, 171-174]. Here we recorded the P3b oddball response elicited by initial phonemes within streams of alliterated words and pseudo-words when participants focussed directly on detecting the oddball phonemes. Despite significant verbal screening test differences between dyslexic adults and controls, the error rates, reaction times, and main components (P2, N2, P3a, and P3b) were indistinguishable across groups. The only difference between groups was found in the N1 range, where dyslexic participants failed to show the modulations induced by phonological pairings (/b/-/p/ versus /r/-/g/) in controls. In light of previous P3a differences, these results suggest an important role for attention allocation in the manifestation of phonological deficits in developmental dyslexia. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures, together with statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so, unlike the methods above, did not show improved performance under model selection. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
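A hedged toy version of the kind of comparison described above: with spatially autocorrelated errors and a spatially structured (but unrelated) predictor, an ordinary least squares t-test rejects far too often, while a generalized least squares fit using the (here, known) spatial covariance keeps the Type I error near the nominal level. The exponential covariance, sample size and simulation count are arbitrary choices, not the authors' settings.

```python
# Toy simulation: Type I error of OLS vs GLS regression on spatially correlated data.
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(3)
n, n_sims, alpha = 100, 500, 0.05
coords = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-d / 2.0)                  # exponential spatial covariance
L = np.linalg.cholesky(Sigma)
Sigma_inv = np.linalg.inv(Sigma)

def slope_p_value(X, y, W=None):
    # W is None for OLS; W = Sigma^{-1} for GLS with known covariance.
    W = np.eye(n) if W is None else W
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    beta = XtWX_inv @ X.T @ W @ y
    resid = y - X @ beta
    sigma2 = resid @ W @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * XtWX_inv[1, 1])
    return 2 * t_dist.sf(abs(beta[1] / se), df=n - X.shape[1])

ols_rej = gls_rej = 0
for _ in range(n_sims):
    x = L @ rng.normal(size=n)            # spatially structured, but unrelated, predictor
    y = L @ rng.normal(size=n)            # spatially correlated errors only (true slope = 0)
    X = np.column_stack([np.ones(n), x])
    ols_rej += slope_p_value(X, y) < alpha
    gls_rej += slope_p_value(X, y, Sigma_inv) < alpha

print("OLS Type I error:", ols_rej / n_sims, " GLS Type I error:", gls_rej / n_sims)
```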
Abstract:
Errors involving drug prescriptions are a key target for patient safety initiatives. Recent studies have focused on error rates across different grades of doctors in order to target interventions. However, many prescriptions are not instigated by the doctor who writes them. It is important to clarify how often this occurs in order to interpret these studies and create interventions. This study aimed to provisionally quantify and describe prescriptions where the identity of the decision maker and prescription writer differed.
Abstract:
Objectives: Study objectives were to investigate the prevalence and causes of prescribing errors amongst foundation doctors (i.e. junior doctors in their first (F1) or second (F2) year of post-graduate training), describe their knowledge and experience of prescribing errors, and explore their self-efficacy (i.e. confidence) in prescribing.
Method: A three-part mixed-methods design was used, comprising: prospective observational study; semi-structured interviews and cross-sectional survey. All doctors prescribing in eight purposively selected hospitals in Scotland participated. All foundation doctors throughout Scotland participated in the survey. The number of prescribing errors per patient, doctor, ward and hospital, perceived causes of errors and a measure of doctors’ self-efficacy were established.
Results: 4710 patient charts and 44,726 prescribed medicines were reviewed. There were 3364 errors, affecting 1700 (36.1%) charts (overall error rate: 7.5%; F1: 7.4%; F2: 8.6%; consultants: 6.3%). Higher error rates were associated with: teaching hospitals (p<0.001), surgical (p<0.001) or mixed wards (p = 0.008) rather than medical wards, higher patient turnover wards (p<0.001), a greater number of prescribed medicines (p<0.001) and the months December and June (p<0.001). One hundred errors were discussed in 40 interviews. Error causation was multi-factorial; work environment and team factors were particularly noted. Of 548 completed questionnaires (national response rate of 35.4%), 508 (92.7% of respondents) reported errors, most of which (328; 64.6%) did not reach the patient. Pressure from other staff, workload and interruptions were cited as the main causes of errors. Foundation year 2 doctors reported greater confidence than year 1 doctors in deciding the most appropriate medication regimen.
Conclusions: Prescribing errors are frequent and of complex causation. Foundation doctors made more errors than other doctors, but undertook the majority of prescribing, making them a key target for intervention. Contributing causes included work environment, team, task, individual and patient factors. Further work is needed to develop and assess interventions that address these.
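A quick arithmetic check of the headline figures in the results above, assuming the overall rate is errors per prescribed medicine and the 36.1% is affected charts per chart reviewed:

```python
# Hedged check of the reported rates (numbers taken from the results above).
errors, medicines, charts_affected, charts = 3364, 44726, 1700, 4710
print(f"{errors / medicines:.1%}")        # ~7.5% overall error rate per prescribed medicine
print(f"{charts_affected / charts:.1%}")  # ~36.1% of charts affected
```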
Abstract:
Hardware impairments in physical transceivers are known to have a deleterious effect on communication systems; however, very few contributions have investigated their impact on relaying. This paper quantifies the impact of transceiver impairments in a two-way amplify-and-forward configuration. More specifically, the effective signal-to-noise-and-distortion ratios at both transmitter nodes are obtained. These are used to deduce exact and asymptotic closed-form expressions for the outage probabilities (OPs), as well as tractable formulations for the symbol error rates (SERs). It is explicitly shown that non-zero lower bounds on the OP and SER exist in the high-power regime; this stands in contrast to the special case of ideal hardware, where the OP and SER go asymptotically to zero.
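As a hedged, single-link illustration of why such floors arise (the common aggregate-impairment model with impairment level kappa; this is not the paper's two-way relay derivation):

```latex
% Distortion noise grows with the signal power, so the effective SNDR saturates:
\[
  \gamma_{\mathrm{eff}}
  = \frac{P\,|h|^{2}}{\kappa^{2} P\,|h|^{2} + N_{0}}
  \;\longrightarrow\; \frac{1}{\kappa^{2}}
  \qquad (P \to \infty).
\]
% With the SNDR bounded above by 1/kappa^2, outage and symbol-error expressions
% evaluated at this ceiling can no longer be driven to zero by raising the power,
% which is the mechanism behind the non-zero floors noted in the abstract.
```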
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure correct operation of such tasks by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation of the overall system. The run-time system identifies the critical and less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can, if required, operate at voltages below what is needed for fully correct operation and thus consume less power, since such tasks, unlike the critical ones, do not always need to be exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
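A hedged sketch of the scheduling idea only: tasks carry a criticality tag (standing in for the "special directives"), and a toy run-time routine maps critical tasks to cores kept at nominal voltage while less-critical tasks go to cores scaled down for approximate, lower-power operation. All names, thresholds and voltages are illustrative, not the paper's design.

```python
# Toy criticality-aware placement with a sense-and-adapt style mode selection.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool

@dataclass
class Core:
    cid: int
    reliable: bool            # True: nominal V/f; False: scaled down, may err
    voltage: float

def configure_cores(observed_error_rate: float) -> list[Core]:
    # Stand-in for the sense-and-adapt step: keep more cores at nominal
    # voltage when observed error rates are high.
    n_reliable = 2 if observed_error_rate < 0.01 else 3
    return [Core(i, i < n_reliable, 1.0 if i < n_reliable else 0.7) for i in range(4)]

def schedule(tasks: list[Task], cores: list[Core]) -> dict[str, int]:
    reliable = [c for c in cores if c.reliable]
    approx = [c for c in cores if not c.reliable] or reliable
    placement, i, j = {}, 0, 0
    for task in tasks:
        if task.critical:
            placement[task.name] = reliable[i % len(reliable)].cid
            i += 1
        else:
            placement[task.name] = approx[j % len(approx)].cid
            j += 1
    return placement

tasks = [Task("accumulate_output", True), Task("texture_filter", False),
         Task("control_loop", True), Task("preview_render", False)]
print(schedule(tasks, configure_cores(observed_error_rate=0.002)))
```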
Abstract:
The end of Dennard scaling has pushed power consumption into a first-order concern for current systems, on par with performance. As a result, near-threshold voltage computing (NTVC) has been proposed as a potential means to tackle the limited cooling capacity of CMOS technology. Hardware operating at near-threshold voltage consumes significantly less power, at the cost of lower frequency, and thus reduced performance, as well as increased error rates. In this paper, we investigate whether a low-power system-on-chip, consisting of ARM's asymmetric big.LITTLE technology, can be an alternative to conventional high-performance multicore processors in terms of power/energy in an unreliable scenario. For our study, we use the Conjugate Gradient solver, an algorithm representative of the computations performed by a large range of scientific and engineering codes.
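For reference, a plain textbook Conjugate Gradient solver for a symmetric positive-definite system, the kernel the study uses as its representative workload; this is a generic sketch, not the big.LITTLE-specific or fault-aware code evaluated in the paper.

```python
# Textbook Conjugate Gradient for Ax = b with A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test problem.
rng = np.random.default_rng(4)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```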
Outperformance in exchange-traded fund pricing deviations: Generalized control of data snooping bias
Abstract:
An investigation into exchange-traded fund (ETF) outperformance during the period 2008-2012 is undertaken utilizing a data set of 288 U.S.-traded securities. ETFs are tested for net asset value (NAV) premium, underlying index and market benchmark outperformance, with Sharpe, Treynor, and Sortino ratios employed as risk-adjusted performance measures. A key contribution is the application of an innovative generalized stepdown procedure in controlling for data snooping bias. We find that a large proportion of optimized replication and debt asset class ETFs display risk-adjusted premiums, with energy and precious metals focused funds outperforming the S&P 500 market benchmark.
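For reference, one common definition of each of the three risk-adjusted measures named above, computed from a series of periodic returns; the return series, benchmark and zero risk-free rate below are made up for illustration, and other definitional variants exist.

```python
# Sharpe, Treynor and Sortino ratios from periodic return series.
import numpy as np

def sharpe(returns, rf=0.0):
    excess = returns - rf
    return excess.mean() / excess.std(ddof=1)

def treynor(returns, benchmark, rf=0.0):
    beta = np.cov(returns, benchmark, ddof=1)[0, 1] / np.var(benchmark, ddof=1)
    return (returns.mean() - rf) / beta

def sortino(returns, target=0.0):
    excess = returns - target
    downside = np.sqrt(np.mean(np.minimum(excess, 0.0) ** 2))
    return excess.mean() / downside

rng = np.random.default_rng(5)
bench = rng.normal(0.004, 0.04, 60)              # e.g. 60 monthly benchmark returns
etf = 0.9 * bench + rng.normal(0.001, 0.01, 60)  # a correlated, hypothetical ETF series
print(sharpe(etf), treynor(etf, bench), sortino(etf))
```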
Abstract:
A digital directional modulation (DM) transmitter structure is proposed in this paper from a practical implementation point of view. This digital DM architecture is built from several off-the-shelf physical-layer wireless experiment platform hardware boards. Compared with previous analogue DM transmitter architectures, the digital approach offers more precise and faster control over the updates of the array excitations. More importantly, it is an ideal physical arrangement for implementing the most universal DM synthesis algorithm, i.e., the orthogonal vector approach. The practical issues in digital DM system calibration are described and solved. The bit error rates (BERs) are measured via real-time data transmissions to illustrate the DM advantages, in terms of secrecy performance, over conventional non-DM beam-steering transmitters.
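A hedged sketch of the metric only: how a bit error rate is measured by comparing transmitted and detected bit streams, here for Gray-mapped QPSK over a plain AWGN channel. The directional-modulation hardware and the angle-dependent behaviour measured in the paper are not modelled; the Eb/N0 value is arbitrary.

```python
# Measured BER for QPSK over AWGN by direct bit comparison.
import numpy as np

rng = np.random.default_rng(6)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)

# Gray-mapped QPSK: bit pairs -> unit-energy complex symbols.
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

ebn0 = 10 ** (6.0 / 10)                      # Eb/N0 of 6 dB
sigma = np.sqrt(1.0 / (4 * ebn0))            # per-dimension noise std (Es = 1, 2 bits/symbol)
noise = sigma * (rng.normal(size=symbols.size) + 1j * rng.normal(size=symbols.size))
rx = symbols + noise

detected = np.empty(n_bits, dtype=int)
detected[0::2] = (rx.real < 0).astype(int)
detected[1::2] = (rx.imag < 0).astype(int)
print("measured BER:", np.mean(detected != bits))   # ~2.4e-3 at Eb/N0 = 6 dB
```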