960 results for ERROR rates


Relevance:

70.00%

Publisher:

Abstract:

Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) computer-generated feedback; or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention:
- with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs;
- with a computer-recorded diagnosis of asthma being prescribed beta-blockers;
- aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months.
Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective, or conversely prove ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm, compared with an 11% reduction in the simple feedback arm. Discussion: At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
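
A minimal sketch of the kind of cluster-trial power calculation described in the sample size paragraph, using statsmodels and a design-effect adjustment; the baseline error rate, practice (cluster) size and intracluster correlation below are illustrative assumptions, not figures from the protocol:

```python
# Sketch only: sample size for comparing two proportions, inflated by a design
# effect for cluster randomisation.  baseline_rate, cluster_size and icc are
# assumed values (not reported in the abstract).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05                          # assumed proportion of at-risk patients
p_feedback = baseline_rate * (1 - 0.11)       # 11% reduction in the simple-feedback arm
p_pharmacist = baseline_rate * (1 - 0.50)     # 50% reduction in the pharmacist-led arm
cluster_size = 60                             # assumed eligible patients per practice
icc = 0.05                                    # assumed intracluster correlation

# Patients per arm if patients were individually randomised
h = abs(proportion_effectsize(p_pharmacist, p_feedback))
n_individual = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                            power=0.80, alternative='two-sided')

# Inflate by the design effect and convert to practices (clusters) per arm
design_effect = 1 + (cluster_size - 1) * icc
practices_per_arm = n_individual * design_effect / cluster_size
print(f"approx. practices needed per arm: {practices_per_arm:.1f}")
```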

Relevance:

70.00%

Publisher:

Abstract:

In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measure used, would produce a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real data applications to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend the mixed-effects model analysis instead.
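
A minimal sketch of the conventional by-participant analysis that this abstract cautions against: a Goodman-Kruskal gamma is computed per participant and the gammas are submitted to a one-sample t-test. The simulated data (numbers of participants and items, effect sizes) are assumptions; the recommended alternative would be a logistic mixed-effects model with crossed random effects for participants and items:

```python
# Sketch of the by-participant pipeline on simulated judgment/recall data.
import numpy as np
from scipy import stats

def gk_gamma(x, y):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant)."""
    conc = disc = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            s = (x[i] - x[j]) * (y[i] - y[j])
            conc += s > 0
            disc += s < 0
    return (conc - disc) / (conc + disc) if (conc + disc) else np.nan

rng = np.random.default_rng(0)
n_subj, n_items = 30, 40
item_memorability = rng.normal(0, 1, n_items)        # random item effect

gammas = []
for _ in range(n_subj):
    judgments = item_memorability + rng.normal(0, 1, n_items)   # JOLs partly track items
    recall = (rng.random(n_items) < 1 / (1 + np.exp(-item_memorability))).astype(int)
    gammas.append(gk_gamma(judgments, recall))

gammas = [g for g in gammas if not np.isnan(g)]
t_stat, p_val = stats.ttest_1samp(gammas, 0.0)       # group-level test on per-person gammas
print(f"mean gamma = {np.mean(gammas):.3f}, t = {t_stat:.2f}, p = {p_val:.4f}")
# The mixed-effects alternative would instead model recall on judgments with
# crossed random effects for participants and items.
```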

Relevance:

70.00%

Publisher:

Abstract:

The purpose of this study is to investigate the effects of predictor variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data and the general location model for mixed dichotomous and continuous data. Subsequent to the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data and patterns of missing data. The distributional properties of the average mean, variance and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which in part is due to the sparseness of the data. The correlation structure for the predictor variables is not well retained on multiply-imputed data from small samples with more than 50% missing data with this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully-observed variable included with variables subject to missingness in the multiple imputation process and subsequent statistical analysis provided liberal (i.e., larger than the nominal value) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
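
A minimal sketch of the impute-then-pool workflow the study evaluates, using scikit-learn's IterativeImputer as a stand-in for the multivariate normal / multinomial / general location imputation models and pooling logistic-regression coefficients with Rubin's rules; sample size, missingness rate and predictor structure below are illustrative assumptions:

```python
# Sketch only: multiply impute missing predictors, refit the logistic model on
# each completed dataset, and pool coefficients with Rubin's rules.
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n, m_imputations = 100, 20

# Simulated predictors (one continuous, one dichotomous) and a null outcome
x1 = rng.normal(size=n)
x2 = (rng.random(n) < 0.5).astype(float)
y = (rng.random(n) < 0.5).astype(float)           # beta1 = beta2 = 0 (Type I error setting)

X = np.column_stack([x1, x2])
X_miss = X.copy()
X_miss[rng.random((n, 2)) < 0.3] = np.nan         # ~30% missing completely at random

estimates, variances = [], []
for k in range(m_imputations):
    X_imp = IterativeImputer(sample_posterior=True, random_state=k).fit_transform(X_miss)
    fit = sm.Logit(y, sm.add_constant(X_imp)).fit(disp=0)
    estimates.append(fit.params)
    variances.append(np.diag(fit.cov_params()))

est, var = np.array(estimates), np.array(variances)
q_bar = est.mean(axis=0)                          # pooled coefficients
w = var.mean(axis=0)                              # within-imputation variance
b = est.var(axis=0, ddof=1)                       # between-imputation variance
t_total = w + (1 + 1 / m_imputations) * b         # total variance (Rubin's rules)
print("pooled betas:", np.round(q_bar, 3), "pooled SEs:", np.round(np.sqrt(t_total), 3))
```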

Relevance:

70.00%

Publisher:

Abstract:

The Escherichia coli dnaQ gene encodes the proofreading 3' exonuclease (epsilon subunit) of DNA polymerase III holoenzyme and is a critical determinant of chromosomal replication fidelity. We constructed by site-specific mutagenesis a mutant, dnaQ926, by changing two conserved amino acid residues (Asp-12→Ala and Glu-14→Ala) in the Exo I motif, which, by analogy to other proofreading exonucleases, is essential for the catalytic activity. When residing on a plasmid, dnaQ926 confers a strong, dominant mutator phenotype, suggesting that the protein, although deficient in exonuclease activity, still binds to the polymerase subunit (alpha subunit or dnaE gene product). When dnaQ926 was transferred to the chromosome, replacing the wild-type gene, the cells became inviable. However, viable dnaQ926 strains could be obtained if they contained one of the dnaE alleles previously characterized in our laboratory as antimutator alleles or if they carried a multicopy plasmid containing the E. coli mutL+ gene. These results suggest that loss of proofreading exonuclease activity in dnaQ926 is lethal due to excessive error rates (error catastrophe). Error catastrophe results from both the loss of proofreading and the subsequent saturation of DNA mismatch repair. The probability of lethality by excessive mutation is supported by calculations estimating the number of inactivating mutations in essential genes per chromosome replication.
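
A back-of-envelope sketch of the kind of "error catastrophe" calculation alluded to in the final sentence: estimating the expected number of inactivating mutations in essential genes per chromosome replication. All rates below are illustrative assumptions, not the values used in the paper:

```python
# Sketch only: expected lethal hits per replication under assumed rates.
genome_bp = 4.6e6                 # E. coli chromosome size (bp)
error_rate_per_bp = 1e-5          # assumed net error rate with proofreading lost
                                  # and mismatch repair saturated
fraction_coding = 0.88            # assumed coding fraction of the genome
fraction_essential = 0.1          # assumed fraction of genes that are essential
p_inactivating = 0.3              # assumed chance a mutation inactivates its gene

mutations_per_replication = genome_bp * error_rate_per_bp
lethal_hits = (mutations_per_replication * fraction_coding
               * fraction_essential * p_inactivating)
print(f"mutations per replication:   {mutations_per_replication:.0f}")
print(f"lethal hits per replication: {lethal_hits:.1f}")
# On the order of one inactivating hit in an essential gene per replication
# would be consistent with inviability of the proofreading-deficient strain.
```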

Relevance:

70.00%

Publisher:

Abstract:

Medication errors are associated with significant morbidity, and people with mental health problems may be particularly susceptible to medication errors due to various factors. Primary care has a key role in improving medication safety in this vulnerable population. The complexity of services, involving primary and secondary care and social services, and potential training issues may increase error rates, with physical medicines representing a particular risk. Service users may be cognitively impaired and fail to identify an error, placing additional responsibilities on clinicians. The potential role of carers in error prevention and medication safety requires further elaboration. A potential lack of trust between service users and clinicians may impair honest communication about medication issues, leading to errors. There is a need for detailed research within this field.

Relevance:

70.00%

Publisher:

Abstract:

We investigate the use of different direct detection modulation formats in a wavelength switched optical network. We find the minimum time it takes a tunable sampled-grating distributed Bragg reflector laser to recover after switching from one wavelength channel to another for different modulation formats. The recovery time is investigated using a field programmable gate array which operates as a time-resolved bit error rate detector. The detector offers 93 ps resolution operating at 10.7 Gb/s and allows all of the received data to contribute to the measurement, allowing low bit error rates to be measured at high speed. The recovery times for 10.7 Gb/s non-return-to-zero on–off keyed modulation, 10.7 Gb/s differential phase-shift keyed modulation and 21.4 Gb/s differential quadrature phase-shift keyed modulation can be as low as 4 ns, 7 ns and 40 ns, respectively. The time-resolved phase noise associated with laser settling is simultaneously measured for 21.4 Gb/s differential quadrature phase-shift keyed data and shows that phase noise, coupled with frequency error, is the primary limitation on transmitting immediately after a laser switching event.
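
A minimal sketch of the time-resolved BER idea: error counts are accumulated per bit slot (≈93 ps at 10.7 Gb/s) over many repeated switching events, and the recovery time is read off as the delay at which the per-slot BER falls below a target. The settling time and error-probability profile below are simulated assumptions, not the FPGA detector's implementation:

```python
# Sketch only: per-slot BER accumulated over repeated switch events.
import numpy as np

rng = np.random.default_rng(2)
bit_period = 1 / 10.7e9                   # ~93 ps, one bit slot at 10.7 Gb/s
n_slots = 1000                            # bit slots measured after each switch
n_events = 20000                          # repeated switching events accumulated

t = np.arange(n_slots) * bit_period       # delay of each slot after the switch
settle = 5e-9                             # assumed laser settling time
p_err = np.where(t < settle, 0.2, 1e-4)   # assumed error probability profile

# Accumulate error counts per slot over many switch events
err_counts = rng.binomial(n_events, p_err)
ber = err_counts / n_events

# Recovery time: first slot at which the measured BER drops below a target
recovery_idx = np.argmax(ber < 1e-3)
print(f"recovery time ~ {t[recovery_idx] * 1e9:.2f} ns")
```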


Relevance:

60.00%

Publisher:

Abstract:

Purpose: To determine (a) the effect of different sunglass tint colorations on traffic signal detection and recognition for color-normal and color-deficient observers, and (b) the adequacy of coloration requirements in current sunglass standards. Methods: Twenty color-normal and 49 color-deficient males performed a tracking task while wearing sunglasses of different colorations (clear, gray, green, yellow-green, yellow-brown, red-brown). At random intervals, simulated traffic light signals were presented against a white background at 5° to the right or left, and observers were instructed to identify the signal color (red/yellow/green) by pressing a response button as quickly as possible; response times and response errors were recorded. Results: Signal color and sunglass tint had significant effects on response times and error rates (p < 0.05), with significant between-color-group differences and interaction effects. Response times for color-deficient observers were considerably slower than those of color normals for both red and yellow signals with all sunglass tints, but for green signals they were only noticeably slower with the green and yellow-green lenses. For most of the color-deficient groups, there were recognition errors for yellow signals combined with the yellow-green and green tints. In addition, deuteranopes had problems with red signals combined with the red-brown and yellow-brown tints, and protanopes had problems with green signals combined with the green tint and with red signals combined with the red-brown tint. Conclusions: Many sunglass tints currently permitted for drivers and riders cause a measurable decrement in the ability of color-deficient observers to detect and recognize traffic signals. In general, combinations of signals and sunglasses of similar colors are of particular concern. This is prima facie evidence of a risk in the use of these tints for driving and cautions against relaxing the coloration limits for sunglasses beyond those represented in this study.

Relevance:

60.00%

Publisher:

Abstract:

Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve performance and to allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for the detection error rates are experimentally evaluated by applying the proposed architecture to text-dependent speaker verification using HMM-based, digit-dependent speaker models. The tuning of the parameters n (the number of classifiers) and m (the number of attempts/samples) is investigated, and the resulting detection error trade-off performance is evaluated on individual digits. Results show that performance improvement can be achieved even for weak classifiers (FRR 19.6%, FAR 16.7%). The architectures investigated apply to speaker verification from spoken digit strings, such as credit card numbers, in telephone, VoIP or internet-based applications.
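
A minimal sketch of fused error rates for a sequential multi-instance / multi-sample scheme under statistically independent classifier decisions. The decision rule assumed here (all n instances must accept within an attempt, with up to m attempts allowed) is one common choice and may not be the exact rule analysed in the paper:

```python
# Sketch only: fused FRR/FAR under an AND-over-instances, OR-over-attempts rule
# with independent decisions.
def fused_error_rates(frr, far, n_instances, m_attempts):
    # One attempt: AND over the n instance classifiers
    frr_attempt = 1 - (1 - frr) ** n_instances     # rejected if any instance rejects
    far_attempt = far ** n_instances               # accepted only if all instances accept
    # Up to m attempts: OR over attempts
    frr_fused = frr_attempt ** m_attempts          # rejected only if every attempt fails
    far_fused = 1 - (1 - far_attempt) ** m_attempts
    return frr_fused, far_fused

# Example with the weak single-classifier rates quoted in the abstract
frr_fused, far_fused = fused_error_rates(frr=0.196, far=0.167, n_instances=2, m_attempts=3)
print(f"fused FRR: {frr_fused:.4f}, fused FAR: {far_fused:.4f}")
```

Increasing n drives the false accepts down while increasing m drives the false rejects down, which is the trade-off the architecture is designed to control.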

Relevance:

60.00%

Publisher:

Abstract:

Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology, including suspect identification, tracking terrorists and detecting a person's presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications. The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: modelling mismatch and modelling uncertainty.

To address the performance degradation incurred through mismatched conditions, it was proposed to model this mismatch directly. Feature mapping was evaluated for combating handset mismatch and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels for the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases.

Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. Modelling the uncertainty in the verification score itself met with significant success: estimating a confidence interval for the "true" verification score enabled an order-of-magnitude reduction in the average quantity of speech required to make a confident verification decision based on a threshold. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
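
A minimal sketch of the confidence-interval idea in the final theme: keep accumulating per-frame verification scores and stop as soon as a confidence interval on the mean score lies entirely above or below the decision threshold. Treating frame scores as i.i.d. normal is a simplifying assumption, and the thesis' actual estimator may differ:

```python
# Sketch only: early verification decision once the CI clears the threshold.
import numpy as np
from scipy import stats

def early_decision(score_stream, threshold, conf=0.95, min_frames=20):
    scores = []
    for s in score_stream:
        scores.append(s)
        if len(scores) < min_frames:
            continue
        mean = np.mean(scores)
        sem = stats.sem(scores)
        lo, hi = stats.t.interval(conf, len(scores) - 1, loc=mean, scale=sem)
        if lo > threshold:
            return "accept", len(scores)
        if hi < threshold:
            return "reject", len(scores)
    # Ran out of speech before reaching a confident decision
    return ("accept" if np.mean(scores) > threshold else "reject"), len(scores)

rng = np.random.default_rng(3)
frames = rng.normal(loc=0.4, scale=1.0, size=2000)   # simulated target-speaker scores
decision, used = early_decision(frames, threshold=0.0)
print(decision, "after", used, "frames")
```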

Relevance:

60.00%

Publisher:

Abstract:

The rapid growth of mobile telephone use, satellite services, and now the wireless Internet and WLANs is generating tremendous changes in telecommunication and networking. As indoor wireless communications become more prevalent, modeling indoor radio wave propagation in populated environments is a topic of significant interest.

Wireless MIMO communication exploits phenomena such as multipath propagation to increase data throughput and range, or reduce bit error rates, rather than attempting to eliminate the effects of multipath propagation as traditional SISO communication systems seek to do. The MIMO approach can yield significant gains for both link and network capacities, with no additional transmitting power or bandwidth consumption when compared to conventional single-array diversity methods. When MIMO and OFDM systems are combined and deployed in a suitably rich scattering environment such as indoors, a significant capacity gain can be observed due to the assurance of multipath propagation.

Channel variations can occur as a result of movement of personnel, industrial machinery, vehicles and other equipment moving within the indoor environment. The time-varying effects on the propagation channel in populated indoor environments depend on the different pedestrian traffic conditions and the particular type of environment considered. A systematic measurement campaign to study pedestrian movement effects in indoor MIMO-OFDM channels has not yet been fully undertaken. Measuring channel variations caused by the relative positioning of pedestrians is essential in the study of indoor MIMO-OFDM broadband wireless networks. Theoretically, due to high multipath scattering, an increase in MIMO-OFDM channel capacity is expected when pedestrians are present. However, measurements indicate that some reduction in channel capacity can be observed as the number of pedestrians approaches 10, due to a reduction in multipath conditions as more human bodies absorb the wireless signals.

This dissertation presents a systematic characterization of the effects of pedestrians on indoor MIMO-OFDM channels. Measurement results, using the MIMO-OFDM channel sounder developed at the CSIRO ICT Centre, have been validated by a customized Geometric Optics-based ray tracing simulation. Based on measured and simulated MIMO-OFDM channel capacity and MIMO-OFDM capacity dynamic range, an improved deterministic model for MIMO-OFDM channels in indoor populated environments is presented. The model can be used for the design and analysis of future WLANs to be deployed in indoor environments. The results obtained show that, for both the Fixed SNR and Fixed Tx deterministic conditions, the channel capacity dynamic range rose with the number of pedestrians as well as with the number of antenna combinations. In random scenarios with 10 pedestrians, an increment in channel capacity of up to 0.89 bits/sec/Hz for Fixed SNR and up to 1.52 bits/sec/Hz for Fixed Tx has been recorded compared to the one-pedestrian scenario. In addition, a maximum increase in average channel capacity of 49% has been measured when 4 antenna elements are used rather than 2. The highest measured average capacity, 11.75 bits/sec/Hz, corresponds to the 4x4 array with 10 pedestrians moving randomly. Moreover, the spread between the highest and lowest values of the dynamic range is larger for Fixed Tx (predicted 5.5 bits/sec/Hz and measured 1.5 bits/sec/Hz) than for the Fixed SNR criterion (predicted 1.5 bits/sec/Hz and measured 0.7 bits/sec/Hz). This has been confirmed by both measurements and simulations for 1 to 5, 7 and 10 pedestrians.
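
A minimal sketch of the MIMO channel-capacity computation behind the reported bits/sec/Hz figures, C = log2 det(I + (SNR/Nt) H H^H) with equal power allocation at fixed SNR, averaged over OFDM subcarriers. The random channel matrices below are a stand-in for the measured or ray-traced channels:

```python
# Sketch only: per-subcarrier MIMO capacity with equal power allocation.
import numpy as np

def mimo_capacity(H, snr_linear):
    n_rx, n_tx = H.shape
    hh = H @ H.conj().T
    return np.real(np.log2(np.linalg.det(np.eye(n_rx) + (snr_linear / n_tx) * hh)))

rng = np.random.default_rng(4)
snr = 10 ** (10 / 10)                     # 10 dB receive SNR (assumed)

# Average capacity over OFDM subcarriers for a 4x4 Rayleigh-like channel
n_subcarriers = 64
caps = []
for _ in range(n_subcarriers):
    H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
    caps.append(mimo_capacity(H, snr))
print(f"mean capacity: {np.mean(caps):.2f} bits/sec/Hz")
```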

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates the use of lip information, in conjunction with speech information, for robust speaker verification in the presence of background noise. It has been previously shown, in our own work and in the work of others, that features extracted from a speaker's moving lips hold speaker dependencies which are complementary to speech features. We demonstrate that the fusion of lip and speech information allows for a highly robust speaker verification system which outperforms either sub-system alone. We present a new technique for determining the weighting to be applied to each modality so as to optimize the performance of the fused system. Given a correct weighting, lip information is shown to be highly effective for reducing the false acceptance and false rejection error rates in the presence of background noise.
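
A minimal sketch of weighted score-level fusion of speech and lip scores, selecting the modality weight that minimises the equal error rate on held-out development scores. The synthetic score distributions are assumptions; the weighting technique proposed in the paper may differ:

```python
# Sketch only: pick the fusion weight that minimises EER on development scores.
import numpy as np

def eer(genuine, impostor, n_thresholds=400):
    """Approximate equal error rate by scanning decision thresholds."""
    ts = np.linspace(min(genuine.min(), impostor.min()),
                     max(genuine.max(), impostor.max()), n_thresholds)
    far = np.array([(impostor >= t).mean() for t in ts])
    frr = np.array([(genuine < t).mean() for t in ts])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(5)
n = 2000
# Simulated per-modality scores (speech degraded by background noise)
speech_gen, speech_imp = rng.normal(0.8, 1.0, n), rng.normal(0.0, 1.0, n)
lip_gen, lip_imp = rng.normal(1.2, 1.0, n), rng.normal(0.0, 1.0, n)

best_w, best_eer = None, 1.0
for w in np.linspace(0, 1, 21):
    fused_gen = w * speech_gen + (1 - w) * lip_gen
    fused_imp = w * speech_imp + (1 - w) * lip_imp
    e = eer(fused_gen, fused_imp)
    if e < best_eer:
        best_w, best_eer = w, e
print(f"best speech weight: {best_w:.2f}, fused EER: {best_eer:.3f}")
```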

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates the use of lip information, in conjunction with speech information, for robust speaker verification in the presence of background noise. We have previously shown (Int. Conf. on Acoustics, Speech and Signal Proc., vol. 6, pp. 3693-3696, May 1998) that features extracted from a speaker's moving lips hold speaker dependencies which are complementary to speech features. We demonstrate that the fusion of lip and speech information allows for a highly robust speaker verification system which outperforms either subsystem individually. We present a new technique for determining the weighting to be applied to each modality so as to optimize the performance of the fused system. Given a correct weighting, lip information is shown to be highly effective for reducing the false acceptance and false rejection error rates in the presence of background noise.

Relevance:

60.00%

Publisher:

Abstract:

Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on the sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters N (the number of classifiers) and M (the number of attempts/samples), and facilitates the determination of error bounds on false rejects and false accepts for each specific user. The error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved in the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings such as credit card numbers in telephone or voice over internet protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.

Relevance:

60.00%

Publisher:

Abstract:

Fusion techniques have received considerable attention for achieving performance improvement with biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of subsequent attempts, i.e., random or adaptive. Expressions for the error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based, digit-dependent speaker models. Analysis incorporating correlation modeling demonstrates that the use of adaptive samples improves overall fusion performance compared with randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be reduced by a further 6% when adaptive samples are used. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.