846 results for Bit error rate
Abstract:
The training and ongoing education of medical practitioners has undergone major changes in an incremental fashion over the past 15 years. These changes have been driven by patient safety, educational, economic and legislative/regulatory factors. In the near future, training in procedural skills will undergo a paradigm shift to proficiency-based progression, with associated requirements for competence-based programmes; valid, reliable assessment tools; and simulation technology. Before training begins, the learning outcomes require clear definition; any form of assessment applied should include measurement of these outcomes. Currently, training in a procedural skill often takes place on an ad hoc basis. The number of attempts necessary to attain a defined degree of proficiency varies from procedure to procedure. Convincing evidence exists that simulation training helps trainees to acquire skills more efficiently than relying on opportunities in their clinical practice. Simulation provides a safe, stress-free environment in which trainees can achieve skill acquisition, generalization and transfer via deliberate practice. The work described in this thesis contributes to a greater understanding of how medical procedures can be performed more safely and effectively through education. Feedback based on knowledge of performance, provided to novices in a standardized setting on a bench model, was associated with an increase in the speed of skill acquisition and a decrease in error rate during initial learning. The timing of feedback was also associated with effective learning of the skill. A marked attrition of skills (independent of the type of feedback provided) was demonstrable 24 hours after they had first been learned. Using the principles of feedback described above, we then studied the effect of an intense training programme (i.e. the format of present training programmes/courses: an intense training day for one or more procedures) on novices with varied years of experience in anaesthesia. There was a marked attrition of skill at 24 hours, with a significant correlation with increasing years of experience; there also appeared to be an inverse relationship between years of experience in anaesthesia and performance. The greater the number of years of practice experience, the longer it took a learner to acquire a new skill. The findings of the studies described in this thesis may have important implications for trainers, trainees and training bodies in the design and implementation of training courses and the formats of delivery of changing curricula. Both curricula and training modalities will need to take account of the characteristics of individual learners and the dynamic nature of procedural healthcare.
Abstract:
BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
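The per-10,000-fields metric used above is straightforward to compute; a minimal sketch (the audit counts below are illustrative, not taken from the study):

```python
# Errors per 10,000 fields, the normalization used to benchmark data quality
# in clinical-trial audits. The counts below are made up for illustration.
def errors_per_10k(error_count: int, fields_inspected: int) -> float:
    """Return the error rate normalized to 10,000 data fields."""
    return error_count / fields_inspected * 10_000

# e.g. 143 discrepancies found while auditing 100,000 database fields
rate = errors_per_10k(143, 100_000)  # ≈ 14.3, matching the average rate above
```

Normalizing to a fixed field count lets source-to-database and CRF-to-database audits of different sizes be compared directly.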
Abstract:
Wireless enabled portable devices must operate with the highest possible energy efficiency while still maintaining a minimum level and quality of service to meet the user's expectations. The authors analyse the performance of a new pointer-based medium access control protocol that was designed to significantly improve the energy efficiency of user terminals in wireless local area networks. The new protocol, pointer controlled slot allocation and resynchronisation protocol (PCSAR), is based on the existing IEEE 802.11 point coordination function (PCF) standard. PCSAR reduces energy consumption by removing the need for power saving stations to remain awake and listen to the channel. Using OPNET, simulations were performed under symmetric channel loading conditions to compare the performance of PCSAR with the infrastructure power saving mode of IEEE 802.11, PCF-PS. The simulation results demonstrate a significant improvement in energy efficiency without significant reduction in performance when using PCSAR. For a wireless network consisting of an access point and 8 stations in power saving mode, the energy saving was up to 31% while using PCSAR instead of PCF-PS, depending upon frame error rate and load. The results also show that PCSAR offers significantly reduced uplink access delay over PCF-PS while modestly improving uplink throughput.
Abstract:
Objectives: It is increasingly important to develop predictors of treatment response and outcome in schizophrenia. Neuropsychological impairments, particularly those reflecting frontal lobe function, appear to predict poor outcome. Eye movement abnormalities probably also reflect frontal lobe deficits. We wished to see if these two aspects of schizophrenia were correlated and whether they could distinguish a treatment resistant from a treatment responsive group. Methods: Ten treatment resistant schizophrenic patients were compared with ten treatment responsive patients on three eye movement paradigms (reflexive saccades, antisaccades and smooth pursuit), clinical psychopathology (BPRS, SANS and CGI) and a neuropsychological test battery designed to detect frontal lobe dysfunction. Ten age-matched controls also carried out the eye movement tasks. Results: Both treatment responsive (p = 0.038) and treatment resistant (p = 0.007) patients differed significantly from controls on the antisaccade task. The treatment resistant group had a higher error rate than the treatment responsive group, but the difference was not statistically significant. Similar poor neuropsychological test performance was found in both groups. Conclusions: To demonstrate the biological differences characteristic of treatment resistance, larger sample sizes and wider differences in outcome between the two groups are necessary.
Abstract:
This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.
Abstract:
In this paper, we present a new approach to visual speech recognition which improves contextual modelling by combining Inter-Frame Dependent and Hidden Markov Models. This approach captures contextual information in visual speech that may be lost using a Hidden Markov Model alone. We apply contextual modelling to a large speaker-independent isolated digit recognition task, and compare our approach to two commonly adopted feature-based techniques for incorporating speech dynamics. Results are presented from baseline feature-based systems and the combined modelling technique. We illustrate that both of these techniques achieve similar levels of performance when used independently. However, significant improvements in performance can be achieved through a combination of the two. In particular, we report an improvement in excess of 17% relative Word Error Rate in comparison to our best baseline system.
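Word Error Rate, the metric quoted above, is the word-level edit distance between a reference transcript and the recognizer's hypothesis, normalized by the reference length. A minimal sketch with made-up sentences (not the paper's digit-recognition data):

```python
# Word Error Rate: (substitutions + deletions + insertions) / reference length,
# computed via dynamic-programming edit distance over word sequences.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# one substitution (two→too) + one deletion (four) over 4 reference words → 0.5
wer = word_error_rate("one two three four", "one too three")
```

A "17% relative" improvement means the new WER is the baseline WER scaled by 0.83, not a 17-point absolute drop.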
Abstract:
In this paper, a novel motion-tracking scheme using scale-invariant features is proposed for automatic cell motility analysis in gray-scale microscopic videos, particularly for live-cell tracking in low-contrast differential interference contrast (DIC) microscopy. In the proposed approach, scale-invariant feature transform (SIFT) points around live cells in the microscopic image are detected, and a structure locality preservation (SLP) scheme using Laplacian Eigenmap is proposed to track the SIFT feature points along successive frames of low-contrast DIC videos. Experiments on low-contrast DIC microscopic videos of various live-cell lines show that, in comparison with principal component analysis (PCA) based SIFT tracking, the proposed Laplacian-SIFT can significantly reduce the error rate of SIFT feature tracking. With this enhancement, further experimental results demonstrate that the proposed scheme is a robust and accurate approach to tackling the challenge of live-cell tracking in DIC microscopy.
Abstract:
This paper introduces a new technique for palmprint recognition based on Fisher Linear Discriminant Analysis (FLDA) and a Gabor filter bank. This method involves convolving a palmprint image with a bank of Gabor filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, FLDA is applied for dimensionality reduction and class separability. Since the palmprint features are derived from the principal lines, wrinkles and texture along the palm area, the appropriate palm region must be selected carefully for the feature extraction process in order to enhance recognition accuracy. To address this problem, an improved region of interest (ROI) extraction algorithm is introduced. This algorithm allows for an efficient extraction of the whole palm area by ignoring all the undesirable parts, such as the fingers and background. Experiments have shown that the proposed method yields attractive performance, as evidenced by an Equal Error Rate (EER) of 0.03%.
Abstract:
In this paper, we present a novel approach to person verification by fusing face and lip features. Specifically, the face is modeled by the discriminative common vector and the discrete wavelet transform. Our lip features are simple geometric features based on a lip contour, which can be interpreted as multiple spatial widths and heights from a center of mass. In order to combine these features, we consider two simple fusion strategies: data fusion before training and score fusion after training, working with two different face databases. Fusing them together boosts the performance to achieve an equal error rate as low as 0.4% and 0.28%, respectively, confirming that our approach of fusing lips and face is effective and promising.
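The equal error rate reported in the two preceding abstracts is the threshold operating point at which the false-accept and false-reject rates coincide. A minimal sketch of how an EER is estimated from match scores, using made-up genuine/impostor score lists rather than the papers' data:

```python
# Equal Error Rate: sweep a decision threshold over the match scores and find
# the point where the false-accept rate (FAR) and false-reject rate (FRR) meet.
# Scores >= threshold are accepted. The score lists below are illustrative.
def equal_error_rate(genuine, impostor):
    best = (2.0, None)  # (|FAR - FRR|, EER estimate)
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuines rejected
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine  = [0.9, 0.85, 0.8, 0.75, 0.7, 0.3]   # scores for matching pairs
impostor = [0.6, 0.4, 0.35, 0.2, 0.1, 0.05]   # scores for non-matching pairs
eer = equal_error_rate(genuine, impostor)      # ≈ 0.167 for these toy lists
```

Because FAR falls and FRR rises as the threshold increases, the crossing point is unique and serves as a single-number summary of verification accuracy.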
Abstract:
For the first time in this paper we present results showing the effect of speaker head pose angle on automatic lip-reading performance over a wide range of closely spaced angles. We analyse the effect head pose has upon the features themselves and show that by selecting coefficients with minimum variance w.r.t. pose angle, recognition performance can be improved when train-test pose angles differ. Experiments are conducted using the initial phase of a unique multi-view audio-visual database designed specifically for research and development of pose-invariant lip-reading systems. We first show that it is the higher order horizontal spatial frequency components that become most detrimental as the pose deviates. Second, we assess the performance of different feature selection masks across a range of pose angles, including a new mask based on Minimum Cross-Pose Variance coefficients. We report a relative improvement of 50% in Word Error Rate when using our selection mask over a common energy-based selection during profile-view lip-reading.
Abstract:
Conventional digital modulation schemes make use of amplitude, frequency and/or phase as the modulation characteristic to transmit data. In this paper, we exploit circular polarization (CP) of the propagating electromagnetic carrier as a modulation attribute, which is a novel concept in digital communications. The requirement of antenna alignment to maximize received power is eliminated for CP signals, and these are not affected by linearly polarized jamming signals. The work presents the concept of circular polarization modulation for 2, 4 and 8 states of the carrier, referred to as binary circular polarization modulation (BCPM), quaternary circular polarization modulation (QCPM) and 8-state circular polarization modulation (8CPM) respectively. Issues of modulation, demodulation, 3D symbol constellations and 3D propagating waveforms for the proposed modulation schemes are presented and analyzed in the presence of channel effects. The schemes are shown to have the same bit error performance in the presence of AWGN as conventional schemes, while providing a 3 dB gain in the flat Rayleigh fading channel.
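The bit-error-rate baseline against which such schemes are compared can be reproduced with a short Monte-Carlo simulation. The sketch below simulates plain BPSK over AWGN and checks it against the theoretical rate Q(sqrt(2·Eb/N0)); it is a generic reference curve, not an implementation of the circular-polarization scheme:

```python
import math
import random

# Monte-Carlo bit error rate of BPSK over AWGN. With unit-energy symbols,
# the per-dimension noise variance is N0/2 = 1/(2*Eb/N0).
def bpsk_ber(eb_n0_db: float, n_bits: int = 200_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    eb_n0 = 10 ** (eb_n0_db / 10)
    sigma = math.sqrt(1 / (2 * eb_n0))          # noise standard deviation
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0           # BPSK mapping 0→-1, 1→+1
        received = symbol + rng.gauss(0, sigma)
        errors += (received >= 0) != bool(bit)  # hard decision at zero
    return errors / n_bits

def bpsk_ber_theory(eb_n0_db: float) -> float:
    # Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))
    eb_n0 = 10 ** (eb_n0_db / 10)
    return 0.5 * math.erfc(math.sqrt(eb_n0))

# At 4 dB the simulated rate should sit close to the theoretical value
sim, theory = bpsk_ber(4.0), bpsk_ber_theory(4.0)
```

The paper's 3 dB Rayleigh-fading gain would show up as the same BER being reached at 3 dB lower Eb/N0 once fading is added to this channel model.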
Abstract:
Objectives: Study objectives were to investigate the prevalence and causes of prescribing errors amongst foundation doctors (i.e. junior doctors in their first (F1) or second (F2) year of post-graduate training), describe their knowledge and experience of prescribing errors, and explore their self-efficacy (i.e. confidence) in prescribing.
Method: A three-part mixed-methods design was used, comprising: prospective observational study; semi-structured interviews and cross-sectional survey. All doctors prescribing in eight purposively selected hospitals in Scotland participated. All foundation doctors throughout Scotland participated in the survey. The number of prescribing errors per patient, doctor, ward and hospital, perceived causes of errors and a measure of doctors’ self-efficacy were established.
Results: 4710 patient charts and 44,726 prescribed medicines were reviewed. There were 3364 errors, affecting 1700 (36.1%) charts (overall error rate: 7.5%; F1: 7.4%; F2: 8.6%; consultants: 6.3%). Higher error rates were associated with: teaching hospitals (p<0.001), surgical (p<0.001) or mixed (p = 0.008) wards rather than medical wards, higher patient turnover wards (p<0.001), a greater number of prescribed medicines (p<0.001) and the months December and June (p<0.001). One hundred errors were discussed in 40 interviews. Error causation was multi-factorial; work environment and team factors were particularly noted. Of 548 completed questionnaires (national response rate of 35.4%), 508 (92.7% of respondents) reported errors, most of which (328; 64.6%) did not reach the patient. Pressure from other staff, workload and interruptions were cited as the main causes of errors. Foundation year 2 doctors reported greater confidence than year 1 doctors in deciding the most appropriate medication regimen.
Conclusions: Prescribing errors are frequent and of complex causation. Foundation doctors made more errors than other doctors, but undertook the majority of prescribing, making them a key target for intervention. Contributing causes included work environment, team, task, individual and patient factors. Further work is needed to develop and assess interventions that address these.
Abstract:
Future digital signal processing (DSP) systems must provide robustness at the algorithm and application level to the reliability issues that come along with corresponding implementations in modern semiconductor process technologies. In this paper, we address this issue by investigating the impact of unreliable memories on general DSP systems. In particular, we propose a novel framework to characterize the effects of unreliable memories, which enables us to devise novel methods to mitigate the associated performance loss. We propose to deploy specifically designed data representations, which have the capability of substantially improving the system reliability compared to that realized by conventional data representations used in digital integrated circuits, such as 2's-complement or sign-magnitude number formats. To demonstrate the efficacy of the proposed framework, we analyze the impact of unreliable memories on coded communication systems, and we show that the deployment of optimized data representations substantially improves the error-rate performance of such systems.
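Why conventional number formats are fragile on unreliable memories is easy to illustrate: in 2's complement, the numerical damage from a single bit flip depends entirely on which bit is hit, with a sign-bit flip corrupting a stored value by half the representable range. A small illustrative sketch (8-bit words and hypothetical values, not the paper's framework):

```python
# Impact of a single bit flip on an 8-bit two's-complement value: the numerical
# error ranges from 1 (LSB) to 128 (sign bit), which is why specially designed
# data representations can improve robustness on unreliable memories.
def twos_complement_decode(byte: int) -> int:
    """Interpret an 8-bit pattern as a signed two's-complement integer."""
    return byte - 256 if byte >= 128 else byte

def flip_bit(byte: int, position: int) -> int:
    return byte ^ (1 << position)

value = 5                         # stored as 0b00000101
stored = value & 0xFF
for pos in (0, 7):
    corrupted = twos_complement_decode(flip_bit(stored, pos))
    error = abs(corrupted - value)
    # pos 0 changes the value by 1; pos 7 (the sign bit) turns 5 into -123,
    # an error of 128 — half the representable range
```

A representation that spreads significance more evenly across bits trades a slightly higher error probability per bit for a bounded worst-case numerical error, which is the kind of trade-off the proposed framework quantifies.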
Abstract:
In this paper, we analyze the performance of cognitive amplify-and-forward (AF) relay networks with beamforming under the peak interference power constraint of the primary user (PU). We focus on the scenario in which beamforming is applied at the multi-antenna secondary transmitter and receiver. Also, the secondary relay network operates in channel state information-assisted AF mode, and the signals undergo independent Nakagami-m fading. In particular, closed-form expressions for the outage probability and symbol error rate (SER) of the considered network over Nakagami-m fading are presented. More importantly, asymptotic closed-form expressions for the outage probability and SER are derived. These tractable closed-form expressions for the network performance readily enable us to evaluate and examine the impact of network parameters on the system performance. Specifically, the impact of the number of antennas, the fading severity parameters, the channel mean powers, and the peak interference power is addressed. The asymptotic analysis shows that the peak interference power constraint imposed on the secondary relay network has no effect on the diversity gain. However, the coding gain is affected by the fading parameters of the links from the primary receiver to the secondary relay network.
Abstract:
We consider transmit antenna selection with receive generalized selection combining (TAS/GSC) for cognitive decode-and-forward (DF) relaying in Nakagami-m fading channels. In an effort to assess the performance, the probability density function and the cumulative distribution function of the end-to-end SNR are derived using the moment generating function, from which new exact closed-form expressions for the outage probability and the symbol error rate are derived. We then derive a new closed-form expression for the ergodic capacity. More importantly, by deriving the asymptotic expressions for the outage probability and the symbol error rate, as well as the high SNR approximations of the ergodic capacity, we establish new design insights under two distinct constraint scenarios: 1) a proportional interference power constraint, and 2) a fixed interference power constraint. Several pivotal conclusions are reached. For the first scenario, the full diversity order of the outage probability and the symbol error rate is achieved, and the high SNR slope of the ergodic capacity is 1/2. For the second scenario, the diversity order of the outage probability and the symbol error rate is zero with error floors, and the high SNR slope of the ergodic capacity is zero with a capacity ceiling.