819 results for Classification error rate
Abstract:
This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain --- environments with manifest causal structure --- for learning. In such environments the agent has an abundance of perceptions of its environment. Specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure, and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low-level rule-learning algorithm that converges on a good set of specific rules, (2) a concept learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition, this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate. However, each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented and an upper bound on the expected error rate of the expert is derived.
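As a rough illustration of the expert-selection problem the abstract describes (not the thesis's actual algorithm), the sketch below tests each expert in sequence and keeps the first whose empirical error rate on a fixed number of trials falls below a threshold; the function names, trial count, and threshold are all assumptions of this sketch.

```python
import random

def find_good_expert(experts, trials=200, threshold=0.1):
    """Sequentially test experts; return the first whose empirical
    error rate over `trials` samples falls below `threshold`.

    Hypothetical sketch, not the thesis's algorithm. Each expert is
    modelled as a callable that returns True when it errs.
    """
    for expert in experts:
        errors = sum(expert() for _ in range(trials))
        if errors / trials < threshold:
            return expert
    return None  # no expert met the threshold

# Toy usage: experts with unknown error rates drawn from U(0, 0.5).
experts = [
    (lambda p=random.uniform(0, 0.5): random.random() < p)
    for _ in range(50)
]
good = find_good_expert(experts)
```

The trade-off the thesis analyzes is implicit here: a stricter threshold or fewer trials changes both how many experts are examined and the expected error rate of the expert returned.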
Abstract:
Background Single nucleotide polymorphisms (SNPs) have been used extensively in genetics and epidemiology studies. Traditionally, SNPs that did not pass the Hardy-Weinberg equilibrium (HWE) test were excluded from these analyses. Many investigators have addressed possible causes for departure from HWE, including genotyping errors, population admixture and segmental duplication. Recent large-scale surveys have revealed abundant structural variations in the human genome, including copy number variations (CNVs). This suggests that a significant number of SNPs must lie within these regions, which may cause deviation from HWE. Results We performed a Bayesian analysis of the potential effect of copy number variation, segmental duplication and genotyping errors on the behavior of SNPs. Our results suggest that copy number variation is a major cause of HWE violation for SNPs with a small minor allele frequency, when the sample size is large and the genotyping error rate is 0-1%. Conclusions Our study provides the posterior probability that a SNP falls in a CNV or a segmental duplication, given the observed allele frequency of the SNP, the sample size and the significance level of HWE testing.
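For context, the HWE test the abstract refers to compares observed genotype counts with the Hardy-Weinberg expectations p², 2pq, q². A minimal chi-square version (an illustrative sketch of the standard test, not the paper's Bayesian analysis) looks like:

```python
from scipy.stats import chi2

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Pearson chi-square test of Hardy-Weinberg equilibrium
    for a biallelic SNP (1 degree of freedom)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)            # frequency of allele A
    q = 1 - p
    expected = [n * p**2, 2 * n * p * q, n * q**2]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)           # statistic, p-value

# Invented example: an excess of homozygotes, as a deletion CNV that
# hides one allele would tend to produce.
stat, p_value = hwe_chi_square(60, 20, 20)     # strongly rejects HWE
```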
Abstract:
TCP performance degrades when end-to-end connections extend over wireless links, which are characterized by high bit error rates and intermittent connectivity. Such link characteristics can significantly degrade TCP performance, as the TCP sender assumes wireless losses to be congestion losses and takes unnecessary congestion control actions. Link errors can be reduced by increasing transmission power, code redundancy (FEC) or the number of retransmissions (ARQ). But increasing power costs resources, increasing code redundancy reduces available channel bandwidth, and increasing persistency increases end-to-end delay. The paper proposes a TCP optimization through proper tuning of power management, FEC and ARQ in wireless environments (WLAN and WWAN). In particular, we conduct analytical and numerical analyses of TCP (including "wireless-aware" TCP) performance under different settings. Our results show that increasing power, redundancy and/or retransmission levels always improves TCP performance by reducing link-layer losses. However, such improvements are often associated with cost, and arbitrary improvement cannot be realized without paying a lot in return. It is therefore important to consider some kind of net utility function that should be optimized, thus maximizing throughput at the least possible cost.
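A hedged sketch of the kind of net utility function the abstract alludes to: throughput benefit minus the resource costs of transmit power, FEC redundancy, and ARQ persistence. The function shape, weights, and candidate settings below are assumptions for illustration, not the paper's model.

```python
def net_utility(throughput, tx_power, fec_overhead, arq_retries,
                w_power=1.0, w_fec=0.5, w_arq=0.2):
    """Net utility = throughput benefit minus weighted resource costs.
    Illustrative only; the paper's actual cost model may differ."""
    return throughput - (w_power * tx_power
                         + w_fec * fec_overhead
                         + w_arq * arq_retries)

# Pick the setting with the best throughput-cost trade-off.
candidates = [
    # (throughput Mb/s, power mW, FEC overhead fraction, ARQ retries)
    (4.0, 1.0, 0.10, 1),
    (5.5, 2.0, 0.25, 3),
    (5.8, 3.5, 0.40, 5),
]
best = max(candidates, key=lambda c: net_utility(*c))
```

Note how the third setting wins on raw throughput but loses on utility: past some point, extra power and redundancy buy less than they cost, which is the paper's central observation.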
Abstract:
This work considers the effect of hardware constraints that typically arise in practical power-aware wireless sensor network systems. A rigorous methodology is presented that quantifies the effect of output power limit and quantization constraints on bit error rate performance. The approach uses a novel, intuitively appealing means of addressing the output power constraint, wherein the attendant saturation block is mapped from the output of the plant to its input and compensation is then achieved using a robust anti-windup scheme. A priori levels of system performance are attained using a quantitative feedback theory approach on the initial, linear stage of the design paradigm. This hybrid design is assessed experimentally using a fully compliant 802.15.4 testbed where mobility is introduced through the use of autonomous robots. A benchmark comparison between the new approach and a number of existing strategies is also presented.
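As a rough illustration of how an output power limit caps bit error rate performance (a standard BPSK-over-AWGN calculation, not the paper's control-theoretic model), one can saturate the requested transmit power and evaluate the resulting BER:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(requested_power, power_limit, noise_power):
    """BER for BPSK over AWGN when the requested transmit power
    is clipped by a hardware output power limit."""
    actual = min(requested_power, power_limit)   # saturation block
    snr = actual / noise_power
    return q_function(math.sqrt(2 * snr))

# Beyond the saturation point, requesting more power buys no BER gain,
# which is what anti-windup compensation must account for.
for p in (0.5, 1.0, 2.0, 4.0):
    print(p, bpsk_ber(p, power_limit=2.0, noise_power=0.5))
```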
Abstract:
The training and ongoing education of medical practitioners has undergone major changes in an incremental fashion over the past 15 years. These changes have been driven by patient safety, educational, economic and legislative/regulatory factors. In the near future, training in procedural skills will undergo a paradigm shift to proficiency-based progression, with associated requirements for competence-based programmes, valid and reliable assessment tools, and simulation technology. Before training begins, the learning outcomes require clear definition; any form of assessment applied should include measurement of these outcomes. Currently, training in a procedural skill often takes place on an ad hoc basis. The number of attempts necessary to attain a defined degree of proficiency varies from procedure to procedure. Convincing evidence exists that simulation training helps trainees to acquire skills more efficiently than relying on opportunities arising in their clinical practice. Simulation provides a safe, stress-free environment for trainees for skill acquisition, generalization and transfer via deliberate practice. The work described in this thesis contributes to a greater understanding of how medical procedures can be performed more safely and effectively through education. Feedback based on knowledge of performance, provided to novices in a standardized setting on a bench model, was associated with an increase in the speed of skill acquisition and a decrease in error rate during initial learning. The timing of feedback was also associated with effective learning of skill. A marked attrition of skills (independent of the type of feedback provided) was demonstrable 24 hours after they had first been learned. Using these principles of feedback, we then studied the effect of an intense one-day training programme for one or more procedures (the format of many current training courses) on novices with varying years of experience in anaesthesia. There was a marked attrition of skill at 24 hours, which correlated significantly with increasing years of experience; there also appeared to be an inverse relationship between years of experience in anaesthesia and performance. The greater the number of years of practice experience, the longer it took a learner to acquire a new skill. The findings of the studies described in this thesis may have important implications for trainers, trainees and training bodies in the design and implementation of training courses and the formats of delivery of changing curricula. Both curricula and training modalities will need to take account of the characteristics of individual learners and the dynamic nature of procedural healthcare.
Abstract:
BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
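To make the reported figure concrete, a source-to-database error rate is simply errors found per fields inspected, scaled to a per-10,000-fields basis. A one-line check, with invented audit numbers rather than the trial's data:

```python
def error_rate_per_10k(errors_found, fields_inspected):
    """Error rate expressed as errors per 10,000 data fields."""
    return errors_found / fields_inspected * 10_000

# E.g. 43 discrepancies found in a 30,000-field audit sample:
rate = error_rate_per_10k(43, 30_000)   # -> 14.3 errors per 10,000
```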
Abstract:
Wireless-enabled portable devices must operate with the highest possible energy efficiency while still maintaining a minimum level and quality of service to meet the user's expectations. The authors analyse the performance of a new pointer-based medium access control protocol that was designed to significantly improve the energy efficiency of user terminals in wireless local area networks. The new protocol, pointer controlled slot allocation and resynchronisation protocol (PCSAR), is based on the existing IEEE 802.11 point coordination function (PCF) standard. PCSAR reduces energy consumption by removing the need for power saving stations to remain awake and listen to the channel. Using OPNET, simulations were performed under symmetric channel loading conditions to compare the performance of PCSAR with the infrastructure power saving mode of IEEE 802.11, PCF-PS. The simulation results demonstrate a significant improvement in energy efficiency without significant reduction in performance when using PCSAR. For a wireless network consisting of an access point and 8 stations in power saving mode, the energy saving when using PCSAR instead of PCF-PS was up to 31%, depending upon frame error rate and load. The results also show that PCSAR offers significantly reduced uplink access delay over PCF-PS while modestly improving uplink throughput.
Abstract:
Objectives: It is increasingly important to develop predictors of treatment response and outcome in schizophrenia. Neuropsychological impairments, particularly those reflecting frontal lobe function, appear to predict poor outcome. Eye movement abnormalities probably also reflect frontal lobe deficits. We wished to see if these two aspects of schizophrenia were correlated and whether they could distinguish a treatment-resistant from a treatment-responsive group. Methods: Ten treatment-resistant schizophrenic patients were compared with ten treatment-responsive patients on three eye movement paradigms (reflexive saccades, antisaccades and smooth pursuit), clinical psychopathology (BPRS, SANS and CGI) and a neuropsychological test battery designed to detect frontal lobe dysfunction. Ten age-matched controls also carried out the eye movement tasks. Results: Both treatment-responsive (p = 0.038) and treatment-resistant (p = 0.007) patients differed significantly from controls on the antisaccade task. The treatment-resistant group had a higher error rate than the treatment-responsive group, but the difference was not statistically significant. Similarly poor neuropsychological test performance was found in both groups. Conclusions: To demonstrate the biological differences characteristic of treatment resistance, larger sample sizes and wider differences in outcome between the two groups are necessary.
Abstract:
This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combining multiple sessions rather than relying on one evaluator and one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee given the heuristic abilities of each profession. A multi-session, interdisciplinary heuristic evaluation method can save both project budget and time, while ensuring a reduced error rate for the universal usage of built environments.
Abstract:
In this paper, a reduced-complexity soft-interference-cancellation minimum mean-square-error (SIC-MMSE) iterative equalization method for severely time-dispersive multiple-input-multiple-output (MIMO) channels is proposed. To mitigate the severe time dispersiveness of the channel, a single carrier with cyclic prefix is employed, and the equalization is performed in the frequency domain. This simplifies the challenging problem of equalization in MIMO channels impaired by both intersymbol interference (ISI) and co-antenna interference (CAI). The proposed iterative algorithm works in two stages. The first stage estimates the transmitted frequency-domain symbols using a low-complexity SIC-MMSE equalizer. The second stage converts the estimated frequency-domain symbols to the time domain and finds their means and variances to incorporate into the SIC-MMSE equalizer in the next iteration. Simulation results show the bit-/symbol-error-rate performance of the SIC-MMSE equalizer, with and without coding, for various modulation schemes.
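A minimal sketch of the linear building block such a scheme rests on: single-carrier frequency-domain MMSE equalization of one cyclic-prefix block. This is a simplified SISO illustration under assumed unit-power symbols; the soft interference cancellation, MIMO dimensions, and all variable names here are simplifications of my own, not the paper's algorithm.

```python
import numpy as np

def fd_mmse_equalize(rx_block, channel_ir, noise_var):
    """MMSE frequency-domain equalization of one single-carrier
    cyclic-prefix block (CP assumed already removed).

    Linear stage only; the paper's SIC-MMSE additionally feeds back
    per-symbol means and variances across iterations."""
    n = len(rx_block)
    H = np.fft.fft(channel_ir, n)                  # per-tone channel gains
    Y = np.fft.fft(rx_block)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)  # MMSE weight per tone
    return np.fft.ifft(W * Y)                      # back to time domain

# Toy usage: a QPSK block through a 3-tap channel plus noise.
rng = np.random.default_rng(0)
x = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
h = np.array([1.0, 0.5, 0.2])
y = np.fft.ifft(np.fft.fft(h, 64) * np.fft.fft(x))  # circular convolution
y += 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
x_hat = fd_mmse_equalize(y, h, noise_var=0.005)
```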
Abstract:
In this letter, we show how a 2.4-GHz retrodirective array operating in a multipath-rich environment can be utilized in order to spatially encrypt digital data. For the first time, we give experimental evidence that digital data that has no mathematical encryption applied to it can be successfully recovered only when it is detected with a receiver that is polarization-matched to that of a reference continuous-wave (CW) pilot tone signal. In addition, we show that successful detection with low bit error rate (BER) will only occur within a highly constrained spatial region co-located close to the position of the CW reference signal. These effects mean that the signal cannot be intercepted and its modulated data recovered at locations other than the constrained spatial region around the position from which the retrodirective communication was initiated.
Abstract:
The authors propose a three-node full-diversity cooperative protocol, which allows the retransmission of all symbols. By allowing multiple nodes to transmit simultaneously, relaying transmission consumes only limited bandwidth resources. To facilitate the performance analysis of the proposed cooperative protocol, lower and upper bounds on the outage probability are first developed, and then the high signal-to-noise-ratio behaviour is studied. Our analytical results show that the proposed strategy can achieve full diversity. To achieve the performance gain promised by cooperative diversity, a decode-and-forward strategy is adopted at the relays and an iterative soft-interference-cancellation minimum mean-squared-error equaliser is developed. The simulation results compare the bit-error-rate performance of the proposed protocol with the non-cooperative scheme and the scheme presented by Azarian et al. (2005).
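For intuition about the outage probability the abstract bounds: for a fading link it is the probability that instantaneous capacity falls below a target rate. A toy Monte Carlo estimate for a single Rayleigh-faded link (an illustration of the concept, not the paper's analytical bounds):

```python
import numpy as np

def outage_probability(snr_db, rate_bps_hz, trials=200_000, seed=1):
    """Monte Carlo outage probability of a Rayleigh-faded link:
    P[log2(1 + |h|^2 * SNR) < R].  Illustrative only."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    gain = rng.exponential(1.0, trials)   # |h|^2 under Rayleigh fading
    capacity = np.log2(1 + gain * snr)
    return np.mean(capacity < rate_bps_hz)

p_out = outage_probability(snr_db=10, rate_bps_hz=1.0)
```

Full diversity shows up in how such curves steepen: with diversity order d, outage falls roughly as SNR^-d at high SNR, which is the behaviour the paper's bounds establish for its three-node protocol.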
Abstract:
This letter derives mathematical expressions for the received signal-to-interference-plus-noise ratio (SINR) of uplink Single Carrier (SC) Frequency Division Multiple Access (FDMA) multiuser MIMO systems. An improved frequency domain receiver algorithm is derived for the studied systems, and is shown to be significantly superior to the conventional linear MMSE based receiver in terms of SINR and bit error rate (BER) performance.
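For reference, the per-stream SINR at the output of a linear MMSE receiver has a standard closed form; the sketch below computes it for a generic narrowband MIMO channel. This is the textbook expression under assumed unit-power streams, not the letter's improved SC-FDMA receiver or its derived SINR expressions.

```python
import numpy as np

def mmse_post_sinr(H, noise_var):
    """Per-stream post-detection SINR of a linear MMSE receiver for
    y = Hx + n with unit-power streams:
        SINR_k = 1 / [(I + H^H H / s2)^-1]_kk - 1.
    Textbook formula, not the letter's improved algorithm."""
    n_tx = H.shape[1]
    A = np.eye(n_tx) + (H.conj().T @ H) / noise_var
    diag = np.real(np.diag(np.linalg.inv(A)))
    return 1.0 / diag - 1.0

# Toy 4x2 MIMO channel at 10 dB SNR.
rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 2))
     + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
sinr = mmse_post_sinr(H, noise_var=0.1)
sinr_db = 10 * np.log10(sinr)
```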
Abstract:
In this paper, we present a new approach to visual speech recognition which improves contextual modelling by combining Inter-Frame Dependent and Hidden Markov Models. This approach captures contextual information in visual speech that may be lost using a Hidden Markov Model alone. We apply contextual modelling to a large speaker-independent isolated-digit recognition task, and compare our approach to two commonly adopted feature-based techniques for incorporating speech dynamics. Results are presented from baseline feature-based systems and the combined modelling technique. We illustrate that both of these techniques achieve similar levels of performance when used independently. However, significant improvements in performance can be achieved through a combination of the two. In particular, we report a relative Word Error Rate improvement in excess of 17% in comparison to our best baseline system.
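For clarity, "relative Word Error Rate improvement" is the reduction in WER expressed as a fraction of the baseline WER; a quick worked check with invented numbers (not the paper's results):

```python
def relative_wer_improvement(baseline_wer, new_wer):
    """Relative WER improvement as a percentage of the baseline."""
    return (baseline_wer - new_wer) / baseline_wer * 100

# Invented numbers: a drop from 30% to 24.8% WER is a 17.3% relative gain,
# even though the absolute reduction is only 5.2 percentage points.
gain = relative_wer_improvement(0.30, 0.248)
```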
Abstract:
In this paper, a novel motion-tracking scheme using scale-invariant features is proposed for automatic cell motility analysis in gray-scale microscopic videos, particularly for live-cell tracking in low-contrast differential interference contrast (DIC) microscopy. In the proposed approach, scale-invariant feature transform (SIFT) points around live cells in the microscopic image are detected, and a structure locality preservation (SLP) scheme using Laplacian Eigenmap is proposed to track the SIFT feature points along successive frames of low-contrast DIC videos. Experiments on low-contrast DIC microscopic videos of various live-cell lines show that, in comparison with principal component analysis (PCA) based SIFT tracking, the proposed Laplacian-SIFT can significantly reduce the error rate of SIFT feature tracking. With this enhancement, further experimental results demonstrate that the proposed scheme is a robust and accurate approach to tackling the challenge of live-cell tracking in DIC microscopy.
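A minimal sketch of the generic SIFT detect-and-match step that such a tracker builds on, using OpenCV. The Laplacian Eigenmap locality-preservation stage is the paper's contribution and is not reproduced here; the file names are placeholders.

```python
import cv2

def match_sift_between_frames(frame_a, frame_b, ratio=0.75):
    """Detect SIFT keypoints in two gray-scale frames and return
    matches filtered by Lowe's ratio test.

    Generic detect/match step only; the paper adds a Laplacian
    Eigenmap structure-locality constraint on top of this."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    return [m for m, n in pairs if m.distance < ratio * n.distance]

# Placeholder file names for two consecutive DIC video frames.
f0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
f1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
good_matches = match_sift_between_frames(f0, f1)
```

In low-contrast DIC footage the ratio test alone leaves many ambiguous matches, which is precisely the failure mode the paper's structure-locality constraint targets.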