986 results for Training algorithms


Relevance: 20.00%

Abstract:

This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out after activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials, ρa relaxes to a baseline vigilance ρ̄a. When ρ̄a is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
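To make the vigilance and match-tracking computation concrete, the following Python sketch illustrates ART-style category choice with a vigilance test, and the minimal raising of ρa after a predictive error at ARTb. It uses fuzzy-ART-style choice and match functions for illustration; the function names and the choice parameter alpha are assumptions of this sketch, not the paper's exact formulation.

    import numpy as np

    def art_select_category(a, weights, rho, alpha=0.001):
        """Search ARTa categories best-first; accept the first category whose
        degree of match with input a passes the vigilance test (match >= rho).
        Fuzzy-ART-style choice/match functions, assumed for illustration."""
        scores = [np.minimum(a, w).sum() / (alpha + w.sum()) for w in weights]
        for j in np.argsort(scores)[::-1]:   # hypothesis-testing search order
            match = np.minimum(a, weights[j]).sum() / a.sum()
            if match >= rho:                 # resonance: accept category j
                return j
        return None                          # all categories reset: commit a new one

    def match_track(a, w_J, eps=1e-6):
        """After a predictive error at ARTb, raise rho_a just above the current
        ARTa degree of match so the active category J is reset and search resumes."""
        return np.minimum(a, w_J).sum() / a.sum() + eps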

Relevance: 20.00%

Abstract:

This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: as an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
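The core VITE dynamics lend themselves to a compact simulation. The sketch below integrates the PPC toward the TPC under the (DV)·(GO) product; the ramping GO signal and step sizes are illustrative assumptions of this sketch, not the paper's parameterization.

    import numpy as np

    def vite_reach(tpc, ppc0, go_rate=1.0, dt=0.01, steps=2000):
        """Integrate the VITE circuit: DV = TPC - PPC; the PPC integrates the
        gated (DV)*(GO) product until DV reaches zero (PPC equals TPC)."""
        ppc = ppc0.astype(float).copy()
        trajectory = [ppc.copy()]
        for k in range(steps):
            dv = tpc - ppc               # difference vector
            go = go_rate * (k * dt)      # ramping speed-controlling GO signal (assumed form)
            ppc += dt * go * dv          # PPC integrates (DV)*(GO)
            trajectory.append(ppc.copy())
        return np.array(trajectory)

    # Example: reach from the origin to a 2-D target.
    path = vite_reach(np.array([0.5, 0.3]), np.zeros(2))
    print(path[-1])                      # approaches the TPC as DV -> 0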

Relevance: 20.00%

Abstract:

Previous research suggests that while republican ex-prisoners suffer from socio-economic problems similar to those of the wider nationalist community, their problems appear to be on a greater scale. The primary objective of this research was to investigate the current obstacles facing republican ex-prisoners in training and employment and to make proposals for change.

Relevance: 20.00%

Abstract:

Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a major factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to remove any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations and it may be difficult for a knowledge engineer to know which one is the best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best to use for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is the best to use at each point in the deletion process.
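The abstract does not spell out MACE's exact combination rule, but the committee idea can be illustrated with a simple voting scheme over maintenance algorithms. In this hypothetical Python sketch, a case is deleted only when enough experts agree; varying the quorum gives one concrete accuracy/deletion trade-off.

    from typing import Callable, List, Set

    # A maintenance algorithm maps a case base to the set of case ids it
    # would delete; the names below are illustrative stand-ins.
    MaintenanceAlgo = Callable[[List[dict]], Set[int]]

    def committee_deletions(case_base: List[dict],
                            experts: List[MaintenanceAlgo],
                            quorum: int) -> Set[int]:
        """Combine deletion decisions from several maintenance algorithms:
        a case is deleted only if at least `quorum` experts vote to delete it.
        A higher quorum deletes fewer cases but more conservatively."""
        votes: dict[int, int] = {}
        for expert in experts:
            for case_id in expert(case_base):
                votes[case_id] = votes.get(case_id, 0) + 1
        return {cid for cid, n in votes.items() if n >= quorum}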

Relevance: 20.00%

Abstract:

The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade automated neurological event detection performance. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, using supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection that compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so, the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world, clinical domain one step closer.
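As an illustration of the supervised approach described above, the following sketch trains a support vector machine on fused EEG and gyroscope features to flag head-movement artefact epochs. The feature set, fusion scheme and parameters are placeholder assumptions, not the thesis's actual pipeline.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_head_movement_detector(eeg_feats: np.ndarray,
                                     gyro_feats: np.ndarray,
                                     labels: np.ndarray):
        """Train an SVM to flag head-movement artefact epochs.

        eeg_feats, gyro_feats : (n_epochs, n_features) arrays of per-epoch
        features (e.g. band powers, gyroscope signal energy); the exact
        features used in the thesis are not reproduced here.
        labels : 1 for artefact-contaminated epochs, 0 otherwise.
        """
        X = np.hstack([eeg_feats, gyro_feats])   # simple feature-level fusion
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X, labels)
        return clf                               # clf.predict(X_new) flags artefacts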

Relevance: 20.00%

Abstract:

Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart-card, microprocessor, dedicated hardware or personal computer. Attacks based on power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, an identical device to the one under attack, which allows them to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters affects the efficiency of secret key recovery in practice, as well as the underlying assumption of profiling attacks: that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning based alternatives are empirically compared with template attacks, and their respective merits examined with regard to attack efficiency.
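The profiling-attack idea can be made concrete with the classic Gaussian template attack, sketched below: fit a mean and covariance to the traces observed for each key-dependent class on the profiling device, then classify traces from the target device by maximum likelihood. Point-of-interest selection and other practical refinements are omitted in this minimal sketch.

    import numpy as np

    def build_templates(traces: np.ndarray, classes: np.ndarray):
        """Profiling phase: fit a multivariate Gaussian (mean, covariance) to
        the power traces observed for each class (e.g. a key-dependent
        intermediate value) on the profiling device."""
        templates = {}
        for c in np.unique(classes):
            t = traces[classes == c]
            templates[c] = (t.mean(axis=0), np.cov(t, rowvar=False))
        return templates

    def classify_trace(trace: np.ndarray, templates: dict):
        """Attack phase: return the class whose template gives the highest
        log-likelihood for a single trace from the target device."""
        best, best_ll = None, -np.inf
        for c, (mu, cov) in templates.items():
            d = trace - mu
            inv = np.linalg.pinv(cov)            # pinv guards against
            _, logdet = np.linalg.slogdet(cov)   # ill-conditioned covariances
            ll = -0.5 * (d @ inv @ d + logdet)
            if ll > best_ll:
                best, best_ll = c, ll
        return best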

Relevance: 20.00%

Abstract:

The training and ongoing education of medical practitioners has undergone major changes in an incremental fashion over the past 15 years. These changes have been driven by patient safety, educational, economic and legislative/regulatory factors. In the near future, training in procedural skills will undergo a paradigm shift to proficiency-based progression, with associated requirements for competence-based programmes, valid, reliable assessment tools and simulation technology. Before training begins, the learning outcomes require clear definition; any form of assessment applied should include measurement of these outcomes. Currently, training in a procedural skill often takes place on an ad hoc basis. The number of attempts necessary to attain a defined degree of proficiency varies from procedure to procedure. Convincing evidence exists that simulation training helps trainees to acquire skills more efficiently than relying on opportunities in their clinical practice. Simulation provides a safe, stress-free environment for trainees for skill acquisition, generalization and transfer via deliberate practice. The work described in this thesis contributes to a greater understanding of how medical procedures can be performed more safely and effectively through education. Feedback based on knowledge of performance, provided to novices in a standardized setting on a bench model, was associated with an increase in the speed of skill acquisition and a decrease in error rate during initial learning. The timing of feedback was also associated with effective learning of skill. A marked attrition of skills (independent of the type of feedback provided) was demonstrable 24 hours after they had first been learned. Applying these principles of feedback, the effect of an intense training programme (the format of many current training courses: an intense training day for one or more procedures) was then studied in novices with varying years of experience in anaesthesia. There was a marked attrition of skill at 24 hours, with a significant correlation with increasing years of experience; there also appeared to be an inverse relationship between years of experience in anaesthesia and performance. The greater the number of years of practice experience, the longer a learner required to acquire a new skill. The findings of the studies described in this thesis may have important implications for trainers, trainees and training bodies in the design and implementation of training courses and the formats of delivery of changing curricula. Both curricula and training modalities will need to take account of the characteristics of individual learners and the dynamic nature of procedural healthcare.

Relevance: 20.00%

Abstract:

New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in-situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which can provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system utilizes adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, it is also observed that the efficiency of the algorithm decreases as the level of non-repetitive random noise and/or error in the reference velocity calculation increases. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. When implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
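A minimal version of the pseudoinverse-based learning stage might look as follows; the circular difference model relating slit errors to measured transition intervals is an assumption of this sketch, chosen to match the description above rather than taken from the thesis.

    import numpy as np

    def estimate_slit_errors(intervals, omega_ref):
        """Least-squares (pseudoinverse) estimate of slit position errors from
        transition intervals measured over one revolution at an approximately
        constant reference velocity omega_ref (rad/s).

        Assumed model for this sketch: with slit error d_i at transition i,
        omega_ref * t_i = 2*pi/N + d_(i+1) - d_i (indices circular). The
        system is rank-deficient (errors are defined only up to a common
        offset), so the pseudoinverse returns the minimum-norm solution.
        """
        N = len(intervals)
        b = omega_ref * np.asarray(intervals) - 2 * np.pi / N
        A = np.roll(np.eye(N), 1, axis=1) - np.eye(N)   # row i encodes d_(i+1) - d_i
        return np.linalg.pinv(A) @ b                    # slit error estimates

Subsequent velocity calculation would then use the adjusted slit positions i*2*pi/N + d_i in place of the nominal ones.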

Relevance: 20.00%

Abstract:

In this thesis, extensive experiments are first conducted to characterize the performance of the emerging IEEE 802.15.4-2011 ultra-wideband (UWB) standard for indoor localization, and the results demonstrate the accuracy and precision of using time-of-arrival measurements for ranging applications. A multipath propagation control technique is synthesized which considers the relationship between transmit power, transmission range and signal-to-noise ratio. The methodology includes a novel bilateral transmitter output power control algorithm which is demonstrated to be able to stabilize the multipath channel and enable sub-5 cm instant ranging accuracy in line-of-sight conditions. A fully-coupled architecture is proposed for the localization system using a combination of IEEE 802.15.4-2011 UWB and inertial sensors. This architecture not only implements position estimation of the object by fusing the UWB and inertial measurements, but also enables the nodes in the localization network to mutually share positional and other useful information via the UWB channel. The hybrid system has been demonstrated to be capable of simultaneous local positioning and remote tracking of the mobile object. Three fusion algorithms for relative position estimation are proposed: inertial navigation system (INS) dead reckoning, INS with UWB ranging correction, and orientation plus ranging. Experimental results show that the INS with UWB correction algorithm achieves an average position accuracy of 0.1883 m, improvements of 83% and 62% on the accuracy of the INS alone (1.0994 m) and an existing extended Kalman filter tracking algorithm (0.5 m), respectively.
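A deliberately simplified illustration of the INS-with-UWB-ranging-correction idea is sketched below: inertial dead reckoning predicts the position, and a UWB range to a fixed anchor corrects it via a Kalman-style update. The scalar covariance and noise parameters are placeholder assumptions; the thesis's actual fusion algorithms are more elaborate.

    import numpy as np

    def fuse_ins_uwb(pos, vel, accel, uwb_range, anchor, dt, P, q=0.01, r=0.0025):
        """One predict/update cycle of a simplified EKF-style fusion step.

        pos, vel, accel : 2-D numpy arrays (position, velocity, acceleration)
        uwb_range       : measured range to a fixed anchor position
        P               : scalar position uncertainty (kept scalar for clarity)
        q, r            : assumed process and measurement noise parameters
        """
        # Predict: integrate INS acceleration (dead reckoning drifts over time).
        vel = vel + accel * dt
        pos = pos + vel * dt
        P = P + q                        # uncertainty grows without correction

        # Update: compare the predicted range with the UWB measurement.
        diff = pos - anchor
        pred_range = np.linalg.norm(diff)
        H = diff / pred_range            # Jacobian of range w.r.t. position
        S = H @ (P * H) + r              # innovation variance
        K = (P * H) / S                  # Kalman gain
        pos = pos + K * (uwb_range - pred_range)
        P = (1 - K @ H) * P
        return pos, vel, P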

Relevance: 20.00%

Abstract:

According to the EUSOMA position paper 'The requirements of a specialist breast unit', each breast unit should have a core team made up of health professionals who have undergone specialist training in breast cancer. In this paper, on behalf of EUSOMA, the authors identify the standards of training in breast cancer, to harmonise and foster breast care training in Europe. The aim of this paper is to contribute to raising the level of care in breast units, as the input of qualified health professionals increases the quality of breast cancer patient care.

Relevance: 20.00%

Abstract:

BACKGROUND: The Lung Cancer Exercise Training Study (LUNGEVITY) is a randomized trial to investigate the efficacy of different types of exercise training on cardiorespiratory fitness (VO2peak), patient-reported outcomes, and the organ components that govern VO2peak in post-operative non-small cell lung cancer (NSCLC) patients. METHODS/DESIGN: Using a single-center, randomized design, 160 subjects (40 patients/study arm) with histologically confirmed stage I-IIIA NSCLC following curative-intent complete surgical resection at Duke University Medical Center (DUMC) will be potentially eligible for this trial. Following baseline assessments, eligible participants will be randomly assigned to one of four conditions: (1) aerobic training alone, (2) resistance training alone, (3) the combination of aerobic and resistance training, or (4) attention control (progressive stretching). The ultimate goal for all exercise training groups will be three supervised exercise sessions per week, at an intensity above 70% of the individually determined VO2peak for aerobic training and between 60% and 80% of one-repetition maximum for resistance training, for 30-45 minutes/session. Progressive stretching will be matched to the exercise groups in terms of program length (i.e., 16 weeks), social interaction (participants will receive one-on-one instruction), and duration (30-45 mins/session). The primary study endpoint is VO2peak. Secondary endpoints include patient-reported outcomes (PROs) (e.g., quality of life, fatigue, depression) and organ components of the oxygen cascade (i.e., pulmonary function, cardiac function, skeletal muscle function). All endpoints will be assessed at baseline and post-intervention (16 weeks). Substudies will include genetic studies regarding individual responses to an exercise stimulus, theoretical determinants of exercise adherence, examination of the psychological mediators of the exercise-PRO relationship, and exercise-induced changes in gene expression. DISCUSSION: VO2peak is becoming increasingly recognized as an outcome of major importance in NSCLC. LUNGEVITY will identify the optimal form of exercise training for NSCLC survivors as well as provide insight into the physiological mechanisms underlying this effect. Overall, this study will contribute to the establishment of clinical exercise therapy rehabilitation guidelines for patients across the entire NSCLC continuum. TRIAL REGISTRATION: NCT00018255.

Relevance: 20.00%

Abstract:

BACKGROUND: Mutations in the TP53 gene are extremely common and occur very early in the progression of serous ovarian cancers. Gene expression patterns that relate to mutational status may provide insight into the etiology and biology of the disease. METHODS: The TP53 coding region was sequenced in 89 frozen serous ovarian cancers, 40 early stage (I/II) and 49 advanced stage (III/IV). Affymetrix U133A expression data were used to define gene expression patterns by mutation, type of mutation, and cancer stage. RESULTS: Missense or chain-terminating (null) mutations in TP53 were found in 59/89 (66%) ovarian cancers. Early stage cancers had a significantly higher rate of null mutations than late stage disease (38% vs. 8%, p < 0.03). In advanced stage cases, mutations were more prevalent in short-term survivors than long-term survivors (81% vs. 30%, p = 0.0004). Gene expression patterns had a robust ability to predict TP53 status within training data. Using early versus late stage disease for out-of-sample prediction, the signature derived from early stage cancers could accurately (86%) predict the mutation status of late stage cancers. CONCLUSIONS: This represents the first attempt to define a genomic signature of TP53 mutation in ovarian cancer. Patterns of gene expression characteristic of TP53 mutation could be discerned and included several genes that are known p53 targets or have been described in the context of expression signatures of TP53 mutation in breast cancer.

Relevance: 20.00%

Abstract:

BACKGROUND: The Exercise Intensity Trial (EXCITE) is a randomized trial to compare the efficacy of supervised moderate-intensity aerobic training with moderate- to high-intensity aerobic training, relative to attention control, on aerobic capacity, physiologic mechanisms, patient-reported outcomes, and biomarkers in women with operable breast cancer following the completion of definitive adjuvant therapy. METHODS/DESIGN: Using a single-center, randomized design, 174 postmenopausal women (58 patients/study arm) with histologically confirmed, operable breast cancer presenting to Duke University Medical Center (DUMC) will be enrolled in this trial following completion of primary therapy (including surgery, radiation therapy, and chemotherapy). After baseline assessments, eligible participants will be randomized to one of two supervised aerobic training interventions (moderate-intensity or moderate/high-intensity aerobic training) or an attention-control group (progressive stretching). The aerobic training interventions will consist of 150 min·wk⁻¹ of supervised treadmill walking at an intensity of 60%-70% (moderate-intensity) or 60%-100% (moderate to high-intensity) of the individually determined peak oxygen consumption (VO₂peak), for 20-45 minutes/session, for 16 weeks. The progressive stretching program will be consistent with the exercise interventions in terms of program length (16 weeks), social interaction (participants will receive one-on-one instruction), and duration (20-45 mins/session). The primary study endpoint is VO₂peak, as measured by an incremental cardiopulmonary exercise test. Secondary endpoints include physiologic determinants that govern VO₂peak, patient-reported outcomes, and biomarkers associated with breast cancer recurrence/mortality. All endpoints will be assessed at baseline and after the intervention (16 weeks). DISCUSSION: EXCITE is designed to investigate the intensity of aerobic training required to induce optimal improvements in VO₂peak and other pertinent outcomes in women who have completed definitive adjuvant therapy for operable breast cancer. Overall, this trial will inform and refine exercise guidelines to optimize recovery in breast and other cancer survivors following the completion of primary cytotoxic therapy. TRIAL REGISTRATION: NCT01186367.

Relevance: 20.00%

Abstract:

BACKGROUND: The ability to write clearly and effectively is of central importance to the scientific enterprise. Encouraged by the success of simulation environments in other biomedical sciences, we developed WriteSim TCExam, an open-source, Web-based, textual simulation environment for teaching effective writing techniques to novice researchers. We shortlisted and modified an existing open-source application, TCExam, to serve as the textual simulation environment. After testing usability internally in our team, we conducted formal field usability studies with novice researchers. These were followed by formal surveys with researchers fitting the roles of administrators and users (novice researchers). RESULTS: The development process was guided by feedback from usability tests within our research team. Online surveys and formal studies, involving members of the Research on Research group and selected novice researchers, show that the application is user-friendly. Additionally, it has been used to train 25 novice researchers in scientific writing to date and has generated encouraging results. CONCLUSION: WriteSim TCExam is the first Web-based, open-source textual simulation environment designed to complement traditional scientific writing instruction. While initial reviews by students and educators have been positive, a formal study is needed to measure its benefits in comparison to standard instructional methods.