7 results for Sample algorithms
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a major factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to remove any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations, and it may be difficult for a knowledge engineer to know which one is best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways, we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is best to use at each point in the deletion process.
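The committee idea lends itself to a simple voting scheme. The sketch below is a minimal illustration, not the thesis's exact formulation: it assumes each member algorithm exposes a function returning the indices of cases it would delete, and both the names and the quorum rule are assumptions for illustration.

```python
# Hypothetical sketch of a MACE-style committee: each member algorithm votes
# on which cases to delete, and the composite deletes only cases flagged by
# at least `quorum` members. Names and the voting rule are illustrative
# assumptions, not the thesis's exact formulation.
from typing import Callable, Dict, Iterable, List, Set

MaintenanceAlgorithm = Callable[[List], Set[int]]  # returns case indices to delete

def mace_committee(case_base: List,
                   members: Iterable[MaintenanceAlgorithm],
                   quorum: int) -> Set[int]:
    votes: Dict[int, int] = {}
    for algorithm in members:
        for idx in algorithm(case_base):
            votes[idx] = votes.get(idx, 0) + 1
    # A stricter quorum deletes fewer cases with more agreement; a looser
    # quorum deletes more, one way to realise the accuracy/deletion trade-off.
    return {idx for idx, count in votes.items() if count >= quorum}
```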
Abstract:
The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up times, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease of use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the classification performance of automated neurological event detection. This thesis, therefore, contributes to the further improvement of automated neurological event detection systems, by identifying some of the major obstacles in deploying these EEG systems in ambulatory and clinical environments, so that the EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, utilising supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head-movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved.
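As a rough illustration of the fusion step just described, the sketch below gates epileptiform detections with an artefact-specific classifier; the classifiers, feature inputs and thresholds are assumptions for illustration, not the thesis's configuration.

```python
# Minimal sketch: suppress epileptiform detections that coincide with
# high-confidence artefact detections. Both SVMs are assumed pre-trained
# with probability=True; the 0.5/0.8 thresholds are illustrative.
import numpy as np
from sklearn.svm import SVC

def fused_detections(features: np.ndarray,
                     event_clf: SVC,
                     artefact_clf: SVC,
                     artefact_threshold: float = 0.8) -> np.ndarray:
    """Return a boolean mask of per-epoch epileptiform detections."""
    event_prob = event_clf.predict_proba(features)[:, 1]
    artefact_prob = artefact_clf.predict_proba(features)[:, 1]
    # Veto rule: an epoch counts as epileptiform only if the event classifier
    # fires and the artefact classifier does not, which is one simple way to
    # cut false detections caused by head-movement and similar artefacts.
    return (event_prob > 0.5) & (artefact_prob < artefact_threshold)
```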
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so, the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world clinical domain one step closer.
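The blind-source-separation step might look like the following sketch, which uses FastICA and a correlation test against a separately recorded respiration trace to identify the contaminated component; the correlation-based selection rule is an illustrative assumption rather than the thesis's method.

```python
# Minimal sketch of respiration-artefact removal by blind source separation.
import numpy as np
from sklearn.decomposition import FastICA

def remove_respiration(eeg: np.ndarray, respiration: np.ndarray) -> np.ndarray:
    """eeg: (n_samples, n_channels); respiration: (n_samples,) reference trace."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)          # unmix the EEG into components
    # Zero the component most correlated with the respiration reference
    # (an assumed selection rule, for illustration only).
    corr = [abs(np.corrcoef(sources[:, k], respiration)[0, 1])
            for k in range(sources.shape[1])]
    sources[:, int(np.argmax(corr))] = 0.0
    return ica.inverse_transform(sources)     # remix to artefact-reduced EEG
```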
Abstract:
The application of biological effect monitoring for the detection of environmental chemical exposure in domestic animals is still in its infancy. This study investigated blood sample preparations in vitro for their use in biological effect monitoring. When peripheral blood mononuclear cells (PBMCs), isolated following the collection of multiple blood samples from sheep in the field, were cryopreserved and subsequently cultured for 24 hours, a reduction in cell viability (<80%) was attributed to delays in processing following collection. Alternative blood sample preparations using rat and sheep blood demonstrated that 3- to 5-hour incubations can be undertaken without significant alterations in the viability of the lymphocytes; however, a substantial reduction in viability was observed after 24 hours in frozen blood. Detectable levels of early and late apoptosis, as well as increased levels of reactive oxygen species (ROS), were observed in frozen sheep blood samples. The addition of ascorbic acid partly reversed this effect and reduced the loss in cell viability. The response of the rat and sheep blood sample preparations to genotoxic compounds ex vivo showed that EMS caused comparable dose-dependent genotoxic effects in all sample preparations (fresh and frozen), as detected by the Comet assay. In contrast, the effects of CdCl2 were dependent on the duration of exposure as well as the sample preparation. The analysis of leukocyte subsets in frozen sheep blood showed no alterations in the percentages of T and B lymphocytes but revealed a major decrease in the percentage of granulocytes compared to those in the fresh samples. The percentages of IFN-γ and IL-4, but not IL-6, positive cells were comparable between fresh and frozen sheep blood after 4-hour stimulation with phorbol 12-myristate 13-acetate and ionomycin (PMA+I). These results show that frozen blood gives responses comparable to fresh blood samples in the toxicological and immune assays used.
Abstract:
Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart-card, microprocessor, dedicated hardware or personal computer. Attacks based on the power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, a device identical to the one under attack, which allows the attacker to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters practically affects the efficiency of secret key recovery, and testing the underlying assumption of profiling attacks: that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning-based alternatives are empirically compared with template attacks, with their respective merits examined with regard to attack efficiency.
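For readers unfamiliar with the technique, the following minimal sketch captures the profiling/classification core of a template attack: fit a multivariate Gaussian per intermediate value on the profiling device, then score traces from the target by log-likelihood. Point-of-interest selection, trace alignment and key ranking are omitted simplifications, and the function names are assumptions.

```python
# Minimal template-attack sketch: Gaussian templates from profiling traces,
# maximum-likelihood classification of target traces.
import numpy as np
from scipy.stats import multivariate_normal

def build_templates(traces: np.ndarray, labels: np.ndarray) -> dict:
    """traces: (n_traces, n_points) power traces; labels: intermediate values."""
    return {v: (traces[labels == v].mean(axis=0),
                np.cov(traces[labels == v], rowvar=False))
            for v in np.unique(labels)}

def classify(trace: np.ndarray, templates: dict) -> int:
    # Choose the intermediate value whose template best explains the trace.
    return max(templates,
               key=lambda v: multivariate_normal.logpdf(
                   trace, mean=templates[v][0], cov=templates[v][1],
                   allow_singular=True))
```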
Abstract:
Background: Spirituality is fundamental to all human beings, existing within a person and developing until death. This research sought to operationalise spirituality in a sample of individuals with chronic illness. A review of the conceptual literature identified three dimensions of spirituality: connectedness, transcendence, and meaning in life. A review of the empirical literature identified one instrument that measures the three dimensions together; however, recent appraisals of this instrument highlighted issues with item formulation and limited evidence of reliability and validity. Aim: The aim of this research was to develop a theoretically grounded instrument to measure spirituality – the Spirituality Instrument-27 (SpI-27). A secondary aim was to psychometrically evaluate this instrument in a sample of individuals with chronic illness (n=249). Methods: A two-phase design was adopted. Phase one consisted of the development of the SpI-27 based on item generation from a concept analysis, a literature review, and an instrument appraisal. The second phase established the psychometric properties of the instrument and included: a qualitative descriptive design to establish content validity; a pilot study to evaluate the mode of administration; and a descriptive correlational design to assess the instrument's reliability and validity. Data were analysed using SPSS (Version 18). Results: Exploratory factor analysis yielded a final five-factor solution with 27 items. These five factors were labelled: Connectedness with Others, Self-Transcendence, Self-Cognisance, Conservationism, and Connectedness with a Higher Power. Cronbach's alpha coefficients ranged from 0.823 to 0.911 for the five factors and 0.904 for the overall scale, indicating high internal consistency. Paired-sample t-tests, intra-class correlations, and weighted kappa values supported the temporal stability of the instrument over 2 weeks. A significant positive correlation was found between the SpI-27 and the Spirituality Index of Well-Being, providing evidence for convergent validity. Conclusion: This research addresses the call for a theoretically grounded instrument to measure spirituality.
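As an aside on the internal-consistency figures quoted above, Cronbach's alpha for a k-item scale is α = k/(k−1) · (1 − Σ item variances / variance of the total score); a minimal sketch (not from the thesis) is:

```python
# Illustrative computation of Cronbach's alpha for a k-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```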
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in optical incremental encoders (square-wave). An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which can provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system utilizes the adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB software, showing a 95% to 99% improvement in velocity measurement. However, it is also observed that the efficiency of the algorithm decreases with a higher presence of non-repetitive random noise and/or with errors in the reference velocity calculations. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. While implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
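The pseudoinverse-based learning stage can be pictured as a least-squares problem: at near-constant speed, each measured interval deviates from the nominal interval by the difference of adjacent slit errors, so stacking the intervals gives an approximate linear system solvable with a pseudoinverse. The sketch below is an assumed construction for illustration, not the thesis's exact equations.

```python
# Minimal sketch: recover slit (transition-location) errors from measured
# edge intervals at near-constant speed. interval_i ~ nominal + e_{i+1} - e_i,
# so residuals = A @ e with A a circulant difference matrix; lstsq returns
# the minimum-norm (pseudoinverse) estimate since A is rank-deficient.
import numpy as np

def estimate_slit_errors(intervals: np.ndarray, nominal: float) -> np.ndarray:
    """intervals: measured times between consecutive transitions, one revolution."""
    n = len(intervals)
    residuals = intervals - nominal                 # deviation from ideal spacing
    A = np.roll(np.eye(n), 1, axis=1) - np.eye(n)   # row i: +1 at col i+1, -1 at col i
    errors, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return errors                                   # estimated per-slit errors
```

Subsequent velocity calculations would then use the nominal slit positions adjusted by these estimates, mirroring the learning-then-operation split described above.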
Abstract:
In this thesis, extensive experiments are first conducted to characterize the performance of the emerging IEEE 802.15.4-2011 ultra-wideband (UWB) standard for indoor localization, and the results demonstrate the accuracy and precision of using time-of-arrival measurements for ranging applications. A multipath propagation controlling technique is synthesized which considers the relationship between transmit power, transmission range and signal-to-noise ratio. The methodology includes a novel bilateral transmitter output power control algorithm which is demonstrated to be able to stabilize the multipath channel and enable sub-5 cm instant ranging accuracy in line-of-sight conditions. A fully-coupled architecture is proposed for the localization system using a combination of IEEE 802.15.4-2011 UWB and inertial sensors. This architecture not only implements the position estimation of the object by fusing the UWB and inertial measurements, but also enables the nodes in the localization network to mutually share positional and other useful information via the UWB channel. The hybrid system has been demonstrated to be capable of simultaneous local positioning and remote tracking of the mobile object. Three fusion algorithms for relative position estimation are proposed: inertial navigation system (INS) alone, INS with UWB ranging correction, and orientation plus ranging. Experimental results show that the INS with UWB correction algorithm achieves an average position accuracy of 0.1883 m, improving on the accuracy of the INS alone (1.0994 m) and an existing extended Kalman filter tracking algorithm (0.5 m) by 83% and 62%, respectively.
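A minimal sketch of the "INS with UWB ranging correction" idea follows: dead-reckon with inertial measurements, then apply one Gauss-Newton step on the UWB range residuals to fixed anchors. The single-step correction and the gain are assumptions for illustration, not the thesis's actual filter.

```python
# Minimal sketch: inertial dead-reckoning corrected by UWB ranges to fixed
# anchors (requires at least 3 non-collinear anchors for a 3-D fix).
import numpy as np

def ins_with_uwb_correction(position, velocity, accel, dt, anchors, ranges,
                            gain: float = 0.5):
    """anchors: (m, 3) UWB node positions; ranges: (m,) measured distances."""
    # INS prediction: integrate acceleration (drift accumulates over time).
    velocity = velocity + accel * dt
    position = position + velocity * dt
    # UWB correction: one Gauss-Newton step on the range residuals.
    diffs = position - anchors                   # (m, 3)
    dists = np.linalg.norm(diffs, axis=1)        # predicted ranges
    J = diffs / dists[:, None]                   # d(range)/d(position)
    step, *_ = np.linalg.lstsq(J, ranges - dists, rcond=None)
    return position + gain * step, velocity
```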