8 results for Intrusion Detection Systems
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is sufficiently unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool of choice for continuous monitoring of the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the classification performance of automatic neurological event detection. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can move from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high number of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system that uses supervised machine learning classifiers to automatically detect head-movement artefacts in ambulatory EEG. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional signals, recorded by gyroscopes, are used to detect head movements and thereby bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from the support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection that compares favourably with other state-of-the-art systems is achieved.
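To illustrate the general idea of fusing EEG-derived and gyroscope-derived information in a supervised artefact detector, the sketch below computes simple per-window features from both signals and trains a support vector machine to label windows as head-movement artefact or clean EEG. The window length, feature choices and classifier settings are illustrative assumptions only, not the pipeline developed in the thesis.

```python
# Minimal sketch: fusing EEG and gyroscope features for head-movement
# artefact detection with a supervised classifier. Window length,
# feature choices and classifier settings are illustrative assumptions,
# not the pipeline described in the thesis.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(eeg, gyro, fs, win_s=2.0):
    """Per-window features from single-channel EEG and gyroscope traces."""
    step = int(win_s * fs)
    feats = []
    for start in range(0, len(eeg) - step, step):
        e = eeg[start:start + step]
        g = gyro[start:start + step]
        feats.append([
            np.var(e),                   # EEG power surrogate
            np.sum(np.abs(np.diff(e))),  # EEG line length
            np.sum(g ** 2),              # gyroscope energy (head motion)
        ])
    return np.array(feats)

# X: windows x features, y: 1 = head-movement artefact, 0 = clean EEG
# (labels would come from annotated ambulatory recordings)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
# clf.fit(X_train, y_train)
# p_artefact = clf.predict_proba(X_test)[:, 1]
```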
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner: blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefacts from the EEG. Using these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world, clinical domain one step closer.
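A minimal sketch of the blind-source-separation approach to respiration artefact removal: decompose the multichannel EEG with ICA, flag the component most correlated with a simultaneously recorded respiration trace, and reconstruct the EEG with that component removed. The correlation-based selection rule and threshold are assumptions for illustration, not necessarily the criterion used in the thesis.

```python
# Sketch of blind source separation for respiration artefact removal:
# run ICA on multichannel neonatal EEG, flag the component most
# correlated with a recorded respiration trace, and reconstruct the EEG
# with that component zeroed. The correlation criterion is an assumption.
import numpy as np
from sklearn.decomposition import FastICA

def remove_respiration(eeg, resp, threshold=0.5):
    """eeg: samples x channels; resp: samples (reference respiration signal)."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)            # samples x components
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], resp)[0, 1]
        if abs(r) > threshold:                  # respiration-like component
            sources[:, k] = 0.0
    return ica.inverse_transform(sources)       # cleaned EEG
```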
Abstract:
A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a trade-off between the number of additional nodes that are required and the cost of moving through the sensor field for the purpose of node placement. This trade-off is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. Data latency is critical in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has only considered parts of this problem in isolation, and has not properly considered the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so that our aim is to optimise our use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, and this creates a dynamic problem in which the damage must be discovered. We therefore extend to the dynamic problem, in which network repair involves both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can communicate with a sink within a hop-count limit after the network is restored. For each variant of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions in simulation, assessing the quality of solutions (node cost, movement cost, computation time and total restoration time) while varying the problem type and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different speeds of movement for the repairing agent have a significant impact on performance and must be taken into account when selecting the algorithm. In particular, the node-based approaches are best for node cost, and the path-based approaches are best for movement cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent. For an agent moving at medium speed, the total restoration times of the node-based and path-based approaches are almost balanced.
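A minimal sketch of how the hop-count constraint could be checked for a candidate repair: build the connectivity graph under a unit-disc radio model and run a multi-source breadth-first search from the sinks, verifying that every required location lies within the hop limit. The disc model and all parameter names are simplifying assumptions, not the algorithms proposed in the thesis.

```python
# Sketch of the hop-count constraint check: after a candidate repair,
# verify that every required location can reach some sink within a
# given hop limit. A unit-disc radio model is assumed for illustration.
from collections import deque
from math import dist

def within_hop_limit(nodes, sinks, required, radio_range, hop_limit):
    """nodes: list of (x, y); sinks/required: collections of indices into nodes."""
    # Build adjacency under the disc communication model.
    adj = {i: [j for j in range(len(nodes))
               if j != i and dist(nodes[i], nodes[j]) <= radio_range]
           for i in range(len(nodes))}
    # Multi-source BFS from all sinks gives hop distance to the nearest sink.
    hops = {s: 0 for s in sinks}
    queue = deque(sinks)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return all(r in hops and hops[r] <= hop_limit for r in required)
```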
Abstract:
In this thesis I present the work done during my PhD. The thesis is divided into two parts: in the first I present the study of mesoscopic quantum systems, whereas in the second I address the problem of defining the Markov regime for quantum system dynamics. The first work presented is the study of vortex patterns in (quasi) two-dimensional rotating Bose-Einstein condensates (BECs). I consider the case of an anisotropic trapping potential and show that the ground state of the system hosts vortex patterns that are unstable. In a second work I designed an experimental scheme to transfer entanglement from two entangled photons to two BECs. This work proposes a feasible experimental set-up to bring entanglement from microscopic to macroscopic systems, for both the study of fundamental questions (the quantum-to-classical transition) and technological applications. In the last work of the first part, another experimental scheme is presented to detect coherences of a mechanical oscillator which is assumed to have been previously cooled down to the quantum regime. In this regime the system can rapidly undergo decoherence, so new techniques have to be employed to detect and manipulate its state. In the scheme I propose, a micro-mechanical oscillator is coupled to a BEC and detection is performed by monitoring the BEC, with negligible back-action on the cantilever. In the second part of the thesis I give a definition of the Markov regime for open quantum dynamics. The importance of such a definition comes both from the mathematical description of the system dynamics and from the understanding of the role played by the environment in the evolution of an open system. In the Markov regime the mathematical description can be simplified and the role of the environment is a passive one.
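For context, Markovian dynamics of an open quantum system are conventionally written as a master equation in GKSL (Lindblad) form, reproduced below as a standard reference point; the definition of the Markov regime developed in the thesis may be more general than this form.

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_{k}\gamma_{k}\left(L_{k}\rho L_{k}^{\dagger}
  - \tfrac{1}{2}\{L_{k}^{\dagger}L_{k},\rho\}\right),
\qquad \gamma_{k}\ge 0.
```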
Abstract:
Silicon (Si) is the base material for electronic technologies and is emerging as a very attractive platform for photonic integrated circuits (PICs). PICs allow optical systems to be made more compact and with higher performance than discrete optical components. Applications for PICs lie in the areas of fibre-optic communication, biomedical devices, photovoltaics and imaging. Germanium (Ge), due to its suitable bandgap for telecommunications and its compatibility with Si technology, is preferred over III-V compounds as an integrated on-chip detector at near-infrared wavelengths. There are two main approaches to Ge/Si integration: epitaxial growth and direct wafer bonding. The lattice mismatch of ~4.2% between Ge and Si is the main problem of the former technique, as it leads to a high density of dislocations, while the bond strength and conductivity of the interface are the main challenges of the latter. Both result in trap states which are expected to play a critical role. Understanding the physics of the interface is a key contribution of this thesis. This thesis investigates Ge/Si diodes fabricated using these two methods. The effects of interface traps on the static and dynamic performance of Ge/Si avalanche photodetectors have been modelled for the first time. The thesis outlines the original process development and characterisation of mesa diodes which were fabricated by transferring a ~700 nm thick layer of p-type Ge onto n-type Si using direct wafer bonding and layer exfoliation. The effects of low-temperature annealing on the device performance and on the conductivity of the interface have been investigated. It is shown that the diode ideality factor and the series resistance of the device are reduced after annealing. The carrier transport mechanism is shown to be dominated by generation-recombination before annealing, and by direct tunnelling in forward bias and band-to-band tunnelling in reverse bias after annealing. The thesis presents a novel technique to realise photodetectors in which one of the substrates is thinned by chemical mechanical polishing (CMP) after bonding the Si and Ge wafers. Based on this technique, Ge/Si detectors with remarkably high responsivities, in excess of 3.5 A/W at 1.55 μm at −2 V under surface-normal illumination, have been measured. By performing electrical and optical measurements at various temperatures, the carrier transport through the hetero-interface is analysed by monitoring the Ge band bending, from which a detailed band structure of the Ge/Si interface is proposed for the first time. The above-unity responsivity of the detectors is explained by light-induced potential barrier lowering at the interface. To our knowledge this is the first report of light-gated responsivity for vertically illuminated Ge/Si photodiodes. The wafer bonding approach, followed by layer exfoliation or by CMP, is a low-temperature, wafer-scale process. In principle, the technique could be extended to other materials, such as Ge on GaAs or Ge on SOI. The unique results reported here are compatible with surface-normal illumination and with integration alongside CMOS electronics and readout units in the form of 2D arrays of detectors. One potential future application is a low-cost, Si-process-compatible near-infrared camera.
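For reference, the ideality factor n extracted in the annealing study and the responsivity R quoted for the detectors follow the standard textbook relations below; these are general definitions, not thesis-specific derivations.

```latex
I = I_{0}\left[\exp\!\left(\frac{qV}{n k_{B} T}\right) - 1\right],
\qquad
R = \frac{I_{\mathrm{ph}}}{P_{\mathrm{opt}}} = \eta\,\frac{q\lambda}{hc}.
```

At λ = 1.55 μm the factor qλ/(hc) is approximately 1.25 A/W, so a measured responsivity above 3.5 A/W corresponds to an external quantum efficiency of roughly 2.8, which is only possible with internal gain, consistent with the light-induced barrier lowering proposed in the thesis.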
Abstract:
The advent of modern wireless technologies has seen a shift in focus towards the design and development of educational systems for deployment through mobile devices. The use of mobile phones, tablets and Personal Digital Assistants (PDAs) is steadily growing across the educational sector as a whole. Mobile learning (mLearning) systems developed for deployment on such devices hold great significance for the future of education. However, mLearning systems must be built around the particular learner's needs, based on both their motivation to learn and their subsequent learning outcomes. This thesis investigates how biometric technologies, in particular accelerometer and eye-tracking technologies, could effectively be employed within the development of mobile learning systems to facilitate the needs of individual learners. The creation of personalised learning environments must enable improved learning outcomes for users, particularly at an individual level. Therefore, consideration is given to individual learning-style differences within the electronic learning (eLearning) space. The overall area of eLearning is considered, and areas such as biometric technology and educational psychology are explored for the development of personalised educational systems. This thesis explains the basis of the author's hypotheses and presents the results of several studies carried out throughout the PhD research period. These results show that both accelerometer and eye-tracking technologies can be employed as a Human-Computer Interaction (HCI) method in the detection of student learning styles to facilitate the provision of automatically adapted eLearning spaces. Finally, the author provides recommendations for developers on the creation of adaptive mobile learning systems through the employment of biometric technology as a user interaction tool within mLearning applications. Further research paths are identified and a roadmap for future research in this area is defined.
Abstract:
The overall objective of this thesis is to integrate a number of micro/nanotechnologies into integrated cartridge-type systems that implement biochemical protocols. Instrumentation and systems were developed to interface with such cartridge systems: (i) implementing microfluidic handling, (ii) executing thermal control during biochemical protocols and (iii) detecting biomolecules associated with inherited or infectious disease. The first system implements biochemical protocols for DNA extraction, amplification and detection. A digital microfluidic chip (ElectroWetting on Dielectric) manipulated droplets of sample and reagent, implementing sample preparation protocols. The cartridge system also integrated a planar magnetic microcoil device to generate local magnetic field gradients for manipulating magnetic beads. For hybridisation detection, a fluorescence microarray screening for mutations associated with the CFTR gene is printed on a waveguide surface and integrated within the cartridge. A second cartridge system was developed to implement amplification and detection, screening for DNA associated with disease-causing pathogens, e.g. Escherichia coli. This system incorporates (i) elastomeric pinch valves isolating liquids during biochemical protocols and (ii) a silver nanoparticle microarray for fluorescent signal enhancement using localised surface plasmon resonance. The microfluidic structures allowed the sample and reagent to be loaded and moved between chambers, with external heaters implementing the thermal steps for nucleic acid amplification and detection. In a technique allowing probe DNA to be immobilised within a microfluidic system using three-dimensional (3D) hydrogel structures, a prepolymer solution containing probe DNA was formulated and introduced into the microfluidic channel. Photo-polymerisation was then undertaken, forming 3D hydrogel structures attached to the microfluidic channel surface. The prepolymer material, poly(ethylene glycol) (PEG), was used to form hydrogel structures containing probe DNA. This hydrogel formulation process was fast compared to conventional biomolecule immobilisation techniques and was also biocompatible with the immobilised biomolecules, as verified by on-chip hybridisation assays. The process allowed control over hydrogel height growth at the micron scale.
Abstract:
In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extensions to a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementations of recent hash function designs from the SHA-3 competition are discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed; a novel aspect of this design is the secure method in which private-key data is handled.
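The abstract mentions post-processing of the raw TRNG output; a classic, lightweight example of such post-processing is the von Neumann corrector sketched below, which removes bias from a raw bit stream. This is a generic textbook technique shown only for illustration; it is not necessarily the post-processing or failure-detection scheme implemented in the thesis.

```python
# Generic illustration of TRNG post-processing: the von Neumann corrector
# removes bias from a raw bit stream by mapping the pair (0,1) -> 0,
# (1,0) -> 1 and discarding (0,0)/(1,1). A classic technique, not
# necessarily the scheme implemented in the thesis.
def von_neumann(raw_bits):
    out = []
    for b1, b2 in zip(raw_bits[0::2], raw_bits[1::2]):
        if b1 != b2:
            out.append(b1)  # unbiased output bit
    return out

# Example: a biased stream still yields unbiased (but fewer) bits.
# von_neumann([1, 1, 1, 0, 0, 1, 1, 0]) -> [1, 0, 1]
```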
Abstract:
The contribution of buildings towards total worldwide energy consumption in developed countries is between 20% and 40%. Heating, Ventilation and Air Conditioning (HVAC), and more specifically Air Handling Unit (AHU), energy consumption accounts on average for 40% of a typical medical device manufacturing or pharmaceutical facility's energy consumption. Studies have indicated that 20-30% energy savings are achievable by recommissioning HVAC systems, and more specifically AHU operations, to rectify faulty operation. Automated Fault Detection and Diagnosis (AFDD) is a process concerned with partially or fully automating the commissioning process through the detection of faults. An expert system is a knowledge-based system which employs Artificial Intelligence (AI) methods to replicate the knowledge of a human subject-matter expert in a particular field, such as engineering, medicine, finance or marketing. This thesis details the research and development work undertaken in developing and testing a new AFDD expert system for AHUs which can be installed with minimal set-up time on a large cross-section of AHU types, in a building-management-system vendor-neutral manner. Both simulated and extensive field testing were undertaken against a widely available and industry-known expert rule set, the Air Handling Unit Performance Assessment Rules (APAR) (and a later, more developed version known as APAR_extended), in order to prove its effectiveness. Specifically, in tests against a dataset of 52 simulated faults, the new AFDD expert system identified all 52 derived issues, whereas the APAR ruleset identified just 10. In tests using actual field data from 5 operating AHUs in 4 manufacturing facilities, the newly developed AFDD expert system for AHUs was shown to identify four individual fault case categories that the APAR method did not, as well as showing improvements in the area of fault diagnosis.
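To make the rule-based flavour of such systems concrete, the sketch below encodes one illustrative rule in the spirit of the APAR ruleset: in heating mode, the supply air temperature should exceed the mixed air temperature by roughly the temperature rise across the supply fan. The variable names, thresholds and the rule's exact form are assumptions for illustration; the AFDD expert system developed in the thesis applies a far broader set of rules and diagnostics.

```python
# One illustrative rule in the spirit of the APAR ruleset: in heating
# mode, the supply air temperature should sit above the mixed air
# temperature by roughly the fan temperature rise. Variable names and
# thresholds are assumptions, not the thesis's actual rule set.
def heating_mode_rule(t_supply, t_mixed, fan_rise=1.0, tolerance=0.5):
    """Return True if the sample is consistent with normal heating-mode
    operation, False if it should be flagged as a potential fault."""
    return t_supply >= t_mixed + fan_rise - tolerance

# Example: supply air barely warmer than mixed air despite the heating
# coil being active -> flagged for diagnosis.
# heating_mode_rule(t_supply=18.2, t_mixed=18.0)  -> False
```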