857 results for Artefact rejection
Abstract:
The electroencephalogram (EEG) is a medical technology used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, in either clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the classification performance of automatic neurological event detection systems. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients.
In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, using supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, recorded using gyroscopes, are used to detect head movements, bringing additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner: blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so, the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world, clinical domain one step closer.
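The reference-guided artefact-removal idea can be illustrated with a minimal sketch, assuming a simultaneously recorded respiration trace. Note this uses plain least-squares regression against the reference, not the blind source separation methods the thesis actually employs, and all names are illustrative:

```python
import numpy as np

def remove_respiration(eeg, respiration):
    """Subtract from each EEG channel the component linearly explained
    by the respiration reference (simple least-squares projection).

    eeg:         array of shape (n_channels, n_samples)
    respiration: array of shape (n_samples,)
    """
    ref = respiration - respiration.mean()
    ref = ref / np.linalg.norm(ref)      # zero-mean, unit-norm reference
    coeffs = eeg @ ref                   # per-channel projection onto ref
    return eeg - np.outer(coeffs, ref)   # residual is uncorrelated with ref
```

The output channels have exactly zero correlation with the reference; a full blind source separation approach would instead unmix the channels and discard the source component most correlated with respiration.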
Abstract:
Purpose To develop a signal processing paradigm for extracting ERG responses to temporal sinusoidal modulation with contrasts ranging from below perceptual threshold to suprathreshold contrasts. To estimate the magnitude of intrinsic noise in ERG signals at different stimulus contrasts. Methods Photopic test stimuli were generated using a 4-primary Maxwellian view optical system. The 4-primary lights were sinusoidally temporally modulated in-phase (36 Hz; 2.5–50% Michelson). The stimuli were presented in 1 s epochs separated by a 1 ms blank interval and repeated 160 times (160.16 s duration) during the recording of the continuous flicker ERG from the right eye using DTL fiber electrodes. After artefact rejection, the ERG signal was extracted using Fourier methods in each of the 1 s epochs where a stimulus was presented. The signal processing allows for computation of the intrinsic noise distribution in addition to the signal-to-noise ratio (SNR). Results We provide the first report that the ERG intrinsic noise distribution is independent of stimulus contrast whereas SNR decreases linearly with decreasing contrast until the noise limit at ~2.5%. The 1 ms blank intervals between epochs de-correlated the ERG signal at the line frequency (50 Hz) and thus increased the SNR of the averaged response. We confirm that response amplitude increases linearly with stimulus contrast. The phase response shows a shallow positive relationship with stimulus contrast. Conclusions This new technique will enable recording of intrinsic noise in ERG signals above and below perceptual visual threshold and is suitable for measurement of continuous rod and cone ERGs across a range of temporal frequencies, and post-receptoral processing in the primary retinogeniculate pathways at low stimulus contrasts. The intrinsic noise distribution may have application as a biomarker for detecting changes in disease progression or treatment efficacy.
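The epoch-wise Fourier extraction and SNR estimate described above can be sketched as follows (an illustrative reconstruction, not the authors' code; estimating noise from the mean amplitude of neighbouring frequency bins is an assumption):

```python
import numpy as np

def epoch_snr(epoch, fs, f_stim=36.0, n_noise_bins=10):
    """Amplitude at the stimulus frequency via the discrete Fourier
    transform of one epoch, and an SNR against the mean amplitude of
    neighbouring (noise) frequency bins."""
    spectrum = np.fft.rfft(epoch) / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    stim_bin = int(np.argmin(np.abs(freqs - f_stim)))
    signal_amp = np.abs(spectrum[stim_bin])
    # noise bins flank the stimulus bin, skipping its immediate neighbours
    noise_idx = np.r_[stim_bin - n_noise_bins - 2:stim_bin - 2,
                      stim_bin + 3:stim_bin + n_noise_bins + 3]
    noise_amp = np.abs(spectrum[noise_idx]).mean()
    return signal_amp, signal_amp / noise_amp
```

For a 1 s epoch the bins are 1 Hz apart, so a 36 Hz stimulus falls exactly on a bin; repeating the estimate over the 160 epochs would yield the intrinsic noise distribution the paper describes.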
Abstract:
Electrical compound action potentials (ECAPs) of the cochlear nerve are used clinically for quick and efficient cochlear implant parameter setting. The ECAP is the aggregate response of nerve fibres at various distances from the recording electrode, and the magnitude of the ECAP is therefore related to the number of fibres excited by a particular stimulus. Current methods, such as the masker-probe or alternating polarity methods, use the ECAP magnitude at various stimulus levels to estimate the neural threshold, from which the parameters are calculated. However, the correlation between ECAP threshold and perceptual threshold is not always good, with ECAP threshold typically being much higher than perceptual threshold. The lower correlation is partly due to the very different pulse rates used for ECAPs (below 100 Hz) and clinical programs (hundreds of Hz up to several kHz). Here we introduce a new method of estimating ECAP threshold for cochlear implants based upon the variability of the response. At neural threshold, where some but not all fibres respond, there is a different response on each trial. This inter-trial variability can be detected on top of the constant variability of the system noise. The large stimulus artefact, which requires additional trials for artefact rejection in the standard ECAP magnitude methods, is not consequential, as it has little variability. The variability method therefore consists of simply presenting a pulse and recording the ECAP, and as such is quicker than other methods. It also has the potential to be run at high rates like clinical programs, potentially improving the correlation with behavioural threshold. Preliminary data are presented that show a detectable variability increase shortly after probe offset, at probe levels much lower than those producing a detectable ECAP magnitude. Care must be taken, however, to avoid saturation of the recording amplifier; in our experiments we found a gain of 300 to be optimal.
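The variability method lends itself to a compact sketch: compare the inter-trial variance in a post-stimulus window against the baseline variance attributable to system noise (illustrative only; window positions and any decision threshold on the ratio are assumptions, not values from the study):

```python
import numpy as np

def variability_ratio(trials, baseline, window):
    """Ratio of inter-trial variability in a post-stimulus window to the
    baseline variability attributable to system noise.

    trials:   array of shape (n_trials, n_samples)
    baseline: slice covering pre-stimulus samples
    window:   slice covering the expected response latency
    """
    var_per_sample = trials.var(axis=0)   # variance across trials, per sample
    return var_per_sample[window].mean() / var_per_sample[baseline].mean()
```

A ratio well above 1 suggests trial-to-trial neural variability on top of the constant system noise; the stimulus artefact, being nearly identical across trials, contributes little to the numerator.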
Abstract:
This study sought to improve understanding of the persuasive process of emotion-based appeals not only in relation to negative, fear-based appeals but also for appeals based upon positive emotions. In particular, the study investigated whether response efficacy, as a cognitive construct, mediated outcome measures of message effectiveness in terms of both acceptance and rejection of negative and positive emotion-based messages. Licensed drivers (N = 406) participated via the completion of an on-line survey. Within the survey, participants received either a negative (fear-based) appeal or one of the two possible positive appeals (pride or humor-based). Overall, the study's findings confirmed the importance of emotional and cognitive components of persuasive health messages and identified response efficacy as a key cognitive construct influencing the effectiveness of not only fear-based messages but also positive emotion-based messages. Interestingly, however, the results suggested that response efficacy's influence on message effectiveness may differ for positive and negative emotion-based appeals such that significant indirect (and mediational) effects were found with both acceptance and rejection of the positive appeals yet only with rejection of the fear-based appeal. As such, the study's findings provide an important extension to extant literature and may inform future advertising message design.
Abstract:
Studies continue to report ancient DNA sequences and viable microbial cells that are many millions of years old. In this paper we evaluate some of the most extravagant claims of geologically ancient DNA. We conclude that although exciting, the reports suffer from inadequate experimental setup and insufficient authentication of results. Consequently, it remains doubtful whether amplifiable DNA sequences and viable bacteria can survive over geological timescales. To enhance the credibility of future studies and assist in discarding false-positive results, we propose a rigorous set of authentication criteria for work with geologically ancient DNA.
Abstract:
3D models of long bones are used in a number of fields, including orthopaedic implant design. Accurate reconstruction of the 3D models is essential for designing implants that achieve good alignment between two bone fragments. For this purpose, CT scanners are typically employed to acquire accurate bone data, exposing the individual to a high dose of ionising radiation. Magnetic resonance imaging (MRI) has been shown to be a potential alternative to computed tomography (CT) for scanning of volunteers for 3D reconstruction of long bones, essentially avoiding the high radiation dose from CT. In MR imaging of long bones, artefacts due to random movements of the skeletal system create challenges, as they introduce inaccuracies into the 3D models reconstructed from data sets containing such artefacts. One defect observed during an initial study is the lateral shift artefact occurring in the reconstructed 3D models. This artefact is believed to result from volunteers moving the leg between two successive scanning stages (the lower limb has to be scanned in at least five stages due to the limited scanning length of the scanner). As this artefact creates inaccuracies in the implants designed using these models, it needs to be corrected before the application of 3D models to implant design. Therefore, this study aimed to correct the lateral shift artefact using 3D modelling techniques. The femora of five ovine hind limbs were scanned with a 3T MRI scanner using a 3D VIBE-based protocol. The scanning was conducted in two halves, while maintaining a good overlap between them. A lateral shift was generated by moving the limb several millimetres between two scanning stages. The 3D models were reconstructed using a multi-threshold segmentation method.
The correction of the artefact was achieved by aligning the two halves using the robust iterative closest point (ICP) algorithm, with the help of the overlapping region between the two. The models with the corrected artefact were compared with the reference model generated by CT scanning of the same sample. The results indicate that the correction of the artefact was achieved with an average deviation of 0.32 ± 0.02 mm between the corrected model and the reference model. In comparison, the model obtained from a single MRI scan generated an average error of 0.25 ± 0.02 mm when compared with the reference model. An average deviation of 0.34 ± 0.04 mm was seen when the models generated after the table was moved were compared to the reference models; thus, the movement of the table is also a contributing factor to the motion artefacts.
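The alignment step can be illustrated with a basic point-to-point ICP (nearest-neighbour matching alternated with a Kabsch/SVD rigid fit); this is a generic sketch of the technique, not the robust ICP variant used in the study:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD method) for paired point sets."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=30):
    """Align src to dst by alternating nearest-neighbour matching with a
    rigid least-squares fit (point-to-point ICP)."""
    moved = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours; fine for small point sets
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(moved, matched)
        moved = moved @ R.T + t
    return moved
```

In the study's setting, `src` and `dst` would be the vertices of the overlapping region between the two scan halves; the robust variant additionally down-weights outlier correspondences.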
Abstract:
In this workshop proposal I discuss a case study physical computing environment named Talk2Me. This work was exhibited in February 2006 at The Block, Brisbane as an interactive installation in the early stages of its development. The major artefact in this work is a 10 metre wide × 3 metre high light-permeable white dome. There are other technologies and artefacts contained within the dome that make up this interactive environment. The dome artefact has impacted heavily on the design process, including shaping the types of interactions involved, the kinds of technologies employed, and the choice of other artefacts. In this workshop paper, I chart some of the various iterations Talk2Me has undergone in the design process.
Abstract:
Frameworks such as activity theory, distributed cognition and structuration theory, amongst others, have shown that detailed study of contextual settings where users work (or live) can help the design of interactive systems. However, these frameworks do not adequately focus on accounting for the materiality (and embodiment) of the contextual settings. Within the IST-EU funded AMIDA project (Augmented Multiparty Interaction with Distance Access) we are looking into supporting meeting practices with distance access. Meetings are inherently embodied in everyday work life, and material artefacts associated with meeting practices play a critical role in their formation. Our eventual goal is to develop a deeper understanding of the dynamic and embodied nature of meeting practices and to design technologies to support these. In this paper we introduce the notion of "artefact ecologies" as a conceptual base for understanding embodied meeting practices with distance access. Artefact ecologies refer to a system consisting of different digital and physical artefacts, people, and their work practices and values; the notion places emphasis on the role artefacts play in embodiment, work coordination and supporting remote awareness. Finally, we lay out our plans for designing technologies to support embodied meeting practices within the AMIDA project.
Abstract:
The 1951 Convention Relating to the Status of Refugees and the 1967 Protocol Relating to the Status of Refugees are the two primary international legal instruments that states use to process asylum seekers' claim to refugee status. However, in Southeast Asia only two states have acceded to these instruments. This is seemingly paradoxical for a region that has been host to a large number of asylum seekers who, as a result, are forced to live as ‘illegal migrants’. This book examines the region's continued rejection of international refugee law through extensive archival analysis and argues that this rejection was shaped by the region's response to its largest refugee crisis in the post-1945 era: the Indochinese refugee crisis from 1975 to 1996. The result is a seminal study into Southeast Asia's relationship with international refugee law and the impact that this has had on states surrounding the region, the UNHCR and the asylum seekers themselves.
Abstract:
An improved Phase-Locked Loop (PLL) for extracting the phase and frequency of the fundamental component of a highly distorted grid voltage is presented. The structure of the single-phase PLL is based on the Synchronous Reference Frame (SRF) PLL and uses an All Pass Filter (APF) to generate the quadrature component from the single-phase input voltage. In order to filter the harmonic content, a Moving Average Filter (MAF) is used, and performance is improved by designing a lead compensator and also a feed-forward compensator. The simulation results are compared to show the improved performance with feed-forward. In addition, the frequency dependency of the MAF is dealt with by a proposed method for adaptation to the frequency. This method changes the window size based on the frequency on a sample-by-sample basis. By using this method, the speed of resizing can be reduced in order to decrease the output ripples caused by window size variations.
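A minimal numerical sketch of the frequency-matched MAF idea (illustrative only; the sampling rate and signal are assumptions): keeping the window at one period of the estimated grid frequency nulls the harmonic ripple seen in the synchronous reference frame while passing the DC component of interest.

```python
import numpy as np

def maf_window(f_grid_hz, fs_hz):
    """Window length in samples matched to one period of the estimated
    grid frequency; recomputed whenever the frequency estimate changes."""
    return max(1, round(fs_hz / f_grid_hz))

def moving_average(x, n):
    """Moving average over n samples ('valid' output). With n spanning
    one fundamental period, ripple at integer multiples of the
    fundamental averages to zero, leaving only the DC component."""
    return np.convolve(x, np.ones(n) / n, mode="valid")
```

The frequency-adaptive scheme in the paper effectively re-evaluates `maf_window` as the frequency estimate evolves, resizing the window sample by sample.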
Abstract:
The growth of APIs and Web services on the Internet, especially through larger enterprise systems increasingly being leveraged for Cloud and software-as-a-service opportunities, poses challenges for improving the efficiency of integration with these services. Compared to the fine-grained operations of contemporary interfaces, interfaces of enterprise systems are typically larger, more complex and overloaded, with single operations having multiple data entities and parameter sets, supporting varying requests, and reflecting versioning across different system releases. We propose a technique to support the refactoring of service interfaces by deriving business entities and their relationships. In this paper, we focus on the behavioural aspects of service interfaces, aiming to discover the sequential dependencies of operations (otherwise known as protocol extraction) based on the entities and relationships derived. Specifically, we propose heuristics according to these relationships, and in turn derive permissible orders in which operations are invoked. As a result, service operations can be refactored on business entity CRUD lines, with explicit behavioural protocols as part of an interface definition. This supports flexible service discovery, composition and integration. A prototypical implementation and analysis of existing Web services, including those of commercial logistic systems (FedEx), are used to validate the algorithms proposed throughout the paper.
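The flavour of a CRUD-precedence heuristic can be shown with a toy checker (the operation/entity encoding is hypothetical, not the paper's actual algorithm): an entity must be created before it is read, updated or deleted, and never used after deletion.

```python
def permissible(order):
    """Check a sequence of (operation, entity) pairs against a simple
    CRUD-precedence heuristic: create before read/update/delete, no use
    after delete, no double create."""
    created, deleted = set(), set()
    for op, entity in order:
        if op == "create":
            if entity in created:
                return False                 # entity already exists
            created.add(entity)
        else:                                # read / update / delete
            if entity not in created or entity in deleted:
                return False                 # entity missing or gone
            if op == "delete":
                deleted.add(entity)
    return True
```

Enumerating the operation sequences that pass such a check is one way to derive a behavioural protocol from entity relationships alone.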
Abstract:
This two-study paper examines the detrimental impact of workgroup mistreatment and the mediating role of perceived rejection. In Study 1, perceived rejection emerged as a mediator between workgroup mistreatment and depression, organization-based self-esteem, organizational deviance, and organizational citizenship behaviors. In Study 2, the role of organizational norms was examined. Employees who experienced supportive organizational norms reported lower levels of perceived rejection, depression and turnover intentions, and higher levels of organization-based self-esteem and job satisfaction. Employees in the supportive norms condition reported that they were more likely to seek reconciliation after experiencing mistreatment than those who experienced low support. Perceived rejection also emerged as a mediator. Results, practical implications, and future research directions are discussed.