972 results for Computer Controlled Signals.
Abstract:
Effective interaction with personal computers is a basic requirement for many of the functions that are performed in our daily lives. With the rapid emergence of the Internet and the World Wide Web, computers have become one of the premier means of communication in our society. Unfortunately, these advances have not become equally accessible to physically handicapped individuals. In reality, a significant number of individuals with severe motor disabilities, due to a variety of causes such as Spinal Cord Injury (SCI), Amyotrophic Lateral Sclerosis (ALS), etc., may not be able to utilize the computer mouse as a vital input device for computer interaction. The purpose of this research was to further develop and improve an existing alternative input device for computer cursor control to be used by individuals with severe motor disabilities. This thesis describes the development and the underlying principle of a practical hands-off human-computer interface based on Electromyogram (EMG) signals and Eye Gaze Tracking (EGT) technology, compatible with the Microsoft Windows operating system (OS). Results of the software developed in this thesis show a significant improvement in the performance and usability of the EMG/EGT cursor control HCI.
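As a purely illustrative sketch (not the thesis's implementation) of how such an EMG/EGT interface can be organized: eye gaze supplies the coarse cursor position, and an EMG burst above a resting threshold issues a discrete command such as a click. The channel assignment, threshold, and helper names here are hypothetical.

```python
import numpy as np

def emg_rms(window):
    # Root-mean-square amplitude of an EMG window.
    return float(np.sqrt(np.mean(np.square(window))))

def cursor_step(gaze_xy, emg_window, rest_rms, gain=3.0):
    """Return (cursor_xy, click): gaze drives position; an EMG burst
    exceeding `gain` times the resting RMS level triggers a click."""
    click = emg_rms(emg_window) > gain * rest_rms
    return gaze_xy, click

# Toy usage with synthetic signals.
rng = np.random.default_rng(4)
rest = rng.normal(0, 1.0, 256)                  # relaxed muscle
burst = rng.normal(0, 5.0, 256)                 # contraction
pos, clicked = cursor_step((640, 360), burst, rest_rms=emg_rms(rest))
print(pos, clicked)
```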
Abstract:
Recent research has indicated that the pupil diameter (PD) in humans varies with their affective states. However, this signal has not been fully investigated for affective sensing purposes in human-computer interaction systems. This may be due to the dominant separate effect of the pupillary light reflex (PLR), which shrinks the pupil when light intensity increases. In this dissertation, an adaptive interference canceller (AIC) system using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR on the measured pupil diameter signal. The modified pupil diameter (MPD) signal, obtained from the AIC, was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulations of the AIC output resulted in a processed MPD (PMPD) signal, from which a classification feature, PMPDmean, was extracted. This feature was used to train and test a support vector machine (SVM), for the identification of stress states in the subject from whom the pupil diameter signal was recorded, achieving an accuracy rate of 77.78%. The advantages of affective recognition through the PD signal were verified by comparatively investigating the classification of stress and relaxation states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. The discriminating potential of each individual feature extracted from GSR, BVP and PD was studied by analysis of its receiver operating characteristic (ROC) curve. The ROC curve found for the PMPDmean feature encompassed the largest area (0.8546) of all the single-feature ROCs investigated. The encouraging results seen in affective sensing based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments. Therefore, these results confirmed the benefits of using the AIC implementation with the HITV adaptive algorithm to isolate the PAR and the potential of using PD monitoring to sense the evolving affective states of a computer user.
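The adaptive interference canceller structure above can be sketched compactly. The dissertation uses an H∞ time-varying (HITV) update; in the sketch below a normalized LMS (NLMS) update stands in for it, so this shows the canceller's structure, not the author's algorithm. Signal names and parameter values are assumptions for illustration.

```python
import numpy as np

def aic_nlms(primary, reference, n_taps=8, mu=0.05, eps=1e-8):
    """Subtract the reference-correlated component (e.g., the light-driven
    PLR) from the primary signal (measured pupil diameter), leaving an
    output akin to the 'modified' pupil diameter signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]  # most recent first
        x = np.pad(x, (0, n_taps - len(x)))
        y = w @ x                                # estimated interference
        e = primary[n] - y                       # interference-cancelled output
        w += mu * e * x / (x @ x + eps)          # NLMS weight update
        out[n] = e
    return out

# Toy usage: pupil diameter = slow affective trend + light-driven reflex.
t = np.arange(2000) / 100.0
light = (np.sin(0.5 * t) > 0).astype(float)      # intermittent illumination
pd_signal = 4.0 + 0.3 * np.sin(0.05 * t) - 0.8 * light
mpd = aic_nlms(pd_signal, light)
```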
Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel to users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from these experiments were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
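One common way to realize such pre-compensation, sketched below under the assumption that the eye's blur is modeled as convolution with a point spread function (PSF) derived from the measured wavefront, is a Wiener-style regularized inverse filter; this is a minimal illustration, not necessarily the dissertation's exact method. The final clipping step reflects one display-device limitation the text alludes to: screens cannot emit values outside their dynamic range.

```python
import numpy as np

def precompensate(image, psf, k=0.01):
    """Return a pre-compensated image such that blurring it with the PSF
    approximately recovers the original: blur(pre) ~ image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))       # PSF moved to the origin
    G = np.conj(H) / (np.abs(H) ** 2 + k)        # Wiener-style regularized inverse
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(pre, 0.0, 1.0)                # displayable dynamic range only

# Toy usage: a Gaussian PSF (same size as the image) stands in for the
# blur of an aberrated eye.
n = 128
yy, xx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()
img = np.zeros((n, n))
img[48:80, 48:80] = 1.0
display_img = precompensate(img, psf)
```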
Abstract:
Nucleic acid hairpins have been a subject of study for the last four decades. They are composed of a single strand that hybridizes to itself, with the central section forming an unhybridized loop. In nature, they stabilize single-stranded RNA and serve as nucleation sites for RNA folding, protein recognition signals, mRNA localization, and regulation of mRNA degradation. DNA hairpins in biological contexts, on the other hand, have been studied with respect to forming cruciform structures that can regulate gene expression. The use of DNA hairpins as fuel for synthetic molecular devices, including locomotion, was proposed and experimentally demonstrated in 2003. They are interesting because they bring to the table an on-demand energy/information supply mechanism: the energy/information is hidden (from hybridization) in the hairpin's loop until required, and is harnessed by opening the stem region and exposing the single-stranded loop section. The loop region is then free for possible hybridization and helps move the system into a thermodynamically favourable state. The hidden energy and information, coupled with programmability, provide a further functionality: selectively choosing which reactions to hide and which to allow to proceed, which helps develop a topological sequence of events.

Hairpins have been utilized as a source of fuel for many different DNA devices. In this thesis, we program four different molecular devices using DNA hairpins, and experimentally validate them in the laboratory. 1) A novel enzyme-free, autocatalytic, self-replicating system composed entirely of DNA that operates isothermally. 2) Time-responsive DNA circuits with two properties: a) asynchrony, so the final output is always correct regardless of differences in the arrival times of different inputs; and b) renewability, so circuits can be used multiple times without major degradation of the gate motifs (if the inputs change over time, the DNA-based circuit can re-compute the output correctly based on the new inputs). 3) Activatable tiles, a theoretical extension to the tile assembly model that enhances its robustness by protecting the sticky sides of tiles until a tile is partially incorporated into a growing assembly. 4) Controlled amplification of a DNA catalytic system: a device whose amplification does not run uncontrollably until the system runs out of fuel, but instead achieves a finite amount of gain.

Nucleic acid circuits with the ability to perform complex logic operations have many potential practical applications, for example point-of-care diagnostics. We discuss the designs of our DNA hairpin molecular devices, the results we have obtained, and the challenges we have overcome to make them truly functional.
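The "finite gain" idea behind device 4 can be illustrated with a toy kinetic model: simple mass-action ODEs with arbitrary constants, not the reaction network of the thesis. An uncontrolled catalytic system amplifies until the fuel is exhausted, while adding a finite co-consumed "throttle" species caps the achievable gain.

```python
def simulate(k_cat, fuel0, throttle0=None, dt=0.01, steps=5000):
    """Toy Euler integration: the output catalyses its own production
    from fuel; an optional finite 'throttle' species is consumed
    alongside and caps the gain once exhausted."""
    fuel, throttle, out = fuel0, throttle0, 1e-3
    for _ in range(steps):
        limit = fuel if throttle is None else min(fuel, throttle)
        rate = k_cat * out * max(0.0, limit)
        fuel -= rate * dt
        if throttle is not None:
            throttle -= rate * dt
        out += rate * dt
    return out

uncontrolled = simulate(k_cat=1.0, fuel0=10.0)               # runs until fuel is gone
controlled = simulate(k_cat=1.0, fuel0=10.0, throttle0=0.5)  # plateaus at finite gain
print(uncontrolled, controlled)
```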
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are considerably slower than other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
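To make the estimation step concrete, the following is a hedged sketch of Bayesian target-character estimation that fuses noisy ERP classifier scores with a language-model prior and stops data collection early once one character is sufficiently probable. The scoring model, threshold, and all parameter values are illustrative assumptions, not the dissertation's exact algorithms.

```python
import numpy as np

def estimate_character(prior, flashes, threshold=0.95):
    """prior: language-model probabilities over the alphabet.
    flashes: iterable of (subset, score) pairs, where subset is a boolean
    mask of flashed characters and score is an ERP classifier output
    that is high when the flash likely contained the target."""
    log_post = np.log(np.asarray(prior, dtype=float))
    for subset, score in flashes:
        # Flashed characters gain the classifier's evidence; the rest
        # gain the complementary evidence.
        log_post += np.where(subset, score, -score)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= threshold:      # dynamic stopping: enough data
            break
    else:
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
    return int(post.argmax()), float(post.max())

# Toy usage: 36-character speller, flat prior, noisy scores.
rng = np.random.default_rng(0)
prior = np.full(36, 1.0 / 36)            # an n-gram language model would go here
target = 7
flashes = []
for _ in range(60):
    subset = rng.random(36) < 0.5
    score = (1.0 if subset[target] else -1.0) + rng.normal(0, 0.5)
    flashes.append((subset, score))
print(estimate_character(prior, flashes))
```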
Abstract:
Aim. The purpose of this study was to develop and evaluate a computer-based dietary and physical activity self-management program for people recently diagnosed with type 2 diabetes.
Methods. The computer-based program was developed in conjunction with the target group and evaluated in a 12-week randomised controlled trial (RCT). Participants were randomised to the intervention (computer program) or control group (usual care). Primary outcomes were diabetes knowledge and goal setting (ADKnowl questionnaire, Diabetes Obstacles Questionnaire (DOQ)), measured at baseline and week 12. User feedback on the program was obtained via a questionnaire and focus groups. Results. Seventy participants completed the 12-week RCT (32 intervention, 38 control, mean age 59 (SD) years). After completion there was a significant between-group difference in the “knowledge and beliefs scale” of the DOQ. Two-thirds of the intervention group rated the program as either good or very good, 92% would recommend the program to others, and 96% agreed that the information within the program was clear and easy to understand.
Conclusions. The computer program resulted in a small but statistically significant improvement in diet-related knowledge, and user satisfaction was high. With some further development, this computer-based educational tool may be a useful adjunct to diabetes self-management.
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip-based processors, the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used; a sketch of this pattern follows below. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach.

Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from their nominal values, which necessitates efficient calibration techniques being applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach for thermal sensors is therefore also proposed, to calibrate the sensors across a range of voltage levels.
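The dynamic load-balancing idea from the first part above follows a generic worker-pool pattern: instead of statically splitting the fault list across cores, idle workers repeatedly pull small chunks from a shared queue, so faster cores naturally take on more work. The sketch below uses Python threads as stand-ins for SCC cores, and `simulate_faults` is a hypothetical placeholder, so this illustrates the pattern rather than the thesis's SCC runtime.

```python
import queue
import threading

def simulate_faults(chunk):
    # Placeholder for injecting each fault and running the simulation.
    return [f * f for f in chunk]

def dynamic_fault_simulation(faults, n_workers=4, chunk_size=8):
    work = queue.Queue()
    for i in range(0, len(faults), chunk_size):
        work.put(faults[i:i + chunk_size])
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                chunk = work.get_nowait()   # idle worker grabs the next chunk
            except queue.Empty:
                return
            out = simulate_faults(chunk)
            with lock:
                results.extend(out)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(len(dynamic_fault_simulation(list(range(1000)))))
```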
Computer-based tools for assessing micro-longitudinal patterns of cognitive function in older adults
Abstract:
Patterns of cognitive change over micro-longitudinal timescales (i.e., ranging from hours to days) are associated with a wide range of age-related health and functional outcomes. However, practical issues with conducting high-frequency assessments make investigations of micro-longitudinal cognition costly and burdensome to run. One way of addressing this is to develop cognitive assessments that can be performed by older adults, in their own homes, without a researcher being present. Here, we address the question of whether reliable and valid cognitive data can be collected over micro-longitudinal timescales using unsupervised cognitive tests.

In study 1, 48 older adults completed two touchscreen cognitive tests, on three occasions, in controlled conditions, alongside a battery of standard tests of cognitive functions. In study 2, 40 older adults completed the same two computerized tasks on multiple occasions, over three separate week-long periods, in their own homes, without a researcher present. Here, the tasks were incorporated into a wider touchscreen system (Novel Assessment of Nutrition and Ageing (NANA)) developed to assess multiple domains of health and behavior. Standard tests of cognitive function were also administered prior to participants using the NANA system.

Performance on the two “NANA” cognitive tasks showed convergent validity with, and similar levels of reliability to, the standard cognitive battery in both studies. Completion and accuracy rates were also very high. These results show that reliable and valid cognitive data can be collected from older adults using unsupervised computerized tests, thus affording new opportunities for the investigation of cognitive function.
Abstract:
The use of human brain electroencephalography (EEG) signals for automatic person identification has been investigated for a decade. It has been found that the performance of an EEG-based person identification system depends highly on which features are extracted from the multi-channel EEG signals. Linear methods such as Power Spectral Density and Autoregressive Model have been used to extract EEG features; however, these methods assume that EEG signals are stationary. In fact, EEG signals are complex, non-linear, non-stationary, and random in nature. In addition, other factors such as brain condition or human characteristics may have an impact on performance, but these factors have not been investigated and evaluated in previous studies. It has been found in the literature that entropy is used to measure the randomness of non-linear time series data, and also to measure the level of chaos of brain-computer interface systems. Therefore, this thesis proposes to study the role of entropy in non-linear analysis of EEG signals to discover new features for EEG-based person identification. Five different entropy methods, including Shannon Entropy, Approximate Entropy, Sample Entropy, Spectral Entropy, and Conditional Entropy, are proposed to extract entropy features that are used to evaluate the performance of EEG-based person identification systems and the impacts of epilepsy, alcohol, age, and gender characteristics on these systems. Experiments were performed on the Australian EEG and Alcoholism datasets. Experimental results have shown that, in most cases, the proposed entropy features yield very fast person identification with comparable accuracy, because the feature dimension is low; in real-life security operations, timely response is critical. The experimental results have also shown that epilepsy, alcohol, age, and gender characteristics have impacts on EEG-based person identification systems.
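One of the named entropy measures, sample entropy, has a compact standard definition and is sketched below; the parameter choices (m=2, r=0.2·std) follow common practice and are assumptions, not values taken from the thesis.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -ln(A/B), where B counts m-length template matches
    and A counts (m+1)-length matches under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < r)            # matches, self-match excluded
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Toy usage: white noise is more irregular than a sine wave.
rng = np.random.default_rng(1)
print(sample_entropy(np.sin(np.arange(1000) * 0.1)))   # low entropy
print(sample_entropy(rng.normal(size=1000)))           # higher entropy
```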
Abstract:
Current hearing-assistive technology performs poorly in noisy multi-talker conditions. The goal of this thesis was to establish the feasibility of using EEG to guide acoustic processing in such conditions. To attain this goal, this research developed a model via the constructive research method, relying on a literature review. Several approaches have revealed improvements in the performance of hearing-assistive devices under multi-talker conditions, namely beamforming spatial filtering, model-based sparse coding shrinkage, and onset enhancement of the speech signal. Prior research has shown that electroencephalography (EEG) signals contain information about whether a person is actively listening, what the listener is listening to, and where the attended sound source is. This thesis constructed a model for using EEG information to control beamforming, model-based sparse coding shrinkage, and onset enhancement of the speech signal. The purpose of this model is to propose a framework for using EEG signals to control sound processing so as to select a single talker in a noisy environment containing multiple talkers speaking simultaneously. On a theoretical level, the model showed that EEG can control acoustical processing. An analysis of the model identified a requirement for real-time processing, and showed that the model inherits the computationally intensive properties of acoustical processing, although the model itself is of low complexity, placing a relatively small load on computational resources. A research priority is to develop a prototype that controls hearing-assistive devices with EEG. This thesis concludes by highlighting challenges for future research.
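To make the beamforming ingredient concrete, here is a minimal delay-and-sum sketch: microphone signals are time-aligned for a chosen look direction (in the model above, the direction the EEG indicates the listener attends to) and summed, reinforcing the attended talker. The geometry and sample values are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(mics, delays_samples):
    """mics: (n_mics, n_samples) array; delays_samples: per-mic integer
    delays that align the attended source across channels."""
    out = np.zeros(mics.shape[1])
    for ch, d in zip(mics, delays_samples):
        out += np.roll(ch, -d)          # advance each channel into alignment
    return out / mics.shape[0]

# Toy usage: a source arriving with per-mic delays of 0, 2 and 4 samples.
fs = 16000
t = np.arange(fs) / fs
src = np.sin(2 * np.pi * 440 * t)
mics = np.stack([np.roll(src, d) for d in (0, 2, 4)])
enhanced = delay_and_sum(mics, delays_samples=(0, 2, 4))
```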
Abstract:
In this work, we perform a first approach to emotion recognition from single-channel EEG signals recorded in four (4) mother-child dyad experiments in developmental psychology. The single-channel EEG signals are analyzed and processed using several window sizes, performing a statistical analysis over features in the time and frequency domains. Finally, a neural network achieved an average classification accuracy of 99% for two emotional states, happiness and sadness.
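A sketch of the windowed time- and frequency-domain feature extraction described above, for a single EEG channel, follows. The specific feature set and window sizes are illustrative assumptions; the classifier (a neural network in the study) would be trained on rows of the resulting feature matrix.

```python
import numpy as np

def window_features(sig, fs, win_sec=2.0, hop_sec=1.0):
    """Slide a window over the signal and compute simple time-domain
    statistics plus a relative alpha-band power feature per window."""
    win, hop = int(win_sec * fs), int(hop_sec * fs)
    feats = []
    for start in range(0, len(sig) - win + 1, hop):
        w = sig[start:start + win]
        spec = np.abs(np.fft.rfft(w)) ** 2
        freqs = np.fft.rfftfreq(win, 1 / fs)
        alpha = spec[(freqs >= 8) & (freqs < 13)].sum()   # alpha-band power
        feats.append([w.mean(), w.std(), alpha / (spec.sum() + 1e-12)])
    return np.array(feats)

fs = 128
toy = np.random.default_rng(2).normal(size=fs * 30)       # 30 s of toy EEG
X = window_features(toy, fs)                              # one row per window
```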
Abstract:
Cognitive radio (CR) was developed to utilize spectrum bands efficiently. Spectrum sensing and awareness are the main tasks of a CR, providing the possibility of exploiting unused bands. In this thesis, we investigate the detection and classification of Long Term Evolution (LTE) single carrier-frequency division multiple access (SC-FDMA) signals, which are used in the LTE uplink, with applications to cognitive radio. We explore the second-order cyclostationarity of LTE SC-FDMA signals, and apply results obtained for the cyclic autocorrelation function to signal detection and classification (in other words, to spectrum sensing and awareness). The proposed detection and classification algorithms provide very good performance under various channel conditions, with a short observation time, at low signal-to-noise ratios, and with reduced complexity. The validity of the proposed algorithms is verified using signals generated and acquired by laboratory instrumentation, and the experimental results show a good match with computer simulation results.
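A hedged sketch of the cyclic autocorrelation function (CAF) estimate that such detection builds on: for a candidate cyclic frequency alpha and lag tau, the CAF is the Fourier coefficient of the lag-product sequence. The toy signal below is a rectangular-pulse BPSK stand-in, not the LTE SC-FDMA processing chain of the thesis.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau, fs):
    """Estimate R_x(alpha, tau) = <x[n] conj(x[n-tau]) e^{-j2 pi alpha n / fs}>."""
    n = np.arange(len(x))
    lag_prod = x * np.conj(np.roll(x, tau))
    return np.mean(lag_prod * np.exp(-2j * np.pi * alpha * n / fs))

# Toy usage: a BPSK-like signal with a 1 kHz symbol rate shows a CAF peak
# at alpha equal to the symbol rate, which an arbitrary alpha does not.
fs, n_sym, sps = 8000, 500, 8               # 8 samples/symbol -> 1 kHz rate
rng = np.random.default_rng(3)
symbols = rng.choice([-1.0, 1.0], n_sym)
x = np.repeat(symbols, sps) + 0.5 * rng.normal(size=n_sym * sps)
print(abs(cyclic_autocorrelation(x, alpha=1000.0, tau=2, fs=fs)))  # large
print(abs(cyclic_autocorrelation(x, alpha=1234.0, tau=2, fs=fs)))  # near zero
```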
Abstract:
Introduction: Brain-computer interface (BCI) is a promising new technology with possible application in neurorehabilitation after spinal cord injury. A BCI based on movement imagination or attempted movement, coupled with functional electrical stimulation (FES), enables the simultaneous activation of the motor cortices and the muscles they control. When using BCI coupled with FES (known as BCI-FES), the subject activates the motor cortex using attempted movement or movement imagination of a limb. The BCI system detects the motor cortex activation and activates the FES attached to the muscles of the limb the subject is attempting or imagining to move. In this way the afferent and the efferent pathways of the nervous system are simultaneously activated. This simultaneous activation encourages Hebbian-type learning, which could be beneficial in functional rehabilitation after spinal cord injury (SCI). FES is already in use in several SCI rehabilitation units, but there is currently not enough clinical evidence to support the use of BCI-FES for rehabilitation. Aims: The main aim of this thesis is to assess outcomes in sub-acute tetraplegic patients using BCI-FES for functional hand rehabilitation. In addition, the thesis explores different methods for assessing neurological rehabilitation, especially after BCI-FES therapy. The thesis also investigated mental rotation as a possible rehabilitation method in SCI. Methods: Following an investigation into applicable methods for implementing a rehabilitative BCI, a BCI based on attempted movement was built and used to construct a BCI-FES system. The BCI-FES system was used to deliver therapy to seven sub-acute tetraplegic patients, who were scheduled to receive the therapy over a total period of 20 working days; these seven patients form the 'BCI-FES' group. Five more patients were recruited and offered an equivalent quantity of FES without the BCI; these five patients form the 'FES-only' group. Neurological and functional measures were investigated and used to assess both patient groups before and after therapy. Results: The results of the two groups of patients were compared. The patients in the BCI-FES group showed greater improvement, as found with outcome measures assessing neurological changes. The neurological changes following the use of the BCI-FES showed that, during movement attempts, the activation of the motor cortex areas of the SCI patients became closer to the activation found in healthy individuals. The intensity of the activation and its spatial localisation both improved, suggesting desirable cortical reorganisation. Furthermore, the responses of the somatosensory cortex during sensory stimulation provided clear evidence of greater improvement in patients who used the BCI-FES: missing somatosensory evoked potential peaks returned more often in the BCI-FES group, while there was no overall change in the FES-only group. Although the BCI-FES group had greater neurological improvement, they did not show greater functional improvement than the FES-only group. This was attributed mainly to the short duration of the study, in which therapies were delivered for only 20 working days. Conclusions: The results obtained from this study have shown that BCI-FES may induce cortical changes in the desired direction, at least faster than FES alone.
The observation of greater improvement in the patients who used the BCI-FES is a good result in neurorehabilitation, and it shows the potential of thought-controlled FES as a neurorehabilitation tool. These results support other studies that have shown the potential of BCI-FES in rehabilitation following neurological injuries that lead to movement impairment. Although the results are promising, further studies are necessary given the small number of subjects in the current study.
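As a generic illustration of the detection step in such a loop (not the system built in this thesis): mu-band (8-12 Hz) power over the motor cortex drops during movement attempts (event-related desynchronization), and that drop can gate the stimulator. The band limits, threshold, and interface are assumptions.

```python
import numpy as np

def mu_power(eeg_window, fs):
    # Mean power in the 8-12 Hz (mu) band of one EEG window.
    spec = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), 1 / fs)
    return spec[(freqs >= 8) & (freqs <= 12)].mean()

def bci_fes_step(eeg_window, fs, baseline_power, erd_threshold=0.6):
    """Return True (trigger FES) when mu power falls below a fraction of
    the resting baseline, signalling an attempted movement."""
    return mu_power(eeg_window, fs) < erd_threshold * baseline_power

# Toy usage: attenuated mu power relative to rest triggers stimulation.
fs = 256
rng = np.random.default_rng(6)
rest = rng.normal(size=fs)
print(bci_fes_step(rest * 0.5, fs, baseline_power=mu_power(rest, fs)))
```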
Abstract:
One of the challenges posed to biomedical engineers by researchers in neuroscience is brain-machine interaction. The nervous system communicates by interpreting electrochemical signals, and implantable circuits make decisions in order to interact with the biological environment. It is well known that Parkinson's disease is related to a deficit of dopamine (DA). Different methods have been employed to control dopamine concentration, such as magnetic or electrical stimulators or drugs. In this work, the neurotransmitter concentration was controlled automatically, since this is not currently done. To that end, four systems were designed and developed: deep brain stimulation (DBS), transcranial magnetic stimulation (TMS), Infusion Pump Control (IPC) for drug delivery, and fast-scan cyclic voltammetry (FSCV) sensing circuits, which detect the varying concentrations of neurotransmitters such as dopamine caused by these stimulations. Software was also developed for data display and analysis, synchronized with events in the experiments. The system's flexibility is such that DBS or TMS can be used alone, with the infusion pumps, or in combination with other stimulation techniques (lights, sounds, etc.). The developed system allows the DA concentration to be controlled automatically. The resolution of the system is around 0.4 µmol/L, with the concentration correction interval adjustable between 1 and 90 seconds. The system can control DA concentrations between 1 and 10 µmol/L, with an error of about ±0.8 µmol/L. Although designed to control DA concentration, the system can be used to control the concentration of other substances. It is proposed to continue the closed-loop development with FSCV and DBS (or TMS, or infusion) using parkinsonian animal models.
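The closed-loop idea can be sketched with a simple proportional controller: an FSCV reading of the DA concentration is compared against a setpoint and the error drives the infusion command. The plant model, gains, and timing below are toy assumptions; the thesis reports roughly 0.4 µmol/L resolution and a 1-90 s correction interval for the real system.

```python
import numpy as np

def closed_loop_da(setpoint_umol, steps=300, dt_s=5.0, kp=0.5):
    """Each cycle: read DA via a (noisy) FSCV measurement, compute the
    error, and issue a proportional infusion command; the update line is
    a toy uptake/clearance model, not measured pharmacokinetics."""
    rng = np.random.default_rng(5)
    da = 1.0                                 # initial concentration, umol/L
    history = []
    for _ in range(steps):
        measured = da + rng.normal(0, 0.1)   # noisy FSCV reading
        error = setpoint_umol - measured
        infusion = max(0.0, kp * error)      # pump command; no negative infusion
        da += (0.2 * infusion - 0.05 * (da - 1.0)) * dt_s
        history.append(da)
    return np.array(history)

trace = closed_loop_da(setpoint_umol=5.0)    # rises toward the setpoint, with the
                                             # steady-state offset typical of pure P control
```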
Abstract:
Background: Spinal anaesthesia is the standard of care for elective caesarean delivery and has advantages over general anaesthesia. However, the sympathetic blockade induced by spinal anaesthesia results in an 80 percent incidence of hypotension without prophylactic management. Current evidence supports co-loading with intravenous fluids in conjunction with the use of vasopressors as the most effective way to prevent and treat the hypotension. Phenylephrine is the accepted vasopressor of choice in the parturient. A prophylactic phenylephrine infusion combined with a fluid co-load is proven to be an effective and safe method of maintaining maternal hemodynamic stability. While most published studies have assessed the effectiveness of a prophylactic fixed-dose phenylephrine infusion, few studies have assessed the effect of a prophylactic weight-adjusted phenylephrine infusion on maintaining maternal hemodynamic stability following spinal anaesthesia for a caesarean delivery. Objective: To compare the incidence of hypotension between women undergoing elective caesarean section under spinal anaesthesia receiving a prophylactic phenylephrine infusion at a fixed dose of 37.5 micrograms per minute versus a weight-adjusted dose of 0.5 micrograms per kilogram per minute. Methods: One hundred and eight patients scheduled for non-urgent caesarean section under spinal anaesthesia were randomized into two groups, a control group and an intervention group, using a computer-generated table of numbers. The control group received a prophylactic phenylephrine fixed-dose infusion at 37.5 micrograms per minute; the intervention group received a prophylactic phenylephrine weight-adjusted dose infusion at 0.5 micrograms per kilogram per minute. Results: The two groups had similar baseline characteristics in terms of age, sex, weight, and height. There was a 35.2% incidence of hypotension in the fixed-dose group and an 18.6% incidence of hypotension in the weight-adjusted dose group. This difference was of borderline statistical significance (p = 0.05), and the difference in the incidence rates between the two groups was statistically significant (p = 0.03). The differences in the incidence of reactive hypertension and bradycardia between the two groups were not statistically significant (p = 0.19 for reactive hypertension and p = 0.42 for bradycardia). There was also no statistically significant difference in the use of phenylephrine boluses, the use of atropine, the intravenous fluid used, or the number of times the infusion was stopped. Conclusion: In this population, the incidence of hypotension was significantly lower in the weight-adjusted dose group than in the fixed-dose group. There was no difference in the number of physician interventions required to keep blood pressure within 20% of baseline, and no difference in the proportion of reactive hypertension or bradycardia between the two groups. Administering a prophylactic phenylephrine infusion at a weight-adjusted dose of 0.5 micrograms per kilogram per minute results in a lower incidence of hypotension compared with administration at a fixed dose of 37.5 micrograms per minute.
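A small worked comparison of the two regimens may help: the rates coincide at 75 kg (0.5 µg/kg/min × 75 kg = 37.5 µg/min), so lighter patients receive less phenylephrine under the weight-adjusted regimen and heavier patients more. The example weights are chosen for illustration only.

```python
# Compare the fixed-dose and weight-adjusted infusion rates (ug/min).
for weight_kg in (60, 75, 90):
    fixed = 37.5                   # fixed-dose infusion, ug/min
    adjusted = 0.5 * weight_kg     # weight-adjusted infusion, ug/min
    print(f"{weight_kg} kg: fixed {fixed} ug/min vs adjusted {adjusted} ug/min")
```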