985 results for Computer interfaces.


Relevance: 70.00%

Abstract:

In this article, an overview of some of the latest developments in the field of cerebral cortex to computer interfacing (CCCI) is given. This is placed in the more general context of brain-computer interfaces in order to assess advantages and disadvantages. The emphasis is clearly placed on practical studies that have been undertaken and reported on, as opposed to those that are speculative, simulated or proposed as future projects. Related areas are discussed briefly, but only in the context of their contribution to the studies being undertaken. The area of focus is the use of invasive implant technology, where a connection is made directly with the cerebral cortex and/or nervous system. Tests and experimentation that do not involve human subjects are invariably carried out a priori to indicate the eventual possibilities before human subjects themselves are involved. Some of the more pertinent animal studies from this area are discussed. The paper goes on to describe human experimentation, in which neural implants have linked the human nervous system bidirectionally with technology and the internet. A view is taken on the future prospects of CCCI, in terms of its broad therapeutic role.

Relevance: 70.00%

Abstract:

Haptic computer interfaces provide users with feedback through the sense of touch, thereby allowing users to feel a graphical user interface. Force feedback gravity wells, i.e. attractive basins that can pull the cursor toward a target, are one type of haptic effect that has been shown to improve performance in "point and click" tasks. For motion-impaired users, gravity wells can improve selection times by as much as 50%. It has been reported that presenting information to multiple sensory modalities, e.g. haptics and vision, can provide performance benefits. However, previous studies investigating the use of force feedback gravity wells have generally not provided visual representations of the haptic effect. Where force fields extend beyond clickable targets, the addition of visual cues may affect performance. This paper investigates how the performance of motion-impaired computer users is affected by having visual representations of force feedback gravity wells presented on-screen. The results indicate that the visual representation does not affect times or errors in a "point and click" task involving multiple targets.
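The abstract describes force feedback gravity wells as attractive basins that pull the cursor toward a target. As a rough illustration of the idea only (not the authors' implementation), the sketch below computes a simple spring-like attractive force whenever the cursor falls inside a well's radius; the `GravityWell` class, the linear force law and all parameter values are illustrative assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class GravityWell:
    """Illustrative force feedback gravity well (assumed linear spring model)."""
    cx: float         # well centre x (target centre), pixels
    cy: float         # well centre y, pixels
    radius: float     # basin radius, pixels
    stiffness: float  # assumed spring constant, N per pixel

    def force(self, px: float, py: float) -> tuple[float, float]:
        """Return (fx, fy) pulling the cursor at (px, py) toward the centre.

        Zero outside the basin; inside, the force grows linearly with the
        distance from the centre (one plausible profile, not the paper's).
        """
        dx, dy = self.cx - px, self.cy - py
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > self.radius:
            return 0.0, 0.0
        magnitude = self.stiffness * dist
        return magnitude * dx / dist, magnitude * dy / dist


# Example: a well of radius 40 px around a button centred at (300, 200).
well = GravityWell(cx=300, cy=200, radius=40, stiffness=0.02)
print(well.force(320, 210))  # cursor inside the basin -> non-zero pull
print(well.force(500, 500))  # cursor outside the basin -> (0.0, 0.0)
```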

Relevance: 70.00%

Abstract:

OBJECTIVE: Assimilating a diagnosis of complete spinal cord injury (SCI) takes time and is not easy, as patients know that there is no 'cure' at the present time. Brain-computer interfaces (BCIs) can facilitate daily living. However, inter-subject variability demands measurements with potential user groups and an understanding of how they differ from the healthy users with whom BCIs are more commonly tested. Thus, a three-class motor imagery (MI) screening (left hand, right hand, feet) was performed with a group of 10 able-bodied and 16 complete spinal-cord-injured people (paraplegics, tetraplegics) with the objective of determining what differences were present between the user groups and how these would affect each group's ability to interact with a BCI. APPROACH: Electrophysiological differences between the patient groups and healthy users were measured in terms of sensorimotor rhythm deflections from baseline during MI, electroencephalogram microstate scalp maps and strengths of inter-channel phase synchronization. Additionally, using a common spatial pattern algorithm and a linear discriminant analysis classifier, the classification accuracy was calculated and compared between groups. MAIN RESULTS: Both patient groups (tetraplegic and paraplegic) show some significant differences in event-related desynchronization strengths, exhibit significant increases in synchronization and reach significantly lower accuracies (mean (M) = 66.1%) than the group of healthy subjects (M = 85.1%). SIGNIFICANCE: The results demonstrate significant differences in electrophysiological correlates of motor control between healthy individuals and those who stand to benefit most from BCI technology (individuals with SCI). They highlight the difficulty of directly translating results from healthy subjects to participants with SCI and the challenges that therefore arise in providing BCIs to such individuals.
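The decoding pipeline named in the abstract (common spatial patterns followed by linear discriminant analysis) is a standard motor imagery approach. Below is a minimal sketch of such a pipeline using MNE-Python and scikit-learn on synthetic epoched EEG; the data shapes, number of components and cross-validation scheme are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for band-pass filtered, epoched EEG:
# 120 trials x 22 channels x 500 samples, three MI classes (left, right, feet).
X = rng.standard_normal((120, 22, 500))
y = np.repeat([0, 1, 2], 40)

# CSP spatial filtering (log-variance features) followed by LDA,
# mirroring the CSP + LDA approach named in the abstract.
clf = Pipeline([
    ("csp", CSP(n_components=6, log=True)),
    ("lda", LinearDiscriminantAnalysis()),
])

scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} (chance ~0.33)")
```

On real data the epochs would come from a band-pass filtered recording around the sensorimotor rhythms rather than from a random generator.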

Relevance: 70.00%

Abstract:

OBJECTIVE: Interference from spatially adjacent non-target stimuli is known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, to lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli make them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern that could evoke larger ERPs than the face pattern, but to design one that could reduce adjacent interference, annoyance and fatigue while evoking ERPs as good as those observed with the face pattern. APPROACH: A positive facial expression can be changed to a negative one by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern alternating between positive and negative expressions was used to attempt to minimize interference effects. This was compared against two other conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. MAIN RESULTS: The results showed that interference from adjacent stimuli, annoyance and the fatigue experienced by the subjects were reduced significantly (p < 0.05) by using the facial expression change pattern in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). SIGNIFICANCE: The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and significantly decreased the fatigue and annoyance experienced by BCI users (p < 0.05) compared to the face pattern.
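The conditions above are compared by classification accuracy and information transfer rate (ITR). The abstract does not spell out which ITR definition was used; a widely used one is Wolpaw's bits-per-selection formula, implemented below as an assumed stand-in with the selection rate left as a free parameter.

```python
import math


def wolpaw_itr(accuracy: float, n_classes: int, selections_per_min: float) -> float:
    """Information transfer rate in bits/min using the Wolpaw formula.

    bits/selection = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))
    This is one common ITR definition; the paper may use a different one.
    """
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0  # at or below chance level, report zero bits
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * selections_per_min


# Example: a 36-character speller at 85% accuracy, 4 selections per minute.
print(f"{wolpaw_itr(0.85, 36, 4.0):.1f} bits/min")
```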

Relevance: 70.00%

Abstract:

OBJECTIVE: To investigate the efficacy and effects of transcranial direct current stimulation (tDCS) on motor imagery brain-computer interface (MI-BCI) with robotic feedback for stroke rehabilitation. DESIGN: A sham-controlled, randomized controlled trial. SETTING: Patients recruited through a hospital stroke rehabilitation program. PARTICIPANTS: Subjects (N=19) who had incurred a stroke 0.8 to 4.3 years prior, had moderate to severe upper extremity functional impairment, and passed BCI screening. INTERVENTIONS: Ten sessions over 2 weeks, each consisting of 20 minutes of tDCS or sham stimulation before 1 hour of MI-BCI upper limb stroke rehabilitation with robotic feedback. Each rehabilitation session comprised 8 minutes of evaluation and 1 hour of therapy. MAIN OUTCOME MEASURES: Upper extremity Fugl-Meyer Motor Assessment (FMMA) scores measured at end-intervention at week 2 and at follow-up at week 4, online BCI accuracies from the evaluation part, and laterality coefficients of the electroencephalogram (EEG) from the therapy part of the 10 rehabilitation sessions. RESULTS: FMMA scores improved in both groups at week 4, but no intergroup differences were found at any time point. Online accuracies of the evaluation part from the tDCS group were significantly higher than those from the sham group. The EEG laterality coefficients from the therapy part were significantly higher in the tDCS group than in the sham group. CONCLUSIONS: The results suggest a role for tDCS in facilitating motor imagery in stroke.
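One of the outcome measures above is an EEG laterality coefficient. The exact definition used in this trial is not given in the abstract; a common convention contrasts band power over the two motor cortices, as in the hedged sketch below, where the formula, channel labels, frequency band and sign convention are all assumptions rather than the trial's specification.

```python
import numpy as np


def laterality_coefficient(power_ipsi: float, power_contra: float) -> float:
    """Assumed laterality coefficient: (ipsilesional - contralesional) /
    (ipsilesional + contralesional) band power, ranging from -1 to 1."""
    return (power_ipsi - power_contra) / (power_ipsi + power_contra)


def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power in [lo, hi] Hz from a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())


# Toy example: mu-band (8-12 Hz) power over C3 and C4 for a left-hemisphere
# lesion, so C3 is taken as ipsilesional and C4 as contralesional (assumed).
rng = np.random.default_rng(1)
eeg_c3 = rng.standard_normal(2500)  # 10 s at 250 Hz, stand-in signal
eeg_c4 = rng.standard_normal(2500)

lc = laterality_coefficient(band_power(eeg_c3, 250, 8, 12),
                            band_power(eeg_c4, 250, 8, 12))
print(f"Laterality coefficient: {lc:+.2f}")
```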

Relevance: 70.00%

Abstract:

In this work, we investigate the role of interactive components, frequently used in the construction of educational computer interfaces, in students' exploratory behaviour and in the learning of mathematical concepts. We selected the following components for this study: the combo box and the text field. From an educational point of view, these components play distinct roles: the first guides the student's choices during an exploratory process, while the second offers no guidance. To compare the role of these components, we developed two interactive computer interfaces through which the student can explore the graphical behaviour of a first-degree (linear) function. The two interfaces are identical except for the interactive component employed: one uses the combo box and the other the text field. Both exploratory behaviour and performance on knowledge tests were assessed from direct measurements recorded by the interfaces themselves. Exploratory behaviour was assessed through the number and type of the student's interactions with the interactive component; this logging is one of the distinctive features of this research, since it allows some of the student's behaviours to be observed during the interaction with the interface, rather than only before and after it. Within the limitations of the data collection tool used in this research, learning was measured by comparing performance on knowledge tests applied before and after the students used the interactive components. In this context, significant differences in the role of each component in exploratory behaviour and in learning were observed.
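The study contrasts a combo box (guided choices) with a free text field while students explore a linear function, and logs every interaction. The tkinter sketch below illustrates that kind of instrumented interface for f(x) = ax + b; the widget layout, the logged fields and the preset slope values are illustrative assumptions, and for brevity both widgets sit in one window, whereas the study built two separate interfaces.

```python
import tkinter as tk
from tkinter import ttk
from datetime import datetime

log = []  # each entry: (timestamp, widget, value) -- the kind of trace the study analyses


def record(widget: str, value: str) -> None:
    log.append((datetime.now().isoformat(timespec="seconds"), widget, value))
    print(log[-1])


def show_function(a: float) -> None:
    label.config(text=f"f(x) = {a:g}x + 1")  # fixed intercept, assumed for brevity


root = tk.Tk()
root.title("Exploring f(x) = ax + b (illustrative sketch)")
label = tk.Label(root, text="f(x) = ?")
label.pack(pady=4)

# Guided exploration: the combo box restricts the slope to a few preset values.
combo = ttk.Combobox(root, values=["-2", "-1", "0", "1", "2"], state="readonly")
combo.pack(pady=4)
def on_combo(_event):
    record("combo_box", combo.get())
    show_function(float(combo.get()))
combo.bind("<<ComboboxSelected>>", on_combo)

# Free exploration: the text field accepts any slope the student types.
entry = tk.Entry(root)
entry.pack(pady=4)
def on_entry(_event):
    value = entry.get()
    record("text_field", value)
    try:
        show_function(float(value))
    except ValueError:
        label.config(text="Please type a number")
entry.bind("<Return>", on_entry)

root.mainloop()
```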

Relevance: 70.00%

Abstract:

The monitoring of cognitive functions aims at gaining information about the current cognitive state of the user by decoding brain signals. In recent years, this approach has made it possible to acquire valuable information about the cognitive aspects of how humans interact with the external world. On this basis, researchers have begun to consider passive applications of the brain-computer interface (BCI) in order to provide a novel input modality for technical systems based solely on brain activity. The objective of this thesis is to demonstrate how passive BCI applications can be used to assess the mental states of users in order to improve human-machine interaction. Two main studies are proposed. The first investigates whether morphological variations of event-related potentials (ERPs) can be used to predict users' mental states (e.g. attentional resources, mental workload) during different reactive BCI tasks (e.g. P300-based BCIs), and whether this information can predict the subjects' performance in those tasks. In the second study, a passive BCI system able to estimate the user's mental workload online, relying on the combination of EEG and ECG biosignals, is proposed. The latter study was performed by simulating an operative scenario in which errors or lapses in performance could have significant consequences. The results showed that the proposed system can estimate the subjects' mental workload online, discriminating three task difficulty levels with high reliability.
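The second study combines EEG and ECG features to classify three levels of mental workload online. As a rough, generic illustration of that kind of multimodal fusion (not the thesis's actual pipeline), the sketch below concatenates assumed EEG band-power features with assumed heart-rate features and trains a classifier to separate three difficulty levels on synthetic data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_windows = 300  # synthetic analysis windows, 100 per difficulty level
y = np.repeat([0, 1, 2], 100)  # low / medium / high workload labels

# Assumed EEG features: frontal theta and parietal alpha power per window,
# with a small class-dependent shift so the toy problem is learnable.
eeg = rng.standard_normal((n_windows, 2)) + y[:, None] * [0.8, -0.5]
# Assumed ECG features: mean heart rate and a heart-rate-variability index.
ecg = rng.standard_normal((n_windows, 2)) + y[:, None] * [0.6, -0.4]

X = np.hstack([eeg, ecg])  # simple feature-level fusion of the two biosignals

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"3-class workload accuracy: {scores.mean():.2f} (chance ~0.33)")
```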

Relevance: 70.00%

Abstract:

Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.

This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
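A core idea described above is combining language information with dynamic data collection when estimating the target character. As a hedged illustration of that general strategy (not the dissertation's algorithms), the sketch below updates a posterior over candidate characters from simulated classifier scores, starts from a language-model prior, and stops collecting flashes once the top character's posterior clears a confidence threshold; the prior values, score model and threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
alphabet = list("ABCDEFGH")          # small illustrative character set
target = "C"

# Assumed language-model prior over the next character (would normally come
# from an n-gram or similar model conditioned on the text typed so far).
prior = np.array([0.20, 0.05, 0.30, 0.05, 0.15, 0.10, 0.10, 0.05])
log_post = np.log(prior)

threshold = 0.95    # assumed dynamic-stopping confidence threshold
max_sequences = 10  # cap on repeated stimulus presentations

for seq in range(1, max_sequences + 1):
    # Simulated per-character classifier evidence for one sequence of flashes:
    # the target tends to receive a higher score (stand-in for P300 detection).
    scores = rng.normal(0.0, 1.0, size=len(alphabet))
    scores[alphabet.index(target)] += 1.5
    # Treat scores as log-likelihoods up to a constant and update the posterior.
    log_post += scores
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    if post.max() >= threshold:
        break

print(f"Selected '{alphabet[int(post.argmax())]}' after {seq} sequence(s), "
      f"posterior {post.max():.2f}")
```

The same loop structure captures why dynamic data collection can raise communication rates: easy selections stop after few sequences while ambiguous ones keep collecting evidence.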

Relevance: 70.00%

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberration, the Point Spread Function and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was then generated based on the resized aberration, with the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
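Precompensating an image for a known ocular point spread function is, in essence, an inverse filtering problem. The sketch below shows one generic way to do it with a frequency-domain Wiener-style inverse filter in NumPy; the Gaussian stand-in PSF, the regularization constant and the clipping step are illustrative assumptions, not the dissertation's Zernike-based, pupil-dependent model.

```python
import numpy as np


def gaussian_psf(size: int, sigma: float) -> np.ndarray:
    """Stand-in PSF; the real method derives the PSF from measured Zernike
    wavefront aberrations and the current pupil diameter."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()


def precompensate(image: np.ndarray, psf: np.ndarray, k: float = 1e-2) -> np.ndarray:
    """Wiener-style inverse filter: boost the frequencies the eye's PSF will
    attenuate, so that blurring by the PSF approximately restores the image."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    inverse = np.conj(H) / (np.abs(H) ** 2 + k)   # regularized inverse filter
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * inverse))
    return np.clip(pre, 0.0, 1.0)  # displayable range; clipping loses some gain


# Toy check: precompensate a random "icon", then blur it with the same PSF
# to simulate viewing through the aberrated eye.
rng = np.random.default_rng(3)
icon = rng.random((64, 64))
psf = gaussian_psf(64, sigma=2.0)
pre = precompensate(icon, psf)
H = np.fft.fft2(np.fft.ifftshift(psf), s=pre.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(pre) * H))
print("RMS error after simulated viewing:", float(np.sqrt(np.mean((blurred - icon) ** 2))))
```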

Relevance: 60.00%

Abstract:

This is the final report of research project 2002-057-C: Enabling Team Collaboration with Pervasive and Mobile Computing. The research project was carried out by the Australian Cooperative Research Centre for Construction Innovation and has two streams that consider the use of pervasive computing technologies in two different contexts. The first context was the on-site deployment of mobile computing devices, whereas the second was the use and development of intelligent rooms based on sensed environments and new human-computer interfaces (HCI) for collaboration in the design office. The two streams present a model of team collaboration that relies on continuous communication with people and information to reduce information leakage. This report consists of five sections: (1) Introduction; (2) Research Project Background; (3) Project Implementation; (4) Case Studies and Outcomes; and (5) Conclusion and Recommendation. Section 1 presents a brief description of the research project, including general research objectives and structure. Section 2 introduces the background of the research and detailed information regarding project participants, objectives and significance, as well as the research methodology. A review of all research activities, such as the literature review and case studies, is summarised in Section 3 on project implementation. Section 4 then focuses on analysing the case studies and presents their outcomes. The conclusions and recommendations of the research project are summarised in Section 5. Other information supporting the content of the report, such as the research project schedule, is provided in the appendices. The purpose of the final project report is to provide industry partners with detailed information on the project activities and methodology, such as the implementation of pervasive computing technologies in real contexts. The report summarises the outcomes of the case studies and provides recommendations to industry partners on using new technologies to support better project collaboration.

Relevance: 60.00%

Abstract:

Accurate and fast decoding of speech imagery from electroencephalographic (EEG) data could serve as a basis for a new generation of brain-computer interfaces (BCIs) that are more portable and easier to use. However, decoding speech imagery from EEG is a hard problem due to many factors. In this paper we focus on the analysis of the classification step of speech imagery decoding for a three-class vowel speech imagery recognition problem. We empirically show that different classification subtasks may require different classifiers for accurate decoding, and we obtain a classification accuracy that improves on the best previously published results. We further investigate the relationship between the classifiers and different sets of features selected by the common spatial patterns method. Our results indicate that further improvement in BCIs based on speech imagery could be achieved by carefully selecting an appropriate combination of classifiers for the subtasks involved.
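The point above is that each binary subtask of the three-vowel problem may be best served by a different classifier on top of common spatial pattern features. The sketch below illustrates that idea generically: for each pair of classes it cross-validates a few candidate classifiers on CSP features and keeps the best one. The candidate set, data shapes and CSP settings are assumptions, not the paper's configuration.

```python
import numpy as np
from itertools import combinations
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic epoched EEG: 90 trials x 16 channels x 256 samples, 3 vowel classes.
X = rng.standard_normal((90, 16, 256))
y = np.repeat([0, 1, 2], 30)

candidates = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

# For every pairwise subtask, keep the candidate with the best cross-validated
# accuracy on CSP log-variance features.
for a, b in combinations(np.unique(y), 2):
    mask = np.isin(y, [a, b])
    Xp, yp = X[mask], y[mask]
    best = max(
        candidates.items(),
        key=lambda item: cross_val_score(
            make_pipeline(CSP(n_components=4, log=True), item[1]), Xp, yp, cv=5
        ).mean(),
    )
    print(f"Subtask {a} vs {b}: best classifier = {best[0]}")
```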

Relevance: 60.00%

Abstract:

This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues by analyzing participant responses either by gaze or by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent produced the shortest response times, with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than with the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.