889 results for Multimodal interface


Relevance:

80.00%

Publisher:

Abstract:

This dissertation presents the development of a multimodal platform for signal acquisition and processing. The proposed project is set in the context of developing multimodal interfaces for robotic devices aimed at motor rehabilitation, adapting the control of these devices according to the user's intention. The developed interface acquires, synchronizes, and processes electroencephalographic (EEG) and electromyographic (EMG) signals and signals from inertial measurement units (IMUs). Data acquisition is performed in experiments with healthy subjects executing lower-limb motor tasks. The goal is to analyze movement intention, muscle activation, and the actual onset of the performed movements through the EEG, EMG, and IMU signals, respectively. To this end, an offline analysis was carried out, using processing techniques for the biological signals and for the inertial-sensor signals. From the latter, the knee joint angles are also measured throughout the movements. An experimental test protocol was proposed for the tasks performed. The results showed that the proposed system was able to acquire, synchronize, process, and classify the signals in combination. Analyses of the classifiers' accuracy showed that the interface identified movement intention in 76.0 ± 18.2% of the movements. The largest mean movement-anticipation time, 716.0 ± 546.1 milliseconds, was obtained from the EEG signal; from the EMG signal alone, this value was 88.34 ± 67.28 milliseconds. The results of the biological signal processing stages, the joint angle measurements, and the accuracy and anticipation-time values were consistent with the current related literature.
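As an illustration of the anticipation-time analysis described in this abstract, the minimal sketch below compares a threshold-based onset estimate from an EMG channel against an IMU-derived movement onset. It is not the dissertation's actual pipeline: the sampling rate, threshold rule, and placeholder signals are assumptions introduced only for the example.

```python
import numpy as np

def detect_onset(signal, fs, threshold_std=3.0, baseline_s=1.0):
    """Return the index of the first sample exceeding a baseline-derived
    amplitude threshold (a common, simple onset-detection rule)."""
    envelope = np.abs(signal)                     # rectified signal
    baseline = envelope[: int(baseline_s * fs)]   # assumed rest period
    threshold = baseline.mean() + threshold_std * baseline.std()
    above = np.where(envelope > threshold)[0]
    return above[0] if above.size else None

# Hypothetical usage: anticipation time = IMU onset minus EMG onset.
fs = 1000                           # Hz, assumed sampling rate
emg = np.random.randn(5 * fs)       # placeholder EMG recording
imu = np.random.randn(5 * fs)       # placeholder gyroscope magnitude
emg_onset = detect_onset(emg, fs)
imu_onset = detect_onset(imu, fs)
if emg_onset is not None and imu_onset is not None:
    anticipation_ms = (imu_onset - emg_onset) / fs * 1000.0
    print(f"EMG preceded the detected movement by {anticipation_ms:.1f} ms")
```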

Relevance:

80.00%

Publisher:

Abstract:

This dissertation introduces the design of a multimodal, adaptive, real-time assistive system as an alternative human-computer interface for individuals with severe motor disabilities. The proposed design is based on the integration of a remote eye-gaze tracking system, voice recognition software, and a virtual keyboard. The methodology relies on a user profile that customizes eye-gaze tracking using neural networks. The user-profiling feature supports universal access to computing resources for a wide range of applications such as web browsing, email, word processing, and editing. The study is significant in terms of the integration of key algorithms to yield an adaptable, multimodal interface. The contributions of this dissertation stem from the following accomplishments: (a) establishment of the data transport mechanism between the eye-gaze system and the host computer, yielding a low failure rate of 0.9%; (b) accurate translation of eye data into cursor movement through successive steps that conclude with calibrated cursor coordinates from an improved conversion function, resulting in an average 70% reduction of the disparity between the point of gaze and the actual mouse cursor position compared with initial findings; (c) use of both a moving average and a trained neural network to minimize mouse-cursor jitter, yielding an average jitter reduction of 35%; (d) introduction of a new mathematical methodology to measure the degree of jitter of the mouse trajectory; and (e) embedding of an on-screen keyboard to facilitate text entry, together with a graphical interface used to generate user profiles for system adaptability. The adaptive nature of the interface is achieved through user profiles, which may contain the jitter and voice characteristics of a particular user as well as a customized list of the most commonly used words ordered according to the user's preference, in alphabetical or statistical order. This allows the system to provide the capability of interacting with a computer, and each time any of the sub-systems is retrained, the accuracy of the interface response improves further.
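The dissertation combines a moving average with a trained neural network and defines its own jitter metric; neither is detailed in the abstract. The sketch below, built on invented data, only illustrates the moving-average half and one plausible displacement-based way to quantify cursor jitter.

```python
import numpy as np

def moving_average(points, window=5):
    """Smooth a sequence of (x, y) gaze samples with a simple moving average."""
    kernel = np.ones(window) / window
    xs = np.convolve(points[:, 0], kernel, mode="valid")
    ys = np.convolve(points[:, 1], kernel, mode="valid")
    return np.column_stack([xs, ys])

def jitter(points):
    """One possible jitter measure: mean frame-to-frame cursor displacement."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).mean()

gaze = np.cumsum(np.random.randn(200, 2), axis=0)  # placeholder gaze trace
smoothed = moving_average(gaze)
print(f"jitter before: {jitter(gaze):.2f}, after: {jitter(smoothed):.2f}")
```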

Relevance:

70.00%

Publisher:

Abstract:

Integrated master's dissertation in Information Systems Engineering and Management.

Relevance:

60.00%

Publisher:

Abstract:

This paper evaluates the usability of an Intelligent Wheelchair (IW) in both real and simulated environments. The wheelchair is controlled at a high level by a flexible multimodal interface that takes voice commands, facial expressions, head movements, and a joystick as its main inputs. A quasi-experimental design was applied, with a deterministic sample and a questionnaire based on the System Usability Scale. The subjects were divided into two independent samples: 46 individuals performed the experiment with the Intelligent Wheelchair in a simulated environment (28 using the different commands in a fixed sequence and 18 free to choose the command), and 12 individuals performed the experiment with a real IW. The main conclusion of this study is that the usability of the Intelligent Wheelchair is higher in the real environment than in the simulated one. However, there was no statistical evidence of differences between the real and simulated wheelchairs in terms of safety and control. Moreover, most users considered the multimodal way of driving the wheelchair very practical and satisfactory. It may thus be concluded that the multimodal interface enables very easy and safe control of the IW in both simulated and real environments.
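The usability questionnaire relies on the System Usability Scale. For reference, the standard SUS scoring rule (odd items contribute response − 1, even items contribute 5 − response, and the total is multiplied by 2.5) fits in a few lines; the responses below are invented.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even negative
    return total * 2.5  # scale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # hypothetical questionnaire -> 85.0
```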

Relevance:

60.00%

Publisher:

Abstract:

The main objective of this project is the development of an intelligent wheelchair (IW) platform that may be easily adapted to any commercial electric-powered wheelchair and aid any person with special mobility needs. To achieve this objective, three distinct control methods were implemented in the IW: manual, shared, and automatic. Several algorithms were developed for each of these control methods, and this paper presents three of the most significant ones, with emphasis on the shared control method. Experiments were performed with users suffering from cerebral palsy, using a realistic simulator, in order to validate the approach. The experiments revealed the importance of shared (aided) control for users with severe disabilities. The patients still felt they had complete control over the wheelchair movement when using shared control at a 50% level, and this control type was therefore very well accepted. It may be used in intelligent wheelchairs since it corrects the direction in case of involuntary movements by the user while still giving them a sense of complete control over the IW movement.
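The abstract does not give the control law behind the 50% sharing level; a simple linear blend of the user's and the autonomous controller's velocity commands is one common formulation and is sketched below with hypothetical values.

```python
def shared_control(user_cmd, auto_cmd, user_weight=0.5):
    """Blend the user's joystick command with the autonomous controller's
    command; user_weight = 0.5 corresponds to a 50% sharing level."""
    return tuple(user_weight * u + (1.0 - user_weight) * a
                 for u, a in zip(user_cmd, auto_cmd))

# Hypothetical (linear, angular) velocity commands:
user = (0.8, 0.4)   # involuntary drift to the right
auto = (0.8, 0.0)   # planner keeps the corridor centerline
print(shared_control(user, auto))  # -> (0.8, 0.2): the drift is halved
```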

Relevance:

60.00%

Publisher:

Abstract:

Intelligent wheelchairs (IW) are technologies that can increase the autonomy and independence of elderly people and of patients suffering from some kind of disability. Intelligent wheelchairs and human-machine interaction are currently very active research areas. This paper presents a methodology and a Data Analysis System (DAS) that provides a command language adapted to the user of the IW. This command language is a set of input sequences that can be created from a single input device or from a combination of the inputs available in a multimodal interface. The results show statistical evidence that the mean evaluation of the DAS-generated command language is higher than the mean evaluation of the command language recommended by the health specialist (p-value = 0.002), with a sample of 11 cerebral palsy users. This work demonstrates that it is possible to adapt an intelligent wheelchair interface to the user even when users present heterogeneous and severe physical constraints.
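The command language produced by the DAS is, in essence, a mapping from user-specific input sequences to wheelchair commands. The sketch below is only a hypothetical illustration of such a mapping; the actual sequences are derived by the DAS from each user's measured abilities.

```python
# Hypothetical adapted command language: each wheelchair command is bound to
# a short input sequence judged easiest for this particular user.
command_language = {
    ("head_right",): "turn right",
    ("head_left",): "turn left",
    ("blink", "blink"): "stop",
    ("joystick_forward",): "go forward",
}

def interpret(sequence, language=command_language):
    """Return the wheelchair command bound to an input sequence, if any."""
    return language.get(tuple(sequence), "no action")

print(interpret(["blink", "blink"]))  # -> "stop"
```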

Relevance:

40.00%

Publisher:

Abstract:

The ability to view and interact with 3D models has existed for a long time; however, vision-based 3D modelling has seen only limited success in applications, as it faces many technical challenges. Hand-held mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras, and fast processors, are especially attractive for advancing the state of the art in virtual reality systems, and their ubiquity and fast Internet connections open a path to distributed and collaborative development. This path, however, has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is rendering these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for visualizing, navigating, and reviewing large-scale 3D CAD models, developed under the CEDAR (Collaborative Engineering Design and Review) project and focused on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to allow users to navigate through large-scale models. IT professionals, architects, civil engineers, and oil-industry experts took part in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about them; the lessons learned are presented in this document. A quantitative study of the different navigation modes was subsequently carried out to determine which mode is best suited to a given situation.
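The abstract does not detail CEDAR's navigation modes; as a generic illustration of how a mobile device's inertial sensors can drive navigation, the sketch below integrates gyroscope rates into a virtual-camera orientation (all names and values are assumptions).

```python
def orient_camera(camera_yaw, camera_pitch, gyro_yaw_rate, gyro_pitch_rate, dt):
    """Integrate device rotation rates (rad/s) into the virtual camera
    orientation, a typical 'look around' navigation scheme."""
    return camera_yaw + gyro_yaw_rate * dt, camera_pitch + gyro_pitch_rate * dt

yaw, pitch = 0.0, 0.0
for yaw_rate, pitch_rate in [(0.2, 0.0), (0.2, -0.1), (0.0, -0.1)]:  # sample gyro readings
    yaw, pitch = orient_camera(yaw, pitch, yaw_rate, pitch_rate, dt=0.1)
print(yaw, pitch)
```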

Relevance:

40.00%

Publisher:

Abstract:

The results of research on an intelligent multimodal man-machine interface and virtual reality means for assistive medical systems, including computers and mechatronic systems (robots), are discussed. Gesture translation for people with disabilities, the learning-by-showing technology, and a virtual operating room with 3D visualization are presented in this report and were announced at the international exhibition "Intelligent and Adaptive Robots–2005".

Relevance:

30.00%

Publisher:

Abstract:

In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than with the individual modalities. However, when combining these modalities the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from these data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques on silent speech data, in a dataset with four non-invasive and promising modalities, namely video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic mean-geometric mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy for both the individual and the combined modalities. For instance, on the video component we attain relative performance gains of 36.2% in error rate. FS is also useful as pre-processing for feature fusion. Copyright © 2014 ISCA.
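As an illustration of one of the unsupervised filters mentioned above, the sketch below scores each feature by the ratio of the arithmetic to the geometric mean of its absolute values and keeps the top-k features; the exact formulation and selection procedure used in the paper may differ.

```python
import numpy as np

def am_gm_ratio(X, eps=1e-12):
    """Score features by the ratio of the arithmetic to the geometric mean of
    their absolute values; a larger ratio indicates more dispersion and, under
    this unsupervised filter, a potentially more relevant feature."""
    A = np.abs(X) + eps
    am = A.mean(axis=0)
    gm = np.exp(np.log(A).mean(axis=0))
    return am / gm

def select_k_best(X, k):
    """Keep the indices of the k features with the highest AM-GM score."""
    scores = am_gm_ratio(X)
    return np.argsort(scores)[::-1][:k]

X = np.random.rand(100, 50)     # placeholder feature matrix (samples x features)
print(select_k_best(X, k=10))   # indices of the 10 retained features
```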

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Informatics Engineering.

Relevance:

30.00%

Publisher:

Abstract:

TESSA is a toolkit for experimenting with sensory augmentation. It includes hardware and software to facilitate rapid prototyping of interfaces that enhance one sense using information gathered from another. The toolkit contains a range of sensors (e.g. ultrasonics, temperature sensors) and actuators (e.g. tactors or stereo sound), designed modularly so that inputs and outputs can easily be swapped in and out and customized using TESSA's graphical user interface (GUI), with real-time feedback. The system runs on a Raspberry Pi with a built-in touchscreen, providing a compact and portable form that is amenable to field trials. At CHI Interactivity, the audience will have the opportunity to experience sensory augmentation effects using this system and to design their own sensory augmentation interfaces.
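TESSA's sensor-to-actuator mappings are configured through its GUI rather than written by hand; the sketch below merely illustrates the kind of mapping such an interface might prototype, converting a hypothetical ultrasonic range reading into a tactor drive level.

```python
def distance_to_vibration(distance_cm, max_range_cm=300.0):
    """Map an ultrasonic range reading to a tactor drive level in [0, 1]:
    the closer the obstacle, the stronger the vibration."""
    clipped = min(max(distance_cm, 0.0), max_range_cm)
    return 1.0 - clipped / max_range_cm

for d in (20, 150, 290):  # hypothetical readings in centimetres
    print(d, round(distance_to_vibration(d), 2))
```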

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Digital imaging methods are a centrepiece of the diagnosis and management of macular disease. A recently developed imaging device combines simultaneous confocal scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT). Using clinical examples, the benefit of this technique for diagnosis and therapeutic follow-up is assessed. METHODS: The combined OCT-SLO system (Ophthalmic Technologies Inc., Toronto, Canada) allows confocal en-face fundus imaging and high-resolution OCT scanning at the same time. OCT images are obtained from transversal line scans. A single light source and identical scanning rates yield a pixel-to-pixel correspondence of images. Three-dimensional thickness maps are derived from C-scan stacking. RESULTS: We followed up patients with cystoid macular edema, pigment epithelium detachment, macular hole, venous branch occlusion, and vitreoretinal tractions during their course of therapy. The new imaging method illustrates the reduction of cystoid volume, e.g. after intravitreal injections of either angiostatic drugs or steroids. C-scans are used to assess lesion diameters, visualize pathologies involving the vitreoretinal interface, and quantify changes in retinal thickness. CONCLUSION: The combined OCT-SLO system creates both topographic and tomographic images of the retina. New therapeutic options can be followed up closely by observing changes in lesion thickness and cyst volumes. Further studies are needed before routine clinical use.

Relevance:

30.00%

Publisher:

Abstract:

A vision of the future of intraoperative monitoring for anesthesia is presented: a multimodal world based on advanced sensing capabilities. I explore progress towards this vision, outlining the general nature of the anesthetist's monitoring task and the dangers of attentional capture. Research on attention indicates different kinds of attentional control, such as endogenous and exogenous orienting, which are critical to how awareness of patient state is maintained but which may work differently across modalities. Four kinds of medical monitoring displays are surveyed: (1) integrated visual displays, (2) head-mounted displays, (3) advanced auditory displays, and (4) auditory alarms. Achievements and challenges in each area are outlined. In future research, we should focus more clearly on identifying anesthetists' information needs, and we should develop models of attention within and across modalities that are more capable of guiding design. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment, making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a range of different audio designs showed that egocentric sounds reduced task completion time and perceived annoyance, and allowed users to walk closer to their preferred walking speed. The second is a sonically enhanced 2D gesture recognition system for use on a belt-mounted PDA. An evaluation of the system with and without audio feedback showed that users' gestures were more accurate when dynamically guided by audio feedback. These novel interaction techniques demonstrate effective alternatives to visual-centric interface designs on mobile devices.
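As a rough illustration of the first technique, the sketch below maps a head-pointing direction to one slice of an eight-item radial pie menu; the item names, sector count, and selection rule are assumptions, not the paper's implementation.

```python
def select_slice(yaw_deg, n_items=8):
    """Map a head-pointing direction (yaw, in degrees) to one slice of a
    radial pie menu with n_items equally sized sectors, item 0 centred at 0°."""
    sector = 360.0 / n_items
    return int(((yaw_deg % 360.0) + sector / 2) // sector) % n_items

menu = ["calls", "music", "maps", "messages", "weather", "news", "notes", "back"]
print(menu[select_slice(95.0)])  # head turned roughly 90 degrees selects "maps"
```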