973 results for computer interface


Relevance: 60.00%

Abstract:

Recognizing driver intention from electroencephalography (EEG) signals can be useful for developing brain-computer interfaces (BCI) to be used in synergy with intelligent vehicles. This may improve the quality of interaction between the driver and the car, for example by providing a smart-car response aligned with the driver's intention. In this study, anticipation is considered to be the cognitive state that leads to specific actions while driving a car. We therefore propose to investigate the presence of anticipatory patterns in EEG signals during driving in order to determine two specific actions, (1) turning left and (2) turning right, some milliseconds before those actions happen. An experimental protocol was proposed to record EEG signals from 5 subjects while they operated a non-invasive virtual-reality simulator, designed for this experiment, that simulates driving a virtual car. The experimental protocol is a variant of the contingent negative variation (CNV) paradigm with Go and No-go conditions in the virtual-reality driving system. The results presented in this study indicate the presence of anticipatory patterns in slow cortical potentials observed in the time domain (averaged EEG signals) and in the frequency domain (power spectra and phase coherence). This opens a range of possibilities for developing BCI systems, based on anticipatory signals, that connect the driver to the intelligent vehicle, favoring decision-making that takes drivers' intentions into account and may eventually prevent accidents while driving.
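The two analyses mentioned above — averaging EEG epochs in the time domain and estimating power spectra in the frequency domain — can be sketched as follows. This is a minimal illustration, not the study's code: the sampling rate, epoch counts, and the use of SciPy's Welch estimator are assumptions, and the random arrays stand in for real EEG epochs.

```python
import numpy as np
from scipy.signal import welch

fs = 256                      # assumed sampling rate (Hz)
n_trials, n_samples = 40, fs  # 40 one-second epochs per condition (assumed)

rng = np.random.default_rng(0)
left_epochs = rng.standard_normal((n_trials, n_samples))   # stand-in for "turn left" epochs
right_epochs = rng.standard_normal((n_trials, n_samples))  # stand-in for "turn right" epochs

# Time domain: grand-average over trials, exposing slow cortical potentials (CNV-like drift)
left_avg = left_epochs.mean(axis=0)
right_avg = right_epochs.mean(axis=0)

# Frequency domain: Welch power spectra of the averaged signals
f, left_psd = welch(left_avg, fs=fs, nperseg=128)
f, right_psd = welch(right_avg, fs=fs, nperseg=128)
print(f.shape, left_psd.shape)  # one PSD value per frequency bin
```

With real recordings, the per-condition averages and spectra would then be compared across the Go and No-go conditions.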

Relevance: 60.00%

Abstract:

This work analyzes the behavior of the concentrated stresses generated around a nozzle connected to a pressure vessel. For this analysis we used the finite element method through a computer interface, the software ANSYS WORKBENCH. It was first necessary to study the software intensively, and also to study the ASME Code, Section VIII, which governs the standards used for pressure vessels. We analyzed three cases, which differ primarily in the diameter of the nozzle, in order to examine how the stresses vary with diameter. The nozzle diameters were 35, 75 and 105 mm. After modeling the vessel, an internal pressure of 0.5 MPa was applied. The smallest diameter produced the lowest concentrated stresses, varying between 1 and 223 MPa. Increasing the diameter of the nozzle increased the concentrated stresses around the nozzle/vessel junction. The maximum stresses increased by 78% when the diameter was increased from 35 to 75 mm, and by around 43% when it was increased from 75 to 105 mm. These figures show that stress concentrations increased with nozzle diameter, but not linearly.
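As a quick sanity check, the two relative increases reported in the abstract can be chained from the 223 MPa peak of the smallest nozzle. The intermediate peak values below are derived from the stated percentages, not reported directly in the abstract.

```python
# Values taken from the abstract; intermediate peaks derived from the stated increases.
max_stress_35 = 223.0                 # MPa peak stress, 35 mm nozzle
max_stress_75 = max_stress_35 * 1.78  # +78% going from 35 to 75 mm
max_stress_105 = max_stress_75 * 1.43 # +43% going from 75 to 105 mm

for d, s in [(35, max_stress_35), (75, max_stress_75), (105, max_stress_105)]:
    print(f"{d} mm nozzle -> ~{s:.0f} MPa peak stress")

# Non-linearity: the relative growth slows even as the absolute stress keeps rising
assert (max_stress_75 / max_stress_35) > (max_stress_105 / max_stress_75)
```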

Relevance: 60.00%

Abstract:

Mixed Reality combines virtual and real worlds into scenes that offer the user an intuitive way of interacting, tailored to a specific application. This tutorial paper presents the fundamental concepts of this emergent kind of human-computer interface.

Relevance: 60.00%

Abstract:

The elimination of all external incisions is an important step in reducing the invasiveness of surgical procedures. Natural Orifice Translumenal Endoscopic Surgery (NOTES) is an incision-less surgery and provides explicit benefits such as reducing patient trauma and shortening recovery time. However, technological difficulties impede the widespread utilization of the NOTES method. A novel robotic tool has been developed, which makes NOTES procedures feasible by using multiple interchangeable tool tips. The robotic tool has the capability of entering the body cavity through an orifice or a single incision using a flexible articulated positioning mechanism and once inserted is not constrained by incisions, allowing for visualization and manipulations throughout the cavity. Multiple interchangeable tool tips of the robotic device initially consist of three end effectors: a grasper, scissors, and an atraumatic Babcock clamp. The tool changer is capable of selecting and switching between the three tools depending on the surgical task using a miniature mechanism driven by micro-motors. The robotic tool is remotely controlled through a joystick and computer interface. In this thesis, the following aspects of this robotic tool will be detailed. The first-generation robot is designed as a conceptual model for implementing a novel mechanism of switching, advancing, and controlling the tool tips using two micro-motors. It is believed that this mechanism achieves a reduction in cumbersome instrument exchanges and can reduce overall procedure time and the risk of inadvertent tissue trauma during exchanges with a natural orifice approach. Also, placing actuators directly at the surgical site enables the robot to generate sufficient force to operate effectively. 
Mounting the multifunctional robot on the distal end of an articulating tube provides freedom from restriction on the robot kinematics and helps solve some of the difficulties otherwise faced during surgery using NOTES or related approaches. The second-generation multifunctional robot is then introduced, in which the overall size is reduced and two arms provide two additional degrees of freedom, resulting in feasibility of insertion through the esophagus and increased dexterity. Improvements are necessary in future iterations of the multifunctional robot; however, the work presented is a proof of concept for NOTES robots capable of abdominal surgical interventions.

Relevance: 60.00%

Abstract:

This paper describes the steps for developing a course and its structure in the Moodle virtual learning environment. The research consisted of applying nursing content to offer an online course, in an international workshop, to a group of undergraduate nursing students from Brazil and Portugal. Distinct stages were recorded during the research, from course planning through the construction and transformation of the content to its delivery to the students. The interactive activities and content were prepared by the teachers with the participation of a technical team. The paper presents specific procedures and the roles to be played by teachers, specialists, students, and technicians. The results of developing and offering the online course pointed to aspects to be improved in the work process, in the format of the content, and in the use of the tools.

Relevance: 60.00%

Abstract:

In this thesis, an EEG-based BCI system was developed and tested that exploits the modulation of sensorimotor rhythms through motor imagery of the right and left hand. To improve the separability of the two mental states, the CSP (Common Spatial Pattern) algorithm was used in combination with a linear SVM classifier. The two mental states were employed to control the movement (rotation) of a one-degree-of-freedom upper-limb model simulated on screen. The core of the thesis work consisted of developing the BCI system software (based on the LabVIEW 2011 platform), described in the thesis. The complete system was then tested on 4 subjects over 6 training sessions.
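A minimal sketch of the pipeline described above — CSP spatial filtering followed by a linear SVM on log-variance features — might look like this in Python. The channel counts, trial counts, and synthetic data are assumptions for illustration; the thesis itself implemented the system in LabVIEW 2011.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_ch, n_samp = 30, 8, 200
X_left = rng.standard_normal((n_trials, n_ch, n_samp))   # stand-in left-hand MI epochs
X_right = rng.standard_normal((n_trials, n_ch, n_samp))  # stand-in right-hand MI epochs
X_right[:, 0] *= 3.0  # inject a class-dependent variance difference on one channel

def avg_cov(X):
    # average normalized spatial covariance over trials
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

C_l, C_r = avg_cov(X_left), avg_cov(X_right)
# CSP as a generalized eigenproblem: C_l w = lambda (C_l + C_r) w
evals, W = eigh(C_l, C_l + C_r)
filters = np.concatenate([W[:, :2], W[:, -2:]], axis=1)  # two filters per extreme

def features(X):
    Z = np.einsum("cf,tcs->tfs", filters, X)             # spatially filtered epochs
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))  # log-variance features

X_feat = np.vstack([features(X_left), features(X_right)])
y = np.array([0] * n_trials + [1] * n_trials)
clf = SVC(kernel="linear").fit(X_feat, y)
print("training accuracy:", clf.score(X_feat, y))
```

On real motor-imagery EEG, the same features would be evaluated with held-out trials rather than training accuracy.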

Relevance: 60.00%

Abstract:

Every year there is a growing number of people affected by neurodegenerative diseases such as amyotrophic lateral sclerosis, multiple sclerosis, and Parkinson's disease, as well as people with severe motor disabilities due to stroke, cerebral palsy, or spinal cord injury. These conditions often involve highly disabling, permanent impairment of the nervous pathways responsible for controlling the muscles involved in the voluntary execution of actions. In recent years, many research groups have worked on developing systems capable of carrying out the user's intentions. Such systems are generally called neural interfaces and are not designed to operate autonomously but to interact with the subject. These technologies, also known as Brain Computer Interfaces (BCI), enable direct communication between the brain and an external device, generally based on electroencephalography (EEG), connecting the central nervous system to an external peripheral. These tools do not use the usual efferent pathways involved in producing actions, such as nerves and muscles; instead, they link brain activity to a computer that records and interprets its variations, thus making it possible to restore the damaged connections in an alternative way and to recover, at least in part, the lost functions. The results of numerous studies show that BCI systems can allow people with severe motor disabilities to share their intentions with the surrounding world, and they therefore demonstrate the important role such systems can play in certain phases of these people's lives.

Relevance: 60.00%

Abstract:

The functioning of the human brain, the organ responsible for our every action and thought, has always been of great interest to scientific research. Once it was understood how groups of neurons develop electrical potentials in response to stimuli, it became possible to plot their time course with the advent of ElectroEncephaloGraphy (EEG). This technology has become part of routine examinations in neuropsychology research and clinical practice, since it allows the diagnosis and discrimination of the various types of epilepsy, the presence of head trauma, and other pathologies of the central nervous system. Unfortunately, it has several shortcomings: the signal is affected by noise and requires appropriate processing through filtering and amplification, and it remains sensitive to the inhomogeneity of biological tissues, making it difficult to identify the signal sources that were active during the exam (the so-called inverse problem). In recent decades, research has led to new investigation techniques; of particular interest are High-Resolution ElectroEncephaloGraphy (HREEG) and MagnetoEncephaloGraphy (MEG). HREEG employs a larger number of electrodes (up to 256), supported by accurate mathematical models, to approximate the distribution of electric potential on the subject's scalp, guaranteeing better spatial resolution and greater confidence in locating the neuronal sources. Progress in the field of superconductors has made MEG possible, which can record the weak magnetic fields produced by cortical electrical signals, providing information immune to tissue inhomogeneity and complementing EEG in scientific research. These new technologies have opened new fields of development, most importantly the possibility of controlling prostheses and devices through mental effort (Brain Computer Interfaces). The future looks promising for further innovation.

Relevance: 60.00%

Abstract:

GuideView is a system designed for structured, multi-modal delivery of clinical guidelines. Clinical instructions are presented simultaneously in voice, text, pictures, video, or animations. Users navigate using mouse clicks and voice commands. An evaluation study performed at a medical simulation laboratory found that voice and video instructions were rated highly.

Relevance: 60.00%

Abstract:

OBJECTIVE: Interruptions are known to have a negative impact on activity performance. Understanding how an interruption contributes to human error is limited because there is not a standard method for analyzing and classifying interruptions. Qualitative data are typically analyzed by either a deductive or an inductive method. Both methods have limitations. In this paper, a hybrid method was developed that integrates deductive and inductive methods for the categorization of activities and interruptions recorded during an ethnographic study of physicians and registered nurses in a Level One Trauma Center. Understanding the effects of interruptions is important for designing and evaluating informatics tools in particular as well as improving healthcare quality and patient safety in general. METHOD: The hybrid method was developed using a deductive a priori classification framework with the provision of adding new categories discovered inductively in the data. The inductive process utilized line-by-line coding and constant comparison as stated in Grounded Theory. RESULTS: The categories of activities and interruptions were organized into a three-tiered hierarchy of activity. Validity and reliability of the categories were tested by categorizing a medical error case external to the study. No new categories of interruptions were identified during analysis of the medical error case. CONCLUSIONS: Findings from this study provide evidence that the hybrid model of categorization is more complete than either a deductive or an inductive method alone. The hybrid method developed in this study provides the methodical support for understanding, analyzing, and managing interruptions and workflow.

Relevance: 60.00%

Abstract:

People often use tools to search for information. To improve the quality of an information search, it is important to understand how internal information, stored in the user's mind, and external information, represented by the interfaces of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I also developed interface prototypes for the search tasks over relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory was adopted as a theoretical framework for analyzing and predicting the search performance over relational data. It was shown that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness. 
In particular, the more external representations are used, the better the search task performance; the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.

Relevance: 60.00%

Abstract:

In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions, and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation or importation of very simple or highly detailed cellular structures in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment that mimics cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding, which may impact signaling cascades in small sub-cellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, and channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. Components of the CDS are defined generically, so that the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architectures. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and design implementation, provides an overview of the types of features available, and highlights their utility in demonstrations.
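The core event-driven idea — keeping each molecule's next collision or reaction in a time-ordered queue and always processing the earliest pending event — can be sketched with a priority queue. The event types and times below are invented for illustration and are not CDS's actual data structures.

```python
import heapq

# Min-heap of (time, event_id, description); the id breaks ties deterministically.
events = []
for eid, (t, what) in enumerate([(0.4, "collision A-B"),
                                 (0.1, "reaction A->C"),
                                 (0.25, "wall bounce B")]):
    heapq.heappush(events, (t, eid, what))

processed = []
while events:
    t, eid, what = heapq.heappop(events)  # always the earliest pending event
    processed.append((t, what))
    # a real simulator would now update molecule state and schedule new events

print(processed)  # events come out in time order regardless of insertion order
```

The advantage over fixed-timestep schemes is that event timing (e.g. the exact moment of a collision) is computed exactly rather than discretized.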

Relevance: 60.00%

Abstract:

Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also usually used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general, and, although the introduction of a penalty is probably the most popular type, it is just one out of multiple forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification, and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. Supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner. Finally, we present a heuristic for inducing structures of Gaussian Bayesian networks using L1-regularization as a filter.
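The sparsity-inducing effect of L1-regularization that the dissertation builds on can be illustrated with a toy regression: the penalty drives most coefficients exactly to zero, effectively selecting a subset of the inputs. The data, the choice of scikit-learn's Lasso, and the penalty value are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
true_coef = np.zeros(p)
true_coef[:3] = [2.0, -1.5, 1.0]  # only 3 of 20 inputs are actually relevant
y = X @ true_coef + 0.1 * rng.standard_normal(n)

# L1 penalty (alpha chosen for illustration) zeroes out irrelevant coefficients
model = Lasso(alpha=0.1).fit(X, y)
n_selected = np.count_nonzero(model.coef_)
print("non-zero coefficients:", n_selected, "of", p)
```

An L2 (ridge) penalty on the same data would shrink coefficients but leave essentially all of them non-zero, which is why L1 is the natural tool for parsimonious representations.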

Relevance: 60.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 60.00%

Abstract:

This dissertation describes the research carried out in developing an MPS (Multipurpose Portable System), which consists of an instrument and many accessories. The instrument is portable, hand-held, and rechargeable-battery operated, and it measures the temperature, absorbance, and concentration of samples using optical principles. The system also performs auxiliary functions such as incubation and mixing. It can be used in environmental, industrial, and medical applications.

The research emphasis is on system modularity, easy configuration, accuracy of measurements, power management schemes, reliability, low cost, computer interface, and networking. The instrument can send data to a computer for analysis and presentation, or to a printer.

This dissertation presents a full working system. This involved the integration of hardware, firmware for the micro-controller in assembly language, software in C, and other application modules.

The instrument contains the Optics, Transimpedance Amplifiers, Voltage-to-Frequency Converters, LCD display, Lamp Driver, Battery Charger, Battery Manager, Timer, Interface Port, and Micro-controller.

The accessories are a Printer, a Data Acquisition Adapter (to transfer the measurements to a computer via the Printer Port and expand the Analog/Digital conversion capability), a Car Plug Adapter, and an AC Transformer. The system has been fully evaluated for fault tolerance, and these schemes are also presented.