962 results for User-Computer Interface


Relevance: 90.00%

Abstract:

A brain-computer music interface (BCMI) is developed to allow continuous modification of the tempo of dynamically generated music. Six of seven participants are able to control the BCMI with significant accuracy, and their performance is observed to improve over time.

Relevance: 90.00%

Abstract:

This chapter describes the use of a graphical humane interface: a Virtual Salesperson. The face of the Virtual Salesperson is a generic Facial Animation Engine developed at the University of Genova in Italy; it uses a 3-D computer graphics model based on the MPEG-4 standard, supplemented by Cyberware scans for facial detail. The appearance of the head may be modified by Facial Definition Parameters to model the required visage more accurately, allowing one model to represent many different Talking Heads. The “brain” of the Virtual Salesperson, developed at Curtin University, integrates natural language parsing, text-to-speech synthesis, and artificial intelligence systems to produce a “bot” capable of guiding a user through a question-and-answer sales enquiry. The Virtual Salesperson is a specific example of a generic human-computer interface: a Talking Head.
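The chapter describes the Talking Head as a pipeline from user question to an animated, speaking response. As a rough sketch of that control flow only, here is a minimal Python illustration; every function and class name in it is hypothetical, standing in for the natural-language "brain", the text-to-speech synthesizer, and the MPEG-4 Facial Animation Engine named above.

```python
# Hypothetical sketch of a Talking Head pipeline (names are illustrative,
# not the actual Curtin/Genova APIs described in the chapter).
from dataclasses import dataclass

@dataclass
class Frame:
    """One animation frame: MPEG-4 Facial Animation Parameters plus audio."""
    faps: list[float]   # MPEG-4 defines 68 FAPs
    audio: bytes

def answer_query(question: str) -> str:
    # Stand-in for the natural-language parsing + AI "brain".
    return "That model is in stock for $499."

def synthesize_speech(text: str) -> bytes:
    # Stand-in for the text-to-speech synthesizer.
    return text.encode("utf-8")  # placeholder waveform

def animate(text: str, audio: bytes) -> list[Frame]:
    # Stand-in for the Facial Animation Engine: derive FAPs
    # (e.g. lip shapes) from the utterance, frame by frame.
    return [Frame(faps=[0.0] * 68, audio=audio)]

def virtual_salesperson(question: str) -> list[Frame]:
    text = answer_query(question)      # 1. "brain": parse and answer
    audio = synthesize_speech(text)    # 2. voice: synthesize speech
    return animate(text, audio)        # 3. face: drive the 3-D head

frames = virtual_salesperson("How much is the laptop?")
print(len(frames), "frame(s) generated")
```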

Relevance: 90.00%

Abstract:

Information technology research over the past two decades suggests that the installation and use of computers fundamentally affects the structure and function of organisations and, in particular, the workers in these organisations. Following the release of the IBM Personal Computer in 1982, microcomputers have become an integral part of most work environments. The accounting services industry, in particular, has felt the impact of this ‘microcomputer revolution’. In Big Six accounting firms, there is almost one microcomputer for each professional accountant employed. Notwithstanding this, little research has been done on the effect of microcomputers on the work outcomes of professional accountants working in these firms. This study addresses this issue. It assesses, in an organisational setting, how accountants' perceptions of ease of use and usefulness of microcomputers act on their computer anxieties, microcomputer attitudes and use to affect their job satisfaction and job performance. The research also examines how different types of human-computer interfaces affect the relationships between accountants' beliefs about microcomputer utility and ease of use, computer anxiety, microcomputer attitudes and microcomputer use.

To attain this research objective, a conceptual model was first developed. The model indicates that work outcomes (job satisfaction and job performance) of professional accountants using microcomputers are influenced by users' perceptions of ease of use and usefulness of microcomputers via paths through (a) the level of computer anxiety experienced by users, (b) the general attitude of users toward using microcomputers, and (c) the extent to which microcomputers are used by individuals. Empirically testable propositions were derived from the model to test the postulated relationships between these constructs. The study also tested whether or not users of different human-computer interfaces reacted differently to the perceptions and anxieties they hold about microcomputers and their use in the workplace. It was argued that users of graphical interfaces, because of the characteristics of those interfaces, react differently to their perceptions and anxieties about microcomputers compared with users of command-line (or text-based) interfaces.

A passive-observational study in a field setting was used to test the model and the research propositions. Data were collected from 164 professional accountants working in a Big Six accounting firm in a metropolitan city in Australia. Structural equation modelling techniques were used to test the hypothesised causal relationships between the components comprising the general research model. Path analysis and ordinary least squares regression were used to estimate the parameters of the model and analyse the data obtained. Multisample analysis (or stacked model analysis) using EQS was used to test the fit of the model to the data of the different human-computer interface groups and to estimate the parameters for the paths in those different groups.

The results show that the research model is a good description of the data. The job satisfaction of professional accountants is directly affected by their attitude toward using microcomputers and by microcomputer use itself. However, job performance appears to be directly affected only by microcomputer attitudes; microcomputer use does not directly affect job performance. Along with perceived ease of use and perceived usefulness, computer anxiety is shown to be an important determinant of attitudes toward using microcomputers: higher levels of computer anxiety negatively affect attitudes toward using microcomputers. Conversely, higher levels of perceived ease of use and perceived usefulness heighten individuals' positive attitudes toward using microcomputers. Perceived ease of use and perceived usefulness also indirectly affect microcomputer attitudes through their effect on computer anxiety: higher levels of perceived ease of use and perceived usefulness result in lower levels of computer anxiety. A surprising result is that while perceived ease of use is shown to directly affect the level of microcomputer usage, perceived usefulness and attitude toward using microcomputers do not.

The results of the multisample analysis confirm that the research model fits the stacked model and that the stacked model is a significantly better fit if specific parameters are allowed to vary between the two human-computer interface user groups. In general, these results confirm that an interaction exists between the type of human-computer interface (the variable providing the grouping) and the other variables in the model. The results show a clear difference between the two groups in the way perceived ease of use and perceived usefulness affect microcomputer attitude. For users of command-line interfaces, these variables appear to affect microcomputer attitude via an intervening variable, computer anxiety, whereas for graphical interface users the effect occurs directly. Relatedly, perceived ease of use and perceived usefulness have a significant direct effect on computer anxiety in command-line interface users, but no effect at all for graphical interface users. Of the two exogenous variables, only perceived ease of use, and only for command-line interface users, has a direct significant effect on the extent of microcomputer use.

In summary, the research has contributed to the development of a theory of individual adjustment to information technology in the workplace. It identifies certain perceptions, anxieties and attitudes about microcomputers and shows how they may affect work outcomes such as job satisfaction and job performance. It also shows that microcomputer interface types have a differential effect on some of the hypothesised relationships represented in the general model. Future replication studies could sample a broader cross-section of the microcomputer user community. Finally, the results should help Big Six accounting firms to maximise the benefits of microcomputer use by making them aware of how working with microcomputers affects job satisfaction and job performance.
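The study estimates its model with path analysis and ordinary least squares regression. As a purely illustrative sketch of that estimation style (not the study's actual data or equations), here is a per-equation OLS path analysis in Python with statsmodels; the construct names and the simulated coefficients are assumptions.

```python
# Illustrative path analysis via per-equation OLS (statsmodels).
# Variable names mirror the model's constructs; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 164  # sample size matching the study
peou = rng.normal(size=n)                      # perceived ease of use
pu = rng.normal(size=n)                        # perceived usefulness
anxiety = -0.4 * peou - 0.3 * pu + rng.normal(size=n)
attitude = -0.5 * anxiety + 0.3 * peou + 0.3 * pu + rng.normal(size=n)
use = 0.4 * peou + rng.normal(size=n)
satisfaction = 0.5 * attitude + 0.3 * use + rng.normal(size=n)
performance = 0.4 * attitude + rng.normal(size=n)

df = pd.DataFrame(dict(peou=peou, pu=pu, anxiety=anxiety,
                       attitude=attitude, use=use,
                       satisfaction=satisfaction, performance=performance))

# One OLS equation per endogenous construct, following the paths above.
equations = [
    "anxiety ~ peou + pu",
    "attitude ~ anxiety + peou + pu",
    "use ~ peou + pu + attitude",
    "satisfaction ~ attitude + use",
    "performance ~ attitude + use",
]
for eq in equations:
    fit = smf.ols(eq, data=df).fit()
    print(eq, "->", fit.params.round(2).to_dict())
```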

Relevance: 90.00%

Abstract:

A brain-computer interface (BCI) plays an important role in communication between humans and machines. This communication is based on human brain signals: in these systems, users use their brain instead of limb or body movements to perform tasks. The brain signals are analysed and translated into commands to control communication devices, robots or computers. In this paper, the aim is to enhance the performance of BCI systems through better classification of prosthetic motor imagery tasks. The challenging part is to use only a single channel of electroencephalography (EEG). The user's task is arm movement imagination: they are asked to imagine moving their arm up or down, and our system detects the imagination from the input brain signal. EEG quality features are extracted from the brain signal, and a decision tree is used to classify the participant's imagination based on the extracted features. Our system is online, meaning it can give a decision as soon as the signal is presented to it (in only 20 ms). Also, only one EEG channel is used for classification, which reduces the complexity of the system and leads to fast performance. One hundred signals were used for testing; on average, 97.4% of the up/down prosthetic motor imagery tasks were detected correctly. Owing to its high speed and accuracy, this method can be used in many different applications, such as moving artificial limbs and wheelchairs.
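The paper specifies a decision-tree classifier on features from a single EEG channel but does not list the exact features; the following Python sketch with scikit-learn therefore uses variance and band-power features purely as placeholder assumptions, on synthetic signals rather than real EEG.

```python
# Sketch: single-channel motor-imagery classification with a decision tree.
# Feature choices and synthetic data are assumptions, not the paper's.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

FS = 250  # assumed sampling rate (Hz)

def features(window: np.ndarray) -> np.ndarray:
    """Simple per-window features: variance plus mu/beta band power."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1 / FS)
    mu = spectrum[(freqs >= 8) & (freqs < 13)].mean()
    beta = spectrum[(freqs >= 13) & (freqs < 30)].mean()
    return np.array([window.var(), mu, beta])

# Synthetic "up"/"down" imagery: two classes with different band content.
rng = np.random.default_rng(1)
X, y = [], []
for label in (0, 1):  # 0 = arm up, 1 = arm down
    for _ in range(100):
        t = np.arange(FS) / FS
        f = 10 if label == 0 else 20  # dominant rhythm differs by class
        sig = np.sin(2 * np.pi * f * t) + 0.5 * rng.normal(size=FS)
        X.append(features(sig))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.5, random_state=0)
clf = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```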

Relevance: 90.00%

Abstract:

DBMODELING is a relational database of annotated comparative protein structure models and their metabolic pathway characterization. It is focused on enzymes identified in the genomes of Mycobacterium tuberculosis and Xylella fastidiosa. The main goal of the database is to provide structural models to be used in docking simulations and drug design. However, since the accuracy of structural models is highly dependent on the sequence identity between template and target, it is necessary to make clear to the user that only models showing high structural quality should be used in such efforts. Molecular modeling of these genomes generated a database in which all structural models were built from alignments presenting more than 30% sequence identity, yielding models of medium and high accuracy. All models in the database are publicly accessible at http://www.biocristalografia.df.ibilce.unesp.br/tools. The DBMODELING user interface provides user-friendly menus, so that all information can be printed in one step from any web browser. Furthermore, DBMODELING also provides a docking interface, which allows the user to carry out geometric docking simulations against the molecular models available in the database. There are three other important homology model databases: MODBASE, SWISSMODEL, and GTOP; their main applications are described in the present article. © 2007 Bentham Science Publishers Ltd.
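The database's key quality rule is that only models built from alignments above 30% sequence identity are included and trusted for docking. A minimal sketch of enforcing such a cutoff locally, assuming a hypothetical list of model records (DBMODELING's actual schema may differ):

```python
# Hypothetical post-filter on homology-model records by sequence identity.
# The record fields are illustrative, not DBMODELING's real schema.
models = [
    {"enzyme": "enzA", "identity": 0.62},  # high-accuracy model
    {"enzyme": "enzB", "identity": 0.34},  # medium accuracy
    {"enzyme": "enzC", "identity": 0.22},  # below the database's cutoff
]

CUTOFF = 0.30  # DBMODELING builds models only above 30% identity
usable = [m for m in models if m["identity"] > CUTOFF]
print([m["enzyme"] for m in usable])  # ['enzA', 'enzB']
```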

Relevance: 90.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 90.00%

Abstract:

Mixed Reality proposes scenes that combine the virtual and real worlds, offering the user an intuitive way of interaction suited to a specific application. This tutorial paper presents the fundamental concepts of this emergent kind of human-computer interface.

Relevance: 90.00%

Abstract:

A brain-computer interface (BCI) is a direct communication system between the brain and an external device that does not depend on the brain's normal output pathways of peripheral nerves or muscles. The signal generated by the user is acquired through dedicated sensors, then processed and classified to extract the information of interest, which is then used to produce an output sent back to the user as feedback. BCI technology finds interesting applications in the biomedical field, where it can be of great help to people affected by paralysis, though other uses should not be excluded. This thesis focuses in particular on the hardware components of a brain-computer interface, analysing the strengths and weaknesses of the various options: above all, the choice of the equipment for detecting brain activity and of the mechanisms through which BCI users can interact with their surroundings (the so-called actuators). These choices are made with the users' needs in mind, so as to reduce costs and risks while increasing the number of users who can actually benefit from a brain-computer interface.

Relevance: 90.00%

Abstract:

In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students get experience testing the user experience of their product prototypes using methods varying from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a computer science curriculum. Our understanding of the ELM suggests the need for student experience when introducing testing concepts. We learned that experiential learning, when appropriately implemented, can benefit the computer science classroom. Examined together, these course-based research projects provide insight into building strong testing practices into a curriculum.
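The JUG tool described here generates JUnit tests for Java coursework; as an analogous illustration only (Python's unittest standing in for JUnit), this is the kind of minimal test whose pass/fail report gives a student one experiential-learning iteration:

```python
# Analogue of a course-style unit test report cycle (Python's unittest
# stands in for JUnit; the function under test is a toy example).
import unittest

def letter_grade(score: int) -> str:
    """Function a student might submit for testing."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "F"

class TestLetterGrade(unittest.TestCase):
    def test_boundaries(self):
        self.assertEqual(letter_grade(90), "A")
        self.assertEqual(letter_grade(80), "B")

    def test_failing_score(self):
        self.assertEqual(letter_grade(50), "F")

if __name__ == "__main__":
    # Running this module prints the pass/fail report students act on.
    unittest.main(verbosity=2)
```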

Relevance: 90.00%

Abstract:

Background: This study examined the daily surgical scheduling problem in a teaching hospital. The problem involves the use of multiple operating rooms and different types of surgeons in a typical surgical day with deterministic operation durations (preincision, incision, and postincision times). Teaching hospitals play a key role in the health-care system; however, existing models assume that the duration of surgery is independent of the surgeon's skills. This problem has not been properly addressed in other studies. We analyze the case of a Spanish public hospital, in which continuous pressures and budget reductions demand a more efficient use of resources. Methods: To obtain an optimal solution for this problem, we developed a mixed-integer programming model and a user-friendly interface that facilitate the scheduling of planned operations for the following surgical day. We also implemented a simulation model to assist the evaluation of different dispatching policies for surgeries and surgeons. The typical aspects we took into account were the type of surgeon, potential overtime, idle time of surgeons, and the use of operating rooms. Results: It is necessary to consider the expertise of a given surgeon when formulating a schedule: such skill can decrease the probability of delays that could affect subsequent surgeries or cause cancellation of the final surgery. We obtained optimal solutions for a set of given instances, derived from surgical information on acceptable times collected from a Spanish public hospital. Conclusions: We developed a computer-aided framework with a user-friendly interface for use by a surgical manager, presenting a 3-D simulation of the problem. Additionally, we obtained an efficient formulation for this complex problem. However, the spread of this kind of operations research in Spanish public hospitals will take a long time, since there is a lack of awareness of the beneficial techniques and possibilities that operational research can offer the health-care system.
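The abstract does not reproduce its formulation, so the following Python sketch with PuLP is only a deliberately reduced illustration of the model's core idea, assigning surgeries to operating rooms while minimizing overtime; all durations and names are invented, and surgeon types are omitted.

```python
# Toy mixed-integer program in the spirit of the scheduling model:
# assign surgeries to rooms, minimize overtime. All numbers are invented.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

durations = {"s1": 120, "s2": 90, "s3": 200, "s4": 60}  # minutes
rooms = ["OR1", "OR2"]
SESSION = 240  # regular session length per room (minutes)

prob = LpProblem("daily_surgical_schedule", LpMinimize)
x = LpVariable.dicts("assign",
                     [(s, r) for s in durations for r in rooms],
                     cat=LpBinary)
overtime = LpVariable.dicts("overtime", rooms, lowBound=0)

prob += lpSum(overtime[r] for r in rooms)  # objective: total overtime

for s in durations:                        # each surgery gets one room
    prob += lpSum(x[(s, r)] for r in rooms) == 1
for r in rooms:                            # room load vs. session length
    prob += (lpSum(durations[s] * x[(s, r)] for s in durations)
             <= SESSION + overtime[r])

prob.solve()
for r in rooms:
    assigned = [s for s in durations if x[(s, r)].value() == 1]
    print(r, assigned, "overtime:", overtime[r].value())
```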

Relevance: 90.00%

Abstract:

In today's world, applications based on biometric systems, that is, systems that measure the electrical signals of our body, are growing at a fast pace. All of these systems incorporate biomedical sensors that help users better monitor different aspects of their daily routine, such as keeping a detailed record of an exercise regime or of the quality of the food they eat. Among these biometric systems, those based on interpreting brain signals through electroencephalography (EEG) recordings are gaining more and more momentum, although they are still at an early stage owing to the great complexity of the human brain, largely unknown to scientists until the 21st century. For these reasons, devices using a brain-computer interface (BCI) are becoming increasingly popular. A BCI system works by capturing a subject's brain waves and then processing them to try to obtain a representation of an action or thought of the individual. These thoughts, correctly interpreted, are subsequently used to carry out an action. Example BCI applications include driving the motor of an electric wheelchair when the subject performs, say, the action of closing a fist, or opening the lock of one's own house using a personal brain pattern.

Data-processing systems are evolving very quickly. The main reasons are the high processing speed and low energy consumption of FPGAs (Field Programmable Gate Arrays). Moreover, FPGAs have a reconfigurable architecture, which makes them more versatile and powerful than other processing units such as CPUs or GPUs. The CEI (Centro de Electrónica Industrial), where this final-year project (TFG) was carried out, has experience in the design of reconfigurable systems on FPGAs. This TFG is the second in a line of projects seeking a system capable of correctly processing brain signals, in order to arrive at a common pattern that allows us to act accordingly. More specifically, the goal is to detect when a person is falling asleep by capturing brain waves known as alpha waves, whose frequency is bounded between 8 and 13 Hz. These waves, which appear when we close our eyes and clear our mind, represent a state of mental relaxation. This project therefore starts as the beginning of a complete BCI system, serving as a first contact with brain-wave processing, ahead of the later use of reconfigurable hardware on which evolutionary algorithms will be implemented. It thus becomes necessary to develop a data-processing system on an FPGA. The data are processed following digital signal processing methodology; in this case a frequency analysis is performed using the fast Fourier transform (FFT). Once the data-processing system has been developed, it is integrated with another system in charge of capturing the data collected by an ADC (Analog to Digital Converter) known as the ADS1299. This ADC is specially designed to capture potentials from the human brain.
In this way, the final system captures the data through the ADS1299 and sends them to the FPGA, which processes them. Interpretation is carried out by the users, who subsequently analyse the processed data. For the development of the data-processing system, two study platforms are initially available, from which the data are captured for later processing:

1. The first is a commercial tool developed and distributed by OpenBCI, a project dedicated to selling hardware for EEG recordings and other experiments. This tool consists of a microprocessor, an SD memory module for data storage, and a wireless communication module that transmits the data over Bluetooth. It also includes the aforementioned ADS1299 ADC. This platform offers a graphical interface used for the research prior to the design of the processing system, allowing a first contact with the system.

2. The second platform is an evaluation kit for the ADS1299, from which the different control ports can be accessed through the ADC's communication pins. This platform is connected to the FPGA in the integrated system.

To understand how the simplest brain waves work, and to establish the minimum requirements for EEG wave analysis, several consultations were held with Dr Ceferino Maestu, neurophysiologist at the Centro de Tecnología Biomédica (CTB) of the UPM. He introduced us to the different procedures for analysing electroencephalogram waves, as well as to the way electrodes should be placed on the skull.

To conclude the preliminary research, a first data-processing model was built in MATLAB. A very important characteristic of brain waves is their randomness, which makes analysis in the time domain very complex. The most important step in the processing is therefore the transition from the time domain to the frequency domain, by applying the fast Fourier transform (FFT), where the collected data can be analysed with greater precision. The model developed in MATLAB is used to obtain the first results of the processing system, which follows these steps:

1. The data are captured from the electrodes and written to a data table.
2. The data are read from the table.
3. The temporal size of the sample to be processed is chosen.
4. A window is applied to avoid discontinuities at the beginning and end of the analysed block.
5. The sample to be transformed is padded with zeros in the time domain (zero-padding).
6. The FFT is applied to the windowed, zero-padded block.
7. The results are plotted for analysis.

At this point, capturing alpha waves proves very feasible. Although certain problems arise when interpreting the data because of the low temporal resolution of the OpenBCI platform, this problem is solved in the developed model, since the evaluation kit (the data-acquisition system) allows the data-capture rate, that is, the sampling frequency, to be adjusted, which directly affects this precision.
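As an illustration only, the window / zero-padding / FFT chain in steps 4-6 can be sketched in a few lines of Python with NumPy; the sampling rate and the synthetic 10 Hz signal standing in for an alpha wave are assumptions, not the project's data.

```python
# Sketch of the MATLAB processing chain: window, zero-pad, FFT, inspect
# the 8-13 Hz alpha band. Signal and sampling rate are synthetic.
import numpy as np

FS = 250                      # assumed sampling frequency (Hz)
N = 512                       # samples in the analysed block
t = np.arange(N) / FS
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(N)  # "alpha"

windowed = signal * np.hanning(N)        # step 4: window the block
padded = np.pad(windowed, (0, N))        # step 5: zero-pad to 2N samples
spectrum = np.abs(np.fft.rfft(padded))   # step 6: FFT of the block
freqs = np.fft.rfftfreq(padded.size, d=1 / FS)

alpha = (freqs >= 8) & (freqs <= 13)     # the 8-13 Hz alpha band
print("alpha-band peak at %.1f Hz"
      % freqs[alpha][spectrum[alpha].argmax()])
```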
Once the first processing run and the subsequent analysis of the results have been carried out, a hardware model is built that follows the same steps as the MATLAB one, insofar as this is useful and feasible. For this, the XPS (Xilinx Platform Studio) program contained in the EDK (Embedded Development Kit) tool is used, which allows an embedded system to be designed. This system comprises: a soft-core microprocessor called MicroBlaze, in charge of managing and controlling the whole system; an FFT block that performs the fast Fourier transform; four BRAM memory blocks, which store the input and output data of the FFT block, and a multiplier to apply the window to the data entering the FFT block; and a PLB bus, a control bus that connects the MicroBlaze with the different elements of the system.

After the hardware design, the software is designed using the SDK (Software Development Kit) tool. At this stage the data-acquisition system is also integrated, controlled mostly from the MicroBlaze. From this environment, the MicroBlaze is therefore programmed to manage the generated hardware. The software handles the communication between the two systems, acquisition and processing, and also loads the data of the window to be applied into the corresponding memory.

In the early stages of development, testing begins with the FFT block, to verify its operation in hardware. For this first test, the input data for the FFT block are loaded into one BRAM and the data of the applied window into another. The processed data are written to two BRAMs, one for the real values of the transform and another for the imaginary ones. After verifying the correct operation of the FFT block, it is integrated with the data-acquisition system, and a real EEG test is then carried out to capture alpha waves.

In addition, to validate the use of FPGAs as ideal processing units, the time the FFT block takes to perform the transform is measured and compared with the time MATLAB takes to perform the same transform on the same data. The system developed in hardware performs the fast Fourier transform 27 times faster than MATLAB, which shows the great competitive advantage of hardware as far as execution times are concerned.

On the didactic side, this TFG spans different fields. In the field of electronics:
- Knowledge of MATLAB was improved, as well as of different tools it offers, such as FDATool (Filter Design Analysis Tool).
- Knowledge of signal-processing techniques, in particular spectral analysis, was acquired.
- Knowledge of VHDL was improved, as well as its use in the Xilinx ISE environment.
- Knowledge of C was reinforced through programming the MicroBlaze to control the system.
- Building embedded systems in the Xilinx development environment using the EDK (Embedded Development Kit) tool was learned.

In the field of neurology, we learned how to carry out EEG tests, as well as how to analyse and interpret their results.
As for social impact, BCI systems affect many sectors, most notably the large number of people with physical disabilities, for whom such a system represents an opportunity to increase their day-to-day autonomy. Another important sector is medical research, where BCI systems are applicable in many settings, such as the detection and study of cognitive diseases.

Relevance: 90.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-05

Relevance: 90.00%

Abstract:

The emergence of pen-based mobile devices such as PDAs and tablet PCs provides a new way to input mathematical expressions to a computer: handwriting, which is much more natural and efficient for entering mathematics. This paper proposes a web-based handwriting mathematics system, called WebMath, for supporting mathematical problem solving. The proposed WebMath system is based on a client-server architecture. It comprises four major components: a standard web server, a handwriting mathematical expression editor, a computation engine, and a web browser with an Ajax-based communicator. The handwriting mathematical expression editor adopts a progressive recognition approach for dynamic recognition of handwritten mathematical expressions. The computation engine supports mathematical functions such as algebraic simplification and factorization, and integration and differentiation. The web browser provides a user-friendly interface for accessing the system using advanced Ajax-based communication. In this paper, we describe the different components of the WebMath system and its performance analysis.
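The computation engine's operations (simplification, factorization, integration, differentiation) map naturally onto a symbolic-algebra library. Below is a minimal sketch in Python with SymPy of that engine role only; the dispatch function is hypothetical, and WebMath's recognizer, editor, and Ajax layer are not reproduced.

```python
# Sketch of a WebMath-style computation engine using SymPy.
# The dispatch function is hypothetical; WebMath's real engine differs.
import sympy as sp

x = sp.symbols("x")

def compute(operation: str, expression: str) -> str:
    """Serve one recognized expression, as the server side might."""
    expr = sp.sympify(expression)
    ops = {
        "simplify": sp.simplify,
        "factor": sp.factor,
        "integrate": lambda e: sp.integrate(e, x),
        "differentiate": lambda e: sp.diff(e, x),
    }
    return str(ops[operation](expr))

# Example round trips a client request might trigger:
print(compute("factor", "x**2 - 2*x + 1"))   # (x - 1)**2
print(compute("integrate", "2*x"))            # x**2
print(compute("differentiate", "sin(x)"))     # cos(x)
```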