965 results for Web-Assisted Error Detection


Relevance: 30.00%

Abstract:

Background: Early and effective identification of developmental disorders during childhood remains a critical task for the international community. Language delays are the second most prevalent common developmental disorder in children and are frequently the first symptom of a possible disorder. Objective: This paper evaluates a Web-based Clinical Decision Support System (CDSS) whose aim is to enhance the screening of language disorders at a nursery school. The common lack of early diagnosis of language disorders led us to deploy an easy-to-use CDSS in order to evaluate its accuracy in the early detection of language pathologies. This CDSS can be used by pediatricians to support the screening of language disorders in primary care. Methods: This paper details the evaluation results of the "Gades" CDSS at a nursery school with 146 children, 12 educators, and 1 language therapist. The methodology embraces two consecutive phases. The first stage involves the observation of each child's language abilities, carried out by the educators, to facilitate the evaluation of language acquisition level performed by a language therapist. Next, the same language therapist evaluates the reliability of the observed results. Results: The Gades CDSS was integrated to provide the language therapist with the required clinical information. The validation process showed a global 83.6% (122/146) success rate in language evaluation and a 7% (7/94) rate of non-accepted system decisions for children from 0 to 3 years old. The system helped language therapists to identify new children with potential disorders who required further evaluation. This process will revalidate the CDSS output and allow the enhancement of early detection of language disorders in children. The system does need minor refinement, since the therapists disagreed with some questions from the CDSS knowledge base (KB) and suggested adding a few questions about speech production and pragmatic abilities. The refinement of the KB will address these issues and include the requested improvements, with the support of the experts who took part in the original KB development. Conclusions: This research demonstrated the benefit of a Web-based CDSS to monitor children's neurodevelopment via the early detection of language delays at a nursery school. The next steps focus on the design of a model that includes a pseudo auto-learning capacity, supervised by experts.
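As a rough illustration of how a milestone-based screening knowledge base of this kind can be evaluated automatically, the sketch below checks a child's observed language milestones against age cut-offs. The milestone names and age limits are invented for illustration; they are not the actual Gades KB.

```python
# Hypothetical milestone rules: (milestone, latest typical age in months).
# The cut-offs below are illustrative values, not clinical guidance.
MILESTONE_RULES = [
    ("babbling", 10),
    ("first_words", 18),
    ("two_word_phrases", 30),
]

def screen_child(age_months, observed):
    """Return milestones that are overdue and not yet observed by the educator."""
    return [m for m, limit in MILESTONE_RULES
            if age_months > limit and m not in observed]

# A 24-month-old with babbling and first words raises no flags;
# overdue milestones would be referred to the language therapist.
flags = screen_child(24, {"babbling", "first_words"})
print(flags)  # []
```

In a real CDSS the flagged milestones would only prompt further evaluation by the therapist, mirroring the revalidation loop described above.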

Relevance: 30.00%

Abstract:

Automated Teller Machines (ATMs) are sensitive self-service systems that require substantial investments in security and testing. ATM certifications are testing processes for machines that integrate software components from different vendors, performed before their deployment for public use. This project originated from the need to optimize the certification process in an ATM manufacturing company. The process identifies compatibility problems between software components through testing. It is composed of a huge number of manual user tasks, which makes the process very expensive and error-prone. Moreover, it is not possible to fully automate the process, as it requires human intervention to manipulate ATM peripherals. This project presented important challenges for the development team. First, this is a critical process, as all ATM operations rely on the software under test. Second, the context of use of ATM applications is vastly different from that of ordinary software. Third, ATMs' useful lifetime extends beyond 15 years, and both new and old models need to be supported. Fourth, the know-how for efficient testing depends on each specialist and is not explicitly documented. Fifth, the huge number of tests and their importance imply the need for user efficiency and accuracy. All these factors led us to conclude that, besides the technical challenges, the usability of the intended software solution was critical for the project's success. This business context is the motivation of this Master's Thesis project. Our proposal focused on the development process applied. By combining user-centered design (UCD) with agile development, we ensured both the high priority of usability and the early mitigation of software development risks caused by the technology constraints. We performed 23 development iterations and were finally able to deliver a working solution on time, in line with users' expectations. The evaluation of the project was carried out through usability tests, in which 4 real users participated in different tests in the real context of use. The results were positive according to different metrics: error rate, efficiency, effectiveness, and user satisfaction. We discuss the problems found, the benefits, and the lessons learned in the process. Finally, we measured the expected project benefits by comparing the effort required by the current and the new process (once the new software tool is adopted). The savings correspond to 40% less effort (man-hours) per certification. Future work includes additional evaluation of product usability in a real scenario (with customers) and the measurement of benefits in terms of quality improvement.

Relevance: 30.00%

Abstract:

The way content is consumed on the Internet has changed over the past years. Initially, static websites with visually poor content were used. With the evolution of communication networks, this trend has changed: nowadays we expect pleasant, accessible pages covering varied topics, and these expectations have changed the way web pages are created, generally with the aim of attracting users. The boom of smartphones and mobile applications in the current market has revolutionized the world of language learning, making it possible to combine cutting-edge resources with traditional learning. The popularity of mobile devices and applications has been the main motivation for this project. In it, the different existing technologies are analyzed and the option that best fits our needs is selected in order to develop a system that implements Mobile Assisted Language Learning (MALL), an innovative approach to language learning with the help of a mobile device. This document offers an overview of the development of applications for mobile devices in the e-learning environment. The technical characteristics of different platforms are studied, selecting the best option for the implementation of a system that provides the basic content for learning a language, in this case English, in an intuitive and fun way. The system allows users to improve their level of English through a dynamic and approachable web interface, employing the resources offered by mobile devices and making use of responsive design. The project is intended for users who have little free time to take a classroom-based course or, even better, who want to reinforce or review content already learned through other, more traditional means. The application can be used quickly and easily from any available mobile device, such as a smartphone, a tablet, or a personal computer, competing against other users or against oneself and thus improving one's starting level through the proposed activities. During the project, several solutions were compared, most of them open-source and freely distributed, that allow storage services accessible via the Internet to be deployed. The project concludes with a practical case study, analyzing the technical requirements and carrying out the analysis, design, database creation, implementation, and testing phases of the software life cycle. Finally, the application and all its information are migrated to a cloud server.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This thesis focuses on the analysis of two complementary aspects of cybercrime (that is, crime perpetrated over the network for financial gain). These two aspects are the infected machines used to obtain economic benefit from crime through different actions (such as click fraud, DDoS, or spam) and the server infrastructure used to manage these machines (e.g., C&C servers, exploit servers, monetization servers, redirectors). The first part investigates the exposure of victim computers to threats. For this analysis we used the metadata contained in WINE-BR, a Symantec dataset that contains installation metadata of executable files (e.g., file hash, publisher, installation date, filename, file version) from 8.4 million Windows users. We associated this metadata with the vulnerabilities in the National Vulnerability Database (NVD) and the Open Sourced Vulnerability Database (OSVDB) in order to track vulnerability decay over time and observe how quickly users patch their systems and, therefore, their exposure to possible attacks. We identified 3 factors that can influence the patching activity of victim computers: shared code, user type, and exploits. We present 2 novel attacks against shared code and an analysis of how user knowledge and exploit availability influence patching activity. For the 80 vulnerabilities in our database that affect code shared between two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). In the second part we propose new active probing techniques to detect and analyze malicious server infrastructures, starting with the analysis and detection of exploit server operations. We identify as an operation the servers that are controlled by the same people and possibly take part in the same infection campaign. We analyzed a total of 500 exploit servers over a period of 1 year, finding that 2/3 of the operations had a single server while 1/3 had multiple servers. We extended the exploit server detection technique to other server types (e.g., C&C servers, monetization servers, redirectors) and achieved Internet-scale probing for the different categories of malicious servers. These new techniques have been incorporated into a new tool called CyberProbe. To detect these servers we developed a novel technique called Adversarial Fingerprint Generation, a methodology for generating a unique request-response model to identify a server family (that is, the type and the operation the server belongs to). Starting from a malware file and a live server of a given family, CyberProbe can generate a valid fingerprint to detect all the live servers of that family. We performed 11 Internet-wide scans, detecting 151 malicious servers; 75% of these servers were unknown to public databases of malicious servers. Another issue that arises while detecting malicious servers is that some of them may be hidden behind a silent reverse proxy. To identify the prevalence of this network configuration and improve the capabilities of CyberProbe, we developed RevProbe, a new tool that detects reverse proxies by taking advantage of leaks in the configuration of web reverse proxies. RevProbe identifies that 16% of the active malicious IP addresses analyzed correspond to reverse proxies, that 92% of them are silent (compared with 55% for benign reverse proxies), and that they are mainly used for load balancing across multiple servers. ABSTRACT. In this dissertation we investigate two fundamental aspects of cybercrime: the infection of machines used to monetize the crime and the malicious server infrastructures that are used to manage the infected machines. In the first part of this dissertation, we analyze how fast software vendors apply patches to secure client applications, identifying shared code as an important factor in patch deployment. Shared code is code present in multiple programs. When a vulnerability affects shared code, the usual linear vulnerability life cycle is no longer effective in describing how patch deployment takes place. In this work we show the consequences of shared code vulnerabilities and demonstrate two novel attacks that can be used to exploit this condition. In the second part of this dissertation we analyze malicious server infrastructures. Our contributions are: a technique to cluster exploit server operations, a tool named CyberProbe to perform large-scale detection of different malicious server categories, and RevProbe, a tool that detects silent reverse proxies. We start by identifying exploit server operations, that is, exploit servers managed by the same people. We investigated a total of 500 exploit servers over a period of more than 13 months. We collected malware from these servers and all the metadata related to the communication with the servers. Thanks to this metadata we extracted different features to group together servers managed by the same entity (i.e., an exploit server operation), discovering that 2/3 of the operations have a single server while 1/3 have multiple servers. Next, we present CyberProbe, a tool that detects different malicious server types through a novel technique called adversarial fingerprint generation (AFG). The idea behind CyberProbe's AFG is to run a piece of malware and observe its network communication towards malicious servers. It then replays this communication to the malicious server and outputs a fingerprint (i.e., a port selection function, a probe generation function, and a signature generation function). Once the fingerprint is generated, CyberProbe scans the Internet with it and finds all the servers of a given family. We performed a total of 11 Internet-wide scans, finding 151 new servers starting from 15 seed servers, which gives CyberProbe a 10-fold amplification factor. Moreover, we compared CyberProbe with existing blacklists on the Internet, finding that only 40% of the servers detected by CyberProbe were listed. To enhance the capabilities of CyberProbe we developed RevProbe, a reverse proxy detection tool that can be integrated with CyberProbe to allow precise detection of silent reverse proxies used to hide malicious servers. RevProbe leverages leakage-based detection techniques to detect whether a malicious server is hidden behind a silent reverse proxy, as well as the infrastructure of servers behind it. At the core of RevProbe is the analysis of differences in the traffic obtained by interacting with a remote server.
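To make the fingerprinting idea concrete, the sketch below shows the matching half of the process in the spirit of adversarial fingerprint generation: a fingerprint pairs a probe with a response signature, and a server is attributed to the family when its response matches. The probe string, header, and signature are invented for illustration; they are not real malware-family fingerprints or part of CyberProbe.

```python
import re

# A fingerprint pairs a probe (what to send) with a response signature.
# Both values below are invented placeholders.
FINGERPRINT = {
    "probe": "GET /gate.php?id=probe HTTP/1.1",
    "signature": re.compile(r"^HTTP/1\.\d 200 .*X-Fam: acme", re.DOTALL),
}

def matches(fingerprint, response):
    """Classify a server as a family member if its response matches the signature."""
    return bool(fingerprint["signature"].search(response))

benign = "HTTP/1.1 404 Not Found\r\n\r\n"
family = "HTTP/1.1 200 OK\r\nX-Fam: acme\r\n\r\npayload"
print(matches(FINGERPRINT, benign), matches(FINGERPRINT, family))  # False True
```

In the real tool the probe is replayed against every candidate address from an Internet-wide scan, and only responses matching the family signature are reported.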

Relevance: 30.00%

Abstract:

A multiresidue method was developed for the simultaneous determination of 31 emerging contaminants (pharmaceutical compounds, hormones, personal care products, biocides, and flame retardants) in aquatic plants. Analytes were extracted by ultrasound-assisted matrix solid-phase dispersion (UA-MSPD) and determined by gas chromatography-mass spectrometry after silylation. The method was validated for different aquatic plants (Typha angustifolia, Arundo donax, and Lemna minor) and a semiaquatic cultivated plant (Oryza sativa), with good recoveries at concentrations of 100 and 25 ng g−1 wet weight, ranging from 70 to 120%, and low method detection limits (0.3 to 2.2 ng g−1 wet weight). A significant difference in the chromatographic response was observed for some compounds in neat solvent versus matrix extracts, and therefore quantification was carried out using matrix-matched standards in order to overcome this matrix effect. Aquatic plants taken from rivers located in three Spanish regions were analyzed, and the compounds detected were parabens, bisphenol A, benzophenone-3, cyfluthrin, and cypermethrin. The levels found ranged from 6 to 25 ng g−1 wet weight, except for cypermethrin, which was detected at 235 ng g−1 wet weight in Oryza sativa samples.
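Matrix-matched calibration, as used here to compensate the matrix effect, amounts to fitting the calibration line with standards spiked into blank matrix extract and back-calculating sample concentrations from it. A minimal sketch, with invented concentrations and detector responses:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Standards spiked into blank plant extract (ng/g) vs detector response.
# Responses are illustrative and perfectly linear.
conc = [0, 10, 25, 50, 100]
resp = [5, 105, 255, 505, 1005]
slope, intercept = fit_line(conc, resp)

sample_response = 255
print((sample_response - intercept) / slope)  # back-calculated concentration, ng/g
```

Because the standards see the same matrix as the samples, the slope already embeds any signal suppression or enhancement, which solvent-based standards would miss.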

Relevance: 30.00%

Abstract:

An approach to analyzing single-nucleotide polymorphisms (SNPs) found in the human genome has been developed that couples a recently developed invasive cleavage assay for nucleic acids with detection by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The invasive cleavage assay is a signal amplification method that enables the analysis of SNPs by MALDI-TOF MS directly from human genomic DNA without the need for initial target amplification by PCR. The results presented here show the successful genotyping by this approach of twelve SNPs located randomly throughout the human genome. Conventional Sanger sequencing of these SNP positions confirmed the accuracy of the MALDI-TOF MS analysis results. The ability to unambiguously detect both homozygous and heterozygous genotypes is clearly demonstrated. The elimination of the need for target amplification by PCR, combined with the inherently rapid and accurate nature of detection by MALDI-TOF MS, gives this approach unique and significant advantages in the high-throughput genotyping of large numbers of SNPs, useful for locating, identifying, and characterizing the function of specific genes.
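The genotype call itself reduces to matching observed MALDI-TOF peak masses against the masses expected for each allele's cleavage product, within an instrument tolerance. A toy sketch of that step, with invented product masses and tolerance (not values from this study):

```python
# Hypothetical expected masses (Da) of allele-specific cleavage products.
EXPECTED = {"A": 5210.4, "G": 5226.4}
TOL = 1.5  # Da mass-matching tolerance, illustrative

def call_genotype(peaks):
    """Return the alleles whose expected mass matches an observed peak."""
    alleles = sorted(a for a, m in EXPECTED.items()
                     if any(abs(p - m) <= TOL for p in peaks))
    return "/".join(alleles) if alleles else "no call"

print(call_genotype([5210.9]))          # homozygous: A
print(call_genotype([5210.0, 5226.8]))  # heterozygous: A/G
```

A single matching peak yields a homozygous call, while peaks for both products yield the heterozygous call, mirroring the unambiguous homozygous/heterozygous detection reported above.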

Relevance: 30.00%

Abstract:

A fast, simple, and environmentally friendly ultrasound-assisted dispersive liquid-liquid microextraction (USA-DLLME) procedure has been developed to preconcentrate eight cyclic and linear siloxanes from wastewater samples prior to quantification by gas chromatography-mass spectrometry (GC-MS). A two-stage multivariate optimization approach was employed, using a Plackett-Burman design for screening and selecting the significant factors involved in the USA-DLLME procedure, which was then optimized by means of a circumscribed central composite design. The optimum conditions were: extractant solvent volume, 13 µL; solvent type, chlorobenzene; sample volume, 13 mL; centrifugation speed, 2300 rpm; centrifugation time, 5 min; and sonication time, 2 min. Under the optimized experimental conditions, the method gave levels of repeatability with coefficients of variation between 10 and 24% (n = 7). Limits of detection were between 0.002 and 1.4 µg L−1. The calibration curves showed high levels of linearity, with correlation coefficients between 0.991 and 0.9997. Finally, the proposed method was applied to the analysis of wastewater samples. Relative recovery values ranged between 71 and 116%, showing that the matrix had a negligible effect upon extraction. To our knowledge, this is the first method that combines LLME and GC-MS for the analysis of methylsiloxanes in wastewater samples.
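The screening stage of such a two-step optimization relies on a Plackett-Burman design, whose defining property is that up to 11 factors can be screened in only 12 balanced, orthogonal runs. The sketch below constructs the standard 12-run design from its well-known generator row (the factor assignment to columns is up to the analyst; this is a generic construction, not this paper's experimental plan):

```python
# First row of the standard 12-run Plackett-Burman design (+1/-1 levels).
GENERATOR = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    """11 cyclic shifts of the generator row plus a final all-minus row."""
    rows = [[GENERATOR[(c + r) % 11] for c in range(11)] for r in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()
# Each of the 11 factor columns is balanced: six +1 and six -1 levels.
print(all(sum(row[c] for row in design) == 0 for c in range(11)))  # True
```

Main effects are then estimated from the difference between the mean response at the high and low level of each column, and only the significant factors move on to the central composite design.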

Relevance: 30.00%

Abstract:

In the chemical textile domain, experts have to analyse chemical components and substances that might be harmful when used in clothing and textiles. Part of this analysis is performed by searching the opinions and reports people have expressed concerning these products on the Social Web. However, this type of information is not as frequent on the Internet for this domain as for others, so its detection and classification is difficult and time-consuming. Consequently, problems associated with the use of chemical substances in textiles may not be detected early enough and could lead to health problems, such as allergies or burns. In this paper, we propose a framework able to detect, retrieve, and classify subjective sentences related to the chemical textile domain, which could be integrated into a wider health surveillance system. We also describe the creation of several datasets with opinions from this domain, the experiments performed using machine learning techniques and different lexical resources such as WordNet, and the evaluation focusing on sentiment classification and complaint detection (i.e., negativity). Despite the challenges involved in this domain, our approach obtains promising results, with an F-score of 65% for polarity classification and 82% for complaint detection.
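As a minimal sketch of the kind of machine-learning polarity classifier evaluated here, the snippet below trains a bag-of-words Naive Bayes model on a few invented textile-domain sentences; the paper's actual features, resources (e.g., WordNet), and datasets are richer than this.

```python
import math
from collections import Counter

# Invented training sentences, labeled by polarity.
train = [
    ("this dye caused a skin rash and burns", "neg"),
    ("terrible allergy after wearing the shirt", "neg"),
    ("the fabric feels great and is safe", "pos"),
    ("very comfortable, no irritation at all", "pos"),
]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = set(w for c in counts.values() for w in c)

def polarity(text):
    """Laplace-smoothed Naive Bayes with uniform class priors."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(math.log((c[w] + 1) / (total + len(vocab)))
                            for w in text.split())
    return max(scores, key=scores.get)

print(polarity("rash and burns"))  # neg
```

A complaint detector as described above could be built the same way, with "complaint" vs "non-complaint" labels instead of polarity.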

Relevance: 30.00%

Abstract:

A novel method is reported whereby screen-printed electrodes (SPELs) are combined with dispersive liquid-liquid microextraction. In-situ ionic liquid (IL) formation was used as the extractant phase in the microextraction technique and proved to be a simple, fast, and inexpensive analytical method. This approach uses miniaturized systems both in sample preparation and in the detection stage, helping to develop environmentally friendly analytical methods and portable devices that enable rapid, on-site measurement. The microextraction method is based on a simple metathesis reaction, in which a water-immiscible IL (1-hexyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide, [Hmim][NTf2]) is formed from a water-miscible IL (1-hexyl-3-methylimidazolium chloride, [Hmim][Cl]) and an ion-exchange reagent (lithium bis[(trifluoromethyl)sulfonyl]imide, LiNTf2) in sample solutions. The explosive 2,4,6-trinitrotoluene (TNT) was used as a model analyte to develop the method. The electrochemical behavior of TNT in [Hmim][NTf2] was studied in SPELs. The extraction method was first optimized by use of a two-step multivariate optimization strategy, using Plackett-Burman and central composite designs. The method was then evaluated under optimum conditions and showed good linearity, with a correlation coefficient of 0.9990. Limits of detection and quantification were 7 μg L−1 and 9 μg L−1, respectively. The repeatability of the proposed method was evaluated at two different spiking levels (20 and 50 μg L−1), and coefficients of variation of 7% and 5% (n = 5) were obtained. Tap water and industrial wastewater were selected as real-world water samples to assess the applicability of the method.
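Detection and quantification limits like those reported above are commonly derived from the calibration data via the 3s/m and 10s/m conventions (blank standard deviation over calibration slope). The paper does not state which criterion it used, and the numbers below are invented, so this is only a generic sketch of the calculation:

```python
def detection_limits(blank_sd, slope):
    """Return (LOD, LOQ) using the common 3*s/m and 10*s/m conventions."""
    return 3 * blank_sd / slope, 10 * blank_sd / slope

# Illustrative values: blank noise in signal units, slope in signal per ug/L.
lod, loq = detection_limits(blank_sd=0.12, slope=0.08)
print(round(lod, 1), round(loq, 1))  # 4.5 15.0 (ug/L)
```

Note that the two conventions fix the LOQ/LOD ratio at 10/3; reported limits with a different ratio, as here, indicate another estimation approach (e.g., empirical evaluation).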

Relevance: 30.00%

Abstract:

In this manuscript, a study of the effect of microwave radiation on the high-performance liquid chromatography separation of tocopherols and vitamin K1 was conducted. The novelty of the application was the use of a relatively low-polarity mobile phase in which the dielectric heating effect was minimized, in order to evaluate the nonthermal effect of the microwave radiation on the separation process. The results show that microwave-assisted high-performance liquid chromatography shortened the analysis time from 31.5 to 13.3 min when the lowest microwave power was used. Moreover, narrower peaks were obtained; hence the separation was more efficient, maintaining or even increasing the resolution between the peaks. This result confirms that the increase in mobile phase temperature is not the only variable improving the separation process; other nonthermal processes must also intervene. Fluorescence detection demonstrated a better signal-to-noise ratio than photodiode array detection, mainly due to the independent effect of microwave pulses on the baseline noise, but photodiode array detection was finally chosen as it allowed the simultaneous detection of nonfluorescent compounds. Finally, the content of the vitamin E homologs was determined in different vegetable oils. The results were consistent with those found in the literature.
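The claim that narrower peaks preserve or improve resolution can be checked with the standard chromatographic resolution formula, Rs = 2(t2 - t1)/(w1 + w2). A quick sketch with invented retention times and baseline peak widths:

```python
def resolution(t1, w1, t2, w2):
    """Chromatographic resolution: Rs = 2*(t2 - t1) / (w1 + w2)."""
    return 2 * (t2 - t1) / (w1 + w2)

# At the same peak spacing, narrower peaks raise the resolution.
print(resolution(10.0, 0.8, 11.0, 0.8))  # 1.25, broad peaks
print(resolution(10.0, 0.5, 11.0, 0.5))  # 2.0, narrower peaks
```

This is why the shorter microwave-assisted run can keep or even improve resolution despite reduced retention times: peak narrowing compensates for the smaller spacing.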

Relevance: 30.00%

Abstract:

The aim of this study was to obtain the exact value of the keratometric index (nkexact) and to clinically validate a variable keratometric index (nkadj) that minimizes this error. Methods: The nkexact value was determined by obtaining differences (ΔPc) between keratometric corneal power (Pk) and Gaussian corneal power (PcGauss) equal to 0. The nkexact was defined as the value associated with an equivalent difference in the magnitude of ΔPc for extreme values of posterior corneal radius (r2c) for each anterior corneal radius value (r1c). This nkadj was considered for the calculation of the adjusted corneal power (Pkadj). Values of r1c ∈ (4.2, 8.5) mm and r2c ∈ (3.1, 8.2) mm were considered. Differences of True Net Power with PcGauss, Pkadj, and Pk(1.3375) were calculated in a clinical sample of 44 eyes with keratoconus. Results: nkexact ranged from 1.3153 to 1.3396 and nkadj from 1.3190 to 1.3339, depending on the eye model analyzed. All the nkadj values adjusted perfectly to 8 linear algorithms. Differences between Pkadj and PcGauss did not exceed ±0.7 D (diopters). Clinically, nk = 1.3375 was not valid in any case. Pkadj and True Net Power, and Pk(1.3375) and Pkadj, were statistically different (P < 0.01), whereas no differences were found between PcGauss and Pkadj (P > 0.01). Conclusions: The use of a single value of nk for the calculation of the total corneal power in keratoconus has been shown to be imprecise, leading to inaccuracies in the detection and classification of this corneal condition. Furthermore, our study shows the relevance of corneal thickness in corneal power calculations in keratoconus.
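The gap between the two power definitions can be illustrated numerically: the keratometric power uses a single index nk on the anterior radius only, while the Gaussian power combines both corneal surfaces and the corneal thickness. The refractive indices below are the standard Gullstrand values, and the sample radii and thickness are illustrative, not patient data from this study:

```python
N_AIR, N_CORNEA, N_AQUEOUS = 1.0, 1.376, 1.336

def keratometric_power(r1, nk=1.3375):
    """Pk = (nk - 1) / r1, radius in metres, power in dioptres."""
    return (nk - 1) / r1

def gaussian_power(r1, r2, thickness):
    """Thick-lens (Gaussian) corneal power from both surfaces and thickness."""
    p1 = (N_CORNEA - N_AIR) / r1         # anterior surface power
    p2 = (N_AQUEOUS - N_CORNEA) / r2     # posterior surface power (negative)
    return p1 + p2 - (thickness / N_CORNEA) * p1 * p2

r1, r2, d = 7.8e-3, 6.5e-3, 0.55e-3      # metres, illustrative cornea
print(round(keratometric_power(r1), 2))  # single-index estimate, ~43.27 D
print(round(gaussian_power(r1, r2, d), 2))  # two-surface estimate, ~42.17 D
```

The roughly 1 D discrepancy for this ordinary cornea shows why a fixed nk = 1.3375 can misestimate total power, and the thickness term in the Gaussian formula is the corneal-thickness dependence highlighted in the conclusions.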

Relevance: 30.00%

Abstract:

The explosive growth of traffic in computer systems has made it clear that traditional control techniques are not adequate to provide system users with fast access to network resources and to prevent unfair uses. In this paper, we present a reconfigurable digital hardware implementation of a specific neural model for intrusion detection. It uses a specific characterization vector for network packets (the intrusion vector), built from information obtained during the access attempt; this vector is then processed by the system. Our approach is adaptive and detects these intrusions using a well-known artificial intelligence method, the multilayer perceptron. The implementation has been developed and tested on reconfigurable hardware (FPGA) for embedded systems. Finally, the intrusion detection system was tested in a real-world simulation to gauge its effectiveness and real-time response.
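The classifier at the heart of such a design is a plain multilayer perceptron forward pass, which is what gets mapped to hardware. The sketch below shows that computation with a tiny fixed-weight network; the weights and the 3-element "intrusion vector" are invented for illustration, not the paper's trained model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative fixed weights: 3 inputs -> 2 hidden neurons -> 1 output.
W_HIDDEN = [[0.9, -0.4, 0.2], [-0.7, 0.8, 0.1]]
B_HIDDEN = [0.0, -0.1]
W_OUT, B_OUT = [1.5, 1.2], -1.4

def mlp(features):
    """Return an intrusion score for a 3-element characterization vector."""
    hidden = [sigmoid(sum(w * f for w, f in zip(ws, features)) + b)
              for ws, b in zip(W_HIDDEN, B_HIDDEN)]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT)

score = mlp([1.0, 0.2, 0.7])
print(score > 0.5)  # flag the packet as an intrusion above a 0.5 threshold
```

On an FPGA, each multiply-accumulate and activation lookup in this loop becomes a parallel hardware unit, which is what enables the real-time response measured in the paper.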

Relevance: 30.00%

Abstract:

A novel approach is presented to determine mercury in urine samples, employing vortex-assisted ionic liquid dispersive liquid-liquid microextraction and microvolume back-extraction to prepare samples, and screen-printed electrodes modified with gold nanoparticles for voltammetric analysis. Mercury was extracted directly from non-digested urine samples into a water-immiscible ionic liquid and back-extracted into an acidic aqueous solution. Subsequently, it was determined using gold nanoparticle-modified screen-printed electrodes. Under optimized microextraction conditions, standard addition calibration was applied to urine samples containing 5, 10, and 15 μg L−1 of mercury. Standard addition calibration curves using standards between 0 and 20 μg L−1 gave a high level of linearity, with correlation coefficients ranging from 0.990 to 0.999 (N = 5). The limit of detection was evaluated both empirically and statistically, giving values that ranged from 0.5 to 1.5 μg L−1 and from 1.1 to 1.3 μg L−1, respectively, which are significantly lower than the threshold level established by the World Health Organization for normal mercury content in urine (i.e., 10-20 μg L−1). A certified reference material (REC-8848/Level II) was analyzed to assess method accuracy, finding 87% recovery (trueness) and a standard deviation of 3 μg L−1. Finally, the method was used to analyze spiked urine samples, obtaining good agreement between spiked and found concentrations (recoveries ranged from 97 to 100%).
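Standard addition calibration, the quantification scheme used here, fits a line of signal versus added concentration and reads the sample concentration off the magnitude of the x-axis intercept (intercept/slope). A minimal sketch with invented peak-current responses:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

added = [0, 5, 10, 15, 20]     # ug/L Hg spiked into aliquots of the sample
resp = [40, 60, 80, 100, 120]  # voltammetric peak currents, made up
slope, intercept = fit_line(added, resp)
print(intercept / slope)       # estimated sample concentration, ug/L
```

Because the calibration is built inside the sample itself, this approach cancels matrix effects from urine that an external calibration curve would not.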

Relevance: 30.00%

Abstract:

Interacting with a computer system in the operating room (OR) can be a frustrating experience for a surgeon, who currently has to verbally delegate every computer interaction task to an assistant. This indirect mode of interaction is time-consuming, error-prone, and can lead to poor usability of OR computer systems. This thesis describes the design and evaluation of a joystick-like device that allows direct surgeon control of the computer in the OR. The device was tested extensively, in comparison with a mouse and delegated dictation, by seven surgeons, eleven residents, and five graduate students. The device contains no electronic parts, is easy to use, is unobtrusive, has no physical connection to the computer, and makes use of an existing tool in the OR. We performed a user study to determine its effectiveness in allowing a user to perform all the tasks they would be expected to perform on an OR computer system during a computer-assisted surgery. Dictation was found to be superior to the joystick in qualitative measures, but the joystick was preferred over dictation in user satisfaction responses. The mouse outperformed both the joystick and dictation, but it is not a readily accepted modality in the OR.

Relevance: 30.00%

Abstract:

Originally presented as the author's thesis (M.A.), University of Illinois at Urbana-Champaign.