53 results for Information Interaction
at Universidade do Minho
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond better in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that, when used separately, obtain the best results, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
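The abstract names two contour-based descriptors but not how they are computed; below is a minimal sketch of one plausible interpretation, assuming a binary hand mask and using OpenCV/NumPy. The exact definitions of the radial signature and centroid distance in the paper may differ (e.g. per-sector averaging versus ray casting), so the function names and bin counts here are illustrative assumptions.

```python
# Hedged sketch: centroid-distance and radial-signature features from a binary
# hand mask (not the paper's exact implementation).
import cv2
import numpy as np

def centroid_distance(mask, n_points=64):
    """Distances from the hand centroid to the contour, resampled to n_points."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    cx, cy = contour.mean(axis=0)                      # hand centroid
    dist = np.hypot(contour[:, 0] - cx, contour[:, 1] - cy)
    idx = np.linspace(0, len(dist) - 1, n_points).astype(int)
    return dist[idx] / dist.max()                      # scale-normalised signature

def radial_signature(mask, n_bins=32):
    """Mean contour distance per angular sector around the centroid."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    cx, cy = contour.mean(axis=0)
    angles = np.arctan2(contour[:, 1] - cy, contour[:, 0] - cx)
    dist = np.hypot(contour[:, 0] - cx, contour[:, 1] - cy)
    bins = np.digitize(angles, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
    sig = np.array([dist[bins == b].mean() if np.any(bins == b) else 0.0
                    for b in range(n_bins)])
    return sig / (sig.max() or 1.0)
```

Both descriptors are rotation-sensitive but cheap to compute, which is consistent with the abstract's remark about their low computational complexity.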
Abstract:
Doctoral thesis in Electronics and Computer Engineering
Abstract:
The aim of this study is to evaluate the interaction between several base pen-grade asphalt binders (35/50, 50/70, 70/100, 160/220) and two different plastic wastes (EVA and HDPE), for a set of new polymer modified binders (PMBs) produced with different amounts of both plastic wastes. After analysing the results obtained for the several polymer modified binders evaluated in this study, including a commercial modified binder, it can be concluded that the new PMBs produced with the base bitumen 70/100 and 5% of each plastic waste (HDPE or EVA) result in binders with very good performance, similar to that of the commercial modified binder.
Abstract:
"Lecture Notes in Computational Vision and Biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture, based on computer vision and machine learning, that can be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
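The dynamic-gesture module trains one HMM per gesture and picks the best-scoring model at recognition time. The sketch below illustrates that idea with hmmlearn; the library choice, number of hidden states and Gaussian emissions are assumptions not stated in the abstract.

```python
# Hedged sketch of the "one HMM per dynamic gesture" scheme using hmmlearn.
import numpy as np
from hmmlearn import hmm

def train_gesture_models(sequences_by_gesture, n_states=5):
    """sequences_by_gesture: {gesture_name: [array of shape (T_i, n_features), ...]}"""
    models = {}
    for name, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)                       # concatenated observation sequences
        lengths = [len(s) for s in seqs]          # length of each training sequence
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        m.fit(X, lengths)                         # Baum-Welch training per gesture
        models[name] = m
    return models

def recognize(models, sequence):
    """Return the gesture whose HMM assigns the highest log-likelihood."""
    scores = {name: m.score(sequence) for name, m in models.items()}
    return max(scores, key=scores.get)
```

This keeps the recognition step a simple maximum-likelihood comparison, which fits the real-time requirement mentioned in the abstract.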
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we address sign language recognition, the communication method of deaf people. Sign languages are neither standard nor universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface.
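As a rough illustration of the real-time pipeline (capture, hand segmentation, feature extraction, classification), here is a minimal webcam loop. The HSV skin threshold, the contour-distance feature and the `classify` placeholder are all assumptions; the abstract does not specify the segmentation method or the classifier used for the vowels.

```python
# Illustrative real-time loop, not the paper's implementation.
import cv2
import numpy as np

def segment_hand(frame_bgr):
    """Very rough HSV skin threshold; returns a binary hand mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def classify(features):
    """Placeholder for the trained vowel classifier (e.g. an SVM)."""
    return "?"

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = segment_hand(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if contours and cv2.contourArea(max(contours, key=cv2.contourArea)) > 500:
        c = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
        cx, cy = c.mean(axis=0)
        dist = np.hypot(c[:, 0] - cx, c[:, 1] - cy)      # centroid-distance feature
        idx = np.linspace(0, len(dist) - 1, 64).astype(int)
        label = classify(dist[idx] / dist.max())
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("PSL vowels", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```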
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture, based on computer vision and machine learning, that can be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
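For the static-gesture module, the abstract reports an SVM trained on hand posture features. A minimal scikit-learn sketch is given below; the RBF kernel, feature dimensionality and the synthetic placeholder data are assumptions, since the abstract does not describe the training setup.

```python
# Hedged sketch: SVM posture classifier on hand-shape feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_samples, n_features) hand-shape descriptors; y: posture labels.
# Synthetic placeholder data only, to keep the example self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = rng.choice(["open", "fist", "point", "ok", "stop"], size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)                       # train on posture feature vectors
print("held-out accuracy:", clf.score(X_te, y_te))
```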
Abstract:
Information security is concerned with the protection of information, which can be stored, processed or transmitted within the critical information systems of organizations, against loss of confidentiality, integrity or availability. Protection against these problems results from the implementation of controls along several dimensions: technical, administrative or physical. A vital objective for military organizations is to ensure superiority in contexts of information warfare and competitive intelligence. Therefore, the problem of information security in military organizations has been a topic of intensive work at both national and transnational levels, and extensive conceptual and standardization work is being produced. A current effort is thus to develop automated decision support systems that assist military decision makers, at different levels of the command chain, in selecting suitable control measures that can effectively deal with potential attacks and, at the same time, prevent, detect and contain vulnerabilities targeted at their information systems. The concepts and processes of the Case-Based Reasoning (CBR) methodology strongly resemble classical military processes and doctrine, in particular the analysis of “lessons learned” and the definition of “modes of action”. Therefore, the present paper addresses the modeling and design of a CBR system with two key objectives: to support an effective response in the context of information security for military organizations, and to allow scenario planning and analysis for training and auditing processes.
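The CBR cycle starts by retrieving the most similar past incident and reusing its recorded response. The sketch below shows that retrieve/reuse step; the case attributes, weights and similarity function are illustrative assumptions, not the paper's actual case model.

```python
# Minimal sketch of CBR "retrieve and reuse" for security incidents (assumed case model).
from dataclasses import dataclass

@dataclass
class Case:
    attack_vector: str          # e.g. "phishing", "malware", "dos"
    asset: str                  # e.g. "mail-server", "c2-network"
    severity: int               # 1 (low) .. 5 (critical)
    response: str = ""          # the "mode of action" stored with past cases

WEIGHTS = {"attack_vector": 0.4, "asset": 0.3, "severity": 0.3}

def similarity(a: Case, b: Case) -> float:
    """Weighted attribute similarity between two cases (0..1)."""
    s = WEIGHTS["attack_vector"] * (a.attack_vector == b.attack_vector)
    s += WEIGHTS["asset"] * (a.asset == b.asset)
    s += WEIGHTS["severity"] * (1 - abs(a.severity - b.severity) / 4)
    return s

def retrieve(case_base, query):
    """Return the past case ("lesson learned") most similar to the new incident."""
    return max(case_base, key=lambda c: similarity(c, query))

# Reuse: adopt (or adapt) the retrieved case's response as the proposed control.
base = [Case("phishing", "mail-server", 3, "quarantine, reset credentials, alert users"),
        Case("dos", "c2-network", 5, "rate-limit, reroute traffic, isolate segment")]
print(retrieve(base, Case("phishing", "mail-server", 4)).response)
```

Scenario planning for training and auditing, as mentioned in the abstract, amounts to running hypothetical query cases against the same case base.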
Abstract:
Novel input modalities such as touch, tangibles or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of different natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface could be more or less appropriate for: 1) different age groups; and 2) different basic operations, such as data selection, insertion or manipulation. This work presents the first step of an exploratory evaluation of whether or not users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults and older adults) when dealing with a selection task. 60 participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task and the methodology was based on usability testing (speed, accuracy and user preference). The study suggests a statistically significant difference between the mean selection times of each group of users, and also raises new issues regarding the “old” mouse input versus the “new” input modalities.
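The abstract reports a significant difference between mean selection times without naming the statistical test. One conventional way to check such a difference across three groups is a one-way ANOVA, sketched below with synthetic placeholder data; the test choice, sample sizes and numbers are assumptions, not the study's actual analysis.

```python
# Hedged sketch: comparing mean selection times across three age groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
children = rng.normal(loc=2.8, scale=0.6, size=20)   # seconds per selection (synthetic)
adults = rng.normal(loc=1.9, scale=0.4, size=20)
older_adults = rng.normal(loc=3.5, scale=0.8, size=20)

f_stat, p_value = stats.f_oneway(children, adults, older_adults)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")        # p < 0.05 -> group means differ
```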
Abstract:
Master's dissertation in Sustainable Construction and Rehabilitation
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
Information technologies have changed the way health organizations work, contributing to their effectiveness, efficiency and sustainability. Hospital Information Systems (HIS) are emerging in all health institutions, helping health professionals and patients. However, HIS are not always implemented and used in the best way, leading to low levels of benefits and of acceptance by the users of these systems. To mitigate this problem, it is essential to take measures that ensure the HIS and their interfaces are designed in a simple and interactive way. With this in mind, a study was carried out to measure user satisfaction and gather users' opinions. The Technology Acceptance Model (TAM) was applied to a HIS implemented in various hospital centers (AIDA), using the Pathologic Anatomy Service as the case study. The study identified weak and strong features of AIDA and pointed out some solutions to improve the medical record.
Abstract:
Master's dissertation in Special Education (specialization in Early Intervention)
Abstract:
Doctoral thesis in Educational Sciences - specialty in Curriculum Development
Abstract:
A measurement is presented of the tt̄ inclusive production cross section in pp collisions at a center-of-mass energy of √s = 8 TeV using data collected by the ATLAS detector at the CERN Large Hadron Collider. The measurement was performed in the lepton+jets final state using a data set corresponding to an integrated luminosity of 20.3 fb⁻¹. The cross section was obtained using a likelihood discriminant fit, and b-jet identification was used to improve the signal-to-background ratio. The inclusive tt̄ production cross section was measured to be 260 ± 1 (stat.) +22/−23 (syst.) ± 8 (lumi.) ± 4 (beam) pb, assuming a top-quark mass of 172.5 GeV, in good agreement with the theoretical prediction of 253 +13/−15 pb. The tt̄ → (e,μ)+jets production cross section in the fiducial region determined by the detector acceptance is also reported.
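For orientation, the standard relation between the fitted signal yield and the quoted cross section is sketched below; this is the textbook formula, not the paper's full likelihood-discriminant construction, and the symbol names are generic.

```latex
% Generic cross-section extraction (not the paper's exact likelihood fit):
\begin{equation*}
  \sigma_{t\bar{t}} \;=\; \frac{N_{\text{sig}}}{\epsilon \,\mathcal{L}_{\text{int}}},
  \qquad \mathcal{L}_{\text{int}} = 20.3~\text{fb}^{-1},
\end{equation*}
% where $N_{\text{sig}}$ is the signal yield extracted by the fit and $\epsilon$ the
% combined acceptance times selection efficiency. The fiducial cross section quoted in
% the abstract corresponds to restricting $\epsilon$ to the detector-acceptance region.
```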