26 results for Computer Technologies
at Universidade do Minho
Abstract:
The introduction of technologies in the workplace has led to dramatic change. These changes have come with an increased capacity to gather data about one's working performance (i.e. productivity), as well as the capacity to track one's personal responses (e.g. emotional, physiological) to this changing workplace environment. This movement of self-monitoring or self-sensing, using diverse types of wearable sensors combined with computing, has been identified as the Quantified Self. The miniaturization of sensors, reduced costs, and ever-increasing computing power have led to a plethora of wearables and sensors to track and analyze all types of information. While these are used in the personal sphere to track information, a looming question remains: should employers use Quantified-Self information to track their employees' performance or well-being in the workplace, and would this benefit employees? The aim of the present work is to lay out the implications and challenges associated with the use of Quantified-Self information in the workplace. The Quantified-Self movement has enabled people to understand their personal lives better by tracking multiple types of information and signals; such an approach could allow companies to gather knowledge on what drives productivity for their business and/or the well-being of their employees. A discussion of the implications of this approach covers 1) monitoring health and well-being, 2) oversight and safety, and 3) mentoring and training. The challenges address 1) privacy and acceptability, 2) scalability, and 3) creativity. Even though many questions remain regarding their use in the workplace, wearable technologies and Quantified-Self data represent an exciting opportunity for industry and for the health and safety practitioners who will be using them.
Bidirectional battery charger with grid-to-vehicle, vehicle-to-grid and vehicle-to-home technologies
Abstract:
This paper presents the development of an on-board bidirectional battery charger for Electric Vehicles (EVs) targeting Grid-to-Vehicle (G2V), Vehicle-to-Grid (V2G), and Vehicle-to-Home (V2H) technologies. In the G2V operation mode the batteries are charged from the power grid with sinusoidal current and unity power factor. In the V2G operation mode the energy stored in the batteries can be delivered back to the power grid, contributing to power system stability. In the V2H operation mode the energy stored in the batteries can be used to supply home loads during power outages, or to supply loads in places without a connection to the power grid. Throughout the paper the hardware topology of the bidirectional battery charger is presented and the control algorithms are explained. Some considerations about the sizing of the AC-side passive filter are taken into account in order to improve performance in the three operation modes. The adopted topology and control algorithms are assessed through computer simulations and validated by experimental results obtained with a laboratory prototype operating in the different scenarios.
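The paper's actual control algorithms are not reproduced here, but the unity-power-factor idea behind the G2V and V2G modes can be illustrated numerically: the current reference is a scaled copy of the grid voltage, in phase for charging and in anti-phase for injection. All values below (European 50 Hz grid, 230 V, 3 kW) are illustrative assumptions, not the prototype's parameters:

```python
import numpy as np

# One grid cycle, sampled at an assumed 10 kHz (50 Hz, 230 V RMS grid).
fs, f_grid = 10_000, 50
n = fs // f_grid                      # samples per grid cycle
t = np.arange(n) / fs
v_grid = 230 * np.sqrt(2) * np.sin(2 * np.pi * f_grid * t)

def current_reference(v, p_ref):
    """Sinusoidal current reference proportional to the grid voltage,
    giving unity power factor. p_ref > 0 charges the batteries (G2V);
    p_ref < 0 delivers power back to the grid (V2G)."""
    v_rms_sq = np.mean(v ** 2)
    return (p_ref / v_rms_sq) * v

i_g2v = current_reference(v_grid, 3000.0)   # 3 kW charging
i_v2g = current_reference(v_grid, -3000.0)  # 3 kW injection
```

Averaging `v_grid * i_g2v` over the cycle recovers the 3 kW reference, confirming that all power is transferred at unity power factor.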
Abstract:
This book was produced in the scope of a research project entitled "Navigating with 'Magalhães': Study on the Impact of Digital Media in Schoolchildren". The study was conducted between May 2010 and May 2013 at the Communication and Society Research Centre, University of Minho, Portugal, and was funded by the Portuguese Foundation for Science and Technology (PTDC/CCI-COM/101381/2008).
Abstract:
Master's project report in Informatics Teaching
Abstract:
Eye tracking as an interface to operate a computer has been under research for some time, and new systems are still being developed that offer encouragement to people whose illnesses prevent them from using any other form of interaction with a computer. Although they use computer vision processing and a camera, these systems are usually based on head-mounted technology and are therefore considered contact-type systems. This paper describes the implementation of a human-computer interface based on a fully non-contact eye tracking vision system, designed to allow people with tetraplegia to interact with a computer. As an assistive technology, a graphical user interface with special features was developed, including a virtual keyboard for user communication, fast access to pre-stored phrases and multimedia, and even internet browsing. The system was developed with a focus on low cost, user-friendly functionality, and user independence and autonomy.
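The abstract does not specify how the virtual keyboard registers a key press, so as an assumption the sketch below uses dwell-time selection, a common mechanism in gaze interfaces: a key is "pressed" once the gaze stays on it for a fixed number of consecutive samples. The two-key layout and all parameters are hypothetical:

```python
def dwell_select(gaze_samples, key_at, dwell_frames=30):
    """Dwell-time key selection: a key is 'pressed' when the gaze has
    rested on it for dwell_frames consecutive samples. key_at maps a
    gaze point to a key label, or None if no key is under the gaze."""
    current, count = None, 0
    pressed = []
    for point in gaze_samples:
        key = key_at(point)
        if key is not None and key == current:
            count += 1
            if count == dwell_frames:   # dwell threshold reached
                pressed.append(key)
        else:                           # gaze moved to another key
            current = key
            count = 1 if key is not None else 0
    return pressed

# Hypothetical two-key layout: left half of the screen is "A", right "B".
key_at = lambda p: "A" if p[0] < 0.5 else "B"
samples = [(0.1, 0.5)] * 40 + [(0.9, 0.5)] * 10
typed = dwell_select(samples, key_at)   # only "A" dwelled long enough
```

With a 30-sample threshold, the 40-sample fixation types "A" while the 10-sample glance at "B" is ignored, which is how dwell selection filters out incidental gaze movement.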
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robot and system interfaces without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or control devices. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection, and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
Abstract:
"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture, and the system recognized them with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture, and the system recognized them with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
Abstract:
Doctoral thesis in Educational Sciences (specialization in Educational Technology)
Abstract:
Forming suitable learning groups is one of the factors that determine the efficiency of collaborative learning activities. However, only a few studies have addressed this problem in mobile learning environments. In this paper, we propose a new approach for automatic, customized, and dynamic group formation in Mobile Computer Supported Collaborative Learning (MCSCL) contexts. The proposed solution is based on the combination of three types of grouping criteria: learners' personal characteristics, learners' behaviours, and context information. Instructors can freely select the type, number, and weight of the grouping criteria, together with other settings such as the number, size, and type of the learning groups (homogeneous or heterogeneous). Apart from its grouping mechanism, the proposed approach provides a flexible tool to monitor each learner and to manage the learning process from the beginning to the end of collaborative learning activities. To evaluate the quality of the implemented group formation algorithm, we compare its Average Intra-cluster Distance (AID) with that of a random group formation method. The results show that the proposed algorithm is more effective than the random method at forming both homogeneous and heterogeneous groups.
Abstract:
Doctoral thesis in Information Systems and Technologies
Abstract:
Increasing maturity in Project Management (PM) has become a goal for many organizations, leading them to adopt maturity models to assess the current state of their PM practices and compare them with the best practices of the industry in which the organization operates. One of the main PM maturity models is the Organizational Project Management Maturity Model (OPM3®), developed by the Project Management Institute. This paper presents an analysis of the outcomes for the Information Systems and Technologies organizations assessed by the OPM3® Portugal Project, identifying the PM processes that are best implemented in this particular industry and those most urgently in need of improvement. Additionally, a comparison across the different organization sizes analyzed is presented.
Abstract:
Master's dissertation in Mechatronics Engineering