10 results for Human-Computer-Interaction Wearable Hands-free HealthCare Augmented-Reality Moverio Thalmic-Myo
at Universidad de Alicante
Abstract:
Early education is a key element for the future success of students in the education system. This work analyzes the feasibility of using augmented reality content with preschool students (four and five years old) as a tool for improving their learning process. A quasi-experimental design based on a nonequivalent-groups posttest-only design was used. A didactic unit on the topic “animals” was developed by the participating teachers. The control group followed all the activities defined in the developed didactic materials, while the experimental group was additionally provided with augmented reality content. Results show improved learning outcomes in the experimental group with respect to the control group.
Abstract:
3D imaging techniques were adopted early in the footwear industry. In particular, 3D imaging can be used to aid commerce and improve the quality and sales of shoes. Footwear customisation is an added value aimed not only at improving product quality but also consumer comfort. Moreover, customisation implies a new business model that avoids competing with the mass production of new manufacturers based mainly in Asian countries. However, footwear customisation requires a significant effort at different levels. In manufacturing, rapid and virtual prototyping is required; indeed, the prototype is intended to become the final product. The whole design procedure must be validated using exclusively virtual techniques to ensure the feasibility of the process, since physical prototypes should be avoided. With regard to commerce, it would be desirable for the consumer to choose any model of shoe from a large 3D database and be able to try it on by looking at a magic mirror. This would probably reduce costs and increase sales, since shops would not need to stock every shoe model and trying several models on would be easier and faster for the consumer. In this paper, new advances in 3D techniques drawn from experience in cinema, TV and games are successfully applied to footwear. Firstly, the characteristics of a high-quality stereoscopic vision system for footwear are presented. Secondly, a system for interacting with virtual footwear models based on 3D gloves is detailed. Finally, an augmented reality system (magic mirror) is presented, implemented with low-cost computational elements, that allows a hypothetical customer to check in real time the suitability of a given virtual footwear model from an aesthetic point of view.
Abstract:
Building Information Modelling (BIM) provides a shared source of information about a built asset, which creates a collaborative virtual environment for project teams. Literature suggests that, to collaborate efficiently, the relationship between the project team must be based on sympathy, obligation, trust and rapport. Communication increases in importance when working collaboratively, but effective communication can only be achieved when the stakeholders are willing to act, react, listen and share information. Case study research and interviews with Architecture, Engineering and Construction (AEC) industry experts suggest that synchronous face-to-face communication is project teams’ preferred method, allowing teams to socialise and build rapport, accelerating the creation of trust between the stakeholders. However, virtual unified communication platforms are a close second-preferred option for communication between the teams. Effective methods for virtual communication in professional practice, such as collaborative virtual environments (CVE), that build trust and achieve spontaneous responses similar to face-to-face communication are necessary to face global challenges, and can be achieved with the right people, processes and technology. This research paper investigates current industry methods for virtual communication within BIM projects and explores the suitability of avatar interaction in a collaborative virtual environment as an alternative to face-to-face communication, to enhance collaboration between design teams in professional practice on a project. Hence, this paper compares the effectiveness of these communication methods within construction design teams, with results of further experiments conducted to test recommendations for more efficient methods of virtual communication that add value in the workplace between design teams.
Abstract:
Nowadays, the new generation of computers provides high performance that enables building computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential part of allowing robots to move through these environments. Traditionally, mobile robots used a combination of several sensors from different technologies: lasers, sonars and contact sensors have typically been used in mobile robotic architectures. However, color cameras are an important sensor because we want robots to use the same information that humans do to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of the scenes. Computer vision algorithms are computationally complex problems, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like Microsoft Kinect, which provide 3D colored point clouds at high frame rates, made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is obtained, the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of a paradigm coined General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information using plane detection as the basic structure to compress the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method obtains good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
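The plane-based compression described in the abstract relies on detecting dominant planes in man-made scenes. A minimal sketch of such a detector, using a plain RANSAC loop over raw (x, y, z) points, is shown below; this is an illustrative reimplementation, not the thesis code, and all names and thresholds are assumptions:

```python
import random

def plane_from_points(p1, p2, p3):
    """Return (unit normal, d) for the plane through three points, or None if degenerate."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],      # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-9:                      # collinear sample: no unique plane
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, threshold=0.02, rng=None):
    """Detect the dominant plane; return (normal, d, inlier_indices)."""
    rng = rng or random.Random(0)
    best = (None, None, [])
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [i for i, p in enumerate(points)
                   if abs(sum(n[k] * p[k] for k in range(3)) + d) < threshold]
        if len(inliers) > len(best[2]):
            best = (n, d, inliers)
    return best
```

Once the dominant plane and its inliers are known, the inlier points can be replaced by the plane parameters plus a 2D boundary, which is the essence of the compression idea described above.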
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested our method with several models and performed a study of the neural network parameterization, computing the quality of representation and comparing results with other neural methods, like growing neural gas and Kohonen maps, and classical methods, like Voxel Grid. We also reconstructed models acquired with low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit onto Graphics Processing Units using CUDA. A speed-up of 180× is obtained compared to the sequential CPU version.
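The core of the NG method mentioned above is a rank-based update rule: every prototype moves toward each input sample by an amount that decays exponentially with its distance rank. The sketch below is an illustrative reimplementation of that rule, not the paper's code; the function name, annealing schedule and parameter values are assumptions:

```python
import math
import random

def neural_gas_fit(points, n_units=10, epochs=500,
                   eps=(0.5, 0.05), lam=(10.0, 0.5), seed=0):
    """Fit NG prototypes to a point cloud (illustrative sketch).

    eps (learning rate) and lam (neighborhood range) anneal
    exponentially from their initial to their final value.
    """
    rng = random.Random(seed)
    dim = len(points[0])
    # start prototypes away from the data to make the adaptation visible
    units = [[5.0 + rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / max(1, epochs - 1)
        e = eps[0] * (eps[1] / eps[0]) ** frac   # annealed learning rate
        l = lam[0] * (lam[1] / lam[0]) ** frac   # annealed neighborhood range
        x = rng.choice(points)
        # rank all prototypes by squared distance to the sample
        order = sorted(range(n_units),
                       key=lambda i: sum((units[i][d] - x[d]) ** 2
                                         for d in range(dim)))
        for rank, i in enumerate(order):
            h = math.exp(-rank / l)              # rank-based neighborhood factor
            units[i] = [w + e * h * (x[d] - w) for d, w in enumerate(units[i])]
    return units
```

After enough annealed iterations the prototypes distribute themselves over the sampled surface, which is what makes the NG usable as a direct 3D model of the scanned object.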
Abstract:
This paper analyzes the learning experiences and opinions obtained from a group of undergraduate students in their interaction with several on-line multimedia resources included in a free on-line course about Computer Networks. These educational resources are based on the Web 2.0 approach, such as blogs, videos and virtual labs, and have been added to a website for distance self-learning.
Abstract:
1-Benzyl-3-(2-hydroxy-2-phenylethyl)imidazolium chloride (5), which is a precursor of an N-heterocyclic carbene ligand, in combination with palladium acetate, has been employed as an effective catalyst for the fluorine-free Hiyama reaction. A systematic study of the catalytic mixture, by a 3² factorial design, has revealed that both the amount of palladium and the Pd/NHC precursor ratio are important factors for obtaining good yields of the coupling products, indicating an interaction between them. The best catalytic system involves mixing 0.1 mol-% palladium acetate in a 1:5 ratio (Pd/salt 5), which allows the effective coupling of a range of aryl bromides and chlorides with trimethoxy(phenyl)silane. The Hiyama reactions are carried out in NaOH solution (50 % H2O w/w), at 120 °C under microwave irradiation for 60 min.
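As a rough illustration of the 3² factorial design mentioned above: the two studied factors (palladium loading and Pd/NHC-precursor ratio) at three levels each give nine experimental runs. The level values below are placeholders, not the paper's actual settings:

```python
from itertools import product

# two factors at three levels each -> 3^2 = 9 experimental runs
pd_loading = [0.05, 0.1, 0.2]   # mol-% Pd(OAc)2 (hypothetical levels)
pd_nhc_ratio = [1, 2, 5]        # Pd : NHC-precursor ratio (hypothetical levels)

runs = list(product(pd_loading, pd_nhc_ratio))
print(len(runs))  # 9: every factor-level combination is covered
```

Running every combination is what lets the analysis detect an interaction between the two factors, as the abstract reports.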
Abstract:
New low-cost sensors and new open, free libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, human detection and/or gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and track the fingers is proposed. This new method is based on point clouds from RGBD range images. It does not require visual marks, camera calibration, knowledge of the environment or complex, expensive acquisition systems. Furthermore, this method has been implemented to create a human interface to move a robot hand. The human hand is recognized and the movement of the fingers is analyzed. Afterwards, it is imitated by a Barrett hand, using communication events programmed with ROS.
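Hand recognition from range images typically starts by isolating the closest object to the camera as the candidate hand. The sketch below shows that first step on a toy depth image; the function name, depth band and data layout are assumptions for illustration, not the paper's implementation:

```python
def segment_hand(depth, band=0.15):
    """Return a boolean mask marking pixels within `band` meters of the
    closest valid reading, assuming the hand is the nearest object.
    `depth` is a 2D list of depths in meters; 0 marks invalid pixels."""
    valid = [d for row in depth for d in row if d > 0]
    if not valid:
        return [[False] * len(row) for row in depth]
    near = min(valid)
    return [[(0 < d <= near + band) for d in row] for row in depth]

# toy 4x4 depth image: hand at ~0.5 m, background at 2 m, one invalid pixel (0)
depth = [
    [2.0, 2.0,  2.0,  2.0],
    [2.0, 0.50, 0.55, 2.0],
    [2.0, 0.52, 0.58, 2.0],
    [2.0, 2.0,  0.0,  2.0],
]
mask = segment_hand(depth)
print(sum(sum(row) for row in mask))  # 4 hand pixels selected
```

The resulting mask would then feed the fingertip tracking and, ultimately, the ROS events that drive the robot hand.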
Abstract:
A variety of hydroxy- and amino-functionalized imidazoles were prepared from 1-methyl- and 1-(diethoxymethyl)imidazole by means of isoprene-mediated lithiation followed by reaction with an electrophile. These compounds in combination with palladium acetate were screened as catalyst systems for the Hiyama reaction under fluorine-free conditions using microwave irradiation. The systematic study of the catalytic system showed 1-methyl-2-aminoalkylimidazole derivative L1 to be the best ligand, which was employed under solvent-free conditions with a 1:2 Pd/ligand ratio and TBAB (20 mol-%) as additive. The study has revealed an interaction between the Pd/ligand ratio and the amount of TBAB. The established catalytic system presented a certain degree of robustness, and it has been successfully employed in the coupling of a range of aryl bromides and chlorides with different aryl siloxanes. Furthermore, both reagents were employed in an equimolecular amount, without an excess of organosilane.
Abstract:
Sensing techniques are important for solving the problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This approach is also used to measure changes in the shape of an object’s surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
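The deformation check described above compares the observed surface against a reference shape and raises an event when the drift is too large. A minimal stand-in using brute-force nearest-neighbor distances between two small point clouds is sketched below; the names and threshold are assumptions, not the paper's pipeline:

```python
def max_surface_drift(reference, current):
    """Largest distance from any current point to its nearest reference point."""
    def nearest(p, cloud):
        return min(sum((p[i] - q[i]) ** 2 for i in range(3)) for q in cloud) ** 0.5
    return max(nearest(p, reference) for p in current)

def deformation_event(reference, current, threshold=0.01):
    """Emit a deformation event when the surface drifts beyond `threshold` meters."""
    return max_surface_drift(reference, current) > threshold

# flat 5x5 reference patch (1 cm grid) vs. the same patch with one dented point
reference = [(x * 0.01, y * 0.01, 0.0) for x in range(5) for y in range(5)]
dented = list(reference)
x, y, _ = dented[12]
dented[12] = (x, y, -0.02)  # push the center point 2 cm inward
print(deformation_event(reference, dented))  # True: the dent exceeds 1 cm
```

In the system described above this boolean would become the event message sent to the robot controller, so the hand can relax its grip before the object is damaged.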