979 results for Visione Robotica Calibrazione Camera Robot Hand Eye
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied which avoid feature extraction and matching completely. Cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds and the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods which can be accommodated into real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
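As a toy illustration of the photometric-cost idea described above — minimizing a cost over raw pixel differences rather than matched features — the sketch below estimates a pure integer image translation by exhaustive search. This is a simplification, not the thesis's method, which optimizes full 3D rotation and translation under perspective projection; all function names here are illustrative.

```python
import numpy as np

def photometric_cost(ref, cur, dx, dy):
    """Sum of squared pixel differences between the reference image and the
    current image shifted by (dx, dy), over the overlapping region only."""
    h, w = ref.shape
    r = ref[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    c = cur[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    return float(np.sum((r - c) ** 2))

def estimate_shift(ref, cur, search=3):
    """Exhaustive search over integer shifts minimising the photometric cost."""
    best = min((photometric_cost(ref, cur, dx, dy), dx, dy)
               for dy in range(-search, search + 1)
               for dx in range(-search, search + 1))
    return best[1], best[2]
```

In a real direct method the search is replaced by Gauss-Newton iterations over the pose parameters, using image gradients to linearize the photometric residuals.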
Abstract:
This review covers the effect of drugs affecting anxiety using four psychological procedures for inducing experimental anxiety applied to healthy volunteers and patients with anxiety disorders. The first is aversive conditioning of the skin conductance responses to tones. The second is simulated public speaking, which consists of speaking in front of a video camera, with anxiety being measured with psychometric scales. The third is the Stroop Color-Word test, in which words naming colors are printed in the same or in a different color, the incongruence generating a cognitive conflict. The last test is a human version of a thoroughly studied animal model of anxiety, fear-potentiated startle, in which the eye-blink reflex to a loud noise is recorded. The evidence reviewed led to the conclusion that the aversive conditioning and potentiated startle tests are based on classical conditioning of anticipatory anxiety. Their sensitivity to benzodiazepine anxiolytics suggests that these models generate an emotional state related to generalized anxiety disorder. On the other hand, the increase in anxiety determined by simulated public speaking is resistant to benzodiazepines and sensitive to drugs affecting serotonergic neurotransmission. This pharmacological profile, together with epidemiological evidence indicating its widespread prevalence, suggests that the emotional state generated by public speaking represents a species-specific response that may be related to social phobia and panic disorder. Because of scant pharmacological data, the status of the Stroop Color-Word test remains uncertain. In spite of ethical and economic constraints, human experimental anxiety constitutes a valuable tool for the study of the pathophysiology of anxiety disorders.
Abstract:
The review of intelligent machines shows that the demand for new ways of helping people perceive the real world grows higher every year. This thesis describes the design and implementation of machine vision for a mobile assembly robot. The work was done as part of an LUT project in the Laboratory of Intelligent Machines. The aim of this work is to create a working vision system; qualitative and quantitative research was done to complete this task. In the first part, the author presents the theoretical background on digital camera working principles, wireless transmission basics, creation of a live stream, and methods used for pattern recognition. Formulas, dependencies and previous research related to the topic are shown. In the second part, the equipment used for the project is described, with information about the brands, models, capabilities and the requirements for implementation. In addition, the author describes the LabVIEW software, its add-ons and OpenCV, which are used in the project. The results can be found in a further section of the thesis; they are mainly represented by screenshots from cameras and the working station, and photos of the system. The key result of this thesis is a vision system created for the needs of a mobile assembly robot, and examples show graphically what was done. Future research in this field includes optimization of the pattern recognition algorithm, which will reduce the response time for recognizing objects. The system presented by the author can also be used for further activities involving artificial intelligence.
Abstract:
The flexibility of the robot is the key to its success as a viable aid to production. Flexibility of a robot can be explained in two directions. The first is to increase the physical generality of the robot such that it can be easily reconfigured to handle a wide variety of tasks. The second is to increase the ability of the robot to interact with its environment such that tasks can still be completed successfully in the presence of uncertainties. Articulated hands are capable of adapting to a wide variety of grasp shapes, hence reducing the need for special tooling. The availability of low-mass, high-bandwidth points close to the manipulated object also offers significant improvements in the control of fine motions. This thesis provides a framework for using articulated hands to perform local manipulation of objects. In particular, it addresses the issues in effecting compliant motions of objects in Cartesian space. The Stanford/JPL hand is used as an example to illustrate a number of concepts. The examples provide a unified methodology for controlling articulated hands grasping with point contacts. We also present a high-level hand programming system based on the methodologies developed in this thesis. Compliant motion of grasped objects and dexterous manipulations can be easily described in the LISP-based hand programming language.
Abstract:
In many motion-vision scenarios, a camera (mounted on a moving vehicle) takes images of an environment to find the "motion" and shape. We introduce a direct method called fixation for solving this motion-vision problem in its general case. Fixation uses neither feature correspondence nor optical flow. Instead, spatio-temporal brightness gradients are used directly. In contrast to previous direct methods, fixation does not restrict the motion or the environment. Moreover, the fixation method neither requires tracked images as its input nor uses mechanical tracking for obtaining fixated images. Experimental results on real images are presented, and implementation issues and techniques are discussed.
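The brightness-constancy principle underlying such direct methods can be sketched in its simplest form: for a global image translation (u, v), the constraint Ix·u + Iy·v + It = 0 holds at each pixel and can be solved by least squares over the spatio-temporal gradients. This is a hedged toy example of the gradient-based idea, not the fixation method itself, which handles general motion.

```python
import numpy as np

def direct_translation(img0, img1):
    """Least-squares translational motion from spatio-temporal gradients,
    using the brightness-constancy constraint Ix*u + Iy*v + It = 0."""
    Iy, Ix = np.gradient(img0)          # spatial gradients (axis 0 = y)
    It = img1 - img0                    # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

The same normal-equations structure generalizes to rotation and depth parameters, which is what distinguishes direct egomotion methods from two-stage optical-flow pipelines.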
Abstract:
This paper describes a system for the computer understanding of English. The system answers questions, executes commands, and accepts information in normal English dialog. It uses semantic information and context to understand discourse and to disambiguate sentences. It combines a complete syntactic analysis of each sentence with a "heuristic understander" which uses different kinds of information about a sentence, other parts of the discourse, and general information about the world in deciding what the sentence means. It is based on the belief that a computer cannot deal reasonably with language unless it can "understand" the subject it is discussing. The program is given a detailed model of the knowledge needed by a simple robot having only a hand and an eye. We can give it instructions to manipulate toy objects, interrogate it about the scene, and give it information it will use in deduction. In addition to knowing the properties of toy objects, the program has a simple model of its own mentality. It can remember and discuss its plans and actions as well as carry them out. It enters into a dialog with a person, responding to English sentences with actions and English replies, and asking for clarification when its heuristic programs cannot understand a sentence through use of context and physical knowledge.
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, evaluating them on more than 0.5 million synthetic data samples. Results from real data are also given.
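The discrete epipolar constraint mentioned above can be illustrated with a linear eight-point estimate of the essential matrix from normalised image correspondences. This is a generic textbook sketch, not any of the surveyed methods; robust pipelines add outlier rejection and coordinate normalisation.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Linear eight-point estimate of the essential matrix E satisfying
    x2_h^T E x1_h = 0, from Nx2 normalised correspondences (N >= 8)."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i, ((u1, v1), (u2, v2)) in enumerate(zip(x1, x2)):
        A[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)            # null vector of A, up to scale
    # project onto the essential-matrix manifold: two equal singular
    # values and one zero
    U, s, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

Decomposing E then yields the rotation and (up to scale) translation that constitute the egomotion estimate.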
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute, map-based localization, landmark detection and tracking, and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is very high, allowing them to be used as feedback measures for a velocity-based low-level controller. The paper details the localization algorithm, shows some graphical results, and reports the accuracy of the system.
Abstract:
This research work deals with the problem of modeling and designing a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects highly related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies such that the robot can serve as a mobile multimedia information point. It is in this context, with navigation strategies oriented to goal achievement, that a local model predictive control is attained. Hence, such studies present a very interesting control strategy for developing the future capabilities of the system.
Abstract:
This paper is focused on the mobile robot platform PRIM (platform robot information multimedia). This robot has been built to cover two main needs of our group: on one hand, the need for a fully open mobile robotic platform that is very useful in fulfilling the teaching and research activity of our school community, and on the other, the idea of introducing an ethical product which would be useful as a mobile multimedia information point and service tool. This paper describes exactly how the system is made up and explains the philosophy behind this work. The navigation strategies and sensor fusion, of which the machine vision system is the most important component, are oriented towards goal achievement and are the key to the behaviour of the robot.
Abstract:
Large-scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost remotely operated vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, where nonlinear minimisation is substituted by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortion, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state extended Kalman filter framework, aimed at minimising the total number of matching attempts while simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory, and the contribution of matching an image pair is assessed using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework.
Innovative aspects include the use of a fast image-similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches to large-area mosaicing, our framework deals naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and a comparison is made with other state-of-the-art approaches, using a series of challenging datasets in underwater scenarios.
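The MST step of the tentative-topology idea can be sketched with Kruskal's algorithm over pairwise image-similarity scores. This is illustrative only: the thesis's similarity criterion and the surrounding matching framework are more elaborate, and the data layout below is assumed for the example.

```python
import numpy as np

def tentative_topology(sim):
    """Kruskal MST over a symmetric image-similarity matrix (higher =
    more similar); returns the n-1 edges of a tentative mosaic topology."""
    n = sim.shape[0]
    # convert similarity to a cost and sort candidate edges by it
    edges = sorted((1.0 - sim[i, j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(a):                         # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    mst = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                     # edge joins two components
            parent[ri] = rj
            mst.append((i, j))
    return mst
```

The resulting tree gives one cheap connectivity hypothesis per image pair, after which matching effort can be concentrated on the pairs with the most overlap evidence.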
Abstract:
One of the main challenges for developers of new human-computer interfaces is to provide a more natural way of interacting with computer systems, avoiding excessive use of hand and finger movements. In this way, a valuable alternative communication pathway is also provided to people suffering from motor disabilities. This paper describes the construction of a low-cost eye tracker using a fixed-head setup: a webcam, a laptop and an infrared lighting source were used together with a simple frame to fix the head of the user. Furthermore, detailed information is given on the various image processing techniques used for extracting the centre of the pupil, and different methods to calculate the point of gaze are discussed. An overall accuracy of 1.5 degrees was obtained while keeping the hardware cost of the device below 100 euros.
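A rough sketch of the pupil-centre step, assuming simple dark-pixel thresholding followed by a centroid; real pipelines (and likely the paper's) add noise filtering and ellipse fitting, so this is a minimal stand-in, not the published processing chain.

```python
import numpy as np

def pupil_centre(gray, threshold=50):
    """Centroid of dark pixels as a rough pupil-centre estimate.
    gray is a 2D uint8 image; under IR illumination the pupil is the
    darkest region, so thresholding isolates it."""
    ys, xs = np.nonzero(gray < threshold)
    if xs.size == 0:
        return None                      # no pupil candidate found
    return float(xs.mean()), float(ys.mean())
```

Mapping the pupil centre to a point of gaze then typically goes through a per-user calibration, e.g. a low-order polynomial fit between pupil positions and known screen targets.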
Abstract:
With the advent of the digital era, web applications have become an inevitable part of our lives. We use the web to manage even financially or ethically sensitive issues. For this reason, the exploration of information-seeking behavior is an exciting area of research. The current study provides insight into information-seeking behavior using a classic 'Find the Difference' game. 50 university students between the ages of 19 and 26 participated in the study. Eye movement data were recorded with a Tobii T120 device. Participants carried out 4 continuous tasks, each including two pictures side by side with 7 hidden differences. After finishing the tasks, participants were asked to repeat the game with the same picture set; this data collection methodology allows the evaluation of learning curves. Additionally, participants were asked about their hand preference. For the purpose of analysis the following metrics were applied: task times (including saccades), fixation count and fixation duration (without saccades). The right- and left-hand side of each picture was selected as an AOI (Area of Interest) to detect side preference in connection with hand preference. Results suggest a significant difference between male and female participants regarding aggregated task times (male 58.37 s, female 68.37 s), deviation in the number of fixations and in fixation duration (apparently females have fewer but longer fixations), and also in the distribution of fixations between AOIs. Using eye-tracking data, the current paper highlights the similarities and differences in information acquisition strategies and reveals gender- and education-dependent (Arts vs. Sciences) characteristics of interaction.
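The per-AOI metrics described above (fixation count and duration split by picture side) can be sketched as a simple aggregation over fixation records. The data layout here is hypothetical; the study would have used Tobii's own export format and tooling.

```python
def aoi_metrics(fixations, split_x):
    """Fixation count and total duration for left/right AOIs, given
    fixations as (x, y, duration_ms) tuples and a vertical split line."""
    stats = {"left": [0, 0.0], "right": [0, 0.0]}
    for x, y, dur in fixations:
        side = "left" if x < split_x else "right"
        stats[side][0] += 1              # fixation count
        stats[side][1] += dur            # accumulated duration
    return {k: {"count": c, "total_ms": d} for k, (c, d) in stats.items()}
```

Comparing these per-AOI totals across participant groups is what supports the side-preference and gender-difference claims.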
Abstract:
Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop), shifting gaze generates ocular proprioception which can be used to update a movement (feedback-feedforward), or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback from eye to head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial movements and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.
Abstract:
Mobile robots provide a versatile platform for research; however, they can also provide an interesting educational platform for public exhibition at museums. In general, museums require exhibits that are both eye-catching and exciting to the public whilst requiring a minimum of maintenance time from museum technicians. In many cases it is simply not possible to continuously change batteries, and some method of supplying continuous power is required. A powered flooring system is described that is capable of providing power continuously to a group of robots. Three different museum exhibit applications are described. All three robot exhibits are of a similar basic design, although the exhibits are very different in appearance and behaviour. The durability and versatility of the robots also make them extremely good candidates for long-duration experiments such as those required by evolutionary robotics.