20 results for choreography for the camera

at Universidad de Alicante


Relevance:

100.00%

Publisher:

Abstract:

In this study, a digital CMOS camera was calibrated for use as a non-contact colorimeter for measuring the color of granite artworks. The low chroma values of the granite, which yield similar stimulation of the camera's three color channels, proved to be the most challenging aspect of the task. The appropriate parameters for converting the device-dependent RGB color space into a device-independent color space were established. For this purpose, the color of a large number of Munsell samples (corresponding to the previously defined color gamut of granite) was measured with the digital camera and with a spectrophotometer (the reference instrument), and the color data were then compared using the CIELAB color-difference formulae. The best correlations between measurements were obtained when the camera operated at 10 bits and the spectrophotometer measured in SCI mode. Finally, the calibrated instrument was successfully used to measure the color of six commercial varieties of Spanish granite.
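
The comparison step can be sketched with the CIE76 color-difference formula; this is a minimal illustration with hypothetical CIELAB readings, not the paper's actual data or its exact formula variant:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB triplets (L*, a*, b*)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical low-chroma granite sample: camera reading vs. spectrophotometer
camera = (62.1, -0.8, 2.3)
reference = (61.5, -0.5, 2.0)
print(round(delta_e_cie76(camera, reference), 2))  # → 0.73
```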

Relevance:

90.00%

Publisher:

Abstract:

Image-Based Visual Servoing (IBVS) is a vision-based robotic control scheme. It uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These range from the intrinsic camera parameters (which can be obtained from a prior camera calibration) to the distance along the optical axis between the camera and the visual features, i.e., the depth. This paper presents a comparative study of the performance of IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system's performance.
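
The role of depth in IBVS can be illustrated with the standard interaction matrix for a point feature; this is a textbook sketch, not the implementation compared in the paper:

```python
def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix for a normalized image point
    (x, y) at depth Z, relating feature velocity to the camera twist."""
    return [
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ]

# The translational columns scale with 1/Z, which is why a good depth
# estimate (e.g. from an RGB-D sensor) matters for the control law.
L = interaction_matrix(0.0, 0.0, 2.0)
print(L[0][0])  # → -0.5
```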

Relevance:

90.00%

Publisher:

Abstract:

During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically because the robot moves to adapt its path and grasp objects correctly; this is because the camera is mounted on the robot's end effector. For this reason, in this type of environment, a visual recognition system must be implemented that recognizes objects and obtains their positions in the scene automatically and autonomously. Furthermore, in industrial environments, all objects manipulated by robots are often made of the same material and cannot be differentiated by features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been carried out for application in these environments. Second, a visual recognition system built on a distributed client-server architecture is proposed for the recognition of industrial objects that lack these appearance features. Our system has been implemented to overcome the recognition problems that arise when objects can only be recognized by geometric shape and the simplicity of those shapes can create ambiguity. Finally, real tests are performed and illustrated to verify the satisfactory performance of the proposed system.

Relevance:

80.00%

Publisher:

Abstract:

Paper presented at EVACES 2011, 4th International Conference on Experimental Vibration Analysis for Civil Engineering Structures, Varenna (Lecco), Italy, October 3-5, 2011.

Relevance:

80.00%

Publisher:

Abstract:

Traditional visual servoing systems have been widely studied in recent years. These systems control the position of a camera attached to the robot end effector, guiding it from any position to the desired one. Such controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g., when a visual feature may be lost because it is leaving the frame). Different event triggers have been defined in the image space to activate or deactivate the visual controller. The tests implemented to validate the proposal show that this new scheme prevents visual features from leaving the image while considerably reducing the system's complexity. In the future, events could also be used to change other parameters of visual servoing systems.
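
One simple image-space trigger of the kind described fires when any feature drifts close to the image border; this is a hedged sketch (the margin value and the predicate are illustrative, not the authors' actual trigger definitions):

```python
def near_border_event(features, width, height, margin=20):
    """Fire an event when any tracked feature (u, v) in pixels comes within
    `margin` pixels of the frame edge, i.e. risks leaving the image."""
    return any(
        u < margin or v < margin or u > width - margin or v > height - margin
        for u, v in features
    )

# A feature at (5, 100) in a 640x480 frame is about to leave the image:
print(near_border_event([(5, 100)], 640, 480))  # → True
```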

Relevance:

80.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics provides valuable information about the robot's environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor. However, their limited precision, and the lack of texture on some surfaces, suggest that other 3D sensors could be more suitable. In this work, we examine the use of two such sensors: an SR4000 infrared camera and a Kinect. We combine the 3D data obtained by these cameras with features extracted from the 2D images they acquire, applying a Growing Neural Gas (GNG) network to the 3D data. The goal is to obtain a robust egomotion technique; the GNG network is used to reduce the sensor error. To calculate the egomotion, we test two 3D registration methods: one based on an iterative closest point algorithm and another that employs random sample consensus. Finally, a simultaneous localization and mapping method is applied to the complete sequence to reduce the global error. The error of each sensor and the mapping results of the proposed method are examined.
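
The registration step can be illustrated with the closed-form rigid alignment of corresponding point sets that each ICP iteration solves; shown in 2D for brevity, this is a generic sketch rather than the paper's GNG-based pipeline:

```python
import math

def rigid_align_2d(P, Q):
    """Closed-form least-squares rigid alignment (rotation angle, translation)
    mapping point set P onto corresponding point set Q."""
    n = len(P)
    cpx = sum(p[0] for p in P) / n
    cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n
    cqy = sum(q[1] for q in Q) / n
    s = c = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cpx, py - cpy   # centered source point
        bx, by = qx - cqx, qy - cqy   # centered target point
        c += ax * bx + ay * by
        s += ax * by - ay * bx
    theta = math.atan2(s, c)
    # translation that maps the rotated source centroid onto the target centroid
    tx = cqx - (math.cos(theta) * cpx - math.sin(theta) * cpy)
    ty = cqy - (math.sin(theta) * cpx + math.cos(theta) * cpy)
    return theta, (tx, ty)
```

In an ICP loop this step alternates with nearest-neighbor correspondence search; RANSAC instead draws minimal point subsets, solves the same alignment, and keeps the hypothesis with the most inliers.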

Relevance:

80.00%

Publisher:

Abstract:

Traditional visual servoing systems do not address the tracking of moving objects. When these systems are used to track a moving object, the visual features can leave the image, depending on the object's velocity, causing the tracking task to fail. This occurs especially when the object and the robot are both stopped and the object then starts to move. In this work, we have employed a retina camera based on Address Event Representation (AER) in order to use events as the input to the visual servoing system. The events emitted by the camera indicate pixel movement. Event-based visual information is processed only at the moment it occurs, reducing the response time of visual servoing systems when they are used to track moving objects.

Relevance:

80.00%

Publisher:

Abstract:

Olga Diego Freises (Alacant, 1969) studied Fine Arts at the faculties of València (Universitat Politècnica) and Altea (Universitat Miguel Hernández), graduating in 2006. Her artistic activity, critical, committed and innovative, has been oriented mainly toward performance, though with important graphic and sculptural support in the form of drawings and maquettes. As is usual, Olga Diego uses video as a medium for recording and/or supporting her actions. At times the video becomes the basis or the presentation format of an action. It is thus a photographic record, a moving image that can become a still one, that preserves the memory of what was done or what happened. Sometimes the recording is handheld and sometimes the camera is attached to an artifact, thereby offering different views of the action. Sometimes the work is done in daylight and sometimes by the light of fire or of spotlights, likewise offering different views of the action. The video is therefore handled according to the suggestion, the expository clarity or the ambiguity being sought. We propose here an introduction to the videos linked to some of Olga Diego's actions, carried out in places as different as Spain, the Free Western Sahara and Egypt.

Relevance:

30.00%

Publisher:

Abstract:

Analysis of vibrations and displacements is a hot topic in structural engineering. Although there is a wide variety of methods for vibration analysis, the direct measurement of displacements in the mid and high frequency range is not well solved, and accurate devices tend to be very expensive. Low-cost systems can be achieved by applying adequate image processing algorithms. In this paper, we propose the use of a commercial pocket digital camera, able to record more than 420 frames per second (fps) at low resolution, for the accurate measurement of small vibrations and displacements. The method is based on tracking elliptical targets with sub-pixel accuracy. Our proposal is demonstrated at a distance of 10 m with a spatial resolution of 0.15 mm. A practical application on a simple structure is given, and the main parameters of the attenuated movement of a steel column after an impulsive impact are determined with a spatial accuracy of 4 µm.
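
Sub-pixel target localization is often based on an intensity-weighted centroid; this toy sketch illustrates the idea (the paper's ellipse-tracking method is more elaborate and is not detailed in the abstract):

```python
def subpixel_centroid(patch):
    """Intensity-weighted centroid (cx, cy) of a grayscale patch given as a
    list of rows; the centroid lands between pixels, giving sub-pixel accuracy."""
    total = sum(sum(row) for row in patch)
    cx = sum(x * v for row in patch for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(patch) for v in row) / total
    return cx, cy

# Two equally bright pixels at x = 1 and x = 2 put the centroid at x = 1.5
print(subpixel_centroid([[0, 0, 0], [0, 10, 10], [0, 0, 0]]))  # → (1.5, 1.0)
```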

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, RGB-D sensors are the focus of a great deal of research in computer vision and robotics. Sensors of this kind, such as the Kinect, provide 3D data together with color information. However, their working range is limited to less than 10 meters, making them unsuitable for some robotics applications, such as outdoor mapping. In these environments, 3D lasers, with working ranges of 20-80 meters, are better suited, but they do not usually provide color information. A simple 2D camera can be used to add color information to the point cloud, but a calibration process between the camera and the laser must first be carried out. In this paper we present a portable system for calibrating any conventional camera with a 3D laser in order to assign color information to the 3D points obtained, so that laser precision and color information can be exploited simultaneously. Unlike techniques that use a three-dimensional body of known dimensions for calibration, this system is highly portable because it uses small catadioptric targets that can easily be placed in the environment. We use our calibration system in a 3D mapping pipeline, including Simultaneous Localization and Mapping (SLAM), to obtain a colored 3D map that can be used in different tasks. We show that an additional problem arises: the information from 2D cameras changes when lighting conditions change, so when 3D point clouds from two different views are merged, several points in a given neighborhood may carry different color information. A new method for color fusion is presented that yields correctly colored maps. The system is tested by applying it to 3D reconstruction.
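
Once the camera-laser extrinsics are calibrated, each 3D laser point can be projected into the image to pick up a color. A minimal pinhole-projection sketch follows; the intrinsic matrix values and the identity extrinsics are hypothetical, not the paper's calibration results:

```python
def project_point(X, K, R, t):
    """Project a 3D laser point X into pixel coordinates (u, v) using
    rotation R (3x3), translation t (3,) and intrinsic matrix K (3x3)."""
    # transform into the camera frame
    xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # pinhole projection: divide by depth, then apply focal length and center
    u = K[0][0] * xc[0] / xc[2] + K[0][2]
    v = K[1][1] * xc[1] / xc[2] + K[1][2]
    return u, v

K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(project_point([1.0, 0.5, 2.0], K, I3, [0.0, 0.0, 0.0]))  # → (570.0, 365.0)
```

The color of the pixel at (u, v) is then attached to the laser point when building the colored map.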

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.

Relevance:

30.00%

Publisher:

Abstract:

In this work, we present a multi-camera surveillance system based on self-organizing neural networks for representing events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and motion analysis and monitoring. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. To provide relevant information to higher-level systems and to monitor and make decisions in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

Relevance:

30.00%

Publisher:

Abstract:

We present a disposable optical sensor for ascorbic acid (AA). It uses a polyaniline-based electrochromic sensing film that undergoes a color change when exposed to solutions of ascorbic acid at pH 3.0. The color is monitored by a conventional digital camera using the hue (H) color coordinate. The electrochromic film was deposited on an indium tin oxide (ITO) electrode by cyclic voltammetry and then characterized by atomic force microscopy and by electrochemical and spectroscopic techniques. An estimate of the initial rate of change of H, ΔH/Δt, is used as the analytical parameter, yielding the logarithmic relationship ΔH/Δt = 0.029 log[AA] + 0.14, with a limit of detection of 17 μM. The relative standard deviation when using the same membrane 5 times was 7.4% for the blank and 2.6% (n = 3) on exposure to ascorbic acid at a concentration of 160 μM. The sensor is disposable, and its applicability to pharmaceutical analysis was demonstrated. The configuration can be extended to future handheld devices.
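
The reported calibration ΔH/Δt = 0.029 log[AA] + 0.14 can be inverted to estimate concentration from a measured initial rate. A sketch follows; note that the abstract does not state the concentration units of the fit, so the returned value is simply in whatever units the calibration uses:

```python
def ascorbic_acid_conc(dH_dt):
    """Invert the reported calibration dH/dt = 0.029*log10([AA]) + 0.14
    to estimate [AA] from a measured initial rate of hue change."""
    return 10 ** ((dH_dt - 0.14) / 0.029)

# At dH/dt = 0.14 the log term vanishes, so the estimate is exactly 1 unit:
print(ascorbic_acid_conc(0.14))  # → 1.0
```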

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a method for the fast calculation of the egomotion of a robot using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). It uses optical flow to determine whether the robot has moved. If so, visual features that do not satisfy several criteria (such as intersection and uniqueness) are discarded, and the egomotion is then calculated. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. The proposed method is more efficient than other current methods.
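
The movement check based on optical flow can be sketched as a simple threshold on the mean flow magnitude; the threshold value and the predicate are illustrative assumptions, not the paper's actual criterion:

```python
import math

def robot_moved(flow_vectors, threshold=0.5):
    """Decide whether the robot has moved, given per-feature optical-flow
    displacements (dx, dy) in pixels between consecutive frames."""
    if not flow_vectors:
        return False
    mean_mag = sum(math.hypot(dx, dy) for dx, dy in flow_vectors) / len(flow_vectors)
    return mean_mag > threshold

# A mean displacement of 1.5 px exceeds the 0.5 px threshold:
print(robot_moved([(1.0, 0.0), (2.0, 0.0)]))  # → True
```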