850 results for Robotics
Abstract:
Reducing the energy consumption for computation and cooling in servers is a major challenge given today's data center energy costs. To ensure energy-efficient operation of servers in data centers, the relationship among computational power, temperature, leakage, and cooling power needs to be analyzed. By means of an innovative setup that enables monitoring and controlling the computing and cooling power consumption separately on a commercial enterprise server, this paper studies temperature-leakage-energy tradeoffs, obtaining an empirical model for the leakage component. Using this model, we design a controller that continuously seeks and settles at the optimal fan speed to minimize the energy consumption for a given workload. We run a customized dynamic load-synthesis tool to stress the system. Our proposed cooling controller achieves up to 9% energy savings and a 30 W reduction in peak power compared to the default cooling control scheme.
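The abstract does not describe the controller's internals; as a rough illustration of an optimum-seeking cooling controller of this kind, here is a minimal hill-climbing sketch in Python, assuming hypothetical read_total_power() and set_fan_speed() hooks into the server (not the paper's implementation):

```python
import time

def seek_optimal_fan_speed(read_total_power, set_fan_speed,
                           rpm=4000, step=200, rpm_min=2000, rpm_max=10000):
    """Hill-climbing sketch: perturb the fan speed, keep moving in the
    direction that lowers total (compute + cooling) power, and reverse
    direction when a step makes things worse."""
    set_fan_speed(rpm)
    time.sleep(5)                      # let temperature and leakage settle
    best = read_total_power()
    direction = 1
    while True:
        rpm = max(rpm_min, min(rpm_max, rpm + direction * step))
        set_fan_speed(rpm)
        time.sleep(5)
        power = read_total_power()
        if power >= best:              # no improvement: search the other way
            direction = -direction
        best = power
```

Such a controller oscillates around the optimum, which is consistent with the paper's description of continuously seeking and settling at the best fan speed.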
Abstract:
In this paper, an online self-tuned PID controller is proposed for the control of a car whose goal is to follow another one at distances and speeds typical of urban traffic. The best-known tuning mechanism is perhaps the MIT rule, owing to its ease of implementation. However, as is well known, this method does not guarantee the stability of the system, providing good results only for constant or slowly varying reference signals in the absence of noise, which are unrealistic conditions. When the reference input varies at an appreciable rate or noise is present, the system may eventually become unstable. In this paper, an alternative method is proposed that significantly improves the robustness of the system for varying inputs or in the presence of noise, as demonstrated by simulation.
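For context, the MIT rule adjusts a controller parameter θ along the negative gradient of the squared tracking error, dθ/dt = -γ e ∂e/∂θ, with the sensitivity usually approximated by the model output. A minimal simulation sketch of MIT-rule gain adaptation for a first-order plant (illustrative values only, not the paper's car-following setup):

```python
# Discrete first-order plant y[k+1] = a*y[k] + b*k_p*theta*u_c with unknown
# gain k_p; the reference model uses the nominal gain k_m. The MIT rule
# adapts the feedforward gain theta so the plant tracks the model.
a, b = 0.9, 0.1
k_p, k_m = 2.0, 1.0            # true and nominal plant gains
gamma, dt = 0.5, 0.01          # adaptation gain and sample time
theta, y, y_m = 0.0, 0.0, 0.0

for k in range(20000):
    u_c = 1.0 if (k * dt) % 20 < 10 else -1.0   # square-wave reference
    y   = a * y   + b * k_p * theta * u_c        # plant with adapted gain
    y_m = a * y_m + b * k_m * u_c                # reference model
    e = y - y_m
    theta -= gamma * e * y_m * dt                # MIT rule: dtheta = -g*e*ym

print(theta)   # should approach k_m / k_p = 0.5
```

Raising gamma or the reference frequency in this sketch reproduces the instability the abstract warns about.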
Abstract:
One of the major challenges in evolutionary robotics is the need for the robot to make decisions on its own, in accordance with its multiple programmed tasks, optimizing its timing and power. In this paper, we present a new automatic decision-making mechanism for a guide robot that allows it to make the best choice to reach its aims, performing its tasks optimally. The selection of the best alternative is based on a series of criteria and restrictions on the tasks to perform. The software developed in the project has been verified on the tour-guide robot Urbano. The most important aspect of this proposal is that the design uses learning as the means to optimize the quality of the decision making. The quality index of the best choice to perform is modeled using fuzzy logic and represents the beliefs of the robot, which continue to evolve in order to match the "external reality". This fuzzy system is used to select the most appropriate set of tasks to perform during the day. With this tool, the tour-guide robot prepares its daily agenda, which satisfies the objectives and restrictions, and identifies the best task to perform at each moment. This work is part of the ARABOT project of the Intelligent Control Research Group at the Universidad Politécnica de Madrid to create "awareness" in a guide robot.
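As an illustration of how a fuzzy quality index over task criteria might be expressed (a hypothetical sketch with made-up criteria, not the ARABOT implementation):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def quality(task):
    """Fuzzy quality index: conjunction (min) of the degrees to which the
    task is short, interesting, and cheap in battery. Criteria are invented
    for illustration."""
    short = tri(task["duration"], -1, 0, 30)       # minutes
    liked = tri(task["interest"], 0.5, 1.0, 1.5)   # normalized 0..1
    cheap = tri(task["battery"], -0.1, 0, 0.4)     # fraction of charge
    return min(short, liked, cheap)

tasks = [{"duration": 10, "interest": 0.9, "battery": 0.1},
         {"duration": 25, "interest": 0.6, "battery": 0.3}]
best = max(tasks, key=quality)    # task with the highest quality index
```

Learning, in the paper's terms, would amount to evolving the membership functions so the index matches observed outcomes.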
Abstract:
We present the current status of the URBANO project, whose version 8.02 is a distributed component architecture aimed at the design of applications for social robots. SOAP is used as the remote integration mechanism. New components have been designed that enable different forms of learning. On the one hand, an Android application has been developed that allows a phone or tablet to be integrated into the control of the robot. On the other, an ontology has been developed that can represent not only concepts but the learning process itself. These components join those already available for voice synthesis and recognition, management of face and arm gestures, trajectory generation and safe navigation, a model of the robot's mood, and the execution of user-defined tasks written in the robot's own UPL (Urbano Programming Language).
Abstract:
Active optical sensing devices (LIDAR and light curtain transmission) mounted on a mobile platform can correctly detect, localize, and classify trees. To evaluate and compare the different sensors, an optical encoder wheel was used for vehicle odometry, providing a measurement of the linear displacement of the prototype vehicle along a row of tree seedlings as a reference for each recorded sensor measurement. The field trials were conducted in a juvenile tree nursery with one-year-old grafted almond trees at Sierra Gold Nurseries, Yuba City, CA, United States. Through these tests and subsequent data processing, each sensor was individually evaluated to characterize its reliability, as well as its advantages and disadvantages for the proposed task. Test results indicated that 95.7% and 99.48% of the trees were successfully detected with the LIDAR and light curtain sensors, respectively. LIDAR correctly classified trees as alive or dead at a 93.75% success rate, compared to 94.16% for the light curtain sensor. These results can help system designers select the most reliable sensor for the accurate detection and localization of each tree in a nursery, which might allow labor-intensive tasks, such as weeding, to be automated without damaging crops.
Abstract:
In this paper we present an adaptive spatio-temporal filter that aims to improve low-cost depth camera accuracy and stability over time. The proposed system is composed of three blocks that are used to build a reliable depth map of static scenes. An adaptive joint-bilateral filter is used to obtain consistent depth maps by jointly considering depth and video information and by adapting its parameters to different levels of estimated noise. Kalman filters are used to reduce the temporal random fluctuations of the measurements. Finally, an interpolation algorithm is used to obtain consistent depth maps in the regions where depth information is not available. Results show that this approach considerably improves depth map quality by considering spatio-temporal information and by adapting its parameters to different levels of noise.
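For reference, a minimal (non-adaptive) joint-bilateral filter of the kind described, where the video image guides the weights so that depth edges follow intensity edges, could be sketched as follows; the noise-adaptive parameter selection and the Kalman stage are omitted:

```python
import numpy as np

def joint_bilateral(depth, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Smooth a depth map with weights from spatial distance and from
    intensity similarity in the guide (grayscale video) image."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))     # spatial kernel
    d = np.pad(depth.astype(np.float64), radius, mode="edge")
    g = np.pad(guide.astype(np.float64), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            dw = d[i:i + 2*radius + 1, j:j + 2*radius + 1]
            gw = g[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range kernel: penalize intensity difference to the center pixel
            w_r = np.exp(-(gw - g[i + radius, j + radius])**2
                         / (2 * sigma_r**2))
            wgt = w_s * w_r
            out[i, j] = (wgt * dw).sum() / wgt.sum()
    return out
```

The adaptive variant in the paper would, presumably, tune sigma_s and sigma_r per region from the estimated noise level.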
Abstract:
The increasing use of video editing software has created a need for faster and more efficient editing tools. Here, we propose a lightweight, high-quality video indexing tool that is suitable for video editing software.
Abstract:
In this paper we present a low-cost, efficient Interactive Whiteboard that, by fusing depth and video information provided by a low-cost depth camera, is able to detect and track user movements.
Abstract:
In this paper we present an efficient hole-filling strategy that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a joint-bilateral filtering framework that includes spatial and temporal information. The missing depth values are obtained by iteratively applying a joint-bilateral filter to their neighboring pixels. The filter weights are selected considering three different factors: visual data, depth information, and a temporal-consistency map. Video and depth data are combined to improve depth map quality in the presence of edges and homogeneous regions. Finally, the temporal-consistency map is generated in order to track the reliability of the depth measurements near the hole regions. The obtained depth values are included iteratively in the filtering process of the successive frames, and the accuracy of the hole-region depth values increases as new samples are acquired and filtered.
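A simplified sketch of the iterative joint-bilateral hole-filling idea, where only valid (non-zero) neighbors vote and the guide image supplies the range weights; the temporal-consistency map described in the abstract is omitted here:

```python
import numpy as np

def fill_holes(depth, guide, radius=2, sigma_s=1.5, sigma_r=8.0, iters=5):
    """Iteratively replace invalid (zero) depth pixels with a
    joint-bilateral average of their valid neighbors."""
    d = depth.astype(np.float64).copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))    # spatial kernel
    for _ in range(iters):
        for i, j in np.argwhere(d == 0):                 # current holes
            i0, i1 = max(i - radius, 0), min(i + radius + 1, d.shape[0])
            j0, j1 = max(j - radius, 0), min(j + radius + 1, d.shape[1])
            dw = d[i0:i1, j0:j1]
            gw = guide[i0:i1, j0:j1].astype(np.float64)
            w_r = np.exp(-(gw - float(guide[i, j]))**2 / (2 * sigma_r**2))
            win_s = w_s[i0 - i + radius:i1 - i + radius,
                        j0 - j + radius:j1 - j + radius]
            wgt = win_s * w_r * (dw > 0)    # only valid neighbors vote
            if wgt.sum() > 0:
                d[i, j] = (wgt * dw).sum() / wgt.sum()
    return d
```

Each pass shrinks the holes from their borders inward, which is why the iteration converges on progressively more interior pixels.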
Abstract:
In this paper we propose an innovative approach to the problem of traffic sign detection using a computer vision algorithm under real-time operation constraints, establishing intelligent strategies to simplify the algorithm as much as possible and to speed up the process. First, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy in which the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Regarding time constraints, efficiency is achieved in two ways. On the one hand, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs: the tracking of objects of interest allows us to generate inhibition areas, i.e., regions where no new traffic signs are expected to appear because a sign already exists in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve high computational efficiency, especially as a result of the multi-resolution approach.
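As a rough illustration of the low-resolution color segmentation stage (thresholds are illustrative, and the tracking and inhibition-area logic is omitted), a sketch using OpenCV might look like:

```python
import cv2

def red_sign_candidates(frame_bgr, scale=0.25):
    """Color segmentation at reduced resolution: threshold red hues in HSV
    and return candidate bounding boxes scaled back to full resolution."""
    small = cv2.resize(frame_bgr, None, fx=scale, fy=scale)
    hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0 in OpenCV's 0..180 hue range: combine two bands
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 20:                        # drop tiny blobs
            boxes.append(tuple(int(v / scale) for v in (x, y, w, h)))
    return boxes
```

Per the abstract, only regions that survive this cheap stage would be re-examined at full resolution and handed to a per-candidate Kalman tracker.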
Abstract:
A novel scheme for depth sequence compression, based on a perceptual coding algorithm, is proposed. A depth sequence describes object positions in the 3D scene and is used, in Free Viewpoint Video, for the generation of synthetic video sequences. In perceptual video coding, the characteristics of the human visual system are exploited to improve compression efficiency. Since depth sequences are never shown to the viewer, perceptual video coding applied directly to them is not effective. The proposed algorithm is based on a novel perceptual rate-distortion optimization process, assessed over the perceptual distortion of the rendered views generated from the encoded depth sequences. The experimental results show the effectiveness of the proposed method, which obtains a considerable improvement in the perceptual quality of the rendered views.
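Conceptually, the proposal replaces depth-domain distortion with rendered-view distortion in the usual Lagrangian cost J = D + λR. A heavily simplified mode-decision sketch, in which every callable is a hypothetical hook rather than the paper's implementation:

```python
def best_mode(block, modes, encode, render_view, distortion, ref_view, lam):
    """Perceptual RDO sketch: the rate-distortion cost of each coding mode
    is evaluated on the rendered view synthesized from the coded depth,
    not on the depth map itself."""
    best, best_cost = None, float("inf")
    for mode in modes:
        coded_depth, rate = encode(block, mode)   # reconstructed depth, bits
        synth = render_view(coded_depth)          # view synthesis from depth
        cost = distortion(synth, ref_view) + lam * rate   # J = D + lambda*R
        if cost < best_cost:
            best, best_cost = mode, cost
    return best
```

The key point is that depth coding errors matter only insofar as they displace pixels in the synthesized view, which is exactly what this cost measures.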
Abstract:
This paper presents a strategy for solving the feature matching problem in calibrated very wide-baseline camera settings. In such settings, perspective distortion, depth discontinuities, and occlusion represent enormous challenges. The proposed strategy addresses them by using geometrical information, specifically by exploiting epipolar constraints. As a result, it provides a sparse set of reliable feature points whose 3D positions are accurately recovered. Special features known as junctions are used for robust matching. In particular, a strategy for the refinement of junction end-point matching is proposed, which enhances the usual junction-based approaches. This makes it possible to compute the cross-correlation between perfectly aligned plane patches in both images, thus yielding better matching results. Evaluation of experimental results proves the effectiveness of the proposed algorithm in very wide-baseline environments.
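For reference, exploiting the epipolar constraint amounts to keeping only correspondences whose point in the second image lies near the epipolar line l = F x of its match in the first image; a minimal sketch (the junction refinement itself is beyond this illustration):

```python
import numpy as np

def epipolar_filter(pts1, pts2, F, max_dist=1.5):
    """Keep correspondences whose point in image 2 lies within max_dist
    pixels of the epipolar line induced by its match in image 1.
    F is the 3x3 fundamental matrix; points are (x, y) pixel pairs."""
    keep = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        l = F @ np.array([x1, y1, 1.0])          # epipolar line (a, b, c)
        d = abs(l[0] * x2 + l[1] * y2 + l[2]) / np.hypot(l[0], l[1])
        if d < max_dist:
            keep.append(((x1, y1), (x2, y2)))
    return keep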
Abstract:
In this paper we present an innovative technique for automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. First, a color- and appearance-based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimation of the plane makes it possible to correct deformations due to perspective, thus easing further sign classification.
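A minimal sketch of the RANSAC plane-fitting step on the back-projected 3D point cloud (parameter values are illustrative):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit a plane to an (N, 3) point cloud with RANSAC: sample three
    points, build the plane through them, count inliers within tol
    (same units as the points), and keep the best model."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)           # plane normal
        if np.linalg.norm(n) < 1e-9:             # degenerate: collinear sample
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p0) @ n)         # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers                          # mask of points on the plane
```

Outlying 3D points from bad stereo matches simply fail the inlier test, which is the robustness the abstract refers to.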
Abstract:
Video-based vehicle detection is the focus of increasing interest due to its potential for collision avoidance. In particular, vehicle verification is especially challenging due to the enormous variability of vehicles in size, color, pose, etc. In this paper, a new approach based on supervised learning using Principal Component Analysis (PCA) is proposed that addresses the main limitations of existing methods. In contrast to classical approaches, which train a single classifier regardless of the relative position of the candidate (thus ignoring valuable pose information), a region-dependent analysis is performed by considering four different areas. In addition, a study on the evolution of classification performance as a function of the dimensionality of the principal subspace is carried out using PCA features within an SVM-based classification scheme. Indeed, the experiments performed on a publicly available database prove that PCA dimensionality requirements are region-dependent. Hence, in this work, the optimal configuration is adapted to each region, yielding very good vehicle verification results.
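A minimal sketch of region-dependent PCA+SVM training of the kind described, using scikit-learn; the four areas, their training data, and the per-region dimensionalities are assumed to be supplied by the caller:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_region_classifiers(data_by_region, n_components_by_region):
    """Train one PCA+SVM classifier per image region, each with its own
    number of principal components (region-dependent dimensionality).
    data_by_region maps a region name to (X, y): feature matrix and
    vehicle / non-vehicle labels."""
    models = {}
    for region, (X, y) in data_by_region.items():
        pipe = make_pipeline(
            PCA(n_components=n_components_by_region[region]),
            SVC(kernel="rbf"))
        models[region] = pipe.fit(X, y)
    return models

# At verification time, a candidate is routed to the classifier of the
# region it appears in, so pose information is implicitly exploited.
```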