961 results for CAMERA TRAPS
Abstract:
Visceral leishmaniasis (VL) is a widespread zoonosis in Brazil and, up to now, there has been no record of the main vector of its agent, Lutzomyia longipalpis, in the Southern Region. Due to the diagnosis of VL in a dog in October 2008 in the city of São Borja, in the southernmost Brazilian state of Rio Grande do Sul, a collection of phlebotomines was undertaken to detect the presence of the vector Lu. longipalpis. The captures were carried out with CDC light traps on three consecutive nights in 2008. A total of 39 specimens of Lu. longipalpis were captured, thereby increasing the knowledge of the geographical distribution of this important vector.
Abstract:
Living in close association with a vertebrate host and feeding on its blood requires different types of adaptations, including behavioural adjustments. Triatomines exhibit particular traits associated with the exploitation of their habitat and food sources, and these traits have been the subject of intense analysis. Many aspects of triatomine behaviour have been relatively well characterised, and some attempts to exploit these behaviours have been undertaken. Baited traps based on host-associated cues, artificial refuges and light traps are some of the tools used. Here we discuss how our knowledge of the biology of Chagas disease vectors may help us sample and detect these insects and even increase the efficiency of control measures.
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed, based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. This article surveys several methods for mobile robot egomotion estimation, covering more than 0.5 million samples of synthetic data. Results from real data are also given.
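The discrete epipolar constraint mentioned above relates corresponding image points in two views through the essential matrix E = [t]×R built from the camera rotation R and translation t. A minimal numeric sketch of the constraint (all motion and point values are illustrative, not taken from the surveyed methods):

```python
import numpy as np

def skew(t):
    # Skew-symmetric matrix [t]_x such that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical camera motion: rotation about the optical axis plus translation
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.1, 0.0])

E = skew(t) @ R  # essential matrix of the discrete motion

# A 3D point observed before and after the motion (normalized coordinates)
X1 = np.array([1.0, 2.0, 5.0])
x1 = X1 / X1[2]                 # projection in the first view
X2 = R @ X1 + t
x2 = X2 / X2[2]                 # projection in the second view

residual = x2 @ E @ x1          # discrete epipolar constraint, ideally zero
```

For exact correspondences the residual vanishes; egomotion estimation inverts this relation, recovering E (and hence R, t up to scale) from many noisy point matches.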
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute, map-based localization; landmark detection and tracking; and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is very high, allowing them to be used as feedback measures for a velocity-based low-level controller. The paper details the localization algorithm, showing some graphical results and the accuracy of the system.
Abstract:
This paper presents an automatic vision-based system for UUV station keeping. The vehicle is equipped with a down-looking camera, which provides images of the sea floor. The station-keeping system is based on a feature-based motion detection algorithm, which exploits standard correlation and explicit textural analysis to solve the correspondence problem. A visual map of the area surveyed by the vehicle is constructed to increase the flexibility of the system, allowing the vehicle to position itself when it has lost the reference image. The testing platform is the URIS underwater vehicle. Experimental results demonstrating the behavior of the system in a real environment are presented.
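Standard correlation for the correspondence problem can be illustrated with a brute-force, zero-mean normalized cross-correlation (NCC) search for a template patch inside a frame; this is a generic sketch, not the paper's actual algorithm:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def match_patch(template, image):
    """Exhaustive search for the template's best NCC position in image."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(template, image[y:y + th, x:x + tw])
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# Synthetic frame; the template is cut directly from it, so the true
# position (row 12, column 17) should be recovered with NCC close to 1
rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[12:20, 17:25]
pos, score = match_patch(template, frame)
```

In practice the search is restricted to a small window around the predicted position, and texture analysis (as in the paper) rejects patches too uniform to correlate reliably.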
Abstract:
When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed over the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The estimated motion computed from the encoders of the robot is refined by detecting, to subpixel accuracy, the black dots of the image sequence and computing the 2D projective transform which relates two consecutive images. The pattern is then substituted by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
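The 2D projective transform (homography) relating two consecutive images can be estimated from four or more dot correspondences with the direct linear transform (DLT). The sketch below generates synthetic correspondences from a known ground-truth homography and recovers it; the values are purely illustrative:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 projective transform H with dst ~ H @ src."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right-singular vector
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

# Four dot centres in image k and their positions in image k+1,
# generated here from a hypothetical ground-truth transform
H_true = np.array([[1.0, 0.02, 5.0],
                   [-0.01, 1.0, -3.0],
                   [1e-4, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], float)
src_h = np.hstack([src, np.ones((4, 1))])
dst_h = (H_true @ src_h.T).T
dst = dst_h[:, :2] / dst_h[:, 2:3]

H = estimate_homography(src, dst)
```

With exact correspondences four points determine H uniquely up to scale; with subpixel dot detections, as in the paper's setup, many dots are used and the same SVD yields the least-squares solution.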
Abstract:
This paper presents an approach to improve the reliability of the correspondence points relating two consecutive images of a sequence. The images are especially difficult to handle, since they were acquired by a camera looking at the sea floor while carried by an underwater robot. Underwater images are usually difficult to process due to light absorption, changing image radiance and a lack of well-defined features. A new approach based on gray-level region matching and selective texture analysis significantly improves matching reliability.
Abstract:
Canine American visceral leishmaniasis and American cutaneous leishmaniasis (ACL) cases have been recorded in Espírito Santo do Pinhal. The aim of this study was to gather knowledge of the sand fly community and its population ecology within the municipality. Captures were made weekly over a period of 15 months in the urban, periurban and rural areas of the municipality, using automatic light traps. A total of 5,562 sand flies were collected, comprising 17 species. The most abundant species were Nyssomyia whitmani and Pintomyia pessoai in the rural area, Lutzomyia longipalpis and Ny. whitmani in the periurban area and Lu. longipalpis in the urban area. The highest species richness and greatest species diversity index were found in the rural area. The similarity index showed that the urban and periurban areas were most alike. Lu. longipalpis was found in great numbers during both dry and humid periods. The presence of dogs infected with Leishmania infantum chagasi in the urban area indicates a high risk for the establishment of the disease in the region. The high abundance of Ny. whitmani and Pi. pessoai in the rural and periurban areas indicates the possibility of new cases of ACL occurring in and spreading to the periurban area of Espírito Santo do Pinhal.
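A species diversity index of the kind compared across areas here is commonly the Shannon index, H' = −Σ pᵢ ln pᵢ, where pᵢ is the proportion of individuals belonging to species i. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over species counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical sand fly counts per species at two sites: a richer, more even
# community (rural-like) vs. one dominated by a single species (urban-like)
rural_like = [1200, 900, 300, 150, 80, 40, 20, 10]
urban_like = [2500, 60, 20]

H_rural = shannon_diversity(rural_like)
H_urban = shannon_diversity(urban_like)
```

The index grows with both the number of species and the evenness of their abundances, which is why a rural community with many moderately abundant species scores higher than an urban one dominated by Lu. longipalpis.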
Abstract:
The breeding sites of Culicoides pachymerus are described for the first time in western Boyacá Province, Colombia, where this species is a public health problem. In addition to being a nuisance because of its enormous density and high biting rates, C. pachymerus causes dermatological problems in the human population. Analysis of microhabitats by the sugar flotation technique and the use of emergence traps allowed us to recover 155 larvae of Culicoides spp and 65 adults of C. pachymerus from peridomiciliary muddy substrates formed by springs of water and constant rainwater accumulation. These important findings could aid in the design of integrated control measures against this pest.
Abstract:
This paper presents a complete solution for creating accurate 3D textured models from monocular video sequences. The methods are developed within the framework of sequential structure from motion, where a 3D model of the environment is maintained and updated as new visual information becomes available. The camera position is recovered by directly associating the 3D scene model with local image observations. Compared to standard structure-from-motion techniques, this approach decreases error accumulation while increasing robustness to scene occlusions and feature association failures. The obtained 3D information is used to generate high-quality, composite visual maps of the scene (mosaics). The visual maps are used to create texture-mapped, realistic views of the scene.
Abstract:
Omnidirectional cameras offer a much wider field of view than perspective cameras and alleviate problems due to occlusions. However, both types of camera suffer from a lack of depth perception. A practical method for obtaining depth in computer vision is to project a known structured light pattern onto the scene, avoiding the problems and costs involved in stereo vision. This paper focuses on the idea of combining omnidirectional vision and structured light with the aim of providing 3D information about the scene. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. How this sensor can be used in robot navigation applications is also discussed.
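Structured light recovers depth by intersecting each back-projected camera viewing ray with the calibrated plane (or sheet) of projected light. A minimal plane–ray triangulation sketch; the plane parameters and pixel ray below are hypothetical calibration values, not from the proposed sensor:

```python
import numpy as np

# Laser plane in camera coordinates, written as n . X = d
# (illustrative calibration: plane tilted 0.3 rad about the x-axis)
n = np.array([0.0, np.sin(0.3), np.cos(0.3)])   # unit plane normal
d = 0.8                                          # plane offset in metres

def triangulate(pixel_ray):
    """Intersect the back-projected viewing ray with the laser plane.

    pixel_ray is the ray direction through a detected laser pixel,
    expressed in camera coordinates; the camera centre is the origin.
    """
    ray = pixel_ray / np.linalg.norm(pixel_ray)
    s = d / (n @ ray)           # ray parameter at the plane intersection
    return s * ray              # 3D point in camera coordinates

# Depth of the scene point seen at one (hypothetical) laser pixel
X = triangulate(np.array([0.1, -0.2, 1.0]))
```

The same geometry applies to the omnidirectional case, except that the back-projection from pixel to ray goes through the catadioptric mirror model rather than a simple pinhole.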
Abstract:
We present a computer vision system that combines omnidirectional vision with structured light, with the aim of obtaining depth information over a 360-degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
Abstract:
Catadioptric sensors are combinations of mirrors and lenses designed to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about its nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of existing omnidirectional sensors and can eventually lead to reduced production costs.
Abstract:
Path planning and control strategies applied to autonomous mobile robots should fulfil safety rules as well as achieve final goals. Trajectory planning applications should be fast and flexible to allow real-time implementation as well as interaction with the environment. The methodology presented uses on-robot information as the meaningful data needed to plan passage through narrow spaces, using a corridor based on attraction potential fields that draws the mobile robot toward the final desired configuration. It employs local, dense occupancy-grid perception to avoid collisions. The key goals of this research are computational simplicity and the possibility of integrating this method with other methods reported by the research community. Another important aspect of this work consists in testing the proposed method on a mobile robot with a perception system composed of a monocular camera and odometers placed on the two wheels of the differential-drive motion system. Hence, visual data are used as a local horizon of perception in which collision-free trajectories are computed, satisfying final-goal approach and safety criteria.
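An attraction potential field of the kind described can be sketched as gradient descent on an attractive goal term plus a repulsive term computed from occupied cells of an occupancy grid. All gains, radii and the toy grid below are illustrative, not the paper's parameters:

```python
import numpy as np

def potential_step(pos, goal, grid, k_att=1.0, k_rep=2.0, radius=3.0):
    """One fixed-length descent step on an attractive + repulsive potential.

    grid is a 2D occupancy grid (1 = occupied); repulsion is summed over
    occupied cells within the influence radius of the robot position.
    """
    force = k_att * (goal - pos)                   # attraction toward the goal
    ys, xs = np.nonzero(grid)                      # occupied cells (row, col)
    for oy, ox in zip(ys, xs):
        diff = pos - np.array([ox, oy], float)     # (x, y) vector from obstacle
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < radius:
            force += k_rep * diff / dist**3        # push away, decays with dist
    step = force / max(np.linalg.norm(force), 1e-9)
    return pos + 0.5 * step                        # fixed 0.5-cell step length

# Toy scenario: one occupied cell near (not on) the straight-line path
grid = np.zeros((20, 20))
grid[8, 10] = 1                                    # obstacle at x=10, y=8
pos = np.array([2.0, 10.0])
goal = np.array([18.0, 10.0])
for _ in range(60):
    pos = potential_step(pos, goal, grid)
```

This illustrates the attraction/repulsion mechanism only; potential fields can get trapped in local minima, which is one motivation for the corridor-based formulation and for combining the method with other planners, as the paper notes.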
Abstract:
Positioning a robot with respect to objects using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features that can be extracted from different points of view. Visual servoing is therefore object-dependent, as it relies on the object's appearance. Consequently, the positioning task cannot be performed in the presence of non-textured objects, or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to this limitation, which is inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem: a coded light pattern is projected, providing robust visual features independently of the object's appearance.
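Coded structured light solves the correspondence problem by giving every projector column (or stripe) a unique code. One classic scheme, shown here as a generic sketch rather than the paper's specific pattern, is a temporal Gray-code sequence: decoding the on/off bits observed at a camera pixel over the pattern sequence identifies the projector column that illuminated it.

```python
def gray_encode(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Inverse of gray_encode: recover the integer from its Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Hypothetical observation: across a sequence of four projected patterns,
# one camera pixel reads lit/dark bits [1, 0, 1, 1] (MSB first).
bits = [1, 0, 1, 1]
code = 0
for b in bits:
    code = (code << 1) | b       # pack the observed bits into an integer

column = gray_decode(code)       # projector column that lit this pixel
```

Gray codes are preferred over plain binary here because adjacent columns differ in exactly one bit, so a decoding error at a stripe boundary displaces the correspondence by at most one column.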