945 results for Camera Obscura
Abstract:
A novel, high-quality system for moving object detection in sequences recorded with moving cameras is proposed. The system is based on the collaboration between an automatic homography estimation module for image alignment and a robust moving object detection module that uses an efficient spatiotemporal nonparametric background model.
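The alignment step above relies on estimating a homography between frames. A minimal NumPy-only sketch of the direct linear transform (DLT), assuming point correspondences are already available (the paper's automatic estimation module and background model are not reproduced here):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate the 3x3 homography H mapping src points to dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Apply H to an (N, 2) array of points, with perspective division."""
    ph = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]

# Demo: recover a known homography from four exact correspondences
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 0.9, -3.0],
                   [0.0, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
H_est = estimate_homography(src, apply_homography(H_true, src))
```

In practice the correspondences come from feature matching and are filtered with RANSAC; four non-collinear exact matches are the minimum needed.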
Abstract:
The Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) is the scientific camera system onboard the Rosetta spacecraft (Figure 1). This advanced high-performance imaging system will be pivotal for the success of the Rosetta mission. OSIRIS will detect 67P/Churyumov-Gerasimenko from a distance of more than 10⁶ km, characterise the comet's shape, volume and rotational state, and find a suitable landing spot for Philae, the Rosetta lander. OSIRIS will observe the nucleus, its activity and its surroundings down to a scale of ~2 cm px⁻¹. The observations will begin well before the onset of cometary activity and will extend over months until the comet reaches perihelion. During the rendezvous phase of the Rosetta mission, OSIRIS will provide key information about the nature of cometary nuclei and reveal the physics of cometary activity that leads to the gas and dust coma. OSIRIS comprises a high-resolution Narrow Angle Camera (NAC) unit and a Wide Angle Camera (WAC) unit accompanied by three electronics boxes. The NAC is designed to obtain high-resolution images of the surface of comet 67P/Churyumov-Gerasimenko through 12 discrete filters over the wavelength range 250–1000 nm at an angular resolution of 18.6 μrad px⁻¹. The WAC is optimised to provide images of the near-nucleus environment in 14 discrete filters at an angular resolution of 101 μrad px⁻¹. The two units use identical shutter, filter wheel, front door, and detector systems, and are operated by a common Data Processing Unit. The OSIRIS instrument has a total mass of 35 kg and is provided by institutes from six European countries.
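The quoted angular resolutions translate into a spatial scale that grows linearly with range, which makes the ~2 cm px⁻¹ figure easy to check (the helper name below is illustrative):

```python
def ground_sampling_distance(angular_res_rad_per_px, range_m):
    """Spatial scale on the target: angle subtended per pixel times range."""
    return angular_res_rad_per_px * range_m

NAC_RES = 18.6e-6  # rad per pixel, from the instrument description
# At a range of 1 km the NAC resolves about 1.86 cm per pixel,
# consistent with the quoted ~2 cm per pixel near-nucleus scale.
scale_cm_per_px = ground_sampling_distance(NAC_RES, 1000.0) * 100
```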
Abstract:
Lisa Bell is the founder and CEO of Inspired Life Media Group, a Los Angeles-based international content development and production company that works with a diverse cross-section of venture-backed startups, high-profile individuals, and Fortune 500 brands. Lisa's past content and business ventures are critically acclaimed, diverse, and plentiful. She created, directed, and produced The American Dream Revised, a digital docuseries that follows a diverse group of young entrepreneurs. Her past business ventures include startups in technology, personal development, and original content. After launching her first company at 19 years old, she later started a for-profit social enterprise that reached more than 400,000 girls around the world, with active programs in Liberia, England, Brazil, and the US.
Abstract:
This study examined the relationship between land-use practices near tributary rivers in South Lake Maracaibo and the appearance of duckweed (Lemna obscura) in the lake. Four rivers were studied: the Mucujepe, Capaz, Guamo, and Frio. Eight factors were assessed: rivers, sediments, erosion, soils, fertilizers, water quality, land-use activities, and vegetation corridors. Satellite images, official cartography, field visits and observations, water samples, and personal communication with the organizations involved were used to obtain an accurate and current assessment of conditions. The study revealed that the land-use practices surrounding the Pan-American Zone rivers contribute to duckweed blooming in Lake Maracaibo.
Abstract:
Analysis of vibrations and displacements is a hot topic in structural engineering. Although there is a wide variety of methods for vibration analysis, direct measurement of displacements in the mid and high frequency range is not well solved, and accurate devices tend to be very expensive. Low-cost systems can be achieved by applying adequate image processing algorithms. In this paper, we propose the use of a commercial pocket digital camera, which can record more than 420 frames per second (fps) at low resolution, for accurately measuring small vibrations and displacements. The method is based on tracking elliptical targets with sub-pixel accuracy. Our proposal is demonstrated at a distance of 10 m with a spatial resolution of 0.15 mm. A practical application on a simple structure is given, and the main parameters of the attenuated movement of a steel column after an impulsive impact are determined with a spatial accuracy of 4 µm.
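The paper's ellipse tracker is not spelled out in the abstract; an intensity-weighted centroid, which already localizes a bright target to a small fraction of a pixel, gives the flavor of sub-pixel tracking:

```python
import numpy as np

def subpixel_centroid(img):
    """Intensity-weighted centroid of an image patch, in (x, y) order."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian target centered between pixel positions
ys, xs = np.mgrid[0:20, 0:30]
target = np.exp(-((xs - 12.3) ** 2 + (ys - 7.7) ** 2) / (2 * 2.0 ** 2))
cx, cy = subpixel_centroid(target)  # recovers roughly (12.3, 7.7)
```

Fitting an ellipse to the target contour, as the paper does, is more robust to perspective distortion than a plain centroid, but both localize well below one pixel.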
Abstract:
Image-Based Visual Servoing (IBVS) is a vision-based robotic control scheme. It uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These range from the intrinsic camera parameters (which can be obtained from a prior camera calibration) to the distance along the optical axis between the camera and the visual features, i.e., the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves system performance.
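The role of depth in IBVS can be seen in the classical point-feature control law v = -λ L⁺ (s − s*), where every row of the interaction matrix L depends on the feature depth Z. A generic NumPy sketch (not the paper's ROS implementation):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of a normalized point feature at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classical IBVS law: v = -lambda * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

# At the goal (s == s*) the commanded 6-DoF camera velocity is zero
pts = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
v = ibvs_velocity(pts, pts, depths=[1.0] * 4)
```

Because 1/Z scales the translational columns of L, a wrong depth estimate directly scales the commanded velocity, which is why the source of the depth value matters.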
Abstract:
Nowadays, RGB-D sensors have attracted a great deal of research in computer vision and robotics. Sensors of this kind, such as the Kinect, provide 3D data together with color information. However, their working range is limited to less than 10 meters, making them unsuitable for some robotics applications, such as outdoor mapping. In these environments, 3D lasers, with working ranges of 20-80 meters, are better suited. But 3D lasers do not usually provide color information. A simple 2D camera can be used to add color information to the point cloud, but a calibration process between camera and laser must be carried out. In this paper we present a portable calibration system to calibrate any conventional camera with a 3D laser in order to assign color information to the 3D points obtained. Thus, we can exploit the laser's precision and simultaneously make use of color information. Unlike other techniques that use a three-dimensional body of known dimensions in the calibration process, this system is highly portable because it uses small catadioptric targets that can be placed easily in the environment. We use our calibration system in a 3D mapping pipeline, including Simultaneous Localization and Mapping (SLAM), in order to obtain a colored 3D map that can be used in different tasks. We show that an additional problem arises: the camera's color measurements vary as lighting conditions change, so when we merge 3D point clouds from two different views, several points in a given neighborhood may carry different color information. A new method for color fusion is presented that produces correctly colored maps. The system is tested by applying it to 3D reconstruction.
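Once camera and laser are calibrated, coloring the cloud reduces to projecting each 3D point through the extrinsics (R, t) and intrinsics K and sampling the image. A sketch of that projection step only (not the paper's calibration procedure or color-fusion method):

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Project 3D laser points into the image and sample their color.

    Points that fall behind the camera or outside the frame get NaN."""
    cam = points @ R.T + t                  # laser frame -> camera frame
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    colors = np.full((len(points), 3), np.nan)
    h, w = image.shape[:2]
    px = np.round(uv).astype(int)
    ok = (cam[:, 2] > 0) & (px[:, 0] >= 0) & (px[:, 0] < w) \
         & (px[:, 1] >= 0) & (px[:, 1] < h)
    colors[ok] = image[px[ok, 1], px[ok, 0]]
    return colors

# Demo: a point on the optical axis lands on the principal point
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
image = np.zeros((480, 640, 3))
image[240, 320] = [0.2, 0.4, 0.6]
colors = colorize_points(np.array([[0.0, 0.0, 2.0]]), image, K,
                         np.eye(3), np.zeros(3))
```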
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.
Abstract:
In this work, we present a multi-camera surveillance system based on self-organizing neural networks that represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and motion analysis and monitoring. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
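The self-organizing representation rests on the classic SOM update rule: find the best-matching unit (BMU) for an input and pull it and its grid neighbors toward that input. A minimal single-step sketch (the paper's GPU pipeline and network topology are not reproduced):

```python
import numpy as np

def som_step(weights, x, lr=0.2, sigma=1.0):
    """One SOM update: move the BMU and its grid neighbors toward input x.

    weights has shape (rows, cols, dim); x has shape (dim,)."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    gy, gx = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    grid_d2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]  # neighborhood kernel
    return weights + lr * h * (x - weights)

# Demo: a 3x3 map of 2-D prototypes takes one step toward an input
weights = np.zeros((3, 3, 2))
updated = som_step(weights, np.array([1.0, 1.0]))
```

Each unit's update is independent given the BMU, which is what makes this rule easy to parallelize across GPU threads.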
Abstract:
In this study, a digital CMOS camera was calibrated for use as a non-contact colorimeter for measuring the color of granite artworks. The low chroma values of the granite, which stimulate the camera's three color channels almost equally, proved to be the most challenging aspect of the task. The appropriate parameters for converting the device-dependent RGB color space into a device-independent color space were established. For this purpose, the color of a large number of Munsell samples (corresponding to the previously defined color gamut of granite) was measured with the digital camera and with a spectrophotometer (the reference instrument). The color data were then compared using the CIELAB color-difference formulae. The best correlations between measurements were obtained when the camera operated at 10 bits and the spectrophotometer measured in SCI mode. Finally, the calibrated instrument was used successfully to measure the color of six commercial varieties of Spanish granite.
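The device-dependent-to-device-independent conversion can be sketched as a least-squares fit over calibration patches, evaluated with the CIE76 color difference. The paper's actual model and parameters are not given in the abstract, so the affine map below is an assumption:

```python
import numpy as np

def fit_rgb_to_lab(rgb, lab):
    """Least-squares affine map from device RGB to reference Lab values."""
    A = np.hstack([rgb, np.ones((len(rgb), 1))])  # affine (offset) term
    M, *_ = np.linalg.lstsq(A, lab, rcond=None)
    return M                                       # shape (4, 3)

def apply_map(M, rgb):
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in Lab space."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# Synthetic check: patches generated by a known affine map are recovered
M_true = np.array([[30.0, 1.0, 0.0],
                   [5.0, 40.0, 2.0],
                   [1.0, 2.0, 50.0],
                   [10.0, -5.0, 3.0]])
rgb = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [0, 0, 1], [1, 1, 0], [0.5, 0.5, 0.5]], dtype=float)
lab = np.hstack([rgb, np.ones((6, 1))]) @ M_true
errors = delta_e76(apply_map(fit_rgb_to_lab(rgb, lab), rgb), lab)
```

For low-chroma targets like granite, higher-order polynomial terms are often added to the regression; the affine form is the simplest instance of the idea.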
Abstract:
We present a disposable optical sensor for ascorbic acid (AA). It uses a polyaniline-based electrochromic sensing film that undergoes a color change when exposed to solutions of ascorbic acid at pH 3.0. The color is monitored by a conventional digital camera working with the hue (H) color coordinate. The electrochromic film was deposited on an Indium Tin Oxide (ITO) electrode by cyclic voltammetry and then characterized by atomic force microscopy and by electrochemical and spectroscopic techniques. An estimate of the initial rate of change of H, ΔH/Δt, is used as the analytical parameter and yielded the following logarithmic relationship: ΔH/Δt = 0.029 log[AA] + 0.14, with a limit of detection of 17 μM. The relative standard deviation when using the same membrane 5 times was 7.4% for the blank and 2.6% (n = 3) on exposure to ascorbic acid at a concentration of 160 μM. The sensor is disposable, and its applicability to pharmaceutical analysis was demonstrated. This configuration can be extended to future handheld devices.
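The reported calibration line can be inverted to turn a measured hue rate into a concentration readout. The abstract does not state the units inside the logarithm; molar concentration is assumed below:

```python
import math

SLOPE, INTERCEPT = 0.029, 0.14  # from the reported calibration line

def rate_from_concentration(aa_molar):
    """Forward calibration: dH/dt = 0.029 * log10([AA]) + 0.14."""
    return SLOPE * math.log10(aa_molar) + INTERCEPT

def concentration_from_rate(dh_dt):
    """Invert the calibration to read [AA] from a measured hue rate."""
    return 10 ** ((dh_dt - INTERCEPT) / SLOPE)

# Round trip at the 160 uM level mentioned in the precision study
readout = concentration_from_rate(rate_from_concentration(160e-6))
```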
Abstract:
This paper presents a method for fast calculation of the egomotion performed by a robot using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). It uses optical flow to determine whether the robot has moved. If so, visual features that do not satisfy several criteria (such as intersection, uniqueness, etc.) are deleted, and the egomotion is then calculated. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. The proposed method provides better efficiency than other current methods.
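The movement test from optical flow can be as simple as thresholding a robust statistic of the flow magnitudes; the threshold below is an illustrative value, not the paper's:

```python
import numpy as np

def has_moved(flow, threshold=0.5):
    """Decide whether the robot moved, from a dense flow field (H, W, 2).

    The median magnitude is robust to a few outlier flow vectors."""
    mag = np.linalg.norm(flow, axis=-1)
    return bool(np.median(mag) > threshold)

still = has_moved(np.zeros((4, 4, 2)))  # no flow -> no movement
moving = has_moved(np.ones((4, 4, 2)))  # |v| = sqrt(2) everywhere
```

Skipping the egomotion computation on static frames is what makes this gate pay off: the expensive feature filtering only runs when the flow says something changed.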
Abstract:
Camera traps have become a widely used technique for conducting biological inventories, generating a large number of database records of great interest. The main aim of this paper is to describe a new free and open-source software (FOSS) application, developed to facilitate the management of camera-trap data originating from a protected Mediterranean area (SE Spain). In the last decade, some other useful alternatives have been proposed, but ours focuses especially on collaborative work and on the importance of the spatial information underpinning common camera-trap studies. This FOSS application, named "Camera Trap Manager" (CTM), has been designed to expedite the processing of pictures on the .NET platform. CTM has a very intuitive user interface, automatic extraction of some image metadata (date, time, moon phase, location, temperature, and atmospheric pressure, among others), analytical capabilities (Geographical Information Systems, statistics, and charts, among others), and reporting capabilities (ESRI Shapefiles, Microsoft Excel spreadsheets, and PDF reports, among others). Using this application, we have achieved very simple management, fast analysis, and a significant reduction of costs. While we were able to classify an average of 55 pictures per hour manually, CTM has made it possible to process over 1,000 photographs per hour, consequently retrieving a greater amount of data.
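CTM's metadata extraction is not published line by line in the abstract; a standard-library sketch of two of the extracted fields, the EXIF timestamp and a moon-phase estimate based on an assumed reference new moon and the mean synodic month:

```python
from datetime import datetime

SYNODIC_MONTH = 29.530588                       # mean lunar cycle, days
NEW_MOON_EPOCH = datetime(2000, 1, 6, 18, 14)   # a reference new moon (UTC)

def parse_exif_datetime(value):
    """Parse the EXIF DateTimeOriginal format, e.g. '2023:06:01 14:30:00'."""
    return datetime.strptime(value, "%Y:%m:%d %H:%M:%S")

def moon_phase_fraction(when):
    """Fraction of the synodic cycle elapsed (0 = new moon, 0.5 = full)."""
    days = (when - NEW_MOON_EPOCH).total_seconds() / 86400.0
    return (days % SYNODIC_MONTH) / SYNODIC_MONTH

when = parse_exif_datetime("2023:06:01 14:30:00")
phase_at_epoch = moon_phase_fraction(NEW_MOON_EPOCH)
```

This mean-cycle approximation can drift several hours from the true phase; it illustrates how a moon-phase field can be derived from a timestamp alone, not how CTM actually computes it.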