955 results for movie camera
Abstract:
Validating modern oceanographic theories using models produced through stereo computer vision principles has recently emerged as a research direction. Space-time (4-D) models of the ocean surface may be generated by stacking a series of 3-D reconstructions independently generated for each time instant or, more robustly, by simultaneously processing several snapshots coherently in a true "4-D reconstruction." However, the accuracy of these computer-vision-generated models depends on the estimates of the camera parameters, which may be corrupted under the influence of natural factors such as wind and vibrations. Therefore, removing the unpredictable errors in the camera parameters is necessary for an accurate reconstruction. In this paper, we propose a novel algorithm that can jointly perform a 4-D reconstruction and correct the camera parameter errors introduced by external factors. The technique is founded upon variational optimization methods to benefit from their numerous advantages: continuity of the estimated surface in space and time, robustness, and accuracy. The performance of the proposed algorithm is tested using synthetic data produced through computer graphics techniques, with which the camera parameter errors arising from natural factors can be simulated.
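As a rough illustration of the kind of joint objective such a variational approach implies (not the authors' actual functional; all symbols and weights below are assumptions), the space-time surface and the camera-parameter corrections can be estimated together by minimising an energy of the form

$$
E\big(Z,\{\delta\theta_c\}\big) \;=\; \sum_t D_{\text{photo}}\big(Z(\cdot,t);\,\{\theta_c+\delta\theta_c\}\big)
\;+\; \lambda_s \iint \lVert \nabla_{\mathbf{x}} Z \rVert^2 \, d\mathbf{x}\,dt
\;+\; \lambda_t \iint \lvert \partial_t Z \rvert^2 \, d\mathbf{x}\,dt
\;+\; \mu \sum_c \lVert \delta\theta_c \rVert^2 ,
$$

where Z is the space-time ocean surface, θ_c the nominal parameters of camera c, δθ_c their corrections, D_photo a multi-view photo-consistency term, and the remaining terms enforce spatial smoothness, temporal smoothness, and small parameter corrections.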
Abstract:
A novel, high-quality system for moving object detection in sequences recorded with moving cameras is proposed. The system is based on the collaboration between an automatic homography estimation module for image alignment and a robust moving object detection module that uses efficient spatiotemporal nonparametric background modeling.
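A minimal sketch of this kind of pipeline (not the authors' implementation): each frame is aligned to the previous one with a RANSAC homography, and a background subtractor is then run on the stabilised frames. OpenCV's KNN subtractor stands in for the paper's spatiotemporal nonparametric background model, and the input file name is hypothetical.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
bg_model = cv2.createBackgroundSubtractorKNN(detectShadows=False)

def align(prev_gray, gray):
    """Estimate a homography mapping the current frame onto the previous one."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

cap = cv2.VideoCapture("moving_camera.mp4")   # hypothetical input sequence
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    H = align(prev_gray, gray)
    stabilized = cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))
    mask = bg_model.apply(stabilized)          # foreground mask = moving objects
    prev_gray = gray
```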
Abstract:
The Optical, Spectroscopic, and Infrared Remote Imaging System OSIRIS is the scientific camera system onboard the Rosetta spacecraft (Figure 1). The advanced high-performance imaging system will be pivotal for the success of the Rosetta mission. OSIRIS will detect 67P/Churyumov-Gerasimenko from a distance of more than 10⁶ km, characterise the comet's shape, volume, and rotational state, and find a suitable landing spot for Philae, the Rosetta lander. OSIRIS will observe the nucleus, its activity and surroundings down to a scale of ~2 cm px⁻¹. The observations will begin well before the onset of cometary activity and will extend over months until the comet reaches perihelion. During the rendezvous episode of the Rosetta mission, OSIRIS will provide key information about the nature of cometary nuclei and reveal the physics of cometary activity that leads to the gas and dust coma. OSIRIS comprises a high-resolution Narrow Angle Camera (NAC) unit and a Wide Angle Camera (WAC) unit accompanied by three electronics boxes. The NAC is designed to obtain high-resolution images of the surface of comet 67P/Churyumov-Gerasimenko through 12 discrete filters over the wavelength range 250–1000 nm at an angular resolution of 18.6 μrad px⁻¹. The WAC is optimised to provide images of the near-nucleus environment in 14 discrete filters at an angular resolution of 101 μrad px⁻¹. The two units use identical shutter, filter wheel, front door, and detector systems. They are operated by a common Data Processing Unit. The OSIRIS instrument has a total mass of 35 kg and is provided by institutes from six European countries.
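A quick back-of-the-envelope check (not part of the instrument paper): the surface pixel scale is simply the angular resolution multiplied by the distance to the nucleus. At an assumed 1 km stand-off distance the NAC scale matches the ~2 cm px⁻¹ quoted above.

```python
NAC_IFOV = 18.6e-6   # rad per pixel (Narrow Angle Camera)
WAC_IFOV = 101e-6    # rad per pixel (Wide Angle Camera)

def pixel_scale_cm(ifov_rad, distance_m):
    """Surface scale in cm per pixel at a given stand-off distance."""
    return ifov_rad * distance_m * 100.0

print(pixel_scale_cm(NAC_IFOV, 1_000))   # ≈ 1.86 cm/px at an assumed 1 km distance
print(pixel_scale_cm(WAC_IFOV, 1_000))   # ≈ 10.1 cm/px
```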
Abstract:
Lisa Bell is the founder and CEO of Inspired Life Media Group, a Los Angeles-based international content development and production company that works with a diverse cross-section of venture-backed startups, high-profile individuals, and Fortune 500 brands. Lisa's past content and business ventures are critically acclaimed, diverse, and plentiful. She created, directed, and produced The American Dream Revised, a digital docuseries that follows a diverse group of young entrepreneurs. Lisa's past business ventures include startups in technology, personal development, and original content. After launching her first company at 19 years old, she later started a for-profit social enterprise that reached more than 400,000 girls around the world, with active programs in Liberia, England, Brazil, and the US.
Abstract:
Kinetic anomalies in protein folding can result from changes of the kinetic ground states (D, I, and N), changes of the protein folding transition state, or both. The 102-residue protein U1A has a symmetrically curved chevron plot which seems to result mainly from changes of the transition state. At low concentrations of denaturant the transition state occurs early in the folding reaction, whereas at high denaturant concentration it moves close to the native structure. In this study we use this movement to follow continuously the formation and growth of U1A's folding nucleus by φ analysis. Although U1A's transition state structure is generally delocalized and displays a typical nucleation–condensation pattern, we can still resolve a sequence of folding events. However, these events are sufficiently coupled to start almost simultaneously throughout the transition state structure.
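For reference, the φ values used in such an analysis follow the standard (Fersht) definition, comparing a mutation's effect on the folding barrier with its effect on overall stability; this is the textbook form, not a result specific to this study:

$$
\phi_F \;=\; \frac{\Delta\Delta G_{\ddagger\text{-}\mathrm{D}}}{\Delta\Delta G_{\mathrm{N}\text{-}\mathrm{D}}},
$$

where φ_F ≈ 1 indicates a native-like environment at the mutated site in the transition state and φ_F ≈ 0 indicates an unfolded-like environment.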
Abstract:
The film and television industry is integral to the economy and culture of the Southern California region. It is also a major contributor to the environmental problems in the region. Currently the Motion Picture, Television, and Commercial Industries Act of 1984 is the only regulation written specifically for the entertainment industry. This regulation was created to streamline the film permitting process, prevent runaway production (production leaving the state), and encourage growth. A change to this regulation is needed, since studios routinely fail to meet environmental standards or to work towards improvement during on-location filming. Amendments to this regulation requiring permits to contain environmental conditions would improve environmental outcomes and stay true to the original purpose of the act.
Abstract:
Analysis of vibrations and displacements is a hot topic in structural engineering. Although there is a wide variety of methods for vibration analysis, direct measurement of displacements in the mid and high frequency range is not well solved, and accurate devices tend to be very expensive. Low-cost systems can be achieved by applying adequate image processing algorithms. In this paper, we propose the use of a commercial pocket digital camera, which is able to register more than 420 frames per second (fps) at low resolution, for accurately measuring small vibrations and displacements. The method is based on tracking elliptical targets with sub-pixel accuracy. Our proposal is demonstrated at a distance of 10 m with a spatial resolution of 0.15 mm. A practical application on a simple structure is given, and the main parameters of the attenuated movement of a steel column after an impulsive impact are determined with a spatial accuracy of 4 µm.
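An illustrative sketch of the measurement idea only (the paper's own tracking algorithm is not detailed in the abstract): locate the elliptical target in each frame, take the sub-pixel centre returned by an ellipse fit, and convert the pixel displacement to millimetres with a known scale factor. File names, thresholds, and the scale factor are assumptions.

```python
import cv2
import numpy as np

MM_PER_PX = 0.15      # spatial resolution quoted for the 10 m test
FPS = 420.0           # camera frame rate at low resolution

def target_centre(gray):
    """Sub-pixel centre of the largest (assumed elliptical) bright blob."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    biggest = max(contours, key=cv2.contourArea)
    (cx, cy), _, _ = cv2.fitEllipse(biggest)   # centre with sub-pixel precision
    return np.array([cx, cy])

cap = cv2.VideoCapture("column_impact.avi")    # hypothetical recording
centres = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    centres.append(target_centre(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))

disp_mm = (np.array(centres) - centres[0]) * MM_PER_PX   # displacement signal
time_s = np.arange(len(centres)) / FPS                    # matching time axis
```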
Abstract:
Image Based Visual Servoing (IBVS) is a robotic control scheme based on vision. This scheme uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of different parameters that cannot be obtained directly from the image. These parameters range from the intrinsic camera parameters (which can be obtained from a previous camera calibration) to the distance along the optical axis between the camera and the visual features, that is, the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as Kinect. The visual servoing system has been developed over ROS (Robot Operating System), which is a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system performance.
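A minimal IBVS sketch using the standard point-feature formulation (not the paper's ROS code): the interaction matrix of each feature is built from its normalized image coordinates and an estimated depth Z (in the paper, Z comes from the Kinect), and the camera velocity follows the classical law v = -λ L⁺ e. The gain and variable names are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized point feature."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """features, desired: (N, 2) normalized coordinates; depths: (N,) estimates."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (features - desired).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error   # 6-DOF camera twist [v, w]
```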
Abstract:
Nowadays, RGB-D sensors have attracted a lot of research in computer vision and robotics. These kinds of sensors, such as Kinect, make it possible to obtain 3D data together with color information. However, their working range is limited to less than 10 meters, making them useless in some robotics applications, such as outdoor mapping. In these environments, 3D lasers, working at ranges of 20-80 meters, are better suited. However, 3D lasers do not usually provide color information. A simple 2D camera can be used to provide color information for the point cloud, but a calibration process between camera and laser must be carried out. In this paper we present a portable calibration system to calibrate any traditional camera with a 3D laser in order to assign color information to the 3D points obtained. Thus, we can exploit the laser's precision and simultaneously make use of color information. Unlike other techniques that make use of a three-dimensional body of known dimensions in the calibration process, this system is highly portable because it uses small catadioptrics that can be placed in a simple manner in the environment. We use our calibration system in a 3D mapping system, including Simultaneous Localization and Mapping (SLAM), in order to obtain a 3D colored map which can be used in different tasks. We show that an additional problem arises: the 2D camera information differs when lighting conditions change, so when we merge 3D point clouds from two different views, several points in a given neighborhood could have different color information. A new method for color fusion is presented, obtaining correctly colored maps. The system is tested by applying it to 3D reconstruction.
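A sketch of the colouring step only (the catadioptric-based calibration itself is not reproduced here): once the extrinsics R, t and intrinsics K are known, each laser point is projected into the image and assigned the colour of the pixel it falls on. All variable names are illustrative.

```python
import numpy as np

def colour_point_cloud(points, image, K, R, t):
    """points: (N, 3) laser points; image: HxWx3 BGR/RGB; returns (M, 6) xyz+rgb."""
    cam = R @ points.T + t.reshape(3, 1)            # laser frame -> camera frame
    valid = cam[2] > 0                              # keep points in front of the camera
    pix = K @ cam[:, valid]
    u = np.round(pix[0] / pix[2]).astype(int)
    v = np.round(pix[1] / pix[2]).astype(int)
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    xyz = points[valid][inside]
    rgb = image[v[inside], u[inside]]               # sample colour at the projection
    return np.hstack([xyz, rgb])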
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.
Abstract:
In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video rate. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
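As a toy illustration of the kind of self-organizing update that could underlie such a scene representation (the paper's actual GPU network is not specified in the abstract), each input sample pulls its best-matching unit and that unit's map neighbours toward it. All names and hyperparameters are assumptions.

```python
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One online SOM update. weights: (M, D) prototypes; grid: (M, 2) map positions."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    d = np.linalg.norm(grid - grid[bmu], axis=1)           # distances on the map lattice
    h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))             # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)             # pull prototypes toward x
    return weights
```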
Abstract:
In this study, a digital CMOS camera was calibrated for use as a non-contact colorimeter for measuring the color of granite artworks. The low chroma values of the granite, which yield similar stimulation of the three color channels of the camera, proved to be the most challenging aspect of the task. The appropriate parameters for converting the device-dependent RGB color space into a device-independent color space were established. For this purpose, the color of a large number of Munsell samples (corresponding to the previously defined color gamut of granite) was measured with the digital camera and with a spectrophotometer (reference instrument). The color data were then compared using the CIELAB color formulae. The best correlations between measurements were obtained when the camera worked at 10 bits and the spectrophotometer measured in SCI mode. Finally, the calibrated instrument was used successfully to measure the color of six commercial varieties of Spanish granite.
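A hedged sketch of a generic device characterization of this kind (the paper's exact conversion model is not given in the abstract): fit an affine RGB-to-XYZ transform on the Munsell training patches by least squares, convert to CIELAB, and score the fit with the colour difference against the spectrophotometer values. The D65 white point and CIE76 difference are assumptions.

```python
import numpy as np

WHITE = np.array([95.047, 100.0, 108.883])     # assumed D65 reference white

def xyz_to_lab(xyz, white=WHITE):
    """Convert (N, 3) XYZ values to CIELAB."""
    r = xyz / white
    f = np.where(r > (6 / 29) ** 3, np.cbrt(r), r / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[:, 1] - 16
    a = 500 * (f[:, 0] - f[:, 1])
    b = 200 * (f[:, 1] - f[:, 2])
    return np.stack([L, a, b], axis=1)

def fit_rgb_to_xyz(rgb, xyz_ref):
    """Least-squares affine transform from camera RGB to reference XYZ."""
    A = np.hstack([rgb, np.ones((len(rgb), 1))])   # affine term
    M, *_ = np.linalg.lstsq(A, xyz_ref, rcond=None)
    return M                                        # (4, 3) transform

def delta_e(lab1, lab2):
    """CIE76 colour difference between matched CIELAB measurements."""
    return np.linalg.norm(lab1 - lab2, axis=1)

# Usage sketch:
#   M = fit_rgb_to_xyz(rgb_train, xyz_train)
#   lab_cam = xyz_to_lab(np.hstack([rgb_test, np.ones((len(rgb_test), 1))]) @ M)
#   errors = delta_e(lab_cam, lab_spectrophotometer)
```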
Abstract:
We present a disposable optical sensor for ascorbic acid (AA). It uses a polyaniline-based electrochromic sensing film that undergoes a color change when exposed to solutions of ascorbic acid at pH 3.0. The color is monitored by a conventional digital camera working with the hue (H) color coordinate. The electrochromic film was deposited on an Indium Tin Oxide (ITO) electrode by cyclic voltammetry and then characterized by atomic force microscopy and by electrochemical and spectroscopic techniques. An estimate of the initial rate of change of H, expressed as ΔH/Δt, is used as the analytical parameter and yielded the following logarithmic relationship: ΔH/Δt = 0.029 log[AA] + 0.14, with a limit of detection of 17 μM. The relative standard deviation when using the same membrane 5 times was 7.4% for the blank and 2.6% (n = 3) on exposure to ascorbic acid at a concentration of 160 μM. The sensor is disposable, and its applicability to pharmaceutical analysis was demonstrated. The approach can be extended to future handheld configurations.
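A worked inversion of the calibration quoted above, ΔH/Δt = 0.029 log[AA] + 0.14, recovers the concentration from a measured initial rate. The concentration units of [AA] in the calibration are not stated in the abstract, so the result is expressed in whatever units the calibration used.

```python
def ascorbic_acid_conc(dH_dt, slope=0.029, intercept=0.14):
    """Invert the hue-rate calibration: [AA] = 10 ** ((dH/dt - intercept) / slope)."""
    return 10 ** ((dH_dt - intercept) / slope)
```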