Abstract:
In this paper, modernized shipborne procedures are presented to collect and process above-water radiometry for remote sensing applications. A setup of five radiometers and a bidirectional camera system, which provides panoramic sea surface and sky images, is proposed for the collection of high-resolution radiometric quantities. Images from the camera system can be used to determine sky state and potential glint, whitecap, or foam contamination. A peak in the observed remote sensing reflectance (RRS) spectra between 750 and 780 nm was typically found in spectra with relatively high surface reflected glint (SRG), which suggests this waveband could be a useful SRG indicator. Simplified steps for computing uncertainties in SRG-corrected RRS are proposed and discussed. The potential of utilizing "unweighted multimodel averaging," the average of four or more common SRG correction models, is examined to determine the best-approximation RRS. This best-approximation RRS provides an estimate of RRS based on various SRG correction models established using radiative transfer simulations and field investigations. Applying the average RRS provides a measure of the inherent uncertainties or biases that result from a user subjectively choosing any one SRG correction model. Comparisons between inherent and apparent optical property derived observations were used to assess the robustness of the SRG multimodel averaging approach. Correlations among the standard SRG models were computed to determine the degree of association or similarity between the SRG models. Results suggest that the choice of glint model strongly affects derived RRS values and can also influence the blue-to-green band ratios used for modeling biogeochemical parameters such as chlorophyll a. The objective here is to present a uniform and traceable methodology for determining shipborne RRS measurements and their associated errors due to glint correction, and to ensure the direct comparability of these measurements in future investigations. We encourage the ocean color community to publish radiometric field measurements with matching and complete metadata in open access repositories.
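The unweighted multimodel averaging described above reduces, in essence, to taking the mean of several glint-corrected spectra and using their spread as an uncertainty estimate. A minimal sketch of that idea (the model names and input arrays below are hypothetical, not from the paper):

```python
import numpy as np

def multimodel_average_rrs(rrs_by_model):
    """Unweighted multimodel average of glint-corrected RRS spectra.

    rrs_by_model: dict mapping an SRG-correction model name to a 1-D
    array of RRS values (one per wavelength), all on the same
    wavelength grid. Returns the mean spectrum and the per-band
    standard deviation, a rough measure of the bias introduced by
    choosing any single model."""
    stack = np.vstack(list(rrs_by_model.values()))  # (n_models, n_bands)
    return stack.mean(axis=0), stack.std(axis=0, ddof=1)

# Hypothetical example: four glint-corrected spectra on a common grid.
rng = np.random.default_rng(0)
base = np.linspace(0.004, 0.001, 5)
models = {name: base + rng.normal(0, 2e-4, 5)
          for name in ("mobley", "ruddick", "lee", "kutser")}
mean_rrs, sigma_rrs = multimodel_average_rrs(models)
```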
Abstract:
Context. On 12 November 2014, the European mission Rosetta delivered the Philae lander to the nucleus of comet 67P/Churyumov-Gerasimenko (67P). After the first touchdown, the lander bounced three times before finally landing at a site named Abydos. Aims. We provide a morphologically detailed analysis of the Abydos landing site to support Philae's measurements and to give context for the interpretation of the images coming from the Comet Infrared and Visible Analyser (CIVA) camera system onboard the lander. Methods. We used images acquired by the OSIRIS Narrow Angle Camera (NAC) on 6 December 2014 to perform the analysis of the Abydos landing site, which provided the geomorphological map, the gravitational slope map, and the size-frequency distribution of the boulders. We also computed the albedo and spectral reddening maps. Results. The morphological analysis of the region could suggest that Philae is located on a primordial terrain. The Abydos site is surrounded by two layered and fractured outcrops and presents a 0.02 km² talus deposit rich in boulders. The boulder size-frequency distribution gives a cumulative power-law index of 4.0 +0.3/−0.4, which is correlated with gravitational events triggered by sublimation and/or thermal fracturing causing regressive erosion. The average value of the albedo is 5.8% at λ1 = 480.7 nm and 7.4% at λ2 = 649.2 nm, which is similar to the global albedos derived by OSIRIS and CIVA, respectively.
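The cumulative power-law index quoted above corresponds to fitting N(>D) ∝ D^(−α) to the boulder diameters; a minimal sketch of such a fit in log-log space, with synthetic diameters standing in for the actual boulder catalogue:

```python
import numpy as np

def cumulative_power_law_index(diameters):
    """Estimate the cumulative power-law index alpha in N(>D) ~ D^-alpha
    by a least-squares fit in log-log space."""
    d = np.sort(np.asarray(diameters))
    n_cum = np.arange(len(d), 0, -1)        # boulders with diameter >= d[i]
    slope, _ = np.polyfit(np.log(d), np.log(n_cum), 1)
    return -slope

# Synthetic stand-in for a boulder catalogue (metres); not mission data.
rng = np.random.default_rng(1)
diam = rng.pareto(4.0, 500) + 1.0           # Pareto tail with index ~4
print(cumulative_power_law_index(diam))
```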
Abstract:
In this paper, two techniques to control UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first is based on the detection and tracking of planar structures from an onboard camera, while the second is based on the detection and 3D reconstruction of the position of the UAV using an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and results show good behavior of the visual systems (precision in the estimation and frame rate) when estimating the helicopter's position and using the extracted information to control the UAV.
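A common way to realize the first, planar-structure approach is to estimate the homography between a reference view of the plane and the live frame and decompose it into rotation and translation. A minimal OpenCV sketch of that general idea, not the authors' actual pipeline:

```python
import cv2
import numpy as np

def pose_from_plane(ref_gray, frame_gray, K):
    """Estimate camera motion relative to a tracked planar structure.

    Matches ORB features between a reference image of the plane and the
    current frame, estimates the homography, and decomposes it into
    candidate rotations/translations (up to scale)."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts  # candidate poses; disambiguate with visibility checks
```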
Abstract:
Laparoscopic instrument tracking systems are a key element in image-guided interventions, and they require high accuracy to be used in a real surgical scenario. In addition, these systems are a suitable option for the objective assessment of laparoscopic technical skills based on instrument motion analysis. This study presents a new approach that improves the accuracy of a previously presented system, which applies an optical pose tracking system to laparoscopic practice. A design enhancement of the artificial markers placed on the laparoscopic instrument, as well as an improvement of the calibration process, are presented as a means to achieve more accurate results. A technical evaluation has been performed in order to compare the accuracy of the previous design and the new approach. Results show a remarkable improvement in the fluctuation error throughout the measurement platform. Moreover, the accumulated distance error and the inclination error have been improved. The tilt range covered by the system is the same for both approaches, from 90° to 7.5°. The relative position error is better for the new approach, mainly at close distances to the camera system.
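The fluctuation error referred to above is essentially the jitter of a nominally static tracked instrument tip; one plausible way to quantify it (the definition here is an assumption, not taken from the paper):

```python
import numpy as np

def fluctuation_error(positions):
    """RMS deviation of tracked tip positions about their mean while the
    instrument is held static; a simple jitter measure in the tracker's
    units (e.g. mm). positions: array of shape (n_samples, 3)."""
    p = np.asarray(positions)
    return np.sqrt(np.mean(np.sum((p - p.mean(axis=0)) ** 2, axis=1)))
```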
Abstract:
Using a digital photogrammetric camera results in a demonstrable increase in radiometric quality, owing to an improved signal-to-noise ratio and a radiometric resolution of 12 bits per image pixel. At the same time, a significant saving of time and cost is achieved thanks to the elimination of the film developing and scanning stages, as well as to the increase in flying hours per day. On the other hand, the airborne laser system (LIDAR, Light Detection and Ranging) offers high performance and cost-effectiveness for acquiring elevation data to generate a digital terrain model (DTM), as well as of objects on the ground, allowing high accuracy and data density to be achieved. Both the LIDAR and the digital photogrammetric camera system are combined with other well-known techniques: the Global Positioning System (GPS) and inertial measurement unit (IMU) orientation, which make it possible to reduce or eliminate ground control and to perform direct orientation of the sensors using precise satellite ephemeris data. By combining these technologies, a methodology for the automatic generation of orthophotos in South American countries is proposed and implemented. By analyzing the accuracy of these orthophotos against sources of higher accuracy and against the technical specifications of the Spanish National Plan for Aerial Orthophotography (PNOA), the viability of applying this methodology to rural areas is determined.
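Direct sensor orientation as described above boils down to the standard direct-georeferencing relation; a sketch of it follows (the notation is chosen here, not taken from the text):

```latex
% Direct georeferencing: the ground coordinates of an image point are
% the GPS antenna position plus the IMU attitude (with boresight
% correction) applied to the scaled image ray.
\[
\mathbf{X}_{g} \;=\; \mathbf{X}_{\mathrm{GPS}}
  \;+\; \mathbf{R}_{\mathrm{IMU}}\,
        \mathbf{R}_{\mathrm{boresight}}\,
        \lambda \begin{pmatrix} x \\ y \\ -f \end{pmatrix}
\]
% with (x, y) the image coordinates, f the focal length, and
% \lambda the per-point scale factor (from LIDAR heights or a DTM).
```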
Abstract:
The production and perception of music is a multimodal activity involving auditory, visual and conceptual processing, integrating these with prior knowledge and environmental experience. Musicians utilise expressive physical nuances to highlight salient features of the score. The question arises within the literature as to whether performers' non-technical, non-sound-producing movements may be communicatively meaningful and convey important structural information to audience members and co-performers. In the light of previous performance research (Vines et al., 2006; Wanderley, 2002; Davidson, 1993), and considering findings within co-speech gestural research and auditory and audio-visual neuroscience, this thesis examines the nature of those movements not directly necessary for the production of sound, and their particular influence on audience perception. Within the current research, 3D performance analysis is conducted using the Vicon 12-camera system and Nexus data-processing software. Performance gestures are identified as repeated patterns of motion relating to musical structure, which not only express phrasing and structural hierarchy but are consistently and accurately interpreted as such by a perceiving audience. Gestural characteristics are analysed across performers and performance styles using two Chopin preludes selected for their diverse yet comparable structures (Opus 28:7 and 6). Effects on perceptual judgements of presentation modes (visual-only, auditory-only, audiovisual, full- and point-light) and viewing conditions are explored. This thesis argues that while performance style is highly idiosyncratic, piano performers reliably generate structural gestures through repeated patterns of upper-body movement. The shapes and locations of phrasing motions are identified as particular to the sample of performers investigated. Findings demonstrate that, despite the personalised nature of the gestures, performers use increased velocity of movement to emphasise musical structure, and that observers accurately and consistently locate phrasing junctures where these patterns and variations in motion magnitude, shape and velocity occur. By viewing performance motions in polar (spherical) rather than Cartesian coordinate space, it is possible to get mathematically closer to the movement generated by each of the nine performers, revealing distinct patterns of motion relating to phrasing structures, regardless of intended performance style. These patterns are highly individualised both to each performer and to the performed piece. Instantaneous velocity analysis indicates a right-directed bias of performance motion variation at salient structural features within individual performances. Perceptual analyses demonstrate that audience members are able to accurately and effectively detect phrasing structure from performance motion alone. This ability persists even for degraded point-light performances, where all extraneous environmental information has been removed. The relative contributions of auditory, visual and audiovisual judgements demonstrate that the visual component of a performance does positively impact the overall accuracy of phrasing judgements, indicating that receivers are most effective in their recognition of structural segmentations when they can both see and hear a performance. Observers appear to make use of a rapid online judgement heuristic, adjusting response processes quickly to adapt and perform accurately across multiple modes of presentation and performance style.
In line with existing theories within the literature, it is proposed that this processing ability may be related to the cognitive and perceptual interpretation of syntax within gestural communication during social interaction and speech. The findings of this research may have future impact on performance pedagogy, computational analysis and performance research, as well as potentially influencing future investigations of the cognitive aspects of musical and gestural understanding.
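The coordinate transformation used in the motion analysis above, Cartesian marker trajectories re-expressed in spherical coordinates, together with a simple instantaneous-velocity estimate, can be sketched as follows (array shapes and sampling rate are assumptions, not taken from the thesis):

```python
import numpy as np

def to_spherical(xyz):
    """Convert (n_frames, 3) Cartesian marker positions to spherical
    coordinates (r, theta, phi): radius, polar angle, azimuth."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / np.where(r == 0, 1, r), -1, 1))
    phi = np.arctan2(y, x)
    return np.column_stack([r, theta, phi])

def instantaneous_speed(xyz, fs=100.0):
    """Frame-to-frame speed of a marker, assuming capture at fs Hz."""
    return np.linalg.norm(np.diff(xyz, axis=0), axis=1) * fs
```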
Abstract:
A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses the depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image. However, by using two or more images from different viewing angles this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of imaged objects in the world can be computed. Numerous algorithms have been proposed and implemented to solve the above problem; these algorithms are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly give the position and orientation respectively, the camera system estimates them by running SfM as described above. This makes the pose obtained from a camera highly sensitive to the images captured and to other effects, such as low lighting conditions, poor focus, or improper viewing angles. In some applications, for example, an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz., sensor fusion, to achieve more accurate and usable position and reconstruction information. This project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room. These results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. Then a number of scenarios are targeted where SfM fails. The pose estimates obtained from SfM are replaced by those obtained from other sensors and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained by using only a camera and that obtained by using the camera along with a LIDAR and/or an IMU. Additionally, the project also addresses the performance issue faced when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
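The two-view geometry at the heart of SfM, recovering relative camera pose from matched points, can be sketched with OpenCV (feature matching and the calibration matrix K are assumed to come from elsewhere):

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Recover relative camera rotation R and unit-scale translation t
    from matched image points in two views (the core SfM step).

    pts1, pts2: (n, 2) float arrays of corresponding pixel coordinates.
    K: 3x3 camera intrinsic matrix."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # t is known only up to scale; fusing GPS/IMU/LIDAR fixes it
```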
Abstract:
This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.
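RatSLAM's lightweight appearance matching can be approximated by comparing low-resolution scanline intensity profiles between the current image and stored visual templates; a rough sketch of that idea (profile size and threshold are illustrative assumptions):

```python
import numpy as np

def scanline_profile(gray):
    """Collapse a grayscale image to a normalized 1-D column-intensity
    profile, a cheap appearance signature for scene familiarity."""
    prof = gray.astype(float).mean(axis=0)
    return prof / prof.sum()

def match_template(profile, templates, threshold=0.04):
    """Return the index of the best-matching stored template by mean
    absolute difference, or -1 if the scene looks new (new template)."""
    if not templates:
        return -1
    errs = [np.abs(profile - t).mean() for t in templates]
    best = int(np.argmin(errs))
    return best if errs[best] < threshold else -1
```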
Abstract:
This report summarizes initial work to incorporate Photometrics CH250 charge-coupled device (CCD) detectors in the NOAA/MLML Marine Optical System (MOS). The MOS spectroradiometer will be used primarily in the Marine Optical Buoy (MOBY) to surface-truth the ocean color satellite, SeaWiFS, scheduled for launch later this year. This work was funded through Contract NAS5-31746 to NASA, Goddard Space Flight Center.
Abstract:
Many people suffer from conditions that lead to deterioration of motor control and make access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement to pointer movement. These approaches may be perceived as intrusive, for example, wearable devices. Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer vision-based strategy where features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated to pointer movement. The main contributions of this paper are (1) enhancing a video-based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human-computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities.
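One simple realization of the feature-to-pointer mapping described in contribution (1) applies a gain and a small dead zone to the tracked feature's displacement, so limited head motion can still reach the whole screen; the parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np

class PointerMapper:
    """Relative mapping from tracked-feature displacement to pointer
    movement: a dead zone suppresses tracking jitter, and a gain
    amplifies small physical movements so the whole screen stays
    reachable. Parameter values are illustrative, not from the paper."""

    def __init__(self, gain=8.0, dead_zone=1.5, screen=(1920, 1080)):
        self.gain, self.dead_zone, self.screen = gain, dead_zone, screen
        self.x, self.y = screen[0] // 2, screen[1] // 2  # start centred

    def update(self, dx, dy):
        """dx, dy: feature displacement in camera pixels since last frame."""
        if np.hypot(dx, dy) >= self.dead_zone:  # ignore jitter-sized motion
            self.x = int(np.clip(self.x + self.gain * dx, 0, self.screen[0] - 1))
            self.y = int(np.clip(self.y + self.gain * dy, 0, self.screen[1] - 1))
        return self.x, self.y
```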