979 results for camera motion parameters
Abstract:
A new and sensitive molecular probe, 2-(2′-hydroxyphenyl)imidazo[1,2-a]pyridine (HPIP), for monitoring structural changes in lipid bilayers is presented. Migration of HPIP from water into vesicles involves rupture of hydrogen (H) bonds with water and formation of an internal H bond once the probe is inside the vesicle. These structural changes of the dye allow a photoinduced intramolecular proton-transfer reaction and a subsequent twisting/rotational process to occur upon electronic excitation of the probe. The resulting large Stokes-shifted fluorescence band depends on the twisting motion of the zwitterionic phototautomer and is characterized in vesicles of dimyristoyl-phosphatidylcholine and of dipalmitoyl-phosphatidylcholine over the temperature range of interest and in the presence of cholesterol. Because the fluorescence of aqueous HPIP does not interfere with the emission of the probe within the vesicles, the proton-transfer/twisting-motion fluorescence of HPIP directly allows us to monitor and quantify structural changes within bilayers. The static and dynamic fluorescence parameters are sensitive enough to such changes to suggest this photostable dye as a potential molecular probe of the physical properties of lipid bilayers.
Abstract:
When the visual (striate) cortex (V1) is damaged in human subjects, cortical blindness results in the contralateral visual half field. Nevertheless, under some experimental conditions, subjects demonstrate a capacity to make visual discriminations in the blind hemifield (blindsight), even though they have no phenomenal experience of seeing. This capacity must, therefore, be mediated by parallel projections to other brain areas. It is also the case that some subjects have conscious residual vision in response to fast moving stimuli or sudden changes in light flux level presented to the blind hemifield, characterized by a contentless kind of awareness, a feeling of something happening, albeit not normal seeing. The relationship between these two modes of discrimination has never been studied systematically. We examine, in the same experiment, both the unconscious discrimination and the conscious visual awareness of moving stimuli in a subject with unilateral damage to V1. The results demonstrate an excellent capacity to discriminate motion direction and orientation in the absence of acknowledged perceptual awareness. Discrimination of the stimulus parameters for acknowledged awareness apparently follows a different functional relationship with respect to stimulus speed, displacement, and stimulus contrast. As performance in the two modes can be quantitatively matched, the findings suggest that it should be possible to image brain activity and to identify the active areas involved in the same subject performing the same discrimination task, both with and without conscious awareness, and hence to determine whether any structures contribute uniquely to conscious perception.
Abstract:
Analysis of vibrations and displacements is a hot topic in structural engineering. Although there is a wide variety of methods for vibration analysis, direct measurement of displacements in the mid and high frequency range has not been well solved, and accurate devices tend to be very expensive. Low-cost systems can be achieved by applying adequate image-processing algorithms. In this paper, we propose the use of a commercial pocket digital camera, able to record more than 420 frames per second (fps) at low resolution, for accurately measuring small vibrations and displacements. The method is based on tracking elliptical targets with sub-pixel accuracy. Our proposal is demonstrated at a distance of 10 m with a spatial resolution of 0.15 mm. A practical application on a simple structure is given, and the main parameters of the attenuated movement of a steel column after an impulsive impact are determined with a spatial accuracy of 4 µm.
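The abstract does not detail the tracking algorithm; as a rough sketch of how sub-pixel target localisation can work, an intensity-weighted centroid recovers the centre of a dark elliptical mark to a fraction of a pixel. The function and the synthetic patch below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a dark elliptical target on a
    bright background, returned in patch (x, y) coordinates."""
    weights = patch.max() - patch.astype(float)  # dark target -> high weight
    total = weights.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (xs * weights).sum() / total, (ys * weights).sum() / total

# Synthetic binary ellipse centred at (10.3, 7.6) in a 21x16 patch:
ys, xs = np.mgrid[0:16, 0:21].astype(float)
patch = np.where(((xs - 10.3) / 4) ** 2 + ((ys - 7.6) / 3) ** 2 < 1, 0, 255)
cx, cy = subpixel_centroid(patch)
```

Even with hard integer sampling, the recovered centre lands well within a pixel of the true sub-pixel position; a real system would refine this further with ellipse fitting.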
Abstract:
Image-Based Visual Servoing (IBVS) is a vision-based robotic control scheme. It uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These range from the intrinsic camera parameters (which can be obtained from a prior camera calibration) to the distance along the optical axis between the camera and the visual features, i.e., the depth. This paper presents a comparative study of the performance of D-IBVS estimating the depth in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system's performance.
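The depth Z enters IBVS through the interaction matrix of each image point. A minimal sketch of the classic point-feature control law v = -λ L⁺ (s - s*) follows; this is standard IBVS, not necessarily the exact D-IBVS variants compared in the paper, and the feature values are invented:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalised image point
    (x, y) at depth Z, in the standard Chaumette-Hutchinson form."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist v = -gain * L^+ (s - s*), stacking all features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Four points at depth 1 m, offset from the goal by a small x-shift:
s_star = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
s = [(x + 0.02, y) for x, y in s_star]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
```

Because every term of the interaction matrix's translational part divides by Z, a wrong depth rescales the commanded translation, which is why the paper's per-feature depth from the RGB-D sensor helps.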
Abstract:
Context. The current generation of X-ray satellites has discovered many new X-ray sources that are difficult to classify within the well-described subclasses. The hard X-ray source IGR J11215−5952 is a peculiar transient, displaying very short X-ray outbursts every 165 days. Aims. To characterise the source, we obtained high-resolution spectra of the optical counterpart, HD 306414, at different epochs spanning a total of three months, before and around the 2007 February outburst, with the combined aims of deriving its astrophysical parameters and searching for orbital modulation. Methods. We fit model atmospheres generated with the fastwind code to the spectrum, and used the interstellar lines in the spectrum to estimate its distance. We also cross-correlated each individual spectrum with the best-fit model to derive radial velocities. Results. From its spectral features, we classify HD 306414 as B0.5 Ia. From the model fit, we find Teff ≈ 24 700 K and log g ≈ 2.7, in good agreement with the morphological classification. Using the interstellar lines in its spectrum, we estimate a distance to HD 306414 of d ≳ 7 kpc. Assuming this distance, we derive R∗ ≈ 40 R⊙ and Mspect ≈ 30 M⊙ (consistent, within errors, with Mevol ≈ 38 M⊙, and in good agreement with calibrations for the spectral type). Analysis of the radial velocity curve reveals that the radial velocity changes are not dominated by orbital motion, and provides an upper limit on the semi-amplitude for the optical component of Kopt ≲ 11 ± 6 km s-1. Large variations in the depth and shape of the photospheric lines suggest the presence of strong pulsations, which may be the main cause of the radial velocity changes. Very significant variations, uncorrelated with those of the photospheric lines, are seen in the shape and position of the Hα emission feature around the time of the X-ray outburst, but large excursions are also observed at other times. Conclusions. HD 306414 is a normal B0.5 Ia supergiant. Its radial velocity curve is dominated by effects other than binary motion, most likely stellar pulsations. The available data suggest that the X-ray outbursts are caused by the close passage of the neutron star in a very eccentric orbit, perhaps leading to localised mass outflow.
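The quoted numbers can be cross-checked: the spectroscopic mass follows from the surface gravity and radius via M = g R²/G. A quick sanity check with the abstract's values (constants in cgs units):

```python
# Values taken from the abstract; physical constants in cgs units.
g = 10 ** 2.7                    # surface gravity from log g = 2.7, cm s^-2
R = 40 * 6.957e10                # 40 solar radii, in cm
G = 6.674e-8                     # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33                 # solar mass, g

M_spec = g * R ** 2 / G / M_sun  # spectroscopic mass in solar masses
```

The result comes out near 29 M⊙, consistent with the quoted Mspect ≈ 30 M⊙.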
Abstract:
In this work, we present a multi-camera surveillance system based on self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and motion analysis and monitoring. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
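The abstract does not specify the self-organizing network it uses; as a generic illustration of the building block it names, one training step of a small self-organizing map pulls the best-matching unit and its neighbours towards an input feature vector. Map size, learning rate, and the 2-D "motion feature" below are arbitrary choices for the sketch:

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One update of a 1-D self-organising map: find the best-matching
    unit, then pull every node towards x with a Gaussian neighbourhood."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    dist = np.abs(np.arange(len(weights)) - bmu)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # neighbourhood kernel
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
w = rng.random((10, 2))          # 10 nodes over a 2-D feature space
sample = np.array([0.9, 0.1])    # one observed motion feature
w2 = som_step(w, sample)
```

Repeated over a video stream, such updates let the map's nodes settle on the recurring motion patterns in the scene, which is the representational role the abstract assigns to the network.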
Abstract:
In this study, a digital CMOS camera was calibrated for use as a non-contact colorimeter for measuring the color of granite artworks. The low chroma values of the granite, which yield similar stimulation of the three color channels of the camera, proved to be the most challenging aspect of the task. The appropriate parameters for converting the device-dependent RGB color space into a device-independent color space were established. For this purpose, the color of a large number of Munsell samples (corresponding to the previously defined color gamut of granite) was measured with the digital camera and with a spectrophotometer (the reference instrument). The color data were then compared using the CIELAB color-difference formulae. The best correlations between measurements were obtained when the camera operated at 10 bits and the spectrophotometric measurements were made in SCI mode. Finally, the calibrated instrument was used successfully to measure the color of six commercial varieties of Spanish granite.
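The RGB-to-device-independent conversion described above is, in its simplest form, a regression fitted on measured patch pairs. A minimal sketch using an affine map from camera RGB to CIELAB; the paper likely uses a richer polynomial model and real Munsell/spectrophotometer data, whereas the training pairs below are synthetic:

```python
import numpy as np

def fit_rgb_to_lab(rgb, lab):
    """Least-squares affine map from camera RGB to CIELAB, a common
    first-order device-characterisation model."""
    A = np.hstack([rgb, np.ones((len(rgb), 1))])  # affine design matrix
    M, *_ = np.linalg.lstsq(A, lab, rcond=None)
    return M

def apply_map(M, rgb):
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M

# Synthetic patch pairs standing in for the measured Munsell samples:
rng = np.random.default_rng(1)
rgb = rng.random((50, 3))
true_M = rng.normal(size=(4, 3))
lab = np.hstack([rgb, np.ones((50, 1))]) @ true_M
M = fit_rgb_to_lab(rgb, lab)
err = np.abs(apply_map(M, rgb) - lab).max()
```

In practice the fit quality would be reported as CIELAB color differences over held-out patches, mirroring the paper's comparison against the spectrophotometer.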
Abstract:
Measurement of concrete strain through non-invasive methods is of great importance in civil engineering and structural analysis. Traditional methods use laser speckle and high-quality cameras that may prove too expensive for many applications. Here we present a method for measuring concrete deformations with a standard reflex camera and image processing for tracking objects on the concrete's surface. Two different approaches are presented. In the first, purpose-made marks are drawn on the surface, while in the second we track small defects on the surface caused by air bubbles during the hardening process. The method has been tested on a concrete sample under several loading/unloading cycles. A stop-motion sequence of the process was captured and analyzed. The results were successfully compared with the values given by a strain gauge. The accuracy of our methods in tracking objects is below 8 μm, on the order of more expensive commercial devices.
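Once two surface marks are tracked, the strain itself is just the relative change of the distance between them. A minimal sketch of that final step; the pixel coordinates and scale factor below are invented for illustration:

```python
import math

def strain_from_tracks(p0_ref, p1_ref, p0, p1, mm_per_px):
    """Engineering strain from the distance between two tracked surface
    marks, before (ref) and after loading, in image coordinates."""
    L0 = math.dist(p0_ref, p1_ref) * mm_per_px  # gauge length before load
    L = math.dist(p0, p1) * mm_per_px           # gauge length after load
    return (L - L0) / L0

# Marks 1000 px apart move 0.4 px closer under compression (0.1 mm/px):
eps = strain_from_tracks((100.0, 50.0), (1100.0, 50.0),
                         (100.1, 50.0), (1099.7, 50.0), mm_per_px=0.1)
```

This yields a strain of -400 microstrain, the quantity that would be compared against the strain-gauge reading.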
Abstract:
Currently, autonomous piloting systems for quadcopters are being developed to navigate outdoors, where the GPS signal can be used to define navigation waypoints and to provide position and altitude hold, return-to-home, and similar modes. However, autonomous navigation in enclosed spaces, without a global positioning system available inside a room, remains a challenging problem with no closed solution. Most solutions are based on expensive sensors, such as LIDAR, or on external positioning systems (e.g. Vicon, Optitrack). Some of these solutions offload the processing of sensor data and of the more demanding algorithms to computing systems outside the vehicle, which also removes the full autonomy intended for a vehicle of this kind. The goal of this thesis is therefore to prepare a small unmanned aerial system, namely a quadcopter, integrating different modules that allow simultaneous localization and mapping in indoor, GPS-denied spaces, using an RGB-D camera together with other sensors internal and external to the quadcopter, integrated into a system that computes vision-based positioning and is intended, in the near future, to perform motion planning for navigation. The result of this work is an integrated architecture for the analysis of localization, mapping, and navigation modules, based on open, inexpensive hardware and on state-of-the-art open-source frameworks. It was also possible to partially test some localization modules, under certain test conditions and certain algorithm parameters. The mapping capability of the framework was also tested and approved. The resulting framework is ready for navigation, requiring only some adjustments and tests.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Models of visual motion processing that introduce priors for low speed through Bayesian computations are sometimes treated with scepticism by empirical researchers because of the convenient way in which the parameters of the Bayesian priors have been chosen. Using the effects of motion adaptation on motion perception as an illustration, we show that the Bayesian prior, far from being convenient, may be estimated on-line and therefore represents a useful tool by which visual motion processes may be optimized in order to extract the motion signals commonly encountered in everyday experience. The prescription for optimization, when combined with system constraints on the transmission of visual information, may lead to an exaggeration of perceptual bias through the process of adaptation. Our approach extends the Bayesian model of visual motion proposed by Weiss et al. [Weiss, Y., Simoncelli, E., & Adelson, E. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5, 598-604.] in suggesting that perceptual bias reflects a compromise taken by a rational system in the face of uncertain signals and system constraints.
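For a Gaussian likelihood and a zero-mean Gaussian low-speed prior, the MAP speed estimate is a shrinkage of the measurement towards zero, and the bias grows as the likelihood gets noisier (e.g. at low contrast). A minimal sketch of this standard form of the Weiss et al. model; the on-line prior estimation proposed in the paper is not reproduced, and the numbers are illustrative:

```python
def map_speed(measured, sigma_likelihood, sigma_prior):
    """MAP speed for a Gaussian likelihood centred on the measurement
    and a zero-mean Gaussian low-speed prior: a shrinkage estimator."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_likelihood ** 2)
    return w * measured

# Lower contrast -> noisier likelihood -> stronger bias towards slow:
high_contrast = map_speed(10.0, sigma_likelihood=1.0, sigma_prior=4.0)
low_contrast = map_speed(10.0, sigma_likelihood=4.0, sigma_prior=4.0)
```

The low-contrast estimate is pulled to half the measured speed while the high-contrast one stays close to it, reproducing the classic contrast-dependent slow bias that motivates these models.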
Abstract:
Stimuli from one family of complex motions are defined by their spiral pitch, where the cardinal axes represent signed expansion and rotation. Intermediate spirals are represented by intermediate pitches. It is well established that vision contains mechanisms that sum over space and direction to detect these stimuli (Morrone et al., Nature 376 (1995) 507), and one possibility is that four cardinal mechanisms encode the entire family. We extended earlier work (Meese & Harris, Vision Research 41 (2001) 1901) using subthreshold summation of random-dot kinematograms and a two-interval forced-choice technique to investigate this possibility. In our main experiments, the spiral pitch of one component was fixed and that of another was varied in steps of 15° relative to the first. Regardless of whether the fixed component was aligned with the cardinal axes or with an intermediate spiral, summation to coherence threshold between the two components declined as a function of their difference in spiral pitch. Similar experiments showed that none of the following were critical design features or stimulus parameters for our results: superposition of signal dots, limited-lifetime dots, the presence of speed gradients, stimulus size, or the number of dots. A simplex algorithm was used to fit models containing mechanisms spaced at a pitch of either 90° (cardinal model) or 45° (cardinal+ model) and combined using a fourth-root summation rule. For both models, direction half-bandwidth was equated for all mechanisms and was the only free parameter. Only the cardinal+ model could account for the full set of results. We conclude that the detection of complex motion in human vision requires both cardinal and spiral mechanisms with a half-bandwidth of approximately 46°.
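The fourth-root summation rule mentioned above combines channel sensitivities with a Minkowski exponent of 4, which predicts only modest subthreshold summation between mechanisms that respond equally to a compound stimulus. A minimal sketch of the rule itself; the mechanism tuning and simplex fitting are not reproduced:

```python
def minkowski_sum(sensitivities, exponent=4.0):
    """Combine channel sensitivities with a Minkowski (fourth-root)
    summation rule, as used for probability-summation-style pooling."""
    return sum(s ** exponent for s in sensitivities) ** (1.0 / exponent)

# Two mechanisms equally sensitive to the same compound stimulus:
combined = minkowski_sum([1.0, 1.0])
```

Two equal channels combine to 2^(1/4) ≈ 1.19, i.e. roughly a 19% sensitivity gain, which is the size of summation effect such models predict between overlapping mechanisms.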
Abstract:
This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. Such a system is known to capture highly detailed deforming 3D surfaces at high frame rates without any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lighting setup and on how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate the diffuse reflectance model. In the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
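The first calibration step can be illustrated with a toy version: RANSAC repeatedly hypothesises a Lambertian model from one sample and keeps the hypothesis with the most inliers, so points dominated by specular highlights fall out as outliers. This one-parameter sketch (a single scalar albedo, synthetic data) stands in for the paper's full diffuse reflectance model:

```python
import numpy as np

def ransac_diffuse_albedo(intensity, ndotl, iters=200, tol=0.05, seed=0):
    """RANSAC fit of a single Lambertian albedo I = rho * max(n.l, 0);
    specular points do not fit the model and end up as outliers."""
    rng = np.random.default_rng(seed)
    best_rho, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(intensity))
        if ndotl[i] <= 0:           # point not lit: cannot seed a model
            continue
        rho = intensity[i] / ndotl[i]
        inliers = np.abs(intensity - rho * ndotl) < tol
        if inliers.sum() > best_inliers:
            best_rho, best_inliers = rho, inliers.sum()
    return best_rho, best_inliers

# 100 face points with albedo 0.6; the first 20 get a specular boost:
rng = np.random.default_rng(1)
ndotl = rng.uniform(0.2, 1.0, 100)
intensity = 0.6 * ndotl
intensity[:20] += rng.uniform(0.2, 0.5, 20)
rho, n_in = ransac_diffuse_albedo(intensity, ndotl)
```

The inlier set corresponds to the "purely diffuse points" of the abstract; in the paper, the outliers are then passed to the non-Lambertian fitting stage.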
Abstract:
In this paper we present increased adaptivity and robustness in distributed object tracking by multi-camera networks using a socio-economic mechanism for learning the vision graph. To build up the vision graph autonomously within a distributed smart-camera network, we use an ant-colony-inspired mechanism that exchanges responsibility for tracking objects using Vickrey auctions. Employing the learnt vision graph allows the system to optimise its communication continuously. Since distributed smart-camera networks are prone to uncertainties in individual cameras, such as failures or changes in extrinsic parameters, the vision graph should be sufficiently robust and adaptable during runtime to enable seamless tracking and optimised communication. To better reflect real smart-camera platforms and networks, we consider that communication and handover are not instantaneous, and that cameras may be added, removed, or have their properties changed during runtime. Using our dynamic socio-economic approach, the network is able to continue tracking objects well despite all these uncertainties, and in some cases even with improved performance. This demonstrates the adaptivity and robustness of our approach.
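A Vickrey auction, as used for handover above, awards the object to the highest bidder at the second-highest price, which makes truthfully bidding one's tracking utility the dominant strategy. A minimal sketch of one handover decision; the camera names and bid values are invented, and the paper's actual utility function is not reproduced:

```python
def vickrey_handover(bids):
    """Second-price (Vickrey) auction: the highest-bidding camera wins
    responsibility for the object but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Tracking utilities offered by three neighbouring cameras:
winner, price = vickrey_handover({"cam_a": 0.9, "cam_b": 0.6, "cam_c": 0.4})
```

Repeating such auctions as objects move, and remembering who tends to win from whom, is one way a vision graph of useful camera-to-camera links can be learnt over time.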