87 results for Parallax
Abstract:
In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the “correct” size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Given feedback, observers adjusted their responses so that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.
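The weighting account in this abstract corresponds to the standard linear cue-combination model; the following is a schematic statement of that model, not the authors' own formulation:

$$\hat{S} = w_{\text{tex}}\, S_{\text{tex}} + w_{\text{sm}}\, S_{\text{sm}}, \qquad w_{\text{tex}} + w_{\text{sm}} = 1, \qquad w_i \propto 1/\sigma_i^2,$$

where $S_{\text{tex}}$ and $S_{\text{sm}}$ are the size estimates given by the texture and stereo/motion-parallax cues and $\sigma_i^2$ their variances. Texture feedback raising $w_{\text{tex}}$ is consistent with this model; the paradoxical stereo/motion result is not, which is why the authors invoke remapping or criterion shifts instead.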
Abstract:
This paper describes two solutions for the systematic measurement of surface elevation that can be used for both profile and surface reconstructions in quantitative fractography case studies. The first one is developed under the Khoros graphical interface environment. It consists of an adaptation of the almost classical area-matching algorithm, based on cross-correlation operations, to the well-known method of parallax measurement from stereo pairs. A normalization function was created to avoid false cross-correlation peaks, leading to the true best-matching window at each region analyzed on the two stereo projections. Some limitations on the use of scanning electron microscopy and on the types of surface patterns are also discussed. The second algorithm is based on a spatial correlation function. This solution is implemented under the NIH Image macro programming environment, combining a good representation of low-contrast regions with many improvements in overall user interface and performance. Its advantages and limitations are also presented.
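The area-matching step described here can be illustrated with a minimal sketch (the function and parameter names are ours, not those of the Khoros or NIH Image implementations): a mean-subtracted, norm-divided correlation score suppresses the false peaks that raw cross-correlation produces in bright regions, and the best-scoring horizontal shift is the parallax.

```python
import numpy as np

def ncc(window, patch):
    """Normalized cross-correlation between two equally sized arrays.

    Subtracting the mean and dividing by the norms suppresses the false
    peaks that plain cross-correlation produces in bright regions."""
    w = window - window.mean()
    p = patch - patch.mean()
    denom = np.sqrt((w * w).sum() * (p * p).sum())
    return (w * p).sum() / denom if denom > 0 else 0.0

def parallax_at(left, right, row, col, half=3, max_shift=15):
    """Find the horizontal parallax of pixel (row, col) of `left` by
    locating the best-matching window along the same row of `right`."""
    win = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for s in range(max_shift + 1):
        c = col - s
        if c - half < 0:
            break
        patch = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(ncc(win, patch))
    return int(np.argmax(scores))  # shift (in pixels) with best score
```

With camera geometry known, the parallax returned here converts to surface elevation; the window size and search range above are illustrative defaults.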
Abstract:
Mode of access: Internet.
Abstract:
"From the Memoirs of the Royal Astronomical Society, vol. XLVIII."
Abstract:
The first US edition of my work, including all of my most recent collection, Parallax, plus a selection from three of my four previous books, as well as a single long poem, "DON JUAN, 2013".
Abstract:
In this article we present an alternative theoretical perspective on contemporary cultural, political and economic practices in advanced countries. Like other articles in this issue of parallax, our focus is on conceptualising the economies of excess. However, our ideas do not draw on the writings of Georges Bataille in The Accursed Share, but principally on Virilio’s Speed & Politics: An Essay on Dromology and Marx’s Capital and the Grundrisse. Using a modest synthesis of tools provided by these theorists, we put forward a tentative conceptualisation of ‘dromoeconomics’, or, a political economy of speed.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
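The census transform singled out in the report above can be sketched as follows; the 3×3 window and wrap-around border handling are simplifications of ours. Because the census code records only the local brightness ordering, it is unchanged by the gain/offset (radiometric) distortions that defeat the Sum of Absolute Differences:

```python
import numpy as np

def census(img, half=1):
    """Census transform: encode each pixel as a bit string recording
    whether each neighbour in a (2*half+1)^2 window is darker than the
    centre. The code depends only on local ordering, so it is robust
    to radiometric (gain/offset) distortion. Borders wrap around here
    for brevity (np.roll); a real sensor would pad or crop instead."""
    out = np.zeros(img.shape, dtype=np.uint32)
    for dr in range(-half, half + 1):
        for dc in range(-half, half + 1):
            if dr == 0 and dc == 0:
                continue
            shifted = np.roll(np.roll(img, dr, axis=0), dc, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint32)
    return out

def hamming(a, b):
    """Matching cost between two census codes: number of differing bits."""
    return bin(int(a) ^ int(b)).count("1")
```

Matching then proceeds as with any area-based metric, but the per-pixel cost is the Hamming distance between census codes rather than an intensity difference, which is why the transform combines robustness with low computational cost.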
Abstract:
Topographic structural complexity of a reef is highly correlated with coral growth rates, coral cover and overall levels of biodiversity, and is therefore integral in determining ecological processes. Modeling these processes commonly includes measures of rugosity obtained from a wide range of survey techniques that often fail to capture rugosity at different spatial scales. Here we show that accurate estimates of rugosity can be obtained from video footage captured using underwater video cameras (i.e., monocular video). To demonstrate the accuracy of our method, we compared the results to in situ measurements of a 2 m × 20 m area of forereef from Glovers Reef atoll in Belize. Sequential pairs of images were used to compute fine-scale bathymetric reconstructions of the reef substrate, from which precise measurements of rugosity and reef topographic structural complexity can be derived across multiple spatial scales. To achieve accurate bathymetric reconstructions from uncalibrated monocular video, the position of the camera for each image in the video sequence and the intrinsic parameters (e.g., focal length) must be computed simultaneously. We show that these parameters can often be determined when the data exhibit parallax-type motion, and that rugosity and reef complexity can be accurately computed from existing video sequences taken with any type of underwater camera in any reef habitat or location. This technique opens a wide array of possibilities for future coral reef research by providing a cost-effective and automated method of determining structural complexity and rugosity in both new and historical video surveys of coral reefs.
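For reference, the simplest linear (chain-and-tape style) rugosity index that such bathymetric profiles support is the ratio of contour length to planar length; this is a generic illustration, not necessarily the exact metric used in the paper:

```python
import math

def rugosity(profile, dx):
    """Linear rugosity index: true contour length of a depth profile
    divided by its planar (straight-line) length. A perfectly flat
    profile gives 1.0; greater structural complexity gives larger
    values. `profile` is a list of depths sampled every `dx` metres."""
    contour = sum(math.hypot(dx, profile[i + 1] - profile[i])
                  for i in range(len(profile) - 1))
    return contour / (dx * (len(profile) - 1))
```

Sampling the reconstructed bathymetry at different `dx` values is what yields rugosity estimates across multiple spatial scales.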
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has been lacking a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up.
Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages typically spanning several apparitions have so far been found among designated observation sets each spanning less than 48 hours.
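The log-linear complexity claimed above is the signature of sort-and-sweep candidate generation; a one-dimensional toy version (the thesis works in a reduced orbital-element space, not on a single coordinate) looks like:

```python
def candidate_pairs(predictions, tol):
    """Sort the predicted positions once (O(n log n)) and sweep, so
    each observation set is compared only against neighbours within
    `tol` rather than against all n sets (O(n^2)). Returns index pairs
    of sets close enough to merit a full linkage test."""
    order = sorted(range(len(predictions)), key=lambda i: predictions[i])
    pairs = []
    for a in range(len(order)):
        b = a + 1
        while (b < len(order)
               and predictions[order[b]] - predictions[order[a]] <= tol):
            pairs.append((order[a], order[b]))
            b += 1
    return pairs
```

Each surviving pair would then be subjected to the expensive statistical orbital-inversion test; the sweep merely prunes the quadratic number of candidate comparisons.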
Abstract:
The entry of the plant toxin ricin and its A- and B-subunits in model membranes in the presence as well as absence of monosialoganglioside (GM(1)) has been studied. Dioleoylphosphatidylcholine and 5-, 10-, and 12-doxyl- or 9,10-dibromophosphatidylcholines serve as quenchers of intrinsic tryptophan fluorescence of the proteins. The parallax method of Chattopadhyay and London [(1987) Biochemistry 26, 39-45] has been employed to measure the average membrane penetration depth of tryptophans of ricin and its B-chain and the actual depth of the sole Trp 211 in the A-chain. The results indicate that both of the chains as well as intact ricin penetrate the membrane deeply and the C-terminal end of the A-chain is well inside the bilayer, especially at pH 4.5. An extrinsic probe N-(iodoacetyl)-N'-(5-sulfo-1-naphthyl) ethylenediamine (I-AEDANS) has been attached to Cys 259 of the A-chain, and the kinetics of penetration has been followed by monitoring the increase in AEDANS fluorescence at 480 nm. The insertion follows first-order kinetics, and the rate constant is higher at a lower pH. The energy transfer distance analysis between Trp 211 and AEDANS points out that the conformation of the A-chain changes as it inserts into the membrane. CD studies indicate that the helicity of the proteins increases after penetration, which implies that some of the unordered structure in the native protein is converted to the ordered form during this process. Hydrophobic forces seem to be responsible for stabilizing a particular protein conformation inside the membrane.
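The parallax method of Chattopadhyay and London referenced above estimates the depth of a fluorophore from the ratio of fluorescence intensities measured with a shallow and a deep quencher. In their notation, the distance of the fluorophore from the bilayer centre is

$$ z_{cF} = L_{c1} + \frac{\left(-\dfrac{1}{\pi C}\right)\ln\!\left(\dfrac{F_1}{F_2}\right) - L_{21}^{2}}{2\,L_{21}}, $$

where $L_{c1}$ is the distance of the shallow quencher from the bilayer centre, $L_{21}$ the depth difference between the shallow and deep quenchers, $F_1$ and $F_2$ the fluorescence intensities in the presence of the shallow and deep quencher respectively, and $C$ the quencher concentration in molecules per unit area of membrane.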
Abstract:
Computer-generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram-plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram-plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. © 2009 Optical Society of America.
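The per-sample occlusion bookkeeping can be pictured with a toy point-source superposition; the names and the simple spherical-wave term are ours, not the paper's GPU implementation:

```python
import numpy as np

def hologram_sample(sample_xy, points, wavelength, occluded):
    """Accumulate the complex field at one hologram-plane sample from a
    point cloud, skipping points flagged as occluded for this sample.
    Each visible point contributes a spherical wave exp(ikr)/r."""
    k = 2 * np.pi / wavelength
    field = 0j
    for i, (x, y, z) in enumerate(points):
        if occluded[i]:
            continue  # visibility verdict from the per-sample occluder list
        r = np.sqrt((x - sample_xy[0])**2 + (y - sample_xy[1])**2 + z**2)
        field += np.exp(1j * k * r) / r
    return field
```

The paper's accelerations amount to computing the `occluded` flags once for a group of neighbouring samples and to evaluating fewer samples via nonuniform sampling, rather than running a full visibility test per sample and point.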
Abstract:
This thesis proposed a methodology for detecting areas susceptible to landslides from aerial images, culminating in the development of a computational tool, named SASD/T, to test the methodology. To justify this research, a survey was carried out of the natural disasters in Brazilian history related to landslides and of the methodologies used for detecting and analyzing landslide-prone areas. Preliminary studies of 3D visualization and of concepts related to 3D mapping were conducted. Stereoscopy was implemented to visualize the selected region in three dimensions. Altitudes were obtained through parallax, from the homologous points found by the SIFT algorithm. The experiments were carried out with images of the city of Nova Friburgo. The initial experiment showed that the result obtained using SIFT together with the proposed filter was quite significant when compared with the results of Fernandes (2008) and Carmo (2010), owing to the number of homologous points found and the surface generated. To detect landslide-prone locations, information such as altitude, slope, aspect and curvature was extracted from the stereo pairs and, together with variables supplied by the user, provided an analysis of how susceptible a given area is to landslides. The proposed methodology can be extended to the assessment and prediction of landslide risk in any other region, since it allows user interaction, so that the user can specify the characteristics, items and weightings required for the analysis in question.
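Altitude extraction of this kind generally rests on the standard photogrammetric parallax-difference relation; the thesis's exact formulation may differ, so treat this as a generic sketch:

```python
def elevation_from_parallax(flying_height, base_parallax, dp):
    """Standard photogrammetric parallax-difference formula:

        dh = H * dp / (p + dp)

    where H is the flying height above the reference plane, p the
    absolute parallax of a reference point, and dp the parallax
    difference measured between homologous points (e.g. SIFT matches).
    Returns the height dh of the point above the reference plane."""
    return flying_height * dp / (base_parallax + dp)
```

Applied per matched point pair, this converts the SIFT-derived parallaxes into the altitude surface from which slope, aspect and curvature are then computed.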
Abstract:
In this work we intend to demonstrate that the use of impunity as a supposed criminal motivator, whether as concept or as content, is mistaken, insofar as impunity is nothing more than a functional defect arising from the mismatch between the primary criminalizing program and secondary criminalization, even when mediated by tertiary (media-driven) criminalization. The consequence of this parallax would be a lexical migration, not only from the term impunity to the term impunization, but also the recognition that the latter is nothing more than political finger-pointing, useful to a penal system that shamelessly exploits this dissonance to maintain or increase its punitive power. To that end, we employ the evidential method, since it is not possible for us to decipher all the causes and consequences involved in the discourse of criminogenic impunity, although this has not prevented us from concluding that the selectivity inherent in the penal system, mistakenly named impunity, ultimately serves, when well employed, as a corrective to the voracity of punitive power. A corrective that, however, in order to exert its full curative power, cannot continue to rely on selectivity itself, but rather on a reduction of punitive power itself.
Abstract:
Mobile video and gaming are now widely used, and delivery of a glasses-free 3D experience is of both research and development interest. The key drawbacks of a conventional 3D display based on a static lenticular lenslet array and parallax barriers are low resolution, limited viewing angle and reduced brightness, mainly because of the need for multiple pixels per object point. This study describes the concept and performance of pixel-level cylindrical liquid crystal (LC) lenses, which are designed to steer light to the left and right eye sequentially to form stereo parallax. The width of the LC lenses can be as small as 20-30 μm, so that the associated auto-stereoscopic display will have the same resolution as the 2D display panel in use. Such a thin sheet of tunable LC lens array can be applied directly to existing mobile displays, and can deliver a 3D viewing experience while maintaining 2D viewing capability. Transparent electrodes were laser-patterned to achieve single-pixel lens resolution, and a high-birefringence LC material was used to realise a large diffraction angle for a wide field of view. Simulation was carried out to model the intensity profile at the viewing plane and optimise the lens array based on the measured LC phase profile. The measured viewing angle and intensity profile were compared with the simulation results. © 2014 SPIE.
Abstract:
Stereopsis and motion parallax are two methods for recovering three dimensional shape. Theoretical analyses of each method show that neither alone can recover rigid 3D shapes correctly unless other information, such as perspective, is included. The solutions for recovering rigid structure from motion have a reflection ambiguity; the depth scale of the stereoscopic solution will not be known unless the fixation distance is specified in units of interpupil separation. (Hence the configuration will appear distorted.) However, the correct configuration and the disposition of a rigid 3D shape can be recovered if stereopsis and motion are integrated, for then a unique solution follows from a set of linear equations. The correct interpretation requires only three points and two stereo views.
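The two ambiguities can be written out schematically (our summary, using orthographic motion and small-angle stereo approximations, not the paper's derivation). Structure from motion recovers depth only up to reflection,

$$ z \mapsto -z, $$

while stereo recovers depth only up to the unknown fixation distance $D$ expressed in units of the interocular separation $I$,

$$ \Delta z \approx \frac{D^{2}}{I}\, d $$

for disparity $d$. The sign of the measured disparity resolves the motion reflection, and rigidity across the two views fixes the stereo scale, which is why three points seen in two stereo views yield a unique solution from linear equations.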