87 results for Émotion
Abstract:
An instrument is described which carries three orthogonal geomagnetic field sensors on a standard meteorological balloon package, to sense rapid motion and position changes during ascent through the atmosphere. Because of the finite data bandwidth available over the UHF radio link, a burst sampling strategy is adopted. Bursts of 9 s of measurements at 3.6 Hz are interleaved with periods of slow data telemetry lasting 25 s. Calculation of the variability in each channel is used to determine position changes, a method robust to periods of poor radio signals. During three balloon ascents, variability was found repeatedly at similar altitudes, simultaneously in each of three orthogonal sensors carried. This variability is attributed to atmospheric motions. It is found that the vertical sensor is least prone to stray motions, and that the use of two horizontal sensors provides no additional information over a single horizontal sensor.
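A minimal sketch of the burst-sampling variability measure described in this abstract, assuming roughly 32 samples per 9 s burst at 3.6 Hz and using the per-channel standard deviation as the motion indicator; all names and the synthetic data are illustrative, not the instrument's actual processing.

import numpy as np

# Hypothetical sketch of the burst-sampling scheme: 9 s bursts sampled at
# 3.6 Hz (about 32 samples per burst), with per-channel variability used
# as the motion indicator. Names and data are illustrative only.

BURST_SECONDS = 9.0
SAMPLE_HZ = 3.6
SAMPLES_PER_BURST = int(BURST_SECONDS * SAMPLE_HZ)  # 32 samples

def burst_variability(burst: np.ndarray) -> np.ndarray:
    """Standard deviation of each orthogonal field channel over one burst.

    burst: array of shape (SAMPLES_PER_BURST, 3), one column per sensor.
    Returns a length-3 vector; large values indicate package motion.
    """
    return burst.std(axis=0)

# Example: a quiet burst versus one perturbed by a simulated pendulum swing.
rng = np.random.default_rng(0)
quiet = 48_000.0 + rng.normal(0.0, 5.0, size=(SAMPLES_PER_BURST, 3))  # nT
swing = 200.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, SAMPLES_PER_BURST))
moving = quiet + swing[:, None]
print(burst_variability(quiet))   # roughly 5 nT per channel
print(burst_variability(moving))  # much larger, flags motion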
Abstract:
Many algorithms have been developed to achieve motion segmentation for video surveillance. The algorithms produce varying performance under the effectively infinite variety of changing conditions. It has been recognised that individually these algorithms have useful properties. Fusing the statistical results of these algorithms is investigated, with robust motion segmentation in mind.
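The abstract does not specify the fusion rule; as a hedged illustration, the sketch below combines binary foreground masks from several hypothetical segmentation algorithms by a per-pixel majority vote, one simple statistical fusion.

import numpy as np

# Illustrative only: a per-pixel majority vote over the binary foreground
# masks produced by N motion-segmentation algorithms. The actual fusion
# investigated in the paper may differ.

def fuse_masks(masks: list) -> np.ndarray:
    """Fuse binary motion masks (each HxW, values 0/1) by majority vote."""
    stack = np.stack(masks).astype(np.uint8)           # (N, H, W)
    votes = stack.sum(axis=0)                          # per-pixel agreement
    return (votes > len(masks) // 2).astype(np.uint8)  # moving if most agree

# Three toy 4x4 masks from hypothetical background-subtraction algorithms.
m1 = np.zeros((4, 4), np.uint8); m1[1:3, 1:3] = 1
m2 = np.zeros((4, 4), np.uint8); m2[1:3, 1:4] = 1
m3 = np.zeros((4, 4), np.uint8); m3[0:3, 1:3] = 1
print(fuse_masks([m1, m2, m3]))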
Abstract:
Magnetic sensors have been added to a standard weather balloon radiosonde package to detect motion in turbulent air. These measure the terrestrial magnetic field and return data over the standard UHF radio telemetry. Variability in the magnetic sensor data is caused by motion of the instrument package. A series of radiosonde ascents carrying these sensors has been made near a Doppler lidar measuring atmospheric properties. Lidar-retrieved quantities include the vertical velocity (w) profile and its standard deviation (σw). σw determined over 1 h is compared with the radiosonde motion variability at the same heights. Vertical motion in the radiosonde is found to be robustly increased when σw > 0.75 m s−1 and is linearly proportional to σw. © 2009 American Institute of Physics
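A hedged sketch of the comparison just described: flag heights at which the lidar-derived one-hour σw exceeds the 0.75 m s−1 level above which radiosonde motion variability was found to increase. The profile values and names are invented for illustration.

import numpy as np

# Invented example profile: sigma_w is the lidar-derived standard
# deviation of vertical velocity over 1 h at each height. The 0.75 m/s
# threshold is the figure quoted in the abstract.

heights_km = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma_w = np.array([0.30, 0.90, 1.40, 0.50, 0.80])  # m/s

THRESHOLD = 0.75  # m/s
for z, s in zip(heights_km, sigma_w):
    label = "enhanced radiosonde motion expected" if s > THRESHOLD else "quiet"
    print(f"{z:.1f} km: sigma_w = {s:.2f} m/s  ({label})")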
Abstract:
A survey of the non-radial flows (NRFs) during nearly five years of interplanetary observations revealed the average non-radial speed of the solar wind flows to be ~30 km/s, with approximately one-half of the large (>100 km/s) NRFs associated with ICMEs. Conversely, the average non-radial flow speed upstream of all ICMEs is ~100 km/s, with just over one-third preceded by large NRFs. These upstream flow deflections are analysed in the context of the large-scale structure of the driving ICME. We chose 5 magnetic clouds with relatively uncomplicated upstream flow deflections. Using variance analysis it was possible to infer the local axis orientation, and to qualitatively estimate the point of interception of the spacecraft with the ICME. For all 5 events the observed upstream flows were in agreement with the point of interception predicted by variance analysis. Thus we conclude that the upstream flow deflections in these events are in accord with the current concept of the large-scale structure of an ICME: a curved axial loop connected to the Sun, bounded by a curved (though not necessarily circular) cross section.
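The variance analysis referred to here is plausibly minimum variance analysis of the magnetic field time series (in the style of Sonnerup and Cahill); for a flux-rope-like magnetic cloud the intermediate-variance eigenvector is commonly taken as the local axis estimate. The sketch below runs on synthetic data and is an assumption about the method, not the authors' code.

import numpy as np

# Minimum-variance-style analysis: eigen-decompose the covariance matrix
# of magnetic field samples; for a flux-rope-like cloud the intermediate
# eigenvector approximates the local axis. Synthetic data for illustration.

def variance_axes(B: np.ndarray):
    """B: (N, 3) field samples. Returns eigenvalues (ascending) and
    eigenvectors (as columns) of the field covariance matrix."""
    cov = np.cov(B, rowvar=False)
    return np.linalg.eigh(cov)  # eigenvalues in ascending order

# Synthetic smoothly rotating field, roughly flux-rope-like.
t = np.linspace(0.0, np.pi, 200)
B = np.column_stack([np.cos(t), np.sin(t), 0.2 * np.ones_like(t)])
vals, vecs = variance_axes(B)
print("eigenvalues (min, mid, max):", vals)
print("estimated local axis (intermediate eigenvector):", vecs[:, 1])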
Abstract:
A problem is discussed which is generated by shadows and which is a generalization of simple harmonic motion.
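The abstract gives no details; as a hedged reconstruction of the classical starting point, the parallel-light shadow of uniform circular motion executes simple harmonic motion, and one natural shadow-based generalization replaces the parallel rays with a point source at a finite distance d (the paper's actual construction may differ):

\[
  x(t) = r\cos\omega t, \qquad \ddot{x} = -\omega^{2}x,
\]

while a point source at height d above the centre of the circle projects the moving point onto the screen at

\[
  x_{s}(t) = \frac{d\,r\cos\omega t}{d - r\sin\omega t},
\]

which is periodic but no longer sinusoidal, recovering simple harmonic motion in the limit d → ∞.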
Abstract:
Two formulations for the potential energy for slantwise motion are compared: one which applies strictly only to two-dimensional flows (SCAPE) and a three-dimensional formulation based on a Bernoulli equation. The two formulations share an identical contribution from the vertically integrated buoyancy anomaly and a contribution from different Coriolis terms. The latter arise from the neglect of (different) components of the total change in kinetic energy along a trajectory in the two formulations. This neglect is necessary in order to quantify the potential energy available for slantwise motion relative to a defined steady environment. Copyright © 2000 Royal Meteorological Society.
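Schematically, the shared structure described above can be written as

\[
  \mathrm{PE} \;=\; \int_{z_{0}}^{z_{1}} b \,\mathrm{d}z \;+\; C_{i},
\]

where the vertically integrated buoyancy anomaly b is common to both formulations and C_i is a formulation-dependent Coriolis term, arising because SCAPE and the Bernoulli form each neglect different components of the kinetic-energy change along the trajectory. This is a hedged schematic of the comparison, not the paper's exact notation.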
Abstract:
In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the “correct” size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. With feedback, observers were able to adjust their responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.
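The reweighting account in this abstract can be written schematically as a linear cue combination; this is a hedged reading, not the paper's stated model:

\[
  \hat{S} \;=\; w\,S_{\mathrm{texture}} + (1-w)\,S_{\mathrm{stereo/parallax}}, \qquad 0 \le w \le 1,
\]

with texture-based feedback consistent with an increase in w, while the paradoxical stereo/motion parallax results resist a description in terms of w alone.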
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. © 2005 Elsevier B.V. All rights reserved.
Abstract:
Static movement aftereffects (MAEs) were measured after adaptation to vertical square-wave luminance gratings drifting horizontally within a central window in a surrounding stationary vertical grating. The relationship between the stationary test grating and the surround was manipulated by varying the alignment of the stationary stripes in the window and those in the surround, and the type of outline separating the window and the surround [no outline, black outline (invisible on black stripes), and red outline (visible throughout its length)]. Offsetting the stripes in the window significantly increased both the duration and ratings of the strength of MAEs. Manipulating the outline had no significant effect on either measure of MAE strength. In a second experiment, in which the stationary test fields alone were presented, participants judged how segregated the test field appeared from its surround. In contrast to the MAE measures, outline as well as offset contributed to judged segregation. In a third experiment, in which test-stripe offset was systematically manipulated, segregation ratings rose with offset. However, MAE strength was greater at medium than at either small or large (180 degrees phase shift) offsets. The effects of these manipulations on the MAE are interpreted in terms of a spatial mechanism which integrates motion signals along collinear contours of the test field and surround, and so causes a reduction of motion contrast at the edges of the test field.
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3, 4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
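The Bayesian cue-integration account mentioned at the end can be written in the standard reliability-weighted form; this is a schematic, assuming Gaussian cue likelihoods, rather than the authors' exact model:

\[
  \hat{s} \;=\; \frac{ s_{\mathrm{dm}}/\sigma_{\mathrm{dm}}^{2} \;+\; s_{\mathrm{o}}/\sigma_{\mathrm{o}}^{2} }{ 1/\sigma_{\mathrm{dm}}^{2} \;+\; 1/\sigma_{\mathrm{o}}^{2} },
\]

where s_dm is the size estimate from disparity and motion cues and s_o that from other cues. If σ_dm grows with viewing distance, disparity and motion parallax dominate the combined estimate near the observer and lose efficacy farther away, as the abstract describes.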
Abstract:
An increasing number of neuroscience experiments are using virtual reality to provide a more immersive and less artificial experimental environment. This is particularly useful to navigation and three-dimensional scene perception experiments. Such experiments require accurate real-time tracking of the observer's head in order to render the virtual scene. Here, we present data on the accuracy of a commonly used six degrees of freedom tracker (Intersense IS900) when it is moved in ways typical of virtual reality applications. We compared the reported location of the tracker with its location computed by an optical tracking method. When the tracker was stationary, the root mean square error in spatial accuracy was 0.64 mm. However, we found that errors increased over ten-fold (up to 17 mm) when the tracker moved at speeds common in virtual reality applications. We demonstrate that the errors we report here are predominantly due to inaccuracies of the IS900 system rather than the optical tracking against which it was compared. © 2006 Elsevier B.V. All rights reserved.
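A minimal sketch of the accuracy comparison described above: the root mean square of the Euclidean error between tracker-reported positions and an optical-tracking reference trajectory. The data are synthetic, with noise chosen to echo the 0.64 mm stationary figure; all names are illustrative.

import numpy as np

# RMS position error between a tracker's reported positions and a
# reference trajectory (here synthetic, standing in for optical tracking).

def rms_error(reported: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square of the per-sample Euclidean error (input units)."""
    err = np.linalg.norm(reported - reference, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 1000.0, size=(500, 3))          # mm
reported = reference + rng.normal(0.0, 0.37, size=(500, 3))  # per-axis noise
print(f"RMS error: {rms_error(reported, reference):.2f} mm")  # about 0.64 mm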
Abstract:
The perceived displacement of motion-defined contours in peripheral vision was examined in four experiments. In Experiment 1, in line with Ramachandran and Anstis' finding [Ramachandran, V. S., & Anstis, S. M. (1990). Illusory displacement of equiluminous kinetic edges. Perception, 19, 611-616], the border between a field of drifting dots and a static dot pattern was apparently displaced in the same direction as the movement of the dots. When a uniform dark area was substituted for the static dots, a similar displacement was found, but this was smaller and statistically insignificant. In Experiment 2, the border between two fields of dots moving in opposite directions was displaced in the direction of motion of the dots in the more eccentric field, so that the location of a boundary defined by a diverging pattern is perceived as more eccentric, and that defined by a converging pattern as less eccentric. Two explanations for this effect (that the displacement reflects a greater weight given to the more eccentric motion, or that the region containing stronger centripetal motion components expands perceptually into that containing centrifugal motion) were tested in Experiment 3, by varying the velocity of the more eccentric region. The results favoured the explanation based on the expansion of an area in centripetal motion. Experiment 4 showed that the difference in perceived location was unlikely to be due to differences in the discriminability of contours in diverging and converging patterns, and confirmed that this effect is due to a difference between centripetal and centrifugal motion rather than motion components in other directions. Our result provides new evidence for a bias towards centripetal motion in human vision, and suggests that the direction of motion-induced displacement of edges is not always in the direction of an adjacent moving pattern. © 2008 Elsevier Ltd. All rights reserved.