908 results for Cilia and ciliary motion
Abstract:
In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the “correct” size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. With feedback, observers were able to adjust their responses so that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size, or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3, 4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
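The Bayesian cue-integration account mentioned above is conventionally modeled as reliability-weighted averaging: each cue contributes in proportion to its inverse variance, so the more reliable cue dominates the percept. A minimal sketch of that standard model (illustrative numbers, not the authors' fitted parameters):

```python
# Illustrative reliability-weighted (Bayesian) cue combination; a minimal
# sketch of the standard model, not the authors' fitted model.

def combine_cues(estimates, variances):
    """Inverse-variance weighted combination of independent cue estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total  # combined estimate and its variance

# Stereo/motion cues (estimate 1.0) vs. texture cues (estimate 2.0):
# at near distances the stereo/motion variance is low, pulling the
# combined estimate toward 1.0; at far distances texture dominates.
near = combine_cues([1.0, 2.0], [0.1, 1.0])
far = combine_cues([1.0, 2.0], [1.0, 0.1])
```

On this scheme, the greater efficacy of motion and disparity cues at near viewing distances falls out of their lower variance there, with no change to the combination rule itself.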
Abstract:
This paper presents a study investigating how the performance of motion-impaired computer users in point and click tasks varies with target distance (A), target width (W), and force-feedback gravity well width (GWW). Six motion-impaired users performed point and click tasks across a range of values for A, W, and GWW. Times were observed to increase with A, and to decrease with W. Times also improved with GWW, and, with the addition of a gravity well, a greater improvement was observed for smaller targets than for larger ones. It was found that Fitts' Law gave a good description of behaviour for each value of GWW, and that gravity wells reduced the effect of task difficulty on performance. A model based on Fitts' Law is proposed, which incorporates the effect of GWW on movement time. The model accounts for 88.8% of the variance in the observed data.
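The Fitts' Law relationship underlying the model can be sketched as follows; the coefficients here are illustrative placeholders, not the study's fitted values, and the paper's GWW-dependent extension is not reproduced:

```python
import math

# Standard Fitts' Law (Shannon formulation): MT = a + b * log2(A/W + 1).
# The intercept a and slope b below are illustrative, not fitted values.

def index_of_difficulty(A, W):
    """Index of difficulty (bits) for target distance A and width W."""
    return math.log2(A / W + 1)

def movement_time(A, W, a=0.1, b=0.2):
    """Predicted movement time for illustrative coefficients a and b."""
    return a + b * index_of_difficulty(A, W)

# Wider targets lower the index of difficulty and hence the predicted
# time, matching the observed decrease in movement times with W.
```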
Abstract:
This paper presents a novel two-pass algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for block-based motion compensation. Building on previous algorithms, in particular the hexagonal search (HEXBS) motion estimation algorithm, we propose the LHMEA and the Two-Pass Algorithm (TPA), introducing hashtables into video compression. We employ the LHMEA for the first-pass search over all Macroblocks (MB) in the picture. Motion Vectors (MV) generated in the first pass are then used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of MBs. The evaluation of the algorithm considers three important metrics: time, compression rate, and PSNR. The performance of the algorithm is evaluated on standard video sequences, and the results are compared to current algorithms. Experimental results show that the proposed algorithm offers the same compression rate as the Full Search. The LHMEA with TPA improves significantly on HEXBS and suggests a direction for improving other fast motion estimation algorithms, for example Diamond Search.
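The core of any block-matching pass is minimising a sum-of-absolute-differences (SAD) cost; the predictor-guided second pass can then search only a small window around the first-pass motion vector. A minimal illustrative sketch (not the paper's LHMEA/HEXBS implementation; block size and search radius are arbitrary):

```python
# Illustrative block-matching sketch: the best motion vector for a block
# minimises the SAD against the reference frame. A predictor MV from a
# first pass restricts the second-pass search to a small window.

def sad(cur, ref, bx, by, dx, dy, bs=4):
    """Sum of absolute differences for the bs x bs block at (bx, by)
    in `cur`, displaced by (dx, dy) in the reference frame `ref`."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def refine_mv(cur, ref, bx, by, pred, radius=1, bs=4):
    """Second-pass refinement: exhaustively search a small window of
    candidate displacements around the predictor motion vector `pred`."""
    best = None
    for dy in range(pred[1] - radius, pred[1] + radius + 1):
        for dx in range(pred[0] - radius, pred[0] + radius + 1):
            cost = sad(cur, ref, bx, by, dx, dy, bs)
            if best is None or cost < best[0]:
                best = (cost, (dx, dy))
    return best[1]
```

A full encoder would replace the exhaustive inner window with the hexagonal search pattern and obtain the predictor from the hashtable-based first pass, but the cost function being minimised is the same.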
Abstract:
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
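A signal detection analysis of this kind conventionally separates perceptual sensitivity (d') from the decision criterion, both computed from hit and false-alarm rates. A minimal sketch of those standard formulas (not the paper's exact analysis code):

```python
from statistics import NormalDist

# Standard signal detection measures: sensitivity d' = z(H) - z(F) and
# criterion c = -0.5 * (z(H) + z(F)), where z is the inverse normal CDF,
# H the hit rate, and F the false-alarm rate.

def d_prime(hit_rate, fa_rate):
    """Perceptual sensitivity, independent of response bias."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Decision criterion c; 0 means unbiased responding."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))
```

In these terms, the reported result is that incongruent verbs lowered d' while comprehension also shifted c, dissociating a genuinely perceptual effect from a change in response bias.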
Abstract:
We test Slobin's (2003) Thinking-for-Speaking hypothesis on data from different groups of Turkish-German bilinguals, those living in Germany and those who have returned to Turkey.
Abstract:
This work presents two schemes for measuring the linear and angular kinematics of a rigid body using a kinematically redundant array of triple-axis accelerometers, with potential applications in biomechanics. A novel angular velocity estimation algorithm is proposed and evaluated that can compensate for angular velocity errors using measurements of the direction of gravity. Analysis and discussion of optimal sensor array characteristics are provided. A damped 2-axis pendulum was used to excite all 6 DoF of a suspended accelerometer array through determined complex motion, and is the basis of both simulation and experimental studies. The relationship between accuracy and sensor redundancy is investigated for arrays of up to 100 triple-axis accelerometers (300 accelerometer axes) in simulation and 10 equivalent sensors (30 accelerometer axes) in the laboratory test rig. The paper also reports on the sensor calibration techniques and hardware implementation.
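The measurement model behind such accelerometer arrays is the textbook rigid-body relation: a sensor at position r on a body with linear acceleration a0, angular velocity w, and angular acceleration alpha measures a(r) = a0 + alpha × r + w × (w × r). A sketch of that forward model (illustrative only; not the paper's redundant-array estimation algorithm):

```python
# Textbook rigid-body kinematics forward model for a body-mounted
# triple-axis accelerometer. With several sensors at known positions r,
# these equations can be stacked and solved for the unknown motion
# terms by least squares, which is where redundancy improves accuracy.

def cross(u, v):
    """3-D cross product of two vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sensor_acceleration(a0, w, alpha, r):
    """Acceleration measured by a sensor at body-frame position r."""
    tang = cross(alpha, r)            # tangential term, alpha x r
    cent = cross(w, cross(w, r))      # centripetal term, w x (w x r)
    return tuple(a0[i] + tang[i] + cent[i] for i in range(3))

# Pure spin at 2 rad/s about z: a sensor 0.5 m out along x measures only
# the centripetal acceleration, -|w|^2 * r.
```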
Abstract:
Jean-François Lyotard's 1973 essay ‘Acinema’ is explicitly concerned with the cinematic medium, but has received scant critical attention. Lyotard's acinema conceives of an experimental, excessive form of film-making that uses stillness and movement to shift away from the orderly process of meaning-making within mainstream cinema. What motivates this present paper is a striking link between Lyotard's writing and contemporary Hollywood production; both are concerned with a sense of excess, especially within moments of motion. Using Charlie's Angels (McG, 2000) as a case study – a film that has been critically dismissed as ‘eye candy for the blind’ – my methodology brings together two different discourses, high culture theory and mainstream film-making, to test out and propose the value of Lyotard's ideas for the study of contemporary film. Combining close textual analysis and engagement with key scholarship on film spectacle, I reflexively engage with the process of film analysis and re-direct attention to a neglected essay by a major theorist, in order to stimulate further engagement with his work.
Abstract:
In this article, we explore whether cross-linguistic differences in grammatical aspect encoding may give rise to differences in memory and cognition. We compared native speakers of two languages that encode aspect differently (English and Swedish) in four tasks that examined verbal descriptions of stimuli, online triads matching, and memory-based triads matching with and without verbal interference. Results showed between-group differences in verbal descriptions and in memory-based triads matching. However, no differences were found in online triads matching and in memory-based triads matching with verbal interference. These findings need to be interpreted in the context of the overall pattern of performance, which indicated that both groups based their similarity judgments on common perceptual characteristics of motion events. These results show for the first time a cross-linguistic difference in memory as a function of differences in grammatical aspect encoding, but they also contribute to the emerging view that language fine-tunes rather than shapes perceptual processes that are likely to be universal and unchanging.
Abstract:
Research on the relationship between grammatical aspect and motion event construal has posited that speakers of non-aspect languages are more prone to encoding event endpoints than are speakers of aspect languages (e.g., von Stutterheim and Carroll 2011). In the present study, we test this hypothesis by extending this line of inquiry to Afrikaans, a non-aspect language which is previously unexplored in this regard. Motion endpoint behavior among Afrikaans speakers was measured by means of a linguistic retelling task and a non-linguistic similarity judgment task, and then compared with the behavior of speakers of a non-aspect language (Swedish) and speakers of an aspect language (English). Results showed the Afrikaans speakers' endpoint patterns aligned with Swedish patterns, but were significantly different from English patterns. It was also found that the variation among the Afrikaans speakers could be partially explained by taking into account their frequency of use of English, such that those who used English more frequently exhibited an endpoint behavior that was more similar to English speakers. The current study thus lends further support to the hypothesis that speakers of different languages attend differently to event endpoints as a function of the grammatical category of aspect.
Abstract:
Using data from the EISCAT (European Incoherent Scatter) VHF and CUTLASS (Co-operative UK Twin-Located Auroral Sounding System) HF radars, we study the formation of ionospheric polar cap patches and their relationship to the magnetopause reconnection pulses identified in the companion paper by Lockwood et al. (2005). It is shown that the poleward-moving, high-concentration plasma patches observed in the ionosphere by EISCAT on 23 November 1999, as reported by Davies et al. (2002), were often associated with corresponding reconnection rate pulses. However, not all such pulses generated a patch and only within a limited MLT range (11:00–12:00 MLT) did a patch result from a reconnection pulse. Three proposed mechanisms for the production of patches, and of the concentration minima that separate them, are analysed and evaluated: (1) concentration enhancement within the patches by cusp/cleft precipitation; (2) plasma depletion in the minima between the patches by fast plasma flows; and (3) intermittent injection of photoionisation-enhanced plasma into the polar cap. We devise a test to distinguish between the effects of these mechanisms. Some of the events repeat too frequently to apply the test. Others have sufficiently long repeat periods and mechanism (3) is shown to be the only explanation of three of the longer-lived patches seen on this day. However, effect (2) also appears to contribute to some events. We conclude that plasma concentration gradients on the edges of the larger patches arise mainly from local time variations in the subauroral plasma, via the mechanism proposed by Lockwood et al. (2000).
Abstract:
Virtual Reality (VR) can provide visual stimuli for EEG studies that can be altered in real time and can produce effects that are difficult or impossible to reproduce on a non-virtual experimental platform. As part of this experiment, the Oculus Rift, a commercial-grade, low-cost Head-Mounted Display (HMD), was assessed as a visual stimulus platform for experiments recording EEG. Subsequently, the device was used to investigate the effect of congruent visual stimuli on Event-Related Desynchronisation (ERD) due to motor imagery.
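ERD is conventionally quantified as the percentage drop in band power during an event relative to a baseline interval (Pfurtscheller's band-power method). A minimal sketch of that standard measure, not the study's processing pipeline:

```python
# Classic ERD measure: the percentage change in band power during the
# event period relative to a pre-event baseline. Positive values mean
# desynchronisation (power drop); negative values mean event-related
# synchronisation (ERS).

def erd_percent(baseline_power, event_power):
    """ERD% = (baseline - event) / baseline * 100."""
    return (baseline_power - event_power) / baseline_power * 100.0

# A mu-band power drop from 10 to 6 (arbitrary units) during imagery
# corresponds to 40% ERD; congruent visual stimulation in VR would be
# expected to modulate this value.
```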
Abstract:
Studies show cross-linguistic differences in motion event encoding, such that English speakers preferentially encode manner of motion more than Spanish speakers, who preferentially encode path of motion. Focusing on native Spanish speaking children (aged 5;00-9;00) learning L2 English, we studied path and manner verb preferences during descriptions of motion stimuli, and tested the linguistic relativity hypothesis by investigating categorization preferences in a non-verbal similarity judgement task of motion clip triads. Results revealed L2 influence on L1 motion event encoding, such that bilinguals used more manner verbs and fewer path verbs in their L1, under the influence of English. We found no effects of linguistic structure on non-verbal similarity judgements, and demonstrate for the first time effects of L2 on L1 lexicalization in child L2 learners in the domain of motion events. This pattern of verbal behaviour supports theories of bilingual semantic representation that postulate a merged lexico-semantic system in early bilinguals.