321 results for Visual Perception

at Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

The use of Performance Capture techniques in the creation of games involving Motion Capture is a relatively new phenomenon. To date there is no prescribed methodology that prepares actors for the rigors of this new industry, and as such there are many questions to be answered about how actors navigate these environments successfully when all available training and theoretical material is focused on performance for theatre and film. This article proposes that by deploying an Ecological Approach to Visual Perception we may begin to chart this territory for actors and to contend with the demands of performing in the motion-captured gaming scenario.

Relevance: 100.00%

Abstract:

Scene understanding has been investigated mainly from a visual-information point of view. Recently, depth sensing has provided an extra wealth of information, allowing more geometric knowledge to be fused into scene understanding. Yet to form a holistic view, especially in robotic applications, one can create even more data by interacting with the world. In fact, humans, while growing up, seem to investigate the world around them heavily through haptic exploration. We show an application of haptic exploration on a humanoid robot in cooperation with a learning method for object segmentation. Each successive action improves the segmentation of objects in the scene.
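
As a rough illustration of the interaction loop described in this abstract, the sketch below merges segments that move coherently when the robot pokes the scene. This is a minimal sketch under assumed representations (a label image plus a pre/post-action optical-flow field), not the authors' implementation; the function name and thresholds are hypothetical.

    import numpy as np

    def refine_by_motion(labels, flow, same_thresh=1.0, min_motion=0.5):
        # labels: (H, W) integer array, current segmentation hypothesis
        # flow:   (H, W, 2) per-pixel displacement between the frames
        #         captured before and after the robot's action
        ids = list(np.unique(labels))
        mean_flow = {i: flow[labels == i].mean(axis=0) for i in ids}
        merged = labels.copy()
        for a in ids:
            for b in ids:
                both_moved = (np.linalg.norm(mean_flow[a]) > min_motion
                              and np.linalg.norm(mean_flow[b]) > min_motion)
                if a < b and both_moved and \
                   np.linalg.norm(mean_flow[a] - mean_flow[b]) < same_thresh:
                    merged[merged == b] = a  # coherent motion -> one object
        return merged

The intuition is the one the abstract relies on: regions that move rigidly together after an interaction are likely parts of the same object, so each poke supplies evidence the purely visual segmentation lacked.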

Relevance: 80.00%

Abstract:

PURPOSE: To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. METHODS: Thirty-six visually normal participants (aged 19-80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity, and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus, and sensitivity for displacement in a random-dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured response times to hazards embedded in video recordings of real-world driving and has been shown to be linked to crash risk. RESULTS: Dmin for the random-dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship between the HPT and motion sensitivity for the random-dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. CONCLUSION: These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards, independent of other areas of visual function, and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception in order to develop better interventions to improve road safety.
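
The key statistical claim, that Gabor motion sensitivity predicts hazard response times even after controlling for standard visual function measures, corresponds to a hierarchical regression of roughly the following form. This is an illustrative sketch only; the data file and column names are hypothetical, not from the study.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("hazard_perception.csv")   # hypothetical dataset

    # Step 1: low-level visual function measures only
    base = smf.ols("hpt_rt ~ acuity + contrast_sens + visual_field",
                   data=df).fit()
    # Step 2: add motion sensitivity for the drifting Gabor stimulus
    full = smf.ols("hpt_rt ~ acuity + contrast_sens + visual_field"
                   " + gabor_drift_thresh", data=df).fit()

    # A significant Gabor coefficient after controlling for the other
    # measures mirrors the pattern of results reported above.
    print(full.pvalues["gabor_drift_thresh"], full.rsquared - base.rsquared)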

Relevance: 80.00%

Abstract:

The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball-catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advance perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from the ball trajectory. Results are interpreted relative to the two-visual-system hypothesis, demonstrating that accurate interception requires integration of advance visual information from the kinematics of the throwing action and from the ball flight trajectory.

Relevance: 70.00%

Abstract:

In this paper we present a tutorial introduction to two important senses for biological and robotic systems: inertial and visual perception. We discuss the fundamentals of these two sensing modalities from a biological and an engineering perspective. Digital camera chips and micro-machined accelerometers and gyroscopes are now commodities, and when combined with today's available computing they can provide robust estimates of self-motion as well as 3D scene structure, without external infrastructure. We discuss the complementarity of these sensors, describe some fundamental approaches to fusing their outputs, and survey the field.
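
One of the simplest schemes in this space is a complementary filter, which blends a smooth but drifting gyro integration with a noisy but drift-free visual orientation estimate. The single-axis sketch below is a generic example under assumed parameter values, not a method taken from the survey itself.

    def complementary_filter(theta, gyro_rate, vision_theta, dt, alpha=0.98):
        # Short term: integrate the gyro (accurate but drifts over time).
        predicted = theta + gyro_rate * dt
        # Long term: pull gently toward the drift-free visual estimate.
        return alpha * predicted + (1.0 - alpha) * vision_theta

    # Example: 100 Hz gyro updates with a slowly varying visual correction
    theta = 0.0
    for gyro_rate, vision_theta in [(0.10, 0.000), (0.10, 0.001), (0.10, 0.002)]:
        theta = complementary_filter(theta, gyro_rate, vision_theta, dt=0.01)

The blend factor alpha expresses exactly the complementarity the abstract describes: trust the inertial sensor at high frequency and the visual estimate at low frequency.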

Relevance: 70.00%

Abstract:

This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each laser scan-camera image pair acquired, the information at corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
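
The projection step at the heart of this check, mapping laser points into the image via the estimated extrinsic transform, might look roughly like the following. This is a hedged sketch assuming a pinhole camera model and homogeneous coordinates; variable names are illustrative, and the paper's propagation of calibration uncertainty through this projection is not shown.

    import numpy as np

    def project_laser_to_image(pts_laser, T_cam_laser, K):
        # pts_laser:   (N, 3) points in the laser frame (a 2D scan has z = 0)
        # T_cam_laser: (4, 4) homogeneous laser-to-camera transform (extrinsics)
        # K:           (3, 3) pinhole camera intrinsic matrix
        pts_h = np.hstack([pts_laser, np.ones((len(pts_laser), 1))])
        pts_cam = (T_cam_laser @ pts_h.T)[:3]   # into the camera frame
        uv = K @ pts_cam                        # perspective projection
        return (uv[:2] / uv[2]).T               # (N, 2) pixel coordinates

Because T_cam_laser is itself an estimate, errors in it shift the projected points; the paper's contribution is to quantify that shift and use it when scoring the agreement between scan corners and image content.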

Relevance: 70.00%

Abstract:

By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal (visual-tactile) integration and thus indicate a functional homology between human and monkey perirhinal cortices.

Relevance: 70.00%

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified three distinct classes of visuoauditory incongruency effects, selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
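
The predictive-coding interpretation invoked here reduces, at its simplest, to one update rule: a higher-level prediction is revised in proportion to the precision-weighted prediction error arriving from below. The toy sketch below illustrates only that general idea; it is not a model fitted in the study, and the parameters are arbitrary.

    def predictive_coding_step(prediction, sensory_input, precision=1.0, lr=0.1):
        error = sensory_input - prediction           # bottom-up prediction error
        return prediction + lr * precision * error   # revise the prediction

    belief = 0.0
    for observation in [1.0, 1.0, 1.0]:   # congruent input: error shrinks
        belief = predictive_coding_step(belief, observation)
    # With incongruent visual priors the error term stays large, which is one
    # way to read the greater bottom-up effects reported for incongruent trials.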

Relevance: 60.00%

Abstract:

My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, then it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when you contrast the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating 'this' or 'that'. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as 'plate' or 'table', but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as 'dog' or 'ball'. This view leaves open the question of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').

Relevance: 60.00%

Abstract:

This collaborative project by Daniel Mafe and Andrew Brown, one of a number they have been involved in together, conjoins painting and digital sound into a single, large-scale, immersive exhibition/installation. The work as a whole acts as an interstitial point between contrasting approaches to abstraction: the visual and the aural, the digital and the analogue are pushed into an alliance, and each works to alter perceptions of the other. For example, the paintings no longer mutely sit on the wall to be stared into. The sound seemingly emanating from each work shifts the viewer's typical visual perception and engages their aural sensibilities. This seems to make one more aware of the objects as objects, bringing the surface of each piece into scrutiny, and immerses the viewer more viscerally within the exhibition. Similarly, the sonic experience is focused and concentrated spatially by each painted piece even as the exhibition is dispersed throughout the space. The sounds and images are similar at each location but not identical; though they may seem to be the same from casual interaction, closer attention will quickly show this is not the case. In preparing this exhibition each artist has had to shift their mode of making to accommodate the other's contribution. This was mainly done by a process of emptying, whereby each was called upon to do less to the works they were making and to iterate the works toward a shared conception, blurring notions of individual imagination while maintaining material authorship. Emptying was necessary to enable sufficient porosity, where each medium allowed the other entry to its previously gated domain. The paintings are simple and subtle to allow the odd sonic textures a chance to work on the viewer's engagement with them. The sound remains both abstract, using noise-like textures, and at a low volume, allowing the audience's attention to wander back and forth between aspects of the works.

Relevance: 60.00%

Abstract:

Evidence currently supports the view that intentional interpersonal coordination (IIC) is a self-organizing phenomenon facilitated by visual perception of co-actors in a coordinative coupling (Schmidt, Richardson, Arsenault, & Galantucci, 2007). The present study examines how apparent IIC is achieved in situations where visual information is limited for co-actors in a rowing boat. In paired rowing boats, only one of the actors [bow seat] can see the actions of the other [stroke seat]. Thus IIC appears to be facilitated despite the lack of important visual information for the control of the dyad. Adopting a mimetic approach to expert coordination, the present study qualitatively examined the experiences of expert performers (N=9) and coaches (N=4) with respect to how IIC was achieved in paired rowing boats. Themes were explored using inductive content analysis, which led to a layered model of control. Rowers and coaches reported the use of multiple perceptual sources in order to achieve IIC. As expected (Kelso, 1995; Schmidt & O'Brien, 1997; Turvey, 1990), rowers in the bow of a pair boat make use of visual information provided by the partner in front of them [stroke]. However, this perceptual information is subordinate to perception of the relationship between the boat hull and the water passing beside it. The stroke seat, in the absence of visual information about his/her partner, achieves coordination by picking up information about the lifting or looming of the boat's stern along with water passage past the hull. In this case it appears that apparent or desired IIC is supported by the perception of extra-personal variables, in this case boat behavior, as this perceptual information source is used by both actors. To conclude, co-actors in two-person rowing boats use multiple sources of perceptual information for apparent IIC that change according to task constraints. Where visual information is restricted, IIC is facilitated via extra-personal perceptual information, and apparent IIC switches to intentional extra-personal coordination.