53 results for Binocular Vision
Abstract:
Although tactile representations of the two body sides are initially segregated into opposite hemispheres of the brain, behavioural interactions between body sides exist and can be revealed under conditions of tactile double simultaneous stimulation (DSS) at the hands. Here we examined to what extent vision can affect body side segregation in touch. To this aim, we changed hand-related visual input while participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or with a concurrent non-target finger either on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across experiments, the two hands were visible or occluded from view (Experiment 1), images of the two hands were either merged using a morphing technique (Experiment 2), or were shown in a compatible vs incompatible position with respect to the actual posture (Experiment 3). Overall, the results showed reliable interference effects of DSS, as compared to target-only stimulation. This interference varied as a function of which non-target finger was stimulated, and emerged both within and between hands. These results imply that the competition between tactile events is not clearly segregated across body sides. Crucially, non-informative vision of the hand affected overall tactile performance only when a visual/proprioceptive conflict was present, while neither congruent nor morphed hand vision affected tactile DSS interference. This suggests that DSS operates at a tactile processing stage in which interactions between body sides can occur regardless of the available visual input from the body.
Abstract:
Food is fundamental to human wellbeing and development. Increased food production remains a cornerstone strategy in the effort to alleviate global food insecurity. Yet although global food production has kept ahead of demand over the past half century, around one billion people today do not have enough to eat, and a further billion lack adequate nutrition. Food insecurity is facing mounting supply-side and demand-side pressures; key among these are climate change, urbanisation, globalisation, population increases and disease, as well as a number of other factors that are changing patterns of food consumption. Many of the challenges to equitable food access are concentrated in developing countries, where environmental pressures including climate change, population growth and other socio-economic issues converge. Together these factors impede people's access to sufficient, nutritious food, chiefly by affecting livelihoods, income and food prices. Food security and human development go hand in hand, and their outcomes are co-determined to a significant degree. The challenge of food security is multi-scalar and cross-sector in nature. Addressing it will require the work of diverse actors to bring sustained improvements in human development and to reduce pressure on the environment. Unless there is investment in future food systems that are similarly cross-level, cross-scale and cross-sector, sustained improvements in human wellbeing together with reduced environmental risks and scarcities will not be achieved. This paper reviews current thinking and outlines these challenges. It suggests that essential elements in a successfully adaptive and proactive food system include: learning through connectivity between scales to local experience and technologies; high levels of interaction between diverse actors and sectors, ranging from primary producers to retailers and consumers; and use of frontier technologies.
Abstract:
Neurocognitive theories of anxiety predict that threat-related information can be evaluated before attentional selection and can influence behaviour differentially in high-anxious compared to low-anxious individuals. We investigate this further by presenting emotional and neutral faces in an adapted binocular rivalry paradigm. We show that the initial selection of emotional faces presented in binocular rivalry is strongly influenced by self-reported state and trait anxiety levels. Heightened anxiety was correlated with increased perception of angry and fearful faces, and decreased perception of happy expressions. These results are consistent with recent evidence of involuntary selection of threat in anxiety.
Abstract:
A wealth of literature suggests that emotional faces are given special status as visual objects: cognitive models propose that emotional stimuli, particularly threat-relevant facial expressions such as fear and anger, are prioritized in visual processing and may be identified by a subcortical “quick and dirty” pathway in the absence of awareness (Tamietto & de Gelder, 2010). Both neuroimaging studies (Williams, Morris, McGlone, Abbott, & Mattingley, 2004) and backward masking studies (Whalen, Rauch, Etcoff, McInerney, & Lee, 1998) have supported the notion of emotion processing without awareness. Recently, our own group (Adams, Gray, Garner, & Graf, 2010) showed adaptation to emotional faces that were rendered invisible using a variant of binocular rivalry: continuous flash suppression (CFS; Tsuchiya & Koch, 2005). Here we (i) respond to Yang, Hong, and Blake's (2010) criticisms of our adaptation paper and (ii) provide a unified account of adaptation to facial expression, identity, and gender under conditions of unawareness.
Abstract:
Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape and whether accurate estimation of 3D shape is at all possible. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus that is known to elicit large distance-dependent perceptual bias (random dot stereograms of elliptical cylinders), we show that, contrary to these findings, the simple adoption of a more naturalistic viewing angle completely eliminates this bias. Using behavioural psychophysics, coupled with a novel surface-based reverse correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception, and that observers actively exploit and sample this information when it is available.
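As background for readers unfamiliar with the technique, reverse correlation in its generic classification-image form perturbs the stimulus with random noise on each trial, records the observer's binary judgement, and subtracts the mean noise for one response from the mean noise for the other; the result highlights which stimulus locations drove the decision. The sketch below illustrates only this generic logic with a simulated observer; it is not the surface-based variant developed in the paper, and the depth profile, noise level and observer weighting are illustrative assumptions.

```python
import numpy as np

# Generic reverse-correlation (classification-image) logic -- NOT the
# surface-based variant of the paper.  Each trial adds random noise to a
# depth profile, a binary judgement is recorded, and the mean noise for one
# response minus the mean noise for the other reveals which surface
# locations influenced the decision.
rng = np.random.default_rng(0)
n_trials, n_samples = 5000, 64

x = np.linspace(-1, 1, n_samples)
base_profile = np.sqrt(np.clip(1 - x**2, 0, None))   # cross-section of a cylinder
noise = rng.normal(scale=0.1, size=(n_trials, n_samples))
profiles = base_profile + noise                       # noise-perturbed surfaces

# Hypothetical simulated observer who weights the edges of the surface most
# heavily when judging "deeper" vs "flatter" (an illustrative assumption
# standing in for real responses).
weights = np.abs(x) ** 2
responses = ((profiles - base_profile) * weights).sum(axis=1) > 0

classification_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
# Large absolute values mark the surface locations whose perturbations drove
# the judgement -- here, near the contour edges, by construction.
print(np.round(classification_image, 2))
```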
Abstract:
Observers generally fail to recover three-dimensional shape accurately from binocular disparity. Typically, depth is overestimated at near distances and underestimated at far distances [Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351–1360]. A simple prediction from this is that disparity-defined objects should appear to expand in depth when moving towards the observer, and compress in depth when moving away. However, additional information is provided when an object moves from which 3D Euclidean shape can be recovered, be this through the addition of structure from motion information [Richards, W. (1985). Structure from stereo and motion. Journal of the Optical Society of America A, 2, 343–349], or the use of non-generic strategies [Todd, J. T., & Norman, J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception and Psychophysics, 65, 31–47]. Here, we investigated shape constancy for objects moving in depth. We found that to be perceived as constant in shape, objects needed to contract in depth when moving toward the observer, and expand in depth when moving away, countering the effects of incorrect distance scaling (Johnston, 1991). This is a striking example of the failure of shape constancy, but one that is predicted if observers neither accurately estimate object distance in order to recover Euclidean shape, nor are able to base their responses on a simpler processing strategy.
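The distance scaling behind this prediction can be made explicit. A small depth interval Δz viewed at distance D with interocular separation I produces a relative disparity of roughly δ ≈ I·Δz/D², so recovering Δz from δ requires scaling by an estimate of D². If that distance estimate is compressed towards an intermediate distance, depth is overestimated at near distances and underestimated at far ones, which is why an approaching object must physically contract in depth to appear constant. The following is a minimal sketch of this idea using a toy distance-compression mapping; the specific numbers are illustrative assumptions, not Johnston's (1991) fitted model.

```python
import numpy as np

# Geometry of horizontal disparity: a depth interval dz at viewing distance D
# produces a relative disparity of roughly  delta = I * dz / D**2, where I is
# the interocular separation.  Recovering dz therefore requires scaling the
# disparity by an estimate of D**2.
I = 0.065   # assumed interocular separation in metres

def disparity_from_depth(dz, D):
    """Relative disparity (radians) produced by a depth interval dz at distance D."""
    return I * dz / D**2

def depth_from_disparity(delta, D_est):
    """Depth interval recovered by scaling disparity with an *estimated* distance."""
    return delta * D_est**2 / I

# Toy (assumed) distance misestimation: perceived distance is compressed
# towards ~1 m, so near distances are over- and far distances under-estimated.
def estimated_distance(D_true):
    return 1.0 + 0.5 * (D_true - 1.0)

true_depth = 0.05   # an object 5 cm deep
for D_true in (0.5, 1.0, 2.0):
    delta = disparity_from_depth(true_depth, D_true)
    perceived = depth_from_disparity(delta, estimated_distance(D_true))
    print(f"viewing distance {D_true:.1f} m -> perceived depth {perceived * 100:.1f} cm")

# Output pattern: ~11.2 cm at 0.5 m, 5.0 cm at 1 m, ~2.8 cm at 2 m --
# overestimation near, underestimation far, so an approaching object appears
# to expand in depth unless it physically contracts.
```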
Abstract:
For many tasks, such as retrieving a previously viewed object, an observer must form a representation of the world at one location and use it at another. A world-based 3D reconstruction of the scene built up from visual information would fulfil this requirement, something computer vision now achieves with great speed and accuracy. However, I argue that it is neither easy nor necessary for the brain to do this. I discuss biologically plausible alternatives, including the possibility of avoiding 3D coordinate frames such as ego-centric and world-based representations. For example, the distance, slant and local shape of surfaces dictate the propensity of visual features to move in the image with respect to one another as the observer’s perspective changes (through movement or binocular viewing). Such propensities can be stored without the need for 3D reference frames. The problem of representing a stable scene in the face of continual head and eye movements is an appropriate starting place for understanding the goal of 3D vision, more so, I argue, than the case of a static binocular observer.
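The "propensity of visual features to move" can be made concrete with the standard parallax relation for a pinhole projection: a small sideways shift of the viewpoint t (a head movement, or the interocular baseline in binocular viewing) displaces the image of a feature at depth z by approximately f·t/z, so near features slide further than far ones, and the relative displacement between two features depends on their depths rather than on any world- or ego-centred coordinate frame. The following is a minimal sketch of that relation; the focal length, baseline and scene depths are illustrative assumptions.

```python
import numpy as np

# Pinhole projection: a point at depth z projects to image coordinate
# u = f * x / z.  Translating the viewpoint sideways by t (a head movement,
# or the interocular baseline for binocular viewing) shifts the image by
# du = f * t / z  -- inversely proportional to depth.
f = 0.017   # assumed focal length in metres (roughly that of the human eye)
t = 0.065   # assumed baseline: interocular separation or a small head shift

depths = np.array([0.5, 1.0, 4.0])   # illustrative feature depths in metres

def image_shift(z, t=t, f=f):
    """Image displacement (metres in the image plane) of a feature at depth z."""
    return f * t / z

shifts = image_shift(depths)
print("per-feature image shifts (mm):", np.round(shifts * 1e3, 3))

# The *relative* shift between any pair of features depends only on their
# depths and the baseline, not on a world- or ego-centred coordinate frame --
# the sense in which such 'propensities' could be stored without an explicit
# 3D reconstruction.
relative = shifts[:, None] - shifts[None, :]
print("pairwise relative shifts (mm):")
print(np.round(relative * 1e3, 3))
```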