857 results for Fixational eye movements
Abstract:
The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?
In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
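The abstract above does not give the model's equations, but the class of model it describes (value construction modulated by attention, with predictions for choices, response times, and their correlation with gaze) is commonly formalized as an attention-weighted evidence-accumulation process. The sketch below is a minimal illustration of that general idea, not the dissertation's actual model; the parameter names and values (d, theta, sigma) and the fixation-switching rule are assumptions chosen for illustration.

```python
import numpy as np

def simulate_attention_trial(v_left, v_right, d=0.002, theta=0.3,
                             sigma=0.02, dt=1.0, max_t=10_000, rng=None):
    """Simulate one attention-weighted evidence-accumulation trial.

    v_left, v_right : subjective values of the two options
    d, theta, sigma : drift scale, attentional discount, noise std
                      (all assumed illustration values)
    Returns (choice, response_time).
    """
    rng = rng or np.random.default_rng()
    rdv = 0.0                        # relative decision value; +1 => choose left
    t = 0.0
    look_left = rng.random() < 0.5   # random initial fixation
    while abs(rdv) < 1.0 and t < max_t:
        # the attended option enters at full weight, the other is discounted
        if look_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        rdv += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if rng.random() < 0.002:     # occasional saccade to the other option
            look_left = not look_left
    return ("left" if rdv > 0 else "right"), t

# Example: the higher-valued left option should be chosen more often
choices = [simulate_attention_trial(3.0, 1.0)[0] for _ in range(200)]
print(choices.count("left") / len(choices))
```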
Abstract:
In the last decade, research efforts into directly interfacing with the neurons of individuals with motor deficits have increased. The goal of such research is clear: enable individuals affected by paralysis or amputation to regain control of their environments by manipulating external devices with thought alone. Though the motor cortices are the usual brain areas upon which neural prosthetics depend, research into the parietal lobe and its subregions, primarily in non-human primates, has uncovered alternative areas that could also benefit neural interfaces. Like the motor cortical areas, parietal regions can supply information about the trajectories of movements. In addition, the parietal lobe contains cognitive signals such as movement goals and intentions. However, these areas are also known to be tuned to saccadic eye movements, which could interfere with the function of a prosthetic designed to capture motor intentions only. In this thesis, we develop a neural prosthetic based on the superior parietal lobe in a non-human primate model, and examine both the effectiveness of such an interface and the effects of unconstrained eye movements in a task that more closely simulates clinical applications. Additionally, we examine methods for improving the usability of such interfaces.
The parietal cortex is also believed to contain neural signals related to monitoring the state of the limbs through visual and somatosensory feedback. In one of the world's first clinical neural prosthetics based on the human parietal lobe, we examine the extent to which feedback about the state of a movement effector alters parietal neural signals, what the implications are for motor neural prosthetics, and how this informs our understanding of this area of the human brain.
Abstract:
The predatory behaviour of Nandus nandus was studied by offering Cyprinus carpio as prey. The study was conducted with six N. nandus (8.2 ± 0.2 cm and 7.60 ± 0.3 g), designated P1 to P6. Three prey (C. carpio) size categories were used over a 14-day trial: small (2.0 ± 0.1 cm and 0.23 ± 0.01 g), large (3.6 ± 0.1 cm and 0.57 ± 0.01 g), and a mixed group containing both small and large prey. Predatory behaviour was classified as targeting, driving, catching, handling, resting, and the next attempt at catching prey. After the introduction of prey into the aquarium, predators followed the movement of the prey with eye movements and targeted the smaller ones first. The predator grasped the head of the prey with its jaws in a single drive and engulfed it whole. The average handling time (the time taken to manipulate and swallow prey, from capture to cessation of pharyngeal movement) was 42 ± 2 s for small prey and 47 ± 2 s for large prey. N. nandus ingested more small prey than large prey, even though both size classes were equally available in the mixed group. Consumption was higher by number when small prey were ingested, but higher by weight when large prey were ingested. The study indicated that N. nandus ingested more small prey and grasped prey head first.
Abstract:
While searching for objects, we combine information from multiple visual modalities. Classical theories of visual search assume that features are processed independently prior to an integration stage. Based on this, one would predict that features that are equally discriminable in single-feature search should remain so in conjunction search. We test this hypothesis by examining whether search accuracy in feature search predicts accuracy in conjunction search. Subjects searched for objects combining color and orientation or size; eye movements were recorded. Prior to the main experiment, we matched feature discriminability, ensuring that in feature search, 70% of saccades were likely to go to the correct target stimulus. In contrast to this symmetric single-feature discrimination performance, the conjunction search task showed an asymmetry in feature discrimination performance: in conjunction search, a similar percentage of saccades went to the correct color as in feature search, but saccades went much less often to the correct orientation or size. Therefore, accuracy in feature search is a good predictor of accuracy in conjunction search for color, but not for size and orientation. We propose two explanations for the presence of such asymmetries in conjunction search: the use of conjunctively tuned channels and differential crowding effects for different features.
Abstract:
A common approach to visualise multidimensional data sets is to map every data dimension to a separate visual feature. It is generally assumed that such visual features can be judged independently of each other. However, we have recently shown that interactions between features do exist [Hannus et al. 2004; van den Berg et al. 2005]. In those studies, we first determined the individual colour and size contrast or colour and orientation contrast necessary to achieve a fixed level of discrimination performance in single-feature search tasks. These contrasts were then used in a conjunction search task in which the target was defined by a combination of a colour and a size, or a colour and an orientation. We found that in conjunction search, despite the matched feature discriminability, subjects significantly more often chose an item with the correct colour than one with the correct size or orientation. This finding may have consequences for visualisation: the saliency of information coded by objects' size or orientation may change when there is a need to simultaneously search for a colour that codes another aspect of the information. In the present experiment, we studied whether a colour bias can also be found in a more complex and continuous task. Subjects had to search for a target in a node-link diagram consisting of 50 nodes, while their eye movements were being tracked. Each node was assigned a random colour and size (from a range of 10 possible values with fixed perceptual distances). We found that when we base the distances on the mean threshold contrasts determined in our previous experiments, the fixated nodes tend to resemble the target colour more than the target size (Figure 1a). This indicates that despite the perceptual matching, colour is judged with greater precision than size during conjunction search. We also found that when we double the size contrast (i.e. the distances between the 10 possible node sizes), this effect disappears (Figure 1b). Our findings confirm that the previously found decrease in salience of other features during colour conjunction search is also present in more complex (more 'visualisation-realistic') visual search tasks. The asymmetry in visual search behaviour can be compensated for by manipulating step sizes (perceptual distances) within feature dimensions. Our results therefore also imply that feature hierarchies are not completely fixed and may be adapted to the requirements of a particular visualisation. Copyright © 2005 by the Association for Computing Machinery, Inc.
Abstract:
Whether facial identity and facial expression are processed independently has long been controversial. Experimental, neuropsychological, functional imaging, and cell-recording studies have all failed to consistently support either independent or interdependent processing. The present study proposes that the familiarity and discriminability of facial identity and expression are important variables mediating the relation between facial identity and facial expression recognition. The effects of familiarity on recognition of facial identity and expression have been examined (e.g. Ganel & Goshen-Gottstein, 2004), but the role of discriminability in recognition of facial identity and expression has not yet been carefully examined. To examine the role of the discriminability of facial identity and expression, 8 experiments were conducted with Garner's speeded classification task on recognition of the identity and expression of unfamiliar faces. The discriminability of facial identity and expression was manipulated, and the measurements of Garner interference and facilitation indicated the following: 1. The discriminability of facial identity and expression mediates the relation between facial identity and expression recognition. Four possible discriminability combinations between identity and expression predicted four interference patterns between them; low discriminability accounted for the interference in either the facial identity judgment task or the facial expression judgment task. 2. Eye-movement measurements indicated that, in both facial identity and facial expression recognition, low discriminability led to a narrowly distributed eye-fixation pattern while high discriminability led to a widely distributed one. 3. By combining the morphing technique with the Garner paradigm, study 2 demonstrated a linear relation between discriminability and Garner facilitation effects, confirming the discriminability effects in the measurements of Garner facilitation. 4. By providing varying information about facial expression, study 2 revealed that such information improved the discriminability of facial expression and thereby enhanced its recognition. All the results indicated that the discriminability of facial identity and expression can mediate the independent or interdependent processing between them, and discriminability effects on the recognition of the identity and expression of unfamiliar faces were identified. The interference and facilitation effects both indicated that the dimensional relation between facial identity and expression is separable but not asymmetric, as claimed by previous studies (Schweinberger et al., 1998, 1999). Completely independent or completely interdependent processing of facial identity and expression are both unlikely; the discriminability of identity and expression mediates the relation between them. The discriminability effects revealed in the present study can explain the conflicts among existing findings.
Abstract:
The present study used Dynamic Causal Modeling (DCM) to reveal the influence of difficult-to-decompose Chinese characters on the effective connectivity of the “where” and “what” visual streams. Chunk decomposition is the process of decomposing familiar items into their components and then making up new items from the decomposed components. Previous eye-movement and brain-imaging studies revealed that chunk decomposition involves visual-spatial information processing, and suggested that the “what” and “where” visual streams contribute to the course of chunk decomposition. However, how they work together to complete the chunk decomposition task is still unknown. The present study had two factors, familiarity and tightness of the spatial structures, each with two levels: real words vs. pseudowords and tight chunks vs. loose chunks. The results indicate that in the loose conditions, familiarity increases the effective connectivity of the “where” stream; in the pseudoword conditions, tightness of the spatial structures increases the effective connectivity of both the “where” and “what” streams; and familiarity and tightness combine to increase not only both the “what” and “where” streams, but also the effective connectivity from the inferior temporal gyrus to the superior parietal lobule.
Abstract:
This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues in human-computer interfaces. Both experiments measured the efficiency of agent cues, analyzing participant responses by gaze or by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, though with slightly smaller differences. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
Abstract:
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
Abstract:
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
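Neither of the two closely related abstracts above states the steering law explicitly, but both describe the same computational idea: the goal direction attracts heading while obstacle directions repel it. The sketch below is a first-order simplification of attractor-repeller steering dynamics in that spirit (similar in form to Fajen and Warren's behavioral steering model), not the neural model itself; all gains and decay constants (k_g, k_o, c1, c2) are assumed illustration values.

```python
import numpy as np

def heading_rate(phi, psi_goal, psi_obstacles, d_obstacles,
                 k_g=3.0, k_o=150.0, c1=6.0, c2=1.0):
    """Rate of change of heading phi (radians).

    The goal direction psi_goal attracts heading; each obstacle
    direction repels it, with influence that decays with angular
    offset and with obstacle distance. All gains are assumed values.
    """
    # attraction grows with the goal-heading error
    dphi = -k_g * (phi - psi_goal)
    for psi_o, d_o in zip(psi_obstacles, d_obstacles):
        err = phi - psi_o
        # repulsion is strongest for nearby obstacles straight ahead
        dphi += k_o * err * np.exp(-c1 * abs(err)) * np.exp(-c2 * d_o)
    return dphi

# Euler-integrate a short walk toward a goal past one obstacle
phi, dt = 0.0, 0.01
for step in range(300):
    phi += dt * heading_rate(phi, psi_goal=0.2,
                             psi_obstacles=[0.05], d_obstacles=[2.0])
print(f"final heading: {phi:.3f} rad")
```

The key design property, which the abstracts describe at the neural level, is that the repulsive terms vanish far from obstacles, so heading settles on the goal direction once the path is clear.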
Abstract:
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to guide search more efficiently in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
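The abstract describes a mechanism of contextual guidance: a rapid scene-gist hypothesis primes likely target locations, which is then combined with bottom-up evidence as the scene is scanned. As a toy illustration only (the actual ARTSCENE Search circuitry is far richer than this), the sketch below combines a learned spatial prior with an object-similarity map to choose the next fixation; the function name, the linear weighting scheme, and w_prior are all assumptions, not parts of the published model.

```python
import numpy as np

def next_fixation(similarity_map, location_prior, w_prior=0.5):
    """Choose the next saccade target by mixing bottom-up object
    similarity with a learned spatial prior over target locations.
    w_prior (assumed) sets how strongly the contextual hypothesis
    biases the priority map."""
    priority = (1.0 - w_prior) * similarity_map + w_prior * location_prior
    return np.unravel_index(np.argmax(priority), priority.shape)

# Toy usage: a 4x4 scene where context says "targets appear bottom-right"
rng = np.random.default_rng(0)
similarity = rng.random((4, 4))               # bottom-up target resemblance
prior = np.zeros((4, 4))
prior[3, 3] = 1.0                             # learned location hypothesis
print(next_fixation(similarity, prior))       # biased toward cell (3, 3)
```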
Abstract:
This paper describes a self-organizing neural network that rapidly learns a body-centered representation of 3-D target positions. This representation remains invariant under head and eye movements, and is a key component of sensory-motor systems for producing motor equivalent reaches to targets (Bullock, Grossberg, and Guenther, 1993).
Abstract:
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
Abstract:
A neural network theory of 3-D vision, called FACADE Theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a Boundary Contour System (BCS) and a Feature Contour System (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded objects are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analysed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the Lateral Geniculate Nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-Depth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive Resonance Theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal cortex (IT) for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular Motion BCS signals interact with the model Where stream. Reciprocal interactions between these visual, What, and Where mechanisms are used to discuss data about visual search and saccadic eye movements, including fast search of conjunctive targets, search of 3-D surfaces, selective search of like-colored targets, attentive tracking of multi-element groupings, and recursive search of simultaneously presented targets.