977 results for Visual discrimination


Relevance: 30.00%

Abstract:

We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to a set of object images and features to be co-clustered. Co-clustering is performed in a way that maximizes discrimination of object images from non-object images, thus emphasizing discriminative features. This provides a way of obtaining perceptual joint clusters of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers that compete for images according to their expertise. Each boosting classifier is an aggregation of weak learners, i.e. simple visual features. The resulting classifiers are useful for object-detection tasks that exhibit multimodality, e.g. multi-category and multi-view object detection. Experiments on a set of pedestrian images and a face data set demonstrate that the method yields intuitive image clusters with associated features and substantially outperforms conventional boosting classifiers in object-detection tasks.
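
A minimal sketch of how such competitive multi-boosting could be organised, assuming decision stumps over precomputed feature responses as the weak learners; the alternating assign-then-boost scheme and all names (fit_stump, co_boost, etc.) are illustrative assumptions, not the authors' implementation:

```python
# Sketch, not the authors' code: K boosted classifiers compete for the object
# images they score highest on, while every classifier must reject the same
# set of non-object images, so the selected stumps emphasise discriminative
# features for "their" image cluster.
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: best (feature, threshold, polarity, weight)."""
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, f, thr, pol)
    err, f, thr, pol = best
    alpha = 0.5 * np.log((1 - err + 1e-12) / (err + 1e-12))
    return f, thr, pol, alpha

def stump_score(stump, X):
    f, thr, pol, alpha = stump
    return alpha * np.where(pol * (X[:, f] - thr) > 0, 1, -1)

def adaboost(X, y, n_rounds=10):
    """Plain discrete AdaBoost over decision stumps."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        stump = fit_stump(X, y, w)
        w *= np.exp(-y * stump_score(stump, X))   # standard AdaBoost reweighting
        w /= w.sum()
        ensemble.append(stump)
    return ensemble

def co_boost(X_obj, X_bg, K=2, n_iters=5, seed=0):
    """Alternate (1) boosting one classifier per image cluster against all
    non-object images with (2) reassigning each object image to the
    classifier that scores it highest."""
    clusters = np.random.default_rng(seed).integers(K, size=len(X_obj))
    for _ in range(n_iters):
        ensembles = []
        for k in range(K):
            Xk = np.vstack([X_obj[clusters == k], X_bg])
            yk = np.r_[np.ones((clusters == k).sum()), -np.ones(len(X_bg))]
            ensembles.append(adaboost(Xk, yk))
        scores = np.stack([sum(stump_score(s, X_obj) for s in ens)
                           for ens in ensembles], axis=1)
        clusters = scores.argmax(axis=1)           # images move to their "expert"
    return clusters, ensembles
```

The returned clusters correspond to the joint image clusters, and the stumps chosen by each ensemble indicate the features associated with each cluster.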

Relevance: 30.00%

Abstract:

While searching for objects, we combine information from multiple visual modalities. Classical theories of visual search assume that features are processed independently prior to an integration stage. Based on this, one would predict that features that are equally discriminable in single-feature search should remain so in conjunction search. We test this hypothesis by examining whether search accuracy in feature search predicts accuracy in conjunction search. Subjects searched for objects combining color and orientation or size; eye movements were recorded. Prior to the main experiment, we matched feature discriminability, ensuring that in feature search 70% of saccades were likely to go to the correct target stimulus. In contrast to this symmetric single-feature discrimination performance, the conjunction search task showed an asymmetry in feature discrimination performance: in conjunction search, a similar percentage of saccades went to the correct color as in feature search, but saccades went much less often to the correct orientation or size. Therefore, accuracy in feature search is a good predictor of accuracy in conjunction search for color, but not for size and orientation. We propose two explanations for the presence of such asymmetries in conjunction search: the use of conjunctively tuned channels and differential crowding effects for different features.

Relevance: 30.00%

Abstract:

In many different spatial discrimination tasks, such as determining the sign of the offset in a vernier stimulus, the human visual system exhibits hyperacuity-level performance by evaluating spatial relations with the precision of a fraction of a photoreceptor's diameter. We propose that this impressive performance depends in part on a fast learning process that uses relatively few examples and occurs at an early processing stage in the visual pathway. We show that this hypothesis is plausible by demonstrating that it is possible to synthesize, from a small number of examples of a given task, a simple (HyperBF) network that attains the required performance level. We then verify with psychophysical experiments some of the key predictions of our conjecture. In particular, we show that fast stimulus-specific learning indeed takes place in the human visual system and that this learning does not transfer between two slightly different hyperacuity tasks.
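
A sketch of the kind of network the abstract describes, under the assumption that a plain Gaussian RBF network fit by regularised least squares is an adequate stand-in for the HyperBF formulation; the toy "vernier" features, sample size, and parameter values are hypothetical, not the paper's stimuli:

```python
# Sketch: fit a small Gaussian RBF network from a handful of labelled examples.
# Centres are the training examples themselves; only the linear output weights
# are learned, by regularised least squares.
import numpy as np

def rbf_design(X, centres, sigma):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, sigma=1.0, lam=1e-3):
    G = rbf_design(X, X, sigma)
    return np.linalg.solve(G.T @ G + lam * np.eye(len(X)), G.T @ y)

def predict_rbf(w, centres, X, sigma=1.0):
    return rbf_design(X, centres, sigma) @ w

# toy task: the second feature carries a small signed "offset"; the network
# must learn the sign of the offset from only 20 labelled examples
rng = np.random.default_rng(1)
offsets = rng.choice([-0.2, 0.2], size=20)
X_train = np.c_[rng.normal(0.0, 1.0, 20), offsets + rng.normal(0.0, 0.05, 20)]
y_train = np.sign(offsets)
w = fit_rbf(X_train, y_train)
print("training accuracy:",
      np.mean(np.sign(predict_rbf(w, X_train, X_train)) == y_train))
```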

Relevance: 30.00%

Abstract:

How does the brain make decisions? The speed and accuracy of perceptual decisions covary with certainty in the input and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between the retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas that estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without appealing to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random-dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
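
A much-reduced sketch of only the choice stage the abstract mentions (a recurrent competitive field with shunting, i.e. self-normalising, dynamics driven by noisy motion evidence); the signal function, noise level, bound, and all parameter values are illustrative assumptions, not the published model:

```python
# Sketch: two choice populations with shunting on-centre/off-surround dynamics
# race to a decision bound under noisy evidence whose mean difference plays
# the role of motion coherence.
import numpy as np

def signal(x):
    # faster-than-linear at low activity, saturating at high activity
    return x ** 2 / (0.01 + x ** 2)

def trial(coherence, dt=1e-3, T=2.0, A=1.0, B=1.0, bound=0.5, noise=1.0, seed=None):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                                    # two choice populations
    for step in range(int(T / dt)):
        drive = np.array([0.6 + coherence, 0.6 - coherence]) + rng.normal(0, noise, 2)
        fx = signal(x)
        # shunting excitation (self) and inhibition (other) keep activity in [0, B]
        dx = -A * x + (B - x) * (drive + fx) - x * fx[::-1]
        x = np.clip(x + dt * dx, 0.0, B)
        if x.max() >= bound:                           # first bound crossing = decision
            return int(x.argmax()), (step + 1) * dt
    return int(x.argmax()), T                          # forced choice if no crossing

results = [trial(0.1, seed=s) for s in range(200)]
choices, rts = np.array(results).T
print("accuracy:", np.mean(choices == 0), "mean RT (s):", rts.mean())
```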

Relevance: 30.00%

Abstract:

Successful interaction with the world depends on accurate perception of the timing of external events. Neurons at early stages of the primate visual system represent time-varying stimuli with high precision. However, it is unknown whether this temporal fidelity is maintained in the prefrontal cortex, where changes in neuronal activity generally correlate with changes in perception. One reason to suspect that it is not maintained is that humans experience surprisingly large fluctuations in the perception of time. To investigate the neuronal correlates of time perception, we recorded from neurons in the prefrontal cortex and midbrain of monkeys performing a temporal-discrimination task. Visual time intervals were presented at a timescale relevant to natural behavior (<500 ms). At this brief timescale, neuronal adaptation (time-dependent changes in the size of successive responses) occurs. We found that visual activity fluctuated with timing judgments in the prefrontal cortex but not in comparable midbrain areas. Surprisingly, only response strength, not timing, predicted task performance. Intervals perceived as longer were associated with larger visual responses and shorter intervals with smaller responses, matching the dynamics of adaptation. These results suggest that the magnitude of prefrontal activity may be read out to provide temporal information that contributes to judging the passage of time.
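
A toy illustration of the adaptation account, assuming exponential recovery of the second response with the elapsed interval; the time constant is hypothetical and not fitted to the reported data:

```python
# Sketch: if the response to the second stimulus recovers exponentially with
# the interval since the first, larger second responses accompany longer
# intervals at this sub-500 ms timescale, so response magnitude carries
# temporal information.
import numpy as np

def second_response(interval_ms, r_max=1.0, tau_ms=150.0):
    """Fraction of the full response recovered after `interval_ms`."""
    return r_max * (1.0 - np.exp(-interval_ms / tau_ms))

for t in (100, 200, 300, 400, 500):
    print(f"{t} ms -> relative response {second_response(t):.2f}")
```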

Relevance: 30.00%

Abstract:

Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.

Relevance: 30.00%

Abstract:

Nociception allows for immediate reflex withdrawal whereas pain allows for longer-term protection via rapid learning. We examine here whether shore crabs placed within a brightly lit chamber learn to avoid one of two dark shelters when that shelter consistently results in shock. Crabs were randomly selected to receive shock or not prior to making their first choice and were tested again over 10 trials. Those that received shock in trial 2, irrespective of shock in trial 1, were more likely to switch shelter choice in the next trial and thus showed rapid discrimination. During trial 1, many crabs emerged from the shock shelter and an increasing proportion emerged in later trials, thus avoiding shock by entering a normally avoided light area. In a final test we switched distinctive visual stimuli positioned above each shelter and/or changed the orientation of the crab when placed in the chamber for the test. The visual stimuli had no effect on choice, but crabs with altered orientation now selected the shock shelter, indicating that they had discriminated between the two shelters on the basis of movement direction. These data, and those of other recent experiments, are consistent with key criteria for pain experience and are broadly similar to those from vertebrate studies.

Relevance: 30.00%

Abstract:

The neuropsychological phenomenon of blindsight has been taken to suggest that the primary visual cortex (V1) plays a unique role in visual awareness, and that extrastriate activation needs to be fed back to V1 in order for the content of that activation to be consciously perceived. The aim of this review is to evaluate this theoretical framework and to revisit its key tenets. Firstly, is blindsight truly a dissociation of awareness and visual detection? Secondly, is there sufficient evidence to rule out the possibility that the loss of awareness resulting from a V1 lesion simply reflects reduced extrastriate responsiveness, rather than a unique role of V1 in conscious experience? Evaluation of these arguments and the empirical evidence leads to the conclusion that the loss of phenomenal awareness in blindsight may not be due to feedback activity in V1 being the hallmark of awareness. On the basis of the existing literature, an alternative explanation of blindsight is proposed. In this view, visual awareness is a "global" cognitive function, as its hallmark is the availability of information to a large number of perceptual and cognitive systems; this requires inter-areal long-range synchronous oscillatory activity. For these oscillations to arise, a specific temporal profile of neuronal activity is required, which is established through recurrent feedback activity involving V1 and the extrastriate cortex. When V1 is lesioned, the loss of recurrent activity prevents inter-areal networks from forming on the basis of oscillatory activity. However, as a limited amount of input can still reach the extrastriate cortex and some extrastriate neuronal selectivity is preserved, computations involving comparison of neural firing rates within a cortical area remain possible. This enables "local" readout from specific brain regions, allowing for the detection and discrimination of basic visual attributes. Thus blindsight is blind due to the lack of "global" long-range synchrony, and it functions via "local" neural readout from extrastriate areas.

Relevance: 30.00%

Abstract:

The cochlear implant is becoming an important resource for countering deafness, yet it has been shown that early or late auditory deprivation affects the development of both the auditory and visual systems. The goal of the studies presented in this thesis is to evaluate the developmental impact of auditory deprivation on the auditory and visual systems. First, a study of development in a hearing population showed that the auditory and visual systems develop at distinct rates and reach their respective maturity at different ages. These findings suggest that the mechanisms underlying the two systems are different and that their respective development is independent. Also, as observed with behavioural and electrophysiological measures, auditory frequency discrimination in cochlear implant users is impaired and correlates with speech-perception performance. These two studies suggest that, following auditory deprivation, auditory processing differs from one hearing-impaired person to another, and that these differences affect low-level processes, as suggested by the disparity in frequency-discrimination performance. The final study shows that auditory deprivation also affects the development of the visual modality, as indicated by reduced visual-discrimination abilities observed in hearing-impaired individuals. This finding supports the hypothesis that normal development of each sense is required for optimal development of the other senses. Overall, the results presented in this thesis suggest that the auditory and visual systems develop separately yet remain interrelated: auditory deprivation affects not only the development of auditory abilities but also that of visual abilities, suggesting an interdependence between the two systems.

Relevance: 30.00%

Abstract:

Human object recognition is generally considered to tolerate changes of the stimulus position in the visual field. A number of recent studies, however, have cast doubt on the completeness of translation invariance. In a new series of experiments we investigated whether positional specificity of short-term memory is a general property of visual perception. We tested same/different discrimination of computer-graphics models of animals that were displayed at the same or at different locations in the visual field, and found complete translation invariance, regardless of the similarity of the animals and irrespective of the direction and size of the displacement (Exp. 1 and 2). Decisions were strongly biased towards "same" responses if stimuli appeared at a constant location, whereas after translation subjects displayed a tendency towards "different" responses. Even if the spatial order of animal limbs was randomized ("scrambled animals"), no deteriorating effect of shifts in the field of view could be detected (Exp. 3). However, if the influence of single features was reduced (Exp. 4 and 5), small but significant effects of translation could be obtained. Under conditions that do not reveal an influence of translation, rotation in depth strongly interferes with recognition (Exp. 6). Changes of stimulus size did not reduce performance (Exp. 7). Tolerance to these object transformations seems to rely on different brain mechanisms, with translation and scale invariance being achieved in principle, while rotation invariance is not.

Relevance: 30.00%

Abstract:

Objective: To establish the correlation between lighting conditions, visual angle, contrast discrimination, and visual acuity and the occurrence of visual symptoms in computer operators. Materials and methods: Cross-sectional, correlational study of a sample of 136 administrative workers at a call center belonging to a health-care organization in Bogotá. A questionnaire was used to assess sociodemographic and occupational variables; the computer-vision symptom scale (CVSS17) was applied, a medical evaluation was performed, and illuminance and operator-to-screen distance were measured. With the collected data, a bivariate statistical analysis was carried out to establish the correlation between lighting conditions, visual angle, contrast discrimination, and visual acuity and the occurrence of visual symptoms associated with computer use. The analysis used measures of central tendency and dispersion and the parametric Pearson or non-parametric Spearman correlation coefficient; normality was first assessed with the Shapiro-Wilk test. Statistical tests were evaluated at a 5% significance level (p<0.05). Results: The mean age of the participants was 36.3 years (range 22 to 57), and 79.4% were women. Visual symptoms associated with computer screen use were found in 59.6% of participants, the most frequent being epiphora (70.6%), photophobia (67.6%), and ocular burning (54.4%). A significant inverse correlation was reported between illuminance levels and reports of photophobia (p=0.02; r=0.262). No significant correlation was found between the reported symptoms and visual angle, visual acuity, or contrast discrimination. Conclusion: The workplace lighting conditions of the study group are related to reports of photophobia. An association was also found between visual symptoms and sociodemographic variables, specifically gender, screen photophobia, visual fatigue, and photophobia.
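
A minimal sketch of the analysis pipeline described above, with hypothetical column names and a hypothetical CSV file: normality is checked with Shapiro-Wilk, and Pearson or Spearman correlation is chosen accordingly at the stated 5% significance level.

```python
# Sketch of the reported statistical approach (column and file names are
# assumptions, not the study's data).
import pandas as pd
from scipy import stats

def correlate(df, x, y, alpha=0.05):
    """Correlate two columns, choosing Pearson if both pass Shapiro-Wilk."""
    d = df[[x, y]].dropna()
    normal = all(stats.shapiro(d[c]).pvalue > alpha for c in (x, y))
    test = stats.pearsonr if normal else stats.spearmanr
    r, p = test(d[x], d[y])
    return ("Pearson" if normal else "Spearman"), float(r), float(p), p < alpha

# hypothetical usage: one row per worker
# df = pd.read_csv("call_center_visual_symptoms.csv")
# print(correlate(df, "illuminance_lux", "photophobia_score"))
```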

Relevance: 30.00%

Abstract:

Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties, either Allo (each constant always appeared in the same maze arm), Ego (each constant always appeared in a fixed direction from the start arm), or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required for problem solution. The fornix-transected and retrohippocampal-lesioned groups learned NC problems at a similar rate to sham-operated controls and showed as much facilitation of learning by added spatial cues as did the controls; therefore, both lesion groups must have encoded the spatial cues and have incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.

Relevance: 30.00%

Abstract:

The visuospatial perceptual abilities of individuals with Williams syndrome (WS) were investigated in two experiments. Experiment 1 measured the ability of participants to discriminate between oblique and between non-oblique orientations. Individuals with WS showed a smaller effect of obliqueness on response time when compared to controls matched for nonverbal mental age. Experiment 2 investigated the possibility that this deviant pattern of orientation discrimination accounts for the poor ability to perform mental rotation in WS (Farran, Jarrold, & Gathercole, 2001). A size-transformation task was employed, which shares the image-transformation requirements of mental rotation but not the orientation-discrimination demands. Individuals with WS performed at the same level as controls. The results suggest a deviance at the perceptual level in WS, in processing orientation, which fractionates from the ability to mentally transform images.

Relevance: 30.00%

Abstract:

Much is known about the functional mechanisms involved in visual search. Yet the fundamental question of whether the visual system can perform different types of visual analysis at different spatial resolutions remains unsettled. In the visual-attention literature, the distinction between different spatial scales of visual processing corresponds to the distinction between distributed and focused attention. Some authors have argued that singleton detection can be performed in distributed attention, whereas others suggest that even such a simple visual operation involves focused attention. Here we showed that microsaccades were spatially biased during singleton discrimination but not during singleton detection. The results provide support for the hypothesis that some coarse visual analysis can be performed in a distributed attention mode.