940 results for experimental visual perception
Abstract:
Temporal-order judgment (TOJ) and simultaneity judgment (SJ) tasks are used to study differences in speed of processing across sensory modalities, stimulus types, or experimental conditions. Matthews and Welch (2015) reported that observed performance in SJ and TOJ tasks is superior when visual stimuli are presented in the left visual field (LVF) compared to the right visual field (RVF), revealing an LVF advantage presumably reflecting attentional influences. Because observed performance reflects the interplay of perceptual and decisional processes involved in carrying out the tasks, analyses that separate out these influences are needed to determine the origin of the LVF advantage. We re-analyzed the data of Matthews and Welch (2015) using a model of performance in SJ and TOJ tasks that separates out these influences. Parameter estimates capturing the operation of perceptual processes did not differ between hemifields by these analyses, whereas parameter estimates capturing the operation of decisional processes differed. In line with other evidence, perceptual processing also did not differ between SJ and TOJ tasks. Thus, the LVF advantage occurs with identical speeds of processing in both visual hemifields. If attention is responsible for the LVF advantage, it does not exert its influence via prior entry.
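The following toy simulation (a minimal sketch, not the model fitted in the re-analysis; all parameter names and values here are hypothetical) illustrates the core point of the abstract: if the two hemifields share an identical latency distribution, a shift in the decision criterion alone is enough to shift observed TOJ performance, so an LVF advantage can arise without any difference in processing speed.

import numpy as np

rng = np.random.default_rng(0)

def prob_probe_first(soa_ms, criterion_ms, latency_sd_ms=40.0, n=20000):
    # Latency difference between the two stimuli when both locations share the
    # same latency distribution (i.e., equal speed of perceptual processing).
    latency_diff = rng.normal(0.0, latency_sd_ms * np.sqrt(2), n)
    arrival_diff = soa_ms + latency_diff
    # "Probe first" is reported only when the arrival difference exceeds the
    # observer's decision criterion.
    return np.mean(arrival_diff > criterion_ms)

soas = np.arange(-100, 101, 5)
for label, criterion in [("hemifield A criterion", -15.0), ("hemifield B criterion", +15.0)]:
    probs = np.array([prob_probe_first(s, criterion) for s in soas])
    pss = soas[np.argmin(np.abs(probs - 0.5))]  # SOA at which responses are at chance
    print(f"{label}: estimated PSS near {pss} ms despite identical latencies")

Shifting only criterion_ms moves the apparent point of subjective simultaneity, mimicking a hemifield difference that is decisional rather than perceptual in origin.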
Abstract:
Acknowledgements We thank Brian Roberts and Mike Harris for responding to our questions regarding their paper; Zoltan Dienes for advice on Bayes factors; Denise Fischer, Melanie Römer, Ioana Stanciu, Aleksandra Romanczuk, Stefano Uccelli, Nuria Martos Sánchez, and Rosa María Beño Ruiz de la Sierra for help collecting data; Eva Viviani for managing data collection in Parma. We thank Maurizio Gentilucci for letting us use his lab, and the Centro Intradipartimentale Mente e Cervello (CIMeC), University of Trento, and especially Francesco Pavani for lending us his motion tracking equipment. We thank Rachel Foster for proofreading. KKK was supported by a Ph.D. scholarship as part of a grant to VHF within the International Graduate Research Training Group on Cross-Modal Interaction in Natural and Artificial Cognitive Systems (CINACS; DFG IKG-1247) and TS by a grant (DFG – SCHE 735/3-1); both from the German Research Council.
Abstract:
Acknowledgements Anna Nowakowska is supported by an ESRC doctoral studentship. A James S McDonnell scholar award to Amelia R. Hunt also provided financial support. We are grateful to Edvinas Pilipavicius and Juraj Sikra for data collection. We also wish to thank W. Joseph MacInnes for help with programming the experiment and Paul Hibbard for help with filtering the faces.
Abstract:
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
Abstract:
Corticobasal degeneration is a rare, progressive neurodegenerative disease and a member of the 'parkinsonian' group of disorders, which also includes Parkinson's disease, progressive supranuclear palsy, dementia with Lewy bodies and multiple system atrophy. The most common initial symptom is limb clumsiness, usually affecting one side of the body, with or without accompanying rigidity or tremor. Subsequently, the disease affects gait and slowly progresses to involve the ipsilateral arms and legs. Apraxia and dementia are the most common cortical signs. Corticobasal degeneration can be difficult to distinguish from other parkinsonian syndromes, but if ocular signs and symptoms are present, they may aid clinical diagnosis. Typical ocular features include increased latency of saccadic eye movements ipsilateral to the side exhibiting apraxia, impaired smooth pursuit movements and visuo-spatial dysfunction, especially involving spatial rather than object-based tasks. Less typical features include reduction in saccadic velocity, vertical gaze palsy, visual hallucinations, sleep disturbance and an impaired electroretinogram. Aspects of primary vision such as visual acuity and colour vision are usually unaffected. Management of the condition to address problems with walking, movement, daily tasks and speech is an important aspect of care.
Abstract:
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF's local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
Abstract:
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
Abstract:
Preterm infants are exposed to high levels of modified early sensory experience in the Neonatal Intensive Care Unit (NICU). Reports that preterm infants show deficits in contingency detection and learning when compared to full-term infants (Gekoski, Fagen, & Pearlman, 1984; Haley, Weinberg, & Grunau, 2006) suggest that their exposure to atypical amounts or types of sensory stimulation might contribute to deficits in these critical skills. Experimental modifications of sensory experience are severely limited with human fetuses and preterm infants, and previous studies with precocial bird embryos that develop in ovo have proven useful for assessing the effects of modified perinatal sensory experience on subsequent perceptual and cognitive development. In the current study, I assessed whether increased amounts of prenatal auditory or visual stimulation can interfere with quail neonates' contingency detection and contingency learning in the days following hatching. Results revealed that augmented prenatal visual stimulation prior to hatching did not disrupt the ability of bobwhite chicks to recognize and prefer information learned in a contingent fashion, whereas augmented prenatal auditory stimulation disrupted the ability of chicks to benefit from contingently presented information. These results suggest that specific types of augmented stimulation received by embryos during the late prenatal period can impair the ability to learn and remember contingently presented information. They also provide testable developmental hypotheses, with the goal of improving the developmental care and management of preterm neonates in the NICU setting.
Abstract:
Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
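As a rough illustration of the kind of model described in the first study, the sketch below (assumed layer sizes and training setup; not the network from the thesis) trains a small feedforward network to rotate a 2-D retinal velocity signal by the current head-roll angle, i.e., to perform a simple reference frame transformation. In networks of this kind, hidden units typically develop posture-dependent "gain field" responses, which is the mechanism discussed above.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_batch(n=256):
    retinal = torch.randn(n, 2)                        # retinal velocity (arbitrary units)
    roll = (torch.rand(n, 1) - 0.5) * math.pi / 2      # head roll in [-45 deg, +45 deg]
    c, s = torch.cos(roll), torch.sin(roll)
    # Target: the retinal velocity rotated by the head-roll angle (space-centred).
    spatial = torch.cat([c * retinal[:, :1] - s * retinal[:, 1:],
                         s * retinal[:, :1] + c * retinal[:, 1:]], dim=1)
    return torch.cat([retinal, roll], dim=1), spatial

model = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    x, y = make_batch()
    loss = F.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final MSE after training: {loss.item():.4f}")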
Abstract:
In this study, three chronicles from national newspapers (one general-interest and two sports papers) were analyzed. The chronicles covered Spain's soccer final of the King's Cup in 2014. The aim of the study was to determine whether the chronicles influenced readers' perception of justice and, consequently, whether this influence could create a particular predisposition to participate in acts of protest. A total of 462 university students participated. The results showed that the perception of justice differed depending on the chronicle read. However, no clear influence on willingness to participate in acts of protest was found. These results should prompt reflection on the impact and influence of the sports press, and on the indirect responsibility of every sector for the antisocial behaviors generated by soccer in our country.
Abstract:
Oncological patients undergo invasive examinations in order to obtain an accurate diagnosis; these procedures may cause maladaptive reactions (fear, anxiety and pain). In breast cancer in particular, the most common diagnostic technique is the incisional biopsy. Most patients are unfamiliar with the procedure and may therefore focus their thoughts on possible events such as pain, bleeding, the anesthesia, or the later surgical wound care. Anxiety and pain may provoke physiological, behavioral and emotional complications, and for this reason the psychologist trained in Behavioral Medicine takes an active role before and after the biopsy. The aim of this study was to evaluate the effect of a cognitive-behavioral program to reduce anxiety in women undergoing incisional biopsy for the first time. Ten participants were recruited from the Oncology service of the Hospital Juárez de México; all were treated as outpatients. The intervention program focused on psycho-education and passive relaxation training using videos, tape-recorded instructions and pamphlets. Anxiety was measured using the IDARE-State inventory, a visual-analogue anxiety scale (EEF-A), and recordings of blood pressure and heart rate. Data were analyzed both within and between subjects using the Wilcoxon test (p ≤ 0.05). The results show a reduction in anxiety, both in scores and in ranks, as well as a reduction in the EEF-A.
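A minimal sketch of the kind of pre/post comparison described above, using the Wilcoxon signed-rank test at the stated alpha level; the scores below are invented placeholders, not the study's data.

from scipy.stats import wilcoxon

# Hypothetical IDARE-State scores for ten patients, before and after the
# psycho-education and passive relaxation program.
pre_scores  = [52, 48, 55, 60, 47, 58, 50, 63, 49, 54]
post_scores = [40, 42, 45, 51, 39, 47, 41, 55, 38, 44]

statistic, p_value = wilcoxon(pre_scores, post_scores)
print(f"Wilcoxon W = {statistic:.1f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Pre/post reduction in state anxiety is significant at p <= 0.05")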
Abstract:
In this work, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding and pooling. The first three layers stem from models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with a spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method has higher tracking accuracy than several state-of-the-art trackers.
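A rough sketch (assumed shapes and parameters; not the authors' implementation) of the two stages the abstract focuses on: sparse coding of local patch descriptors over a dictionary, followed by spatial pyramid max pooling of the resulting codes into a single region descriptor.

import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Hypothetical dictionary: 128 atoms for 64-dimensional patch descriptors.
dictionary = rng.standard_normal((128, 64))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
coder = SparseCoder(dictionary=dictionary,
                    transform_algorithm="lasso_lars", transform_alpha=0.1)

def encode_and_pool(patch_features, positions, levels=(1, 2, 4)):
    """patch_features: (n_patches, 64) descriptors from one candidate region;
    positions: (n_patches, 2) patch centres normalised to [0, 1)."""
    codes = np.abs(coder.transform(patch_features))      # sparse code per patch
    pooled = []
    for g in levels:                                      # pyramid levels: 1x1, 2x2, 4x4
        cell = np.minimum((positions * g).astype(int), g - 1)
        for i in range(g):
            for j in range(g):
                in_cell = (cell[:, 0] == i) & (cell[:, 1] == j)
                pooled.append(codes[in_cell].max(axis=0) if in_cell.any()
                              else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)                         # pooled region descriptor

# Usage on random stand-in data for a single candidate region.
descriptor = encode_and_pool(rng.standard_normal((50, 64)), rng.random((50, 2)))
print(descriptor.shape)   # (1 + 4 + 16) cells x 128 atoms = (2688,)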
Abstract:
Objective
Pedestrian detection in video surveillance systems has long been a central topic in computer vision research. These systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. The visual attention mechanism has attracted increasing interest in object detection and tracking research in recent years, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: a static visual attention model and a motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection, and regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attention maps are linearly combined using weights obtained from experiments to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via motion features in the temporal domain. Based on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors, and filtering is applied to the resulting field of motion vectors. The saliency of motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
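The sketch below (placeholder weights and random stand-in maps; not the paper's code or fitted parameters) illustrates the combination structure just described: a bottom-up saliency map and a skin-colour top-down map are linearly blended, then fused with a motion-saliency map to give the spatial-temporal saliency used to flag pedestrian candidates.

import numpy as np

def normalise(m):
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def spatial_temporal_saliency(bottom_up, skin_topdown, motion,
                              w_bu=0.6, w_td=0.4, w_motion=0.5):
    # w_bu/w_td blend the static (spatial) maps; w_motion fuses in the temporal term.
    static = w_bu * normalise(bottom_up) + w_td * normalise(skin_topdown)
    return (1 - w_motion) * static + w_motion * normalise(motion)

# Random stand-in maps for one frame.
h, w = 120, 160
rng = np.random.default_rng(1)
saliency = spatial_temporal_saliency(rng.random((h, w)),   # e.g. improved Itti output
                                     rng.random((h, w)),   # skin-colour likelihood map
                                     rng.random((h, w)))   # motion-entropy map
candidates = np.argwhere(saliency > saliency.mean() + 2 * saliency.std())
print(f"{len(candidates)} salient pixels flagged as pedestrian candidates")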
Result
Standard datasets and real-world videos are selected for the experiments, which are performed on the MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model is robust across various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. The proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection, achieving a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses both low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through shifts of the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.