42 results for Visual search method
Abstract:
The efficacy of explicit and implicit learning paradigms was examined during the very early stages of learning the perceptual-motor anticipation task of predicting ball direction from temporally occluded footage of soccer penalty kicks. In addition, the effect of instructional condition on point-of-gaze during learning was examined. A significant improvement in horizontal prediction accuracy was observed in the explicit learning group; however, similar improvement was evident in a placebo group who watched footage of soccer matches. Only the explicit learning intervention resulted in changes in eye movement behaviour and increased awareness of relevant postural cues. Results are discussed in terms of methodological and practical issues regarding the employment of implicit perceptual training interventions.
Abstract:
Facial expression recognition was investigated in 20 males with high-functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, “face in the crowd” paradigm with an HFA/AS group, which explored responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fear, anger and sad expressions in the HFA/AS group, relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgust and surprise expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.
Abstract:
Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias.
Abstract:
The challenge of moving past the classic Windows, Icons, Menus, Pointer (WIMP) interface, i.e. by turning it ‘3D’, has resulted in much research and development. To evaluate the impact of 3D on the ‘finding a target picture in a folder’ task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aids, and the semantic category distribution of targets versus non-targets, as well as the detailed measurement of lower-level stimulus features. Across two separate experiments, a large-sample web-based experiment to understand associations and a controlled lab experiment using eye tracking to understand user focus, we investigated how visual depth, the use of visual aids, the use of semantic categories, and lower-level stimulus features (i.e. contrast, colour and luminance) affect how successfully participants are able to search for, and detect, the target image. Moreover, in the lab-based experiment, we captured pupillometry measurements to assess the influence of increasing cognitive load resulting from either an increasing number of items on the screen or the inclusion of visual depth. Our findings showed that increasing the visible layers of depth and including converging lines did not affect target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrast between a target and its neighbours correlated with an increase in detection errors. Finally, pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there is luminance contrast between the target and its surrounding neighbours. The results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface.
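The abstract does not give exact definitions for the lower-level stimulus features it relates to detection performance, so the following Python sketch shows one conventional set of measures (Rec. 709 luma for luminance, RMS contrast, and a gradient-threshold edge density). It is purely illustrative; the thresholds and formulas are assumptions, not the study's analysis.

```python
import numpy as np

def stimulus_features(rgb):
    """Illustrative low-level features for an icon image.

    rgb: H x W x 3 float array in [0, 1].
    Returns mean luminance, RMS contrast, and a crude edge density,
    i.e. the kind of stimulus-driven measures the study relates to
    detection times and errors (exact definitions are assumptions).
    """
    # Rec. 709 luma as a proxy for perceived luminance
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    mean_luminance = luma.mean()
    # RMS contrast: standard deviation of luminance relative to its mean
    rms_contrast = luma.std() / (mean_luminance + 1e-8)
    # Edge density: fraction of pixels whose gradient magnitude exceeds a threshold
    gy, gx = np.gradient(luma)
    edge_density = float((np.hypot(gx, gy) > 0.1).mean())
    return mean_luminance, rms_contrast, edge_density

# Example on a random "icon"
print(stimulus_features(np.random.rand(64, 64, 3)))
```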
Abstract:
Much is known about the functional mechanisms involved in visual search. Yet, the fundamental question of whether the visual system can perform different types of visual analysis at different spatial resolutions still remains unsettled. In the visual-attention literature, the distinction between different spatial scales of visual processing corresponds to the distinction between distributed and focused attention. Some authors have argued that singleton detection can be performed in distributed attention, whereas others suggest that even such a simple visual operation involves focused attention. Here we showed that microsaccades were spatially biased during singleton discrimination but not during singleton detection. The results provide support for the hypothesis that some coarse visual analysis can be performed in a distributed attention mode.
Abstract:
Three experiments examined the cultural relativity of emotion recognition using the visual search task. Caucasian-English and Japanese participants were required to search for an angry or happy discrepant face target against an array of competing distractor faces. Both cultural groups performed the task with displays that consisted of Caucasian and Japanese faces in order to investigate the effects of racial congruence on emotion detection performance. Under high perceptual load conditions, both cultural groups detected the happy face more efficiently than the angry face. When perceptual load was reduced such that target detection could be achieved by feature-matching, the English group continued to show a happiness advantage in search performance that was more strongly pronounced for other race faces. Japanese participants showed search time equivalence for happy and angry targets. Experiment 3 encouraged participants to adopt a perceptual based strategy for target detection by removing the term 'emotion' from the instructions. Whilst this manipulation did not alter the happiness advantage displayed by our English group, it reinstated it for our Japanese group, who showed a detection advantage for happiness only for other race faces. The results demonstrate cultural and linguistic modifiers on the perceptual saliency of the emotional signal and provide new converging evidence from cognitive psychology for the interactionist perspective on emotional expression recognition.
Abstract:
Clinical evidence suggests that a persistent search for solutions for chronic pain may bring along costs at the cognitive, affective, and behavioral level. Specifically, attempts to control pain may fuel hypervigilance and prioritize attention towards pain-related information. This hypothesis was investigated in an experiment with 41 healthy volunteers. Prioritization of attention towards a signal for pain was measured using an adaptation of a visual search paradigm in which participants had to search for a target presented in a varying number of colored circles. One of these colors (Conditioned Stimulus) became a signal for pain (Unconditioned Stimulus: electrocutaneous stimulus at tolerance level) using a classical conditioning procedure. Intermixed with the visual search task, participants also performed another task. In the pain-control group, participants were informed that correct and fast responses on trials of this second task would result in an avoidance of the Unconditioned Stimulus. In the comparison group, performance on the second task was not instrumental in controlling pain. Results showed that in the pain-control group, attention was more prioritized towards the Conditioned Stimulus than in the comparison group. The theoretical and clinical implications of these results are discussed.
Abstract:
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while they fixate targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues by using either a Gabor patch or a detailed picture target; and looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
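The relative weighting of near cues described above can be illustrated with a simple linear cue-combination fit. The cue demands, response values, and least-squares model below are hypothetical assumptions for illustration only; they are not the study's data or analysis.

```python
import numpy as np

# Each row: demand provided by (disparity, blur, proximity) in one cue condition;
# zeros mean that cue was removed. Hypothetical numbers for illustration only.
cue_demand = np.array([
    [2.0, 2.0, 2.0],   # three-cue condition
    [2.0, 2.0, 0.0],   # looming removed
    [2.0, 0.0, 0.0],   # disparity only
    [0.0, 2.0, 0.0],   # blur only
    [0.0, 0.0, 2.0],   # proximity only
    [0.0, 0.0, 0.0],   # zero-cue condition
])
# Hypothetical mean vergence responses measured in each condition.
response = np.array([1.95, 1.90, 1.85, 0.40, 0.10, 0.05])

# Least-squares estimate of the relative weight carried by each cue.
weights, *_ = np.linalg.lstsq(cue_demand, response, rcond=None)
print(dict(zip(["disparity", "blur", "proximity"], weights.round(2))))
```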
Abstract:
In this paper, we introduce a novel high-level visual content descriptor which is devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is fundamentally underpinned by the image labelling framework called Collaterally Confirmed Labelling (CCL), which incorporates the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques for automatically assigning linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
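The abstract does not specify how the CCL keyword assignments are turned into an image feature vector; the Python sketch below shows one plausible construction (an area-weighted bag-of-keywords over segmented regions). The vocabulary, weighting scheme, and helper names are assumptions introduced for illustration, not the paper's actual model.

```python
import numpy as np

# Hypothetical vocabulary of linguistic keywords assigned to image regions.
VOCAB = ["sky", "water", "grass", "building", "person", "animal"]

def keyword_vector(region_labels, region_areas):
    """Area-weighted bag-of-keywords vector for one image.

    region_labels: keyword assigned to each segmented region.
    region_areas: relative area of each region (sums to roughly 1).
    """
    vec = np.zeros(len(VOCAB))
    for label, area in zip(region_labels, region_areas):
        if label in VOCAB:
            vec[VOCAB.index(label)] += area
    return vec

# Two toy images; cosine similarity between their vectors could then drive
# clustering or retrieval.
a = keyword_vector(["sky", "water", "person"], [0.5, 0.3, 0.2])
b = keyword_vector(["sky", "grass", "building"], [0.4, 0.4, 0.2])
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```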
Abstract:
Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science-related application areas, the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
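As a rough illustration of the SVD-plus-SOM pipeline described above, the following Python sketch reduces a hypothetical fragment-occurrence matrix with a singular value decomposition and then trains a small self-organizing map to place compounds on a two-dimensional grid. The data, grid size, and training schedule are assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fragment-occurrence matrix: rows = molecules, columns = selected
# fragments (e.g. chosen for correlation with an HTS activity readout).
X = rng.random((200, 50))

# Step 1: project onto the leading singular vectors (dimensionality reduction).
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z = U[:, :10] * S[:10]          # 10-dimensional compound descriptors

# Step 2: train a small self-organizing map on the reduced descriptors.
grid_h, grid_w, dim = 8, 8, Z.shape[1]
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    sigma = 3.0 * (1 - epoch / 20) + 0.5   # decaying neighbourhood radius
    for z in rng.permutation(Z):
        # best-matching unit for this compound descriptor
        bmu = np.unravel_index(np.argmin(((weights - z) ** 2).sum(-1)), (grid_h, grid_w))
        # Gaussian neighbourhood pull toward the sample
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (z - weights)

# Each molecule's position on the 8 x 8 map: its best-matching unit.
positions = [np.unravel_index(np.argmin(((weights - z) ** 2).sum(-1)), (grid_h, grid_w)) for z in Z]
print(positions[:5])
```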
Abstract:
Visual exploration of scientific data in the life science area is a growing research field due to the large amount of available data. Kohonen’s Self-Organizing Map (SOM) is a widely used tool for the visualization of multidimensional data. In this paper we present a fast learning algorithm for SOMs that uses a simulated annealing method to adapt the learning parameters. The algorithm has been adopted in a data analysis framework for the generation of similarity maps. Such maps provide an effective tool for the visual exploration of large and multi-dimensional input spaces. The approach has been applied to data generated during the High Throughput Screening of molecular compounds; the generated maps allow a visual exploration of molecules with similar topological properties. The experimental analysis on real-world data from the National Cancer Institute shows the speed-up of the proposed SOM training process in comparison to a traditional approach. The resulting visual landscape groups molecules with similar chemical properties in densely connected regions.
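The abstract does not detail the simulated annealing schedule, so the sketch below shows one way such an adaptation could look: a minimal SOM training loop in which the learning rate is perturbed each epoch and accepted or rejected with a Metropolis rule driven by the quantization error. All parameters and the toy data are assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((300, 16))                        # toy multidimensional input space

grid_h, grid_w, dim = 6, 6, X.shape[1]
W = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), -1)

def train_epoch(W, lr, sigma):
    """One SOM pass over the data; returns updated weights and mean quantization error."""
    W = W.copy()
    err = 0.0
    for x in rng.permutation(X):
        d = ((W - x) ** 2).sum(-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        err += np.sqrt(d[bmu])
        h = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))[..., None]
        W += lr * h * (x - W)
    return W, err / len(X)

# Simulated-annealing adaptation of the learning rate (illustrative only).
lr, sigma, T = 0.5, 2.0, 1.0
W, best_err = train_epoch(W, lr, sigma)
for epoch in range(30):
    lr_new = np.clip(lr + rng.normal(0, 0.05), 0.01, 1.0)   # propose a perturbed rate
    W_new, err = train_epoch(W, lr_new, sigma)
    # Metropolis rule: always accept improvements, sometimes accept worse moves.
    if err < best_err or rng.random() < np.exp((best_err - err) / T):
        W, best_err, lr = W_new, err, lr_new
    T *= 0.9                       # cool the temperature
    sigma = max(0.5, sigma * 0.95) # shrink the neighbourhood
print(f"final quantization error: {best_err:.4f}, learning rate: {lr:.3f}")
```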
Abstract:
In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, whereby no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware, and is suitable for large-scale, multi-domain, heterogeneous environments, such as computational Grids. Dynamic load balancing policies based on global statistics are known to provide optimal load balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages and adopts distributed job-pools and a randomized polling technique. The framework has been successfully adopted in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
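As an informal illustration of the load-balancing idea (local job pools plus randomized polling of peers when a pool runs dry), the following Python sketch simulates a small set of workers expanding an irregular search tree. Termination detection is reduced to a simple timer, and all names, parameters, and the toy tree are assumptions rather than the framework's actual design.

```python
import random
import threading
import queue
import time

NUM_WORKERS = 4
pools = [queue.Queue() for _ in range(NUM_WORKERS)]   # one job pool per peer
done = threading.Event()
results = queue.Queue()

def expand(depth):
    """Toy irregular search tree: each node spawns a random number of children."""
    return [depth + 1 for _ in range(random.randint(0, 3))] if depth < 6 else []

def worker(wid):
    while not done.is_set():
        try:
            job = pools[wid].get_nowait()               # take work from the local pool first
        except queue.Empty:
            victim = random.randrange(NUM_WORKERS)      # randomized polling of a peer's pool
            try:
                job = pools[victim].get_nowait()
            except queue.Empty:
                time.sleep(0.001)
                continue
        results.put(job)                                # "process" the node
        for child in expand(job):
            pools[wid].put(child)                       # newly generated work stays local

pools[0].put(0)                                         # the search starts at the root
threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
time.sleep(1.0)                                         # crude stand-in for termination detection
done.set()
for t in threads:
    t.join()
print("nodes processed:", results.qsize())
```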