921 results for visual attention
Abstract:
Although advertising is pervasive in our daily lives, it is not always effective, owing to poor conditions or contexts of reception. Indeed, the communication process can be jeopardized at its very last stage by the quality of advertising exposure. However critical it may be, ad exposure quality has received little attention from researchers or practitioners. In this paper, we investigate how tiredness, combined with ad complexity, might influence the way consumers extract and process ad elements. Investigating tiredness is useful because it is a common daily state experienced by everyone at various moments of the day, and although it might drastically alter ad reception, it has not yet been studied in advertising. To this end, we observe the eye movement patterns of consumers, tired or not, viewing simple or complex advertisements. Surprisingly, we find that tired subjects viewing complex ads do not adopt an effort-reducing visual strategy; rather, they use a resource-demanding one. We suggest that this Sustained Attention strategy is an adaptive strategy that allows viewers to cope with an anticipated lack of resources.
Abstract:
Dementia with Lewy bodies ('Lewy body dementia' or 'diffuse Lewy body disease') (DLB) is the second most common form of dementia to affect elderly people, after Alzheimer's disease. A combination of the clinical symptoms of Alzheimer's disease and Parkinson's disease is present in DLB and the disorder is classified as a 'parkinsonian syndrome', a group of diseases which also includes Parkinson's disease, progressive supranuclear palsy, corticobasal degeneration and multiple system atrophy. Characteristics of DLB are fluctuating cognitive ability with pronounced variations in attention and alertness, recurrent visual hallucinations and spontaneous motor features, including akinesia, rigidity and tremor. In addition, DLB patients may exhibit visual signs and symptoms, including defects in eye movement, pupillary function and complex visual functions. Visual symptoms may aid the differential diagnoses of parkinsonian syndromes. Hence, the presence of visual hallucinations supports a diagnosis of Parkinson's disease or DLB rather than progressive supranuclear palsy. DLB and Parkinson's disease may exhibit similar impairments on a variety of saccadic and visual perception tasks (visual discrimination, space-motion and object-form recognition). Nevertheless, deficits in orientation, trail-making and reading the names of colours are often significantly greater in DLB than in Parkinson's disease. As primary eye-care practitioners, optometrists should be able to work with patients with DLB and their carers to manage their visual welfare.
Abstract:
We report the performance of a group of adult dyslexics and matched controls in an array-matching task in which two strings of either consonants or symbols are presented side by side and must be judged to be the same or different. The arrays may differ in either the order or the identity of two adjacent characters. This task does not require naming – which has been argued to be the cause of dyslexics’ difficulty in processing visual arrays – but, instead, has a strong serial component, as demonstrated by the fact that, in both groups, reaction times (RTs) increase monotonically with the position of a mismatch. The dyslexics are clearly impaired in all conditions, and performance in the identity conditions predicts performance across orthographic tasks even after age, performance IQ and phonology are partialled out. Moreover, the shapes of the serial position curves are revealing of the underlying impairment. In the dyslexics, RTs increase with position at the same rate as in the controls (the lines are parallel), ruling out reduced processing speed or difficulties in shifting attention. Instead, error rates show a catastrophic increase for positions which are either searched later or more subject to interference. These results are consistent with a reduction in the attentional capacity needed in a serial task to bind together identity and positional information. This capacity is best seen as a reduction in the number of spotlights into which attention can be split to process information at different locations, rather than as a more generic reduction of resources, which would also affect processing the details of single objects.
Abstract:
If humans monitor streams of rapidly presented (approximately 100-ms intervals) visual stimuli, which are typically specific single letters of the alphabet, for two targets (T1 and T2), they often miss T2 if it follows T1 within an interval of 200-500 ms. If T2 follows T1 directly (within 100 ms; described as occurring at 'Lag 1'), however, performance is often excellent: the so-called 'Lag-1 sparing' phenomenon. Lag-1 sparing might result from the integration of the two targets into the same 'event representation', which fits with the observation that sparing is often accompanied by a loss of T1-T2 order information. Alternatively, this might point to competition between the two targets (implying a trade-off between performance on T1 and T2) and Lag-1 sparing might solely emerge from conditional data analysis (i.e. T2 performance given T1 correct). We investigated the neural correlates of Lag-1 sparing by carrying out magnetoencephalography (MEG) recordings during an attentional blink (AB) task, by presenting two targets with a temporal lag of either 1 or 2 and, in the case of Lag 2, with a nontarget or a blank intervening between T1 and T2. In contrast to Lag 2, where two distinct neural responses were observed, at Lag 1 the two targets produced one common neural response in the left temporo-parieto-frontal (TPF) area but not in the right TPF or prefrontal areas. We discuss the implications of this result with respect to competition and integration hypotheses, and with respect to the different functional roles of the cortical areas considered. We suggest that more than one target can be identified in parallel in left TPF, at least in the absence of intervening nontarget information (i.e. masks), yet identified targets are processed and consolidated as two separate events by other cortical areas (right TPF and PFC, respectively).
Abstract:
After exogenously cueing attention to a peripheral location, the return of attention and response to the location can be inhibited. We demonstrate that these inhibitory mechanisms of attention can be associated with objects and can be automatically and implicitly retrieved over relatively long periods. Furthermore, we also show that when face stimuli are associated with inhibition, the effect is more robust for faces presented in the left visual field. This effect can be even more spatially specific, where most robust inhibition is obtained for faces presented in the upper as compared to the lower visual field. Finally, it is revealed that the inhibition is associated with an object’s identity, as inhibition moves with an object to a new location; and that the retrieved inhibition is only transiently present after retrieval.
Abstract:
Detection thresholds for two visual- and two auditory-processing tasks were obtained for 73 children and young adults who varied broadly in reading ability. A reading-disabled subgroup had significantly higher thresholds than a normal-reading subgroup for the auditory tasks only. When analyzed across the whole group, the auditory tasks and one of the visual tasks, coherent motion detection, were significantly related to word reading. These effects were largely independent of ADHD ratings; however, none of these measures accounted for significant variance in word reading after controlling for full-scale IQ. In contrast, phoneme awareness, rapid naming, and nonword repetition each explained substantial, significant word reading variance after controlling for IQ, suggesting more specific roles for these oral language skills in the development of word reading.
Abstract:
During search of the environment, the inhibition of return (IOR) of attention to already-examined information ensures that the target will ultimately be detected. Until now, inhibition was assumed to support search of information during one processing episode. However, in some situations search may have to be completed long after it was begun. We therefore propose that inhibition can be associated with an episode encoded into memory, such that later retrieval reinstates inhibitory processing and encourages examination of new information. In two experiments in which attention was drawn to face stimuli with an exogenous cue, we demonstrated for the first time the existence of long-term IOR. Interestingly, this was the case only for faces in the left visual field, perhaps because more efficient processing of faces in the right hemisphere than the left hemisphere results in richer, more retrievable memory representations.
Abstract:
In recent years there has been an increasing use of visual methods in ageing research. There are, however, limited reflections and critical explorations of the implications of using visual methods in research with people in mid to later life. This paper examines key methodological complexities when researching the daily lives of people as they grow older and the possibilities and limitations of using participant-generated visual diaries. The paper will draw on our experiences of an empirical study, which included a sample of 62 women and men aged 50 years and over with different daily routines. Participant-led photography was drawn upon as a means to create visual diaries, followed by in-depth, photo-elicitation interviews. The paper will critically reflect on the use of visual methods for researching the daily lives of people in mid to later life, as well as suggesting some wider tensions within visual methods that warrant attention. First, we explore the extent to which photography facilitates a ‘collaborative’ research process; second, complexities around capturing the ‘everydayness’ of daily routines are explored; third, the representation and presentation of ‘self’ by participants within their images and interview narratives is examined; and, finally, we highlight particular emotional considerations in visualising daily life.
Abstract:
It is well documented that avoidable traffic accidents occur when motorists miss or ignore traffic signs. With drivers' attention increasingly diverted by distractions such as cell phone conversations, missed traffic signs have become more prevalent. Poor weather and other adverse driving conditions also sometimes prevent motorists from remaining alert and seeing every traffic sign on the road. Besides, most cars do not have any form of traffic assistance. Because of heavy traffic and the proliferation of traffic signs on the roads, a system is needed that helps the driver avoid missing a traffic sign, reducing the probability of an accident. Since visual information is critical for driving, processed video signals from cameras were chosen to assist drivers; these inexpensive cameras can be easily mounted on the automobile. The objective of the present investigation and traffic system development is to recognize traffic signs electronically and alert drivers. For the case study and system development, five important and critical traffic signs were selected: STOP, NO ENTER, NO RIGHT TURN, NO LEFT TURN, and YIELD. The system was evaluated by processing still pictures taken from public roads, and the recognition results were presented in an analysis table indicating correct and false identifications. The system reached an acceptable recognition rate of 80% for all five traffic signs, with a processing time of about three seconds per image. The capabilities of MATLAB, VLSI design platforms and coding were used to generate a visual warning to complement the visual driver support system with a Field Programmable Gate Array (FPGA) on a XUP Virtex-II Pro Development System.
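The abstract does not give the paper's actual MATLAB/FPGA pipeline, but a common first stage in color-based traffic sign recognition of the kind described is thresholding for sign-like colors. The sketch below is purely illustrative (the function names, thresholds, and the red-dominance rule are assumptions, not the authors' method): it flags an image patch as a candidate for a red-bordered sign such as STOP when enough pixels are strongly red.

```python
import numpy as np

# Hypothetical first-stage detector for red-bordered signs (e.g., STOP).
# Thresholds are illustrative, not taken from the paper.
def red_mask(image, ratio=1.5, min_red=100):
    """Boolean mask of pixels whose red channel dominates.

    image: H x W x 3 uint8 RGB array.
    """
    img = image.astype(np.int32)  # avoid uint8 overflow in comparisons
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r >= min_red) & (r > ratio * g) & (r > ratio * b)

def sign_candidate(image, coverage=0.05):
    """Flag a patch as a possible red-sign candidate if at least
    `coverage` of its pixels are strongly red."""
    return red_mask(image).mean() >= coverage

# Tiny synthetic example: a mostly-red patch vs. a gray patch.
red_patch = np.zeros((10, 10, 3), dtype=np.uint8)
red_patch[..., 0] = 200
gray_patch = np.full((10, 10, 3), 120, dtype=np.uint8)
print(sign_candidate(red_patch), sign_candidate(gray_patch))
```

A full system would follow such a color gate with shape analysis and template or classifier-based recognition of the individual sign types.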
Abstract:
One of the overarching questions in the field of infant perceptual and cognitive development concerns how selective attention is organized during early development to facilitate learning. The following study examined how infants' selective attention to properties of social events (i.e., prosody of speech and facial identity) changes in real time as a function of intersensory redundancy (redundant audiovisual, nonredundant unimodal visual) and exploratory time. Intersensory redundancy refers to the spatially coordinated and temporally synchronous occurrence of information across multiple senses. Real-time macro- and micro-structural change in infants' scanning patterns of dynamic faces was also examined.
According to the Intersensory Redundancy Hypothesis, information presented redundantly and in temporal synchrony across two or more senses recruits infants' selective attention and facilitates perceptual learning of highly salient amodal properties (properties that can be perceived across several sensory modalities, such as the prosody of speech) at the expense of less salient modality-specific properties. Conversely, information presented to only one sense facilitates infants' learning of modality-specific properties (properties that are specific to a particular sensory modality, such as facial features) at the expense of amodal properties (Bahrick & Lickliter, 2000, 2002).
Infants' selective attention to and discrimination of prosody of speech and facial configuration were assessed in a modified visual paired comparison paradigm. In redundant audiovisual stimulation, it was predicted that infants would show discrimination of prosody of speech in the early phases of exploration and of facial configuration in the later phases of exploration. Conversely, in nonredundant unimodal visual stimulation, it was predicted that infants would show discrimination of facial identity in the early phases of exploration and of prosody of speech in the later phases of exploration.
Results provided support for the first prediction and indicated that, following redundant audiovisual exposure, infants showed discrimination of prosody of speech earlier in processing time than discrimination of facial identity. Data from the nonredundant unimodal visual condition provided partial support for the second prediction and indicated that infants showed discrimination of facial identity, but not prosody of speech. The dissertation study contributes to the understanding of the nature of infants' selective attention and processing of social events across exploratory time.
Abstract:
Temporal-order judgment (TOJ) and simultaneity judgment (SJ) tasks are used to study differences in speed of processing across sensory modalities, stimulus types, or experimental conditions. Matthews and Welch (2015) reported that observed performance in SJ and TOJ tasks is superior when visual stimuli are presented in the left visual field (LVF) compared to the right visual field (RVF), revealing an LVF advantage presumably reflecting attentional influences. Because observed performance reflects the interplay of perceptual and decisional processes involved in carrying out the tasks, analyses that separate out these influences are needed to determine the origin of the LVF advantage. We re-analyzed the data of Matthews and Welch (2015) using a model of performance in SJ and TOJ tasks that separates out these influences. Parameter estimates capturing the operation of perceptual processes did not differ between hemifields by these analyses, whereas parameter estimates capturing the operation of decisional processes differed. In line with other evidence, perceptual processing also did not differ between SJ and TOJ tasks. Thus, the LVF advantage occurs with identical speeds of processing in both visual hemifields. If attention is responsible for the LVF advantage, it does not exert its influence via prior entry.
Abstract:
For over 50 years, the Satisfaction of Search effect, more recently known as the Subsequent Search Miss (SSM) effect, has plagued the field of radiology. Defined as a decrease in accuracy for additional targets after a prior target has been detected in a visual search, SSM errors are known to underlie both real-world search errors (e.g., a radiologist is more likely to miss a tumor if a different tumor was previously detected) and more simplified, lab-based search errors (e.g., an observer is more likely to miss a target ‘T’ if a different target ‘T’ was previously detected). Unfortunately, little was known about this phenomenon’s cognitive underpinnings, and SSM errors have proven difficult to eliminate. More recently, however, experimental research has provided evidence for three different theories of SSM errors: the Satisfaction account, the Perceptual Set account, and the Resource Depletion account. A series of studies examined performance in a multiple-target visual search and aimed to provide support for the Resource Depletion account, under which a first target consumes cognitive resources, leaving fewer available to process additional targets.
To assess a potential mechanism underlying SSM errors, eye movements were recorded in a multiple-target visual search and used to explore whether a first target may cause an immediate decrease in second-target accuracy, known as an attentional blink. To determine whether other known attentional distractions amplify the effect that finding a first target has on second-target detection, distractors within the immediate vicinity of the targets (i.e., clutter) were measured and compared to accuracy for a second target. To better understand which characteristics of attention were impacted by detecting a first target, individual differences in four characteristics of attention were compared to second-target misses in a multiple-target visual search.
The results demonstrated that an attentional blink underlies SSM errors, with a decrease in second-target accuracy from 135 to 405 ms after detecting or re-fixating a first target. The effects of clutter were exacerbated after finding a first target, with second-target accuracy decreasing further as clutter increased around a second target. The attentional characteristics of modulation and vigilance were correlated with second-target misses, suggesting that worse attentional modulation and vigilance predict more second-target misses. Taken together, these results are used as the foundation for a new theory of SSM errors, the Flux Capacitor theory. The Flux Capacitor theory predicts that once a target is found, it is maintained as an attentional template in working memory, which consumes attentional resources that could otherwise be used to detect additional targets. This theory not only proposes why attentional resources are consumed by a first target, but encompasses the research supporting all three SSM theories in an effort to establish a grand, unified theory of SSM errors.
Abstract:
In this work, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding and pooling. The first three layers stem from models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with a spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method achieves higher tracking accuracy than several state-of-the-art trackers.
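The pooling layer described above, a spatial pyramid representation, can be sketched in a few lines. The following is an illustrative numpy implementation, not the authors' code (the pyramid levels and max-pooling choice are assumptions): per-pixel code responses are max-pooled over 1x1, 2x2, and 4x4 grids and concatenated, so the descriptor preserves the coarse spatial layout of the sparse codes.

```python
import numpy as np

# Illustrative spatial pyramid max-pooling over a single response map.
def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """feature_map: H x W array of per-pixel code responses.
    Returns a 1-D descriptor of length sum(l*l for l in levels)."""
    h, w = feature_map.shape
    pooled = []
    for l in levels:
        for i in range(l):
            for j in range(l):
                # Max-pool each grid cell at this pyramid level.
                cell = feature_map[i * h // l:(i + 1) * h // l,
                                   j * w // l:(j + 1) * w // l]
                pooled.append(cell.max())
    return np.array(pooled)

fmap = np.arange(64, dtype=float).reshape(8, 8)
desc = spatial_pyramid_pool(fmap)
print(desc.shape)  # (21,): 1 + 4 + 16 cells
```

In a tracker of the kind described, one such descriptor per sparse-code channel would feed the classifier that separates the target's appearance from its background.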
Abstract:
Social robots are receiving much interest in the robotics community. The most important goal for such robots lies in their interaction capabilities. An attention system is crucial, both as a filter to focus the robot’s perceptual resources and as a means of letting the observer know that the robot has intentionality. In this paper a simple but flexible and functional attentional model is described. The model, which has been implemented in an interactive robot currently under development, fuses both visual and auditory information extracted from the robot’s environment, and can incorporate knowledge-based influences on attention.