867 results for Visual perception.
Abstract:
Parkinson’s disease (PD) is a common disorder of middle-aged and elderly people in which degeneration of the extrapyramidal motor system causes significant movement problems. In some patients, however, there are additional disturbances in sensory systems, including loss of the sense of smell and auditory and/or visual problems. This article is a general overview of the visual problems likely to be encountered in PD. Changes in vision in PD may result from alterations in visual acuity, contrast sensitivity, colour discrimination, pupil reactivity, eye movements, motion perception, visual field sensitivity and visual processing speeds. Slower visual processing speeds can also lead to a decline in visual perception, especially for rapidly changing visual stimuli. In addition, there may be disturbances of visuo-spatial orientation, facial recognition problems, and chronic visual hallucinations. Some of the treatments used in PD may also have adverse ocular reactions. The pattern electroretinogram (PERG) is useful in evaluating retinal dopamine mechanisms and in monitoring dopamine therapies in PD. If visual problems are present, they can have an important effect on the quality of life of the patient, which can be improved by accurate diagnosis and, where possible, correction of such defects.
Abstract:
Dementia with Lewy bodies ('Lewy body dementia' or 'diffuse Lewy body disease') (DLB) is the second most common form of dementia to affect elderly people, after Alzheimer's disease. A combination of the clinical symptoms of Alzheimer's disease and Parkinson's disease is present in DLB and the disorder is classified as a 'parkinsonian syndrome', a group of diseases which also includes Parkinson's disease, progressive supranuclear palsy, corticobasal degeneration and multiple system atrophy. Characteristics of DLB are fluctuating cognitive ability with pronounced variations in attention and alertness, recurrent visual hallucinations and spontaneous motor features, including akinesia, rigidity and tremor. In addition, DLB patients may exhibit visual signs and symptoms, including defects in eye movement, pupillary function and complex visual functions. Visual symptoms may aid the differential diagnoses of parkinsonian syndromes. Hence, the presence of visual hallucinations supports a diagnosis of Parkinson's disease or DLB rather than progressive supranuclear palsy. DLB and Parkinson's disease may exhibit similar impairments on a variety of saccadic and visual perception tasks (visual discrimination, space-motion and object-form recognition). Nevertheless, deficits in orientation, trail-making and reading the names of colours are often significantly greater in DLB than in Parkinson's disease. As primary eye-care practitioners, optometrists should be able to work with patients with DLB and their carers to manage their visual welfare.
Abstract:
Visual stress is a condition characterised by symptoms of eyestrain, headaches and distortions of visual perception when reading text. The symptoms are frequently alleviated with spectral filters and precision tinted ophthalmic lenses. Visual stress is thought to arise due to cortical hyperexcitability and is associated with a range of neurological conditions. Cortical hyperexcitability is known to occur following stroke. The case presented describes visual stress symptoms resulting from stroke, subsequently managed with spectral filters and precision tinted ophthalmic lenses. The case also highlights that the spectral properties of the tint may need to be modified if the disease course alters.
Abstract:
My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when you contrast the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as ‘plate’ or ‘table’, but do name things that attract the focal attention of the early visual system, i.e. small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the question of how blind children learn words for visible objects, and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').
Abstract:
This collaborative project by Daniel Mafe and Andrew Brown, one of a number they have been involved in together, conjoins painting and digital sound into a single, large-scale, immersive exhibition/installation. The work as a whole acts as an interstitial point between contrasting approaches to abstraction: the visual and aural, the digital and analogue are pushed into an alliance, and each works to alter perceptions of the other. For example, the paintings no longer mutely sit on the wall to be stared into. The sound seemingly emanating from each work shifts the viewer’s typical visual perception and engages their aural sensibilities. This seems to make one more aware of the objects as objects – the surface of each piece is brought into scrutiny – and immerses the viewer more viscerally within the exhibition. Similarly, the sonic experience is focused and concentrated spatially by each painted piece even as the exhibition is dispersed throughout the space. The sounds and images are similar in each location but not identical; though they may seem the same on casual encounter, closer attention quickly shows this is not the case. In preparing this exhibition each artist has had to shift their mode of making to accommodate the other’s contribution. This was mainly done by a process of emptying, whereby each was called upon to do less to the works they were making and to iterate the works toward a shared conception, blurring notions of individual imagination while maintaining material authorship. Emptying was necessary to enable sufficient porosity, where each medium allowed the other entry to its previously gated domain. The paintings are simple and subtle to allow the odd sonic textures a chance to work on the viewer’s engagement with them. The sound remains both abstract, using noise-like textures, and at a low volume to allow the audience’s attention to wander back and forth between aspects of the works.
Abstract:
Evidence currently supports the view that intentional interpersonal coordination (IIC) is a self-organizing phenomenon facilitated by visual perception of co-actors in a coordinative coupling (Schmidt, Richardson, Arsenault, & Galantucci, 2007). The present study examines how apparent IIC is achieved in situations where visual information is limited for co-actors in a rowing boat. In paired rowing boats only one of the actors [bow seat] gets to see the actions of the other [stroke seat]. Thus IIC appears to be facilitated despite the lack of important visual information for the control of the dyad. Adopting a mimetic approach to expert coordination, the present study qualitatively examined the experiences of expert performers (N=9) and coaches (N=4) with respect to how IIC was achieved in paired rowing boats. Themes were explored using inductive content analysis, which led to a layered model of control. Rowers and coaches reported the use of multiple perceptual sources in order to achieve IIC. As expected (Kelso, 1995; Schmidt & O’Brien, 1997; Turvey, 1990), rowers in the bow of a pair boat make use of visual information provided by the partner in front of them [stroke]. However, this perceptual information is subordinate to perception of the relationship between the boat hull and the water passing beside it. The stroke seat, in the absence of visual information about his/her partner, achieves coordination by picking up information about the lifting or looming of the boat’s stern along with water passage past the hull. In this case it appears that apparent or desired IIC is supported by the perception of extra-personal variables, in this case boat behavior, as this perceptual information source is used by both actors. To conclude, co-actors in two-person rowing boats use multiple sources of perceptual information for apparent IIC, and these sources change according to task constraints. Where visual information is restricted, IIC is facilitated via extra-personal perceptual information and apparent IIC switches to intentional extra-personal coordination.
Abstract:
Visual abnormalities, both at the sensory input and the higher interpretive levels, have been associated with many of the symptoms of schizophrenia. Individuals with schizophrenia typically experience distortions of sensory perception, resulting in perceptual hallucinations and delusions that are related to the observed visual deficits. Disorganised speech, thinking and behaviour are commonly experienced by sufferers of the disorder, and have also been attributed to perceptual disturbances associated with anomalies in visual processing. Compounding these issues are marked deficits in cognitive functioning that are observed in approximately 80% of those with schizophrenia. Cognitive impairments associated with schizophrenia include difficulty with concentration and memory (i.e. working, visual and verbal), an impaired ability to process complex information, impaired response inhibition, and deficits in speed of processing and in visual and verbal learning. Deficits in sustained attention or vigilance and poor executive functioning, such as poor reasoning, problem solving and social cognition, are all influenced by impaired visual processing. These symptoms impact on the internal perceptual world of those with schizophrenia, and hamper their ability to navigate their external environment. Visual processing abnormalities in schizophrenia are likely to worsen personal, social and occupational functioning. Binocular rivalry provides a unique opportunity to investigate the processes involved in visual awareness and visual perception. Binocular rivalry is the alternation of perceptual images that occurs when conflicting visual stimuli are presented to each eye in the same retinal location. The observer perceives the opposing images in an alternating fashion, despite the sensory input to each eye remaining constant. Binocular rivalry tasks have been developed to investigate specific parts of the visual system. The research presented in this Thesis provides an explorative investigation into binocular rivalry in schizophrenia, using the method of Pettigrew and Miller (1998) and comparing individuals with schizophrenia to healthy controls. This method allows manipulations to the spatial and temporal frequency, luminance contrast and chromaticity of the visual stimuli. Manipulations to the rival stimuli affect the rate of binocular rivalry alternations and the time spent perceiving each image (dominance duration). Binocular rivalry rate and dominance durations provide useful measures to investigate aspects of visual neural processing that lead to the perceptual disturbances and cognitive dysfunction attributed to schizophrenia. However, despite this promise, the binocular rivalry phenomenon has not been extensively explored in schizophrenia to date. Following a review of the literature, the research in this Thesis examined individual variation in binocular rivalry. The initial study (Chapter 2) explored the effect of systematically altering the properties of the stimuli (i.e. spatial and temporal frequency, luminance contrast and chromaticity) on binocular rivalry rate and dominance durations in healthy individuals (n=20). The findings showed that altering the stimuli with respect to temporal frequency and luminance contrast significantly affected rate. This is significant because processing of temporal frequency and luminance contrast has consistently been demonstrated to be abnormal in schizophrenia. The current research then explored binocular rivalry in schizophrenia.
The primary research question was, "Are binocular rivalry rates and dominance durations recorded in participants with schizophrenia different to those of the controls?" In this second study, binocular rivalry data collected using low- and high-strength binocular rivalry stimuli were compared to alternations recorded during a monocular rivalry task, the Necker cube task, to replicate and advance the work of Miller et al. (2003). Participants with schizophrenia (n=20) recorded fewer alternations (i.e. slower alternation rates) than control participants (n=20) on both binocular rivalry tasks; however, no difference was observed between the groups on the Necker cube task. Magnocellular and parvocellular visual pathways, thought to be abnormal in schizophrenia, were also investigated in binocular rivalry. The binocular rivalry stimuli used in this third study (Chapter 4) were altered to bias the task for one of these two pathways. Participants with schizophrenia recorded slower binocular rivalry rates than controls in both binocular rivalry tasks. Using a within-subject design, binocular rivalry data were compared to data collected from a backward-masking task widely accepted to bias both these pathways. From these data, a model of binocular rivalry based on the magnocellular and parvocellular pathways that contribute to the dorsal and ventral visual streams was developed. Binocular rivalry rates were also compared with performance on the Benton’s Judgment of Line Orientation task in individuals with schizophrenia and healthy controls (Chapter 5). The Benton’s Judgment of Line Orientation task is widely accepted to be processed within the right cerebral hemisphere, making it an appropriate task for investigating the role of the cerebral hemispheres in binocular rivalry, and for investigating the inter-hemispheric switching hypothesis of binocular rivalry proposed by Pettigrew and Miller (1998, 2003). The data were suggestive of intra-hemispheric rather than inter-hemispheric visual processing in binocular rivalry. Neurotransmitter involvement in binocular rivalry, backward masking and Judgment of Line Orientation in schizophrenia was investigated using a genetic indicator of dopamine receptor distribution and functioning: the presence of the Taq1 allele of the dopamine D2 receptor (DRD2) gene. This final study (Chapter 6) explored whether the presence of the Taq1 allele of the DRD2 gene, and thus, by inference, the distribution of dopamine receptors and dopamine function, accounted for the large individual variation in binocular rivalry. The presence of the Taq1 allele was associated with the slower binocular rivalry rates or poorer performance in the backward-masking and Judgment of Line Orientation tasks seen in the group with schizophrenia. This Thesis has contributed to what is known about binocular rivalry in schizophrenia. Consistently slower binocular rivalry rates were observed in participants with schizophrenia, indicating abnormally slow visual processing in this group. These data support previous studies reporting visual processing abnormalities in schizophrenia and suggest that a slow binocular rivalry rate is not a feature specific to bipolar disorder, but may be a feature of disorders with psychotic features generally. The contributions of the magnocellular or dorsal pathways and parvocellular or ventral pathways to binocular rivalry, and therefore to perceptual awareness, were investigated.
The data presented supported the view that the magnocellular system initiates perceptual awareness of an image and the parvocellular system maintains the perception of the image, making it available to higher-level processing occurring within the cortical hemispheres. Abnormal magnocellular and parvocellular processing may both contribute to the perceptual disturbances that ultimately lead to the cognitive dysfunction associated with schizophrenia. An alternative model of binocular rivalry based on these observations was proposed.
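For readers unfamiliar with the measures used throughout this abstract, the sketch below illustrates how an alternation rate and mean dominance durations could be computed from logged perceptual reports. It is a minimal illustration only, not code from the thesis; the event format (a list of (time, percept) pairs) and the trial length are assumptions made for the example.

```python
# Illustrative sketch (not from the thesis): computing a binocular rivalry
# alternation rate and mean dominance durations from a log of perceptual
# switch reports, e.g. key presses recorded as (time_in_seconds, percept).

from typing import Dict, List, Tuple


def rivalry_measures(switches: List[Tuple[float, str]],
                     trial_end: float) -> Tuple[float, Dict[str, float]]:
    """Return alternations per minute and mean dominance duration per percept."""
    if len(switches) < 2:
        return 0.0, {}
    durations: Dict[str, List[float]] = {}  # percept -> dominance durations
    for (t0, percept), (t1, _) in zip(switches, switches[1:]):
        durations.setdefault(percept, []).append(t1 - t0)
    # The final reported percept dominates until the end of the trial.
    last_t, last_percept = switches[-1]
    durations.setdefault(last_percept, []).append(trial_end - last_t)
    total_minutes = (trial_end - switches[0][0]) / 60.0
    rate = (len(switches) - 1) / total_minutes  # alternations per minute
    mean_durations = {p: sum(d) / len(d) for p, d in durations.items()}
    return rate, mean_durations


# Example: a 60 s trial with reports alternating between two rival gratings.
events = [(0.0, "left"), (2.5, "right"), (5.0, "left"), (9.0, "right")]
print(rivalry_measures(events, trial_end=60.0))
```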
Abstract:
Objectives: Adaptive patterning of human movement is context specific and dependent on interacting constraints of the performer–environment relationship. Flexibility of skilled behaviour is predicated on the capacity of performers to move between different states of movement organisation to satisfy dynamic task constraints, previously demonstrated in studies of visual perception, bimanual coordination, and an interceptive combat task. Metastability is a movement system property that helps performers to remain in a state of relative coordination with their performance environments, poised between multiple co-existing states (stable and distinct movement patterns or responses). The aim of this study was to examine whether metastability could be exploited in externally paced interceptive actions in fast ball sports, such as cricket. Design: Here we report data on metastability in performance of multi-articular hitting actions by skilled junior cricket batters (n = 5). Methods: Participants’ batting actions (key movement timings and performance outcomes) were analysed in four distinct performance regions varied by ball pitching (bounce) location. Results: Results demonstrated that, at a pre-determined distance to the ball, participants were forced into a meta-stable region of performance where rich and varied patterns of functional movement behaviours emerged. Participants adapted the organisation of responses, resulting in higher levels of variability in movement timing in this performance region, without detrimental effects on the quality of interceptive performance outcomes. Conclusions: Findings provide evidence for the emergence of metastability in a dynamic interceptive action in cricket batting. Flexibility and diversity of movement responses were optimised using experiential knowledge and careful manipulation of key task constraints of the specific sport context.
Abstract:
In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.
Abstract:
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Abstract:
In this paper we present an update on our novel visualization technologies for cellular immune interactions, viewed from both large-scale spatial and temporal perspectives. We do so with a primary motive: to present a visually and behaviourally realistic environment to the community of experimental biologists and physicians, such that their knowledge and expertise may be more readily integrated into the model creation and calibration process. Visualization aids understanding, as we rely on visual perception to make crucial decisions. For example, with our initial model, we can visualize the dynamics of an idealized lymphatic compartment containing antigen-presenting cells (APCs) and cytotoxic T lymphocytes (CTLs). The visualization technology presented here offers the researcher the ability to start, pause, zoom in, zoom out and navigate in three dimensions through an idealised lymphatic compartment.
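As a rough illustration of the kind of agent state such a visualization front-end might render frame by frame, the toy sketch below steps APC and CTL agents through a random walk in a cubic compartment and counts cell contacts. It is not the authors' model; the compartment size, agent counts, step size and contact radius are arbitrary assumptions chosen only to make the example run.

```python
# Toy sketch (not the authors' model): a minimal agent-based update for APCs
# and CTLs random-walking in an idealised 3-D lymphatic compartment, the kind
# of per-frame state a visualization front-end could render.

import random

COMPARTMENT = 100.0   # side length of the cubic compartment (arbitrary units)
CONTACT_RADIUS = 2.0  # distance at which a CTL is considered to contact an APC


def random_position():
    return [random.uniform(0.0, COMPARTMENT) for _ in range(3)]


def step(position, step_size=1.0):
    """One random-walk step, clamped to the compartment boundaries."""
    return [min(max(x + random.uniform(-step_size, step_size), 0.0), COMPARTMENT)
            for x in position]


def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


apcs = [random_position() for _ in range(50)]    # antigen-presenting cells
ctls = [random_position() for _ in range(200)]   # cytotoxic T lymphocytes

for frame in range(100):                         # one frame per simulation step
    apcs = [step(p) for p in apcs]
    ctls = [step(p) for p in ctls]
    contacts = sum(1 for c in ctls for a in apcs
                   if distance(c, a) < CONTACT_RADIUS)
    # A renderer would draw the agent positions here; we just report contacts.
    if frame % 20 == 0:
        print(f"frame {frame}: {contacts} APC-CTL contacts")
```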
Abstract:
The International Journal of Robotics Research (IJRR) has a long history of publishing the state of the art in the field of robotic vision. This is the fourth special issue devoted to the topic. Previous special issues were published in 2012 (Volume 31, No. 4), 2010 (Volume 29, Nos 2–3) and 2007 (Volume 26, No. 7, jointly with the International Journal of Computer Vision). Closely related was the special issue on Visual Servoing published in IJRR in 2003 (Volume 22, Nos 10–11). These issues nicely summarize the highlights and progress of the past 12 years of research devoted to the use of visual perception for robotics.
Abstract:
This paper introduces a machine learning-based system for controlling a robotic manipulator using visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images, without any prior knowledge of the robot's configuration, is shown for the first time. We build on the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. A naive transfer of the network to real hardware and real observations failed, but experiments show that the network works when camera images are replaced with synthetic images.
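The sketch below gives a flavour of the kind of convolutional Q-network a DQN-style system uses to map raw camera pixels to Q-values over discrete joint commands. It assumes PyTorch, and the layer sizes, image resolution and action count are illustrative placeholders rather than the paper's actual configuration.

```python
# Illustrative sketch only (assumes PyTorch; layer sizes, action count and
# image resolution are placeholders, not the paper's configuration): a
# convolutional Q-network mapping raw pixels to Q-values over discrete
# joint commands, as used in DQN-style target reaching.

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    def __init__(self, num_actions: int = 9):  # e.g. 3 joints x {-, 0, +}
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 feature map for 84x84 input
            nn.Linear(512, num_actions),            # one Q-value per discrete action
        )

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(pixels))


# Greedy action selection from an 84x84 RGB observation of the manipulator.
net = QNetwork()
observation = torch.rand(1, 3, 84, 84)  # batch of one camera image
action = net(observation).argmax(dim=1)
print(action)
```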