17 results for Visual representation
in CentAUR: Central Archive, University of Reading - UK
Abstract:
How does the manipulation of visual representations play a role in the practices of generating, evolving and exchanging knowledge? The role of visual representation in mediating knowledge work is explored in a study of design work of an architectural practice, Edward Cullinan Architects. The intensity of interactions with visual representations in the everyday activities on design projects is immediately striking. Through a discussion of observed design episodes, two ways are articulated in which visual representations act as 'artefacts of knowing'. As communication media they are symbolic representations, rich in meaning, through which ideas are articulated, developed and exchanged. Furthermore, as tangible artefacts they constitute material entities with which to interact and thereby develop knowledge. The communicative and interactive properties of visual representations constitute them as central elements of knowledge work. The paper explores emblematic knowledge practices supported by visual representation and concludes by pinpointing avenues for further research.
Abstract:
Visually impaired people have a very different view of the world, such that seemingly simple environments as viewed by 'normally' sighted people can be difficult for people with visual impairments to access and move around. This problem can be hard to fully comprehend for people with 'normal vision', even when guidelines for inclusive design are available. This paper investigates ways in which image processing techniques can be used to simulate the characteristics of a number of common visual impairments, in order to provide planners, designers and architects with a visual representation of how people with visual impairments view their environment. The aim is to promote greater understanding of the issues, the creation of more accessible buildings and public spaces, and increased accessibility for visually impaired people in everyday situations.
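The abstract above does not specify which image processing operations the paper uses, but a simulation of reduced visual acuity and contrast sensitivity can be sketched with standard filtering. The sketch below is illustrative only: it assumes a Gaussian blur stands in for acuity loss and a compression toward mid-grey stands in for contrast loss; the function names and parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def simulate_low_acuity(image, sigma=3.0, contrast=0.5):
    """Crudely approximate one visual impairment profile on a
    greyscale image with values in [0, 1]: blur for reduced acuity,
    then pull pixel values toward mid-grey for reduced contrast."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    # Separable blur: pad with edge values, convolve rows, then columns.
    padded = np.pad(image, radius, mode="edge")
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, k, mode="valid"), 0, blurred)
    # Compress the value range around 0.5 to lower contrast.
    return 0.5 + contrast * (blurred - 0.5)
```

Other impairments would need different operators (e.g. a central mask for macular degeneration, a peripheral mask for tunnel vision), but the same filter-and-compose pattern applies.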
Abstract:
This article considers how visual practices are used to manage knowledge in project-based work. It compares project-based work in a capital goods manufacturer and an architectural firm. Visual representations are used extensively in both cases, but the nature of visual practice differs significantly between the two. The research explores the kinds of knowledge that are (and are not) developed and made visible in strategizing and planning activities. For example, whereas the emphasis of project-based work in the former firm is on exploitation of knowledge, and it visualizes its project context largely in commercial and processual terms, the emphasis in the latter is on exploration, and it uses a wide range of visual materials to understand physical interdependencies across the project boundary. We contend that particular kinds of visual tools can help project teams step between exploration and exploitation within a project, and we articulate the types of representations, foci of attention and patterns of interaction involved. The findings suggest that business managers can make more deliberate choices about how knowledge is made visible, and can change visual practice to align the project with exploring and exploiting opportunities. It raises the question: what don't you see within your organization? The work contributes to academic debates about managing through projects, strategizing and organizing, while the focus on visual representation disrupts the tacit-codified dichotomy in the broad debate on knowledge and learning, and highlights the craft skills central to strategizing and organizing.
Abstract:
Saccadic eye movements and fixations are the behavioral means by which we visually sample text during reading. Human oculomotor control is governed by a complex neurophysiological system involving the brain stem, superior colliculus, and several cortical areas [1, 2]. A very widely held belief among researchers investigating primate vision is that the oculomotor system serves to orient the visual axes of both eyes to fixate the same target point in space. It is argued that such precise positioning of the eyes is necessary to place images on corresponding retinal locations, such that on each fixation a single, nondiplopic, visual representation is perceived [3]. Vision works actively through a continual sampling process involving saccades and fixations [4]. Here we report that during normal reading, the eyes do not always fixate the same letter within a word. We also demonstrate that saccadic targeting is yoked and based on a unified cyclopean percept of a whole word since it is unaffected if different word parts are delivered exclusively to each eye via a dichoptic presentation technique. These two findings together suggest that the visual signal from each eye is fused at a very early stage in the visual pathway, even when the fixation disparity is greater than one character (0.29 deg), and that saccade metrics for each eye are computed on the basis of that fused signal.
Abstract:
Haptic computer interfaces provide users with feedback through the sense of touch, thereby allowing users to feel a graphical user interface. Force feedback gravity wells, i.e. attractive basins that can pull the cursor toward a target, are one type of haptic effect that has been shown to provide improvements in "point and click" tasks. For motion-impaired users, gravity wells could improve times by as much as 50%. It has been reported that the presentation of information to multiple sensory modalities, e.g. haptics and vision, can provide performance benefits. However, previous studies investigating the use of force feedback gravity wells have generally not provided visual representations of the haptic effect. Where force fields extend beyond clickable targets, the addition of visual cues may affect performance. This paper investigates how the performance of motion-impaired computer users is affected by having visual representations of force feedback gravity wells presented on-screen. Results indicate that the visual representation does not affect times and errors in a "point and click" task involving multiple targets.
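The "attractive basin" idea can be made concrete with a small force model. The sketch below is a hypothetical force profile, not the one used in the paper or by any particular haptic device: it assumes zero force outside a basin radius and a magnitude that ramps up linearly as the cursor approaches the target centre.

```python
import math

def gravity_well_force(cursor, target, radius, max_force):
    """Attractive force (fx, fy) pulling the cursor toward the
    target centre. Zero outside the basin; inside it, the magnitude
    grows linearly toward the centre. (An illustrative force law --
    real devices use a variety of profiles.)"""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist >= radius:
        return (0.0, 0.0)
    magnitude = max_force * (1.0 - dist / radius)  # stronger near the centre
    # Unit vector toward the target, scaled by the magnitude.
    return (magnitude * dx / dist, magnitude * dy / dist)
```

A visual representation of this effect, as studied in the paper, would simply render the basin (e.g. a shaded circle of the given radius) around each clickable target.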
Abstract:
In their sparse and isolated spaces, Samuel Beckett's figures imagine the touch of a lost love or dream of the comfort and care that the hands of a dear one might bring. Applying philosophical writings that feature sensation, particularly touch, this study examines how Beckett's later work for stage and screen dramatizes moments of contact between self and self, self and world, and self and other. With implications for how gender and ethics can be approached within Beckett's aesthetic, this study explores the employment of haptic imagery as an alternative to certain dominant codes of visual representation.
Abstract:
The study investigated early years teachers’ understanding and use of graphic symbols, defined as the visual representation(s) used to communicate one or more “linguistic” concepts, which can be used to facilitate science learning. The study was conducted in Cyprus, where six early years teachers were observed and interviewed. The results indicate that the teachers had a good understanding of the role of symbols, but demonstrated a lack of understanding with regard to graphic symbols specifically. None of the teachers employed them in their observed science lesson, although some of them claimed that they did so. Findings suggest a gap in participants’ acquaintance with the terminology regarding different types of symbols and a lack of awareness about the use and availability of graphic symbols for the support of learning. There is a need to inform and train early years teachers about graphic symbols and their potential applications in supporting children’s learning.
Abstract:
Perception of our own bodies is based on integration of visual and tactile inputs, notably by neurons in the brain’s parietal lobes. Here we report a behavioural consequence of this integration process. Simply viewing the arm can speed up reactions to an invisible tactile stimulus on the arm. We observed this visual enhancement effect only when a tactile task required spatial computation within a topographic map of the body surface and the judgements made were close to the limits of performance. This effect of viewing the body surface was absent or reversed in tasks that either did not require a spatial computation or in which judgements were well above performance limits. We consider possible mechanisms by which vision may influence tactile processing.
Abstract:
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
Abstract:
The nature of the spatial representations that underlie simple visually guided actions early in life was investigated in toddlers with Williams syndrome (WS), Down syndrome (DS), and healthy chronological age- and mental age-matched controls, through the use of a "double-step" saccade paradigm. The experiment tested the hypothesis that, compared to typically developing infants and toddlers, and toddlers with DS, those with WS display a deficit in using spatial representations to guide actions. Levels of sustained attention were also measured within these groups, to establish whether differences in levels of engagement influenced performance on the double-step saccade task. The results showed that toddlers with WS were unable to combine extra-retinal information with retinal information to the same extent as the other groups, and displayed evidence of other deficits in saccade planning, suggesting a greater reliance on sub-cortical mechanisms than the other populations. Results also indicated that their exploration of the visual environment is less developed. The sustained attention task revealed shorter and fewer periods of sustained attention in toddlers with DS, but not those with WS, suggesting that WS performance on the double-step saccade task is not explained by poorer engagement. The findings are also discussed in relation to a possible attention disengagement deficit in WS toddlers. Our study highlights the importance of studying genetic disorders early in development. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
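A standard equal-variance signal detection analysis, of the kind the abstract refers to, separates perceptual sensitivity (d') from response bias (the criterion c), so an impairment like the one reported can be attributed to perception rather than decision strategy. The sketch below shows the textbook computation; the example hit and false-alarm rates are purely illustrative, not the paper's data.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance signal detection indices:
    sensitivity d' = z(H) - z(F), criterion c = -(z(H) + z(F)) / 2,
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Illustrative comparison: lower sensitivity when the verb's
# direction is incongruent with the visual motion signal.
d_congruent, _ = dprime_and_criterion(0.80, 0.20)    # d' ~ 1.68
d_incongruent, _ = dprime_and_criterion(0.70, 0.30)  # d' ~ 1.05
```

In practice hit and false-alarm rates of exactly 0 or 1 must be adjusted (e.g. with a log-linear correction) before taking z-scores, since the inverse CDF is unbounded there.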
Abstract:
Background: The information processing capacity of the human mind is limited, as is evidenced by the attentional blink (AB), a deficit in identifying the second of two temporally close targets (T1 and T2) embedded in a rapid stream of distracters. Theories of the AB generally agree that it results from competition between stimuli for conscious representation. However, they disagree on the specific mechanisms, in particular about how attentional processing of T1 determines the AB to T2. Methodology/Principal Findings: The present study used the high spatial resolution of functional magnetic resonance imaging (fMRI) to examine the neural mechanisms underlying the AB. Our research approach was to design T1 and T2 stimuli that activate distinguishable brain areas involved in visual categorization and representation. ROI and functional connectivity analyses were then used to examine how attentional processing of T1, as indexed by activity in the T1 representation area, affected T2 processing. Our main finding was that attentional processing of T1 at the level of the visual cortex predicted T2 detection rates: those individuals who activated the T1 encoding area more strongly in blink versus no-blink trials generally detected T2 on a lower percentage of trials. The coupling of activity between T1 and T2 representation areas did not vary as a function of conscious T2 perception. Conclusions/Significance: These data are consistent with the notion that the AB is related to the attentional demands of T1 for selection, and indicate that these demands are reflected at the level of visual cortex. They also highlight the importance of individual differences in attentional settings in explaining AB task performance.
Abstract:
It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants’ perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants’ performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.
Abstract:
The aim of Terrorist Transgressions is to analyse the myths inscribed in images of the terrorist and identify how agency is attributed to representation through invocations and inversions of gender stereotypes. In modern discourses on the terrorist the horror experienced in Western societies was the appearance of a new sense of the vulnerability of the body politic, and therefore of the modern self with its direct dependency on security and property. The terrorist has been constructed as the epitome of transgression against economic resources and moral, physical and political boundaries. Although terrorism has been the focus of intense academic activity, cultural representations of the terrorist have received less attention. Yet terrorism is dependent on spectacle and the topic is subject to forceful exposure in popular media. While the terrorist is predominantly aligned with masculinity, women have been active in terrorist organisations since the late 19th century and in suicidal terrorist attacks since the 1980s. Such attacks have confounded constructions of femininity and masculinity, with profound implications for the gendering of violence and horror. The publication arises from an AHRC networking grant, 2011-12, with Birkbeck, and includes collaboration with the army at Sandhurst RMA. The project relates to a wider investigation into feminism, violence and contemporary art.
Abstract:
Emerging evidence suggests that items held in working memory (WM) might not all be in the same representational state. One item might be privileged over others, making it more accessible and thereby recalled with greater precision. Here, using transcranial magnetic stimulation (TMS), we provide causal evidence in human participants that items in WM are differentially susceptible to disruptive TMS, depending on their state, determined either by task relevance or serial position. Across two experiments, we applied TMS to area MT+ during the WM retention of two motion directions. In Experiment 1, we used an “incidental cue” to bring one of the two targets into a privileged state. In Experiment 2, we presented the targets sequentially so that the last item was in a privileged state by virtue of recency. In both experiments, recall precision of motion direction was differentially affected by TMS, depending on the state of the memory target at the time of disruption. Privileged items were recalled with less precision, whereas nonprivileged items were recalled with higher precision. Thus, only the privileged item was susceptible to disruptive TMS over MT+. By contrast, precision of the nonprivileged item improved, either directly because of facilitation by TMS or indirectly through reduced interference from the privileged item. Our results provide a unique line of evidence, as revealed by TMS over a posterior sensory brain region, for at least two different states of item representation in WM.