947 results for Auditory-visual teaching
Abstract:
Selective attention refers to the process in which certain information is actively selected for conscious processing, while other information is ignored. The aim of the present studies was to investigate the human brain mechanisms of auditory and audiovisual selective attention with functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and magnetoencephalography (MEG). The main focus was on attention-related processing in the auditory cortex. It was found that selective attention to sounds strongly enhances auditory cortex activity associated with processing the sounds. In addition, the amplitude of this attention-related modulation was shown to increase with the presentation rate of attended sounds. Attention to the pitch of sounds and to their location appeared to enhance activity in overlapping auditory-cortex regions. However, attention to location produced stronger activity than attention to pitch in the temporo-parietal junction and frontal cortical regions. In addition, a study on bimodal attentional selection found stronger audiovisual than auditory or visual attention-related modulations in the auditory cortex. These results were discussed in light of Näätänen's attentional-trace theory and other research concerning the brain mechanisms of selective attention.
Abstract:
This thesis examines brain networks involved in auditory attention and auditory working memory using measures of task performance, brain activity, and neuroanatomical connectivity. Auditory orienting and maintenance of attention were compared with visual orienting and maintenance of attention, and top-down controlled attention was compared with bottom-up triggered attention in audition. Moreover, the effects of cognitive load on performance and brain activity were studied using an auditory working memory task. Corbetta and Shulman's (2002) model of visual attention suggests that what is known as the dorsal attention system (intraparietal sulcus/superior parietal lobule, IPS/SPL, and frontal eye field, FEF) is involved in the control of top-down controlled attention, whereas what is known as the ventral attention system (temporo-parietal junction, TPJ, and areas of the inferior/middle frontal gyrus, IFG/MFG) is involved in bottom-up triggered attention. The present results show that top-down controlled auditory attention also activates IPS/SPL and FEF. Furthermore, in audition, TPJ and IFG/MFG were activated not only by bottom-up triggered attention but also by top-down controlled attention. In addition, the posterior cerebellum and thalamus were activated by top-down controlled attention shifts, and the ventromedial prefrontal cortex (VMPFC) was activated by to-be-ignored but attention-catching salient changes in auditory input streams. VMPFC may be involved in the evaluation of environmental events causing the bottom-up triggered engagement of attention. Auditory working memory activated a brain network that largely overlapped with the one activated by top-down controlled attention. The present results also provide further evidence of the role of the cerebellum in cognitive processing: during auditory working memory tasks, both activity in the posterior cerebellum (crus I/II) and reaction speed increased when the cognitive load increased. Based on the present results and earlier theories on the role of the cerebellum in cognitive processing, the function of the posterior cerebellum in cognitive tasks may be related to the optimization of response speed.
Abstract:
In the design studio, sketching or visual thinking is part of the processes that assist students to achieve final design solutions. At QUT’s First and Third Year industrial design studio classes we engage in a variety of teaching pedagogies, among which we identify ‘Concept Bombs’ as instrumental in the development of students’ visual thinking and reflective design process, and also as a vehicle to foster positive student engagement. In First Year studios our ‘Concept Bombs’ consist of 20-minute individual design tasks focusing on rapid development of initial concept designs and free-hand sketching. In Third Year studios we adopt a variety of formats and different timing, combining individual and team-based tasks. Our experience and surveys tell us that students value intensive studio activities, especially when combined with timely assessment and feedback. While conventional longer-duration design projects are essential for allowing students to engage with the full depth and complexity of the design process, short and intensive design activities introduce variety to the learning experience and enhance student engagement. This paper presents a comparative analysis of First and Third Year students’ Concept Bomb sketches to describe the types of design knowledge embedded in them, a discussion of the limitations and opportunities of this pedagogical technique, as well as considerations for the future development of studio-based tasks of this kind as design pedagogies in the midst of current university education trends.
Abstract:
In the life of the Law School, focus on the “visual” can operate at three different levels: learning, teaching, and examining (legal concepts). My main interest in this paper is to explore the last of these levels, “examining”, broadly considered so as to encompass evaluation in general. Furthermore, that interest is narrowed down here to the area of constitutional rights and human rights in general, even though the conclusions reached can (and should) likely be extrapolated to other areas of the law... In effect, the first logical step regarding the relevance of the visual approach has to do with using it yourself when you study —assuming that you have come to the conclusion that you are a “visual learner”. As you know, VARK theorists propose a quadripartite classification of learners. The acronym VARK stands for the Visual, Aural, Read/write, and Kinesthetic sensory modalities that are used for learning information. This model was designed in the late 1980s by Neil Fleming and it has received some acceptance and a lot of attention...
Abstract:
The lateral intraparietal area (LIP) of macaque posterior parietal cortex participates in the sensorimotor transformations underlying visually guided eye movements. Area LIP has long been considered unresponsive to auditory stimulation. However, recent studies have shown that neurons in LIP respond to auditory stimuli during an auditory-saccade task, suggesting possible involvement of this area in auditory-to-oculomotor as well as visual-to-oculomotor processing. This dissertation describes investigations which clarify the role of area LIP in auditory-to-oculomotor processing.
Extracellular recordings were obtained from a total of 332 LIP neurons in two macaque monkeys, while the animals performed fixation and saccade tasks involving auditory and visual stimuli. No auditory activity was observed in area LIP before animals were trained to make saccades to auditory stimuli, but responses to auditory stimuli did emerge after auditory-saccade training. Auditory responses in area LIP after auditory-saccade training were significantly stronger in the context of an auditory-saccade task than in the context of a fixation task. Compared to visual responses, auditory responses were also significantly more predictive of movement-related activity in the saccade task. Moreover, while visual responses often had a fast transient component, responses to auditory stimuli in area LIP tended to be gradual in onset and relatively prolonged in duration.
Overall, the analyses demonstrate that responses to auditory stimuli in area LIP are dependent on auditory-saccade training, modulated by behavioral context, and characterized by slow-onset, sustained response profiles. These findings suggest that responses to auditory stimuli are best interpreted as supramodal (cognitive or motor) responses, rather than as modality-specific sensory responses. Auditory responses in area LIP seem to reflect the significance of auditory stimuli as potential targets for eye movements, and may differ from most visual responses in the extent to which they are abstracted from the sensory parameters of the stimulus.
Abstract:
Acknowledging aesthetic production as an important condition of human existence, it is not difficult to understand the importance of giving voice to a youth whose poetic production is rich, still unknown, and little explored in its favour. Giving voice here means, above all, attending to their visual images, creating opportunities to explore the eloquence and meanings of this specific visual literacy (Gil, 2011), and listening to what these images shout at us. The approach defended here extends to gadgets, mobile phone screens, computers, video clips, games, manga, and many other visual and behavioural sources. Thus, in the ongoing process of redefining what school means, it seems promising to make the fullest possible use of the images that constitute the visual culture surrounding students' everyday lives. We hope this research will reveal some of the richness, strength, and cultural energy of the universe of pichação (tag-style graffiti) and the relevance of reflecting on it in the classroom, as a way of elucidating not only its aesthetic and plastic aspects but also of redefining the political role of asserting aesthetic-cultural standards, thereby strengthening dialogue with marginalized young students.
Abstract:
Physical modelling of interesting geotechnical problems has helped clarify behaviours and failure mechanisms of many civil engineering systems. Interesting visual information from physical modelling can also be used in teaching to foster interest in geotechnical engineering and recruit young researchers to our field. With this intention, the Teaching Committee of TC2 developed a web-based teaching resources centre. This paper describes the development and organisation of the resource centre using WordPress, an open-source content management system which allows user content to be edited and site administration to be controlled remotely via a built-in interface. Example data from a centrifuge test on shallow foundations, which could be used for undergraduate or graduate level courses, is presented and its use illustrated. A discussion on the development of a wiki-style addition to the resource centre for commonly used physical model terms is also presented. © 2010 Taylor & Francis Group, London.
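The abstract describes content being edited through WordPress's built-in administration interface; as a purely illustrative sketch of how such a resource centre could also accept contributions programmatically, the snippet below creates a draft glossary-style entry through the standard WordPress REST API of a modern installation. The site URL, user name, password, and entry text are hypothetical placeholders, not details from the paper.

```python
# Hypothetical sketch: submitting a draft glossary-style entry to a
# WordPress-based resource centre via the standard REST API (wp/v2).
# Site URL and credentials below are placeholders.
import requests

SITE = "https://example-tc2-resources.org"            # hypothetical site
AUTH = ("editor-user", "application-password-here")    # WordPress application password

entry = {
    "title": "Centrifuge modelling",
    "content": "Scaled physical modelling in an enhanced gravity field ...",
    "status": "draft",  # left as draft so an administrator can review before publishing
}

resp = requests.post(f"{SITE}/wp-json/wp/v2/posts", json=entry, auth=AUTH, timeout=10)
resp.raise_for_status()
print("Created post id:", resp.json()["id"])
```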
Abstract:
How does the brain use eye movements to track objects that move with unpredictable directions and speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets, and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from the predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to such questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model’s ability to functionally explain and quantitatively simulate anatomical, neurophysiological and behavioral data about SAC-SPEM tracking.
Abstract:
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of the visual signals present in the FEF, facilitating access to a common motor output pathway.
Abstract:
Little is known about similarities and differences in voice hearing in schizophrenia and dissociative identity disorder (DID) and the role of child maltreatment and dissociation. This study examined various aspects of voice hearing, along with childhood maltreatment and pathological dissociation, in 3 samples: schizophrenia without child maltreatment (n = 18), schizophrenia with child maltreatment (n = 16), and DID (n = 29). Compared with the schizophrenia groups, the DID sample was more likely to have voices starting before age 18, to hear more than 2 voices, to have both child and adult voices, and to experience tactile and visual hallucinations. The 3 groups were similar in that voice content was incongruent with mood and the location of the voices was more likely internal than external. Pathological dissociation predicted several aspects of voice hearing and appears to be an important variable in voice hearing, at least where maltreatment is present.
Abstract:
In this paper we present the first data from research conducted to determine the relationship between traditional visual arts and other forms of visual culture closer to the experiences of high school youth. The hypothesis of this research is that while students are nurtured by and live primarily with the images provided by media culture, their textbooks basically refer to more traditional art images. The research has been limited to a review and analysis of the most common educational materials for teaching visual arts in high school. After systematizing and analysing the images that appear in the textbooks, we detected three major types: artistic images, images belonging to media culture, and others. The most relevant conclusions indicate that there are hardly any connections between the different types of images, that the textbooks offer a very traditional view of art, and that the images are far removed from the experiences of the young users of these books.
Abstract:
The contemporary dominance of visuality has turned our understanding of space into a mode of unidirectional experience that externalizes the other sensual capacities of the body while perceiving the built environment. This affects not only architectural practice but also architectural education, where an introduction to the concept of space is often challenging, especially for students who have limited spatial and sensual training. Considering that an architectural work is not perceived as a series of retinal pictures but as a repeated multi-sensory experience, the problem definitions in the design studio need to be disengaged from the dominance of a ‘focused vision’ and be re-constructed in a holistic manner. A method to address this approach is to enable the students to refer to their own sensual experiences of the built environment as a part of their design processes. This paper focuses on a particular approach to second-year architectural design teaching which has been followed in the Department of Architecture at Izmir University of Economics for the last three years. The very first architectural project of the studio and the program, entitled ‘Sensing Spaces’, is conducted as a multi-staged design process including ‘sense games, analyses of organs and their interpretations into space’. The objectives of this four-week project are to explore the sense of space through the design of a three-dimensional assembly, to create an awareness of the significance of the senses in the design process and to experiment with re-interpreted forms of bodily parts. Hence, the students are encouraged to explore architectural space through their ‘tactile, olfactory, auditory, gustative and visual stimuli’. In this paper, based on a series of examples, architectural space is examined beyond its boundaries of structure, form and function, and spatial design is considered as an activity of re-constructing the built environment through the awareness of bodily senses.
Abstract:
A software system, recently developed by the authors for the efficient capturing, editing, and delivery of audio-visual web lectures, was used to create a series of lectures for a first-year undergraduate course in Dynamics. These web lectures were developed to serve as an extra study resource for students attending lectures and not as a replacement. A questionnaire was produced to obtain feedback from students. The overall response was very favorable and numerous requests were made for other lecturers to adopt this technology. Despite the students' approval of this added resource, there was no significant improvement in overall examination performance.
Abstract:
Objectives: A common behavioural symptom of Parkinson’s disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically-valid ‘action-related’ sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue.
Methods: The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds.
Results: Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL, were only observed in the PD group when walking to, or imagining, the recorded sounds.
Conclusions: The findings show that while recordings of stepping sounds convey action information to allow PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses.
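The abstract does not specify how step-length variability was computed; a common convention in the gait literature is the coefficient of variation (CV = SD / mean × 100%) of step length over a walking trial. The sketch below is a minimal illustration of that convention, using made-up step-length values rather than data from this study.

```python
# Minimal sketch: quantifying step-length (SL) variability as the coefficient
# of variation (CV = SD / mean * 100%), a common gait measure.
# The step-length values below are illustrative, not data from the study.
from statistics import mean, stdev

step_lengths_m = [0.48, 0.51, 0.47, 0.52, 0.50, 0.49, 0.53, 0.46]  # one walking trial

sl_mean = mean(step_lengths_m)
sl_sd = stdev(step_lengths_m)      # sample standard deviation
sl_cv = 100.0 * sl_sd / sl_mean    # percent

print(f"mean SL = {sl_mean:.3f} m, SD = {sl_sd:.3f} m, CV = {sl_cv:.1f}%")
```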
Abstract:
Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues, both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. We then constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid serial visual presentation" (RSVP) paradigm, which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds, whether it was identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random-pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
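Sensitivity in a yes/no task of this kind is conventionally summarised by d', the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch of that computation follows; the trial counts are illustrative only, not the study's data, and the simple correction for extreme rates is one common convention rather than necessarily the one used by the authors.

```python
# Minimal sketch: computing d' for a yes/no (voice vs. non-voice) task as
# d' = Z(hit rate) - Z(false-alarm rate), with a simple correction so that
# rates of exactly 0 or 1 do not give infinite z-scores.
# Counts below are illustrative only.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)          # log-linear style correction
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 42 of 50 voice trials detected, 10 false alarms on 50 non-voice trials
print(f"d' = {d_prime(42, 8, 10, 40):.2f}")
```

For an unbiased observer in a yes/no task, a d' of 1 corresponds to roughly 69% correct, which gives a sense of the performance level reported for sounds of 8 ms and longer.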