20 results for Audio-visual education.
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
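The max-criterion named in this abstract requires the audio-visual response to exceed both unisensory responses (A < AV > V). A minimal sketch of that check, using entirely hypothetical per-region activation estimates (the region names and numbers below are illustrative, not the study's data):

```python
# Hedged sketch: applying the max-criterion A < AV > V to hypothetical
# per-region activation estimates (arbitrary units, invented for illustration).

def meets_max_criterion(a: float, v: float, av: float) -> bool:
    """Return True if the AV response exceeds both the A and the V response."""
    return av > a and av > v

# Hypothetical (a, v, av) activation estimates for three candidate regions.
regions = {
    "superior_temporal": (0.8, 0.6, 1.3),
    "posterior_parietal": (0.5, 0.9, 0.7),
    "prefrontal": (0.4, 0.5, 0.9),
}

# Keep only regions whose AV response exceeds both unisensory responses.
integrative = [name for name, (a, v, av) in regions.items()
               if meets_max_criterion(a, v, av)]
print(integrative)
```

In this toy example only the first and third regions pass, since the second region's AV response is below its visual response.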
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction, revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information causes aphasic patients to explore the speaker's face less.
Abstract:
OBJECTIVE To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. RESULTS A higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter image/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. There was a significant median gain of +8.5 %pts (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception gain +11.8 %pts, p = 0.032). CONCLUSION Webcams have the potential to improve telecommunication for hearing-impaired individuals.
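The reported gain is a median difference in percentage points between the audio-visual and audio-only conditions. A minimal sketch of that computation with invented per-subject scores (the numbers are hypothetical, not the study's data):

```python
# Hedged sketch: median per-subject gain (%pts) from adding visual cues,
# computed on hypothetical sentence-test scores.
from statistics import median

audio_only = [40.0, 55.0, 62.0, 30.0, 71.0]    # hypothetical audio-only scores (%)
audio_visual = [52.0, 60.0, 70.0, 45.0, 75.0]  # same subjects, with visual cues

# Paired difference per subject, then the median across subjects.
gains = [av - a for a, av in zip(audio_only, audio_visual)]
print(f"median gain: {median(gains):+.1f} %pts")
```

A paired (within-subject) difference is used because each participant serves as their own control across the two presentation conditions.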
Abstract:
BACKGROUND Efficiently performed basic life support (BLS) after cardiac arrest is proven to be effective. However, cardiopulmonary resuscitation (CPR) is strenuous and rescuers' performance declines rapidly over time. Audio-visual feedback devices reporting CPR quality may prevent this decline. We aimed to investigate the effect of various CPR feedback devices on CPR quality. METHODS In this open, prospective, randomised, controlled trial we compared three CPR feedback devices (PocketCPR, CPRmeter, iPhone app PocketCPR) with standard BLS without feedback in a simulated scenario. 240 trained medical students performed single-rescuer BLS on a manikin for 8 min. Effective compression (compressions with correct depth, pressure point and sufficient decompression) as well as compression rate, flow time fraction and ventilation parameters were compared between the four groups. RESULTS Study participants using the PocketCPR performed 17±19% effective compressions, compared to 32±28% with the CPRmeter, 25±27% with the iPhone app PocketCPR, and 35±30% applying standard BLS (PocketCPR vs. CPRmeter p=0.007, PocketCPR vs. standard BLS p=0.001, others: ns). The PocketCPR and CPRmeter prevented a decline in effective compression over time, but overall performance in the PocketCPR group was considerably inferior to standard BLS. Compression depth and rate were within the guideline-recommended range in all groups. CONCLUSION While we found differences between the investigated CPR feedback devices, overall BLS quality was suboptimal in all groups. Surprisingly, effective compression was not improved by any CPR feedback device compared to standard BLS. All feedback devices caused a substantial delay in starting CPR, which may worsen outcome.
Abstract:
Under the name Nollywood, a unique video film industry has developed in Nigeria over the last few decades, which now forms one of the world's biggest entertainment industries. With its focus on stories reflecting "the values, desires and fears" (Haynes 2007: 133) of African viewers and its particular mode of production, Nollywood brings "lived practices and its representation together in ways that make the films deeply accessible and entirely familiar to their audience" (Marston et al. 2007: 57). In doing so, Nollywood shows its spectators new postcolonial forms of performative self-expression and becomes a point of reference for a wide range of people. However, Nollywood not only excites a large number of viewers inside and outside Nigeria, it also inspires some of them to become active themselves and make their own films. This effect of Nigerian filmmaking can be found in many parts of sub-Saharan Africa as well as in African diasporas all over the world, including Switzerland (Mooser 2011: 63-66). As a source of inspiration, Nollywood and its unconventional ways of filmmaking offer African migrants a benchmark that meets their wish to express themselves as a minority group in a foreign country. As Appadurai (1996: 53), Ginsburg (2003: 78) and Marks (2000: 21) assume, filmmakers with a migratory background have a specific need to express themselves through media. As minority group members in their country of residence, they not only wish to reflect upon their situation within the diaspora and illustrate their everyday struggles as foreigners, but also to express their own views and ideas in order to challenge dominant public opinion (Ginsburg 2003: 78). They attempt to "talk back to the structures of power" (2003: 78) they live in. In this process, their audio-visual works become a means of response and "an answering echo to a previous presentation or representation" (Mitchell 1994: 421).
The American art historian Mitchell therefore suggests interpreting representation as "the relay mechanism in exchange of power, value, and publicity" (1994: 420). This desire to interact with the local public has also been expressed during a film project of African, mainly Nigerian, first-generation migrants in Switzerland in which I am currently a partner. Several cast and crew members have expressed feelings of being under-represented, or even misrepresented, in the dominant Swiss media discourse. In order to create a form of exchange and give themselves a voice, they are consequently producing a Nollywood-inspired film and wish to present it to the society they live in. My partnership in this ongoing film production (which forms the foundation of my PhD field study) allows me to observe and experience this process. By employing qualitative media-anthropological methods, in particular Performance Ethnography, I seek to find out more about the ways African migrants represent themselves as a community through audio-visual media, and about the effect the transnational use of Nollywood has on their forms of self-representation and expression.
Abstract:
BACKGROUND Resuscitation guidelines encourage the use of cardiopulmonary resuscitation (CPR) feedback devices, implying better outcomes after sudden cardiac arrest. Whether effective continuous feedback could also be given verbally by a second rescuer ("human feedback") has not yet been investigated. We therefore compared the effect of human feedback to that of a CPR feedback device. METHODS In an open, prospective, randomised, controlled trial, we compared the CPR performance of three groups of medical students in a two-rescuer scenario. Group "sCPR" was taught standard BLS without continuous feedback, serving as control. Group "mfCPR" was taught BLS with mechanical audio-visual feedback (HeartStart MRx with Q-CPR-Technology™). Group "hfCPR" was taught standard BLS with human feedback. Afterwards, 326 medical students performed two-rescuer BLS on a manikin for 8 min. CPR quality parameters, such as "effective compression ratio" (ECR: compressions with correct hand position, depth and complete decompression, multiplied by the flow-time fraction), and other compression, ventilation and time-related parameters were assessed for all groups. RESULTS ECR was comparable between the hfCPR and the mfCPR group (0.33 vs. 0.35, p = 0.435). The hfCPR group needed less time until starting chest compressions (2 vs. 8 s, p < 0.001) and showed fewer incorrect decompressions (26 vs. 33 %, p = 0.044). On the other hand, absolute hands-off time was higher in the hfCPR group (67 vs. 60 s, p = 0.021). CONCLUSIONS The quality of CPR was similar with human feedback and with a mechanical audio-visual feedback device. Further studies should investigate whether extended human feedback training could further increase CPR quality at comparable training costs.
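The abstract defines the effective compression ratio (ECR) as the share of fully correct compressions multiplied by the flow-time fraction. A minimal sketch of that arithmetic, assuming this straightforward reading of the definition and using invented numbers (the study's exact operationalization may differ):

```python
# Hedged sketch of the ECR as stated in the abstract: the proportion of
# compressions with correct hand position, depth, and complete decompression,
# multiplied by the flow-time fraction. All inputs below are hypothetical.

def effective_compression_ratio(correct_compressions: int,
                                total_compressions: int,
                                flow_time_fraction: float) -> float:
    """Fraction of fully correct compressions, scaled by flow-time fraction."""
    return (correct_compressions / total_compressions) * flow_time_fraction

# e.g. 360 of 800 compressions fully correct, chest compressed 75% of the time
ecr = effective_compression_ratio(360, 800, 0.75)
print(round(ecr, 3))
```

With these invented inputs the ECR lands near 0.34, in the same range as the 0.33-0.35 values reported for the feedback groups.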
Abstract:
Purpose: Recently, light and mobile reading devices with high display resolutions have become popular, and they may open new possibilities for reading applications in education, business and the private sector. The ability to adapt font size may also open new reading opportunities for people with impaired or low vision. Based on their display technology, two major groups of reading devices can be distinguished. One type, predominantly found in dedicated e-book readers, uses electronic paper, also known as e-Ink. Other devices, mostly multifunction tablet PCs, are equipped with backlit LCD displays. While it has long been accepted that reading on electronic displays is slow and associated with visual fatigue, this new generation is explicitly promoted for reading. Since research has shown that, compared to reading on electronic displays, reading on paper is faster and requires fewer fixations per line, one would expect differential effects when comparing reading behaviour on e-Ink and LCD. In the present study, we therefore compared experimentally how well these two display types are suited for reading over an extended period of time. Methods: Participants read for several hours on either e-Ink or LCD, and different measures of reading behaviour and visual strain were recorded at regular intervals. These dependent measures included subjective (visual) fatigue, a letter search task, reading speed, oculomotor behaviour and the pupillary light reflex. Results: Results suggested that reading on the two display types is very similar in terms of both subjective and objective measures. Conclusions: It is not the technology itself, but rather the image quality, that seems crucial for reading. Compared to the visual display units used in the previous few decades, these more recent electronic displays allow for good and comfortable reading, even for extended periods of time.
Abstract:
By means of fixed-links modeling, the present study identified different processes of visual short-term memory (VSTM) functioning and investigated how these processes are related to intelligence. We conducted an experiment where the participants were presented with a color change detection task. Task complexity was manipulated through varying the number of presented stimuli (set size). We collected hit rate and reaction time (RT) as indicators for the amount of information retained in VSTM and speed of VSTM scanning, respectively. Due to the impurity of these measures, however, the variability in hit rate and RT was assumed to consist not only of genuine variance due to individual differences in VSTM retention and VSTM scanning but also of other, non-experimental portions of variance. Therefore, we identified two qualitatively different types of components for both hit rate and RT: (1) non-experimental components representing processes that remained constant irrespective of set size and (2) experimental components reflecting processes that increased as a function of set size. For RT, intelligence was negatively associated with the non-experimental components, but was unrelated to the experimental components assumed to represent variability in VSTM scanning speed. This finding indicates that individual differences in basic processing speed, rather than in speed of VSTM scanning, differentiate between high- and low-intelligent individuals. For hit rate, the experimental component constituting individual differences in VSTM retention was positively related to intelligence. The non-experimental components of hit rate, representing variability in basal processes, however, were not associated with intelligence. By decomposing VSTM functioning into non-experimental and experimental components, significant associations with intelligence were revealed that otherwise might have been obscured.
Abstract:
OBJECTIVE Visuoperceptual deficits are common in dementia with Lewy bodies (DLB) and Alzheimer disease (AD). Testing visuoperception in dementia is complicated by decline in other cognitive domains and by extrapyramidal features. To overcome these issues, we developed a computerized test, the Newcastle visuoperception battery (NEVIP), which is independent of motor function and has minimal cognitive load. We aimed to test its utility in identifying visuoperceptual deficits in people with dementia. PARTICIPANTS AND MEASUREMENTS We recruited 28 AD and 26 DLB participants, along with 35 comparison participants of similar age and education. The NEVIP was used to test angle, color, and form discrimination along with motion perception to obtain a composite visuoperception score. RESULTS Those with DLB performed significantly worse than AD participants on the composite visuoperception score (Mann-Whitney U = 142, p = 0.01). Visuoperceptual deficits (defined as performance 2 SD below that of the comparison group) were present in 71% of the DLB group and 40% of the AD group. Performance was not significantly correlated with motor impairment, but was significantly related to global cognitive impairment in DLB (rs = -0.689, p < 0.001), though not in AD. CONCLUSION Visuoperceptual deficits can be detected in both DLB and AD participants using the NEVIP, with the DLB group performing significantly worse than the AD group. Visuoperception scores obtained with the NEVIP are independent of participants' motor deficits, and participants are able to comprehend and perform the tasks.
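The deficit criterion here is a cutoff 2 standard deviations below the comparison group's performance. A minimal sketch of that threshold with invented scores (hypothetical data, not the NEVIP's; the study may use a different SD convention):

```python
# Hedged sketch: flag a visuoperceptual deficit when a composite score falls
# more than 2 SD below the comparison group's mean. Scores are hypothetical.
from statistics import mean, stdev

comparison_scores = [88, 92, 85, 90, 95, 87, 91, 89]  # hypothetical controls
cutoff = mean(comparison_scores) - 2 * stdev(comparison_scores)

def has_deficit(score: float) -> bool:
    """True if the score falls below the 2-SD cutoff."""
    return score < cutoff

print(round(cutoff, 1), has_deficit(80), has_deficit(90))
```

With this approach the deficit rate in a patient group is simply the fraction of patients whose scores fall below the cutoff.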
Abstract:
OBJECTIVE To quantify visual discrimination, space-motion, and object-form perception in patients with Parkinson disease dementia (PDD), dementia with Lewy bodies (DLB), and Alzheimer disease (AD). METHODS The authors used a cross-sectional study to compare three demented groups matched for overall dementia severity (PDD: n = 24; DLB: n = 20; AD: n = 23) and two age-, sex-, and education-matched control groups (PD: n = 24; normal controls [NC]: n = 25). RESULTS Visual perception was globally more impaired in PDD than in nondemented controls (NC, PD), but did not differ from DLB. Compared to AD, PDD patients tended to perform worse on all perceptual scores. Visual perception of patients with PDD/DLB and visual hallucinations was significantly worse than that of patients without hallucinations. CONCLUSIONS Parkinson disease dementia (PDD) is associated with profound visuoperceptual impairments similar to those in dementia with Lewy bodies (DLB) but different from Alzheimer disease. These findings are consistent with previous neuroimaging studies reporting hypoactivity in cortical areas involved in visual processing in PDD and DLB.
Abstract:
The present study was designed to elucidate sex-related differences in two basic auditory and one basic visual aspect of sensory functioning, namely sensory discrimination of pitch, loudness, and brightness. Although these three aspects of sensory functioning are of vital importance in everyday life, little is known about whether men and women differ from each other in these sensory functions. Participants were 100 male and 100 female volunteers ranging in age from 18 to 30 years. Since sensory sensitivity may be positively related to individual levels of intelligence and musical experience, measures of psychometric intelligence and musical background were also obtained. Reliably better performance for men compared to women was found for pitch and loudness, but not for brightness discrimination. Furthermore, performance on loudness discrimination was positively related to psychometric intelligence, while pitch discrimination was positively related to both psychometric intelligence and levels of musical training. Additional regression analyses revealed that each of three predictor variables (sex, psychometric intelligence, and musical training) accounted for a statistically significant portion of unique variance in pitch discrimination. With regard to loudness discrimination, regression analysis yielded a statistically significant portion of unique variance for sex as a predictor variable, whereas psychometric intelligence just failed to reach statistical significance. The potential influence of sex hormones on sex-related differences in sensory functions is discussed.