859 results for Multimodal
Abstract:
PURPOSE The purpose of this study was to identify SD-OCT changes that correspond to leakage on fluorescein angiography (FA) and indocyanine green angiography (ICGA) and to evaluate the effect of half-fluence photodynamic therapy (PDT) on choroidal volume in chronic central serous chorioretinopathy (CSC). METHODS Retrospective analysis of patients with chronic CSC who had undergone PDT. Baseline FA and ICGA images were overlaid on SD-OCT to identify OCT correlates of FA or ICGA hyperfluorescence. Choroidal volume was evaluated in a subgroup of eyes before and after PDT. RESULTS Twenty eyes were evaluated at baseline, of which seven eyes had choroidal volume evaluations at baseline and 3 months following PDT. SD-OCT changes corresponding to FA hyperfluorescence were subretinal fluid (73%), RPE microrip (50%), RPE double-layer sign (31%), RPE detachment (15%), and RPE thickening (8%). ICGA hyperfluorescence correlated in 93% of cases with hyperreflective spots in the superficial choroid. Choroidal volume decreased from 9.35 ± 1.99 to 8.52 ± 1.92 and 8.04 ± 1.7 mm(3) at 1 and 3 months post PDT, respectively (p ≤ 0.001). CONCLUSIONS We identified specific OCT findings that correlate with FA and ICGA leakage sites. SD-OCT is a valuable tool to localize CSC lesions and may be useful to guide PDT treatment. A generalized decrease in choroidal volume occurs following PDT and extends beyond the PDT treatment site.
Abstract:
Federal Highway Administration, Office of Safety and Traffic Operations Research and Development, McLean, Va.
Abstract:
In this thesis I examine which model readers are constructed in two different kinds of emails sent by Greenpeace in Sweden to people who engage with the organization's work. I conduct a multimodal text analysis grounded in dialogism and social semiotic theory, using analytical methods from systemic-functional grammar. The results show that the two email types are on the whole very similar, but that certain differences exist, and that the email types thereby construct partly different model readers to which real readers must relate. The model readers are created through realizations of various linguistic and visual meaning-making resources, such as presuppositions, processes, distance, and speech and image acts. What the model readers have in common is that they sympathize with Greenpeace, have an active agent role, and have a close and equal relationship to the organization.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The rigors of establishing innateness and domain specificity pose challenges to adaptationist models of music evolution. In articulating a series of constraints, the authors of the target articles provide strategies for investigating the potential origins of music. We propose additional approaches for exploring theories based on exaptation. We discuss a view of music as a multimodal system of engaging with affect, enabled by capacities of symbolism and a theory of mind.
Abstract:
A vision of the future of intraoperative monitoring for anesthesia is presented: a multimodal world based on advanced sensing capabilities. I explore progress towards this vision, outlining the general nature of the anesthetist's monitoring task and the dangers of attentional capture. Research in attention indicates different kinds of attentional control, such as endogenous and exogenous orienting, which are critical to how awareness of patient state is maintained, but which may work differently across different modalities. Four kinds of medical monitoring displays are surveyed: (1) integrated visual displays, (2) head-mounted displays, (3) advanced auditory displays, and (4) auditory alarms. Achievements and challenges in each area are outlined. In future research, we should focus more clearly on identifying anesthetists' information needs, and we should develop models of attention within and across modalities that are more capable of guiding design. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
This paper reflects upon our attempts to bring a participatory design approach to design research into interfaces that better support dental practice. The project brought together design researchers, general and specialist dental practitioners, the CEO of a dental software company and, to a limited extent, dental patients. We explored the potential for deployment of speech and gesture technologies in the challenging and authentic context of dental practices. The paper describes the various motivations behind the project, the negotiation of access and the development of the participant relationships as seen from the researchers' perspectives. Conducting participatory design sessions with busy professionals demands preparation, improvisation, and clarity of purpose. The paper describes how we identified what went well and when to shift tactics. The contribution of the paper is in its description of what we learned in bringing participatory design principles to a project that spanned technical research interests, commercial objectives and placing demands upon the time of skilled professionals. Copyright © 2010 ACM, Inc
Abstract:
This Thesis addresses the problem of automated false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electroencephalographic (EEG) and electrocardiographic (ECG) time series. The approach relies on a biomedical case for the coupling of the Brain and Heart systems through the central autonomic network during temporal lobe epileptic events: neurovegetative manifestations associated with temporal lobe epileptic events consist of alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The description of arrhythmias observed during temporal lobe epileptic events, from a probabilistic perspective, and the description of the complexity of the state of the brain, from an information theory perspective, are integrated in a fusion-of-information framework towards temporal lobe epileptic seizure detection. The main contributions of the Thesis include the introduction of a biomedical case for the coupling of the Brain and Heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time series towards their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events towards their integration in a fusion-of-knowledge framework; and the investigation of the different levels of the fusion-of-information architecture at which to perform the combination of information extracted from the EEG and ECG time series. The method designed in the Thesis for the automated false-positive-free detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
Abstract:
Motion is an important aspect of face perception that has been largely neglected to date. Many of the established findings are based on studies that use static facial images, which do not reflect the unique temporal dynamics available from seeing a moving face. In the present thesis a set of naturalistic dynamic facial emotional expressions was purposely created and used to investigate the neural structures involved in the perception of dynamic facial expressions of emotion, with both functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG). Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend the distributed neural system for face perception (Haxby et al., 2000). Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as inferior occipital gyri and superior temporal sulci, along with coupling between superior temporal sulci and amygdalae, as well as with inferior frontal gyri. MEG and Synthetic Aperture Magnetometry (SAM) were used to examine the spatiotemporal profile of neurophysiological activity within this dynamic face perception network. SAM analysis revealed a number of regions showing differential activation to dynamic versus static faces in the distributed face network, characterised by decreases in cortical oscillatory power in the beta band, which were spatially coincident with those regions that were previously identified with fMRI. These findings support the presence of a distributed network of cortical regions that mediate the perception of dynamic facial expressions, with the fMRI data providing information on the spatial co-ordinates paralleled by the MEG data, which indicate the temporal dynamics within this network.
This integrated multimodal approach offers both excellent spatial and temporal resolution, thereby providing an opportunity to explore dynamic brain activity and connectivity during face processing.
Abstract:
The studies in this project have investigated the ongoing neuronal network oscillatory activity found in the sensorimotor cortex using two modalities: magnetoencephalography (MEG) and in vitro slice recordings. The results have established that ongoing sensorimotor oscillations span the mu and beta frequency region both in vitro and in MEG recordings, with distinct frequency profiles for each recorded lamina in vitro, while MI and SI show less difference in humans. In addition, these studies show that connections between MI and SI modulate the ongoing neuronal network activity in these areas. The stimulation studies indicate that specific frequencies of stimulation affect the ongoing activity in the sensorimotor cortex. The continuous theta burst stimulation (cTBS) study demonstrates that cTBS predominantly enhances the power of the local ongoing activity. The stimulation studies in this project show limited comparison between modalities, which is informative of the role of connectivity in these effects. However, independently these studies provide novel information on the mechanisms of sensorimotor oscillatory interaction. The pharmacological studies reveal that GABAergic modulation with zolpidem changes the neuronal oscillatory network activity in both healthy and pathological MI. Zolpidem enhances the power of ongoing oscillatory activity both in sensorimotor laminae and in healthy subjects. In contrast, zolpidem attenuates the "abnormal" beta oscillatory activity in the affected hemisphere in Parkinsonian patients, while restoring the hemispheric beta power ratio and frequency variability and thereby improving motor symptomatology. Finally, we show that independent signals from MI laminae can be integrated in silico to resemble the aggregate MEG MI oscillatory signals. This highlights the usefulness of combining these two methods when elucidating neuronal network oscillations in the sensorimotor cortex and any interventions.
A multimodal perspective on the composition of cortical oscillations: Frontiers in Human Neuroscience
Abstract:
An expanding corpus of research details the relationship between functional magnetic resonance imaging (fMRI) measures and neuronal network oscillations. Typically, integrated electroencephalography (EEG) and fMRI, or parallel magnetoencephalography (MEG) and fMRI, are used to draw inference about the consanguinity of BOLD and electrical measurements. However, there is a relative dearth of information about the relationship between E/MEG and the focal networks from which these signals emanate. Consequently, the genesis and composition of E/MEG oscillations require further clarification. Here we aim to contribute to understanding through a series of parallel measurements of primary motor cortex (M1) oscillations, using human MEG and in vitro rodent local field potentials. We compare spontaneous activity in the ~10 Hz mu and 15-30 Hz beta frequency ranges, and compare MEG signals with independent and integrated layers III and V (LIII/LV) from in vitro recordings. We explore the mechanisms of oscillatory generation, using specific pharmacological modulation with the GABA-A alpha-1 subunit modulator zolpidem. Finally, to determine the contribution of cortico-cortical connectivity, we recorded in vitro M1 during an incision to sever lateral connections between M1 and S1 cortices. We demonstrate that the frequency distribution of MEG signals has closer statistical similarity with signals from integrated rather than independent LIII/LV laminae. GABAergic modulation in both modalities elicited comparable changes in the power of the beta band. Finally, cortico-cortical connectivity in sensorimotor cortex (SMC) appears to directly influence the power of the mu rhythm in LIII. These findings suggest that the MEG signal is an amalgam of outputs from LIII and LV, that multiple frequencies can arise from the same cortical area, and that in vitro and MEG M1 oscillations are driven by comparable mechanisms.
Finally, cortico-cortical connectivity is reflected in the power of the SMC mu rhythm. © 2013 Ronnqvist, Mcallister, Woodhall, Stanford and Hall.
Abstract:
Generation of stable dual and/or multiple longitudinal modes emitted from a single quantum dot (QD) laser diode (LD) over a broad wavelength range by using volume Bragg gratings (VBGs) in an external cavity setup is reported. The LD operates in both the ground and excited states, and the gratings give a dual-mode separation around each emission peak of 5 nm, which is suitable as a continuous wave (CW) optical pump signal for a terahertz (THz) photomixer device. The setup also generates dual modes around both 1180 nm and 1260 nm simultaneously, giving four simultaneous narrow-linewidth modes comprising two simultaneous difference-frequency pump signals. © 2011 American Institute of Physics.
Abstract:
Mobile technology has not yet achieved widespread acceptance in the Architectural, Engineering, and Construction (AEC) industry. This paper presents work that is part of an ongoing research project focusing on the development of multimodal mobile applications for use in the AEC industry. This paper focuses specifically on a context-relevant lab-based evaluation of two input modalities, stylus and soft-keyboard input versus speech-based input, for use with a mobile data collection application for concrete test technicians. The manner in which the evaluation was conducted, as well as the results obtained, are discussed in detail.
Abstract:
Mobile technologies have yet to be widely adopted by the Architectural, Engineering, and Construction (AEC) industry despite being one of the major growth areas in computing in recent years. This lack of uptake in the AEC industry is likely due, in large part, to the combination of small screen size and inappropriate interaction demands of current mobile technologies. This paper discusses the scope for multimodal interaction design, with a specific focus on speech-based interaction, to enhance the suitability of mobile technology use within the AEC industry by broadening the field data input capabilities of such technologies. To investigate the appropriateness of using multimodal technology for field data collection in the AEC industry, we have developed a prototype Multimodal Field Data Entry (MFDE) application. This application, which allows concrete testing technicians to record quality control data in the field, has been designed to support two different modalities of data input: speech-based data entry and stylus-based data entry. To compare the effectiveness or usability of, and user preference for, the different input options, we have designed a comprehensive lab-based evaluation of the application. To appropriately reflect the anticipated context of use within the study design, careful consideration had to be given to the key elements of a construction site that would potentially influence a test technician's ability to use the input techniques. These considerations and the resultant evaluation design are discussed in detail in this paper.
Abstract:
Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment, making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a range of different audio designs showed that egocentric sounds reduced task completion time, perceived annoyance, and allowed users to walk closer to their preferred walking speed. The second is a sonically enhanced 2D gesture recognition system for use on a belt-mounted PDA. An evaluation of the system with and without audio feedback showed users' gestures were more accurate when dynamically guided by audio feedback. These novel interaction techniques demonstrate effective alternatives to visual-centric interface designs on mobile devices.