859 results for Multimodal
Abstract:
Ocular anatomy and radiation-associated toxicities pose unique challenges for external beam radiation therapy. For treatment planning, precise modeling of the organs at risk and of the tumor volume is crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling in external beam radiation therapy of intraocular tumors.
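A statistical shape model of the kind described is commonly built by principal component analysis over corresponding surface points, with new eye shapes expressed as the mean shape plus a weighted sum of the leading variation modes. The sketch below is a minimal illustration of that general technique, not the authors' implementation; the array names and the PCA-via-SVD construction are assumptions.

import numpy as np

# training_shapes: (n_samples, 3 * n_points) array of corresponding eye surface
# landmarks (hypothetical input; in practice taken from segmented patient scans).
def build_shape_model(training_shapes, n_modes=5):
    mean_shape = training_shapes.mean(axis=0)
    centered = training_shapes - mean_shape
    # PCA via SVD: rows of vt are the principal modes of shape variation.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]
    stddevs = singular_values[:n_modes] / np.sqrt(len(training_shapes) - 1)
    return mean_shape, modes, stddevs

def instantiate_shape(mean_shape, modes, coefficients):
    # A model instance is the mean shape plus a linear combination of the modes.
    return mean_shape + coefficients @ modes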
Abstract:
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
Abstract:
Objectives: Neurofunctional alterations are correlates of vulnerability to psychosis, as well as of the disorder itself. How these abnormalities relate to different probabilities of later transition to psychosis is unclear. We investigated vulnerability-related versus disease-related versus resilience-related biomarkers of psychosis during working memory (WM) processing in individuals with an at-risk mental state (ARMS). Experimental design: Patients with “first-episode psychosis” (FEP, n = 21), short-term ARMS (ARMS-ST, n = 17), long-term ARMS (ARMS-LT, n = 16), and healthy controls (HC, n = 20) were investigated with an n-back WM task. We examined functional magnetic resonance imaging (fMRI) and structural magnetic resonance imaging (sMRI) data in conjunction using the biological parametric mapping (BPM) toolbox. Principal observations: There were no differences in accuracy, but the FEP and ARMS-ST groups had longer reaction times than the HC and ARMS-LT groups. With the 2-back > 0-back contrast, we found reduced functional activation in the ARMS-ST and FEP groups compared with the HC group in parietal and middle frontal regions. Relative to ARMS-LT individuals, FEP patients showed decreased activation in the bilateral inferior frontal gyrus and insula, and in the left prefrontal cortex. Compared with the ARMS-LT subjects, the ARMS-ST subjects showed reduced activation in the right inferior frontal gyrus and insula. Reduced insular and prefrontal activation was associated with gray matter volume reduction in the same areas in the ARMS-LT group. Conclusions: These findings suggest that vulnerability to psychosis was associated with neurofunctional alterations in fronto-temporo-parietal networks during a WM task. Neurofunctional differences within the ARMS were related to the different durations of the prodromal state and to resilience factors.
Abstract:
Image-guided, computer-assisted neurosurgery has emerged to improve localization and targeting, to provide a better anatomic definition of the surgical field, and to decrease invasiveness. Usually, in image-guided surgery, a computer displays the surgical field in a CT/MR environment, using axial, coronal or sagittal views, or even a 3D representation of the patient. Such a system forces the surgeon to look away from the surgical scene to the computer screen. Moreover, this kind of information, being pre-operative imaging, cannot be modified during the operation, so it remains valid for guidance mainly in the first stage of the surgical procedure and mainly for rigid structures such as bone. To address these two constraints, we are developing an ultrasound-guided surgical microscope. Such a system takes advantage of the fact that surgical microscopes and ultrasound systems are already used in neurosurgery, so it does not add complexity to the surgical procedure. We have integrated an optical tracking device into the microscope, together with an augmented reality overlay system that removes the need to look away from the scene and provides correctly aligned surgical images with sub-millimeter accuracy. In addition to the standard CT and 3D views, we are able to track an ultrasound probe; using a prior calibration and registration of the imaging, the acquired image is correctly projected onto the overlay system, so the surgeon can always localize the target and verify the effects of the intervention. Several tests of the system have already been performed to evaluate its accuracy, and clinical experiments are currently in progress to validate its clinical usefulness.
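Projecting a tracked ultrasound image into a microscope overlay amounts to chaining the rigid transforms obtained from probe calibration, optical tracking, and microscope registration. The snippet below is a minimal sketch of such a transform chain under assumed 4x4 homogeneous matrices; all names are hypothetical and it is not the authors' system.

import numpy as np

# Hypothetical homogeneous 4x4 transforms, filled in by calibration and tracking.
T_image_to_probe = np.eye(4)         # ultrasound image frame -> probe markers (calibration)
T_probe_to_tracker = np.eye(4)       # probe markers -> optical tracker (tracked pose)
T_tracker_to_microscope = np.eye(4)  # optical tracker -> microscope optics (registration)

def ultrasound_point_in_microscope(p_image):
    # Map a 3D point from the ultrasound image frame into the microscope overlay frame.
    p = np.append(p_image, 1.0)  # homogeneous coordinates
    chain = T_tracker_to_microscope @ T_probe_to_tracker @ T_image_to_probe
    return (chain @ p)[:3]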
Abstract:
BACKGROUND: Digital imaging methods are a centrepiece of the diagnosis and management of macular disease. A recently developed imaging device combines simultaneous confocal scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT). The benefit of this technique for diagnosis and therapeutic follow-up is assessed by means of clinical examples. METHODS: The combined OCT-SLO system (Ophthalmic Technologies Inc., Toronto, Canada) allows confocal en-face fundus imaging and high-resolution OCT scanning at the same time. OCT images are obtained from transversal line scans. A single light source and an identical scanning rate yield a pixel-to-pixel correspondence between the images. Three-dimensional thickness maps are derived from C-scan stacking. RESULTS: We followed up patients with cystoid macular edema, pigment epithelium detachment, macular hole, venous branch occlusion, and vitreoretinal tractions during their course of therapy. The new imaging method illustrates the reduction of cystoid volume, e.g. after intravitreal injections of either angiostatic drugs or steroids. C-scans are used to assess lesion diameters, to visualise pathologies involving the vitreoretinal interface, and to quantify changes in retinal thickness. CONCLUSION: The combined OCT-SLO system creates both topographic and tomographic images of the retina. New therapeutic options can be followed up closely by observing changes in lesion thickness and cyst volumes. Further studies are needed for clinical use.
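A three-dimensional thickness map of the kind mentioned can, in principle, be derived from the C-scan stack by detecting the inner and outer retinal boundaries along each depth profile and scaling the index difference by the axial pixel spacing. The following is a deliberately simplified sketch with hypothetical array names and a crude threshold-based boundary detection, not the device's actual processing.

import numpy as np

def thickness_map(volume, axial_spacing_um, threshold):
    # volume: (depth, height, width) OCT intensity stack (hypothetical input).
    # Returns an en-face map of retinal thickness in micrometres.
    above = volume > threshold                                  # crude boundary detection
    top = above.argmax(axis=0)                                  # first voxel above threshold
    bottom = volume.shape[0] - 1 - above[::-1].argmax(axis=0)   # last voxel above threshold
    return (bottom - top) * axial_spacing_um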
Abstract:
OBJECTIVE: The aim of the present pilot study is to show initial results of a multimodal approach using clinical scoring, morphological magnetic resonance imaging (MRI), and biochemical T2 relaxation and diffusion-weighted imaging (DWI), and to assess their ability to detect differences between cartilage repair tissue after microfracture therapy (MFX) and after matrix-associated autologous chondrocyte transplantation (MACT). METHOD: Twenty patients were cross-sectionally evaluated at different post-operative intervals, ranging from 12 to 63 months after MFX and from 12 to 59 months after MACT. The two groups were matched by age (MFX: 36.0+/-10.4 years; MACT: 35.1+/-7.7 years) and post-operative interval (MFX: 32.6+/-16.7 months; MACT: 31.7+/-18.3 months). After clinical evaluation using the Lysholm score, 3T MRI was performed to obtain the MR observation of cartilage repair tissue (MOCART) score as well as T2 mapping and DWI for multi-parametric MRI. Quantitative T2 relaxation was measured using a multi-echo spin-echo sequence; a semi-quantitative diffusion quotient (signal intensity without diffusion weighting divided by signal intensity with diffusion weighting) was obtained with a partially balanced, steady-state gradient-echo pulse sequence. RESULTS: No differences in Lysholm (P=0.420) or MOCART (P=0.209) score were observed between MFX and MACT. T2 mapping showed lower T2 values after MFX compared to MACT (P=0.039). DWI distinguished between healthy cartilage and cartilage repair tissue in both procedures (MFX: P=0.001; MACT: P=0.007). Correlations were found between the Lysholm and MOCART scores (Pearson: 0.484; P=0.031) and between the Lysholm score and DWI (Pearson: -0.557; P=0.011), and a trend was found between the Lysholm score and T2 (Pearson: 0.304; P=0.193). CONCLUSION: Using T2 mapping and DWI, additional information could be gained compared with clinical scoring or morphological MRI. In combination, clinical, MR-morphological, and MR-biochemical parameters can be seen as a promising multimodal tool for the follow-up of cartilage repair.
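The two quantitative readouts described here reduce to simple per-voxel computations: the diffusion quotient is the ratio of the signal without diffusion weighting to the signal with diffusion weighting, and T2 follows from fitting a mono-exponential decay S(TE) = S0 * exp(-TE/T2) to the multi-echo signal. The sketch below is a minimal illustration with hypothetical array names, not the study's processing pipeline.

import numpy as np

def diffusion_quotient(signal_without_dw, signal_with_dw):
    # Semi-quantitative diffusion quotient: S(no diffusion weighting) / S(diffusion weighting).
    return signal_without_dw / signal_with_dw

def t2_map(echo_times_ms, signals):
    # Fit S(TE) = S0 * exp(-TE / T2) per voxel via a log-linear least-squares fit.
    # signals: (n_echoes, n_voxels) multi-echo spin-echo magnitudes (hypothetical input).
    slope, _intercept = np.polyfit(echo_times_ms, np.log(signals), 1)
    return -1.0 / slope  # T2 in milliseconds for each voxel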
Abstract:
Over the past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality renders interaction with an environment more natural and intuitive, the development cycle of such an application is often long and expensive. In our overall field of research, we investigate how model-based design can facilitate the development process by designing environments through the use of high-level diagrams. In this scope, we present ‘NiMMiT’, a graphical notation for expressing and evaluating multimodal user interaction; we elaborate on the NiMMiT primitives and demonstrate its use by means of a comprehensive example.
Abstract:
Information theory-based metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, this class of metric may lead to matching ambiguity in non-rigid registration. Moreover, maximization of MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This similarity metric, termed SPMI, enhances registration accuracy by using tissue classification probabilities as prior information, generated by an expectation-maximization (EM) algorithm. Diffeomorphic demons is then adopted as the registration model and is optimized in a hierarchical framework (H-SPMI) that uses different levels of anatomical structure as prior knowledge. The proposed method is evaluated using Brainweb synthetic data and clinical fMRI images. Both qualitative and quantitative assessments were performed, as well as a sensitivity analysis with respect to segmentation error. We show that, compared with purely intensity-based approaches that only maximize mutual information, the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.
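Point-wise mutual information compares the joint intensity probability of a voxel pair with the product of the marginal probabilities, PMI(a, b) = log(p(a, b) / (p(a) p(b))). The sketch below illustrates only that underlying quantity, computed from a joint intensity histogram; it is not the authors' SPMI metric, and the function and array names are hypothetical.

import numpy as np

def pointwise_mutual_information(fixed, moving, bins=32):
    # Evaluate PMI(a, b) = log( p(a, b) / (p(a) * p(b)) ) for every voxel pair.
    counts, a_edges, b_edges = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_joint = counts / counts.sum()   # joint intensity probabilities
    p_a = p_joint.sum(axis=1)         # marginal of the fixed image
    p_b = p_joint.sum(axis=0)         # marginal of the moving image
    a_idx = np.clip(np.digitize(fixed.ravel(), a_edges) - 1, 0, bins - 1)
    b_idx = np.clip(np.digitize(moving.ravel(), b_edges) - 1, 0, bins - 1)
    eps = 1e-12                       # avoid log(0) for empty histogram bins
    return np.log((p_joint[a_idx, b_idx] + eps) / (p_a[a_idx] * p_b[b_idx] + eps))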
Abstract:
Multimodality, the interdependence of semiotic resources in a text, is an essential element of today's media. The term multimodality attends systematically to the social interpretation of a wide range of communicational forms used in meaning-making. A primary focus of social-semiotic multimodal analysis is on mapping how modal resources are used by people in a given social context. In November 2012 the “Ola ke ase” catchphrase, a play on “Hola ¿qué hace?”, appeared for the first time in Spain and was immediately adopted as a Twitter hashtag and an image macro series. Its viral spread on social networks has been tremendous, becoming a trending topic in various Spanish-speaking countries. The objective of the analysis is to examine how language and image work together in the “Ola ke ase” meme. The interplay between text and image in one of the original memes and some of its variations is quantitatively analysed by applying a social-semiotic approach. Results demonstrate how the “Ola ke ase” meme functions through its multimodal character and its non-standard orthography. The spread of countless variations of the meme shows the social process that goes on in the meaning-making of the semiotic elements.
Abstract:
We present a fully automatic method for multi-modal brain tumor segmentation. The proposed generative-discriminative hybrid model generates initial tissue probabilities, which are used subsequently for enhancing the classification and spatial regularization. The model has been evaluated on the BRATS2013 training set, which includes multimodal MRI images from patients with high- and low-grade gliomas. Our method is capable of segmenting the image into healthy (GM, WM, CSF) and pathological tissue (necrotic, enhancing and non-enhancing tumor, edema). We achieved state-of-the-art performance (Dice mean values of 0.69 and 0.8 for tumor subcompartments and complete tumor, respectively) within a reasonable timeframe (4 to 15 minutes).
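The reported Dice values measure the voxel overlap between a predicted and a reference tumor mask, Dice = 2|A ∩ B| / (|A| + |B|). The sketch below is a minimal illustration of that metric with hypothetical array names, not the BRATS evaluation code.

import numpy as np

def dice_score(prediction, reference):
    # Dice = 2 * |A intersect B| / (|A| + |B|) for binary segmentation masks.
    prediction = prediction.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    total = prediction.sum() + reference.sum()
    return 2.0 * intersection / total if total > 0 else 1.0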