838 results for multimodal perception
Abstract:
An algorithm for the real-time registration of a retinal video sequence captured with a scanning digital ophthalmoscope (SDO) to a retinal composite image is presented. This method is designed for a computer-assisted retinal laser photocoagulation system to compensate for retinal motion and hence enhance the accuracy, speed, and patient safety of retinal laser treatments. The procedure combines intensity-based and feature-based registration techniques. For the registration of an individual frame, the translational motion between the preceding and current frames is detected by normalized cross correlation. Next, vessel points on the current video frame are identified, and an initial transformation estimate is constructed from the calculated translation vector and the quadratic registration matrix of the previous frame. The vessel points are then iteratively matched to the segmented vessel centerline of the composite image to refine the initial transformation and register the video frame to the composite image. Criteria for image quality and algorithm convergence are introduced, which govern the exclusion of individual frames from the registration process and, if necessary, trigger a loss-of-tracking signal. The algorithm was successfully applied to ten video sequences recorded from patients. It achieved an average accuracy of 2.47 ± 2.0 pixels (∼23.2 ± 18.8 μm) over 2764 evaluated video frames, demonstrating that it meets the clinical requirements.
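The frame-to-frame translation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a central patch of the preceding frame serves as the template, scores it against the current frame with normalized cross correlation, and reads the translation vector off the correlation peak. The function names (`ncc_map`, `estimate_translation`) are hypothetical.

```python
import numpy as np

def ncc_map(frame, template):
    """Normalized cross-correlation of a template over a frame (valid region only)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((frame.shape[0] - th + 1, frame.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 0:
                out[y, x] = (p * t).sum() / denom  # 1.0 at a perfect match
    return out

def estimate_translation(prev_frame, curr_frame, box=8):
    """Estimate the (dy, dx) shift of curr_frame relative to prev_frame.

    A central 2*box x 2*box patch of the preceding frame is the template;
    the NCC peak in the current frame gives the translation vector.
    """
    h, w = prev_frame.shape
    y0, x0 = h // 2 - box, w // 2 - box            # template origin in prev_frame
    template = prev_frame[y0:y0 + 2 * box, x0:x0 + 2 * box]
    corr = ncc_map(curr_frame, template)
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    return peak_y - y0, peak_x - x0
```

In practice an FFT-based correlation (or a library routine such as scikit-image's `match_template`) would replace the nested loops for real-time use; the brute-force form above only makes the definition explicit.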
Abstract:
PET/CT guidance for percutaneous interventions allows biopsy of suspicious metabolically active bone lesions even when no morphological correlate is delineable in the CT images. Clinical use of PET/CT guidance with the conventional step-by-step technique is time-consuming and complicated, especially when the target lesion is not visible in the CT image. Our recently developed multimodal instrument guidance system (IGS) for PET/CT improved this situation. Nevertheless, even with the IGS, bone biopsies involve a trade-off between precision and intervention duration, which is proportional to the radiation exposure of patient and personnel. Since PET image acquisition and reconstruction may take up to 10 minutes, ideally only one time-consuming combined PET/CT acquisition should be needed during an intervention. If additional control images are required to check for patient movement/deformation, or to verify the final needle position in the target, only fast CT acquisitions should be performed. However, for precise instrument guidance that accounts for patient movement and/or deformation without a control PET image, it is essential to be able to transfer the position of the target, as identified in the original PET/CT, to the changed situation shown in the control CT.
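The target-transfer idea in the last sentence can be illustrated with a short sketch. Assuming a rigid registration between the original CT and the control CT has already been computed (e.g. with a toolkit such as SimpleITK), mapping the PET-identified target into the control CT is a single homogeneous-transform application. The function names and the 4x4 transform convention here are illustrative assumptions, not the paper's method, which must also handle non-rigid deformation.

```python
import numpy as np

def rigid_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector.

    Hypothetically obtained from a CT-to-control-CT registration.
    """
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transfer_target(target_xyz, T_orig_to_ctrl):
    """Map a target point (identified on the PET fused with the original CT,
    expressed in original-CT coordinates) into control-CT coordinates."""
    p = np.append(np.asarray(target_xyz, dtype=float), 1.0)  # homogeneous point
    return (T_orig_to_ctrl @ p)[:3]

# Illustrative use: the patient shifted by (5, -2, 3) mm between acquisitions,
# so the CT-CT registration yields a pure translation here.
T = rigid_transform(np.eye(3), [5.0, -2.0, 3.0])
target_in_control_ct = transfer_target([10.0, 10.0, 10.0], T)
```

A pure translation is of course the simplest case; the point of the sketch is only that, once the CT-CT registration is known, no second PET acquisition is needed to relocate the target.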
Abstract:
A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Signature and fingerprint data were acquired with both the desktop PC and the mobile portable hardware, and hand and iris data were acquired in the second scenario using the desktop PC. Acquisition was conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB enable new challenging research and evaluation of both monomodal and multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.
Abstract:
As more investigations into factors affecting the quality of life of patients with multiple sclerosis (MS) are undertaken, it is becoming increasingly apparent that certain comorbidities and associated symptoms commonly found in these patients differ in incidence, pathophysiology and other factors compared with the general population. Many of these MS-related symptoms are frequently ignored in assessments of disease status and are often not considered to be associated with the disease. Research into how such comorbidities and symptoms can be diagnosed and treated within the MS population is lacking. This information gap adds further complexity to disease management and represents an unmet need in MS, particularly as early recognition and treatment of these conditions can improve patient outcomes. In this manuscript, we sought to review the literature on the comorbidities and symptoms of MS and to summarize the evidence for treatments that have been or may be used to alleviate them.
Abstract:
Background: Visuoperceptual deficits in dementia are common and can reduce quality of life. Testing of visuoperceptual function is often confounded by impairments in other cognitive domains and by motor dysfunction. We aimed to develop, pilot, and test a novel visuocognitive prototype test battery that addresses these issues and is suitable for both clinical and functional imaging use. Methods: We recruited 23 participants (14 with dementia, 6 of whom had extrapyramidal motor features, and 9 age-matched controls). The novel Newcastle visual perception prototype battery (NEVIP-B-Prototype) included angle, color, face, motion, and form perception tasks, and an adapted response system; it allows task difficulty to be individualized. Participants were tested outside and inside a 3T functional magnetic resonance imaging (fMRI) scanner. The fMRI data were analyzed using SPM8. Results: All participants successfully completed the task inside and outside the scanner. The fMRI analysis showed activation regions corresponding well to the regional specializations of the visual association cortex. In both groups, there was significant activity in the ventral occipital-temporal region in the face and color tasks, whereas the motion task activated the V5 region. In the control group, the angle task activated the occipitoparietal cortex. Patients and controls showed similar levels of activation, except on the angle task, for which occipitoparietal activation was lower in patients than in controls. Conclusion: Distinct visuoperceptual functions can be tested in patients with dementia and extrapyramidal motor features when tests use individualized thresholds, adapted tasks, and specialized response systems.