985 results for swd: Multimodal System
Abstract:
This article shows how a multimodal trip-planning system can be developed at low cost and low risk, based on an open-source approach and 'de facto' standards. A fully open-source solution has been developed for a door-to-door public transport information system built on 'de facto' standards. Route calculation is performed with Graphserver, while the cartography is based on OpenStreetMap. The article also demonstrates how to export a real public transport timetable database, such as that of the operator ETM (Empresa de Transporte Metropolitano de València), to the Google Transit specification, so that routes can be computed both from our prototype and from Google Transit.
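To make the export step concrete, the following is a minimal sketch, under assumed table and column names, of how timetable records could be written out as GTFS (Google Transit specification) files such as stops.txt and stop_times.txt. The ETM database schema is not described in the abstract, so the SQLite file, queries and field names here are hypothetical.

```python
import csv
import sqlite3

# Hypothetical timetable database; the real ETM schema is not given in the abstract.
conn = sqlite3.connect("etm_timetables.db")

# stops.txt: one row per stop with id, name and WGS84 coordinates.
with open("stops.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["stop_id", "stop_name", "stop_lat", "stop_lon"])
    for stop_id, name, lat, lon in conn.execute(
        "SELECT id, name, latitude, longitude FROM stops"
    ):
        writer.writerow([stop_id, name, lat, lon])

# stop_times.txt: ordered arrival/departure times for each trip.
with open("stop_times.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["trip_id", "arrival_time", "departure_time", "stop_id", "stop_sequence"])
    for row in conn.execute(
        "SELECT trip_id, arrival, departure, stop_id, seq FROM stop_times ORDER BY trip_id, seq"
    ):
        writer.writerow(row)

conn.close()
```

Together with routes.txt, trips.txt and calendar.txt, such files can be zipped into a GTFS feed and loaded by route planners that consume the Google Transit specification, such as Graphserver.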
Abstract:
This paper presents a queue-based agent architecture for multimodal interfaces. Using a novel approach to intelligently organise both agents and input data, this system has the potential to outperform current state-of-the-art multimodal systems, while at the same time allowing greater levels of interaction and flexibility. This assertion is supported by simulation test results showing that significant improvements can be obtained over normal sequential agent scheduling architectures. For real usage, this translates into faster, more comprehensive systems, without the limited application domain that restricts current implementations.
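The abstract does not detail the queueing scheme, but one plausible reading is a priority queue of incoming multimodal events dispatched to whichever agent is registered for that modality. The sketch below illustrates that idea with invented class and method names; it is not the paper's architecture.

```python
import heapq
import itertools

class Agent:
    """Toy agent that handles events of a single modality."""
    def __init__(self, modality):
        self.modality = modality

    def handle(self, event):
        print(f"{self.modality} agent processing: {event}")

class QueueScheduler:
    """Priority queue of input events dispatched to modality-specific agents."""
    def __init__(self):
        self.agents = {}           # modality -> Agent
        self.queue = []            # (priority, tie-breaker, modality, event)
        self.counter = itertools.count()

    def register(self, agent):
        self.agents[agent.modality] = agent

    def push(self, modality, event, priority=10):
        heapq.heappush(self.queue, (priority, next(self.counter), modality, event))

    def run(self):
        while self.queue:
            _, _, modality, event = heapq.heappop(self.queue)
            self.agents[modality].handle(event)

sched = QueueScheduler()
sched.register(Agent("speech"))
sched.register(Agent("gesture"))
sched.push("gesture", "point_at(map, x=0.4, y=0.7)", priority=5)  # more urgent
sched.push("speech", "zoom in here", priority=10)
sched.run()
```

Compared with strictly sequential scheduling, reordering by priority lets the system service time-critical inputs (e.g. deictic gestures) before slower ones, which is the kind of gain the simulation results refer to.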
Abstract:
TESSA is a toolkit for experimenting with sensory augmentation. It includes hardware and software to facilitate rapid prototyping of interfaces that can enhance one sense using information gathered from another sense. The toolkit contains a range of sensors (e.g. ultrasonics, temperature sensors) and actuators (e.g. tactors or stereo sound), designed modularly so that inputs and outputs can be easily swapped in and out and customized using TESSA’s graphical user interface (GUI), with “real time” feedback. The system runs on a Raspberry Pi with a built-in touchscreen, providing a compact and portable form that is amenable for field trials. At CHI Interactivity, the audience will have the opportunity to experience sensory augmentation effects using this system, and design their own sensory augmentation interfaces.
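As an illustration of the kind of input-to-output mapping the toolkit is meant to prototype, here is a minimal sketch that converts a distance reading (e.g. from an ultrasonic sensor) into a vibration intensity for a tactor. The read_distance_cm and set_tactor_intensity functions are placeholders; the abstract does not describe TESSA's actual drivers or API.

```python
import random
import time

def read_distance_cm():
    # Placeholder for an ultrasonic sensor driver; readings are simulated here.
    return random.uniform(10, 300)

def set_tactor_intensity(level):
    # Placeholder for a tactor driver; prints instead of driving hardware.
    print(f"tactor intensity: {level:.2f}")

def distance_to_intensity(distance_cm, near=20.0, far=250.0):
    """Map distance to vibration: closer obstacles produce stronger vibration."""
    clamped = min(max(distance_cm, near), far)
    return 1.0 - (clamped - near) / (far - near)

for _ in range(5):
    set_tactor_intensity(distance_to_intensity(read_distance_cm()))
    time.sleep(0.1)
```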
Abstract:
Aim: The objective of the present study was to investigate the effect of a multimodal exercise intervention on frontal cognitive functions and kinematic gait parameters in patients with Alzheimer's disease. Methods: A sample of elderly patients with Alzheimer's disease (n=27) was assigned to a training group (n=14; aged 78.0±7.3 years) and a control group (n=13; aged 77.1±7.4 years). The multimodal exercise intervention included motor activities and cognitive tasks performed simultaneously. The participants attended a 1-h session three times a week for 16 weeks, while the control participants maintained their regular daily activities during the same period. Frontal cognitive functions were evaluated using the Frontal Assessment Battery, the Clock Drawing Test and the Symbol Search Subtest. The kinematic gait parameters (cadence, stride length and stride speed) were analyzed under two conditions: (i) free gait (single task); and (ii) gait with a frontal cognitive task (walking while counting down from 20 - dual task). Results and discussion: The patients in the intervention group significantly increased their scores on the frontal cognitive variables, the Frontal Assessment Battery (P<0.001) and the Symbol Search Subtest (P<0.001), after the 16-week period. The control group decreased their scores on the Clock Drawing Test (P=0.001) and increased the number of counting errors during the dual task (P=0.008) over the same period. Conclusion: The multimodal exercise intervention improved frontal cognitive functions in patients with Alzheimer's disease. © 2012 Japan Geriatrics Society.
Abstract:
Graduate program in Agronomy (Energy in Agriculture) - FCA
Abstract:
This paper does not propose a new technique for face representation or classification. Instead, the work described here investigates the evolution of an automatic system which, based on a currently common framework and starting from an empty memory, modifies its classifiers according to experience. In the experiments we reproduce, to a certain extent, the process of successive meetings. The results achieved, even though the number of different individuals is still small compared with off-line classifiers, are promising.
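The abstract leaves the update rule open; one simple reading of "starting from an empty memory" is a gallery of face descriptors that grows with each meeting: an unknown face is enrolled as a new identity, while a recognised face adds another exemplar for its identity. The sketch below, with invented names and a stand-in embed() function, illustrates that idea only; it is not the paper's classifier.

```python
import numpy as np

def embed(face_image):
    # Stand-in for a real face descriptor; the paper's representation is not specified here.
    return np.asarray(face_image, dtype=float).ravel()

class IncrementalFaceMemory:
    """Gallery that starts empty and grows as new people are met."""
    def __init__(self, threshold=0.5):
        self.exemplars = []    # list of (identity, embedding)
        self.threshold = threshold
        self.next_id = 0

    def meet(self, face_image):
        query = embed(face_image)
        best_id, best_dist = None, float("inf")
        for identity, emb in self.exemplars:
            dist = np.linalg.norm(query - emb)
            if dist < best_dist:
                best_id, best_dist = identity, dist
        if best_id is None or best_dist > self.threshold:
            best_id = self.next_id              # enrol a previously unseen person
            self.next_id += 1
        self.exemplars.append((best_id, query))  # experience updates the classifier
        return best_id

memory = IncrementalFaceMemory(threshold=0.5)
print(memory.meet([0.10, 0.20]), memory.meet([0.10, 0.25]), memory.meet([0.90, 0.10]))
```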
Abstract:
PET/CT guidance for percutaneous interventions allows biopsy of suspicious, metabolically active bone lesions even when no morphological correlate is delineable in the CT images. Clinical use of PET/CT guidance with the conventional step-by-step technique is time consuming and complicated, especially when the target lesion is not visible in the CT image. Our recently developed multimodal instrument guidance system (IGS) for PET/CT has improved this situation. Nevertheless, even with the IGS, bone biopsies involve a trade-off between precision and intervention duration, and the duration is proportional to the radiation exposure of patient and personnel. As PET image acquisition and reconstruction may take up to 10 minutes, ideally only one time-consuming combined PET/CT acquisition should be needed during an intervention. If additional control images are required to check for patient movement or deformation, or to verify the final needle position in the target, only fast CT acquisitions should be performed. However, for precise instrument guidance that accounts for patient movement and/or deformation without a control PET image, it is essential to be able to transfer the target position identified in the original PET/CT to the changed situation shown in the control CT.
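One common way to realise such a transfer, assuming a rigid CT-to-CT registration, is to estimate the transform between the original CT and the control CT and apply it to the PET-defined target point. The sketch below shows only that mapping step; the registration routine is stubbed out, since the paper's actual method is not given in the abstract.

```python
import numpy as np

def register_ct_to_ct(original_ct, control_ct):
    """Stub: return a 4x4 rigid transform mapping original-CT coordinates to
    control-CT coordinates (e.g. the output of an intensity-based registration)."""
    transform = np.eye(4)
    transform[:3, 3] = [2.0, -1.5, 0.5]   # example translation in mm
    return transform

def transfer_target(target_mm, transform):
    """Map a target point defined on the original PET/CT into the control CT."""
    homogeneous = np.append(np.asarray(target_mm, dtype=float), 1.0)
    return (transform @ homogeneous)[:3]

# Target identified on the PET of the original PET/CT study (mm, scanner coordinates).
target_in_original = [12.3, -45.0, 210.7]
T = register_ct_to_ct(original_ct=None, control_ct=None)  # stubbed inputs
print(transfer_target(target_in_original, T))
```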
Abstract:
CONCLUSION: Our self-developed planning and navigation system has proven its capacity for accurate surgery on the anterior and lateral skull base. With the incorporation of augmented reality, image-guided surgery will evolve into 'information-guided surgery'. OBJECTIVE: Microscopic or endoscopic skull base surgery is technically demanding and its outcome has a great impact on a patient's quality of life. The project aimed to develop and evaluate enabling navigation tools for simulation, planning, training, education, and surgical performance. This clinically applied technological research was complemented by a series of patients (n=406) treated with anterior and lateral skull base procedures between 1997 and 2006. MATERIALS AND METHODS: Optical tracking technology was used for positional sensing of instruments. A newly designed dynamic reference base, with specific registration techniques using a fine needle pointer or ultrasound, enables the surgeon to work with a target error of < 1 mm. An automatic registration assessment method, which provides the user with a color-coded fused representation of CT and MR images, indicates to the surgeon the location and extent of registration (in)accuracy. Integration of a small tracker camera mounted directly on the microscope permits an ergonomically advantageous way of working in the operating room. Additionally, guidance information (augmented reality) from multimodal datasets (CT, MRI, angiography) can be overlaid directly onto the surgical microscope view. The virtual simulator, as a training tool in endonasal and otological skull base surgery, provides an understanding of the anatomy as well as preoperative practice using real patient data. RESULTS: Using our navigation system, no major complications occurred, despite the fact that the series included difficult skull base procedures. An improved quality of surgical outcome was identified compared with our control group without navigation and with the literature. Surgical time was reduced and more minimally invasive approaches were possible. According to the participants' questionnaires, the educational effect of the virtual simulator in our residency program received a high ranking.
Abstract:
BACKGROUND: Digital imaging methods are a centrepiece of the diagnosis and management of macular disease. A recently developed imaging device combines simultaneous confocal scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT). The benefit of this technique for diagnosis and therapeutic follow-up is assessed by means of clinical examples. METHODS: The combined OCT-SLO system (Ophthalmic Technologies Inc., Toronto, Canada) allows confocal en-face fundus imaging and high-resolution OCT scanning at the same time. OCT images are obtained from transversal line scans. A single light source and an identical scanning rate yield pixel-to-pixel correspondence between images. Three-dimensional thickness maps are derived from C-scan stacking. RESULTS: We followed up patients with cystoid macular edema, pigment epithelium detachment, macular hole, branch vein occlusion, and vitreoretinal tractions during their course of therapy. The new imaging method illustrates the reduction of cystoid volume, e.g. after intravitreal injections of angiostatic drugs or steroids. C-scans are used to assess lesion diameters, visualize pathologies involving the vitreoretinal interface, and quantify changes in retinal thickness. CONCLUSION: The combined OCT-SLO system creates both topographic and tomographic images of the retina. New therapeutic options can be followed up closely by observing changes in lesion thickness and cyst volumes. Further studies are needed for clinical use.
Abstract:
This paper presents the software architecture of a framework that simplifies the development of applications in the area of Virtual and Augmented Reality. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system by a device management system that is based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote expert scenario.
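As a rough illustration of the plug-together idea, the following sketch wires prefabricated components into a small data-flow graph in the spirit of VRML/X3D routes. The node and field names are invented; the framework's actual API is not described in the abstract.

```python
class Node:
    """Minimal data-flow node: named output fields routed to other nodes' inputs."""
    def __init__(self, name):
        self.name = name
        self.routes = []            # (out_field, target_node, in_field)

    def route(self, out_field, target, in_field):
        self.routes.append((out_field, target, in_field))

    def emit(self, out_field, value):
        for field, target, in_field in self.routes:
            if field == out_field:
                target.receive(in_field, value)

    def receive(self, in_field, value):
        # A real component would transform the value and emit it on its own outputs.
        print(f"{self.name}.{in_field} <- {value}")

# Plug prefabricated components together instead of compiling monolithic C++ code.
tracker = Node("HeadTracker")
viewpoint = Node("Viewpoint")
tracker.route("position_changed", viewpoint, "set_position")

tracker.emit("position_changed", (0.0, 1.6, 2.5))
```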
Abstract:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics in TV scenes is common practice, but these elements cannot be considered an integrated part of the scene. This paper discusses the introduction of new services for interactive augmented television. We analyse the main aspects related to the whole chain of augmented reality production. Interactivity is one of the most important added values of digital television: this paper aims to break the model in which all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e. real-time composition of video and computer graphics - e.g. a real scene and freely selectable images or spatially rendered objects - edited and customized by the end user within the context of the user's set-top box and TV receiver.
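The basic operation in such end-user composition is blending a rendered graphics layer over the broadcast video frame. The sketch below shows plain per-pixel alpha compositing (the "over" operator) with NumPy as one assumed building block; it is not the paper's set-top-box implementation.

```python
import numpy as np

def composite_over(video_frame, graphics_rgba):
    """Blend an RGBA graphics layer over an RGB video frame (alpha 'over' operator)."""
    rgb = graphics_rgba[..., :3].astype(float)
    alpha = graphics_rgba[..., 3:4].astype(float) / 255.0
    out = alpha * rgb + (1.0 - alpha) * video_frame.astype(float)
    return out.astype(np.uint8)

# Toy 2x2 example: a half-transparent red overlay on a grey video frame.
frame = np.full((2, 2, 3), 128, dtype=np.uint8)
overlay = np.zeros((2, 2, 4), dtype=np.uint8)
overlay[..., 0] = 255      # red
overlay[..., 3] = 128      # roughly 50% opacity
print(composite_over(frame, overlay))
```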
Abstract:
For broadcasting purposes, mixed reality, the combination of real and virtual scene content, has become ubiquitous. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core requirement for realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a way to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the graphics processing unit (GPU) using an environment model and the current ToF camera image. Dynamic scene content is thereby extracted and tracked automatically, and this information is used for planning and aligning virtual content. An additional feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. The paper gives an overview of the whole system approach, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning.
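Depth keying itself reduces to a per-pixel comparison between the live ToF depth image and the static environment (background) model: pixels significantly closer than the background are foreground. The sketch below shows that test on the CPU for clarity; the threshold value and array names are assumptions, and the paper performs the operation on the GPU.

```python
import numpy as np

def depth_key(live_depth, background_depth, threshold_m=0.15):
    """Foreground mask: pixels that are clearly closer to the camera than the
    static environment model. Depths are per-pixel distances in metres."""
    return (background_depth - live_depth) > threshold_m

# Toy example: a 3x3 background at 4 m with an object at 2.5 m in the centre pixel.
background = np.full((3, 3), 4.0)
live = background.copy()
live[1, 1] = 2.5
print(depth_key(live, background).astype(int))
```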
Abstract:
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all subregions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
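Two quantities central to the benchmark are easy to state concretely: the Dice overlap between two binary segmentations, and label fusion by majority vote across algorithms. The sketch below implements both in their plainest form; the hierarchical, per-sub-region voting used in the paper is not reproduced here.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice score 2|A intersect B| / (|A| + |B|) for two binary masks."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(segmentations):
    """Fuse several binary segmentations: a voxel is labeled tumor if most agree."""
    stack = np.stack([s.astype(int) for s in segmentations])
    return stack.sum(axis=0) > (len(segmentations) / 2)

# Toy 1D 'scans' from three algorithms and a reference annotation.
ref  = np.array([0, 1, 1, 1, 0])
segs = [np.array([0, 1, 1, 0, 0]),
        np.array([0, 1, 1, 1, 1]),
        np.array([1, 1, 1, 1, 0])]
fused = majority_vote(segs)
print("fused:", fused.astype(int), "dice vs reference:", round(dice(fused, ref), 3))
```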
Abstract:
The ability to view and interact with 3D models has existed for a long time, yet vision-based 3D modeling has seen only limited success in applications, as it faces many technical challenges. Hand-held mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras and fast processors, are especially attractive for advancing the state of the art in virtual reality systems. Their ubiquity and fast Internet connections also open a path to distributed and collaborative development, but this path has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is rendering these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for visualizing, navigating and reviewing large-scale 3D CAD models, developed under the CEDAR (Collaborative Engineering Design and Review) project. It focuses on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to allow users to navigate through large-scale models. IT professionals, architects, civil engineers and oil industry experts took part in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about them. The lessons learned are valuable and are presented in this document. A quantitative study of the different navigation modes was subsequently carried out to determine which mode is best suited to a given situation.