824 results for decentralised data fusion framework
Abstract:
The last few years have seen the advent of high-throughput technologies to analyze various properties of the transcriptome and proteome of several organisms. The congruency of these different data sources, or lack thereof, can shed light on the mechanisms that govern cellular function. A central challenge for bioinformatics research is to develop a unified framework for combining the multiple sources of functional genomics information and testing associations between them, thus obtaining a robust and integrated view of the underlying biology. We present a graph theoretic approach to test the significance of the association between multiple disparate sources of functional genomics data by proposing two statistical tests, namely edge permutation and node label permutation tests. We demonstrate the use of the proposed tests by finding significant association between a Gene Ontology-derived "predictome" and data obtained from mRNA expression and phenotypic experiments for Saccharomyces cerevisiae. Moreover, we employ the graph theoretic framework to recast a surprising discrepancy presented in Giaever et al. (2002) between gene expression and knockout phenotype, using expression data from a different set of experiments.
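As a rough illustration of the node label permutation test described above, the following Python sketch (the function names and toy data are ours, not the authors' implementation) compares the edge overlap of two graphs defined over the same gene set with the overlap obtained after repeatedly shuffling the node labels of one graph:

```python
import random

def edge_overlap(edges_a, edges_b):
    """Number of gene pairs linked in both graphs (edges given as frozensets of node labels)."""
    return len(edges_a & edges_b)

def node_label_permutation_test(edges_a, edges_b, nodes, n_perm=1000, seed=0):
    """Empirical p-value for the association between two graphs on the same gene set:
    node labels of graph B are shuffled and the observed edge overlap is compared
    with the overlap obtained under each relabelling."""
    rng = random.Random(seed)
    observed = edge_overlap(edges_a, edges_b)
    nodes = list(nodes)
    hits = 0
    for _ in range(n_perm):
        shuffled = nodes[:]
        rng.shuffle(shuffled)
        relabel = dict(zip(nodes, shuffled))
        permuted_b = {frozenset(relabel[u] for u in edge) for edge in edges_b}
        if edge_overlap(edges_a, permuted_b) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy example: a GO-derived "predictome" graph and a co-expression graph over five genes.
genes = ["G1", "G2", "G3", "G4", "G5"]
predictome = {frozenset(p) for p in [("G1", "G2"), ("G2", "G3"), ("G4", "G5")]}
coexpression = {frozenset(p) for p in [("G1", "G2"), ("G2", "G3"), ("G3", "G5")]}
print(node_label_permutation_test(predictome, coexpression, genes))
```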
Abstract:
We propose a novel class of models for functional data exhibiting skewness or other shape characteristics that vary with spatial or temporal location. We use copulas so that the marginal distributions and the dependence structure can be modeled independently. Dependence is modeled with a Gaussian or t-copula, so that there is an underlying latent Gaussian process. We model the marginal distributions using the skew t family. The mean, variance, and shape parameters are modeled nonparametrically as functions of location. A computationally tractable inferential framework for estimating heterogeneous asymmetric or heavy-tailed marginal distributions is introduced. This framework provides a new set of tools for increasingly complex data collected in medical and public health studies. Our methods were motivated by and are illustrated with a state-of-the-art study of neuronal tracts in multiple sclerosis patients and healthy controls. Using the tools we have developed, we were able to find those locations along the tract most affected by the disease. However, our methods are general and highly relevant to many functional data sets. In addition to the application to one-dimensional tract profiles illustrated here, higher-dimensional extensions of the methodology could have direct applications to other biological data including functional and structural MRI.
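The copula construction can be sketched as follows. The snippet below simulates curves from a Gaussian copula (a latent Gaussian process over locations) with location-varying skewed marginals, using the skew-normal family from scipy as a stand-in for the paper's skew t marginals; all parameter choices are illustrative only:

```python
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(0)

# Locations along the tract and a latent Gaussian process (the Gaussian copula).
s = np.linspace(0.0, 1.0, 50)
cov = np.exp(-0.5 * ((s[:, None] - s[None, :]) / 0.1) ** 2) + 1e-8 * np.eye(len(s))
z = rng.multivariate_normal(np.zeros(len(s)), cov, size=200)   # latent Gaussian curves
u = norm.cdf(z)                                                # uniform margins via the copula

# Location-varying marginal parameters (mean, scale, shape).
mu = np.sin(2 * np.pi * s)           # mean varies along the tract
sigma = 0.5 + 0.3 * s                # spread varies along the tract
alpha = 4.0 * (s - 0.5)              # skewness changes sign mid-tract

# Map the uniforms through the inverse marginal CDFs to obtain the functional data.
curves = skewnorm.ppf(u, a=alpha, loc=mu, scale=sigma)
print(curves.shape)                  # (200 curves, 50 locations)
```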
Abstract:
INTRODUCTION: Recent advances in medical imaging have brought post-mortem minimally invasive computed tomography (CT) guided percutaneous biopsy to public attention. AIMS: The goals of the following study were to facilitate and automate post-mortem biopsy, to avoid the radiation exposure to the investigator that may occur when sampling tissue under CT guidance, and to minimize the number of needle insertion attempts to a single puncture per target. METHODS AND MATERIALS: Clinically approved and post-mortem tested ACN-III biopsy core needles (14 gauge x 160 mm) with an automatic pistol device (Bard Magnum, Medical Device Technologies, Denmark) were used for probe sampling. The needles were navigated in a gelatine/peas phantom, an ex vivo porcine model and subsequently in two human bodies using a navigation system (MEM centre/ISTB Medical Application Framework, Marvin, Bern, Switzerland) with a guidance frame and a CT scanner (Emotion 6, Siemens, Germany). RESULTS: Biopsy of all peas could be performed within a single attempt. The average distance between the inserted needle tip and the pea centre was 1.4 mm (n=10; SD 0.065 mm; range 0-2.3 mm). The targets in the porcine liver were also accurately punctured. The average distance between the needle tip and the target was 0.5 mm (range 0-1 mm). Biopsies of brain, heart, lung, liver, pancreas, spleen, and kidney were performed on human corpses. For each target the biopsy needle was inserted only once. The examination of one body with sampling of tissue probes at the above-mentioned locations took approximately 45 min. CONCLUSIONS: Post-mortem navigated biopsy can reliably provide tissue samples from different body locations. Since the positional data of the body and the biopsy needle are continuously updated using optical tracking, no control CT images are needed to verify the positional data, and no radiation exposure to the investigator has to be taken into account. Furthermore, the number of needle insertions per target can be reduced to a single one with the adequate accuracy proven ex vivo, and, in contrast to conventional CT guided biopsy, the insertion angle may be oblique. Navigation for minimally invasive tissue sampling is a useful addition to post-mortem CT guided biopsy.
Abstract:
PURPOSE OF REVIEW: Intensive care medicine consumes a high share of healthcare costs, and there is growing pressure to use the scarce resources efficiently. Accordingly, organizational issues and quality management have become an important focus of interest in recent years. Here, we will review current concepts of how outcome data can be used to identify areas requiring action. RECENT FINDINGS: Recently established models of outcome assessment reveal wide variability between individual ICUs, with respect to both outcome and resource use. Such variability implies that there are large differences in patient care processes not only within the ICU but also in pre-ICU and post-ICU care. Indeed, measures to improve the patient process in the ICU (including care of the critically ill, patient safety, and management of the ICU) have been presented in a number of recently published papers. SUMMARY: Outcome assessment models provide an important framework for benchmarking. They may help the individual ICU to spot appropriate fields of action, plan and initiate quality improvement projects, and monitor the consequences of such activity.
Abstract:
The present distribution of freshwater fish in the Alpine region has been strongly affected by colonization events occurring after the last glacial maximum (LGM), some 20,000 years ago. We use here a spatially explicit simulation framework to model and better understand their colonization dynamics in the Swiss Rhine basin. This approach is applied to the European bullhead (Cottus gobio), which is an ideal model organism for studying past demographic processes in fish, since it has not been managed by humans. The molecular diversity of eight sampled populations is simulated and compared to observed data at six microsatellite loci under an approximate Bayesian computation framework to estimate the parameters of the colonization process. Our demographic estimates fit well with current knowledge about the biology of this species, but they suggest that the Swiss Rhine basin was colonized very recently, after the Younger Dryas some 6600 years ago. We discuss the implications of this result, as well as the strengths and limits of the spatially explicit approach coupled with the approximate Bayesian computation framework.
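A minimal sketch of the rejection flavour of approximate Bayesian computation is given below; the colonization simulator, the prior ranges and the summary statistic are placeholders of our own, since the study's actual spatially explicit model and microsatellite summaries are far richer:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(params):
    """Hypothetical stand-in for the spatially explicit colonization simulator:
    returns a summary statistic (e.g. mean heterozygosity) computed from
    simulated microsatellite data under the given parameters."""
    colonization_age, migration_rate = params
    # Placeholder dynamics; the real simulator tracks demes across the river network.
    het = 0.6 * np.exp(-colonization_age / 20000.0) + 0.1 * migration_rate \
          + rng.normal(0.0, 0.01)
    return np.array([het])

observed = np.array([0.55])        # summary statistic from the sampled populations

accepted = []
for _ in range(20000):
    # Draw candidate parameters from (illustrative) prior ranges.
    candidate = (rng.uniform(1000, 20000), rng.uniform(0.0, 1.0))
    if np.linalg.norm(simulate_summary(candidate) - observed) < 0.01:
        accepted.append(candidate)

print(len(accepted), "accepted parameter sets approximate the posterior")
```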
Abstract:
This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since MARVIN uses only highly portable libraries, its applications run on Unix/Linux, Mac OS X and Microsoft Windows.
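The plug-together module idea can be pictured with a generic sketch such as the one below; it is not MARVIN's actual API, merely a hypothetical illustration of modules contributing functionality around a shared patient database:

```python
class Module:
    """Generic application module: receives the shared patient database and
    contributes its functionality when plugged into an application."""
    def attach(self, patient_db):
        raise NotImplementedError

class DicomImportModule(Module):
    def attach(self, patient_db):
        patient_db.setdefault("images", []).append("loaded DICOM series")

class MeshViewerModule(Module):
    def attach(self, patient_db):
        patient_db["viewer"] = "surface mesh renderer initialised"

def build_application(modules):
    """Plug modules together around a common patient database."""
    patient_db = {}
    for m in modules:
        m.attach(patient_db)
    return patient_db

print(build_application([DicomImportModule(), MeshViewerModule()]))
```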
Abstract:
OBJECTIVE: The objective of the study was to evaluate tissue reactions such as bone genesis, cartilage genesis and graft materials in the early phase of lumbar intertransverse process fusion in a rabbit model using computed tomography (CT) imaging with CT intensity (Hounsfield units) measurement, and to compare these data with histological results. MATERIALS AND METHODS: Lumbar intertransverse process fusion was performed on 18 rabbits. Four graft materials were used: autograft bone (n = 3); collagen membrane soaked with recombinant human bone morphogenetic protein-2 (rhBMP-2) (n = 5); granular calcium phosphate (n = 5); and granular calcium phosphate coated with rhBMP-2 (n = 5). All rabbits were euthanized 3 weeks post-operatively and lumbar spines were removed for CT imaging and histological examination. RESULTS: Computed tomography imaging demonstrated that each fusion mass component had the appropriate CT intensity range. CT also showed the different distributions and intensities of bone genesis in the fusion masses between the groups. Each component of tissue reactions was identified successfully on CT images using the CT intensity difference. Using CT color mapping, these observations could be easily visualized, and the results correlated well with histological findings. CONCLUSIONS: The use of CT intensity is an effective approach for observing and comparing early tissue reactions such as newly synthesized bone, newly synthesized cartilage, and graft materials after lumbar intertransverse process fusion in a rabbit model.
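The idea of separating fusion mass components by their CT intensity can be sketched as follows; the Hounsfield-unit thresholds below are assumptions for illustration, not the ranges measured in the study:

```python
import numpy as np

# Illustrative Hounsfield-unit ranges for components of the fusion mass
# (the thresholds are assumptions, not the values measured in the study).
HU_RANGES = {
    "soft tissue / cartilage": (0, 200),
    "newly synthesized bone": (200, 700),
    "graft granules": (700, 3000),
}

def classify_voxels(ct_slice_hu):
    """Label each voxel of a CT slice by the HU range it falls into."""
    labels = np.full(ct_slice_hu.shape, "background", dtype=object)
    for name, (lo, hi) in HU_RANGES.items():
        labels[(ct_slice_hu >= lo) & (ct_slice_hu < hi)] = name
    return labels

# Toy 2x3 patch of Hounsfield values.
patch = np.array([[50, 350, 900],
                  [-100, 250, 1200]])
print(classify_voxels(patch))
```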
Abstract:
BACKGROUND: In clinical practice a diagnosis is based on a combination of clinical history, physical examination and additional diagnostic tests. At present, diagnostic studies often report the accuracy of tests without taking into account the information already known from history and examination. Due to this lack of information, together with variations in design and quality of studies, conventional meta-analyses based on these studies will not show the accuracy of the tests in real practice. By using individual patient data (IPD) to perform meta-analyses, the accuracy of tests can be assessed in relation to other patient characteristics, and diagnostic algorithms for individual patients can be developed or evaluated. In this study we will examine these potential benefits in four clinical diagnostic problems in the field of gynaecology, obstetrics and reproductive medicine. METHODS/DESIGN: Based on earlier systematic reviews for each of the four clinical problems, studies are considered for inclusion. The first authors of the included studies will be invited to participate and share their original data. After assessment of validity and completeness, the acquired datasets are merged. Based on these data, a series of analyses will be performed, including a systematic comparison of the results of the IPD meta-analysis with those of a conventional meta-analysis, development of multivariable models for clinical history alone and for the combination of history, physical examination and relevant diagnostic tests, and development of clinical prediction rules for individual patients. These will be made accessible to clinicians. DISCUSSION: The use of IPD meta-analysis will allow the accuracy of diagnostic tests to be evaluated in relation to other relevant information. Ultimately, this could increase the efficiency of the diagnostic work-up, e.g. by reducing the need for invasive tests and/or improving the accuracy of the work-up. This study will assess whether these benefits of IPD meta-analysis over conventional meta-analysis can be exploited and will provide a framework for future IPD meta-analyses in diagnostic and prognostic research.
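A one-stage IPD analysis of the kind envisaged here can be sketched as follows; the pooled data are synthetic, and the model (a logistic regression with a study term) is only one of many possible specifications rather than the protocol's prescribed analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Synthetic individual patient data pooled from three (hypothetical) studies.
frames = []
for study in range(3):
    n = 200
    history = rng.integers(0, 2, n)          # finding from the clinical history
    test = rng.integers(0, 2, n)             # result of the index test
    logit = -1.0 + 0.8 * history + 1.2 * test + 0.3 * study
    disease = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    frames.append(pd.DataFrame({"study": study, "history": history,
                                "test": test, "disease": disease}))
ipd = pd.concat(frames, ignore_index=True)

# One-stage model: does the test add information beyond the history,
# while allowing for between-study differences in baseline risk?
model = smf.logit("disease ~ history + test + C(study)", data=ipd).fit(disp=False)
print(model.summary().tables[1])
```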
Abstract:
This paper presents the software architecture of a framework that simplifies the development of applications in the area of Virtual and Augmented Reality. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system with a device management system based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and the advantages of the presented framework are illustrated by an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote expert scenario.
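The data-flow-graph idea behind such a device management system can be illustrated with a generic sketch; the node class and the tracker/smoother/renderer chain below are hypothetical, not the framework's actual components:

```python
from typing import Callable, List, Optional

class Node:
    """A small data-flow node: pulls values from its input nodes and applies a function."""
    def __init__(self, fn: Callable[..., object], inputs: Optional[List["Node"]] = None):
        self.fn = fn
        self.inputs = inputs or []

    def evaluate(self) -> object:
        return self.fn(*(n.evaluate() for n in self.inputs))

# Plug prefabricated components together: a (simulated) tracker source, a smoothing
# filter, and a sink that would hand the pose to the VRML/X3D renderer.
tracker = Node(lambda: (0.10, 0.20, 0.95))                        # raw 3-D position from a device
smoother = Node(lambda p: tuple(round(v, 1) for v in p), [tracker])
renderer_input = Node(lambda p: f"update viewpoint to {p}", [smoother])

print(renderer_input.evaluate())
```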
Abstract:
Paramyxovirus cell entry is controlled by the concerted action of two viral envelope glycoproteins, the fusion (F) and the receptor-binding (H) proteins, which together with a cell surface receptor mediate plasma membrane fusion activity. The paramyxovirus F protein belongs to the class I viral fusion proteins, which typically contain two heptad repeat regions (HR). Particular to paramyxovirus F proteins is a long intervening sequence (IS) located between the two HR domains. To investigate the role of the IS domain in regulating fusogenicity, we mutated, in the IS domain of the canine distemper virus (CDV) F protein, a highly conserved leucine residue (L372) previously reported to cause a hyperfusogenic phenotype. Apart from one F mutant, which elicited significant defects in processing, transport competence, and fusogenicity, all remaining mutants were characterized by enhanced fusion activity despite normal or slightly impaired processing and cell surface targeting. Using anti-CDV-F monoclonal antibodies, modified conformational F states were detected in the F mutants compared to the parental protein. Despite these structural differences, coimmunoprecipitation assays did not reveal any drastic modulation in the avidity of the F/H interaction. However, we found that the F mutants had significantly enhanced fusogenicity at low temperature only, suggesting that they folded into conformations requiring less energy to activate fusion. Together, these data provide strong biochemical and functional evidence that the conserved leucine 372 at the base of the HRA coiled-coil of F(wt) controls the stabilization of the prefusogenic state, restraining the conformational switch and thereby preventing extensive cell-cell fusion activity.
Abstract:
Complementary to automatic extraction processes, Virtual Reality technologies provide an adequate framework for integrating human perception into the exploration of large data sets. In such a multisensory system, thanks to intuitive interactions, users can take advantage of all of their perceptual abilities in the exploration task. In this context, haptic perception, coupled with visual rendering, has been investigated over the last two decades, with significant achievements. In this paper, we present a survey of the exploitation of haptic feedback in the exploration of large data sets. For each haptic technique introduced, we describe its principles and its effectiveness.
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to formally specify, simulate and validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example; a realistic case study is also developed, modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
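As a much simplified stand-in for the timed-automata analysis, the sketch below merely checks a worst-case end-to-end latency budget for an assumed sensor-fusion-rendering pipeline; all component names and delay bounds are invented for illustration, and the verification described in the paper is carried out with UPPAAL rather than with ad hoc arithmetic like this:

```python
# Assumed per-cycle delay bounds (min, max) in milliseconds for the generic
# components of an augmented scene: sensor sources, data fusion, and rendering.
COMPONENTS = {
    "camera": (5, 15),
    "tracker": (2, 8),
    "fusion": (3, 10),
    "renderer": (10, 20),
}

END_TO_END_BUDGET_MS = 50   # assumed real-time requirement

def worst_case_latency(parallel_sources, sequential_stages):
    """Worst case: the slowest sensor source, then each downstream stage's upper bound."""
    return (max(COMPONENTS[s][1] for s in parallel_sources)
            + sum(COMPONENTS[s][1] for s in sequential_stages))

wc = worst_case_latency(["camera", "tracker"], ["fusion", "renderer"])
verdict = "within" if wc <= END_TO_END_BUDGET_MS else "violates"
print(f"worst-case end-to-end latency: {wc} ms ({verdict} the {END_TO_END_BUDGET_MS} ms budget)")
```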
Abstract:
Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize the overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the several sensors. Tracking sensors, as typically encountered in Mixed Reality applications, are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors based on Time Delay Estimation, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and the feasibility of this approach, we have examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
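Time Delay Estimation by cross-correlation can be sketched as follows; the two simulated sensor streams and the sampling rate are illustrative stand-ins, not data from the UBITRACK evaluation:

```python
import numpy as np

def estimate_delay(reference, delayed, rate_hz):
    """Estimate how far 'delayed' lags behind 'reference' (in seconds) as the lag
    that maximises the cross-correlation of the two tracking streams."""
    a = reference - np.mean(reference)
    b = delayed - np.mean(delayed)
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)
    return lag / rate_hz

# Two sensors observing the same motion; the second stream lags by 50 ms at 100 Hz.
rate = 100.0
t = np.arange(0, 10, 1 / rate)
motion = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)
sensor_a = motion + np.random.default_rng(3).normal(0.0, 0.01, t.size)
sensor_b = np.roll(motion, 5)                     # shifted by 5 samples = 50 ms

offset = estimate_delay(sensor_a, sensor_b, rate)
print(f"estimated temporal offset: {offset * 1000:.0f} ms")
```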
Abstract:
With the idea of a generic student app framework that can be adapted to the diverse requirements of individual universities, roughly 30 universities have joined forces in a development consortium within the Web working group of the ZKI. The goal is to evaluate a comprehensive compilation of all electronic study services at the participating institutions, to create overarching data and metadata models for describing these services, and to develop interfaces to the common campus management systems as well as to e-learning infrastructures (LMS, printing services, electronic catalogues, etc.). In a final step, study management apps for students will be built on top of this middleware; they bundle the various data and communication streams of the standardized services and communication channels and present them in a form that is easy for students to grasp and navigate. Designing the project as a decentralized development effort distributed across many universities under a central project management ensures that redundant development is avoided, that nationally standardized services can be offered, and that knowledge transfer between a large number of universities on the use of mobile devices (smartphones, tablets and the corresponding apps) is stimulated. This broad community of interest can also strengthen support by the vendors of campus management systems for the implementation of clear interface specifications. A further central element of the planning is an offering that lets app users build a personal e-portfolio that is sound under data protection law. Details can be found in the Project Goals chapter below.
Abstract:
We use electronic communication networks for more than simply traditional telecommunications: we access the news, buy goods online, file our taxes, contribute to public debate, and more. As a result, a wider array of privacy interests is implicated for users of electronic communications networks and services. This development calls into question the scope of electronic communications privacy rules. This paper analyses the scope of these rules, taking into account the rationale and the historical background of the European electronic communications privacy framework. We develop a framework for analysing the scope of electronic communications privacy rules using three approaches: (i) a service-centric approach, (ii) a data-centric approach, and (iii) a value-centric approach. We discuss the strengths and weaknesses of each approach. The current e-Privacy Directive contains a complex blend of the three approaches, which does not seem to be based on a thorough analysis of their strengths and weaknesses. The upcoming review of the directive announced by the European Commission provides an opportunity to improve the scoping of the rules.