953 results for open data capabilities
Abstract:
The physics program of the NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) experiment at the CERN SPS consists of three subjects. In the first stage of data taking (2007-2009), measurements of hadron production in hadron-nucleus interactions needed for neutrino (T2K) and cosmic-ray (Pierre Auger and KASCADE) experiments will be performed. In the second stage (2009-2010), hadron production in proton-proton and proton-nucleus interactions, needed as reference data for a better understanding of nucleus-nucleus reactions, will be studied. In the third stage (2009-2013), the energy dependence of hadron production properties will be measured in p+p and p+Pb interactions and in nucleus-nucleus collisions, with the aim of identifying the properties of the onset of deconfinement and finding evidence for the critical point of strongly interacting matter. The NA61 experiment was approved at CERN in June 2007. The first pilot run was performed in October 2007. Calibrations of all detector components have been completed successfully, and preliminary uncorrected spectra have been obtained. Track reconstruction and particle identification of a quality similar to NA49 have been achieved. The data and new detailed simulations confirm that the NA61 detector acceptance and particle identification capabilities cover the phase space required by the T2K experiment. This document reports on the progress made in the calibration and analysis of the 2007 data.
Abstract:
In this paper we introduce a cooperative environment between Interactive Digital TV (IDTV) and home networking, with the aim of allowing interactive TV applications and the controllers of in-home appliances to interact in a natural way. More specifically, our proposal consists of merging MHP (Multimedia Home Platform), one of the main standard frameworks for IDTV, with OSGi (Open Service Gateway Initiative), the most widely used open platform for Residential Gateways. To overcome the radically different nature of these specifications (the function-oriented MHP middleware and the service-oriented OSGi framework), we define a new kind of application, coined XbundLET. Although this software bridge is suitable for enabling interaction between MHP and OSGi applications in both directions, we focus here on our implementation experience in one direction: from MHP to the OSGi world.
Abstract:
We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactical purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) we suggest measures to shape the context of CSCL applications and support their initial and continuous use, and (2) we show how log files can be used to analyze how, when, and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data concerning the context of use are available.
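The kind of log-file aggregation described above can be sketched in a few lines. The snippet below is a minimal illustration, assuming a hypothetical CSV log with timestamp, user, and action columns rather than CommSy's actual log format:

```python
# Minimal sketch of the log-file analysis described above: aggregate
# how often, when, and by whom a system is used. The CSV log format
# (timestamp,user,action) is a hypothetical stand-in, not CommSy's format.
import csv
from collections import Counter
from datetime import datetime

def usage_profile(log_path):
    by_user, by_hour = Counter(), Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: timestamp,user,action
            ts = datetime.fromisoformat(row["timestamp"])
            by_user[row["user"]] += 1
            by_hour[ts.hour] += 1
    return by_user, by_hour

# by_user answers "by whom", by_hour answers "when"; cross-tabulating
# these counts with course phases would link usage to didactical context.
```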
Abstract:
BACKGROUND Ankle arthrodesis results in measurable improvements in terms of pain and function in patients with end-stage ankle arthritis. Arthroscopic ankle arthrodesis has gained increasing popularity, with reports of shorter hospital stays, shorter time to solid fusion, and equivalent union rates compared with open arthrodesis. However, there remains a lack of high-quality prospective data. METHODS We evaluated the results of open and arthroscopic ankle arthrodesis in a comparative case series of patients who were managed at two institutions and followed for two years. The primary outcome was the Ankle Osteoarthritis Scale score; secondary outcomes included the Short Form-36 physical and mental component scores, the length of hospital stay, and radiographic alignment. There were thirty patients in each group. RESULTS Both groups showed significant improvement in the Ankle Osteoarthritis Scale score and the Short Form-36 physical component score at one and two years. There was significantly greater improvement in the Ankle Osteoarthritis Scale score at one and two years, and a shorter hospital stay, in the arthroscopic arthrodesis group. Complications, surgical time, and radiographic alignment were similar between the two groups. CONCLUSIONS Open and arthroscopic ankle arthrodesis were associated with significant improvement in terms of pain and function as measured with the Ankle Osteoarthritis Scale score. Arthroscopic arthrodesis resulted in a shorter hospital stay and showed better outcomes at one and two years.
Abstract:
Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in duplication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to improve cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration within the medical image community, with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, and for enhancing segmentation algorithms.
Abstract:
BACKGROUND Open radical cystectomy (ORC) is associated with substantial blood loss and a high incidence of perioperative blood transfusions. Strategies to reduce blood loss and blood transfusion are warranted. OBJECTIVE To determine whether continuous norepinephrine administration combined with intraoperative restrictive hydration with Ringer's maleate solution can reduce blood loss and the need for blood transfusion. DESIGN, SETTING, AND PARTICIPANTS This was a double-blind, randomised, parallel-group, single-centre trial including 166 consecutive patients undergoing ORC with urinary diversion (UD). Exclusion criteria were severe hepatic or renal dysfunction, congestive heart failure, and contraindications to epidural analgesia. INTERVENTION Patients were randomly allocated to continuous norepinephrine administration starting at 2 μg/kg per hour, combined with Ringer's maleate solution at 1 ml/kg per hour until the bladder was removed and 3 ml/kg per hour thereafter (norepinephrine/low-volume group), or to 6 ml/kg per hour of Ringer's maleate solution throughout surgery (control group). OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Intraoperative blood loss and the percentage of patients requiring blood transfusions perioperatively were assessed. Data were analysed using nonparametric statistical models. RESULTS AND LIMITATIONS Total median blood loss was 800 ml (range: 300-1700) in the norepinephrine/low-volume group versus 1200 ml (range: 400-2800) in the control group (p<0.0001). In the norepinephrine/low-volume group, 27 of 83 patients (33%) required an average of 1.8 U (±0.8) of packed red blood cells (PRBCs). In the control group, 50 of 83 patients (60%) required an average of 2.9 U (±2.1) of PRBCs during hospitalisation (relative risk: 0.54; 95% confidence interval [CI], 0.38-0.77; p=0.0006). The absolute reduction in transfusion rate throughout hospitalisation was 28% (95% CI, 12-45). In this study, surgery was performed by three high-volume surgeons using a standardised technique, so whether these results are reproducible in other centres remains to be shown. CONCLUSIONS Continuous norepinephrine administration combined with restrictive hydration significantly reduces intraoperative blood loss, the rate of blood transfusions, and the number of PRBC units required per patient undergoing ORC with UD.
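As a quick sanity check on the reported numbers, the relative risk and its confidence interval can be recomputed from the transfusion counts. The sketch below uses the standard log-RR interval, which is an assumption; the trial's own analysis may have used a different method:

```python
# Quick check of the reported relative risk: 27/83 transfused in the
# norepinephrine/low-volume group vs 50/83 in the control group.
from math import exp, log, sqrt

a, n1 = 27, 83   # events / patients, norepinephrine/low-volume group
b, n2 = 50, 83   # events / patients, control group

rr = (a / n1) / (b / n2)
se = sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.54 (95% CI 0.38-0.77)
```

The result reproduces the abstract's relative risk of 0.54 with 95% CI 0.38-0.77.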
Abstract:
Background A beneficial effect of regional anesthesia on cancer-related outcome in various solid tumors has been proposed. The data on prostate cancer are conflicting, and reports on long-term cancer-specific survival are lacking. Methods In a retrospective, single-center study, outcomes of 148 consecutive patients with locally advanced prostate cancer pT3/4 who underwent retropubic radical prostatectomy (RRP) with general anesthesia combined with intra- and postoperative epidural analgesia (n=67) or with postoperative ketorolac-morphine analgesia (n=81) were reviewed. The median observation time was 14.00 years (range 10.87-17.75 years). Biochemical recurrence (BCR)-free, local and distant recurrence-free, cancer-specific, and overall survival were estimated using the Kaplan-Meier technique. Multivariate Cox proportional-hazards regression models were used to analyze clinicopathologic variables associated with disease progression and death. Results The survival estimates for BCR-free, local and distant recurrence-free, cancer-specific, and overall survival did not differ between the two groups (P=0.64, P=0.75, P=0.18, P=0.32, and P=0.07). For both groups, higher preoperative PSA (hazard ratio (HR) 1.02, 95% confidence interval (CI) 1.01-1.02, P<0.0001), increased specimen Gleason score (HR 1.24, 95% CI 1.06-1.46, P=0.007), and positive nodal status (HR 1.66, 95% CI 1.03-2.67, P=0.04) were associated with a higher risk of BCR. Increased specimen Gleason score predicted death from prostate cancer (HR 2.46, 95% CI 1.65-3.68, P<0.0001). Conclusions General anesthesia combined with epidural analgesia did not reduce the risk of cancer progression or improve survival after RRP for prostate cancer in this group of patients at high risk for disease progression, with a median observation time of 14.00 years.
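A sketch of this kind of analysis pipeline, using the lifelines library on a synthetic stand-in dataset (the study's actual data and covariate coding are not available here), might look as follows:

```python
# Sketch of the survival analysis described above: Kaplan-Meier estimates
# per analgesia group plus a multivariate Cox proportional-hazards model.
# The data frame below is synthetic, not the study's data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 148
df = pd.DataFrame({
    "years": rng.exponential(12, n).clip(0.1, 17.75),  # follow-up time
    "bcr": rng.integers(0, 2, n),                      # biochemical recurrence
    "epidural": rng.integers(0, 2, n),                 # analgesia group
    "psa": rng.lognormal(2.5, 0.7, n),
    "gleason": rng.integers(6, 10, n),
})

# Kaplan-Meier estimate per analgesia group
km = KaplanMeierFitter()
for grp, sub in df.groupby("epidural"):
    km.fit(sub["years"], sub["bcr"], label=f"epidural={grp}")
    print(grp, km.median_survival_time_)

# Multivariate Cox model: hazard ratios for PSA, Gleason score, group
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="bcr")
cph.print_summary()
```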
Abstract:
This paper presents an overview of the Mobile Data Challenge (MDC), a large-scale research initiative aimed at generating innovations around smartphone-based research, as well as community-based evaluation of mobile data analysis methodologies. First, we review the Lausanne Data Collection Campaign (LDCC), an initiative to collect a unique longitudinal smartphone dataset for the MDC. Then, we introduce the Open and Dedicated Tracks of the MDC, describe the specific datasets used in each of them, discuss the key design and implementation aspects introduced in order to generate privacy-preserving and scientifically relevant mobile data resources for wider use by the research community, and summarize the main research trends found among the 100+ challenge submissions. We conclude by discussing the main lessons learned from the participation of several hundred researchers worldwide in the MDC Tracks.
Abstract:
Molybdenum isotopes are increasingly widely applied in the Earth sciences. They are primarily used to investigate the oxygenation of Earth's ocean and atmosphere. However, more and more fields of application are being developed, such as magmatic and hydrothermal processes, planetary sciences, and the tracking of environmental pollution. Here, we present a proposal for a unified presentation of Mo isotope ratios in studies of mass-dependent isotope fractionation. We suggest that the δ98/95Mo of NIST SRM 3134 be defined as +0.25‰. The rationale is that the vast majority of published data are presented relative to reference materials that are similar, but not identical, and that are all slightly lighter than NIST SRM 3134. Our proposed data presentation allows a direct first-order comparison of almost all existing data with future work while referring to an international measurement standard. In particular, canonical δ98/95Mo values such as +2.3‰ for seawater and −0.7‰ for marine Fe–Mn precipitates can be kept for discussion. As recent publications show that the ocean molybdenum isotope signature is homogeneous, the IAPSO ocean water standard, or any other open-ocean water sample, is suggested as a secondary measurement standard, with a defined δ98/95Mo value of +2.34 ± 0.10‰ (2s).
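The proposed rescaling amounts to a simple chain rule for delta values. A minimal sketch follows, with placeholder offsets that are not recommendations for any specific reference material:

```python
# First-order conversion of published δ98/95Mo values onto the scale
# proposed above, where NIST SRM 3134 is defined as +0.25 ‰. 'offset' is
# the δ98/95Mo of the old in-house reference material on the new scale.
def to_nist3134_scale(delta_old, offset):
    """Exact chain rule for delta values (per mil); the first-order
    approximation is simply delta_old + offset."""
    return ((1 + delta_old / 1000) * (1 + offset / 1000) - 1) * 1000

# Example with placeholder values: a sample measured at +2.00 ‰ against
# an in-house standard that itself sits at +0.30 ‰ on the new scale:
print(to_nist3134_scale(2.00, 0.30))  # ≈ 2.3006, i.e. 2.30 to first order
```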
Abstract:
Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of patients surviving more than 5 years after diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. To evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (U133 Affymetrix gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e., short- vs. long-term survival) and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Because of the large heterogeneity observed within the prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death directly to the gene expression profile. We propose a Bayesian ensemble model for survival prediction that is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the Cox proportional hazards (CPH), Weibull, and accelerated failure time (AFT) survival models. We overcome the lack of conjugacy by using a latent variable formulation to model the covariate effects, which decreases the computation time for model fitting. Our proposed models also provide a model-free way to select important predictive prognostic markers based on controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished dataset of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares the results obtained by the Random Forest and Bayesian ensemble methods from biological and clinical perspectives and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
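The first modeling step, a Random Forest classifier with gene selection by feature importance, might be sketched as follows; the synthetic expression matrix is a stand-in for the U133 microarray data, which is not reproduced here:

```python
# Sketch of the binary survival classification step described above:
# Random Forest for short- vs long-term survival, then gene selection
# by feature importance. The data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 5000))   # 120 patients x 5000 expression probes
y = rng.integers(0, 2, 120)        # 0 = short-term, 1 = long-term survival

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

# Keep the top-ranked genes by importance as candidate prognostic markers
top_genes = np.argsort(rf.feature_importances_)[::-1][:50]
```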
Abstract:
The current state of health and biomedicine includes an enormous number of heterogeneous data 'silos', collected for different purposes and represented differently, that are presently impossible to share or analyze in toto. The greatest challenge for large-scale and meaningful analyses of health-related data is to achieve a uniform data representation for data extracted from heterogeneous source representations. Based upon an analysis and categorization of heterogeneities, a process for achieving comparable data content by using a uniform terminological representation is developed. This process addresses the types of representational heterogeneities that commonly arise in healthcare data integration problems. Specifically, this process uses a reference terminology and associated "maps" to transform heterogeneous data to a standard representation for comparability and secondary use. Capturing the quality and precision of the "maps" between local terms and reference terminology concepts enhances the meaning of the aggregated data, empowering end users with better-informed queries for subsequent analyses. A data integration case study in the domain of pediatric asthma illustrates the development and use of a reference terminology for creating comparable data from heterogeneous source representations. The contribution of this research is a generalized process for the integration of data from heterogeneous source representations, and this process can be applied and extended to other problems where heterogeneous data need to be merged.
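A minimal sketch of such a terminology map, with illustrative concept codes and quality scores rather than entries from any real reference terminology, could look like this:

```python
# Sketch of the mapping process described above: local terms are rewritten
# onto reference-terminology concepts, and each map carries quality and
# precision annotations that travel with the transformed data.
from dataclasses import dataclass

@dataclass
class ConceptMap:
    local_term: str
    concept_id: str      # reference terminology concept (illustrative)
    quality: str         # e.g. "exact", "broader", "narrower"
    precision: float     # mapper's confidence, 0..1

MAPS = {
    "wheezing episode": ConceptMap("wheezing episode", "REF:0001", "exact", 0.95),
    "asthma attack":    ConceptMap("asthma attack",    "REF:0001", "broader", 0.80),
}

def to_reference(record):
    """Rewrite a source record onto the uniform representation,
    keeping the map metadata so later queries can filter on it."""
    m = MAPS[record["term"]]
    return {**record, "concept": m.concept_id,
            "map_quality": m.quality, "map_precision": m.precision}

print(to_reference({"patient": 17, "term": "asthma attack"}))
```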
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, which is stored in the user's mind, and external information, which is represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task on relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and performed one-dimensional nominal, ordinal, interval, and ratio tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance on relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness. In particular, the more external representations used, the better the search task performance, and the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
Neodymium (Nd) isotopes are an important geochemical tool to trace present and past water mass mixing as well as continental inputs. The distribution of Nd concentrations in open ocean surface waters (0-100 m) is generally assumed to be controlled by lateral mixing of Nd from coastal surface currents and by removal through reversible particle scavenging. However, using 228Ra activity as an indicator of coastal water mass influence, the surface water Nd concentration data available on key oceanic transects, taken as a whole, do not support the above scenario. From a global compilation of available data, we find that more stratified regions are generally associated with low surface Nd concentrations. This implies that upper ocean vertical supply may be an as yet neglected primary factor in determining the basin-scale variations of surface water Nd concentrations. Similar to the mechanism of nutrient supply, it is likely that stratification inhibits vertical supply of Nd from the subsurface thermocline waters and thus the magnitude of the Nd flux to the surface layer. Consistently, the estimated input flux of Nd to the surface layer required to maintain the observed concentrations could be nearly two orders of magnitude larger than the riverine/dust flux, and also larger than the model-based estimate of the shelf-derived coastal flux. In addition, preliminary results from modeling experiments reveal that the input from shallow boundary sources, riverine input, and release from dust are actually not the primary factors controlling Nd concentrations, most notably in the Pacific and Southern Ocean surface waters.
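The flux argument can be illustrated with a steady-state box model for the surface-layer Nd budget; all numbers below are placeholders for illustration, not values from the study:

```python
# Illustrative steady-state box model: if scavenging removes Nd on a
# residence time tau, the input flux needed to maintain a concentration C
# in a mixed layer of depth h is F = C * h / tau. Placeholder values only.
C   = 10e-12 * 1025      # 10 pmol/kg -> mol per m^3 (seawater ~1025 kg/m^3)
h   = 100.0              # mixed-layer depth, m
tau = 0.5                # scavenging residence time, years

F = C * h / tau          # required input flux, mol m^-2 yr^-1
print(f"required input flux: {F:.2e} mol m^-2 yr^-1")
# Comparing F against independent riverine/dust flux estimates is what
# points to vertical supply from the thermocline as the missing term.
```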
Abstract:
High-throughput assays, such as the yeast two-hybrid system, have generated a huge amount of protein-protein interaction (PPI) data in the past decade. This tremendously increases the need for reliable methods to systematically and automatically suggest protein functions and the relationships between proteins. With the available PPI data, it is now possible to study functions and relationships in the context of a large-scale network. To date, several network-based schemes have been proposed to effectively annotate protein functions on a large scale. However, due to the noise inherent in high-throughput data generation, new methods and algorithms should be developed to increase the reliability of functional annotations. Previous work in a yeast PPI network (Samanta and Liang, 2003) has shown that the local connection topology, particularly for two proteins sharing an unusually large number of neighbors, can predict functional associations between proteins and hence suggest their functions. One advantage of that work is that the algorithm is not sensitive to noise (false positives) in high-throughput PPI data. In this study, we improved their prediction scheme by developing a new algorithm and new methods, which we applied to a human PPI network to make a genome-wide functional inference. We used the new algorithm to measure and reduce the influence of hub proteins on detecting functionally associated proteins. We used the annotations of the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) as independent and unbiased benchmarks to evaluate our algorithms and methods within the human PPI network. We showed that, compared with the previous work of Samanta and Liang, the algorithm and methods developed in this study improved the overall quality of functional inferences for human proteins. By applying the algorithms to the human PPI network, we obtained 4,233 significant functional associations among 1,754 proteins. Further comparison of their KEGG and GO annotations allowed us to assign 466 KEGG pathway annotations to 274 proteins and 123 GO annotations to 114 proteins, with estimated false discovery rates of <21% for KEGG and <30% for GO. We clustered 1,729 proteins by their functional associations and performed pathway analysis to identify several subclusters that are highly enriched in certain signaling pathways. In particular, we performed a detailed analysis of a subcluster enriched in the transforming growth factor β signaling pathway (P < 10^-50), which is important in cell proliferation and tumorigenesis. Analysis of another four subclusters also suggested potential new players in six signaling pathways worthy of further experimental investigation. Our study gives clear insight into the common neighbor-based prediction scheme and provides a reliable method for large-scale functional annotation in this post-genomic era.
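The common-neighbor significance test underlying this scheme (after Samanta and Liang, 2003) can be sketched as a hypergeometric tail probability; the network sizes below are illustrative:

```python
# Sketch of the common-neighbor test: the chance that two proteins with
# n1 and n2 interaction partners share at least m of them by chance, in
# a network of N proteins, follows a hypergeometric distribution.
from scipy.stats import hypergeom

def shared_neighbor_pvalue(N, n1, n2, m):
    """P(at least m shared partners arising by chance alone)."""
    # Population of N proteins, n1 of which are partners of protein A;
    # draw the n2 partners of protein B and count the overlap.
    return hypergeom(N, n1, n2).sf(m - 1)

# Example: two well-connected proteins sharing 10 partners in a
# 7,000-protein network (illustrative numbers):
p = shared_neighbor_pvalue(7000, 50, 60, 10)
print(f"p = {p:.3e}")  # a tiny p-value flags a likely functional association
```

Down-weighting hub proteins, as the study proposes, amounts to correcting this statistic for the disproportionate number of shared neighbors that high-degree nodes accumulate by chance.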
Abstract:
Dendrogeomorphology uses information recorded in the roots, trunks, and branches of trees and bushes located in the fluvial system to complement (or sometimes even replace) systematic and palaeohydrological records of past floods. The application of dendrogeomorphic data sources and methods to palaeoflood analysis over nearly 40 years has improved frequency and magnitude estimations of past floods. Nevertheless, research carried out so far has shown that the dendrogeomorphic indicators traditionally used (mainly scar evidence), and their use to infer frequency and magnitude, have been restricted to a small, limited set of applications. New possibilities with enormous potential remain unexplored. Future research on palaeoflood frequency and magnitude using dendrogeomorphic data sources should: (1) test the application of isotopic indicators (16O/18O ratio) to discover the meteorological origin of past floods; (2) use different dendrogeomorphic indicators to estimate peak flows with 2D (and 3D) hydraulic models and study how they relate to other palaeostage indicators; (3) investigate improved calibration of 2D hydraulic model parameters (roughness); and (4) apply statistics-based cost-benefit analysis to select optimal mitigation measures. This paper presents an overview of these innovative methodologies, with a focus on their capabilities and limitations in the reconstruction of recent floods and palaeofloods.