953 results for LOD (Linked Open Data)
Abstract:
We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactic purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) We suggest measures to shape the context of CSCL applications and support their initial and continuous use. (2) We show how log files can be used to analyze how, when and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data concerning the context of use are available.
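The kind of log file analysis described above can be sketched in a few lines: counting events per user and per hour of day answers the "how, when and by whom" question. The log format and field names below are invented for illustration; the abstract does not specify CommSy's actual log layout.

```python
import re
from collections import Counter

# Hypothetical CommSy-style access log lines: "timestamp user action".
LOG_LINES = [
    "2003-04-07T09:15:02 alice view_material",
    "2003-04-07T09:17:40 bob post_announcement",
    "2003-04-07T21:03:11 alice view_material",
    "2003-04-08T10:42:55 carol edit_page",
]

LINE_RE = re.compile(r"^(\d{4}-\d{2}-\d{2})T(\d{2}):\d{2}:\d{2} (\S+) (\S+)$")

by_user = Counter()   # who uses the system
by_hour = Counter()   # when the system is used
for line in LOG_LINES:
    m = LINE_RE.match(line)
    if m:
        _date, hour, user, _action = m.groups()
        by_user[user] += 1
        by_hour[hour] += 1

print(by_user.most_common(1))  # most active user
print(dict(by_hour))           # distribution over hours of the day
```

As the abstract cautions, such counts only become interpretable alongside contextual data about the course and its participants.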
Abstract:
BACKGROUND Ankle arthrodesis results in measurable improvements in terms of pain and function in patients with end-stage ankle arthritis. Arthroscopic ankle arthrodesis has gained increasing popularity, with reports of shorter hospital stays, shorter time to solid fusion, and equivalent union rates when compared with open arthrodesis. However, there remains a lack of high-quality prospective data. METHODS We evaluated the results of open and arthroscopic ankle arthrodesis in a comparative case series of patients who were managed at two institutions and followed for two years. The primary outcome was the Ankle Osteoarthritis Scale score, and secondary outcomes included the Short Form-36 physical and mental component scores, the length of hospital stay, and radiographic alignment. There were thirty patients in each group. RESULTS Both groups showed significant improvement in the Ankle Osteoarthritis Scale score and the Short Form-36 physical component score at one and two years. There was significantly greater improvement in the Ankle Osteoarthritis Scale score at one and two years, and a shorter hospital stay, in the arthroscopic arthrodesis group. Complications, surgical time, and radiographic alignment were similar between the two groups. CONCLUSIONS Open and arthroscopic ankle arthrodesis were associated with significant improvement in terms of pain and function as measured with the Ankle Osteoarthritis Scale score. Arthroscopic arthrodesis resulted in a shorter hospital stay and showed better outcomes at one and two years.
Abstract:
Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in replication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration for the medical image community with a data-centric concept and semantically driven search option for anatomical structures. The repository has been proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, or to enhance segmentation algorithms.
Abstract:
BACKGROUND Open radical cystectomy (ORC) is associated with substantial blood loss and a high incidence of perioperative blood transfusions. Strategies to reduce blood loss and blood transfusion are warranted. OBJECTIVE To determine whether continuous norepinephrine administration combined with intraoperative restrictive hydration with Ringer's maleate solution can reduce blood loss and the need for blood transfusion. DESIGN, SETTING, AND PARTICIPANTS This was a double-blind, randomised, parallel-group, single-centre trial including 166 consecutive patients undergoing ORC with urinary diversion (UD). Exclusion criteria were severe hepatic or renal dysfunction, congestive heart failure, and contraindications to epidural analgesia. INTERVENTION Patients were randomly allocated to continuous norepinephrine administration starting with 2 μg/kg per hour combined with 1 ml/kg per hour until the bladder was removed, then to 3 ml/kg per hour of Ringer's maleate solution (norepinephrine/low-volume group) or 6 ml/kg per hour of Ringer's maleate solution throughout surgery (control group). OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Intraoperative blood loss and the percentage of patients requiring blood transfusions perioperatively were assessed. Data were analysed using nonparametric statistical models. RESULTS AND LIMITATIONS Total median blood loss was 800 ml (range: 300-1700) in the norepinephrine/low-volume group versus 1200 ml (range: 400-2800) in the control group (p<0.0001). In the norepinephrine/low-volume group, 27 of 83 patients (33%) required an average of 1.8 U (±0.8) of packed red blood cells (PRBCs). In the control group, 50 of 83 patients (60%) required an average of 2.9 U (±2.1) of PRBCs during hospitalisation (relative risk: 0.54; 95% confidence interval [CI], 0.38-0.77; p=0.0006). The absolute reduction in transfusion rate throughout hospitalisation was 28% (95% CI, 12-45). 
In this study, surgery was performed by three high-volume surgeons using a standardised technique, so whether these significant results are reproducible in other centres needs to be shown. CONCLUSIONS Continuous norepinephrine administration combined with restrictive hydration significantly reduces intraoperative blood loss, the rate of blood transfusions, and the number of PRBC units required per patient undergoing ORC with UD.
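The reported relative risk and its confidence interval can be recomputed from the raw counts given in the abstract (27/83 transfused in the norepinephrine/low-volume group versus 50/83 in the control group), using the standard log-relative-risk method:

```python
import math

a, n1 = 27, 83   # transfused / total, norepinephrine/low-volume group
c, n2 = 50, 83   # transfused / total, control group

rr = (a / n1) / (c / n2)
# Standard error of log(RR) for two independent proportions.
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR = 0.54, 95% CI 0.38-0.77
```

The result reproduces the abstract's figures (relative risk 0.54; 95% CI, 0.38-0.77) exactly.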
Abstract:
Background A beneficial effect of regional anesthesia on cancer related outcome in various solid tumors has been proposed. The data on prostate cancer is conflicting and reports on long-term cancer specific survival are lacking. Methods In a retrospective, single-center study, outcomes of 148 consecutive patients with locally advanced prostate cancer pT3/4 who underwent retropubic radical prostatectomy (RRP) with general anesthesia combined with intra- and postoperative epidural analgesia (n=67) or with postoperative ketorolac-morphine analgesia (n=81) were reviewed. The median observation time was 14.00 years (range 10.87-17.75 yrs). Biochemical recurrence (BCR)-free, local and distant recurrence-free, cancer-specific, and overall survival were estimated using the Kaplan-Meier technique. Multivariate Cox proportional-hazards regression models were used to analyze clinicopathologic variables associated with disease progression and death. Results The survival estimates for BCR-free, local and distant recurrence-free, cancer-specific survival and overall survival did not differ between the two groups (P=0.64, P=0.75, P=0.18, P=0.32 and P=0.07). For both groups, higher preoperative PSA (hazard ratio (HR) 1.02, 95% confidence interval (CI) 1.01-1.02, P<0.0001), increased specimen Gleason score (HR 1.24, 95% CI 1.06-1.46, P=0.007) and positive nodal status (HR 1.66, 95% CI 1.03-2.67, P=0.04) were associated with higher risk of BCR. Increased specimen Gleason score predicted death from prostate cancer (HR 2.46, 95% CI 1.65-3.68, P<0.0001). Conclusions General anaesthesia combined with epidural analgesia did not reduce the risk of cancer progression or improve survival after RRP for prostate cancer in this group of patients at high risk for disease progression with a median observation time of 14.00 yrs.
Abstract:
BACKGROUND: Although brucellosis (Brucella spp.) and Q Fever (Coxiella burnetii) are zoonoses of global importance, very little high-quality data are available from West Africa. METHODS/PRINCIPAL FINDINGS: A serosurvey was conducted in Togo's main livestock-raising zone in 2011 in 25 randomly selected villages, including 683 people, 596 cattle, 465 sheep and 221 goats. Additionally, 464 transhumant cattle from Burkina Faso were sampled in 2012. The serological analyses performed were the Rose Bengal Test and ELISA for brucellosis, and ELISA and the immunofluorescence assay (IFA) for Q Fever. Brucellosis did not appear to pose a major human health problem in the study zone, with only 7 seropositive participants. B. abortus was isolated from 3 bovine hygroma samples, and is likely to be the predominant circulating strain. This may explain the observed seropositivity amongst village cattle (9.2%, 95%CI: 4.3-18.6%) and transhumant cattle (7.3%, 95%CI: 3.5-14.7%), with an absence of seropositive small ruminants. Exposure of livestock and people to C. burnetii was common, potentially influenced by cultural factors. People of Fulani ethnicity had greater livestock contact and a significantly higher seroprevalence than other ethnic groups (Fulani: 45.5%, 95%CI: 37.7-53.6%; non-Fulani: 27.1%, 95%CI: 20.6-34.7%). Appropriate diagnostic test cut-off values in endemic settings require further investigation. Both brucellosis and Q Fever appeared to impact on livestock production. Seropositive cows were more likely to have aborted a foetus during the previous year than seronegative cows, when adjusted for age. The odds were 3.8 times higher (95%CI: 1.2-12.1) for brucellosis and 6.7 times higher (95%CI: 1.3-34.8) for Q Fever. CONCLUSIONS: This is the first epidemiological study of zoonoses in Togo in linked human and animal populations, providing much-needed data for West Africa. Exposure to Brucella and C. burnetii is common, but further research is needed into the clinical and economic impact.
Abstract:
Internet of Things based systems are anticipated to gain widespread use in industrial applications. Standardization efforts like 6LoWPAN and the Constrained Application Protocol (CoAP) have made the integration of wireless sensor nodes possible using Internet technology and web-like access to data (RESTful service access). While there are still some open issues, the interoperability problem in the lower layers can now be considered solved from an enterprise software vendor's point of view. One possible next step towards integrating real-world objects into enterprise systems, and solving the corresponding interoperability problems at higher levels, is to use semantic web technologies. We introduce an abstraction of real-world objects, called Semantic Physical Business Entities (SPBE), using Linked Data principles. We show that this abstraction fits well into enterprise systems, as SPBEs allow a business-object-centric view on real-world objects instead of a purely device-centric view. We outline the interdependencies between how services in an enterprise system are currently used and how they could be used in a semantic real-world-aware enterprise system, arguing for the need for semantic services and semantic knowledge repositories. We introduce a lightweight query language, which we use to perform a quantitative analysis of our approach to demonstrate its feasibility.
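A minimal sketch of the SPBE idea, with all URIs and predicates invented for illustration: real-world objects are described as subject-predicate-object triples in the Linked Data style, and a tiny triple-pattern match (standing in for the paper's lightweight query language, whose actual syntax is not given here) yields the business-object-centric view.

```python
# Invented example triples about a tracked pallet and the sensor observing it.
triples = [
    ("urn:spbe:pallet-42", "rdf:type", "ex:Pallet"),
    ("urn:spbe:pallet-42", "ex:observedBy", "urn:device:sensor-7"),
    ("urn:spbe:pallet-42", "ex:temperature", "4.2"),
    ("urn:device:sensor-7", "rdf:type", "ex:WirelessSensorNode"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Business-object-centric view: everything known about the pallet,
# without having to ask which device produced each value.
print(query(subject="urn:spbe:pallet-42"))
```

The point of the abstraction is visible even in this toy: the enterprise system queries the business entity (the pallet), while the device remains an ordinary, linked fact about it.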
Abstract:
This paper presents an overview of the Mobile Data Challenge (MDC), a large-scale research initiative aimed at generating innovations around smartphone-based research, as well as community-based evaluation of mobile data analysis methodologies. First, we review the Lausanne Data Collection Campaign (LDCC), an initiative to collect a unique longitudinal smartphone dataset for the MDC. Then, we introduce the Open and Dedicated Tracks of the MDC, describe the specific datasets used in each of them, discuss the key design and implementation aspects introduced in order to generate privacy-preserving and scientifically relevant mobile data resources for wider use by the research community, and summarize the main research trends found among the 100+ challenge submissions. We conclude by discussing the main lessons learned from the participation of several hundred researchers worldwide in the MDC Tracks.
Abstract:
Molybdenum isotopes are increasingly widely applied in Earth Sciences. They are primarily used to investigate the oxygenation of Earth's ocean and atmosphere. However, more and more fields of application are being developed, such as magmatic and hydrothermal processes, planetary sciences or the tracking of environmental pollution. Here, we present a proposal for a unifying presentation of Mo isotope ratios in the studies of mass-dependent isotope fractionation. We suggest that the δ98/95Mo of the NIST SRM 3134 be defined as +0.25‰. The rationale is that the vast majority of published data are presented relative to reference materials that are similar, but not identical, and that are all slightly lighter than NIST SRM 3134. Our proposed data presentation allows a direct first-order comparison of almost all old data with future work while referring to an international measurement standard. In particular, canonical δ98/95Mo values such as +2.3‰ for seawater and −0.7‰ for marine Fe–Mn precipitates can be kept for discussion. As recent publications show that the ocean molybdenum isotope signature is homogeneous, the IAPSO ocean water standard or any other open ocean water sample is suggested as a secondary measurement standard, with a defined δ98/95Mo value of +2.34 ± 0.10‰ (2s).
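The proposed renormalization is simple first-order arithmetic on delta values. The sketch below assumes an old in-house reference material that itself measures −0.25‰ against NIST SRM 3134 (an illustrative value; each laboratory must determine its own offset), which is exactly the case where canonical values carry over unchanged to the new scale.

```python
def rescale(delta_old, offset_old_vs_nist, nist_anchor=0.25):
    """First-order conversion of a delta value (permil) onto the proposed
    NIST SRM 3134 = +0.25 permil scale.

    delta_old          : value reported against the old reference material
    offset_old_vs_nist : delta of the old reference measured against NIST SRM 3134
    """
    # On the new scale the old reference sits at nist_anchor + offset, and a
    # sample's value shifts by that same amount (first order in permil).
    return delta_old + offset_old_vs_nist + nist_anchor

# Seawater at +2.3 permil vs. an old reference that is 0.25 permil lighter
# than NIST SRM 3134 keeps its canonical value on the new scale.
print(rescale(2.3, -0.25))  # 2.3
```

This is why the abstract's anchor of +0.25‰ was chosen: the commonly used older references are all slightly lighter than NIST SRM 3134, so canonical values such as +2.3‰ for seawater survive the change of scale.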
Abstract:
Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of patients surviving more than 5 years after diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. To evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (U133 Affymetrix gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e. short- vs. long-term survival) and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Due to the large heterogeneity observed within the prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death directly to the gene expression profile. We propose a Bayesian ensemble model for survival prediction which is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the CPH, Weibull, and AFT survival models. We overcome the lack of conjugacy using a latent variable formulation to model the covariate effects, which decreases computation time for model fitting.
Also, our proposed models provide a model-free way to select important predictive prognostic markers based on controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished dataset of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares the results obtained by the Random Forest and Bayesian ensemble methods from biological/clinical perspectives and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
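The marker-selection step based on controlling false discovery rates is not spelled out in the abstract; the standard Benjamini-Hochberg step-up rule sketched below is one plausible instantiation, shown here on toy p-values for five hypothetical genes.

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the indices of hypotheses rejected at FDR level alpha
    by the Benjamini-Hochberg step-up procedure."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    m = len(pvalues)
    k = 0  # largest rank whose p-value passes the BH threshold
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    # Reject all hypotheses up to and including rank k (in p-value order).
    return sorted(order[:k])

# Toy p-values; indices 0 and 1 survive FDR control at alpha = 0.05.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.57]))  # [0, 1]
```

Note that the step-up rule rejects every hypothesis ranked at or below the largest rank passing its threshold, even if some intermediate p-value fails its own comparison.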
Abstract:
The current state of health and biomedicine includes an enormous number of heterogeneous data "silos", collected for different purposes and represented differently, that are presently impossible to share or analyze in toto. The greatest challenge for large-scale and meaningful analyses of health-related data is to achieve a uniform data representation for data extracted from heterogeneous source representations. Based upon an analysis and categorization of heterogeneities, a process for achieving comparable data content by using a uniform terminological representation is developed. This process addresses the types of representational heterogeneities that commonly arise in healthcare data integration problems. Specifically, it uses a reference terminology and associated "maps" to transform heterogeneous data to a standard representation for comparability and secondary use. Capturing the quality and precision of the "maps" between local terms and reference terminology concepts enhances the meaning of the aggregated data, empowering end users with better-informed queries for subsequent analyses. A data integration case study in the domain of pediatric asthma illustrates the development and use of a reference terminology for creating comparable data from heterogeneous source representations. The contribution of this research is a generalized process for integrating data from heterogeneous source representations, and this process can be applied and extended to other problems where heterogeneous data need to be merged.
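The mapping step described above can be sketched as a lookup table from local source terms to reference terminology concepts, where each map also carries a quality annotation that survives into the transformed record. All terms, codes, and quality labels below are invented for illustration; they are not taken from the study.

```python
# local term -> (reference concept, map quality); all values are invented.
term_maps = {
    "asthma attack":  ("REF:A001", "exact"),
    "wheezy episode": ("REF:A001", "broader-than"),
    "RAD":            ("REF:A001", "approximate"),
}

def to_reference(local_term):
    """Transform a local term into a standard-representation record,
    preserving how precise the transformation was."""
    concept, quality = term_maps[local_term]
    return {"source": local_term, "concept": concept, "map_quality": quality}

records = [to_reference(t) for t in ["asthma attack", "RAD"]]
# Both records now share one reference concept and are comparable for
# aggregation, while map_quality lets a query weigh each one appropriately.
print({r["concept"] for r in records})
```

Keeping the quality label on each record is what lets an end user ask, for instance, for aggregate counts restricted to exact maps only.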
Abstract:
A three-point linkage group comprised of loci coding for adenosine deaminase (ADA), glucose-6-phosphate dehydrogenase (G6PDH), and 6-phosphogluconate dehydrogenase (6PGD) is described in fish of the genus Xiphophorus (Poeciliidae). The alleles at loci in this group were shown to assort independently from the alleles at three other loci: isocitrate dehydrogenase 1 and 2, and glyceraldehyde-3-phosphate dehydrogenase 1. Alleles at the latter three loci also assort independently from each other. Data were obtained by observing the segregation of electrophoretically variant alleles in reciprocal backcross hybrids derived from crosses between either X. helleri guentheri or X. h. strigatus and X. maculatus. The linkage component of χ2 was significant (P < 0.01) in all crosses, indicating that the linkage group is conserved in all populations of both species of Xiphophorus examined. While data from X. h. guentheri backcrosses indicate the linkage relationship ADA--6%--G6PDH--24%--6PGD, and ADA--29%--6PGD (30% when corrected for double crossovers), data from backcrosses involving X. h. strigatus, while supporting the same gene order, yielded significantly different recombination frequencies. The likelihood of the difference being due to an inversion could not be separated from the possibility of a sex effect on recombination in the present data. The linkage of 6PGD and G6PDH has been shown to exist in species of at least three classes of vertebrates, indicating the possibility of evolutionary conservation of this linkage.
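The reported map figures can be checked with standard three-point arithmetic: with no crossover interference, the expected outer recombination fraction from the two intervals is r = r1 + r2 − 2·r1·r2, while the map distance corrected for double crossovers is simply additive.

```python
# ADA-G6PDH and G6PDH-6PGD recombination fractions, X. h. guentheri data.
r1, r2 = 0.06, 0.24

# Expected ADA-6PGD recombination fraction assuming no interference:
# singles in either interval recombine, doubles restore the parental types.
expected_no_interference = r1 + r2 - 2 * r1 * r2

# Map distance corrected for double crossovers is just the sum of intervals.
additive_map_distance = r1 + r2

print(f"{expected_no_interference:.4f}")  # 0.2712, close to the observed 0.29
print(f"{additive_map_distance:.2f}")     # 0.30, the corrected value reported
```

The observed 29% between ADA and 6PGD lies between the no-interference expectation (27.1%) and the additive 30%, consistent with the abstract's correction for double crossovers.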
Abstract:
People often use tools to search for information. To improve the quality of an information search, it is important to understand how internal information, stored in the user's mind, and external information, represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed interface prototypes for search tasks over relational data, which were used in experiments. The experiments described in this study use a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and searched one-dimensional nominal, ordinal, interval, and ratio tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance over relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness. In particular, the more external representations used, the better the search task performance, and the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
Neodymium (Nd) isotopes are an important geochemical tool to trace present and past water mass mixing as well as continental inputs. The distribution of Nd concentrations in open ocean surface waters (0-100 m) is generally assumed to be controlled by lateral mixing of Nd from coastal surface currents and by removal through reversible particle scavenging. However, using 228Ra activity as an indicator of coastal water mass influence, the surface water Nd concentration data available on key oceanic transects as a whole do not support the above scenario. From a global compilation of available data, we find that more stratified regions are generally associated with low surface Nd concentrations. This implies that upper ocean vertical supply may be an as yet neglected primary factor in determining the basin-scale variations of surface water Nd concentrations. Similar to the mechanism of nutrient supply, it is likely that stratification inhibits vertical supply of Nd from the subsurface thermocline waters and thus the magnitude of the Nd flux to the surface layer. Consistently, the estimated input flux of Nd required to maintain the observed surface-layer concentrations could be nearly two orders of magnitude larger than the riverine/dust flux, and also larger than the model-based estimate of the shelf-derived coastal flux. In addition, preliminary results from modeling experiments reveal that the input from shallow boundary sources, riverine input, and release from dust are actually not the primary factors controlling Nd concentrations, most notably in the Pacific and Southern Ocean surface waters.
Abstract:
High-throughput assays, such as the yeast two-hybrid system, have generated a huge amount of protein-protein interaction (PPI) data in the past decade. This tremendously increases the need for developing reliable methods to systematically and automatically suggest protein functions and relationships between them. With the available PPI data, it is now possible to study the functions and relationships in the context of a large-scale network. To date, several network-based schemes have been provided to effectively annotate protein functions on a large scale. However, due to those inherent noises in high-throughput data generation, new methods and algorithms should be developed to increase the reliability of functional annotations. Previous work in a yeast PPI network (Samanta and Liang, 2003) has shown that the local connection topology, particularly for two proteins sharing an unusually large number of neighbors, can predict functional associations between proteins, and hence suggest their functions. One advantage of the work is that their algorithm is not sensitive to noises (false positives) in high-throughput PPI data. In this study, we improved their prediction scheme by developing a new algorithm and new methods that we applied to a human PPI network to make a genome-wide functional inference. We used the new algorithm to measure and reduce the influence of hub proteins on detecting functionally associated proteins. We used the annotations of the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) as independent and unbiased benchmarks to evaluate our algorithms and methods within the human PPI network. We showed that, compared with the previous work from Samanta and Liang, our algorithm and methods developed in this study improved the overall quality of functional inferences for human proteins. By applying the algorithms to the human PPI network, we obtained 4,233 significant functional associations among 1,754 proteins.
Further comparisons of their KEGG and GO annotations allowed us to assign 466 KEGG pathway annotations to 274 proteins and 123 GO annotations to 114 proteins, with estimated false discovery rates of <21% for KEGG and <30% for GO. We clustered 1,729 proteins by their functional associations and performed pathway analysis to identify several subclusters that are highly enriched in certain signaling pathways. In particular, we performed a detailed analysis of a subcluster enriched in the transforming growth factor β signaling pathway (P<10^-50), which is important in cell proliferation and tumorigenesis. Analysis of four other subclusters also suggested potential new players in six signaling pathways worthy of further experimental investigation. Our study gives clear insight into the common-neighbor-based prediction scheme and provides a reliable method for large-scale functional annotation in this post-genomic era.
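The core of the common-neighbor idea can be illustrated with a significance calculation: how unlikely is it that two proteins with n1 and n2 interaction partners in an N-protein network share at least m partners by chance? The sketch below models this with a standard hypergeometric tail, which is a simplification of the formula published by Samanta and Liang, not a reimplementation of this study's hub-corrected algorithm.

```python
from math import comb

def shared_neighbor_pvalue(N, n1, n2, m):
    """P(at least m shared neighbors by chance) for proteins of degree n1 and
    n2 in an N-protein network, under a hypergeometric null model."""
    total = comb(N, n2)
    return sum(comb(n1, k) * comb(N - n1, n2 - k)
               for k in range(m, min(n1, n2) + 1)) / total

# Two proteins with 20 and 30 partners among 6000 proteins share 5 of them;
# the expected overlap by chance is only 20 * 30 / 6000 = 0.1.
p = shared_neighbor_pvalue(6000, 20, 30, 5)
print(f"{p:.2e}")
```

A tiny p-value like this flags the pair as functionally associated, which is exactly the kind of evidence aggregated into the 4,233 significant associations reported above; the study's actual scoring additionally down-weights hub proteins.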