38 results for Online services using open-source NLP tools
Abstract:
Radiation metabolomics employing mass spectral technologies represents a plausible means of high-throughput, minimally invasive radiation biodosimetry. A simplified metabolomics protocol is described that employs ubiquitous gas chromatography-mass spectrometry and open-source software, including the random forests machine learning algorithm, to uncover latent biomarkers of 3 Gy gamma radiation in rats. Urine was collected from six male Wistar rats and six sham-irradiated controls for 7 days: 4 days before irradiation and 3 days after. Water and food consumption, urine volume, body weight, and sodium, potassium, calcium, chloride, phosphate and urea excretion showed major effects from exposure to gamma radiation. The metabolomics protocol uncovered several urinary metabolites that were significantly up-regulated (glyoxylate, threonate, thymine, uracil, p-cresol) and down-regulated (citrate, 2-oxoglutarate, adipate, pimelate, suberate, azelaate) as a result of radiation exposure. Thymine and uracil were shown to derive largely from thymidine and 2'-deoxyuridine, which are known radiation biomarkers in the mouse. The radiation metabolomic phenotype in rats appeared to derive from oxidative stress and effects on kidney function. Gas chromatography-mass spectrometry is a promising platform on which to develop the field of radiation metabolomics further and to assist in the design of instrumentation for use in detecting biological consequences of environmental radiation release.
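As a rough illustration of the machine-learning step in such a protocol, the sketch below ranks candidate metabolites by random-forest feature importance. It is a minimal example assuming scikit-learn is available; the sample counts echo the 6-vs-6 study design, but the metabolite names, intensity matrix, and injected group shift are synthetic placeholders rather than the study's data.

```python
# Hypothetical sketch: ranking candidate biomarkers with a random forest.
# All data here are synthetic; only the 6-vs-6 design mirrors the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
metabolites = ["citrate", "2-oxoglutarate", "glyoxylate", "threonate",
               "thymine", "uracil", "p-cresol", "adipate"]

# Synthetic GC-MS intensity matrix: 12 urine samples x 8 metabolites
# (6 irradiated, 6 sham-irradiated).
X = rng.normal(loc=100.0, scale=10.0, size=(12, len(metabolites)))
y = np.array([1] * 6 + [0] * 6)      # 1 = irradiated, 0 = sham
X[y == 1, 2:7] += 30.0               # inject a shift to mimic up-regulation

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

# Gini importances give a crude ranking of candidate biomarkers.
for importance, name in sorted(zip(forest.feature_importances_, metabolites),
                               reverse=True):
    print(f"{name:15s} {importance:.3f}")
```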
Abstract:
BACKGROUND: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g. housekeeping genes), from which a baseline reference level is constructed. Thus, the choice of the control genes is of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones. RESULTS: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one that is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list which had not previously been used as control genes are identified and validated by RT-QPCR. Open-source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/. CONCLUSION: We proposed a new method for identifying candidate control genes for RT-QPCR, which was able to rank thousands of genes according to predefined suitability criteria, and we applied it to the case of breast cancer. We also showed empirically that translating the results from the microarray to the PCR platform was achievable.
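The abstract does not spell out its suitability criteria, so the following is only a minimal sketch of the general idea, assuming one simple criterion: screen many genes from pooled (here synthetic) microarray data and rank them by expression stability, measured by the coefficient of variation.

```python
# Minimal sketch: rank control-gene candidates by expression stability.
# Gene names and the expression matrix are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
genes = [f"GENE{i}" for i in range(1000)]

# Synthetic log-expression matrix: 1000 genes x 60 samples, standing in
# for data pooled from several platforms and/or pathologies.
expr = rng.normal(loc=8.0,
                  scale=rng.uniform(0.1, 2.0, size=(1000, 1)),
                  size=(1000, 60))

mean = expr.mean(axis=1)
cv = expr.std(axis=1) / mean          # lower CV => more stable expression

# Keep reasonably expressed genes, then rank by stability.
candidates = sorted((c, g) for c, g, m in zip(cv, genes, mean) if m > 6.0)
for c, g in candidates[:10]:
    print(f"{g:8s} CV = {c:.3f}")
```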
Abstract:
Online reputation management deals with monitoring and influencing the online record of a person, an organization or a product. The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly have a disastrous influence on the online reputation of these entities. The author focuses on the Social Web and the possibilities of its integration with the Semantic Web as a resource for semi-automated tracking of online reputations using imprecise natural language terms. The inherent structure of natural language supports humans not only in communication but also in the perception of the world. Fuzziness is thereby a promising tool for transforming those human perceptions into computer artifacts. Through fuzzy grassroots ontologies, the Social Semantic Web becomes more natural and can thus streamline online reputation management. For readers interested in the cross-over field of computer science, information systems, and social sciences, this book is an ideal source for becoming acquainted with the evolving field of fuzzy online reputation management in the Social Semantic Web area.
Abstract:
An interdisciplinary research unit consisting of 30 teams in the natural, economic and social sciences analyzed biodiversity and ecosystem services of a mountain rainforest ecosystem in the hotspot of the tropical Andes, with special reference to past, current and future environmental changes. The group assessed ecosystem services using data from ecological field and scenario-driven model experiments, and with the help of comparative field surveys of the natural forest and its anthropogenic replacement system for agriculture. The book offers insights into the impacts of environmental change on the various service categories mentioned in the Millennium Ecosystem Assessment (2005): cultural, regulating, supporting and provisioning ecosystem services. Examples focus on the biodiversity of plants and animals including trophic networks, and on abiotic/biotic parameters such as soils, regional climate, water, nutrient and sediment cycles. The threats considered include land-use and climate changes as well as atmospheric fertilization. In terms of regulating and provisioning services, the emphasis is primarily on water regulation and supply as well as climate regulation and carbon sequestration. With regard to provisioning services, the synthesis of the book provides science-based recommendations for a sustainable land-use portfolio including several options such as forestry, pasture management and the practices of indigenous peoples. In closing, the authors show how they integrated the local society through capacity building, in compliance with the CBD-ABS (Convention on Biological Diversity - Access and Benefit Sharing), in the form of education and knowledge transfer for application.
Abstract:
Introduction. Erroneous answers in studies on the misinformation effect (ME) can be reduced in different ways. In some studies, the ME was reduced by source-monitoring (SM) questions, warnings, or a low credibility of the source of the post-event information (PEI). Results are inconsistent, however. Of course, a participant can deliberately decide to refrain from reporting a critical item only when the difference between the original event and the PEI is in principle distinguishable. We were interested in the question of to what extent the influence of erroneous information about a central aspect of the original event can be reduced by different means applied singly or in combination. Method. With a 2 (credibility: high vs. low) x 2 (warning: present vs. absent) between-subjects design and an additional control group that received neither misinformation nor a warning (N = 116), we examined the above-mentioned factors' influence on the ME. Participants viewed a short video of a robbery. The critical item suggested in the PEI was that the victim was kicked by the perpetrator (which he actually was not). The memory test consisted of a two-alternative forced-choice recognition test followed by an SM test. Results. To our surprise, neither a main effect of erroneous PEI nor a main effect of credibility was found. The error rates for the critical item in the control group (50%) and in the high- (65%) and low-credibility (52%) conditions without warning did not differ significantly. A warning about possible misleading information in the PEI significantly reduced the influence of misinformation in both credibility conditions, by 32-37%. Using an SM question also significantly reduced the error rate, but only in the high-credibility, no-warning condition. Conclusion and Future Research. Our results show that, unlike a warning or the use of an SM question, low source credibility did not reduce the ME. The most striking finding, however, was the absence of a main effect of erroneous PEI. Given the high error rate in the control group, we suspect that the wrong answers were caused either by the response format (recognition test) or by autosuggestion, possibly promoted by the high schema-consistency of the critical item. First results of a post-study in which we used open-ended questions before the recognition test support the former assumption. Results of a replication of this study using open-ended questions prior to the recognition test will be available by June.
Abstract:
Null dereferencing is one of the most frequent bugs in Java systems, causing programs to crash due to an uncaught NullPointerException. Developers often fix this bug by introducing a guard (i.e., a null check) on the potentially-null objects before using them. In this paper, we investigate the null checks in 717 open-source Java systems to understand when and why developers introduce null checks. We find that 35% of the if-statements are null checks. A deeper investigation shows that 71% of the checked-for-null objects are returned from method calls. This indicates that null checks have a serious impact on performance and that developers introduce null checks when they use methods that return null.
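For illustration, here is a minimal analogue of the guard pattern the study counts, transposed from Java null checks to Python None checks (the study itself concerns Java); the function and variable names are hypothetical.

```python
# Guard pattern: check a possibly-absent return value before using it.
def find_user(user_id, directory):
    """Return the user record, or None when the id is unknown."""
    return directory.get(user_id)

def greet(user_id, directory):
    user = find_user(user_id, directory)
    if user is None:                  # the guard, before any dereference
        return "Hello, stranger"
    return f"Hello, {user['name']}"

directory = {42: {"name": "Ada"}}
print(greet(42, directory))           # Hello, Ada
print(greet(7, directory))            # Hello, stranger
```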
Abstract:
Swiss ambulatory care is characterized by independent, primarily practice-based physicians receiving fee-for-service reimbursement. This study analyses supply-sensitive services using ambulatory care claims data from mandatory health insurance. The first research question addressed the hypothesis that physicians with large patient lists decrease their intensity of services and bill less per patient to health insurance, and vice versa: that physicians with smaller patient lists compensate for the lack of patients with additional visits and services. The second research question relates to the fact that several cantons allow physicians to dispense drugs directly to patients ('self-dispensation'), whereas other cantons restrict such direct sales to emergencies only. This question was based on the assumption that patterns of rescheduling patients for consultations may differ across channels of dispensing prescription drugs; the hypothesis of different consultation costs in this context was therefore investigated.
Abstract:
Software evolution research has focused mostly on analyzing the evolution of single software systems. However, it is rarely the case that a project exists standalone, independent of others. Rather, projects exist in parallel within larger contexts in companies, research groups or even open-source communities. We call these contexts software ecosystems, and in this paper we present The Small Project Observatory, a prototype tool which aims to support the analysis of project ecosystems through interactive visualization and exploration. We present a case study of exploring an ecosystem using our tool, describe the architecture of the tool, and distill the lessons learned during the tool-building experience.
Abstract:
Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations, without likelihood computations. Results: Here we present ABCtoolbox, a series of open-source programs to perform Approximate Bayesian Computation (ABC). It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary-statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates, and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from sampling parameters from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice, and validation of the estimation procedure, to visualization of the results.
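As a toy illustration of the simplest algorithm ABCtoolbox implements, rejection sampling, the sketch below infers the mean of a normal model from one summary statistic; the model, prior, and tolerance are illustrative choices, not toolbox defaults.

```python
# ABC rejection sampling, toy version: draw parameters from the prior,
# simulate data, keep draws whose summary statistic is close to the
# observed one. Everything here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(loc=3.0, scale=1.0, size=100)
s_obs = observed.mean()               # summary statistic: sample mean

def simulate(theta, n=100):
    return rng.normal(loc=theta, scale=1.0, size=n)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-10.0, 10.0)  # draw from a flat prior
    if abs(simulate(theta).mean() - s_obs) < 0.1:   # tolerance epsilon
        accepted.append(theta)

post = np.array(accepted)
print(f"accepted {post.size} draws; posterior mean ~ {post.mean():.2f}")
```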
Abstract:
Recovering the architecture is the first step towards reengineering a software system. Many reverse engineering tools use top-down exploration as a way of providing a visual and interactive process for architecture recovery. During the exploration process, the user navigates through various views on the system by choosing from several exploration operations. Although some sequences of these operations lead to views which, from the architectural point of view, are more relevant than others, current tools do not provide a way of predicting which exploration paths are worth taking and which are not. In this article we propose a set of package patterns which are used for augmenting the exploration process with information about the worthiness of the various exploration paths. The patterns are defined based on the internal package structure and on the relationships between the package and the other packages in the system. To validate our approach, we verify the relevance of the proposed patterns for real-world systems by analyzing their frequency of occurrence in six open-source software projects.
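To make the idea concrete, here is a hypothetical sketch of classifying a package from simple structural metrics; the pattern names and thresholds are invented for illustration and are not the catalogue proposed in the article.

```python
# Invented example: label a package from its size and its afferent
# (incoming) and efferent (outgoing) dependencies, as a hint about
# whether an exploration path through it is worth taking.
def classify_package(num_classes, incoming_deps, outgoing_deps):
    if incoming_deps == 0 and outgoing_deps == 0:
        return "isolated"             # unlikely to reveal architecture
    if incoming_deps > 3 * max(outgoing_deps, 1):
        return "provider"             # many packages depend on it
    if outgoing_deps > 3 * max(incoming_deps, 1):
        return "consumer"             # mostly uses other packages
    if num_classes <= 2:
        return "satellite"            # too small to explore on its own
    return "balanced"

print(classify_package(num_classes=14, incoming_deps=9, outgoing_deps=2))
# -> "provider": a promising entry point for top-down exploration
```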
Abstract:
A major barrier to widespread clinical implementation of Monte Carlo dose calculation is the difficulty in characterizing the radiation source within a generalized source model. This work aims to develop a generalized three-component source model (target, primary collimator, flattening filter) for 6- and 18-MV photon beams that matches full phase-space data (PSD). Subsource-by-subsource comparison of dose distributions, using either the source PSD or the source model as input, allows accurate source characterization and has the potential to ease the commissioning procedure, since it is possible to obtain information about which subsource needs to be tuned. This source model is unique in that, compared to previous source models, it retains additional correlations among PS variables, which improves accuracy at nonstandard source-to-surface distances (SSDs). In our study, three-dimensional (3D) dose calculations were performed for SSDs ranging from 50 to 200 cm and for field sizes from 1 x 1 to 30 x 30 cm2, as well as for a 10 x 10 cm2 field 5 cm off axis in each direction. The 3D dose distributions, using either the full PSD or the source model as input, were compared in terms of dose difference and distance to agreement. With this model, over 99% of the voxels agreed within +/-1% or 1 mm for the target, within +/-2% or 2 mm for the primary collimator, and within +/-2.5% or 2 mm for the flattening filter in all cases studied. For the total dose distributions, 99% of the dose voxels agreed within 1% or 1 mm when the combined source model, including a charged particle source, was compared with the full PSD as input. The accurate and general characterization of each photon source and knowledge of the subsource dose distributions should facilitate source-model commissioning procedures by allowing the histogram distributions representing the subsources to be scaled and tuned.
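The "x% or y mm" criterion can be made concrete with a simplified sketch: a voxel passes if its dose difference is within tolerance, or if some reference voxel within the search radius carries a matching dose. The grids and noise below are synthetic, and clinical tools use the full 3D gamma formalism rather than this wrap-around approximation.

```python
# Simplified dose-difference / distance-to-agreement (1%/1 mm) pass rate.
import numpy as np

rng = np.random.default_rng(3)
voxel_mm = 1.0                                       # isotropic 1 mm grid
ref = rng.uniform(0.2, 1.0, size=(40, 40, 40))       # reference dose (PSD)
test = ref + rng.normal(0.0, 0.002, size=ref.shape)  # model-based dose

dose_tol = 0.01 * ref.max()                          # 1% of maximum dose
radius = int(round(1.0 / voxel_mm))                  # 1 mm in voxels

diff_ok = np.abs(test - ref) <= dose_tol

# Distance to agreement: does any reference voxel within the radius
# match the test dose? (np.roll wraps at the edges; fine for a sketch.)
dta_ok = np.zeros_like(diff_ok)
for dx in range(-radius, radius + 1):
    for dy in range(-radius, radius + 1):
        for dz in range(-radius, radius + 1):
            if dx * dx + dy * dy + dz * dz > radius * radius:
                continue
            shifted = np.roll(ref, shift=(dx, dy, dz), axis=(0, 1, 2))
            dta_ok |= np.abs(test - shifted) <= dose_tol

print(f"voxels passing 1%/1 mm: {100 * np.mean(diff_ok | dta_ok):.1f}%")
```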
Abstract:
BACKGROUND: Fractures of the mandible (lower jaw) are a common occurrence and are usually related to interpersonal violence or road traffic accidents. Mandibular fractures may be treated using open (surgical) and closed (non-surgical) techniques. Fracture sites are immobilized with intermaxillary fixation (IMF) or other external or internal devices (i.e. plates and screws) to allow bone healing. Various techniques have been used, however uncertainty exists with respect to the specific indications for each approach. OBJECTIVES: The objective of this review is to provide reliable evidence of the effects of any interventions, either open (surgical) or closed (non-surgical), that can be used in the management of mandibular fractures, excluding the condyles, in adult patients. SEARCH METHODS: We searched the following electronic databases: the Cochrane Oral Health Group's Trials Register (to 28 February 2013), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 1), MEDLINE via OVID (1950 to 28 February 2013), EMBASE via OVID (1980 to 28 February 2013), metaRegister of Controlled Trials (to 7 April 2013), ClinicalTrials.gov (to 7 April 2013) and the WHO International Clinical Trials Registry Platform (to 7 April 2013). The reference lists of all trials identified were checked for further studies. There were no restrictions regarding language or date of publication. SELECTION CRITERIA: Randomised controlled trials evaluating the management of mandibular fractures without condylar involvement. Any studies that compared different treatment approaches were included. DATA COLLECTION AND ANALYSIS: At least two review authors independently assessed trial quality and extracted data. Results were to be expressed as random-effects models using mean differences for continuous outcomes and risk ratios for dichotomous outcomes, with 95% confidence intervals. Heterogeneity was to be investigated, including both clinical and methodological factors. MAIN RESULTS: Twelve studies, assessed as at high (six) and unclear (six) risk of bias, comprising 689 participants (830 fractures), were included. Interventions examined different plate materials and morphology; the use of one or two lag screws; microplate versus miniplate; early and delayed mobilization; eyelet wires versus Rapid IMF™; and the management of angle fractures with intraoral access alone or combined with a transbuccal approach. Patient-oriented outcomes were largely ignored and post-operative pain scores were inadequately reported. Unfortunately, only one or two trials with small sample sizes were conducted for each comparison and outcome. Our results and conclusions should therefore be interpreted with caution. We were able to pool the results for two comparisons assessing one outcome. Pooled data from two studies comparing two miniplates versus one miniplate revealed no significant difference in the risk of post-operative infection at the surgical site (risk ratio (RR) 1.32, 95% CI 0.41 to 4.22, P = 0.64, I(2) = 0%). Similarly, no difference in post-operative infection between the use of two 3-dimensional (3D) and standard (2D) miniplates was determined (RR 1.26, 95% CI 0.19 to 8.13, P = 0.81, I(2) = 27%). The included studies involved a small number of participants with a low number of events. AUTHORS' CONCLUSIONS: This review illustrates that there is currently inadequate evidence to support the effectiveness of a single approach in the management of mandibular fractures without condylar involvement. The lack of high-quality evidence may be explained by clinical diversity, variability in the assessment tools used, and difficulty in grading outcomes with existing measurement tools. Until high-level evidence is available, treatment decisions should continue to be based on the clinician's prior experience and the individual circumstances.
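For readers unfamiliar with the pooled statistic quoted above, a small sketch of the arithmetic behind a risk ratio and its 95% confidence interval (log-normal approximation) follows; the event counts are invented for illustration, and the review itself pooled study-level estimates under a random-effects model.

```python
# Risk ratio with a 95% CI from a 2x2 table (illustrative counts only).
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) under the usual approximation.
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio(events_a=5, total_a=60, events_b=4, total_b=62)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```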
Abstract:
Highly reflective materials in the microwave region play a very important role in the realization of antenna reflectors for a broad range of applications, including radiometry. These reflectors have a characteristic emissivity which needs to be characterized accurately in order to perform a correct radiometric calibration of the instrument. Such a characterization can be performed by using open resonators or waveguide cavities, or by radiometric measurements. The latter consist of comparative radiometric observations of absorbers, reference mirrors and the sample under test, or use the cold-sky radiation as a direct reference source. While the first two techniques are suitable for the characterization of metal plates and mirrors, the radiometric approach has the advantage of also being applicable to soft materials. This paper describes how, through these radiometric techniques, it is possible to characterize the emissivity of a sample relative to a reference mirror, and how to characterize the absolute emissivity of the mirror itself by performing measurements at different incident angles. The results presented in this paper are based on our investigations of the emissivity of a multilayer insulation (MLI) material for space missions, at frequencies of 22 and 90 GHz.
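As a back-of-the-envelope illustration of the comparative radiometric idea: the brightness temperature of a reflective sample mixes its physical temperature (weighted by emissivity e) with the reflected cold-sky background, so e can be solved from a single observation. The numbers below are illustrative; the paper's actual procedure also varies the incident angle and calibrates against absorbers and a reference mirror.

```python
# Solve t_measured = e * t_physical + (1 - e) * t_sky for emissivity e.
def emissivity(t_measured, t_physical, t_sky):
    return (t_measured - t_sky) / (t_physical - t_sky)

# Illustrative numbers: sample at 295 K, cold sky at 10 K,
# measured brightness temperature 12.5 K.
e = emissivity(t_measured=12.5, t_physical=295.0, t_sky=10.0)
print(f"emissivity ~ {e:.4f}")        # ~0.009, a highly reflective MLI
```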
Abstract:
Content Distribution Networks (CDNs) are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the area's maturity, new paradigms and architecture models are still being developed. Cloud computing, on the other hand, is a more recent concept which has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and development of an open-source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN-as-a-Service paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of points of presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking (MCN) EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project, as it is tasked to deliver different types of multimedia content to many thousands of geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
Abstract:
Is numerical mimicry a third way of establishing truth? Kevin Heng received his M.S. and Ph.D. in astrophysics from the Joint Institute for Laboratory Astrophysics (JILA) and the University of Colorado at Boulder. He joined the Institute for Advanced Study in Princeton from 2007 to 2010, first as a Member and later as the Frank & Peggy Taplin Member. From 2010 to 2012 he was a Zwicky Prize Fellow at ETH Zürich (the Swiss Federal Institute of Technology). In 2013, he joined the Center for Space and Habitability (CSH) at the University of Bern, Switzerland, as a tenure-track assistant professor, where he leads the Exoplanets and Exoclimes Group. He has worked on, and maintains, a broad range of interests in astrophysics: shocks, extrasolar asteroid belts, planet formation, fluid dynamics, brown dwarfs and exoplanets. He coordinates the Exoclimes Simulation Platform (ESP), an open-source set of theoretical tools designed for studying the basic physics and chemistry of exoplanetary atmospheres and climates (www.exoclime.org). He is involved in the CHEOPS (Characterizing Exoplanet Satellite) space telescope, a mission approved by the European Space Agency (ESA) and led by Switzerland. He spends a fair amount of time humbly learning the lessons gleaned from studying the Earth and Solar System planets, as related to him by atmospheric, climate and planetary scientists. He received a Sigma Xi Grant-in-Aid of Research in 2006.