38 results for benchmark
at Université de Lausanne, Switzerland
Abstract:
MOTIVATION: Microarray results accumulated in public repositories are widely reused in meta-analytical studies and secondary databases. The quality of the data obtained with this technology varies from experiment to experiment, and an efficient method for quality assessment is necessary to ensure their reliability. RESULTS: The lack of a good benchmark has hampered the evaluation of existing quality-control methods. In this study, we propose a new independent quality metric based on evolutionary conservation of expression profiles. Using 11 large organ-specific datasets, we show that IQRray, a new quality metric that we developed, exhibits the highest correlation with this reference metric among the 14 metrics tested. IQRray outperforms other methods in identifying poor-quality arrays in datasets composed of arrays from many independent experiments. In contrast, the performance of methods designed for detecting outliers within a single experiment, such as Normalized Unscaled Standard Error and Relative Log Expression, was low, because these methods cannot detect datasets containing only low-quality arrays and because their scores cannot be directly compared between experiments. AVAILABILITY AND IMPLEMENTATION: The R implementation of IQRray is available at: ftp://lausanne.isb-sib.ch/pub/databases/Bgee/general/IQRray.R. CONTACT: Marta.Rosikiewicz@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
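The abstract above compares single-number quality metrics computed per array. Purely as a deliberately simplified illustration of such a per-array score (an assumption for exposition, not the published IQRray algorithm, which is available from the FTP link above), the Python sketch below scores an array by the inter-quartile range of its log2 probe intensities, so that arrays with a compressed dynamic range are flagged as suspect.

```python
import numpy as np

def simple_array_quality(intensities):
    """Deliberately simplified per-array quality indicator (not the published IQRray method).

    Uses the inter-quartile range of log2 probe intensities: arrays with a
    compressed dynamic range (signal poorly separated from background) get a
    low score. The actual algorithm in IQRray.R is more elaborate.
    """
    log_int = np.log2(np.asarray(intensities, dtype=float) + 1.0)
    q75, q25 = np.percentile(log_int, [75, 25])
    return q75 - q25

# toy usage: a "good" array with wide dynamic range vs a degraded one
rng = np.random.default_rng(0)
good = rng.lognormal(mean=6, sigma=1.5, size=10000)
poor = rng.lognormal(mean=6, sigma=0.4, size=10000)
print(simple_array_quality(good), simple_array_quality(poor))
```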
Abstract:
Phylogenomic databases provide orthology predictions for species with fully sequenced genomes. Although the goal seems well defined, the content of these databases differs greatly. Seven ortholog databases (Ensembl Compara, eggNOG, HOGENOM, InParanoid, OMA, OrthoDB, Panther) were compared on the basis of reference trees. For three well-conserved protein families, we observed a generally high specificity of orthology assignments in these databases. We show that differences in the completeness of predicted gene relationships and in the phylogenetic information are, for the most part, due not to the methods used but to differences in the underlying database concepts. According to our metrics, none of the databases provides a fully correct and comprehensive protein classification. Our results provide a framework for meaningful and systematic comparisons of phylogenomic databases. In the future, a sustainable set of 'gold standard' phylogenetic trees could provide a robust method for phylogenomic databases to assess their current quality status, measure changes following new database releases, and diagnose improvements subsequent to an upgrade of the analysis procedure.
Abstract:
Natural genetic variation can have a pronounced influence on human taste perception, which in turn may influence food preference and dietary choice. Genome-wide association studies represent a powerful tool to understand this influence. To help optimize the design of future genome-wide association studies on human taste perception, we used the well-known TAS2R38-PROP association as a tool to determine the relative power and efficiency of different phenotyping and data-analysis strategies. The results show that the choice of both data-collection and data-processing schemes can have a very substantial impact on the power to detect genotypic variation that affects chemosensory perception. Based on these results, we provide practical guidelines for the design of future genome-wide association studies on chemosensory phenotypes. Moreover, in addition to the TAS2R38 gene, past studies have implicated a number of other genetic loci in taste sensitivity to PROP and to the related bitter compound PTC. None of these other loci showed genome-wide significant associations in our study. To facilitate further, target-gene-driven studies on PROP taste perception, we provide the genome-wide list of p-values for all SNPs genotyped in the current study.
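As a generic illustration of how the power of an association test for a quantitative taste phenotype can be estimated for a given phenotyping scheme, the sketch below runs a small Monte Carlo power calculation. All parameter values (sample size, allele frequency, effect size, genome-wide alpha) are hypothetical placeholders, not the study's settings; a noisier phenotyping scheme can be mimicked by a smaller effective effect size.

```python
import numpy as np
from scipy import stats

def simulated_power(n=600, maf=0.45, beta=0.5, n_sim=2000, alpha=5e-8, seed=1):
    """Monte Carlo power estimate for one SNP with an additive effect on a
    quantitative phenotype (all parameter values are hypothetical)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        genotype = rng.binomial(2, maf, size=n)            # 0/1/2 allele counts
        phenotype = beta * genotype + rng.normal(size=n)   # effect in phenotype SDs
        hits += stats.linregress(genotype, phenotype).pvalue < alpha
    return hits / n_sim

# compare a precise vs a noisier phenotyping scheme (smaller effective effect)
print(simulated_power(beta=0.5), simulated_power(beta=0.3))
```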
Abstract:
Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of various models and algorithms, since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neuron models.
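For context, spike-prediction benchmarks of this kind are typically scored with a coincidence-based measure. The sketch below implements a generic gamma-type coincidence factor between recorded and predicted spike trains; the 4 ms window and the exact normalization are assumptions here, not necessarily those used in the competition.

```python
import numpy as np

def coincidence_factor(spikes_data, spikes_model, delta=0.004):
    """Gamma-type coincidence factor between recorded and predicted spike times (s).

    Counts model spikes falling within +/- delta of a recorded spike and
    corrects for the coincidences expected by chance from a Poisson train
    with the model's firing rate. Values near 1 indicate near-perfect
    spike-timing prediction, values near 0 chance-level prediction.
    """
    spikes_data = np.asarray(spikes_data, dtype=float)
    spikes_model = np.asarray(spikes_model, dtype=float)
    n_data, n_model = len(spikes_data), len(spikes_model)
    n_coinc = sum(np.any(np.abs(spikes_data - t) <= delta) for t in spikes_model)
    duration = max(spikes_data.max(), spikes_model.max())
    rate_model = n_model / duration
    expected = 2.0 * delta * rate_model * n_data          # chance coincidences
    norm = 1.0 - 2.0 * rate_model * delta
    return (n_coinc - expected) / (0.5 * (n_data + n_model) * norm)

# toy usage (spike times in seconds)
print(coincidence_factor([0.10, 0.25, 0.40, 0.61], [0.102, 0.27, 0.402, 0.80]))
```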
Abstract:
We address the challenges of treating polarization and covalent interactions in docking by developing a hybrid quantum mechanical/molecular mechanical (QM/MM) scoring function based on the semiempirical self-consistent charge density functional tight-binding (SCC-DFTB) method and the CHARMM force field. To benchmark this scoring function within the EADock DSS docking algorithm, we created a publicly available dataset of high-quality X-ray structures of zinc metalloproteins (http://www.molecular-modelling.ch/resources.php). For zinc-bound ligands (226 complexes), the QM/MM scoring yielded a substantially improved success rate compared to the classical scoring function (77.0% vs 61.5%), while for allosteric ligands (55 complexes) the success rate remained constant (49.1%). The QM/MM scoring significantly improved the detection of correct zinc-binding geometries and improved the docking success rate by more than 20% for several important drug targets. The performance of both the classical and the QM/MM scoring functions compares favorably with that of AutoDock4, AutoDock4Zn, and AutoDock Vina.
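Purely as an illustration of how a docking success rate of this kind is computed (the paper's exact pose-acceptance criterion is not restated here), the sketch below counts the fraction of complexes whose top-ranked pose lies within a chosen RMSD cutoff of the crystallographic ligand.

```python
import numpy as np

def docking_success_rate(top_pose_rmsd, cutoff=2.0):
    """Fraction of complexes whose best-scored pose is within `cutoff` Å of the
    crystallographic ligand (a common criterion; the cutoff is an assumption
    here, not necessarily the one used in the paper)."""
    rmsd = np.asarray(top_pose_rmsd, dtype=float)
    return float(np.mean(rmsd <= cutoff))

# hypothetical RMSDs (Å) of the top-ranked pose for five complexes
print(docking_success_rate([0.8, 1.5, 3.2, 0.4, 2.6]))   # -> 0.6
```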
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold-standard TDM approach but require computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the software varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentrations (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses a non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use in routine activities, including by non-experienced users. Computer-assisted TDM is attracting growing interest and should improve further, especially in terms of information-system interfacing, user friendliness, data storage capability and report generation.
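As a concrete, heavily simplified illustration of the Bayesian a posteriori adjustment that such programs perform, the sketch below computes a maximum a posteriori estimate of an individual patient's clearance and volume for a one-compartment IV bolus model, given population priors and two measured concentrations. All model choices and numbers are hypothetical and are not taken from any of the evaluated tools.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical population priors for a one-compartment IV bolus model
POP = dict(cl=5.0, v=50.0, omega_cl=0.30, omega_v=0.25, sigma=0.20)   # L/h, L, log-scale SDs

def conc(dose, t, cl, v):
    """Predicted concentration C(t) = (dose / V) * exp(-(CL / V) * t)."""
    return dose / v * np.exp(-cl / v * t)

def map_estimate(dose, times, obs):
    """Maximum a posteriori estimate of an individual's CL and V.

    A minimal sketch of the Bayesian 'a posteriori' step: lognormal priors
    around population values plus a lognormal residual error model. Real TDM
    programs use richer models and patient covariates.
    """
    def neg_log_post(x):
        cl, v = np.exp(x)                                  # optimise on the log scale
        pred = conc(dose, times, cl, v)
        resid = np.sum((np.log(obs) - np.log(pred)) ** 2) / (2 * POP["sigma"] ** 2)
        prior = ((x[0] - np.log(POP["cl"])) ** 2 / (2 * POP["omega_cl"] ** 2)
                 + (x[1] - np.log(POP["v"])) ** 2 / (2 * POP["omega_v"] ** 2))
        return resid + prior

    res = minimize(neg_log_post, x0=np.log([POP["cl"], POP["v"]]))
    return np.exp(res.x)

# toy usage: 500 mg IV bolus, concentrations measured at 1 h and 6 h
cl_i, v_i = map_estimate(dose=500.0, times=np.array([1.0, 6.0]), obs=np.array([8.5, 4.0]))
print(f"individual CL = {cl_i:.2f} L/h, V = {v_i:.1f} L")
```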
Abstract:
Background: Anaesthesia Databank Switzerland (ADS) is a voluntary data registry introduced in 1996. Its goal was to promote quality in anaesthesiology. Methods: Analysis of routinely recorded adverse events, with internal and external benchmark comparisons between anaesthesia departments. Results: In 2010, the database included 2'158'735 anaesthetic procedures. Forty-four anaesthesia departments were participating in the data collection in 2010. Over time, the number of patients in older age groups increased, the largest group being patients aged 50 to 64 years. Over time, the percentage of patients with ASA physical status score 1 decreased, while the number of ASA 2 or 3 patients increased. The most frequent co-morbidities were hypertension (21%), smoking (16%), allergy (15%), and obesity (12%). Between 1996 and 2010, 146'459 adverse events were recorded, of which 34% were cardiovascular, 7% respiratory, 39% specific to anaesthesia and 17% nonspecific. The overall proportion of adverse events decreased over time, whatever their severity. Conclusion: The ADS routine data collection contributes to monitoring trends in anaesthesia care in Switzerland.
Abstract:
Cloud computing has recently become very popular, and several bioinformatics applications already exist in that domain. The aim of this article is to analyse a current cloud system with respect to usability, to benchmark its performance, and to compare its user friendliness with that of a conventional cluster job-submission system. Given the current hype around the topic, user expectations are rather high, but current results show that neither the price/performance ratio nor the usage model is very satisfactory for large-scale, embarrassingly parallel applications. However, for small- to medium-scale applications that require CPU time at certain peak times, the cloud is a suitable alternative.
Abstract:
The aim of this contribution is to highlight the long-term evolution of family capitalism in Switzerland during the twentieth century. We focus on 22 large companies of the machine, electrotechnical and metallurgy (MEM) sector whose boards of directors and general managers have been identified in five benchmark years across the twentieth century, which allows us to distinguish between family-owned and family-controlled firms. Our results show that family firms prevailed until the 1980s and thus contradict the dominance of 'managerial capitalism'. Although we observe a decline of family capitalism during the last decade of the century, the significant remaining presence of family firms in 2000 allows us to relativise the advent of investor capitalism.
Abstract:
Aims and background. In 2002, a survey including 1759 patients treated from 1980 to 1998 established a "benchmark" Italian data source for prostate cancer radiotherapy. This report updates the previous one. Methods. Data on the clinical management and outcomes of 3001 patients treated in 15 centers from 1999 through 2003 were analyzed and compared with those of the previous survey. Results. Significant differences in clinical management (-10% had abdominal magnetic resonance imaging; +26% received ≥70 Gy, +48% conformal radiotherapy, -20% pelvic radiotherapy) and in G3-4 toxicity rates (-3.8%) were recorded. Actuarial 5-year overall, disease-specific, clinical relapse-free, and biochemical relapse-free survival rates were 88%, 96%, 96% and 88%, respectively. At multivariate analysis, D'Amico risk categories significantly affected all the outcomes; higher radiotherapy doses were significantly related to better overall survival rates, and a similar trend was evident for disease-specific and biochemical relapse-free survival; the cumulative probability of 5-year late G1-4 toxicity was 24.8% and was significantly related to higher radiotherapy doses (P <0.001). Conclusions. The changing patterns of practice described seem related to an improvement in the efficacy and safety of radiotherapy for prostate cancer. However, the impact of the new radiotherapy techniques should be evaluated prospectively.
Abstract:
General Introduction. This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements (whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special-interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent.

Part I. In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Celine Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it to the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs.

The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs.
The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of good value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests.

The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "Single List" RoOs (used since 1997). Second, using a constant-elasticity-of-transformation function in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.

Part II. The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument that has the effect of reducing market access. In 1995, the Uruguay Round introduced in the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA), under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there was a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis.
First, using Poisson and negative binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of "initiations" lagged five years. The analysis yields a coefficient on measure initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
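To make the count-data step concrete, the sketch below runs the kind of Poisson and negative binomial regressions described above, using simulated toy data in place of the AD database; the lag structure and coefficients are invented for illustration only.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical yearly counts of AD initiations and, five years later, revocations
rng = np.random.default_rng(2)
initiations = rng.poisson(30, size=20)
revocations = rng.poisson(0.6 * np.roll(initiations, 5) + 5)   # toy 5-year link

# Regress revocations on initiations lagged five years (drop the first 5 years)
y = revocations[5:]
X = sm.add_constant(initiations[:-5])
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(poisson_fit.params, negbin_fit.params)
```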
Abstract:
Humans differ substantially with respect to susceptibility to human immunodeficiency virus type 1 (HIV-1). We evaluated variants of nine host genes participating in the viral life cycle for their role in modulating HIV-1 infection. Alleles were assessed ex vivo for their impact on viral replication in purified CD4 T cells from healthy blood donors (n = 128). Thereafter, candidate alleles were assessed in vivo in a cohort of HIV-1-infected individuals (n = 851) not receiving potent antiretroviral therapy. As a benchmark test, we tested 12 previously reported host genetic variants influencing HIV-1 infection as well as single nucleotide polymorphisms in the nine candidate genes. This led us to propose three alleles, of PML, TSG101, and PPIA, as potentially associated with differences in the progression of HIV-1 disease. In a model considering the combined effects of new and previously reported gene variants, we estimated that their effect might be responsible for lengthening or shortening by up to 2.8 years the period from 500 CD4 T cells/μl to <200 CD4 T cells/μl.
Abstract:
This working paper presents the Basic Indicators for Better Governance in International Sport (BIBGIS) as a tool to assess and measure the state of governance of international sport governing bodies. The working paper is organised as follows. We start by presenting different definitions of governance and some examples of principles of good governance in sport, and critique them. We then introduce our approach, which is based on a limited number of indicators divided among seven dimensions, and apply it to the International Olympic Committee (IOC) and other international sport governing bodies. Although our approach can also be used to benchmark the governance of different sport organisations, we demonstrate that it faces limitations. We conclude with suggested next steps for future BIBGIS developments.
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least-squares inversions typically suffer from relatively poor mass recovery, overestimation of spread, and a limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and to provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel-time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
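The sketch below is a plain Metropolis sampler over a deliberately tiny, low-dimensional model vector, included only to illustrate the posterior-sampling idea of working in a reduced parameter space; the actual method parameterizes the 3-D plume via Legendre moments and morphological constraints and inverts GPR travel times, none of which is reproduced here.

```python
import numpy as np

def metropolis(log_post, x0, n_iter=5000, step=0.1, seed=0):
    """Plain Metropolis sampler over a low-dimensional model vector.

    Illustrates sampling a reduced plume parameterization (here a generic
    parameter vector) rather than a full 3-D voxel grid.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.normal(size=x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:          # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# toy target: plume centre and width explaining noisy 1-D "observations"
obs_x = np.linspace(0, 10, 25)
obs = (np.exp(-(obs_x - 4.0) ** 2 / (2 * 1.5 ** 2))
       + 0.05 * np.random.default_rng(1).normal(size=25))

def log_post(theta):
    centre, width = theta
    if width <= 0:
        return -np.inf
    pred = np.exp(-(obs_x - centre) ** 2 / (2 * width ** 2))
    return -np.sum((obs - pred) ** 2) / (2 * 0.05 ** 2)

chain = metropolis(log_post, x0=[5.0, 1.0])
print(chain[-1000:].mean(axis=0))                         # posterior-mean estimate
```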