22 results for "sisäinen benchmarking"
at Université de Lausanne, Switzerland
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades computer programs have been developed to assist clinicians in this assignment. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. Numbers of drugs handled by the software vary widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
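As an illustration of the a posteriori (MAP-Bayesian) adjustment step that most of the reviewed tools automate, the following sketch estimates individual pharmacokinetic parameters from a single measured concentration and rescales the dose toward a target trough. It assumes a one-compartment intravenous bolus model at steady state; the drug, population priors, residual error and target are hypothetical and are not taken from any of the benchmarked programs.

"""Illustrative MAP-Bayesian dose individualization (one-compartment IV bolus,
steady state). All numerical values are invented for the example."""
import numpy as np
from scipy.optimize import minimize

# Hypothetical population priors (log-normal): typical value and between-subject CV.
POP = {"CL": (5.0, 0.30),   # clearance [L/h], 30% CV
       "V":  (40.0, 0.20)}  # volume of distribution [L], 20% CV
SIGMA = 1.0                  # residual (assay) error SD [mg/L]

def conc_ss(t, dose, tau, CL, V):
    """Steady-state concentration t hours after a dose, IV bolus every tau hours."""
    k = CL / V
    return (dose / V) * np.exp(-k * t) / (1.0 - np.exp(-k * tau))

def map_objective(log_theta, t_obs, c_obs, dose, tau):
    """Weighted residuals plus log-normal prior penalty on the individual parameters."""
    CL, V = np.exp(log_theta)
    pred = conc_ss(t_obs, dose, tau, CL, V)
    residual = np.sum(((c_obs - pred) / SIGMA) ** 2)
    prior = sum(((lt - np.log(POP[p][0])) / POP[p][1]) ** 2
                for lt, p in zip(log_theta, ("CL", "V")))
    return residual + prior

def individualize(t_obs, c_obs, dose, tau, target_trough):
    """Return MAP parameter estimates and the dose expected to reach the target trough."""
    x0 = np.log([POP["CL"][0], POP["V"][0]])
    fit = minimize(map_objective, x0,
                   args=(np.asarray(t_obs), np.asarray(c_obs), dose, tau))
    CL, V = np.exp(fit.x)
    trough_per_unit_dose = conc_ss(tau, 1.0, tau, CL, V)  # linear PK: dose scales the trough
    return CL, V, target_trough / trough_per_unit_dose

# One trough level of 4.2 mg/L measured 12 h after a 100 mg dose given every 12 h.
CL, V, new_dose = individualize([12.0], [4.2], dose=100.0, tau=12.0, target_trough=5.0)
print(f"MAP estimates: CL={CL:.2f} L/h, V={V:.1f} L; suggested dose ~ {new_dose:.0f} mg")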
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents the gold standard TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: The literature and the Internet were searched to identify software. All programs were tested on a standard personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer users the ability to add their own drug models. Ten programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top two software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic drug monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capacity and report generation.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents the gold standard TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available in some programs. Eight programs offer the ability to add new drug models based on population PK data. Ten computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been made in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability and automated report generation.
Abstract:
Participation is a key indicator of the potential effectiveness of any population-based intervention. Defining, measuring and reporting participation in cancer screening programmes has become more heterogeneous as the number and diversity of interventions have increased, and the purposes of this benchmarking parameter have broadened. This study, centred on colorectal cancer, addresses current issues that affect the increasingly complex task of comparing screening participation across settings. Reports from programmes with a defined target population and active invitation scheme, published between 2005 and 2012, were reviewed. Differences in defining and measuring participation were identified and quantified, and participation indicators were grouped by aims of measure and temporal dimensions. We found that consistent terminology, clear and complete reporting of participation definition and systematic documentation of coverage by invitation were lacking. Further, adherence to definitions proposed in the 2010 European Guidelines for Quality Assurance in Colorectal Cancer Screening was suboptimal. Ineligible individuals represented 1% to 15% of invitations, and variable criteria for ineligibility yielded differences in participation estimates that could obscure the interpretation of colorectal cancer screening participation internationally. Excluding ineligible individuals from the reference population enhances comparability of participation measures. Standardised measures of cumulative participation to compare screening protocols with different intervals and inclusion of time since invitation in definitions are urgently needed to improve international comparability of colorectal cancer screening participation. Recommendations to improve comparability of participation indicators in cancer screening interventions are made.
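To make the effect of ineligibility handling concrete, the short sketch below compares a participation rate computed over all invitations with one computed over eligible invitations only. The counts are invented, and the 1% and 15% ineligibility shares simply echo the range reported above.

"""Illustration of how excluding ineligible individuals from the reference
population changes a reported screening participation rate. Counts are invented."""

def participation_rate(participants: int, invited: int, ineligible: int = 0,
                       exclude_ineligible: bool = True) -> float:
    """Participants divided by the chosen reference population."""
    denominator = invited - ineligible if exclude_ineligible else invited
    return participants / denominator

invited, participants = 100_000, 42_000
for share in (0.01, 0.15):  # 1% vs 15% of invitations deemed ineligible
    ineligible = int(invited * share)
    crude = participation_rate(participants, invited, ineligible, exclude_ineligible=False)
    adjusted = participation_rate(participants, invited, ineligible, exclude_ineligible=True)
    print(f"{share:.0%} ineligible: all-invited rate {crude:.1%} vs eligible-based rate {adjusted:.1%}")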
Abstract:
Background and aims: Family-centred care is an expected standard in the PICU, yet parent-reported outcomes are rarely measured. The validated Dutch EMPATHIC questionnaire provides accurate measures of parental perceptions of family-centred care in the PICU. A French version would provide an important resource for quality control and benchmarking with other PICUs. The study aimed to translate and assess the French cultural adaptation of the EMPATHIC questionnaire. Methods: In September 2012, following approval from the developer, translation and cultural adaptation were performed using a structured method (Wild et al. 2005). This included forward-backward translation and reconciliation by an official translator, harmonization assessed by the research team, and cognitive debriefing with the target user population. In this last step, a convenience sample of parents with PICU experience assessed the comprehensibility and cultural relevance of the 65-item French EMPATHIC questionnaire. The PICUs in Lausanne, Switzerland and Lille, France participated. Results: Seventeen parents, including 13 native French speakers and 4 speakers of French as a second language, tested the cognitive equivalence and cultural relevance of the French EMPATHIC questionnaire. The mean agreement on comprehensibility across all 65 items reached 90.2%. Three items fell below the 80% agreement cut-off and were revised for inclusion in the final French version. Conclusions: The translation and cultural adaptation highlighted a few cultural differences that did not interfere with the main construct of the EMPATHIC questionnaire. Reliability and validity testing with a new sample of parents is needed to strengthen the psychometric properties of the French EMPATHIC questionnaire.
Abstract:
Given the rapid increase of species with a sequenced genome, the need to identify orthologous genes between them has emerged as a central bioinformatics task. Many different methods exist for orthology detection, which makes it difficult to decide which one to choose for a particular application. Here, we review the latest developments and issues in the orthology field, and summarize the most recent results reported at the third 'Quest for Orthologs' meeting. We focus on community efforts such as the adoption of reference proteomes, standard file formats and benchmarking. Progress in these areas is good, and they are already beneficial to both orthology consumers and providers. However, a major current issue is that the massive increase in complete proteomes poses computational challenges to many of the ortholog database providers, as most orthology inference algorithms scale at least quadratically with the number of proteomes. The Quest for Orthologs consortium is an open community with a number of working groups that join efforts to enhance various aspects of orthology analysis, such as defining standard formats and datasets, documenting community resources and benchmarking. AVAILABILITY AND IMPLEMENTATION: All such materials are available at http://questfororthologs.org. CONTACT: erik.sonnhammer@scilifelab.se or c.dessimoz@ucl.ac.uk.
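The scaling issue mentioned above follows directly from the all-against-all structure of most orthology inference pipelines: the number of proteome pairs to compare grows quadratically with the number of proteomes. A minimal sketch of that growth (illustrative only, not modelled on any specific orthology tool):

"""Why orthology inference scales at least quadratically with proteome count:
all-against-all pipelines must process every pair of proteomes. The numbers
below only illustrate growth, not the cost of any particular tool."""

def pairwise_comparisons(n_proteomes: int) -> int:
    """Number of proteome pairs an all-against-all approach must process."""
    return n_proteomes * (n_proteomes - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} proteomes -> {pairwise_comparisons(n):>12,} proteome pairs")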
Abstract:
European regulatory networks (ERNs) constitute the main governance instrument for the informal co-ordination of public regulation at the European Union (EU) level. They are in charge of co-ordinating national regulators and ensuring the implementation of harmonized regulatory policies across the EU, while also offering sector-specific expertise to the Commission. To this end, ERNs develop 'best practices' and benchmarking procedures in the form of standards, norms and guidelines to be adopted in member states. In this paper, we focus on the Committee of European Securities Regulators and examine the consequences of the policy-making structure of ERNs for the domestic adoption of standards. We find that the regulators of countries with larger financial industries tend to occupy more central positions in the network, especially among newer member states. In turn, network centrality is associated with more prompt domestic adoption of standards.
Abstract:
National and international registries are essential tools for establishing new standards and comparing success rates, but they do not take into account the total pregnancy/delivery rate per oocyte recovery. In Switzerland and Germany, because of legal constraints, a maximum of three two-pronuclear zygotes are allocated for transfer, whereas all supernumerary pronuclear zygotes are immediately cryopreserved, preventing selection of the transferred embryos. We report on 10 years' experience (1993-2002) at our centre, which performs transfers of unselected embryos and cryopreservation at the two-pronuclear zygote stage. As approximately 30% of all deliveries are from cryo cycles, it is essential to take into account the contribution of the cryo transfers, and we therefore propose to evaluate, as a measure of IVF performance, the cumulated delivery rate per oocyte pick-up. This delivery rate is broken down further into the cumulated singleton delivery rate (CUSIDERA) and the cumulated twin delivery rate (CUTWIDERA). The sum (S) of these two rates is a measure of efficacy, while the ratio CUTWIDERA/S, expressed as a percentage, is a measure of safety of IVF treatments. Using these new indexes, the average 10-year efficacy and safety of our IVF programme were 26 and 19%, respectively. Both CUSIDERA and CUTWIDERA can be calculated easily in any clinical situation and yield useful parameters for patient counselling and internal/external benchmarking purposes.
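A minimal sketch of the proposed indexes follows; the pick-up and delivery counts are invented, but the formulas mirror the definitions above (CUSIDERA and CUTWIDERA per oocyte pick-up, efficacy S as their sum, and safety as CUTWIDERA/S).

"""Sketch of the proposed IVF performance indexes. Counts are invented; rates are
per oocyte pick-up, cumulated over fresh and cryo transfer cycles."""

def ivf_indexes(pickups: int, singleton_deliveries: int, twin_deliveries: int):
    cusidera = singleton_deliveries / pickups   # cumulated singleton delivery rate
    cutwidera = twin_deliveries / pickups       # cumulated twin delivery rate
    efficacy = cusidera + cutwidera             # S: overall delivery rate per pick-up
    safety = cutwidera / efficacy               # share of deliveries that are twins (lower is safer)
    return cusidera, cutwidera, efficacy, safety

# Hypothetical centre: 1,000 oocyte pick-ups, 210 singleton and 50 twin deliveries
# (fresh and cryo cycles combined).
cusidera, cutwidera, efficacy, safety = ivf_indexes(1_000, 210, 50)
print(f"CUSIDERA {cusidera:.1%}, CUTWIDERA {cutwidera:.1%}, "
      f"efficacy S {efficacy:.1%}, safety CUTWIDERA/S {safety:.1%}")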
Abstract:
Natural genetic variation can have a pronounced influence on human taste perception, which in turn may influence food preference and dietary choice. Genome-wide association studies represent a powerful tool to understand this influence. To help optimize the design of future genome-wide association studies on human taste perception, we have used the well-known TAS2R38-PROP association as a tool to determine the relative power and efficiency of different phenotyping and data-analysis strategies. The results show that the choice of both data collection and data processing schemes can have a very substantial impact on the power to detect genotypic variation that affects chemosensory perception. Based on these results we provide practical guidelines for the design of future GWAS on chemosensory phenotypes. Moreover, in addition to the TAS2R38 gene, past studies have implicated a number of other genetic loci in taste sensitivity to PROP and the related bitter compound PTC. None of these other loci showed genome-wide significant associations in our study. To facilitate further target-gene-driven studies on PROP taste perception, we provide the genome-wide list of p-values for all SNPs genotyped in the current study.
Abstract:
The frantic race to publish ("publish or perish") contributes to reinforcing a (new) scientific norm: competitiveness. This norm has become institutionalized through neoliberal reforms, both in national research and higher-education systems and in European and international bodies. Starting from this political context, we problematize the "fabrication" of a neoliberal subject, the researcher-entrepreneur, through the lens of techniques of government at a distance (benchmarking, ranking systems, evaluation grids, etc.), understood as "technologies of the self" that can lead academics to identify with, and act as, researcher-entrepreneurs. From this Foucauldian perspective on the "fabrication" of the researcher-entrepreneur, we ultimately distinguish two possible hermeneutic stances: an uncritical, managerial "care of the self", confident in meritocratic rules, and a critical, intellectual "care of the self", worried about seeing competitiveness become the dominant norm of the scientific field.
Abstract:
The infinite slope method is widely used as the geotechnical component of geomorphic and landscape evolution models. Its assumption that shallow landslides are infinitely long (in a downslope direction) is usually considered valid for natural landslides on the basis that they are generally long relative to their depth. However, this is rarely justified, because the critical length/depth (L/H) ratio below which edge effects become important is unknown. We establish this critical L/H ratio by benchmarking infinite slope stability predictions against finite element predictions for a set of synthetic two-dimensional slopes, assuming that the difference between the predictions is due to error in the infinite slope method. We test the infinite slope method for six different L/H ratios to find the critical ratio at which its predictions fall within 5% of those from the finite element method. We repeat these tests for 5000 synthetic slopes with a range of failure plane depths, pore water pressures, friction angles, soil cohesions, soil unit weights and slope angles characteristic of natural slopes. We find that: (1) infinite slope stability predictions are consistently too conservative for small L/H ratios; (2) the predictions always converge to within 5% of the finite element benchmarks by a L/H ratio of 25 (i.e. the infinite slope assumption is reasonable for landslides 25 times longer than they are deep); but (3) they can converge at much lower ratios depending on slope properties, particularly for low cohesion soils. The implication for catchment scale stability models is that the infinite length assumption is reasonable if their grid resolution is coarse (e.g. >25 m). However, it may also be valid even at much finer grid resolutions (e.g. 1 m), because spatial organization in the predicted pore water pressure field reduces the probability of short landslides and minimizes the risk that predicted landslides will have L/H ratios less than 25. Copyright (c) 2012 John Wiley & Sons, Ltd.
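For reference, the geotechnical component being benchmarked can be written down in a few lines. The sketch below uses the standard infinite slope factor-of-safety expression with pore pressure on the failure plane; the soil parameters are invented and no finite element comparison is attempted.

"""Minimal sketch of the infinite slope factor of safety. Parameter values are
invented for illustration; this is the standard textbook expression, not the
paper's finite element benchmark."""
import math

def infinite_slope_fs(c: float, phi_deg: float, gamma: float, depth: float,
                      slope_deg: float, pore_pressure: float) -> float:
    """Factor of safety of an infinite slope with a planar failure surface.

    c: effective cohesion [kPa]; phi_deg: friction angle [deg];
    gamma: soil unit weight [kN/m3]; depth: failure plane depth H [m];
    slope_deg: slope angle [deg]; pore_pressure: u on the failure plane [kPa].
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)
    return (c + (normal_stress - pore_pressure) * math.tan(phi)) / shear_stress

# Hypothetical low-cohesion soil, 2 m deep failure plane on a 30 degree slope.
fs = infinite_slope_fs(c=2.0, phi_deg=32.0, gamma=19.0, depth=2.0,
                       slope_deg=30.0, pore_pressure=5.0)
print(f"Infinite slope factor of safety ~ {fs:.2f}")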
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability. The choice of variables remains an issue even when data are available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed, so other actions need to be taken. In order to define these actions, one has to identify the social-class differences that explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions of disadvantaged children.
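The two-stage approach attributed to Ray (1991) can be sketched as follows: an input-oriented DEA model solved as a linear programme for each school, followed by a regression of the efficiency scores on an environmental variable such as a multi-site dummy. All data below are invented, and the choice of inputs and outputs is purely illustrative.

"""Minimal sketch of a two-stage efficiency analysis in the spirit of Ray (1991):
stage 1 computes input-oriented DEA (constant returns to scale) efficiency scores,
stage 2 regresses the scores on an environmental variable (multi-site dummy)."""
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y):
    """CCR envelopment model. X: (inputs x schools), Y: (outputs x schools)."""
    n_inputs, n = X.shape
    n_outputs = Y.shape[0]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1 ... lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                              # minimize theta
        A_ub = np.zeros((n_inputs + n_outputs, n + 1))
        b_ub = np.zeros(n_inputs + n_outputs)
        A_ub[:n_inputs, 0] = -X[:, o]           # sum(lambda_j * x_j) <= theta * x_o
        A_ub[:n_inputs, 1:] = X
        A_ub[n_inputs:, 1:] = -Y                # sum(lambda_j * y_j) >= y_o
        b_ub[n_inputs:] = -Y[:, o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores[o] = res.x[0]
    return scores

# Invented data: 6 schools, inputs = (teachers, budget in MCHF), output = mean test score.
X = np.array([[12, 15, 20, 10, 18, 25],
              [1.0, 1.4, 2.1, 0.9, 1.7, 2.6]])
Y = np.array([[480, 520, 540, 470, 500, 530]])
multisite = np.array([0, 0, 1, 0, 1, 1])        # environmental variable

scores = dea_input_oriented(X, Y)

# Stage 2: OLS of efficiency scores on a constant and the multi-site dummy.
design = np.column_stack([np.ones_like(scores), multisite])
coef, *_ = np.linalg.lstsq(design, scores, rcond=None)
print("DEA scores:", np.round(scores, 3))
print(f"Estimated multi-site effect on efficiency: {coef[1]:+.3f}")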