67 results for success models comparison
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. METHODS: Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees (iii and iv are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. RESULTS: Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60-80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. CONCLUSIONS: There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of the CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
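Not the authors' code, but a minimal sketch of the comparison described above: the four modelling strategies fitted on one synthetic "region" and externally validated on the other, scored by the correct-classification rate. All predictors, coefficients and data below are invented placeholders, not the MONICA study variables.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_region(n):
    # synthetic stand-ins for WHR, BMI, age, smoking and hypertension (standardised)
    X = rng.normal(size=(n, 5))
    latent = 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(size=n)
    y = (latent > 0).astype(int)          # dyslipidemia yes/no
    return X, y

X_a, y_a = make_region(1500)              # "region A": used for fitting
X_b, y_b = make_region(1500)              # "region B": external validation

models = {
    "linear regression":   LinearRegression(),
    "logistic":            LogisticRegression(),
    "regression tree":     DecisionTreeRegressor(max_depth=4),
    "classification tree": DecisionTreeClassifier(max_depth=4),
}

for name, model in models.items():
    model.fit(X_a, y_a)
    pred = model.predict(X_b)
    if not np.issubdtype(pred.dtype, np.integer):
        pred = (pred >= 0.5).astype(int)  # threshold continuous outputs at 0.5
    print(f"{name:20s} external correct-classification rate: {(pred == y_b).mean():.2f}")
```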
Abstract:
Due to the existence of free software and pedagogical guides, the use of data envelopment analysis (DEA) has been further democratized in recent years. Nowadays, it is quite common for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analyses. Within DEA, several alternative models allow for an environmental adjustment. Five alternative models, each easily accessible to and achievable by practitioners and decision makers, are applied to the empirical case of the 90 primary schools of the State of Geneva, Switzerland. As the State of Geneva practices an upstream positive discrimination policy towards disadvantaged schools, this empirical case is particularly appropriate for an environmental adjustment. The majority of the alternative DEA models deliver divergent results, which is a matter of concern for applied researchers and a matter of confusion for practitioners and decision makers. From a political standpoint, these diverging results could lead to potentially opposite decisions.
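As an illustration of the basic machinery behind such efficiency analyses, the sketch below solves an input-oriented, constant-returns-to-scale DEA programme with scipy. It is a generic textbook formulation on invented school data, not any of the five environment-adjusted models used in the study.

```python
import numpy as np
from scipy.optimize import linprog

# rows = schools (decision-making units), columns = inputs / outputs (all invented)
X = np.array([[35.0, 20.0], [40.0, 25.0], [30.0, 30.0], [50.0, 22.0]])  # e.g. staff, budget
Y = np.array([[520.0], [560.0], [500.0], [540.0]])                      # e.g. mean test score

def ccr_input_efficiency(X, Y, o):
    n, m = X.shape          # n schools, m inputs
    s = Y.shape[1]          # s outputs
    # decision vector z = [theta, lambda_1 ... lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    A_in  = np.c_[-X[o], X.T]             # sum_j lambda_j * x_ij <= theta * x_io
    A_out = np.c_[np.zeros(s), -Y.T]      # sum_j lambda_j * y_rj >= y_ro
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(0, None)] * (n + 1)        # theta >= 0, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(len(X)):
    print(f"school {o}: efficiency = {ccr_input_efficiency(X, Y, o):.3f}")
```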
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
Abstract:
Due to the existence of free software and pedagogical guides, the use of Data Envelopment Analysis (DEA) has been further democratized in recent years. Nowadays, it is quite common for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analysis. Within DEA, several alternative models allow for an environmental adjustment. Four alternative models, each user-friendly and easily accessible to practitioners and decision makers, are applied to empirical data from 90 primary schools in the State of Geneva, Switzerland. Results show that the majority of the alternative models deliver divergent results. From a political and a managerial standpoint, these diverging results could lead to potentially ineffective decisions. As no consensus emerges on the best model to use, practitioners and decision makers may be tempted to select the model that suits them, in other words, the model that best reflects their own preferences. Further studies should investigate how an appropriate multi-criteria decision analysis method could help decision makers select the right model.
Abstract:
The authors investigated the dimensionality of the French version of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965) using confirmatory factor analysis. We tested models of 1 or 2 factors. Results suggest the RSES is a 1-dimensional scale with 3 highly correlated items. Comparison with the Revised NEO-Personality Inventory (NEO-PI-R; Costa, McCrae, & Rolland, 1998) demonstrated that Neuroticism correlated strongly, and Extraversion and Conscientiousness moderately, with the RSES. Depression accounted for 47% of the variance of the RSES. Other NEO-PI-R facets were also moderately related to self-esteem.
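The study relies on confirmatory factor analysis; purely as a loose illustration of the dimensionality question, the sketch below compares 1- and 2-factor exploratory fits on simulated item scores using average log-likelihood. It is not the authors' confirmatory procedure and the item data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
g = rng.normal(size=(n, 1))                                   # one latent self-esteem factor
items = g @ np.ones((1, 10)) * 0.7 + rng.normal(scale=0.7, size=(n, 10))

for k in (1, 2):
    fa = FactorAnalysis(n_components=k).fit(items)
    # score() returns the average per-sample log-likelihood; higher is better
    print(f"{k}-factor model: mean log-likelihood = {fa.score(items):.3f}")
```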
Abstract:
A better understanding of the factors that mould ecological community structure is required to accurately predict community composition and to anticipate threats to ecosystems due to global changes. We tested how well stacked climate-based species distribution models (S-SDMs) could predict butterfly communities in a mountain region. It has been suggested that climate is the main force driving butterfly distribution and community structure in mountain environments, and that, as a consequence, climate-based S-SDMs should yield unbiased predictions. In contrast to this expectation, at lower altitudes, climate-based S-SDMs overpredicted butterfly species richness at sites with low plant species richness and underpredicted species richness at sites with high plant species richness. According to two indices of composition accuracy, the Sorensen index and a matching coefficient considering both absences and presences, S-SDMs were more accurate in plant-rich grasslands. Butterflies display strong and often specialised trophic interactions with plants. At lower altitudes, where land use is more intense, considering climate alone without accounting for land use influences on grassland plant richness leads to erroneous predictions of butterfly presences and absences. In contrast, at higher altitudes, where climate is the main force filtering communities, there were fewer differences between observed and predicted butterfly richness. At high altitudes, even if stochastic processes decrease the accuracy of predictions of presence, climate-based S-SDMs are able to better filter out butterfly species that are unable to cope with severe climatic conditions, providing more accurate predictions of absences. Our results suggest that predictions should account for plants in disturbed habitats at lower altitudes but that stochastic processes and heterogeneity at high altitudes may limit prediction success of climate-based S-SDMs.
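A minimal sketch of the stacked-SDM idea on synthetic data: one climate-based model per species, binary predictions stacked into a site-level richness estimate, and composition scored with the Sorensen index. None of the variables correspond to the paper's butterfly or plant data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_sites, n_species = 300, 20
climate = rng.normal(size=(n_sites, 3))                       # e.g. temperature, precipitation, radiation
coefs = rng.normal(size=(n_species, 3))
occ = (climate @ coefs.T + rng.logistic(size=(n_sites, n_species)) > 0).astype(int)

# fit one model per species and stack the binary predictions (S-SDM)
pred = np.zeros_like(occ)
for s in range(n_species):
    pred[:, s] = LogisticRegression().fit(climate, occ[:, s]).predict(climate)

pred_richness = pred.sum(axis=1)
obs_richness = occ.sum(axis=1)

def sorensen(obs_row, pred_row):
    shared = np.logical_and(obs_row, pred_row).sum()
    return 2 * shared / (obs_row.sum() + pred_row.sum())

mean_sorensen = np.mean([sorensen(occ[i], pred[i]) for i in range(n_sites)])
print("mean richness bias:", (pred_richness - obs_richness).mean())
print("mean Sorensen index:", round(mean_sorensen, 3))
```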
Abstract:
Recognition by the T-cell receptor (TCR) of immunogenic peptides (p) presented by Class I major histocompatibility complexes (MHC) is the key event in the immune response against virus-infected cells or tumor cells. A study of the 2C TCR/SIYR/H-2K(b) system using a computational alanine scanning and a much faster binding free energy decomposition based on the Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) method is presented. The results show that the TCR-p-MHC binding free energy decomposition using this approach and including entropic terms provides a detailed and reliable description of the interactions between the molecules at an atomistic level. Comparison of the decomposition results with experimentally determined activity differences for alanine mutants yields a correlation of 0.67 when the entropy is neglected and 0.72 when the entropy is taken into account. Similarly, comparison of experimental activities with variations in binding free energies determined by computational alanine scanning yields correlations of 0.72 and 0.74 when the entropy is neglected or taken into account, respectively. Some key interactions for the TCR-p-MHC binding are analyzed and some possible side chains replacements are proposed in the context of TCR protein engineering. In addition, a comparison of the two theoretical approaches for estimating the role of each side chain in the complexation is given, and a new ad hoc approach to decompose the vibrational entropy term into atomic contributions, the linear decomposition of the vibrational entropy (LDVE), is introduced. The latter allows the rapid calculation of the entropic contribution of interesting side chains to the binding. This new method is based on the idea that the most important contributions to the vibrational entropy of a molecule originate from residues that contribute most to the vibrational amplitude of the normal modes. The LDVE approach is shown to provide results very similar to those of the exact but highly computationally demanding method.
Abstract:
Predictive species distribution modelling (SDM) has become an essential tool in biodiversity conservation and management. The choice of grain size (resolution) of environmental layers used in modelling is one important factor that may affect predictions. We applied 10 distinct modelling techniques to presence-only data for 50 species in five different regions, to test whether: (1) a 10-fold coarsening of resolution affects predictive performance of SDMs, and (2) any observed effects are dependent on the type of region, modelling technique, or species considered. Results show that a 10 times change in grain size does not severely affect predictions from species distribution models. The overall trend is towards degradation of model performance, but improvement can also be observed. Changing grain size does not equally affect models across regions, techniques, and species types. The strongest effect is on regions and species types, with tree species in the data sets (regions) with highest locational accuracy being most affected. Changing grain size had little influence on the ranking of techniques: boosted regression trees remain best at both resolutions. The number of occurrences used for model training had an important effect, with larger sample sizes resulting in better models, which tended to be more sensitive to grain. Effect of grain change was only noticeable for models reaching sufficient performance and/or with initial data that have an intrinsic error smaller than the coarser grain size.
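The sketch below illustrates the core manipulation only: a 10-fold coarsening of one environmental layer by block averaging, and its effect on a single-species model's AUC. The raster and occurrences are simulated, not the five study regions or ten techniques analysed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
size = 200                                            # fine grid is size x size cells
fine = rng.normal(size=(size, size))                  # one environmental layer
fine = (fine + np.roll(fine, 1, axis=0) + np.roll(fine, 1, axis=1)) / 3  # mild spatial smoothing

# 10-fold coarsening: mean over 10x10 blocks, then broadcast back onto fine cells
coarse = fine.reshape(size // 10, 10, size // 10, 10).mean(axis=(1, 3))
coarse_full = np.kron(coarse, np.ones((10, 10)))

# sample sites and simulate occurrences driven by the fine-grain environment
rows, cols = rng.integers(0, size, 500), rng.integers(0, size, 500)
x_fine = fine[rows, cols]
x_coarse = coarse_full[rows, cols]
y = (2.0 * x_fine + rng.logistic(size=500) > 0).astype(int)

for label, x in (("fine grain", x_fine), ("coarse grain", x_coarse)):
    m = LogisticRegression().fit(x.reshape(-1, 1), y)
    auc = roc_auc_score(y, m.predict_proba(x.reshape(-1, 1))[:, 1])
    print(f"{label}: AUC = {auc:.2f}")
```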
Abstract:
BACKGROUND: Frailty, as defined by the index derived from the Cardiovascular Health Study (CHS index), predicts risk of adverse outcomes in older adults. Use of this index, however, is impractical in clinical practice. METHODS: We conducted a prospective cohort study in 6701 women 69 years or older to compare the predictive validity of a simple frailty index with the components of weight loss, inability to rise from a chair 5 times without using arms, and reduced energy level (Study of Osteoporotic Fractures [SOF index]) with that of the CHS index with the components of unintentional weight loss, poor grip strength, reduced energy level, slow walking speed, and low level of physical activity. Women were classified as robust, of intermediate status, or frail using each index. Falls were reported every 4 months for 1 year. Disability (≥1 new impairment in performing instrumental activities of daily living) was ascertained at 4.5 years, and fractures and deaths were ascertained during 9 years of follow-up. Area under the curve (AUC) statistics from receiver operating characteristic curve analysis and -2 log likelihood statistics were compared for models containing the CHS index vs the SOF index. RESULTS: Increasing evidence of frailty as defined by either the CHS index or the SOF index was similarly associated with an increased risk of adverse outcomes. Frail women had a higher age-adjusted risk of recurrent falls (odds ratio, 2.4), disability (odds ratio, 2.2-2.8), nonspine fracture (hazard ratio, 1.4-1.5), hip fracture (hazard ratio, 1.7-1.8), and death (hazard ratio, 2.4-2.7) (P < .001 for all models). The AUC comparisons revealed no differences between models with the CHS index vs the SOF index in discriminating falls (AUC = 0.61 for both models; P = .66), disability (AUC = 0.64; P = .23), nonspine fracture (AUC = 0.55; P = .80), hip fracture (AUC = 0.63; P = .64), or death (AUC = 0.72; P = .10). Results were similar when -2 log likelihood statistics were compared. CONCLUSION: The simple SOF index predicts risk of falls, disability, fracture, and death as well as the more complex CHS index and may provide a useful definition of frailty to identify older women at risk of adverse health outcomes in clinical practice.
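A small sketch of the type of comparison reported above: two frailty-style indices scored on the same cohort and compared by area under the ROC curve for one binary outcome. The index definitions, outcome and data below are simulated placeholders, not the SOF cohort.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 2000
risk = rng.normal(size=n)                                  # latent frailty
# noisy 0-5 and 0-3 component counts loosely tracking the latent frailty
chs_index = np.clip(np.round(risk + rng.normal(scale=0.8, size=n) + 2.0), 0, 5)
sof_index = np.clip(np.round(risk + rng.normal(scale=0.9, size=n) + 1.5), 0, 3)
fell = (risk + rng.logistic(size=n) > 1.5).astype(int)     # e.g. recurrent falls

print("CHS-style index AUC:", round(roc_auc_score(fell, chs_index), 2))
print("SOF-style index AUC:", round(roc_auc_score(fell, sof_index), 2))
```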
Abstract:
IMPORTANCE: The 2013 American College of Cardiology/American Heart Association (ACC/AHA) guidelines introduced a prediction model and lowered the threshold for treatment with statins to a 7.5% 10-year hard atherosclerotic cardiovascular disease (ASCVD) risk. Implications of the new guideline's threshold and model have not been addressed in non-US populations or compared with previous guidelines. OBJECTIVE: To determine population-wide implications of the ACC/AHA, the Adult Treatment Panel III (ATP-III), and the European Society of Cardiology (ESC) guidelines using a cohort of Dutch individuals aged 55 years or older. DESIGN, SETTING, AND PARTICIPANTS: We included 4854 Rotterdam Study participants recruited in 1997-2001. We calculated 10-year risks for "hard" ASCVD events (including fatal and nonfatal coronary heart disease [CHD] and stroke) (ACC/AHA), hard CHD events (fatal and nonfatal myocardial infarction, CHD mortality) (ATP-III), and atherosclerotic CVD mortality (ESC). MAIN OUTCOMES AND MEASURES: Events were assessed until January 1, 2012. Per guideline, we calculated proportions of individuals for whom statins would be recommended and determined calibration and discrimination of risk models. RESULTS: The mean age was 65.5 (SD, 5.2) years. Statins would be recommended for 96.4% (95% CI, 95.4%-97.1%; n = 1825) of men and 65.8% (95% CI, 63.8%-67.7%; n = 1523) of women by the ACC/AHA, 52.0% (95% CI, 49.8%-54.3%; n = 985) of men and 35.5% (95% CI, 33.5%-37.5%; n = 821) of women by the ATP-III, and 66.1% (95% CI, 64.0%-68.3%; n = 1253) of men and 39.1% (95% CI, 37.1%-41.2%; n = 906) of women by ESC guidelines. With the ACC/AHA model, average predicted risk vs observed cumulative incidence of hard ASCVD events was 21.5% (95% CI, 20.9%-22.1%) vs 12.7% (95% CI, 11.1%-14.5%) for men (192 events) and 11.6% (95% CI, 11.2%-12.0%) vs 7.9% (95% CI, 6.7%-9.2%) for women (151 events). Similar overestimation occurred with the ATP-III model (98 events in men and 62 events in women) and ESC model (50 events in men and 37 events in women). The C statistic was 0.67 (95% CI, 0.63-0.71) in men and 0.68 (95% CI, 0.64-0.73) in women for hard ASCVD (ACC/AHA), 0.67 (95% CI, 0.62-0.72) in men and 0.69 (95% CI, 0.63-0.75) in women for hard CHD (ATP-III), and 0.76 (95% CI, 0.70-0.82) in men and 0.77 (95% CI, 0.71-0.83) in women for CVD mortality (ESC). CONCLUSIONS AND RELEVANCE: In this European population aged 55 years or older, proportions of individuals eligible for statins differed substantially among the guidelines. The ACC/AHA guideline would recommend statins for nearly all men and two-thirds of women, proportions exceeding those with the ATP-III or ESC guidelines. All 3 risk models provided poor calibration and moderate to good discrimination. Improving risk predictions and setting appropriate population-wide thresholds are necessary to facilitate better clinical decision making.
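The sketch below reproduces, on simulated data, the checks applied to each guideline model above: calibration (mean predicted 10-year risk against observed incidence), discrimination (C statistic), and the share of people crossing a treatment threshold such as 7.5%. No Rotterdam Study data or actual guideline risk equations are used; the deliberately overestimating "model" is an invented stand-in.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 5000
true_risk = rng.beta(2, 14, size=n)                 # "true" 10-year risk, mean ~0.125
predicted = np.clip(true_risk * 1.7, 0, 1)          # a model that systematically overestimates risk
events = rng.random(n) < true_risk                  # observed hard events

print("mean predicted risk:        ", round(predicted.mean(), 3))
print("observed cumulative incid.: ", round(events.mean(), 3))
print("C statistic:                ", round(roc_auc_score(events, predicted), 2))
print("share above 7.5% threshold: ", round((predicted > 0.075).mean(), 3))
```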
Abstract:
Aim: This study compares the direct, macroecological approach (MEM) for modelling species richness (SR) with the more recent approach of stacking predictions from individual species distributions (S-SDM). We implemented both approaches on the same dataset and discuss their respective theoretical assumptions, strengths and drawbacks. We also tested how both approaches performed in reproducing observed patterns of SR along an elevational gradient. Location: Two study areas in the Alps of Switzerland. Methods: We implemented MEM by relating the species counts to environmental predictors with statistical models, assuming a Poisson distribution. S-SDM was implemented by modelling each species distribution individually and then stacking the obtained prediction maps in three different ways - summing binary predictions, summing random draws of binomial trials and summing predicted probabilities - to obtain a final species count. Results: The direct MEM approach yields nearly unbiased predictions centred around the observed mean values, but with a lower correlation between predictions and observations than that achieved by the S-SDM approaches. This method also cannot provide any information on species identity and, thus, community composition. It does, however, accurately reproduce the hump-shaped pattern of SR observed along the elevational gradient. The S-SDM approach summing binary maps can predict individual species and thus communities, but tends to overpredict SR. The two other S-SDM approaches - the summed binomial trials based on predicted probabilities and the summed predicted probabilities - do not overpredict richness, but they predict many competing end points of assembly or lose the individual species predictions, respectively. Furthermore, all S-SDM approaches fail to appropriately reproduce the observed hump-shaped patterns of SR along the elevational gradient. Main conclusions: The macroecological approach and S-SDM have complementary strengths. We suggest that both could be used in combination to obtain better SR predictions, following the suggestion of constraining S-SDM by MEM predictions.
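To make the two routes concrete, the sketch below contrasts, on synthetic data, a direct Poisson model of species counts (MEM) with the three S-SDM stacking rules named above (summed binary predictions, summed binomial draws, summed probabilities). All sites, predictors and species are invented, not the Swiss Alps data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, PoissonRegressor

rng = np.random.default_rng(6)
n_sites, n_species = 400, 25
env = rng.normal(size=(n_sites, 2))                              # e.g. temperature, moisture
betas = rng.normal(size=(n_species, 2))
occ = (env @ betas.T + rng.logistic(size=(n_sites, n_species)) > 0.5).astype(int)
richness = occ.sum(axis=1)

# MEM: model the species count directly with a Poisson regression
mem_pred = PoissonRegressor().fit(env, richness).predict(env)

# S-SDM: per-species occurrence probabilities, then three stacking rules
probs = np.column_stack([
    LogisticRegression().fit(env, occ[:, s]).predict_proba(env)[:, 1]
    for s in range(n_species)
])
stack_binary   = (probs >= 0.5).sum(axis=1)                      # summed binary predictions
stack_binomial = (rng.random(probs.shape) < probs).sum(axis=1)   # summed binomial draws
stack_probsum  = probs.sum(axis=1)                               # summed probabilities

for name, pred in [("MEM", mem_pred), ("binary sum", stack_binary),
                   ("binomial draws", stack_binomial), ("probability sum", stack_probsum)]:
    bias = (pred - richness).mean()
    corr = np.corrcoef(pred, richness)[0, 1]
    print(f"{name:15s} bias = {bias:+.2f}   r = {corr:.2f}")
```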
Abstract:
The shape of the energy spectrum produced by an x-ray tube is of great importance in mammography. Many anode-filtration combinations have been proposed to obtain the most effective spectrum shape for the image quality-dose relationship. On the other hand, third-generation synchrotrons such as the European Synchrotron Radiation Facility in Grenoble are able to produce a high flux of monoenergetic radiation, making them a powerful tool for studying the effect of beam energy on image quality and dose in mammography. An objective method was used to evaluate image quality and dose in mammography with synchrotron radiation and to compare them with those of standard conventional units. The evaluation was performed systematically over the energy range of interest for mammography, through a global image quality index and through measurement of the mean glandular dose. Compared to conventional mammography units, synchrotron radiation shows a great improvement in the image quality-dose relationship, owing to the monochromaticity and the high intrinsic collimation of the beam, which allows the use of a slit instead of an anti-scatter grid for scatter rejection.