790 results for electoral prediction


Relevance: 20.00%

Abstract:

Bioactive small molecules, such as drugs or metabolites, bind to proteins or other macromolecular targets to modulate their activity, which in turn results in the observed phenotypic effects. For this reason, mapping the targets of bioactive small molecules is a key step toward unraveling the molecular mechanisms underlying their bioactivity and predicting potential side effects or cross-reactivity. Recently, large datasets of protein-small molecule interactions have become available, providing a unique source of information for the development of knowledge-based approaches to computationally identify new targets for uncharacterized molecules or secondary targets for known molecules. Here, we introduce SwissTargetPrediction, a web server to accurately predict the targets of bioactive molecules based on a combination of 2D and 3D similarity measures with known ligands. Predictions can be carried out in five different organisms, and mapping predictions by homology within and between different species is enabled for close paralogs and orthologs. SwissTargetPrediction is accessible free of charge and without login requirement at http://www.swisstargetprediction.ch.
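As an illustration of the ligand-based (2D similarity) component of this kind of approach, the sketch below ranks candidate targets by Tanimoto similarity between a query molecule and known ligands. It is a minimal sketch assuming RDKit; the `known_ligands` mapping and the SMILES strings are hypothetical placeholders, not the SwissTargetPrediction implementation.

```python
# Minimal sketch of ligand-based target scoring by 2D similarity.
# Assumes RDKit; `known_ligands` is a hypothetical {target: [SMILES, ...]} map.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

known_ligands = {
    "target_A": ["CCO", "CCN"],                 # placeholder SMILES
    "target_B": ["c1ccccc1O", "c1ccccc1N"],
}

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def rank_targets(query_smiles):
    query_fp = fingerprint(query_smiles)
    scores = {}
    for target, ligands in known_ligands.items():
        # Score a target by its most similar known ligand.
        scores[target] = max(
            DataStructs.TanimotoSimilarity(query_fp, fingerprint(s))
            for s in ligands
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_targets("c1ccccc1OC"))  # anisole-like query molecule
```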

Relevance: 20.00%

Abstract:

Background/objectives: Bioelectrical impedance analysis (BIA) is used in population and clinical studies as a technique for estimating body composition. Because of significant under-representation in the existing literature, we sought to develop and validate predictive equation(s) for BIA for studies in populations of African origin. Subjects/methods: Among five cohorts of the Modeling the Epidemiologic Transition Study, height, weight, waist circumference and body composition, using isotope dilution, were measured in 362 adults, ages 25-45, with mean body mass indices ranging from 24 to 32. BIA measures of resistance and reactance were obtained using tetrapolar placement of electrodes and the same model of analyzer across sites (BIA 101Q, RJL Systems). Multiple linear regression analysis was used to develop equations for predicting fat-free mass (FFM), as measured by isotope dilution; covariates included sex, age, waist, reactance and height²/resistance, along with dummy variables for each site. The developed equations were then tested in a validation sample; FFM predicted by previously published equations was tested in the total sample. Results: A site-combined equation and site-specific equations were developed. The mean differences between FFM (reference) and FFM predicted by the study-derived equations were between 0.4 and 0.6 kg (that is, a 1% difference between the actual and predicted FFM), and the measured and predicted values were highly correlated. The site-combined equation performed slightly better than the site-specific equations and the previously published equations. Conclusions: Relatively small differences exist between BIA equations to estimate FFM, whether study-derived or published, although the site-combined equation performed slightly better than the others. The study-derived equations provide an important tool for research in these understudied populations.
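The sketch below shows the general form of deriving such a BIA prediction equation by multiple linear regression on sex, age, waist, reactance and height²/resistance. The data are simulated and the column names are assumptions for illustration; they are not the study's cohort or coefficients.

```python
# Sketch of deriving a BIA equation for fat-free mass (FFM) by multiple linear
# regression; all data below are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),            # 0 = female, 1 = male
    "age": rng.uniform(25, 45, n),
    "waist_cm": rng.uniform(70, 110, n),
    "reactance_ohm": rng.uniform(40, 80, n),
    "height_cm": rng.uniform(150, 190, n),
    "resistance_ohm": rng.uniform(400, 700, n),
})
# height^2 / resistance is the classic BIA predictor of conductive body volume.
df["ht2_over_r"] = df["height_cm"] ** 2 / df["resistance_ohm"]
# Synthetic "reference" FFM standing in for the isotope-dilution measurement.
df["ffm_kg"] = (
    0.6 * df["ht2_over_r"] + 6.0 * df["sex"] - 0.05 * df["age"]
    + 0.05 * df["waist_cm"] + 0.1 * df["reactance_ohm"]
    + rng.normal(0, 2, n)
)

predictors = ["sex", "age", "waist_cm", "reactance_ohm", "ht2_over_r"]
model = LinearRegression().fit(df[predictors], df["ffm_kg"])
print(dict(zip(predictors, model.coef_.round(3))), round(model.intercept_, 3))
```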

Relevance: 20.00%

Abstract:

BACKGROUND: Several markers of atherosclerosis and of inflammation have been shown to predict coronary heart disease (CHD) individually. However, the added value of markers of atherosclerosis and of inflammation for the prediction of CHD over traditional risk factors has not been well established, especially in the elderly. METHODS: We studied 2202 men and women, aged 70-79, without baseline cardiovascular disease over a 6-year follow-up to assess the risk of incident CHD associated with baseline noninvasive measures of atherosclerosis (ankle-arm index [AAI], aortic pulse wave velocity [aPWV]) and inflammatory markers (interleukin-6 [IL-6], C-reactive protein [CRP], tumor necrosis factor-α [TNF-α]). CHD events were studied as either nonfatal myocardial infarction or coronary death ("hard" events), and as "hard" events plus hospitalization for angina or the need for coronary-revascularization procedures (total CHD events). RESULTS: During the 6-year follow-up, 283 participants had CHD events (including 136 "hard" events). IL-6, TNF-α and AAI independently predicted CHD events beyond the Framingham Risk Score (FRS), with hazard ratios [HR] for the highest as compared with the lowest quartile of 1.95 for IL-6 (95% CI: 1.38-2.75, p for trend <0.001) and 1.45 for TNF-α (95% CI: 1.04-2.02, p for trend 0.03), and of 1.66 (95% CI: 1.19-2.31) for AAI ≤0.90 as compared with AAI 1.01-1.30. CRP and aPWV were not independently associated with CHD events. Results were similar for "hard" CHD events. Addition of IL-6 and AAI to traditional cardiovascular risk factors yielded the greatest improvement in the prediction of CHD; the C-index for "hard"/total CHD events increased from 0.62/0.62 for traditional risk factors to 0.64/0.64 with the addition of IL-6, 0.65/0.63 with AAI, and 0.66/0.64 with IL-6 combined with AAI. Being in the highest quartile of IL-6 combined with an AAI ≤0.90 or >1.40 yielded an HR of 2.51 (1.50-4.19) and 4.55 (1.65-12.50) beyond FRS, respectively. With the use of CHD risk categories, risk prediction at 5 years was more accurate in models that included IL-6, AAI or both, with 8.0%, 8.3% and 12.1% correctly reclassified, respectively. CONCLUSIONS: Among older adults, markers of atherosclerosis and of inflammation, particularly IL-6 and AAI, are independently associated with CHD. However, these markers only modestly improve cardiovascular risk prediction beyond traditional risk factors. Acknowledgments: This study was supported by Contracts NO1-AG-6-2101, NO1-AG-6-2103, and NO1-AG-6-2106 of the National Institute on Aging. This research was supported in part by the Intramural Research Program of the NIH, National Institute on Aging.
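A minimal sketch of the evaluation logic used above (does a marker improve discrimination beyond the Framingham score?) is shown below: fit Cox proportional-hazards models with and without the marker and compare the concordance index. The data, column names and coefficients are simulated placeholders, assuming the lifelines library, and are not the Health ABC analysis.

```python
# Sketch: compare C-index of a Framingham-only Cox model with one that also
# includes inflammatory/atherosclerosis markers. All data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "framingham_score": rng.normal(10, 4, n),
    "il6_quartile": rng.integers(1, 5, n),
    "low_aai": rng.integers(0, 2, n),          # indicator for AAI <= 0.90
})
risk = 0.08 * df["framingham_score"] + 0.25 * df["il6_quartile"] + 0.4 * df["low_aai"]
df["time"] = rng.exponential(1.0 / np.exp(risk - risk.mean()))
df["event"] = rng.integers(0, 2, n)            # crude event/censoring indicator

base = CoxPHFitter().fit(df[["framingham_score", "time", "event"]],
                         duration_col="time", event_col="event")
full = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("C-index, FRS only:      ", round(base.concordance_index_, 3))
print("C-index, FRS + markers: ", round(full.concordance_index_, 3))
```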

Relevance: 20.00%

Abstract:

Osteoporotic hip fractures increase dramatically with age and are responsible for considerable morbidity and mortality. Several treatments to prevent the occurrence of hip fracture have been validated in large randomized trials, and the current challenge is to improve the identification of individuals at high risk of fracture who would benefit from therapeutic or preventive intervention. We have performed an exhaustive literature review on hip fracture predictors, focusing primarily on clinical risk factors, dual X-ray absorptiometry (DXA), quantitative ultrasound, and bone markers. This review is based on original articles and meta-analyses. We have selected studies that aim both to predict the risk of hip fracture and to discriminate individuals with or without fracture. We have included only postmenopausal women in our review. For studies involving both men and women, only results concerning women have been considered. Regarding clinical factors, only prospective studies have been taken into account. Predictive factors have been used as stand-alone tools to predict hip fracture or sequentially through successive selection processes or by combination into risk scores. There is still much debate as to whether or not the combination of these various parameters, as risk scores or as sequential or concurrent combinations, could help to better predict hip fracture. There are conflicting results on whether or not such combinations provide improvement over each method alone. Sequential combination of bone mineral density and ultrasound parameters might be cost-effective compared with DXA alone, because of fewer bone mineral density measurements. However, use of multiple techniques may increase costs. One problem that precludes comparison of most published studies is that they use either relative risk, or absolute risk, or sensitivity and specificity. The absolute risk of individuals given their risk factors and bone assessment results would be a more appropriate model for decision-making than relative risk. Currently, a group appointed by the World Health Organization and led by Professor John Kanis is working on such a model. It will therefore be possible to further assess the best choice of threshold to optimize the number of women needed to screen for each country and each treatment.
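As a toy illustration of the sequential strategies discussed above (triage with one measurement, confirm with another so that fewer DXA scans are needed), the sketch below routes women to DXA only when a quantitative ultrasound result is intermediate. The thresholds, T-score semantics and function name are hypothetical assumptions, not values from the reviewed studies.

```python
# Toy sketch of a sequential (two-step) fracture-risk screening rule.
# Thresholds and T-score semantics are illustrative assumptions only.
def screening_decision(qus_t_score, dxa_t_score=None):
    """Return 'treat', 'reassure', or 'refer to DXA' for a postmenopausal woman."""
    if qus_t_score <= -2.5:                 # clearly high risk on ultrasound alone
        return "treat"
    if qus_t_score >= -1.0:                 # clearly low risk on ultrasound alone
        return "reassure"
    # Intermediate ultrasound result: only this group needs a DXA measurement.
    if dxa_t_score is None:
        return "refer to DXA"
    return "treat" if dxa_t_score <= -2.5 else "reassure"

print(screening_decision(-1.8))                      # -> 'refer to DXA'
print(screening_decision(-1.8, dxa_t_score=-2.7))    # -> 'treat'
```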

Relevance: 20.00%

Abstract:

BACKGROUND: Guidelines for the prevention of coronary heart disease (CHD) recommend the use of Framingham-based risk scores that were developed in white middle-aged populations. It remains unclear whether and how CHD risk prediction might be improved among older adults. We aimed to compare the prognostic performance of the Framingham risk score (FRS), directly and after recalibration, with refit functions derived from the present cohort, as well as to assess the utility of adding other routinely available risk parameters to the FRS. METHODS: Among 2193 black and white older adults (mean age, 73.5 years) without pre-existing cardiovascular disease from the Health ABC cohort, we examined adjudicated CHD events, defined as incident myocardial infarction, CHD death, and hospitalization for angina or coronary revascularization. RESULTS: During 8-year follow-up, 351 participants experienced CHD events. The FRS discriminated poorly between persons who did and did not experience CHD events (C-index: 0.577 in women; 0.583 in men) and underestimated absolute risk by 51% in women and 8% in men. Recalibration of the FRS improved absolute risk prediction, particularly for women. For both genders, refitting these functions substantially improved absolute risk prediction, with discrimination similar to the FRS. Results did not differ between whites and blacks. The addition of lifestyle variables, waist circumference and creatinine did not improve risk prediction beyond the risk factors of the FRS. CONCLUSIONS: The FRS underestimates CHD risk in older adults, particularly in women, although traditional risk factors remain the best predictors of CHD. Re-estimated risk functions using these factors improve the accuracy of absolute risk estimation.
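The sketch below illustrates what "recalibration" of a Framingham-type function typically means: the regression coefficients are kept, but the baseline survival and covariate means are replaced with values from the target cohort. All coefficients, means and baseline-survival values here are illustrative placeholders, not the Framingham or Health ABC numbers.

```python
# Sketch of Framingham-type risk estimation and simple recalibration:
# risk = 1 - S0 ** exp(sum(beta_k * (x_k - mean_k))). Values are placeholders.
import math

betas = {"ln_age": 3.06, "ln_chol": 1.12, "smoker": 0.65}          # placeholders
framingham_means = {"ln_age": 3.86, "ln_chol": 5.34, "smoker": 0.35}
framingham_s0 = 0.90                                               # placeholder baseline survival

def predicted_risk(x, means, s0):
    lp = sum(betas[k] * (x[k] - means[k]) for k in betas)
    return 1.0 - s0 ** math.exp(lp)

person = {"ln_age": math.log(75), "ln_chol": math.log(210), "smoker": 0}

# Original (uncalibrated) FRS-style estimate:
print(round(predicted_risk(person, framingham_means, framingham_s0), 3))

# Recalibrated estimate: same betas, but cohort-specific means and baseline survival.
cohort_means = {"ln_age": math.log(73.5), "ln_chol": math.log(200), "smoker": 0.10}
cohort_s0 = 0.85                                                   # placeholder
print(round(predicted_risk(person, cohort_means, cohort_s0), 3))
```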

Relevance: 20.00%

Abstract:

The main objective of the project was to develop conceptual and methodological improvements allowing better prediction of changes in species distributions (at a landscape scale) resulting from environmental change in a disturbance-dominated context. In a first study, we compared the effectiveness of different dynamic models for predicting the distribution of the ortolan bunting (Emberiza hortulana). Our results indicate that a hybrid model combining changes in habitat quality, derived from landscape changes, with a spatially explicit population model is an appropriate approach for addressing changes in species distributions in contexts of high environmental dynamism and limited dispersal capacity of the target species. In a second study, we addressed the calibration, using monitoring data, of dynamic distribution models for 12 species with a preference for open habitats. Among the conclusions drawn, we highlight: (1) the need for monitoring data to cover the areas where changes in quality occur; (2) the bias introduced in the estimation of the occupancy-model parameters when the landscape-change hypothesis or the habitat-quality model is incorrect. In the last study, we examined the possible impact on 67 bird species of different fire regimes, defined from combinations of levels of climate change (leading to an expected increase in the size and frequency of forest fires) and of fire-suppression efficiency. According to the results of our models, the combination of anthropogenic factors of the fire regime, such as rural abandonment and fire suppression, may be more decisive for distribution changes than the effects derived from climate change. The products generated include three scientific publications, a web page with project results, and a package for the R statistical environment.
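A toy sketch of the hybrid idea described above (habitat quality driving colonization, dispersal limited to the neighborhood, low quality driving extinction) is given below. The grid, rates and update rule are illustrative assumptions, not the project's models.

```python
# Toy sketch of a hybrid dynamic distribution model: habitat quality drives
# colonization, dispersal is neighborhood-limited, and low quality drives local
# extinction. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
quality = rng.uniform(0, 1, (50, 50))        # habitat-quality map (0-1)
occupied = rng.random((50, 50)) < 0.05       # initial occupancy

def step(occupied, quality, col_rate=0.3, ext_rate=0.2):
    # Count occupied cells in the 8-cell neighborhood (limited dispersal).
    padded = np.pad(occupied, 1)
    neighbors = sum(
        padded[1 + dy:51 + dy, 1 + dx:51 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    colonize = (~occupied) & (rng.random(occupied.shape)
                              < col_rate * quality * (neighbors > 0))
    go_extinct = occupied & (rng.random(occupied.shape) < ext_rate * (1 - quality))
    return (occupied | colonize) & ~go_extinct

for _ in range(20):
    occupied = step(occupied, quality)
print("occupied cells after 20 steps:", int(occupied.sum()))
```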

Relevance: 20.00%

Abstract:

Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors.
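One simple way to expose the cross-species similarity that syntenic gene prediction exploits is a six-frame translated comparison of two homologous genomic regions, for example with BLAST+ tblastx. The sketch below illustrates that idea only; it is not the SGP-1 pipeline, the FASTA file names are placeholders, and it assumes a local BLAST+ installation.

```python
# Sketch: expose candidate coding regions shared by two homologous genomic
# sequences via a translated (six-frame) comparison with BLAST+ tblastx.
# File names are placeholders; BLAST+ must be installed and on PATH.
import subprocess

cmd = [
    "tblastx",
    "-query", "human_region.fa",      # placeholder FASTA of one genomic region
    "-subject", "mouse_region.fa",    # placeholder FASTA of the homologous region
    "-evalue", "1e-5",
    "-outfmt", "6 qstart qend sstart send evalue bitscore",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)

# Each tabular line is a conserved translated segment: a candidate coding exon.
for line in result.stdout.strip().splitlines():
    qstart, qend, sstart, send, evalue, bits = line.split("\t")
    print(f"conserved block {qstart}-{qend} (query) vs {sstart}-{send} (subject), bits={bits}")
```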

Relevance: 20.00%

Abstract:

One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set is of considerable importance for human health and medicine. Though progress has been made in computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets in which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200-kb-long BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments. Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
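The sketch below shows the general construction of such a semiartificial test set: single-gene genomic sequences are interleaved with randomly generated intergenic DNA until the result approaches BAC-sized input. The gene sequences, spacer length and GC content are placeholder assumptions, not the study's actual data.

```python
# Sketch of building a semiartificial test set: concatenate single-gene genomic
# sequences separated by randomly generated intergenic DNA. Inputs are placeholders.
import random

random.seed(0)

def random_intergenic(length, gc=0.41):
    """Random 'intergenic' DNA with a roughly human-like GC content."""
    weights = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]  # A, C, G, T
    return "".join(random.choices("ACGT", weights=weights, k=length))

def build_test_sequence(gene_seqs, intergenic_len=20000):
    """Interleave single-gene sequences with random intergenic spacers."""
    pieces, annotations, pos = [], [], 0
    for seq in gene_seqs:
        spacer = random_intergenic(intergenic_len)
        pieces.append(spacer)
        pos += len(spacer)
        annotations.append((pos, pos + len(seq)))   # coordinates of the real gene
        pieces.append(seq)
        pos += len(seq)
    pieces.append(random_intergenic(intergenic_len))
    return "".join(pieces), annotations

genes = ["ATG" + random_intergenic(3000) + "TAA" for _ in range(8)]  # stand-ins
sequence, coords = build_test_sequence(genes)
print(len(sequence), coords[:2])
```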

Relevance: 20.00%

Abstract:

The completion of the sequencing of the mouse genome promises to help predict human genes with greater accuracy. While current ab initio gene prediction programs are remarkably sensitive (i.e., they predict at least a fragment of most genes), their specificity is often low, predicting a large number of false-positive genes in the human genome. Sequence conservation at the protein level with the mouse genome can help eliminate some of those false positives. Here we describe SGP2, a gene prediction program that combines ab initio gene prediction with TBLASTX searches between two genome sequences to provide both sensitive and specific gene predictions. The accuracy of SGP2 when used to predict genes by comparing the human and mouse genomes is assessed on a number of data sets, including single-gene data sets, the highly curated human chromosome 22 predictions, and entire genome predictions from ENSEMBL. Results indicate that SGP2 outperforms purely ab initio gene prediction methods. Results also indicate that SGP2 works about as well with 3x shotgun data as it does with fully assembled genomes. SGP2 provides a high enough specificity that its predictions can be experimentally verified at a reasonable cost. SGP2 was used to generate a complete set of gene predictions on both the human and mouse genomes by comparing the two species. Our results suggest that another few thousand human and mouse genes currently not in ENSEMBL are worth verifying experimentally.
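To illustrate the general idea of combining ab initio prediction with cross-species conservation, the toy sketch below gives candidate exons a score bonus when they overlap a conserved translated block (such as a TBLASTX hit). The scoring scheme, weights and example coordinates are illustrative assumptions, not the actual SGP2 algorithm.

```python
# Toy sketch: boost ab initio exon scores with cross-species conservation.
# Numbers and the bonus scheme are illustrative placeholders only.
def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

def rescore_exons(candidate_exons, conserved_blocks, bonus_per_bit=0.05):
    """candidate_exons: [(start, end, ab_initio_score)];
    conserved_blocks: [(start, end, bitscore)] from a translated comparison."""
    rescored = []
    for start, end, score in candidate_exons:
        bonus = sum(bits * bonus_per_bit
                    for b_start, b_end, bits in conserved_blocks
                    if overlaps(start, end, b_start, b_end))
        rescored.append((start, end, score + bonus))
    return rescored

exons = [(100, 250, 3.2), (900, 1050, 1.1)]   # hypothetical ab initio exons
blocks = [(120, 240, 55.0)]                   # hypothetical conserved block
print(rescore_exons(exons, blocks))
```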

Relevance: 20.00%

Abstract:

Background: Recent advances in high-throughput technologies have produced a vast number of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has spurred the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, knowledge-based potentials, or a combination of both. Results: Here, we present and demonstrate a theory to split knowledge-based potentials into biologically meaningful scoring terms and to combine them into new scores to predict near-native structures. Our strategy circumvents the problem of defining the reference state. In this approach we demonstrate a simple, linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score, we obtained predictions similar to state-of-the-art methods. In addition, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores have detected near-native structures as accurately as state-of-the-art methods and have successfully identified wrongly modeled regions of many near-native conformations.
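A minimal sketch of the Z-score idea is shown below: each split energy term is standardized across the set of alternative models for the same target (so no explicit reference state is needed) and the standardized terms are combined linearly into a composite score. The term names, weights and random energies are illustrative assumptions, not the paper's potentials.

```python
# Sketch of turning several knowledge-based energy terms into Z-scores across a
# model set and combining them linearly. Term names and weights are illustrative.
import numpy as np

rng = np.random.default_rng(3)
# Rows = alternative models of one target; columns = split energy terms.
terms = {"pair_contact": rng.normal(0, 1, 50),
         "solvent_exposure": rng.normal(0, 1, 50),
         "local_backbone": rng.normal(0, 1, 50)}
energies = np.column_stack(list(terms.values()))

# Z-score each term across the model set: every model is scored relative to the
# other candidates rather than against an explicit reference state.
zscores = (energies - energies.mean(axis=0)) / energies.std(axis=0)

weights = np.array([0.5, 0.3, 0.2])          # illustrative linear combination
composite = zscores @ weights

best_model = int(np.argmin(composite))       # lower (pseudo-)energy = better
print("selected model index:", best_model)
```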

Relevance: 20.00%

Abstract:

Background: A number of studies have used protein interaction data alone for protein function prediction. Here, we introduce a computational approach for the annotation of enzymes, based on the observation that similar protein sequences are more likely to perform the same function if they share similar interacting partners. Results: The method was tested against the PSI-BLAST program using a set of 3,890 protein sequences for which interaction data were available. For protein sequences that align with at least 40% sequence identity to a known enzyme, the specificity of our method in predicting the first three EC digits increased from 80% to 90% at 80% coverage when compared to PSI-BLAST. Conclusion: Our method can also be applied to proteins for which homologous sequences with known interacting partners can be detected. Thus, our method could increase by 10% the specificity of genome-wide enzyme predictions based on sequence matching by PSI-BLAST alone.
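The toy sketch below captures the core idea: a homology-based EC transfer is accepted only when the query and the hit also share enough interaction partners (here measured with a Jaccard index). All data, thresholds and names are illustrative placeholders, not the published method or its parameters.

```python
# Toy sketch: accept an EC transfer from a sequence hit only if the query and the
# hit share interaction partners. Data and thresholds are illustrative only.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical inputs: PSI-BLAST-style hits and interaction partners per protein.
hits = {"queryP": [("enzymeA", 62.0), ("enzymeB", 45.0)]}   # (hit, % identity)
partners = {"queryP": {"p1", "p2", "p3"},
            "enzymeA": {"p1", "p2", "p9"},
            "enzymeB": {"p7", "p8"}}
ec_numbers = {"enzymeA": "1.1.1", "enzymeB": "2.7.11"}      # first three EC digits

def predict_ec(query, min_identity=40.0, min_jaccard=0.3):
    for hit, identity in sorted(hits[query], key=lambda h: -h[1]):
        if identity >= min_identity and \
           jaccard(partners[query], partners[hit]) >= min_jaccard:
            return ec_numbers[hit]
    return None

print(predict_ec("queryP"))   # -> '1.1.1' (enzymeA shares partners p1 and p2)
```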

Relevance: 20.00%

Abstract:

Building a personalized model that describes the drug concentration inside the human body for each patient is highly important for clinical practice and demanding for modeling tools. Instead of using traditional explicit methods, in this paper we propose a machine learning approach to describe the relation between the drug concentration and patients' features. Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. We focus mainly on the prediction of drug concentrations as well as on the analysis of the influence of different features. Models are built with Support Vector Machines, and the prediction results are compared with those of traditional analytical models.
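A minimal sketch of the modeling setup is shown below: support vector regression maps patient features to a drug concentration. The features, synthetic data and kernel settings are illustrative assumptions (using scikit-learn), not the study's dataset or tuned model.

```python
# Sketch of predicting a drug concentration from patient features with support
# vector regression (scikit-learn). All data below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n = 300
# Hypothetical patient features: dose, body weight, age, time since last dose.
X = np.column_stack([rng.uniform(50, 400, n),    # dose (mg)
                     rng.uniform(45, 110, n),    # weight (kg)
                     rng.uniform(18, 85, n),     # age (years)
                     rng.uniform(1, 24, n)])     # time since dose (h)
# Synthetic concentration roughly proportional to dose/weight, decaying over time.
y = X[:, 0] / X[:, 1] * np.exp(-0.1 * X[:, 3]) + rng.normal(0, 0.2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)
print("R^2 on held-out patients:", round(model.score(X_test, y_test), 3))
```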

Relevance: 20.00%

Abstract:

The objective of this study was to verify whether replacing the Injury Severity Score (ISS) with the New Injury Severity Score (NISS) in the original Trauma and Injury Severity Score (TRISS) formula would improve the estimation of survival rates. This retrospective study was performed in a level I trauma center over one year. A ROC curve was used to identify the better indicator (TRISS or NTRISS) for predicting survival probability. Participants were 533 victims, with a mean age of 38±16 years. There was a predominance of motor vehicle accidents (61.9%). External injuries were the most frequent (63.0%), followed by head/neck injuries (55.5%). The survival rate was 76.9%. ISS scores most often ranged from 9 to 15 (40.0%) and NISS scores from 16 to 24 (25.5%). A survival probability equal to or greater than 75.0% was obtained for 83.4% of the victims according to TRISS, and for 78.4% according to NTRISS. The new version (NTRISS) is better than TRISS for survival prediction in trauma patients.
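The sketch below shows the logistic form used by TRISS-type scores, where survival probability is Ps = 1 / (1 + exp(-b)) and b combines the Revised Trauma Score, the anatomical score (ISS for TRISS, NISS for NTRISS) and an age index. The coefficient values in the code are placeholders for illustration; the published MTOS-derived coefficients should be used in practice.

```python
# Sketch of the TRISS/NTRISS logistic form: Ps = 1 / (1 + exp(-b)),
# b = b0 + b_rts*RTS + b_anat*(ISS or NISS) + b_age*AgeIndex.
# Coefficient defaults below are illustrative placeholders, not the MTOS values.
import math

def survival_probability(rts, anatomical_score, age_ge_55,
                         b0=-0.45, b_rts=0.81, b_anat=-0.08, b_age=-1.74):
    b = b0 + b_rts * rts + b_anat * anatomical_score + b_age * age_ge_55
    return 1.0 / (1.0 + math.exp(-b))

# NTRISS simply substitutes NISS for ISS in the same formula.
print(round(survival_probability(rts=7.84, anatomical_score=16, age_ge_55=0), 3))  # ISS or NISS = 16
print(round(survival_probability(rts=7.84, anatomical_score=25, age_ge_55=0), 3))  # a higher NISS
```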