797 results for interval prediction


Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To comprehensively assess pre-, intra-, and postoperative delirium risk factors as potential targets for intervention. BACKGROUND: Delirium after cardiac surgery is associated with longer intensive care unit (ICU) stay and poorer functional and cognitive outcomes. Previous reports on delirium risk factors have not covered the full range of patients' presurgical conditions, intraoperative factors, and postoperative course. METHODS: After written informed consent, 221 consecutive patients ≥ 50 years scheduled for cardiac surgery were assessed for preoperative cognitive performance and functional and physical status. Clinical and biochemical data were systematically recorded perioperatively. RESULTS: Of the 215 patients remaining for analysis, 31% developed delirium in the ICU. In logistic regression models, older age [73.3 (71.2-75.4) vs 68.5 (67.0-70.0) years; P = 0.016], higher Charlson comorbidity index [3.0 (1.5-4.0) vs 2.0 (1.0-3.0) points; P = 0.009], lower Mini-Mental State Examination (MMSE) score [27 (23-29) vs 28 (27-30) points; P = 0.021], longer cardiopulmonary bypass (CPB) [133 (112-163) vs 119 (99-143) min; P = 0.004], and systemic inflammatory response syndrome in the ICU [25 (36.2%) vs 13 (8.9%); P = 0.001] were independently associated with delirium. Combining age, MMSE score, Charlson comorbidity index, and length of CPB in a regression equation allowed prediction of postoperative delirium with a sensitivity of 71.19% and a specificity of 76.26% (receiver operating characteristic analysis, area under the curve: 0.791; 95% confidence interval: 0.727-0.845). CONCLUSIONS: Further research will evaluate whether modification of these risk factors prevents delirium and improves outcomes.
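A minimal sketch of the kind of multivariable risk model summarized above, assuming hypothetical patient data: a logistic regression over the four reported predictors (age, MMSE score, Charlson comorbidity index, CPB duration) whose discrimination is summarized by the area under the ROC curve. The variable ranges, labels, and evaluation on the training data are placeholders, not the study's dataset or protocol.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 215
X = np.column_stack([
    rng.normal(70, 8, n),       # age (years)
    rng.integers(20, 31, n),    # MMSE score (points)
    rng.integers(0, 7, n),      # Charlson comorbidity index
    rng.normal(125, 30, n),     # CPB duration (min)
])
y = rng.integers(0, 2, n)       # delirium yes/no (placeholder labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, risk))   # the study reports AUC 0.791 on real data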

Relevance:

20.00%

Publisher:

Abstract:

Uncertainties that are not considered in the analytical model of the plant dramatically degrade the performance of fault detection in practice. To cope better with this prevalent problem, in this paper we develop a methodology using Modal Interval Analysis that takes those uncertainties in the plant model into account. A fault detection method is developed based on this model; it is robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the main behavior of the fault that has occurred, which can then be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method to accomplish both tasks.
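A minimal sketch of the interval-envelope idea on a toy first-order model (plain interval arithmetic, not the authors' Modal Interval Analysis implementation, and omitting the ANFIS accommodation step): the plant output is predicted as an interval from the uncertain parameters, and a fault is flagged only when the measurement leaves that envelope, so modelled uncertainty does not trigger false alarms.

def predict_interval(u, a=(0.9, 1.1), b=(0.4, 0.6)):
    """Output interval of y = a*u + b for interval parameters a, b (assuming u >= 0)."""
    lo = a[0] * u + b[0]
    hi = a[1] * u + b[1]
    return lo, hi

def is_faulty(u, y_measured):
    lo, hi = predict_interval(u)
    return not (lo <= y_measured <= hi)

print(is_faulty(1.0, 1.5))   # inside the envelope  -> False (no alarm)
print(is_faulty(1.0, 2.5))   # outside the envelope -> True  (fault detected)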

Relevance:

20.00%

Publisher:

Abstract:

A model-based approach for fault diagnosis is proposed, in which fault detection is based on checking the consistency of the Analytical Redundancy Relations (ARRs) using an interval tool. The tool accounts for uncertainty in the parameters and the measurements by means of intervals. Faults are explicitly included in the model, which allows additional information to be exploited. This information is obtained from partial derivatives computed from the ARRs. The signs of the residuals are used to prune the candidate space when performing the fault diagnosis task. The method is illustrated with a two-tank example, in which these aspects are shown to have an impact on diagnosis and fault discrimination, since the proposed method goes beyond purely structural methods.
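A small sketch of the sign-based pruning described above, with hypothetical fault signatures: each fault candidate is associated with the expected signs of the residuals (derived from the partial derivatives of the ARRs), and only candidates whose signature matches the observed residual signs are kept.

expected_signs = {                 # sign of d(residual_i)/d(fault_j), per residual
    "leak_tank_1":   (+1, 0),
    "leak_tank_2":   (0, +1),
    "clogged_valve": (-1, +1),
}

def consistent(observed, expected):
    """Keep a candidate only if its expected sign signature matches the observation."""
    return all(o == e for o, e in zip(observed, expected))

observed = (-1, +1)                # signs of the evaluated residuals
candidates = [f for f, sig in expected_signs.items() if consistent(observed, sig)]
print(candidates)                  # -> ['clogged_valve']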

Relevance:

20.00%

Publisher:

Abstract:

Imatinib (Glivec®) has transformed the treatment and short-term prognosis of chronic myeloid leukaemia (CML) and gastro-intestinal stromal tumour (GIST). However, the treatment must be taken indefinitely and is not devoid of inconvenience and toxicity. Moreover, resistance or escape from disease control occurs in a significant number of patients. Imatinib is a substrate of the cytochromes P450 CYP3A4/5 and of the multidrug transporter P-glycoprotein (the product of the MDR1 gene). Considering the large inter-individual differences in the expression and function of these systems, the disposition and clinical activity of imatinib can be expected to vary widely among patients, calling for dosage individualisation. The aim of this exploratory study was to determine the average pharmacokinetic parameters characterizing the disposition of imatinib in the target population, to assess their inter-individual variability, and to identify influential factors affecting them. A total of 321 plasma concentrations, taken at various sampling times after the latest dose, were measured in 59 patients receiving Glivec® under diverse regimens, using a validated chromatographic method (HPLC-UV) developed for this study. The results were analysed by non-linear mixed-effects modelling (NONMEM). A one-compartment model with first-order absorption appeared appropriate to describe the data, with an average apparent clearance of 12.4 l/h, a distribution volume of 268 l, and an absorption rate constant of 0.47 h-1. The clearance was affected by body weight, age and sex. No influence of interacting drugs was found. DNA samples were used for pharmacogenetic explorations. The MDR1 polymorphism 3435C>T appears to affect the disposition of imatinib. Large inter-individual variability remained unexplained by the demographic covariates considered, both in clearance (40%) and distribution volume (71%). Together with intra-patient variability (34%), this translates into an 8-fold width of the 90%-prediction interval of plasma concentrations expected under a fixed dosing regimen. This is a strong argument for further investigating the possible usefulness of a therapeutic drug monitoring programme for imatinib. It may help to individualise the dosing regimen before overt disease progression or treatment toxicity is observed, thus improving both the long-term therapeutic effectiveness and the tolerability of this drug.
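For illustration, the one-compartment model with first-order absorption mentioned above can be written as C(t) = D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V. The sketch below evaluates it with the reported population values (apparent clearance 12.4 l/h, apparent volume 268 l, ka = 0.47 h-1); the 400 mg dose and the single-dose setting are illustrative assumptions, not the study's regimen.

import math

def concentration(t_h, dose_mg=400, cl=12.4, v=268.0, ka=0.47):
    """Plasma concentration (mg/l) t_h hours after a single oral dose."""
    ke = cl / v                              # elimination rate constant (1/h)
    return dose_mg * ka / (v * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

for t in (1, 4, 8, 24):
    print(f"t = {t:2d} h  C = {concentration(t):.2f} mg/l")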

Relevance:

20.00%

Publisher:

Abstract:

The main objective of the project was to develop conceptual and methodological improvements allowing better prediction of changes in species distributions (at a landscape scale) derived from environmental change in a disturbance-dominated context. In a first study, we compared the performance of different dynamic models for predicting the distribution of the ortolan bunting (Emberiza hortulana). Our results indicate that a hybrid model combining changes in habitat quality, derived from landscape changes, with a spatially explicit population model is a suitable approach for addressing changes in species distributions in contexts of high environmental dynamism and limited dispersal capacity of the target species. In a second study we addressed the calibration, using monitoring data, of dynamic distribution models for 12 species with a preference for open habitats. Among the conclusions drawn, we highlight: (1) the need for the monitoring data to cover the areas where the changes in quality occur; (2) the bias introduced into the estimation of the occupancy-model parameters when the landscape-change hypothesis or the habitat-quality model is incorrect. In the last study we examined the potential impact on 67 bird species of different fire regimes, defined from combinations of levels of climate change (leading to an expected increase in the size and frequency of forest fires) and of fire-suppression efficiency. According to our model results, the combination of anthropogenic factors of the fire regime, such as rural abandonment and fire suppression, may be more decisive for distribution changes than the effects derived from climate change. The products generated include three scientific publications, a web page with the project results, and a package for the R statistical environment.

Relevance:

20.00%

Publisher:

Abstract:

Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors.

Relevance:

20.00%

Publisher:

Abstract:

One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set are of considerable importance for human health and medicine. Though progress has been made on computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets in which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200-kb-long BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX, was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments. Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
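A minimal, purely illustrative sketch of how such a semiartificial test set can be assembled: single-gene genomic sequences are concatenated with randomly generated intergenic spacers, keeping the true gene coordinates for later accuracy scoring. The spacer lengths and GC content below are arbitrary choices, not those used in the study.

import random

random.seed(0)

def random_intergenic(length, gc=0.41):
    """Random DNA spacer with a given GC content."""
    weights = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]   # A, C, G, T
    return "".join(random.choices("ACGT", weights=weights, k=length))

def build_test_sequence(gene_seqs, spacer_range=(5_000, 20_000)):
    parts, coords, pos = [], [], 0
    for gene in gene_seqs:
        spacer = random_intergenic(random.randint(*spacer_range))
        parts.append(spacer)
        pos += len(spacer)
        coords.append((pos, pos + len(gene)))   # true gene coordinates, for scoring
        parts.append(gene)
        pos += len(gene)
    return "".join(parts), coords

seq, gene_coords = build_test_sequence(["ATGAAATGA" * 50, "ATGCCCTGA" * 60])
print(len(seq), gene_coords)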

Relevance:

20.00%

Publisher:

Abstract:

The completion of the sequencing of the mouse genome promises to help predict human genes with greater accuracy. While current ab initio gene prediction programs are remarkably sensitive (i.e., they predict at least a fragment of most genes), their specificity is often low, predicting a large number of false-positive genes in the human genome. Sequence conservation at the protein level with the mouse genome can help eliminate some of those false positives. Here we describe SGP2, a gene prediction program that combines ab initio gene prediction with TBLASTX searches between two genome sequences to provide both sensitive and specific gene predictions. The accuracy of SGP2 when used to predict genes by comparing the human and mouse genomes is assessed on a number of data sets, including single-gene data sets, the highly curated human chromosome 22 predictions, and entire genome predictions from ENSEMBL. Results indicate that SGP2 outperforms purely ab initio gene prediction methods. Results also indicate that SGP2 works about as well with 3x shotgun data as it does with fully assembled genomes. SGP2 provides a high enough specificity that its predictions can be experimentally verified at a reasonable cost. SGP2 was used to generate a complete set of gene predictions on both the human and mouse by comparing the genomes of these two species. Our results suggest that another few thousand human and mouse genes currently not in ENSEMBL are worth verifying experimentally.
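A simplified illustration of the general idea (not SGP2's actual scoring scheme): exon candidates produced by an ab initio predictor receive a score bonus proportional to their overlap with TBLASTX hits against the second genome, so conserved exons are favored while unsupported candidates gain nothing.

def overlap(a, b):
    """Number of overlapping bases between two (start, end) intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def rescore(candidate_exons, tblastx_hits, bonus_per_base=0.01):
    rescored = []
    for start, end, score in candidate_exons:
        conserved = sum(overlap((start, end), hit) for hit in tblastx_hits)
        rescored.append((start, end, score + bonus_per_base * conserved))
    return rescored

exons = [(100, 250, 3.2), (900, 1020, 2.9)]      # (start, end, ab initio score)
hits = [(120, 260), (4000, 4200)]                # TBLASTX HSP coordinates
print(rescore(exons, hits))                      # conserved exon rises to 4.5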

Relevance:

20.00%

Publisher:

Abstract:

Background: Recent advances in high-throughput technologies have produced a vast number of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has prompted the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, knowledge-based potentials, or a combination of both. Results: Here, we present and demonstrate a theory to split knowledge-based potentials into biologically meaningful scoring terms and to combine them into new scores to predict near-native structures. Our strategy circumvents the problem of defining the reference state. In this approach we give the proof for a simple, linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score, we obtained predictions similar to those of state-of-the-art methods. Moreover, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores detect near-native structures as accurately as state-of-the-art methods and successfully identify wrongly modeled regions of many near-native conformations.
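A minimal sketch of the Z-score idea with synthetic numbers: each scoring term is standardized across the set of alternative models, and the models are ranked by a linear combination of the per-term Z-scores (here with equal weights, the simplest composite score; the weights could instead be optimized).

import numpy as np

rng = np.random.default_rng(1)
# rows = candidate models, columns = individual scoring terms (hypothetical values)
term_scores = rng.normal(size=(100, 4))

# Z-score of each term across the set of models
z = (term_scores - term_scores.mean(axis=0)) / term_scores.std(axis=0)

weights = np.ones(4)             # simplest case: equal weights
composite = z @ weights          # composite Z-score per model

best = np.argmin(composite)      # lower (more negative) energy Z-score = better
print("selected model index:", best)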

Relevance:

20.00%

Publisher:

Abstract:

Background: A number of studies have used protein interaction data alone for protein function prediction. Here, we introduce a computational approach for the annotation of enzymes, based on the observation that similar protein sequences are more likely to perform the same function if they share similar interacting partners. Results: The method has been tested against the PSI-BLAST program using a set of 3,890 protein sequences for which interaction data were available. For protein sequences that align with at least 40% sequence identity to a known enzyme, the specificity of our method in predicting the first three EC digits increased from 80% to 90% at 80% coverage when compared with PSI-BLAST. Conclusion: Our method can also be applied to proteins for which homologous sequences with known interacting partners can be detected. Thus, our method could increase by 10% the specificity of genome-wide enzyme predictions based on sequence matching with PSI-BLAST alone.
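A toy sketch of the rule described above, with hypothetical identifiers: an EC annotation is transferred from a similar sequence (here, at least 40% identity) only when the query and the annotated homolog also share a sufficient fraction of interacting partners; the thresholds and partner names are illustrative.

def shared_partner_fraction(partners_query, partners_homolog):
    if not partners_query or not partners_homolog:
        return 0.0
    overlap = partners_query & partners_homolog
    return len(overlap) / min(len(partners_query), len(partners_homolog))

def transfer_ec(identity, partners_query, partners_homolog, homolog_ec,
                min_identity=0.40, min_shared=0.5):
    """Return the first three EC digits if the transfer criteria are met, else None."""
    if identity >= min_identity and \
       shared_partner_fraction(partners_query, partners_homolog) >= min_shared:
        return homolog_ec[:3]
    return None

print(transfer_ec(0.55, {"P1", "P2", "P3"}, {"P2", "P3", "P9"}, ("2", "7", "1", "40")))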

Relevance:

20.00%

Publisher:

Abstract:

Building a personalized model to describe the drug concentration inside the human body for each patient is highly important for clinical practice and demanding for modeling tools. Instead of using traditional explicit methods, in this paper we propose a machine learning approach to describe the relation between drug concentration and patients' features. Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. We focus mainly on the prediction of drug concentrations as well as on the analysis of the influence of different features. Models are built using Support Vector Machines, and the prediction results are compared with those of traditional analytical models.
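A minimal sketch of the approach with synthetic data: a Support Vector Machine regressor (with feature scaling) mapping a few patient features to a measured concentration. The features, their distributions, and the simulated relationship are placeholders, not the paper's dataset or model settings.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n = 200
# columns: dose (mg), body weight (kg), age (years), time since dose (h)
X = np.column_stack([
    rng.choice([200, 300, 400], n),
    rng.normal(75, 12, n),
    rng.normal(55, 15, n),
    rng.uniform(0, 24, n),
])
# simulated concentration with noise (purely illustrative relationship)
y = 0.004 * X[:, 0] - 0.01 * X[:, 1] + 0.02 * X[:, 3] + rng.normal(0, 0.2, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print("predicted concentration:", model.predict(X[:1])[0])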