957 results for Computer Prediction Program


Relevance: 30.00%

Publisher:

Abstract:

Patient outcomes in transplantation would improve if dosing of immunosuppressive agents were individualized. The aim of this study was to develop a population pharmacokinetic model of tacrolimus in adult liver transplant recipients and to test this model in individualizing therapy. Population analysis was performed on data from 68 patients. Estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F) using the nonlinear mixed-effects modeling program NONMEM. Factors screened for influence on these parameters were weight, age, sex, transplant type, biliary reconstructive procedure, postoperative day, days of therapy, liver function test results, creatinine clearance, hematocrit, corticosteroid dose, and interacting drugs. The predictive performance of the developed model was evaluated through Bayesian forecasting in an independent cohort of 36 patients. No linear correlation existed between tacrolimus dosage and trough concentration (r(2) = 0.005). Mean individual Bayesian estimates for CL/F and V/F were 26.5 +/- 8.2 (SD) L/hr and 399 +/- 185 L, respectively. CL/F was greater in patients with normal liver function. V/F increased with patient weight. CL/F decreased with increasing hematocrit. Based on the derived model, a 70-kg patient with an aspartate aminotransferase (AST) level less than 70 U/L would require a tacrolimus dose of 4.7 mg twice daily to achieve a steady-state trough concentration of 10 ng/mL. A 50-kg patient with an AST level greater than 70 U/L would require a dose of 2.6 mg. Marked interindividual variability (43% to 93%) and residual random error (3.3 ng/mL) were observed. Predictions made using the final model were reasonably unbiased (0.56 ng/mL) but imprecise (4.8 ng/mL). The pharmacokinetic information obtained will assist in tacrolimus dosing; however, further investigation into the reasons for the pharmacokinetic variability of tacrolimus is required.
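The link between a clearance estimate and a maintenance dose can be sketched as below. This is a minimal steady-state approximation (dose = target concentration x CL/F x dosing interval), not the study's covariate-adjusted model, whose coefficients the abstract does not report; approximating the trough by the average steady-state concentration makes the result a lower bound on the dose needed.

```python
def maintenance_dose_mg(target_ng_ml: float, cl_f_l_per_hr: float,
                        tau_hr: float = 12.0) -> float:
    """Crude steady-state dose estimate: dose = Css,avg * CL/F * tau.

    Approximates the trough by the average steady-state concentration,
    so it underestimates the dose needed to hold a given trough.
    ng/mL equals ug/L, hence the /1000 conversion from ug to mg.
    """
    return target_ng_ml * cl_f_l_per_hr * tau_hr / 1000.0

# Population-mean CL/F from the study (26.5 L/hr), 10 ng/mL target,
# twice-daily dosing (tau = 12 h):
dose = maintenance_dose_mg(10.0, 26.5)
```

With the population-mean clearance this gives about 3.2 mg twice daily, in the same range as the 2.6-4.7 mg doses quoted above once covariate effects on CL/F are applied.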

Relevance: 30.00%

Publisher:

Abstract:

The large number of protein kinases makes it impractical to determine their specificities and substrates experimentally. Using the available crystal structures, molecular modeling, and sequence analyses of kinases and substrates, we developed a set of rules governing the binding of a heptapeptide substrate motif (surrounding the phosphorylation site) to the kinase and implemented these rules in a web-interfaced program for automated prediction of optimal substrate peptides, taking only the amino acid sequence of a protein kinase as input. We show the utility of the method by analyzing yeast cell cycle control and DNA damage checkpoint pathways. Our method is the only available predictive method generally applicable for identifying possible substrate proteins for protein serine/threonine kinases and helps in silico construction of signaling pathways. The accuracy of prediction is comparable to the accuracy of data from systematic large-scale experimental approaches.
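The core idea of scoring candidate phosphorylation sites against position-specific preferences can be sketched as follows. The preference table here is purely illustrative (a generic basophilic/proline-directed pattern), not the structure-derived rules of the study:

```python
# Toy position-specific scoring of heptapeptide sites (P-3 .. P+3
# around a candidate phospho-acceptor). The preference values below
# are illustrative assumptions, not the study's structure-based rules.
PSSM = {
    -3: {"R": 2.0, "K": 1.5},   # basophilic preference at P-3
    -2: {"R": 1.0},
     0: {"S": 1.0, "T": 0.8},   # the phospho-acceptor itself
    +1: {"P": 1.5},             # proline-directed preference at P+1
}

def score_site(protein: str, centre: int) -> float:
    """Sum position-specific preferences over the heptapeptide
    spanning positions centre-3 .. centre+3."""
    total = 0.0
    for offset in range(-3, 4):
        i = centre + offset
        if 0 <= i < len(protein):
            total += PSSM.get(offset, {}).get(protein[i], 0.0)
    return total

def best_site(protein: str) -> int:
    """Return the index of the highest-scoring Ser/Thr residue."""
    sites = [i for i, aa in enumerate(protein) if aa in "ST"]
    return max(sites, key=lambda i: score_site(protein, i))
```

Ranking every Ser/Thr in a substrate this way, with kinase-specific tables generated automatically from the kinase sequence, is essentially what the web-interfaced predictor automates.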

Relevance: 30.00%

Publisher:

Abstract:

The development of new experimental techniques for the determination of phase equilibria in complex slag systems, together with chemical thermodynamic and viscosity models, is reported. The new experimental data and the new thermodynamic and viscosity models have been combined in a custom-designed computer software package to produce limiting operability diagrams for slag systems. These diagrams are used to describe phase equilibria and physicochemical properties in complex slag systems. The approach is illustrated with calculations on the system FeO-Fe2O3-CaO-SiO2-Al2O3 at metallic iron saturation, on slags produced in coal slagging gasifiers, and on the reprocessing of nonferrous smelting slags. This article was presented at the Mills Symposium "Molten Metals, Slags and Glasses: Characterisation of Properties and Phenomena," held in London in August 2000.

Relevance: 30.00%

Publisher:

Abstract:

Participation in at least 30 min of moderate-intensity activity on most days is assumed to confer health benefits. This study accordingly determined whether the more vigorous household and garden tasks (sweeping, window cleaning, vacuuming and lawn mowing) are performed by middle-aged men at a moderate intensity of 3-6 metabolic equivalents (METs) in the laboratory and at home. Measured energy expenditure during self-perceived moderate-paced walking was used as a marker of exercise intensity. Energy expenditure was also predicted via indirect methods. Thirty-six males [mean (SD): 40.0 (3.3) years; 179.5 (6.9) cm; 83.4 (14.0) kg] were measured for resting metabolic rate (RMR) and oxygen consumption (VO2) during the five activities using the Douglas bag method. Heart rate, respiratory frequency, CSA (Computer Science Applications) movement counts, Borg scale ratings of perceived exertion and Quetelet's index were also recorded as potential predictors of exercise intensity. Except for vacuuming in the laboratory, which was not significantly different from 3.0 METs (P = 0.98), the MET means in the laboratory and home were all significantly greater than 3.0 (P <= 0.006). The sweeping and vacuuming MET means were significantly higher (P
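The MET classification used above reduces to a simple ratio of measured oxygen uptake to a resting reference. A minimal sketch, using the conventional 3.5 mL O2/kg/min resting value as the default (the study instead used each subject's measured RMR):

```python
def mets(vo2_ml_kg_min: float, rmr_ml_kg_min: float = 3.5) -> float:
    """Express oxygen uptake as multiples of resting metabolic rate.
    One MET is conventionally 3.5 mL O2/kg/min; pass a measured RMR
    to individualize the denominator, as the study did."""
    return vo2_ml_kg_min / rmr_ml_kg_min

def is_moderate(met: float) -> bool:
    """Moderate intensity is defined as 3-6 METs."""
    return 3.0 <= met <= 6.0
```

For example, a measured VO2 of 10.5 mL/kg/min classifies as exactly 3.0 METs, the lower bound of the moderate range tested against each activity mean.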

Relevance: 30.00%

Publisher:

Abstract:

Plant breeders use many different breeding methods to develop superior cultivars. However, it is difficult, cumbersome, and expensive to evaluate the performance of a breeding method or to compare the efficiencies of different breeding methods within an ongoing breeding program. To facilitate comparisons, we developed a QU-GENE module called QuCim that can simulate a large number of breeding strategies for self-pollinated species. The wheat breeding strategy Selected Bulk used by CIMMYT's wheat breeding program was defined in QuCim as an example of how this is done. This selection method was simulated in QuCim to investigate the effects of deviations from the additive genetic model, in the form of dominance and epistasis, on selection outcomes. The simulation results indicate that the partial dominance model does not greatly influence genetic advance compared with the pure additive model. Genetic advance in genetic systems with overdominance and epistasis is slower than when gene effects are purely additive or partially dominant. The additive gene effect is an appropriate indicator of the change in gene frequency following selection when epistasis is absent. In the absence of epistasis, the additive variance decreases rapidly with selection. However, after several cycles of selection it remains relatively fixed when epistasis is present. The variance from partial dominance is relatively small and therefore hard to detect by the covariance among half sibs and the covariance among full sibs. The dominance variance from the overdominance model can be identified successfully, but it does not change significantly, which confirms that overdominance cannot be utilized by an inbred breeding program. QuCim is an effective tool to compare selection strategies and to validate some theories in quantitative genetics.
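The kind of simulation QuCim performs can be illustrated with a toy purely additive model: truncation selection on genotypic value raises the favourable allele frequency cycle by cycle. All parameters below (locus count, population size, selected proportion) are illustrative assumptions, not QuCim settings:

```python
import random

def simulate_selection(n_loci=10, pop_size=200, cycles=5,
                       sel_prop=0.2, seed=1):
    """Toy additive model: each of n_loci biallelic loci contributes
    +1 per favourable allele to the genotypic value. Each cycle,
    truncation-select the top sel_prop of the population and rebuild
    it by drawing random gametes from the selected parents.
    Returns the final favourable-allele frequency."""
    rng = random.Random(seed)
    pop = [[(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(n_loci)]
           for _ in range(pop_size)]
    for _ in range(cycles):
        pop.sort(key=lambda g: sum(a + b for a, b in g), reverse=True)
        parents = pop[: max(2, int(sel_prop * pop_size))]
        pop = [[(rng.choice(rng.choice(parents)[i]),   # allele from parent 1
                 rng.choice(rng.choice(parents)[i]))   # allele from parent 2
                for i in range(n_loci)]
               for _ in range(pop_size)]
    return sum(a + b for g in pop for a, b in g) / (2 * n_loci * pop_size)
```

Extending the per-locus contribution with dominance or epistatic terms, as QuCim does, is what slows the genetic advance reported above.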

Relevance: 30.00%

Publisher:

Abstract:

Scorpion toxins are common experimental tools for studies of the biochemical and pharmacological properties of ion channels. The number of functionally annotated scorpion toxins is steadily growing, but the number of identified toxin sequences is increasing at a much faster pace. With an estimated 100,000 different variants, bioinformatic analysis of scorpion toxins is becoming a necessary tool for their systematic functional analysis. Here, we report a bioinformatics-driven system involving scorpion toxin structural classification, functional annotation, database technology, sequence comparison, nearest neighbour analysis, and decision rules which produces highly accurate predictions of scorpion toxin functional properties.
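The nearest-neighbour component of such a system can be sketched as annotation transfer from the most similar annotated sequence. The ungapped identity measure below is a deliberate simplification (real pipelines would use alignment scores), and the example annotations are invented:

```python
def identity(a: str, b: str) -> float:
    """Fraction of matching positions under a toy ungapped comparison;
    a stand-in for a proper alignment-based similarity score."""
    n = min(len(a), len(b))
    matches = sum(a[i] == b[i] for i in range(n))
    return matches / max(len(a), len(b))

def predict_function(query: str, annotated: dict) -> str:
    """Transfer the functional annotation of the nearest neighbour,
    i.e. the annotated sequence most similar to the query."""
    best = max(annotated, key=lambda seq: identity(query, seq))
    return annotated[best]
```

In the full system this step is combined with structural classification and decision rules rather than used alone.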

Relevance: 30.00%

Publisher:

Abstract:

Motivation: Targeting peptides direct nascent proteins to their specific subcellular compartments. Knowledge of targeting signals enables informed drug design and reliable annotation of gene products. However, due to the low similarity of such sequences and the dynamic nature of the sorting process, the computational prediction of the subcellular localization of proteins is challenging. Results: We contrast the use of feed-forward models, as employed by the popular TargetP/SignalP predictors, with a sequence-biased recurrent network model. The models are evaluated in terms of performance at the residue level and at the sequence level, and we demonstrate that recurrent networks improve the overall prediction performance. Compared to the original results reported for TargetP, an ensemble of the tested models increases the accuracy by 6% and 5% on non-plant and plant data, respectively.
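The architectural difference at stake can be seen in a minimal forward pass: a recurrent model carries a hidden state along the sequence, so each residue's score depends on context beyond a fixed input window. The sketch below uses random, untrained weights and illustrative dimensions; it shows the data flow only, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
HIDDEN = 8                           # illustrative hidden-state size

W_in = rng.normal(scale=0.1, size=(HIDDEN, len(ALPHABET)))
W_rec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(1, HIDDEN))

def residue_scores(seq: str) -> np.ndarray:
    """Elman-style recurrent pass over a one-hot-encoded sequence,
    emitting a per-residue score in (0, 1). The hidden state h carries
    left-to-right context, unlike a feed-forward window model."""
    h = np.zeros(HIDDEN)
    scores = []
    for aa in seq:
        x = np.zeros(len(ALPHABET))
        x[ALPHABET.index(aa)] = 1.0
        h = np.tanh(W_in @ x + W_rec @ h)          # recurrent update
        scores.append(float(1 / (1 + np.exp(-(W_out @ h)[0]))))
    return np.array(scores)
```

Residue-level scores like these are then aggregated to a sequence-level localization call, which is where the two evaluation levels mentioned above come from.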

Relevance: 30.00%

Publisher:

Abstract:

We provide an abstract command language for real-time programs and outline how a partial correctness semantics can be used to compute execution times. The notions of a timed command, refinement of a timed command, the command traversal condition, and the worst-case and best-case execution time of a command are formally introduced and investigated with the help of an underlying weakest liberal precondition semantics. The central result is a theory for the computation of worst-case and best-case execution times from the underlying semantics based on supremum and infimum calculations. The framework is applied to the analysis of a message transmitter program and its implementation.
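The supremum/infimum idea can be sketched concretely on a tiny command tree: sequencing adds execution times, while alternation takes the maximum for the worst case and the minimum for the best case. The node encoding below is an assumption for illustration, not the paper's command language:

```python
# Commands are tuples: ("basic", t) for a primitive taking t time units,
# ("seq", c1, c2) for sequencing, ("alt", c1, c2) for alternation.

def wcet(cmd):
    """Worst-case execution time: supremum over execution paths."""
    kind = cmd[0]
    if kind == "basic":
        return cmd[1]
    if kind == "seq":
        return wcet(cmd[1]) + wcet(cmd[2])
    if kind == "alt":
        return max(wcet(cmd[1]), wcet(cmd[2]))
    raise ValueError(f"unknown command: {kind}")

def bcet(cmd):
    """Best-case execution time: infimum over execution paths."""
    kind = cmd[0]
    if kind == "basic":
        return cmd[1]
    if kind == "seq":
        return bcet(cmd[1]) + bcet(cmd[2])
    if kind == "alt":
        return min(bcet(cmd[1]), bcet(cmd[2]))
    raise ValueError(f"unknown command: {kind}")
```

The paper derives these bounds from the weakest liberal precondition semantics and traversal conditions rather than by structural recursion alone, but the sup/inf structure of the result is the same.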

Relevance: 30.00%

Publisher:

Abstract:

The real-time refinement calculus is a formal method for the systematic derivation of real-time programs from real-time specifications in a style similar to the non-real-time refinement calculi of Back and Morgan. In this paper we extend the real-time refinement calculus with procedures and provide refinement rules for refining real-time specifications to procedure calls. A real-time specification can include constraints on not only what outputs are produced, but also when they are produced. The derived programs can also include time constraints on when certain points in the program must be reached; these are expressed in the form of deadline commands. Such programs are machine independent. An important consequence of the approach taken is that not only are the specifications machine independent, but the whole refinement process is machine independent. To implement the machine-independent code on a target machine one has a separate task of showing that the compiled machine code will reach all its deadlines before they expire. For real-time programs, externally observable input and output variables are essential. These differ from local variables in that their values are observable over the duration of the execution of the program. Hence procedures require input and output parameter mechanisms that are references to the actual parameters, so that changes to external inputs are observable within the procedure and changes to output parameters are externally observable. In addition, we allow value and result parameters. These may be auxiliary parameters, which are used for reasoning about the correctness of real-time programs as well as in the expression of timing deadlines, but do not lead to any code being generated for them by a compiler.

Relevance: 30.00%

Publisher:

Abstract:

Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study on all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD
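The two evaluation metrics used to compare the incremental and batch-mode learners are straightforward to compute; a minimal sketch:

```python
def mad_metrics(predicted, actual):
    """Return (MAD, Prop(MAD < 1)): the mean absolute difference
    between predicted and actual LOS in days, and the proportion of
    predictions that land within one day of the actual stay."""
    diffs = [abs(p - a) for p, a in zip(predicted, actual)]
    mad = sum(diffs) / len(diffs)
    prop = sum(d < 1.0 for d in diffs) / len(diffs)
    return mad, prop
```

On the study's data the incremental learner reached MAD = 1.77 days with Prop(MAD < 1) = 54.3% once it had seen enough examples.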

Relevance: 30.00%

Publisher:

Abstract:

A multiagent diagnostic system implemented in a Protege-JADE-JESS environment interfaced with a dynamic simulator and database services is described in this paper. The proposed system architecture enables the use of a combination of diagnostic methods from heterogeneous knowledge sources. The process ontology and the process agents are designed based on the structure of the process system, while the diagnostic agents implement the applied diagnostic methods. A specific completeness coordinator agent is implemented to coordinate the diagnostic agents based on different methods. The system is demonstrated on a case study for diagnosis of faults in a granulation process based on HAZOP and FMEA analysis.
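The coordination pattern described above, i.e. independent diagnostic agents whose verdicts a coordinator merges, can be sketched minimally as below. The agent classes, rule tables, and merging policy are all illustrative assumptions, not the Protege-JADE-JESS implementation:

```python
class DiagnosticAgent:
    """A diagnostic method wrapped as an agent: maps observed process
    symptoms to candidate faults via its own rule base (here a
    stand-in for a HAZOP- or FMEA-derived knowledge source)."""

    def __init__(self, name, rules):
        self.name = name
        self.rules = rules          # symptom -> set of candidate faults

    def diagnose(self, symptoms):
        faults = set()
        for s in symptoms:
            faults |= self.rules.get(s, set())
        return faults

def coordinate(agents, symptoms):
    """Toy completeness coordinator: prefer faults all agents agree
    on; fall back to the union so no candidate is silently dropped."""
    verdicts = [a.diagnose(symptoms) for a in agents]
    common = set.intersection(*verdicts) if verdicts else set()
    return common or set.union(*verdicts)
```

For example, if a HAZOP-based agent blames either a cooling failure or a sensor fault for a high-temperature symptom while an FMEA-based agent blames only the cooling failure, the coordinator narrows the diagnosis to their agreement.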

Relevance: 30.00%

Publisher:

Abstract:

Short proteins play key roles in cell signalling and other processes, but their abundance in the mammalian proteome is unknown. Current catalogues of mammalian proteins exhibit an artefactual discontinuity at a length of 100 aa, so that protein abundance peaks just above this length and falls off sharply below it. To clarify the abundance of short proteins, we identify proteins in the FANTOM collection of mouse cDNAs by analysing synonymous and nonsynonymous substitutions with the computer program CRITICA. This analysis confirms that there is no real discontinuity at length 100. Roughly 10% of mouse proteins are shorter than 100 aa, although the majority of these are variants of proteins longer than 100 aa. We identify many novel short proteins, including a "dark matter" subset containing ones that lack detectable homology to other known proteins. Translation assays confirm that some of these novel proteins can be translated and localised to the secretory pathway.
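The headline abundance estimate is a simple length-distribution statistic over a predicted protein set; a minimal sketch of that final counting step (the hard part, CRITICA's substitution analysis, is not reproduced here):

```python
def short_protein_fraction(proteins, cutoff=100):
    """Fraction of protein sequences shorter than `cutoff` residues.
    Applied to the CRITICA-identified mouse proteins, this is the
    statistic behind the 'roughly 10% below 100 aa' estimate."""
    short = sum(len(p) < cutoff for p in proteins)
    return short / len(proteins)
```

A catalogue with an artefactual 100-aa floor would return a value near zero here; the corrected protein set does not.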

Relevance: 30.00%

Publisher:

Abstract:

Despite the number of computer-assisted methods described for the derivation of steady-state equations of enzyme systems, most of them are focused on strict steady-state conditions or are not able to solve complex reaction mechanisms. Moreover, many of them are based on computer programs that are either not readily available or have limitations. We present here a computer program called WinStes, which derives equations for both strict steady-state systems and those with the assumption of rapid equilibrium, for branched or unbranched mechanisms, containing both reversible and irreversible conversion steps. It solves reaction mechanisms involving up to 255 enzyme species, connected by up to 255 conversion steps. The program provides all the advantages of the Windows programs, such as a user-friendly graphical interface, and has a short computation time. WinStes is available free of charge on request from the authors.
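The strict steady-state assumption that such programs automate can be shown on the simplest mechanism, E + S <=> ES -> E + P: set d[ES]/dt = 0, add the enzyme conservation law, and solve the resulting linear system for the species concentrations. This numeric sketch stands in for WinStes's symbolic derivation:

```python
import numpy as np

def steady_state_rate(k1, k_minus1, k2, e0, s):
    """Steady-state rate v = k2*[ES] for E + S <=> ES -> E + P.
    Unknowns x = ([E], [ES]):
      row 1: d[ES]/dt = k1*S*[E] - (k-1 + k2)*[ES] = 0
      row 2: conservation, [E] + [ES] = E0
    """
    A = np.array([[k1 * s, -(k_minus1 + k2)],
                  [1.0,     1.0]])
    b = np.array([0.0, e0])
    e_free, es = np.linalg.solve(A, b)
    return k2 * es
```

The solution agrees with the Michaelis-Menten closed form v = k2*E0*S/(Km + S) with Km = (k-1 + k2)/k1, which is exactly the kind of rate equation a derivation program emits symbolically for larger mechanisms.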

Relevance: 30.00%

Publisher:

Abstract:

The adsorption of Lennard-Jones fluids (argon and nitrogen) onto a graphitized thermal carbon black surface was studied with grand canonical Monte Carlo (GCMC) simulation. The surface was assumed to be finite in length and composed of three graphene layers. When the GCMC simulation was used to describe adsorption on a graphite surface, an over-prediction of the isotherm was consistently observed in the pressure regions where the first and second layers are formed. To remove this over-prediction, surface mediation was accounted for to reduce the fluid-fluid interaction. Do and co-workers have introduced the so-called surface-mediation damping factor to correct the over-prediction for the case of a graphite surface of infinite extent, and this approach has yielded a good description of the adsorption isotherm. In this paper, the effects of the finite size of the graphene layer on the adsorption isotherm and how these would affect the extent of the surface mediation were studied. It was found that this finite-surface model provides a better description of the experimental data for graphitized thermal carbon black of high surface area (i.e. small crystallite size) while the infinite-surface model describes data for carbon black of very low surface area (i.e. large crystallite size).
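The interaction model at the heart of such a simulation is the 12-6 Lennard-Jones pair potential, with the surface-mediation correction entering as a factor that weakens the fluid-fluid interaction near the wall. The damping form and value below are illustrative only, not the study's parameterization:

```python
def lj(r, epsilon, sigma):
    """12-6 Lennard-Jones pair potential: well depth epsilon at the
    minimum r = 2**(1/6) * sigma, collision diameter sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def mediated_lj(r, epsilon, sigma, damping=0.9):
    """Fluid-fluid interaction with a surface-mediation damping factor:
    a uniform reduction of the pair energy, mimicking the weakened
    fluid-fluid attraction in the first adsorbed layer."""
    return damping * lj(r, epsilon, sigma)
```

Weakening the fluid-fluid attraction this way lowers the simulated uptake in the first- and second-layer pressure regions, which is how the over-prediction of the isotherm is removed.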