989 results for computational estimation


Relevance: 20.00%

Abstract:

Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
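
As an illustration of the quantitative-matrix approach named above, the sketch below scores a 9-mer peptide against a position-specific scoring matrix and applies a cut-off. The matrix entries, anchor positions, example peptide and threshold are hypothetical placeholders, not values from the article or from any published allele-specific model.

```python
# Minimal sketch of a quantitative-matrix (position-specific scoring) predictor
# for 9-mer MHC-binding peptides. All numbers below are illustrative placeholders;
# a real matrix would be trained on experimentally measured binders.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# matrix[position][residue] -> additive contribution of that residue at that position
matrix = [{aa: 0.0 for aa in AMINO_ACIDS} for _ in range(9)]
matrix[1]["L"] = 1.2   # hypothetical leucine anchor at position 2
matrix[8]["V"] = 1.0   # hypothetical valine anchor at position 9

def score_peptide(peptide: str) -> float:
    """Sum the per-position contributions of a 9-mer peptide."""
    assert len(peptide) == 9, "this toy matrix only handles 9-mers"
    return sum(matrix[i][aa] for i, aa in enumerate(peptide))

def predict_binder(peptide: str, threshold: float = 1.5) -> bool:
    """Call a peptide a predicted binder when its score exceeds the cut-off."""
    return score_peptide(peptide) > threshold

print(predict_binder("SLYNTVATV"))  # True with this illustrative matrix (1.2 + 1.0 > 1.5)
```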

Relevance: 20.00%

Abstract:

Objective: To present a novel algorithm for estimating recruitable alveolar collapse and hyperdistension based on electrical impedance tomography (EIT) during a decremental positive end-expiratory pressure (PEEP) titration. Design: Technical note with illustrative case reports. Setting: Respiratory intensive care unit. Patients: Patients with acute respiratory distress syndrome. Interventions: Lung recruitment and PEEP titration maneuver. Measurements: Simultaneous acquisition of EIT and X-ray computed tomography (CT) data. Results: We found good agreement (in terms of amount and spatial location) between the collapse estimated by EIT and by CT at all levels of PEEP. The optimal PEEP values detected by EIT for patients 1 and 2 (keeping lung collapse below 10%) were 19 and 17 cmH2O, respectively. Although pointing to the same non-dependent lung regions, the EIT estimates of hyperdistension reflect the functional deterioration of lung units rather than their anatomical changes, and could not be compared directly with static CT estimates of hyperinflation. Conclusions: We describe an EIT-based method for estimating recruitable alveolar collapse at the bedside, including its regional distribution. Additionally, we propose a measure of lung hyperdistension based on regional lung mechanics.
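
The abstract does not spell out the algorithm, but the general idea of deriving recruitable collapse and hyperdistension from regional (pixel-wise) compliance during a decremental PEEP titration can be sketched as below. The array layout, the compliance-loss definition and the best-compliance weighting are assumptions made for illustration, not the authors' exact method.

```python
# Hedged sketch of a regional-compliance approach to estimating recruitable
# collapse and hyperdistension from EIT during a decremental PEEP titration.

import numpy as np

def collapse_and_overdistension(pixel_compliance: np.ndarray):
    """
    pixel_compliance: array of shape (n_peep_steps, n_pixels) with regional
    compliance estimated at each PEEP step of a decremental titration
    (row 0 = highest PEEP, last row = lowest PEEP).

    Returns per-step fractions of collapsed and hyperdistended lung, weighting
    each pixel by its best (maximal) compliance across the titration.
    """
    best = pixel_compliance.max(axis=0)            # best compliance per pixel
    best_step = pixel_compliance.argmax(axis=0)    # PEEP step where it occurs
    steps = np.arange(pixel_compliance.shape[0])[:, None]

    loss = (best - pixel_compliance) / np.where(best > 0, best, 1.0)
    # Compliance lost at PEEPs *below* the pixel's best PEEP -> collapse;
    # compliance lost at PEEPs *above* it -> hyperdistension.
    collapse = np.where(steps > best_step, loss, 0.0)
    overdist = np.where(steps < best_step, loss, 0.0)

    weights = best / best.sum()                    # weight pixels by best compliance
    return (collapse * weights).sum(axis=1), (overdist * weights).sum(axis=1)
```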

Relevance: 20.00%

Abstract:

We explore the task of optimal quantum channel identification and in particular, the estimation of a general one-parameter quantum process. We derive new characterizations of optimality and apply the results to several examples including the qubit depolarizing channel and the harmonic oscillator damping channel. We also discuss the geometry of the problem and illustrate the usefulness of using entanglement in process estimation.
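
For orientation, a standard benchmark against which such optimality results are usually phrased is the quantum Cramér-Rao bound; the specific characterizations derived in the article are not reproduced here:

$$ \Delta\theta \;\ge\; \frac{1}{\sqrt{N\,F_Q(\rho_\theta)}}, \qquad F_Q(\rho_\theta)=\operatorname{Tr}\!\left[\rho_\theta L_\theta^{2}\right], \qquad \partial_\theta\rho_\theta=\tfrac{1}{2}\left(L_\theta\rho_\theta+\rho_\theta L_\theta\right), $$

where $\rho_\theta$ is the output of the one-parameter channel, $N$ is the number of channel uses, and $L_\theta$ is the symmetric logarithmic derivative. Entangled probe states can increase $F_Q$, which is one way to see why entanglement can help in process estimation.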

Relevance: 20.00%

Abstract:

Quantum information theory, applied to optical interferometry, yields a 1/n scaling of the phase uncertainty Δφ, independent of the applied phase shift φ, where n is the number of photons in the interferometer. This 1/n scaling is achieved provided that the output state is subjected to an optimal phase measurement. We establish this scaling law for both passive (linear) and active (nonlinear) interferometers and identify the coefficient of proportionality. Whereas a highly nonclassical state is required to achieve optimal scaling for passive interferometry, a classical input state yields a 1/n scaling of phase uncertainty for active interferometry.
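
For comparison, the two textbook scalings usually contrasted in this setting are, stated generically and without the article's proportionality coefficients:

$$ \Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{n}} \quad \text{(coherent/classical input, passive interferometer)}, \qquad \Delta\phi_{\mathrm{HL}} \propto \frac{1}{n} \quad \text{(Heisenberg scaling)}. $$

The abstract's point is that the 1/n (Heisenberg) scaling requires a nonclassical input state in the passive case, whereas an active (nonlinear) interferometer can reach it with a classical input, provided the optimal phase measurement is performed on the output.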

Relevance: 20.00%

Abstract:

The open-channel diameter of Escherichia coli recombinant large-conductance mechanosensitive ion channels (MscL) was estimated using the model of Hille (Hille, B. 1968. Pharmacological modifications of the sodium channels of frog nerve. J. Gen. Physiol. 51:199-219) that relates pore size to conductance. Based on the MscL conductance of 3.8 nS and assumed pore lengths, a channel diameter of 34 to 46 Å was calculated. To estimate the pore size experimentally, the effect of large organic ions on the conductance of MscL was examined. Poly-L-lysines (PLLs) with a diameter of 37 Å or larger significantly reduced channel conductance, whereas spermine (~15 Å), PLL19 (~25 Å) and 1,1'-bis-(3-(1'-methyl-(4,4'-bipyridinium)-1-yl)-propyl)-4,4'-bipyridinium (~30 Å) had no effect. The smaller organic ions putrescine, cadaverine, spermine, and succinate all permeated the channel. We conclude that the open pore diameter of the MscL is ~40 Å, indicating that the MscL has one of the largest channel pores yet described. This channel diameter is consistent with the proposed homohexameric model of the MscL.
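
A commonly used form of Hille's pore-resistance model treats the open channel as a water-filled cylinder of diameter d and length l in a solution of resistivity ρ, with an access-resistance term at each mouth; whether this is the exact variant used in the article cannot be read from the abstract:

$$ \frac{1}{g} \;=\; \frac{4\rho\,l}{\pi d^{2}} \;+\; \frac{\rho}{d}. $$

Given the measured conductance g = 3.8 nS and an assumed pore length l, this relation can be solved numerically for d; the 34 to 46 Å range quoted above reflects the range of pore lengths assumed.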

Relevance: 20.00%

Abstract:

Fuzzy Bayesian tests were performed to evaluate whether the mothers' seroprevalence and the children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which allows an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, forming the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared with a classical one, in which the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking subjective information into account in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation that are normally seen as competitive. (C) 2007 IMACS. Published by Elsevier B.V. All rights reserved.
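
The article's model uses four input variables and expert-shaped fuzzy sets; the sketch below shows only the bare mechanics of a zero-order Takagi-Sugeno-Kang system, with a single illustrative input (maternal seroprevalence), made-up membership functions and made-up consequent values.

```python
# Zero-order Takagi-Sugeno-Kang sketch: fuzzy antecedents, constant consequents,
# weighted-average defuzzification. All breakpoints and outputs are placeholders.

def ramp_up(x: float, a: float, b: float) -> float:
    """Membership rising linearly from 0 at a to 1 at b, flat outside."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def recommended_coverage(maternal_seroprevalence: float) -> float:
    high = ramp_up(maternal_seroprevalence, 0.4, 0.8)   # degree of "high"
    low = 1.0 - high                                    # degree of "low"
    # IF seroprevalence is low THEN coverage = 0.95; IF high THEN coverage = 0.80
    rules = [(low, 0.95), (high, 0.80)]
    total_weight = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / total_weight if total_weight > 0 else 0.0

print(recommended_coverage(0.6))  # 0.875: halfway between the two rule outputs
```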

Relevance: 20.00%

Abstract:

Parenteral anticoagulation is a cornerstone of the management of venous and arterial thrombosis. Unfractionated heparin has a wide variability in its dose-response relationship, requiring frequent and troublesome laboratory follow-up. Because of these factors, the use of low-molecular-weight heparin has been increasing. Inadequate dosage has been pointed out as a potential problem, because the use of subjectively estimated weight instead of the actual measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weights of 28 patients in need of parenteral anticoagulation. Basal and steady-state (after the second subcutaneous dose of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into two groups according to anticoagulation adequacy. Of the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID of enoxaparin. Only 4 (14.3%) of all patients had anti-Xa activity below the lower limit of the therapeutic range (<0.5 IU/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 IU/mL) during the crucial initial phase of treatment. (C) 2011 Elsevier Inc. All rights reserved.
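
The dosing arithmetic behind the 0.9 mg/kg-per-dose grouping can be made explicit with a small sketch. The 1 mg/kg BID target below is the usual therapeutic enoxaparin dose; the example weights are invented.

```python
# Illustrative check of whether a dose chosen from an estimated weight still meets
# the 0.9 mg/kg-per-dose cut-off used to define group 1. Example values are made up.

TARGET_MG_PER_KG = 1.0     # usual therapeutic enoxaparin dose, given BID
ADEQUACY_CUTOFF = 0.9      # mg/kg per dose, threshold separating the two groups

def dose_is_adequate(estimated_weight_kg: float, measured_weight_kg: float) -> bool:
    dose_mg = TARGET_MG_PER_KG * estimated_weight_kg   # dose prescribed from the guess
    actual_mg_per_kg = dose_mg / measured_weight_kg    # exposure at the true weight
    return actual_mg_per_kg >= ADEQUACY_CUTOFF

print(dose_is_adequate(70, 70))   # True: estimate matches the measured weight
print(dose_is_adequate(70, 85))   # False: a 15 kg underestimate falls below 0.9 mg/kg
```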

Relevance: 20.00%

Abstract:

Human leukocyte antigen (HLA) haplotypes are frequently evaluated for population history inferences and association studies. However, the available typing techniques for the main HLA loci usually do not allow the determination of the allele phase and the constitution of a haplotype, which may be obtained by a very time-consuming and expensive family-based segregation study. Without the family-based study, computational inference by probabilistic models is necessary to obtain haplotypes. Several authors have used the expectation-maximization (EM) algorithm to determine HLA haplotypes, but high levels of erroneous inferences are expected because of the genetic distance among the main HLA loci and the presence of several recombination hotspots. In order to evaluate the efficiency of computational inference methods, 763 unrelated individuals stratified into three different datasets had their haplotypes manually defined in a family-based study of HLA-A, -B, -DRB1 and -DQB1 segregation, and these haplotypes were compared with the data obtained by the following three methods: the expectation-maximization (EM) and Excoffier-Laval-Balding (ELB) algorithms using the Arlequin 3.11 software, and the PHASE method. When comparing the methods, we observed that all algorithms showed poor performance for haplotype reconstruction with distant loci, estimating incorrect haplotypes for 38-57% of the samples across algorithms and datasets. We suggest that computational haplotype inferences involving low-resolution HLA-A, HLA-B, HLA-DRB1 and HLA-DQB1 haplotypes should be considered with caution.
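
To make the kind of inference concrete, the sketch below runs a plain EM estimation of haplotype frequencies for two loci, where only double heterozygotes are phase-ambiguous. This is a toy version for illustration; the models implemented in Arlequin (EM, ELB) and PHASE handle many loci and much richer situations, which is exactly where the abstract reports the methods struggling.

```python
# Toy EM estimation of two-locus haplotype frequencies from unphased genotypes.
# Each genotype is ((a1, a2), (b1, b2)): unordered allele pairs at loci A and B.

from collections import defaultdict

def consistent_pairs(genotype):
    """Haplotype pairs compatible with one unphased two-locus genotype."""
    (a1, a2), (b1, b2) = genotype
    phase1 = ((a1, b1), (a2, b2))
    phase2 = ((a1, b2), (a2, b1))
    return [phase1] if set(phase1) == set(phase2) else [phase1, phase2]

def em_haplotype_frequencies(genotypes, n_iter=200):
    resolutions = [consistent_pairs(g) for g in genotypes]
    haplotypes = {h for res in resolutions for pair in res for h in pair}
    freqs = {h: 1.0 / len(haplotypes) for h in haplotypes}

    for _ in range(n_iter):
        counts = defaultdict(float)
        for res in resolutions:
            # E-step: weight each compatible phase by its probability under current freqs
            weights = [freqs.get(h1, 0.0) * freqs.get(h2, 0.0) for h1, h2 in res]
            total = sum(weights) or 1.0
            for (h1, h2), w in zip(res, weights):
                counts[h1] += w / total
                counts[h2] += w / total
        # M-step: renormalise the expected counts into frequencies
        freqs = {h: c / (2.0 * len(genotypes)) for h, c in counts.items()}
    return freqs

# Two ambiguous double heterozygotes plus one unambiguous homozygote: the
# homozygote pulls the inference toward the A1-B1 / A2-B2 phase.
data = [(("A1", "A2"), ("B1", "B2")),
        (("A1", "A2"), ("B1", "B2")),
        (("A1", "A1"), ("B1", "B1"))]
print(em_haplotype_frequencies(data))
```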

Relevance: 20.00%

Abstract:

Background: Food portion size estimation involves a complex mental process that may influence the evaluation of food consumption. Knowing the variables that influence this process can improve the accuracy of dietary assessment. The present study aimed to evaluate the ability of nutrition students to estimate food portions in usual meals and to relate food energy content with errors in food portion size estimation. Methods: Seventy-eight nutrition students, who had already studied food energy content, participated in this cross-sectional study on the estimation of food portions, organised into four meals. The participants estimated the quantity of each food, in grams or millilitres, with the food in view. Estimation errors were quantified and their magnitude was evaluated. Estimated quantities (EQ) lower than 90% and higher than 110% of the weighed quantity (WQ) were considered to represent underestimation and overestimation, respectively. The correlation between food energy content and estimation error was analysed by the Spearman correlation, and the comparison between the mean EQ and WQ was performed by means of the Wilcoxon signed-rank test (P < 0.05). Results: A low percentage of estimates (18.5%) were considered accurate (within +/- 10% of the actual weight). The most frequently underestimated food items were cauliflower, lettuce, apple and papaya; the most often overestimated items were milk, margarine and sugar. A significant positive correlation between food energy density and estimation error was found (r = 0.8166; P = 0.0002). Conclusions: The results of the present study revealed a low percentage of acceptable estimations of food portion size by nutrition students, with trends toward overestimation of high-energy food items and underestimation of low-energy items.
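
The classification rule and the correlation test described in the Methods are simple to express; the sketch below uses invented records (food, energy density, estimated and weighed quantities) purely to show the mechanics.

```python
# Sketch of the +/-10% accuracy classification and of the energy-density versus
# estimation correlation. The records below are invented placeholders, not study data.

from scipy.stats import spearmanr

def classify(estimated_g: float, weighed_g: float) -> str:
    """Accurate within +/-10% of the weighed quantity; otherwise under- or over-estimated."""
    ratio = estimated_g / weighed_g
    if ratio < 0.90:
        return "underestimated"
    if ratio > 1.10:
        return "overestimated"
    return "accurate"

# (food, kcal per 100 g, estimated quantity in g, weighed quantity in g)
records = [("lettuce", 15, 20, 35), ("apple", 52, 80, 110),
           ("margarine", 720, 12, 8), ("sugar", 387, 15, 10)]

relative_estimates = [est / wt for _, _, est, wt in records]   # EQ / WQ
energy_densities = [kcal for _, kcal, _, _ in records]
rho, p_value = spearmanr(energy_densities, relative_estimates)

print([classify(est, wt) for _, _, est, wt in records])
print(round(rho, 2))
```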

Relevance: 20.00%

Abstract:

The magnitude of the basic reproduction ratio R0 of an epidemic can be estimated in several ways, namely from the final size of the epidemic, from the average age at first infection, or from the initial growth phase of the outbreak. In this paper, we discuss this last method for estimating R0 for vector-borne infections. Implicit in these models is the assumption that there is an exponential phase of the outbreak, which implies that in all cases R0 > 1. We demonstrate that an outbreak is possible even in cases where R0 is less than one, provided that the vector-to-human component of R0 is greater than one and that a certain number of infected vectors are introduced into the affected population. This theory is applied to two real epidemiological dengue situations in the southeastern part of Brazil, one where R0 is less than one and another where R0 is greater than one. In both cases, the model mirrors the real situations with reasonable accuracy.
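
For orientation, in one common convention for vector-borne infections the basic reproduction ratio factors into its two one-way components (some authors instead take the square root of this product), and for a simple directly transmitted SIR-type outbreak the textbook link between R0 and the initial exponential growth rate r is also shown; the article's vector-borne threshold analysis is more involved than either relation:

$$ R_0 \;=\; R_{hv}\,R_{vh}, \qquad\qquad R_0 \;\approx\; 1 + \frac{r}{\gamma}, $$

where $R_{hv}$ is the expected number of vectors infected by one infectious human, $R_{vh}$ the expected number of humans infected by one infectious vector, and $1/\gamma$ the mean infectious period. The factorisation makes the abstract's point visible: introduced infected vectors can seed a first generation of human cases whenever $R_{vh} > 1$, even if the full human-to-vector-to-human cycle has $R_0 < 1$.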

Relevance: 20.00%

Abstract:

A mixture model incorporating long-term survivors has been adopted in the field of biostatistics, where some individuals may never experience the failure event under study. The surviving fractions may be considered as cured. In most applications, the survival times are assumed to be independent. However, when the survival data are obtained from a multi-centre clinical trial, it is conceivable that the environmental conditions and facilities shared within a clinic affect the proportion cured as well as the failure risk for the uncured individuals. This necessitates a long-term survivor mixture model with random effects. In this paper, the long-term survivor mixture model is extended to the analysis of multivariate failure time data using the generalized linear mixed model (GLMM) approach. The proposed model is applied to analyse a data set from a multi-centre clinical trial of carcinoma as an illustration. Some simulation experiments are performed to assess the applicability of the model based on the average biases of the estimates obtained. Copyright (C) 2001 John Wiley & Sons, Ltd.
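
A generic form of a long-term survivor (cure) mixture model with clinic-level random effects, given here only to fix ideas (the article's GLMM formulation may differ in its link functions and frailty structure), is:

$$ S_{ij}(t) \;=\; \pi_{ij} \;+\; \bigl(1-\pi_{ij}\bigr)\,S_u\!\left(t \mid \mathbf{x}_{ij}, v_i\right), \qquad \operatorname{logit}\bigl(1-\pi_{ij}\bigr) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_i, $$

where $\pi_{ij}$ is the cured (long-term survivor) probability for individual $j$ in clinic $i$, $S_u$ is the survival function of the uncured, and $u_i$, $v_i$ are random effects shared within a clinic that shift the cure proportion and the failure risk of the uncured, respectively.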