971 results for Parameter-estimation
Abstract:
Bioelectrical impedance analysis (BIA) was used to assess body composition in rats fed either a standard laboratory diet or a high-fat diet designed to induce obesity. BIA predictions of total body water, and thus fat-free mass (FFM), for the group mean values were generally within 5% of the values measured by tritiated water (³H₂O) dilution. The limits of agreement for the procedure were, however, large, approximately ±25%, limiting the applicability of the technique for measuring body composition in individual animals.
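A minimal sketch of how such 95% limits of agreement (Bland & Altman) are computed from paired FFM estimates; the arrays below are hypothetical values, not the study's data:

```python
import numpy as np

# Hypothetical paired FFM estimates (g) for illustration only.
ffm_bia = np.array([310.0, 295.0, 340.0, 280.0, 325.0])   # BIA prediction
ffm_thw = np.array([300.0, 310.0, 330.0, 295.0, 315.0])   # tritiated-water reference

diff = ffm_bia - ffm_thw
bias = diff.mean()
sd = diff.std(ddof=1)

# 95% limits of agreement (Bland & Altman): bias +/- 1.96 SD of the differences.
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.1f} g, limits of agreement = [{loa_low:.1f}, {loa_high:.1f}] g")
```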
Abstract:
A new two-parameter integrable model with quantum superalgebra U_q[gl(3/1)] symmetry is proposed: an eight-state fermion model with correlated single-particle and pair hopping as well as uncorrelated triple-particle hopping. The model is solved and the Bethe ansatz equations are obtained.
Abstract:
A significant problem in collecting responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is observed. The performance of two likelihood-based estimators is investigated: a Bayesian estimator implemented through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators are compared with an estimator suggested by Singh, Joarder & King (1996). Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and its relative performance improves as the responses become more scrambled.
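For illustration, a simple moment-based baseline under multiplicative scrambling (not the paper's Bayesian MCMC or maximum-likelihood estimator): since the scrambler s has a known mean, E[z | X] = E[s]·Xβ, so β can be recovered by least squares on z/E[s]. All names and values below are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Multiplicative scrambled-response model: z = s * y, y = X @ beta + eps,
# with scrambler s drawn from a known distribution (here lognormal).
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, 1.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)   # unobserved true response
s = rng.lognormal(mean=0.0, sigma=0.3, size=n)      # distribution known to analyst
z = s * y                                           # only z is observed

# Moment estimator: E[z | X] = E[s] * X beta, so regress z / E[s] on X.
Es = np.exp(0.3**2 / 2)                             # known lognormal mean
beta_hat = np.linalg.lstsq(X, z / Es, rcond=None)[0]
print("true:", beta_true, "estimated:", beta_hat)
```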
Abstract:
Background: From the mid-1980s to mid-1990s, the WHO MONICA Project monitored coronary events and classic risk factors for coronary heart disease (CHD) in 38 populations from 21 countries. We assessed the extent to which changes in these risk factors explain the variation in the trends in coronary-event rates across the populations. Methods: In men and women aged 35-64 years, non-fatal myocardial infarction and coronary deaths were registered continuously to assess trends in rates of coronary events. We carried out population surveys to estimate trends in risk factors. Trends in event rates were regressed on trends in risk score and in individual risk factors. Findings: Smoking rates decreased in most male populations but trends were mixed in women; mean blood pressures and cholesterol concentrations decreased, body-mass index increased, and overall risk scores and coronary-event rates decreased. The model of trends in 10-year coronary-event rates against risk scores and single risk factors showed a poor fit, but this was improved with a 4-year time lag for coronary events. The explanatory power of the analyses was limited by imprecision of the estimates and homogeneity of trends in the study populations. Interpretation: Changes in the classic risk factors seem to partly explain the variation in population trends in CHD. Residual variance is attributable to difficulties in measurement and analysis, including time lag, and to factors that were not included, such as medical interventions. The results support prevention policies based on the classic risk factors but suggest potential for prevention beyond these.
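A hedged sketch of the lagged-trend regression idea for a single hypothetical population (the MONICA analysis regresses trends across 38 populations; the series below are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yearly series for one population (illustrative values only).
years = np.arange(1985, 1996)
risk_score = 10 - 0.2 * (years - 1985) + rng.normal(scale=0.1, size=years.size)
event_rate = 300 - 8.0 * (years - 1985) + rng.normal(scale=5.0, size=years.size)

lag = 4  # regress event rates on risk-factor levels 4 years earlier
x = risk_score[:-lag]          # risk score at year t
y = event_rate[lag:]           # event rate at year t + lag

slope, intercept = np.polyfit(x, y, 1)
print(f"slope = {slope:.1f} events per unit of risk score (4-year lag)")
```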
Abstract:
The concept of rainfall erosivity is extended to the estimation of catchment sediment yield and its variation over time. Five different formulations of rainfall erosivity indices, using annual, monthly and daily rainfall data, are proposed and tested on two catchments in the humid tropics of Australia. Rainfall erosivity indices, using simple power functions of annual and daily rainfall amounts, were found to be adequate in describing the interannual and seasonal variation of catchment sediment yield. The parameter values of these rainfall erosivity indices for catchment sediment yield are broadly similar to those for rainfall erosivity models in relation to the R-factor in the Universal Soil Loss Equation.
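A minimal sketch of fitting a power-function erosivity index E = a·P^b by least squares in log-log space; the rainfall and erosivity values below are illustrative:

```python
import numpy as np

# Hypothetical annual rainfall totals P (mm) and erosivity/sediment-yield index E.
P = np.array([1800.0, 2400.0, 3100.0, 2000.0, 2750.0])
E = np.array([950.0, 1600.0, 2500.0, 1100.0, 2000.0])

# Fit E = a * P**b by ordinary least squares on log E = log a + b log P.
b, log_a = np.polyfit(np.log(P), np.log(E), 1)
a = np.exp(log_a)
print(f"E ≈ {a:.3g} * P^{b:.2f}")
```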
Abstract:
Dendritic cells (DC) are considered to be the major cell type responsible for induction of primary immune responses. While they have been shown to play a critical role in eliciting allosensitization via the direct pathway, there is evidence that maturational and/or activational heterogeneity between DC in different donor organs may be crucial to allograft outcome. Despite such an important perceived role for DC, no accurate estimates of their number in commonly transplanted organs have been reported. Therefore, leukocytes and DC were visualized and enumerated in cryostat sections of normal mouse (C57BL/10, B10.BR, C3H) liver, heart, kidney and pancreas by immunohistochemistry (CD45 and MHC class II staining, respectively). Total immunopositive cell number and MHC class II+ cell density (C57BL/10 mice only) were estimated using established morphometric techniques: the fractionator and disector principles, respectively. Liver contained considerably more leukocytes (≈5-20 × 10⁶) and DC (≈1-3 × 10⁶) than the other organs examined (pancreas: ≈0.6 × 10⁶ and ≈0.35 × 10⁶; heart: ≈0.8 × 10⁶ and ≈0.4 × 10⁶; kidney: ≈1.2 × 10⁶ and ≈0.65 × 10⁶, respectively). In liver, DC comprised a lower proportion of all leukocytes (≈15-25%) than in the other parenchymal organs examined (≈40-60%). Comparatively, DC density in C57BL/10 mice was heart > kidney > pancreas >> liver (≈6.6 × 10⁶, 5 × 10⁶, 4.5 × 10⁶ and 1.1 × 10⁶ cells/cm³, respectively). When compared to previously published data on allograft survival, the results indicate that the absolute number of MHC class II+ DC present in a donor organ is a poor predictor of graft outcome. Survival of solid organ allografts is more closely related to the density of the donor DC network within the graft.
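A minimal sketch of the fractionator principle used for the total-count estimates: the total is the raw count scaled by the reciprocal of each sampling fraction. All fractions and counts below are hypothetical, not the study's data:

```python
# Fractionator estimate: total N = cells counted * product of reciprocal
# sampling fractions. Values are illustrative assumptions.
cells_counted = 420          # immunopositive cells counted in sampled fields
ssf = 1 / 10                 # section sampling fraction (every 10th section)
asf = 1 / 25                 # area sampling fraction (sampled area / section area)
tsf = 1.0                    # thickness sampling fraction (full thickness counted)

total_cells = cells_counted * (1 / ssf) * (1 / asf) * (1 / tsf)
print(f"estimated total = {total_cells:.2e} cells")
```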
Abstract:
The amount of crystalline fraction present in a monohydrate glucose crystal-solution mixture, up to 110% crystal in relation to solution (crystal:solution = 110:100), was determined by water activity measurement. The water activity had a strong linear correlation (R² = 0.994) with the amount of glucose present above saturation. The difference between the water activities of the crystal-solution mixture (a_w1) and of the supersaturated solution obtained by re-dissolving the crystalline fraction (a_w2) allowed calculation of the amount of crystalline phase present (ΔG) in the mixture by the equation ΔG = 846.97(a_w1 - a_w2). Other methods, such as the Raoult, Norrish and Money-Born equations, were also tested for the prediction of the water activity of supersaturated glucose solutions.
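Applying the paper's fitted relation is direct; the two water activities below are illustrative measurements, not data from the study:

```python
# Crystalline-phase estimate from the fitted linear relation
# ΔG = 846.97 * (a_w1 - a_w2); input values are hypothetical.
a_w1 = 0.872   # water activity of the crystal-solution mixture
a_w2 = 0.815   # water activity after re-dissolving the crystalline fraction

delta_G = 846.97 * (a_w1 - a_w2)
print(f"crystalline fraction ΔG ≈ {delta_G:.1f}")
```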
Abstract:
The problem of negative values of the interaction parameter in the Frumkin equation has been analyzed with respect to the adsorption of nonionic molecules on an energetically homogeneous surface. For this purpose, the adsorption states of a homologous series of ethoxylated nonionic surfactants at the air/water interface have been determined using four different models and literature data (surface tension isotherms). The results obtained with the Frumkin adsorption isotherm imply repulsion between the adsorbed species (corresponding to negative values of the interaction parameter), whereas the classical lattice theory for an energetically homogeneous surface (e.g., water/air) admits attraction alone. This serious contradiction can be overcome by assuming heterogeneity in the adsorption layer, that is, effects of partial condensation (formation of aggregates) on the surface. Such a phenomenon is suggested by the Fainerman-Lucassen-Reynders-Miller (FLM) 'aggregation model'. Despite the limitations of the latter model (e.g., monodispersity of the aggregates), we have been able to estimate the sign and order of magnitude of the Frumkin interaction parameter and the range of aggregation numbers of the surface species.
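A sketch of extracting the Frumkin interaction parameter from coverage-concentration data: writing the isotherm as b·c = θ/(1−θ)·exp(−2aθ), the quantity ln c − ln(θ/(1−θ)) is linear in θ with slope −2a. Sign conventions vary between authors, and the data below are hypothetical:

```python
import numpy as np

# Frumkin isotherm: b*c = theta/(1-theta) * exp(-2*a*theta); in this
# convention a > 0 means attraction and a < 0 repulsion.
# Hypothetical coverage/concentration pairs for illustration only.
theta = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
c = np.array([2.1e-6, 7.5e-6, 1.9e-5, 4.4e-5, 1.1e-4])   # mol/L

# ln c - ln(theta/(1-theta)) = -ln b - 2*a*theta: linear in theta.
lhs = np.log(c) - np.log(theta / (1 - theta))
slope, intercept = np.polyfit(theta, lhs, 1)
a_frumkin = -slope / 2
b_frumkin = np.exp(-intercept)
print(f"interaction parameter a = {a_frumkin:.2f}, adsorption constant b = {b_frumkin:.3g}")
```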
Abstract:
Objective: To present a novel algorithm for estimating recruitable alveolar collapse and hyperdistension based on electrical impedance tomography (EIT) during a decremental positive end-expiratory pressure (PEEP) titration. Design: Technical note with illustrative case reports. Setting: Respiratory intensive care unit. Patients: Patients with acute respiratory distress syndrome. Interventions: Lung recruitment and PEEP titration maneuver, with simultaneous acquisition of EIT and X-ray computed tomography (CT) data. Results: We found good agreement (in terms of amount and spatial location) between the collapse estimated by EIT and by CT at all levels of PEEP. The optimal PEEP values detected by EIT for patients 1 and 2 (keeping lung collapse < 10%) were 19 and 17 cmH₂O, respectively. Although pointing to the same non-dependent lung regions, EIT estimates of hyperdistension represent the functional deterioration of lung units rather than their anatomical changes, and could not be compared directly with static CT estimates of hyperinflation. Conclusions: We describe an EIT-based method for estimating recruitable alveolar collapse at the bedside, including its regional distribution, and propose a measure of lung hyperdistension based on regional lung mechanics.
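A heavily hedged sketch of a pixel-wise analysis in the spirit of the method: per-pixel compliance at each PEEP step, with collapse read as the fractional compliance loss at PEEPs below the pixel's best and hyperdistension as the loss at PEEPs above it. Array shapes, weights and numbers are illustrative assumptions, not the authors' published algorithm:

```python
import numpy as np

peep = np.array([25, 21, 17, 13, 9])                 # decremental PEEP (cmH2O)
# pixel_compliance[i, j]: compliance of EIT pixel j at PEEP step i (arb. units)
pixel_compliance = np.array([
    [0.8, 0.5, 1.0],
    [1.0, 0.7, 0.9],
    [0.9, 1.0, 0.7],
    [0.6, 0.8, 0.4],
    [0.3, 0.6, 0.2],
])

best = pixel_compliance.max(axis=0)                  # each pixel's best compliance
best_idx = pixel_compliance.argmax(axis=0)           # PEEP step where it occurs
steps = np.arange(peep.size)[:, None]

loss = (best - pixel_compliance) / best              # fractional compliance loss
collapse = np.where(steps > best_idx[None, :], loss, 0.0)        # PEEP below best
hyperdistension = np.where(steps < best_idx[None, :], loss, 0.0) # PEEP above best

# Aggregate across pixels, weighting by each pixel's best compliance.
w = best / best.sum()
print("estimated collapse per PEEP step:", (collapse * w).sum(axis=1))
```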
Abstract:
Quantum information theory, applied to optical interferometry, yields a 1/n scaling of the phase uncertainty Δφ, independent of the applied phase shift φ, where n is the number of photons in the interferometer. This 1/n scaling is achieved provided that the output state is subjected to an optimal phase measurement. We establish this scaling law for both passive (linear) and active (nonlinear) interferometers and identify the coefficient of proportionality. Whereas a highly nonclassical state is required to achieve optimal scaling for passive interferometry, a classical input state yields a 1/n scaling of the phase uncertainty for active interferometry.
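For contrast, the familiar shot-noise (standard quantum) limit versus the Heisenberg-type scaling the abstract establishes, with C the interferometer-dependent constant of proportionality:

```latex
% Shot-noise limit vs. optimal-measurement (Heisenberg-type) scaling;
% C depends on the interferometer and input state.
\Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{n}}
\qquad\text{vs.}\qquad
\Delta\phi_{\mathrm{opt}} = \frac{C}{n}
```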
Abstract:
The open channel diameter of Escherichia coli recombinant large-conductance mechanosensitive ion channels (MscL) was estimated using the model of Hille (Hille, B. 1968. Pharmacological modifications of the sodium channels of frog nerve. J. Gen. Physiol. 51:199-219) that relates pore size to conductance. Based on the MscL conductance of 3.8 nS and assumed pore lengths, a channel diameter of 34 to 46 Å was calculated. To estimate the pore size experimentally, the effect of large organic ions on the conductance of MscL was examined. Poly-L-lysines (PLLs) with a diameter of 37 Å or larger significantly reduced channel conductance, whereas spermine (≈15 Å), PLL19 (≈25 Å) and 1,1'-bis-(3-(1'-methyl-(4,4'-bipyridinium)-1-yl)-propyl)-4,4'-bipyridinium (≈30 Å) had no effect. The smaller organic ions putrescine, cadaverine, spermine, and succinate all permeated the channel. We conclude that the open pore diameter of the MscL is ≈40 Å, indicating that the MscL has one of the largest channel pores yet described. This channel diameter is consistent with the proposed homohexameric model of the MscL.
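A sketch of the Hille-type calculation: a cylindrical pore of length L and radius r in series with access resistance, solved numerically for the radius that reproduces the measured conductance. The resistivity and pore lengths below are assumed values, so the resulting diameters only roughly track the paper's 34-46 Å range:

```python
import numpy as np
from scipy.optimize import brentq

# Hille (1968) cylindrical-pore model with access resistance:
#   R = rho * L / (pi * r**2) + rho / (2 * r)
rho = 0.3        # ohm*m, assumed resistivity of the recording solution
g = 3.8e-9       # S, MscL conductance reported in the abstract

def resistance(r, L):
    return rho * L / (np.pi * r**2) + rho / (2 * r)

for L in (5e-9, 8.5e-9):                       # assumed pore lengths (m)
    r = brentq(lambda r: resistance(r, L) - 1 / g, 1e-10, 1e-8)
    print(f"L = {L * 1e9:.1f} nm -> diameter ≈ {2 * r * 1e10:.0f} Å")
```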
Abstract:
The concept of parameter-space size adjustment is proposed in order to enable the successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was severely tested on eleven multiminima test functions. The algorithm with the best performance was employed to determine the model parameters of the optical constants of Pt, Ni and Cr.
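A minimal sketch of the parameter-space size adjustment idea: periodically recentre the search box on the best individual and shrink it. The GA below is deliberately simple (truncation selection, Gaussian mutation) and does not reproduce the paper's six selection/reproduction schemes:

```python
import numpy as np

rng = np.random.default_rng(42)

def rastrigin(x):          # a standard multiminima test function
    return 10 * x.shape[1] + (x**2 - 10 * np.cos(2 * np.pi * x)).sum(axis=1)

lo, hi = np.full(2, -5.12), np.full(2, 5.12)
pop = rng.uniform(lo, hi, size=(40, 2))

for gen in range(200):
    fit = rastrigin(pop)
    parents = pop[np.argsort(fit)[:20]]                  # truncation selection
    children = parents + rng.normal(scale=0.1 * (hi - lo), size=parents.shape)
    pop = np.clip(np.vstack([parents, children]), lo, hi)
    if gen % 50 == 49:                                   # space-size adjustment
        best = pop[rastrigin(pop).argmin()]
        span = (hi - lo) * 0.5                           # shrink the box by half
        lo, hi = best - span / 2, best + span / 2

print("best solution:", pop[rastrigin(pop).argmin()])
```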
Abstract:
Fuzzy Bayesian tests were performed to evaluate whether the mother's seroprevalence and the children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which allows an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, forming the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared to a classical one, in which the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation normally seen as competitive.
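A minimal zero-order Takagi-Sugeno-Kang sketch with one input, triangular fuzzy sets and constant rule outputs (the paper's model uses four inputs; the memberships and outputs below are illustrative assumptions):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def tsk(seroprevalence):
    # Rule firing strengths (memberships must cover the input domain in use).
    w_low = tri(seroprevalence, 0.0, 0.2, 0.5)    # "low" maternal seroprevalence
    w_high = tri(seroprevalence, 0.4, 0.8, 1.0)   # "high"
    z = np.array([0.95, 0.80])                    # constant outputs: coverage
    w = np.array([w_low, w_high])
    return (w * z).sum() / w.sum()                # weighted-average defuzzification

print(tsk(0.3), tsk(0.7))   # recommended vaccine coverage for two inputs
```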
Abstract:
Parenteral anticoagulation is a cornerstone in the management of venous and arterial thrombosis. Unfractionated heparin has a wide dose/response relationship, requiring frequent and troublesome laboratory follow-up. Because of these factors, the use of low-molecular-weight heparin has been increasing. Inadequate dosage has been pointed out as a potential problem, because using subjectively estimated weight instead of measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weights of 28 patients in need of parenteral anticoagulation. Basal and steady-state (after the second subcutaneous dose of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into 2 groups according to anticoagulation adequacy. Of the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID of enoxaparin. Only 4 (14.3%) of all patients had anti-Xa activity below the lower limit of the therapeutic range (<0.5 IU/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 IU/mL) during the initial, crucial phase of treatment.
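The adequacy criterion reduces to simple arithmetic on the measured weight; the weights and dose below are hypothetical:

```python
# Adequacy check mirroring the study's grouping: at least 0.9 mg/kg of
# measured weight per BID dose counts as adequate. Values are illustrative.
estimated_weight_kg = 70.0            # weight guessed in the ED
measured_weight_kg = 82.0             # weight actually measured
dose_mg = estimated_weight_kg * 1.0   # 1 mg/kg BID dosed from the estimate

dose_per_kg = dose_mg / measured_weight_kg
print(f"{dose_per_kg:.2f} mg/kg ->",
      "adequate" if dose_per_kg >= 0.9 else "underdosed")
```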
Abstract:
Background: Food portion size estimation involves a complex mental process that may influence the evaluation of food consumption. Knowing the variables that influence this process can improve the accuracy of dietary assessment. The present study aimed to evaluate the ability of nutrition students to estimate food portions in usual meals and to relate food energy content to errors in portion size estimation. Methods: Seventy-eight nutrition students, who had already studied food energy content, participated in this cross-sectional study on the estimation of food portions, organised into four meals. The participants estimated the quantity of each food, in grams or millilitres, with the food in view. Estimation errors were quantified and their magnitudes evaluated. Estimated quantities (EQ) lower than 90% and higher than 110% of the weighed quantity (WQ) were considered to represent underestimation and overestimation, respectively. The correlation between food energy content and estimation error was analysed by the Spearman correlation, and the mean EQ and WQ were compared by means of the Wilcoxon signed rank test (P < 0.05). Results: A low percentage of estimates (18.5%) were considered accurate (within ±10% of the actual weight). The most frequently underestimated food items were cauliflower, lettuce, apple and papaya; the most often overestimated items were milk, margarine and sugar. A significant positive correlation between food energy density and estimation error was found (r = 0.8166; P = 0.0002). Conclusions: The results revealed a low percentage of acceptable estimations of food portion size by nutrition students, with trends toward overestimation of high-energy food items and underestimation of low-energy items.
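A sketch of the paper's classification and tests on hypothetical data (estimates within ±10% of the weighed quantity count as acceptable; all arrays below are illustrative, not the study's measurements):

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

wq = np.array([120.0, 15.0, 200.0, 80.0, 30.0])   # weighed quantities (g)
eq = np.array([100.0, 22.0, 195.0, 60.0, 45.0])   # estimated quantities (g)

# Classify each estimate: acceptable if within 90-110% of the weighed quantity.
ratio = eq / wq
acceptable = (ratio >= 0.9) & (ratio <= 1.1)
print(f"acceptable estimates: {acceptable.mean():.0%}")

# Spearman correlation of estimation error with energy density (kcal/g),
# and paired Wilcoxon signed-rank test comparing EQ with WQ.
energy_density = np.array([0.25, 7.2, 0.6, 0.15, 3.9])
rho, p = spearmanr(energy_density, eq - wq)
stat, p_w = wilcoxon(eq, wq)
print(f"Spearman rho = {rho:.2f} (P = {p:.3f}), Wilcoxon P = {p_w:.3f}")
```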