983 results for trimmed likelihood estimation
Abstract:
Background: From the mid-1980s to mid-1990s, the WHO MONICA Project monitored coronary events and classic risk factors for coronary heart disease (CHD) in 38 populations from 21 countries. We assessed the extent to which changes in these risk factors explain the variation in the trends in coronary-event rates across the populations. Methods: In men and women aged 35-64 years, non-fatal myocardial infarction and coronary deaths were registered continuously to assess trends in rates of coronary events. We carried out population surveys to estimate trends in risk factors. Trends in event rates were regressed on trends in risk score and in individual risk factors. Findings: Smoking rates decreased in most male populations but trends were mixed in women; mean blood pressures and cholesterol concentrations decreased, body-mass index increased, and overall risk scores and coronary-event rates decreased. The model of trends in 10-year coronary-event rates against risk scores and single risk factors showed a poor fit, but this was improved with a 4-year time lag for coronary events. The explanatory power of the analyses was limited by imprecision of the estimates and homogeneity of trends in the study populations. Interpretation: Changes in the classic risk factors seem to partly explain the variation in population trends in CHD. Residual variance is attributable to difficulties in measurement and analysis, including time lag, and to factors that were not included, such as medical interventions. The results support prevention policies based on the classic risk factors but suggest potential for prevention beyond these.
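As a rough illustration of the core analysis (not the MONICA data or code), the sketch below regresses cross-population event-rate trends on risk-score trends with synthetic numbers; in the study itself, event trends were computed over a window lagged 4 years behind the risk-factor window.

```python
# Synthetic sketch of the cross-population trend regression described above.
import numpy as np

rng = np.random.default_rng(0)
n_pops = 38
risk_trend = rng.normal(-0.02, 0.01, n_pops)    # risk-score change per year
event_trend = 1.5 * risk_trend + rng.normal(0, 0.01, n_pops)  # lagged events

slope, intercept = np.polyfit(risk_trend, event_trend, 1)
r = np.corrcoef(risk_trend, event_trend)[0, 1]
print(f"slope = {slope:.2f}, intercept = {intercept:.4f}, R^2 = {r ** 2:.2f}")
```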
Abstract:
Dendritic cells (DC) are considered to be the major cell type responsible for induction of primary immune responses. While they have been shown to play a critical role in eliciting allosensitization via the direct pathway, there is evidence that maturational and/or activational heterogeneity between DC in different donor organs may be crucial to allograft outcome. Despite such an important perceived role for DC, no accurate estimates of their number in commonly transplanted organs have been reported. Therefore, leukocytes and DC were visualized and enumerated in cryostat sections of normal mouse (C57BL/10, B10.BR, C3H) liver, heart, kidney and pancreas by immunohistochemistry (CD45 and MHC class II staining, respectively). Total immunopositive cell number and MHC class II+ cell density (C57BL/10 mice only) were estimated using established morphometric techniques (the fractionator and disector principles, respectively). Liver contained considerably more leukocytes (~5-20 × 10⁶) and DC (~1-3 × 10⁶) than the other organs examined (pancreas: ~0.6 × 10⁶ and ~0.35 × 10⁶; heart: ~0.8 × 10⁶ and ~0.4 × 10⁶; kidney: ~1.2 × 10⁶ and ~0.65 × 10⁶, respectively). In liver, DC comprised a lower proportion of all leukocytes (~15-25%) than in the other parenchymal organs examined (~40-60%). By comparison, DC density in C57BL/10 mice ranked heart > kidney > pancreas ≫ liver (~6.6 × 10⁶, 5 × 10⁶, 4.5 × 10⁶ and 1.1 × 10⁶ cells/cm³, respectively). When compared to previously published data on allograft survival, the results indicate that the absolute number of MHC class II+ DC present in a donor organ is a poor predictor of graft outcome. Survival of solid organ allografts is more closely related to the density of the donor DC network within the graft.
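For context on the counting method, here is a minimal sketch of the fractionator principle named above; the sampling fractions and counts are made up, not the study's values.

```python
# Fractionator principle: total count = particles counted / product of
# the sampling fractions applied at each stage.
def fractionator_total(counted: int, section_fraction: float,
                       area_fraction: float) -> float:
    return counted / (section_fraction * area_fraction)

# e.g. 450 CD45+ cells counted in 1/10 of the sections and 1/200 of each
# section's area -> ~9 x 10^5 cells in the organ (illustrative numbers)
print(f"{fractionator_total(450, 1/10, 1/200):,.0f}")
```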
Abstract:
Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data containing a group or groups of observations with longer-than-normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we consider a more robust approach by modelling the data by a mixture of t distributions. The use of the ECM algorithm to fit this t mixture model is described, and examples of its use are given in the context of clustering multivariate data in the presence of atypical observations in the form of background noise.
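A minimal sketch of the idea follows, with the degrees of freedom held fixed (the paper's ECM algorithm can also estimate them); two univariate components on synthetic data with background noise, all settings illustrative.

```python
# EM for a two-component univariate t mixture with fixed degrees of freedom.
import numpy as np
from scipy.stats import t as t_dist

def fit_t_mixture(x, df=4.0, n_iter=200):
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibilities tau and precision weights u
        dens = np.stack([pi[j] * t_dist.pdf(x, df, mu[j], sigma[j])
                         for j in range(2)])
        tau = dens / dens.sum(axis=0)
        d2 = np.stack([((x - mu[j]) / sigma[j]) ** 2 for j in range(2)])
        u = (df + 1.0) / (df + d2)        # small u = downweighted outlier
        # M-step (CM-steps): weighted location/scale, then mixing weights
        w = tau * u
        mu = (w * x).sum(axis=1) / w.sum(axis=1)
        sigma = np.sqrt((w * (x - mu[:, None]) ** 2).sum(axis=1)
                        / tau.sum(axis=1))
        pi = tau.mean(axis=1)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 200),
                    rng.normal(3, 10, 20)])      # last block: background noise
print(fit_t_mixture(x))
```

The u weights are what make the fit robust: points far from a component's center get u well below 1, so they barely move the location and scale estimates.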
Abstract:
The amount of crystalline fraction present in a monohydrate glucose crystal-solution mixture, up to 110% crystal in relation to solution (crystal:solution = 110:100), was determined by water activity measurement. It was found that the water activity had a strong linear correlation (R² = 0.994) with the amount of glucose present above saturation. The difference between the water activities of the crystal-solution mixture (a_w1) and of the supersaturated solution obtained by re-dissolving the crystalline fraction (a_w2) allowed calculation of the amount of crystalline phase present (ΔG) in the mixture by the equation ΔG = 846.97(a_w1 - a_w2). Other methods, such as Raoult's, Norrish and Money-Born equations, were also tested for the prediction of the water activity of supersaturated glucose solution.
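The calibration line transcribes directly into code; the abstract does not state the units of ΔG explicitly, so the example inputs below are purely illustrative.

```python
# The paper's calibration line: a_w1 is the water activity of the
# crystal-solution mixture, a_w2 that of the re-dissolved solution.
def crystalline_amount(a_w1: float, a_w2: float) -> float:
    """Delta G = 846.97 * (a_w1 - a_w2)."""
    return 846.97 * (a_w1 - a_w2)

print(crystalline_amount(0.82, 0.76))   # ~50.8, in the paper's units
```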
Abstract:
This article deals with the efficiency of fractional integration parameter estimators. The study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the open interval (-1, 1). The evaluated estimation methods were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on the stationarity/non-stationarity and persistence/anti-persistence conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was superior for non-invertible anti-persistent processes.
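For concreteness, here is one widely used semiparametric estimator of the integration order d, a GPH-style log-periodogram regression; the bandwidth m = sqrt(n) is a common default, not necessarily the paper's choice.

```python
# GPH log-periodogram estimate of the fractional integration order d.
import numpy as np

def gph_estimate(x, m=None):
    n = len(x)
    m = m or int(np.sqrt(n))
    lam = 2 * np.pi * np.arange(1, m + 1) / n          # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -np.log(4 * np.sin(lam / 2) ** 2)              # regressor; slope = d
    return np.polyfit(X, np.log(I), 1)[0]

rng = np.random.default_rng(0)
e = rng.normal(size=2048)
print(gph_estimate(e))             # roughly 0 for white noise (d = 0)
print(gph_estimate(np.cumsum(e)))  # roughly 1 for a random walk (d = 1)
```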
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting.
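Below is a minimal sketch of the fixed-effects core of such a model: a two-component Weibull survival mixture fitted by direct maximum likelihood with right censoring. The paper's random hospital effects and EM scheme are omitted, and the data and starting values are illustrative.

```python
# Two-component Weibull survival mixture, direct ML with right censoring.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_loglik(theta, t, event):
    p = expit(theta[0])                        # mixing proportion
    k1, l1, k2, l2 = np.exp(theta[1:])         # shapes/scales kept positive
    pdf = lambda t, k, l: (k / l) * (t / l) ** (k - 1) * np.exp(-(t / l) ** k)
    srv = lambda t, k, l: np.exp(-(t / l) ** k)
    f = p * pdf(t, k1, l1) + (1 - p) * pdf(t, k2, l2)
    S = p * srv(t, k1, l1) + (1 - p) * srv(t, k2, l2)
    return -(event * np.log(f) + (1 - event) * np.log(S)).sum()

rng = np.random.default_rng(0)
acute = rng.random(500) < 0.4                  # 'acute' vs 'chronic' phase
t = np.where(acute, 2.0 * rng.weibull(1.5, 500), 20.0 * rng.weibull(0.8, 500))
c = rng.exponential(30.0, 500)                 # censoring times
obs, event = np.minimum(t, c), (t <= c).astype(float)

x0 = np.array([0.0, np.log(1.5), np.log(2.0), np.log(0.8), np.log(20.0)])
res = minimize(neg_loglik, x0, args=(obs, event), method="Nelder-Mead",
               options={"maxiter": 5000})
print(expit(res.x[0]), np.exp(res.x[1:]))      # p, (k1, l1, k2, l2)
```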
Abstract:
ArtinM is a D-mannose-binding lectin that has been arousing increasing interest because of its biomedical properties, especially those involving the stimulation of the Th1 immune response, which confers protection against intracellular pathogens. The potential pharmaceutical applications of ArtinM have motivated the production of its recombinant form (rArtinM), so it is important to compare the sugar-binding properties of jArtinM and rArtinM in order to take better advantage of the potential applications of the recombinant lectin. In this work, a biosensor framework based on a Quartz Crystal Microbalance (QCM) was established with the purpose of making a comparative study of the activity of the native and recombinant ArtinM protein. The QCM transducer was strategically functionalized to use a simple model of protein-binding kinetics. This approach allowed for the determination of the binding/dissociation kinetic rates and affinity equilibrium constant of both forms of ArtinM with the glycoprotein horseradish peroxidase (HRP), an N-glycosylated protein that contains the trimannoside Manα1-3[Manα1-6]Man, which is a known ligand for jArtinM (Jeyaprakash et al., 2004). Real-time monitoring of binding showed that rArtinM was able to bind HRP, leading to an analytical curve similar to that of jArtinM, with statistically equivalent kinetic rates and affinity equilibrium constants for both forms of ArtinM. The lower reactivity of rArtinM with HRP compared with jArtinM was considered to be due to a difference in the number of Carbohydrate Recognition Domains (CRDs) per molecule of each lectin form rather than to a difference in the energy of binding per CRD.
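For readers unfamiliar with this kind of analysis, a simple 1:1 Langmuir binding model (assumed here; the abstract does not spell out the authors' exact model) gives the observed association rate as k_obs = ka·C + kd, so ka, kd, and KD = kd/ka fall out of a linear fit. All numbers below are invented.

```python
# 1:1 Langmuir kinetics: k_obs = ka*C + kd, fitted across concentrations.
import numpy as np

conc = np.array([10e-9, 25e-9, 50e-9, 100e-9])   # analyte concentration (M)
kobs = np.array([0.012, 0.024, 0.044, 0.085])    # fitted exponential rates (1/s)

ka, kd = np.polyfit(conc, kobs, 1)               # slope = ka, intercept = kd
print(f"ka = {ka:.3g} 1/(M s), kd = {kd:.3g} 1/s, KD = {kd / ka:.3g} M")
```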
Abstract:
Objective: To present a novel algorithm for estimating recruitable alveolar collapse and hyperdistension based on electrical impedance tomography (EIT) during a decremental positive end-expiratory pressure (PEEP) titration. Design: Technical note with illustrative case reports. Setting: Respiratory intensive care unit. Patients: Patients with acute respiratory distress syndrome. Interventions: Lung recruitment and PEEP titration maneuver. Measurements: Simultaneous acquisition of EIT and X-ray computerized tomography (CT) data. Results: We found good agreement (in terms of amount and spatial location) between the collapse estimated by EIT and CT for all levels of PEEP. The optimal PEEP values detected by EIT for patients 1 and 2 (keeping lung collapse < 10%) were 19 and 17 cmH2O, respectively. Although pointing to the same non-dependent lung regions, EIT estimates of hyperdistension represent the functional deterioration of lung units, instead of their anatomical changes, and could not be compared directly with static CT estimates for hyperinflation. Conclusions: We described an EIT-based method for estimating recruitable alveolar collapse at the bedside, pointing out its regional distribution. Additionally, we proposed a measure of lung hyperdistension based on regional lung mechanics.
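Below is a sketch of the compliance-loss bookkeeping such an algorithm implies, reconstructed from the abstract rather than the authors' published code: per-pixel compliance is tracked across the decremental PEEP steps, and losses relative to each pixel's best compliance are read as collapse (below the best PEEP) or hyperdistension (above it). The data are random placeholders.

```python
# Per-pixel compliance-loss accounting across a decremental PEEP titration.
import numpy as np

rng = np.random.default_rng(0)
peep = np.array([23, 21, 19, 17, 15, 13])        # decremental steps (cmH2O)
cpx = rng.random((len(peep), 400)) + 0.1         # compliance per step, pixel

best = cpx.max(axis=0)                           # per-pixel best compliance
best_step = cpx.argmax(axis=0)

for s, p in enumerate(peep):
    loss = (best - cpx[s]) / best
    collapse = np.where(s > best_step, loss, 0.0)   # lost at lower PEEP
    hyper = np.where(s < best_step, loss, 0.0)      # lost at higher PEEP
    print(f"PEEP {p}: collapse {100 * np.average(collapse, weights=best):.1f}%,"
          f" hyperdistension {100 * np.average(hyper, weights=best):.1f}%")
```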
Abstract:
We explore the task of optimal quantum channel identification and, in particular, the estimation of a general one-parameter quantum process. We derive new characterizations of optimality and apply the results to several examples, including the qubit depolarizing channel and the harmonic oscillator damping channel. We also discuss the geometry of the problem and illustrate the usefulness of using entanglement in process estimation.
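For context, the usual optimality benchmark in one-parameter estimation is the quantum Cramér-Rao bound (standard theory, not a result specific to this paper):

$$
\operatorname{Var}(\hat\theta)\;\ge\;\frac{1}{\nu\,F_Q(\rho_\theta)},
\qquad
F_Q(\rho_\theta)=\operatorname{Tr}\!\left[\rho_\theta L_\theta^{2}\right],
\qquad
\partial_\theta\rho_\theta=\tfrac{1}{2}\left(L_\theta\rho_\theta+\rho_\theta L_\theta\right),
$$

where $\nu$ is the number of channel uses and $L_\theta$ is the symmetric logarithmic derivative; entangled probe states can increase $F_Q$, which is the advantage the abstract alludes to.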
Abstract:
In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimation of the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically by their application to three real data sets and by a small-scale Monte Carlo experiment.
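A minimal sketch of the bootstrap route follows; sklearn's GaussianMixture stands in for the paper's EM implementation, and the data are synthetic.

```python
# Bootstrap standard errors for the fitted component means.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 100)])

def fitted_means(data):
    gm = GaussianMixture(n_components=2, random_state=0).fit(data[:, None])
    return np.sort(gm.means_.ravel())      # sort to undo label switching

boot = np.array([fitted_means(rng.choice(x, size=x.size))
                 for _ in range(200)])     # 200 bootstrap resamples
print("bootstrap SEs of the component means:", boot.std(axis=0, ddof=1))
```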
Abstract:
Izenman and Sommer (1988) used a non-parametric kernel density estimation technique to fit a seven-component model to the paper thickness of the 1872 Hidalgo stamp issue of Mexico. They observed an apparent conflict when fitting a normal mixture model with three components with unequal variances. This conflict is examined further by investigating the most appropriate number of components when fitting a normal mixture of components with equal variances.
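One standard way to frame that question is information-criterion comparison across candidate numbers of components; the sketch below uses BIC with sklearn, where covariance_type="tied" enforces equal variances. The data are synthetic stand-ins, not the Hidalgo thicknesses.

```python
# Choosing the number of equal-variance normal components by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.07, 0.004, 250),
                    rng.normal(0.08, 0.004, 150),
                    rng.normal(0.10, 0.004, 85)])[:, None]

for g in range(1, 8):
    gm = GaussianMixture(n_components=g, covariance_type="tied",
                         random_state=0).fit(x)
    print(g, round(gm.bic(x), 1))          # smallest BIC = preferred g
```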
Abstract:
Quantum information theory, applied to optical interferometry, yields a 1/n scaling of the phase uncertainty Δφ independent of the applied phase shift φ, where n is the number of photons in the interferometer. This 1/n scaling is achieved provided that the output state is subjected to an optimal phase measurement. We establish this scaling law for both passive (linear) and active (nonlinear) interferometers and identify the coefficient of proportionality. Whereas a highly nonclassical state is required to achieve optimal scaling for passive interferometry, a classical input state yields a 1/n scaling of phase uncertainty for active interferometry.
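In symbols, the contrast at stake is the familiar shot-noise scaling versus the optimal-measurement scaling whose coefficient the paper identifies:

$$
\Delta\phi_{\text{shot-noise}}\sim\frac{1}{\sqrt{n}}
\qquad\text{versus}\qquad
\Delta\phi_{\text{optimal}}\sim\frac{c}{n}.
$$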
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product-moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within- and between-sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations, and discusses several approaches for obtaining interval estimates.
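The sketch below illustrates the bias in question and a method-of-moments reliability correction, a simpler stand-in for the paper's maximum likelihood approach; all numbers are simulated.

```python
# Attenuation of the Pearson correlation of noisy sub-population means,
# and a reliability-based de-attenuation.
import numpy as np

rng = np.random.default_rng(0)
k, n = 40, 25                                   # sub-populations, samples each
mu_x = rng.normal(0, 1, k)
mu_y = 0.8 * mu_x + rng.normal(0, 0.6, k)       # true correlation ~0.8
xbar = mu_x + rng.normal(0, 1, (k, n)).mean(axis=1)
ybar = mu_y + rng.normal(0, 1, (k, n)).mean(axis=1)

r_naive = np.corrcoef(xbar, ybar)[0, 1]
rel_x = mu_x.var() / (mu_x.var() + 1.0 / n)     # reliability of the means
rel_y = mu_y.var() / (mu_y.var() + 1.0 / n)
print(r_naive, r_naive / np.sqrt(rel_x * rel_y))  # naive vs de-attenuated
```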
Abstract:
The open channel diameter of Escherichia coli recombinant large-conductance mechanosensitive ion channels (MscL) was estimated using the model of Hille (Hille, B. 1968. Pharmacological modifications of the sodium channels of frog nerve. J. Gen. Physiol. 51:199-219) that relates pore size to conductance. Based on the MscL conductance of 3.8 nS and assumed pore lengths, a channel diameter of 34 to 46 Å was calculated. To estimate the pore size experimentally, the effect of large organic ions on the conductance of MscL was examined. Poly-L-lysines (PLLs) with a diameter of 37 Å or larger significantly reduced channel conductance, whereas spermine (~15 Å), PLL19 (~25 Å) and 1,1'-bis-(3-(1'-methyl-(4,4'-bipyridinium)-1-yl)-propyl)-4,4'-bipyridinium (~30 Å) had no effect. The smaller organic ions putrescine, cadaverine, spermine, and succinate all permeated the channel. We conclude that the open pore diameter of the MscL is ~40 Å, indicating that the MscL has one of the largest channel pores yet described. This channel diameter is consistent with the proposed homohexameric model of the MscL.
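A sketch of the Hille-style calculation referred to above: a cylindrical pore of length l and diameter d in series with an access resistance at each mouth, 1/g = 4ρl/(πd²) + ρ/d, solved for d. The resistivity ρ and the pore lengths below are assumptions chosen for illustration; with these values the estimate lands in the 34-46 Å bracket reported above.

```python
# Cylindrical-pore + access-resistance estimate of the open pore diameter.
import numpy as np

def pore_diameter(g, rho, l):
    # positive root of d**2 - rho*g*d - 4*rho*g*l/pi = 0
    return (rho * g + np.sqrt((rho * g) ** 2 + 16 * rho * g * l / np.pi)) / 2

g = 3.8e-9                         # MscL conductance (S), from the abstract
for l in (3e-9, 5e-9):             # assumed pore lengths (m)
    d = pore_diameter(g, rho=0.5, l=l)    # rho ~0.5 ohm*m (saline), assumed
    print(f"l = {l * 1e9:.0f} nm -> d = {d * 1e10:.0f} Å")   # ~38 and ~46 Å
```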
Abstract:
Fuzzy Bayesian tests were performed to evaluate whether the mother's seroprevalence and children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which would allow an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, forming the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared to a classical one, where the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation, normally seen as competitive.
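A minimal zero-order Takagi-Sugeno-Kang sketch of this kind of rule base follows, cut down to one input for brevity (the paper uses four); the membership shapes, rule outputs, and query value are invented, not the authors' calibrated model.

```python
# Zero-order TSK: fuzzy-set inputs, constant rule outputs, weighted average.
import numpy as np

def low(x):   # "low" maternal seroprevalence: 1 below 0.3, 0 above 0.7
    return float(np.clip((0.7 - x) / 0.4, 0.0, 1.0))

def high(x):  # "high": mirror image of "low"
    return float(np.clip((x - 0.3) / 0.4, 0.0, 1.0))

def recommended_coverage(seroprev):
    w = np.array([low(seroprev), high(seroprev)])   # rule firing strengths
    z = np.array([0.95, 0.80])   # constant outputs: IF low THEN 0.95, etc.
    return float((w * z).sum() / w.sum())           # weighted-average output

print(recommended_coverage(0.55))   # blends both rules -> ~0.86
```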