77 results for robust estimation statistics


Relevance: 20.00%

Abstract:

The objective of this study was to compare the accuracy of sonographic estimation of fetal weight of macrosomic babies in diabetic vs non-diabetic pregnancies. All babies weighing 4000 g or more at birth who had ultrasound scans performed within one week of delivery were included in this retrospective study. Pregnancies with diabetes mellitus were compared with those without. The mean simple error (actual birthweight - estimated fetal weight), the mean standardised absolute error (absolute value of the simple error (g) divided by the actual birthweight (kg)), and the percentage of estimated birthweights falling within 15% of the actual birthweight were compared between the two groups. There were 9516 deliveries during the study period, of which 1211 babies (12.7%) weighed 4000 g or more. A total of 56 non-diabetic pregnancies and 19 diabetic pregnancies were compared. The average sonographic estimate of fetal weight in diabetic pregnancies was 8% less than the actual birthweight, compared with 0.2% in the non-diabetic group (p < 0.01). The estimated fetal weight was within 15% of the birthweight in 74% of the diabetic pregnancies, compared with 93% of the non-diabetic pregnancies (p < 0.05). In the diabetic group, 26.3% of birthweights were underestimated by more than 15%, compared with 5.4% in the non-diabetic group (p < 0.05). In conclusion, the accuracy of fetal weight estimation using standard formulae in macrosomic fetuses is significantly worse in diabetic than in non-diabetic pregnancies. When sonographic fetal weight estimation is used to influence the mode of delivery in diabetic women, a more conservative cut-off should be considered.
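
For concreteness, the three accuracy measures defined above can be computed as follows; this is an illustrative sketch (function and variable names are ours, not from the study):

```python
import numpy as np

def accuracy_metrics(actual_g, estimated_g):
    """Accuracy measures for fetal weight estimation as defined above.
    Inputs are arrays of actual birthweights and sonographic estimates in grams."""
    actual_g = np.asarray(actual_g, dtype=float)
    estimated_g = np.asarray(estimated_g, dtype=float)
    simple_error = actual_g - estimated_g                      # grams
    # standardised absolute error: |simple error| (g) / actual birthweight (kg)
    std_abs_error = np.abs(simple_error) / (actual_g / 1000.0)
    within_15 = np.abs(simple_error) <= 0.15 * actual_g
    return {
        "mean_simple_error_g": simple_error.mean(),
        "mean_standardised_abs_error": std_abs_error.mean(),
        "pct_within_15pct_of_birthweight": 100.0 * within_15.mean(),
    }
```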

Relevance: 20.00%

Abstract:

As discussed in the preceding paper [Wiseman and Vaccaro, preceding paper, Phys. Rev. A 65, 043605 (2002)], the stationary state of an optical or atom laser far above threshold is a mixture of coherent field states with random phase or, equivalently, a Poissonian mixture of number states. We are interested in which, if either, of these descriptions of ρ_ss as a stationary ensemble of pure states is more natural. In the preceding paper we concentrated on the question of whether descriptions such as these are physically realizable (PR). In this paper we investigate another relevant aspect of these ensembles: their robustness. A robust ensemble is one whose constituent pure states survive relatively unchanged for a long time under the system evolution. We determine numerically the most robust ensembles as a function of the parameters in the laser model: the self-energy χ of the bosons in the laser mode and the excess phase noise ν. We find that these most robust ensembles are PR ensembles, or similar to PR ensembles, for all values of these parameters. In the ideal laser limit (ν = χ = 0), the most robust states are coherent states. As the phase noise or phase dispersion is increased, through ν or the boson self-interaction χ respectively, the most robust states become more and more amplitude squeezed. We find scaling laws for these states and give analytical derivations for them. As the phase diffusion or dispersion becomes so large that the laser output is no longer quantum coherent, the most robust states become so squeezed that they cease to have a well-defined coherent amplitude. That is, the quantum coherence of the laser output is manifest in the most robust PR ensemble being an ensemble of states with a well-defined coherent amplitude. This lends support to our approach of regarding robust PR ensembles as the most natural description of the state of the laser mode. It also has interesting implications for atom lasers in particular, for which phase dispersion due to self-interactions is expected to be large.
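
The robustness criterion can be illustrated with a deliberately stripped-down sketch using QuTiP: keep only a pure phase-diffusion term (no gain or saturation, so this is not the full laser model of the paper) and compare how well a coherent state and a number state survive as ensemble members; all parameter values are arbitrary:

```python
import numpy as np
import qutip as qt

N = 40                                  # Fock-space truncation
nu = 0.1                                # excess phase-noise rate (arbitrary)
a = qt.destroy(N)
H = qt.qzero(N)                         # no coherent dynamics in this sketch
c_ops = [np.sqrt(nu) * a.dag() * a]     # pure phase-diffusion Lindblad term

tlist = np.linspace(0.0, 10.0, 101)
candidates = {
    "coherent, alpha = 3": qt.coherent(N, 3.0),
    "number, n = 9": qt.basis(N, 9),
}
for label, psi in candidates.items():
    rho_t = qt.mesolve(H, psi, tlist, c_ops).states
    survival = [qt.fidelity(rho, psi) ** 2 for rho in rho_t]
    # the number state is invariant under phase diffusion; the coherent
    # state dephases, mirroring the trend toward amplitude-squeezed
    # states as nu grows
    print(label, "-> survival at t = 10:", round(survival[-1], 3))
```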

Relevance: 20.00%

Abstract:

The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelated individuals in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles), unrelated individuals offer such a high degree of accuracy that there is little reason to prefer less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
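
A toy illustration of the effect: EM haplotype-frequency estimation for two SNPs, with and without a small per-genotype error rate (all names and parameter values here are ours, not from the study):

```python
import itertools
import numpy as np

HAPS = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the four two-SNP haplotypes

def compatible_pairs(geno):
    """Ordered haplotype pairs consistent with an unphased genotype
    (geno = allele counts, each in {0, 1, 2}, at the two sites)."""
    return [(i, j) for i, j in itertools.product(range(4), repeat=2)
            if all(HAPS[i][k] + HAPS[j][k] == geno[k] for k in range(2))]

def em_haplotype_freqs(genotypes, n_iter=200):
    """EM estimate of the four haplotype frequencies from unphased genotypes."""
    p = np.full(4, 0.25)
    pair_lists = [compatible_pairs(g) for g in genotypes]
    for _ in range(n_iter):
        counts = np.zeros(4)
        for pairs in pair_lists:
            w = np.array([p[i] * p[j] for i, j in pairs])
            w /= w.sum()                 # posterior over phase resolutions
            for (i, j), wk in zip(pairs, w):
                counts[i] += wk
                counts[j] += wk
        p = counts / counts.sum()
    return p

rng = np.random.default_rng(0)
true_p = np.array([0.7, 0.1, 0.1, 0.1])
hap_idx = rng.choice(4, size=(500, 2), p=true_p)
genos = [(HAPS[a][0] + HAPS[b][0], HAPS[a][1] + HAPS[b][1]) for a, b in hap_idx]
# corrupt the first site of roughly 2% of genotypes
noisy = [(int(rng.integers(3)), g1) if rng.random() < 0.02 else (g0, g1)
         for g0, g1 in genos]
print("clean:", em_haplotype_freqs(genos).round(3))
print("noisy:", em_haplotype_freqs(noisy).round(3))
```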

Relevance: 20.00%

Abstract:

This paper addresses robust model-order reduction of a high-dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Based on a nonlinear, distributed-parameter model of the same process, which was validated against experimental data from an existing pilot-scale BNR activated sludge plant, we developed a state-space model with 154 state variables. A general algorithm for robustly reducing the nonlinear PDE model is presented, and based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to one with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncation technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications. (C) 2002 Elsevier Science Ltd. All rights reserved.
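
As an illustration of the two reduction techniques compared here, the sketch below reduces a random stable 154-state surrogate (a stand-in, not the sludge-plant model itself) with python-control, whose balred routine needs the slycot backend; 'truncate' is plain balanced truncation, while 'matchdc' matches the DC gain and corresponds to the singular perturbation approximation:

```python
import numpy as np
import control

rng = np.random.default_rng(1)
n_full, n_red = 154, 30

# Random stable 154-state LTI surrogate
A = rng.standard_normal((n_full, n_full))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n_full)
B = rng.standard_normal((n_full, 1))
C = rng.standard_normal((1, n_full))
sys_full = control.ss(A, B, C, 0)

# 'truncate' = balanced truncation; 'matchdc' = singular perturbation approx.
sys_bt = control.balred(sys_full, n_red, method="truncate")
sys_spa = control.balred(sys_full, n_red, method="matchdc")

w = np.logspace(-3, 2, 200)
err_bt = np.abs(control.frequency_response(sys_full - sys_bt, w).fresp).max()
err_spa = np.abs(control.frequency_response(sys_full - sys_spa, w).fresp).max()
print("peak frequency-response error  BT:", err_bt, " SPA:", err_spa)
```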

Relevance: 20.00%

Abstract:

Introduction: Bioelectrical impedance analysis (BIA) is a useful field measure for estimating total body water (TBW). No prediction formulae have been developed or validated against a reference method in patients with pancreatic cancer. The aim of this study was to assess the agreement between three prediction equations for the estimation of TBW in cachectic patients with pancreatic cancer.
Methods: Resistance was measured at frequencies of 50 and 200 kHz in 18 outpatients (10 males and 8 females, age 70.2 ± 11.8 years) with pancreatic cancer from two tertiary Australian hospitals. Three published prediction formulae were used to calculate TBW: TBW_s, developed in surgical patients, and TBW_ca-uw and TBW_ca-nw, developed in underweight and normal-weight patients with end-stage cancer, respectively.
Results: There was no significant difference in the TBW estimated by the three prediction equations: TBW_s 32.9 ± 8.3 L, TBW_ca-nw 36.3 ± 7.4 L, TBW_ca-uw 34.6 ± 7.6 L. At a population level, there is agreement between the predictions of TBW in patients with pancreatic cancer estimated from the three equations. The best combination of low bias and narrow limits of agreement was observed when TBW was estimated from the equation developed in underweight cancer patients rather than the one developed in normal-weight cancer patients. When no established BIA prediction equation exists, practitioners should use an equation developed in a population with similar critical characteristics such as diagnosis, weight loss, body mass index and/or age.
Conclusions: Further research is required to determine the accuracy of the BIA prediction technique against a reference method in patients with pancreatic cancer.
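
"Low bias and narrow limits of agreement" is the Bland-Altman comparison; a minimal sketch of that computation for two of the TBW predictions (the input arrays are hypothetical):

```python
import numpy as np

def bland_altman(tbw_a, tbw_b):
    """Bias and 95% limits of agreement between paired TBW predictions (litres)."""
    diff = np.asarray(tbw_a, dtype=float) - np.asarray(tbw_b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

# e.g. bias, (lower, upper) = bland_altman(tbw_ca_uw, tbw_ca_nw)
```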

Relevance: 20.00%

Abstract:

We focus on mixtures of factor analyzers from the perspective of model-based density estimation from high-dimensional data, and hence the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. Working in this reduced space allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
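
The trade-off in component-covariance complexity can be made concrete by counting free parameters per component; a small sketch (the q(q-1)/2 term is the usual correction for rotational invariance of the factors):

```python
def cov_params(p, q):
    """Free parameters in one component-covariance matrix for dimension p
    and q latent factors: Sigma = Lambda Lambda' + Psi with Psi diagonal."""
    return {
        "isotropic": 1,
        "diagonal": p,
        "factor_analytic": p * q + p - q * (q - 1) // 2,
        "full": p * (p + 1) // 2,
    }

# e.g. p = 1000 genes, q = 5 factors: 5990 parameters vs 500500 for full
print(cov_params(p=1000, q=5))
```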

Relevance: 20.00%

Abstract:

This paper investigates the robustness of a range of short-term interest rate models. We examine the robustness of these models across different data sets, time periods, sampling frequencies, and estimation techniques. We examine a range of popular one-factor models that allow the conditional mean (drift) and conditional variance (diffusion) to be functions of the current short rate. We find that parameter estimates are highly sensitive to all of these factors in the eight countries we examine. Since the parameter estimates are not robust, these models should be used with caution in practice.
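
The class of models described is nested by the CKLS specification dr = (alpha + beta*r) dt + sigma * r^gamma dW; the toy sketch below simulates it with an Euler scheme and recovers the diffusion elasticity gamma by regression (illustrative only, not the paper's estimation techniques):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, sigma, gamma, dt = 0.02, -0.2, 0.1, 0.75, 1.0 / 252.0

# Euler simulation of dr = (alpha + beta*r) dt + sigma * r**gamma dW
r = np.empty(5000)
r[0] = 0.05
for t in range(1, len(r)):
    drift = (alpha + beta * r[t - 1]) * dt
    shock = sigma * r[t - 1] ** gamma * np.sqrt(dt) * rng.standard_normal()
    r[t] = r[t - 1] + drift + shock

# Drift by least squares on the discretised model
dr, r_lag = np.diff(r), r[:-1]
X = np.column_stack([np.ones_like(r_lag), r_lag]) * dt
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, dr, rcond=None)

# Diffusion elasticity: log(eps^2) = const + 2*gamma*log(r) + noise
eps = dr - (alpha_hat + beta_hat * r_lag) * dt
slope, _ = np.polyfit(np.log(r_lag), np.log(eps ** 2), 1)
print("gamma estimate:", slope / 2)   # should be near 0.75
```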

Relevance: 20.00%

Abstract:

Measurement of the exchange of substances between blood and tissue has been a long-standing challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of positron emission tomography (PET). Today, when modelling data from modern PET scanners, little use is made of this earlier microvascular research in the compartmental models that have become the standard by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with sufficient temporal resolution and good enough counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises the need for models that include more physiology, and we develop microvascular models which remove these inconsistencies. The microvascular models can be regarded as a revision of the input function: whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that use a spatial average of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include the effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed, using either intravascular or diffusible tracers, in terms of physiological parameters that include regional blood flow. (C) 2003 Elsevier Ltd. All rights reserved.
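
For reference, the "standard model" being critiqued is, in its one-tissue form, a linear compartment driven by the inlet concentration; a minimal sketch (K1, k2 and the input function are hypothetical values):

```python
import numpy as np

def one_tissue_ct(t, ca, K1, k2):
    """Standard one-tissue compartment model,
        dC_t/dt = K1 * Ca(t) - k2 * C_t(t),
    solved as the convolution C_t = K1 * (exp(-k2 t) * Ca) on a uniform grid."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(ca, kernel)[: len(t)] * dt

t = np.linspace(0.0, 60.0, 241)              # minutes, uniform grid
ca = 5.0 * t * np.exp(-t / 2.0)              # hypothetical arterial input function
ct = one_tissue_ct(t, ca, K1=0.1, k2=0.05)   # K1 in mL/min/mL, k2 in 1/min
```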

Relevance: 20.00%

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are left completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than the fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
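
The full likelihood maximized by the ECM algorithm takes the standard mixture form below (our notation, consistent with the description above; the semi-parametric version leaves the component-baseline hazards unspecified):

```latex
% n subjects; delta_i = 1 if a failure of type j_i is observed at time t_i,
% delta_i = 0 if censored. pi_j(x) is the logistic mixing probability, and
% f_j, S_j are the density and survivor function under the proportional
% hazards model for failure type j (j = 1, ..., g):
L(\theta) \;=\; \prod_{i=1}^{n}
  \bigl[\, \pi_{j_i}(x_i)\, f_{j_i}(t_i \mid x_i) \,\bigr]^{\delta_i}
  \Bigl[\, \textstyle\sum_{j=1}^{g} \pi_j(x_i)\, S_j(t_i \mid x_i) \,\Bigr]^{1-\delta_i}
```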

Relevance: 20.00%

Abstract:

There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. It is a very useful estimator in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, neither the recent works nor the original work by Pahl provide a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
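
The appeal to classical ML theory suggests the generic Wald construction below; this is a sketch of the standard large-sample interval, not necessarily the exact interval derived in the paper:

```latex
% Large-sample (1 - alpha) Wald interval for the mean trace length mu,
% using the observed Fisher information I(.) evaluated at the MLE:
\hat{\mu} \;\pm\; z_{1-\alpha/2} \,\Big/\, \sqrt{I(\hat{\mu})}
```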