71 results for Count data models


Relevance: 30.00%

Abstract:

Red cell number and size increase during puberty, particularly in males. The aim of the present study was to determine whether expression of genes affecting red cell indices varied with age and sex. Haemoglobin, red cell count, and mean cellular volume were measured longitudinally on 578 pairs of twins at twelve, fourteen and sixteen years of age. Data were analysed using a structural equation modeling approach, in which a variety of univariate and longitudinal simplex models were fitted to the data. Significant heritability was demonstrated for all variables across all ages. The genes involved did not differ between the sexes, although there was evidence for sex limitation in the case of haemoglobin at age twelve. Longitudinal analyses indicated that new genes affecting red cell indices were expressed at different stages of puberty. Some of these genes affected the different red cell indices pleiotropically, while others had effects specific to one variable only.

Relevance: 30.00%

Abstract:

Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. A naive implementation of this procedure can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
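The binned-data EM scheme described above can be sketched compactly. The following is an illustrative univariate, two-component version (the paper's method is multivariate, and its exact E-step uses truncated moments; here within-bin sufficient statistics are approximated at bin midpoints). All function names, starting values, and tolerances are the sketch's own assumptions:

```python
import numpy as np
from scipy.stats import norm

def em_binned_mixture(edges, counts, n_iter=200):
    """EM for a 2-component Gaussian mixture fitted to binned 1-D data.

    Component mass over each bin is a CDF difference; within-bin
    sufficient statistics use bin midpoints (a simplification of the
    exact truncated-moment E-step).
    """
    mids = 0.5 * (edges[:-1] + edges[1:])
    n = counts.sum()
    # crude initial values spread across the data range
    w = np.array([0.5, 0.5])
    mu = np.array([edges[1], edges[-2]])
    sd = np.full(2, (edges[-1] - edges[0]) / 4)
    for _ in range(n_iter):
        # E-step: probability mass of each weighted component over each bin
        p = np.stack([w[k] * (norm.cdf(edges[1:], mu[k], sd[k])
                              - norm.cdf(edges[:-1], mu[k], sd[k]))
                      for k in range(2)])              # shape (2, n_bins)
        tot = np.maximum(p.sum(axis=0, keepdims=True), 1e-300)
        r = p / tot                                    # responsibilities per bin
        # M-step: moments weighted by observed bin counts
        nk = (r * counts).sum(axis=1)
        w = nk / n
        mu = (r * counts * mids).sum(axis=1) / nk
        var = (r * counts * (mids - mu[:, None]) ** 2).sum(axis=1) / nk
        sd = np.sqrt(var)
    return w, mu, sd
```

With reasonably fine bins the midpoint approximation recovers well-separated component means closely, echoing the paper's observation that binning costs little accuracy.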

Relevance: 30.00%

Abstract:

Comparative phylogeography has proved useful for investigating biological responses to past climate change and is strongest when combined with extrinsic hypotheses derived from the fossil record or geology. However, the rarity of species with sufficient, spatially explicit fossil evidence restricts the application of this method. Here, we develop an alternative approach in which spatial models of predicted species distributions under serial paleoclimates are compared with a molecular phylogeography, in this case for a snail endemic to the rainforests of North Queensland, Australia. We also compare the phylogeography of the snail to those from several endemic vertebrates and use consilience across all of these approaches to enhance biogeographical inference for this rainforest fauna. The snail mtDNA phylogeography is consistent with predictions from paleoclimate modeling in relation to the location and size of climatic refugia through the late Pleistocene-Holocene and broad patterns of extinction and recolonization. There is general agreement between quantitative estimates of population expansion from sequence data (using likelihood and coalescent methods) vs. distributional modeling. The snail phylogeography represents a composite of both common and idiosyncratic patterns seen among vertebrates, reflecting the geographically finer scale of persistence and subdivision in the snail. In general, this multifaceted approach, combining spatially explicit paleoclimatological models and comparative phylogeography, provides a powerful approach to locating historical refugia and understanding species' responses to them.

Relevance: 30.00%

Abstract:

Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database and to simplify query development for the recursively structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
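The L-system rewriting that generates such hierarchical data can be illustrated in a few lines. This is a generic deterministic 0L-system, not the paper's software; the axiom and rules are Lindenmayer's classic Anabaena example, chosen here purely as an illustration:

```python
def lsystem(axiom, rules, n):
    """Apply parallel rewriting rules n times (deterministic 0L-system).

    Every symbol is rewritten simultaneously at each step; symbols
    without a rule are copied unchanged.
    """
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's Anabaena model: A -> AB, B -> A
derivation = lsystem("A", {"A": "AB", "B": "A"}, 5)  # "ABAABABAABAAB"
```

The string length grows as the Fibonacci sequence, which hints at why such models "generate masses of data" and why a database-backed query system is useful.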

Relevance: 30.00%

Abstract:

We use published and new trace element data to identify element ratios which discriminate between arc magmas from the supra-subduction zone mantle wedge and those formed by direct melting of subducted crust (i.e. adakites). The clearest distinction is obtained with those element ratios which are strongly fractionated during refertilisation of the depleted mantle wedge, ultimately reflecting slab dehydration. Hence, adakites have significantly lower Pb/Nd and B/Be but higher Nb/Ta than typical arc magmas and continental crust as a whole. Although Li and Be are also overenriched in continental crust, the behaviour of Li/Yb and Be/Nd is more complex and these ratios do not provide unique signatures of slab melting. Archaean tonalite-trondhjemite-granodiorites (TTGs) strongly resemble ordinary mantle wedge-derived arc magmas in terms of fluid-mobile trace element content, implying that they did not form by slab melting but that they originated from mantle which was hydrated and enriched in elements lost from slabs during prograde dehydration. We suggest that Archaean TTGs formed by extensive fractional crystallisation from a mafic precursor. It is widely claimed that the time between the creation and subduction of oceanic lithosphere was significantly shorter in the Archaean (i.e. 20 Ma) than it is today. This difference was seen as an attractive explanation for the presumed preponderance of adakitic magmas during the first half of Earth's history. However, when we consider the effects of a higher potential mantle temperature on the thickness of oceanic crust, it follows that the mean age of oceanic lithosphere has remained virtually constant. Formation of adakites has therefore always depended on local plate geometry and not on potential mantle temperature.

Relevance: 30.00%

Abstract:

The thin-layer drying behaviour of bananas in a heat pump dehumidifier dryer was examined. Four pre-treatments (blanching, chilling, freezing and combined blanching and freezing) were applied to the bananas, which were dried at 50 °C with an air velocity of 3.1 m s^-1 and with the relative humidity of the inlet air at 10-35%. Three drying models (the simple model, the two-term exponential model and the Page model) were examined. All models were evaluated using three statistical measures: the correlation coefficient, the root mean square error and the mean absolute percent error. Moisture diffusivity was calculated from the diffusion equation for an infinite cylinder using the slope method. The rate of drying was higher for the pre-treatments involving freezing. The sample which was blanched only did not show any improvement in drying rate; in fact, a longer drying time resulted due to water absorption during blanching. There was no change in the rate for the chilled sample compared with the control. While all models closely fitted the drying data, the simple model showed the greatest deviation from the experimental results. The two-term exponential model was found to be the best model for describing the drying curves of bananas because its parameters better represent the physical characteristics of the drying process. Moisture diffusivities of bananas were in the range 4.3-13.2 × 10^-10 m^2 s^-1. (C) 2002 Published by Elsevier Science Ltd.
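Fitting the Page model, one of the three thin-layer models evaluated above, can be sketched as a simple nonlinear least-squares fit. The drying-curve values below are synthetic and the parameter values hypothetical, not the experimental banana data:

```python
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# synthetic drying curve (illustrative values only)
t = np.linspace(0.1, 10, 30)          # drying time, h
mr_obs = page(t, 0.25, 1.3)           # noiseless for the sketch
(k, n), _ = curve_fit(page, t, mr_obs, p0=[0.1, 1.0])
rmse = np.sqrt(np.mean((page(t, k, n) - mr_obs) ** 2))
```

The same pattern (a model function plus `curve_fit` and an error measure such as RMSE) applies to the simple and two-term exponential models, which differ only in the form of the model function.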

Relevance: 30.00%

Abstract:

Evaluation of the performance of the APACHE III (Acute Physiology and Chronic Health Evaluation) ICU (intensive care unit) and hospital mortality models at the Princess Alexandra Hospital, Brisbane is reported. Prospective collection of demographic, diagnostic, physiological, laboratory, admission and discharge data of 5681 consecutive eligible admissions (1 January 1995 to 1 January 2000) was conducted at the Princess Alexandra Hospital, a metropolitan Australian tertiary referral medical/surgical adult ICU. ROC (receiver operating characteristic) curve areas for the APACHE III ICU mortality and hospital mortality models demonstrated excellent discrimination. Observed ICU mortality (9.1%) was significantly overestimated by the APACHE III model adjusted for hospital characteristics (10.1%), but did not significantly differ from the prediction of the generic APACHE III model (8.6%). In contrast, observed hospital mortality (14.8%) agreed well with the prediction of the APACHE III model adjusted for hospital characteristics (14.6%), but was significantly underestimated by the unadjusted APACHE III model (13.2%). Calibration curves and goodness-of-fit analysis using Hosmer-Lemeshow statistics demonstrated that calibration was good with the unadjusted APACHE III ICU mortality model, and with the APACHE III hospital mortality model adjusted for hospital characteristics. Post hoc analysis revealed a declining annual SMR (standardized mortality ratio) during the study period. This trend was present in each of the non-surgical, emergency and elective surgical diagnostic groups, and the change was temporally related to increased specialist staffing levels. This study demonstrates that the APACHE III model performs well on independent assessment in an Australian hospital. Changes observed in annual SMR using such a validated model support a hypothesis of improved survival outcomes over 1995-1999.
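The two headline statistics above, ROC curve area and SMR, can be computed directly from predicted probabilities and outcomes. This is a generic illustration, not the study's analysis code; the toy inputs are invented:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC curve area via the Mann-Whitney U statistic.

    The AUC equals the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case
    (ties counted as half).
    """
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def smr(observed_deaths, predicted_probs):
    """Standardized mortality ratio: observed deaths / expected deaths,
    where expected deaths is the sum of model-predicted probabilities."""
    return observed_deaths / np.sum(predicted_probs)
```

An SMR below 1 means fewer deaths occurred than the model predicted, which is how the declining annual SMR in the study is read as evidence of improving outcomes.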

Relevance: 30.00%

Abstract:

We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. By working in this reduced space, it allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
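The parameter-reduction argument above can be made concrete by counting free parameters. The function below uses the usual conventions for mixtures of factor analyzers (per-component loadings plus a diagonal uniquenesses matrix, less the rotational indeterminacy of the loadings); these conventions are assumed here, not quoted from the paper:

```python
def mixture_param_counts(g, p, q):
    """Free parameters in a g-component, p-dimensional normal mixture.

    full : unrestricted covariance matrix per component
    mfa  : mixture of factor analyzers with a q-dimensional latent
           factor space (covariance = loadings @ loadings.T + diagonal),
           minus q(q-1)/2 rotational constraints per component
    """
    mixing_and_means = (g - 1) + g * p
    full = mixing_and_means + g * p * (p + 1) // 2
    mfa = mixing_and_means + g * (p * q - q * (q - 1) // 2 + p)
    return full, mfa

# e.g. 2 components, 50 genes, 3 latent factors (hypothetical sizes)
full, mfa = mixture_param_counts(g=2, p=50, q=3)
```

For high-dimensional data such as gene expressions, where p is large relative to n, the factor-analyzer count grows linearly in p rather than quadratically, which is what makes the model fittable at all.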

Relevance: 30.00%

Abstract:

This paper investigates the robustness of a range of short-term interest rate models. We examine the robustness of these models over different data sets, time periods, sampling frequencies, and estimation techniques. We examine a range of popular one-factor models that allow the conditional mean (drift) and conditional variance (diffusion) to be functions of the current short rate. We find that parameter estimates are highly sensitive to all of these factors in the eight countries that we examine. Since parameter estimates are not robust, these models should be used with caution in practice.
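A minimal illustration of estimating a one-factor short-rate model from discretely sampled data, using the Vasicek specification (the simplest member of the family, with linear drift and constant diffusion) and an Euler discretization. The parameter values and the OLS estimator are the sketch's own choices, not the paper's method:

```python
import numpy as np

def simulate_vasicek(r0, kappa, theta, sigma, dt, n, rng):
    """Euler-Maruyama path of dr = kappa*(theta - r)*dt + sigma*dW."""
    z = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
    r = np.empty(n)
    r[0] = r0
    for i in range(1, n):
        r[i] = r[i - 1] + kappa * (theta - r[i - 1]) * dt + z[i - 1]
    return r

def ols_drift(r, dt):
    """Estimate the linear drift by regressing increments on the lagged
    rate: dr/dt = a + b*r + noise, so kappa = -b and theta = -a/b."""
    dr = np.diff(r) / dt
    X = np.column_stack([np.ones(len(r) - 1), r[:-1]])
    (a, b), *_ = np.linalg.lstsq(X, dr, rcond=None)
    return -b, -a / b
```

Re-running this with different sample sizes, sampling intervals dt, or subperiods shows how unstable the drift estimates are, the kind of sensitivity the paper documents across data sets and frequencies.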

Relevance: 30.00%

Abstract:

Measurement of exchange of substances between blood and tissue has been a long-lasting challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of the positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of earlier microvascular research in the compartmental models, which have become the standard model by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with a sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow. (C) 2003 Elsevier Ltd. All rights reserved.
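The standard one-tissue compartment model that the discussion above starts from can be simulated directly. The rate constants, blood volume fraction, and arterial input function below are toy values chosen for illustration; the paper's microvascular models replace the vascular term with a spatial average over the capillary:

```python
import numpy as np

def one_tissue_tac(t, K1, k2, Ca, vb=0.05):
    """Standard one-tissue compartment model with a vascular term:

        dCt/dt = K1*Ca(t) - k2*Ct
        Cpet   = (1 - vb)*Ct + vb*Ca

    Euler integration on the (uniform) time grid t. Note that Cpet uses
    the organ inlet concentration Ca for the whole vascular compartment,
    the simplification the microvascular models revise.
    """
    dt = t[1] - t[0]
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        ct[i] = ct[i - 1] + dt * (K1 * Ca[i - 1] - k2 * ct[i - 1])
    return (1 - vb) * ct + vb * Ca

t = np.linspace(0, 60, 601)       # minutes
ca = t * np.exp(-t / 2)           # toy arterial input function
tac = one_tissue_tac(t, K1=0.1, k2=0.05, Ca=ca)
```

The returned time-activity curve is what a PET scanner would register for this region; fitting K1 and k2 to measured curves is the standard compartmental analysis the paper critiques.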

Relevance: 30.00%

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
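A heavily simplified special case can illustrate how the two ingredients of the mixture formulation (the probability of each failure type, and the failure-time distribution conditional on type) combine. The sketch below is fully parametric with exponential baselines, no covariates, and no censoring, so the maximum-likelihood estimates factorize into closed form; all numbers are synthetic, and this is far simpler than the paper's semi-parametric ECM method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
pi = 0.3                               # P(failure is of type 1)
lam = np.array([0.5, 0.2])             # exponential hazard per failure type

# simulate: first draw the failure type, then the time given the type
cause = rng.random(n) < pi             # True -> type 1
T = rng.exponential(1 / np.where(cause, lam[0], lam[1]))

# with no censoring the mixture likelihood factorizes, so the MLEs are:
pi_hat = cause.mean()
lam1_hat = cause.sum() / T[cause].sum()
lam2_hat = (~cause).sum() / T[~cause].sum()
```

Censoring breaks this factorization (a censored subject's type is unknown), which is what makes an ECM algorithm and, for unspecified baseline hazards, a semi-parametric treatment necessary.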