964 results for variance component models
Abstract:
This work concerns the influence of industrialized agriculture in the tropics on precipitation chemistry. A total of 264 rain events were sampled using a wet-only collector in central São Paulo State, Brazil, between January 2003 and July 2007. Electroneutrality balance calculations (considering H⁺, K⁺, Na⁺, NH₄⁺, Ca²⁺, Mg²⁺, Cl⁻, NO₃⁻, SO₄²⁻, F⁻, PO₄³⁻, H₃CCOO⁻, HCOO⁻, C₂O₄²⁻ and HCO₃⁻) showed that there was an excess of cations (~15%), which was attributed to the presence of unmeasured organic anion species originating from biomass burning and biogenic emissions. On average, the three ions NH₄⁺, NO₃⁻ and H⁺ were responsible for >55% of the total ion concentrations in the rainwater samples. Concentrations (except that of H⁺) were significantly higher (t-test; P = 0.05), by between two- and six-fold depending on the species, during the winter sugar cane harvest period, due to the practice of pre-harvest burning of the crop. Principal component analysis showed that three components could explain 88% of the variance for measurements made throughout the year: PC1 (52%; biomass burning and soil dust resuspension), PC2 (26%; secondary aerosols) and PC3 (10%; road transport emissions). Differences between harvest and non-harvest periods appeared to be mainly due to an increased relative importance of road transport/industrial emissions during the summer (non-harvest) period. The volume-weighted mean (VWM) concentrations of ammonium (23.4 µmol L⁻¹) and nitrate (17.5 µmol L⁻¹) in rainwater samples collected during the harvest period were similar to those found in rainwater from São Paulo city, which emphasizes the importance of including rural agro-industrial emissions in regional-scale atmospheric chemistry and transport models.
Since there was evidence of a biomass burning source throughout the year, it appears that rainwater composition will continue to be affected by vegetation fires, even after sugar cane burning is phased out as envisaged by recent São Paulo State legislation. © 2011 Elsevier Ltd. All rights reserved.
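The volume-weighted mean (VWM) statistic reported in this abstract can be sketched in a few lines; the event volumes and concentrations below are invented for illustration, not measurements from the study:

```python
# Hypothetical rain events: (rainfall volume in mm, NH4+ in µmol/L, NO3- in µmol/L).
# Values are illustrative only, not data from the study.
events = [(12.0, 30.0, 22.0), (5.0, 40.0, 30.0), (25.0, 15.0, 11.0)]

def vwm(events, idx):
    # volume-weighted mean: sum(c_i * V_i) / sum(V_i)
    return sum(e[0] * e[idx] for e in events) / sum(e[0] for e in events)

nh4_vwm = vwm(events, 1)  # the large, dilute event pulls the mean down
no3_vwm = vwm(events, 2)
```

Weighting by rainfall volume down-weights small, concentrated events, which is why VWM rather than a simple mean is the standard summary for rainwater chemistry.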
Abstract:
PHWAT is a new model that couples a geochemical reaction model (PHREEQC-2) with a density-dependent groundwater flow and solute transport model (SEAWAT) using the split-operator approach. PHWAT was developed to simulate multi-component reactive transport in variable-density groundwater flow. Fluid density in PHWAT depends not only on the concentration of a single species, as in SEAWAT, but also on the concentrations of other dissolved chemicals that can be subject to reactive processes. Simulation results of PHWAT and PHREEQC-2 were compared in their predictions of effluent concentration from a column experiment. Both models produced identical results, showing that PHWAT has correctly coupled the sub-packages. PHWAT was then applied to the simulation of a tank experiment in which seawater intrusion was accompanied by cation exchange. The density dependence of the intrusion and the snow-plough effect in the breakthrough curves were reflected in the model simulations, which were in good agreement with the measured breakthrough data. Comparison simulations that, in turn, excluded density effects and reactions allowed us to quantify the marked effect of ignoring these processes. Next, we explored numerical issues involved in the practical application of PHWAT using the example of a dense plume flowing into a tank containing fresh water. It was shown that PHWAT could model physically unstable flow and that numerical instabilities were suppressed. Physical instability developed in the model in accordance with the increase of the modified Rayleigh number for density-dependent flow, in agreement with previous research. © 2004 Elsevier Ltd. All rights reserved.
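The split-operator coupling described above (solve transport first, then apply chemistry to the transported concentrations within each time step) can be sketched on a toy 1-D problem. This is a schematic of the operator-splitting idea only, with invented grid, velocity, and rate parameters, not PHWAT or PHREEQC-2 code:

```python
import math

def transport_step(c, v, dx, dt):
    # explicit upwind advection; assumes v >= 0 and CFL = v*dt/dx <= 1,
    # with a zero-concentration inflow boundary on the left
    cfl = v * dt / dx
    new = [c[0] * (1.0 - cfl)]
    for i in range(1, len(c)):
        new.append(c[i] - cfl * (c[i] - c[i - 1]))
    return new

def reaction_step(c, k, dt):
    # exact solution of dc/dt = -k*c over one step; stands in for the
    # geochemical reaction operator in the splitting scheme
    f = math.exp(-k * dt)
    return [ci * f for ci in c]

n, dx, dt, v, k = 100, 1.0, 0.5, 1.0, 0.05
c = [1.0] * 10 + [0.0] * (n - 10)   # initial solute pulse in the first 10 cells
for _ in range(50):                  # split operator: transport, then react
    c = reaction_step(transport_step(c, v, dx, dt), k, dt)
```

Because each operator is solved with its own method per step, a full reaction package can replace `reaction_step` without touching the transport solver, which is the practical appeal of the split-operator approach.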
Abstract:
In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimating the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically through application to three real data sets and via a small-scale Monte Carlo experiment.
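A minimal sketch of the bootstrap approach to standard errors of component means, assuming a two-component univariate normal mixture fitted by a bare-bones EM; the data, initialisation, and bootstrap size are invented for illustration and are not the paper's implementation:

```python
import math, random

def em_two_normals(x, iters=80):
    # minimal EM for a two-component univariate normal mixture;
    # initialisation assumes two reasonably separated groups
    xs = sorted(x)
    m = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]
    s, w = [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        r = []
        for xi in x:  # E-step: posterior responsibilities
            p = [w[k] / (s[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((xi - m[k]) / s[k]) ** 2) for k in (0, 1)]
            t = p[0] + p[1]
            r.append([p[0] / t, p[1] / t])
        for k in (0, 1):  # M-step: weights, means, standard deviations
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(x)
            m[k] = sum(ri[k] * xi for ri, xi in zip(r, x)) / nk
            s[k] = max(math.sqrt(
                sum(ri[k] * (xi - m[k]) ** 2 for ri, xi in zip(r, x)) / nk), 1e-6)
    return sorted(m)  # sort means to avoid label switching across refits

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(100)] + \
       [random.gauss(5.0, 1.0) for _ in range(100)]
fit = em_two_normals(data)

# nonparametric bootstrap: refit on resamples, SD of each component mean is its SE
B = 20
boot = [em_two_normals([random.choice(data) for _ in data]) for _ in range(B)]
ses = []
for k in (0, 1):
    mk = [b[k] for b in boot]
    mu = sum(mk) / B
    ses.append(math.sqrt(sum((v - mu) ** 2 for v in mk) / (B - 1)))
```

In practice B would be in the hundreds, which is exactly the computational cost the abstract contrasts with the cheaper information-based method.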
Abstract:
Recent advances in computer technology have made it possible to create virtual plants by simulating the details of structural development of individual plants. Software has been developed that processes plant models expressed in a special-purpose mini-language based on the Lindenmayer system formalism. These models can be extended from their architectural basis to capture plant physiology by integrating them with crop models, which estimate biomass production as a consequence of environmental inputs. Through this process, virtual plants will gain the ability to react to broad environmental conditions, while crop models will gain a visualisation component. This integration requires the resolution of the fundamentally different time scales underlying the approaches. Architectural models are usually based on physiological time; each time step encompasses the same amount of development in the plant, without regard to the passage of real time. In contrast, physiological models are based in real time; the amount of development in a time step is dependent on environmental conditions during the period. This paper provides a background on the plant modelling language, then describes how widely used concepts of thermal time can be implemented to resolve these time scale differences. The process is illustrated using a case study. © 1997 Elsevier Science Ltd.
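The thermal-time mechanism described here (accumulate degree-days from daily temperatures, and advance plant development by one architectural step per fixed physiological quantum) can be sketched as follows; the base temperature, quantum, and daily temperatures are hypothetical:

```python
def thermal_time(tmax, tmin, tbase=8.0):
    # daily thermal time in degree-days, truncated at zero below the base
    return max(0.0, (tmax + tmin) / 2.0 - tbase)

# hypothetical daily max/min temperatures (degrees C)
days = [(28, 16), (30, 18), (12, 6), (26, 14), (31, 19)]

# advance the architectural model one developmental step per 20 degree-days,
# so calendar days contribute unequal amounts of "physiological time"
quantum, accumulated, steps = 20.0, 0.0, 0
for tmax, tmin in days:
    accumulated += thermal_time(tmax, tmin)
    while accumulated >= quantum:
        accumulated -= quantum
        steps += 1  # e.g. apply one L-system production step here
```

The cool day contributes almost no development, which is exactly how a real-time crop model driven by weather can pace a physiological-time architectural model.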
Abstract:
Background/Aims: Statistical analysis of age-at-onset involving family data is particularly complicated because there is a correlation pattern that needs to be modeled and also because there are measurements that are censored. In this paper, our main purpose was to evaluate the effect of genetic and shared family environmental factors on the age-at-onset of three cardiovascular risk factors: hypertension, diabetes and high cholesterol. Methods: The mixed-effects Cox model proposed by Pankratz et al. [2005] was used to analyze the data from 81 families, involving 1,675 individuals from the village of Baependi, in the state of Minas Gerais, Brazil. Results: The analyses performed showed that the polygenic effect plays a greater role than the shared family environmental effect in explaining the variability of the age-at-onset of hypertension, diabetes and high cholesterol. The model that simultaneously evaluated both effects indicated that some individuals may have a risk of hypertension, due to polygenic effects, 130% higher than the overall average risk for the entire sample. For diabetes and high cholesterol, the risks of some individuals were 115 and 45% higher, respectively, than the overall average risk for the entire population. Conclusions: The results showed evidence of significant polygenic effects, indicating that age-at-onset is a useful trait for gene mapping of the common complex diseases analyzed. In addition, we found that the polygenic random component might absorb the effects of some covariates usually considered in risk evaluation, such as gender, age and BMI. Copyright © 2008 S. Karger AG, Basel
Abstract:
Records of 18,770 Nelore animals, born from 1975 to 2002, in 8 herds participating in the Nelore Cattle Breeding Program, were analyzed to estimate genetic parameters for mature BW. The mature BW were analyzed as a single BW taken closest to 4.5 yr of age for each cow in the data file, considering BW starting from 2 (W2Y_S), 3 (W3Y_S), or 4 (W4Y_S) yr of age or as repeated records, including all BW starting from 2 (W2Y_R), 3 (W3Y_R), or 4 (W4Y_R) yr of age. The variance components were estimated by restricted maximum likelihood, fitting univariate and bivariate animal models, including weaning weight. The heritability estimates were 0.29, 0.34, 0.36, 0.41, 0.44, and 0.46 for W2Y_S, W3Y_S, W4Y_S, W2Y_R, W3Y_R, and W4Y_R, respectively. The repeatability estimates for W2Y_R, W3Y_R, and W4Y_R were 0.59, 0.64, and 0.72, respectively. Larger accuracy values associated with the EBV were obtained in the repeated records models. The results indicated the bivariate repeated records model as the most appropriate for analyzing mature BW.
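The heritability and repeatability statistics reported above are ratios of variance components from the REML fit; a sketch with invented component values (not the study's estimates):

```python
# Hypothetical variance components from a repeated-records animal model.
# Numbers are illustrative only, not the estimates reported in the study.
var_a  = 45.0  # additive genetic variance
var_pe = 30.0  # permanent environmental variance (repeated records)
var_e  = 50.0  # residual variance

total = var_a + var_pe + var_e
heritability  = var_a / total             # h^2
repeatability = (var_a + var_pe) / total  # upper bound on h^2
```

Repeatability always bounds heritability from above, since the permanent environmental variance is counted in the numerator of the former but not the latter.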
Abstract:
The Eysenck Personality Questionnaire-Revised (EPQ-R), the Eysenck Personality Profiler Short Version (EPP-S), and the Big Five Inventory (BFI-V4a) were administered to 135 postgraduate students of business in Pakistan. Whilst the Extraversion and Neuroticism scales from the three questionnaires were highly correlated, it was found that Agreeableness was most highly correlated with Psychoticism in the EPQ-R and Conscientiousness was most highly correlated with Psychoticism in the EPP-S. Principal component analyses with varimax rotation were carried out. The analyses generally suggested that the five-factor model rather than the three-factor model was more robust and better for interpretation of all the higher-order scales of the EPQ-R, EPP-S, and BFI-V4a in the Pakistani data. Results show that the superiority of the five-factor solution results from the inclusion of a broader variety of personality scales in the input data, whereas Eysenck's three-factor solution seems to be best when a less complete but possibly more important set of variables is input. © 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The P3(00) event-related potential (ERP) component is widely used as a measure of cognitive functioning and provides a sensitive electrophysiological index of the attentional and working memory demands of a task. This study investigated what proportion of the variance in the amplitude and latency of the P3, elicited in a delayed response working memory task, could be attributed to genetic factors. In 335 adolescent twin pairs and 48 siblings, the amplitude and latency of the P3 were examined at frontal, central, and parietal sites. Additive genetic factors accounted for 48% to 61% of the variance in P3 amplitude. Approximately one-third of the genetic variation at frontal sites was mediated by a common genetic factor that also influenced the genetic variation at parietal and central sites. Familial resemblance in P3 latency was due to genetic influence that accounted for 44% to 50% of the variance. Genetic covariance in P3 latency across sites was substantial, with a large part of the variance found at parietal, central, and frontal sites attributed to a common genetic factor. The findings provide further evidence that the P3 is a promising phenotype of neural activity of the brain and has the potential to be used in linkage and association analysis in the search for quantitative trait loci (QTLs) influencing cognition.
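The decomposition of trait variance into genetic and environmental parts used in twin studies like this one can be illustrated with the classical Falconer approximation from MZ and DZ twin correlations; this is a textbook simplification, not the structural modelling the study itself applies, and the correlations below are hypothetical:

```python
def falconer_ace(r_mz, r_dz):
    # classical Falconer decomposition from twin-pair correlations
    a2 = 2.0 * (r_mz - r_dz)  # additive genetic variance share (A)
    c2 = 2.0 * r_dz - r_mz    # shared environment share (C)
    e2 = 1.0 - r_mz           # unique environment + error share (E)
    return a2, c2, e2

# hypothetical twin correlations for an ERP measure
a2, c2, e2 = falconer_ace(r_mz=0.55, r_dz=0.30)
```

Full twin modelling replaces these point formulas with maximum-likelihood fits that respect parameter bounds and allow multivariate (cross-site) genetic factors, as in the abstract.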
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. These models assume different mass transfer mechanisms within the porous carbon particle. They are: (a) dual pore and surface diffusion (MSD), (b) macropore, surface, and micropore diffusion (MSMD), (c) macropore, surface, and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore, micropore diffusion (BM) models. These models are discriminated using the single-component kinetic data of ethane and propane as well as the multicomponent kinetic data of their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is considered in all models to account for the heterogeneity of the system. It is found that, in general, the models that assume a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
Abstract:
Measurements of mean and fluctuating velocity and temperature and their self- and cross-products to the third order are presented for a heated axisymmetric air jet. Froude numbers in the range of 3500-13,190, Reynolds numbers in the range of 3470-8500 and non-dimensional streamwise distances, X*, from 0.27 to 1.98 are covered by the data. It was found that turbulence intensity decreases for the heated jet in the region between the inertia-dominated and the buoyancy-dominated regions, which is contrary to findings with helium jets mixing with ambient air to produce density fluctuations. The effects of heating on the turbulent kinetic energy budget and the temperature variance budget show small differences for the inertia-dominated region and the intermediate region, which help to explain the transition process to the far-field plume region. Constants are evaluated for the isotropic eddy diffusivity and generalised gradient hypothesis models as well as the scalar variance model. No significant effect of heating on the dissipation time-scale ratio was found. A novel wire array with an inclined cold wire was used. Measurements obtained with this probe are found to lead to asymmetries in some of the higher-order products. Further investigation suggested that the asymmetries are attributable to an as yet unreported interference effect produced by the leading prong of the inclined temperature wire. The effect may also have implications for inclined velocity wires which contain a temperature component when used in heated flows. © 2002 Elsevier Science Inc. All rights reserved.
Abstract:
Latitudinal clines provide natural systems that may allow the effect of natural selection on the genetic variance to be determined. Ten clinal populations of Drosophila serrata collected from the eastern coast of Australia were used to examine clinal patterns in the trait mean and genetic variance of the life-history trait egg-to-adult development time. Development time significantly lengthened from tropical areas to temperate areas. The additive genetic variance for development time in each population was not associated with latitude but was associated with the population mean development time. Additive genetic variance tended to be larger in populations with more extreme development times and appeared to be consistent with allele frequency change. In contrast, the nonadditive genetic variance was not associated with the population mean but was associated with latitude. Levels of nonadditive genetic variance were greatest in the region of the cline where the gradient in the change in mean was greatest, consistent with Barton's (1999) conjecture that the generation of linkage disequilibrium may become an important component of the genetic variance in systems with a spatially varying optimum.
Abstract:
We compare Bayesian methodology utilizing the freeware BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling as implemented by BUGS to fit subject-specific Bayesian generalized linear models, where the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty in detecting the correct model when the additive genetic effect was low (between 10 and 20%) or of moderate range (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%) even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years, who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented symptoms associated with osteoarthritis occurring in joints of the hand.
Abstract:
This paper investigates the robustness of a range of short-term interest rate models. We examine the robustness of these models over different data sets, time periods, sampling frequencies, and estimation techniques. We examine a range of popular one-factor models that allow the conditional mean (drift) and conditional variance (diffusion) to be functions of the current short rate. We find that parameter estimates are highly sensitive to all of these factors in the eight countries that we examine. Since parameter estimates are not robust, these models should be used with caution in practice.
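The fragility of drift estimates in one-factor short-rate models can be illustrated by fitting an Euler-discretised model dr = (a + b·r)dt + σ√r·dW to a simulated path; all parameters below are invented. Note how much better determined the diffusion level is than the drift, which is the kind of sensitivity the paper documents:

```python
import math, random

random.seed(0)
dt, n = 1.0 / 252.0, 20000
kappa, theta, sigma = 0.5, 0.05, 0.1  # "true" square-root-process parameters

# simulate an Euler-discretised short rate: dr = kappa*(theta - r)dt + sigma*sqrt(r)dW
r = [0.05]
for _ in range(n):
    rt = max(r[-1], 1e-8)
    r.append(rt + kappa * (theta - rt) * dt
             + sigma * math.sqrt(rt * dt) * random.gauss(0.0, 1.0))

# OLS of dr on r recovers the linear drift a + b*r (b/dt estimates -kappa)
x, y = r[:-1], [r[i + 1] - r[i] for i in range(n)]
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# diffusion level from normalised squared residuals: far better determined
# than the drift parameters, even over a very long sample
sigma_hat = math.sqrt(sum((yi - (a + b * xi)) ** 2 / (max(xi, 1e-8) * dt)
                          for xi, yi in zip(x, y)) / n)
```

Re-running this with a different seed, sample length, or dt moves the drift estimates substantially while sigma_hat barely changes, mirroring the paper's finding that drift specifications are the unstable part.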
Abstract:
Blast fragmentation can have a significant impact on the profitability of a mine. An optimum run of mine (ROM) size distribution is required to maximise the performance of downstream processes. If this fragmentation size distribution can be modelled and controlled, the operation will have made a significant advancement towards improving its performance. Blast fragmentation modelling is an important step in Mine to Mill™ optimisation. It allows the estimation of blast fragmentation distributions for a number of different rock mass, blast geometry, and explosive parameters. These distributions can then be modelled in downstream mining and milling processes to determine the optimum blast design. When a blast hole is detonated, rock breakage occurs in two different stress regions: compressive and tensile. In the first region, compressive stress waves form a 'crushed zone' directly adjacent to the blast hole. The second region, termed the 'cracked zone', occurs outside the crushed zone. The widely used Kuz-Ram model does not recognise these two blast regions. In the Kuz-Ram model the mean fragment size from the blast is approximated and is then used to estimate the remaining size distribution. Experience has shown that this model predicts the coarse end reasonably accurately, but it can significantly underestimate the amount of fines generated. As part of the Australian Mineral Industries Research Association (AMIRA) P483A Mine to Mill™ project, the Two-Component Model (TCM) and Crush Zone Model (CZM), developed by the Julius Kruttschnitt Mineral Research Centre (JKMRC), were compared and evaluated against measured ROM fragmentation distributions. An important criterion for this comparison was the variation of model results from measured ROM in the fine to intermediate section (1-100 mm) of the fragmentation curve. This region of the distribution is important for Mine to Mill™ optimisation.
The comparison of modelled and Split ROM fragmentation distributions has been conducted in harder ores (UCS greater than 80 MPa). Further work involves modelling softer ores. The comparisons will be continued with future site surveys to increase confidence in the comparison of the CZM and TCM to Split results. Stochastic fragmentation modelling will then be conducted to take into account the variation of input parameters. A window of possible fragmentation distributions can then be compared to those obtained by Split. Following this work, an improved fragmentation model will be developed in response to these findings.
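The Kuz-Ram approach discussed above pairs an estimated mean fragment size with a Rosin-Rammler size distribution; a sketch with hypothetical mean-size and uniformity values shows how little mass this form assigns to the fines end, which is why it tends to underestimate fines:

```python
import math

def rosin_rammler_passing(x, x50, n):
    # Rosin-Rammler cumulative distribution as used in Kuz-Ram style models:
    # fraction passing screen size x, with x50 the mean fragment size and
    # n the uniformity index
    return 1.0 - math.exp(-0.693 * (x / x50) ** n)

x50, n = 250.0, 1.2  # mm; hypothetical model outputs, not site data
fines_10mm = rosin_rammler_passing(10.0, x50, n)   # in the 1-100 mm region
passing_x50 = rosin_rammler_passing(x50, x50, n)   # ~0.5 by construction
```

Models such as the CZM and TCM add a separate fine (crushed-zone) component precisely because a single Rosin-Rammler curve anchored at the coarse end leaves the 1-100 mm region underpredicted.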
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
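The linear mixing model underlying unmixing, together with the non-negativity and sum-to-one abundance constraints that DECA enforces, can be sketched with hypothetical endmember signatures:

```python
# Linear mixing: pixel = M @ a, where the columns of M are endmember spectra
# and a holds the abundance fractions. Signatures and abundances below are
# hypothetical, for illustration only.
endmembers = [  # four bands x three endmembers (each row is one band)
    [0.10, 0.60, 0.30],
    [0.20, 0.55, 0.35],
    [0.70, 0.20, 0.40],
    [0.80, 0.15, 0.45],
]
abundances = [0.5, 0.3, 0.2]  # the constraints DECA enforces:
assert min(abundances) >= 0.0                  # non-negativity
assert abs(sum(abundances) - 1.0) < 1e-12      # constant (unit) sum

# forward model: each band of the observed pixel is a convex combination
pixel = [sum(m * a for m, a in zip(row, abundances)) for row in endmembers]
```

Unmixing inverts this forward model: given many pixels, recover the endmember matrix and per-pixel abundances subject to the same two constraints, which DECA builds in via Dirichlet densities on the abundances.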