864 results for Hierarchical sampling


Relevance: 20.00%

Abstract:

We present the first results of a study investigating the processes that control the concentrations and sources of Pb and particulate matter in the atmosphere of São Paulo City, Brazil. Aerosols were collected with high temporal resolution (3 hours) during a four-day period in July 2005. The highest Pb concentrations measured coincided with large fireworks during celebration events associated with heavy traffic. Our high-resolution data highlight the impact that a single transient event can have on air quality, even in a megacity. Under meteorological conditions not conducive to pollutant dispersion, Pb and particulate matter accumulated during the night, leading to the highest concentrations in aerosols collected early in the morning of the following day. The stable isotopes of Pb suggest that emissions from traffic remain an important source of Pb in São Paulo City, owing to the large vehicle fleet, despite the low Pb concentrations in fuels. (C) 2010 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other from the study subjects' response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
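The shrinkage behind the usual mixed model BLUP can be illustrated with a short simulation. This is a generic sketch of the estimator (assumed-known variance components, balanced design, made-up parameter values), not the paper's finite population comparison:

```python
import numpy as np

# Sketch: the mixed-model BLUP of a subject's latent value shrinks the
# subject mean toward the grand mean by a reliability factor that depends
# on the variance components (here assumed known).
rng = np.random.default_rng(0)

n_subjects, n_reps = 200, 4
sigma_b2, sigma_e2 = 1.0, 2.0          # latent and response-error variances (assumed)

latent = rng.normal(0.0, np.sqrt(sigma_b2), n_subjects)
y = latent[:, None] + rng.normal(0.0, np.sqrt(sigma_e2), (n_subjects, n_reps))

subject_means = y.mean(axis=1)
grand_mean = y.mean()

# BLUP shrinkage factor: reliability of the subject mean
k = sigma_b2 / (sigma_b2 + sigma_e2 / n_reps)
blup = grand_mean + k * (subject_means - grand_mean)

mse_raw = np.mean((subject_means - latent) ** 2)
mse_blup = np.mean((blup - latent) ** 2)
print(f"shrinkage factor k = {k:.3f}")
print(f"MSE of raw subject mean = {mse_raw:.3f}")
print(f"MSE of BLUP             = {mse_blup:.3f}")
```

In this balanced setting the BLUP's mean squared error is typically smaller than that of the raw subject mean, which is the sense in which the abstract calls it "often (but not always) more accurate".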

Relevance: 20.00%

Abstract:

Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moment estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (C) 2007 Elsevier B.V. All rights reserved.
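The empirical-predictor idea — plugging method-of-moments variance component estimates into the predictor — can be sketched under a simple balanced one-way model. The parameter values are illustrative and this is the textbook ANOVA-estimator construction, not the paper's simulation design:

```python
import numpy as np

# Sketch: estimate variance components by the method of moments (one-way
# ANOVA estimators), then plug them into the shrinkage predictor of a
# cluster's latent mean ("empirical" predictor).
rng = np.random.default_rng(1)

m, n = 50, 8                       # clusters, units per cluster
sigma_b2, sigma_e2 = 1.5, 1.0      # true (unknown in practice) components

b = rng.normal(0.0, np.sqrt(sigma_b2), m)
y = b[:, None] + rng.normal(0.0, np.sqrt(sigma_e2), (m, n))

cluster_means = y.mean(axis=1)
grand_mean = y.mean()

# Method-of-moments (ANOVA) estimators of the variance components
msw = np.sum((y - cluster_means[:, None]) ** 2) / (m * (n - 1))
msb = n * np.sum((cluster_means - grand_mean) ** 2) / (m - 1)
sigma_e2_hat = msw
sigma_b2_hat = max((msb - msw) / n, 0.0)   # truncate negative estimates at zero

# Empirical predictor: plug the estimates into the shrinkage factor
k_hat = sigma_b2_hat / (sigma_b2_hat + sigma_e2_hat / n)
predictor = grand_mean + k_hat * (cluster_means - grand_mean)
print(f"sigma_e2_hat = {sigma_e2_hat:.2f}, sigma_b2_hat = {sigma_b2_hat:.2f}, k_hat = {k_hat:.2f}")
```

The truncation at zero is one standard way to handle the fact that the moment estimator of the between-cluster component can be negative in small samples.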

Relevance: 20.00%

Abstract:

This work presents a Bayesian semiparametric approach for dealing with regression models where the covariate is measured with error. Given that (1) the assumption of normal errors is very restrictive and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only symmetry of the errors (admitting unknown kurtosis). In this sense, the main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models by using Dirichlet process priors in both dependent and independent situations. Conditional posterior distributions are obtained, allowing the use of Markov chain Monte Carlo (MCMC) to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. Furthermore, an analysis of a real data set is reported to illustrate the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically through the posterior densities of the coefficients. (C) 2009 Elsevier Inc. All rights reserved.
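As background, the Dirichlet process prior underlying the semiparametric extension can be illustrated with the standard stick-breaking construction. The base measure, concentration parameter, and truncation level below are arbitrary choices for illustration, not the paper's specification:

```python
import numpy as np

# Stick-breaking sketch of a (truncated) Dirichlet process: weights come
# from Beta(1, alpha) "sticks", atoms are drawn from the base measure G0.
rng = np.random.default_rng(5)

alpha, n_atoms = 2.0, 500                   # concentration and truncation (assumed)
betas = rng.beta(1.0, alpha, n_atoms)
# weight_j = beta_j * prod_{i<j} (1 - beta_i)
weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
atoms = rng.standard_normal(n_atoms)        # base measure G0 = N(0, 1), symmetric

# The truncated weights sum to 1 - prod(1 - beta_i), which approaches 1
# rapidly as the truncation level grows.
print(f"total stick mass = {weights.sum():.4f}")
```

A draw from the DP is then the discrete distribution placing mass `weights[j]` on `atoms[j]`; in the paper's setting such draws serve as a flexible prior for the (symmetric) error distribution.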

Relevance: 20.00%

Abstract:

The analytical determination of atmospheric pollutants still presents challenges due to the low concentrations involved (frequently in the mu g m(-3) range) and their variation with sampling site and time. In this work, a capillary membrane diffusion scrubber (CMDS) was scaled down to match capillary electrophoresis (CE), a quick separation technique that requires no more than a few nanoliters of sample and that, when combined with capacitively coupled contactless conductometric detection (C(4)D), is particularly favorable for ionic species that do not absorb in the UV-vis region, like the target analytes formaldehyde, formic acid, acetic acid and ammonium. The CMDS was coaxially assembled inside a PTFE tube and fed with acceptor phase (deionized water for species with a high Henry's constant, such as formaldehyde and the carboxylic acids, or acidic solution for ammonia sampling, with equilibrium displacement to the non-volatile ammonium ion) at a low flow rate (8.3 nL s(-1)), while the sample was aspirated through the annular gap of the concentric tubes at 25 mL s(-1). A second unit, similar in every respect to the CMDS, was operated as a capillary membrane diffusion emitter (CMDE), generating a gas flow with known concentrations of ammonia for the evaluation of the CMDS. The fluids of the system were driven by inexpensive aquarium air pumps, and the collected samples were stored in vials cooled by a Peltier element. Complete protocols were developed for the analysis in air of NH(3), CH(3)COOH, HCOOH and, with a derivatization setup, CH(2)O, by associating CMDS collection with determination by CE-C(4)D. The ammonia concentrations obtained by electrophoresis were checked against the reference spectrophotometric method based on Berthelot's reaction. Sensitivity enhancements of this reference method were achieved by using a modified Berthelot reaction, solenoid micro-pumps for liquid propulsion, and a long optical path cell based on a liquid-core waveguide (LCW). All techniques and methods of this work are in line with green analytical chemistry trends. (C) 2010 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

This paper reports a method for the direct and simultaneous determination of Cr and Mn in alumina by slurry sampling graphite furnace atomic absorption spectrometry (SiS-SIMAAS) using niobium carbide (NbC) as a graphite platform modifier and sodium fluoride (NaF) as a matrix modifier. 350 mu g of Nb were thermally deposited on the platform surface, allowing the formation of NbC (mp 3500 degrees C), which minimizes the reaction between aluminium and the carbon of the pyrolytic platform and improves the graphite tube lifetime to up to 150 heating cycles. A solution of 0.2 mol L(-1) NaF was used as matrix modifier for alumina dissolution as a cryolite-based melt, allowing volatilization during the pyrolysis step. Sample masses (ca. 50 mg) were suspended in 30 ml of 2.0% (v/v) HNO(3), and the slurry was manually homogenized before sampling. Aliquots of 20 mu l of analytical solutions and slurry samples were co-injected into the graphite tube with 20 mu l of the matrix modifier. Under the optimized heating program, the pyrolysis and atomization temperatures were 1300 degrees C and 2400 degrees C, respectively, and a step at 1000 degrees C was included to allow the dissolution of alumina as cryolite. The accuracy of the proposed method was evaluated by the analysis of standard reference materials; the concentrations found presented no statistical differences from the certified values at the 95% confidence level. Limits of detection were 66 ng g(-1) for Cr and 102 ng g(-1) for Mn, and the characteristic masses were 10 and 13 pg for Cr and Mn, respectively.

Relevance: 20.00%

Abstract:

In situ fusion on the boat-type graphite platform has been used as a sample pretreatment for the direct determination of Co, Cr and Mn in Portland cement by solid sampling graphite furnace atomic absorption spectrometry (SS-GF AAS). The 3-field Zeeman technique was adopted for background correction and to decrease the sensitivity during measurements; this strategy allowed working with up to 200 mu g of sample. The in situ fusion was accomplished using 10 mu L of a flux mixture of 4.0% m/v Na(2)CO(3) + 4.0% m/v ZnO + 0.1% m/v Triton (R) X-100 added over the cement sample and heated at 800 degrees C for 20 s. The resulting melt was completely dissolved with 10 mu L of 0.1% m/v HNO(3). Limits of detection were 0.11 mu g g(-1) for Co, 1.1 mu g g(-1) for Cr and 1.9 mu g g(-1) for Mn. The accuracy of the proposed method was evaluated by the analysis of certified reference materials; the values found presented no statistically significant differences from the certified values (Student's t-test, alpha = 0.05). In general, the relative standard deviation was lower than 12% (n = 5). (C) 2009 Elsevier B.V. All rights reserved.
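The accuracy check mentioned above — a one-sample Student's t-test of replicate determinations against a certified value — can be sketched with made-up numbers (the replicate values below are hypothetical, not the paper's data):

```python
import math

# One-sample Student t-test of n = 5 replicate determinations against a
# certified value, at the 0.05 significance level (df = 4).
found = [1.02, 0.95, 1.08, 0.99, 1.01]        # hypothetical replicates (mu g g-1)
certified = 1.00

n = len(found)
mean = sum(found) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in found) / (n - 1))
t = (mean - certified) / (sd / math.sqrt(n))
t_crit = 2.776                                 # two-sided critical value, df = 4, p = 0.05
verdict = "no significant difference" if abs(t) < t_crit else "significant difference"
print(f"t = {t:.2f} vs t_crit = {t_crit}: {verdict}")
```

With these numbers t is about 0.47, well below the critical value, mirroring the abstract's conclusion that the found and certified values do not differ significantly.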

Relevance: 20.00%

Abstract:

Compared to other volatile carbonyl compounds present in outdoor air, formaldehyde (CH2O) is the most toxic, deserving more attention in terms of indoor and outdoor air quality legislation and control. The analytical determination of CH2O in air still presents challenges due to the low concentrations involved (in the sub-ppb range) and their variation with sampling site and time. Of the many available analytical methods for carbonyl compounds, the most widespread is the time-consuming collection in cartridges impregnated with 2,4-dinitrophenylhydrazine followed by analysis of the formed hydrazones by HPLC. The present work proposes the use of polypropylene hollow porous capillary fibers to achieve efficient CH2O collection. The Oxyphan (R) fiber (designed for blood oxygenation) was chosen for this purpose because it presents good mechanical resistance, a high density of very fine pores and a high ratio of collection area to acceptor fluid volume in the tube, all favorable for the development of air sampling apparatus. The collector device consists of a Teflon pipe inside which a bundle of polypropylene microporous capillary membranes was introduced. While the acceptor passes at a low flow rate through the capillaries, the sampled air circulates around the fibers, impelled by a low-flow membrane pump (of the type used for aquarium ventilation). The coupling of this sampling technique with the selective and quantitative determination of CH2O, in the form of hydroxymethanesulfonate (HMS) after derivatization with HSO3-, by capillary electrophoresis with capacitively coupled contactless conductivity detection (CE-C(4)D) enabled the development of a complete analytical protocol for the evaluation of CH2O in air. (C) 2008 Published by Elsevier B.V.

Relevance: 20.00%

Abstract:

A fast and reliable method for the direct determination of iron in sand by solid sampling graphite furnace atomic absorption spectrometry was developed. A Zeeman-effect 3-field background corrector was used to decrease the sensitivity of the spectrometer measurements. This strategy allowed working with up to 200 mu g of sample, thus improving representativeness. Using samples with small particle sizes (1-50 mu m) and adding 5 mu g Pd as chemical modifier, it was possible to obtain suitable calibration curves with aqueous reference solutions. The pyrolysis and atomization temperatures for the optimized heating program were 1400 and 2500 degrees C, respectively. The characteristic mass, based on integrated absorbance, was 56 pg, and the detection limit, calculated from the variability of 20 consecutive measurements of the platform inserted without sample, was 32 pg. The accuracy of the procedure was checked by the analysis of two reference materials (IPT 62 and 63); the determined concentrations were in agreement with the recommended values (95% confidence level). Five sand samples were analyzed, and good agreement (95% confidence level) was observed between the proposed method and conventional flame atomic absorption spectrometry. The relative standard deviations were lower than 25% (n = 5). The tube and boat platform lifetimes were around 1000 and 250 heating cycles, respectively.
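The two figures of merit quoted above can be reproduced in outline: the characteristic mass is the analyte mass yielding 0.0044 A s of integrated absorbance, and the detection limit is conventionally three times the standard deviation of repeated blank firings, converted to mass via the calibration slope. The blank noise level below is an assumed value chosen only so the arithmetic lands near the reported 32 pg:

```python
import numpy as np

# Figures of merit for GF AAS: characteristic mass (m0) and detection
# limit from 20 blank firings of the empty platform.
rng = np.random.default_rng(2)

slope = 0.0044 / 56e-12            # A·s per g, consistent with m0 = 56 pg for Fe
blank_signals = rng.normal(0.0, 8.4e-4, 20)   # 20 blank firings (A·s); noise level assumed

m0 = 0.0044 / slope                           # characteristic mass (g)
lod = 3 * blank_signals.std(ddof=1) / slope   # 3-sigma detection limit (g)
print(f"characteristic mass = {m0 * 1e12:.0f} pg")
print(f"detection limit     = {lod * 1e12:.0f} pg")
```

The characteristic mass comes out at 56 pg by construction, and the simulated detection limit falls in the tens of picograms, in line with the 32 pg reported.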

Relevance: 20.00%

Abstract:

A method using a solid sampling device for the direct determination of Cr and Ni in fresh and used lubricating oils by graphite furnace atomic absorption spectrometry is proposed. The high organic content of the samples was reduced by a digestion step at 400 degrees C in combination with an oxidant mixture of 1.0% (v v(-1)) HNO3 + 15% (v v(-1)) H2O2 + 0.1% (m v(-1)) Triton X-100 for the in situ digestion. The 3-field Zeeman-effect mode allowed spectrometer calibration up to 5 ng of Cr and Ni. The quantification limits were 0.86 mu g g(-1) for Cr and 0.82 mg g(-1) for Ni. The analysis of reference materials showed no statistically significant difference between the recommended values and those obtained by the proposed method.

Relevance: 20.00%

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals differ genetically in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike’s information criterion, based on the h-likelihood, to select the best-fitting model.

Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters.

Results: Designs with 100 sires, each with at least 100 offspring, are required for the standard deviations of the estimated variances to be lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities exists.

Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
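The reaction-norm notion of macro-environmental sensitivity can be sketched with a toy simulation: each sire gets its own slope against an observed environmental covariate, and the between-sire variance of those slopes is recovered with a moment correction. The design and parameter values are illustrative, not those of the study:

```python
import numpy as np

# Toy reaction-norm simulation: sire-specific slopes against a known
# macro-environmental covariate; recover the slope variance by subtracting
# the average least-squares sampling variance from the between-sire
# variance of the estimated slopes.
rng = np.random.default_rng(4)

n_sires, n_off = 100, 100
env = rng.uniform(-1, 1, (n_sires, n_off))           # observed environment per record
slope = rng.normal(1.0, 0.4, n_sires)                # sire reaction-norm slopes (var 0.16)
y = slope[:, None] * env + rng.normal(0, 1.0, (n_sires, n_off))

# Per-sire least-squares slope (no-intercept model matches the simulation)
slopes_hat = (env * y).sum(axis=1) / (env ** 2).sum(axis=1)
# Average sampling variance of those slopes (residual variance taken as 1 here)
sampling_var = (1.0 / (env ** 2).sum(axis=1)).mean()
var_slope_hat = slopes_hat.var(ddof=1) - sampling_var
print(f"estimated slope variance = {var_slope_hat:.2f}  (true value 0.16)")
```

Rerunning with fewer offspring per sire inflates the sampling-variance correction relative to the signal, which is the intuition behind the abstract's requirement of large half-sib groups.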

Relevance: 20.00%

Abstract:

We present the hglm package for fitting hierarchical generalized linear models. It can be used for linear mixed models and generalized linear mixed models with random effects, for a variety of link functions and distributions for both the outcomes and the random effects. Fixed effects can also be fitted in the dispersion part of the model.

Relevance: 20.00%

Abstract:

Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms.

Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model.

Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
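The two-level iteration described above can be caricatured in a few lines: a weighted fit of the mean model alternates with a log-link fit of per-group variances to the squared residuals. This toy version (fixed mean only, group-wise variances, no genetic effects) is an assumption-laden sketch, not the ASReml DHGLM:

```python
import numpy as np

# Toy DHGLM-style iteration: (1) weighted fit of the mean model, then
# (2) log-link fit of group-level variances to the squared residuals,
# repeated until the variance estimates stabilize.
rng = np.random.default_rng(3)

n_groups, n_per = 20, 50
groups = np.repeat(np.arange(n_groups), n_per)         # 20 "sires", 50 records each
true_log_var = rng.normal(0.0, 0.5, n_groups)          # heterogeneous residual variances
y = rng.normal(2.0, np.exp(0.5 * true_log_var)[groups])

log_var = np.zeros(n_groups)
for _ in range(20):
    w = 1.0 / np.exp(log_var)[groups]                  # (1) precision weights for the mean
    mu = np.sum(w * y) / np.sum(w)
    r2 = (y - mu) ** 2                                 # (2) squared residuals drive the
    for g in range(n_groups):                          #     dispersion model (log link,
        log_var[g] = np.log(r2[groups == g].mean())    #     one parameter per group)

corr = np.corrcoef(log_var, true_log_var)[0, 1]
print(f"corr(estimated, true log-variance) = {corr:.2f}")
```

The estimated log-variances track the simulated ones closely, illustrating why squared residuals, treated as gamma-distributed responses, carry the information needed for the dispersion part of the model.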

Relevance: 20.00%