39 results for Mixture toxicity
Abstract:
Molecular phylogenetic studies of homologous sequences of nucleotides often assume that the underlying evolutionary process was globally stationary, reversible, and homogeneous (SRH), and that a model of evolution with one or more site-specific and time-reversible rate matrices (e.g., the GTR rate matrix) is enough to accurately model the evolution of data over the whole tree. However, an increasing body of data suggests that evolution under these conditions is an exception, rather than the norm. To address this issue, several non-SRH models of molecular evolution have been proposed, but they either ignore heterogeneity in the substitution process across sites (HAS) or assume it can be modeled accurately using the Γ distribution. As an alternative to these models of evolution, we introduce a family of mixture models that approximate HAS without the assumption of an underlying predefined statistical distribution. This family of mixture models is combined with non-SRH models of evolution that account for heterogeneity in the substitution process across lineages (HAL). We also present two algorithms for searching model space and identifying an optimal model of evolution that is less likely to over- or underparameterize the data. The performance of the two new algorithms was evaluated using alignments of nucleotides with 10 000 sites simulated under complex non-SRH conditions on a 25-tipped tree. The algorithms were found to be very successful, identifying the correct HAL model with a 75% success rate (the average success rate for assigning rate matrices to the tree's 48 edges was 99.25%) and, for the correct HAL model, identifying the correct HAS model with a 98% success rate. Finally, parameter estimates obtained under the correct HAL-HAS model were found to be accurate and precise. The merits of our new algorithms were illustrated with an analysis of 42 337 second codon sites extracted from a concatenation of 106 alignments of orthologous genes encoded by the nuclear genomes of Saccharomyces cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, S. castellii, S. kluyveri, S. bayanus, and Candida albicans. Our results show that second codon sites in the ancestral genome of these species contained 49.1% invariable sites, 39.6% variable sites belonging to one rate category (V1), and 11.3% variable sites belonging to a second rate category (V2). The ancestral nucleotide content was found to differ markedly across these three sets of sites, and the evolutionary processes operating at the variable sites were found to be non-SRH and best modeled by a combination of eight edge-specific rate matrices (four for V1 and four for V2). The number of substitutions per site at the variable sites also differed markedly, with sites belonging to V1 evolving more slowly than those belonging to V2 along the lineages separating the seven species of Saccharomyces. Finally, sites belonging to V1 appeared to have ceased evolving along the lineages separating S. cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, and S. bayanus, implying that they might have become so selectively constrained that they could be considered invariable sites in these species.
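To make the mixture idea concrete in general terms (illustrative notation only, not the authors' exact formulation): under a HAS mixture, the likelihood of each site is a weighted sum over rate classes, and in a HAL-HAS model each class carries its own assignment of edge-specific rate matrices.

```latex
% Likelihood of site i under a C-class mixture (general form; notation is illustrative).
% w_c       : weight of class c, with the weights summing to one
% L_i^{(c)} : likelihood of site i on the tree, computed with the rate matrices
%             assigned to class c (in a HAL-HAS model, one set of edge-specific matrices per class)
L_i \;=\; \sum_{c=1}^{C} w_c \, L_i^{(c)}, \qquad \sum_{c=1}^{C} w_c = 1 .
```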
Abstract:
The aim of this study is to review current knowledge of the fate, exposure, and toxicity of engineered nanomaterials (ENMs), highlight research gaps, and suggest future research directions. Humans and other living organisms are exposed to ENMs during the production or use of products containing them. To assess the hazards of ENMs, it is important to characterize their physicochemical properties and relate them to any observed hazard. However, the full determination of these relationships is currently limited by the lack of empirical data. Moreover, most toxicity studies do not use realistic environmental exposure conditions for determining dose-response parameters, affecting the accurate estimation of health risks associated with exposure to ENMs. Regulatory aspects of nanotechnology are still developing and are currently the subject of much debate. Synthesis of the available studies suggests a number of open questions. These include (i) developing a combination of different analytical methods for determining ENM concentration, size, shape, surface properties, and morphology in different environmental media, (ii) conducting toxicity studies using environmentally relevant exposure conditions and obtaining data relevant to developing quantitative nanostructure-toxicity relationships (QNTR), and (iii) developing guidelines for regulating exposure to ENMs in the environment.
Abstract:
Airborne organic pollutants have significant impacts on health; however, their sources, atmospheric characteristics, and resulting human exposures are poorly understood. This research characterized the chemical composition of atmospheric volatile organic compounds, polycyclic aromatic hydrocarbons, and carbonyls in a representative number of primary schools in the Brisbane Metropolitan Area, quantified their concentrations, assessed their toxicity, and apportioned them to their sources. The findings expand scientific knowledge of these pollutants and will contribute towards science-based management of the risks associated with pollution emissions and air quality in schools and other urban and indoor environments.
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates, and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
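The overfitting idea can be illustrated with a deliberately simplified sketch (this is not the Zmix implementation and omits prior parallel tempering and Zswitch): fit a univariate Gaussian mixture with more components than expected, place a sparse Dirichlet prior on the weights so superfluous components are emptied during sampling, and read off the number of occupied components. All data and prior values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two well-separated Gaussian groups (true number of components = 2).
y = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(3.0, 1.0, 150)])
n, K = len(y), 6                      # deliberately overfitted: K exceeds the true number

# Illustrative conjugate priors.
alpha0 = 0.01                         # sparse Dirichlet prior empties superfluous components
m0, v0 = 0.0, 10.0 ** 2               # Normal prior on component means
a0, b0 = 2.0, 1.0                     # Gamma prior on component precisions

# Initial values.
w = np.full(K, 1.0 / K)
mu = rng.normal(0.0, 3.0, K)
tau = np.full(K, 1.0)

occupied = []
for it in range(3000):
    # 1. Allocations z_i | rest: categorical, proportional to w_k * N(y_i | mu_k, 1/tau_k).
    logp = np.log(w) + 0.5 * np.log(tau) - 0.5 * tau * (y[:, None] - mu) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = (p.cumsum(axis=1) > rng.random((n, 1))).argmax(axis=1)
    nk = np.bincount(z, minlength=K)

    # 2. Weights | rest: Dirichlet(alpha0 + n_k); clip to avoid log(0) for empty components.
    w = np.clip(rng.dirichlet(alpha0 + nk), 1e-300, None)

    # 3. Means and precisions | rest: standard Normal-Gamma conjugate updates.
    for k in range(K):
        yk = y[z == k]
        prec = 1.0 / v0 + nk[k] * tau[k]
        mean = (m0 / v0 + tau[k] * yk.sum()) / prec
        mu[k] = rng.normal(mean, 1.0 / np.sqrt(prec))
        tau[k] = rng.gamma(a0 + 0.5 * nk[k],
                           1.0 / (b0 + 0.5 * ((yk - mu[k]) ** 2).sum()))

    if it >= 1000:                    # record the number of non-empty components after burn-in
        occupied.append(int((nk > 0).sum()))

print("Posterior mode of the number of occupied components:",
      np.bincount(occupied).argmax())
```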
Abstract:
In school environments, children are constantly exposed to mixtures of airborne substances, derived from a variety of sources both in the classroom and in the school surroundings. It is important to evaluate the hazardous properties of these mixtures in order to conduct risk assessments of their impact on children's health. Within this context, through the application of a Maximum Cumulative Ratio approach, this study aimed to explore whether health risks due to indoor air mixtures are driven by a single substance or are due to cumulative exposure to various substances. This methodology requires knowledge of the concentration of substances in the air mixture, together with a health-related weighting factor (i.e. a reference concentration or lowest concentration of interest), which is necessary to calculate the Hazard Index. Maximum Cumulative Ratio and Hazard Index values were then used to categorise the mixtures into four groups, based on their hazard potential and, therefore, the appropriate risk management strategies. Air samples were collected from classrooms in 25 primary schools in Brisbane, Australia, and the analysis was based on the measured concentrations of these substances in about 300 air samples. The results showed that in 92% of the schools, indoor air mixtures belonged to the ‘low concern’ group and therefore did not require any further assessment. In the remaining schools, toxicity was mainly governed by a single substance, with a very small number of schools having a multiple-substance mix that required a combined risk assessment. The proposed approach enables the identification of such schools and thus aids in the efficient health risk management of pollution emissions and air quality in the school environment.
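As an illustration of the screening arithmetic (the concentrations and reference concentrations below are made up, not the study's data): each substance's hazard quotient is its measured concentration divided by its health-based reference concentration, the Hazard Index is the sum of the quotients for the mixture, and the Maximum Cumulative Ratio is the Hazard Index divided by the largest single quotient. The final classification is simplified here and does not reproduce the study's exact four-group definitions.

```python
# Illustrative Maximum Cumulative Ratio (MCR) screening for one indoor air mixture.
# All values are hypothetical and in the same units (e.g. ug/m3).

concentrations = {"formaldehyde": 12.0, "benzene": 1.5, "toluene": 40.0}
reference_conc = {"formaldehyde": 100.0, "benzene": 5.0, "toluene": 5000.0}

# Hazard quotient per substance, Hazard Index for the mixture, and the MCR.
hq = {s: concentrations[s] / reference_conc[s] for s in concentrations}
hi = sum(hq.values())
mcr = hi / max(hq.values())

for substance, value in hq.items():
    print(f"HQ {substance}: {value:.3f}")
print(f"Hazard Index: {hi:.3f}   MCR: {mcr:.3f}")

# Simplified reading of the screening outcome (not the study's exact grouping).
if hi <= 1:
    print("Low concern: no further assessment of the mixture needed.")
elif max(hq.values()) > 1:
    print("Risk is driven mainly by a single substance.")
else:
    print("No single substance exceeds its reference, but the combination does: "
          "a combined (cumulative) risk assessment is warranted.")
```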
Abstract:
Polycyclic Aromatic Hydrocarbons (PAHs) represent a major class of toxic pollutants because of their carcinogenic and mutagenic characteristics. People living in urban areas are regularly exposed to PAHs because of the abundance of their emission sources. Within this context, this study aimed to: (i) identify and quantify the levels of ambient PAHs in an urban environment; (ii) evaluate their toxicity; and (iii) identify their sources, as well as the contribution of specific sources to the measured concentrations. Sixteen PAHs were identified and quantified in air samples collected from Brisbane. Principal Component Analysis – Absolute Principal Component Scores (PCA-APCS) was used to conduct source apportionment of the measured PAHs. Vehicular emissions, natural gas combustion, petrol emissions and evaporative/unburned fuel were the sources identified, contributing 56%, 21%, 15% and 8% of the total PAH emissions, respectively, all of which need to be considered in any pollution control measures implemented in urban areas.
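A minimal sketch of the PCA-APCS workflow (hypothetical data matrix and factor count; factor rotation, factor interpretation, and the study's diagnostics are omitted): standardize the species concentrations, extract principal components, convert factor scores to absolute scores by subtracting the scores of an artificial zero-concentration sample, then regress the total PAH concentration on the absolute scores to estimate each source's contribution.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: rows = air samples, columns = 16 individual PAH concentrations (ng/m3).
X = np.abs(rng.normal(5.0, 2.0, size=(60, 16)))
total_pah = X.sum(axis=1)

# 1. Standardize each PAH species.
mean, std = X.mean(axis=0), X.std(axis=0)
Z = (X - mean) / std

# 2. PCA; the number of retained factors (4 here) would normally be chosen from
#    eigenvalues and interpretability, often followed by varimax rotation (omitted).
n_factors = 4
pca = PCA(n_components=n_factors)
scores = pca.fit_transform(Z)

# 3. Absolute principal component scores: subtract the scores of an artificial
#    sample with zero concentration for every species.
z0 = (np.zeros(X.shape[1]) - mean) / std
apcs = scores - pca.transform(z0.reshape(1, -1))

# 4. Regress total PAH on the APCS; coefficient * mean APCS gives each source's
#    average contribution (with real data, negative values flag a poor factor solution).
reg = LinearRegression().fit(apcs, total_pah)
contributions = reg.coef_ * apcs.mean(axis=0)
shares = 100 * contributions / contributions.sum()
for k, share in enumerate(shares):
    print(f"Source {k + 1}: {share:.1f}% of total PAH")
```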
Abstract:
In this paper, we examine approaches to estimating a Bayesian mixture model at both single and multiple time points for a sample of actual and simulated aerosol particle size distribution (PSD) data. For estimation of a mixture model at a single time point, we use Reversible Jump Markov Chain Monte Carlo (RJMCMC) to estimate the mixture model parameters, including the number of components, which is assumed to be unknown. We compare the results of this approach to a commonly used estimation method in the aerosol physics literature. As PSD data are often measured over time, frequently at small time intervals, we also examine the use of an informative prior for estimation of the mixture parameters, which takes into account the correlated nature of the parameters. The Bayesian mixture model offers a promising approach, providing advantages in both estimation and inference.
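For context, the parameterization commonly used in the aerosol physics literature (and a plausible form for the mixture components here, though the authors' exact formulation may differ) represents a multimodal PSD as a mixture of lognormal modes over particle diameter:

```latex
% Multimodal lognormal particle size distribution (standard aerosol-physics form).
% N_k            : number concentration of mode k
% \bar{D}_{p,k}  : geometric mean diameter of mode k
% \sigma_{g,k}   : geometric standard deviation of mode k
n(\ln D_p) \;=\; \sum_{k=1}^{K} \frac{N_k}{\sqrt{2\pi}\,\ln \sigma_{g,k}}
  \exp\!\left[ -\frac{\left(\ln D_p - \ln \bar{D}_{p,k}\right)^2}{2 \ln^2 \sigma_{g,k}} \right]
```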
Abstract:
Many biological environments are crowded by macromolecules, organelles and cells, which can impede the transport of other cells and molecules. Previous studies have sought to describe these effects using either random walk models or fractional order diffusion equations. Here we examine the transport of both a single agent and a population of agents through an environment containing obstacles of varying size and shape, whose relative densities are drawn from a specified distribution. Our simulation results for a single agent indicate that smaller obstacles are more effective at retarding transport than larger obstacles; these findings are consistent with our simulations of the collective motion of populations of agents. In an attempt to explore whether these kinds of stochastic random walk simulations can be described using a fractional order diffusion equation framework, we calibrate the solution of such a differential equation to our averaged agent density information. Our approach suggests that these commonly used differential equation models ought to be used with care, since we are unable to match the solution of a fractional order diffusion equation to our data in a consistent fashion over a finite time period.
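A deliberately simple sketch of this kind of simulation (not the authors' model; a single agent on a square lattice with randomly placed single-site obstacles, with moves into blocked sites aborted, averaged over many realisations to estimate the mean squared displacement):

```python
import numpy as np

rng = np.random.default_rng(0)

L = 100                  # lattice is L x L with periodic boundaries
obstacle_density = 0.2   # fraction of sites blocked (single-site obstacles only)
steps = 500
realisations = 200

moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
msd = np.zeros(steps)

for _ in range(realisations):
    blocked = rng.random((L, L)) < obstacle_density
    # Start the agent on a free site.
    x = y = L // 2
    while blocked[x, y]:
        x, y = rng.integers(0, L, size=2)
    dx = dy = 0          # unwrapped displacement, so periodic wrapping does not hide motion

    for t in range(steps):
        mx, my = moves[rng.integers(4)]
        nx, ny = (x + mx) % L, (y + my) % L
        if not blocked[nx, ny]:          # abort the move if the target site is an obstacle
            x, y = nx, ny
            dx, dy = dx + mx, dy + my
        msd[t] += dx * dx + dy * dy

msd /= realisations
print("Mean squared displacement at t = 100, 250, 500 steps:", msd[99], msd[249], msd[499])
```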