145 results for Samplers
Abstract:
The mathematical modelling underlying passive air sampling theory can be based on mass transfer coefficients or rate constants. Generally, these models have not been inter-related. Starting with basic models, the exchange of chemicals between the gaseous phase and the sampler is developed using mass transfer coefficients and rate constants. Importantly, the inter-relationships between the approaches are demonstrated by relating uptake rate constants and loss rate constants to mass transfer coefficients when either sampler-side or air-side resistance is dominating chemical exchange. The influences of sampler area and sampler volume on chemical exchange are discussed in general terms and as they relate to frequently used parameters such as sampling rates and time to equilibrium. Where air-side or sampler-side resistance dominates, an increase in the surface area of the sampler will increase sampling rates. Sampling rates are not related to the sampler/air partition coefficient (K-SV) when air-side resistance dominates and increase with K-SV when sampler-side resistance dominates.
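A minimal worked sketch of the first-order exchange model implied above (the symbols and their grouping are assumptions for illustration, not necessarily the paper's notation): for a sampler of surface area A, volume V_S, sampler/air partition coefficient K_SV, and overall mass transfer coefficient k_o,

```latex
\frac{dC_S}{dt} = \frac{k_o A}{V_S}\left(C_A - \frac{C_S}{K_{SV}}\right)
               = k_u C_A - k_e C_S,
\qquad
k_u = \frac{k_o A}{V_S}, \quad
k_e = \frac{k_o A}{V_S K_{SV}}, \quad
R_S = k_o A .
```

With the usual two-resistance form 1/k_o = 1/k_A + 1/(k_S K_SV), air-side control gives k_o ≈ k_A, so the sampling rate R_S scales with A but not with K_SV; sampler-side control gives k_o ≈ k_S K_SV, so R_S increases with both A and K_SV, consistent with the statements above.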
Abstract:
Water-sampler equilibrium partitioning coefficients and aqueous boundary layer mass transfer coefficients for atrazine, diuron, hexazinone and fluometuron onto C18 and SDB-RPS Empore disk-based aquatic passive samplers have been determined experimentally under a laminar flow regime (Re = 5400). The method involved accelerating the time to equilibrium of the samplers by exposing them to three water concentrations, decreasing stepwise to 50% and then 25% of the original concentration. Assuming first-order Fickian kinetics across a rate-limiting aqueous boundary layer, both parameters are determined computationally by unconstrained nonlinear optimization. In addition, a method of estimating mass transfer coefficients, and therefore sampling rates, using the dimensionless Sherwood correlation developed for laminar flow over a flat plate is applied. For each of the herbicides, this correlation is validated to within 40% of the experimental data. The study demonstrates that for trace concentrations (sub-0.1 µg/L) and these flow conditions, a naked Empore disk performs well as an integrative sampler over short deployments (up to 7 days) for the range of polar herbicides investigated. The SDB-RPS disk allows a longer integrative period than the C18 disk due to its higher sorbent mass and/or its more polar sorbent chemistry. This work also suggests that for certain passive sampler designs, empirical estimation of sampling rates may be possible using correlations that have been available in the chemical engineering literature for some time.
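A hedged sketch of the flat-plate Sherwood estimate described above: the correlation form Sh = 0.664 Re^(1/2) Sc^(1/3) is the standard laminar flat-plate result, but the numerical inputs below are illustrative assumptions, not the study's values.

```python
# Estimate an aqueous boundary-layer mass transfer coefficient (k_w) and a
# sampling rate (R_s) for a disk passive sampler from the laminar flat-plate
# Sherwood correlation: Sh = 0.664 * Re**0.5 * Sc**(1/3).
# All numerical inputs are illustrative assumptions.

def sampling_rate(u, L, D, nu, area):
    """u: free-stream velocity (m/s), L: characteristic length (m),
    D: analyte diffusivity in water (m^2/s),
    nu: kinematic viscosity of water (m^2/s), area: disk area (m^2)."""
    Re = u * L / nu                        # Reynolds number
    Sc = nu / D                            # Schmidt number
    Sh = 0.664 * Re**0.5 * Sc**(1.0 / 3)   # laminar flat-plate correlation
    k_w = Sh * D / L                       # mass transfer coefficient (m/s)
    R_s = k_w * area                       # sampling rate (m^3/s)
    return k_w, R_s

# Example: 47 mm disk, atrazine-like diffusivity (assumed values)
k_w, R_s = sampling_rate(u=0.05, L=0.047, D=5e-10, nu=1e-6, area=1.7e-3)
print(f"k_w = {k_w:.2e} m/s, R_s = {R_s * 1000 * 86400:.2f} L/day")
```

With these assumed inputs the sketch returns a sampling rate of a few tenths of a litre per day, the order of magnitude typically reported for naked Empore disk samplers.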
Abstract:
There have been a number of developments in the need for, design, and use of passive air samplers (PAS) for persistent organic pollutants (POPs). This article is the first in a Special Issue of the journal to review these developments and some of the data arising from them. We explain the need for and benefits of developing PAS for POPs, describe the different approaches that can be used, and highlight future developments and needs.
Abstract:
This collection of papers records a series of studies, carried out over a period of some 50 years, on two aspects of river pollution control: the prevention of pollution by the biological filtration of sewage, and the monitoring of river pollution by biological surveillance. The earlier studies were carried out to develop methods of controlling flies which bred in the filters and caused serious nuisance, and a possible public health hazard, when they dispersed to surrounding villages. Although the application of insecticides proved effective as an alleviative measure, because it resulted in only a temporary disturbance of the ecological balance it was considered ecologically unsound as a long-term solution. Subsequent investigations showed that the fly populations in filters were largely determined by the amount of food available to the grazing larval stage in the form of filter film. It was also established that the winter deterioration in filter performance was due to the excessive accumulation of film. Further investigations were therefore carried out to determine the factors responsible for the accumulation of film in different types of filter. Methods of filtration which were thought to control film accumulation by increasing the flushing action of the sewage were found instead to control fungal film by creating nutrient-limiting conditions. In some filters, increasing the hydraulic flushing reduced the grazing fauna population in the surface layers and resulted in an increase in film. The results of these investigations were successfully applied in modifying filters and in the design of a Double Filtration process. These studies on biological filters led to the conclusion that they should be designed and operated as ecological systems and not merely as hydraulic ones. Studies on the effects of sewage effluents on Birmingham streams confirmed the findings of earlier workers and justified their claims for the use of biological methods for detecting and assessing river pollution. Further ecological studies showed the sensitivity of benthic riffle communities to organic pollution. Using experimental channels and laboratory studies, the different environmental conditions associated with organic pollution were investigated. The degree and duration of oxygen depletion during the dark hours were found to be a critical factor. The relative tolerance of different taxa to other pollutants, such as ammonia, differed. Although colonisation samplers proved of value in sampling difficult sites, the invertebrate data generated were not suitable for processing as any of the commonly used biotic indices. Several of the papers, written by request for presentation at conferences and similar occasions, presented the biological viewpoint on river pollution and water quality issues at the time and advocated the use of biological methods. The information and experience gained in these investigations were used as the "domain expert" in the development of artificial intelligence systems for use in the biological surveillance of river water quality.
Abstract:
Passive samplers are a versatile tool not only for integrating environmental concentrations of pollutants, but also for avoiding the use of live sentinel organisms in environmental monitoring. This study introduced the use of magnetic silicone polymer composites (Fe-PDMS) as passive sampling media to pre-concentrate a wide range of analytes from environmental settings. The composite samplers were assessed for their accumulation properties in laboratory experiments with two model herbicides (Atrazine and Irgarol 1051) and evaluated for their uptake properties in environmental settings (waters and sediments). The Fe-PDMS composites showed good accumulation of herbicides and pesticides in both freshwater and saltwater settings, and accumulation was positively correlated with the log Kow of the individual analytes. The results show that these composites could readily be used for a wide range of applications, such as monitoring, cleanup, and/or bioaccumulation modeling, and as a non-intrusive, nondestructive monitoring tool for environmental forensic purposes.
Abstract:
This study assessed ambient air quality in an urban area of Natal, capital of Rio Grande do Norte (latitude 5º49'29'' S, longitude 35º13'34'' W), with the aim of determining metal concentrations in the particulate matter (PM10 and PM2.5) of atmospheric air in the urban area of Natal. The sampling period ran from January to December 2012. Samples were collected on glass fiber filters using two high-volume samplers, one for PM2.5 (AGV PM2.5) and another for PM10 (AGV PM10). Monthly PM10 averages ranged from 8.92 to 19.80 µg m-3, with an annual average of 16.21 µg m-3, while monthly PM2.5 averages ranged from 2.84 to 7.89 µg m-3, with an annual average of 5.61 µg m-3. The PM2.5 and PM10 concentrations were related to meteorological variables, and to examine the effects of these variables on PM concentration an exploratory analysis of the data using Principal Component Analysis (PCA) was performed. The PCA results showed that PM concentration decreases with increasing barometric pressure, wind direction, rainfall and relative humidity, and that the day-of-week variable had little influence compared with the meteorological variables. Filters containing particulate matter from six sampling days were selected and subjected to microwave digestion. After digestion, the samples were analyzed by inductively coupled plasma mass spectrometry (ICP-MS). Concentrations of the heavy metals vanadium, chromium, manganese, nickel, copper, arsenic and lead were determined. The highest metal concentrations were for Pb and Cu, with average values of 5.34 and 2.34 ng m-3 in PM10 and 4.68 and 2.95 ng m-3 in PM2.5, respectively. Concentrations of V, Cr, Mn, Ni and Cd were, respectively, 0.13, 0.39, 0.48, 0.45 and 0.03 ng m-3 in the PM10 fraction and 0.05, 0.10, 0.10, 0.34 and 0.01 ng m-3 in the PM2.5 fraction. The concentration of As was zero for both fractions.
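As a hedged illustration of the exploratory step described above (the column names and input file are assumptions for the sketch, not the study's data):

```python
# Sketch of a PCA relating daily PM concentrations to meteorological variables.
# The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.read_csv("natal_pm_2012.csv")           # hypothetical daily records
cols = ["PM10", "PM2.5", "pressure", "wind_dir",
        "rainfall", "rel_humidity", "weekday"]
X = StandardScaler().fit_transform(df[cols])    # standardize before PCA

pca = PCA(n_components=3)
scores = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_)
# Loadings (rows = components, columns = variables): PM loadings of opposite
# sign to the meteorological loadings on a dominant component indicate that
# PM decreases as pressure, rainfall and humidity increase.
print(pd.DataFrame(pca.components_, columns=cols))
```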
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
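For concreteness, the reduced-rank (PARAFAC-type) factorization referred to here can be written as follows (notation assumed for illustration): for categorical variables y_1, ..., y_p with k latent classes,

```latex
\Pr(y_1 = c_1, \dots, y_p = c_p)
  \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad
\nu_h \ge 0,\; \sum_h \nu_h = 1,\; \sum_{c} \lambda^{(j)}_{h c} = 1,
```

so the probability tensor has nonnegative rank at most k, whereas a log-linear model instead constrains which interaction terms in the log of this probability mass function are nonzero.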
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
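A brief reminder of the representation being extended, in its standard form for a simple max-stable process (notation assumed): with {U_i} the points of a Poisson process on (0, ∞) with intensity u^{-2} du and W_i i.i.d. nonnegative spectral processes satisfying E[W_i(t)] = 1,

```latex
\eta(t) \;=\; \bigvee_{i \ge 1} U_i \, W_i(t), \qquad t \in \mathcal{T}.
```

The chapter endows the support points of this representation with velocities and lifetimes to obtain the time-indexed max-stable velocity processes described above.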
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
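A minimal sketch of the kind of subset-based kernel approximation analyzed here (not the chapter's own framework): a Metropolis-Hastings step whose log-likelihood is estimated from a random subset of the data, with all names illustrative.

```python
# Naive subsampled Metropolis-Hastings step: the full-data log-likelihood is
# replaced by a scaled minibatch estimate, giving an approximate transition
# kernel whose error can be traded off against per-step cost.
import numpy as np

def approx_mh_step(theta, data, log_lik, log_prior, step=0.1, m=100, rng=None):
    """data: NumPy array of observations; log_lik(theta, batch) returns the
    per-observation log-likelihoods for the batch."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(data)
    idx = rng.choice(n, size=min(m, n), replace=False)   # random data subset

    def approx_log_post(t):
        # Scaled-up minibatch log-likelihood plus log-prior: an approximation
        # to the exact log-posterior used by the exact MH kernel.
        return log_prior(t) + (n / len(idx)) * np.sum(log_lik(t, data[idx]))

    prop = theta + step * rng.standard_normal(np.shape(theta))  # random walk
    log_alpha = approx_log_post(prop) - approx_log_post(theta)
    return prop if np.log(rng.uniform()) < log_alpha else theta
```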
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
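A compact sketch of the truncated-normal (Albert-Chib style) data augmentation sampler for a probit model, to make the algorithm discussed above concrete; the prior and dimensions are illustrative assumptions.

```python
# Data augmentation Gibbs sampler for Bayesian probit regression:
# latent z_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1 and to
# (-inf, 0) if y_i = 0; then beta | z is multivariate normal.
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, prior_var=100.0,
                 rng=np.random.default_rng(0)):
    """X: (n, p) NumPy design matrix; y: length-n array of 0/1 responses."""
    n, p = X.shape
    B0_inv = np.eye(p) / prior_var        # prior: beta ~ N(0, prior_var * I)
    V = np.linalg.inv(X.T @ X + B0_inv)   # posterior covariance of beta | z
    L = np.linalg.cholesky(V)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        mu = X @ beta
        # Truncation bounds in standardized units, as scipy's truncnorm expects
        a = np.where(y == 1, -mu, -np.inf)
        b = np.where(y == 1, np.inf, -mu)
        z = truncnorm.rvs(a, b, loc=mu, scale=1.0, random_state=rng)
        m = V @ (X.T @ z)                 # posterior mean of beta | z
        beta = m + L @ rng.standard_normal(p)
        draws[it] = beta
    return draws
```

In the rare-event regime described above (large n, few successes), chains from samplers of this form exhibit the high autocorrelations and small effective sample sizes that the chapter quantifies.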
Abstract:
Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation.
Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope.
In this work, we propose two methods for improving the efficiency of free energy calculations.
First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths.
We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers.
We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT.
Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path.
Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
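For reference, the Crooks fluctuation theorem underlying the pCrooks estimator relates the forward and reverse work distributions in its standard form (with beta = 1/k_B T):

```latex
\frac{P_F(W)}{P_R(-W)} \;=\; \exp\!\bigl[\beta\,(W - \Delta F)\bigr].
```

As described above, pCrooks applies this relation pairwise, presumably between adjacent alchemical states, so that the variance of the overall free energy estimate can be decomposed along the discrete path.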
Abstract:
Nodule samples obtained were described and studied on board for: 1) observation of occurrence and morphology in and outside the samplers, size classification, measurement of weight, and calculation of population density (kg/m2); 2) photographing of whole nodules on a plate marked with frames matching the unit areas of both the Ocean-70 (0.50 m2) and the free-fall grab (0.13 m2), and of typical samples on a plate with a 5 cm grid scale; 3) observation of the internal structures of the nodules in cut section; and 4) determination of mineral composition by X-ray diffractometer. The relation between nodule types and geological environment or chemical composition was examined by referring to data from related studies, such as sedimentology, acoustic surveys, and chemical analysis.
Abstract:
Rhizon samplers were originally designed as micro-tensiometers for soil science to sample seepage water in the unsaturated zone. This study shows applications of Rhizons for porewater sampling from sediments in aquatic systems and presents a newly developed Rhizon in situ sampler (RISS). With the inexpensive Rhizon sampling technique, porewater profiles can be sampled with minimum disturbance of both the sediment structure and possible flow fields. Field experiments, tracer studies, and numerical modeling were combined to assess the suitability of Rhizons for porewater sampling. It is shown that the low effort and simple application makes Rhizons a powerful tool for porewater sampling and an alternative to classical methods. Our investigations show that Rhizons are well suited for sampling porewater on board a ship, in a laboratory, and also for in situ sampling. The results revealed that horizontally aligned Rhizons can sample porewater with a vertical resolution of 1 cm. Combined with an in situ benthic chamber system, the RISS allows studies of benthic fluxes and porewater profiles at the same location on the seafloor with negligible effect on the incubated sediment water interface. Results derived by porewater sampling of sediment cores from the Southern Ocean (Atlantic sector) and by in situ sampling of tidal flat sediments of the Wadden Sea (Sahlenburg/Cuxhaven, Germany) are presented.
Abstract:
As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure. Thus, significant effort is being invested into post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one such promising area of PQC, which offers versatile, efficient, and high performance security services. However, the vulnerabilities of these implementations against side-channel attacks (SCA) remain significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing analysis attack can render the whole cryptosystem broken, making the discrete Gaussian sampler the most vulnerable module to SCA. This research proposes countermeasures against timing information leakage with FPGA-based designs of the CDT-based discrete Gaussian samplers with constant response time, targeting encryption and signature scheme parameters. The proposed designs are compared against the state-of-the-art and are shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster in comparison to the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
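A hedged software sketch of the constant-time CDT idea (an analogue of the hardware design discussed above, not the proposed design itself; the parameters are illustrative, and true constant-time behaviour additionally requires constant-time arithmetic, which Python does not guarantee):

```python
# Constant-time-style CDT sampler for a one-sided discrete Gaussian:
# a uniform random value is compared against every entry of the cumulative
# distribution table, with no early exit, so the number of comparisons does
# not depend on the sampled value.
import math
import secrets

SIGMA = 3.33                 # illustrative standard deviation
TAIL_CUT = 12 * SIGMA        # illustrative tail bound
PRECISION = 2**64            # fixed-point precision of the table

def build_cdt(sigma, tail_cut):
    xs = range(int(tail_cut) + 1)
    weights = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
    total = sum(weights)
    acc, table = 0.0, []
    for w in weights:
        acc += w
        table.append(int(acc / total * (PRECISION - 1)))
    return table

CDT = build_cdt(SIGMA, TAIL_CUT)

def sample_halfside():
    r = secrets.randbelow(PRECISION)
    # Scan the whole table unconditionally; accumulate the index arithmetically
    # rather than branching, which is the core of the timing countermeasure.
    idx = 0
    for entry in CDT:
        idx += (r > entry)   # adds 1 while r exceeds the cumulative value
    return idx
```

A full encryption or signature sampler would also need to draw the sign and handle larger standard deviations, but the unconditional table scan above is the part that removes data-dependent timing.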
Abstract:
The objective of this study was to assess worker exposure to mineral dust particles; a metabolic model, based on the model adopted by the ICRP, was applied to assess human exposure to Ta and to predict Ta concentrations in excreta. Occupational exposure to Th-, U-, Nb-, and Ta-bearing particles during routine tasks to obtain Fe-Nb alloys was estimated using air samplers and excreta samples. Ta concentrations in food samples and in drinking water were also determined. The results support the conclusion that workers were occupationally exposed to Ta-bearing particles, and also indicate that a source of Ta exposure for both the workers and the control group was the ingestion of drinking water containing soluble compounds of Ta. Therefore, some Ta compounds should be considered soluble in the gastrointestinal tract. Consequently, the metabolic model based on the ICRP model and/or the transfer factor f1 for Ta should be reviewed, and the solubility of Ta compounds in the gastrointestinal tract should be determined.
Abstract:
The chemical composition of rain and fog water was determined at three sites in the Reserva Biológica Monteverde, Puntarenas, between October 2009 and January 2010. Owing to its conservation status and its geographic location on the continental divide, the Reserva Biológica Monteverde offers an ideal site for studying the composition of atmospheric waters (rain and fog water). Fog water samples were collected using fog samplers fitted with Teflon lines, while rain water samples were collected using simple rain samplers and one cascade sampler. In both types of water the most relevant ionic species were analyzed: H3O+, NH4+, Ca2+, Mg2+, K+, Na+, Cl-, NO3- and SO42-, using ion chromatography with conductivity detection. The average concentrations of these species in rain water ranged between 0.54 ± 0.02 µeq L-1 and 101 ± 3 µeq L-1, while in fog water they varied between 1.00 ± 0.02 µeq L-1 and 93 ± 4 µeq L-1. In addition, the ion balance and the enrichment factors with respect to seawater and soil are presented for both types of samples.
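The seawater enrichment factor mentioned above is conventionally computed as follows (assuming Na+ as the marine reference species; the soil factor is analogous, using a crustal reference element):

```latex
\mathrm{EF}_{\text{sea}}(X) \;=\;
\frac{\bigl([X]/[\mathrm{Na^+}]\bigr)_{\text{sample}}}
     {\bigl([X]/[\mathrm{Na^+}]\bigr)_{\text{seawater}}},
```

with the ion balance checked as the ratio of total cation to total anion equivalents (in µeq L-1).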
Abstract:
There is increasing evidence of a causal link between airborne particles and ill health, and this study monitored exposure to both airborne particles and the gas-phase contaminants of environmental tobacco smoke (ETS) in a nightclub. The present study followed a number of pilot studies in which human exposure to airborne particles in a nightclub was assessed and the spatio-temporal distribution of gas-phase pollutants was evaluated in restaurants and pubs. The work reported here re-examined the nightclub environment and used concurrent and continuous monitoring with optical scattering samplers to measure particulates (PM10), together with multi-gas analysers. The analysis illustrated the highly episodic nature of both gaseous and particulate concentrations on the dance floor and in the bar area, although levels were well below the maximum recommended exposure levels. Short-term exposure to high concentrations may nevertheless be relevant when considering possible toxic effects on biological systems. The results give an indication of the problems associated with achieving acceptable indoor air quality (IAQ) in a complex space and identify some of the problems inherent in the design and operation of ventilation systems for such spaces.