7 results for anthropogenic emissions. gaussian plume modeling
at Duke University
Abstract:
Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) in response to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, progress in improving model predictability, data interpretation, or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty of developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances is further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, there is an emergent need to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use management) under current and future climate regimes.
With the goal of improving predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining
the exchange rate of gases and particles is evaluated with detailed descriptions of their limitations. Mechanistic modeling approaches, along with empirical studies across different scales, are employed to refine the mathematical descriptions of the surface conductance responsible for gas and particle exchanges, as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size, and leaf microroughness impacts the aerodynamic attributes, and thereby the ultrafine particle collection efficiency at the leaf/branch scale, is explored using wind tunnel experiments interpreted with a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model, combined with particle flux and concentration measurements within and above a forest, is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto the canopy medium and the forest floor. For gases, a modeling framework accounting for leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed with a porous media model that accommodates transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and a soil-root interaction model. Tackling all aspects of the potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities building on the foundation derived from this dissertation are also briefly discussed.
Abstract:
The twenty-first century has seen a further increase in the industrialization of Earth's resources, as society aims to meet the needs of a growing population while still protecting our environmental and natural resources. The advent of the industrial bioeconomy – which encompasses the production of renewable biological resources and their conversion into food, feed, and bio-based products – is seen as an important step in the transition towards sustainable development and away from fossil fuels. One rapidly expanding sector of the industrial bioeconomy is the use of bio-based feedstocks in electricity production as an alternative to coal, especially in the European Union.
As bioeconomy policies and objectives increasingly appear on political agendas, there is a growing need to quantify the impacts of transitioning from fossil fuel-based feedstocks to renewable biological feedstocks. Specifically, there is a growing need for a systems analysis of the expanding industrial bioeconomy and its potential risks, given that the flows within it are inextricably linked. Furthermore, greater analysis is needed of the consequences of shifting from fossil fuels to renewable feedstocks, in part through the use of life cycle assessment modeling to analyze impacts along the entire value chain.
To assess the emerging nature of the industrial bioeconomy, three objectives are addressed: (1) quantify the global industrial bioeconomy, linking the use of primary resources with the ultimate end product; (2) quantify the impacts of the expanding wood pellet energy export market of the Southeastern United States; (3) conduct a comparative life cycle assessment, incorporating the use of dynamic life cycle assessment, of replacing coal-fired electricity generation in the United Kingdom with wood pellets produced in the Southeastern United States.
To quantify the emergent industrial bioeconomy, an empirical analysis was undertaken. Existing databases from multiple domestic and international agencies were aggregated and analyzed in Microsoft Excel to produce a harmonized dataset of the bioeconomy. First-person interviews, existing academic literature, and industry reports were then utilized to delineate the various intermediate and end-use flows within the bioeconomy. The results indicate that within a decade, the industrial use of agriculture has risen ten percent, driven by increases in the production of bioenergy and bioproducts. The underlying resources supporting the emergent bioeconomy (i.e., land, water, and fertilizer use) were also quantified and included in the database.
Following the quantification of the existing bioeconomy, an in-depth analysis of the bioenergy sector was conducted. Specifically, the focus was on quantifying the impacts of the emergent wood pellet export sector that has rapidly developed in recent years in the Southeastern United States. A cradle-to-gate life cycle assessment was conducted to quantify supply chain impacts for two wood pellet production scenarios: roundwood and sawmill residues. For each of the nine impact categories assessed, wood pellet production from sawmill residues resulted in values 10-31% higher than production from roundwood.
The analysis of the wood pellet sector was then expanded to include the full life cycle (i.e., cradle-to-grave). In doing so, the combustion of biogenic carbon and the subsequent timing of emissions were assessed by incorporating dynamic life cycle assessment modeling. Assuming immediate carbon neutrality of the biomass, the results indicated an 86% reduction in global warming potential when utilizing wood pellets as compared to coal for electricity production in the United Kingdom. When incorporating the timing of emissions, wood pellets equated to a 75% or 96% reduction in carbon dioxide emissions, depending upon whether the forestry feedstock was considered to be harvested or planted in year one, respectively.
Finally, a policy analysis of renewable energy in the United States was conducted. Existing coal-fired power plants in the Southeastern United States were assessed for their potential to co-fire wood pellets. Co-firing wood pellets with coal in existing Southeastern United States power stations would result in a nine percent reduction in global warming potential.
Abstract:
A class of multi-process models is developed for collections of time-indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
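As a concrete illustration of the high-count device, here is a minimal Python sketch of a moment-matched Gaussian stand-in for a Pólya-Gamma draw; the moment formulas are the standard ones for PG(b, c), while the function names and example values are ours, not the dissertation's.

```python
import numpy as np

def pg_mean_var(b, c):
    """Mean and variance of a Polya-Gamma PG(b, c) random variable,
    from the standard moment formulas (Polson et al., 2013):
      E[w]   = (b / (2c)) * tanh(c / 2)
      Var[w] = (b / (4c^3)) * (sinh(c) - c) / cosh(c / 2)^2
    """
    mean = b / (2.0 * c) * np.tanh(c / 2.0)
    var = b / (4.0 * c**3) * (np.sinh(c) - c) / np.cosh(c / 2.0) ** 2
    return mean, var

def sample_pg_gaussian(b, c, rng=None):
    """Moment-matched Gaussian stand-in for a PG(b, c) draw; accurate
    when the count total b is large (the high-count regime)."""
    rng = np.random.default_rng() if rng is None else rng
    mean, var = pg_mean_var(b, c)
    return rng.normal(mean, np.sqrt(var))

# Example: an augmentation draw for a binomial observation with
# n = 500 trials and linear predictor psi = 1.3 (values are made up).
w = sample_pg_gaussian(b=500, c=1.3)
```

In a Gibbs sampler for binomial dynamic models, a draw like this would replace the exact PG(b, c) update whenever the trial count b is large enough for the approximation to hold.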
Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.
The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron level firing patterns in rhesus monkeys. The monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.
The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age specific latent natural ability class and a performance enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When the performance of the player significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.
All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.
Abstract:
In perifusion cell cultures, the culture medium flows continuously through a chamber containing immobilized cells and the effluent is collected at the end. In our main applications, gonadotropin releasing hormone (GnRH) or oxytocin is introduced into the chamber as the input. They stimulate the cells to secrete luteinizing hormone (LH), which is collected in the effluent. To relate the effluent LH concentration to the cellular processes producing it, we develop and analyze a mathematical model consisting of coupled partial differential equations describing the intracellular signaling and the movement of substances in the cell chamber. We analyze three different data sets and give cellular mechanisms that explain the data. Our model indicates that two negative feedback loops, one fast and one slow, are needed to explain the data and we give their biological bases. We demonstrate that different LH outcomes in oxytocin and GnRH stimulations might originate from different receptor dynamics. We analyze the model to understand the influence of parameters, like the rate of the medium flow or the fraction collection time, on the experimental outcomes. We investigate how the rate of binding and dissociation of the input hormone to and from its receptor influence its movement down the chamber. Finally, we formulate and analyze simpler models that allow us to predict the distortion of a square pulse due to hormone-receptor interactions and to estimate parameters using perifusion data. We show that in the limit of high binding and dissociation the square pulse moves as a diffusing Gaussian and in this limit the biological parameters can be estimated.
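To make the final claim concrete, a standard two-state reduction already exhibits the diffusing-Gaussian limit; the notation and the simplification below are ours, not the dissertation's full PDE model.

```latex
% Two-state caricature: free hormone u advects at the medium speed v and
% exchanges with receptor-bound hormone w at rates k_on and k_off.
\begin{aligned}
  \partial_t u + v\,\partial_x u &= -k_{\mathrm{on}}\,u + k_{\mathrm{off}}\,w,\\
  \partial_t w &= k_{\mathrm{on}}\,u - k_{\mathrm{off}}\,w.
\end{aligned}
% In the fast-exchange limit (both rates large with fixed ratio), the total
% concentration c = u + w satisfies an effective advection-diffusion equation
\partial_t c + v_{\mathrm{eff}}\,\partial_x c
  = D_{\mathrm{eff}}\,\partial_x^2 c,
\qquad
v_{\mathrm{eff}} = \frac{v}{1 + k_{\mathrm{on}}/k_{\mathrm{off}}},
\qquad
D_{\mathrm{eff}} = \frac{v^2\,k_{\mathrm{on}}\,k_{\mathrm{off}}}
                        {(k_{\mathrm{on}} + k_{\mathrm{off}})^3}.
```

Under this caricature, an injected square pulse is retarded by the binding equilibrium and relaxes toward a translating Gaussian whose spread is set by the kinetic rates, matching the limit described above.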
Abstract:
Advances in three related areas, state-space modeling, sequential Bayesian learning, and decision analysis, are addressed, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analytical and computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas/problems and the key idea of emulation in those areas. Chapter 2 discusses the sequential analysis of latent threshold models with the use of emulating models that allow for analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, or the synthetic model, that is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming data of counts observed on a large network that relies on emulating the whole, dependent network model by independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews these advances and makes concluding remarks.
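To illustrate the flavor of the per-flow emulation in Chapter 4, the following is a minimal Python sketch of a discount-based Gamma-Poisson filter for a single flow; the parameter names and discount mechanism are a generic conjugate construction assumed by us, not the dissertation's exact sub-model.

```python
import numpy as np

def gamma_poisson_filter(counts, a0=1.0, b0=1.0, discount=0.95):
    """One-flow conjugate emulator: sequential Gamma-Poisson updating
    with discount-based evolution. Prior rate ~ Gamma(a, b); before each
    observation the prior is partially 'forgotten' via the discount
    factor, then updated conjugately by the observed count."""
    a, b = a0, b0
    rate_means = []
    for y in counts:
        a, b = discount * a, discount * b   # evolve: inflate uncertainty
        a, b = a + y, b + 1.0               # conjugate Poisson update
        rate_means.append(a / b)            # posterior mean of the rate
    return np.array(rate_means)

# Each network flow gets its own independent filter, so a large dependent
# network decouples into many cheap, parallel conjugate updates.
rates = gamma_poisson_filter(np.array([3, 5, 4, 8, 7, 6]))
```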
Abstract:
Purpose: To build a model that predicts the survival time of patients treated with stereotactic radiosurgery for brain metastases, using support vector machine (SVM) regression.
Methods and Materials: This study utilized data from 481 patients, which were randomly divided into equal training and validation datasets. The SVM model used a Gaussian RBF kernel, along with parameters such as the size of the epsilon-insensitive region and the cost parameter (C), which control the amount of error tolerated by the model. The predictor variables for the SVM model consisted of the number of brain metastases, the graded prognostic assessment (GPA) and Karnofsky Performance Scale (KPS) scores, the prescription dose, and the largest planning target volume (PTV); the response of the model is the actual survival time of the patient. The resulting survival time predictions were analyzed against the actual survival times by single-parameter classification and two-parameter classification. The predicted mean survival times within each classification were compared with the actual values to obtain the confidence interval associated with the model's predictions. In addition to visualizing the data on plots using the means and error bars, the correlation coefficients between the actual and predicted means of the survival times were calculated during each step of the classification.
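For readers unfamiliar with epsilon-SVR, a minimal scikit-learn sketch of this kind of setup follows; the feature values, the C and epsilon settings, and the standardization step are illustrative assumptions, not the study's actual data or configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Illustrative features per patient (values are invented, not study data):
# [n_metastases, GPA, KPS, prescription_dose_Gy, largest_PTV_cc]
X_train = np.array([[1, 3.0, 90, 20.0, 1.2],
                    [3, 2.0, 70, 18.0, 4.5],
                    [2, 2.5, 80, 20.0, 2.1]])
y_train = np.array([14.0, 6.0, 9.0])  # survival time in months (invented)

# Gaussian RBF kernel; C sets how much error is tolerated and epsilon
# sets the width of the insensitive tube around the regression surface.
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)

predicted_months = model.predict([[2, 2.5, 80, 20.0, 3.0]])
```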
Results: The number of metastases and the KPS score were consistently shown to be the strongest predictors in the single-parameter classification, and were subsequently used as first classifiers in the two-parameter classification. When the survival times were analyzed with the number of metastases as the first classifier, the best correlation was obtained for patients with 3 metastases, while patients with 4 or 5 metastases had significantly worse results. When the KPS score was used as the first classifier, patients with KPS scores of 60 and 90/100 had similarly strong correlation results. These mixed results are likely due to the limited data available for patients with more than 3 metastases or KPS scores of 60 or less.
Conclusions: The number of metastases and the KPS score both proved to be strong predictors of patient survival time. The model was less accurate for patients with more metastases and certain KPS scores due to the lack of training data.
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in various applications, including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed, which are proved to be unbiased, with the variance of the resultant approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
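As a point of reference, the KL divergence between two multivariate Gaussians, which is what a GP reduces to on any finite set of query points, has the familiar closed form sketched below in Python. This is a generic building block, not the dissertation's estimator, and the Monte Carlo averaging over future measurements is only indicated in a comment.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) between multivariate Gaussians.

    A Gaussian process restricted to a finite set of query points is
    exactly such a Gaussian, so this is the basic KL term for comparing
    posterior and prior GP target kinematics models at those points.
    """
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(S0)  # log-determinants, numerically stable
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff
                  - d + logdet1 - logdet0)

# An *expected* KL over future measurements would then be approximated by
# Monte Carlo: simulate candidate measurements, condition the GP on each,
# evaluate gaussian_kl against the prior, and average the results.
```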
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation based on the novel information theoretic functions are superior at learning the target kinematics with little or no prior knowledge.