10 results for alternative modeling approaches

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

The early twenty-first century has seen further industrialization of Earth's resources as society aims to meet the needs of a growing population while still protecting environmental and natural resources. The advent of the industrial bioeconomy – which encompasses the production of renewable biological resources and their conversion into food, feed, and bio-based products – is seen as an important step in the transition towards sustainable development and away from fossil fuels. One rapidly expanding sector of the industrial bioeconomy is the use of biobased feedstocks in electricity production as an alternative to coal, especially in the European Union.

As bioeconomy policies and objectives increasingly appear on political agendas, there is a growing need to quantify the impacts of transitioning from fossil fuel-based feedstocks to renewable biological feedstocks. Specifically, a systems analysis of the potential risks of expanding the industrial bioeconomy is needed, given that the flows within it are inextricably linked. Furthermore, greater analysis is needed of the consequences of shifting from fossil fuels to renewable feedstocks, in part through the use of life cycle assessment modeling to analyze impacts along the entire value chain.

To assess the emerging nature of the industrial bioeconomy, three objectives are addressed: (1) quantify the global industrial bioeconomy, linking the use of primary resources with the ultimate end product; (2) quantify the impacts of the expanding wood pellet energy export market of the Southeastern United States; (3) conduct a comparative life cycle assessment, incorporating the use of dynamic life cycle assessment, of replacing coal-fired electricity generation in the United Kingdom with wood pellets that are produced in the Southeastern United States.

To quantify the emergent industrial bioeconomy, an empirical analysis was undertaken. Existing databases from multiple domestic and international agencies were aggregated and analyzed in Microsoft Excel to produce a harmonized dataset of the bioeconomy. First-person interviews, existing academic literature, and industry reports were then utilized to delineate the various intermediate and end use flows within the bioeconomy. The results indicate that within a decade, the industrial use of agriculture has risen ten percent, given increases in the production of bioenergy and bioproducts. The underlying resources supporting the emergent bioeconomy (i.e., land, water, and fertilizer use) were also quantified and included in the database.

Following the quantification of the existing bioeconomy, an in-depth analysis of the bioenergy sector was conducted. Specifically, the focus was on quantifying the impacts of the emergent wood pellet export sector that has rapidly developed in recent years in the Southeastern United States. A cradle-to-gate life cycle assessment was conducted in order to quantify supply chain impacts from two wood pellet production scenarios: roundwood and sawmill residues. For each of the nine impact categories assessed, wood pellet production from sawmill residues resulted in values 10-31% higher than production from roundwood.

The analysis of the wood pellet sector was then expanded to include the full life cycle (i.e., cradle-to-grave). In doing so, the combustion of biogenic carbon and the subsequent timing of emissions were assessed by incorporating dynamic life cycle assessment modeling. Assuming immediate carbon neutrality of the biomass, the results indicated an 86% reduction in global warming potential when utilizing wood pellets as compared to coal for electricity production in the United Kingdom. When incorporating the timing of emissions, wood pellets equated to a 75% or 96% reduction in carbon dioxide emissions, depending upon whether the forestry feedstock was considered to be harvested or planted in year one, respectively.
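A minimal, hedged illustration of the dynamic-LCA idea (not the model used in the dissertation): the sketch below compares the time-integrated atmospheric burden of a fossil CO2 pulse against a biogenic pulse that hypothetical forest regrowth removes over the following decades, using a commonly quoted Bern-type impulse-response fit for CO2 decay. The amounts, the regrowth schedule, and the 100-year horizon are illustrative assumptions.

# Illustrative dynamic-LCA sketch (not the dissertation's model): compare the
# time-integrated airborne CO2 of a fossil pulse with a biogenic pulse that is
# gradually re-absorbed by forest regrowth. All quantities are placeholders.
import numpy as np

# Bern-type impulse-response fit for the airborne fraction of a CO2 pulse,
# as commonly quoted in the dynamic-LCA literature (treat as approximate).
A = [0.217, 0.259, 0.338, 0.186]
TAU = [float("inf"), 172.9, 18.51, 1.186]  # years; the first term never decays

def airborne_fraction(t):
    """Fraction of an emitted CO2 pulse still airborne t years after emission."""
    return sum(a * np.exp(-t / tau) for a, tau in zip(A, TAU))

def burden(schedule, horizon=100):
    """Yearly-summed airborne CO2 (a crude proxy for cumulative radiative forcing)
    for a schedule {year: tonnes CO2 emitted, negative = uptake}."""
    return sum(
        amount * sum(airborne_fraction(t - year) for t in range(year, horizon + 1))
        for year, amount in schedule.items()
    )

# Hypothetical example: 1 t fossil CO2 at year 0 vs. 1 t biogenic CO2 at year 0
# that regrowth removes evenly over the following 40 years.
coal = {0: 1.0}
pellets = {0: 1.0, **{yr: -1.0 / 40 for yr in range(1, 41)}}

reduction = 1.0 - burden(pellets) / burden(coal)
print(f"illustrative forcing-proxy reduction over 100 years: {reduction:.0%}")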

Finally, a policy analysis of renewable energy in the United States was conducted. Existing coal-fired power plants in the Southeastern United States were assessed in terms of incorporating the co-firing of wood pellets. Co-firing wood pellets with coal in existing Southeastern United States power stations would result in a nine percent reduction in global warming potential.

Relevance:

80.00%

Publisher:

Abstract:

During the past 2 decades, significant advances in our understanding of the humoral immune response to human immunodeficiency virus type 1 (HIV-1) infection have been made, yet a tremendous amount of work lies ahead. Despite these advances, strategies to reliably induce antibodies that can control HIV-1 infection are still critically needed. However, recent advances in our understanding of the kinetics, specificity, and function of early humoral responses offer alternative new approaches to attain this goal. These results, along with the new broadly neutralizing antibody specificities, the role for other antibody functions, the increased understanding of HIV-1-induced changes to B cell biology, and results from the RV144 "Thai" trial showing potential modest sterilizing protection by nonneutralizing antibody responses, have renewed focus on the humoral system. In this review, recent advances in our understanding of the earliest humoral responses are discussed, highlighting presentations from the meeting on the Biology of Acute HIV Infection.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Diagnostic imaging represents the fastest growing segment of costs in the US health system. This study investigated the cost-effectiveness of alternative diagnostic approaches to meniscus tears of the knee, a highly prevalent disease that traditionally relies on MRI as part of the diagnostic strategy. PURPOSE: To identify the most efficient strategy for the diagnosis of meniscus tears. STUDY DESIGN: Economic and decision analysis; Level of evidence, 1. METHODS: A simple-decision model run as a cost-utility analysis was constructed to assess the value added by MRI in various combinations with patient history and physical examination (H&P). The model examined traumatic and degenerative tears in 2 distinct settings: primary care and orthopaedic sports medicine clinic. Strategies were compared using the incremental cost-effectiveness ratio (ICER). RESULTS: In both practice settings, H&P alone was widely preferred for degenerative meniscus tears. Performing MRI to confirm a positive H&P was preferred for traumatic tears in both practice settings, with a willingness to pay of less than US$50,000 per quality-adjusted life-year. Performing an MRI for all patients was not preferred in any reasonable clinical scenario. The prevalence of a meniscus tear in a clinician's patient population was influential. For traumatic tears, MRI to confirm a positive H&P was preferred when prevalence was less than 46.7%, with H&P alone preferred above that. For degenerative tears, H&P alone was preferred until the prevalence reached 74.2%, after which MRI to confirm a negative H&P was the preferred strategy. In both settings, MRI to confirm a positive physical examination led to more than a 10-fold lower rate of unnecessary surgeries than did any other strategy, while MRI to confirm a negative physical examination led to a 2.08- and 2.26-fold higher rate than H&P alone in primary care and orthopaedic clinics, respectively. CONCLUSION: For all practitioners, H&P is the preferred strategy for the suspected degenerative meniscus tear. An MRI to confirm a positive H&P is preferred for traumatic tears for all practitioners. Consideration should be given to implementing alternative diagnostic strategies as well as enhancing provider education in physical examination skills to improve the reliability of H&P as a diagnostic test. CLINICAL RELEVANCE: Alternative diagnostic strategies that do not include the use of MRI may result in decreased health care costs without harm to the patient and could possibly reduce unnecessary procedures.
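For readers less familiar with the decision-analytic machinery referenced above, a minimal sketch of the incremental cost-effectiveness ratio (ICER) comparison follows; the costs and quality-adjusted life-year (QALY) values are hypothetical placeholders, not figures from this study.

# Minimal ICER sketch with hypothetical costs and QALYs (not values from the study).

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained when moving from strategy B to strategy A."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

WTP = 50_000  # willingness-to-pay threshold, US$ per QALY

# Hypothetical strategies for a suspected traumatic meniscus tear.
hp_only        = {"cost": 1_200.0, "qaly": 0.780}   # history & physical exam alone
mri_confirm_hp = {"cost": 1_900.0, "qaly": 0.796}   # MRI only after a positive H&P

ratio = icer(mri_confirm_hp["cost"], mri_confirm_hp["qaly"],
             hp_only["cost"], hp_only["qaly"])
print(f"ICER = ${ratio:,.0f} per QALY -> "
      f"{'preferred' if ratio < WTP else 'not preferred'} at ${WTP:,}/QALY")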

Relevance:

80.00%

Publisher:

Abstract:

Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, the progress in improving model predictability, data interpretation or the combination of the two remains impeded by the lack of precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances can be further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle that is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use managements) under current and future climate regimes.

With the goal to improve the predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rate of gases and particles is evaluated with detailed descriptions of their limitations. Mechanistic-based modeling approaches along with empirical studies across different scales are employed to refine the mathematical descriptions of surface conductance responsible for gas and particle exchanges as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size and leaf microroughness impact the aerodynamic attributes and thereby the ultrafine particle collection efficiency at the leaf/branch scale is explored using wind tunnel experiments with interpretations by a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model combined with particle fluxes and concentration measurements within and above a forest is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto canopy medium and forest floor. For gases, a modeling framework accounting for the leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed by a porous media model that accommodates the transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and soil-root interaction model. It should be noted that tackling all aspects of potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystem and the climate is unrealistic in a single dissertation, but further research questions and opportunities based on the foundation derived from this dissertation are also briefly discussed.
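As a toy illustration of the leaf-level coupling between the stomatal and boundary-layer pathways examined above (not the dissertation's calibrated formulation), the sketch below places the two conductances in series and lets the boundary-layer term grow with wind speed; the square-root scaling coefficient and all parameter values are hypothetical.

# Toy sketch of leaf-level gas exchange: stomatal and boundary-layer conductances
# act in series, and the boundary-layer term increases with wind speed.
# Coefficients and values are illustrative placeholders.
import numpy as np

def boundary_layer_conductance(u, leaf_dim, c=0.15):
    """Laminar boundary-layer conductance (mol m^-2 s^-1), assumed ~ sqrt(u / d)."""
    return c * np.sqrt(u / leaf_dim)

def total_conductance(g_s, g_b):
    """Series combination of stomatal (g_s) and boundary-layer (g_b) conductances."""
    return 1.0 / (1.0 / g_s + 1.0 / g_b)

g_s = 0.2              # mol m^-2 s^-1, hypothetical stomatal conductance to water vapor
leaf_dim = 0.05        # m, characteristic leaf dimension
vpd_mole_frac = 0.015  # mol mol^-1, hypothetical leaf-to-air vapor mole fraction deficit

for u in (0.5, 1.0, 2.0, 4.0):  # wind speed, m s^-1
    g_b = boundary_layer_conductance(u, leaf_dim)
    E = total_conductance(g_s, g_b) * vpd_mole_frac  # transpiration, mol m^-2 s^-1
    print(f"u = {u:4.1f} m/s  g_b = {g_b:.3f}  E = {E * 1e3:.3f} mmol m^-2 s^-1")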

Relevance:

30.00%

Publisher:

Abstract:

The approach used to model technological change in a climate policy model is a critical determinant of its results in terms of the time path of CO2 prices and costs required to achieve various emission reduction goals. We provide an overview of the different approaches used in the literature, with an emphasis on recent developments regarding endogenous technological change, research and development, and learning. Detailed examination sheds light on the salient features of each approach, including strengths, limitations, and policy implications. Key issues include proper accounting for the opportunity costs of climate-related knowledge generation, treatment of knowledge spillovers and appropriability, and the empirical basis for parameterizing technological relationships. No single approach appears to dominate on all these dimensions, and different approaches may be preferred depending on the purpose of the analysis, be it positive or normative. © 2008 Elsevier B.V. All rights reserved.
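As a concrete instance of one widely used approach surveyed above, the snippet below evaluates a one-factor learning-by-doing curve, in which unit cost falls by a fixed learning rate with each doubling of cumulative capacity; the initial cost and learning rate are hypothetical placeholders.

# One-factor learning-by-doing curve, a common way to endogenize technological
# change in climate policy models. Numbers below are hypothetical placeholders.
import math

def learning_curve_cost(cum_capacity, c0, x0, learning_rate):
    """Unit cost after cumulative deployment cum_capacity, given initial cost c0
    at cumulative capacity x0 and a fractional cost drop per doubling."""
    b = -math.log2(1.0 - learning_rate)      # learning exponent
    return c0 * (cum_capacity / x0) ** (-b)

c0, x0, lr = 1000.0, 1.0, 0.15               # $/kW at 1 GW cumulative; 15% per doubling
for x in (1, 2, 4, 8, 16):                   # cumulative capacity, GW
    print(f"{x:>2} GW -> {learning_curve_cost(x, c0, x0, lr):7.1f} $/kW")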

Relevance:

30.00%

Publisher:

Abstract:

Economic analyses of climate change policies frequently focus on reductions of energy-related carbon dioxide emissions via market-based, economy-wide policies. The current course of environment and energy policy debate in the United States, however, suggests an alternative outcome: sector-based and/or inefficiently designed policies. This paper uses a collection of specialized, sector-based models in conjunction with a computable general equilibrium model of the economy to examine and compare these policies at an aggregate level. We examine the relative cost of different policies designed to achieve the same quantity of emission reductions. We find that excluding a limited number of sectors from an economy-wide policy does not significantly raise costs. Focusing policy solely on the electricity and transportation sectors doubles costs, however, and using non-market policies can raise costs by a factor of ten. These results are driven in part by, and are sensitive to, our modeling of pre-existing tax distortions. Copyright © 2006 by the IAEE. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

DNaseI footprinting is an established assay for identifying transcription factor (TF)-DNA interactions with single base pair resolution. High-throughput DNase-seq assays have recently been used to detect in vivo DNase footprints across the genome. Multiple computational approaches have been developed to identify DNase-seq footprints as predictors of TF binding. However, recent studies have pointed to a substantial cleavage bias of DNase and its negative impact on predictive performance of footprinting. To assess the potential for using DNase-seq to identify individual binding sites, we performed DNase-seq on deproteinized genomic DNA and determined sequence cleavage bias. This allowed us to build bias corrected and TF-specific footprint models. The predictive performance of these models demonstrated that predicted footprints corresponded to high-confidence TF-DNA interactions. DNase-seq footprints were absent under a fraction of ChIP-seq peaks, which we show to be indicative of weaker binding, indirect TF-DNA interactions or possible ChIP artifacts. The modeling approach was also able to detect variation in the consensus motifs that TFs bind to. Finally, cell type specific footprints were detected within DNase hypersensitive sites that are present in multiple cell types, further supporting that footprints can identify changes in TF binding that are not detectable using other strategies.
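To make the bias-correction idea concrete, here is a minimal sketch of the general strategy of estimating per-k-mer cleavage preferences from deproteinized (naked) DNA and rescaling observed cut counts; the 6-mer window, function names, and data structures are illustrative assumptions rather than the study's actual pipeline.

# Minimal sketch of DNase cleavage-bias correction: estimate how often DNase cuts
# within each k-mer context on naked (deproteinized) DNA, then divide observed
# cut counts by that expectation. Window size and structures are illustrative.
from collections import Counter

K = 6  # k-mer context centered on the cut site (assumed choice)

def kmer_bias(naked_cuts, genome, k=K):
    """Relative cleavage propensity per k-mer, estimated from naked-DNA cuts.

    naked_cuts: dict {position: cut count} on reference sequence `genome`.
    Returns {kmer: observed_fraction / background_fraction}.
    """
    half = k // 2
    obs, bg = Counter(), Counter()
    for pos in range(half, len(genome) - half):
        kmer = genome[pos - half : pos + half]
        bg[kmer] += 1
        obs[kmer] += naked_cuts.get(pos, 0)
    n_obs, n_bg = sum(obs.values()), sum(bg.values())
    return {km: (obs[km] / n_obs) / (bg[km] / n_bg) for km in bg if obs[km]}

def correct_cuts(cuts, genome, bias, k=K):
    """Divide each observed cut count by the cleavage propensity of its context."""
    half = k // 2
    corrected = {}
    for pos, count in cuts.items():
        if pos < half or pos + half > len(genome):
            corrected[pos] = count          # too close to the edge to assign a context
            continue
        kmer = genome[pos - half : pos + half]
        corrected[pos] = count / bias.get(kmer, 1.0)
    return corrected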

Relevance:

30.00%

Publisher:

Abstract:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences (corrected for panel attrition) are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
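A hedged toy simulation of why refreshment samples matter is sketched below: attrition is made to depend on the wave-2 outcome itself, so the complete-case mean is biased, while an independent refreshment sample drawn at wave 2 recovers the wave-2 marginal and exposes the size of the bias. This illustrates the general idea only, not the sensitivity-analysis machinery developed in the thesis.

# Toy illustration (not the thesis's model): nonignorable attrition biases the
# complete-case wave-2 mean, while a wave-2 refreshment sample recovers the
# population wave-2 marginal and exposes the size of the bias.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Wave-1 and wave-2 outcomes for the original panel (true E[y2] = 0).
y1 = rng.normal(0.0, 1.0, n)
y2 = 0.6 * y1 + rng.normal(0.0, 0.8, n)

# Attrition depends on y2 itself (nonignorable): higher y2 -> lower retention.
p_stay = 1.0 / (1.0 + np.exp(0.5 + y2))
stay = rng.random(n) < p_stay

# Refreshment sample: a fresh random draw from the wave-2 population.
y1_new = rng.normal(0.0, 1.0, 20_000)
refresh = 0.6 * y1_new + rng.normal(0.0, 0.8, 20_000)

print("true E[y2] is 0 by construction")
print(f"complete-case wave-2 mean:      {y2[stay].mean():+.3f}  (attrition bias)")
print(f"refreshment-sample wave-2 mean: {refresh.mean():+.3f}")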

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
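The augmentation step lends itself to a short sketch. Below, hypothetical prior beliefs about one variable's margin are converted into synthetic records whose remaining variables are left missing, and the synthetic rows are appended to the observed data; variable names, category labels, and record counts are placeholders, and the MCMC fitting step is omitted.

# Sketch of prior-as-augmented-data for a latent class model: synthetic rows match
# the prior belief about one variable's margin and leave the other variables missing.
# Variable names, categories, and counts are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
observed = pd.DataFrame({
    "education": rng.choice(["HS", "BA", "Grad"], size=500),
    "employment": rng.choice(["employed", "unemployed"], size=500),
})

# Prior belief about the education margin; n_aug controls how strongly it is weighted.
prior_margin = {"HS": 0.55, "BA": 0.30, "Grad": 0.15}
n_aug = 200  # more augmented records -> tighter prior

aug_education = np.repeat(
    list(prior_margin.keys()),
    [round(p * n_aug) for p in prior_margin.values()],
)
augmented = pd.DataFrame({
    "education": aug_education,
    "employment": pd.NA,          # remaining variables left missing by design
})

combined = pd.concat([observed, augmented], ignore_index=True)
print(combined["education"].value_counts(normalize=True))
# `combined` would then be passed to an MCMC sampler for the latent class model
# that treats the missing employment values as additional unknowns.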

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
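As a minimal, hedged sketch of the reporting-error idea (not the thesis's full model), the snippet below estimates a misclassification table P(reported | true) from records linked to a gold-standard source and inverts it with Bayes' rule to obtain the implied distribution of true values given a reported value; the education categories and linked records are hypothetical.

# Sketch of a reporting-error model: estimate P(reported | true) from linked
# gold-standard records, then invert with Bayes' rule to get P(true | reported).
# Labels and the linked table below are hypothetical placeholders.
import pandas as pd

# Hypothetical linked records: gold-standard ("true") vs. survey ("reported") education.
linked = pd.DataFrame({
    "true":     ["HS", "HS", "BA", "BA", "BA", "Grad", "Grad", "HS", "BA", "Grad"],
    "reported": ["HS", "BA", "BA", "BA", "Grad", "Grad", "Grad", "HS", "BA", "Grad"],
})

levels = ["HS", "BA", "Grad"]
# Rows: true level; columns: reported level; entries: P(reported | true).
report_given_true = (
    pd.crosstab(linked["true"], linked["reported"], normalize="index")
      .reindex(index=levels, columns=levels, fill_value=0.0)
)

# Prior over true levels (here taken from the gold-standard margin).
prior_true = linked["true"].value_counts(normalize=True).reindex(levels).to_numpy()

def true_given_reported(reported):
    """P(true | reported) via Bayes' rule, as a pandas Series over levels."""
    likelihood = report_given_true[reported].to_numpy()   # P(reported | true) per true level
    posterior = likelihood * prior_true
    return pd.Series(posterior / posterior.sum(), index=levels)

print(true_given_reported("BA"))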

Relevance:

30.00%

Publisher:

Abstract:

The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.

We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures for the internal state of and electronic transport within the electrode.
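For readers unfamiliar with fabric tensors, a brief sketch follows: the second-order fabric tensor is the average outer product of inter-particle contact unit normals, and the spread of its eigenvalues summarizes contact anisotropy. The contact normals below are synthetic stand-ins, not the tomography-derived contacts analyzed in this work.

# Second-order fabric tensor from inter-particle contact normals:
#   F = (1/N) * sum_i n_i (outer) n_i
# The contact normals below are synthetic stand-ins for tomography-derived ones.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic contact normals, mildly biased toward the z-axis (as if compacted).
normals = rng.normal(size=(5000, 3)) * np.array([1.0, 1.0, 1.6])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

fabric = np.einsum("ni,nj->ij", normals, normals) / len(normals)  # trace(F) == 1

eigvals = np.sort(np.linalg.eigvalsh(fabric))[::-1]
anisotropy = eigvals[0] - eigvals[-1]   # 0 for an isotropic contact distribution

print("fabric tensor:\n", np.round(fabric, 3))
print("eigenvalues:", np.round(eigvals, 3), " anisotropy:", round(anisotropy, 3))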

We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide computational savings.

The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.

Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.

The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.

Relevance:

30.00%

Publisher:

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
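To make the CTMC framing concrete, here is a hedged toy simulation of an exciton hopping among chromophore sites until it is emitted as a photon (an absorbing state); the transition rates are arbitrary placeholders, and the simulated absorption times follow a phase-type distribution by construction.

# Toy continuous-time Markov chain with an absorbing "photon emitted" state.
# Times to absorption follow a phase-type distribution. Rates are placeholders,
# not measured RET rates.
import numpy as np

rng = np.random.default_rng(7)

# States 0-2: chromophore sites; state 3: photon emitted (absorbing).
# Q[i, j] is the transition rate i -> j (ns^-1); diagonal entries are unused here.
Q = np.array([
    [0.0, 2.0, 0.5, 0.3],
    [1.0, 0.0, 1.5, 0.8],
    [0.2, 0.5, 0.0, 2.5],
    [0.0, 0.0, 0.0, 0.0],
])
ABSORBING = 3

def sample_emission_time(start=0):
    """Simulate one exciton trajectory and return its time to photon emission."""
    state, t = start, 0.0
    while state != ABSORBING:
        rates = Q[state]
        total = rates.sum()
        t += rng.exponential(1.0 / total)                # holding time in current state
        state = rng.choice(len(rates), p=rates / total)  # jump proportional to rates
    return t

samples = np.array([sample_emission_time() for _ in range(10_000)])
print(f"mean emission time: {samples.mean():.3f} ns, "
      f"90th percentile: {np.percentile(samples, 90):.3f} ns")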

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.