10 results for Wide-angle seismic modeling

at Duke University


Relevance:

40.00%

Publisher:

Abstract:

Economic analyses of climate change policies frequently focus on reductions of energy-related carbon dioxide emissions via market-based, economy-wide policies. The current course of environmental and energy policy debate in the United States, however, suggests an alternative outcome: sector-based and/or inefficiently designed policies. This paper uses a collection of specialized, sector-based models in conjunction with a computable general equilibrium model of the economy to examine and compare these policies at an aggregate level. We examine the relative cost of different policies designed to achieve the same quantity of emission reductions. We find that excluding a limited number of sectors from an economy-wide policy does not significantly raise costs. Focusing policy solely on the electricity and transportation sectors doubles costs, however, and using non-market policies can raise costs by a factor of ten. These results are driven in part by, and are sensitive to, our modeling of pre-existing tax distortions. Copyright © 2006 by the IAEE. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies that overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error procedures are needed. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise, and extracts principal components from the singular value decomposition of this large matrix of linearly dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward-difference matrix, or a central-difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by the mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil-structure interaction model.
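
To make the decomposition step concrete, the following sketch, which is only a schematic under assumed array shapes and window lengths and not the dissertation's implementation, builds a Hankel matrix for each measured floor acceleration record, stacks the blocks row-wise, and extracts leading principal components from a singular value decomposition.

```python
# Schematic only: synthetic accelerations stand in for measured floor responses,
# and the window length and number of components are arbitrary assumptions.
import numpy as np

def hankel_blocks(acc, window):
    """Build a Hankel matrix (window x time-shifts) for each measured floor record."""
    blocks = []
    for x in acc:                                          # one measured floor at a time
        H = np.lib.stride_tricks.sliding_window_view(x, window).T
        blocks.append(H)                                   # shape (window, n_samples - window + 1)
    return blocks

def principal_components(acc, window=256, n_components=4):
    """Stack per-floor Hankel blocks row-wise; return leading principal component time series."""
    stacked = np.vstack(hankel_blocks(acc, window))        # linearly dependent components, stacked row-wise
    U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
    return Vt[:n_components], s[:n_components]             # right singular vectors as time series

# Example with synthetic accelerations: 3 measured floors, 2000 samples each.
rng = np.random.default_rng(0)
acc = rng.standard_normal((3, 2000))
pcs, s = principal_components(acc)
print(pcs.shape, s)
```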

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating shear wave velocity profiles from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from OpenSees simulations against the recorded measurements. General explanations and implications of the effects of soil-structure interaction are presented, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations.

Relevance:

30.00%

Publisher:

Abstract:

When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives relating to the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated.

Relevance:

30.00%

Publisher:

Abstract:

During mitotic cell cycles, DNA is exposed to many types of endogenous and exogenous damaging agents that can cause double-strand breaks (DSBs). In S. cerevisiae, DSBs are primarily repaired by mitotic recombination, which can lead to loss of heterozygosity (LOH). Genetic recombination occurs in both meiosis and mitosis. While the genome-wide distribution of meiotic recombination events has been studied intensively, mitotic recombination events were not mapped in an unbiased fashion across the genome until recently. Methods for selecting mitotic crossovers and mapping their positions have recently been developed in our lab. Our current approach uses a diploid yeast strain that is heterozygous for about 55,000 SNPs and employs SNP microarrays to map LOH events throughout the genome. These methods allow us to examine selected crossovers and unselected mitotic recombination events (crossover, noncrossover, and BIR) at about 1 kb resolution across the genome. Using this method, we generated maps of spontaneous and UV-induced LOH events. In this study, we explore machine learning and variable selection techniques to build a predictive model of where LOH events occur in the genome.

We simulated control tracts drawn at random from the yeast genome, matched to the LOH tracts in tract length and in location relative to single-nucleotide polymorphism positions. We then extracted roughly 1,100 features, such as base composition, histone modifications, and the presence of tandem repeats, and trained classifiers to distinguish control tracts from LOH tracts. We identified several features with good predictive value. We also found that, with the current repertoire of features, prediction is generally better for spontaneous LOH events than for UV-induced LOH events.
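
As a rough illustration of this classification setup (not the study's actual pipeline), the sketch below trains a random forest on a placeholder feature matrix standing in for the roughly 1,100 genomic features and reports a cross-validated AUC; the data, feature count, and classifier choice are assumptions for illustration.

```python
# Illustration only: synthetic features and labels stand in for the real
# LOH-tract and control-tract feature tables described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tracts, n_features = 400, 1100
X = rng.standard_normal((n_tracts, n_features))   # placeholder feature matrix
y = rng.integers(0, 2, size=n_tracts)             # 1 = LOH tract, 0 = control tract

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())

# Feature importances give one simple way to rank features by predictive value.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("top feature indices:", top)
```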

Relevance:

30.00%

Publisher:

Abstract:

Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TF) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often only provides partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference on the protein-DNA interaction landscape.

We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
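
The following sketch illustrates the general shape of such an inference, purely as a toy: a Metropolis-style search proposes changes to a binary protein-occupancy configuration, scores each proposal by the correlation between a simple smeared-coverage forward model and an observed MNase-seq profile, and accepts or rejects with a Metropolis rule. The forward model, the synthetic data, and the single-site flip proposal are placeholder assumptions, not the dissertation's model.

```python
# Toy Metropolis search over occupancy configurations with a correlation objective.
import numpy as np

rng = np.random.default_rng(1)
L = 500                                        # length of the genomic region (bp)
observed = rng.random(L)                       # placeholder MNase-seq coverage profile

def predicted_coverage(occupancy, width=147):
    """Toy forward model: smear binary occupancy into a coverage profile."""
    kernel = np.ones(width) / width
    return np.convolve(occupancy, kernel, mode="same")

def objective(occupancy):
    """Correlation between model-predicted and observed coverage."""
    return np.corrcoef(predicted_coverage(occupancy), observed)[0, 1]

occupancy = rng.integers(0, 2, size=L).astype(float)
score = objective(occupancy)
beta = 50.0                                    # inverse temperature for the Metropolis rule
for step in range(20000):
    pos = rng.integers(L)
    proposal = occupancy.copy()
    proposal[pos] = 1.0 - proposal[pos]        # flip occupancy at one position
    new_score = objective(proposal)
    if np.log(rng.random()) < beta * (new_score - score):
        occupancy, score = proposal, new_score
print("final correlation objective:", round(score, 3))
```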

We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.
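
A toy version of a Bayes-factor score for a candidate nucleosome position is sketched below, under assumed Poisson cut-count models rather than the dissertation's actual likelihoods: DNase I cut counts in a 147 bp window are compared under an oscillatory-plus-quadratic nucleosome digestion profile versus a flat background.

```python
# Illustration only: the digestion profiles and cut counts are invented placeholders.
import numpy as np
from scipy.stats import poisson

WIDTH = 147
x = np.arange(WIDTH)
# Placeholder profiles: a ~10.3 bp oscillation plus quadratic protection for the
# nucleosome model, and a uniform rate for the background model.
nuc_profile = 1.0 + 0.5 * np.cos(2 * np.pi * x / 10.3) - 2.0 * ((x - 73) / 73.0) ** 2
nuc_profile = np.clip(nuc_profile, 0.05, None)
bg_profile = np.ones(WIDTH)

def log_bayes_factor(cuts, depth=1.0):
    """log P(cuts | nucleosome) - log P(cuts | background), independent Poisson counts."""
    lam_nuc = depth * nuc_profile / nuc_profile.mean()
    lam_bg = depth * bg_profile
    return poisson.logpmf(cuts, lam_nuc).sum() - poisson.logpmf(cuts, lam_bg).sum()

# Example: score a window of simulated cut counts drawn from the nucleosome model.
rng = np.random.default_rng(2)
cuts = rng.poisson(2.0 * nuc_profile / nuc_profile.mean())
print("log Bayes factor:", round(log_bayes_factor(cuts, depth=2.0), 2))
```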

Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.
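
As a highly simplified stand-in for such a state-space model (assumptions throughout, not the dissertation's formulation), the sketch below runs EM on a two-state hidden Markov model in which a hidden bound/unbound state emits Poisson read counts for two observed data tracks, with forward-backward posteriors used to re-estimate the emission rates.

```python
# Toy joint model: two synthetic count tracks share one hidden binary state sequence.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
T = 2000
true_states = (rng.random(T) < 0.1).astype(int)
obs = np.stack([rng.poisson(np.where(true_states, 6.0, 1.0)),
                rng.poisson(np.where(true_states, 4.0, 1.5))])   # two observed tracks

A = np.array([[0.95, 0.05], [0.3, 0.7]])       # fixed transition matrix (assumed)
rates = np.array([[1.0, 5.0], [1.0, 5.0]])     # emission rates [track, state], initial guess

for _ in range(20):                            # EM iterations
    # E-step: scaled forward-backward over the joint emission likelihoods.
    logB = sum(poisson.logpmf(obs[d][:, None], rates[d]) for d in range(2))
    B = np.exp(logB - logB.max(axis=1, keepdims=True))
    alpha = np.zeros((T, 2))
    beta = np.ones((T, 2))
    alpha[0] = np.array([0.9, 0.1]) * B[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    # M-step: posterior-weighted Poisson rate updates for each track and state.
    rates = np.stack([gamma.T @ obs[d] / gamma.sum(axis=0) for d in range(2)])

print("estimated emission rates per state:\n", np.round(rates, 2))
```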

This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.

Relevance:

30.00%

Publisher:

Abstract:

Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally derived quantities are valuable benchmarks for first-principles theoretical predictions of harmonic and non-harmonic thermal behavior using perturbation theory, ab initio molecular dynamics, or Monte Carlo simulations. We illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperatures. Results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.
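
For instance, the harmonic vibrational entropy can be evaluated numerically from a phonon density of states via the Bose-Einstein occupation. The sketch below is not taken from the paper and uses a placeholder Debye-like DOS; a measured g(E) would be substituted in practice.

```python
# Harmonic vibrational entropy from a phonon DOS (placeholder DOS, illustration only).
import numpy as np

K_B = 8.617333262e-5          # Boltzmann constant, eV/K

def harmonic_entropy(energies, dos, temperature):
    """Vibrational entropy in units of k_B per atom; dos normalized to 3 modes/atom."""
    n = 1.0 / np.expm1(energies / (K_B * temperature))       # Bose-Einstein occupation
    integrand = dos * ((n + 1.0) * np.log1p(n) - n * np.log(n))
    return np.trapz(integrand, energies)

# Placeholder Debye-like DOS up to 40 meV, normalized to 3 modes per atom.
E = np.linspace(1e-4, 0.040, 2000)             # eV
g = E ** 2
g *= 3.0 / np.trapz(g, E)

print("S_harm at 300 K:", round(harmonic_entropy(E, g, 300.0), 3), "k_B/atom")
```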

Relevance:

30.00%

Publisher:

Abstract:

The goal of my Ph.D. thesis is to enhance the visualization of the peripheral retina using wide-field optical coherence tomography (OCT) in a clinical setting.

OCT has gained widespread adoption in clinical ophthalmology due to its ability to visualize diseases of the macula and central retina in three dimensions; however, clinical OCT has a limited field of view of 30°. There has been increasing interest in obtaining high-resolution images outside of this narrow field of view, because three-dimensional imaging of the peripheral retina may prove to be important in the early detection of neurodegenerative diseases, such as Alzheimer's disease and dementia, and in the monitoring of known ocular diseases, such as diabetic retinopathy, retinal vein occlusions, and choroidal masses.

Before attempting to build a wide-field OCT system, we need to better understand the peripheral optics of the human eye. Shack-Hartmann wavefront sensors are commonly used tools for measuring the optical imperfections of the eye, but their acquisition speed is limited by their underlying camera hardware. The first aim of my thesis research is to create a fast method of ocular wavefront sensing such that we can measure the wavefront aberrations at numerous points across a wide visual field. In order to address aim one, we will develop a sparse Zernike reconstruction technique (SPARZER) that will enable Shack-Hartmann wavefront sensors to use as little as 1/10th of the data that would normally be required for an accurate wavefront reading. If less data needs to be acquired, then we can increase the speed at which wavefronts can be recorded.
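
The sketch below illustrates the general idea of sparse recovery of Zernike coefficients from a small fraction of Shack-Hartmann measurements; it is not SPARZER itself, and the measurement matrix, mode count, and sparsity pattern are invented placeholders.

```python
# Illustration of sparse (L1-regularized) recovery of Zernike coefficients from
# a subsampled set of wavefront-sensor measurements; all quantities are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_modes, n_lenslets = 36, 400
A = rng.standard_normal((n_lenslets, n_modes))        # placeholder Zernike-to-measurement matrix

coeffs_true = np.zeros(n_modes)
coeffs_true[[1, 3, 4, 11]] = [0.8, -0.5, 0.3, 0.2]    # a few dominant aberration modes
measurements = A @ coeffs_true

# Keep only ~10% of the measurements, as motivated above.
keep = rng.choice(n_lenslets, size=n_lenslets // 10, replace=False)
fit = Lasso(alpha=1e-3, max_iter=50000).fit(A[keep], measurements[keep])

print("recovered nonzero modes:", np.flatnonzero(np.abs(fit.coef_) > 0.05))
```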

For my second aim, we will create a sophisticated optical model that reproduces the measured aberrations of the human eye. If we know how the average eye's optics distort light, then we can engineer ophthalmic imaging systems that preemptively cancel inherent ocular aberrations. This invention will help the retinal imaging community to design systems that are capable of acquiring high resolution images across a wide visual field. The proposed model eye is also of interest to the field of vision science as it aids in the study of how anatomy affects visual performance in the peripheral retina.

Using the optical model from aim two, we will design and reduce to practice a clinical OCT system that is capable of imaging a large (80°) field of view with enhanced visualization of the peripheral retina. A key aspect of this third and final aim is to make the imaging system compatible with standard clinical practices. To this end, we will incorporate sensorless adaptive optics in order to correct for inter- and intra-patient variability in ophthalmic aberrations. Sensorless adaptive optics will improve both the brightness (signal) and clarity (resolution) of features in the peripheral retina without affecting the size of the imaging system.

The proposed work should not only be a noteworthy contribution to the ophthalmic and engineering communities, but should also strengthen our existing collaborations with the Duke Eye Center by advancing their capability to diagnose pathologies of the peripheral retina.

Relevance:

30.00%

Publisher:

Abstract:

Magnetic resonance imaging is a research and clinical tool that has been applied in a wide variety of sciences. One area of magnetic resonance imaging that has exhibited terrific promise and growth in the past decade is magnetic susceptibility imaging. Imaging tissue susceptibility provides insight into the microstructural organization and chemical properties of biological tissues, but this image contrast is not well understood. The purpose of this work is to develop effective approaches to image, assess, and model the mechanisms that generate both isotropic and anisotropic magnetic susceptibility contrast in biological tissues, including myocardium and central nervous system white matter.

This document contains the first report of MRI-measured susceptibility anisotropy in myocardium. Intact mouse heart specimens were scanned using MRI at 9.4 T to ascertain both the magnetic susceptibility and the myofiber orientation of the tissue. The susceptibility anisotropy of myocardium was observed and measured by relating the apparent tissue susceptibility to the myofiber angle with respect to the applied magnetic field. A multi-filament model of myocardial tissue revealed that the diamagnetically anisotropic α-helix peptide bonds in myofilament proteins are capable of producing bulk susceptibility anisotropy on a scale measurable by MRI, and are potentially the chief sources of the experimentally observed anisotropy.
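
One common way to quantify this kind of anisotropy, shown below for an assumed cylindrically symmetric susceptibility tensor and synthetic data rather than the study's actual model or measurements, is to regress the apparent susceptibility against the squared cosine of the myofiber angle with respect to B0.

```python
# chi_app = chi_perp + (chi_par - chi_perp) * cos^2(theta): a linear fit in cos^2(theta).
# All angles and susceptibility values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
theta = np.deg2rad(rng.uniform(0, 90, 200))              # fiber angle w.r.t. B0
chi_par, chi_perp = -0.12, -0.05                         # placeholder values (ppm)
chi_app = chi_par * np.cos(theta) ** 2 + chi_perp * np.sin(theta) ** 2
chi_app += 0.005 * rng.standard_normal(theta.size)       # measurement noise

slope, intercept = np.polyfit(np.cos(theta) ** 2, chi_app, 1)
print("estimated chi_perp:", round(intercept, 3))
print("estimated anisotropy (chi_par - chi_perp):", round(slope, 3))
```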

The growing use of paramagnetic contrast agents in magnetic susceptibility imaging motivated a series of investigations regarding the effect of these exogenous agents on susceptibility imaging in the brain, heart, and kidney. In each of these organs, gadolinium increases susceptibility contrast and anisotropy, though the enhancements depend on the tissue type, compartmentalization of contrast agent, and complex multi-pool relaxation. In the brain, the introduction of paramagnetic contrast agents actually makes white matter tissue regions appear more diamagnetic relative to the reference susceptibility. Gadolinium-enhanced MRI yields tensor-valued susceptibility images with eigenvectors that more accurately reflect the underlying tissue orientation.

Despite the boost gadolinium provides, tensor-valued susceptibility image reconstruction is prone to image artifacts. A novel algorithm was developed to mitigate these artifacts by incorporating orientation-dependent tissue relaxation information into susceptibility tensor estimation. The technique was verified using a numerical phantom simulation, and improves susceptibility-based tractography in the brain, kidney, and heart. This work represents the first successful application of susceptibility-based tractography to a whole, intact heart.

The knowledge and tools developed throughout the course of this research were then applied to studying mouse models of Alzheimer’s disease in vivo, and studying hypertrophic human myocardium specimens ex vivo. Though a preliminary study using contrast-enhanced quantitative susceptibility mapping has revealed diamagnetic amyloid plaques associated with Alzheimer’s disease in the mouse brain ex vivo, non-contrast susceptibility imaging was unable to precisely identify these plaques in vivo. Susceptibility tensor imaging of human myocardium specimens at 9.4 T shows that susceptibility anisotropy is larger and mean susceptibility is more diamagnetic in hypertrophic tissue than in normal tissue. These findings support the hypothesis that myofilament proteins are a source of susceptibility contrast and anisotropy in myocardium. This collection of preclinical studies provides new tools and context for analyzing tissue structure, chemistry, and health in a variety of organs throughout the body.

Relevance:

30.00%

Publisher:

Abstract:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
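
The following sketch illustrates the augmentation idea with placeholder variables and a hypothetical prior margin; it is not the thesis's implementation, and the latent class model fitting itself is omitted.

```python
# Append synthetic records whose values for a chosen margin match a prior belief,
# while all other variables are left missing; more records imply a tighter prior.
# Variable names, the prior margin, and record counts are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Original categorical data (placeholder survey variables).
original = pd.DataFrame({
    "education": rng.choice(["HS", "BA", "Grad"], size=1000, p=[0.5, 0.35, 0.15]),
    "employment": rng.choice(["employed", "unemployed"], size=1000, p=[0.9, 0.1]),
})

# Prior belief about the marginal distribution of education, and how many
# synthetic records to append (controls the strength of the prior).
prior_margin = {"HS": 0.45, "BA": 0.40, "Grad": 0.15}
n_augment = 300

synthetic = pd.DataFrame({
    "education": rng.choice(list(prior_margin), size=n_augment, p=list(prior_margin.values())),
    "employment": pd.NA,          # remaining variables left missing
})

augmented = pd.concat([original, synthetic], ignore_index=True)
print(augmented["education"].value_counts(normalize=True))
print("missing employment values:", augmented["employment"].isna().sum())
```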

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.

Relevance:

30.00%

Publisher:

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
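
The sampling picture can be sketched as follows, purely as an illustration with an arbitrary placeholder rate matrix rather than an actual RET network: an exciton hops among chromophore states of a continuous-time Markov chain until it reaches an absorbing emission state, and the collected absorption times are draws from the corresponding phase-type distribution.

```python
# Gillespie-style simulation of a small CTMC to absorption; rates are invented.
import numpy as np

rng = np.random.default_rng(6)

# Transition rates (1/ns) among 3 transient chromophore states plus one absorbing
# emission state (index 3). Row i lists rates out of state i.
rates = np.array([
    [0.0, 2.0, 0.5, 1.0],
    [1.5, 0.0, 2.5, 0.8],
    [0.3, 1.0, 0.0, 2.0],
    [0.0, 0.0, 0.0, 0.0],   # absorbing: photon detected
])

def sample_emission_time(start=0):
    """Simulate one exciton until absorption; return the elapsed time."""
    state, t = start, 0.0
    while state != 3:
        out = rates[state]
        total = out.sum()
        t += rng.exponential(1.0 / total)                 # exponential holding time
        state = rng.choice(4, p=out / total)              # jump to the next state
    return t

samples = np.array([sample_emission_time() for _ in range(10000)])
print("mean emission time (ns):", round(samples.mean(), 3))
```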

Using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures frees the taggant design from the constraint of spectrally resolvable dyes and provides a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

In addition, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide application in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware of traditional computers, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as specialized functional units, or organized as a discrete accelerator, to bring substantial speedups and power savings.