4 results for Simulation and Modeling
at Duke University
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies that overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error procedures are needed. The method requires a mass matrix, or at least an estimate of the floor masses; a stiffness matrix may be used but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise, and extracts principal components from the singular value decomposition of this large matrix of linearly dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can use a reduced-order stiffness matrix, a backward-difference matrix, or a central-difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by the mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil-structure interaction model.
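As a rough illustration of the core linear-algebra steps, the sketch below (Python/NumPy) assembles Hankel matrices of measured floor accelerations, stacks them row-wise, and extracts principal components via the singular value decomposition; inertial forces then follow from a mass matrix once accelerations at all floors are available. The function names, lag and component counts are illustrative assumptions, and the dissertation's minimum-curvature interpolation of unmeasured floors is omitted.

```python
import numpy as np
from scipy.linalg import hankel

def principal_components_from_accels(acc, n_lags=50, n_comp=4):
    """Extract principal components from Hankel matrices of measured
    floor accelerations (rows: measured floors, columns: time samples).
    Simplified sketch; the full method also interpolates unmeasured
    floors with a minimum-curvature scheme before force recovery."""
    blocks = []
    for a in acc:                       # one Hankel matrix per measured floor
        H = hankel(a[:n_lags], a[n_lags - 1:])
        blocks.append(H)
    big = np.vstack(blocks)             # assemble components row-wise
    U, s, Vt = np.linalg.svd(big, full_matrices=False)
    return Vt[:n_comp, :]               # leading principal components in time

def recover_floor_forces(acc_all_floors, M):
    """Inertial force histories f(t) = -M a(t) once accelerations at all
    floors are available (measured or interpolated)."""
    return -M @ acc_all_floors
```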
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of a complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear-wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing peak floor responses and force-displacement relations within the isolation system obtained from the OpenSees simulations to the recorded measurements. General explanations and implications of the effects of soil-structure interaction are described, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations.
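For readers unfamiliar with the bearing model, the plain-Python sketch below traces the Bouc-Wen hysteretic force-displacement relation that elastomeric bearing (Bouc Wen) elements implement. It is not the OpenSees element itself, and the stiffness, yield displacement, and shape parameters are placeholders rather than calibrated values for the hospital's lead rubber bearings.

```python
import numpy as np

def bouc_wen_force(u, k0=2.0e6, alpha=0.1, u_y=0.01,
                   A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Hysteretic force for a displacement history u (m) under the
    Bouc-Wen model: F = alpha*k0*u + (1-alpha)*k0*u_y*z, with the
    evolution equation for z integrated by forward Euler.
    Parameter values are placeholders, not calibrated bearing values."""
    z = 0.0
    F = np.zeros_like(u)
    for i in range(1, len(u)):
        du = u[i] - u[i - 1]
        dz = (A * du - beta * abs(du) * abs(z) ** (n - 1) * z
              - gamma * du * abs(z) ** n) / u_y
        z += dz
        F[i] = alpha * k0 * u[i] + (1 - alpha) * k0 * u_y * z
    return F

# Example: cyclic displacement sweep to trace a hysteresis loop
u = 0.05 * np.sin(np.linspace(0, 4 * np.pi, 2000))
F = bouc_wen_force(u)
```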
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. For the first, the focus is on joint inference outside of the standard setting of multivariate continuous data that has been a major focus of previous theoretical work in this area. For the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
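As a concrete, purely illustrative example of the latent structure view, the sketch below evaluates the probability mass function of a PARAFAC-type latent class factorization for multivariate categorical data; the variable names and toy parameter values are assumptions, not taken from Chapter 2.

```python
import numpy as np

def parafac_pmf(y, nu, psi):
    """Probability of one categorical observation y = (y_1, ..., y_p)
    under a latent class (PARAFAC-type) factorization:
        P(y) = sum_h nu[h] * prod_j psi[j][h, y_j]
    nu  : (k,) mixture weights over latent classes
    psi : list of p arrays, psi[j] of shape (k, d_j), rows summing to one
    """
    probs = nu.copy()
    for j, yj in enumerate(y):
        probs = probs * psi[j][:, yj]
    return probs.sum()

# Toy example: p = 2 variables, k = 2 latent classes
nu = np.array([0.6, 0.4])
psi = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # variable 1, two levels
       np.array([[0.7, 0.3], [0.1, 0.9]])]   # variable 2, two levels
print(parafac_pmf((0, 1), nu, psi))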
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rates and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
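To convey the flavor of approximating a count-data posterior with a Gaussian, the sketch below uses a Laplace-style construction (posterior mode plus inverse Hessian) for a Poisson log-linear model. It is a stand-in under stated assumptions: the chapter's optimal-in-Kullback-Leibler Gaussian approximation under Diaconis-Ylvisaker priors is a different construction, and the Gaussian prior, Poisson likelihood, and function names here are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_approx_poisson_loglinear(X, y, prior_prec=1.0):
    """Laplace-style Gaussian approximation to the posterior of theta in
        y_i ~ Poisson(exp(x_i' theta)),  theta ~ N(0, prior_prec^{-1} I).
    Returns the posterior mode and the inverse Hessian at the mode as an
    approximate covariance. Illustrative only."""
    p = X.shape[1]

    def neg_log_post(theta):
        eta = X @ theta
        return np.sum(np.exp(eta) - y * eta) + 0.5 * prior_prec * theta @ theta

    def grad(theta):
        eta = X @ theta
        return X.T @ (np.exp(eta) - y) + prior_prec * theta

    res = minimize(neg_log_post, np.zeros(p), jac=grad, method="BFGS")
    mode = res.x
    W = np.exp(X @ mode)                          # Poisson weights at the mode
    H = X.T @ (X * W[:, None]) + prior_prec * np.eye(p)
    return mode, np.linalg.inv(H)
```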
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
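The basic summary underlying this approach is simple to compute: the sketch below extracts inter-exceedance waiting times above a high empirical quantile from a single series. The threshold level, names, and toy data are assumptions, and the full multi-location inferential framework is of course not reproduced here.

```python
import numpy as np

def exceedance_waiting_times(x, q=0.95):
    """Waiting times (in samples) between exceedances of a high empirical
    quantile threshold in a time series x. These inter-exceedance gaps are
    the basic summary on which the waiting-time view of tail dependence
    operates; the full method models their distribution across locations."""
    threshold = np.quantile(x, q)
    exceed_idx = np.flatnonzero(x > threshold)
    return np.diff(exceed_idx)

# Example on a simulated heavy-tailed series
rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=10_000)
gaps = exceedance_waiting_times(x, q=0.99)
```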
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
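A toy instance of an approximate transition kernel is sketched below: a random-walk Metropolis sampler whose log-likelihood is evaluated on a random data subset and rescaled, trading kernel error for per-iteration cost. This illustrates only one of the approximation classes mentioned (random subsets of data); the scheme and names are assumptions, and the chapter's framework is what quantifies how much such kernel error should be tolerated.

```python
import numpy as np

def approx_mh_subsample(log_lik_i, log_prior, theta0, data, n_iter=5000,
                        subsample=500, step=0.1, rng=None):
    """Random-walk Metropolis with an approximate kernel: the log-likelihood
    is computed on a random subset of the data and rescaled by n/|subset|.
    The resulting chain targets a perturbed posterior; this is the kind of
    kernel error the theoretical framework accounts for."""
    rng = rng or np.random.default_rng()
    n = len(data)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    samples = np.empty((n_iter, theta.size))

    def approx_log_post(th):
        idx = rng.choice(n, size=min(subsample, n), replace=False)
        return (n / len(idx)) * sum(log_lik_i(th, data[i]) for i in idx) \
               + log_prior(th)

    lp = approx_log_post(theta)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = approx_log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[t] = theta
    return samples
```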
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
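For reference, a minimal sketch of the truncated-normal (Albert-Chib-style) data augmentation Gibbs sampler for a probit model with a flat prior on the coefficients is given below; with very few successes relative to n it exhibits the slow mixing described above. The flat prior, function names, and implementation details are assumptions for illustration, not the chapter's exact setup.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=2000, rng=None):
    """Truncated-normal data augmentation Gibbs sampler for a probit model
    with a flat prior on beta. With sum(y) << n this sampler mixes slowly;
    compare effective sample sizes against Hamiltonian Monte Carlo."""
    rng = rng or np.random.default_rng()
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1,
        # and to (-inf, 0) if y_i = 0
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
        beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```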
Abstract:
Magnetic resonance imaging is a research and clinical tool that has been applied in a wide variety of sciences. One area of magnetic resonance imaging that has exhibited terrific promise and growth in the past decade is magnetic susceptibility imaging. Imaging tissue susceptibility provides insight into the microstructural organization and chemical properties of biological tissues, but this image contrast is not well understood. The purpose of this work is to develop effective approaches to image, assess, and model the mechanisms that generate both isotropic and anisotropic magnetic susceptibility contrast in biological tissues, including myocardium and central nervous system white matter.
This document contains the first report of MRI-measured susceptibility anisotropy in myocardium. Intact mouse heart specimens were scanned using MRI at 9.4 T to ascertain both the magnetic susceptibility and the myofiber orientation of the tissue. The susceptibility anisotropy of myocardium was observed and measured by relating the apparent tissue susceptibility to the myofiber angle with respect to the applied magnetic field. A multi-filament model of myocardial tissue revealed that the diamagnetically anisotropic α-helix peptide bonds in myofilament proteins are capable of producing bulk susceptibility anisotropy on a scale measurable by MRI, and are potentially the chief sources of the experimentally observed anisotropy.
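To illustrate how an anisotropy magnitude can be extracted from such angle-resolved measurements, the sketch below fits apparent susceptibility versus myofiber angle to a simple cos^2 dependence. The functional form is a common simplification for cylindrically symmetric anisotropy and is used here purely for illustration; it is not necessarily the exact model employed in this work, and the names are assumptions.

```python
import numpy as np

def fit_susceptibility_anisotropy(theta_deg, chi_app):
    """Least-squares fit of apparent susceptibility versus myofiber angle
    to chi(theta) ~= chi_0 + d_chi * cos(theta)^2, returning the isotropic
    offset chi_0 and the anisotropy magnitude d_chi."""
    c2 = np.cos(np.radians(theta_deg)) ** 2
    A = np.column_stack([np.ones_like(c2), c2])
    (chi_0, d_chi), *_ = np.linalg.lstsq(A, chi_app, rcond=None)
    return chi_0, d_chi
```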
The growing use of paramagnetic contrast agents in magnetic susceptibility imaging motivated a series of investigations regarding the effect of these exogenous agents on susceptibility imaging in the brain, heart, and kidney. In each of these organs, gadolinium increases susceptibility contrast and anisotropy, though the enhancements depend on the tissue type, compartmentalization of contrast agent, and complex multi-pool relaxation. In the brain, the introduction of paramagnetic contrast agents actually makes white matter tissue regions appear more diamagnetic relative to the reference susceptibility. Gadolinium-enhanced MRI yields tensor-valued susceptibility images with eigenvectors that more accurately reflect the underlying tissue orientation.
Despite the boost gadolinium provides, tensor-valued susceptibility image reconstruction is prone to image artifacts. A novel algorithm was developed to mitigate these artifacts by incorporating orientation-dependent tissue relaxation information into susceptibility tensor estimation. The technique was verified using a numerical phantom simulation and was shown to improve susceptibility-based tractography in the brain, kidney, and heart. This work represents the first successful application of susceptibility-based tractography to a whole, intact heart.
The knowledge and tools developed throughout the course of this research were then applied to studying mouse models of Alzheimer’s disease in vivo, and studying hypertrophic human myocardium specimens ex vivo. Though a preliminary study using contrast-enhanced quantitative susceptibility mapping has revealed diamagnetic amyloid plaques associated with Alzheimer’s disease in the mouse brain ex vivo, non-contrast susceptibility imaging was unable to precisely identify these plaques in vivo. Susceptibility tensor imaging of human myocardium specimens at 9.4 T shows that susceptibility anisotropy is larger and mean susceptibility is more diamagnetic in hypertrophic tissue than in normal tissue. These findings support the hypothesis that myofilament proteins are a source of susceptibility contrast and anisotropy in myocardium. This collection of preclinical studies provides new tools and context for analyzing tissue structure, chemistry, and health in a variety of organs throughout the body.
Abstract:
Carbon nanotubes (CNTs) have recently emerged as promising candidates for electron field emission (FE) cathodes in integrated FE devices. These nanostructured carbon materials possess exceptional properties and their synthesis can be thoroughly controlled. Their integration into advanced electronic devices, including not only FE cathodes, but sensors, energy storage devices, and circuit components, has seen rapid growth in recent years. The results of the studies presented here demonstrate that the CNT field emitter is an excellent candidate for next generation vacuum microelectronics and related electron emission devices in several advanced applications.
The work presented in this study addresses the factors that currently limit the performance and application of CNT-FE devices. Characterization studies and improvements to the FE properties of CNTs, along with Micro-Electro-Mechanical Systems (MEMS) design and fabrication, were utilized in achieving these goals. Important performance-limiting parameters, including emitter lifetime and failure from poor substrate adhesion, are examined. The compatibility and integration of CNT emitters with the governing MEMS substrate (i.e., polycrystalline silicon), and their impact on these performance-limiting parameters, are reported. CNT growth mechanisms and kinetics were investigated and compared to growth on silicon (100) to improve the design of MEMS-based electronic devices with integrated CNT emitters, specifically for vacuum microelectronic device (VMD) applications.
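For context on how FE cathodes are commonly characterized, the sketch below evaluates the standard Fowler-Nordheim expression for field-emission current density. The field-enhancement factor and work function are placeholder values typical of CNT emitters, not parameters measured in this work.

```python
import numpy as np

# Fowler-Nordheim constants (SI units: E in V/m, phi in eV, J in A/m^2)
A_FN = 1.54e-6   # A eV V^-2
B_FN = 6.83e9    # eV^-1.5 V m^-1

def fowler_nordheim_current_density(E, beta=1000.0, phi=4.8):
    """Fowler-Nordheim field-emission current density for macroscopic
    applied field E (V/m), field-enhancement factor beta, and work
    function phi (eV). beta and phi are placeholder values."""
    E_local = beta * E
    return (A_FN * E_local**2 / phi) * np.exp(-B_FN * phi**1.5 / E_local)
```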
Improved growth allowed for the design and development of novel cold-cathode FE devices utilizing CNT field emitters. A chemical ionization (CI) source based on a CNT-FE electron source was developed and evaluated in a commercial desktop mass spectrometer for explosives trace detection. This work demonstrated the first reported use of a CNT-based ion source capable of collecting CI mass spectra. The CNT-FE source demonstrated low power requirements, pulsing capabilities, and average lifetimes of over 320 hours when operated in constant emission mode under elevated pressures, without sacrificing performance. Additionally, a novel packaged ion source for miniature mass spectrometer applications, using CNT emitters, a MEMS-based Nier-type geometry, and a Low Temperature Cofired Ceramic (LTCC) 3D scaffold with integrated ion optics, was developed and characterized. While previous research has shown other devices capable of collecting ion currents on chip, this LTCC-packaged MEMS micro-ion source demonstrated improvements in energy and angular dispersion as well as the ability to direct the ions out of the packaged source and toward a mass analyzer. Simulations and experimental design, fabrication, and characterization were used to make these improvements.
Finally, novel CNT-FE devices were developed to investigate their potential to perform as active circuit elements in VMD circuits. Difficulty integrating devices at micron scales has hindered the use of vacuum electronic devices in integrated circuits, despite the unique advantages they offer in select applications. Using a combination of particle trajectory simulation and experimental characterization, device performance in an integrated platform was investigated. Solutions to the difficulties of operating multiple devices in close proximity and of enhancing electron transmission (i.e., reducing grid loss) are explored in detail. A systematic and iterative process was used to develop isolation structures that reduced crosstalk between neighboring devices from 15% on average to nearly zero. Innovative geometries and a new operational mode reduced grid loss by nearly threefold, thereby improving transmission of the emitted cathode current to the anode from 25% in initial designs to 70% on average. These performance enhancements are important enablers for larger-scale integration and for the realization of complex vacuum microelectronic circuits.