12 results for Finite model generation
in CaltechTHESIS
Abstract:
Few credible source models are available from large-magnitude past earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.
Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings, hypothetically located at each of the 636 sites, under three-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
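As a rough illustration of how the last step of such a framework might be assembled, the sketch below combines hypothetical per-scenario 30-year occurrence probabilities with per-site exceedance indicators into a 30-year probability of exceeding a given performance level. The function name, the independence assumption across scenarios, and all numbers are illustrative assumptions, not the thesis's procedure.

```python
import numpy as np

def prob_exceedance_30yr(p30, exceeds):
    """30-year probability that at least one rupture whose simulated response
    exceeds the performance-level limit actually occurs, treating scenario
    occurrences as independent (a simplifying assumption)."""
    p30 = np.asarray(p30, dtype=float)
    exceeds = np.asarray(exceeds, dtype=bool)
    p_none = np.prod(1.0 - p30[exceeds])   # P(no damaging scenario occurs)
    return 1.0 - p_none

# Hypothetical example: 60 scenarios with small 30-year probabilities and a
# placeholder set of scenarios whose computed drift exceeds the CP limit.
rng = np.random.default_rng(0)
p30 = rng.uniform(0.001, 0.01, size=60)
exceeds = rng.random(60) < 0.3
print(prob_exceedance_30yr(p30, exceeds))
```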
Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and displacement (PGD) in Los Angeles and surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, are used as the PGV and PGD data.
Abstract:
A general solution is presented for water waves generated by an arbitrary movement of the bed (in space and time) in a two-dimensional fluid domain with a uniform depth. The integral solution which is developed is based on a linearized approximation to the complete (nonlinear) set of governing equations. The general solution is evaluated for the specific case of a uniform upthrust or downthrow of a block section of the bed; two time-displacement histories of the bed movement are considered.
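For orientation, the classical linearized potential-flow problem that such an integral solution typically starts from can be written schematically as follows; the notation is assumed here for illustration and is not quoted from the thesis. For a velocity potential $\phi(x,z,t)$ and bed displacement $\zeta(x,t)$,

\begin{align}
&\nabla^2 \phi = 0, && -h \le z \le 0, \\
&\frac{\partial \phi}{\partial z} = \frac{\partial \zeta}{\partial t}(x,t), && z = -h \quad \text{(prescribed bed motion)}, \\
&\frac{\partial^2 \phi}{\partial t^2} + g\,\frac{\partial \phi}{\partial z} = 0, && z = 0 \quad \text{(linearized free surface)}, \\
&\eta(x,t) = -\frac{1}{g}\,\frac{\partial \phi}{\partial t}\Big|_{z=0},
\end{align}

where $\eta$ is the free-surface elevation and $h$ the uniform depth.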
An integral solution (based on a linear theory) is also developed for a three-dimensional fluid domain of uniform depth for a class of bed movements which are axially symmetric. The integral solution is evaluated for the specific case of a block upthrust or downthrow of a section of the bed, circular in planform, with a time-displacement history identical to one of the motions used in the two-dimensional model.
Since the linear solutions are developed from a linearized approximation of the complete nonlinear description of wave behavior, the applicability of these solutions is investigated. Two types of nonlinear effects are found to limit the applicability of the linear theory: (1) large nonlinear effects which occur in the region of generation during the bed movement, and (2) the gradual growth of nonlinear effects during wave propagation.
A model of wave behavior, which includes, in an approximate manner, both linear and nonlinear effects, is presented for computing wave profiles after the linear theory has become invalid due to the growth of nonlinearities during wave propagation.
An experimental program has been conducted to confirm both the linear model for the two-dimensional fluid domain and the strategy suggested for determining wave profiles during propagation after the linear theory becomes invalid. The effect of a more general time-displacement history of the moving bed than those employed in the theoretical models is also investigated experimentally.
The linear theory is found to accurately approximate the wave behavior in the region of generation whenever the total displacement of the bed is much less than the water depth. Curves that predict gross features of the lead wave propagating from the region of generation, once the values of certain nondimensional parameters (which characterize the generation process) are known, are developed and confirmed by the experiments. For example, the maximum amplitude of the lead wave propagating from the region of generation is found never to exceed approximately one-half of the total bed displacement. The gross features of the tsunami resulting from the Alaskan earthquake of 27 March 1964 can be estimated from the results of this study.
Abstract:
Forced vibration field tests and finite element studies have been conducted on Morrow Point (arch) Dam in order to investigate dynamic dam-water interaction and water compressibility. The design of the data acquisition system incorporates several special features to retrieve both the amplitude and phase of the response in a low signal-to-noise environment. These features contributed to the success of the experimental program which, for the first time, produced field evidence of water compressibility; this effect seems to play a significant role only in the symmetric response of Morrow Point Dam in the frequency range examined. In the accompanying analysis, frequency response curves for measured accelerations and water pressures, as well as their resonating shapes, are compared to predictions from the current state-of-the-art finite element model with water compressibility both included and neglected. Calibration of the numerical model employs the antisymmetric response data, since they are only slightly affected by water compressibility, and, after calibration, good agreement with the data is obtained whether or not water compressibility is included. In the effort to reproduce the symmetric response data, on which water compressibility has a significant influence, the calibrated model shows better correlation when water compressibility is included, but the agreement is still inadequate. Similar results occur using data obtained previously by others at a low water level. A successful isolation of the fundamental water resonance from the experimental data shows significantly different features from those of the numerical water model, indicating possible inaccuracy in the assumed geometry and/or boundary conditions for the reservoir. However, the investigation does suggest possible directions in which the numerical model can be improved.
Abstract:
Home to hundreds of millions of souls and a land of excess, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities still remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruin, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the "creeping barriers" that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence reckoned across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth has to somehow propagate elastically all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties. What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year with a tremendous amount of water during the annual summer monsoon, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension on the fault increases again. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity-neutral, the response of the slip rate may be amplified at certain periods, whose values are analytically related to the physical parameters of the problem.
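As a minimal sketch of the spring-and-slider idea just described, the following applies a steady-state rate-strengthening friction law to a one-degree-of-freedom slider loaded by a spring and a harmonic (monsoon-like) stress perturbation. The friction law, the omission of the state variable, and all parameter values are illustrative assumptions, so this toy version does not reproduce the period-dependent amplification discussed above; it only illustrates the basic setup.

```python
import numpy as np

# Illustrative parameters (assumed, not the thesis's calibration)
sigma     = 100e6      # effective normal stress [Pa]
a_minus_b = 0.004      # rate-strengthening parameter, a - b > 0
f0, V0    = 0.6, 1e-9  # reference friction coefficient and slip rate [m/s]
k         = 1e6        # elastic loading stiffness [Pa/m]
V_pl      = 1e-9       # long-term (plate) loading rate [m/s]
dtau      = 3e3        # amplitude of the periodic stress perturbation [Pa]
period    = 365.25 * 86400.0   # one year, a monsoon-like forcing period [s]

def slip_rate(tau):
    # Steady-state rate-strengthening friction: tau = sigma*(f0 + (a-b)*ln(V/V0))
    return V0 * np.exp((tau / sigma - f0) / a_minus_b)

dt = period / 2000
n = int(20 * period / dt)
t = np.arange(n) * dt
V = np.empty(n)
delta = 0.0                      # accumulated slip [m]
for i in range(n):
    tau = (sigma * f0 + k * (V_pl * t[i] - delta)
           + dtau * np.sin(2 * np.pi * t[i] / period))
    V[i] = slip_rate(tau)
    delta += V[i] * dt

# Relative peak-to-peak modulation of the slip rate over the last few cycles
tail = V[-4000:]
print((tail.max() - tail.min()) / tail.mean())
```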
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts before they can be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked, seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and the monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the resulting events may therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing constitutive properties of the MHT from seismological observations.
Abstract:
This thesis describes simple extensions of the standard model with new sources of baryon number violation but no proton decay. The motivation for constructing such theories comes from the shortcomings of the standard model to explain the generation of baryon asymmetry in the universe, and from the absence of experimental evidence for proton decay. However, lack of any direct evidence for baryon number violation in general puts strong bounds on the naturalness of some of those models and favors theories with suppressed baryon number violation below the TeV scale. The initial part of the thesis concentrates on investigating models containing new scalars responsible for baryon number breaking. A model with new color sextet scalars is analyzed in more detail. Apart from generating cosmological baryon number, it gives nontrivial predictions for the neutron-antineutron oscillations, the electric dipole moment of the neutron, and neutral meson mixing. The second model discussed in the thesis contains a new scalar leptoquark. Although this model predicts mainly lepton flavor violation and a nonzero electric dipole moment of the electron, it includes, in its original form, baryon number violating nonrenormalizable dimension-five operators triggering proton decay. Imposing an appropriate discrete symmetry forbids such operators. Finally, a supersymmetric model with gauged baryon and lepton numbers is proposed. It provides a natural explanation for proton stability and predicts lepton number violating processes below the supersymmetry breaking scale, which can be tested at the Large Hadron Collider. The dark matter candidate in this model carries baryon number and can be searched for in direct detection experiments as well. The thesis is completed by constructing and briefly discussing a minimal extension of the standard model with gauged baryon, lepton, and flavor symmetries.
Abstract:
Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, it is expected that a large number of small residential loads, such as air conditioners, dishwashers, and electric vehicles, will also participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time, and thus be used (in aggregate) to compensate for the random fluctuations in renewable generation.
In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model-predictive in nature: at every time step, it minimizes the expected variance-to-go using updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
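A minimal sketch of the receding-horizon idea is given below: at each step, the remaining deferrable energy is re-scheduled against an updated forecast so as to flatten the predicted aggregate load (a "valley-filling" allocation, which is one way to reduce its variance), and only the first step of the plan is applied. The allocation rule, forecast model, and all quantities are illustrative assumptions rather than the thesis's algorithm.

```python
import numpy as np

def valley_fill(base, energy, u_max):
    """Schedule 'energy' across the horizon so that base + u is as flat as
    possible: u_t = clip(level - base_t, 0, u_max), with the level found by
    bisection so the schedule sums to the required energy.
    Assumes energy <= len(base) * u_max so the target is feasible."""
    lo, hi = base.min(), base.max() + u_max + energy
    for _ in range(100):
        level = 0.5 * (lo + hi)
        u = np.clip(level - base, 0.0, u_max)
        if u.sum() > energy:
            hi = level
        else:
            lo = level
    return np.clip(0.5 * (lo + hi) - base, 0.0, u_max)

def mpc_deferrable_load(T, predict, energy, u_max):
    """At each step: re-forecast the remaining base load, re-solve the
    flattening schedule, and apply only the first step (receding horizon)."""
    applied = np.zeros(T)
    remaining = energy
    for t in range(T):
        base_hat = predict(t)                  # forecast for steps t..T-1
        u_plan = valley_fill(base_hat, remaining, u_max)
        applied[t] = u_plan[0]
        remaining -= applied[t]
    return applied

# Toy example: noisy forecasts of a sinusoidal net load (load minus renewables)
rng = np.random.default_rng(1)
T = 48
base_true = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))

def predict(t):
    """Noisy forecast of the remaining base load (illustrative)."""
    return base_true[t:] + 0.05 * rng.standard_normal(T - t)

u = mpc_deferrable_load(T, predict, energy=6.0, u_max=0.5)
print("variance without control:", base_true.var())
print("variance with control:   ", (base_true + u).var())
```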
Abstract:
This work is concerned with a general analysis of wave interactions in periodic structures and particularly periodic thin film dielectric waveguides.
The electromagnetic wave propagation in an asymmetric dielectric waveguide with a periodically perturbed surface is analyzed in terms of a Floquet mode solution. First-order approximate analytical expressions for the space harmonics are obtained. The solution is used to analyze various applications: (1) phase-matched second harmonic generation in periodically perturbed optical waveguides; (2) grating couplers and thin film filters; (3) Bragg reflection devices; (4) the calculation of the traveling wave interaction impedance for solid state and vacuum tube optical traveling wave amplifiers which utilize periodic dielectric waveguides. Some of these applications are of interest in the field of integrated optics.
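Schematically, the Floquet (space-harmonic) form of a guided field in a structure of period $\Lambda$, and the grating-assisted phase-matching condition exploited, for example, in second harmonic generation, can be written as follows; the notation is generic rather than the thesis's own:

\begin{equation}
E(x,z) \;=\; \sum_{m=-\infty}^{\infty} E_m(x)\, e^{-i \beta_m z},
\qquad
\beta_m \;=\; \beta_0 + \frac{2\pi m}{\Lambda},
\end{equation}

so that the grating can compensate a phase mismatch between the fundamental and second-harmonic guided waves when

\begin{equation}
\beta(2\omega) \;-\; 2\beta(\omega) \;=\; \frac{2\pi m}{\Lambda}, \qquad m = \pm 1, \pm 2, \ldots
\end{equation}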
Special emphasis is placed on the analysis of traveling wave interaction between electrons and electromagnetic waves in various operating regimes. Interactions with a finite-temperature electron beam in the collision-dominated, collisionless, and quantum regimes are analyzed in detail assuming a one-dimensional model and longitudinal coupling.
The analysis is used to examine the possibility of solid state traveling wave devices (amplifiers, modulators), and some monolithic structures of these devices are suggested, designed to operate in the submillimeter-far infrared frequency regime. The estimates of attainable traveling wave interaction gain are quite low (on the order of a few inverse centimeters). However, the possibility of attaining net gain with different materials, structures, and operating conditions is not ruled out.
The developed model is used to discuss the possibility and the theoretical limitations of high-frequency (optical) operation of vacuum electron beam tubes, and the relations to other electron-electromagnetic wave interaction effects (Smith-Purcell and Cerenkov radiation and the free electron laser) are pointed out. Finally, the case where the periodic structure is the natural crystal lattice is briefly discussed. The longitudinal component of the optical space harmonics in the crystal is calculated and found to be of the order of magnitude of the macroscopic wave, and some comments are made on the possibility of coherent bremsstrahlung and distributed feedback lasers in single crystals.
Abstract:
This study proposes a wastewater electrolysis cell (WEC) for on-site treatment of human waste coupled with decentralized molecular H2 production. The core of the WEC consists of mixed metal oxide anodes functionalized with bismuth-doped TiO2 (BiOx/TiO2). The BiOx/TiO2 anode shows reliable electro-catalytic activity for oxidizing Cl- to reactive chlorine species (RCS), which degrade environmental pollutants including chemical oxygen demand (COD), protein, NH4+, urea, and total coliforms. The WEC experiments on the treatment of various kinds of synthetic and real wastewater demonstrate effluent water quality sufficient for reuse in toilet flushing and for environmental purposes. Cathodic reduction of water and protons on stainless steel cathodes produces molecular H2 with moderate levels of current and energy efficiency. This thesis presents a comprehensive environmental analysis together with kinetic models to provide an in-depth understanding of the reaction pathways mediated by the RCS and the effects of key operating parameters. The latter part of this thesis is dedicated to bilayer hetero-junction anodes which show enhanced RCS generation efficiency and long-term stability.
Chapter 2 describes the reaction pathway and kinetics of urea degradation mediated by electrochemically generated RCS. The urea oxidation involves chloramines and chlorinated urea as reaction intermediates, for which the mass/charge balance analysis reveals that N2 and CO2 are the primary products. Chapter 3 investigates direct-current and photovoltaic powered WECs for domestic wastewater treatment, while Chapter 4 demonstrates the feasibility of the WEC for treating model septic tank effluents. The results in Chapters 2 and 3 corroborate the active roles of chlorine radicals (Cl•/Cl2-•) based on the iR-compensated anodic potential (thermodynamic basis) and enhanced pseudo-first-order rate constants (kinetic basis). The effects of operating parameters (anodic potential and [Cl-] in Chapter 3; influent dilution and anaerobic pretreatment in Chapter 4) on the rate and current/energy efficiency of pollutant degradation and H2 production are thoroughly discussed based on robust kinetic models. Chapter 5 reports the generation of RCS on Ir0.7Ta0.3Oy/BixTi1-xOz hetero-junction anodes with enhanced rate, current efficiency, and long-term stability compared to the Ir0.7Ta0.3Oy anode. The effects of surficial Bi concentration are interrogated, focusing on the relative distributions between surface-bound hydroxyl radical and higher oxide.
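Written generically (symbols assumed here for illustration, not quoted from the thesis), the pseudo-first-order description referred to above takes the form

\begin{equation}
-\frac{d[\mathrm{C}]}{dt} \;=\; k_{\mathrm{obs}}\,[\mathrm{C}],
\qquad
k_{\mathrm{obs}} \;\approx\; k_{\mathrm{RCS}}\,[\mathrm{RCS}]_{ss} \;+\; k_{\mathrm{Cl}^{\bullet}}\,[\mathrm{Cl}^{\bullet}/\mathrm{Cl}_2^{\bullet-}]_{ss},
\end{equation}

so that an observed rate constant larger than the measured RCS concentration alone can account for is read as kinetic evidence for a chlorine-radical contribution.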
Abstract:
This thesis outlines the construction of several types of structured integrators for incompressible fluids. We first present a vorticity integrator, which is the Hamiltonian counterpart of the existing Lagrangian-based fluid integrator. We next present a model-reduced variational Eulerian integrator for incompressible fluids, which combines the efficiency gains of dimension reduction, the qualitative robustness to coarse spatial and temporal resolutions of geometric integrators, and the simplicity of homogenized boundary conditions on regular grids to deal with arbitrarily shaped domains with sub-grid accuracy.
Both of these numerical methods involve approximating the Lie group of volume-preserving diffeomorphisms by a finite-dimensional Lie group and then restricting the resulting variational principle by means of a non-holonomic constraint. Advantages and limitations of this discretization method will be outlined. It will be seen that these derivation techniques are unable to yield symplectic integrators, but that energy conservation is easily obtained, as is a discretized version of Kelvin's circulation theorem.
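In generic terms (this schematic form is an assumption and omits the specific discretization used in the thesis), such a constrained discrete variational principle can be stated as

\begin{equation}
\delta \sum_{k=0}^{N-1} L_d\!\left(g_k,\, g_{k+1}\right) = 0
\quad \text{for variations } \delta g_k \in \mathcal{D}_{g_k},
\qquad
g_k^{-1} g_{k+1} \in \mathcal{S},
\end{equation}

where $L_d$ is a discrete Lagrangian on the finite-dimensional matrix group approximating the volume-preserving diffeomorphisms, $\mathcal{D}$ is the non-holonomic constraint distribution, and $\mathcal{S}$ is the constrained set of admissible discrete displacements.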
Finally, we outline the basis of a spectral discrete exterior calculus, which may be a useful element in producing structured numerical methods for fluids in the future.
Abstract:
The pulsed neutron technique has been used to investigate the decay of thermal neutrons in two adjacent finite media of water and borated water. Experiments were performed with a 6 × 6 × 6 inch cubic assembly divided into two halves by a thin membrane and filled with pure distilled water on one side and borated water on the other side.
The fundamental decay constant was measured versus the boric acid concentration in the poisoned medium. The experimental results showed good agreement with the predictions of the time-dependent diffusion model. It was assumed that the addition of boric acid increases the absorption cross section of the poisoned medium without affecting its diffusion properties. Under these conditions, space-energy separability and the concept of an "effective" buckling, as derived from diffusion theory, were introduced. Their validity was supported by the experimental results.
Measurements were performed with the absorption cross section of the poisoned medium increasing gradually up to 16 times its initial value. Extensive use of the IBM 7090-7094 computing facility was made to properly analyze the decay data (Frantic code). Attention was given to the count loss correction scheme and the handling of the statistics involved. Fitting the experimental results to the analytical form predicted by the diffusion model led to
Σₐv = 4721 (±150) sec⁻¹
D₀ = 35972 (±800) cm² sec⁻¹ for water at 21 °C
C (given) = 3420 cm⁴ sec⁻¹
These values, when compared with published data, show that the diffusion model is adequate in describing the experiment.
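For reference, the standard diffusion-theory relation into which such parameters are conventionally fitted, expressing the fundamental decay constant in terms of the effective buckling, is (assumed here from standard pulsed-neutron theory rather than quoted from the abstract)

\begin{equation}
\lambda \;=\; v\Sigma_a \;+\; D_0 B^2 \;-\; C B^4,
\end{equation}

where $v\Sigma_a$ is the absorption rate, $D_0$ the diffusion coefficient, $B^2$ the effective buckling, and $C$ the diffusion-cooling coefficient; the units of the fitted values above are consistent with this form.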
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was conducted between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two software packages on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in software more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two software packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was conducted on Hall's U20 structure [1], in which the structure was analyzed in all three software packages and the results compared. The pushover curves from each package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in its material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
Due to their high specific strength and low density, magnesium and magnesium-based alloys have gained great technological importance in recent years. However, their underlying hexagonal crystal structure furnishes Mg and its alloys with complex mechanical behavior because of their comparatively small number of energetically favorable slip systems. Besides the commonly studied slip mechanisms, general deformation can also be accomplished through the additional mechanism of deformation-induced twinning. The main aim of this thesis research is to develop an efficient continuum model to understand and ultimately predict the material response resulting from the interaction between these two mechanisms.
The constitutive model we present is based on variational constitutive updates of plastic slips and twin volume fractions and accounts for the related lattice reorientation mechanisms. The model is applied to single- and polycrystalline pure magnesium. We outline the finite-deformation plasticity model combining basal, pyramidal, and prismatic dislocation activity as well as a convexification-based approach for deformation twinning. A comparison with experimental data from single-crystal tension-compression experiments validates the model and serves for parameter identification. The extension to polycrystals, via both Taylor-type modeling and finite element simulations, shows a characteristic stress-strain response that agrees well with experimental observations for polycrystalline magnesium. The presented continuum model does not aim to represent the full details of individual twin-dislocation interactions, yet it is sufficiently efficient to allow for finite element simulations while qualitatively capturing the underlying microstructural deformation mechanisms.
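In generic form (notation assumed here; the specific energies and dissipation potentials of the thesis are not reproduced), a variational constitutive update of the kind described collects the plastic slips $\gamma^\alpha$ and twin volume fractions $\lambda^\beta$ into an internal-variable vector $q$ and advances it by an incremental minimization:

\begin{equation}
q_{n+1} \;=\; \arg\min_{q}\;\left[\, W\!\left(F_{n+1},\, q\right) \;+\; \Delta t\,\psi^{*}\!\left(\frac{q - q_n}{\Delta t}\right) \right],
\qquad
q = \{\gamma^{\alpha},\ \lambda^{\beta}\},
\end{equation}

where $W$ is the free energy at the new deformation $F_{n+1}$ and $\psi^{*}$ a dissipation (pseudo-)potential; the stresses then follow from the resulting incremental potential.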