10 results for Isotropic and Anisotropic models
in CaltechTHESIS
A model for energy and morphology of crystalline grain boundaries with arbitrary geometric character
Abstract:
It has been well-established that interfaces in crystalline materials are key players in the mechanics of a variety of mesoscopic processes such as solidification, recrystallization, grain boundary migration, and severe plastic deformation. In particular, interfaces with complex morphologies have been observed to play a crucial role in many micromechanical phenomena such as grain boundary migration, stability, and twinning. Interfaces are a unique type of material defect in that they demonstrate a breadth of behavior and characteristics eluding simplified descriptions. Indeed, modeling the complex and diverse behavior of interfaces is still an active area of research, and to the author's knowledge there are as yet no predictive models for the energy and morphology of interfaces with arbitrary character. The aim of this thesis is to develop a novel model for interface energy and morphology that i) provides accurate results (especially regarding "energy cusp" locations) for interfaces with arbitrary character, ii) depends on a small set of material parameters, and iii) is fast enough to incorporate into large scale simulations.
In the first half of the work, a model for planar, immiscible grain boundaries is formulated. By building on the assumption that anisotropic grain boundary energetics are dominated by geometry and crystallography, a construction on lattice density functions (referred to as "covariance") is introduced that provides a geometric measure of the order of an interface. Covariance forms the basis for a fully general model of the energy of a planar interface, and it is demonstrated by comparison with a wide selection of molecular dynamics energy data for FCC and BCC tilt and twist boundaries that the model accurately reproduces the energy landscape using only three material parameters. It is observed that the planar constraint on the model is, in some cases, over-restrictive; this motivates an extension of the model.
In the second half of the work, the theory of faceting in interfaces is developed and applied to the planar interface model for grain boundaries. Building on previous work in mathematics and materials science, an algorithm is formulated that returns the minimal possible energy attainable by relaxation and the corresponding relaxed morphology for a given planar energy model. It is shown that the relaxation significantly improves the energy results of the planar covariance model for FCC and BCC tilt and twist boundaries. The ability of the model to accurately predict faceting patterns is demonstrated by comparison to molecular dynamics energy data and experimental morphological observation for asymmetric tilt grain boundaries. It is also demonstrated that by varying the temperature in the planar covariance model, it is possible to reproduce a priori the experimentally observed effects of temperature on facet formation.
Finally, with the range and scope of the covariance and relaxation models having been demonstrated by means of extensive MD and experimental comparison, future applications and implementations of the model are explored.
Abstract:
This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from the measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, it is intended to use this model to assess the condition of the system and to predict the response to future excitations.
A new identification methodology based upon a generalization of the method of modal identification for multi-degree-of-freedom dynamical systems subjected to base motion is developed. The situation considered herein is that in which only the base input and the response of a small number of degrees-of-freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes" which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.
In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
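As a hedged illustration of this two-stage identification idea, the following minimal Python sketch fits a nonparametric restoring-force model to response histories by least squares; the four basis terms (linear and cubic stiffness, viscous and quadratic damping) and all names are illustrative assumptions, not the specific four-term model used in the dissertation.

    # Minimal sketch: least-squares identification of a four-term nonparametric
    # restoring-force model r(x, v) from response histories. The basis terms are
    # illustrative assumptions, not the four terms used in the dissertation.
    import numpy as np

    def identify_restoring_force(x, v, r):
        # x: displacement, v: velocity, r: measured restoring force per unit mass
        basis = np.column_stack([x, x**3, v, v * np.abs(v)])
        coeffs, *_ = np.linalg.lstsq(basis, r, rcond=None)
        return coeffs  # [k1, k3, c1, c2]

    # Synthetic check with a Duffing-type restoring force
    dt = 0.01
    t = np.arange(5000) * dt
    x = 0.1 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.sin(2 * np.pi * 3.1 * t)
    v = np.gradient(x, dt)
    r = 40.0 * x + 500.0 * x**3 + 0.8 * v + 2.0 * v * np.abs(v)
    print(identify_restoring_force(x, v, r))  # recovers approximately [40, 500, 0.8, 2.0]

In the same spirit, the parameters of the distributed element model could then be initialized from such a nonhysteretic fit before the optimization described above.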
The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.
The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.
Abstract:
A novel spectroscopy of trapped ions is proposed which will bring single-ion detection sensitivity to the observation of magnetic resonance spectra. The approaches developed here are aimed at resolving one of the fundamental problems of molecular spectroscopy, the apparent incompatibility in existing techniques between high information content (and therefore good species discrimination) and high sensitivity. Methods for studying both electron spin resonance (ESR) and nuclear magnetic resonance (NMR) are designed. They assume established methods for trapping ions in high magnetic field and observing the trapping frequencies with high resolution (<1 Hz) and sensitivity (single ion) by electrical means. The introduction of a magnetic bottle field gradient couples the spin and spatial motions together and leads to a small spin-dependent force on the ion, which has been exploited by Dehmelt to observe directly the perturbation of the ground-state electron's axial frequency by its spin magnetic moment.
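For scale, the size of such a spin-dependent perturbation can be illustrated with the standard continuous Stern-Gerlach relation (a textbook result quoted here for context, not a formula taken from the thesis): a bottle field $B_z = B_0 + B_2 z^2$ contributes $-\mu_z B_2 z^2$ to the axial potential, so

\[
\tfrac{1}{2} m \omega_z'^2 z^2 = \tfrac{1}{2} m \omega_z^2 z^2 - \mu_z B_2 z^2
\quad\Longrightarrow\quad
\Delta\omega_z \approx -\frac{\mu_z B_2}{m\,\omega_z},
\]

a shift that scales inversely with the particle mass and linearly with the magnetic moment, which is why extending the technique from the electron to heavy molecular ions and nuclear moments requires the amplification schemes developed below.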
A series of fundamental innovations is described in order to extend magnetic resonance to the higher masses of molecular ions (100 amu ≈ 2×10^5 electron masses) and smaller magnetic moments (nuclear moments ≈ 10^(-3) of the electron moment). First, it is demonstrated how time-domain trapping frequency observations before and after magnetic resonance can be used to make cooling of the particle to its ground state unnecessary. Second, adiabatic cycling of the magnetic bottle off between detection periods is shown to be practical and to allow high-resolution magnetic resonance to be encoded pointwise as the presence or absence of trapping frequency shifts. Third, methods of inducing spin-dependent work on the ion orbits with magnetic field gradients and Larmor frequency irradiation are proposed which greatly amplify the attainable shifts in trapping frequency.
The dissertation explores the basic concepts behind ion trapping, adopting a variety of classical, semiclassical, numerical, and quantum mechanical approaches to derive spin-dependent effects, design experimental sequences, and corroborate results from one approach with those from another. The first proposal presented builds on Dehmelt's experiment by combining a "before and after" detection sequence with novel signal processing to reveal ESR spectra. A more powerful technique for ESR is then designed which uses axially synchronized spin transitions to perform spin-dependent work in the presence of a magnetic bottle, which also converts axial amplitude changes into cyclotron frequency shifts. A third use of the magnetic bottle is to selectively trap ions with small initial kinetic energy. A dechirping algorithm corrects for undesired frequency shifts associated with damping by the measurement process.
The most general approach presented is spin-locked internally resonant ion cyclotron excitation, a true continuous Stern-Gerlach effect. A magnetic field gradient modulated at both the Larmor and cyclotron frequencies is devised which leads to cyclotron acceleration proportional to the transverse magnetic moment of a coherent state of the particle and radiation field. A preferred method of using this to observe NMR as an axial frequency shift is described in detail. In the course of this derivation, a new quantum mechanical description of ion cyclotron resonance is presented which is easily combined with spin degrees of freedom to provide a full description of the proposals.
Practical, technical, and experimental issues surrounding the feasibility of the proposals are addressed throughout the dissertation. Numerical ion trajectory simulations and analytical models are used to predict the effectiveness of the new designs as well as their sensitivity and resolution. These checks on the methods proposed provide convincing evidence of their promise in extending the wealth of magnetic resonance information to the study of collisionless ions via single-ion spectroscopy.
Abstract:
The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own actions or of other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers the competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.
The first chapter is a game theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Inclusion of this belief into the equilibrium calculation provides for a richer class of reputation building possibilities than when perfect implementation is assumed.
In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model and other models with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications to the standard sequential equilibrium. While some models perform well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.
The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
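As a hedged illustration of the recovery-probability objective (an illustrative formulation with made-up numbers, not the exact models or results of the thesis), the following Python sketch evaluates a symmetric allocation that spreads a budget T evenly over m of n nodes, when a data collector reads a uniformly random r-subset of the nodes and recovery succeeds if the amount of coded data accessed is at least the (normalized) object size of 1.

    # Hedged sketch: success probability of a symmetric storage allocation.
    # Budget T is spread evenly over m of n nodes; a collector reads a uniformly
    # random r-subset; with coding, recovery succeeds when the accessed amount
    # (number of nonempty nodes read) * (T/m) is at least 1.
    from math import comb, ceil

    def recovery_probability(n, r, m, T):
        need = ceil(m / T)  # nonempty nodes that must be read
        if need > min(m, r):
            return 0.0
        hits = sum(comb(m, k) * comb(n - m, r - k) for k in range(need, min(m, r) + 1))
        return hits / comb(n, r)  # hypergeometric tail

    # Spreading widely vs. concentrating, for a budget larger than the object
    n, r, T = 10, 4, 2.5
    for m in (2, 5, 10):
        print(m, round(recovery_probability(n, r, m, T), 3))  # 0.667, 0.738, 1.0

Consistent with the heuristic above, for this relatively large budget the maximally spread allocation (m = n) performs best in the toy example.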
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
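For the i.i.d. erasure model, the basic quantity behind such decoding-probability comparisons can be illustrated as follows (a generic sketch, not the upper bound or the intrasession codes constructed in the thesis): if a message is decodable whenever at least k of the n packets carrying it arrive, the decoding probability is a binomial tail.

    # Generic sketch: decoding probability over an i.i.d. erasure link with
    # erasure probability p, for a message recoverable from any k of n packets.
    # Illustrative only; not the bound or code constructions from the thesis.
    from math import comb

    def decoding_probability(n, k, p):
        return sum(comb(n, j) * (1 - p)**j * p**(n - j) for j in range(k, n + 1))

    print(round(decoding_probability(n=10, k=8, p=0.05), 4))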
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders-of-magnitude speedup over other methods.
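To make the EC2 selection rule concrete, here is a hedged, noise-free sketch (illustrative data structures and a toy example, not the BROAD implementation): hypotheses are grouped into equivalence classes, one per theory; edges join hypotheses in different classes with weight equal to the product of their priors; a test outcome cuts every edge touching a hypothesis inconsistent with that outcome; and the greedy rule picks the test with the largest expected weight of newly cut edges.

    # Hedged sketch of one greedy EC2 step (noise-free toy version, not BROAD itself).
    # hypotheses: name -> (theory_class, prior); tests: test_id -> {name: predicted response}
    from itertools import combinations

    def surviving_edge_weight(hyps):
        # Total weight of edges between hypotheses in different theory classes
        return sum(p1 * p2 for (c1, p1), (c2, p2) in combinations(hyps.values(), 2) if c1 != c2)

    def expected_gain(hyps, predictions):
        before = surviving_edge_weight(hyps)
        total_mass = sum(p for _, p in hyps.values())
        gain = 0.0
        for outcome in set(predictions.values()):
            consistent = {h: hyps[h] for h in hyps if predictions[h] == outcome}
            mass = sum(p for _, p in consistent.values())
            gain += (mass / total_mass) * (before - surviving_edge_weight(consistent))
        return gain

    def greedy_test(hyps, tests):
        return max(tests, key=lambda t: expected_gain(hyps, tests[t]))

    # Toy example: two theories with two candidate parameterizations each, three binary tests
    hyps = {"EV-a": ("EV", 0.25), "EV-b": ("EV", 0.25),
            "PT-a": ("PT", 0.25), "PT-b": ("PT", 0.25)}
    tests = {"t1": {"EV-a": "A", "EV-b": "A", "PT-a": "B", "PT-b": "B"},
             "t2": {"EV-a": "A", "EV-b": "B", "PT-a": "A", "PT-b": "B"},
             "t3": {"EV-a": "A", "EV-b": "A", "PT-a": "A", "PT-b": "B"}}
    print(greedy_test(hyps, tests))  # t1, which separates the two theory classes in one shot

The adaptive-submodularity guarantee mentioned above is what justifies running exactly this kind of greedy selection in place of an intractable search over full testing sequences.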
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
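One hedged illustration of this mechanism (a standard textbook-style calculation, not the positive-dependence argument actually used in the thesis): if the agent discounts exponentially at rate $\rho$ in subjective time, and subjective time grows logarithmically in calendar time, $\tau(t) = \tfrac{1}{k}\ln(1+kt)$, then the induced discount function is generalized hyperbolic,

\[
D(t) = e^{-\rho\,\tau(t)} = (1+kt)^{-\rho/k},
\]

whose implied instantaneous discount rate $\rho/(1+kt)$ declines with delay, producing exactly the preference reversals (temporal choice inconsistency) that exponential discounting in calendar time rules out.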
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
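A hedged sketch of the kind of discrete choice specification this suggests (the functional form, parameter values, and names below are illustrative assumptions, not the estimated model): a multinomial logit whose price term is reference-dependent, with losses relative to the reference price weighted more heavily than gains.

    # Hedged sketch: logit choice with a loss-averse, reference-dependent price term.
    # Illustrative functional form and numbers; not the specification estimated in the thesis.
    import numpy as np

    def choice_probabilities(prices, ref_prices, alpha, beta, lam):
        # alpha: item intercepts; beta: price sensitivity; lam > 1: loss-aversion coefficient
        gain = np.maximum(ref_prices - prices, 0.0)   # paying less than the reference
        loss = np.maximum(prices - ref_prices, 0.0)   # paying more than the reference
        utility = alpha - beta * prices + beta * gain - lam * beta * loss
        expu = np.exp(utility - utility.max())        # numerically stable softmax
        return expu / expu.sum()

    alpha, beta, lam = np.array([1.0, 1.0]), 0.3, 2.25
    # During the discount: item 0 sells at 8 against a reference price of 10
    print(choice_probabilities(np.array([8.0, 10.0]), np.array([10.0, 10.0]), alpha, beta, lam))
    # After the discount ends, if the reference has adapted to 8, paying 10 is coded as a loss
    # and demand shifts toward the close substitute (item 1)
    print(choice_probabilities(np.array([10.0, 10.0]), np.array([8.0, 10.0]), alpha, beta, lam))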
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Electromagnetic wave propagation and scattering in a sphere composed of an inhomogeneous medium having random variations in its permittivity are studied by utilizing the Born approximation in solving the vector wave equation. The variations in the permittivity are taken to be isotropic and homogeneous, and are spatially characterized by a Gaussian correlation function. Temporal variations in the medium are not considered.
Two particular problems are considered: i) finding the far-zone electric field when an electric or magnetic dipole is situated at the center of the sphere, and ii) finding the electric field at the sphere's center when a linearly polarized plane wave is incident upon it. Expressions are obtained for the mean-square magnitudes of the scattered field components; it is found that the mean of the product of any two transverse components vanishes. The cases where the wavelength is much shorter than the correlation distance of the medium and where it is much longer than the correlation distance are both considered.
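For reference, the statistical characterization described above takes the standard form (generic notation, not copied from the thesis): writing the permittivity as $\varepsilon(\mathbf{r}) = \varepsilon_0\,[1 + \delta\varepsilon(\mathbf{r})]$ with zero-mean fluctuations, an isotropic, homogeneous Gaussian correlation function is

\[
\langle \delta\varepsilon(\mathbf{r}_1)\,\delta\varepsilon(\mathbf{r}_2)\rangle
= \langle \delta\varepsilon^2 \rangle\,
\exp\!\left(-\frac{|\mathbf{r}_1-\mathbf{r}_2|^2}{\ell^{\,2}}\right),
\]

where $\ell$ is the correlation distance; the two limiting regimes discussed above correspond to $k\ell \gg 1$ (wavelength much shorter than $\ell$) and $k\ell \ll 1$ (wavelength much longer than $\ell$).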
Abstract:
STEEL, the Caltech-created nonlinear large displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and its lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher as well as the level of confidence in the model being analyzed is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons were done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties such as overall structural capacity through a pushover analysis. These analyses showed a very strong agreement between the two software packages on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a software package more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed a very strong agreement between the two software packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, two-bay chevron brace frame, and twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
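As a hedged illustration of the free-vibration post-processing step (a generic log-decrement method, not the actual STEEL or ETABS post-processing), the sketch below recovers the natural frequency from the spacing of response peaks and the damping ratio from the logarithmic decrement of their amplitudes.

    # Generic sketch: natural frequency and damping ratio from a free-vibration
    # displacement record. Illustrative only; not the comparison scripts from the thesis.
    import numpy as np

    def freq_and_damping(u, dt):
        peaks = [i for i in range(1, len(u) - 1) if u[i] > u[i - 1] and u[i] > u[i + 1]]
        freq = 1.0 / (np.diff(peaks).mean() * dt)                      # damped frequency, Hz
        delta = np.log(u[peaks[0]] / u[peaks[-1]]) / (len(peaks) - 1)  # average log decrement
        zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)                # damping ratio
        return freq, zeta

    # Synthetic check: 1.5 Hz oscillator with 3% of critical damping
    dt = 0.005
    t = np.arange(0, 20, dt)
    wn, zeta = 2 * np.pi * 1.5, 0.03
    u = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)
    print(freq_and_damping(u, dt))  # approximately (1.5, 0.03)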
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three software packages and the results compared. The pushover curves from each software package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps during which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
The sun has the potential to power the Earth's total energy needs, but electricity from solar power still constitutes an extremely small fraction of our power generation because of its high cost relative to traditional energy sources. Therefore, the cost of solar must be reduced to realize a more sustainable future. This can be achieved by significantly increasing the efficiency of modules that convert solar radiation to electricity. In this thesis, we consider several strategies to improve the device and photonic design of solar modules to achieve record, ultrahigh (> 50%) solar module efficiencies. First, we investigate the potential of a new passivation treatment, trioctylphosphine sulfide, to increase the performance of small GaAs solar cells for cheaper and more durable modules. We show that small cells (mm^2), which currently have a significant efficiency decrease (~ 5%) compared to larger cells (cm^2) because small cells have a higher fraction of recombination-active surface from the sidewalls, can achieve significantly higher efficiencies with effective passivation of the sidewalls. We experimentally validate the passivation qualities of treatment by trioctylphosphine sulfide (TOP:S) through four independent studies and show that this facile treatment can enable efficient small devices. Then, we discuss our efforts toward the design and prototyping of a spectrum-splitting module that employs optical elements to divide the incident spectrum into different color bands, which allows for higher efficiencies than traditional methods. We present a design, the polyhedral specular reflector, that has the potential for > 50% module efficiencies even with realistic losses from combined optics, cell, and electrical models. Prototyping efforts of one of these designs using glass concentrators yields an optical module whose combined spectrum-splitting and concentration should correspond to a record module efficiency of 42%. Finally, we consider how the manipulation of radiatively emitted photons from subcells in multijunction architectures can be used to achieve even higher efficiencies than previously thought, inspiring both optimization of incident and radiatively emitted photons for future high efficiency designs. In this thesis work, we explore novel device and photonic designs that represent a significant departure from current solar cell manufacturing techniques and ultimately show the potential for much higher solar cell efficiencies.
Abstract:
In this thesis, I develop the velocity and structure models for the Los Angeles Basin and Southern Peru. The ultimate goal is to better understand the geological processes involved in the basin and subduction zone dynamics. The results are obtained from seismic interferometry using ambient noise and receiver functions using earthquake-generated waves. Some unusual signals specific to the local structures are also studied. The main findings are summarized as follows:
(1) Los Angeles Basin
The shear wave velocities range from 0.5 to 3.0 km/s in the sediments, with lateral gradients at the Newport-Inglewood, Compton-Los Alamitos, and Whittier Faults. The basin is a maximum of 8 km deep along the profile, and the Moho rises to a depth of 17 km under the basin. The basin has a stretch factor of 2.6 in the center decreasing to 1.3 at the edges, and is in approximate isostatic equilibrium. This "high-density" (~1 km spacing), "short-duration" (~1.5 month) experiment may serve as a prototype that will allow basins to be covered by this type of low-cost survey.
(2) Peruvian subduction zone
Two prominent mid-crust structures are revealed in the 70 km thick crust under the Central Andes: a low-velocity zone interpreted as partially molten rocks beneath the Western Cordillera – Altiplano Plateau, and the underthrusting Brazilian Shield beneath the Eastern Cordillera. The low-velocity zone is oblique to the present trench, and possibly indicates the location of the volcanic arcs formed during the steepening of the Oligocene flat slab beneath the Altiplano Plateau.
The Nazca slab changes from normal dipping (~25 degrees) subduction in the southeast to flat subduction in the northwest of the study area. In the flat subduction regime, the slab subducts to ~100 km depth and then remains flat for ~300 km distance before it resumes a normal dipping geometry. The flat part closely follows the topography of the continental Moho above, indicating a strong suction force between the slab and the overriding plate. A high-velocity mantle wedge exists above the western half of the flat slab, which indicates the lack of melting and thus explains the cessation of the volcanism above. The velocity returns to normal values before the slab steepens again, indicating possible resumption of dehydration and eclogitization.
(3) Some unusual signals
Strong higher-mode Rayleigh waves due to the basin structure are observed at periods less than 5 s. The particle motions provide a good test for distinguishing between the fundamental and higher modes. Precursor and coda waves relative to the interstation Rayleigh waves are observed, and are modeled with a strong scatterer located in the active volcanic area in Southern Peru. In contrast with the usual receiver function analysis, multiples are used extensively in this thesis. In the LA Basin, a good image is obtained only from PpPs multiples, while in Peru, PpPp multiples contribute significantly to the final results.