11 results for Theories of fracture
in CaltechTHESIS
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
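As a concrete illustration of this adaptive loop, the following is a minimal, noiseless sketch of EC2-style greedy test selection with Bayesian updating; the function names, array shapes, and binary-response model are assumptions made for exposition, not the thesis's implementation.

```python
import numpy as np

# Noiseless EC2-style sketch: theories are grouped into equivalence classes;
# an "edge" between theories in different classes carries weight
# posterior[i] * posterior[j], and a test response "cuts" the edge when it
# refutes at least one endpoint. Greedy selection maximizes the expected cut.

def expected_cut(posterior, pred_col, classes):
    """Expected edge weight cut by one test; pred_col[h] in {0, 1} is the
    choice theory h predicts under this test."""
    total = 0.0
    for resp in (0, 1):
        p_resp = posterior[pred_col == resp].sum()   # predictive prob. of resp
        if p_resp == 0.0:
            continue
        cut = 0.0
        for i in range(len(posterior)):
            for j in range(i + 1, len(posterior)):
                if classes[i] != classes[j] and (pred_col[i] != resp or pred_col[j] != resp):
                    cut += posterior[i] * posterior[j]
        total += p_resp * cut
    return total

def broad_select(posterior, predictions, classes):
    """Pick the test with the largest expected cut; predictions[t, h] is the
    prediction of theory h under test t."""
    scores = [expected_cut(posterior, predictions[t], classes)
              for t in range(predictions.shape[0])]
    return int(np.argmax(scores))

def bayes_update(posterior, pred_col, observed):
    post = posterior * (pred_col == observed)        # zero out refuted theories
    return post / post.sum()
```

In this noiseless sketch a response simply eliminates every theory that predicted otherwise; handling noisy responses, as BROAD does, replaces the hard elimination with a likelihood-weighted Bayes update.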
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and since we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
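For reference, the standard discount functions under comparison can be sketched in common textbook notation (the thesis's own parameterizations, e.g. the (α, β) form above, may differ):

```latex
% Common discount functions D(t), with t the delay; all normalized so D(0) = 1.
D_{\mathrm{exp}}(t) = \delta^{t}                        % exponential
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}                 % hyperbolic
D_{\mathrm{qh}}(t)  = \beta\,\delta^{t} \;\; (t > 0)    % quasi-hyperbolic ("present bias")
D_{\mathrm{gh}}(t)  = (1 + \alpha t)^{-\beta/\alpha}    % generalized hyperbolic (Loewenstein-Prelec)
```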
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
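A minimal sketch of the kind of loss-averse discrete choice model described here, with a hypothetical functional form and parameter names (λ = 2.25 is the classic Tversky-Kahneman estimate, assumed only for illustration):

```python
import numpy as np

# Reference-dependent utility inside a multinomial logit: consumption value
# minus price, plus a gain-loss term around a reference price. Losses (price
# above reference) loom larger than gains by a factor lam > 1.

def utility(prices, ref_prices, values, lam=2.25):
    gain_loss = np.where(prices <= ref_prices,
                         ref_prices - prices,             # paying less: a gain
                         -lam * (prices - ref_prices))    # paying more: a loss
    return values - prices + gain_loss

def choice_probs(prices, ref_prices, values, lam=2.25):
    u = utility(prices, ref_prices, values, lam)
    e = np.exp(u - u.max())                               # numerically stable softmax
    return e / e.sum()

# A discount (price below reference) boosts demand beyond ordinary price
# elasticity; when the discount ends, the now loss-priced item cedes share
# to its close substitute, matching the prediction in the abstract.
```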
In future work, BROAD should be widely applicable for testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity play as contributors to the specific fracture energy of the material. Next, we present an experimental assessment of the optimal scaling laws. We show that when the specific fracture energy is renormalized in a manner suggested by the optimal scaling laws, the data fall within the bounds predicted by the analysis and, moreover, ostensibly collapse, with allowances made for experimental scatter, onto a master curve that depends on the hardening exponent but is otherwise material-independent.
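Schematically, the competition described above can be written as follows (the notation and the precise form of the two terms are illustrative, not the thesis's):

```latex
% Schematic energy: a local plastic term with sublinear growth (hardening
% exponent n < 1) competing with a nonlocal strain-gradient term carrying
% an intrinsic length \ell.
E \sim \min_{u} \int_{\Omega} \Big( |\varepsilon(u)|^{n}
      + \ell\, |\nabla\varepsilon(u)| \Big)\, dx , \qquad 0 < n < 1 .
% Sublinear growth drives localization to void sheets; the \ell-term
% stabilizes it, yielding a finite specific fracture energy per unit area
% and a cohesive relation between tractions and opening displacements.
```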
Abstract:
We know from the CMB and observations of large-scale structure that the universe is extremely flat, homogeneous, and isotropic. The currently favored mechanism for generating these characteristics is inflation, a theorized period of exponential expansion of the universe that occurred shortly after the Big Bang. Most theories of inflation generically predict a background of stochastic gravitational waves. These gravitational waves should leave their unique imprint on the polarization of the CMB via Thomson scattering. Scalar perturbations of the metric will cause a pattern of polarization with no curl (E-mode). Tensor perturbations (gravitational waves) will cause a unique pattern of polarization on the CMB that includes a curl component (B-mode). A measurement of the ratio of tensor to scalar perturbations (r) tells us the energy scale of inflation. Recent measurements by the BICEP2 team detect the B-mode spectrum with a tensor-to-scalar ratio of r = 0.2 (+0.05, −0.07). An independent confirmation of this result is the next step towards understanding the inflationary universe.
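For orientation, the approximate relation usually quoted between r and the inflationary energy scale (a standard single-field slow-roll result, added here for context):

```latex
% Approximate energy scale of inflation implied by the tensor-to-scalar ratio r:
V^{1/4} \simeq 1.06 \times 10^{16}\ \mathrm{GeV} \left(\frac{r}{0.01}\right)^{1/4}
```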
This thesis describes my work on a balloon-borne polarimeter called SPIDER, which is designed to illuminate the physics of the early universe through measurements of the cosmic microwave background polarization. SPIDER consists of six single-frequency, on-axis refracting telescopes contained in a shared-vacuum liquid-helium cryostat. Its large-format arrays of millimeter-wave detectors and tight control of systematics will give it unprecedented sensitivity. I describe how the SPIDER detectors are characterized and calibrated for flight, as well as how the systematics requirements for the SPIDER system are simulated and measured.
Abstract:
This thesis presents the results of an experimental investigation of the initiation of brittle fracture and the nature of discontinuous yielding in small plastic enclaves in an annealed mild steel. Upper and lower yield stress data have been obtained from unnotched specimens and nominal fracture stress data have been obtained from specimens of two scale factors and two grain sizes over a range of nominal stress rates from 10^2 to 10^7 lb/in.^2 sec at -111°F and -200°F. The size and shape of plastic enclaves near the notches were revealed by an etch technique.
A stress analysis utilizing slip-line field theory in the plastic region has been developed for the notched specimen geometry employed in this investigation. The yield stress of the material in the plastic enclaves near the notch root has been correlated with the lower yield stress measured on unnotched specimens through a consideration of the plastic boundary velocity under dynamic loading. A maximum tensile stress of about 122,000 lb/in.^2 at the instant of fracture initiation was calculated with the aid of the stress analysis for the large scale specimens of ASTM grain size 8 1/4.
The plastic strain state adjacent to a plastic-elastic interface has been shown to cause the maximum shear stress to have a larger value on the elastic side of the interface than on the plastic side. This characteristic of discontinuous yielding is instrumental in causing the plastic boundaries to be nearly parallel to the slip-line field where the plastic strain is of the order of the Lüders strain.
Abstract:
Complexity in the earthquake rupture process can result from many factors. This study investigates the origin of such complexity by examining several recent, large earthquakes in detail. In each case the local tectonic environment plays an important role in understanding the source of the complexity.
Several large shallow earthquakes (Ms > 7.0) along the Middle American Trench have similarities and differences between them that may lead to a better understanding of fracture and subduction processes. They are predominantly thrust events consistent with the known subduction of the Cocos plate beneath N. America. Two events occurring along this subduction zone close to triple junctions show considerable complexity. This may be attributable to a more heterogeneous stress environment in these regions and as such has implications for other subduction zone boundaries.
An event which looks complex but is actually rather simple is the 1978 Bermuda earthquake (Ms ~ 6). It is located predominantly in the mantle. Its mechanism is one of pure thrust faulting with a strike N 20°W and dip 42°NE. Its apparent complexity is caused by local crustal structure. This is an important event in terms of understanding and estimating seismic hazard on the eastern seaboard of N. America.
A study of several large strike-slip continental earthquakes identifies characteristics which are common to them and may be useful in determining what to expect from the next great earthquake on the San Andreas fault. The events are the 1976 Guatemala earthquake on the Motagua fault and two events on the Anatolian fault in Turkey (the 1967 Mudurnu Valley and 1976 E. Turkey events). An attempt to model the complex P-waveforms of these events results in good synthetic fits for the Guatemala and Mudurnu Valley events. However, the E. Turkey event proves to be too complex, as it may have associated thrust or normal faulting. Several individual sources occurring at intervals of between 5 and 20 seconds characterize the Guatemala and Mudurnu Valley events. The maximum size of an individual source appears to be bounded at about 5 × 10^26 dyne-cm. A detailed source study including directivity is performed on the Guatemala event. The source time history of the Mudurnu Valley event illustrates its significance in modeling strong ground motion in the near field. The complex source time series of the 1967 event produces amplitudes greater by a factor of 2.5 than a uniform model scaled to the same size, for a station 20 km from the fault.
Three large and important earthquakes demonstrate an important type of complexity --- multiple-fault complexity. The first, the 1976 Philippine earthquake, an oblique thrust event, represents the first seismological evidence for a northeast dipping subduction zone beneath the island of Mindanao. A large event, following the mainshock by 12 hours, occurred outside the aftershock area and apparently resulted from motion on a subsidiary fault since the event had a strike-slip mechanism.
An aftershock of the great 1960 Chilean earthquake on June 6, 1960, proved to be an interesting discovery. It appears to be a large strike-slip event at the main rupture's southern boundary. It most likely occurred on the landward extension of the Chile Rise transform fault, in the subducting plate. The results for this event suggest that a small event triggered a series of slow events, the whole sequence lasting longer than one hour. This is indeed a "slow earthquake".
Perhaps one of the most complex of events is the recent Tangshan, China event. It began as a large strike-slip event. Within several seconds of the mainshock it may have triggered thrust faulting to the south of the epicenter. There is no doubt, however, that it triggered a large oblique normal event to the northeast, 15 hours after the mainshock. This event certainly contributed to the great loss of life sustained as a result of the Tangshan earthquake sequence.
What has been learned from these studies has been applied to predict what one might expect from the next great earthquake on the San Andreas. The expectation from this study is that such an event would be a large complex event, not unlike, but perhaps larger than, the Guatemala or Mudurnu Valley events. That is to say, it will most likely consist of a series of individual events in sequence. It is also quite possible that the event could trigger associated faulting on neighboring fault systems such as those occurring in the Transverse Ranges. This has important bearing on the earthquake hazard estimation for the region.
Abstract:
The topological phases of matter have been a major part of condensed matter physics research since the discovery of the quantum Hall effect in the 1980s. Recently, much of this research has focused on the study of systems of free fermions, such as the integer quantum Hall effect, quantum spin Hall effect, and topological insulator. Though these free fermion systems can play host to a variety of interesting phenomena, the physics of interacting topological phases is even richer. Unfortunately, there is a shortage of theoretical tools that can be used to approach interacting problems. In this thesis I will discuss progress in using two different numerical techniques to study topological phases.
Recently much research in topological phases has focused on phases made up of bosons. Unlike fermions, free bosons form a condensate and so interactions are vital if the bosons are to realize a topological phase. Since these phases are difficult to study, much of our understanding comes from exactly solvable models, such as Kitaev's toric code, as well as Levin-Wen and Walker-Wang models. We may want to study systems for which such exactly solvable models are not available. In this thesis I present a series of models which are not solvable exactly, but which can be studied in sign-free Monte Carlo simulations. The models work by binding charges to point topological defects. They can be used to realize bosonic interacting versions of the quantum Hall effect in 2D and topological insulator in 3D. Effective field theories of "integer" (non-fractionalized) versions of these phases were available in the literature, but our models also allow for the construction of fractional phases. We can measure a number of properties of the bulk and surface of these phases.
Few interacting topological phases have been realized experimentally, but there is one very important exception: the fractional quantum Hall effect (FQHE). Though the fractional quantum Hall effect was discovered over 30 years ago, it can still produce novel phenomena. Of much recent interest is the existence of non-Abelian anyons in FQHE systems. Though it is possible to construct wave functions that realize such particles, whether these wave functions are the ground state is a difficult quantitative question that must be answered numerically. In this thesis I describe progress in using a density-matrix renormalization group algorithm to study a bilayer system thought to host non-Abelian anyons. We find phase diagrams in terms of experimentally relevant parameters, and also find evidence for a non-Abelian phase known as the "interlayer Pfaffian".
Abstract:
The microscopic properties of a two-dimensional model dense fluid of Lennard-Jones disks have been studied using the so-called "molecular dynamics" method. Analyses of the computer-generated simulation data in terms of "conventional" thermodynamic and distribution functions verify the physical validity of the model and the simulation technique.
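A minimal sketch of such a molecular dynamics calculation for two-dimensional Lennard-Jones disks, using velocity-Verlet integration in reduced units (the parameters, unit mass, and the absence of a cutoff are simplifications for illustration, not the thesis's code):

```python
import numpy as np

# 2D Lennard-Jones "molecular dynamics" sketch: pairwise forces from
# U(r) = 4(r^-12 - r^-6) in reduced units, periodic boundaries via the
# minimum-image convention, and velocity-Verlet time stepping.

def lj_forces(pos, box):
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)                  # minimum-image convention
        r2 = (d ** 2).sum(axis=1)
        inv6 = 1.0 / r2 ** 3
        fmag = (48.0 * inv6 ** 2 - 24.0 * inv6) / r2  # (-dU/dr)/r
        fij = fmag[:, None] * d
        f[i] -= fij.sum(axis=0)                       # push i away from its neighbors
        f[i + 1:] += fij
    return f

def verlet_step(pos, vel, box, dt=0.005):
    vel += 0.5 * dt * lj_forces(pos, box)
    pos = (pos + dt * vel) % box
    vel += 0.5 * dt * lj_forces(pos, box)
    return pos, vel
```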
The radial distribution functions g(r) computed from the simulation data exhibit several subsidiary features rather similar to those appearing in some of the g(r) functions obtained by X-ray and thermal neutron diffraction measurements on real simple liquids. In the case of the model fluid, these "anomalous" features are thought to reflect the existence of two or more alternative configurations for local ordering.
Graphical display techniques have been used extensively to provide some intuitive insight into the various microscopic phenomena occurring in the model. For example, "snapshots" of the instantaneous system configurations for different times show that the "excess" area allotted to the fluid is collected into relatively large, irregular, and surprisingly persistent "holes". Plots of the particle trajectories over intervals of 2.0 to 6.0 × 10^-12 sec indicate that the mechanism for diffusion in the dense model fluid is "cooperative" in nature, and that extensive diffusive migration is generally restricted to groups of particles in the vicinity of a hole.
A quantitative analysis of diffusion in the model fluid shows that the cooperative mechanism is not inconsistent with the statistical predictions of existing theories of singlet, or self-diffusion in liquids. The relative diffusion of proximate particles is, however, found to be retarded by short-range dynamic correlations associated with the cooperative mechanism--a result of some importance from the standpoint of bimolecular reaction kinetics in solution.
A new, semi-empirical treatment for relative diffusion in liquids is developed, and is shown to reproduce the relative diffusion phenomena observed in the model fluid quite accurately. When incorporated into the standard Smoluchowski theory of diffusion-controlled reaction kinetics, the more exact treatment of relative diffusion is found to lower the predicted rate of reaction appreciably.
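For orientation, the standard Smoluchowski rate that such a treatment corrects (three-dimensional form, in notation added here):

```latex
% Smoluchowski rate constant for a diffusion-controlled reaction between
% species A and B, with encounter radius R and relative diffusion
% coefficient D_{AB} (independent-particle limit: D_{AB} = D_A + D_B):
k_S = 4\pi R\, D_{AB}
% The correction described above: short-range dynamic correlations reduce
% the effective D_{AB} below D_A + D_B at small separations, lowering the
% predicted rate below k_S.
```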
Finally, an entirely new approach to an understanding of the liquid state is suggested. Our experience in dealing with the simulation data--and especially, graphical displays of the simulation data--has led us to conclude that many of the more frustrating scientific problems involving the liquid state would be simplified considerably, were it possible to describe the microscopic structures characteristic of liquids in a concise and precise manner. To this end, we propose that the development of a formal language of partially-ordered structures be investigated.
Abstract:
The Q values and 0° cross sections of (He3, n) reactions forming seven proton-rich nuclei have been measured with accuracies varying from 6 to 18 keV. The Q values (in keV) are: Si26 (85), S30 (-573), Ar34 (-759), Ti42 (-2865), Cr48 (5550), Ni56 (4513) and Zn60 (818). At least one excited state was found for all but Ti42. The first four nuclei complete isotopic spin triplets; the results obtained agree well with charge-symmetry predictions. The last three, all multiples of the α particle, are important in the α- and e-process theories of nucleosynthesis in stars. The energy available for β decay of these three was found by magnetic spectrometer measurements of the (He3, p) Q values of reactions leading to V48, Co56, and Cu60. Many excited states were seen: V48 (3), Co56 (15), Cu60 (23). The first two states of S30 are probably 0+ and 2+ from (He3, n) angular distribution measurements. Two NaI γ-ray measurements are described: the decay of Ar34 (measured T1/2 = 1.2 ± 0.3 s) and the prompt γ-ray spectrum from Fe54(He3, nγ)Ni56. Possible collective structure in Ni56 and Ca40, both doubly magic, is discussed.
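For reference, the Q value of a reaction T(He3, n)R follows from the standard mass-energy balance:

```latex
% Mass-energy balance for the reaction T(He3, n)R, in atomic masses:
Q = \left[\, m(\mathrm{T}) + m({}^{3}\mathrm{He}) - m(\mathrm{n}) - m(\mathrm{R}) \,\right] c^{2}
% A negative Q (e.g., S30, Ar34, Ti42 above) means the reaction has a
% threshold; the measured Q values fix the masses of the proton-rich
% residual nuclei relative to their neighbors.
```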
The (He3, n) neutron energy and yield measurements utilized neutron-induced nuclear reactions in a silicon semiconductor detector. Cross sections for the most important detection processes, Si28 (n, α) Mg25 and Si28 (n, p) Al28, are presented for reactions leading to the first four states of both residual nuclei for neutron energies from 7.3 to 16.4 MeV. Resolution and pulse-height anomalies associated with recoil Mg25 and Al28 ions are discussed. The 0° cross section for Be9 (α, n) C12, used to provide calibration neutrons, has been measured with a stilbene spectrometer for n0 (5.0 ≤ Eα ≤ 12 MeV), n1 (4.3 ≤ Eα ≤ 12.0 MeV) and n2 (6.0 ≤ Eα ≤ 10.1 MeV). Resonances seen in the n0 yield may correspond to nine new levels in C13.
Abstract:
A mathematical model is proposed in this thesis for the control mechanism of free fatty acid-glucose metabolism in healthy individuals under resting conditions. The objective is to explain in a consistent manner some clinical laboratory observations, such as glucose, insulin and free fatty acid responses to intravenous injection of glucose, insulin, etc. Responses up to only about two hours from the beginning of infusion are considered. The model is an extension of the one for glucose homeostasis proposed by Charette, Kadish and Sridhar (Modeling and Control Aspects of Glucose Homeostasis, Mathematical Biosciences, 1969). It is based upon a systems approach and agrees with the current theories of glucose and free fatty acid metabolism. The description is in terms of ordinary differential equations. Validation of the model is based on clinical laboratory data available at the present time. Finally, procedures are suggested for systematically identifying the parameters associated with the free fatty acid portion of the model.
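A toy sketch of the kind of ODE description referred to here; the structure and coefficients below are hypothetical illustrations, not the identified model of the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Coupled deviations from basal levels: glucose (g), insulin (i), and free
# fatty acids (f). Signs encode the usual qualitative couplings; the rate
# constants k are arbitrary placeholders.

def ffa_glucose(t, y, k=(0.05, 0.2, 0.1, 0.3, 0.08, 0.15)):
    g, i, f = y
    dg = -k[0] * g - k[1] * i     # insulin promotes glucose uptake
    di = k[2] * g - k[3] * i      # glucose stimulates insulin release
    df = -k[4] * f - k[5] * i     # insulin suppresses FFA release
    return [dg, di, df]

# Response to an intravenous glucose injection (g(0) > 0), followed for
# about two hours, as in the abstract:
sol = solve_ivp(ffa_glucose, t_span=(0.0, 120.0), y0=[100.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 120.0, 60))
```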
Abstract:
Part 1. Many interesting visual and mechanical phenomena occur in the critical region of fluids, both for the gas-liquid and liquid-liquid transitions. The precise thermodynamic and transport behavior here has some broad consequences for the molecular theory of liquids. Previous studies in this laboratory on a liquid-liquid critical mixture via ultrasonics supported a basically classical analysis of fluid behavior by M. Fixman (e.g., the free energy is assumed analytic in intensive variables in the thermodynamics), at least when the fluid is not too close to critical. A breakdown in classical concepts is evidenced close to critical, in some well-defined ways. We have studied herein a liquid-liquid critical system complementary in nature to all previous mixtures (possessing a lower critical mixing, or consolute, temperature), to look for new qualitative critical behavior. We did not find such new behavior in the ultrasonic absorption ascribable to the critical fluctuations, but we did find extra absorption due to chemical processes (these are, however, related to the mixing behavior that generates the lower consolute point). We rederived, corrected, and extended Fixman's analysis to interpret our experimental results in these more complex circumstances. The entire account of theory and experiment is prefaced by an extensive introduction recounting the general status of liquid-state theory. The introduction provides a context for our present work, and also points out problems deserving attention. Interest in these problems was stimulated by this work but also by the work in Part 3.
Part 2. Among variational theories of electronic structure, the Hartree-Fock theory has proved particularly valuable for a practical understanding of such properties as chemical binding, electric multipole moments, and X-ray scattering intensity. It also provides the most tractable method of calculating first-order properties under external or internal one-electron perturbations, either developed explicitly in orders of perturbation theory or in the fully self-consistent method. The accuracy and consistency of first-order properties are poorer than those of zero-order properties, but this is most often due to the use of explicit approximations in solving the perturbed equations, or to inadequacy of the variational basis in size or composition. We have calculated the electric polarizabilities of H2, He, Li, Be, LiH, and N2 by Hartree-Fock theory, using exact perturbation theory or the fully self-consistent method, as dictated by convenience. By careful studies of total basis set composition, we obtained good approximations to limiting Hartree-Fock values of polarizabilities with bases of reasonable size. The values for all species, and for each direction in the molecular cases, are within 8% of experiment, or of the best theoretical values in the absence of experiment. Our results support the use of unadorned Hartree-Fock theory for static polarizabilities needed in interpreting electron-molecule scattering data, collision-induced light scattering experiments, and other phenomena involving experimentally inaccessible polarizabilities.
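For reference, the static polarizability being computed is the standard second derivative of the field-dependent energy:

```latex
% Static dipole polarizability from the field-dependent Hartree-Fock energy E(F):
\alpha_{ij} = -\left.\frac{\partial^{2} E}{\partial F_{i}\,\partial F_{j}}\right|_{F=0}
% "Exact perturbation theory" and the "fully self-consistent method" are two
% routes to the same limiting derivative, order by order in the field.
```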
Part 3. Numerical integration of the close-coupled scattering equations has been carried out to obtain vibrational transition probabilities for some models of the electronically adiabatic H2-H2 collision. All the models use a Lennard-Jones interaction potential between nearest atoms in the collision partners. We have analyzed the results for some insight into the vibrational excitation process in its dependence on the energy of collision, the nature of the vibrational binding potential, and other factors. We conclude also that replacement of earlier, simpler models of the interaction potential by the Lennard-Jones form adds very little realism for all the complication it introduces. A brief introduction precedes the presentation of our work and places it in the context of attempts to understand the collisional activation process in chemical reactions as well as some other chemical dynamics.
Abstract:
The time distribution of the decays of an initially pure K0 beam into π+π-π0 has been analyzed to determine the complex parameter W (also known as η+-0 and (x + iy)). The K0 beam was produced in a brass target by the interactions of a 2.85 GeV/c π- beam which was generated on an internal target in the Lawrence Radiation Laboratory (LRL) Bevatron. The counters and hodoscopes in the apparatus selected for events with a neutral particle (K0) produced in the brass target, two charged secondaries passing through a magnet spectrometer, and a γ-ray shower in a shower hodoscope.
From the 275K apparatus triggers, 148 K0 → π+π-π0 events were isolated. The presence of a γ-ray shower in the optical shower chambers and a two-prong vee in the optical spark chambers were used to isolate the events. The backgrounds were further reduced by reconstructing the momenta of the two charged secondaries and applying kinematic constraints.
The best fit to the final sample of 148 events, distributed between 0.3 and 7.0 K_S lifetimes, gives:
Re W = -0.05 ± 0.17
Im W = +0.39 (+0.35 / -0.37)
This result is consistent with both CPT invariance (Re W = 0) and CP invariance (W = 0). Backgrounds are estimated to be less than 10%, and systematic effects have also been estimated to be negligible.
An analysis of the present data on CP violation in this decay mode and other K0 decay modes estimates the phase of ε to be 45.3 ± 2.3 degrees. This result is consistent with the superweak theory of CP violation, which predicts the phase of ε to be 43°. This estimate is in turn used to predict the phase of η00 to be 48.0 ± 7.9 degrees. This is a substantial improvement on presently available measurements. The largest error in this analysis comes from the present limits on W from the world average of recent experiments. The K → πμν mode produces the next largest error. Therefore, further experimentation in these modes would be useful.
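For reference, the superweak phase quoted above comes from a standard relation between the K_S-K_L mass difference and the decay widths:

```latex
% Superweak prediction for the phase of epsilon:
\phi_{\mathrm{SW}} = \arctan\!\left(\frac{2\,\Delta m}{\Gamma_S - \Gamma_L}\right) \approx 43^{\circ}
```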