6 results for Expansion tests
in CaltechTHESIS
Abstract:
The geometry and constituent materials of metastructures can be used to engineer the thermal expansion coefficient. In this thesis, we design, fabricate, and test thin, thermally stable metastructures consisting of bi-metallic unit cells, and we show how the coefficient of thermal expansion (CTE) of these metastructures can be tuned, both finely and coarsely, by varying the CTE of the constituent materials and the unit cell geometry. Planar and three-dimensional finite element modeling is used to drive the design, inform the experiments, and predict the response of these metastructures. We demonstrate computationally the significance of out-of-plane effects in the metastructure response. We develop an experimental setup using digital image correlation and an infrared camera to measure full displacement and temperature fields during testing and thereby accurately measure the metastructures' CTE. We experimentally demonstrate high-aspect-ratio metastructures of Ti/Al and Kovar/Al that exhibit near-zero and negative CTE, respectively. We demonstrate robust fabrication procedures for thermally stable, high-aspect-ratio samples at the thin-foil and thin-film scales. We investigate the lattice structure and mechanical properties of thin films comprising a near-zero-CTE metastructure. The mechanics developed in this work can be used to engineer metastructures of arbitrary CTE and can be extended to three dimensions.
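The mechanism such bi-metallic unit cells exploit is classical bimetal bending. As a point of reference only (this is the textbook Timoshenko result for two bonded layers of equal thickness and equal elastic moduli, not the thesis's design equations), a temperature change ΔT produces a member curvature

\[
\kappa = \frac{3\,(\alpha_2 - \alpha_1)\,\Delta T}{2h},
\]

where α₁ and α₂ are the constituent CTEs and h is the total strip thickness. The unit cell geometry converts this member-level bending into a net in-plane expansion or contraction, which is what allows the effective CTE of the lattice to be pushed toward zero or to negative values.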
Abstract:
Metallic glasses have typically been treated as a "one size fits all" type of material: every alloy is assumed to have high strength, high hardness, a large elastic limit, corrosion resistance, and so on. However, as with traditional crystalline materials, properties depend strongly on the constituent elements, how the material was processed, and the conditions under which it is used. An important distinction can be drawn between metallic glasses and their composites. Charpy impact toughness measurements are performed to determine the effect of processing and microstructure on bulk metallic glass matrix composites (BMGMCs). Samples are prepared by suction casting, machining from commercial plates, and semi-solid forging (SSF). The SSF specimens have the highest impact toughness, owing to the coarsening of the dendrites that occurs during the semi-solid processing stages. Ductile-to-brittle transition (DTBT) temperatures are measured for a BMGMC. While at room temperature the BMGMC is highly toughened compared to a fully glassy alloy, it undergoes a DTBT by 250 K, at which point its impact toughness mirrors that of the constituent glassy matrix. In the following chapter, BMGMCs are shown to be capable of being capacitively welded into single, monolithic structures. Shear measurements are performed across welded samples, and at sufficient weld energies the joints retain the strength of the parent alloy. Cross-sections are inspected via SEM, and no visible crystallization of the matrix is observed.
Next, metallic glasses and BMGMCs are formed into sheets and eggbox structures and tested under hypervelocity impact. Metallic glasses are ideal candidates for protection against micrometeoroid and orbital debris due to their high hardness and relatively low density. A flat, single-layer BMG sheet is compared to a BMGMC eggbox, and the latter creates a more diffuse projectile cloud after penetration. A three-tiered eggbox structure is also tested by firing a 3.17 mm aluminum sphere at it at 2.7 km/s. The projectile penetrates the first two layers but is successfully contained by the third.
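For scale, the impact energy of the quoted shot can be estimated with a back-of-envelope calculation (ours, not the thesis's; it assumes a solid aluminum sphere of density 2.70 g/cm³ and radius 1.585 mm):

\[
m = \rho\,\tfrac{4}{3}\pi r^3 = 2700 \times \tfrac{4}{3}\pi\,(1.585\times10^{-3})^3 \approx 4.5\times10^{-5}\ \mathrm{kg},
\qquad
E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2}(4.5\times10^{-5})(2700)^2 \approx 160\ \mathrm{J},
\]

i.e., a roughly 45 mg projectile delivering on the order of 160 J, concentrated on a millimeter-scale contact area.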
A large series of metallic glass alloys is created, and their wear loss is measured in a pin-on-disk test. Wear is found to vary dramatically among different metallic glasses, with some considerably outperforming the current state-of-the-art crystalline material (most notably Cu₄₃Zr₄₃Al₇Be₇), while others suffer extensive wear loss. Commercially available Vitreloy 1 loses nearly three times as much mass in wear as the same alloy prepared in a laboratory setting. No conclusive correlation is found between wear loss and any of the measured properties (hardness, density, elastic, bulk, or shear modulus, Poisson's ratio, frictional force, or run-in time). Heat treatments are performed on Vitreloy 1 and Cu₄₃Zr₄₃Al₇Be₇: anneals near the glass transition temperature increase hardness slightly but decrease wear loss significantly, and crystallization of both alloys leads to dramatic increases in wear resistance. Finally, wear tests under vacuum are performed on the two alloys above: Vitreloy 1 experiences a dramatic decrease in wear loss, while Cu₄₃Zr₄₃Al₇Be₇ shows a moderate increase. Meanwhile, gears are fabricated through three techniques: electrical discharge machining of 1 cm by 3 mm cylinders, semi-solid forging, and copper mold suction casting. Initial testing finds the pin-on-disk test to be an accurate predictor of wear performance in gears.
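For context, pin-on-disk mass loss is conventionally interpreted through the standard Archard relation (a textbook model, not a result of this thesis):

\[
V = K\,\frac{F\,s}{H},
\]

where V is the worn volume, F the normal load, s the sliding distance, H the hardness, and K a dimensionless wear coefficient. Archard's law predicts that harder materials wear less, which is precisely why the reported absence of a hardness-wear correlation across these metallic glasses is a notable finding.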
The final chapter explores an exciting technique in the field of additive manufacturing. Laser engineered net shaping (LENS) is a method whereby small amounts of metallic powder are melted by a laser so that shapes and designs can be built layer by layer into a final part. The technique is extended here to mixing different powders during melting, so that compositional gradients can be created across a manufactured part. Two compositional gradients are fabricated and characterized. A Ti-6Al-4V to pure vanadium gradient was chosen for its combination of high strength and light weight on one end and high melting point on the other; cross-sectional x-ray diffraction shows only the anticipated phases to be present. A 304L stainless steel to Invar 36 gradient was created both as a pillar and as a radial gradient; it combines strength and weldability with a near-zero coefficient of thermal expansion material, and only the austenite phase is found via x-ray diffraction. The coefficient of thermal expansion is measured for four compositions and is found to be tunable with composition.
Abstract:
This dissertation primarily describes chemical-scale studies of G protein-coupled receptors and Cys-loop ligand-gated ion channels, undertaken to better understand ligand-binding interactions and the mechanism of channel activation, using recently published crystal structures as a guide. These studies employ unnatural amino acid mutagenesis and electrophysiology to measure subtle changes in receptor function.
In Chapter 2, the role of a conserved aromatic microdomain predicted in the D3 dopamine receptor is probed in the closely related D2 and D4 dopamine receptors. This domain is found to act as a structural unit near the ligand-binding site that is important for receptor function. The domain consists of several functionally important noncovalent interactions, including hydrogen-bond, aromatic-aromatic, and sulfur-π interactions, which show strong couplings by mutant cycle analysis. We also offer an alternative interpretation of the linear fluorination plot observed at W6.48, a residue previously thought to participate in a cation-π interaction with dopamine.
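Mutant cycle analysis quantifies such couplings from functional measurements like EC₅₀ values. In the conventional formulation (standard in the field; sign conventions vary, and the notation here is generic rather than the thesis's):

\[
\Omega = \frac{\mathrm{EC_{50}}(\mathrm{WT})\;\mathrm{EC_{50}}(\mathrm{double\ mutant})}{\mathrm{EC_{50}}(\mathrm{mutant\ 1})\;\mathrm{EC_{50}}(\mathrm{mutant\ 2})},
\qquad
\Delta\Delta G_{\mathrm{int}} = RT\,\ln\Omega,
\]

where Ω near 1 (ΔΔG near zero) indicates independent residues, and |ΔΔG| substantially greater than zero indicates that the two positions are energetically coupled.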
Chapter 3 outlines attempts to incorporate chemically synthesized and in vitro acylated unnatural amino acids into mammalian cells. While these attempts were not successful, method optimizations and data for nonsense suppression with an in vivo acylated tRNA are included. This chapter is intended to aid future researchers attempting unnatural amino acid mutagenesis in mammalian cells.
Chapter 4 identifies a cation-π interaction between glutamate and a tyrosine residue on loop C in the GluClβ receptor. Using the recently published crystal structure of the homologous GluClα receptor, other ligand-binding and protein-protein interactions are probed to determine the similarity between this invertebrate receptor and other, more distantly related vertebrate Cys-loop receptors. We find that many of the previously observed interactions are conserved in the GluCl receptors; however, care must be taken when extrapolating structural data.
Chapter 5 examines inherent properties of the GluClα receptor that are responsible for its observed glutamate insensitivity. Chimera synthesis and mutagenesis reveal that the C-terminal portion of the M4 helix and the C-terminus contribute to formation of the decoupled state, in which ligand binding is incapable of triggering channel gating. Receptor mutagenesis was unable to identify single-residue mismatches or impaired protein-protein interactions within this domain. We conclude that M4 helix structure and/or membrane dynamics are the likely cause of ligand insensitivity in this receptor and that the M4 helix has an important role in the activation process.
Abstract:
Compliant foams are usually characterized by a wide range of desirable mechanical properties, including viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices, so the characteristic dimensions of foams span multiple length scales, making their mechanical properties difficult to model. Continuum mechanics-based models capture some salient experimental features, such as the linear elastic regime followed by a nonlinear plateau-stress regime, but they lack mesostructural physical detail. This makes them incapable of accurately predicting the local peaks in stress and strain distributions that significantly affect the deformation paths. Atomistic methods can capture the physical origins of deformation at smaller scales but are computationally impractical at these sizes. Capturing deformation at the so-called meso-scale, which describes the phenomenon at a continuum level while retaining some physical insight, requires developing new theoretical approaches.
A fundamental question that motivates the modeling of foams is how to extract the intrinsic material response from simple mechanical test data, such as the stress vs. strain response. A 3D model was developed to simulate the mechanical response of foam-type materials. Its novel features include a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Insights from atomistic simulations of foams were used to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression ["Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech. Phys. Solids, 59, pp. 2227–2237 (2011); Erratum: 60, pp. 1753–1756 (2012)], the property-space exploration was extended to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in the experimental data, such as:
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties studied include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to reflect a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved superior, being the most sensitive to the material properties.
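As an illustration of the kind of material input such a model consumes, a hardening-softening-hardening flow-stress curve can be written as a simple piecewise function. The sketch below is hypothetical: the functional form, parameter names, and default values are ours, chosen only to show the three-branch shape, not the thesis's calibrated input function.

```python
import numpy as np

def flow_stress(eps, s0=1.0, h1=20.0, s_min=0.6, eps1=0.05, eps2=0.4, h2=8.0):
    """Illustrative hardening-softening-hardening flow stress (arbitrary units).

    eps        : equivalent plastic strain
    s0         : initial yield stress
    h1         : initial hardening slope (0 <= eps < eps1)
    s_min      : stress at the bottom of the softening valley
    eps1, eps2 : strains bounding the softening branch
    h2         : final (densification-like) hardening slope (eps >= eps2)
    """
    s_peak = s0 + h1 * eps1
    if eps < eps1:                          # initial hardening branch
        return s0 + h1 * eps
    elif eps < eps2:                        # softening branch (linear drop)
        t = (eps - eps1) / (eps2 - eps1)
        return s_peak + t * (s_min - s_peak)
    else:                                   # re-hardening branch
        return s_min + h2 * (eps - eps2)

# Sample the curve, e.g. to feed a tabulated hardening law to an FE code.
strains = np.linspace(0.0, 0.8, 9)
table = [(e, flow_stress(e)) for e in strains]
```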
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on more psychologically realistic assumptions, with the aim of gaining precision and descriptive power. Increased psychological realism, however, comes at the cost of more parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We first look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD selects tests using the Equivalence Class Edge Cutting (EC2) criterion. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
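To make the selection step concrete, here is a minimal, noiseless sketch of EC2-style greedy test selection under simplifying assumptions: deterministic predictions, one equivalence class per theory, and a constant subject error rate in the Bayesian update. The function names and structure are ours for illustration; the thesis's BROAD implementation additionally handles noisy responses in the objective itself and uses lazy-greedy acceleration, which this omits.

```python
import itertools
import numpy as np

def ec2_gain(posterior, classes, predictions):
    """Expected weight of EC2 'edges' cut by one test (noiseless sketch).

    posterior   : (H,) current belief over hypotheses
    classes     : (H,) equivalence class (theory) of each hypothesis
    predictions : (H,) outcome each hypothesis predicts for this test
    """
    def edge_weight(alive):
        # Edges join hypotheses in *different* classes, weighted by the
        # product of their posterior masses.
        idx = np.flatnonzero(alive)
        return sum(posterior[i] * posterior[j]
                   for i, j in itertools.combinations(idx, 2)
                   if classes[i] != classes[j])

    total = edge_weight(np.ones_like(posterior, dtype=bool))
    expected_remaining = 0.0
    for o in np.unique(predictions):
        alive = predictions == o              # hypotheses consistent with o
        p_o = posterior[alive].sum()          # probability of outcome o
        expected_remaining += p_o * edge_weight(alive)
    return total - expected_remaining         # expected edge weight cut

def choose_test(posterior, classes, prediction_table):
    """Greedily pick the test with the largest expected EC2 gain."""
    gains = [ec2_gain(posterior, classes, preds) for preds in prediction_table]
    return int(np.argmax(gains))

def update_posterior(posterior, predictions, observed, error_rate=0.05):
    """Bayesian update allowing a constant rate of subject error."""
    like = np.where(predictions == observed, 1.0 - error_rate, error_rate)
    post = posterior * like
    return post / post.sum()
```

The adaptive loop alternates `choose_test`, presenting that choice to the subject, and `update_posterior`; adaptive submodularity is what guarantees this greedy loop stays provably close to the Bayes-optimal sequence.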
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
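For reference, the textbook forms of the discount functions being compared are shown below (generic notation; the thesis's own parameterizations, such as its (α, β) labels for the quasi-hyperbolic model, may differ):

\[
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{qh}}(t) =
\begin{cases}
1 & t = 0,\\
\beta\,\delta^{t} & t > 0,
\end{cases}
\qquad
D_{\mathrm{gh}}(t) = (1 + a t)^{-b/a},
\]

with δ ∈ (0, 1), k > 0, β < 1 capturing present bias, and the generalized-hyperbolic family nesting the exponential (a → 0) and hyperbolic (b = a) forms as limits.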
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild", focusing on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone can explain and, more importantly, that when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreases with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
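One illustrative way to embed loss aversion in a discrete choice (logit) utility, in the spirit of the reference-dependent story above (a schematic specification of ours, not the thesis's estimated model), is

\[
u_{ij} = \beta^{\top} x_{ij} \;-\; \eta\, p_{j}
\;+\; \mu\left[\,(r_{j} - p_{j})^{+} \;-\; \lambda\,(p_{j} - r_{j})^{+}\,\right] \;+\; \varepsilon_{ij},
\]

where r_j is consumer i's reference price for item j, (x)⁺ = max(x, 0), λ > 1 captures loss aversion, and ε_ij is the logit error. A discount (p_j < r_j) boosts utility beyond the pure price effect, while a return to the regular price registers as a loss, pushing demand toward close substitutes, exactly the asymmetry tested in the retail data.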
In future work, BROAD can be applied widely to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data and encourage combined lab-field experiments.
Abstract:
Motivated by recent Mars Science Laboratory (MSL) results in which the ablation rate of the PICA (phenolic impregnated carbon ablator) heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Because flight tests in the hypervelocity regime are so difficult to perform, a new ground-testing facility, the vertical expansion tunnel (VET), is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure to the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be used for hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground-testing facilities and re-entry conditions. The effectiveness of individual constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method are identified: 1) a reduction in the number of equations that must be solved to model a reacting flow; 2) a reduction in the stiffness of the system of equations; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, and to reuse the same table for multiple simulations.
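The structural idea behind RCCE, in the standard Keck-style formulation (schematic, in our notation, up to pressure-dependent factors), is that minimizing the Gibbs free energy subject to a handful of constraints C_i = Σ_j a_ij N_j gives each species mole fraction a constrained-equilibrium form

\[
X_j \;\propto\; \exp\!\left(-\frac{\mu_j^{\circ}(T)}{RT}\right)\exp\!\left(\sum_i \gamma_i\, a_{ij}\right),
\]

where a_ij is species j's contribution to constraint i and the constraint potentials γ_i (Lagrange multipliers) are set so the constraints take their rate-controlled values. The flow solver then integrates only the few constraints and potentials rather than a conservation equation per species, which is the source of benefits 1) and 2) above.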
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at the high temperatures internal to a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases it creates is provided. This link, combined with a detailed investigation of a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and of their implications, to be presented.
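At its core, such a resin-to-gas link is an elemental mass balance. With an assumed constant char yield Γ (mass of char per mass of virgin resin; the notation is ours, for illustration), the elemental mass fractions y_k of the pyrolysis gas follow from those of the virgin resin and the char:

\[
y_k^{\mathrm{gas}} = \frac{y_k^{\mathrm{virgin}} - \Gamma\, y_k^{\mathrm{char}}}{1 - \Gamma}.
\]

Elements retained in the char (chiefly carbon) are depleted from the gas, while hydrogen and oxygen are concentrated into it, which then fixes the elemental composition available to the reacting pyrolysis gas mixture.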
Through the use of computational fluid mechanics and computational chemistry methods, this work makes significant contributions to advancing ground-testing facilities, computational methods for reacting flows, and ablation modeling.