9 results for Experimental methods
in CaltechTHESIS
Abstract:
This thesis describes studies of a ligand-gated ion channel (LGIC), the serotonin type 3A receptor (5-HT3AR). Structure-function experiments using unnatural amino acid mutagenesis are described, as well as experiments on the methodology of unnatural amino acid mutagenesis itself. Chapter 1 introduces LGICs, the experimental methods used, and an overview of unnatural amino acid mutagenesis.
In Chapter 2, the binding orientation of the clinically available drugs ondansetron and granisetron within the 5-HT3AR is determined through a combination of unnatural amino acid mutagenesis and an inhibition-based assay. A cation-π interaction is found for both ondansetron and granisetron with a specific tryptophan residue (Trp183, TrpB) of the mouse 5-HT3AR, which establishes a binding orientation for these drugs.
In Chapter 3, further studies of ondansetron and granisetron with the 5-HT3AR were performed. The primary determinant of binding for these drugs was found not to involve interactions with a specific tyrosine residue (Tyr234, TyrC2). In the course of these studies, evidence was found supporting a cation-π interaction between the synthetic agonist meta-chlorophenylbiguanide and TyrC2.
In Chapter 4, a direct chemical acylation strategy, mediated by lanthanum(III) and amino acid phosphate esters, was implemented to prepare full-length aminoacylated suppressor tRNA. The resulting aminoacyl-tRNA is shown to be translationally competent in Xenopus oocytes.
Appendix A.1 gives details of a pharmacological method, known as Schild analysis, for determining the equilibrium dissociation constant, KB, of a competitive antagonist with a receptor. Appendix A.2 describes an examination of the inhibitory activity of new chemical analogs of the 5-HT3AR antagonist ondansetron. Appendix A.3 reports the organic synthesis of an intermediate for a new unnatural amino acid. Appendix A.4 covers an additional methodological examination of the preparation of aminoacyl-tRNA.
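For reference, the standard Schild relation (a textbook form, not quoted from this thesis) extracts KB from the rightward shift of the agonist concentration-response curve produced by an antagonist concentration [B]:

\[ \log(\mathrm{DR} - 1) = \log[B] - \log K_B , \]

where DR is the dose ratio, i.e. the factor by which the agonist EC50 increases in the presence of the antagonist; a plot of log(DR - 1) against log[B] with unit slope gives KB as the antagonist concentration at which the line crosses zero.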
Abstract:
The complex domain structure in ferroelectrics gives rise to electromechanical coupling, and its evolution (via domain switching) results in a time-dependent (i.e. viscoelastic) response. Although ferroelectrics are used in many technological applications, most of these applications do not exploit the viscoelastic response, mainly because of a lack of understanding and of accurate models for its description and prediction. The aim of this thesis research is therefore to gain a better understanding of the influence of domain evolution in ferroelectrics on their dynamic mechanical response. There have been few studies on the viscoelastic properties of ferroelectrics, mainly due to a lack of experimental methods. Therefore, an apparatus and method called Broadband Electromechanical Spectroscopy (BES) was designed and built. BES allows the simultaneous application of dynamic mechanical and electrical loading in a vacuum environment. Using BES, the dynamic stiffness and loss tangent in bending and torsion of a particular ferroelectric, viz. lead zirconate titanate (PZT), were characterized for different combinations of electrical and mechanical loading frequencies throughout the entire electric displacement hysteresis. Experimental results showed significant increases in loss tangent (by nearly an order of magnitude) and in compliance during domain switching, which shows promise as a new approach to structural damping. A continuum model of the viscoelasticity of ferroelectrics was developed, which incorporates microstructural evolution via internal variables and associated kinetic relations. For the first time, through a new linearization process, the incremental dynamic stiffness and loss tangent were computed throughout the entire electric displacement hysteresis for different combinations of mechanical and electrical loading frequencies. The model accurately captured the experimental results. Using the understanding gained from the characterization and modeling of PZT, two applications of domain switching kinetics were explored using Micro Fiber Composites (MFCs). Proofs of concept of set-and-hold actuation and of structural damping using MFCs were demonstrated.
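As background, the loss tangent reported in such dynamic mechanical measurements is conventionally defined (a standard definition, not a quotation from the thesis) as the ratio of the loss modulus to the storage modulus,

\[ \tan\delta = \frac{E''}{E'} , \]

where \(\delta\) is the phase lag between the sinusoidal load and the material response; the near order-of-magnitude rise in tan δ during domain switching thus corresponds to a large increase in the energy dissipated per loading cycle.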
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
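To make the sequential logic concrete, the following is a minimal illustrative sketch of a Bayesian adaptive design loop of the kind described above. It is not the BROAD implementation: for simplicity it scores candidate tests by plain expected information gain (one of the baseline criteria the thesis compares against) rather than by EC2, and all names and the toy data are hypothetical.

import numpy as np

def update_posterior(prior, likelihoods, response):
    # Bayes update over hypotheses given the observed response to one test.
    posterior = prior * likelihoods[:, response]
    return posterior / posterior.sum()

def expected_information_gain(prior, likelihoods):
    # Score a test by the expected reduction in entropy over hypotheses.
    entropy = lambda p: -np.sum(p * np.log(p + 1e-12))
    p_response = prior @ likelihoods  # marginal probability of each response
    gain = entropy(prior)
    for r, pr in enumerate(p_response):
        post = prior * likelihoods[:, r]
        post /= post.sum()
        gain -= pr * entropy(post)
    return gain

# Toy setup: 3 competing theories (hypotheses) and 4 candidate binary-choice tests.
rng = np.random.default_rng(0)
prior = np.ones(3) / 3
tests = rng.dirichlet(np.ones(2), size=(4, 3))  # tests[t][h][r] = P(response r | theory h, test t)

for step in range(3):
    scores = [expected_information_gain(prior, tests[t]) for t in range(len(tests))]
    best = int(np.argmax(scores))                    # greedily pick the most informative test
    response = rng.choice(2, p=prior @ tests[best])  # simulate the subject's response
    prior = update_posterior(prior, tests[best], response)
    print(f"step {step}: test {best}, response {response}, posterior {prior.round(3)}")

The accelerated greedy variant mentioned above would additionally avoid re-scoring every candidate test at every step, lazily re-evaluating only tests whose cached scores could still be maximal, an optimization that adaptive submodularity justifies.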
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we do not find any signatures of it in our data.
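For orientation, standard textbook forms of two of these theories (the thesis's exact parameterizations may differ) are the CRRA utility

\[ u(x) = \frac{x^{1-\rho}}{1-\rho} \quad (\rho \neq 1), \qquad u(x) = \ln x \quad (\rho = 1), \]

and the prospect-theory value function, defined over gains and losses relative to a reference point and steeper for losses,

\[ v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases}, \qquad \lambda > 1 , \]

where λ > 1 captures loss aversion.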
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models of quasi-hyperbolic (α, β) discounting and fixed-cost discounting, and generalized hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
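In standard notation (textbook forms; the thesis's parameterizations may differ), the discount functions being compared are

\[ D_{\mathrm{exp}}(t) = \delta^{t}, \qquad D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad D_{\mathrm{quasi}}(0) = 1,\ D_{\mathrm{quasi}}(t) = \beta\,\delta^{t} \ (t > 0), \qquad D_{\mathrm{gen}}(t) = (1 + \alpha t)^{-\beta/\alpha}, \]

where the present-bias parameter β < 1 in the quasi-hyperbolic form produces the characteristic drop in value between an immediate payoff and any delayed one.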
In these models, the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than can be explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute should increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The problem of finding the depths of glaciers and the current methods are discussed briefly. Radar methods are suggested as a possible improvement for, or adjunct to, seismic and gravity survey methods. The feasibility of propagating electromagnetic waves in ice and the maximum range to be expected are then investigated theoretically with the aid of experimental data on the dielectric properties of ice. It is found that the maximum expected range is great enough to measure the depth of many glaciers at the lower radar frequencies if there is not too much liquid water present. Greater ranges can be attained by going to lower frequencies.
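For context, the propagation speed underlying such depth estimates follows the standard low-loss dielectric relation (the permittivity figure below is a typical value for cold ice, not a number taken from this thesis):

\[ v = \frac{c}{\sqrt{\varepsilon_r}} \approx \frac{3\times 10^{8}\ \mathrm{m/s}}{\sqrt{3.2}} \approx 1.7\times 10^{8}\ \mathrm{m/s}, \]

so a two-way echo delay t corresponds to a depth d = v t / 2, and any liquid water, by changing \(\varepsilon_r\) and increasing the attenuation, directly degrades both the achievable range and the inferred depth.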
The results are given of two expeditions, in two different years, to the Seward Glacier in the Yukon Territory. Experiments were conducted on a small valley glacier whose depth was determined by seismic sounding. Many echoes were received, but their identification was uncertain. Using the best echoes, a profile was obtained each year, but the two profiles were not in exact agreement with each other. It could not be definitely established that echoes had been received from bedrock. Agreement with seismic methods for a considerable number of glaciers would have to be obtained before radar methods could be relied upon. The presence of liquid water in the ice is believed to be one of the greatest obstacles: besides increasing the attenuation and possibly reflecting energy, it makes it impossible to predict the velocity of propagation. The equipment used was far from adequate for such purposes, so many of the difficulties can be attributed to this. Partly because of this, and because there are glaciers with very little liquid water present, radar methods are believed to be worthy of further research for the exploration of glaciers.
Abstract:
Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, the improved Widom particle insertion method and the small-cell coexistence method, which we developed in order to compute melting temperatures both accurately and quickly.
We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals that provide the chemical potential of a physical system. This idea enables us to calculate the chemical potentials of liquids directly from first principles, without the help of any reference system, which is required in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is then used to locate the melting temperature. The calculated results agree closely with experiment.
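For reference, the underlying Widom relation (standard form, not a quotation of the thesis's working equations) obtains the excess chemical potential from the average Boltzmann factor of the energy change ΔU upon inserting a test particle into an N-particle configuration:

\[ \mu_{\mathrm{ex}} = -k_{\mathrm{B}} T \,\ln \left\langle e^{-\Delta U / k_{\mathrm{B}} T} \right\rangle_{N} ; \]

the efficiency gain described above comes from concentrating the insertions in cavities, where the Boltzmann factor is non-negligible, rather than sampling insertion points uniformly.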
We propose the small-cell coexistence method, based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of large system size, and an accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl demonstrate the accuracy and flexibility of the method in practical applications. The method is a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.
We present in detail two examples involving refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations, in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include the melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated for the motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment for this known but structurally complex compound is a good indication that the computational methods described can be used within a screening framework to identify novel refractory materials. Second, we report an extensive investigation of the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula TaxHf1-xCy, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting temperature of any solid. Very few measurements of the melting points of tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at such extreme temperatures. This investigation allows us to identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.
Abstract:
We investigated four distinct methods for achieving scalable, deterministic integration of quantum emitters into ultra-high Q/V photonic crystal cavities: selective area heteroepitaxy, engineered photoemission from silicon nanostructures, wafer bonding and dimensional reduction of III-V quantum wells, and cavity-enhanced optical trapping. In these areas, we demonstrated site-selective heteroepitaxy, size-tunable photoluminescence from silicon nanostructures, Purcell modification of quantum well emission spectra, and limits of cavity-enhanced optical trapping designs that exceed any reports in the literature and suggest the feasibility of capturing and detecting nanostructures with dimensions below 10 nm. In addition to process scalability and the requirement of achieving accurate spectral and spatial overlap between the emitter and the cavity, these techniques paid specific attention to the ability to separate the cavity and emitter material systems, in order to allow each to be selected optimally and, eventually, to enable monolithic integration with other photonic and electronic circuitry.
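For context, the motivation for ultra-high Q/V cavities in emitter integration is the standard Purcell enhancement of spontaneous emission (textbook form, not quoted from the thesis),

\[ F_{P} = \frac{3}{4\pi^{2}} \left(\frac{\lambda}{n}\right)^{3} \frac{Q}{V}, \]

which grows with the quality factor Q and shrinks with the mode volume V, provided the emitter is spectrally resonant with the cavity mode and located at its field antinode; this is precisely the spectral and spatial overlap requirement noted above.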
We also developed an analytic photonic crystal design process that yields optimized cavity tapers with minimal computational effort, and reported a general cavity modification that exhibits improved fabrication tolerance by relying exclusively on positional rather than dimensional tapering. We compared several experimental coupling techniques for device characterization. Significant effort was devoted to optimizing cavity fabrication, including the use of atomic layer deposition to improve surface quality, exploration of factors affecting design fracturing, and automated analysis of SEM images. Using the optimized fabrication procedures, we experimentally demonstrated 1D photonic crystal nanobeam cavities exhibiting the highest Q/V reported on substrate. Finally, we analyzed the bistable behavior of the devices to quantify the nonlinear optical response of our cavities.
Abstract:
The intent of this study is to provide a formal apparatus that facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.
A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.
To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.
An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.
The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed, and certain tacit, unstatable knowledge is shown to be necessary to fully comprehend the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.
A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.
The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.
Abstract:
Experimental and theoretical studies have been made of the electrothermal waves occurring in a nonequilibrium MHD plasma. These waves are caused by an instability that occurs when a plasma whose conductivity depends on current density is subjected to crossed electric and magnetic fields. Theoretically, the waves were studied by developing and solving the equations of a steady, one-dimensional nonuniformity in electron density. From these nonlinear equations, predictions of the maximum amplitude and of the half width of steady waves could be obtained. Experimentally, the waves were studied in a nonequilibrium discharge produced in a potassium-seeded argon plasma at 2000 K and 1 atm pressure. The behavior of such a discharge with four different configurations of electrodes was determined from photographs, photomultiplier measurements, and voltage probes. These four configurations were chosen to produce steady waves, to check the stability of steady waves, and to observe the manifestation of the waves in an MHD generator or accelerator configuration.
Steady, one-dimensional waves were found to exist in a number of situations, and where they existed, their characteristics agreed with the predictions of the steady theory. Some extensions of this theory were necessary, however, to describe the transient phenomena occurring in the inlet region of a discharge transverse to the gas flow. It was also found that, in a discharge away from the stabilizing effect of the electrodes, steady waves became unstable at large Hall parameters. Methods for predicting the effective electrical conductivity and Hall parameter of a plasma with nonuniformities caused by the electrothermal waves were also studied. Using these methods and the amplitudes predicted by the steady theory, it was found that the measured decrease in the transverse conductivity of an MHD device, 50 per cent at a Hall parameter of 5, could be accounted for in terms of the electrothermal instability.
Abstract:
This thesis presents methods by which electrical analogies can be obtained for nonlinear systems. The accuracy of these methods is investigated and several specific types of nonlinear equations are studied in detail.
In Part I a general method is given for obtaining electrical analogs of nonlinear systems with one degree of freedom. Loop and node methods are compared and the stability of the loop analogy is briefly considered.
Parts II and III give a description of the equipment and a discussion of its accuracy. Comparisons are made between experimental and analytic solutions of linear systems.
Part IV is concerned with systems having a nonlinear restoring force. In particular, solutions of Duffing's equation are obtained, both by using the electrical analogy and by approximate analytical methods.
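In a standard form (the thesis's notation may differ), Duffing's equation is

\[ \ddot{x} + c\,\dot{x} + \alpha x + \beta x^{3} = F \cos \omega t , \]

where the cubic term \(\beta x^{3}\) represents the nonlinear restoring force.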
Systems with nonlinear damping are considered in Part V. Two specific examples are chosen: (1) forced oscillations and (2) self-excited oscillations (van der Pol’s equation). Comparisons are made with approximate analytic solutions.
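In a standard form, van der Pol's equation for self-excited oscillations is

\[ \ddot{x} - \mu \left(1 - x^{2}\right) \dot{x} + x = 0 , \]

in which the damping term changes sign with amplitude, feeding energy into small oscillations and removing it from large ones, so that a limit cycle develops.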
Part VI gives experimental data for a system obeying Mathieu's equation. Regions of stability are obtained. Examples of subharmonic, ultraharmonic, and ultrasubharmonic oscillations are shown.
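In a standard form, Mathieu's equation is

\[ \ddot{x} + \left(a - 2q \cos 2t\right) x = 0 , \]

whose solutions are stable or unstable depending on the parameter point (a, q), which is the origin of the regions of stability mapped experimentally in Part VI.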