4 results for "experimental designs" in CaltechTHESIS


Relevance: 60.00%

Abstract:

The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their actions or other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive equilibrium price dynamics in an overlapping generations environment with supply and demand shocks.

The first chapter is a game-theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Including this belief in the equilibrium calculation provides a richer class of reputation building possibilities than when perfect implementation is assumed.
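To make the error process concrete, here is a minimal sketch of one way such trembles could enter an observation model; the uniform tremble and the parameter eps are illustrative assumptions, not the thesis's specification.

```python
# Illustrative sketch of an "error prone agents" observation model.
# Assumption (not from the abstract): with tremble probability eps, the
# intended action is replaced by an action drawn uniformly at random.

def action_distribution(intended, actions, eps):
    """Probability of each observed action given the intended one."""
    n = len(actions)
    return {
        a: (1 - eps) + eps / n if a == intended else eps / n
        for a in actions
    }

# Example: an incumbent intends to "fight" entry but trembles 5% of the time.
print(action_distribution("fight", ["fight", "accommodate"], eps=0.05))
# {'fight': 0.975, 'accommodate': 0.025}
```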

In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model, and of other models, against data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some modifications explain deviations from the sequential equilibrium quite well, the degree to which they must be applied shows no consistency across different experimental designs.
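As a hedged illustration of the estimation step, the sketch below fits a single tremble rate by maximizing a log-likelihood over invented data; the actual estimation in the thesis is over full equilibrium models.

```python
import math

# Minimal maximum-likelihood sketch: fit a tremble rate eps to observed
# plays, assuming the model predicts "fight" and deviations occur with
# probability eps (a simplification of the actual estimation problem).

def log_likelihood(eps, observed, predicted="fight"):
    ll = 0.0
    for action in observed:
        p = (1 - eps) if action == predicted else eps
        ll += math.log(p)
    return ll

observed = ["fight"] * 18 + ["accommodate"] * 2   # invented data
best_eps = max((e / 100 for e in range(1, 100)),
               key=lambda e: log_likelihood(e, observed))
print(best_eps)   # ~0.10, the empirical deviation frequency
```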

The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for interest in this equilibrium is that overlapping generations environments are fruitful for the study of price dynamics, especially in experimental settings. The perfect-foresight assumption is an important starting point for examining these environments because it produces the ex post socially efficient allocation of goods. This characteristic makes it a natural baseline against which other models of price dynamics can be compared.

Relevance: 60.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining precision and descriptive power. Increased psychological realism, however, comes at the cost of more parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next test to run. BROAD selects tests using the Equivalence Class Edge Cutting (EC2) criterion. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
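The sketch below shows the shape of the adaptive loop with a toy EC2-style objective; the theories, predictions, and noiseless update rule are placeholders we invented for illustration, not BROAD's actual implementation.

```python
from itertools import combinations

# Toy sketch of an EC2-style adaptive test loop. Hypotheses (theories)
# sit in equivalence classes; edges connect hypotheses in different
# classes; a test "cuts" an edge when its response eliminates at least
# one endpoint. Responses are treated as noiseless here.

belief = {"PT": 0.25, "EV": 0.25, "CRRA": 0.25, "Moments": 0.25}
theory_class = {"PT": 0, "EV": 1, "CRRA": 2, "Moments": 3}
predictions = {          # predicted choice (0 or 1) per theory per test
    "test_A": {"PT": 0, "EV": 1, "CRRA": 1, "Moments": 0},
    "test_B": {"PT": 0, "EV": 0, "CRRA": 1, "Moments": 1},
}

def expected_cut_weight(test, belief):
    """Expected weight of hypothesis-graph edges cut by the test."""
    total = 0.0
    for response in (0, 1):
        p_resp = sum(w for h, w in belief.items()
                     if predictions[test][h] == response)
        cut = sum(belief[g] * belief[h]
                  for g, h in combinations(belief, 2)
                  if theory_class[g] != theory_class[h]
                  and (predictions[test][g] != response
                       or predictions[test][h] != response))
        total += p_resp * cut
    return total

def update(belief, test, response):
    """Noiseless posterior: keep consistent hypotheses, renormalize."""
    post = {h: w for h, w in belief.items()
            if predictions[test][h] == response}
    z = sum(post.values())
    return {h: w / z for h, w in post.items()}

best = max(predictions, key=lambda t: expected_cut_weight(t, belief))
belief = update(belief, best, response=0)   # pretend the subject chose 0
print(best, belief)
```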

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
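For concreteness, here are common textbook forms of the four theory classes applied to a lottery of (outcome, probability) pairs; the parameter values (including the classic Tversky-Kahneman prospect theory estimates) are illustrative, not the values estimated in the thesis.

```python
# Textbook forms of the four risk-theory classes, evaluated on a lottery
# given as (outcome, probability) pairs. All parameters are illustrative.

def expected_value(lottery):
    return sum(p * x for x, p in lottery)

def crra(lottery, rho=0.5, endowment=200.0):
    # CRRA utility over final wealth: u(w) = w^(1-rho) / (1-rho)
    return sum(p * (endowment + x) ** (1 - rho) / (1 - rho)
               for x, p in lottery)

def moments(lottery, a=1.0, b=0.01):
    # Moments model: trade off the mean against the variance
    m = expected_value(lottery)
    return a * m - b * sum(p * (x - m) ** 2 for x, p in lottery)

def prospect(lottery, alpha=0.88, lam=2.25, gamma=0.61):
    # Prospect theory: loss aversion lam and probability weighting w(p),
    # reference point at zero (Tversky-Kahneman 1992 estimates)
    def v(x):
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha
    def w(p):
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return sum(w(p) * v(x) for x, p in lottery)

gamble = [(100.0, 0.5), (-50.0, 0.5)]   # win 100 or lose 50
for f in (expected_value, crra, moments, prospect):
    print(f.__name__, f(gamble))
```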

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
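The discount functions under comparison have standard textbook forms, sketched below with invented parameter values; note the quasi-hyperbolic sketch uses the common (β, δ) notation, whereas the abstract writes the parameters as (α, β).

```python
# Common textbook discount functions; D(t) multiplies a payoff received
# after delay t. Parameter values are illustrative.

def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.9):
    # "Present bias": every delayed payoff carries a one-time penalty beta
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta=2.0):
    return (1.0 + alpha * t) ** (-beta / alpha)

for t in (0, 1, 5, 20):
    print(t, exponential(t), hyperbolic(t),
          quasi_hyperbolic(t), generalized_hyperbolic(t))
```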

In these models, the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will exceed what its price elasticity alone explains; even more importantly, when the item is no longer discounted, demand for its close substitute will increase disproportionately. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreases with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
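A minimal sketch of the idea follows, assuming a logit choice rule and an invented reference-dependent utility; the thesis's actual specification is not quoted here.

```python
import math

# Sketch: logit discrete choice with a reference-dependent, loss-averse
# utility over price. All names and parameter values are invented.

def utility(price, ref_price, eta=1.0, lam=2.0):
    gain = max(ref_price - price, 0.0)   # paying less than the reference
    loss = max(price - ref_price, 0.0)   # paying more than the reference
    return -eta * price + gain - lam * loss   # losses loom larger

def choice_probs(prices, ref_prices):
    exps = [math.exp(utility(p, r)) for p, r in zip(prices, ref_prices)]
    z = sum(exps)
    return [e / z for e in exps]

# After a discount ends, the item's price sits above its reference price,
# and the loss term shifts demand toward the close substitute.
print(choice_probs(prices=[10.0, 9.5], ref_prices=[8.0, 9.5]))
```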

In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 40.00%

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers, and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine that produces languages strictly stronger than the regular languages and at most as strong as the context-free grammars. This is a significant advance in the theory of active self-assembly, as prior models were either entirely theoretical or implementable only in the context of macro-scale robotics.
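As a toy illustration of why insertion already exceeds regular power (our example, not a construction from the thesis): a single rule that inserts “ab” between an adjacent a and b, starting from the seed “ab”, generates exactly the strings aⁿbⁿ, a context-free but non-regular language.

```python
# Toy rewriting system: one rule, "insert 'ab' between an adjacent 'a'
# and 'b'". From the seed "ab" it generates exactly the strings a^n b^n,
# illustrating power strictly beyond regular grammars.

def insert_step(polymer):
    """All strings reachable by one insertion of 'ab' at an a-b boundary."""
    return {
        polymer[: i + 1] + "ab" + polymer[i + 1 :]
        for i in range(len(polymer) - 1)
        if polymer[i] == "a" and polymer[i + 1] == "b"
    }

frontier = {"ab"}
for _ in range(3):
    frontier = set().union(*(insert_step(s) for s in frontier))
print(sorted(frontier))   # ['aaaabbbb'] after three insertions
```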

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion, and we show via spectrofluorimetry and gel electrophoresis experiments that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism, showing the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4, and we experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
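The arithmetic behind the logarithmic-time claim can be seen in an idealized count, a sketch under the assumption that every insertion site fires once per round:

```python
# Idealized count behind the logarithmic-time growth claim: every
# insertion site incorporates one monomer per round and leaves two new
# sites, so the polymer doubles each round (real kinetics are slower).

length, sites, rounds = 1, 1, 0
target = 1_000_000
while length < target:
    length += sites   # each site accepts one monomer this round
    sites *= 2        # each insertion exposes two new insertion sites
    rounds += 1
print(rounds, length)   # 20 rounds to pass one million monomers
```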

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random-walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance: 30.00%

Abstract:

A novel spectroscopy of trapped ions is proposed that will bring single-ion detection sensitivity to the observation of magnetic resonance spectra. The approaches developed here are aimed at resolving one of the fundamental problems of molecular spectroscopy: the apparent incompatibility, in existing techniques, between high information content (and therefore good species discrimination) and high sensitivity. Methods for studying both electron spin resonance (ESR) and nuclear magnetic resonance (NMR) are designed. They assume established methods for trapping ions in a high magnetic field and observing the trapping frequencies with high resolution (< 1 Hz) and sensitivity (single ion) by electrical means. The introduction of a magnetic bottle field gradient couples the spin and spatial motions together and leads to a small spin-dependent force on the ion, which Dehmelt exploited to observe directly the perturbation of the ground-state electron's axial frequency by its spin magnetic moment.
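For orientation, the size of this spin-dependent effect follows from treating the spin's magnetic potential energy as a perturbation of the axial well; the following is the standard continuous Stern-Gerlach estimate, our summary rather than a formula quoted from the thesis.

```latex
% Standard textbook estimate: a magnetic bottle B_z(z) = B_0 + B_2 z^2
% adds a spin-dependent term to the axial potential well.
\[
  U(z) = -\mu B_z(z) = -\mu\left(B_0 + B_2 z^2\right)
  \quad\Longrightarrow\quad
  \omega_z' = \sqrt{\omega_z^{2} - \frac{2\mu B_2}{m}}
  \;\approx\; \omega_z \mp \frac{|\mu|\, B_2}{m\,\omega_z},
\]
```

Because the shift scales as μ/m, moving from the electron to heavy molecular ions with small nuclear moments shrinks it by many orders of magnitude, which is what motivates the amplification methods described next.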

A series of fundamental innovations is described in order to extend magnetic resonance to the higher masses of molecular ions (100 amu ≈ 2 × 10^5 electron masses) and smaller magnetic moments (nuclear moments ≈ 10^(-3) of the electron moment). First, it is demonstrated how time-domain trapping frequency observations before and after magnetic resonance can be used to make cooling of the particle to its ground state unnecessary. Second, adiabatic cycling of the magnetic bottle off between detection periods is shown to be practical and to allow high-resolution magnetic resonance to be encoded pointwise as the presence or absence of trapping frequency shifts. Third, methods of inducing spin-dependent work on the ion orbits with magnetic field gradients and Larmor frequency irradiation are proposed which greatly amplify the attainable shifts in trapping frequency.

The dissertation explores the basic concepts behind ion trapping, adopting a variety of classical, semiclassical, numerical, and quantum mechanical approaches to derive spin-dependent effects, design experimental sequences, and corroborate results from one approach with those from another. The first proposal presented builds on Dehmelt's experiment by combining a "before and after" detection sequence with novel signal processing to reveal ESR spectra. A more powerful technique for ESR is then designed which uses axially synchronized spin transitions to perform spin-dependent work in the presence of a magnetic bottle, which also converts axial amplitude changes into cyclotron frequency shifts. A third use of the magnetic bottle is to selectively trap ions with small initial kinetic energy. A dechirping algorithm corrects for undesired frequency shifts associated with damping by the measurement process.

The most general approach presented is spin-locked internally resonant ion cyclotron excitation, a true continuous Stern-Gerlach effect. A magnetic field gradient modulated at both the Larmor and cyclotron frequencies is devised which leads to cyclotron acceleration proportional to the transverse magnetic moment of a coherent state of the particle and radiation field. A preferred method of using this to observe NMR as an axial frequency shift is described in detail. In the course of this derivation, a new quantum mechanical description of ion cyclotron resonance is presented which is easily combined with spin degrees of freedom to provide a full description of the proposals.

Practical, technical, and experimental issues surrounding the feasibility of the proposals are addressed throughout the dissertation. Numerical ion trajectory simulations and analytical models are used to predict the effectiveness of the new designs as well as their sensitivity and resolution. These checks on the methods proposed provide convincing evidence of their promise in extending the wealth of magnetic resonance information to the study of collisionless ions via single-ion spectroscopy.