6 results for choice test

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
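As a rough illustration of the idea behind the "noise filters" described above, the sketch below scales each harmonic of an accelerogram by an assumed signal-to-noise ratio before integrating once for velocity. The SNR model, parameters, and synthetic record are illustrative assumptions, not the processing code developed in the thesis.

```python
# Hedged sketch (not the thesis code): a Wiener-style "noise filter" that scales
# each harmonic of an accelerogram by an assumed signal-to-noise ratio, then
# integrates once to obtain velocity. SNR model and parameters are illustrative.
import numpy as np

def noise_filter(accel, dt, snr):
    """accel: acceleration samples; dt: sampling interval; snr: per-harmonic SNR array."""
    A = np.fft.rfft(accel)
    gain = snr / (1.0 + snr)            # attenuates harmonics where noise dominates
    accel_f = np.fft.irfft(A * gain, n=len(accel))
    velocity = np.cumsum(accel_f) * dt  # crude rectangle-rule integration, for illustration
    return accel_f, velocity

# Example with a synthetic record: a damped sinusoid plus white noise, and a
# hypothetical SNR that peaks near the signal's dominant frequency.
dt = 0.01
t = np.arange(0, 20, dt)
accel = np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)
freqs = np.fft.rfftfreq(t.size, dt)
snr = 10.0 * np.exp(-((freqs - 1.5) / 2.0) ** 2) + 0.1   # assumed, for illustration only
accel_f, vel = noise_filter(accel, dt, snr)
```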

Relevance:

20.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
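To make the sequential structure concrete, here is a minimal sketch of a Bayesian adaptive testing loop. The thesis selects tests with the EC2 objective; the stand-in score below is a plain expected-entropy-reduction criterion, used only to show the prior-test-response-posterior cycle, and all names and data are illustrative.

```python
# Hedged sketch of the sequential structure of an adaptive design loop (prior ->
# choose test -> observe response -> posterior update). The thesis uses the EC2
# objective; a simple expected-entropy-reduction score stands in for illustration.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def choose_test(prior, likelihoods):
    """likelihoods[t, h, r] = P(response r | hypothesis h, test t)."""
    best_t, best_score = None, -np.inf
    for t in range(likelihoods.shape[0]):
        score = 0.0
        for r in range(likelihoods.shape[2]):
            p_r = np.sum(prior * likelihoods[t, :, r])     # predictive prob. of response r
            if p_r > 0:
                post = prior * likelihoods[t, :, r] / p_r  # posterior if r were observed
                score += p_r * (entropy(prior) - entropy(post))
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def update(prior, likelihoods, t, r):
    post = prior * likelihoods[t, :, r]
    return post / post.sum()

# Toy usage: two hypotheses, two binary-response tests (illustrative numbers).
prior = np.array([0.5, 0.5])
likelihoods = np.array([[[0.9, 0.1], [0.2, 0.8]],    # test 0
                        [[0.6, 0.4], [0.5, 0.5]]])   # test 1
t = choose_test(prior, likelihoods)
prior = update(prior, likelihoods, t, r=0)
```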

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are best described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., masking their true preferences and choosing differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
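For reference, the sketch below writes out textbook forms of the competing value models over a lottery (expected value, CRRA utility, and a prospect-theory value with probability weighting). The functional forms and parameters are standard illustrative choices, not the exact specifications estimated in the thesis.

```python
# Hedged sketch: textbook forms of the competing models over a lottery given as
# [(outcome, probability), ...]. Parameters are illustrative, not thesis estimates.
import numpy as np

def expected_value(lottery):
    return sum(p * x for x, p in lottery)

def crra_utility(lottery, wealth=100.0, rho=0.5):
    # constant relative risk aversion over final wealth
    u = lambda w: (w ** (1 - rho) - 1) / (1 - rho)
    return sum(p * u(wealth + x) for x, p in lottery)

def prospect_value(lottery, alpha=0.88, lam=2.25, gamma=0.61):
    # Kahneman-Tversky style value and probability-weighting functions
    v = lambda x: x ** alpha if x >= 0 else -lam * (-x) ** alpha
    w = lambda p: p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return sum(w(p) * v(x) for x, p in lottery)

lottery = [(50.0, 0.5), (-20.0, 0.5)]   # win 50 or lose 20, each with probability 0.5
print(expected_value(lottery), crra_utility(lottery), prospect_value(lottery))
```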

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
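The discount functions being compared can be summarized in a few lines, as in the sketch below; the parameter values are illustrative only, and naming conventions for the quasi-hyperbolic parameters vary.

```python
# Hedged sketch: standard discount functions for the model classes compared in
# this experiment (parameters are illustrative only).
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.3):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # "present bias": immediate rewards are undiscounted, any delayed reward
    # takes a one-off extra penalty beta
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, a=1.0, b=2.0):
    return (1.0 + a * t) ** (-b / a)

# A subject prefers the earlier option iff D(t1)*x1 > D(t2)*x2
x1, t1, x2, t2 = 50.0, 0, 80.0, 30
print(exponential(t1) * x1 > exponential(t2) * x2)
```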

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase disproportionately. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
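A minimal sketch of the kind of model this describes, assuming a multinomial-logit discrete-choice specification with a reference-dependent, loss-averse price term; the actual specification, covariates, and estimation procedure used on the retailer data may differ.

```python
# Hedged sketch: logit choice probabilities with loss aversion around a price
# reference point. All variable names, parameters, and data are illustrative.
import numpy as np

def choice_probabilities(prices, reference_prices, quality, lam=2.0, beta_p=1.0):
    """Multinomial-logit probabilities over alternatives with loss aversion on price."""
    gain = np.maximum(reference_prices - prices, 0.0)   # price below reference: a "gain"
    loss = np.maximum(prices - reference_prices, 0.0)   # price above reference: a "loss"
    utility = quality - beta_p * prices + beta_p * gain - lam * beta_p * loss
    expu = np.exp(utility - utility.max())
    return expu / expu.sum()

prices = np.array([10.0, 9.0, 10.5])
refs = np.array([10.0, 10.0, 10.0])      # e.g. last observed (non-discounted) prices
quality = np.array([5.0, 4.8, 5.1])
print(choice_probabilities(prices, refs, quality))
```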

In future work, BROAD could be applied widely to test different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

20.00%

Publisher:

Abstract:

This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator PLe^{it(1-P)L}LP to shape the Zwanzig equation into a generalized Fokker-Planck equation that involves a diffusion tensor depending on the test particle's momentum and on time.
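Schematically, a generalized Fokker-Planck equation of this type can be written in a form such as the following, with a momentum- and time-dependent diffusion tensor; this is a generic textbook form, and the kernel actually derived in the thesis may differ in detail.

```latex
% Hedged illustration only: one common form of a generalized Fokker-Planck
% equation with a momentum- and time-dependent diffusion tensor.
\frac{\partial f(\mathbf{r},\mathbf{p},t)}{\partial t}
  + \frac{\mathbf{p}}{M}\cdot\nabla_{\mathbf{r}} f
  = \frac{\partial}{\partial p_i}
    \left[ D_{ij}(\mathbf{p},t)
      \left( \frac{\partial}{\partial p_j} + \beta\,\frac{p_j}{M} \right) f \right],
\qquad \beta = \frac{1}{k_B T}.
```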

In part two the resultant equation is studied in some detail for the case of test particle motion in a weakly coupled Lorentz Gas. The diffusion tensor for this system is considered. Some of its properties are calculated; it is computed explicitly for the case of a Gaussian potential of interaction.

The equation for the test particle distribution function can be put into the form of an inhomogeneous Schroedinger equation. The term corresponding to the potential energy in the Schroedinger equation is considered. Its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.

Relevance:

20.00%

Publisher:

Abstract:

The Supreme Court’s decision in Shelby County has severely limited the power of the Voting Rights Act. I argue that Congressional attempts to pass a new coverage formula are unlikely to gain the necessary Republican support. Instead, I propose a new strategy that takes a “carrot and stick” approach. As the stick, I suggest amending Section 3 to eliminate the need to prove that discrimination was intentional. For the carrot, I envision a competitive grant program similar to the highly successful Race to the Top education grants. I argue that this plan could pass the currently divided Congress.

Without Congressional action, Section 2 is more important than ever before. A successful Section 2 suit requires evidence that voting in the jurisdiction is racially polarized. Accurately and objectively assessing the level of polarization has been and continues to be a challenge for experts. Existing ecological inference methods require estimating polarization levels in individual elections. This is a problem because the Courts want to see a history of polarization across elections.

I propose a new 2-step method to estimate racially polarized voting in a multi-election context. The procedure builds upon the Rosen, Jiang, King, and Tanner (2001) multinomial-Dirichlet model. After obtaining election-specific estimates, I suggest regressing those results on election-specific variables, namely candidate quality, incumbency, and ethnicity of the minority candidate of choice. This allows researchers to estimate the baseline level of support for candidates of choice and test whether the ethnicity of the candidates affected how voters cast their ballots.
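A minimal sketch of the second stage only, assuming the first-stage multinomial-Dirichlet ecological inference has already produced election-specific support estimates; all variable names and data below are hypothetical.

```python
# Hedged sketch of the second stage: regress election-specific estimates of
# minority support for the candidate of choice on election-level covariates.
# The first stage (multinomial-Dirichlet ecological inference per Rosen, Jiang,
# King, and Tanner 2001) is assumed to have produced `support`; the numbers
# here are illustrative placeholders.
import numpy as np

support = np.array([0.72, 0.65, 0.80, 0.58, 0.75])         # first-stage estimates, one per election
candidate_quality = np.array([1.0, 0.0, 1.0, 1.0, 0.0])     # e.g. prior office held
incumbent = np.array([0.0, 1.0, 0.0, 1.0, 1.0])
minority_candidate = np.array([1.0, 0.0, 1.0, 0.0, 1.0])    # ethnicity of candidate of choice

X = np.column_stack([np.ones_like(support), candidate_quality, incumbent, minority_candidate])
coef, *_ = np.linalg.lstsq(X, support, rcond=None)

baseline_support = coef[0]    # intercept: baseline support for candidates of choice
ethnicity_effect = coef[3]    # does candidate ethnicity shift how voters cast ballots?
print(baseline_support, ethnicity_effect)
```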

Relevance:

20.00%

Publisher:

Abstract:

The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?

In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
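One widely used class of models linking eye fixations to choices and response times is an attention-weighted drift-diffusion process, in which the currently fixated option contributes more to the evidence drift. The sketch below illustrates that general idea only; it is not necessarily the model developed in Chapter 1, and all parameters are illustrative.

```python
# Hedged sketch: attention-weighted drift-diffusion, where the unattended item's
# value is down-weighted by theta. Illustrative only, not the thesis model.
import numpy as np

def simulate_trial(v_left, v_right, theta=0.3, d=0.002, sigma=0.02,
                   threshold=1.0, dt=1.0, rng=np.random.default_rng()):
    """Return (+1 for left / -1 for right, response time in steps)."""
    evidence, t = 0.0, 0.0
    looking_left = rng.random() < 0.5
    while abs(evidence) < threshold:
        if rng.random() < 0.01:                        # occasionally switch fixation
            looking_left = not looking_left
        if looking_left:
            drift = d * (v_left - theta * v_right)     # unattended value down-weighted
        else:
            drift = d * (theta * v_left - v_right)
        evidence += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if evidence > 0 else -1), t

choice, rt = simulate_trial(v_left=3.0, v_right=2.0)
```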

Relevance:

20.00%

Publisher:

Abstract:

Over the last century, the silicon revolution has enabled us to build faster, smaller and more sophisticated computers. Today, these computers control phones, cars, satellites, assembly lines, and other electromechanical devices. Just as electrical wiring controls electromechanical devices, living organisms employ "chemical wiring" to make decisions about their environment and control physical processes. Currently, the big difference between these two substrates is that while we have the abstractions, design principles, verification and fabrication techniques in place for programming with silicon, we have no comparable understanding or expertise for programming chemistry.

In this thesis we take a small step towards the goal of learning how to systematically engineer prescribed non-equilibrium dynamical behaviors in chemical systems. We use the formalism of chemical reaction networks (CRNs), combined with mass-action kinetics, as our programming language for specifying dynamical behaviors. Leveraging the tools of nucleic acid nanotechnology (introduced in Chapter 1), we employ synthetic DNA molecules as our molecular architecture and toehold-mediated DNA strand displacement as our reaction primitive.
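As a toy illustration of how a CRN plus mass-action kinetics specifies dynamics, the sketch below translates a single autocatalytic reaction into an ODE system and integrates it; the reaction, rate constant, and initial conditions are illustrative, not those engineered in the thesis.

```python
# Hedged sketch: a toy autocatalytic reaction X + Y -> 2X with rate constant k,
# under mass-action kinetics, becomes the ODE system below.
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # illustrative rate constant

def mass_action(t, z):
    x, y = z
    flux = k * x * y          # rate of X + Y -> 2X
    return [+flux,            # dX/dt: net +1 X per reaction event
            -flux]            # dY/dt: -1 Y per reaction event

sol = solve_ivp(mass_action, (0.0, 20.0), [0.01, 1.0], t_eval=np.linspace(0, 20, 200))
```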

Abstraction, modular design and systematic fabrication can work only with well-understood and quantitatively characterized tools. Therefore, we embark on a detailed study of the "device physics" of DNA strand displacement (Chapter 2). We present a unified view of strand displacement biophysics and kinetics by studying the process at multiple levels of detail, using an intuitive model of a random walk on a 1-dimensional energy landscape, a secondary structure kinetics model with single base-pair steps, and a coarse-grained molecular model that incorporates three-dimensional geometric and steric effects. Further, we experimentally investigate the thermodynamics of three-way branch migration. Our findings are consistent with previously measured or inferred rates for hybridization, fraying, and branch migration, and provide a biophysical explanation of strand displacement kinetics. Our work paves the way for accurate modeling of strand displacement cascades, which would facilitate the simulation and construction of more complex molecular systems.
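The one-dimensional random-walk intuition can be illustrated with a few lines of simulation: an unbiased walk over the branch-migration positions whose mean first-passage time grows roughly quadratically with duplex length. Step rates, the shape of the energy landscape, and initiation/fraying steps are all abstracted away here.

```python
# Hedged sketch of the "random walk on a 1-D energy landscape" intuition for
# branch migration: unbiased +/-1 steps along n_positions branch-migration
# positions, with a reflecting boundary at the toehold end.
import numpy as np

def mean_steps_to_displace(n_positions=20, trials=2000, rng=np.random.default_rng(0)):
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while pos < n_positions:
            pos += 1 if rng.random() < 0.5 else -1
            pos = max(pos, 0)          # reflecting boundary at the toehold end
            steps += 1
        total += steps
    return total / trials

print(mean_steps_to_displace())   # scales roughly as n_positions**2 (diffusive)
```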

In Chapters 3 and 4, we identify and overcome the crucial experimental challenges involved in using our general DNA-based technology for engineering dynamical behaviors in the test tube. In this process, we identify important design rules that inform our choice of molecular motifs and our algorithms for designing and verifying DNA sequences for our molecular implementation. We also develop flexible molecular strategies for "tuning" our reaction rates and stoichiometries in order to compensate for unavoidable non-idealities in the molecular implementation, such as imperfectly synthesized molecules and spurious "leak" pathways that compete with desired pathways.

We successfully implement three distinct autocatalytic reactions, which we then combine into a de novo chemical oscillator. Unlike biological networks, which use sophisticated evolved molecules (like proteins) to realize such behavior, our test tube realization is the first to demonstrate that Watson-Crick base pairing interactions alone suffice for oscillatory dynamics. Since our design pipeline is general and applicable to any CRN, our experimental demonstration of a de novo chemical oscillator could enable the systematic construction of CRNs with other dynamic behaviors.
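As an illustration of composing autocatalytic modules into an oscillator, the sketch below simulates a generic three-reaction "rock-paper-scissors" CRN under mass-action kinetics; it is not the specific reaction network implemented in the thesis.

```python
# Hedged sketch: three autocatalytic reactions (A + B -> 2B, B + C -> 2C,
# C + A -> 2A) under mass-action kinetics produce sustained oscillations.
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # illustrative rate constant

def rps(t, z):
    a, b, c = z
    r1, r2, r3 = k * a * b, k * b * c, k * c * a
    return [-r1 + r3,   # dA/dt
             r1 - r2,   # dB/dt
             r2 - r3]   # dC/dt

sol = solve_ivp(rps, (0.0, 100.0), [1.0, 0.5, 0.1],
                t_eval=np.linspace(0, 100, 1000), rtol=1e-8)
# sol.y[0], sol.y[1], sol.y[2] cycle around the interior fixed point.
```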