5 results for Track and field
in CaltechTHESIS
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn inform the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which yields theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, both theoretically and experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
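As a rough illustration of this adaptive loop (a minimal sketch, not the thesis's implementation: the hypothesis objects, their .theory labels, and the predict function are placeholders, and the EC2 score below is a simplified noiseless version), each round scores every candidate test by the expected prior-weighted mass of "edges" it would cut between hypotheses belonging to different theories, runs the best test, and performs a Bayes update that allows for subject error:

    import numpy as np
    from itertools import combinations

    def ec2_score(test, hypotheses, prior, predict):
        # Expected weight of edges between hypotheses from different theories
        # that would be cut by observing the outcome of `test`
        # (assumes noiseless, deterministic predictions).
        by_outcome = {}
        for i, h in enumerate(hypotheses):
            by_outcome.setdefault(predict(h, test), []).append(i)

        def cross_weight(idx):
            return sum(prior[i] * prior[j] for i, j in combinations(idx, 2)
                       if hypotheses[i].theory != hypotheses[j].theory)

        total = cross_weight(range(len(hypotheses)))
        return sum(sum(prior[i] for i in kept) * (total - cross_weight(kept))
                   for kept in by_outcome.values())

    def next_test(tests, hypotheses, prior, predict):
        # Greedy step: pick the candidate test with the highest EC2 score.
        return max(tests, key=lambda t: ec2_score(t, hypotheses, prior, predict))

    def update_posterior(prior, hypotheses, test, observed, predict, err=0.05):
        # Bayes update allowing a small probability `err` of a subject error.
        like = np.array([1.0 - err if predict(h, test) == observed else err
                         for h in hypotheses])
        post = like * np.asarray(prior)
        return post / post.sum()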
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
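To make the contrast between the candidate theories concrete, here is a minimal sketch of how one lottery might be scored under expected value versus a simplified (non-cumulative) prospect theory, using the standard Tversky-Kahneman functional forms and textbook parameter values rather than anything estimated in the thesis:

    def pt_value(x, alpha=0.88, lam=2.25):
        # Value function: concave for gains, convex and steeper (loss-averse)
        # for losses, measured relative to a reference point of 0.
        return x**alpha if x >= 0 else -lam * (-x)**alpha

    def pt_weight(p, gamma=0.61):
        # Inverse-S probability weighting: overweights small probabilities.
        return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

    def prospect_value(lottery):
        # Lottery given as [(outcome, probability), ...].
        return sum(pt_weight(p) * pt_value(x) for x, p in lottery)

    def expected_value(lottery):
        return sum(p * x for x, p in lottery)

    # A mixed gamble: 50% chance of +$10, 50% chance of -$8.
    gamble = [(10, 0.5), (-8, 0.5)]
    print(expected_value(gamble))   # +1.0: attractive to an expected-value agent
    print(prospect_value(gamble))   # negative: rejected by the loss-averse agent

The two theories disagree about this gamble, so a choice involving it is informative; BROAD's job is to find such discriminating tests automatically.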
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
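For reference, the discount functions being compared take roughly the following forms, written here with generic parameter names (the thesis's exact parameterizations, including its (α, β) convention for the quasi-hyperbolic model, may differ):

\[
D_{\text{exp}}(t)=\delta^{t},\qquad
D_{\text{hyp}}(t)=\frac{1}{1+kt},\qquad
D_{\text{quasi}}(t)=\begin{cases}1 & t=0,\\ \beta\,\delta^{t} & t>0,\end{cases}\qquad
D_{\text{gen}}(t)=(1+\alpha t)^{-\beta/\alpha}.
\]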
In all of these models, the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
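A standard illustration of how randomness in subjective time can generate hyperbolic behaviour (given only to convey the mechanism; it is not the thesis's proof): if exponential discounting operates at an effective rate r that is uncertain, with r drawn from a Gamma distribution with shape k and scale θ, then the expected discount factor is

\[
\mathbb{E}\!\left[e^{-rt}\right]=\left(1+\theta t\right)^{-k},
\]

which is a generalized-hyperbolic function of calendar time and produces the familiar preference reversals.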
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way that is distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone can explain. Even more importantly, when the item is no longer discounted, demand for its close substitutes increases excessively. We tested this prediction using a discrete-choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
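A minimal sketch of the kind of specification involved (illustrative only: the functional form, parameter values, and the way reference prices are set are assumptions, not the thesis's estimated model) is a multinomial logit with a reference-dependent price term, in which paying more than the reference price is penalized more heavily than an equal-sized discount is rewarded:

    import numpy as np

    def choice_probs(prices, ref_prices, beta=-1.0, g_gain=0.3, g_loss=0.9):
        # Multinomial-logit choice probabilities with a reference-dependent
        # price term; losses (price above reference) weigh more than gains
        # (price below reference), i.e. g_loss > g_gain captures loss aversion.
        gains = np.maximum(ref_prices - prices, 0.0)
        losses = np.maximum(prices - ref_prices, 0.0)
        u = beta * prices + g_gain * gains - g_loss * losses
        expu = np.exp(u - u.max())          # numerically stable softmax
        return expu / expu.sum()

    # Two substitutes at the same shelf price; item 0 was recently discounted,
    # so its reference price is lower and the return to the regular price
    # registers as a loss.
    print(choice_probs(prices=np.array([10.0, 10.0]),
                       ref_prices=np.array([8.0, 10.0])))

With these toy numbers, the item whose discount has just ended loses choice share to its substitute, which is the qualitative pattern described above.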
In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete-choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Freshwater fish of the genus Apteronotus (family Gymnotidae) generate a weak, high-frequency electric field (< 100 mV/cm, 0.5-10 kHz) which permeates their local environment. These nocturnal fish are acutely sensitive to perturbations in their electric field caused by other electric fish and by nearby objects whose impedance differs from that of the surrounding water. This thesis presents high temporal and spatial resolution maps of the electric potential and field on and near Apteronotus. The fish's electric field is a complicated and highly stable function of space and time. Its characteristics, such as spectral composition, timing, and rate of attenuation, are examined in terms of physical constraints and of their possible functional roles in electroreception.
Temporal jitter of the periodic field is less than 1 µs. However, electrocyte activity is not globally synchronous along the fish's electric organ. The propagation of electrocyte activation down the fish's body produces a rotation of the electric field vector in the caudal part of the fish. This may assist the fish in identifying nonsymmetrical objects, and could also confuse electrosensory predators that try to locate Apteronotus by following its field lines. The propagation also results in a complex spatiotemporal pattern of the electric organ discharge (EOD) potential near the fish. Visualizing the potential on the same and different fish over timescales of several months suggests that it is stable and could serve as a unique signature for individual fish.
Measurements of the electric field were used to calculate the effects of simple objects on the fish's electric field. The shape of the perturbation, or "electric image", on the fish's skin is relatively independent of a simple object's size, conductivity, and rostrocaudal location, and therefore could be used to unambiguously determine object distance. The range of electrolocation may depend on both the size of objects and their rostrocaudal location. Only objects with very large dielectric constants cause appreciable phase shifts, and these are strongly dependent on the water conductivity.
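As an illustration of the kind of calculation involved (a standard quasi-static, first-order result for a small sphere in a locally uniform field, stated here for orientation rather than as the thesis's own derivation), a sphere of radius a with conductivity σ_obj in water of conductivity σ_w perturbs the potential approximately as an ideal dipole:

\[
\delta\varphi(\mathbf{r})\;\approx\;\chi\,a^{3}\,\frac{\mathbf{E}\cdot\hat{\mathbf{r}}}{r^{2}},
\qquad
\chi=\frac{\sigma_{\text{obj}}-\sigma_{\text{w}}}{\sigma_{\text{obj}}+2\,\sigma_{\text{w}}},
\]

where E is the unperturbed field at the object and r the distance from its centre. The resulting image on the skin falls off rapidly with distance, which is part of what limits the range of electrolocation.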
Abstract:
This study concerns the longitudinal dispersion of fluid particles which are initially distributed uniformly over one cross section of a uniform, steady, turbulent open-channel flow. The primary focus is on developing a method to predict the rate of dispersion in a natural stream.
Taylor's method of determining a dispersion coefficient, previously applied to flow in pipes and two-dimensional open channels, is extended to a class of three-dimensional flows which have large width-to-depth ratios, and in which the velocity varies continuously with lateral cross-sectional position. Most natural streams are included. The dispersion coefficient for a natural stream may be predicted from measurements of the channel cross-sectional geometry, the cross-sectional distribution of velocity, and the overall channel shear velocity. Tracer experiments are not required.
Large values of the dimensionless dispersion coefficient D/rU* (where D is the dispersion coefficient, r the hydraulic radius, and U* the shear velocity) are explained by lateral variations in downstream velocity. In effect, the characteristic length of the cross section is shown to be proportional to the width, rather than to the hydraulic radius. The dimensionless dispersion coefficient depends approximately on the square of the width-to-depth ratio.
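The extension of Taylor's analysis is commonly summarized (in later notation; treat this as an illustrative statement of the result rather than a transcription from the thesis) as a triple integral over the channel width:

\[
D \;=\; -\frac{1}{A}\int_{0}^{W} d(y)\,u'(y)\int_{0}^{y}\frac{1}{\varepsilon_{t}\,d(y')}\int_{0}^{y'} d(y'')\,u'(y'')\,\mathrm{d}y''\,\mathrm{d}y'\,\mathrm{d}y,
\]

where A is the cross-sectional area, W the width, d(y) the local depth, u'(y) the deviation of the depth-averaged velocity from its cross-sectional mean, and ε_t the transverse mixing coefficient. All of these can be obtained from the channel geometry, the measured velocity distribution, and the shear velocity, which is why no tracer experiments are required.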
A numerical program is presented that can generate the entire dispersion pattern downstream from an instantaneous point or plane source of pollutant. The program is verified against the theory for two-dimensional flow, and gives results in good agreement with laboratory and field experiments.
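A minimal sketch of this kind of routing calculation, written here purely for illustration (the thesis's program, its discretization, and these parameter values are not taken from the original), advects and disperses a plane-source slug with an explicit finite-difference scheme:

    import numpy as np

    # 1-D advection-dispersion routing of an instantaneous plane source.
    L, N = 2000.0, 400                       # reach length [m], number of grid cells
    U, K = 0.5, 10.0                         # mean velocity [m/s], dispersion coefficient [m^2/s]
    dx = L / N
    dt = 0.4 * min(dx / U, dx**2 / (2 * K))  # respect advective and diffusive stability limits

    c = np.zeros(N)
    c[20] = 1.0 / dx                         # unit mass injected as a plane source

    def step(c):
        adv = -U * dt / dx * (c - np.roll(c, 1))                         # first-order upwind advection
        dif = K * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))  # central-difference dispersion
        c = c + adv + dif
        c[0] = c[-1] = 0.0                   # open boundaries, kept far from the cloud
        return c

    for _ in range(int(600.0 / dt)):         # route the cloud for 600 s
        c = step(c)

A first-order upwind step is used for simplicity; it introduces some numerical diffusion, which is acceptable for a sketch but not for the quantitative comparisons described in this abstract.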
Both laboratory and field experiments are described. Twenty-one laboratory experiments were conducted: thirteen in two-dimensional flows, over both smooth and roughened bottoms; and eight in three-dimensional flows, formed by adding extreme side roughness to produce lateral velocity variations. Four field experiments were conducted in the Green-Duwamish River, Washington.
Both laboratory and field experiments show that in three-dimensional flow the dominant mechanism for dispersion is lateral velocity variation. For instance, in one laboratory experiment the dimensionless dispersion coefficient D/rU* was increased by a factor of ten by roughening the channel banks. In three-dimensional laboratory flows, D/rU* varied from 190 to 640, a typical range for natural streams. For each experiment, the measured dispersion coefficient agreed with that predicted by the extension of Taylor's analysis to within a maximum error of 15%. For the Green-Duwamish River, the average experimentally measured dispersion coefficient was within 5% of the prediction.
Abstract:
This thesis covers a range of topics in numerical and analytical relativity, centered around introducing tools and methodologies for the study of dynamical spacetimes. The scope of the studies is limited to classical (as opposed to quantum) vacuum spacetimes described by Einstein's general theory of relativity. The numerical work presented here is carried out within the Spectral Einstein Code (SpEC) infrastructure, while the analytical calculations make extensive use of Wolfram Mathematica.
We begin by examining highly dynamical spacetimes, such as binary black hole mergers, which can be investigated using numerical simulations. However, there are difficulties in interpreting the output of such simulations. One difficulty stems from the lack of a canonical coordinate system (henceforth referred to as gauge freedom) and tetrad against which quantities such as the Newman-Penrose Psi_4 (usually interpreted as the gravitational-wave part of the curvature) should be measured. We tackle this problem in Chapter 2 by introducing a set of geometrically motivated coordinates that are independent of the simulation gauge choice, as well as a quasi-Kinnersley tetrad that is likewise invariant under gauge changes and optimally suited to the task of gravitational wave extraction.
Another difficulty arises from the need to condense the overwhelming amount of data generated by the numerical simulations. In order to extract physical information in a succinct and transparent manner, one may define a version of gravitational field lines and field strength using spatial projections of the Weyl curvature tensor. The introduction, investigation, and utilization of these quantities constitute the main content of Chapters 3 through 6.
For the last two chapters, we turn to the analytical study of a simpler dynamical spacetime, namely a perturbed Kerr black hole. In Chapter 7 we introduce a new analytical approximation to the quasi-normal mode (QNM) frequencies and relate various properties of these modes to wave packets traveling on unstable photon orbits around the black hole. In Chapter 8, we study a bifurcation in the QNM spectrum as the black hole's spin a approaches extremality.
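For orientation, the photon-orbit correspondence in its simplest, non-rotating form (the well-known eikonal result, quoted here only as background; the thesis's approximation for Kerr is more general) relates the QNM frequencies to the angular frequency Ω_c of the circular photon orbit and its Lyapunov exponent λ:

\[
\omega_{\ell n}\;\approx\;\Omega_{c}\left(\ell+\tfrac{1}{2}\right)\;-\;i\left(n+\tfrac{1}{2}\right)\lvert\lambda\rvert,
\]

so the real part is set by how fast wave packets circulate on the unstable orbit and the imaginary part by how fast they leak away from it.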
Abstract:
A model for some of the many physical-chemical and biological processes in intermittent sand filtration of wastewaters is described, and an expression for oxygen transfer is formulated.
The model assumes that aerobic bacterial activity within the sand or soil matrix is limited, mostly by oxygen deficiency, while the surface is ponded with wastewater. Atmospheric oxygen re-enters the soil after infiltration ends. Aerobic activity resumes, but the extent of oxygen penetration is limited and some depths may always remain anaerobic. These assumptions lead to the conclusion that the percolate shows large variations in the concentrations of certain contaminants, with some portions showing little change in a specific contaminant. Analyses of soil moisture in field studies and of effluent from laboratory sand columns substantiated the model.
The oxygen content of the system at sufficiently long times after addition of wastes can be described by a quasi-steady-state diffusion equation including a term for an oxygen sink. Measurements of oxygen content during laboratory and field studies show that the oxygen profile changes only slightly up to two days after the quasi-steady state is attained.
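In its simplest one-dimensional form (written here for illustration; the thesis's formulation and boundary conditions may differ), such a quasi-steady-state balance is

\[
D_{s}\,\frac{\mathrm{d}^{2}C}{\mathrm{d}z^{2}}\;-\;S(z)\;=\;0,
\qquad C(0)=C_{\text{atm}},
\]

where C(z) is the oxygen concentration in the soil air at depth z, D_s the effective diffusion coefficient of oxygen in the soil, and S(z) the volumetric rate of oxygen consumption; the depth at which C falls to zero marks the lower limit of aerobic activity.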
These hypotheses, and their experimental verification, can be applied in the operation of existing facilities and in the interpretation of data from pilot-plant studies.