5 results for Asymptotic behaviour, Bayesian methods, Mixture models, Overfitting, Posterior concentration in CaltechTHESIS
Abstract:
In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that for the boundary value problem either a solution exists, except for a discrete set of eigenvalues, whose asymptotic behaviour is found, or the solution is non-unique. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.
Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes that allow higher order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is first derived. The specific form of the first order correction terms is derived for several examples. The stability of constant solutions to these equations is considered, and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected equations are themselves nonlinear dispersive equations, and their stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally, the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions. The adiabatic invariants for a separable mechanical system are derived by this method.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which then determine the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
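As a rough illustration of the adaptive loop described above, the sketch below greedily selects tests by expected information gain over a posterior on candidate theories (the thesis's BROAD procedure uses the EC2 objective instead, which carries guarantees under noisy responses); the matrix `like`, the `respond` callback, and all parameter values are hypothetical.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(posterior, like):
    # like[h, t] = P(subject picks option A on binary test t | theory h is true)
    p_a = posterior @ like                                    # marginal P(A) for each test
    gains = np.empty(like.shape[1])
    for t in range(like.shape[1]):
        post_a = posterior * like[:, t] / max(p_a[t], 1e-12)
        post_b = posterior * (1 - like[:, t]) / max(1 - p_a[t], 1e-12)
        expected_h = p_a[t] * entropy(post_a) + (1 - p_a[t]) * entropy(post_b)
        gains[t] = entropy(posterior) - expected_h
    return gains

def run_adaptive_session(like, respond, n_rounds=20):
    """Greedy adaptive testing: pick the most informative test, observe, update."""
    n_theories, _ = like.shape
    posterior = np.full(n_theories, 1.0 / n_theories)
    for _ in range(n_rounds):
        t = int(np.argmax(expected_information_gain(posterior, like)))
        chose_a = respond(t)                                  # subject's (possibly noisy) choice
        posterior *= like[:, t] if chose_a else 1 - like[:, t]
        posterior /= posterior.sum()                          # Bayesian update over theories
    return posterior
```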
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation can be ruled out because it is infeasible in practice and because we find no signatures of it in our data.
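For concreteness, here is a minimal sketch of how the competing theories score the same lottery; the functional forms are standard textbook ones and the parameter values are placeholders, not estimates from the thesis.

```python
import numpy as np

def expected_value(x, p):
    return float(p @ x)

def crra_expected_utility(wealth, p, rho=0.5):
    # CRRA utility over final wealth (assumed positive); rho != 1 for this form.
    return float(p @ (wealth ** (1 - rho) / (1 - rho)))

def prospect_value(x, p, alpha=0.88, lam=2.25, gamma=0.61):
    # Gains/losses relative to a reference point, loss aversion lam > 1,
    # and a single inverse-S probability weighting function for simplicity.
    v = np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return float(w @ v)

# Example: a 50/50 lottery between winning 20 and losing 10, from an endowment of 30.
x, p = np.array([20.0, -10.0]), np.array([0.5, 0.5])
print(expected_value(x, p), crra_expected_utility(30.0 + x, p), prospect_value(x, p))
```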
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
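A minimal sketch of the discount functions being compared is given below, written in the common textbook notation (so the quasi-hyperbolic parameters appear as beta and delta rather than the thesis's (α, β)); all numerical values are placeholders.

```python
import numpy as np

def exponential(t, delta=0.95):
    return delta ** t

def hyperbolic(t, k=0.25):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # "Present bias": immediate payoffs are undiscounted; all delayed payoffs
    # take an extra one-off penalty beta on top of exponential discounting.
    return np.where(np.asarray(t) == 0, 1.0, beta * delta ** np.asarray(t))

def generalized_hyperbolic(t, a=1.0, b=2.0):
    # Loewenstein-Prelec form (1 + a*t)^(-b/a).
    return (1.0 + a * np.asarray(t)) ** (-b / a)

def prefers_later(discount, x_soon, t_soon, x_late, t_late):
    # Choose the larger-later payoff iff its discounted value exceeds the smaller-sooner one.
    return discount(t_late) * x_late > discount(t_soon) * x_soon

# An immediate 100 versus 110 one period later: the exponential discounter takes the
# later payoff, while the present-biased (quasi-hyperbolic) discounter does not.
print(prefers_later(exponential, 100, 0, 110, 1), prefers_later(quasi_hyperbolic, 100, 0, 110, 1))
```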
In these models the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from those predicted by the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone can explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase disproportionately. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
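As a hedged illustration of the kind of discrete choice specification involved, the sketch below adds a gain-loss term around a reference price to a standard logit model; the functional form, the reference-price construction, and all parameters are assumptions for exposition, not the model estimated on the retailer data.

```python
import numpy as np

def reference_dependent_utility(price, ref_price, a=1.0, eta=0.5, lam=2.5):
    # Baseline price sensitivity plus a gain-loss term: paying less than the
    # reference price is a gain, paying more (e.g. when a discount ends) is a
    # loss, amplified by the loss-aversion coefficient lam > 1.
    diff = ref_price - price
    gain_loss = np.where(diff >= 0, eta * diff, lam * eta * diff)
    return -a * price + gain_loss

def logit_choice_probabilities(prices, ref_prices, **params):
    u = reference_dependent_utility(np.asarray(prices), np.asarray(ref_prices), **params)
    e = np.exp(u - u.max())                      # softmax over the choice set
    return e / e.sum()

# Item 0 just came off a discount (reference price below the current shelf price),
# so choice probability shifts toward its substitute, item 1.
print(logit_choice_probabilities(prices=[10.0, 10.0], ref_prices=[8.0, 10.0]))
```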
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
This thesis presents two different forms of the Born approximation for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small-amplitude heterogeneities superimposed over a slowly varying background. The first method is related to frequency-wavenumber migration methods. Using data generated with an exact theory for flat interfaces, it is shown to properly recover two independent acoustic parameters, within the bandpass of the experiment's source time function, for contrasts of about 5 percent. The independent determination of two parameters is shown to depend on the angle coverage of the medium. For surface data, the impedance profile is well recovered.
The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters, obtained after applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a before-stack Kirchhoff migration and is therefore readily applicable to seismic data. Several examples of reconstruction for simple point-scatterer models are performed. Recovery of the amplitudes of the anomalies is improved with successive iterations. Iterations also improve the sharpness of the images.
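A schematic of the iteration described above, reduced to its simplest steepest-descent form (the thesis uses the Davidon-Fletcher-Powell quasi-Newton update for faster convergence); the `forward` and `adjoint` callables stand in for Born forward modelling and the Kirchhoff-migration-like backprojection, and the toy linear operator at the end is purely illustrative.

```python
import numpy as np

def iterative_inversion(forward, adjoint, data, m0, n_iter=50, step=0.3):
    """Each iteration: compute the forward problem, evaluate the residual,
    and backproject it into the model (density / bulk modulus perturbations)."""
    m = np.array(m0, dtype=float)
    for k in range(n_iter):
        residual = data - forward(m)             # recorded minus predicted wavefield
        gradient = -adjoint(residual)            # gradient of 0.5*||residual||^2 for a linear operator
        m -= step * gradient                     # steepest-descent step (DFP would add a curvature estimate)
        print(f"iteration {k}: residual norm = {np.linalg.norm(residual):.3e}")
    return m

# Toy usage with a linear operator G (rows = receivers, columns = model cells):
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
m_true = np.array([1.0, -0.5])
m_est = iterative_inversion(lambda m: G @ m, lambda r: G.T @ r, G @ m_true, np.zeros(2))
```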
The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter the sum, corresponding to the P-P, P-S, S-P and S-S pairs of incident-scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers. Scattered rays then propagate as if in the background medium, with no further interaction with the scatterers. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and are often averse to ambiguity. The aim of this work is to develop simple models that capture observed biases and to study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
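In symbols, the two special cases can be sketched roughly as follows, where $\pi$ is the prior, $\pi(\cdot \mid I)$ the Bayesian posterior after observing the event $I$, $a$ the past action, $u$ its payoff, and $f$ and $\gamma$ an illustrative distortion function and mixing weight (this notation is mine, not the thesis's):

$$\frac{\mu(s \mid I)}{\mu(s' \mid I)} = \frac{\pi(s \mid I)}{\pi(s' \mid I)}\, f\bigl(u(a,s),\, u(a,s')\bigr)$$

$$\mu(\cdot \mid I) = (1-\gamma)\,\pi(\cdot \mid I) + \gamma\,\hat q, \qquad \hat q \in \operatorname*{arg\,max}_{q \in \Delta(I)} \mathbb{E}_{q}\bigl[u(a,\cdot)\bigr]$$

In the second case $\gamma$ plays the role of the sensitivity-to-dissonance parameter.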
The second chapter characterizes a decision maker with sticky beliefs, that is, a decision maker who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
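A minimal sketch of the sticky update, with a hypothetical two-state example; `stickiness` is the weight on the prior (the conservatism parameter).

```python
import numpy as np

def sticky_update(prior, likelihood, stickiness=0.4):
    # Bayesian posterior, then shrink it back toward the prior.
    bayes = prior * likelihood
    bayes /= bayes.sum()
    return stickiness * prior + (1 - stickiness) * bayes

prior = np.array([0.5, 0.5])
likelihood = np.array([0.9, 0.3])        # probability of the observed signal in each state
print(sticky_update(prior, likelihood))  # lies between the prior and the Bayesian posterior [0.75, 0.25]
```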
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
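A hedged sketch of the threshold rule as described, assuming information arrives as an event (a 0/1 indicator over states) and that plausibility is measured relative to the best-fitting prior in the set; the exact form of the threshold in the thesis may differ.

```python
import numpy as np

def update_prior_set(priors, event, threshold=0.5):
    """Keep the priors that assign the observed event at least `threshold` times the
    highest probability any prior in the set assigns it, then Bayes-update each.
    threshold -> 0 keeps every prior consistent with the event (generalized Bayesian
    updating); threshold = 1 keeps only the best fitters (maximum likelihood updating)."""
    priors = [np.asarray(p, dtype=float) for p in priors]
    fit = np.array([p @ event for p in priors])              # P_i(event) under each prior
    keep = [p for p, q in zip(priors, fit) if q > 0 and q >= threshold * fit.max()]
    return [p * event / (p @ event) for p in keep]

priors = [np.array([0.8, 0.1, 0.1]), np.array([0.2, 0.4, 0.4]), np.array([0.1, 0.1, 0.8])]
event = np.array([1.0, 1.0, 0.0])                            # the third state is ruled out
print(update_prior_set(priors, event, threshold=0.5))
```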
Abstract:
I. The attenuation of sound due to particles suspended in a gas was first calculated by Sewell and later by Epstein in their classical works on the propagation of sound in a two-phase medium. In their work, and in more recent works which include calculations of sound dispersion, the calculations were made for systems in which there was no mass transfer between the two phases. In the present work, mass transfer between phases is included in the calculations.
The attenuation and dispersion of sound in a two-phase condensing medium are calculated as functions of frequency. The medium in which the sound propagates consists of a gaseous phase, a mixture of inert gas and condensable vapor, which contains condensable liquid droplets. The droplets, which interact with the gaseous phase through the interchange of momentum, energy, and mass (through evaporation and condensation), are treated from the continuum viewpoint. Limiting cases, for flow either frozen or in equilibrium with respect to the various exchange processes, help demonstrate the effects of mass transfer between phases. Included in the calculation is the effect of thermal relaxation within droplets. Pressure relaxation between the two phases is examined, but is not included as a contributing factor because it is of interest only at much higher frequencies than the other relaxation processes. The results for a system typical of sodium droplets in sodium vapor are compared to calculations in which there is no mass exchange between phases. It is found that the maximum attenuation is about 25 per cent greater and occurs at about one-half the frequency for the case which includes mass transfer, and that the dispersion at low frequencies is about 35 per cent greater. Results for different values of latent heat are compared.
II. In the flow of a gas-particle mixture through a nozzle, a normal shock may exist in the diverging section of the nozzle. In Marble's calculation for a shock in a constant-area duct, the shock was described as a usual gas-dynamic shock followed by a relaxation zone in which the gas and particles return to equilibrium. The thickness of this zone, which is the total shock thickness in the gas-particle mixture, is of the order of the relaxation distance for a particle in the gas. In a nozzle, the area may change significantly over this relaxation zone, so that the solution for a constant-area duct is no longer adequate to describe the flow. In the present work, an asymptotic solution, which accounts for the area change, is obtained for the flow of a gas-particle mixture downstream of the shock in a nozzle, under the assumption of small slip between the particles and gas. This amounts to the assumption that the shock thickness is small compared with the length of the nozzle. The shock solution, valid in the region near the shock, is matched to the well-known small-slip solution, which is valid in the flow downstream of the shock, to obtain a composite solution valid for the entire flow region. The solution is applied to a conical nozzle. A discussion of methods of finding the location of a shock in a nozzle is included.