4 results for Prospect in CaltechTHESIS
Abstract:
Many theoretically proposed particles, such as GUT monopoles, nuclearites, and charge-1/5 superstring particles, can be categorized as Slow-moving, Ionizing, Massive Particles (SIMPs).
Detailed calculations of the signal-to-noise ratios achievable in various acoustic and mechanical methods for detecting such SIMPs are presented. It is shown that the previous belief that such methods are intrinsically prohibited by thermal noise is incorrect, and that ways to solve the thermal-noise problem are already within the reach of today's technology. In fact, many ongoing and completed gravitational-wave-detection (GWD) experiments are already sensitive to certain SIMPs. As an example, a published GWD result is used to obtain a flux limit for nuclearites.
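To make the thermal-noise point concrete, here is the standard order-of-magnitude argument for resonant detectors (a textbook sketch under assumed notation, not the detailed calculation of the thesis): for a resonator at temperature T with quality factor Q and angular frequency ω, monitored over a time τ much shorter than its relaxation time Q/ω, the minimum detectable impulsive energy deposition is suppressed well below k_B T:

```latex
\Delta E_{\min} \sim k_B T \, \frac{\omega \tau}{Q} \ll k_B T .
```

A sufficiently high-Q antenna read out on short timescales can therefore register a SIMP's acoustic energy deposition despite operating far above the naive k_B T threshold.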
The result of a search using a scintillator array on the Earth's surface is reported. A flux limit of 4.7 x 10^(-12) cm^(-2) sr^(-1) s^(-1) (90% c.l.) is set for any SIMP with 2.7 x 10^(-4) < β < 5 x 10^(-3) and ionization greater than 1/3 of that of minimum-ionizing muons. Although this limit is above the limits from underground experiments for typical supermassive particles (10^(16) GeV), it is a new limit in certain β and ionization regions for less massive particles (~10^(9) GeV) not able to penetrate deep underground, and it implies a stringent limit on the fraction of the dark matter that can be composed of massive electrically and/or magnetically charged particles.
The prospects for a future SIMP search with the MACRO detector are discussed. The special problem of a SIMP trigger is examined, and a circuit is proposed that may solve most of the problems of previous triggers proposed or used by others, and may even enable MACRO to detect certain SIMP species with β as low as the orbital velocity around the Earth.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioral economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and higher model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We first look at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioral theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
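As an illustration of the adaptive loop, here is a toy, noiseless sketch of EC2-style greedy test selection in Python. Every name, the binary-response setup, the random predictions, and the hypothesis counts are invented for the example; the BROAD implementation in the thesis handles noisy responses and far larger design spaces.

```python
import itertools
import numpy as np

# Toy EC2-style adaptive testing. Each hypothesis is one parameterization of
# a theory and predicts a binary choice for every candidate test; hypotheses
# are grouped into equivalence classes by the theory they instantiate.
rng = np.random.default_rng(0)
n_tests = 30
hypotheses = [(theory, rng.integers(0, 2, n_tests))
              for theory in ["EV", "prospect", "CRRA", "moments"]
              for _ in range(5)]            # 5 parameter settings per theory
prior = np.full(len(hypotheses), 1.0 / len(hypotheses))

def cross_weight(idx):
    """Total weight of edges joining hypotheses in *different* classes."""
    return sum(prior[i] * prior[j]
               for i, j in itertools.combinations(idx, 2)
               if hypotheses[i][0] != hypotheses[j][0])

def ec2_gain(test, alive):
    """Expected weight of edges cut by running this test (noiseless case)."""
    gain = 0.0
    for outcome in (0, 1):
        keep = [i for i in alive if hypotheses[i][1][test] == outcome]
        p_outcome = sum(prior[i] for i in keep)
        gain += p_outcome * (cross_weight(alive) - cross_weight(keep))
    return gain

alive = list(range(len(hypotheses)))
truth = hypotheses[7]                       # pretend this generated the data
for step in range(n_tests):
    test = max(range(n_tests), key=lambda t: ec2_gain(t, alive))
    response = truth[1][test]               # noiseless subject response
    alive = [i for i in alive if hypotheses[i][1][test] == response]
    if len({hypotheses[i][0] for i in alive}) == 1:
        print(f"theory identified after {step + 1} tests:", hypotheses[alive[0]][0])
        break
```

EC2 chooses the test expected to cut the most probability-weighted edges between competing theory classes, which is what lets it target theory identification rather than full parameter identification.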
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favor of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favorable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
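For concreteness, here is how the four model classes assign a value to a single two-outcome gamble, using standard textbook functional forms. The parameter values and the gamble itself are illustrative defaults, not estimates from the thesis.

```python
import numpy as np

# One gamble: win $40 or lose $20 with equal probability.
outcomes = np.array([40.0, -20.0])
probs = np.array([0.5, 0.5])

def expected_value(x, p):
    return np.dot(p, x)

def crra(x, p, rho=0.5, endowment=50.0):
    # CRRA utility over final wealth (endowment plus outcome)
    w = endowment + x
    return np.dot(p, w ** (1 - rho) / (1 - rho))

def prospect(x, p, alpha=0.88, lam=2.25, gamma=0.61):
    # Kahneman-Tversky value function with loss aversion lam, plus a
    # simple (non-cumulative) probability weighting for illustration
    v = np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return np.dot(w, v)

def moments(x, p, a=1.0, b=0.05):
    # mean minus a penalty on variance (a simple moments model)
    mu = np.dot(p, x)
    return a * mu - b * np.dot(p, (x - mu) ** 2)

for name, fn in [("EV", expected_value), ("CRRA", crra),
                 ("prospect", prospect), ("moments", moments)]:
    print(f"{name:9s}{fn(outcomes, probs): .3f}")
```

With these defaults the same gamble that looks attractive under expected value is rejected under prospect theory once losses loom larger than gains, which is exactly the kind of disagreement the adaptive tests exploit.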
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
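For reference, these discount functions take the following standard forms (the notation here is the conventional one, with the quasi-hyperbolic model written in the common β-δ parameterization; the thesis's own symbols may differ):

```latex
D_{\text{exp}}(t) = \delta^{t}, \qquad
D_{\text{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\text{qh}}(t) = \begin{cases} 1 & t = 0 \\ \beta \delta^{t} & t > 0 \end{cases}, \qquad
D_{\text{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha} .
```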
In all of these models, the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
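One standard way to see the connection (a textbook mixture argument, offered here as an illustration rather than the thesis's proof): if discounting is exponential in subjective time and the subjective clock rate r is uncertain, say gamma-distributed with shape k and rate λ, then the averaged discount function is exactly a generalized hyperbola:

```latex
D(t) = \mathbb{E}\left[e^{-r t}\right]
     = \int_0^{\infty} e^{-r t}\, \frac{\lambda^{k} r^{k-1} e^{-\lambda r}}{\Gamma(k)}\, dr
     = \left(\frac{\lambda}{\lambda + t}\right)^{k},
```

which matches D_gh(t) = (1 + αt)^(−β/α) with α = 1/λ and β = k/λ.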
We also test the predictions of behavioral theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone can explain. Even more importantly, when the item is no longer discounted, demand for its close substitutes should increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, as well as strategies for competitive pricing.
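A minimal sketch of the kind of reference-dependent logit model this analysis suggests (coefficients, reference-price mechanics, and variable names are all invented for illustration; the thesis's estimated model is surely richer):

```python
import numpy as np

def choice_probs(prices, ref_prices, beta_p=0.3, lam=2.0):
    """Multinomial logit over items plus an outside option with utility 0.
    Utility is linear in price, with a reference-dependent term that
    weights losses (price above reference) lam times more than gains."""
    gain = np.maximum(ref_prices - prices, 0.0)   # discount vs. reference
    loss = np.maximum(prices - ref_prices, 0.0)   # surcharge vs. reference
    u = -beta_p * prices + beta_p * gain - lam * beta_p * loss
    expu = np.exp(np.append(u, 0.0))              # last entry: outside option
    return expu / expu.sum()

# Two substitute items, both usually priced at $10.
ref = np.array([10.0, 10.0])
print(choice_probs(np.array([10.0, 10.0]), ref))            # baseline shares
print(choice_probs(np.array([8.0, 10.0]), ref))             # item 0 on sale
# After the sale the reference has adapted to the sale price, so the old
# price is now coded as a loss and demand shifts to the substitute:
print(choice_probs(np.array([10.0, 10.0]), np.array([8.0, 10.0])))
```

The third call shows the signature effect: once the reference point has adapted, restoring the original price is experienced as a loss, depressing that item's share and boosting its substitute's.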
In future work, BROAD can be applied widely to test different behavioral models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and to speed up model comparison. Discrete choice models also provide a framework for testing behavioral models with field data, and encourage combined lab-field experiments.
Abstract:
The prospect of terawatt-scale electricity generation using a photovoltaic (PV) device places strict requirements on the active semiconductor's optoelectronic properties and elemental abundance. After reviewing the constraints placed on an "earth-abundant" solar absorber, we find zinc phosphide (α-Zn3P2) to be an ideal candidate. In addition to its near-optimal direct band gap of 1.5 eV, high visible-light absorption coefficient (>10^4 cm^(-1)), and long minority-carrier diffusion length (>5 μm), Zn3P2 is composed of abundant Zn and P and has excellent physical properties for scalable thin-film deposition. However, to date, a Zn3P2 device of sufficient efficiency for commercial applications has not been demonstrated. Record efficiencies of 6.0% for multicrystalline cells and 4.3% for thin-film cells have been reported. Performance has been limited by the intrinsic p-type conductivity of Zn3P2, which restricts designs to Schottky and heterojunction devices. Because our understanding of Zn3P2 interfaces is poor, an ideal heterojunction partner has not yet been found.
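For orientation, a back-of-the-envelope detailed-balance calculation of why a 1.5 eV gap is attractive, computing the Shockley-Queisser "ultimate efficiency" for a 6000 K blackbody sun while ignoring recombination losses (a crude sketch; the full limit including radiative recombination is lower, and the thesis's device modeling is far more detailed):

```python
import numpy as np

# Ultimate efficiency: every photon with E >= Eg is absorbed and delivers
# exactly Eg of work; all other losses are ignored.
kT_sun = 8.617e-5 * 6000.0           # eV, assumed 6000 K blackbody sun
E = np.linspace(0.01, 10.0, 20000)   # photon energy grid, eV
dE = E[1] - E[0]

phi = E**2 / np.expm1(E / kT_sun)    # blackbody photon flux (arb. units)

Eg = 1.5                             # eV, band gap of Zn3P2
absorbed = E >= Eg
eta = Eg * phi[absorbed].sum() * dE / ((E * phi).sum() * dE)
print(f"ultimate efficiency at Eg = {Eg} eV: {eta:.1%}")
```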
The goal of this thesis is to explore the upper limit of solar conversion efficiency achievable with a Zn3P2 absorber through the design of an optimal heterojunction PV device. To do so, we investigate three key aspects: material growth, interface energetics, and device design. First, the growth of Zn3P2 on GaAs(001) is studied using compound-source molecular-beam epitaxy (MBE). We successfully demonstrate the pseudomorphic growth of Zn3P2 epilayers of controlled orientation and optoelectronic properties. Next, the energy-band alignments of interfaces between epitaxial Zn3P2 and II-VI and III-V semiconductors are measured via high-resolution x-ray photoelectron spectroscopy in order to determine the most appropriate heterojunction partner. From this work, we identify ZnSe as a nearly ideal n-type emitter for a Zn3P2 PV device. Finally, various II-VI/Zn3P2 heterojunction solar cell designs, including substrate and superstrate architectures, are fabricated and evaluated on the basis of their solar conversion efficiency.
Abstract:
Quantum mechanics places limits on the minimum energy of a harmonic oscillator via the ever-present "zero-point" fluctuations of the quantum ground state. Through squeezing, however, it is possible to decrease the noise of a single motional quadrature below the zero-point level, as long as noise is added to the orthogonal quadrature. While squeezing below the quantum noise level was achieved decades ago with light, quantum squeezing of the motion of a mechanical resonator is a more difficult prospect due to the large thermal occupations of megahertz-frequency mechanical devices, even at typical dilution refrigerator temperatures of ~10 mK.
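In the standard notation (assumed here rather than taken from the dissertation), the motion decomposes into two quadratures whose uncertainties are only constrained as a product:

```latex
\hat{x}(t) = \hat{X}_1 \cos(\omega_m t) + \hat{X}_2 \sin(\omega_m t),
\qquad
\Delta X_1 \, \Delta X_2 \ge x_{\mathrm{zp}}^2,
\qquad
x_{\mathrm{zp}} = \sqrt{\frac{\hbar}{2 m \omega_m}},
```

so ΔX1 < x_zp is allowed provided ΔX2 is correspondingly enlarged.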
Kronwald, Marquardt, and Clerk (2013) propose a method of squeezing a single quadrature of mechanical motion below the level of its zero-point fluctuations, even when the mechanics starts out with a large thermal occupation. The scheme operates in the framework of cavity optomechanics, where an optical or microwave cavity is coupled to the mechanics in order to control and read out the mechanical state. In the proposal, two pump tones are applied to the cavity, each detuned from the cavity resonance by the mechanical frequency. The pump tones establish and couple the mechanics to a squeezed reservoir, producing arbitrarily large, steady-state squeezing of the mechanical motion. In this dissertation, I describe two experiments related to the implementation of this proposal in an electromechanical system. I also expand on the theory presented by Kronwald et al. to include the effects of squeezing in the presence of classical microwave noise, and without assumptions of perfect alignment of the pump frequencies.
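Schematically (this is the standard textbook presentation of the scheme; the notation is assumed rather than taken from the dissertation), the two pumps at ω_c ± ω_m produce beam-splitter and two-mode-squeezing couplings of strengths G∓, and the linearized Hamiltonian couples the cavity to a single Bogoliubov mode:

```latex
H = \hbar\, \hat{a}^{\dagger}\!\left(G_{-}\hat{b} + G_{+}\hat{b}^{\dagger}\right) + \mathrm{h.c.}
  = \hbar \tilde{G}\left(\hat{a}^{\dagger}\hat{\beta} + \hat{a}\hat{\beta}^{\dagger}\right),
\qquad
\hat{\beta} = \hat{b}\cosh r + \hat{b}^{\dagger}\sinh r,
```

with tanh r = G+/G− and G̃ = (G−^2 − G+^2)^(1/2). Cavity cooling of the Bogoliubov mode toward its ground state leaves the mechanical quadrature variance reduced by a factor of e^(−2r).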
In the first experiment, we produce a squeezed thermal state using the method of Kronwald et al. We perform back-action-evading measurements of the mechanical squeezed state in order to probe the noise in both quadratures of the mechanics. Using this method, we detect single-quadrature fluctuations at the level of 1.09 +/- 0.06 times the quantum zero-point motion.
In the second experiment, we measure the spectral noise of the microwave cavity in the presence of the squeezing tones and fit a full model to the spectrum in order to deduce a quadrature variance of 0.80 +/- 0.03 times the zero-point level. These measurements provide the first evidence of quantum squeezing of motion in a mechanical resonator.