8 results for Asset Pricing, Expectations, Beta

in CaltechTHESIS


Relevance: 20.00%

Abstract:

In Part I of this thesis, a new magnetic spectrometer experiment which measured the β spectrum of ^(35)S is described. New limits on heavy neutrino emission in nuclear β decay were set, for a heavy neutrino mass range between 12 and 22 keV. In particular, this measurement rejects the hypothesis that a 17 keV neutrino is emitted with sin^2 θ = 0.0085, at the 6σ statistical level. In addition, an auxiliary experiment was performed, in which an artificial kink was induced in the β spectrum by means of an absorber foil which masked a fraction of the source area. In this measurement, the sensitivity of the magnetic spectrometer to the spectral features of heavy neutrino emission was demonstrated.
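For context, the "kink" signature being searched for can be described by the standard two-component form of the allowed β spectrum with a heavy-neutrino admixture (a general textbook parameterization, not quoted from the abstract):

dN/dE \propto F(Z,E)\, p_e E_e \left[ (1-\sin^2\theta)\,(E_0-E)^2 + \sin^2\theta\,(E_0-E)\sqrt{(E_0-E)^2 - m_H^2}\;\Theta(E_0-E-m_H) \right],

so a neutrino of mass m_H produces a discontinuity in slope (a kink) at electron energy E = E_0 - m_H, with size controlled by the mixing sin^2 θ.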

In Part II, a measurement of the neutron spallation yield and multiplicity by the Cosmic-ray Underground Background Experiment is described. The production of fast neutrons by muons was investigated at an underground depth of 20 meters water equivalent, with a 200 liter detector filled with 0.09% Gd-loaded liquid scintillator. We measured a neutron production yield of (3.4 ± 0.7) x 10^(-5) neutrons per muon-g/cm^2, in agreement with other experiments. A single-to-double neutron multiplicity ratio of 4:1 was observed. In addition, stopped π^+ decays to µ^+ and then to e^+ were observed, as was the associated production of pions and neutrons by the muon spallation interaction. It was seen that practically all of the π^+ produced by muons were also accompanied by at least one neutron. These measurements serve as the basis for neutron background estimates for the San Onofre neutrino detector.

Relevance: 20.00%

Abstract:

This thesis describes the design, construction, and performance of a high-pressure xenon gas time projection chamber (TPC) for the study of double beta decay in ^(136)Xe. The TPC, when operating at 5 atm, can accommodate 28 moles of 60% enriched ^(136)Xe. The TPC has operated as a detector at Caltech since 1986. It is capable of reconstructing a charged particle trajectory and can easily distinguish between different kinds of charged particles. A gas purification and xenon gas recovery system were developed. The electronics for the 338 channels of readout was developed, along with a data acquisition system. Currently, the detector is being prepared at the University of Neuchatel for installation in the low-background laboratory situated in the St. Gotthard tunnel, Switzerland. In one year of runtime, the detector should be sensitive to a 0ν lifetime of the order of 10^(24) y, which corresponds to a neutrino mass in the range 0.3 to 3.3 eV.
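For context, the quoted sensitivity can be related to an effective Majorana neutrino mass through the standard neutrinoless double beta decay rate formula (a general expression, not taken from the abstract):

\left[T_{1/2}^{0\nu}\right]^{-1} = G^{0\nu}\, |M^{0\nu}|^2\, \left(\frac{\langle m_{\beta\beta}\rangle}{m_e}\right)^2,

where G^{0ν} is a phase-space factor and M^{0ν} the nuclear matrix element; the spread of the quoted mass range (0.3 to 3.3 eV) largely reflects the uncertainty in the matrix element.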

Relevance: 20.00%

Abstract:

The isomerization of glucose into fructose is a large-scale reaction for the production of high-fructose corn syrup, and is now being considered as an intermediate step in possible routes for converting biomass into fuels and chemicals. Recently, it has been shown that a hydrophobic, large-pore, silica molecular sieve having the zeolite beta structure and containing framework Sn^(4+) (Sn-Beta) is able to isomerize glucose into fructose in aqueous media. Here, I have investigated how this catalyst converts glucose to fructose and show that the reaction pathway is analogous to that achieved with metalloenzymes. Specifically, glucose partitions into the molecular sieve in the pyranose form, ring-opens to the acyclic form in the presence of the Lewis acid center (framework Sn^(4+)), isomerizes into the acyclic form of fructose, and finally ring-closes to yield the furanose product. Akin to the metalloenzyme, the isomerization step proceeds by intramolecular hydride transfer from C2 to C1. Extraframework tin oxides located within the hydrophobic channels of the molecular sieve, which exclude liquid water, can also isomerize glucose to fructose in aqueous media, but do so through a base-catalyzed proton abstraction mechanism. Extraframework tin oxide particles located at the external surface of the molecular sieve crystals or on amorphous silica supports are not active in aqueous media but are able to perform the isomerization in methanol by a base-catalyzed proton abstraction mechanism. Post-synthetic exchange of Na^+ onto Sn-Beta alters the glucose reaction pathway from the 1,2 intramolecular hydrogen shift (isomerization) that produces fructose towards the 1,2 intramolecular carbon shift (epimerization) that forms mannose. Na^+ remains exchanged onto silanol groups during reaction in methanol solvent, leading to a nearly complete shift in selectivity towards glucose epimerization to mannose. In contrast, decationation occurs during reaction in aqueous solutions and gradually increases the reaction selectivity to isomerization at the expense of epimerization. Decationation and the concomitant changes in selectivity can be eliminated by adding NaCl to the aqueous reaction solution. Thus, framework tin sites with a proximal silanol group are the active sites for the 1,2 intramolecular hydride shift in the isomerization of glucose to fructose, while the same sites with a Na-exchanged silanol group are the active sites for the 1,2 intramolecular carbon shift in the epimerization of glucose to mannose.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
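As a minimal illustration of the adaptive-design idea (a sketch only, not the thesis's implementation: the real BROAD procedure scores tests with the EC2 objective over equivalence classes of hypotheses, which is replaced here by a generic one-step score, and all function and variable names are hypothetical):

```python
import numpy as np

# Sketch of an adaptive Bayesian design loop in the spirit of BROAD.
# "likelihoods" holds, for each candidate theory, an array indexed by
# [test_index][choice] giving P(choice | test, theory).

def posterior_update(prior, likelihoods, test_idx, observed_choice):
    """Bayes update of the belief over theories after one observed choice."""
    like = np.array([lik[test_idx][observed_choice] for lik in likelihoods])
    post = prior * like
    return post / post.sum()

def expected_score(prior, likelihoods, test_idx, score_fn):
    """Expected one-step score (e.g. posterior concentration) of running a test."""
    n_choices = len(likelihoods[0][test_idx])
    total = 0.0
    for c in range(n_choices):
        p_choice = sum(p * lik[test_idx][c] for p, lik in zip(prior, likelihoods))
        if p_choice > 0:
            total += p_choice * score_fn(posterior_update(prior, likelihoods, test_idx, c))
    return total

def run_adaptive_experiment(prior, likelihoods, tests, ask_subject, n_rounds,
                            score_fn=np.max):
    """Greedily pick the test with the best expected score, observe, update."""
    belief = np.asarray(prior, dtype=float)
    for _ in range(n_rounds):
        scores = [expected_score(belief, likelihoods, t, score_fn) for t in tests]
        best_test = tests[int(np.argmax(scores))]
        choice = ask_subject(best_test)      # subject picks lottery 0 or 1, etc.
        belief = posterior_update(belief, likelihoods, best_test, choice)
    return belief
```

A greedy loop of this form is exactly the kind of policy for which adaptive submodularity of the selection criterion yields near-optimality guarantees relative to the best adaptive testing sequence.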

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.
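To illustrate how two of these theory classes assign values to a lottery, here is a hedged sketch (the functional forms are standard textbook versions with commonly cited parameter values, not the thesis's specifications; the probability weighting is a simplified separable form rather than full cumulative weighting):

```python
import numpy as np

# Illustrative (not taken from the thesis) value functions for a lottery
# given as lists of outcomes x_i and probabilities p_i.

def crra_value(outcomes, probs, wealth, rho):
    """Expected CRRA utility of final wealth; rho is relative risk aversion."""
    w = wealth + np.asarray(outcomes, dtype=float)
    u = np.log(w) if np.isclose(rho, 1.0) else (w ** (1.0 - rho) - 1.0) / (1.0 - rho)
    return float(np.dot(probs, u))

def prospect_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
    """Prospect-theory-style value: gains/losses relative to a reference point
    of zero, loss aversion lam, and probability weighting parameter gamma."""
    x = np.asarray(outcomes, dtype=float)
    v = np.where(x >= 0, x ** alpha, -lam * (-x) ** alpha)
    p = np.asarray(probs, dtype=float)
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)
    return float(np.dot(w, v))

# Example: a 50/50 gamble of +$10 / -$5 versus a sure $1.
gamble = ([10.0, -5.0], [0.5, 0.5])
print(crra_value(*gamble, wealth=100.0, rho=2.0), crra_value([1.0], [1.0], 100.0, 2.0))
print(prospect_value(*gamble), prospect_value([1.0], [1.0]))
```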

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
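For concreteness, illustrative forms of the competing discount functions (generic textbook parameterizations; the parameter names need not match the thesis's (α, β) notation):

```python
import numpy as np

# Illustrative discount functions for the model classes compared above.

def exponential(t, delta):
    return delta ** t

def hyperbolic(t, k):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta, delta):
    """Present-bias (beta-delta) discounting: full weight at t = 0,
    an extra one-time penalty beta on all delayed payoffs."""
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, alpha, beta):
    return (1.0 + alpha * t) ** (-beta / alpha)

# Present value of $120 paid in 30 days under each model (toy parameters).
t = 30
print(120 * exponential(t, 0.997),
      120 * hyperbolic(t, 0.01),
      120 * quasi_hyperbolic(t, 0.8, 0.999),
      120 * generalized_hyperbolic(t, 0.05, 0.01))
```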

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
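One standard way in which subjective time can generate hyperbolic discounting (shown here only as an illustration; the thesis's result rests on positive dependence of the subjective-time process, which this sketch does not reproduce) is exponential discounting in logarithmic subjective time:

D(t) = e^{-\rho\,\tau(t)}, \qquad \tau(t) = \tfrac{1}{\alpha}\ln(1+\alpha t) \;\Longrightarrow\; D(t) = (1+\alpha t)^{-\rho/\alpha},

which is the generalized-hyperbolic form and reduces to 1/(1 + \alpha t) when \rho = \alpha.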

We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
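A minimal sketch of the kind of model this refers to, assuming a simple multinomial logit with a reference-dependent price term (the specification, parameter values, and product data are hypothetical and are not the thesis's estimated model):

```python
import numpy as np

# Logit discrete choice with a loss-averse price term: losses relative to a
# reference price are weighted lam (> 1) times as heavily as gains.

def choice_probabilities(prices, ref_prices, quality, alpha, lam):
    """Multinomial-logit choice probabilities over substitute products."""
    prices = np.asarray(prices, float)
    ref = np.asarray(ref_prices, float)
    gain = np.maximum(ref - prices, 0.0)          # price below reference
    loss = np.maximum(prices - ref, 0.0)          # price above reference
    u = quality - alpha * prices + alpha * (gain - lam * loss)
    expu = np.exp(u - u.max())                    # numerically stable softmax
    return expu / expu.sum()

# Two substitutes: product 0's discount has ended (price back above the
# reference it set), product 1 never moved.
quality = np.array([1.0, 1.0])
ref = np.array([8.0, 10.0])
print(choice_probabilities([10.0, 10.0], ref, quality, alpha=0.3, lam=2.5))
print(choice_probabilities([10.0, 10.0], ref, quality, alpha=0.3, lam=1.0))
```

With lam > 1, returning the discounted item to its pre-discount price shifts choice probability toward the substitute by more than the lam = 1 (standard) model predicts, which is the excess-substitution pattern described above.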

In future work, BROAD can be applied widely to testing different behavioural models, e.g., in social preferences and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

The free neutron beta decay correlation A0 between neutron polarization and electron emission direction provides the strongest constraint on the ratio λ = gA/gV of the axial-vector to vector coupling constants in weak decay. In conjunction with the CKM matrix element Vud and the neutron lifetime τn, λ provides a test of Standard Model assumptions for the weak interaction. Leading high-precision measurements of A0 and τn in the 1995-2005 time period showed discrepancies with prior measurements and with Standard Model predictions for the relationship between λ, τn, and Vud. The UCNA experiment was developed to measure A0 from the decay of polarized ultracold neutrons (UCN), providing a complementary determination of λ with systematic uncertainties different from those of prior cold-neutron-beam experiments. This dissertation describes the analysis of the dataset collected by UCNA in 2010, with emphasis on detector response calibrations and systematics. The UCNA measurement is placed in the context of the most recent τn results and cold neutron A0 experiments.
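For context, the standard leading-order Standard Model relations linking these quantities (not quoted in the abstract) are

A_0 = \frac{-2(\lambda^2 + \lambda)}{1 + 3\lambda^2}, \qquad \tau_n^{-1} \propto |V_{ud}|^2\,(1 + 3\lambda^2),

with λ negative in this sign convention (λ ≈ −1.27), so that measurements of A0, τn, and Vud over-constrain λ and test the consistency of the weak-interaction description.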

Relevance: 20.00%

Abstract:

Energies and relative intensities of gamma transitions in ^(152)Sm, ^(152)Gd, ^(154)Gd, ^(166)Er, and ^(232)U following radioactive decay have been measured with a Ge(Li) spectrometer. A peak fitting program has been developed to determine gamma ray energies and relative intensities with precision sufficient to give a meaningful test of nuclear models. Several previously unobserved gamma rays were placed in the nuclear level schemes. Particular attention has been paid to transitions from the beta and gamma vibrational bands, since the gamma ray branching ratios are sensitive tests of configuration mixing in the nuclear levels. As the reduced branching ratios depend on the multipolarity of the gamma transitions, experiments were performed to measure multipole mixing ratios for transitions from the gamma vibrational band. In ^(154)Gd, angular correlation experiments showed that transitions from the gamma band to the ground state band were predominantly electric quadrupole, in agreement with the rotational model. In ^(232)U, the internal conversion spectrum has been studied with a Si(Li) spectrometer constructed for electron spectroscopy. The strength of electric monopole transitions and the multipolarity of some gamma transitions have been determined from the measured relative electron intensities.
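As a generic illustration of the kind of photopeak fitting such a program performs (a modern sketch under simple assumptions, not the thesis's code: one Gaussian peak on a linear background, with all names chosen for this example):

```python
import numpy as np
from scipy.optimize import curve_fit

def peak_model(channel, amplitude, centroid, sigma, bkg0, bkg1):
    """Gaussian photopeak on a linear background."""
    return amplitude * np.exp(-0.5 * ((channel - centroid) / sigma) ** 2) \
           + bkg0 + bkg1 * channel

def fit_peak(channels, counts, guess):
    """Fit one photopeak; the centroid gives the energy (via calibration) and
    the net area gives the relative intensity (after efficiency correction)."""
    popt, pcov = curve_fit(peak_model, channels, counts, p0=guess,
                           sigma=np.sqrt(np.maximum(counts, 1.0)))
    amplitude, centroid, sigma, _, _ = popt
    net_area = amplitude * sigma * np.sqrt(2.0 * np.pi)
    return centroid, net_area, np.sqrt(np.diag(pcov))
```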

The results of the experiments have been compared with the rotational model and several microscopic models. Relative B(E2) strengths for transitions from the gamma band in ^(232)U and ^(166)Er are in good agreement with a single-parameter band mixing model, with values of z2 = 0.025(10) and 0.046(2), respectively. Neither the beta nor the gamma band transition strengths in ^(152)Sm and ^(154)Gd can be accounted for by a single-parameter theory, nor can agreement be found by considering the large mixing found between the beta and gamma bands. The relative B(E2) strength for transitions from the gamma band to the beta band in ^(232)U is found to be five times greater than the strength to the ground state band, indicating collective transitions with a strength of approximately 15 single-particle units.

Relevance: 20.00%

Abstract:

An experimental investigation of the optical properties of β-gallium oxide has been carried out, covering the wavelength range 220-2500 nm.

The refractive index and birefringence have been determined to about ±1% accuracy over the range 270-2500 nm, by the use of a technique based on the occurrence of fringes in the transmission of a thin sample due to multiple internal reflections in the sample (i.e., the "channelled spectrum" of the sample).
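The fringe relation underlying such a channelled-spectrum measurement (a standard interference result, stated here only for context) is

2\,n(\lambda)\,d = m\,\lambda

at transmission maxima for a plane-parallel sample of thickness d; from two adjacent maxima at \lambda_1 > \lambda_2, and neglecting dispersion between them,

n \approx \frac{\lambda_1 \lambda_2}{2\,d\,(\lambda_1 - \lambda_2)}.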

The optical absorption coefficient has been determined over the range 220-300 nm, a range that spans the fundamental absorption edge of β-Ga2O3. Two techniques were used in the absorption coefficient determination: measurement of the transmission of a thin sample, and measurement of the photocurrent from a Schottky barrier formed on the surface of a sample. The absorption coefficient was measured over a range from 10 to greater than 10^(5), to an accuracy of better than ±20%. The absorption edge was found to be strongly polarization-dependent.
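For the transmission technique, the leading-order relation (neglecting multiple-reflection corrections, which a detailed analysis would include) is

T \approx (1-R)^2\, e^{-\alpha d} \quad\Longrightarrow\quad \alpha \approx -\frac{1}{d}\ln\!\frac{T}{(1-R)^2},

where d is the sample thickness and R the single-surface reflectivity.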

Detailed analyses are presented of all three experimental techniques used. Experimentally determined values of the optical constants are presented in graphical form.

Relevance: 20.00%

Abstract:

Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.

In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore its basic features.

The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics-based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation that includes losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
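One common form such a loss term can take (an illustration of the idea, not necessarily the thesis's exact formulation) treats earthquake occurrences as a Poisson process with rate ν, independent per-event losses L, and a continuous discount rate r, so that the expected present value of future losses over a long horizon is

E\!\left[\sum_i e^{-r T_i} L_i\right] = \frac{\nu\, E[L]}{r},

which is then combined with initial cost and discounted benefits in the net asset value of a design alternative.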

The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. It is flexible and readily allows the incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.