24 results for Noise - Loss hearing
in CaltechTHESIS
Abstract:
Spontaneous emission into the lasing mode fundamentally limits laser linewidths. Reducing cavity losses provides two benefits to linewidth: (1) fewer excited carriers are needed to reach threshold, resulting in less phase-corrupting spontaneous emission into the laser mode, and (2) more photons are stored in the laser cavity, such that each individual spontaneous emission event disturbs the phase of the field less. Strong optical absorption in III-V materials causes high losses, preventing currently-available semiconductor lasers from achieving ultra-narrow linewidths. This absorption is a natural consequence of the compromise between efficient electrical and efficient optical performance in a semiconductor laser. Some of the III-V layers must be heavily doped in order to funnel excited carriers into the active region, which has the side effect of making the material strongly absorbing.
This thesis presents a new technique, called modal engineering, to remove modal energy from the lossy region and store it in an adjacent low-loss material, thereby reducing overall optical absorption. A quantum mechanical analysis of modal engineering shows that modal gain and spontaneous emission rate into the laser mode are both proportional to the normalized intensity of that mode at the active region. If optical absorption near the active region dominates the total losses of the laser cavity, shifting modal energy from the lossy region to the low-loss region will reduce modal gain, total loss, and the spontaneous emission rate into the mode by the same factor, so that linewidth decreases while the threshold inversion remains constant. The total spontaneous emission rate into all other modes is unchanged.
Modal engineering is demonstrated using the Si/III-V platform, in which light is generated in the III-V material and stored in the low-loss silicon material. The silicon is patterned as a high-Q resonator to minimize all sources of loss. Fabricated lasers employing modal engineering to concentrate light in silicon demonstrate linewidths at least 5 times smaller than lasers without modal engineering at the same pump level above threshold, while maintaining the same thresholds.
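The scaling argument above can be illustrated with a toy calculation. Assuming the simple relation Δν = R_sp/(4πN_p) between linewidth, the spontaneous-emission rate into the mode (R_sp), and the stored photon number (N_p), and assuming that shifting modal energy scales R_sp down by the intensity fraction Γ at the active region while the lower loss scales the stored photon number up by 1/Γ, the linewidth falls by Γ². The numbers below are hypothetical; this is a sketch of the scaling, not the thesis's quantum-mechanical analysis.

```python
# Toy model of the linewidth argument above (an illustration only, not the
# thesis's full quantum-mechanical treatment).
# Modified Schawlow-Townes form: dnu = R_sp / (4 * pi * N_p), where R_sp is
# the spontaneous-emission rate into the lasing mode and N_p the number of
# photons stored in the cavity.
import math

def linewidth(R_sp, N_p):
    return R_sp / (4 * math.pi * N_p)

# Baseline all-III-V laser (hypothetical numbers, used only for scaling).
R_sp0, N_p0 = 1.0e9, 1.0e5          # photons/s into the mode, stored photons
base = linewidth(R_sp0, N_p0)

# Modal engineering: a fraction Gamma of the modal intensity remains at the
# lossy active region. Per the abstract, gain, absorption loss, and
# spontaneous emission into the mode all scale by Gamma, while the stored
# photons scale roughly as 1/Gamma (lower loss -> longer photon lifetime).
Gamma = 0.2
engineered = linewidth(Gamma * R_sp0, N_p0 / Gamma)

print(engineered / base)   # -> Gamma**2 = 0.04 in this toy scaling
```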
Abstract:
This thesis describes the development of low-noise heterodyne receivers at THz frequencies for submillimeter astronomy using Nb-based superconductor-insulator-superconductor (SIS) tunneling junctions. The mixers use a quasi-optical configuration consisting of a planar twin-slot antenna and two antisymmetrically fed junctions on an antireflection-coated silicon hyperhemispherical lens. On-chip integrated tuning circuits, in the form of microstrip lines, are used to obtain maximum coupling efficiency in the designed frequency band. To reduce the RF losses in the integrated tuning circuits above the superconducting Nb gap frequency (~700 GHz), normal-metal Al is used in place of Nb for the tuning circuits.
To account for the RF losses in the microstrip lines, we calculated the surface impedance of the Al films using the nonlocal anomalous skin effect for finite-thickness films. The surface impedance of the Nb films was calculated using the Mattis-Bardeen theory in the extreme anomalous limit. Our calculations show that the losses of the Al and Nb microstrip lines are about equal at 830 GHz. For Al-wiring and Nb-wiring mixers both optimized at 1050 GHz, the RF coupling efficiency of the Al-wiring mixer is almost 50% higher than that of the Nb-wiring one. We have designed both Nb-wiring and Al-wiring mixers below and above the gap frequency.
A Fourier transform spectrometer (FTS) has been constructed especially for the study of the frequency response of SIS receivers. This FTS features a large aperture size (10 inch) and high frequency resolution (114 MHz). The FTS spectra, obtained using the SIS receivers as direct detectors on the FTS, agree quite well with our theoretical simulations. We have also, for the first time, measured the FTS heterodyne response of an SIS mixer at sufficiently high resolution to resolve the LO and the sidebands. Heterodyne measurements of our SIS receivers with Nb-wiring or Al-wiring have yielded results which are among the best reported to date for broadband heterodyne receivers. The Nb-wiring mixers, covering the 400-850 GHz band with four separate fixed-tuned mixers, have uncorrected DSB receiver noise temperatures of around 5hν/k_B up to 700 GHz, and better than 540 K at 808 GHz. An Al-wiring mixer designed for the 1050 GHz band has an uncorrected DSB receiver noise temperature of 840 K at 1042 GHz and a 2.5 K bath temperature. Mixer performance analysis shows that Nb junctions can work well up to twice the gap frequency and that the major cause of loss above the gap frequency is the RF loss in the microstrip tuning structures. Further advances in THz SIS mixers may be possible using circuits fabricated with higher-gap superconductors such as NbN. However, this will require high-quality films with low RF surface resistance at THz frequencies.
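For context on the noise temperatures quoted above, the quantum-limited scale hν/k_B can be computed directly. This back-of-the-envelope check is a reader's aid, not a calculation from the thesis.

```python
# Photon-noise temperature scale h*nu/k_B for the quoted bands, using CODATA
# values of the constants (a sanity check, not from the thesis).
h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K

def quantum_temperature(nu_hz):
    """Return h*nu/k_B in kelvin."""
    return h * nu_hz / kB

# "around 5 h nu / k_B up to 700 GHz" corresponds to roughly 168 K there:
print(5 * quantum_temperature(700e9))
# The 840 K result at 1042 GHz is about 17 times h nu / k_B:
print(840 / quantum_temperature(1042e9))
```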
Abstract:
In this thesis we uncover a new relation which links thermodynamics and information theory. We consider time as a channel and the detailed state of a physical system as a message. As the system evolves with time, ever-present noise ensures that the "message" is corrupted. Thermodynamic free energy measures the approach of the system toward equilibrium. Information-theoretic mutual information measures the loss of memory of the initial state. We regard the free energy and the mutual information as operators which map probability distributions over state space to real numbers. In the limit of long times, we show how the free energy operator and the mutual information operator asymptotically attain a very simple relationship to one another. This relationship is founded on the common appearance of entropy in the two operators and on an identity between internal energy and conditional entropy. The use of conditional entropy is what distinguishes our approach from previous efforts to relate thermodynamics and information theory.
Abstract:
The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth; their intersection takes place in gravitational-wave physics.
Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.
The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby bringing them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.
The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects, as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.
In Chapters 4-5, we build theoretical tools for analyzing so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing a nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.
Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.
The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes - a type of object predicted by general relativity whose properties depend highly on the strong-field regime of the theory. Although black holes have been inferred to exist at centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black-holes' strong-field region.
The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.
Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze oscillation modes (quasi-normal modes, or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 11 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra for certain ranges of (l,m) (the angular quantum numbers) as a/M → 1. With tools prepared in Chapters 9 and 10, in Chapter 11 we also obtain an analytical approximation to the scalar Green function in Kerr spacetime.
Abstract:
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.
At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.
In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate, for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
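The interplay of rates described above can be sketched numerically. The cubic suppression p → 35p³ is the standard scaling of the well-known 15-to-1 magic-state distillation protocol; the additive Clifford-error floor used here is a crude illustrative assumption, not the thesis's model.

```python
# Sketch of the noise-rate interplay in magic-state distillation: cubic
# suppression per round (standard 15-to-1 scaling, p -> 35 p^3) plus a crude
# additive floor from faulty Clifford gates (illustrative assumption).
def distill(p_state, eps_clifford, rounds):
    p = p_state
    for _ in range(rounds):
        p = 35 * p**3 + eps_clifford   # cubic suppression, Clifford floor
    return p

p0, eps = 1e-2, 1e-6
errors = [distill(p0, eps, r) for r in range(5)]
# The error falls rapidly at first, then saturates near the Clifford rate
# eps: further distillation cannot beat the faulty-gate floor.
```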
Abstract:
Optical frequency combs (OFCs) provide a direct phase-coherent link between optical and RF frequencies, and enable precision measurement of optical frequencies. In recent years, a new class of frequency combs (microcombs) has emerged based on parametric frequency conversion in dielectric microresonators. Microcombs have large line spacings, from tens to hundreds of GHz, allowing easy access to individual comb lines for arbitrary waveform synthesis. They also provide a broadband parametric gain bandwidth, not limited by the specific atomic or molecular transitions of conventional OFCs. The emerging applications of microcombs include low-noise microwave generation, astronomical spectrograph calibration, direct comb spectroscopy, and high-capacity telecommunications.
In this thesis, research is presented starting with the introduction of a new type of chemically etched, planar silica-on-silicon disk resonator. A record Q factor of 875 million is achieved for on-chip devices. A simple and accurate approach to characterizing the FSR and dispersion of microcavities is demonstrated. Microresonator-based frequency combs (microcombs) with microwave repetition rates below 80 GHz are demonstrated on a chip for the first time. Overall low threshold power (as low as 1 mW) of microcombs across a wide range of resonator FSRs, from 2.6 to 220 GHz, in surface-loss-limited disk resonators is demonstrated. The rich and complex dynamics of microcomb RF noise are studied. High-coherence RF phase-locking of microcombs is demonstrated, where injection locking of the subcomb offset frequencies is observed via pump-detuning alignment. Moreover, temporal mode locking, featuring subpicosecond pulses from a parametric 22 GHz microcomb, is observed. We further demonstrate shot-noise-limited white phase noise of a microcomb for the first time. Finally, stabilization of the microcomb repetition rate is realized by phase-locked-loop control.
For another major nonlinear optical application of disk resonators, highly coherent, stimulated Brillouin lasers (SBLs) on silicon are also demonstrated, with a record low Schawlow-Townes noise of less than 0.1 Hz^2/Hz for any chip-based laser and low technical noise comparable to commercial narrow-linewidth fiber lasers. The SBL devices are efficient, featuring more than 90% quantum efficiency and thresholds as low as 60 microwatts. Moreover, novel properties of the SBL are studied, including cascaded operation, threshold tuning, and mode-pulling phenomena. Furthermore, high-performance microwave generation using on-chip cascaded Brillouin oscillation is demonstrated. It is also robust enough to enable incorporation as the optical voltage-controlled oscillator in the first demonstration of a photonic-based microwave frequency synthesizer. Finally, applications of microresonators as frequency reference cavities and low-phase-noise optomechanical oscillators are presented.
Abstract:
The relentlessly increasing demand for network bandwidth, driven primarily by Internet-based services such as mobile computing, cloud storage, and video-on-demand, calls for more efficient utilization of the available communication spectrum, such as that afforded by resurgent DSP-powered coherent optical communications. Encoding information in the phase of the optical carrier, using multilevel phase modulation formats, and employing coherent detection at the receiver allows for enhanced spectral efficiency and thus enables increased network capacity. The distributed feedback (DFB) semiconductor laser has served as the near-exclusive light source powering the fiber-optic, long-haul network for over 30 years. The transition to coherent communication systems is pushing the DFB laser to the limits of its abilities. This is due to its limited temporal coherence, which directly translates into the number of different phases that can be imparted to a single optical pulse and thus into the data capacity. Temporal coherence, most commonly quantified by the spectral linewidth Δν, is limited by phase noise, a result of quantum-mandated spontaneous emission of photons due to random recombination of carriers in the active region of the laser.
In this work we develop a fundamentally new type of semiconductor laser with the requisite coherence properties. We demonstrate electrically driven lasers characterized by a quantum-noise-limited spectral linewidth as low as 18 kHz. This narrow linewidth is the result of a new laser design philosophy that separates the functions of photon generation and storage and is enabled by a hybrid Si/III-V integration platform. Photons generated in the active region of the III-V material are readily stored in the low-loss Si that hosts the bulk of the laser field, thereby enabling high-Q photon storage. The storage of a large number of coherent quanta acts as an optical flywheel, which by its inertia reduces the effect of the spontaneous-emission-mandated phase perturbations on the laser field, while the enhanced photon lifetime effectively reduces the emission rate of incoherent quanta into the lasing mode. Narrow linewidths are obtained over a wavelength bandwidth spanning the entire optical communication C-band (1530-1575 nm) at only a fraction of the input power required by conventional DFB lasers. The results presented in this thesis hold great promise for the large-scale integration of lithographically tuned, high-coherence laser arrays for use in coherent communications, enabling Tb/s-scale data capacities.
Abstract:
In this thesis, I apply detailed waveform modeling to study noise correlations in different environments, and earthquake waveforms for source parameters and velocity structure.
Green's functions from ambient noise correlations have primarily been used for travel-time measurement. In Part I of this thesis, by detailed waveform modeling of noise correlation functions, I retrieve both surface waves and crustal body waves from noise, and use them in improving earthquake centroid locations and regional crustal structures. I also present examples in which the noise correlations do not yield Green's functions, yet the results are still interesting and useful after case-by-case analyses, including non-uniform distribution of noise sources, spurious velocity changes, and noise correlations on the Amery Ice Shelf.
In Part II of this thesis, I study teleseismic body waves of earthquakes for source parameters or near-source structure. With the dense modern global network and improved methodologies, I obtain high-resolution earthquake locations, focal mechanisms and rupture processes, which provide critical insights to earthquake faulting processes in shallow and deep parts of subduction zones. Waveform modeling of relatively simple subduction zone events also displays new constraints on the structure of subducted slabs.
In summary, the philosophy behind my approaches to these relatively independent problems is to extract observational insights from seismic waveforms in critical and simple ways.
Abstract:
The technique of variable-angle electron energy-loss spectroscopy has been used to study the electronic spectroscopy of the diketene molecule. The experiment was performed using incident electron beam energies of 25 eV and 50 eV, and at scattering angles between 10° and 90°. The energy-loss region from 2 eV to 11 eV was examined. One spin-forbidden transition has been observed at 4.36 eV, and three others that are spin-allowed have been located at 5.89 eV, 6.88 eV, and 7.84 eV. Based on the intensity variation of these transitions with impact energy and scattering angle, and through analogy with simpler molecules, the first three transitions are tentatively assigned to an n → π* transition, a π → σ*(3s) Rydberg transition, and a π → π* transition.
Thermal decomposition of chlorodifluoromethane, chloroform, dichloromethane, and chloromethane under flash-vacuum pyrolysis conditions (900-1100°C) was investigated by the technique of electron energy-loss spectroscopy, using an impact energy of 50 eV and a scattering angle of 10°. The pyrolytic reaction follows a hydrogen-chloride α-elimination pathway. The difluoromethylene radical was produced from chlorodifluoromethane pyrolysis at 900°C and identified by its X^1A_1 → A^1B_1 band at 5.04 eV.
Finally, a number of exploratory studies have been performed. The thermal decomposition of diketene was studied under flash-vacuum pressures (1-10 mTorr) and temperatures ranging from 500°C to 1000°C. The complete decomposition of the diketene molecule into two ketene molecules was achieved at 900°C. The pyrolysis of the trifluoromethyl iodide molecule at 1000°C produced an electron energy-loss spectrum with several sharp iodine-atom peaks and only a small shoulder at 8.37 eV as a possible trifluoromethyl radical feature. The electron energy-loss spectrum of trichlorobromomethane at 900°C mainly showed features from bromine atoms, chlorine molecules, and tetrachloroethylene. Hexachloroacetone decomposed partially at 900°C, but showed well-defined features from chlorine, carbon monoxide, and tetrachloroethylene molecules. The bromodichloromethane molecule was investigated at 1000°C and produced a congested electron energy-loss spectrum with bromine-atom, hydrogen-chloride, hydrogen-bromide, and tetrachloroethylene features.
Abstract:
The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.
Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide an indication of the damage location. This is of significance for sparsely instrumented civil structures.
The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that inhibits the amount of motion between two floors. The method is successfully applied and is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. 
Acceleration records recorded after the date of damage show a clear increase in high-frequency, short-duration pulses compared to those previously recorded. One pulse from the undamaged state and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
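The pulse-identification step described above can be sketched as a simple threshold detector on a synthetic record. The window length, threshold multiple, and injected pulse below are all illustrative assumptions, not the thesis's calibrated procedure.

```python
# Flag short-duration, high-amplitude excursions in an acceleration record
# by thresholding against the baseline noise level (illustrative sketch).
import math
import random

random.seed(0)
fs = 1000                                   # sampling rate in Hz (assumed)
n = 2000
accel = [random.gauss(0.0, 0.01) for _ in range(n)]   # ambient vibration
# Inject a 10 ms, 200 Hz burst as a stand-in for a damage-generated pulse.
for i in range(1000, 1010):
    accel[i] += 0.2 * math.sin(2 * math.pi * 200 * i / fs)

# Baseline RMS from a stretch known to be pulse-free, then a 6-sigma gate.
baseline_rms = math.sqrt(sum(x * x for x in accel[:500]) / 500)
threshold = 6 * baseline_rms
flagged = [i for i in range(n) if abs(accel[i]) > threshold]
# The arrival time of the first flagged sample localizes the pulse.
```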
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
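The adaptive loop described above (choose a test, observe a choice, update a posterior over theories) can be sketched generically. The code below greedily maximizes expected information gain, which is one of the baseline criteria mentioned in the abstract, not the EC2 criterion itself; the theories and their predicted choice probabilities are invented for illustration.

```python
# Generic adaptive-testing loop: maintain a posterior over candidate
# theories, pick the binary choice test with the highest expected
# information gain, and update on the observed response. The theories and
# probabilities below are made up for illustration.
import math

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

# Pr(subject picks option A) predicted by each theory, for 4 candidate tests.
predictions = {
    "expected_value": [0.9, 0.1, 0.5, 0.9],
    "prospect":       [0.9, 0.9, 0.1, 0.1],
    "CRRA":           [0.1, 0.9, 0.9, 0.5],
}
theories = list(predictions)

def expected_info_gain(test, posterior):
    p_a = sum(posterior[i] * predictions[t][test]
              for i, t in enumerate(theories))
    gain = entropy(posterior)
    for p_out, lik in ((p_a, lambda t: predictions[t][test]),
                       (1 - p_a, lambda t: 1 - predictions[t][test])):
        if p_out > 0:
            cond = [posterior[i] * lik(t) / p_out
                    for i, t in enumerate(theories)]
            gain -= p_out * entropy(cond)
    return gain

def update(posterior, test, chose_a):
    lik = [predictions[t][test] if chose_a else 1 - predictions[t][test]
           for t in theories]
    w = [p * l for p, l in zip(posterior, lik)]
    z = sum(w)
    return [x / z for x in w]

posterior = [1 / 3] * 3
best = max(range(4), key=lambda t: expected_info_gain(t, posterior))
posterior = update(posterior, best, chose_a=True)   # observe one response
```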
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In all these models, the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
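The discount functions being compared can be sketched as follows. Parameter values are illustrative, not the thesis's estimates, and the quasi-hyperbolic function is written in the common β-δ parameterization rather than the (α, β) notation used above.

```python
def exponential(t, delta=0.9):
    # D(t) = delta^t: constant discount rate, time-consistent.
    return delta ** t

def hyperbolic(t, k=0.5):
    # D(t) = 1 / (1 + k t): discount rate declines with delay.
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.9):
    # "Present bias": full weight today, an extra uniform discount beta afterwards.
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta_g=2.0):
    # D(t) = (1 + alpha t)^(-beta_g / alpha): Loewenstein-Prelec form.
    return (1.0 + alpha * t) ** (-beta_g / alpha)
```

The exponential function keeps the ratio D(t+1)/D(t) constant, whereas the hyperbolic families make that ratio rise with t, which is exactly the declining impatience that produces temporal choice inconsistency.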
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone explains. More importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
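A minimal sketch of the kind of model this describes, assuming the standard prospect-theory value function and a logit choice rule; the functional form, parameter values, and reference price are illustrative, not the estimates from the retailer data.

```python
import math

def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss x relative to the reference point.

    lam > 1 is the loss-aversion coefficient: losses loom larger than gains.
    """
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def choice_prob(u_item, u_other):
    """Logit probability of choosing the item over a single alternative."""
    return 1.0 / (1.0 + math.exp(u_other - u_item))

# Consumers evaluate price relative to a reference price (here 10.0).
ref = 10.0
gain = value(ref - 8.0)   # item discounted to 8: a gain of 2
loss = value(ref - 12.0)  # item repriced to 12: a loss of 2
```

Because |value(-2)| > |value(+2)|, ending a discount depresses choice probability more than the original discount raised it, which is the asymmetric demand response the abstract tests for.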
In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Thermal noise arising from mechanical loss in highly reflective dielectric coatings is a significant noise source in precision optical measurements. In particular, Advanced LIGO, a large-scale interferometer aiming to observe gravitational waves, is expected to be limited by coating thermal noise in its most sensitive region, around 30–300 Hz. Various theoretical calculations for predicting coating Brownian noise have been proposed; however, because knowledge of the coating material properties is relatively limited, the noise cannot be estimated accurately. A testbed that can directly observe coating thermal noise close to the Advanced LIGO band therefore serves as an indispensable tool to verify the calculations, study the material properties of the coatings, and estimate the detector's performance.
This dissertation reports a setup with the sensitivity to observe wideband (10 Hz to 1 kHz) thermal noise from fused silica/tantala coatings at room temperature, using fixed-spacer Fabry–Perot cavities. The important fundamental and technical noises associated with the setup are discussed. The coating loss obtained from the measurement agrees with results reported in the literature. The setup serves as a testbed to study thermal noise in highly reflective mirrors made from different materials; one example is an AlxGa1−xAs (AlGaAs) heterostructure. An optimized design that minimizes thermo-optic noise in the coating is proposed and discussed in this work.
Abstract:
This work reports investigations of weakly superconducting proximity effect bridges. These bridges, which exhibit the Josephson effects, are produced by bisecting a superconductor with a short (<1 µm) region of material whose superconducting transition temperature is below that of the adjacent superconductors. The bridges are fabricated from layered refractory-metal thin films whose transition temperature depends on the thickness ratio of the materials involved; the thickness ratio is changed in the area of the bridge to lower its transition temperature. This is done through novel photolithographic techniques described in Chapter 2.
If two such proximity effect bridges are connected in parallel, they form a quantum interferometer. The maximum zero-voltage current through this circuit is periodically modulated by the magnetic flux through the circuit. At a constant bias current, the modulation of the critical current produces a modulation in the dc voltage across the bridges. This change in dc voltage has been found to be the result of a change in the internal dissipation in the device. A simple model using lumped circuit theory and treating the bridges as quantum oscillators of frequency ω = 2eV/ħ, where V is the time-average voltage across the device, has been found to adequately describe the observed voltage modulation.
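The Josephson oscillation frequency quoted above is easy to evaluate numerically; this sketch simply plugs the CODATA constants into f = 2eV/h (equivalently ω = 2eV/ħ).

```python
# Josephson relation: a bridge at time-average dc voltage V oscillates
# at frequency f = 2eV/h, i.e. angular frequency omega = 2eV/hbar.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

def josephson_frequency(V):
    """Oscillation frequency in Hz for a time-average voltage V (in volts)."""
    return 2 * e * V / h
```

One microvolt across a bridge thus corresponds to roughly 484 MHz, which is why these weak links respond to microwave-frequency dynamics even at nanovolt-to-microvolt bias levels.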
The quantum interferometers have been converted to a galvanometer through the inclusion of an integral thin film current path which couples magnetic flux through the interferometer. Thus a change in signal current produces a change in the voltage across the interferometer at a constant bias current. This work is described in Chapter 3 of the text.
The sensitivity of any device incorporating proximity effect bridges will ultimately be determined by the fluctuations in their electrical parameters. We have measured the spectral power density of the voltage fluctuations in proximity effect bridges using room-temperature electronics and a liquid-helium-temperature transformer to match the very low (~0.1 Ω) impedances characteristic of these devices.
We find the voltage noise to agree quite well with that predicted by phonon noise in the normal conduction through the bridge plus a contribution from the superconducting pair current through the bridge, proportional to the ratio of this current to the time-average voltage across the bridge. The total voltage fluctuations are given by ⟨V²(f)⟩ = 4kT R_d² I/V, where R_d is the dynamic resistance, I the total current, and V the voltage across the bridge. An additional noise source with a strong 1/f^n dependence, 1.5 < n < 2, appears if the bridges are fabricated on a glass substrate. This excess noise, attributed to thermodynamic temperature fluctuations in the volume of the bridge, increases dramatically on a glass substrate because the thermal diffusivity of glass is greatly diminished compared to that of sapphire.
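The quoted noise formula can be sanity-checked numerically; the operating point below (4.2 K, 0.1 Ω) is an illustrative helium-temperature example matching the impedances mentioned above, not a measurement from the thesis.

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def voltage_noise_psd(T, R_d, I, V):
    """Spectral density <V^2(f)> = 4 k T R_d^2 I / V, in V^2/Hz.

    T: temperature (K), R_d: dynamic resistance (ohm),
    I: total current (A), V: time-average voltage (V).
    """
    return 4 * k_B * T * R_d ** 2 * I / V
```

A useful consistency check on the formula: when the conduction is purely normal, I/V = 1/R_d and the expression collapses to the familiar Johnson-noise value 4kT R_d.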
Abstract:
Noise measurements from 140 K to 350 K ambient temperature and between 10 kHz and 22 MHz, performed on a double injection silicon diode as a function of operating point, indicate that the high-frequency noise depends linearly on the ambient temperature T and on the differential conductance g measured at the same frequency. The noise is represented quantitatively by ⟨i²⟩ = α·4kTgΔf. A new interpretation demands Nyquist noise, with α ≡ 1, in these devices at high frequencies; this is in accord with an equivalent circuit derived for the double injection process. The effects of diode geometry on the static I-V characteristic as well as on the ac properties are illustrated. Investigation of the temperature dependence of double injection yields measurements of the temperature variation of the common high-level lifetime τ (τ ∝ T²), the hole conductivity mobility µ_p (µ_p ∝ T^(−2.18)), and the electron conductivity mobility µ_n (µ_n ∝ T^(−1.75)).
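The Nyquist form ⟨i²⟩ = α·4kTgΔf is straightforward to evaluate; the operating point below (300 K, 1 mS, 1 Hz bandwidth) is an illustrative example, not one of the measured bias points.

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def nyquist_current_noise(T, g, bandwidth, alpha=1.0):
    """Mean-square noise current <i^2> = alpha * 4 k T g * delta_f, in A^2.

    T: ambient temperature (K), g: differential conductance (S),
    bandwidth: delta_f (Hz), alpha: 1 for pure Nyquist noise.
    """
    return alpha * 4 * k_B * T * g * bandwidth

# 300 K, 1 mS, 1 Hz bandwidth: about 1.66e-23 A^2, i.e. roughly 4 pA/sqrt(Hz).
i2 = nyquist_current_noise(300.0, 1e-3, 1.0)
```

The key empirical point of the abstract is that the measured α is identically 1 at high frequencies, so the device noise is fully thermal once the frequency-dependent conductance g is measured at the same frequency.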
Abstract:
The LIGO gravitational wave detectors are on the brink of making the first direct detections of gravitational waves. Noise cancellation techniques are described that simplify the commissioning of these detectors and significantly improve their sensitivity to astrophysical sources. Future upgrades to the ground-based detectors will require further cancellation of Newtonian gravitational noise in order to make the transition from detectors striving to make the first direct detection of gravitational waves to observatories extracting physics from many, many detections. Techniques for this noise cancellation are described, as well as the work remaining in this realm.