997 results for "Empirical dispersion corrections"
Abstract:
In an optical parametric chirped pulse amplification (OPCPA) laser system, the residual phase dispersion should be compensated as fully as possible to shorten the amplified pulses and improve the pulse contrast ratio. Expressions for the orders of the induced phase in collinear optical parametric amplification (OPA) processes are presented at the central signal wavelength to give a clear physical picture and to simplify the design of phase compensation. As examples, we simulate two OPCPA systems, compensating the phases up to partial fourth-order terms, and obtain flat phase spectra of 200-nm bandwidth at 1064 nm and 90-nm bandwidth at 800 nm.
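The Taylor-order description of spectral phase used above can be sketched numerically. The following is a hypothetical illustration (function names, units, and sample values are invented, not from the paper): it recovers the dispersion orders of a sampled phase spectrum by fitting a polynomial about the central angular frequency.

```python
import math
import numpy as np

# Hypothetical sketch: recover dispersion orders (group delay, GDD, TOD, FOD)
# from a sampled spectral phase by fitting a Taylor polynomial about the
# central angular frequency w0.  phase(w) ~= sum_n phi_n / n! * (w - w0)**n,
# so each order is phi_n = n! * c_n for fitted coefficients c_n.
def dispersion_orders(w, phase, w0, max_order=4):
    coeffs = np.polynomial.polynomial.polyfit(w - w0, phase, max_order)
    return np.array([math.factorial(n) * c for n, c in enumerate(coeffs)])

# Synthetic phase with known GDD = 500 fs^2 and TOD = -2000 fs^3 at 1064 nm.
w0 = 2 * np.pi * 299.792 / 1064.0            # central frequency, rad/fs
w = np.linspace(w0 - 0.1, w0 + 0.1, 201)     # sampled frequency grid, rad/fs
phase = 0.5 * 500.0 * (w - w0)**2 + (-2000.0 / 6.0) * (w - w0)**3
orders = dispersion_orders(w, phase, w0)     # orders[2] ~ 500, orders[3] ~ -2000
```

Compensation design then amounts to applying the negatives of the fitted orders, which is why flattening the phase up to a given order shortens the amplified pulse.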
Abstract:
The nonlinear dynamics of 1.6-μm fs laser pulses propagating in fused silica is investigated by employing a full-order dispersion model. In contrast to X-wave generation in normally dispersive media, a few-cycle, spatiotemporally compressed soliton wave is generated by the opposing contributions of anomalous group velocity dispersion (GVD) and self-phase modulation. At the trailing edge of the pulse, however, a shock wave forms, which generates a separate, strong supercontinuum peaked at 670 nm. This shock wave is also the origin of the conical emission formed in both the time and frequency domains, with the contribution of normal GVD at visible wavelengths.
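The interplay of anomalous GVD and self-phase modulation invoked above is the classic soliton balance of the nonlinear Schrödinger equation. A standard split-step Fourier sketch of that balance (not the paper's full-order dispersion model; normalized soliton units, illustrative parameters):

```python
import numpy as np

# Split-step Fourier propagation of the 1-D normalized NLSE with anomalous
# GVD (beta2 < 0) and self-phase modulation only.  A fundamental soliton
# input should propagate with its shape and energy preserved.
def propagate(u0, t, beta2=-1.0, gamma=1.0, z=1.0, steps=200):
    dz = z / steps
    w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
    half_lin = np.exp(0.25j * beta2 * w**2 * dz)      # half linear step
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(half_lin * np.fft.fft(u))     # D/2 in frequency domain
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)  # full nonlinear step
        u = np.fft.ifft(half_lin * np.fft.fft(u))     # D/2 again
    return u

t = np.linspace(-20, 20, 1024)
u0 = 1.0 / np.cosh(t)          # fundamental soliton of the normalized NLSE
u1 = propagate(u0, t)
# Energy is conserved and the sech pulse keeps its peak amplitude of 1.
```

Capturing the shock wave and supercontinuum described in the abstract would additionally require higher-order dispersion and self-steepening terms, which this minimal sketch omits.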
Abstract:
This thesis examines four distinct facets and methods for understanding political ideology, and so it includes four distinct chapters with only moderate connections between them. Chapter 2 examines how reactions to emotional stimuli vary with political opinion, and how the stimuli can produce changes in an individual's political preferences. Chapter 3 examines the connection between self-reported fear and item nonresponse on surveys. Chapter 4 examines the connection between political and moral consistency with low-dimensional ideology, and Chapter 5 develops a technique for estimating ideal points and salience in a low-dimensional ideological space.
Abstract:
It is shown that in a closed equispaced three-level ladder system, by controlling the relative phase of the two applied coherent fields, the conversion from absorption with inversion to lasing without inversion (LWI) can be realized, a large index of refraction with zero absorption can be obtained, and a considerable increase in the spectral region and value of the LWI gain can be achieved. Our study also reveals that incoherent pumping has a remarkable effect on the phase-dependent properties of the system. Modifying the value of the incoherent pumping can change the behaviour of the system from absorption to amplification and significantly enhance the LWI gain. If the incoherent pumping is absent, no gain is obtained for any value of the relative phase. (c) 2007 Elsevier GmbH. All rights reserved.
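Gain and absorption results of this kind come from the steady state of a driven, dissipative master equation. A generic Lindblad steady-state sketch (not the paper's specific model; level labels, rates, and the phase placement are illustrative) for a three-level ladder with decay and an incoherent pump:

```python
import numpy as np

# Steady state of a Lindblad master equation via the vectorized Liouvillian
# (row-major vec convention: vec(A rho B) = kron(A, B.T) vec(rho)).
def steady_state(H, collapse_ops):
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))        # coherent part -i[H, rho]
    for C in collapse_ops:
        CdC = C.conj().T @ C
        L += np.kron(C, C.conj()) - 0.5 * (np.kron(CdC, I) + np.kron(I, CdC.T))
    vals, vecs = np.linalg.eig(L)
    rho = vecs[:, np.argmin(np.abs(vals))].reshape(d, d)  # null eigenvector
    return rho / np.trace(rho)

# Illustrative ladder |1>-|2>-|3>: two resonant fields, one carrying a
# relative phase phi, plus spontaneous decay and an incoherent pump.
Omega1, Omega2, phi = 1.0, 0.5, np.pi / 3
gamma21, gamma32, pump = 1.0, 0.5, 0.2
H = np.zeros((3, 3), complex)
H[0, 1] = -0.5 * Omega1 * np.exp(1j * phi)   # field on |1>-|2>, carries phase
H[1, 2] = -0.5 * Omega2                      # field on |2>-|3>
H += H.conj().T
ops = [np.sqrt(gamma21) * np.outer([1, 0, 0], [0, 1, 0]),  # |2> -> |1> decay
       np.sqrt(gamma32) * np.outer([0, 1, 0], [0, 0, 1]),  # |3> -> |2> decay
       np.sqrt(pump) * np.outer([0, 1, 0], [1, 0, 0])]     # incoherent pump
rho = steady_state(H, ops)
# Im(rho[1, 0]) is proportional to gain/absorption on the |1>-|2> transition.
```

In the closed-loop configurations the abstract studies, the relative phase cannot be gauged away, so quantities like Im(ρ21) genuinely depend on it; this sketch only shows where the phase enters the calculation.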
Abstract:
The control role of the relative phase between the probe and driving fields on the gain and dispersion in an open Lambda-type inversionless lasing system with spontaneously generated coherence (SGC) is investigated. It is shown that the inversionless gain and dispersion are quite sensitive to variation in the relative phase; by adjusting the value of the relative phase, electromagnetically induced transparency (EIT), a high refractive index with zero absorption and a larger inversionless gain can be realized. It is also shown that, in the contributions to the inversionless gain (absorption) and dispersion, the contribution from SGC is always much larger than that from the dynamically induced coherence for any value of the relative phase. Our analysis shows that variation in the SGC effect causes the spectral regions and values of the inversionless gain and dispersion to vary markedly. We also found that, under the same conditions, the values of the inversionless gain and dispersion in the open system are markedly larger than those in the corresponding closed system; EIT occurs in the open system but cannot occur in the closed system.
Abstract:
The control role of the relative phase between the probe and driving fields on the gain, dispersion and populations in an open V-type three-level system with spontaneously generated coherence is studied. The results show that, by adjusting the value of the relative phase, the transformation between lasing with inversion and lasing without inversion (LWI) can be realized, and high dispersion (refractive index) without absorption can be obtained. The shape and value range of the dispersion curve are similar to those of the gain curve, and this similarity is closely related to the relative phase. The effects of the atomic exit and injection rates and the incoherent pump rate on the control role of the relative phase are also analysed. It is found to be easier to obtain LWI by adjusting the relative phase in the open system than in the closed system, and with an incoherent pump than without one. Moreover, the open system gives a larger LWI gain than the closed system.
Abstract:
Consumption of addictive substances poses a challenge to economic models of rational, forward-looking agents. This dissertation presents a theoretical and empirical examination of consumption of addictive goods.
The theoretical model draws on evidence from psychology and neurobiology to improve on the standard assumptions used in intertemporal consumption studies. I model agents who may misperceive the severity of the future consequences of consuming addictive substances, and I allow an agent's environment to shape her preferences in a systematic way, as suggested by numerous studies that have found that craving is induced by environmental cues associated with past substance use. The behavior of agents in this behavioral model of addiction can mimic the pattern of quitting and relapsing that is prevalent among addictive substance users.
Chapter 3 presents an empirical analysis of the Becker and Murphy (1988) model of rational addiction using data on grocery store sales of cigarettes. This essay empirically tests the model's predictions concerning consumption responses to future and past price changes as well as the prediction that the response to an anticipated price change differs from the response to an unanticipated price change. In addition, I consider the consumption effects of three institutional changes that occur during the time period 1996 through 1999.
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process it can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
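As a concrete illustration of the kind of single-loop tensor reduction such a method automates (this is a generic textbook example, not taken from the thesis): a rank-one two-point integral with a loop momentum in the numerator collapses onto the external momentum,

```latex
\int \frac{d^4 k}{(2\pi)^4}\,
  \frac{k^\mu}{(k^2 - m^2)\left((k+p)^2 - m^2\right)}
  = p^\mu \, B_1\!\left(p^2; m^2, m^2\right),
```

and the scalar form factor $B_1$ follows by contracting both sides with $p_\mu$ and rewriting $2\,k\cdot p = \left[(k+p)^2 - m^2\right] - \left(k^2 - m^2\right) - p^2$, which cancels the numerator against the denominators and leaves only scalar integrals. It is exactly this contract-and-cancel bookkeeping that is tedious by hand but mechanical in an algebra system such as Reduce.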
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
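The edge-cutting idea can be sketched in a few lines. This is a toy, noiseless illustration of the EC2 objective (hypotheses, priors, test names, and predictions are all invented): hypotheses in different theory classes are joined by edges weighted by the product of their priors, and the greedy step picks the test guaranteed to cut the most edge weight.

```python
import itertools

# Toy noiseless EC2 sketch: each hypothesis carries a theory class, a prior,
# and a deterministic predicted outcome per test.  An edge joins hypotheses
# in different classes; a test is guaranteed to cut the (h, g) edge whenever
# h and g predict different outcomes, since any observed outcome eliminates
# at least one endpoint.
def guaranteed_cut(test, hyps):
    cut = 0.0
    for h, g in itertools.combinations(hyps, 2):
        if h["class"] != g["class"] and h["pred"][test] != g["pred"][test]:
            cut += h["prior"] * g["prior"]
    return cut

def greedy_test(tests, hyps):
    return max(tests, key=lambda t: guaranteed_cut(t, hyps))

hyps = [
    {"class": "EV",   "prior": 0.4, "pred": {"t1": "A", "t2": "A"}},
    {"class": "PT",   "prior": 0.4, "pred": {"t1": "B", "t2": "A"}},
    {"class": "CRRA", "prior": 0.2, "pred": {"t1": "B", "t2": "B"}},
]
best = greedy_test(["t1", "t2"], hyps)   # t1 separates the two heavy classes
```

The full BROAD machinery layers noisy-response likelihoods, posterior updating, and accelerated (lazy) greedy evaluation on top of this objective; the adaptive submodularity proof is what licenses the greedy shortcut.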
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
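The competing discount functions named above can be put side by side. Parameter values here are illustrative defaults, not estimates from the experiment, and the quasi-hyperbolic model is written in the common (β, δ) form:

```python
# Discount factors for a delay of t periods under each family of models.
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.9):
    # "Present bias": a fixed penalty beta applies to any positive delay.
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta_g=2.0):
    return (1.0 + alpha * t) ** (-beta_g / alpha)

# A choice between 50 now and 80 two periods later, under each model:
immediate, delayed, t = 50.0, 80.0, 2
takes_delayed = {f.__name__: delayed * f(t) > immediate * f(0)
                 for f in (exponential, hyperbolic,
                           quasi_hyperbolic, generalized_hyperbolic)}
```

With these illustrative parameters only the exponential discounter waits for the larger payoff, which is the kind of discriminating choice an adaptive design like BROAD would seek out.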
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
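A minimal sketch of the mechanism described above (the parameter values, reference-price rule, and logit form are illustrative, not the estimated model): a prospect-theory value function over the gap between price and reference price, plugged into a logit purchase probability.

```python
import math

# Kahneman-Tversky value function: concave for gains, steeper for losses.
def pt_value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def purchase_prob(price, reference_price, base_utility=1.0, scale=1.0):
    # Gain if the item is cheaper than the reference, loss if dearer.
    u = base_utility + pt_value(reference_price - price)
    return 1.0 / (1.0 + math.exp(-u / scale))

# A discount lifts demand; ending the discount (with the reference price now
# anchored at the lower level) cuts demand by more, because losses loom
# larger than gains.
p_ref = 10.0
lift = purchase_prob(9.0, p_ref) - purchase_prob(10.0, p_ref)   # during discount
drop = purchase_prob(10.0, 9.0) - purchase_prob(10.0, 10.0)     # after it ends
```

The asymmetry |drop| > lift is the signature of loss aversion that the demand data are tested for: more extra demand disappears when the discount ends than was gained while it ran.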
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.