Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and are often averse to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
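A minimal numerical sketch may help fix ideas for the first special case, in which the likelihood ratio of states is distorted by a function of the past action's payoffs. The exponential form of the distortion, the two-state setup, and all names here are illustrative assumptions, not the thesis's axiomatized representation.

```python
import math

def bayes(prior, likelihood):
    """Standard Bayesian posterior over states, given signal likelihoods."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

def dissonant_update(prior, likelihood, utility, sensitivity):
    """Tilt the update toward states where the past action paid off:
    each state's likelihood is scaled by exp(sensitivity * utility)
    before Bayes' rule is applied. sensitivity = 0 recovers Bayes."""
    distorted = [l * math.exp(sensitivity * u)
                 for l, u in zip(likelihood, utility)]
    return bayes(prior, distorted)

prior = [0.5, 0.5]        # two states
likelihood = [0.4, 0.6]   # the signal favors state 2
utility = [1.0, 0.0]      # the past action pays off only in state 1

print(bayes(prior, likelihood))                           # Bayesian benchmark
print(dissonant_update(prior, likelihood, utility, 1.0))  # tilted toward state 1
```

With a positive sensitivity the agent ends up more confident in the state that justifies the earlier action than a Bayesian would, which is the qualitative force behind the asymmetric-reaction and polarization applications.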
The second chapter characterizes a decision maker with sticky beliefs: one who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
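The convex-combination representation is simple enough to state in a few lines of code. This is a sketch under illustrative assumptions (the parameter name `stickiness` and the two-state example are not from the thesis):

```python
def bayes(prior, likelihood):
    """Standard Bayesian posterior over states, given signal likelihoods."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

def sticky_update(prior, likelihood, stickiness):
    """Updated belief = stickiness * prior + (1 - stickiness) * Bayes.
    stickiness = 0 recovers full Bayesian updating; stickiness = 1
    ignores the signal entirely."""
    posterior = bayes(prior, likelihood)
    return [stickiness * p + (1 - stickiness) * q
            for p, q in zip(prior, posterior)]

prior = [0.5, 0.5]
likelihood = [0.9, 0.1]   # a strong signal for state 1

print(sticky_update(prior, likelihood, 0.0))  # full Bayesian response
print(sticky_update(prior, likelihood, 0.5))  # shrinks halfway back to the prior
```

Because the updated belief is linear in the stickiness parameter, two belief elicitations (prior and post-signal) identify it, which is what makes the parameter easy to recover from experimental data.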
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one that chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
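The threshold rule can be sketched concretely. In this hypothetical implementation (the function names, the two-state setup, and the convention that the threshold is a fraction of the best prior's likelihood are illustrative assumptions), a prior survives if the probability it assigned to the observed event is within the chosen fraction of the maximum, and each survivor is then updated by Bayes' rule:

```python
def bayes(prior, likelihood):
    """Bayesian posterior for a single prior, given event likelihoods."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

def update_prior_set(priors, likelihood, threshold):
    """Keep a prior if the probability it assigned to the observation is
    at least threshold * (the best prior's), then apply Bayes' rule.
    threshold = 0: generalized Bayesian updating (every prior assigning
    positive probability survives); threshold = 1: maximum likelihood
    updating (only the best-fitting priors survive)."""
    evidence = [sum(p * l for p, l in zip(prior, likelihood))
                for prior in priors]
    best = max(evidence)
    return [bayes(prior, likelihood)
            for prior, e in zip(priors, evidence)
            if e > 0 and e >= threshold * best]

priors = [[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]]
likelihood = [0.8, 0.2]   # the observation is far more likely in state 1

print(update_prior_set(priors, likelihood, 1.0))  # only the best-fitting prior
print(update_prior_set(priors, likelihood, 0.0))  # all three priors updated
```

Intermediate thresholds interpolate between the two named special cases, which is the sense in which the model nests both.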
Abstract:
In Part I, we construct a symmetric stress-energy-momentum pseudo-tensor for the gravitational fields of Brans-Dicke theory, and use this to establish rigorously conserved integral expressions for the energy-momentum P^i and angular momentum J^ik. Application of the two-dimensional surface integrals to the exact static spherical vacuum solution of Brans leads to an identification of our conserved mass with the active gravitational mass. Application to the distant fields of an arbitrary stationary source reveals that P^i and J^ik have the same physical interpretation as in general relativity. For gravitational waves whose wavelength is small on the scale of the background radius of curvature, averaging over several wavelengths in the Brill-Hartle-Isaacson manner produces a stress-energy-momentum tensor for gravitational radiation which may be used to calculate the changes in P^i and J^ik of their source.
In Part II, we develop strong evidence in favor of a conjecture by Penrose--that, in the Brans-Dicke theory, relativistic gravitational collapse in three dimensions produces black holes identical to those of general relativity. After pointing out that any black hole solution of general relativity also satisfies Brans-Dicke theory, we establish the Schwarzschild and Kerr geometries as the only possible spherical and axially symmetric black hole exteriors, respectively. Also, we show that a Schwarzschild geometry is necessarily formed in the collapse of an uncharged sphere.
Appendices discuss relationships among relativistic gravity theories and an example of a theory in which black holes do not exist.
Abstract:
An attempt is made to provide a theoretical explanation of the effect of the positive column on the voltage-current characteristic of a glow or an arc discharge. Such theories have been developed before, and all are based on balancing the production and loss of charged particles and accounting for the energy supplied to the plasma by the applied electric field. Differences among the theories arise from the approximations and omissions made in selecting processes that affect the particle and energy balances. This work is primarily concerned with the deviation from the ambipolar description of the positive column caused by space charge, electron-ion volume recombination, and temperature inhomogeneities.
The presentation is divided into three parts, the first of which involves the derivation of the final macroscopic equations from kinetic theory. The final equations are obtained by taking the first three moments of the Boltzmann equation for each of the three species in the plasma. Although the method used and the equations obtained are not novel, the derivation is carried out in detail in order to appraise the validity of numerous approximations and to justify the use of data from other sources. The equations are applied to a molecular hydrogen discharge contained between parallel walls. The applied electric field is parallel to the walls, and the dependent variables—electron and ion flux to the walls, electron and ion densities, transverse electric field, and gas temperature—vary only in the direction perpendicular to the walls. The mathematical description is given by a sixth-order nonlinear two-point boundary value problem which contains the applied field as a parameter. The amount of neutral gas and its temperature at the walls are held fixed, and the relation between the applied field and the electron density at the center of the discharge is obtained in the process of solving the problem. This relation corresponds to that between current and voltage and is used to interpret the effect of space charge, recombination, and temperature inhomogeneities on the voltage-current characteristic of the discharge.
The complete solution of the equations is impractical both numerically and analytically, and in Part II the gas temperature is assumed uniform so as to focus on the combined effects of space charge and recombination. The terms representing these effects are treated as perturbations to equations that would otherwise describe the ambipolar situation. However, the term representing space charge is not negligible in a thin boundary layer or sheath near the walls, and consequently the perturbation problem is singular. Separate solutions must be obtained in the sheath and in the main region of the discharge, and the relation between the electron density and the applied field is not determined until these solutions are matched.
In Part III the electron and ion densities are assumed equal, and the complicated space-charge calculation is thereby replaced by the ambipolar description. Recombination and temperature inhomogeneities are both important at high values of the electron density. However, the formulation of the problem permits a comparison of their relative effects, and temperature inhomogeneities are shown to become important at lower values of the electron density than recombination does. The equations are solved by direct numerical integration and by treating the term representing temperature inhomogeneities as a perturbation.
The conclusions reached in the study are primarily concerned with the association of the relation between electron density and axial field with the voltage-current characteristic. It is known that the effect of space charge can account for the subnormal glow discharge and that the normal glow corresponds to a close approach to an ambipolar situation. The effect of temperature inhomogeneities helps explain the decreasing characteristic of the arc, and the effect of recombination is not expected to appear except at very high electron densities.
Abstract:
Interest in the possible applications of a priori inequalities in linear elasticity theory motivated the present investigation. Korn's inequality under various side conditions is considered, with emphasis on Korn's constant. In the "second case" of Korn's inequality, a variational approach leads to an eigenvalue problem; it is shown that, for simply-connected two-dimensional regions, determining the spectrum of this eigenvalue problem is equivalent to finding the values of Poisson's ratio for which the displacement boundary-value problem of linear homogeneous isotropic elastostatics has a non-unique solution.
Previous work on the uniqueness and non-uniqueness issue for the latter problem is examined and the results applied to the spectrum of the Korn eigenvalue problem. In this way, further information on the Korn constant for general regions is obtained.
A generalization of the "main case" of Korn's inequality is introduced, and the associated eigenvalue problem is again related to the displacement boundary-value problem of linear elastostatics in two dimensions.
Abstract:
The problem of the continuation to complex values of the angular momentum of the partial wave amplitude is examined for the simplest production process, that of two particles → three particles. The presence of so-called "anomalous singularities" complicates the procedure followed relative to that used for quasi two-body scattering amplitudes. The anomalous singularities are shown to lead to exchange degenerate amplitudes with possible poles in much the same way as "normal" singularities lead to the usual signatured amplitudes. The resulting exchange-degenerate trajectories would also be expected to occur in two-body amplitudes.
The representation of the production amplitude in terms of the singularities of the partial wave amplitude is then developed and applied to the high energy region, with attention being paid to the emergence of "double Regge" terms. Certain new results are obtained for the behavior of the amplitude at zero momentum transfer, and some predictions of polarization and minima in momentum transfer distributions are made. A calculation of the polarization of the ρ⁰ meson in the reaction π⁻p → π⁻ρ⁰p at high energy with small momentum transfer to the proton is compared with data taken at 25 GeV by W. D. Walker and collaborators. The result is favorable, although limited by the statistics of the available data.
Abstract:
An equation for the reflection which results when an expanding dielectric slab scatters normally incident plane electromagnetic waves is derived using the invariant imbedding concept. The equation is solved approximately and the character of the solution is investigated. Also, an equation for the radiation transmitted through such a slab is similarly obtained. An alternative formulation of the slab problem is presented which is applicable to the analogous problem in spherical geometry. The form of an equation for the modal reflections from a nonrelativistically expanding sphere is obtained and some salient features of the solution are described. In all cases the material is assumed to be a nondispersive, nonmagnetic dielectric whose rest frame properties are slowly varying.
Abstract:
The degradation of image quality caused by aberrations of projection optics in lithographic tools is a serious problem in optical lithography. We propose what we believe to be a novel technique for measuring aberrations of projection optics based on two-beam interference theory. By utilizing partially coherent imaging theory, a novel model is derived that accurately characterizes the image displacement, induced by aberrations, of a fine grating pattern relative to a large pattern. Both even and odd aberrations are extracted independently from the relative image displacements of the printed patterns by two-beam interference imaging of the zeroth and positive first orders. The simulation results show that by using this technique we can measure the aberrations present in the lithographic tool with higher accuracy. (c) 2006 Optical Society of America.
Abstract:
The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.
The first is the study of the entropy region: the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
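The group-characterizable construction is easy to demonstrate in miniature: for a finite group G and subgroups G1,...,G4, the vector h(S) = log2(|G| / |∩_{i∈S} G_i|) is always a valid entropy vector. The sketch below builds one from the Klein four-group and evaluates the Ingleton expression. The particular group and subgroups are illustrative choices, not the thesis's examples; abelian groups like this one always satisfy Ingleton, so an actual violation requires larger non-abelian groups.

```python
import math

# Klein four-group Z2 x Z2 under componentwise addition mod 2.
G = {(a, b) for a in (0, 1) for b in (0, 1)}
subgroups = {
    1: {(0, 0), (1, 0)},
    2: {(0, 0), (0, 1)},
    3: {(0, 0), (1, 1)},
    4: G,
}

def h(S):
    """Entropy of subset S of {1,2,3,4} in the group characterization:
    h(S) = log2(|G| / |intersection of the G_i for i in S|)."""
    inter = set(G)
    for i in S:
        inter &= subgroups[i]
    return math.log2(len(G) / len(inter))

def ingleton_slack():
    """Ingleton inequality in entropy form:
    h(1)+h(2)+h(34)+h(123)+h(124) <= h(12)+h(13)+h(14)+h(23)+h(24).
    Returns right side minus left side (negative would mean violation)."""
    lhs = h({1}) + h({2}) + h({3, 4}) + h({1, 2, 3}) + h({1, 2, 4})
    rhs = h({1, 2}) + h({1, 3}) + h({1, 4}) + h({2, 3}) + h({2, 4})
    return rhs - lhs

print(ingleton_slack())  # non-negative for this abelian example
```

Searching over subgroup quadruples of candidate non-abelian groups with exactly this slack function is one way to hunt for Ingleton-violating entropy vectors.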
The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
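A toy instance of the row-selection idea uses the Fourier matrix of the cyclic group Z_7, a "harmonic frame": choosing the rows indexed by a difference set yields provably low coherence. The row choice {1, 2, 4} (a quadratic-residue difference set mod 7) is a classical textbook example, assumed here for illustration rather than taken from the thesis.

```python
import cmath
import math
from itertools import combinations

n, rows = 7, [1, 2, 4]   # 7 unit-norm frame vectors in C^3
k = len(rows)

# Frame vectors are the columns of the selected rows of the n x n DFT
# matrix, normalized to unit length.
frame = [[cmath.exp(2j * cmath.pi * r * c / n) / math.sqrt(k) for r in rows]
         for c in range(n)]

def coherence(vectors):
    """Largest magnitude of an inner product between distinct unit vectors."""
    return max(abs(sum(a * b.conjugate() for a, b in zip(u, v)))
               for u, v in combinations(vectors, 2))

mu = coherence(frame)
welch = math.sqrt((n - k) / (k * (n - 1)))   # Welch lower bound on coherence
print(mu, welch)  # all pairwise inner products share one magnitude here
```

For this difference-set choice the frame is equiangular and meets the Welch bound, which is the kind of coherence guarantee that makes such frames attractive for sparse recovery with linear measurements.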
The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
Abstract:
The effect on the scattering amplitude of the existence of a pole in the angular momentum plane near J = 1 in the channel with the quantum numbers of the vacuum is calculated. This is then compared with a fourth order calculation of the scattering of neutral vector mesons from a fermion pair field in the limit of large momentum transfer. The presence of the third double spectral function in the perturbation amplitude complicates the identification of pole trajectory parameters, and the limitations of previous methods of treating this are discussed. A gauge invariant scheme for extracting the contribution of the vacuum trajectory is presented which gives agreement with unitarity predictions, but further calculations must be done to determine the position and slope of the trajectory at s = 0. The residual portion of the amplitude is compared with the Gribov singularity.
Abstract:
The propagation of waves in an extended, irregular medium is studied under the "quasi-optics" and the "Markov random process" approximations. Under these assumptions, a Fokker-Planck equation satisfied by the characteristic functional of the random wave field is derived. A complete set of the moment equations with different transverse coordinates and different wavenumbers is then obtained from the characteristic functional. The derivation does not require Gaussian statistics of the random medium and the result can be applied to the time-dependent problem. We then solve the moment equations for the phase correlation function, angular broadening, temporal pulse smearing, intensity correlation function, and the probability distribution of the random waves. The necessary and sufficient conditions for strong scintillation are also given.
We also consider the problem of diffraction of waves by a random, phase-changing screen. The intensity correlation function is solved in the whole Fresnel diffraction region and the temporal pulse broadening function is derived rigorously from the wave equation.
The method of smooth perturbations is applied to interplanetary scintillations. We formulate and calculate the effects of the solar-wind velocity fluctuations on the observed intensity power spectrum and on the ratio of the observed "pattern" velocity and the true velocity of the solar wind in the three-dimensional spherical model. The r.m.s. solar-wind velocity fluctuations are found to be ~200 km/sec in the region about 20 solar radii from the Sun.
We then interpret the observed interstellar scintillation data using the theories derived under the Markov approximation, which are also valid for strong scintillation. We find that the Kolmogorov power-law spectrum with an outer scale of 10 to 100 pc fits the scintillation data and that the ambient averaged electron density in the interstellar medium is about 0.025 cm⁻³. It is also found that there exists a region of strong electron density fluctuation with thickness ~10 pc and mean electron density ~7 cm⁻³ between the pulsar PSR 0833-45 and the Earth.
Abstract:
The resolution of the so-called thermodynamic paradox is presented in this paper. It is shown, in direct contradiction to the results of several previously published papers, that the cutoff modes (evanescent modes having complex propagation constants) can carry power in a waveguide containing ferrite. The errors in all previous “proofs” which purport to show that the cutoff modes cannot carry power are uncovered. The boundary value problem underlying the paradox is studied in detail; it is shown that, although the solution is somewhat complicated, there is nothing paradoxical about it.
The general problem of electromagnetic wave propagation through rectangular guides filled inhomogeneously in cross-section with transversely magnetized ferrite is also studied. Application of the standard waveguide techniques reduces the TM part to the well-known self-adjoint Sturm-Liouville eigenvalue equation. The TE part, however, leads in general to a non-self-adjoint eigenvalue equation. This equation and the associated expansion problem are studied in detail. Expansion coefficients and actual fields are determined for a particular problem.
Abstract:
Part I:
The perturbation technique developed by Rannie and Marble is used to study the effect of droplet solidification upon two-phase flow in a rocket nozzle. It is shown that under certain conditions an equilibrium flow exists, where the gas and particle phases have the same velocity and temperature at each section of the nozzle. The flow is divided into three regions: the first region, where the particles are all in the form of liquid droplets; a second region, over which the droplets solidify at constant freezing temperature; and a third region, where the particles are all solid. By a perturbation about the equilibrium flow, a solution is obtained for small particle slip velocities using the Stokes drag law and the corresponding approximation for heat transfer between the particle and gas phases. A singular perturbation procedure is required to handle the problem at the points where solidification first starts and where it is complete. The effects of solidification on the flow are found to be noticeable.
Part II:
When a liquid surface, in contact with only its pure vapor, is not in thermodynamic equilibrium with it, a net condensation or evaporation of fluid occurs. This phenomenon is studied from a kinetic theory viewpoint by means of the moment method developed by Lees. The evaporation-condensation rate is calculated for a spherical droplet and for a liquid sheet, when the temperatures and pressures are not too far removed from their equilibrium values. The solutions are valid over the whole range of Knudsen numbers, from the free molecule to the continuum limit. In the continuum limit, the mass flux rate is proportional to the pressure difference alone.
Abstract:
Time, risk, and attention are all integral to economic decision making. The aim of this work is to understand those key components of decision making using a variety of approaches: providing axiomatic characterizations to investigate time discounting, generating measures of visual attention to infer consumers' intentions, and examining data from unique field settings.
Chapter 2, co-authored with Federico Echenique and Kota Saito, presents the first revealed-preference characterizations of the exponentially discounted utility model and its generalizations. My characterizations provide non-parametric revealed-preference tests. I apply the tests to data from a recent experiment, and find that the axiomatization delivers new insights on a dataset that had been analyzed by traditional parametric methods.
Chapter 3, co-authored with Min Jeong Kang and Colin Camerer, investigates whether "pre-choice" measures of visual attention improve prediction of consumers' purchase intentions. We measure participants' visual attention using eyetracking or mousetracking while they make hypothetical as well as real purchase decisions. I find that different patterns of visual attention are associated with hypothetical and real decisions. I then demonstrate that including information on visual attention improves prediction of purchase decisions when attention is measured with mousetracking.
Chapter 4 investigates individuals' attitudes towards risk in a high-stakes environment using data from a TV game show, Jeopardy!. I first quantify players' subjective beliefs about answering questions correctly. Using those beliefs in estimation, I find that the representative player is risk averse. I then find that trailing players tend to wager more than "folk" strategies that are known among the community of contestants and fans, and this tendency is related to their confidence. I also find gender differences: male players take more risk than female players, and even more so when they are competing against two other male players.
Chapter 5, co-authored with Colin Camerer, investigates the dynamics of the favorite-longshot bias (FLB) using data on horse race betting from an online exchange that allows bettors to trade "in-play." I find that probabilistic forecasts implied by market prices before the start of the races are well-calibrated, but the degree of FLB increases significantly as the events approach their end.
Abstract:
The Maxwell integral equations of transfer are applied to a series of problems involving flows of arbitrary density gases about spheres. As suggested by Lees, a two-sided Maxwellian-like weighting function containing a number of free parameters is utilized, and a sufficient number of partial differential moment equations is used to determine these parameters. Maxwell's inverse fifth-power force law is used to simplify the evaluation of the collision integrals appearing in the moment equations. All flow quantities are then determined by integration of the weighting function which results from the solution of the differential moment system. Three problems are treated: the heat flux from a slightly heated sphere at rest in an infinite gas; the velocity field and drag of a slowly moving sphere in an unbounded space; and the velocity field and drag torque on a slowly rotating sphere. Solutions to the third problem are found to both first and second order in surface Mach number, with the secondary centrifugal fan motion being of particular interest. Singular aspects of the moment method are encountered in the last two problems, and an asymptotic study of these difficulties leads to a formal criterion for a "well posed" moment system. The previously unanswered question of just how many moments must be used in a specific problem is now clarified to a great extent.