935 results for Classical Theories of Gravity
Abstract:
A microscopic theory of equilibrium solvation and solvation dynamics of a classical, polar, solute molecule in a dipolar solvent is presented. Density functional theory is used to explicitly calculate the polarization structure around a solvated ion. The calculated solvent polarization structure differs from the continuum model prediction in several respects. The value of the polarization at the surface of the ion is less than the continuum value. The solvent polarization also exhibits small oscillations in space near the ion. We show that, under certain approximations, our linear equilibrium theory reduces to the nonlocal electrostatic theory, with the dielectric function ε(k) of the liquid now wave vector (k) dependent. It is further shown that the nonlocal electrostatic estimate of the solvation energy, with a microscopic ε(k), is close to the estimate of linearized equilibrium theories of polar liquids. The study of solvation dynamics is based on a generalized Smoluchowski equation with a mean-field force term to take into account the effects of intermolecular interactions. This study incorporates the local distortion of the solvent structure near the ion and also the effects of the translational modes of the solvent molecules. The latter contribution, if significant, can considerably accelerate the relaxation of solvent polarization and can even give rise to a long-time decay that agrees with the continuum model prediction. The significance of these results is discussed.
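The nonlocal-electrostatic estimate mentioned above can be sketched numerically. The sketch below is illustrative only: it assumes a simple single-pole (Lorentzian) model for 1 − 1/ε(k), which is not the microscopic ε(k) of the paper, and checks that the wave-vector-independent limit recovers the familiar Born solvation energy.

```python
import numpy as np

def born_energy(q, R, eps_s):
    """Continuum (local) Born solvation energy of a charge q in a cavity of
    radius R in a solvent with static dielectric constant eps_s (Gaussian units)."""
    return -q**2 * (1.0 - 1.0/eps_s) / (2.0 * R)

def nonlocal_energy(q, R, eps_s, Lam, kmax=500.0, n=200001):
    """Nonlocal-electrostatics estimate for a surface charge on a sphere:
        E = -(q^2/pi) * Int_0^kmax dk [sin(kR)/(kR)]^2 [1 - 1/eps(k)],
    with the assumed single-pole model
        1 - 1/eps(k) = (1 - 1/eps_s) / (1 + (k*Lam)^2).
    Lam is a solvent correlation length; Lam -> 0 recovers the Born result."""
    k = np.linspace(1e-9, kmax, n)
    dk = k[1] - k[0]
    sinc2 = (np.sin(k * R) / (k * R))**2
    integrand = sinc2 * (1.0 - 1.0/eps_s) / (1.0 + (k * Lam)**2)
    return -q**2 / np.pi * np.sum(integrand) * dk
```

Consistent with the abstract's observation, a finite correlation length reduces the magnitude of the solvation energy below the continuum (Born) value.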
Abstract:
The cylindrical Couette device is commonly employed to study the rheology of fluids, but seldom used for dense granular materials. Plasticity theories used for granular flows predict a stress field that is independent of the shear rate, but otherwise similar to that in fluids. In this paper we report detailed measurements of the stress as a function of depth, and show that the stress profile differs fundamentally from that of fluids, from the predictions of plasticity theories, and from intuitive expectation. In the static state, a part of the weight of the material is transferred to the walls by a downward vertical shear stress, bringing about the well-known Janssen saturation of the stress in vertical columns. When the material is sheared, the vertical shear stress changes sign, and the magnitudes of all components of the stress rise rapidly with depth. These qualitative features are preserved over a range of the Couette gap and shear rate, for smooth and rough walls and two model granular materials. To explain the anomalous rheological response, we consider some hypotheses that seem plausible a priori, but show that none survive after careful analysis of the experimental observations. We argue that the anomalous stress is due to an anisotropic fabric caused by the combined actions of gravity, shear, and frictional walls, for which we present indirect evidence from our experiments. A general theoretical framework for anisotropic plasticity is then presented. The detailed mechanics of how an anisotropic fabric is brought about by the above-mentioned factors is not clear, and promises to be a challenging problem for future investigations. (C) 2013 AIP Publishing LLC.
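The "Janssen saturation" referred to above has a compact closed form. A minimal sketch follows; the parameter names and values are illustrative, not taken from the paper.

```python
import math

def janssen_stress(z, rho, g, D, mu, K):
    """Janssen (1895) mean vertical normal stress at depth z in a column of
    diameter D filled with grains of bulk density rho:
        sigma_v(z) = sigma_inf * (1 - exp(-z / lam)),
    with saturation stress sigma_inf = rho*g*D/(4*mu*K) and decay length
    lam = D/(4*mu*K). Here mu is the wall friction coefficient and K the
    assumed ratio of horizontal to vertical normal stress. The stress
    saturates, rather than growing hydrostatically, because wall friction
    carries the extra weight."""
    sigma_inf = rho * g * D / (4.0 * mu * K)
    lam = D / (4.0 * mu * K)
    return sigma_inf * (1.0 - math.exp(-z / lam))
```

Near the free surface the profile is hydrostatic (sigma ≈ rho*g*z); a few decay lengths down it is flat at sigma_inf, which is the static baseline against which the paper's sheared-state measurements are anomalous.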
Abstract:
In arXiv:1310.5713 [1] and arXiv:1310.6659 [2] a formula was proposed as the entanglement entropy functional for a general higher-derivative theory of gravity whose Lagrangian consists of terms containing contractions of the Riemann tensor. In this paper, we carry out some tests of this proposal. First, we find the surface equation of motion for general four-derivative gravity theory by minimizing the holographic entanglement entropy functional resulting from this proposed formula. Then we calculate the surface equation for the same theory using the generalized gravitational entropy method of arXiv:1304.4926 [3]. We find that the two do not match in their entirety. We also construct the holographic entropy functional for quasi-topological gravity, which is a six-derivative gravity theory. We find that this functional gives the correct universal terms. However, as in the R^2 case, the generalized gravitational entropy method applied to this theory does not give exactly the surface equation of motion coming from minimizing the entropy functional.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There are now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage, and based on the response updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalent Class Edge Cutting (EC2) criteria to select tests. We prove that the EC2 criteria is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criteria recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
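The adaptive loop described above can be sketched generically. The sketch below greedily picks, at each round, the binary-choice test with the lowest expected posterior entropy, simulates a noisy response, and performs a Bayesian update. It deliberately uses plain expected information gain as a stand-in for the EC2 edge-cutting criterion, and every number in it (four theories, twenty tests, the response-probability matrix) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 candidate theories, 20 binary choice tests.
# P[h, t] = probability that theory h predicts choosing option A on test t.
n_theories, n_tests = 4, 20
P = rng.uniform(0.05, 0.95, size=(n_theories, n_tests))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(prior, t):
    """Expected entropy of the posterior after observing test t."""
    pA = prior @ P[:, t]                        # marginal prob. of answer A
    post_A = prior * P[:, t] / pA               # posterior if A is observed
    post_B = prior * (1 - P[:, t]) / (1 - pA)   # posterior if B is observed
    return pA * entropy(post_A) + (1 - pA) * entropy(post_B)

def run_adaptive(true_h, n_rounds=10):
    prior = np.full(n_theories, 1.0 / n_theories)
    for _ in range(n_rounds):
        # Greedy step: the most informative test minimizes expected entropy.
        t = min(range(n_tests), key=lambda j: expected_posterior_entropy(prior, j))
        ans_A = rng.random() < P[true_h, t]     # simulate a noisy response
        like = P[:, t] if ans_A else 1 - P[:, t]
        prior = prior * like
        prior /= prior.sum()
    return prior

posterior = run_adaptive(true_h=2)
```

In BROAD itself the greedy objective is EC2's adaptive-submodular edge-cutting score, which is what carries the noise-robustness guarantees the thesis proves; information gain appears here only because it is the simpler baseline the thesis compares against.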
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
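The discount functions being compared have simple closed forms. The sketch below (all parameter values illustrative) also demonstrates the preference reversal that hyperbolic discounting produces and exponential discounting cannot:

```python
def d_exp(t, delta=0.9):
    """Exponential discounting D(t) = delta**t: time-consistent."""
    return delta ** t

def d_hyp(t, k=1.0):
    """Hyperbolic discounting D(t) = 1/(1 + k*t): declining impatience."""
    return 1.0 / (1.0 + k * t)

def d_qh(t, beta=0.7, delta=0.95):
    """Quasi-hyperbolic ("present bias") discounting: D(0)=1, D(t)=beta*delta**t."""
    return 1.0 if t == 0 else beta * delta ** t

def prefers_later(D, small, t_small, large, t_large):
    """True if the discounted value of the later, larger payoff exceeds
    that of the sooner, smaller one under discount function D."""
    return large * D(t_large) > small * D(t_small)

# Hyperbolic: 100 now beats 150 one period later, but the choice flips when
# both payoffs are pushed 10 periods into the future -- a preference reversal.
now_choice = prefers_later(d_hyp, 100, 0, 150, 1)      # prefers the sooner payoff
later_choice = prefers_later(d_hyp, 100, 10, 150, 11)  # prefers the later payoff
```

Under exponential discounting the same delay shift never reverses the ranking, which is exactly the time-consistency property the experiment probes.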
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
A classical argument of de Finetti holds that Rationality implies Subjective Expected Utility (SEU). In contrast, the Knightian distinction between Risk and Ambiguity suggests that a rational decision maker would obey the SEU paradigm when the information available is in some sense good, and would depart from it when the information available is not good. Unlike de Finetti's, however, this view does not rely on a formal argument. In this paper, we study the set of all information structures that might be available to a decision maker, and show that they are of two types: those compatible with SEU theory and those for which SEU theory must fail. We also show that the former correspond to "good" information, while the latter correspond to information that is not good. Thus, our results provide a formalization of the distinction between Risk and Ambiguity. As a consequence of our main theorem (Theorem 2, Section 8), behavior not conforming to SEU theory is bound to emerge in the presence of Ambiguity. We give two examples of situations of Ambiguity. One concerns uncertainty about the class of measure-zero events; the other is a variation on Ellsberg's three-color urn experiment. We also briefly link our results to two other strands of literature: the study of ambiguous events and the problem of unforeseen contingencies. We conclude the paper by re-considering de Finetti's argument in light of our findings.
Abstract:
In 1931 Dirac studied the motion of an electron in the field of a magnetic monopole and found that the quantization of electric charge can be explained by postulating the mere existence of a magnetic monopole. Since 1974 there has been a resurgence of interest in magnetic monopoles due to the work of 't Hooft and Polyakov, who independently observed that monopoles can exist as finite-energy, topologically stable solutions to certain spontaneously broken gauge theories. The thesis, "Studies on Magnetic Monopole Solutions of Non-abelian Gauge Theories and Related Problems", reports a systematic investigation of classical solutions of non-abelian gauge theories, with special emphasis on magnetic monopoles and dyons, which possess both electric and magnetic charges. The formation of bound states of a dyon with fermions and bosons is also studied in detail. The thesis opens with an account of a new derivation of a relationship between the magnetic charge of a dyon and the topology of the gauge fields associated with it. Although this formula has been reported earlier in the literature, the present method has two distinct advantages. In the first place, it does not depend either on the mechanism of symmetry breaking or on the nature of the residual symmetry group. Secondly, the results can be generalized to finite-temperature monopoles.
Abstract:
We discuss modified gravity which includes negative and positive powers of curvature and provides gravitational dark energy. It is shown that in GR plus a term containing a negative power of curvature, cosmic speed-up may be achieved, while the effective phantom phase (with w less than -1) follows when such a term contains a fractional positive power of curvature. Minimal coupling with matter makes the situation more interesting: even 1/R theory coupled with the usual ideal fluid may describe the (effective phantom) dark energy. The account of the R^2 term (consistent modified gravity) may help to escape cosmic doomsday.
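Lagrangians of the kind discussed belong to the f(R) family. A schematic action with one negative and one positive power of curvature (the coefficients, exponents, and normalization here are illustrative, not the paper's exact model) is

```latex
S \;=\; \frac{1}{2\kappa^{2}} \int d^{4}x \, \sqrt{-g}
  \left( R \;-\; \frac{\mu^{2(n+1)}}{R^{\,n}} \;+\; \alpha R^{m} \right)
  \;+\; S_{\mathrm{matter}} ,
```

where the negative power of R dominates at low curvature and can drive late-time acceleration, while a positive power such as the R^2 term modifies the high-curvature behaviour, which is the mechanism the abstract invokes for escaping cosmic doomsday.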
Abstract:
We calculate the gravitational deflection of massive particles moving with relativistic velocity in the solar system to second post-Newtonian order. For a particle passing close to the Sun with impact parameter b, the deflection in classical general relativity is Phi(C) [formula not reproduced in this record], where v(0) is the particle speed at infinity and M is the Sun's mass. We then compute the gravitational deflection of a spinless neutral particle of mass m in the same static gravitational field, now treated as an external field. For a scalar boson with energy E, the deflection in semiclassical general relativity (SGR) is Phi(sc) [formula not reproduced in this record]. This result shows that the propagation of the spinless massive boson inevitably produces dispersive effects. It also shows that the semiclassical prediction is always greater than the geometrical one, no matter what the boson mass is. In addition, it is found that SGR predicts a deflection angle of approximately 2.6 arcsec for a nonrelativistic spinless massive boson passing at the Sun's limb.
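For orientation, the well-known leading-order (first post-Newtonian) deflection of a massive test particle, which the full second-order calculation of the abstract extends, reads

```latex
\Phi_{C} \;\simeq\; \frac{2GM}{b\,v_{0}^{2}}
  \left( 1 + \frac{v_{0}^{2}}{c^{2}} \right) ,
```

which reduces to the classic light-bending value 4GM/(b c^2) as v_0 → c. The second-post-Newtonian corrections and the semiclassical formula were displayed as images in the source record and are not reproduced here.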
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The asymptotic safety scenario makes it possible to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which allows one to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as the primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is shown to be a reformulation of the functional integral approach to quantum field theory.

As its main result, this thesis develops an algebraic algorithm that allows the renormalization group flow of gauge theories, as well as gravity, to be constructed systematically in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product, several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained.

The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking into account the effect of a running ghost field renormalization on the gravitational coupling constants. A detailed numerical analysis reveals a further stabilization of the non-Gaussian fixed point found.

Finally, the proposed algorithm is applied to the case of higher-derivative gravity including all curvature-squared interactions. This improves on existing computations by taking the independent running of the Euler topological term into account. Known perturbative results are reproduced in this case from the renormalization group equation, which, however, also identifies a unique non-Gaussian fixed point.
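In the Einstein-Hilbert truncation referred to above, the flow is conventionally written for dimensionless couplings; the standard form (notation assumed here, not quoted from the thesis) is

```latex
k\,\partial_{k}\, g_{k} \;=\; \beta_{g}(g_{k},\lambda_{k})
  \;=\; \bigl(2 + \eta_{N}(g_{k},\lambda_{k})\bigr)\, g_{k} ,
\qquad
g_{k} = G_{k}\, k^{2}, \quad \lambda_{k} = \Lambda_{k}\, k^{-2},
```

so a non-Gaussian fixed point g_* ≠ 0 requires the anomalous dimension of the Newton coupling to satisfy η_N(g_*, λ_*) = −2, which is the condition whose stability the thesis examines numerically.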
Abstract:
We partially solve a long-standing problem in the proof theory of explicit mathematics, and in proof theory in general. Namely, we give a lower bound for Feferman's system T0 of explicit mathematics (but only when formulated over classical logic) via a concrete interpretation of the subsystem Σ¹₂-AC + (BI) of second-order arithmetic inside T0. Whereas a lower-bound proof in the sense of proof-theoretic reducibility or of ordinal analysis was already given in the 1980s, the lower bound in the sense of interpretability that we give here is new. We apply the new interpretation method developed by the author and Zumbrunnen (2015), which can be seen as a third kind of model-construction method for classical theories, after Cohen's forcing and Krivine's classical realizability: it yields an interpretation between classical theories by composing interpretations between intuitionistic theories.
Abstract:
We show that global properties of gauge groups can be understood as geometric properties in M-theory. Different wrappings of a system of N M5-branes on a torus reduce to four-dimensional theories with A_{N−1} gauge algebra and different unitary groups. The classical properties of the wrappings determine the global properties of the gauge theories without the need to impose any quantum conditions. We count the inequivalent wrappings as they fall into orbits of the modular group of the torus, which correspond to the S-duality orbits of the gauge theories.
Abstract:
Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. as a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set, providing global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface in order to guarantee continuity of the sea surface in both space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding the associated directional spectra and point-in-time wave statistics that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second-order nonlinearities, and the observed shapes of large waves are well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with predictions based on stochastic theories for global maxima of Gaussian fields.