953 results for Nonperturbative field theory
Abstract:
The statistical-mechanics theory of the passive scalar field convected by turbulence, developed in an earlier paper [Phys. Fluids 28, 1299 (1985)], is extended to the case of a small molecular Prandtl number. The set of governing integral equations is solved by the equation-error method. The resultant scalar-variance spectrum for the inertial range is F(k) ~ x^{-5/3}/[1 + 1.21x^{1.67}(1 + 0.353x^{2.32})], where x is the wavenumber scaled by Corrsin's dissipation wavenumber. This result reduces to the -5/3 law in the inertial-convective range. It also approximately reduces to the -17/3 law in the inertial-diffusive range, but the proportionality constant differs from Batchelor's by a factor of 3.6.
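The limiting slopes quoted in the abstract can be checked numerically. The sketch below (plain Python, not from the paper) evaluates the dimensionless spectrum and its logarithmic slope; the -5/3 behavior emerges at small x and the approximately -17/3 behavior at large x, since 5/3 + 1.67 + 2.32 = 5.657 ≈ 17/3.

```python
import math

def scalar_variance_spectrum(x):
    """Dimensionless scalar-variance spectrum quoted in the abstract:
    F(x) ~ x**(-5/3) / (1 + 1.21*x**1.67 * (1 + 0.353*x**2.32)),
    with x the wavenumber scaled by Corrsin's dissipation wavenumber."""
    return x**(-5/3) / (1 + 1.21 * x**1.67 * (1 + 0.353 * x**2.32))

def local_slope(f, x, h=1e-4):
    """Logarithmic slope d(log f)/d(log x), estimated by central differences."""
    return (math.log(f(x * (1 + h))) - math.log(f(x * (1 - h)))) / \
           (math.log(1 + h) - math.log(1 - h))
```

Evaluating `local_slope(scalar_variance_spectrum, x)` deep in each range recovers the two power laws, which is why the -17/3 law is only approximate here.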
Abstract:
The variational approach to the closure problem of turbulence theory, proposed in an earlier article [Phys. Fluids 26, 2098 (1983); 27, 2229 (1984)], is extended to evaluate the flatness factor, which indicates the degree of intermittency of turbulence. Since the flatness factor is related to the fourth moment of a turbulent velocity field, the corresponding higher-order terms in the perturbation solution of the Liouville equation have to be considered. Most closure methods discard these higher-order terms and fail to explain the intermittency phenomenon. The computed flatness factor of the idealized model of infinite isotropic turbulence ranges from 9 to 15 and has the same order of magnitude as the experimental data of real turbulent flows. The intermittency phenomenon does not necessarily negate the Kolmogorov k^{-5/3} inertial range spectrum. The Kolmogorov k^{-5/3} law and the high degree of intermittency can coexist as two consistent consequences of the closure theory of turbulence. The Kolmogorov 1941 theory [J. Fluid Mech. 62, 305 (1974)] cannot be disqualified merely because the energy dissipation rate fluctuates.
Abstract:
Classical statistical mechanics is applied to the study of a passive scalar field convected by isotropic turbulence. A complete set of independent real parameters and dynamic equations is worked out to describe the dynamic state of the passive scalar field. The corresponding Liouville equation is solved by a perturbation method based upon a Langevin–Fokker–Planck model. The closure problem is treated by a variational approach reported in earlier papers. Two integral equations are obtained for two unknown functions: the scalar-variance spectrum F(k) and the effective damping coefficient η(k). The appearance of the energy spectrum of the velocity field in the two integral equations represents the coupling of the scalar field with the velocity field. As an application of the theory, the two integral equations are solved to derive the inertial-convective-range spectrum, obtaining F(k) = 0.61 χ ε^{-1/3} k^{-5/3}, where χ is the dissipation rate of the scalar variance and ε is the dissipation rate of the energy of the velocity field. This theoretical value of the scalar Kolmogorov constant, 0.61, is in good agreement with experiments.
Abstract:
As the resolution of observations has improved, more and more radio galaxies with radio jets have been identified, and many fine structures in the radio jets have been resolved. In the present paper, the two-dimensional magnetohydrodynamical theory is applied to the analysis of the magnetic field configurations in the radio jets. The two-dimensional results are not only theoretically consistent but also explain the fine structures seen in observations. One of the theoretical models is discussed in detail and is in good agreement with the observed radio jets of NGC 6251. The results of the present paper also show that the magnetic fields in the radio jets are mainly longitudinal and associated with the double sources of QSOs if the magnetic field of the central object is stronger; the fields in the radio jets are mainly transverse and associated with the double sources of radio galaxies if the field of the central object is weaker. The magnetic field has a great influence on the morphology and dynamic processes.
Abstract:
A new method is proposed to solve the closure problem of turbulence theory and to derive the Kolmogorov law in an Eulerian framework. Instead of using complex Fourier components of the velocity field as modal parameters, a complete set of independent real parameters and dynamic equations is worked out to describe the dynamic states of turbulence. Classical statistical mechanics is used to study the statistical behavior of the turbulence. An approximate stationary solution of the Liouville equation is obtained by a perturbation method based on a Langevin-Fokker-Planck (LFP) model. The dynamic damping coefficient η of the LFP model is treated as an optimum control parameter to minimize the error of the perturbation solution; this leads to a convergent integral equation for η to replace the divergent response equation of Kraichnan's direct-interaction (DI) approximation, thereby solving the closure problem without appealing to a Lagrangian formulation. The Kolmogorov constant Ko is evaluated numerically, obtaining Ko = 1.2, which is compatible with the experimental data given by Gibson and Schwartz (1963).
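The reported constant plugs directly into the Kolmogorov inertial-range form. Below is a minimal sketch (not from the paper) of the spectrum E(k) = Ko ε^{2/3} k^{-5/3} with the quoted Ko = 1.2, together with the scalings it implies.

```python
def kolmogorov_spectrum(k, epsilon, Ko=1.2):
    """Inertial-range energy spectrum E(k) = Ko * epsilon**(2/3) * k**(-5/3).
    Ko = 1.2 is the value reported in the abstract; epsilon is the
    energy dissipation rate and k the wavenumber."""
    return Ko * epsilon**(2/3) * k**(-5/3)
```

Doubling the wavenumber scales the spectrum by 2^{-5/3}, and multiplying the dissipation rate by 8 scales it by 8^{2/3} = 4, which is a quick consistency check on the exponents.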
Abstract:
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. By using the Correspondence Principle in the theory of rheologic mechanics, we derived the analytic expressions of the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains epsilon_xx(r, t), epsilon_yy(r, t) and epsilon_zz(r, t), and the bulk strain theta(r, t) at an arbitrary point (x, y, z) in the three directions of the X, Y and Z axes, produced by a three-dimensional inclusion in a semi-infinite rheologic medium defined by the standard linear rheologic model. After computing the spatial-temporal variation of the bulk strain produced on the ground by such a spherical rheologic inclusion, interesting results are obtained, suggesting that the bulk strain produced by a hard inclusion changes with time in three stages (alpha, beta, gamma) with different characteristics, similar to geodetic deformation observations but different from the results of a soft inclusion. These theoretical results can be used to explain the characteristics of spatial-temporal evolution, patterns, and quadrant distribution of earthquake precursors, and the changeability, spontaneity and complexity of short-term and imminent-term precursors. This offers a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
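The time dependence that the standard linear rheologic model imposes can be illustrated with its creep response under constant stress. The sketch below (an illustration of the constitutive model only, not the authors' full 3-D inclusion solution; all parameter values are hypothetical) shows the instantaneous elastic response followed by delayed viscoelastic strain.

```python
import math

def sls_creep_strain(t, sigma0=1.0, J0=1.0, Jinf=2.0, tau=1.0):
    """Creep strain of a standard linear (Zener) solid under constant stress:
        eps(t) = sigma0 * (J0 + (Jinf - J0) * (1 - exp(-t / tau)))
    J0 is the instantaneous compliance, Jinf the relaxed compliance as
    t -> infinity, and tau the retardation time. Values are illustrative."""
    return sigma0 * (J0 + (Jinf - J0) * (1 - math.exp(-t / tau)))
```

The strain rises monotonically from sigma0*J0 toward sigma0*Jinf, which is the qualitative staged behavior that makes the model useful for precursory deformation.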
Abstract:
In Part I the kinetic theory of excitations in flowing liquid He II is developed to a higher order than that carried out previously, by Landau and Khalatnikov, in order to demonstrate the existence of non-equilibrium terms of a new nature in the hydrodynamic equations. It is then shown that these terms can lead to spontaneous destabilization in countercurrents when the relative velocity of the normal and superfluid components exceeds a critical value that depends on the temperature, but not on geometry. There are no adjustable parameters in the theory. The critical velocities are estimated to be in the 14-20 m/sec range for T ≤ 2.0°K, but tend to zero as T → T_λ. The possibility that these critical velocities may be related to the experimentally observed "intrinsic" critical velocities is discussed.
Part II consists of a semi-classical investigation of roton–quantized-vortex-line interactions. An essentially classical model is used for the collision, and the behavior of the roton in the vortex field is investigated in detail. From this model it is possible to derive the HVBK mutual friction terms that appear in the phenomenological equations of motion for rotating liquid He II. Estimates of the Hall and Vinen B and B' coefficients are in good agreement with experiments. The claim is made that the theory does not contain any arbitrary adjustable parameters.
Abstract:
This thesis presents recent research into analytic topics in the classical theory of General Relativity. It is a thesis in two parts. The first part features investigations into the spectrum of perturbed, rotating black holes. These include the study of near horizon perturbations, leading to a new generic frequency mode for black hole ringdown; a treatment of high frequency waves using WKB methods for Kerr black holes; and the discovery of a bifurcation of the quasinormal mode spectrum of rapidly rotating black holes. These results represent new discoveries in the field of black hole perturbation theory, and rely on additional approximations to the linearized field equations around the background black hole. The second part of this thesis presents a recently developed method for the visualization of curved spacetimes, using field lines called the tendex and vortex lines of the spacetime. The works presented here both introduce these visualization techniques and explore them in simple situations. These include the visualization of asymptotic gravitational radiation; weak gravity situations with and without radiation; stationary black hole spacetimes; and some preliminary study into numerically simulated black hole mergers. The second part of the thesis culminates in the investigation of perturbed black holes using these field line methods, which have uncovered new insights into the dynamics of curved spacetime around black holes.
Abstract:
In recent years coastal resource management has begun to stand as its own discipline. Its multidisciplinary nature gives it access to theory situated in each of the diverse fields which it may encompass, yet management practices often revert to the primary field of the manager. There is a lack of a common set of "coastal" theory from which managers can draw. Seven resource-related issues with which coastal area managers must contend include: coastal habitat conservation, traditional maritime communities and economies, strong development and use pressures, adaptation to sea level rise and climate change, landscape sustainability and resilience, coastal hazards, and emerging energy technologies. The complexity and range of human and environmental interactions at the coast suggest a strong need for a common body of coastal management theory which managers would do well to understand generally. Planning theory, which itself is a synthesis of concepts from multiple fields, contains ideas generally valuable to coastal management. Planning theory can not only provide an example of how to develop a multi- or transdisciplinary set of theory, but may also provide an actual theoretical foundation for a coastal theory. In particular we discuss five concepts in the planning theory discourse and present their utility for coastal resource managers. These include "wicked" problems, ecological planning, the epistemology of knowledge communities, the role of the planner/manager, and collaborative planning. While these theories are known and familiar to some professionals working at the coast, we argue that there is a need for broader understanding amongst the various specialists working in the increasingly identifiable field of coastal resource management.
Abstract:
This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region in a 3-D space over an interval of time. After the event is initiated it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach for geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at a low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters from both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous scalable algorithms for data aggregation for detection. These studies provide insights to the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
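The Bayesian framework for detection with many cheap, noisy sensors can be sketched in a few lines. The example below (a generic illustration, not the thesis's actual algorithms; all rates are hypothetical) fuses independent binary sensor readings into a posterior probability that a geospatial event has occurred.

```python
def posterior_event_probability(readings, prior=0.01, p_detect=0.8, p_false=0.05):
    """Posterior P(event | readings) for independent binary sensors.
    readings: list of booleans, True if a sensor fired.
    p_detect: P(sensor fires | event);  p_false: P(sensor fires | no event).
    All parameter values here are illustrative assumptions."""
    like_event, like_null = prior, 1.0 - prior
    for fired in readings:
        like_event *= p_detect if fired else (1.0 - p_detect)
        like_null *= p_false if fired else (1.0 - p_false)
    return like_event / (like_event + like_null)
```

Even with a low prior, a handful of concurrent detections drives the posterior close to 1, while silence drives it toward 0; this is the basic trade-off between detection speed and false-alarm rate that the thesis quantifies.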
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
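The posterior-updating step at the core of such adaptive designs can be sketched simply. The example below (a minimal illustration of the Bayesian update with a noise parameter, not the actual BROAD/EC2 implementation; theory names and the noise value are hypothetical) updates beliefs over competing theories after one observed choice.

```python
def update_posterior(posterior, predictions, observed_choice, noise=0.1):
    """One Bayesian update over competing theories after a single choice test.
    posterior:   dict theory -> current probability.
    predictions: dict theory -> the choice that theory predicts for this test.
    A theory whose prediction matches the observed choice gets likelihood
    (1 - noise); otherwise it gets likelihood noise (modeling subject errors)."""
    unnormalized = {}
    for theory, p in posterior.items():
        likelihood = (1.0 - noise) if predictions[theory] == observed_choice else noise
        unnormalized[theory] = p * likelihood
    total = sum(unnormalized.values())
    return {t: v / total for t, v in unnormalized.items()}
```

An informative test is one whose predictions split the remaining probability mass; EC2 formalizes this by cutting edges between equivalence classes of hypotheses, and its adaptive submodularity is what licenses the greedy test-selection guarantees.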
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
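The behavioral signature separating exponential from hyperbolic discounting is preference reversal. The sketch below (illustrative discount functions with hypothetical parameter values, not the thesis's subjective-time model itself) shows that hyperbolic discounting can flip a preference as both payoffs recede into the future, while exponential discounting cannot.

```python
import math

def exp_discount(t, r=0.05):
    """Exponential discounting: D(t) = exp(-r * t). Time-consistent."""
    return math.exp(-r * t)

def hyper_discount(t, k=0.2):
    """Hyperbolic discounting: D(t) = 1 / (1 + k * t). Time-inconsistent."""
    return 1.0 / (1.0 + k * t)
```

With k = 0.2, a subject prefers 100 today over 110 tomorrow, yet prefers 110 in 31 days over 100 in 30 days; under exponential discounting the ratio of the two present values is the same at every horizon, so no such reversal occurs.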
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predicts that consumers will behave in a uniquely distinct way than the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
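The loss-averse demand prediction can be made concrete with a reference-dependent utility inside a binary logit choice model. The sketch below is a generic illustration of this model class, not the retailer study's estimated specification; the loss-aversion coefficient 2.25 and all prices are hypothetical.

```python
import math

def loss_averse_value(price, reference, lam=2.25):
    """Reference-dependent value of paying a price: paying above the
    reference price is coded as a loss and weighted by lam > 1."""
    gain = reference - price  # positive when the item is cheaper than the reference
    return gain if gain >= 0 else lam * gain

def logit_choice_prob(u_a, u_b):
    """Probability of choosing option A in a binary logit discrete-choice model."""
    return 1.0 / (1.0 + math.exp(u_b - u_a))
```

A one-unit price cut below the reference raises purchase probability by less than a one-unit premium above it lowers it; when a discount ends, the restored price is coded as a loss against the discounted reference, producing the excess substitution toward close substitutes described above.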
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N = 1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation. Only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N > 1 theories have been written using constrained superfields. No solutions of the nonlinear constraint equations were known.
In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N=1 Yang-Mills theory in four dimensions.
N=2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N=2 extended superspace. We construct a superfield calculus for six-dimensional superspace, and show that known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N=2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.
Abstract:
The problem of s-d exchange scattering of conduction electrons off localized magnetic moments in dilute magnetic alloys is considered, employing formal methods of quantum field theoretical scattering. It is shown that such a treatment not only allows, for the first time, the inclusion of multiparticle intermediate states in single-particle scattering equations, but also results in extremely simple and straightforward mathematical analysis. These equations are proved to be exact in the thermodynamic limit. A self-consistent integral equation for the electron self energy is derived and approximately solved. The ground state and physical parameters of dilute magnetic alloys are discussed in terms of the theoretical results. Within the approximation of single-particle intermediate states our results reduce to earlier versions. The following additional features are found as a consequence of the inclusion of multiparticle intermediate states:
(i) A non-analytic binding energy is present for both antiferromagnetic (J < 0) and ferromagnetic (J > 0) couplings of the electron-plus-impurity system.
(ii) The correct behavior of the energy difference between the conduction-electron-plus-impurity system and the free electron system is found, which is free of the unphysical singularities present in earlier versions of the theories.
(iii) The ground state of the conduction-electron-plus-impurity system is shown to be a many-body condensate state for both J < 0 and J > 0. However, a distinction is made between the usual terminology of "singlet" and "triplet" ground states and the nature of our ground state.
(iv) It is shown that a long-range ordering, leading to an ordering of the magnetic moments, can result from a contact interaction such as the s-d exchange interaction.
(v) The explicit temperature dependence of the excess specific heat of Kondo systems is obtained and found to be linear in temperature as T → 0 and to go as T ln T for 0.3 T_K ≤ T ≤ 0.6 T_K. A rise in (ΔC/T) for temperatures in the region 0 < T ≤ 0.1 T_K is predicted. These results are found to be in excellent agreement with experiments.
(vi) The existence of a critical temperature for ferromagnetic coupling (J > 0) is shown. On the basis of this, the apparent contradiction of the simultaneous existence of giant moments and the Kondo effect is resolved.
Abstract:
The surface resistance and the critical magnetic field of lead electroplated on copper were studied at 205 MHz in a half-wave coaxial resonator. The observed surface resistance at a low field level below 4.2°K could be well described by the BCS surface resistance with the addition of a temperature independent residual resistance. The available experimental data suggest that the major fraction of the residual resistance in the present experiment was due to the presence of an oxide layer on the surface. At higher magnetic field levels the surface resistance was found to be enhanced due to surface imperfections.
The attainable rf critical magnetic field between 2.2°K and T_c of lead was found to be limited not by the thermodynamic critical field but rather by the superheating field predicted by the one-dimensional Ginzburg-Landau theory. The observed rf critical field was very close to the expected superheating field, particularly in the higher reduced temperature range, but showed somewhat stronger temperature dependence than the expected superheating field in the lower reduced temperature range.
The rf critical magnetic field was also studied at 90 MHz for pure tin and indium, and for a series of SnIn and InBi alloys spanning both type I and type II superconductivity. The samples were spherical with typical diameters of 1-2 mm, and a helical resonator was used to generate the rf magnetic field in the measurement. The results for pure samples of tin and indium showed that a vortex-like nucleation of the normal phase was responsible for the superconducting-to-normal phase transition in the rf field at temperatures up to about 0.98-0.99 T_c, where the ideal superheating limit was being reached. The results for the alloy samples showed that the attainable rf critical fields near T_c were well described by the superheating field predicted by the one-dimensional GL theory in both the type I and type II regimes. The measurement was also made at 300 MHz, resulting in no significant change in the rf critical field. Thus it was inferred that the nucleation time of the normal phase, once the critical field was reached, was small compared with the rf period in this frequency range.
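The distinction between the thermodynamic critical field and the superheating field can be sketched numerically. The example below is an illustration, not the thesis's analysis: it uses the empirical two-fluid form Hc(T) = Hc(0)(1 - (T/T_c)^2) with approximate literature values for lead, and the small-kappa GL superheating estimate H_sh ≈ 0.89 Hc/√κ, whose coefficient and the value κ = 0.4 are assumptions here.

```python
import math

def critical_field(T, Tc=7.2, Hc0=803.0):
    """Two-fluid temperature dependence of the thermodynamic critical field:
    Hc(T) = Hc(0) * (1 - (T/Tc)**2). Defaults are approximate values for
    lead (Tc ~ 7.2 K, Hc(0) ~ 803 Oe); treat them as illustrative."""
    return Hc0 * (1.0 - (T / Tc)**2)

def superheating_field(T, kappa=0.4, Tc=7.2, Hc0=803.0):
    """Small-kappa one-dimensional GL estimate of the superheating field,
    H_sh ~ 0.89 * Hc(T) / sqrt(kappa). Coefficient and kappa are assumed."""
    return 0.89 * critical_field(T, Tc, Hc0) / math.sqrt(kappa)
```

For κ < 1 the superheating field exceeds Hc at every temperature below T_c, which is why the observed rf critical fields could be limited by superheating rather than by the thermodynamic critical field.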
Abstract:
The single ionization of an He atom by an intense linearly polarized laser field in the tunneling regime is studied by S-matrix theory. When only the first term of the expansion of the S matrix is considered, and the temporal and spatial distribution and fluctuation of the laser pulse are taken into account, the obtained momentum distribution in the polarization direction of the laser field is consistent with the semiclassical calculation, which only considers tunneling and the interaction between the free electron and the external field. When the second term, which includes the interaction between the core and the free electron, is considered, the momentum distribution shows a complex multipeak structure with a central minimum, and the positions of some peaks are independent of the intensity in some intensity regimes, which is consistent with recent experimental results. Based on our analysis, we found that the structures observed in the momentum distribution of an He atom are attributed to the "soft" collision of the tunneled electron with the core.