966 results for Conformal field theory


Relevance: 30.00%

Abstract:

Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. Using the correspondence principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t), and W(r, t), the normal strains ε_xx(r, t), ε_yy(r, t), and ε_zz(r, t), and the bulk strain θ(r, t) at an arbitrary point (x, y, z) along the X, Y, and Z axes, produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatio-temporal variation of the bulk strain produced at the ground surface by such a spherical rheological inclusion, we find that the bulk strain produced by a hard inclusion changes with time through three stages (alpha, beta, gamma) with distinct characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the spatio-temporal evolution, patterns, and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity, and complexity of short-term and imminent precursors. They offer a theoretical basis for building physical models of earthquake precursors and for earthquake prediction.
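The correspondence principle invoked above can be sketched as follows (a schematic statement for a step load applied at t = 0, assuming for simplicity a constant Poisson ratio; the notation is illustrative rather than quoted from the paper). If u^e(r; E) denotes the elastic solution of the inclusion problem with elastic modulus E, the Laplace transform of the viscoelastic solution follows by the substitution E → s·Ê(s):

$$\hat{u}(r,s) = \frac{1}{s}\, u^{\mathrm{e}}\big(r;\, s\hat{E}(s)\big), \qquad E(t) = E_\infty + (E_0 - E_\infty)\, e^{-t/\tau},$$

where E(t) is the relaxation modulus of the standard linear (Zener) model; inverting the transform gives the time-dependent displacements, strains, and bulk strain listed above.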

Relevance: 30.00%

Abstract:

In Part I the kinetic theory of excitations in flowing liquid He II is developed to a higher order than that carried out previously by Landau and Khalatnikov, in order to demonstrate the existence of non-equilibrium terms of a new nature in the hydrodynamic equations. It is then shown that these terms can lead to spontaneous destabilization of counter-currents when the relative velocity of the normal and superfluid components exceeds a critical value that depends on the temperature, but not on geometry. There are no adjustable parameters in the theory. The critical velocities are estimated to be in the 14-20 m/sec range for T ≤ 2.0° K, but tend to zero as T → T_λ. The possibility that these critical velocities may be related to the experimentally observed "intrinsic" critical velocities is discussed.

Part II consists of a semi-classical investigation of roton-quantized vortex line interactions. An essentially classical model is used for the collision, and the behavior of the roton in the vortex field is investigated in detail. From this model it is possible to derive the HVBK mutual friction terms that appear in the phenomenological equations of motion for rotating liquid He II. Estimates of the Hall and Vinen B and B' coefficients are in good agreement with experiments. The claim is made that the theory does not contain any arbitrary adjustable parameters.

Relevance: 30.00%

Abstract:

Photoelectron angular distributions produced in above-threshold ionization (ATI) are analysed using a nonperturbative scattering theory. The numerical results are in good qualitative agreement with recent measurements. Our study shows that the jet-like structure arises from the inherent properties of the ATI process and not from the angular momentum of either the initial or the excited states of the atom.

Relevance: 30.00%

Abstract:

This thesis presents recent research into analytic topics in the classical theory of General Relativity. It is a thesis in two parts. The first part features investigations into the spectrum of perturbed, rotating black holes. These include the study of near-horizon perturbations, leading to a new generic frequency mode for black hole ringdown; a treatment of high-frequency waves using WKB methods for Kerr black holes; and the discovery of a bifurcation of the quasinormal mode spectrum of rapidly rotating black holes. These results represent new discoveries in the field of black hole perturbation theory, and rely on additional approximations to the linearized field equations around the background black hole. The second part of this thesis presents a recently developed method for the visualization of curved spacetimes, using field lines called the tendex and vortex lines of the spacetime. The works presented here both introduce these visualization techniques and explore them in simple situations. These include the visualization of asymptotic gravitational radiation; weak-gravity situations with and without radiation; stationary black hole spacetimes; and some preliminary study of numerically simulated black hole mergers. The second part of the thesis culminates in the investigation of perturbed black holes using these field-line methods, which have uncovered new insights into the dynamics of curved spacetime around black holes.

Relevance: 30.00%

Abstract:

In recent years coastal resource management has begun to stand as its own discipline. Its multidisciplinary nature gives it access to theory situated in each of the diverse fields it may encompass, yet management practices often revert to the primary field of the manager. There is no common body of "coastal" theory from which managers can draw. Seven resource-related issues with which coastal area managers must contend are: coastal habitat conservation, traditional maritime communities and economies, strong development and use pressures, adaptation to sea level rise and climate change, landscape sustainability and resilience, coastal hazards, and emerging energy technologies. The complexity and range of human and environmental interactions at the coast suggest a strong need for a common body of coastal management theory which managers would do well to understand generally. Planning theory, which is itself a synthesis of concepts from multiple fields, contains ideas generally valuable to coastal management. Planning theory can not only provide an example of how to develop a multi- or transdisciplinary set of theory, but may also provide an actual theoretical foundation for a coastal theory. In particular we discuss five concepts in the planning theory discourse and present their utility for coastal resource managers. These include "wicked" problems, ecological planning, the epistemology of knowledge communities, the role of the planner/manager, and collaborative planning. While these theories are known and familiar to some professionals working at the coast, we argue that there is a need for broader understanding amongst the various specialists working in the increasingly identifiable field of coastal resource management. (PDF contains 4 pages)

Relevance: 30.00%

Abstract:

This thesis introduces fundamental equations and numerical methods for manipulating surfaces in three dimensions via conformal transformations. Conformal transformations are valuable in applications because they naturally preserve the integrity of geometric data. To date, however, there has been no clearly stated and consistent theory of conformal transformations that can be used to develop general-purpose geometry processing algorithms: previous methods for computing conformal maps have been restricted to the flat two-dimensional plane, or other spaces of constant curvature. In contrast, our formulation can be used to produce---for the first time---general surface deformations that are perfectly conformal in the limit of refinement. It is for this reason that we commandeer the title Conformal Geometry Processing.

The main contribution of this thesis is the analysis and discretization of a certain time-independent Dirac equation, which plays a central role in our theory. Given an immersed surface, we wish to construct new immersions that (i) induce a conformally equivalent metric and (ii) exhibit a prescribed change in extrinsic curvature. Curvature determines the potential in the Dirac equation; the solution of this equation determines the geometry of the new surface. We derive the precise conditions under which curvature is allowed to evolve, and develop efficient numerical algorithms for solving the Dirac equation on triangulated surfaces.
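Schematically (the notation and normalizations below follow the conformal geometry literature and are an assumption-laden sketch rather than a verbatim quotation), the new immersion $\tilde f$ is built from the original immersion $f$ through a quaternion-valued function $\lambda$ via the spin transformation

$$d\tilde f = \bar{\lambda}\, df\, \lambda,$$

and the requirement that this one-form integrates to a surface with a prescribed change $\rho$ in mean-curvature half-density leads to a Dirac-type eigenvalue problem

$$(D - \rho)\,\lambda = \gamma\lambda,$$

solved for the eigenfunction with smallest $|\gamma|$; the curvature prescription enters precisely through the potential $\rho$.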

From a practical perspective, this theory has a variety of benefits: conformal maps are desirable in geometry processing because they do not exhibit shear, and therefore preserve textures as well as the quality of the mesh itself. Our discretization yields a sparse linear system that is simple to build and can be used to efficiently edit surfaces by manipulating curvature and boundary data, as demonstrated via several mesh processing applications. We also present a formulation of Willmore flow for triangulated surfaces that permits extraordinarily large time steps and apply this algorithm to surface fairing, geometric modeling, and construction of constant mean curvature (CMC) surfaces.
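As a rough illustration of the kind of sparse eigenvalue computation this entails (a generic sketch with an invented one-dimensional toy operator, not the thesis' discretization, which is assembled on a triangulated surface):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for a discretized self-adjoint operator with a potential term:
# a 1-D finite-difference Laplacian minus a diagonal "curvature potential".
# Purely illustrative; the operator in the thesis lives on a mesh.
n = 2000
h = 1.0 / n
lap = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)) / h**2
rho = sp.diags(np.sin(np.linspace(0.0, np.pi, n)))
A = (lap - rho).tocsc()

# Eigenpair closest to zero via shift-invert; the sparse factorization
# is what keeps this cheap even for large problems.
vals, vecs = spla.eigsh(A, k=1, sigma=0.0, which="LM")
print(vals)
```

Shift-inverting about zero targets the eigenpair of smallest magnitude, the analogue of the solution sought in the Dirac problem above.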

Relevance: 30.00%

Abstract:

This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region of 3-D space over an interval of time. After the event is initiated, it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach to geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high-quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters of both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous, scalable algorithms for data aggregation for detection. These studies provide insights into the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, the thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
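As a minimal illustration of the Bayesian detection step in such a framework (a sketch only: the sensor model, rates, and prior below are invented for the example, and the thesis' algorithms additionally handle spatial structure, timing, and scalable aggregation):

```python
import numpy as np

def posterior_event(readings, p_det=0.8, p_fa=0.1, prior=0.01):
    """Posterior probability that an event occurred, given independent binary
    sensor readings (1 = alarm, 0 = quiet). p_det is the per-sensor detection
    probability, p_fa the false-alarm rate; all numbers are illustrative."""
    r = np.asarray(readings)
    like_event = np.prod(np.where(r == 1, p_det, 1.0 - p_det))
    like_none = np.prod(np.where(r == 1, p_fa, 1.0 - p_fa))
    num = like_event * prior
    return num / (num + like_none * (1.0 - prior))

# Four of six sensors alarm: the posterior jumps well above the 1% prior.
print(posterior_event([1, 0, 1, 1, 0, 1]))
```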

Relevance: 30.00%

Abstract:

In a recent experimental work on the excess photon detachment (EPD) of H^- ions [Phys. Rev. Lett. 87 (2001) 243001] it was found that the ponderomotive shift of each EPD peak increases with the order of the EPD channel. Using a nonperturbative quantum scattering theory, we obtain the kinetic energy spectra for the differential detachment rate along the laser polarization for several laser intensities. It is demonstrated that higher-order EPD peaks are produced mainly at relatively higher laser intensities. By calculating the overall EPD spectra with varying laser intensities, we find that the ponderomotive shift of each EPD peak increases with the order of the EPD channel. Our calculations are in good agreement with the experimental observation. We find that different EPD channels open mainly when the laser field reaches certain values; thus the intensity distribution of the laser field is responsible for the varying ponderomotive shifts.
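In the standard strong-field picture (recalled here for orientation, not quoted from the paper), the ponderomotive energy and the n-photon EPD peak positions are

$$U_p = \frac{e^{2}E_0^{2}}{4m\omega^{2}}, \qquad E_n \simeq n\hbar\omega - E_A - U_p,$$

where $E_A$ is the electron affinity of H^-; since $U_p$ grows with intensity and higher-order channels are populated mainly at the higher intensities present in the focal volume, the apparent shift of a peak can grow with the channel order, consistent with the mechanism described above.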

Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
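The selection step can be sketched as follows for the noise-free case (an illustrative sketch only: the hypothesis dictionaries, the deterministic prediction table, and the absence of response noise are simplifying assumptions; the actual BROAD procedure additionally handles noisy responses and uses an accelerated greedy update):

```python
from collections import defaultdict

def edge_weight(mass, cls):
    """EC2 objective: total prior mass on pairs of hypotheses that lie in
    different theory classes (the 'edges' still to be cut)."""
    total = sum(mass.values())
    per_class = defaultdict(float)
    for h, m in mass.items():
        per_class[cls[h]] += m
    return 0.5 * (total ** 2 - sum(s ** 2 for s in per_class.values()))

def next_test(mass, cls, pred, tests):
    """Greedy EC2 selection for deterministic predictions: choose the test
    with the largest expected reduction in edge weight.
    mass:  dict hypothesis -> prior mass
    cls:   dict hypothesis -> theory class
    pred:  dict hypothesis -> dict test -> predicted choice
    tests: iterable of candidate tests"""
    current = edge_weight(mass, cls)
    total = sum(mass.values())
    best, best_cut = None, -1.0
    for t in tests:
        # Group hypotheses by the outcome they predict for test t.
        by_outcome = defaultdict(dict)
        for h, m in mass.items():
            by_outcome[pred[h][t]][h] = m
        expected_remaining = sum(
            (sum(group.values()) / total) * edge_weight(group, cls)
            for group in by_outcome.values()
        )
        cut = current - expected_remaining
        if cut > best_cut:
            best, best_cut = t, cut
    return best
```

After each observed choice, hypotheses whose prediction disagrees with the response are discarded (or down-weighted under a noise model), and the greedy step is repeated.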

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
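For reference, the discount functions being compared are commonly written as follows (parameter names vary across papers, including the (α, β) labelling used above; these are the textbook forms, not the exact specifications estimated in the experiment):

$$D_{\mathrm{exp}}(t)=\delta^{t},\qquad D_{\mathrm{hyp}}(t)=\frac{1}{1+kt},\qquad D_{\mathrm{qh}}(0)=1,\; D_{\mathrm{qh}}(t)=\beta\,\delta^{t}\ (t>0),\qquad D_{\mathrm{gh}}(t)=(1+\alpha t)^{-\beta/\alpha},$$

so that an option paying x at delay t is valued as D(t)·u(x).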

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a distinctly different way from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
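As a sketch of the kind of specification involved (the functional form and parameter names here are generic assumptions, not the estimated model), a reference-dependent, loss-averse utility entering a logit discrete-choice model can be written as

$$v(x\mid r)=\begin{cases} x-r, & x\ge r\\ \lambda\,(x-r), & x<r \end{cases}\qquad (\lambda>1),\qquad P(i)=\frac{e^{v_i/\mu}}{\sum_j e^{v_j/\mu}},$$

where r is the reference point (for instance, the undiscounted price), λ the loss-aversion coefficient, and μ a scale parameter; λ > 1 is what generates the asymmetric demand response to the start and end of a discount described above.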

In future work, BROAD can be widely applied to test different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 30.00%

Abstract:

The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N=1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation. Only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N>1 theories have been written using constrained superfields. No solutions of the nonlinear constraint equations were known.

In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N=1 Yang-Mills theory in four dimensions.
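For orientation, the unconstrained description in the familiar N=1 case is usually written in terms of a real prepotential V; up to convention-dependent normalization, the covariantly chiral field strength is

$$W_\alpha = -\tfrac{1}{4}\,\bar{D}^{2}\big(e^{-V} D_\alpha\, e^{V}\big).$$

This textbook form is quoted here only as a reference point; the thesis arrives at such unconstrained descriptions by transforming the nonlinear constraints into linear equations rather than by positing a solution.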

N=2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N=2 extended superspace. We construct a superfield calculus for six-dimensional superspace, and show that known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N=2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.

Relevance: 30.00%

Abstract:

The problem of s-d exchange scattering of conduction electrons off localized magnetic moments in dilute magnetic alloys is considered, employing formal methods of quantum field theoretical scattering. It is shown that such a treatment not only allows, for the first time, the inclusion of multiparticle intermediate states in single-particle scattering equations, but also results in an extremely simple and straightforward mathematical analysis. These equations are proved to be exact in the thermodynamic limit. A self-consistent integral equation for the electron self-energy is derived and approximately solved. The ground state and physical parameters of dilute magnetic alloys are discussed in terms of the theoretical results. Within the approximation of single-particle intermediate states, our results reduce to earlier versions. The following additional features are found as a consequence of the inclusion of multiparticle intermediate states:

(i) A nonanalytic binding energy is present for both antiferromagnetic (J < 0) and ferromagnetic (J > 0) couplings of the electron-plus-impurity system.

(ii) The correct behavior of the energy difference between the conduction-electron-plus-impurity system and the free-electron system is found, free of the unphysical singularities present in earlier versions of the theory.

(iii) The ground state of the conduction-electron-plus-impurity system is shown to be a many-body condensate state for both J < 0 and J > 0. However, a distinction is made between the usual terminology of "singlet" and "triplet" ground states and the nature of our ground state.

(iv) It is shown that long-range ordering of the magnetic moments can result from a contact interaction such as the s-d exchange interaction.

(v) The explicit temperature dependence of the excess specific heat of Kondo systems is obtained; it is linear in T as T → 0 and goes as T ln T for 0.3 T_K ≤ T ≤ 0.6 T_K. A rise in ΔC/T for temperatures in the region 0 < T ≤ 0.1 T_K is predicted. These results are in excellent agreement with experiments.

(vi) The existence of a critical temperature for ferromagnetic coupling (J > 0) is shown. On this basis, the apparent contradiction of the simultaneous existence of giant moments and the Kondo effect is resolved.
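For reference, the s-d exchange (Kondo) interaction analysed above is conventionally written, up to normalization conventions that differ between authors, as

$$H = \sum_{k\sigma}\varepsilon_{k}\,c^{\dagger}_{k\sigma}c_{k\sigma} - \frac{J}{N}\sum_{k,k'}\mathbf{S}\cdot c^{\dagger}_{k\alpha}\,\boldsymbol{\sigma}_{\alpha\beta}\,c_{k'\beta},$$

with the impurity spin S coupled by a contact interaction to the conduction-electron spin density; in the sign convention used above, J < 0 is antiferromagnetic and J > 0 ferromagnetic.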

Relevance: 30.00%

Abstract:

The surface resistance and the critical magnetic field of lead electroplated on copper were studied at 205 MHz in a half-wave coaxial resonator. The observed surface resistance at a low field level below 4.2°K could be well described by the BCS surface resistance with the addition of a temperature independent residual resistance. The available experimental data suggest that the major fraction of the residual resistance in the present experiment was due to the presence of an oxide layer on the surface. At higher magnetic field levels the surface resistance was found to be enhanced due to surface imperfections.
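The commonly used approximate form of the BCS surface resistance referenced here (stated for orientation; the prefactor depends on material parameters and the exact expression requires the full BCS computation) is

$$R_s(T) \simeq A\,\frac{\omega^{2}}{T}\,\exp\!\left(-\frac{\Delta}{k_{B}T}\right) + R_{\mathrm{res}},$$

valid for $T \lesssim T_c/2$ and $\hbar\omega \ll \Delta$, with $R_{\mathrm{res}}$ the temperature-independent residual resistance attributed above to the surface oxide layer.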

The attainable rf critical magnetic field between 2.2°K and T_c of lead was found to be limited not by the thermodynamic critical field but rather by the superheating field predicted by the one-dimensional Ginzburg-Landau theory. The observed rf critical field was very close to the expected superheating field, particularly in the higher reduced temperature range, but showed somewhat stronger temperature dependence than the expected superheating field in the lower reduced temperature range.
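For orientation, the one-dimensional Ginzburg-Landau superheating field referred to here has the standard limiting forms (quoted from the general GL literature, not from this thesis)

$$H_{\mathrm{sh}} \approx \frac{0.89}{\sqrt{\kappa}}\,H_c \quad (\kappa \ll 1), \qquad H_{\mathrm{sh}} \approx 0.75\,H_c \quad (\kappa \gg 1),$$

so that for a low-κ superconductor such as lead the attainable rf field can exceed the thermodynamic critical field $H_c$, consistent with the observation above.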

The rf critical magnetic field was also studied at 90 MHz for pure tin and indium, and for a series of SnIn and InBi alloys spanning both type I and type II superconductivity. The samples were spherical with typical diameters of 1-2 mm, and a helical resonator was used to generate the rf magnetic field in the measurement. The results for the pure samples of tin and indium showed that a vortex-like nucleation of the normal phase was responsible for the superconducting-to-normal phase transition in the rf field at temperatures up to about 0.98-0.99 T_c, where the ideal superheating limit was being reached. The results for the alloy samples showed that the attainable rf critical fields near T_c were well described by the superheating field predicted by the one-dimensional GL theory in both the type I and type II regimes. The measurement was also made at 300 MHz, resulting in no significant change in the rf critical field. Thus it was inferred that the nucleation time of the normal phase, once the critical field was reached, was small compared with the rf period in this frequency range.

Relevance: 30.00%

Abstract:

The single ionization of a He atom by an intense, linearly polarized laser field in the tunneling regime is studied by S-matrix theory. When only the first term of the S-matrix expansion is considered and the temporal and spatial distribution and the fluctuation of the laser pulse are taken into account, the resulting momentum distribution along the laser polarization direction is consistent with the semiclassical calculation, which considers only tunneling and the interaction between the free electron and the external field. When the second term, which includes the interaction between the core and the free electron, is considered, the momentum distribution shows a complex multipeak structure with a central minimum, and the positions of some peaks are independent of the intensity in certain intensity regimes, which is consistent with the recent experimental result. Based on our analysis, we find that the structures observed in the momentum distribution of the He atom are attributable to the "soft" collision of the tunneled electron with the core.

Relevance: 30.00%

Abstract:

An approximate theory for steady irrotational flow through a cascade of thin cambered airfoils is developed. Isolated thin airfoils have only slight camber in most applications, and the well-known methods that replace the source and vorticity distributions of the curved camber line by similar distributions on the straight chord line are adequate. In cascades, however, the camber is usually appreciable, and significant errors are introduced if the vorticity and source distributions on the camber line are approximated by the same distributions on the chord line.
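For the isolated thin airfoil mentioned here, the chord-line approximation amounts to the classical integral equation of thin-airfoil theory (recalled for orientation; in the cascade case the 1/(x − ξ) kernel is replaced by a periodic, cotangent-type kernel that accounts for the neighbouring blades):

$$\frac{1}{2\pi}\int_{0}^{c}\frac{\gamma(\xi)}{x-\xi}\,d\xi = V_\infty\!\left(\alpha - \frac{dy_c}{dx}\right),$$

where γ(ξ) is the vorticity per unit length placed on the chord, y_c(x) the camber line, α the angle of attack, and c the chord.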

The calculation of the flow field becomes very clumsy in practice if the vorticity and source distributions are not confined to a straight line. A new method is proposed and investigated; in this method, at each point on the camber line, the vorticity and sources are assumed to be distributed along a straight line tangent to the camber line at that point, and corrections are determined to account for the deviation of the actual camber line from the tangent line. Hence, the basic calculation for the cambered airfoils is reduced to the simpler calculation of the straight line airfoils, with the equivalent straight line airfoils changing from point to point.

The results of the approximate method are compared with numerical solutions for cambers as high as 25 per cent of the chord. The leaving angles of flow are predicted quite well, even at this high value of the camber. The present method also gives the functional relationship between the exit angle and the other parameters such as airfoil shape and cascade geometry.

Relevance: 30.00%

Abstract:

Kohn-Sham density functional theory (KSDFT) is currently the main workhorse of quantum mechanical calculations in physics, chemistry, and materials science. From a mechanical engineering perspective, we are interested in studying the role of defects in the mechanical properties of materials. In real materials, defects are typically found at very small concentrations: vacancies occur at parts per million, dislocation densities in metals range from $10^{10} m^{-2}$ to $10^{15} m^{-2}$, and grain sizes in polycrystalline materials vary from nanometers to micrometers. In order to model materials at realistic defect concentrations using DFT, we would need to work with system sizes of millions of atoms and beyond. Due to the cubic-scaling computational cost with respect to the number of atoms in conventional DFT implementations, such system sizes are unreachable. Since the early 1990s, there has been great interest in developing DFT implementations with linear-scaling computational cost. A promising approach to achieving linear-scaling cost is to approximate the density matrix in KSDFT. The focus of this thesis is to provide a firm mathematical framework for studying the convergence of these approximations. We reformulate Kohn-Sham density functional theory as a nested variational problem in the density matrix, the electrostatic potential, and a field dual to the electron density. The corresponding functional is linear in the density matrix and thus amenable to spectral representation. Based on this reformulation, we introduce a new approximation scheme, called spectral binning, which does not require smoothing of the occupancy function and thus applies at arbitrarily low temperatures. We prove convergence of the approximate solutions with respect to spectral binning and with respect to an additional spatial discretization of the domain. For a standard one-dimensional benchmark problem, we present numerical experiments in which spectral binning exhibits excellent convergence characteristics and outperforms other linear-scaling methods.
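As a point of reference for what is being approximated (an illustrative sketch: the toy Hamiltonian and parameters are invented, and this is the conventional cubic-scaling construction rather than the spectral binning scheme introduced in the thesis), the finite-temperature density matrix is assembled from the spectrum of the Kohn-Sham Hamiltonian with Fermi-Dirac occupancies:

```python
import numpy as np

def density_matrix(H, mu, kT):
    """Conventional construction of the finite-temperature density matrix,
    D = sum_i f(eps_i) |v_i><v_i|, with Fermi-Dirac occupancies f. The dense
    diagonalization makes this scale cubically with system size."""
    eps, V = np.linalg.eigh(H)
    f = 1.0 / (1.0 + np.exp((eps - mu) / kT))  # occupancy function
    return (V * f) @ V.T

# Toy 1-D tight-binding Hamiltonian standing in for the Kohn-Sham operator.
n = 200
H = -np.eye(n, k=1) - np.eye(n, k=-1)
D = density_matrix(H, mu=0.0, kT=0.05)
print(np.trace(D))  # electron count at this chemical potential
```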