955 results for algebraic bethe-ansatz
Abstract:
In the case of suspension flows, the rate of interphase momentum transfer $M_k$ and the rate of interphase energy transfer $E_k$, which were expressed by Ishii as sums over infinitely many interface discontinuities, have been reduced to the sum of a few terms with concise physical significance. $M_k$ is composed of the following terms: (i) the momentum carried by the interphase mass transfer; (ii) the interphase drag force due to the relative motion between phases; (iii) the interphase force produced by the concentration gradient of the dispersed phase in a pressure field. $E_k$ is composed of four terms: the energy carried by the interphase mass transfer, the work done by the interphase forces of parts (ii) and (iii) above, and the heat transfer between phases. It is concluded from the results that (i) the term $-\alpha_k \nabla p$, which is related to the pressure gradient in the momentum equation, can be derived from the basic conservation laws without introducing the "shared-pressure presumption"; (ii) the mean velocity of the action point of the interphase drag is the mean velocity of the interface displacement, $\bar{v}_i$, which is approximately equal to the mean velocity of the dispersed phase, $\bar{v}_d$; hence the work terms produced by the drag forces are $f_{dc} \cdot \bar{v}_d$ and $f_{cd} \cdot \bar{v}_d$, respectively, with $\bar{v}_i$ not replaced by the mean velocity of the continuous phase, $\bar{v}_c$; (iii) by analogy, the momentum-transfer terms due to phase change are $\bar{v}_d \Gamma_c$ and $\bar{v}_d \Gamma_d$, respectively; (iv) since explicit (sensible) heat is transformed into latent heat during phase change, the algebraic sum of the explicit heat transferred between phases is not equal to zero; $Q_{ic}$ and $Q_{id}$ comprise both explicit and latent heat, so that the sum $Q_{ic} + Q_{id}$ is equal to zero.
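Written out schematically, the decomposition above takes the following form; the symbols $\Gamma_k$ (interphase mass-transfer rate), $f_k$ (drag force), $p_i$ (interfacial pressure) and $h_k$ (transferred enthalpy) are assumed notation for illustration, not taken verbatim from the paper:

$$M_k = \bar{v}_d\,\Gamma_k + f_k + p_i \nabla \alpha_k, \qquad E_k = h_k\,\Gamma_k + f_k \cdot \bar{v}_i + \left(p_i \nabla \alpha_k\right) \cdot \bar{v}_i + Q_{ik},$$

with the four terms of $E_k$ being, in order, the energy carried by mass transfer, the work of the drag force, the work of the concentration-gradient force, and the interphase heat transfer.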
Abstract:
In this paper, TASCflow3D is used to solve the inner and outer 3D viscous incompressible turbulent flow ($Re = 5.6 \times 10^6$) around an axisymmetric body with duct. The governing equations are the RANS equations closed with the standard $k$-$\varepsilon$ turbulence model. The discretization is a finite volume method based on a finite element approach; this makes the description of the geometry very flexible while retaining the important conservation properties. Multi-block and algebraic multigrid techniques are used to accelerate convergence. Agreement between the calculations and experimental results is good, indicating that this novel approach can be used to simulate complex flows such as the interaction between rotor and stator, or propulsion systems with tip clearance and cavitation.
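For reference, the "standard" $k$-$\varepsilon$ closure has the well-known Launder-Spalding form (the textbook model, not anything specific to TASCflow3D): the eddy viscosity is $\mu_t = \rho C_\mu k^2/\varepsilon$, with transport equations

$$\frac{\partial(\rho k)}{\partial t} + \nabla \cdot (\rho k \mathbf{u}) = \nabla \cdot \Big[\Big(\mu + \frac{\mu_t}{\sigma_k}\Big)\nabla k\Big] + P_k - \rho\varepsilon,$$
$$\frac{\partial(\rho \varepsilon)}{\partial t} + \nabla \cdot (\rho \varepsilon \mathbf{u}) = \nabla \cdot \Big[\Big(\mu + \frac{\mu_t}{\sigma_\varepsilon}\Big)\nabla \varepsilon\Big] + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},$$

where $P_k$ is the turbulence production term and the usual constants are $C_\mu = 0.09$, $C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$.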
Abstract:
A full two-fluid model of reacting gas-particle flows with an algebraic unified second-order moment (AUSM) turbulence-chemistry model is used to simulate the combustion of a Beijing coal and NOx formation. The sub-models are the $k$-$\varepsilon$-$k_p$ two-phase turbulence model, the EBU-Arrhenius volatile and CO combustion model, the six-flux radiation model, a coal devolatilization model, and a char combustion model. The blocking effect on NOx formation is discussed. In addition, chemical equilibrium analysis is used to predict NOx concentration at different temperatures. Results of the CFD simulation and the chemical equilibrium analysis show that optimizing the air dynamic parameters can delay NOx formation and decrease NOx emission, but it is effective only within a restricted range. To decrease NOx emission to near zero, re-burning or other chemical methods must be used.
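The EBU-Arrhenius sub-model combines mixing-limited and kinetics-limited rates; in its common form (standard formulation, given here for concreteness) the fuel consumption rate is the smaller of the eddy-break-up and Arrhenius rates:

$$w_{fu} = \min(w_{EBU},\, w_{Arr}), \qquad w_{EBU} = C_{EBU}\,\rho\,\frac{\varepsilon}{k}\,\min\!\Big(Y_{fu}, \frac{Y_{ox}}{s}\Big), \qquad w_{Arr} = A\,\rho^2\, Y_{fu}\, Y_{ox}\, e^{-E/RT},$$

where $Y_{fu}$ and $Y_{ox}$ are the fuel and oxidizer mass fractions and $s$ is the stoichiometric ratio.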
Abstract:
Monitoring of the waters of the Middle Atlantic Bight and Gulf of Maine has been conducted by the MARMAP Ships of Opportunity Program since the early 1970's. Presented in this atlas are portrayals of the temporal and spatial patterns of surface and bottom temperature and surface salinity for these areas during the period 1978-1990. These patterns are shown in the form of time-space diagrams for single-year and multiyear (base period) time frames. Each base period figure shows thirteen-year (1978-1990) mean conditions, sample variance in the form of standard deviations of the measured values, and data locations. Each single-year figure displays annual conditions, sampling locations, and departures of annual conditions from the thirteen-year means, expressed as algebraic anomalies and standardized anomalies. (PDF file contains 112 pages.)
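Concretely, for an annual value $x$ with base-period mean $\bar{x}$ and base-period standard deviation $s$ (symbols assumed here; the atlas states these definitions in words), the two anomaly types shown are

$$\text{algebraic anomaly} = x - \bar{x}, \qquad \text{standardized anomaly} = \frac{x - \bar{x}}{s}.$$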
Abstract:
In accordance with the framework concept of 12 June 1996 for the federal research centres within the remit of the then Federal Ministry of Food, Agriculture and Forestry, the number of institutes at the Bundesforschungsanstalt für Fischerei (Federal Research Centre for Fisheries) in Hamburg is to be reduced from five to four. The Institut für Fischereitechnik und Fischqualität (IFF; Institute for Fishery Technology and Fish Quality), newly formed as of 1 January 2001, takes over the research tasks of the two previous institutes, the Institut für Fischereitechnik (IFH) and the Institut für Biochemie und Technologie (IBT). For the IFF, which emerged from two comparatively small institutes, this offers the opportunity to examine and evaluate fish, crustaceans and molluscs at the various stages of the production and processing chain in an integrated approach. The catching process and the subsequent handling of the catch are thus considered as a whole, which should be reflected not least in measures for maintaining and improving the quality of fish and fishery products.
Abstract:
Singular Value Decomposition (SVD) is a key linear algebraic operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensional datasets that must be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
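The abstract does not spell out the algorithm, but SVD processors built from interconnected rotation units are typically variants of Jacobi-type methods, which compute the decomposition entirely from simple plane rotations. The following NumPy sketch of one-sided Jacobi SVD is illustrative only, a software analogue of the rotation primitive rather than the paper's architecture:

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: orthogonalize the columns of A by plane rotations."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0  # largest normalized column correlation seen this sweep
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if gamma == 0.0:
                    continue
                # plane rotation that zeroes the (p, q) column inner product
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)  # singular values
    return U / sigma, sigma, V         # A ≈ U @ diag(sigma) @ V.T
```

In hardware, each inner rotation maps onto one of the interconnected rotation units; replicating more of them trades area for speed, which is the scalability knob the paper describes.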
Abstract:
I. Existence and Structure of Bifurcation Branches
The problem of bifurcation is formulated as an operator equation in a Banach space, depending on relevant control parameters, say of the form $G(u,\lambda) = 0$. If $\dim N(G_u(u_0,\lambda_0)) = m$, the method of Lyapunov-Schmidt reduces the problem to the solution of $m$ algebraic equations. The possible structure of these equations and the various types of solution behaviour are discussed. The equations are normally derived under the assumption that $G^0_\lambda \in R(G^0_u)$. It is shown, however, that if $G^0_\lambda \notin R(G^0_u)$, bifurcation may still occur, and the local structure of such branches is determined. A new and compact proof of the existence of multiple bifurcation is derived. The linearized stability near simple bifurcation and "normal" limit points is then indicated.
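In outline, the Lyapunov-Schmidt reduction referred to here (the standard construction, stated for concreteness) writes $u = u_0 + v + w$ with $v \in N(G^0_u)$ and $w$ in a complement, and lets $Q$ denote the projection onto $R(G^0_u)$. The equation $Q\,G(u_0 + v + w, \lambda) = 0$ determines $w = w(v,\lambda)$ by the implicit function theorem, leaving the bifurcation equations

$$(I - Q)\,G\big(u_0 + v + w(v,\lambda),\, \lambda\big) = 0,$$

a system of $m$ scalar equations for the $m$ coordinates of $v$.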
II. Constructive Techniques for the Generation of Solution Branches
A method is described in which the dependence of the solution arc on a naturally occurring parameter is replaced by the dependence on a form of pseudo-arclength. This results in continuation procedures through regular and "normal" limit points. In the neighborhood of bifurcation points, however, the associated linear operator is nearly singular causing difficulty in the convergence of continuation methods. A study of the approach to singularity of this operator yields convergence proofs for an iterative method for determining the solution arc in the neighborhood of a simple bifurcation point. As a result of these considerations, a new constructive proof of bifurcation is determined.
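The pseudo-arclength device admits a one-line statement (one standard formulation, given here for concreteness): at a known point $(u_0, \lambda_0)$ with unit tangent $(\dot{u}_0, \dot{\lambda}_0)$, the system $G(u,\lambda) = 0$ is augmented by the normalization

$$N(u, \lambda; s) \equiv \dot{u}_0^{\ast}(u - u_0) + \dot{\lambda}_0(\lambda - \lambda_0) - (s - s_0) = 0,$$

and the augmented Jacobian remains nonsingular at "normal" limit points where $G_u$ alone is singular, which is what allows continuation through them.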
Abstract:
The theory of bifurcation of solutions to two-point boundary value problems is developed for a system of nonlinear first order ordinary differential equations in which the bifurcation parameter is allowed to appear nonlinearly. An iteration method is used to establish necessary and sufficient conditions for bifurcation and to construct a unique bifurcated branch in a neighborhood of a bifurcation point which is a simple eigenvalue of the linearized problem. The problem of bifurcation at a degenerate eigenvalue of the linearized problem is reduced to that of solving a system of algebraic equations. Cases with no bifurcation and with multiple bifurcation at a degenerate eigenvalue are considered.
The iteration method employed is shown to generate approximate solutions which contain those obtained by formal perturbation theory. Thus the formal perturbation solutions are rigorously justified. A theory of continuation of a solution branch out of the neighborhood of its bifurcation point is presented. Several generalizations and extensions of the theory to other types of problems, such as systems of partial differential equations, are described.
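At a simple eigenvalue the constructions just described take the familiar perturbation form (schematic):

$$u(\epsilon) = \epsilon\, u_1 + \epsilon^2 u_2 + \cdots, \qquad \lambda(\epsilon) = \lambda_0 + \epsilon\, \lambda_1 + \epsilon^2 \lambda_2 + \cdots,$$

with the coefficients determined order by order from the linearized problem and its solvability conditions; the statement above is that the iteration method reproduces exactly these terms, with rigorous error control.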
The theory is applied to the problem of the axisymmetric buckling of thin spherical shells. Results are obtained which confirm recent numerical computations.
Abstract:
In the first part I perform Hartree-Fock calculations to show that quantum dots (i.e., two-dimensional systems of up to twenty interacting electrons in an external parabolic potential) undergo a gradual transition to a spin-polarized Wigner crystal with increasing magnetic field strength. The phase diagram and ground-state energies have been determined. I tried to improve the ground state of the Wigner crystal by introducing a Jastrow ansatz for the wave function and performing a variational Monte Carlo calculation. The existence of so-called magic numbers was also investigated. Finally, I also calculated the heat capacity associated with the rotational degree of freedom of deformed many-body states and suggest an experimental method to detect Wigner crystals.
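The Jastrow-improved trial state mentioned above has the generic form (schematic; the correlation factor actually used is not specified in the abstract):

$$\Psi_J(\mathbf{r}_1, \dots, \mathbf{r}_N) = \exp\Big(\sum_{i<j} u(r_{ij})\Big)\, \Phi(\mathbf{r}_1, \dots, \mathbf{r}_N),$$

where $\Phi$ is the mean-field (here, Wigner-crystal) determinant and $u$ is a variational pair-correlation function whose parameters are optimized by variational Monte Carlo.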
The second part of the thesis investigates infinite nuclear matter on a cubic lattice. The exact thermal formalism describes nucleons with a Hamiltonian that accommodates on-site and next-neighbor parts of the central, spin-exchange and isospin-exchange interaction. Using auxiliary field Monte Carlo methods, I show that energy and basic saturation properties of nuclear matter can be reproduced. A first order phase transition from an uncorrelated Fermi gas to a clustered system is observed by computing mechanical and thermodynamical quantities such as compressibility, heat capacity, entropy and grand potential. The structure of the clusters is investigated with the help of two-body correlations. I compare symmetry energy and first sound velocities with the literature and find reasonable agreement. I also calculate the energy of pure neutron matter and search for a similar phase transition, but the survey is restricted by the infamous Monte Carlo sign problem. Also, a regularization scheme to extract potential parameters from scattering lengths and effective ranges is investigated.
Abstract:
Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are a very limited number of moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the Sine-Gordon equation are presented.
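For reference, the numerical test case mentioned is the Sine-Gordon equation, which in (1+1) dimensions reads

$$u_{tt} - u_{xx} + \sin u = 0,$$

the Euler-Lagrange equation of the Lagrangian density $\mathcal{L} = \tfrac{1}{2}u_t^2 - \tfrac{1}{2}u_x^2 - (1 - \cos u)$.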
In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.
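The linear-in-velocity setting can be made concrete as follows (schematic notation assumed here): for $L(q, \dot{q}) = \langle a(q), \dot{q} \rangle - H(q)$, the Euler-Lagrange equations form the first-order system

$$\Omega(q)\, \dot{q} = \nabla H(q), \qquad \Omega_{ij} = \partial_i a_j - \partial_j a_i,$$

so when $a$ is linear in $q$ the antisymmetric matrix $\Omega$ is constant, and (for invertible $\Omega$) $\dot{q} = \Omega^{-1} \nabla H(q)$ is precisely a Poisson system with a constant structure matrix, as stated above.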
Abstract:
Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
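Quantitatively, the degree-preservation property is this: if $f : \mathbb{F}_q^m \to \mathbb{F}_q$ has total degree $d$ and $C : \mathbb{F}_q \to \mathbb{F}_q^m$ is a curve of degree $t$ (each coordinate a univariate polynomial of degree at most $t$), then the restriction $f|_C(x) = f(C(x))$ is a univariate polynomial of degree at most $dt$.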
The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using $O(\log N + \log(1/\delta))$ random bits exist, where $N$ is the domain size and $\delta$ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], where they obtained curve samplers with near-optimal randomness complexity.
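Here the sampling property has its standard meaning: a sampler with accuracy $\epsilon$ and confidence error $\delta$ guarantees, for every test set $A$ of density $\mu(A)$ in the domain,

$$\Pr\Big[\,\Big|\tfrac{|S \cap A|}{|S|} - \mu(A)\Big| > \epsilon\,\Big] \le \delta,$$

where $S$ is the multiset of points on the randomly chosen curve.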
In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree $\big(m \log_q(1/\delta)\big)^{O(1)}$ in $\mathbb{F}_q^m$. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.
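For context, two standard results frame the computation (quoted here for orientation, not taken from the thesis itself): the lowest-order rate for ortho-positronium annihilation into three photons is the Ore-Powell value, and the sought first-order QED correction enters through a coefficient $A$:

$$\Gamma_0 = \frac{2(\pi^2 - 9)}{9\pi}\, \frac{m_e c^2\, \alpha^6}{\hbar}, \qquad \Gamma = \Gamma_0 \Big[\, 1 + A\, \frac{\alpha}{\pi} + \cdots \Big].$$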
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
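As a generic illustration of what such a decomposition accomplishes (the textbook one-loop tensor-reduction idea, not necessarily the Feynman-Brown scheme itself): an integral with a tensor numerator is traded for scalar coefficient integrals multiplying the available external momenta, e.g.

$$\int \frac{d^4 k}{(2\pi)^4}\, \frac{k^\mu}{(k^2 - m^2)\big[(k+p)^2 - m^2\big]} = p^\mu\, B_1(p^2, m^2),$$

after which only scalar integrals such as $B_1$ remain to be evaluated; it is this mechanical bookkeeping that can be programmed in Reduce.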
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.