997 results for S-matrix theory
Abstract:
We review the status of integrable models from the point of view of their dynamics and integrability conditions. A few integrable models are discussed in detail. We comment on the use made of them in string theory. We also discuss the SO(6) symmetric Hamiltonian with SO(6) boundary. This work is especially prepared for the 70th anniversaries of André Swieca (in memoriam) and Roland Koberle.
Abstract:
Starting from the Fisher matrix for counts in cells, we derive the full Fisher matrix for surveys of multiple tracers of large-scale structure. The key step is the classical approximation, which allows us to write the inverse of the covariance of the galaxy counts in terms of the naive matrix inverse of the covariance in a mixed position-space and Fourier-space basis. We then compute the Fisher matrix for the power spectrum in bins of the 3D wavenumber k; the Fisher matrix for functions of position (or redshift z), such as the linear bias of the tracers and/or the growth function; and the cross-terms of the Fisher matrix that express the correlations between estimates of the power spectrum and estimates of the bias. When the bias and growth function are fully specified, and the Fourier-space bins are large enough that the covariance between them can be neglected, the Fisher matrix for the power spectrum reduces to the widely used result first derived by Feldman, Kaiser & Peacock. Assuming isotropy, a fully analytical calculation of the Fisher matrix in the classical approximation can be performed in the case of a constant-density, volume-limited survey.
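For orientation, the Feldman, Kaiser & Peacock result mentioned above is commonly quoted in its effective-volume form (a standard expression from the survey-forecasting literature, not transcribed from this paper):

```latex
F_{ij} \;\simeq\; \frac{1}{2}\int \frac{d^3k}{(2\pi)^3}\,
  \frac{\partial \ln P(k)}{\partial \theta_i}\,
  \frac{\partial \ln P(k)}{\partial \theta_j}\, V_{\mathrm{eff}}(k),
\qquad
V_{\mathrm{eff}}(k) \;=\; \int d^3r
  \left[\frac{\bar n(\mathbf{r})\,P(k)}{1+\bar n(\mathbf{r})\,P(k)}\right]^{2},
```

where \bar n(\mathbf{r}) is the mean tracer density and \theta_i are the parameters of the power spectrum; the constant-density, volume-limited case discussed at the end corresponds to taking \bar n constant over the survey volume.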
Abstract:
We investigate the classical integrability of the Alday-Arutyunov-Frolov model and show that the Lax connection can be reduced to a simpler 2 x 2 representation. Based on this result, we calculate the algebra between the L-operators and find that it has a highly non-ultralocal form. We then employ, and suitably generalize, the regularization technique proposed by Maillet for a simpler class of non-ultralocal models, and find the corresponding r- and s-matrices. We also make a connection between the operator-regularization method proposed earlier for the quantum case and Maillet's symmetric-limit regularization prescription used for non-ultralocal algebras in the classical theory.
Abstract:
There is special interest in the incorporation of metallic nanoparticles into a surrounding dielectric matrix to obtain composites with desirable characteristics, such as surface plasmon resonance, which can be used in photonics and sensing, and controlled surface electrical conductivity. We investigated nanocomposites produced through metallic ion implantation into an insulating substrate, where the implanted metal self-assembles into nanoparticles. During implantation, the excess of metal atom concentration above the solubility limit leads to nucleation and growth of metal nanoparticles, driven by the temperature and temperature gradients within the implanted sample, including the beam-induced thermal characteristics. The nanoparticles nucleate near the maximum of the implantation depth profile (projected range), which can be estimated by computer simulation using TRIDYN, a Monte Carlo simulation program based on the TRIM (Transport and Range of Ions in Matter) code that takes into account compositional changes in the substrate due to two factors: previously implanted dopant atoms and sputtering of the substrate surface. Our study suggests that the nanoparticles form a two-dimensional array buried a few nanometers below the substrate surface. More specifically, we have studied Au/PMMA (polymethylmethacrylate), Pt/PMMA, Ti/alumina and Au/alumina systems. Transmission electron microscopy of the implanted samples showed the metallic nanoparticles formed in the insulating matrix. The nanocomposites were characterized by measuring the resistivity of the composite layer as a function of the implanted dose. These experimental results were compared with a model based on percolation theory, in which electron transport through the composite is explained by conduction through a random resistor network formed by the metallic nanoparticles. Excellent agreement was found between the experimental results and the predictions of the theory. It was possible to conclude, in all cases, that the conduction process is due only to percolation (when the conducting elements are in geometric contact) and that the contribution from tunneling conduction is negligible.
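As a rough illustration of the geometric-percolation picture invoked in this abstract (a generic sketch with arbitrary parameters, not the authors' model or data), the following Python snippet estimates how the probability that a random two-dimensional array of conducting sites spans the sample rises sharply with the filling fraction p:

```python
import random
from collections import deque

def spans(grid):
    """Return True if occupied sites connect the left edge to the right edge."""
    n = len(grid)
    seen = set((i, 0) for i in range(n) if grid[i][0])
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if j == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                queue.append((a, b))
    return False

def spanning_probability(p, n=60, trials=200, seed=0):
    """Fraction of random n x n site configurations (occupation probability p)
    that contain a left-to-right path of touching conducting sites."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

if __name__ == "__main__":
    # The onset of conduction by geometric contact (percolation) appears as a
    # sharp rise of the spanning probability with the metal filling fraction.
    for p in (0.50, 0.55, 0.59, 0.63, 0.70):
        print(f"p = {p:.2f}  spanning probability = {spanning_probability(p):.2f}")
```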
Abstract:
Research and theory on second language reading have reached heightened dimensions in recent years. It is through reading that learners access much information concerning the target language and culture, and consequently reading is an important part of almost all language programs across stages of acquisition. The purpose of this article is to offer informed suggestions for the foreign language instructor of reading. The ideas given in this paper constitute a collaborative project that developed as part of a graduate seminar on L2 Reading and Writing taught at Washington University in St. Louis.
Abstract:
This thesis focuses on the financial model for interest rates known as the LIBOR Market Model. The appendices provide the necessary mathematical theory. In the main chapters we first define the principal interest rates and the financial instruments relevant to interest rate models; we then set up the LIBOR market model, demonstrate its existence, derive the dynamics of the forward LIBOR rates and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models the forward swap rates instead of the LIBOR rates. This model too is justified by a theoretical demonstration, and the resulting formula for pricing swaptions coincides with Black's. However, the two models are not compatible from a theoretical point of view. We therefore derive various analytical approximation formulae for pricing swaptions in the LIBOR market model and explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to the markets of both caps and swaptions, together with various examples of application to the historical correlation matrix and the cascade calibration of the forward volatilities to the matrix of implied swaption volatilities provided by the market.
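Black's formula for caps, referred to above, prices each caplet on a forward LIBOR rate lognormally. A minimal sketch (our own illustrative code, with hypothetical parameter names, not taken from the thesis):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_caplet(F, K, sigma, T, delta, P0):
    """Black caplet price.

    F     : forward LIBOR rate for the accrual period [T, T + delta]
    K     : cap rate (strike)
    sigma : Black (lognormal) volatility of the forward rate
    T     : fixing time of the forward rate, in years
    delta : year fraction of the accrual period
    P0    : discount factor (zero-coupon bond price) for the payment date T + delta
    """
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return P0 * delta * (F * norm_cdf(d1) - K * norm_cdf(d2))

if __name__ == "__main__":
    # Illustrative numbers only: at-the-money caplet, 20% volatility, semi-annual accrual.
    print(black_caplet(F=0.02, K=0.02, sigma=0.20, T=1.0, delta=0.5, P0=0.97))
```

The cap price is then the sum of such caplets over the accrual periods, each with its own forward rate, discount factor and volatility.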
Abstract:
Coupled-cluster (CC) theory is one of the most successful approaches in high-accuracy quantum chemistry. The present thesis makes a number of contributions to the determination of molecular properties and excitation energies within the CC framework. The multireference CC (MRCC) method proposed by Mukherjee and coworkers (Mk-MRCC) has been benchmarked within the singles and doubles approximation (Mk-MRCCSD) for molecular equilibrium structures. It is demonstrated that Mk-MRCCSD yields reliable results for multireference cases where single-reference CC methods fail. At the same time, the present work also illustrates that Mk-MRCC still suffers from a number of theoretical problems and sometimes gives rise to results of unsatisfactory accuracy. To determine polarizability tensors and excitation spectra in the MRCC framework, the Mk-MRCC linear-response function has been derived together with the corresponding linear-response equations. Pilot applications show that Mk-MRCC linear-response theory suffers from a severe problem when applied to the calculation of dynamic properties and excitation energies: The Mk-MRCC sufficiency conditions give rise to a redundancy in the Mk-MRCC Jacobian matrix, which entails an artificial splitting of certain excited states. This finding has established a new paradigm in MRCC theory, namely that a convincing method should not only yield accurate energies, but ought to allow for the reliable calculation of dynamic properties as well. In the context of single-reference CC theory, an analytic expression for the dipole Hessian matrix, a third-order quantity relevant to infrared spectroscopy, has been derived and implemented within the CC singles and doubles approximation. The advantages of analytic derivatives over numerical differentiation schemes are demonstrated in some pilot applications.
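To make the cost argument for analytic derivatives concrete (a generic sketch, not the implementation developed in the thesis), the dipole Hessian, i.e. the second derivative of the dipole moment with respect to the nuclear coordinates, could in principle be obtained by central finite differences; `dipole` below is a hypothetical callable standing in for a full electronic-structure calculation:

```python
import numpy as np

def numerical_dipole_hessian(dipole, coords, h=1e-3):
    """Dipole Hessian d^2 mu_k / dx_i dx_j by central finite differences.

    dipole : callable mapping an (N, 3) coordinate array to the length-3 dipole vector
             (hypothetical interface; each call is a full electronic-structure run)
    coords : (N, 3) array of nuclear coordinates
    h      : finite-difference step
    Returns an array of shape (3N, 3N, 3).
    """
    x0 = np.asarray(coords, dtype=float).ravel()
    n = x0.size
    hess = np.zeros((n, n, 3))
    for i in range(n):
        for j in range(n):
            mu = np.zeros(3)
            for si in (+1, -1):
                for sj in (+1, -1):
                    x = x0.copy()
                    x[i] += si * h
                    x[j] += sj * h
                    mu += si * sj * np.asarray(dipole(x.reshape(-1, 3)))
            hess[i, j] = mu / (4.0 * h * h)
    # Cost: O((3N)^2) dipole evaluations, i.e. O((3N)^2) electronic-structure
    # calculations, plus finite-difference noise, which is what the analytic
    # third-order expression avoids.
    return hess
```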
Abstract:
We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc-blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero-transmission point in a crossed-polarizer detection geometry. In contrast to other techniques for undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for the reconstruction of the true signal, as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and also discuss them in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration will be given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].
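The benefit of subtracting two measurements at opposite optical biases can be made plausible in a simple scalar picture (our back-of-the-envelope estimate, not the Jones-matrix analysis of the paper): with a small optical bias ±θ added to the THz-induced phase retardation φ near the zero-transmission point of crossed polarizers, the transmitted intensities obey

```latex
I(\pm\theta) \;\propto\; \sin^{2}\!\Big(\pm\theta + \tfrac{\phi}{2}\Big),
\qquad
I(+\theta) - I(-\theta) \;\propto\; \sin(2\theta)\,\sin\phi ,
```

so the difference of the two waveforms is strictly odd in φ: the even-order distortion terms present in each individual measurement cancel, while the sin(2θ) prefactor provides the bias-controlled gain.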
Abstract:
We study the effects of a finite cubic volume with twisted boundary conditions on pseudoscalar mesons. We apply Chiral Perturbation Theory in the p-regime and introduce the twist by means of a constant vector field. The corrections to masses, decay constants, pseudoscalar coupling constants and form factors are calculated at next-to-leading order. We detail the derivations and compare with results available in the literature. In some cases there is disagreement due to a different treatment of new extra terms generated by the breaking of cubic invariance. We advocate treating such terms as renormalization terms of the twisting angles and reabsorbing them in the on-shell conditions. We confirm that the corrections to masses, decay constants and pseudoscalar coupling constants are related by chiral Ward identities. Furthermore, we show that the matrix elements of the scalar (resp. vector) form factor satisfy the Feynman–Hellmann theorem (resp. the Ward–Takahashi identity). To show the Ward–Takahashi identity we construct an effective field theory for charged pions which is invariant under electromagnetic gauge transformations and which reproduces the results obtained with Chiral Perturbation Theory at vanishing momentum transfer. This generalizes considerations previously published for periodic boundary conditions to twisted boundary conditions. Another method to estimate the corrections in finite volume is provided by asymptotic formulae. Asymptotic formulae were introduced by Lüscher and relate the corrections of a given physical quantity to an integral of a specific amplitude evaluated in infinite volume. Here, we revisit the original derivation of Lüscher and generalize it to finite volume with twisted boundary conditions. In some cases the derivation involves complications due to extra terms generated by the breaking of cubic invariance. We isolate such terms and treat them as renormalization terms, just as before. In that way, we derive asymptotic formulae for masses, decay constants, pseudoscalar coupling constants and scalar form factors. At the same time, we also derive asymptotic formulae for the renormalization terms. We apply all these formulae in combination with Chiral Perturbation Theory and estimate the corrections beyond next-to-leading order. We show that the asymptotic formulae for masses, decay constants and pseudoscalar coupling constants are related by chiral Ward identities. A similar relation connects, in an independent way, the asymptotic formulae for the renormalization terms. We check these relations for charged pions through a direct calculation. To conclude, a numerical analysis quantifies the importance of finite-volume corrections at next-to-leading order and beyond. We perform a generic analysis and illustrate two possible applications to real simulations.
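For orientation (a generic statement of the well-known behaviour, not a result quoted from this work), asymptotic formulae of the Lüscher type encode the fact that p-regime finite-volume corrections to quantities such as the pion mass are exponentially suppressed in the box size L, with the characteristic large-volume scaling

```latex
M_\pi(L) - M_\pi \;\sim\; \frac{e^{-M_\pi L}}{(M_\pi L)^{3/2}},
\qquad M_\pi L \gg 1 ,
```

the prefactor being controlled by an infinite-volume amplitude; the derivations summarized above generalize the structure of such formulae to twisted boundary conditions.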
Abstract:
The aim of this work is to solve a question raised for average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions in the important case where the oversampling rate is minimal. Moreover, the optimality of the obtained solution is established.
Abstract:
Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion average values approximate heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented in the framework of the Analytic Coarse Mesh Finite Difference Method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Physical fluxes are then transformed into modal fluxes in the eigenspace of the diffusion matrix. It is thus possible to introduce interface flux discontinuity jumps as the difference of heterogeneous and homogeneous modal fluxes, instead of introducing interface discontinuity factors as the ratio of heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
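A small numerical sketch of the modal transformation described here (illustrative numbers only, not taken from COBAYA3 or from any benchmark): the multigroup diffusion matrix of a homogenized node is diagonalized, the physical group fluxes are mapped to modal fluxes through the eigenvector matrix, and the interface correction is then expressed as a modal-flux difference rather than a physical-flux ratio:

```python
import numpy as np

# Hypothetical 2-group diffusion matrix of a homogenized node (illustrative values only).
A = np.array([[ 0.12, -0.02],
              [-0.01,  0.05]])

# Diagonalization: columns of R are the eigenvectors, so A = R @ diag(lam) @ inv(R).
lam, R = np.linalg.eig(A)
Rinv = np.linalg.inv(R)

# Hypothetical heterogeneous (reference) and homogenized interface group fluxes.
phi_het = np.array([1.00, 0.35])
phi_hom = np.array([0.96, 0.40])

# Physical fluxes -> modal fluxes in the eigenspace of the diffusion matrix.
psi_het = Rinv @ phi_het
psi_hom = Rinv @ phi_hom

# Classical interface discontinuity factors: ratios of physical fluxes.
idf_ratio = phi_het / phi_hom

# Modal interface correction: differences (jumps) of modal fluxes.
modal_jump = psi_het - psi_hom

print("eigenvalues of the diffusion matrix:", lam)
print("physical-flux discontinuity factors (ratios):", idf_ratio)
print("modal-flux discontinuity jumps (differences):", modal_jump)
```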
Abstract:
We introduce in this paper a method to calculate the Hessenberg matrix of a sum of measures from the Hessenberg matrices of the component measures. Our method extends the spectral techniques used by G. Mantica to calculate the Jacobi matrix associated with a sum of measures from the Jacobi matrices of each of the measures. We apply this method to approximate the Hessenberg matrix associated with a self-similar measure and compare it with the result obtained by an earlier method for self-similar measures that uses a fixed-point theorem for moment matrices. Results are given for a series of classical examples of self-similar measures. Finally, we also apply the method introduced in this paper to some examples of sums of (not self-similar) measures, obtaining the exact value of the sections of the Hessenberg matrix.
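For concreteness, the object under discussion can be illustrated by a brute-force computation (not the spectral technique of the paper): for a sum of two finite discrete measures, the orthonormal polynomials can be built by Gram-Schmidt with respect to the combined inner product, and the Jacobi matrix is the matrix of multiplication by x in that basis:

```python
import numpy as np

def jacobi_section(points, weights, n):
    """Leading n x n section of the Jacobi matrix of the discrete measure
    mu = sum_k weights[k] * delta_{points[k]}, via Gram-Schmidt on monomials."""
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)

    # Orthonormal polynomials p_0, ..., p_{n-1}, stored by their values on the support.
    basis = []
    for k in range(n):
        v = points**k
        for q in basis:
            v = v - np.sum(weights * v * q) * q
        basis.append(v / np.sqrt(np.sum(weights * v * v)))

    # J[i, j] = <x p_j, p_i>_mu  (symmetric tridiagonal for a measure on the real line).
    return np.array([[np.sum(weights * points * basis[j] * basis[i]) for j in range(n)]
                     for i in range(n)])

# Sum of two (not self-similar) discrete measures supported on different grids.
x1, w1 = np.linspace(-1.0, 0.2, 8), np.full(8, 0.05)
x2, w2 = np.linspace(0.0, 1.0, 5), np.full(5, 0.12)
print(np.round(jacobi_section(np.concatenate([x1, x2]), np.concatenate([w1, w2]), 4), 4))
```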
Abstract:
This work presents a method for the analysis of timber composite beams which takes into account the slip in the connection system, based on assembling the flexibility matrix of the whole structure. The method builds on that proposed by Tommola and Jutila (2001) and extends it to the case of a gap between the two pieces with an arbitrary location at the first connector, which notably broadens its practical application. The addition of the gap makes it possible to model a cracked zone in the concrete topping, as well as the case in which forming produces the gap. The consideration of stresses induced by changes in temperature and moisture content is also described, and the concept of equivalent eccentricity is generalized. The method has important advantages with respect to the current European Standard EN 1995-1-1:2004, as it is able to deal with any type of load, variable sections, discrete and non-regular connection systems, a gap between the two pieces, and variations in temperature and moisture content. Although it could be applied to any structural system, it is especially suited to simply supported and continuous beams. Worked examples are presented at the end, showing that the arrangement of the connection notably modifies the shear force distribution. A first interpretation of the results is made on the basis of strut-and-tie theory. The examples show that the use of EC-5 is unsafe when, as a rule of thumb, the strut or compression field between the support and the first connector makes an angle of less than 60º with the beam axis.
Abstract:
Most flows of engineering relevance remain unexplored in the context of global instability theory, for two reasons: the difficulties associated with the analysis of turbulent flows, and the formidable computational resources required to solve the eigenvalue problem associated with the instability analysis of three-dimensional base flows, also known as the TriGlobal problem. This thesis addresses the problem associated with three-dimensionality. A general methodology has been developed to solve large-scale modal global linear instability problems by coupling time-stepping methods, developed in this work, with the second-order computational fluid dynamics codes widely used in industry: the eigenvalue problem associated with the instability analysis is solved by projection onto Krylov subspaces, with the particular feature that these subspaces are generated by time integration of an initial vector using any computational fluid dynamics code.
Three flows, challenging in terms of the computational resources required and of their physical complexity, have been chosen to demonstrate the present methodology: (i) the flow inside a wall-bounded three-dimensional lid-driven cavity, (ii) the flow past a cylinder fitted with helical strakes along its span and (iii) the flow over an inhomogeneous three-dimensional open cavity. For validation, the TriGlobal problem associated with the three-dimensional lid-driven cavity has been solved using the time-stepping method developed here coupled with the incompressible solver of the open-source toolbox OpenFOAM®; the results are in excellent agreement with the literature. Moreover, the application of the present time-stepping methodology to the two open three-dimensional flows has provided, for the first time, insight into their three-dimensional transition. In addition, the methodology has been adapted to solve adjoint TriGlobal problems, enabling flow control based on modifications of the global instabilities; validation and TriGlobal examples are presented. Finally, it has been demonstrated that the moderate computational resources required to solve the TriGlobal eigenvalue problem with this numerical method, together with its versatility in coupling to any aerodynamic code, make global instability analysis and control of complex flows of industrial relevance feasible.
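A minimal sketch of the time-stepping Krylov idea described in this abstract (a generic illustration; the propagator below is a stand-in matrix exponential rather than a call to an actual CFD solver such as OpenFOAM): the Krylov subspace is generated by repeated application of the time integrator to an initial vector, an Arnoldi factorization yields Ritz values of the propagator exp(A*dt), and these are mapped back to eigenvalues of the linearized operator A:

```python
import numpy as np
from scipy.linalg import expm

def timestepper_eigs(propagate, n, dt, m=30, seed=0):
    """Approximate leading eigenvalues of the linearized operator A using only a
    black-box time integrator v -> exp(A*dt) v, via an Arnoldi factorization.

    propagate : callable advancing a state vector over one interval dt
    n         : state dimension
    m         : Krylov subspace dimension
    """
    rng = np.random.default_rng(seed)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = propagate(V[:, j])              # one call to the (CFD) time integrator
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    mu = np.linalg.eigvals(H[:m, :m])       # Ritz values of the propagator exp(A*dt)
    return np.log(mu.astype(complex)) / dt  # eigenvalues of A (defined mod 2*pi*i/dt)

if __name__ == "__main__":
    # Stand-in linear operator; in the thesis this role is played by the linearized
    # Navier-Stokes operator, accessed only through time integration in a CFD code.
    n, dt = 200, 0.1
    rng = np.random.default_rng(1)
    A = -np.diag(np.linspace(0.5, 5.0, n)) + 0.1 * rng.standard_normal((n, n))
    P = expm(A * dt)
    eigs = timestepper_eigs(lambda v: P @ v, n, dt)
    print(np.sort_complex(eigs)[-5:])       # least-stable approximate eigenvalues
```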
Abstract:
Most large dynamical systems are thought to have ergodic dynamics, whereas small systems may not have free interchange of energy between degrees of freedom. This assumption is made in many areas of chemistry and physics, ranging from nuclei to reacting molecules and on to quantum dots. We examine the transition to facile vibrational energy flow in a large set of organic molecules as molecular size is increased. Both analytical and computational results based on local random matrix models describe the transition to unrestricted vibrational energy flow in these molecules. In particular, the models connect the number of states participating in intramolecular energy flow to simple molecular properties such as the molecular size and the distribution of vibrational frequencies. The transition itself is governed by a local anharmonic coupling strength and a local state density. The theoretical results for the transition characteristics compare well with those implied by dilution factors measured by IR fluorescence spectroscopy, as reported by Stewart and McDonald [Stewart, G. M. & McDonald, J. D. (1983) J. Chem. Phys. 78, 3907–3915].
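A cartoon of the local random matrix picture (our own sketch with arbitrary parameters, not the models or data of the paper): zero-order vibrational states are coupled only to near-lying states in state space with a local coupling of fixed strength, and the number of states participating in energy flow can be gauged by the participation numbers of the eigenstates:

```python
import numpy as np

def participation_numbers(n_states=400, coupling=0.05, bandwidth=10, seed=0):
    """Eigenstate participation numbers of a banded ('local') random matrix:
    random zero-order energies on the diagonal, Gaussian couplings of r.m.s.
    size `coupling` restricted to states within `bandwidth` of one another."""
    rng = np.random.default_rng(seed)
    H = np.diag(np.sort(rng.uniform(0.0, 1.0, n_states)))   # zero-order energies
    for i in range(n_states):
        for j in range(i + 1, min(i + bandwidth + 1, n_states)):
            H[i, j] = H[j, i] = coupling * rng.standard_normal()
    _, vecs = np.linalg.eigh(H)
    return 1.0 / np.sum(vecs**4, axis=0)   # zero-order states mixed into each eigenstate

if __name__ == "__main__":
    # Increasing the local coupling strength (or the local state density) drives the
    # transition from localized eigenstates toward unrestricted vibrational energy flow.
    for g in (0.001, 0.01, 0.05):
        print(f"coupling {g:.3f}: mean participation number "
              f"{participation_numbers(coupling=g).mean():.1f}")
```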