992 results for S-matrix theory
Abstract:
Let m and n be integers greater than 1. Given lattices A and B of dimensions m and n, respectively, a technique for constructing from them a lattice of dimension m+n-1 is introduced. Furthermore, if A and B possess bases satisfying certain conditions, then a second technique yields a lattice of dimension m+n-2. The relevant parameters of the new lattices are given in terms of the respective parameters of A, B, and a lattice C isometric to a sublattice of A and B. Denser sphere packings than previously known ones in dimensions 52, 68, 84, 248, 520, and 4098 are obtained.
Abstract:
The present work describes an alternative methodology for identification of aeroelastic stability over a range of varying parameters. The analysis is performed in the time domain based on Lyapunov stability and solved by convex optimization algorithms. The theory is outlined and simulations are carried out on a benchmark system to illustrate the method. The classical methodology based on the analysis of the system's eigenvalues is presented for comparing the results and validating the approach. The aeroelastic model is represented in state-space format and the unsteady aerodynamic forces are written in the time domain using rational function approximation. The problem is formulated as a polytopic differential inclusion system, and the conceptual idea can be used in two different applications. In the first application the method verifies aeroelastic stability over a range of air density (or its equivalent altitude range). In the second, stability is verified over a range of velocities. These analyses are in contrast to the classical discrete analysis performed at fixed air-density/velocity values. It is shown that this method is efficient at identifying stability regions in the flight envelope and offers promise for robust flutter identification.
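As a minimal illustration of the eigenvalue test and the Lyapunov-based test compared above, consider the toy 2-state system below. The coefficients are made up for illustration, and the Lyapunov condition is checked here by solving the Lyapunov equation directly rather than by the paper's convex-optimization (LMI) formulation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical 2-state system matrix (illustrative values only).
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])

# Classical test: all eigenvalues in the open left half-plane.
eig_stable = np.all(np.real(np.linalg.eigvals(A)) < 0)

# Lyapunov test: solve A^T P + P A = -Q for Q > 0 and check P > 0.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
lyap_stable = np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
```

For a linear time-invariant system the two tests agree; the value of the Lyapunov formulation is that, as the abstract notes, it extends to a polytope of system matrices via convex optimization.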
Abstract:
We have used the periodic quantum-mechanical method with density functional theory at the B3LYP hybrid-functional level in order to study the doping of SnO2 with pentavalent Sb5+. A 72-atom 2×3×2 SnO2 supercell (Sn24O48) was employed in the calculations. For SnO2:4%Sb, one Sn atom was replaced by one Sb atom; for SnO2:8%Sb, two Sn atoms were replaced by two Sb atoms. Sb doping leads to an enhancement in the electrical conductivity of this material, because these ions substitute for Sn4+ in the SnO2 matrix, raising the electronic density in the conduction band owing to the donor-like behavior of the dopant atom. This result shows that the band-gap magnitude depends on the doping concentration: the energy value found for SnO2:4%Sb was 2.8 eV, whereas for SnO2:8%Sb it was 2.7 eV. It was also verified that the difference between the Fermi level and the bottom of the conduction band is directly related to the doping concentration.
Abstract:
This work addresses the problem of robust model predictive control (MPC) of systems with model uncertainty. The case of zone control of multi-variable stable systems with multiple time delays is considered. The usual approach to this kind of problem is the inclusion of a non-linear cost constraint in the control problem. The control action is then obtained at each sampling time as the solution to a non-linear programming (NLP) problem, which for high-order systems can be computationally expensive. Here, the robust MPC problem is formulated as a linear matrix inequality problem that can be solved in real time with a fraction of the computational effort. The proposed approach is compared with the conventional robust MPC and tested through the simulation of a reactor system from the process industry.
Abstract:
We review the status of integrable models from the point of view of their dynamics and integrability conditions. A few integrable models are discussed in detail. We comment on the use made of them in string theory. We also discuss the SO(6) symmetric Hamiltonian with SO(6) boundary. This work is especially prepared for the 70th anniversaries of André Swieca (in memoriam) and Roland Koberle.
Abstract:
Starting from the Fisher matrix for counts in cells, we derive the full Fisher matrix for surveys of multiple tracers of large-scale structure. The key step is the classical approximation, which allows us to write the inverse of the covariance of the galaxy counts in terms of the naive matrix inverse of the covariance in a mixed position-space and Fourier-space basis. We then compute the Fisher matrix for the power spectrum in bins of the 3D wavenumber, the Fisher matrix for functions of position (or redshift z) such as the linear bias of the tracers and/or the growth function, and the cross-terms of the Fisher matrix that express the correlations between estimations of the power spectrum and estimations of the bias. When the bias and growth function are fully specified, and the Fourier-space bins are large enough that the covariance between them can be neglected, the Fisher matrix for the power spectrum reduces to the widely used result that was first derived by Feldman, Kaiser & Peacock. Assuming isotropy, a fully analytical calculation of the Fisher matrix in the classical approximation can be performed in the case of a constant-density, volume-limited survey.
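For reference, the single-tracer Feldman–Kaiser–Peacock limit mentioned above is usually quoted in the literature in the following form (notation here is the standard one, not necessarily that of the paper):

```latex
% FKP Fisher matrix for parameters \theta_i of the power spectrum P(k):
F_{ij} \;=\; \frac{1}{2}\int \frac{d^3k}{(2\pi)^3}\,
  \frac{\partial \ln P(k)}{\partial \theta_i}\,
  \frac{\partial \ln P(k)}{\partial \theta_j}\;
  V_{\mathrm{eff}}(k),
\qquad
V_{\mathrm{eff}}(k) \;=\; \int d^3r
  \left[\frac{\bar n(\mathbf r)\,P(k)}{1+\bar n(\mathbf r)\,P(k)}\right]^{2},
```

where \(\bar n(\mathbf r)\) is the mean number density of tracers and \(V_{\mathrm{eff}}\) is the effective survey volume.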
Abstract:
We investigate the classical integrability of the Alday-Arutyunov-Frolov model, and show that the Lax connection can be reduced to a simpler 2 x 2 representation. Based on this result, we calculate the algebra between the L-operators and find that it has a highly non-ultralocal form. We then employ, and suitably generalize, the regularization technique proposed by Maillet for a simpler class of non-ultralocal models, and find the corresponding r- and s-matrices. We also make a connection between the operator-regularization method proposed earlier for the quantum case and Maillet's symmetric-limit regularization prescription used for non-ultralocal algebras in the classical theory.
Abstract:
There is special interest in the incorporation of metallic nanoparticles in a surrounding dielectric matrix for obtaining composites with desirable characteristics, such as surface plasmon resonance, which can be used in photonics and sensing, and controlled surface electrical conductivity. We investigated nanocomposites produced through metallic ion implantation into an insulating substrate, where the implanted metal self-assembles into nanoparticles. During implantation, the excess of metal-atom concentration above the solubility limit leads to nucleation and growth of metal nanoparticles, driven by the temperature and temperature gradients within the implanted sample, including the beam-induced thermal characteristics. The nanoparticles nucleate near the maximum of the implantation depth profile (projected range), which can be estimated by computer simulation using TRIDYN. This is a Monte Carlo simulation program based on the TRIM (Transport and Range of Ions in Matter) code that takes into account compositional changes in the substrate due to two factors: previously implanted dopant atoms, and sputtering of the substrate surface. Our study suggests that the nanoparticles form a two-dimensional array buried a few nanometers below the substrate surface. More specifically, we have studied Au/PMMA (polymethylmethacrylate), Pt/PMMA, Ti/alumina and Au/alumina systems. Transmission electron microscopy of the implanted samples showed the metallic nanoparticles formed in the insulating matrix. The nanocomposites were characterized by measuring the resistivity of the composite layer as a function of the implanted dose. These experimental results were compared with a model based on percolation theory, in which electron transport through the composite is explained by conduction through a random resistor network formed by the metallic nanoparticles. Excellent agreement was found between the experimental results and the predictions of the theory.
It was possible to conclude, in all cases, that the conductivity process is due only to percolation (when the conducting elements are in geometric contact) and that the contribution from tunneling conduction is negligible.
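The geometric-contact picture behind this conclusion can be illustrated with a toy site-percolation check: below the percolation threshold no spanning cluster of "conducting" sites exists, above it one does. This is a sketch of the percolation concept only, not the authors' resistor-network model:

```python
import random

def percolates(n, p, seed=0):
    """Check whether a randomly occupied n x n site lattice has a
    top-to-bottom connected (percolating) cluster of occupied sites."""
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n + 2))        # union-find, +2 virtual nodes
    TOP, BOT = n * n, n * n + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            if not occ[i][j]:
                continue
            idx = i * n + j
            if i == 0:
                union(idx, TOP)            # touches top edge
            if i == n - 1:
                union(idx, BOT)            # touches bottom edge
            if i > 0 and occ[i - 1][j]:
                union(idx, idx - n)        # join occupied neighbor above
            if j > 0 and occ[i][j - 1]:
                union(idx, idx - 1)        # join occupied neighbor left
    return find(TOP) == find(BOT)

# Well below / above the 2D site-percolation threshold (about 0.593),
# the lattice typically does not / does span:
spans_low = percolates(50, 0.3)
spans_high = percolates(50, 0.8)
```

In the experiments, the analogous control parameter is the implanted dose, which sets the density of metallic nanoparticles.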
Abstract:
Research and theory on second-language reading have expanded greatly in recent years. It is through reading that learners access much information about the target language and culture, and consequently reading is an important part of almost all language programs across stages of acquisition. The purpose of this article is to offer informed suggestions for the foreign-language instructor of reading. The ideas given in this paper constitute a collaborative project that developed as part of a graduate seminar on L2 Reading and Writing taught at Washington University in St. Louis.
Abstract:
This thesis is focused on the financial model for interest rates called the LIBOR Market Model. In the appendixes, we provide the necessary mathematical theory. In the inner chapters, we first define the main interest rates and financial instruments relevant to interest-rate models; then we set up the LIBOR market model, demonstrate its existence, derive the dynamics of forward LIBOR rates, and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models the forward swap rates instead of the LIBOR ones. This model, too, is justified by a theoretical demonstration, and the resulting formula for pricing swaptions coincides with Black's. However, the two models are not compatible from a theoretical point of view. Therefore, we derive various analytical approximating formulae to price swaptions in the LIBOR market model and explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to the markets of both caps and swaptions, together with various examples of application to the historical correlation matrix and the cascade calibration of the forward volatilities to the matrix of implied swaption volatilities provided by the market.
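Black's formula for a caplet, which the thesis justifies within the LIBOR market model, is easy to state concretely. The sketch below uses standard textbook notation; the parameter names and numerical values are illustrative, not taken from the thesis:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_caplet(P_disc, tau, F, K, sigma, T):
    """Black's caplet formula: P_disc is the discount bond price to the
    payment date, tau the accrual fraction, F the forward LIBOR rate,
    K the strike, sigma the implied (Black) volatility, T the fixing time."""
    d1 = (log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return P_disc * tau * (F * norm_cdf(d1) - K * norm_cdf(d2))

# Example: an at-the-money-forward caplet (illustrative numbers).
price = black_caplet(P_disc=0.97, tau=0.5, F=0.03, K=0.03, sigma=0.2, T=1.0)
```

In the LIBOR market model each forward rate is lognormal under its own forward measure, which is exactly what makes this Black-type formula exact for caplets.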
Abstract:
Coupled-cluster (CC) theory is one of the most successful approaches in high-accuracy quantum chemistry. The present thesis makes a number of contributions to the determination of molecular properties and excitation energies within the CC framework. The multireference CC (MRCC) method proposed by Mukherjee and coworkers (Mk-MRCC) has been benchmarked within the singles and doubles approximation (Mk-MRCCSD) for molecular equilibrium structures. It is demonstrated that Mk-MRCCSD yields reliable results for multireference cases where single-reference CC methods fail. At the same time, the present work also illustrates that Mk-MRCC still suffers from a number of theoretical problems and sometimes gives rise to results of unsatisfactory accuracy. To determine polarizability tensors and excitation spectra in the MRCC framework, the Mk-MRCC linear-response function has been derived together with the corresponding linear-response equations. Pilot applications show that Mk-MRCC linear-response theory suffers from a severe problem when applied to the calculation of dynamic properties and excitation energies: The Mk-MRCC sufficiency conditions give rise to a redundancy in the Mk-MRCC Jacobian matrix, which entails an artificial splitting of certain excited states. This finding has established a new paradigm in MRCC theory, namely that a convincing method should not only yield accurate energies, but ought to allow for the reliable calculation of dynamic properties as well. In the context of single-reference CC theory, an analytic expression for the dipole Hessian matrix, a third-order quantity relevant to infrared spectroscopy, has been derived and implemented within the CC singles and doubles approximation. The advantages of analytic derivatives over numerical differentiation schemes are demonstrated in some pilot applications.
Abstract:
We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero transmission point in a crossed polarizer detection geometry. In contrast to other techniques for an undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for the reconstruction of the true signal as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and discuss them also in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration will be given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].
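The opposite-bias subtraction described above can be illustrated with a toy Jones-matrix calculation. This is a pure-rotation model with made-up angles, not the detector-crystal Jones matrices analyzed in the paper:

```python
import numpy as np

def analyzer_y_intensity(theta, phi):
    """Intensity behind a crossed analyzer (y-axis) for an x-polarized
    probe whose polarization is rotated by a small signal angle theta
    on top of an optical bias phi (simplified pure-rotation model)."""
    E_in = np.array([1.0, 0.0])                   # x-polarized probe

    def rot(a):                                   # rotation Jones matrix
        return np.array([[np.cos(a), -np.sin(a)],
                         [np.sin(a),  np.cos(a)]])

    E_out = rot(theta) @ rot(phi) @ E_in
    analyzer = np.array([[0.0, 0.0], [0.0, 1.0]])  # pass y-component only
    return float(np.sum(np.abs(analyzer @ E_out) ** 2))

theta, phi = 1e-3, 2e-2                           # signal << bias
diff = analyzer_y_intensity(theta, phi) - analyzer_y_intensity(theta, -phi)
# diff ~ 4*theta*phi: subtracting the two opposite-bias measurements
# leaves a term linear in the signal, cancelling the quadratic distortion.
```

In this model the detected intensity is sin²(θ+φ), so the subtraction gives sin(2θ)sin(2φ) ≈ 4θφ, linear in the signal, which is the essence of the distortion-free enhancement.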
Abstract:
We study the effects of a finite cubic volume with twisted boundary conditions on pseudoscalar mesons. We apply Chiral Perturbation Theory in the p-regime and introduce the twist by means of a constant vector field. The corrections of masses, decay constants, pseudoscalar coupling constants and form factors are calculated at next-to-leading order. We detail the derivations and compare with results available in the literature. In some cases there is disagreement due to a different treatment of new extra terms generated from the breaking of the cubic invariance. We advocate treating such terms as renormalization terms of the twisting angles and reabsorbing them in the on-shell conditions. We confirm that the corrections of masses, decay constants and pseudoscalar coupling constants are related by means of chiral Ward identities. Furthermore, we show that the matrix elements of the scalar (resp. vector) form factor satisfy the Feynman–Hellmann theorem (resp. the Ward–Takahashi identity). To show the Ward–Takahashi identity we construct an effective field theory for charged pions which is invariant under electromagnetic gauge transformations and which reproduces the results obtained with Chiral Perturbation Theory at vanishing momentum transfer. This generalizes considerations previously published for periodic boundary conditions to twisted boundary conditions. Another method to estimate the corrections in finite volume is provided by asymptotic formulae. Asymptotic formulae were introduced by Lüscher and relate the corrections of a given physical quantity to an integral of a specific amplitude, evaluated in infinite volume. Here, we revise the original derivation of Lüscher and generalize it to finite volume with twisted boundary conditions. In some cases, the derivation involves complications due to extra terms generated from the breaking of the cubic invariance. We isolate such terms and treat them as renormalization terms, just as before.
In that way, we derive asymptotic formulae for masses, decay constants, pseudoscalar coupling constants and scalar form factors. At the same time, we also derive asymptotic formulae for renormalization terms. We apply all these formulae in combination with Chiral Perturbation Theory and estimate the corrections beyond next-to-leading order. We show that the asymptotic formulae for masses, decay constants and pseudoscalar coupling constants are related by means of chiral Ward identities. A similar relation independently connects the asymptotic formulae for renormalization terms. We check these relations for charged pions through a direct calculation. To conclude, a numerical analysis quantifies the importance of finite-volume corrections at next-to-leading order and beyond. We perform a generic analysis and illustrate two possible applications to real simulations.
Abstract:
The aim of this work is to solve a question raised for average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
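The key algebraic condition above, the existence of a left inverse of an "oversampled" (tall) matrix of full column rank, can be shown in a purely numeric toy example. The paper's object is a polynomial matrix/matrix pencil; this sketch, with a made-up constant matrix, only illustrates the left-inverse condition itself:

```python
import numpy as np

# Hypothetical tall matrix (more rows than columns), mimicking the
# oversampled setting: a left inverse exists iff it has full column rank.
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

G_left = np.linalg.pinv(G)   # Moore-Penrose pseudoinverse is a left inverse
check = G_left @ G           # equals the 2x2 identity (up to round-off)
```

In the generalized sampling problem the entries of G are polynomials in the frequency variable, and a polynomial left inverse corresponds to FIR (compactly supported) reconstruction filters.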
Abstract:
Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion average values approximate heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented within the framework of the Analytic Coarse Mesh Finite Difference method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Physical fluxes are then transformed into modal fluxes in the eigenspace of the diffusion matrix. It is thus possible to introduce interface flux discontinuity jumps as the difference of heterogeneous and homogeneous modal fluxes, instead of introducing interface discontinuity factors as the ratio of heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
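The physical-to-modal flux transformation described above is a plain change of basis via the eigenvectors of the multigroup matrix. The sketch below uses a hypothetical 2-group coupling matrix with made-up coefficients, not data from COBAYA3:

```python
import numpy as np

# Hypothetical 2-group diffusion-type coupling matrix (illustrative values).
M = np.array([[ 1.20, -0.30],
              [-0.05,  0.90]])

# Diagonalize: columns of V are eigenvectors spanning the modal space.
lam, V = np.linalg.eig(M)

# Transform physical group fluxes into modal fluxes and back.
phi_phys = np.array([1.0, 0.4])
phi_modal = np.linalg.solve(V, phi_phys)   # modal components
back = V @ phi_modal                       # recovers the physical fluxes
```

In the modal basis the coupled multigroup equations decouple (each mode evolves with its own eigenvalue), which is what allows the interface corrections to be posed as jumps of the modal fluxes.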