979 results for "approximate calculation of sums"
Abstract:
First-principles quantum-mechanical techniques, based on density functional theory (B3LYP level), were employed to study the electronic structure of ordered and deformed asymmetric models for Ba0.5Sr0.5TiO3. Electronic properties are analyzed, and the relevance of the present theoretical and experimental results to the photoluminescence behavior is discussed. The presence of localized electronic levels in the band gap, due to the symmetry break, would be responsible for the visible photoluminescence of the amorphous state at room temperature. Thin films were synthesized by a soft chemical processing route. Their structure was confirmed by X-ray data, and the corresponding photoluminescence properties were measured.
Abstract:
This paper presents some initial concepts for including reactive power in linear methods for computing Available Transfer Capability (ATC). An approximation for computing the reactive power flows is proposed, based on the exact circle equations for the transmission-line complex power flow; the ATC is then determined using active power distribution factors. The transfer capability can be increased using flow sensitivities, which indicate the group of buses whose reactive power injections should be modified in order to remove overloads in the transmission lines. Results of the ATC computation and of the use of the flow sensitivities are presented for the Cigré 32-bus system. © 2004 IEEE.
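The linear ATC step described above can be made concrete with a small sketch. The snippet below (Python) is a minimal illustration, not the paper's method: it computes the ATC of a single source-to-sink transfer from active power transfer distribution factors (PTDFs), taking the most constraining line. The array names and numerical values are invented for illustration and are unrelated to the Cigré 32-bus system.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): ATC between a source and a
# sink bus from linear power transfer distribution factors (PTDFs).
# 'ptdf' gives the MW change on each line per MW transferred source -> sink;
# the ATC is limited by the line that reaches its limit first.

def atc_from_ptdf(ptdf, line_flow, line_limit):
    """ptdf, line_flow, line_limit: 1-D arrays indexed by transmission line."""
    margins = []
    for d, f, fmax in zip(ptdf, line_flow, line_limit):
        if abs(d) < 1e-6:          # this transfer does not affect the line
            continue
        headroom = (fmax - f) if d > 0 else (-fmax - f)
        margins.append(headroom / d)
    return min(margins)

# Illustrative numbers only (not the Cigré 32-bus system).
ptdf       = np.array([0.45, -0.30, 0.15])
line_flow  = np.array([80.0, -40.0, 20.0])   # MW
line_limit = np.array([100.0, 90.0, 60.0])   # MW
print(f"ATC = {atc_from_ptdf(ptdf, line_flow, line_limit):.1f} MW")
```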
Abstract:
Proton radiation therapy is a precise form of radiation therapy, but avoiding damage to critical normal tissues and preventing geographical tumor misses require accurate knowledge of the dose delivered to the patient, and the verification of the patient's position demands a precise imaging technique. In proton therapy facilities, X-ray computed tomography (xCT) is the preferred technique for treatment planning. This situation has been changing with the development of proton accelerators for health care and the increase in the number of treated patients; in fact, protons could be more efficient than xCT for this task. One essential difficulty in proton computed tomography (pCT) image reconstruction comes from the scattering of the protons inside the target, due to the numerous small-angle deflections by nuclear Coulomb fields. The purpose of this study is the comparison of an analytical formulation for the determination of the beam lateral deflection, based on Molière's theory and Rutherford scattering, with Monte Carlo calculations by the SRIM 2008 and MCNPX codes. © 2010 American Institute of Physics.
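As background for the analytical side of such a comparison, a widely used Gaussian approximation to Molière multiple Coulomb scattering is the Highland (PDG) formula below; it is quoted here only for orientation and is not necessarily the exact formulation evaluated in the paper.

```latex
% Highland (PDG) approximation to the Moliere multiple-scattering angle,
% quoted as general background: x/X_0 is the target thickness in radiation
% lengths, p and \beta c the proton momentum and velocity, z its charge number.
\theta_0 \;=\; \frac{13.6\ \mathrm{MeV}}{\beta c\, p}\; z\,
\sqrt{\frac{x}{X_0}}\,\left[\,1 + 0.038\,\ln\frac{x}{X_0}\right]
```

The lateral deflection of the beam then grows roughly as this angle times the traversed depth, which is what makes the scattering term decisive for pCT spatial resolution.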
Abstract:
For intricate automotive systems that enclose several components, such as gearboxes, an important aspect of the design is defining the correct assembly parameters. A proper assembly can ensure optimized operating conditions, and therefore the components can achieve a longer life. In the case of the support bearings applied to front-axle lightweight differentials, the assembly preload is a major factor for an adequate performance of the system. During the design phase it is imperative to define reference values for this preload, so that the application meets its requirements. With the assistance of computer simulations, it is possible to determine an optimum condition of operation, i.e., an optimum preload, which would increase the system reliability. This paper presents a study on the influence of preload on the rating life of tapered roller bearings applied to lightweight front-axle differentials, evaluating how preload affects key parameters such as rating life and displacement of components, taking into account the flexibility of the surrounding differential housing. Copyright © 2012 SAE International.
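For orientation, the basic rating life of a roller bearing per ISO 281 has the simple form below. The study described above goes beyond this, letting the preload and the flexible housing modify the load distribution and hence the equivalent load, but the formula shows why the load level dominates the life estimate; it is general background, not the paper's full model.

```latex
% ISO 281 basic rating life of a roller bearing (general background, not the
% paper's full model): C is the basic dynamic load rating, P the equivalent
% dynamic bearing load; L_10 is in millions of revolutions.
L_{10} \;=\; \left(\frac{C}{P}\right)^{10/3}
```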
Abstract:
A software tool developed in the Delphi programming language is presented for computing a reservoir's annual regulated active storage, based on the sequent-peak algorithm. Mathematical models used for that purpose generally require extended hydrological series, and the analysis of those series is usually performed with spreadsheets or graphical representations. On that basis, a program for the calculation of reservoir active capacity was developed. An example calculation is shown for 30 years of historical monthly mean flow data (from 1977 to 2009) from the Corrente River, located in the São Francisco River Basin, Brazil. As an additional tool, an interface was developed to support water resources management, helping to manipulate data and to point out information that would be of interest to the user. Moreover, with that interface, irrigation districts where water consumption is higher can be analyzed as a function of specific seasonal water demand situations. From a practical application, it is possible to conclude that the program performs the calculation originally proposed. It was designed to keep information organized and retrievable at any time, and to show simulations of seasonal water demands throughout the year, contributing elements of study for reservoir projects. With this functionality, the program is an important tool for decision making in water resources management.
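The sequent-peak computation mentioned in this abstract can be summarized in a few lines. The sketch below (Python, not the Delphi program described) tracks the cumulative deficit of inflow against demand and reports its largest value as the required active storage; the inflow and demand figures are invented for illustration, not the Corrente River data.

```python
# Minimal sketch of the sequent-peak algorithm for required active storage.
# Inflows and demands are monthly volumes; the required capacity is the largest
# cumulative deficit of demand minus inflow seen along the series.

def sequent_peak(inflows, demands):
    deficit = 0.0          # cumulative deficit since the last "peak"
    capacity = 0.0         # largest deficit so far = required active storage
    for q_in, q_dem in zip(inflows, demands):
        deficit = max(0.0, deficit + q_dem - q_in)
        capacity = max(capacity, deficit)
    return capacity

# Illustrative data only (hm3/month), not the Corrente River series.
inflows = [50, 40, 30, 20, 15, 10, 12, 18, 25, 35, 45, 55]
demands = [30] * 12
print(f"Required active storage: {sequent_peak(inflows, demands):.1f} hm3")
```

In practice the series is often processed twice in sequence to account for carry-over deficits at the end of the record; the single pass above shows only the core recursion.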
Abstract:
We set up sum rules for heavy Λ decays in a full QCD calculation which, in the heavy quark mass limit, incorporates the symmetries of heavy quark effective theory. For the semileptonic Λc decay we obtain reasonable agreement with experiment. For the Λb semileptonic decay we find, at the zero recoil point, a violation of the heavy quark symmetry of about 20%. © 1998 Published by Elsevier Science B.V. All rights reserved.
Abstract:
The numerical renormalization-group method was originally developed to calculate the thermodynamical properties of impurity Hamiltonians. A recently proposed generalization capable of computing dynamical properties is discussed. As illustrative applications, essentially exact results for the impurity spectral densities of the spin-degenerate Anderson model and of a model for electronic tunneling between two centers in a metal are presented. © 1991.
Abstract:
We calculate the relic abundance of mixed axion/neutralino cold dark matter which arises in R-parity conserving supersymmetric (SUSY) models wherein the strong CP problem is solved by the Peccei-Quinn (PQ) mechanism with a concomitant axion/saxion/axino supermultiplet. By numerically solving the coupled Boltzmann equations, we include the combined effects of (1) thermal axino production with cascade decays to a neutralino LSP, (2) thermal saxion production and production via coherent oscillations, along with cascade decays and entropy injection, (3) thermal neutralino production and re-annihilation after both axino and saxion decays, (4) gravitino production and decay, and (5) axion production both thermally and via oscillations. For SUSY models with too high a standard neutralino thermal abundance, we find the combined effect of SUSY PQ particles is not enough to lower the neutralino abundance down to its measured value while at the same time respecting bounds on late-decaying neutral particles from BBN. However, models with a standard neutralino underabundance can now be allowed with either neutralino or axion domination of dark matter, and furthermore, these models allow the PQ breaking scale f_a to be pushed up into the 10^14-10^15 GeV range, which is where it is typically expected to be in string theory models.
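The coupled Boltzmann system referred to above involves many species with temperature-dependent production, decay, and annihilation rates. The toy sketch below (Python/SciPy) is only meant to show the generic numerical structure of such a calculation: two schematic species, a decaying heavy state feeding a stable relic that can re-annihilate, evolved with a stiff ODE solver. All rates, units, and initial values are arbitrary and unrelated to the paper's results.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration only: the generic structure of coupled Boltzmann equations
# (dilution + decay + annihilation) solved numerically.  Two species stand in
# for a decaying axino/saxion-like state feeding a neutralino-like LSP; all
# rates and initial conditions are arbitrary, not the paper's.

Gamma = 1e-2    # decay rate of the heavy state (arbitrary units)
sigv  = 1e+3    # relic annihilation cross section times velocity (arbitrary)

def rhs(t, n):
    n_heavy, n_lsp = n
    H = 1.0 / (2.0 * t)                      # radiation-dominated Hubble rate
    dn_heavy = -3.0 * H * n_heavy - Gamma * n_heavy
    dn_lsp   = (-3.0 * H * n_lsp             # dilution by expansion
                - sigv * n_lsp**2            # re-annihilation (n_eq neglected)
                + Gamma * n_heavy)           # injection from decays
    return [dn_heavy, dn_lsp]

sol = solve_ivp(rhs, t_span=(1.0, 1e4), y0=[1e-3, 1e-6],
                method="LSODA", rtol=1e-8, atol=1e-20)
print("final heavy-state density:", sol.y[0, -1])
print("final LSP density:        ", sol.y[1, -1])
```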
Abstract:
We propose a novel mathematical approach for the calculation of near-zero energy states by solving potentials that are isospectral with the original one. For any potential, families of strictly isospectral potentials (with very different shapes) having desirable and adjustable features are generated by the supersymmetric isospectral formalism. The near-zero energy Efimov state of the original potential is effectively trapped in the deep well of the isospectral family, which facilitates a more accurate calculation of the Efimov state. An application to the first excited state of the He-4 trimer is presented.
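For reference, the one-parameter family of strictly isospectral potentials obtained in the supersymmetric construction usually takes the form below (in units with ħ = 2m = 1, built from the normalized ground-state wave function ψ0 of the original potential); the specific family used for the Efimov-state calculation may differ in detail.

```latex
% One-parameter family of strictly isospectral potentials (SUSY QM construction,
% units \hbar = 2m = 1); \psi_0 is the normalized ground-state wave function of
% V(x) and \lambda > 0 or \lambda < -1 labels the family members.
\hat{V}_{\lambda}(x) \;=\; V(x) \;-\; 2\,\frac{d^{2}}{dx^{2}}
\ln\!\bigl[\mathcal{I}(x) + \lambda\bigr],
\qquad
\mathcal{I}(x) \;=\; \int_{-\infty}^{x} \psi_{0}^{2}(x')\,dx'
```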
Abstract:
This work proposes a computational tool to assist power system engineers in the field tuning of power system stabilizers (PSSs) and Automatic Voltage Regulators (AVRs). The outcome of this tool is a range of gain values for these controllers within which there is a theoretical guarantee of stability for the closed-loop system. This range is given as a set of limit values for the static gains of the controllers of interest, in such a way that the engineer responsible for the field tuning of PSSs and/or AVRs can be confident with respect to system stability when adjusting the corresponding static gains within this range. This feature of the proposed tool is highly desirable from a practical viewpoint, since the PSS and AVR commissioning stage always involves some readjustment of the controller gains to account for the differences between the nominal model and the actual behavior of the system. By capturing these differences as uncertainties in the model, this computational tool is able to guarantee stability for the whole uncertain model using an approach based on linear matrix inequalities. It is also important to remark that the tool proposed in this paper can be applied to other types of parameters of either PSSs or Power Oscillation Dampers, as well as to other types of controllers (such as speed governors, for example). To show its effectiveness, applications of the proposed tool to two benchmarks for small-signal stability studies are presented at the end of this paper.
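To make the LMI idea concrete: one common way such guarantees are obtained is to certify quadratic stability of a polytopic uncertain model by finding a single Lyapunov matrix that works at every vertex of the polytope. The sketch below (Python with CVXPY) illustrates this generic test on two invented 2x2 vertex matrices; it is not the formulation or the models used by the tool described in the paper.

```python
import numpy as np
import cvxpy as cp

# Minimal illustration (not the paper's tool): quadratic stability of a
# polytopic uncertain system x' = A(theta) x with A(theta) in conv{A1, A2}.
# A common Lyapunov matrix P > 0 with Ai'P + P Ai < 0 at every vertex
# certifies stability over the whole polytope.

A1 = np.array([[-1.0,  2.0],
               [ 0.0, -3.0]])
A2 = np.array([[-1.5,  1.0],
               [ 0.5, -2.0]])

n = A1.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

problem = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
problem.solve()
feasible = problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
print("LMIs feasible -> quadratically stable" if feasible
      else "LMIs infeasible for this vertex set")
```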
Abstract:
Temperature-dependent transient curves of excited levels of a model Eu3+ complex have been measured for the first time. A coincidence between the temperature-dependent rise time of the 5D0 emitting level and the decay time of the 5D1 excited level in the [Eu(tta)3(H2O)2] complex has been found, which unambiguously proves the T1→5D1→5D0 sensitization pathway. A theoretical approach for the temperature-dependent energy transfer rates has been successfully applied to the rationalization of the experimental data.
Abstract:
In this thesis, numerical methods aiming at determining the eigenfunctions, their adjoints, and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system are investigated. First, the classical power iteration method is modified so that the calculation of modes higher than the fundamental mode is possible. Thereafter, the Explicitly Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is touched upon. Although the modified power iteration method is a computationally expensive algorithm, its main advantage is its robustness, i.e. the method always converges to the desired eigenfunctions without requiring the user to set any parameters in the algorithm. On the other hand, the Arnoldi method, which requires some parameters to be defined by the user, is a very efficient method for calculating the eigenfunctions of large sparse systems of equations with a minimum computational effort. These methods are thereafter used for off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor using, for instance, the Decay Ratio as a stability indicator might be difficult if the contributions from the individual modes are not separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after the Arnoldi method has been used to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, such oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
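As a minimal illustration of the two eigenvalue solvers discussed (and not the thesis implementation), the sketch below computes the two lowest modes of a 1-D diffusion-like operator: a power iteration with deflation applied to a shifted operator, cross-checked against SciPy's ARPACK interface, which implements an implicitly restarted Arnoldi method, a close relative of the explicitly restarted variant mentioned above. The matrix, shift, and sizes are invented for illustration.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigs

# Minimal sketch (not the thesis code): two lowest eigenmodes of a 1-D
# diffusion-like operator via power iteration with deflation, cross-checked
# with ARPACK (implicitly restarted Arnoldi).

N = 50
A = diags([-np.ones(N - 1), 2.0 * np.ones(N), -np.ones(N - 1)],
          [-1, 0, 1], format="csc")
sigma = 4.0                                  # upper bound on the spectrum of A
B = sigma * identity(N, format="csc") - A    # dominant modes of B = lowest of A

def power_iteration(op, n_modes=2, n_iter=5000):
    eigvals, modes = [], []
    for _ in range(n_modes):
        v = np.random.default_rng(0).standard_normal(op.shape[0])
        for _ in range(n_iter):
            v = op @ v
            for u in modes:              # deflate the modes already found
                v -= (u @ v) * u
            v /= np.linalg.norm(v)
        eigvals.append(v @ (op @ v))     # Rayleigh quotient
        modes.append(v)
    return np.array(eigvals), modes

mu_pi, _ = power_iteration(B)
mu_ar, _ = eigs(B, k=2, which="LM")
print("lowest modes of A, power iteration:", sigma - np.sort(mu_pi)[::-1])
print("lowest modes of A, Arnoldi/ARPACK :", sigma - np.sort(mu_ar.real)[::-1])
```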
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a likewise increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.