853 results for Calculation methodology
Abstract:
We present angular basis functions for the Schrödinger equation of two-electron systems in hyperspherical coordinates. Using the hyperspherical adiabatic approach, the wave functions of two-electron systems are expanded in analytical functions that generalize the Jacobi polynomials. We show that these functions, obtained by selecting the diagonal terms of the angular equation, allow efficient diagonalization of the Hamiltonian for all values of the hyperspherical radius. The method is applied to the determination of the ¹Sᵉ energy levels of Li⁺, and we show that the precision can be improved in a systematic and controllable way. ©2000 The American Physical Society.
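As a purely illustrative sketch of the adiabatic step described above (diagonalizing an angular matrix at each fixed hyperradius to obtain channel curves), the toy example below builds a small symmetric matrix whose entries depend parametrically on the hyperradius R and diagonalizes it on a grid; the matrix elements are placeholders and are not the actual coupling integrals between the generalized Jacobi-type functions.

```python
import numpy as np

def toy_angular_hamiltonian(R, n=6):
    """Toy symmetric angular matrix H(R); entries are illustrative placeholders,
    not the real couplings of the generalized Jacobi-type basis."""
    i = np.arange(n)
    diag = (i + 1.5) * (i + 2.5) / R**2 - 2.0 / R      # centrifugal-like + Coulomb-like terms
    off = -0.1 / R * np.ones(n - 1)                    # weak coupling between adjacent channels
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# Adiabatic potential curves U_k(R): eigenvalues of H(R) on a grid of hyperradii
R_grid = np.linspace(0.5, 20.0, 50)
curves = np.array([np.linalg.eigvalsh(toy_angular_hamiltonian(R)) for R in R_grid])
print(curves[:3, :3])  # lowest three channels at the first three grid points
```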
Abstract:
A basis-set calculation scheme for S-wave Ps-He elastic scattering below the lowest inelastic threshold was formulated using a variational expression for the transition matrix. The scheme was illustrated numerically by calculating the scattering length in the electronic doublet state: a = 1.0 ± 0.1 a.u.
Abstract:
There is no agreement among authors who study fish condition by the allometric method (K = W/L^b) regarding the best procedure for calculating the b coefficient. Some authors use a constant coefficient for all sub-samples (seasons of the year, for instance), whilst others calculate a b value for each sub-sample. To determine which of these methods fits better, this study verified that the use of one b value for each sub-sample leads to distortion of the Condition Factor values. Comparing the two tested methods, it may be concluded that the method which calculates the b coefficient from a grouping of all individuals and uses it as a constant value for all sub-samples is the most convenient for studying fish condition.
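A minimal sketch of the recommended procedure, fitting a single allometric exponent b from the pooled length-weight data (linear regression of log W on log L for W = aL^b) and then applying that constant b to every individual, is shown below; the sample lengths, weights and variable names are illustrative only.

```python
import numpy as np

def allometric_b(lengths, weights):
    """Fit W = a * L**b by linear regression on log-transformed data; return (a, b)."""
    b, log_a = np.polyfit(np.log(lengths), np.log(weights), 1)
    return np.exp(log_a), b

def condition_factor(weights, lengths, b):
    """Allometric condition factor K = W / L**b for each individual."""
    return np.asarray(weights) / np.asarray(lengths) ** b

# Illustrative pooled sample (all seasons grouped together)
L = np.array([12.1, 14.3, 15.0, 17.8, 20.2, 22.5])    # total length, cm
W = np.array([25.0, 41.0, 48.0, 80.0, 118.0, 165.0])  # weight, g

a, b = allometric_b(L, W)        # single b estimated from the pooled grouping
K = condition_factor(W, L, b)    # same constant b applied to every sub-sample
print(f"b = {b:.3f}", K.round(4))
```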
Abstract:
Indices that report how stable or unstable a contingency is in an electrical power system have been the object of several studies in recent decades. In some approaches, indices are obtained from time-domain simulation; others explore the calculation of the stability margin by the so-called direct methods, or even by neural networks. The goal is always a fast and reliable way of analysing the large disturbances that might occur in power systems. A fast classification into stable and unstable cases, in terms of transient stability, is crucial for dynamic security analysis. Any good proposal for analysing contingencies must present some important features: classification of contingencies; precision and reliability; and computational efficiency. Indices obtained from time-domain simulations have been used to classify contingencies as stable or unstable. These indices are based on the concepts of coherence, transient energy conversion between kinetic and potential energy, and three dot products of state variables. Classification of the contingencies using the indices individually is not reliable, since the performance of each index varies with the simulated condition. However, combining these indices into a single composite index can improve the analysis significantly. This paper presents the results of an approach that filters contingencies by classifying them as stable, unstable or marginal. The classification is performed with composite indices obtained from step-by-step simulation over a time window equal to the fault clearing time plus 0.5 second. Contingencies originally classified as stable or unstable do not require this extra simulation. The methodology requires an initial effort to obtain the classification intervals and the weights; this is performed once for each power system and can then be used for different operating conditions and different contingencies. No misclassification occurred in any of the tests, i.e., no stable case was classified as unstable or vice versa. The methodology is thus well suited to the problem, since it allows a rapid conclusion about the stability of the system for the majority of the contingencies (stable or unstable cases). Tests, results and discussions are presented for two power systems: (1) the IEEE17 system, composed of 17 generators, 162 buses and 284 transmission lines; and (2) a South Brazilian system configuration, with 10 generators, 45 buses and 71 lines.
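A minimal sketch of the filtering idea (collapsing several per-contingency indices into one weighted composite and then mapping it to stable, marginal or unstable bands) is given below; the index names, weights and interval thresholds are illustrative assumptions, not the values calibrated in the paper.

```python
from typing import Dict

# Illustrative weights and classification bands (assumed, not the paper's calibrated values)
WEIGHTS = {"coherence": 0.4, "energy_conversion": 0.3, "dot_product": 0.3}
STABLE_MAX, UNSTABLE_MIN = 0.3, 0.7   # composite below 0.3 -> stable, above 0.7 -> unstable

def composite_index(indices: Dict[str, float]) -> float:
    """Weighted combination of the individual indices (each assumed normalized to [0, 1])."""
    return sum(WEIGHTS[name] * value for name, value in indices.items())

def classify(indices: Dict[str, float]) -> str:
    """Classify a contingency; 'marginal' cases would be sent to the extra simulation."""
    c = composite_index(indices)
    if c <= STABLE_MAX:
        return "stable"
    if c >= UNSTABLE_MIN:
        return "unstable"
    return "marginal"

print(classify({"coherence": 0.2, "energy_conversion": 0.1, "dot_product": 0.25}))  # stable
print(classify({"coherence": 0.5, "energy_conversion": 0.6, "dot_product": 0.4}))   # marginal
```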
Abstract:
First-principles quantum-mechanical techniques, based on density functional theory (B3LYP level), were employed to study the electronic structure of ordered and deformed asymmetric models for Ba0.5Sr0.5TiO3. Electronic properties are analyzed, and the relevance of the present theoretical and experimental results to the photoluminescence behavior is discussed. The presence of localized electronic levels in the band gap, due to the symmetry breaking, would be responsible for the visible photoluminescence of the amorphous state at room temperature. Thin films were synthesized by soft chemical processing; their structure was confirmed by x-ray data and the corresponding photoluminescence properties were measured.
Abstract:
Three-phase three-wire power flow algorithms, like any tool for power system analysis, require reliable impedances and models in order to obtain accurate results. Kron's reduction procedure, which embeds the neutral wire influence into the phase wires, has shown good results when three-phase three-wire power flow algorithms based on the current summation method were used. However, Kron's reduction can harm the reliability of some algorithms whose iterative processes require loss calculation (power summation method). In this work, three three-phase three-wire power flow algorithms based on the power summation method are compared with a three-phase four-wire approach based on the backward-forward technique and current summation. Two four-wire unbalanced medium-voltage distribution networks are analyzed, and the results are presented and discussed. © 2004 IEEE.
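For reference, Kron's reduction of a 4×4 series-impedance matrix (phases a, b, c plus neutral n) to an equivalent 3×3 phase matrix follows Z_abc = Z_pp - Z_pn · Z_nn⁻¹ · Z_np; a minimal sketch is given below, with arbitrary placeholder impedance values rather than data from the paper.

```python
import numpy as np

def kron_reduction(z_abcn: np.ndarray) -> np.ndarray:
    """Reduce a 4x4 phase+neutral impedance matrix to an equivalent 3x3 phase matrix:
    Z_abc = Z_pp - Z_pn @ inv(Z_nn) @ Z_np (neutral effect embedded into the phases)."""
    z_pp = z_abcn[:3, :3]
    z_pn = z_abcn[:3, 3:]
    z_np = z_abcn[3:, :3]
    z_nn = z_abcn[3:, 3:]
    return z_pp - z_pn @ np.linalg.inv(z_nn) @ z_np

# Arbitrary illustrative 4x4 impedance matrix in ohms/km (order: a, b, c, n)
z = np.array([[0.4+1.0j, 0.1+0.4j, 0.1+0.3j, 0.1+0.3j],
              [0.1+0.4j, 0.4+1.0j, 0.1+0.4j, 0.1+0.3j],
              [0.1+0.3j, 0.1+0.4j, 0.4+1.0j, 0.1+0.4j],
              [0.1+0.3j, 0.1+0.3j, 0.1+0.4j, 0.5+1.2j]])

print(kron_reduction(z).round(4))
```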
Abstract:
This paper presents some initial concepts for including reactive power in linear methods for computing the Available Transfer Capability (ATC). An approximation for the computation of reactive power flows is proposed, which uses the exact circle equations for the transmission-line complex flow; the ATC is then determined using active power distribution factors. The transfer capability can be increased by using flow sensitivities that indicate the best group of buses whose reactive power injections can be modified in order to remove overloads in the transmission lines. Results of the ATC computation and of the use of the flow sensitivities are presented for the Cigré 32-bus system. © 2004 IEEE.
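The linear part of such a method, limiting a transfer by the first line to reach its thermal limit using active power transfer distribution factors (PTDFs), can be sketched as below; the PTDFs, base flows and limits are made-up numbers, and the reactive-power/circle-equation refinement proposed in the paper is not reproduced.

```python
import numpy as np

def linear_atc(base_flow, flow_limit, ptdf):
    """ATC for one transfer direction: the largest additional transfer before any
    monitored line hits its limit, ATC = min over lines of (limit - base) / PTDF."""
    margin = flow_limit - base_flow
    loaded = ptdf > 1e-6                     # only lines loaded further by the transfer
    return np.min(margin[loaded] / ptdf[loaded])

# Illustrative data for three monitored lines (MW and per-unit PTDFs)
base_flow  = np.array([120.0,  80.0,  60.0])
flow_limit = np.array([150.0, 100.0, 120.0])
ptdf       = np.array([0.45,  0.30, -0.10])  # negative PTDF: the transfer relieves that line

print(f"ATC = {linear_atc(base_flow, flow_limit, ptdf):.1f} MW")
```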
Abstract:
In this work, the planning of secondary distribution circuits is approached as a mixed-integer nonlinear programming (MINLP) problem. In order to solve this problem, a dedicated evolutionary algorithm (EA) is proposed. This algorithm uses a codification scheme, genetic operators, and control parameters designed and managed to consider the specific characteristics of secondary network planning. The codification scheme maps the possible solutions that satisfy the requirements for an effective, low-cost design: adequate sizing of the conductors, load balancing among the phases, and placement of the transformer at the load center of the secondary system. An efficient three-phase power flow algorithm is used as an auxiliary tool of the EA to evaluate the proposed fitness function for each candidate topology. Results are presented for two secondary distribution circuits, one with a radial topology and the other with a weakly meshed topology. © 2005 IEEE.
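A bare-bones sketch of how such an EA loop couples to a power-flow-based fitness evaluation is given below; the encoding (one conductor type per branch), the operators and the cost terms are simplified assumptions standing in for the dedicated scheme of the paper, and the power flow is a placeholder function.

```python
import random

N_BRANCHES, CONDUCTOR_TYPES = 10, [0, 1, 2]   # toy encoding: one conductor type per branch
COST = {0: 1.0, 1: 1.8, 2: 3.0}               # assumed investment cost per conductor type

def run_power_flow(solution):
    """Placeholder for the auxiliary three-phase power flow; here losses simply
    decrease with heavier (more expensive) conductors."""
    return sum(1.0 / (1 + gene) for gene in solution)   # fictitious loss figure

def fitness(solution):
    """Fitness = investment cost + weighted losses (lower is better)."""
    return sum(COST[g] for g in solution) + 2.0 * run_power_flow(solution)

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    pop = [[random.choice(CONDUCTOR_TYPES) for _ in range(N_BRANCHES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_BRANCHES)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.choice(CONDUCTOR_TYPES) if random.random() < mutation_rate else g
                     for g in child]                         # per-gene mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 3))
```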
Abstract:
Purpose: The aim of the present study was to evaluate zygomatic bone thickness, considering a possible relationship between this parameter and the cephalic index (CI), for better use of the CI in the implant placement technique. Materials and Methods: The CI was calculated for 60 dry Brazilian skulls. The zygomatic bones of the skulls were divided into 13 standardized sections for measurement, and bilateral measurements of zygomatic bone thickness were made on the dry skulls. Results: Sections 5, 6, 8, and 9 were appropriate for implant anchorage in terms of location. The mean thicknesses of these sections were 6.05 mm for section 5, 3.15 mm for section 6, 6.13 mm for section 8, and 4.75 mm for section 9. In only one section, section 8, did the mean thickness on one side of the skull differ significantly from that on the other side (P < .001). Discussion: Regarding the relationship between quadrant thickness and CI, sections 6 and 8 varied independently of CI. Section 5, associated with brachycephaly, and section 9, associated with subbrachycephaly, presented variations in the corresponding thickness. Conclusion: Based on the results, implants should be placed in sections 5 and 8, since they presented the greatest thickness, except in brachycephalic subjects, where thickness was greatest in section 5, and in subbrachycephalic subjects, where thickness was greatest in section 9. CI did not prove to be an appropriate parameter for evaluating zygomatic bone thickness for this sample.
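For reference, the cephalic (cranial) index used above is conventionally computed as the ratio of maximum head breadth to maximum head length; a standard rendering of the definition is given below. The classification cut-offs for brachycephaly and subbrachycephaly are not reproduced here, since they are not stated in the abstract.

```latex
% Conventional definition of the cephalic (cranial) index
\mathrm{CI} \;=\; \frac{\text{maximum cranial breadth}}{\text{maximum cranial length}} \times 100
```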
Abstract:
Indirect ELISA and IFAT have been reported to be more sensitive and specific than agglutination tests; however, MAT is cheaper and easier to perform than the others and does not need special equipment. The purpose of this study was to compare an enzyme-linked immunosorbent assay using crude rhoptries of Toxoplasma gondii as the well-coating antigen (r-ELISA) with the indirect fluorescent antibody test (IFAT) and the modified agglutination test (MAT) to detect anti-T. gondii antibodies in sera of experimentally infected pigs. Ten mixed-breed pigs between 6.5 and 7.5 weeks old were used. All pigs were negative for T. gondii antibodies by IFAT (titre < 16), r-ELISA (OD < 0.295) and MAT (titre < 16). Animals received 7 × 10⁷ viable tachyzoites of the RH strain by the intramuscular (IM) route at day 0. Serum samples were collected at days -6, 0, 7, 14, 21, 28, 35, 42, 50, and 57. IFAT detected anti-T. gondii antibodies earlier than r-ELISA and MAT. The average antibody level was highest at day 35 in IFAT (log10 = 2.9) and in MAT (log10 = 3.5), and at day 42 in r-ELISA (OD = 0.797). Antibody levels remained high through the 57th day after inoculation in MAT, whereas a decreasing tendency was observed in r-ELISA and IFAT. With IFAT as the gold standard, r-ELISA demonstrated a higher prevalence (73.3%), sensitivity (94.3%), negative predictive value (83.3%), and accuracy (95.6%) than MAT. Kappa agreement among the tests was calculated, and the best result was obtained for r-ELISA × IFAT (κ = 0.88, p < 0.001). Cross-reaction with Sarcocystis miescheriana was investigated in r-ELISA, and the mean OD was 0.163 ± 0.035 (n = 65); additionally, none of the animals inoculated with Sarcocystis reacted positively in r-ELISA. Our results indicate that r-ELISA could be a good method for the serological detection of T. gondii infection in pigs. © 2005 Elsevier Inc. All rights reserved.
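The diagnostic-performance figures quoted above (sensitivity, negative predictive value, accuracy and Cohen's kappa against a gold-standard test) follow the standard 2×2 contingency-table formulas; a minimal sketch with made-up counts, not the study's data, is shown below.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard 2x2-table statistics of a test against a gold standard."""
    total = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / total
    # Cohen's kappa: observed agreement corrected for chance-expected agreement
    p_obs = accuracy
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, npv, accuracy, kappa

# Made-up counts for illustration only (not the study's data)
print([round(v, 3) for v in diagnostic_stats(tp=40, fp=2, fn=5, tn=53)])
```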
Abstract:
Differences in venom peptide composition as a function of two collection methodologies, electrical stimulation (ES) and reservoir disruption (RD), were analyzed by reverse-phase HPLC in three Apis mellifera races: A. m. adansonii, A. m. ligustica and Africanized honeybees. The analyses were performed by determining the relative number and percentage of each molecular form associated with the peaks eluted by chromatography. Comparison of these profiles revealed qualitative and quantitative differences related to the venom collection methodology as well as to the three races analyzed. In contrast to data usually found for venom proteins, the three races presented a larger number of peaks, or molecular forms, when venom was collected by ES. In addition, the relative concentration of each peak was generally higher for ES than for RD, which indicates the presence of molecular precursors in the venom obtained by RD. The presence/absence pattern of the peaks, as well as their relative concentrations, showed a closer similarity between A. m. adansonii and the Africanized honeybees than between these and A. m. ligustica. The data obtained allowed a discussion of the differences in the relative concentration of each venom component according to the collection methodology and, finally, of the biological action of the venom in the different races. These results, apart from being useful for establishing a peptide profile for each bee race as a function of the venom collection methodology, showed once more that chromatographic techniques are a valuable tool for the identification of A. mellifera subspecies.
Abstract:
Among the positioning systems that compose the GNSS (Global Navigation Satellite System), GPS is capable of providing low-, medium- and high-precision positioning data. However, GPS observables may be subject to many different types of errors, and these systematic errors can degrade the accuracy of the positioning provided by GPS. They are mainly related to the GPS satellite orbits, multipath, and atmospheric effects. In order to mitigate these errors, a semiparametric model and the penalized least squares technique were employed in this study. This is similar to changing the stochastic model by incorporating error functions, and the results are similar to those obtained when the functional model is changed instead. Using this method, it was shown that ambiguity resolution and the estimation of station coordinates were more reliable and accurate than with a conventional least squares methodology.
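A minimal numerical sketch of penalized least squares in a semiparametric setting is given below: a parametric part Ax (standing in for coordinate/ambiguity-like parameters) is estimated jointly with a smooth nonparametric error function g, with a second-difference roughness penalty λ‖Dg‖² controlling the smoothness of g. The design matrix, penalty and data are illustrative assumptions, not the GPS functional model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 2
A = rng.normal(size=(n, p))                    # parametric design (placeholder for the GPS model)
t = np.linspace(0, 1, n)
g_true = 0.3 * np.sin(2 * np.pi * t)           # smooth systematic error (e.g. multipath-like)
y = A @ np.array([1.0, -0.5]) + g_true + 0.02 * rng.normal(size=n)

# Second-difference penalty matrix D acting on the nonparametric component g
D = np.diff(np.eye(n), n=2, axis=0)
lam = 10.0

# Joint normal equations for [x; g]: minimize ||y - A x - g||^2 + lam * ||D g||^2
top = np.hstack([A.T @ A, A.T])
bot = np.hstack([A, np.eye(n) + lam * (D.T @ D)])
rhs = np.concatenate([A.T @ y, y])
sol = np.linalg.solve(np.vstack([top, bot]), rhs)

x_hat, g_hat = sol[:p], sol[p:]
print("estimated parametric part:", x_hat.round(3))
```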
Abstract:
The main purpose of this work is the development of computational tools to assist the on-line automatic detection of burn in the surface grinding process. Most of the parameters currently employed in burn recognition (DPO, FKS, DPKS, DIFP, among others) do not incorporate routines for automatic selection of the grinding passes, therefore requiring the user's intervention to choose the active region. Several methods were employed for pass extraction; those with the best results are presented in this article. Tests carried out on a surface grinding machine have shown the success of the algorithms developed for pass extraction. Copyright © 2007 by ABCM.
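One simple way to select grinding passes automatically, i.e. to segment the active regions of the acquired signal from the idle intervals, is to threshold a moving RMS envelope of the signal; the sketch below illustrates that idea with a synthetic signal and is only an assumed stand-in for the extraction algorithms developed in the paper.

```python
import numpy as np

def moving_rms(x, window):
    """Moving RMS envelope of a 1-D signal."""
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(x**2, kernel, mode="same"))

def extract_passes(signal, window=200, threshold_ratio=0.3):
    """Return (start, end) sample indices of regions where the RMS envelope
    exceeds a fraction of its maximum, i.e. the candidate grinding passes."""
    env = moving_rms(signal, window)
    active = env > threshold_ratio * env.max()
    edges = np.flatnonzero(np.diff(active.astype(int)))
    if active[0]:                         # signal starts inside a pass
        edges = np.r_[0, edges]
    if active[-1]:                        # signal ends inside a pass
        edges = np.r_[edges, len(signal) - 1]
    return list(zip(edges[::2], edges[1::2]))

# Synthetic acquisition: noise with two bursts standing in for two grinding passes
rng = np.random.default_rng(1)
x = 0.05 * rng.normal(size=6000)
x[1000:2200] += np.sin(0.2 * np.arange(1200))
x[3500:5000] += np.sin(0.2 * np.arange(1500))

print(extract_passes(x))   # roughly [(1000, 2200), (3500, 5000)]
```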
Abstract:
This paper presents a toolbox developed to read files describing a SIMULINK® model and translate them into a structural VHDL-AMS description. During the translation process, all files and directory structures needed to simulate the translated model in the SystemVision™ environment are generated. The toolbox, named MS2SV, was tested on three models of commercially available digital-to-analogue converters. All models use an R-2R ladder network for the conversion, but the functionality of the three components is different. The model conversion methodology is presented together with a short theoretical review of the R-2R ladder network. In the evaluation of the translated models, we used a sine-wave input signal, and the waveform generated by the D/A conversion process was compared by FFT analysis. The results show the viability of this type of approach. This work addresses some of the challenges set by the electronics industry for the further development of simulation methodologies and tools in the field of mixed-signal technology. © 2007 IEEE.
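For reference, an ideal N-bit R-2R ladder DAC produces Vout = Vref × code / 2^N; the sketch below computes that ideal response for a sine-wave stimulus and uses an FFT as a crude fidelity check, loosely mirroring the evaluation described above. The bit width, reference voltage and sampling setup are arbitrary assumptions, not the parameters of the translated models.

```python
import numpy as np

def r2r_dac(code: np.ndarray, n_bits: int = 12, v_ref: float = 5.0) -> np.ndarray:
    """Ideal R-2R ladder DAC transfer function: Vout = Vref * code / 2**n_bits."""
    return v_ref * code / 2**n_bits

# Sine-wave stimulus quantized to DAC input codes (arbitrary test setup)
n_samples, n_bits, v_ref = 4096, 12, 5.0
t = np.arange(n_samples)
code = np.round((np.sin(2 * np.pi * 17 * t / n_samples) * 0.5 + 0.5) * (2**n_bits - 1))

vout = r2r_dac(code, n_bits, v_ref)

# FFT check: the dominant bin (ignoring DC) should sit at the stimulus frequency
spectrum = np.abs(np.fft.rfft(vout - vout.mean()))
print("dominant bin:", int(np.argmax(spectrum[1:]) + 1))   # expected: 17
```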
Abstract:
Given that the total amount of losses in a distribution system is known, the non-technical losses can be obtained by subtraction once a reliable methodology for technical loss calculation is available. A usual method for calculating technical losses in electric utilities uses two important factors: the load factor and the loss factor. The load factor is usually obtained from energy and demand measurements, whereas computing the loss factor requires knowledge of demand and energy losses, which, in general, are not amenable to direct measurement. In this work, a statistical analysis of the relationship between these factors is presented, using the load curves of a sample of consumers of a specific utility. The curves are summarized into different bands of the coefficient k, making it possible to determine where each group of consumers has its greatest concentration of points. ©2008 IEEE.
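The coefficient k mentioned above is commonly used in the classical empirical relation between loss factor (LsF) and load factor (LF), LsF = k·LF + (1-k)·LF². A minimal sketch of computing both factors from a demand curve and solving for k is shown below, with an illustrative 24-hour curve rather than real measurement data.

```python
import numpy as np

def load_factor(demand):
    """Load factor: average demand divided by peak demand."""
    return np.mean(demand) / np.max(demand)

def loss_factor(demand):
    """Loss factor: average of squared demand divided by squared peak demand
    (losses are assumed proportional to the square of the demand)."""
    return np.mean(demand**2) / np.max(demand) ** 2

def fit_k(lf, lsf):
    """Solve LsF = k*LF + (1 - k)*LF**2 for k."""
    return (lsf - lf**2) / (lf - lf**2)

# Illustrative 24-hour demand curve (kW), not real measurements
demand = np.array([30, 28, 27, 27, 29, 35, 50, 70, 85, 90, 92, 95,
                   93, 90, 88, 86, 90, 100, 110, 105, 90, 70, 50, 35], dtype=float)

lf, lsf = load_factor(demand), loss_factor(demand)
print(f"LF = {lf:.3f}, LsF = {lsf:.3f}, k = {fit_k(lf, lsf):.3f}")
```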