449 results for CUADRATURA DE GAUSS


Relevance:

10.00%

Publisher:

Abstract:

Most state estimation algorithms based on the classical model are adequate only for transmission networks. Few algorithms have been developed specifically for distribution systems, probably because of the small amount of data available in real time. Most overhead feeders have only current and voltage measurements at the medium-voltage bus-bar of the substation. Classical algorithms are therefore difficult to implement, even when off-line data are used as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to the possibility of load-transfer maneuvers, is changing network planning policy. Equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model combining real-time information with load pseudo-measurements built from typical power factors and utilization factors (demand factors) of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, from the substation to the terminal nodes. For each section, a measurement model is built, resulting in an overdetermined set of nonlinear equations whose solution is obtained through the Gaussian normal equations. The estimated variables of one section are used as pseudo-measurements for the next.
In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, where they exist), besides pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming that the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort. Moreover, the algorithm is very accurate in the estimated values. Besides the power summation state estimator, this work shows how other algorithms can be adapted to provide state estimation for medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is available at the beginning of the feeder, although one is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium-voltage bus-bar was also developed.
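The section-by-section estimation described above reduces, at each section, to a weighted least-squares problem solved through the normal equations. As a rough illustration (a minimal sketch with invented numbers, not the thesis's actual measurement model), one Gauss-Newton step for an overdetermined model z ≈ h(x) looks like:

```python
import numpy as np

def normal_equation_step(h, H, z, W, x):
    """One Gauss-Newton step for the overdetermined measurement model
    z ~ h(x): solve the weighted normal equations
    (H^T W H) dx = H^T W (z - h(x))."""
    r = z - h(x)                       # measurement residuals
    G = H.T @ W @ H                    # gain matrix
    dx = np.linalg.solve(G, H.T @ W @ r)
    return x + dx

# Toy linear example: estimate 2 state variables from 3 measurements.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # measurement Jacobian
z = np.array([1.02, 2.01, 2.95])                    # measured values
W = np.diag([1.0, 1.0, 0.5])                        # measurement weights
x = normal_equation_step(lambda x: H @ x, H, z, W, np.zeros(2))
```

Because the toy model is linear, a single step already yields the weighted least-squares estimate; the nonlinear load-flow equations of the actual method would require iterating.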

Relevance:

10.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed to simulate electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used in their formulations were mostly based on the characteristics of transmission systems, which were the main focus of engineers and researchers. The physical characteristics of these systems are, though, quite different from those of distribution systems. In transmission systems, the voltage levels are high and the lines are generally very long, so the capacitive and inductive effects have a considerable influence on the quantities of interest and must be taken into account. Also, transmission loads have a macro nature (cities, neighborhoods or big industries) and are practically balanced, which reduces the need for three-phase load flow methodologies. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison with transmission levels, which almost annuls the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries supply small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high; the use of three-phase methodologies thus becomes important. Besides, equipment such as voltage regulators, which use simultaneously the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to allow the simulation of its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in the scope of this work to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the base for developing the three-phase method. This algorithm has been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases; the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various configurations, according to their real operation. Finally, switches with current measurement at various points of the feeder can be represented: the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting subsequent optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study refers to the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second refers to the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to some feeders are presented, to give insight into their performance and accuracy.
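The power summation idea can be sketched as a backward/forward sweep on a radial feeder. The toy example below is single-phase and per-unit, with invented line data and loads (the thesis's method is three-phase with magnetic coupling and regulator models, which are not reproduced here):

```python
import numpy as np

# Minimal backward/forward sweep for a radial 3-bus feeder:
# substation (bus 0) -> bus 1 -> bus 2.  All quantities in per-unit.
z_line = np.array([0.01 + 0.02j, 0.015 + 0.03j])   # series impedance per section
s_load = np.array([0.0, 0.3 + 0.1j, 0.2 + 0.05j])  # complex load at each bus
v = np.ones(3, dtype=complex)                      # flat start; slack fixed at 1.0 pu

for _ in range(20):
    # Backward sweep: accumulate load currents from the leaves to the source.
    i_inj = np.conj(s_load / v)          # current drawn by each load
    i_branch = np.zeros(2, dtype=complex)
    i_branch[1] = i_inj[2]
    i_branch[0] = i_inj[1] + i_branch[1]
    # Forward sweep: update voltages from the substation outward.
    v[1] = v[0] - z_line[0] * i_branch[0]
    v[2] = v[1] - z_line[1] * i_branch[1]

print(np.abs(v))   # voltage magnitudes decrease along the feeder
```

The voltage magnitudes fall monotonically from the substation toward the terminal node, as expected for a loaded radial feeder.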

Relevance:

10.00%

Publisher:

Abstract:

This research aims to present non-Euclidean geometry as an anomaly, indicating its pedagogical implications, and then to propose a sequence of activities, divided into three blocks, which show the relationship of Euclidean geometry to the non-Euclidean ones, taking the Euclidean case as the reference for the analysis of the anomaly. The work is tied to the PPGECNM research line of History, Philosophy and Sociology of Science in the Teaching of Natural Sciences and Mathematics. It treats Euclid of Alexandria and his most famous work, the Elements, and emphasizes the Fifth Postulate of Euclid, particularly the difficulties (which lasted several centuries) that mathematicians had in understanding it. In the nineteenth century, three mathematicians, Lobachevsky (1792-1856), Bolyai (1802-1860) and Gauss (1777-1855), became convinced that the postulate could not be derived from the others and that another (anomalous) geometry, as consistent as Euclid's but not fitting its parameters, was possible. The emergence of non-Euclidean geometry is attributed to these three. For the methodology, we started with some bibliographical definitions of anomalies, characterized them so that our definition would be better understood by the reader, and only then dealt with the non-Euclidean geometries (hyperbolic geometry, spherical geometry and taxicab geometry), confronting them with the Euclidean one to analyze the anomalies existing in non-Euclidean geometries and observe their importance for teaching. After this characterization follows the empirical part of the proposal, which consisted of the application of three blocks of activities in search of the pedagogical implications of the anomaly: the first on parallel lines, the second on the study of triangles and the third on the shortest distance between two points.
These blocks offer work with basic elements of geometry from a historical and investigative study of the non-Euclidean geometries as anomalies, so that each concept is understood together with its properties without necessarily being tied to the image of the geometric elements, thus extending or adapting to other references. For example, the block applied on the second day of activities extends the result for the sum of the internal angles of an arbitrary triangle, showing that it is not always 180° (that conclusion can be drawn only when Euclidean geometry is the reference).
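The third activity block, on the shortest distance between two points, can be illustrated numerically: the same pair of points has different "straight-line" distances in Euclidean and in taxicab geometry.

```python
import math

def euclidean(p, q):
    # Ordinary straight-line distance.
    return math.hypot(q[0] - p[0], q[1] - p[1])

def taxicab(p, q):
    # In taxicab geometry the distance is the sum of the horizontal and
    # vertical displacements, as if moving along a city grid.
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

p, q = (0, 0), (3, 4)
print(euclidean(p, q))  # 5.0
print(taxicab(p, q))    # 7
```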

Relevance:

10.00%

Publisher:

Abstract:

We present in this work two estimation methods for accelerated failure time models with random effects for grouped survival data. The first method, implemented in the SAS software through the NLMIXED procedure, uses an adaptive Gauss-Hermite quadrature to determine the marginal likelihood. The second method, implemented in the free software R, is based on penalized likelihood to estimate the parameters of the model. In the first case we describe the main theoretical aspects and, in the second, we briefly present the adopted approach, together with a simulation study to investigate the performance of the method. We illustrate the models using real data on the operating time of oil wells from the Potiguar Basin (RN/CE).
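The marginalization step can be illustrated with a plain (non-adaptive) Gauss-Hermite rule; the adaptive version used by NLMIXED recenters and rescales the nodes per group, which is not reproduced here. A minimal sketch approximating expectations under a standard normal random effect:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_expectation(g, n=20):
    """Approximate E[g(X)] for X ~ N(0, 1) with an n-point Gauss-Hermite
    rule:  E[g(X)] ~ (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * t_i)."""
    t, w = hermgauss(n)               # nodes/weights for the weight exp(-t^2)
    return (w @ g(np.sqrt(2.0) * t)) / np.sqrt(np.pi)

# Sanity checks against known moments of the standard normal:
print(gauss_hermite_expectation(lambda x: x**2))   # second moment, 1.0
print(gauss_hermite_expectation(lambda x: x**4))   # fourth moment, 3.0
```

In a mixed model the integrand would be the conditional likelihood of a group evaluated at each transformed node, rather than a simple moment.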

Relevance:

10.00%

Publisher:

Abstract:

This work deals with the analysis of 21st-century reinforced concrete from the point of view of its constituent materials. First, the theoretical approach for bending elements designed according to the BAEL 91 standard is described. Then, numerical load-displacement curves for reinforced concrete beams and plates are presented and validated against experimental data. The numerical modelling was carried out in the program CASTEM 2000, in which an elastoplastic Drucker-Prager model defines the rupture surface of the concrete in non-associative plasticity. Cracking is smeared over the Gauss points of the finite elements, with a formation criterion based on the definition of the rupture surface in the tension-tension branch of the Rankine model. The reinforcement was modeled with a discrete approach assuming perfect bond. Finally, a comparative analysis between the numerical results and the design criteria is presented, showing the future of high-performance reinforced concrete at this beginning of the 21st century.
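The Drucker-Prager criterion mentioned above can be sketched as a yield function check on a stress state; the material constants below are illustrative, not the CASTEM 2000 calibration.

```python
import numpy as np

def drucker_prager(sigma, alpha, k):
    """Drucker-Prager yield function f = alpha*I1 + sqrt(J2) - k for a
    3x3 stress tensor sigma; f < 0 means elastic, f >= 0 means yielding.
    alpha and k are material constants (illustrative values here)."""
    i1 = np.trace(sigma)                      # first stress invariant
    s = sigma - i1 / 3.0 * np.eye(3)          # deviatoric stress
    j2 = 0.5 * np.tensordot(s, s)             # second deviatoric invariant
    return alpha * i1 + np.sqrt(j2) - k

# Moderate hydrostatic compression stays inside the surface for these constants:
sigma_c = -5.0 * np.eye(3)
print(drucker_prager(sigma_c, alpha=0.2, k=2.0))  # -5.0 (elastic)
```

The pressure-dependent term alpha*I1 is what distinguishes this surface from the pressure-insensitive von Mises cylinder, making it suitable for concrete-like materials.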

Relevance:

10.00%

Publisher:

Abstract:

Survival models deal with the modeling of time-to-event data. In some situations, however, part of the population may no longer be subject to the event. Models that take this fact into account are called cure rate models. There are few studies about hypothesis tests in cure rate models. Recently a new test statistic, the gradient statistic, has been proposed; it shares the same asymptotic properties as the classic large-sample tests, the likelihood ratio, score and Wald tests. Simulation studies have been carried out to explore the behavior of the gradient statistic in finite samples and compare it with the classic statistics in different models. The main objective of this work is to study and compare the performance of the gradient test and the likelihood ratio test in cure rate models. We first describe the models and present the main asymptotic properties of the tests. We then perform a simulation study based on the promotion time model with Weibull distribution to assess the performance of the tests in finite samples. An application is presented to illustrate the studied concepts.
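The gradient statistic itself is simple to compute: it is the product of the score evaluated at the null value and the difference between the estimate and that value. A minimal one-parameter sketch, using an exponential rate model (far simpler than the promotion time cure model studied in the work):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=1.0, size=200)   # simulated data, true rate = 1
n, s = x.size, x.sum()

theta0 = 1.0                     # null hypothesis H0: rate = 1
theta_hat = n / s                # MLE of the exponential rate

def loglik(t):
    # Exponential log-likelihood: n*log(t) - t*sum(x).
    return n * np.log(t) - t * s

score0 = n / theta0 - s                          # score U(theta0)
gradient_stat = score0 * (theta_hat - theta0)    # gradient statistic
lr_stat = 2.0 * (loglik(theta_hat) - loglik(theta0))

print(gradient_stat, lr_stat)    # both are asymptotically chi-square(1) under H0
```

Unlike the Wald and score statistics, the gradient statistic needs neither the observed nor the expected information matrix, which is part of its appeal.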

Relevance:

10.00%

Publisher:

Abstract:

Acid rain is a major environmental problem, a consequence of burning fossil fuels and of industrial pollutants, mainly sulfur dioxide, released into the atmosphere. The objective of this research was to monitor and analyze changes in rainwater quality in the city of Natal, investigating influences at the local, regional and global scales, in addition to possible effects of this quality on the local landscape. Data collection was performed from December 2005 to December 2007. We used nephanalysis techniques to identify synoptic systems, field research to search for possible effects of acid rain on the landscape, and collection and analysis of precipitation data and its degree of acidity. Descriptive statistics (standard deviation and coefficient of variation) were used to monitor the chemical behavior of the precipitation, and monitoring of errors in pH measurements, confidence levels, the normal (Gaussian) distribution, confidence intervals and analysis of variance (ANOVA) were also employed. The main results show pH varying between 5.021 and 6.836, with a mean of 5.958 and a standard deviation of 0.402, showing that the mean is representative of the sample. Thus, we can infer that, according to CONAMA Resolution 357 (the acidity index for fresh water should be between 6.0 and 9.0), the precipitation of Natal/RN is slightly acidic. Among the synoptic systems analyzed, the Intertropical Convergence Zone showed the most acidic figures, with a mean pH of 5.617, an acid value, a standard deviation of 0.235 and a coefficient of variation of 4.183%, which shows that the mean is representative of the sample. In the field research, several places were found that strongly suffer the action of acid rain. The results are, however, preliminary and need further investigation, including the use of new methodologies.
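The descriptive statistics and normal-approximation confidence interval used in the monitoring can be sketched as follows; the sample values are invented for illustration, not the actual Natal measurements.

```python
import math

# Mean, sample standard deviation and a z-based 95% confidence interval
# for a small illustrative sample of rain pH values.
ph = [5.62, 5.85, 6.10, 5.48, 6.30, 5.95, 5.70, 6.05]
n = len(ph)
mean = sum(ph) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in ph) / (n - 1))
cv = 100.0 * sd / mean                   # coefficient of variation, in %
half = 1.96 * sd / math.sqrt(n)          # normal-approximation half-width
print(f"pH = {mean:.3f} +/- {half:.3f} (CV = {cv:.1f}%)")
```

A small coefficient of variation is what justifies the paper's claim that the mean is representative of the sample.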

Relevance:

10.00%

Publisher:

Abstract:

A study of the reducibility of the Fock space representation of the q-deformed harmonic oscillator algebra for real and root of unity values of the deformation parameter is carried out by using the properties of the Gauss polynomials. When the deformation parameter is a root of unity, an interesting result comes out in the form of a reducibility scheme for the space representation which is based on the classification of the primitive or nonprimitive character of the deformation parameter. An application is carried out for a q-deformed harmonic oscillator Hamiltonian, to which the reducibility scheme is explicitly applied.
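The Gauss polynomials referred to here are the q-binomial coefficients. A minimal sketch evaluates them at an integer q via ratios of q-integers; note that at a root of unity the q-integers can vanish, which is precisely the degenerate situation the reducibility scheme classifies.

```python
def q_int(m, q):
    # q-integer [m]_q = 1 + q + ... + q^(m-1); reduces to m at q = 1.
    return sum(q**i for i in range(m))

def q_binomial(n, k, q):
    """Gauss polynomial [n choose k]_q evaluated at an integer q, as a
    ratio of products of q-integers (exact integer arithmetic)."""
    num = 1
    den = 1
    for i in range(1, k + 1):
        num *= q_int(n - k + i, q)
        den *= q_int(i, q)
    return num // den   # the full ratio is always an integer for integer q

print(q_binomial(4, 2, 1))   # 6, the ordinary binomial coefficient
print(q_binomial(4, 2, 2))   # 35, value of 1 + q + 2q^2 + q^3 + q^4 at q = 2
```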

Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures, the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.

Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 MB of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Among the pathways of entry of radionuclides into the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Those radionuclides which are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and radiological protection. The time behavior of trace concentration in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The General Multiple-Compartment Model (GMCM) is the most powerful and accepted method for biokinetic studies; it allows the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetics data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace is lower than the volume rise time. Another restriction is related to the central flux model: the model considered in the code assumes that there exists one central compartment (e.g., blood) that connects the flow with all compartments, and the flow between the other compartments is not included.
Typical running time: Depends on the choice of calculations. Using the Derivative Method the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when about 15 compartments are considered. (C) 2006 Elsevier B.V. All rights reserved.
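Under the constant-volume approximation, the multiple-compartment model amounts to a linear system dc/dt = A c. A two-compartment sketch (rate constants invented; STATFLUX's Derivative and Gauss-Marquardt fitting procedures are not reproduced) solves it by eigendecomposition:

```python
import numpy as np

# Two-compartment exchange model: a central compartment (1) feeds a
# peripheral compartment (2) and receives flow back from it.
k12, k21 = 0.4, 0.1            # transfer coefficients (1/day), illustrative
A = np.array([[-k12,  k21],
              [ k12, -k21]])   # compartmental flow matrix; columns sum to 0

def concentrations(t, c0=np.array([1.0, 0.0])):
    """Concentrations at time t for dc/dt = A c, via c(t) = V e^{wt} V^-1 c0."""
    w, V = np.linalg.eig(A)
    return (V * np.exp(w * t)) @ np.linalg.solve(V, c0)

print(concentrations(5.0))     # partway to equilibrium
print(concentrations(100.0))   # near the steady-state split k21 : k12
```

Because the columns of A sum to zero, total tracer is conserved; the zero eigenvalue carries the equilibrium distribution and the negative eigenvalue sets the relaxation rate.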

Relevance:

10.00%

Publisher:

Abstract:

The results in this paper are motivated by two analogies. First, m-harmonic functions in R^n are extensions of the univariate algebraic polynomials of odd degree 2m-1. Second, Gauss' and Pizzetti's mean value formulae are natural multivariate analogues of the rectangular and Taylor's quadrature formulae, respectively. This point of view suggests that some theorems concerning quadrature rules could be generalized to results about integration of polyharmonic functions. This is done for the Tchakaloff-Obrechkoff quadrature formula and for the Gaussian quadrature with two nodes.
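The Gaussian quadrature with two nodes mentioned at the end is the classical 2-point Gauss-Legendre rule, exact for all polynomials up to degree 3:

```python
import math

def gauss2(f, a, b):
    """Two-node Gauss-Legendre rule on [a, b]: nodes at the images of
    +/- 1/sqrt(3), both weights equal; exact for cubics."""
    m, h = (a + b) / 2.0, (b - a) / 2.0   # midpoint and half-length
    x = 1.0 / math.sqrt(3.0)
    return h * (f(m - h * x) + f(m + h * x))

# Exactness on cubics: the integral of t^3 over [0, 1] is 1/4.
print(gauss2(lambda t: t**3, 0.0, 1.0))  # 0.25 up to rounding
```

With only two function evaluations the rule matches the exact integral of any cubic, the one-dimensional fact whose polyharmonic analogue the paper develops.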

Relevance:

10.00%

Publisher:

Abstract:

The evaluation of the microscopic generalized interacting boson model (GIBM) Hamiltonian, deduced from the general microscopic nuclear Hamiltonian via the collective O(A-1)-invariant microscopic Hamiltonian of the general restricted dynamics model (RDM), in the case of a central multipole and multipole-Gauss type effective NN-potential, is briefly discussed. A GIBM version which includes all sixth-order terms in the expansion of the collective part of the NN-potential has been obtained. This GIBM Hamiltonian contains additional terms compared with the standard (sd-boson) interacting boson model (IBM). Microscopic expressions for the standard IBM Hamiltonian parameters in terms of the employed effective NN-potential parameters have also been obtained.