999 results for Costs and Cost Analysis


Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The usual load-flow programs were developed, in general, to simulate electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms behind these formulations were based mostly on the characteristics of transmission systems, which were the main concern of engineers and researchers. The physical characteristics of transmission systems are quite different from those of distribution systems. In transmission systems, voltage levels are high and lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into account. Moreover, transmission loads have a macro nature, such as cities, neighborhoods, or large industries; these loads are practically balanced, which reduces the need for a three-phase load-flow methodology. Distribution systems, on the other hand, present different characteristics: voltage levels are low compared to transmission, which almost cancels the capacitive effects of the lines, and the loads are transformers whose secondaries supply small consumers, often single-phase, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. In addition, equipment such as voltage regulators, which rely on both phase- and line-voltage concepts in their operation, requires a three-phase methodology to allow the simulation of its real behavior. For these reasons, this work first develops a three-phase load-flow method to simulate the steady-state behavior of distribution systems. The Power Summation Algorithm, already widely tested and approved by researchers and engineers in the simulation of radial distribution systems, mainly in single-phase representation, was used as the basis for the three-phase method. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases, while the earth effect is handled through the Carson reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various configurations according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered; the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data.

In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting further optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system; for voltage-profile correction, it is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to several feeders are presented, to give insight into their performance and accuracy.
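
As a hedged illustration of the Power Summation Algorithm the abstract builds on, the sketch below runs the classic backward/forward sweep on a toy single-phase radial feeder. All feeder data (impedances, loads, base voltage) and the tolerance are invented for the example; the thesis extends this scheme to three phases, with magnetic coupling handled via Carson-reduced line matrices.

```python
# Minimal single-phase sketch of the Power Summation Algorithm on a toy
# radial feeder. Feeder data and tolerance are illustrative assumptions.

# branches ordered from the substation outward:
# (from_node, to_node, series_impedance_ohm)
branches = [(0, 1, 0.5 + 0.9j), (1, 2, 0.4 + 0.7j)]
loads = {1: 200e3 + 80e3j, 2: 150e3 + 60e3j}   # complex node loads [VA]
v = {n: 13.8e3 + 0j for n in (0, 1, 2)}        # flat start at rated voltage

for _ in range(50):
    # backward sweep: each branch carries the downstream load, plus
    # downstream branch flows, plus its own series loss
    s_branch = {}
    for f, t, z in reversed(branches):
        s_down = loads.get(t, 0) + sum(s_branch[b] for b in branches if b[0] == t)
        i_mag2 = abs(s_down / v[t]) ** 2               # |I|^2 seen at node t
        s_branch[(f, t, z)] = s_down + z * i_mag2      # add series loss
    # forward sweep: update voltages from the substation outward
    max_dv = 0.0
    for f, t, z in branches:
        i = (s_branch[(f, t, z)] / v[f]).conjugate()   # branch current
        v_new = v[f] - z * i                           # voltage drop
        max_dv = max(max_dv, abs(v_new - v[t]))
        v[t] = v_new
    if max_dv < 1e-4:                                  # assumed tolerance [V]
        break

print({n: round(abs(u), 1) for n, u in v.items()})     # |V| per node
```

The two objective functions described above would sit on top of such a solver: for loss reduction, f = sum of P_loss over all branches; for voltage-profile correction, f = sum over nodes of (V_i - V_rated)^2.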

Relevance:

30.00%

Publisher:

Abstract:

Frequency selective surfaces (FSS) are structures consisting of periodic arrays of conducting elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic screens, for applications in the microwave and millimeter-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas, and microwave ovens, for example. Their main purpose is to filter frequency bands, which can be transmitted or rejected depending on the requirements of the application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi, and WiMAX, whose services are in high demand, have required the development of antennas whose main features are low profile, low cost, and reduced dimensions and weight. In this context, the microstrip antenna is an excellent choice for today's communication systems because, in addition to meeting these requirements intrinsically, planar structures are easy to manufacture and to integrate with other components in microwave circuits. Consequently, the analysis and synthesis of these devices, mainly because of the wide range of possible shapes, sizes, and operating frequencies of their elements, has been carried out with full-wave models such as the finite element method, the method of moments, and the finite-difference time-domain method. These methods are accurate but demand great computational effort. In this context, computational intelligence (CI) has been used successfully, as a very appropriate auxiliary tool, in the design and optimization of planar microwave structures, given the complexity of the geometry of the antennas and FSS considered. Computational intelligence is inspired by natural phenomena such as learning, perception, and decision-making, using techniques such as artificial neural networks, fuzzy logic, fractal geometry, and evolutionary computation. This work studies the application of computational intelligence, using metaheuristics such as genetic algorithms and swarm intelligence, to the optimization of antennas and frequency selective surfaces. Genetic algorithms are computational search methods, based on genetics and on the theory of natural selection proposed by Darwin, used to solve complex problems, e.g., problems whose search space grows with the size of the problem. Particle swarm optimization relies on collective intelligence and has been applied to optimization problems in many areas of research. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. The structures considered are a ring-type planar microstrip monopole and a cross-dipole FSS. Optimization algorithms were developed, and results were obtained for the optimized geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, built, and measured. The measured results showed excellent agreement with the simulated ones. Moreover, the results obtained in this study were compared to those simulated with commercial software, and excellent agreement was also observed.

Specifically, the efficiency of the CI techniques used was evidenced by simulated and measured results: a genetic algorithm was used to optimize the bandwidth of an antenna for wideband or UWB (Ultra Wideband) operation, and a particle swarm optimization algorithm was used to optimize the bandwidth by specifying the length of the air gap between two frequency selective surfaces.
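
As a hedged sketch of the swarm-intelligence side of the work, the code below applies a textbook particle swarm optimizer to a single design variable, in the spirit of tuning the air gap between two FSS layers. The cost function is a pure placeholder (in the thesis, fitness would come from full-wave simulation or measurement), and all swarm parameters are assumptions.

```python
# Textbook particle swarm optimization of one design variable (air gap).
# The cost function and all constants below are illustrative stand-ins.
import random

def cost(gap_mm):
    # hypothetical placeholder: pretend the optimum air gap is near 6.0 mm
    return (gap_mm - 6.0) ** 2

n_particles, iters = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights
pos = [random.uniform(1.0, 20.0) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                            # personal best positions
gbest = min(pos, key=cost)                # global best position

for _ in range(iters):
    for i in range(n_particles):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=cost)

print(f"best air gap ~ {gbest:.2f} mm")
```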

Relevance:

30.00%

Publisher:

Abstract:

Robots are increasingly present in several areas of our society; however, they are still considered expensive equipment, accessible to few people. This work consists of the development of control techniques and architectures that make possible the construction and programming of low-cost robots with low programming and building complexity. One key aspect of the proposed architecture is the use of audio interfaces to control actuators and read sensors, thus allowing any device that can produce sound to be used as the control unit of a robot. The work also includes the development of web-based programming environments that allow computers or mobile phones to act as control units of the robot, which can be remotely programmed and controlled. Possible applications of this low-cost robotic platform are also presented, mainly its educational use, which was experimentally validated by teachers and students of several undergraduate courses. We also present an analysis of data obtained from interviews with the students before and after the use of our platform, which confirms its acceptance as a teaching support tool.
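
A hedged sketch of the audio-interface idea: since any sound-producing device can act as the control unit, a command can be encoded as a tone and played into the robot's control circuit. The frequency-to-command mapping and the file-based playback below are illustrative assumptions, not the thesis's actual encoding.

```python
# Encode a motor command as a sine tone and write it to a WAV file that a
# sound card or phone could play into the robot's control circuit. The
# frequency-to-command mapping is purely hypothetical.
import math, struct, wave

RATE = 44100                       # samples per second

def command_tone(freq_hz, duration_s=0.5, path="cmd.wav"):
    n = int(RATE * duration_s)
    frames = b"".join(
        struct.pack("<h", int(32767 * math.sin(2 * math.pi * freq_hz * t / RATE)))
        for t in range(n))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)          # mono
        f.setsampwidth(2)          # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(frames)

# hypothetical mapping: 1 kHz = left motor forward, 2 kHz = right motor forward
command_tone(1000, path="left_forward.wav")
command_tone(2000, path="right_forward.wav")
```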

Relevance:

30.00%

Publisher:

Abstract:

Currently, one of the biggest challenges in data mining is performing cluster analysis on complex data. Several techniques have been proposed, but in general they achieve good results only within specific domains, and there is no consensus on the best way to group this kind of data. These techniques usually fail because of unrealistic assumptions about the true probability distribution of the data. Based on this, this thesis proposes a new measure, based on the Cross Information Potential, that uses representative points of the dataset and statistics extracted directly from the data to measure the interaction between groups. The proposed approach retains all the advantages of this information-theoretic descriptor while overcoming the limitations imposed by its own nature. From this measure, two cost functions and three algorithms are proposed to perform cluster analysis. Because the use of Information Theory captures the relationship between different patterns regardless of assumptions about the nature of that relationship, the proposed approach achieved better performance than the main algorithms in the literature. These results hold both for synthetic data designed to test the algorithms in specific situations and for real data drawn from problems in different fields.
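
As a hedged sketch of the information-theoretic quantity the thesis starts from, the code below estimates the Cross Information Potential between two point sets with Gaussian Parzen windows. The kernel width sigma is an assumed hyperparameter, and the thesis's measure further replaces the full double sum with representative points and data statistics.

```python
# Cross information potential (CIP) between two clusters, estimated with
# Gaussian kernels over all point pairs. Sigma is an assumed hyperparameter.
import numpy as np

def cross_information_potential(X, Y, sigma=1.0):
    """CIP(X, Y) = mean over all pairs (x, y) of a Gaussian kernel G(x - y)."""
    diff = X[:, None, :] - Y[None, :, :]          # all pairwise differences
    sq = np.sum(diff ** 2, axis=-1)               # squared Euclidean distances
    d = X.shape[1]
    norm = (2 * np.pi * sigma ** 2) ** (-d / 2)   # kernel normalization
    return norm * np.mean(np.exp(-sq / (2 * sigma ** 2)))

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(100, 2))          # cluster around the origin
B = rng.normal(5.0, 1.0, size=(100, 2))          # well-separated cluster
print(cross_information_potential(A, A))          # high: strong interaction
print(cross_information_potential(A, B))          # near zero: little overlap
```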

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a frequency selective surface (FSS) consists of a two-dimensional periodic structure mounted on a dielectric substrate, capable of selecting signals in one or more frequency bands of interest. In search of better performance, more compact dimensions, and low manufacturing cost, among other characteristics, these periodic structures have been continually optimized over time. Because of their spectral characteristics, similar to those of band-stop or band-pass filters, FSS have been studied and used in several applications for more than four decades. Designing an FSS with a periodic structure composed of pre-fractal elements facilitates the tuning of these spatial filters and the adjustment of their electromagnetic parameters, enabling a compact design that generally has a stable frequency response and superior performance relative to its Euclidean counterpart. The unique properties of fractal geometries have proven useful, mainly in the production of antennas and frequency selective surfaces, enabling innovative solutions and commercial applications in the microwave range. In recent applications, FSS are used to modify indoor propagation environments (an emerging concept called the "wireless building"). In this context, the use of pre-fractal elements has also shown promising results, allowing more effective filtering of more than one frequency band with a single-layer structure. This thesis addresses the design of FSS using pre-fractal elements based on Vicsek, Peano, and teragon geometries, which act as band-stop spatial filters. The transmission properties of the periodic surfaces are analyzed in order to design compact and efficient devices with stable frequency responses, applicable to the microwave range and suitable for indoor communications. The results are discussed in terms of the electromagnetic effect of varying parameters such as the fractal iteration number (or fractal level), the scale factor, the fractal dimension, and the periodicity of the FSS, according to the pre-fractal element applied to the surface. The analysis of the influence of the fractal dimension on the resonant properties of an FSS is a new contribution to research on microwave devices that use fractal geometry. Owing to the characteristics and geometric shape of the Peano pre-fractal elements, the possibility of reconfiguring these structures is also investigated and discussed. The thesis also addresses the construction of efficient selective filters with new configurations of teragon pre-fractal patches, proposed to control WLAN coverage in indoor environments by rejecting signals in the 2.4–2.5 GHz (IEEE 802.11b) and 5.0–6.0 GHz (IEEE 802.11a) bands. The FSS are initially analyzed through simulations performed with the commercial software packages Ansoft Designer and HFSS. The fractal design methodology is validated by experimental characterization of the built prototypes, alternatively using different measurement setups, with commercial horn antennas and with microstrip monopoles fabricated for low-cost measurements.
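
As a hedged illustration of the pre-fractal patch geometries discussed, the sketch below generates a Vicsek pre-fractal mask by Kronecker iteration; the fractal level corresponds to the iteration number varied in the thesis, while the mapping from binary mask to printed patch dimensions is left out as an assumption.

```python
# Generate a Vicsek pre-fractal patch mask: each iteration maps every filled
# cell onto the 5-cell "plus" pattern of a 3x3 grid.
import numpy as np

PLUS = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]], dtype=np.uint8)

def vicsek(level):
    mask = np.array([[1]], dtype=np.uint8)
    for _ in range(level):
        mask = np.kron(mask, PLUS)       # replace each filled cell by PLUS
    return mask                          # 3^level x 3^level binary patch mask

for k in range(1, 4):
    m = vicsek(k)
    # fractal dimension of the ideal Vicsek set: log(5)/log(3) ~ 1.465
    print(k, m.shape, m.sum(), "filled cells")
```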

Relevance:

30.00%

Publisher:

Abstract:

Pumping fluids through pipelines is the most economical and safe way to transport them. That explains why, in 1999, Europe had about 30,000 km [7] of pipelines of several diameters, transporting millions of cubic meters of crude oil and refined products and belonging to CONCAWE (the European petroleum companies' association for health, environment, and safety, which brings together several petroleum companies). In Brazil, about 18,000 km of pipelines transport millions of cubic meters of liquids and gases. In 1999, nine accidents were reported to CONCAWE, one of them with a fatal victim. The oil lost amounted to 171 m3, equivalent to 0.2 parts per million of the total transported volume. Even considering these figures, the costs involved in an accident can be high. An accident of great proportions can bring loss of human lives, severe environmental damage, loss of the spilled product, loss of profit, damage to the company's image, and high recovery costs. Accordingly, and in some cases because of legal requirements, companies are increasingly investing in pipeline leak detection systems based on computer algorithms that operate in real time, seeking to further minimize spilled volumes and thereby reduce environmental impacts and costs. In general, all software-based systems present some type of false alarm, and there is a trade-off between the sensitivity of the system and the number of false alarms. This work reviews the existing methods and concentrates on the analysis of a specific system based on hydraulic noise, Pressure Point Analysis (PPA). We show the most important aspects to be considered in the implementation of a Leak Detection System (LDS), from the initial risk-analysis phase through the design bases, the design itself, the choice of the field instrumentation required by several LDS, implementation, and testing. We analyze events (noises) originating in the flow system that can generate false alarms, and we present a computer algorithm that filters out those noises automatically.
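
As a hedged sketch of the statistical idea behind Pressure Point Analysis, the code below compares a short-term mean of the pressure signal with a long-term mean and raises an alarm on a sustained drop. Window lengths, the threshold, and the synthetic trace are illustrative; tuning the threshold is exactly the sensitivity-versus-false-alarm compromise the abstract describes.

```python
# PPA-style leak monitor: a leak produces a sustained pressure drop, so alarm
# when the short-term mean falls below the long-term mean by a threshold.
from collections import deque

def ppa_monitor(samples, short_n=5, long_n=50, threshold=0.3):
    short_w, long_w = deque(maxlen=short_n), deque(maxlen=long_n)
    for k, p in enumerate(samples):
        long_w.append(p)
        short_w.append(p)
        if len(long_w) == long_n:
            drop = sum(long_w) / long_n - sum(short_w) / short_n
            if drop > threshold:          # sustained drop -> possible leak
                yield k, drop

# synthetic pressure trace [bar]: steady at 10, leak-like drop after t=200
trace = [10.0] * 200 + [9.2] * 100
for t, drop in ppa_monitor(trace):
    print(f"possible leak at sample {t}, mean drop {drop:.2f} bar")
    break
```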

Relevance:

30.00%

Publisher:

Abstract:

This study estimated the operating cost of producing conventional and transgenic soybean in the Médio Paranapanema region, State of São Paulo. Results from three regional evaluation experiments with 19 soybean cultivars, 17 conventional and 2 transgenic, were used. The cost structures employed were the effective operating cost (COE) and the total operating cost (COT). The COT per hectare of transgenic soybean was 10.7% lower than that of conventional soybean; however, the unit cost per sack was lower for conventional soybean because of its higher yield. Mean yield was 35.2 sacks per hectare for the conventional cultivars and 31.3 for the transgenic ones. The largest percentage difference in COT occurred in the seed and herbicide items. The production cost per sack ranged from R$ 27.70 to R$ 39.50 for conventional soybean and from R$ 29.50 to R$ 40.10 for transgenic soybean. The high cost of inputs compromised the viability of the activity in both production systems. Continued evaluation of transgenic soybean cultivars is needed to identify those best adapted to the region and to make technical recommendations safer.
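
A quick arithmetic check of the unit-cost claim, using only figures given in the abstract (the 10.7% COT difference and the yields), with the conventional COT per hectare normalized to 100 since absolute per-hectare values are not reported:

```python
# Unit cost per sack = COT per hectare / yield in sacks per hectare,
# with conventional COT normalized to 100 (an assumption of this sketch).
cot_conventional = 100.0
cot_transgenic = cot_conventional * (1 - 0.107)    # 10.7% lower per hectare
yield_conventional = 35.2                          # sacks per hectare
yield_transgenic = 31.3

unit_conv = cot_conventional / yield_conventional  # cost index per sack
unit_trans = cot_transgenic / yield_transgenic
print(f"conventional: {unit_conv:.3f}  transgenic: {unit_trans:.3f}")
# -> conventional: 2.841  transgenic: 2.853 -- the lower yield of the
# transgenic cultivars offsets their lower per-hectare cost, matching the
# abstract's conclusion.
```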

Relevance:

30.00%

Publisher:

Abstract:

Broiler production is an important economic activity in the state of Paraná and, like any other, is subject to risk. The objective of this work was to carry out an economic analysis of integrated broiler production, assessing risk across climate-controlled, automatic, and manual housing systems. Using product price, productivity, and production costs as risk variables, it was possible to identify the main sources of risk and their influence on net income. The results showed that the profitability of the activity is more sensitive to revenue components than to costs, with price being the most sensitive variable. It was also found that the climate-controlled housing presents the possibility of higher losses at lower risk levels and, as risk increases, offers more attractive returns than the automatic and manual systems. The manual system was the one that only began to show returns at higher risk levels (above 25%).
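
As a hedged sketch of the kind of risk analysis described, the code below runs a small Monte Carlo on net income with price, productivity, and cost as random variables. Every distribution parameter is an illustrative invention, not a figure from the study.

```python
# Monte Carlo sensitivity sketch: net income = price * productivity - cost,
# with each risk variable drawn from an assumed normal distribution.
import random

def net_income():
    price = random.gauss(2.0, 0.3)           # R$ per kg, assumed
    productivity = random.gauss(30000, 2000)  # kg per batch, assumed
    cost = random.gauss(50000, 3000)          # R$ per batch, assumed
    return price * productivity - cost

random.seed(1)
samples = sorted(net_income() for _ in range(10000))
mean = sum(samples) / len(samples)
p05 = samples[int(0.05 * len(samples))]       # 5th percentile: downside risk
print(f"mean net income R$ {mean:,.0f}; 5% worst cases below R$ {p05:,.0f}")
```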