873 results for Electric power transmission


Relevance:

30.00%

Publisher:

Abstract:

A power transformer needs continuous monitoring and fast protection, as it is a very expensive piece of equipment and an essential element of an electrical power system. The most common protection technique is percentage differential logic, which discriminates between internal faults and other operating conditions. Unfortunately, some operating conditions of power transformers can mislead the conventional protection and negatively affect power system stability. This study proposes a new algorithm to improve protection performance by using fuzzy logic, artificial neural networks and genetic algorithms. An electrical power system was modelled using the Alternative Transients Program (ATP) software to obtain the operating conditions and fault situations needed to test the developed algorithm as well as a commercial differential relay. Results show improved reliability and a faster response of the proposed technique compared with conventional ones.
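As a point of reference, the percentage differential criterion on which the study builds can be sketched in a few lines; this is a minimal generic sketch, and the slope and pickup values are placeholders rather than figures from the study.

```python
def percentage_differential_trip(i_primary, i_secondary, slope=0.3, pickup=0.2):
    """Generic percentage differential check on current phasors (per unit).

    Trips when the operating (differential) current exceeds a fixed pickup
    plus a percentage of the restraint (through) current.
    """
    i_op = abs(i_primary + i_secondary)                 # differential current
    i_res = (abs(i_primary) + abs(i_secondary)) / 2.0   # restraint current
    return i_op > pickup + slope * i_res

# Internal fault: current flows into the zone from both sides -> trip
print(percentage_differential_trip(1.0 + 0j, 0.2 + 0j))    # True
# Through load: the currents cancel in the differential path -> restrain
print(percentage_differential_trip(1.0 + 0j, -0.98 + 0j))  # False
```

Conditions such as magnetizing inrush can satisfy a simple criterion of this kind spuriously, which is the type of misleading operating condition the proposed fuzzy/ANN/GA algorithm targets.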

Relevance:

30.00%

Publisher:

Abstract:

In the present work, we report experimental results for the stopping power of He in Al2O3 films, obtained using both transmission and Rutherford backscattering techniques. We have performed measurements over a wide energy range, from 60 to 3000 keV, covering the stopping-power maximum. The results of this work are compared with previously published data, showing good agreement in the high-energy range but evidencing discrepancies in the low-energy region. The existing theories follow the same tendency: good theoretical-experimental agreement at higher energies, but they fail to reproduce previous and present results in the low-energy regime. On the other hand, it is interesting to note that the semi-empirical SRIM code reproduces the present data quite well. (C) 2012 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The complexity of power systems has increased in recent years due to the operation of existing transmission lines closer to their limits, using flexible AC transmission system (FACTS) devices, and also due to the increased penetration of new types of generators that have more intermittent characteristics and lower inertial response, such as wind generators. This changing nature of a power system has a considerable effect on its dynamic behavior, resulting in power swings, dynamic interactions between different power system devices, and less synchronized coupling. This paper presents some analyses of this changing nature of power systems and their dynamic behaviors to identify critical issues that limit the large-scale integration of wind generators and FACTS devices. In addition, this paper addresses some general concerns about high levels of compensation in different grid topologies. The studies in this paper are conducted on the New England and New York power system model under both small and large disturbances. From the analyses, it can be concluded that high compensation can reduce the security limits under certain operating conditions, and that the modes related to operating slip and shaft stiffness are critical, as they may limit the large-scale integration of wind generation.
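To make the slip- and shaft-stiffness-related modes mentioned above concrete, the following is a minimal sketch of a generic two-mass turbine-generator shaft model and its eigenmodes; the inertia, stiffness and damping values are arbitrary placeholders, not parameters of the New England and New York test system.

```python
import numpy as np

# Two-mass shaft model: turbine (J1) and generator (J2) coupled by a shaft of
# stiffness K, with light damping. States: [theta1, theta2, omega1, omega2].
J1, J2 = 3.0, 1.0        # inertias (placeholder units)
K = 150.0                # shaft stiffness
D1, D2 = 0.05, 0.05      # damping coefficients

A = np.array([
    [0.0,      0.0,     1.0,      0.0],
    [0.0,      0.0,     0.0,      1.0],
    [-K / J1,  K / J1, -D1 / J1,  0.0],
    [ K / J2, -K / J2,  0.0,     -D2 / J2],
])

for lam in np.linalg.eigvals(A):
    if lam.imag > 1e-6:
        print(f"torsional mode: {lam.imag / (2 * np.pi):.2f} Hz, "
              f"damping ratio {-lam.real / abs(lam):.4f}")
```

In a full wind-integration or compensation study, a mechanical model of this kind is coupled to the network and converter dynamics, which is where the interactions discussed in the paper arise.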

Relevance:

30.00%

Publisher:

Abstract:

International Mention

Relevance:

30.00%

Publisher:

Abstract:

Investigation of impulsive signals originating from Partial Discharge (PD) phenomena represents an effective tool for preventing electrical failures in High Voltage (HV) and Medium Voltage (MV) systems. The determination of both sensor and instrument bandwidths is the key to achieving meaningful measurements, that is to say, obtaining the maximum signal-to-noise ratio (SNR). The optimum bandwidth depends on the characteristics of the system under test, which can often be represented as a transmission line characterized by signal attenuation and dispersion phenomena. It is therefore necessary to develop both models and techniques that can accurately characterize the PD propagation mechanisms in each system and work out the frequency characteristics of the PD pulses at the detection point, in order to design sensors able to carry out on-line PD measurements with maximum SNR. Analytical models will be devised in order to predict PD propagation in MV apparatus. Furthermore, simulation tools will be used where complex geometries make analytical models unfeasible. In particular, PD propagation in MV cables, transformers and switchgear will be investigated, taking into account both radiated and conducted signals associated with PD events, in order to design proper sensors.
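The attenuation and dispersion mechanism described above can be illustrated with a crude frequency-domain line model; the pulse shape, propagation velocity and skin-effect-like attenuation law below are assumptions for illustration only, not results from this work.

```python
import numpy as np

# A PD-like pulse propagating a distance L along a lossy cable is attenuated
# and dispersed by a frequency-dependent propagation constant gamma(f).
fs = 1e9                                   # sampling rate, 1 GS/s (arbitrary)
t = np.arange(0, 2e-6, 1 / fs)             # 2 us window
pulse = np.exp(-((t - 0.1e-6) / 20e-9) ** 2)   # injected PD-like pulse

f = np.fft.rfftfreq(t.size, 1 / fs)
L = 500.0                                  # propagation distance in metres (placeholder)
v = 1.7e8                                  # propagation velocity in m/s (typical for XLPE cable)
alpha = 5e-7 * np.sqrt(f)                  # attenuation in Np/m (assumed skin-effect-like law)
beta = 2 * np.pi * f / v                   # phase constant

H = np.exp(-(alpha + 1j * beta) * L)       # line transfer function
received = np.fft.irfft(np.fft.rfft(pulse) * H, n=t.size)

print(f"peak at injection: {pulse.max():.3f}, peak at detection: {received.max():.3f}")
```

The high-frequency content of the pulse is attenuated most strongly, which is exactly why the optimum sensor bandwidth depends on where along the system the PD pulse is detected.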

Relevance:

30.00%

Publisher:

Abstract:

In this dissertation some novel indices for vulnerability and robustness assessment of power grids are presented. Such indices are mainly defined from the structure of transmission power grids, with the aim of blackout (BO) prevention and mitigation. Numerical experiments showing how they could be used alone or in coordination with pre-existing indices to reduce the effects of BOs are discussed. These indices are introduced within three different subjects. The first subject examines economic aspects of grid operation and their effects on BO propagation. The simulations support the conclusion that operating the grid in the most profitable way can increase the size or frequency of BOs; conversely, some uneconomical ways of supplying energy are shown to be less affected by BO phenomena. In the second subject, new topological indices are devised to address the question of which are the best buses at which to place distributed generation. The combined use of two indices is shown to be a promising alternative for extracting a grid's significant features regarding robustness against BOs and distributed generation. For this purpose, a new index based on outage shift factors is used along with a previously defined electric centrality index. The third subject is static robustness analysis of electric networks from a purely structural point of view. A pair of existing topological indices (namely the degree index and the clustering coefficient) is combined to show how degradation of the network structure can be accelerated. Blackout simulations were carried out using the DC power flow method and models of transmission networks from the USA and Europe.
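Since the blackout simulations rely on the DC power flow method, a minimal textbook version of that calculation is sketched below; the three-bus network data is invented for illustration and is not taken from the dissertation.

```python
import numpy as np

# DC power flow assumptions: flat voltage magnitudes, lossless lines,
# small angle differences. Branch list: (from_bus, to_bus, reactance in p.u.)
branches = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]
n_bus = 3
P = np.array([0.0, -0.6, -0.4])    # net injections in p.u.; bus 0 is the slack

# build the nodal susceptance matrix B
B = np.zeros((n_bus, n_bus))
for i, j, x in branches:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# solve the reduced system with the slack angle fixed at zero
theta = np.zeros(n_bus)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# branch flows follow from the angle differences
for i, j, x in branches:
    print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")
```

In a typical DC-power-flow blackout simulation, this solve is repeated after each line outage so that overloads and further trips can be tracked as the cascade develops.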

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to develop an in-depth analysis of inductive power transfer (or wireless power transfer, WPT) along a metamaterial composed of cells arranged in a planar configuration, in order to deliver power to a receiver sliding over them. In this way, the problem of the efficiency being strongly limited by the weak coupling between emitter and receiver can be obviated, and the transmission distance can be significantly increased. This study uses a circuit approach and magnetoinductive wave (MIW) theory in order to explain the behavior of the transmission coefficient and the efficiency from both a circuit and an experimental point of view. Moreover, flat spiral resonators are used as metamaterial cells, as particularly indicated in the literature for WPT metamaterials operating at MHz frequencies (5-30 MHz). Finally, this thesis presents a complete electrical characterization of multilayer and multiturn flat spiral resonators and, in particular, proposes a new approach for the resistance calculation through finite element simulations, in order to take into account all the high-frequency parasitic effects. Multilayer and multiturn flat spiral resonators are studied in order to decrease the operating frequency down to the kHz range, while maintaining small external dimensions and allowing the metamaterials to be supplied by electronic power converters (resonant inverters).
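The circuit view of a magnetoinductive waveguide can be sketched as a chain of magnetically coupled LC resonators driven at one end and loaded at the other; the cell values, coupling factor and load used here are illustrative placeholders, not the resonators characterized in the thesis.

```python
import numpy as np

N = 10                                         # number of metamaterial cells
L_cell, C_cell, R_cell = 2e-6, 1.25e-10, 0.2   # H, F, Ohm -> f0 near 10 MHz
M = 0.15 * L_cell                              # nearest-neighbour mutual inductance (assumed)
R_load = 5.0                                   # load on the last cell
V_drive = 1.0                                  # source voltage applied to the first cell

def chain_currents(f):
    """Solve the mesh currents of the coupled-resonator chain at frequency f."""
    w = 2 * np.pi * f
    z_self = R_cell + 1j * w * L_cell + 1 / (1j * w * C_cell)
    Z = np.diag(np.full(N, z_self, dtype=complex))
    for k in range(N - 1):
        Z[k, k + 1] = Z[k + 1, k] = 1j * w * M   # nearest-neighbour coupling
    Z[-1, -1] += R_load
    V = np.zeros(N, dtype=complex)
    V[0] = V_drive
    return np.linalg.solve(Z, V)

f0 = 1 / (2 * np.pi * np.sqrt(L_cell * C_cell))
I = chain_currents(f0)
print(f"f0 = {f0 / 1e6:.2f} MHz, |I_last / I_first| = {abs(I[-1] / I[0]):.3f}")
```

Sweeping the drive frequency over the magnetoinductive pass-band of such a chain is the circuit-level way of obtaining the transmission coefficient discussed in the thesis.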

Relevance:

30.00%

Publisher:

Abstract:

The electric utility business is an inherently dangerous field, with employees exposed to many potential hazards daily. One such hazard is an arc flash. An arc flash is a rapid release of energy, referred to as incident energy, caused by an electric arc. Due to the random nature and occurrence of an arc flash, one can only prepare for and minimize the extent of harm to oneself and other employees and the damage to equipment caused by such a violent event. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that an arc-flash assessment be performed by companies whose employees work on or near energized equipment, to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) current short-circuit and relay coordination software package, ASPEN OneLiner™, one of the first software packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. At the same time, the package is benchmarked against the equations provided in IEEE Std 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards, analysis methods (both software-based and empirically derived equations), issues of concern with the calculation methods, and the work conducted at MP. This work also produced two offline software products to conduct and verify an offline arc-flash hazard analysis.
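For orientation only, the benchmark equations of IEEE Std 1584-2002 scale a normalized incident energy to the actual arc duration and working distance roughly as sketched below; the normalized energy E_n, calculation factor Cf and distance exponent x come from the standard's empirical equations and tables, and every number used here is a hypothetical placeholder.

```python
# Hedged sketch of the incident-energy scaling step in the IEEE 1584-2002
# empirical method. E_n is the normalized incident energy (for a 0.2 s arc at
# 610 mm) obtained from the standard's empirical equations; Cf and x are
# table values that depend on voltage class and equipment configuration.

def scale_incident_energy(E_n, t_arc_s, distance_mm, Cf=1.0, x=2.0):
    """Scale normalized incident energy to the arc duration and distance."""
    return 4.184 * Cf * E_n * (t_arc_s / 0.2) * (610.0 / distance_mm) ** x

# hypothetical example: E_n = 1.2, 0.5 s clearing time, 910 mm working distance
print(f"incident energy ~ {scale_incident_energy(1.2, 0.5, 910.0):.2f} J/cm^2")
```

The clearing time comes from the relay coordination study, which is why a package such as ASPEN OneLiner, which already holds the short-circuit and relay data, is a natural place to automate the calculation.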

Relevance:

30.00%

Publisher:

Abstract:

Wind-power-based generation has been growing rapidly worldwide during the recent past. In order to transmit large amounts of wind power over long distances, system planners may often add series compensation to existing transmission lines, owing to several benefits such as an improved steady-state power transfer limit, improved transient stability, and efficient utilization of transmission infrastructure. The application of series capacitors has posed resonant-interaction concerns, such as subsynchronous resonance (SSR) with conventional turbine-generators. Wind turbine-generators may also be susceptible to such resonant interactions. However, not much information is available in the literature, and even engineering standards are yet to address these issues. The motivating problem for this research is an actual system switching event that resulted in undamped oscillations in a 345-kV series-compensated, typical ring-bus power system configuration. Based on time-domain ATP (Alternative Transients Program) modeling, simulations, and analysis of system event records, the occurrence of subsynchronous interactions within the existing 345-kV series-compensated power system has been investigated. The effects of various small-signal and large-signal power system disturbances with both identical and non-identical wind turbine parameters (such as with a statistical spread) have been evaluated. The effect of parameter variations on subsynchronous oscillations has been quantified using 3D-DFT plots, and the oscillations have been identified as being due to electrical self-excitation effects rather than torsional interaction. Further, the generator no-load reactance and the rotor-side converter inner-loop controller gains have been identified as having the greatest influence on either damping or exacerbating the self-excited oscillations. A higher-order spectral analysis method based on modified Prony estimation has been successfully applied to the field records, identifying dominant 9.79 Hz subsynchronous oscillations. Recommendations have been made for exploring countermeasures.
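For context, the classical (unmodified) Prony estimation step behind the modal analysis mentioned above can be sketched as follows; the synthetic record, sampling rate and model order are placeholders, and the thesis's modified higher-order spectral method is not reproduced here.

```python
import numpy as np

fs = 960.0                                # sampling rate, Hz (placeholder)
t = np.arange(0, 2.0, 1 / fs)
y = np.exp(-0.3 * t) * np.cos(2 * np.pi * 9.79 * t)   # decaying 9.79 Hz mode

p = 2                                     # model order: 2 per real damped sinusoid
N = len(y)

# linear-prediction step: y[n] = -a1*y[n-1] - ... - ap*y[n-p]
A = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])
b = y[p:N]
a = np.linalg.lstsq(A, -b, rcond=None)[0]

# roots of the characteristic polynomial give the discrete-time modes
for z in np.roots(np.concatenate(([1.0], a))):
    freq = np.angle(z) * fs / (2 * np.pi)
    sigma = np.log(np.abs(z)) * fs
    if freq > 0:
        print(f"mode: {freq:6.2f} Hz, sigma = {sigma:6.2f} 1/s")
```

On noisy field records the model order is typically raised well above the number of expected modes and spurious roots are discarded; refinements of this kind are what motivate modified Prony variants.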

Relevance:

30.00%

Publisher:

Abstract:

Nanoscale research in energy storage has recently focused on investigating the properties of nanostructures in order to increase energy density, power rate, and capacity. To better understand the intrinsic properties of nanomaterials, a new and advanced in situ system was designed that allows atomic-scale observation of materials under external fields. A special holder equipped with a scanning tunneling microscopy (STM) probe inside a transmission electron microscopy (TEM) system was used to perform in situ studies of the mechanical, electrical, and electrochemical properties of nanomaterials. The nanostructures of titanium dioxide (TiO2) nanotubes were characterized by electron imaging, diffraction, and chemical analysis techniques inside the TEM. TiO2 nanotubes are one of the candidate anode materials for lithium-ion batteries, so it is necessary to study their morphological, mechanical, electrical, and electrochemical properties at the atomic level. The synthesis of TiO2 nanotubes showed that the aspect ratio of TiO2 could be controlled by processing parameters such as anodization time and voltage. Ammonium hydroxide (NH4OH)-treated TiO2 nanotubes showed unexpected instability: observation revealed that the nanotubes disintegrated into nanoparticles and the tubular morphology vanished after annealing. The nitrogen compounds incorporated in surface defects weaken the nanotubes and result in their collapse into nanoparticles during phase transformation. Next, the electrical and mechanical properties of TiO2 nanotubes were studied with the in situ TEM system. The phase transformation of anatase TiO2 nanotubes into rutile nanoparticles was studied by in situ Joule heating. The results showed that single anatase TiO2 nanotubes broke into ultrafine anatase nanoparticles. On further increasing the bias, the nanoclusters of anatase particles became prone to a solid-state reaction and grew into stable, large rutile nanoparticles. The relationship between the mechanical and electrical properties of TiO2 nanotubes was also investigated. Initially, both anatase and amorphous TiO2 nanotubes were characterized using I-V tests to demonstrate their semiconducting properties. Observation of mechanical bending of TiO2 nanotubes revealed that the conductivity increases when bending deformation occurs; the defects created by deformation assist electron transport and increase the conductivity. Lastly, the electrochemical properties of amorphous TiO2 nanotubes were characterized with the in situ TEM system, providing direct chemical and imaging evidence of lithium-induced atomic ordering in amorphous TiO2 nanotubes. The results indicated that lithiation starts with the valence reduction of Ti4+ to Ti3+, leading to a LixTiO2 intercalation compound. The continued intercalation of Li ions in the TiO2 nanotubes triggers an amorphous-to-crystalline phase transformation. The crystals formed as nano-islands and were identified as Li2Ti2O4 with a cubic structure (a = 8.375 Å). This phase transformation is associated with local inhomogeneities in the Li distribution. Based on these observations, a new reaction mechanism is proposed to explain the first-cycle lithiation behavior of amorphous TiO2 nanotubes.

Relevance:

30.00%

Publisher:

Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involved selecting isolated unrelated individuals, while four involved the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; it was found that the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests reached a maximum of less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability, equal to one minus the heterozygosity of the marker locus, that both trait alleles were in disequilibrium with the same marker allele, making the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The tests based on the Transmission Disequilibrium Test (TDT) were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations), linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
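Of the family-based tests mentioned, the classical TDT statistic is simple enough to sketch directly; the transmission counts below are hypothetical.

```python
from scipy.stats import chi2

# McNemar-type TDT: among heterozygous parents of affected offspring, count
# how often a given marker allele is transmitted (b) versus not transmitted (c).

def tdt(b, c):
    """Return the TDT chi-square statistic and its p-value (1 df)."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(b=62, c=38)   # hypothetical counts from parent-child trios
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

Because the statistic conditions on transmissions within families, it is robust to the population-admixture scenarios that inflate the error rates of the unrelated-individual tests described above.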

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we describe dynamic unicast to increase communication efficiency in opportunistic information-centric networks (ICN). The approach is based on broadcast requests to quickly find content, and on dynamically creating unicast links to content sources without the need for neighbor discovery. The links are kept temporarily as long as they deliver content and are quickly removed otherwise. Evaluations in mobile networks show that this approach maintains ICN's flexibility to support seamless mobile communication and achieves up to 56.6% shorter transmission times compared to broadcast in the case of multiple concurrent requesters. Apart from that, dynamic unicast unburdens listener nodes from processing unwanted content, resulting in lower processing overhead and power consumption at these nodes. The approach can easily be included in existing ICN architectures using only available data structures.
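A rough sketch of the link bookkeeping that dynamic unicast implies is shown below; the class, timeout value and names are hypothetical illustrations, not the paper's implementation or any specific ICN stack's API.

```python
import time

LINK_TIMEOUT_S = 2.0   # placeholder: how long a silent link is kept

class DynamicUnicast:
    """Requests are broadcast until a content source answers; afterwards a
    temporary unicast link is used for as long as it keeps delivering."""

    def __init__(self):
        self.links = {}   # content prefix -> (source address, last delivery time)

    def next_hop(self, prefix):
        """Return a unicast source for the prefix, or None to broadcast."""
        entry = self.links.get(prefix)
        if entry and time.monotonic() - entry[1] < LINK_TIMEOUT_S:
            return entry[0]
        self.links.pop(prefix, None)     # stale link: fall back to broadcast
        return None

    def content_received(self, prefix, source):
        """Refresh (or create) the temporary unicast link on each delivery."""
        self.links[prefix] = (source, time.monotonic())

# usage: broadcast first, then unicast once a source has answered
du = DynamicUnicast()
assert du.next_hop("/videos/clip") is None           # no source yet -> broadcast
du.content_received("/videos/clip", "node-17")
assert du.next_hop("/videos/clip") == "node-17"      # temporary unicast link
```

Keeping this state soft, so that it expires on its own, is what lets the scheme avoid neighbor discovery and tolerate mobility, as described above.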

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces an area- and power-efficient approach for compressive recording of cortical signals in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system, in terms of area usage, can be significantly higher than that of a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme is proposed which exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array. The analysis shows that, using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been carried out, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
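The measurement-and-reconstruction principle behind such a scheme can be sketched generically; the random measurement matrix, signal sizes and the ISTA solver below are textbook placeholders and do not reflect the paper's circuit-level encoder or its reconstruction pipeline.

```python
import numpy as np

# A k-sparse signal x is observed through m < n random projections (y = Phi x)
# and recovered with ISTA, a simple iterative L1-minimization solver.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
y = Phi @ x                                      # compressed measurements

step = 1.0 / np.linalg.norm(Phi, 2) ** 2         # gradient step size
lam = 0.01                                       # sparsity weight (placeholder)
x_hat = np.zeros(n)
for _ in range(500):
    r = x_hat + step * Phi.T @ (y - Phi @ x_hat)                  # gradient step
    x_hat = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)  # soft threshold

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.3f}")
```

In the implantable setting the compression (the projection by Phi) happens in hardware before transmission, while a reconstruction of this kind runs off-line on the receiving side.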

Relevance:

30.00%

Publisher:

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power, which work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare.

Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be a pure time cost from delaying agreement or a cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time costs and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail.

In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality, and the quality of the good is known only to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, much in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions.

Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.

In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on (i) the degree to which players can renegotiate and gradually build up agreements and (ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.

Relevance:

30.00%

Publisher:

Abstract:

With electricity consumption increasing within the United States, new paradigms for delivering electricity are required in order to meet demand. One promising option is the increased use of distributed power generation. Already a growing percentage of electricity generation, distributed generation locates the power plant physically close to the consumer, avoiding transmission and distribution losses as well as providing the possibility of combined heat and power. Despite the efficiency gains possible, regulators and utilities have been reluctant to implement distributed generation, creating numerous technical, regulatory, and business barriers. Certain governments, most notably California, are making concerted efforts to overcome these barriers in order to ensure distributed generation plays a part as the country meets demand while shifting to cleaner sources of energy.