870 results for power of sale


Relevance: 100.00%

Publisher:

Abstract:

Problems such as voltage rise at the end of a feeder, demand-supply imbalance under fault conditions, power quality decline, increased power losses, and reduced reliability levels may occur if Distributed Generators (DGs) are not properly allocated. For this reason, researchers have employed several solution techniques for the problem of optimal allocation of DGs. This work focuses on the ancillary service of reactive power support provided by DGs. The main objective is to price this service by determining the costs a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). The LOC are determined for different DG allocation alternatives resulting from a multi-objective optimization process that aims to minimize the losses in the lines of the system and the costs of active power generation from DGs, and to maximize the static voltage stability margin of the system. The effectiveness of the proposed methodology in improving these goals was demonstrated using the IEEE 34-bus distribution test feeder with two DGs considered for allocation. © 2011 IEEE.
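The capability-curve reasoning behind the LOC can be sketched in a few lines. The function below is an illustrative simplification, not the paper's multi-objective formulation: it assumes a single DG with apparent power rating S, so that supplying reactive power Q caps active output at sqrt(S^2 - Q^2), and prices any forced curtailment at a flat energy price. All names and parameter values are hypothetical.

```python
import math

def loss_of_opportunity_cost(s_rating_mva, p_scheduled_mw, q_required_mvar,
                             energy_price_per_mwh, hours=1.0):
    """Sketch of LOC: revenue lost when a DG must curtail active power to
    free up reactive capability (capability-curve limit S^2 >= P^2 + Q^2)."""
    # Maximum active power compatible with the required reactive support
    p_max_with_q = math.sqrt(max(s_rating_mva ** 2 - q_required_mvar ** 2, 0.0))
    # Active power the DG can no longer sell
    curtailment = max(p_scheduled_mw - p_max_with_q, 0.0)
    return curtailment * energy_price_per_mwh * hours
```

For example, a 1 MVA unit scheduled at 1 MW that is asked for 0.6 Mvar can only deliver 0.8 MW, so 0.2 MWh of sales is lost per hour of support.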

Relevance: 100.00%

Publisher:

Abstract:

In this paper, a heuristic technique for solving the simultaneous short-term transmission network expansion and reactive power planning problem (TEPRPP) via an AC model is presented. A constructive heuristic algorithm (CHA) aimed at obtaining a high-quality solution for this problem is employed. An interior point method (IPM) is applied to solve the TEPRPP as a nonlinear programming (NLP) problem during the solution steps of the algorithm. For each proposed network topology, an indicator is deployed to identify the weak buses for reactive power source placement. The objective function of the NLP includes the costs of new transmission lines, real power losses, and reactive power sources. By allocating reactive power sources at load buses, the circuit capacity may increase while the cost of new lines decreases. The proposed methodology is tested on Garver's system, and the results obtained show its capability and the viability of using an AC model for solving this non-convex optimization problem. © 2011 IEEE.
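The overall shape of a constructive heuristic of this kind can be sketched generically. The skeleton below is an assumption-laden simplification, not the paper's algorithm: `evaluate` stands in for the IPM solution of the NLP (returning an objective value for a candidate plan), and `is_feasible` for the operational checks; at each step the candidate addition (a line or a reactive source) that most improves the objective is committed, until the plan is feasible.

```python
def constructive_heuristic(candidates, evaluate, is_feasible, base_plan=()):
    """Generic constructive-heuristic skeleton (illustrative only):
    greedily add the candidate that yields the best objective value
    until the expansion plan satisfies the feasibility check."""
    plan = list(base_plan)
    remaining = list(candidates)
    while not is_feasible(plan) and remaining:
        # Evaluate each remaining candidate added on top of the current plan
        best = min(remaining, key=lambda c: evaluate(plan + [c]))
        plan.append(best)
        remaining.remove(best)
    return plan
```

In the real method the evaluation step is expensive (a full NLP solve per candidate), which is why CHA-style searches trade optimality guarantees for a tractable number of solves.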

Relevance: 100.00%

Publisher:

Abstract:

Distributed generation, microgrid technologies, two-way communication systems, and demand response programs are issues that have been studied in recent years within the concept of smart grids. At a sufficient level of penetration, Distributed Generators (DGs) can provide benefits for sub-transmission and transmission systems through the so-called ancillary services. This work focuses on the ancillary service of reactive power support provided by DGs, specifically Wind Turbine Generators (WTGs), with a high level of impact on transmission systems. The main objective of this work is to propose an optimization methodology to price this service by determining the costs a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). LOC occur when more reactive power is required than is available, and active power generation has to be reduced in order to increase the reactive power capacity. In the optimization process, three objectives are considered: the active power generation costs of the DGs, the voltage stability margin of the system, and the losses in the lines of the network. Uncertainties of WTGs are reduced by solving multi-objective optimal power flows in multiple probabilistic scenarios constructed by Monte Carlo simulations, and by modeling the time series associated with the active power generation of each WTG via Fuzzy Logic and Markov Chains. The proposed methodology was tested using the IEEE 14-bus test system with two WTGs installed. © 2011 IEEE.
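The Markov-chain part of the scenario construction can be illustrated with a toy first-order chain over discretized wind-power output levels. This is a hedged sketch, not the paper's Fuzzy Logic/Markov model: the transition matrix, output levels, and seeding below are placeholder assumptions.

```python
import random

def simulate_wind_states(transition, levels, start, steps, rng=None):
    """Generate a wind-power time series from a first-order Markov chain.
    `transition[i][j]` is the probability of moving from state i to j;
    `levels[i]` is the power output (e.g. in MW) of state i."""
    rng = rng or random.Random(0)
    state = start
    series = []
    for _ in range(steps):
        series.append(levels[state])
        # Sample the next state from the current row of the matrix
        r = rng.random()
        cum = 0.0
        for j, p in enumerate(transition[state]):
            cum += p
            if r <= cum:
                state = j
                break
    return series
```

Many such trajectories, one per Monte Carlo scenario, would then feed the probabilistic optimal power flows.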

Relevance: 100.00%

Publisher:

Abstract:

Rural electrification is characterized by the geographical dispersion of the population, low consumption, high investment per consumer, and high cost. Solar radiation constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the I-100 rural photovoltaic system from ISOFOTON, with a power of 300 Wp, located at the Lageado Experimental Farm of FCA/UNESP. To develop these equations, the circuitry of the photovoltaic cells was studied, applying iterative numerical methods to determine the electrical parameters and the errors incurred when fitting the equations from the literature to real conditions. A simulation of a photovoltaic panel was therefore proposed through mathematical equations adjusted to the local radiation data. The resulting equations provide realistic answers to the user and may assist in the design of these systems, since calculating the maximum power limit ensures the supply of the generated energy. This realistic sizing helps establish the possible applications of solar energy for rural producers and informs them of the real possibilities of generating electricity from the sun.
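The iterative determination of panel current is the kind of numerical step the study describes. The sketch below assumes the standard implicit single-diode model, I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh, and solves it by Newton's method; the parameter values used in the test are illustrative, not the I-100 panel's.

```python
import math

def pv_current(v, i_ph, i_0, r_s, r_sh, n_vt, tol=1e-9, max_iter=100):
    """Newton iteration for the implicit single-diode equation
    f(I) = I_ph - I_0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh - I = 0,
    where n_vt is the modified ideality factor n*Ns*kT/q for the panel."""
    i = i_ph  # the photocurrent is a good starting guess
    for _ in range(max_iter):
        e = math.exp((v + i * r_s) / n_vt)
        f = i_ph - i_0 * (e - 1.0) - (v + i * r_s) / r_sh - i
        df = -i_0 * e * r_s / n_vt - r_s / r_sh - 1.0  # df/dI
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i
```

Sweeping `v` from 0 to the open-circuit voltage with this routine traces the I-V curve, from which the maximum power point can be located.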

Relevance: 100.00%

Publisher:

Abstract:

We derive asymptotic expansions for the nonnull distribution functions of the likelihood ratio, Wald, score, and gradient test statistics in the class of dispersion models, under a sequence of Pitman alternatives. The asymptotic distributions of these statistics are obtained for testing a subset of regression parameters and for testing the precision parameter. Based on these nonnull asymptotic expansions, the powers of all four tests, which are equivalent to first order, are compared. Furthermore, in order to compare the finite-sample performance of these tests in this class of models, Monte Carlo simulations are presented. An empirical application to a real data set is considered for illustrative purposes. (C) 2012 Elsevier B.V. All rights reserved.
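The Monte Carlo comparison of size (and, analogously, power) follows a simulate-and-count-rejections pattern that can be shown in miniature. The sketch below is not the paper's dispersion-model setting: it estimates the empirical size of a simple two-sided z-test for a normal mean under the null, which illustrates the same mechanism.

```python
import math
import random

def empirical_size(n_sim, n_obs, seed=1):
    """Monte Carlo estimate of a test's empirical size: simulate data
    under H0 (standard normal, mean 0) and count how often the
    two-sided z-test rejects at the 5% level."""
    rng = random.Random(seed)
    z_crit = 1.959963984540054  # Phi^{-1}(0.975)
    rejections = 0
    for _ in range(n_sim):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n_obs)]
        z = (sum(xs) / n_obs) * math.sqrt(n_obs)  # known unit variance
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sim
```

Replacing the H0 generator with data from a Pitman-style local alternative and repeating the count gives an empirical power curve for each competing statistic.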

Relevance: 100.00%

Publisher:

Abstract:

Hadron therapy is a promising technique to treat deep-seated tumors. For accurate treatment planning, the energy deposition in soft and hard human tissue must be well known. Water has usually been employed as a phantom for soft tissues, but other biomaterials, such as hydroxyapatite (HAp), used as a bone substitute, are also relevant as phantoms for hard tissues. The stopping power of HAp for H+ and He+ beams has been studied experimentally and theoretically. The measurements were done using the Rutherford backscattering technique in an energy range of 450-2000 keV for H+ and of 400-5000 keV for He+ projectiles. The theoretical calculations are based on the dielectric formulation together with the MELF-GOS (Mermin Energy-Loss Function - Generalized Oscillator Strengths) method [1] to describe the target excitation spectrum. Quite good agreement between the experimental data and the theoretical results has been found. The depth-dose profiles of H+ and He+ ion beams in HAp have been simulated with the SEICS (Simulation of Energetic Ions and Clusters through Solids) code [2], which incorporates the electronic stopping force due to the energy loss by collisions with the target electrons, including fluctuations due to energy-loss straggling, the multiple elastic scattering with the target nuclei, with the corresponding nuclear energy loss, and the dynamical charge-exchange processes in the projectile charge state. The energy deposition by H+ and He+ as a function of depth is compared, at several projectile energies, for HAp and liquid water, showing important differences.
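The depth-dose idea can be caricatured by a continuous-slowing-down walk: step the projectile through the target, deposit S(E)*dx per step, and the dose rises toward the end of range, producing a Bragg-like peak. This is a toy sketch, not the SEICS code (no straggling, scattering, or charge exchange); the stopping-power function is supplied by the caller, and the 1/E shape used in the test is an assumed Bethe-like placeholder.

```python
def csda_range_and_profile(e0_mev, stopping_power, dx_um=1.0):
    """Toy continuous-slowing-down depth-dose profile: march the
    projectile in fixed depth steps, depositing dE = S(E)*dx per step
    until the remaining energy is exhausted."""
    depths, doses = [], []
    e = e0_mev
    x = 0.0
    while e > 0.01:  # stop tracking below an arbitrary cutoff (MeV)
        s = stopping_power(e)            # assumed units: MeV per micrometre
        de = min(s * dx_um, e)           # never deposit more than remains
        depths.append(x)
        doses.append(de)
        e -= de
        x += dx_um
    return depths, doses
```

Because a Bethe-like S(E) grows as the ion slows, the deposited energy per step is largest near the end of the track, which is the qualitative feature the HAp-versus-water comparison probes.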

Relevance: 100.00%

Publisher:

Abstract:

The objective of this thesis is the analysis of power transients in experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). Since the JHR materials testing facility is designed to achieve 100 MW of core thermal power, its large reflector hosts fissile material samples that are irradiated up to a total power of 3 MW. The MADISON devices are expected to attain 130 kW, whereas the nominal power of ADELINE is about 60 kW. In addition, the MOLFI test samples are envisaged to reach 360 kW in the LEU configuration and up to 650 kW in the HEU frame. Safety issues concern shutdown transients and require particular verification of the thermal power decrease of these fissile samples with respect to the core kinetics, as far as the determination of single-device reactivity is concerned. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the time-dependent features of the device transients. An innovative methodology is developed in which the flux shape modification during control rod insertions is investigated with regard to its impact on device power through core-reflector coupling coefficients; previous methods, which considered only nominal core-reflector parameters, are thereby improved. Moreover, the effect of delayed emissions is evaluated with respect to the spatial impact on the devices of a diffuse in-core delayed neutron source. Delayed gamma transport related to the fission product concentration is taken into account through evolution calculations of different fuel compositions in an equilibrium cycle. With accurate device reactivity control provided, power transients are then computed for every sample according to the envisaged shutdown procedures. The results obtained in this study are aimed at design feedback and reactor management optimization by the JHR project team. Moreover, the Safety Report is intended to use the present analysis for improved device characterization.
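The qualitative shape of such a shutdown transient can be caricatured as a sum of decaying contributions. The function below is a toy illustration only, not the thesis's calculation model: it assumes device power splits into a fast prompt-coupled term and a slow delayed-emission (delayed neutron/gamma) term, each decaying exponentially, with entirely placeholder weights and decay constants.

```python
import math

def device_power_after_scram(t_s, p0_kw,
                             components=((0.9, 1.0), (0.1, 0.05))):
    """Toy shutdown transient: device power as a weighted sum of
    exponential decays. `components` is a sequence of (weight,
    decay constant in 1/s) pairs with weights summing to 1."""
    return p0_kw * sum(w * math.exp(-lam * t_s) for w, lam in components)
```

The point of the more refined methodology is precisely that these coefficients are not constants: flux shape changes during rod insertion alter the core-reflector coupling, so the effective weights themselves become time-dependent.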

Relevance: 100.00%

Publisher:

Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency were found to increase the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests reached a maximum of less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability, equal to one minus the heterozygosity of the marker locus, that both trait alleles were in disequilibrium with the same marker allele, making the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The tests based on the TDT (Transmission Disequilibrium Test) were not liable to any increase in error rates.

For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
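The TDT mentioned above reduces, in its basic allele-counting form, to a McNemar-type chi-square on transmissions from heterozygous parents, which is easy to sketch (the counts and the chi-square form are standard; the function itself is illustrative).

```python
def tdt_statistic(transmitted, untransmitted):
    """Transmission Disequilibrium Test statistic: for heterozygous
    parents, `transmitted` (b) counts transmissions of the candidate
    allele to the affected child and `untransmitted` (c) counts
    non-transmissions; under no linkage/association b and c are
    expected to be equal, and (b - c)^2 / (b + c) is asymptotically
    chi-square with 1 degree of freedom."""
    b, c = transmitted, untransmitted
    if b + c == 0:
        return 0.0  # no informative transmissions
    return (b - c) ** 2 / (b + c)
```

Because the comparison is within families, the statistic is robust to population admixture, which is why the TDT-based tests above show no inflated error rates.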

Relevance: 100.00%

Publisher:

Abstract:

The determination of the size as well as the power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study uses simulation methods to evaluate the effect of different recruitment patterns on the size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms of the study, resulting in a hazard ratio of one. The power of the log-rank test at specific values of the hazard ratio (≠ 1) is estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the hazard function of the Weibull distribution are also considered to assess the effect of the hazard structure on the size and power of the log-rank test.
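The simulation loop described can be sketched end-to-end for the simplest case. The code below is a hedged simplification of such a design, not the study's program: complete follow-up with no censoring and no accrual process (the non-homogeneous Poisson entry times are omitted), Weibull survival times with proportional hazards imposed through the scale parameter, and a standard two-sample log-rank statistic assuming one event per time point. Sample sizes, shape, and seeds are placeholder assumptions.

```python
import math
import random

def weibull_sample(shape, scale, rng):
    """Inverse-CDF draw: T = scale * (-ln U)^(1/shape)."""
    return scale * (-math.log(rng.random())) ** (1.0 / shape)

def logrank_z(times_a, times_b):
    """Two-sample log-rank statistic for fully observed (uncensored)
    event times; continuous times, so ties are ignored."""
    events = sorted((t, g) for g, ts in ((0, times_a), (1, times_b)) for t in ts)
    n_a, n_b = len(times_a), len(times_b)
    o_minus_e, var = 0.0, 0.0
    for _, g in events:
        n = n_a + n_b
        if n < 2:
            break
        e_a = n_a / n                  # expected arm-A events at this time
        o_minus_e += (1 - g) - e_a     # observed minus expected
        var += e_a * (1 - e_a)         # hypergeometric variance, one event
        if g == 0:
            n_a -= 1
        else:
            n_b -= 1
    return o_minus_e / math.sqrt(var)

def logrank_power(hazard_ratio, n_per_arm=50, n_sim=300, seed=2):
    """Estimate size (HR = 1) or power (HR != 1) by counting rejections
    of the two-sided 5% log-rank test over simulated trials."""
    rng = random.Random(seed)
    shape, scale_a = 1.5, 1.0          # increasing Weibull hazard
    # Weibull hazard ratio (scale_a/scale_b)^shape = HR
    scale_b = scale_a * hazard_ratio ** (-1.0 / shape)
    rejections = 0
    for _ in range(n_sim):
        a = [weibull_sample(shape, scale_a, rng) for _ in range(n_per_arm)]
        b = [weibull_sample(shape, scale_b, rng) for _ in range(n_per_arm)]
        if abs(logrank_z(a, b)) > 1.96:
            rejections += 1
    return rejections / n_sim
```

Varying the Weibull shape parameter (below, at, or above 1) reproduces the decreasing, constant, and increasing hazard structures the study considers, while keeping the hazard ratio between arms proportional.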

Relevance: 100.00%

Publisher:

Abstract:

The ‘Normative Power Europe’ debate has been a leitmotif in the academic discourse for over a decade. Far from being obsolete, the topic is as relevant as when the term was first coined by Ian Manners in 2002. ‘To be or not to be a normative power’ is certainly one of the existential dilemmas in the foreign policy of the European Union. This paper, however, intends to move beyond the black-and-white debate on whether the European Union is a normative power, and to make it more nuanced by examining the factors that make it such. Contrary to the conventional perception of the European Union as a necessarily ‘benign’ force in the world, it assumes that the Union has aspirations to be a viable international actor. Consequently, it pursues different types of foreign policy behaviour with varying degrees of normativity. The paper addresses the question of under what conditions the European Union is a ‘normative power’. The findings of the study demonstrate that the ‘normative power’ of the European Union is conditioned upon internal and external elements, engaged in a complex interaction in which a decisive role is played by the often-neglected external elements.

Relevance: 100.00%

Publisher:

Abstract:

The idea behind the reputational measure for assessing the power of political actors is that actors involved in a decision-making process have the best view of their fellows' power. There has been, however, no systematic examination of why actors consider other actors powerful. Consequently, it is unclear whether reputational power measures what it ought to. This paper analyzes the determinants of power attribution and distinguishes intended from unintended determinants in a dataset of power assessments covering 10 political decision-making processes in Switzerland. The results are overall reassuring, but nevertheless point toward self-promotion or misperception biases, as informants systematically attribute more power to actors with whom they collaborate.

Relevance: 100.00%

Publisher:

Abstract:

Power is one of the most fundamental concepts in political science, and it is a crucial aspect of decision-making structures. The distribution of power between political actors and coalitions of actors informs us about who is actually able to influence decision-making processes. It is thus no surprise that power is a centerpiece of our assessment of political decision-making in Switzerland. In line with the main argument of this book, Chapter 3 has uncovered important changes in decision-making structures, which resulted in a rebalancing of power between governing parties, interest groups and state executive actors. Conjecturing about the reasons that may account for these changes, we pointed to factors of an organizational and institutional nature. For example, we put forward the decline of pre-parliamentary procedures oriented towards corporatist intermediation as a possible explanation for the weakening of interest groups. More generally, in several chapters it has been suggested that there is a relationship between the institutional design of a decision-making process, the related importance of decision-making phases and an actor's participation in these phases on the one hand, and the power of actors (and coalitions of actors) on the other. In addition, the analyses carried out in Chapters 2 to 5 draw our attention to the differences in power structure across decision-making processes or types of processes.

Relevance: 100.00%

Publisher:

Abstract:

Background: The identification and characterization of genes that influence the risk of common, complex multifactorial disease primarily through interactions with other genes and environmental factors remains a statistical and computational challenge in genetic epidemiology. We have previously introduced a genetic programming optimized neural network (GPNN) as a method for optimizing the architecture of a neural network to improve the identification of gene combinations associated with disease risk. The goal of this study was to evaluate the power of GPNN for identifying high-order gene-gene interactions. We were also interested in applying GPNN to a real data analysis in Parkinson's disease.

Results: We show that GPNN has high power to detect even relatively small genetic effects (2-3% heritability) in simulated data models involving two- and three-locus interactions. The limits of detection were reached under conditions with very small heritability (