47 results for Operational constraints
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
This paper presents an approach for allocating active transmission losses among the agents of the system. The approach uses the primal and dual variable information of the Optimal Power Flow in the loss allocation strategy, with the allocation coefficients determined via Lagrange multipliers. The paper emphasizes the need to consider the operational constraints and parameters of the system in the problem solution. An example for a 3-bus system is presented in detail, as well as a comparative test with the main allocation methods. Case studies on the IEEE 14-bus system are carried out to verify the influence of the constraints and parameters of the system on the loss allocation.
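As a rough illustration of this family of methods (a generic sketch of incremental loss allocation, not necessarily the paper's exact scheme), the coefficients can be read off the OPF solution as sensitivities of the total loss $P_L$ to the bus injections $P_i$, normalized so the allocated shares sum to the total loss:

    $\beta_i = \frac{\partial P_L}{\partial P_i}, \qquad L_i = \frac{\beta_i P_i}{\sum_j \beta_j P_j}\, P_L,$

where the sensitivities follow from the Lagrange multipliers of the power-balance constraints at the optimum. The normalization is needed because losses are nonlinear (roughly quadratic) in the injections, so the raw marginal shares do not sum to $P_L$; binding operational constraints change the multipliers and hence the allocation, which is the point stressed in the abstract.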
Abstract:
A new, simple approach for modeling and assessing the operation and response of multiline voltage-source controller (VSC)-based flexible ac transmission system controllers, namely the generalized interline power-flow controller (GIPFC) and the interline power-flow controller (IPFC), is presented in this paper. The model and the analysis are based on the converters' power balance method, which uses the d-q orthogonal coordinates to arrive at a direct solution for these controllers through a quadratic equation. The main constraints and limitations that such devices present while controlling the two independent ac systems considered are also evaluated. In order to examine and validate the steady-state model initially proposed, a phase-shift VSC-based GIPFC was also built in the Alternate Transients Program (ATP), whose results are also included in this paper. Where applicable, a comparative evaluation between the GIPFC and the IPFC is also presented.
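A minimal sketch of the principle behind such d-q power-balance models (an illustrative reconstruction, not the paper's exact derivation): in the synchronous d-q frame, the active power exchanged by converter $k$ is

    $P_k = \tfrac{3}{2}\,(v_{dk}\, i_{dk} + v_{qk}\, i_{qk}),$

and, neglecting converter losses, all VSCs sharing the common dc link must satisfy $\sum_k P_k = 0$. Substituting the line-current expressions into this balance, with the independently controlled quantities held fixed, leaves one unknown appearing to the second degree, which is why the operating point can be obtained directly from a quadratic equation.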
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. This approach belongs to the set of incremental methods and treats all the constraints of the network, i.e. control, state and functional constraints. The approach is based on the perturbation of optimum theorem. From a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on the well-known IEEE 14-bus network. A further test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and compared with the technique currently used by the Brazilian Control Center.
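A compact way to state the perturbation-of-optimum idea (a generic sketch using standard NLP sensitivity notation, not the paper's): if $x^*(d)$ solves the OPF for a load vector $d$, the KKT conditions define $x^*$ implicitly, and differentiating them yields the first-order response

    $x^*(d + \Delta d) \approx x^*(d) + \frac{\partial x^*}{\partial d}\, \Delta d,$

with $\partial x^*/\partial d$ obtained from the factorized KKT system at the optimum. Evaluating the losses along this perturbed optimum, rather than along an unconstrained power-flow update, keeps the resulting allocation coefficients consistent with the binding control, state and functional constraints.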
Abstract:
The representation of sustainability concerns in industrial forest management plans, with respect to environmental, social and economic aspects, involves a great amount of detail when analyzing and understanding the interaction among these aspects to reduce possible future impacts. At the tactical and operational planning levels, methods based on generic assumptions usually provide unrealistic solutions, impairing the decision-making process. This study aims at improving current operational harvest planning techniques through the development of a mixed integer goal programming model. The model allows the evaluation of different scenarios, subject to environmental and supply constraints, increases of operational capacity, and the spatial consequences of dispatching harvest crews over certain distances during the evaluation period. A set of performance indicators was selected to evaluate the optimal solutions for the different possible scenarios and their combinations, and to compare these outcomes with the real results observed by the mill in the case study area. Results showed that it is possible to build a linear programming model that adequately represents harvesting limitations, production aspects, and environmental and supply constraints. The comparison between the evaluated scenarios and the observed results showed the advantage of more holistic approaches and that the quality of the planning recommendations can be improved using linear programming techniques.
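In this spirit, a toy goal-programming model might look as follows (a minimal sketch in Python with the PuLP library; the blocks, volumes, and targets are invented for illustration, and the paper's actual model is far richer, including the spatial crew-dispatch terms):

    # Toy harvest-scheduling goal program (illustrative only; all data invented).
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    blocks = ["B1", "B2", "B3"]                # candidate harvest blocks
    periods = [1, 2]                           # planning periods
    volume = {"B1": 120, "B2": 80, "B3": 100}  # harvestable volume per block (m3)
    target = {1: 150, 2: 130}                  # mill supply target per period (m3)
    capacity = 2                               # max blocks harvested per period

    prob = LpProblem("harvest_goal_program", LpMinimize)

    # x[b, t] = 1 if block b is harvested in period t
    x = {(b, t): LpVariable(f"x_{b}_{t}", cat=LpBinary) for b in blocks for t in periods}
    # goal deviations: supply under/over the mill target
    under = {t: LpVariable(f"under_{t}", lowBound=0) for t in periods}
    over = {t: LpVariable(f"over_{t}", lowBound=0) for t in periods}

    # objective: minimize total deviation from the supply goals
    prob += lpSum(under[t] + over[t] for t in periods)

    for t in periods:
        # supply goal, made elastic by the deviation variables
        prob += lpSum(volume[b] * x[b, t] for b in blocks) + under[t] - over[t] == target[t]
        # operational capacity constraint
        prob += lpSum(x[b, t] for b in blocks) <= capacity
    for b in blocks:
        # each block is harvested at most once over the horizon
        prob += lpSum(x[b, t] for t in periods) <= 1

    prob.solve()
    print({v: x[v].value() for v in x})

The deviation variables are what make this a goal program rather than a hard-constrained model: unattainable supply targets degrade gracefully into measured shortfalls instead of rendering the whole model infeasible.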
Abstract:
Pipeline systems play a key role in the petroleum business. These operational systems provide connections between ports and/or oil fields and refineries (upstream), as well as between these and consumer markets (downstream). The purpose of this work is to propose a novel MINLP formulation, based on a continuous time representation, for the scheduling of multiproduct pipeline systems that must supply multiple consumer markets. It also considers that the pipeline operates intermittently and that the pumping costs depend on the booster stations' yield rates, which in turn may generate different flow rates. The proposed continuous time representation is compared with a previously developed discrete time representation [Rejowski, R., Jr., & Pinto, J. M. (2004). Efficient MILP formulations and valid cuts for multiproduct pipeline scheduling. Computers and Chemical Engineering, 28, 1511] in terms of solution quality and computational performance. The influence of the number of time intervals that represent the transfer operation is studied, and several configurations for the booster stations are tested. Finally, the proposed formulation is applied to a larger case, in which several booster configurations with different numbers of stages are tested.
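The gist of a continuous-time representation, as opposed to the discrete grid of the earlier MILP, is that event start times become continuous decision variables tied together by sequencing constraints. A deliberately simplified linear sketch follows (Python with PuLP; data invented, and the actual formulation is an MINLP with product batches, booster yield rates, and cost terms that this toy omits):

    # Continuous-time sequencing toy: pumping runs receive continuous start
    # times instead of fixed grid slots (far simpler than the paper's MINLP).
    from pulp import LpProblem, LpMinimize, LpVariable

    runs = ["batch1", "batch2", "batch3"]                      # pumping runs, fixed order
    duration = {"batch1": 4.0, "batch2": 6.0, "batch3": 3.0}   # pumping hours per run

    prob = LpProblem("pipeline_continuous_time", LpMinimize)
    start = {r: LpVariable(f"start_{r}", lowBound=0) for r in runs}
    makespan = LpVariable("makespan", lowBound=0)

    # sequencing: each run starts only after the previous one has finished;
    # the inequality (rather than equality) leaves room for idle intervals,
    # i.e., intermittent operation
    for prev, nxt in zip(runs, runs[1:]):
        prob += start[nxt] >= start[prev] + duration[prev]
    for r in runs:
        prob += makespan >= start[r] + duration[r]

    prob += 1.0 * makespan    # objective: finish pumping as early as possible
    prob.solve()
    print({r: start[r].value() for r in runs})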
Abstract:
This article describes and compares three heuristics for a variant of the Steiner tree problem with revenues, which includes budget and hop constraints. First, a greedy method that obtains good approximations in short computational times is proposed. This initial solution is then improved by means of a destroy-and-repair method or a tabu search algorithm. Computational results compare the three methods in terms of accuracy and speed.
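A minimal greedy sketch in this spirit (Python with networkx; the function and its interface are hypothetical, only the budget constraint is modeled, and the hop limit and the destroy-and-repair/tabu improvement phases are omitted):

    # Greedy heuristic sketch: grow a tree from a root, repeatedly attaching the
    # revenue vertex with the best revenue-to-connection-cost ratio within budget.
    import networkx as nx

    def greedy_steiner_revenue(G, root, revenue, budget):
        tree_nodes = {root}
        tree_edges, cost = [], 0.0
        candidates = set(revenue) - tree_nodes
        while candidates:
            best = None
            for v in candidates:
                # cheapest attachment of v to any current tree node
                attach = []
                for u in tree_nodes:
                    try:
                        attach.append((nx.shortest_path_length(G, u, v, weight="weight"), u))
                    except nx.NetworkXNoPath:
                        pass
                if not attach:
                    continue
                c, u = min(attach)
                if cost + c <= budget:
                    ratio = revenue[v] / max(c, 1e-9)   # guard zero-cost attachments
                    if best is None or ratio > best[0]:
                        best = (ratio, v, u, c)
            if best is None:
                break                                   # budget exhausted or unreachable
            _, v, u, c = best
            path = nx.shortest_path(G, u, v, weight="weight")
            tree_edges += list(zip(path, path[1:]))
            tree_nodes.update(path)
            cost += c
            candidates -= tree_nodes
        return tree_edges, cost

The improvement phases would then delete and rebuild subpaths (destroy-and-repair) or explore swap moves under a tabu list, as the abstract describes.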
Abstract:
Several numerical methods for boundary value problems use integral and differential operational matrices, expressed in polynomial bases in a Hilbert space of functions. This work presents a sequence of matrix operations allowing direct computation of operational matrices for polynomial bases, orthogonal or not, starting from any previously known reference matrix. Furthermore, it shows how to obtain the reference matrix for a chosen polynomial basis. The results presented here can be applied not only to integration and differentiation, but also to any linear operation.
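As a concrete example of what an operational matrix is (the standard construction for the monomial basis, not this paper's algorithm): differentiation maps $x^n$ to $n x^{n-1}$, so on coefficient vectors in the basis $\{1, x, \ldots, x^N\}$ it acts as a constant matrix, as in this Python sketch:

    # Differentiation operational matrix in the monomial basis {1, x, ..., x^N}:
    # d/dx (x^n) = n x^(n-1), so the entry n sits at row n-1, column n.
    import numpy as np

    def diff_operational_matrix(N):
        D = np.zeros((N + 1, N + 1))
        for n in range(1, N + 1):
            D[n - 1, n] = n
        return D

    # p(x) = 1 + 2x + 3x^2 has coefficient vector [1, 2, 3];
    # applying D gives [2, 6, 0], i.e. p'(x) = 2 + 6x.
    c = np.array([1.0, 2.0, 3.0])
    print(diff_operational_matrix(2) @ c)

Changing to another polynomial basis conjugates $D$ by the change-of-basis matrix (up to conventions), which is the kind of reference-matrix manipulation the abstract systematizes for arbitrary, not necessarily orthogonal, bases.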
Abstract:
We present a re-analysis of the Geneva-Copenhagen survey, which benefits from the infrared flux method to improve the accuracy of the derived stellar effective temperatures and uses the latter to build a consistent and improved metallicity scale. Metallicities are calibrated on high-resolution spectroscopy and checked against four open clusters and a moving group, showing excellent consistency. The new temperature and metallicity scales provide a better match to theoretical isochrones, which are used for a Bayesian analysis of stellar ages. With respect to previous analyses, our stars are on average 100 K hotter and 0.1 dex more metal-rich, which shifts the peak of the metallicity distribution function to around the solar value. From Strömgren photometry we are able to derive, for the first time, a proxy for [alpha/Fe] abundances, which enables us to perform a tentative dissection of the chemical thin and thick disc. We find evidence for the latter being composed of an old, mildly but systematically alpha-enhanced population that extends to super-solar metallicities, in agreement with spectroscopic studies. Our revision offers the largest existing kinematically unbiased sample of the solar neighbourhood that contains full information on kinematics, metallicities, and ages, and thus provides better constraints on the physical processes relevant to the build-up of the Milky Way disc, enabling a better understanding of the Sun in a Galactic context.
Abstract:
We discuss the dynamics of the Universe within the framework of the massive graviton cold dark matter scenario (MGCDM), in which gravitons are geometrically treated as massive particles. In this modified gravity theory, the main effect of the gravitons is to alter the density evolution of the cold dark matter component in such a way that the Universe evolves to an accelerating expanding regime, as presently observed. Tight constraints on the main cosmological parameters of the MGCDM model are derived by performing a joint likelihood analysis involving the recent supernovae type Ia data, the cosmic microwave background shift parameter, and the baryonic acoustic oscillations as traced by the Sloan Digital Sky Survey red luminous galaxies. The linear evolution of small density fluctuations is also analyzed in detail. It is found that the growth factor of the MGCDM model differs only slightly ($\sim 1$-4%) from the one provided by the conventional flat $\Lambda$CDM cosmology. The growth rates of clustering predicted by the MGCDM and $\Lambda$CDM models are confronted with observations, and the corresponding best-fit values of the growth index ($\gamma$) are determined. By using the expectations of realistic future X-ray and Sunyaev-Zeldovich cluster surveys, we derive the dark matter halo mass function and the corresponding redshift distribution of cluster-size halos for the MGCDM model. Finally, we also show that the Hubble flow differences between the MGCDM and $\Lambda$CDM models provide a halo redshift distribution departing significantly from those predicted by other dark energy models. These results suggest that the MGCDM model can observationally be distinguished from $\Lambda$CDM and also from a large number of dark energy models recently proposed in the literature.
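For reference, the growth index $\gamma$ quoted here is defined through the standard parametrization (a textbook relation, not specific to this paper):

    $f(z) \equiv \frac{d\ln\delta}{d\ln a} \simeq \Omega_m(z)^{\gamma},$

where $\delta$ is the linear matter density contrast and $a$ the scale factor; flat $\Lambda$CDM corresponds to $\gamma \approx 0.55$, so percent-level differences in the growth factor translate into measurable shifts in the best-fit $\gamma$.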
Abstract:
We discuss the properties of homogeneous and isotropic flat cosmologies in which the present accelerating stage is powered only by the gravitationally induced creation of cold dark matter (CCDM) particles ($\Omega_m = 1$). For some matter creation rates proposed in the literature, we show that the main cosmological functions, such as the scale factor of the universe, the Hubble expansion rate, the growth factor, and the cluster formation rate, are analytically defined. The best CCDM scenario has only one free parameter, and our joint analysis involving baryonic acoustic oscillations + cosmic microwave background (CMB) + SNe Ia data yields $\tilde{\Omega}_m = 0.28 \pm 0.01$ ($1\sigma$), where $\tilde{\Omega}_m$ is the observed matter density parameter. In particular, this implies that the model has no dark energy, but the part of the matter that is effectively clustering is in good agreement with the latest determinations from large-scale structure. The growth of perturbations and the formation of galaxy clusters in such scenarios are also investigated. Despite the fact that both scenarios may share the same Hubble expansion, we find that matter creation cosmologies predict stronger small-scale dynamics, which implies a faster growth rate of perturbations with respect to the usual $\Lambda$CDM cosmology. Such results point to the possibility of a crucial observational test confronting CCDM with $\Lambda$CDM scenarios through a more detailed analysis involving CMB, weak lensing, as well as the large-scale structure.
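The defining feature of CCDM models mentioned here is a source term in the dark matter continuity equation (the standard form in this literature; the specific creation rate $\Gamma$ varies between models):

    $\dot{\rho}_m + 3H\rho_m = \Gamma \rho_m, \qquad p_c = -\frac{\Gamma \rho_m}{3H},$

where the negative "creation pressure" $p_c$ is what drives the accelerated expansion without any dark energy component.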
Abstract:
Aims. We calculate the theoretical event rate of gamma-ray bursts (GRBs) from the collapse of massive first-generation (Population III; Pop III) stars. The Pop III GRBs could be super-energetic, with isotropic energies up to $E_{\rm iso} \gtrsim 10^{55-57}$ erg, providing a unique probe of the high-redshift Universe.
Methods. We consider both the so-called Pop III.1 stars (primordial) and Pop III.2 stars (primordial but affected by radiation from other stars). We employ a semi-analytical approach that considers inhomogeneous hydrogen reionization and the chemical evolution of the intergalactic medium.
Results. We show that Pop III.2 GRBs occur more than 100 times more frequently than Pop III.1 GRBs, and thus should be suitable targets for future GRB missions. Interestingly, our optimistic model predicts an event rate that is already constrained by current radio transient searches. We expect $\sim 10$-$10^4$ radio afterglows above $\sim 0.3$ mJy on the sky, with $\sim 1$ year variability and mostly without GRBs (orphans), which are detectable by ALMA, EVLA, LOFAR, and SKA. We expect to observe $N < 20$ GRBs per year integrated over $z > 6$ for Pop III.2 and $N < 0.08$ per year integrated over $z > 10$ for Pop III.1 with EXIST, and $N < 0.2$ Pop III.2 GRBs per year integrated over $z > 6$ with Swift.
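Schematically, such event-rate predictions integrate a comoving GRB rate density over the observable volume (the generic form of this calculation; the paper's semi-analytic ingredients enter through $\Psi_{\rm GRB}$):

    $\frac{dN}{dt} \propto \int_{z_{\min}}^{z_{\max}} \frac{\Psi_{\rm GRB}(z)}{1+z}\, \frac{dV}{dz}\, dz,$

where $\Psi_{\rm GRB}(z)$ is the comoving GRB formation rate, $dV/dz$ the comoving volume element, and the $(1+z)^{-1}$ factor accounts for cosmological time dilation; beaming and detector sensitivity multiply this by an efficiency factor.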
Abstract:
The kinematic approach to cosmological tests provides direct evidence for the present accelerating stage of the Universe that depends neither on the validity of general relativity nor on the matter-energy content of the Universe. In this context, we consider a linear two-parameter expansion for the deceleration parameter, $q(z) = q_0 + q_1 z$, where $q_0$ and $q_1$ are arbitrary constants to be constrained by the Union supernovae data. By assuming a flat Universe, we find that the best fit to the pair of free parameters is $(q_0, q_1) = (-0.73, 1.5)$, whereas the transition redshift is $z_t = 0.49^{+0.14}_{-0.07}$ ($1\sigma$) $^{+0.54}_{-0.12}$ ($2\sigma$). This kinematic result is in agreement with some independent analyses and more easily accommodates many dynamical flat models (like $\Lambda$CDM).
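The quoted transition redshift follows directly from the parametrization: the Universe switches from deceleration to acceleration where $q$ changes sign, so

    $q(z_t) = q_0 + q_1 z_t = 0 \;\Rightarrow\; z_t = -\frac{q_0}{q_1} = \frac{0.73}{1.5} \approx 0.49,$

which reproduces the best-fit central value quoted above.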
Abstract:
This paper reports results from a search for $\nu_\mu \to \nu_e$ transitions by the MINOS experiment based on a $7 \times 10^{20}$ protons-on-target exposure. Our observation of 54 candidate $\nu_e$ events in the far detector, with a background of $49.1 \pm 7.0\,({\rm stat}) \pm 2.7\,({\rm syst})$ events predicted by the measurements in the near detector, requires $2\sin^2(2\theta_{13})\sin^2\theta_{23} < 0.12\,(0.20)$ at the 90% C.L. for the normal (inverted) mass hierarchy at $\delta_{CP} = 0$. The experiment sets the tightest limits to date on the value of $\theta_{13}$ for nearly all values of $\delta_{CP}$, for the normal neutrino mass hierarchy and maximal $\sin^2(2\theta_{23})$.
Abstract:
For Au + Au collisions at 200 GeV, we measure neutral pion production with good statistics for transverse momentum, $p_T$, up to 20 GeV/c. A fivefold suppression is found, which is essentially constant for $5 < p_T < 20$ GeV/c. Experimental uncertainties are small enough to constrain any model-dependent parametrization for the transport coefficient of the medium, e.g., $\langle \hat{q} \rangle$ in the parton quenching model. The spectral shape is similar for all collision classes, and the suppression does not saturate in Au + Au collisions.
Abstract:
The PHENIX experiment has measured the suppression of semi-inclusive single high-transverse-momentum $\pi^0$'s in Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV. The present understanding of this suppression is in terms of energy loss of the parent (fragmenting) parton in a dense color-charge medium. We have performed a quantitative comparison between various parton energy-loss models and our experimental data. The statistical, point-to-point uncorrelated, as well as correlated systematic uncertainties are taken into account in the comparison. We detail this methodology and the resulting constraints on the model parameters, such as the initial color-charge density $dN_g/dy$, the medium transport coefficient $\langle \hat{q} \rangle$, or the initial energy-loss parameter $\epsilon_0$. We find that high-transverse-momentum $\pi^0$ suppression in Au+Au collisions has sufficient precision to constrain these model-dependent parameters at the $\pm 20$-25% (one standard deviation) level. These constraints include only the experimental uncertainties, and further studies are needed to compute the corresponding theoretical uncertainties.
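A common way to fold a fully correlated systematic into such a model comparison (a generic construction, offered only as an illustration of the kind of methodology described, not necessarily the collaboration's exact prescription) is to let the correlated uncertainty shift all points coherently through a penalized nuisance parameter:

    $\chi^2(\epsilon_b) = \sum_i \frac{\left[y_i + \epsilon_b\, \sigma_{b,i} - \mu_i(\theta)\right]^2}{\tilde{\sigma}_i^2} + \epsilon_b^2,$

where $y_i$ are the data, $\mu_i(\theta)$ the model prediction, $\sigma_{b,i}$ the correlated systematic on point $i$, $\tilde{\sigma}_i$ the uncorrelated errors, and $\epsilon_b$ is minimized over; the added $\epsilon_b^2$ term imposes a unit Gaussian prior on the coherent shift.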