868 results for OPTIMAL EXPANSION
Abstract:
This paper presents a methodology to solve the transmission network expansion planning problem (TNEP) considering reliability and uncertainty in the demand. The proposed methodology provides an optimal expansion plan that allows the power system to operate adequately, with an acceptable level of reliability, in an environment with uncertainty. The reliability criterion limits the expected value of the reliability index (LOLE - Loss Of Load Expectation) of the expanded system. Reliability is evaluated for the transmission system using an analytical technique based on enumeration. The mathematical model is solved, in an efficient way, using a modified specialized Chu-Beasley genetic algorithm. Detailed results from an illustrative example are presented and discussed. © 2009 IEEE.
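The abstract does not detail the enumeration step; the sketch below shows, under assumed per-line outage rates and a placeholder adequacy check, how an enumeration-based LOLE evaluation of the kind described might be organized. All names and numbers are illustrative, not taken from the paper.

```python
from itertools import combinations

# Hypothetical per-line forced outage rates; a real study would use measured data.
line_for = {"L1": 0.02, "L2": 0.03, "L3": 0.01}

def curtails_load(out_lines):
    """Placeholder adequacy check: a real TNEP study would run a DC power flow
    with load shedding for the given contingency."""
    return len(out_lines) >= 2  # illustrative assumption

def lole_by_enumeration(max_order=2, hours=8760.0):
    """Enumerate outage states up to `max_order` simultaneous line outages and
    accumulate the Loss Of Load Expectation (hours/year)."""
    lole = 0.0
    lines = list(line_for)
    for k in range(max_order + 1):
        for out in combinations(lines, k):
            p = 1.0
            for name in lines:
                p *= line_for[name] if name in out else (1.0 - line_for[name])
            if curtails_load(out):
                lole += p * hours
    return lole

print(f"LOLE = {lole_by_enumeration():.2f} h/year")
```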
Abstract:
In the past few years, prompted by globalization and by the quality and ease of travel, the world has witnessed a boom in the tourism sector. The forecast is that this tendency will continue in the upcoming years, representing a set of opportunities for companies operating in this business area. Boost Tourism operates in the tourism entertainment industry. Its revenue growth has been exponential, so the founders decided that it was time to take the company to new heights. This Work Project aims to study three alternative growth strategies and, based on a comprehensive analysis of the industry and the market, provide recommendations to outline the optimal expansion path.
Abstract:
The current literature on the role of interleukin (IL)-2 in memory CD8+ T-cell differentiation indicates a significant contribution of IL-2 during primary and also secondary expansion of CD8+ T cells. IL-2 seems to be responsible for optimal expansion and generation of effector functions following primary antigenic challenge. As the magnitude of T-cell expansion determines the numbers of memory CD8+ T cells surviving after pathogen elimination, these events influence memory cell generation. Moreover, during the contraction phase of an immune response, where most antigen-specific CD8+ T cells disappear by apoptosis, IL-2 signals are able to rescue CD8+ T cells from cell death and provide a durable increase in memory CD8+ T-cell counts. At the memory stage, CD8+ T-cell frequencies can be boosted by administration of exogenous IL-2. Significantly, only CD8+ T cells that have received IL-2 signals during initial priming are able to mediate efficient secondary expansion following renewed antigenic challenge. Thus, IL-2 signals during different phases of an immune response are key in optimizing CD8+ T-cell functions, thereby affecting both primary and secondary responses of these T cells.
Abstract:
The current literature on the role of interleukin (IL)-2 in memory CD8(+) T-cell differentiation indicates a significant contribution of IL-2 during primary and also secondary expansion of CD8(+) T cells. IL-2 seems to be responsible for optimal expansion and generation of effector functions following primary antigenic challenge. As the magnitude of T-cell expansion determines the numbers of memory CD8(+) T cells surviving after pathogen elimination, these events influence memory cell generation. Moreover, during the contraction phase of an immune response where most antigen-specific CD8(+) T cells disappear by apoptosis, IL-2 signals are able to rescue CD8(+) T cells from cell death and provide a durable increase in memory CD8(+) T-cell counts. At the memory stage, CD8(+) T-cell frequencies can be boosted by administration of exogenous IL-2. Significantly, only CD8(+) T cells that have received IL-2 signals during initial priming are able to mediate efficient secondary expansion following renewed antigenic challenge. Thus, IL-2 signals during different phases of an immune response are key in optimizing CD8(+) T-cell functions, thereby affecting both primary and secondary responses of these T cells.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost-function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
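As a companion to this abstract, here is a minimal sketch (not the authors' implementation) of an OBF model built from first-order orthonormal sections parameterized by a single real pole. The analytical back-propagation-through-time gradients and the Levenberg-Marquardt routine of the paper are replaced by SciPy's generic bounded least-squares solver with numerical differentiation, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def laguerre_outputs(u, pole, n_funcs=3):
    """Filter input u through a cascade of first-order orthonormal
    (Laguerre-type) sections sharing one real pole."""
    gain = np.sqrt(1.0 - pole**2)
    phi = np.zeros((len(u), n_funcs))
    state = np.zeros(n_funcs)
    for t, ut in enumerate(u):
        new = np.empty(n_funcs)
        new[0] = pole * state[0] + gain * ut
        for j in range(1, n_funcs):
            # all-pass cascade: x_j[t] = a*x_j[t-1] + x_{j-1}[t-1] - a*x_{j-1}[t]
            new[j] = pole * state[j] + state[j - 1] - pole * new[j - 1]
        state = new
        phi[t] = state
    return phi

def model_output(params, u):
    """Linear-in-the-coefficients OBF model: pole plus basis coefficients."""
    pole, coeffs = params[0], params[1:]
    return laguerre_outputs(u, pole) @ coeffs

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = model_output(np.array([0.7, 1.0, 0.5, -0.2]), u)  # synthetic "system"

res = least_squares(lambda p: model_output(p, u) - y,
                    x0=np.array([0.3, 0.0, 0.0, 0.0]),
                    bounds=([-0.99, -np.inf, -np.inf, -np.inf],
                            [0.99, np.inf, np.inf, np.inf]))
print("estimated pole:", res.x[0])
```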
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The Distribution System Expansion Planning (PESD) problem seeks to determine guidelines for expanding the network in view of growing consumer demand. In this context, electricity distribution companies are responsible for proposing actions in the distribution system so that the energy supply meets the standards required by regulatory agencies. Traditionally, only the minimization of the overall investment cost of expansion plans is considered, neglecting the reliability and robustness of the system. As a consequence, the resulting expansion plans lead the distribution system to configurations that are vulnerable to large load curtailments when contingencies occur in the network. This work develops a methodology to incorporate reliability and risk into the traditional PESD problem, in order to select expansion plans that maximize network robustness and, consequently, mitigate the damage caused by contingencies in the system. A multi-objective model of the PESD problem was formulated in which two objectives are minimized: the overall cost (which incorporates investment, maintenance, operation and energy production costs) and the implementation risk of the expansion plans. For both objectives, mixed-integer linear models are formulated and solved with the CPLEX solver through the GAMS software. To manage the search for optimal solutions, two evolutionary algorithms were implemented in C++: the Non-dominated Sorting Genetic Algorithm-2 (NSGA2) and the Strength Pareto Evolutionary Algorithm-2 (SPEA2). These algorithms proved effective in this search, as verified through expansion planning simulations of two test systems adapted from the literature. The set of solutions found in the simulations contains expansion plans with different levels of overall cost and implementation risk, highlighting the diversity of the proposed solutions. Some of these topologies are illustrated to show their differences.
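The thesis implements NSGA2 and SPEA2 in C++; the Python sketch below only illustrates the bi-objective (overall cost, implementation risk) dominance test at the core of such searches, with invented plan data.

```python
# Each candidate expansion plan carries (overall cost, implementation risk);
# only non-dominated plans are kept by NSGA2/SPEA2-style searches.

def dominates(a, b):
    """True if plan a is no worse than b in both objectives and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Return the non-dominated subset of (cost, risk) tuples."""
    return [p for p in plans if not any(dominates(q, p) for q in plans if q != p)]

plans = [(120.0, 0.30), (100.0, 0.55), (150.0, 0.10), (130.0, 0.35)]
print(pareto_front(plans))  # (130, 0.35) is dominated by (120, 0.30)
```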
Abstract:
Stem cells, either from embryonic or adult sources, have demonstrated the potential to differentiate into a wide range of tissues depending on culture conditions. This makes them prime candidates for use in tissue engineering applications. Current technology allows us to process biocompatible and biodegradable polymers into three-dimensional (3D) configurations, either as solid porous scaffolds or hydrogels, with controlled macro and/or micro spatial geometry and surface chemistry. Such control provides us with the ability to present highly controlled microenvironments to a chosen cell type. However, the precise microenvironments required for optimal expansion and/or differentiation of stem cells are only now being elucidated, and hence the controlled use of stem cells in tissue engineering remains a very young field. We present here a brief review of the current literature detailing interactions between stem cells and 3D scaffolds of varying morphology and chemical properties, concluding with remaining challenges for those interested in tissue engineering using tailored scaffolds and stem cells.
Abstract:
The main focus of this work is to define a numerical methodology to simulate an aerospike engine and then to analyse the performance of DemoP1, a small aerospike demonstrator built by Pangea Aerospace. The aerospike is a promising solution for building more efficient engines than current ones. Its main advantage is expansion adaptation, which allows it to reach the optimal expansion over a wide range of ambient pressures, delivering more thrust than an equivalent bell-shaped nozzle. The main drawbacks are the cooling system design and the spike manufacturing, but nowadays these issues seem to be overcome with the use of additive manufacturing. The simulations are performed with dbnsTurbFoam, an OpenFOAM solver designed to simulate supersonic compressible turbulent flow. This work is divided into five chapters. The first is a short introduction. The second gives a brief summary of the theoretical performance of the aerospike. The third introduces the numerical methodology to simulate a compressible supersonic flow. In the fourth chapter, the solver is verified against an experiment found in the literature. In the fifth chapter, the simulations of the DemoP1 engine are illustrated.
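To make the expansion-adaptation argument concrete, the following sketch compares, with purely illustrative chamber and nozzle data, a fixed-area-ratio bell nozzle against an ideally adapted expansion (exit pressure equal to ambient pressure), which is what an aerospike approximates across altitudes. It is a quasi-1D isentropic estimate, not the CFD methodology of the work.

```python
import numpy as np
from scipy.optimize import brentq

g = 1.2                  # ratio of specific heats (assumed)
R = 320.0                # gas constant, J/(kg K) (assumed)
p0, T0 = 3.0e6, 3200.0   # chamber pressure [Pa] and temperature [K] (assumed)
At = 0.01                # throat area, m^2 (assumed)

def area_ratio(M):
    """Isentropic area ratio A/At as a function of exit Mach number."""
    return (1.0 / M) * ((2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M**2)) ** ((g + 1.0) / (2.0 * (g - 1.0)))

def exit_state(M):
    """Exit pressure and velocity for isentropic expansion to Mach M."""
    T = T0 / (1.0 + 0.5 * (g - 1.0) * M**2)
    p = p0 * (T / T0) ** (g / (g - 1.0))
    v = M * np.sqrt(g * R * T)
    return p, v

# Choked mass flow through the throat.
mdot = At * p0 * np.sqrt(g / (R * T0)) * (2.0 / (g + 1.0)) ** ((g + 1.0) / (2.0 * (g - 1.0)))

eps_fixed = 25.0                                   # bell nozzle area ratio (assumed)
M_fixed = brentq(lambda M: area_ratio(M) - eps_fixed, 1.001, 20.0)
p_e, v_e = exit_state(M_fixed)

for p_a in (101325.0, 30000.0, 5000.0):            # sea level to high altitude, Pa
    F_bell = mdot * v_e + (p_e - p_a) * eps_fixed * At
    # Adapted case: expand until exit pressure equals ambient, so the
    # pressure term of the thrust equation vanishes.
    M_ad = np.sqrt((2.0 / (g - 1.0)) * ((p0 / p_a) ** ((g - 1.0) / g) - 1.0))
    F_adapt = mdot * exit_state(M_ad)[1]
    print(f"p_a={p_a:8.0f} Pa  bell={F_bell:8.0f} N  adapted={F_adapt:8.0f} N")
```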
Abstract:
This paper makes two points. First, we show that the line-of-sight solution to cosmic microwave anisotropies in Fourier space, even though formally defined for arbitrarily large wavelengths, leads to position-space solutions which only depend on the sources of anisotropies inside the past light cone of the observer. This foretold manifestation of causality in position (real) space happens order by order in a series expansion in powers of the visibility γ = e^(-μ), where μ is the optical depth to Thomson scattering. We show that the contributions of order γ^N to the cosmic microwave background (CMB) anisotropies are regulated by spacetime window functions which have support only inside the past light cone of the point of observation. Second, we show that the Fourier-Bessel expansion of the physical fields (including the temperature and polarization momenta) is an alternative to the usual Fourier basis as a framework to compute the anisotropies. The viability of the Fourier-Bessel series for treating the CMB is a consequence of the fact that the visibility function becomes exponentially small at redshifts z ≫ 10^3, effectively cutting off the past light cone and introducing a finite radius inside which initial conditions can affect physical observables measured at our position x = 0 and time t_0. Hence, for each multipole l there is a discrete tower of momenta k_{il} (not a continuum) which can affect physical observables, with the smallest momenta being k_{1l} ~ l. The Fourier-Bessel modes take into account precisely the information from the sources of anisotropies that propagates from the initial value surface to the point of observation, no more and no less. We also show that the physical observables (the temperature and polarization maps), and hence the angular power spectra, are unaffected by that choice of basis. This implies that the Fourier-Bessel expansion is the optimal scheme with which one can compute CMB anisotropies.
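A small sketch of the discrete tower of momenta mentioned above: assuming the modes are fixed by requiring the spherical Bessel function j_l to vanish at a finite radius R (this boundary condition and the numbers are illustrative, not necessarily the paper's convention), the allowed k_{il} are the scaled zeros of j_l.

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def spherical_jl_zeros(l, n_zeros, step=0.1):
    """Find the first n_zeros positive roots of j_l by sign-change bracketing."""
    zeros, x = [], max(l, 1.0)  # the first zero lies beyond x ~ l
    while len(zeros) < n_zeros:
        if spherical_jn(l, x) * spherical_jn(l, x + step) < 0.0:
            zeros.append(brentq(lambda t: spherical_jn(l, t), x, x + step))
        x += step
    return np.array(zeros)

R = 14000.0  # assumed comoving radius of the effective past light cone, Mpc
for l in (2, 10, 100):
    k = spherical_jl_zeros(l, 3) / R   # k_{il} = q_{il} / R
    print(f"l={l:4d}  first k_il [1/Mpc]: {np.round(k, 5)}")
```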
Abstract:
Market-based transmission expansion planning gives investors information on where the most cost-efficient places to invest are and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are the system planners' concern. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate the system's overall expected economical losses during operation in a competitive market. It reflects both the investors' and the planner's points of view and further improves on the traditional reliability cost. By applying the EEL, system planners can obtain a clear idea of the transmission network's bottleneck and of the amount of losses arising from this weak point. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors to guide their investment decisions. This index can truly reflect the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), which shows how the EEL can predict the current system bottleneck under future operational conditions and how to use the EEL as one of the planning objectives to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
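The EEL formulation itself is not reproduced in the abstract; the sketch below only shows the general Monte Carlo pattern it relies on, with an invented one-corridor scenario model, assumed outage data and an assumed value of lost load.

```python
import random

random.seed(1)

LINE_CAPACITY = 400.0      # MW, single corridor feeding the load (assumed)
LINE_AVAILABILITY = 0.97   # probability the corridor is fully available (assumed)
VOLL = 1000.0              # value of lost load, $/MWh (assumed)

def sample_scenario():
    """Draw one market/network state: uncertain demand and line availability."""
    load = random.gauss(380.0, 40.0)   # MW, uncertain demand
    capacity = LINE_CAPACITY if random.random() < LINE_AVAILABILITY else 0.5 * LINE_CAPACITY
    return max(load, 0.0), capacity

def expected_economical_loss(n_samples=100_000, hours=1.0):
    """Average the priced unserved energy over sampled states."""
    total = 0.0
    for _ in range(n_samples):
        load, capacity = sample_scenario()
        unserved = max(load - capacity, 0.0)   # MW curtailed in this state
        total += unserved * hours * VOLL       # economic loss of the state
    return total / n_samples

print(f"EEL estimate: {expected_economical_loss():,.0f} $/h")
```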
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for being solved by means of a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to the compensation of an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
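After the linearization described above, the selection reduces to a mixed-integer linear program. The toy sketch below (using the open-source PuLP/CBC stack rather than the commercial package of the paper, with invented savings and cost coefficients) shows the structure of such a formulation.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Binary variables pick at most one capacitor rating per candidate node so as to
# maximize (energy-loss savings + deferred-investment savings - capacitor cost).
# Nodes, ratings, savings and costs below are illustrative assumptions.
nodes = ["n5", "n9", "n12"]
ratings_kvar = [300, 600, 900]
savings = {("n5", 300): 4.1, ("n5", 600): 6.8, ("n5", 900): 7.0,
           ("n9", 300): 3.2, ("n9", 600): 5.9, ("n9", 900): 6.1,
           ("n12", 300): 2.5, ("n12", 600): 4.0, ("n12", 900): 4.2}  # k$/yr
cap_cost = {300: 1.5, 600: 2.6, 900: 3.8}                            # annualized k$/yr

x = {(n, r): LpVariable(f"x_{n}_{r}", cat=LpBinary) for n in nodes for r in ratings_kvar}

prob = LpProblem("capacitor_placement", LpMaximize)
prob += lpSum((savings[n, r] - cap_cost[r]) * x[n, r] for n in nodes for r in ratings_kvar)
for n in nodes:                      # at most one rating installed per node
    prob += lpSum(x[n, r] for r in ratings_kvar) <= 1
prob += lpSum(r * x[n, r] for n in nodes for r in ratings_kvar) <= 1500  # kvar budget

prob.solve()
chosen = [(n, r) for (n, r) in x if value(x[n, r]) > 0.5]
print("selected capacitors:", chosen, "net savings:", value(prob.objective))
```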
Abstract:
In recent decades, all over the world, competition in the electric power sector has deeply changed the way this sector's agents play their roles. In most countries, electricity deregulation was conducted in stages, beginning with the clients at higher voltage levels and with larger electricity consumption, and later extended to all electricity consumers. The sector's liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses the safety limits, then cheap but distant generation might have to be replaced by more expensive, closer generation to reduce the exceeded power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power is lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to a generating plant. Locational marginal pricing (LMP), resulting from bidding competition, represents electrical and economic values at nodes or in areas and may provide economic indicator signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test our methodology, we used an LMP database from the California Independent System Operator (CAISO) for 2009 to identify economic zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
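The clustering step can be sketched as follows; the synthetic LMP components stand in for the CAISO 2009 database used in the article, only k-means is shown (not the two-step algorithm), and all numbers are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster buses by their LMP components (energy, congestion, loss) so that each
# cluster approximates an economic zone.
rng = np.random.default_rng(42)
n_buses = 200
energy = np.full(n_buses, 35.0)                       # $/MWh, common system price
congestion = rng.choice([0.0, 8.0, -5.0], n_buses)    # zone-dependent component
loss = rng.normal(0.0, 1.0, n_buses)                  # small loss component
lmp_features = np.column_stack([energy, congestion, loss])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(lmp_features)
for zone in range(3):
    members = lmp_features[kmeans.labels_ == zone]
    print(f"zone {zone}: {len(members):3d} buses, mean LMP "
          f"{members.sum(axis=1).mean():6.2f} $/MWh")
```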
Abstract:
Successful expansion of haematopoietic cells in ex vivo cultures will have important applications in transplantation, gene therapy, immunotherapy and potentially also in the production of non-haematopoietic cell types. Haematopoietic stem cells (HSC), with their capacity to both self-renew and differentiate into all blood lineages, represent the ideal target for expansion protocols. However, human HSC are rare, poorly characterized phenotypically and genotypically, and difficult to test functionally. Defining optimal culture parameters for ex vivo expansion has been a major challenge. We devised a simple and reproducible stroma-free liquid culture system enabling long-term expansion of putative haematopoietic progenitors contained within frozen human fetal liver (FL) crude cell suspensions. Starting from a small number of total nucleated cells, a massive haematopoietic cell expansion, reaching > 10^13-fold the input cell number after approximately 300 d of culture, was consistently achieved. Cells with a primitive phenotype were present throughout the culture and also underwent a continuous expansion. Moreover, the capacity for multilineage lymphomyeloid differentiation, as well as the recloning capacity of primitive myeloid progenitors, was maintained in culture. With its better proliferative potential as compared with adult sources, FL represents a promising alternative source of HSC and the culture system described here should be useful for clinical applications.