957 results for Optimal reactive dispatch problem


Relevance:

30.00%

Publisher:

Abstract:

In the traditional paradigm, large power plants supply the reactive power required at the transmission level, while capacitors and transformer tap changers are used at the distribution level. However, in the near future it will be necessary to schedule both active and reactive power at the distribution level, due to the high number of resources connected there. This paper proposes a new multi-objective methodology for optimal resource scheduling that considers distributed generation, electric vehicles and capacitor banks in a joint active and reactive power schedule. The proposed methodology minimizes the cost of all distributed resources (economic perspective) and the voltage magnitude difference across all buses (technical perspective). The Pareto front is determined and a fuzzy-based mechanism is applied to select the best compromise solution. The methodology has been tested on the 33-bus distribution network. The case study presents results for three scenarios covering the economic, technical and multi-objective perspectives, and demonstrates the importance of incorporating reactive scheduling in the distribution network under a multi-objective perspective in order to obtain the best compromise between the economic and technical objectives.
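
The fuzzy-based selection of a best compromise solution from a Pareto front is commonly implemented with a linear membership function. The sketch below is a minimal illustration assuming that standard min-max membership (the abstract does not specify the paper's exact fuzzy mechanism), with hypothetical cost/voltage-deviation pairs:

```python
import numpy as np

def best_compromise(pareto_front):
    """Pick the best-compromise solution from a Pareto front of
    minimisation objectives using the common linear fuzzy membership
    function. pareto_front: array of shape (n_solutions, n_objectives)."""
    f = np.asarray(pareto_front, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    # Membership of each solution in each objective: 1 = best, 0 = worst.
    mu = (f_max - f) / np.where(f_max > f_min, f_max - f_min, 1.0)
    # Normalised aggregate membership; its maximiser is the compromise.
    score = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(score))

# Hypothetical front: (cost in EUR, total voltage deviation in p.u.)
front = [(1200.0, 0.08), (1350.0, 0.05), (1600.0, 0.03)]
print(best_compromise(front))  # -> 1, the balanced middle solution
```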

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we formulate the electricity retailers' short-term decision-making problem in a liberalized retail market as a multi-objective optimization model. Retailers with light physical assets, such as generation and storage units in the distribution network, are considered. Following advances in smart grid technologies, electricity retailers are becoming able to employ incentive-based demand response (DR) programs, in addition to their physical assets, to effectively manage the risks of market price and load variations. In this model, DR scheduling is performed simultaneously with the dispatch of generation and storage units. The ultimate goal is to find the optimal values of the hourly financial incentives offered to end-users. The proposed model considers the capacity obligations imposed on retailers by the grid operator. The profit-seeking retailer also aims to minimize peak demand, to avoid high capacity charges in the form of grid tariffs or penalties. The non-dominated sorting genetic algorithm II (NSGA-II), a fast and elitist multi-objective evolutionary algorithm, is used to solve the multi-objective problem. A case study is solved to illustrate the performance of the proposed methodology. Simulation results show the effectiveness of the model for designing incentive-based DR programs and indicate the efficiency of NSGA-II in solving the retailers' multi-objective problem.
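
NSGA-II's ranking step, fast non-dominated sorting, is what makes it well suited to this kind of bi-objective (profit vs. peak demand) problem. A minimal illustrative implementation of that step follows; it is a generic sketch of the published algorithm (Deb et al., 2002), not the authors' code, and the example objective pairs are hypothetical:

```python
def non_dominated_sort(objectives):
    """Fast non-dominated sorting over tuples of objectives to minimise.
    Returns a list of fronts, each a list of solution indices."""
    n = len(objectives)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)            # rank-0 (Pareto) front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Hypothetical (negative profit, peak demand) pairs for incentive schedules:
print(non_dominated_sort([(-90, 40), (-80, 35), (-95, 50), (-70, 30), (-75, 45)]))
# -> [[0, 1, 2, 3], [4]]: the last schedule is dominated
```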

Relevance:

30.00%

Publisher:

Abstract:

Chagas disease is a worldwide public health problem. The availability of diagnostic tools to predict the development of chronic Chagas cardiomyopathy is crucial to reduce morbidity and mortality. Here we analyze the prognostic value of adenosine deaminase serum activity (ADA) and C-reactive protein serum levels (CRP) in chagasic individuals. One hundred and ten individuals, 28 healthy and 82 chagasic patients, were divided according to disease severity into phase I (n = 35), II (n = 29), and III (n = 18). A complete medical history, 12-lead electrocardiogram, chest X-ray, and M-mode echocardiogram were obtained for each individual. Diagnosis of Chagas disease was confirmed by ELISA and MABA using recombinant antigens; ADA was determined spectrophotometrically and CRP by ELISA. The results showed that CRP and ADA increased linearly with disease phase, CRP being significantly higher in phase III and ADA at all phases. CRP and ADA also correlated positively with echocardiographic parameters of cardiac remodeling and with electrocardiographic abnormalities, and negatively with ejection fraction. CRP and ADA were higher in patients with a cardiothoracic index ≥ 50%, while ADA was higher in patients with ventricular repolarization disturbances. Finally, CRP correlated positively with ADA. In conclusion, ADA and CRP are prognostic markers of cardiac dysfunction and remodeling in Chagas disease.

Relevance:

30.00%

Publisher:

Abstract:

C-reactive protein (CRP) has been widely used in the early risk assessment of patients with acute pancreatitis (AP), but aspects of its prognostic accuracy in this setting remain unclear. This project first evaluated the prognostic accuracy of CRP for severity, pancreatic necrosis (PNec), and in-hospital mortality (IM) in AP, in terms of the best timing for CRP measurement and the optimal CRP cutoff points. Secondly, it evaluated the prognostic accuracy for IM in AP of CRP measured at approximately 24 hours after hospital admission (CRP24), both individually and in a model combined with a recently developed tool for the early risk assessment of patients with AP, the Bedside Index for Severity in AP (BISAP). Two single-centre retrospective cohort studies were conducted; the first included 379 patients and the second 134 patients. Statistical methods such as the Hosmer-Lemeshow goodness-of-fit test, the area under the receiver-operating characteristic curve, the net reclassification improvement, and the integrated discrimination improvement were used. CRP measured at approximately 48 hours after hospital admission (CRP48) showed better prognostic accuracy for severity, PNec, and IM in AP than CRP measured at any other timing. The optimal CRP48 cutoff points for severity, PNec, and IM in AP varied from 170 mg/l to 190 mg/l, values greater than the one most often recommended in the literature (150 mg/l). CRP24 showed good prognostic accuracy for IM in AP, and the cutoff point of 60 mg/l had a negative predictive value of 100%. Finally, a combined model including BISAP and CRP24 for IM in AP could perform better than the BISAP-alone model. These results might have a direct impact on the early risk assessment of patients with AP in daily clinical practice.
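
How such optimal cutoff points can be derived from the receiver-operating characteristic curve is sketched below. This is a generic illustration assuming Youden's J as the selection criterion (the abstract does not state the studies' exact criterion), using scikit-learn and toy data rather than the cohorts' data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(y_outcome, crp_values):
    """Illustrative cutoff selection for a prognostic marker via the
    ROC curve and Youden's J statistic (sensitivity + specificity - 1).

    y_outcome  : 1 = adverse outcome (e.g. in-hospital mortality), 0 = not
    crp_values : CRP measurements in mg/l at a fixed timing (e.g. CRP48)
    """
    fpr, tpr, thresholds = roc_curve(y_outcome, crp_values)
    best = np.argmax(tpr - fpr)          # index maximising Youden's J
    return thresholds[best], roc_auc_score(y_outcome, crp_values)

# Hypothetical toy data, not the studies' cohorts:
y = [0, 0, 0, 1, 0, 1, 1, 0, 1]
crp = [40, 90, 120, 210, 150, 180, 260, 60, 190]
cutoff, auc = optimal_cutoff(y, crp)
print(f"cutoff = {cutoff} mg/l, AUC = {auc:.2f}")  # 180.0 mg/l, 1.00 here
```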

Relevance:

30.00%

Publisher:

Abstract:

We characterize the optimal job design in a multitasking environment when firms rely on implicit incentive contracts (i.e., bonus payments). Two natural forms of job design are compared: (i) individual accountability, where each agent is assigned to a particular job and assumes full responsibility for its outcome; and (ii) team accountability, where a group of agents share responsibility for a job and are jointly accountable for its outcome. The key trade-off is that team accountability mitigates the multitasking problem but may weaken the implicit contracts. The optimal job design follows a cut-off rule: firms with high reputation concerns opt for team accountability, whereas firms with low reputation concerns opt for individual accountability. Team accountability is more likely the more acute the multitasking problem is. However, the cut-off rule need not hold if the firm combines implicit incentives with explicit pay-per-performance contracts.

Relevance:

30.00%

Publisher:

Abstract:

Most of today's systems, especially those related to the Web or to multi-agent systems, are not standalone or independent, but are part of a greater ecosystem, where they need to interact with other entities, react to complex changes in the environment, and act both on their own knowledge base and on the external environment itself. Moreover, these systems are clearly not static, but constantly evolve due to the execution of self-updates or external actions. Whenever actions and updates are possible, the need emerges to ensure properties regarding the outcome of performing such actions. Originally proposed in the context of databases, transactions solve this problem by guaranteeing atomicity, consistency, isolation and durability for a special set of actions. However, current transaction solutions fail to guarantee such properties in dynamic environments, since they cannot combine transaction execution with reactive features, or with the execution of actions over domains that the system does not completely control (making rollback a non-viable proposition). In this thesis, we investigate which transaction properties can be ensured in these dynamic environments, and how. To achieve this goal, we provide logic-based solutions, based on Transaction Logic, to precisely model and execute transactions in such environments, where knowledge bases can be defined by arbitrary logic theories.
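
As a point of contrast, the classical database notion of atomicity via rollback, which the thesis argues is insufficient for dynamic environments, can be sketched in a few lines. This is a toy illustration only (the class and action names are invented for the example), not the Transaction Logic machinery developed in the thesis:

```python
class KnowledgeBase:
    """Toy knowledge base whose updates run inside atomic transactions.
    A classical-database-style illustration: plain rollback, as used
    here, is exactly what becomes non-viable once a transaction also
    performs external actions the system does not control."""

    def __init__(self):
        self.facts = set()

    def run_transaction(self, actions):
        snapshot = set(self.facts)        # copy of the state before executing
        try:
            for action in actions:
                action(self.facts)        # each action updates the fact base
        except Exception:
            self.facts = snapshot         # atomicity: restore the old state
            raise

def failing_action(facts):
    facts.add("light_on")                 # partial effect...
    raise RuntimeError("sensor failure")  # ...that must not persist

kb = KnowledgeBase()
kb.run_transaction([lambda facts: facts.add("door_open")])
try:
    kb.run_transaction([failing_action])
except RuntimeError:
    pass
print(kb.facts)  # {'door_open'}: the failed transaction left no trace
```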

Relevance:

30.00%

Publisher:

Abstract:

This paper aims at assessing the optimal behavior of a firm facing stochastic production costs. In an imperfectly competitive setting, we evaluate to what extent a firm may decide to locate part of its production in markets other than the one in which it is actually settled. This decision is taken in a stochastic environment. Portfolio theory is used to derive the optimal solution to the intertemporal profit maximization problem. In such a framework, splitting production between different locations may be optimal when a firm is able to charge different prices in the different local markets.
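
The portfolio-theoretic intuition can be illustrated with the classical two-asset minimum-variance weight applied to production shares. This is a sketch under simplifying assumptions (it only minimises cost variance, whereas the paper's intertemporal profit maximisation under imperfect competition also involves price effects), with hypothetical numbers:

```python
def min_variance_split(var_a, var_b, cov_ab):
    """Share of production assigned to location A that minimises the
    variance of total unit cost, by the classical two-asset portfolio
    formula w = (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)."""
    w = (var_b - cov_ab) / (var_a + var_b - 2.0 * cov_ab)
    return min(max(w, 0.0), 1.0)  # no short "positions" in production

# Hypothetical cost variances and covariance for two plants:
print(min_variance_split(var_a=4.0, var_b=9.0, cov_ab=1.5))  # -> 0.75
```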

Relevance:

30.00%

Publisher:

Abstract:

We analyze the optimal technology policy to solve a free-riding problem between the members of a research joint venture (RJV). We assume that, when intervening, the Government faces an additional adverse selection problem because it is not able to distinguish the value of the potential innovation. Although subsidies and monitoring may be equivalent policy tools for solving the firms' free-riding problem, they imply different social losses if the Government cannot perfectly distinguish the value of the potential innovation. The supremacy of monitoring tools over subsidies is shown to depend on which type of information the Government is able to obtain about firms' R&D performance.

Relevance:

30.00%

Publisher:

Abstract:

We study the screening problem that arises in a framework where, initially, the agent is privately informed about both the expected production cost and the cost variability and, at a later stage, he privately learns the cost realization. The specific set of relevant incentive constraints, and so the characteristics of the optimal mechanism, depend finely upon the curvature of the principal's marginal surplus function as well as the relative importance of the two initial information problems. Pooling of production levels is optimally induced with respect to the cost variability when the principal's knowledge imperfection about the latter is sufficiently less important than that about the expected cost.

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the optimal degree of discretion in monetary policy when the central bank conducts policy based on its private information about the state of the economy and is unable to commit. Society seeks to maximize social welfare by imposing restrictions on the central bank's actions over time, and the central bank takes these restrictions and the New Keynesian Phillips curve as constraints. By solving a dynamic mechanism design problem, we find that it is optimal to grant "constrained discretion" to the central bank by imposing both upper and lower bounds on permissible inflation, and that these bounds must be set in a history-dependent way. The optimal degree of discretion varies over time with the severity of the time-inconsistency problem, and, although no discretion is optimal when the time-inconsistency problem is very severe, our numerical experiment suggests that this is a transient phenomenon and that some discretion is eventually granted.

Relevance:

30.00%

Publisher:

Abstract:

When using a polynomial approximating function, the most contentious aspect of the Heat Balance Integral Method (HBIM) is the choice of the power of the highest-order term. In this paper we employ a method recently developed for thermal problems, in which the exponent is determined during the solution process, to analyse Stefan problems. This is achieved by minimising an error function. The solution requires no knowledge of an exact solution and generally produces significantly better results than all previous HBI models. The method is illustrated by first applying it to standard thermal problems. A Stefan problem with an analytical solution is then discussed and the results compared with the approximate solution. An ablation problem is also analysed and the results compared against a numerical solution. In both examples the agreement is excellent. A Stefan problem where the boundary temperature increases exponentially is analysed, which highlights the difficulties that can be encountered with a time-dependent boundary condition. Finally, melting with a time-dependent flux is briefly analysed, although no analytical or numerical results are available to assess the accuracy.
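
The exponent-selection idea can be made concrete on the classical semi-infinite thermal problem. The sketch below is a minimal illustration rather than the paper's formulation: it uses the profile u = (1 - x/delta)^n with a fixed unit boundary temperature and unit diffusivity, for which the heat balance integral gives delta^2 = 2n(n+1)t, and then picks the exponent that minimises the integral of the squared heat-equation residual:

```python
import numpy as np

eta = np.linspace(0.0, 1.0, 4001)            # scaled position x / delta(t)

def error(n, t=1.0):
    """Integral of the squared residual u_t - u_xx over the profile
    u = (1 - x/delta)**n, with delta**2 = 2*n*(n + 1)*t from the HBIM."""
    delta = np.sqrt(2.0 * n * (n + 1) * t)
    bracket = (n * (n + 1) * eta * (1.0 - eta) ** (n - 1)
               - (n - 1) * (1.0 - eta) ** (n - 2))
    r = n / delta**2 * bracket               # residual as a function of eta
    de = eta[1] - eta[0]                     # trapezoidal rule in eta
    return delta * float(np.sum((r[:-1] ** 2 + r[1:] ** 2) / 2.0) * de)

ns = np.linspace(2.0, 3.5, 301)
best = ns[int(np.argmin([error(n) for n in ns]))]
print(f"optimal exponent n = {best:.3f}")    # ~2.23, vs the ad hoc n = 2
```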

Relevance:

30.00%

Publisher:

Abstract:

The paper discusses the use of new techniques to select processes for protein recovery, separation and purification. It describes a rational approach that uses fundamental databases of protein molecules to simplify the complex problem of choosing high-resolution separation methods for multicomponent mixtures. It examines the role of modern computer techniques in helping to solve these questions.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a stabilized conforming finite volume element method for the Stokes equations. In establishing the convergence of the method, optimal a priori error estimates in different norms are obtained by setting up the adequate connection between the finite volume and stabilized finite element formulations. A superconvergence result is also derived by means of a postprocessing projection method. In particular, the stabilization of the continuous lowest equal-order pair finite volume element discretization is achieved by enriching the velocity space with local functions that do not necessarily vanish on the element boundaries. Finally, some numerical experiments that confirm the predicted behavior of the method are provided.

Relevance:

30.00%

Publisher:

Abstract:

We introduce and analyze two new semi-discrete numerical methods for the multi-dimensional Vlasov-Poisson system. The schemes are constructed by combining a discontinuous Galerkin approximation of the Vlasov equation with a mixed finite element method for the Poisson problem. We show optimal error estimates in the case of smooth, compactly supported initial data, and we propose a scheme that preserves the total energy of the system.
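
For reference, the system being discretised is, in standard non-dimensional form (sign and background-charge conventions may differ from the paper's):

```latex
% Vlasov equation for the phase-space density f(t, x, v),
% approximated in the paper with a discontinuous Galerkin method:
\partial_t f + v \cdot \nabla_x f + E(t,x) \cdot \nabla_v f = 0,
% self-consistent field from the Poisson problem, handled with
% a mixed finite element method:
E = -\nabla_x \phi, \qquad -\Delta_x \phi = \rho(t,x) = \int f(t,x,v)\, dv.
```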

Relevance:

30.00%

Publisher:

Abstract:

Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying the management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing label spaces in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable instead of NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection constraint, the label space can be reduced further.
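
The effect of MP2P label merging on label-space size can be illustrated with a small per-node count. This is not the letter's Full Label Merging algorithm, only a simplified sketch under the assumption that LSPs sharing a next hop and an egress can share one label:

```python
from collections import defaultdict

def label_counts(lsps):
    """Compare per-node label usage without and with MP2P label merging.
    lsps: list of paths, each a sequence of node names ending at the
    LSP's egress. Without merging, a node consumes one label per LSP
    traversing it; with merging, LSPs towards the same egress over the
    same next hop share a single label."""
    plain = defaultdict(int)
    merged = defaultdict(set)
    for path in lsps:
        egress = path[-1]
        for hop, nxt in zip(path, path[1:]):
            plain[hop] += 1
            merged[hop].add((nxt, egress))  # one label per (next hop, egress)
    return dict(plain), {node: len(s) for node, s in merged.items()}

# Three LSPs funnelling through node C towards egress E:
lsps = [["A", "C", "E"], ["B", "C", "E"], ["D", "C", "E"]]
print(label_counts(lsps))
# node C: 3 labels without merging, 1 with merging (all LSPs head to E)
```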