13 results for Ramp rate constraints

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

90.00%

Publisher:

Abstract:

Economic and environmental load dispatch aims to determine the amount of electricity generated from power plants to meet load demand while minimizing fossil fuel costs and air pollution emissions, subject to operational and licensing requirements. These two scheduling problems are commonly formulated with non-smooth cost functions and are subject to various effects and constraints, such as the valve point effect, power balance and ramp rate limits. The expected increase in plug-in electric vehicles is likely to have a significant impact on the power system due to high charging power consumption and significant uncertainty in charging times. In this paper, multiple electric vehicle charging profiles are comparatively integrated into a 24-hour load demand in an economic and environmental dispatch model. Self-learning teaching-learning based optimization (TLBO) is employed to solve the non-convex non-linear dispatch problems. Numerical results on well-known benchmark functions, as well as on test systems with different scales of generation units, show the significance of the new scheduling method.
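As a rough illustration of the dispatch formulation above, the following sketch evaluates a non-smooth fuel cost with the rectified-sine valve point term and checks the power balance and ramp rate constraints. The three-unit system and all coefficients are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical 3-unit system; all coefficients are illustrative, not the paper's.
a = np.array([0.008, 0.010, 0.012])     # quadratic cost, $/MW^2
b = np.array([7.0, 8.0, 9.0])           # linear cost, $/MW
c = np.array([200.0, 180.0, 140.0])     # fixed cost, $
e = np.array([100.0, 120.0, 80.0])      # valve point amplitude
f = np.array([0.042, 0.040, 0.038])     # valve point frequency
p_min = np.array([50.0, 40.0, 30.0])    # minimum output, MW
ramp = np.array([40.0, 30.0, 30.0])     # ramp rate limit, MW per interval

def fuel_cost(p):
    """Non-smooth fuel cost with the rectified-sine valve point term."""
    return float(np.sum(a * p**2 + b * p + c + np.abs(e * np.sin(f * (p_min - p)))))

def feasible(p, p_prev, demand, tol=1e-3):
    """Power balance (losses ignored here) and ramp rate limits."""
    return abs(p.sum() - demand) < tol and bool(np.all(np.abs(p - p_prev) <= ramp))

p_prev = np.array([100.0, 80.0, 60.0])  # outputs in the previous interval
p_now = np.array([120.0, 90.0, 70.0])   # candidate outputs for this interval
print(fuel_cost(p_now), feasible(p_now, p_prev, demand=280.0))
```

A metaheuristic such as TLBO would search over the output vector, typically folding the feasibility checks into the objective as penalty terms.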

Relevance:

90.00%

Publisher:

Abstract:

Economic dispatch (ED) problems often exhibit non-linear, non-convex characteristics due to valve point effects. Further, various constraints and factors, such as prohibited operating zones, ramp rate limits and security constraints imposed by the generating units, and power loss in transmission, make it even more challenging to obtain the global optimum using conventional mathematical methods. Meta-heuristic approaches are capable of solving non-linear, non-continuous and non-convex problems effectively, as they impose no restrictive assumptions on the optimization problem. However, most methods reported so far focus mainly on a specific type of ED problem, such as static or dynamic ED. This paper proposes a hybrid harmony search with an arithmetic crossover operation, namely ACHS, for solving five different types of ED problems: static ED with valve point effects, ED with prohibited operating zones, ED considering multiple fuel options, combined heat and power ED, and dynamic ED. In the proposed ACHS, the global best information and arithmetic crossover are used to update the newly generated solution and speed up convergence, which strengthens the algorithm's exploitation capability. To balance the exploitation and exploration capabilities, an opposition based learning (OBL) strategy is employed to enhance the diversity of solutions. Further, four commonly used crossover operators are investigated, and the arithmetic crossover proves more efficient than the others when incorporated into HS. For a comprehensive study of its scalability, ACHS is first tested on a group of benchmark functions with 100 dimensions and compared with several state-of-the-art methods. It is then used to solve seven different ED cases and compared with the results reported in the literature. All the results confirm the superiority of ACHS on different optimization problems.
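To make the improvisation-plus-crossover idea concrete, here is a minimal sketch of one harmony search improvisation step followed by an arithmetic crossover toward the current best solution. The parameter names (hmcr, par, bw) are the standard HS controls; the exact update rule, bounds and blending coefficient are assumptions for illustration and may differ from the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def improvise(memory, best, hmcr=0.9, par=0.3, bw=0.01, alpha=0.5):
    """Generate one new harmony: memory consideration, pitch adjustment,
    then an arithmetic crossover pulling the candidate toward the best."""
    dim = memory.shape[1]
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                      # memory consideration
            new[j] = memory[rng.integers(len(memory)), j]
            if rng.random() < par:                   # pitch adjustment
                new[j] += bw * rng.uniform(-1.0, 1.0)
        else:                                        # random re-initialization
            new[j] = rng.uniform(-5.12, 5.12)
    lam = rng.uniform(0.0, alpha)                    # arithmetic crossover weight
    return (1.0 - lam) * new + lam * best

memory = rng.uniform(-5.12, 5.12, size=(10, 30))     # harmony memory
best = memory[np.argmin((memory**2).sum(axis=1))]    # best member on a sphere function
print(improvise(memory, best)[:5])
```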

Relevance:

90.00%

Publisher:

Abstract:

A simple yet efficient harmony search (HS) method with a new pitch adjustment rule (NPAHS) is proposed for dynamic economic dispatch (DED) of electrical power systems, a large-scale non-linear real-time optimization problem subject to a number of complex constraints. The new pitch adjustment rule is based on the perturbation information and the mean value of the harmony memory, which is simple to implement and helps to enhance solution quality and convergence speed. A new constraint handling technique is also developed to effectively handle the various constraints in the DED problem, and the violation of ramp rate limits between the first and last scheduling intervals, which is often ignored by existing approaches to DED problems, is effectively eliminated. To validate its effectiveness, NPAHS is first tested on 10 popular benchmark functions with 100 dimensions, in comparison with four HS variants and five state-of-the-art evolutionary algorithms. NPAHS is then used to solve three 24-h DED systems with 5, 15 and 54 units, which consider valve point effects, transmission loss, emissions and prohibited operating zones. Simulation results on all these systems show the scalability and superiority of the proposed NPAHS on various large-scale problems.
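The wrap-around ramp check highlighted above is easy to state in code. The sketch below counts ramp rate violations across a dispatch schedule, including the last-to-first interval transition that the abstract notes is often ignored; the schedule and limits are made-up numbers.

```python
import numpy as np

def ramp_violations(schedule, up, down, cyclic=True):
    """Total ramp rate violation over a dispatch schedule.

    schedule: (T, N) array of unit outputs per interval. If cyclic, the
    wrap-around transition from the last interval back to the first is
    also checked, the case the abstract notes is often ignored."""
    steps = np.diff(schedule, axis=0)
    if cyclic:
        steps = np.vstack([steps, schedule[0] - schedule[-1]])
    over = np.maximum(steps - up, 0.0)      # ramp-up excess
    under = np.maximum(-steps - down, 0.0)  # ramp-down excess
    return float(over.sum() + under.sum())

sched = np.array([[100.0, 80.0], [130.0, 95.0], [110.0, 85.0]])
print(ramp_violations(sched, up=np.array([25.0, 20.0]), down=np.array([25.0, 20.0])))
```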

Relevance:

40.00%

Publisher:

Abstract:

We have calculated 90% confidence limits on the steady-state rate of catastrophic disruptions of main belt asteroids, in terms of the absolute magnitude H0 at which one catastrophic disruption occurs per year, as a function of the post-disruption increase in brightness (Δm) and subsequent brightness decay rate (τ). The confidence limits were calculated using the brightest unknown main belt asteroid (V=18.5) detected with the Pan-STARRS1 telescope. We measured Pan-STARRS1's catastrophic disruption detection efficiency over a 453-day interval using the Pan-STARRS moving object processing system (MOPS) and a simple model for the catastrophic disruption event's photometric behavior in a small aperture centered on the event. We then calculated the H0 contours over ranges of Δm and τ encompassing measured values from known cratering and disruption events and our model's predictions. Our simplistic catastrophic disruption model suggests values of Δm and τ that would imply H0≳28, strongly inconsistent with H0,B2005=23.26±0.02 predicted by Bottke et al. (Bottke, W.F., Durda, D.D., Nesvorný, D., Jedicke, R., Morbidelli, A., Vokrouhlický, D., Levison, H.F. [2005]. Icarus, 179, 63-94.) using purely collisional models. However, if we instead assume that H0=H0,B2005, our results constrain Δm and τ to values inconsistent with our simplistic impact-generated catastrophic disruption model. We postulate that the solution to the discrepancy is that >99% of main belt catastrophic disruptions in the size range to which this study was sensitive (∼100 m) are not impact-generated, but are instead due to fainter rotational breakups, of which the recently discovered disrupted asteroids P/2013 P5 and P/2013 R3 are probable examples. We estimate that current and upcoming asteroid surveys may discover up to 10 catastrophic disruptions per year brighter than V=18.5.
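The paper's limit is an efficiency-weighted calculation over the full MOPS survey simulation, but the underlying statistic is Poissonian. As a hedged back-of-envelope analogue only, the sketch below converts zero confirmed disruptions over the 453-day window into a one-sided 90% upper limit on the annual rate; the detection efficiency value is an invented placeholder.

```python
import numpy as np

# Zero confirmed disruptions in T days with detection efficiency eps gives a
# one-sided 90% Poisson upper limit of -ln(0.10) ≈ 2.303 expected events.
T_days = 453.0     # survey interval from the abstract
eps = 0.3          # invented, illustrative detection efficiency
n_upper = -np.log(0.10)
rate_upper = n_upper / (eps * T_days / 365.25)  # disruptions per year
print(f"90% upper limit: {rate_upper:.2f} disruptions/yr above the survey limit")
```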

Relevance:

30.00%

Publisher:

Abstract:

The urinary catheter is a thin plastic tube that has been designed to empty the bladder artificially, effortlessly, and with minimum discomfort. The current CH14 male catheter design was examined with a view to optimizing the mass flow rate. The literature imposed constraints on the analysis of the urinary catheter to ensure that the new design achieved a compromise between optimal flow, patient comfort, and everyday practicality from manufacture to use. As a result, a total of six design characteristics were examined. The input variables in question were the length and width of eyelets 1 and 2 (four variables), the distance between the eyelets, and the angle of rotation between the eyelets. Due to the high number of possible input combinations, a structured approach to the analysis of data was necessary. A combination of computational fluid dynamics (CFD) and design of experiments (DOE) has been used to evaluate the "optimal configuration." The use of CFD coupled with DOE is a novel concept, which harnesses the computational power of CFD in the most efficient manner for prediction of the mass flow rate in the catheter. Copyright © 2009 by ASME.
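Since the abstract names exactly six input variables, a two-level full factorial enumeration shows the scale of the design space that makes DOE attractive here. The level values below are placeholders, not the study's actual settings.

```python
from itertools import product

# Two-level full factorial over the six catheter variables named in the
# abstract; the level values are placeholders, not the study's settings.
factors = {
    "eyelet1_length": (2.0, 4.0),    # mm
    "eyelet1_width": (1.0, 2.0),     # mm
    "eyelet2_length": (2.0, 4.0),    # mm
    "eyelet2_width": (1.0, 2.0),     # mm
    "eyelet_spacing": (5.0, 10.0),   # mm
    "rotation_angle": (0.0, 90.0),   # degrees
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs), "CFD runs for a full 2^6 factorial")  # 64
```

A fractional factorial or response-surface design would cut those 64 CFD runs substantially, which is presumably where the DOE machinery pays off.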

Relevance:

30.00%

Publisher:

Abstract:

Channel state information (CSI) at the transmitter can be used to adapt the transmission rate or antenna gains in multi-antenna systems. We propose a rate-adaptive M-QAM scheme, equipped with orthogonal space-time block coding, that uses simple outdated, finite-rate feedback over independent flat fading channels. We obtain closed-form expressions for the average BER and throughput of our scheme, and analyze the effects of possibly delayed feedback on the performance gains. We derive optimal switching thresholds maximizing the average throughput under average and outage BER constraints with outdated feedback. Our numerical results illustrate the immunity of our optimal thresholds to delayed feedback.
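A minimal sketch of the switching-threshold idea: the fed-back, possibly outdated SNR selects the largest constellation whose threshold it clears. The threshold values are invented placeholders; the paper derives the optimal ones under average and outage BER constraints.

```python
import numpy as np

# Invented SNR switching thresholds (dB); the paper derives optimal ones
# under average and outage BER constraints with outdated feedback.
thresholds_db = {4: 10.0, 16: 17.0, 64: 23.0}  # M-QAM order -> threshold

def select_rate(snr_db):
    """Bits/symbol of the largest constellation whose threshold the
    fed-back (possibly outdated) SNR still clears; 0 means no transmission."""
    best = 0
    for m, th in sorted(thresholds_db.items()):
        if snr_db >= th:
            best = int(np.log2(m))
    return best

print([select_rate(s) for s in (8.0, 12.0, 20.0, 30.0)])  # [0, 2, 4, 6]
```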

Relevance:

30.00%

Publisher:

Abstract:

Hundsalm ice cave, located at 1520 m altitude in a karst region of western Austria, contains up to 7-m-thick deposits of snow, firn and congelation ice. Wood fragments exposed in the lower parts of an ice and firn wall were radiocarbon dated by accelerator mass spectrometry (AMS). Although the local stratigraphy is complex, the 19 individual dates - the largest currently available radiocarbon dataset for an Alpine ice cave - make it possible to place constraints on the accumulation and ablation history of the cave ice. Most of the cave was either ice free or contained only a small firn and ice body during the 'Roman Warm Period'; dates of three wood fragments mark the onset of firn and ice build-up in the 6th and 7th century AD. In the central part of the cave, the oldest samples date back to the 13th century and record ice growth coeval with the onset of the 'Little Ice Age'. The majority of the ice and firn deposit, albeit compromised by a disturbed stratigraphy, appears to have formed during the subsequent centuries, supported by wood samples from the 15th to the 17th century. The oldest wood remains found so far inside the ice are from the end of the Bronze Age and imply that local relics of prehistoric ice may be preserved in this cave. The wood record from Hundsalm ice cave shows parallels to the Alpine glacier history of the last three millennia, for example the lack of preserved wood remains during periods of known glacier minima, and underscores the potential of firn and ice in karst cavities as a long-term palaeoclimate archive, one that has been degrading at an alarming rate in recent years. © The Author(s) 2013.

Relevance:

30.00%

Publisher:

Abstract:

Over the last 15 years, the supernova community has endeavoured to directly identify progenitor stars for core-collapse supernovae discovered in nearby galaxies. These precursors are often visible as resolved stars in high-resolution images from space- and ground-based telescopes. The discovery rate of progenitor stars is limited by the local supernova rate and the availability and depth of archive images of galaxies, with 18 detections of precursor objects and 27 upper limits. This review compiles these results (from 1999 to 2013) in a distance-limited sample and discusses the implications of the findings. The vast majority of the detected progenitor stars are of type II-P, II-L, or IIb, with one type Ib progenitor system detected and many more upper limits for progenitors of Ibc supernovae (14 in all). The data for these 45 supernova progenitors illustrate a remarkable deficit of high-luminosity stars above an apparent limit of log L/L⊙ ≃ 5.1 dex. For a typical Salpeter initial mass function, one would expect to have found 13 high-luminosity and high-mass progenitors by now. There is possibly only one object in this time- and volume-limited sample that is unambiguously high-mass (the progenitor of SN2009ip), although the nature of that supernova is still debated. The possible biases due to the influence of circumstellar dust, the luminosity analysis, and sample selection methods are reviewed. It does not appear likely that these can explain the missing high-mass progenitor stars. This review concludes that the community's work to date shows that the observed populations of supernovae in the local Universe are not, on the whole, produced by high-mass (M ≳ 18 M⊙) stars. Theoretical explosions of model stars also predict that black hole formation and failed supernovae tend to occur above an initial mass of M ≃ 18 M⊙. The models also suggest there is no simple single mass division for neutron star or black hole formation and that there are islands of explodability for stars in the 8-120 M⊙ range. The observational constraints are quite consistent with the bulk of stars above M ≃ 18 M⊙ collapsing to form black holes with no visible supernovae.
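The expected count quoted above can be reproduced approximately from the Salpeter IMF alone. The sketch below computes the fraction of 8-120 M⊙ progenitors lying above 18 M⊙ and scales it by the 45-object sample; it yields roughly 14, close to the review's quoted 13, with the small difference presumably down to the exact mass bounds and sample weighting assumed.

```python
def salpeter_fraction(m_lo, m_hi, m_min=8.0, m_max=120.0, alpha=2.35):
    """Fraction of core-collapse progenitors in [m_min, m_max] that fall
    within [m_lo, m_hi] under a Salpeter IMF, dN/dM ∝ M**(-alpha)."""
    def count(a, b):  # integral of M**(-alpha) from a to b, up to a constant
        return (a**(1 - alpha) - b**(1 - alpha)) / (alpha - 1)
    return count(m_lo, m_hi) / count(m_min, m_max)

frac = salpeter_fraction(18.0, 120.0)
print(f"{frac:.2f} of the sample, i.e. {45 * frac:.1f} expected high-mass progenitors")
```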

Relevance:

30.00%

Publisher:

Abstract:

We present nebular-phase optical and near-infrared spectroscopy of the Type IIP supernova SN 2012aw, combined with non-local thermodynamic equilibrium radiative transfer calculations applied to ejecta from stellar evolution/explosion models. Our spectral synthesis models generally show good agreement with the ejecta from an MZAMS = 15 M⊙ progenitor star. The emission lines of oxygen, sodium, and magnesium are all consistent with the nucleosynthesis of a progenitor in the 14-18 M⊙ range. We also demonstrate how the evolution of the oxygen cooling lines of [O I] λ5577, [O I] λ6300, and [O I] λ6364 can be used to constrain the mass of oxygen in the non-molecularly cooled ashes to < 1 M⊙, independent of the mixing in the ejecta. This constraint implies that any progenitor model of initial mass greater than 20 M⊙ would be difficult to reconcile with the observed line strengths. A stellar progenitor of around MZAMS = 15 M⊙ can consistently explain the directly measured luminosity of the progenitor star, the observed nebular spectra, and the inferred pre-supernova mass-loss rate. We conclude that there is still no convincing example of a Type IIP supernova showing the nucleosynthesis products expected from an MZAMS > 20 M⊙ progenitor. © 2014 The Author. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy to maximize the secrecy rate of the secondary system. Utilizing the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system, while satisfying all the interference power constraints at the primary users as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a non-convex program, which can be reformulated as a convex program by applying the rank relaxation method. We prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle point problem based on a duality result. To find the global optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to solve the resulting unconstrained problem, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
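The zero-forcing step described above has a compact linear-algebra form: project any candidate jamming vector onto the orthogonal complement of the secondary user's channel, so the SU sees no JN. The sketch below illustrates this for a single-antenna SU with invented dimensions; it is not the paper's full joint beamforming design.

```python
import numpy as np

rng = np.random.default_rng(2)
nt = 4  # transmit antennas at the secondary transmitter (invented)
h_su = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)  # SU channel

# Project a candidate jamming vector onto the orthogonal complement of h_su,
# zero-forcing the jamming noise at the secondary user.
P_null = np.eye(nt) - np.outer(h_su, h_su.conj()) / np.vdot(h_su, h_su).real
w_jam = P_null @ (rng.standard_normal(nt) + 1j * rng.standard_normal(nt))
print(abs(h_su.conj() @ w_jam))  # ~0: the SU sees no jamming
```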

Relevance:

30.00%

Publisher:

Abstract:

We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming the zero-forcing method to eliminate the multiuser interference, we study the sum rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers, or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink duality result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which reduces the complexity significantly compared to the conventional method. Simulation results are provided to demonstrate the fast convergence and effectiveness of the proposed algorithm.
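For the Newton-step subproblem, a discrete-time Sylvester equation of the form X - AXB = Q can be solved by vectorization, since vec(AXB) = (Bᵀ ⊗ A) vec(X). The sketch below uses this dense Kronecker approach, which is fine for small examples but scales much worse than the structured per-iteration solver the paper proposes; the matrices are random placeholders.

```python
import numpy as np

def solve_discrete_sylvester(A, B, Q):
    """Solve X - A X B = Q via (I - Bᵀ ⊗ A) vec(X) = vec(Q).
    Dense Kronecker solve: fine for small systems, far costlier than a
    structured solver at the sizes a precoder design would face."""
    n, m = Q.shape
    K = np.eye(n * m) - np.kron(B.T, A)
    x = np.linalg.solve(K, Q.reshape(-1, order="F"))  # column-major vec
    return x.reshape(n, m, order="F")

rng = np.random.default_rng(3)
A = 0.5 * rng.standard_normal((3, 3))
B = 0.5 * rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
Q = X_true - A @ X_true @ B
print(np.allclose(solve_discrete_sylvester(A, B, Q), X_true))  # True
```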