932 results for peak minimization (PM)


Relevance:

100.00%

Publisher:

Abstract:

Demand response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementation of DR algorithms using a centralized agent can be problematic for scalability reasons, and there are issues related to the privacy of data and robustness to communication failures. Thus, it is desirable to use a scalable decentralized algorithm for the implementation of DR. In this paper, a hierarchical DR scheme is proposed for peak minimization based on Dantzig-Wolfe decomposition (DWD). In addition, a time weighted maximization option is included in the cost function, which improves the quality of service for devices seeking to receive their desired energy sooner rather than later. This paper also demonstrates how the DWD algorithm can be implemented more efficiently through the calculation of the upper and lower cost bounds after each DWD iteration.
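The Dantzig-Wolfe scheme itself is beyond a short snippet, but the peak-minimization objective it targets can be illustrated with a toy centralized heuristic. The sketch below (all names hypothetical, and a greedy stand-in rather than the paper's DWD algorithm) water-fills each device's energy requirement into the least-loaded time slots:

```python
def schedule_loads(base_load, devices):
    """Toy peak-minimizing scheduler: place each device's energy units,
    one at a time, into the currently least-loaded time slot, subject to
    a per-slot limit for that device. This flattens the aggregate
    profile; it is a greedy heuristic, not the DWD scheme itself."""
    load = list(base_load)
    for energy, per_slot in devices:  # (units required, per-slot limit)
        placed = [0] * len(load)
        while energy > 0:
            # least-loaded slot in which this device still has headroom
            t = min((i for i in range(len(load)) if placed[i] < per_slot),
                    key=lambda i: load[i])
            load[t] += 1
            placed[t] += 1
            energy -= 1
    return load
```

For example, a base load of [3, 1, 0, 2] plus a device needing 4 units (at most 2 per slot) and another needing 2 units (at most 1 per slot) is flattened to a constant profile with peak 3.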

Relevance:

40.00%

Publisher:

Abstract:

Scan circuits generally cause excessive switching activity compared to normal circuit operation. The higher switching activity in turn causes higher peak power-supply current, which results in supply-voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum peak power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by including a minimal number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. Results on several benchmarks show that the methodology can reduce peak power by up to 27%.
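Since switching activity between consecutive test vectors scales with their Hamming distance, re-ordering can be sketched as a nearest-neighbour tour over the vector set. The snippet below (function names hypothetical, and a simple greedy heuristic rather than the paper's optimal methodology) keeps the per-cycle transition count low:

```python
def hamming(a, b):
    """Number of differing bit positions between two test vectors."""
    return sum(x != y for x, y in zip(a, b))

def reorder_vectors(vectors):
    """Greedy nearest-neighbour re-ordering: repeatedly append the
    unused vector with the fewest bit flips from the current one, so the
    peak per-cycle switching activity is reduced (a heuristic; it does
    not guarantee the minimum peak supported by the set)."""
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order
```

On the toy set ["000", "111", "001", "110"], the original order has a peak transition of 3 bit flips, while the greedy order ["000", "001", "111", "110"] lowers the peak to 2.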

Relevance:

30.00%

Publisher:

Abstract:

The overall aim of our research was to characterize airborne particles from selected nanotechnology processes and to utilize the data to develop and test quantitative particle concentration-based criteria that can be used to trigger an assessment of particle emission controls. We investigated particle number concentration (PNC), particle mass (PM) concentration, count median diameter (CMD), alveolar deposited surface area, elemental composition, and morphology in aerosols sampled from six nanotechnology processes, involving both fibrous and non-fibrous particles, including carbon nanotubes (CNTs). We adopted standard occupational hygiene principles for controlling peak emissions and exposures, as outlined by Safe Work Australia (1) and the American Conference of Governmental Industrial Hygienists (ACGIH®) (2). The results from the study were used to analyze peak (highest value recorded) and 30-minute averaged particle number and mass concentration values measured during operation of the nanotechnology processes. This analysis revealed that peak PNC20–1000 nm emitted from the nanotechnology processes was up to three orders of magnitude greater than the local background particle concentration (LBPC), peak PNC300–3000 nm was up to an order of magnitude greater, and PM2.5 concentrations were up to four orders of magnitude greater. For three of these nanotechnology processes, the 30-minute averaged particle number and mass concentrations also differed significantly from the LBPC (p < 0.001).
We propose that emission or exposure controls may need to be implemented or modified, or a further assessment of the controls undertaken, if concentrations exceed three times the LBPC, which is also used as the local particle reference value, for more than a total of 30 minutes during a workday, and/or if a single short-term measurement exceeds five times the local particle reference value. These quantitative criteria, which we term the universal excursion guidance criteria, account for the typical variation in LBPC and the inaccuracy of instruments, while remaining precautionary enough to highlight peaks in particle concentration likely to be associated with particle emission from the nanotechnology process. Recommendations on when to utilize local excursion guidance criteria are also provided.
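The excursion criteria reduce to a simple decision rule. The sketch below applies them to a series of concentration readings (function name and the fixed sampling interval are assumptions; the thresholds are the 3x/30-minute and 5x values stated above):

```python
def excursion_flag(samples, lbpc, sample_interval_s=60):
    """Universal excursion guidance criteria, as a decision rule:
    flag when readings exceed 3x the local background particle
    concentration (LBPC, used as the local particle reference value)
    for more than 30 minutes in total during the workday, or when any
    single short-term reading exceeds 5x the reference value."""
    time_over_3x = sum(sample_interval_s for c in samples if c > 3 * lbpc)
    single_exceedance = any(c > 5 * lbpc for c in samples)
    return time_over_3x > 30 * 60 or single_exceedance
```

With one-minute samples and an LBPC of 1000 particles/cm3, thirty-one readings of 3500 trip the 30-minute criterion, thirty do not, and a single reading of 5100 trips the short-term criterion on its own.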

Relevance:

20.00%

Publisher:

Abstract:

Damage localization induced by strain softening can be predicted by direct minimization of a global energy function. This article concerns the computational strategy for implementing this principle for softening materials such as concrete. Instead of using heuristic global optimization techniques, our strategies are a hybrid of local optimization methods with a path-finding approach to ensure a global optimum. With admissible nodal displacements as the independent variables, it is easy to deal with the geometric (mesh) constraint conditions. The direct search optimization methods recover the localized solutions for a range of softening lattice models that are representative of quasi-brittle structures.

Relevance:

20.00%

Publisher:

Abstract:

In this chapter we propose clipping with amplitude and phase corrections to reduce the peak-to-average power ratio (PAR) of orthogonal frequency division multiplexed (OFDM) signals in high-speed wireless local area networks defined by the IEEE 802.11a physical layer. The proposed techniques can be implemented with a small modification at the transmitter, and the receiver remains standard compliant. A PAR reduction of as much as 4 dB can be achieved by selecting a suitable clipping ratio and a correction factor depending on the constellation used. Out-of-band noise (OBN) is also reduced.
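The clipping stage itself is straightforward to sketch. The snippet below (function names hypothetical) limits the signal envelope relative to its RMS amplitude while preserving phase; the amplitude and phase correction factors of the proposed scheme are not modelled:

```python
import cmath
import math

def par_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(p) * len(p) / sum(p))

def clip_envelope(x, clip_ratio):
    """Clip the envelope to clip_ratio times the RMS amplitude, keeping
    each sample's phase (a minimal sketch of the clipping stage only)."""
    rms = math.sqrt(sum(abs(s) ** 2 for s in x) / len(x))
    limit = clip_ratio * rms
    return [s if abs(s) <= limit else cmath.rect(limit, cmath.phase(s))
            for s in x]
```

On the toy block [3, 1, 1j, -1] with a clipping ratio of 1.0, the PAR drops from about 4.8 dB to about 3.0 dB; in practice the ratio is tuned per constellation, as the abstract notes.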

Relevance:

20.00%

Publisher:

Abstract:

Parallel combinatory orthogonal frequency division multiplexing (PC-OFDM) yields a lower maximum peak-to-average power ratio (PAR), higher bandwidth efficiency and a lower bit error rate (BER) on Gaussian channels compared to OFDM systems. However, PC-OFDM does not significantly improve the statistics of the PAR. In this chapter, the use of a set of fixed permutations to improve the statistics of the PAR of a PC-OFDM signal is presented. In this technique, interleavers are used to produce K-1 permuted sequences from the same information sequence, and the sequence with the lowest PAR among the K sequences is chosen for transmission. The PAR of a PC-OFDM signal can be further reduced by 3-4 dB in this way. Mathematical expressions for the complementary cumulative distribution function (CCDF) of the PAR of PC-OFDM and interleaved PC-OFDM signals are also presented.
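The selection step can be sketched directly: permute the symbol sequence with each fixed interleaver, transform to the time domain, and keep the candidate with the lowest PAR. The code below (function names hypothetical, plain OFDM symbols rather than PC-OFDM, and a naive IDFT for clarity) illustrates the mechanism; the winning interleaver index must be signalled as side information:

```python
import cmath
import math

def par(x):
    """Linear peak-to-average power ratio of a complex block."""
    p = [abs(s) ** 2 for s in x]
    return max(p) * len(p) / sum(p)

def idft(symbols):
    """Naive inverse DFT (O(n^2)); an FFT would be used in practice."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * math.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def select_interleaved(symbols, interleavers):
    """Interleaving-based PAR reduction: form K-1 permuted copies of the
    symbol sequence with fixed interleavers and transmit the candidate
    (identity included) whose time-domain signal has the lowest PAR."""
    candidates = [symbols] + [[symbols[i] for i in p] for p in interleavers]
    return min(candidates, key=lambda c: par(idft(c)))
```

Because the identity sequence is always among the candidates, the selected sequence's PAR can never exceed that of the original.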

Relevance:

20.00%

Publisher:

Abstract:

The concept of moving block signalling (MBS) has been adopted in a few mass transit railway systems. When a dense queue of trains begins to move from a complete stop, the trains can restart in very close succession under MBS. The nearby feeding substations are then likely to be overloaded, and service will inevitably be disturbed unless substations of higher power rating are used. By introducing starting-time delays among the trains or limiting the trains' acceleration rate, the peak energy demand can be contained; however, delay is introduced and quality of service is degraded. An expert system approach is presented to provide a supervisory tool for the operators. As the knowledge base is vital to the quality of the decisions made, the study focuses on its formulation, balancing delay against peak power demand.
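The delay/peak trade-off the knowledge base must balance can be made concrete with a small aggregation model. The sketch below (names and the per-train power profile are hypothetical) computes the coincident substation peak for a given set of restart delays:

```python
def peak_demand(start_profiles, delays):
    """Aggregate substation demand when train i's restart is delayed by
    delays[i] time steps. Staggering the restarts of a queued convoy
    trades a small service delay for a lower coincident peak."""
    horizon = max(len(p) + d for p, d in zip(start_profiles, delays))
    total = [0.0] * horizon
    for profile, d in zip(start_profiles, delays):
        for t, power in enumerate(profile):
            total[d + t] += power
    return max(total)
```

With two identical trains drawing [5, 3, 1] MW as they accelerate, simultaneous restarts give a 10 MW peak, while delaying the second train by two steps cuts the peak to 6 MW at the cost of two time steps of delay.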

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work was to quantify exposure to particles emitted by wood-fired ovens in pizzerias. Overall, 15 microenvironments were chosen and analyzed in a 14-month experimental campaign. Particle number concentration and distribution were measured simultaneously using a Condensation Particle Counter (CPC), a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). The surface area and mass distributions and concentrations, as well as the estimated lung-deposited surface area and PM1, were evaluated using the SMPS-APS system with dosimetric models, taking into account the presence of aggregates on the basis of the Idealized Aggregate (IA) theory. The fraction of inhaled particles deposited in the respiratory system and the different fractions of particulate matter were also measured by means of a Nanoparticle Surface Area Monitor (NSAM) and a photometer (DustTrak DRX), respectively. In this way, supplementary data were obtained while monitoring trends inside the pizzerias. We found that surface area and PM1 particle concentrations in pizzerias can be very high, especially compared to other critical microenvironments such as transport hubs. During pizza cooking under normal ventilation conditions, concentrations were found to be up to 74, 70 and 23 times higher than background levels for number, surface area and PM1, respectively. A key parameter is the oven shape factor, defined as the ratio between the size of the face opening in respect

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a novel peak load management scheme for rural areas. The scheme transfers certain customers onto local nonembedded generators during peak load periods to alleviate network under voltage problems. This paper develops and presents this system by way of a case study in Central Queensland, Australia. A methodology is presented for determining the best location for the nonembedded generators as well as the number of generators required to alleviate network problems. A control algorithm to transfer and reconnect customers is developed to ensure that the network voltage profile remains within specification under all plausible load conditions. Finally, simulations are presented to show the performance of the system over a typical maximum daily load profile with large stochastic load variations.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work is to develop a Demand-Side-Response (DSR) model that helps electricity end-users engage in mitigating peak demands on the electricity network in eastern and southern Australia. The proposed model comprises a programmable internet relay, a router and solid-state switches, together with software to control electricity demand at the user's premises. The software, delivered on a suitable multimedia medium (CD-ROM), curtails or shifts electric loads to the most appropriate time of day according to the underlying economic model, which is designed to maximize financial benefits to electricity consumers. The model also aims to spread the national electrical load evenly throughout the year, giving the best economic performance for electricity generation, transmission and distribution. It is applicable in the region managed by the Australian Energy Market Operator (AEMO), covering the eastern and southern Australian states and Tasmania.
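The load-shifting step of such an economic model can be sketched as placing each deferrable load in the cheapest hour of its allowed window. Everything below (function names, the tariff, the load format) is a hypothetical illustration, not the model described in the abstract:

```python
def schedule_by_tariff(prices, loads):
    """Assign each deferrable load, given as (kWh, earliest hour,
    latest hour), to the cheapest hour in its allowed window; ties go
    to the earliest hour. Returns a list of (hour, kWh) assignments."""
    return [(min(range(lo, hi + 1), key=lambda h: prices[h]), kwh)
            for kwh, lo, hi in loads]

def daily_cost(prices, plan):
    """Total energy cost of a plan under the given hourly tariff."""
    return sum(prices[h] * kwh for h, kwh in plan)
```

For a four-hour tariff of [0.50, 0.20, 0.10, 0.40] $/kWh, a 2 kWh load deferrable over the whole day lands in hour 2 and a 1 kWh load restricted to hours 0-1 lands in hour 1, for a total cost of $0.40.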

Relevance:

20.00%

Publisher:

Abstract:

We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure and the original structure on the class, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof is based on Talagrand's concentration inequality for empirical processes.