936 results for Demand control


Relevance:

30.00%

Publisher:

Abstract:

Fish cage culture is a rapid aquacultural method of producing fish, with higher yields than traditional pond culture. Species cultured by this method include Cyprinus carpio, Oreochromis niloticus, Sarotherodon galilaeus, Tilapia zillii, Clarias lazera, C. gariepinus, Heterobranchus bidorsalis, Citharinus citharus, Distichodus rostratus and Alestes dentex. However, the culture of fish in cages has problems arising either from mechanical defects of the cage or from disease. The mechanical problems, which may lead to clogged nets, toxicity and easy access by predators, depend on defects associated with the various types of netting used, including folded sieve cloth, wire, polypropylene, nylon, and galvanized welded mesh. The disease problems are of two kinds. The first is introduced diseases caused by parasites, including crustaceans (Ergasilus sp., Argulus africana and Lamprolegna sp.), helminths (Diplostomulum tregnna) and protozoans (Trichodina sp., Myxosoma sp. and Myxobolus sp.). The second is inherent diseases aggravated by the nutrient-rich environment in cages, which favours rapid blooms of bacteria, saprophytic fungi and phytoplankton, resulting in clogging of the net, stagnation of the water and low biological oxygen demand (BOD). The consequence is fish kill and the prevalence of gill rot and dropsy conditions. Recommendations on routine cage hygiene, diagnosis and control procedures to reduce fish mortality are highlighted.

Relevance:

30.00%

Publisher:

Abstract:

Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, a large number of small residential loads such as air conditioners, dishwashers, and electric vehicles are expected to participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time and thus used (in aggregate) to compensate for random fluctuations in renewable generation.

In this thesis, we propose a real-time distributed deferrable load control algorithm that reduces the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model predictive in nature: at every time step, it minimizes the expected variance-to-go using updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results show that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
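As an illustration of the receding-horizon step described above, the sketch below solves one such step as a small quadratic program for a single aggregate deferrable load. The function name, the box power limit, and the fixed-energy constraint are assumptions made for the example, not the thesis's exact formulation; only the first entry of the returned plan would be applied before re-solving with updated predictions.

```python
# Minimal sketch of one model-predictive step for deferrable load control,
# assuming a single aggregate deferrable load with simple power/energy limits.
import numpy as np
import cvxpy as cp

def solve_mpc_step(base_load, renewable_pred, energy_remaining, p_max):
    """Schedule deferrable power over the remaining horizon to flatten the
    aggregate load (base load minus predicted renewable generation)."""
    T = len(base_load)
    p = cp.Variable(T, nonneg=True)                  # deferrable power profile
    net = base_load - renewable_pred + p             # aggregate load
    objective = cp.Minimize(cp.sum_squares(net - cp.sum(net) / T))  # variance
    constraints = [p <= p_max, cp.sum(p) == energy_remaining]
    cp.Problem(objective, constraints).solve()
    return p.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = 5 + rng.normal(0, 0.5, 24)                # forecast base load (kW)
    wind = (2 + rng.normal(0, 1.0, 24)).clip(0)      # predicted renewables (kW)
    plan = solve_mpc_step(base, wind, energy_remaining=20.0, p_max=3.0)
    print(np.round(plan, 2))
```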

Relevance:

30.00%

Publisher:

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge when network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
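The following is a stylized sketch of the distributed gradient-descent idea, not a reproduction of Algorithm 1: a coordinator broadcasts the aggregate profile, and each load takes a gradient step on its own schedule and projects back onto its total-energy constraint. Names, the step size, and the equality-only projection (per-period power limits omitted) are assumptions for the illustration.

```python
# Stylized sketch of distributed gradient descent for deferrable load shaping.
# A coordinator broadcasts the aggregate profile; each load updates only its
# own schedule.
import numpy as np

def project_energy(p, energy):
    """Project a profile onto {p : sum(p) = energy} (power bounds omitted)."""
    return p + (energy - p.sum()) / len(p)

def distributed_schedule(net_base_demand, energies, iters=15, step=0.05):
    T = len(net_base_demand)
    profiles = np.array([np.full(T, e / T) for e in energies])    # flat start
    for _ in range(iters):
        aggregate = net_base_demand + profiles.sum(axis=0)        # broadcast
        grad = 2.0 * (aggregate - aggregate.mean())  # grad of squared deviation
        for n in range(len(energies)):                            # local updates
            profiles[n] = project_energy(profiles[n] - step * grad, energies[n])
    return profiles

if __name__ == "__main__":
    d = np.array([4.0, 5.0, 6.0, 3.0, 2.0, 4.0])    # base load minus renewables
    plans = distributed_schedule(d, energies=[3.0, 2.0])
    print(np.round(d + plans.sum(axis=0), 2))        # flattened aggregate
```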

We then extend Algorithm 1 to a real-time setting where deferrable loads arrive over time and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model predictive control: it uses updated predictions of renewable generation as the true values, and computes a pseudo load to represent future deferrable loads. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and one that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37- and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternating-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
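For concreteness, the toy script below writes down the SOCP relaxation of the branch flow (DistFlow) equations for a two-bus, single-phase radial feeder. The impedances, load values, and voltage limits are invented, and this is a generic illustration of the relaxation rather than the BFM-SDP formulation or the modified load control problems studied in the thesis.

```python
# Toy SOCP relaxation of the branch flow (DistFlow) model for a two-bus feeder:
# substation (bus 0) -> line (r, x) -> load bus 1.  Illustrative values only.
import cvxpy as cp

r, x = 0.01, 0.02          # line impedance (per unit)
pL, qL = 0.8, 0.3          # load at bus 1 (per unit)
v0 = 1.0                   # squared substation voltage

P = cp.Variable()              # real power sent from bus 0
Q = cp.Variable()              # reactive power sent from bus 0
ell = cp.Variable(nonneg=True)  # squared line current
v1 = cp.Variable()             # squared voltage at bus 1

constraints = [
    P - r * ell == pL,                                    # real power balance
    Q - x * ell == qL,                                    # reactive power balance
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * ell,  # voltage drop
    cp.quad_over_lin(cp.hstack([P, Q]), v0) <= ell,       # relaxed ell*v0 >= P^2+Q^2
    v1 >= 0.95**2, v1 <= 1.05**2,                         # voltage limits
]
prob = cp.Problem(cp.Minimize(r * ell), constraints)      # minimize line losses
prob.solve()
print(prob.status, float(ell.value), float(v1.value))
```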

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 always produces a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70-fold speedup over the convex relaxation approach, at the cost of a suboptimality gap within numerical precision.
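A small sketch of the kind of linearized power flow that such gradient estimates could rely on is given below, for a single-phase line feeder with illustrative data; it shows only the loss-neglecting (LinDistFlow-style) approximation, not Algorithm 9 itself, and the feeder data are assumptions.

```python
# Sketch of the linearized DistFlow (LinDistFlow) approximation on a radial
# line feeder: losses neglected, so branch flows are simple downstream sums
# and squared voltages drop linearly.  Feeder data below are illustrative.
import numpy as np

def lindistflow_voltages(r, x, p, q, v0=1.0):
    """Squared voltage at each bus of a line feeder 0-1-...-N.
    r[i], x[i]: impedance of the line feeding bus i+1;
    p[i], q[i]: net load at bus i+1 (per unit)."""
    # With losses neglected, the flow on each line equals the sum of all
    # downstream loads (reversed cumulative sum).
    P = np.cumsum(p[::-1])[::-1]
    Q = np.cumsum(q[::-1])[::-1]
    v = np.empty(len(p) + 1)
    v[0] = v0
    for i in range(len(p)):
        v[i + 1] = v[i] - 2 * (r[i] * P[i] + x[i] * Q[i])
    return v

if __name__ == "__main__":
    r = np.array([0.01, 0.01, 0.02])
    x = np.array([0.02, 0.02, 0.04])
    p = np.array([0.3, 0.4, 0.2])     # real loads at buses 1..3
    q = np.array([0.1, 0.1, 0.1])     # reactive loads at buses 1..3
    print(np.sqrt(lindistflow_voltages(r, x, p, q)))  # voltage magnitudes
```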

Relevance:

30.00%

Publisher:

Abstract:

It has been predicted that the global demand for fish for human consumption will increase by more than 50% over the next 15 years. The FAO has projected that the increase in supply will originate primarily from marine fisheries, aquaculture and, to a lesser extent, inland fisheries, but with a commensurate price increase. However, there are constraints to increased production in both marine and inland fisheries, such as overfishing, overexploitation, limited potential for increase and environmental degradation due to industrialization. The author sees aquaculture as having the greatest potential for future expansion. Aquaculture practices vary depending on culture, environment, society and sources of fish. Inputs are generally low-cost and ecologically efficient, and the majority of aquaculture ventures are small-scale and family operated. In the future, advances in technology, genetic improvement of cultured species, and improvements in nutrition, disease management, reproduction control and environmental management are expected, along with opportunities for complementary activities with agriculture, industrial and wastewater linkages. The main constraints to aquaculture are reduced access to suitable land and good quality water due to pollution and habitat degradation. Aquaculture itself carries minimal potential for aquatic pollution. State participation in fisheries production has not proven to be the best way to promote the fisheries sector. The role of governments is increasingly seen as creating an environment in which economic sectors can make an optimum contribution, through support in areas such as infrastructure, research, training and extension, and a legal framework. The author feels that a holistic approach integrating the natural and social sciences is called for when fisheries policy is examined.

Relevance:

30.00%

Publisher:

Abstract:

An experiment was conducted to evaluate the effects of controlling the carbon/nitrogen ratio (C/N ratio), by adding low-cost carbohydrate to the water column, on water quality and pond ecology in a freshwater prawn Macrobrachium rosenbergii post-larvae nursing system. Two levels of dietary protein, 20% and 35%, without carbohydrate addition ('P20' and 'P35') and with carbohydrate addition ('P20+CH' and 'P35+CH') were compared in small ponds of 40 m² stocked with 20 post-larvae (0.021 ± 0.001 g) per m². Maize flour was used as the low-cost carbohydrate and applied to the water column after the first feeding of the day. The addition of carbohydrate significantly reduced (p < 0.05) ammonia-nitrogen (NH₃-N) and nitrite-nitrogen (NO₂-N) in the water of the P20+CH and P35+CH treatments. It significantly increased (p < 0.05) the total heterotrophic bacteria (THB) population in both water and sediment. Fifty-nine genera of plankton were identified, belonging to the Bacillariophyceae (11), Chlorophyceae (21), Cyanophyceae (7), Dinophyceae (1), Rotifera (7) and Crustacea (9), with no significant difference (p > 0.05) in total phytoplankton and zooplankton among the treatments. Prawn survival was significantly lowest (p < 0.05) in P20, and no significant difference (p > 0.05) was observed between the P20+CH and P35 treatments. Control of the C/N ratio by adding low-cost carbohydrate to the pond water column benefited freshwater prawn nursing in three ways: (1) increased heterotrophic bacterial growth supplied bacterial protein that augmented post-larvae growth performance, (2) the demand for supplemental feed protein was reduced, with a subsequent reduction in feed cost, and (3) toxic NH₃-N and NO₂-N levels in the pond nursing system were reduced.

Relevance:

30.00%

Publisher:

Abstract:

High-power converters usually need longer dead-times than their lower-power counterparts, together with a lower switching frequency. In addition, because of the complicated assembly layout and severe variations in parasitics, the conventional approach of adjusting or compensating the dead-time specifically for each high-power converter is less effective in practice, and the process is usually time-consuming and bespoke. For general applications, minimising or eliminating dead-time within the gate drive technology is a desirable solution. With the growing acceptance of power electronics building blocks (PEBB) and intelligent power modules (IPM), gate drives with intelligent functions are in demand. Smart functions including dead-time elimination/minimisation can improve modularity, flexibility and reliability. In this paper, a dead-time minimisation technique using an Active Voltage Control (AVC) gate drive is presented. © 2012 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

Active Voltage Control (AVC) is an implementation of classic proportional-derivative (PD) control and multi-loop feedback control that forces an IGBT to follow a pre-set switching trajectory. Previously, AVC was mainly used to control series-connected IGBTs in order to achieve voltage balance between the devices. In this paper, the nonlinear IGBT turn-off transient is discussed further, and the turn-off of a single IGBT under AVC is further optimised to meet the demands of Power Electronic Building Block (PEBB) applications. © 2013 IEEE.
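As a purely conceptual illustration of PD tracking of a pre-set trajectory, the normalized discrete-time loop below drives a crude stand-in plant along a reference ramp; the plant, gains, and units are placeholder assumptions and bear no relation to an actual IGBT model or to the AVC gate-drive hardware.

```python
# Conceptual sketch of PD tracking of a pre-set reference trajectory in
# normalized discrete time.  The integrator-like "plant", the gains and the
# ramp are placeholders, not a model of an IGBT or of the AVC circuit.
def pd_track(reference, kp=0.8, kd=0.3):
    y, prev_err, out = 0.0, 0.0, []
    for ref in reference:
        err = ref - y
        u = kp * err + kd * (err - prev_err)   # discrete PD control action
        prev_err = err
        y += u                                  # simple stand-in plant
        out.append(y)
    return out

if __name__ == "__main__":
    ramp = [min(k / 30.0, 1.0) for k in range(60)]   # pre-set 0 -> 1 trajectory
    trace = pd_track(ramp)
    print(round(trace[-1], 3))                       # settles near 1.0
```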

Relevance:

30.00%

Publisher:

Abstract:

A field programmable gate array (FPGA)-based predictive controller for a spacecraft rendezvous manoeuvre is presented. A linear time-varying prediction model is used to accommodate elliptical orbits, and a variable prediction horizon is used to facilitate finite-time completion of manoeuvres. The resulting constrained optimisation problems are solved using a primal-dual interior point algorithm. The majority of the computational demand is in solving a set of linear equations at each iteration of this algorithm. To accelerate this operation, a custom circuit is implemented, using a combination of Mathworks HDL Coder and Xilinx System Generator for DSP, and used as a peripheral to a MicroBlaze soft-core processor. The system is demonstrated in closed loop by linking the FPGA with a simulation of the plant dynamics running in Simulink on a PC, using Ethernet. © 2013 EUCA.

Relevance:

30.00%

Publisher:

Abstract:

Copyright © 2014 John Wiley & Sons, Ltd. A field programmable gate array (FPGA)-based model predictive controller for two phases of spacecraft rendezvous is presented. Linear time-varying prediction models are used to accommodate elliptical orbits, and a variable prediction horizon is used to facilitate finite-time completion of the longer-range manoeuvres, whilst a fixed and receding prediction horizon is used for fine-grained tracking at close range. The resulting constrained optimisation problems are solved using a primal-dual interior point algorithm. The majority of the computational demand is in solving a system of simultaneous linear equations at each iteration of this algorithm. To accelerate these operations, a custom circuit is implemented, using a combination of Mathworks HDL Coder and Xilinx System Generator for DSP, and used as a peripheral to a MicroBlaze soft-core processor on the FPGA, on which the remainder of the system is implemented. Certain logic that can be hard-coded for fixed-size problems is implemented to be configurable online, in order to accommodate the varying problem sizes associated with the variable prediction horizon. The system is demonstrated in closed loop by linking the FPGA with a simulation of the spacecraft dynamics running in Simulink on a PC, using Ethernet. Timing comparisons indicate that the custom implementation is substantially faster than pure embedded software-based interior point methods running on the same MicroBlaze, and could be competitive with a pure custom hardware implementation.
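The snippet below sketches the kind of linear system solved at each iteration of a primal-dual interior point method for a small inequality-constrained QP; assembling and solving this Newton/KKT system is the operation the abstract identifies as dominating the computation. The dense formulation, problem data, and function name are illustrative assumptions, not the paper's solver or its FPGA circuit.

```python
# Sketch of the per-iteration linear solve inside a primal-dual interior point
# method for a small QP:  minimize 0.5 z'Hz + f'z  subject to  Gz <= h.
# Matrices and the single step shown are illustrative only.
import numpy as np

def ipm_newton_step(H, f, G, h, z, lam, s, sigma=0.1):
    """One Newton step on the perturbed KKT conditions (no line search)."""
    n, m = len(z), len(h)
    mu = float(s @ lam) / m                      # duality measure
    r_dual = H @ z + f + G.T @ lam               # stationarity residual
    r_prim = G @ z + s - h                       # primal feasibility residual
    r_cent = s * lam - sigma * mu                # relaxed complementarity
    # Assemble the full (dense) KKT matrix in variables (dz, dlam, ds).
    K = np.block([
        [H,                G.T,              np.zeros((n, m))],
        [G,                np.zeros((m, m)), np.eye(m)],
        [np.zeros((m, n)), np.diag(s),       np.diag(lam)],
    ])
    rhs = -np.concatenate([r_dual, r_prim, r_cent])
    step = np.linalg.solve(K, rhs)               # the dominant cost per iteration
    return step[:n], step[n:n + m], step[n + m:]

if __name__ == "__main__":
    H = np.diag([2.0, 2.0]); f = np.array([-2.0, -5.0])
    G = np.eye(2); h = np.array([1.0, 1.5])
    z, lam, s = np.zeros(2), np.ones(2), np.ones(2)
    dz, dlam, ds = ipm_newton_step(H, f, G, h, z, lam, s)
    print(np.round(dz, 3))
```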

Relevance:

30.00%

Publisher:

Abstract:

A direct method for measuring the 5-day biochemical oxygen demand (BOD5) of aquaculture samples that does not require sample dilution or bacterial and nutrient enrichment was evaluated. The regression coefficient (R²) between the direct method and the standard method for the analysis of 32 samples from catfish ponds was 0.996. The slope of the regression line did not differ from 1.0, nor the Y-intercept from 0.0, at P = 0.05. Thus, there was almost perfect agreement between the two methods. The control limits (three standard deviations of the mean) for a standard solution containing 15 mg/L each of glutamic acid and glucose were 17.4 and 20.4 mg/L. The precision of the two methods, based on eight replicate analyses of four pond water samples, did not differ at P = 0.05. © 2005 Elsevier B.V. All rights reserved.
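The short script below illustrates, with made-up numbers, the comparison statistics the abstract reports: an ordinary least-squares fit of the direct method against the standard method (slope, intercept, R²) and mean ± 3 standard deviation control limits for a check standard. The data are invented, not the study's measurements.

```python
# Sketch of the comparison statistics reported in the abstract: an ordinary
# least-squares fit of direct-method BOD5 against the standard method, plus
# mean +/- 3*sd control limits for a check standard.  Data below are made up.
import numpy as np

standard = np.array([2.1, 3.4, 5.0, 6.8, 8.2, 10.1])   # standard method (mg/L)
direct   = np.array([2.0, 3.5, 4.9, 6.9, 8.1, 10.2])   # direct method (mg/L)

slope, intercept = np.polyfit(standard, direct, 1)
pred = slope * standard + intercept
r2 = 1 - np.sum((direct - pred) ** 2) / np.sum((direct - direct.mean()) ** 2)
print(f"slope={slope:.3f}  intercept={intercept:.3f}  R^2={r2:.3f}")

# Control limits for repeated analyses of a glutamic acid/glucose standard.
check_std = np.array([18.6, 19.2, 18.9, 19.5, 18.8, 19.1, 19.3, 18.7])  # mg/L
mean, sd = check_std.mean(), check_std.std(ddof=1)
print(f"control limits: {mean - 3 * sd:.1f} to {mean + 3 * sd:.1f} mg/L")
```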

Relevance:

30.00%

Publisher:

Abstract:

With the increased use of "Virtual Machines" (VMs) as vehicles that isolate applications running on the same host, it is necessary to devise techniques that enable multiple VMs to share underlying resources both fairly and efficiently. To that end, one common approach is to deploy complex resource management techniques in the hosting infrastructure. Alternatively, in this paper, we advocate the use of self-adaptation in the VMs themselves, based on feedback about resource usage and availability. Accordingly, we define a "Friendly" VM (FVM) to be a virtual machine that adjusts its demand for system resources so that they are both efficiently and fairly allocated among competing FVMs. These properties are ensured using one of many provably convergent control rules, such as AIMD (additive-increase/multiplicative-decrease). By adopting this distributed, application-based approach to resource management, it is not necessary to make assumptions about the underlying resources or about the requirements of the FVMs competing for them. To demonstrate the elegance and simplicity of our approach, we present a prototype implementation of our FVM framework in User-Mode Linux (UML), an implementation that consists of fewer than 500 lines of code changes to UML. We present an analytic, control-theoretic model of FVM adaptation that establishes its convergence and fairness properties. These properties are also backed up with experimental results using our prototype FVM implementation.
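A minimal sketch of the AIMD rule mentioned above is shown below: two consumers of a shared resource increase their demands additively while there is headroom and back off multiplicatively when the aggregate exceeds capacity, converging toward a fair share. The capacity, parameters, and overload signal are assumptions for the illustration, not the FVM/UML implementation.

```python
# Minimal AIMD sketch: two "friendly" consumers adapt their demands for a
# shared resource, increasing additively when there is headroom and backing
# off multiplicatively when aggregate demand exceeds capacity.
CAPACITY = 100.0
ALPHA, BETA = 2.0, 0.5          # additive increment, multiplicative factor

demands = [10.0, 70.0]           # initial (unfair) demands of two FVMs
for step in range(50):
    overloaded = sum(demands) > CAPACITY
    demands = [d * BETA if overloaded else d + ALPHA for d in demands]

print([round(d, 1) for d in demands])   # both settle near a fair share
```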

Relevance:

30.00%

Publisher:

Abstract:

MOTIVATION: Technological advances that allow routine identification of high-dimensional risk factors have led to high demand for statistical techniques that enable full utilization of these rich sources of information in genetics studies. Variable selection for censored outcome data, as well as control of false discoveries (i.e. inclusion of irrelevant variables) in the presence of high-dimensional predictors, presents serious challenges. This article develops a computationally feasible method based on boosting and stability selection. Specifically, we modified component-wise gradient boosting to improve computational feasibility and introduced random permutation in stability selection for controlling false discoveries. RESULTS: We have proposed a high-dimensional variable selection method that incorporates stability selection to control false discovery. Comparisons between the proposed method and the commonly used univariate and Lasso approaches for variable selection reveal that the proposed method yields fewer false discoveries. The proposed method is applied to study the associations of 2339 common single-nucleotide polymorphisms (SNPs) with overall survival among cutaneous melanoma (CM) patients. The results confirm that BRCA2 pathway SNPs are likely to be associated with overall survival, as reported in previous literature. Moreover, we have identified several new Fanconi anemia (FA) pathway SNPs that are likely to modulate the survival of CM patients. AVAILABILITY AND IMPLEMENTATION: The related source code and documents are freely available at https://sites.google.com/site/bestumich/issues. CONTACT: yili@umich.edu.
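The sketch below gives a stylized picture of stability selection wrapped around a component-wise L2 boosting base learner on toy regression data. It omits the censored survival modelling and the random-permutation step that the article actually introduces, so it should be read only as an illustration of the selection-frequency idea; all names and parameters are assumptions.

```python
# Stylized sketch of stability selection wrapped around component-wise L2
# boosting, on toy uncensored regression data (illustration only).
import numpy as np

def componentwise_boost(X, y, steps=50, nu=0.1):
    """Return the set of predictor indices ever selected by L2 boosting."""
    resid, selected = y - y.mean(), set()
    for _ in range(steps):
        scores = X.T @ resid                      # correlation with residual
        j = int(np.argmax(np.abs(scores)))        # best single component
        beta_j = scores[j] / (X[:, j] @ X[:, j])
        resid -= nu * beta_j * X[:, j]            # shrunken update
        selected.add(j)
    return selected

def stability_selection(X, y, runs=100, threshold=0.6, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(runs):
        idx = rng.choice(n, size=n // 2, replace=False)   # subsample half
        for j in componentwise_boost(X[idx], y[idx]):
            counts[j] += 1
    return np.where(counts / runs >= threshold)[0]         # stable variables

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 30))
    y = 2 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(size=200)
    print(stability_selection(X, y, rng=rng))               # expect {3, 7}-ish
```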

Relevance:

30.00%

Publisher:

Abstract:

Extending the work presented in Prasad et al. (IEEE Proceedings on Control Theory and Applications, 147, 523-37, 2000), this paper reports a hierarchical nonlinear physical model-based control strategy that accounts for the problems arising from the complex dynamics of the drum level and governor valve, and demonstrates its effectiveness in plant-wide disturbance handling. The strategy incorporates a two-level control structure consisting of lower-level conventional PI regulators and a higher-level nonlinear physical model predictive controller (NPMPC), mainly for set-point manoeuvring. The lower-level PI loops help stabilise the unstable drum-boiler dynamics and allow faster governor valve action for power and grid-frequency regulation. The higher-level NPMPC provides an optimal load demand (or set-point) transition by effective handling of plant-wide interactions and system disturbances. The strategy has been tested in a simulation of a 200-MW oil-fired power plant at Ballylumford in Northern Ireland. A novel approach is devised to test the disturbance rejection capability in severe operating conditions. Low-frequency disturbances were created by making random changes in radiation heat flow on the boiler side, while the condenser vacuum fluctuated randomly on the turbine side. In order to simulate high-frequency disturbances, pulse-type load disturbances were made to strike at instants that are not integer multiples of the NPMPC sampling period. Impressive results have been obtained during both types of system disturbance and at extremely high rates of load change, right across the operating range. These results compared favourably with those from a conventional state-space generalized predictive control (GPC) method designed under similar conditions.

Relevance:

30.00%

Publisher:

Abstract:

This paper demonstrates a set of necessary conditions that should generate unbiased, internally consistent estimates of willingness to pay (WTP) from a double referendum mechanism. These conditions are also sufficient for demand revelation in an experimental laboratory environment. However, the control over the mechanism achieved in the lab may not be transferable to the field, and WTP estimates derived from field surveys may remain biased. © 2008 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Dwindling fossil fuel resources and pressures to reduce greenhouse gas (GHG) emissions will result in a more diverse range of generation portfolios for future electricity systems. Irrespective of the portfolio mix, the overarching requirement for all electricity suppliers and system operators is that supply instantaneously meets demand and that robust operating standards are maintained to ensure a consistent supply of high-quality electricity to end-users. Therefore, all electricity market participants will ultimately need to use a variety of tools to balance the power system. Thus, the role of demand-side management (DSM) with energy storage will be paramount in integrating future diverse generation portfolios. Electric water heating (EWH) has been studied previously, particularly at the domestic level, to provide load control and peak shaving and to benefit end-users financially through lower bills, particularly in vertically integrated monopolies. In this paper, a continuous Direct Load Control (DLC) EWH algorithm is applied in a liberalized market environment, using actual historical electricity system and market data, to examine the potential energy savings, cost reductions and electricity system operational improvements.
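As a generic illustration of shifting electric water heating toward low-price periods, the sketch below schedules heater power against an invented price and hot-water-draw profile subject to a simple tank energy balance. It is a plain linear program written for this example, not the paper's continuous DLC algorithm, and all parameters are assumptions.

```python
# Stylized sketch of price-responsive electric water heating: choose heater
# power over the day to minimise cost subject to a simple tank energy balance.
# Prices, demand profile and tank parameters below are invented.
import numpy as np
import cvxpy as cp

T = 24
price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, T))   # price per kWh
hot_water_draw = np.array([0.2] * 6 + [1.0] * 3 + [0.3] * 9
                          + [1.2] * 3 + [0.3] * 3)            # kWh per hour
p_max, eta = 3.0, 0.95              # heater rating (kW), efficiency
e_min, e_max, e0 = 2.0, 10.0, 6.0   # tank energy limits and initial state (kWh)

p = cp.Variable(T, nonneg=True)                   # heater power each hour
e = e0 + cp.cumsum(eta * p - hot_water_draw)      # stored energy trajectory
constraints = [p <= p_max, e >= e_min, e <= e_max]
prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(price, p))), constraints)
prob.solve()
print(np.round(p.value, 2))                       # heating shifted to cheap hours
```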