958 results for EFFICIENT SIMULATION
Abstract:
This paper describes an assessment of the nitrogen and phosphorus dynamics of the River Kennet in the south-east of England. The Kennet catchment (1200 km²) is a predominantly groundwater-fed river system impacted by agricultural and sewage sources of nutrient (nitrogen and phosphorus) pollution. The results from a suite of simulation models are integrated to assess the key spatial and temporal variations in the nitrogen (N) and phosphorus (P) chemistry, and the influence of changes in phosphorus inputs from a Sewage Treatment Works on macrophyte and epiphyte growth patterns. The models used are the Export Coefficient model, the Integrated Nitrogen in Catchments model, and a new model of in-stream phosphorus and macrophyte dynamics: the 'Kennet' model. The paper concludes with a discussion of the present state of knowledge regarding water quality functioning, future research needs in environmental modelling, and the use of models as management tools for large, nutrient-impacted riverine systems. (C) 2003 IMACS. Published by Elsevier B.V. All rights reserved.
Abstract:
This paper presents the theoretical development of a nonlinear adaptive filter based on the concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter; in contrast to such conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints, so the densities created are of exponential type and depend on a finite number of parameters. The filtering step yields recursive equations involving these parameters, and the update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and its results prove superior to those of the extended Kalman filter and of a class of nonlinear filters based on partitioning algorithms.
Abstract:
The Chartered Institution of Building Services Engineers (CIBSE) produced a technical memorandum (TM36) presenting research on future climate impacts on building energy use and thermal comfort. One climate projection for each of four CO2 emissions scenarios was used in TM36, providing a deterministic outlook. As part of the UK Climate Impacts Programme (UKCIP), probabilistic climate projections are being studied in relation to building energy simulation techniques. Including uncertainty in climate projections is considered an important advance in climate impacts modelling and is included in the latest UKCIP data (UKCP09). Incorporating the stochastic nature of these new climate projections into building energy modelling requires a significant increase in data handling and careful statistical interpretation of the results to provide meaningful conclusions. This paper compares the results of building energy simulations driven by deterministic and probabilistic climate data, based on two case-study buildings: (i) a mixed-mode office building with exposed thermal mass and (ii) a mechanically ventilated, lightweight office building. Building (i) represents an energy-efficient design that provides passive and active measures to maintain thermal comfort; building (ii) relies entirely on mechanical means for heating and cooling, with its lightweight construction raising concern over increased cooling loads in a warmer climate. Devising an effective probabilistic approach highlighted greater uncertainty in predicting building performance. Results indicate that the range of calculated quantities depends not only on the building type but also, strongly, on the performance parameters of interest; uncertainty is likely to be particularly marked for thermal comfort in naturally ventilated buildings.
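The probabilistic workflow described above can be sketched as a Monte Carlo loop over sampled weather series: a performance metric (here cooling degree days, with entirely synthetic temperature data standing in for UKCP09 samples) is computed per projection and summarised as percentiles rather than a single deterministic value.

```python
import numpy as np

def cooling_degree_days(daily_mean_temp, base=22.0):
    """Sum of degrees by which daily mean temperature exceeds the base."""
    excess = daily_mean_temp - base
    return float(np.sum(excess[excess > 0]))

rng = np.random.default_rng(0)
# 100 equally plausible daily temperature series (365 days each),
# standing in for sampled probabilistic climate projections.
samples = rng.normal(loc=16.0, scale=6.0, size=(100, 365))

cdd = np.array([cooling_degree_days(s) for s in samples])
# Report the spread of outcomes rather than one deterministic value.
p10, p50, p90 = np.percentile(cdd, [10, 50, 90])
```

The same pattern applies to any simulated performance factor: run the building model once per climate sample, then interpret the resulting distribution.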
Abstract:
With the fast development of the Internet, wireless communications and semiconductor devices, home networking has received significant attention. Consumer products can collect and transmit various types of data in the home environment. Typical consumer sensors are often equipped with tiny, irreplaceable batteries, so it is of the utmost importance to design energy-efficient algorithms that prolong the home network lifetime and reduce the number of devices going to landfill. Sink mobility is an important technique for improving home network performance, including energy consumption, lifetime and end-to-end delay, and it can largely mitigate the hot spots near the sink node. The selection of an optimal moving trajectory for the sink node(s) is an NP-hard problem; jointly optimizing routing algorithms with the mobile sink moving strategy is therefore a significant and challenging research issue. The influence of multiple static sink nodes on energy consumption under different network scales is first studied, and an Energy-efficient Multi-sink Clustering Algorithm (EMCA) is proposed and tested. Then, the influence of mobile sink velocity, position and number on network performance is studied and a Mobile-sink based Energy-efficient Clustering Algorithm (MECA) is proposed. Simulation results validate the performance of the two proposed algorithms, which can be deployed in a consumer home network environment.
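A generic energy-aware clustering heuristic in the spirit of the multi-sink idea (a hedged sketch, not the EMCA or MECA algorithms themselves) might cluster nodes spatially and pick the highest-residual-energy node in each cluster as its head:

```python
import numpy as np

def select_cluster_heads(positions, energy, k, iters=20, seed=0):
    """Plain k-means clustering of node positions; the node with the
    highest residual energy in each cluster becomes its head."""
    rng = np.random.default_rng(seed)
    centers = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(positions[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = positions[labels == c].mean(axis=0)
    heads = []
    for c in range(k):
        members = np.flatnonzero(labels == c)
        if members.size:
            heads.append(int(members[energy[members].argmax()]))
    return labels, heads

# Illustrative deployment: 40 nodes on a 100 m x 100 m area.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(40, 2))
residual = rng.uniform(0.5, 2.0, size=40)
labels, heads = select_cluster_heads(pos, residual, k=4)
```

Cluster heads would then forward aggregated traffic toward the nearest (static or mobile) sink; rotating the head role as energy depletes is what balances lifetime across the network.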
Abstract:
In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under the constraints of the overall transmit power of each remote access unit (RAU), proportional fairness data rates, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm, which separates subcarrier allocation and power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which admits a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental trade-off between energy- and spectral-efficient transmission designs.
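The fractional-to-subtractive transformation the paper exploits is the core of Dinkelbach-style fractional programming: maximizing R(p)/(p_c + p) is reduced to repeatedly maximizing R(p) - q(p_c + p) and updating q. A minimal single-link sketch (assuming an illustrative rate log2(1 + g·p) and fixed circuit power p_c, not the paper's multiuser model):

```python
import math

def max_energy_efficiency(g, p_max, p_c, tol=1e-9, iters=100):
    """Dinkelbach-style iteration: the fractional objective
    rate(p)/(p_c + p) is solved via the subtractive form
    rate(p) - q*(p_c + p), with q updated to the current EE."""
    rate = lambda p: math.log2(1.0 + g * p)
    q = 0.0
    for _ in range(iters):
        # Closed-form maximiser of the concave subtractive problem
        # (a water-filling-style level for this scalar case).
        if q > 0:
            p = min(p_max, max(0.0, 1.0 / (q * math.log(2)) - 1.0 / g))
        else:
            p = p_max
        f = rate(p) - q * (p_c + p)   # subtractive objective value
        q = rate(p) / (p_c + p)       # updated energy efficiency
        if abs(f) < tol:              # f -> 0 at the optimum
            break
    return q, p

ee, p_opt = max_energy_efficiency(g=10.0, p_max=1.0, p_c=0.1)
```

Note that the EE-optimal power is typically below p_max: transmitting at full power maximizes rate (spectral efficiency) but not bits per joule, which is exactly the trade-off the paper illustrates.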
Abstract:
This paper investigates the challenge of representing structural differences in river channel cross-section geometry for regional to global scale river hydraulic models and the effect this can have on simulations of wave dynamics. Classically, channel geometry is defined using data, yet at larger scales the necessary information and model structures do not exist to take this approach. We therefore propose a fundamentally different approach where the structural uncertainty in channel geometry is represented using a simple parameterization, which could then be estimated through calibration or data assimilation. This paper first outlines the development of a computationally efficient numerical scheme to represent generalised channel shapes using a single parameter, which is then validated using a simple straight channel test case and shown to predict wetted perimeter to within 2% for the channels tested. An application to the River Severn, UK is also presented, along with an analysis of model sensitivity to channel shape, depth and friction. The channel shape parameter was shown to improve model simulations of river level, particularly for more physically plausible channel roughness and depth parameter ranges. Calibrating channel Manning’s coefficient in a rectangular channel provided similar water level simulation accuracy in terms of Nash-Sutcliffe efficiency to a model where friction and shape or depth were calibrated. However, the calibrated Manning coefficient in the rectangular channel model was ~2/3 greater than the likely physically realistic value for this reach and this erroneously slowed wave propagation times through the reach by several hours. Therefore, for large scale models applied in data sparse areas, calibrating channel depth and/or shape may be preferable to assuming a rectangular geometry and calibrating friction alone.
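As an illustration of single-parameter channel shapes (a hypothetical power-law family, not necessarily the scheme developed in the paper), the wetted perimeter can be computed numerically for cross-sections ranging from triangular (s = 1) to near-rectangular (large s):

```python
import numpy as np

def wetted_perimeter(width, depth, s, n=2000):
    """Arc length of a power-law cross-section y = depth*(2|x|/width)**s,
    integrated numerically over one half and doubled by symmetry.
    s = 1 gives a triangular channel; large s approaches rectangular."""
    x = np.linspace(0.0, width / 2.0, n)
    y = depth * (2.0 * x / width) ** s
    dx = np.diff(x)
    dy = np.diff(y)
    return float(2.0 * np.sum(np.hypot(dx, dy)))

# A 10 m wide, 2 m deep channel across the shape family.
wp_tri = wetted_perimeter(10.0, 2.0, s=1.0)    # triangular
wp_rect = wetted_perimeter(10.0, 2.0, s=200.0) # near-rectangular
```

Because wetted perimeter feeds directly into the hydraulic radius and hence conveyance, a single shape parameter like s is the kind of quantity that could be calibrated or assimilated where surveyed geometry is unavailable.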
Abstract:
This paper assesses the performance of a thermal simulation program (Arquitrop) in different households in the city of Sao Paulo, Brazil. The households were selected from the Wheezing Project, which followed up children under 2 years old to monitor the occurrence of respiratory diseases. The results show that in all three study households there is a good approximation between the observed and the simulated indoor temperatures. A fairly consistent and realistic relationship between the simulated indoor and the outdoor temperatures was also observed, indicating that Arquitrop is an efficient estimator and a good representation of the thermal behavior of households in the city of Sao Paulo. The worst simulation corresponds to the poorest type of construction; this may be explained by the bad quality of that construction, which Arquitrop could not simulate adequately.
Abstract:
This master thesis presents a new technological combination of two environmentally friendly sources of energy to provide domestic hot water (DHW) and space heating. Solar energy is used for space heating and DHW production via PV modules, which supply direct current directly to electrical heating elements inside a water storage tank. A ground source heat pump (GSHP) system, as another source of renewable energy, provides heat to the same water storage tank for DHW and space heating. These two renewable sources are combined in this case study to obtain a more efficient system, which will reduce the amount of electricity consumed by the GSHP system.

The key aim of this study is to simulate and calculate the amount of electrical energy that can be expected to be produced by PV modules already assembled on a house in Vantaa, southern Finland. This energy is intended to complement the original GSHP system in producing hot water for the house, so that the amount of electrical energy purchased from the grid is reduced and the compressor in the GSHP needs fewer starts, which would reduce the heating cost of the GSHP system for space heating and hot water.

The energy produced by the PV arrays in three different circuits will be fed directly to three electrical heating elements in the water storage tank of the existing system to satisfy the demand of the heating elements. The excess energy can be used to heat the water in the storage tank to some extent, which leads to a reduction of electricity consumption by the different components of the GSHP system.

To increase the efficiency of the existing hybrid system, different PV configurations have been optimized and the results compared. Optimization of the arrays on the southern and western walls shows a DC power increase of 298 kWh/year compared with the existing PV configurations. If the intention is to feed AC power to the components of the GSHP system, optimization of the arrays on the western roof shows a yearly AC production of 1,646 kWh, assuming no overproduction by the PV modules during the summer months. This means the optimized PV systems will be able to cover a larger part of the summer demand compared with the existing system.
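A first-order estimate of array DC output of the kind used in such feasibility studies multiplies plane-of-array irradiation by area, module efficiency and a lumped performance ratio (all numbers below are hypothetical, not the Vantaa case-study values):

```python
def pv_dc_output(irradiance_kwh_m2, area_m2, efficiency, performance_ratio=0.8):
    """Estimated annual DC energy (kWh) from plane-of-array irradiation,
    module area, nominal module efficiency and a lumped performance
    ratio covering temperature, wiring and mismatch losses."""
    return irradiance_kwh_m2 * area_m2 * efficiency * performance_ratio

# Illustrative figure for a small array (hypothetical inputs).
annual_kwh = pv_dc_output(irradiance_kwh_m2=1100.0, area_m2=10.0,
                          efficiency=0.18)
```

Detailed simulation tools refine each factor (tilt and azimuth dependence of irradiation, temperature-dependent efficiency), which is why optimizing array orientation can change the annual yield by hundreds of kWh, as the thesis reports.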
Abstract:
The Intelligent Algorithm is designed for a hybrid photovoltaic/fuel cell system using a battery source. Its main function is to automate the hybrid system so that it takes decisions according to the environmental conditions, utilizing photovoltaic/solar energy and, in its absence, fuel cell energy. To enhance the performance of the fuel cell and photovoltaic cell, a battery bank is used, which acts as a buffer and supplies continuous current to the load. A fuzzy logic based controller was used to develop the system, because fuzzy controllers are feasible both for controlling the decision process and for predicting the availability of energy on the basis of current photovoltaic and battery conditions. The Intelligent Algorithm is designed to optimize the performance of the system and to select the best available energy source(s) with regard to the input parameters; the controller predicts the use of available energy resources and turns on the particular source needed for efficient energy utilization. The fuzzy logic based controller is designed in the Matlab-Simulink environment: first the fuzzy rules were built, then a MATLAB based simulation system was designed and implemented, and finally the whole proposed model was simulated and tested for the accuracy of its design and performance.
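A minimal rule-based selector in the spirit of such a fuzzy controller (triangular memberships and purely illustrative thresholds; the actual rule base described above was built in Matlab-Simulink) could look like:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_source(irradiance_w_m2, battery_soc_pct):
    """Fire three illustrative rules and pick the strongest one:
    sun high -> photovoltaic; sun low & SOC high -> battery;
    sun low & SOC low -> fuel cell."""
    sun_high = tri(irradiance_w_m2, 300, 800, 1300)
    sun_low = tri(irradiance_w_m2, -1, 0, 400)
    soc_high = tri(battery_soc_pct, 40, 100, 161)
    soc_low = tri(battery_soc_pct, -1, 0, 50)
    rules = {
        "photovoltaic": sun_high,
        "battery": min(sun_low, soc_high),
        "fuel_cell": min(sun_low, soc_low),
    }
    return max(rules, key=rules.get)
```

A full Mamdani controller would aggregate and defuzzify rule outputs instead of taking the single strongest rule, but the decision structure — membership grades feeding min/max rule strengths — is the same.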
Abstract:
This paper reports the findings of using a multi-agent based simulation model to evaluate the sawmill yard operations within a large privately owned sawmill in Sweden, Bergkvist-Insjön AB in the current case. Conventional working routines within the sawmill yard threaten the overall efficiency and thereby limit the profit margin of the sawmill. Deploying dynamic work routines within the sawmill yard is not readily feasible in real time, so a discrete event simulation model has been investigated to report optimal work orders depending on the situation. Preliminary investigations indicate that the results achieved by the simulation model are promising. It is expected that these results will support Bergkvist-Insjön AB in making optimal decisions by deploying efficient work orders in the sawmill yard.
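At its simplest, the event-driven view of yard operations reduces to a single-server queue (a toy sketch under assumed arrival and service times, not the Bergkvist-Insjön model): vehicles arrive, wait for a shared resource, and depart after service.

```python
def simulate_yard(arrivals, service_time):
    """Single-server queue: each vehicle starts service at the later of
    its arrival time and the moment the server becomes free; returns
    the list of departure times in arrival order."""
    free_at = 0.0
    departures = []
    for t in sorted(arrivals):
        start = max(t, free_at)   # wait if the resource is busy
        free_at = start + service_time
        departures.append(free_at)
    return departures

deps = simulate_yard(arrivals=[0.0, 1.0, 2.0], service_time=2.0)
```

Comparing departure times and queue lengths under alternative work orders is the basic mechanism by which such a model can rank dispatching rules for the yard.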
Abstract:
With the building sector accounting for around 40% of the total energy consumption in the EU, energy efficiency in buildings remains an important issue. Great progress has been made in reducing the energy consumption of new buildings, but the large stock of existing buildings with poor energy performance is probably an even more crucial area of focus. This thesis deals with energy efficiency measures suitable for the renovation of existing houses, particularly low-temperature heating systems and ventilation systems with heat recovery. The energy performance, environmental impact and costs are evaluated for a range of system combinations, for small and large houses with various heating demands and for different climates in Europe; the results were derived through simulation with energy calculation tools. Low-temperature heating and air heat recovery were both found to be promising for increasing energy efficiency in European houses. These solutions proved particularly effective in Northern Europe, as both have a greater impact in cold climates and on houses with high heating demands. The performance of heat pumps, both with outdoor air and exhaust air, was seen to improve with low-temperature heating. The choice between an exhaust air heat pump and a ventilation system with heat recovery is likely to depend on case-specific conditions, but both choices are more cost-effective and have a lower environmental impact than systems without heat recovery. The advantage of the heat pump is that it can be used all year round, given that it produces DHW. Economic and environmental aspects of energy efficiency measures do not always harmonize: lower costs can sometimes mean a larger environmental impact, and different environmental aspects can themselves diverge. This makes it difficult to define financial subsidies to promote energy efficiency measures.
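The benefit of ventilation heat recovery evaluated above follows from a simple sensible-heat balance; a sketch using the common 0.33 Wh/(m³·K) volumetric heat-capacity factor for air (flow, temperature difference and operating hours below are illustrative, not the thesis's cases):

```python
def ventilation_heat_loss_kwh(flow_m3_h, delta_t_k, hours, recovery_eff=0.0):
    """Sensible ventilation heat loss in kWh: 0.33 Wh/(m3*K) times
    airflow, indoor-outdoor temperature difference and runtime,
    reduced by the heat-recovery effectiveness."""
    return 0.33 * flow_m3_h * delta_t_k * hours * (1.0 - recovery_eff) / 1000.0

# Illustrative heating-season figures for a small house.
no_recovery = ventilation_heat_loss_kwh(150.0, 20.0, 1000.0)
with_recovery = ventilation_heat_loss_kwh(150.0, 20.0, 1000.0,
                                          recovery_eff=0.8)
```

The linear dependence on the indoor-outdoor temperature difference is why heat recovery pays off most in cold climates and high-demand houses, as the thesis finds.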
Abstract:
Applying optimization algorithms to PDE-based models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires computationally expensive simulations, so effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function (RBF) method, designed for global optimization of computationally expensive functions with multiple local optima; it does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated using all evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. The criteria for selecting the next function evaluation points are the estimated function value and the distance from all known points. Algorithms created for serial computing are not necessarily efficient in parallel, so the Parallel Stochastic RBF method differs from its serial ancestor. The method is applied to two Groundwater Superfund Remediation sites: the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. In this study, the adopted formulation treats pumping rates as decision variables in order to remove the plume of contaminated groundwater; groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes of simulation for a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e. speedup and efficiency. We find that with up to 24 parallel processors, the results of the Parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
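The candidate-selection step can be sketched as follows (a simplified serial illustration of the surrogate idea, not the actual Parallel Stochastic RBF implementation): fit a Gaussian RBF interpolant to the evaluated points, then score candidates by a weighted combination of predicted value and distance to known points, returning one candidate per processor.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Interpolation weights for a Gaussian RBF surrogate of the
    expensive function, built from all evaluated points."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    phi = np.exp(-(eps * d) ** 2)
    return np.linalg.solve(phi + 1e-10 * np.eye(len(X)), y)

def rbf_predict(X, w, Xq, eps=1.0):
    """Evaluate the RBF surrogate at query points Xq."""
    d = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
    return np.exp(-(eps * d) ** 2) @ w

def next_batch(X, y, candidates, n_workers, value_weight=0.7):
    """Score candidates by predicted value (low is good, minimization)
    and by distance to already-evaluated points (far is good), then
    return one candidate per processor for concurrent evaluation."""
    pred = rbf_predict(X, rbf_fit(X, y), candidates)
    dist = np.linalg.norm(
        candidates[:, None, :] - X[None, :, :], axis=2).min(axis=1)
    p = (pred - pred.min()) / (np.ptp(pred) + 1e-12)
    d = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    score = value_weight * p + (1.0 - value_weight) * d
    return candidates[np.argsort(score)[:n_workers]]
```

Varying the weight between exploitation (predicted value) and exploration (distance) is what makes such stochastic RBF methods robust on multimodal, derivative-free problems.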
Abstract:
Work presented at the 37th Conference on Stochastic Processes and their Applications, July 28 - August 1, 2014, Universidad de Buenos Aires.
Abstract:
Economic dispatch (ED) problems have recently been solved by artificial neural network approaches. Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. The ability of neural networks to realize complex non-linear functions makes them attractive for system optimization. All ED models solved by neural approaches described in the literature fail to represent the transmission system, so such procedures may calculate dispatch policies that do not take important active power constraints into account. Another drawback pointed out in the literature is that some of the neural approaches fail to converge efficiently toward feasible equilibrium points. A modified Hopfield approach designed to solve ED problems with transmission system representation is presented in this paper. The transmission system is represented through linear load flow equations and constraints on active power flows. The internal parameters of the modified Hopfield network are computed using the valid-subspace technique; these parameters guarantee the network's convergence to feasible equilibrium points, which represent the solution of the ED problem. Simulation results and a sensitivity analysis involving the IEEE 14-bus test system are presented to illustrate the efficiency of the proposed approach. (C) 2004 Elsevier Ltd. All rights reserved.
Design and analysis of an efficient neural network model for solving nonlinear optimization problems
Abstract:
This paper presents an efficient approach based on a recurrent neural network for solving constrained nonlinear optimization problems. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it handles optimization and constraint terms in different stages, with no interference between them. Moreover, the proposed approach does not require the specification of penalty or weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyse its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.
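The two-stage handling of constraint and optimization terms can be illustrated with a projected-gradient analogue of the valid-subspace idea (a discrete sketch under assumed quadratic objective and linear equality constraints, not the network's actual continuous dynamics): each iteration first projects the state onto the feasible subspace {v : Av = b}, then takes a descent step on the objective.

```python
import numpy as np

def valid_subspace_minimize(Q, c, A, b, steps=500, lr=0.05):
    """Minimize 0.5*v'Qv + c'v subject to A v = b by alternating a
    projection onto the constraint subspace (constraint stage) with a
    gradient step on the objective (optimization stage)."""
    n = Q.shape[0]
    v = np.zeros(n)
    P = A.T @ np.linalg.inv(A @ A.T)   # helper for the affine projection
    for _ in range(steps):
        v = v - P @ (A @ v - b)        # constraint stage: restore A v = b
        v = v - lr * (Q @ v + c)       # optimization stage: descend objective
    # final projection so the returned point is feasible
    return v - P @ (A @ v - b)

# Example: minimize 0.5*||v||^2 subject to v1 + v2 + v3 = 1.
v_star = valid_subspace_minimize(np.eye(3), np.zeros(3),
                                 np.array([[1.0, 1.0, 1.0]]),
                                 np.array([1.0]))
```

Because the two stages never mix objective and constraint gradients, no penalty or weighting parameters are needed, which mirrors the advantage claimed for the modified Hopfield formulation.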