7 results for Operational Data Stores
in Cambridge University Engineering Department Publications Database
Abstract:
Thus far, most studies of the operational energy use of buildings fail to take a longitudinal view; that is, they do not take into account how operational energy use changes over the lifetime of a building. However, such a view is important when predicting the impact of climate change, or for long-term energy accounting. This article presents an approach for delivering a longitudinal prediction of operational energy use. The work is based on a review of deterioration in thermal performance, the effects of building maintenance, and future climate change. The key issues are to estimate the service life expectancy and thermal performance degradation of building components while accounting for building maintenance and changing weather conditions at the same time. Two examples demonstrate the application of the deterministic and stochastic approaches, respectively. The work concludes that longitudinal prediction of operational energy use is feasible, but that the prediction depends largely on the availability of extensive and reliable monitoring data, a premise that is not met in most current buildings. © 2011 Elsevier Ltd.
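The deterministic and stochastic approaches mentioned above can be sketched roughly as follows. The degradation rate, service-life distribution, and degree-hour figure below are invented for illustration only; they are not values from the paper:

```python
import random

def u_value(year, u0=0.3, rate=0.004):
    # Deterministic approach: thermal transmittance (W/m2K) degrades
    # linearly with component age.
    return u0 * (1 + rate * year)

def u_value_stochastic(year, u0=0.3, rate=0.004, mean_life=25,
                       rng=random.Random(0)):
    # Stochastic approach: sample a service life; maintenance replaces the
    # component at end of life, resetting performance to as-new.
    life = max(1, round(rng.gauss(mean_life, 5)))
    return u0 * (1 + rate * (year % life))

def annual_heating_energy(u, area_m2=100.0, degree_hours=80_000):
    # Crude transmission-loss estimate in kWh/year: U x area x degree-hours.
    return u * area_m2 * degree_hours / 1000.0

# Longitudinal energy-use profile over a 50-year lifetime (deterministic).
profile = [annual_heating_energy(u_value(y)) for y in range(50)]
```

The longitudinal profile rises monotonically in the deterministic variant, while the stochastic variant resets at each sampled replacement, which is why the paper's prediction quality hinges on monitoring data to calibrate both the degradation rate and the service-life distribution.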
Abstract:
In the Climate Change Act of 2008 the UK Government pledged to reduce carbon emissions by 80% by 2050. As one step towards this, regulations are being introduced requiring all new buildings to be ‘zero carbon’ by 2019, defined as buildings which emit net zero carbon during their operational lifetime. However, in order to meet the 80% target it is necessary to reduce the carbon emitted during the whole life-cycle of buildings, including that emitted during the processes of construction. These elements make up the ‘embodied carbon’ of the building. While there are no regulations yet in place to restrict embodied carbon, a number of different approaches have been taken. There are several existing databases of embodied carbon and embodied energy; most provide data for material extraction and manufacturing only, the ‘cradle to factory gate’ phase. In addition to the databases, various software tools have been developed to calculate the embodied energy and carbon of individual buildings. A third source of data is the research literature, in which individual life-cycle analyses of buildings are reported. This paper provides a comprehensive review, comparing and assessing data sources, boundaries and methodologies. It concludes that the wide variations in these aspects produce incomparable results. It highlights the areas where existing data are reliable, and where new data and more precise methods are needed. This comprehensive review will guide the future development of a consistent and transparent database and software tool to calculate the embodied energy and carbon of buildings.
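A cradle-to-factory-gate calculation of the kind these databases support reduces to summing material quantities against carbon coefficients. The bill of materials and coefficients below are illustrative numbers, not values from any of the databases reviewed:

```python
# Hypothetical bill of materials; coefficients are kgCO2e per kg of
# material at the factory gate (illustrative values only).
bill_of_materials = {
    "concrete": {"mass_kg": 200_000, "kgco2e_per_kg": 0.15},
    "steel":    {"mass_kg": 15_000,  "kgco2e_per_kg": 1.40},
    "timber":   {"mass_kg": 8_000,   "kgco2e_per_kg": 0.45},
}

def embodied_carbon(bom):
    # Cradle-to-factory-gate embodied carbon: sum of mass x coefficient.
    return sum(m["mass_kg"] * m["kgco2e_per_kg"] for m in bom.values())

total_kgco2e = embodied_carbon(bill_of_materials)
```

Widening the system boundary (transport, construction, maintenance, end-of-life) simply adds further terms to the sum, which is exactly why studies using different boundaries or different coefficient sets produce incomparable totals.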
Abstract:
Reducing energy consumption is a major challenge for energy-intensive industries such as papermaking. A commercially viable energy-saving solution is to employ data-based optimization techniques to obtain a set of optimized operational settings that satisfy certain performance indices. The difficulties are that: 1) problems of this type are inherently multicriteria, in the sense that improving one performance index might compromise other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections, which make the modeling task difficult; and 3) as the models are acquired from existing historical data, they are valid only locally, and extrapolation incorporates a risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. A multiobjective gradient descent algorithm is then used to solve the problem in line with the user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, the validity of the local models must be checked before proceeding. The method is implemented in a MATLAB-based interactive tool, DataExplorer, supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two UK-based commercial paper mills, where the aim was to reduce steam consumption and increase productivity while maintaining product quality by optimizing vacuum pressures in the forming and press sections. The experimental results demonstrate the effectiveness of the method. © 2006 IEEE.
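One common way to trade off competing performance indices with gradient information is to descend a preference-weighted combination of the objective gradients. The toy objectives below merely stand in for measures such as steam consumption and quality deviation; they are not the mill models used in the paper, and the weighted-sum scalarisation is just one simple instance of preference-guided multiobjective descent:

```python
def multiobjective_gd(grads, weights, x0, lr=0.1, steps=200):
    # Weighted-sum scalarisation: step along the preference-weighted
    # combination of the individual objective gradients.
    x = x0
    for _ in range(steps):
        x -= lr * sum(w * g(x) for w, g in zip(weights, grads))
    return x

# Toy stand-ins for two conflicting indices: their minima sit at x = 1 and
# x = -1, so the chosen setting depends on the user's preference weights.
g1 = lambda x: 2 * (x - 1.0)   # gradient of (x - 1)^2
g2 = lambda x: 2 * (x + 1.0)   # gradient of (x + 1)^2

x_star = multiobjective_gd([g1, g2], weights=[0.9, 0.1], x0=0.0)
```

Shifting weight toward one index moves the operating point along the trade-off curve toward that index's optimum; with weights [0.9, 0.1] the iteration settles near x = 0.8, and with equal weights it settles midway at x = 0.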
Abstract:
This paper proposes a method for analysing operational complexity in supply chains using an entropic measure based on information theory. The proposed approach estimates the operational complexity at each stage of the supply chain and analyses the changes between stages, where a stage is identified by an exchange of data and/or material. The method identifies the stages where operational complexity is generated and propagated (exported, imported, generated or absorbed). Central to the method is the identification of a reference point within the supply chain: the point where operational complexity is at a local minimum along the data transfer stages. Such a point can be thought of as a 'sink' for turbulence generated in the supply chain. Where it exists, it has the merit of stabilising the supply chain by attenuating uncertainty. However, the location of the reference point is also a matter of choice; if the preferred location is other than the current one, this is a trigger for management action, and the analysis can help decide appropriate remedial action. More generally, the approach can assist logistics management by highlighting problem areas. An industrial application is presented to demonstrate the applicability of the method. © 2013 Operational Research Society Ltd. All rights reserved.
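An entropic measure of this kind is typically Shannon entropy over the states observed at each stage, with the between-stage difference showing where complexity is exported or absorbed. The delivery-status records below are hypothetical, invented purely to illustrate the calculation:

```python
from math import log2
from collections import Counter

def entropy_bits(states):
    # Shannon entropy H = -sum(p * log2(p)) of the states observed at one
    # supply-chain stage; higher H means more operational complexity.
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical delivery-status records exchanged at two consecutive stages.
upstream   = ["on_time"] * 8 + ["late"] * 2
downstream = ["on_time"] * 5 + ["late"] * 3 + ["very_late"] * 2

delta = entropy_bits(downstream) - entropy_bits(upstream)
# delta > 0: this transfer stage generates/exports complexity;
# delta < 0: it absorbs complexity, a candidate reference-point 'sink'.
```

Scanning delta along consecutive stages locates the local entropy minimum that the method treats as the reference point.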
Abstract:
Most research on technology roadmapping has focused on its practical applications and on developing methods to enhance its operational process. Despite a demand for well-supported, systematic information, little attention has been paid to how, and which, information can be utilised in technology roadmapping. This paper therefore proposes a methodology for structuring technological information in order to facilitate the process. To this end, eight methods are suggested to provide useful information for technology roadmapping: summary, information extraction, clustering, mapping, navigation, linking, indicators and comparison. This research identifies the characteristics of significant data that can potentially be used in roadmapping, and presents an approach to extracting important information from such raw data through various data mining techniques, including text mining, multi-dimensional scaling and K-means clustering. In addition, the paper explains how this approach can be applied at each step of roadmapping. The proposed approach is applied to develop a roadmap of radio-frequency identification (RFID) technology to illustrate the process practically. © 2013 Taylor & Francis.
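The clustering step can be sketched with a minimal K-means over 2-D coordinates of the kind multi-dimensional scaling produces from document keyword vectors. The document coordinates below are fabricated for illustration; a real pipeline would first build term vectors via text mining:

```python
def kmeans(points, k, iters=20):
    # Minimal K-means over low-dimensional document coordinates.
    centroids = points[:k]  # simple deterministic initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            nearest = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [tuple(sum(d) / len(c) for d in zip(*c)) if c
                     else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical MDS coordinates for documents on two RFID subtopics.
docs = [(0.10, 0.20), (0.15, 0.25), (0.20, 0.10),
        (0.90, 0.80), (0.85, 0.90), (0.95, 0.85)]
centroids, clusters = kmeans(docs, k=2)
```

Each resulting cluster groups documents on one subtopic, which can then be summarised and placed on the appropriate roadmap layer.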
Abstract:
The importance of properly exploiting a classifier's inherent geometric characteristics when developing a classification methodology is emphasized as a prerequisite to achieving near-optimal performance in thematic mapping. When used properly, it is argued, the long-standing maximum likelihood approach and the more recent support vector machine perform comparably: both are flexible enough to segment the spectral domain so as to match the inherent class separations in the data, as are most reasonable classifiers. The choice of classifier in practice is therefore determined largely by preference and related considerations, such as ease of training, multiclass capabilities, and classification cost. © 1980-2012 IEEE.
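The maximum likelihood rule mentioned above amounts to assigning each pixel to the class whose fitted density gives it the highest likelihood. The per-class spectral statistics below are hypothetical single-band values with equal priors assumed, not data from the paper:

```python
from math import log, pi

# Hypothetical per-class spectral statistics (mean, variance) estimated
# from training pixels in a single band; equal class priors assumed.
classes = {"water": (0.2, 0.01), "vegetation": (0.6, 0.04)}

def log_likelihood(x, mean, var):
    # Log of the univariate normal density (equal priors cancel out).
    return -0.5 * log(2 * pi * var) - (x - mean) ** 2 / (2 * var)

def classify(x):
    # Maximum likelihood rule: pick the class with the highest likelihood.
    return max(classes, key=lambda c: log_likelihood(x, *classes[c]))
```

Because the class variances differ, the decision boundary is quadratic in the spectral value; this is one way the classifier's geometry segments the spectral domain, and a support vector machine with a suitable kernel can carve a comparable boundary.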