60 results for Buildings - Energy consumption
Abstract:
Thermal stability is of major importance in polymer extrusion, where product quality depends upon the level of melt homogeneity achieved by the extruder screw. Extrusion is an energy intensive process, and optimisation of process energy usage while maintaining melt stability is necessary in order to produce good quality product at low unit cost. Optimisation of process energy usage is timely, as world energy prices have increased rapidly over the last few years. In the first part of this study, the efficiency of an extruder is discussed in general terms. Then, an attempt was made to explore correlations between melt thermal stability and energy demand in polymer extrusion under different process settings and screw geometries. A commodity grade of polystyrene was extruded using a highly instrumented single screw extruder, equipped with energy consumption and melt temperature field measurement. Moreover, the melt viscosity of the experimental material was measured using an off-line rheometer. Results showed that the specific energy demand of the extruder (i.e. the energy required to process a unit mass of polymer) decreased with increasing throughput, while fluctuations in energy demand also decreased. However, the relationship between melt temperature and extruder throughput was found to be complex, with temperature varying with radial position across the melt flow. Moreover, melt thermal stability deteriorated as throughput was increased, meaning that greater efficiency was achieved to the detriment of melt consistency. Extruder screw design also had a significant effect on the relationship between energy consumption and melt consistency. Overall, the relationship between process energy demand and thermal stability appeared to be negatively correlated and highly complex in nature.
Moreover, the level of process understanding achieved here can help to inform selection of equipment and setting of operating conditions to optimise both energy and thermal efficiencies in parallel.
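The specific energy demand (SED) referred to above is simply the energy consumed per unit mass of extrudate. A minimal sketch of the calculation, using illustrative power and throughput figures rather than values from the study:

```python
# Specific energy demand (SED): energy used per unit mass of polymer processed.
# A minimal sketch; the power and throughput figures are illustrative, not
# measurements from the study.

def specific_energy_demand(power_kw: float, throughput_kg_per_h: float) -> float:
    """Return SED in kWh per kg of extrudate."""
    if throughput_kg_per_h <= 0:
        raise ValueError("throughput must be positive")
    return power_kw / throughput_kg_per_h

# Higher throughput at a similar power draw lowers the SED, matching the
# trend reported in the abstract.
low = specific_energy_demand(power_kw=10.0, throughput_kg_per_h=5.0)    # 2.0 kWh/kg
high = specific_energy_demand(power_kw=12.0, throughput_kg_per_h=10.0)  # 1.2 kWh/kg
```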
Abstract:
Extrusion is one of the fundamental production methods in the polymer processing industry and is used to produce a large number of commodities across a diverse range of industrial sectors. As an energy intensive production method, process energy efficiency is a major concern, and selecting the most energy efficient processing conditions is key to reducing operating costs. Extruders typically consume energy through the drive motor, barrel heaters, cooling fans, cooling water pumps, gear pumps, etc. The drive motor is usually the largest energy consuming device in an extruder, while the barrel/die heaters account for the second largest energy demand. This study focuses on investigating the total energy demand of an extrusion plant under various processing conditions while identifying ways to optimise energy efficiency. Initially, a review was carried out on the monitoring and modelling of energy consumption in polymer extrusion. The power factor, energy demand and losses of a typical extrusion plant were also discussed in detail. The mass throughput, total energy consumption and power factor of an extruder were observed experimentally over different processing conditions, and the total extruder energy demand was modelled both empirically and with commercially available extrusion simulation software. The experimental results show that extruder energy demand is strongly coupled to machine, material and process parameters. The total power predicted by the simulation software exhibits a lagging offset compared with the experimental measurements. The empirical models are in good agreement with the experimental measurements and hence can be used to study process energy behaviour in detail and to identify ways to optimise process energy efficiency.
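The empirical modelling described can be illustrated, in heavily simplified form, by an ordinary least-squares fit of total power against a single process setting. The model form (P = a + b·N) and the data points are illustrative assumptions, not the paper's fitted models:

```python
# Hedged sketch of an empirical power model of the form P = a + b*N, fitted to
# (screw speed, measured power) pairs by ordinary least squares. The data
# points below are illustrative placeholders, not measurements from the paper.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

speeds = [30.0, 50.0, 70.0, 90.0]   # screw speed, rpm (illustrative)
powers = [4.0, 6.0, 8.0, 10.0]      # total power, kW (illustrative)
a, b = fit_linear(speeds, powers)

def predict(n):
    """Predicted total power at screw speed n."""
    return a + b * n
```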
Abstract:
Energy consumption has become an important area of research of late. With the advent of new manycore processors, situations have arisen where not all the processors need to be active to reach an optimal relation between performance and energy usage. In this paper, a study of the power and energy usage of a series of benchmarks, the PARSEC and SPLASH-2X benchmark suites, on the Intel Xeon Phi under different thread configurations is presented. To carry out this study, a tool was designed to monitor and record the power usage in real time during execution and afterwards to compare the r
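A tool of the kind described, which records power in real time, typically obtains energy by integrating the sampled power over time. A minimal sketch using trapezoidal integration; the samples are illustrative:

```python
# Energy as the time integral of sampled power, via trapezoidal integration.
# The sample values below are illustrative, not readings from the Xeon Phi.

def energy_joules(times_s, powers_w):
    """Integrate sampled power (W) over time (s) -> energy (J)."""
    return sum((t1 - t0) * (p0 + p1) / 2.0
               for (t0, p0), (t1, p1) in zip(zip(times_s, powers_w),
                                             zip(times_s[1:], powers_w[1:])))

# A constant 100 W draw over 10 s yields 1000 J.
samples_t = [0.0, 5.0, 10.0]
samples_p = [100.0, 100.0, 100.0]
```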
Abstract:
Energy in today's short-range wireless communication is mostly spent on the analog and digital hardware rather than on radiated power. Hence, purely information-theoretic considerations fail to achieve the lowest energy per information bit, and the optimization process must carefully consider the overall transceiver. In this paper, we propose to perform cross-layer optimization, based on an energy-aware rate adaptation scheme combined with a physical layer that is able to adjust its processing effort to the data rate and the channel conditions in order to minimize the energy consumption per information bit. This energy-proportional behavior is enabled by extending the classical system modes with additional configuration parameters at the various layers. Fine-grained models of the power consumption of the hardware are developed to make the medium access control layer aware of the physical layer's capabilities. The joint application of the proposed energy-aware rate adaptation and modifications to the physical layer of an IEEE 802.11n system improves energy efficiency (averaged over many noise and channel realizations) by up to 44% in all considered scenarios.
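The core idea of energy-aware rate adaptation can be sketched as choosing, among candidate rates, the one that minimizes total transceiver power divided by data rate. The mode table below is a hypothetical placeholder, not the paper's fine-grained hardware power model:

```python
# Energy-aware rate adaptation in miniature: pick the PHY rate with the
# lowest energy per information bit. The (rate, power) pairs are hypothetical
# placeholders, not the paper's measured hardware model.

def energy_per_bit(power_w: float, rate_bps: float) -> float:
    """Joules per information bit at a given rate and total power draw."""
    return power_w / rate_bps

def pick_rate(candidates):
    """candidates: iterable of (rate_bps, power_w); return the best rate."""
    return min(candidates, key=lambda rp: energy_per_bit(rp[1], rp[0]))[0]

# Once hardware power is counted, the fastest mode is not always the
# cheapest per bit.
modes = [(6e6, 0.9), (24e6, 1.8), (54e6, 4.5)]
best = pick_rate(modes)
```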
Abstract:
Increasingly large amounts of data are stored in the main memory of data center servers. However, DRAM-based memory is a significant consumer of energy and is unlikely to scale in the future. Various byte-addressable non-volatile memory (NVM) technologies promise high density and near-zero static energy; however, they suffer from increased latency and increased dynamic energy consumption.
This paper proposes to leverage a hybrid memory architecture, consisting of both DRAM and NVM, through novel application-level data management policies that decide whether to place data on DRAM or NVM. We analyze modern column-oriented and key-value data stores and demonstrate the feasibility of application-level data management. Cycle-accurate simulation confirms that our methodology reduces energy with the least performance degradation compared to current state-of-the-art hardware and OS approaches. Moreover, we use our techniques to apportion DRAM and NVM memory sizes for these workloads.
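An application-level placement policy of the kind proposed can be sketched as a greedy rule that keeps the most frequently accessed objects in scarce DRAM and places the remainder in NVM. The object sizes, access counts and DRAM capacity below are illustrative assumptions:

```python
# Greedy DRAM/NVM placement sketch: hot (frequently accessed) objects go to
# DRAM until it fills; everything else goes to NVM. All values illustrative.

def place(objects, dram_bytes):
    """objects: list of (name, size_bytes, access_count).
    Fill DRAM with the most-accessed objects first; the rest go to NVM."""
    placement = {}
    free = dram_bytes
    for name, size, _ in sorted(objects, key=lambda o: o[2], reverse=True):
        if size <= free:
            placement[name] = "DRAM"
            free -= size
        else:
            placement[name] = "NVM"
    return placement

objs = [("index", 64, 1000), ("log", 128, 10), ("cache", 32, 500)]
plan = place(objs, dram_bytes=100)
```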
Abstract:
Power, and consequently energy, has recently attained first-class system resource status, on par with conventional metrics such as CPU time. To reduce energy consumption, many hardware- and OS-level solutions have been investigated. However, application-level information - which can provide the system with valuable insights unattainable otherwise - has been considered in only a handful of cases. We introduce OpenMPE, an extension to OpenMP designed for power management. OpenMP is the de-facto standard for programming parallel shared memory systems, but does not yet provide any support for power control. Our extension exposes (i) per-region multi-objective optimization hints and (ii) application-level adaptation parameters, in order to create energy-saving opportunities for the whole system stack. We have implemented OpenMPE support in a compiler and runtime system, and empirically evaluated its performance on two architectures, mobile and desktop. Our results demonstrate the effectiveness of OpenMPE, with geometric mean energy savings of 15% across 9 use cases while maintaining full quality of service.
Abstract:
Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, where we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
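ALEA's probabilistic approach can be illustrated in miniature: sample which code region is executing at regular intervals, then apportion the measured total energy in proportion to the observed sample counts. The trace and energy figure below are illustrative, not ALEA's full mechanism:

```python
# Statistical-sampling attribution sketch: energy is split across code
# regions in proportion to how often each region was observed executing.
# The sample trace and total energy are illustrative values.

from collections import Counter

def apportion_energy(samples, total_energy_j):
    """samples: sequence of region names observed at each sampling tick.
    Returns estimated energy (J) attributed to each region."""
    counts = Counter(samples)
    n = len(samples)
    return {region: total_energy_j * c / n for region, c in counts.items()}

trace = ["kernel"] * 6 + ["reduce"] * 3 + ["io"] * 1
profile = apportion_energy(trace, total_energy_j=200.0)
```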
Abstract:
DRAM technology faces density and power challenges in increasing capacity because of the limitations of physical cell design. To overcome these limitations, system designers are exploring alternative solutions that combine DRAM and emerging NVRAM technologies. Previous work on heterogeneous memories focuses mainly on two system designs: PCache, a hierarchical, inclusive memory system, and HRank, a flat, non-inclusive memory system. We demonstrate that neither of these designs can universally achieve high performance and energy efficiency across a suite of HPC workloads. In this work, we investigate the impact of a number of multilevel memory designs on the performance, power, and energy consumption of applications. To achieve this goal and overcome the limited number of available tools for studying heterogeneous memories, we created HMsim, an infrastructure that enables n-level, heterogeneous memory studies by leveraging existing memory simulators. We then propose HpMC, a new memory controller design that combines the best aspects of existing management policies to improve performance and energy. Our energy-aware memory management system dynamically switches between PCache and HRank based on the temporal locality of applications. Our results show that HpMC reduces energy consumption by 13% to 45% compared to PCache and HRank, while providing the same bandwidth and higher capacity than a conventional DRAM system.
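The switching idea at the heart of HpMC can be sketched as a rule that selects the management scheme from a temporal-locality signal. The hit-ratio metric and threshold below are illustrative assumptions, not the controller's actual implementation:

```python
# Policy-switching sketch: high temporal locality favors a hierarchical,
# inclusive design (PCache); low locality favors a flat, non-inclusive
# design (HRank). The locality signal and threshold are illustrative.

def choose_policy(hits: int, accesses: int, threshold: float = 0.6) -> str:
    """Pick a management policy from a windowed DRAM hit ratio."""
    if accesses == 0:
        return "PCache"          # no evidence yet; default choice
    ratio = hits / accesses
    return "PCache" if ratio >= threshold else "HRank"
```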
Abstract:
Among various technologies to tackle the twin challenges of sustainable energy supply and climate change, energy saving through advanced control plays a crucial role in decarbonizing the whole energy system. Modern control technologies, such as optimal control and model predictive control, do provide a framework to simultaneously regulate system performance and limit control energy. However, little has been done so far to exploit the full potential of controller design in reducing energy consumption while maintaining desirable system performance. This paper investigates the correlations between control energy consumption and system performance using two control approaches widely used in industry, namely PI control and subspace model predictive control. Our investigation shows that controller design is a delicate synthesis procedure for achieving a better trade-off between system performance and energy saving, and that a proper choice of values for the control parameters may save a significant amount of energy.
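The trade-off investigated can be illustrated with a toy discrete-time simulation: a PI controller on a first-order plant, where more aggressive gains reduce tracking error but increase control energy (integrated squared control effort). The plant and gain values are illustrative, not the paper's case study:

```python
# Toy illustration of the performance/energy trade-off in controller tuning:
# a discrete PI loop on a first-order plant x' = -a*x + b*u. All parameter
# values are illustrative, not taken from the paper.

def simulate_pi(kp, ki, steps=200, dt=0.05, setpoint=1.0, a=1.0, b=1.0):
    y, integ = 0.0, 0.0
    energy, abs_err = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        y += dt * (-a * y + b * u)     # forward-Euler plant update
        energy += u * u * dt           # control energy proxy (integral of u^2)
        abs_err += abs(e) * dt         # integrated tracking error
    return energy, abs_err

aggressive = simulate_pi(kp=8.0, ki=2.0)   # fast tracking, more control energy
gentle = simulate_pi(kp=1.0, ki=0.5)       # slower tracking, less energy
```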
Abstract:
Hemp-lime concrete is a sustainable alternative to standard building wall materials, with low associated embodied energy. It exhibits good hygric, acoustic and thermal properties, making it an exciting, sustainable building envelope material. When cast in temporary shuttering around a timber frame, it exhibits lower thermal conductivity than concrete, and consequently achieves low U-values in a primarily mono-material wall construction. Although cast relatively thick, hemp-lime walls do not generally achieve the low U-values stipulated in building regulations. However, assessing its thermal performance through its resistance to thermal transfer alone underestimates its true thermal quality. The thermal inertia, or reluctance of the wall to change its temperature when exposed to changing environmental temperatures, also has a significant impact on the thermal quality of the wall, the thermal comfort of the interior space and the energy consumed in space heating. With a focus on energy reduction in buildings, regulations emphasise thermal resistance to heat transfer, with much less focus on the thermal inertia or storage benefits due to thermal mass. This paper investigates the dynamic thermal responsiveness of hemp-lime concrete walls. It reports the influence of thermal conductivity, density and specific heat through analysis of steady state and transient heat transfer in the walls. A novel hot-box design which isolates the conductive heat flow is used and compared with tests in standard hot-boxes. Thermal diffusivity and effusivity are evaluated from experimentally measured conductivity, using standard analytical relationships. The experimental results show that hemp-lime exhibits high thermal inertia and that its thermal inertia characteristics compensate for any limitations in the thermal resistance of the construction material.
When viewed together, the thermal resistance and thermal mass characteristics of hemp-lime are appropriate for maintaining comfortable indoor thermal conditions and low-energy operation.
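The diffusivity and effusivity evaluations mentioned above rest on the standard relationships a = k/(ρ·c) and e = √(k·ρ·c). A small sketch; the hemp-lime property values are round illustrative numbers, not the paper's measurements:

```python
# Standard analytical relationships for dynamic thermal properties:
#   diffusivity  a = k / (rho * c)      [m^2/s]
#   effusivity   e = sqrt(k * rho * c)  [W s^0.5 / (m^2 K)]
# Property values below are round illustrative numbers, not measured data.

import math

def diffusivity(k, rho, c):
    return k / (rho * c)

def effusivity(k, rho, c):
    return math.sqrt(k * rho * c)

k, rho, c = 0.1, 400.0, 1000.0   # W/(m K), kg/m^3, J/(kg K) (illustrative)
a = diffusivity(k, rho, c)       # low diffusivity: slow temperature change
e = effusivity(k, rho, c)        # effusivity governs surface heat exchange
```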
Abstract:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, and the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. A comparison with loop perforation (a well-known compile-time approximation technique) also shows that the proposed framework results in significantly higher quality for the same energy budget.
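The runtime's decision problem can be sketched as a greedy selection: start with every task approximate, then upgrade tasks to their accurate versions in order of significance while the energy budget allows. Task names, costs and significance weights below are illustrative:

```python
# Significance-aware selection sketch: within an energy budget, run the most
# significant tasks accurately and fall back to cheaper approximate versions
# for the rest. All task data is illustrative.

def plan_execution(tasks, budget):
    """tasks: list of (name, significance, accurate_cost, approx_cost).
    Greedily upgrade tasks to accurate versions by significance."""
    spend = sum(t[3] for t in tasks)            # baseline: all approximate
    plan = {t[0]: "approx" for t in tasks}
    for name, _, acc, apx in sorted(tasks, key=lambda t: t[1], reverse=True):
        extra = acc - apx                       # cost of upgrading this task
        if spend + extra <= budget:
            plan[name] = "accurate"
            spend += extra
    return plan

tasks = [("solve", 10, 8.0, 2.0), ("render", 5, 6.0, 1.0), ("log", 1, 2.0, 0.5)]
plan = plan_execution(tasks, budget=10.0)
```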
Abstract:
Energy consumption is an important concern in modern multicore processors. The energy consumed by a multicore processor during the execution of an application can be minimized by tuning the hardware state using knobs such as frequency and voltage. The existing theoretical work on energy minimization using global DVFS (Dynamic Voltage and Frequency Scaling), despite being thorough, ignores the time and energy the CPU spends on memory accesses and the dynamic energy consumed by idle cores. This article presents an analytical energy-performance model for parallel workloads that accounts for the time and energy the CPU chip spends on memory accesses in addition to the time and energy spent on CPU instructions. The model also accounts for the dynamic energy consumed by idle cores. Existing work on global DVFS for parallel workloads shows that using a single frequency for the entire duration of a parallel application is not energy optimal, and that varying the frequency according to changes in the parallelism of the workload can save energy. We present an analytical framework around our energy-performance model to predict the operating frequencies (which depend on the amount of parallelism) for global DVFS that minimize the overall CPU energy consumption. We show how the optimal frequencies in our model differ from those in a model that does not account for memory accesses, and how the memory intensity of an application affects the optimal frequencies.
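The effect the article models can be sketched with a toy energy function in which only the CPU-bound portion of execution time scales with frequency, while the memory-bound portion does not. The constants below are illustrative, not fitted model parameters:

```python
# Toy DVFS energy model: CPU time shrinks with frequency, memory time does
# not, dynamic power grows roughly with f^3, and static power accrues for the
# whole run. All constants are illustrative placeholders.

def total_energy(f, cpu_cycles=1e9, mem_time=0.5, c_dyn=1e-27, p_static=2.0):
    t_cpu = cpu_cycles / f           # CPU-bound time scales with frequency
    t = t_cpu + mem_time             # memory-bound time does not
    e_dyn = c_dyn * f**3 * t_cpu     # dynamic energy while the CPU is busy
    e_static = p_static * t          # static energy over the full runtime
    return e_dyn + e_static

# Sweep candidate frequencies and pick the energy-minimizing one.
freqs = [0.5e9, 1.0e9, 1.5e9, 2.0e9, 2.5e9]
best = min(freqs, key=total_energy)
```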
Abstract:
Self-compacting concrete (SCC) is generally designed with a relatively higher content of fines, which includes cement, and a higher dosage of superplasticizer than conventional concrete. The design of current SCC leads to high compressive strength, and it is already used in special applications where the high cost of materials can be tolerated. Using SCC, which eliminates the need for vibration, increases the speed of casting and thus reduces labour requirements, energy consumption, construction time, and equipment costs. To gain maximum benefit from SCC, it has to be used in wider applications. The cost of materials can be decreased by reducing the cement content and using a minimum amount of admixtures. This paper reviews statistical models obtained from a factorial design carried out to determine the influence of four key parameters on filling ability, passing ability, segregation and compressive strength. These parameters are important for the successful development of medium strength self-compacting concrete (MS-SCC). The parameters considered in the study were the contents of cement and pulverised fuel ash (PFA), the water-to-powder ratio (W/P), and the dosage of superplasticizer (SP). The responses of the derived statistical models are slump flow, fluidity loss, rheological parameters, Orimet time, V-funnel time, L-box, JRing combined with the Orimet, JRing combined with the cone, fresh segregation, and compressive strength at 7, 28 and 90 days. The models are valid for mixes made with a W/P ratio of 0.38 to 0.72, 60 to 216 kg/m3 of cement, 183 to 317 kg/m3 of PFA and 0 to 1% of SP, by mass of powder. The utility of such models for optimizing concrete mixes to achieve a good balance between filling ability, passing ability, segregation, compressive strength, and cost is discussed.
Examples highlighting the usefulness of the models are presented using isoresponse surfaces to demonstrate the single and coupled effects of mix parameters on slump flow, loss of fluidity, flow resistance, segregation, JRing combined with the Orimet, and compressive strength at 7 and 28 days. A cost analysis is carried out to show the trade-offs between the cost of materials and the specified consistency levels and compressive strengths at 7 and 28 days, which can be used to identify economical mixes. The paper establishes the usefulness of the mathematical models as a tool to facilitate the test protocol required to optimise medium strength SCC.
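Models from a factorial design of this kind are polynomial responses in coded mix parameters. A hedged sketch of evaluating such a model over the stated validity ranges; the coefficients below are hypothetical placeholders, not the paper's fitted values:

```python
# Evaluating a factorial-design response model in coded variables. The
# coefficients of the slump-flow model below are hypothetical placeholders
# chosen for illustration; only the validity ranges come from the abstract.

def coded(value, lo, hi):
    """Map a factor to the [-1, +1] coded range used in factorial designs."""
    return 2.0 * (value - lo) / (hi - lo) - 1.0

def slump_flow(cement, pfa, wp, sp):
    # Coded variables over the stated validity ranges.
    x1 = coded(cement, 60, 216)    # cement, kg/m3
    x2 = coded(pfa, 183, 317)      # PFA, kg/m3
    x3 = coded(wp, 0.38, 0.72)     # water-to-powder ratio
    x4 = coded(sp, 0.0, 1.0)       # SP, % by mass of powder
    # Hypothetical first-order model with one interaction term (mm).
    return 650 + 20 * x1 + 10 * x2 + 40 * x3 + 30 * x4 + 5 * x3 * x4

# At the centre of the design space, all coded terms vanish.
flow = slump_flow(cement=138, pfa=250, wp=0.55, sp=0.5)
```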
Abstract:
Much of the interest in sustainable cities relates to the inexorable rise in the demand for car travel and the contribution that certain urban forms and land-use relationships can make to reducing energy consumption. Indeed, this demand is fuelled more by increased spatial separation of homes and workplaces, shops and schools than by any rise in trip making. This paper evaluates recent efforts to integrate land-use planning and transportation policy in the Belfast Metropolitan Area by reviewing the policy formulation process at both a regional and city scale. The paper suggests that considerable progress has been made in integrating these two areas of public policy, both institutionally and conceptually. However, concerns are expressed that the rhetoric of sustainability may prove difficult to translate into implementation, leading to a further dislocation of land-use and transportation.
Abstract:
Much of the interest in promoting sustainable development in planning for the city-region focuses on the apparently inexorable rise in the demand for car travel and the contribution that certain urban forms and land-use relationships can make to reducing energy consumption. Within this context, policy prescription has increasingly favoured a compact city approach, with increasing urban residential densities to address the physical separation of daily activities and the resultant dependency on the private car. This paper aims to outline and evaluate recent efforts to integrate land use and transport policy in the Belfast Metropolitan Area in Northern Ireland. Although considerable progress has been made, this paper underlines the extent of existing car dependency in the metropolitan area and prevailing negative attitudes to public transport, and argues that although there is rhetorical support for the principles of sustainability and the practice of land-use/transportation integration, it is combined with a selective reluctance to embrace local changes in residential environment or lifestyle preferences which might facilitate such principles.