974 results for energy-aware
Abstract:
The issue of a more sustainable environment has been the aim of many governments and institutions for decades. Current research and literature have shown the continuing impact of global development and population growth on the planet as a whole. Issues such as carbon emissions, global warming, resource sustainability, industrial pollution, waste management and the decline of scarce resources, including food, are now realities and are being addressed at various levels. All levels of government, business and the public now share equal responsibility for a sustainable environment. Although the issues of global warming, climate change and the overuse of scarce resources are well documented and constantly covered across all forms of media, public attitudes towards them vary significantly. Despite being aware of these issues, many individuals consider the problem to be one for governments to tackle and regard their own efforts as unimportant or unnecessary. In many cases individuals are concerned about sustainability, but are either not in a position to take action due to economic circumstances or are not prepared to offset sustainability gains against personal interests...
Abstract:
Sustainability has become crucial for the energy industry, as projects in this industry are extensively large and complex and have significant impacts on the environment, community and economy. This demands that the energy industry proactively incorporate sustainability ideas and commit to sustainable project development. This study aims to investigate how the Australian energy industry responds to sustainability requirements and, in particular, which indicators are used to measure sustainability performance. To achieve this, a content analysis was conducted of the sustainability reports, vision statements and policy statements of Australian energy companies listed in the 2013 PLATTS Top 250 Global Energy Company Rankings, together with government reports relating to sustainability. The findings show that the energy companies extensively discuss sustainability aspects within three dimensions, i.e. community, environment and economy. Their primary sustainability goals are supplying cleaner energy for the future and doing business in a way that improves outcomes for shareholders, employees, business partners and the communities. In particular, energy companies value their employees as one of the key areas that need to be considered. Furthermore, the energy industry has become increasingly aware of the importance of measuring sustainability performance in order to achieve sustainability goals. A number of sustainability indicators have been developed on the basis of key themes beyond economic measures. It is envisaged that the findings from this research will help stakeholders in the energy industry to adopt different indicators to evaluate, and ultimately achieve, sustainability performance.
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for it have been proposed. However, those algorithms have not been widely used in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing VM placement may be less than the loss incurred by that migration. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption of the new VM placement and its total inter-VM traffic flow. The GA has been implemented and evaluated experimentally, and the results show that it outperforms two well-known algorithms for the VM placement problem.
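As a concrete illustration of the idea, the sketch below shows a penalty-based fitness function that combines placement energy, inter-VM traffic and migration cost; the data structures, weights and linear power model are assumptions made for illustration, not the paper's implementation.

```python
# Illustrative sketch of a penalty-based cost/fitness for VM placement
# (lower is better). All dictionaries, weights and the power model are
# hypothetical, not the paper's actual algorithm.
def fitness(placement, current_placement, vms, hosts, flows,
            w_energy=1.0, w_traffic=1.0, w_migration=1.0, penalty=1e6):
    # placement / current_placement: dict vm_id -> host_id
    # vms:   dict vm_id -> {"cpu": demand, "mem": size}
    # hosts: dict host_id -> {"cpu_cap": ..., "idle_w": ..., "dyn_w": ...}
    # flows: dict (vm_a, vm_b) -> traffic rate
    load = {h: 0.0 for h in hosts}
    for vm, host in placement.items():
        load[host] += vms[vm]["cpu"]

    # Energy proxy: idle power of every active host plus load-proportional power.
    energy = sum(hosts[h]["idle_w"] + hosts[h]["dyn_w"] * load[h] / hosts[h]["cpu_cap"]
                 for h in hosts if load[h] > 0)

    # Inter-VM traffic that crosses host boundaries in the new placement.
    traffic = sum(rate for (a, b), rate in flows.items()
                  if placement[a] != placement[b])

    # Migration cost: memory copied for every VM that moves off its current host.
    migration = sum(vms[vm]["mem"] for vm in placement
                    if placement[vm] != current_placement[vm])

    cost = w_energy * energy + w_traffic * traffic + w_migration * migration
    # The penalty keeps infeasible individuals (overloaded hosts) in the GA
    # population while making them unattractive to selection.
    overload = sum(max(0.0, load[h] - hosts[h]["cpu_cap"]) for h in hosts)
    return cost + penalty * overload
```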
Abstract:
In large flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. However, mitigating bloat is not easy; efforts are best applied where the savings would be substantial. To aid this, we develop an analytical model establishing the relationship between resource bottlenecks, bloat, performance and power. Analyses with the model place into perspective the results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm predictions from our model with selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat. The impact of bloat on physical resource usage and power should be understood from a full-systems perspective in order to properly deploy bloat-reduction solutions and reap their power-performance benefits.
Abstract:
The twin demands of energy efficiency and higher performance on DRAM are highly emphasized in multicore architectures. A variety of schemes have been proposed to address either the latency or the energy consumption of DRAMs. These schemes typically require non-trivial hardware changes and end up improving latency at the cost of energy, or vice versa. One specific DRAM performance problem in multicores is that interleaved accesses from different cores can degrade row-buffer locality. In this paper, based on the temporal and spatial locality characteristics of memory accesses, we propose a reorganization of the existing single large row-buffer in a DRAM bank into multiple sub-row buffers (MSRB). This reorganization not only improves row hit rates, and hence the average memory latency, but also brings down the energy consumed by the DRAM. The first major contribution of this work is proposing such a reorganization without requiring any significant changes to the existing, widely accepted DRAM specifications. Our proposed reorganization improves weighted speedup by 35.8%, 14.5% and 21.6% in quad-, eight- and sixteen-core workloads, along with 42%, 28% and 31% reductions in DRAM energy. The proposed MSRB organization enables opportunities for managing multiple row-buffers at the memory controller level. Since the memory controller is aware of the behaviour of individual cores, it allows us to implement coordinated buffer-allocation schemes for different cores that take program behaviour into account. We demonstrate two such schemes, namely Fairness Oriented Allocation and Performance Oriented Allocation, which show the flexibility that memory controllers can now exploit in our MSRB organization to improve overall performance and/or fairness. Further, the MSRB organization enables additional opportunities for DRAM intra-bank parallelism and selective early precharging of the LRU row-buffer to further improve memory access latencies. These two optimizations together provide an additional 5.9% performance improvement.
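The behavioural sketch below illustrates the core idea of replacing one large row-buffer with several LRU-managed sub-row buffers; the buffer count and the simplified hit/miss accounting are illustrative assumptions, not the proposed hardware design.

```python
# Behavioural sketch of a DRAM bank with multiple sub-row buffers and LRU
# replacement. This only models hit/miss behaviour; timing and energy are
# deliberately omitted.
from collections import OrderedDict

class MultiSubRowBufferBank:
    def __init__(self, num_buffers=4):
        self.buffers = OrderedDict()   # row_id -> True, ordered by recency of use
        self.num_buffers = num_buffers
        self.hits = self.misses = 0

    def access(self, row_id):
        if row_id in self.buffers:          # row-buffer hit: no activate needed
            self.buffers.move_to_end(row_id)
            self.hits += 1
            return "hit"
        self.misses += 1
        if len(self.buffers) >= self.num_buffers:
            # Evict (precharge) the least-recently-used sub-row buffer.
            self.buffers.popitem(last=False)
        self.buffers[row_id] = True         # activate the row into a free sub-buffer
        return "miss"
```

A controller-level allocation scheme such as the fairness- or performance-oriented policies mentioned above could be modelled by partitioning `num_buffers` among cores instead of sharing one pool.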
Abstract:
Wireless Sensor Networks have gained popularity due to their real-time applications and low-cost nature. These networks provide solutions for scenarios that are critical, complicated and sensitive, such as military fields, habitat monitoring and disaster management. The nodes in wireless sensor networks are highly resource constrained. Routing protocols are designed to make efficient use of the available resources when communicating a message from source to destination. In addition to resource management, the trustworthiness of neighboring or forwarding nodes and the energy level of the nodes, which keeps the network alive for a longer duration, must be considered. This paper proposes a QoS Aware Trust Metric based Framework for Wireless Sensor Networks. The proposed framework safeguards a wireless sensor network from intruders by considering the trustworthiness of the forwarder node at every stage of multi-hop routing. It increases network lifetime by considering the energy level of the nodes and prevents an adversary from tracing the route from source to destination by providing path variation. The framework is built on the NS2 simulator. Experimental results show that the framework provides energy balance through the establishment of trustworthy paths from the source to the destination. (C) 2015 The Authors. Published by Elsevier B.V.
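A minimal sketch of the next-hop selection idea, assuming hypothetical trust and residual-energy tables and illustrative weights (not the authors' NS2 implementation), is shown below.

```python
# Illustrative sketch: pick the next-hop forwarder by combining trust and
# residual energy, then randomise among the best candidates so that
# successive packets take varied paths.
import random

def select_forwarder(neighbors, trust, residual_energy,
                     trust_min=0.5, w_trust=0.6, w_energy=0.4, top_k=3):
    # Discard neighbours whose trust value falls below the threshold.
    candidates = [n for n in neighbors if trust[n] >= trust_min]
    if not candidates:
        return None   # no trustworthy neighbour available
    scored = sorted(candidates,
                    key=lambda n: w_trust * trust[n] + w_energy * residual_energy[n],
                    reverse=True)
    # Random choice among the top-k candidates provides path variation,
    # which makes route tracing by an adversary harder.
    return random.choice(scored[:top_k])
```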
Abstract:
Targets to cut 2050 CO2 emissions in the steel and aluminium sectors by 50%, whilst demand is expected to double, cannot be met by energy efficiency measures alone, so options that reduce the total demand for liquid metal production must also be considered. Such reductions could occur through reduced demand for final goods (for instance by life extension), reduced demand for material in each product (for instance by lightweight design) or reduced demand for material to make existing products. The last option, improving the yield of manufacturing processes from liquid metal to final product, is attractive in being invisible to the final customer, but has had little attention to date. Accordingly, this paper aims to provide an estimate of the potential to make existing products with less liquid metal production. Yield ratios have been measured for five case-study products through a series of detailed factory visits along each supply chain. The results of these studies, presented as graphs of cumulative energy against yield, demonstrate how the embodied energy in final products may be up to 15 times greater than the energy required to make the liquid metal, due to yield losses. A top-down evaluation of the global flows of steel and aluminium showed that 26% of liquid steel and 41% of liquid aluminium produced does not make it into final products, but is diverted as process scrap and recycled. Reducing this scrap would substitute for production by recycling and could reduce total energy use by 17% and 6%, and total CO2 emissions by 16% and 7%, for the steel and aluminium industries respectively, using the forming and fabrication energy values from the case studies. The abatement potential of process-scrap elimination is similar in magnitude to worldwide implementation of best available standards of energy efficiency, and demonstrates how decreasing the recycled content may sometimes result in emission reductions. Evidence from the case studies suggests that whilst most companies are aware of their own yield ratios, few, if any, are fully aware of the cumulative losses along their whole supply chain. Addressing yield losses requires this awareness to motivate collaborative approaches to improvement. © 2011 Elsevier B.V. All rights reserved.
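A small worked example with hypothetical stage yields shows how losses along a supply chain compound, multiplying the embodied energy per unit of final product relative to the liquid-metal energy; the numbers below are illustrative, not the case-study values.

```python
# Hypothetical illustration of compounding yield losses along a supply chain.
stage_yields = [0.85, 0.80, 0.75, 0.70]   # yield of each forming/fabrication stage
liquid_metal_energy = 25.0                # MJ per kg of liquid metal (illustrative)

cumulative_yield = 1.0
for y in stage_yields:
    cumulative_yield *= y                 # fraction of liquid metal reaching the product

# Liquid-metal energy embodied per kg of final product (forming energy ignored).
embodied = liquid_metal_energy / cumulative_yield
print(f"cumulative yield = {cumulative_yield:.2f}")            # ~0.36
print(f"embodied energy  = {embodied:.1f} MJ/kg of product")   # ~70 MJ/kg
```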
Abstract:
Background & aims: Little is known about energy requirements in traumatic brain injured (TBI) patients, despite evidence suggesting that adequate nutritional support can improve clinical outcomes. The study aim was to compare predicted energy requirements with measured resting energy expenditure (REE) values in patients recovering from TBI.
Methods: Indirect calorimetry (IC) was used to measure REE in 45 patients with TBI. Predicted energy requirements were determined using FAO/WHO/UNU and Harris–Benedict (HB) equations. Bland–Altman and regression analysis were used for analysis.
Results: One hundred and sixty-seven successful measurements were recorded in patients with TBI. At an individual level, both equations predicted REE poorly. The mean of the differences of standardised areas of measured REE and FAO/WHO/UNU was near zero (9 kcal), but the variation in both directions was substantial (range −591 to +573 kcal). Similarly, the differences of areas of measured REE and HB demonstrated a mean of 1.9 kcal and a range of −568 to +571 kcal. Glasgow coma score, patient status, weight and body temperature were significant predictors of measured REE (p < 0.001; R2 = 0.47).
Conclusions: Clinical equations are poor predictors of measured REE in patients with TBI. The variability in REE is substantial. Clinicians should be aware of the limitations of prediction equations when estimating energy requirements in TBI patients.
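For readers unfamiliar with the comparator equations, the sketch below shows one commonly cited form of the original Harris–Benedict equation; it is included purely to illustrate how a prediction equation maps weight, height, age and sex to an REE estimate, and is not the study's protocol (the age-band-specific FAO/WHO/UNU equations are omitted).

```python
# Illustrative only: a commonly cited form of the original Harris-Benedict
# equation for predicted resting energy expenditure (kcal/day). Coefficients
# are quoted from the classic equation, not taken from this study.
def harris_benedict_ree(weight_kg, height_cm, age_yr, sex):
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# Example: a 70 kg, 175 cm, 30-year-old male.
print(round(harris_benedict_ree(70, 175, 30, "male")))  # ~1702 kcal/day
```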
Abstract:
Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
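As a rough illustration of the kind of decision such a runtime makes, the sketch below enumerates (thread count, frequency) configurations and picks the lowest-energy one within a performance bound; `predict_time` and `predict_power` stand in for the paper's statistical models and are hypothetical.

```python
# Minimal sketch of a DCT/DVFS configuration search (not the paper's models):
# choose the (threads, frequency) pair with the lowest predicted energy,
# subject to a bound on slowdown relative to the fastest configuration.
def best_configuration(thread_counts, frequencies,
                       predict_time, predict_power, max_slowdown=1.05):
    baseline = min(predict_time(t, f) for t in thread_counts for f in frequencies)
    best, best_energy = None, float("inf")
    for t in thread_counts:
        for f in frequencies:
            time = predict_time(t, f)
            if time > max_slowdown * baseline:
                continue                      # too slow: violates performance bound
            energy = time * predict_power(t, f)
            if energy < best_energy:
                best, best_energy = (t, f), energy
    return best
```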
Abstract:
The demand for richer multimedia services, multifunctional portable devices and high data rates can only be envisioned thanks to improvements in semiconductor technology. Unfortunately, sub-90 nm process nodes open the nanometer Pandora's box, exposing the barriers of technology scaling: parameter variations, which threaten the correct operation of circuits, and increased energy consumption, which limits the operational lifetime of today's systems. The contradictory design requirements of low power and system robustness constitute one of the most challenging design problems of today. Design efforts are further complicated by the heterogeneous types of designs (logic, memory, mixed-signal) included in today's complex systems, each characterized by different design requirements. This paper presents an overview of techniques at various levels of design abstraction that lead to low-power and variation-aware logic, memory and mixed-signal circuits, and that can potentially assist in meeting the strict power budgets and yield/quality requirements of future systems.
Abstract:
Approximate execution is a viable technique for energy-constrained environments, provided that applications have the mechanisms to produce outputs of the highest possible quality within the given energy budget.
We introduce a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows users to express the relative importance of computations for the quality of the end result, as well as minimum quality requirements. The significance-aware runtime system uses an application-specific analytical energy model to identify the degree of concurrency and approximation that maximizes quality while meeting user-specified energy constraints. Evaluation on a dual-socket 8-core server shows that the proposed framework predicts the optimal configuration with high accuracy, enabling energy-constrained executions that result in significantly higher quality compared to loop perforation, a compiler approximation technique.
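A minimal sketch of the runtime's decision step, assuming black-box `estimate_energy` and `estimate_quality` functions (the actual framework uses an application-specific analytical energy model), might look as follows.

```python
# Illustrative sketch: among candidate (cores, frequency, approximation degree)
# configurations, pick the one with the highest predicted quality whose
# predicted energy fits the user-specified budget.
def choose_configuration(configs, energy_budget, estimate_energy, estimate_quality):
    feasible = [c for c in configs if estimate_energy(c) <= energy_budget]
    if not feasible:
        # Fall back to the cheapest configuration if the budget is infeasible.
        return min(configs, key=estimate_energy)
    return max(feasible, key=estimate_quality)
```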
Abstract:
Current variation-aware design methodologies, tuned for worst-case scenarios, are becoming increasingly pessimistic from the perspective of power and performance. A good example of such pessimism is setting the refresh rate of DRAMs according to worst-case access statistics, resulting in very frequent refresh cycles that are responsible for the majority of the standby power consumption of these memories. However, such a high refresh rate may not be required, either because the actual occurrence of such a worst case is extremely improbable, or because many applications are inherently error resilient and can tolerate a certain number of potential failures. In this paper, we exploit and quantify the opportunities that exist in dynamic memory design by shifting to the so-called approximate computing paradigm in order to save power and enhance yield at no cost. The statistical characteristics of the retention time in dynamic memories were revealed by studying a fabricated 2 kb CMOS-compatible embedded DRAM (eDRAM) memory array based on gain cells. Measurements show that up to 73% of the retention power can be saved by altering the refresh time and setting it such that a small number of failures is allowed. We show that these savings can be further increased by utilizing known circuit techniques, such as body biasing, which can help not only to extend but also to favourably shape the retention time distribution. Our approach is one of the first attempts to assess the data-integrity and energy trade-offs achievable in eDRAMs for use in error-resilient applications, and can prove helpful in the anticipated shift to approximate computing.
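The sketch below illustrates the underlying trade-off with hypothetical numbers: given per-cell retention times, it finds the longest refresh interval whose expected failure fraction stays within what the application tolerates, and estimates the corresponding refresh-power saving. It is not the measurement flow used for the fabricated array.

```python
# Hypothetical illustration of relaxed-refresh savings in a dynamic memory.
def longest_safe_refresh(retention_times_ms, max_fail_fraction):
    """Longest refresh interval keeping the failing-cell fraction within budget."""
    times = sorted(retention_times_ms)
    n = len(times)
    best = times[0]
    for i, t in enumerate(times):
        fail_fraction = i / n        # cells whose retention is shorter than t may fail
        if fail_fraction <= max_fail_fraction:
            best = t
    return best

def refresh_power_saving(baseline_interval_ms, relaxed_interval_ms):
    # Refresh power scales roughly with refresh frequency (1 / interval).
    return 1.0 - baseline_interval_ms / relaxed_interval_ms

# Example: relaxing the refresh interval from 1 ms to 4 ms would cut
# refresh power by about 75% if the extra failures are tolerable.
print(refresh_power_saving(1.0, 4.0))
```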
Abstract:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique), shows that the proposed framework results in significantly higher quality for the same energy budget.
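To illustrate the flavour of such a programming model, the sketch below uses hypothetical names (`Task`, `run`, `dot_approx`) for tasks that carry a significance value and an optional approximate version; it is not the framework's actual API.

```python
# Illustrative sketch of a significance-annotated task model.
import random

class Task:
    def __init__(self, fn, significance, approx_fn=None):
        self.fn = fn                        # accurate version of the computation
        self.significance = significance    # relative importance in [0, 1]
        self.approx_fn = approx_fn          # optional low-cost approximate version

def run(task, approximate):
    # A runtime would approximate only non-significant tasks that supply
    # an approximate version; here the decision is passed in directly.
    if approximate and task.approx_fn is not None:
        return task.approx_fn()
    return task.fn()

# Example: an accurate dot product and a sampled approximation of it.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_approx(a, b, frac=0.25):
    k = max(1, int(frac * len(a)))
    idx = random.sample(range(len(a)), k)
    return sum(a[i] * b[i] for i in idx) * len(a) / k

a = [1.0] * 1000
b = [2.0] * 1000
t = Task(lambda: dot(a, b), significance=0.2, approx_fn=lambda: dot_approx(a, b))
print(run(t, approximate=True))   # estimate of 2000.0 from a 25% sample
```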
Abstract:
This study introduces an inexact, but ultra-low-power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads and should therefore be employed judiciously. Herein, the authors propose a novel scheme for designing inexact computing architectures that selectively protects memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the characteristics of the bio-signal application. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence shows that a significance-based memory protection approach leads to a small degradation in output quality with respect to an exact implementation, while resulting in substantial energy gains, both in the memory and in the processing subsystem.
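A simple way to picture significance-based protection is the greedy placement sketch below, which assigns the most significant buffers to a protected memory region until its capacity is exhausted; region sizes, buffer names and significance values are hypothetical.

```python
# Illustrative sketch of significance-based memory placement: highly
# significant data goes to the protected (e.g. robust-cell or ECC) region,
# the rest to unprotected, lower-energy SRAM.
def place_buffers(buffers, protected_capacity):
    """buffers: list of (name, size_bytes, significance) tuples."""
    protected, unprotected, used = [], [], 0
    # Greedily protect the most significant buffers first.
    for name, size, significance in sorted(buffers, key=lambda b: -b[2]):
        if used + size <= protected_capacity:
            protected.append(name)
            used += size
        else:
            unprotected.append(name)
    return protected, unprotected

# Example: spectrum results matter more to output quality than raw samples.
layout = place_buffers(
    [("fft_coeffs", 2048, 0.9), ("spectrum", 4096, 0.8), ("raw_samples", 8192, 0.3)],
    protected_capacity=6144)
print(layout)   # (['fft_coeffs', 'spectrum'], ['raw_samples'])
```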
Abstract:
Real-time systems demand guaranteed and predictable run-time behaviour in order to ensure that no task misses its deadline. Over the years we have witnessed an ever increasing demand for functionality enhancements in embedded real-time systems. Along with the functionalities, the design itself grows more complex. Imposed constraints, such as energy consumption, time and space bounds, also require attention and proper handling. Additionally, efficient scheduling algorithms, as proven through analyses and simulations, often impose requirements that have a significant run-time cost, especially in the context of multi-core systems. In order to further investigate the behaviour of such systems and to quantify and compare the overheads involved, we have developed SPARTS, a simulator of a generic embedded real-time device. The tasks in the simulator are described by externally visible parameters (e.g. minimum inter-arrival time, sporadicity, WCET, BCET, etc.), rather than by the code of the tasks. While our current implementation is primarily focused on our immediate needs in the area of power-aware scheduling, it is designed to be extensible to accommodate different task properties, scheduling algorithms and/or hardware models for application in a wide variety of simulations. The source code of SPARTS is available for download at [1].
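The sketch below shows the kind of externally visible task description such a simulator consumes; the field names and release rule are illustrative, not SPARTS's actual API.

```python
# Illustrative task descriptor built from the parameters listed above.
from dataclasses import dataclass

@dataclass
class TaskDescriptor:
    min_inter_arrival: float   # minimum time between job releases
    sporadicity: float         # extra random delay allowed beyond the minimum
    wcet: float                # worst-case execution time
    bcet: float                # best-case execution time
    deadline: float            # relative deadline of each job

    def next_release(self, now, jitter):
        # Sporadic release: at least min_inter_arrival after 'now',
        # with jitter in [0, 1] scaling the sporadic slack.
        return now + self.min_inter_arrival + jitter * self.sporadicity
```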