64 results for Energy efficiency policy


Relevance: 80.00%

Abstract:

In ultra-low data rate wireless sensor networks (WSNs), waking up just to listen to a beacon every superframe can be a major waste of energy. This study introduces MedMAC, a medium access protocol for ultra-low data rate WSNs that achieves significant energy efficiency through a novel synchronisation mechanism. The new draft IEEE 802.15.6 standard for body area networks includes a sub-class of applications, such as medical implantable devices and long-term micro-miniature sensors, with ultra-low power requirements. It will be desirable for these devices to have 10 years or more of operation between battery changes, or to have average current requirements matched to energy harvesting technology. Simulation results show that MedMAC allows nodes to maintain synchronisation to the network while sleeping through many beacons, with a significant increase in energy efficiency during periods of particularly low data transfer. Results from a comparative analysis of MedMAC and the IEEE 802.15.6 MAC show that MedMAC has superior efficiency, with energy savings of between 25% and 87% for the presented scenarios. © 2011 The Institution of Engineering and Technology.
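The beacon-skipping saving can be sketched with a toy duty-cycle energy model. All currents and timings below are illustrative assumptions, not figures from the MedMAC study:

```python
# Toy energy model for a node that sleeps through beacons.
# All constants are assumed for illustration only.
def avg_current_uA(beacon_interval_s, beacons_skipped,
                   listen_ms=2.0, i_listen_uA=10000.0, i_sleep_uA=1.0):
    """Average current when the node wakes only every (beacons_skipped + 1) beacons."""
    cycle_s = beacon_interval_s * (beacons_skipped + 1)
    listen_s = listen_ms / 1000.0
    return (i_listen_uA * listen_s + i_sleep_uA * (cycle_s - listen_s)) / cycle_s

every_beacon = avg_current_uA(1.0, 0)    # wake for every beacon
skip_31 = avg_current_uA(1.0, 31)        # sleep through 31 of every 32 beacons
saving = 1 - skip_31 / every_beacon
```

With these assumed numbers, skipping 31 of every 32 beacons cuts the average current by roughly an order of magnitude, which is the qualitative effect the abstract describes.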

Relevance: 80.00%

Abstract:

Since the UN report by the Brundtland Commission, sustainability in the built environment has mainly been viewed through a technical focus on single buildings or products. With energy efficiency approaching 100%, fossil resources depleting and a considerable part of the world still in need of greater prosperity, the playing field of a technical focus has become very limited. It will most probably not lead to the sustainable development needed to avoid irreversible effects on climate, energy provision and, not least, society.
Cities are complex structures of independently functioning elements, all of which are nevertheless connected to different forms of infrastructure, which supply the necessary resources or handle the release of waste material. With the current ambitions regarding carbon or energy neutrality, retreating again to the scale of a single building is likely to fail. Within an urban context a single building cannot become fully resource-independent, and from our viewpoint it need not. Cities should instead be considered as organisms with the ability to intelligently exchange resources and waste flows. Especially in terms of energy, the present situation in most cities is undesirable: there is simultaneous demand for heat and cold, and in summer much excess energy is lost that must be produced again in winter. The solution is a system that intelligently exchanges and stores essential resources, e.g. energy, and that optimally utilises waste flows.
This new approach will be discussed and exemplified. The Rotterdam Energy Approach and Planning (REAP) will be illustrated as a means for urban planning, whereas Swarm Planning will be introduced as another nature-based principle for swift changes towards sustainability.

Relevance: 80.00%

Abstract:

We consider a multipair decode-and-forward relay channel, where multiple sources simultaneously transmit their signals to multiple destinations with the help of a full-duplex relay station. We assume that the relay station is equipped with massive antenna arrays, while all sources and destinations have a single antenna. The relay station uses channel estimates obtained from received pilots and zero-forcing (ZF) or maximum-ratio combining/maximum-ratio transmission (MRC/MRT) to process the signals. To significantly reduce the loop interference effect, we propose two techniques: i) using a massive receive antenna array; or ii) using a massive transmit antenna array together with very low transmit power at the relay station. We derive an exact achievable rate in closed form for MRC/MRT processing and an analytical approximation of the achievable rate for ZF processing. This approximation is very tight, especially for large numbers of relay station antennas. These closed-form expressions enable us to determine the regions where the full-duplex mode outperforms the half-duplex mode, as well as to design an optimal power allocation scheme. This scheme maximizes the energy efficiency for a given sum spectral efficiency under peak power constraints at the relay station and sources. Numerical results verify the effectiveness of the optimal power allocation scheme. Furthermore, we show that, by doubling the number of transmit/receive antennas at the relay station, the transmit power of each source and of the relay station can be reduced by 1.5 dB if the pilot power is equal to the signal power, and by 3 dB if the pilot power is kept fixed, while maintaining a given quality of service.
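The 1.5 dB and 3 dB figures are consistent with the power-scaling laws commonly derived for massive arrays (transmit power cut as 1/sqrt(N) when pilot power tracks signal power, and as 1/N when pilot power is fixed); reading them that way is our interpretation, sketched below:

```python
import math

def db(ratio):
    """Express a power ratio in decibels."""
    return 10 * math.log10(ratio)

# Doubling the antenna count N:
#   power ~ 1/sqrt(N)  ->  saving = 10*log10(sqrt(2)) ~= 1.5 dB
#   power ~ 1/N        ->  saving = 10*log10(2)       ~= 3 dB
saving_equal_pilot = db(math.sqrt(2))
saving_fixed_pilot = db(2)
```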

Relevance: 80.00%

Abstract:

In this paper, we investigate the impact of circuit misbehavior due to parametric variations and voltage scaling on the performance of wireless communication systems. Our study reveals the inherent error resilience of such systems and argues that sufficiently reliable operation can be maintained even in the presence of unreliable circuits and manufacturing defects. We further show how selective application of more robust circuit design techniques is sufficient to deal with high defect rates at low overhead and improve energy efficiency with negligible system performance degradation.

Relevance: 80.00%

Abstract:

The end of Dennard scaling has pushed power consumption into a first-order concern for current systems, on par with performance. As a result, near-threshold voltage computing (NTVC) has been proposed as a potential means to tackle the limited cooling capacity of CMOS technology. Hardware operating at near-threshold voltage consumes significantly less power, at the cost of lower frequency and thus reduced performance, as well as increased error rates. In this paper, we investigate whether a low-power system-on-chip based on ARM's asymmetric big.LITTLE technology can be an alternative to conventional high-performance multicore processors in terms of power/energy in an unreliable scenario. For our study, we use the Conjugate Gradient solver, an algorithm representative of the computations performed by a large range of scientific and engineering codes.
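As a reference point for the benchmark, here is a minimal conjugate gradient solver for symmetric positive-definite systems; it is a textbook sketch, not the implementation used in the study:

```python
# Textbook conjugate gradient for a dense symmetric positive-definite system
# A x = b, using plain Python lists (sketch, not the authors' code).
def cg(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:   # converged: residual norm small enough
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 2x2 example: exact solution is x = (1/11, 7/11).
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```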

Relevance: 80.00%

Abstract:

Several studies in the last decade have pointed out that many devices, such as computers, are often left powered on even when idle, just to keep them available and reachable on the network, leading to large energy waste. The concept of the network connectivity proxy (NCP) has been proposed as an effective means to improve energy efficiency. It impersonates the presence of networked devices that are temporarily unavailable by carrying out background networking routines on their behalf; hence, idle devices can be put into low-power states and save energy. Several architectural alternatives, and the applicability of this concept to different protocols and applications, have been investigated. However, there is no clear understanding of the limitations and issues of this approach in current networking scenarios. This paper extends the knowledge about the NCP by defining an extended set of tasks that the NCP can carry out, by introducing a suitable communication interface to control NCP operation, and by designing, implementing, and evaluating a functional prototype.

Relevance: 80.00%

Abstract:

We present a mathematically rigorous methodology that relates the achievable Quality of Service (QoS) of a real-time analytics service to the server energy cost of offering that service. Using a new iso-QoS evaluation methodology, we scale server resources to meet QoS targets and directly rank the servers in terms of their energy efficiency and, by extension, cost of ownership. Our metric and method are platform-independent and enable fair comparison of datacenter compute servers with significant architectural diversity, including micro-servers. We deploy our metric and methodology to compare three servers running financial option pricing workloads on real-life market data. We find that server ranking is sensitive to data inputs and to the desired QoS level, and that although scale-out micro-servers can be up to two times more energy-efficient than conventional heavyweight servers for the same target QoS, they are still six times less energy-efficient than high-performance computational accelerators.
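The iso-QoS ranking idea can be sketched as follows: scale each platform until it just meets the QoS target, then rank by QoS-compliant throughput per watt at that point. The platform names and numbers below are invented for illustration and mirror the reported 2x and 6x ratios:

```python
# Toy iso-QoS ranking: throughput per watt once each platform is scaled
# to just meet the QoS target.  All numbers are illustrative assumptions.
def energy_efficiency(power_w, throughput_at_qos):
    """Requests per joule at the iso-QoS operating point."""
    return throughput_at_qos / power_w

servers = {
    "micro-server": energy_efficiency(30.0, 120.0),    # 4.0 req/J
    "heavyweight":  energy_efficiency(400.0, 800.0),   # 2.0 req/J
    "accelerator":  energy_efficiency(250.0, 6000.0),  # 24.0 req/J
}
ranking = sorted(servers, key=servers.get, reverse=True)
```

With these assumed figures the micro-server is twice as efficient as the heavyweight server, and the accelerator six times more efficient than the micro-server, matching the shape of the abstract's conclusion.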

Relevance: 80.00%

Abstract:

A methodology is presented that combines a multi-objective evolutionary algorithm and artificial neural networks to optimise single-storey steel commercial buildings for net-zero carbon impact. Both symmetric and asymmetric geometries are considered in conjunction with regulated, unregulated and embodied carbon. Offsetting is achieved through photovoltaic (PV) panels integrated into the roof. Asymmetric geometries can increase the south-facing surface area and consequently allow for improved PV energy production. An exemplar carbon and energy breakdown of a retail unit located in Belfast, UK, with a south-facing PV roof is considered. It was found in most cases that regulated energy offsetting can be achieved with symmetric geometries; however, asymmetric geometries were necessary to account for the unregulated and embodied carbon. For buildings with large volumes due to high eaves, carbon offsetting became increasingly difficult, and was not possible in certain cases. The use of asymmetric geometries was found to allow for lower-embodied-energy structures with carbon performance similar to that of symmetrical structures.

Relevance: 80.00%

Abstract:

We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.

Relevance: 80.00%

Abstract:

Peak power consumption is the first-order design constraint of data centers. Though peak power consumption is rarely, if ever, observed, the entire data center facility must prepare for it, leading to inefficient usage of its resources. The most prominent way of addressing this issue is to limit the power consumption of the data center IT facility far below its theoretical peak value. Many approaches have been proposed to achieve that, based on the same small set of enforcement mechanisms, but there has been no corresponding work on systematically examining the advantages and disadvantages of each such mechanism. In the absence of such a study, it is unclear which mechanism is optimal for a given computing environment, which can lead to unnecessarily poor performance if an inappropriate scheme is used. This paper fills this gap by comparing, for the first time, five widely used power capping mechanisms under the same hardware/software setting. We also explore possible alternative power capping mechanisms beyond what has been previously proposed and evaluate them under the same setup. We systematically analyze the strengths and weaknesses of each mechanism in terms of energy efficiency, overhead, and predictable behavior. We show how these mechanisms can be combined to implement an optimal power capping mechanism which reduces the slowdown, compared to the most widely used mechanism, by up to 88%. Our results provide interesting insights regarding the different trade-offs of power capping techniques, which will be useful for designing and implementing highly efficient power capping in the future.
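A power capping enforcement mechanism of the kind compared here can be sketched as a feedback loop over a frequency knob; the cubic power-frequency model and all constants below are illustrative assumptions, not any vendor's actual controller:

```python
# Toy feedback power-capping loop: lower the frequency step when measured
# power exceeds the cap, raise it only when there is headroom to do so.
def cap_power(power_model, cap_w, steps=50):
    f = 1.0                                   # normalised frequency in [0.1, 1.0]
    for _ in range(steps):
        p = power_model(f)
        if p > cap_w:
            f = max(0.1, f - 0.05)            # throttle down
        elif power_model(min(1.0, f + 0.05)) <= cap_w:
            f = min(1.0, f + 0.05)            # safe to speed up
    return f

# Assumed cubic model: dynamic power ~ f * V^2 with V scaling with f.
final_f = cap_power(lambda f: 100.0 * f ** 3, cap_w=50.0)
```

The loop settles at the highest frequency step whose modelled power stays under the 50 W cap.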

Relevance: 80.00%

Abstract:

The need for fast-response demand side participation (DSP) has never been greater, due to increased wind power penetration. White goods suppliers are currently developing a 'smart' chip for a range of domestic appliances (e.g. refrigeration units, tumble dryers and storage heaters) to support the home as a DSP unit in future power systems. This paper presents an aggregated population-based model of a single-compressor fridge-freezer. Two scenarios (i.e. energy efficiency class and size) for valley filling and peak shaving are examined to quantify and value DSP savings in 2020. The analysis shows that potential peak reductions of 40 MW to 55 MW are achievable in the Single wholesale Electricity Market of Ireland (the test system), along with valley demand increases of up to 30 MW. The study also shows the importance of the control strategy start time and of staggering the devices to obtain the desired filling or shaving effect.
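The importance of staggering can be sketched with a toy aggregated population model; the duty cycle, wattage and population size below are illustrative assumptions, not the paper's parameters:

```python
import random

# Toy population model: each fridge runs its compressor for on_min minutes
# out of every period_min minutes.  Staggering the start times flattens the
# aggregate load; synchronised starts create an artificial peak.
def aggregate_load(n_fridges, period_min=60, on_min=20, watts=100,
                   staggered=True, seed=0):
    rng = random.Random(seed)
    offsets = [rng.randrange(period_min) if staggered else 0
               for _ in range(n_fridges)]
    load = []
    for t in range(period_min):
        on = sum(1 for o in offsets if (t - o) % period_min < on_min)
        load.append(on * watts)
    return load

sync = aggregate_load(1000, staggered=False)   # all compressors start together
stag = aggregate_load(1000, staggered=True)    # random start offsets
```

Total energy over the period is identical in both cases; only the peak moves, which is the point of the control strategy.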

Relevance: 80.00%

Abstract:

Wearable devices performing advanced bio-signal analysis algorithms aim to foster a revolution in healthcare provision for chronic cardiac diseases. In this context, energy efficiency is of paramount importance, as long-term monitoring must be ensured while relying on a tiny power source. Operating at a scaled supply voltage, just above the threshold voltage, effectively helps to save substantial energy, but it makes circuits, and especially memories, more prone to errors, threatening the correct execution of algorithms. The use of error detection and correction codes can protect the entire memory content; however, it incurs large area and energy overheads that may not be compatible with the tight energy budgets of wearable systems. To cope with this challenge, in this paper we propose to limit the overhead of traditional schemes by selectively detecting and correcting errors only in the data that most strongly affect the end-to-end quality of service of ultra-low-power wearable electrocardiogram (ECG) devices. This partitioning protects either the significant words or the significant bits of each data element, according to the application characteristics (the statistical properties of the data in the application buffers) and their impact on the output. Applied to real ECG signals, the proposed heterogeneous error protection scheme allows substantial energy savings (11% in wearable devices) compared to state-of-the-art approaches, such as ECC, in which the whole memory is protected against errors. At the same time, it results in negligible output quality degradation in the evaluated power spectrum analysis application of ECG signals.
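A minimal sketch of significant-bit protection, using triple modular redundancy on the high bits as a stand-in for the paper's coding scheme (the bit widths and the choice of TMR are our assumptions for illustration):

```python
# Protect only the significant (high) bits of each sample with triple
# modular redundancy; the low bits are left unprotected.
def protect(sample, sig_bits=8, width=16):
    high = sample >> (width - sig_bits)
    low = sample & ((1 << (width - sig_bits)) - 1)
    return [high, high, high], low          # three copies of the high bits

def recover(copies, low, sig_bits=8, width=16):
    a, b, c = copies
    high = (a & b) | (a & c) | (b & c)      # bitwise majority vote
    return (high << (width - sig_bits)) | low

copies, low = protect(0xBEEF)
copies[1] ^= 0x40                           # inject a bit flip in one copy
restored = recover(copies, low)
```

A single upset in any one copy of the significant bits is voted out, while an upset in the low bits would pass through, which is exactly the trade-off the scheme accepts to cut protection overhead.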

Relevance: 80.00%

Abstract:

Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel, coarse-grain and fine-grain, programmable hardware accelerators. It will also build a high-level programming framework that allows end users to seamlessly utilize these accelerators in heterogeneous computing systems by employing typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark, etc.). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.

Relevance: 80.00%

Abstract:

In dynamic spectrum access networks, cognitive radio terminals monitor their spectral environment in order to detect and opportunistically access unoccupied frequency channels. The overall performance of such networks depends on the spectrum occupancy or availability patterns. Accurate knowledge of channel availability enables optimum performance of such networks in terms of spectrum and energy efficiency. This work proposes a novel probabilistic channel availability model that can describe the channel availability in different polarizations for mobile cognitive radio terminals that are likely to change their orientation during operation. A Gaussian approximation is used to model the empirical occupancy data, obtained through a measurement campaign in the cellular frequency bands within a realistic operational scenario.
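A minimal sketch of such a Gaussian availability model: fit a normal distribution to per-channel occupancy and read off the probability that occupancy stays below a usability threshold. The parameters below are illustrative, not values from the measurement campaign:

```python
from statistics import NormalDist

# Model per-channel duty cycle (fraction of time occupied) as Gaussian and
# derive the probability the channel is "available enough" for opportunistic
# access.  Mean, spread and threshold are assumed for illustration.
def availability_prob(mean_occupancy, std_occupancy, max_occupancy=0.3):
    """P(occupancy <= max_occupancy) under the fitted Gaussian model."""
    return NormalDist(mean_occupancy, std_occupancy).cdf(max_occupancy)

# A channel occupied 20% +/- 5% of the time is usable ~97.7% of the time
# (the threshold sits two standard deviations above the mean).
p = availability_prob(0.2, 0.05)
```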