829 results for energy efficient
Abstract:
It has been proposed that the field of appropriate technology (AT), meaning small-scale, energy-efficient and low-cost solutions, can be of tremendous assistance in many sustainable development challenges, such as food and water security, health, shelter, education and work opportunities. Unfortunately, there has not yet been a significant uptake of AT by the organizations, researchers, policy makers or the mainstream public working in the many areas of the development sector. Some of the biggest barriers to greater AT engagement include: 1) AT being perceived as inferior or ‘poor person’s technology’, 2) questions of technological robustness, design, fit and transferability, 3) funding, 4) institutional support, as well as 5) general barriers associated with tackling rural poverty. With the rise of information and communication technologies (ICTs) for online networking and knowledge sharing, the possibilities to tap into collaborative open-access and open-source AT are growing, and so is the prospect for collective poverty-reducing strategies, enhanced entrepreneurship, communications, education and the diffusion of life-changing technologies. In short, the same collaborative philosophy behind the success of open source software can be applied to the hardware design of technologies to improve sustainable development efforts worldwide. To analyze current barriers to open source appropriate technology (OSAT) and explore opportunities to overcome them, a series of interviews with researchers and organizations working in the field of AT was conducted. The results confirmed the majority of the barriers identified in the literature, but also revealed that the most pressing problem for organizations and researchers currently working in the field of AT is the need for much better communication and collaboration, to share knowledge and resources and to work in partnership. In addition, the interviews showed general receptiveness to the principles of collaborative innovation and open source at the ground level. A much greater focus on networking, collaboration, demand-led innovation, community participation, and the inclusion of educational institutions through student involvement can significantly help build the necessary knowledge base, networks and critical-mass exposure for the growth of appropriate technology.
Abstract:
Environmental concerns relating to gaseous emissions from transport have led to growth in the use of compressed natural gas (CNG) vehicles worldwide, with an estimated 13 million Natural Gas Vehicles (NGVs) currently in operation. Across Europe, many countries are replacing traditional diesel oil in captive fleets, such as buses used for public transport and heavy and light goods vehicles used for freight and logistics, with CNG vehicles. Initially this was done to reduce localised air pollution in urban environments; however, with the need to reduce greenhouse gas emissions, CNG is now seen as a cleaner, more energy-efficient and environmentally friendly alternative. This paper briefly examines the growth of NGVs in Europe and worldwide. A case study on the introduction of CNG in Spain and Italy is then presented; as part of the case study, policy interventions are examined. Finally, a statistical analysis of private and public refuelling stations in both countries is provided. CNG can also be mixed with biogas. This study and the role of CNG are relevant because of the existing European Union Directive 2009/28/EC target requiring that 10% of transport energy come from renewable sources, not only biofuels such as biogas; CNG thus offers another alternative transport fuel.
Abstract:
Computing has recently reached an inflection point with the introduction of multicore processors. On-chip thread-level parallelism is doubling approximately every other year. Concurrency lends itself naturally to allowing a program to trade performance for power savings by regulating the number of active cores; however, in several domains, users are unwilling to sacrifice performance to save power. We present a prediction model for identifying energy-efficient operating points of concurrency in well-tuned multithreaded scientific applications and a runtime system that uses live program analysis to optimize applications dynamically. We describe a dynamic phase-aware performance prediction model that combines multivariate regression techniques with runtime analysis of data collected from hardware event counters to locate optimal operating points of concurrency. Using our model, we develop a prediction-driven phase-aware runtime optimization scheme that throttles concurrency so that power consumption can be reduced and performance can be set at the knee of the scalability curve of each program phase. The use of prediction reduces the overhead of searching the optimization space while achieving near-optimal performance and power savings. A thorough evaluation of our approach shows a reduction in power consumption of 10.8 percent, simultaneous with an improvement in performance of 17.9 percent, resulting in energy savings of 26.7 percent.
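The prediction model described above can be illustrated with a minimal sketch (hypothetical, not the authors' implementation): a least-squares regression maps hardware-counter features observed in a program phase to predicted performance at each concurrency level, and the runtime throttles threads to the knee of the predicted scalability curve. The function names and the 5% knee threshold below are illustrative assumptions.

```python
import numpy as np

def fit_phase_model(X, y):
    """Least-squares fit from counter features (samples x features) to
    observed performance (e.g. IPC) at one concurrency level."""
    X = np.asarray(X, dtype=float)
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X],
                               np.asarray(y, dtype=float), rcond=None)
    return coef  # intercept followed by per-feature weights

def predict_perf(coef, features):
    return coef[0] + coef[1:] @ np.asarray(features, dtype=float)

def knee_concurrency(models, features, threshold=0.05):
    """Pick the fewest cores whose predicted performance is within
    `threshold` of the best prediction (the knee of the curve)."""
    preds = {n: predict_perf(c, features) for n, c in models.items()}
    best = max(preds.values())
    return min(n for n, p in preds.items() if p >= (1 - threshold) * best)
```

At each phase boundary, a runtime of this kind would read the counters, call `knee_concurrency`, and activate only that many cores, trading negligible performance for power.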
Abstract:
Cloud services are exploding, and organizations are converging their data centers in order to take advantage of the predictability, continuity, and quality of service delivered by virtualization technologies. In parallel, energy-efficient and high-security networking is of increasing importance. Network operators and service and product providers require a new network solution to efficiently tackle the increasing demands of this changing network landscape. Software-defined networking has emerged as an efficient network technology capable of supporting the dynamic nature of future network functions and intelligent applications while lowering operating costs through simplified hardware, software, and management. In this article, the question of how to achieve a successful carrier-grade network with software-defined networking is raised. Specific focus is placed on the challenges of network performance, scalability, security, and interoperability, with the proposal of potential solution directions.
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure their correct operation by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation for the overall system. The run-time system identifies critical and less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly-accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can, if required, operate at voltages below what is needed for fully correct operation and consume less power, since such tasks, unlike the critical ones, do not always need to be exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
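As a hedged sketch of the scheduling idea (the names and structure below are assumptions, not the paper's system), critical tasks are routed to cores kept at nominal voltage/frequency, while less significant tasks may run on cores scaled below the fully safe operating point:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Core:
    cid: int
    reliable: bool                       # True: nominal V/f; False: scaled-down V/f
    queue: List[Callable] = field(default_factory=list)

def schedule(tasks: List[Tuple[Callable, bool]], cores: List[Core]):
    """Assign (task, is_critical) pairs: critical work only to reliable
    cores; approximate work to scaled-down cores when any exist."""
    reliable = [c for c in cores if c.reliable]
    approx = [c for c in cores if not c.reliable]
    for i, (fn, critical) in enumerate(tasks):
        pool = reliable if (critical or not approx) else approx
        pool[i % len(pool)].queue.append(fn)   # simple round-robin per pool
    return cores
```

In the paper's scheme, the set of reliable cores would itself change at run time as the SeA unit reports error rates and operating conditions.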
Abstract:
One of the core properties of Software Defined Networking (SDN) is the ability for third parties to develop network applications. This introduces increased potential for innovation in networking, from performance-enhancing to energy-efficient designs. In SDN, the application connects with the network via the SDN controller. A specific concern relating to this communication channel is whether an application can be trusted or not. For example, what information about the network state is gathered by the application? Is this information necessary for the application to execute, or is it gathered with malicious intent? In this paper we present an approach to securing the northbound interface by introducing a permissions system that ensures that controller operations are available to trusted applications only. Implementation of this permissions system in our Operation Checkpoint adds negligible overhead and illustrates successful defense against unauthorized control-function access attempts.
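A minimal sketch of such a northbound permissions check (hypothetical names; not the actual Operation Checkpoint code): each application identifier is granted an explicit permission set, and every controller operation is gated against it before executing.

```python
# Hypothetical permission sets granted to northbound applications.
PERMISSIONS = {
    "monitor_app": {"read_topology", "read_statistics"},
    "routing_app": {"read_topology", "modify_flow_table"},
}

def checked(operation):
    """Wrap a controller operation so only apps holding the matching
    permission may invoke it."""
    def invoke(app_id, *args, **kwargs):
        if operation.__name__ not in PERMISSIONS.get(app_id, set()):
            raise PermissionError(f"{app_id} lacks '{operation.__name__}'")
        return operation(*args, **kwargs)
    return invoke

@checked
def modify_flow_table(switch, rule):
    print(f"installing {rule} on {switch}")  # stand-in for the real controller call

modify_flow_table("routing_app", "s1", "drop tcp/23")   # permitted
# modify_flow_table("monitor_app", "s1", "...")         # raises PermissionError
```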
Abstract:
We present a mathematically rigorous metric which relates the achievable Quality of Service (QoS) of a real-time analytics service to the server energy cost of offering the service. Using a new iso-QoS evaluation methodology, we scale server resources to meet QoS targets and directly rank servers in terms of their energy efficiency and, by extension, cost of ownership. Our metric and method are platform-independent and enable fair comparison of datacenter compute servers with significant architectural diversity, including micro-servers. We deploy our metric and methodology to compare three servers running financial option pricing workloads on real-life market data. We find that server ranking is sensitive to data inputs and the desired QoS level, and that although scale-out micro-servers can be up to two times more energy-efficient than conventional heavyweight servers for the same target QoS, they are still six times less energy-efficient than high-performance computational accelerators.
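The iso-QoS idea can be summarized in a short, purely illustrative sketch (the figures and platform names below are assumptions, not the paper's data): scale each server until the QoS target is met, then rank by the power drawn at that operating point.

```python
def iso_qos_power(profile, qos_target):
    """profile: (scale, qos, watts) tuples in increasing scale order.
    Return the power at the first scale meeting the QoS target."""
    for scale, qos, watts in profile:
        if qos >= qos_target:
            return watts
    return None  # target unreachable on this platform

# Illustrative profiles only -- not measurements from the paper.
servers = {
    "microserver": [(8, 0.72, 60), (16, 0.95, 115), (24, 1.00, 170)],
    "xeon": [(1, 0.88, 210), (2, 1.00, 390)],
}
ranking = sorted(
    ((name, iso_qos_power(p, 1.00)) for name, p in servers.items()),
    key=lambda kv: float("inf") if kv[1] is None else kv[1],
)
print(ranking)  # platforms ordered by power needed to hit the same QoS
```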
Abstract:
We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
Abstract:
Inherently error-resilient applications in areas such as signal processing, machine learning and data analytics provide opportunities for relaxing reliability requirements and thereby reducing the overhead incurred by conventional error correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories to meet a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations with lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area, when applied to various data mining applications in a 28 nm process technology.
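The bit-shuffling mechanism is easy to sketch in software (a simplified model of the idea, not the 28 nm circuit): given the known faulty cell positions of a word, store the word's least significant bits in those cells, so any error has a small magnitude. Granularity control would partition the word and apply the mapping per partition.

```python
def make_mapping(width, faulty):
    """Return mapping[logical_bit] = physical_cell such that the faulty
    cells receive the lowest-significance logical bits."""
    faulty = sorted(set(faulty))
    healthy = [p for p in range(width) if p not in faulty]
    return faulty + healthy

def shuffle_write(value, mapping):
    word = 0
    for logical, phys in enumerate(mapping):
        if (value >> logical) & 1:
            word |= 1 << phys
    return word

def shuffle_read(word, mapping):
    value = 0
    for logical, phys in enumerate(mapping):
        if (word >> phys) & 1:
            value |= 1 << logical
    return value

m = make_mapping(8, {2, 5})          # cells 2 and 5 are faulty
assert shuffle_read(shuffle_write(173, m), m) == 173
# A stuck-at fault in cell 2 or 5 now corrupts only logical bits 0-1,
# bounding the output error magnitude to at most 3.
```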
Abstract:
Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, data centres are based on general purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel coarse-grain and fine-grain programmable hardware accelerators. It will also build a high-level programming framework allowing end-users to seamlessly utilize these accelerators in heterogeneous computing systems by employing typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark, etc.). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.
Abstract:
There is a significant lack of indoor air quality research in low-energy homes. This study compared the indoor air quality of eight newly built case-study homes constructed to similar levels of air-tightness and insulation but with two different ventilation strategies: four homes with Mechanical Ventilation with Heat Recovery (MVHR) systems built to Code level 4, and four naturally ventilated homes built to Code level 3. Indoor air quality measurements were conducted over a 24 h period in the living room and main bedroom of each home during the summer and winter seasons. Simultaneous outdoor measurements and an occupant diary were also employed during the measurement period. Occupant interviews were conducted to gain information on perceived indoor air quality, occupant behaviour and building-related illnesses. Knowledge of the MVHR system, including ventilation-related behaviour, was also studied. The results suggest indoor air quality problems in both the mechanically ventilated and naturally ventilated homes, with significant issues identified regarding occupant use in the social homes.
Abstract:
We propose a new selective multi-carrier index keying scheme for orthogonal frequency division multiplexing (OFDM) systems that opportunistically modulates both a small subset of sub-carriers and their indices. In particular, we investigate the performance enhancement in two cases of error-propagation-sensitive and compromised device-to-device (D2D) communications. For the performance evaluation, we focus on analyzing the error propagation probability (EPP), introducing exact and upper-bound expressions for the detection error probability in the presence of both imperfect and perfect detection of active multi-carrier indices. The closed-form average EPP results are generalized for various fading distributions using the moment generating function, and our numerical results clearly show that the proposed approach is desirable for reliable and energy-efficient D2D applications.
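The abstract does not reproduce its expressions, but the MGF-based generalization it mentions typically rests on a standard averaging step; as a hedged illustration (generic form, not the paper's exact result), a conditional error probability of Gaussian-Q form averages over fading via Craig's formula:

```latex
% Illustrative MGF averaging (standard technique, not the paper's derivation):
% conditional error probability P_e(\gamma) = a\,Q(\sqrt{b\gamma}), SNR MGF
% M_\gamma(s) = \mathbb{E}[e^{s\gamma}].
\bar{P}_e \;=\; \mathbb{E}_\gamma\!\left[a\,Q\!\left(\sqrt{b\gamma}\right)\right]
\;=\; \frac{a}{\pi}\int_{0}^{\pi/2} M_\gamma\!\left(-\frac{b}{2\sin^{2}\theta}\right)\mathrm{d}\theta,
\qquad\text{e.g. } M_\gamma(s) = \frac{1}{1 - s\,\bar{\gamma}} \text{ for Rayleigh fading.}
```

Substituting the MGF of each fading model (Rayleigh, Nakagami-m, etc.) is what yields closed-form averages across distributions.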
Abstract:
Exascale computation is the next target of high performance computing. In the push to create exascale computing platforms, simply increasing the number of hardware devices is not an acceptable option, given the limitations of power consumption, heat dissipation, and programming models designed for current hardware platforms. Instead, new hardware technologies, coupled with improved programming abstractions and more autonomous runtime systems, are required to achieve this goal. This position paper presents the design of a new runtime for a new heterogeneous hardware platform being developed to explore energy-efficient, high-performance computing. By combining a number of different technologies, this framework will both simplify the programming of current and future HPC applications and automate the scheduling of data and computation across this new hardware platform. In particular, this work explores the use of FPGAs to achieve both the power and performance goals of exascale, as well as utilising the runtime to automatically effect dynamic configuration and reconfiguration of these platforms.
Abstract:
This thesis contributes to the advancement of Fiber-Wireless (FiWi) access technologies through the development of algorithms for resource allocation and energy-efficient routing. FiWi access networks use both optical and wireless/cellular technologies to provide the high bandwidth and ubiquity required by users and by today's highly demanding services. A FiWi access network is divided into two parts: in one, fiber is brought from the central office to a point near the users; in the other, wireless routers or base stations take over and provide Internet access to users. Many technologies can be used in both the optical and wireless parts, leading to different integration and optimization problems to be solved. In this thesis, the focus is on FiWi access networks that use a passive optical network in the optical section and a wireless mesh network in the wireless section. In such networks, two important aspects that influence network performance are the allocation of resources and traffic routing throughout the mesh section; both problems are addressed here. A fair bandwidth allocation algorithm is developed, which provides fairness among all users in terms of both bandwidth and experienced delay. As for routing, an energy-efficient routing algorithm is proposed that optimizes sleeping and productive periods throughout the wireless and optical sections. To develop these algorithms, game theory and network formation theory were used: powerful mathematical tools for solving problems involving agents with conflicting interests. Since these tools are usually not common knowledge, a brief survey of game theory and network formation theory is provided to explain the concepts used throughout the thesis. As such, the thesis also serves as a showcase of the use of game theory and network formation theory to develop new algorithms.
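As a self-contained illustration of the kind of fairness such an algorithm targets (a textbook max-min fair allocation, not the thesis's game-theoretic algorithm), capacity is shared evenly and any surplus from users with small demands is redistributed:

```python
def max_min_fair(capacity, demands):
    """Textbook max-min fair allocation of `capacity` among `demands`."""
    alloc = {u: 0.0 for u in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 1e-12:
        share = cap / len(remaining)
        satisfied = {u: d for u, d in remaining.items() if d <= share}
        if not satisfied:                  # nobody's demand fits: split evenly
            for u in remaining:
                alloc[u] += share
            break
        for u, d in satisfied.items():     # grant small demands, recycle surplus
            alloc[u] += d
            cap -= d
            del remaining[u]
    return alloc

print(max_min_fair(10, {"a": 2, "b": 5, "c": 8}))  # {'a': 2.0, 'b': 4.0, 'c': 4.0}
```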
Abstract:
In a world where telecommunication networks are constantly evolving and growing, their energy consumption is also increasing. With the evolution of both the networks and their equipment, the cost of deploying a network has fallen to the point where the main obstacle to network growth is now the cost of maintenance and operation. In recent decades, efforts have been made to make networks ever more energy efficient, thereby reducing their operational costs as well as the problems related to the energy sources that power them. In this context, the main objective of this work is the study of the energy consumption of IP-over-WDM networks, namely the study of routing methods that are efficient from an energy point of view. In this work we formalized an optimization model that was evaluated using different network topologies. The results of the analysis showed that in most cases a consumption reduction of around 25% can be achieved.
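The thesis does not reproduce its optimization model here; as a hedged illustration of the usual shape of such energy-minimizing routing formulations (assumed notation, not the thesis's exact model), an ILP can power links off by routing all demands over as few links as possible:

```latex
% Illustrative energy-aware routing ILP: y_{ij} \in \{0,1\} powers link (i,j)
% at power cost P_{ij}; f_{ij}^{d} \ge 0 is the flow of demand d
% (volume t_d, source s_d, sink e_d); C_{ij} is the link capacity.
\min \sum_{(i,j)\in E} P_{ij}\, y_{ij}
\quad \text{subject to} \quad
\sum_{j:(i,j)\in E} f_{ij}^{d} - \sum_{j:(j,i)\in E} f_{ji}^{d} =
\begin{cases} t_d & i = s_d,\\ -t_d & i = e_d,\\ 0 & \text{otherwise,} \end{cases}
\qquad
\sum_{d} f_{ij}^{d} \;\le\; C_{ij}\, y_{ij} \quad \forall (i,j)\in E .
```

Solving a model of this shape over the studied topologies is the kind of analysis behind figures like the roughly 25% consumption reduction reported.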