871 results for Electric power consumption - Reduction
Abstract:
Current variation-aware design methodologies, tuned for worst-case scenarios, are becoming increasingly pessimistic from the perspective of power and performance. A good example of such pessimism is setting the refresh rate of DRAMs according to worst-case access statistics, thereby resulting in very frequent refresh cycles, which are responsible for the majority of the standby power consumption of these memories. However, such a high refresh rate may not be required, either due to the extremely low probability of the worst case actually occurring, or due to the inherent error-resilient nature of many applications that can tolerate a certain number of potential failures. In this paper, we exploit and quantify the possibilities that exist in dynamic memory design by shifting to the so-called approximate computing paradigm in order to save power and enhance yield at no cost. The statistical characteristics of the retention time in dynamic memories were revealed by studying a fabricated 2kb CMOS-compatible embedded DRAM (eDRAM) memory array based on gain-cells. Measurements show that up to 73% of the retention power can be saved by altering the refresh time and setting it such that a small number of failures is allowed. We show that these savings can be further increased by utilizing known circuit techniques, such as body biasing, which can help not only in extending, but also in favourably shaping, the retention time distribution. Our approach is one of the first attempts to assess the data integrity and energy tradeoffs achieved in eDRAMs for utilizing them in error-resilient applications, and can prove helpful in the anticipated shift to approximate computing.
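The refresh-rate/failure trade-off described above can be illustrated numerically. A minimal sketch, assuming a hypothetical log-normal retention-time distribution (the parameters below are illustrative, not the paper's measured data): relaxing the refresh interval cuts refresh power in proportion to the reduced refresh frequency, at the cost of the small fraction of cells whose retention time is shorter than the new interval.

```python
import math

def failing_fraction(t_ref, mu, sigma):
    """Fraction of cells whose retention time falls below the
    refresh interval t_ref, under a log-normal retention model."""
    z = (math.log(t_ref) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def refresh_power_ratio(t_ref, t_worst):
    """Refresh power scales with refresh frequency (1/t_ref);
    ratio relative to worst-case refresh at interval t_worst."""
    return t_worst / t_ref

# Hypothetical distribution: median retention 10 ms, sigma = 0.5.
mu, sigma = math.log(10e-3), 0.5
t_worst = 1e-3      # pessimistic worst-case refresh interval
t_relaxed = 5e-3    # relaxed interval tolerating a few failures

fails = failing_fraction(t_relaxed, mu, sigma)
saving = 1.0 - refresh_power_ratio(t_relaxed, t_worst)
print(f"failing cells: {fails:.2%}, refresh power saved: {saving:.0%}")
```

Under these assumed numbers, a 5× longer refresh interval saves 80% of the refresh power while a few percent of cells may fail, which an error-resilient application could tolerate.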
Abstract:
The idea of proxying network connectivity has been proposed as an efficient mechanism to maintain network presence on behalf of idle devices, so that they can “sleep”. The concept has been around for many years; alternative architectural solutions have been proposed to implement it, which lead to different considerations about capability, effectiveness and energy efficiency. However, there is neither a clear understanding of the potential for energy saving nor a detailed performance comparison among the different proxy architectures. In this paper, we estimate the potential energy saving achievable by different architectural solutions for proxying network connectivity. Our work considers the trade-off between the saving achievable by putting idle devices to sleep and the additional power consumption to run the proxy. Our analysis encompasses a broad range of alternatives, taking into consideration both implementations already available in the market and prototypes built for research purposes. We remark that the main value of our work is the estimation under realistic conditions, taking into consideration power measurements, usage profiles and proxying capabilities.
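The trade-off between sleep savings and proxy overhead lends itself to a back-of-the-envelope model. A minimal sketch with hypothetical power figures (the numbers are assumptions for illustration, not the paper's measurements):

```python
def net_saving_wh(p_idle_w, p_sleep_w, p_proxy_w, idle_hours):
    """Energy saved (Wh) over an idle period by letting the device
    sleep behind a proxy: baseline idle consumption, minus the
    sleeping device plus its amortized share of the proxy."""
    baseline = p_idle_w * idle_hours
    with_proxy = (p_sleep_w + p_proxy_w) * idle_hours
    return baseline - with_proxy

# Hypothetical desktop: 60 W idle, 2 W asleep, proxy amortized
# at 3 W per covered device, idle 16 hours per day.
daily_wh = net_saving_wh(60.0, 2.0, 3.0, 16.0)
print(f"net saving: {daily_wh:.0f} Wh/day")  # 880 Wh/day
```

Note the model also exposes the paper's caveat: if the proxy's per-device share approaches the device's idle draw, the net saving can become negative.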
Abstract:
Exascale computation is the next target of high performance computing. In the push to create exascale computing platforms, simply increasing the number of hardware devices is not an acceptable option, given the limitations of power consumption, heat dissipation, and programming models designed for current hardware platforms. Instead, new hardware technologies, coupled with improved programming abstractions and more autonomous runtime systems, are required to achieve this goal. This position paper presents the design of a new runtime for a new heterogeneous hardware platform being developed to explore energy efficient, high performance computing. By combining a number of different technologies, this framework will both simplify the programming of current and future HPC applications and automate the scheduling of data and computation across this new hardware platform. In particular, this work explores the use of FPGAs to achieve both the power and performance goals of exascale, as well as utilising the runtime to automatically effect dynamic configuration and reconfiguration of these platforms.
Abstract:
NanoStreams explores the design, implementation, and system software stack of micro-servers aimed at processing data in-situ and in real time. These micro-servers can serve the emerging Edge computing ecosystem, namely the provisioning of advanced computational, storage, and networking capability near data sources to achieve both low latency event processing and high throughput analytical processing, before considering off-loading some of this processing to high-capacity datacentres. NanoStreams explores a scale-out micro-server architecture that can achieve equivalent QoS to that of conventional rack-mounted servers for high-capacity datacentres, but with dramatically reduced form factors and power consumption. To this end, NanoStreams introduces novel solutions in programmable & configurable hardware accelerators, as well as the system software stack used to access, share, and program those accelerators. Our NanoStreams micro-server prototype has demonstrated 5.5× higher energy-efficiency than a standard Xeon Server. Simulations of the micro-server's memory system, extended to leverage hybrid DDR/NVM main memory, indicated 5× higher energy efficiency than a conventional DDR-based system.
Abstract:
This thesis investigates and develops devices for all-optical processing in dense wavelength division multiplexing (DWDM) networks. The main objective of DWDM networks is to transport and distribute a densely multiplexed optical spectrum carrying ultra-high bit rate signals over hundreds or thousands of kilometres of optical fibre. These signals should be transported and routed transparently in the optical domain, without optical-electrical-optical (OEO) conversions, thereby avoiding their limitations and costs. Technology based on semiconductor optical amplifiers (SOA) is promising thanks to its ultra-fast and efficient non-linear effects, potential for integration, and reduced power consumption and cost. Wavelength converters are the basic optical element for increasing network capacity and avoiding wavelength blocking. In this work, methods for increasing the operational bandwidth of cross-gain modulation (XGM) converters are studied and experimentally analysed, in order to allow SOA operation beyond its physical limitations. Single-wavelength conversion, and simultaneous conversion of multiple wavelengths, are tested using Mach-Zehnder interferometers with SOAs (MZI-SOA). High bit rate DWDM networks require optimized modulation formats, with high tolerance to fibre impairments and reduced spectral occupancy. To this end, it is vital to develop all-optical modulation format converters, in order to allow interconnection between already deployed networks, which operate with intensity modulation, and modern networks, which use advanced modulation formats. Within this work, an all-optical format converter between optical double-sideband modulation and optical vestigial-sideband modulation is proposed, and characterized through simulation and experiment.
Additionally, a converter to the carrier-suppressed format is proposed, based on XGM and cross-phase modulation. Interconnection between ultra-high bit rate transport networks and low bit rate access networks requires all-optical pulse format conversion between return-to-zero (RZ) and non-RZ. Two distinct structures are proposed and investigated here: one based on detuned filtering of the XGM-converted signal; a second exploiting the internal laser dynamics of a gain-clamped SOA (GC-SOA). All-optical regeneration is essential to reduce network costs. Two distinct schemes are used for regeneration: a structure based on MZI-SOA, and a method in which the internal laser of a GC-SOA is modulated with the distorted signal to be regenerated. Most of these schemes are experimentally tested at 40 Gb/s, with potential for application at higher bit rates, demonstrating that SOAs are a cornerstone technology for future optical networks.
Abstract:
Future emerging market trends head towards positioning-based services, placing a new perspective on the way we obtain and exploit positioning information. On one hand, innovations in information technology and wireless communication systems have enabled the development of numerous location-based applications such as vehicle navigation and tracking, sensor network applications, home automation, asset management, security and context-aware location services. On the other hand, wireless networks themselves may benefit from localization information to improve the performance of different network layers. Location-based routing, synchronization, and interference cancellation are prime examples of applications where location information can be useful. Typical positioning solutions rely on the measurement and exploitation of distance-dependent signal metrics, such as the received signal strength, time of arrival or angle of arrival. They are cheaper and easier to implement than dedicated positioning systems based on fingerprinting, but at the cost of accuracy. Therefore, intelligent localization algorithms and signal processing techniques have to be applied to mitigate the lack of accuracy in distance estimates. Cooperation between nodes is used in cases where conventional positioning techniques do not perform well, due to a lack of existing infrastructure or an obstructed indoor environment. The objective is to concentrate on a hybrid architecture where some nodes have points of attachment to an infrastructure and are simultaneously interconnected via short-range ad hoc links. The availability of more capable handsets enables more innovative scenarios that take advantage of multiple radio access networks as well as peer-to-peer links for positioning. Link selection is used to optimize the tradeoff between the power consumption of participating nodes and the quality of target localization.
The Geometric Dilution of Precision and the Cramer-Rao Lower Bound can be used as criteria for choosing the appropriate set of anchor nodes and corresponding measurements before attempting location estimation itself. This work analyzes the existing solutions for node selection in order to improve localization performance, and proposes a novel method based on utility functions. The proposed method is then extended to mobile and heterogeneous environments. Simulations have been carried out, as well as evaluation with real measurement data. In addition, some specific cases have been considered, such as localization in ill-conditioned scenarios and the use of negative information. The proposed approaches have been shown to enhance estimation accuracy, whilst significantly reducing complexity, power consumption and signalling overhead.
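The GDOP criterion mentioned above can be sketched for 2-D range-based positioning. The exhaustive subset search below illustrates the anchor-selection idea only; it is not the thesis's utility-function method, and the anchor coordinates are hypothetical:

```python
import itertools
import math

def gdop(anchors, target):
    """Geometric Dilution of Precision for 2-D range-based
    positioning: sqrt(trace((G^T G)^-1)), where the rows of G are
    unit line-of-sight vectors from the target to each anchor."""
    G = []
    for ax, ay in anchors:
        dx, dy = ax - target[0], ay - target[1]
        r = math.hypot(dx, dy)
        G.append((dx / r, dy / r))
    # Form the 2x2 normal matrix G^T G and invert it analytically.
    a = sum(gx * gx for gx, gy in G)
    b = sum(gx * gy for gx, gy in G)
    d = sum(gy * gy for gx, gy in G)
    det = a * d - b * b
    return math.sqrt((a + d) / det)

def select_anchors(anchors, target, k):
    """Pick the k-anchor subset with the lowest GDOP, trading
    fewer active links (power, signalling) against geometry."""
    return min(itertools.combinations(anchors, k),
               key=lambda s: gdop(s, target))

anchors = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 1)]
best = select_anchors(anchors, target=(5, 5), k=3)
```

The exhaustive search is exponential in the number of anchors; the utility-function approach the thesis proposes is precisely a way to avoid that cost.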
Abstract:
Network virtualisation is seen as a promising approach to overcome the so-called “Internet impasse” and bring innovation back into the Internet, by allowing easier migration towards novel networking approaches as well as the coexistence of complementary network architectures on a shared infrastructure in a commercial context. Recently, interest from operators and mainstream industry in network virtualisation has grown quite significantly, as the potential benefits of virtualisation became clearer, both from an economical and an operational point of view. In the beginning, the concept was mainly a research topic and was materialized in small-scale testbeds and research network environments. This PhD Thesis aims to provide the network operator with a set of mechanisms and algorithms capable of managing and controlling virtual networks. To this end, we propose a framework that aims to allocate, monitor and control virtual resources in a centralized and efficient manner. In order to analyse the performance of the framework, we implemented and evaluated it on a small-scale testbed. To enable the operator to make an efficient allocation of virtual networks onto the substrate network, in real time and on demand, a heuristic algorithm is proposed to perform the virtual network mapping. For the network operator to obtain the highest profit from the physical network, a mathematical formulation is also proposed that aims to maximize the number of virtual networks allocated onto the physical network. Since the power consumption of the physical network is very significant in the operating costs, it is important to allocate virtual networks onto fewer physical resources, and onto physical resources that are already active. To address this challenge, we propose a mathematical formulation that aims to minimize the energy consumption of the physical network without affecting the efficiency of the allocation of virtual networks.
To minimize fragmentation of the physical network while increasing the revenue of the operator, the initial formulation is extended to contemplate the re-optimization of previously mapped virtual networks, so that the operator makes better use of its physical infrastructure. It is also necessary to address the migration of virtual networks, whether for load balancing or because of imminent failure of physical resources, without affecting the proper functioning of the virtual network. To this end, we propose a method based on cloning techniques to perform the migration of virtual networks across the physical infrastructure, transparently and without affecting the virtual network. In order to assess the resilience of virtual networks to physical network failures, while obtaining the optimal solution for the migration of virtual networks in case of imminent failure of physical resources, the mathematical formulation is extended to minimize the number of migrated nodes and the relocation of virtual links. In comparison with our optimization proposals, we found that existing heuristics for mapping virtual networks perform poorly. We also found that it is possible to minimize the energy consumption without penalizing the efficiency of the allocation. By applying re-optimization to the virtual networks, it has been shown that it is possible to obtain more free resources as well as to keep the physical resources better balanced. Finally, it was shown that virtual networks are quite resilient to failures in the physical network.
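The energy-aware idea of consolidating virtual nodes onto already-active substrate nodes can be sketched with a toy greedy heuristic (an illustration only, not the thesis's algorithm or its optimization formulation; node names and capacities are hypothetical):

```python
def greedy_map(virtual_nodes, substrate_cpu):
    """Toy energy-aware virtual network mapping: place each virtual
    node on a substrate node with enough spare CPU, preferring
    nodes that are already hosting something (consolidation), so
    that unused substrate nodes can stay powered down."""
    spare = dict(substrate_cpu)   # remaining CPU per substrate node
    active = set()                # substrate nodes already in use
    mapping = {}
    # Place the largest demands first.
    for vnode, demand in sorted(virtual_nodes.items(),
                                key=lambda kv: -kv[1]):
        fits = [n for n in spare if spare[n] >= demand]
        if not fits:
            return None           # embedding request rejected
        # Prefer active nodes; among those, the one left fullest.
        best = min(fits, key=lambda n: (n not in active, spare[n]))
        spare[best] -= demand
        active.add(best)
        mapping[vnode] = best
    return mapping

substrate = {"s1": 100, "s2": 100, "s3": 100}
vnet = {"a": 40, "b": 30, "c": 20}
m = greedy_map(vnet, substrate)   # all three fit on one node
```

In this example the whole virtual network consolidates onto a single substrate node, leaving the other two free to sleep; the thesis's exact formulations additionally balance this against fragmentation and resilience.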
Abstract:
A speed control system for a cost-effective, high-precision drive concept is presented. The drive concept consists of two drives working in parallel and is an alternative to direct drives. One big advantage is the use of standard gear boxes with economical components. This paper deals with the control of the drive system, which consists of two parts: one drive produces the power for the machine, while the other makes the motion precise and dynamic. Both drives are combined into one double drive by a control system. The drive system is useful for printing machines and other machines with high power consumption at a nearly constant speed and high accuracy requirements. The calculation for a 37 kW drive system shows that the control drive has to supply only about 20% of the total torque and power needed to compensate the errors of the power drive. The stability of the system is shown by a simulation of the double drive.
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Recent developments in high-end processors recognize temperature monitoring and tuning as one of the main challenges towards achieving higher performance, given the growing power and temperature constraints. To address this challenge, one needs both a suitable thermal energy abstraction and the corresponding instrumentation. Our model is based on application-specific parameters such as power consumption, execution time, and asymptotic temperature, as well as hardware-specific parameters such as the half time for thermal rise or fall. As observed with our out-of-band instrumentation and monitoring infrastructure, temperature changes follow a relatively slow capacitor-style charge-discharge process. Therefore, we use a lumped thermal model that initiates an exponential process whenever there is a change in the processor's power consumption. Initial experiments with two codes – Firestarter and Nekbone – validate our thermal energy model and demonstrate its use for analyzing and potentially improving the application-specific balance between temperature, power, and performance.
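The capacitor-style charge-discharge behaviour described above reduces to a single exponential. A minimal sketch of such a lumped thermal model (the variable names and the numbers in the example are ours, not the paper's):

```python
import math

def temperature(t, t_start, t_asymptotic, half_time):
    """Lumped (single-RC) thermal model: after a change in power,
    temperature relaxes exponentially from its current value toward
    the new asymptotic temperature; half_time is the time needed to
    close half of the remaining temperature gap."""
    tau = half_time / math.log(2.0)   # convert half time to time constant
    return t_asymptotic + (t_start - t_asymptotic) * math.exp(-t / tau)

# Hypothetical step: core at 40 C when a heavy kernel starts,
# heading toward an asymptotic 80 C with a 20 s half time.
t_after_20s = temperature(20.0, 40.0, 80.0, 20.0)   # exactly halfway: 60 C
```

The same function covers thermal fall: when the load drops, `t_asymptotic` simply becomes lower than `t_start` and the exponential decays toward it.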
Abstract:
This paper presents a low-complexity, high-efficiency decimation filter which can be employed in electrocardiogram (ECG) acquisition systems. The decimation filter, with a decimation ratio of 128, works along with a third-order sigma-delta modulator. It is designed in four stages to reduce cost and power consumption. The work reported here provides an efficient approach to the decimation process for high-resolution biomedical data conversion applications, by employing low-complexity two-path all-pass based decimation filters. The performance of the proposed decimation chain was validated using the MIT-BIH arrhythmia database, and comparative simulations were conducted against the state of the art.
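The multistage structure can be illustrated with a crude stand-in: each stage below is just a boxcar average followed by downsampling, not the paper's two-path all-pass filters, and the split of 128 into stage factors of 8, 4, 2 and 2 is our assumption. It shows only how a large decimation ratio factors into cheap cascaded stages:

```python
def decimate_stage(x, factor):
    """One decimation stage: boxcar anti-alias average over `factor`
    samples, then keep one output per `factor` inputs."""
    out = []
    for i in range(0, len(x) - factor + 1, factor):
        out.append(sum(x[i:i + factor]) / factor)
    return out

def decimate_128(x):
    """Four cascaded stages with 8 * 4 * 2 * 2 = 128, mirroring the
    abstract's four-stage structure (stage factors assumed)."""
    for factor in (8, 4, 2, 2):
        x = decimate_stage(x, factor)
    return x

# A constant (DC) input survives averaging unchanged, as it should.
samples = [1.0] * 1280
decimated = decimate_128(samples)   # 10 output samples, all 1.0
```

Splitting the ratio across stages is what keeps cost down: each stage runs at a progressively lower rate, so the later, sharper filters process far fewer samples.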
Abstract:
In recent years, the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has increased significantly. Power systems aim at lowering operational costs, requiring adequate energy resource management. In this context, load consumption management plays an important role, making it necessary to use optimization strategies to adjust consumption to the supply profile. These optimization strategies can be integrated in demand response programs. The control of the energy consumption of an intelligent house has the objective of optimizing load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at, or below, a specified energy consumption limit. This limit is determined according to the consumer's strategy, taking into account renewable-based micro-generation, energy price, supplier solicitations, and consumers' preferences. The proposed approach is compared with a mixed integer non-linear approach.
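The genetic-algorithm idea can be sketched in a few dozen lines. This is an illustration only: the load figures, priorities, and GA parameters are hypothetical, and the paper's actual fitness also accounts for micro-generation, prices and supplier solicitations rather than just a priority sum.

```python
import random

def fitness(bits, loads, priorities, limit):
    """Feasible schedules score the summed priority of loads kept
    on; schedules exceeding the limit are penalized below zero."""
    power = sum(l for l, b in zip(loads, bits) if b)
    if power > limit:
        return -power
    return sum(p for p, b in zip(priorities, bits) if b)

def ga_schedule(loads, priorities, limit,
                pop_size=30, gens=60, p_mut=0.1, seed=1):
    """Tiny genetic algorithm choosing which loads stay on so that
    total consumption stays at or below the limit."""
    rng = random.Random(seed)
    n = len(loads)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: -fitness(c, loads, priorities, limit))
        elite = pop[:pop_size // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # parent selection
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:
                child[rng.randrange(n)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda c: fitness(c, loads, priorities, limit))

loads      = [2.0, 1.5, 1.0, 0.5, 0.3]   # kW per curtailable load
priorities = [5, 4, 3, 2, 1]             # consumer preference weights
best = ga_schedule(loads, priorities, limit=3.0)
```

The returned bit vector keeps the highest-priority combination of loads whose total draw fits under the 3.0 kW limit, which is the curtailment decision the SCADA system would then actuate.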
Abstract:
A supervisory control and data acquisition (SCADA) system is an integrated platform that incorporates several components; it has been applied in the field of power systems and in several other engineering applications to monitor, operate and control many processes. In future electrical networks, SCADA systems are essential for the intelligent management of resources like distributed generation and demand response, implemented in the smart grid context. This paper presents a SCADA system for a typical residential house. The application is implemented in MOVICON™11 software. The main objective is to manage the residential consumption, reducing or curtailing loads to keep the power consumption at or below a specified setpoint, imposed by the customer and the generation availability.
Abstract:
Dissertation submitted for the degree of Master in Civil Engineering, in the field of Building Construction