829 results for energy-efficient
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) is an effective tool for trading off energy against performance. In this paper, we use a formal Petri-net-based program performance model that directly captures both application and system properties to find energy-efficient DVFS settings for CMP systems that satisfy a given performance constraint for SPMD multithreaded programs. Experimental evaluation shows that we achieve significant energy savings while meeting the performance constraints.
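As a rough companion to this abstract, here is a minimal sketch of the underlying selection problem: enumerate candidate voltage/frequency pairs and keep the lowest-energy one that meets a deadline. The operating points, cycle count, and the P ≈ C·V²·f power model are illustrative assumptions, not the paper's Petri-net-based model.

```python
# Minimal sketch: pick the lowest-energy DVFS setting that meets a deadline.
# The (V, f) table, cycle count, and P ~ C * V^2 * f dynamic-power model are
# illustrative assumptions, not the paper's Petri-net-based model.

SETTINGS = [  # (voltage in V, frequency in GHz) -- hypothetical operating points
    (1.2, 2.0), (1.1, 1.6), (1.0, 1.2), (0.9, 0.8),
]
CYCLES = 3.0e9        # cycles the SPMD region needs (assumed)
DEADLINE = 2.0        # seconds (the performance constraint)
C_EFF = 1.0e-9        # effective switched capacitance (assumed)

def energy_and_time(volt, freq_ghz):
    t = CYCLES / (freq_ghz * 1e9)            # execution time at this setting
    p = C_EFF * volt**2 * (freq_ghz * 1e9)   # dynamic power ~ C * V^2 * f
    return p * t, t

best = None
for v, f in SETTINGS:
    e, t = energy_and_time(v, f)
    if t <= DEADLINE and (best is None or e < best[0]):
        best = (e, t, v, f)

if best:
    e, t, v, f = best
    print(f"chosen setting: {v} V @ {f} GHz -> {e:.2f} J in {t:.2f} s")
```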
Abstract:
We consider cooperative spectrum sensing for cognitive radios. We develop an energy-efficient detector with low detection delay using sequential hypothesis testing. The Sequential Probability Ratio Test (SPRT) is used at both the local nodes and the fusion center. We also analyse the performance of this algorithm and compare it with simulations. Modelling uncertainties in the distribution parameters are considered, and slow fading with and without perfect channel state information at the cognitive radios is taken into account.
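A minimal sketch of the SPRT at a single sensing node, assuming Gaussian observations (mean 0 under H0, mean MU1 under H1) and Wald's threshold approximations; all parameters are illustrative, not taken from the paper.

```python
# Minimal SPRT sketch for a local sensing node: Gaussian samples with
# mean 0 under H0 (channel free) and mean MU1 under H1 (channel busy).
# The Gaussian model and all parameters are illustrative assumptions.
import math, random

MU1, SIGMA = 1.0, 1.0
ALPHA, BETA = 0.01, 0.01                 # target false-alarm / miss rates
A = math.log((1 - BETA) / ALPHA)         # Wald's upper threshold
B = math.log(BETA / (1 - ALPHA))         # Wald's lower threshold

def sprt(samples):
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Gaussian sample
        llr += (MU1 * x - MU1**2 / 2) / SIGMA**2
        if llr >= A:
            return "H1", n               # decide busy, detection delay n
        if llr <= B:
            return "H0", n               # decide free
    return "undecided", len(samples)

random.seed(0)
samples = [random.gauss(MU1, SIGMA) for _ in range(1000)]  # H1 is true here
print(sprt(samples))
```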
Abstract:
Amplify-and-forward (AF) relay-based cooperation has been widely investigated given its simplicity and practicality. Two AF models, namely fixed-gain and fixed-power relaying, have been extensively studied. In fixed-gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay (SR) channel gain. In fixed-power relaying, the relay's instantaneous transmit power is fixed, but its gain varies. We propose a general AF cooperation model in which an average transmit power constrained relay jointly adapts its gain and transmit power as a function of the channel gains. We derive the optimal AF gain policy that minimizes the fading-averaged symbol error probability (SEP) of MPSK and present insightful and tractable lower and upper bounds for it. We then analyze the SEP of the optimal policy. Our results show that the optimal scheme is up to 39.7% and 47.5% more energy-efficient than fixed-power relaying and fixed-gain relaying, respectively. Further, the weaker the direct source-destination link, the greater the energy-efficiency gains.
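To make the fixed-gain/fixed-power distinction concrete, here is a small Monte Carlo sketch comparing the average end-to-end SNR of the two baseline schemes over Rayleigh fading, using standard AF relations; the powers, noise level, and fading statistics are assumed, and this is not the paper's jointly optimized gain/power policy.

```python
# Sketch: average end-to-end SNR of fixed-gain vs fixed-power (variable-gain)
# AF relaying over Rayleigh fading, from standard textbook relations. All
# powers and noise variances are assumed for illustration; this is NOT the
# paper's jointly optimized gain/power policy.
import random

random.seed(1)
PS = PR = 1.0                 # source and relay transmit powers (assumed)
N0 = 0.1                      # noise power at relay and destination (assumed)
G2 = PR / (PS + N0)           # fixed gain sized for the *average* SR channel

def e2e_snrs():
    g_sr = random.expovariate(1.0)       # |h_SR|^2 under Rayleigh fading
    g_rd = random.expovariate(1.0)       # |h_RD|^2
    s1, s2 = PS * g_sr / N0, PR * g_rd / N0
    snr_fixed_power = s1 * s2 / (s1 + s2 + 1)     # variable-gain AF
    snr_fixed_gain = (G2 * g_rd * g_sr * PS) / (N0 * (G2 * g_rd + 1))
    return snr_fixed_power, snr_fixed_gain

n = 100000
fp, fg = map(lambda xs: sum(xs) / n, zip(*(e2e_snrs() for _ in range(n))))
print(f"avg e2e SNR  fixed power: {fp:.2f}   fixed gain: {fg:.2f}")
```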
Abstract:
Low power consumption per channel and data-rate minimization are two key challenges that need to be addressed in future generations of neural recording systems (NRS). Power consumption can be reduced by avoiding unnecessary processing, whereas the data rate is greatly decreased by sending spike time-stamps along with spike features rather than raw digitized data. The dynamic range of an NRS can vary with time due to changes in electrode-neuron distance or background noise, which demands adaptability. An analog-to-digital converter (ADC) is one of the most important blocks in an NRS. This paper presents an 8-bit SAR ADC in 0.13-μm CMOS technology along with input and reference buffers. A novel energy-efficient digital-to-analog converter switching scheme is proposed, which consumes 37% less energy than the present state of the art. The use of a ping-pong input sampling scheme is emphasized for multichannel input to alleviate the bandwidth requirement of the input buffer. To reduce the data rate, the A/D process is enabled only through the built-in background-noise rejection logic, ensuring that noise is not processed. The ADC resolution can be adjusted from 8 to 1 bit in 1-bit steps based on the input dynamic range. The ADC consumes 8.8 μW from a 1 V supply at 1 MS/s. It achieves an effective number of bits of 7.7 and a FoM of 42.3 fJ/conversion-step.
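The quoted figure of merit can be checked with the standard Walden formula FoM = P / (2^ENOB · f_s), using the numbers from the abstract itself:

```python
# Check of the reported figure of merit using the standard Walden formula
# FoM = P / (2^ENOB * f_s), with the power, ENOB, and sample rate quoted above.
P, ENOB, FS = 8.8e-6, 7.7, 1e6          # 8.8 uW, 7.7 bits, 1 MS/s
fom = P / (2**ENOB * FS)                # joules per conversion-step
print(f"FoM = {fom * 1e15:.1f} fJ/conversion-step")   # ~42.3 fJ, as reported
```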
Abstract:
The demand for energy-efficient, low-weight structures has boosted the use of composite structures assembled with increasing quantities of structural adhesives. Bonded structures may be subjected to severe working environments, such as high temperature and moisture, which degrade the adhesive over time. This reduces the strength of a joint and leads to premature failure. Measuring strains in the adhesive bondline at any point during service can be beneficial, as the integrity of a joint can be assessed and preventive actions taken before failure. This paper presents an experimental approach to measuring peel and shear strains in the adhesive bondline of composite single-lap joints using digital image correlation. Different sets of composite adhesive joints with varied bond quality were prepared and subjected to tensile load, during which digital images were taken and processed using digital image correlation software. The measured peel strain at the joint edge showed a rapid increase from the initiation of a crack until failure of the joint. The measured strains were used to compute the corresponding stresses assuming a plane-strain condition, and the results were compared with stresses predicted using theoretical models, namely linear and nonlinear adhesive beam models. A similar trend in stress distribution was observed. Further comparison of peel and shear strains also exhibited similar trends for both healthy and degraded joints. A maximum-peel-stress failure criterion was used to predict the failure load of a composite adhesive joint, and predicted and actual failure loads were compared. The predicted failure loads from the theoretical models were found to be higher than the actual failure loads for all the joints.
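A small sketch of the strain-to-stress step described above, applying plane-strain Hooke's law to DIC-measured peel and shear strains; the adhesive constants and strain values are hypothetical, and the transverse in-plane normal strain is assumed negligible.

```python
# Sketch: converting DIC-measured bondline strains to stresses under a
# plane-strain assumption. The adhesive constants and strain values are
# hypothetical; the in-plane transverse normal strain is taken as zero.
E, NU = 3.0e9, 0.38                     # adhesive modulus (Pa) and Poisson's ratio (assumed)
G = E / (2 * (1 + NU))                  # shear modulus

def peel_stress(eps_peel):
    # plane-strain constrained modulus times the measured peel strain
    return E * (1 - NU) / ((1 + NU) * (1 - 2 * NU)) * eps_peel

def shear_stress(gamma):
    return G * gamma

print(f"peel  : {peel_stress(0.004) / 1e6:.1f} MPa")   # e.g. 0.4% peel strain
print(f"shear : {shear_stress(0.010) / 1e6:.1f} MPa")  # e.g. 1.0% shear strain
```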
Abstract:
This paper studies a pilot-assisted physical layer data fusion technique known as Distributed Co-Phasing (DCP). In this two-phase scheme, the sensors first estimate the channel to the fusion center (FC) using pilots sent by the latter; they then simultaneously transmit their common data by pre-rotating it by the estimated channel phase, thereby achieving physical layer data fusion. First, by analyzing the symmetric mutual information of the system, it is shown that the use of higher order constellations (HOC) can improve the throughput of DCP compared to the binary signaling considered heretofore. Using an HOC in the DCP setting requires the estimation of the composite DCP channel at the FC for data decoding. To this end, two blind algorithms are proposed: 1) a power method, and 2) a modified K-means algorithm. The latter algorithm is shown to be computationally efficient and converges significantly faster than the conventional K-means algorithm. Analytical expressions for the probability of error are derived, and it is found that even at moderate to low SNRs, the modified K-means algorithm achieves a probability of error comparable to that achievable with a perfect channel estimate at the FC, while requiring no pilot symbols to be transmitted from the sensor nodes. Also, the problem of signal corruption due to imperfect DCP is investigated, and constellation shaping to minimize the probability of signal corruption is proposed and analyzed. The analysis is validated, and the promising performance of DCP for energy-efficient physical layer data fusion is illustrated, using Monte Carlo simulations.
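One plausible reading of the "modified K-means" idea is sketched below: cluster the received QPSK samples as in K-means, but update a single complex gain shared by all centroids (the composite DCP channel) rather than four independent cluster means. The channel, noise level, and iteration count are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch in the spirit of the paper's blind estimators: cluster received
# QPSK samples, but the update step fits ONE complex gain shared by all
# centroids (the composite DCP channel) instead of four free cluster means.
# The channel, noise level, and stopping rule are illustrative assumptions.
import cmath, random

random.seed(3)
CONST = [cmath.exp(2j * cmath.pi * (k + 0.5) / 4) for k in range(4)]  # QPSK
h_true = 1.3 * cmath.exp(0.4j)                      # unknown composite channel
rx = [h_true * random.choice(CONST)
      + complex(random.gauss(0, 0.15), random.gauss(0, 0.15))
      for _ in range(400)]

g = 1.0 + 0.0j                                      # initial gain estimate
for _ in range(20):
    # assignment step: nearest scaled constellation point
    labels = [min(CONST, key=lambda s: abs(y - g * s)) for y in rx]
    # update step: least-squares fit of the single shared complex gain
    g = sum(y * s.conjugate() for y, s in zip(rx, labels)) / \
        sum(abs(s) ** 2 for s in labels)

print(f"estimated |h| = {abs(g):.3f}  (true {abs(h_true):.3f})")
```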
Abstract:
With the pressing need to meet an ever-increasing energy demand, combustion systems utilizing fossil fuels have been major contributors to the carbon footprint. As the combustion of conventional energy resources continues to produce significant greenhouse gas (GHG) emissions, there is a strong emphasis on either upgrading or finding an energy-efficient, eco-friendly alternative to traditional hydrocarbon fuels. With recent developments in nanotechnology, the ability to manufacture materials with custom-tailored properties at the nanoscale has led to the discovery of a new class of high-energy-density fuels containing reactive metallic nanoparticles (NPs). Due to the high reactive interfacial area and enhanced thermal and mass transport properties of nanomaterials, the high heat of formation of these metallic fuels can now be released rapidly, thereby saving on specific fuel consumption and hence reducing GHG emissions. To examine the efficacy of nanofuels in energetic formulations, it is imperative to first study their combustion characteristics at the droplet scale, which forms the fundamental building block for any combustion system utilizing a liquid fuel spray. During combustion of such multiphase, multicomponent droplets, diffusional entrapment of high-volatility species leads to explosive boiling (at the superheat limit) and thereby to an intense internal pressure build-up. This pressure upsurge causes droplet fragmentation, either in the form of a microexplosion or of droplet puffing followed by atomization (with formation of daughter droplets), featuring disruptive burning. Both atomization modes represent primary mechanisms for extracting the high oxidation energies of metal NP additives by exposing them to the droplet flame (with daughter droplets acting as carriers of NPs). Atomization also serves as a natural mechanism for uniformly distributing and mixing the base fuel and for enhancing burning rates (due to the increase in specific surface area through formation of smaller daughter droplets). However, the efficiency of atomization depends on the thermo-physical properties of the base fuel and on the NP concentration and type. For instance, at dense loading, NP agglomeration may lead to shell formation, which would sustain the pressure upsurge and hence suppress atomization, reducing the droplet gasification rate. Conversely, the NPs may act as nucleation sites and aid boiling, and radiation absorption by NPs (from the flame) may enhance burning rates. Thus, nanoadditives may have opposing effects on the burning rate depending on the relative dominance of processes occurring at the droplet scale. The fundamental idea in this study is, first, to review the different thermo-physical processes that occur at the droplet and sub-droplet scales, such as surface regression, shell formation due to NP agglomeration, internal boiling, atomization/NP transport to the flame zone, and flame-acoustic interaction; and second, to understand how their interaction changes as a function of droplet size, NP type, NP concentration, and the type of base fuel. This understanding is crucial for obtaining phenomenological insights into the combustion behavior of novel nanofluid fuels, which show great promise as next-generation fuels. (C) 2016 Elsevier Ltd. All rights reserved.
Abstract:
Ensuring reliable, energy-efficient data communication in resource-constrained Wireless Sensor Networks (WSNs) is of primary concern. Traditionally, two types of re-transmission have been proposed for data loss: end-to-end (E2E) loss recovery and per-hop recovery. In these mechanisms, lost packets are re-transmitted from a source node or an intermediate node with a low success rate. Proliferation routing [1] for QoS provisioning in WSNs has low end-to-end reliability, is not energy efficient, and works only for transmissions from sensors to the sink. This paper proposes Reliable Proliferation Routing with low Duty Cycle (RPRDC) in WSNs, which integrates three core concepts, namely, (i) a reliable path finder, (ii) randomized dispersity, and (iii) forwarding. Simulation results demonstrate that the packet delivery success rate can be maintained at up to 93% with RPRDC, outperforming proliferation routing [1]. (C) 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
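A back-of-the-envelope sketch of why per-hop recovery tends to beat end-to-end recovery in lossy multi-hop WSNs: with per-link success probability p, h hops, and a bounded number of transmission attempts, per-hop retries multiply near-unity link probabilities, while E2E retries must re-cross the whole path. All parameters are illustrative.

```python
# Back-of-the-envelope sketch: end-to-end delivery probability over `hops`
# hops with per-link success p and at most `tries` (re)transmissions,
# comparing end-to-end recovery with per-hop recovery. Parameters are
# illustrative, not from the paper's simulations.
def e2e(p, hops, tries):
    path = p ** hops                     # one attempt must cross every hop
    return 1 - (1 - path) ** tries

def per_hop(p, hops, tries):
    link = 1 - (1 - p) ** tries          # each hop retries locally
    return link ** hops

p, hops, tries = 0.9, 5, 3
print(f"E2E recovery    : {e2e(p, hops, tries):.3f}")      # ~0.931
print(f"per-hop recovery: {per_hop(p, hops, tries):.3f}")  # ~0.995
```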
Abstract:
With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a major concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental questions in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload within a data center. We develop an online algorithm that makes a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delay. We propose algorithms that jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and helps us gain a more fundamental understanding of general online decision problems.
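A minimal sketch of the smoothed-online-convex-optimization setting mentioned above: each round pays a convex hit cost plus a switching cost β·|x − x_prev|, and the one-step optimal move soft-thresholds the distance to the round's minimizer. The quadratic costs and all numbers are illustrative assumptions, not the thesis's algorithms.

```python
# Sketch of "smoothed online convex optimization": each step pays a hit
# cost f_t(x) plus a switching cost BETA * |x - x_prev|. For a quadratic
# hit cost f_t(x) = a * (x - c)^2, the one-step optimal move stays put when
# the minimizer is close, else stops short of it by BETA / (2a) (a soft
# threshold). Costs and load targets are illustrative.
BETA = 2.0                 # price of changing the number of active servers

def step(x_prev, a, c):
    shrink = BETA / (2 * a)
    if abs(c - x_prev) <= shrink:
        return x_prev                    # not worth paying the switching cost
    return c - shrink if c > x_prev else c + shrink

x = 10.0                                 # initial active-server count
for c in [12, 18, 17, 9, 10, 11]:        # per-round load minimizers (assumed)
    x = step(x, a=1.0, c=c)
    print(f"target {c:>2} -> provision {x:.1f} servers")
```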
Abstract:
The final-year project "Diseño electrónico de una red de comunicaciones para monitorización de estructuras" (electronic design of a communications network for structural monitoring) consists of developing a communication system aimed at the deployment and management of a network with a large number of sensors for monitoring structures on board an aircraft. The main objective is to make the use of such a network viable, starting from the premise of creating an alternative that is more energy-efficient than the systems currently available for this purpose.
Abstract:
BACKGROUND: With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. FINDINGS: Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. CONCLUSIONS: BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net.
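As a conceptual illustration only of why read alignment maps so well onto massively parallel hardware: each read can be processed independently, which is exactly what GPU stream processors (and, in this toy sketch, CPU worker processes) exploit. A trivial exact matcher stands in for BWA's BWT-based inexact alignment, and the sequences are made up.

```python
# Conceptual sketch only: read alignment is embarrassingly parallel; each
# read can be mapped independently, which GPU stream processors (and here,
# CPU worker processes) exploit. A trivial exact matcher stands in for
# BWA's BWT-based inexact alignment; sequences are invented.
from multiprocessing import Pool

REFERENCE = "ACGTACGTTAGCACGTACGGTTAGC"    # toy reference (assumed)

def align(read):
    pos = REFERENCE.find(read)            # exact match instead of BWT search
    return read, pos

if __name__ == "__main__":
    reads = ["ACGT", "TTAG", "GGTT", "AAAA"]
    with Pool(4) as pool:                 # one worker per "stream processor"
        for read, pos in pool.map(align, reads):
            print(f"{read} -> {'unmapped' if pos < 0 else pos}")
```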
Abstract:
This paper presents a heterogeneous reconfigurable system for real-time applications applying particle filters. The system consists of an FPGA and a multi-threaded CPU. We propose a method to adapt the number of particles dynamically and utilise the run-time reconfigurability of the FPGA for reduced power and energy consumption. An application is developed that involves simultaneous mobile robot localisation and people tracking. It shows that the proposed adaptive particle filter can reduce computation time by up to 99%. Using run-time reconfiguration, we achieve a 34% reduction in idle power and save 26-34% of system energy. Our proposed system is up to 7.39 times faster and 3.65 times more energy efficient than the Intel Xeon X5650 CPU with 12 threads, and 1.3 times faster and 2.13 times more energy efficient than an NVIDIA Tesla C2070 GPU. © 2013 Springer-Verlag.
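A sketch of one common rule for adapting the particle count at run time (not necessarily the paper's method): resize the particle set when the effective sample size (ESS) of the importance weights leaves a target band.

```python
# Sketch of one common run-time rule for adapting the particle count (not
# necessarily the paper's): grow or shrink the particle set when the
# effective sample size (ESS) of the weights leaves a target band.
import random

def ess(weights):
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def adapt(n, weights, lo=0.3, hi=0.7, n_min=64, n_max=4096):
    ratio = ess(weights) / len(weights)
    if ratio < lo:        # weights degenerate: spend more particles
        return min(n * 2, n_max)
    if ratio > hi:        # weights healthy: save power with fewer particles
        return max(n // 2, n_min)
    return n

random.seed(4)
n = 512
weights = [random.random() ** 8 for _ in range(n)]   # heavily skewed weights
print(f"ESS ratio {ess(weights) / n:.2f} -> next n = {adapt(n, weights)}")
```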
Abstract:
This paper investigates how the efficiency and robustness of a skilled rhythmic task compete against each other in the control of a bimanual movement. Human subjects juggled a puck in 2D through impacts with two metallic arms, requiring rhythmic bimanual actuation. The arms' kinematics were constrained only by the position, velocity, and time of the impacts, while the rest of the trajectory did not influence the movement of the puck. To expose the task's robustness, we manipulated the task context in two distinct ways: the task tempo was assigned one of four different values (hence manipulating the time available to plan and execute each impact movement individually), and vision was withdrawn during half of the trials (hence reducing the sensory inflow). We show that when the tempo was fast, the actuation was rhythmic (no pause in the trajectory), while at slow tempo the actuation was discrete (with pause intervals between individual movements). Moreover, the withdrawal of visual information encouraged the rhythmic behavior at all four tested tempi. The discrete and rhythmic behaviors give different answers to the efficiency/robustness trade-off: discrete movements are energy efficient, while rhythmic movements impact the puck with negative acceleration, a property that preserves robustness. Moreover, we report that in all conditions the impact velocity of the arms was negatively correlated with the energy of the puck. This correlation tended to stabilize the task and was influenced by vision, again revealing different control strategies. In conclusion, this task involves different modes of control that balance efficiency and robustness depending on the context. © 2008 Springer-Verlag.
Abstract:
A monolithic design is proposed for low-noise sub-THz signal generation by integrating a reflector onto a dual laser source. The reflectivity and position of the reflector can be adjusted to obtain constructive feedback from the reflector to both lasers, causing a Vernier feedback effect. As a result, a 10-fold line narrowing (limited by the resolution of the simulation) is predicted using a transmission line model. Finally, a simple control scheme using an electrical feedback loop to adjust the laser biases is proposed to maintain the line-narrowing performance. This line-narrowing technique, comprising a passive integrated reflector, could enable a low-cost, compact, and energy-efficient solution for high-purity sub-THz signal generation. © The Institution of Engineering and Technology 2014.