8 results for energy-efficient
at Duke University
Abstract:
We propose a novel data-delivery method for delay-sensitive traffic that significantly reduces the energy consumption in wireless sensor networks without reducing the number of packets that meet end-to-end real-time deadlines. The proposed method, referred to as SensiQoS, leverages the spatial and temporal correlation between the data generated by events in a sensor network and realizes energy savings through application-specific in-network aggregation of the data. SensiQoS maximizes energy savings by adaptively waiting for packets from upstream nodes to perform in-network processing without missing the real-time deadlines of the data packets. SensiQoS is a distributed packet scheduling scheme in which nodes make localized decisions on when to schedule a packet for transmission to meet its end-to-end real-time deadline and on which neighbor to forward the packet to in order to save energy. We also present a localized algorithm for nodes to adapt to network traffic to maximize energy savings in the network. Simulation results show that SensiQoS improves energy savings in sensor networks where events are sensed by multiple nodes and spatial and/or temporal correlation exists among the data packets. Energy savings due to SensiQoS increase with the density of the sensor nodes and the size of the sensed events. © 2010 Harshavardhan Sabbineni and Krishnendu Chakrabarty.
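The heart of such a scheme is each node's localized trade-off between waiting (so correlated upstream packets can be aggregated) and forwarding (so the deadline is still met). The sketch below is a minimal illustration of that decision; the function names and the flat per-hop delay estimate are hypothetical, as the abstract does not specify them.

```python
def max_wait_time(deadline, now, hops_remaining, est_per_hop_delay):
    """Slack a node can spend waiting for upstream packets to aggregate
    while still leaving enough time to reach the sink by the deadline."""
    slack = (deadline - now) - hops_remaining * est_per_hop_delay
    return max(0.0, slack)

def schedule(packet, now, hops_remaining, est_per_hop_delay=0.02):
    # est_per_hop_delay is a placeholder estimate (seconds per hop).
    wait = max_wait_time(packet["deadline"], now, hops_remaining,
                         est_per_hop_delay)
    if wait > 0:
        # Hold the packet, hoping to aggregate correlated upstream data:
        # application-specific in-network aggregation saves transmissions.
        return ("wait", wait)
    # No slack left: transmit immediately to meet the end-to-end deadline.
    return ("transmit", 0.0)
```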
Abstract:
We analyze technology adoption decisions of manufacturing plants in response to government-sponsored energy audits. Overall, plants adopt about half of the recommended energy-efficiency projects. Using fixed-effects logit estimation, we find that adoption rates are higher for projects with shorter paybacks, lower costs, greater annual savings, higher energy prices, and greater energy conservation. Plants are 40% more responsive to initial costs than to annual savings, suggesting that subsidies may be more effective at promoting energy-efficient technologies than energy price increases. Adoption decisions imply hurdle rates of 50-100%, which is consistent with the investment criteria that small and medium-size firms state they use. © 2003 Elsevier B.V. All rights reserved.
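As a back-of-the-envelope illustration of how a payback criterion maps to an implied hurdle rate (the numbers below are hypothetical, not from the study): under a simple-payback rule, the implied annual return is roughly the reciprocal of the payback period.

```python
# Hypothetical project: $10,000 initial cost, $5,000 in annual energy savings.
initial_cost = 10_000.0
annual_savings = 5_000.0

payback_years = initial_cost / annual_savings   # 2.0 years
implied_hurdle = annual_savings / initial_cost  # ~0.5, i.e. a 50%/yr return

# A plant that only adopts projects with <= 2-year payback is implicitly
# demanding roughly a 50% annual return, consistent with the 50-100% range.
print(f"payback: {payback_years:.1f} yr, implied hurdle: {implied_hurdle:.0%}")
```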
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
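For concreteness, the flow-time of a job is the time it spends in the system: its completion time minus its release (arrival) time. The sketch below computes the total weighted flow-time of a given schedule; it is purely illustrative, as the thesis treats far more general models than this toy setting.

```python
def total_weighted_flow_time(jobs, completion):
    """jobs: dict job_id -> (release_time, weight);
    completion: dict job_id -> completion_time.
    Flow-time F_j = C_j - r_j; the objective is sum_j w_j * F_j."""
    return sum(w * (completion[j] - r) for j, (r, w) in jobs.items())

# Two unit-weight jobs released at t=0 and t=1, finishing at t=3 and t=4:
jobs = {"a": (0.0, 1.0), "b": (1.0, 1.0)}
completion = {"a": 3.0, "b": 4.0}
print(total_weighted_flow_time(jobs, completion))  # (3-0) + (4-1) = 6.0
```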
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces an iterated rounding technique for offline flow-time optimization and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multidimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first framework based on linear/convex programming duality for bounding the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
Abstract:
Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low-power devices to communicate by reusing nearby existing radio signals. Because the devices do not need to generate their own radio signal, they can be simple in design, very inexpensive, and extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.
The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both sources of interference arise from the scattering of the transmitted signal off objects, both stationary and moving, in the environment. Additionally, measurement of the location of the backscatter device is degraded by both the clutter and the modulation of the signal return.
This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes run-length-limited coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature amplitude modulation (QAM) schemes and increases the rate by up to a factor of two compared with previous methods.
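The abstract does not detail the specific run-length-limited (RLL) constraints used. As a hedged illustration, the classic (d, k) RLL constraint bounds every run of 0s between consecutive 1s, and counting the constrained sequences makes the (small) rate cost visible:

```python
from itertools import product
from math import log2

def is_rll(bits, d, k):
    """Classic (d, k)-RLL check: every run of 0s between consecutive 1s
    has length at least d, and no run of 0s anywhere exceeds k."""
    count, seen_one = 0, False
    for b in bits:
        if b == 0:
            count += 1
            if count > k:
                return False
        else:
            if seen_one and count < d:
                return False
            seen_one, count = True, 0
    return True

# Enumerate all length-8 words and count those satisfying (d=1, k=3).
n, d, k = 8, 1, 3
valid = sum(is_rll(bits, d, k) for bits in product((0, 1), repeat=n))
# log2(valid)/n < 1 is the rate of the constrained system: the price paid,
# in rate, for shaping the signal against clutter and self-interference.
print(valid, log2(valid) / n)
```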
Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding of the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared with previous methods, optimally low range sidelobes are achieved. Moreover, since both the interrogating-waveform coding and the transponder communication coding modulate the instantaneous phase of the signal, cross-interference between the localization and communication tasks exists. A phase-discriminating algorithm is proposed to separate the waveform coding from the communication coding upon reception and to achieve localization with up to 3 dB more signal energy than previously reported results.
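The abstract does not name the bi-phase codes used; Barker codes are the standard example of binary phase codes whose aperiodic autocorrelation sidelobes are optimally low (magnitude at most 1). A quick numerical check with the length-13 Barker code:

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Aperiodic autocorrelation = matched-filter output for the phase-coded pulse.
acf = np.correlate(barker13, barker13, mode="full")
peak = acf[len(barker13) - 1]                  # mainlobe height: 13
sidelobes = np.delete(acf, len(barker13) - 1)
print(peak, np.abs(sidelobes).max())           # 13, 1 -> ~ -22.3 dB sidelobes
```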
The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.
Simulations comparing the performance of different codes corroborate the theoretical results and illustrate the trade-offs between information rate and clutter mitigation, as well as among choices of waveform-channel coding pairs. Experimental results from a brass-board microwave system in an indoor environment are also presented and discussed.
Abstract:
Based on Pulay's direct inversion in the iterative subspace (DIIS) approach, we present a method to accelerate self-consistent field (SCF) convergence. In this method, the quadratic augmented Roothaan-Hall (ARH) energy function, proposed recently by Høst and co-workers [J. Chem. Phys. 129, 124106 (2008)], is used as the object of minimization for obtaining the linear coefficients of the Fock matrices within DIIS. This differs from the traditional DIIS of Pulay, which minimizes an objective function derived from the commutator of the density and Fock matrices. Our results show that the present algorithm, abbreviated ADIIS, is more robust and efficient than the energy-DIIS (EDIIS) approach. In particular, several examples demonstrate that the combination of ADIIS and DIIS ("ADIIS+DIIS") is highly reliable and efficient in accelerating SCF convergence.
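For reference, Pulay's classic DIIS step picks coefficients c that minimize the norm of the combined error vector sum_i c_i e_i subject to sum_i c_i = 1 (ADIIS replaces this objective with the ARH energy). A minimal sketch of the classic commutator-DIIS solve, assuming the error vectors are already available:

```python
import numpy as np

def diis_coefficients(error_vecs):
    """Solve min_c ||sum_i c_i e_i||^2  s.t.  sum_i c_i = 1
    via the Lagrange-multiplier linear system."""
    m = len(error_vecs)
    B = np.array([[np.vdot(ei, ej).real for ej in error_vecs]
                  for ei in error_vecs])
    # Augmented KKT system: [B 1; 1^T 0] [c; lam] = [0; 1]
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = B
    A[:m, m] = A[m, :m] = 1.0
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    return np.linalg.solve(A, rhs)[:m]

# The next Fock matrix is the same linear combination of the stored
# Fock matrices: F_new = sum_i c_i F_i.
```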
Abstract:
Nonradiative coupling between conductive coils is a candidate mechanism for wireless energy transfer applications. In this paper we propose a power relay system based on a near-field metamaterial superlens and present a thorough theoretical analysis of this system. We use time-harmonic circuit formalism to describe all interactions between two coils attached to external circuits and a slab of anisotropic medium with homogeneous permittivity and permeability. The fields of the coils are found in the point-dipole approximation using Sommerfeld integrals which are reduced to standard special functions in the long-wavelength limit. We show that, even with a realistic magnetic loss tangent of order 0.1, the power transfer efficiency with the slab can be an order of magnitude greater than free-space efficiency when the load resistance exceeds a certain threshold value. We also find that the volume occupied by the metamaterial between the coils can be greatly compressed by employing magnetic permeability with a large anisotropy ratio. © 2011 American Physical Society.
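As context for the efficiency claims, the baseline (no superlens) two-coil transfer at resonance is governed by a textbook circuit model. The sketch below uses that standard model with hypothetical parameter values; it is not the paper's Sommerfeld-integral analysis of the anisotropic slab.

```python
def two_coil_efficiency(omega, M, R1, R2, RL):
    """Resonant inductive link: R1, R2 are coil resistances, RL the load,
    M the mutual inductance. The reflected impedance (omega*M)^2/(R2+RL)
    determines how much power leaves the primary circuit."""
    refl = (omega * M) ** 2 / (R2 + RL)
    # Fraction of primary power transferred to the secondary, times the
    # fraction of secondary power dissipated in the load.
    return (refl / (R1 + refl)) * (RL / (R2 + RL))

# Hypothetical numbers: a ~10 MHz link with weak free-space coupling.
print(two_coil_efficiency(omega=2 * 3.14159e7, M=1e-9,
                          R1=0.5, R2=0.5, RL=5.0))
```

In this model, boosting the effective mutual inductance M (which is what the near-field superlens does) raises the efficiency roughly quadratically while the coupling remains weak.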
Abstract:
Energy storage technologies are crucial for the efficient utilization of electricity. Supercapacitors and rechargeable batteries are among the currently available energy storage systems. Transition metal oxides, hydroxides, and phosphates are the most intensely investigated electrode materials for supercapacitors and rechargeable batteries because of their high theoretical charge storage capacities resulting from reversible electrochemical reactions. Their insulating nature, however, causes sluggish electron transport kinetics within these electrode materials, preventing them from reaching the theoretical maximum. The conductivity of these transition metal-based electrode materials can be improved through three main approaches: nanostructuring, chemical substitution, and the introduction of carbon matrices. These approaches often lead to unique electrochemical properties when combined and balanced.
The ethanol-mediated solvothermal synthesis we developed is highly effective for controlling the size and morphology of transition metal-based electrode materials for both pseudocapacitors and batteries. The morphology and degree of crystallinity of nickel hydroxide are systematically changed by adding various amounts of glucose to the solvothermal synthesis. Nickel hydroxide produced in this manner exhibited increased pseudocapacitance, partially attributed to the increased surface area. Interestingly, in terms of improving electrochemical performance, this morphology effect on cobalt-doped nickel hydroxide is more pronounced at low cobalt contents than at high cobalt contents.
Moreover, a thin layer of densely packed nickel oxide flakes on a carbon paper substrate was successfully prepared via the glucose-assisted solvothermal synthesis, resulting in improved electrode conductivity. When reduced graphene oxide was used as a conductive coating on the as-prepared nickel oxide electrode, the electrode conductivity improved only slightly. This finding reveals that the benefit of a reduced graphene oxide coating in increasing electrode conductivity is limited when the electrode is already highly conductive.
We were also able to control the interlayer spacing and reduce the particle size of a layered titanium hydrogen phosphate material using our ethanol-mediated solvothermal reaction. In layered structures, interlayer spacing is a key parameter for fast ion diffusion kinetics. The nanosized layered structure prepared via our method, however, exhibited high sodium-ion storage capacity regardless of the interlayer spacing, implying that interlayer space may not be the primary factor for sodium-ion diffusion in nanostructured materials, where many interstitial sites are available for sodium-ion diffusion.
Our ethanol-mediated solvothermal reaction was also effective for synthesizing NaTi2(PO4)3 nanoparticles of uniform size and morphology, well connected by a carbon nanotube network. This composite electrode exhibited high capacity, comparable to that in aqueous electrolyte, probably because the uniform morphology and size ensure that a surface favorable for sodium-ion diffusion is always available in every particle.
A fundamental understanding of the relationship between electrode microstructure and electrochemical properties, as discussed in this dissertation, will be important for designing high-performance energy storage systems.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
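To make the CTMC-to-sample mapping concrete: an exciton hops among chromophore "states" with exponential holding times until it reaches an absorbing state (photon emission), and the emission time is then a draw from a phase-type distribution. A generic sketch of such sampling, with a toy rate matrix rather than a fitted RET model:

```python
import random

def sample_phase_type(rates, start, absorbing):
    """rates[i]: dict of outgoing transition rates of a CTMC from state i.
    Returns the time to reach the absorbing state (e.g., photon emission)
    from `start` -- one draw from the chain's phase-type distribution."""
    state, t = start, 0.0
    while state != absorbing:
        out = rates[state]
        total = sum(out.values())
        t += random.expovariate(total)        # exponential holding time
        # Jump to a neighbor with probability proportional to its rate.
        r, acc = random.uniform(0, total), 0.0
        for nxt, rate in out.items():
            acc += rate
            if r <= acc:
                state = nxt
                break
    return t

# Toy 2-chromophore network: states 0 <-> 1, either can emit into 'E'.
rates = {0: {1: 2.0, "E": 0.5}, 1: {0: 1.0, "E": 1.5}}
samples = [sample_phase_type(rates, start=0, absorbing="E") for _ in range(5)]
print(samples)
```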
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is freed from the constraint of resolvable dyes and achieves a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE)-based taggant identification guarantees high accuracy even with only a few hundred detected photons.
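A hedged sketch of the MLE identification step: given photon arrival-time samples and a small library of candidate taggants, each simplified here to a single exponential lifetime (the dissertation uses full phase-type models), pick the taggant that maximizes the log-likelihood.

```python
import math

def log_likelihood_exp(times, rate):
    """Log-likelihood of i.i.d. exponential(rate) photon delay samples."""
    return sum(math.log(rate) - rate * t for t in times)

def identify_taggant(times, library):
    """library: dict taggant_id -> decay rate. Returns the MLE choice."""
    return max(library, key=lambda tag: log_likelihood_exp(times, library[tag]))

# Hypothetical decay rates (1/ns) and placeholder measured delays.
library = {"tag_A": 0.5, "tag_B": 1.0, "tag_C": 2.0}
times = [0.9, 1.4, 0.7, 2.1, 1.1]
print(identify_taggant(times, library))
```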
In addition, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware of traditional computers, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as a specialized functional unit, or organized as a discrete accelerator, to deliver substantial speedups and power savings.