819 results for Energy consumption data sets
Abstract:
A common and practical paradigm in cooperative communication systems is the use of a dynamically selected 'best' relay to decode and forward information from a source to a destination. Such systems operate in two phases: a relay selection phase, in which the system spends transmission time and energy to select the best relay, and a data transmission phase, in which it exploits the spatial diversity benefits of selection to transmit data. In this paper, we derive closed-form expressions for the overall throughput and energy consumption, and study the time and energy trade-off between the selection and data transmission phases. To this end, we analyze a baseline non-adaptive system and several adaptive systems that adapt the selection phase, the relay transmission power, or the transmission time. Our results show that while selection yields significant benefits, the time and energy overhead of the selection phase can be substantial. In fact, at the optimal operating point, the selection can be far from perfect; how far depends on the number of relays and the mode of adaptation. The results also provide guidelines on the optimal system operating point for different modes of adaptation. The analysis further offers new insights into the fast splitting-based algorithm considered in this paper for relay selection.
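To make the selection overhead concrete, below is a minimal Monte Carlo sketch of a Qin-Berry-style splitting selection in the CDF domain; the channel model, interval-update rule, and all parameters are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import random

def splitting_select(u, n):
    """Isolate the relay with the largest metric u[i] (u uniform on [0,1],
    i.e. channel gains mapped through their CDF). Relays whose metric lies
    in (lo, hi] transmit in a slot; collisions split the interval.
    Returns (winner, slots_used)."""
    lo, hi, lo_min = 1.0 - 1.0 / n, 1.0, 0.0
    for slot in range(1, 200):
        active = [i for i, ui in enumerate(u) if lo < ui <= hi]
        if len(active) == 1:            # success: best relay isolated
            return active[0], slot
        if len(active) >= 2:            # collision: best relay is in (lo, hi]
            lo_min, lo = lo, (lo + hi) / 2
        elif lo_min > 0.0:              # idle after a collision: best in (lo_min, lo]
            hi, lo = lo, (lo_min + lo) / 2
        else:                           # idle, no collision yet: shift window down
            hi, lo = lo, max(0.0, lo - 1.0 / n)
    return None, 200                    # practically unreachable fallback

random.seed(1)
n = 10
slots = [splitting_select([random.random() for _ in range(n)], n)[1]
         for _ in range(10_000)]
print(f"average selection overhead: {sum(slots) / len(slots):.2f} slots")
```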
Abstract:
We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√n log n). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the Θ(1/log n) refresh rate for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will contain an arbitrarily large fraction of the nodes, enabling approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of the nodes. This construction may also prove useful in other communication settings on the random geometric graph.
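The supercritical thermodynamic regime is easy to visualize empirically: with connection radius c/√n for c above the percolation threshold, the graph need not be connected, yet a giant component holds most of the nodes. A small networkx sketch (the constant c and the size n are illustrative choices, not from the paper):

```python
import networkx as nx

n, c = 2000, 1.2  # c above the percolation threshold (illustrative value)
G = nx.random_geometric_graph(n, c / n**0.5)

components = sorted(nx.connected_components(G), key=len, reverse=True)
print(f"components: {len(components)}, "
      f"largest holds {len(components[0]) / n:.1%} of the nodes")
```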
Abstract:
In recent years, India has emerged as one of the fastest-growing economies of the world, necessitating an equally rapid increase in modern energy consumption. With an imminent global climate change threat, India will find it difficult to sustain these rising energy use levels while achieving high economic growth. It will have to follow an energy-efficient pathway in attaining this goal. In this context, an attempt is made to present India's achievements on the energy efficiency front by tracing the evolution of policies and their impacts. The results indicate that India has made substantial progress in improving energy efficiency, evident from the reductions achieved in the energy intensity of GDP, to the tune of 88% during 1980-2007. Similar reductions have been observed both for the overall Indian economy and for its major sectors. In terms of energy intensity of GDP, India ranks a relatively high ninth among the top 30 energy-consuming countries of the world. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Energy is a major input for small-scale industries such as grain mills. Based on a sample survey of several mills spread over Karnataka, a state in India, a number of energy analyses were conducted, primarily to establish relationships between the key variables and secondarily to examine them in more detail. First, the specific energy consumption (SEC) was computed for all the mills so as to compare their efficiencies of energy use; a wide disparity in SEC exists among the various grain mills. To understand these disparities better, regression analyses were performed on energy versus production, SEC versus production, and energy/SEC versus percentage production capacity utilization. The studies show that mills in the smaller production ranges have lower capacity utilization. This paper also examines the energy savings possible by shifting industries from the lower production ranges to the next higher range, thereby utilizing installed production capacity optimally. This leads to an overall energy saving of 23.12% for the foodgrain sector and 18.67% for the paddy-dehusking subgroup. Extrapolated to the whole state, this amounts to a saving of 55 million kWh.
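For concreteness, a minimal sketch of the SEC comparison (all mill figures below are hypothetical placeholders, not the survey data):

```python
# SEC (specific energy consumption) = energy used per unit of output;
# lower SEC at comparable output indicates more efficient energy use.
mills = [  # hypothetical figures, for illustration only
    {"name": "mill A", "energy_kwh": 120_000, "output_tonnes": 600},
    {"name": "mill B", "energy_kwh": 150_000, "output_tonnes": 1_200},
    {"name": "mill C", "energy_kwh": 90_000,  "output_tonnes": 500},
]
for m in mills:
    sec = m["energy_kwh"] / m["output_tonnes"]
    print(f"{m['name']}: SEC = {sec:6.1f} kWh/tonne")
```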
Abstract:
This dissertation examines the impacts of energy and climate policies on the energy and forest sectors, focusing on the case of Finland. The thesis consists of an introductory article and four separate studies. The dissertation was motivated by the climate concern and the increasing demand for renewable energy. In particular, the renewable energy consumption and greenhouse gas emission reduction targets of the European Union were driving this work. In Finland, both the forest and energy sectors play key roles in achieving these targets. In fact, the separation between the forest and energy sectors is diminishing, as the energy sector is utilizing increasing amounts of wood in energy production and the forest sector is becoming an increasingly important energy producer. The objective of this dissertation is to identify and measure the impacts of climate and energy policies on the forest and energy sectors. In climate policy, the focus is on emissions trading; in energy policy, the dissertation focuses on the promotion of renewable forest-based energy use. The dissertation relies on empirical numerical models based on microeconomic theory: numerical partial equilibrium mixed complementarity problem models were constructed to study the markets under scrutiny. The separate studies focus on co-firing of wood biomass and fossil fuels, liquid biofuel production in the pulp and paper industry, and the impacts of climate policy on the pulp and paper sector. The dissertation shows that policies promoting wood-based energy may have unexpected negative impacts. When a feed-in tariff is imposed together with emissions trading, production of renewable electricity in some plants might decrease as the emissions price increases. The dissertation also shows that in liquid biofuel production, an investment subsidy may cause high direct policy costs and other negative impacts compared to other policy instruments. The results also indicate that, from the climate mitigation perspective, perfect competition is the preferred wood market competition structure, at least if the emissions trading system is not global. In conclusion, this dissertation suggests that when promoting the use of wood biomass in energy production, the preferred policy instruments are subsidies that directly promote renewable energy production (i.e., a production subsidy, renewables subsidy, or feed-in premium). Moreover, the policy instrument should be designed to depend on the emissions price or on the substitute price. Finally, this dissertation shows that when planning policies to promote wood-based renewable energy, the goals of the policy scheme should be clear before decisions are made on the choice of policy instruments.
Abstract:
Our study concerns an important current problem: the diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that maximizes the number of nodes influenced in the network. The lambda-coverage problem is concerned with finding a minimum-size set of key nodes that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms that we call the Shapley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, the NIPS coauthorship data set, the Netscience data set, the High-Energy Physics data set, and the Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spread of a technology in the market through viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes, those that can influence other nodes in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximizes the number of nodes influenced in the network, and 2) the lambda-coverage problem, which involves finding a minimum-size set of influential nodes that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality, computational complexity, or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
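The Shapley value machinery can be sketched compactly. Below is a hedged illustration that approximates each node's Shapley value by permutation sampling, using a simple one-hop coverage function as the coalition value; the value function, graph, and sample count are stand-ins, not the paper's diffusion model:

```python
import random
import networkx as nx

def coverage(G, coalition):
    """Value of a coalition: nodes it covers (members plus their neighbors)."""
    covered = set(coalition)
    for v in coalition:
        covered.update(G.neighbors(v))
    return len(covered)

def shapley_topk(G, k, samples=200, seed=0):
    """Monte Carlo Shapley values via random permutations; return top-k nodes."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    phi = dict.fromkeys(nodes, 0.0)
    for _ in range(samples):
        rng.shuffle(nodes)
        coalition, value = set(), 0
        for v in nodes:
            coalition.add(v)
            new_value = coverage(G, coalition)
            phi[v] += (new_value - value) / samples  # marginal contribution
            value = new_value
    return sorted(phi, key=phi.get, reverse=True)[:k]

G = nx.karate_club_graph()
print(shapley_topk(G, k=5))
```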
Abstract:
Relentless CMOS scaling coupled with lower design tolerances is making ICs increasingly susceptible to wear-out-related permanent faults and transient faults, necessitating on-chip fault tolerance in future chip multiprocessors (CMPs). In this paper, we introduce a new energy-efficient fault-tolerant CMP architecture known as Redundant Execution using Critical Value Forwarding (RECVF). RECVF is based on two observations: (i) forwarding critical instruction results from the leading to the trailing core enables the latter to execute faster, and (ii) this speedup can be exploited to reduce energy consumption by operating the trailing core at a lower voltage-frequency level. Our evaluation shows that RECVF consumes 37% less energy than conventional dual modular redundant (DMR) execution of a program. It consumes only 1.26 times the energy of a non-fault-tolerant baseline and has a performance overhead of just 1.2%.
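The two energy figures are consistent with each other if conventional DMR is taken to cost roughly twice the baseline energy (two cores running the full program); the abstract does not state DMR's overhead, so this is an assumption made purely to check the arithmetic:

```latex
\frac{E_{\text{RECVF}}}{E_{\text{DMR}}}
  \approx \frac{1.26\, E_{\text{base}}}{2\, E_{\text{base}}}
  = 0.63,
\qquad 1 - 0.63 = 0.37 \;\Rightarrow\; 37\%\ \text{less energy than DMR.}
```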
Abstract:
We study the trade-off between delivery delay and energy consumption in delay-tolerant mobile wireless networks that use two-hop relaying. The source may not have perfect knowledge of the delivery status at every instant. We formulate the problem as a stochastic control problem with partial information and study structural properties of the optimal policy. We also propose a simple suboptimal policy. We then compare the performance of the suboptimal policy against that of the optimal control with perfect information; the latter bounds the performance of the proposed policy with partial information. Several other related open-loop policies are also compared against these bounds.
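As a feel for the trade-off, here is a small Monte Carlo sketch of an open-loop two-hop policy: the source copies the packet to every relay it meets before a cutoff time tau, and delivery occurs when any copy-holder meets the destination. Exponential inter-meeting times and all parameter values are illustrative modelling assumptions, not the paper's setup:

```python
import random

def simulate(n_relays=20, beta=0.1, tau=5.0, trials=2000, seed=0):
    """Return (mean delivery delay, mean copies made) under the cutoff policy.
    Copies made is a simple proxy for transmission energy."""
    rng = random.Random(seed)
    delays, copies_made = [], []
    for _ in range(trials):
        # time for each relay to meet the source, then the destination
        t_src = [rng.expovariate(beta) for _ in range(n_relays)]
        t_dst = [rng.expovariate(beta) for _ in range(n_relays)]
        delay = rng.expovariate(beta)  # direct source-destination meeting
        copies = 0
        for ts, td in zip(t_src, t_dst):
            if ts <= tau:              # relay received a copy before the cutoff
                copies += 1
                delay = min(delay, ts + td)
        delays.append(delay)
        copies_made.append(copies)
    return sum(delays) / trials, sum(copies_made) / trials

for tau in (1.0, 5.0, 20.0):
    d, c = simulate(tau=tau)
    print(f"tau={tau:5.1f}  mean delay={d:6.2f}  mean copies={c:5.2f}")
```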
Abstract:
We introduce a multifield comparison measure for scalar fields that helps in studying relations between them. The comparison measure is insensitive to noise in the scalar fields and to noise in their gradients. Further, it can be computed robustly and efficiently. Results from the visual analysis of various data sets from climate science and combustion applications demonstrate the effective use of the measure.
Abstract:
Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of an embedded system strongly influences critical system design objectives like area, power, and performance. Hence, the embedded system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce a short design cycle time. In this paper, we address the multi-level multi-objective memory architecture exploration problem through a combination of exhaustive-search-based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache-based architectures at the inner level. In this two-step integrated approach for SPRAM-Cache-based hybrid architectures, the first step partitions data between the SPRAM and the cache, and the second step performs a cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to 34% improvement over an existing approach while also optimizing the off-chip memory address space. We experimented with 3 embedded multimedia applications; our approach explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
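As a flavor of the graph partitioning formulation, here is a toy sketch: data sections are nodes, edge weights record how strongly two sections are accessed together, and a min-cut bisection keeps strongly related sections in the same partition. The weights, the affinity interpretation, and the Kernighan-Lin heuristic are illustrative stand-ins, not the paper's exact formulation:

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Nodes are data sections; an edge weight counts how often two sections
# are accessed together (temporal affinity). Illustrative numbers only.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 12), ("a", "c", 3), ("b", "c", 9),
    ("c", "d", 7), ("b", "d", 1), ("d", "e", 11),
])

# Min-cut bisection keeps high-affinity sections in the same partition
# (e.g. the same memory bank), so few heavy edges cross the cut.
part1, part2 = kernighan_lin_bisection(G, weight="weight", seed=0)
print(sorted(part1), sorted(part2))
```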
Abstract:
Frequent accesses to the register file make it one of the major sources of energy consumption in ILP architectures. The large number of functional units connected to a large unified register file in VLIW architectures makes power dissipation in the register file even worse because of the need for a large number of ports. High power dissipation in the relatively small area occupied by a register file leads to a high power density, making the register file one of the prime hot-spots and highly susceptible to a catastrophic heatstroke. This in turn impacts performance and cost, because of the need for periodic cool-down and for sophisticated packaging and cooling techniques, respectively. Clustered VLIW architectures partition the register file among clusters of functional units and reduce the number of ports required, thereby reducing the power dissipation. However, we observe that the aggregate accesses to register files in clustered VLIW architectures (and the associated energy consumption) become very high compared to centralized VLIW architectures, which can be attributed to a large number of explicit inter-cluster communications. Snooping-based clustered VLIW architectures provide a limited but very fast form of inter-cluster communication by allowing some functional units to read some operands directly from the register file of another cluster. In this paper, we propose instruction scheduling algorithms that exploit this limited snooping capability to reduce the register file energy consumption on average by 12% and 18%, and to improve overall performance by 5% and 11%, for a 2-clustered and a 4-clustered machine respectively, over an earlier state-of-the-art clustered scheduling algorithm, when evaluated in the context of snooping-based clustered VLIW architectures.
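The scheduling lever can be illustrated with a toy count of explicit inter-cluster copies: an operand produced on another cluster normally needs an explicit copy, while snooping lets a bounded number of such operands per instruction be read directly from the remote register file. The schedule encoding and the one-port snoop budget below are illustrative assumptions:

```python
# Each bound instruction: (cluster it executes on, clusters of its operands).
# Illustrative hand-made schedule, not output of a real scheduler.
schedule = [
    (0, [0, 1]),
    (1, [0, 0]),
    (0, [1, 1]),
    (1, [1, 0]),
]

def explicit_copies(schedule, snoop_ports):
    """Count explicit inter-cluster copies; snooping covers up to
    `snoop_ports` remote operands per instruction for free."""
    copies = 0
    for dest, producers in schedule:
        remote = sum(1 for p in producers if p != dest)
        copies += max(0, remote - snoop_ports)
    return copies

print("without snooping:", explicit_copies(schedule, snoop_ports=0))
print("with snooping   :", explicit_copies(schedule, snoop_ports=1))
```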
Abstract:
Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters, so the embedded system designer performs a complete memory architecture exploration. This is a multi-objective optimization problem that can be tackled as a two-level optimization problem: the outer level explores various memory architectures, while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce a short design cycle time. In this paper, we address the multi-level multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level, the memory architecture exploration is done by picking memory modules directly from an ASIC memory library. This allows the memory architecture exploration to proceed in an integrated framework, where memory allocation, memory exploration, and data layout work in a tightly coupled way to yield optimal design points with respect to area, power, and performance. We experimented with 3 embedded applications; our approach explores several thousand memory architectures for each application, yielding a few hundred optimal design points in a few hours of computation time on a standard desktop.
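The multi-objective flavor of the exploration can be sketched with a plain Pareto filter over candidate configurations: generate configurations, score each on (area, power, cycles), and keep the non-dominated set. The configuration space and the cost model below are made-up illustrations, not the paper's ASIC library or evaluation:

```python
import random

random.seed(0)
# Candidate memory configurations (made-up parameter space).
configs = [{"banks": random.choice([1, 2, 4, 8]),
            "cache_kb": random.choice([0, 4, 8, 16])} for _ in range(200)]

def cost(c):
    """Toy (area, power, cycles) model; smaller is better in every objective."""
    area = 2.0 * c["banks"] + 1.5 * c["cache_kb"]
    power = 1.0 * c["banks"] + 2.0 * c["cache_kb"] + 5.0
    cycles = 1000.0 / (1 + c["banks"]) + 800.0 / (1 + c["cache_kb"])
    return (area, power, cycles)

def pareto_front(points):
    """Keep points not dominated by any other (<= everywhere, < somewhere)."""
    costs = [cost(p) for p in points]
    front = []
    for i, ci in enumerate(costs):
        dominated = any(all(a <= b for a, b in zip(cj, ci)) and cj != ci
                        for j, cj in enumerate(costs) if j != i)
        if not dominated:
            front.append(points[i])
    return front

front = pareto_front(configs)
print(f"{len(front)} Pareto-optimal design points out of {len(configs)}")
```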
Abstract:
Miniaturization of devices and the ensuing decrease in the threshold voltage have led to a substantial increase in the leakage component of the total processor energy consumption. The relatively simple issue logic and the presence of a large number of functional units in VLIW and clustered VLIW architectures attribute a large fraction of this leakage energy consumption to the functional units. However, functional units are not fully utilized in VLIW architectures because of the inherent variations in the ILP of programs. This underutilization is even more pronounced in clustered VLIW architectures because of contention for the limited number of slow inter-cluster communication channels, which leads to many short idle cycles. In the past, architectural schemes have been proposed to obtain leakage energy benefits by aggressively exploiting the idleness of functional units. However, the presence of many short idle cycles causes frequent transitions between the active mode and the sleep mode, which adversely affects the energy benefits of a purely hardware-based scheme. In this paper, we propose and evaluate a compiler instruction scheduling algorithm that assists such a hardware-based scheme in the context of VLIW and clustered VLIW architectures. The proposed scheme exploits the scheduling slack of instructions to orchestrate the functional-unit mapping with the objective of reducing the number of transitions in functional units, thereby keeping them off for longer durations. The proposed compiler-assisted scheme obtains a further 12% reduction in the energy consumption of functional units, with negligible performance degradation, over a hardware-only scheme for a VLIW architecture. The benefits are 15% and 17% in the context of a 2-clustered and a 4-clustered VLIW architecture, respectively. Our test bed uses the Trimaran compiler infrastructure.
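The intuition that packing work onto fewer functional units lengthens idle stretches can be shown with a toy transition counter; the per-cycle issue counts and the two binding policies below are illustrative, not the paper's scheduling algorithm:

```python
def transitions(usage_by_cycle, n_fus):
    """Count active<->idle mode switches summed over all functional units."""
    total = 0
    for fu in range(n_fus):
        prev = False
        for busy in usage_by_cycle:
            now = fu in busy
            total += (now != prev)
            prev = now
    return total

ops_per_cycle = [2, 1, 3, 1, 2, 1]  # how many operations issue each cycle
n_fus = 4

# Binding A: rotate the starting functional unit every cycle (spreads work).
rotating = [{(i + j) % n_fus for j in range(k)}
            for i, k in enumerate(ops_per_cycle)]
# Binding B: always pack operations onto the lowest-numbered units,
# leaving the higher-numbered units with long contiguous idle stretches.
packed = [set(range(k)) for k in ops_per_cycle]

print("rotating binding:", transitions(rotating, n_fus), "transitions")
print("packed binding  :", transitions(packed, n_fus), "transitions")
```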
Abstract:
In most taxa, species boundaries are inferred from differences in morphology or DNA sequences revealed by taxonomic or phylogenetic analyses. In crickets, acoustic mating signals, or calling songs, have species-specific structures and provide a third data set for inferring species boundaries. We examined the concordance among species boundaries obtained using acoustic, morphological, and molecular data sets in the field cricket genus Itaropsis. This genus currently comprises only one valid species, Itaropsis tenella, with a broad distribution in western peninsular India and Sri Lanka. Calling songs of males sampled from four sites in peninsular India exhibited significant differences in a number of call features, suggesting the existence of multiple species. Cluster analysis of the acoustic data, molecular phylogenetic analyses, and phylogenetic analyses combining all data sets suggested the existence of three clades. Despite the differences in calling songs, full congruence among all the data sets was not obtained, even though the resultant lineages were largely concordant with the acoustic clusters. The genus Itaropsis could thus be represented by three morphologically cryptic incipient species in peninsular India; their distributions are congruent with the usual patterns of endemism in the Western Ghats, India. Song evolution is analysed through the divergence in syllable period, syllable and call duration, and dominant frequency.
Abstract:
The unending quest for performance improvement, coupled with advancements in integrated circuit technology, has led to the development of new architectural paradigms. The speculative multithreaded (SpMT) architecture philosophy relies on aggressive speculative execution for improved performance. However, aggressive speculative execution is a mixed blessing: it improves performance when successful, but adversely affects energy consumption (and performance) through useless computation in the event of mis-speculation. Dynamic instruction criticality information can be usefully applied to control and guide such aggressive speculative execution. In this paper, we present a model of micro-execution for SpMT architectures that we have developed to determine dynamic instruction criticality. We have also developed two novel techniques utilizing the criticality information, namely delaying non-critical loads and criticality-based thread prediction, for reducing useless computations and energy consumption. Experimental results showing the break-up of critical instructions and the effectiveness of the proposed techniques in reducing energy consumption are presented in the context of a multiscalar processor that implements the SpMT architecture. Our experiments show 17.7% and 11.6% reductions in dynamic energy for the criticality-based thread prediction and the criticality-based delayed-load schemes respectively, while the improvements in dynamic energy-delay product are 13.9% and 5.5%, respectively. (c) 2012 Published by Elsevier B.V.