14 results for Trade liberalisation
at Indian Institute of Science - Bangalore - India
Abstract:
This paper presents a power, latency, and throughput trade-off study on NoCs, varying microarchitectural (e.g. pipelining) and circuit-level (e.g. frequency and voltage) parameters. We change pipelining depth, operating frequency, and supply voltage for three example NoCs: a 16-node 2D torus, a tree network, and a reduced 2D torus. We use an in-house NoC exploration framework capable of topology generation and comparison, using parameterized models of routers and links developed in SystemC. The framework utilizes interconnect power and delay models from a low-level modelling tool called Intacte [1]. We find that increased pipelining can actually reduce latency. We also find that there exists an optimal degree of pipelining which is the most energy efficient in terms of minimizing the energy-delay product.
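As a hedged illustration of the trade-off described above (not the paper's model), the following Python sketch assumes a toy link model in which deeper pipelining shortens the clock period, cutting the serialization time of a multi-flit packet, while each extra stage adds register delay and energy; all constants are invented for the example.

    # Toy model: deeper link pipelining permits a shorter clock period,
    # which cuts the serialization time of a multi-flit packet, but each
    # extra stage adds register delay and energy. Constants are assumed.
    WIRE_DELAY_NS = 4.0    # assumed un-pipelined wire delay
    REG_DELAY_NS  = 0.2    # assumed per-stage register overhead
    WIRE_E_PJ     = 10.0   # assumed wire energy per packet
    REG_E_PJ      = 0.5    # assumed register energy per stage per packet
    FLITS         = 16     # assumed packet length in flits

    def packet_latency_ns(stages):
        cycle = WIRE_DELAY_NS / stages + REG_DELAY_NS   # achievable clock period
        return (stages + FLITS - 1) * cycle             # fill pipeline, then stream

    def packet_energy_pj(stages):
        return WIRE_E_PJ + REG_E_PJ * stages

    for s in range(1, 13):
        lat, en = packet_latency_ns(s), packet_energy_pj(s)
        print(f"stages={s:2d}  latency={lat:6.1f} ns  energy={en:5.1f} pJ  EDP={lat*en:7.1f}")
    # Latency falls as stages increase, while the energy-delay product
    # bottoms out at an interior depth -- the qualitative behaviour the
    # abstract reports.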
Abstract:
Abstract is not available.
Abstract:
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving clock speed, reducing energy consumption of the logic, and making the design simpler, it introduces extra overheads by way of inter-cluster communication. This communication happens over long global wires, which leads to delay in execution and significantly higher energy consumption. In this paper, we propose a new instruction scheduling algorithm that exploits scheduling slacks of instructions and communication slacks of data values together to achieve better energy-performance trade-offs for clustered architectures with heterogeneous interconnect. Our instruction scheduling algorithm achieves 35% and 40% reductions in communication energy, while the overall energy-delay product improves by 4.5% and 6.5%, for 2-cluster and 4-cluster machines respectively, with a marginal increase (1.6% and 1.1%) in execution time. Our test bed uses the Trimaran compiler infrastructure.
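A minimal sketch of the general slack-driven idea (not the paper's algorithm): with a heterogeneous interconnect, a data transfer whose communication slack covers the slow wire's extra delay can be steered to the low-energy wire without stretching the schedule. The wires and transfers below are hypothetical.

    # Steer each inter-cluster transfer to the slow, low-energy wire
    # whenever its slack absorbs the extra delay. All numbers invented.
    FAST = {"delay": 1, "energy": 4.0}   # assumed fast, energy-hungry wire
    SLOW = {"delay": 3, "energy": 1.0}   # assumed slow, low-energy wire

    transfers = [("t0", 0), ("t1", 5), ("t2", 2), ("t3", 8)]  # (name, slack in cycles)

    extra_delay = SLOW["delay"] - FAST["delay"]
    total_energy = 0.0
    for name, slack in transfers:
        wire = SLOW if slack >= extra_delay else FAST
        total_energy += wire["energy"]
        print(f"{name}: slack={slack} cycles -> {'slow' if wire is SLOW else 'fast'} wire")
    print(f"total communication energy: {total_energy}")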
Abstract:
A common and practical paradigm in cooperative communication systems is the use of a dynamically selected `best' relay to decode and forward information from a source to a destination. Such systems use two phases - a relay selection phase, in which the system uses transmission time and energy to select the best relay, and a data transmission phase, in which it uses the spatial diversity benefits of selection to transmit data. In this paper, we derive closed-form expressions for the overall throughput and energy consumption, and study the time and energy trade-off between the selection and data transmission phases. To this end, we analyze a baseline non-adaptive system and several adaptive systems that adapt the selection phase, relay transmission power, or transmission time. Our results show that while selection yields significant benefits, the selection phase's time and energy overhead can be significant. In fact, at the optimal point, the selection can be far from perfect, and depends on the number of relays and the mode of adaptation. The results also provide guidelines about the optimal system operating point for different modes of adaptation. The analysis also sheds new light on the fast splitting-based algorithm considered in this paper for relay selection.
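The following Monte Carlo sketch is a simplified stand-in for the closed-form analysis: probing more relays improves the selected relay's channel through diversity, but consumes more of the frame before data transmission begins. Frame length, probing cost per relay, and mean SNR are assumed values.

    import math, random

    random.seed(1)
    FRAME = 100.0        # assumed frame duration, in slots
    SEL_SLOT = 2.0       # assumed slots spent probing one relay
    MEAN_SNR = 4.0       # assumed mean relay SNR (linear scale), Rayleigh fading
    TRIALS = 5000

    def throughput(num_probed):
        # fraction of the frame left for data after the selection phase
        data_frac = max(0.0, 1.0 - num_probed * SEL_SLOT / FRAME)
        rate = 0.0
        for _ in range(TRIALS):
            best = max(random.expovariate(1.0) for _ in range(num_probed))
            rate += math.log2(1.0 + MEAN_SNR * best)
        return data_frac * rate / TRIALS

    for m in (1, 2, 4, 8, 16):
        print(f"probe {m:2d} relays -> net throughput {throughput(m):.3f} bits/s/Hz")
    print("best number probed (toy model):", max(range(1, 21), key=throughput))
    # Diversity gain grows slowly with the probed set while the overhead
    # grows linearly, so the optimum sits at an interior subset size.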
Abstract:
We describe a SystemC-based framework we are developing to explore the impact of various architectural and microarchitectural parameters of on-chip interconnection network elements on power and performance. The framework enables one to choose from a variety of architectural options such as topology, routing policy, etc., and allows experimentation with various microarchitectural options for the individual links, such as length, wire width, pitch, pipelining, supply voltage, and frequency. The framework also supports a flexible traffic generation and communication model. We provide preliminary results of using this framework to study the power, latency, and throughput of a 4x4 multi-core processing array using mesh, torus, and folded torus topologies, for two different communication patterns of dense and sparse linear algebra. The traffic consists of both request-response messages (mimicking cache accesses) and one-way messages. We find that the average latency can be reduced by increasing the pipeline depth, as it enables higher link frequencies. We also find that there exists an optimum degree of pipelining which minimizes the energy-delay product.
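As a back-of-envelope companion to the topology comparison (covering only the textbook hop-count effect, not the framework's power or latency simulation), the sketch below computes the average hop count of a 4x4 mesh versus a 4x4 torus under uniform random traffic.

    from itertools import product

    N = 4
    def mesh_hops(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def torus_hops(a, b):
        # wrap-around links let each dimension take the shorter way round
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return min(dx, N - dx) + min(dy, N - dy)

    nodes = list(product(range(N), repeat=2))
    pairs = [(a, b) for a in nodes for b in nodes if a != b]
    for name, dist in (("mesh", mesh_hops), ("torus", torus_hops)):
        avg = sum(dist(a, b) for a, b in pairs) / len(pairs)
        print(f"4x4 {name}: average hops = {avg:.3f}")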
Abstract:
This paper analyses the efficiency and productivity growth of the electronics sector of India in the liberalisation era since 1991. The study gives an insight into the growth of one of the fastest-growing sectors of this decade. This sector has experienced vast structural change along with the changing economic structures in India after liberalisation. With the opening of this sector to foreign markets and the entry of multinational companies, the environment has become highly competitive. The operative law is Darwin's 'survival of the fittest': existing industries face a continuous threat of exit due to the entry of new competitors. It thus becomes inevitable for the existing industries in this sector to improve productivity growth for their survival. It is therefore important to analyse how the industries in this sector have performed over the years and which factors have contributed to overall output growth.
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. Two important issues in the design of exchanges are (1) trade determination (determining the number of goods traded between any buyer-seller pair) and (2) pricing. In this paper we address the trade determination issue for one-shot, multi-attribute exchanges that trade multiple units of the same good. The bids are configurable with separable additive price functions over the attributes and each function is continuous and piecewise linear. We model trade determination as mixed integer programming problems for different possible bid structures and show that even in two-attribute exchanges, trade determination is NP-hard for certain bid structures. We also make some observations on the pricing issues that are closely related to the mixed integer formulations.
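A hedged toy instance of trade determination, using a single attribute, integer units, and exhaustive search in place of the paper's mixed integer programs; all bids are invented. It finds the quantity allocation that maximizes total surplus while keeping bought and sold units balanced.

    from itertools import product

    # marginal values/costs per successive unit -- hypothetical bids
    buyer_values = {"B1": [10, 8, 5], "B2": [9, 6, 2]}
    seller_costs = {"S1": [3, 4, 7], "S2": [2, 6, 9]}

    def surplus(buy_qty, sell_qty):
        if sum(buy_qty.values()) != sum(sell_qty.values()):
            return None                       # traded units must balance
        value = sum(sum(buyer_values[b][:q]) for b, q in buy_qty.items())
        cost = sum(sum(seller_costs[s][:q]) for s, q in sell_qty.items())
        return value - cost

    best = None
    for qb1, qb2, qs1, qs2 in product(range(4), repeat=4):
        s = surplus({"B1": qb1, "B2": qb2}, {"S1": qs1, "S2": qs2})
        if s is not None and (best is None or s > best[0]):
            best = (s, (qb1, qb2, qs1, qs2))
    print(f"max surplus = {best[0]} at (B1,B2,S1,S2) quantities = {best[1]}")

In the paper's setting the search space is continuous and multi-attribute, which is why the problem is cast as a mixed integer program and becomes NP-hard for certain bid structures.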
Abstract:
The impact of gate-to-source/drain overlap length on the performance and variability of 65 nm CMOS is presented. Device and circuit variability is investigated as a function of three significant process parameters, namely gate length, gate oxide thickness, and halo dose. The comparison is made for three different values of gate-to-source/drain overlap length, namely 5 nm, 0 nm, and -5 nm, and at two different leakage currents of 10 nA and 100 nA. A worst-case analysis approach is used to study inverter delay fluctuations at the process corners. Device drive current and inverter stage delay are taken as the performance metrics for device and circuit robustness, respectively. The design trade-off between performance and variability is demonstrated at both the device and circuit levels. It is shown that a larger overlap length leads to better performance, while a smaller overlap length results in better variability, so performance trades off against variability as the overlap length is varied. An optimal overlap length of 0 nm is recommended at 65 nm gate length, for a reasonable combination of performance and variability.
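The sketch below imitates the style of a worst-case corner analysis with a first-order delay model evaluated at all combinations of +/- parameter excursions; the sensitivities and the nominal delay are invented placeholders, not 65 nm data from the paper.

    from itertools import product

    NOMINAL_DELAY_PS = 20.0
    # assumed fractional delay sensitivity per +1 sigma of each parameter
    SENS = {"gate_length": 0.06, "tox": 0.04, "halo_dose": 0.03}

    corners = []
    for signs in product((-1, 1), repeat=len(SENS)):
        factor = 1.0 + sum(s * k for s, k in zip(signs, SENS.values()))
        corners.append(NOMINAL_DELAY_PS * factor)

    fast, slow = min(corners), max(corners)
    spread = (slow - fast) / NOMINAL_DELAY_PS * 100
    print(f"fast corner {fast:.1f} ps, slow corner {slow:.1f} ps, "
          f"spread {spread:.0f}% of nominal")
    # A larger overlap would lower NOMINAL_DELAY_PS (better performance)
    # while typically enlarging the SENS entries (worse variability) --
    # the trade-off the abstract reports.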
Abstract:
We consider a complex, additive, white Gaussian noise channel with flat fading. We study its diversity order versus transmission rate for some known power allocation schemes. The capacity region is divided into three regions. For one power allocation scheme, the diversity order is exponential throughout the capacity region. For the selective channel inversion (SCI) scheme, the diversity order is exponential in the low- and high-rate regions but polynomial in the mid-rate region. For the fast-fading case, we also provide a new upper bound on the block error probability and a power allocation scheme that minimizes it. The diversity order behaviour of this scheme is the same as that of SCI, but it provides a lower BER than the other policies.
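As a hedged illustration of SCI, the sketch below implements the standard textbook form of truncated channel inversion: transmit only when the fading gain exceeds a cutoff, inverting the channel so the received SNR is constant, with the scale set by an average power constraint. The cutoff and fading model are assumptions; the paper's exact setup may differ.

    import random

    random.seed(7)
    CUTOFF = 0.3          # assumed fading-gain cutoff g0
    SAMPLES = 100000

    gains = [random.expovariate(1.0) for _ in range(SAMPLES)]   # Rayleigh power gains
    active = [g for g in gains if g >= CUTOFF]

    # scale power p(g) = k/g so average power over all slots equals 1
    inv_mean = sum(1.0 / g for g in active) / SAMPLES
    received_snr = 1.0 / inv_mean          # constant SNR delivered when active

    print(f"fraction of slots transmitting: {len(active)/SAMPLES:.3f}")
    print(f"constant received SNR when active: {received_snr:.3f}")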
Abstract:
In recent times, crowdsourcing over social networks has emerged as an active tool for complex task execution. In this paper, we address the problem faced by a planner to incentivize agents in the network to execute a task and also help in recruiting other agents for this purpose. We study this mechanism design problem under two natural resource optimization settings: (1) cost critical tasks, where the planner's goal is to minimize the total cost, and (2) time critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We define a set of fairness properties that should ideally be satisfied by a crowdsourcing mechanism. We prove that no mechanism can satisfy all these properties simultaneously. We relax some of these properties and define their approximate counterparts. Under appropriate approximate fairness criteria, we obtain a non-trivial family of payment mechanisms. Moreover, we provide precise characterizations of cost critical and time critical mechanisms.
Abstract:
In this paper, the storage-repair-bandwidth (SRB) trade-off curve of regenerating codes is reformulated to yield a trade-off between two global parameters of practical relevance, namely information rate and repair rate. The new information-repair-rate (IRR) trade-off provides a different and insightful perspective on regenerating codes. For example, it provides a new motivation for investigating constructions corresponding to the interior of the SRB trade-off. Interestingly, each point on the SRB trade-off corresponds to a curve in the IRR trade-off setup. We completely characterize functional repair under the IRR framework, while for exact repair an achievable region is presented. In the second part of this paper, a rate-half regenerating code for the minimum storage regenerating point is constructed that draws upon the theory of invariant subspaces. While the parameters of this rate-half code are the same as those of the MISER code, the construction itself is quite different.
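For reference, the functional-repair SRB trade-off that the IRR reformulation starts from is the classical bound B = sum_{i=0}^{k-1} min(alpha, (d-i) beta) of Dimakis et al.; the sketch below sweeps beta and binary-searches the minimum per-node storage alpha for an arbitrary illustrative (B, k, d).

    B, k, d = 1.0, 4, 6   # file size, reconstruction degree, repair degree

    def min_alpha(beta, tol=1e-9):
        # smallest alpha with sum_{i<k} min(alpha, (d-i)*beta) >= B
        lo, hi = 0.0, B
        while hi - lo > tol:
            mid = (lo + hi) / 2
            cap = sum(min(mid, (d - i) * beta) for i in range(k))
            lo, hi = (mid, hi) if cap < B else (lo, mid)
        return hi

    for beta in (0.06, 0.08, 0.10, 0.15, 0.25):
        alpha = min_alpha(beta)
        print(f"beta={beta:.2f}  repair bandwidth d*beta={d*beta:.2f}  "
              f"min storage alpha={alpha:.4f}")
    # Sweeping beta traces the SRB curve between the MBR and MSR extremes.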
Abstract:
The agriculture, forestry and other land use (AFOLU) sector is responsible for approximately 25% of anthropogenic GHG emissions, mainly from deforestation and agricultural emissions from livestock, soil and nutrient management. Mitigation from the sector is thus extremely important in meeting emission reduction targets. The sector offers a variety of cost-competitive mitigation options, with most analyses indicating a decline in emissions largely due to decreasing deforestation rates. Sustainability criteria are needed to guide the development and implementation of AFOLU mitigation measures, with particular focus on multifunctional systems that allow the delivery of multiple services from land. It is striking that almost all of the positive and negative impacts, opportunities and barriers are context specific, precluding generic statements about which AFOLU mitigation measures have the greatest promise at a global scale. This finding underlines the importance of considering each mitigation strategy on a case-by-case basis and of accounting for systemic effects when implementing mitigation options at the national scale, and suggests that policies need to be flexible enough to allow such assessments. National and international agricultural and forest (climate) policies have the potential to alter the opportunity costs of specific land uses in ways that increase opportunities or barriers for attaining climate change mitigation goals. Policies governing practices in agriculture and in forest conservation and management need to account for both effective mitigation and adaptation, and can help to orient practices in agriculture and forestry towards global sharing of innovative technologies for the efficient use of land resources. Different policy instruments, especially economic incentives and regulatory approaches, are currently being applied; however, for their successful implementation it is critical to understand how land-use decisions are made and how new social, political and economic forces will influence this process in the future.
Abstract:
Several operational aspects of thermal power plants are non-intuitive and involve simultaneous optimization of a number of operational parameters. For solar-operated power plants, this is even more difficult due to varying heat source temperatures induced by variability in insolation levels. This paper introduces a quantitative methodology for load regulation of a CO2-based Brayton cycle power plant using the `thermal efficiency and specific work output' coordinate system. The analysis shows that a transcritical CO2 cycle offers more flexibility under part-load operation than the supercritical cycle in the case of non-solar power plants. However, for concentrated solar power, where efficiency is important, the supercritical CO2 cycle fares better than the transcritical CO2 cycle. A number of empirical equations relating heat source temperature and high-side pressure to efficiency and specific work output are proposed, which could assist in generating control algorithms.
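As a hedged sketch of the 'thermal efficiency and specific work output' coordinate system, the code below uses an air-standard, ideal-gas Brayton model with constant CO2 properties; real transcritical/supercritical CO2 cycles require real-gas property data, so this only shows how the two coordinates move with pressure ratio under assumed temperatures.

    CP = 0.846      # kJ/kg-K, assumed cp of CO2 (ideal-gas approximation)
    GAMMA = 1.29    # assumed specific-heat ratio of CO2
    T_MIN, T_MAX = 320.0, 820.0   # assumed compressor-inlet / turbine-inlet temps, K

    def cycle_point(pressure_ratio):
        ex = (GAMMA - 1.0) / GAMMA
        t2 = T_MIN * pressure_ratio ** ex          # after isentropic compression
        t4 = T_MAX / pressure_ratio ** ex          # after isentropic expansion
        w_net = CP * ((T_MAX - t4) - (t2 - T_MIN)) # specific work, kJ/kg
        q_in = CP * (T_MAX - t2)
        return w_net / q_in, w_net                 # (efficiency, specific work)

    for pr in (2, 4, 8, 16):
        eff, w = cycle_point(pr)
        print(f"pressure ratio {pr:2d}: efficiency {eff:.3f}, specific work {w:6.1f} kJ/kg")
    # Plotting these pairs gives one operating line in the (efficiency,
    # specific-work) plane used for the load-regulation methodology.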
Abstract:
Computing the maximum of sensor readings arises in several environmental, health, and industrial monitoring applications of wireless sensor networks (WSNs). We characterize several novel design trade-offs that arise when green energy harvesting (EH) WSNs, which promise perpetual lifetimes, are deployed for this purpose. The nodes harvest renewable energy from the environment to communicate their readings to a fusion node, which then periodically estimates the maximum. For a randomized transmission schedule in which a pre-specified number of randomly selected nodes transmit in a sensor data collection round, we analyze the mean absolute error (MAE), defined as the mean of the absolute difference between the maximum and the fusion node's estimate in each round. We optimize the transmit power and the number of scheduled nodes to minimize the MAE, both when the nodes have channel state information (CSI) and when they do not. Our results highlight how the optimal system operation depends on the EH rate, the availability and cost of acquiring CSI, quantization, and the size of the scheduled subset. Our analysis applies to a general class of sensor reading and EH random processes.
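A hedged Monte Carlo sketch of the quantity analyzed above: the MAE between the true maximum of n readings and the maximum received from a randomly scheduled subset, where each scheduled transmission succeeds only with some probability (a crude stand-in for fading under an energy-limited transmit power). All parameters are invented.

    import random

    random.seed(3)
    N_NODES = 20
    P_SUCCESS = 0.8       # assumed per-transmission success probability
    TRIALS = 20000

    def mae(k):
        total = 0.0
        for _ in range(TRIALS):
            readings = [random.random() for _ in range(N_NODES)]
            scheduled = random.sample(readings, k)
            received = [r for r in scheduled if random.random() < P_SUCCESS]
            est = max(received) if received else 0.0
            total += abs(max(readings) - est)
        return total / TRIALS

    for k in (1, 2, 4, 8, 16):
        print(f"schedule {k:2d} of {N_NODES} nodes -> MAE = {mae(k):.4f}")
    # Scheduling more nodes drives the MAE down; with a harvesting budget,
    # transmit power (P_SUCCESS here) trades off against subset size.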