16 results for Finished Goods Trade
at Indian Institute of Science - Bangalore - India
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, aggregating demand and supply across the bids to maximize the revenue in the market. Two important issues in the design of exchanges are (1) trade determination (determining the number of goods traded between any buyer-seller pair) and (2) pricing. In this paper we address the trade determination issue for one-shot, multi-attribute exchanges that trade multiple units of the same good. The bids are configurable, with separable additive price functions over the attributes; each function is continuous and piecewise linear. We model trade determination as mixed integer programming problems for different possible bid structures and show that, even in two-attribute exchanges, trade determination is NP-hard for certain bid structures. We also make some observations on the pricing issues that are closely related to the mixed integer formulations.
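As a minimal sketch of how a trade-determination problem of this kind can be cast as a mixed integer program, the following uses the open-source PuLP modeller on invented single-attribute bids with linear per-unit prices; the paper's actual formulations additionally handle multiple attributes and piecewise-linear price functions.

```python
# Toy trade determination: choose integer quantities q[b, s] traded between
# each buyer-seller pair to maximize total surplus, subject to bid limits.
# All bid data below are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger, value

buyers  = {"b1": (10, 8.0), "b2": (5, 9.5)}   # name: (max units, value per unit)
sellers = {"s1": (8, 6.0),  "s2": (7, 7.5)}   # name: (max units, cost per unit)

prob = LpProblem("trade_determination", LpMaximize)
q = {(b, s): LpVariable(f"q_{b}_{s}", lowBound=0, cat=LpInteger)
     for b in buyers for s in sellers}

# Objective: per-unit surplus of each pair times the units they trade.
prob += lpSum((buyers[b][1] - sellers[s][1]) * q[b, s] for (b, s) in q)

for b, (cap, _) in buyers.items():            # demand limits
    prob += lpSum(q[b, s] for s in sellers) <= cap
for s, (cap, _) in sellers.items():           # supply limits
    prob += lpSum(q[b, s] for b in buyers) <= cap

prob.solve()
for (b, s), var in q.items():
    if value(var):
        print(f"{s} -> {b}: {int(value(var))} units")
```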
Abstract:
This paper presents a power, latency, and throughput trade-off study on NoCs, varying microarchitectural (e.g. pipelining) and circuit-level (e.g. frequency and voltage) parameters. We change the pipelining depth, operating frequency, and supply voltage for three example NoCs: a 16-node 2D torus, a tree network, and a reduced 2D torus. We use an in-house NoC exploration framework capable of topology generation and comparison, using parameterized models of routers and links developed in SystemC. The framework utilizes interconnect power and delay models from a low-level modelling tool called Intacte [1]. We find that increased pipelining can actually reduce latency. We also find that there exists an optimal degree of pipelining which is the most energy efficient, in the sense of minimizing the energy-delay product.
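To see why deeper pipelining can cut packet latency while an interior depth minimizes the energy-delay product, here is a back-of-the-envelope Python model; all constants are invented, not taken from the paper or from Intacte.

```python
# Toy link model: pipelining into `depth` stages shortens the clock period
# (wire delay is split across stages) but adds per-stage latch overhead and
# latch energy. A multi-flit packet streams through the pipeline.
WIRE_DELAY = 2.0e-9      # end-to-end wire delay of the link (s), invented
LATCH_OVERHEAD = 50e-12  # setup + clk-to-q overhead per stage (s), invented
E_WIRE, E_LATCH = 1.0e-12, 0.1e-12  # energy per flit: wire drive, one latch (J)
FLITS = 8                # packet length in flits

for depth in range(1, 13):
    cycle = WIRE_DELAY / depth + LATCH_OVERHEAD
    latency = (depth + FLITS - 1) * cycle          # pipelined packet latency
    energy = FLITS * (E_WIRE + depth * E_LATCH)    # per-packet link energy
    print(f"depth={depth:2d}: latency={latency*1e9:5.2f} ns  "
          f"EDP={energy*latency:.2e} J*s")
```

Running this shows packet latency falling monotonically with depth (higher link frequency) while the energy-delay product bottoms out at an intermediate depth, consistent with the abstract's observation.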
Abstract:
Not available.
Abstract:
In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise-constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: 1) a forward auction and 2) a reverse auction, each of which turns out to be a generalized knapsack problem. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve the forward auction and the reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good-quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, and commoditized goods. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem near-optimally in worst-case polynomial time.
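The flavour of the first stage (fixing traded quantities) can be conveyed by a simple greedy sketch over invented step bids: repeatedly trade one unit between the highest remaining marginal value and the lowest remaining marginal cost. This mirrors the spirit of the paper's fast heuristics but is not the authors' algorithm.

```python
# Greedy matching for marginal-decreasing (buyers) / marginal-increasing
# (sellers) piecewise-constant bid curves. Curves and numbers are invented.
import heapq

buyer_bids  = {"b1": [(5, 10.0), (5, 8.0)], "b2": [(4, 9.0), (6, 7.0)]}
seller_asks = {"s1": [(6, 6.0), (4, 8.5)], "s2": [(5, 6.5), (5, 9.0)]}

def unit_heap(curves, sign):
    """Flatten (units, price) steps into a heap of single-unit prices."""
    heap = [(sign * price, name)
            for name, steps in curves.items()
            for units, price in steps
            for _ in range(units)]
    heapq.heapify(heap)
    return heap

demand = unit_heap(buyer_bids, -1)   # max-heap on buyers' marginal value
supply = unit_heap(seller_asks, +1)  # min-heap on sellers' marginal cost

trades = {}
while demand and supply and -demand[0][0] > supply[0][0]:
    (neg_val, b), (cost, s) = heapq.heappop(demand), heapq.heappop(supply)
    trades[b, s] = trades.get((b, s), 0) + 1   # trade one unit with surplus

print(trades)   # units traded per buyer-seller pair
```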
Abstract:
We discuss a dynamic pricing model that aids an automobile manufacturer in choosing the right price for each customer segment. Though the market structure is an oligopoly, customers get "locked in" to a particular technology/company, which makes the situation virtually akin to a monopoly. There are associated network externalities and positive feedback. The key idea in monopoly pricing lies in extracting the customer surplus by exploiting the respective elasticities of demand. We present a Walrasian general equilibrium approach to determine the segment price. We compare the prices obtained from the optimization model with those from Walrasian dynamics. The results are encouraging and can serve as a critical factor in Customer Relationship Management (CRM), thereby effectively managing the lock-in.
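A minimal sketch of Walrasian dynamics of this kind is the classic tatonnement iteration: raise the segment price when demand exceeds supply and lower it otherwise. The constant-elasticity demand curve and every number below are hypothetical.

```python
# Tatonnement: adjust price in proportion to excess demand until the
# market for a customer segment clears. All parameters are invented.

def demand(price, a=1000.0, elasticity=1.5):
    """Constant-elasticity demand of the segment."""
    return a * price ** (-elasticity)

def supply(price, b=2.0):
    """Linear supply from the manufacturer."""
    return b * price

price, step = 10.0, 0.01
for _ in range(10_000):
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-6:
        break
    price += step * excess          # Walrasian price adjustment

print(f"equilibrium segment price ~ {price:.2f}")
```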
Abstract:
Clustered-architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving clock speed, reducing the energy consumption of the logic, and making the design simpler, it introduces extra overheads by way of inter-cluster communication. This communication happens over long global wires, which leads to execution delays and significantly higher energy consumption. In this paper, we propose a new instruction scheduling algorithm that exploits the scheduling slacks of instructions and the communication slacks of data values together to achieve better energy-performance trade-offs for clustered architectures with heterogeneous interconnect. Our instruction scheduling algorithm achieves 35% and 40% reductions in communication energy for 2-cluster and 4-cluster machines respectively, while the overall energy-delay product improves by 4.5% and 6.5%, with marginal increases (1.6% and 1.1%) in execution time. Our test bed uses the Trimaran compiler infrastructure.
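As a toy illustration of the slack idea (not the paper's scheduler), the sketch below computes each operation's slack as the gap between its as-late-as-possible and as-soon-as-possible start times, and routes transfers with enough slack onto a slower, low-energy wire; the DAG and wire latencies are invented.

```python
# Slack-driven wire assignment on a tiny dependence DAG:
# node -> (latency in cycles, list of predecessors). All values invented.
dag = {"a": (1, []), "b": (1, []), "c": (2, ["a"]),
       "d": (1, ["a", "b"]), "e": (1, ["c", "d"])}

def asap(n, memo={}):
    """Earliest start time of n (longest path from any source)."""
    if n not in memo:
        memo[n] = max((asap(p) + dag[p][0] for p in dag[n][1]), default=0)
    return memo[n]

deadline = max(asap(n) + dag[n][0] for n in dag)  # schedule length

def alap(n, memo={}):
    """Latest start time of n that still meets the deadline."""
    if n not in memo:
        succs = [s for s, (_, preds) in dag.items() if n in preds]
        memo[n] = (min(alap(s) for s in succs) if succs else deadline) - dag[n][0]
    return memo[n]

SLOW_WIRE_EXTRA = 1   # extra cycles on the low-energy interconnect
for n in dag:
    slack = alap(n) - asap(n)
    wire = "slow/low-energy" if slack >= SLOW_WIRE_EXTRA else "fast"
    print(f"{n}: slack={slack} -> {wire} wire")
```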
Abstract:
A common and practical paradigm in cooperative communication systems is the use of a dynamically selected `best' relay to decode and forward information from a source to a destination. Such systems use two phases: a relay selection phase, in which the system uses transmission time and energy to select the best relay, and a data transmission phase, in which it uses the spatial diversity benefits of selection to transmit data. In this paper, we derive closed-form expressions for the overall throughput and energy consumption, and study the time and energy trade-off between the selection and data transmission phases. To this end, we analyze a baseline non-adaptive system and several adaptive systems that adapt the selection phase, relay transmission power, or transmission time. Our results show that while selection yields significant benefits, the selection phase's time and energy overhead can be significant. In fact, at the optimal operating point, the selection can be far from perfect, and it depends on the number of relays and the mode of adaptation. The results also provide guidelines about the optimal system operating point for different modes of adaptation. The analysis also sheds new light on the fast splitting-based algorithm considered in this paper for relay selection.
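The time-overhead trade-off is easy to reproduce with a small Monte-Carlo sketch (independent of the paper's closed-form analysis): each extra selection slot examines one more candidate relay under i.i.d. Rayleigh fading, but shrinks the data phase. All parameters are invented.

```python
# Spending k of FRAME slots on selection examines k candidate relays and
# picks the strongest; throughput weights the best rate by the remaining
# data-phase fraction. Monte-Carlo over Rayleigh (exponential power) fading.
import math, random

FRAME, SNR, TRIALS = 100, 10.0, 20_000
random.seed(1)

def avg_throughput(k):
    total = 0.0
    for _ in range(TRIALS):
        best = max(random.expovariate(1.0) for _ in range(k))
        total += (FRAME - k) / FRAME * math.log2(1.0 + SNR * best)
    return total / TRIALS

best_k = max(range(1, 21), key=avg_throughput)
print("throughput-optimal number of selection slots:", best_k)
```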
Abstract:
We describe a SystemC-based framework we are developing to explore the impact of various architectural and microarchitectural parameters of the on-chip interconnection network elements on its power and performance. The framework enables one to choose from a variety of architectural options such as topology and routing policy, and allows experimentation with various microarchitectural options for the individual links, such as length, wire width, pitch, pipelining, supply voltage, and frequency. The framework also supports a flexible traffic generation and communication model. We provide preliminary results of using this framework to study the power, latency, and throughput of a 4x4 multi-core processing array using mesh, torus, and folded torus topologies, for two different communication patterns of dense and sparse linear algebra. The traffic consists of both request-response messages (mimicking cache accesses) and one-way messages. We find that the average latency can be reduced by increasing the pipeline depth, as it enables higher link frequencies. We also find that there exists an optimum degree of pipelining which minimizes the energy-delay product.
Abstract:
The impact of gate-to-source/drain overlap length on the performance and variability of 65 nm CMOS is presented. Device and circuit variability is investigated as a function of three significant process parameters, namely gate length, gate oxide thickness, and halo dose. The comparison is made for three different values of gate-to-source/drain overlap length, namely 5 nm, 0 nm, and -5 nm, and at two different leakage currents of 10 nA and 100 nA. A worst-case-analysis approach is used to study the inverter delay fluctuations at the process corners. The drive current of the device (for device robustness) and the stage delay of an inverter (for circuit robustness) are taken as performance metrics. The design trade-off between performance and variability is demonstrated at both the device level and the circuit level. It is shown that a larger overlap length leads to better performance, while a smaller overlap length results in better variability; performance trades off against variability as the overlap length is varied. An optimal overlap length of 0 nm is recommended at 65 nm gate length for a reasonable combination of performance and variability.
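The worst-case analysis amounts to evaluating delay at every corner of the process-parameter box. A schematic version with an invented first-order sensitivity model (none of the numbers come from the paper):

```python
# Evaluate a normalized inverter-delay model at all 2^3 process corners of
# the three parameters varied in the study. Sensitivities are invented.
from itertools import product

spread = {"gate_length": 0.10, "oxide_thickness": 0.05, "halo_dose": 0.10}
sens   = {"gate_length": 0.8,  "oxide_thickness": 0.5,  "halo_dose": -0.3}

delays = []
for corner in product((-1, +1), repeat=len(spread)):
    delay = 1.0                                   # nominal delay, normalized
    for direction, p in zip(corner, spread):
        delay += direction * spread[p] * sens[p]  # first-order deviation
    delays.append(delay)

print(f"delay across corners: {min(delays):.2f}x .. {max(delays):.2f}x nominal")
```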
Abstract:
We consider a complex additive white Gaussian noise channel with flat fading. We study its diversity order versus transmission rate for some known power allocation schemes. The capacity region is divided into three regions. For one power allocation scheme, the diversity order is exponential throughout the capacity region. For the selective channel inversion (SCI) scheme, the diversity order is exponential in the low- and high-rate regions but polynomial in the mid-rate region. For the fast fading case we also provide a new upper bound on the block error probability and a power allocation scheme that minimizes it. The diversity order behaviour of this scheme is the same as that of SCI, but it provides a lower BER than the other policies.
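For reference, the diversity order discussed here is the standard high-SNR slope of the error probability; in the usual notation:

```latex
% Diversity order as the high-SNR decay exponent of the error probability:
d \;=\; -\lim_{\mathrm{SNR} \to \infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}}
```

In the abstract's terminology, an exponential diversity order can be read as P_e decaying exponentially rather than polynomially with SNR, in which case the limit above diverges.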
Abstract:
In recent times, crowdsourcing over social networks has emerged as an active tool for complex task execution. In this paper, we address the problem faced by a planner to incentivize agents in the network to execute a task and also help in recruiting other agents for this purpose. We study this mechanism design problem under two natural resource optimization settings: (1) cost-critical tasks, where the planner's goal is to minimize the total cost, and (2) time-critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We define a set of fairness properties that should ideally be satisfied by a crowdsourcing mechanism. We prove that no mechanism can satisfy all these properties simultaneously. We relax some of these properties and define their approximate counterparts. Under appropriate approximate fairness criteria, we obtain a non-trivial family of payment mechanisms. Moreover, we provide precise characterizations of cost-critical and time-critical mechanisms.
Abstract:
Feeding 9-10 billion people by 2050 and preventing dangerous climate change are two of the greatest challenges facing humanity. Both challenges must be met while reducing the impact of land management on ecosystem services that deliver vital goods and services and support human health and well-being. Few studies to date have considered the interactions between these challenges. In this study we briefly outline the challenges, review the supply- and demand-side climate mitigation potential available in the Agriculture, Forestry and Other Land Use (AFOLU) sector, and review options for delivering food security. We briefly outline some of the synergies and trade-offs afforded by mitigation practices, before presenting an assessment of the mitigation potential possible in the AFOLU sector under possible future scenarios in which demand-side measures co-deliver to aid food security. We conclude that while supply-side mitigation measures, such as changes in land management, might either enhance or negatively impact food security, demand-side mitigation measures, such as reduced waste or reduced demand for livestock products, should benefit both food security and greenhouse gas (GHG) mitigation. Demand-side measures offer a greater potential (1.5-15.6 Gt CO2-eq. per year) for meeting both challenges than do supply-side measures (1.5-4.3 Gt CO2-eq. per year at carbon prices between 20 and 100 US$ per t CO2-eq.), but given the enormity of the challenges, all options need to be considered. Supply-side measures should be implemented immediately, focussing on those that allow the production of more agricultural product per unit of input. For demand-side measures, given the difficulties in their implementation and the lag in their effectiveness, policy should be introduced quickly and should aim to co-deliver to other policy agendas, such as improving environmental quality or improving dietary health. These problems facing humanity in the 21st century are extremely challenging, and policy that addresses multiple objectives is required now more than ever.
Abstract:
In this paper, the storage-repair-bandwidth (SRB) trade-off curve of regenerating codes is reformulated to yield a trade-off between two global parameters of practical relevance, namely information rate and repair rate. The new information-repair-rate (IRR) trade-off provides a different and insightful perspective on regenerating codes. For example, it provides new motivation for investigating constructions corresponding to the interior of the SRB trade-off. Interestingly, each point on the SRB trade-off corresponds to a curve in the IRR setup. We completely characterize functional repair under the IRR framework, while for exact repair an achievable region is presented. In the second part of this paper, a rate-half regenerating code for the minimum storage regenerating (MSR) point is constructed that draws upon the theory of invariant subspaces. While the parameters of this rate-half code are the same as those of the MISER code, the construction itself is quite different.
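For context, the SRB trade-off being reformulated is the classical functional-repair bound of Dimakis et al.: a file of B symbols stored across n nodes with per-node storage α, repair download β from each of d helper nodes, and any k nodes sufficing for data collection must satisfy:

```latex
B \;\le\; \sum_{i=0}^{k-1} \min\{\alpha,\, (d-i)\beta\}
```

The minimum storage regenerating (MSR) point mentioned above is the extreme of this curve at which α = B/k.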
Abstract:
The agriculture, forestry and other land use (AFOLU) sector is responsible for approximately 25% of anthropogenic GHG emissions, mainly from deforestation and from agricultural emissions from livestock, soil, and nutrient management. Mitigation from the sector is thus extremely important in meeting emission reduction targets. The sector offers a variety of cost-competitive mitigation options, with most analyses indicating a decline in emissions largely due to decreasing deforestation rates. Sustainability criteria are needed to guide the development and implementation of AFOLU mitigation measures, with particular focus on multifunctional systems that allow the delivery of multiple services from land. It is striking that almost all of the positive and negative impacts, opportunities, and barriers are context-specific, precluding generic statements about which AFOLU mitigation measures have the greatest promise at a global scale. This finding underlines the importance of considering each mitigation strategy on a case-by-case basis, accounting for systemic effects when implementing mitigation options at the national scale, and suggests that policies need to be flexible enough to allow such assessments. National and international agricultural and forest (climate) policies have the potential to alter the opportunity costs of specific land uses in ways that increase the opportunities or barriers for attaining climate change mitigation goals. Policies governing practices in agriculture and in forest conservation and management need to account for both effective mitigation and adaptation, and can help to orient practices in agriculture and forestry towards global sharing of innovative technologies for the efficient use of land resources. Different policy instruments, especially economic incentives and regulatory approaches, are currently being applied; however, for successful implementation it is critical to understand how land-use decisions are made and how new social, political, and economic forces will influence this process in the future.
Abstract:
Several operational aspects of thermal power plants in general are non-intuitive and involve the simultaneous optimization of a number of operational parameters. In the case of solar-operated power plants, this is even more difficult due to varying heat source temperatures induced by variability in insolation levels. This paper introduces a quantitative methodology for load regulation of a CO2-based Brayton cycle power plant using the `thermal efficiency and specific work output' coordinate system. The analysis shows that a transcritical CO2 cycle offers more flexibility under part-load performance than the supercritical cycle in the case of non-solar power plants. However, for concentrated solar power, where efficiency is important, the supercritical CO2 cycle fares better than the transcritical CO2 cycle. A number of empirical equations relating heat source temperature and high-side pressure to efficiency and specific work output are proposed, which could assist in generating control algorithms.