72 results for Expected Cost
at Indian Institute of Science - Bangalore - India
Abstract:
Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals to ensure the highest possible outreach size. We do so by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals to be provided incentives so as to minimize the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized to maximize the outreach size for a given cost budget. The optimization problem turns out to be non-trivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind. (C) 2016 Elsevier B.V. All rights reserved.
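The reduction to a linear program is not spelled out in the abstract. As a hedged illustration only: in the special case of a single aggregate outreach constraint with per-class incentive bounds, such an LP can even be solved greedily by cost-effectiveness. The class costs and reach coefficients below are invented for illustration, not taken from the paper.

```python
def min_cost_incentives(costs, reach, target):
    """Fractional assignment minimizing total incentive cost subject to
    sum(reach[k] * x[k]) >= target with 0 <= x[k] <= 1. This single-constraint
    LP is solved greedily by ascending cost per unit of reach (assumes the
    target is achievable when all x[k] = 1)."""
    order = sorted(range(len(costs)), key=lambda k: costs[k] / reach[k])
    x = [0.0] * len(costs)
    need = target
    for k in order:
        if need <= 0:
            break
        take = min(1.0, need / reach[k])   # incentivize this class up to its bound
        x[k] = take
        need -= take * reach[k]
    return x, sum(c * xi for c, xi in zip(costs, x))

# Three hypothetical degree classes: cost per individual and marginal outreach.
x, cost = min_cost_incentives([1.0, 2.5, 4.0], [0.5, 1.8, 3.2], 2.0)
```

With multiple constraints (as in the general cost structure the paper treats), the same data would instead be handed to a general-purpose LP solver.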
Abstract:
This paper presents stylized models for conducting performance analysis of a manufacturing supply chain network (SCN) in a stochastic setting with batch ordering. We use queueing models to capture the behavior of the SCN. The analysis is coupled with an inventory optimization model, which can be used for designing inventory policies. In the first case, we model one manufacturer with one warehouse, which supplies various retailers. We determine the optimal inventory level at the warehouse that minimizes the total expected cost of carrying inventory, the backorder cost associated with serving orders in the backlog queue, and the ordering cost. In the second model, we impose a service-level constraint in terms of fill rate (the probability that an order is filled from stock at the warehouse), assuming that customers do not balk from the system. We present several numerical examples to illustrate the model and its various features. In the third case, we extend the model to a three-echelon inventory model that explicitly considers the logistics process.
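As a small illustration of the fill-rate constraint mentioned above, a base-stock policy facing Poisson lead-time demand admits a direct computation; the Poisson assumption here is ours for the sketch and not necessarily the paper's queueing model.

```python
import math

def fill_rate(base_stock, mean_lt_demand):
    """Fill rate of a base-stock policy with Poisson lead-time demand:
    the probability an arriving order is served from on-hand stock,
    P(D < S) for D ~ Poisson(mean_lt_demand)."""
    return sum(math.exp(-mean_lt_demand) * mean_lt_demand ** k / math.factorial(k)
               for k in range(base_stock))

def smallest_base_stock(mean_lt_demand, beta):
    """Smallest base-stock level S meeting a fill-rate target beta."""
    s = 0
    while fill_rate(s, mean_lt_demand) < beta:
        s += 1
    return s

# With mean lead-time demand of 2 units and a 95% fill-rate target:
s = smallest_base_stock(2.0, 0.95)
```

The same search pattern extends to the cost trade-off in the first model: sweep S and pick the level minimizing holding plus backorder plus ordering cost.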
Abstract:
A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing usability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPC-H-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels, incurring only marginal increases in the estimated query processing costs.
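The greedy algorithm itself is not reproduced in the abstract. The following is a hedged sketch of threshold-based plan "swallowing" in that spirit, over an invented toy diagram (two selectivity points, two plans); it illustrates the idea of trading a bounded cost increase for fewer plans, not the paper's exact procedure.

```python
def greedy_reduce(point_costs, lam):
    """Greedy plan-diagram reduction sketch. point_costs maps each selectivity
    point to {plan: estimated_cost}; lam is the allowed relative increase in a
    point's cost when its assigned plan is dropped. Returns surviving plans."""
    best = {pt: min(c, key=c.get) for pt, c in point_costs.items()}
    plans = set(best.values())
    changed = True
    while changed:
        changed = False
        for p in sorted(plans):
            owned = [pt for pt in best if best[pt] == p]
            # Can every point of plan p be re-covered within the cost threshold?
            ok = all(
                any(q != p and q in plans and
                    point_costs[pt][q] <= (1 + lam) * point_costs[pt][p]
                    for q in point_costs[pt])
                for pt in owned)
            if ok and len(plans) > 1:
                for pt in owned:   # reassign p's points to the cheapest survivor
                    best[pt] = min(
                        (q for q in point_costs[pt] if q in plans and q != p),
                        key=lambda q: point_costs[pt][q])
                plans.discard(p)
                changed = True
                break
    return plans

diagram = {"p1": {"A": 10.0, "B": 10.5}, "p2": {"A": 21.0, "B": 20.0}}
```

With a 10% cost threshold the toy diagram collapses to a single plan; with a 1% threshold both plans survive, mirroring the cardinality-versus-quality tradeoff the paper's estimators navigate.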
Abstract:
We develop in this article the first actor-critic reinforcement learning algorithm with function approximation for a problem of control under multiple inequality constraints. We consider the infinite horizon discounted cost framework in which both the objective and the constraint functions are suitable expected policy-dependent discounted sums of certain sample path functions. We apply the Lagrange multiplier method to handle the inequality constraints. Our algorithm makes use of multi-timescale stochastic approximation and incorporates a temporal difference (TD) critic and an actor that makes a gradient search in the space of policy parameters using efficient simultaneous perturbation stochastic approximation (SPSA) gradient estimates. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal policy. (C) 2010 Elsevier B.V. All rights reserved.
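A toy numeric version of the Lagrange-multiplier scheme can be sketched as follows. Here exact gradients of an invented objective J(t) = t^2 and constraint C(t) = (t - 1)^2 <= 0.25 stand in for the paper's TD critic and SPSA gradient estimates; only the two-timescale descent/ascent structure is being illustrated.

```python
def lagrangian_solve(steps=20000):
    """Two-timescale Lagrangian iteration: descend L(t, lam) = J(t) + lam * C(t)
    in t on the faster timescale, ascend in the multiplier lam on the slower
    one, projecting lam onto [0, inf). Converges here to t = 0.5, lam = 1,
    where the constraint (t - 1)^2 <= 0.25 becomes active."""
    theta, lam = 2.0, 0.0
    a, b = 1e-2, 1e-3                      # theta stepsize >> lambda stepsize
    for _ in range(steps):
        grad_J = 2 * theta
        grad_C = 2 * (theta - 1)
        theta -= a * (grad_J + lam * grad_C)                  # descend in theta
        lam = max(0.0, lam + b * ((theta - 1) ** 2 - 0.25))   # ascend in lambda
    return theta, lam
```

The unconstrained minimum t = 0 violates the constraint, so the multiplier grows until the iterate settles on the constraint boundary, the same mechanism the paper's actor-critic uses with estimated rather than exact gradients.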
Abstract:
We propose for the first time two reinforcement learning algorithms with function approximation for average cost adaptive control of traffic lights. One of these algorithms is a version of Q-learning with function approximation while the other is a policy gradient actor-critic algorithm that incorporates multi-timescale stochastic approximation. We show performance comparisons on various network settings of these algorithms with a range of fixed timing algorithms, as well as a Q-learning algorithm with full state representation that we also implement. We observe that whereas (as expected) on a two-junction corridor, the full state representation algorithm shows the best results, this algorithm is not implementable on larger road networks. The algorithm PG-AC-TLC that we propose is seen to show the best overall performance.
Abstract:
Graphene layers have been transferred directly onto paper without any intermediate layers to yield G-paper. Resistive gas sensors have been fabricated using strips of G-paper. These sensors achieved a remarkable lower limit of detection of ~300 parts per trillion (ppt) for NO2, which is comparable to or better than those of other paper-based sensors. Ultraviolet exposure was found to dramatically reduce the recovery time and improve the response time. G-paper sensors are also robust against minor strain, which was found to increase sensitivity. G-paper is expected to enable a simple, low-cost, flexible graphene platform.
Abstract:
Demagnetization to a zero remanent value or to a predetermined value is of interest to magnet manufacturers and material users. Conventional methods of demagnetization, which apply a varying alternating demagnetizing field under a damped oscillatory or conveyor system, result in either a high cost of demagnetization or large power dissipation. A simple technique using thyristors is presented for demagnetizing the material. Power consumption occurs mainly in the first two half-cycles of the applied voltage; hence power dissipation is greatly reduced. A calculation of the optimum thyristor triggering angle for demagnetizing high-coercivity materials is also presented.
Abstract:
In the present investigation, the wear behaviour of a creep-resistant AE42 magnesium alloy and its composites, reinforced with Saffil short fibres and SiC particles in various combinations, is examined in the longitudinal direction, i.e., with the plane containing the random fibre orientation perpendicular to the steel counter-face. Wear tests are conducted on a pin-on-disc set-up under dry sliding conditions at a constant sliding velocity of 0.837 m/s over a constant sliding distance of 2.5 km in the load range of 10-40 N. It is observed that the wear rate increases with load for the alloy and the composites, as expected. The wear rate of the composites is lower than that of the alloy, and the hybrid composites exhibit a lower wear rate than the Saffil short-fibre-reinforced composite at all loads. Therefore, the partial replacement of Saffil short fibres by an equal volume fraction of SiC particles not only reduces the cost but also improves the wear resistance of the composite. Microstructural investigation of the surface and subsurface of the worn pin and of the wear debris is carried out to explain the observed results and to understand the wear mechanisms. It is concluded that the presence of SiC particles in the hybrid composites improves the wear resistance because these particles remain intact and retain their load-bearing capacity even at the highest load employed, they promote the formation of an iron-rich transfer layer, and they delay the fracture of Saffil short fibres to higher loads. Under the experimental conditions used in the present investigation, the dominant wear mechanism is found to be abrasion for the AE42 alloy and its composites. It is accompanied by severe plastic deformation of the surface layers in the case of the alloy, and by the fracture of Saffil short fibres as well as the formation of an iron-rich transfer layer in the case of the composites.
Abstract:
Provision of modern energy services for cooking (with gaseous fuels) and lighting (with electricity) is an essential component of any policy aiming to address health, education or welfare issues; yet it gets little attention from policy-makers. Secure, adequate, low-cost energy of quality and convenience is core to the delivery of these services. The present study analyses the energy consumption pattern of the Indian domestic sector and examines the urban-rural divide and the income-energy linkage. A comprehensive analysis is done to estimate the cost of providing modern energy services to everyone by 2030. A public-private partnership-driven business model, with entrepreneurship at its core, is developed with institutional, financing and pricing mechanisms for the diffusion of energy services. This approach, termed EMPOWERS (entrepreneurship model for provision of wholesome energy-related basic services), if adopted, can facilitate large-scale dissemination of energy-efficient and renewable technologies, such as small-scale biogas/biofuel plants and distributed power generation, to provide clean, safe, reliable and sustainable energy to rural households and the urban poor. It is expected to integrate the processes of market transformation and entrepreneurship development involving government, NGOs, financial institutions and community groups as stakeholders. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
At the beginning of 2008, I visited a watershed located in Karkinatam village in the state of Karnataka, South India, where crops are intensively irrigated using groundwater. The water table had been depleted from a depth of 5 m to 50 m over a large part of the area. Presently, 42% of the 158 water wells in the watershed are dry. Speaking with the farmers, I was amazed to learn that they were drilling down to 500 m to tap water. This case is, of course, not isolated.
Abstract:
A business cluster is a co-located group of micro, small, medium scale enterprises. Such firms can benefit significantly from their co-location through shared infrastructure and shared services. Cost sharing becomes an important issue in such sharing arrangements especially when the firms exhibit strategic behavior. There are many cost sharing methods and mechanisms proposed in the literature based on game theoretic foundations. These mechanisms satisfy a variety of efficiency and fairness properties such as allocative efficiency, budget balance, individual rationality, consumer sovereignty, strategyproofness, and group strategyproofness. In this paper, we motivate the problem of cost sharing in a business cluster with strategic firms and illustrate different cost sharing mechanisms through the example of a cluster of firms sharing a logistics service. Next we look into the problem of a business cluster sharing ICT (information and communication technologies) infrastructure and explore the use of cost sharing mechanisms.
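One classical rule in the game-theoretic cost-sharing family the abstract surveys is the Shapley value, which is budget-balanced by construction. A minimal sketch for two firms sharing a logistics service follows; the cost figures are invented for illustration.

```python
from itertools import permutations

def shapley(players, cost):
    """Shapley-value cost shares: each firm's share is its marginal cost
    contribution averaged over all orders in which firms could join the
    sharing arrangement. `cost` maps a frozenset of firms to that
    coalition's stand-alone cost."""
    shares = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        so_far = frozenset()
        for p in order:
            shares[p] += cost[so_far | {p}] - cost[so_far]  # marginal cost of joining
            so_far = so_far | {p}
    return {p: s / len(perms) for p, s in shares.items()}

# Hypothetical logistics costs: alone, A pays 6 and B pays 10; together, 12.
costs = {frozenset(): 0.0, frozenset({"A"}): 6.0,
         frozenset({"B"}): 10.0, frozenset({"A", "B"}): 12.0}
shares = shapley(["A", "B"], costs)
```

The shares sum exactly to the grand-coalition cost (budget balance), and each firm pays no more than it would alone (individual rationality), two of the fairness properties listed above. Strategyproofness, by contrast, requires mechanism design beyond a fixed sharing formula.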
Abstract:
The growth of high-performance applications in computer graphics, signal processing and scientific computing is a key driver for high-performance, fixed-latency, pipelined floating-point dividers. Solutions available in the literature use large lookup tables for double-precision floating-point operations. In this paper, we propose a cost-effective, fixed-latency pipelined divider using a modified Taylor-series expansion for double-precision floating-point operations. We reduce chip area by using a smaller lookup table. We show that the latency of the proposed divider is 49.4 times the latency of a full-adder. The proposed divider reduces chip area by about 81% compared with the pipelined divider in [9], which is also based on a modified Taylor series.
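The hardware pipeline is not described in the abstract, but the underlying idea of Taylor-series division can be sketched in software: seed a reciprocal from a small lookup table and refine it with the expansion 1/b = (1/b0)(1 - x + x^2 - ...), where b = b0(1 + x). Table size and term count below are illustrative, not the paper's design parameters.

```python
def divide(a, b, table_bits=4, terms=3):
    """Taylor-series division sketch for a normalized divisor b in [1, 2).
    A (2**table_bits)-entry table would store reciprocals of interval
    midpoints b0; the series then corrects for the residual x = (b - b0)/b0.
    Smaller tables trade area for more series terms, as in the paper's theme."""
    assert 1.0 <= b < 2.0
    step = 1.0 / (1 << table_bits)
    b0 = 1.0 + step * int((b - 1.0) / step) + step / 2   # midpoint of table cell
    x = (b - b0) / b0
    recip = sum((-x) ** k for k in range(terms)) / b0    # (1 - x + x^2 - ...) / b0
    return a * recip
```

With a 16-entry table, |x| <= 2**-6, so three series terms already give roughly 18 bits of accuracy; a hardware version would pipeline the table lookup, the polynomial evaluation, and the final multiply into fixed-latency stages.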
Abstract:
We present a generic study of inventory costs in a factory stockroom that supplies component parts to an assembly line. Specifically, we are concerned with the increase in component inventories due to uncertainty in supplier lead-times, and the fact that several different components must be present before assembly can begin. It is assumed that the suppliers of the various components are independent, that the suppliers' operations are in statistical equilibrium, and that the same amount of each type of component is demanded by the assembly line each time a new assembly cycle is scheduled to begin. We use, as a measure of inventory cost, the expected time for which an order of components must be held in the stockroom from the time it is delivered until the time it is consumed by the assembly line. Our work reveals the effects of supplier lead-time variability, the number of different types of components, and their desired service levels, on the inventory cost. In addition, under the assumptions that inventory holding costs and the cost of delaying assembly are linear in time, we study optimal ordering policies and present an interesting characterization that is independent of the supplier lead-time distributions.
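The inventory-cost measure described above, the expected time an order sits in the stockroom waiting for its slowest sibling component, can be estimated by Monte Carlo. The i.i.d. exponential lead times below are an illustrative assumption for the sketch, not the paper's model (which is distribution-independent in its characterization).

```python
import random

def expected_hold_time(n_components, mean_lead, n_sims=100000, seed=0):
    """Assembly starts only when ALL n component types have arrived, so an
    average order is held for E[max(L_1, ..., L_n)] - E[L]. Estimated here by
    simulation with i.i.d. exponential lead times of the given mean."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        leads = [rng.expovariate(1.0 / mean_lead) for _ in range(n_components)]
        total += max(leads) - sum(leads) / n_components
    return total / n_sims
```

For exponential lead times the answer is known in closed form, mean_lead * (H_n - 1) with H_n the n-th harmonic number, which makes the cost growth with the number of distinct component types (one of the effects the paper studies) explicit.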
Abstract:
A novel, cost-effective, environment-friendly and energetically beneficial alternative method for the synthesis of the giant-dielectric pseudo-perovskite material CaCu3Ti4O12 (CCTO) is presented. The method involves auto-combustion of an aqueous precursor solution in an oxygen atmosphere with the help of external fuels and is capable of producing a large amount of CCTO at ultra-low temperature, in the combustion residue itself. The amount of phase generated was observed to be highly dependent on the combustion process, i.e., on the nature and amount of the external fuels added for combustion. Two successful fuel combinations capable of producing a reasonably high amount of the desired compound were investigated. Structural characterization showed the grain size to decrease drastically to nano-dimensions, compared with the submicron size obtained in a traditional sol-gel combustion and subsequent calcination method. Therefore, the reported method can produce a nano-crystalline CaCu3Ti4O12 ceramic matrix at an ultra-low temperature and is expected to be applicable to other multifunctional perovskite oxide materials.
Abstract:
The phenomena of nonlinear I-V behavior and electrical switching find extensive applications in power control, information storage, oscillators, etc. The study of I-V characteristics and switching parameters is necessary for the proper application of switching materials and devices. In the present work, a simple low-cost electrical switching analyzer has been developed for the measurement of the electrical characteristics of switching materials and devices. The system developed consists of a microcontroller-based excitation source and a high-speed data acquisition system. The design details of the excitation source, its interface with the high-speed data acquisition system and personal computer, and the details of the application software developed for automated measurements are described. Typical I-V characteristics and switching curves obtained with the system developed are also presented to illustrate the capability of the instrument developed.