7 results for Retail Experience Model
at Indian Institute of Science - Bangalore - India
Abstract:
Perfectly hard particles are those which experience an infinite repulsive force when they overlap, and no force when they do not overlap. In the hard-particle model, the only static state is the isostatic state, where the forces between particles are statically determinate. In the flowing state, the interactions between particles are instantaneous because the time of contact approaches zero in the limit of infinite particle stiffness. Here, we discuss the development of a hard-particle model for a realistic granular flow down an inclined plane, and examine its utility for predicting the salient features both qualitatively and quantitatively. We first show, using Discrete Element simulations, that even very dense flows of sand or glass beads, with volume fractions between 0.5 and 0.58, are in the rapid flow regime because of the very high particle stiffness. An important length scale in the shear flow of inelastic particles is the `conduction length' delta = d/(1 - e^2)^(1/2), where d is the particle diameter and e is the coefficient of restitution. When the macroscopic scale h (the height of the flowing layer) is larger than the conduction length, the rates of shear production and inelastic dissipation are nearly equal in the bulk of the flow, while the rate of conduction of energy is a factor O((delta/h)^2) smaller than the rate of dissipation of energy. Energy conduction is important in boundary layers of thickness delta at the top and bottom, and the flow in these boundary layers is examined using asymptotic analysis. We derive an exact relationship showing that a boundary-layer solution exists only if the volume fraction in the bulk decreases as the angle of inclination is increased; in the opposite case, where the volume fraction increases with the angle of inclination, there is no boundary-layer solution. The boundary-layer theory also provides a way of understanding the cessation of flow at a given angle of inclination when the height of the layer is decreased below a value h_stop, which is a function of the angle of inclination. Energy is dissipated by particle collisions within the flow as well as by particle collisions with the base, and the fraction of the energy dissipated at the base increases as the thickness decreases. When the shear production in the flow cannot compensate for the additional energy drawn out of the flow by the wall collisions, the temperature decreases to zero and the flow stops. Scaling relations can be derived for h_stop as a function of the angle of inclination.
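As a rough numerical illustration of the scales discussed above, the sketch below evaluates the conduction length delta = d/(1 - e^2)^(1/2) and the O((delta/h)^2) conduction-to-dissipation ratio for a hypothetical parameter set (the particle diameter, restitution coefficient, and layer height are assumed values, not taken from the paper).

```python
import math

def conduction_length(d, e):
    """Conduction length delta = d / sqrt(1 - e^2) for particle diameter d
    and coefficient of restitution e."""
    return d / math.sqrt(1.0 - e**2)

# Hypothetical values (not from the paper): 1 mm glass beads with e = 0.9,
# flowing in a layer 50 particle diameters deep.
d = 1.0e-3          # particle diameter in metres
e = 0.9             # coefficient of restitution
h = 50 * d          # flowing-layer height

delta = conduction_length(d, e)
print(f"conduction length delta = {delta*1e3:.2f} mm")
# Relative magnitude of energy conduction versus dissipation in the bulk.
print(f"(delta/h)^2 = {(delta/h)**2:.3f}")
```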
Abstract:
In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers and price-sensitive, lead-time-sensitive customers. The sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no-information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial-information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset prices at random intervals based on factors such as the number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no-information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial-information case, we model the problem as a Markovian game and use actor-critic based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multiseller environments with stochastic demands, price-sensitive customers, and inventory replenishments.
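As a minimal sketch of how a pricebot for the no-information case might be set up, the following tabular Q-learning loop maps a coarse (inventory, backorder) state to a discrete price and updates its value estimates from realized profit. The price grid, demand model, state discretization, and replenishment rule are illustrative assumptions, not the formulation used in the paper.

```python
import random
from collections import defaultdict

PRICES = [8.0, 9.0, 10.0, 11.0]      # hypothetical discrete price levels
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

Q = defaultdict(float)               # Q[(state, action)] -> value estimate

def choose_price(state):
    """Epsilon-greedy choice over the discrete price grid."""
    if random.random() < EPS:
        return random.randrange(len(PRICES))
    return max(range(len(PRICES)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in range(len(PRICES)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def simulate_period(price, inventory):
    """Toy demand model (assumption): lower prices attract more customers."""
    demand = max(0, int(random.gauss(12 - price, 2.0)))
    sold = min(demand, inventory)
    profit = (price - 6.0) * sold        # 6.0 = assumed unit cost
    return demand, sold, profit

# Training loop: the state is a coarse (inventory bucket, backorder bucket) pair.
inventory, backorders = 50, 0
state = (inventory // 10, backorders)
for _ in range(10_000):
    action = choose_price(state)
    demand, sold, profit = simulate_period(PRICES[action], inventory)
    backorders = min(5, backorders + demand - sold)
    inventory -= sold
    if inventory == 0:                   # naive reorder-up-to-50 replenishment
        inventory, backorders = 50, 0
    next_state = (inventory // 10, backorders)
    update(state, action, profit, next_state)
    state = next_state
```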
Abstract:
A tactical gaming model for wargame play between two teams, A and B, through a control unit, C, has been developed; it can be run on IBM personal computers (XT and AT models) equipped with a local area network facility. The simulation model handles communication between the teams, along with logging and validation of the teams' actions by the control unit. The validation procedure uses statistical as well as Monte Carlo techniques. The model has been developed to evaluate the planning strategies of the participating teams. The application software, comprising about 120 files, has been developed in BASIC, dBASE, and the associated network software. Experience gained in the instruction courses using this model is also discussed.
Abstract:
The paper reports operational experience from a 100 kWe gasification power plant connected to the grid in Karnataka. Biomass Energy for Rural India (BERI) is a program that implemented gasification-based power generation with an installed capacity of 0.88 MWe, distributed over three locations, to meet the electrical energy needs of the district of Tumkur. The operation of one 100 kWe power plant was found to be unsatisfactory and not meeting the designed performance. The Indian Institute of Science, Bangalore, the technology developer, took the initiative to ensure proper system operation, build local capacity, and prove the designed performance. The grid-connected power plant consists of the IISc gasification system, which includes the reactor, gas cooling and cleaning systems, fuel drier, and water treatment system, to meet the producer gas quality required by an engine. The producer gas is used as fuel in a Cummins India Limited GTA 855 G turbocharged engine, and the power output is connected to the grid. The system has operated for over 1000 continuous hours, with only about 70 h of grid outages. The total biomass consumption for 1035 h of operation was 111 t, at an average of 107 kg/h. Total energy generated was 80.6 MWh, reducing CO2 emissions by over 100 t. The overall specific fuel consumption was about 1.36 kg/kWh, corresponding to an overall biomass-to-electricity efficiency of about 18%. The present operations indicate that plant maintenance can be scheduled at the end of every 1000 h. The results of another 1000 h of operation by the local team are also presented. (C) 2011 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
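The headline figures in the abstract can be cross-checked with simple arithmetic, as in the sketch below; the implied biomass heating value is a derived estimate, not a number reported in the abstract.

```python
# Reported operating figures from the abstract
biomass_kg = 111_000      # total biomass consumed
hours = 1035              # hours of operation
energy_kwh = 80_600       # electricity generated (80.6 MWh)
efficiency = 0.18         # reported biomass-to-electricity efficiency

avg_feed_rate = biomass_kg / hours          # -> ~107 kg/h, as reported
specific_fuel = biomass_kg / energy_kwh     # -> ~1.38 kg/kWh (abstract: ~1.36)

# Biomass lower heating value implied by the reported ~18% efficiency
energy_out_mj = energy_kwh * 3.6
implied_lhv = energy_out_mj / (efficiency * biomass_kg)   # -> ~14.5 MJ/kg

print(f"average feed rate   : {avg_feed_rate:.0f} kg/h")
print(f"specific fuel use   : {specific_fuel:.2f} kg/kWh")
print(f"implied biomass LHV : {implied_lhv:.1f} MJ/kg")
```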
Abstract:
In this paper, we investigate the use of reinforcement learning (RL) techniques for the problem of determining dynamic prices in an electronic retail market. As representative models, we consider a single-seller market and a two-seller market, and formulate the dynamic pricing problem in a setting that easily generalizes to markets with more than two sellers. We first formulate the single-seller dynamic pricing problem in the RL framework and solve it using the Q-learning algorithm through simulation. Next, we model the two-seller dynamic pricing problem as a Markovian game and formulate it in the RL framework. We solve this problem using actor-critic algorithms through simulation. We believe our approach to solving these problems is a promising way of setting dynamic prices in multi-agent environments. We demonstrate the methodology with two illustrative examples of typical retail markets.
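The sketch below shows a generic tabular actor-critic update of the kind referred to above, applied to a toy single-seller pricing loop with a softmax policy over discrete prices. The state definition, demand model, and step sizes are illustrative assumptions, not the paper's algorithm.

```python
import math, random
from collections import defaultdict

PRICES = [8.0, 9.0, 10.0, 11.0]        # hypothetical discrete price levels
ALPHA_V, ALPHA_P, GAMMA = 0.05, 0.01, 0.95

V = defaultdict(float)                  # critic: state-value estimates
theta = defaultdict(float)              # actor: preference for (state, action)

def policy(state):
    """Softmax (Gibbs) policy over the price grid."""
    prefs = [theta[(state, a)] for a in range(len(PRICES))]
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def step(state):
    probs = policy(state)
    action = random.choices(range(len(PRICES)), weights=probs)[0]
    # Toy environment (assumption): demand falls with price; profit is the reward.
    demand = max(0, int(random.gauss(12 - PRICES[action], 2.0)))
    reward = (PRICES[action] - 6.0) * demand
    next_state = min(5, demand // 3)    # coarse demand-level state
    # TD error drives both the critic and the actor updates.
    delta = reward + GAMMA * V[next_state] - V[state]
    V[state] += ALPHA_V * delta
    for a in range(len(PRICES)):        # policy-gradient-style preference update
        grad = (1.0 if a == action else 0.0) - probs[a]
        theta[(state, a)] += ALPHA_P * delta * grad
    return next_state

state = 0
for _ in range(20_000):
    state = step(state)
```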
Abstract:
The paper presents a new controller inspired by the human experience-based learning mechanism for voluntary body action control (motor control). The controller is called the Experience Mapping based Prediction Controller (EMPC). EMPC is designed with auto-learning features and does not require a plant model. The core of the controller is built around the human motor-action prediction-control mechanism, based on past experiential learning, with the ability to adapt intelligently to environmental changes. EMPC is utilized for high-precision position control of DC motors. Simulation results show that accurate position control is achieved using EMPC for both step and dynamic demands. The performance of EMPC is compared with a conventional PD controller and an MRAC-based position controller under different system conditions. Position control using EMPC has also been implemented in practice, and the results are presented.
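The abstract does not detail the internals of EMPC, so the sketch below only illustrates the general idea of an experience map for position control: record (control input, observed position change) pairs during exploration, then recall the stored input whose remembered outcome is closest to the currently demanded change. This is a hypothetical illustration, not the authors' algorithm.

```python
import random

class ExperienceMapController:
    """Hypothetical experience-map sketch: choose a control input by
    recalling the past experience whose outcome best matches the demand."""

    def __init__(self):
        self.experiences = []            # list of (control_input, observed_delta)

    def record(self, control_input, observed_delta):
        self.experiences.append((control_input, observed_delta))

    def control(self, desired_delta):
        if not self.experiences:
            return 0.0
        u, _ = min(self.experiences, key=lambda e: abs(e[1] - desired_delta))
        return u

def motor_response(u):
    """Toy motor model (assumption): position change proportional to input, plus noise."""
    return 0.8 * u + random.gauss(0.0, 0.01)

ctrl = ExperienceMapController()
for _ in range(200):                     # exploration phase: build the experience map
    u = random.uniform(-1.0, 1.0)
    ctrl.record(u, motor_response(u))

position, target = 0.0, 0.5              # simple step demand
for _ in range(20):
    u = ctrl.control(target - position)
    position += motor_response(u)
print(f"final position: {position:.3f} (target {target})")
```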
Abstract:
Non-crystalline-semiconductor-based thin-film transistors (TFTs) are the building blocks of large-area electronic systems. These devices experience a threshold-voltage shift over time due to prolonged gate-bias stress. In this paper, we integrate a recursive model for the threshold-voltage shift with the open-source BSIM4V4 model of AIM-Spice, creating a tool for TFT circuit simulation. We demonstrate the integrity of the model using several test cases, including display driver circuits.
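The abstract does not give the functional form of the recursive threshold-voltage-shift model, so the sketch below uses a stretched-exponential law, an assumed but common choice for bias-stressed non-crystalline TFTs, advanced recursively one time step at a time so that arbitrary stress histories could be stepped through in a transient simulation. The parameter values are illustrative assumptions.

```python
import math

def recursive_dvt(dvt, v_gs, v_t0, dt, tau=1e6, beta=0.4):
    """One recursive update of the threshold-voltage shift under gate-bias stress,
    using an assumed stretched-exponential law:
        dVt(t) = (Vgs - Vt0) * (1 - exp(-(t/tau)**beta))
    The effective stress time is recovered from the current shift, advanced by dt,
    and the shift is re-evaluated."""
    v_overdrive = v_gs - v_t0
    if v_overdrive <= 0:
        return dvt                       # no positive stress; relaxation ignored here
    frac = min(dvt / v_overdrive, 0.999999)
    t_eff = tau * (-math.log(1.0 - frac)) ** (1.0 / beta)
    t_eff += dt
    return v_overdrive * (1.0 - math.exp(-(t_eff / tau) ** beta))

# Example: ~28 hours of 10 V gate-overdrive stress, stepped in 1000 s increments.
dvt = 0.0
for _ in range(100):
    dvt = recursive_dvt(dvt, v_gs=11.0, v_t0=1.0, dt=1000.0)
print(f"threshold-voltage shift after stress: {dvt:.2f} V")
```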