44 results for Budget balance

at Indian Institute of Science - Bangalore - India


Relevance: 70.00%

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations in which the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require subsidy from outside the system. We however demonstrate via simulation that, if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
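The paper's LP-based rebate design for divisible goods does not fit a short snippet, but the indivisible special case it mentions (Moulin; Guo and Conitzer) can be sketched. Below is a minimal, hypothetical Python illustration of VCG with redistribution for a single indivisible item, using a Cavallo-style rebate (1/n of the second-highest bid among the other agents). Each agent's rebate is independent of its own report, which preserves strategy-proofness; this is an illustrative sketch, not the mechanism proposed in the paper.

```python
def vcg_with_cavallo_rebates(bids):
    """Allocate one indivisible item by VCG, then return part of the
    payment as rebates (Cavallo-style redistribution sketch, n >= 3)."""
    n = len(bids)
    winner = max(range(n), key=lambda i: bids[i])
    # VCG payment: highest bid among the other agents
    payment = max(b for j, b in enumerate(bids) if j != winner)
    # Rebate to agent i: 1/n of the second-highest bid among the others,
    # so the rebate does not depend on i's own report
    rebates = []
    for i in range(n):
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        rebates.append(others[1] / n)
    surplus = payment - sum(rebates)  # money left undistributed (>= 0)
    return winner, payment, rebates, surplus
```

For bids [10, 6, 3], agent 0 wins, pays 6, and the mechanism returns 4 of the 6 units collected, leaving a budget surplus of 2 instead of 6 under plain VCG.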

Relevance: 70.00%

Abstract:

The problem addressed in this paper concerns an important issue faced by any green-aware global company: keeping its emissions within a prescribed cap. The specific problem is to allocate carbon reductions to its different divisions and supply chain partners in achieving a required target of reductions in its carbon reduction program. The problem becomes a challenging one since the divisions and supply chain partners, being autonomous, may exhibit strategic behavior. We use a standard mechanism design approach to solve this problem. While designing a mechanism for the emission reduction allocation problem, the key properties that need to be satisfied are dominant strategy incentive compatibility (DSIC) (also called strategy-proofness), strict budget balance (SBB), and allocative efficiency (AE). Mechanism design theory has shown that it is not possible to achieve the above three properties simultaneously. In the literature, a mechanism that satisfies DSIC and AE while keeping the budget imbalance minimal has recently been proposed in this context. Motivated by the observation that SBB is an important requirement, in this paper we propose a mechanism that satisfies DSIC and SBB with a slight compromise in allocative efficiency. Our experimentation with a stylized case study shows that the proposed mechanism performs satisfactorily and provides an attractive alternative mechanism for carbon footprint reduction by global companies.

Relevance: 60.00%

Abstract:

A business cluster is a co-located group of micro, small, medium scale enterprises. Such firms can benefit significantly from their co-location through shared infrastructure and shared services. Cost sharing becomes an important issue in such sharing arrangements especially when the firms exhibit strategic behavior. There are many cost sharing methods and mechanisms proposed in the literature based on game theoretic foundations. These mechanisms satisfy a variety of efficiency and fairness properties such as allocative efficiency, budget balance, individual rationality, consumer sovereignty, strategyproofness, and group strategyproofness. In this paper, we motivate the problem of cost sharing in a business cluster with strategic firms and illustrate different cost sharing mechanisms through the example of a cluster of firms sharing a logistics service. Next we look into the problem of a business cluster sharing ICT (information and communication technologies) infrastructure and explore the use of cost sharing mechanisms.
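The abstract surveys game-theoretic cost sharing without fixing one method; a common baseline among those with the fairness properties it lists is the Shapley value. The sketch below computes Shapley cost shares for a hypothetical cluster of three firms sharing a logistics service with an assumed cost structure (a fixed cost of 60 plus 10 per firm); the firm names and numbers are illustrative only.

```python
import math
from itertools import permutations

def shapley_cost_shares(firms, cost):
    """Shapley-value cost shares: each firm's average marginal cost over
    all orders in which firms could join. `cost` maps a frozenset of
    firms to the cost of serving that coalition."""
    shares = {f: 0.0 for f in firms}
    for order in permutations(firms):
        served = set()
        for f in order:
            before = cost(frozenset(served))
            served.add(f)
            shares[f] += cost(frozenset(served)) - before
    n_orders = math.factorial(len(firms))
    return {f: s / n_orders for f, s in shares.items()}

# Hypothetical cluster: shared logistics costs 60 fixed + 10 per firm
cost = lambda s: 60 + 10 * len(s) if s else 0
shares = shapley_cost_shares(["A", "B", "C"], cost)
```

By symmetry each firm pays 30 (an equal split of the fixed cost plus its own incremental cost), and the shares sum exactly to the grand-coalition cost of 90, i.e. the allocation is budget balanced.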

Relevance: 60.00%

Abstract:

Query incentive networks capture the role of incentives in extracting information from decentralized information networks such as a social network. Several game theoretic models of query incentive networks have been proposed in the literature to study and characterize the dependence of the monetary reward required to extract the answer for a query on various factors such as the structure of the network, the level of difficulty of the query, and the required success probability. None of the existing models, however, captures the practical and important factor of the quality of answers. In this paper, we develop a complete mechanism design based framework to incorporate the quality of answers in the monetization of query incentive networks. First, we extend the model of Kleinberg and Raghavan [2] to allow the nodes to modulate the incentive on the basis of the quality of the answer they receive. For this quality conscious model, we show the existence of a unique Nash equilibrium and study the impact of quality of answers on the growth rate of the initial reward, with respect to the branching factor of the network. Next, we present two mechanisms, the direct comparison mechanism and the peer prediction mechanism, for truthful elicitation of quality from the agents. These mechanisms are based on scoring rules and cover different scenarios which may arise in query incentive networks. We show that the proposed quality elicitation mechanisms are incentive compatible and ex-ante budget balanced. We also derive conditions under which ex-post budget balance can be achieved by these mechanisms.
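The mechanisms above are built on scoring rules. As a minimal sketch of why a strictly proper rule makes truthful quality reports optimal, here is the quadratic (Brier) scoring rule with an assumed two-outcome belief of [0.7, 0.3]; the paper's own mechanisms are more involved, and these numbers are illustrative only.

```python
def quadratic_score(report, outcome):
    """Brier/quadratic scoring rule: strictly proper, so reporting one's
    true probability vector maximizes the expected score."""
    return 2 * report[outcome] - sum(q * q for q in report)

def expected_score(true_p, report):
    """Expected score of `report` when outcomes are drawn from `true_p`."""
    return sum(p * quadratic_score(report, k) for k, p in enumerate(true_p))

true_p = [0.7, 0.3]                              # agent's private belief
truthful = expected_score(true_p, true_p)        # 0.58
misreport = expected_score(true_p, [0.5, 0.5])   # 0.50
```

Any misreport (here a uniform one) earns a strictly lower expected score than the truthful report, which is the incentive-compatibility property the elicitation mechanisms rely on.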

Relevance: 60.00%

Abstract:

There are p heterogeneous objects to be assigned to n competing agents (n > p), each with unit demand. It is required to design a Groves mechanism for this assignment problem satisfying weak budget balance and individual rationality while minimizing the budget imbalance. This calls for designing an appropriate rebate function. When the objects are identical, this problem has been solved by what we refer to as the WCO mechanism. We measure the performance of such mechanisms by the redistribution index. We first prove an impossibility theorem which rules out linear rebate functions with a non-zero redistribution index in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with a non-zero redistribution index are possible when the valuations for the objects have a certain type of relationship, and we design a mechanism with a linear rebate function that is worst-case optimal. In the second approach, we show that rebate functions with non-zero efficiency are possible if linearity is relaxed. We extend the rebate functions of the WCO mechanism to heterogeneous object assignment and conjecture them to be worst-case optimal.

Relevance: 60.00%

Abstract:

Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. In this paper, we propose a new design approach for a one-shot exchange that collects bids from buyers and sellers and clears the market at the end of the bidding period. The main principle of the approach is to decouple the allocation from pricing. It is well known that it is impossible for an exchange with voluntary participation to be both efficient and budget balanced. Budget balance is a mandatory requirement for an exchange to operate at a profit. Our approach is to allocate the trade so as to maximize the reported values of the agents. The pricing is posed as a payoff determination problem that distributes the total payoff fairly to all agents with budget balance imposed as a constraint. We devise an arbitration scheme via an axiomatic approach to solve the payoff determination problem using the added-value concept of game theory.
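To make the allocate-then-price decoupling concrete, here is a toy one-shot clearing sketch: match the highest buy bids to the lowest sell asks while each bid covers its ask (maximizing reported trade value), then set a price per matched pair. Midpoint pricing is used only as a simple budget-balanced placeholder; the paper's actual axiomatic arbitration scheme is not reproduced here.

```python
def clear_one_shot_exchange(buy_bids, sell_asks):
    """Clear a one-shot double-sided market: allocation maximizes reported
    value; each matched pair then trades at the bid-ask midpoint, so the
    buyer's payment equals the seller's receipt (budget balance)."""
    buys = sorted(buy_bids, reverse=True)
    sells = sorted(sell_asks)
    trades, prices = [], []
    for b, s in zip(buys, sells):
        if b < s:          # no more value-creating matches
            break
        trades.append((b, s))
        prices.append((b + s) / 2.0)
    surplus = sum(b - s for b, s in trades)   # total reported gain from trade
    return trades, prices, surplus
```

With buy bids [10, 8, 5] and sell asks [4, 6, 9], two trades clear at price 7 each, realizing the maximum reported surplus of 8; the third pair (5 vs 9) is unmatched.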

Relevance: 30.00%

Abstract:

The measurement of the surface energy balance over a land surface in an open area in Bangalore is reported. Measurements of all variables needed to calculate the surface energy balance on time scales longer than a week are made. Components of radiative fluxes are measured, while sensible and latent heat fluxes are based on the bulk method using measurements made at two levels on a micrometeorological tower of 10 m height. The bulk flux formulation is verified by comparing its fluxes with direct fluxes from sonic anemometer data sampled at 10 Hz. Soil temperature is measured at 4 depths. Data have been continuously collected for over 6 months covering the pre-monsoon and monsoon periods of the year 2006. The study first addresses the issue of getting the fluxes accurately. It is shown that water vapour measurements are the most crucial. A bias of 0.25% in relative humidity, which is well above the normal accuracy assumed by the manufacturers but achievable in the field using a combination of laboratory calibration and field intercomparisons, results in about a 20 W m(-2) change in the latent heat flux on the seasonal time scale. When seen on the seasonal time scale, the net longwave radiation is the largest energy loss term at the experimental site. The seasonal variation in the energy sink term is small compared to that in the energy source term.
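The bulk method referred to above estimates turbulent fluxes from mean gradients. A generic sketch of the standard bulk-aerodynamic formulas is given below; the transfer coefficients and the sample numbers are illustrative assumptions, not the values calibrated against the sonic anemometer in this study.

```python
def bulk_fluxes(u, t_sfc, t_air, q_sfc, q_air,
                rho=1.2, cp=1005.0, lv=2.5e6, ch=1.2e-3, ce=1.2e-3):
    """Bulk-aerodynamic sensible (H) and latent (LE) heat fluxes, W m^-2.
    u: wind speed (m/s); t_*: temperatures (only the difference matters);
    q_*: specific humidities (kg/kg). rho = air density, cp = specific
    heat, lv = latent heat of vaporization, ch/ce = illustrative bulk
    transfer coefficients."""
    H = rho * cp * ch * u * (t_sfc - t_air)
    LE = rho * lv * ce * u * (q_sfc - q_air)
    return H, LE
```

For example, a 2 K surface-air temperature difference and a 0.002 kg/kg humidity difference at 3 m/s wind give roughly H ≈ 8.7 W m(-2) and LE ≈ 21.6 W m(-2); the strong sensitivity of LE to the humidity difference is why the abstract flags water vapour accuracy as crucial.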

Relevance: 30.00%

Abstract:

An estimate of the groundwater budget at the catchment scale is extremely important for the sustainable management of available water resources. Water resources are generally subjected to over-exploitation for agricultural and domestic purposes in agrarian economies like India. The double water-table fluctuation method is a reliable method for calculating the water budget in semi-arid crystalline rock areas. Extensive measurements of water levels from a dense network before and after the monsoon rainfall were made in a 53 km(2) watershed in southern India and various components of the water balance were then calculated. Later, water level data underwent geostatistical analyses to determine the priority and/or redundancy of each measurement point using a cross-validation method. An optimal network evolved from these analyses. The network was then used in re-calculation of the water-balance components. It was established that such an optimized network provides far fewer measurement points without considerably changing the conclusions regarding groundwater budget. This exercise is helpful in reducing the time and expenditure involved in exhaustive piezometric surveys and also in determining the water budget for large watersheds (watersheds greater than 50 km(2)).
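The core term of the water-table fluctuation calculation can be sketched in a few lines: net recharge at each piezometer is the specific yield times the monsoon rise in water level, then area-averaged over the network. The readings and the specific yield below are hypothetical, and a full double water-table fluctuation budget would also account for pumping, baseflow and lateral flows.

```python
def monsoon_recharge(h_pre, h_post, sy):
    """Water-table fluctuation estimate of net recharge (m of water) at
    each measurement point: specific yield `sy` times the water-level
    rise between pre- and post-monsoon surveys. Core term only."""
    return [sy * (post - pre) for pre, post in zip(h_pre, h_post)]

def areal_mean(values):
    """Simple areal average over the piezometer network."""
    return sum(values) / len(values)

# Hypothetical piezometer readings (m above datum), Sy = 0.01
rise = monsoon_recharge([610.2, 612.5, 608.9], [613.0, 614.1, 611.4], 0.01)
mean_recharge = areal_mean(rise)   # ~0.023 m of water over the catchment
```

In the study, geostatistical cross-validation decides which of these measurement points can be dropped without materially changing the averaged budget.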

Relevance: 30.00%

Abstract:

On calm clear nights, air at a height of a few decimetres above bare soil can be cooler than the surface by several degrees in what we shall call the Ramdas layer (Ramdas and Atmanathan, 1932). The authors have recently offered a logical explanation for such a lifted temperature minimum, together with a detailed numerical model. In this paper, we provide physical insight into the phenomenon by a detailed discussion of the energy budget in four typical cases, including one with a lifted minimum. It is shown that the net cooling rate near ground is the small difference between two dominant terms, representing respectively radiative upflux from the ground and from the air layers just above ground. The delicate energy balance that leads to the lifted minimum is upset by turbulent transport, by surface emissivity approaching unity, or by high ground cooling rates. The rapid variation of the flux emissivity of humid air is shown to dominate radiative transport near the ground.

Relevance: 30.00%

Abstract:

In this study, we applied the integration methodology developed in the companion paper by Aires (2014) using real satellite observations over the Mississippi Basin. The methodology provides basin-scale estimates of the four water budget components (precipitation P, evapotranspiration E, water storage change ΔS, and runoff R) in a two-step process: the Simple Weighting (SW) integration and a Postprocessing Filtering (PF) that imposes the water budget closure. A comparison with in situ observations of P and E demonstrated that PF improved the estimation of both components. A Closure Correction Model (CCM) has been derived from the integrated product (SW+PF) that allows each observation data set to be corrected independently, unlike the SW+PF method, which requires simultaneous estimates of the four components. The CCM allows the various data sets for each component to be standardized and greatly decreases the budget residual (P - E - ΔS - R). As a direct application, the CCM was combined with the water budget equation to reconstruct missing values in any component. Results of a Monte Carlo experiment with synthetic gaps demonstrated the good performance of the method, except for the runoff data, which have a variability of the same order of magnitude as the budget residual. Similarly, we proposed a reconstruction of ΔS between 1990 and 2002, where no Gravity Recovery and Climate Experiment data are available. Unlike most studies dealing with water budget closure at the basin scale, only satellite observations and in situ runoff measurements are used. Consequently, the integrated data sets are model independent and can be used for model calibration or validation.
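The closure step can be illustrated with a standard constrained least-squares adjustment: perturb each component estimate in proportion to its error variance so that the residual P - E - ΔS - R becomes exactly zero. This is a simplified stand-in for the Postprocessing Filtering described above, with made-up component values and variances.

```python
def close_water_budget(P, E, dS, R, var):
    """Minimally adjust (P, E, dS, R), weighted by error variances `var`,
    so that the budget constraint g . x = P - E - dS - R = 0 holds
    exactly. Returns the closed estimates [P', E', dS', R']."""
    g = (1.0, -1.0, -1.0, -1.0)
    x = (P, E, dS, R)
    r = sum(gi * xi for gi, xi in zip(g, x))        # current residual
    denom = sum(gi * gi * v for gi, v in zip(g, var))
    # each component absorbs a share of the residual proportional to its variance
    return [xi - v * gi * r / denom for xi, gi, v in zip(x, g, var)]
```

With P=3.0, E=1.5, ΔS=0.5, R=0.7 (residual 0.3) and variances [0.2, 0.1, 0.05, 0.05], the least-certain component (P) absorbs the largest share of the correction and the closed budget balances exactly.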

Relevance: 20.00%

Abstract:

The aim of this study is to propose a method to assess the long-term chemical weathering mass balance for a regolith developed on a heterogeneous silicate substratum at the small experimental watershed scale by adopting a combined approach of geophysics, geochemistry and mineralogy. We initiated in 2003 a study of the steep climatic gradient and associated geomorphologic features of the edge of the rifted continental passive margin of the Karnataka Plateau, Peninsular India. In the transition sub-humid zone of this climatic gradient we have studied the pristine forested small watershed of Mule Hole (4.3 km(2)) mainly developed on gneissic substratum. Mineralogical, geochemical and geophysical investigations were carried out (i) in characteristic red soil profiles and (ii) in boreholes up to 60 m deep in order to take into account the effect of the weathering mantle roots. In addition, 12 Electrical Resistivity Tomography profiles (ERT), with an investigation depth of 30 m, were generated at the watershed scale to spatially characterize the information gathered in boreholes and soil profiles. The location of the ERT profiles is based on a previous electromagnetic survey, with an investigation depth of about 6 m. The soil cover thickness was inferred from the electromagnetic survey combined with a geological/pedological survey. Taking into account the parent rock heterogeneity, the degree of weathering of each of the regolith samples has been defined using both the mineralogical composition and the geochemical indices (Loss on Ignition, Weathering Index of Parker, Chemical Index of Alteration). Comparing these indices with electrical resistivity logs, it has been found that a value of 400 Ohm m clearly delineates the parent rocks and the weathered materials. Then the 12 inverted ERT profiles were constrained with this value after verifying the uncertainty due to the inversion procedure. Synthetic models based on the field data were used for this purpose.
The estimated average regolith thickness at the watershed scale is 17.2 m, including 15.2 m of saprolite and 2 m of soil cover. Finally, using these estimations of the thicknesses, the long-term mass balance is calculated for the average gneiss-derived saprolite and red soil. In the saprolite, the open-system mass-transport function T indicates that all the major elements except Ca are depleted. The chlorite and biotite crystals, the chief sources for Mg (95%), Fe (84%), Mn (86%) and K (57%, biotite only), are the first to undergo weathering and the oligoclase crystals are relatively intact within the saprolite with a loss of only 18%. The Ca accumulation can be attributed to the precipitation of CaCO3 from the percolating solution due to the current and/or the paleoclimatic conditions. Overall, the most important losses occur for Si, Mg and Na with -286 x 10(6) mol/ha (62% of the total mass loss), -67 x 10(6) mol/ha (15% of the total mass loss) and -39 x 10(6) mol/ha (9% of the total mass loss), respectively. Al, Fe and K account for 7%, 4% and 3% of the total mass loss, respectively. In the red soil profiles, the open-system mass-transport functions point out that all major elements except Mn are depleted. Most of the oligoclase crystals have broken down with a loss of 90%. The most important losses occur for Si, Na and Mg with -55 x 10(6) mol/ha (47% of the total mass loss), -22 x 10(6) mol/ha (19% of the total mass loss) and -16 x 10(6) mol/ha (14% of the total mass loss), respectively. Ca, Al, K and Fe account for 8%, 6%, 4% and 2% of the total mass loss, respectively. Overall these findings confirm the immaturity of the saprolite at the watershed scale. The soil profiles are more evolved than saprolite but still contain primary minerals that can further undergo weathering and hence consume atmospheric CO2.
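Two of the quantities used above are simple formulas worth stating explicitly: the Chemical Index of Alteration (CIA) and the open-system mass-transport function τ. The sketch below implements both with hypothetical molar proportions and concentrations; the choice of immobile index element (e.g. Ti or Zr) is an assumption for illustration.

```python
def cia(al2o3, cao_star, na2o, k2o):
    """Chemical Index of Alteration, in molar proportions:
    100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O),
    where CaO* counts only silicate-bound Ca."""
    return 100.0 * al2o3 / (al2o3 + cao_star + na2o + k2o)

def tau(c_j_w, c_i_w, c_j_p, c_i_p):
    """Open-system mass-transport function for mobile element j relative
    to immobile index element i (w = weathered, p = parent):
    tau = (Cj,w / Cj,p) * (Ci,p / Ci,w) - 1.
    tau = 0: immobile; tau < 0: depleted; tau > 0: enriched."""
    return (c_j_w / c_j_p) * (c_i_p / c_i_w) - 1.0
```

For example, Na at 1.0 wt% in saprolite versus 3.0 wt% in parent gneiss, normalized to Ti (0.8 vs 0.5 wt%), gives τ ≈ -0.79, i.e. about 79% of the Na originally present has been lost, consistent in spirit with the depletion pattern reported above.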

Relevance: 20.00%

Abstract:

Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops obtained by stochastic simulation techniques deviate substantially from the predictions of conventional population balance. The foregoing problem is considered in this paper in terms of a mean field approximation obtained by applying a first-order closure to an unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for residual supersaturation and (ii) the mean number density of particles for each size and supersaturation from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide both occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, the average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance as judged by the close proximity of results from the former to those from stochastic simulations. The agreement is excellent for broad initial supersaturations at short times but deteriorates progressively at larger times. For steep initial supersaturation distributions, predictions of the mean field theory are not satisfactory thus calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce expensive computation times involved in simulation. 
More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.

Relevance: 20.00%

Abstract:

Quantitative estimates of the vertical structure and the spatial gradients of aerosol extinction coefficients have been made from airborne lidar measurements across the coastline into offshore oceanic regions along the east and west coasts of India. The vertical structure revealed the presence of strong, elevated aerosol layers in the altitude region of ~2-4 km, well above the atmospheric boundary layer (ABL). Horizontal gradients also showed a vertical structure, being sharp with the e(-1) scaling distance (D0H) as small as ~150 km in the well-mixed regions mostly under the influence of local source effects. Above the ABL, where local effects are subdued, the gradients were much shallower (~600-800 km); nevertheless, they were steep compared to the value of ~1500-2500 km reported for columnar AOD during winter. The gradients of these elevated layers were steeper over the east coast of India than over the west coast. Near-simultaneous radiosonde (Vaisala, Inc., Finland) ascents made over the northern Bay of Bengal showed the presence of convectively unstable regions, the first from the surface to ~750-1000 m and the other extending from 1750 to 3000 m, separated by a stable region in between. These can act as a conduit for the advection of aerosols and favor the transport of continental aerosols at higher levels (> 2 km) into the oceans without entering the marine boundary layer below. The large spatial gradient in aerosol optical properties, and hence radiative impacts, between the coastal landmass and the adjacent oceans within a short distance of < 300 km (even at an altitude of 3 km) during summer and the premonsoon is of significance to the regional climate.
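The e(-1) scaling distance D0H quoted above assumes extinction decays exponentially with offshore distance, sigma(x) = sigma0 * exp(-x / D0H), so D0H can be recovered as minus the reciprocal of the slope of ln(sigma) against x. A minimal sketch with synthetic data (the extinction values and distances are invented for illustration, not from the study):

```python
import math

def efold_distance(x_km, sigma):
    """Least-squares fit of ln(sigma) vs distance x (km); under
    sigma(x) = sigma0 * exp(-x / D0H) the slope is -1/D0H."""
    n = len(x_km)
    xbar = sum(x_km) / n
    y = [math.log(s) for s in sigma]
    ybar = sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x_km, y)) \
            / sum((xi - xbar) ** 2 for xi in x_km)
    return -1.0 / slope

# Synthetic extinction profile with a true D0H of 150 km
xs = [0.0, 50.0, 100.0, 200.0, 300.0]
obs = [0.25 * math.exp(-x / 150.0) for x in xs]
d0h = efold_distance(xs, obs)   # recovers ~150 km
```

On noise-free synthetic data the fit returns the generating scale exactly; with real lidar transects the same regression gives the layer-by-layer D0H values compared in the abstract.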

Relevance: 20.00%

Abstract:

The sodium salt of poly(dG-dC) is known to exhibit a B → Z transition in the presence of various cations and 60% alcohol. We here show that the lithium salt of poly(dG-dC) does not undergo a B → Z transition in the presence of 60% alcohol, since Li+ with its large hydration shell cannot stabilize the Z-form. On the other hand, high concentrations of Mg2+ or micromolar concentrations of the cobalt hexamine complex, which are known to stabilize the Z-form, can compete with Li+ for charge neutralization and hence bring about a B → Z transition in the same polymer. From the model building studies, the mode of action of the cobalt hexamine complex in stabilizing the Z-form is postulated.

Relevance: 20.00%

Abstract:

Knowledge of the drag force is an important design parameter in aerodynamics. Measurement of aerodynamic forces at hypersonic speeds is a challenge, and usually ground test facilities like shock tunnels are used to carry out such tests. Accelerometer-based force balances are commonly employed for measuring aerodynamic drag around bodies in hypersonic shock tunnels. In this study, we present an analysis of the effect of model material on the performance of an accelerometer balance used for measurement of drag in impulse facilities. From the experimental studies performed on models constructed out of Bakelite HYLEM and aluminum, it is clear that the rigid body assumption does not hold good during the short testing duration available in shock tunnels. This is notwithstanding the fact that the rubber bush used for supporting the model allows unconstrained motion of the model during the short testing time available in the shock tunnel. The vibrations induced in the model on impact loading in the shock tunnel are damped out in the metallic model, resulting in a smooth acceleration signal, while the signal becomes noisy and non-linear when we use non-isotropic materials like Bakelite HYLEM. This also implies that careful analysis and proper data reduction methodologies are necessary for measuring aerodynamic drag for non-metallic models in shock tunnels. The results from drag measurements carried out using a 60 degree half-angle blunt cone are given in the present analysis.