52 results for fair value
Abstract:
In Universal Mobile Telecommunication Systems (UMTS), the Downlink Shared Channel (DSCH) can be used for providing streaming services. The traffic model for streaming services differs from the commonly used continuously-backlogged model: each connection specifies a required service rate over an interval of time, k, called the "control horizon". In this paper, our objective is to determine how k DSCH frames should be shared among a set of I connections. We need a scheduler that is both efficient and fair, and we introduce the notion of discrepancy to balance the conflicting requirements of aggregate throughput and fairness. Our aim is to schedule the mobiles in such a way that the schedule minimizes the discrepancy over the k frames. We propose an optimal and computationally efficient algorithm, called STEM+. The proof of the optimality of STEM+, when applied to the UMTS rate sets, is the major contribution of this paper. We also show that STEM+ outperforms other scheduling algorithms in terms of both fairness and aggregate throughput. Thus, STEM+ achieves both fairness and efficiency and is therefore an appealing algorithm for scheduling streaming connections.
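The discrepancy idea can be illustrated with a toy greedy scheduler. STEM+ itself is not specified in the abstract, so the serving rule below (each frame goes to the connection whose cumulative service lags its ideal share the most) and the discrepancy measure (sum of absolute deviations at the horizon) are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch: a "largest lag first" scheduler for k shared frames.
# required[i] is the fraction of the k frames connection i needs.

def greedy_schedule(required, k):
    n = len(required)
    served = [0] * n          # frames actually granted so far
    schedule = []
    for t in range(1, k + 1):
        # lag = ideal cumulative service minus actual cumulative service
        lags = [required[i] * t - served[i] for i in range(n)]
        winner = max(range(n), key=lambda i: lags[i])
        served[winner] += 1
        schedule.append(winner)
    return schedule, served

def discrepancy(required, served, k):
    # sum of absolute deviations from the ideal allocation at the horizon
    return sum(abs(required[i] * k - served[i]) for i in range(len(required)))
```

With `required = [0.5, 0.25, 0.25]` and `k = 4`, the greedy rule meets every demand exactly, giving zero discrepancy; an optimal scheduler like STEM+ would additionally guarantee this over the UMTS rate sets.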
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
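A minimal sketch of the user-side optimization described above, under illustrative assumptions: log(1+r) as the concave increasing utility, a quadratic disutility on the shortfall, and the multiplicative congestion-penalty term omitted for brevity. None of these functional forms are from the paper:

```python
import math

def net_benefit(r, price, contracted, c_dis=0.5):
    """Concave utility minus payment minus convex disutility on the
    shortfall (contracted rate minus present rate)."""
    shortfall = max(0.0, contracted - r)
    return math.log(1.0 + r) - price * r - c_dis * shortfall ** 2

def best_rate(price, contracted, grid=10001, r_max=10.0):
    # coarse grid search for the rate maximizing net benefit
    rates = [r_max * i / (grid - 1) for i in range(grid)]
    return max(rates, key=lambda r: net_benefit(r, price, contracted))
```

Without a contract (contracted rate 0), the optimum is the classic r* = 1/p − 1; with a contracted rate above that, the disutility term pulls the chosen rate upward toward the contract, which is the behaviour the abstract describes.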
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. We obtain simple necessary and sufficient conditions on prices such that efficient and fair link sharing is possible. We illustrate the ideas using a simple example.
Abstract:
The high temperature region of the MnO-Al2O3 phase diagram has been redetermined to resolve some discrepancies reported in the literature regarding the melting behaviour of MnAl2O4. This spinel was found to melt congruently at 2108 (±15) K. The activity of MnO in MnO-Al2O3 melts and in the two-phase regions, melt + MnAl2O4 and MnAl2O4 + Al2O3, has been determined by measuring the manganese concentration in platinum foils in equilibrium under controlled oxygen potentials. The activity of MnO obtained in this study for MnO-Al2O3 melts is in fair agreement with the results of Sharma and Richardson. However, the alumina-rich melt is found to be in equilibrium with MnAl2O4 rather than Al2O3, as suggested by Sharma and Richardson. The value for the activity of MnO in the MnAl2O4 + Al2O3 two-phase region permits a rigorous application of the Gibbs-Duhem equation for calculating the activity of Al2O3 and the integral Gibbs energy of mixing of MnO-Al2O3 melts, which are significantly different from those reported in the literature.
Abstract:
The chemical potentials of CaO in the two-phase fields (TiO2 + CaTiO3), (CaTiO3 + Ca4Ti3O10), and (Ca4Ti3O10 + Ca3Ti2O7) of the pseudo-binary system (CaO + TiO2) have been measured in the temperature range (900 to 1250) K, relative to pure CaO as the reference state, using solid-state galvanic cells incorporating single-crystal CaF2 as the solid electrolyte. The cells were operated under pure oxygen at ambient pressure. The standard Gibbs free energies of formation of the calcium titanates CaTiO3, Ca4Ti3O10, and Ca3Ti2O7 from their component binary oxides were derived from the reversible e.m.f.s. The results can be summarised by the following equations: CaO(solid) + TiO2(solid) → CaTiO3(solid), ΔG°/(J · mol−1) = −80140 − 6.302(T/K) (±85); 4CaO(solid) + 3TiO2(solid) → Ca4Ti3O10(solid), ΔG°/(J · mol−1) = −243473 − 25.758(T/K) (±275); 3CaO(solid) + 2TiO2(solid) → Ca3Ti2O7(solid), ΔG°/(J · mol−1) = −164217 − 16.838(T/K) (±185). The reference state for solid TiO2 is the rutile form. The results of this study are in good agreement with thermodynamic data for CaTiO3 reported in the literature. For Ca4Ti3O10, the Gibbs free energy of formation obtained in this study differs significantly from that reported by Taylor and Schmalzried at T = 873 K. For Ca3Ti2O7, experimental measurements are not available in the literature for direct comparison with the results obtained in this study. Nevertheless, the standard entropy of Ca3Ti2O7 at T = 298.15 K estimated from the results of this study using the Neumann–Kopp rule is in fair agreement with the value obtained from low-temperature heat capacity measurements.
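The fitted ΔG° expressions are linear in T and straightforward to evaluate. A small helper with the coefficients taken directly from the equations above (in J·mol⁻¹; stated uncertainties ±85, ±275, and ±185 J·mol⁻¹ respectively):

```python
def dG_CaTiO3(T):
    """CaO(s) + TiO2(s) -> CaTiO3(s), J/mol."""
    return -80140.0 - 6.302 * T

def dG_Ca4Ti3O10(T):
    """4CaO(s) + 3TiO2(s) -> Ca4Ti3O10(s), J/mol."""
    return -243473.0 - 25.758 * T

def dG_Ca3Ti2O7(T):
    """3CaO(s) + 2TiO2(s) -> Ca3Ti2O7(s), J/mol."""
    return -164217.0 - 16.838 * T
```

For example, at T = 1000 K the CaTiO3 formation energy is −80140 − 6302 = −86442 J·mol⁻¹; all three values stay negative over the measured (900 to 1250) K range.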
Abstract:
Notched three-point bend (TPB) specimens made with plain concrete and cement mortar were tested under crack mouth opening displacement (CMOD) control at a rate of 0.0004 mm/s, and the acoustic emissions (AE) released were recorded simultaneously during the experiments. Amplitude distribution analysis of the AE released during fracture was carried out to study the development of the fracture process in concrete and mortar specimens. The slope of the log-linear frequency-amplitude distribution of AE is known as the AE-based b-value. The AE-based b-value was computed in terms of the physical process of the time-varying applied load using the cumulative frequency distribution (Gutenberg-Richter relationship) and the discrete frequency distribution (Aki's method) of AE released during concrete fracture. AE characteristics of plain concrete and cement mortar were studied and discussed, and it was observed that AE-based b-value analysis serves as a tool to identify damage in concrete structural members. (C) 2012 Elsevier Ltd. All rights reserved.
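The Gutenberg-Richter relationship, log10 N(≥A) = a − b·A, can be fitted by least squares to cumulative counts of AE amplitudes; the negative slope is the b-value. The sketch below assumes magnitude-like amplitude units (AE studies often rescale dB amplitudes, e.g. by 1/20, and the paper's exact binning is not given here):

```python
import math

def gr_b_value(amplitudes, bin_width=1.0):
    """Fit log10 N(>=A) = a - b*A by ordinary least squares
    and return the b-value (the negative of the slope)."""
    xs, ys = [], []
    a = float(min(amplitudes))
    hi = float(max(amplitudes))
    while a <= hi:
        n = sum(1 for amp in amplitudes if amp >= a)  # cumulative count
        if n > 0:
            xs.append(a)
            ys.append(math.log10(n))
        a += bin_width
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return -sxy / sxx
```

On a synthetic catalogue built to follow N(≥A) = 10^(3−A) exactly, the fit recovers b = 1, the textbook value; in the fracture experiments the b-value drops as damage localizes into larger cracking events.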
Abstract:
In this article, we address stochastic differential games of mixed type with both control and stopping times. Under standard assumptions, we show that the value of the game can be characterized as the unique viscosity solution of corresponding Hamilton-Jacobi-Isaacs (HJI) variational inequalities.
Abstract:
We develop a quadratic C⁰ interior penalty method for linear fourth-order boundary value problems with essential and natural boundary conditions of the Cahn-Hilliard type. Both a priori and a posteriori error estimates are derived. The performance of the method is illustrated by numerical experiments.
Abstract:
We investigate the problem of influence limitation in the presence of competing campaigns in a social network. Given a negative campaign which starts propagating from a specified source, and a positive/counter campaign that is initiated, after a certain time delay, to limit the influence or spread of misinformation by the negative campaign, we are interested in finding the top k influential nodes at which the positive campaign may be triggered. This problem has numerous applications in situations such as limiting the propagation of rumors, arresting the spread of a virus through inoculation, initiating a counter-campaign against malicious propaganda, etc. The influence function for the generic influence limitation problem is non-submodular. Restricted versions of the influence limitation problem reported in the literature assume submodularity of the influence function and do not capture the problem in a realistic setting. In this paper, we propose a novel computational approach to the influence limitation problem based on the Shapley value, a solution concept in cooperative game theory. Our approach works equally effectively for both submodular and non-submodular influence functions. Experiments on standard real-world social network datasets reveal that the proposed approach outperforms existing heuristics in the literature. As a non-trivial extension, we also address the problem of influence limitation in the presence of multiple competing campaigns.
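Shapley values for a set function can be approximated by sampling random permutations and averaging each node's marginal contribution, and this works whether or not the function is submodular. The sketch below takes an arbitrary `influence(S)` set function; the paper's actual influence function (diffusion of competing campaigns) is far more elaborate and is not reproduced here:

```python
import random

def shapley_values(nodes, influence, samples=200, seed=0):
    """Monte Carlo Shapley value estimate for set function influence(S)."""
    rng = random.Random(seed)
    phi = {v: 0.0 for v in nodes}
    order = list(nodes)
    for _ in range(samples):
        rng.shuffle(order)
        coalition = set()
        prev = influence(coalition)
        for v in order:            # accumulate marginal contributions
            coalition.add(v)
            cur = influence(coalition)
            phi[v] += cur - prev
            prev = cur
    return {v: phi[v] / samples for v in phi}
```

The top-k nodes for triggering the positive campaign would then be the k nodes with the largest estimated values. For an additive influence function the estimate is exact, which makes a convenient sanity check.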
Abstract:
Subsurface lithology and seismic site classification of Lucknow urban center, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These were carried out through 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and 23 boreholes drilled to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, deeper borehole records (depth >150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand deeper subsoil stratification. Deeper boreholes in this paper refer to those with depth over 150 m. These records show the presence of clay mixed with sand and Kankar at some locations to a depth of 150 m, followed by layers of sand, clay, and Kankar up to 400 m. Based on the available details, shallow and deep cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing. Measured SWV and N-SPT values for the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N-30) and SWV (V-s(30)) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on the NEHRP classification, the entire study area falls into site classes C and D based on V-s(30), and site classes D and E based on N-30. The issue of larger amplification during future seismic events is highlighted for the major part of the study area that falls in site classes D and E. Moreover, the mismatch of site classes based on N-30 and V-s(30) raises the question of the suitability of the NEHRP classification system for the study region.
Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification, and at correlating N-SPT and SWV, in the Indo-Gangetic Basin.
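V-s(30) is the time-averaged shear wave velocity over the top 30 m, Vs30 = 30 / Σ(d_i/v_i), and NEHRP assigns site classes from it (C: 360-760 m/s, D: 180-360 m/s, E: below 180 m/s). A minimal sketch, with a made-up two-layer profile as the example input:

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear wave velocity over the top 30 m."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-9  # layers must total 30 m
    travel_time = sum(d / v for d, v in zip(thicknesses_m, velocities_mps))
    return 30.0 / travel_time

def nehrp_class(vs30_mps):
    """NEHRP site class from Vs30 alone (simplified: ignores the
    special soft-soil criteria for class F)."""
    if vs30_mps > 1500.0:
        return "A"
    if vs30_mps > 760.0:
        return "B"
    if vs30_mps > 360.0:
        return "C"
    if vs30_mps > 180.0:
        return "D"
    return "E"
```

For instance, 10 m at 200 m/s over 20 m at 400 m/s averages to 300 m/s, i.e. class D, the amplification-prone class flagged for much of the study area.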
Abstract:
A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time consuming. To automate the selection of the best segmented document, we either need the ground-truth of the document or an evaluation metric. If ground-truth is available, then precision and recall can be used to choose the best binarized document. But what if ground-truth is not available? Can we devise a metric that evaluates these binarized documents? Hence, we propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the best binarized document, namely the one closest to the ground-truth of the document.
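The ground-truth baseline mentioned above is pixel-wise precision and recall, treating ink pixels as positives. A minimal sketch (images represented as flat lists of 0/255 values, with 0 assumed to be foreground; the eigenvalue-based metric itself is not reproduced here):

```python
def precision_recall(binarized, ground_truth, foreground=0):
    """Pixel-wise precision and recall of a binarized image against
    its ground truth; foreground (ink) pixels are the positives."""
    tp = sum(1 for b, g in zip(binarized, ground_truth)
             if b == foreground and g == foreground)
    fp = sum(1 for b, g in zip(binarized, ground_truth)
             if b == foreground and g != foreground)
    fn = sum(1 for b, g in zip(binarized, ground_truth)
             if b != foreground and g == foreground)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Ranking candidate binarizations by, say, their F-measure on these two numbers is exactly the selection step that becomes impossible without ground truth, motivating the proposed metric.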
Abstract:
We consider the problem of "fair" scheduling of resources to one of many mobile stations by a centrally controlled base station (BS). The BS is the only entity taking decisions in this framework, based on truthful information from the mobiles about their radio channel. We study the well-known family of parametric alpha-fair scheduling problems from a game-theoretic perspective in which some of the mobiles may be noncooperative. We first show that if the BS is unaware of the noncooperative behavior of the mobiles, the noncooperative mobiles succeed in snatching the resources from the cooperative mobiles, resulting in unfair allocations. If the BS is aware of the noncooperative mobiles, a new game arises with the BS as an additional player. It can then do better by neglecting the signals from the noncooperative mobiles. The BS, however, succeeds in eliciting truthful signals from the mobiles only when it uses additional information (signal statistics). This new policy, along with the truthful signals from the mobiles, forms a Nash equilibrium (NE) that we call a Truth Revealing Equilibrium. Finally, we propose new iterative algorithms to implement fair scheduling policies that robustify the otherwise nonrobust (in the presence of noncooperation) alpha-fair scheduling algorithms.
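The alpha-fair family admits a standard gradient scheduler: in each slot the BS serves the user maximizing its reported instantaneous rate divided by its average throughput raised to alpha. The sketch below shows only this per-slot decision and assumes the reported rates are truthful, which is precisely the assumption the paper's game-theoretic analysis relaxes:

```python
def alpha_fair_pick(rates, avg_thr, alpha=1.0):
    """One slot of the alpha-fair gradient scheduler: serve the user
    with the largest rate / avg_throughput**alpha metric.
    alpha=0 is max-rate, alpha=1 is proportional fair, and large
    alpha approaches max-min fairness."""
    n = len(rates)
    return max(range(n), key=lambda i: rates[i] / (avg_thr[i] ** alpha))
```

With reported rates [10, 8] and average throughputs [5, 2], proportional fairness (alpha=1) serves the starved second user, while alpha=0 serves the first; a mobile that inflates its reported rate skews this metric in its favor, which is the unfairness the Truth Revealing Equilibrium is designed to prevent.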