38 results for Resource Dilemma
Abstract:
In a computational grid, the presence of grid resource providers who are rational and intelligent could lead to an overall degradation in the efficiency of the grid. In this paper, we design incentive compatible grid resource procurement mechanisms which ensure that the efficiency of the grid is not affected by the rational behavior of resource providers. In particular, we offer three elegant incentive compatible mechanisms for this purpose: (1) the G-DSIC (Grid-Dominant Strategy Incentive Compatible) mechanism; (2) the G-BIC (Grid-Bayesian Nash Incentive Compatible) mechanism; and (3) the G-OPT (Grid-Optimal) mechanism, which minimizes the cost to the grid user while simultaneously satisfying (a) Bayesian incentive compatibility and (b) individual rationality. We evaluate the relative merits and demerits of these three mechanisms using game-theoretic analysis and numerical experiments.
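The abstract does not spell out the G-DSIC mechanism itself, but the textbook example of a dominant-strategy incentive compatible procurement rule is the second-price (Vickrey) reverse auction, sketched below with hypothetical provider names and costs:

```python
def second_price_procurement(bids):
    """Pick the lowest-cost provider and pay it the second-lowest bid.

    This is the classic Vickrey rule: a provider's payment never
    depends on its own bid, only on whether it wins, so truthful
    cost reporting is a dominant strategy.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner = ranked[0][0]
    payment = ranked[1][1]  # second-lowest reported cost
    return winner, payment

# Three providers report costs for a grid job (hypothetical values)
winner, payment = second_price_procurement({"p1": 10.0, "p2": 7.0, "p3": 12.0})
```

Here "p2" wins with the lowest bid of 7.0 but is paid 10.0, the second-lowest bid, which is what removes the incentive to misreport.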
Abstract:
In this paper, we study how TCP and UDP flows interact with each other when the end system is a CPU-resource-constrained thin client. The problem addressed is twofold: (1) the throughput of TCP flows degrades severely in the presence of heavily loaded UDP flows, and (2) fairness and minimum QoS requirements of UDP are not maintained. First, we identify the factors affecting TCP throughput by providing an in-depth analysis of end-to-end delay and packet loss variations. The results obtained from the first part lead us to our second contribution. We propose and study an algorithm that ensures fairness across flows. The algorithm improves the performance of TCP flows in the presence of multiple UDP flows admitted under an admission algorithm, and maintains the minimum QoS requirements of the UDP flows. The advantage of the algorithm is that it requires no changes to the TCP/IP stack; control is achieved through receiver window control.
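The paper's exact algorithm is not given in the abstract; a minimal sketch of the receiver-window-control idea is a bandwidth-delay-product cap, where each TCP flow's advertised receive window is sized to its fair share of whatever capacity UDP's QoS reservation leaves over (all parameter values below are hypothetical):

```python
def receiver_window_bytes(link_bps, udp_reserved_bps, rtt_s, n_tcp_flows):
    """Cap each TCP flow's advertised receive window so the TCP flows
    share only the capacity left after the UDP QoS reservation.

    A bandwidth-delay-product sketch of receiver window control,
    not the paper's exact algorithm.
    """
    residual_bps = max(link_bps - udp_reserved_bps, 0)
    per_flow_bps = residual_bps / n_tcp_flows
    return int(per_flow_bps * rtt_s / 8)  # bits -> bytes

# 10 Mb/s link, 4 Mb/s reserved for UDP, 50 ms RTT, 3 TCP flows
rwnd = receiver_window_bytes(10_000_000, 4_000_000, 0.05, 3)
```

Because the receiver simply advertises a smaller window, the sender's standard TCP flow control throttles the flow with no stack modification, which matches the deployment advantage the abstract claims.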
Abstract:
A resource interaction based game-theoretic model for military conflicts is presented in this paper. The model includes both the spatial decision capability of adversaries (decisions regarding movement and subsequent distribution of resources) and their temporal decision capability (decisions regarding the level of allocation of resources for conflict with the adversary's resources). Attrition is presently decided by simple deterministic models. An additional feature of this model is the inclusion of the possibility of a given resource interacting with several resources of the adversary. The decisions of the adversaries are determined by solving for the equilibrium Nash strategies, given that the objectives of the adversaries may not be in direct conflict. Examples are given to show the applicability of these models and solution concepts.
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. We obtain simple necessary and sufficient conditions on prices such that efficient and fair link sharing is possible. We illustrate the ideas using a simple example.
Abstract:
The problem of finding optimal parameterized feedback policies for dynamic bandwidth allocation in communication networks is studied. We consider a queueing model with two queues to which traffic from different competing flows arrives. The queue length at the buffers is observed every T instants of time, on the basis of which a decision on the amount of bandwidth to be allocated to each buffer for the next T instants is made. We consider two different classes of multilevel closed-loop feedback policies for the system and use a two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm to find optimal policies within each prescribed class. We study the performance of the proposed algorithm in a numerical setting and show performance comparisons of the two optimal multilevel closed-loop policies with optimal open-loop policies. We observe that closed-loop policies of Class B, which tune parameters for both queues and do not have the constraint that the entire bandwidth be used at each instant, exhibit the best results overall, as they offer greater flexibility in parameter tuning. Index Terms — Resource allocation, dynamic bandwidth allocation in communication networks, two-timescale SPSA algorithm, optimal parameterized policies.
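The SPSA idea named in the abstract can be illustrated with a minimal single-timescale sketch: one random ±1 perturbation direction yields a gradient estimate from just two loss evaluations regardless of the parameter dimension. The queueing simulation is replaced here by a simple quadratic loss, and the step-size exponents 0.602 and 0.101 are the standard SPSA choices, not necessarily the paper's:

```python
import random

def spsa_step(theta, loss, a_k, c_k):
    """One SPSA update: perturb all coordinates simultaneously along a
    random +/-1 direction and estimate the gradient from two
    evaluations of the (possibly simulated) loss."""
    delta = [random.choice((-1.0, 1.0)) for _ in theta]
    plus = [t + c_k * d for t, d in zip(theta, delta)]
    minus = [t - c_k * d for t, d in zip(theta, delta)]
    g = (loss(plus) - loss(minus)) / (2.0 * c_k)
    return [t - a_k * g / d for t, d in zip(theta, delta)]

# Stand-in for the simulated queueing cost: a simple quadratic bowl
random.seed(0)
theta = [4.0, -3.0]
for k in range(1, 2001):
    theta = spsa_step(theta, lambda x: sum(v * v for v in x),
                      a_k=0.1 / k ** 0.602, c_k=0.1 / k ** 0.101)
```

After a few thousand iterations the parameters settle near the minimizer at the origin; the two-evaluation gradient estimate is what makes SPSA attractive when each loss evaluation is an expensive simulation.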
Abstract:
Traffic engineering has been a prime concern for Internet Service Providers (ISPs), with the main focus being minimizing over-utilization of network capacity even while additional, under-utilized capacity is available. Furthermore, the requirement of timely delivery of digitized audiovisual information raises the new challenge of finding a path that meets these requirements. This paper addresses the issues of (a) distributing load to achieve global efficiency in resource utilization, and (b) finding a path satisfying the real-time delay and bandwidth requirements requested by applications. We critically study link utilization as it varies over time and determine the time intervals during which link occupancy remains constant across days. This information helps in pre-determining link utilization, which is useful in balancing load in the network. Finally, we run simulations that use a dynamic time interval for profiling traffic and show improvement in terms of the number of calls admitted/blocked.
Abstract:
We consider a network in which several service providers offer wireless access to their respective subscribed customers through potentially multihop routes. If providers cooperate by jointly deploying and pooling their resources, such as spectrum and infrastructure (e.g., base stations), and agree to serve each other's customers, their aggregate payoffs, and individual shares, may substantially increase through opportunistic utilization of resources. The potential of such cooperation can, however, be realized only if each provider intelligently determines with whom it would cooperate, when it would cooperate, and how it would deploy and share its resources during such cooperation. Also, developing a rational basis for sharing the aggregate payoffs is imperative for the stability of the coalitions. We model such cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings, i.e., if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff, while offering each a share that removes any incentive to split from the coalition. The optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time by respectively solving the primals and the duals of the above optimizations, using distributed computations and limited exchange of confidential information among the providers. Numerical evaluations reveal that cooperation substantially enhances individual providers' payoffs under the optimal cooperation strategy and several different payoff sharing rules.
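The paper derives its payoff shares from the duals of its optimizations; a different, classic sharing rule for transferable-utility coalitional games is the Shapley value, sketched below on a hypothetical two-provider game to illustrate what "a rational basis for sharing the aggregate payoffs" means concretely:

```python
import math
from itertools import permutations

def shapley(players, v):
    """Shapley value: each player's marginal contribution to the
    coalition, averaged over all orders in which players could join."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: x / n_orders for p, x in phi.items()}

# Hypothetical 2-provider game: pooling resources is superadditive,
# so the grand coalition's worth (5) exceeds the sum of singletons (3)
worth = {frozenset(): 0, frozenset({"A"}): 1,
         frozenset({"B"}): 2, frozenset({"A", "B"}): 5}
shares = shapley(["A", "B"], lambda S: worth[frozenset(S)])
```

The resulting split (A gets 2, B gets 3) gives each provider at least its stand-alone worth, so neither has an incentive to leave the coalition, which is the stability property the abstract emphasizes.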
Abstract:
Background & objectives: There is a need to develop an affordable and reliable tool for hearing screening of neonates in resource constrained, medically underserved areas of developing nations. This study evaluates a strategy of health worker based screening of neonates using a low cost mechanical calibrated noisemaker, followed up with parental monitoring of age appropriate auditory milestones, for detecting severe-profound hearing impairment in infants by 6 months of age. Methods: A trained health worker under the supervision of a qualified audiologist screened 425 neonates, of whom 20 had confirmed severe-profound hearing impairment. Mechanical calibrated noisemakers of 50, 60, 70 and 80 dB (A) were used to elicit the behavioural responses. The parents of screened neonates were instructed to monitor the normal language and auditory milestones till 6 months of age. This strategy was validated against the reference standard consisting of a battery of tests, namely, auditory brain stem response (ABR), otoacoustic emissions (OAE) and behavioural assessment at 2 years of age. Bayesian prevalence weighted measures of screening were calculated. Results: The sensitivity and specificity were high, with the fewest false positive referrals for the 70 and 80 dB (A) noisemakers. All the noisemakers had 100 per cent negative predictive value. The 70 and 80 dB (A) noisemakers had high positive likelihood ratios of 19 and 34, respectively. The probability differences for pre- and post-test positive were 43 and 58 for the 70 and 80 dB (A) noisemakers, respectively. Interpretation & conclusions: In a controlled setting, health workers with primary education can be trained to use a mechanical calibrated noisemaker made of locally available material to reliably screen for severe-profound hearing loss in neonates. The monitoring of auditory responses could be done by informed parents. Multi-centre field trials of this strategy need to be carried out to examine the feasibility of community health care workers using it in resource constrained settings of developing nations to implement an effective national neonatal hearing screening programme.
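The positive likelihood ratios reported above follow from the standard screening formula LR+ = sensitivity / (1 − specificity). The abstract does not state the underlying sensitivity/specificity figures, so the values below are illustrative only:

```python
def positive_lr(sensitivity, specificity):
    # LR+ = P(test positive | impaired) / P(test positive | not impaired)
    return sensitivity / (1.0 - specificity)

# Illustrative values only: a 0.95/0.95 sensitivity/specificity pair
# reproduces the LR+ of 19 reported for the 70 dB(A) noisemaker; the
# abstract does not state the actual underlying figures.
lr_70 = positive_lr(0.95, 0.95)
```

A likelihood ratio this large means a positive screen raises the post-test probability of impairment substantially above the pre-test (prevalence-based) probability, which is what the reported 43- and 58-point probability differences capture.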
Abstract:
This paper shows how multidisciplinary research can help policy makers develop policies for sustainable agricultural water management interventions by supporting a dialogue between government departments that are in charge of different aspects of agricultural development. In the Jaldhaka Basin in West Bengal, India, a stakeholder dialogue helped identify potential water resource impacts and livelihood implications of an agricultural water management rural electrification scenario. Hydrologic modelling demonstrated that the expansion of irrigation is possible with only a localized effect on groundwater levels, but cascading effects such as declining soil fertility and negative impacts from agrochemicals will need to be addressed.
Abstract:
Heat shock protein information resource (HSPIR) is a concerted database of six major heat shock proteins (HSPs), namely, Hsp70, Hsp40, Hsp60, Hsp90, Hsp100 and small HSP. The HSPs are essential for the survival of all living organisms, as they protect the conformations of proteins on exposure to various stress conditions. They are a highly conserved group of proteins involved in diverse physiological functions, including de novo folding, disaggregation and protein trafficking. Moreover, their critical role in the control of disease progression made them a prime target of research. Presently, limited information is available on HSPs in reference to their identification and structural classification across genera. To that extent, HSPIR provides manually curated information on sequence, structure, classification, ontology, domain organization, localization and possible biological functions extracted from UniProt, GenBank, Protein Data Bank and the literature. The database offers interactive search with incorporated tools, which enhances the analysis. HSPIR is a reliable resource for researchers exploring structure, function and evolution of HSPs.
Abstract:
Monitoring of infrastructural resources in clouds plays a crucial role in providing application guarantees like performance, availability, and security. Monitoring is crucial from two perspectives: that of the cloud user and that of the service provider. The cloud user's interest is in doing an analysis to arrive at appropriate Service-Level Agreement (SLA) demands, and the cloud provider's interest is to assess whether the demand can be met. To support this, a monitoring framework is necessary, particularly since cloud hosts are subject to varying load conditions. To illustrate the importance of such a framework, we choose the example of performance as the Quality of Service (QoS) requirement and show how inappropriate provisioning of resources may lead to unexpected performance bottlenecks. We evaluate existing monitoring frameworks to bring out the motivation for building much more powerful monitoring frameworks. We then propose a distributed monitoring framework, which enables fine-grained monitoring for applications, and demonstrate it with a prototype system implementation for typical use cases.
Abstract:
In animal populations, the constraints of energy and time can cause intraspecific variation in foraging behaviour. The proximate developmental mediators of such variation are often the mechanisms underlying perception and associative learning. Here, experience-dependent changes in foraging behaviour and their consequences were investigated in an urban population of free-ranging dogs, Canis familiaris, by continually challenging them with the task of food extraction from specially crafted packets. Typically, males and pregnant/lactating (PL) females extracted food using the sophisticated 'gap widening' technique, whereas non-pregnant/non-lactating (NPNL) females used the relatively underdeveloped 'rip opening' technique. In contrast to most males and PL females (and a few NPNL females) that repeatedly used the gap widening technique and improved their performance in food extraction with experience, most NPNL females (and a few males and PL females) non-preferentially used the two extraction techniques and did not improve over successive trials. Furthermore, the ability of dogs to sophisticatedly extract food was positively related to their ability to improve their performance with experience. Collectively, these findings demonstrate that factors such as sex and physiological state can cause differences among individuals in the likelihood of learning new information and hence, in the rate of resource acquisition and monopolization.
Abstract:
In recent times, crowdsourcing over social networks has emerged as an active tool for complex task execution. In this paper, we address the problem faced by a planner to incentivize agents in the network to execute a task and also help in recruiting other agents for this purpose. We study this mechanism design problem under two natural resource optimization settings: (1) cost critical tasks, where the planner's goal is to minimize the total cost, and (2) time critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We define a set of fairness properties that should be ideally satisfied by a crowdsourcing mechanism. We prove that no mechanism can satisfy all these properties simultaneously. We relax some of these properties and define their approximate counterparts. Under appropriate approximate fairness criteria, we obtain a non-trivial family of payment mechanisms. Moreover, we provide precise characterizations of cost critical and time critical mechanisms.
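The family of mechanisms characterized in the paper is not described in the abstract; a well-known example of a referral payment scheme in this space is the geometric "split contract" popularized by the winning team of the DARPA Network Challenge, sketched below for illustration only:

```python
def split_contract_payments(chain_length, task_reward):
    """Geometric split contract: the agent who executes the task
    receives task_reward, and each recruiter up the referral chain
    receives half of the payment made one level below it.

    Illustrative only -- not the family of mechanisms characterized
    in the paper.
    """
    payments, r = [], float(task_reward)
    for _ in range(chain_length):
        payments.append(r)
        r /= 2.0
    return payments

# Executor plus two recruiters above it in the referral chain
payments = split_contract_payments(3, 1000.0)
```

Because the payments halve at each level, the planner's total cost stays below twice the task reward no matter how long the recruitment chain grows, which is the kind of cost guarantee a cost-critical mechanism must provide.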
Abstract:
Realization of cloud computing has been possible due to the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many of the prevalent virtualization technologies like Xen, VMware, KVM, etc., use host-specific CPU usage monitoring, which is coarse grained. In this paper, we present a performance monitoring tool for KVM based virtual machines, which measures the CPU overhead incurred by the hypervisor on behalf of the virtual machine along with the CPU usage of the virtual machine itself. This fine-grained resource usage information, provided by the above tool, can be used in diverse situations like resource provisioning to support performance-related QoS requirements, identification of bottlenecks during VM placement, resource profiling of applications in cloud environments, etc. We demonstrate a use case of this tool by measuring the performance of web servers hosted on a KVM based virtualized server.
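The paper's tool is not described in detail in the abstract; the basic ingredient of host-side CPU accounting on Linux is reading a process's cumulative user and system time from `/proc/<pid>/stat`, sketched below. For a KVM guest, `<pid>` would be the QEMU process, and per-vCPU threads appear under `/proc/<pid>/task/` (illustrative sketch, Linux only):

```python
import os

def cpu_jiffies(pid):
    """Cumulative user+system CPU time (in jiffies) of a process,
    read from /proc/<pid>/stat. Illustrative sketch, Linux only."""
    with open(f"/proc/{pid}/stat") as f:
        # Split after the ')' closing the comm field, which may itself
        # contain spaces; utime and stime are then fields 11 and 12.
        rest = f.read().rsplit(")", 1)[1].split()
    return int(rest[11]) + int(rest[12])

# Sample this process's own counter; differencing two samples taken
# over an interval yields a CPU utilization estimate.
jiffies = cpu_jiffies(os.getpid())
```

Sampling such counters for both the guest's vCPU threads and the QEMU I/O threads is one way to separate the guest's own CPU usage from hypervisor overhead incurred on its behalf.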