959 results for RESOURCES ALLOCATION
Abstract:
One relatively unexplored question about the Internet's physical structure concerns the geographical location of its components: routers, links and autonomous systems (ASes). We study this question using two large inventories of Internet routers and links, collected by different methods and about two years apart. We first map each router to its geographical location using two different state-of-the-art tools. We then study the relationship between router location and population density; between geographic distance and link density; and between the size and geographic extent of ASes. Our findings are consistent across the two datasets and both mapping methods. First, as expected, router density per person varies widely over different economic regions; however, in economically homogeneous regions, router density shows a strong superlinear relationship to population density. Second, the probability that two routers are directly connected is strongly dependent on distance; our data are consistent with a model in which a majority (up to 75-95%) of link formation is based on geographical distance (as in the Waxman topology generation method). Finally, we find that ASes show high variability in geographic size, which is correlated with other measures of AS size (degree and number of interfaces). Small to medium ASes show wide variability in their geographic dispersal; however, all ASes exceeding a certain threshold in size are maximally dispersed geographically. These findings have many implications for the next generation of topology generators, which we envisage as producing router-level graphs annotated with attributes such as link latencies, AS identifiers and geographical locations.
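For illustration only, a distance-dependent link model in the spirit of the Waxman construction mentioned above can be sketched as follows; the router coordinates and the alpha/beta parameters are placeholder assumptions, not values fitted to the datasets described in this abstract.

```python
import math
import random

def waxman_link_prob(d, L, alpha=0.15, beta=0.6):
    """Waxman-style link probability: P(u, v) = beta * exp(-d / (alpha * L)),
    where d is the distance between u and v and L is the maximum distance
    between any two nodes. alpha and beta here are illustrative, not fitted."""
    return beta * math.exp(-d / (alpha * L))

def generate_links(coords, alpha=0.15, beta=0.6, seed=0):
    """Connect router pairs with probability decreasing in geographic distance."""
    rng = random.Random(seed)
    L = max(math.dist(a, b) for a in coords for b in coords if a != b)
    links = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            if rng.random() < waxman_link_prob(d, L, alpha, beta):
                links.append((i, j))
    return links
```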
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame-rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language and of our type system, which derives costs and resource/data constraints, and (through both formalism and specific details of our implementation) we provide concrete examples of how the constraints and sizing information are used in practice.
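As a loose illustration of the multi-dimensional sized types described above, the following sketch tracks lower and upper resolution bounds on an Image type and checks them at an application site; the names and the check are hypothetical and do not reproduce the actual STEP type system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SizedImageType:
    """An Image type annotated with a vector of size bounds
    (lower/upper width and height), as in a multi-dimensional sized type."""
    min_w: int
    min_h: int
    max_w: int
    max_h: int

def check_application(arg: SizedImageType, param: SizedImageType) -> bool:
    """Static check: every image the argument may produce must satisfy the
    resolution restrictions the consuming function declares on its input."""
    return (arg.min_w >= param.min_w and arg.min_h >= param.min_h and
            arg.max_w <= param.max_w and arg.max_h <= param.max_h)

# Example: a camera source known to emit 640x480..1920x1080 frames feeding a
# detection step that requires at least 320x240 and at most 4096x4096.
camera = SizedImageType(640, 480, 1920, 1080)
detector_input = SizedImageType(320, 240, 4096, 4096)
assert check_application(camera, detector_input)
```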
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme, especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
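A toy sketch of the two-tier idea (per-stream reservations plus one shared reservation that absorbs peaks) is given below; it assumes each stream is summarized by a mean and a peak rate and uses an arbitrary slack fraction, whereas the scheme described above uses three per-stream parameters and analytically derived drop-probability bounds that are not reproduced here.

```python
def two_tier_allocation(streams, slack_fraction=0.3):
    """Toy two-tier reservation: each stream gets a per-stream reservation at
    (roughly) its mean rate, and all streams share one aggregate reservation
    sized as a fraction of their summed peak excess, relying on statistical
    multiplexing of peaks. 'slack_fraction' is an illustrative knob, not an
    analytically derived value.

    streams: list of (mean_rate, peak_rate) tuples in Mbit/s.
    Returns (per_stream_reservations, shared_reservation)."""
    per_stream = [mean for mean, _peak in streams]
    peak_excess = sum(peak - mean for mean, peak in streams)
    shared = slack_fraction * peak_excess
    return per_stream, shared

streams = [(2.0, 6.0), (1.5, 5.0), (3.0, 7.0)]           # (mean, peak) rates in Mbit/s
per_stream, shared = two_tier_allocation(streams)
two_tier_total = sum(per_stream) + shared                 # 6.5 + 0.3 * 11.5 = 9.95 Mbit/s
peak_rate_total = sum(peak for _mean, peak in streams)    # 18.0 Mbit/s, naive peak reservation
print(f"two-tier: {two_tier_total:.2f} Mbit/s vs per-stream peak: {peak_rate_total:.1f} Mbit/s")
```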
Abstract:
As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense and respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged to many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints, (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain, (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language, and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
Abstract:
Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially-hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that our NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than what any of the existing techniques or services are able to handle.
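The abstract does not give NETEMBED's data structures, but the general idea of pruning candidate mappings before searching can be sketched as a small backtracking procedure; all names and constraint predicates below are illustrative assumptions, not the actual NETEMBED algorithm.

```python
def embed(query_nodes, query_links, host_nodes, node_ok, link_ok):
    """Find one mapping of query nodes onto distinct host nodes such that each
    query node's resource constraints (node_ok) and each query link's
    constraints (link_ok) are satisfied. Candidate lists are pruned up front,
    and the most constrained query node is assigned first.
    query_links is assumed to contain each undirected link in both orientations."""
    candidates = {q: [h for h in host_nodes if node_ok(q, h)] for q in query_nodes}
    order = sorted(query_nodes, key=lambda q: len(candidates[q]))

    def backtrack(i, mapping):
        if i == len(order):
            return dict(mapping)
        q = order[i]
        for h in candidates[q]:
            if h in mapping.values():
                continue  # each host node hosts at most one query node
            # Only check links whose other endpoint is already mapped.
            if all(link_ok((q, r), (h, mapping[r]))
                   for (a, r) in query_links if a == q and r in mapping):
                mapping[q] = h
                result = backtrack(i + 1, mapping)
                if result is not None:
                    return result
                del mapping[q]
        return None

    return backtrack(0, {})
```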
Abstract:
Although Common Pool Resources (CPRs) make up a significant share of total income for rural households in Ethiopia and elsewhere in the developing world, limited access to these resources and environmental degradation threaten local livelihoods. As a result, the issues of management and governance of CPRs, and of how to prevent their over-exploitation, are of great importance for development policy. This study examines the current state and dynamics of CPRs and the overall resource governance system of the Lake Tana sub-basin. The research employed a modified form of the Institutional Analysis and Development (IAD) framework. The framework integrates the concepts of Socio-Ecological Systems (SES) and Interactive Governance (IG), in which social actors, institutions, the politico-economic context, discourses and ecological features across governance and government levels were considered. It has been observed that overexploitation, degradation and encroachment of CPRs have increased dramatically, and this threatens the sustainability of the Lake Tana ecosystem. The stakeholder analysis reveals that there are multiple stakeholders with diverse interests in and power over CPRs. The analysis of institutional arrangements reveals that the existing formal rules and regulations governing access to and control over CPRs could not be implemented and were not effective in legally binding and governing CPR users' behavior at the operational level. The study also shows that top-down, non-participatory processes of policy formulation, law making and decision making overlook local contexts (local knowledge and informal institutions). The findings on the participation of local resource users, examined as an alternative to a centralized, command-and-control, hierarchical approach to resource management and governance, call for a fundamental shift in CPR use, management and governance to facilitate the participation of stakeholders in decision making. Therefore, establishing a multi-level stakeholder governance system as an institutional structure and process is necessary to sustain stakeholder participation in decision-making regarding CPR use, management and governance.
Abstract:
The mobile cloud computing model promises to address the resource limitations of mobile devices, but effectively implementing this model is difficult. Previous work on mobile cloud computing has required the user to have a continuous, high-quality connection to the cloud infrastructure. This is undesirable and possibly infeasible, as the energy required on the mobile device to maintain a connection and transfer sizeable amounts of data is large, and the bandwidth tends to be quite variable and low on cellular networks. The cloud deployment itself also needs to allocate scalable resources to the user efficiently. In this paper, we formulate best practices for efficiently managing the resources required for the mobile cloud model, namely energy, bandwidth and cloud computing resources. These practices can be realised with our mobile cloud middleware project, featuring the Cloud Personal Assistant (CPA). We compare this with other approaches in the area to highlight the importance of minimising the usage of these resources, and thereby ensuring successful adoption of the model by end users. Based on results from experiments performed with mobile devices, we develop a no-overhead decision model for task and data offloading to a user's CPA, which provides efficient management of mobile cloud resources.
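The decision model itself is not detailed in the abstract; a heavily simplified sketch of the kind of energy/time trade-off such a model weighs (offload to the CPA only if the transfer is cheaper in energy and no slower end to end) might look like the following, with all constants being placeholder assumptions rather than measured values from the experiments.

```python
def should_offload(input_bytes, local_energy_j, bandwidth_bps,
                   tx_power_w=1.2, cloud_speedup=5.0, local_time_s=1.0):
    """Offload to the CPA if the estimated energy to transfer the task's input
    is below the energy to execute locally, and the end-to-end time does not
    get worse. All parameters are illustrative placeholders."""
    transfer_time_s = 8.0 * input_bytes / bandwidth_bps
    transfer_energy_j = tx_power_w * transfer_time_s
    remote_time_s = transfer_time_s + local_time_s / cloud_speedup
    return transfer_energy_j < local_energy_j and remote_time_s <= local_time_s

# Example: 2 MB of input on a 5 Mbit/s cellular link, 8 J and 10 s to compute locally.
print(should_offload(2_000_000, local_energy_j=8.0, bandwidth_bps=5_000_000,
                     local_time_s=10.0))   # -> True
```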
Abstract:
This paper analyzes a class of common-component allocation rules, termed no-holdback (NHB) rules, in continuous-review assemble-to-order (ATO) systems with positive lead times. The inventory of each component is replenished following an independent base-stock policy. In contrast to the usually assumed first-come-first-served (FCFS) component allocation rule in the literature, an NHB rule allocates a component to a product demand only if it will yield immediate fulfillment of that demand. We identify metrics as well as cost and product structures under which NHB rules outperform all other component allocation rules. For systems with certain product structures, we obtain key performance expressions and compare them to those under FCFS. For general product structures, we present performance bounds and approximations. Finally, we discuss the applicability of these results to more general ATO systems. © 2010 INFORMS.
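A minimal sketch of the contrast between FCFS and a no-holdback rule is given below, under simplified assumptions about how demands and component inventories are represented (the representation is illustrative, not the paper's model).

```python
def allocate_nhb(inventory, backlog):
    """No-holdback allocation sketch: scan backlogged product demands and
    commit components only to demands whose full kit is available now, so no
    component sits reserved against a demand that still waits on other parts.

    inventory: dict component -> units on hand.
    backlog: list of dicts, each mapping component -> units required by one demand.
    Returns the list of demands fulfilled in this pass."""
    fulfilled = []
    for demand in backlog:
        if all(inventory.get(c, 0) >= q for c, q in demand.items()):
            for c, q in demand.items():
                inventory[c] -= q
            fulfilled.append(demand)
    return fulfilled

# Under FCFS, the oldest demand would instead reserve whatever components are
# on hand even if it cannot ship yet, potentially blocking later demands that
# could have been fulfilled immediately.
inv = {"A": 1, "B": 0}
backlog = [{"A": 1, "B": 1},   # oldest demand, missing component B
           {"A": 1}]           # newer demand, can ship now under NHB
print(allocate_nhb(inv, backlog))   # -> [{'A': 1}]
```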
Abstract:
Through an examination of global climate change models combined with hydrological data on deteriorating water quality in the Middle East and North Africa (MENA), we elucidate the ways in which the MENA countries are vulnerable to climate-induced impacts on water resources. Adaptive governance strategies, however, remain a low priority for political leaderships in the MENA region. To date, most MENA governments have concentrated the bulk of their resources on large-scale supply side projects such as desalination, dam construction, inter-basin water transfers, tapping fossil groundwater aquifers, and importing virtual water. Because managing water demand, improving the efficiency of water use, and promoting conservation will be key ingredients in responding to climate-induced impacts on the water sector, we analyze the political, economic, and institutional drivers that have shaped governance responses. While the scholarly literature emphasizes the importance of social capital to adaptive governance, we find that many political leaders and water experts in the MENA region rarely engage societal actors in considering water risks. We conclude that the key capacities for adaptive governance to water scarcity in MENA are underdeveloped. © 2010 Springer Science+Business Media B.V.
Abstract:
BACKGROUND: Living related kidney transplantation (LRT) is underutilized, particularly among African Americans. The effectiveness of informational and financial interventions to enhance informed decision-making among African Americans with end stage renal disease (ESRD) and improve rates of LRT is unknown. METHODS/DESIGN: We report the protocol of the Providing Resources to Enhance African American Patients' Readiness to Make Decisions about Kidney Disease (PREPARED) Study, a two-phase study utilizing qualitative and quantitative research methods to design and test the effectiveness of informational (focused on shared decision-making) and financial interventions to overcome barriers to pursuit of LRT among African American patients and their families. Study Phase I involved the evidence-based development of informational materials as well as a financial intervention to enhance African American patients' and families' proficiency in shared decision-making regarding LRT. In Study Phase II, we are currently conducting a randomized controlled trial in which patients with new-onset ESRD receive 1) usual dialysis care by their nephrologists, 2) the informational intervention (educational video and handbook), or 3) the informational intervention in addition to the option of participating in a live kidney donor financial assistance program. The primary outcome of the randomized controlled trial will include patients' self-reported rates of consideration of LRT (including family discussions of LRT, patient-physician discussions of LRT, and identification of a LRT donor). DISCUSSION: Results from the PREPARED study will provide needed evidence on ways to enhance the decision to pursue LRT among African American patients with ESRD.
Abstract:
Pharmacogenomics (PGx) offers the promise of utilizing genetic fingerprints to predict individual responses to drugs in terms of safety, efficacy and pharmacokinetics. Early-phase clinical trial PGx applications can identify human genome variations that are meaningful to study design, selection of participants, allocation of resources and clinical research ethics. Results can inform later-phase study design and pipeline developmental decisions. Nevertheless, our review of the clinicaltrials.gov database demonstrates that PGx is rarely used by drug developers. Of the total 323 trials that included PGx as an outcome, 80% have been conducted by academic institutions after initial regulatory approval. Barriers for the application of PGx are discussed. We propose a framework for the role of PGx in early-phase drug development and recommend PGx be universally considered in study design, result interpretation and hypothesis generation for later-phase studies, but PGx results from underpowered studies should not be used by themselves to terminate drug-development programs.
Abstract:
The effectiveness of vaccinating males against the human papillomavirus (HPV) remains a controversial subject. Many existing studies conclude that increasing female coverage is more effective than diverting resources into male vaccination. Recently, several empirical studies on HPV immunization have been published, providing evidence that marginal vaccination costs increase with coverage. In this study, we use a stochastic agent-based modeling framework to revisit the male vaccination debate in light of these new findings. Within this framework, we assess the impact of coverage-dependent marginal costs of vaccine distribution on optimal immunization strategies against HPV. Focusing on the two scenarios of ongoing and new vaccination programs, we analyze different resource allocation policies and their effects on overall disease burden. Our results suggest that if the costs associated with vaccinating males are relatively close to those associated with vaccinating females, then coverage-dependent, increasing marginal costs may favor vaccination strategies that entail immunization of both genders. In particular, this study emphasizes the necessity for further empirical research on the nature of coverage-dependent vaccination costs.
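The abstract's central economic premise, that marginal vaccination costs rise with coverage, can be illustrated with a toy cost function; the functional form and constants below are illustrative assumptions, not quantities taken from the study or its agent-based model.

```python
def marginal_cost(coverage, base_cost=100.0, steepness=3.0):
    """Toy coverage-dependent marginal cost: the cost of vaccinating one more
    person grows as coverage approaches 1, reflecting that the remaining
    unvaccinated individuals are harder (costlier) to reach. The functional
    form and constants are illustrative assumptions only."""
    return base_cost / (1.0 - coverage) ** (1.0 / steepness)

# Cost of the "next" vaccinee at increasing coverage levels.
for c in (0.1, 0.5, 0.8, 0.95):
    print(f"coverage {c:>4.0%}: marginal cost ~ {marginal_cost(c):6.1f}")
```

With costs of this shape, pushing coverage in a single group ever higher becomes increasingly expensive per additional vaccinee, which is the regime in which the abstract notes that immunizing both genders may become favourable.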