773 results for Denial of Service (DoS)


Relevance: 90.00%

Abstract:

This paper explores the organisational experiences of governmental policy change and implementation in the third sector. Using a four-year longitudinal study of 13 third sector organisations (TSOs), it provides evidence based on the experiences of, and effects on, third sector organisations involved in the UK's Work Programme in Scotland. The paper explores third sector experiences of the Work Programme during the preparation and introductory phase, as well as the effects of its subsequent implementation. By gathering evidence contemporaneously and longitudinally, it provides a unique in-depth analysis of the introduction and implementation of a major new policy. The resource costs and challenges to third sector ways of working are considered for organisations both inside and outside the Work Programme supply chain. The paper considers some of the responses adopted by the third sector to manage the opportunities and challenges presented by the implementation of the Work Programme. It also reflects on the broader context of the employability services landscape and asks whether, as a result of the manner in which the Work Programme was contracted, there is evidence of a move towards service homogenisation, challenging perceived TSO characteristics of service innovation and personalisation.

Relevance: 90.00%

Abstract:

Bain, William, 'In Praise of Folly: International Administration and the Corruption of Humanity', International Affairs 82(3) (2006), pp. 525-538. RAE2008

Relevance: 90.00%

Abstract:

Yang, Y., Humphreys, P., & McIvor, R. (2006). Business service quality in an e-commerce environment. Supply Chain Management: An International Journal, 11(3), 195-201. RAE2008

Relevance: 90.00%

Abstract:

Current research on Internet-based distributed systems emphasizes the scalability of overlay topologies for efficient search and retrieval of data items, as well as routing amongst peers. However, most existing approaches fail to address the transport of data across these logical networks in accordance with quality of service (QoS) constraints. This paper therefore investigates the use of scalable overlay topologies for routing real-time media streams between publishers and potentially many thousands of subscribers. Specifically, we analyze the costs of using k-ary n-cubes for QoS-constrained routing. Given a number of nodes in a distributed system, we calculate the optimal k-ary n-cube structure for minimizing the average distance between any pair of nodes. Using this structure, we describe a greedy algorithm that selects paths between nodes in accordance with the real-time delays along physical links. We show that this method improves routing latency by as much as 67% compared to approaches that do not consider physical link costs. We are in the process of developing a method for adaptive node placement in the overlay topology, based upon the locations of publishers and subscribers, physical link costs, and per-subscriber QoS constraints. We discuss one such method for repositioning nodes in logical space to improve the likelihood of meeting service requirements on data routed between publishers and subscribers. Future work will evaluate the benefits of such techniques more thoroughly.
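As a concrete illustration of the structure-selection step, the sketch below (Python, written for this summary rather than taken from the paper) enumerates k-ary n-cube configurations for a given node count and picks the one minimizing the average pairwise hop distance, assuming wrap-around (torus) links so that each dimension contributes the average distance of a k-node ring, and relaxing the size constraint to k**n >= num_nodes:

```python
def ring_avg_distance(k: int) -> float:
    """Average minimal hop distance between two nodes on a k-node ring
    (averaged over all ordered pairs, including a node and itself)."""
    return sum(min(d, k - d) for d in range(k)) / k

def best_kary_ncube(num_nodes: int, k_max: int = 64) -> tuple:
    """Pick the (k, n) with k**n >= num_nodes that minimizes the average
    inter-node distance, which for a torus is n * ring_avg_distance(k)."""
    best = None
    for k in range(2, k_max + 1):
        n = 1
        while k ** n < num_nodes:
            n += 1
        avg = n * ring_avg_distance(k)
        if best is None or avg < best[0]:
            best = (avg, k, n)
    avg, k, n = best
    return k, n, avg

# Example: choose a structure for roughly 4096 overlay nodes.
k, n, avg = best_kary_ncube(4096)
print(f"k={k}, n={n}: average distance ~{avg:.2f} hops")
```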

Relevance: 90.00%

Abstract:

In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in computational geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions rely on assumptions that are unrealistic for cameras (unlimited field of view, infinite depth of field, and/or infinite servo precision and speed), which makes the algorithms unsuitable for many real-world computer vision applications. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. Preliminary experiments demonstrate the performance of the resulting system on several real floorplans.
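The paper formulates camera layout as binary optimization; the sketch below is not that formulation but a simpler greedy set-cover heuristic over a discretized floorplan, included only to make the problem shape concrete. The pose names and cell sets are hypothetical, and candidate poses are assumed to come pre-filtered by realistic field-of-view, depth-of-field, and occlusion checks:

```python
from typing import Dict, List, Set

def greedy_camera_layout(coverage: Dict[str, Set[int]],
                         targets: Set[int]) -> List[str]:
    """Repeatedly pick the candidate camera pose that observes the most
    still-uncovered floorplan cells until everything observable is covered."""
    uncovered = set(targets)
    layout: List[str] = []
    while uncovered:
        pose, cells = max(coverage.items(),
                          key=lambda kv: len(kv[1] & uncovered))
        if not cells & uncovered:
            break  # remaining cells are occluded from every candidate pose
        layout.append(pose)
        uncovered -= cells
    return layout

# Toy floorplan of six grid cells and three hypothetical candidate poses.
poses = {"cam_A": {0, 1, 2}, "cam_B": {2, 3}, "cam_C": {3, 4, 5}}
print(greedy_camera_layout(poses, targets={0, 1, 2, 3, 4, 5}))  # ['cam_A', 'cam_C']
```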

Relevance: 90.00%

Abstract:

To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms have been proposed that allow the needed bandwidth to be reserved over a Virtual Circuit (VC) established on one of several candidate routes. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we established the inadequacy of this load-balancing practice and proposed the use of load profiling as an alternative. Load profiling techniques distribute the "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it with routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that for routing guaranteed-bandwidth flows in VP networks, load balancing is not desirable, as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large, as typically occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs as well as or better than traditional load-balancing VC routing in terms of revenue, under both skewed and uniform workloads. Furthermore, load-profiling routing improves fairness by proactively increasing the chances of admitting high-bandwidth connections.
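To make the fragmentation argument concrete, here is a small sketch (not the paper's load-profiling scheme, which matches the distribution of available bandwidth to the profile of incoming VC requests) contrasting least-loaded route selection with a best-fit packing rule; under balancing, two medium VCs fragment both virtual paths and a later high-bandwidth VC is blocked:

```python
def least_loaded(spare: dict, demand: float):
    """Load balancing: pick the candidate route with the most spare bandwidth."""
    route = max(spare, key=spare.get)
    return route if spare[route] >= demand else None

def best_fit(spare: dict, demand: float):
    """Packing-style choice: pick the feasible route with the least spare
    bandwidth, preserving contiguous capacity for future large requests."""
    feasible = [r for r in spare if spare[r] >= demand]
    return min(feasible, key=spare.get) if feasible else None

# Two virtual paths with 10 bandwidth units spare each; VC demands 5, 5, 10.
for select in (least_loaded, best_fit):
    spare = {"VP1": 10.0, "VP2": 10.0}
    admitted = []
    for demand in (5.0, 5.0, 10.0):
        route = select(spare, demand)
        if route is not None:
            spare[route] -= demand
            admitted.append((demand, route))
    print(select.__name__, "admitted:", admitted)
# least_loaded fragments both VPs and rejects the 10-unit VC;
# best_fit packs VP1 and still admits the large VC on VP2.
```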

Relevance: 90.00%

Abstract:

The congestion control mechanisms of TCP make it vulnerable in an environment where flows with different congestion sensitivity compete for scarce resources. With the increasing amount of unresponsive UDP traffic in today's Internet, new mechanisms are needed to enforce fairness in the core of the network. We propose a scalable Diffserv-like architecture, where flows with different characteristics are classified into separate service queues at the routers. Such class-based isolation provides protection so that flows with different characteristics do not negatively impact one another. In this study, we examine different aspects of UDP and TCP interaction and the possible gains from segregating UDP and TCP into different classes. We also investigate the utility of further segregating TCP flows into two classes: a class of short flows and a class of long flows. Results are obtained analytically for both Tail Drop and Random Early Drop (RED) routers. Class-based isolation has the following salient features: (1) better fairness, (2) improved predictability for all kinds of flows, (3) lower transmission delay for delay-sensitive flows, and (4) better control over the Quality of Service (QoS) of a particular traffic type.
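A minimal sketch of the class-based isolation idea, assuming three illustrative classes (unresponsive UDP, short TCP, long TCP) served by weighted round-robin; the class names, the 100 KB short-flow cutoff, and the weights are invented for illustration, not taken from the paper:

```python
from collections import deque

QUEUES = {"udp": deque(), "tcp_short": deque(), "tcp_long": deque()}
WEIGHTS = {"udp": 1, "tcp_short": 2, "tcp_long": 2}  # illustrative weights

def classify(pkt: dict) -> str:
    """Map a packet to its service class (hypothetical 100 KB cutoff)."""
    if pkt["proto"] == "udp":
        return "udp"
    return "tcp_short" if pkt["flow_bytes"] < 100_000 else "tcp_long"

def enqueue(pkt: dict) -> None:
    QUEUES[classify(pkt)].append(pkt)

def serve_round() -> list:
    """One weighted round-robin pass: each class sends up to its weight in
    packets, so a flooding UDP source cannot starve the TCP classes."""
    sent = []
    for cls, queue in QUEUES.items():
        for _ in range(WEIGHTS[cls]):
            if queue:
                sent.append(queue.popleft())
    return sent

for i in range(6):  # an unresponsive UDP burst plus one short TCP flow
    enqueue({"proto": "udp", "flow_bytes": 10**9, "id": i})
enqueue({"proto": "tcp", "flow_bytes": 4_000, "id": 99})
print([p["id"] for p in serve_round()])  # the TCP packet is served this round
```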

Relevance: 90.00%

Abstract:

As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks), there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present the ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models, such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
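The paper's models are written in Promela and checked with SPIN; purely as a language-neutral illustration (Python here, for consistency with the other sketches), explicit-state safety checking amounts to a breadth-first search over the product of the peers' state machines, flagging reachable states with no enabled transition. The toy request/continue/body protocol below is invented and far simpler than the HTTP "100 Continue" models in the paper:

```python
from collections import deque

# Transitions: (state, action) -> next_state. Actions send (!) or receive (?)
# a message over a one-slot channel in each direction.
CLIENT = {("idle", "!request"): "waiting",
          ("waiting", "?continue"): "sending",
          ("sending", "!body"): "done"}
SERVER = {("idle", "?request"): "deciding",
          ("deciding", "!continue"): "reading",
          ("reading", "?body"): "done"}

def moves(state):
    cs, ss, c2s, s2c = state  # client state, server state, channel contents
    out = []
    for (st, act), nxt in CLIENT.items():
        if st != cs:
            continue
        if act.startswith("!") and c2s is None:       # send if slot empty
            out.append((nxt, ss, act[1:], s2c))
        if act.startswith("?") and s2c == act[1:]:    # receive matching msg
            out.append((nxt, ss, c2s, None))
    for (st, act), nxt in SERVER.items():
        if st != ss:
            continue
        if act.startswith("!") and s2c is None:
            out.append((cs, nxt, c2s, act[1:]))
        if act.startswith("?") and c2s == act[1:]:
            out.append((cs, nxt, None, s2c))
    return out

# BFS over the product state space, reporting stuck (deadlocked) states.
seen, frontier = set(), deque([("idle", "idle", None, None)])
while frontier:
    s = frontier.popleft()
    if s in seen:
        continue
    seen.add(s)
    nxt = moves(s)
    if not nxt and not (s[0] == s[1] == "done"):
        print("deadlock at", s)
    frontier.extend(nxt)
print(f"explored {len(seen)} states")
```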

Relevance: 90.00%

Abstract:

The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult, as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme that makes such allocation decisions using a mixture of per-stream reservations and an aggregate reservation shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) against bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or, equivalently, with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme, especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
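As a rough, self-contained sketch of the two-tiered idea (assumptions mine: the per-stream tier is each stream's median frame bandwidth and the shared tier is a high quantile of the aggregate excess; the paper's actual scheme uses three per-stream parameters and analytically bounded drop probabilities), the following compares the two-tier total against a sum-of-peaks reservation, a crude stand-in for fully deterministic provisioning:

```python
import numpy as np

def two_tier_reservation(traces: np.ndarray, p_stream=0.5, p_shared=0.999):
    """traces: (num_streams, num_frames) per-frame bandwidth demands.
    Reserve a base rate per stream plus one shared slack sized so the
    aggregate excess over the base rates is covered with prob. p_shared."""
    base = np.quantile(traces, p_stream, axis=1)           # per-stream tier
    excess = np.clip(traces - base[:, None], 0, None).sum(axis=0)
    shared = np.quantile(excess, p_shared)                 # aggregate tier
    return base, shared

rng = np.random.default_rng(0)
traces = rng.gamma(shape=2.0, scale=1.0, size=(50, 10_000))  # synthetic VBR
base, shared = two_tier_reservation(traces)
peak_sum = traces.max(axis=1).sum()  # reserve every stream's peak rate
print(f"two-tier total: {base.sum() + shared:.1f} vs sum of peaks: {peak_sum:.1f}")
```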

Relevance: 90.00%

Abstract:

Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] to periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux Operating System [HSPN98, SPH98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode where only real-time tasks are scheduled. We overview the technical issues that we had to overcome in order to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.
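A hedged sketch of the budgeted-admission idea behind task isolation (the parameter names and the replenishment rule are illustrative, not the SRMS algorithm or the KURT Linux API): each task draws jobs against an allowance that is replenished periodically, so a task whose jobs overrun is throttled instead of stealing capacity from others:

```python
class BudgetedTask:
    """Illustrative per-task admission controller: a job is admitted only if
    its (estimated) demand fits the task's remaining allowance, preserving
    the statistical QoS of other tasks sharing the resource."""

    def __init__(self, allowance: float, period: float):
        self.allowance = allowance      # budget per replenishment period
        self.period = period
        self.budget = allowance
        self.next_refill = period

    def admit(self, now: float, demand: float) -> bool:
        while now >= self.next_refill:  # lazy budget replenishment
            self.budget = self.allowance
            self.next_refill += self.period
        if demand <= self.budget:
            self.budget -= demand
            return True
        return False                    # rejected: task isolation in action

task = BudgetedTask(allowance=3.0, period=10.0)
for t, d in [(0, 2.0), (1, 2.0), (12, 1.5)]:
    print(f"t={t}: demand {d} ->", "admit" if task.admit(t, d) else "reject")
```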

Relevance: 90.00%

Abstract:

Research on the construction of logical overlay networks has gained significance in recent times. This is partly due to work on peer-to-peer (P2P) systems for locating and retrieving distributed data objects, and partly to scalable content distribution using end-system multicast techniques. However, there are emerging applications that require the real-time transport of data from various sources to potentially many thousands of subscribers, each with its own quality-of-service (QoS) constraints. This paper primarily focuses on the properties of two popular topologies found in interconnection networks, namely k-ary n-cubes and de Bruijn graphs. The regular structure of these graph topologies makes it easier to analyze them, and to determine possible routes for real-time data, than for complete or irregular graphs. We show how these overlay topologies compare in their ability to deliver data according to the QoS constraints of many subscribers, each receiving data from specific publishing hosts. Comparisons are drawn on the ability of each topology to route data in the presence of dynamic system effects caused by end-hosts joining and departing the system. Finally, experimental results show the service guarantees and physical link stress resulting from efficient multicast trees constructed over both kinds of overlay networks.
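For flavor, a sketch of why de Bruijn overlays route in at most n hops (node ids are digit strings of length n, and each hop left-shifts the id and appends the next digit of the destination); the shift-register rule is standard for de Bruijn graphs, though the overlap shortcut is my simplification rather than anything specified in the paper:

```python
def debruijn_route(src: str, dst: str) -> list:
    """Shift-register route on a k-ary de Bruijn overlay: find the longest
    suffix of src that is a prefix of dst, then shift in dst's remaining
    digits one hop at a time (at most n hops, n = id length)."""
    n = len(src)
    overlap = next(m for m in range(n, -1, -1)
                   if m == 0 or src[n - m:] == dst[:m])
    path, cur = [src], src
    for digit in dst[overlap:]:
        cur = cur[1:] + digit   # one overlay hop
        path.append(cur)
    return path

# Example with base-2 ids of length 4: 2 hops instead of the naive 4.
print(debruijn_route("0110", "1011"))  # ['0110', '1101', '1011']
```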

Relevance: 90.00%

Abstract:

The recent implementation of Universal Neonatal Hearing Screening (UNHS) in all 19 maternity hospitals across Ireland has precipitated early identification of paediatric hearing loss in an Irish context. This qualitative, grounded theory study centres on the issue of parental coping as families receive and respond to what is typically an unexpected diagnosis of hearing loss in their newborn baby. Parental wellbeing is of particular concern, as the diagnosis occurs in the context of recovery from birth and at a time when the parent-child relationship is being established. As the vast majority of children with a hearing loss are born into hearing families with no prior history of deafness, parents generally have had little exposure to childhood hearing loss and often experience acute emotional vulnerability as they respond to the diagnosis. The researcher conducted in-depth interviews primarily with parents (and to a lesser extent with professionals), and administered a follow-up postal questionnaire to parents. Through a grounded theory analysis of the data, the researcher subsequently fashioned a four-stage model depicting the parental journey of receiving and coping with a diagnosis. The four stages (entitled Anticipating, Confirming, Adjusting and Normalising) are differentiated by the chronology of service intervention and defined by the overarching parental experience. Far from representing a homogenous trajectory, this four-stage model is multifaceted and captures a wide diversity of parental experiences, ranging from acute distress to resilient hopefulness.

Relevance: 90.00%

Abstract:

BACKGROUND: Durham County, North Carolina, faces high rates of human immunodeficiency virus (HIV) infection (with or without progression to AIDS) and sexually transmitted diseases (STDs). We explored the use of health care services and the prevalence of coinfections among HIV-infected residents, and we recorded community perspectives on HIV-related issues. METHODS: We evaluated data on diagnostic codes, outpatient visits, and hospitalizations for individuals with HIV infection, STDs, and/or hepatitis B or C who visited Duke University Hospital System (DUHS). Viral loads for HIV-infected patients receiving care were estimated for 2009. We conducted geospatial mapping to determine disease trends and used focus groups and key informant interviews to identify barriers and solutions to improving testing and care. RESULTS: We identified substantial increases in HIV/STDs in the southern regions of the county. During the 5-year period, 1,291 adults with HIV infection, 4,245 with STDs, and 2,182 with hepatitis B or C were evaluated at DUHS. Among HIV-infected persons, 13.9% and 21.8% were coinfected with an STD or hepatitis B or C, respectively. In 2009, 65.7% of HIV-infected persons receiving care had undetectable viral loads. Barriers to testing included stigma, fear, and denial of risk, while treatment barriers included costs, transportation, and low medical literacy. LIMITATIONS: Data for health care utilization and HIV load were available from different periods. Focus groups were conducted among a convenience sample, but they represented a diverse population. CONCLUSIONS: Durham County has experienced an increase in the number of HIV-infected persons in the county, and coinfections with STDs and hepatitis B or C are common. Multiple barriers to testing/treatment exist in the community. Coordinated care models are needed to improve access to HIV care and to reduce testing and treatment barriers.

Relevance: 90.00%

Abstract:

Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
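For readers unfamiliar with the objective: the flow-time of a job is its completion time minus its release time. The sketch below computes average flow time under SRPT (shortest remaining processing time) on a single machine, a toy far simpler than the unrelated-machines settings studied in the thesis, purely to make the objective concrete; on the example instance SRPT attains average flow time 14/3, versus 10 for FIFO.

```python
import heapq

def avg_flow_time_srpt(jobs):
    """Average flow time (completion minus release) under SRPT on a single
    machine; jobs is a list of (release_time, processing_time) pairs."""
    jobs = sorted(jobs)
    heap, t, i, total = [], 0.0, 0, 0.0
    while i < len(jobs) or heap:
        if not heap:
            t = max(t, jobs[i][0])           # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))  # (remaining, release)
            i += 1
        rem, rel = heapq.heappop(heap)       # shortest remaining work first
        run = rem if i == len(jobs) else min(rem, jobs[i][0] - t)
        t += run
        if run < rem:
            heapq.heappush(heap, (rem - run, rel))  # preempted, requeue
        else:
            total += t - rel                 # job done: add its flow time
    return total / len(jobs)

print(avg_flow_time_srpt([(0, 10), (1, 1), (2, 1)]))  # ~4.67, vs 10.0 for FIFO
```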

The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis can be placed in one of the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and the maximum flow-time in the offline setting. In the online, non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP problem generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP problem and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing theoretic notion of stability and resource augmentation analysis.

3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first framework based on linear and convex programming duality for bounding the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.

Relevance: 90.00%

Abstract:

BACKGROUND: Singapore's population, like those of many other countries, is aging; this is likely to lead to an increase in eye diseases and the demand for eye care. Since ophthalmologist training is long and expensive, early planning is essential. This paper forecasts workforce and training requirements for Singapore up to the year 2040 under several plausible future scenarios. METHODS: The Singapore Eye Care Workforce Model was created as a continuous-time compartment model with explicit workforce stocks, using system dynamics. The model has three modules: prevalence of eye disease, demand, and workforce requirements. It is used to simulate the prevalence of eye diseases, patient visits, and workforce requirements for the public sector under different scenarios in order to determine training requirements. RESULTS: Four scenarios were constructed. Under the baseline business-as-usual scenario, the required number of ophthalmologists is projected to increase by 117% from 2015 to 2040. Under the current policy scenario (assuming an increase in service uptake due to increased awareness, availability, and accessibility of eye care services), the increase will be 175%, while under the new model of care scenario (considering the additional effect of providing some services by non-ophthalmologists) the increase will only be 150%. The moderated workload scenario (assuming in addition a reduction of the clinical workload) projects an increase in the required number of ophthalmologists of 192% by 2040. Considering the uncertainties in the projected demand for eye care services, a residency intake of 8-22 residents per year is required under the business-as-usual scenario, 17-21 under the current policy scenario, 14-18 under the new model of care scenario, and 18-23 under the moderated workload scenario. CONCLUSIONS: The results show that under all scenarios considered, Singapore's aging and growing population will result in an almost doubling of the number of Singaporeans with eye conditions, a significant increase in public sector eye care demand and, consequently, a greater requirement for ophthalmologists.
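As a hedged illustration of the stock-and-flow logic in the workforce module (all numbers and the single-stock simplification are mine, not the paper's calibrated model, which also derives demand from disease prevalence):

```python
def project_headcount(initial: int, intake: float, attrition: float,
                      train_years: int, start: int, end: int) -> float:
    """Minimal stock-flow sketch: a single stock of ophthalmologists with a
    fixed annual residency intake (entering the stock after train_years)
    and proportional annual attrition."""
    stock = float(initial)
    for year in range(start, end):
        graduates = intake if year - start >= train_years else 0.0
        stock += graduates - attrition * stock
    return stock

# Illustrative run: projected 2040 headcount for a 100-strong workforce
# under three intake levels (5-year training, 3% annual attrition assumed).
for intake in (8, 15, 22):
    print(intake, "->", round(project_headcount(100, intake, 0.03, 2015 + 5 - 2015, 2015, 2040)))
```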