283 results for slack


Relevance: 10.00%

Abstract:

Commonly, research work in routing for delay-tolerant networks (DTNs) assumes that node encounters are predestined, in the sense that they are the result of unknown, exogenous processes that control the mobility of these nodes. In this paper, we argue that for many applications such an assumption is too restrictive: while the spatio-temporal coordinates of the start and end points of a node's journey are determined by exogenous processes, the specific path that a node may take in space-time, and hence the set of nodes it may encounter, could be controlled in such a way as to improve the performance of DTN routing. To that end, we consider a setting in which each mobile node is governed by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged for DTN message delivery purposes. We define the Mobility Coordination Problem (MCP) for DTNs as follows: given a set of nodes, each with its own schedule, and a set of messages to be exchanged between these nodes, devise a set of node encounters that minimizes message delivery delays while satisfying all node schedules. The MCP for DTNs is general enough that it allows us to model and evaluate some of the existing DTN schemes, including data mules and message ferries. In this paper, we show that MCP for DTNs is NP-hard and propose two detour-based approaches to solve the problem. The first (DMD) is a centralized heuristic that leverages knowledge of the message workload to suggest specific detours that optimize message delivery. The second (DNE) is a distributed heuristic that is oblivious to the message workload and selects detours so as to maximize node encounters. We evaluate the performance of these detour-based approaches using extensive simulations based on synthetic workloads as well as real schedules obtained from taxi logs in a major metropolitan area. Our evaluation shows that our centralized, workload-aware DMD approach yields the best performance in terms of message delay and delivery success ratio, and that our distributed, workload-oblivious DNE approach yields favorable performance when compared to approaches that require the use of data mules and message ferries.
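The detour selection at the heart of the workload-oblivious (DNE) approach can be illustrated compactly. The sketch below is an assumption-laden toy, not the paper's algorithm: it uses Euclidean travel times, a caller-supplied encounter estimate, and hypothetical names (`pick_detour`, `encounter_score`) to show how a node might spend one leg's slack on the encounter-richest feasible detour.

```python
import math

def travel_time(a, b, speed=1.0):
    """Euclidean travel time between two (x, y) points."""
    return math.dist(a, b) / speed

def pick_detour(prev_stop, next_stop, candidates, encounter_score):
    """prev_stop/next_stop: ((x, y), time) schedule entries.
    candidates: candidate detour points; encounter_score: point -> estimate."""
    (p_loc, p_t), (n_loc, n_t) = prev_stop, next_stop
    direct = travel_time(p_loc, n_loc)
    slack = (n_t - p_t) - direct          # spare time on this schedule leg
    best, best_score = None, 0.0
    for c in candidates:
        extra = travel_time(p_loc, c) + travel_time(c, n_loc) - direct
        if extra <= slack and encounter_score(c) > best_score:
            best, best_score = c, encounter_score(c)
    return best  # None: no feasible detour improves expected encounters

# Toy usage: 4 units of slack; only the nearer hotspot is reachable in time.
stops = (((0, 0), 0.0), ((10, 0), 14.0))
hotspots = {(5, 3): 3.0, (5, 6): 5.0}     # assumed encounter estimates
print(pick_detour(*stops, list(hotspots), hotspots.get))  # -> (5, 3)
```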

Relevance: 10.00%

Abstract:

The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult, as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme that makes such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams, to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) against bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or, equivalently, with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an online setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme, especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
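The two-tiered reservation admits a small illustration. The following sketch (an illustrative assumption, not the paper's analytical method) sizes the shared slack pool from per-frame traces as an empirical quantile of the aggregate excess over the per-stream reservations, so that the fraction of frame slots whose excess goes unserved stays near a target drop probability; the function name and parameters are hypothetical.

```python
def shared_pool_size(traces, base_rates, drop_target=0.01):
    """traces: per-stream lists of per-frame bandwidth demands (equal length).
    base_rates: per-stream reservations. Returns the shared reservation."""
    excess = []
    for t in range(len(traces[0])):
        # demand above the per-stream reservations, summed across streams
        excess.append(sum(max(tr[t] - b, 0.0) for tr, b in zip(traces, base_rates)))
    excess.sort()
    # smallest pool covering all but (roughly) a drop_target fraction of slots
    idx = min(int((1.0 - drop_target) * len(excess)), len(excess) - 1)
    return excess[idx]

# Toy usage: three bursty streams, each reserved at its mean rate;
# total reservation = sum(bases) + shared pool.
traces = [[4, 4, 9, 4], [5, 10, 5, 5], [3, 3, 3, 12]]
bases = [sum(tr) / len(tr) for tr in traces]
print(bases, shared_pool_size(traces, bases, drop_target=0.25))
```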

Relevance: 10.00%

Abstract:

The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owners of these devices. These challenges preclude the use of centralized control and rule out services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs) and Message Delivery Applications (MDAs). In the context of FMAs, this thesis presents two techniques that are well suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally and thus reduce communication overheads. This thesis then recognizes node mobility as a resource to be leveraged and, in that respect, proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) can be used effectively for local resource management. Second, judicious leverage and coordination of node mobility can lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.

Relevance: 10.00%

Abstract:

Controlling the mobility pattern of mobile nodes (e.g., robots) to monitor a given field is a well-studied problem in sensor networks. In this setup, absolute control over the nodes' mobility is assumed; apart from physical limitations, no other constraints are imposed on planning the mobility of these nodes. In this paper, we address a more general version of the problem. Specifically, we consider a setting in which the mobility of each node is externally constrained by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged to achieve a specific coverage distribution of a field. Such a distribution defines the relative importance of different field locations. We define the Constrained Mobility Coordination problem for Preferential Coverage (CMC-PC) as follows: given a field with a desired monitoring distribution and n nodes, each with its own schedule, coordinate the mobility of the nodes so as to 1) satisfy the schedules of all nodes, and 2) attain the required coverage of the given field. We show that the CMC-PC problem is NP-complete (by reduction from the Hamiltonian Cycle problem). We then propose TFM, a distributed heuristic that achieves field coverage as close as possible to the required coverage distribution. We evaluate TFM using extensive simulations, as well as taxi logs from a major metropolitan area, and compare it to the random mobility strategy, which provides a lower bound on performance. Our results show that TFM is very successful in matching the required field coverage distribution, and that it provides at least a two-fold improvement in query success ratio for queries that follow the target coverage distribution of the field.
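The preferential-coverage rule is easy to picture in code. Below is a minimal sketch of a TFM-like decision (our illustrative reading, not the published heuristic; all names are hypothetical): a node steers its slack toward the reachable cell whose achieved share of coverage falls furthest below the target distribution.

```python
def coverage_deficit(target, visits):
    """Per-cell shortfall of achieved coverage share vs. the target share."""
    total = sum(visits.values()) or 1
    return {c: target[c] - visits.get(c, 0) / total for c in target}

def next_detour(target, visits, reachable):
    """Among cells reachable within this leg's slack, pick the most
    under-covered; None if every reachable cell is already over-covered."""
    deficit = coverage_deficit(target, visits)
    best = max(reachable, key=lambda c: deficit.get(c, float("-inf")))
    return best if deficit.get(best, 0) > 0 else None

# Toy usage: cell "b" should receive half of all coverage but has none yet.
target = {"a": 0.25, "b": 0.50, "c": 0.25}
visits = {"a": 3, "c": 1}
print(next_detour(target, visits, reachable={"a", "b", "c"}))  # -> "b"
```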

Relevance: 10.00%

Abstract:

In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling periodic tasks with highly variable execution times and statistical QoS requirements. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test for SRMS ensures that, using the SRMS scheduling algorithm, a given periodic task set can share a given resource (e.g., a processor, communication medium, or switching device) without violating any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed-priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it supports job admission control, rejecting jobs that are not guaranteed to finish by their deadlines as soon as they are released, thus enabling the system to take necessary compensating actions; admission control also preserves resources, since no time is spent on jobs that would miss their deadlines anyway. Third, SRMS seamlessly integrates reservation-based and best-effort resource scheduling: reservation-based scheduling ensures the delivery of the minimal requested QoS, while best-effort scheduling ensures that unused reserved bandwidth is not wasted but is instead used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has constant overhead, in the sense that the complexity of the scheduler does not depend on the number of tasks in the system. We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g., RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient yet optimal scheduler for task sets with harmonic periods.
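The admission-control feature can be sketched in a few lines. This is a simplified illustration of the idea (not the published SRMS test, whose budgets and replenishment rules are more refined): each task holds a reserved execution budget per period, and a released job is admitted only if its demand fits what remains, so doomed jobs are rejected immediately and their cycles freed for best-effort work. The class and parameter names are hypothetical.

```python
class TaskReservation:
    def __init__(self, budget):
        self.budget = budget      # reserved execution time per period
        self.remaining = budget   # budget left in the current period

    def replenish(self):
        """Called at each period boundary to restore the reservation."""
        self.remaining = self.budget

    def admit(self, demand):
        """Admit a released job iff its demand fits the remaining budget."""
        if demand <= self.remaining:
            self.remaining -= demand
            return True
        return False  # reject at release; the job runs best-effort or not at all

# Toy usage: a task reserved 5 units per period sees jobs of varying size.
res = TaskReservation(budget=5.0)
for demand in [2.0, 2.5, 1.0]:        # the third job exceeds what is left
    print(demand, res.admit(demand))  # -> True, True, False
```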

Relevance: 10.00%

Abstract:

Most real-time scheduling problems are known to be NP-complete. To enable accurate comparison between the schedules of heuristic algorithms and the optimal schedule, we introduce an omniscient oracle. This oracle provides schedules for periodic task sets with harmonic periods and variable resource requirements. Three different job value functions are described and implemented; each corresponds to a different system goal. The oracle is used to examine the performance of different online schedulers under varying loads, including overload. We have compared the oracle against Rate Monotonic Scheduling, Statistical Rate Monotonic Scheduling, and Slack Stealing Job Admission Control Scheduling. Consistently, the oracle provides an upper bound on performance for the metric under consideration.

Relevance: 10.00%

Abstract:

The paper considers single machine due date assignment and scheduling problems with n jobs, in which the due dates are obtained from the processing times by adding a positive slack q. A schedule is feasible if there are no tardy jobs and the job sequence respects given precedence constraints. The value of q is chosen so as to minimize a function ϕ(F, q) which is non-decreasing in each of its arguments, where F is a certain non-decreasing earliness penalty function. Once q is chosen or fixed, the corresponding scheduling problem is to find a feasible schedule with the minimum value of the function F. In the case of arbitrary precedence constraints, the problems under consideration are shown to be NP-hard in the strong sense even for F being total earliness. If the precedence constraints are defined by a series-parallel graph, both the scheduling and the due date assignment problems are proved solvable in polynomial time, provided that F is either the sum of linear functions or the sum of exponential functions. The running time of the algorithms can be further reduced if the jobs are independent.

Scope and purpose: We consider single machine due date assignment and scheduling problems and design fast algorithms for their solution under a wide range of assumptions. The problems under consideration arise in production planning when management is faced with the problem of setting realistic due dates for a number of orders. The due dates of the orders are determined by increasing the time needed for their fulfillment by a common positive slack. If the slack is set large enough, the due dates can be easily met, thereby producing a good image of the firm. This, however, may result in substantial holding costs for the finished products before they are brought to the customer. The objective is to explore the trade-off between the size of the slack and the resulting holding costs for the early orders.
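A tiny worked example helps make the trade-off concrete. Assuming due dates of the form d_j = p_j + q and F equal to total earliness (consistent with the description above, though the paper treats more general F), for a fixed job sequence the smallest feasible q equals the largest waiting time, and total earliness then grows linearly with q; the function name is hypothetical.

```python
from itertools import accumulate

def earliness_for_slack(proc_times, q):
    """For a fixed sequence, return (feasible, total_earliness) under due
    dates d_j = p_j + q; feasible means no job is tardy."""
    completions = list(accumulate(proc_times))
    waits = [c - p for c, p in zip(completions, proc_times)]
    if q < max(waits):
        return False, None                    # some job would be tardy
    total_e = sum((p + q) - c for p, c in zip(proc_times, completions))
    return True, total_e

p = [3, 1, 4]                                 # processing times in sequence
q_min = max(c - pj for c, pj in zip(accumulate(p), p))  # smallest feasible q
for q in (q_min, q_min + 2):
    print(q, earliness_for_slack(p, q))       # earliness rises with q
```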

Relevance: 10.00%

Abstract:

Single machine scheduling problems are considered in which the processing times of jobs depend on their positions in a schedule, and the due dates are assigned either according to the CON rule (a due date common to all jobs is chosen) or according to the SLK rule (the due dates are computed by increasing the actual processing time of each job by a slack common to all jobs). Polynomial-time dynamic programming algorithms are proposed for problems with objective functions that include the cost of assigning the due dates, the total cost of discarded jobs (which are not scheduled) and, possibly, the total earliness of the scheduled jobs.
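For clarity, the two due-date rules can be stated in code; this is a plain illustration of the rules themselves (the paper's position-dependent processing times and dynamic programs are not modeled here).

```python
def due_dates_con(proc_times, k):
    """CON rule: one common due date k for every job."""
    return [k for _ in proc_times]

def due_dates_slk(proc_times, q):
    """SLK rule: each job's processing time plus a common slack q."""
    return [p + q for p in proc_times]

p = [3, 1, 4]
print(due_dates_con(p, k=6))   # -> [6, 6, 6]
print(due_dates_slk(p, q=4))   # -> [7, 5, 8]
```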

Relevance: 10.00%

Abstract:

This article presents the results from an experimental program designed to evaluate the performance of a system consisting of a readout unit and a ribbon-type Fiber Optic Sensor (FOS) based on Brillouin Optical Time Domain Analysis (BOTDA). The system is intended for the detection of cracks as well as the monitoring of long-term performance for steel bridge girders. The program consisted of introducing a crack at the center of a 3-m-long steel beam and monitoring its progression using static loading tests performed at ambient and sub-zero temperatures. For sensor lengths similar to those used in the field, the resonant frequency shifts per unit increase in crack width were found to decrease from 114 MHz/mm at ambient temperature (~25°C) to 65 MHz/mm at −10°C. Results also revealed nonlinearity and variability, which can be attributed to an incompatibility between the settings of the laser pump in the readout unit and the sensor length. Significant losses were detected along the bonded segments of the sensor and were attributed to the presence of ripples along the sensor. These undulations worsen with a reduction in temperature and are induced by the bonding procedure as well as the slack provided in the plastic sleeves containing the splices.

Relevance: 10.00%

Abstract:

The operation of supply chains (SCs) has for many years been focused on efficiency, leanness and responsiveness. This has resulted in reduced slack in operations, compressed cycle times, increased productivity and minimised inventory levels along the SC. Combined with tight tolerance settings for the realisation of logistics and production processes, this has led to SC performances that are frequently not robust. SCs are becoming increasingly vulnerable to disturbances, which can decrease the competitive power of the entire chain in the market. Moreover, in the case of food SCs non-robust performances may ultimately result in empty shelves in grocery stores and supermarkets.
The overall objective of this research is to contribute to Supply Chain Management (SCM) theory by developing a structured approach to assess SC vulnerability, so that robust performances of food SCs can be assured. We also aim to help companies in the food industry to evaluate their current state of vulnerability, and to improve their performance robustness through a better understanding of vulnerability issues. The following research questions (RQs) stem from these objectives:
RQ1: What are the main research challenges related to (food) SC robustness?
RQ2: What are the main elements that have to be considered in the design of robust SCs and what are the relationships between these elements?
RQ3: What is the relationship between the contextual factors of food SCs and the use of disturbance management principles?
RQ4: How to systematically assess the impact of disturbances in (food) SC processes on the robustness of (food) SC performances?
To answer these RQs we used different methodologies, both qualitative and quantitative. For each question, we conducted a literature survey to identify gaps in existing research and define the state of the art of knowledge on the related topics. For the second and third RQ, we conducted both exploration and testing on selected case studies. Finally, to obtain more detailed answers to the fourth question, we used simulation modelling and scenario analysis for vulnerability assessment.
Main findings are summarised as follows.
Based on an extensive literature review, we answered RQ1. The main research challenges were related to the need to define SC robustness more precisely, to identify and classify disturbances and their causes in the context of the specific characteristics of SCs and to make a systematic overview of (re)design strategies that may improve SC robustness. Also, we found that it is useful to be able to discriminate between varying degrees of SC vulnerability and to find a measure that quantifies the extent to which a company or SC shows robust performances when exposed to disturbances.
To address RQ2, we define SC robustness as the degree to which a SC shows an acceptable performance in (each of) its Key Performance Indicators (KPIs) during and after an unexpected event that caused a disturbance in one or more logistics processes. Based on the SCM literature we identified the main elements needed to achieve robust performances and structured them together to form a conceptual framework for the design of robust SCs. We then explained the logic of the framework and elaborated on each of its main elements: the SC scenario, SC disturbances, SC performance, sources of food SC vulnerability, and redesign principles and strategies.
Based on three case studies, we answered RQ3. Our major findings show that the contextual factors have a consistent relationship to Disturbance Management Principles (DMPs). The product and SC environment characteristics are contextual factors that are hard to change and these characteristics initiate the use of specific DMPs as well as constrain the use of potential response actions. The process and the SC network characteristics are contextual factors that are easier to change, and they are affected by the use of the DMPs. We also found a notable relationship between the type of DMP likely to be used and the particular combination of contextual factors present in the observed SC.
To address RQ4, we presented a new method for vulnerability assessments, the VULA method. The VULA method helps to identify how much a company is underperforming on a specific Key Performance Indicator (KPI) in the case of a disturbance, how often this would happen and how long it would last. It ultimately informs the decision maker about whether process redesign is needed and what kind of redesign strategies should be used in order to increase the SC’s robustness. The VULA method is demonstrated in the context of a meat SC using discrete-event simulation. The case findings show that performance robustness can be assessed for any KPI using the VULA method.
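To illustrate the kind of computation such an assessment implies, the sketch below (our reading of the description, not the thesis' exact metrics; all names are hypothetical) takes a simulated KPI time series and an acceptability threshold and reports how much the KPI falls short, how often shortfall episodes occur, and how long they last.

```python
def vulnerability_profile(kpi_series, threshold):
    """Summarize shortfalls of a KPI series against a threshold."""
    episodes, depths, current = [], [], 0
    for v in kpi_series:
        if v < threshold:
            current += 1
            depths.append(threshold - v)      # how much: shortfall magnitude
        elif current:
            episodes.append(current)          # how long: episode duration
            current = 0
    if current:
        episodes.append(current)              # close a trailing episode
    return {
        "avg_shortfall": sum(depths) / len(depths) if depths else 0.0,
        "episodes": len(episodes),            # how often
        "avg_duration": sum(episodes) / len(episodes) if episodes else 0.0,
    }

# Toy usage: daily service level (KPI) against a 95% acceptance threshold.
kpi = [0.97, 0.96, 0.91, 0.90, 0.96, 0.99, 0.88, 0.97]
print(vulnerability_profile(kpi, threshold=0.95))
```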
To sum up the project, all findings were incorporated within an integrated framework for designing robust SCs. The integrated framework consists of the following steps: 1) Description of the SC scenario and identification of its specific contextual factors; 2) Identification of disturbances that may affect KPIs; 3) Definition of the relevant KPIs and identification of the main disturbances through assessment of the SC performance robustness (i.e. application of the VULA method); 4) Identification of the sources of vulnerability that may (strongly) affect the robustness of performances and eventually increase the vulnerability of the SC; 5) Identification of appropriate preventive or disturbance impact reductive redesign strategies; 6) Alteration of SC scenario elements as required by the selected redesign strategies, repeating the VULA method for the KPIs defined in Step 3.
Contributions of this research are as follows. First, we have identified emerging research areas: SC robustness and its counterpart, vulnerability. Second, we have developed a definition of SC robustness, operationalized it, and identified and structured the relevant elements for the design of robust SCs in the form of a research framework. With this research framework, we contribute to a better understanding of the concepts of vulnerability and robustness and related issues in food SCs. Third, we identified the relationship between contextual factors of food SCs and the specific DMPs used to maintain robust SC performances: characteristics of the product and the SC environment influence the selection and use of DMPs; processes and SC networks are influenced by DMPs. Fourth, we developed specific metrics for vulnerability assessment, which serve as the basis of the VULA method. The VULA method investigates different measures of the variability of both the duration of impacts from disturbances and the fluctuations in their magnitude.
With this project, we also hope to have delivered practical insights into food SC vulnerability. First, the integrated framework for the design of robust SCs can be used to guide food companies in successful disturbance management. Second, empirical findings from case studies lead to the identification of changeable characteristics of SCs that can serve as a basis for assessing where to focus efforts to manage disturbances. Third, the VULA method can help top management to get more reliable information about the “health” of the company.
The two most important research opportunities are: First, there is a need to extend and validate our findings related to the research framework and contextual factors through further case studies related to other types of (food) products and other types of SCs. Second, there is a need to further develop and test the VULA method, e.g.: to use other indicators and statistical measures for disturbance detection and SC improvement; to define the most appropriate KPI to represent the robustness of a complete SC. We hope this thesis invites other researchers to pick up these challenges and help us further improve the robustness of (food) SCs.

Relevance: 10.00%

Abstract:

LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with a smaller laxity, has been known as an optimal preemptive scheduling algorithm on a single processor platform. However, little work has been done to illuminate its characteristics on multiprocessor platforms. In this paper, we identify the dynamics of laxity from the system's viewpoint and translate these dynamics into an LLF multiprocessor schedulability analysis. More specifically, we first characterize laxity properties under LLF scheduling, focusing on the laxity dynamics associated with a deadline miss. These laxity dynamics describe a lower bound, which leads to the deadline miss, on the number of tasks of certain laxity values at certain time instants. This lower bound is significant because it represents an invariant for highly dynamic system parameters (laxity values). Since the laxity of a task depends on the amount of interference from higher-priority tasks, we can then derive a set of conditions to check whether a given task system can go into the laxity dynamics that lead to a deadline miss. This way, to the best of our knowledge, we propose the first LLF multiprocessor schedulability test based on its own laxity properties. We also develop an improved schedulability test that exploits slack values. We mathematically prove that the proposed LLF tests dominate the state-of-the-art EDZL tests. We also present simulation results to evaluate the schedulability performance of both the original and improved LLF tests in a quantitative manner.
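The laxity notion underpinning the analysis can be made concrete with a minimal dispatcher sketch (an illustration of LLF itself, not of the schedulability test): at time t a job's laxity is its deadline minus t minus its remaining work, and the m least-lax ready jobs run; negative laxity already implies a miss. The data shapes are illustrative assumptions.

```python
def llf_step(jobs, t, m):
    """jobs: dicts with 'deadline' and 'remaining'. Runs one time unit of
    LLF on m processors; mutates 'remaining' and returns the jobs chosen."""
    ready = [j for j in jobs if j["remaining"] > 0]
    for j in ready:
        j["laxity"] = j["deadline"] - t - j["remaining"]
    ready.sort(key=lambda j: j["laxity"])    # least laxity first
    running = ready[:m]
    for j in running:
        j["remaining"] -= 1
    return running

# Toy usage: three jobs on two processors; the two tightest run first.
jobs = [{"deadline": 4, "remaining": 3},
        {"deadline": 5, "remaining": 2},
        {"deadline": 9, "remaining": 2}]
print([j["laxity"] for j in llf_step(jobs, t=0, m=2)])  # -> [1, 3]
```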

Relevance: 10.00%

Abstract:

Consumer-electronics systems are becoming increasingly complex as the number of integrated applications grows. Some of these applications have real-time requirements, while other, non-real-time applications only require good average performance. For cost-efficient design, contemporary platforms feature an increasing number of cores that share resources, such as memories and interconnects. However, resource sharing causes contention that must be resolved by a resource arbiter, such as Time-Division Multiplexing (TDM). A key challenge is to configure this arbiter to satisfy the bandwidth and latency requirements of the real-time applications while maximizing the slack capacity that improves the performance of their non-real-time counterparts. As this configuration problem is NP-hard, a sophisticated automated configuration method is required to avoid negatively impacting design time. The main contributions of this article are: 1) an optimal approach that takes an existing integer linear programming (ILP) model addressing the problem and wraps it in a branch-and-price framework to improve scalability; 2) a faster heuristic algorithm that typically provides near-optimal solutions; 3) an experimental evaluation that quantitatively compares the branch-and-price approach to the previously formulated ILP model and the proposed heuristic; 4) a case study of an HD video and graphics processing system that demonstrates the practical applicability of the approach.
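A naive greedy stand-in (emphatically not the article's branch-and-price approach or its heuristic) conveys what a TDM configuration looks like: place each real-time client's slots as evenly as possible in the frame so its bandwidth share is met and the worst-case gap (a latency proxy) stays small, leaving unassigned slots as slack for non-real-time work. The function and its parameters are hypothetical, and total demand is assumed not to exceed the frame length.

```python
def configure_tdm(frame_len, demands):
    """demands: {client: slots_needed}; assumes sum(demands) <= frame_len.
    Returns a slot -> client table; None entries are slack capacity."""
    table = [None] * frame_len
    for client, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        step = frame_len / need               # ideal spacing for this client
        for i in range(need):
            slot = int(i * step)
            while table[slot] is not None:    # probe forward on collision
                slot = (slot + 1) % frame_len
            table[slot] = client
    return table

# Toy usage: a 10-slot frame; 3 slots remain as slack.
print(configure_tdm(10, {"cpu": 4, "dma": 2, "gpu": 1}))
```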

Relevance: 10.00%

Abstract:

Governance

Relevance: 10.00%

Abstract:

The winning intramural football team from the Geography department. Pictured from left to right are: Brian Rogers, Lino Fuciarelli, Tim Slack, Les Selby, Larry Miller, Bob Lunge (captain), Ron Devereux, Bob Steele, and Gerry Gamble.

Relevance: 10.00%

Abstract:

With the recent growth in cultural complexity, many organizations are faced with increasingly diverse employee pools. Gaining a greater understanding of the values that employees possess is the first step in effectively satisfying their needs and achieving a more productive workforce (Jung & Avolio, 2000). Values play a significant role in influencing individual behaviours. It is therefore necessary to assess the qualities of employee value systems and directly link them to the values of the organization. The importance of values and value congruence has been emphasized by many organizational behaviour researchers (cf. Adkins & Caldwell, 2004; Erdogan, Kraimer, & Liden, 2004; Jung & Avolio, 2000; Rokeach, 1973); however, the emphasis on value studies remains fairly stagnant within the sport industry (Amis, Slack, & Hinings, 2002). In order to examine the realities constructed by the participants in this study, a holistic view of the impact of values within a specific sport organization was provided. The purpose of this case study was to examine organizational and employee values to understand the effects of values and value congruence on employee behaviours within the context of a large Canadian sport organization. A multiple-methods case study approach was adopted in order to fully serve this purpose and provide a comprehensive view of the organization being examined. Document analysis, observations, surveys, as well as semi-structured interviews were conducted. The process allowed for triangulation and confirmability of the findings. Each method functioned to create an overarching understanding of the values and value congruence within this organization. The analysis of the findings was divided into qualitative and quantitative sections. The qualitative documents were analyzed twice, once manually by the researcher and once via Atlas.ti Version 4 (1998). The a priori and emergent coding that took place was based on triangulating the findings and uncovering common themes throughout the data. The Rokeach Value Survey (1973) that was incorporated into the survey design of the study was analyzed using descriptive statistics, as well as Mann-Whitney U and Kruskal-Wallis tests. These were deemed appropriate for analysis given the non-parametric nature of the survey instrument (Kinnear & Gray, 2004). The quantitative survey served to help define the values and value congruence that were then holistically examined through the qualitative interviews, document analyses, and observations. The results of the study indicated incongruent value levels between employees and those stated or perceived as the organization's values. Each finding demonstrated that varying levels of congruence may have diverse effects on individual behaviours. These behaviours range from production levels to interactions with fellow employees to turnover. In addition to the findings pertaining to the research questions, a number of other key issues were uncovered regarding departmentalization, communication, and board relations. Each has contributed to a greater understanding of the organization and has created direction for further research within this area of study.