988 results for Job loss
Abstract:
Focussing here on local authorities and health services, this paper examines the significance of new technology to unskilled work in the public sector as it is developing and the implications for workplace learning. An argument is developed that new technology is central to a minority of examples of job change, although, significantly, it is more important to staff-initiated change and to workers’ ability to fully participate in life beyond the workplace.
Abstract:
Chemical and biological processes, such as dissolution in gypsiferous sands and biodegradation in waste refuse, result in mass or particle loss, which in turn leads to changes in solid and void phase volumes and grading. Data on phase volume and grading changes have been obtained from oedometric dissolution tests on sand–salt mixtures. Phase volume changes are defined by a (dissolution-induced) void volume change parameter (Λ). Grading changes are interpreted using grading entropy coordinates, which allow a grading curve to be depicted as a single data point and changes in grading as a vector quantity rather than a family of distribution curves. By combining Λ contours with pre- to post-dissolution grading entropy coordinate paths, an innovative interpretation of the volumetric consequences of particle loss is obtained. Paths associated with small soluble particles, the loss of which triggers relatively little settlement but a large increase in void ratio, track parallel to the Λ contours. Paths associated with the loss of larger particles, which can destabilise the sand skeleton, tend to track across the Λ contours.
Abstract:
This document describes two sets of benchmark problem instances for the job shop scheduling problem. Each set of instances is supplied as a compressed (zipped) archive containing a single CSV file for each problem instance, using the format described at http://rollproject.org/jssp/jsspGen.pdf.
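The authoritative CSV layout is specified in the linked PDF; as a minimal sketch only, the reader below assumes a common JSSP convention (first row: job and machine counts; each later row: one job as alternating machine,duration pairs) — that layout is an assumption, not taken from the document.

```python
import csv
import io

def parse_jssp_csv(text):
    """Parse a JSSP instance from CSV text.

    Assumed layout (the real format is defined in jsspGen.pdf):
    row 1 holds n_jobs,n_machines; each following row is one job
    given as alternating machine,duration pairs.
    """
    rows = list(csv.reader(io.StringIO(text)))
    n_jobs, n_machines = (int(v) for v in rows[0])
    jobs = []
    for row in rows[1:1 + n_jobs]:
        vals = [int(v) for v in row]
        # pair up (machine, duration) for each operation of this job
        jobs.append(list(zip(vals[0::2], vals[1::2])))
    return n_machines, jobs

example = "2,2\n0,3,1,2\n1,4,0,1\n"
machines, jobs = parse_jssp_csv(example)
```

Here `jobs[0]` is `[(0, 3), (1, 2)]`: job 0 runs 3 time units on machine 0, then 2 on machine 1.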
Abstract:
We describe a new hyper-heuristic method, NELLI-GP, for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
Abstract:
Islamic financing instruments can be categorised into profit and loss/risk sharing and non-participatory instruments. Although profit and loss sharing instruments such as musharakah are widely accepted as the ideal form of Islamic financing, prior studies suggest that alternative instruments such as murabahah are preferred by Islamic banks. Nevertheless, prior studies did not explore factors that influence the use of Islamic financing among non-financial firms. Our study fills this gap and contributes new knowledge in several ways. First, we find no evidence of widespread use of Islamic financing instruments across non-financial firms. This is because the instruments are mostly used by less profitable firms with higher leverage (i.e., risky firms). Second, we find that profit and loss sharing instruments are hardly used, whilst the use of murabahah is dominant. Consistent with the prediction of moral-hazard-risk avoidance theory, further analysis suggests that users with a lower asset base (to serve as collateral) are associated with murabahah financing. Third, we present a critical discourse on the contentious nature of murabahah as practised. The economic significance and ethical issues associated with murabahah as practised should trigger serious efforts to steer Islamic corporate financing towards risk-sharing and away from the controversial rent-seeking practice.
Abstract:
Natural herbs have been in use for weight loss purposes since history began. However, the current global obesity epidemic and the rise in obesity-related chronic diseases, including type-II diabetes and cancer, have highlighted the need for novel and effective approaches to herbal remedies. Whilst several prescribed and non-prescribed slimming aids and herbal plant supplements have been marketed for their weight loss efficacy, single and multi-ingredient herbal supplements are still being investigated for their individual or combined weight loss benefits. Limited research has highlighted an interesting efficacy for several popular herbal plant supplements, including caffeine and capsaicin, Ayurvedic preparations and herbal teas, with various degrees of effectiveness including thermogenic, appetite control and psychological benefits such as mood state. Recent research has suggested acute augmented weight-loss effects of combining herbal ingestion with exercise. For example, ingesting green tea, yerba mate and/or caffeine has been shown to increase metabolic rate, augment fatty acid metabolism, and increase energy expenditure from fatty acid sources during exercise at various intensities, particularly low and moderate intensities. Other promising weight-loss effects have also been reported for combining exercise with multi-ingredient herbal supplements, particularly those that are rich in phytochemicals and caffeoyl derivatives. Combining herbal ingestion with exercise still requires further research in order to establish the most effective supplementation protocols in terms of dosage and timing, to determine the long-term benefits, particularly those related to exercise protocols, and to assess the long-term adherence needed to sustain the weight loss outcomes.
Abstract:
People with sight loss in the United Kingdom are known to have lower levels of emotional wellbeing and to be at higher risk of depression. Consequently, ‘having someone to talk to’ is an important priority for people with visual impairment. An online survey of the provision of emotional support and counselling for people affected by sight loss across the UK was undertaken. The survey was distributed widely and received 182 responses. There were more services offering ‘emotional support’, in the form of listening and information and advice giving, than offering ‘counselling’. Services were delivered by providers with differing qualifications in a variety of formats. Waiting times were fairly short and clients presented with a wide range of issues. Funding came from a range of sources, but many felt their funding was vulnerable. Conclusions have been drawn about the need for a national standardised framework for the provision of emotional support and counselling services for blind and partially sighted people in the UK.
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Dental Medicine.
Abstract:
For a given TCP flow, exogenous losses are those occurring on links other than the flow's bottleneck link. Exogenous losses are typically viewed as introducing undesirable "noise" into TCP's feedback control loop, leading to inefficient network utilization and potentially severe global unfairness. This has prompted much research on mechanisms for hiding such losses from end-points. In this paper, we show through analysis and simulations that low levels of exogenous losses are surprisingly beneficial in that they improve stability and convergence, without sacrificing efficiency. Based on this, we argue that exogenous loss awareness should be taken into account in any AQM design that aims to achieve global fairness. To that end, we propose an exogenous-loss aware Queue Management (XQM) that actively accounts for and leverages exogenous losses. We use an equation-based approach to derive the quiescent loss rate for a connection based on the connection's profile and its global fair share. In contrast to other queue management techniques, XQM ensures that a connection sees its quiescent loss rate, not only by complementing already existing exogenous losses, but also by actively hiding exogenous losses, if necessary, to achieve global fairness. We establish the advantages of exogenous-loss awareness using extensive simulations in which we contrast the performance of XQM to that of a host of traditional exogenous-loss unaware AQM techniques.
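The idea of an equation-based quiescent loss rate can be sketched by inverting the well-known square-root TCP throughput model T = (s/RTT)·√(3/(2p)) for p — a generic illustration, not necessarily the exact formula XQM derives from the connection profile.

```python
def quiescent_loss_rate(fair_share_bps, rtt_s, pkt_bytes=1500):
    """Loss rate at which the simplified TCP throughput model
    T = (s / RTT) * sqrt(3 / (2 p)) yields the given fair-share rate.

    Inverting for p gives p = 1.5 * (s / (T * RTT))**2, where s is the
    packet size in bits, T the target rate in bit/s, RTT in seconds.
    """
    s_bits = pkt_bytes * 8
    p = 1.5 * (s_bits / (fair_share_bps * rtt_s)) ** 2
    return min(p, 1.0)  # a loss rate cannot exceed 1

# A 1 Mbit/s fair share at 100 ms RTT with 1500-byte packets:
p = quiescent_loss_rate(1e6, 0.1)
```

A queue manager with this target could then add drops when observed exogenous losses fall short of `p`, or shield the flow when they exceed it.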
Abstract:
One of TCP's critical tasks is to determine which packets are lost in the network, as a basis for control actions (flow control and packet retransmission). Modern TCP implementations use two mechanisms: timeout and fast retransmit. Detection via timeout is necessarily a time-consuming operation; fast retransmit, while much quicker, is only effective for a small fraction of packet losses. In this paper we consider the problem of packet loss detection in TCP more generally. We concentrate on the fact that TCP's control actions are necessarily triggered by inference of packet loss, rather than conclusive knowledge. This suggests that one might analyze TCP's packet loss detection in a standard inferencing framework based on probability of detection and probability of false alarm. This paper makes two contributions to that end. First, we study an example of more general packet loss inference, namely optimal Bayesian packet loss detection based on round trip time. We show that for long-lived flows, it is frequently possible to achieve high detection probability and low false alarm probability based on measured round trip time. Second, we construct an analytic performance model that incorporates general packet loss inference into TCP. We show that for realistic detection and false alarm probabilities (as are achievable via our Bayesian detector) and for moderate packet loss rates, the use of more general packet loss inference in TCP can improve throughput by as much as 25%.
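A Bayesian RTT-based loss detector of the kind described can be sketched as a two-hypothesis posterior computation. The Gaussian RTT models and all parameter values below are illustrative assumptions; the paper's detector is built from measured RTT distributions.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def infer_loss(rtt, prior_loss, mu_loss, mu_ok, sigma):
    """Posterior probability that a packet was lost, given its measured RTT.

    Assumes (for illustration) Gaussian RTTs under both hypotheses,
    with lost packets seeing the longer mean mu_loss. Bayes' rule:
    P(loss | rtt) = P(loss) p(rtt | loss) / p(rtt).
    """
    like_loss = gaussian_pdf(rtt, mu_loss, sigma)
    like_ok = gaussian_pdf(rtt, mu_ok, sigma)
    num = prior_loss * like_loss
    return num / (num + (1 - prior_loss) * like_ok)
```

Thresholding this posterior trades detection probability against false-alarm probability, exactly the two quantities the abstract's inferencing framework is built on.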
Abstract:
End-to-end differentiation between wireless and congestion loss can equip TCP control so it operates effectively in a hybrid wired/wireless environment. Our approach integrates two techniques: packet loss pairs (PLP) and Hidden Markov Modeling (HMM). A packet loss pair is formed by two back-to-back packets, where one packet is lost while the second packet is successfully received. The purpose is for the second packet to carry the state of the network path, namely the round trip time (RTT), at the time the other packet is lost. Under realistic conditions, PLP provides strong differentiation between congestion and wireless loss based on distinguishable RTT distributions. An HMM is then trained so observed RTTs can be mapped to model states that represent either congestion loss or wireless loss. Extensive simulations confirm the accuracy of our HMM-based technique in classifying the cause of a packet loss. We also show the superiority of our technique over the Vegas predictor, which was recently found to perform best and which exemplifies other existing loss labeling techniques.
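Mapping observed RTTs to congestion-loss or wireless-loss states is the classic HMM decoding problem, solvable with the Viterbi algorithm. The sketch below uses a toy two-state model with hand-picked (not trained) parameters encoding the abstract's intuition that congestion losses coincide with elevated RTTs.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for a discrete-emission HMM."""
    v = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: v[t - 1][p] + math.log(trans_p[p][s]))
            v[t][s] = (v[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Illustrative parameters: congestion losses emit high RTTs most of the time.
states = ("congestion", "wireless")
start_p = {"congestion": 0.5, "wireless": 0.5}
trans_p = {"congestion": {"congestion": 0.7, "wireless": 0.3},
           "wireless": {"congestion": 0.3, "wireless": 0.7}}
emit_p = {"congestion": {"high_rtt": 0.9, "normal_rtt": 0.1},
          "wireless": {"high_rtt": 0.2, "normal_rtt": 0.8}}
labels = viterbi(["high_rtt", "high_rtt", "normal_rtt"],
                 states, start_p, trans_p, emit_p)
```

In the paper's setting the emission distributions come from RTTs observed via packet loss pairs, and the states directly label each loss.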
Abstract:
The current congestion-oriented design of TCP hinders its ability to perform well in hybrid wireless/wired networks. We propose a new improvement on TCP NewReno (NewReno-FF) using a new loss labeling technique to discriminate wireless from congestion losses. The proposed technique is based on the estimation of the average and variance of the round trip time using a filter called the Flip Flop filter, which is augmented with history information. We show the comparative performance of TCP NewReno, NewReno-FF, and TCP Westwood through extensive simulations. We study the fundamental gains and limits using TCP NewReno with varying Loss Labeling accuracy (NewReno-LL) as a benchmark. Lastly, our investigation opens up important research directions. First, there is a need for a finer-grained classification of losses (even within congestion and wireless losses) for TCP in heterogeneous networks. Second, it is essential to develop an appropriate control strategy for recovery after the correct classification of a packet loss.
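A flip-flop filter pairs an agile estimator (large gain, tracks quickly) with a stable one (small gain, rejects noise) and switches between them. The sketch below is a minimal version of that idea; the gains, deviation band, and switching test are illustrative choices, not the paper's tuned values.

```python
class FlipFlopFilter:
    """Two EWMAs over RTT samples: output follows the agile filter while
    the sample stays within a deviation band around the stable estimate,
    and flips to the stable filter when the sample looks like an outlier.
    """
    def __init__(self, agile_gain=0.9, stable_gain=0.1, band=2.0):
        self.agile = None
        self.stable = None
        self.dev = 0.0            # EWMA of absolute deviation
        self.agile_gain = agile_gain
        self.stable_gain = stable_gain
        self.band = band

    def update(self, sample):
        if self.agile is None:    # first sample seeds both filters
            self.agile = self.stable = sample
            return sample
        self.dev = 0.75 * self.dev + 0.25 * abs(sample - self.stable)
        self.agile += self.agile_gain * (sample - self.agile)
        self.stable += self.stable_gain * (sample - self.stable)
        if abs(sample - self.stable) <= self.band * self.dev + 1e-9:
            return self.agile     # in control: trust the agile filter
        return self.stable        # outlier: fall back to the stable filter

f = FlipFlopFilter()
for _ in range(10):
    steady = f.update(100.0)      # steady RTT of 100 ms
spiked = f.update(500.0)          # sudden spike is damped
```

Distinguishable RTT behavior from such a filter is what lets NewReno-FF label a loss as wireless or congestion-induced.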
Abstract:
A secure sketch (defined by Dodis et al.) is an algorithm that on an input w produces an output s such that w can be reconstructed given its noisy version w' and s. Security is defined in terms of two parameters m and m̃: if w comes from a distribution of entropy m, then a secure sketch guarantees that the distribution of w conditioned on s has entropy m̃, where λ = m − m̃ is called the entropy loss. In this note we show that the entropy loss of any secure sketch (or, more generally, any randomized algorithm) on any distribution is no more than it is on the uniform distribution.
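Restating the claim in symbols (using the average min-entropy notation of Dodis et al.; this is a restatement of the abstract, not new material):

```latex
% Entropy loss of a randomized algorithm f on input distribution W,
% with internal randomness R:
\[
  \lambda(W) \;=\; H_\infty(W) \;-\; \widetilde{H}_\infty\bigl(W \mid f(W;R)\bigr).
\]
% The note's claim: for every distribution W over the domain,
\[
  \lambda(W) \;\le\; \lambda(U),
\]
% where U is the uniform distribution over the same domain.
```

So bounding the entropy loss on uniform inputs suffices to bound it for all inputs.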
Abstract:
In this paper, we present Slack Stealing Job Admission Control (SSJAC), a methodology for scheduling periodic firm-deadline tasks with variable resource requirements, subject to controllable Quality of Service (QoS) constraints. In a system that uses Rate Monotonic Scheduling, SSJAC augments the slack stealing algorithm of Thuel et al. with an admission control policy to manage the variability in the resource requirements of the periodic tasks. This enables SSJAC to take advantage of the 31% of utilization that RMS cannot use, as well as any utilization unclaimed by jobs that are not admitted into the system. Using SSJAC, each task in the system is assigned a resource utilization threshold that guarantees the minimal acceptable QoS for that task (expressed as an upper bound on the rate of missed deadlines). Job admission control is used to ensure that (1) only those jobs that will complete by their deadlines are admitted, and (2) tasks do not interfere with each other, so that a job can monopolize only the slack in the system, not the time guaranteed to jobs of other tasks. We have evaluated SSJAC against RMS and Statistical RMS (SRMS). Ignoring overhead issues, SSJAC consistently provides better performance than RMS in overload and, in certain conditions, better performance than SRMS. In addition, to evaluate the optimality of SSJAC in an absolute sense, we have characterized the performance of SSJAC by comparing it to an inefficient yet optimal scheduler for task sets with harmonic periods.
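The "31% that RMS cannot use" comes from the Liu & Layland least upper bound on schedulable utilization, n(2^(1/n) − 1), which tends to ln 2 ≈ 0.693 as the number of tasks grows — leaving roughly 31% of the processor unguaranteed, which is the slack SSJAC reclaims.

```python
import math

def rms_bound(n):
    """Liu & Layland least upper bound on schedulable utilization
    for n periodic tasks under Rate Monotonic Scheduling."""
    return n * (2 ** (1 / n) - 1)

# n = 1 -> 1.0; n = 2 -> ~0.828; large n -> ln 2 ~= 0.693
unused_fraction = 1 - rms_bound(10 ** 6)   # ~0.307, i.e. ~31%
```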
Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.
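The correlation idea behind loss-topology inference can be seen in the smallest case, a two-receiver tree: with independent Bernoulli losses per link, the success probability to each receiver factors through the shared link, so the shared-link success rate falls out of three measurable quantities. This is a minimal MINC-style sketch, not the paper's full unicast-probing framework.

```python
def shared_success_rate(p1, p2, p_both):
    """Estimate the shared-link success rate in a two-receiver loss tree.

    If success to receiver k factors as s0 * s_k (shared link times
    private link), then p1 = s0*s1, p2 = s0*s2, p_both = s0*s1*s2,
    and hence s0 = p1 * p2 / p_both.
    """
    return p1 * p2 / p_both

# Synthetic example with s0 = 0.9, s1 = 0.8, s2 = 0.95:
p1, p2, p_both = 0.9 * 0.8, 0.9 * 0.95, 0.9 * 0.8 * 0.95
s0 = shared_success_rate(p1, p2, p_both)
```

Estimating `p1`, `p2`, and `p_both` from probe outcomes, then recursing on larger trees, is the flavor of the inference the abstract describes.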