22 results for job sharing professional

in Greenwich Academic Literature Archive - UK


Relevance:

30.00%

Abstract:

This study examines the roll-out of a collaborative information repository or 'knowledge-base' in a medium-sized UK professional services firm over a six-year period. Data from usage logs provides the basis for analysis of the dynamic evolution of social networks around the repository during this time. The adoption pattern follows an 's-curve' and usage exhibits something of a power law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Fourier analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed.
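
A minimal sketch of the kind of analysis described, using synthetic monthly usage counts in place of the study's log data: a logistic (s-curve) fit to an adoption series and a Fourier spectrum of the detrended usage series to look for periodicity. All names and figures are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
months = np.arange(72)                        # six years of monthly observations

# Synthetic stand-ins for the study's log data: an s-shaped adopter count and
# a usage series with a yearly cycle layered on top of it.
adopters = 200 / (1 + np.exp(-(months - 36) / 6)) + rng.normal(0, 5, months.size)
usage = rng.poisson(np.clip(adopters, 0, None) * (1 + 0.3 * np.sin(2 * np.pi * months / 12)))

# 1. Adoption s-curve: fit a logistic to the adopter counts.
def logistic(t, K, r, t0):
    return K / (1 + np.exp(-r * (t - t0)))

params, _ = curve_fit(logistic, months, adopters, p0=[adopters.max(), 0.1, 36.0])
print("logistic fit (K, r, t0):", np.round(params, 2))

# 2. Periodicity: Fourier spectrum of the detrended usage series.
detrended = usage - logistic(months, *params)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(months.size, d=1.0)   # cycles per month
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero frequency
print(f"dominant period ~ {1 / dominant:.1f} months")
```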

Relevance:

30.00%

Abstract:

Approximately 85,000 part-time teaching staff working in further education (FE) and adult and community learning (ACL) are often seen as ‘a problem’. The intrinsic ‘part-timeness’ of these staff tends to marginalise them: they remain under-recognised and largely unsupported. Yet this picture is over-simplified. This article examines how part-time staff make creative use of professional autonomy and agency to mitigate problematic ‘casual employment’ conditions, reporting on results from Learning and Skills Development Agency-sponsored research (2002–2006) with 700 part-time staff in the learning and skills sector. The question of agency was reported as a key factor in part-time employment. Change is necessary for the professional agency of part-timers to be harnessed as the sector responds to ambitious sectoral ‘improvement’ agendas following the Foster Report and FE White Paper. Enhanced professionalisation for part-time staff needs greater recognition and inclusion in change agendas.

Relevance:

20.00%

Abstract:

The paper presents an improved version of the greedy open shop approximation algorithm with pre-ordering of jobs. It is shown that the algorithm compares favorably with the greedy algorithm with no pre-ordering by reducing either its absolute or relative error. In the case of three machines, the new algorithm creates a schedule with a makespan that is at most 3/2 times the optimal value.
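
A minimal sketch of a greedy open shop schedule with a pre-ordering of jobs, assuming (as one plausible choice, not necessarily the paper's rule) that jobs are pre-ordered by longest total processing time: whenever a machine falls idle, it starts any pending operation of a job that is not busy elsewhere.

```python
import heapq

def greedy_open_shop(p, order=None):
    """Greedy open-shop schedule: whenever a machine is idle, start any pending
    operation of a job that is not currently busy on another machine.
    p[j][i] is the processing time of job j on machine i; `order` is the job
    priority list used as the pre-ordering.  Returns the makespan."""
    n, m = len(p), len(p[0])
    if order is None:
        # One plausible pre-ordering (an assumption, not the paper's rule):
        # longest total processing time first.
        order = sorted(range(n), key=lambda j: -sum(p[j]))
    remaining = [set(range(m)) for _ in range(n)]   # machines each job still needs
    job_free = [0.0] * n                            # time each job becomes free
    machine_free = [0.0] * m                        # time each machine becomes free
    events, makespan = [0.0], 0.0
    while any(remaining):
        t = heapq.heappop(events)
        for i in range(m):
            if machine_free[i] > t:
                continue                            # machine i still busy at t
            for j in order:                         # scan jobs in priority order
                if i in remaining[j] and job_free[j] <= t:
                    finish = t + p[j][i]
                    machine_free[i] = job_free[j] = finish
                    remaining[j].discard(i)
                    makespan = max(makespan, finish)
                    heapq.heappush(events, finish)
                    break                           # machine i is now occupied
    return makespan

# Toy three-machine instance; the bound quoted above says the improved greedy
# makespan is within 3/2 of the optimum when m = 3.
jobs = [[3, 2, 2], [2, 3, 1], [1, 2, 3], [2, 1, 2]]
print("greedy makespan:", greedy_open_shop(jobs))
```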

Relevance:

20.00%

Abstract:

We consider two “minimum” NP-hard job shop scheduling problems to minimize the makespan. In one of the problems every job has to be processed on at most two out of three available machines. In the other problem there are two machines, and a job may visit one of the machines twice. For each problem, we define a class of heuristic schedules in which certain subsets of operations are kept as blocks on the corresponding machines. We show that for each problem the value of the makespan of the best schedule in that class cannot be less than 3/2 times the optimal value, and present algorithms that guarantee a worst-case ratio of 3/2.
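
The 3/2 ratios above are measured against the optimal makespan, which in job shop problems is commonly bounded from below by the heaviest machine workload and the longest job. A minimal sketch of that standard lower bound, on an illustrative instance of the second problem type:

```python
def job_shop_lower_bound(jobs):
    """Standard lower bound on the job-shop makespan: the optimum can be no
    smaller than the heaviest machine workload or the longest job.
    Each job is a list of (machine, processing_time) operations in order."""
    machine_load = {}
    longest_job = 0
    for ops in jobs:
        longest_job = max(longest_job, sum(t for _, t in ops))
        for machine, t in ops:
            machine_load[machine] = machine_load.get(machine, 0) + t
    return max(max(machine_load.values()), longest_job)

# Illustrative two-machine instance in which one job visits machine 0 twice.
jobs = [[(0, 3), (1, 2), (0, 2)],
        [(1, 4), (0, 1)],
        [(0, 2), (1, 3)]]
print("lower bound on the optimal makespan:", job_shop_lower_bound(jobs))
```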

Relevance:

20.00%

Abstract:

The performance of loadsharing algorithms for heterogeneous distributed systems is investigated by simulation. The systems considered are networks of workstations (nodes) which differ in processing power. Two parameters are proposed for characterising system heterogeneity, namely the variance and skew of the distribution of processing power among the network nodes. A variety of networks are investigated, with the same number of nodes and total processing power, but with the processing power distributed differently among the nodes. Two loadsharing algorithms are evaluated, at overall system loadings of 50% and 90%, using job response time as the performance metric. Comparison is made with the ideal situation of ‘perfect sharing’, where it is assumed that the communication delays are zero and that complete knowledge is available about job lengths and the loading at the different nodes, so that an arriving job can be sent to the node where it will be completed in the shortest time. The algorithms studied are based on those already in use for homogeneous networks, but were adapted to take account of system heterogeneity. Both algorithms take into account the differences in the processing powers of the nodes in their location policies, but differ in the extent to which they ‘discriminate’ against the slower nodes. It is seen that the relative performance of the two is strongly influenced by the system utilisation and the distribution of processing power among the nodes.
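
A minimal sketch of the 'perfect sharing' baseline: with zero communication delay and complete knowledge of job lengths and node backlogs, an arriving job is sent to the node where it would complete earliest. The names and the toy workload are illustrative.

```python
def perfect_sharing_node(backlogs, powers, job_length):
    """Pick the node where the arriving job completes earliest, assuming full
    knowledge of backlogs (in work units), node powers (work units per second)
    and the job's length, and zero communication delay."""
    finish_times = [(backlog + job_length) / power
                    for backlog, power in zip(backlogs, powers)]
    return min(range(len(powers)), key=finish_times.__getitem__)

# Heterogeneous network: same total processing power, distributed unevenly.
powers = [4.0, 2.0, 1.0, 1.0]        # processing power of each node
backlogs = [6.0, 1.0, 0.5, 3.0]      # outstanding work queued at each node
print("send job to node", perfect_sharing_node(backlogs, powers, job_length=2.0))
```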

Relevance:

20.00%

Abstract:

The paper considers the job shop scheduling problem to minimize the makespan. It is assumed that each job consists of at most two operations, one of which is to be processed on one of m⩾2 machines, while the other operation must be performed on a single bottleneck machine, the same for all jobs. For this strongly NP-hard problem we present two heuristics with improved worst-case performance. One of them guarantees a worst-case performance ratio of 3/2. The other algorithm creates a schedule with a makespan that exceeds the largest machine workload by at most the length of the largest operation.
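
A minimal sketch of the guarantee quoted for the second heuristic: for a given instance, the promised makespan is at most the largest machine workload plus the length of the largest operation. The instance encoding below is illustrative.

```python
def guaranteed_makespan_bound(jobs, m):
    """Bound delivered by the second heuristic: largest machine workload plus
    the largest single operation.  Each job is a list of (machine, time) pairs,
    where machine 0 denotes the common bottleneck machine and machines
    1..m denote the other machines."""
    workload = [0.0] * (m + 1)
    longest_op = 0.0
    for ops in jobs:
        for machine, t in ops:
            workload[machine] += t
            longest_op = max(longest_op, t)
    return max(workload) + longest_op

# Each job has at most two operations: one on the bottleneck (machine 0)
# and possibly one on one of m = 3 other machines.
jobs = [[(0, 2), (1, 3)], [(2, 4), (0, 1)], [(0, 3)], [(3, 2), (0, 2)]]
print("the heuristic's makespan is at most", guaranteed_makespan_bound(jobs, m=3))
```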

Relevance:

20.00%

Abstract:

In this paper, we study a problem of scheduling and batching on two machines in a flow-shop and open-shop environment. Each machine processes operations in batches, and the processing time of a batch is the sum of the processing times of the operations in that batch. A setup time, which depends only on the machine, is required before a batch is processed on a machine, and all jobs in a batch remain at the machine until the entire batch is processed. The aim is to make batching and sequencing decisions, which specify a partition of the jobs into batches on each machine, and a processing order of the batches on each machine, respectively, so that the makespan is minimized. The flow-shop problem is shown to be strongly NP-hard. We demonstrate that there is an optimal solution with the same batches on the two machines; we refer to these as consistent batches. A heuristic is developed that selects the best schedule among several with one, two, or three consistent batches, and is shown to have a worst-case performance ratio of 4/3. For the open-shop, we show that the problem is NP-hard in the ordinary sense. By proving the existence of an optimal solution with one, two or three consistent batches, a close relationship is established with the problem of scheduling two or three identical parallel machines to minimize the makespan. This allows a pseudo-polynomial algorithm to be derived, and various heuristic methods to be suggested.
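
A minimal sketch of how the makespan of a given consistent batching might be evaluated on the two flow-shop machines, assuming non-anticipatory setups and that a batch reaches the second machine only once it has finished on the first. A heuristic of the kind described would compare several such batchings with one, two or three batches.

```python
def flow_shop_batch_makespan(batches, p1, p2, s1, s2):
    """Makespan of a consistent batching in the two-machine flow shop.
    `batches` is a list of lists of job indices; p1/p2 give per-job processing
    times on machines 1 and 2; s1/s2 are the machine-dependent setup times.
    A batch reaches machine 2 only when all of it has finished on machine 1."""
    c1 = c2 = 0.0
    for batch in batches:
        c1 += s1 + sum(p1[j] for j in batch)               # done on machine 1
        c2 = max(c1, c2) + s2 + sum(p2[j] for j in batch)  # then on machine 2
    return c2

p1 = [3, 2, 4, 1]
p2 = [2, 3, 1, 2]
single = [[0, 1, 2, 3]]
split = [[0, 1], [2, 3]]
print("one batch:   ", flow_shop_batch_makespan(single, p1, p2, s1=2, s2=2))
print("two batches: ", flow_shop_batch_makespan(split, p1, p2, s1=2, s2=2))
```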

Relevance:

20.00%

Abstract:

We consider the problem of scheduling independent jobs on two machines in an open shop, a job shop and a flow shop environment. Both machines are batching machines, which means that several operations can be combined into a batch and processed simultaneously on a machine. The batch processing time is the maximum processing time of operations in the batch, and all operations in a batch complete at the same time. Such a situation may occur, for instance, during the final testing stage of circuit board manufacturing, where burn-in operations are performed in ovens. We consider cases in which there is no restriction on the size of a batch on a machine, and in which a machine can process only a bounded number of operations in one batch. For most of the possible combinations of restrictions, we establish the complexity status of the problem.
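
As background to the machine model (rather than the two-machine results themselves), a minimal sketch of a single bounded batching machine of this type, using the standard rule of packing jobs longest-first into full batches; names and data are illustrative.

```python
def full_batch_lpt(times, capacity):
    """Group jobs on a single batching machine of the type described: a batch's
    processing time is the longest job in it, and a batch holds at most
    `capacity` jobs.  Jobs are sorted longest-first and packed into full
    batches, a standard rule for keeping the makespan small."""
    order = sorted(times, reverse=True)
    batches = [order[i:i + capacity] for i in range(0, len(order), capacity)]
    makespan = sum(max(batch) for batch in batches)   # batch time = max in batch
    return batches, makespan

batches, makespan = full_batch_lpt([7, 3, 5, 2, 6, 4], capacity=2)
print(batches, "makespan:", makespan)
```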

Relevance:

20.00%

Abstract:

The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing service quality issues to the forefront. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at local level. Load sharing in clusters can contribute to the provision of a high-quality service by exploiting both static and dynamic information. This paper presents a load sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach. Load information is gathered continuously before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
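
A minimal sketch, with hypothetical names, of the kind of decision such a scheme makes: load reports gathered ahead of time are cached, and an arriving task is allocated to the node that looks least loaded relative to its capacity, ignoring reports that have gone stale.

```python
import time

class LoadCache:
    """Cache of remote-node load reports, gathered proactively (ahead of need).
    Hypothetical sketch: a real scheme would receive these reports over the
    network and refine the ranking with further static and dynamic information."""
    def __init__(self, max_age=5.0):
        self.reports = {}          # node -> (load, capacity, timestamp)
        self.max_age = max_age

    def update(self, node, load, capacity):
        self.reports[node] = (load, capacity, time.monotonic())

    def best_node(self):
        now = time.monotonic()
        fresh = {n: load / capacity
                 for n, (load, capacity, ts) in self.reports.items()
                 if now - ts <= self.max_age}          # drop stale reports
        return min(fresh, key=fresh.get) if fresh else None

cache = LoadCache()
cache.update("node-a", load=8.0, capacity=4.0)   # relative load 2.0
cache.update("node-b", load=3.0, capacity=2.0)   # relative load 1.5
print("allocate task to", cache.best_node())
```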

Relevance:

20.00%

Abstract:

This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response-time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
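
A minimal sketch of the proactive idea, with hypothetical names: a background thread refreshes a cached view of node loads before any placement decision needs it, and a task is then placed non-preemptively on the node that currently looks least loaded.

```python
import random
import threading
import time

class ProactiveCollector:
    """Sketch of proactive load collection: a background thread refreshes the
    cached load of each node *before* any placement decision needs it, so a
    new task can be placed immediately, without waiting for a load probe.
    `probe` stands in for whatever mechanism reports a node's current load."""
    def __init__(self, nodes, probe, interval=1.0):
        self.nodes, self.probe, self.interval = nodes, probe, interval
        self.snapshot = {n: probe(n) for n in nodes}
        threading.Thread(target=self._refresh, daemon=True).start()

    def _refresh(self):
        while True:
            for node in self.nodes:
                self.snapshot[node] = self.probe(node)   # cached ahead of need
            time.sleep(self.interval)

    def place_task(self):
        # Non-preemptive placement: the task is assigned once, to the node
        # that currently looks least loaded, and is never migrated afterwards.
        return min(self.snapshot, key=self.snapshot.get)

# Hypothetical probe returning a node's load; a real scheme would query the OS.
loads = {"n1": 0.7, "n2": 0.2, "n3": 0.5}
collector = ProactiveCollector(loads, probe=lambda n: loads[n] + random.uniform(0, 0.1))
time.sleep(1.5)
print("place new task on", collector.place_task())
```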

Relevance:

20.00%

Abstract:

In this paper, we provide a unified approach to solving preemptive scheduling problems with uniform parallel machines and controllable processing times. We demonstrate that a single criterion problem of minimizing total compression cost subject to the constraint that all due dates should be met can be formulated in terms of maximizing a linear function over a generalized polymatroid. This justifies applicability of the greedy approach and allows us to develop fast algorithms for solving the problem with arbitrary release and due dates as well as its special case with zero release dates and a common due date. For the bicriteria counterpart of the latter problem we develop an efficient algorithm that constructs the trade-off curve for minimizing the compression cost and the makespan.
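
A minimal sketch of the classical greedy procedure for maximizing a linear function over a polymatroid, which is the structural fact the paper exploits; the rank function below is an illustrative submodular example, not the one arising from the scheduling problem.

```python
def polymatroid_greedy(weights, rank):
    """Classical greedy for maximizing sum_j w_j * x_j over the polymatroid
    {x >= 0 : x(S) <= rank(S) for all S}, with `rank` submodular, monotone and
    rank(empty set) = 0: take elements in non-increasing weight order and give
    each the largest increment the rank function still allows."""
    order = sorted((j for j, w in enumerate(weights) if w > 0),
                   key=lambda j: -weights[j])
    x = [0.0] * len(weights)
    prefix, prev_rank = set(), 0.0
    for j in order:
        prefix.add(j)
        new_rank = rank(prefix)
        x[j] = new_rank - prev_rank        # marginal capacity granted to j
        prev_rank = new_rank
    return x

# Illustrative submodular rank function: a capped sum of element capacities.
caps = [4.0, 3.0, 5.0, 2.0]
rank = lambda S: min(sum(caps[j] for j in S), 8.0)

weights = [2.0, 5.0, 1.0, 3.0]
x = polymatroid_greedy(weights, rank)
print("greedy solution:", x, "value:", sum(w * v for w, v in zip(weights, x)))
```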

Relevance:

20.00%

Abstract:

Comments on the Chancery Division decision in Clarence House Ltd v National Westminster Bank Plc on whether the alienation covenant in a lease of commercial premises had been breached by the tenant effecting a virtual assignment of it, under which all the economic benefits and burdens of the lease were transferred to a third party without there being any actual assignment of the leasehold interest or change in occupancy.

Relevance:

20.00%

Abstract:

This paper studies a two-level supply chain consisting of a components supplier and a product assembly manufacturer, in which the manufacturer shares the investment in shortening supply lead time. The objective of this research is to investigate the benefits of the cost sharing strategy and of adopting component commonality. The results of the numerical analysis demonstrate that using component commonality can help reduce the total cost, especially when the manufacturer shares a higher fraction of the cost of the investment in shortening supply lead time.
