9 results for In-process
in Greenwich Academic Literature Archive - UK
Abstract:
In this paper, a knowledge-based approach is proposed for the management of temporal information in process control. A common-sense theory of temporal constraints over processes/events, allowing relative temporal knowledge, is employed here as the temporal basis for the system. This theory supports duration reasoning and consistency checking, and accepts relative temporal knowledge in a form normally used by human operators. An architecture for process control is proposed which centres on a historical database consisting of events and processes, together with the qualitative temporal relationships between their occurrences. The dynamics of the system are expressed by means of three types of rule: database updating rules, process control rules, and data deletion rules. An example is provided in the form of a life scheduler to illustrate the database and the rule sets. The example demonstrates the transitions of the database over time, and identifies the procedure in terms of a state transition model for the application. The dividing instant problem for logical inference is discussed with reference to this process control example, and it is shown how the temporal theory employed can be used to deal with the problem.
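As a rough sketch of the architecture described above (the names, fields and rules below are illustrative assumptions, not taken from the paper), the historical database and the three rule types might be organised along these lines:

```python
# Illustrative sketch only: a historical database of event/process occurrences
# with qualitative temporal relations, driven by three kinds of rule.
from dataclasses import dataclass, field

@dataclass
class Occurrence:
    name: str   # e.g. "valve_open" (a process) or "alarm_raised" (an event)
    kind: str   # "process" or "event"

@dataclass
class HistoricalDatabase:
    occurrences: dict = field(default_factory=dict)   # name -> Occurrence
    relations: list = field(default_factory=list)     # (a, "before"/"meets"/..., b)

    def add(self, occ, rels=()):
        self.occurrences[occ.name] = occ
        self.relations.extend(rels)

def updating_rule(db, observation):
    """Database updating rule: record a newly observed event or process."""
    db.add(Occurrence(observation["name"], observation["kind"]),
           observation.get("relations", ()))

def control_rule(db):
    """Process control rule: derive a control action from the recorded history."""
    if "alarm_raised" in db.occurrences and "pump_stopped" not in db.occurrences:
        return "stop_pump"
    return None

def deletion_rule(db, obsolete):
    """Data deletion rule: forget occurrences no longer needed for control."""
    for name in obsolete:
        db.occurrences.pop(name, None)
    db.relations = [r for r in db.relations
                    if r[0] in db.occurrences and r[2] in db.occurrences]

db = HistoricalDatabase()
updating_rule(db, {"name": "valve_open", "kind": "process"})
updating_rule(db, {"name": "alarm_raised", "kind": "event",
                   "relations": [("valve_open", "before", "alarm_raised")]})
print(control_rule(db))   # -> "stop_pump"
```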
Abstract:
A general system is presented in this paper which supports the expression of relative temporal knowledge in process control and management. The system accepts knowledge expressed as Allen's temporal relations over time elements, which may be either intervals or points. The objectives and characteristics of two major temporal attributes, i.e. ‘transaction time’ and ‘valid time’, are described. A graphical representation for the temporal network is presented, and inference over the network can be made by means of a consistency checker operating on this representation. An illustrative example of the system as applied to process control and management is provided.
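A minimal sketch of the kind of consistency checking described above, assuming the temporal network is held as a graph of constraint sets. For brevity it uses only the point relations before/equal/after rather than the full set of Allen's relations over intervals and points; the propagation idea (intersecting composed relations along paths) is the same:

```python
# Path-consistency over a small qualitative temporal network (point relations only).
from itertools import product

ALL = {"<", "=", ">"}
INV = {"<": ">", "=": "=", ">": "<"}
# Composition table: given a r1 b and b r2 c, COMP[(r1, r2)] holds the possible relations of a to c.
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): ALL,
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): ALL,   (">", "="): {">"}, (">", ">"): {">"},
}

def consistent(nodes, constraints):
    """constraints: dict mapping (a, b) -> set of admissible relations between time elements."""
    net = {(a, b): set(ALL) for a, b in product(nodes, nodes) if a != b}
    for (a, b), rels in constraints.items():
        net[(a, b)] &= rels
        net[(b, a)] &= {INV[r] for r in rels}
    changed = True
    while changed:
        changed = False
        for a, b, c in product(nodes, repeat=3):
            if len({a, b, c}) < 3:
                continue
            composed = {r for r1 in net[(a, b)] for r2 in net[(b, c)] for r in COMP[(r1, r2)]}
            refined = net[(a, c)] & composed
            if not refined:
                return False                      # the network is inconsistent
            if refined != net[(a, c)]:
                net[(a, c)] = refined
                net[(c, a)] = {INV[r] for r in refined}
                changed = True
    return True

# "t1 before t2" and "t2 before t3" force "t1 before t3", so adding "t3 before t1" is inconsistent.
print(consistent(["t1", "t2", "t3"],
                 {("t1", "t2"): {"<"}, ("t2", "t3"): {"<"}, ("t3", "t1"): {"<"}}))  # False
```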
Abstract:
Mathematical models of straight-grate pellet induration processes have been developed and carefully validated by a number of workers over the past two decades. However, the subsequent exploitation of these models in process optimization has been less clear, and requires a sound understanding of how the key factors control the operation. In this article, we show how a thermokinetic model of pellet induration, validated against operating data from one of the Iron Ore Company of Canada (IOCC) lines in Canada, can be exploited in process optimization from the perspective of fuel efficiency, production rate, and product quality. Most existing processes are restricted in the options available for process optimization. Here, we review the role of each of the drying (D), preheating (PH), firing (F), after-firing (AF), and cooling (C) phases of the induration process. We then use the induration process model to evaluate whether the first drying zone is best operated with up-draft or down-draft gas flow, and we optimize the on-gas temperature profile in the hood of the PH, F, and AF zones, to reduce the burner fuel by at least 10 pct over the long term. Finally, we consider how efficient and flexible the process could be if some of the structural constraints were removed (i.e., addressed at the design stage). The analysis suggests it should be possible to reduce the burner fuel by 35 pct, easily increase production by 5+ pct, and improve pellet quality.
Abstract:
Thermosetting polymer materials are widely utilised in modern microelectronics packaging technology. These materials are used for a number of functions, such as device bonding, structural support and physical protection of semiconductor dies. Typically, convection heating systems are used to raise the temperature of the materials to expedite the polymerisation process. The convection cure process has a number of drawbacks, including process durations generally in excess of 1 hour and the requirement to heat the entire printed circuit board assembly, inducing thermomechanical stresses which affect device reliability. Microwave energy is able to raise the temperature of materials in a rapid, controlled manner. As the microwave energy penetrates into the polymer materials, the heating can be considered volumetric, i.e. the rate of heating is approximately constant throughout the material. This enables a maximum heating rate far greater than is available with convection oven systems, which only raise the surface temperature of the polymer material and rely on thermal conduction to transfer heat energy into the bulk. The high heating rate, combined with the ability to vary the operating power of the microwave system, enables extremely rapid cure processes. Microwave curing of a commercially available encapsulation material has been studied experimentally and through the use of numerical modelling techniques. The material assessed is Henkel EO-1080, a single-component thermosetting epoxy. The producer suggests three typical convection oven cure options for EO-1080: 20 min at 150 °C, 90 min at 140 °C, or 120 min at 110 °C. Rapid curing of materials of this type using advanced microwave systems, such as the FAMOBS system [1], is of great interest to microelectronics system manufacturers as it has the potential to reduce manufacturing costs, increase device reliability and enable new device designs. Experimental analysis has demonstrated that, in a realistic chip-on-board encapsulation scenario, the polymer material can be fully cured in approximately one minute. This corresponds to a reduction in cure time of approximately 95 percent relative to the convection oven process. Numerical assessment of the process [2] also suggests that cure times of approximately 70 seconds are feasible, whilst indicating that the decrease in process duration comes at the expense of variation in the degree of cure within the polymer.
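For illustration only, a simple nth-order Arrhenius cure-kinetics integration shows why holding the bulk of the polymer at a higher temperature, as rapid volumetric microwave heating permits, shortens the time to reach a target degree of cure. The kinetic parameters below are invented for this sketch; they are not measured values for Henkel EO-1080, nor the model used in [2]:

```python
# Toy cure-kinetics sketch: d(alpha)/dt = k(T) * (1 - alpha)^n with Arrhenius k(T).
import math

A = 1.0e7          # pre-exponential factor, 1/s (assumed)
EA = 80_000.0      # activation energy, J/mol (assumed)
R = 8.314          # gas constant, J/(mol*K)
N = 1.5            # reaction order (assumed)

def time_to_cure(temp_c, target=0.95, dt=0.1):
    """Time (s) for the degree of cure to reach `target` at a constant temperature (Euler integration)."""
    k = A * math.exp(-EA / (R * (temp_c + 273.15)))
    alpha, t = 0.0, 0.0
    while alpha < target:
        alpha += k * (1.0 - alpha) ** N * dt
        t += dt
    return t

# Higher hold temperature -> dramatically shorter cure time.
for temp in (120, 160, 200):
    print(f"{temp} C: ~{time_to_cure(temp) / 60:.1f} min to 95% cure")
```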
Abstract:
The measurement of particle velocities in two-phase gas-solid systems has wide application in flow monitoring in process plant, where two-phase gas-solid systems are frequently employed in the form of pneumatic conveyors and solid fuel injection systems. Such measurements have proved difficult to make reliably in industrial environments. This paper details particle velocity measurements made in a two-phase gas-solid flow utilising a laser Doppler velocimetry system. Tests were carried out using both wheat flour and pulverised coal as the solids phase, with air being used as the gaseous phase throughout. A pipeline of circular cross-section with a diameter of 53 mm was used for the test work, with air velocities ranging from 25 to 45 m/s and suspension densities ranging from 0.001 kg to 1 kg of solids per cubic metre of air. Details of both the test equipment used and the results of the measurements are presented.
Abstract:
The historic pattern of public sector pay movements in the UK has been counter-cyclical with private sector pay growth. Periods of relative decline in public sector pay against private sector movements have been followed by periods of ‘catch-up’ as Government controls are eased to remedy skill shortages or deal with industrial unrest among public servants. Public sector ‘catch-up’ increases have therefore come at awkward times for Government, often coinciding with economic downturn in the private sector (Trinder 1994, White 1996, Bach 2002). Several such epochs of public sector pay policy can be identified since the 1970s. The question is whether the current limits on public sector pay being imposed by the UK Government fit this historic pattern or whether the pattern has been broken and, if so, how and why. This paper takes a historical approach in considering the context of public sector pay determination in the UK. In particular, the paper seeks to review the period since Labour came into office (White and Hatchett 2003) and the various pay ‘modernisation’ exercises that have been in process over the last decade (White 2004). The paper draws on national statistics on public sector employment and pay levels to chart changes in public sector pay policy, and draws on secondary literature to consider both Government policy intentions and the impact of these policies on public servants.
Abstract:
This paper studies two models of two-stage processing with no-wait in process. The first model is the two-machine flow shop, and the other is the assembly model. For both models we consider the problem of minimizing the makespan, provided that the setup and removal times are separated from the processing times. Each of these scheduling problems is reduced to the Traveling Salesman Problem (TSP). We show that, in general, the assembly problem is NP-hard in the strong sense. On the other hand, the two-machine flow shop problem reduces to the Gilmore-Gomory TSP, and is solvable in polynomial time. The same holds for the assembly problem under some reasonable assumptions. Using these and existing results, we provide a complete complexity classification of the relevant two-stage no-wait scheduling models.
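As a hedged illustration of the reduction mentioned above (stated here for the classical two-machine no-wait flow shop without setup and removal times, not the paper's more general setting), the makespan of a fixed job sequence can be computed directly, and minimising it over all sequences is a Travelling Salesman Problem whose inter-job distances max(b_i, a_j) have the Gilmore-Gomory structure, hence the polynomial solvability:

```python
# Makespan of a two-machine no-wait flow shop for a given job sequence.
# Job j needs a[j] time on machine 1 and b[j] on machine 2, with no waiting between stages.
def no_wait_makespan(seq, a, b):
    start = 0.0
    for i, j in zip(seq, seq[1:]):
        # Job j must start late enough that machine 1 is free and that machine 2
        # is free exactly when j finishes on machine 1 (the no-wait condition).
        start += a[i] + max(0.0, b[i] - a[j])
    last = seq[-1]
    return start + a[last] + b[last]

# Equivalently, makespan = a[first] + sum over consecutive pairs of max(b[i], a[j]) + b[last],
# i.e. a TSP with distances d(i, j) = max(b[i], a[j]) once a dummy job with zero times closes the tour.
a = {1: 2.0, 2: 3.0, 3: 1.0}
b = {1: 4.0, 2: 1.0, 3: 2.0}
print(no_wait_makespan([1, 3, 2], a, b))   # -> 10.0
```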
Abstract:
We study a two-machine flow shop scheduling problem with no-wait in process, in which one of the machines is not available during a specified time interval. We consider three scenarios of handling the operation affected by the nonavailability interval. Its processing may (i) start from scratch after the interval, or (ii) be resumed from the point of interruption, or (iii) be partially restarted after the interval. The objective is to minimize the makespan. We present an approximation algorithm that, for all these scenarios, delivers a worst-case ratio of 3/2. For the second scenario, we offer a 4/3-approximation algorithm.
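The three scenarios can be paraphrased as follows (an illustrative sketch; the fraction alpha of already-processed work that must be redone in the partially-restarted case is an assumption, not the paper's definition):

```python
# Remaining work on the operation interrupted by the nonavailability interval.
# p:    total processing time of the affected operation
# done: amount processed before the interval starts
def remaining_after_interval(p, done, scenario, alpha=0.5):
    if scenario == "restart":              # (i) start again from scratch
        return p
    if scenario == "resumable":            # (ii) resume from the point of interruption
        return p - done
    if scenario == "partially-restarted":  # (iii) redo an assumed fraction alpha of the done part
        return (p - done) + alpha * done
    raise ValueError(scenario)

p, done = 10.0, 6.0
for s in ("restart", "resumable", "partially-restarted"):
    print(s, remaining_after_interval(p, done, s))
```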
Abstract:
In this paper we provide a fairly complete complexity classification of various versions of the two-machine permutation flow shop scheduling problem to minimize the makespan in which some of the jobs have to be processed with no-wait in process. For some versions, we offer a fully polynomial-time approximation scheme and a 4/3-approximation algorithm.