859 results for Design time
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the properties of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must still be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow the re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
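For reference, the standard two-phase commit protocol that the extended protocols build on can be sketched in a few lines. The Python below is a minimal, illustrative model, not the thesis's protocol: all class and method names are invented, and the failures, timeouts and logging that the fault-tolerant extensions address are omitted. A coordinator collects votes in phase one and broadcasts commit in phase two only if every participant voted yes.

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Decision(Enum):
    COMMIT = "commit"
    ABORT = "abort"

class Participant:
    """One process holding its share of the coordinated atomic action."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "working"

    def prepare(self):
        # A real-time extension would also vote NO on a deadline overrun.
        self.state = "prepared"
        return Vote.YES if self.can_commit else Vote.NO

    def finish(self, decision):
        self.state = decision.value

def coordinate(participants):
    """Phase 1: collect votes. Phase 2: commit only if all voted YES."""
    votes = [p.prepare() for p in participants]
    decision = Decision.COMMIT if all(v is Vote.YES for v in votes) else Decision.ABORT
    for p in participants:
        p.finish(decision)
    return decision

if __name__ == "__main__":
    procs = [Participant("press"), Participant("feeder", can_commit=False)]
    print(coordinate(procs))   # Decision.ABORT: one participant voted NO
```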
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped onto the software processes that control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
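As a rough illustration of the estimation step, a discrete-time Luenberger-style observer for one partition updates its estimate as x̂(k+1) = A x̂(k) + B u(k) + L (y(k) − C x̂(k)). The sketch below uses invented matrices for a two-state subsystem; it is not the decentralised scheme of the thesis, which would additionally exchange information between partitions over the communications network.

```python
import numpy as np

def observer_step(A, B, C, L, x_hat, u, y):
    """One discrete-time Luenberger update:
    x_hat' = A x_hat + B u + L (y - C x_hat)."""
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)

# Illustrative two-state subsystem from a structural partition (values invented).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])      # only the first state is measured locally
L = np.array([[0.5], [0.2]])    # gain chosen so (A - L C) is stable

x, x_hat = np.array([1.0, -1.0]), np.zeros(2)
for _ in range(50):
    u = np.array([0.0])
    y = C @ x                           # local measurement
    x_hat = observer_step(A, B, C, L, x_hat, u, y)
    x = A @ x + (B @ u).ravel()         # true subsystem dynamics
print(np.round(x_hat - x, 6))           # estimation error decays toward zero
```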
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between the formalisms are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems; this thesis adapts them for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri net based techniques is the complexity associated with generating the reachability graph. This thesis addresses the problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
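The reachability construction underlying this kind of analysis can be illustrated with a toy Petri net. The sketch below (the net and all names are invented) enumerates reachable markings breadth-first and records, at each marking, the transitions enabled together; the thesis's concurrency sets are a more refined notion, used to generate only the partial graph pertaining to a state of interest.

```python
from collections import deque

# A toy Petri net: each transition maps pre-place to post-place token counts.
NET = {
    "t1": ({"p1": 1}, {"p2": 1, "p3": 1}),   # fork into two concurrent branches
    "t2": ({"p2": 1}, {"p4": 1}),
    "t3": ({"p3": 1}, {"p5": 1}),
    "t4": ({"p4": 1, "p5": 1}, {"p1": 1}),   # join
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}   # drop empty places

def reachability(m0):
    """Breadth-first reachability graph. The transitions enabled together at
    a marking give a crude picture of what may occur concurrently there."""
    seen, queue, graph = set(), deque([m0]), {}
    while queue:
        m = queue.popleft()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        graph[key] = [t for t, (pre, _) in NET.items() if enabled(m, pre)]
        for t in graph[key]:
            queue.append(fire(m, *NET[t]))
    return graph

for marking, ts in reachability({"p1": 1}).items():
    print(dict(marking), "enabled:", ts)
```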
Abstract:
A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
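The flavour of a formal SFC firing rule can be conveyed with a small sketch. Following Grafcet-like semantics rather than the thesis's exact definition, a transition is fireable when all of its upstream steps are active and its condition holds, and all simultaneously fireable transitions fire together against the same snapshot of the chart state.

```python
# A minimal sketch of an SFC/Grafcet-style firing rule (illustrative only,
# not the formal definition given in the thesis).

def fire_sfc(active, transitions, signals):
    """One evaluation cycle: find every fireable transition on the current
    snapshot, then deactivate upstream steps and activate downstream steps."""
    fireable = [(pre, post) for pre, cond, post in transitions
                if set(pre) <= active and cond(signals)]
    for pre, _ in fireable:
        active = active - set(pre)       # clear all upstream steps first
    for _, post in fireable:
        active = active | set(post)      # then set all downstream steps
    return active

# Two-step chart: S1 --[start]--> S2 --[done]--> S1
transitions = [
    (("S1",), lambda s: s["start"], ("S2",)),
    (("S2",), lambda s: s["done"],  ("S1",)),
]
state = {"S1"}
state = fire_sfc(state, transitions, {"start": True, "done": False})
print(state)   # {'S2'}: S1 deactivated, S2 activated
```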
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); and, finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in the system design specification, and performance analysis of the model, are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is a Petri net simulator that takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are presented to show how the tool works in the early design phase for fault prevention before the program is ever run.
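The kind of analysis the tool performs can be approximated by exhaustively exploring the rendezvous state space of channel-communicating processes. The sketch below is a hand-rolled model, not the tool itself: each process is a list of channel actions, a send ('!') and a receive ('?') on the same channel synchronise, and any reachable non-final state with no enabled communication is a deadlock. It finds the classic deadlock of two processes that use two channels in opposite orders.

```python
def deadlocks(processes):
    """Explore the rendezvous state space of Occam-style processes; report
    reachable non-final states with no enabled communication (deadlocks)."""
    n, final = len(processes), tuple(len(p) for p in processes)
    seen, stack, dead = set(), [(0,) * n], []
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        succs = []
        for i in range(n):
            for j in range(i + 1, n):
                if state[i] < final[i] and state[j] < final[j]:
                    (op_i, ch_i) = processes[i][state[i]]
                    (op_j, ch_j) = processes[j][state[j]]
                    # rendezvous: matching send/receive on the same channel
                    if ch_i == ch_j and {op_i, op_j} == {"!", "?"}:
                        nxt = list(state)
                        nxt[i] += 1
                        nxt[j] += 1
                        succs.append(tuple(nxt))
        if not succs and state != final:
            dead.append(state)
        stack.extend(succs)
    return dead

# Classic deadlock: two processes use channels a and b in opposite orders.
P = [("!", "a"), ("!", "b")]      # P sends on a, then on b
Q = [("?", "b"), ("?", "a")]      # Q receives on b, then on a
print(deadlocks([P, Q]))          # [(0, 0)]: deadlocked at the very start
```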
Abstract:
SINNMR (Sonically Induced Narrowing of the Nuclear Magnetic Resonance spectra of solids) is a novel technique that is being developed to enable the routine study of solids by nuclear magnetic resonance spectroscopy. SINNMR aims to narrow the broad resonances that are characteristic of solid-state NMR by inducing rapid incoherent motion of solid particles suspended in a support medium, using high-frequency ultrasound in the range 2-10 MHz. The width of the normally broad resonances from solids is due to incomplete averaging of several components of the total spin Hamiltonian, caused by the restrictions placed on molecular motion within a solid. At present, Magic Angle Spinning (MAS) NMR is the classical solid-state technique used to reduce line broadening, but this has associated problems, not least of which is the appearance of many spinning sidebands which confuse the spectra. It is hoped that SINNMR will offer a simple alternative, particularly as it does not produce spinning sidebands. The fundamental question of whether the use of ultrasound within a cryo-magnet will cause quenching has been investigated with success: even under the most extreme conditions of power, frequency and irradiation time, the magnet does not quench. The objective of this work is to design and construct a SINNMR probe for use in a superconducting cryo-magnet NMR spectrometer. A cell for such a probe has been constructed and incorporated into an adapted high-resolution broadband probe. It has been shown that the cell is capable of causing cavitation, up to 10 MHz, by running a series of ultrasonic reactions within it and observing the reaction products. It was found that the ultrasound was heating the sample to unacceptable temperatures, which necessitated the incorporation of temperature stabilisation devices. Work has been performed on the narrowing of the solid-state 23Na spectrum of tri-sodium phosphate using high-frequency ultrasound. Work has also been completed on the signal enhancement and T1 reduction of a liquid mixture and of a pure compound using ultrasound. Some preliminary "bench" experiments have been completed on a novel ultrasonic device designed to help minimise sample heating. The concept involves passing the ultrasound through a temperature-stabilised, liquid-filled funnel with a drum skin on the end that enables the passage of ultrasound into the sample. Bench experiments have shown that acoustic attenuation is low and that cavitation in the liquid beyond the device is still possible.
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk-and-doughnut quench column combined with a wet-walled electrostatic precipitator, which is mounted directly on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken, due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, the specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C, with a relative velocity of 12.1 m/s between the heated surface and the reacting biomass particle. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry-wood-fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and had longer operating runs been attempted to offset the product losses that occur when collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient, and modelling determined a liquid collection efficiency above 99% on a mass basis. This was validated by a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurements of the condensable product exiting the unit and confirmed a collection efficiency in excess of 99% on a mass basis.
Abstract:
Investigations into modelling techniques that depict the transport of discrete phases (gas bubbles or solid particles) and model biochemical reactions in a bubble column reactor are discussed here. The mixture model was used to calculate gas-liquid, solid-liquid and gas-liquid-solid interactions. Multiphase flow is a difficult phenomenon to capture, particularly in bubble columns, where the major driving force is caused by the injection of gas bubbles. The gas bubbles cause a large density difference to occur, which results in transient multi-dimensional fluid motion. Standard design procedures do not account for this transient motion, because of the simplifying assumption of steady plug flow. Computational fluid dynamics (CFD) can assist in expanding the understanding of complex flows in bubble columns by characterising the flow phenomena for many geometrical configurations. Therefore, CFD has a role in the education of chemical and biochemical engineers, providing examples of flow phenomena that many engineers may never experience, even through experimentation. The performance of the mixture model was investigated for three domains (plane, rectangular and cylindrical) and three flow models (laminar, k-ε turbulence and the Reynolds stresses). This investigation raised many questions about how gas-liquid interactions are captured numerically. To answer some of these questions, the analogy between thermal convection in a cavity and gas-liquid flow in bubble columns was invoked. This involved modelling the buoyant motion of air in a narrow cavity for a number of turbulence schemes, with the difference in density caused by a temperature gradient acting across the width of the cavity. Multiple vortices were obtained when the Reynolds stresses were utilised with the addition of a basic flow profile after each time step. To implement the three-phase models, an alternative mixture model was developed and compared against a commercially available mixture model for three turbulence schemes. The scheme employing just the Reynolds stresses model predicted the transient motion of the fluids quite well for both mixture models. Solid-liquid and then alternative formulations of the gas-liquid-solid model were compared against one another. The alternative form of the mixture model was found to perform particularly well for both gas and solid phase transport when calculating two- and three-phase flow. The improvement in the solutions obtained was a result of the inclusion of the Reynolds stresses model and of differences in the mixture models employed. The differences between the alternative mixture models lay in the volume fraction equation (the flux and deviatoric stress tensor terms) and in the viscosity formulation for the mixture phase.
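The algebraic heart of any mixture model is small: the phases share one momentum equation, with mixture properties weighted by phase volume fractions, ρ_m = Σ_k α_k ρ_k and u_m = Σ_k α_k ρ_k u_k / ρ_m. A point-value sketch with invented gas-liquid-solid values (this is only the weighting arithmetic, not a CFD solver):

```python
import numpy as np

def mixture_density(alphas, rhos):
    """rho_m = sum_k alpha_k rho_k (volume-fraction weighted)."""
    return sum(a * r for a, r in zip(alphas, rhos))

def mixture_velocity(alphas, rhos, velocities):
    """Mass-averaged mixture velocity u_m = sum_k alpha_k rho_k u_k / rho_m."""
    rho_m = mixture_density(alphas, rhos)
    return sum(a * r * np.asarray(u)
               for a, r, u in zip(alphas, rhos, velocities)) / rho_m

# Illustrative point values for gas, liquid and solid phases (invented):
alphas = [0.10, 0.85, 0.05]                      # volume fractions, sum to 1
rhos = [1.2, 998.0, 2500.0]                      # kg/m^3
vels = [[0.0, 0.30], [0.0, 0.05], [0.0, 0.04]]   # m/s, (x, y) components
print(mixture_density(alphas, rhos))             # ~973.4 kg/m^3
print(mixture_velocity(alphas, rhos, vels))
```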
Abstract:
Gas absorption, the removal of one or more constituents from a gas mixture, is widely used in chemical processes. In many gas absorption processes the gas mixture is already at high pressure, and in recent years organic solvents have been developed for physical absorption at high pressure followed by low-pressure regeneration of the solvent and recovery of the absorbed gases. Until now, the discovery of new solvents has usually been by expensive and time-consuming trial-and-error laboratory tests. This work describes a new approach, whereby a solvent is selected from considerations of its molecular structure, by applying recently published methods of predicting gas solubility from the molecular groups which make up the solvent molecule. The removal of the acid gases carbon dioxide and hydrogen sulfide from methane or hydrogen was used as a commercially important example. After a preliminary assessment to identify promising molecular groups, more than eighty new solvent molecules were designed and evaluated by predicting gas solubility. The other important physical properties were also predicted by appropriate theoretical procedures, and a commercially promising new solvent was chosen to have a high solubility for acid gases, a low solubility for methane and hydrogen, a low vapour pressure, and a low viscosity. The solvent chosen, of molecular structure CH3-CO-CH2-CH2-CO-CH3, was tested in the laboratory and shown to have physical properties, except for vapour pressure, close to those predicted: gas solubilities were within 10% but lower than predicted, viscosity was within 10% but higher than predicted, and the vapour pressure was significantly lower than predicted. A computer program was written to predict gas solubility in the new solvent at the high pressures (25 bar) used in practice, based on the group contribution method of Skjold-Jørgensen (1984). Before being used with the new solvent, acetonyl acetone, the method was shown to be sufficiently accurate by comparing predicted values of gas solubility with experimental solubilities from the literature for 14 systems at up to 50 bar. A test of the commercial potential of the new solvent was made by means of two design studies which compared the size of plant and the approximate relative costs of absorbing acid gases with the new solvent against two commonly used solvents: refrigerated methanol (Rectisol process) and the dimethyl ether of polyethylene glycol (Selexol process). Both studies showed a significant advantage, in terms of capital and operating cost, for plant designed for the new solvent process.
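Group-contribution prediction has a simple generic form: a property such as the logarithm of gas solubility is approximated as a sum of contributions from the solvent's constituent molecular groups. The sketch below shows only the shape of such a screening calculation; the group set and coefficients are invented placeholders, not the published correlations used in the thesis.

```python
import math

# Invented placeholder contributions per group occurrence (not real data).
GROUP_CONTRIB = {"CH3": -0.15, "CH2": -0.08, "C=O": 0.42}

def predict_ln_solubility(groups):
    """ln(x_gas) ~ sum_i n_i * delta_i over the solvent's molecular groups."""
    return sum(n * GROUP_CONTRIB[g] for g, n in groups.items())

# Acetonyl acetone, CH3-CO-CH2-CH2-CO-CH3: 2x CH3, 2x CH2, 2x C=O.
groups = {"CH3": 2, "CH2": 2, "C=O": 2}
ln_x = predict_ln_solubility(groups)
print(ln_x, math.exp(ln_x))   # a screening figure for ranking candidate solvents
```

A screen like this only ranks candidates; the chosen molecules would still need full property prediction and laboratory confirmation, as the abstract describes.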
Abstract:
The requirement for systems to continue to operate satisfactorily in the presence of faults has led to the development of techniques for the construction of fault-tolerant software. This thesis addresses the problem of error detection and recovery in distributed systems which consist of a set of communicating sequential processes. A method is presented for the 'a priori' design of conversations for this class of distributed system. Petri nets are used to represent the state and to solve state reachability problems for concurrent systems. The dynamic behaviour of the system can be characterised by a state-change table derived from the state reachability tree. Systematic conversation generation is possible by defining a closed boundary on any branch of the state-change table, and relating the state-change table to process attributes ensures that all necessary processes are included in the conversation. The method also ensures properly nested conversations. An implementation of the conversation scheme using the concurrent language occam is proposed. The structure of the conversation is defined using the special features of occam, and the proposed implementation gives a structure which is independent of the application and of the number of processes involved. Finally, the integrity of inter-process communications is investigated. The basic communication primitives used in message-passing systems are seen to have deficiencies when applied to systems with safety implications. Using a Petri net model, a boundary for a time-out mechanism is proposed which will increase the integrity of a system that involves inter-process communications.
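The conversation-generation idea can be caricatured as a closure computation over a state-change table: every process that participates in a state change on some path inside the chosen boundary must enter the conversation. The table, states and process names below are invented for illustration; the thesis derives the real table from a Petri net reachability tree.

```python
# Invented state-change table: (state, processes involved, next state).
TABLE = [
    ("s0", {"P1", "P2"}, "s1"),
    ("s1", {"P2", "P3"}, "s2"),
    ("s2", {"P1"},       "s3"),
    ("s0", {"P4"},       "s4"),   # a branch outside the chosen boundary
]

def conversation_processes(start, end):
    """Collect the processes taking part in a state change on any path from
    `start` to `end`; all of them must join the conversation."""
    members = set()

    def walk(state, path):
        if state == end:
            for _, procs, _ in path:
                members |= procs
            return
        for src, procs, dst in TABLE:
            if src == state and dst not in [e[2] for e in path]:
                walk(dst, path + [(src, procs, dst)])

    walk(start, [])
    return members

print(conversation_processes("s0", "s3"))   # {'P1', 'P2', 'P3'}; P4 stays outside
```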
Abstract:
Cold roll forming is an extremely important but little-studied sheet metal forming process. In this thesis, the process of cold roll forming is introduced and it is seen that form roll design is central to the cold roll forming process. The conventional design and manufacture of form rolls is discussed, and it is observed that surrounding the design process are a number of activities which, although peripheral, are time consuming and a possible source of error. A CAD/CAM system is described which alleviates many of the problems traditional to form roll design. New techniques for calculating strip length and for controlling the means of forming bends are detailed. The CAD/CAM system's advantages and limitations are discussed and, whilst the system has numerous significant advantages, its principal limitation is the need to manufacture form rolls and test them on a mill before a design can be declared satisfactory. A survey of previous theoretical and experimental analyses of cold roll forming is presented and is found to be limited. Building on the previous work, a method for the numerical analysis of the cold roll forming process is proposed, based on a minimum energy approach. In parallel with the numerical analysis, a comprehensive range of software has been developed to enhance the designer's visualisation of the effects of a form roll design. A complementary approach to the analysis of form roll designs is their generation, and a method for the partial generation of designs is described. It is suggested that the two approaches should continue in parallel, the limitation of each being knowledge of the cold roll forming process. Hence, an initial experimental investigation of the rolling of channel sections is described. Finally, areas of potential future work are discussed.
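A strip-length calculation of the kind such a CAD/CAM system automates sums the straight flats of the section and a bend allowance for each bend, measured along the neutral axis. A minimal sketch, assuming the common allowance formula θ(r + kt) with an illustrative k-factor (the thesis's own technique may differ):

```python
import math

def developed_strip_length(flats, bends, thickness, k_factor=0.5):
    """Developed (flat) strip length: the sum of the straight flats plus a
    bend allowance angle * (radius + k * thickness) for each bend. A k-factor
    of 0.5 places the neutral axis at mid-thickness, an approximation only."""
    allowance = sum(math.radians(angle) * (radius + k_factor * thickness)
                    for angle, radius in bends)
    return sum(flats) + allowance

# Simple channel section (dimensions in mm, invented for illustration):
flats = [30.0, 60.0, 30.0]            # two legs and a web
bends = [(90.0, 3.0), (90.0, 3.0)]    # two 90-degree bends, 3 mm inner radius
print(round(developed_strip_length(flats, bends, thickness=1.5), 2))  # 131.78
```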
Abstract:
Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine tools or machining centres under conditions of limited manpower or unmanned operation. This research investigates aspects of the deep hole drilling process with small-diameter twist drills and presents a prototype system for real-time process monitoring and adaptive control; two main research objectives are fulfilled. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working lengths up to 40 diameters. This includes the definition of the problems associated with the low strength of these tools and the study of the mechanisms of catastrophic failure, which manifest themselves well before, and along with, the classic mechanism of tool wear. The relationships of drilling thrust and torque with the depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and, finally, the issuing of alarms and diagnostic messages.
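The adaptive-control function can be caricatured as a feedback loop on the monitored thrust: back the feed off as thrust approaches a limit, and retract on overload. The thresholds, gains and thrust model below are all invented for illustration; the real system also monitors torque, controls spindle speed, and recognises wear patterns.

```python
# A sketch of thrust-based adaptive feed control (parameters invented).

def adaptive_drilling(target_depth, feed, thrust_limit, read_thrust):
    depth, log = 0.0, []
    while depth < target_depth:
        thrust = read_thrust(depth, feed)
        if thrust > thrust_limit:               # overload: retract to clear chips
            log.append((round(depth, 2), "retract"))
            feed *= 0.7                         # re-enter with a reduced feed
            continue
        elif thrust > 0.8 * thrust_limit:       # approaching the limit: ease off
            feed *= 0.9
        depth += feed                           # one increment at the current feed
        log.append((round(depth, 2), round(feed, 3)))
    return log

# Thrust rising with depth mimics chip packing in deep holes (illustrative):
model = lambda depth, feed: 40 * feed * (1 + 0.08 * depth)
for entry in adaptive_drilling(10.0, feed=0.5, thrust_limit=25.0, read_thrust=model):
    print(entry)
```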
Abstract:
This thesis describes an investigation by the author into the spares operation of CompAir BroomWade Ltd. Whilst the complete system, including the warehousing and distribution functions, was investigated, the thesis concentrates on the provisioning aspect of the spares supply problem. Analysis of the historical data showed the presence of significant fluctuations in all the measures of system performance. Two Industrial Dynamics simulation models were developed to study this phenomenon. The models showed that any fluctuation in end-customer demand would be amplified as it passed through the distributor and warehouse stock control systems, and the evidence from the available historical data supported this view of the system's operation. The models were used to determine which parts of the total system could be expected to exert a critical influence on its performance. The lead time parameters of the supply sector were found to be critical, and further study showed that the manner in which the lead time changed with work-in-progress levels was also an important factor. The problem therefore resolved into the design of a spares manufacturing system which exhibited the appropriate dynamic performance characteristics. The gross level of entity representation inherent in the Industrial Dynamics methodology was found to limit the value of these models in the development of detailed design proposals. Accordingly, an interacting job shop simulation package was developed to allow detailed evaluation of the effect of organisational factors on the performance characteristics of a manufacturing system. The package was used to develop a design for a pilot spares production unit. The need for a manufacturing system to perform successfully under conditions of fluctuating demand is not limited to the spares field. Thus, although the spares exercise provides an example of the approach, the concepts and techniques developed can be considered to have broad application throughout the batch manufacturing industry.
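The amplification mechanism the Industrial Dynamics models exposed can be reproduced with a toy stock-control echelon: ordering to correct an inventory deficit makes each tier's orders swing harder than its demand. All parameters below are invented, and replenishment lags are omitted; the thesis models are far more detailed.

```python
# A step in end-customer demand grows as it propagates upstream (illustrative).

def echelon(demand, target_cover=2.0, adjust_rate=0.5):
    """Each period: forecast = this period's demand; desired stock = cover *
    forecast; order = demand plus a fraction of the inventory gap."""
    stock, orders = target_cover * demand[0], []
    for d in demand:
        desired = target_cover * d
        order = max(0.0, d + adjust_rate * (desired - stock))
        stock += order - d                    # receipts (no lag) minus sales
        orders.append(round(order, 2))
    return orders

customer = [100.0] * 5 + [120.0] * 15         # a 20% step in end demand
distributor = echelon(customer)
warehouse = echelon(distributor)
print(max(customer), max(distributor), max(warehouse))   # 120 -> 140 -> 180
```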
Abstract:
The research, which was given the terms of reference "to cut the lead time for getting new products into volume production", was sponsored by a company which develops and manufactures telecommunications equipment. The research described was based on studies of the development of two processors designed to control telephone exchanges in the public network. It was shown that for each of these products, which were large electronic systems containing both hardware and software, most of the lead time was taken up by development. About half of this time was consumed by activities associated with redesign resulting from changes found to be necessary after the original design had been built. Analysis of the causes of design changes showed the most significant to be Design Faults. The reasons why these predominated were investigated by seeking the collective opinion of design staff and their management using a questionnaire. Using the results from these studies to build upon the work of other authors, a model of the development process of large hierarchical systems is derived. An important feature of this model is its representation of the iterative loops due to design changes. In order to reduce the development time, two closely related philosophies are proposed: first, by spending more time at the early stages of development (detecting and remedying faults in the design), even greater savings can be made later on; second, the collective performance of the development organisation would be improved by increasing the amount and speed of feedback about that performance. A trial was performed to test these philosophies using readily available techniques for design verification. It showed that a saving of about 11 per cent would be made on the development time, and that the philosophies might be applied equally successfully to other products and techniques.
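The effect of the iterative redesign loops on lead time can be made concrete with a simple expected-value model: if a stage must be repeated with probability p on each pass, the expected number of passes is 1/(1 − p). The durations and probabilities below are invented; they only illustrate why front-loading fault detection shortens overall development, which is the first philosophy above.

```python
# Expected lead time with rework loops (illustrative numbers, in weeks).

def expected_lead_time(stages):
    """stages: list of (duration, rework_probability) per development stage.
    Expected passes per stage form a geometric series summing to 1/(1 - p)."""
    return sum(duration / (1.0 - p) for duration, p in stages)

baseline = [(10, 0.1), (20, 0.3), (30, 0.5)]       # faults mostly caught late
front_loaded = [(14, 0.05), (22, 0.1), (30, 0.1)]  # more early-stage checking
print(round(expected_lead_time(baseline), 1))      # ~99.7
print(round(expected_lead_time(front_loaded), 1))  # ~72.5: early checks pay off
```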
Abstract:
Cold roll forming of thin-walled sections is a very useful process in the sheet metal industry. However, the conventional method for the design and manufacture of form-rolls, the special tooling used in the cold roll forming process, is a very time consuming and skill-demanding exercise. This thesis describes the establishment of a stand-alone minicomputer-based CAD/CAM system for assisting the design and manufacture of form-rolls. The work was undertaken in collaboration with a leading manufacturer of thin-walled sections. A package of computer programs has been developed to provide computer aids for every aspect of work in form-roll design and manufacture. The programs have been successfully implemented, as an integrated CAD/CAM software system, on the ICL PERQ minicomputer with graphics facilities. The developed CAD/CAM system is thus a single-user workstation, with software facilities to help the user perform the conventional roll design activities, including the design of the finished section, the flower pattern, and the form-rolls. A roll editor program can then be used to modify, if required, the computer-generated roll profiles. As far as manufacturing is concerned, a special-purpose roll machining program and postprocessor can be used in conjunction to generate the NC part-programs for the production of form-rolls by NC turning. Graphics facilities have been incorporated into the CAD/CAM software to display drawings interactively on the computer screen throughout all stages of its execution. It has been found that computerisation can shorten the lead time in all activities dealing with the design and manufacture of form-rolls, and that small or medium-sized manufacturing companies can benefit from CAD/CAM technology by developing, to their own specification, a tailor-made CAD/CAM software system on a low-cost minicomputer.