841 results for process control systems
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri-net-based techniques is the complexity associated with generating the reachability graph. This thesis addresses this problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
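As a rough illustration of the reachability-based analysis described above (not the thesis's own algorithm or net), the sketch below enumerates the markings of a small place/transition net reachable from one particular state and checks a simple hazard property over them; the net, place names and hazard pair are hypothetical.

```python
from collections import deque

# A tiny 1-safe place/transition net: each transition maps input places to output places.
# The names and the hazard pair below are hypothetical, for illustration only.
transitions = {
    "start_feed":  ({"idle", "part_ready"},     {"feeding"}),
    "end_feed":    ({"feeding"},                {"idle", "at_press"}),
    "start_press": ({"at_press", "press_free"}, {"pressing"}),
    "end_press":   ({"pressing"},               {"press_free", "done"}),
}

def enabled(marking, t):
    pre, _ = transitions[t]
    return pre <= marking                     # all input places are marked

def fire(marking, t):
    pre, post = transitions[t]
    return frozenset((marking - pre) | post)  # consume inputs, produce outputs

def partial_reachability(initial):
    """Breadth-first reachability graph rooted at one particular marking."""
    graph, frontier = {}, deque([frozenset(initial)])
    while frontier:
        m = frontier.popleft()
        if m in graph:
            continue
        graph[m] = {t: fire(m, t) for t in transitions if enabled(m, t)}
        frontier.extend(graph[m].values())
    return graph

graph = partial_reachability({"idle", "part_ready", "press_free"})
# A simple safety (hazard) check: feeding and pressing must never be marked together.
assert not any({"feeding", "pressing"} <= m for m in graph)
print(f"{len(graph)} markings reachable from the chosen state")
```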
Abstract:
The work presented in this thesis describes an investigation into the production and properties of thin amorphous C films, with and without Cr doping, as a low wear / friction coating applicable to MEMS and other micro- and nano-engineering applications. Firstly, an assessment was made of the available testing techniques. Secondly, the optimised test methods were applied to a series of sputtered films of thickness 10 - 2000 nm in order to: (i) investigate the effect of thickness on the properties of the coatings/coating process, (ii) investigate fundamental tribology at the nano-scale and (iii) provide a starting point for nanotribological coating optimisation at ultra-low thickness. The use of XPS was investigated for the determination of sp3/sp2 carbon bonding. Under C 1s peak analysis, significant errors were identified and this was attributed to the absence of sufficient instrument resolution to guide the component peak structure (even with a high-resolution instrument). A simple peak width analysis and correlation work with the C KLL D value confirmed the errors. The use of XPS for sp3/sp2 determination was therefore limited to initial tentative estimations. Nanoindentation was shown to provide consistent hardness and reduced modulus results with depth (to < 7 nm) when replicate data was suitably statistically processed. No significant pile-up or cracking of the films was identified under nanoindentation. Nanowear experimentation by multiple nanoscratching provided some useful information; however, the conditions of test were very different to those expected for MEMS and micro- / nano-engineering systems. A novel 'sample oscillated nanoindentation' system was developed for testing nanowear under more relevant conditions. The films were produced in an industrial production coating line. In order to maximise the available information and to take account of uncontrolled process variation, a statistical design of experiment procedure was used to investigate the effect of four key process control parameters. Cr doping was the most significant control parameter at all thicknesses tested and produced a softening effect and thus increased nanowear. Substrate bias voltage was also a significant parameter and produced hardening and a wear-reducing effect at all thicknesses tested. The use of a Cr adhesion layer produced beneficial results at 150 nm thickness, but was ineffective at 50 nm. Argon flow to the coating chamber produced a complex effect. All effects reduced significantly with reducing film thickness. Classic fretting wear was produced at low amplitude under nanowear testing. Reciprocating sliding was produced at higher amplitude, which generated three-body abrasive wear that was generally consistent with the Archard model. Specific wear rates were very low (typically 10⁻¹⁶ - 10⁻¹⁸ m³ N⁻¹ m⁻¹). Wear rates reduced exponentially with reduced film thickness and, below (approx.) 20 nm, thickness was identified as the most important control of wear.
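For context, the specific wear rate quoted above is conventionally defined as worn volume per unit normal load per unit sliding distance, k = V/(F·s); a tiny sketch with hypothetical nanowear numbers:

```python
def specific_wear_rate(worn_volume_m3, normal_load_N, sliding_distance_m):
    """Archard-type specific wear rate k = V / (F * s), in m^3 N^-1 m^-1."""
    return worn_volume_m3 / (normal_load_N * sliding_distance_m)

# Hypothetical nanowear scar: 1e-22 m^3 removed under 1 mN over 1 mm of sliding.
k = specific_wear_rate(1e-22, 1e-3, 1e-3)
print(f"k = {k:.1e} m^3 N^-1 m^-1")   # 1e-16, at the upper end of the range reported above
```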
Abstract:
The thesis describes an investigation into methods for the specification, design and implementation of computer control systems for flexible manufacturing machines comprising multiple, independent, electromechanically-driven mechanisms. An analysis is made of the elements of conventional mechanically-coupled machines in order that the operational functions of these elements may be identified. This analysis is used to define the scope of requirements necessary to specify the format, function and operation of a flexible, independently driven mechanism (IDM) machine. A discussion of how this type of machine can accommodate modern manufacturing needs of high speed and flexibility is presented. A sequential method of capturing requirements for such machines is detailed, based on a hierarchical partitioning of machine requirements from product to independent drive mechanism. A classification of mechanisms using notations including data flow diagrams and Petri nets is described, which supports capture and allows validation of requirements. A generic design for a modular IDM machine controller is derived, based upon the hierarchy of control identified in these machines. A two-mechanism experimental machine is detailed which is used to demonstrate the application of the specification, design and implementation techniques. A computer controller prototype and a fully flexible implementation for the IDM machine, based on Petri-net models described using the concurrent programming language Occam, are detailed. The ability of this modular computer controller to support flexible, safe and fault-tolerant operation of the two intermittent-motion, discretely-synchronised independent drive mechanisms is presented. The application of the machine development methodology to industrial projects is established.
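A loose sketch (in Python threads rather than Occam processes, and not the thesis's controller) of the discrete synchronisation of two independently driven mechanisms: each runs its own motion profile concurrently, but both rendezvous once per machine cycle. Timings and names are made up.

```python
import threading
import time

barrier = threading.Barrier(2)         # one slot per mechanism

def mechanism(name, move_time, cycles=3):
    for cycle in range(cycles):
        time.sleep(move_time)          # stand-in for an indexed move
        print(f"{name}: move {cycle} complete, waiting to synchronise")
        barrier.wait()                 # discrete synchronisation point each cycle

drives = [threading.Thread(target=mechanism, args=("feed drive", 0.01)),
          threading.Thread(target=mechanism, args=("press drive", 0.02))]
for d in drives:
    d.start()
for d in drives:
    d.join()
```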
Abstract:
This thesis is based upon a case study of the adoption of digital, electronic, microprocessor-based control systems by Albright & Wilson Limited - a UK chemical producer. It offers an explanation of the company's changing technology policy between 1978 and 1981, by examining its past development, internal features and industrial environment. Part One of the thesis gives an industry-level analysis which relates the development of process control technology to changes in the economic requirements of production. The rapid diffusion of microcomputers and other microelectronic equipment in the chemical industry is found to be a response to a general need to raise the efficiency of all processes, imposed by the economic recession following 1973. Part Two examines the impact of these technical and economic changes upon Albright & Wilson Limited. The company's slowness in adopting new control technology is explained by its long history, in which trends are identified which produced the conservatism of the 1970s. By contrast, a study of Tenneco Incorporated, a much more successful adopter of automating technology, is offered, with an analysis of the new policy of adoption of such equipment which it imposed upon Albright & Wilson following the latter's takeover by Tenneco in 1978. Some indications of the consequences of this new policy of widespread adoption of microprocessor-based control equipment are derived from a study of the first Albright & Wilson plant to use such equipment. The thesis concludes that companies which fail to adopt the new control technology rapidly may not survive in the recessionary environment, that long-established British companies may lack the flexibility to make such necessary changes, and that multi-national companies may have an important role in the planned transfer and adoption of new production technology through their subsidiaries in the UK.
Abstract:
The design of multicriteria control systems on the basis of logic models of composite dynamic objects is considered.
Abstract:
In nonlinear and stochastic control problems, learning an efficient feed-forward controller is not amenable to conventional neurocontrol methods. For these approaches, estimating and then incorporating uncertainty in the controller and feed-forward models can produce more robust control results. Here, we introduce a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems which can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty for the outputs of neural networks can be obtained using the statistical properties of the networks. More generally, multicomponent distributions can be modelled by the mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled approach to constrain the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. A nonlinear multi-variable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis. © 2004 Elsevier Ltd. All rights reserved.
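The following sketch illustrates the general idea of importance sampling from a mixture-model inverse of a multi-valued plant (here y = u², whose inverse has two branches). The mixture parameters stand in for a trained mixture density network, and the forward model and noise level are toy assumptions; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-valued plant: y = u**2, so the inverse has two branches (+/- sqrt(y)).
def forward_plant(u):
    return u ** 2

# Stand-in for a trained mixture density network modelling p(u | y_target):
# two components, one per inverse branch.  Weights/means/stds are illustrative.
def inverse_mdn(y_target):
    mu = np.sqrt(max(y_target, 0.0))
    return (np.array([0.5, 0.5]),          # mixing coefficients
            np.array([+mu, -mu]),          # component means
            np.array([0.15, 0.15]))        # component standard deviations

def importance_sampled_control(y_target, n_samples=500, noise_std=0.05):
    """Sample candidate controls from the inverse model and weight them by how
    well the forward model says they would reproduce the target output."""
    w, mu, sigma = inverse_mdn(y_target)
    comp = rng.choice(len(w), size=n_samples, p=w)
    candidates = rng.normal(mu[comp], sigma[comp])
    predicted = forward_plant(candidates)
    # Importance weights: Gaussian likelihood of hitting the target under the
    # assumed forward-model uncertainty noise_std.
    weights = np.exp(-0.5 * ((predicted - y_target) / noise_std) ** 2)
    return candidates[np.argmax(weights)]

u = importance_sampled_control(y_target=4.0)
print(f"chosen control u = {u:+.3f}, plant output = {forward_plant(u):.3f}")
```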
Abstract:
The importance of “control variations” for obtaining local approximations of the reachable set of nonlinear control systems is well known. Heuristically, if one can construct control variations in all possible directions, then the considered control system is small-time locally controllable (STLC). Two concepts of control variations of higher order are introduced for the case of smooth control systems. The relation between these variations and the small-time local controllability is studied and a new sufficient STLC condition is proved.
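For orientation, the classical setting behind these results can be written as follows (this is standard background, not the paper's new higher-order condition):

```latex
% Smooth control-affine system with bounded controls:
\[
  \dot{x} = f(x) + \sum_{i=1}^{m} u_i\, g_i(x), \qquad x \in \mathbb{R}^n, \quad |u_i| \le 1 .
\]
% The system is small-time locally controllable (STLC) at an equilibrium $x_0$ if,
% for every $T > 0$, the set reachable from $x_0$ in time at most $T$ contains a
% neighbourhood of $x_0$.  In the driftless case ($f \equiv 0$) the Lie algebra
% rank condition
\[
  \dim \operatorname{Lie}\{g_1, \dots, g_m\}(x_0) = n
\]
% is sufficient for STLC (Chow--Rashevskii); with drift, higher-order control
% variations of the kind discussed above are used to obtain sharper sufficient
% conditions.
```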
Abstract:
In the Light Controlled Factory, part-to-part assembly and reduced weight will be enabled through the use of predictive fitting processes; low-cost, high-accuracy reconfigurable tooling will be made possible by active compensation; improved control will allow accurate robotic machining; and quality will be improved through the use of traceable, uncertainty-based quality control throughout the production system. A number of challenges must be overcome before this vision can be realized: 1) controlling industrial robots for accurate machining; 2) compensation of measurements for thermal expansion; 3) compensation of measurements for refractive index changes; 4) development of Embedded Metrology Tooling for in-tooling measurement and active tooling compensation; and 5) development of Software for the Planning and Control of Integrated Metrology Networks based on Quality Control with Uncertainty Evaluation and control systems for predictive processes. This paper describes how these challenges are being addressed, in particular the central challenge of developing large-volume measurement process models within an integrated dimensional variation management (IDVM) system.
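A minimal sketch of challenge 2), compensating a length measurement for thermal expansion by scaling it back to the 20 °C reference temperature; the material coefficient and readings below are assumptions, and the paper's own process models are far richer than this.

```python
# Illustrative compensation of a length measurement to the 20 °C reference
# temperature.  The expansion coefficient and readings are assumed, not from the paper.

ALPHA_ALUMINIUM = 23e-6          # linear expansion coefficient, 1/°C (nominal)

def compensate_to_20C(measured_length_m, part_temperature_C, alpha_per_C):
    """Remove thermal expansion so parts measured warm compare against 20 °C nominals."""
    return measured_length_m / (1.0 + alpha_per_C * (part_temperature_C - 20.0))

raw = 1.000250            # metres, measured with the part at 27 °C
corrected = compensate_to_20C(raw, 27.0, ALPHA_ALUMINIUM)
print(f"length at 20 °C: {corrected:.6f} m")   # ~1.000089 m, i.e. a ~161 µm correction
```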
Abstract:
In today’s modern manufacturing industry there is an increasing need to improve internal processes to meet diverse client needs. Process re-engineering is an important activity that is well understood by industry, but its rate of application within small to medium size enterprises (SMEs) is less developed. Business pressures shift the focus of SMEs toward winning new projects and contracts rather than developing long-term, sustainable manufacturing processes. Variations in manufacturing processes are inevitable, but the amount of non-conformity often exceeds the acceptable levels. This paper is focused on the re-engineering of the manufacturing and verification procedure for discrete parts production with the aim of enhancing process control and product verification. The ideologies of the ‘Push’ and ‘Pull’ approaches to manufacturing are useful in the context of process re-engineering for data improvement. Currently, information is pulled from the market and prominent customers, and manufacturing companies always try to make the right product by following customer procedures that attempt to verify against specifications. This approach can result in significant quality control challenges. The aim of this paper is to highlight the importance of process re-engineering in product verification in SMEs. Leadership, culture, ownership and process management are among the main attributes required for the successful deployment of process re-engineering. This paper presents the findings from a case study showcasing the application of a modified re-engineering method for the manufacturing and verification process. The findings from the case study indicate there are several advantages to implementing the re-engineering method outlined in this paper.
Abstract:
It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately and to what degree of flexibility parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling their product and process verification systems. The aim of this research is to develop a system for capturing the company’s knowledge and expertise and then reflect it into an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict the trend in the manufacturing of parts but also minimises the human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems. Measurement is like any other process in terms of variability. Reducing this variation via an automated machine probing system helps to avoid defects in future products. Developments in the aerospace, nuclear, oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same characteristics, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, due to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity due to the high nickel content in their material composition. This causes a high surface temperature on the work-piece at the machining stage which leads to deformation in the final product. Like every process, material variations have a significant impact on machining quality. The main causes of variation originate from chemical composition and mechanical hardness. The non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat treatment standards are designed for processing the material to the desired hardness levels based on application. In order to take corrective actions, a study on the material aspects of superalloys has been conducted. In this study, samples from different batches of material have been analysed. This involved material preparation for microscopy analysis, and the effect of chemical compositions on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
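As a concrete illustration of the SPC element described above (not the company's actual implementation), the sketch below computes X-bar and R control-chart limits from a few hypothetical subgroups of probed diameters, using the standard chart constants for subgroups of five.

```python
import numpy as np

# A2/D3/D4 are the standard control-chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

# Five probed diameters (mm) per subgroup, one subgroup per batch (hypothetical data).
subgroups = np.array([
    [25.012, 25.008, 25.015, 25.011, 25.009],
    [25.010, 25.013, 25.007, 25.012, 25.010],
    [25.014, 25.009, 25.011, 25.008, 25.013],
])

xbar = subgroups.mean(axis=1)                            # subgroup means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges
xbarbar, rbar = xbar.mean(), ranges.mean()

limits = {
    "Xbar": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),
    "R":    (D3 * rbar,           D4 * rbar),
}
for chart, (lcl, ucl) in limits.items():
    print(f"{chart} chart: LCL = {lcl:.4f}, UCL = {ucl:.4f}")
```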
Abstract:
This paper presents a new, dynamic feature representation method for high-value parts consisting of complex and intersecting features. The method first extracts features from the CAD model of a complex part. Then the dynamic status of each feature is established between the various operations to be carried out during the whole manufacturing process. Each manufacturing and verification operation can be planned and optimized using the real conditions of a feature, thus enhancing accuracy, traceability and process control. The dynamic feature representation is complementary to the design models used as the underlying basis in current CAD/CAM and decision support systems. © 2012 CIRP.
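A deliberately simplified sketch of what the "dynamic status of each feature" could look like as a data structure (hypothetical names, not the paper's representation): the same CAD feature carries a recorded state after every manufacturing and verification operation, so each subsequent operation can be planned against the feature's real condition rather than only the final design geometry.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicFeature:
    name: str
    nominal: dict                                   # design-intent parameters
    history: list = field(default_factory=list)     # (operation, state) pairs

    def record(self, operation: str, state: dict):
        self.history.append((operation, state))

    def current_state(self) -> dict:
        return self.history[-1][1] if self.history else {"condition": "raw stock"}

hole = DynamicFeature("bore_A", nominal={"diameter_mm": 25.000, "tol_mm": 0.013})
hole.record("rough_drill", {"diameter_mm": 24.600, "condition": "roughed"})
hole.record("finish_bore", {"diameter_mm": 24.995, "condition": "finished"})
hole.record("cmm_inspect", {"diameter_mm": 24.997, "condition": "verified"})
print(hole.current_state())
```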
Abstract:
Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections with consideration for the performance of both the transit and the general vehicles. Unlike the simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control. An important component in the proposed method involves the development of a microscopic delay estimation model that was designed specifically to optimize adaptive traffic signals with TSP. Macroscopic delay models such as the Highway Capacity Manual (HCM) delay model are unable to accurately consider the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations. The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay for both under- and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed the fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit the transit vehicles while minimizing the impact of TSP on the general vehicles. The VISSIM testbed developed in this research provides a powerful tool to design and evaluate different TSP strategies under both actuated and adaptive signal control.
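The dissertation's PGA and microscopic delay model are not reproduced here; as a rough illustration of the overall idea, the sketch below uses a plain (serial) genetic algorithm with a Webster-style uniform-delay proxy to split green time among four hypothetical phases, weighting one phase more heavily as a crude stand-in for transit priority. All demands, saturation flows and GA settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

CYCLE, LOST = 90.0, 12.0                        # cycle length and total lost time (s)
DEMAND = np.array([700., 300., 500., 250.])     # veh/h per phase (illustrative)
SATFLOW = np.array([1800., 1700., 1800., 1700.])
WEIGHT = np.array([1.0, 3.0, 1.0, 1.0])         # phase 2 carries transit -> weighted up

def delay(splits):
    g = splits / splits.sum() * (CYCLE - LOST)               # green time per phase (s)
    lam = g / CYCLE
    x = np.minimum(DEMAND * CYCLE / (SATFLOW * g), 0.95)     # degree of saturation (capped)
    d = CYCLE * (1 - lam) ** 2 / (2 * (1 - lam * x))         # Webster uniform-delay term
    return float(np.sum(WEIGHT * DEMAND * d))                # weighted total delay proxy

def genetic_splits(pop_size=40, generations=60, mutation=0.1):
    pop = rng.uniform(0.1, 1.0, size=(pop_size, len(DEMAND)))
    for _ in range(generations):
        fitness = np.array([delay(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation selection
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        children = 0.5 * (a + b)                             # blend crossover
        children += rng.normal(0.0, mutation, children.shape)
        pop = np.clip(children, 0.05, 1.0)
        pop[0] = parents[0]                                  # elitism: keep the best
    best = min(pop, key=delay)
    return best / best.sum() * (CYCLE - LOST)

print("green times (s):", np.round(genetic_splits(), 1))
```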
Abstract:
BACKGROUND: Guidance for appropriate utilisation of transthoracic echocardiograms (TTEs) can be incorporated into ordering prompts, potentially affecting the number of requests. METHODS: We incorporated data from the 2011 Appropriate Use Criteria for Echocardiography, the 2010 National Institute for Clinical Excellence Guideline on Chronic Heart Failure, and American College of Cardiology Choosing Wisely list on TTE use for dyspnoea, oedema and valvular disease into electronic ordering systems at Durham Veterans Affairs Medical Center. Our primary outcome was TTE orders per month. Secondary outcomes included rates of outpatient TTE ordering per 100 visits and frequency of brain natriuretic peptide (BNP) ordering prior to TTE. Outcomes were measured for 20 months before and 12 months after the intervention. RESULTS: The number of TTEs ordered did not decrease (338±32 TTEs/month prior vs 320±33 afterwards, p=0.12). Rates of outpatient TTE ordering decreased minimally post intervention (2.28 per 100 primary care/cardiology visits prior vs 1.99 afterwards, p<0.01). Effects on TTE ordering and ordering rate significantly interacted with time from intervention (p<0.02 for both), as the small initial effects waned after 6 months. The percentage of TTE orders with preceding BNP increased (36.5% prior vs 42.2% after for inpatients, p=0.01; 10.8% prior vs 14.5% after for outpatients, p<0.01). CONCLUSIONS: Ordering prompts for TTEs initially minimally reduced the number of TTEs ordered and increased BNP measurement at a single institution, but the effect on TTEs ordered was likely insignificant from a utilisation standpoint and decayed over time.
Abstract:
The BlackEnergy malware targeting critical infrastructures has a long history. It evolved over time from a simple DDoS platform to a quite sophisticated plug-in-based malware. The plug-in architecture has a persistent malware core with easily installable attack-specific modules for DDoS, spamming, info-stealing, remote access, boot-sector formatting, etc. BlackEnergy has been involved in several high-profile cyber-physical attacks, including the recent Ukraine power grid attack in December 2015. This paper investigates the evolution of BlackEnergy and its cyber attack capabilities. It presents a basic cyber attack model used by BlackEnergy for targeting industrial control systems. In particular, the paper analyzes cyber threats of BlackEnergy for synchrophasor-based systems which are used for real-time control and monitoring functionalities in the smart grid. Several BlackEnergy-based attack scenarios have been investigated by exploiting the vulnerabilities in two widely used synchrophasor communication standards: (i) IEEE C37.118 and (ii) IEC 61850-90-5. Specifically, the paper addresses reconnaissance, DDoS, man-in-the-middle and replay/reflection attacks on IEEE C37.118 and IEC 61850-90-5. Further, the paper also investigates protection strategies for the detection and prevention of BlackEnergy-based cyber-physical attacks.
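As one example on the protection side mentioned above (a hedged sketch, not taken from the paper): replayed or reflected synchrophasor data frames can be flagged by checking that per-PMU timestamps are fresh and strictly increasing. The field names follow IEEE C37.118 usage (IDCODE, SOC, FRACSEC), but the frames here are plain dictionaries rather than parsed protocol messages, and FRACSEC is assumed to be already converted to seconds.

```python
import time

MAX_AGE_S = 1.0                      # tolerated network/processing latency
last_seen = {}                       # idcode -> latest accepted timestamp

def accept_frame(frame, now=None):
    """Reject stale or non-increasing timestamps as possible replay/reflection."""
    now = time.time() if now is None else now
    ts = frame["soc"] + frame["fracsec"]          # fracsec already in seconds here
    if now - ts > MAX_AGE_S:
        return False, "stale timestamp (possible replay)"
    if ts <= last_seen.get(frame["idcode"], float("-inf")):
        return False, "non-increasing timestamp (possible replay/reflection)"
    last_seen[frame["idcode"]] = ts
    return True, "accepted"

now = time.time()
print(accept_frame({"idcode": 7, "soc": int(now), "fracsec": 0.01}, now=now))
print(accept_frame({"idcode": 7, "soc": int(now) - 30, "fracsec": 0.0}, now=now))  # replayed
```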
Abstract:
PEREIRA, J. P.; CASTRO, B. P. S.; VALENTIM, R. A. M. Kit Educacional para Controle e Supervisão Aplicado a Nível. Holos, Natal, v. 2, p. 68-72, 2009.