908 results for process control
Abstract:
A system for visual recognition is described, with implications for the general problem of representing knowledge to assist control. The immediate objective is a computer system that will recognize objects in a visual scene, specifically hammers. The computer receives an array of light intensities from a device such as a television camera and is to locate and identify the hammer if one is present. The computer must produce from the numerical "sensory data" a symbolic description that constitutes its perception of the scene. Of primary concern is the control of the recognition process. Control decisions should be guided by the partial results obtained on the scene. If a hammer handle is observed, this should suggest that the handle is part of a hammer and indicate where to look for the hammer head. The particular knowledge that a handle has been found combines with general knowledge about hammers to influence the recognition process. This use of knowledge to direct control is denoted here by the term "active knowledge". A descriptive formalism is presented for visual knowledge which identifies the relationships relevant to the active use of the knowledge. A control structure is provided which can apply knowledge organized in this fashion actively to the processing of a given scene.
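To make the idea of active knowledge concrete, the following minimal Java sketch (not from the paper; the class names, regions and hammer geometry are all illustrative) shows a partial result, a detected handle, combining with general knowledge about hammers to place a "find the head" task on the control agenda rather than scanning the whole image uniformly.

```java
// Minimal sketch (not from the paper) of "active knowledge": a partial result
// (a detected handle) combines with general knowledge about hammers to direct
// where the recognizer looks next.  All class names and regions are hypothetical.
import java.util.ArrayDeque;
import java.util.Queue;

public class ActiveKnowledgeDemo {

    record Region(int x, int y, int w, int h) {}
    record Observation(String label, Region where) {}
    record Task(String goal, Region searchArea) {}

    public static void main(String[] args) {
        Queue<Task> agenda = new ArrayDeque<>();

        // Suppose a low-level routine has reported a handle-like elongated blob.
        Observation handle = new Observation("handle", new Region(40, 60, 10, 80));

        // General knowledge about hammers: the head sits at one end of the handle,
        // roughly perpendicular to it.  The partial result therefore *suggests* a
        // hammer hypothesis and advises where to look for the head.
        if (handle.label().equals("handle")) {
            Region r = handle.where();
            Region headArea = new Region(r.x() - 20, r.y() - 25, r.w() + 40, 30);
            agenda.add(new Task("find hammer head", headArea));
        }

        // The control structure picks the next task from the agenda, so processing
        // of the scene is driven by what has already been recognized.
        for (Task t : agenda) {
            System.out.println(t.goal() + " in region " + t.searchArea());
        }
    }
}
```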
Abstract:
This document contains the source code for the Java program "AgileSim_updated.java", complete with comments. The program accompanies the article "Process Control in Agile Supply Chain Network" and was used to simulate the data for the illustrative example in the article.
Abstract:
Predictability - the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements - is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems - possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing - cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems - not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the CLEOPATRA programming language. CLEOPATRA features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. CLEOPATRA is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of CLEOPATRA has been in use as a specification and simulation language for embedded time-critical robotic processes.
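As an illustration only, the following Java sketch (it is not CLEOPATRA code, and the event names and time bounds are hypothetical) captures the basic TRA idea that a response event must follow its triggering event within a specified, strictly positive time interval, which is what rules out clairvoyant or zero-time behaviour.

```java
// Minimal Java sketch (not CLEOPATRA, whose syntax is not reproduced here) of the
// idea behind a time-constrained reactive automaton: an output event must follow
// its triggering input event after a delay that lies inside a specified interval.
// Event names and bounds are illustrative only.
import java.util.List;

public class TraSketch {

    // A reaction: on `trigger`, emit `response` no earlier than lo and no later
    // than hi time units afterwards (ruling out clairvoyant or zero-time behaviour).
    record TimedReaction(String trigger, String response, double lo, double hi) {}

    record TimedEvent(String name, double time) {}

    static boolean satisfies(TimedReaction r, TimedEvent in, TimedEvent out) {
        double delay = out.time() - in.time();
        return in.name().equals(r.trigger())
                && out.name().equals(r.response())
                && delay >= r.lo() && delay <= r.hi();
    }

    public static void main(String[] args) {
        TimedReaction spec = new TimedReaction("sensorHigh", "valveClose", 0.01, 0.05);

        List<TimedEvent[]> traces = List.of(
                new TimedEvent[]{new TimedEvent("sensorHigh", 1.00), new TimedEvent("valveClose", 1.03)},
                new TimedEvent[]{new TimedEvent("sensorHigh", 2.00), new TimedEvent("valveClose", 2.00)}); // zero delay

        for (TimedEvent[] trace : traces) {
            System.out.println(trace[0] + " -> " + trace[1] + " : "
                    + (satisfies(spec, trace[0], trace[1]) ? "meets spec" : "violates spec"));
        }
    }
}
```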
Abstract:
The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of traditional approaches to fisheries management. Quantitative models are used to obtain estimates of population abundance, and the management advice is based on annual harvest levels (TAC), where only a certain amount of catch is allowed from specific fish stocks. However, these models are data intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and can therefore be affected by disturbances that may account for both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful in classifying these effects and hence can be used to trigger a management response only when a significant impact on the stock biomass occurs. This thesis explores how empirical indicators, together with CUSUM, can be used for monitoring, assessment and management of fish stocks. I begin my thesis by exploring various age-based catch indicators to identify those which are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators towards changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected to the fishing gear, or Large Fish Indicators (LFIs), are most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control charts used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (‘control mean’) to detect out-of-control (significant impact) situations. The sensitivity and specificity of SS-CUSUM showed that its performance is robust when LFIs are used. Once an out-of-control situation is detected, the next step is to determine how much shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporation into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubbs' harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of the fish stock until more observations become available for quantitative assessments.
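For readers unfamiliar with CUSUM charts, the Java sketch below shows a two-sided decision-interval CUSUM of the kind applied here to stock indicators; the target ("control mean"), allowance k, decision interval h and the indicator series are illustrative values, not values from the thesis.

```java
// Minimal sketch of a two-sided decision-interval CUSUM chart, the kind of SPC
// scheme applied in the thesis to empirical stock indicators.  Target, reference
// value k, decision interval h and the indicator series are illustrative only.
public class CusumDemo {

    public static void main(String[] args) {
        double target = 50.0;  // in-control mean of the indicator ("control mean")
        double k = 2.0;        // allowance (typically half the shift to be detected)
        double h = 8.0;        // decision interval; an alarm is raised when exceeded

        double[] indicator = {51, 49, 50, 52, 48, 47, 45, 44, 43, 42, 41};

        double cPlus = 0.0, cMinus = 0.0;
        for (int t = 0; t < indicator.length; t++) {
            double x = indicator[t];
            cPlus  = Math.max(0.0, cPlus  + (x - target) - k);  // accumulates upward drift
            cMinus = Math.max(0.0, cMinus - (x - target) - k);  // accumulates downward drift

            boolean alarm = cPlus > h || cMinus > h;
            System.out.printf("t=%2d  x=%5.1f  C+=%6.2f  C-=%6.2f  %s%n",
                    t, x, cPlus, cMinus, alarm ? "OUT OF CONTROL" : "");
        }
    }
}
```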
Abstract:
We present practical modelling techniques for electromagnetically agitated liquid metal flows involving dynamic change of the fluid volume and shape during melting and free-surface oscillation. Typically the electromagnetic field is strongly coupled to the free-surface dynamics and the heat-mass transfer. An accurate pseudo-spectral code and a k-omega turbulence model modified for complex and transitional flows with free surfaces are used for these simulations. The examples considered include magnetic suspension melting, induction scull remelting (cold crucible), levitation and aluminium electrolysis cells. Process control and energy-saving issues are analysed.
Abstract:
A design methodology based on numerical modelling, integrated with optimisation techniques and statistical methods, to aid the process control of micro- and nano-electronics based manufacturing processes is presented in this paper. The design methodology is demonstrated for a micro-machining process called Focused Ion Beam (FIB). This process has been modelled to help understand how a pre-defined geometry of micro- and nano-structures can be achieved using this technology. The process performance is characterised on the basis of Reduced Order Models (ROMs), which are generated using results from a mathematical model of the Focused Ion Beam and Design of Experiments (DoE) methods. Two ion beam sources, Argon and Gallium ions, have been used to compare and quantify the process variable uncertainties that can be observed during the milling process. The evaluations of the process performance take into account the uncertainties and variations of the process variables and are used to identify their impact on the reliability and quality of the fabricated structure. An optimisation-based design task is to identify the optimal process conditions, by varying the process variables, so that certain quality objectives and requirements are achieved and imposed constraints are satisfied. The software tools used and developed to demonstrate the design methodology are also presented.
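A minimal Java sketch of such an optimisation-based design step is given below; the quadratic response surface standing in for the ROM, its coefficients, the chosen process variables and the dose constraint are all illustrative assumptions and are not taken from the paper.

```java
// Minimal sketch of using a reduced-order model (ROM) inside an optimisation-based
// design task: a quadratic response surface (coefficients illustrative, not from
// the paper) predicts milled depth from two process variables, and a grid search
// finds settings that hit a target depth while respecting a dose constraint.
public class RomOptimisationDemo {

    // Hypothetical ROM: predicted depth (nm) as a function of beam current (pA)
    // and dwell time (us), of the kind fitted from Design-of-Experiments runs.
    static double predictedDepth(double currentPa, double dwellUs) {
        return 5.0 + 0.8 * currentPa + 12.0 * dwellUs
                - 0.002 * currentPa * currentPa - 0.5 * dwellUs * dwellUs
                + 0.05 * currentPa * dwellUs;
    }

    public static void main(String[] args) {
        double targetDepth = 120.0;             // nm, required geometry
        double bestCurrent = 0, bestDwell = 0, bestErr = Double.MAX_VALUE;

        for (double current = 10; current <= 100; current += 1) {
            for (double dwell = 0.5; dwell <= 5.0; dwell += 0.1) {
                if (current * dwell > 300) continue;       // illustrative dose constraint
                double err = Math.abs(predictedDepth(current, dwell) - targetDepth);
                if (err < bestErr) {
                    bestErr = err; bestCurrent = current; bestDwell = dwell;
                }
            }
        }
        System.out.printf("Best settings: %.0f pA, %.1f us (|error| = %.2f nm)%n",
                bestCurrent, bestDwell, bestErr);
    }
}
```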
Abstract:
This is the first paper to show and theoretically analyse that the presence of auto-correlation can produce considerable alterations in the Type I and Type II errors of univariate and multivariate statistical control charts. To remove this undesired effect, linear inverse ARMA filters are employed, and the application studies in this paper show that false alarms (increased Type I errors) and insensitive monitoring statistics (increased Type II errors) were eliminated.
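The simplest instance of this prewhitening idea, an inverse AR(1) filter, is sketched below in Java; the AR coefficient, the simulated data and the individuals-chart construction are illustrative only.

```java
// Minimal sketch of the idea in the paper: autocorrelated data inflate false
// alarms on a standard chart, so the series is passed through a linear inverse
// ARMA filter first and the (approximately independent) residuals are charted.
// Only the simplest AR(1) case is shown; phi and the data are illustrative.
import java.util.Random;

public class InverseArmaFilterDemo {

    public static void main(String[] args) {
        double phi = 0.8;            // assumed AR(1) coefficient of the process
        Random rng = new Random(1);

        // Simulate an autocorrelated, in-control process: x_t = phi*x_{t-1} + e_t
        int n = 200;
        double[] x = new double[n];
        for (int t = 1; t < n; t++) x[t] = phi * x[t - 1] + rng.nextGaussian();

        // Inverse AR(1) filter: residual r_t = x_t - phi*x_{t-1}
        double[] r = new double[n];
        for (int t = 1; t < n; t++) r[t] = x[t] - phi * x[t - 1];

        System.out.println("3-sigma violations on raw series:      " + countViolations(x));
        System.out.println("3-sigma violations on filtered series: " + countViolations(r));
    }

    // Individuals chart: sigma estimated from the average moving range (d2 = 1.128),
    // which assumes independent observations; points outside mean +/- 3 sigma count
    // as alarms.  Positive autocorrelation makes this estimate too small, so the
    // raw series produces spurious alarms even though the process is in control.
    static int countViolations(double[] s) {
        double mean = 0;
        for (double v : s) mean += v;
        mean /= s.length;

        double mrSum = 0;
        for (int t = 1; t < s.length; t++) mrSum += Math.abs(s[t] - s[t - 1]);
        double sigmaHat = (mrSum / (s.length - 1)) / 1.128;

        int count = 0;
        for (double v : s) if (Math.abs(v - mean) > 3 * sigmaHat) count++;
        return count;
    }
}
```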
Abstract:
This brief examines the application of nonlinear statistical process control to the detection and diagnosis of faults in automotive engines. In this statistical framework, the computed score variables may have a complicated nonparametric distribution function, which hampers statistical inference, notably for fault detection and diagnosis. This brief shows that introducing the statistical local approach into nonlinear statistical process control produces statistics that follow a normal distribution, thereby enabling a simple statistical inference for fault detection. Further, for fault diagnosis, this brief introduces a compensation scheme that approximates the fault condition signature. Experimental results from a Volkswagen 1.9-L turbo-charged diesel engine are included.
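A one-dimensional caricature of why this eases inference is sketched below in Java: even when the score variable itself is clearly non-Gaussian, the scaled sum over a window is approximately normal, so a simple N(0,1) threshold applies. The score distribution, window length and fault size are illustrative and are not the paper's engine data or its full local-approach scheme.

```java
// One-dimensional caricature (not the paper's full scheme) of why the statistical
// local approach eases inference: even when the score variable s_k is clearly
// non-Gaussian, the scaled sum z = (1/sqrt(N)) * sum(s_k - mu0) is approximately
// Gaussian, so a simple N(0,1) threshold can be used for fault detection.
// The score distribution, window length and fault size below are illustrative.
import java.util.Random;

public class LocalApproachDemo {

    public static void main(String[] args) {
        Random rng = new Random(7);
        double mu0 = 1.0;                  // reference mean of the score under fault-free operation
        double sigma0 = Math.sqrt(2.0);    // reference standard deviation of the score
        int window = 100;                  // samples accumulated per monitoring decision

        System.out.println("fault-free windows (|z| > 3 would signal a fault):");
        for (int i = 0; i < 3; i++)
            System.out.printf("  z = %6.2f%n", improvedResidual(rng, mu0, sigma0, window, 0.0));

        System.out.println("faulty windows (mean of the score shifted by 0.5):");
        for (int i = 0; i < 3; i++)
            System.out.printf("  z = %6.2f%n", improvedResidual(rng, mu0, sigma0, window, 0.5));
    }

    // Accumulate a window of non-Gaussian scores (here the square of a standard
    // normal variable, so mean 1 and variance 2 when no fault is present) and
    // return the normalized "improved residual".
    static double improvedResidual(Random rng, double mu0, double sigma0, int n, double shift) {
        double sum = 0;
        for (int k = 0; k < n; k++) {
            double g = rng.nextGaussian();
            double score = g * g + shift;
            sum += score - mu0;
        }
        return sum / (Math.sqrt(n) * sigma0);
    }
}
```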
Abstract:
Treasure et al. (2004) recently proposed a new subspace-monitoring technique, based on the N4SID algorithm, within the multivariate statistical process control framework. This dynamic-monitoring method requires considerably fewer variables to be analysed when compared with dynamic principal component analysis (PCA). The contribution charts and variable reconstruction, traditionally employed for static PCA, are analysed in a dynamic context. The contribution charts and variable reconstruction may be affected by the ratio of the number of retained components to the total number of analysed variables. Particular problems arise if this ratio is large, and a new reconstruction chart is introduced to overcome them. The utility of such a dynamic contribution chart and variable reconstruction is shown in a simulation and by application to industrial data from a distillation unit.
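For context, the following Java sketch computes the per-variable contributions to the squared prediction error that underlie a traditional (static PCA) contribution chart; the loading vector and the sample are illustrative, and the dynamic, subspace-based version discussed in the paper is not reproduced.

```java
// Minimal sketch of a contribution chart of the kind traditionally used with
// static PCA monitoring: each variable's contribution to the squared prediction
// error (SPE) of a new sample.  The loading vector and the sample are illustrative;
// in practice the loadings come from PCA of historical in-control data.
public class ContributionChartDemo {

    public static void main(String[] args) {
        // One retained component over three (mean-centred, scaled) variables.
        double[] p = {0.577, 0.577, 0.577};          // unit-norm loading vector
        double[] x = {0.2, 0.1, 2.5};                // new sample; variable 3 is abnormal

        // Project onto the model and reconstruct: xHat_j = p_j * (p' x)
        double score = 0;
        for (int j = 0; j < x.length; j++) score += p[j] * x[j];

        double spe = 0;
        System.out.println("Variable contributions to SPE:");
        for (int j = 0; j < x.length; j++) {
            double residual = x[j] - p[j] * score;
            double contribution = residual * residual;
            spe += contribution;
            System.out.printf("  var %d: %.3f%n", j + 1, contribution);
        }
        System.out.printf("Total SPE = %.3f%n", spe);
    }
}
```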
Abstract:
In this letter, we demonstrate for the first time that gate misalignment is not a critical limiting factor for low voltage operation in gate-underlap double gate (DG) devices. Our results show that underlap architecture significantly extends the tolerable limit of gate misalignment in 25 nm devices. DG MOSFETs with a high degree of gate misalignment and optimal gate-underlap design can perform comparably or even better than self-aligned nonunderlap devices. Results show that the spacer-to-straggle (s/sigma) ratio, a key design parameter for underlap devices, should be within the range of 2.3-3.0 to accommodate back gate misalignment. These results are very significant as the stringent process control requirements for achieving self-alignment in nanoscale planar DG MOSFETs are considerably relaxed.
Abstract:
The tailpipe emissions from automotive engines have been subject to steadily reducing legislative limits. This reduction has been achieved through the addition of sub-systems to the basic four-stroke engine, which thereby increases its complexity. To ensure the entire system functions correctly, each system and/or sub-system needs to be continuously monitored for the presence of any faults or malfunctions. This is a requirement detailed within the On-Board Diagnostic (OBD) legislation. To date, a physical model approach has been adopted by the automotive industry for the monitoring requirement of OBD legislation. However, this approach is restricted by the available knowledge base and the computational load required. A neural network technique incorporating Multivariate Statistical Process Control (MSPC) has been proposed as an alternative method of building interrelationships between the measured variables and monitoring the correct operation of the engine. Building upon earlier work for steady-state fault detection, this paper details the use of non-linear models based on an Auto-associative Neural Network (ANN) for fault detection under transient engine operation. The theory and use of the technique are shown in this paper with the application to the detection of air leaks within the inlet manifold system of a modern gasoline engine whilst operated on a pseudo-drive cycle. Copyright © 2007 by ASME.
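A compact Java sketch of reconstruction-based fault detection in this spirit is given below; for brevity it trains a linear single-bottleneck network rather than a nonlinear auto-associative network, and the synthetic data, training settings and injected fault are illustrative assumptions, not the engine data from the paper.

```java
// Compact sketch of auto-associative (reconstruction-based) fault detection in the
// spirit of the paper's ANN + MSPC scheme.  For brevity a *linear* single-bottleneck
// network is trained here with plain gradient descent; the paper's network is
// nonlinear, and all data, weights and thresholds below are illustrative.
import java.util.Random;

public class AannFaultDetectionDemo {

    static double[] wIn = new double[3];    // encoder weights (3 inputs -> 1 bottleneck)
    static double[] wOut = new double[3];   // decoder weights (1 bottleneck -> 3 outputs)

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int j = 0; j < 3; j++) { wIn[j] = rng.nextGaussian() * 0.1; wOut[j] = rng.nextGaussian() * 0.1; }

        // Train on synthetic "normal operation" data in which the three measured
        // variables move together through one underlying factor.
        for (int step = 0; step < 20000; step++) {
            double[] x = sample(rng, 0.0);
            double s = dot(wIn, x);                        // bottleneck activation
            double[] e = new double[3];
            for (int j = 0; j < 3; j++) e[j] = wOut[j] * s - x[j];
            double grad = dot(wOut, e);
            double lr = 0.01;
            for (int j = 0; j < 3; j++) {
                wOut[j] -= lr * e[j] * s;
                wIn[j]  -= lr * grad * x[j];
            }
        }

        // Fault detection: a large squared prediction error (SPE) flags a sample
        // that breaks the learned correlation structure (e.g. an air leak).
        System.out.printf("SPE, normal sample: %.3f%n", spe(sample(rng, 0.0)));
        System.out.printf("SPE, faulty sample: %.3f%n", spe(new double[]{1.0, 2.0, 3.0}));
    }

    static double[] sample(Random rng, double fault) {
        double t = rng.nextGaussian();
        return new double[]{t + 0.05 * rng.nextGaussian(),
                            2 * t + 0.05 * rng.nextGaussian(),
                            -t + fault + 0.05 * rng.nextGaussian()};
    }

    static double spe(double[] x) {
        double s = dot(wIn, x);
        double err = 0;
        for (int j = 0; j < 3; j++) { double d = wOut[j] * s - x[j]; err += d * d; }
        return err;
    }

    static double dot(double[] a, double[] b) {
        double r = 0;
        for (int j = 0; j < a.length; j++) r += a[j] * b[j];
        return r;
    }
}
```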
Abstract:
This case study examines how the lean ideas behind the Toyota production system can be applied to software project management. It is a detailed investigation of the performance of a nine-person software development team employed by BBC Worldwide based in London. The data collected in 2009 involved direct observations of the development team, the kanban boards, the daily stand-up meetings, semistructured interviews with a wide variety of staff, and statistical analysis. The evidence shows that over the 12-month period, lead time to deliver software improved by 37%, consistency of delivery rose by 47%, and defects reported by customers fell 24%. The significance of this work is showing that the use of lean methods including visual management, team-based problem solving, smaller batch sizes, and statistical process control can improve software development. It also summarizes key differences between agile and lean approaches to software development. The conclusion is that the performance of the software development team was improved by adopting a lean approach. The faster delivery with a focus on creating the highest value to the customer also reduced both technical and market risks. The drawbacks are that it may not fit well with existing corporate standards.
Abstract:
In the present work, by investigating the influence of source/drain (S/D) extension region engineering (also known as gate-underlap architecture) in planar Double Gate (DG) SOI MOSFETs, we offer new design insights to achieve high tolerance to gate misalignment/oversize in nanoscale devices for ultra-low-voltage (ULV) analog/rf applications. Our results show that (i) misaligned gate-underlap devices perform significantly better than DG devices with abrupt source/drain junctions with identical misalignment, (ii) misaligned gate-underlap performance (with S/D optimization) exceeds that of perfectly aligned DG devices with abrupt S/D regions and (iii) 25% back gate misalignment can be tolerated without any significant degradation in cut-off frequency (f(T)) and intrinsic voltage gain (A(VO)). Gate-underlap DG devices designed with a spacer-to-straggle ratio lying within the range 2.5 to 3.0 show the best tolerance to a misaligned/oversized back gate and indeed are better than self-aligned DG MOSFETs with non-underlap (abrupt) S/D regions. The impact of gate length and silicon film thickness scaling is also discussed. These results are very significant as the tolerable limit of a misaligned/oversized back gate is considerably extended and the stringent process control requirements to achieve self-alignment can be relaxed for nanoscale planar ULV DG MOSFETs operating in the weak-inversion region. The present work provides new opportunities for realizing future ULV analog/rf design with nanoscale gate-underlap DG MOSFETs. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
The use of carbon fibre composites is growing in many sectors, but their use remains stronger in very high-value industries such as aerospace, where the demands of the application more easily justify the high energy input needed and the corresponding costs incurred. This energy and cost input is returned through gains over the whole life of the product, with, for example, longer maintenance intervals for an aircraft and lower fuel burn. Thermoplastic composites, however, have a different energy and cost profile compared to traditional thermosets, with notable differences in recyclability, but this profile is not well quantified or documented. This study considers the key process control parameters and identifies an optimal window for processing, along with the effect this has on the final characteristics of the manufactured parts. Interactions between parameters and corresponding sensitivities are extracted from the results.