39 results for Process control -- Data processing
Abstract:
Nonlinear principal component analysis (PCA) based on neural networks has drawn significant attention as a monitoring tool for complex nonlinear processes, but there remains the difficulty of determining the optimal network topology. This paper exploits the advantages of the Fast Recursive Algorithm, whereby the number of nodes, the location of centres, and the weights between the hidden layer and the output layer can be identified simultaneously for radial basis function (RBF) networks. The topology problem for neural-network-based nonlinear PCA can thus be solved. Another problem with nonlinear PCA is that the derived nonlinear scores may not be statistically independent or follow a simple parametric distribution. This hinders its application in process monitoring, since the simplicity of applying predetermined probability distribution functions is lost. This paper proposes the use of a support vector data description and shows that transforming the nonlinear principal components into a feature space allows simple statistical inference. Results from both simulated and industrial data confirm the efficacy of the proposed method for solving nonlinear principal component problems, compared with linear PCA and kernel PCA.
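To make the monitoring idea concrete, the following is a minimal sketch, not the authors' method: kernel PCA stands in for the paper's FRA-trained RBF-network PCA, and scikit-learn's OneClassSVM with an RBF kernel (equivalent to a support vector data description) supplies the boundary on the nonlinear scores. All data and parameter values are invented for illustration.

```python
# Illustrative sketch only: kernel PCA stands in for the paper's
# RBF-network nonlinear PCA, and a one-class SVM with RBF kernel
# (equivalent to SVDD) provides the monitoring boundary.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))            # normal operating data
X_test = rng.normal(loc=3.0, size=(20, 5))     # simulated fault data

# Extract nonlinear principal components (scores).
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
T_train = kpca.fit_transform(X_train)
T_test = kpca.transform(X_test)

# SVDD-style boundary around the training scores: samples outside the
# learned region are flagged as faults (-1), inliers as normal (+1).
svdd = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(T_train)
print("flagged as faulty:", (svdd.predict(T_test) == -1).sum(), "of", len(X_test))
```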
Abstract:
Chemical Imaging (CI) is an emerging platform technology that integrates conventional imaging and spectroscopy to attain both spatial and spectral information from an object. Vibrational spectroscopic methods, such as Near Infrared (NIR) and Raman spectroscopy, combined with imaging are particularly useful for the analysis of biological/pharmaceutical forms. The rapid, non-destructive and non-invasive features of CI mark its potential suitability as a process analytical tool for the pharmaceutical industry, for both process monitoring and quality control in the many stages of drug production. This paper provides an overview of CI principles, instrumentation and analysis. Recent applications of Raman and NIR-CI to pharmaceutical quality and process control are presented; challenges facing CI implementation and likely future developments in the technology are also discussed.
Abstract:
This paper proposes max separation clustering (MSC), a new non-hierarchical clustering method used for feature extraction from optical emission spectroscopy (OES) data for plasma etch process control applications. OES data are high-dimensional and inherently highly redundant, with the result that it is difficult, if not impossible, to recognize useful features and key variables by direct visualization. MSC is developed for clustering variables with distinctive patterns and providing effective pattern representation by a small number of representative variables. The relationship between signal-to-noise ratio (SNR) and clustering performance is highlighted, leading to a requirement that low-SNR signals be removed before applying MSC. Experimental results on industrial OES data show that MSC with low-SNR signal removal produces an effective summarization of the dominant patterns in the data.
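The pre-processing step the abstract emphasises can be sketched as follows; MSC itself is not reproduced here, and generic hierarchical clustering on correlation distance stands in for the variable-grouping stage. Signal shapes, the SNR estimator and all thresholds are assumptions.

```python
# Hypothetical illustration: estimate a per-channel SNR for OES-like
# signals and drop low-SNR channels before clustering variables by
# pattern similarity. (Average-linkage clustering on correlation
# distance is a stand-in, not MSC.)
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
time, n_channels = 500, 40
base = np.sin(np.linspace(0, 6 * np.pi, time))
X = np.outer(base, rng.uniform(0.1, 2.0, n_channels)) + rng.normal(
    scale=rng.uniform(0.05, 1.5, n_channels), size=(time, n_channels))

# Per-channel SNR in dB: signal variance over a noise-variance estimate
# taken from first differences (a rough high-frequency noise proxy).
noise_var = np.var(np.diff(X, axis=0), axis=0) / 2.0
snr = 10 * np.log10(np.var(X, axis=0) / noise_var)
keep = snr > 3.0                      # remove low-SNR channels first
Xk = X[:, keep]

# Cluster the remaining variables on correlation distance.
dist = 1.0 - np.abs(np.corrcoef(Xk.T))
labels = fcluster(linkage(dist[np.triu_indices_from(dist, 1)], "average"),
                  t=0.3, criterion="distance")
print("channels kept:", keep.sum(), "clusters:", labels.max())
```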
Abstract:
A reduction in the time required to locate and restore faults on a utility's distribution network improves the customer minutes lost (CML) measurement and hence brings direct cost savings to the operating company. The traditional approach to fault location involves fault impedance determination from high-volume waveform files dispatched across a communications channel to a central location for processing and analysis. This paper examines an alternative scheme where data processing is undertaken locally within a recording instrument, thus reducing the volume of data to be transmitted. Processed event fault reports may be emailed to relevant operational staff for the timely repair and restoration of the line.
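A minimal sketch of the kind of local processing the abstract describes: extract fundamental-frequency voltage and current phasors from a recorded waveform and estimate distance to fault from the apparent reactance, so that only a small report, not the raw waveform, need be transmitted. The line parameters and signals below are invented, and this single-ended impedance method is a generic illustration, not the paper's scheme.

```python
import numpy as np

fs, f0 = 5000.0, 50.0                  # sample rate and system frequency, Hz
t = np.arange(0, 0.1, 1 / fs)
v = 10e3 * np.sin(2 * np.pi * f0 * t)               # phase voltage, V (synthetic)
i = 800 * np.sin(2 * np.pi * f0 * t - 1.2)          # fault current, A (synthetic)

def phasor(x, fs, f0):
    """Fundamental-frequency phasor via single-bin DFT over whole cycles."""
    n = int(fs / f0) * int(len(x) * f0 / fs)        # truncate to whole cycles
    k = np.exp(-2j * np.pi * f0 * np.arange(n) / fs)
    return 2 * np.mean(x[:n] * k)

Z = phasor(v, fs, f0) / phasor(i, fs, f0)           # apparent impedance
x_per_km = 0.35                                     # line reactance, ohm/km (assumed)
print(f"estimated distance to fault: {Z.imag / x_per_km:.1f} km")
```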
Abstract:
With security and surveillance, there is an increasing need to be able to process image data efficiently and effectively, either at source or in large data networks. Whilst Field Programmable Gate Arrays (FPGAs) have been seen as a key technology for enabling this, they are typically programmed using high-level and/or hardware description language synthesis approaches; this carries a major disadvantage in terms of the time needed to design or program them and to verify correct operation, and it considerably reduces the programmability of any technique based on this technology. The work here proposes a different approach of using optimised soft-core processors which can be programmed in software. In particular, the paper proposes a design tool chain for programming such processors that uses the CAL Actor Language as a starting point for describing an image processing algorithm and targets its implementation to these custom-designed, soft-core processors on FPGA. The main purpose is to exploit task and data parallelism in order to achieve the same parallelism as a previous HDL implementation, while avoiding the design, verification and debugging steps associated with such approaches.
Abstract:
Field programmable gate array devices boast abundant resources with which custom accelerator components for signal, image and data processing may be realised; however, realising high-performance, low-cost accelerators currently demands manual register-transfer-level design. Software-programmable 'soft' processors have been proposed as a way to reduce this design burden, but they are unable to support performance and cost comparable to custom circuits. This paper proposes a new soft processing approach for FPGA which promises to overcome this barrier. A high-performance, fine-grained streaming processor, known as a Streaming Accelerator Element, is proposed, which realises accelerators as large-scale custom multicore networks. By adopting a streaming execution approach with advanced program control and memory addressing capabilities, typical program inefficiencies can be almost completely eliminated, enabling performance and cost which are unprecedented amongst software-programmable solutions. When used to realise accelerators for fast Fourier transform, motion estimation, matrix multiplication and Sobel edge detection, the proposed architecture is shown to enable real-time operation with performance and cost comparable with hand-crafted custom circuit accelerators and up to two orders of magnitude beyond existing soft processors.
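For reference, one of the benchmark kernels named in the abstract, Sobel edge detection, is small enough to show in full. This is a plain software model of the arithmetic; on the proposed architecture it would execute as a streaming multicore network, which is not modelled here.

```python
# Reference-level Sobel edge detection: 3x3 gradient convolutions in
# x and y, then the per-pixel gradient magnitude.
import numpy as np
from scipy.signal import convolve2d

def sobel_magnitude(img):
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gy = gx.T
    ex = convolve2d(img, gx, mode="same", boundary="symm")
    ey = convolve2d(img, gy, mode="same", boundary="symm")
    return np.hypot(ex, ey)

img = np.random.default_rng(2).random((64, 64))   # synthetic test frame
edges = sobel_magnitude(img)
print(edges.shape, f"max gradient {edges.max():.2f}")
```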
Abstract:
With security and surveillance, there is an increasing need to process image data efficiently and effectively, either at source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, the design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach of using optimized FPGA-based soft-core processors, which allows the user to exploit task- and data-level parallelism to achieve the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports some preliminary progress on the design flow to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, which shows that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design, verification and debugging steps associated with conventional FPGA implementations.
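A rough software model of the Histogram of Gradients computation benchmarked above may help fix ideas: per-cell orientation histograms of image gradients. Details such as block normalisation are omitted, and the cell size and bin count are assumptions, not values from the paper.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Per-cell unsigned gradient-orientation histograms (simplified)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180        # unsigned orientation
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for r in range(h // cell):
        for c in range(w // cell):
            sl = np.s_[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            idx = (ang[sl] / (180 / bins)).astype(int) % bins
            np.add.at(hist[r, c], idx.ravel(), mag[sl].ravel())
    return hist

frame = np.random.default_rng(3).random((64, 64))     # synthetic frame
print(hog_cells(frame).shape)                         # (8, 8, 9) cell histograms
```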
Abstract:
Treasure et al. (2004) recently proposed a new subspace-monitoring technique, based on the N4SID algorithm, within the multivariate statistical process control framework. This dynamic monitoring method requires considerably fewer variables to be analysed when compared with dynamic principal component analysis (PCA). The contribution charts and variable reconstruction, traditionally employed for static PCA, are analysed here in a dynamic context. Both may be affected by the ratio of the number of retained components to the total number of analysed variables. Particular problems arise if this ratio is large, and a new reconstruction chart is introduced to overcome them. The utility of such a dynamic contribution chart and variable reconstruction is shown in a simulation and by application to industrial data from a distillation unit.
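A minimal sketch of the static-PCA contribution analysis that the paper extends to the dynamic/subspace case: a T-squared statistic from the retained scores, then per-variable contributions for a flagged sample. The data, limits and fault are invented, and the paper's new reconstruction chart is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))
X -= X.mean(0); X /= X.std(0)                 # autoscale training data

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                         # retained components
P = Vt[:k].T                                  # loading matrix
lam = (s[:k] ** 2) / (len(X) - 1)             # score variances

x_new = rng.normal(size=6) + np.array([0, 4, 0, 0, 0, 0])  # simulated fault
t = P.T @ x_new                               # scores of the new sample
T2 = np.sum(t**2 / lam)                       # Hotelling T^2

# Per-variable contribution to T^2 (one common definition).
contrib = x_new * (P @ (t / lam))
print(f"T2 = {T2:.1f}; largest contributor: variable {np.argmax(contrib)}")
```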
Abstract:
Chitosan nanoparticles fabricated via different preparation protocols have in recent years been widely studied as carriers for therapeutic proteins and genes, with varying degrees of effectiveness and drawbacks. This work seeks to further explore the polyionic coacervation fabrication process, and the associated processing conditions under which protein encapsulation and subsequent release can be systematically and predictably manipulated so as to obtain the desired effectiveness. BSA was used as a model protein, encapsulated by either the incorporation or the incubation method, using the polyanion tripolyphosphate (TPP) as the coacervation crosslinking agent to form chitosan-BSA-TPP nanoparticles. The BSA-loaded chitosan-TPP nanoparticles were characterized for particle size, morphology, zeta potential, BSA encapsulation efficiency, and subsequent release kinetics, which were found to depend predominantly on chitosan molecular weight, chitosan concentration, BSA loading concentration, and chitosan/TPP mass ratio. The BSA-loaded nanoparticles prepared under varying conditions were in the size range of 200-580 nm and exhibited a high positive zeta potential. Detailed sequential time-frame TEM imaging of morphological change of the BSA-loaded particles showed a swelling and particle degradation process. Initial burst release, due to surface protein desorption and diffusion from sublayers, did not relate directly to change of particle size and shape, which became apparent only after 6 h. It is also notable that later-stage particle degradation and disintegration did not yield a substantial follow-on release, as the remaining protein molecules, with adaptable 3-D conformation, could be tightly bound to and entangled with the cationic chitosan chains. In general, this study demonstrated that the polyionic coacervation process for fabricating protein-loaded chitosan nanoparticles offers simple preparation conditions and a clear processing window for manipulating the physicochemical properties of the nanoparticles (e.g., size and surface charge), which can be conditioned to exert control over protein encapsulation efficiency and the subsequent release profile. The weakness of the chitosan nanoparticle system lies typically in the difficulty of controlling the initial burst effect, in which large quantities of protein molecules are released.
Abstract:
This case study examines how the lean ideas behind the Toyota production system can be applied to software project management. It is a detailed investigation of the performance of a nine-person software development team employed by BBC Worldwide, based in London. The data, collected in 2009, involved direct observations of the development team, the kanban boards and the daily stand-up meetings, semi-structured interviews with a wide variety of staff, and statistical analysis. The evidence shows that over the 12-month period, lead time to deliver software improved by 37%, consistency of delivery rose by 47%, and defects reported by customers fell by 24%. The significance of this work lies in showing that the use of lean methods, including visual management, team-based problem solving, smaller batch sizes, and statistical process control, can improve software development. It also summarizes key differences between agile and lean approaches to software development. The conclusion is that the performance of the software development team was improved by adopting a lean approach. The faster delivery, with a focus on creating the highest value for the customer, also reduced both technical and market risks. A drawback is that the approach may not fit well with existing corporate standards.
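Since statistical process control is among the lean methods credited, a minimal individuals (XmR) chart on story lead times shows the style of analysis involved; the lead-time figures below are invented, not the BBC Worldwide data.

```python
import numpy as np

lead_days = np.array([12, 9, 14, 11, 10, 8, 13, 9, 7, 10, 22, 9])  # hypothetical
mr = np.abs(np.diff(lead_days))                # moving ranges
centre = lead_days.mean()
ucl = centre + 2.66 * mr.mean()                # standard XmR control limits
lcl = max(centre - 2.66 * mr.mean(), 0)

for i, x in enumerate(lead_days):
    flag = " <-- out of control" if not (lcl <= x <= ucl) else ""
    print(f"item {i:2d}: {x:2d} days{flag}")
print(f"centre={centre:.1f}, limits=[{lcl:.1f}, {ucl:.1f}]")
```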
Abstract:
A time-of-flight (ToF) mass spectrometer suitable, in terms of sensitivity, detector response and time resolution, for application in fast-transient Temporal Analysis of Products (TAP) kinetic catalyst characterization is reported. Technical difficulties associated with such an application, as well as the solutions implemented in terms of adaptations of the ToF apparatus, are discussed. The performance of the ToF was validated, and the full linearity of the detector over the full dynamic range was verified in order to ensure its applicability for TAP experiments. The reported TAP-ToF setup is the first system to achieve a level of sensitivity that allows the full 0-200 AMU range to be monitored simultaneously with sub-millisecond time resolution. In this new setup, the high sensitivity allows the use of low-intensity pulses, ensuring that transport through the reactor occurs in the Knudsen diffusion regime and that the data can, therefore, be fully analysed using the reported theoretical TAP models and data processing.
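The theoretical TAP model most often used in the Knudsen regime is the standard diffusion curve for a one-zone reactor; a sketch follows, assuming the usual dimensionless form with tau = D*t/(eps*L^2). This is background material, not the paper's data processing.

```python
import numpy as np

def standard_diffusion_curve(tau, n_terms=50):
    """Dimensionless exit flow for pure Knudsen diffusion (series form)."""
    n = np.arange(n_terms)[:, None]
    return np.pi * np.sum((-1.0)**n * (2*n + 1)
                          * np.exp(-(n + 0.5)**2 * np.pi**2 * tau), axis=0)

tau = np.linspace(0.01, 1.5, 300)
f = standard_diffusion_curve(tau)
# The textbook curve peaks at roughly 1.85 near tau = 1/6.
print(f"peak height {f.max():.2f} at tau = {tau[f.argmax()]:.2f}")
```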
Abstract:
In polymer extrusion, the delivery of a melt which is homogeneous in composition and temperature is paramount for achieving high-quality extruded products. However, advances in process control are required to reduce temperature variations across the melt flow, which can result in poor product quality. The majority of thermal monitoring methods provide only low-accuracy point or bulk melt-temperature measurements, leading to poor controller performance. Furthermore, the most common conventional proportional-integral-derivative controllers seem incapable of performing well over the nonlinear operating region. This paper presents a model-based fuzzy control approach to reduce die melt temperature variations across the melt flow while achieving the desired average die melt temperature. Simulation results confirm the efficacy of the proposed controller.
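The paper's model-based fuzzy controller is not reproduced here; the following is only a generic sketch of the fuzzy-inference style such controllers build on: triangular membership functions on the melt-temperature error and a weighted-average (Sugeno-style) defuzzification. All membership breakpoints and output values are assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_heater_adjust(error_C):
    """Map temperature error (setpoint - measured, degC) to a power change."""
    mu = np.array([tri(error_C, -10, -5, 0),    # "too hot"
                   tri(error_C, -5, 0, 5),      # "on target"
                   tri(error_C, 0, 5, 10)])     # "too cold"
    outputs = np.array([-2.0, 0.0, 2.0])        # % heater power change per rule
    return float(mu @ outputs / mu.sum()) if mu.sum() else 0.0

for e in (-6.0, -1.0, 0.0, 3.0):
    print(f"error {e:+.1f} degC -> power change {fuzzy_heater_adjust(e):+.2f} %")
```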
Abstract:
The work presented in this paper takes advantage of newly developed instrumentation suitable for in-process monitoring of an industrial stretch blow molding machine. The instrumentation provides blowing pressure and stretch-rod force histories, along with the kinematics of polymer contact with the mold wall. A Design of Experiments pattern was used to qualitatively relate machine inputs to these process parameters and to the thickness distribution of stretch blow molded PET (polyethylene terephthalate) bottles. Material slippage at the mold wall and thickness distribution are also discussed in relation to machine inputs. The key process indicators defined have great potential for use in a closed-loop process control system and for validation of process simulations.
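To illustrate the Design of Experiments style of analysis mentioned, here is a sketch of extracting main effects from a two-level factorial relating machine inputs to wall thickness. The factors, levels and responses are all hypothetical, not the paper's experiment.

```python
import numpy as np

# Coded levels for two assumed factors: preblow pressure, stretch-rod speed.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], float)
thickness_mm = np.array([0.32, 0.27, 0.35, 0.29])       # invented responses

# Main effect = mean response at +1 minus mean response at -1.
effects = 2 * X.T @ thickness_mm / len(thickness_mm)
for name, e in zip(["pressure", "rod speed"], effects):
    print(f"{name}: {e:+.3f} mm per coded unit swing")
```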
Abstract:
The use of carbon fibre composites is growing in many sectors, but their use remains strongest in very high-value industries such as aerospace, where the demands of the application more easily justify the high energy input needed and the corresponding costs incurred. This energy and cost input is returned through gains over the whole life of the product, with, for example, longer maintenance intervals for an aircraft and lower fuel burn. Thermoplastic composites, however, have a different energy and cost profile compared to traditional thermosets, with notable differences in recyclability, but this profile is not well quantified or documented. This study considers the key process control parameters and identifies an optimal processing window, along with the effect this has on the final characteristics of the manufactured parts. Interactions between parameters and the corresponding sensitivities are extracted from the results.