936 results for Process control -- Data processing
Abstract:
This paper concerns the development of a stable model predictive controller (MPC) to be integrated with real-time optimization (RTO) in the control structure of a process system with stable and integrating outputs. The real-time process optimizer produces optimal targets for the system inputs and outputs that should be dynamically implemented by the MPC controller. This paper builds on a previous work (Comput. Chem. Eng. 2005, 29, 1089) in which a nominally stable MPC was proposed for systems with the conventional control approach, where only the outputs have set points. It also builds on the work of Gonzalez et al. (J. Process Control 2009, 19, 110), where the zone control of stable systems is studied. The new control law is obtained by defining an extended control objective that includes input targets and zone control of the outputs. Additional decision variables are also defined to enlarge the set of feasible solutions to the control problem. The hard constraints resulting from the cancellation of the integrating modes at the end of the control horizon are softened, and the resulting control problem is made feasible for a large class of unknown disturbances and changes of the optimizing targets. The methods are illustrated with a simulated application of the proposed approaches to a distillation column of the oil refining industry.
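For reference, the kind of extended objective described here can be sketched generically as follows; the notation (an output set-point variable y_sp,k confined to the output zones, an RTO input target u_des, and a slack δ_k softening the terminal constraint on the integrating modes) is illustrative and not necessarily the authors' exact formulation.

```latex
% Generic zone-MPC objective with input targets (illustrative notation only):
\min_{\Delta u,\; y_{sp,k},\; \delta_k}\;
  \sum_{j=1}^{p} \big\| y(k+j\,|\,k) - y_{sp,k} \big\|_{Q_y}^{2}
+ \sum_{j=0}^{m-1} \big\| u(k+j\,|\,k) - u_{des} \big\|_{Q_u}^{2}
+ \sum_{j=0}^{m-1} \big\| \Delta u(k+j\,|\,k) \big\|_{R}^{2}
+ \big\| \delta_k \big\|_{S}^{2},
\qquad \text{s.t.}\quad y_{min} \le y_{sp,k} \le y_{max}
```

Minimizing over y_sp,k as well as the input moves is what produces the zone behaviour and the enlarged feasible set: inside the zone the output term can be driven to zero by the set-point variable itself.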
Abstract:
Several MPC applications implement a control strategy in which some of the system outputs are controlled within specified ranges or zones, rather than at fixed set points [J.M. Maciejowski, Predictive Control with Constraints, Prentice Hall, New Jersey, 2002]. This means that these outputs are treated as controlled variables only when their predicted future values lie outside the boundaries of their corresponding zones. Zone control is usually implemented by selecting an appropriate weighting matrix for the output error in the control cost function. When an output prediction is inside its zone, the corresponding weight is zeroed, so that the controller ignores this output. When the output prediction lies outside the zone, the error weight is set to a specified value and the distance between the output prediction and the boundary of the zone is minimized. The main problem with this approach, as far as closed-loop stability is concerned, is that each time an output switches from non-controlled to controlled status, or vice versa, a different linear controller is activated. Thus, throughout the continuous operation of the process, the control system keeps switching from one controller to another. Even if a stabilizing control law is developed for each of the control configurations, switching among stable controllers does not necessarily produce a stable closed-loop system. Here, a stable MPC is developed for the zone control of open-loop stable systems. Focusing on the practical application of the proposed controller, it is assumed that the control structure of the process system includes an upper optimization layer that defines optimal targets for the system inputs. The performance of the proposed strategy is illustrated by simulation of a subsystem of an industrial FCC system. (C) 2008 Elsevier Ltd. All rights reserved.
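The weight-switching mechanism described above can be made concrete with a minimal sketch (Python, with illustrative names; this is the standard zone-weighting logic the paper starts from, not the stable controller it proposes):

```python
import numpy as np

def zone_error_and_weight(y_pred, y_min, y_max, w_active):
    """Standard zone-control logic: outputs predicted inside their zone get
    zero weight (the controller ignores them); outputs outside get weight
    w_active and an error equal to the distance to the nearest zone boundary."""
    y_pred = np.asarray(y_pred, dtype=float)
    below = y_pred < y_min
    above = y_pred > y_max
    error = np.where(below, y_pred - y_min,
                     np.where(above, y_pred - y_max, 0.0))
    weight = np.where(below | above, w_active, 0.0)
    return error, weight

# Zone [0, 1] for both outputs: only the violating output is penalized.
e, w = zone_error_and_weight([0.5, 1.3], y_min=0.0, y_max=1.0, w_active=10.0)
print(e, w)  # e = [0, 0.3], w = [0, 10]
```

Each sign change of `below`/`above` corresponds to one of the controller switches discussed above, which is why stability of each fixed configuration alone is not enough.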
Abstract:
For the optimal design of plate heat exchangers (PHEs), an accurate thermal-hydraulic model that takes into account the effect of the flow arrangement on the heat load and pressure drop is necessary. In the present study, the effect of the flow arrangement on the pressure drop of a PHE is investigated. Thirty-two different arrangements were experimentally tested using a laboratory-scale PHE with flat plates. The experimental data were used for (a) determination of an empirical correlation for the effect of the number of passes and the number of flow channels per pass on the pressure drop; (b) validation of a friction factor model through parameter estimation; and (c) comparison with the simulation results obtained with a CFD (computational fluid dynamics) model of the PHE. All three approaches resulted in good agreement between experimental and predicted values of pressure drop. Moreover, the CFD model is used for evaluating the flow maldistribution in a PHE with two channels per pass. (c) 2008 Elsevier Ltd. All rights reserved.
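A friction-factor calculation of the type validated in item (b) can be sketched as follows (Python; the power-law coefficients are placeholders, not the values estimated in the study):

```python
def phe_pressure_drop(m_dot, n_passes, n_channels_per_pass, rho, mu,
                      d_h, l_plate, a_channel, a_coef=1.0, b_coef=0.25):
    """Channel-side pressure drop of a plate heat exchanger with a Fanning-type
    friction factor f = a_coef * Re**(-b_coef) (assumed form, placeholder
    coefficients). Passes in series simply add their channel pressure drops;
    port losses and maldistribution are neglected."""
    m_channel = m_dot / n_channels_per_pass        # mass flow per channel [kg/s]
    v = m_channel / (rho * a_channel)              # mean channel velocity [m/s]
    re = rho * v * d_h / mu                        # channel Reynolds number
    f = a_coef * re ** (-b_coef)                   # friction factor
    dp_channel = 2.0 * f * (l_plate / d_h) * rho * v ** 2
    return n_passes * dp_channel                   # total pressure drop [Pa]

# Example: a water-like stream through a 2-pass, 4-channels-per-pass arrangement.
print(phe_pressure_drop(0.8, 2, 4, rho=997.0, mu=8.9e-4,
                        d_h=7e-3, l_plate=0.5, a_channel=3.5e-4))
```

The arrangement effect enters through the per-channel velocity and the series summation over passes, which is precisely what the empirical correlation in item (a) targets.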
Abstract:
The aim of this paper is to present an economical design of an X chart for short-run production. The process mean starts at μ0 (in control, State I) and at a random time shifts to μ1 > μ0 (out of control, State II). The monitoring procedure consists of inspecting a single item every m produced items. If the measurement of the quality characteristic does not fall within the control limits, the process is stopped and adjusted, and an additional (r - 1) items are inspected retrospectively. The probabilistic model was developed considering only shifts in the process mean. A direct search technique is applied to find the optimum parameters that minimize the expected cost function. Numerical examples illustrate the proposed procedure. (C) 2009 Elsevier B.V. All rights reserved.
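The direct-search step can be illustrated with a deliberately simplified stand-in for the expected cost (Python; the cost components and their values are placeholders, not the paper's probabilistic model):

```python
from statistics import NormalDist

ND = NormalDist()

def expected_cost(m, k, delta=1.0, c_sample=1.0, c_false=100.0, c_shift=5.0):
    """Placeholder cost per produced item for an individuals chart with
    +/- k sigma limits, sampling one item every m produced, and a mean
    shift of delta sigma. Illustrative only; not the paper's cost model."""
    alpha = 2.0 * (1.0 - ND.cdf(k))                         # false-alarm prob. per sample
    power = 1.0 - (ND.cdf(k - delta) - ND.cdf(-k - delta))  # detection prob. per sample
    items_to_detect = m / power            # expected items produced before detection
    return c_sample / m + c_false * alpha / m + c_shift * items_to_detect / 1000.0

# Direct (exhaustive) search over the sampling interval m and limit width k.
best = min(((m, k) for m in range(5, 101, 5) for k in (2.0, 2.5, 3.0, 3.5)),
           key=lambda mk: expected_cost(*mk))
print("optimum (m, k):", best)
```

A full design would search over all chart parameters, including the retrospective sample size r, against the complete two-state renewal model.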
Abstract:
We investigate here a modification of the discrete random pore model [Bhatia SK, Vartak BJ, Carbon 1996;34:1383], including an additional rate constant that takes into account the different reactivity of the initial pore surface, with its attached functional groups and hydrogens, relative to the subsequently exposed surface. It is observed that the relative initial reactivity has a significant effect on the conversion and structural evolution, underscoring the importance of initial surface chemistry. The model is tested against experimental data on chemically controlled char oxidation and steam gasification at various temperatures. It is seen that the variations of the reaction rate and surface area with conversion are better represented by the present approach than by earlier random pore models. The results clearly indicate the improvement of the model predictions in the low-conversion region, where the effect of the initially attached functional groups and hydrogens is more significant, particularly for char oxidation. It is also seen that, for the data examined, the initial surface chemistry is less important for steam gasification than for the oxidation reaction. Further development of the approach must also incorporate the dynamics of surface complexation, which is not considered here.
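For context, the classical random pore model that this line of work modifies has the well-known chemically controlled rate form below; the variant studied here, per the abstract, adds a second rate constant that weights the initial surface differently from the subsequently exposed surface.

```latex
% Random pore model (Bhatia & Perlmutter), chemically controlled regime:
% X is conversion and \psi the pore structure parameter.
\frac{dX}{dt} = k\,(1 - X)\sqrt{1 - \psi \ln(1 - X)}
```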
Abstract:
The classical model of surface layering followed by capillary condensation during adsorption in mesopores is modified here by consideration of the adsorbate-solid interaction potential. The new theory accurately predicts the capillary coexistence curve as well as pore criticality, matching the predictions of density functional theory. The model also satisfactorily predicts the isotherm for nitrogen adsorption at 77.4 K on MCM-41 materials of various pore sizes, synthesized and characterized in our laboratory, including the multilayer region, using only data on the variation of condensation pressure with pore diameter. The results indicate a minimum mesopore diameter of 14.1 Å for the surface layering model to hold, below which micropore filling must occur, and a minimum pore diameter of 34.2 Å for mechanical stability of the hemispherical meniscus during desorption. For pores between these two sizes, reversible condensation is predicted to occur, in accord with the experimental data for nitrogen adsorption on MCM-41 at 77.4 K.
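For orientation, the classical starting point for condensation in a cylindrical mesopore is the Kelvin equation below; the theory described above corrects this picture with the adsorbate-solid potential, which is what shifts the predicted condensation pressures and stability limits.

```latex
% Kelvin equation for a cylindrical core of radius r_c
% (\gamma: surface tension, V_L: molar volume of the liquid adsorbate):
\ln\frac{P}{P_0} = -\,\frac{2\,\gamma\,V_L}{r_c\,R\,T}
```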
Abstract:
The dimensionless spray flux Ψa is a group that combines the three most important variables in liquid dispersion: flowrate, drop size and powder flux through the spray zone. In this paper, the Poisson distribution was used to derive analytical solutions for the proportion of nuclei formed from single drops (fsingle) and the fraction of the powder surface covered by drops (fcovered) as functions of Ψa. Monte-Carlo simulations were performed to simulate the spray zone and investigate how Ψa, fsingle and fcovered are related. The Monte-Carlo data matched the analytical solutions for fcovered and fsingle as functions of Ψa extremely well. At low Ψa, the proportion of the surface covered by drops (fcovered) is equal to Ψa. As Ψa increases, drop overlap becomes more dominant and the powder surface coverage levels off. The proportion of nuclei formed from single drops (fsingle) falls exponentially with increasing Ψa. In the ranges covered, these results were independent of drop size, number of drops, drop size distribution (mono-sized, bimodal and trimodal distributions), and the uniformity of the spray. Experimental data on nuclei size distributions as a function of spray flux were fitted to the analytical solution for fsingle by defining a cut size for single-drop nuclei. The fitted cut sizes followed the spray drop sizes, suggesting that the method is robust and that the cut size does indicate the transition size between single-drop and agglomerate nuclei. This demonstrates that the nuclei distribution is determined by the dimensionless spray flux and that the fraction of drop-controlled nuclei can be calculated analytically in advance.
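The Monte-Carlo side of this argument is easy to reconstruct generically (Python; a sketch in the spirit of the paper's simulations, not the authors' code). For Poisson-placed mono-sized drops the closed forms are fcovered = 1 − exp(−Ψa) and fsingle = exp(−4Ψa), and the simulation below recovers both:

```python
import numpy as np

rng = np.random.default_rng(0)

def spray_zone_mc(psi_a, r=1.0, side=40.0, n_probe=20_000):
    """Place mono-sized circular drop footprints (radius r) uniformly at
    random on a side x side powder area so that total footprint area per
    unit powder area equals psi_a, then estimate f_covered (probe points
    under at least one drop) and f_single (drops overlapping no other
    drop). Edge effects are ignored."""
    area = side ** 2
    n_drops = rng.poisson(psi_a * area / (np.pi * r ** 2))
    centers = rng.uniform(0.0, side, size=(n_drops, 2))

    probes = rng.uniform(0.0, side, size=(n_probe, 2))
    # Squared probe-to-centre distances without a 3-D intermediate array.
    d2 = ((probes ** 2).sum(1)[:, None] + (centers ** 2).sum(1)[None, :]
          - 2.0 * probes @ centers.T)
    f_covered = (d2.min(axis=1) < r ** 2).mean()

    dd = ((centers ** 2).sum(1)[:, None] + (centers ** 2).sum(1)[None, :]
          - 2.0 * centers @ centers.T)
    np.fill_diagonal(dd, np.inf)
    f_single = (dd.min(axis=1) >= (2.0 * r) ** 2).mean()
    return f_covered, f_single

for psi in (0.1, 0.3, 0.6):
    fc, fs = spray_zone_mc(psi)
    print(psi, round(fc, 3), round(1 - np.exp(-psi), 3),   # MC vs analytical f_covered
          round(fs, 3), round(np.exp(-4 * psi), 3))        # MC vs analytical f_single
```

The fsingle form follows directly from the Poisson model: a drop nucleates alone when no other drop centre falls within 2r, an exclusion disc of area 4πr², giving exp(−4Ψa).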
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can consist of text, categorical, or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
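As a minimal concrete example of one method from this list, the k-nearest neighbours classifier can be run in a few lines (scikit-learn is chosen here for illustration; it is not referenced in the text):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Fit an example-based classifier (k-NN) on a standard benchmark dataset.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```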
Abstract:
Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches.
Abstract:
Introduction: The aim of this study was to assess cyclic fatigue resistance in rotary nickel-titanium instruments submitted to nitrogen ion implantation, using a custom-made cyclic fatigue testing apparatus. Methods: Thirty K3 files, size #25, taper 0.04, were divided into 3 experimental groups as follows: group A, 12 files exposed to nitrogen ion implantation at a dose of 2.5 x 10^17 ions/cm^2, an accelerating voltage of 200 kV, a current of 1 μA/cm^2, a temperature of 130 °C, and vacuum conditions of 10 x 10^-6 torr for 6 hours; group B, 12 nonimplanted files; and group C, 6 files submitted to thermal annealing for 6 hours at 130 °C. One extra file was used for process control. All files were submitted to a cyclic fatigue test performed with an apparatus that allowed the instruments to rotate freely, simulating rotary instrumentation of a curved canal (40-degree, 5-mm radius curve). An electric motor handpiece was used with a 16:1 contra-angle at an operating speed of 300 rpm and a torque of 2 N-cm. Time to failure was recorded in seconds with a stopwatch and subsequently converted to the number of cycles to fracture. Data were analyzed with the Student t test (P < .05). Results: Ion-implanted instruments reached significantly higher cycle numbers before fracture (mean, 510 cycles) when compared with annealed (mean, 428 cycles) and nonimplanted files (mean, 381 cycles). Conclusions: Our results showed that nitrogen ion implantation improves cyclic fatigue resistance in rotary nickel-titanium instruments. Industrial implementation of this surface modification technique would produce rotary nickel-titanium instruments with a longer working life. (J Endod 2010;36:1183-1186)
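The cycle-count conversion reported above is direct arithmetic: at 300 rpm the instrument completes 300/60 = 5 cycles per second, so

```latex
N_{\text{cycles}} = \frac{300}{60}\, t_{\text{seconds}} = 5\, t_{\text{seconds}}
```

and, for example, the implanted-group mean of 510 cycles corresponds to 102 s of rotation before fracture.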
Abstract:
The vacancy solution theory of adsorption is reformulated here through the mass-action law and placed in a convenient framework permitting the development of thermodynamically consistent isotherms. It is shown that both the multisite Langmuir model and the classical vacancy solution theory expression are special cases of the more general approach when the Flory-Huggins activity coefficient model is used, with the former being the thermodynamically consistent result. The improved vacancy solution theory approach is further extended here to heterogeneous adsorbents by considering the pore-width-dependent potential along with a pore size distribution. However, application of the model to numerous hydrocarbons as well as other adsorptives on microporous activated carbons shows that the multisite model has difficulty in the presence of a pore size distribution, because pores of different sizes can have different numbers of adsorbed layers and therefore different site occupancies. On the other hand, use of the classical vacancy solution theory expression for the local isotherm leads to a good simultaneous fit of the data, while yielding a site diameter of about 0.257 nm, consistent with that expected for the potential well in aromatic rings on carbon pore surfaces. It is argued that the classical approach is successful because the Flory-Huggins term effectively represents adsorbate interactions in disguise. When used together with the ideal adsorbed solution theory, the heterogeneous vacancy solution theory successfully predicts binary adsorption equilibria and is found to perform better than the multisite Langmuir as well as the heterogeneous Langmuir model. (C) 2001 Elsevier Science Ltd. All rights reserved.
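The multisite Langmuir model referred to above is commonly written in the form below (standard notation assumed: each adsorbed molecule occupies a sites, θ is the fractional site coverage):

```latex
% Multisite (Nitta-type) Langmuir isotherm, pure component:
K\,P = \frac{\theta}{(1 - \theta)^{a}}
```

which reduces to the ordinary Langmuir isotherm when a = 1.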
Abstract:
The personal computer revolution has resulted in the widespread availability of low-cost image analysis hardware. At the same time, new graphic file formats have made it possible to handle and display images at resolutions beyond the capability of the human eye. Consequently, there has been a significant research effort in recent years aimed at making use of these hardware and software technologies for flotation plant monitoring. Computer-based vision technology is now moving out of the research laboratory and into the plant to become a useful means of monitoring and controlling flotation performance at the cell level. This paper discusses the metallurgical parameters that influence surface froth appearance and examines the progress that has been made in image analysis of flotation froths. The texture spectrum and pixel tracing techniques developed at the Julius Kruttschnitt Mineral Research Centre are described in detail. The commercial implementation, JKFrothCam, is one of a number of froth image analysis systems now reaching maturity. In plants where it is installed, JKFrothCam has shown a number of performance benefits. Flotation runs more consistently, meeting product specifications while maintaining high recoveries. The system has also shown secondary benefits in that reagent costs have been significantly reduced as a result of improved flotation control. (C) 2002 Elsevier Science B.V. All rights reserved.
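The texture spectrum technique mentioned above can be sketched generically (Python; this follows the standard texture unit construction, not the JKMRC implementation):

```python
import numpy as np

def texture_spectrum(img):
    """Texture spectrum of a grey-scale image: code each pixel's 3x3
    neighbourhood by comparing its 8 neighbours with the centre (0: darker,
    1: equal, 2: brighter), read the 8 codes as a base-3 'texture unit
    number' in [0, 6560], and histogram these numbers over the image."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                                # interior pixels (centres)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]       # 8 neighbours, clockwise
    ntu = np.zeros_like(c)
    for i, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        ntu += np.where(nb > c, 2, np.where(nb == c, 1, 0)) * 3 ** i
    return np.bincount(ntu.ravel(), minlength=3 ** 8)

# Example: spectrum of a random image; froth classification would compare
# such spectra against those of reference froth classes.
rng = np.random.default_rng(1)
spec = texture_spectrum(rng.integers(0, 256, size=(64, 64)))
print(spec.shape, spec.sum())  # (6561,) 3844  (= 62*62 interior pixels)
```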