Abstract:
This work presents a mathematical model for the vinyl acetate and n-butyl acrylate emulsion copolymerization process in batch reactors. The model is able to explain the effects of simultaneous changes in emulsifier concentration, initiator concentration, monomer-to-water ratio, and monomer feed composition on monomer conversion, copolymer composition and, to a lesser extent, average particle size evolution histories. The main features of the system, such as the increase in the rate of polymerization as temperature, emulsifier, and initiator concentrations increase, are correctly represented by the model. The model accounts for the basic features of the process and may be useful for practical applications, despite its simplicity and reduced number of adjustable parameters.
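The model above tracks copolymer composition; as a minimal, self-contained illustration (not the paper's model), the classical Mayo-Lewis equation gives the instantaneous copolymer composition from the monomer feed composition. The reactivity ratios below are assumed for illustration only, not taken from the paper.

```python
def mayo_lewis(f1, r1, r2):
    """Instantaneous copolymer composition F1 (mole fraction of monomer 1
    incorporated) from the feed mole fraction f1 and reactivity ratios."""
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2
    return num / den

# Illustrative (assumed) reactivity ratios for vinyl acetate (1) /
# n-butyl acrylate (2); published values differ.
r1, r2 = 0.04, 5.5
for f1 in (0.25, 0.5, 0.75):
    print(f"feed f1={f1:.2f} -> copolymer F1={mayo_lewis(f1, r1, r2):.3f}")
```

With r2 >> r1 the copolymer is much richer in the acrylate than the feed, which is why composition control requires monomer feed policies in practice.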
Abstract:
Oxidation processes can be used to treat industrial wastewater containing non-biodegradable organic compounds. However, the presence of dissolved salts may inhibit or retard the treatment process. In this study, wastewater desalination by electrodialysis (ED) associated with an advanced oxidation process (photo-Fenton) was applied to an aqueous NaCl solution containing phenol. The influence of process variables on the demineralization factor was investigated for ED at pilot scale, and a correlation was obtained between the phenol, salt, and water fluxes and the driving force. The oxidation process was investigated in a laboratory batch reactor, and a model based on artificial neural networks was developed by fitting the experimental data, describing the reaction rate as a function of the input variables. With the experimental parameters of both processes, a dynamic model was developed for ED and a continuous model, using a plug flow reactor approach, for the oxidation process. Finally, the hybrid model simulation could validate different scenarios of the integrated system and can be used for process optimization.
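The continuous plug-flow reactor model mentioned above can be sketched in a few lines. As a stand-in for the paper's ANN-based rate expression, an assumed first-order rate law is used here; the rate constant and concentrations are illustrative.

```python
def pfr_profile(c_in, k, tau_total, n_steps=1000):
    """Euler integration of a plug-flow reactor, dC/dtau = -r(C),
    marching along residence time. A first-order rate r = k*C stands in
    for the ANN-fitted rate function described in the abstract."""
    dtau = tau_total / n_steps
    c = c_in
    for _ in range(n_steps):
        c += -k * c * dtau   # local mass balance over one axial slice
    return c

# Illustrative numbers: 100 mg/L phenol in, k = 0.2 1/min, 10 min residence.
c_out = pfr_profile(c_in=100.0, k=0.2, tau_total=10.0)
print(f"outlet phenol concentration: {c_out:.2f} mg/L")
```

In the paper's hybrid scheme the rate function would be the trained network evaluated at the local concentrations, but the marching scheme is the same.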
Abstract:
The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were estimated from the experimental results obtained. The adjusted model was used to analyze the impact of the initial concentration and flow rate of reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the treatment cost of wastewater contaminated with phenol in order to meet environmental standards. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The objective of this paper is to develop a mathematical model for the synthesis of anaerobic digester networks based on the optimization of a superstructure that relies on a non-linear programming formulation. The proposed model contains the kinetic and hydraulic equations developed by Pontes and Pinto [Chemical Engineering Journal 122 (2006) 65-80] for two types of digesters, namely UASB (Upflow Anaerobic Sludge Blanket) and EGSB (Expanded Granular Sludge Bed) reactors. The objective function minimizes the overall sum of the reactor volumes. The optimization results show that a recycle stream is only effective in the case of a reactor with short-circuiting, such as the UASB reactor. Sensitivity analysis was performed on the one- and two-digester network superstructures for the following parameters: the UASB reactor short-circuit fraction and the EGSB reactor maximum organic load; the corresponding results vary considerably in terms of digester volumes. Scenarios for three- and four-digester network superstructures were optimized and compared with the results from fewer digesters. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones, instead of at fixed set points. One strategy to implement the zone control is by means of the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement a stable zone control is by means of an infinite horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is devoted to maintaining the outputs within their corresponding feasible zones, while reaching the desired optimal input target. Simulation of a process from the oil refining industry illustrates the performance of the proposed strategy.
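The idea of treating the set point as a decision variable restricted to the output zone, with a slack to retain feasibility, can be sketched for a scalar steady-state case. This is an illustration only; the paper solves this choice inside an infinite-horizon MPC optimisation rather than in closed form.

```python
def zone_target(y_pred, z_lo, z_hi):
    """Choose the set point inside the zone [z_lo, z_hi] closest to the
    predicted output, plus a slack measuring the zone violation (zero
    when the prediction is already inside the zone).  This mirrors the
    'set point as an additional decision variable' idea in scalar form."""
    s = min(max(y_pred, z_lo), z_hi)                  # clip into the zone
    slack = max(z_lo - y_pred, y_pred - z_hi, 0.0)    # feasibility slack
    return s, slack

print(zone_target(5.0, 2.0, 4.0))   # prediction above the zone
print(zone_target(3.0, 2.0, 4.0))   # prediction inside the zone
```

Because the slack is always attainable, the optimisation problem stays feasible at every step, which is what underpins the recursive-feasibility argument in the abstract.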
Abstract:
Several MPC applications implement a control strategy in which some of the system outputs are controlled within specified ranges or zones, rather than at fixed set points [J.M. Maciejowski, Predictive Control with Constraints, Prentice Hall, New Jersey, 2002]. This means that these outputs will be treated as controlled variables only when the predicted future values lie outside the boundary of their corresponding zones. The zone control is usually implemented by selecting an appropriate weighting matrix for the output error in the control cost function. When an output prediction is inside its zone, the corresponding weight is zeroed, so that the controller ignores this output. When the output prediction lies outside the zone, the error weight is made equal to a specified value and the distance between the output prediction and the boundary of the zone is minimized. The main problem of this approach, as far as stability of the closed loop is concerned, is that each time an output is switched from the status of non-controlled to the status of controlled, or vice versa, a different linear controller is activated. Thus, throughout the continuous operation of the process, the control system keeps switching from one controller to another. Even if a stabilizing control law is developed for each of the control configurations, switching among stable controllers does not necessarily produce a stable closed-loop system. Here, a stable MPC is developed for the zone control of open-loop stable systems. Focusing on the practical application of the proposed controller, it is assumed that in the control structure of the process system there is an upper optimization layer that defines optimal targets for the system inputs. The performance of the proposed strategy is illustrated by simulation of a subsystem of an industrial FCC system. (C) 2008 Elsevier Ltd. All rights reserved.
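The weight-switching cost term described above can be sketched as follows (illustrative only; in the paper's controller this term sits inside a full MPC cost over a prediction horizon):

```python
def zone_error_cost(y_pred, z_lo, z_hi, weight):
    """Output error term in weight-switching zone control: zero when the
    prediction lies inside its zone (weight zeroed, output ignored),
    otherwise the weighted squared distance to the nearest boundary."""
    if z_lo <= y_pred <= z_hi:
        return 0.0
    dist = z_lo - y_pred if y_pred < z_lo else y_pred - z_hi
    return weight * dist**2

print(zone_error_cost(3.0, 2.0, 4.0, 10.0))   # inside the zone
print(zone_error_cost(5.5, 2.0, 4.0, 10.0))   # outside the zone
```

The discontinuity in which outputs contribute to the cost is exactly the controller switching the abstract warns about: each crossing of a zone boundary changes the effective weighting matrix, hence the active linear controller.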
Abstract:
The influence of guar and xanthan gum, and their combined use, on dough proofing rate and its calorimetric properties was investigated. Fusion enthalpy, which is related to the amount of frozen water, was influenced by frozen dough formulation and storage time; specifically, gum addition reduced the fusion enthalpy in comparison to the control formulation (76.9 J/g for the formulation with both gums versus 81.2 J/g for the control on the 28th day). Other calorimetric parameters, such as Tg and freezable water amount, were also influenced by frozen storage time. For all formulations, the proofing rate of dough after freezing, frozen storage, and thawing decreased in comparison to non-frozen dough, indicating that the freezing process itself was more detrimental to the proofing rate than storage time. For all formulations, the mean proofing rate was 2.97 +/- 0.24 cm³ min⁻¹ per 100 g of non-frozen dough and 2.22 +/- 0.12 cm³ min⁻¹ per 100 g of frozen dough. The proofing rate of non-frozen dough with xanthan gum also decreased significantly in relation to dough without gums and dough with only guar gum. Optical microscopy analyses showed that gas cell production after the frozen storage period was reduced, which is in agreement with the proofing rate results. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
There is an increasing need to treat effluents contaminated with phenol with advanced oxidation processes (AOPs) to minimize their impact on the environment as well as on bacteriological populations of other wastewater treatment systems. One of the most promising AOPs is the Fenton process, which relies on the Fenton reaction. Nevertheless, there are no systematic studies on Fenton reactor networks. The objective of this paper is to develop a strategy for the optimal synthesis of Fenton reactor networks. The strategy is based on a superstructure optimization approach that is represented as a mixed integer non-linear programming (MINLP) model. Network superstructures with multiple Fenton reactors are optimized with the objective of minimizing the sum of capital, operation and depreciation costs of the effluent treatment system. The optimal solutions obtained provide the reactor volumes and network configuration, as well as the quantities of the reactants used in the Fenton process. Examples based on a case study show that multi-reactor networks yield a decrease of up to 45% in the overall costs of the treatment plant. (C) 2010 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
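A toy calculation hints at why multi-reactor networks win. Assuming simple first-order kinetics (a stand-in for the Fenton rate expressions) and ignoring the cost terms of the paper's MINLP, the total residence time (hence volume, at fixed flow rate) needed for a given conversion drops when the duty is split over CSTRs in series:

```python
def cstr_tau(frac_remaining, k):
    """Residence time of one CSTR with first-order kinetics reaching
    C_out/C_in = frac_remaining, from the balance C_out = C_in/(1 + k*tau)."""
    return (1.0 / frac_remaining - 1.0) / k

def series_total_tau(n, frac_remaining, k):
    """Total residence time of n equal CSTRs in series for the same overall
    C_out/C_in: each stage removes the n-th root of the overall ratio."""
    per_stage = frac_remaining ** (1.0 / n)
    return n * cstr_tau(per_stage, k)

k = 1.0        # assumed first-order rate constant, 1/min
target = 0.10  # 90% phenol removal
for n in (1, 2, 3):
    print(f"{n} reactor(s): total tau = {series_total_tau(n, target, k):.2f} min")
```

The real trade-off in the paper also involves reactant dosing and capital versus operating costs, which is why the full problem needs an MINLP rather than this one-line argument.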
Abstract:
In this paper, three single control charts are proposed to monitor individual observations of a bivariate Poisson process. Their control limits were determined for a specified false-alarm risk, and their ARLs were computed to compare their performances for different types and sizes of shifts. In most cases, the single charts performed better than two separate control charts (one for each quality characteristic). A numerical example illustrates the proposed control charts.
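A minimal version of a single chart of this kind can be sketched as a "sum chart" on X + Y, assuming independent Poisson marginals (a genuine bivariate Poisson model also carries a common covariance term, ignored in this sketch). For individual observations, the in-control ARL is the reciprocal of the false-alarm probability:

```python
from math import exp, factorial

def poisson_tail(lmbda, t):
    """P(T > t) for T ~ Poisson(lmbda)."""
    cdf = sum(exp(-lmbda) * lmbda**k / factorial(k) for k in range(t + 1))
    return 1.0 - cdf

def sum_chart(l1, l2, alpha=0.005):
    """Single chart on T = X + Y for two independent Poisson counts:
    signal when T exceeds the smallest UCL whose false-alarm risk is
    <= alpha.  Returns (UCL, in-control ARL = 1 / alarm probability)."""
    lmbda = l1 + l2
    ucl = 0
    while poisson_tail(lmbda, ucl) > alpha:
        ucl += 1
    return ucl, 1.0 / poisson_tail(lmbda, ucl)

ucl, arl = sum_chart(2.0, 3.0)
print(f"UCL = {ucl}, in-control ARL = {arl:.0f}")
```

Because Poisson counts are discrete, the achieved false-alarm risk is below the nominal alpha, so the realized in-control ARL exceeds the nominal 1/alpha.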
Diagnostic errors and repetitive sequential classifications in on-line process control by attributes
Abstract:
The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the m-th item (a single item) at every m produced items and deciding, at each inspection, whether the fraction of conforming items was reduced or not. If the inspected item is non-conforming, the production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, a probabilistic model is developed that classifies the examined item repeatedly until either a conforming or b non-conforming classifications are observed. The first event that occurs (a conforming classifications or b non-conforming classifications) determines the final classification of the examined item. Properties of an ergodic Markov chain were used to obtain an expression for the average cost of the control system, which can be optimized over three parameters: the sampling interval of the inspections (m); the number of repeated conforming classifications (a); and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches. The first is a simple preventive policy: the production system is adjusted at every n produced items and no inspection is performed. The second classifies the examined item repeatedly a fixed number of times, r, and considers it conforming if the majority of the classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and uses majority voting. On the other hand, depending on the magnitudes of the errors and costs, the preventive policy can on average be more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B.V. All rights reserved.
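The repeated-classification rule can be illustrated with a short recursion. Here p is the probability that a single classification result is "conforming" (which, in the paper, depends on the item's true state and the diagnosis error rates); the cost-optimal choice of m, a, and b via the ergodic Markov chain is not reproduced in this sketch:

```python
from functools import lru_cache

def p_final_conforming(p, a, b):
    """Probability that the repeated-classification rule declares the item
    conforming: classifications are i.i.d. with probability p of a
    'conforming' result, and the first side to accumulate a conforming
    (resp. b non-conforming) results fixes the final classification."""
    @lru_cache(maxsize=None)
    def f(i, j):  # i conforming and j non-conforming results so far
        if i == a:
            return 1.0
        if j == b:
            return 0.0
        return p * f(i + 1, j) + (1.0 - p) * f(i, j + 1)
    return f(0, 0)

print(p_final_conforming(0.9, 2, 2))
```

Raising a (or lowering b) makes the rule stricter, trading fewer false "conforming" decisions against more repeated classifications per item, which is exactly the cost trade-off the average-cost model optimizes.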
Abstract:
The procedure for online process control by attributes consists of inspecting a single item at every m produced items. On the basis of the inspection result, it is decided whether the process is in control (the conforming fraction is stable) or out of control (the conforming fraction has decreased, for example). Most articles about online process control consider stopping the production process for an adjustment when the inspected item is non-conforming (the production is then restarted in control, here denominated a corrective adjustment). Moreover, the articles related to this subject do not present semi-economical designs (which may yield high quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the choice between a preventive and a corrective adjustment of the process is decided at every m produced items. If a preventive adjustment is decided upon, then no item is inspected; otherwise, the m-th item is inspected: if it conforms, the production goes on, otherwise an adjustment takes place and the process restarts in control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to some statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.
Abstract:
Adsorbent materials and composites are quite useful for sensor development; therefore, the aim of this work is the surface modification of particulates and/or composite formation. The material was produced by plasma polymerization of HMDS (hexamethyldisilazane) in a single step. SEM analysis shows good surface coverage of the particulates with a plasma-polymerized film formed by several clusters that might increase adsorption. Particles (starch, 55 μm) coated with HMDS films show good properties for the retention of medium-sized organic molecules, such as dyes. Thin films formed by a mixture of particles and plasma-polymerized HMDS species were obtained in a single step and can be used for the retention of organic compounds in the liquid or gaseous phase. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Among several sources of process variability, valve friction and inadequate controller tuning are supposed to be two of the most prevalent. Friction quantification methods can be applied to the development of model-based compensators or to diagnose valves that need repair, whereas accurate process models can be used in controller retuning. This paper extends existing methods that jointly estimate the friction and process parameters, so that a nonlinear structure is adopted to represent the process model. The developed estimation algorithm is tested with three different data sources: a simulated first-order-plus-dead-time process, a hybrid setup (composed of a real valve and a simulated pH neutralization process), and three industrial datasets corresponding to real control loops. The results demonstrate that the friction is accurately quantified and that "good" process models are estimated in several situations. Furthermore, when a nonlinear process model is considered, the proposed extension presents significant advantages: (i) greater accuracy in friction quantification and (ii) reasonable estimates of the nonlinear steady-state characteristics of the process. (C) 2010 Elsevier Ltd. All rights reserved.
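The friction nonlinearity being quantified can be sketched with a one-parameter stick/slip valve model (a simplification: common stiction models use two parameters, and the paper's contribution is the joint friction/process estimation, which is not reproduced here):

```python
def sticky_valve(u_seq, d):
    """One-parameter stick/slip valve model: the stem holds its position
    until the controller output has drifted more than the friction band d
    away from it, and then jumps (slips) to the input value."""
    pos = u_seq[0]
    out = []
    for u in u_seq:
        if abs(u - pos) > d:   # static friction overcome: stem slips
            pos = u
        out.append(pos)
    return out

u = [0.0, 0.2, 0.4, 0.6, 0.5, 0.4, 0.1]
print(sticky_valve(u, d=0.25))
```

The resulting staircase response is what produces the characteristic limit-cycle oscillations in control loops with sticky valves, and d is the kind of parameter the estimation algorithm recovers from closed-loop data.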
Abstract:
In order to model the synchronization of brain signals, a three-node fully connected network is presented. The nodes are considered to be voltage-controlled oscillator neurons (VCONs), allowing one to conjecture how the whole process depends on synaptic gains, free-running frequencies, and delays. The VCONs, represented by phase-locked loops (PLLs), are fully connected and, as a consequence, an asymptotically stable synchronous state appears. Here, an expression for the synchronous state frequency is derived and the parameter dependence of its stability is discussed. Numerical simulations are performed, providing conditions for the use of the derived formulae. The model differential equations are difficult to treat analytically, but some simplifying assumptions combined with simulations provide an alternative formulation for the long-term behavior of the fully connected VCON network. Regarding this kind of network as a model for brain frequency signal processing, with each PLL representing a neuron (VCON), conditions for their synchronization are proposed, considering the different bands of brain activity signals and relating them to synaptic gains, delays, and free-running frequencies. For the delta waves, the synchronous state depends strongly on the delays. However, for alpha, beta, and theta waves, the free-running individual frequencies determine the synchronous state. (C) 2011 Elsevier B.V. All rights reserved.
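As a rough illustration of frequency locking in a fully connected three-node network, the sketch below uses a Kuramoto-type first-order phase model without delays; this is an assumed simplification, not the paper's second-order PLL/VCON equations. All gains and free-running frequencies are illustrative.

```python
from math import sin

def simulate(omegas, gain, t_end=200.0, dt=0.01):
    """Euler simulation of fully connected phase oscillators,
    d(theta_i)/dt = omega_i + gain * sum_j sin(theta_j - theta_i).
    Returns the late-time instantaneous frequencies of all nodes."""
    n = len(omegas)
    theta = [float(i) for i in range(n)]   # arbitrary initial phases
    freqs = [0.0] * n
    for _ in range(int(t_end / dt)):
        freqs = [omegas[i] + gain * sum(sin(theta[j] - theta[i])
                                        for j in range(n))
                 for i in range(n)]
        theta = [theta[i] + dt * freqs[i] for i in range(n)]
    return freqs

# Three nodes with spread free-running frequencies and strong coupling.
freqs = simulate([9.5, 10.0, 10.5], gain=2.0)
print([round(f, 3) for f in freqs])
```

With strong enough coupling all three nodes lock to a common frequency, here the mean of the free-running frequencies; in the paper's delayed PLL model the synchronous frequency additionally depends on the delays, notably for the delta band.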
Abstract:
This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach. These conditions were mainly expressed in terms of the relative difference of the alpha-discount value functions. The main goal of this paper is to derive tractable conditions, directly related to the primitive data of the PDMP, that ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and geometric convergence of the post-jump location kernel associated with the PDMP. An example based on the capacity expansion problem is presented, illustrating the possible applications of the results developed in the paper.