961 results for Optimization methods
Abstract:
This paper studies a simplified methodology to integrate the real-time optimization (RTO) of a continuous system into the model predictive controller in the one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal conditions of the process at steady state are searched for with a rigorous nonlinear process model, while the trajectory to be followed is predicted with a linear dynamic model obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach may be comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a nonlinear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
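The central trick described above, keeping the one-layer problem quadratic by entering the economic objective only through its gradient (a linear term in the cost), can be sketched roughly as follows. The gain matrix, set points, gradient, and weight below are invented for illustration and are not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-layer cost: quadratic output tracking plus a linear term built
# from the economic gradient (all numbers hypothetical).
G = np.array([[1.0, 0.2], [0.2, 1.5]])   # steady-state gain of linear model
y_sp = np.array([1.0, 0.5])              # output set points
grad_econ = np.array([-0.3, 0.1])        # gradient of the economic objective
w_econ = 0.5                             # weight on the economic term

def cost(u):
    track = np.sum((G @ u - y_sp) ** 2)  # quadratic tracking term
    econ = w_econ * grad_econ @ u        # linear economic-gradient term
    return track + econ

# Input bounds stand in for the MPC constraints; the problem stays a QP.
res = minimize(cost, x0=np.zeros(2), bounds=[(-2, 2), (-2, 2)])
u_opt = res.x
```

Because the economic term is linear, the stationarity condition remains a linear system, which is why a standard QP routine suffices at each sampling step.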
Abstract:
This paper concerns the development of a stable model predictive controller (MPC) to be integrated with real-time optimization (RTO) in the control structure of a process system with stable and integrating outputs. The real-time process optimizer produces optimal targets for the system inputs and outputs that should be dynamically implemented by the MPC controller. This paper is based on a previous work (Comput. Chem. Eng. 2005, 29, 1089) where a nominally stable MPC was proposed for systems with the conventional control approach in which only the outputs have set points. This work is also based on the work of Gonzalez et al. (J. Process Control 2009, 19, 110) where the zone control of stable systems is studied. The new controller is obtained by defining an extended control objective that includes input targets and zone control of the outputs. Additional decision variables are also defined to increase the set of feasible solutions to the control problem. The hard constraints resulting from the cancellation of the integrating modes at the end of the control horizon are softened, and the resulting control problem is made feasible for a large class of unknown disturbances and changes of the optimizing targets. The methods are illustrated with the simulated application of the proposed approaches to a distillation column of the oil refining industry.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SoC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
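The constrained-random idea, drawing stimuli at random but only from parameter domains that a validity constraint accepts, might be sketched like this. The transaction fields and the alignment rule are hypothetical stand-ins, not the paper's PD formalism:

```python
import random

# Hypothetical parameter domains for a bus-transaction stimulus: each
# parameter has a set of valid values, and a constraint prunes invalid
# combinations before they ever reach simulation.
DOMAINS = {
    "burst_len": [1, 2, 4, 8],
    "addr_align": [1, 2, 4],
    "mode": ["read", "write"],
}

def valid(tx):
    # Example constraint (assumed rule): long bursts require word alignment.
    return not (tx["burst_len"] >= 4 and tx["addr_align"] < 4)

def gen_stimuli(n, seed=0):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        tx = {k: rng.choice(v) for k, v in DOMAINS.items()}
        if valid(tx):
            out.append(tx)   # only legal scenarios are kept
    return out

stimuli = gen_stimuli(100)
```

A coverage model built from the same domains would then enumerate only the valid combinations, which is the source of the efficiency gain the abstract reports.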
Abstract:
Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid size resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
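A rough one-dimensional sketch of per-resolution coefficient tuning (not the paper's full 3-D FDTD optimization): the effective wavenumber of the four-point central-difference operator is matched to the exact one up to a chosen points-per-wavelength target, starting from the standard (2,4) pair C1 = 9/8, C2 = -1/24:

```python
import numpy as np
from scipy.optimize import minimize

N = 10                       # target points per wavelength (coarse grid)
kdx = 2 * np.pi / N          # normalized wavenumber at that resolution

def keff(c, kdx):
    # effective (numerical) wavenumber of the four-point operator,
    # normalized so the exact answer would be kdx itself
    c1, c2 = c
    return 2.0 * (c1 * np.sin(kdx / 2) + c2 * np.sin(3 * kdx / 2))

def err(c):
    # integrated squared relative dispersion error up to the design point
    s = np.linspace(0.1 * kdx, kdx, 50)
    return np.sum((keff(c, s) / s - 1.0) ** 2)

std = np.array([9 / 8, -1 / 24])   # standard Taylor-based coefficients
opt = minimize(err, std).x         # pair optimized for this resolution
```

The optimized pair trades formal fourth-order accuracy at vanishing grid size for lower dispersion error at the coarse resolution actually used, which is the spirit of the abstract's approach.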
Abstract:
The TCP/IP architecture has been consolidated as a standard for distributed systems. However, there are several research efforts and discussions about alternatives for the evolution of this architecture and, in this study area, this work presents the Title Model, which supports application needs through the use of a cross-layer ontology and horizontal addressing in a next-generation Internet. From a practical viewpoint, the network cost reduction is shown for a distributed programming example in networks with layer-2 connectivity. To demonstrate the Title Model's enhancement, a network analysis performed for a message passing interface application is presented, sending a vector of integers and returning its sum. This analysis confirms that the current proposal allows, in this environment, a reduction of 15.23% in the total network traffic, in bytes.
Abstract:
An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated in hypothetical and actual circuits. When applied to an actual distribution system of a region of the State of Sao Paulo (Brazil), the solution found by the algorithm presents lower loss than the topology built by the utility company.
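The ant mechanism itself, probabilistic route construction biased by pheromone and loss, with evaporation and deposit after each tour, can be illustrated on a toy graph. The network, edge weights (standing in for power loss), and parameters below are invented; this is a generic sketch of the mechanism, not the paper's algorithm:

```python
import random

# Toy ant-colony sketch: pheromone reinforces low-loss routes from
# source "S" to load node "T" on a tiny weighted graph.
EDGES = {("S", "A"): 4, ("S", "B"): 2, ("A", "T"): 1,
         ("B", "T"): 5, ("A", "B"): 1}
NEIGH = {"S": ["A", "B"], "A": ["S", "B", "T"], "B": ["S", "A", "T"], "T": []}

def w(u, v):
    return EDGES.get((u, v)) or EDGES.get((v, u))

def run(n_ants=200, rho=0.1, seed=1):
    rng = random.Random(seed)
    tau = {e: 1.0 for e in EDGES}              # pheromone per edge

    def te(u, v):
        return tau[(u, v)] if (u, v) in tau else tau[(v, u)]

    best_path, best_cost = None, float("inf")
    for _ in range(n_ants):
        node, path, cost = "S", ["S"], 0.0
        while node != "T":
            cand = [v for v in NEIGH[node] if v not in path]
            if not cand:
                break                          # dead end, abandon this ant
            # next hop chosen with probability ~ pheromone / loss
            wts = [te(node, v) / w(node, v) for v in cand]
            nxt = rng.choices(cand, weights=wts)[0]
            cost += w(node, nxt)
            path.append(nxt)
            node = nxt
        if node != "T":
            continue
        if cost < best_cost:
            best_path, best_cost = path, cost
        for e in tau:                          # evaporation everywhere
            tau[e] *= 1.0 - rho
        for a, b in zip(path, path[1:]):       # deposit along the tour
            key = (a, b) if (a, b) in tau else (b, a)
            tau[key] += 1.0 / cost
    return best_path, best_cost

best_path, best_cost = run()
```

In the distribution-network setting the "route" would instead encode a radial topology and the deposit would reward configurations with low total loss, but the select/evaporate/deposit loop is the same.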
Abstract:
This letter addresses the optimization and complexity reduction of switch-reconfigured antennas. A new optimization technique based on graph models is investigated. This technique is used to minimize the redundancy in a reconfigurable antenna structure and reduce its complexity. A graph modeling rule for switch-reconfigured antennas is proposed, and examples are presented.
Abstract:
The optimization of the treatment process for residual waters from a brewery operating as a combination of an anaerobic reactor and activated sludge was studied in two phases. In the first stage, lasting six months, the characteristics and parameters of the plant operation were analyzed, wherein a diversion rate of more than 50% to aerobic treatment, the use of two aeration tanks, and a high sludge production prevailed. The second stage comprised four months during which the system worked under the proposed operational model, with the aim of improving the treatment: reduction of the diversion rate to 30% and use of only one aeration tank. At each stage, TSS, VSS and COD were measured at the entrance and exit of the anaerobic reactor and the aeration tanks. The results were compared with the corresponding design specifications, and the conditions needed to reduce the diversion rate towards the aerobic process were applied by monitoring the volume and concentration of the influent, while applying the strategic changes in reactor parameters needed to increase its efficiency. A diversion reduction from 53 to 34% was achieved, reducing the sludge discharge generated in the aerobic system from 3670 mg TSS/L with two aeration tanks down to 2947 mg TSS/L using one tank, keeping the same VSS:TSS ratio (0.55) and a total removal efficiency of 98% in terms of COD.
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switchings. Problems of this kind have been recently considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending some previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps. (C) 2008 Elsevier Ltd. All rights reserved.
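The structure of interconnected Riccati difference equations can be sketched for a generic two-regime Markov jump linear-quadratic problem (scalar dynamics, invented numbers). This mirrors the form of such coupled recursions, where each regime's equation is coupled through the transition-probability average, not the paper's exact mean-variance equations:

```python
import numpy as np

# Coupled Riccati difference equations, backward in time, for two regimes.
A = [1.02, 0.98]                    # state coefficient in each regime
B = [0.10, 0.15]                    # control coefficient in each regime
Q, R = 1.0, 0.5                     # state and control weights
PT = np.array([[0.9, 0.1],
               [0.2, 0.8]])         # regime transition probabilities

def riccati_backward(T=200):
    P = [Q, Q]                                  # terminal condition P_i(T)
    for _ in range(T):
        EP = PT @ np.array(P)                   # E_i(P) = sum_j p_ij P_j(t+1)
        P = [Q + A[i] ** 2 * EP[i]
             - (A[i] * EP[i] * B[i]) ** 2 / (R + B[i] ** 2 * EP[i])
             for i in range(2)]
    return P

P_inf = riccati_backward()
```

The coupling enters only through the averaged cost-to-go EP, so each backward step is an ordinary scalar Riccati update per regime; in the matrix case the same recursion holds with matrix inverses in place of the scalar division.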
Abstract:
The objective of this project was to evaluate the protein extraction of soybean flour in dairy whey by a multivariate statistical method with 2^3 factorial experiments. The influence of three variables was considered: temperature, pH, and percentage of sodium chloride, against the process-specific variable (percentage of protein extraction). It was observed that, during the protein extraction over time and temperature, the treatments at 80 degrees C for 2 h presented the greatest values of total protein (5.99%). The percentage of protein extraction increased with heating time. Therefore, the maximum point of the function that represents the protein extraction was analyzed by a 2^3 factorial experiment. From the results, it was noted that all the variables were important to the extraction. After the statistical analyses, it was observed that the parameters pH, temperature, and percentage of sodium chloride were not sufficient for the extraction process, since it was not possible to obtain the inflection point of the mathematical function; on the other hand, the mathematical model was significant, as well as predictive.
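The estimation mechanics of a 2^3 factorial design, coded -1/+1 levels for the three factors and a least-squares fit of the main effects, can be sketched as follows. The response values below are invented for illustration; only the workflow matches the standard factorial analysis:

```python
import numpy as np
from itertools import product

# All 8 runs of a 2^3 design in coded levels: columns are temperature,
# pH, and NaCl percentage at their low (-1) and high (+1) settings.
runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)
y = np.array([4.1, 4.6, 4.3, 4.9, 5.2, 5.8, 5.4, 5.99])  # invented responses

# Design matrix: intercept plus the three main-effect columns.
X = np.column_stack([np.ones(8), runs])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, eff_T, eff_pH, eff_NaCl = coef
```

Because the design is orthogonal, each fitted coefficient equals half the difference between the mean response at the factor's high and low levels, which is the classical factorial-effect estimate.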
Abstract:
Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard to estimate ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Delta e), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method to estimate ETo when Rn, Delta e, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day^-1. For these cases, U data were replaced by the normal values for the region and Delta e was estimated from temperature data.
The Priestley-Taylor method was also a good option for estimating ETo when U and Delta e data were missing, mainly when calibrated locally (RMSE = 0.40 mm day^-1). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day^-1. When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options to estimate ETo than the FAO PM method, since the RMSEs from these methods, respectively 0.79 and 0.83 mm day^-1, were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day^-1). (C) 2009 Elsevier B.V. All rights reserved.
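The standard FAO-56 daily Penman-Monteith formula referred to above can be written directly in code. The example inputs are invented, and the psychrometric constant here assumes sea-level pressure:

```python
import math

# FAO-56 daily reference evapotranspiration for grass (standard formula).
def eto_fao56(T, Rn, u2, ea, G=0.0, P=101.3):
    """T in degC, Rn in MJ m-2 day-1, u2 in m s-1, ea in kPa -> mm day-1."""
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))  # saturation vapor pressure
    delta = 4098.0 * es / (T + 237.3) ** 2           # slope of the es curve
    gamma = 0.665e-3 * P                             # psychrometric constant
    num = (0.408 * delta * (Rn - G)
           + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Invented daily inputs for a temperate site.
eto = eto_fao56(T=20.0, Rn=13.5, u2=2.0, ea=1.4)
```

When U is missing one substitutes a regional normal for u2, and when humidity is missing ea is estimated from minimum temperature, which is exactly the gap-filling strategy evaluated in the abstract.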
Abstract:
The present investigation is the first part of an initiative to prepare a regional map of the natural abundance of selenium in various areas of Brazil, based on the analysis of bean and soil samples. Continuous-flow hydride generation electrothermal atomic absorption spectrometry (HG-ET AAS) with in situ trapping on an iridium-coated graphite tube was chosen because of its high sensitivity and relative simplicity. The microwave-assisted acid digestion for bean and soil samples was tested for complete recovery of inorganic and organic selenium compounds (selenomethionine). The reduction of Se(VI) to Se(IV) was optimized in order to guarantee that there is no back-oxidation, which is of importance when digested samples are not analyzed immediately after the reduction step. The limits of detection and quantification of the method were 30 ng L^-1 Se and 101 ng L^-1 Se, respectively, corresponding to about 3 ng g^-1 and 10 ng g^-1, respectively, in the solid samples, considering a typical dilution factor of 100 for the digestion process. The results obtained for two certified food reference materials (CRM), soybean and rice, and for a soil and sediment CRM confirmed the validity of the investigated method. The selenium content found in a number of selected bean samples varied between 5.5 +/- 0.4 ng g^-1 and 1726 +/- 55 ng g^-1, and that in soil samples varied between 113 +/- 6.5 ng g^-1 and 1692 +/- 21 ng g^-1. (C) 2011 Elsevier B.V. All rights reserved.
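One consistent reading of the quoted dilution factor of 100 (taken here as 1 g of sample brought to 0.1 L of final digest solution, an assumption for illustration, not stated explicitly in the abstract) reproduces the solution-to-solid conversion of the detection and quantification limits:

```python
# Convert a solution-phase concentration limit (ng/L) to solid-sample
# terms (ng/g), assuming 1 g of sample in 0.1 L of final digest.
def ng_per_g(ng_per_L, sample_g=1.0, final_vol_L=0.1):
    return ng_per_L * final_vol_L / sample_g

lod = ng_per_g(30)     # 30 ng/L detection limit in solution
loq = ng_per_g(101)    # 101 ng/L quantification limit in solution
```

This yields about 3 ng/g and 10 ng/g, matching the solid-sample limits quoted in the abstract.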
Abstract:
Desserts made with soy cream, which are oil-in-water emulsions, are widely consumed by lactose-intolerant individuals in Brazil. In this regard, this study aimed at using response surface methodology (RSM) to optimize the sensory attributes of a soy-based emulsion over a range of pink guava juice (GJ: 22% to 32%) and soy protein (SP: 1% to 3%). WHC and backscattering were analyzed after 72 h of storage at 7 degrees C. Furthermore, a rating test was performed to determine the degree of liking of color, taste, creaminess, appearance, and overall acceptability. The data showed that the samples were stable against gravity and storage. The models developed by RSM adequately described the creaminess, taste, and appearance of the emulsions. The response surface of the desirability function was used successfully in the optimization of the sensory properties of dairy-free emulsions, suggesting that a product with 30.35% GJ and 3% SP was the best combination of these components. The optimized sample presented suitable sensory properties, in addition to being a source of dietary fiber, iron, copper, and ascorbic acid.
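A desirability-function search of the kind described can be sketched over the stated design region (GJ 22% to 32%, SP 1% to 3%). The two quadratic response models below are invented stand-ins, not the paper's fitted RSM models:

```python
import numpy as np

# Each response gets a larger-is-better desirability in [0, 1]; the
# overall desirability is their geometric mean, maximized on a grid.
def d_creaminess(gj, sp):
    score = 9 - 0.02 * (gj - 30) ** 2 - 1.5 * (3 - sp)   # hypothetical model
    return np.clip((score - 5) / 4, 0, 1)

def d_taste(gj, sp):
    score = 8 - 0.03 * (gj - 29) ** 2 - 0.5 * (3 - sp)   # hypothetical model
    return np.clip((score - 5) / 3, 0, 1)

gj = np.linspace(22, 32, 101)
sp = np.linspace(1, 3, 41)
GJ, SP = np.meshgrid(gj, sp)
D = np.sqrt(d_creaminess(GJ, SP) * d_taste(GJ, SP))      # overall desirability
i = np.unravel_index(np.argmax(D), D.shape)
best_gj, best_sp = GJ[i], SP[i]
```

The geometric mean penalizes any single poor response, so the optimum is a compromise between the individual model maxima, which is how the desirability approach arrives at a single recommended formulation.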
Abstract:
The effect of thermal treatment on phenolic compounds and type 2 diabetes functionality linked to alpha-glucosidase and alpha-amylase inhibition, and on hypertension-relevant angiotensin I-converting enzyme (ACE) inhibition, was investigated in selected bean (Phaseolus vulgaris L.) cultivars from Peru and Brazil using in vitro models. Thermal processing by autoclaving decreased the total phenolic content in all cultivars, whereas the 1,1-diphenyl-2-picrylhydrazyl radical scavenging activity-linked antioxidant activity increased among Peruvian cultivars. Alpha-amylase and alpha-glucosidase inhibitory activities were reduced significantly after heat treatment (73-94% and 8-52%, respectively), whereas ACE inhibitory activity was enhanced (9-15%). Specific phenolic acids such as chlorogenic and caffeic acid increased moderately following thermal treatment (2-16% and 5-35%, respectively). No correlation was found between phenolic contents and functionality associated with antidiabetes and antihypertension potential, indicating that non-phenolic compounds may be involved. Thermally processed bean cultivars are interesting sources of phenolic acids linked to high antioxidant activity and show potential for hypertension prevention.
Abstract:
The antioxidant capacity of striped sunflower seed cotyledon extracts, obtained by sequential extraction with solvents of different polarities, was evaluated by three different in vitro methods: ferric reducing/antioxidant power (FRAP), 2,2-diphenyl-1-picrylhydrazyl radical (DPPH) and oxygen radical absorbance capacity (ORAC) assays. In all three methods, the aqueous extract at 30 µg/mL showed a higher antioxidant capacity (FRAP, 45.27 µmol; DPPH, 50.18%; ORAC, 1.5 Trolox equivalents) than the ethanolic extract (FRAP, 32.17 µmol; DPPH, 15.21%; ORAC, 0.50 Trolox equivalents). When compared with the synthetic antioxidant butylated hydroxytoluene, the antioxidant capacity of the aqueous extract varied from 45% to 66%, depending on the method used. The high antioxidant capacity observed for the aqueous extract of the studied sunflower seed suggests that the intake of this seed may prevent in vivo oxidative reactions responsible for the development of several diseases, such as cancer.