994 results for Flow optimization
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This motivates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code; such tools can help meet all of these goals, could significantly improve software quality, and remain a challenging field of research.

This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to find, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults and optimize the code, thereby improving both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated-peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code.

An algorithm is proposed that assists the compiler in eliminating redundant bank-switching code and in deciding on an optimal data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces the state space and contributes to improved model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool should be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
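The redundant bank-switching idea can be illustrated with a small, hypothetical sketch: if the active-bank state reaching a bank-select instruction already equals the bank it selects on every path, the instruction is redundant. The mnemonics and the single-basic-block simplification below are illustrative assumptions, not the dissertation's actual rule set.

```python
# Hedged sketch: flag redundant bank-select instructions by tracking the
# active-bank state along a straight-line code sequence. A real tool would
# run this over every path of the control flow graph; here we assume a
# single basic block and a PIC-style bank selector (illustrative).

def find_redundant_bank_selects(instructions, entry_bank=None):
    """instructions: list of (mnemonic, operand) tuples.
    Returns indices of bank-select instructions that do not change
    the active bank and are therefore redundant on this path."""
    redundant = []
    bank = entry_bank  # None means the bank is unknown at block entry
    for i, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":           # hypothetical bank-select mnemonic
            if bank == arg:           # selecting the already-active bank
                redundant.append(i)
            bank = arg
        elif op == "CALL":
            bank = None               # callee may switch banks: state unknown
    return redundant

code = [
    ("BANKSEL", 1), ("MOVWF", "TRISB"),
    ("BANKSEL", 1),                   # redundant: bank 1 is already active
    ("MOVWF", "OPTION_REG"),
    ("CALL", "init"), ("BANKSEL", 1), # not redundant: bank unknown after CALL
]
print(find_redundant_bank_selects(code))  # -> [2]
```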
Abstract:
There have been various techniques published for optimizing the net present value of tenders by use of discounted cash flow theory and linear programming. These approaches to tendering appear to have been largely ignored by the industry. This paper utilises six case studies of tendering practice in order to establish the reasons for this apparent disregard. Tendering is demonstrated to be a market-orientated function in which many subjective judgements are made regarding a firm's environment. Detailed consideration of 'internal' factors such as cash flow is therefore judged to be unjustified. Systems theory is then drawn upon and applied to the separate processes of estimating and tendering. Estimating is seen as taking place in a relatively sheltered environment, and as such operates as a relatively closed system. Tendering, however, takes place in a changing and dynamic environment, and as such must operate as a relatively open system. The use of sophisticated methods to optimize the value of tenders is then identified as being dependent upon the assumption of rationality, which is justified in the case of a relatively closed system (i.e. estimating), but not for a relatively open system (i.e. tendering).
Investigation and optimization of parameters affecting the multiply charged ion yield in AP-MALDI MS
Abstract:
Liquid matrix-assisted laser desorption/ionization (MALDI) allows the generation of predominantly multiply charged ions in atmospheric pressure (AP) MALDI ion sources for mass spectrometry (MS) analysis. The charge state distribution of the generated ions and the efficiency of the ion source in generating such ions crucially depend on the desolvation regime of the MALDI plume after desorption in the AP-to-vacuum inlet. Both high temperature and a flow regime with increased residence time of the desorbed plume in the desolvation region promote the generation of multiply charged ions. Without such measures the application of an electric ion extraction field significantly increases the ion signal intensity of singly charged species while the detection of multiply charged species is less dependent on the extraction field. In general, optimization of high temperature application facilitates the predominant formation and detection of multiply charged compared to singly charged ion species. In this study an experimental setup and optimization strategy is described for liquid AP-MALDI MS which improves the ionization efficiency of selected ion species up to 14 times. In combination with ion mobility separation, the method allows the detection of multiply charged peptide and protein ions for analyte solution concentrations as low as 2 fmol/µL (0.5 µL, i.e. 1 fmol, deposited on the target) with very low sample consumption in the low-nL range.
Abstract:
A new approach for solving the optimal power flow (OPF) problem is established by combining the reduced gradient method and the augmented Lagrangian method with barriers and exploring specific characteristics of the relations between the variables of the OPF problem. Computer simulations on IEEE 14-bus and IEEE 30-bus test systems illustrate the method.
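For orientation, a generic augmented Lagrangian with logarithmic barriers for an OPF problem \(\min f(x)\) s.t. \(h(x)=0\), \(g(x)+s=0\), \(s\ge 0\) (a standard construction, not necessarily the authors' exact formulation) is:

\[
\mathcal{L}_{\rho,\mu}(x,s,\lambda,\pi) \;=\; f(x) \;+\; \lambda^{\top}h(x) \;+\; \frac{\rho}{2}\,\lVert h(x)\rVert^{2} \;+\; \pi^{\top}\big(g(x)+s\big) \;-\; \mu\sum_{i}\ln s_i ,
\]

with the reduced gradient method minimizing over the independent variables while the dependent variables are eliminated through the power-flow equations, which is where the exploited relations between the OPF variables would enter.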
Abstract:
The final contents of total and individual trans-fatty acids of sunflower oil, produced during the deacidification step of physical refining were obtained using a computational simulation program that considered cis-trans isomerization reaction features for oleic, linoleic, and linolenic acids attached to the glycerol part of triacylglycerols. The impact of process variables, such as temperature and liquid flow rate, and of equipment configuration parameters, such as liquid height, diameter, and number of stages, that influence the retention time of the oil in the equipment was analyzed using the response-surface methodology (RSM). The computational simulation and the RSM results were used in two different optimization methods, aiming to minimize final levels of total and individual trans-fatty acids (trans-FA), while keeping neutral oil loss and final oil acidity at low values. The main goal of this work was to indicate that computational simulation, based on a careful modeling of the reaction system, combined with optimization could be an important tool for indicating better processing conditions in industrial physical refining plants of vegetable oils, concerning trans-FA formation.
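The response-surface methodology used here (and in the SIA abstracts further down) typically fits a second-order polynomial to the simulated or measured response; a generic form of the fitted model, not the specific surface the authors report, is:

\[
\hat{y} \;=\; \beta_0 \;+\; \sum_{i=1}^{k}\beta_i x_i \;+\; \sum_{i=1}^{k}\beta_{ii} x_i^{2} \;+\; \sum_{i<j}\beta_{ij} x_i x_j ,
\]

where the \(x_i\) are coded factors (here, e.g., temperature, liquid flow rate, and the equipment configuration parameters) and the optimum operating point is located on the fitted surface.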
Abstract:
The ever-increasing robustness and reliability of flow-simulation methods have consolidated CFD as a major tool in virtually all branches of fluid mechanics. Traditionally, those methods have played a crucial role in the analysis of flow physics. In more recent years, though, the subject has broadened considerably, with the development of optimization and inverse design applications. Since then, the search for efficient ways to evaluate flow-sensitivity gradients has received the attention of numerous researchers. In this scenario, the adjoint method has emerged as, quite possibly, the most powerful tool for the job, which heightens the need for a clear understanding of its conceptual basis. Yet, some of its underlying aspects are still subject to debate in the literature, despite all the research that has been carried out on the method. Such is the case with the adjoint boundary and internal conditions, in particular. The present work aims to shed more light on that topic, with emphasis on the need for an internal shock condition. By following the path of previous authors, the quasi-1D Euler problem is used as a vehicle to explore those concepts. The results clearly indicate that the behavior of the adjoint solution through a shock wave ultimately depends upon the nature of the objective functional.
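To make the adjoint's role concrete, here is the generic discrete-adjoint sensitivity statement (a standard identity, not the paper's continuous quasi-1D derivation): with flow residual \(R(U,\alpha)=0\) and objective functional \(I(U,\alpha)\), the adjoint variable \(\psi\) satisfies

\[
\left(\frac{\partial R}{\partial U}\right)^{\!\top}\!\psi \;=\; \left(\frac{\partial I}{\partial U}\right)^{\!\top},
\qquad
\frac{dI}{d\alpha} \;=\; \frac{\partial I}{\partial \alpha} \;-\; \psi^{\top}\frac{\partial R}{\partial \alpha},
\]

so a single adjoint solve yields the gradient with respect to all design variables \(\alpha\). In the continuous setting, integrating by parts produces the adjoint boundary conditions and, when the flow contains a shock, the additional internal shock condition whose necessity the paper examines.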
Abstract:
Successful coupling of electrochemical preconcentration (EPC) to capillary electrophoresis (CE) with contactless conductivity detection (C4D) is reported for the first time. The EPC-CE interface comprises a dual glassy carbon electrode (GCE) block, a spacer, and an upper block with flow inlet and outlet, a pseudo-reference electrode, and a fitting for the CE silica column, consisting of an orifice perpendicular to the surface of a glassy carbon electrode with a bushing inside to ensure a tight press fit. The end of the capillary in contact with the GCE is slant-polished, thus defining a reproducible distance from the electrode surface to the column bore. First results with EPC-CE-C4D are very promising, as revealed by enrichment factors of two orders of magnitude for Tl, Cu, Pb and Cd ion peak area signals. Detection limits for 10 min deposition time fall around 20 nmol L^-1, with linear calibration curves over a wide range. Besides preconcentration, easy matrix exchange between accumulation and stripping/injection favors procedures like sample cleanup and optimization of pH, ionic strength, and complexing power. This was demonstrated for highly saline samples by using a low-conductivity buffer for stripping/injection to improve separation and promote field-enhanced sample stacking during electromigration along the capillary.
Abstract:
Optimization of photo-Fenton degradation of copper phthalocyanine blue was achieved by response surface methodology (RSM) constructed with the aid of a sequential injection analysis (SIA) system coupled to a homemade photo-reactor. The highest degradation percentage was obtained under the following conditions: [H2O2]/[phthalocyanine] = 7, [H2O2]/[FeSO4] = 10, pH = 2.5, and stopped-flow time in the photo-reactor = 30 s. The SIA system was designed to prepare a monosegment containing the reagents and sample, to pump it toward the photo-reactor for the specified time, and to send the products to a flow-through spectrophotometer for monitoring the color reduction of the dye. Changes in parameters such as reagent molar ratios, residence time, and pH were made by modifications in the software commanding the SIA system, without the need for physical reconfiguration of reagents around the selection valve. The proposed procedure and system fed the statistical program with degradation data for fast construction of response surface plots. After optimization, 97% of the dye was degraded.
Abstract:
This work presents the use of sequential injection analysis (SIA) and the response surface methodology as a tool for optimization of Fenton-based processes. Alizarin red S dye (C.I. 58005) was used as a model compound for the anthraquinone family, whose pigments are widely used in the coatings industry. The following factors were considered: the [H2O2]:[Alizarin] and [H2O2]:[FeSO4] ratios and pH. The SIA system was designed to add reagents to the reactor and to perform on-line sampling of the reaction medium, sending the samples to a flow-through spectrophotometer for monitoring the color reduction of the dye. The proposed system fed the statistical program with degradation data for fast construction of response surface plots. After optimization, 99.7% of the dye was degraded and the TOC content was reduced to 35% of the original value. Low reagent consumption and high sampling throughput were the remarkable features of the SIA system.
Abstract:
Application of optimization algorithms to PDE-based groundwater remediation models can greatly reduce remediation cost. However, groundwater remediation analysis requires a computationally expensive simulation; therefore, effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function (RBF) method. It is designed for global optimization of computationally expensive functions with multiple local optima, and it does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated based on all the evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. The criteria for selecting the next function evaluation points are the estimated function value and the distance from all previously evaluated points. Algorithms created for serial computing are not necessarily efficient in parallel, so the Parallel Stochastic RBF method differs from its serial ancestor. The algorithm is applied to two Groundwater Superfund Remediation sites: the Umatilla Chemical Depot and the Former Blaine Naval Ammunition Depot. In this study, the formulation adopted treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes of simulation for a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e., speedup and efficiency. We find that, with the use of up to 24 parallel processors, the results of the Parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
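The selection rule described above (balancing estimated function value against distance from evaluated points) can be sketched as follows. This is a minimal single-node illustration in the spirit of stochastic RBF methods, not the authors' parallel implementation; the weight w, the perturbation scale, and the use of scipy's RBFInterpolator as the surrogate are all assumptions.

```python
# Hedged sketch of the candidate-selection step of a stochastic RBF
# surrogate method: fit an RBF to all evaluated points, perturb the
# current best point to create candidates, and pick a batch that trades
# off low predicted value against distance from already-known points.

import numpy as np
from scipy.interpolate import RBFInterpolator

def select_batch(X, f, bounds, n_cand=200, batch=4, w=0.5, seed=None):
    """X: (n, d) evaluated points; f: (n,) expensive-function values;
    bounds: (d, 2) box constraints. Returns `batch` points to send to
    the parallel processors for evaluation."""
    rng = np.random.default_rng(seed)
    surrogate = RBFInterpolator(X, f)            # RBF fit to all known points
    best = X[np.argmin(f)]
    span = bounds[:, 1] - bounds[:, 0]
    # candidate points: random perturbations of the current best point
    cand = np.clip(best + 0.1 * span * rng.standard_normal((n_cand, X.shape[1])),
                   bounds[:, 0], bounds[:, 1])
    s = surrogate(cand)                          # estimated function values
    s_n = (s - s.min()) / (np.ptp(s) + 1e-12)    # low predicted value is good
    chosen = []
    for _ in range(batch):
        ref = np.vstack([X, *[c[None, :] for c in chosen]])
        dist = np.linalg.norm(cand[:, None, :] - ref[None, :, :], axis=2).min(axis=1)
        d_n = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)  # far is good
        chosen.append(cand[np.argmin(w * s_n + (1.0 - w) * d_n)])
    return np.array(chosen)
```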
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Oxygen-deficient TiO2 films with enhanced visible and near-infrared optical absorption have been deposited by reactive sputtering using a planar diode radio frequency magnetron configuration. It is observed that the increase in the absorption coefficient is more effective when the O2 gas supply is periodically interrupted rather than when the partial O2 gas pressure in the deposition plasma is decreased. The optical absorption coefficient at 1.5 eV increases from about 1 x 10^2 cm^-1 to more than 4 x 10^3 cm^-1 as a result of the gas flow discontinuity. A red-shift of ~0.24 eV in the optical absorption edge is also observed. High resolution transmission electron microscopy with composition analysis shows that the films present a dense columnar morphology, with an estimated mean column width of 40 nm. Moreover, the interruptions of the O2 gas flow do not produce detectable variations in the film composition along its growth direction. X-ray diffraction and micro-Raman experiments indicate the presence of the TiO2 anatase, rutile, and brookite phases. The anatase phase is dominant, with a slight increment of the rutile and brookite phases in films deposited under discontinued O2 gas flow. The increase of optical absorption in the visible and near-infrared regions has been attributed to a high density of defects in the TiO2 films, which is consistent with density functional theory calculations that place oxygen-related vacancy states in the upper third of the optical bandgap. The electronic structure calculation results, along with the adopted deposition method and experimental data, have been used to propose a mechanism explaining the formation of the observed oxygen-related defects in TiO2 thin films. The observed increase in sub-bandgap absorption and the modeling of the corresponding changes in the electronic structure are potentially useful for optimizing the photocatalytic efficiency and the magnetic doping of TiO2 films.
Abstract:
This article presents a thermoeconomic analysis of cogeneration plants, applied as a rational technique to produce electric power and saturated steam. The aim of this new methodology is to minimize the exergetic manufacturing cost (EMC), based on the Second Law of Thermodynamics. The decision variables selected for the optimization are the pressure and temperature of the steam leaving the boiler in the case of a steam turbine, and the pressure ratio, turbine exhaust temperature, and mass flow in the case of gas turbines. The equations for calculating the capital costs of the components and products are formulated as functions of these decision variables. An application of the method using real data from a multinational chemical industry located in São Paulo state is presented. The conditions that establish the minimum cost are presented as final conclusions.
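For context, exergoeconomic analyses of this kind typically rest on a component-level cost-rate balance; a generic form (an assumption here, not the article's exact EMC equations) assigns to each component \(k\):

\[
\sum_{e}\dot{C}_{e,k} \;=\; \sum_{i}\dot{C}_{i,k} \;+\; \dot{Z}_{k}, \qquad \dot{C} = c\,\dot{E},
\]

where \(\dot{C}\) is a cost rate associated with an exergy flow rate \(\dot{E}\), \(c\) is the cost per unit of exergy, and \(\dot{Z}_k\) is the capital cost rate of component \(k\); the EMC objective is then minimized over the decision variables listed above.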
Abstract:
This paper presents, for the first time, how to easily incorporate FACTS devices into an optimal active power flow model such that an efficient interior-point method may be applied. The optimal active power flow model is based on a network flow approach instead of the traditional nodal formulation, which allows the use of an efficient predictor-corrector interior-point method sped up by sparsity exploitation. The mathematical equivalence between the network flow and nodal models is addressed, as well as the computational advantages of the former considering solution by interior-point methods. The adequacy of the network flow model for representing FACTS devices is presented and illustrated on a small 5-bus system. The model was implemented using Matlab, and its performance was evaluated on the 3,397-bus, 4,075-branch Brazilian power system; the results show the robustness and efficiency of the proposed formulation. The numerical results also indicate an efficient tool for optimal active power flow that is suitable for incorporating FACTS devices.
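As a toy illustration of the network flow view (not the paper's model, which adds FACTS device representations and a specialized predictor-corrector interior-point solver), active power dispatch can be posed as a linear program over generator outputs and signed branch flows, with nodal balance equalities and branch capacity bounds. The 3-bus data below are invented, and scipy's HiGHS LP backend is a stand-in solver; note that a pure network flow model omits the voltage-angle coupling whose equivalence to the nodal model the paper addresses.

```python
# Hedged sketch: network-flow formulation of active power dispatch as a
# linear program, ignoring voltage angles and losses.

import numpy as np
from scipy.optimize import linprog

buses = 3
branches = [(0, 1), (1, 2), (0, 2)]          # from-bus, to-bus
cap = np.array([60.0, 60.0, 60.0])           # MW flow limit per branch
gens = [(0, 10.0, 150.0), (1, 30.0, 100.0)]  # (bus, $/MWh, max MW)
load = np.array([0.0, 0.0, 100.0])           # MW demand per bus

# node balance: generation at bus - net outflow on branches = load
A = np.zeros((buses, len(branches)))         # branch incidence (+from, -to)
for b, (u, v) in enumerate(branches):
    A[u, b], A[v, b] = 1.0, -1.0
G = np.zeros((buses, len(gens)))
for j, (bus, _, _) in enumerate(gens):
    G[bus, j] = 1.0

c = np.concatenate([[cost for _, cost, _ in gens], np.zeros(len(branches))])
A_eq = np.hstack([G, -A])                    # variables x = [g; f]
bounds = [(0.0, gmax) for _, _, gmax in gens] + \
         [(-ci, ci) for ci in cap]           # flows may run either direction

res = linprog(c, A_eq=A_eq, b_eq=load, bounds=bounds, method="highs")
g, f = res.x[:len(gens)], res.x[len(gens):]
print("dispatch (MW):", g, "branch flows (MW):", f, "cost:", res.fun)
```

On this data the cheap generator at bus 0 serves the whole 100 MW load, splitting its output over the direct branch and the two-hop path to respect the 60 MW branch limits.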