975 results for "Optimization analysis"
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper presents an analysis of an irreversible Otto cycle aiming to optimize the net power through the ecological coefficient of performance (ECOP) and the ecological function. The studied cycle operates between two thermal reservoirs of infinite thermal capacity, with internal irreversibilities arising from the non-isentropic compression and expansion processes, irreversibilities from the thermal resistance of the heat exchangers, and heat leakage from the high-temperature reservoir to the low-temperature one. Analytical expressions are applied for the power output optimized by the ECOP, by the ecological function, and by the maximum-power criterion, in conjunction with a graphical analysis in which several cycle operating parameters are examined to clarify the effects of the irreversibilities on the optimized power.
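As a rough illustration of the two criteria this abstract refers to, the sketch below evaluates the ecological function and the ECOP from heat-exchange rates, using the definitions common in finite-time thermodynamics (E = P − T_L·σ and ECOP = P/(T_L·σ)); the temperatures and heat rates are illustrative, not taken from the paper.

```python
# Hedged sketch: ecological function and ECOP for a cycle exchanging heat
# with two reservoirs, using the common finite-time-thermodynamics
# definitions. All numbers are illustrative.

T_H, T_L = 1200.0, 300.0   # reservoir temperatures [K]

def criteria(q_h, q_l):
    """q_h: heat rate absorbed from the hot reservoir [W];
    q_l: heat rate rejected to the cold reservoir [W]."""
    power = q_h - q_l                 # net power output [W]
    sigma = q_l / T_L - q_h / T_H     # entropy generation rate [W/K]
    eco = power - T_L * sigma         # ecological function [W]
    ecop = power / (T_L * sigma)      # ecological coefficient of performance
    return power, sigma, eco, ecop

power, sigma, eco, ecop = criteria(q_h=1000.0, q_l=550.0)
```

Sweeping `q_h`/`q_l` over a cycle model and maximizing `eco` or `ecop` reproduces, in miniature, the kind of optimization the paper carries out analytically.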
Abstract:
The common practice in industry is to perform flutter analyses using the generalized stiffness and mass matrices obtained from the finite element method (FEM) and aerodynamic generalized force matrices obtained from a panel method, such as the doublet lattice method. These analyses are often reperformed when significant differences are found between the structural frequencies and damping ratios determined from ground vibration tests and those predicted by the FEM. This unavoidable rework can result in a lengthy and costly analysis process during aircraft development. In this context, this paper presents an approach for flutter analysis that includes uncertainties in natural frequencies and damping ratios. The main goal is to ensure the stability of the nominal system with these modal parameters varying over a limited range. The aeroelastic system is written as an affine parameter model, and robust stability is verified by searching for a Lyapunov function through linear matrix inequalities and convex optimization.
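A minimal sketch of the Lyapunov stability idea behind such an approach, with the LMI search replaced by a direct Lyapunov-equation solve over sampled modal parameters (so this only spot-checks the parameter box, whereas the LMI formulation certifies it all at once); the single-mode matrices and parameter ranges are illustrative.

```python
import numpy as np

def lyapunov_stable(A, tol=1e-9):
    """Stable iff the solution P of A.T @ P + P @ A = -I is positive definite.
    The small Lyapunov equation is solved via the Kronecker (vec) identity."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(M, -np.eye(n).flatten(order="F")).reshape(n, n, order="F")
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > tol))

def modal_state_matrix(omega, zeta):
    """Single-mode oscillator x' = A x, natural frequency omega, damping zeta."""
    return np.array([[0.0, 1.0], [-omega**2, -2.0 * zeta * omega]])

# Sweep a limited range of modal parameters (illustrative bounds).
stable = all(
    lyapunov_stable(modal_state_matrix(w, z))
    for w in np.linspace(9.0, 11.0, 5)      # natural frequency [rad/s]
    for z in np.linspace(0.01, 0.05, 5)     # damping ratio
)
```

A negative damping ratio (energy input, as in flutter onset) makes the same test fail, which is the event the robust analysis is meant to exclude over the whole parameter range.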
Abstract:
A methodology for analyzing organochlorine pesticides (OCPs) in water samples by headspace stir bar sorptive extraction (HS-SBSE) has been developed. The bars were coated in-house with a thick PDMS film so that they would work properly in the headspace mode. Sampling was done with a novel HS-SBSE system, and analysis was performed by capillary GC coupled to mass spectrometric detection (HS-SBSE-GC-MS). Optimization of the extraction over different experimental parameters established a standard equilibrium time of 120 min at 85 °C. A mixture of ACN/toluene as back-extraction solvent performed well in removing the OCPs sorbed on the bar. During the bar evaluation, reproducibility between 2.1 and 14.8% and linearity between 0.96 and 1.0 were obtained for pesticides spiked into water samples over a linear range of 5 to 17 ng/g.
Abstract:
Liquid biofuels can be produced from a variety of feedstocks and processes. Ethanol and biodiesel production processes based on conventional raw materials are already commercial, but subject to further improvement and optimization. Biofuel production processes using lignocellulosic feedstocks are still in the demonstration phase and require further R&D to increase efficiency. A primary tool for analyzing the efficiency of biofuel production processes from an integrated point of view is exergy analysis. To gain further insight into the performance of biofuel production processes, a simulation tool, which allows analyzing the effect of process variables on the exergy efficiency of stages in which chemical or biochemical reactions take place, was implemented. The feedstocks selected for analysis were parts or products of tropical plants, such as the fruit and flower stalk of the banana tree, palm oil, and glucose syrups. Results of process simulation, taking into account actual process conditions, showed that the exergy efficiencies of the acid hydrolysis of banana fruit and banana pulp were of the same order (between 50% and 60%), lower than the figure for palm oil transesterification (90%) and higher than the exergy efficiency of the enzymatic hydrolysis of flower stalk (20.3%). (C) 2011 Elsevier Ltd. All rights reserved.
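The stage-level figure of merit used in such analyses can be sketched in one line: the exergy efficiency is the exergy of the useful outputs over the exergy of all inputs. The stream values below are illustrative placeholders, not the paper's data.

```python
# Hedged sketch: stage-level exergy efficiency, eta_ex = useful output
# exergy / input exergy. Stream values [kJ] are invented for illustration.

def exergy_efficiency(outputs_kj, inputs_kj):
    return sum(outputs_kj) / sum(inputs_kj)

# e.g. a hydrolysis stage: product sugars vs feedstock + steam + chemicals
eta = exergy_efficiency(outputs_kj=[550.0], inputs_kj=[900.0, 80.0, 20.0])
```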
Abstract:
A sensitive, selective, and reproducible in-tube solid-phase microextraction and liquid chromatographic (in-tube SPME/LC-UV) method for determination of lidocaine and its metabolite monoethylglycinexylidide (MEGX) in human plasma has been developed, validated, and applied to a pharmacokinetic study in pregnant women with gestational diabetes mellitus (GDM) subjected to epidural anesthesia. Important factors in the optimization of in-tube SPME performance are discussed, including the draw/eject sample volume, draw/eject cycle number, draw/eject flow rate, sample pH, and influence of plasma proteins. The limits of quantification of the in-tube SPME/LC method were 50 ng/mL for both the metabolite and lidocaine. The interday and intraday precision had coefficients of variation lower than 8%, and accuracy ranged from 95 to 117%. The response of the in-tube SPME/LC method for the analytes was linear over a dynamic range from 50 to 5000 ng/mL, with correlation coefficients higher than 0.9976. The developed in-tube SPME/LC method was successfully used to analyze lidocaine and its metabolite in plasma samples from pregnant women with GDM subjected to epidural anesthesia for the pharmacokinetic study.
Abstract:
Piezoelectric materials can be used to convert oscillatory mechanical energy into electrical energy. Energy harvesting devices are designed to capture the ambient energy surrounding the electronics and convert it into usable electrical energy. The design of energy harvesting devices is not obvious, requiring optimization procedures. This paper investigates the influence of pattern gradation using topology optimization on the design of piezocomposite energy harvesting devices based on bending behavior. The objective function consists of maximizing the electric power generated in a load resistor. A projection scheme is employed to compute the element densities from the design variables and control the length scale of the material density. Examples of two-dimensional piezocomposite energy harvesting devices are presented and discussed using the proposed method. The numerical results illustrate that pattern gradation constraints help to increase the electric power generated in a load resistor and guide the problem toward a more stable solution. (C) 2012 Elsevier Ltd. All rights reserved.
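The projection step mentioned above can be sketched with the widely used tanh-form threshold projection, which maps filtered design variables in [0, 1] to element densities while controlling the length scale; the paper's exact scheme may differ, and the parameters here are illustrative.

```python
import numpy as np

def project(mu, beta=8.0, eta=0.5):
    """Smoothed Heaviside projection of filtered design variables mu in [0, 1].

    beta: projection sharpness (beta -> infinity yields a 0/1 design)
    eta:  projection threshold
    """
    num = np.tanh(beta * eta) + np.tanh(beta * (mu - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

mu = np.linspace(0.0, 1.0, 5)   # filtered design variables
rho = project(mu)               # projected element densities
```

Raising `beta` over the optimization iterations pushes intermediate densities toward 0/1, which is how such schemes enforce a crisp material layout.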
Abstract:
This paper presents the development of a mathematical model to optimize the management and operation of the Brazilian hydrothermal system. The system consists of a large set of individual hydropower plants and a set of aggregated thermal plants. The energy generated in the system is interconnected by a transmission network so it can be transmitted to centers of consumption throughout the country. The optimization model is capable of handling different types of constraints, such as interbasin water transfers, water supply for various purposes, and environmental requirements. Its overall objective is to produce energy to meet the country's demand at a minimum cost. Called HIDROTERM, the model integrates a database with basic hydrological and technical information to run the optimization model, and provides an interface to manage the input and output data. The optimization model uses the General Algebraic Modeling System (GAMS) package and can invoke different linear as well as nonlinear programming solvers. The optimization model was applied to the Brazilian hydrothermal system, one of the largest in the world. The system is divided into four subsystems with 127 active hydropower plants. Preliminary results under different scenarios of inflow, demand, and installed capacity demonstrate the efficiency and utility of the model. From this and other case studies in Brazil, the results indicate that the methodology developed is suitable for different applications, such as operation planning, capacity expansion, operational rule studies, and trade-off analysis among multiple water users. DOI: 10.1061/(ASCE)WR.1943-5452.0000149. (C) 2012 American Society of Civil Engineers.
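A toy version of the dispatch problem such a model solves at scale: meet demand at minimum cost with free but energy-limited hydro generation plus costly thermal generation. The sketch uses SciPy's `linprog` rather than GAMS, and the two-period system, costs, and demands are invented for illustration.

```python
from scipy.optimize import linprog

# Hedged toy hydrothermal dispatch: two periods, one hydro plant (free but
# limited by stored water) and one thermal plant (costly). Illustrative data.
demand = [100.0, 120.0]        # demand per period
hydro_energy = 150.0           # total hydro energy available over both periods
thermal_cost = 50.0            # cost per unit of thermal generation

# Decision variables: [h1, h2, t1, t2] = hydro and thermal output per period
c = [0.0, 0.0, thermal_cost, thermal_cost]       # minimize thermal cost
A_eq = [[1, 0, 1, 0], [0, 1, 0, 1]]              # supply == demand, each period
A_ub = [[1, 1, 0, 0]]                            # total hydro <= stored energy
res = linprog(c, A_ub=A_ub, b_ub=[hydro_energy],
              A_eq=A_eq, b_eq=demand, bounds=[(0, None)] * 4)
h1, h2, t1, t2 = res.x
```

The optimum exhausts the hydro energy and fills the remaining 70 units with thermal generation; the full model adds reservoir dynamics, transmission, and the other constraints listed in the abstract.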
Abstract:
The wide variety of molecular architectures used in sensors and biosensors and the large amount of data generated with some principles of detection have motivated the use of computational methods, such as information visualization techniques, not only to handle the data but also to optimize sensing performance. In this study, we combine projection techniques with micro-Raman scattering and atomic force microscopy (AFM) to address critical issues related to practical applications of electronic tongues (e-tongues) based on impedance spectroscopy. Experimentally, we used sensing units made with thin films of a perylene derivative (AzoPTCD acronym), coating Pt interdigitated electrodes, to detect CuCl₂ (Cu²⁺), methylene blue (MB), and saccharose in aqueous solutions, which were selected due to their distinct molecular sizes and ionic character in solution. The AzoPTCD films were deposited from monolayers to 120 nm via Langmuir-Blodgett (LB) and physical vapor deposition (PVD) techniques. Because the main aspects investigated were how the interdigitated electrodes are coated by thin films (architecture on the e-tongue) and the film thickness, we employed the same material for all sensing units. The capacitance data were projected into a 2D plot using the force scheme method, from which we could infer that at low analyte concentrations the electrical response of the units was determined by the film thickness. Concentrations of 10 μM or higher could be distinguished with thinner films (tens of nanometers at most), which could withstand the impedance measurements without causing significant changes in the Raman signal of the AzoPTCD film-forming molecules. The sensitivity to the analytes appears to be related to adsorption on the film surface, as inferred from Raman spectroscopy data using MB as analyte and from the multidimensional projections.
The analysis of the results presented may serve as a new route to select materials and molecular architectures for novel sensors and biosensors, in addition to suggesting ways to unravel the mechanisms behind the high sensitivity obtained in various sensors.
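The force scheme projection used above can be sketched as an iterative layout: each pass nudges every 2D point so that pairwise layout distances approach the distances between the original high-dimensional (here, capacitance-like) vectors. This is a simplified rendering of the force scheme idea, not the authors' implementation; the data and parameters are illustrative.

```python
import numpy as np

def force_scheme(X, iters=100, step=0.1, seed=0):
    """Project rows of X to 2D by iteratively matching pairwise distances."""
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # target dists
    P = rng.standard_normal((n, 2))                            # random 2D start
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                v = P[j] - P[i]
                d = np.linalg.norm(v) + 1e-12
                P[j] += step * (D[i, j] - d) * v / d   # pull/push j along v
    return P

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])  # toy vectors
P = force_scheme(X)   # the distant sample stays distant in the 2D plot
```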
Abstract:
Objective To evaluate changes in tissue perfusion parameters in dogs with severe sepsis/septic shock in response to goal-directed hemodynamic optimization in the ICU, and their relation to outcome. Design Prospective observational study. Setting ICU of a veterinary university medical center. Animals Thirty dogs with severe sepsis or septic shock caused by pyometra who underwent surgery and were admitted to the ICU. Measurements and Main Results Severe sepsis was defined as the presence of sepsis and sepsis-induced dysfunction of one or more organs. Septic shock was defined as the presence of severe sepsis plus hypotension not reversed with fluid resuscitation. After the presumptive diagnosis of sepsis secondary to pyometra, blood samples were collected and clinical findings were recorded. Volume resuscitation with 0.9% saline solution and antimicrobial therapy were initiated. Following abdominal ultrasonography and confirmation of increased uterine volume, dogs underwent corrective surgery. After surgery, the animals were admitted to the ICU, where resuscitation was guided by the clinical parameters, central venous oxygen saturation (ScvO2), lactate, and base deficit. Comparing survivors and nonsurvivors, ScvO2, lactate, and base deficit on ICU admission were each independently related to death (P = 0.001, P = 0.030, and P < 0.001, respectively). ScvO2 and base deficit were found to be the best discriminators between survivors and nonsurvivors, as assessed via receiver operating characteristic curve analysis. Conclusion Our study suggests that ScvO2 and base deficit are useful in predicting the prognosis of dogs with severe sepsis and septic shock; animals with a higher ScvO2 and lower base deficit at admission to the ICU have a lower probability of death.
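The receiver operating characteristic comparison reported here can be sketched with the rank (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen member of one group scores higher than a randomly chosen member of the other. The ScvO2 values below are invented for illustration, not the study's data.

```python
# Hedged sketch: AUC via the rank (Mann-Whitney) formulation, of the kind
# used to compare how well a marker separates survivors from nonsurvivors.

def auc(scores_pos, scores_neg):
    """Probability a random positive scores above a random negative
    (ties count 1/2): the area under the ROC curve."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

survivors_scvo2 = [78.0, 75.0, 82.0, 70.0]      # illustrative values [%]
nonsurvivors_scvo2 = [65.0, 60.0, 72.0, 58.0]
a = auc(survivors_scvo2, nonsurvivors_scvo2)
```

An AUC near 1 means the marker discriminates the groups well; 0.5 means no better than chance.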
Abstract:
The present paper presents a theoretical analysis of a cross flow heat exchanger with a new flow arrangement comprising several tube rows. The thermal performance of the proposed flow arrangement is compared with that of a typical counter cross flow arrangement used in the chemical, refrigeration, automotive, and air conditioning industries. The thermal performance comparison has been performed in terms of the following parameters: heat exchanger effectiveness and efficiency, dimensionless entropy generation, entransy dissipation number, and dimensionless local temperature differences. It is also shown that the uniformity of the temperature difference field leads to a higher thermal performance of the heat exchanger. In the present case this is accomplished through a different organization of the in-tube fluid circuits in the heat exchanger. The relation between the recently introduced "entransy dissipation number" and the conventional thermal effectiveness has been obtained in terms of the "number of transfer units". A case study has been solved to quantitatively obtain the temperature difference distribution over two-row units for both the proposed arrangement and the counter cross flow one. It has been shown that the proposed arrangement presents better thermal performance regardless of the comparison parameter. (C) 2012 Elsevier Masson SAS. All rights reserved.
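For context on the effectiveness/NTU quantities compared above, the sketch below evaluates the standard counterflow effectiveness-NTU relation; the paper's specific entransy-effectiveness relation is not reproduced here, and the inputs are illustrative.

```python
import math

def effectiveness_counterflow(ntu, c):
    """Standard counterflow effectiveness-NTU relation.

    ntu: number of transfer units, c: Cmin/Cmax heat-capacity-rate ratio.
    """
    if abs(c - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)     # balanced-stream limiting case
    e = math.exp(-ntu * (1.0 - c))
    return (1.0 - e) / (1.0 - c * e)

eps = effectiveness_counterflow(ntu=2.0, c=0.5)
```

Expressing the entransy dissipation number through the same NTU variable, as the paper does, allows both figures of merit to be compared on one axis.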
Abstract:
In this paper, the effects of uncertainty and expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results are dependent on the failure probabilities used as constraints in the analysis. Risk optimization (RO) increases the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation, and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and its optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations which respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, this solution also reduces manufacturing costs but increases total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established. Optimum structural design considering expected costs of failure cannot be controlled solely by safety factors or by failure probability constraints, but will depend on the actual structural configuration. (c) 2011 Elsevier Ltd. All rights reserved.
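The risk optimization (RO) idea described above can be reduced to a one-variable sketch: total expected cost = construction cost + failure probability × failure cost, minimized over the design variable. The cost model and numbers below are invented for illustration, not the paper's.

```python
import math

def pf(d):
    """Failure probability decreasing with design variable d (illustrative)."""
    return math.exp(-d)

def total_expected_cost(d, c_build=10.0, c_fail=1e4):
    """Construction cost grows with d; expected failure cost shrinks."""
    return c_build * d + pf(d) * c_fail

# Brute-force 1D search for the risk-optimal design.
ds = [i / 100 for i in range(1, 1500)]
d_opt = min(ds, key=total_expected_cost)
```

For this model the analytic optimum is d* = ln(c_fail / c_build) ≈ 6.91: building stronger than d* wastes money on construction, weaker than d* wastes it on expected failures, which is exactly the economy/safety balance RO formalizes.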
Abstract:
Electronic polarization induced by the interaction of a reference molecule with a liquid environment is expected to affect the magnetic shielding constants. Understanding this effect using realistic theoretical models is important for the proper use of nuclear magnetic resonance in molecular characterization. In this work, we consider the pyridine molecule in water as a model system to briefly investigate this aspect. Thus, Monte Carlo simulations and quantum mechanics calculations at the B3LYP/6-311++G(d,p) level are used to analyze different aspects of the solvent effects on the N-15 magnetic shielding constant of pyridine in water. This includes, in particular, the geometry relaxation and the electronic polarization of the solute by the solvent. The polarization effect is found to be very important, but, as expected for pyridine, the geometry relaxation contribution is essentially negligible. Using an average electrostatic model of the solvent, the magnetic shielding constant is calculated as -58.7 ppm, in good agreement with the experimental value of -56.3 ppm. The explicit inclusion of hydrogen-bonded water molecules embedded in the electrostatic field of the remaining solvent molecules gives the value of -61.8 ppm.
Abstract:
This paper presents a technique for performing analog design synthesis at the circuit level, providing feedback to the designer through exploration of the Pareto frontier. A modified simulated annealing algorithm, able to perform crossover with past anchor points when a local minimum is found, is used as the optimization algorithm in the initial synthesis procedure. After all specifications are met, the algorithm searches for the extreme points of the Pareto frontier in order to obtain a non-exhaustive exploration of the Pareto front. Finally, multi-objective particle swarm optimization is used to spread the results and to find a more accurate frontier. Piecewise linear functions are used as single-objective cost functions to produce a smooth and equal convergence of all measurements to the desired specifications during the composition of the aggregate objective function. To verify the presented technique, two circuits were designed: a Miller amplifier with 96 dB voltage gain, 15.48 MHz unity gain frequency, and a slew rate of 19.2 V/μs with a current supply of 385.15 μA; and a complementary folded cascode with 104.25 dB voltage gain, 18.15 MHz unity gain frequency, and a slew rate of 13.370 MV/μs. These circuits were synthesized in a 0.35 μm technology. The results show that the method provides a fast approach to good solutions using the modified SA, and further good Pareto front exploration through its connection to the particle swarm optimization algorithm.
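The piecewise linear cost composition described above can be sketched as follows: each specification contributes zero cost once met and a linearly growing, normalized penalty otherwise, so all measurements converge comparably in the aggregate objective. The specification names and values are invented for illustration.

```python
# Hedged sketch of piecewise-linear single-objective costs aggregated into
# one objective, in the spirit of the composition the abstract describes.

def pw_cost(value, spec, maximize=True):
    """0 when the spec is met; normalized linear violation otherwise."""
    gap = (spec - value) if maximize else (value - spec)
    return max(0.0, gap / abs(spec))

def aggregate(measured, specs):
    """Sum the per-specification pieces into the aggregate objective."""
    return sum(pw_cost(measured[k], *specs[k]) for k in specs)

# (spec target, True = maximize / False = minimize), illustrative names:
specs = {"gain_db": (96.0, True), "ugf_mhz": (15.0, True), "i_ua": (400.0, False)}
cost = aggregate({"gain_db": 90.0, "ugf_mhz": 16.0, "i_ua": 420.0}, specs)
```

Normalizing each gap by its specification keeps quantities with very different units (dB, MHz, μA) on a comparable scale, which is what makes the pieces converge evenly.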
Abstract:
Current scientific applications produce large amounts of data. The processing, handling, and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have considered techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
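The organize-as-time-series-then-model step can be sketched with a least-squares autoregressive fit: observe an access pattern, fit an AR(2) model, and predict the next value. This is a stand-in for the paper's classify-then-model pipeline, and the synthetic periodic series is illustrative only.

```python
import numpy as np

def fit_ar(series, order=2):
    """Least-squares fit of an AR(order) model to a 1D series."""
    X = np.column_stack([series[i:len(series) - order + i]
                         for i in range(order)])   # lagged predictors
    y = series[order:]                             # one-step-ahead targets
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef):
    """One-step-ahead prediction from the last `order` observations."""
    order = len(coef)
    return float(np.dot(series[-order:], coef))

t = np.arange(40, dtype=float)
series = np.sin(0.3 * t)          # a synthetic periodic access pattern
coef = fit_ar(series)
pred = predict_next(series, coef)
```

A sinusoid satisfies an exact AR(2) recurrence, so the fit recovers the pattern and the prediction matches the true next sample; real access traces would first be classified (trend, periodicity, noise) to pick an appropriate model, as the abstract describes.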