Abstract:
Betacyanins are betalain pigments that display a red-violet colour reported to be three times stronger than the red-violet dye produced by anthocyanins [1]. The applications of betacyanins cover a wide range of matrices, mainly as additives or ingredients in the food industry, cosmetics, pharmaceuticals and livestock feed. Although less commonly used than anthocyanins and carotenoids, betacyanins are stable between pH 3 and 7 and are suitable for colouring low-acid matrices. In addition, betacyanins have been reported to display interesting medicinal character as powerful antioxidant and chemopreventive compounds in both in vitro and in vivo models [2]. Betacyanins are obtained mainly from the red beet of the Beta vulgaris plant (between 10 and 20 mg per 100 g pulp), but alternative primary sources are needed [3]. Moreover, independently of the source used, the effects of the variables that govern the extraction of betacyanins have not been properly described and quantified. Therefore, the aim of this study was to identify and optimize the conditions that maximize betacyanin extraction using the tepals of Gomphrena globosa L. flowers as an alternative source. Assisted by the statistical technique of response surface methodology, an experimental design was developed to test the significant explanatory variables of the extraction (time, temperature, solid-liquid ratio and ethanol-water ratio). The identification was performed using high-performance liquid chromatography coupled with a photodiode array detector and mass spectrometry with electrospray ionization (HPLC-PDA-MS/ESI), and the response was measured by the quantification of these compounds using HPLC-PDA. Afterwards, a response surface analysis was performed to evaluate the results. The major betacyanin compounds identified were gomphrenin II and III and isogomphrenin II and III. The highest total betacyanin content was obtained under the following conditions: 45 min of extraction time, 35 °C, 35 g/L solid-liquid ratio and 25% ethanol. These values would not have been found without optimizing the extraction conditions, which moreover showed trends contrary to what has been described in the scientific literature. More specifically, concerning the time and temperature variables, an increase in both values (relative to the ones commonly used in the literature) considerably improved the betacyanin extraction yield without displaying any degradation patterns.
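As a minimal sketch of the kind of response-surface fit described above: the code below fits a second-order polynomial in two of the four variables (extraction time and temperature) to synthetic data and locates the stationary point of the fitted surface. The variable ranges, coefficients and data are illustrative assumptions, not the study's actual measurements.

# Minimal response-surface sketch (illustrative synthetic data):
# fit a second-order polynomial in extraction time and temperature,
# then locate the stationary point of the fitted surface.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(15, 75, 40)       # extraction time, min (assumed range)
T = rng.uniform(20, 50, 40)       # temperature, deg C (assumed range)
# synthetic yield peaking near t = 45 min, T = 35 C, as in the abstract
y = 10 - 0.004*(t - 45)**2 - 0.01*(T - 35)**2 + rng.normal(0, 0.1, 40)

# design matrix for y = b0 + b1*t + b2*T + b3*t^2 + b4*T^2 + b5*t*T
X = np.column_stack([np.ones_like(t), t, T, t**2, T**2, t*T])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# stationary point: grad = 0  ->  [[2*b3, b5], [b5, 2*b4]] [t, T] = -[b1, b2]
A = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
t_opt, T_opt = np.linalg.solve(A, -b[1:3])
print(f"fitted optimum: t = {t_opt:.1f} min, T = {T_opt:.1f} C")

In the study itself, the same machinery runs over all four variables, with the fitted optimum checked against the experimental design points.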
Abstract:
According to many scientists, the third industrial revolution has already begun, and this primarily means the transition to renewable energy sources. Energy requirements are increasing rapidly due to fast industrialization and the growing number of vehicles on the roads. Massive consumption of fossil fuels leads to environmental pollution; therefore, biofuels are offered as an alternative. For example, the application of biodiesel in diesel engines instead of petroleum diesel results in a proven reduction of harmful exhaust emissions. One of the most important technologies, already explored at the commercial level, is the production of a liquid biofuel applicable in compression-ignition (diesel) engines from biomass rich in fats and oils. This biofuel is generically referred to as biodiesel and consists essentially of a mixture of FAMEs (fatty acid methyl esters). The current work describes modern approaches to biodiesel production from vegetable oil and the subsequent analysis of the main characteristics of the produced biodiesel, such as density, acidity, iodine value and FAME content.
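A small sketch of how the quoted quality parameters are typically screened: the limits below are the commonly cited EN 14214 values (density at 15 °C 860-900 kg/m3, acid value <= 0.50 mg KOH/g, iodine value <= 120 g I2/100 g, FAME content >= 96.5%), quoted from memory here as assumptions; verify against the current edition of the standard before use.

# Sketch: screen measured biodiesel properties against EN 14214-style limits.
LIMITS = {
    "density_15C_kg_m3": (860.0, 900.0),   # (min, max)
    "acid_value_mgKOH_g": (None, 0.50),    # max only
    "iodine_value_gI2_100g": (None, 120.0),
    "fame_content_pct": (96.5, None),      # min only
}

def check_biodiesel(sample: dict) -> dict:
    """Return pass/fail per property for a dict of measured values."""
    result = {}
    for prop, (lo, hi) in LIMITS.items():
        v = sample[prop]
        ok = (lo is None or v >= lo) and (hi is None or v <= hi)
        result[prop] = "pass" if ok else "fail"
    return result

print(check_biodiesel({"density_15C_kg_m3": 883.0,
                       "acid_value_mgKOH_g": 0.32,
                       "iodine_value_gI2_100g": 108.0,
                       "fame_content_pct": 97.1}))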
Abstract:
The knowledge of the liquid-liquid equilibria (LLE) between ionic liquids (ILs) and water is of utmost importance for environmental monitoring, process design and optimization. Therefore, in this work, the mutual solubilities with water, for the ILs combining the 1-methylimidazolium, [C(1)im](+); 1-ethylimidazolium, [C(2)im](+); 1-ethyl-3-propylimidazolium, [C(2)C(3)im](+); and 1-butyl-2,3-dimethylimidazolium, [C(4)C(1)C(1)im](+) cations with the bis(trifluoromethylsulfonyl)imide anion, were determined and compared with the isomers of the symmetric 1,3-dialkylimidazolium bis(trifluoromethylsulfonyl)imide ([C(n)C(n)im][NTf2], with n = 1-3) and of the asymmetric 1-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([C(n)C(1)im][NTf2], with n = 2-5) series of ILs. The results obtained provide a broad picture of the impact of the IL cation structural isomerism, including the number of alkyl side chains at the cation, on the water-IL mutual solubilities. Despite the hydrophobic behaviour associated with the [NTf2](-) anion, the results show a significant solubility of water in the IL-rich phase, whereas the solubility of ILs in the water-rich phase is much lower. The thermodynamic properties of solution indicate that the solubility of ILs in water is entropically driven and highly influenced by the cation size. Using the results obtained here in addition to literature data, a correlation between the solubility of [NTf2]-based ILs in water and their molar volume is proposed for a large range of cations. The COnductor-like Screening MOdel for Real Solvents (COSMO-RS) was also used to estimate the LLE of the investigated systems and proved to be a useful predictive tool for the a priori screening of ILs, aiming at finding suitable candidates before extensive experimental measurements.
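A minimal sketch of the kind of correlation proposed above: a linear fit of log-solubility against cation molar volume, usable to pre-screen a candidate IL. The data points below are synthetic placeholders, not the measured values reported in the work.

# Sketch: linear correlation of ln(solubility in water) vs molar volume
# for [NTf2]-based ILs (synthetic placeholder data).
import numpy as np

Vm = np.array([220.0, 240.0, 260.0, 280.0, 300.0])          # cm3/mol
x_IL = np.array([3.2e-3, 1.9e-3, 1.1e-3, 6.5e-4, 3.8e-4])   # mole fraction

slope, intercept = np.polyfit(Vm, np.log(x_IL), 1)
print(f"ln(x_IL) = {slope:.4f} * Vm + {intercept:.2f}")
# predict the solubility of a candidate IL with Vm = 270 cm3/mol
print(f"predicted x_IL at Vm = 270: {np.exp(slope*270 + intercept):.2e}")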
Abstract:
Purpose – Curve fitting from unordered noisy point samples is needed for surface reconstruction in many applications. In the literature, several approaches have been proposed to solve this problem. However, previous works lack a formal characterization of the curve fitting problem and an assessment of the effect of several parameters (i.e. scalars that remain constant in the optimization problem), such as the number of control points (m), curve degree (b), knot vector composition (U), norm degree (k), and point sample size (r), on the optimized curve reconstruction measured by a penalty function (f). The paper aims to discuss these issues.
Design/methodology/approach – A numerical sensitivity analysis of the effect of m, b, k and r on f, and a characterization of the fitting procedure from the mathematical viewpoint, are performed. Also, the spectral (frequency) analysis of the derivative of the angle of the fitted curve with respect to u is explored as a means to detect spurious curls and peaks.
Findings – It is more effective to find optimum values for m than for k or b, because the topological faithfulness of the resulting curve depends strongly on m. Furthermore, when an exaggerated number of control points is used, the resulting curve presents spurious curls and peaks. The authors were able to detect the presence of such spurious features with spectral analysis. Also, the authors found that the method for curve fitting is robust to significant decimation of the point sample.
Research limitations/implications – The authors have addressed important voids in previous works in this field. The authors determined, among the curve fitting parameters m, b and k, which of them influence the results the most, and how. Also, the authors performed a characterization of the curve fitting problem from the optimization perspective. Finally, the authors devised a method to detect spurious features in the fitted curve.
Practical implications – This paper provides a methodology to select the important tuning parameters in a formal manner.
Originality/value – To the best of the authors' knowledge, no previous work has formally evaluated the sensitivity of the goodness of the curve fit with respect to the different possible tuning parameters (curve degree, number of control points, norm degree, etc.).
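A sketch of the spectral check described above: fit a parametric B-spline to noisy samples and inspect the spectrum of d(theta)/du, where theta is the tangent angle, for spurious high-frequency content. scipy's splprep (which controls the effective m via a smoothing factor) stands in here for the paper's optimization-based fit; the test curve and noise level are arbitrary assumptions.

# Sketch: B-spline fit plus spectral analysis of the tangent-angle derivative.
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(1)
s = np.linspace(0, 2*np.pi, 200)
pts = np.array([np.cos(s), np.sin(2*s)]) + rng.normal(0, 0.01, (2, 200))

# smoothing factor s controls the effective number of control points m:
# smaller s -> more control points -> risk of spurious curls
tck, _ = splprep(pts, s=0.05, k=3)             # k here = curve degree b
u = np.linspace(0, 1, 1024)
dx, dy = splev(u, tck, der=1)
theta = np.unwrap(np.arctan2(dy, dx))          # tangent angle along the curve
dtheta_du = np.gradient(theta, u)

spectrum = np.abs(np.fft.rfft(dtheta_du - dtheta_du.mean()))
print("dominant frequency bin:", int(np.argmax(spectrum)))
print("high-frequency energy fraction:",
      float(spectrum[50:].sum() / spectrum.sum()))

A smooth, faithful fit concentrates the energy of d(theta)/du in low-frequency bins; spurious curls and peaks show up as an elevated high-frequency energy fraction.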
Abstract:
In this work, we further extend the recently developed adaptive data analysis method, the Sparse Time-Frequency Representation (STFR) method. This method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms specifically for the non-periodic STFR, which extends the method to tackle problems that former STFR methods could not handle, including stability to noise and non-periodic data analysis. This is a significant improvement, since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze the convergence of this new algorithm for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves with overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further explain some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.
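To make the target representation concrete: the sketch below extracts an AM-FM pair a(t)*cos(phase(t)) from a synthetic signal using the classical Hilbert analytic signal. This is a simple stand-in for illustration only, not the sparse-optimization STFR method of the work; the test signal and sampling rate are assumptions.

# Sketch: AM-FM extraction via the Hilbert analytic signal (a classical
# stand-in for the STFR decomposition described above).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1/fs)
# synthetic AM-FM test signal: slowly varying envelope, chirping frequency
x = (1 + 0.3*np.cos(2*np.pi*1.0*t)) * np.cos(2*np.pi*(50*t + 10*t**2))

z = hilbert(x)                                 # analytic signal
envelope = np.abs(z)                           # instantaneous amplitude a(t)
inst_freq = np.gradient(np.unwrap(np.angle(z)), t) / (2*np.pi)

print(f"envelope range: {envelope.min():.2f} .. {envelope.max():.2f}")
print(f"instantaneous frequency at t = 1 s: {inst_freq[1000]:.1f} Hz "
      "(expected ~70 Hz for this chirp)")

The Hilbert approach degrades exactly where the abstract says STFR improves matters: strong intrawave frequency modulation, overlapping frequency content and noise.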
Abstract:
We present a general multistage stochastic mixed 0-1 problem where uncertainty appears everywhere: in the objective function, the constraint matrix and the right-hand side. The uncertainty is represented by a scenario tree that can be either symmetric or nonsymmetric. The stochastic model is converted into a mixed 0-1 Deterministic Equivalent Model in compact representation. Due to the difficulty of the problem, the solution offered by the stochastic model has traditionally been obtained by optimizing the expected value (i.e., the mean) of the objective function over the scenarios, usually along a time horizon. This so-called risk-neutral approach has the drawback of providing a solution that ignores the variance of the objective value across the scenarios and, thus, the occurrence of scenarios with an objective value below the expected one. Alternatively, we present several approaches for risk-averse management, namely: (i) a scenario immunization strategy; (ii) the optimization of the well-known Value-at-Risk (VaR) and several variants of the Conditional Value-at-Risk (CVaR) strategies; (iii) the optimization of the expected mean minus the weighted probability of a "bad" scenario occurring for the given solution provided by the model; (iv) the optimization of the expected value of the objective function subject to stochastic dominance constraints (SDC) for a set of profiles given by pairs of threshold objective values and either bounds on the probability of not reaching the thresholds or the expected shortfall over them; and (v) the optimization of a mixture of the VaR and SDC strategies.
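For reference, one standard linear-programming form of the CVaR strategy mentioned in (ii) is the Rockafellar-Uryasev formulation, written here for a maximization problem over a finite scenario set (matching the orientation above, where a "bad" scenario has an objective value below the expected one); the notation is generic, not the paper's:

\[
\max_{x,\;\eta,\;s \ge 0}\quad \eta \;-\; \frac{1}{1-\alpha}\sum_{\omega\in\Omega} p^{\omega} s^{\omega}
\qquad \text{s.t.}\quad s^{\omega} \;\ge\; \eta - f^{\omega}(x)\quad \forall\,\omega\in\Omega,
\]

where \(f^{\omega}(x)\) is the objective value of solution \(x\) under scenario \(\omega\), \(p^{\omega}\) its probability, and \(\alpha\) the confidence level; at the optimum, the auxiliary variable \(\eta\) equals the Value-at-Risk, so VaR and CVaR are obtained from the same linear program.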
Abstract:
With recent advances in remote sensing processing technology, it has become more feasible to begin analysis of the enormous historical archive of remotely sensed data. This historical data provides valuable information on a wide variety of topics and can influence the lives of millions of people if processed correctly and in a timely manner. One field of benefit is landslide mapping and inventory, which provides a historical reference to those who live near high-risk areas so that future disasters may be avoided. In order to properly map landslides remotely, an optimum method must first be determined. Historically, mapping has been attempted using pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image only spectrally, based on single pixel values; this produces results prone to false positives and often without meaningful objects. Recently, several reliable methods of Object Oriented Analysis (OOA) have been developed which utilize a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of these two methods on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala demonstrated the benefits of OOA methods over unsupervised classification. Overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger difference in F-score is a result of the low precision of unsupervised classification caused by poor false-positive removal, the greatest shortcoming of this method.
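For clarity, the metrics quoted above follow the usual definitions, computed from pixel-wise true/false positives and negatives; the counts in the example below are placeholders, not the San Juan La Laguna confusion matrix:

# Sketch: precision, recall, F-score and accuracy from a confusion matrix.
def scores(tp: int, fp: int, fn: int, tn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = scores(tp=820, fp=150, fn=140, tn=8890)
print(f"precision={p:.3f} recall={r:.3f} F={f1:.3f} accuracy={acc:.3f}")

Because landslide pixels are rare, the many true negatives keep accuracy high even when precision drops, which is why the F-score gap between the two methods is wider than the accuracy gap.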
Abstract:
The objective of this study is to identify the optimal designs of converging-diverging supersonic and hypersonic nozzles that achieve maximum uniformity of the thermodynamic and flow-field properties with respect to their average values at the nozzle exit. Since this is a multi-objective design optimization problem, the design variables used are parameters defining the shape of the nozzle. This work presents how variation of such parameters can influence the nozzle exit flow non-uniformities. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow-field in forty nozzle shapes, including the heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, the Response Surface Methodology (RSM) was applied to perform a multi-objective optimization. The optimization was performed with the modeFRONTIER software package using Kriging and Radial Basis Function (RBF) response surfaces. The final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
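A minimal sketch of the surrogate step described above: a Kriging (Gaussian-process) response surface fitted over shape parameters, then queried densely in place of expensive CFD runs. The objective values below are synthetic stand-ins for the FLUENT results, and scikit-learn replaces the modeFRONTIER workflow for illustration.

# Sketch: Kriging response surface over two nozzle shape parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (40, 2))        # 2 shape parameters, 40 "CFD runs"
y = np.sin(3*X[:, 0]) + (X[:, 1] - 0.4)**2 + rng.normal(0, 0.01, 40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp.fit(X, y)

# query the surrogate densely and pick the best candidate design
grid = rng.uniform(0, 1, (5000, 2))
pred, std = gp.predict(grid, return_std=True)
best = grid[np.argmin(pred)]
print(f"surrogate optimum at x = {best}, predicted f = {pred.min():.3f}")

In the multi-objective setting, one such surrogate is built per exit non-uniformity measure, and the Pareto set is extracted from the surrogate predictions before re-verification in CFD.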
Abstract:
A wide range of non-destructive testing (NDT) methods for the monitoring the health of concrete structure has been studied for several years. The recent rapid evolution of wireless sensor network (WSN) technologies has resulted in the development of sensing elements that can be embedded in concrete, to monitor the health of infrastructure, collect and report valuable related data. The monitoring system can potentially decrease the high installation time and reduce maintenance cost associated with wired monitoring systems. The monitoring sensors need to operate for a long period of time, but sensors batteries have a finite life span. Hence, novel wireless powering methods must be devised. The optimization of wireless power transfer via Strongly Coupled Magnetic Resonance (SCMR) to sensors embedded in concrete is studied here. First, we analytically derive the optimal geometric parameters for transmission of power in the air. This specifically leads to the identification of the local and global optimization parameters and conditions, it was validated through electromagnetic simulations. Second, the optimum conditions were employed in the model for propagation of energy through plain and reinforced concrete at different humidity conditions, and frequencies with extended Debye's model. This analysis leads to the conclusion that SCMR can be used to efficiently power sensors in plain and reinforced concrete at different humidity levels and depth, also validated through electromagnetic simulations. The optimization of wireless power transmission via SMCR to Wearable and Implantable Medical Device (WIMD) are also explored. The optimum conditions from the analytics were used in the model for propagation of energy through different human tissues. This analysis shows that SCMR can be used to efficiently transfer power to sensors in human tissue without overheating through electromagnetic simulations, as excessive power might result in overheating of the tissue. Standard SCMR is sensitive to misalignment; both 2-loops and 3-loops SCMR with misalignment-insensitive performances are presented. The power transfer efficiencies above 50% was achieved over the complete misalignment range of 0°-90° and dramatically better than typical SCMR with efficiencies less than 10% in extreme misalignment topologies.
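For orientation, the textbook link efficiency of a resonant inductive (SCMR-type) link at optimal load is eta = U^2 / (1 + sqrt(1 + U^2))^2 with U = k*sqrt(Q1*Q2), where k is the coil coupling coefficient and Q1, Q2 the loaded quality factors. The sketch below evaluates this standard expression; it is not necessarily the exact model derived in the dissertation, and the Q values are assumptions.

# Sketch: standard optimal-load efficiency of a resonant inductive link.
import math

def scmr_efficiency(k: float, q1: float, q2: float) -> float:
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1 + math.sqrt(1 + u**2))**2

for k in (0.001, 0.01, 0.05, 0.1):
    print(f"k={k:<6} eta={scmr_efficiency(k, 300, 300):.3f}")

The formula makes the misalignment sensitivity noted above explicit: misalignment reduces k, and the efficiency collapses once U = k*sqrt(Q1*Q2) falls much below 1.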
Abstract:
Minimization of undesirable temperature gradients in all dimensions of a planar solid oxide fuel cell (SOFC) is central to the thermal management and commercialization of this electrochemical reactor. This article explores the operating variables that effectively control the temperature gradient in a multilayer SOFC stack and presents a trade-off optimization. Three promising approaches are numerically tested via a model-based sensitivity analysis. The numerically efficient thermo-chemical model previously developed by the authors for cell-scale investigations (Tang et al. Chem. Eng. J. 2016, 290, 252-262) is integrated and extended in this work to allow further thermal studies at commercial scales. Initially, the most common approach to minimizing the stack's thermal inhomogeneity, i.e., the use of excess air, is critically assessed. Subsequently, the adjustment of inlet gas temperatures is introduced as a complementary methodology to reduce the efficiency loss due to the application of excess air. As another practical approach, regulation of the oxygen fraction in the cathode coolant stream is examined from both technical and economic viewpoints. Finally, a multiobjective optimization calculation is conducted to find an operating condition in which the stack's efficiency is maximized while its temperature gradient is minimized.
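The final trade-off step can be pictured as a non-dominated (Pareto) filter over candidate operating points, keeping only those for which no other point has both higher efficiency and a lower temperature gradient. The sketch below does this on synthetic candidates; the numbers and the anticorrelation are assumptions, not model outputs.

# Sketch: Pareto filter trading stack efficiency (maximize) against
# maximum temperature gradient (minimize), on synthetic candidates.
import numpy as np

rng = np.random.default_rng(3)
eff = rng.uniform(0.40, 0.60, 200)                     # stack efficiency
grad = 30 + 400*(0.62 - eff) + rng.normal(0, 3, 200)   # K/m, anticorrelated

def pareto_mask(eff, grad):
    """Keep points not dominated by any other (higher eff AND lower grad)."""
    keep = np.ones(len(eff), dtype=bool)
    for i in range(len(eff)):
        if ((eff > eff[i]) & (grad < grad[i])).any():
            keep[i] = False
    return keep

front = pareto_mask(eff, grad)
print(f"{front.sum()} Pareto-optimal operating points out of {len(eff)}")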
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system; hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific software application is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software.

The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution and building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
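A sketch of the second modelling route: count the comparisons insertion sort actually performs, compare with the classical average-case estimate of about n(n-1)/4, and scale by a per-comparison energy. The per-comparison energy below is a made-up placeholder, not a measured LEON3 value, and the comparison count is the simple textbook one rather than MOQA's derivation.

# Sketch: average-case comparison count of insertion sort as the basis
# of an energy model E = (number of comparisons) * (energy per comparison).
import random

def insertion_sort_comparisons(a: list) -> int:
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1                  # one comparison per loop test
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comps

n = 200
trials = [insertion_sort_comparisons(random.sample(range(10**6), n))
          for _ in range(50)]
avg = sum(trials) / len(trials)
print(f"measured avg comparisons: {avg:.0f}, n(n-1)/4 = {n*(n-1)/4:.0f}")

E_CMP_NJ = 0.8                          # assumed energy per comparison, nJ
print(f"estimated energy: {avg * E_CMP_NJ:.0f} nJ")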
Abstract:
The aim of this study was to establish guidelines for the optimization of biologic therapies for health professionals involved in the management of patients with RA, AS and PsA. Recommendations were established via consensus by a panel of experts in rheumatology and hospital pharmacy, based on an analysis of the available scientific evidence obtained from four systematic reviews and on the clinical experience of the panellists. The Delphi method was used to evaluate these recommendations, both among the panellists and among a wider group of rheumatologists. Previous concepts concerning better management of RA, AS and PsA were reviewed and, more specifically, guidelines for the optimization of the biologic therapies used to treat these diseases were formulated. Recommendations were made with the aim of establishing a plan for when and how to taper biologic treatment in patients with these diseases. The recommendations established herein aim not only to provide advice on how to improve the risk:benefit ratio and efficiency of such treatments, but also to reduce variability in daily clinical practice in the use of biologic therapies for rheumatic diseases.
Abstract:
In most agroecosystems, nitrogen (N) is the most important nutrient limiting plant growth. One management strategy that affects N cycling and N use efficiency (NUE) is conservation agriculture (CA), an agricultural system based on a combination of minimum tillage, crop residue retention and crop rotation. Available results on the optimization of NUE in CA are inconsistent, and studies that cover all three components of CA are scarce. Presently, CA is promoted in the Yaqui Valley in Northern Mexico, the country's major wheat-producing area, where from 1968 to 1995 fertilizer application rates for the cultivation of irrigated durum wheat (Triticum durum L.) yielding 6 t ha-1 increased from 80 to 250 kg N ha-1, demonstrating the high intensification potential of this region. Given major knowledge gaps on N availability in CA, this thesis summarizes the current knowledge of N management in CA and provides insights into the effects of tillage practice, residue management and crop rotation on wheat grain quality and N cycling. The major aims of the study were to identify N fertilizer application strategies that improve N use efficiency and reduce N immobilization in CA, with the ultimate goal of stabilizing cereal yields, maintaining grain quality, minimizing N losses to the environment and reducing farmers' input costs. Soil physical and chemical properties in CA were measured and compared with those in conventional systems and in permanent beds with residue burning, focusing on their relationship to plant N uptake and N cycling in the soil, and on how they are affected by tillage and by N fertilizer timing, method and dose. For N fertilizer management, we analyzed how the placement, timing and amount of N fertilizer influenced yield and quality parameters of durum and bread wheat in CA systems. Overall, grain quality parameters, in particular grain protein concentration, decreased with zero tillage and with an increasing amount of residues left on the field compared with conventional systems. The second part of the dissertation provides an overview of the methodologies applied to measure NUE and its components. We evaluated the methodology of ion exchange resin cartridges under irrigated, intensive agricultural cropping systems on Vertisols to measure nitrate leaching losses, which through drainage channels ultimately end up in the Sea of Cortez, where they lead to algal blooms. A thorough analysis of N inputs and outputs was conducted to calculate N balances in three different tillage-straw systems. As fertilizer inputs are high, N balances were positive in all treatments, indicating a risk of N leaching or volatilization during or after the cropping season and during heavy summer rainfall. Contrary to common belief, we did not find negative effects of residue burning on soil nutrient status, yield or N uptake. A labelled fertilizer experiment with urea-15N was implemented in micro-plots to measure N fertilizer recovery and the effects of residual fertilizer N in the soil from summer maize on the following winter wheat crop. The N fertilizer recovery rates obtained for maize grain were very low, averaging 11% across all treatments.
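For readers unfamiliar with the bookkeeping behind such recovery figures, the standard isotope-dilution calculation is sketched below; all numbers are placeholders chosen to reproduce a recovery of about 11%, not the study's data.

# Sketch: 15N isotope-dilution calculation of fertilizer N recovery.
def n15_recovery(plant_N_kg_ha: float,
                 ape_plant: float,       # atom% 15N excess in plant part
                 ape_fert: float,        # atom% 15N excess in fertilizer
                 fert_N_kg_ha: float) -> float:
    ndff = ape_plant / ape_fert          # fraction of plant N from fertilizer
    return 100.0 * plant_N_kg_ha * ndff / fert_N_kg_ha

# e.g. maize grain holding 55 kg N/ha at 2.5 atom% excess, from
# 250 kg N/ha of urea labelled at 5.0 atom% excess:
print(f"{n15_recovery(55.0, 2.5, 5.0, 250.0):.1f} % recovery in grain")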
Abstract:
When blood flows through small vessels, the two-phase nature of blood as a suspension of red cells (erythrocytes) in plasma cannot be neglected, and with decreasing vessel size a homogeneous continuum model becomes less adequate for describing blood flow. Following Haynes' marginal zone theory, and viewing the flow as the result of concentric laminae of fluid moving axially, the present work provides models for fluid flow in dichotomous branchings composed of larger and smaller vessels, respectively. Expressions for the branching sizes of parent and daughter vessels that provide easier flow access are obtained by means of a constrained optimization approach using Lagrange multipliers. This study shows that when blood behaves as a Newtonian fluid, the Hess-Murray law, which states that the daughter-to-parent diameter ratio must equal 2^(-1/3), is valid. However, when the nature of blood as a suspension becomes important, the expression for the optimum branching diameters of vessels is dependent on the phase separation lengths. It is also shown that the same effect occurs for the relative lengths of daughter and parent vessels. For smaller vessels (e.g., arterioles and capillaries), it is found that the daughter-to-parent diameter ratio may vary from 0.741 to 0.849, and the daughter-to-parent length ratio varies from 0.260 to 2.42. For larger vessels (e.g., arteries), the daughter-to-parent diameter ratio and the daughter-to-parent length ratio range from 0.458 to 0.819 and from 0.100 to 6.27, respectively. It is also demonstrated that the entropy generated when blood behaves as a single-phase fluid (i.e., a continuum viscous fluid) is greater than the entropy generated when the nature of blood as a suspension becomes important. Another important finding is that the manifestation of the particulate nature of blood in small vessels reduces the entropy generation due to fluid friction, thereby maintaining the flow through dichotomous branching vessels at a relatively lower cost.
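A compact sketch of the classical argument behind the Hess-Murray exponent quoted above, for the Newtonian single-phase case; the metabolic coefficient \(m\) plays the role of the Lagrange multiplier on the volume constraint. For a vessel segment of radius \(r\), length \(L\) and flow rate \(Q\), the cost of pumping (Poiseuille flow) plus maintaining the blood volume is

\[
F(r) \;=\; \frac{8\mu L Q^{2}}{\pi r^{4}} \;+\; m\,\pi r^{2} L .
\]

Setting \(\partial F/\partial r = 0\) gives

\[
\frac{32\mu L Q^{2}}{\pi r^{5}} \;=\; 2 m \pi r L
\quad\Longrightarrow\quad Q \propto r^{3},
\]

and mass conservation at a symmetric bifurcation, \(Q_{p} = 2Q_{d}\), then yields

\[
r_{p}^{3} = 2\,r_{d}^{3}
\quad\Longrightarrow\quad
\frac{r_{d}}{r_{p}} = 2^{-1/3} \approx 0.794 .
\]

The present work's two-phase results deviate from this value precisely because the marginal (cell-free) layer alters the effective resistance term in \(F(r)\) for small vessels.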