964 results for Non-convex optimization
Abstract:
Tomato (Lycopersicon esculentum Mill.) is the second most important vegetable crop worldwide and a rich source of hydrophilic (H) and lipophilic (L) antioxidants. The H fraction is constituted mainly by ascorbic acid and soluble phenolic compounds, while the L fraction contains carotenoids (mostly lycopene), tocopherols, sterols and lipophilic phenolics [1,2]. To obtain these antioxidants it is necessary to follow appropriate extraction methods and processing conditions. In this regard, this study aimed at determining the optimal extraction conditions for H and L antioxidants from a tomato surplus. A 5-level full factorial design with 4 factors (extraction time (t, 0-20 min), temperature (T, 60-180 ºC), ethanol percentage (Et, 0-100%) and solid/liquid ratio (S/L, 5-45 g/L)) was implemented, and response surface methodology was used for the analysis. Extractions were carried out in a Biotage Initiator Microwave apparatus. The concentration-time response methods of crocin and β-carotene bleaching were applied (using 96-well microplates), since they are suitable in vitro assays to evaluate the antioxidant activity of H and L matrices, respectively [3]. Measurements were carried out at intervals of 3, 5 and 10 min (initiation, propagation and asymptotic phases), during a time frame of 200 min. The parameters Pm (maximum protected substrate) and Vm (amount of protected substrate per g of extract) and the so-called IC50 were used to quantify the response. The optimum extraction conditions were as follows: t=2.25 min, T=149.2 ºC, Et=99.1% and S/L=15.0 g/L for H antioxidants; and t=15.4 min, T=60.0 ºC, Et=33.0% and S/L=15.0 g/L for L antioxidants. The proposed model was validated based on the high values of the adjusted coefficient of determination (R²adj>0.91) and on the non-significant differences between predicted and experimental values. It was also found that the antioxidant capacity of the H fraction was much higher than that of the L fraction.
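For illustration, here is a minimal sketch of the response-surface workflow this abstract describes: fit a second-order polynomial to responses measured on the 5-level factorial grid, then maximize the fitted surface within the factor bounds. The responses below are synthetic stand-ins for the measured antioxidant values (Pm, Vm, IC50), and numpy/scipy are assumed.

```python
# Hedged sketch of response-surface optimization over 4 extraction factors.
# Synthetic data for illustration only -- not the study's measurements.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Design factors: time t (min), temperature T (degC), ethanol Et (%), S/L (g/L).
bounds = [(0, 20), (60, 180), (0, 100), (5, 45)]

def quadratic_features(X):
    """Second-order RSM terms: intercept, linear, squared, 2-way interactions."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                       # linear
    cols += [X[:, i] ** 2 for i in range(k)]                  # quadratic
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Mock responses on the 5-level full factorial grid.
levels = [np.linspace(lo, hi, 5) for lo, hi in bounds]
X = np.array(np.meshgrid(*levels)).reshape(4, -1).T
y = quadratic_features(X) @ rng.normal(size=15) + rng.normal(scale=0.1, size=len(X))

# Ordinary least squares fit of the response surface.
beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

# Locate the factor settings that maximize the fitted response within bounds.
res = minimize(lambda x: -(quadratic_features(x[None, :]) @ beta)[0],
               x0=np.array([10.0, 120.0, 50.0, 25.0]), bounds=bounds)
print("fitted optimum (t, T, Et, S/L):", res.x.round(2))
```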
Abstract:
The production of natural extracts requires suitable processing conditions to maximize the preservation of the bioactive ingredients. Herein, a microwave-assisted extraction (MAE) process was optimized, by means of response surface methodology (RSM), to maximize the recovery of phenolic acids and flavonoids and obtain antioxidant ingredients from tomato. A 5-level full factorial Box-Behnken design was successfully implemented for MAE optimization, in which the processing time (t), temperature (T), ethanol concentration (Et) and solid/liquid ratio (S/L) were relevant independent variables. The proposed model was validated based on the high values of the adjusted coefficient of determination and on the non-significant differences between experimental and predicted values. The global optimum processing conditions (t=20 min; T=180 ºC; Et=0 %; and S/L=45 g/L) provided tomato extracts with high potential as nutraceuticals or as active ingredients in the design of functional foods. Additionally, the round tomato variety was highlighted as a source of added-value phenolic acids and flavonoids.
Abstract:
It is well known that most real-life problems involve uncertainty. The first part of this dissertation introduces the basic concepts and properties of Stochastic Programming, also known as Optimization under Uncertainty. Moreover, since stochastic programs are complex to compute, some alternative models are presented, such as the wait-and-see approach, the expected value problem, and the expected result of using the expected value solution. Two measures, the expected value of perfect information (EVPI) and the value of the stochastic solution (VSS), quantify how worthwhile Stochastic Programming is with respect to these simpler models. In the second part, an application that optimizes the distribution of non-perishable products, guaranteeing certain nutritional requirements at minimum cost, was designed and implemented with the modelling system GAMS and the optimizer CPLEX. It was developed within the Hazia project, managed by the Sortarazi association and associated with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
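As a worked illustration of the EVPI and VSS measures mentioned above, the hedged sketch below computes both for a toy two-stage problem (a newsvendor-style ordering decision with made-up costs and demand scenarios, not the Hazia project data):

```python
# Toy two-stage example illustrating EVPI and VSS (maximization form).
import numpy as np

c, p = 1.0, 2.5                            # unit cost, unit selling price
demands = np.array([80.0, 100.0, 120.0])   # demand scenarios
probs = np.array([0.5, 0.3, 0.2])          # scenario probabilities

def expected_profit(x):
    """Expected profit of ordering x before demand is known."""
    return float(probs @ (p * np.minimum(x, demands) - c * x))

xs = np.arange(0, 201)

# RP: here-and-now stochastic solution maximizing expected profit.
rp = max(expected_profit(x) for x in xs)

# WS: wait-and-see -- demand is known before ordering (x = d per scenario).
ws = float(probs @ ((p - c) * demands))

# EV problem: replace demand by its mean; EEV evaluates that decision
# under uncertainty (since p > c, the EV-optimal order equals mean demand).
x_ev = float(probs @ demands)
eev = expected_profit(x_ev)

print(f"EVPI = WS - RP  = {ws - rp:.2f}")   # worth of perfect information
print(f"VSS  = RP - EEV = {rp - eev:.2f}")  # worth of modelling uncertainty
```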
Abstract:
In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.
Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon that the synapse might exploit to differentiate Ca$^{2+}$ signaling that would lead to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling allow us to confront this non-ideality directly. A natural next step in understanding the chemical physics that underlies these processes is to consider \textit{in silico} methods, specifically atomistic simulation methods, that might augment our modeling efforts.
In the second part of this thesis, we use evolutionary algorithms to optimize \textit{in silico} methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis will focus on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions that are involved in ligand binding, as frequently discussed in the first part of this thesis.
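As a hedged sketch of the evolutionary-algorithm side of this work, the toy loop below evolves a parameter vector against a placeholder fitness (matching a set of hypothetical reference charges); a real charge-equilibration objective would replace it:

```python
# Minimal (mu + lambda) evolution strategy; the fitness is a stand-in,
# not an actual electronegativity-equalization / charge model.
import numpy as np

rng = np.random.default_rng(3)
ref = np.array([0.8, -0.4, -0.4])        # hypothetical target charges

def fitness(params):
    """Lower is better: squared error of predicted vs reference charges."""
    return float(np.sum((params - ref) ** 2))

mu, lam, sigma = 5, 20, 0.3              # parents, offspring, mutation scale
pop = rng.normal(size=(mu, ref.size))

for gen in range(100):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = parents + rng.normal(scale=sigma, size=parents.shape)
    pool = np.vstack([pop, offspring])               # (mu + lambda) selection
    pop = pool[np.argsort([fitness(p) for p in pool])[:mu]]

print("best parameters:", pop[0].round(3), "error:", round(fitness(pop[0]), 5))
```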
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches that improve on the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection while leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage; a comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of the McCormick (1976) inequalities to re-express constraints that involve products of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
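The McCormick (1976) linearization cited in Chapter 4 is exact for products of binary variables; a minimal self-check sketch (pure Python, illustrative only):

```python
# For x, y in {0,1}, the product z = x*y is captured exactly by the
# linear McCormick constraints: z <= x, z <= y, z >= x + y - 1, z >= 0.
from itertools import product

for x, y in product((0, 1), repeat=2):
    feasible_z = [z for z in (0, 1)
                  if z <= x and z <= y and z >= x + y - 1]
    assert feasible_z == [x * y], (x, y, feasible_z)
print("McCormick envelope is exact on binary points")
```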
Abstract:
In the first part of this thesis we search for beyond-the-Standard-Model physics through a search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at center-of-mass energy √s=8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb⁻¹.
In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.
The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with the typical configurations from Standard Model production of the Higgs boson.
We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation with a p-value of 10⁻³ and a local significance of 3.35σ. This background prediction comes from 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it provides a very compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, the ground state of which is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and train the classifier. We find that we are able to do this successfully in fewer than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
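A hedged sketch of the general idea behind this second part: selecting a subset of weak classifiers can be cast as finding the ground state of a QUBO/Ising Hamiltonian. The data, penalty term, and brute-force ground-state search below are illustrative stand-ins for the simulated di-photon events and the quantum annealer:

```python
# Toy QUBO formulation of weak-classifier selection; a real study would
# submit this Hamiltonian to an annealer rather than enumerate states.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_events, n_weak = 200, 6
c = rng.choice([-1.0, 1.0], size=(n_events, n_weak))   # weak-classifier outputs
y = rng.choice([-1.0, 1.0], size=n_events)             # event labels

# Quadratic term: correlations between weak classifiers (redundancy penalty);
# linear term: anti-correlation with the true label plus a sparsity weight lam.
lam = 0.1
Q = (c.T @ c) / n_events
linear = -2.0 * (c.T @ y) / n_events + lam

def energy(s):
    s = np.asarray(s, dtype=float)
    return float(s @ Q @ s + linear @ s)

# Ground state = which weak classifiers to keep; enumerable for small n_weak.
best = min(product((0, 1), repeat=n_weak), key=energy)
print("selected weak classifiers:", best, "energy:", round(energy(best), 3))
```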
Abstract:
Protective relaying comprises several procedures and techniques focused on keeping the power system operating safely during and after undesired and abnormal network conditions, mostly caused by faults. The overcurrent relay is one of the oldest protective relays, and its operating principle is straightforward: when the measured current is greater than a specified magnitude, the protection trips. Fewer variables are required from the system in comparison with other protections, making the overcurrent relay the simplest, and also the most difficult, protection to coordinate; its simplicity is reflected in low implementation, operation, and maintenance costs. The counterpart is the increased tripping time offered by this kind of relay, especially for faults located far from the relay; this problem can be particularly accentuated when standardized inverse-time curves are used or when only maximum fault currents are considered to carry out relay coordination. Although these limitations have caused the overcurrent relay to be slowly relegated and replaced by more sophisticated protection principles, it is still widely applied in subtransmission, distribution, and industrial systems. In this work, the use of non-standardized inverse-time curves, the modeling and implementation of optimization algorithms capable of carrying out the coordination process, the use of different levels of short-circuit current, and the inclusion of distance relays to replace insensitive overcurrent ones are the methodologies proposed to improve overcurrent relay performance. These techniques may transform the typical overcurrent relay into a more sophisticated one without changing its fundamental principles and advantages. Consequently, a more secure and still economical alternative can be obtained, increasing its area of application.
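For context, the standardized inverse-time curves mentioned here follow the IEC 60255 characteristic t = TDS·k/((I/I_pickup)^α − 1); the sketch below evaluates it for a hypothetical main/backup relay pair (pickup currents and dial settings are illustrative, not a coordinated design):

```python
# IEC 60255 inverse-time overcurrent characteristics with the standard
# curve constants; relay settings and fault currents are illustrative.
CURVES = {                      # (k, alpha) per IEC 60255
    "standard inverse": (0.14, 0.02),
    "very inverse": (13.5, 1.0),
    "extremely inverse": (80.0, 2.0),
}

def trip_time(i_fault, i_pickup, tds, curve="standard inverse"):
    k, alpha = CURVES[curve]
    m = i_fault / i_pickup
    if m <= 1.0:
        return float("inf")     # below pickup: the relay never trips
    return tds * k / (m ** alpha - 1.0)

# Coordination check: the downstream (main) relay should trip faster than
# the upstream backup by a coordination time interval, e.g. ~0.3 s.
for i_fault in (800.0, 2000.0, 5000.0):
    t_main = trip_time(i_fault, i_pickup=400.0, tds=0.1)
    t_backup = trip_time(i_fault, i_pickup=600.0, tds=0.3)
    print(f"I={i_fault:6.0f} A  main={t_main:5.2f} s  backup={t_backup:5.2f} s")
```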
Abstract:
Recently, the interest of the automotive market in hybrid vehicles has increased due to more restrictive pollutant emissions legislation and to the necessity of decreasing fossil fuel consumption, since such a solution allows a consistent improvement of the vehicle's global efficiency. The term hybridization refers to the energy flow in the powertrain of a vehicle: a standard vehicle usually has only one energy source and one energy tank, whereas a hybrid vehicle has at least two energy sources. In most cases, the prime mover is an internal combustion engine (ICE) while the auxiliary energy source can be mechanical, electrical, pneumatic or hydraulic. The control unit of a hybrid vehicle is expected to use the ICE in high-efficiency working zones and to shut it down when it is more convenient, while using the electric machine (EMG) at partial loads and as a fast torque response during transients. However, the battery state of charge may represent a limitation for such a strategy. That is the reason why, in most cases, energy management strategies are based on State of Charge (SOC) control. Several studies have been conducted on this topic and many different approaches have been illustrated. The purpose of this dissertation is to develop an online (usable on-board) control strategy in which the operating modes are defined using an instantaneous optimization method that minimizes the equivalent fuel consumption of a hybrid electric vehicle. The equivalent fuel consumption is calculated by taking into account the total energy used by the hybrid powertrain during the propulsion phases. The first section presents the characteristics of hybrid vehicles. The second chapter describes the global model, with a particular focus on the energy management strategies usable for the supervisory control of such a powertrain. The third chapter shows the performance of the implemented controller on a NEDC cycle compared with that obtained with the original control strategy.
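A minimal sketch of the kind of instantaneous optimization described here (an equivalent-consumption-minimization step): at each instant the power split is chosen to minimize fuel flow plus an equivalence-factor-weighted battery term. The engine map, equivalence factor, and power limits below are toy assumptions, not the dissertation's vehicle model:

```python
# Toy instantaneous ECMS step: choose the electric-machine power that
# minimizes equivalent fuel rate m_eq = m_fuel + s * P_batt / LHV.
import numpy as np

LHV = 42.5e6          # fuel lower heating value, J/kg
s_factor = 2.5        # equivalence factor converting battery power to fuel

def fuel_rate(p_ice):
    """Toy engine fuel-rate map (kg/s): flat ~35% efficiency under load."""
    return np.where(p_ice > 0, p_ice / (0.35 * LHV), 0.0)

def ecms_split(p_request, p_em_max=30e3):
    """Electric-machine power (positive = discharge) minimizing m_eq."""
    p_em = np.linspace(-p_em_max, p_em_max, 601)   # negative = recharging
    p_ice = np.clip(p_request - p_em, 0.0, None)
    m_eq = fuel_rate(p_ice) + s_factor * p_em / LHV
    best = np.argmin(m_eq)
    return p_ice[best], p_em[best]

p_ice, p_em = ecms_split(p_request=40e3)   # 40 kW traction request
print(f"engine {p_ice/1e3:.1f} kW, motor {p_em/1e3:.1f} kW")
```

In a full controller, the equivalence factor would be adapted online from the SOC feedback rather than held constant as it is in this sketch.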
Abstract:
The objective of this study is to identify the optimal designs of converging-diverging supersonic and hypersonic nozzles that perform at maximum uniformity of thermodynamic and flow-field properties with respect to their average values at the nozzle exit. This is a multi-objective design optimization problem in which the design variables are parameters defining the shape of the nozzle. This work presents how variation of such parameters can influence the nozzle exit flow non-uniformities. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow-field in forty nozzle shapes, including a heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, Response Surface Methodology (RSM) was applied to perform a multi-objective optimization. The optimization was performed with the modeFRONTIER software package using Kriging and Radial Basis Function (RBF) response surfaces. The final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
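As a sketch of the surrogate-based step in such a workflow, the following fits an RBF response surface (scipy's RBFInterpolator, assumed available) to a placeholder objective standing in for the FLUENT results, then searches it on a dense grid:

```python
# RBF response surface over 2 hypothetical nozzle-shape parameters;
# the "CFD" objective below is a stand-in, not actual FLUENT output.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

def cfd_nonuniformity(x):
    """Placeholder objective: exit-flow non-uniformity vs shape parameters."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.05 * np.sin(8 * x[..., 0])

X = rng.uniform(0, 1, size=(40, 2))        # 40 sampled nozzle shapes
y = cfd_nonuniformity(X)

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# Cheaply evaluate the surrogate on a dense grid to locate a candidate optimum.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101)),
                axis=-1).reshape(-1, 2)
best = grid[np.argmin(surrogate(grid))]
print("surrogate minimum near shape parameters:", best.round(3))
```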
Abstract:
A wide range of non-destructive testing (NDT) methods for monitoring the health of concrete structures has been studied for several years. The recent rapid evolution of wireless sensor network (WSN) technologies has resulted in the development of sensing elements that can be embedded in concrete to monitor the health of infrastructure and to collect and report valuable related data. Such a monitoring system can potentially decrease the high installation time and reduce the maintenance cost associated with wired monitoring systems. The monitoring sensors need to operate for a long period of time, but sensor batteries have a finite life span. Hence, novel wireless powering methods must be devised. The optimization of wireless power transfer via Strongly Coupled Magnetic Resonance (SCMR) to sensors embedded in concrete is studied here. First, we analytically derive the optimal geometric parameters for transmission of power in air. This leads specifically to the identification of the local and global optimization parameters and conditions, which were validated through electromagnetic simulations. Second, the optimum conditions were employed in the model for propagation of energy through plain and reinforced concrete at different humidity conditions and frequencies with an extended Debye model. This analysis leads to the conclusion that SCMR can be used to efficiently power sensors in plain and reinforced concrete at different humidity levels and depths, which was also validated through electromagnetic simulations. The optimization of wireless power transmission via SCMR to Wearable and Implantable Medical Devices (WIMDs) is also explored. The optimum conditions from the analytical work were used in the model for propagation of energy through different human tissues. Electromagnetic simulations show that SCMR can be used to efficiently transfer power to sensors in human tissue without overheating, as excessive power might result in overheating of the tissue. Standard SCMR is sensitive to misalignment; both 2-loop and 3-loop SCMR designs with misalignment-insensitive performance are presented. Power transfer efficiencies above 50% were achieved over the complete misalignment range of 0°-90°, dramatically better than typical SCMR, whose efficiency drops below 10% in extreme misalignment topologies.
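For reference, a hedged sketch of the textbook efficiency limit of a magnetically coupled resonant link, η = u²/(1+√(1+u²))² with figure of merit u = k√(Q₁Q₂); the coupling coefficients and quality factors below are illustrative, not the concrete-embedded measurements:

```python
# Maximum achievable link efficiency of a coupled resonant pair
# as a function of coupling k and coil quality factors Q1, Q2.
import math

def link_efficiency(k, q1, q2):
    u2 = (k ** 2) * q1 * q2          # squared figure of merit u^2
    return u2 / (1 + math.sqrt(1 + u2)) ** 2

for k in (0.001, 0.01, 0.05, 0.2):   # weaker coupling at greater depth
    print(f"k={k:5.3f}  eta={link_efficiency(k, q1=300, q2=300):.3f}")
```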
Abstract:
Background: Falls are common events in older people, causing considerable morbidity and mortality. Non-pharmacological interventions are an important approach to prevent falls. There are a large number of systematic reviews of non-pharmacological interventions, whose evidence needs to be synthesized in order to facilitate evidence-based clinical decision making. Objectives: To systematically examine reviews and meta-analyses that evaluated non-pharmacological interventions to prevent falls in older adults in the community, care facilities and hospitals. Methods: We searched the electronic databases Pubmed, the Cochrane Database of Systematic Reviews, EMBASE, CINAHL, PsycINFO, PEDRO and TRIP from January 2009 to March 2015 for systematic reviews that included at least one comparative study evaluating any non-pharmacological intervention to prevent falls amongst older adults. The quality of the reviews was assessed using AMSTAR, and the ProFaNE taxonomy was used to organize the interventions. Results: Fifty-nine systematic reviews were identified, covering single, multiple and multifactorial non-pharmacological interventions to prevent falls in older people. The most frequent ProFaNE-defined interventions were exercises, either alone or combined with other interventions, followed by environment/assistive technology interventions comprising environmental modifications, assistive and protective aids, staff education and vision assessment/correction. Knowledge, delivered as patient education, was the third principal class of interventions. Exercise and multifactorial interventions were the most effective treatments to reduce falls in older adults, although not all types of exercise were equally effective in all subjects and in all settings. Effective exercise programs combined balance and strength training. Reviews with a higher AMSTAR score were more likely to contain more primary studies, to be updated and to perform meta-analysis. Conclusions: This overview of reviews of non-pharmacological interventions to prevent falls in older people in different settings supports clinicians and other healthcare workers with clinical decision-making by providing a comprehensive perspective of the findings.
Abstract:
The variability in non-dispatchable power generation raises important challenges for the integration of renewable energy sources into the electricity power grid. This paper addresses the coordinated trading of wind and photovoltaic energy to mitigate risks due to wind and solar power variability, electricity prices, and the financial penalties arising out of generation shortfall and surplus. The problem of wind-photovoltaic coordinated trading is formulated as a linear programming problem. The goal is to obtain the optimal bidding strategy that maximizes the total profit. The wind-photovoltaic coordinated operation is modeled and compared with the uncoordinated operation. A comparison of the models and relevant conclusions are drawn from an illustrative case study of the Iberian day-ahead electricity market.
Abstract:
The variability in non-dispatchable power generation raises important challenges for the integration of renewable energy sources into the electricity power grid. This paper addresses the coordinated trading of wind and photovoltaic energy, assisted by a cyber-physical system for supporting management decisions, to mitigate risks due to wind and solar power variability, electricity prices, and the financial penalties arising out of generation shortfall and surplus. The problem of wind-photovoltaic coordinated trading is formulated as a stochastic linear programming problem. The goal is to obtain the optimal bidding strategy that maximizes the total profit. The wind-photovoltaic coordinated operation is modeled and compared with the uncoordinated operation. A comparison of the models and relevant conclusions are drawn from an illustrative case study of the Iberian day-ahead electricity market.
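A hedged sketch of the coordinated-versus-uncoordinated comparison in these two abstracts: pick the day-ahead bid maximizing expected profit under scenario-based imbalance penalties, once for the aggregate wind+PV producer and once separately for each technology. Prices and scenarios are illustrative, not Iberian-market data:

```python
# Toy single-hour bidding problem with dual imbalance pricing:
# shortfalls buy back above the day-ahead price, surpluses sell below it.
import numpy as np

price_da, price_short, price_surplus = 50.0, 70.0, 30.0   # EUR/MWh
# Equiprobable joint scenarios of wind and PV output (MWh in one hour);
# the two sources are assumed negatively correlated.
wind = np.array([10.0, 25.0, 40.0])
pv   = np.array([30.0, 20.0,  5.0])
probs = np.full(3, 1 / 3)

def expected_profit(bid, production):
    shortfall = np.maximum(bid - production, 0.0)
    surplus = np.maximum(production - bid, 0.0)
    return float(probs @ (price_da * bid
                          - price_short * shortfall
                          + price_surplus * surplus))

bids = np.linspace(0, 80, 801)

best_joint = max(expected_profit(b, wind + pv) for b in bids)
best_split = (max(expected_profit(b, wind) for b in bids)
              + max(expected_profit(b, pv) for b in bids))
print(f"coordinated: {best_joint:.1f} EUR  uncoordinated: {best_split:.1f} EUR")
```

With negatively correlated scenarios the aggregate output is less variable, so the coordinated bid incurs smaller expected imbalance penalties, which is the qualitative effect both papers exploit.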
Abstract:
The use of atmospheric pressure plasmas for thin film deposition on thermo-sensitive materials is currently one of the main challenges for the plasma scientific community. Despite the growing interest in this field, the knowledge gap between gas-phase reaction mechanisms and thin film properties is still one of the most important barriers to overcome for a complete understanding of the process. In this work, thin-film surface characterization techniques, combined with passive and active gas-phase diagnostic methods, were used to provide a comprehensive study of the Ar/TEOS deposition process assisted by an atmospheric pressure plasma jet. SiO2-based thin films exhibiting a well-defined chemistry, a good morphological structure and high uniformity were studied in detail by FTIR, XPS, AFM and SEM analysis. Furthermore, non-intrusive spectroscopy techniques (OES, filter imaging) and laser spectroscopic methods (Rayleigh scattering, LIF and TALIF) were employed to shed light on the complexity of the gas-phase mechanisms involved in the deposition process and to discuss the influence of the TEOS admixture on gas temperature, electron density and the spatio-temporal behaviour of active species. The poly-diagnostic approach proposed in this work opens interesting perspectives both in terms of process control and in the optimization of thin-film performance.
Abstract:
Since the last century, the rising interest in value-added and advanced functional materials has spurred ceaseless development of industrial processes and applications. Among the emerging technologies, thanks to their unique features and versatility in terms of supported processes, non-equilibrium plasma discharges appear as a key solvent-free, high-throughput and cost-efficient technique. Nevertheless, applied research studies are needed to address plasma potentialities, optimizing devices and processes for future industrial applications. In this framework, the aim of this dissertation is to report on the activities carried out and the results achieved in the development and optimization of plasma techniques for nanomaterial synthesis and processing to be applied in the biomedical field. In the first section, the design and investigation of a plasma-assisted process for the production of silver (Ag) nanostructured multilayer coatings exhibiting anti-biofilm and anti-clot properties is described. With the aim of enabling in-situ and on-demand deposition of Ag nanoparticles (NPs), the optimization of a continuous in-flight aerosol process for particle synthesis is reported. The stability and promising biological performance of the deposited coatings spurred further investigation through in-vitro and in-vivo tests, whose results are reported and discussed. With the aim of addressing the unanswered questions and tuning NP functionalities, the second section concerns the study of silver-containing droplet conversion in a flow-through plasma reactor. The presented results, obtained by combining different analysis techniques, support a formation mechanism based on droplet-to-particle conversion driven by plasma-induced precursor reduction. Finally, the third section deals with the development of a simulative and experimental approach used to investigate in-situ droplet evaporation inside the plasma discharge, addressing the main contributions to liquid evaporation from the perspective of industrial process scale-up.