975 results for "Non-linear optimization"

Relevance: 30.00%

Abstract:

Rolling Isolation Systems provide a simple and effective means of protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. This mathematical model is then used to find the bowl profile that minimizes response acceleration subject to a displacement constraint.
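Gauss's Principle reduces each time step to a linearly constrained quadratic minimization, which can be solved as a KKT linear system. The sketch below is a generic illustration with a hypothetical mass matrix `M`, applied force `f`, and acceleration-level constraints `A a = b`; it is not the thesis's rolling-platform model:

```python
import numpy as np

def gauss_principle_accel(M, f, A, b):
    """Accelerations minimizing the Gauss constraint functional
    G(a) = (a - M^{-1} f)^T M (a - M^{-1} f)  subject to  A a = b,
    obtained by solving the KKT linear system
    [M A^T; A 0] [a; lam] = [f; b]."""
    n, m = M.shape[0], A.shape[0]
    K = np.block([[M, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([f, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # discard the Lagrange multipliers
```

For the rolling platform, `A` and `b` would encode the nonholonomic rolling constraints differentiated to acceleration level.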

Relevance: 30.00%

Abstract:

This thesis deals with the evaporation of non-ideal liquid mixtures using a multicomponent mass transfer approach. It develops the concept of evaporation maps as a convenient way of representing the dynamic composition changes of ternary mixtures during an evaporation process. Evaporation maps represent the residual composition of evaporating ternary non-ideal mixtures over the full range of composition, and are analogous to the commonly-used residue curve maps of simple distillation processes. The evaporation process initially considered in this work involves gas-phase limited evaporation from a liquid or wetted-solid surface, over which a gas flows at known conditions. Evaporation may occur into a pure inert gas, or into one pre-loaded with a known fraction of one of the ternary components. To explore multicomponent mass-transfer effects, a model is developed that uses an exact solution to the Maxwell-Stefan equations for mass transfer in the gas film, with a lumped approach applied to the liquid phase. Solutions to the evaporation model take the form of trajectories in temperature-composition space, which are then projected onto a ternary diagram to form the map. Novel algorithms are developed for computation of pseudo-azeotropes in the evaporating mixture, and for calculation of the multicomponent wet-bulb temperature at a given liquid composition. A numerical continuation method is used to track the bifurcations which occur in the evaporation maps, where the composition of one component of the pre-loaded gas is the bifurcation parameter. The bifurcation diagrams can in principle be used to determine the required gas composition to produce a specific terminal composition in the liquid. A simple homotopy method is developed to track the locations of the various possible pseudo-azeotropes in the mixture. The stability of pseudo-azeotropes in the gas-phase limited case is examined using a linearized analysis of the governing equations.
Algorithms for the calculation of separation boundaries in the evaporation maps are developed using an optimization-based method, as well as a method employing eigenvectors derived from the linearized analysis. The flexure of the wet-bulb temperature surface is explored, and it is shown how evaporation trajectories cross ridges and valleys, so that ridges and valleys of the surface do not coincide with separation boundaries. Finally, the assumption of gas-phase limited mass transfer is relaxed, by employing a model that includes diffusion in the liquid phase. A finite-volume method is used to solve the system of partial differential equations that results. The evaporation trajectories for the distributed model reduce to those of the lumped (gas-phase limited) model as the diffusivity in the liquid increases; under the same gas-phase conditions the permissible terminal compositions of the distributed and lumped models are the same.
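The residue-curve analogy mentioned above can be illustrated for a hypothetical ideal ternary mixture with constant relative volatilities; this toy integration of the classical residue-curve equation dx_i/dξ = x_i - y_i is not the Maxwell-Stefan evaporation model developed in the thesis:

```python
import numpy as np

def residue_trajectory(x0, alpha, steps=2000, h=1e-3):
    """Forward-Euler integration of the simple-distillation residue curve
    dx_i/dxi = x_i - y_i, with vapor composition from constant relative
    volatilities alpha (an idealized stand-in for the real VLE model)."""
    alpha = np.asarray(alpha, float)
    x = np.array(x0, float)
    traj = [x.copy()]
    for _ in range(steps):
        y = alpha * x / np.dot(alpha, x)   # ideal vapor composition
        x = x + h * (x - y)                # residue moves toward heavy ends
        x = np.clip(x, 0.0, 1.0)
        x /= x.sum()                       # renormalize mole fractions
        traj.append(x.copy())
    return np.array(traj)
```

The trajectory depletes the most volatile component and enriches the least volatile one, which is the qualitative behavior the evaporation maps generalize.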

Relevance: 30.00%

Abstract:

This thesis presents an optimization activity for unconventional airship shapes, aimed at enhancing selected performance figures. The implemented optimization loop comprises the automatic drawing of the airship in a CAD environment, export to STL format, processing of the model to reduce the number of triangles in the mesh, evaluation of the added masses of the configuration, an approximate estimate of the airship's aerodynamics and, finally, the computation of the performance figure of interest. The thesis also describes a heuristic optimization code (Particle Swarm Optimization) that is placed in a loop with the preceding computation cycle, and a case study that demonstrates the functionality of the methodology.
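A minimal Particle Swarm Optimization loop of the kind coupled to such a design-evaluation cycle might look as follows; the objective and bounds are generic placeholders, and the airship-specific CAD/aerodynamics evaluation is not reproduced here:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: inertia w, cognitive pull c1 toward each particle's
    best, social pull c2 toward the global best. f maps a 1-D array to a
    scalar cost to be minimized."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    d = len(lo)
    x = rng.uniform(lo, hi, (n_particles, d))
    v = np.zeros((n_particles, d))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles inside bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In the thesis's loop, `f` would wrap the whole CAD-to-performance evaluation chain.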

Relevance: 30.00%

Abstract:

The aim of this study was to optimize the aqueous extraction conditions for the recovery of phenolic compounds and antioxidant capacity from lemon pomace using response surface methodology. An experiment based on a Box-Behnken design was conducted to analyse the effects of temperature, time and sample-to-water ratio on the extraction of total phenolic compounds (TPC), total flavonoids (TF), proanthocyanidins and antioxidant capacity. The sample-to-solvent ratio had a negative effect on all the dependent variables, while extraction temperature and time had a positive effect only on TPC yields and ABTS antioxidant capacity. The optimal extraction conditions were 95 °C, 15 min, and a sample-to-solvent ratio of 1:100 g/ml. Under these conditions, the aqueous extracts had the same TPC and TF contents, as well as the same antioxidant capacity, as methanol extracts obtained by sonication. Therefore, these conditions could be applied for further extraction and isolation of phenolic compounds from lemon pomace.
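Fitting the second-order polynomial model that underlies response surface methodology can be sketched with ordinary least squares; the factor data below are hypothetical, not the lemon-pomace dataset:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of the second-order RSM model
    y = b0 + sum_i b_i x_i + sum_i b_ii x_i^2 + sum_{i<j} b_ij x_i x_j.
    Returns the coefficient vector and the design matrix."""
    X = np.asarray(X, float)
    n, k = X.shape
    cols = [np.ones(n)]                                  # intercept
    cols += [X[:, i] for i in range(k)]                  # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]             # quadratic terms
    cols += [X[:, i] * X[:, j]                           # interactions
             for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A
```

With a Box-Behnken design, `X` would hold the coded factor levels and `y` the measured responses (e.g. TPC yield).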

Relevance: 30.00%

Abstract:

The selection of a subset of requirements from all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements is then developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe the problem formally, extending an earlier formulation, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance of the Ant Colony System is compared with that of the Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm, by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
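A toy ant-colony loop for requirement selection, maximizing customer value under an effort budget, can be sketched as below; the construction rule, pheromone update and instances are simplified illustrations, not the paper's Ant Colony System formulation:

```python
import random

def aco_select(values, costs, budget, ants=20, iters=100, rho=0.1,
               beta=2.0, seed=1):
    """Toy ACO for the requirement-selection problem: each ant builds a
    feasible subset, guided by pheromone tau and the value/cost heuristic;
    the best-so-far subset reinforces its pheromone."""
    random.seed(seed)
    n = len(values)
    tau = [1.0] * n                          # pheromone per requirement
    best_set, best_val = [], 0.0
    for _ in range(iters):
        for _ in range(ants):
            order = list(range(n))
            random.shuffle(order)
            chosen, spent, val = [], 0.0, 0.0
            for i in order:
                if spent + costs[i] <= budget:
                    attract = tau[i] * (values[i] / costs[i]) ** beta
                    if random.random() < attract / (1.0 + attract):
                        chosen.append(i)
                        spent += costs[i]
                        val += values[i]
            if val > best_val:
                best_set, best_val = chosen, val
        tau = [(1.0 - rho) * t for t in tau]  # evaporation
        for i in best_set:                    # reinforce best-so-far
            tau[i] += rho * best_val
    return sorted(best_set), best_val
```

The paper's multi-objective setting (value versus effort) would replace the single budget with a Pareto archive of efficient subsets.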

Relevance: 30.00%

Abstract:

A detailed description of the low-energy dynamics of multipartite entanglement is provided for harmonic systems in a wide variety of dissipative scenarios. Without making any central approximation, this description rests mainly on a reasonable set of hypotheses about the environment and the system-environment interaction, both consistent with a linear analysis of the dissipative dynamics. In the first part, an inseparability criterion is derived that can detect the k-partite entanglement of a broad class of Gaussian and non-Gaussian states in continuous-variable systems. This criterion is used to monitor the transient dynamics of entanglement, showing that non-Gaussian states can be as robust against dissipative effects as Gaussian ones. Special attention is devoted to the stationary dynamics of entanglement between three oscillators interacting with the same environment, or with different environments at different temperatures. This study contributes to elucidating the role of quantum correlations in the behaviour of energy currents.

Relevance: 30.00%

Abstract:

Tomato (Lycopersicon esculentum Mill.) is the second most important vegetable crop worldwide and a rich source of hydrophilic (H) and lipophilic (L) antioxidants. The H fraction is constituted mainly by ascorbic acid and soluble phenolic compounds, while the L fraction contains carotenoids (mostly lycopene), tocopherols, sterols and lipophilic phenolics [1,2]. To obtain these antioxidants it is necessary to follow appropriate extraction methods and processing conditions. In this regard, this study aimed at determining the optimal extraction conditions for H and L antioxidants from a tomato surplus. A 5-level full factorial design with 4 factors (extraction time (t, 0-20 min), temperature (T, 60-180 °C), ethanol percentage (Et, 0-100%) and solid/liquid ratio (S/L, 5-45 g/L)) was implemented and the response surface methodology used for analysis. Extractions were carried out in a Biotage Initiator Microwave apparatus. The concentration-time response methods of crocin and β-carotene bleaching were applied (using 96-well microplates), since they are suitable in vitro assays to evaluate the antioxidant activity of H and L matrices, respectively [3]. Measurements were carried out at intervals of 3, 5 and 10 min (initiation, propagation and asymptotic phases), during a time frame of 200 min. The parameters Pm (maximum protected substrate) and Vm (amount of protected substrate per g of extract) and the so-called IC50 were used to quantify the response. The optimum extraction conditions were as follows: t=2.25 min, T=149.2 °C, Et=99.1% and S/L=15.0 g/L for H antioxidants; and t=15.4 min, T=60.0 °C, Et=33.0% and S/L=15.0 g/L for L antioxidants. The proposed model was validated based on the high values of the adjusted coefficient of determination (R²adj>0.91) and on the non-significant differences between predicted and experimental values. It was also found that the antioxidant capacity of the H fraction was much higher than that of the L fraction.

Relevance: 30.00%

Abstract:

The production of natural extracts requires suitable processing conditions to maximize the preservation of the bioactive ingredients. Herein, a microwave-assisted extraction (MAE) process was optimized, by means of response surface methodology (RSM), to maximize the recovery of phenolic acids and flavonoids and obtain antioxidant ingredients from tomato. A 5-level full factorial Box-Behnken design was successfully implemented for MAE optimization, in which the processing time (t), temperature (T), ethanol concentration (Et) and solid/liquid ratio (S/L) were relevant independent variables. The proposed model was validated based on the high values of the adjusted coefficient of determination and on the non-significant differences between experimental and predicted values. The global optimum processing conditions (t=20 min; T=180 ºC; Et=0 %; and S/L=45 g/L) provided tomato extracts with high potential as nutraceuticals or as active ingredients in the design of functional foods. Additionally, the round tomato variety was highlighted as a source of added-value phenolic acids and flavonoids.
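Once the second-order model is fitted, the optimum processing conditions are located at the stationary point of the response surface (canonical analysis). The sketch below uses hypothetical coefficients, not the tomato model's values:

```python
import numpy as np

def stationary_point(b, B):
    """Stationary point of the second-order model y = b0 + b.x + x^T B x:
    set grad y = b + 2 B x = 0 and solve for x."""
    return np.linalg.solve(2.0 * B, -np.asarray(b, float))

def is_maximum(B):
    """The stationary point is a maximum iff B is negative definite."""
    return bool(np.all(np.linalg.eigvalsh(B) < 0))
```

If `is_maximum` is false (a saddle or minimum), RSM practice is to search the design boundary instead, which is consistent with optima reported at factor-range limits such as t=20 min or T=180 ºC.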

Relevance: 30.00%

Abstract:

It is known that most real-life problems involve uncertainty. In the first part of the dissertation, the basic concepts and properties of Stochastic Programming, also known as Optimization under Uncertainty, are introduced. Moreover, since stochastic programs are complex to compute, some simpler models are also presented: wait-and-see, the expected-value problem, and the expected result of using the expected-value solution. The expected value of perfect information (EVPI) and the value of the stochastic solution (VSS) quantify the benefit of Stochastic Programming with respect to these simpler models. In the second part, an application that optimizes the distribution of non-perishable products, guaranteeing certain nutritional requirements at minimum cost, was designed and implemented with the GAMS modelling system and the CPLEX optimizer. It was developed within the Hazia project, managed by the Sortarazi association and associated with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
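The EVPI and VSS measures can be illustrated on a toy two-scenario newsvendor problem; this is a hedged illustration of the definitions, not the Hazia distribution model:

```python
def expected_cost(x, demands, probs, c=1.0, p=3.0):
    """Order x units at unit cost c; pay penalty p per unit of unmet demand."""
    return sum(pr * (c * x + p * max(d - x, 0.0))
               for d, pr in zip(demands, probs))

def evpi_vss(demands, probs, c=1.0, p=3.0):
    """EVPI = RP - WS and VSS = EEV - RP.
    RP: optimum of the recourse (stochastic) problem; WS: expected value of
    the wait-and-see solutions; EEV: expected cost of using the
    expected-value solution. Since the cost is piecewise linear and p > c,
    the RP optimum lies at one of the demand values."""
    RP = min(expected_cost(x, demands, probs, c, p) for x in demands)
    WS = sum(pr * min(c * x + p * max(d - x, 0.0) for x in demands)
             for d, pr in zip(demands, probs))
    d_mean = sum(d * pr for d, pr in zip(demands, probs))
    EEV = expected_cost(d_mean, demands, probs, c, p)
    return RP - WS, EEV - RP
```

Both measures are nonnegative: EVPI bounds what perfect forecasts would be worth, and VSS is the cost of ignoring uncertainty.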

Relevance: 30.00%

Abstract:

In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.

Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon the synapse might exploit to differentiate Ca²⁺ signaling that leads to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to confront this non-ideality directly. A natural next step toward understanding the chemical physics underlying these processes is to consider in silico methods, specifically atomistic simulation, that might augment our modeling efforts.
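A minimal Gillespie stochastic simulation of a reversible dimerization illustrates the kind of discrete, small-copy-number treatment in which stochastic and deterministic descriptions can diverge; it is a toy example, not the thesis's synaptic model:

```python
import random

def gillespie_dimer(nA=10, k1=1.0, k2=1.0, t_end=5.0, seed=0):
    """Gillespie SSA for A + A <-> A2 starting from nA monomers.
    Returns the final monomer and dimer counts."""
    random.seed(seed)
    A, D, t = nA, 0, 0.0
    while t < t_end:
        a1 = k1 * A * (A - 1) / 2.0     # dimerization propensity
        a2 = k2 * D                     # dissociation propensity
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += random.expovariate(a0)     # exponential waiting time
        if random.random() < a1 / a0:
            A -= 2; D += 1
        else:
            A += 2; D -= 1
    return A, D
```

At small `nA` the quadratic propensity `A*(A-1)/2` differs from the mass-action term `A**2/2`, which is one source of the deterministic/stochastic divergence discussed above.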

In the second part of this thesis, we use evolutionary algorithms to optimize in silico methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis focuses on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions involved in ligand binding, as discussed in the first part of this thesis.

Relevance: 30.00%

Abstract:

We propose a positive, accurate moment closure for linear kinetic transport equations based on a filtered spherical harmonic (FP_N) expansion in the angular variable. The FP_N moment equations are accurate approximations to linear kinetic equations, but they are known to suffer from the occurrence of unphysical, negative particle concentrations. The new positive filtered P_N (FP_N+) closure is developed to address this issue. The FP_N+ closure approximates the kinetic distribution by a spherical harmonic expansion that is non-negative on a finite, predetermined set of quadrature points. With an appropriate numerical PDE solver, the FP_N+ closure generates particle concentrations that are guaranteed to be non-negative. Under an additional, mild regularity assumption, we prove that as the moment order tends to infinity, the FP_N+ approximation converges, in the L2 sense, at the same rate as the FP_N approximation; numerical tests suggest that this assumption may not be necessary. By numerical experiments on the challenging line source benchmark problem, we confirm that the FP_N+ method indeed produces accurate and non-negative solutions. To apply the FP_N+ closure on problems at large temporal-spatial scales, we develop a positive asymptotic preserving (AP) numerical PDE solver. We prove that the proposed AP scheme maintains stability and accuracy with standard mesh sizes at large temporal-spatial scales, whereas generic numerical schemes require excessive refinement of the temporal-spatial meshes. We also show that the proposed scheme preserves positivity of the particle concentration under a time-step restriction. Numerical results confirm that the proposed AP scheme is capable of solving linear transport equations at large temporal-spatial scales for which a generic scheme would fail.
Constrained optimization problems are involved in the formulation of the FP_N+ closure to enforce non-negativity of the FP_N+ approximation on the set of quadrature points. These optimization problems can be written as strictly convex quadratic programs (CQPs) with a large number of inequality constraints. To solve the CQPs efficiently, we propose a constraint-reduced variant of a Mehrotra predictor-corrector algorithm, with a novel constraint selection rule. We prove that, under appropriate assumptions, the proposed optimization algorithm converges globally to the solution at a locally q-quadratic rate. We test the algorithm on randomly generated problems, and the numerical results indicate that the combination of the proposed algorithm and the constraint selection rule outperforms the other constraint-reduced algorithms considered, especially for problems with many more inequality constraints than variables.
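The strictly convex QPs described above can be solved, for illustration, with a simple log-barrier Newton method; this is a generic interior-point stand-in, not the constraint-reduced Mehrotra predictor-corrector algorithm of the thesis:

```python
import numpy as np

def barrier_qp(Q, c, A, b, x0, mu=10.0, tol=1e-8):
    """Solve  min 0.5 x^T Q x + c^T x  s.t.  A x >= b  by a log-barrier
    Newton method with backtracking, starting from a strictly feasible x0."""
    x, t, m = np.array(x0, float), 1.0, len(b)

    def phi(z, t):
        s = A @ z - b
        if np.any(s <= 0):
            return np.inf                  # outside the interior
        return t * (0.5 * z @ Q @ z + c @ z) - np.sum(np.log(s))

    while m / t > tol:                     # outer loop: tighten the barrier
        for _ in range(100):               # inner loop: centering by Newton
            s = A @ x - b
            g = t * (Q @ x + c) - A.T @ (1.0 / s)
            H = t * Q + A.T @ ((1.0 / s**2)[:, None] * A)
            dx = np.linalg.solve(H, -g)
            step = 1.0                     # Armijo backtracking line search
            while phi(x + step * dx, t) > phi(x, t) + 1e-4 * step * (g @ dx):
                step *= 0.5
            x = x + step * dx
            if np.linalg.norm(step * dx) < 1e-12:
                break
        t *= mu
    return x
```

In the FP_N+ setting, the rows of `A` would evaluate the spherical harmonic expansion at the quadrature points, and constraint reduction would drop the rows that are far from active.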

Relevance: 30.00%

Abstract:

In the first part of this thesis we search for beyond the Standard Model physics through the search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at center of mass energy √s=8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider corresponding to an integrated luminosity of 19.8 fb⁻¹.

In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.

The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found from standard model production of the Higgs boson.

We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass, localized in a small region of the razor plane. We observe 5 events against a predicted background of 0.54 ± 0.28, an observation with a p-value of 10⁻³ and a local significance of 3.35σ. This background prediction comprises 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. Investigating the properties of this excess, we find that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.

In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, the ground state of which is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and train the classifier. We find that we are able to do this successfully in less than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with the more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
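The mapping from classifier training to an annealer-ready Hamiltonian can be sketched as a QBoost-style QUBO over binary weights, solved here by brute force as a classical stand-in for annealing; the data are toy values, not the di-photon analysis:

```python
import itertools
import numpy as np

def qboost_qubo(C, y, lam=0.01):
    """QUBO for selecting weak classifiers: minimize over w in {0,1}^n
    sum_s ( (1/n) sum_i w_i C[i,s] - y[s] )^2 + lam * sum_i w_i.
    C is (n_weak, n_samples) with entries in {-1,+1}; the returned matrix
    is the binary quadratic form an annealer would receive."""
    n, _ = C.shape
    A = (C @ C.T) / n**2                      # quadratic couplings
    lin = lam - (2.0 / n) * (C @ y)           # linear (field) terms
    Q = A.copy()
    np.fill_diagonal(Q, np.diag(A) + lin)     # fold w_i^2 = w_i into diagonal
    return Q

def solve_qubo_brute(Q):
    """Exhaustive ground-state search (feasible only for small n)."""
    n = Q.shape[0]
    best_e, best_w = np.inf, None
    for bits in itertools.product([0, 1], repeat=n):
        w = np.array(bits)
        e = w @ Q @ w
        if e < best_e:
            best_e, best_w = e, w
    return best_w, best_e
```

On hardware, `Q` would be converted to Ising couplings and embedded on the annealer's graph; the brute-force solver here only verifies the construction.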

Relevance: 30.00%

Abstract:

Protective relaying comprises several procedures and techniques focused on keeping the power system operating safely during and after undesired and abnormal network conditions, mostly caused by faults. The overcurrent relay is one of the oldest protective relays, and its operating principle is straightforward: the protection trips when the measured current exceeds a specified magnitude. It requires fewer variables from the system than other protections, which makes the overcurrent relay both the simplest protection and the most difficult to coordinate; its simplicity is reflected in low implementation, operation, and maintenance costs. The drawback is the increased tripping time of this kind of relay, mostly for faults located far from the relay; this problem can be particularly accentuated when standardized inverse-time curves are used or when only maximum fault currents are considered in the coordination. Although these limitations have caused the overcurrent relay to be slowly relegated and replaced by more sophisticated protection principles, it is still widely applied in subtransmission, distribution, and industrial systems. In this work, the use of non-standardized inverse-time curves, the modelling and implementation of optimization algorithms capable of carrying out the coordination process, the use of different levels of short-circuit current, and the inclusion of distance relays to replace insensitive overcurrent relays are proposed as methodologies for improving overcurrent relay performance. These techniques may transform the typical overcurrent relay into a more sophisticated one without changing its fundamental principles and advantages. Consequently, a more secure and still economical alternative is obtained, broadening its area of application.
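The standardized inverse-time characteristic that the proposed methodologies generalize can be sketched from the IEC 60255 standard-inverse formula, together with a simple coordination-margin check; the pickup currents and time-multiplier settings below are illustrative values:

```python
def iec_trip_time(i_fault, pickup, tms, k=0.14, alpha=0.02):
    """IEC 60255 standard-inverse curve: t = TMS * k / ((I/Is)^alpha - 1)."""
    m = i_fault / pickup
    if m <= 1.0:
        return float('inf')        # current below pickup: no operation
    return tms * k / (m ** alpha - 1.0)

def coordinated(i_fault, p_primary, tms_primary, p_backup, tms_backup,
                cti=0.3):
    """True if the backup relay trips at least CTI seconds after the
    primary for the given fault current."""
    return (iec_trip_time(i_fault, p_backup, tms_backup)
            - iec_trip_time(i_fault, p_primary, tms_primary)) >= cti
```

Relay coordination then amounts to choosing `pickup` and `tms` for every relay pair so that `coordinated` holds at the relevant fault levels while tripping times stay minimal, which is the optimization problem the abstract describes.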

Relevance: 30.00%

Abstract:

Recently, the interest of the automotive market in hybrid vehicles has increased due to more restrictive pollutant-emissions legislation and to the need to decrease fossil fuel consumption, since such a solution allows a consistent improvement of the vehicle's global efficiency. The term hybridization refers to the energy flow in the powertrain of a vehicle: a standard vehicle usually has only one energy source and one energy tank, whereas a hybrid vehicle has at least two energy sources. In most cases, the prime mover is an internal combustion engine (ICE) while the auxiliary energy source can be mechanical, electrical, pneumatic or hydraulic. The control unit of a hybrid vehicle is expected to use the ICE in high-efficiency working zones and to shut it down when convenient, while using the electric motor-generator (EMG) at partial loads and as a fast torque response during transients. However, the battery state of charge may limit such a strategy. That is why, in most cases, energy management strategies are based on control of the State Of Charge (SOC). Several studies have been conducted on this topic and many different approaches have been illustrated. The purpose of this dissertation is to develop an online (usable on-board) control strategy in which the operating modes are defined using an instantaneous optimization method that minimizes the equivalent fuel consumption of a hybrid electric vehicle. The equivalent fuel consumption is calculated by taking into account the total energy used by the hybrid powertrain during the propulsion phases. The first section presents the characteristics of hybrid vehicles. The second chapter describes the global model, with a particular focus on the energy management strategies usable for supervisory control of such a powertrain. The third chapter shows the performance of the implemented controller on an NEDC cycle, compared with that obtained with the original control strategy.
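An instantaneous equivalent-consumption minimization of the kind described can be sketched as a scan over engine/motor torque splits; the fuel map `fuel_rate`, the equivalence factor `s` and the loss-free motor model are all illustrative assumptions, not the dissertation's calibrated model:

```python
def ecms_split(T_req, w, fuel_rate, s=2.5, lhv=42.5e6, grid=21):
    """Toy Equivalent Consumption Minimization Strategy: for a requested
    torque T_req (Nm) at shaft speed w (rad/s), scan torque splits and
    minimize engine fuel flow plus the s-weighted fuel equivalent of the
    electric power drawn from the battery.
    fuel_rate(T_e, w) is a user-supplied engine fuel map in kg/s."""
    best = None
    for i in range(grid):
        u = i / (grid - 1)             # fraction of torque from the engine
        T_e, T_m = u * T_req, (1 - u) * T_req
        P_elec = T_m * w               # electric power (W), losses ignored
        cost = fuel_rate(T_e, w) + s * P_elec / lhv
        if best is None or cost < best[0]:
            best = (cost, T_e, T_m)
    return best                        # (equivalent fuel rate, T_engine, T_motor)
```

In a real ECMS controller, `s` would be adapted online to keep the battery SOC within its window, which is the supervisory-control problem the abstract describes.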