981 results for Numerical optimization
Abstract:
In this work we solve Mathematical Programs with Complementarity Constraints (MPCCs) using the hyperbolic smoothing strategy. Under this approach, the complementarity condition is relaxed through the hyperbolic smoothing function, which involves a positive parameter that can be decreased to zero. An iterative algorithm was implemented in the MATLAB language and tested on a set of AMPL problems from the MacMPEC database.
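For illustration, hyperbolic smoothing typically replaces the nonsmooth complementarity condition min(a, b) = 0 by a smooth equation built from sqrt((a - b)^2 + tau^2). The Python sketch below shows this relaxation on a toy problem of our own devising (it is not the authors' MATLAB implementation): the smoothing parameter tau is driven toward zero across a sequence of smooth subproblems.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_min(a, b, tau):
    # Hyperbolic smoothing of min(a, b): |a - b| is replaced by
    # sqrt((a - b)**2 + tau**2), which converges to |a - b| as tau -> 0.
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + tau ** 2))

def objective(x):
    # Toy objective; in the paper the problems come from MacMPEC.
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

x = np.array([0.8, 0.2])
for tau in [1.0, 0.1, 0.01, 1e-4]:
    # The complementarity condition x1 >= 0, x2 >= 0, x1 * x2 = 0 is imposed
    # as the smoothed equation min(x1, x2) ~ 0, tightened as tau shrinks.
    cons = {"type": "eq", "fun": lambda x, t=tau: smooth_min(x[0], x[1], t)}
    res = minimize(objective, x, bounds=[(0, None), (0, None)], constraints=cons)
    x = res.x
print(x)  # approaches a point with x1 * x2 ~ 0, e.g. (1, 0)
```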
Abstract:
We have developed an algorithm that uses a Design of Experiments technique to reduce the search space in global optimization problems. Our approach, called the Domain Optimization Algorithm, can efficiently eliminate search-space regions with a low probability of containing the global optimum. It is based on discarding non-promising regions, which are identified using simple (linear) models fitted to the data. A global optimization algorithm is then run with its population initialized inside the promising region. The proposed approach, with this heuristic criterion for population initialization, has shown relevant results in tests on hard benchmark functions.
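As a minimal sketch of this idea under our own simplifying assumptions (halving the box along each coordinate according to the sign of a fitted linear trend; the actual Domain Optimization Algorithm may differ in how regions are scored and discarded):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy objective with optimum at (3, 3); benchmarks replace this in practice.
    return np.sum((x - 3.0) ** 2, axis=-1)

lo, hi = np.full(2, -5.0), np.full(2, 5.0)
for _ in range(3):
    # Screen the current box with a small space-filling sample.
    X = rng.uniform(lo, hi, size=(40, 2))
    y = f(X)
    # Fit a first-order (linear) model y ~ b0 + b . x by least squares.
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    b = coef[1:]
    # Keep, per coordinate, the half-box where the linear trend predicts
    # smaller objective values (discard the non-promising half).
    mid = 0.5 * (lo + hi)
    lo = np.where(b > 0, lo, mid)
    hi = np.where(b > 0, mid, hi)

# Initialize the global optimizer's population inside the reduced box.
population = rng.uniform(lo, hi, size=(20, 2))
```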
Abstract:
Photovoltaic (PV) solar panels generally produce electricity in the 6% to 16% efficiency range, with the remainder dissipated as thermal losses. To recover this energy, hybrid photovoltaic-thermal (PVT) systems have been devised: devices that simultaneously convert solar energy into electricity and heat. It is therefore interesting to study the PVT system globally, from different points of view, in order to evaluate the advantages and disadvantages of this technology and its possible uses. In Chapter II, a numerical optimization of the PVT absorber is carried out with a genetic algorithm, analyzing different internal channel profiles in order to find the right compromise between performance and technical and economic feasibility. In Chapter III, thanks to a mobile structure built at the university laboratory, the electrical and thermal output power of PVT panels is compared experimentally with separate photovoltaic and solar-thermal production. By collecting a large amount of experimental data under different seasonal conditions (ambient temperature, irradiation, wind, etc.), this mobile structure was used to evaluate the average increase or decrease in thermal and electrical efficiency relative to separate production over the year. In Chapter IV, new equation-based models of PVT and solar-thermal panels under steady-state conditions are developed in the Dymola software, which uses the Modelica language. Compared with previous system-modelling software, this makes it possible to model and evaluate, in a simplified way, different PVT panel design concepts before prototyping and measuring them. Chapter V addresses the definition of the PVT boundary conditions within an HVAC system. Year-long simulations were run in the Polysun software in order to assess the best solar-assisted integrated configuration by means of the F_save (solar energy saving) factor. Finally, Chapter VI presents the conclusions and perspectives of this PhD work.
Abstract:
This paper uses a novel numerical optimization technique, robust optimization, that is well suited to solving the asset-liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximise the Sharpe ratio. We disaggregate pension liabilities into three components (active members, deferred members and pensioners) and transform the optimal asset allocation into the scheme's projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes-Stein, and Black-Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period robust optimization is superior to the four benchmarks across 20 performance criteria, and has a remarkably stable asset allocation, essentially fixed-mix. These conclusions are supported by six robustness checks.
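As a generic illustration (a standard textbook formulation, not necessarily the authors' exact model), a robust Sharpe-ratio problem with an ellipsoidal uncertainty set for the expected returns can be written as

```latex
\max_{w}\; \min_{\mu \in U}\; \frac{\mu^{\top} w - r_f}{\sqrt{w^{\top} \Sigma\, w}},
\qquad
U = \left\{ \mu : (\mu - \hat{\mu})^{\top} \Omega^{-1} (\mu - \hat{\mu}) \le \kappa^{2} \right\},
```

where the worst case over this ellipsoid reduces in closed form to the numerator \(\hat{\mu}^{\top} w - \kappa \sqrt{w^{\top} \Omega\, w}\); here \(\hat{\mu}\) is the estimated mean return vector, \(\Sigma\) the return covariance, \(\Omega\) the estimation-error covariance, \(r_f\) the risk-free rate, and \(\kappa\) the level of robustness.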
Abstract:
Numerical optimization techniques are useful for determining the best input for systems described by mathematical models whose objectives can be expressed quantitatively. This work addresses the problem of optimizing drug dosages in the treatment of AIDS, in terms of a balance between therapeutic response and side effects. A mathematical model describing the dynamics of the HIV virus and CD4 cells is used to compute the optimal drug dosage for the short-term treatment of AIDS patients by a direct optimization method using a Bolza-type cost function. The model parameters were fitted to real data taken from the literature. In order to simplify the numerical procedures, the control law was expressed as a series expansion which, after truncation, yields sub-optimal controls. Once patients reach a satisfactory clinical state, the Linear Quadratic Regulator (LQR) technique is used to determine the permanent long-term drug dosage. The dosages computed with the LQR technique tend to be lower than the equivalent constant-dose therapy, in terms of the marked increase in the CD4+ T-cell count and the reduction in free-virus density over a fixed time interval.
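For reference, an LQR gain for a linearized model dx/dt = Ax + Bu is obtained from the continuous-time algebraic Riccati equation. A minimal Python sketch follows, with placeholder matrices standing in for the linearized HIV/CD4 dynamics (the numbers are ours, purely illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linearized dynamics x' = A x + B u; in the paper's setting x would
# collect deviations of viral load and CD4 counts from the target state,
# and u the drug-dosage deviation (these matrices are placeholders).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])  # state penalty (therapeutic response)
R = np.array([[1.0]])     # control penalty (side effects / dosage cost)

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Steady-state feedback law u = -K x keeps the state near the target.
print(K)
```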
Abstract:
Numerical optimization is a technique in which a computer is used to explore design parameter combinations to find extremes in performance factors. In multi-objective optimization several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the development of the Pareto frontier approximation and automatically switches amongst the constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were optimized simultaneously: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time, Modified Newton Impact Theory (MNIT) was used to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent and when it is present in the optimization. In modern optimization problems, each objective function evaluation may require many hours on a computer cluster to perform these types of analysis. One solution is to create a response surface that models the behavior of the objective function. Once enough data about the behavior of the objective function has been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions. These functions involve from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
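To make the Pareto-frontier idea concrete, here is a minimal sketch (ours, not MOHO's internals) that extracts the non-dominated set from a list of candidate designs, assuming both objectives are to be minimized (e.g. drag and negative volume):

```python
import numpy as np

def pareto_front(F):
    """Return a boolean mask of non-dominated rows of F.

    F is an (n_points, n_objectives) array where every objective is
    minimized; a point is dominated if some other point is no worse in
    all objectives and strictly better in at least one.
    """
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if np.any(dominated_by):
            keep[i] = False
    return keep

# Example: objective 1 = drag, objective 2 = -volume (to maximize volume).
F = np.array([[1.0, -3.0], [2.0, -5.0], [2.5, -4.0], [0.8, -1.0]])
print(F[pareto_front(F)])  # the trade-off curve: no point improves both
```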
Abstract:
We propose a positive, accurate moment closure for linear kinetic transport equations based on a filtered spherical harmonic (FP_N) expansion in the angular variable. The FP_N moment equations are accurate approximations to linear kinetic equations, but they are known to suffer from the occurrence of unphysical, negative particle concentrations. The new positive filtered P_N (FP_N+) closure is developed to address this issue. The FP_N+ closure approximates the kinetic distribution by a spherical harmonic expansion that is non-negative on a finite, predetermined set of quadrature points. With an appropriate numerical PDE solver, the FP_N+ closure generates particle concentrations that are guaranteed to be non-negative. Under an additional, mild regularity assumption, we prove that as the moment order tends to infinity, the FP_N+ approximation converges, in the L2 sense, at the same rate as the FP_N approximation; numerical tests suggest that this assumption may not be necessary. By numerical experiments on the challenging line source benchmark problem, we confirm that the FP_N+ method indeed produces accurate and non-negative solutions. To apply the FP_N+ closure to problems at large temporal-spatial scales, we develop a positive asymptotic-preserving (AP) numerical PDE solver. We prove that the proposed AP scheme maintains stability and accuracy with standard mesh sizes at large temporal-spatial scales, whereas generic numerical schemes require excessive refinement of the temporal-spatial meshes. We also show that the proposed scheme preserves positivity of the particle concentration under some time step restriction. Numerical results confirm that the proposed AP scheme is capable of solving linear transport equations at large temporal-spatial scales, for which a generic scheme could fail. Constrained optimization problems are involved in the formulation of the FP_N+ closure to enforce non-negativity of the FP_N+ approximation on the set of quadrature points. These optimization problems can be written as strictly convex quadratic programs (CQPs) with a large number of inequality constraints. To efficiently solve the CQPs, we propose a constraint-reduced variant of a Mehrotra predictor-corrector algorithm, with a novel constraint selection rule. We prove that, under appropriate assumptions, the proposed optimization algorithm converges globally to the solution at a locally q-quadratic rate. We test the algorithm on randomly generated problems, and the numerical results indicate that the combination of the proposed algorithm and the constraint selection rule outperforms other constraint-reduced algorithms, especially for problems with many more inequality constraints than variables.
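For illustration only (a generic modelling-language sketch with random placeholder data, not the authors' constraint-reduced interior-point solver), the projection behind the FP_N+ closure can be posed as a strictly convex QP: find moments close to the filtered ones whose expansion is non-negative at the quadrature points. In cvxpy notation:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

n_moments, n_quad = 9, 200   # toy sizes; real closures use many more points
Y = rng.standard_normal((n_quad, n_moments))  # placeholder for spherical
                                              # harmonics evaluated at the
                                              # quadrature points
c0 = rng.standard_normal(n_moments)           # filtered (FP_N) moment vector

c = cp.Variable(n_moments)
# Strictly convex QP: stay close to the filtered moments while forcing the
# reconstructed distribution Y @ c to be non-negative at every quadrature node.
prob = cp.Problem(cp.Minimize(cp.sum_squares(c - c0)), [Y @ c >= 0])
prob.solve()
print(c.value)
```

Since n_quad is much larger than n_moments, most inequality constraints are inactive at the solution, which is exactly the regime that motivates the constraint-reduction strategy described in the abstract.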
Abstract:
It is remarkable that there are no deployed military hybrid vehicles, given that battlefield fuel costs approximately 100 times as much as civilian fuel. In the commercial marketplace, where fuel prices are much lower, hybrid electric vehicles have become increasingly common due to their increased fuel efficiency and the associated operating cost benefit. The absence of military hybrid vehicles is not due to a lack of investment in research and development, but rather because applying hybrid vehicle architectures to a military application poses unique challenges. These challenges include inconsistent duty cycles for propulsion requirements and the absence of methods for looking at vehicle energy in a holistic sense. This dissertation provides a remedy to these challenges by presenting a method to quantify the benefits of a military hybrid vehicle by regarding that vehicle as a microgrid. This innovative concept allowed for the creation of an expandable multiple-input numerical optimization method that was implemented for both real-time control and system design optimization. An example of each of these implementations is presented. Optimization in the loop using this new method was compared to a traditional closed-loop control system and proved to be more fuel efficient. System design optimization using this method successfully illustrated battery size optimization by iterating through various electric duty cycles. By utilizing this new multiple-input numerical optimization method, a holistic view of duty cycle synthesis, vehicle energy use, and vehicle design optimization can be achieved.
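A heavily simplified sketch of the battery-sizing idea follows; the duty cycle, charge model, and cost weighting below are illustrative assumptions of ours, not the dissertation's method:

```python
import numpy as np

duty_cycle = np.array([5.0, -2.0, 8.0, -1.0, 3.0, -4.0])  # kW per step (toy)
dt = 1.0  # hours per step

def fuel_used(capacity_kwh):
    """Simulate one duty cycle: the battery absorbs what it can and the
    engine (fuel) covers the rest; returns engine energy in kWh."""
    soc, fuel = 0.5 * capacity_kwh, 0.0
    for p in duty_cycle:
        e = p * dt
        if e > 0:                      # discharge to serve the load
            from_batt = min(e, soc)
            soc -= from_batt
            fuel += e - from_batt      # shortfall covered by the engine
        else:                          # regeneration recharges the battery
            soc = min(capacity_kwh, soc - e)
    return fuel

def objective(capacity_kwh):
    # Trade off fuel use against a toy penalty for battery mass and cost.
    return fuel_used(capacity_kwh) + 0.2 * capacity_kwh

# Sweep candidate battery sizes and keep the best one for this duty cycle.
sizes = np.linspace(1.0, 20.0, 20)
best = min(sizes, key=objective)
print(best, fuel_used(best))
```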
Abstract:
The quantification of quantum correlations (other than entanglement) usually entails laborious numerical optimization procedures that also demand quantum state tomography. It is thus interesting to have a laboratory-friendly witness for the nature of correlations. In this Letter we report a direct experimental implementation of such a witness in a room-temperature nuclear magnetic resonance system. In our experiment the nature of the correlations is revealed by performing only a few local magnetization measurements. We also compared the witness results with those for the symmetric quantum discord, and obtained fairly good agreement.
Abstract:
We describe in detail the theory underpinning the measurement of density matrices of a pair of quantum two-level systems (qubits). Our particular emphasis is on qubits realized by the two polarization degrees of freedom of a pair of entangled photons generated in a down-conversion experiment; however, the discussion applies in general, regardless of the actual physical realization. Two techniques are discussed, namely, a tomographic reconstruction (in which the density matrix is linearly related to a set of measured quantities) and a maximum likelihood technique which requires numerical optimization (but has the advantage of producing density matrices that are always non-negative definite). In addition, a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated. Examples based on down-conversion experiments are used to illustrate our results.
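Consistent with the abstract's description, non-negative definiteness can be guaranteed by parameterizing the density matrix through a triangular factor, rho = T†T / Tr(T†T), and maximizing the likelihood over the real parameters of T. The sketch below uses synthetic data and a simple chi-square (Gaussian-approximation) objective; the parameterization is standard, while the variable names and toy likelihood are ours:

```python
import numpy as np
from scipy.optimize import minimize

def rho_from_params(t):
    # Lower-triangular T built from 16 real parameters guarantees that
    # rho = T^dagger T / tr(T^dagger T) is a valid two-qubit density matrix.
    T = np.zeros((4, 4), dtype=complex)
    T[np.diag_indices(4)] = t[:4]
    idx = np.tril_indices(4, k=-1)
    T[idx] = t[4:10] + 1j * t[10:16]
    rho = T.conj().T @ T
    return rho / np.trace(rho).real

def neg_log_likelihood(t, projectors, counts):
    # Chi-square objective (Gaussian approximation to the count statistics);
    # projectors and counts would come from the tomography measurements.
    rho = rho_from_params(t)
    probs = np.array([np.real(np.trace(rho @ P)) for P in projectors])
    expected = counts.sum() * probs / probs.sum()
    return np.sum((expected - counts) ** 2 / np.maximum(expected, 1e-9))

# Given a list of measurement projectors and photon counts, one would run:
# result = minimize(neg_log_likelihood, x0=np.ones(16),
#                   args=(projectors, counts), method="BFGS")
# rho_ml = rho_from_params(result.x)   # non-negative definite by construction
```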
Abstract:
Novel current density mapping (CDM) schemes are developed for the design of new actively shielded, clinical magnetic resonance imaging (MRI) magnets. This is an extended inverse method in which the entire potential solution space for the superconductors is considered, rather than single current density layers. The solution provides insight into the superconducting coil pattern required for a desired magnet configuration. This information is then used as an initial set of parameters for the magnet structure, and a previously developed hybrid numerical optimization technique is used to obtain the final geometry of the magnet. The CDM scheme is applied to the design of compact symmetric, asymmetric, and open-architecture 1.0-1.5 T MRI magnet systems of novel geometry and utility. A new symmetric 1.0 T system that is just 1 m in length with a full 50-cm diameter of the active, or sensitive, volume (DSV) is detailed, as well as an asymmetric system in which a 50-cm DSV begins just 14 cm from the end of the coil structure. Finally, a 1.0 T open magnet system with a full 50-cm DSV is presented. These new designs provide clinically useful homogeneous regions and have appropriately restricted stray fields but, in some of the designs, the DSV is much closer to the end of the magnet system than in conventional designs. These new designs have the potential to reduce patient claustrophobia and improve physician access to patients undergoing scans.
Abstract:
Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
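As a minimal illustration of the bootstrap step (toy data of ours; the profiles themselves aggregate such statistics across solvers and problems), resampling lets one estimate the sampling distribution of, say, the median of an algorithm's best-found values even from a small number of runs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Best objective values from 15 independent runs of a stochastic solver (toy).
runs = rng.lognormal(mean=0.0, sigma=0.5, size=15)

# Bootstrap the sampling distribution of the median: resample the runs with
# replacement many times and recompute the statistic on each resample.
boot = np.array([np.median(rng.choice(runs, size=runs.size, replace=True))
                 for _ in range(5000)])

# Percentile confidence interval for the median performance.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"median = {np.median(runs):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```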
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and to treat them separately in an effective way. It provides better interpretability to non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad-hoc topographic indices as covariables in statistical models in complex terrains. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performances. Here we examine the stability of the MKL algorithm with respect to the number of training data samples and to the presence of noise. The results of a real case study are also presented, where MKL is able to exploit a large set of terrain features computed at multiple spatial scales, when predicting mean wind speed in an Alpine region.
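A stripped-down sketch of the composite-kernel idea follows: one dedicated kernel per feature group, combined with weights beta. Here the weights are fixed for illustration, whereas true MKL learns them jointly with the regression; the data and length scales are placeholder assumptions:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 3))          # e.g. three terrain features
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

# One dedicated kernel per feature, each treating its sub-problem separately;
# MKL would learn the weights beta, here fixed for illustration.
kernels = [rbf_kernel(X[:, [j]], X[:, [j]], gamma=0.5) for j in range(3)]
beta = np.array([0.6, 0.3, 0.1])
K = sum(b * Km for b, Km in zip(beta, kernels))

# Kernel ridge regression on the precomputed composite kernel.
model = KernelRidge(kernel="precomputed", alpha=1e-2).fit(K, y)
pred = model.predict(K)
print(np.mean((pred - y) ** 2))
```

The learned (or assigned) weights beta are what give the method its interpretability: a near-zero weight flags a feature, or feature group, that contributes little to the prediction.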
Abstract:
Thermophilic Bacillus sp. SMIA-2 produced protease when grown on a medium of apple pectin, whey protein and corn steep liquor, whose concentrations were varied from 3 to 10 g L-1 according to a 2³ central composite design. The experiments were conducted in a shaker at 50 °C, 150 rpm and an initial pH of 6.5. The results revealed that the culture medium affected both cell growth and enzyme production. After a graphical and numerical optimization procedure, enzyme production reached its maximum value at 30 h of fermentation, approximately 70 U (mg protein)-1, suggesting that this process was partially associated with growth.
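For reference, a 2³ central composite design such as the one mentioned combines factorial corners, axial points and centre replicates. A small sketch generating a coded design matrix follows; the axial distance, replicate count and the mapping of coded levels onto the 3 to 10 g/L range are conventional choices of ours, not taken from the paper:

```python
import itertools
import numpy as np

k = 3                      # three medium components (factors)
alpha = (2 ** k) ** 0.25   # rotatable axial distance, ~1.682 for k = 3

corners = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
axial = np.vstack([alpha * row for sign in (-1, 1)
                   for row in sign * np.eye(k)])
center = np.zeros((4, k))  # centre-point replicates (count is a design choice)

design = np.vstack([corners, axial, center])  # coded levels in [-alpha, alpha]

# Map coded levels onto concentrations so the extremes land at 3 and 10 g/L.
lo_gl, hi_gl = 3.0, 10.0
mid, half = (hi_gl + lo_gl) / 2, (hi_gl - lo_gl) / 2
concentrations = mid + design * (half / alpha)
print(concentrations.round(2))
```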
Abstract:
This work describes techniques for modeling, optimizing and simulating robot calibration processes using off-line programming. The identification of the geometric parameters of the nominal kinematic model is improved using numerical optimization of the mathematical model. The actual robot and the measurement system are simulated by introducing random errors that represent their physical behavior and statistical repeatability. An evaluation of the corrected nominal kinematic model gives a clear picture of the influence of the distinct variables involved in the process, enabling suitable planning, and indicates a considerable accuracy improvement of the optimized model over the non-optimized one.
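In outline, identifying geometric parameters reduces to a nonlinear least-squares fit of measured end-effector positions. The sketch below uses a planar 2-link arm with simulated measurement noise, in the spirit of the simulation approach described (real calibrations identify full kinematic parameter sets; the model and numbers here are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

def forward(params, thetas):
    # Planar 2-link forward kinematics; params are the link lengths.
    l1, l2 = params
    x = l1 * np.cos(thetas[:, 0]) + l2 * np.cos(thetas[:, 0] + thetas[:, 1])
    y = l1 * np.sin(thetas[:, 0]) + l2 * np.sin(thetas[:, 0] + thetas[:, 1])
    return np.column_stack([x, y])

true_params = np.array([0.52, 0.31])   # "actual" robot, unknown in practice
nominal = np.array([0.50, 0.30])       # nominal kinematic model

# Simulated measurements: true kinematics plus random measurement error.
thetas = rng.uniform(-np.pi, np.pi, (50, 2))
measured = forward(true_params, thetas) + 1e-4 * rng.standard_normal((50, 2))

# Identify the geometric parameters by minimizing the position residuals.
res = least_squares(lambda p: (forward(p, thetas) - measured).ravel(), nominal)
print(res.x)  # corrected parameters, close to the true values
```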