964 results for Linearly constrained optimization
Abstract:
A discrete vortex method-based model has been proposed for two-dimensional/three-dimensional ground-effect prediction. The model requires only two-dimensional sectional aerodynamics in free flight. This free-flight data can be obtained either from experiments or from a high-fidelity computational fluid dynamics solver. The first step of this two-step model involves a constrained optimization procedure that modifies the vortex distribution on the camber line, as obtained from a discrete vortex method, to match the free-flight data from experiments/computational fluid dynamics. In the second step, the vortex distribution thus obtained is further modified to account for the presence of the ground plane within a discrete vortex method-based framework. Whereas lift predictability follows as a natural extension, drag predictability within a potential flow framework is achieved through the introduction of what are referred to as drag panels. The need for the generalized Kutta-Joukowski theorem is emphasized. The model is extended to three dimensions by way of the numerical lifting-line theory, which allows for wing sweep. The model is extensively validated for both two-dimensional and three-dimensional ground-effect studies. The work also demonstrates the ability of the model to predict lift and drag coefficients of a high-lift wing in ground effect to about 2 and 8% accuracy, respectively, as compared to results obtained using a Reynolds-averaged Navier-Stokes solver on grids with several million volumes. The model shows considerable promise for design, particularly during the early phase.
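The data-matching first step lends itself to a compact illustration. Below is a minimal sketch, assuming a plain equality-constrained least-squares adjustment of panel vortex strengths; the strengths gamma0, the lift weights w, and the target cl_target are invented stand-ins for quantities the abstract leaves unspecified, not the paper's actual formulation.

```python
import numpy as np

# Minimal sketch: keep vortex strengths gamma close to the discrete-vortex
# solution gamma0 while matching a target lift coefficient:
#   min ||gamma - gamma0||^2   s.t.   w @ gamma = cl_target
# gamma0, w, and cl_target are invented stand-ins, not the paper's quantities.
def fit_vortex_strengths(gamma0, w, cl_target):
    n = gamma0.size
    # KKT system of the equality-constrained quadratic program
    K = np.block([[2.0 * np.eye(n), w[:, None]],
                  [w[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([2.0 * gamma0, [cl_target]])
    return np.linalg.solve(K, rhs)[:n]

gamma0 = np.array([0.8, 0.5, 0.3, 0.1])  # strengths from the free-flight step
w = np.full(4, 0.25)                     # hypothetical per-panel lift weights
gamma = fit_vortex_strengths(gamma0, w, cl_target=0.45)
print(gamma, w @ gamma)                  # lift constraint is met exactly
```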
Abstract:
We present in this paper a new algorithm based on Particle Swarm Optimization (PSO) for solving Dynamic Single Objective Constrained Optimization (DCOP) problems. We have modified several parameters of the original particle swarm optimization algorithm, introducing new types of particles for local search and for detecting changes in the search space. The algorithm is tested on a known benchmark set, and the results are compared with those of other contemporary works. We demonstrate the convergence properties using convergence graphs and also illustrate changes to the current benchmark problems that give a more realistic correspondence to practical real-world problems.
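For orientation, here is a minimal constrained-PSO loop. It is not the paper's algorithm: the sphere objective, the single inequality constraint, and the penalty-based feasibility handling are all assumptions made for the illustration.

```python
import numpy as np

# Minimal constrained-PSO sketch: sphere objective, one inequality constraint
# g(x) = 1 - sum(x) <= 0, quadratic penalty for infeasibility (all assumed).
rng = np.random.default_rng(0)

def fitness(x):
    violation = np.maximum(0.0, 1.0 - x.sum(axis=1))   # want g(x) <= 0
    return np.sum(x**2, axis=1) + 1e6 * violation**2   # penalized objective

n, d = 30, 2
x = rng.uniform(-5.0, 5.0, (n, d))
v = np.zeros((n, d))
pbest, pval = x.copy(), fitness(x)
gbest = pbest[np.argmin(pval)]

for _ in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = fitness(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    gbest = pbest[np.argmin(pval)]

print(gbest, fitness(gbest[None])[0])   # converges near x = (0.5, 0.5)
```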
Abstract:
We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and which scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivatives, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.
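A toy version of a Biot-strain energy is easy to write down. The sketch below evaluates a quadratic energy in the strain E = U - I from the polar decomposition F = R U of a 2x2 deformation gradient; the Lame-style weights mu and lam are assumptions, and this is an illustration of the strain measure, not the paper's implementation.

```python
import numpy as np

# Toy Biot-strain energy for a 2x2 deformation gradient F, quadratic in the
# strain E = U - I from the polar decomposition F = R @ U (weights assumed).
def biot_energy(F, mu=1.0, lam=1.0):
    W, sigma, Vt = np.linalg.svd(F)          # F = W diag(sigma) Vt
    U = Vt.T @ np.diag(sigma) @ Vt           # symmetric stretch factor
    E = U - np.eye(2)                        # Biot strain tensor
    return mu * np.sum(E * E) + 0.5 * lam * np.trace(E) ** 2

print(biot_energy(np.eye(2)))                            # rest state: 0
print(biot_energy(np.array([[0.0, -1.0], [1.0, 0.0]])))  # pure rotation: 0
print(biot_energy(2.0 * np.eye(2)))                      # uniform scaling: penalized
```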
Abstract:
Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity to humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability, and correctly inferring a robot's progress toward a high-level goal can be challenging.
This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to have a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.
This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria for satisfaction of LTL formulas. First, maximizing the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic gradient computations are derived, which allow the use of first-order optimization techniques.
The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through the novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
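The Poisson equation at the heart of this formulation is easy to exhibit on a toy chain. The sketch below solves (I - P) h = r - g·1 for an assumed two-state ergodic chain; P and r are invented, and fixing h[0] = 0 is one common normalization for the otherwise underdetermined bias vector.

```python
import numpy as np

# Solve the Poisson equation (I - P) h = r - g*1 for a toy ergodic chain
# with transition matrix P, reward r, gain g, and bias h (values assumed).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
r = np.array([1.0, 0.0])

# Stationary distribution pi solves pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
gain = pi @ r                        # long-run average reward g

# (I - P) is singular, so pin down h by fixing h[0] = 0.
M = np.eye(2) - P
M[0] = [1.0, 0.0]                    # normalization row
b = r - gain
b[0] = 0.0
h = np.linalg.solve(M, b)
print(gain, h)                       # gain and bias (relative values)
```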
The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.
Abstract:
The architecture of model predictive control (MPC), with its explicit internal model and constrained optimization, is presented. Since MPC relies on an explicit internal model, one can imagine dealing with failures by updating the internal model and letting the on-line optimizer work out how to control the system in its new condition. This relies on assumptions such as that the nature of the fault can be located and that the model can be updated automatically. A standard form of MPC is considered, with linear inequality constraints on inputs and outputs, a linear internal model, and a quadratic cost function.
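The standard form described maps directly onto a small quadratic program. Below is a minimal sketch of linear MPC using cvxpy; the double-integrator model, horizon, weights, and bounds are toy assumptions, not taken from the text.

```python
import cvxpy as cp
import numpy as np

# Minimal linear-MPC sketch: linear model x+ = A x + B u, quadratic cost,
# box constraints on inputs and outputs (toy A, B, weights, and bounds).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N, x0 = 10, np.array([5.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[:, k]) <= 1.0,          # input constraint
             cp.abs(x[0, k + 1]) <= 6.0]      # output constraint
cp.Problem(cp.Minimize(cost), cons).solve()
print(u.value[0, 0])   # only the first move is applied (receding horizon)
```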
Abstract:
We apply adjoint-based sensitivity analysis to a time-delayed thermo-acoustic system: a Rijke tube containing a hot wire. We calculate how the growth rate and frequency of small oscillations about a base state are affected either by a generic passive control element in the system (the structural sensitivity analysis) or by a generic change to its base state (the base-state sensitivity analysis). We illustrate the structural sensitivity by calculating the effect of a second hot wire with a small heat-release parameter. In a single calculation, this shows how the second hot wire changes the growth rate and frequency of the small oscillations, as a function of its position in the tube. We then examine the components of the structural sensitivity in order to determine the passive control mechanism that has the strongest influence on the growth rate. We find that a force applied to the acoustic momentum equation in the opposite direction to the instantaneous velocity is the most stabilizing feedback mechanism. We also find that its effect is maximized when it is placed at the downstream end of the tube. This feedback mechanism could be supplied, for example, by an adiabatic mesh. We illustrate the base-state sensitivity by calculating the effects of small variations in the damping factor, the heat-release time-delay coefficient, the heat-release parameter, and the hot-wire location. The successful application of sensitivity analysis to thermo-acoustics opens up new possibilities for the passive control of thermo-acoustic oscillations by providing gradient information that can be combined with constrained optimization algorithms in order to reduce linear growth rates. © Cambridge University Press 2013.
Abstract:
In a standard constrained optimization problem, the equality or inequality constraints are connected by logical AND, and many efficient, convergent optimization algorithms already exist for this case. In practical applications, however, there are many more general constrained optimization problems whose equality or inequality constraints are connected not only by logical AND but also by logical OR, and the existing algorithms for standard constrained optimization problems no longer apply. A new mathematical transformation is presented that converts inequality constraints connected by logical OR into a set of inequalities connected by logical AND. It is applied to the necessary and sufficient schedulability condition of the real-time rate-monotonic scheduling algorithm, expressing real-time system design as a mixed Boolean integer programming problem that is solved with the classical branch-and-bound method. The experimental section points out the advantages and disadvantages of the various methods.
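The abstract does not spell out its transformation, but the classical way to turn an OR of inequalities into an AND is a big-M reformulation with a binary indicator: "g1(x) <= 0 OR g2(x) <= 0" becomes "g1(x) <= M y AND g2(x) <= M (1 - y), y in {0, 1}". The sketch below checks that equivalence on toy constraints; g1, g2, and M are assumptions for the illustration.

```python
# Big-M rewrite of an OR of inequalities as an AND with a binary indicator y
# (toy g1, g2, and M; a MILP solver would branch on y instead of enumerating).
M = 100.0
g1 = lambda x: x - 3.0          # satisfied when x <= 3
g2 = lambda x: 5.0 - x          # satisfied when x >= 5

def or_feasible(x):
    return g1(x) <= 0 or g2(x) <= 0

def and_feasible(x):
    # feasible iff some branch of the binary indicator works
    return any(g1(x) <= M * y and g2(x) <= M * (1 - y) for y in (0, 1))

for x in (2.0, 4.0, 6.0):
    assert or_feasible(x) == and_feasible(x)
print("OR feasibility matches the AND reformulation on the samples")
```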
Abstract:
In motor design, reasonable structural dimensions and parameters often have to be obtained through design optimization. The motor design problem is, in essence, a complex constrained nonlinear continuous-function optimization problem. Obtaining a satisfactory result requires an algorithm that is not only highly accurate but also converges quickly. A new hybrid algorithm is proposed for optimizing the dimensions and overall structure of permanent-magnet motors. It combines a chaos algorithm with particle swarm optimization; taking a micro permanent-magnet motor as an example, several variables such as the slot shape are optimized. The results demonstrate the effectiveness and speed of the algorithm, making it suitable for solving problems of this kind.
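One common way to combine chaos with PSO, which may or may not match this paper's hybrid, is to seed the swarm with a chaotic sequence. The logistic-map sketch below shows that ingredient; the bounds, seed, and swarm size are assumptions.

```python
import numpy as np

# Logistic-map sketch of the chaotic ingredient: well-spread initial particle
# positions for a swarm (bounds, seed, and sizes assumed).
def chaotic_init(n_particles, dim, lo, hi, seed=0.7):
    z = np.empty((n_particles, dim))
    c = seed                         # any value in (0, 1) off the fixed points
    for i in range(n_particles):
        for j in range(dim):
            c = 4.0 * c * (1.0 - c)  # logistic map in its chaotic regime
            z[i, j] = c
    return lo + (hi - lo) * z

print(chaotic_init(5, 3, -1.0, 1.0))
```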
Abstract:
Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customers' satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and its related preference relations.

Product configuration, meeting scheduling, and the construction of autonomous agents are among the artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, aim at directing the search towards the best assignments, thus allowing the bounding functions to prune more branches of the search tree. Several constrained optimization problems exhibit dominance relations. These dominance relations can be particularly useful as they suggest new pruning rules for discarding non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules used in the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules we have developed, and their different combinations, have varying impact on the performance of the branch and bound technique.
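To make the pruning idea concrete, here is a minimal branch-and-bound skeleton with a pruning hook where a dominance test would sit. The knapsack-style instance and the optimistic bound standing in for a dominance rule are toy assumptions, not the thesis's comparative-preference machinery.

```python
# Branch-and-bound sketch with a pruning hook (toy knapsack-style instance;
# an optimistic bound stands in for the thesis's dominance-based rules).
def branch_and_bound(values, capacity):
    best = [0.0]

    def prunable(taken_value, depth):
        # prune when even taking every remaining item cannot beat the incumbent
        return taken_value + sum(values[depth:]) <= best[0]

    def recurse(depth, taken_value, slack):
        if depth == len(values):
            best[0] = max(best[0], taken_value)
            return
        if prunable(taken_value, depth):
            return                                   # branch pruned
        if values[depth] <= slack:                   # branch 1: take the item
            recurse(depth + 1, taken_value + values[depth], slack - values[depth])
        recurse(depth + 1, taken_value, slack)       # branch 2: skip the item

    recurse(0, 0.0, capacity)
    return best[0]

print(branch_and_bound([4.0, 3.0, 2.0], capacity=5.0))   # -> 5.0
```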
Abstract:
We propose an estimation-theoretic approach to the inference of an incoherent 3D scattering density from 2D scattered speckle field measurements. The object density is derived from the covariance of the speckle field. The inference is performed by a constrained optimization technique inspired by compressive sensing theory. Experimental results demonstrate and verify the performance of our estimates.
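The compressive-sensing flavor of the inference can be suggested with a standard sparse-recovery iteration. The sketch below runs ISTA on a synthetic problem; the random operator A and the sparse x_true are invented, and ISTA is only one representative of the constrained techniques the abstract alludes to.

```python
import numpy as np

# ISTA sketch for sparse recovery: minimize 0.5*||A x - b||^2 + lam*||x||_1
# (synthetic A and x_true; not the paper's covariance-based operator).
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
x = np.zeros(100)
for _ in range(1000):
    g = x - step * A.T @ (A @ x - b)            # gradient step on the data term
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
print(np.flatnonzero(np.abs(x) > 0.1))          # typically recovers {5, 37, 80}
```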
Abstract:
An overview of a many-body approach to calculation of electronic transport in molecular systems is given. The physics required to describe electronic transport through a molecule at the many-body level, without relying on commonly made assumptions such as the Landauer formalism or linear response theory, is discussed. Physically, our method relies on the incorporation of scattering boundary conditions into a many-body wavefunction and application of the maximum entropy principle to the transport region. Mathematically, this simple physical model translates into a constrained nonlinear optimization problem. A strategy for solving the constrained optimization problem is given. (C) 2004 Wiley Periodicals, Inc.
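The maximum entropy principle the abstract names reduces, in its simplest form, to a small constrained optimization. Below is a generic sketch: maximize entropy subject to normalization and a toy mean-energy constraint. The level energies and target expectation are assumptions, and the real scattering boundary conditions are far richer than this.

```python
import numpy as np
from scipy.optimize import minimize

# Generic maximum-entropy sketch: maximize -sum(p log p) subject to
# sum(p) = 1 and a toy mean-energy constraint (values assumed).
E = np.array([0.0, 1.0, 2.0, 3.0])      # hypothetical level energies
E_mean = 1.2                             # constrained expectation

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ E - E_mean}]
res = minimize(neg_entropy, np.full(4, 0.25), constraints=cons,
               bounds=[(0.0, 1.0)] * 4, method="SLSQP")
print(res.x)    # a Gibbs-like distribution matching the constraints
```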
Abstract:
The Unit Commitment Problem (UCP) in power systems refers to the problem of determining the on/off status of generating units that minimizes the operating cost over a given time horizon. Since various system and generation constraints are to be satisfied while finding the optimum schedule, UCP turns out to be a constrained optimization problem in power system scheduling. The numerical solutions developed are limited to small systems, and heuristic methodologies have difficulty handling the stochastic cost functions associated with practical systems. This paper models unit commitment as a multi-stage decision-making task, and an efficient Reinforcement Learning solution is formulated considering minimum up-time/down-time constraints. The correctness and efficiency of the developed solutions are verified on standard test systems.
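As a cartoon of the multi-stage decision view, the sketch below uses tabular Q-learning to commit a single unit against an assumed demand profile. The costs and demand are invented, and the minimum up/down-time logic is omitted to keep the sketch short.

```python
import numpy as np

# Toy Q-learning for single-unit commitment: state (hour, status), action
# on/off, with assumed fuel cost and a heavy penalty for unmet demand.
rng = np.random.default_rng(0)
hours = 6
demand = np.array([1, 1, 0, 0, 1, 1])
Q = np.zeros((hours, 2, 2))                  # indexed by (hour, status, action)

def reward(t, a):
    run_cost = 0.5 * a                       # fuel cost while committed
    shortfall = 4.0 * max(0, demand[t] - a)  # penalty for unmet demand
    return -(run_cost + shortfall)

for _ in range(3000):
    status = 0
    for t in range(hours):
        greedy = int(np.argmax(Q[t, status]))
        a = int(rng.integers(2)) if rng.random() < 0.1 else greedy
        target = reward(t, a) + (0.0 if t == hours - 1 else np.max(Q[t + 1, a]))
        Q[t, status, a] += 0.1 * (target - Q[t, status, a])
        status = a

status, schedule = 0, []
for t in range(hours):
    status = int(np.argmax(Q[t, status]))
    schedule.append(status)
print(schedule)                              # learned commitment follows demand
```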
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained applying SVM to the problem of detecting frontal human faces in real images.
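Each decomposition step solves a small instance of the SVM dual QP over a working set. The sketch below solves one such subproblem directly; the four-point data set and the value of C are assumptions, and a general-purpose SLSQP call stands in for the paper's second-order Reduced Gradient solver.

```python
import numpy as np
from scipy.optimize import minimize

# One SVM dual QP subproblem: min 0.5*a'Qa - sum(a)  s.t.  y'a = 0, 0 <= a <= C
# (toy linearly separable data; the full algorithm iterates this over chunks).
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 10.0
Q = (y[:, None] * y[None, :]) * (X @ X.T)      # dense quadratic form

def neg_dual(a):                                # maximize dual => minimize -dual
    return 0.5 * a @ Q @ a - a.sum()

res = minimize(neg_dual, np.zeros(4), method="SLSQP",
               bounds=[(0.0, C)] * 4,                              # box constraints
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])  # linear constraint
w = (res.x * y) @ X
print(res.x, w)                                 # multipliers and weight vector
```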
Abstract:
Identifying a periodic time-series model from environmental records, without imposing positivity of the growth rate, does not necessarily respect the time order of the data observations. Consequently, subsequent observations sampled in the environmental archive can be inverted on the time axis, resulting in a non-physical signal model. In this paper an optimization technique with linear constraints on the signal model parameters is proposed that prevents time inversions. The activation conditions for this constrained optimization are based upon the physical constraint on the growth rate, namely, that it cannot take values smaller than zero. The actual constraints are defined for polynomials and first-order splines as basis functions for the nonlinear contribution in the distance-time relationship. The method is compared with an existing method that eliminates the time inversions, and its noise sensitivity is tested by means of Monte Carlo simulations. Finally, the usefulness of the method is demonstrated on measurements of vessel density in a mangrove tree, Rhizophora mucronata, and of Mg/Ca ratios in a bivalve, Mytilus trossulus.
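The core idea, constraining the fitted distance-time relation so its derivative never goes negative, can be sketched with a simple basis. Below, a cubic polynomial is fit to synthetic data with the derivative constrained to be nonnegative on a grid; the data, basis, and grid are all assumptions, not the paper's spline formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Fit a cubic distance-time polynomial with its derivative constrained >= 0
# on a grid, so the model never runs backwards in time (synthetic data).
t = np.linspace(0.0, 1.0, 20)
d = t + 0.05 * np.sin(8 * np.pi * t) \
      + 0.02 * np.random.default_rng(2).standard_normal(20)

def model(c, t):  return c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3
def deriv(c, t):  return c[1] + 2*c[2]*t + 3*c[3]*t**2

grid = np.linspace(0.0, 1.0, 50)
res = minimize(lambda c: np.sum((model(c, t) - d) ** 2), np.zeros(4),
               constraints=[{"type": "ineq", "fun": lambda c: deriv(c, grid)}])
print(res.x, deriv(res.x, grid).min() >= -1e-6)   # growth rate stays nonnegative
```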
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which simultaneously seeks the smallest training error and the smallest norm of output weights. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
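The real-valued ELM building block that, per the abstract, the CELM reduces to six copies of fits in a few lines: a random hidden layer followed by a regularized least-squares readout. The sketch below uses invented data; the ridge form beta = (H^T H + I/C)^{-1} H^T T mirrors the smallest-error/smallest-norm trade-off mentioned above.

```python
import numpy as np

# Minimal real-valued ELM: random hidden layer + regularized least-squares
# readout (random data and hyperparameters are assumptions).
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4))                   # inputs
T = (X[:, 0] + X[:, 1] > 0).astype(float) * 2 - 1   # +/-1 targets

L, C = 50, 10.0                                     # hidden nodes, regularization
W, b = rng.standard_normal((4, L)), rng.standard_normal(L)
H = np.tanh(X @ W + b)                              # random hidden layer

# Ridge solution: beta = (H^T H + I/C)^{-1} H^T T
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
pred = np.sign(np.tanh(X @ W + b) @ beta)
print((pred == T).mean())                           # training accuracy
```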