887 results for nonlinear optimization problems


Relevance:

80.00%

Publisher:

Abstract:

This manuscript presents three approaches, analytical, experimental, and numerical, to studying the behaviour of a flexible membrane tidal energy converter. This technology, developed by the EEL Energy company, is based on periodic deformations of a pre-stressed flexible structure. Energy converters located on each side of the device are driven by the membrane's wave-like motion. In the analytical model, the membrane is represented by a one-dimensional linear beam and the flow by a three-dimensional potential fluid; the fluid forces are evaluated by elongated-body theory, and energy is dissipated along the whole length of the membrane. A 1/20th-scale experimental prototype was designed with micro-dampers to simulate the power take-off, and trials validated the undulating-membrane energy-converter concept. A numerical model was also developed, in which each element of the device is represented and energy dissipation is handled by damper elements whose damping law is linear in damper velocity. Comparison of the three approaches validates their ability to represent the membrane behaviour without damping. The energy dissipation applied in the analytical model differs clearly from the other two models in both the location where energy is dissipated and the damping law. The other two show similar behaviour and the same order of power take-off distribution, but the power take-off values are underestimated by the numerical model. These three approaches identify the key parameters on which the membrane behaviour depends, and the parametric study highlights the complementarity, and the advantage, of developing three approaches in parallel to answer industrial optimization problems. To link flume-tank trials with sea trials, a 1/6th-scale prototype was built, for which the change of scale was studied. The behaviour of the two prototypes is compared, and the differences can be explained by differences in boundary conditions and by confinement effects. To evaluate the membrane's long-term behaviour at sea, temperature-accelerated ageing and fatigue tests were carried out on samples of the prototype materials submerged in sea water.
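As a rough illustration of the kind of analytical model summarized above (a one-dimensional beam under axial pre-stress), the sketch below computes the first natural frequencies of a pinned-pinned, pre-tensioned Euler-Bernoulli beam by finite differences. The governing equation is standard, but all parameter values, and the use of a dry beam rather than the coupled fluid model, are placeholder assumptions, not the thesis's model.

```python
# Minimal sketch (not the thesis model): modes of a pre-stressed beam,
# EI*w'''' - T*w'' = rho_A * omega^2 * w, pinned ends, central differences.
import numpy as np
from scipy.linalg import eigh

E, I, T = 3.0e9, 1.0e-6, 5.0e3     # modulus, second moment, axial pre-tension
rho_A, L, n = 12.0, 2.0, 200       # mass per unit length, length, interior nodes

h = L / (n + 1)
main4 = 6.0 * np.ones(n)
main4[0] = main4[-1] = 5.0         # pinned-end correction (w = w'' = 0)
D4 = (np.diag(main4) + np.diag(-4.0 * np.ones(n - 1), 1)
      + np.diag(-4.0 * np.ones(n - 1), -1)
      + np.diag(np.ones(n - 2), 2) + np.diag(np.ones(n - 2), -2)) / h ** 4
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h ** 2

K = E * I * D4 - T * D2            # bending stiffness plus pre-stress term
M = rho_A * np.eye(n)              # lumped mass matrix
omega_sq = eigh(K, M, eigvals_only=True)[:3]
print("first natural frequencies [Hz]:", np.sqrt(omega_sq) / (2 * np.pi))
```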

Relevance:

80.00%

Publisher:

Abstract:

Given that landfills are depletable and replaceable resources, the right approach to landfill management is to design an optimal sequence of landfills rather than to design every single landfill separately. In this paper we use optimal control models, with mixed elements of both continuous- and discrete-time problems, to determine an optimal sequence of landfills with respect to their capacities and lifetimes. The resulting optimization problems involve splitting the planning horizon into several subintervals whose lengths have to be decided. In each subinterval, costs whose amounts depend on the values of the decision variables have to be borne. The results obtained may be applied to other economic problems such as private and public investment, consumption decisions on durable goods, etc.
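The following is a stylized sketch, not the paper's model, of the kind of sequencing problem described: choose the lifetimes of N successive landfills covering a planning horizon so as to minimize the discounted sum of construction costs, with economies of scale in capacity. All functional forms and numbers are illustrative assumptions.

```python
# Stylized landfill sequencing: landfill i has capacity w*T_i (waste arrives
# at constant rate w) and its construction cost K + a*(w*T)**b, b < 1, is
# paid at its opening date. Lifetimes must cover the horizon H.
import numpy as np
from scipy.optimize import minimize

w, H, N, r = 100.0, 30.0, 3, 0.05     # waste rate, horizon, landfills, discount
K, a, b = 500.0, 20.0, 0.7            # fixed cost, variable cost, scale exponent

def discounted_cost(T):
    starts = np.concatenate(([0.0], np.cumsum(T)[:-1]))   # opening dates
    return np.sum(np.exp(-r * starts) * (K + a * (w * T) ** b))

cons = ({'type': 'eq', 'fun': lambda T: T.sum() - H},)    # lifetimes cover H
res = minimize(discounted_cost, x0=np.full(N, H / N),
               bounds=[(1e-3, H)] * N, constraints=cons)
print("optimal lifetimes:", np.round(res.x, 2), " cost:", round(res.fun, 1))
```

Discounting pushes costly capacity toward later dates while economies of scale favour fewer, larger landfills, so the optimal lifetimes are generally unequal.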

Relevance:

80.00%

Publisher:

Abstract:

Wireless power transfer (WPT) and radio frequency (RF)-based energy harvesting give rise to a new wireless network paradigm termed the wireless powered communication network (WPCN), in which energy-constrained nodes can harvest energy from the RF signals transferred by other, energy-sufficient nodes to support the communication operations in the network. This is a promising approach for future energy-constrained wireless network design. In this paper, we focus on optimal WPCN design. We consider a network composed of two communication groups, where the first group has sufficient power supply but no available bandwidth, and the second group has licensed bandwidth but very limited power to perform its required information transmission. For such a system, we introduce power and bandwidth cooperation between the two groups so that both groups can accomplish their expected information-delivery tasks. Multiple antennas are employed at the hybrid access point (H-AP) to enhance both energy and information transfer efficiency, and cooperative relaying is employed to help the power-limited group enhance its information transmission throughput. Compared with existing work, cooperative relaying, time assignment, power allocation, and energy beamforming are jointly designed in a single system. Firstly, we propose a cooperative transmission protocol for the considered system, where group 1 transmits some power to group 2 to help group 2 with its information transmission, and group 2 gives some bandwidth to group 1 in return. Secondly, to explore the information transmission performance limit of the system, we formulate two optimization problems to maximize the system weighted sum rate by jointly optimizing the time assignment, power allocation, and energy beamforming under two different power constraints, namely a fixed power constraint and an average power constraint. To make the cooperation between the two groups meaningful and to guarantee the quality-of-service (QoS) requirements of both groups, the minimal required data rates of the two groups are imposed as constraints in the optimal system design. As both problems are non-convex and have no known solutions, we solve them by using proper variable substitutions and semi-definite relaxation (SDR). We theoretically prove that our proposed solution method is guaranteed to find the global optimal solution. Thirdly, since the WPCN has promising application potential in future energy-constrained networks, e.g., wireless sensor networks (WSN), wireless body area networks (WBAN), and the Internet of Things (IoT), where power consumption is critical, we investigate the minimal-power-consumption optimal design for the considered cooperative WPCN. For this, we formulate an optimization problem to minimize the total consumed power by jointly optimizing the time assignment, power allocation, and energy beamforming under required data-rate constraints. As this problem is also non-convex and has no known solution, we solve it by using variable substitutions and the SDR method, and we theoretically prove that our proposed solution method for the minimal-power-consumption design also guarantees the global optimal solution. Extensive experimental results are provided to discuss the system performance behaviour and yield useful insights for future WPCN design. They show that the average-power-constrained system achieves a higher weighted sum rate than the fixed-power-constrained system, and that in such a WPCN the relay should be placed closer to the multi-antenna H-AP to achieve a higher weighted sum rate and lower total power consumption.
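To make the SDR step concrete, here is a minimal toy sketch of lifting a beamforming vector into a positive semi-definite matrix, solving the relaxed problem, and recovering a beam from the dominant eigenvector. It uses cvxpy with randomly generated channels; the thesis's joint design over time assignment, power allocation, and relaying is not reproduced.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_tx, P = 4, 1.0                                   # H-AP antennas, power budget
h1 = rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)  # energy-user channel
h2 = rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)  # rate-user channel

# Lift the beamformer x into W = x x^H and drop rank(W) = 1: the problem
# becomes a linear SDP in W.
W = cp.Variable((n_tx, n_tx), hermitian=True)
received = lambda h: cp.real(h.conj() @ W @ h)     # |h^H x|^2 = h^H W h
prob = cp.Problem(cp.Maximize(received(h1)),
                  [W >> 0,
                   cp.real(cp.trace(W)) <= P,      # transmit power budget
                   received(h2) >= 0.5])           # toy QoS-style constraint
prob.solve()

# If the solution is (near) rank one, its dominant eigenvector solves the
# original problem; otherwise Gaussian randomization would be applied.
vals, vecs = np.linalg.eigh(W.value)
beam = np.sqrt(max(vals[-1], 0)) * vecs[:, -1]
print("rank-1 energy fraction:", vals[-1] / vals.sum())
print("objective:", prob.value)
```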

Relevance:

80.00%

Publisher:

Abstract:

In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product; a cascade is then caused by these initial adopters, and other people start to adopt the product due to the influence they receive from earlier adopters. The goal is to find the minimum cost that results in the entire network adopting the product. We first study the Weighted Target Set Selection (WTSS) problem, in which the diffusion can take place over as many time periods as needed and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period yields the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period. We apply a common research paradigm to each of these four problems: first we work on special graphs, trees and cycles, and based on the insights obtained there we develop efficient methods for general graphs. On trees, we propose a polynomial-time algorithm and, more importantly, present a tight and compact extended formulation; we also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial-time algorithm on cycles. This leads to our contribution on general graphs: for the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
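For intuition on the target-set-selection setting, the sketch below simulates threshold diffusion on a tiny graph and finds a minimum (unweighted) seed set by brute-force enumeration. The dissertation's integer programming and branch-and-cut machinery is what makes this tractable at scale; the graph and thresholds here are made up.

```python
from itertools import combinations

graph = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2]}   # a small tree
threshold = {0: 1, 1: 2, 2: 2, 3: 1, 4: 1}                  # influence needed

def diffuses_to_all(seeds):
    active = set(seeds)
    changed = True
    while changed:                      # rounds of influence propagation
        changed = False
        for v in graph:
            if v not in active and \
               sum(u in active for u in graph[v]) >= threshold[v]:
                active.add(v)
                changed = True
    return len(active) == len(graph)

nodes = list(graph)
for k in range(len(nodes) + 1):         # smallest seed sets first
    hit = next((set(c) for c in combinations(nodes, k)
                if diffuses_to_all(set(c))), None)
    if hit is not None:
        print("minimum target set:", hit)
        break
```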

Relevance:

80.00%

Publisher:

Abstract:

Metaheuristics are widely used in discrete optimization. They obtain good-quality solutions in reasonable time for problems that are large, complex, and hard to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The goal of an adaptive metaheuristic is to let the method adjust some of these parameters automatically, based on the instance being solved. By exploiting prior knowledge of the problem together with notions from machine learning and related fields, an adaptive metaheuristic yields a more general and automatic method for solving problems. The global optimization of mining complexes aims to determine the movements of materials in the mines and the processing flows so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and nonlinear constraints, solving these models with the optimizers available in industry is often prohibitive. Consequently, metaheuristics are often used to optimize mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The method developed by the authors requires many parameters to work. One of them governs how the simulated annealing method searches the local neighbourhood of solutions. This thesis implements an adaptive neighbourhood-search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
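A minimal sketch of the adaptive idea described above, assuming a toy objective in place of the mining-complex model: simulated annealing whose move types are selected with probabilities that adapt to the improvement each type has produced.

```python
import math, random

random.seed(1)
n = 20
x = [random.randint(-10, 10) for _ in range(n)]
obj = lambda v: sum((vi - 3) ** 2 for vi in v)     # stand-in objective

moves = {   # two neighbourhood move types of different "step sizes"
    "small": lambda v, i: v[:i] + [v[i] + random.choice((-1, 1))] + v[i + 1:],
    "big":   lambda v, i: v[:i] + [random.randint(-10, 10)] + v[i + 1:],
}
weight = {m: 1.0 for m in moves}                   # adaptive selection weights

T, cur = 50.0, obj(x)
for it in range(20000):
    names = list(moves)
    m = random.choices(names, [weight[k] for k in names])[0]
    cand = moves[m](x, random.randrange(n))
    delta = obj(cand) - cur
    if delta < 0 or random.random() < math.exp(-delta / T):
        x, cur = cand, cur + delta
    # reward move types that deliver improvement; the floor keeps exploring
    weight[m] = max(0.99 * weight[m] + 0.01 * max(-delta, 0.0), 0.01)
    T *= 0.9995                                    # geometric cooling
print("final objective:", cur, " weights:",
      {k: round(v, 3) for k, v in weight.items()})
```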

Relevance:

80.00%

Publisher:

Abstract:

The genetic algorithm is a very efficient tool for solving optimization problems. Classroom assignment in any educational institution, particularly one that does not have enough classrooms for its course demand, is such an optimization problem. In the Department of Computer Science (Universidad de Costa Rica) this work is carried out manually every six months, and at least two people in the department are dedicated full time to this task for a week or more. The present article describes an automatic solution that not only reduces the response time to seconds but also finds an optimal solution in the majority of cases. It also gives flexibility in using the program when the information involved in classroom assignment has to be updated. The interface is simple and easy to use.
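A small sketch of this kind of genetic algorithm, with entirely hypothetical courses, rooms, and timeslots (not the department's actual data): individuals assign each course a (room, timeslot) pair, and the fitness penalizes undersized rooms and double bookings.

```python
import random

random.seed(0)
courses = [("prog1", 40), ("db", 25), ("ai", 30), ("os", 35), ("nets", 20)]
rooms = [("lab", 30), ("aula1", 45), ("aula2", 25)]        # (name, capacity)
slots = ["mon-8", "mon-10", "tue-8"]
genes = [(r, s) for r in range(len(rooms)) for s in range(len(slots))]

def fitness(ind):                                  # lower is better
    pen = sum(max(0, size - rooms[r][1])           # room too small
              for (r, s), (_, size) in zip(ind, courses))
    pen += 10 * (len(ind) - len(set(ind)))         # double-booked room/slot
    return pen

def crossover(a, b):                               # one-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.choice(genes) for _ in courses] for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness)
    parents = pop[:20]                             # truncation selection
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(40)]
    for child in children:                         # random-reset mutation
        if random.random() < 0.3:
            child[random.randrange(len(child))] = random.choice(genes)
    pop = parents + children

best = min(pop, key=fitness)
print("penalty:", fitness(best))
for (name, _), (r, s) in zip(courses, best):
    print(f"  {name}: room {rooms[r][0]}, slot {slots[s]}")
```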

Relevance:

50.00%

Publisher:

Abstract:

The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
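A compact numerical sketch of a truncated Gauss-Newton iteration in the sense used above: the inner linear least-squares problem is solved only approximately, here by a fixed, small number of conjugate-gradient steps on the normal equations. The exponential-fit test problem is synthetic, chosen only for illustration.

```python
import numpy as np

t = np.linspace(0, 1, 50)
data = 2.0 * np.exp(-1.3 * t) + 0.5               # synthetic observations

def residual(x):                                  # model: a*exp(b*t) + c
    return x[0] * np.exp(x[1] * t) + x[2] - data

def jacobian(x):
    return np.stack([np.exp(x[1] * t),
                     x[0] * t * np.exp(x[1] * t),
                     np.ones_like(t)], axis=1)

def truncated_cg(A, b, iters):                    # stop after `iters` steps
    x, r = np.zeros_like(b), b.copy()
    p = r.copy()
    for _ in range(iters):
        rr = r @ r
        if rr < 1e-30:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        p = r + ((r @ r) / rr) * p
    return x

x = np.array([1.0, 0.0, 0.0])
for k in range(20):                               # outer Gauss-Newton loop
    f, J = residual(x), jacobian(x)
    step = truncated_cg(J.T @ J, -J.T @ f, iters=2)   # inexact inner solve
    x = x + step
print("estimate:", np.round(x, 3), "  true: [2.0, -1.3, 0.5]")
```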

Relevance:

50.00%

Publisher:

Abstract:

In this paper, a new equalizer learning scheme is introduced based on the algorithm of directional evolutionary multi-objective optimization (EMOO). Whilst nonlinear channel equalizers such as radial basis function (RBF) equalizers have been widely studied to combat the linear and nonlinear distortions in modern communication systems, most of them do not take into account the equalizers' generalization capabilities. In this paper, equalizers are designed with the aim of improving their generalization capabilities. It is proposed that this objective can be achieved by treating the equalizer design problem as a multi-objective optimization (MOO) problem, with each objective based on one of several training sets, and then deriving equalizers with good capabilities of recovering the signals from all the training sets. Conventional EMOO, which is widely applied to MOO problems, suffers from disadvantages such as slow convergence. Directional EMOO improves the computational efficiency of conventional EMOO by explicitly making use of directional information. The new equalizer learning scheme based on directional EMOO is applied to RBF equalizer design. Computer simulation demonstrates that the new scheme can be used to derive RBF equalizers with good generalization capabilities, i.e., good performance in predicting unseen samples.
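A minimal sketch of the underlying idea, treating equalizer design as multi-objective optimization with one MSE objective per training set and keeping the nondominated candidates. The paper's directional EMOO mechanics and the RBF structure are not reproduced; a linear FIR equalizer on a toy channel stands in for them.

```python
import numpy as np

rng = np.random.default_rng(3)
channel = np.array([0.8, 0.4])                      # toy linear channel

def make_set(n):                                    # one training set
    s = rng.choice([-1.0, 1.0], size=n)
    r = np.convolve(s, channel)[:n] + 0.1 * rng.normal(size=n)
    return r, s

sets = [make_set(200) for _ in range(3)]            # three objectives

def objectives(w):                                  # MSE on every training set
    return np.array([np.mean((np.convolve(r, w)[:len(s)] - s) ** 2)
                     for r, s in sets])

def nondominated(F):
    return [i for i, f in enumerate(F)
            if not any(np.all(g <= f) and np.any(g < f)
                       for j, g in enumerate(F) if j != i)]

pop = [rng.normal(size=4) for _ in range(40)]
for gen in range(150):
    kids = [w + 0.05 * rng.normal(size=4) for w in pop]   # mutation
    allw = pop + kids
    F = [objectives(w) for w in allw]
    elite = set(nondominated(F))
    order = sorted(range(len(allw)), key=lambda i: (i not in elite, F[i].sum()))
    pop = [allw[i] for i in order[:40]]             # Pareto members first

best = min(pop, key=lambda w: objectives(w).max())  # robust across all sets
print("per-set MSE:", np.round(objectives(best), 3))
```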

Relevance:

50.00%

Publisher:

Abstract:

An algorithm for solving nonlinear discrete-time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimization and Parameter Estimation (DISOPE), which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimization procedure. A version of the algorithm with a linear-quadratic model-based problem, implemented in the C++ programming language, is developed and applied to illustrative simulation examples. An analysis of the optimality and convergence properties of the algorithm is also presented.
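A much-simplified sketch of the model-reality iteration (not the published DISOPE algorithm): alternately estimate a correction term that absorbs the gap between the real plant and the model along the current trajectory, then re-solve the model-based optimal control problem with that correction. The plant, model, and cost here are toy choices.

```python
import numpy as np

N = 20                                            # control horizon

def reality(x, u):                                # "true" nonlinear plant
    return 0.9 * x + u + 0.05 * x ** 2

def model(x, u, a):                               # linear model + correction a
    return 0.9 * x + u + a

def rollout(u, a=None):
    xs = [1.0]
    for k in range(N):
        xs.append(reality(xs[-1], u[k]) if a is None
                  else model(xs[-1], u[k], a[k]))
    return np.array(xs)

def model_cost(u, a):                             # quadratic performance index
    xs = rollout(u, a)
    return np.sum(xs[1:] ** 2) + np.sum(u ** 2)

def solve_model_problem(a, iters=400, lr=0.05, eps=1e-5):
    u = np.zeros(N)                               # crude numerical optimizer
    for _ in range(iters):
        base = model_cost(u, a)
        g = np.zeros(N)
        for k in range(N):                        # finite-difference gradient
            up = u.copy(); up[k] += eps
            g[k] = (model_cost(up, a) - base) / eps
        u -= lr * g
    return u

a = np.zeros(N)
for it in range(10):                              # DISOPE-style outer loop
    u = solve_model_problem(a)
    a += rollout(u)[1:] - rollout(u, a)[1:]       # absorb model-reality gap
print("real cost:", round(np.sum(rollout(u)[1:] ** 2) + np.sum(u ** 2), 4))
```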

Relevance:

50.00%

Publisher:

Abstract:

SOMS is a general surrogate-based multistart algorithm, used in combination with any local optimizer to find global optima of computationally expensive functions with multiple local minima. SOMS differs from previous multistart methods in that a surrogate approximation is used by the multistart algorithm to help reduce the number of function evaluations needed to identify the most promising points from which to start each nonlinear programming local search. SOMS's numerical results are compared with four well-known methods, namely Multi-Level Single Linkage (MLSL), MATLAB's MultiStart, MATLAB's GlobalSearch, and GLOBAL. In addition, we propose a class of wavy test functions that mimic the wavy nature of objective functions arising in many black-box simulations. Extensive comparisons of the algorithms on the wavy test functions and on earlier standard global-optimization test functions are carried out for a total of 19 different test problems. The numerical results indicate that SOMS performs favorably in comparison to the alternative methods and does especially well on wavy functions when the number of function evaluations allowed is limited.
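A condensed sketch of the surrogate-assisted multistart idea (not the published SOMS algorithm): fit a cheap RBF surrogate to all points evaluated so far, rank many candidate starts on the surrogate, and launch the local optimizer only from promising, well-separated candidates.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def f(x):                                  # multimodal "expensive" function
    return float(np.sum(x ** 2) + 5 * np.sum(np.sin(3 * x) ** 2))

rng = np.random.default_rng(0)
dim, lo, hi = 2, -3.0, 3.0
X = rng.uniform(lo, hi, size=(30, dim))    # initial space-filling sample
y = np.array([f(x) for x in X])

best = None
for rnd in range(5):
    # Cheap global approximation of f from all points evaluated so far;
    # small smoothing keeps the fit stable if points nearly coincide.
    surrogate = RBFInterpolator(X, y, smoothing=1e-6)
    cand = rng.uniform(lo, hi, size=(500, dim))
    chosen = []
    for i in np.argsort(surrogate(cand)):  # promising and well separated
        if all(np.linalg.norm(cand[i] - c) > 1.0 for c in chosen):
            chosen.append(cand[i])
        if len(chosen) == 3:
            break
    for s in chosen:                       # expensive evaluations only here
        res = minimize(f, s, bounds=[(lo, hi)] * dim)
        X = np.vstack([X, res.x])
        y = np.append(y, res.fun)
        if best is None or res.fun < best[1]:
            best = (res.x, res.fun)
print("best point:", np.round(best[0], 3), " value:", round(best[1], 4))
```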

Relevance:

50.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that explore the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization stage. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which imply considerably fewer samples per simulation, during the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for prototyping and verifying new methodologies for system modelling and word-length optimization easily. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces of the tool in order to expand and improve the capabilities of HOPLITE.
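A toy sketch in the spirit of the Monte-Carlo-driven word-length optimization described above (not HOPLITE itself): each signal of a small FIR datapath gets its own fractional word-length, the output error is estimated by simulation, and a greedy loop shaves bits while an error budget holds.

```python
import numpy as np

rng = np.random.default_rng(7)
coeffs = np.array([0.25, 0.5, 0.25])

def quantize(v, bits):                     # round to a 2^-bits grid
    s = 2.0 ** bits
    return np.round(v * s) / s

def output_error(wl, n_samples=2000):      # Monte-Carlo RMS error estimate
    x = rng.uniform(-1, 1, size=(n_samples, 3))
    ref = x @ coeffs
    xq = quantize(x, wl[0])                # input word-length
    cq = quantize(coeffs, wl[1])           # coefficient word-length
    acc = quantize(xq @ cq, wl[2])         # accumulator word-length
    return np.sqrt(np.mean((acc - ref) ** 2))

cost = lambda wl: sum(wl)                  # proxy for hardware cost (bits)
budget = 1e-3                              # allowed RMS output error

wl = [16, 16, 16]                          # start from a safe assignment
improved = True
while improved:                            # greedy one-bit reductions
    improved = False
    for i in range(len(wl)):
        trial = wl.copy(); trial[i] -= 1
        if trial[i] > 0 and output_error(trial) <= budget:
            wl = trial; improved = True
print("word-lengths:", wl, " cost:", cost(wl), " err:", output_error(wl))
```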