13 results for Stochastic Approximation Algorithms

at Universidad Politécnica de Madrid


Relevance: 80.00%

Abstract:

In this paper, several computational schemes are presented for the optimal tuning of the global behavior of nonlinear dynamical systems. Specifically, the maximization of the size of domains of attraction associated with invariants in parametrized dynamical systems is addressed. Cell Mapping (CM) techniques are used to estimate the size of the domains, and such size is then maximized via different optimization tools. First, a genetic algorithm is tested, whose performance proves good for determining global maxima at the expense of high computational cost. Second, an iterative scheme based on a Stochastic Approximation procedure (the Kiefer-Wolfowitz algorithm) is evaluated, showing acceptable performance at low cost. Finally, several schemes combining neural network based estimations and optimization procedures are addressed with promising results. The performance of the methods is illustrated with two applications: first, on the well-known van der Pol equation with standard parametrization, and second, the tuning of a controller for saturated systems.
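For orientation, a minimal sketch of the Kiefer-Wolfowitz iteration used in the second scheme is given below, assuming a generic noisy objective; the gain schedules, starting point and toy objective are illustrative placeholders, not the settings of the paper.

```python
import numpy as np

def kiefer_wolfowitz(J, theta0, n_iter=500, a=0.5, c=0.5):
    """Maximize a noisy objective J via finite-difference stochastic approximation.

    Gains follow the classical schedules a_k = a/k and c_k = c/k^(1/3),
    which satisfy the standard convergence conditions.
    """
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k
        c_k = c / k ** (1.0 / 3.0)
        grad = np.empty_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = c_k
            # two-sided finite-difference estimate of the i-th partial derivative
            grad[i] = (J(theta + e) - J(theta - e)) / (2.0 * c_k)
        theta = theta + a_k * grad  # ascent step: we are maximizing
    return theta

# Toy usage: maximize a noisy concave function (hypothetical objective)
rng = np.random.default_rng(0)
noisy = lambda th: -np.sum((th - 1.0) ** 2) + 0.01 * rng.normal()
print(kiefer_wolfowitz(noisy, theta0=[0.0, 0.0]))  # converges near (1, 1)
```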

Relevance: 80.00%

Abstract:

The thesis MEDIDAS AUTOSEMEJANTES EN EL PLANO, MOMENTOS Y MATRICES DE HESSENBERG (Self-similar Measures on the Plane, Moments and Hessenberg Matrices) sits at the intersection of geometric measure theory, the theory of orthogonal polynomials and operator theory. It studies measures with compact support on the complex plane from the point of view of the infinite moment and Hessenberg matrices that represent them in the theory of orthogonal polynomials. More precisely, it concentrates on self-similar measures, the equilibrium measures defined by an iterated function system (IFS). Self-similar sets have the geometric property of decomposing into a union of pieces similar to the complete set. These pieces may overlap: when the overlap is small, Hutchinson's theory [Hut81] works well, but without restrictions it fails. The overlapping problem consists in controlling the measure of this overlap. Its complexity is exemplified by the infinite convolutions of Bernoulli distributions, which turn out to be self-similar measures in the real case. As early as 1935, Jessen and Wintner [JW35] posed this problem; far from being simple, it has been studied for more than seventy-five years, and the main questions raised by Garsia [Gar62] in 1962 remain unsolved. The interest this problem has attracted, as well as its difficulty, is attested by the many publications dealing with related questions; see, for example, [JW35], [Erd39], [PS96], [Ma00], [Ma96], [Sol98], [Mat95], [Sim05], [JKS07], [JKS11]. The first chapter introduces in detail self-similar measures in the complex plane and iterated function systems, together with the measure-theoretic concepts needed to describe them, followed by the necessary tools from orthogonal polynomials, infinite matrices and operators. The second and third chapters translate the geometric properties of self-similar measures into properties of the moment and Hessenberg matrices, respectively. From these results, algorithms are described to compute these matrices from the corresponding IFS; in particular, explicit formulas and approximation algorithms are obtained for the moments and moment matrices of fractal measures, starting from a fixed-point theorem for matrices. Moreover, using techniques from operator theory, the results obtained by G. Mantica [Ma00, Ma96] in the real case are extended to the complex plane. This result is the basis for a stable algorithm that approximates the Hessenberg matrix associated with a fractal measure, and for obtaining exact finite sections of Hessenberg matrices associated with a sum of measures. The last chapter considers more general measures μ and studies the asymptotic behaviour of the eigenvalues of a Hermitian moment matrix and its impact on the properties of the associated measure. The central result shows that if the associated polynomials are dense in L2(μ), then the minimum eigenvalue of the finite sections of the moment matrix of the measure necessarily tends to zero.
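As a back-of-the-envelope illustration of how self-similarity pins down moments, the sketch below implements the classical moment recursion for an IFS of complex similitudes w_i(z) = r_i z + b_i with weights p_i; this is a standard identity consistent with the fixed-point viewpoint above, not the thesis's algorithm itself, and the example IFS is illustrative.

```python
import numpy as np
from math import comb

def ifs_moments(r, b, p, n_max):
    """Complex moments m_n = ∫ z^n dμ of the invariant measure of the IFS
    w_i(z) = r_i z + b_i with probabilities p_i, via the self-similarity
    identity m_n = Σ_i p_i Σ_k C(n,k) r_i^k b_i^(n-k) m_k, solved for m_n."""
    r, b, p = map(np.asarray, (r, b, p))
    m = np.zeros(n_max + 1, dtype=complex)
    m[0] = 1.0
    for n in range(1, n_max + 1):
        s = sum(p[i] * sum(comb(n, k) * r[i] ** k * b[i] ** (n - k) * m[k]
                           for k in range(n))
                for i in range(len(p)))
        m[n] = s / (1.0 - np.sum(p * r ** n))
    return m

# Sanity check: w_0(z) = z/2, w_1(z) = z/2 + 1/2 with equal weights has
# Lebesgue measure on [0, 1] as its invariant measure, so m_n = 1/(n+1).
print(ifs_moments(r=[0.5, 0.5], b=[0.0, 0.5], p=[0.5, 0.5], n_max=5).real)
```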

Relevance: 80.00%

Abstract:

In this paper we propose four approximation algorithms (metaheuristic based) for the Minimum Vertex Floodlight Set problem. Urrutia et al. [9] solved the combinatorial problem, although it is strongly believed that the algorithmic problem is NP-hard. We conclude that, on average, the minimum number of vertex floodlights needed to illuminate an orthogonal polygon with n vertices is n/4.29.

Relevance: 80.00%

Abstract:

"System identification deals with the problem of building mathematical models of dynamical systems based on observed data from the system" [1]. In the context of civil engineering, the system refers to a large-scale structure such as a building, bridge, or offshore structure, and identification mostly involves the determination of modal parameters (natural frequencies, damping ratios, and mode shapes). This paper presents modal identification results obtained by applying a state-of-the-art time-domain system identification method (data-driven stochastic subspace algorithms [2]) to output-only data measured on a steel arch bridge. First, a three-dimensional finite element model of the structure was developed in ANSYS. Modal analysis was carried out and modal parameters were extracted in the frequency range of interest, 0-10 Hz. The results of the finite element modal analysis were used to determine the location of the sensors. After that, ambient vibration tests were conducted during April 23-24, 2009. The response of the structure was measured using eight accelerometers. Two stations of three sensors were formed (triaxial stations); these sensors were held stationary for reference during the test. The two remaining sensors were placed at different measurement points along the bridge deck, where only vertical and transversal measurements were taken (biaxial stations). Point and interval estimates were then obtained for a state-space model using these ambient vibration measurements. In parametric models such as state-space models, the dynamic behaviour of the system is described using mathematical models, and mathematical relationships can then be established between the modal parameters and the estimated model parameters (thus, experimental modal analysis is commonly used as a synonym for system identification). Stable modal parameters are found using a stabilization diagram. Furthermore, this paper proposes a method for assessing the precision of the estimates of the state-space model parameters (confidence intervals). The approach employs the nonparametric bootstrap procedure [3] applied to the subspace parameter estimation algorithm. Using the bootstrap results, a plot similar to a stabilization diagram is developed; these graphics differentiate system modes from spurious noise modes for a given model order. Additionally, using the modal assurance criterion, the experimental modes obtained were compared with those evaluated from the finite element analysis, and quite good agreement between numerical and experimental results is observed.
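For context, once the subspace algorithm has identified a discrete-time state matrix A, the modal parameters follow from its eigenvalues by a standard post-processing step, sketched below; the single-mode example and sampling rate are illustrative, not data from the bridge test (scipy is assumed available for the check).

```python
import numpy as np
from scipy.linalg import expm

def modal_parameters(A, dt):
    """Natural frequencies (Hz) and damping ratios from a discrete-time
    state matrix A identified at sampling interval dt."""
    lam_d = np.linalg.eigvals(A)           # discrete-time eigenvalues
    lam_c = np.log(lam_d) / dt             # map to continuous time
    freqs = np.abs(lam_c) / (2.0 * np.pi)  # natural frequencies in Hz
    zetas = -lam_c.real / np.abs(lam_c)    # damping ratios
    keep = lam_d.imag > 0                  # one of each complex-conjugate pair
    return freqs[keep], zetas[keep]

# Illustrative 1-DOF check: f = 2 Hz, zeta = 1%, sampled at 100 Hz
f0, z0, dt = 2.0, 0.01, 0.01
w = 2 * np.pi * f0
Ac = np.array([[0.0, 1.0], [-w**2, -2 * z0 * w]])
print(modal_parameters(expm(Ac * dt), dt))  # ≈ (array([2.]), array([0.01]))
```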

Relevance: 40.00%

Abstract:

In recent decades, there has been increasing interest in systems comprised of several autonomous mobile robots, and as a result there has been substantial development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and in an individual manner, select a particular task so that all tasks are optimally distributed. In general, to distribute the multi-tasks among a team of robots, the robots have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks instead of being assigned one by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally depending on the load, that is, the number of pending tasks to be performed. It is also of interest to evaluate the results of each approach under perturbation, comparing the results obtained when noise is introduced in the number of pending loads in order to simulate the robot's error in estimating the real number of pending tasks. The main contribution of this thesis is the approach based on self-organization and division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks are presented and discussed. The particular issues studied are:

Threshold models: the experiments conducted to test the response threshold model (a minimal sketch follows this list), with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was introduced in the number of pending loads and dynamic tasks were generated over time.

Learning automata methods: the experiments conducted to test the learning automata-based probabilistic algorithms. The approach was tested by evaluating the system performance index with additive noise and with dynamic task generation, for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.

Ant colony optimization: the experiments conducted to test the ant colony optimization-based deterministic algorithms for the distribution of heterogeneous multi-tasks in multi-robot systems. In these experiments, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
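Here is the minimal sketch of a response threshold step referenced in the first item, assuming the classical engagement probability s^n / (s^n + θ^n) and a simple additive threshold update; the stimuli, update gains and population sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def threshold_step(stimuli, thresholds, n=2):
    """One decision step of a response threshold model: robot j engages in
    task i with probability s_i^n / (s_i^n + theta_ij^n)."""
    probs = stimuli ** n / (stimuli ** n + thresholds ** n)
    return rng.random(probs.shape) < probs  # boolean engagement matrix

# Illustrative run: 3 robots, 2 task types; stimuli reflect pending loads
stimuli = np.array([4.0, 0.5])                    # hypothetical load estimates
thresholds = rng.uniform(0.5, 2.0, size=(3, 2))   # one threshold per robot/task
for t in range(3):
    engaged = threshold_step(stimuli, thresholds)
    # adaptive update: lower thresholds of tasks performed, raise the others
    thresholds += np.where(engaged, -0.1, 0.05)
    thresholds = thresholds.clip(0.1, 5.0)
    print(t, engaged.astype(int))
```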

Relevance: 40.00%

Abstract:

This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a given sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
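For reference, the linear value-function setting the paper works in can be sketched with the classical TD(0) update; this is the textbook baseline, not the paper's proposed adaptive algorithm, and the toy two-state chain is illustrative.

```python
import numpy as np

def td0_linear(features, rewards, next_features, gamma=0.95, alpha=0.05, n_sweeps=2000):
    """TD(0) with a linear value-function approximation V(s) ≈ phi(s)·w.

    features[t] and next_features[t] are the feature vectors of s_t and
    s_{t+1}; rewards[t] is the reward observed on that transition.
    """
    w = np.zeros(features.shape[1])
    for _ in range(n_sweeps):
        for phi, r, phi_next in zip(features, rewards, next_features):
            td_error = r + gamma * phi_next @ w - phi @ w
            w += alpha * td_error * phi
    return w

# Toy deterministic 2-state cycle with one-hot features: the true values are
# V0 = 1/(1 - gamma^2) and V1 = gamma/(1 - gamma^2).
F = np.eye(2)                         # phi(s) for states 0 and 1
Fn = np.array([[0.0, 1.0], [1.0, 0.0]])  # next-state features for 0->1, 1->0
r = np.array([1.0, 0.0])
print(td0_linear(F, r, Fn))           # ≈ [10.26, 9.74] for gamma = 0.95
```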

Relevance: 30.00%

Abstract:

This paper proposes a novel approach for the calibration of stochastic discontinuity network parameters in rock masses, based on genetic algorithms (GAs). To validate the approach, we present examples of its application to cases with known parameters of the original Poisson discontinuity network. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to GA operations of selection, crossover and mutation. The back-calculated parameters are employed to assess the inference capabilities of the model, using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs depend significantly on the type of objective function considered; they also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases it can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
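A minimal sketch of a binary-encoded GA of the kind described, with tournament selection, one-point crossover and bit-flip mutation; the objective below (recovering a single hypothetical intensity parameter) stands in for the paper's discontinuity-network objective functions.

```python
import numpy as np

rng = np.random.default_rng(42)

def decode(bits, lo, hi):
    """Map a binary chromosome to a real parameter in [lo, hi]."""
    x = bits @ (2 ** np.arange(bits.size)[::-1])
    return lo + (hi - lo) * x / (2 ** bits.size - 1)

def ga_calibrate(fitness, n_bits=16, pop=40, gens=100, p_cx=0.8, p_mut=0.02):
    """Minimal binary GA maximizing `fitness` over bit-string chromosomes."""
    P = rng.integers(0, 2, size=(pop, n_bits))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        # size-2 tournament selection of parents
        idx = rng.integers(0, pop, size=(pop, 2))
        parents = P[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # one-point crossover between consecutive parents
        children = parents.copy()
        for i in range(0, pop - 1, 2):
            if rng.random() < p_cx:
                c = rng.integers(1, n_bits)
                children[i, c:] = parents[i + 1, c:]
                children[i + 1, c:] = parents[i, c:]
        # bit-flip mutation
        children ^= (rng.random(children.shape) < p_mut).astype(int)
        P = children
    f = np.array([fitness(ind) for ind in P])
    return P[f.argmax()]

# Hypothetical objective: recover a known discontinuity intensity lambda* = 3.2
target = 3.2
fit = lambda bits: -(decode(bits, 0.0, 10.0) - target) ** 2
print(decode(ga_calibrate(fit), 0.0, 10.0))  # ≈ 3.2
```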

Relevance: 30.00%

Abstract:

We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4 − 2ɛ of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe the convergence of the resulting renormalization group flow to a unique fixed point, which yields a kinetic energy spectrum scaling in agreement with canonical dimension analysis. Kolmogorov's −5/3 law is thus recovered for ɛ = 2, as also predicted by perturbative renormalization. At variance with the perturbative prediction, the −5/3 law emerges in the presence of a saturation in the ɛ-dependence of the scaling dimension of the eddy diffusivity at ɛ = 3/2 when, according to perturbative renormalization, the velocity field becomes infrared relevant.

Relevance: 30.00%

Abstract:

The algorithms and graphical user interface software package "OPT-PROx" were developed to meet food engineering needs related to canned food thermal processing simulation and optimization. The "OPT-PROx" package (http://tomakechoice.com/optprox/index.html) utilizes the adaptive random search algorithm and its modification coupled with a penalty function approach, and finite difference methods with cubic spline approximation. The diversity of thermal food processing optimization problems with different objectives and required constraints is solvable by the developed software. The geometries supported by "OPT-PROx" are: (1) cylinder, (2) rectangle, (3) sphere. The mean square error minimization principle is utilized to estimate the heat transfer coefficient of the food to be heated under optimal conditions. The user-friendly dialogue and the numerical procedures used make the "OPT-PROx" software useful to food scientists in research and education, as well as to engineers involved in the optimization of thermal food processing.
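The heat-transfer-coefficient estimation step can be pictured as the following least-squares random search; the lumped-capacitance forward model, bounds and synthetic data below are placeholders, since OPT-PROx itself solves the full conduction problem by finite differences with cubic spline approximation.

```python
import numpy as np

rng = np.random.default_rng(7)

def predict_temperature(h, t):
    """Placeholder lumped-capacitance model T(t) = T_env + (T0 - T_env)·exp(-h·t);
    a stand-in for the finite-difference conduction solver."""
    T0, T_env = 20.0, 120.0
    return T_env + (T0 - T_env) * np.exp(-h * t)

def estimate_h(t, T_meas, lo=0.001, hi=1.0, n_samples=5000):
    """Estimate h by minimizing the mean square error over random candidates."""
    best_h, best_mse = None, np.inf
    for h in rng.uniform(lo, hi, n_samples):
        mse = np.mean((predict_temperature(h, t) - T_meas) ** 2)
        if mse < best_mse:
            best_h, best_mse = h, mse
    return best_h

t = np.linspace(0.0, 30.0, 31)
T_meas = predict_temperature(0.15, t) + rng.normal(0.0, 0.5, t.size)  # synthetic data
print(estimate_h(t, T_meas))  # ≈ 0.15
```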

Relevance: 30.00%

Abstract:

This paper focuses on the general problem of coordinating multiple robots; more specifically, it addresses the self-selection of heterogeneous specialized tasks by autonomous robots. We focus on a distributed or decentralized approach, as we are particularly interested in solutions where the robots themselves, autonomously and in an individual manner, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose a solution using two different approaches: Response Threshold Models and Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate the robot's error in estimating the real number of pending tasks, and by generating loads dynamically through time. The paper ends with a critical discussion of the experimental results.
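Complementing the response-threshold sketch given with an earlier result in this list, the Learning Automata side can be illustrated with the classical linear reward-inaction update; the two-task environment and success rates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def lri_update(p, action, reward, lam=0.1):
    """Linear reward-inaction (L_RI): on success, shift probability mass
    toward the chosen action; on failure, leave probabilities unchanged."""
    if reward:
        p = p * (1.0 - lam)
        p[action] += lam
    return p

# Toy run: 2 tasks, task 0 succeeds 80% of the time, task 1 only 30%
p = np.array([0.5, 0.5])
success = np.array([0.8, 0.3])
for _ in range(500):
    a = rng.choice(2, p=p)
    p = lri_update(p, a, rng.random() < success[a])
print(p)  # probability mass concentrates on task 0
```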

Relevance: 30.00%

Abstract:

The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular accidents, that are considered plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for a comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) demands a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. Here is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA to include accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation in the risk assessment process, which requires the use of complex nuclear plant models, is what makes the approach so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work.
This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. Such techniques have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task; because of time limitations, the scope of the work had to be reduced, and some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full details of their mathematical background and procedures. Later, the test case used is described, and the results of applying the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
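To see why reducing the number of simulations matters, consider the crude Monte Carlo estimate of a small damage probability sketched below, where a trivial threshold test stands in for a full accident-sequence simulation; the damage domain and distribution are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def run_sequence(x):
    """Stand-in for an expensive accident-sequence simulation: returns True
    when the sampled condition exceeds a damage threshold (hypothetical)."""
    return x > 3.0  # with x ~ N(0,1), the true probability is ≈ 1.35e-3

def crude_mc(n):
    hits = run_sequence(rng.standard_normal(n))
    p = hits.mean()
    se = np.sqrt(p * (1.0 - p) / n)  # standard error of the estimate
    return p, se

for n in (10_000, 1_000_000):
    p, se = crude_mc(n)
    print(f"n={n:>9}: p={p:.2e} +/- {1.96 * se:.1e}")
# The relative error shrinks only as 1/sqrt(n*p): rare damage events need
# huge sample sizes, which motivates more efficient estimation techniques.
```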

Relevance: 30.00%

Abstract:

Monte Carlo (MC) methods are widely used in signal processing, machine learning and stochastic optimization. A well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information using another MCMC technique working on the entire population of current states. These parallel "vertical" chains are driven by random-walk proposals, whereas the "horizontal" MCMC uses an independent proposal, which can be easily adapted by making use of all the generated samples. Numerical results show the advantages of the proposed sampling scheme in terms of mean absolute error, as well as robustness with respect to initial values and parameter choice.
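A minimal sketch of the vertical/horizontal idea on a toy one-dimensional target: several random-walk chains run in parallel, and an independence sampler whose proposal is fitted to the population of generated samples occasionally refreshes one chain. The target, adaptation rule and parameters are illustrative, not the exact scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
log_target = lambda v: -0.5 * (v - 4.0) ** 2  # toy Gaussian target, mean 4

def norm_logpdf(v, mu, sd):
    # log N(v; mu, sd) up to an additive constant (it cancels in the MH ratio)
    return -0.5 * ((v - mu) / sd) ** 2 - np.log(sd)

N, T = 8, 2000
x = rng.normal(size=N)          # current states of the N "vertical" chains
history = []
for t in range(T):
    # vertical steps: one random-walk Metropolis move per chain
    prop = x + rng.normal(size=N)
    accept = np.log(rng.random(N)) < log_target(prop) - log_target(x)
    x = np.where(accept, prop, x)
    history.append(x.copy())
    # horizontal step: independent-proposal MH on one chain, with the
    # proposal fitted to the whole population of generated samples
    mu, sd = np.mean(history), np.std(history) + 1e-9
    j = rng.integers(N)
    y = rng.normal(mu, sd)
    log_a = (log_target(y) - log_target(x[j])
             + norm_logpdf(x[j], mu, sd) - norm_logpdf(y, mu, sd))
    if np.log(rng.random()) < log_a:
        x[j] = y
print(np.mean(history[T // 2:]))  # ≈ 4, the target mean
```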

Relevance: 30.00%

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals keep increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays is found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors, and we show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usual in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a moving object detection system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen little use due to their large computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for the dynamic estimation of kernel bandwidths, selective update of the background model, update of the position of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce the computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of the proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
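The piecewise linear approximation idea, which the GPU evaluates through its texture filtering units, can be previewed on the CPU with a uniform grid; the test function and grid sizes below are illustrative, and the quasi-optimal partition of the thesis is not implemented here.

```python
import numpy as np

def make_pwl(f, lo, hi, n_samples):
    """Tabulate f on a uniform grid; evaluation is then a linear interpolation,
    which is what GPU texture-filtering hardware computes per lookup."""
    xs = np.linspace(lo, hi, n_samples)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

f = lambda x: np.exp(-x) * np.sin(5 * x)   # arbitrary test function
for n in (16, 64, 256):
    approx = make_pwl(f, 0.0, np.pi, n)
    x = np.linspace(0.0, np.pi, 100_001)
    print(f"{n:4d} samples: max error = {np.max(np.abs(f(x) - approx(x))):.2e}")
# For f with bounded second derivative the maximum error decays as O(1/n^2),
# the kind of rate a rigorous error analysis quantifies before the partition
# of the domain is optimized.
```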