973 results for matrix function approximation
Abstract:
Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
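To make the two appeals to EM concrete, here is a minimal Python sketch of the E-step for a Gaussian mixture when missing features are encoded as NaN: responsibilities are computed from the marginal likelihood of the observed coordinates, and missing coordinates are filled with their per-component conditional expectations. All names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step_with_missing(X, pis, mus, Sigmas):
    """One E-step of EM for a Gaussian mixture when some features of X
    are missing (encoded as NaN). Responsibilities come from the marginal
    likelihood of the observed coordinates only, and each missing
    coordinate is replaced by its per-component conditional expectation."""
    n, d = X.shape
    K = len(pis)
    resp = np.zeros((n, K))
    X_filled = np.repeat(X[None, :, :], K, axis=0)  # one imputation per component
    for i in range(n):
        obs = ~np.isnan(X[i])
        mis = ~obs
        for k in range(K):
            mu_o = mus[k][obs]
            S_oo = Sigmas[k][np.ix_(obs, obs)]
            resp[i, k] = pis[k] * multivariate_normal.pdf(X[i, obs], mu_o, S_oo)
            if mis.any():
                # E[x_mis | x_obs, component k] of the conditional Gaussian
                S_mo = Sigmas[k][np.ix_(mis, obs)]
                X_filled[k, i, mis] = mus[k][mis] + S_mo @ np.linalg.solve(
                    S_oo, X[i, obs] - mu_o)
        resp[i] /= resp[i].sum()
    return resp, X_filled
```

The M-step then re-estimates mixture weights, means, and covariances from the responsibility-weighted, component-wise imputed data, which is where the two uses of EM (mixture estimation and missing-data handling) meet.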
Abstract:
Image analysis and graphics synthesis can be achieved with learning techniques that use image examples directly, without physically-based 3D models. In our technique: -- the mapping from novel images to a vector of "pose" and "expression" parameters can be learned from a small set of example images using a function approximation technique that we call an analysis network; -- the inverse mapping from input "pose" and "expression" parameters to output images can be synthesized from a small set of example images and used to produce new images using a similar synthesis network. The techniques described here have several applications in computer graphics, special effects, interactive multimedia and very low bandwidth teleconferencing.
Abstract:
We present a new method for estimating the expected return of a POMDP from experience. The estimator does not assume any knowledge of the POMDP and allows the experience to be gathered with an arbitrary set of policies. The return is estimated for any new policy of the POMDP. We motivate the estimator from function-approximation and importance-sampling points of view and derive its theoretical properties. Although the estimator is biased, it has low variance, and the bias is often irrelevant when the estimator is used for pair-wise comparisons. We conclude by extending the estimator to policies with memory and compare its performance in a greedy search algorithm to the REINFORCE algorithm, showing an order-of-magnitude reduction in the number of trials required.
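As a rough illustration of the importance-sampling point of view, here is a plain (unweighted) importance-sampling return estimator in Python. The paper's own estimator differs in detail; in particular it trades bias for variance, which the plain estimator below does not. All names are illustrative.

```python
import numpy as np

def is_return_estimate(trajectories, target_log_prob, behavior_log_prob, gamma=1.0):
    """Ordinary importance-sampling estimate of the expected return of a
    target policy from trajectories gathered under arbitrary behavior
    policies. Each trajectory is a list of (observation, action, reward)."""
    estimates = []
    for traj in trajectories:
        logw, ret = 0.0, 0.0
        for t, (obs, act, rew) in enumerate(traj):
            # accumulate the likelihood ratio of target vs. behavior policy
            logw += target_log_prob(obs, act) - behavior_log_prob(obs, act)
            ret += gamma**t * rew
        estimates.append(np.exp(logw) * ret)
    return np.mean(estimates)
```

Normalizing by the sum of the importance weights instead of the sample count gives the classic biased, lower-variance variant that makes pair-wise policy comparisons reliable in practice.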
Abstract:
In this paper, we employ techniques from artificial intelligence, such as reinforcement learning and agent-based modeling, as building blocks of a computational model for an economy based on conventions. First we model the interaction among firms in the private sector. These firms behave in an information environment based on conventions, meaning that a firm is likely to behave as its neighbors do if it observes that their actions lead to a good payoff. We then propose the use of reinforcement learning as a computational model for the role of the government in the economy, as the agent that determines fiscal policy with the objective of maximizing the growth of the economy. We present the implementation of a simulator of the proposed model, based on SWARM, that employs the SARSA(λ) algorithm combined with a multilayer perceptron as the function approximator for the action-value function.
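A compact stand-in for the learning loop: the paper combines SARSA(λ) with a multilayer perceptron, while the sketch below uses linear function approximation to keep the eligibility-trace update visible. It assumes a gym-style environment whose step() returns (state, reward, done); all names are illustrative.

```python
import numpy as np

def epsilon_greedy(q, s, n_actions, eps, rng):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax([q(s, a) for a in range(n_actions)]))

def sarsa_lambda(env, featurize, n_actions, episodes=500,
                 alpha=0.05, gamma=0.99, lam=0.9, eps=0.1, seed=0):
    """SARSA(lambda) with linear action-value approximation and accumulating
    eligibility traces; `featurize(s, a)` maps a state-action pair to a
    feature vector."""
    rng = np.random.default_rng(seed)
    w = np.zeros_like(featurize(env.reset(), 0))
    q = lambda s, a: w @ featurize(s, a)
    for _ in range(episodes):
        s, done = env.reset(), False
        z = np.zeros_like(w)                      # eligibility trace
        a = epsilon_greedy(q, s, n_actions, eps, rng)
        while not done:
            s2, r, done = env.step(a)
            a2 = epsilon_greedy(q, s2, n_actions, eps, rng)
            delta = r + (0.0 if done else gamma * q(s2, a2)) - q(s, a)
            z = gamma * lam * z + featurize(s, a)
            w = w + alpha * delta * z             # TD(lambda) weight update
            s, a = s2, a2
    return w
```

Replacing the linear map with a multilayer perceptron changes only how q and its gradient are computed; the trace-and-update structure stays the same.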
Abstract:
Background: MHC Class I molecules present antigenic peptides to cytotoxic T cells, which forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results: A large dataset comprising MHC-peptide structural complexes was created by remodelling pre-determined X-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion: The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that makes use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is what we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
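The query logic is simple to state in code; the hard part, which the paper addresses with an ILP and ADP, is choosing which false positives to insert into the no-filter. A minimal Python sketch of the data structure itself (the hash scheme is illustrative):

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k           # m bits, k hash functions
        self.bits = bytearray(m)

    def _indices(self, item):
        # derive k indices by salting a single hash; illustrative choice
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for ix in self._indices(item):
            self.bits[ix] = 1

    def __contains__(self, item):
        return all(self.bits[ix] for ix in self._indices(item))

class YesNoBloomFilter:
    """An item is reported present only if the yes-filter accepts it AND
    the no-filter (populated with known false positives of the yes-filter)
    does not reject it."""
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        self.no.add(item)               # selection of these is the paper's ILP/ADP problem

    def __contains__(self, item):
        return item in self.yes and item not in self.no
```

Note that the no-filter has false positives of its own, which is why only false positives (never true positives) may be inserted into it.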
Abstract:
This work presents a set of intelligent algorithms designed to correct calibration errors in sensors and to reduce the frequency with which they must be recalibrated. These algorithms were designed using Artificial Neural Networks, owing to their great capacity for learning, adaptation and function approximation. Two approaches are presented. The first uses Multilayer Perceptron networks to approximate the various shapes of the calibration curve of a sensor that loses calibration at different points in time. This approach requires knowledge of the sensor's operating time, but this information is not always available. To overcome this requirement, a second approach using Recurrent Neural Networks was proposed. Recurrent Neural Networks have a great capacity for learning the dynamics of the system on which they were trained, so they can learn the dynamics of a sensor's calibration drift. Knowing the sensor's operating time, or the dynamics of its calibration drift, it is possible to determine how far out of calibration a sensor is and to correct its measured value, thus providing a more accurate measurement. The algorithms proposed in this work can be implemented in a Foundation Fieldbus industrial network environment, which offers good device programmability through its function blocks, making it possible to apply them to the measurement process.
Abstract:
In multi-robot systems, both the control architecture and the work strategy represent a challenge for researchers. It is important to have a robust architecture that can easily be adapted to changing requirements. It is also important that the work strategy allow robots to complete tasks efficiently, considering that robots interact directly in environments shared with humans. In this context, this work explores two approaches to robot soccer team coordination for the development of cooperative tasks. Both approaches are based on a combination of imitation learning and reinforcement learning. In the first approach, we developed a control architecture, a fuzzy inference engine for recognizing situations in robot soccer games, software for narrating robot soccer games based on the inference engine, and an implementation of learning by imitation from the observation and analysis of other robotic teams. Moreover, state abstraction was efficiently implemented in reinforcement learning applied to the standard robot soccer problem. Finally, reinforcement learning was implemented in a form where actions are explored only in some states (for example, states where a specialist robot system used them), unlike the traditional form, where actions have to be tested in all states. In the second approach, reinforcement learning was implemented with function approximation, for which an algorithm called RBF-Sarsa(λ) was created. In both approaches, batch reinforcement learning algorithms were implemented and imitation learning was used as a seed for reinforcement learning. Moreover, learning from robotic teams controlled by humans was explored. The proposals in this work proved efficient on the standard robot soccer problem and, when implemented in other robotic systems, will allow those systems to carry out assigned tasks efficiently and effectively. These approaches provide a high capacity for adaptation to changes in requirements and in the environment.
Abstract:
The study of function approximation is motivated by the human limitation and inability to register and manipulate, with exact precision, the variations in the behaviour of physical phenomena. These variations are referred to as signals or signal functions. Many real-world problems can be formulated as function approximation problems; from the viewpoint of artificial neural networks, these can be seen as the problem of searching for a mapping that establishes a relationship from an input space to an output space through a process of network learning. Several paradigms of artificial neural networks (ANNs) exist. Here we present a comparative study of RBF networks and networks based on radial Polynomial Powers of Sigmoid (PPS) functions on function approximation problems. Radial PPS are functions generated by linear combinations of powers of sigmoid functions. The main objective of this paper is to show the advantages of radial PPS functions over traditional RBFs, through adaptive training and ridge regression techniques.
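A minimal sketch of the building block the abstract describes, i.e. a linear combination of powers of the sigmoid applied radially. The exact radial construction used in the paper may differ; this is one natural reading, with all names illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pps(x, coeffs):
    """Polynomial Powers of Sigmoid: sum_k c_k * sigmoid(x)**k,
    a linear combination of powers of the sigmoid."""
    s = sigmoid(x)
    return sum(c * s**k for k, c in enumerate(coeffs, start=1))

def radial_pps(x, center, width, coeffs):
    """An illustrative radial variant: the PPS applied to the scaled
    distance from a centre, playing the role of an RBF basis function."""
    r = np.linalg.norm(np.atleast_2d(x) - center, axis=1) / width
    return pps(r, coeffs)
```

With basis functions of this form, the output layer is linear in the combination weights, so ridge regression applies exactly as it does for traditional RBF networks.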
Abstract:
Wavelet functions have been used as activation functions in feedforward neural networks, and an abundance of R&D has been produced in the wavelet neural network area. Some successful algorithms and applications of wavelet neural networks have been developed and reported in the literature. However, most of these reports impose many restrictions on the classical backpropagation algorithm, such as low dimensionality, tensor products of wavelets, particular parameter initializations and, in general, a one-dimensional output. In order to remove some of these restrictions, a family of polynomial wavelets generated from powers of sigmoid functions is presented. We describe how multidimensional wavelet neural networks based on these functions can be constructed, trained and applied to pattern recognition tasks. As an example application of the proposed method, the exclusive-or (XOR) problem is studied.
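The link between sigmoid powers and wavelet-like shapes can be seen from the sigmoid's derivatives, which are themselves polynomials in the sigmoid: with σ' = σ − σ², the second derivative σ'' = σ − 3σ² + 2σ³ already integrates to zero over the real line and oscillates like a mother wavelet. A minimal sketch of this construction (the paper's exact family may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_poly_wavelet(x, coeffs=(1.0, -3.0, 2.0)):
    """A wavelet-like function expressed as a polynomial in the sigmoid.
    The default coefficients give sigma - 3*sigma**2 + 2*sigma**3, i.e.
    the second derivative of the sigmoid: zero-mean and oscillatory,
    which is the qualitative shape required of a mother wavelet."""
    s = sigmoid(x)
    return sum(c * s**(k + 1) for k, c in enumerate(coeffs))
```

Because such wavelets are smooth compositions of standard operations, their gradients are available in closed form and backpropagation applies without the restrictions listed above.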
Abstract:
Function approximation is a very important task in environments where computation has to be based on extracting information from data samples of real-world processes. Neural networks and wavelet networks have recently been seen as attractive tools for developing efficient solutions to many real-world function approximation problems. In this paper, it is shown how feedforward neural networks can be built using a different type of activation function, referred to as the PPS-wavelet. An algorithm is presented to generate a family of PPS-wavelets that can be used to efficiently construct feedforward networks for function approximation.
Abstract:
In the research field of Artificial Intelligence, and in machine learning in particular, a whole range of methods inspired by biological models has become established. The most prominent representatives of such methods are Evolutionary Algorithms on the one hand and Artificial Neural Networks on the other. This thesis is concerned with the development of a machine learning system that unites characteristics of both paradigms: the Hybrid Learning Classifier System (HCS) is developed on the basis of the real-valued eXtended Learning Classifier System (XCS), which contains a Genetic Algorithm as its learning mechanism, and the Growing Neural Gas (GNG). Like the XCS, the HCS uses a Genetic Algorithm to evolve a population of classifiers, i.e. rules of the form [IF condition THEN action], where the condition specifies in which region of the state space of a learning problem a classifier is applicable. In the XCS the condition usually specifies an axis-parallel hyperrectangle, which often does not allow an adequate partitioning of the state space. In the HCS, by contrast, the conditions of the classifiers are described by weight vectors, as possessed by the neurons of the GNG. Each classifier is applicable within its cell of the Voronoi tessellation of the state space induced by the population of the HCS, so the state space can be partitioned more flexibly than in the XCS. The use of weight vectors furthermore makes it possible to employ a mechanism derived from the neuron adaptation procedure of the GNG as a second learning method alongside the Genetic Algorithm. Whereas learning in the XCS is purely evolutionary, that is, it happens only through the creation of new classifiers, this enables the HCS to adapt and improve classifiers that already exist. To evaluate the HCS, a variety of learning experiments are carried out with it. The capability of the approach is demonstrated on a series of learning problems from the areas of classification, function approximation and the learning of actions in an interactive learning environment.
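The Voronoi-cell matching rule can be stated compactly: a classifier matches a state exactly when its weight vector is the nearest one in the population. A minimal sketch, assuming real-valued state vectors and with all names illustrative:

```python
import numpy as np

def matching_classifier(state, population):
    """Return the classifier whose condition (a weight vector) is nearest
    to the current state. Since exactly one weight vector is closest, the
    population induces a Voronoi tessellation of the state space and each
    classifier is applicable precisely in its own cell.
    `population` is a list of (weight_vector, action) pairs."""
    dists = [np.linalg.norm(state - w) for w, _ in population]
    return population[int(np.argmin(dists))]
```

Moving a weight vector, as the GNG-derived adaptation step does, reshapes the cell boundaries continuously, which is what lets the HCS improve existing classifiers rather than only generating new ones.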
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become intractable rapidly as the size of the matrices involved grows. However, if we are interested not in the matrix function itself but only in its product with a vector, the problem becomes simpler, and there is a chance of solving it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and try to assess how convergence speed and execution time are influenced by certain characteristics of the input matrices. Our results suggest that a few factors have a bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
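As a concrete instance of the Krylov approach: the weighted geometric mean times a vector reduces to a real matrix power times a vector, and f(A)v can be projected onto a small Krylov subspace. Below is a minimal Arnoldi sketch for A**t @ v with A symmetric positive definite; for the pencil case f(A\B)v one would replace the matrix-vector product with a solve against A. Names and tolerances are illustrative.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def arnoldi_fAv(A, v, m, t):
    """Krylov approximation of A**t @ v: build an orthonormal basis V of
    K_m(A, v) by Arnoldi with modified Gram-Schmidt, then
    f(A) v  ~=  ||v|| * V @ f(H) @ e_1, with H the small Hessenberg matrix."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # lucky breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    fH = fractional_matrix_power(Hm, t)        # f applied to the small matrix
    return beta * V[:, :m] @ fH[:, 0]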
Abstract:
We introduce the block numerical range Wn(L) of an operator function L with respect to a decomposition H = H1 ⊕ … ⊕ Hn of the underlying Hilbert space. Our main results include the spectral inclusion property and estimates of the norm of the resolvent for analytic L. They generalise, and improve, the corresponding results for the numerical range (which is the case n = 1), since the block numerical range is contained in, and may be much smaller than, the usual numerical range. We show that refinements of the decomposition entail inclusions between the corresponding block numerical ranges, and that the block numerical range of the operator matrix function L contains those of its principal subminors. For the special case of operator polynomials, we investigate the boundedness of Wn(L) and prove a Perron-Frobenius type result for the block numerical radius of monic operator polynomials with coefficients that are positive in the Hilbert lattice sense.
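For concreteness, the definition can be sketched as follows (notation mine, following the usual convention for operator matrices; for n = 1 it reduces to the standard numerical range of an operator function, the set of λ with (L(λ)x, x) = 0 for some unit vector x):

```latex
\[
  W^n(L) = \Bigl\{\, \lambda :\; 0 \in \sigma\bigl(L_{\mathbf{x}}(\lambda)\bigr)
  \ \text{for some } x_i \in H_i,\ \|x_i\| = 1,\ i = 1,\dots,n \,\Bigr\},
\]
\[
  L_{\mathbf{x}}(\lambda)
  = \bigl( (L(\lambda)\, x_j,\, x_i) \bigr)_{i,j=1}^{n}
  \in \mathbb{C}^{n \times n}.
\]
```

Since L_x(λ) is a finite matrix, the condition 0 ∈ σ(L_x(λ)) is simply det L_x(λ) = 0, which makes clear why refining the decomposition can only shrink the set.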