621 results for Dimensionality


Relevance: 10.00%

Abstract:

Optimal Bayesian multi-target filtering is, in general, computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), an SMC implementation of the PHD filter that employs auxiliary variables to enhance its efficiency was proposed by Whiteley et al. Numerical examples were presented for two scenarios, including a challenging nonlinear observation model, to support the claim. This paper studies the theoretical properties of this auxiliary particle implementation. $\mathbb{L}_p$ error bounds are established, from which almost sure convergence follows.

Relevance: 10.00%

Abstract:

Optimal Bayesian multi-target filtering is, in general, computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), we present an SMC implementation of the PHD filter that employs auxiliary variables to enhance its efficiency. Numerical examples are presented for two scenarios, including a challenging nonlinear observation model.
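
To make the auxiliary-variable idea concrete, the following is a minimal sketch of one auxiliary-particle-filter step for a single-target, linear-Gaussian state-space model; it illustrates only the look-ahead weighting that the APF contributes, not the multi-target PHD recursion, and the model, parameter values, and function name are illustrative assumptions rather than anything taken from the paper.

```python
# Minimal sketch of one auxiliary particle filter (APF) step for the scalar
# model x_t = phi * x_{t-1} + N(0, q), y_t = x_t + N(0, r).
# All parameters and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def apf_step(particles, weights, y, phi=0.9, q=1.0, r=1.0):
    n = particles.size
    # First stage: score each particle by the likelihood of y at its
    # predicted mean (the "auxiliary" look-ahead weight).
    mu = phi * particles
    first_stage = weights * np.exp(-0.5 * (y - mu) ** 2 / r)
    first_stage /= first_stage.sum()
    # Resample particle indices with the look-ahead weights.
    idx = rng.choice(n, size=n, p=first_stage)
    # Propagate only the selected particles through the dynamics.
    new_particles = phi * particles[idx] + np.sqrt(q) * rng.standard_normal(n)
    # Second stage: correct for the look-ahead approximation.
    lik_new = np.exp(-0.5 * (y - new_particles) ** 2 / r)
    lik_pred = np.exp(-0.5 * (y - mu[idx]) ** 2 / r)
    new_weights = lik_new / lik_pred
    new_weights /= new_weights.sum()
    return new_particles, new_weights
```

In the SMC PHD setting, auxiliary variables play an analogous role, biasing particle placement towards regions supported by the incoming measurements before the weights are corrected.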

Relevance: 10.00%

Abstract:

Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a representation of permutationally invariant states known from spin coupling, combined with convex optimization, which has clear advantages regarding speed, control, and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.
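
As a point of reference for the reconstruction principles mentioned above, here is a minimal sketch of least-squares state reconstruction posed as a convex program for a toy system; the observables, data, and dimensions are illustrative assumptions, and this is not the permutationally invariant scheme of the paper.

```python
# Least-squares density-operator reconstruction as a small convex program.
# Observables and "measured" values are synthetic placeholders.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
d = 4  # Hilbert-space dimension (2 qubits), far below the 20-qubit regime above

def random_observable(dim):
    M = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return (M + M.conj().T) / 2  # Hermitian observable

observables = [random_observable(d) for _ in range(10)]
measured = rng.uniform(-1, 1, 10)  # placeholder expectation values

rho = cp.Variable((d, d), hermitian=True)  # density operator to estimate
residuals = cp.hstack([cp.real(cp.trace(A @ rho)) - m
                       for A, m in zip(observables, measured)])
problem = cp.Problem(cp.Minimize(cp.sum_squares(residuals)),
                     [rho >> 0, cp.real(cp.trace(rho)) == 1])
problem.solve()  # solved as a semidefinite program
```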

Relevance: 10.00%

Abstract:

Recent advances in technology involving magnetic materials require the development of novel advanced magnetic materials with improved magnetic and magneto-transport properties and with reduced dimensionality. Therefore, magnetic materials with outstanding magnetic characteristics and reduced dimensionality have recently gained much attention. Among these materials, a family of thin wires with reduced geometrical dimensions (on the order of 1-30 μm in diameter) has gained importance within the last few years. These thin wires combine excellent soft magnetic properties (with coercivities up to 4 A/m) with attractive magneto-transport properties (Giant Magneto-impedance effect, GMI; Giant Magneto-resistance effect, GMR) and an unusual re-magnetization process in positive-magnetostriction compositions exhibiting quite fast domain wall propagation. In this paper we review the magnetic and magneto-transport properties of these microwires that make them suitable for microsensor applications.

Relevance: 10.00%

Abstract:

In this paper, reanalysis fields from the ECMWF have been statistically downscaled to predict surface moisture flux and daily precipitation at two observatories (Zaragoza and Tortosa, Ebro Valley, Spain) from large-scale atmospheric fields during the 1961-2001 period. Three types of downscaling models have been built: (i) analogues, (ii) analogues followed by random forests and (iii) analogues followed by multiple linear regression. The inputs consist of data (predictor fields) taken from the ERA-40 reanalysis. The predicted fields are precipitation and surface moisture flux as measured at the two observatories. To reduce the dimensionality of the problem, the ERA-40 fields have been decomposed using empirical orthogonal functions. The available daily data have been divided into two parts: a training period (1961-1996), used to find a group of about 300 analogues to build the downscaling model, and a test period (1997-2001), in which the models' performance has been assessed using independent data. In the case of surface moisture flux, the models based on analogues followed by random forests do not clearly outperform those built on analogues plus multiple linear regression, while simple averages calculated from the nearest analogues found in the training period yielded only slightly worse results. In the case of precipitation, the three types of model performed equally well. These results suggest that most of the models' downscaling capability can be attributed to the analogue-calculation stage.
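
The following is a minimal sketch of the analogue stage described above, namely EOF reduction of the predictor fields (via PCA) followed by a nearest-neighbour search for analogue days; array sizes, the number of components, and variable names are illustrative assumptions, and the random-forest and regression stages are omitted.

```python
# Sketch of analogue-based downscaling: EOF (PCA) reduction of predictor
# fields, then retrieval of ~300 analogue days from the training period.
# Shapes and data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_fields = rng.standard_normal((13000, 500))  # training days x grid points
test_fields = rng.standard_normal((1800, 500))    # test days x grid points
train_precip = rng.gamma(1.0, 2.0, 13000)         # predictand at one observatory

# EOF decomposition: keep the leading components of the large-scale fields.
pca = PCA(n_components=20).fit(train_fields)
train_pc = pca.transform(train_fields)
test_pc = pca.transform(test_fields)

# For each test day, find about 300 analogue days in the training period.
nn = NearestNeighbors(n_neighbors=300).fit(train_pc)
_, idx = nn.kneighbors(test_pc)

# Simplest downscaling model: average the predictand over the analogues.
predicted_precip = train_precip[idx].mean(axis=1)
```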

Relevance: 10.00%

Abstract:

Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods, require complex and subjective training procedures; in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
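
For contrast with the proposed vision-inspired scheme, the following is a minimal sketch of the conventional PCA decorrelation that the article compares against, applied per pixel across the spectral bands; the cube dimensions and the number of retained components are illustrative assumptions.

```python
# Conventional PCA band decorrelation of a hyper-spectral cube:
# each pixel's spectrum is compressed to a short, decorrelated descriptor.
# Cube size and component count are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
cube = rng.random((128, 128, 60))          # rows x cols x spectral bands
pixels = cube.reshape(-1, cube.shape[-1])  # one spectrum per pixel

pca = PCA(n_components=8)
descriptors = pca.fit_transform(pixels).reshape(128, 128, 8)
```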

Relevance: 10.00%

Abstract:

Singular Value Decomposition (SVD) is a key linear algebraic operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that must be processed fast for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows for a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
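
As background for the rotation-based architecture described above, here is a minimal software sketch of one-sided Jacobi SVD, a classical scheme whose plane rotations map naturally onto arrays of simple rotation units; the sweep count, tolerance, and function name are assumptions, not the paper's hardware algorithm.

```python
# One-sided Jacobi SVD: orthogonalise the columns of A by plane rotations.
# Assumes A has full column rank; parameters are illustrative.
import numpy as np

def jacobi_svd(A, sweeps=30, eps=1e-12):
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if abs(gamma) <= eps * np.sqrt(alpha * beta):
                    continue                       # columns already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                if zeta != 0.0:
                    t = np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                else:
                    t = 1.0                        # 45-degree rotation
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ rot  # rotate the two columns
                V[:, [p, q]] = V[:, [p, q]] @ rot  # accumulate right vectors
        if converged:
            break
    sigma = np.linalg.norm(U, axis=0)              # singular values
    return U / sigma, sigma, V                     # A ~= U @ diag(sigma) @ V.T
```

In a hardware realization, the two inner loops are what get unrolled into interconnected rotation units, which is where the area-versus-speed trade-off mentioned above arises.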

Relevance: 10.00%

Abstract:

We examine voting situations in which individuals have incomplete information over each other's true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.

Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple-plurality electoral systems. We find that, in general, the candidates' optimal policies in such an electoral system vary greatly depending on their objective function. We provide several examples, as well as a genericity result which states that almost all such electoral systems (with respect to the distributions of voter behavior) will exhibit different incentives for candidates who seek to maximize expected vote share and those who seek to maximize probability of victory.

In Chapter 3, we adopt a random-utility-maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters are playing a Quantal Response Equilibrium (McKelvey and Palfrey, 1995, 1998). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point that maximizes the sum of voters' utility functions). In two-candidate contests we find that this equilibrium is unique.

Finally, in Chapter 4, we take the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and the notion of Quantal Response Equilibrium and offer possible extensions of our framework.

Relevance: 10.00%

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB equation reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state variable of the system adds another dimension to the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.

In the last decade, researchers have found that, under certain fairly non-restrictive structural assumptions, the HJB equation may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDPs). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
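
One commonly cited form of this linearization, sketched here with textbook notation that may differ in detail from the assumptions used in the thesis, starts from dynamics $dx = (f(x) + G(x)u)\,dt + B(x)\,d\omega$ and a cost rate $q(x) + \tfrac{1}{2}u^\top R u$, for which the HJB equation reads

$$-\partial_t V = \min_u \Big[\, q + \tfrac{1}{2}u^\top R u + (f + Gu)^\top \nabla V + \tfrac{1}{2}\operatorname{tr}\!\big(B \Sigma B^\top \nabla^2 V\big) \Big].$$

Under the structural assumption $\lambda\, G R^{-1} G^\top = B \Sigma B^\top$, the substitution $\Psi = \exp(-V/\lambda)$ (the so-called desirability) cancels the quadratic term and leaves a linear PDE:

$$-\partial_t \Psi = -\tfrac{1}{\lambda}\, q\, \Psi + f^\top \nabla \Psi + \tfrac{1}{2}\operatorname{tr}\!\big(B \Sigma B^\top \nabla^2 \Psi\big).$$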

This is done by bringing together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. The technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.
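
For reference, the canonical separated (low-rank) representation underlying this approach writes a high-dimensional function as a short sum of products of one-dimensional factors,

$$f(x_1, \dots, x_d) \;\approx\; \sum_{l=1}^{r} \prod_{i=1}^{d} f_i^{l}(x_i),$$

so that storage and per-iteration work grow roughly like $r \cdot d \cdot n$ (with $n$ degrees of freedom per coordinate) instead of $n^{d}$; the separation rank $r$ required is problem dependent.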

The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance: 10.00%

Abstract:

There is growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled as approximately sparse, and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools for approaching problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
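
For concreteness, the lasso formulation referred to in i) is the standard $\ell_1$-regularized least-squares program, and its low-rank analogue replaces the $\ell_1$ norm with the nuclear norm; the measurement operator, data vector, and regularization parameter below are generic placeholders:

$$\hat{x} = \arg\min_{x}\; \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda \|x\|_1, \qquad \hat{X} = \arg\min_{X}\; \tfrac{1}{2}\|y - \mathcal{A}(X)\|_2^2 + \lambda \|X\|_{*}.$$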

Relevance: 10.00%

Abstract:

We review the problem of the dimensionality of space understood as a problem of physics, emphasizing that some physical laws depend strongly on this topological parameter of space. We discuss what has already been done for both the Schrödinger equation and the Dirac equation. The situation in the literature is rather controversial and, in the specific case of the Dirac equation in D dimensions, no work in the scientific literature takes into account the Coulomb interaction potential correctly generalized when the number of spatial dimensions is greater than three. We therefore discuss the relativistic hydrogen atom in D dimensions. New numerical results for the energy levels and wave functions are presented and discussed. In particular, we consider the possibility of stable atoms existing in spaces with dimensionality ≠ 3.
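
For context, the generalization at issue follows from Gauss's law in $D$ spatial dimensions, where the point-charge potential falls off as (a sketch, up to constants)

$$V_D(r) \;\propto\; -\frac{1}{r^{D-2}}, \qquad D \ge 3,$$

reducing to the familiar $-1/r$ only at $D = 3$; it is this $D$-dependent potential, rather than a naive $-1/r$ in every dimension, that should enter the $D$-dimensional Dirac equation.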

Relevance: 10.00%

Abstract:

Mathematical models are now available that can accurately predict the relations among state properties, a task of great importance in chemical engineering, since such models can be used to evaluate the performance of chemical processes. They are also of fundamental importance for petroleum reservoir simulation and separation processes. These models are known as equations of state and can be used in phase-equilibrium problems, mainly vapor-liquid equilibria. Recently, a mathematical theorem (the Reduction Theorem) was formulated, giving the conditions for dimensionality reduction in phase-equilibrium problems for multicomponent mixtures described by cubic equations of state with classical mixing and combining rules. The theorem shows how, for this well-defined class of thermodynamic models, the dimension of several phase-equilibrium problems can be reduced. The method is particularly advantageous for mixtures with many components, yielding a significant reduction in computation time while producing accurate results. In this work, we present numerical experiments with test mixtures, using the reduction technique to obtain dew-point pressures at specified temperatures.
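
As an illustrative special case (not the theorem's full statement), with classical van der Waals mixing rules and all binary interaction parameters $k_{ij} = 0$, the mixture parameters of a cubic equation of state collapse onto two scalars regardless of the number of components:

$$a_m = \sum_i \sum_j x_i x_j \sqrt{a_i a_j} = \Big(\sum_i x_i \sqrt{a_i}\Big)^{2}, \qquad b_m = \sum_i x_i b_i,$$

so the phase-equilibrium problem then depends on the composition only through $\sum_i x_i \sqrt{a_i}$ and $\sum_i x_i b_i$; with nonzero $k_{ij}$, the number of such reduced variables grows with the rank of the interaction matrix, which is the kind of structure the Reduction Theorem exploits.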

Relevance: 10.00%

Abstract:

FOTO-ASSEMBLAGE is the name I propose for the works I have been producing by joining digital photographs. The development and grounding of these works also form the core of the research that resulted in this dissertation. In principle, the term foto-assemblage would refer to technical or formal aspects of this practice. However, as the research developed, certain procedures ended up determining nuances that revealed common aspects in their content as well. As a result of artistic constructions joining photographs since 2009, I arrived at the synthetic compositions presented here, built from two photographs. I proposed the name foto-assemblage after observing, in the resulting images, features that would distinguish them from certain conventions attributed to the idea of photography. At the same time, these images would suggest a possible extension of the understanding of assemblage as an artistic technique. Although it is not a rule, photographs reveal images of moments. In their relation to the human understanding of time or space, photographs almost always contain minimal instances. Photographs can, however, also be understood as a contraction of a span of time. Every photographic image can be taken as the outcome of certain prior events and even as a generating element of future consequences. Following this understanding, what I propose with foto-assemblage is that it deal with segments of time or space contained within a single image. The originating photographs would take on a new role: removed from their original context, they would serve as markers of the span of time or space suppressed and made subjective between them. Poetically, events that occurred between the originating photographs would be contained in the images produced. The term assemblage was incorporated into the arts in 1953 by Jean Dubuffet to describe works that would be something more than simple collage. The idea of assemblage rests on the principle that any material or object taken from our everyday world can be incorporated into a work of art, creating a new whole without losing its original meaning. The object is torn from its habitual use and inserted into a new context, weaving relations with the other elements and building narratives in a new environment, that of the work. In the idea of foto-assemblage, however, the originating photographic images are used not as objects belonging to the everyday world but as images in the sense of mental entities. I adopt something like a magical view in which the originating, basic images exist in another dimension, on a two-dimensional plane not manipulable by us, inhabitants of three-dimensionality. In this environment, imaginary or not, the photographs are set down, consolidating the foto-assemblage. When the foto-assemblage materializes, embodied in a medium and printed for contemplation, it then comes to inhabit our three-dimensional world. The result could be taken as a hybrid, a third thing made from two that can no longer be dissociated within an aesthetic understanding. At the end of the dissertation, I present practical experiments that resulted in four series of foto-assemblage images. Each series emphasizes particular aspects of what I call the expanded landscape, representing spans of time, space, or paths between the concrete world and the worlds of the unconscious.

Relevance: 10.00%

Abstract:

The digitization of media and the influence of technology have led to the spread of a new model of cinema, a complex set of changes grounded in economic logic and in a specific cognitive process. In this context, within the new configuration of cinema, a transformation of the filmic narrative process can be observed, beyond new audiovisual formats and content. To investigate the phenomenon of the polydimensionality of the concept of narrative in contemporary cinema, this dissertation carries out a qualitative analysis of significant examples of commercial cinema, considering non-linearity, transnationalization, and transmedia, and also highlights the strand of narrative multiplicity, of polydimensionality, in experimental cinema and in art installations. As the scope of the research object, we analyze the film Slumdog Millionaire (2008) with respect to transnationalization and non-linearity; the films Watchmen (2009) and Batman (The Dark Knight, 2008) as transmedia franchises; and the multiplicity of the experimental film What we will and installations. The films were selected for the prominence they achieved in their respective categories over the previous two years at the commercial box office, according to the sites Filme B and The Internet Movie Database.