934 results for Restricted maximum likelihood
Abstract:
In this paper, we consider spatial modulation (SM) operating in a frequency-selective single-carrier (SC) communication scenario and propose zero padding instead of the cyclic prefix considered in the existing literature. We show that the zero-padded single-carrier (ZP-SC) SM system offers full multipath diversity under maximum-likelihood (ML) detection, unlike the cyclic-prefix-based SM system. Furthermore, we show that the order of ML detection complexity in our proposed ZP-SC SM system is independent of the frame length and depends only on the number of multipath links between the transmitter and the receiver. Thus, zero padding in the SC SM system has two advantages over the cyclic prefix: 1) it achieves full multipath diversity, and 2) it imposes a relatively low ML detection complexity. Furthermore, we extend the partial interference cancellation receiver (PIC-R) proposed by Guo and Xia for the detection of space-time block codes (STBCs) in order to convert the ZP-SC system into a set of narrowband subsystems experiencing flat fading. We show that full-rank STBC transmission over these subsystems achieves full transmit, receive, and multipath diversity under the PIC-R. Furthermore, we show that the ZP-SC SM system achieves receive and multipath diversity under the PIC-R at a detection complexity order which is the same as that of the SM system in a flat-fading scenario. Our simulation results demonstrate that the symbol error ratio performance of the proposed linear receiver for the ZP-SC SM system is significantly better than that of SM in cyclic-prefix-based orthogonal frequency-division multiplexing, as well as that of SM in cyclic-prefixed and zero-padded single-carrier systems relying on zero-forcing/minimum mean-squared error equalizer based receivers.
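As a rough structural illustration of the difference described above (not the authors' code), the sketch below builds the effective channel matrices seen by a zero-padded block and a cyclic-prefixed block; the tap values, frame length, and function names are assumptions for the example. The tall banded Toeplitz matrix of the ZP block is always full column rank, which is the property underlying full multipath diversity under ML detection, whereas the circulant CP matrix loses rank at the channel's spectral nulls.

```python
# A rough structural sketch (not the authors' code): effective channel matrices
# seen by a zero-padded (ZP) and a cyclic-prefixed (CP) single-carrier block.
import numpy as np

def zp_channel_matrix(h, N):
    """(N+L-1) x N banded Toeplitz matrix: a ZP block sees linear convolution."""
    L = len(h)
    H = np.zeros((N + L - 1, N), dtype=complex)
    for k in range(N):
        H[k:k + L, k] = h
    return H

def cp_channel_matrix(h, N):
    """N x N circulant matrix: a CP block sees circular convolution."""
    c = np.zeros(N, dtype=complex)
    c[:len(h)] = h
    return np.array([np.roll(c, k) for k in range(N)]).T

h = np.array([1.0, 0.0, -1.0])   # assumed 3-tap channel with nulls on the FFT grid
N = 8                            # assumed frame length
Hzp, Hcp = zp_channel_matrix(h, N), cp_channel_matrix(h, N)

# The ZP matrix is always full column rank, the property behind full multipath
# diversity under ML detection; the circulant CP matrix loses rank at the
# channel's spectral nulls.
print(np.linalg.matrix_rank(Hzp), np.linalg.matrix_rank(Hcp))   # e.g. 8 and 6
```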
Abstract:
Local polynomial approximation of data is an approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that, when convolved with the data, yield a local polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimizing the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise. In this paper, we robustify the SG filter for applications involving noise following a heavy-tailed distribution. The optimal filtering criterion is achieved by ℓ1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. It is interesting to note that at any stage of the iteration we solve a weighted SG filter by minimizing an ℓ2 norm, yet the process converges to the ℓ1-minimized output. The results show consistent improvement over the standard SG filter performance.
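The following is a minimal sketch, under assumed window, order, and iteration parameters, of how an SG-type local polynomial fit can be robustified with IRLS so that the iteration converges toward an ℓ1-optimal (and hence heavy-tail-friendly) local fit; it is not the authors' implementation.

```python
# Minimal sketch (assumed parameter names) of an IRLS-robustified Savitzky-Golay
# smoother: each window is fit by a polynomial under iteratively reweighted
# least squares, which converges toward the l1-optimal local fit.
import numpy as np

def robust_sg(x, window=11, order=3, iters=20, eps=1e-6):
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)   # local polynomial basis
    y = np.copy(x).astype(float)
    for i in range(half, len(x) - half):
        seg = x[i - half:i + half + 1].astype(float)
        w = np.ones(window)
        for _ in range(iters):
            W = np.sqrt(w)[:, None]
            coef, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * seg, rcond=None)
            r = seg - A @ coef
            w = 1.0 / np.maximum(np.abs(r), eps)    # l1 reweighting
        y[i] = coef[0]                               # fitted value at the window centre
    return y

# Example: heavy-tailed (Laplacian) noise, where the l1 criterion is the ML choice.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 400))
noisy = clean + rng.laplace(scale=0.2, size=clean.size)
denoised = robust_sg(noisy)
```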
Abstract:
Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization in TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB SNR gain over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, achieving a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give about a 10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
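To make the jitter mechanism concrete, here is a simplified, hypothetical Voronoi-style granular-medium sketch (the pitch, grid resolution, and grain density are made-up values, and the write/read model is highly idealised rather than the paper's channel model): grains take the polarity of the bit cell containing their centre, and averaging the medium over each bit cell exposes the data-dependent noise caused by irregular grain positions.

```python
# A simplified, hypothetical Voronoi-style granular-medium sketch (pitch,
# resolution and grain density are made-up values; the write/read model is
# idealised and is not the paper's channel model).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
bit_pitch, n_bits, grains_per_bit, res = 10.0, 16, 5, 1.0   # nm, bits/side, grains/bit, grid step
side = n_bits * bit_pitch

bits = rng.integers(0, 2, size=(n_bits, n_bits)) * 2 - 1                  # +/-1 data
centres = rng.uniform(0, side, size=(grains_per_bit * n_bits ** 2, 2))    # grain centres

# Voronoi tessellation on a fine grid: each pixel belongs to its nearest grain.
xs = np.arange(res / 2, side, res)
gx, gy = np.meshgrid(xs, xs)
owner = cKDTree(centres).query(np.column_stack([gx.ravel(), gy.ravel()]))[1]

# Ideal write: each grain takes the polarity of the bit cell holding its centre.
cell = np.floor(centres / bit_pitch).astype(int)
grain_mag = bits[cell[:, 1], cell[:, 0]]
medium = grain_mag[owner].reshape(gy.shape)

# Readback: average the medium over every bit cell; the spread around +/-1 is
# the position-jitter component of the media noise, which grows as the number
# of grains per bit shrinks.
cps = int(bit_pitch / res)
readback = medium.reshape(n_bits, cps, n_bits, cps).mean(axis=(1, 3))
print(np.mean((readback - bits) ** 2))   # jitter-noise power for this realisation
```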
Abstract:
We propose and demonstrate limited-view light-sheet microscopy (LV-LSM) for three-dimensional (3D) volume imaging. Since long and frequent image acquisition results in significant photobleaching, we acquire a limited number of angular views (18 views) of the macroscopic specimen and integrate them with a maximum-likelihood (ML) technique to reconstruct high-quality 3D volume images. Existing variants of light-sheet microscopy require both rotation and translation, with approximately 10-fold more views in total, to render a 3D volume image. By comparison, the LV-LSM technique reduces data-acquisition time and consequently reduces light exposure many-fold. Since ML reconstruction is a post-processing step and is highly parallelizable, it does not cost precious imaging time. Results show noise-free and high-contrast volume images when compared with state-of-the-art selective plane illumination microscopy.
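As a point of reference for the ML reconstruction step, the sketch below shows one common maximum-likelihood deconvolution update under a Poisson noise model (the Richardson-Lucy iteration) for a single view; the paper's multi-view LV-LSM reconstruction combines information across the 18 angular views, which this toy example does not attempt, and the Gaussian PSF is an assumption.

```python
# A minimal single-view sketch of a maximum-likelihood (Richardson-Lucy) update
# under a Poisson noise model; the multi-view combination used in LV-LSM is not
# attempted here, and the PSF below is an assumed Gaussian.
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(observed, psf, iters=50):
    psf_flipped = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iters):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= convolve(ratio, psf_flipped, mode="reflect")   # ML multiplicative update
    return estimate

# Toy usage: blur a point-like object with a Gaussian PSF and recover it.
x = np.zeros((64, 64)); x[32, 32] = 1.0
r = np.arange(-5, 6)
g = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / 8.0)
psf = g / g.sum()
y = convolve(x, psf, mode="reflect")
x_hat = richardson_lucy(y, psf)
```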
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
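For intuition, here is a simplified sketch of a particle approximation of the filter derivative (the score) for a toy linear-Gaussian model; it uses the basic O(N) path-space variant rather than the O(N²) marginalised algorithm of Poyiadjis et al. analysed in the paper, and all model parameters and names are assumptions.

```python
# A sketch (simplified, O(N) path-space variant rather than the O(N^2)
# marginalised algorithm of Poyiadjis et al.) of estimating the score
# d/d(phi) log p(y_{1:T}; phi) with a bootstrap particle filter, for the
# assumed linear-Gaussian model x_t = phi*x_{t-1} + v_t, y_t = x_t + w_t.
import numpy as np

def particle_score(y, phi, sv=1.0, sw=1.0, N=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sv, N)            # particles
    a = np.zeros(N)                        # per-particle gradient along the path
    score = 0.0
    for t, yt in enumerate(y):
        if t > 0:
            xprev = x
            x = phi * xprev + rng.normal(0.0, sv, N)
            a = a + (x - phi * xprev) * xprev / sv**2   # d/dphi log f(x_t | x_{t-1})
        logw = -0.5 * ((yt - x) / sw) ** 2
        w = np.exp(logw - logw.max()); w /= w.sum()
        score = np.sum(w * a)              # current estimate of the score at time t
        idx = rng.choice(N, N, p=w)        # multinomial resampling
        x, a = x[idx], a[idx]
    return score

# Toy usage: simulate data at phi = 0.8 and evaluate the score estimate at the
# true value and at a misspecified one.
rng = np.random.default_rng(1)
T, phi_true = 200, 0.8
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = phi_true * xs[t - 1] + rng.normal()
ys = xs + rng.normal(size=T)
print(particle_score(ys, 0.8), particle_score(ys, 0.5))
```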
Abstract:
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
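A toy sketch of maximising an ABC-style approximate likelihood is given below: the likelihood of each candidate parameter is estimated by simulating summaries from the model and kernel-smoothing their distance to the observed summary, and the maximiser over a grid plays the role of the approximate MLE. The Gaussian model, summary statistic, and bandwidth are assumptions, not the paper's examples.

```python
# Minimal sketch of maximising a simulation-based (ABC) likelihood, with a
# Gaussian kernel of bandwidth eps applied to a summary statistic; the toy
# model y_i ~ N(theta, 1) and the sample-mean summary are assumptions here.
import numpy as np

def abc_loglik(theta, s_obs, n, eps=0.05, M=2000, rng=None):
    rng = rng or np.random.default_rng(0)        # fixed seed: common random numbers
    s_sim = rng.normal(theta, 1.0, size=(M, n)).mean(axis=1)   # simulated summaries
    k = np.exp(-0.5 * ((s_sim - s_obs) / eps) ** 2)            # kernel weights
    return np.log(k.mean() + 1e-300)

rng = np.random.default_rng(2)
y = rng.normal(1.5, 1.0, size=100)               # observed data
s_obs = y.mean()

grid = np.linspace(0.5, 2.5, 81)
ll = [abc_loglik(th, s_obs, y.size) for th in grid]
theta_hat = grid[int(np.argmax(ll))]             # approximate (ABC) MLE
print(theta_hat)
```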
Abstract:
The Partial Credit Model (PCM) from Item Response Theory (IRT) was applied to the item analysis of a scale measuring Affect toward Mathematics. This variable describes the interest of Psychology students in engaging in activities related to mathematics and the feelings associated with the use of its concepts. The test consists of 8 items with a 6-option Likert response format. Participants were 1875 Psychology students from the Universidad de Buenos Aires (Argentina), of whom 82% were women. The internal consistency analysis yielded a highly satisfactory index (alpha = .91). The unidimensionality condition required by the model was verified through an exploratory factor analysis. All IRT-based analyses were carried out with the Winsteps program. The model parameters were estimated by Joint Maximum Likelihood. The fit of the PCM was satisfactory for all items. The Test Information Function was high over a wide range of levels of the latent trait. One item showed a reversal in two threshold parameters. As a consequence, one of that item's six categories was not maximally probable in any interval of the latent-trait scale. The implications of this finding for the evaluation of the psychometric quality of the item are discussed. The results of this study allowed a deeper analysis of the construct and provided validity evidence based on the internal structure of the scale.
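To illustrate the threshold-reversal phenomenon mentioned above, the following sketch computes Partial Credit Model category probability curves for a made-up six-category item, once with ordered and once with reversed threshold parameters, and reports which categories are modal somewhere on the latent-trait scale; the delta values are hypothetical.

```python
# Illustrative sketch of Partial Credit Model category probabilities (the delta
# threshold values below are made up) showing how a threshold reversal leaves
# one category that is never the most probable at any latent-trait level.
import numpy as np

def pcm_probs(theta, deltas):
    """P(X = x | theta) for x = 0..m under the Partial Credit Model."""
    steps = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    e = np.exp(steps - steps.max())
    return e / e.sum()

theta_grid = np.linspace(-4, 4, 401)
ordered   = [-1.5, -0.5, 0.5, 1.5, 2.5]     # ordered thresholds (5 steps, 6 categories)
disordered = [-1.5, 0.5, -0.5, 1.5, 2.5]    # thresholds 2 and 3 swapped

for name, d in [("ordered", ordered), ("disordered", disordered)]:
    probs = np.array([pcm_probs(t, d) for t in theta_grid])
    modal = sorted(set(np.argmax(probs, axis=1)))   # categories that are modal somewhere
    print(name, "modal categories:", modal)
# With the reversal, one of the six categories never appears in the modal set.
```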
Abstract:
Many problems in control and signal processing can be formulated as sequential decision problems for general state-space models. However, except for some simple models, one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we investigate problems where Sequential Monte Carlo (SMC) methods can be combined with a gradient-based search to provide solutions to online optimisation problems. The main contributions of the thesis are summarised as follows. Chapter 4 focuses on the sensor scheduling problem cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature that only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which leads to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to that of the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state-space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem using two maximum-likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses a nonparametric approximation of Belief Propagation. The algorithms are successfully applied to the sensor localisation problem for sensor networks of small and medium size.
Abstract:
This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule using U.S. and Eurozone data. The estimation procedure implemented is a classical method based on the indirect inference principle. An unrestricted VAR is considered as the auxiliary model. On the one hand, the estimation method proposed overcomes some of the shortcomings of using a structural VAR as the auxiliary model in order to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a maximum-likelihood Bayesian approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results in the U.S. show that the fit of the NKM under an optimal monetary plan is much worse than the fit of the NKM model assuming a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small with respect to assuming either a forward-looking Taylor rule or an optimal plan.
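The indirect-inference principle used for estimation can be illustrated with a deliberately small toy example (an AR(1)-type structural model and a scalar auxiliary statistic rather than the paper's NKM model and unrestricted VAR): simulate the structural model at candidate parameters, fit the auxiliary model to both simulated and observed data, and minimise the distance between the two sets of auxiliary estimates.

```python
# A toy sketch of the indirect-inference idea (not the paper's NKM/VAR setup):
# the structural model is simulated at candidate parameters, an auxiliary model
# is fitted to both simulated and observed data, and the estimator minimises
# the distance between the two sets of auxiliary coefficients.
import numpy as np

def simulate_structural(rho, T, rng):
    """Assumed structural model: latent AR(1) observed with measurement noise."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()
    return x + 0.5 * rng.normal(size=T)

def auxiliary(y):
    """Auxiliary model: OLS AR(1) coefficient of the observed series."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

rng = np.random.default_rng(3)
y_obs = simulate_structural(0.9, 500, rng)       # pretend this is the data
beta_obs = auxiliary(y_obs)

grid = np.linspace(0.5, 0.99, 50)
dist = []
for rho in grid:
    sims = [auxiliary(simulate_structural(rho, 500, np.random.default_rng(s)))
            for s in range(20)]                  # same seeds per candidate (common random numbers)
    dist.append((np.mean(sims) - beta_obs) ** 2)
rho_hat = grid[int(np.argmin(dist))]             # indirect-inference estimate
print(rho_hat)
```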
Abstract:
We report the findings of an experiment designed to study how people learn and make decisions in network games. Network games offer new opportunities to identify learning rules, since on networks (compared to e.g. random matching) more rules differ in terms of their information requirements. Our experimental design enables us to observe both which actions participants choose and which information they consult before making their choices. We use this information to estimate learning types using maximum likelihood methods. There is substantial heterogeneity in learning types. However, the vast majority of our participants' decisions are best characterized by reinforcement learning or (myopic) best-response learning. The distribution of learning types seems fairly stable across contexts. Neither network topology nor the position of a player in the network seem to substantially affect the estimated distribution of learning types.
Abstract:
Without knowledge of basic seafloor characteristics, the ability to address any number of critical marine and/or coastal management issues is diminished. For example, management and conservation of essential fish habitat (EFH), a requirement mandated by federally guided fishery management plans (FMPs), requires among other things a description of habitats for federally managed species. Although the list of attributes important to habitat is long, the ability to describe many of them efficiently and effectively, especially at the scales required, does not exist with the tools currently available. However, several characteristics of seafloor morphology are readily obtainable at multiple scales and can serve as useful descriptors of habitat. Recent advancements in acoustic technology, such as multibeam echosounding (MBES), can provide remote indication of surficial sediment properties such as texture, hardness, or roughness, and further permit highly detailed renderings of seafloor morphology. With acoustic-based surveys providing a relatively efficient method for data acquisition, there is a need for efficient and reproducible automated segmentation routines to process the data. Using MBES data collected by the Olympic Coast National Marine Sanctuary (OCNMS) and through a contracted seafloor survey, we expanded on the techniques of Cutter et al. (2003) to describe an objective, repeatable process that uses parameterized local Fourier histogram (LFH) texture features to automate segmentation of surficial sediments from acoustic imagery using a maximum likelihood decision rule. Sonar signatures and classification performance were evaluated using video imagery obtained from a towed camera sled. Segmented raster images were converted to polygon features and attributed using a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999) for use in a geographical information system (GIS).
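As an illustration of the final classification step only (the local Fourier histogram feature extraction is not shown), here is a small sketch of a maximum-likelihood decision rule with Gaussian class-conditional densities fitted to labelled feature vectors; the class names and feature values are invented.

```python
# A small sketch of a maximum-likelihood decision rule over texture feature
# vectors (Gaussian class-conditional densities fitted to training samples);
# the feature extraction itself (local Fourier histograms) is not shown.
import numpy as np

def fit_class(X):
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])   # regularised covariance
    return mu, cov

def loglik(x, mu, cov):
    d = x - mu
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + d @ np.linalg.solve(cov, d) + len(x) * np.log(2 * np.pi))

def ml_classify(x, classes):
    """Assign x to the class with the highest class-conditional likelihood."""
    return max(classes, key=lambda c: loglik(x, *classes[c]))

# Toy usage with made-up 2-D features for two hypothetical sediment classes.
rng = np.random.default_rng(4)
classes = {"sand":   fit_class(rng.normal([0, 0], 1.0, size=(200, 2))),
           "gravel": fit_class(rng.normal([3, 3], 1.0, size=(200, 2)))}
print(ml_classify(np.array([2.5, 2.8]), classes))
```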
Abstract:
Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a particular representation of permutationally invariant states known from spin coupling combined with convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.
Abstract:
The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades several learning algorithms have been proposed to learn probability distributions based on decomposable models due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of implied random variables. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structures. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they show competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
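For reference, the k = 2 base case that the fractal-tree algorithms generalise is the Chow-Liu procedure; a compact sketch is given below, in which pairwise mutual information is estimated from data and a maximum-weight spanning tree (Kruskal's algorithm) yields the maximum-likelihood tree. The toy chain model at the end is an assumption for demonstration.

```python
# A sketch of the k = 2 base case the fractal-tree algorithms extend: the
# Chow-Liu procedure builds a maximum-likelihood tree by taking a maximum
# spanning tree over pairwise mutual information estimated from data.
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Plug-in MI estimate for two discrete (integer-coded) variables."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(joint, (x, y), 1)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def chow_liu_tree(data):
    """data: (samples, variables) integer array; returns the tree edges (Kruskal)."""
    n = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n), 2)), reverse=True)
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]; u = parent[u]
        return u
    tree = []
    for mi, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy usage: a chain X0 -> X1 -> X2 of noisy binary copies should be recovered.
rng = np.random.default_rng(5)
x0 = rng.integers(0, 2, 5000)
x1 = x0 ^ (rng.random(5000) < 0.1).astype(int)
x2 = x1 ^ (rng.random(5000) < 0.1).astype(int)
print(chow_liu_tree(np.column_stack([x0, x1, x2]).astype(int)))
```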
Abstract:
The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own actions or of other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain-store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive-equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.
The first chapter is a game theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subjected to an error process. Inclusion of this belief into the equilibrium calculation provides for a richer class of reputation building possibilities than when perfect implementation is assumed.
In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model and other models with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternate models considered are essentially modifications to the standard sequential equilibrium. While some models perform quite well in that the nature of the modification seems to explain deviations from the sequential equilibrium quite well, the degree to which these modifications must be applied shows no consistency across different experimental designs.
The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
The second chapter characterizes a decision maker with sticky beliefs. That is, a decision maker who does not update enough in response to information, where enough means as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
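A minimal way to write the representation described above, with notation assumed purely for illustration (mu is the prior, mu(.|A) the Bayesian posterior after observing event A, and lambda the stickiness weight), is:

```latex
% Illustrative notation only; symbols are assumptions, not the chapter's own.
\[
  \mu_A(\cdot) \;=\; \lambda\,\mu(\cdot) \;+\; (1-\lambda)\,\mu(\cdot \mid A),
  \qquad \lambda \in [0,1].
\]
% lambda = 0 recovers Bayesian updating; lambda = 1 means beliefs never move.
```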
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.