973 results for Trimmed likelihood
Abstract:
We begin by providing observational evidence that the probability of encountering very high and very low annual tropical rainfall has increased significantly in the most recent decade (1998-present) compared with the preceding warming era (1979-1997). These changes over land and ocean are spatially coherent and comprise a rearrangement of very wet regions and a systematic expansion of dry zones. While the increased likelihood of extremes is consistent with a higher average temperature during the pause (compared with 1979-1997), it is important to note that the periods considered are also characterized by a transition from a relatively warm to a cold phase of the El Nino Southern Oscillation (ENSO). To probe the relation between contrasting phases of ENSO and extremes in accumulation further, a similar comparison is performed between 1960 and 1978 (another extended cold phase of ENSO) and the aforementioned warming era. Though limited by land-only observations, in this cold-to-warm transition, remarkably, a near-exact reversal of extremes is noted both statistically and geographically. This is despite the average temperature being higher in 1979-1997 compared with 1960-1978. Taking this evidence together, we propose that there is a fundamental mode of natural variability, involving the waxing and waning of extremes in accumulation of global tropical rainfall with different phases of ENSO.
Abstract:
Human detection is a complex problem owing to the variable poses that humans can adopt. Here, we address this problem in a sparse representation framework with an overcomplete scale-embedded dictionary. Histogram of oriented gradient features extracted from candidate image patches are sparsely represented by a dictionary that contains positive bases along with negative and trivial bases. The object is detected based on the proposed likelihood measure obtained from the distribution of these sparse coefficients. The likelihood is computed as the ratio of the contribution of the positive bases to that of the negative and trivial bases. The positive bases of the dictionary represent the object (human) at various scales. This enables us to detect the object at any scale in one shot and avoids multiple scans at different scales, which significantly reduces the computational complexity of the detection task. In addition to detecting the human, the method also recovers the scale at which the human appears, owing to the scale-embedded structure of the dictionary.
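The ratio-of-contributions likelihood described above can be illustrated with a short sketch. The Python snippet below is not the authors' implementation: the dictionaries are random stand-ins (identity columns for the trivial bases), an off-the-shelf lasso solver replaces whatever sparse coder the paper uses, and the regularisation weight is arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

def detection_likelihood(x, D_pos, D_neg, D_triv, alpha=0.05):
    """Sparse-code feature vector x over the concatenated dictionary and return
    the ratio of the positive bases' contribution to that of the negative and
    trivial bases (larger => more likely to be a human)."""
    D = np.hstack([D_pos, D_neg, D_triv])           # columns are dictionary bases
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)                                  # L1-penalised least squares: x ~ D c
    c = coder.coef_
    n_pos = D_pos.shape[1]
    pos = np.abs(c[:n_pos]).sum()
    rest = np.abs(c[n_pos:]).sum() + 1e-12           # avoid division by zero
    return pos / rest

# toy usage: a random HOG-like feature vector and placeholder dictionaries
rng = np.random.default_rng(0)
d = 128
x = rng.standard_normal(d)
D_pos = rng.standard_normal((d, 40))   # stand-ins for human templates at several scales
D_neg = rng.standard_normal((d, 40))   # stand-ins for background templates
D_triv = np.eye(d)                     # trivial (identity) bases
print(detection_likelihood(x, D_pos, D_neg, D_triv))
```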
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Owing to the high computational cost of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm, which combines these two ideas.
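As an illustration of the first of these estimators, the snippet below sketches annealed importance sampling (AIS) for the partition function of a small binary RBM, followed by the resulting average test log-likelihood. It follows the usual formulation with a zero-parameter base-rate RBM; the toy model, number of runs and temperature schedule are invented for the example and do not reproduce the paper's experimental setup.

```python
import numpy as np

def log_f(v, W, b, c, beta):
    """log of the unnormalised marginal of a binary RBM with parameters scaled by beta."""
    return beta * (v @ b) + np.logaddexp(0.0, beta * (v @ W + c)).sum(axis=-1)

def ais_log_z(W, b, c, n_runs=100, n_betas=1000, rng=None):
    """AIS estimate of log Z; the base model is the same RBM with all parameters
    set to zero (uniform over visibles), so log Z_A = (V + H) log 2."""
    rng = np.random.default_rng(0) if rng is None else rng
    V, H = W.shape
    betas = np.linspace(0.0, 1.0, n_betas)
    v = (rng.random((n_runs, V)) < 0.5).astype(float)        # exact sample from the base model
    log_w = np.zeros(n_runs)
    for k in range(1, n_betas):
        log_w += log_f(v, W, b, c, betas[k]) - log_f(v, W, b, c, betas[k - 1])
        # one Gibbs sweep leaving the beta_k model invariant
        ph = 1.0 / (1.0 + np.exp(-betas[k] * (v @ W + c)))
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = 1.0 / (1.0 + np.exp(-betas[k] * (h @ W.T + b)))
        v = (rng.random(pv.shape) < pv).astype(float)
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean()) + (V + H) * np.log(2.0)

def avg_test_log_likelihood(V_test, W, b, c, log_z):
    return np.mean(log_f(V_test, W, b, c, 1.0)) - log_z

# toy usage on a small random RBM
rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((20, 10)); b = np.zeros(20); c = np.zeros(10)
V_test = (rng.random((50, 20)) < 0.5).astype(float)
log_z = ais_log_z(W, b, c, rng=rng)
print(avg_test_log_likelihood(V_test, W, b, c, log_z))
```

For an RBM this small the partition function could also be computed exactly by enumerating the hidden states, which makes a useful sanity check for the estimator.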
Abstract:
In order to assess the safety of high-energy solid propellants, the effects of damage on deflagration-to-detonation transition (DDT) in a nitrate ester plasticized polyether (NEPE) propellant are investigated. DDT in the original and impacted propellants was compared in steel tubes instrumented with synchronous optoelectronic triodes and strain gauges. The experimental results indicate that microstructural damage in the propellant accelerates its transition from deflagration to detonation and increases the associated hazard. It is suggested that the mechanical properties of the propellant should be improved to restrain such damage so that the likelihood of DDT might be reduced.
Abstract:
Modern technology has allowed real-time data collection in a variety of domains, ranging from environmental monitoring to healthcare. Consequently, there is a growing need for algorithms capable of performing inferential tasks in an online manner, continuously revising their estimates to reflect the current status of the underlying process. In particular, we are interested in constructing online and temporally adaptive classifiers capable of handling the possibly drifting decision boundaries arising in streaming environments. We first make a quadratic approximation to the log-likelihood that yields a recursive algorithm for fitting logistic regression online. We then suggest a novel way of equipping this framework with self-tuning forgetting factors. The resulting scheme is capable of tracking changes in the underlying probability distribution, adapting the decision boundary appropriately and hence maintaining high classification accuracy in dynamic or unstable environments. We demonstrate the scheme's effectiveness in both real and simulated streaming environments.
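The recursive fit can be sketched as a discounted Newton-style update, analogous to recursive least squares. The class below uses a fixed forgetting factor lam rather than the paper's self-tuning rule, and the class name, prior variance and toy drift experiment are invented for illustration.

```python
import numpy as np

class ForgetfulLogisticRegression:
    """Online logistic regression updated by a recursive Newton step on a quadratic
    approximation of the log-likelihood, discounted by a forgetting factor lam in
    (0, 1]; lam = 1 recovers ordinary online fitting with no forgetting."""

    def __init__(self, dim, lam=0.99, init_var=10.0):
        self.w = np.zeros(dim)
        self.P = init_var * np.eye(dim)   # inverse of the discounted Hessian
        self.lam = lam

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-x @ self.w))

    def update(self, x, y):
        p = self.predict_proba(x)
        r = max(p * (1.0 - p), 1e-6)                     # local curvature of the log-likelihood
        Px = self.P @ x
        denom = self.lam + r * (x @ Px)
        # discounted rank-one (Sherman-Morrison) update of the inverse Hessian
        self.P = (self.P - np.outer(Px, Px) * r / denom) / self.lam
        # Newton-like correction towards the new observation
        self.w = self.w + self.P @ x * (y - p)
        return p

# toy usage: a decision boundary that drifts halfway through the stream
rng = np.random.default_rng(0)
clf = ForgetfulLogisticRegression(dim=2, lam=0.98)
w_true = np.array([2.0, -1.0])
for t in range(2000):
    if t == 1000:
        w_true = np.array([-1.0, 2.0])                   # concept drift
    x = rng.standard_normal(2)
    y = float(rng.random() < 1.0 / (1.0 + np.exp(-x @ w_true)))
    clf.update(x, y)
print(clf.w)
```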
Abstract:
The inhomogeneous Poisson process is a point process that has varying intensity across its domain (usually time or space). For nonparametric Bayesian modeling, the Gaussian process is a useful way to place a prior distribution on this intensity. The combination of a Poisson process and a GP is known as a Gaussian Cox process, or doubly stochastic Poisson process. Likelihood-based inference in these models requires an intractable integral over an infinite-dimensional random function. In this paper we present the first approach to Gaussian Cox processes in which it is possible to perform inference without introducing approximations or finite-dimensional proxy distributions. We call our method the Sigmoidal Gaussian Cox Process, which uses a generative model for Poisson data to enable tractable inference via Markov chain Monte Carlo. We compare our method to competing methods on synthetic data and apply it to several real-world data sets.
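The generative model behind the Sigmoidal Gaussian Cox Process can be sketched by thinning: a homogeneous Poisson process at an upper-bound rate is simulated, a GP is evaluated at the candidate points, and each point is kept with probability given by the sigmoid of the GP value. The snippet below shows only this forward simulation on an interval, under an assumed RBF covariance; the MCMC inference scheme is not reproduced.

```python
import numpy as np

def rbf_kernel(s, t, variance=1.0, lengthscale=0.2):
    d = s[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_sgcp(lam_star, T=1.0, rng=None):
    """Draw one realisation from a sigmoidal Gaussian Cox process on [0, T]:
    lambda(s) = lam_star * sigmoid(g(s)), g ~ GP, generated by thinning a
    homogeneous Poisson process of rate lam_star."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = rng.poisson(lam_star * T)                     # events of the dominating process
    s = np.sort(rng.uniform(0.0, T, size=n))
    K = rbf_kernel(s, s) + 1e-8 * np.eye(n)           # jitter for numerical stability
    g = np.linalg.cholesky(K) @ rng.standard_normal(n)
    keep = rng.random(n) < 1.0 / (1.0 + np.exp(-g))   # sigmoid thinning
    return s[keep], s[~keep], g                       # observed events, thinned events, GP values

events, thinned, _ = sample_sgcp(lam_star=50.0)
print(len(events), "accepted events,", len(thinned), "thinned events")
```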
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear, non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
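To make the object of study concrete, the sketch below implements an O(N^2) marginal particle approximation of the filter derivative in the spirit of the algorithm analysed here, for an invented linear-Gaussian AR(1) model with the autoregressive coefficient as the parameter; it is not the paper's stochastic volatility example, and all hyperparameters are arbitrary.

```python
import numpy as np

def particle_filter_derivative(y, phi, sig_v=1.0, sig_w=1.0, n_particles=200, rng=None):
    """O(N^2) particle approximation of the score d/dphi log p(y_{1:t}) for the toy
    model x_t = phi*x_{t-1} + v_t, y_t = x_t + w_t, returned after each observation."""
    rng = np.random.default_rng(0) if rng is None else rng
    N = n_particles
    x = rng.normal(0.0, sig_v / np.sqrt(max(1e-6, 1.0 - phi ** 2)), size=N)
    w = np.full(N, 1.0 / N)
    alpha = np.zeros(N)                               # per-particle derivative statistics
    scores = []
    for yt in y:
        # propose from the marginal predictive: pick ancestors, then propagate
        anc = rng.choice(N, size=N, p=w)
        x_new = phi * x[anc] + sig_v * rng.standard_normal(N)
        # N x N transition densities f(x_new[i] | x[j]) and their phi-gradients
        diff = x_new[:, None] - phi * x[None, :]
        f = np.exp(-0.5 * (diff / sig_v) ** 2)        # common constants cancel in the ratio
        grad_f = diff * x[None, :] / sig_v ** 2       # d/dphi log f
        num = f * w[None, :]
        alpha = (num * (alpha[None, :] + grad_f)).sum(axis=1) / num.sum(axis=1)
        # observation weights (the gradient of log g w.r.t. phi is zero here)
        logw = -0.5 * ((yt - x_new) / sig_w) ** 2
        w = np.exp(logw - logw.max()); w /= w.sum()
        x = x_new
        scores.append(float(w @ alpha))
    return np.array(scores)

# toy usage: simulate data at phi = 0.8 and evaluate the score at a nearby value
rng = np.random.default_rng(1)
T, phi_true = 500, 0.8
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi_true * x_true[t - 1] + rng.standard_normal()
y = x_true + rng.standard_normal(T)
print(particle_filter_derivative(y, phi=0.7, rng=rng)[-1])   # should tend to be positive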
Abstract:
Although approximate Bayesian computation (ABC) has become a popular technique for performing parameter estimation when the likelihood function is analytically intractable, there has not yet been a complete investigation of the theoretical properties of the resulting estimators. In this paper we give a theoretical analysis of the asymptotic properties of ABC-based parameter estimators for hidden Markov models and show that ABC-based estimators satisfy asymptotically biased versions of the standard results in the statistical literature.
Abstract:
Approximate Bayesian computation (ABC) is a popular technique for analysing data under complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural way of implementing our likelihood-based ABC procedures.
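A minimal sketch of the quantity being maximised: the ABC likelihood at a parameter value is estimated by simulating data sets, summarising them, and averaging a Gaussian kernel of the distance to the observed summary. The toy Gaussian model, the kernel bandwidth eps and the grid search below are illustrative choices, not the sequential Monte Carlo implementation discussed in the paper.

```python
import numpy as np

def abc_log_likelihood(theta, y_obs, simulate, summarise, eps=0.5, n_sims=500, rng=None):
    """Monte Carlo estimate of the ABC log-likelihood: the average of a Gaussian
    kernel K_eps applied to the distance between observed and simulated summaries."""
    rng = np.random.default_rng(0) if rng is None else rng
    s_obs = summarise(y_obs)
    dists = np.array([np.linalg.norm(summarise(simulate(theta, rng)) - s_obs)
                      for _ in range(n_sims)])
    kern = np.exp(-0.5 * (dists / eps) ** 2)
    return np.log(kern.mean() + 1e-300)

# toy model: y ~ N(theta, 1), summarised by its sample mean; ABC-MLE by grid search
def simulate(theta, rng): return rng.normal(theta, 1.0, size=100)
def summarise(y): return np.array([y.mean()])

rng = np.random.default_rng(2)
y_obs = simulate(1.5, rng)
grid = np.linspace(0.0, 3.0, 61)
ll = [abc_log_likelihood(th, y_obs, simulate, summarise, rng=np.random.default_rng(3))
      for th in grid]
print("approximate ABC-MLE:", grid[int(np.argmax(ll))])
```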
Abstract:
Approximate Bayesian computation (ABC) has become a popular technique for facilitating Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a Hidden Markov Model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm to both fit and sample from our ABC approximation of the target probability density. Empirically, this approach is shown to be more accurate with respect to the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
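A generic ABC bootstrap filter of this kind can be sketched as follows: the intractable observation density is replaced by a Gaussian kernel comparing a pseudo-observation simulated from the model with the actual observation. The toy AR(1) example, the kernel bandwidth eps and the helper lambdas are invented for illustration and do not reproduce the paper's constrained-lasso portfolio application.

```python
import numpy as np

def abc_particle_filter(y, propagate, simulate_obs, eps=0.2, n_particles=500, rng=None):
    """ABC approximation of the HMM filter: the (possibly intractable) observation
    density is replaced by a Gaussian kernel comparing a simulated pseudo-observation
    with the real one."""
    rng = np.random.default_rng(0) if rng is None else rng
    N = n_particles
    x = rng.standard_normal(N)                       # assumed initial distribution
    means = []
    for yt in y:
        x = propagate(x, rng)
        y_sim = simulate_obs(x, rng)                 # one pseudo-observation per particle
        logw = -0.5 * ((y_sim - yt) / eps) ** 2      # ABC kernel in place of g(y_t | x_t)
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(float(w @ x))                   # filtered mean estimate
        x = x[rng.choice(N, size=N, p=w)]            # multinomial resampling
    return np.array(means)

# toy usage: AR(1) state with Gaussian observations (pretending the density is unknown)
rng = np.random.default_rng(4)
T = 200
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + 0.5 * rng.standard_normal()
y = x_true + 0.3 * rng.standard_normal(T)
est = abc_particle_filter(
    y,
    propagate=lambda x, r: 0.9 * x + 0.5 * r.standard_normal(x.size),
    simulate_obs=lambda x, r: x + 0.3 * r.standard_normal(x.size),
    rng=rng,
)
print(np.mean(np.abs(est - x_true)))
```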
Abstract:
The Information Sampling Model holds that the probability of a given piece of information being mentioned in a group is higher when it is available to many members than to only one. Information shared in the belief matrix that pre-exists the social interaction is more likely to be expressed, repeated and validated by consent, and it influences the group product. Objectives: to quantify the impact of the shared belief matrix on processes of meaning negotiation and to understand this process qualitatively. Subjects: 225 Psychology students from the Universidad Nacional de Mar del Plata participated, reaching consensus on the significant relationships among 9 academic concepts. Shared prior knowledge was operationalized using Sociocognitive Centrality. The mapping of the participants' semantic networks, their mutual influence and evolution across the different stages of the negotiation, the analytic treatment of qualitative and quantitative comparison, and their graphical resolution were carried out by means of special methods developed on the basis of Social Network Analysis. Results: The predictions of social influence among peers and the visualization of the evolution of the semantic networks of participants and groups yield robust and suggestive results for application to diverse settings of social and communicational interaction.
Abstract:
The Partial Credit Model (PCM) of Item Response Theory (IRT) was applied to the item analysis of a scale measuring Affect toward Mathematics. This variable describes Psychology students' interest in engaging in mathematics-related activities and the feelings associated with using its concepts. The test consists of 8 items with a 6-option Likert response format. Participants were 1875 Psychology students from the Universidad de Buenos Aires (Argentina), of whom 82% were women. The internal consistency analysis yielded a highly satisfactory index (alpha = .91). The unidimensionality condition required by the model was verified through an exploratory factor analysis. All IRT-based analyses were carried out with the Winsteps program. The model parameters were estimated by Joint Maximum Likelihood. The fit of the PCM was satisfactory for all items. The Test Information Function was high over a wide range of levels of the latent trait. One item showed a reversal in two threshold parameters; as a consequence, 1 of that item's 6 categories was never the most probable in any interval of the latent trait scale. The implications of this finding for evaluating the psychometric quality of the item are discussed. The results of this study allowed a deeper analysis of the construct and provided validity evidence based on the internal structure of the scale.
Abstract:
Many problems in control and signal processing can be formulated as sequential decision problems for general state-space models. However, except for some simple models, one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we investigate problems where Sequential Monte Carlo (SMC) methods can be combined with a gradient-based search to provide solutions to online optimisation problems. We summarise the main contributions of the thesis as follows. Chapter 4 focuses on solving the sensor scheduling problem when cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature that only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which leads to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state-space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem using two maximum likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses a nonparametric approximation of Belief Propagation. The algorithms are successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.
Abstract:
Abstract: Focusing on Obadiah and Psalm 137, this article provides biblical evidence for an Edomite treaty betrayal of Judah during the Babylonian crisis ca. 588–586 B.C.E. After setting a context that includes the use of treaties in the ancient Near East to establish expectations for political relationships and the likelihood that Edom could operate as a political entity in the Judahite Negev during the Babylonian assault, this article demonstrates that Obadiah’s poetics include a density of inverted form and content (a reversal motif) pointing to treaty betrayal. Obadiah’s modifications of Jeremiah 49, a text with close thematic and terminological parallels, evidence an Edomite treaty betrayal of Judah. Moreover, the study shows that Obadiah is replete with treaty allusions. A study of Psalm 137 in comparison with Aramaic treaty texts from Sefire reveals that this difficult psalm also evidences a treaty betrayal by Edom and includes elements appropriate for treaty curses. The article closes with a discussion of piecemeal data from a few other biblical texts, a criticism of the view that Edom was innocent during the Babylonian crisis, and a suggestion that this treaty betrayal may have contributed to the production of some anti-Edom biblical material.