981 results for penalized likelihood


Relevance:

10.00%

Publisher:

Abstract:

Modern technology has allowed real-time data collection in a variety of domains, ranging from environmental monitoring to healthcare. Consequently, there is a growing need for algorithms capable of performing inferential tasks in an online manner, continuously revising their estimates to reflect the current status of the underlying process. In particular, we are interested in constructing online and temporally adaptive classifiers capable of handling the possibly drifting decision boundaries arising in streaming environments. We first make a quadratic approximation to the log-likelihood that yields a recursive algorithm for fitting logistic regression online. We then suggest a novel way of equipping this framework with self-tuning forgetting factors. The resulting scheme is capable of tracking changes in the underlying probability distribution, adapting the decision boundary appropriately and hence maintaining high classification accuracy in dynamic or unstable environments. We demonstrate the scheme's effectiveness in both real and simulated streaming environments. © Springer-Verlag 2009.
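The recursive fit described above — a quadratic approximation to the log-likelihood updated one observation at a time, with exponential down-weighting of past data — can be sketched roughly as follows. This is a minimal illustration assuming a fixed forgetting factor `lam`; the paper's self-tuning forgetting factors are not reproduced, and the class name and parameters are chosen for the example.

```python
import numpy as np

def sigmoid(z):
    # Clipped for numerical safety when weights grow large.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

class OnlineLogisticRegression:
    """Recursive logistic regression with a fixed forgetting factor.

    A Newton-style quadratic approximation to the per-observation
    log-likelihood, with exponential forgetting so that old data is
    gradually discarded and a drifting decision boundary can be tracked.
    """

    def __init__(self, dim, lam=0.99, ridge=1e-2):
        self.w = np.zeros(dim)
        self.R = ridge * np.eye(dim)  # approximate Hessian (information matrix)
        self.lam = lam

    def update(self, x, y):
        p = sigmoid(self.w @ x)
        # Discount old curvature, add the new observation's curvature.
        self.R = self.lam * self.R + p * (1 - p) * np.outer(x, x)
        # Newton-style step on the per-observation log-likelihood gradient.
        self.w += np.linalg.solve(self.R, (y - p) * x)
        return p  # predictive probability before seeing the label
```

With `lam < 1` the effective window is roughly `1 / (1 - lam)` observations, which is what allows the boundary to adapt in a drifting stream.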

Relevance:

10.00%

Publisher:

Abstract:

The inhomogeneous Poisson process is a point process that has varying intensity across its domain (usually time or space). For nonparametric Bayesian modeling, the Gaussian process is a useful way to place a prior distribution on this intensity. The combination of a Poisson process and a GP is known as a Gaussian Cox process, or doubly-stochastic Poisson process. Likelihood-based inference in these models requires an intractable integral over an infinite-dimensional random function. In this paper we present the first approach to Gaussian Cox processes in which it is possible to perform inference without introducing approximations or finite-dimensional proxy distributions. We call our method the Sigmoidal Gaussian Cox Process, which uses a generative model for Poisson data to enable tractable inference via Markov chain Monte Carlo. We compare our method to competing methods on synthetic data and apply it to several real-world data sets. Copyright 2009.
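The generative model behind such a sigmoidal Cox process can be sketched by thinning: events of a homogeneous Poisson process with rate `lam_star` are kept with probability given by a sigmoid of a latent function. In this hedged sketch `g` is any callable standing in for a Gaussian-process draw; the function and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def sample_sgcp_events(T, lam_star, g, rng):
    """Draw events on [0, T] from a sigmoidal Cox process by thinning.

    The intensity is lam_star * sigmoid(g(t)), so candidates from a
    homogeneous Poisson process with rate lam_star are each kept with
    probability sigmoid(g(t)).
    """
    n = rng.poisson(lam_star * T)           # homogeneous candidate count
    t = rng.uniform(0.0, T, size=n)         # candidate locations
    keep = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-g(t)))  # sigmoid thinning
    return np.sort(t[keep])
```

Because the sigmoid bounds the intensity by `lam_star`, this construction sidesteps the intractable integral over the latent function, which is the property the paper's MCMC scheme exploits.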

Relevance:

10.00%

Publisher:

Abstract:

Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, to obtain maximum likelihood estimates of the model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
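For readers unfamiliar with the particle methods the abstract refers to, a basic bootstrap particle filter for a simple linear-Gaussian state-space model might look as follows. This is only the underlying SMC machinery, with an illustrative model; the filter-derivative estimator of Poyiadjis et al. analysed in the paper is substantially more involved and is not reproduced here.

```python
import numpy as np

def bootstrap_filter(ys, phi, sigma_v, sigma_w, n_particles, rng):
    """Bootstrap particle filter for the toy model
        x_t = phi * x_{t-1} + sigma_v * v_t,   y_t = x_t + sigma_w * w_t,
    with v_t, w_t standard Gaussian. Returns the filter-mean estimates.
    """
    n = n_particles
    x = rng.normal(0.0, sigma_v, size=n)       # initial particle cloud
    means = []
    for y in ys:
        x = phi * x + sigma_v * rng.normal(size=n)     # propagate particles
        logw = -0.5 * ((y - x) / sigma_w) ** 2         # Gaussian observation weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ x)                            # weighted filter mean
        x = x[rng.choice(n, size=n, p=w)]              # multinomial resampling
    return np.array(means)
```

The resampling step is what keeps the particle approximation stable over time; the paper's theoretical study concerns the analogous stability of the derivative estimates.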

Relevance:

10.00%

Publisher:

Abstract:

Although approximate Bayesian computation (ABC) has become a popular technique for performing parameter estimation when the likelihood function is analytically intractable, there has not yet been a complete investigation of the theoretical properties of the resulting estimators. In this paper we give a theoretical analysis of the asymptotic properties of ABC-based parameter estimators for hidden Markov models and show that ABC-based estimators satisfy asymptotically biased versions of the standard results in the statistical literature.

Relevance:

10.00%

Publisher:

Abstract:

Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
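A crude illustration of the idea of maximizing an ABC approximation to the likelihood: at each parameter value, the approximate likelihood is estimated as the probability that simulated data falls within a tolerance of the observation, and that estimate is maximized. The toy model, distance, and grid search below are illustrative assumptions; the paper's sequential Monte Carlo implementation is not attempted here.

```python
import numpy as np

def abc_likelihood(theta, y_obs, simulate, n_sims, eps, rng):
    """Monte Carlo estimate of the ABC likelihood at theta: the
    acceptance probability P(|y_sim - y_obs| < eps) under the model."""
    sims = simulate(theta, n_sims, rng)
    return np.mean(np.abs(sims - y_obs) < eps)

def abc_mle(y_obs, simulate, grid, n_sims, eps, rng):
    """Grid-search ABC maximum-likelihood estimate (a crude sketch of
    the likelihood-based ABC idea; not the paper's SMC procedure)."""
    lik = [abc_likelihood(th, y_obs, simulate, n_sims, eps, rng) for th in grid]
    return grid[int(np.argmax(lik))]
```

Shrinking `eps` reduces the approximation bias at the cost of fewer acceptances, the usual ABC trade-off.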

Relevance:

10.00%

Publisher:

Abstract:

Approximate Bayesian computation (ABC) has become a popular technique to facilitate Bayesian inference for complex models. In this article we present an ABC approximation designed to perform biased filtering for a Hidden Markov Model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm to both fit and sample from our ABC approximation of the target probability density. This approach is shown, empirically, to be more accurate with respect to the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
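The core ABC filtering idea — replacing an intractable observation density with an indicator that a simulated pseudo-observation falls within a tolerance `eps` of the real observation — might be sketched as follows. The toy model, the uniform kernel, and all names are illustrative assumptions, not the article's algorithm; note how the bias shrinks with `eps` at the cost of more rejected particles.

```python
import numpy as np

def abc_particle_filter(ys, propagate, simulate_obs, eps, n_particles, rng):
    """ABC filter for an HMM whose observation density cannot be
    evaluated: each particle simulates a pseudo-observation and is
    weighted by the indicator that it lies within eps of the data."""
    x = np.zeros(n_particles)
    means = []
    for y in ys:
        x = propagate(x, rng)                         # state transition
        y_sim = simulate_obs(x, rng)                  # pseudo-observations
        w = (np.abs(y_sim - y) < eps).astype(float)   # uniform ABC kernel
        if w.sum() == 0:                              # all rejected: fall back
            w[:] = 1.0
        w /= w.sum()
        means.append(w @ x)
        x = x[rng.choice(n_particles, size=n_particles, p=w)]  # resample
    return np.array(means)
```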

Relevance:

10.00%

Publisher:

Abstract:

Abstract: The Information Sampling model holds that the probability of a piece of information being mentioned in a group is greater when it is available to many members than to only one. Information shared in the belief matrix that pre-exists the social interaction is more likely to be expressed, repeated, and validated by consent, and it influences the group product. Objectives: to quantify the impact of the shared belief matrix on processes of negotiation of meaning, and to understand this process qualitatively. Participants: 225 Psychology students at the Universidad Nacional de Mar del Plata reached consensus on the significant relations among 9 academic concepts. Shared prior knowledge was operationalized using Sociocognitive Centrality. The mapping of the participants' semantic networks, their mutual influence and evolution across the stages of the negotiation, the analytic treatment of qualitative and quantitative comparison, and its graphical resolution were carried out with special methods developed on Social Network Analysis. Results: the predictions of peer social influence and the visualization of the evolution of the participants' and groups' semantic networks yield robust results, suggestive for application to diverse settings of social and communicational interaction.

Relevance:

10.00%

Publisher:

Abstract:

Abstract: The Partial Credit Model (PCM) of Item Response Theory (IRT) was applied to the item analysis of a scale measuring Affect toward Mathematics. This variable describes Psychology students' interest in engaging in activities linked to mathematics and the feelings associated with the use of its concepts. The test consists of 8 items with a 6-option Likert response format. Participants were 1875 Psychology students at the Universidad de Buenos Aires (Argentina), 82% of them women. Internal-consistency analysis yielded a highly satisfactory index (alpha = .91). The unidimensionality condition required by the model was verified through an exploratory factor analysis. All IRT-based analyses were carried out with the Winsteps program. Model parameters were estimated by Joint Maximum Likelihood. The fit of the PCM was satisfactory for all items. The Test Information Function was high across a wide range of latent-trait levels. One item showed a reversal in two threshold parameters; as a consequence, 1 of that item's 6 categories was not maximally probable in any interval of the latent-trait scale. The implications of this finding for evaluating the psychometric quality of the item are discussed. The results of this study allowed a deeper analysis of the construct and provided validity evidence based on the internal structure of the scale.

Relevance:

10.00%

Publisher:

Abstract:

Many problems in control and signal processing can be formulated as sequential decision problems for general state space models. However, except for some simple models one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we have investigated problems where Sequential Monte Carlo (SMC) methods can be combined with a gradient-based search to provide solutions to online optimisation problems. We summarise the main contributions of the thesis as follows. Chapter 4 focuses on solving the sensor scheduling problem when cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature that only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which lead to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a similar structure to the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models were combined with state space models for the purpose of online decentralised inference. We have concentrated on the distributed parameter estimation problem using two maximum likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses nonparametric approximations for Belief Propagation. The algorithms were successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.
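The recurring pattern in the thesis — gradient-based search driven by noisy Monte Carlo evaluations of an objective — can be illustrated very loosely with a simultaneous-perturbation stochastic gradient routine on a toy objective. This is not any algorithm from the thesis (which uses SMC to evaluate the quantities being optimised); it is only a hedged sketch of stochastic gradient search under noisy function evaluations, with all step sizes and names chosen for the example.

```python
import numpy as np

def spsa_maximize(f_noisy, theta0, n_iters, a=0.1, c=0.2, rng=None):
    """Simultaneous-perturbation stochastic gradient ascent on a noisy
    objective: each iteration estimates the gradient from two noisy
    evaluations along a random +/-1 perturbation direction."""
    if rng is None:
        rng = np.random.default_rng()
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, n_iters + 1):
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # random direction
        diff = f_noisy(theta + c * delta) - f_noisy(theta - c * delta)
        theta += (a / k ** 0.602) * (diff / (2.0 * c)) * delta  # decaying step
    return theta
```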

Relevance:

10.00%

Publisher:

Abstract:

Abstract: Focusing on Obadiah and Psalm 137, this article provides biblical evidence for an Edomite treaty betrayal of Judah during the Babylonian crisis ca. 588–586 B.C.E. After setting a context that includes the use of treaties in the ancient Near East to establish expectations for political relationships and the likelihood that Edom could operate as a political entity in the Judahite Negev during the Babylonian assault, this article demonstrates that Obadiah’s poetics include a density of inverted form and content (a reversal motif) pointing to treaty betrayal. Obadiah’s modifications of Jeremiah 49, a text with close thematic and terminological parallels, evidence an Edomite treaty betrayal of Judah. Moreover, the study shows that Obadiah is replete with treaty allusions. A study of Psalm 137 in comparison with Aramaic treaty texts from Sefire reveals that this difficult psalm also evidences a treaty betrayal by Edom and includes elements appropriate for treaty curses. The article closes with a discussion of piecemeal data from a few other biblical texts, a criticism of the view that Edom was innocent during the Babylonian crisis, and a suggestion that this treaty betrayal may have contributed to the production of some anti-Edom biblical material.

Relevance:

10.00%

Publisher:

Abstract:

Most fisheries agencies conduct biological and economic assessments independently. This independent conduct may lead to situations in which economists reject management plans proposed by biologists. The objective of this study is to show how to find optimal strategies that may satisfy both biologists' and economists' conditions. In particular, we characterize optimal fishing trajectories that maximize the present value of a discounted economic indicator, taking into account the age structure of the population as in stock assessment methodologies. This approach is applied to the Northern Stock of Hake. Our main empirical findings are: i) optimal policy may be far away from any of the classical scenarios proposed by biologists; ii) the more the future is discounted, the higher the likelihood of finding contradictions between scenarios proposed by biologists and conclusions from economic analysis; iii) optimal management reduces the risk of the stock falling under precautionary levels, especially if the future is not discounted too much; and iv) the optimal stationary fishing rate may be very different depending on the economic indicator used as reference.

Relevance:

10.00%

Publisher:

Abstract:

This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule using U.S. and Eurozone data. The estimation procedure implemented is a classical method based on the indirect inference principle. An unrestricted VAR is considered as the auxiliary model. On the one hand, the estimation method proposed overcomes some of the shortcomings of using a structural VAR as the auxiliary model in order to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a maximum-likelihood Bayesian approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results in the U.S. show that the fit of the NKM under an optimal monetary plan is much worse than the fit of the NKM model assuming a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small with respect to assuming either a forward-looking Taylor rule or an optimal plan.

Relevance:

10.00%

Publisher:

Abstract:

Using the ECHP, we explored the determinants of having the first child in Spain. Our main goal was to study the relation between female wages and the decision to enter motherhood. Since the offered wage of non-working women is not observed, we estimate it and impute a potential wage to each woman (working and non-working). This potential wage enables us to investigate the effect of wages (the opportunity cost of time not worked and dedicated to children) on the decision to have the first child, for both workers and non-workers. Contrary to previous results, we found that female wages are positively related to the likelihood of having the first child. This result suggests that the income effect outweighs the substitution effect when non-participants' opportunity cost is also taken into account.

Relevance:

10.00%

Publisher:

Abstract:

Using data from the Spanish Labor Force Survey (Encuesta de Población Activa) from 1999 through 2004, we explore the role of regional employment opportunities in explaining the increasing immigrant flows of recent years despite the limited internal mobility on the part of natives. Subsequently, we investigate the policy question of whether immigration has helped reduce unemployment rate disparities across Spanish regions by attracting immigrant flows to regions offering better employment opportunities. Our results indicate that immigrants choose to reside in regions with larger employment rates and where their probability of finding a job is higher. In particular, and despite some differences depending on their origin, immigrants appear generally more responsive than their native counterparts to a higher likelihood of informal, self-, or indefinite employment. More importantly, insofar as the vast majority of immigrants locate in regions characterized by higher employment rates, immigration contributes to greasing the wheels of the Spanish labor market by narrowing regional unemployment rate disparities.