967 results for Intractable Likelihood
Abstract:
Local polynomial approximation of data is an approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that are convolved with the data to produce the polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimizing the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise conditions. In this paper, we robustify the SG filter for applications involving noise with a heavy-tailed distribution. The optimal filtering criterion is achieved by ℓ1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. Interestingly, at each stage of the iteration we solve a weighted SG filter by minimizing an ℓ2 norm, yet the process converges to the ℓ1-minimized output. The results show consistent improvement over the standard SG filter.
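To make the IRLS idea concrete, here is a minimal numpy sketch of an ℓ1-robust, SG-style local polynomial smoother: each window is fit by repeated weighted least squares with weights 1/|r|, so the weighted ℓ2 solutions converge toward the least-absolute-deviations fit. The window length, polynomial order, and heavy-tailed test noise below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def robust_sg_smooth(x, window=11, order=3, n_iter=10, eps=1e-6):
    """IRLS sketch of an l1-robust Savitzky-Golay-style smoother."""
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)      # local design matrix
    y = np.empty_like(x, dtype=float)
    xp = np.pad(x.astype(float), half, mode="edge")
    for k in range(len(x)):
        seg = xp[k:k + window]
        w = np.ones(window)
        for _ in range(n_iter):
            sw = np.sqrt(w)
            # weighted least squares: minimise sum_i w_i * r_i^2
            coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * seg, rcond=None)
            r = seg - A @ coef
            w = 1.0 / np.maximum(np.abs(r), eps)      # IRLS weights for the l1 criterion
        y[k] = coef[0]                                # fitted polynomial at the window centre
    return y

# toy usage: impulsive (heavy-tailed) noise on a smooth signal
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * np.random.standard_t(df=1.5, size=t.size)
smoothed = robust_sg_smooth(noisy, window=15, order=3)
```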
Abstract:
Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization for TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB SNR gain over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, achieving a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give approximately a 10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
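The following scipy sketch illustrates the kind of Voronoi-based granular media model referred to above: grain centres on a regular lattice are perturbed by Gaussian jitter and the Voronoi tessellation of the perturbed centres defines the grain boundaries. The lattice pitch, jitter standard deviation, and grid size are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_grain_medium(n_side=64, pitch=10.0, jitter_sigma=2.0, seed=0):
    """Minimal sketch of a Voronoi-based granular medium (units: nm)."""
    rng = np.random.default_rng(seed)
    gx, gy = np.meshgrid(np.arange(n_side) * pitch, np.arange(n_side) * pitch)
    centres = np.column_stack([gx.ravel(), gy.ravel()])
    centres = centres + rng.normal(scale=jitter_sigma, size=centres.shape)
    vor = Voronoi(centres)                         # grain boundaries
    jitter = centres[:, 0] - gx.ravel()            # down-track grain-position jitter
    return vor, jitter

vor, jitter = voronoi_grain_medium()
print("down-track jitter std (nm):", jitter.std())
```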
Abstract:
In this paper, we propose an anomaly detection algorithm based on Histograms of Oriented Motion Vectors (HOMV) [1] in a sparse representation framework. Usual behavior is learned at each location by sparsely representing the HOMVs over learnt normal feature bases obtained using an online dictionary learning algorithm. An anomaly is then detected based on the likelihood of occurrence of the sparse coefficients at that location. The proposed approach is found to be more robust than existing methods, as demonstrated in experiments on the UCSD Ped1 and UCSD Ped2 datasets.
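A hedged scikit-learn sketch of this general pipeline follows: learn a dictionary from descriptors of normal behavior, then score new descriptors against it. The paper scores anomalies via the likelihood of the sparse coefficients; this sketch substitutes the simpler reconstruction residual, and the random training matrix only stands in for HOMV descriptors so the snippet runs.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

# Learn "normal" feature bases from training descriptors (random stand-ins here).
X_train = np.abs(np.random.randn(500, 32))
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
D = dico.fit(X_train).components_                  # learnt dictionary atoms

def anomaly_score(x, D, alpha=1.0):
    """Score a descriptor by how poorly the learnt normal bases explain it."""
    code = sparse_encode(x.reshape(1, -1), D, algorithm="lasso_lars", alpha=alpha)
    recon = code @ D
    return float(np.sum((x - recon) ** 2))         # high residual -> likely anomalous

print(anomaly_score(np.abs(np.random.randn(32)), D))
```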
Abstract:
Selection of relevant features is an open problem in brain-computer interfacing (BCI) research. Features extracted from brain signals are often high dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves classifier performance and reduces the computational cost of the system. In this study, we use a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor-imagery electroencephalography (EEG) based BCI dataset. We employ the Discrete Wavelet Transform to obtain a high-dimensional feature set and classify it using the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
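As an illustration of the feature-extraction step, here is a minimal pywt sketch that builds wavelet-energy features from a single EEG channel. The wavelet family, decomposition level, and sub-band statistics are assumptions; the study only states that Discrete Wavelet Transform features were used.

```python
import numpy as np
import pywt

def dwt_features(trial, wavelet="db4", level=4):
    """Wavelet-energy features from one EEG channel (a 1-D trial)."""
    coeffs = pywt.wavedec(trial, wavelet=wavelet, level=level)
    feats = []
    for c in coeffs:                               # approximation + detail sub-bands
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return np.array(feats)

# usage on a synthetic 2 s trial sampled at 250 Hz
x = np.random.randn(500)
print(dwt_features(x).shape)
```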
Abstract:
Northeast India and its adjoining areas are characterized by very high seismic activity. According to the Indian seismic code, the region falls under seismic zone V, which represents the highest seismic-hazard level in the country. This region has experienced a number of great earthquakes, such as the Assam (1950) and Shillong (1897) earthquakes, that caused huge devastation in the entire northeast and adjacent areas through flooding, landslides, liquefaction, and damage to roads and buildings. In this study, an attempt has been made to find the probability of occurrence of a major earthquake (M_w > 6) in this region using an updated earthquake catalog compiled from different sources. After dividing the catalog into six seismic regions based on different tectonic features and seismogenic factors, the probability of occurrence was estimated using three models: the lognormal, Weibull, and gamma distributions. We calculated the log-likelihood (ln L) for all six regions and for the entire northeast under all three stochastic models; a higher value of ln L indicates a better-fitting model, and a lower value a worse one. The results show that different models suit different seismic zones, but the majority follow the lognormal distribution, which is better for forecasting magnitude size. According to the results, the Weibull model shows the highest conditional probabilities among the three models for both small and large elapsed times T and time intervals t, whereas the lognormal model shows the lowest and the gamma model intermediate probabilities. Only for elapsed time T = 0 does the lognormal model show the highest conditional probabilities at small time intervals (t = 3-15 yr); the opposite is observed at larger time intervals (t = 15-25 yr), where the Weibull model gives the highest probabilities. Based on this study, the Indo-Burma Range and the Eastern Himalaya show a high probability of occurrence in the 5 yr period 2012-2017, with >90% probability.
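The model comparison described above can be sketched with scipy.stats: fit each candidate distribution to the inter-event times of a region and compare the resulting log-likelihoods. The inter-event times below are placeholders for illustration only, not values from the study's catalog.

```python
import numpy as np
from scipy import stats

# Placeholder inter-event times (years) between successive M_w > 6 earthquakes.
t = np.array([3.2, 7.5, 1.1, 12.4, 5.0, 9.3, 2.8, 6.6, 4.1, 15.2])

models = {
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
    "gamma":     stats.gamma,
}
for name, dist in models.items():
    params = dist.fit(t, floc=0)                  # fix the location parameter at zero
    lnL = np.sum(dist.logpdf(t, *params))         # log-likelihood ln L
    print(f"{name:10s} ln L = {lnL:7.2f}")
# The model with the largest ln L is preferred, as in the study.
```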
Abstract:
We propose and demonstrate limited-view light-sheet microscopy (LV-LSM) for three-dimensional (3D) volume imaging. Recognizing that longer and more frequent image acquisition results in significant photobleaching, we take a limited number of angular views (18 views) of the macroscopic specimen and integrate them with a maximum-likelihood (ML) technique to reconstruct high-quality 3D volume images. Existing variants of light-sheet microscopy require both rotation and translation, with a total of approximately 10-fold more views to render a 3D volume image. By comparison, the LV-LSM technique reduces data acquisition time and consequently reduces light exposure manyfold. Since ML is a post-processing technique and highly parallelizable, it does not cost precious imaging time. Results show noise-free, high-contrast volume images compared with state-of-the-art selective plane illumination microscopy. (C) 2015 AIP Publishing LLC.
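For orientation, a common maximum-likelihood reconstruction for Poisson-noise imaging is the Richardson-Lucy multiplicative update, sketched below for a single 2-D view. The paper's ML reconstruction additionally fuses the 18 angular views into a 3D volume, which this minimal sketch does not attempt; the PSF and iteration count are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def ml_richardson_lucy(blurred, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy-type multiplicative ML update for Poisson noise."""
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, eps)
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est

# toy usage: a synthetic Poisson image and a small normalized PSF
img = np.random.poisson(5.0, size=(64, 64)).astype(float)
psf = np.outer([1, 2, 1], [1, 2, 1]) / 16.0
print(ml_richardson_lucy(img, psf, n_iter=10).shape)
```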
Abstract:
Speech enhancement in stationary noise is addressed using the ideal channel selection framework. In order to estimate the binary mask, we propose to classify each time-frequency (T-F) bin of the noisy signal as speech or noise using Discriminative Random Fields (DRF). The DRF function contains two terms: an enhancement function and a smoothing term. For each T-F bin, we propose an enhancement function based on a likelihood ratio test for speech presence, while an Ising model is used as the smoothing function to enforce spectro-temporal continuity in the estimated binary mask. Over successive iterations, the smoothing function is found to reduce musical noise compared with using the enhancement function alone. The binary mask is inferred from the noisy signal using the Iterated Conditional Modes (ICM) algorithm. Sentences from the NOIZEUS corpus are evaluated from 0 dB to 15 dB signal-to-noise ratio (SNR) in four kinds of additive noise: white Gaussian noise, car noise, street noise, and pink noise. The speech reconstructed using the proposed technique is evaluated in terms of average segmental SNR, Perceptual Evaluation of Speech Quality (PESQ), and Mean Opinion Score (MOS).
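A minimal numpy sketch of the inference step follows: ICM updates a binary T-F mask given per-bin log-likelihood ratios and an Ising smoothness prior over the 4-neighbourhood. The coupling strength beta, the neighbourhood, and the stand-in log-likelihood ratios are assumptions; the paper's exact enhancement and smoothing terms are not reproduced.

```python
import numpy as np

def icm_binary_mask(llr, beta=1.0, n_iter=10):
    """ICM inference of a binary mask from log-likelihood ratios (llr > 0 favours speech)."""
    mask = (llr > 0).astype(int)              # initial hard decision
    spins = lambda m: 2 * m - 1               # map {0,1} -> {-1,+1}
    F, T = llr.shape
    for _ in range(n_iter):
        for f in range(F):
            for t in range(T):
                nb = 0
                if f > 0:     nb += spins(mask[f - 1, t])
                if f < F - 1: nb += spins(mask[f + 1, t])
                if t > 0:     nb += spins(mask[f, t - 1])
                if t < T - 1: nb += spins(mask[f, t + 1])
                # data term plus Ising agreement with the 4-neighbourhood
                mask[f, t] = 1 if (llr[f, t] + beta * nb) > 0 else 0
    return mask

llr = np.random.randn(64, 100)                # stand-in log-likelihood ratios
mask = icm_binary_mask(llr, beta=0.8)
```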
Abstract:
We begin by providing observational evidence that the probability of encountering very high and very low annual tropical rainfall has increased significantly in the most recent decade (1998-present) compared with the preceding warming era (1979-1997). These changes over land and ocean are spatially coherent and comprise a rearrangement of very wet regions and a systematic expansion of dry zones. While the increased likelihood of extremes is consistent with a higher average temperature during the pause (compared with 1979-1997), it is important to note that the periods considered are also characterized by a transition from a relatively warm to a cold phase of the El Niño Southern Oscillation (ENSO). To further probe the relation between contrasting phases of ENSO and extremes in accumulation, a similar comparison is performed between 1960-1978 (another extended cold phase of ENSO) and the aforementioned warming era. Though limited by land-only observations, in this cold-to-warm transition a near-exact reversal of extremes is noted, both statistically and geographically. This is despite the average temperature being higher in 1979-1997 than in 1960-1978. Taking this evidence together, we propose that there is a fundamental mode of natural variability, involving the waxing and waning of extremes in the accumulation of global tropical rainfall with different phases of ENSO.
Abstract:
Human detection is a complex problem owing to the variable poses that humans can adopt. Here, we address this problem in a sparse representation framework with an overcomplete scale-embedded dictionary. Histogram of oriented gradient features extracted from candidate image patches are sparsely represented by the dictionary, which contains positive bases along with negative and trivial bases. The object is detected based on the proposed likelihood measure obtained from the distribution of these sparse coefficients, computed as the ratio of the contribution of the positive bases to that of the negative and trivial bases. The positive bases of the dictionary represent the object (human) at various scales. This enables us to detect the object at any scale in one shot and avoids multiple scans at different scales, significantly reducing the computational complexity of the detection task. In addition to detecting the human, the method also finds the scale at which the human is detected, owing to the scale-embedded structure of the dictionary.
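A hedged scikit-learn sketch of the likelihood-ratio idea follows: sparsely code a candidate descriptor over a dictionary stacking positive (human, at several scales), negative, and trivial bases, then compare how much of the representation falls on the positive part. The ℓ1 energy split and the random bases are assumptions; the paper defines its own likelihood from the coefficient distribution.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def detection_likelihood(feature, D_pos, D_neg, D_triv, alpha=0.5, eps=1e-8):
    """Ratio of positive-basis coefficient energy to the rest (large -> human present)."""
    D = np.vstack([D_pos, D_neg, D_triv])
    code = sparse_encode(feature.reshape(1, -1), D,
                         algorithm="lasso_lars", alpha=alpha).ravel()
    n_pos = D_pos.shape[0]
    pos_energy = np.abs(code[:n_pos]).sum()
    rest_energy = np.abs(code[n_pos:]).sum()
    return pos_energy / (rest_energy + eps)

# toy usage with random bases (in practice these come from training data)
D_pos, D_neg, D_triv = (np.random.randn(k, 36) for k in (40, 20, 36))
print(detection_likelihood(np.random.randn(36), D_pos, D_neg, D_triv))
```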
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper, we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm, which combines these two ideas.
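The quantity being estimated can be made explicit with a small numpy sketch: for a binary RBM, log p(v) = -F(v) - log Z, where the free energy F(v) is computable in closed form and log Z is what AIS, CSL, and RAISE approximate. The toy parameters and the hypothetical log Z value below are assumptions purely to make the snippet run; no partition-function estimator is implemented here.

```python
import numpy as np

def rbm_free_energy(v, W, b, c):
    """Free energy F(v) of a binary RBM with visible bias b, hidden bias c, weights W."""
    return -(v @ b) - np.sum(np.logaddexp(0.0, c + v @ W), axis=-1)

def test_log_likelihood(V, W, b, c, log_Z):
    """Average test log-likelihood, given an estimate of log Z."""
    return float(np.mean(-rbm_free_energy(V, W, b, c) - log_Z))

# toy usage with random parameters and a hypothetical log Z estimate
rng = np.random.default_rng(0)
W, b, c = rng.normal(size=(20, 10)), rng.normal(size=20), rng.normal(size=10)
V = rng.integers(0, 2, size=(5, 20)).astype(float)
print(test_log_likelihood(V, W, b, c, log_Z=30.0))
```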
Abstract:
In order to assess the safety of high-energy solid propellants, the effect of damage on the deflagration-to-detonation transition (DDT) in a nitrate ester plasticized polyether (NEPE) propellant is investigated. DDT in the original and impacted propellants was compared in steel tubes instrumented with synchronous optoelectronic triodes and strain gauges. The experimental results indicate that microstructural damage in the propellant increases its rate of transition from deflagration to detonation and therefore its hazard. It is suggested that the mechanical properties of the propellant should be improved to restrain such damage, so that the likelihood of DDT can be reduced.
Abstract:
Modern technology has allowed real-time data collection in a variety of domains, ranging from environmental monitoring to healthcare. Consequently, there is a growing need for algorithms capable of performing inferential tasks in an online manner, continuously revising their estimates to reflect the current status of the underlying process. In particular, we are interested in constructing online and temporally adaptive classifiers capable of handling the possibly drifting decision boundaries arising in streaming environments. We first make a quadratic approximation to the log-likelihood that yields a recursive algorithm for fitting logistic regression online. We then suggest a novel way of equipping this framework with self-tuning forgetting factors. The resulting scheme is capable of tracking changes in the underlying probability distribution, adapting the decision boundary appropriately and hence maintaining high classification accuracy in dynamic or unstable environments. We demonstrate the scheme's effectiveness in both real and simulated streaming environments. © Springer-Verlag 2009.
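A minimal numpy sketch of this framework follows: the quadratic (Newton) approximation to the log-likelihood yields an RLS-style recursive update for logistic regression, and an exponential forgetting factor discounts old observations. The forgetting factor lam is fixed here, whereas the paper makes it self-tuning; the drifting synthetic stream is an illustrative assumption.

```python
import numpy as np

class OnlineLogisticRegression:
    """Recursive logistic regression with a fixed forgetting factor (sketch)."""
    def __init__(self, dim, lam=0.99, ridge=1e-2):
        self.w = np.zeros(dim)
        self.H = ridge * np.eye(dim)        # running curvature (information) matrix
        self.lam = lam

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-x @ self.w))

    def update(self, x, y):
        p = self.predict_proba(x)
        g = (p - y) * x                                       # gradient of the NLL
        self.H = self.lam * self.H + p * (1 - p) * np.outer(x, x)
        self.w -= np.linalg.solve(self.H, g)                  # Newton-style step

# usage on a drifting synthetic stream
rng = np.random.default_rng(1)
clf, w_true = OnlineLogisticRegression(dim=3), np.array([1.0, -2.0, 0.5])
for _ in range(2000):
    x = rng.normal(size=3)
    w_true += 0.002 * rng.normal(size=3)                      # slow concept drift
    y = float(rng.random() < 1 / (1 + np.exp(-x @ w_true)))
    clf.update(x, y)
```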
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear, non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model, for instance in order to obtain maximum-likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011], an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. L_p bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that, under mixing conditions, these L_p bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum-likelihood parameter estimation for a stochastic volatility model.
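To fix the setting, here is a minimal bootstrap particle filter for a standard stochastic volatility model, returning the particle estimate of the log-likelihood. The model parameterisation and values are assumptions for illustration, and the filter-derivative (score) recursion studied in the paper is not implemented here.

```python
import numpy as np

def bootstrap_pf_sv(y, n_particles=500, phi=0.95, sigma=0.3, beta=0.6, seed=0):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + sigma*v_t,
    y_t = beta*exp(x_t/2)*w_t with v_t, w_t ~ N(0,1); returns log-likelihood."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi ** 2), size=n_particles)
    log_lik = 0.0
    for yt in y:
        x = phi * x + sigma * rng.normal(size=n_particles)      # propagate
        sd = beta * np.exp(x / 2)
        logw = -0.5 * np.log(2 * np.pi * sd ** 2) - 0.5 * (yt / sd) ** 2
        m = logw.max()
        w = np.exp(logw - m)
        log_lik += m + np.log(w.mean())                         # likelihood update
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                              # multinomial resampling
    return log_lik

# toy usage: evaluate the log-likelihood on simulated observations
rng = np.random.default_rng(2)
xs, ys = 0.0, []
for _ in range(200):
    xs = 0.95 * xs + 0.3 * rng.normal()
    ys.append(0.6 * np.exp(xs / 2) * rng.normal())
print(bootstrap_pf_sv(np.array(ys)))
```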
Abstract:
The Information Sampling Model holds that the probability of a given piece of information being mentioned in a group is higher if it is available to many members rather than to only one. Information shared in the belief matrix that pre-exists the social interaction is more likely to be expressed, repeated, and validated by consent, and it influences the group product. Objectives: to quantify the impact of the shared belief matrix on processes of meaning negotiation and to understand this process qualitatively. Subjects: 225 Psychology students from the Universidad Nacional de Mar del Plata participated, reaching consensus on the significant relationships among 9 academic concepts. Shared prior knowledge was operationalized using Sociocognitive Centrality. The mapping of the participants' semantic networks, their mutual influence and evolution across the different stages of the negotiation, the analytical treatment for qualitative and quantitative comparison, and its graphical rendering were carried out using special methods developed on Social Network Analysis. Results: the predictions of peer social influence and the visualization of the evolution of the participants' and groups' semantic networks yield robust and suggestive results for application to diverse domains of social and communicational interaction.
Abstract:
The Partial Credit Model (PCM) of Item Response Theory (IRT) was applied to the item analysis of a scale measuring Affect toward Mathematics. This variable describes Psychology students' interest in engaging in activities related to mathematics and the feelings associated with the use of its concepts. The test consists of 8 items with a 6-option Likert response format. Participants were 1875 Psychology students from the Universidad de Buenos Aires (Argentina), 82% of whom were women. The internal consistency analysis yielded a highly satisfactory index (alpha = .91). The unidimensionality condition required by the model was verified through an exploratory factor analysis. All IRT-based analyses were carried out with the Winsteps program. Model parameters were estimated by Joint Maximum Likelihood. The fit of the PCM was satisfactory for all items. The Test Information Function was high over a wide range of levels of the latent trait. One item showed a reversal in two threshold parameters; as a consequence, one of the item's 6 categories was never the most probable in any interval of the latent trait scale. The implications of this finding for evaluating the psychometric quality of the item are discussed. The results of this study allowed a deeper analysis of the construct and provided validity evidence based on the internal structure of the scale.