869 results for Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood


Relevance: 100.00%

Abstract:

We propose an algorithmic technique for accelerating the maximum likelihood (ML) algorithm for image reconstruction in fluorescence microscopy, made possible by integrating the Biggs-Andrews (BA) method with the ML approach. Results on widefield, confocal, and super-resolution 4Pi microscopy reveal a substantial improvement in the speed of 3D image reconstruction (the number of iterations is reduced by approximately one-half). Moreover, the quality of reconstruction obtained using accelerated ML closely resembles that of the nonaccelerated ML method. The proposed technique is a step closer to realizing real-time reconstruction in 3D fluorescence microscopy. Microsc. Res. Tech. 78:331-335, 2015. (c) 2015 Wiley Periodicals, Inc.
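
As a rough illustration of the approach, the sketch below combines a Richardson-Lucy-style ML update (the standard ML iteration under a Poisson noise model) with Biggs-Andrews vector extrapolation. It is a minimal 2D toy under those assumptions, not the authors' implementation; all names and parameter choices are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_update(x, image, psf, psf_mirror, eps=1e-12):
    # One Richardson-Lucy ML update for a Poisson noise model.
    est = fftconvolve(x, psf, mode="same")
    ratio = image / np.maximum(est, eps)
    return x * fftconvolve(ratio, psf_mirror, mode="same")

def rl_biggs_andrews(image, psf, n_iter=50):
    # ML deconvolution accelerated by Biggs-Andrews vector extrapolation.
    # image: 2D float array; psf: 2D float array summing to 1.
    psf_mirror = psf[::-1, ::-1]
    x = np.full(image.shape, image.mean())
    x_prev = x.copy()
    g_prev = g_prev2 = None
    for _ in range(n_iter):
        # Extrapolate along the recent update direction; the step size
        # alpha correlates successive update vectors and is clipped to [0, 1).
        if g_prev is not None and g_prev2 is not None:
            denom = max(np.sum(g_prev2 * g_prev2), 1e-12)
            alpha = min(max(np.sum(g_prev * g_prev2) / denom, 0.0), 0.999)
        else:
            alpha = 0.0
        y = np.maximum(x + alpha * (x - x_prev), 0)
        x_prev = x
        x = rl_update(y, image, psf, psf_mirror)
        g_prev2, g_prev = g_prev, x - y  # change produced by the ML step
    return x
```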

Relevance: 100.00%

Abstract:

Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet, which is more power efficient.
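
The bit mapping that GSM performs can be illustrated with a small sketch: floor(log2 C(n_t, n_rf)) bits select which antennas are active, and the remaining n_rf*log2(M) bits select the QAM symbols they carry. This is a toy encoder under stated assumptions (square QAM, a fixed enumeration of antenna patterns), not the paper's detector.

```python
import itertools
import math
import numpy as np

def qam_constellation(M):
    # Square M-QAM with unit average energy (M must be a perfect square).
    m = int(math.sqrt(M))
    pts = np.array([complex(2 * i - m + 1, 2 * q - m + 1)
                    for i in range(m) for q in range(m)])
    return pts / np.sqrt((np.abs(pts) ** 2).mean())

def gsm_modulate(bits, n_t, n_rf, M):
    # Map one block of bits to a GSM transmit vector of length n_t:
    # floor(log2(C(n_t, n_rf))) antenna-index bits + n_rf * log2(M) QAM bits.
    patterns = list(itertools.combinations(range(n_t), n_rf))
    n_idx = int(math.floor(math.log2(len(patterns))))
    k = int(math.log2(M))
    const = qam_constellation(M)
    active = patterns[int("".join(map(str, bits[:n_idx])), 2)]
    x = np.zeros(n_t, dtype=complex)
    for j, ant in enumerate(active):
        chunk = bits[n_idx + j * k : n_idx + (j + 1) * k]
        x[ant] = const[int("".join(map(str, chunk)), 2)]
    return x

# n_t = 4, n_rf = 2, 4-QAM: 2 index bits + 4 QAM bits per channel use.
print(gsm_modulate([1, 0, 1, 1, 0, 1], n_t=4, n_rf=2, M=4))
```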

Relevance: 100.00%

Abstract:

Noise-predictive maximum likelihood (NPML) is a well-known signal detection technique used in the partial-response maximum-likelihood (PRML) scheme for 1D magnetic recording channels. The noise samples colored by the partial-response (PR) equalizer are predicted/whitened during signal detection using a Viterbi detector. In this paper, we propose an extension of the NPML technique to signal detection in 2D ISI channels. The impact of noise prediction during signal detection is studied in a PRML scheme for a particular choice of 2D ISI channel and PR targets.
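
For reference, a minimal Viterbi detector for a 1D partial-response channel is sketched below, using a squared-Euclidean branch metric, i.e. without the noise-prediction step the paper adds. The target h = [1, 0, -1] in the demo is an illustrative PR choice, not taken from the paper.

```python
import itertools
import numpy as np

def viterbi_pr(received, h):
    # Viterbi detection for a 1D ISI channel y_k = sum_i h[i]*a_{k-i} + n_k,
    # binary inputs a_k in {-1,+1}; state = the last len(h)-1 symbols,
    # branch metric = squared Euclidean distance (AWGN assumption).
    mem = len(h) - 1
    states = list(itertools.product([-1, 1], repeat=mem))
    BIG = 1e9  # start from the all -1 state
    cost = {s: (0.0 if all(v == -1 for v in s) else BIG) for s in states}
    path = {s: [] for s in states}
    for y in received:
        new_cost = {s: float("inf") for s in states}
        new_path = {}
        for s in states:
            for a in (-1, 1):
                taps = (a,) + s                      # (a_k, a_{k-1}, ...)
                out = sum(hi * ai for hi, ai in zip(h, taps))
                ns = taps[:mem]                      # next state
                c = cost[s] + (y - out) ** 2
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    new_path[ns] = path[s] + [a]
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]

# Demo with the illustrative 1D target h = [1, 0, -1]:
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=20)
pad = np.concatenate([[-1, -1], a])                  # known starting symbols
y = np.convolve(pad, [1, 0, -1])[2:22] + 0.1 * rng.normal(size=20)
print(np.array_equal(viterbi_pr(y, [1, 0, -1]), list(a)))  # typically True
```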

Relevance: 100.00%

Abstract:

This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule, using U.S. and Eurozone data. The estimation procedure implemented is a classical method based on the indirect inference principle, with an unrestricted VAR as the auxiliary model. On the one hand, the proposed estimation method overcomes some of the shortcomings of using a structural VAR as the auxiliary model in order to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a maximum-likelihood Bayesian approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results for the U.S. show that the fit of the NKM model under an optimal monetary plan is much worse than the fit of the NKM model assuming a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small with respect to assuming either a forward-looking Taylor rule or an optimal plan.
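
The indirect inference principle itself is easy to sketch: estimate an auxiliary model on real data, then choose the structural parameter so that the same auxiliary estimate, computed on data simulated from the structural model, matches. The toy below uses an AR(1) "structural" model and an OLS auxiliary statistic as stand-ins for the NKM model and the unrestricted VAR; it is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_structural(theta, n, rng):
    # Toy "structural" model: AR(1) with persistence theta (a stand-in
    # for simulating data from the NKM model).
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + e[t]
    return x

def auxiliary_estimate(x):
    # Auxiliary model: OLS slope of x_t on x_{t-1} (a stand-in for the VAR).
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

def indirect_inference(data, n_sim=20, seed=0):
    beta_data = auxiliary_estimate(data)
    def distance(theta):
        rng = np.random.default_rng(seed)  # common random numbers across theta
        sims = [auxiliary_estimate(simulate_structural(theta, len(data), rng))
                for _ in range(n_sim)]
        return (beta_data - np.mean(sims)) ** 2
    return minimize_scalar(distance, bounds=(-0.99, 0.99), method="bounded").x

data = simulate_structural(0.8, 500, np.random.default_rng(1))
print(indirect_inference(data))   # roughly 0.8
```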

Relevance: 100.00%

Abstract:

This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often display aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and to study their economic implications.

In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.

The second chapter characterizes a decision maker with sticky beliefs: one who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
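
A minimal numerical sketch of this sticky-belief representation, with a finite state space and an assumed stickiness parameter lam:

```python
import numpy as np

def bayes(prior, likelihood):
    # Standard Bayesian update over a finite state space.
    post = prior * likelihood
    return post / post.sum()

def sticky_update(prior, likelihood, lam):
    # lam in [0, 1] is the weight kept on the prior: lam = 0 is Bayesian,
    # lam = 1 means beliefs never move (maximal stickiness/conservatism).
    return lam * prior + (1 - lam) * bayes(prior, likelihood)

prior = np.array([0.5, 0.5])
likelihood = np.array([0.9, 0.2])        # P(signal | state), invented numbers
print(sticky_update(prior, likelihood, lam=0.3))
```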

The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
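
A sketch of this threshold rule for finitely many states: alpha near 0 retains all priors (generalized Bayesian updating), while alpha = 1 keeps only the best-fitting priors (maximum likelihood updating).

```python
import numpy as np

def bayes(prior, likelihood):
    post = prior * likelihood
    return post / post.sum()

def threshold_update(priors, likelihood, alpha):
    # Keep a prior iff the probability it assigns to the observation is
    # within a factor alpha of the best prior's; apply Bayes' rule to the
    # survivors.
    marg = np.array([p @ likelihood for p in priors])
    keep = marg >= alpha * marg.max()
    return [bayes(p, likelihood) for p, k in zip(priors, keep) if k]

priors = [np.array([0.7, 0.3]), np.array([0.4, 0.6]), np.array([0.1, 0.9])]
likelihood = np.array([0.9, 0.2])
print(threshold_update(priors, likelihood, alpha=0.8))
```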

Relevance: 100.00%

Abstract:

Computed microtomography (μCT) allows non-destructive analysis of samples and makes their reuse possible. μCT also allows the reconstruction of three-dimensional objects from their cross-sections, which are obtained by intercepting the sample along parallel planes. μCT equipment offers the user several configuration options that alter the quality of the resulting images and thereby affect the expected result. In this thesis, μCT images generated by the SkyScan1174 Compact Micro-CT scanner were characterized and analyzed, with image processing as the basis of the characterization. Enhancement techniques (brightness, saturation, histogram equalization, and median filtering) were applied to the original images to generate new images, and both sets were then quantified using texture descriptors (maximum probability, difference moment, inverse difference moment, entropy, and uniformity). The results show that, compared with the originals, the images processed with enhancement techniques yielded improved three-dimensional models.
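
The texture descriptors named above are standard functions of a gray-level co-occurrence matrix (GLCM). A minimal sketch, assuming 2D uint8 images and a single-pixel displacement:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=256):
    # Normalized gray-level co-occurrence matrix for one displacement.
    P = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def texture_descriptors(P):
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "maximum_probability": P.max(),
        "difference_moment": np.sum((i - j) ** 2 * P),          # contrast
        "inverse_difference_moment": np.sum(P / (1 + (i - j) ** 2)),
        "entropy": -np.sum(nz * np.log2(nz)),
        "uniformity": np.sum(P ** 2),                           # energy/ASM
    }

img = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
print(texture_descriptors(glcm(img)))
```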

Relevance: 100.00%

Abstract:

We describe an age-structured statistical catch-at-length analysis (A-SCALA) based on the MULTIFAN-CL model of Fournier et al. (1998). The analysis is applied independently to both the yellowfin and the bigeye tuna populations of the eastern Pacific Ocean (EPO). We model the populations from 1975 to 1999, based on quarterly time steps. Only a single stock is assumed for each species in each analysis, but multiple spatially separate fisheries are modeled to allow for spatial differences in catchability and selectivity. The analysis allows for error in the effort-fishing mortality relationship, temporal trends in catchability, temporal variation in recruitment, relationships between the environment and recruitment and between the environment and catchability, and differences in selectivity and catchability among fisheries. The model is fit to total catch data and proportional catch-at-length data conditioned on effort. The A-SCALA method is a statistical approach and therefore recognizes that the data collected from the fishery do not perfectly represent the population. There is also uncertainty in our knowledge about the dynamics of the system and about how the observed data relate to the real population. The use of likelihood functions allows us to model the uncertainty in the data collected from the population, and the inclusion of estimable process error allows us to model the uncertainties in the dynamics of the system. The statistical approach allows for the calculation of confidence intervals and the testing of hypotheses. We use a Bayesian version of the maximum likelihood framework that includes distributional constraints on temporal variation in recruitment, the effort-fishing mortality relationship, and catchability. Curvature penalties for selectivity parameters and penalties on extreme fishing mortality rates are also included in the objective function. The mode of the joint posterior distribution is used as an estimate of the model parameters. Confidence intervals are calculated using the normal approximation method. It should be noted that the estimation method includes constraints and priors, and therefore the confidence intervals differ from traditionally calculated confidence intervals. Management reference points are calculated, and forward projections are carried out to provide advice for making management decisions for the yellowfin and bigeye populations.
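
The estimation recipe in the last part (mode of the joint posterior plus normal-approximation confidence intervals) can be sketched generically. The toy objective below, a Poisson catch model with a prior on log-catchability, is illustrative only and is far simpler than A-SCALA:

```python
import numpy as np
from scipy.optimize import minimize

def fit_map_with_normal_ci(neg_log_post, x0):
    # Posterior mode via BFGS; normal-approximation intervals from the
    # inverse-Hessian approximation that BFGS maintains.
    res = minimize(neg_log_post, x0, method="BFGS")
    se = np.sqrt(np.diag(res.hess_inv))
    return res.x, res.x - 1.96 * se, res.x + 1.96 * se

# Toy data: Poisson catches proportional to effort, true catchability 0.5.
rng = np.random.default_rng(0)
effort = rng.uniform(1, 10, size=40)
catch = rng.poisson(0.5 * effort)

def nlp(p):
    logq = p[0]
    lam = np.exp(logq) * effort
    nll = np.sum(lam - catch * np.log(lam))   # Poisson NLL (constants dropped)
    return nll + 0.5 * (logq / 2.0) ** 2      # N(0, 2^2) prior on log q

mode, lo, hi = fit_map_with_normal_ci(nlp, x0=np.array([0.0]))
print(np.exp(mode), np.exp(lo), np.exp(hi))   # catchability and its interval
```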

Relevance: 100.00%

Abstract:

King mackerel (Scomberomorus cavalla) are ecologically and economically important scombrids that inhabit U.S. waters of the Gulf of Mexico (GOM) and Atlantic Ocean (Atlantic). Separate migratory groups, or stocks, migrate from the eastern GOM and the southeastern U.S. Atlantic to south Florida waters, where the stocks mix during winter. Currently, all winter landings from a management-defined south Florida mixing zone are attributed to the GOM stock. In this study, the stock composition of winter landings across three south Florida sampling zones was estimated by using stock-specific otolith morphological variables and Fourier harmonics. The mean accuracies of the jackknifed classifications from stepwise linear discriminant function analysis of otolith shape variables ranged from 66-76% for sex-specific models. Estimates of the contribution of the Atlantic stock to winter landings, derived from maximum likelihood stock mixing models, indicated the contribution was highest off southeastern Florida (as high as 82.8% for females in winter 2001-02) and lowest off southwestern Florida (as low as 14.5% for females in winter 2002-03). Overall, the results provided evidence that the Atlantic stock contributes a measurable, and perhaps significant (i.e., ≥50%), percentage of landings taken in the management-defined winter mixing zone off south Florida, and that the practice of assigning all winter mixing zone landings to the GOM stock should be reconsidered.
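
Maximum likelihood stock-mixing estimation of this kind reduces to fitting a mixture proportion when the stock-specific distributions are treated as known. A toy sketch with Gaussian stand-ins for the otolith shape variables:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def ml_mixing_proportion(x, dist_a, dist_b):
    # ML estimate of the contribution p of stock A to a mixed sample,
    # given the per-stock densities of a shape variable (assumed known
    # from reference samples).
    def nll(p):
        return -np.sum(np.log(p * dist_a.pdf(x) + (1 - p) * dist_b.pdf(x)))
    return minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x

# Toy example: 80% from stock A, 20% from stock B.
rng = np.random.default_rng(0)
mixed = np.concatenate([rng.normal(0, 1, 80), rng.normal(2, 1, 20)])
print(ml_mixing_proportion(mixed, norm(0, 1), norm(2, 1)))  # near 0.8
```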

Relevance: 100.00%

Abstract:

Sustainability of benefits from capture fisheries has been a concern of fisheries scientists for a long time. The development of fisheries management models reflects the historical debate (from maximum sustainable yield to maximum economic yield, and so on) of what benefits are valued and need to be sustained. Social and anthropological research needs an increased emphasis on bio-socioeconomic models to effectively determine directions for fisheries management.

Relevance: 100.00%

Abstract:

This paper proposes an HMM-based approach to generating emotional intonation patterns. A set of models was built to represent syllable-length intonation units. In a classification framework, the models were able to detect a sequence of intonation units from raw fundamental frequency values. Using the models in a generative framework, we were able to synthesize smooth and natural-sounding pitch contours. As a case study for emotional intonation generation, Maximum Likelihood Linear Regression (MLLR) adaptation was used to transform the neutral model parameters with a small amount of happy and sad speech data. Perceptual tests showed that listeners could identify the speech with the sad intonation 80% of the time. On the other hand, listeners formed a bimodal distribution in their ability to detect the system-generated happy intonation, and on average listeners were able to detect happy intonation only 46% of the time. © Springer-Verlag Berlin Heidelberg 2005.
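
Detecting a sequence of intonation units from raw fundamental frequency values, as described above, is classic Viterbi decoding of a Gaussian-emission HMM. A minimal log-domain sketch; the transition matrix, means, and variances in the demo are invented for illustration:

```python
import numpy as np

def viterbi_gaussian_hmm(obs, log_pi, log_A, means, variances):
    # Log-domain Viterbi for a Gaussian-emission HMM: obs is a 1-D array
    # (here, raw f0 values); states play the role of intonation units.
    T, N = len(obs), len(means)
    logb = -0.5 * (np.log(2 * np.pi * variances)
                   + (obs[:, None] - means) ** 2 / variances)   # (T, N)
    delta = log_pi + logb[0]
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A        # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logb[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Invented toy demo: two units with low/high mean f0.
f0 = np.array([110.0, 115.0, 112.0, 180.0, 175.0, 178.0])
print(viterbi_gaussian_hmm(
    f0,
    log_pi=np.log([0.5, 0.5]),
    log_A=np.log([[0.9, 0.1], [0.1, 0.9]]),
    means=np.array([112.0, 178.0]),
    variances=np.array([25.0, 25.0]),
))   # -> [0, 0, 0, 1, 1, 1]
```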

Relevance: 100.00%

Abstract:

Southern bluefin tuna (SBT) (Thunnus maccoyii) growth rates are estimated from tag-return data associated with two time periods, the 1960s and 1980s. The traditional von Bertalanffy growth model (VBG) and a two-phase VBG model were fitted to the data by maximum likelihood. The traditional VBG model did not provide an adequate representation of growth in SBT, and the two-phase VBG yielded a significantly better fit. The results indicated that a significant change occurs in the pattern of growth, relative to a VBG curve, during the juvenile stages of the SBT life cycle; this may be related to the transition from a tightly schooling fish that spends substantial time in nearshore and surface waters to one that is found primarily in deeper, more offshore waters. The results suggest that more complex growth models should be considered for other tunas and for other species that show a marked change in habitat use with age. The likelihood surface for the two-phase VBG model was found to be bimodal, and some implications of this are investigated. Significant and substantial differences were found in the growth of fish spawned in the 1960s and in the 1980s, such that after age four there is a difference of about one year in the expected age of a fish of a given length, a difference which persists over the size range for which meaningful recapture data are available. This difference may be a density-dependent response to the marked reduction in the SBT population. Given the key role that estimates of growth play in most stock assessments, the results indicate a need both for regular monitoring of growth rates and for provisions for changes in growth over time (possibly related to changes in abundance) in the stock assessment models used for SBT and other species.
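
A minimal sketch of fitting the traditional VBG curve by maximum likelihood under Gaussian errors is shown below; the paper's two-phase model additionally allows the growth pattern to change at a transition age, which this toy omits.

```python
import numpy as np
from scipy.optimize import minimize

def vbg(age, Linf, k, t0):
    # von Bertalanffy growth: L(t) = Linf * (1 - exp(-k * (t - t0)))
    return Linf * (1 - np.exp(-k * (age - t0)))

def fit_vbg_ml(age, length):
    # Gaussian ML fit: least squares plus an estimated residual sigma.
    def nll(p):
        Linf, k, t0, log_sigma = p
        resid = length - vbg(age, Linf, k, t0)
        sigma = np.exp(log_sigma)
        return np.sum(0.5 * (resid / sigma) ** 2 + log_sigma)
    x0 = [length.max(), 0.2, 0.0, np.log(length.std())]
    return minimize(nll, x0, method="Nelder-Mead").x

# Demo with simulated lengths-at-age (invented parameter values).
rng = np.random.default_rng(0)
age = rng.uniform(1, 15, size=200)
length = vbg(age, 180.0, 0.25, -0.5) + rng.normal(0, 5, size=200)
Linf, k, t0, log_sigma = fit_vbg_ml(age, length)
print(Linf, k, t0, np.exp(log_sigma))
```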

Relevance: 100.00%

Abstract:

For many realistic scenarios, there are multiple factors that affect the clean speech signal. This work describes approaches to handling two such factors simultaneously: speaker and background noise differences. A new adaptation scheme is proposed. Here, the acoustic models are first adapted to the target speaker via an MLLR transform. This is followed by adaptation to the target noise environment via model-based vector Taylor series (VTS) compensation. These speaker and noise transforms are jointly estimated using maximum likelihood. Experiments on the AURORA4 task demonstrate that this adaptation scheme provides improved performance over VTS-based noise adaptation. In addition, this framework enables the speech and noise to be factorised, allowing the speaker transform estimated in one noise condition to be successfully used in a different noise condition. © 2011 IEEE.
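
The MLLR step can be sketched as the standard single-regression-class ML estimate of a global mean transform W for diagonal-covariance Gaussians, given component occupancies from a forward-backward pass; the VTS noise-compensation step and the joint estimation are omitted here.

```python
import numpy as np

def mllr_mean_transform(obs, gamma, means, variances):
    # ML estimate of a global MLLR mean transform W (d x (d+1)) for
    # diagonal-covariance Gaussians; adapted mean = W @ [1, mu].
    # obs: (T, d) adaptation frames; gamma: (T, M) component occupancies;
    # means, variances: (M, d).
    M, d = means.shape
    xi = np.hstack([np.ones((M, 1)), means])   # extended means (M, d+1)
    occ = gamma.sum(axis=0)                    # total occupancy per component
    W = np.zeros((d, d + 1))
    for i in range(d):
        w = occ / variances[:, i]
        G = (xi * w[:, None]).T @ xi           # (d+1, d+1) accumulator
        k = xi.T @ ((gamma * obs[:, i:i + 1]).sum(axis=0) / variances[:, i])
        W[i] = np.linalg.solve(G, k)
    return W

# Shapes-only demo with random stand-in statistics.
rng = np.random.default_rng(0)
T, M, d = 100, 4, 3
means, variances = rng.normal(size=(M, d)), np.ones((M, d))
gamma = rng.dirichlet(np.ones(M), size=T)
W = mllr_mean_transform(rng.normal(size=(T, d)), gamma, means, variances)
adapted_means = np.hstack([np.ones((M, 1)), means]) @ W.T   # (M, d)
```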

Relevance: 100.00%

Abstract:

We propose a novel model for the spatio-temporal clustering of trajectories based on motion, which applies to challenging street-view video sequences of pedestrians captured by a mobile camera. A key contribution of our work is the introduction of novel probabilistic region trajectories, motivated by the non-repeatability of segmentation of frames in a video sequence. Hierarchical image segments are obtained using a state-of-the-art hierarchical segmentation algorithm and connected across adjacent frames in a directed acyclic graph. The region trajectories and measures of confidence are extracted from this graph using a dynamic programming-based optimisation. Our second main contribution is a Bayesian framework with a twofold goal: to learn the optimal (in a maximum likelihood sense) Random Forests classifier of motion patterns based on video features, and to construct a unique graph from region trajectories of different frames, lengths and hierarchical levels. Finally, we demonstrate the use of Isomap for effective spatio-temporal clustering of the region trajectories of pedestrians. We support our claims with experimental results on new and existing challenging video sequences. © 2011 IEEE.
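
The shape of the classification-plus-embedding part of such a pipeline can be sketched with scikit-learn; the features, labels, and final k-means step below are stand-ins (the paper's features come from region trajectories, and its clustering is built on the Isomap embedding), not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))   # stand-in per-trajectory motion features
labels = rng.integers(0, 2, size=200)   # stand-in motion-pattern labels

# Random Forests classifier of motion patterns (cf. the paper's ML-optimal forest).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
selected = features[clf.predict(features) == 1]

# Isomap embedding of the selected trajectories, then clustering on it.
emb = Isomap(n_components=2, n_neighbors=10).fit_transform(selected)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
```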

Relevance: 100.00%

Abstract:

The jetting of dilute polymer solutions in drop-on-demand printing is investigated. A quantitative model is presented which predicts three different regimes of behaviour, depending upon the jet Weissenberg number Wi and the extensibility of the polymer molecules. In regime I (Wi < ½) the polymer chains are relaxed and the fluid behaves in a Newtonian manner. In regime II (½ < Wi < L), where L is the extensibility of the polymer chain, the fluid is viscoelastic, but the polymer chains do not reach their extensibility limit. In regime III (Wi > L) the chains remain fully extended in the thinning ligament. The maximum polymer concentration at which a jet of a certain speed can be formed scales with molecular weight to the power of (1-3ν), (1-6ν) and -2ν in the three regimes, respectively, where ν is the solvent quality coefficient. Experimental data obtained with solutions of mono-disperse polystyrene in diethyl phthalate with molecular weights between 24 and 488 kDa, previous numerical simulations of this system, and previously published data for this and another linear polymer in a variety of “good” solvents all show good agreement with the scaling predictions of the model.
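
The regime logic and the concentration-scaling exponents stated above can be captured in a few lines; nu = 0.55 below is an illustrative good-solvent value, not taken from the paper.

```python
def jetting_regime(Wi, L, nu=0.55):
    # Returns the regime label and the exponent x in c_max ~ M**x,
    # where nu is the solvent quality coefficient.
    if Wi < 0.5:
        return "I (Newtonian)", 1 - 3 * nu
    if Wi < L:
        return "II (viscoelastic)", 1 - 6 * nu
    return "III (fully extended)", -2 * nu

print(jetting_regime(Wi=5.0, L=50.0))   # regime II, exponent 1 - 6*nu
```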

Relevance: 100.00%

Abstract:

The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multi-dimensional time series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
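
Linear SFA itself is compact: whiten the signal, then take the directions along which the temporal derivative has the least variance. A minimal sketch under that linear formulation (the probabilistic-model interpretation developed in the paper is not shown):

```python
import numpy as np

def sfa(x, n_components=2):
    # Minimal linear slow feature analysis: whiten the input, then take the
    # directions whose temporal derivative has the smallest variance.
    # Assumes x is (T, d) with a full-rank covariance.
    x = x - x.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (evecs / np.sqrt(evals))               # whitened signal
    devals, devecs = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ devecs[:, :n_components]            # slowest features first

# Demo: recover a slow sinusoid hidden in a random mixture of faster ones.
t = np.linspace(0, 2 * np.pi, 1000)
sources = np.stack([np.sin(t), np.sin(11 * t), np.sin(23 * t)], axis=1)
mixed = sources @ np.random.default_rng(0).normal(size=(3, 3))
slow = sfa(mixed, n_components=1)   # ~ sin(t), up to sign and scale
```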