946 results for Maximum Likelihood
Abstract:
We consider estimation of mortality rates and growth parameters from length-frequency data of a fish stock and derive the underlying length distribution of the population and the catch when there is individual variability in the von Bertalanffy growth parameter L∞. The model is flexible enough to accommodate 1) any recruitment pattern as a function of both time and length, 2) length-specific selectivity, and 3) varying fishing effort over time. The maximum likelihood method gives consistent estimates, provided the underlying distribution for individual variation in growth is correctly specified. Simulation results indicate that our method is reasonably robust to violations of the assumptions. The method is applied to tiger prawn (Penaeus semisulcatus) data to obtain estimates of natural and fishing mortality.
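As a rough illustration of the kind of estimation described above (not the authors' full length-frequency model), the sketch below fits von Bertalanffy growth by maximum likelihood when L∞ varies between individuals. It assumes simulated lengths-at-age rather than length-frequency data, a normal distribution for L∞, and illustrative parameter names and values.

```python
# Minimal sketch: ML fit of von Bertalanffy growth with individual variation in L_inf.
# Assumptions: lengths at known ages, L_inf ~ Normal(mu, sd^2); all values illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate lengths at age: L_i = Linf_i * (1 - exp(-K * (t_i - t0)))
true_mu, true_sd, true_K, true_t0 = 50.0, 5.0, 0.3, -0.5
ages = rng.uniform(0.5, 6.0, size=500)
Linf_i = rng.normal(true_mu, true_sd, size=ages.size)
lengths = Linf_i * (1.0 - np.exp(-true_K * (ages - true_t0)))

def negloglik(params):
    mu, log_sd, K, t0 = params
    sd = np.exp(log_sd)                    # keep the spread of L_inf positive
    g = 1.0 - np.exp(-K * (ages - t0))     # growth fraction reached at each age
    # If L_inf ~ N(mu, sd^2), length at age t is N(mu * g, (sd * g)^2)
    return -np.sum(norm.logpdf(lengths, loc=mu * g, scale=sd * np.abs(g) + 1e-9))

fit = minimize(negloglik, x0=np.array([40.0, np.log(3.0), 0.2, 0.0]),
               method="Nelder-Mead")
mu_hat, sd_hat, K_hat, t0_hat = fit.x[0], np.exp(fit.x[1]), fit.x[2], fit.x[3]
print(mu_hat, sd_hat, K_hat, t0_hat)
```

The key point the abstract relies on carries over: because length at age is a deterministic function of L∞, individual variability in L∞ induces the length distribution directly, so the likelihood can be written in terms of the assumed distribution for L∞.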
Abstract:
We present a growth analysis model that combines large amounts of environmental data with limited amounts of biological data and apply it to Corbicula japonica. The model uses the maximum-likelihood method with the Akaike information criterion, which provides an objective criterion for model selection. An adequate distribution for describing a single cohort is selected from available probability density functions, which are expressed by location and scale parameters. Daily relative increase rates of the location parameter are expressed by a multivariate logistic function with environmental factors for each day and categorical variables indicating animal ages as independent variables. Daily relative increase rates of the scale parameter are expressed by an equation describing the relationship with the daily relative increase rate of the location parameter. Corbicula japonica grows to a modal shell length of 0.7 mm during the first year in Lake Abashiri. Compared with the attainable maximum size of about 30 mm, the growth of juveniles is extremely slow because their growth is less susceptible to environmental factors until the second winter. The extremely slow growth in Lake Abashiri may reflect geographical genetic variation within C. japonica.
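The model-selection step can be illustrated in isolation: the sketch below fits several candidate location-scale distributions to a single simulated cohort of shell lengths by maximum likelihood and compares them with AIC. The candidate list and the data are assumptions for illustration, not those used in the study.

```python
# Minimal sketch of AIC-based selection of a location-scale distribution for one cohort.
# The cohort data and candidate distributions are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shell_lengths = rng.normal(loc=0.7, scale=0.1, size=300)  # fake cohort (mm)

candidates = {
    "normal":   stats.norm,
    "logistic": stats.logistic,
    "gumbel":   stats.gumbel_r,
}

def aic(dist, data):
    # Fit location and scale by maximum likelihood, then AIC = 2k - 2*logL (k = 2 here)
    loc, scale = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, loc=loc, scale=scale))
    return 2 * 2 - 2 * loglik

scores = {name: aic(dist, shell_lengths) for name, dist in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "-> selected:", best)
```

The distribution with the lowest AIC is retained; in the full model described above, the selected location and scale parameters then evolve daily as functions of environmental factors and age.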
Abstract:
We show that the sensor self-localization problem can be cast as a static parameter estimation problem for Hidden Markov Models and we implement fully decentralized versions of the Recursive Maximum Likelihood and on-line Expectation-Maximization algorithms to localize the sensor network simultaneously with target tracking. For linear Gaussian models, our algorithms can be implemented exactly using a distributed version of the Kalman filter and a novel message passing algorithm. The latter allows each node to compute the local derivatives of the likelihood or the sufficient statistics needed for Expectation-Maximization. In the non-linear case, a solution based on local linearization in the spirit of the Extended Kalman Filter is proposed. In numerical examples we demonstrate that the developed algorithms are able to learn the localization parameters.
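A minimal, centralized sketch of the Recursive Maximum Likelihood idea for a linear Gaussian model is given below: a scalar Kalman filter tracks the target while a static sensor offset theta is updated by stochastic gradient ascent on the one-step predictive log-likelihood. The decentralized message passing of the paper is not reproduced; the model, constants, and names are illustrative.

```python
# Minimal sketch: Recursive ML for a static parameter in a linear Gaussian state-space model.
# State: target position x_t (mean-reverting). Observation: y_t = x_t - theta + noise,
# where theta stands in for an unknown sensor offset. All values illustrative.
import numpy as np

rng = np.random.default_rng(2)
a, Q, R = 0.9, 0.1, 0.5        # dynamics coefficient and noise variances
true_theta = 3.0               # unknown sensor offset to be recovered

T = 5000
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(Q))
y = x - true_theta + rng.normal(0.0, np.sqrt(R), T)

theta = 0.0                    # initial parameter guess
m, P = 0.0, 1.0                # Kalman filter mean and variance
dm = 0.0                       # derivative of the filter mean w.r.t. theta

for t in range(T):
    # Kalman prediction (and derivative of the predicted mean w.r.t. theta)
    m_pred, P_pred = a * m, a * a * P + Q
    dm_pred = a * dm
    # Innovation: the predicted measurement is m_pred - theta
    S = P_pred + R
    e = y[t] - (m_pred - theta)
    de = 1.0 - dm_pred                     # d(innovation)/d(theta)
    # Gradient of the one-step predictive log-likelihood w.r.t. theta
    grad = -(e * de) / S
    # Robbins-Monro update of theta with a decreasing gain
    theta += 1.0 / (1.0 + t) ** 0.6 * grad
    # Kalman update (and derivative of the updated mean w.r.t. theta)
    K = P_pred / S
    m = m_pred + K * e
    dm = dm_pred + K * de
    P = (1.0 - K) * P_pred

print("estimated sensor offset:", theta)   # should approach true_theta
```

In the decentralized setting of the paper, each node maintains such filter quantities locally and exchanges messages so that the local likelihood derivatives (or EM sufficient statistics) can be accumulated without a fusion center.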
Abstract:
The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multi-dimensional time series. Here, we develop a probabilistic interpretation of this algorithm showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
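For reference, the deterministic algorithm that the probabilistic model recovers can be sketched in a few lines: linear SFA whitens the observations and then takes the directions whose time derivatives have minimal variance. The toy data, dimensions, and mixing below are illustrative.

```python
# Minimal sketch of linear slow feature analysis on a toy multi-dimensional time series.
import numpy as np

rng = np.random.default_rng(3)

# Toy data: a slow latent signal mixed with faster ones via a random linear mixture.
t = np.linspace(0, 2 * np.pi, 2000)
sources = np.stack([np.sin(t),            # slow component we hope to recover
                    np.sin(11 * t),
                    np.sin(23 * t)])
X = (rng.normal(size=(3, 3)) @ sources).T          # observations, shape (T, 3)

# 1) Center and whiten the observations
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
W = evecs / np.sqrt(evals)                          # whitening transform
Z = Xc @ W                                          # whitened signals, unit variance

# 2) Find the directions minimizing the variance of the time derivative
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
d_evals, d_evecs = np.linalg.eigh(dcov)             # ascending: slowest direction first
slow_features = Z @ d_evecs                         # column 0 = slowest feature

print("slowness (variance of the derivative):", d_evals)
```

The probabilistic reading in the abstract treats such slow features as inferred latent variables of a generative model, which is what opens the door to the extensions the authors propose.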