869 results for Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood


Relevance:

100.00%

Abstract:

Objective: Working through a depressive illness can improve mental health but also carries risks and costs from reduced concentration, fatigue, and poor on-the-job performance. However, evidence-based recommendations for managing work-attendance decisions, to the benefit of both individuals and employers, are lacking. This study therefore compared the costs and health outcomes of short-term absenteeism versus working while ill ("presenteeism") among employed Australians reporting lifetime major depression.

Methods: A cohort simulation using state-transition Markov models tracked the movement of a hypothetical cohort of workers reporting lifetime major depression between health states over one and five years, with transition probabilities derived from a high-quality epidemiological data source and the existing clinical literature. Model outcomes were health-service and employment-related costs and quality-adjusted life years (QALYs), captured for absenteeism relative to presenteeism and stratified by occupation (blue- versus white-collar).

Results: Per employee with depression, absenteeism produced higher mean costs than presenteeism over both one and five years ($42,573 versus $37,791 over five years). However, overlapping confidence intervals rendered the differences non-significant. Employment-related costs (lost productive time, job turnover) and the antidepressant-medication and service-use costs of absenteeism and presenteeism were significantly higher for white-collar workers. Health outcomes differed between absenteeism and presenteeism among white-collar workers only.

Conclusions: Apart from service-use costs, costs and health outcomes did not differ significantly between absenteeism and presenteeism, but significant variation by occupation type was identified. These findings provide the first occupation-specific cost evidence that clinicians, employees, and employers can use to review their management of depression-related work attendance, and they may suggest that encouraging employees to continue working is warranted.
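A minimal sketch of the kind of state-transition Markov cohort simulation described above; the states, transition probabilities, costs, and utility weights are illustrative placeholders, not values from the study:

```python
import numpy as np

states = ["working_well", "working_ill", "absent", "recovered"]
P = np.array([  # annual transition probabilities (rows sum to 1; illustrative)
    [0.70, 0.15, 0.10, 0.05],
    [0.20, 0.50, 0.20, 0.10],
    [0.15, 0.25, 0.45, 0.15],
    [0.05, 0.05, 0.05, 0.85],
])
annual_cost = np.array([2000.0, 6000.0, 9000.0, 500.0])  # AUD, illustrative
utility = np.array([0.85, 0.60, 0.55, 0.90])             # QALY weights, illustrative

cohort = np.array([0.0, 1.0, 0.0, 0.0])  # whole cohort starts working while ill
total_cost, total_qalys = 0.0, 0.0
for year in range(5):                    # five one-year cycles
    total_cost += cohort @ annual_cost
    total_qalys += cohort @ utility
    cohort = cohort @ P                  # advance the cohort one cycle

print(f"5-year cost per person: ${total_cost:,.0f}, QALYs: {total_qalys:.2f}")
```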

Relevance:

100.00%

Abstract:

This paper presents an unmanned aircraft system (UAS) that uses a probabilistic model for autonomous front-on environmental sensing or photography of a target. The system is built from low-cost, readily available sensors, operates in dynamic environments, and is intended to improve the capabilities of dynamic waypoint-based navigation for low-cost UAS. The behavioural dynamics of target movement inform the design of a Kalman filter and a Markov model-based prediction algorithm. Geometrical concepts and the haversine formula are applied to the maximum likelihood case to predict a future state of the target, which yields a new waypoint for autonomous navigation. Results from aerial filming with a low-cost UAS are presented: the system maintains the desired front-on perspective without significantly constraining the route or pace of target movement.
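The prediction step can be sketched as dead reckoning on the sphere: project the target's most likely future position along its current bearing and hand that position to the navigation system as the next waypoint. The Python sketch below uses the standard haversine and destination-point formulas; the coordinates, speed, and horizon are illustrative assumptions:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, metres

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (degrees) via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def project_waypoint(lat, lon, bearing_deg, speed_mps, horizon_s):
    """Most-likely future position: dead-reckon along the current bearing."""
    d = speed_mps * horizon_s / EARTH_R          # angular distance travelled
    b = math.radians(bearing_deg)
    p1, l1 = math.radians(lat), math.radians(lon)
    p2 = math.asin(math.sin(p1) * math.cos(d) + math.cos(p1) * math.sin(d) * math.cos(b))
    l2 = l1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(p1),
                         math.cos(d) - math.sin(p1) * math.sin(p2))
    return math.degrees(p2), math.degrees(l2)

# Target at 2 m/s heading NE; predict its position 10 s ahead as the next waypoint.
wp = project_waypoint(-27.4698, 153.0251, 45.0, 2.0, 10.0)
print(wp, haversine_distance(-27.4698, 153.0251, *wp))  # ~20 m ahead
```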

Relevance:

100.00%

Abstract:

There is increasing interest worldwide in the use of unmanned aerial vehicles (UAVs) for wildlife and feral-animal monitoring. This paper describes a novel system in which a predictive dynamic application places the UAV ahead of a user; a low-cost thermal camera and a small onboard computer identify heat signatures of a target animal from a predetermined altitude and transmit the target's GPS coordinates. A map is generated, and various data sets and graphs are displayed through a GUI designed for easy use. The paper describes the hardware and software architecture and the probabilistic model used with the downward-facing camera to detect an animal. The behavioural dynamics of target movement inform the design of a Kalman filter and a Markov model-based prediction algorithm used to place the UAV ahead of the user. Geometrical concepts and the haversine formula are applied to the maximum likelihood case to predict a future state of the user, which yields a new waypoint for autonomous navigation. Results show that the system can autonomously locate animals from a predetermined height and generate a map showing their locations ahead of the user.
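A minimal constant-velocity Kalman predict/update step of the kind such a system could use to anticipate movement; the state model, noise levels, and GPS fixes below are illustrative assumptions, not the paper's tuned filter:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy], constant velocity
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # GPS measures position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)             # process noise (illustrative)
R = 4.0 * np.eye(2)              # GPS noise, ~2 m std (illustrative)

x = np.array([0.0, 0.0, 1.5, 0.5])   # initial position and velocity
P = np.eye(4)

def kalman_step(x, P, z):
    x_pred, P_pred = F @ x, F @ P @ F.T + Q        # predict
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)          # update with GPS fix z
    return x_new, (np.eye(4) - K @ H) @ P_pred

for z in [np.array([1.4, 0.6]), np.array([3.1, 1.1])]:  # two toy GPS fixes
    x, P = kalman_step(x, P, z)
print("predicted position 5 s ahead:", (np.linalg.matrix_power(F, 5) @ x)[:2])
```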

Relevance:

100.00%

Abstract:

The focus of this study is the statistical analysis of categorical responses in which the response values are dependent on each other. The most typical example of this kind of dependence arises when repeated responses have been obtained from the same study unit. For example, in Paper I the response of interest is pneumococcal nasopharyngeal carriage (yes/no) in 329 children. For each child, carriage is measured nine times during the first 18 months of life, so the repeated responses for each child cannot be assumed independent of each other. In this example, interest typically lies in the carriage prevalence and in whether different risk factors affect it. Regression analysis is the established method for studying the effects of risk factors, and making correct inferences from a regression model requires taking the associations between repeated responses into account. Analyses of repeated categorical responses typically focus on regression modelling, but further insight can be gained by investigating the structure of the association itself. The central theme of this study is therefore the development of joint regression and association models. The analysis of repeated, or otherwise clustered, categorical responses is computationally difficult, and likelihood-based inference is often feasible only when the number of repeated responses per study unit is small. Paper IV presents an algorithm that substantially facilitates maximum likelihood fitting, especially as the number of repeated responses grows. A further notable result of this work is freely available software for likelihood-based estimation with clustered categorical responses.
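As a generic illustration of likelihood-based estimation for clustered binary responses (not the joint regression-association models developed in the thesis), a beta-binomial model can be fitted by maximum likelihood to the number of positive results out of nine repeated measurements per child; the data here are simulated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, comb

rng = np.random.default_rng(0)
n = 9                                              # repeated swabs per child
p_child = rng.beta(2.0, 4.0, size=329)             # child-specific carriage prob
y = rng.binomial(n, p_child)                       # positives per child

def neg_loglik(theta):
    a, b = np.exp(theta)                           # keep parameters positive
    return -np.sum(np.log(comb(n, y)) + betaln(y + a, n - y + b) - betaln(a, b))

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
a, b = np.exp(fit.x)
print(f"prevalence = {a / (a + b):.3f}, intra-cluster corr = {1 / (a + b + 1):.3f}")
```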

Relevance:

100.00%

Abstract:

Downscaling from large-scale atmospheric variables simulated by general circulation models (GCMs) to station-scale hydrologic variables is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily sequence of large-scale atmospheric variables, is modeled as a linear-chain CRF. CRFs make no independence assumptions about the observations, which gives them the flexibility to use high-dimensional feature vectors. Maximum likelihood parameter estimation is performed with limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization, and maximum a posteriori estimation determines the most likely precipitation sequence for a given set of atmospheric inputs using the Viterbi algorithm. Direct classification of dry/wet days and of precipitation amounts is thus achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation, and uncertainty in the predictions is addressed through a modified Viterbi algorithm that returns the n most likely sequences. The model is applied to downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show increases in both the number of wet days and wet-day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in the shape of the density, with the probability of lower precipitation decreasing and that of higher precipitation increasing.
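Maximum a posteriori decoding in a linear-chain model reduces to the Viterbi recursion. A minimal two-state (dry/wet) sketch follows; the emission and transition scores are arbitrary stand-ins for the CRF's learned feature potentials:

```python
import numpy as np

def viterbi(emit_logp, trans_logp):
    """Most likely state sequence for per-step and transition log-scores."""
    T, S = emit_logp.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = emit_logp[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + trans_logp   # all prev -> cur moves
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + emit_logp[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                     # trace back the argmaxes
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(1)
emissions = np.log(rng.dirichlet([1, 1], size=20))    # per-day dry/wet scores
transitions = np.log([[0.8, 0.2], [0.4, 0.6]])        # persistence-favouring
print(viterbi(emissions, transitions))                # 0 = dry day, 1 = wet day
```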

Relevance:

100.00%

Abstract:

Most biological processes are governed by specific protein-ligand interactions. Discerning the different components that contribute toward a favorable protein-ligand interaction could contribute significantly to a better understanding of protein function, to rationalizing drug design, and to obtaining design principles for protein engineering. The Protein Data Bank (PDB) currently hosts the structures of ~68,000 protein-ligand complexes. Although several databases exist that classify proteins according to sequence and structure, a mere handful of them annotate and classify protein-ligand interactions and provide information on different attributes of molecular recognition. In this study, an exhaustive comparison of all the biologically relevant ligand-binding sites (84,846 sites) has been conducted using PocketMatch, a rapid, parallel, in-house algorithm. PocketMatch quantifies the similarity between binding sites based on structural descriptors and residue attributes. A similarity network was constructed from binding sites whose PocketMatch scores exceeded a high similarity threshold (0.80), and it was clustered into discrete sets of similar sites using the Markov clustering (MCL) algorithm. Furthermore, various computational tools have been used to study different attributes of the interactions within the individual clusters. The attributes can be roughly divided into (i) binding-site characteristics, including pocket shape, the nature of the residues, and interaction profiles with different kinds of atomic probes; (ii) atomic contacts, consisting of various types of polar, hydrophobic, and aromatic contacts along with binding-site water molecules that could play crucial roles in protein-ligand interactions; and (iii) the binding energetics involved in the interactions, derived from scoring functions developed for docking. For each ligand-binding site of each protein in the PDB, the site-similarity information, the clusters it belongs to, and a description of its attributes are provided as a relational database: Protein-Ligand Interaction Clusters (PLIC).
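The Markov clustering step can be sketched in a few lines: alternate expansion (squaring a column-stochastic flow matrix) with inflation (element-wise powering and renormalization) until clusters emerge as attractor rows. The toy adjacency matrix below stands in for the thresholded PocketMatch-score network:

```python
import numpy as np

def mcl(A, inflation=2.0, iters=50):
    M = A + np.eye(len(A))                 # add self-loops
    M = M / M.sum(axis=0)                  # column-stochastic flow matrix
    for _ in range(iters):
        M = np.linalg.matrix_power(M, 2)   # expansion: spread random-walk flow
        M = M ** inflation                 # inflation: sharpen strong flows
        M = M / M.sum(axis=0)
    # at convergence, rows with surviving mass (attractors) list their cluster
    clusters = set()
    for row in M:
        members = tuple(np.nonzero(row > 1e-6)[0])
        if members:
            clusters.add(members)
    return sorted(clusters)

A = np.array([[0, 1, 1, 0, 0],             # two obvious groups: {0,1,2}, {3,4}
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(mcl(A))
```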

Relevance:

100.00%

Abstract:

The formulation of higher-order structural models and their discretization using the finite element method is difficult owing to their complexity, especially in the presence of non-linearities. In this work, a new algorithm for automating the formulation and assembly of hyperelastic higher-order structural finite elements is developed. A hierarchic series of kinematic models is proposed for modeling structures with special geometries, and the algorithm is formulated to automate the study of this class of higher-order structural models. The algorithm sidesteps the need for an explicit derivation of the governing equations for the individual kinematic modes. Using a novel procedure involving a nodal degree-of-freedom-based automatic assembly algorithm, automatic differentiation, and higher-dimensional quadrature, the relevant finite element matrices are computed directly from the variational statement of elasticity and the higher-order kinematic model. Another significant feature of the proposed algorithm is that natural boundary conditions are handled implicitly for arbitrary higher-order kinematic models. The validity of the algorithm is illustrated with examples involving linear elasticity and hyperelasticity. (C) 2013 Elsevier Inc. All rights reserved.
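The central idea, computing element matrices directly from an energy functional rather than from hand-derived governing equations, can be sketched for a two-node bar element: the stiffness matrix is the Hessian of the strain energy with respect to the nodal degrees of freedom. The paper uses automatic differentiation; plain finite differences are substituted here to keep the sketch dependency-free, and the material constants are illustrative:

```python
import numpy as np

E, A, L = 210e9, 1e-4, 1.0          # Young's modulus, area, length (illustrative)

def strain_energy(u):
    """Linear two-node bar: U = (EA / 2L) * (u2 - u1)^2."""
    return 0.5 * E * A / L * (u[1] - u[0]) ** 2

def hessian(f, u, h=1e-6):
    """Central-difference Hessian; AD would do this exactly."""
    n = len(u)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            K[i, j] = (f(u + e_i + e_j) - f(u + e_i - e_j)
                       - f(u - e_i + e_j) + f(u - e_i - e_j)) / (4 * h * h)
    return K

K = hessian(strain_energy, np.zeros(2))
print(K)        # approximates (EA/L) * [[1, -1], [-1, 1]]
```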

Relevance:

100.00%

Abstract:

Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. A viable read-channel architecture for TDMR therefore requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter-noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization in TDMR, which can be used together with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB SNR gain over uncoded data compared with noise-predictive maximum likelihood detection for the same choice of channel model parameters, achieving a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm, and a ~10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
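A 1-D analogue of the partial-response idea can be sketched as follows: a least-squares FIR equalizer shapes a long-ISI channel to a short PR target, so the sequence detector only faces the controlled target ISI. The channel, target, and tap counts below are illustrative, and the paper's actual designs are two-dimensional:

```python
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 2000) * 2 - 1            # +/-1 data
channel = np.array([0.3, 0.8, 1.0, 0.6, 0.2])      # long ISI (illustrative)
target = np.array([1.0, 2.0, 1.0])                 # short PR target for detection

r = np.convolve(bits, channel) + 0.05 * rng.standard_normal(len(bits) + 4)
d = np.convolve(bits, target)                      # ideal detector input

# Least-squares 11-tap equalizer: fit R w to the target output at delay D.
taps, D = 11, 4
R = np.array([[r[n - k] if 0 <= n - k < len(r) else 0.0
               for k in range(taps)] for n in range(len(bits))])
dd = np.concatenate([np.zeros(D), d])[:len(bits)]  # delayed desired signal
w, *_ = np.linalg.lstsq(R, dd, rcond=None)

y = R @ w
print("residual ISI+noise power:", np.mean((y - dd) ** 2))
```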

Relevance:

100.00%

Abstract:

The learning of probability distributions from data is a ubiquitous problem in statistics and artificial intelligence. Over recent decades, several learning algorithms have been proposed for learning probability distributions based on decomposable models, owing to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, learning a maximum likelihood decomposable model with a given maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms that approximates this problem with a worst-case computational complexity of O(k · n^2 log n), where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure that transforms a maximal k-order decomposable graph into another one of higher likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains and show competitive behavior on the maximum likelihood problem. Owing to their low computational complexity, they are especially recommended for high-dimensional domains.
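For reference, the k = 2 base case that fractal trees generalize is Chow and Liu's algorithm: the maximum likelihood tree is the maximum-weight spanning tree under pairwise mutual information. A minimal sketch on simulated binary data:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(500, 5))              # 500 samples, 5 binary variables

def mutual_information(a, b):
    """Empirical mutual information between two binary columns."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            pab = np.mean((a == va) & (b == vb))
            pa, pb = np.mean(a == va), np.mean(b == vb)
            if pab > 0:
                mi += pab * np.log(pab / (pa * pb))
    return mi

edges = sorted(((mutual_information(X[:, i], X[:, j]), i, j)
                for i, j in combinations(range(5), 2)), reverse=True)

# Kruskal-style greedy maximum spanning tree with union-find.
parent = list(range(5))
def find(u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

tree = []
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:                # keep the edge if it joins two components
        parent[ri] = rj
        tree.append((i, j))
print("Chow-Liu tree edges:", tree)
```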

Relevance:

100.00%

Abstract:

When estimating parameters that constitute a discrete probability distribution {p_j}, it is difficult to determine how constraints should be imposed to guarantee that the estimated parameters {p̂_j} constitute a probability distribution (i.e., p̂_j ≥ 0 and Σ p̂_j = 1). For age distributions estimated from mixtures of length-at-age distributions, the EM (expectation-maximization) algorithm (Hasselblad, 1966; Hoenig and Heisey, 1987; Kimura and Chikuni, 1987), restricted least squares (Clark, 1981), and weak quasi-solutions (Troynikov, 2004) have all been used. Each of these methods appears to guarantee that the estimated distribution will be a true probability distribution, with all categories greater than or equal to zero and with individual probabilities that sum to one. In addition, all these methods appear to provide a theoretical basis for solutions that will be either maximum likelihood estimates or at least convergent to a probability distribution.
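The EM approach satisfies these constraints by construction: the E-step computes posterior age probabilities for each fish, and the M-step averages them, so the updated proportions are always nonnegative and sum to one. A minimal sketch in the spirit of Hasselblad (1966) and Kimura and Chikuni (1987), with illustrative normal length-at-age distributions and simulated lengths:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
mu, sd = np.array([20.0, 30.0, 38.0]), np.array([3.0, 3.5, 4.0])  # known pdfs
true_p = np.array([0.5, 0.3, 0.2])                 # true age proportions
ages = rng.choice(3, size=2000, p=true_p)
lengths = rng.normal(mu[ages], sd[ages])           # observed length sample

p = np.ones(3) / 3                                 # uniform starting proportions
dens = norm.pdf(lengths[:, None], mu, sd)          # fixed length-at-age densities
for _ in range(200):
    w = dens * p                                   # E-step: posterior age probs
    w /= w.sum(axis=1, keepdims=True)
    p = w.mean(axis=0)                             # M-step: p_j >= 0, sums to 1
print("estimated age proportions:", p.round(3))
```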

Relevance:

100.00%

Abstract:

A dynamic programming algorithm for joint data detection and carrier-phase estimation of continuous-phase-modulated signals is presented. The intent is to combine the robustness of noncoherent detectors with the superior performance of coherent ones. The algorithm differs from the Viterbi algorithm only in the metric that it maximizes over the possible transmitted data sequences. This metric is influenced both by the correlation with the received signal and by the current estimate of the carrier phase. Carrier-phase estimation is decision-guided, but there is no external phase-locked loop; instead, the phase of the best complex correlation with the received signal over the last few signaling intervals is used. The algorithm is slightly more complex than the coherent Viterbi algorithm but does not require narrowband filtering of the recovered carrier, as earlier approaches did, to achieve the same level of performance.
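The phase-estimation idea can be sketched directly: take the angle of the complex correlation between the received signal and the hypothesized (survivor) signal over the last few signaling intervals, with no phase-locked loop. The symbol model and noise level below are illustrative placeholders rather than a CPM implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
true_phase = 0.7
s = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 200))      # hypothesized symbols
r = s * np.exp(1j * true_phase) + 0.1 * (rng.standard_normal(200)
                                         + 1j * rng.standard_normal(200))

window = 8                                                 # last few intervals
corr = np.sum(r[-window:] * np.conj(s[-window:]))          # complex correlation
phase_est = np.angle(corr)                                 # decision-guided phase
print(f"true {true_phase:.3f}, estimated {phase_est:.3f}")
```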

Relevance:

100.00%

Abstract:

We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models. GPs are gaining increasing importance in signal processing, machine learning, robotics, and control for representing unknown system functions by posterior probability distributions. This modern way of system identification is more robust than finding point estimates of a parametric function representation. Our principled filtering/smoothing approach for GP dynamic systems is based on analytic moment matching in the context of the forward-backward algorithm. Our numerical evaluations demonstrate the robustness of the proposed approach in situations where other state-of-the-art Gaussian filters and smoothers can fail. © 2011 IEEE.
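For orientation, the exact discrete-HMM forward-backward recursion that the GP-based smoother generalizes (with analytic moment matching replacing these finite sums) looks as follows; the transition and emission matrices are illustrative:

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])     # state-transition probabilities
B = np.array([[0.8, 0.2], [0.3, 0.7]])     # emission probs for 2 symbols
pi = np.array([0.5, 0.5])                  # initial state distribution
obs = [0, 0, 1, 1, 0]

T, S = len(obs), len(pi)
alpha, beta = np.zeros((T, S)), np.ones((T, S))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):                      # forward pass (filtering)
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(T - 2, -1, -1):             # backward pass
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)  # smoothed state marginals
print(gamma.round(3))
```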

Relevance:

100.00%

Abstract:

National Natural Science Foundation of China [40471134]; the Light of West China program of the Chinese Academy of Sciences

Relevance:

100.00%

Abstract:

Automatic velocity picking not only improves the efficiency of seismic data processing but also quickly provides an initial velocity model for prestack depth migration. In this thesis, the Viterbi algorithm is used for automatic picking, but the velocities picked directly are often unreasonable. Careful study and analysis show that, although the Viterbi algorithm can perform automatic picking quickly and effectively, the data supplied to it may not be continuous in the derivative of its surface; that is, the surface of the velocity spectrum is not smooth, so the picked result may include unreasonable velocity information.

To solve this problem, we develop a new signal-filtering method based on a nonlinear coordinate transformation and a function filter, which we call the Gravity-Center-Preserved Pulse Compression Filter (GCPPCF). The main idea is as follows: divide a curve, such as a pulse, into several subsections; compute the gravity center of each subsection (a coordinate displacement); and then assign the subsection's value (density) to the gravity center. When the gravity center departs from the center of its subsection, the value assigned to it is smaller than the actual one; only when the gravity center coincides exactly with the subsection center does the assigned value equal the actual one. In this way, the curve in the new coordinates is laterally narrower than the original. Because the gravity center changes with the shape of the subsection, this is a nonlinear coordinate transformation; and because the value assigned to the gravity center is a weighted mean of the subsection function, the gravity-center assignment acts as a filter. Since the weighting coefficients also change with the shape of the subsection, the filter behaves as an adaptive, time-varying filter.

The Viterbi algorithm, applied here to automatic stacking-velocity picking, accumulates the maxima of the velocity spectrum ("energy groups") in a forward pass and recovers the optimal solution by backward recursion, making it a convenient tool for automatic picking. The GCPPCF not only preserves the positions of peak values while compressing the velocity spectrum, but can also serve as an adaptive time-varying filter to smooth a target curve or surface. We apply it to smooth the observed sequences so as to obtain favourable input data for the final, accurate solution. Without the adaptive time-varying filter for optimization, we cannot obtain clean input data or valid velocity information; without the Viterbi algorithm's efficient search, we cannot pick velocities automatically. The combination of the two algorithms therefore constitutes an effective method for automatic picking.

We apply the automatic velocity-picking method to velocity analysis of extrapolated wavefields; the results show that imaging of deep layers from the extrapolated wavefield is markedly improved. The GCPPCF performs well in application: it can be used not only to optimize and smooth velocity spectra but also for related processing of other types of signals.

The automatic velocity-picking method developed in this thesis gives favourable results on a simple model, a complicated model (the Marmousi model), and real data, demonstrating both its feasibility and its practicality.
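The picking step itself can be sketched as a Viterbi-style dynamic program over the velocity spectrum: choose one velocity per time sample so as to maximize the summed spectrum energy while bounding the jump between consecutive picks. The synthetic spectrum below stands in for a real (GCPPCF-smoothed) semblance panel:

```python
import numpy as np

rng = np.random.default_rng(6)
nt, nv, max_jump = 100, 60, 2
trend = np.linspace(10, 45, nt)                       # true velocity trend (bins)
spec = np.exp(-0.5 * ((np.arange(nv) - trend[:, None]) / 3.0) ** 2)
spec += 0.1 * rng.random((nt, nv))                    # noisy velocity spectrum

score = np.full((nt, nv), -np.inf)
back = np.zeros((nt, nv), dtype=int)
score[0] = spec[0]
for t in range(1, nt):                                # forward accumulation
    for v in range(nv):
        lo, hi = max(0, v - max_jump), min(nv, v + max_jump + 1)
        best = lo + int(np.argmax(score[t - 1, lo:hi]))
        score[t, v] = score[t - 1, best] + spec[t, v]
        back[t, v] = best

picks = [int(np.argmax(score[-1]))]                   # backward recursion
for t in range(nt - 1, 0, -1):
    picks.append(back[t, picks[-1]])
picks = picks[::-1]
print("mean |pick - trend|:", np.mean(np.abs(np.array(picks) - trend)).round(2))
```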

Relevância:

100.00% 100.00%

Publicador:

Resumo:

A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variation in illumination during tracking. An explicit second-order Markov model is used to predict the evolution of the skin-color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and the predictions of the Markov model. The evolution of the skin-color distribution at each frame is parameterized by translation, scaling, and rotation in color space; the consequent changes in the geometric parameterization of the distribution are propagated by warping and resampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using maximum likelihood estimation and also evolve over time. The accuracy of the new dynamic skin-color segmentation algorithm is compared with that obtained via a static color model. Segmentation accuracy is evaluated using labeled ground-truth video sequences taken from staged experiments and popular movies. An overall increase in segmentation accuracy of up to 24% is observed in 17 of 21 test sequences. In all but one case the skin-color classification rates for our system were higher, with background classification rates comparable to those of the static segmentation.
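The per-frame histogram update can be sketched as a convex blend of the histogram measured from the current segmentation with the Markov model's predicted histogram; the blend weight, bin count, and the stand-in "prediction" below are illustrative assumptions:

```python
import numpy as np

bins, alpha = 32, 0.3
hist = np.ones(bins) / bins                     # current skin-color model

def update(hist, seg_pixels, predicted_hist, alpha=alpha):
    """One per-frame update from segmentation feedback plus model prediction."""
    seg_hist, _ = np.histogram(seg_pixels, bins=bins, range=(0.0, 1.0))
    seg_hist = seg_hist / seg_hist.sum()        # normalized measured histogram
    new = alpha * seg_hist + (1 - alpha) * predicted_hist
    return new / new.sum()

rng = np.random.default_rng(7)
frame_skin = np.clip(rng.normal(0.55, 0.05, 500), 0, 1)  # toy hue values
predicted = np.roll(hist, 1)                    # stand-in for the Markov forecast
hist = update(hist, frame_skin, predicted)
print(hist.round(3)[:8])
```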