980 results for set-estimation


Relevance: 30.00%

Abstract:

Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant as it results in large processing burdens, and instead attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline, by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model, suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks, but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and the state of the art in 2D pose estimation highlight the efficacy of the proposed approach.
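The transition model described above, a random walk that drifts towards typically observed poses, can be illustrated with a toy sketch. All values here (component means, drift rate, noise scale) are hypothetical stand-ins for parameters the paper learns from Kinect data, and the state is 1-D rather than a full pose vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GMM prior over poses: component means learned offline.
means = np.array([-2.0, 0.0, 3.0])   # assumed "typical pose" modes

def ou_step(x, drift=0.1, sigma=0.2):
    """One discrete Ornstein-Uhlenbeck-style transition: a random walk
    that drifts towards the nearest typical pose (a toy choice of the
    responsible mixture component)."""
    k = np.argmin(np.abs(means - x))
    return x + drift * (means[k] - x) + sigma * rng.normal()

x = 5.0
for _ in range(200):
    x = ou_step(x)
# After many steps the state hovers near one of the prior modes.
```

In the paper this transition is combined with head/hand measurements inside a mixture Kalman filter; the sketch only shows the mean-reverting dynamics.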

Relevance: 30.00%

Abstract:

Concise probabilistic formulae with definite crystallographic implications are obtained from the distribution for eight three-phase structure invariants (3PSIs) in the case of a native protein and a heavy-atom derivative [Hauptman (1982). Acta Cryst. A38, 289-294] and from the distribution for 27 3PSIs in the case of a native and two derivatives [Fortier, Weeks & Hauptman (1984). Acta Cryst. A40, 646-651]. The main results of the probabilistic formulae for the four-phase structure invariants are presented and compared with those for the 3PSIs. The analysis directly leads to a general formula of probabilistic estimation for the n-phase structure invariants in the case of a native and m derivatives. The factors affecting the estimated accuracy of the 3PSIs are examined using the diffraction data from a moderate-sized protein. A method is suggested for estimating a set of large-modulus invariants, each corresponding to one of the eight 3PSIs, that have the largest |Δ| values and relatively large structure-factor moduli between the native and derivative; this remarkably improves the accuracy, and a phasing procedure making full use of all eight 3PSIs is therefore proposed.

Relevance: 30.00%

Abstract:

The relationship between monthly sea-level data measured at stations located along the Chinese coast and concurrent large-scale atmospheric forcing in the period 1960-1990 is examined. It is found that sea-level varies quite coherently along the whole coast, despite the geographical extension of the station set. A canonical correlation analysis between sea-level and sea-level pressure (SLP) indicates that a great part of the sea-level variability can be explained by the action of the wind stress on the ocean surface. The relationship between sea-level and sea-level pressure is analyzed separately for the summer and winter half-years. In winter, one factor affecting sea-level variability at all stations is the SLP contrast between the continent and the Pacific Ocean, hence the intensity of the winter Monsoon circulation. Another factor that affects all stations coherently is the intensity of the zonal circulation at mid-latitudes. In the summer half-year, on the other hand, the influence of SLP on sea-level is spatially less coherent: the stations in the Yellow Sea are affected by a more localized circulation anomaly pattern, whereas the rest of the stations are more directly connected to the intensity of the zonal circulation. Based on this analysis, statistical models (different for summer and winter) to hindcast coastal sea-level anomalies from the large-scale SLP field are formulated. These models have been tested by fitting their internal parameters in one period and reasonably reproducing the sea-level evolution in an independent period. These statistical models are also used to estimate the contribution of changes in the atmospheric circulation to sea-level along the Chinese coast in an altered climate.
For this purpose the output of a 150-year-long experiment with the coupled ocean-atmosphere model ECHAM1-LSG has been analyzed, in which the atmospheric concentration of greenhouse gases was continuously increased from 1940 until 2090, according to the Scenario A projection of the Intergovernmental Panel on Climate Change. In this experiment the meridional (zonal) circulation relevant for sea-level tends to become weaker (stronger) in the winter half-year and stronger (weaker) in summer. The estimated contribution of these atmospheric circulation changes to coastal sea-level is of the order of a few centimeters at the end of the integration, being negative in winter in the Yellow Sea and positive in the China Sea, with opposite signs in the summer half-year.
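The canonical correlation analysis at the heart of this abstract can be sketched with synthetic data. The shapes, the shared "circulation" signal, and the noise level below are all invented for illustration; the whitening-plus-SVD route to the leading canonical correlation is a standard minimal formulation, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a monthly SLP field (n x p) and coastal sea-level at
# several stations (n x q), both driven by a shared latent signal.
n, p, q = 360, 6, 4
z = rng.normal(size=(n, 2))                                       # shared signal
X = z @ rng.normal(size=(2, p)) + 0.5 * rng.normal(size=(n, p))   # "SLP"
Y = z @ rng.normal(size=(2, q)) + 0.5 * rng.normal(size=(n, q))   # "sea level"

def cca_first_correlation(X, Y):
    """Leading canonical correlation via orthonormal bases + SVD."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Ux = np.linalg.svd(X, full_matrices=False)[0]
    Uy = np.linalg.svd(Y, full_matrices=False)[0]
    # Singular values of Ux^T Uy are the canonical correlations.
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)[0]

r1 = cca_first_correlation(X, Y)  # high here, because of the shared signal
```

A hindcast model like the one in the abstract would then regress sea-level onto the leading SLP canonical variates fitted in one period.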

Relevance: 30.00%

Abstract:

Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends a recently developed method for locality-sensitive hashing, which finds approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions; we show how to find the set of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call Parameter-Sensitive Hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.
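The locality-sensitive hashing baseline the paper builds on can be sketched as follows. Note the hedge in the comments: the paper's contribution is *learning* hash functions sensitive in parameter space, whereas this sketch uses plain random hyperplanes; the dimensions and database size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy example database: feature vectors standing in for pose examples.
d, n_examples, n_bits = 32, 5000, 16
X = rng.normal(size=(n_examples, d))

# Random-hyperplane LSH; Parameter-Sensitive Hashing would instead learn
# hyperplanes relevant to the pose-estimation task.
H = rng.normal(size=(d, n_bits))

def hash_key(x):
    return tuple((x @ H > 0).astype(np.int8))

buckets = {}
for i, x in enumerate(X):
    buckets.setdefault(hash_key(x), []).append(i)

def query(x):
    """Candidate neighbours: examples sharing the query's hash bucket,
    found without scanning the whole database."""
    return buckets.get(hash_key(x), [])

candidates = query(X[0])  # a stored example retrieves at least itself
```

Nearby inputs tend to collide in the same bucket, which is what makes the lookup sublinear in the number of examples.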

Relevance: 30.00%

Abstract:

Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard, and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull", an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer the virtual visual hull of a novel single-view input silhouette, we search for 3D shapes in the database which are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views are generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
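The database search step, matching an input silhouette contour against stored examples, can be sketched with a crude contour descriptor. The radial signature used here is a stand-in of my own choosing, not the paper's matching method, and the "silhouettes" are synthetic blobs:

```python
import numpy as np

def radial_signature(contour, n_samples=64):
    """Describe a closed 2-D contour by its centroid-to-boundary distance,
    resampled at evenly spaced polar angles."""
    c = contour.mean(axis=0)
    d = contour - c
    ang = np.arctan2(d[:, 1], d[:, 0])
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(ang)
    return np.interp(np.linspace(-np.pi, np.pi, n_samples),
                     ang[order], r[order])

def make_blob(scale, n=200):
    """Synthetic star-shaped silhouette contour of a given size."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = scale * (1 + 0.3 * np.sin(3 * t))
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

# "Database" of example silhouettes and a query resembling one of them.
db_sigs = [radial_signature(make_blob(s)) for s in (1.0, 2.0, 3.0)]
query = radial_signature(make_blob(2.05))
best = int(np.argmin([np.linalg.norm(query - s) for s in db_sigs]))
```

In the paper, the retrieved multi-view examples then supply visual hulls whose aligned contours are interpolated to form the 3D estimate.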

Relevance: 30.00%

Abstract:

We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
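The offline/online split described here is easy to show end to end on a tiny graph. The graph and the landmark choice below are arbitrary (landmark selection strategy is exactly what the paper studies); the estimate is the triangle-inequality upper bound the abstract describes:

```python
from collections import deque

# Toy undirected graph as adjacency lists.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}

def bfs_dists(src):
    """Exact unweighted distances from src to every node (offline step)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

landmarks = [0, 5]                                 # fixed here, for illustration
index = {l: bfs_dists(l) for l in landmarks}       # precomputed landmark index

def estimate(u, v):
    """Upper bound on d(u, v) via the triangle inequality through landmarks."""
    return min(index[l][u] + index[l][v] for l in landmarks)
```

Here `estimate(1, 4)` returns 4 while the true distance is 2, illustrating how poorly placed landmarks loosen the bound; `estimate(0, 5)` returns the exact distance 4 because a landmark lies on an endpoint.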

Relevance: 30.00%

Abstract:

A nonparametric probability estimation procedure using the fuzzy ARTMAP neural network is described here. Because the procedure does not make a priori assumptions about underlying probability distributions, it yields accurate estimates on a wide variety of prediction tasks. Fuzzy ARTMAP is used to perform probability estimation in two different modes. In a 'slow-learning' mode, input-output associations change slowly, with the strength of each association computing a conditional probability estimate. In 'max-nodes' mode, a fixed number of categories are coded during an initial fast learning interval, and weights are then tuned by slow learning. Simulations illustrate system performance on tasks in which varying numbers of clusters in the set of input vectors are mapped to a given class.
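The 'slow-learning' idea, an association weight converging to a conditional probability, can be shown in isolation. This does not implement fuzzy ARTMAP; it only demonstrates the claimed property for a single category node, with a made-up target probability and learning rate:

```python
import numpy as np

rng = np.random.default_rng(5)

# A single association weight updated with a small learning rate converges
# to the conditional probability of the class given the category.
w, eps = 0.5, 0.01
p_true = 0.7                     # assumed P(class | category) for this demo
for _ in range(5000):
    y = 1.0 if rng.random() < p_true else 0.0
    w += eps * (y - w)           # slow learning: exponential running average
# w now approximates p_true
```

The small learning rate trades convergence speed for a low-variance estimate, which is the point of the slow-learning mode.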

Relevance: 30.00%

Abstract:

A nested heuristic approach that uses route length approximation is proposed to solve the location-routing problem. A new estimation formula for route length approximation is also developed. The heuristic is evaluated empirically against the sequential method and a recently developed nested method for location-routing problems. This testing is carried out on a set of problems with 400 customers and around 15 to 25 depots, with good results.
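The abstract does not reproduce its new estimation formula, so as an illustration of what "route length approximation" means, here is the classic square-root formula (Beardwood-Halton-Hammersley style) compared against a nearest-neighbour tour; the constant `k` is an empirical value, and the paper's own formula differs:

```python
import math
import random

random.seed(3)

def tour_length_estimate(n, area, k=0.765):
    """Approximate optimal-tour length through n uniform points in a region
    of the given area; k is an empirical constant (not the paper's formula)."""
    return k * math.sqrt(n * area)

# Sanity check against a nearest-neighbour tour on random points
# in the unit square (NN tours are typically ~25% above optimal).
pts = [(random.random(), random.random()) for _ in range(400)]

def nn_tour_length(pts):
    unvisited = pts[1:]
    cur, total = pts[0], 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda p: math.dist(cur, p))
        total += math.dist(cur, nxt)
        unvisited.remove(nxt)
        cur = nxt
    return total

approx = tour_length_estimate(400, 1.0)   # 0.765 * 20 = 15.3
actual = nn_tour_length(pts)
```

Such closed-form estimates let a location heuristic score candidate depot sets without solving a routing problem for each one, which is the nesting idea in the abstract.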

Relevance: 30.00%

Abstract:

This paper presents a novel approach based on the use of evolutionary agents for epipolar geometry estimation. In contrast to conventional nonlinear optimization methods, the proposed technique employs each agent to denote a minimal subset to compute the fundamental matrix, and considers the data set of correspondences as a 1D cellular environment, in which the agents inhabit and evolve. The agents execute some evolutionary behavior, and evolve autonomously in a vast solution space to reach the optimal (or near-optimal) result. Then three different techniques are proposed in order to improve the searching ability and computational efficiency of the original agents. Subset template enables agents to collaborate more efficiently with each other, and inherit accurate information from the whole agent set. Competitive evolutionary agent (CEA) and finite multiple evolutionary agent (FMEA) apply a better evolutionary strategy or decision rule, and focus on different aspects of the evolutionary process. Experimental results with both synthetic data and real images show that the proposed agent-based approaches perform better than other typical methods in terms of accuracy and speed, and are more robust to noise and outliers.
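Each agent's core computation, a fundamental matrix from a minimal subset of correspondences, is the standard normalized 8-point algorithm; a sketch of that building block (not the evolutionary search itself), verified on synthetic correspondences from an assumed two-camera rig:

```python
import numpy as np

rng = np.random.default_rng(6)

def eight_point(x1, x2):
    """Fundamental matrix from 8 correspondences (normalized 8-point)."""
    def normalize(x):
        c = x.mean(0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.column_stack([x, np.ones(len(x))]) @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one linear constraint p2^T F p1 = 0.
    A = np.stack([np.outer(b, a).ravel() for a, b in zip(p1, p2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                  # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1                         # undo the normalization

# Synthetic rig: camera 1 at the origin, camera 2 translated along x.
Xw = rng.uniform(-1, 1, size=(8, 3)) + np.array([0, 0, 5.0])
x1 = Xw[:, :2] / Xw[:, 2:]
Xc2 = Xw + np.array([1.0, 0, 0])
x2 = Xc2[:, :2] / Xc2[:, 2:]
F = eight_point(x1, x2)
residuals = [abs(np.append(b, 1) @ F @ np.append(a, 1))
             for a, b in zip(x1, x2)]           # epipolar constraint check
```

The evolutionary techniques in the paper decide *which* minimal subsets survive and recombine; this sketch is only the per-subset solve they all share.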

Relevance: 30.00%

Abstract:

Motivation: We study a stochastic method for approximating the set of local minima in partial RNA folding landscapes associated with a bounded-distance neighbourhood of folding conformations. The conformations are limited to RNA secondary structures without pseudoknots. The method aims at exploring partial energy landscapes pL induced by folding simulations and their underlying neighbourhood relations. It combines an approximation of the number of local optima devised by Garnier and Kallel (2002) with a run-time estimation for identifying sets of local optima established by Reeves and Eremeev (2004).

Results: The method is tested on nine sequences of length between 50 nt and 400 nt, which allows us to compare the results with data generated by RNAsubopt and subsequent barrier tree calculations. On the nine sequences, the method captures on average 92% of local minima with settings designed for a target of 95%. The run-time of the heuristic can be estimated by O(n²·D·ν·ln ν), where n is the sequence length, ν is the number of local minima in the partial landscape pL under consideration and D is the maximum number of steepest descent steps in attraction basins associated with pL.
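The basic mechanism, steepest descents from random starting conformations collecting the set of local minima, can be demonstrated on a toy landscape. The 1-D array of random energies with a two-neighbour relation below is a stand-in for the RNA secondary-structure neighbourhood, and no part of the Garnier-Kallel or Reeves-Eremeev estimators is implemented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy landscape: one energy value per "conformation", neighbours {i-1, i+1}.
E = rng.normal(size=200)

def steepest_descent(i):
    """Follow the lowest-energy neighbour until a local minimum is reached."""
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(E)]
        best = min(nbrs, key=lambda j: E[j])
        if E[best] >= E[i]:
            return i
        i = best

found = {steepest_descent(rng.integers(len(E))) for _ in range(2000)}
true_minima = {i for i in range(len(E))
               if all(E[i] <= E[j] for j in (i - 1, i + 1) if 0 <= j < len(E))}
coverage = len(found & true_minima) / len(true_minima)
```

The run-time bound quoted above reflects exactly these ingredients: descents of bounded depth (D), repeated enough times to cover the ν basins with high probability.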

Relevance: 30.00%

Abstract:

We present optical (UBVRI) and near-IR (YJHK) photometry of the normal Type Ia supernova (SN) 2004S. We also present eight optical spectra and one near-IR spectrum of SN 2004S. The light curves and spectra are nearly identical to those of SN 2001el. This is the first time we have seen optical and IR light curves of two Type Ia SNe match so closely. Within the one-parameter family of light curves for normal Type Ia SNe, that two objects should have such similar light curves implies that they had identical intrinsic colors and produced similar amounts of Ni-56. From the similarities of the light-curve shapes we obtain a set of extinctions as a function of wavelength that allows a simultaneous solution for the distance modulus difference of the two objects, the difference of the host galaxy extinctions, and R_V. Since SN 2001el had roughly an order of magnitude more host galaxy extinction than SN 2004S, the value of R_V = 2.15 (+0.24/-0.22) pertains primarily to dust in the host galaxy of SN 2001el. We have also shown via Monte Carlo simulations that adding rest-frame J-band photometry to the complement of BVRI photometry of Type Ia SNe decreases the uncertainty in the distance modulus by a factor of 2.7. A combination of rest-frame optical and near-IR photometry clearly gives more accurate distances than using rest-frame optical photometry alone.

Relevance: 30.00%

Abstract:

In many applications in applied statistics, researchers reduce the complexity of a data set by combining a group of variables into a single measure using factor analysis or an index number. We argue that such compression loses information if the data actually has high dimensionality. We advocate the use of a non-parametric estimator, commonly used in physics (the Takens estimator), to estimate the correlation dimension of the data prior to compression. The advantage of this approach over traditional linear data compression approaches is that the data does not have to be linearized. Applying our ideas to the United Nations Human Development Index we find that the four variables that are used in its construction have dimension three and the index loses information.
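The Takens estimator itself is short enough to state in full: it is the maximum-likelihood estimate D = -m / Σ ln(d_k/r) over the m pairwise distances d_k below a cutoff r. A minimal sketch on synthetic data (the embedding below is my own toy construction, chosen so points in 4-D actually live on a 2-D sheet):

```python
import numpy as np

rng = np.random.default_rng(8)

def takens_dimension(X, r):
    """Takens' ML estimate of the correlation dimension from the pairwise
    distances below the cutoff r."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    d = np.sqrt(d2[np.triu_indices(len(X), k=1)])
    d = d[(d > 0) & (d < r)]
    return -len(d) / np.log(d / r).sum()

# Points on a 2-D sheet linearly embedded in 4 dimensions: the estimated
# correlation dimension should come out near 2, not 4.
uv = rng.uniform(size=(800, 2))
X = np.column_stack([uv[:, 0], uv[:, 1], uv.sum(1), uv[:, 0] - uv[:, 1]])
D = takens_dimension(X, r=0.2)
```

In the HDI application described above, a result like "dimension three" from four variables is what signals that a single index is compressing away real structure.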

Relevance: 30.00%

Abstract:

Summary: We present a new R package, diveRsity, for the calculation of various diversity statistics, including common diversity partitioning statistics (θ, G_ST) and population differentiation statistics (D_Jost, G′_ST, χ² test for population heterogeneity), among others. The package calculates these estimators along with their respective bootstrapped confidence intervals at the locus, population-pairwise and global levels. Various plotting tools are also provided for a visual evaluation of estimated values, allowing users to critically assess the validity and significance of statistical tests from a biological perspective. diveRsity has a set of unique features, which facilitate the use of an informed framework for assessing the validity of the use of traditional F-statistics for the inference of demography, with reference to specific marker types, particularly focusing on highly polymorphic microsatellite loci. However, the package can be readily used for other co-dominant marker types (e.g. allozymes, SNPs). Detailed examples of usage and descriptions of package capabilities are provided. The examples demonstrate useful strategies for the exploration of data and interpretation of results generated by diveRsity. Additional online resources for the package are also described, including a GUI web app version intended for those with more limited experience using R for statistical analysis. © 2013 British Ecological Society.
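To make the "diversity partitioning" terminology concrete, here is Nei's G_ST computed from allele frequencies for one locus. This is a generic textbook calculation in Python, not diveRsity's (R) implementation, and the frequencies are invented:

```python
import numpy as np

# Toy allele frequencies: 2 subpopulations x 3 alleles (rows sum to 1).
p = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2]])

Hs = np.mean(1 - np.sum(p ** 2, axis=1))   # mean within-subpop heterozygosity
pbar = p.mean(axis=0)                       # pooled allele frequencies
Ht = 1 - np.sum(pbar ** 2)                  # total heterozygosity
Gst = (Ht - Hs) / Ht                        # proportion of diversity between pops
```

Statistics such as G′_ST and D_Jost rescale this quantity to correct its dependence on within-population diversity, which is exactly the issue the package's "informed framework" for highly polymorphic markers addresses.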

Relevance: 30.00%

Abstract:

Recently, a method to measure inequality has been proposed that is based on anthropometric indicators. Baten (1999, 2000) argued that the coefficient of variation of human stature (henceforth 'CV') is correlated with overall inequality in a society, and that it can be used as an indicator, especially where income inequality measures are lacking. This correlation has been confirmed in further analyses, for example by Pradhan et al. (2003), Moradi and Baten (2005), Sunder (2003), Guntupalli and Baten (2006), Blum (2010a), van Zanden et al. (2010), see also Figure 1 and Table 1. The idea is that average height reflects nutritional conditions during early childhood and youth. Since wealthier people have better access to food, shelter and medical resources, they tend to be taller than the poorer part of the population. Hence, the variation of height of a certain cohort may be indicative of income distribution during the decade of their birth. The aim of this study is firstly to provide an overview of different forms of within-country height inequality. Previous studies on the aspects of height inequality are reviewed. Inequalities between ethnic groups, gender, inhabitants of different regions and income groups are discussed. In the two final sections, we compare height CVs of anthropological inequality with another indicator of inequality, namely skill premia. We also present estimates of skill premia for a set of countries and decades for which "height CVs", as they will be called in the following, are available.
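The height CV indicator is simply the standard deviation of stature divided by its mean, usually reported multiplied by 100. A minimal sketch with invented cohort data:

```python
import math

# Hypothetical statures (cm) for one birth cohort; real studies use
# large samples of measured adult heights.
heights = [158.0, 162.5, 167.0, 171.5, 176.0, 180.5, 185.0]

mean = sum(heights) / len(heights)
sd = math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))
cv = 100 * sd / mean   # the "height CV" inequality indicator
```

A more unequal society, in this framework, shows a wider spread of heights around the cohort mean and hence a larger CV.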

Relevance: 30.00%

Abstract:

In this work, an adaptive modeling and spectral estimation scheme based on a dual Discrete Kalman Filtering (DKF) is proposed for speech enhancement. Both speech and noise signals are modeled by an autoregressive structure which provides an underlying time frame dependency and improves time-frequency resolution. The model parameters are arranged to obtain a combined state-space model and are also used to calculate instantaneous power spectral density estimates. The speech enhancement is performed by a dual discrete Kalman filter that simultaneously gives estimates for the models and the signals. This approach is particularly useful as a pre-processing module for parametric-based speech recognition systems that rely on spectral time-dependent models. The system performance has been evaluated by a set of human listeners and by spectral distances. In both cases the use of this pre-processing module has led to improved results.
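One piece of this pipeline, fitting an autoregressive model to a signal frame and turning its coefficients into a power spectral density estimate, can be sketched on its own. The AR(2) coefficients below are assumed values for a synthetic frame, and the dual Kalman filtering itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic frame from an assumed AR(2) process (stand-in for speech).
a_true = [1.6, -0.9]
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + rng.normal()

# Least-squares estimate of the AR coefficients from the frame.
p = 2
A = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
a_hat, *_ = np.linalg.lstsq(A, x[p:], rcond=None)

def ar_psd(a, sigma2, freqs):
    """PSD of an AR process: sigma^2 / |1 - sum_k a_k e^{-2*pi*i*k*f}|^2."""
    w = 2j * np.pi * freqs
    denom = 1 - sum(ak * np.exp(-w * (k + 1)) for k, ak in enumerate(a))
    return sigma2 / np.abs(denom) ** 2

psd = ar_psd(a_hat, 1.0, np.linspace(0, 0.5, 256))  # instantaneous PSD estimate
```

In the dual-DKF scheme, estimates like `a_hat` (for both speech and noise) define the combined state-space model, and the same coefficients yield the instantaneous spectral estimates mentioned in the abstract.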