962 results for "Estimation par maximum de vraisemblance" (maximum likelihood estimation)


Relevance: 20.00%

Abstract:

The maximum independent set problem is NP-complete even when restricted to planar graphs, cubic planar graphs or triangle-free graphs. The problem of finding an absolute approximation also remains NP-complete. Various polynomial-time approximation algorithms have been proposed for planar graphs that guarantee a fixed worst-case ratio between the independent set size obtained and the maximum independent set size. In this paper we present a simple and efficient O(|V|) algorithm that guarantees a ratio of 1/2 for planar triangle-free graphs. The algorithm differs completely from other approaches in that it collects groups of independent vertices at a time. Certain bounds we obtain in this paper relate to some interesting questions in the theory of extremal graphs.
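The abstract does not reproduce the authors' grouped O(|V|) algorithm; as a point of comparison, a generic minimum-degree greedy heuristic for independent sets can be sketched as follows (a simple baseline, not the paper's 1/2-ratio method):

```python
def greedy_independent_set(adj):
    """Greedy minimum-degree heuristic: repeatedly pick a vertex of
    smallest remaining degree, add it to the independent set, and
    delete it together with its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    chosen = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        chosen.add(v)
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed
    return chosen
```

On a 4-cycle this returns two opposite vertices; on a 5-vertex path it finds the optimal set of three alternating vertices.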

Relevance: 20.00%

Abstract:

Anticipating the number and identity of bidders has a significant influence on many theoretical results of the auction itself and on bidders' bidding behaviour. When a bidder knows in advance which specific bidders are likely competitors, this knowledge gives the company a head start when setting the bid price. Despite these competitive implications, however, most previous studies have focused almost entirely on forecasting the number of bidders, and only a few authors have dealt with the identity dimension, and then only qualitatively. Using a case study with immediate real-life applications, this paper develops a method for estimating every potential bidder's probability of participating in a future auction as a function of the economic size of the tender, removing the bias caused by the distribution of contract size opportunities. In this way, a bidder or auctioneer is able to estimate the likelihood that a specific group of key, previously identified bidders will participate in a future tender.

Relevance: 20.00%

Abstract:

In Queensland, the subtropical strawberry (Fragaria × ananassa) breeding program aims to combine traits into novel genotypes that increase production efficiency. The contribution of individual plant traits to cost and income under subtropical Queensland conditions was investigated, with the overall goal of improving the profitability of the industry through the release of new strawberry cultivars. The study involved specifying the production and marketing system using three cultivars of strawberry currently widely grown annually in southeast Queensland, developing methods to assess the economic impact of changes to the system, and identifying plant traits that influence outcomes from the system. From May through September, P (price; $ per punnet), V (monthly mass; tonnes of fruit on the market) and M (calendar month; i.e. May = 5) were found to be related (r2 = 0.92) by the function P = 4.741(0.469) - 0.001630(0.0005)V - 0.226(0.102)M (standard errors in parentheses), using data from 2006 to 2010 for the Brisbane central market. Both income and cost elements in the gross margin were subject to sensitivity analysis. The 'Harvesting' and 'Handling/Packing' groups of activities were the major contributors to variable costs (each >20%) in the gross margin analysis. Within the 'Harvesting' group, the 'Picking' activity contributed most (>80%), with the trait 'display of fruit' having the greatest (33%) influence on its cost. Within the 'Handling/Packing' group, the 'Packing' activity contributed 50% of costs, with the traits 'fruit shape', 'fruit size variation' and 'resistance to bruising' having the greatest (12-62%) influence on its cost. Non-plant items (e.g. carton purchases) made up the other 50% of the costs within the 'Handling/Packing' group. When any of the individual traits in the 'Harvesting' and 'Handling/Packing' groups was changed by one unit (on a 1-9 scale), the gross margin changed by up to 1%.
Increasing yield increased the gross margin to a maximum (15% above present) at 1320 g per plant (94% above present). A 10% redistribution of total yield from September to May increased the gross margin by 23%. Increasing fruit size increased the gross margin: a 75% increase in fruit size (to ~30 g) produced a 22% increase in the gross margin. The modified gross margin analysis developed in this study allowed simultaneous estimation of the gross margin for the producer and the gross value of the industry; these parameters sometimes move in opposite directions.
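The fitted price relation can be written directly as a small helper, using the coefficients reported in the abstract (standard errors omitted):

```python
def punnet_price(v_tonnes, month):
    """Fitted Brisbane central-market relation (2006-2010, r2 = 0.92):
    P = 4.741 - 0.001630*V - 0.226*M, with P in $ per punnet,
    V the monthly mass of fruit on the market (tonnes), and
    M the calendar month (May = 5)."""
    return 4.741 - 0.001630 * v_tonnes - 0.226 * month
```

For example, 500 tonnes on the market in May gives a predicted price of about $2.80 per punnet.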

Relevance: 20.00%

Abstract:

Abstract of Macbeth, G. M., Broderick, D., Buckworth, R. & Ovenden, J. R. (in press, Feb 2013). Linkage disequilibrium estimation of effective population size with immigrants from divergent populations: a case study on Spanish mackerel (Scomberomorus commerson). G3: Genes, Genomes and Genetics. Estimates of genetic effective population size (Ne) using molecular markers are a potentially useful tool for the management of species ranging from the endangered to the commercially harvested. However, pitfalls arise when the effective size is large, as estimates then require large numbers of samples from wild populations for statistical validity. Our simulations showed that linkage disequilibrium estimates of Ne up to 10,000 with finite confidence limits can be achieved with sample sizes of around 5000. This was deduced from empirical allele frequencies of seven polymorphic microsatellite loci in a commercially harvested fisheries species, the narrow-barred Spanish mackerel (Scomberomorus commerson). As expected, the smallest standard deviation of Ne estimates occurred when low-frequency alleles were excluded. Additional simulations indicated that the linkage disequilibrium method is sensitive to small numbers of genotypes from cryptic species or conspecific immigrants. A correspondence analysis algorithm was developed to detect and remove outlier genotypes that may have been inadvertently sampled from cryptic species or from non-breeding immigrants from genetically separate populations. Simulations demonstrated the value of this approach on the Spanish mackerel data. When putative immigrants were removed from the empirical data, 95% of the Ne estimates from jackknife resampling were above 24,000.
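The core of the linkage disequilibrium method can be sketched via Hill's (1981) approximation E[r2] ≈ 1/S + 1/(3·Ne) for unlinked loci in a sample of size S; the paper's estimator adds bias corrections and confidence limits not shown in this simplified sketch:

```python
def ne_from_ld(mean_r2, sample_size):
    """Effective population size from linkage disequilibrium, via
    Hill's approximation E[r2] ~ 1/S + 1/(3*Ne) for unlinked loci,
    rearranged to Ne = 1 / (3 * (r2 - 1/S)).  When the observed r2
    does not exceed the sampling expectation 1/S, the estimate is
    unbounded, illustrating why large Ne demands large samples."""
    excess = mean_r2 - 1.0 / sample_size
    if excess <= 0:
        return float("inf")
    return 1.0 / (3.0 * excess)
```

Note how, at S = 5000, a true Ne of 10,000 contributes only about 1/30000 to r2 on top of the 1/5000 sampling signal, which is why such large sample sizes are needed for finite confidence limits.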

Relevance: 20.00%

Abstract:

An investigation is presented of the daily variation of the maximum cloud zone (MCZ) and the 700 mb trough in the Northern Hemisphere over the Indian longitudes 70-90°E during April-October for 1973-77. It is found that during June-September there are two favorable locations for a MCZ over these longitudes: on a majority of days the MCZ is present in the monsoon zone north of 15°N, and often a secondary MCZ occurs in the equatorial region (0-10°N). The monsoon MCZ becomes established through northward movement of the MCZ occurring over the equatorial Indian Ocean in April and May. The secondary MCZ appears intermittently and is characterized by long spells of persistence only when the monsoon MCZ is absent. In each of the seasons studied, the MCZ temporarily disappeared from the mean summer monsoon location (15-28°N) about four weeks after it was established, near the beginning of July. It is reestablished by the northward movement of the secondary MCZ, which becomes active during the absence of the monsoon MCZ, in a manner strikingly similar to that observed in the spring-to-summer transition. A break in monsoon conditions prevails just prior to the temporary disappearance of the monsoon MCZ. We therefore conclude that the monsoon MCZ cannot survive for longer than about a month without reestablishment by the secondary MCZ. Possible underlying mechanisms are also discussed.

Relevance: 20.00%

Abstract:

This thesis examines the feasibility of a forest inventory method based on two-phase sampling in estimating forest attributes at the stand or substand levels for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data consisting of remote sensing imagery or other geographic information and field measurements. Auxiliary data are utilized as first-phase data for covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data in the form of correcting the spectral properties of aerial imagery was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V). Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features to the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages compared with a traditional sample plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone. 
The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverse values of their mean square errors. Although the plot-level estimation accuracy in a two-phase sampling inventory can be improved in many ways, forest estimates based mainly on single-view satellite and aerial imagery remain a relatively poor basis for stand-level management decisions.
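The inverse-MSE weighting rule described above can be sketched as a small helper (names are illustrative):

```python
def combine_estimates(estimates, mses):
    """Combine estimates from several data sources by weighting each
    with the inverse of its mean square error; weights are normalised
    to sum to one, so more accurate sources dominate the combination."""
    weights = [1.0 / m for m in mses]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```

With equal errors the rule reduces to a plain average; a source with three times the MSE of another receives one third of its weight.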

Relevance: 20.00%

Abstract:

A very general and numerically quite robust algorithm has been proposed by Sastry and Gauvrit (1980) for system identification. The present paper takes it up and examines its performance on a real test example. The example considered is the lateral dynamics of an aircraft. This is used as a vehicle for demonstrating the performance of various aspects of the algorithm in several possible modes.

Relevance: 20.00%

Abstract:

Age estimation from facial images is receiving increasing attention for applications such as age-based access control and age-adaptive targeted marketing. Since even humans can be misled by the complex biological processes involved, finding a robust method remains a research challenge. In this paper, we propose a new framework that integrates Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) to obtain a highly discriminative feature representation able to model shape, appearance, wrinkles and skin spots. In addition, we propose a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) that classifies a subject into an age group, followed by Support Vector Regression (SVR) that estimates a specific age. Errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, achieving mean absolute errors (MAE) of 4.50 and 5.86 years, respectively. Robustness was also evaluated on a merge of both datasets, where a MAE of 5.20 years was achieved. Furthermore, we compared human age estimates with those of the proposed approach and found that the machine outperforms humans. The proposed approach is competitive with the current state of the art, and the local phase features provide additional robustness to blur, lighting and expression variation.
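The flexible overlap between age groups can be illustrated with a small hypothetical helper; the group boundaries and overlap margin below are illustrative, not values from the paper:

```python
def overlapped_range(group_bounds, group_index, overlap):
    """Extend a predicted age group's [lo, hi] bounds by a fixed
    overlap margin, so the regression stage can recover from
    classification errors near the hard group boundaries."""
    lo, hi = group_bounds[group_index]
    return max(0, lo - overlap), hi + overlap
```

A subject misclassified into the neighbouring group by a few years still falls inside the widened range seen by that group's regressor.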

Relevance: 20.00%

Abstract:

Fisheries managers are becoming increasingly aware of the need to quantify all forms of harvest, including that by recreational fishers. This need has been driven by both a growing recognition of the potential impact that noncommercial fishers can have on exploited resources and the requirement, in many jurisdictions, to allocate catch limits between different sectors of the wider fishing community. Marine recreational fishers are rarely required to report any of their activity, and some form of survey technique is usually required to estimate levels of recreational catch and effort. In this review, we describe and discuss studies that have attempted to estimate the nature and extent of recreational harvests of marine fishes in New Zealand and Australia over the past 20 years. We compare studies by method to show how circumstances dictate their application and to highlight recent developments that other researchers may find of use. Although there has been some convergence of approach, we suggest that context is an important consideration, and many of the techniques discussed here have been adapted to suit local conditions and to address recognized sources of bias. Much of this experience, along with novel improvements to existing approaches, has been reported only in "gray" literature because of an emphasis on providing estimates for immediate management purposes. This paper brings much of that work together for the first time, and we discuss how others might benefit from our experience.

Relevance: 20.00%

Abstract:

NeEstimator v2 is a completely revised and updated implementation of software that produces estimates of contemporary effective population size, using several different methods and a single input file. NeEstimator v2 includes three single-sample estimators (updated versions of the linkage disequilibrium and heterozygote-excess methods, and a new method based on molecular coancestry), as well as the two-sample (moment-based temporal) method. New features include: (i) an improved method for accounting for missing data; (ii) options for screening out rare alleles; (iii) confidence intervals for all methods; (iv) the ability to analyse data sets with large numbers of genetic markers (10,000 or more); (v) options for batch processing large numbers of different data sets, which will facilitate cross-method comparisons using simulated data; and (vi) a correction for temporal estimates when sampled individuals are not removed from the population (Plan I sampling). The user is given considerable control over the input data and over the composition and format of the output files. The freely available software has a new Java interface and runs under MacOS, Linux and Windows.

Relevance: 20.00%

Abstract:

Terrain traversability estimation is a fundamental requirement to ensure the safety of autonomous planetary rovers and their ability to conduct long-term missions. This paper addresses two fundamental challenges for terrain traversability estimation techniques. First, representations of terrain data, which are typically built by the rover’s onboard exteroceptive sensors, are often incomplete due to occlusions and sensor limitations. Second, during terrain traversal, the rover-terrain interaction can cause terrain deformation, which may significantly alter the difficulty of traversal. We propose a novel approach built on Gaussian process (GP) regression to learn, and consequently to predict, the rover’s attitude and chassis configuration on unstructured terrain using terrain geometry information only. First, given incomplete terrain data, we make an initial prediction under the assumption that the terrain is rigid, using a learnt kernel function. Then, we refine this initial estimate to account for the effects of potential terrain deformation, using a near-to-far learning approach based on multitask GP regression. We present an extensive experimental validation of the proposed approach on terrain that is mostly rocky and whose geometry changes as a result of loads from rover traversals. This demonstrates the ability of the proposed approach to accurately predict the rover’s attitude and configuration in partially occluded and deformable terrain.
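As background, the GP regression machinery the approach builds on can be sketched in one dimension with an RBF kernel (posterior mean only; the learnt kernel, attitude outputs and multitask extension of the paper are not reproduced):

```python
import math

def gp_predict(train_x, train_y, test_x, lengthscale=1.0, noise=1e-6):
    """Minimal 1-D Gaussian-process regression: posterior mean
    m(x*) = k*^T (K + noise*I)^{-1} y with an RBF kernel, solving the
    linear system by Gaussian elimination with partial pivoting."""
    n = len(train_x)
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2.0 * lengthscale ** 2))
    # kernel matrix with a small noise term on the diagonal
    K = [[k(train_x[i], train_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # solve K * alpha = train_y (augmented-matrix elimination)
    A = [row[:] + [y] for row, y in zip(K, train_y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):
        alpha[r] = (A[r][n] - sum(A[r][c] * alpha[c]
                                  for c in range(r + 1, n))) / A[r][r]
    # posterior mean at each test point: sum_i k(x*, x_i) * alpha_i
    return [sum(k(x, train_x[i]) * alpha[i] for i in range(n)) for x in test_x]
```

With a near-zero noise term the posterior mean interpolates the training points, which is the behaviour exploited when predicting attitude at unvisited terrain locations.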

Relevance: 20.00%

Abstract:

Data-driven approaches such as Gaussian Process (GP) regression have been used extensively in recent robotics literature to achieve estimation by learning from experience. To ensure satisfactory performance, multiple learning inputs are required in most cases. Intuitively, adding new inputs can often contribute to better estimation accuracy; however, this may come at the cost of a new sensor, a larger training dataset and/or more complex learning, sometimes for limited benefit. It is therefore crucial to have a systematic procedure for determining the actual impact each input has on estimation performance. To address this issue, in this paper we propose to analyse the impact of each input on the estimate using a variance-based sensitivity analysis method. We propose an approach built on Analysis of Variance (ANOVA) decomposition, which can characterise how the prediction changes as one or more of the inputs change, and can also quantify the prediction uncertainty attributed to each of the inputs in a framework of dependent inputs. We apply the proposed approach to a terrain-traversability estimation method based on multi-task GP regression that we proposed in prior work, and we validate the implementation experimentally using a rover on a Mars-analogue terrain.
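The variance-based idea can be sketched with a pick-freeze Monte-Carlo estimate of a first-order Sobol index, assuming independent standard-normal inputs (the paper's ANOVA decomposition for dependent inputs is more general):

```python
import random

def first_order_index(f, n=200_000, seed=0):
    """Pick-freeze Monte-Carlo estimate of the first-order Sobol index
    of input 1: S1 = Cov(Y, Y') / Var(Y), where Y = f(X1, X2) and Y'
    reuses X1 but redraws X2.  S1 is the fraction of output variance
    attributable to input 1 alone."""
    rng = random.Random(seed)
    y, y_frozen = [], []
    for _ in range(n):
        x1, x2, x2_new = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        y.append(f(x1, x2))
        y_frozen.append(f(x1, x2_new))
    m, m_frozen = sum(y) / n, sum(y_frozen) / n
    var_y = sum((v - m) ** 2 for v in y) / n
    cov = sum(a * b for a, b in zip(y, y_frozen)) / n - m * m_frozen
    return cov / var_y
```

For the additive model f(a, b) = a + 2b, the first input contributes 1/(1 + 4) = 0.2 of the variance, and the estimator recovers this.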

Relevance: 20.00%

Abstract:

We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, type Ia supernovae, and weak cosmological lensing, and compare the results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data set, we find comparable parameter estimates for PMC and MCMC, with a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time is reduced from days for MCMC to hours for PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using it, are analyzed and discussed.
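One adaptation step of the PMC idea can be sketched in one dimension with a Gaussian proposal; the cosmological likelihoods are replaced here by a toy log-target, and the full population-of-proposals machinery is not reproduced:

```python
import math, random

def pmc_iteration(log_target, mu, sigma, n, rng):
    """One population Monte Carlo step: draw a population from the
    Gaussian proposal N(mu, sigma^2), compute self-normalised
    importance weights w ~ target(x)/proposal(x), and refit the
    proposal's mean and spread to the weighted population.  The n
    weight evaluations are independent, hence easily parallelizable."""
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    log_q = [-((x - mu) ** 2) / (2 * sigma ** 2) - math.log(sigma) for x in xs]
    log_w = [log_target(x) - lq for x, lq in zip(xs, log_q)]
    top = max(log_w)                      # stabilise the exponentials
    w = [math.exp(lw - top) for lw in log_w]
    total = sum(w)
    w = [wi / total for wi in w]
    new_mu = sum(wi * xi for wi, xi in zip(w, xs))
    new_var = sum(wi * (xi - new_mu) ** 2 for wi, xi in zip(w, xs))
    return new_mu, math.sqrt(new_var)
```

Starting from a poorly placed proposal, a few iterations against a standard-normal log-target pull the proposal onto the target.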

Relevance: 20.00%

Abstract:

We propose a family of multivariate heavy-tailed distributions that allow variable marginal amounts of tailweight. The originality comes from introducing multidimensional rather than univariate scale variables for the scale mixture of Gaussians family of distributions. In contrast to most existing approaches, the derived distributions can account for a variety of shapes and have a simple tractable form with a closed-form probability density function whatever the dimension. We examine a number of properties of these distributions and illustrate them in the particular cases of Pearson type VII and t tails. For these cases, we provide maximum likelihood estimation of the parameters and illustrate their modelling flexibility on simulated and real-data clustering examples.
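The per-coordinate scale-variable construction can be sketched as a sampler with diagonal covariance and Gamma mixing, as in the classical t-as-scale-mixture representation; the parameter names are illustrative and the paper's general (correlated) form is not reproduced:

```python
import random

def sample_multiscale_t(mu, scales, dfs, rng):
    """Draw one vector whose j-th coordinate is
    mu_j + scale_j * z_j / sqrt(w_j), with z_j standard normal and
    w_j ~ Gamma(df_j/2, rate df_j/2).  Each coordinate gets its own
    scale variable, so each margin has its own t-like tailweight."""
    out = []
    for m, s, df in zip(mu, scales, dfs):
        w = rng.gammavariate(df / 2.0, 2.0 / df)  # shape df/2, scale 2/df
        out.append(m + s * rng.gauss(0, 1) / w ** 0.5)
    return out
```

Using a single shared w for all coordinates instead would recover the ordinary multivariate t, where every margin is forced to have the same tailweight.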