10 results for Camera Pose Estimation

in Dalarna University College Electronic Archive


Relevance: 20.00%

Abstract:

The objective of this thesis work is to propose an algorithm to detect faces in digital images with complex backgrounds. A lot of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes or an open mouth. Facial features therefore form an important basis for detection. The current thesis work focuses on detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove non-skin regions. In the last phase, image processing and computer vision methods are used to verify the presence of facial components in the skin regions. This method is effective at detecting a face region with closed eyes, an open mouth or a half-profile face. The experimental results show a detection accuracy of around 85.4% and a detection speed that is faster than the neural network method and other techniques.
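The segmentation and filtering phases lend themselves to a short sketch. The snippet below is a minimal illustration using OpenCV; the YCrCb chrominance thresholds and the input file name are assumptions for illustration, not values from the thesis, and the localization phase is only indicated by a comment.

```python
# Minimal sketch of the segmentation and filtering phases, assuming an
# OpenCV/NumPy environment. Thresholds and file names are illustrative.
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Isolate candidate skin pixels by their chrominance (Cr, Cb) values."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin chrominance range; the thesis' exact thresholds may differ.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)

    # Filtering phase: morphological opening/closing (dilation is the Minkowski
    # addition of the region with the structuring element) removes small
    # non-skin blobs and fills holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

if __name__ == "__main__":
    image = cv2.imread("input.jpg")          # hypothetical input file
    if image is None:
        raise SystemExit("input.jpg not found")
    mask = skin_mask(image)
    # Localization phase (not shown): search each connected skin region
    # for facial components such as eyes and mouth.
    cv2.imwrite("skin_mask.png", mask)
```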

Relevance: 20.00%

Abstract:

We consider method-of-moments fixed effects (FE) estimation of technical inefficiency. When N, the number of cross-sectional observations, is large, it is possible to obtain consistent central moments of the population distribution of the inefficiencies. It is well known that the traditional FE estimator may be seriously upward biased when N is large and T, the number of time observations, is small. Based on the second central moment and a single-parameter distributional assumption on the inefficiencies, we obtain unbiased technical inefficiencies in large-N settings. The proposed methodology bridges traditional FE and maximum likelihood estimation: bias is reduced without the random effects assumption.
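As a rough numerical illustration of the moment idea, the sketch below recovers the scale of a half-normal inefficiency distribution from a second central moment and contrasts it with the traditional FE measure (distance to the best firm). The half-normal choice, the simulated "estimates" and the neglect of estimation noise are assumptions made for illustration; this is not the paper's estimator.

```python
# Illustrative sketch only: moment-based inefficiency under an assumed
# half-normal distribution versus the traditional FE measure.
import numpy as np

rng = np.random.default_rng(0)
N = 2000
sigma_u_true = 0.4
u_true = np.abs(rng.normal(scale=sigma_u_true, size=N))   # half-normal inefficiencies
alpha_hat = -u_true + rng.normal(scale=0.15, size=N)      # noisy FE "estimates" (small T)

# Traditional FE route: inefficiency relative to the best firm (upward biased for large N).
u_fe = alpha_hat.max() - alpha_hat
print("traditional FE mean inefficiency:", round(u_fe.mean(), 3))

# Moment route: second central moment plus the half-normal assumption,
# using Var(u) = sigma_u^2 * (1 - 2/pi) and E[u] = sigma_u * sqrt(2/pi).
m2 = np.var(alpha_hat)      # crude: estimation noise is not subtracted here
sigma_u = np.sqrt(m2 / (1 - 2 / np.pi))
print("moment-based mean inefficiency:  ", round(sigma_u * np.sqrt(2 / np.pi), 3))
print("true mean inefficiency:          ", round(u_true.mean(), 3))
```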

Relevance: 20.00%

Abstract:

This paper presents a two-step pseudo-likelihood estimation technique for generalized linear mixed models in which the random effects are correlated between groups. The core idea is to handle the intractable integrals in the likelihood function with a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo-likelihood algorithms in terms of computational time.
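To show what handling the intractable integrals by a Taylor approximation amounts to in the simplest case, the sketch below applies a second-order Taylor (Laplace) approximation to the random-effects integral of a single group in a random-intercept logistic model. The data, the single-intercept structure and the numerical curvature are illustrative assumptions, not the paper's two-step procedure.

```python
# Laplace (second-order Taylor) approximation of one group's intractable
# integral in a random-intercept logistic model. Illustrative sketch only.
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_group_loglik(y, eta_fixed, sigma2):
    """log of  integral prod_j Bernoulli(y_j | logistic(eta_j + b)) * N(b | 0, sigma2) db."""
    def neg_h(b):
        eta = eta_fixed + b
        loglik = np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli log-likelihood
        logprior = -0.5 * (b ** 2 / sigma2 + np.log(2 * np.pi * sigma2))
        return -(loglik + logprior)

    opt = minimize_scalar(neg_h)                  # mode of the integrand
    b_hat = opt.x
    # Numerical second derivative of h at the mode (curvature for Laplace).
    eps = 1e-4
    h2 = -(neg_h(b_hat + eps) - 2 * neg_h(b_hat) + neg_h(b_hat - eps)) / eps ** 2
    # log integral  ~  h(b_hat) + 0.5 * log(2 * pi / (-h''(b_hat)))
    return -opt.fun + 0.5 * np.log(2 * np.pi / (-h2))

# Hypothetical data for one group.
y = np.array([1, 0, 1, 1, 0])
eta_fixed = np.array([0.2, -0.1, 0.4, 0.0, -0.3])   # assumed fixed-effect part
print(laplace_group_loglik(y, eta_fixed, sigma2=0.5))
```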

Relevance: 20.00%

Abstract:

Random effect models have been widely applied in many fields of research. However, models with uncertain design matrices for the random effects have received little attention. In some applications with this problem, an expectation method has been used for simplicity, but this method ignores the extra information contained in the uncertainty of the design matrix. A closed-form solution to this problem is generally difficult to attain. We therefore propose a two-step algorithm for estimating the parameters, especially the variance components, in the model. The implementation is based on Monte Carlo approximation and a Newton-Raphson-based EM algorithm. As an example, a simulated genetics dataset was analyzed. The results showed that the proportion of the total variance explained by the random effects was accurately estimated, whereas the expectation method severely underestimated it. By introducing heuristic search and optimization methods, the algorithm could be developed further to infer the 'model-based' best design matrix and the corresponding best estimates.
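The contrast between the expectation method and an approach that retains the uncertainty in the design matrix can be shown with a toy Gaussian model. In the sketch below the variance components are held fixed and there are only two equally likely candidate design matrices; all values are illustrative, and the paper's Newton-Raphson-based EM algorithm is not reproduced.

```python
# Toy illustration: plugging in E[Z] (expectation method) versus averaging the
# marginal likelihood over the candidate design matrices (Monte Carlo / mixture).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

n, q = 20, 4
sigma_u2, sigma_e2 = 1.0, 0.5

# Two candidate 0/1 design matrices with known probabilities (the uncertainty).
Z_candidates = [rng.integers(0, 2, size=(n, q)).astype(float) for _ in range(2)]
probs = np.array([0.5, 0.5])

# Simulate data under one randomly chosen candidate.
Z_true = Z_candidates[rng.integers(2)]
u = rng.normal(scale=np.sqrt(sigma_u2), size=q)
y = Z_true @ u + rng.normal(scale=np.sqrt(sigma_e2), size=n)

def marginal_loglik(Z):
    V = sigma_u2 * Z @ Z.T + sigma_e2 * np.eye(n)
    return multivariate_normal(mean=np.zeros(n), cov=V).logpdf(y)

# Expectation method: plug in E[Z] and ignore the uncertainty.
Z_mean = sum(p * Z for p, Z in zip(probs, Z_candidates))
loglik_expectation = marginal_loglik(Z_mean)

# Mixture treatment: average the likelihood over the candidates.
logliks = np.array([marginal_loglik(Z) for Z in Z_candidates])
loglik_mixture = np.log(np.sum(probs * np.exp(logliks)))

print(f"expectation method log-likelihood:    {loglik_expectation:.2f}")
print(f"mixture (Monte Carlo) log-likelihood: {loglik_mixture:.2f}")
```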

Relevance: 20.00%

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), in which case they are called macro-environmental, or unknown, in which case they are called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike's information criterion based on h-likelihood to select the best-fitting model.

Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters.

Results: Designs with 100 sires, each with at least 100 offspring, are required to keep the standard deviations of the estimated variances below 50% of the true value. When the number of offspring increased, the standard deviations of the estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. The standard deviations of the estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for the estimates of any of the parameters. Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed.

Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
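A minimal simulation sketch of the assumed model structure is given below: sire families with a genetic intercept, a genetic reaction-norm slope (macro-environmental sensitivity) and a genetic effect on the log residual variance (micro-environmental sensitivity). The parameter values are illustrative, and the ASReml DHGLM fit itself is not reproduced.

```python
# Illustrative simulation of the reaction-norm-plus-heterogeneous-residual
# structure assumed in the study. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(2)

n_sires, n_offspring = 100, 100                  # design recommended in the study
var_intercept, var_slope, var_micro = 1.0, 0.2, 0.1
x = rng.uniform(-1, 1, size=(n_sires, n_offspring))   # macro-environmental covariate

# Sire effects: intercept, reaction-norm slope (macro) and log-variance (micro).
a0 = rng.normal(scale=np.sqrt(var_intercept), size=n_sires)
a1 = rng.normal(scale=np.sqrt(var_slope), size=n_sires)
av = rng.normal(scale=np.sqrt(var_micro), size=n_sires)

residual_sd = np.exp(0.5 * av)[:, None]          # sire-specific residual SD
e = rng.normal(size=(n_sires, n_offspring)) * residual_sd

y = 10.0 + a0[:, None] + a1[:, None] * x + e     # phenotypes, one row per sire family
print("phenotype matrix:", y.shape)
print("range of sire-specific residual SDs:", residual_sd.min(), residual_sd.max())
```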

Relevance: 20.00%

Abstract:

Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms.

Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model.

Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
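The core of the iteration, alternating between a weighted fit of the mean model and a gamma GLM with log link for the squared residuals, can be sketched as follows. For simplicity the mean part here is an ordinary weighted least-squares regression without random effects, so this is only an illustration of the alternating scheme, not the mixed-model-equation algorithm run in ASReml.

```python
# Illustrative DHGLM-style iteration: weighted mean model and a gamma GLM
# (log link) on the squared residuals, with weights = 1 / fitted variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
z = rng.normal(size=n)
true_sd = np.exp(0.5 * (0.2 + 0.6 * z))          # residual variance depends on z
y = 1.0 + 2.0 * x + rng.normal(size=n) * true_sd

X_mean = sm.add_constant(x)
X_var = sm.add_constant(z)
weights = np.ones(n)

for _ in range(5):
    mean_fit = sm.WLS(y, X_mean, weights=weights).fit()
    d = (y - mean_fit.fittedvalues) ** 2          # squared residuals, treated as gamma
    var_fit = sm.GLM(d, X_var,
                     family=sm.families.Gamma(sm.families.links.Log())).fit()
    weights = 1.0 / var_fit.fittedvalues          # new weights for the mean model

print(mean_fit.params)   # mean-model coefficients
print(var_fit.params)    # log residual-variance coefficients
```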

Relevance: 20.00%

Abstract:

Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty lies in parameter estimation, because there is no analytic solution for maximizing the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare the performance of three different statistical principles (marginal likelihood, extended likelihood and Bayesian analysis) via simulation studies. Real data on contact wrestling are used for illustration.
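For the marginal-likelihood principle, the intractable integral can be approximated numerically. The sketch below does this with Gauss-Hermite quadrature for a simulated random-intercept Poisson model; the data, the single random intercept and the optimizer choice are illustrative assumptions, and the extended-likelihood and Bayesian fits compared in the study are not shown.

```python
# Marginal likelihood of a random-intercept Poisson model via Gauss-Hermite
# quadrature, maximized numerically. Illustrative sketch with simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, logsumexp

rng = np.random.default_rng(4)
n_groups, n_per = 50, 20
x = rng.normal(size=(n_groups, n_per))
b = rng.normal(scale=0.7, size=n_groups)                 # true random intercepts
y = rng.poisson(np.exp(0.5 + 0.8 * x + b[:, None]))      # overdispersed counts

nodes, weights = np.polynomial.hermite.hermgauss(20)     # Gauss-Hermite rule

def neg_marginal_loglik(theta):
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)
    total = 0.0
    for i in range(n_groups):
        eta = beta0 + beta1 * x[i]                       # fixed part for this group
        # Integrate over b_i ~ N(0, sigma^2):
        # int f(b) phi(b) db = (1/sqrt(pi)) * sum_k w_k f(sqrt(2)*sigma*t_k)
        bvals = np.sqrt(2.0) * sigma * nodes
        loglik_k = np.array([
            np.sum(y[i] * (eta + bk) - np.exp(eta + bk) - gammaln(y[i] + 1))
            for bk in bvals
        ])
        total += logsumexp(loglik_k, b=weights) - 0.5 * np.log(np.pi)
    return -total

fit = minimize(neg_marginal_loglik, x0=np.zeros(3), method="Nelder-Mead")
print(fit.x)   # estimates of beta0, beta1 and log(sigma)
```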

Relevance: 20.00%

Abstract:

In this paper, Swedish listed companies' use of capital budgeting and cost of capital estimation methods in 2005 and 2008 is examined. The relation between company characteristics and choice of methods is investigated, and both within-country longitudinal and cross-country comparisons are made. Larger companies seem to have used capital budgeting methods more frequently than smaller companies. Compared to U.S. and continental European companies, Swedish listed companies employed capital budgeting methods less frequently. In 2005 the most common method for establishing the cost of equity was to ask the investors what return they required. By 2008 the CAPM was instead the most utilised method, which could indicate greater sophistication. The use of project risk when evaluating investments also seems to have gained in popularity, while the use of company risk declined. Overall, the use of sophisticated capital budgeting and cost of capital estimation methods seems to be rising and the use of less sophisticated methods declining.
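For reference, the CAPM cost of equity mentioned above is a one-line formula; the figures in the example below are purely illustrative.

```python
# CAPM cost of equity: r_e = r_f + beta * (E[r_m] - r_f).
def capm_cost_of_equity(risk_free_rate, beta, expected_market_return):
    return risk_free_rate + beta * (expected_market_return - risk_free_rate)

# Hypothetical figures: 3% risk-free rate, beta of 1.2, 8% expected market return.
print(f"{capm_cost_of_equity(0.03, 1.2, 0.08):.1%}")   # -> 9.0%
```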

Relevance: 20.00%

Abstract:

A system for weed management on railway embankments that is both adapted to the environment and efficient in terms of resources requires knowledge and understanding of the growing conditions of the vegetation, so that methods to control its growth can be adapted accordingly. Automated records could complement present-day manual inspections and, over time, come to replace them. One challenge is to devise a method that results in a reasonable breakdown of the gathered information, so that it can be managed rationally by the affected parties and, at the same time, serve as a basis for decisions with sufficient precision.

The project examined two automated methods that may be useful for the Swedish Transport Administration in the future: 1) a machine vision method, which uses camera sensors to sense the environment in the visible and near-infrared spectrum; and 2) an N-Sensor method, which transmits light over an area and measures the light reflected by the chlorophyll in the plants. The amount of chlorophyll provides a value that can be correlated with the biomass. The choice of technique depends on how the information is to be used. If the purpose is to form a general picture of vegetation growth on railway embankments in order to plan maintenance measures, the N-Sensor technique may be the right choice. If the plan is to form a general picture as well as monitor and survey the current and exact vegetation status of a surface over time, so that specific vegetation can be fought with the correct means, then the machine vision method is the better of the two. Both techniques involve registering data with GPS positioning. In the future, it will be possible to store this information in databases that are directly accessible to stakeholders online during, or in conjunction with, measures to deal with the vegetation.

The two techniques were compared with manual (visual) estimations of the level of vegetation growth. The observers' (raters') visual estimations of weed coverage (%) differed statistically from person to person. In estimating the frequency (number) of woody plants (trees and bushes) in the test areas, the observers were generally in agreement. The same person is often consistent in his or her estimation; it is the comparison with the estimations of others that can lead to misleading results. The system for using the information about vegetation growth requires development. The threshold for the amount of weeds that can be tolerated on different track types is an important component of such a system. The classification system must be able to handle the demands placed on it, so as to ensure the quality of the track, and must take into account other preconditions such as traffic levels, track location and the characteristics of the vegetation.
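One common way for a camera-based (machine vision) system to quantify green vegetation from visible and near-infrared data is a vegetation index such as NDVI. Whether the project's system computes NDVI specifically is an assumption made here for illustration.

```python
# NDVI from near-infrared and red reflectance: (NIR - RED) / (NIR + RED).
# Pixel values and the coverage threshold below are purely illustrative.
import numpy as np

def ndvi(nir, red):
    """Per-pixel NDVI in [-1, 1]; higher values indicate denser green vegetation."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Hypothetical 2x2 image patch.
nir = np.array([[200, 60], [180, 50]])
red = np.array([[ 40, 55], [ 50, 48]])
print(ndvi(nir, red))

# Fraction of pixels above an illustrative vegetation-coverage threshold.
print((ndvi(nir, red) > 0.4).mean())
```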
The project recommends that the Swedish Transport Administration:
- Discusses how threshold values for the growth of vegetation on railway embankments can be determined
- Carries out registration of the growth of vegetation over longer and a larger number of railway sections, using one or more of the methods studied in the project
- Introduces a system that effectively matches the information about vegetation to its position
- Includes information about the growth of vegetation in the records that are currently maintained of the track's technical quality, and links the data material to other maintenance-related databases
- Establishes a number of representative surfaces in which weed inventories (by measuring) are regularly conducted, as a means of developing an overview of the long-term development that can serve as a basis for more precise prognoses of vegetation growth
- Ensures that necessary opportunities for education are put in place