97 results for depth estimation
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We present a computer vision system that couples omnidirectional vision with structured light with the aim of obtaining depth information for a 360-degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
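The abstract does not reproduce the sensor equations; the following minimal Python sketch only illustrates the generic structured-light triangulation principle behind such sensors (intersecting a back-projected camera ray with a known laser plane). The pinhole intrinsics and laser-plane parameters are hypothetical placeholders, not the omnidirectional model or calibration of the paper.

```python
import numpy as np

def depth_from_laser_plane(pixel, K, plane_n, plane_d):
    """Intersect the back-projected viewing ray of `pixel` with the laser plane
    n . X = d (both expressed in the camera frame) and return the 3D point.

    Generic structured-light triangulation; K is a pinhole intrinsic matrix
    used only for illustration, not the omnidirectional model of the paper.
    """
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
    t = plane_d / (plane_n @ ray)                    # ray parameter at the intersection
    return t * ray                                   # 3D point in camera coordinates

# Hypothetical example: 640x480 pinhole camera, laser plane roughly 0.5 m ahead, tilted.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
point = depth_from_laser_plane((400, 260), K,
                               plane_n=np.array([0.0, -0.3, 1.0]), plane_d=0.5)
print(point)  # depth is the z component
```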
Abstract:
See the abstract at the beginning of the document in the attached file
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. At the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may fall outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
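A minimal sketch of the idea behind the correction: a locally smoothed (Nadaraya-Watson) kernel regression of the binary outcome on the index is a convex combination of zeros and ones, so the estimated participation probabilities necessarily stay in [0,1]. The data and bandwidth below are toy choices, not the Martins (2001) specification.

```python
import numpy as np

def participation_prob(index_grid, index_obs, y, h):
    """Nadaraya-Watson estimate of P(y = 1 | index) with a Gaussian kernel.

    Because the estimate is a weighted average of the observed 0/1 outcomes
    with non-negative weights summing to one, it always lies in [0, 1], which
    is the property the comment exploits.  Only a sketch of the idea, not the
    exact local-smoothing scheme used in the paper.
    """
    index_grid = np.atleast_1d(index_grid)
    u = (index_grid[:, None] - index_obs[None, :]) / h
    w = np.exp(-0.5 * u**2)                 # Gaussian kernel weights (>= 0)
    return (w @ y) / w.sum(axis=1)

# Toy data: participation generated from a probit-style index.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (x + rng.normal(size=500) > 0).astype(float)
print(participation_prob([-2.0, 0.0, 2.0], x, y, h=0.3))
```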
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
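As a rough illustration of the estimator described here, the sketch below applies kernel smoothing to a long simulated path of a toy latent AR(1) model observed with noise, and matches the resulting conditional mean to the data through two simple moment conditions. The model, instruments and grid search are hypothetical simplifications, not the setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(rho, n):
    """Toy dynamic latent variable model: latent AR(1) state observed with noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x + 0.5 * rng.normal(size=n)          # observed series y

def cond_mean(y_sim, grid, h=0.3):
    """Kernel-smoothed estimate of E[y_t | y_{t-1}] from a long simulated path."""
    u = (grid[:, None] - y_sim[:-1][None, :]) / h
    w = np.exp(-0.5 * u**2)
    return (w @ y_sim[1:]) / w.sum(axis=1)

y = simulate(0.6, 1000)                           # "observed" data, true rho = 0.6

def objective(rho):
    y_sim = simulate(rho, 5000)                   # long simulation at the trial parameter
    resid = y[1:] - cond_mean(y_sim, y[:-1])      # data minus simulated conditional mean
    g = np.array([resid.mean(), (resid * y[:-1]).mean()])   # simple moment conditions
    return g @ g

grid = np.linspace(0.1, 0.9, 17)
print(grid[np.argmin([objective(r) for r in grid])])        # should land near 0.6
```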
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for Computed Tomography (CT) scans with the Partial Least Squares Regression (PLS) technique in order to predict the LMP of the carcass and of the different cuts, and to study and compare two different methodologies for selecting the variables (Variable Importance for Projection, VIP, and stepwise) to be included in the prediction equation. The error of prediction with cross-validation (RMSEPCV) of the LMP obtained with PLS and VIP-based selection was 0.82%, and for stepwise selection it was 0.83%. The prediction of the LMP scanning only the ham had an RMSEPCV of 0.97%, and if the ham and the loin were scanned the RMSEPCV was 0.90%. Results indicate that for CT data both VIP and stepwise selection are good methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
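A hedged sketch of the workflow described (cross-validated error of prediction plus VIP-based variable selection), using scikit-learn and synthetic stand-ins for the CT variables; the VIP formula is the standard one, and the data are not the carcass data of the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for the CT data: 100 carcasses, 50 density variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = X[:, :5] @ np.array([2.0, -1.0, 0.5, 1.5, -0.5]) + rng.normal(scale=0.5, size=100)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()   # cross-validated predictions
rmsepcv = np.sqrt(np.mean((y - y_cv) ** 2))          # error of prediction with cross-validation
print(f"RMSEPCV = {rmsepcv:.3f}")

pls.fit(X, y)

def vip_scores(pls_model):
    """Variable Importance for Projection for a fitted single-response PLS model."""
    t, w, q = pls_model.x_scores_, pls_model.x_weights_, pls_model.y_loadings_
    p = w.shape[0]
    ss = (q[0] ** 2) * np.sum(t ** 2, axis=0)        # y variance explained per component
    wnorm2 = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (wnorm2 @ ss) / ss.sum())

selected = np.where(vip_scores(pls) > 1.0)[0]        # the usual VIP > 1 selection rule
print("variables kept:", selected)
```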
Abstract:
This study is an exploratory analysis whose main objective is a reconstruction of the water and air temperature of Lake Baikal over the last 40,000 years. The work was carried out using temperature-reconstruction proxies, namely the TEX86 and MAAT methods and the BIT index of terrestrial organic matter input, applied to sample VER93-2 st GC-24, retrieved by the Baikal Drilling Project in the central basin, with the aim of contributing palaeoclimatic data to improve the interpretation of future climatic events and to identify abrupt climatic events such as Heinrich events and the Younger Dryas. Before analysing the samples, an age extrapolation was performed on the core, since the age of core BDP VER93-2.st.GC-24 had previously been extrapolated only down to 277.5 cm depth, whereas the present study extends the analysis down to 460 cm depth. Once the results were obtained, precision and reproducibility were calculated to provide a quantitative estimate of the variability of the data obtained with the different proxies; this showed low data variability, except for the variability of TEX86 and the precision of MAAT. To locate the different climatic events that occurred during the Holocene and the Pleistocene, graphical analyses were carried out on our own results, together with and in comparison to the results obtained by Escala et al. (u.r. [unpublished results]) in the southern basin and the study published by Prokopenko et al., which analyses the presence of diatoms and organic matter in the North Atlantic. The integrated results of Escala et al. (unpublished results) and those of this study agree on the dating of the different events, with some variation hypothetically caused by the age extrapolation performed in the present study and by the large input of organic matter from the Selenga River at the core extraction site. These results show a possible relationship between the climatic events and the variation in water temperature.
Abstract:
Properties of GMM estimators for panel data, which have become very popular in the empirical economic growth literature, are not well known when the number of individuals is small. This paper analyses through Monte Carlo simulations the properties of various GMM and other estimators when the number of individuals is the one typically available in country growth studies. It is found that, provided that some persistency is present in the series, the system GMM estimator has a lower bias and higher efficiency than all the other estimators analysed, including the standard first-differences GMM estimator.
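For intuition, the sketch below runs a small Monte Carlo on a dynamic panel with few individuals; a simplified Anderson-Hsiao instrumental-variable estimator stands in for the first-differences GMM estimator, and system GMM, which the paper finds superior under persistency, is not implemented here. The data-generating process is a toy one, not the country growth design of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_panel(rho, n_countries=25, periods=30):
    """Dynamic panel y_it = rho * y_i,t-1 + country effect + shock."""
    eta = rng.normal(size=n_countries)
    y = np.zeros((n_countries, periods))
    for t in range(1, periods):
        y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=n_countries)
    return y

def pooled_ols(y):
    lag, cur = y[:, :-1].ravel(), y[:, 1:].ravel()
    return (lag @ cur) / (lag @ lag)

def anderson_hsiao(y):
    """IV on first differences, instrumenting dy_t-1 with the level y_t-2."""
    dy = np.diff(y, axis=1)
    z, dlag, dcur = y[:, :-2].ravel(), dy[:, :-1].ravel(), dy[:, 1:].ravel()
    return (z @ dcur) / (z @ dlag)

est = np.array([(pooled_ols(y), anderson_hsiao(y))
                for y in (simulate_panel(0.5) for _ in range(200))])
print("pooled OLS mean:", est[:, 0].mean())   # biased upward by the country effects
print("AH-IV mean:     ", est[:, 1].mean())   # close to the true value 0.5
```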
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
Abstract:
This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods. The methods are implemented using the statistical package R. Parametric analysis is limited to estimation of normal and lognormal distributions for each of the two claim types. The nonparametric analysis presented involves kernel density estimation. We illustrate the benefits of applying transformations to data prior to employing kernel-based methods. We use a log-transformation and an optimal transformation amongst a class of transformations that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, we have included in the Appendix of this paper all the R code that has been used in the analysis so that readers, both students and educators, can fully explore the techniques described.
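The paper's own R code is in its Appendix and is not reproduced here; the Python sketch below merely illustrates the same two ingredients on synthetic claim amounts: a parametric lognormal fit and a kernel density estimate computed on the log-transformed data and mapped back to the original scale.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
claims = rng.lognormal(mean=7.0, sigma=1.2, size=2000)   # synthetic claim severities

# Parametric fit: lognormal (equivalently, a normal fit to log(claims)).
shape, loc, scale = stats.lognorm.fit(claims, floc=0)
print("fitted sigma and median:", shape, scale)

# Nonparametric fit: KDE on the log scale, back-transformed to the original scale
# with the change-of-variables factor 1/x, which avoids the boundary bias a KDE
# applied directly to the skewed raw data would show near zero.
kde_log = stats.gaussian_kde(np.log(claims))
def density(x):
    x = np.asarray(x, dtype=float)
    return kde_log(np.log(x)) / x

grid = np.array([500.0, 1000.0, 5000.0])
print("KDE density:      ", density(grid))
print("lognormal density:", stats.lognorm.pdf(grid, shape, loc, scale))
```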
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
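A minimal sketch of the Internal Model idea: simulate the one-year net underwriting result for two lines of business whose dependence is driven by a Gaussian copula, and read the SCR off the 99.5% quantile. The marginals, correlation and premiums are invented placeholders rather than the Spanish market data used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_sims, rho = 100_000, 0.4

# Gaussian copula: correlated normals -> uniforms -> lognormal losses per line of business.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=n_sims)
u = stats.norm.cdf(z)
losses = np.column_stack([
    stats.lognorm.ppf(u[:, 0], s=0.8, scale=np.exp(4.0)),   # line 1 (placeholder marginal)
    stats.lognorm.ppf(u[:, 1], s=1.0, scale=np.exp(3.5)),   # line 2 (placeholder marginal)
])

premiums = np.array([70.0, 45.0])
result = premiums.sum() - losses.sum(axis=1)     # net underwriting result at a one-year horizon

# SCR as the capital needed to cover the 99.5% worst outcome beyond the expected result.
scr = np.mean(result) - np.quantile(result, 0.005)
print(f"SCR estimate: {scr:.1f}")
```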
Abstract:
Report for the scientific sojourn at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first project, we employed Energy Decomposition Analysis (EDA) to investigate aromaticity in Fischer carbenes, as it relates to all the reaction mechanisms studied in my PhD thesis. This powerful tool, compared with other well-known aromaticity indices in the literature such as NICS, is useful not only for quantitative results but also for measuring the degree of conjugation or hyperconjugation in molecules. Our results showed, for the annelated benzenoid systems studied here, that electron density is more concentrated on the outer rings than in the central one. Strain-induced bond localization plays a major role as a driving force in keeping the more substituted ring the less aromatic. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry to explain the aromaticity and regioselectivity of reactions found in those systems. In the second work, we employed the Turbomole program package and the best-performing density functionals in the current state of the art to explore reaction mechanisms in noble gas chemistry. In particular, we were interested in compounds of the form H-Ng-Ng-F (where Ng (noble gas) = Ar, Kr and Xe) and investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble gases present lower activation barriers and are therefore more labile and harder to observe experimentally.
Abstract:
This paper examines why a financial entity's solvency capital might be underestimated if the total amount required is obtained directly from a risk measurement. Using Monte Carlo simulation we show that, in some instances, a common risk measure such as Value-at-Risk is not subadditive when certain dependence structures are considered. Higher risk evaluations are obtained for independence between random variables than those obtained in the case of comonotonicity. The paper stresses, therefore, the relationship between dependence structures and capital estimation.
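A small Monte Carlo illustration of the phenomenon, under the assumption of very heavy-tailed (Pareto) marginals: the Value-at-Risk of an independent sum can exceed the sum of the stand-alone VaRs, which is exactly what the comonotonic case yields. The marginals and confidence level are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, alpha, level = 1_000_000, 0.9, 0.99        # Pareto tail index < 1: very heavy tails

def pareto(size):
    return (1.0 - rng.random(size)) ** (-1.0 / alpha)   # Pareto(alpha) with scale 1

x, y = pareto(n), pareto(n)
var = lambda s: np.quantile(s, level)         # Value-at-Risk as an empirical quantile

print("sum of stand-alone VaRs (= comonotonic VaR):", var(x) + var(y))
print("VaR of the independent sum:                 ", var(x + y))   # typically larger here
```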
Abstract:
This document introduces the basic concepts needed to apply software productivity metrics. After the introduction, the productivity metrics most widely used today are studied in detail: lines of code (a metric oriented to project size), function points (oriented to project functionality and specific to business management projects), feature points (similar to function points, but more generic and useful for other types of projects) and use case points (also function-oriented and specific to object-oriented projects). It explains how, starting from these metrics and with the help of productivity estimation models such as the COCOMO II model, estimates can be obtained of the effort needed to develop a software project and of the distribution of that effort across all project stages, based on the estimates for the development phase. It also covers, although not in as much depth, the metric
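As a worked illustration of the COCOMO II effort estimation mentioned above, the sketch below evaluates the post-architecture equation PM = A * Size^E * prod(EM) with E = B + 0.01 * sum(SF), using the nominal COCOMO II.2000 constants A = 2.94 and B = 0.91; the scale-factor and effort-multiplier values are made-up ratings, not taken from the document.

```python
# Hedged sketch of the COCOMO II post-architecture effort equation:
#   PM = A * Size^E * prod(EM),  E = B + 0.01 * sum(SF)
# A = 2.94 and B = 0.91 are the nominal COCOMO II.2000 constants; the scale
# factors (SF) and effort multipliers (EM) below are illustrative, not a real rating.
A, B = 2.94, 0.91

def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers):
    """Return estimated effort in person-months."""
    e = B + 0.01 * sum(scale_factors)
    pm = A * ksloc ** e
    for em in effort_multipliers:
        pm *= em
    return pm

# Example: 40 KSLOC project, moderate scale factors, slightly unfavourable cost drivers.
print(round(cocomo_ii_effort(40, scale_factors=[3.7, 3.0, 4.2, 2.2, 4.7],
                             effort_multipliers=[1.10, 0.95, 1.05]), 1), "person-months")
```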
Abstract:
Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. Of the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.
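For reference, a minimal ordinary-least-squares version of the moderated regression mentioned in the first sentence (the product term added to the design matrix); it ignores measurement error and latent variables, so it is not the Jöreskog and Yang specification, and the variable names and coefficients are toy stand-ins for the budgets, innovation and performance study.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
budget_use = rng.normal(size=n)       # toy stand-in for interactive use of budgets
innovation = rng.normal(size=n)       # toy moderator
performance = (0.4 * budget_use + 0.3 * innovation
               + 0.25 * budget_use * innovation + rng.normal(scale=0.8, size=n))

# Moderated regression: the interaction effect is the product term x*z in the design.
X = np.column_stack([np.ones(n), budget_use, innovation, budget_use * innovation])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print("intercept, main effects, interaction:", np.round(beta, 2))
```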
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, covering more than 0.5 million samples of synthetic data. Results on real data are also given.
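One of the standard building blocks such surveys compare is the discrete epipolar constraint; the sketch below estimates an essential matrix linearly from synthetic normalized correspondences (an eight-point-style solve). This is a generic textbook method, not necessarily one of the estimators evaluated in the article.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Linear estimate of the essential matrix from normalized image points.

    Each correspondence gives one row of A.vec(E) = 0, derived from the discrete
    epipolar constraint x2^T E x1 = 0; the solution is the smallest right singular
    vector, projected onto the essential-matrix manifold (singular values 1, 1, 0).
    """
    a = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    e = np.linalg.svd(a)[2][-1].reshape(3, 3)
    u, s, vt = np.linalg.svd(e)
    return u @ np.diag([1.0, 1.0, 0.0]) @ vt   # enforce the essential-matrix structure

# Synthetic check: points seen before and after a small camera translation along x.
rng = np.random.default_rng(7)
pts = rng.uniform(-1, 1, size=(20, 3)) + np.array([0, 0, 4.0])
x1 = pts[:, :2] / pts[:, 2:]                   # normalized coordinates, first view
pts2 = pts - np.array([0.2, 0.0, 0.0])         # same points after the camera moves +0.2 in x
x2 = pts2[:, :2] / pts2[:, 2:]
E = essential_from_correspondences(x1, x2)
xh1 = np.column_stack([x1, np.ones(20)])
xh2 = np.column_stack([x2, np.ones(20)])
print(np.abs(np.sum(xh2 * (xh1 @ E.T), axis=1)).max())   # epipolar residuals, ~0
```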