882 results for the least squares distance method
Abstract:
Geometric modeling is important in both computer graphics and engineering. Our ability to represent geometric information sets the limits on, and determines the ease with which, we manipulate 3D objects. One such geometric representation is the volumetric mesh, formed of polyhedra assembled so as to approximate a desired shape. Some applications, such as texture mapping and remeshing, benefit from deforming the mesh towards a more regular domain to facilitate processing. A deformation is called quasi-conformal if it bounds distortion. This thesis concerns the study and development of algorithms for the quasi-conformal deformation of volumetric meshes. We study these deformations because they offer good preservation of the local appearance of a solid and because, unlike their 2D counterparts, they have received little attention in computer graphics. This research attempts to generalize to volumes concepts that are well understood for surface deformation. First, we present a linear approach to quasi-conformality. We develop a method that deforms the object towards its parametric domain by linear least squares. This method is simple to implement and fast to execute, but it is only an approximation of quasi-conformality since it does not bound distortion. Second, we remedy this problem with a nonlinear approach based on vertex positions. We develop a technique that deforms the parametric domain towards the solid by nonlinear least squares. The nonlinearity allows the inclusion of constraints guaranteeing the injectivity of the deformation. Moreover, deforming the parametric domain instead of the object itself allows the use of more general domains. Third, we present a nonlinear approach based on dihedral angles. This method defines the deformation of the solid by the dihedral angles instead of the vertex positions of the mesh. This change of variables allows a natural expression of the distortion bounds of the deformation. We present several applications of this new approach, including the parameterization, interpolation, optimization and compression of tetrahedral meshes.
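To make the linear step concrete, the minimal sketch below (Python/NumPy, not the thesis's actual formulation) solves a generic linear least squares system for new vertex coordinates, the way a deformation towards a parametric domain would proceed once a distortion energy has been assembled; the matrix A and target vector b are assumed inputs standing in for that energy.

```python
# Minimal sketch of a linear least-squares deformation step. A and b are
# assumed to encode a distortion energy over the mesh; this is illustrative,
# not the thesis's actual system.
import numpy as np

def solve_deformation(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Return coordinates x minimizing ||A x - b||^2."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy usage: an overdetermined 4x2 system standing in for mesh constraints.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
b = np.array([1.0, 2.0, 3.1, -0.9])
print(solve_deformation(A, b))
```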
Abstract:
Econometrics is a young science. It developed during the twentieth century, from the mid-1930s onward, and primarily after World War II. Econometrics is the unification of statistical analysis, economic theory and mathematics. Its history can be traced to the use of statistical and mathematical analysis in economics. The most prominent contributions of the initial period can be seen in the works of Tinbergen and Frisch, and also in that of Haavelmo from the 1940s through the mid-1950s. From the rudimentary application of statistics to economic data, such as the use of laws of error following the development of least squares by Legendre, Laplace and Gauss, the discipline later witnessed the applied work of Edgeworth and Mitchell. A very significant milestone in its evolution was the work of Tinbergen, Frisch and Haavelmo on multiple regression and correlation analysis, techniques they used to test different economic theories against time series data. Although some predictions based on econometric methodology may have gone wrong, the sound scientific nature of the discipline cannot be ignored. This is reflected in the economic rationale underlying any econometric model and in the statistical and mathematical reasoning behind the various inferences drawn. The relevance of econometrics as an academic discipline assumes high significance in this context. Because of its interdisciplinary nature (a unification of economics, statistics and mathematics), the subject can be taught in all these broad areas, notwithstanding the fact that it is most often offered only to economics students, since students of other disciplines might lack the economics background needed to understand it. In fact, econometrics is also quite relevant to technical courses (such as engineering), business management courses (such as the MBA) and professional accountancy courses, and even more so to research students in the social sciences, commerce and management. In the ongoing scenario of globalization and economic deregulation, there is a need to give added thrust to econometrics in higher education across the social science, commerce, management and professional accountancy streams. In this way students' analytical ability can be sharpened, their capacity to examine socio-economic problems with a mathematical approach improved, and their ability to derive scientific inferences and solutions to such problems developed. The utmost significance of hands-on practical training in the use of computer-based econometric packages, especially at the postgraduate and research levels, must also be pointed out: mere learning of econometric methodology or the underlying theories would have little practical utility for students in their future careers, whether in academia, industry or practice. This paper seeks to trace the historical development of econometrics and to study its current status as an academic discipline in higher education. The paper also looks into the problems faced by teachers in teaching econometrics and by students in learning the subject, including the effective application of the methodology in real-life situations. Accordingly, the paper offers some suggestions for the effective teaching of econometrics in higher education.
Abstract:
Productivity, botanical composition and forage quality of legume-grass swards are important factors for successful arable farming in both organic and conventional farming systems. As these attributes can vary considerably within a field, a non-destructive method of detection while doing other tasks would facilitate a more targeted management of crops, forage and nutrients in the soil-plant-animal system. This study was undertaken to explore the potential of field spectral measurements for the non-destructive prediction of dry matter (DM) yield, legume proportion in the sward, metabolizable energy (ME), ash content, crude protein (CP) and acid detergent fiber (ADF) of legume-grass mixtures. Two experiments were conducted in a greenhouse under controlled conditions, allowing the collection of spectral measurements free from interference such as wind, passing clouds and changing angles of solar irradiation. In a second step, this initial investigation was evaluated in the field in a two-year experiment with the same legume-grass swards. Several techniques for the analysis of the hyperspectral data set were examined in this study: four vegetation indices (VIs), namely the simple ratio (SR), normalized difference vegetation index (NDVI), enhanced vegetation index (EVI) and red edge position (REP); two-waveband reflectance ratios; modified partial least squares (MPLS) regression; and stepwise multiple linear regression (SMLR). The results showed the potential of field spectroscopy and proved its usefulness for the prediction of DM yield, ash content and CP across a wide range of legume proportions and growth stages. In all investigations, the prediction accuracy for DM yield, ash content and CP could be improved by legume-specific calibrations which included mixtures and pure swards of perennial ryegrass and of the respective legume species. The comparison between the greenhouse and the field experiments showed that the interaction between spectral reflectance and weather conditions, as well as the incidence angle of light, interfered with an accurate determination of DM yield. Further research is hence needed to improve the validity of spectral measurements in the field. Furthermore, the developed models should be tested on varying sites and vegetation periods to enhance their robustness and portability to other environmental conditions.
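As a hedged illustration of two of the vegetation indices named above, the sketch below computes SR and NDVI from a reflectance curve; the band positions (red near 670 nm, NIR near 780 nm) are common choices and not necessarily those used in the study.

```python
# Sketch of two vegetation indices computed from a hyperspectral reflectance
# curve. Band positions are typical values, assumed for illustration.
import numpy as np

def band(wavelengths, reflectance, target_nm):
    """Reflectance at the sampled wavelength closest to target_nm."""
    return reflectance[np.argmin(np.abs(wavelengths - target_nm))]

def simple_ratio(wl, refl):
    return band(wl, refl, 780) / band(wl, refl, 670)   # SR = NIR / red

def ndvi(wl, refl):
    nir, red = band(wl, refl, 780), band(wl, refl, 670)
    return (nir - red) / (nir + red)                   # NDVI

# Toy spectrum: low reflectance in the red, high in the NIR.
wl = np.arange(400, 1000, 10, dtype=float)
refl = np.where(wl < 700, 0.05, 0.45)
print(simple_ratio(wl, refl), ndvi(wl, refl))
```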
Abstract:
We propose a nonparametric method for estimating derivative financial asset pricing formulae using learning networks. To demonstrate feasibility, we first simulate Black-Scholes option prices and show that learning networks can recover the Black-Scholes formula from a two-year training set of daily options prices, and that the resulting network formula can be used successfully to both price and delta-hedge options out-of-sample. For comparison, we estimate models using four popular methods: ordinary least squares, radial basis functions, multilayer perceptrons, and projection pursuit. To illustrate practical relevance, we also apply our approach to S&P 500 futures options data from 1987 to 1991.
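For reference, the Black-Scholes call price and delta that the networks are trained to recover are standard and can be computed directly, as in the sketch below; the parameter values in the usage line are arbitrary.

```python
# Standard Black-Scholes call price and delta (the target of the learning
# networks in the simulation experiment described above).
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """European call price and delta under Black-Scholes."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    price = S * N(d1) - K * exp(-r * T) * N(d2)
    delta = N(d1)
    return price, delta

print(bs_call(S=100, K=100, T=0.5, r=0.05, sigma=0.2))
```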
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial least squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
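For context, the classical correction for attenuation that disattenuated regression builds on is Spearman's formula, in which the observed correlation is divided by the square root of the product of the reliabilities of the two scales:

```latex
% Spearman's correction for attenuation: r* is the correlation between true
% scores, r_{xy} the observed correlation, and \rho_{xx}, \rho_{yy} the
% reliabilities of the two measures.
r^{*}_{xy} = \frac{r_{xy}}{\sqrt{\rho_{xx}\,\rho_{yy}}}
```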
Abstract:
A new approach for controlling the size of particles fabricated using the Electrohydrodynamic Atomization (EHDA) method is being developed. In short, the EHDA process produces solution droplets in a controlled manner, and as the solvent evaporates from the surface of the droplets, polymeric particles are formed. By varying the applied voltage, the size of the droplets can be changed and, consequently, the size of the particles can also be controlled. By using both a nozzle electrode and a ring electrode placed axisymmetrically and slightly above the nozzle electrode, we are able to produce a Single Taylor Cone Single Jet over a wide range of voltages, in contrast to using a single nozzle electrode alone, where the range of permissible voltages for the creation of the Single Taylor Cone Single Jet is usually very small. Phase Doppler Particle Analyzer (PDPA) tests have shown that droplet size increases with increasing applied voltage. This trend is predicted by the electrohydrodynamic theory of the Single Taylor Cone Single Jet based on a perfect dielectric fluid model. Particles fabricated using different voltages do not show much change in particle size, which may be attributed to the solvent evaporation process. Nevertheless, these preliminary results do show that this method has the potential to provide fine control of particle size using a relatively simple method, with trends predictable by existing theories.
Abstract:
LiDAR (Light Detection and Ranging) technology, based on scanning the terrain with an airborne laser rangefinder, enables the construction of Digital Surface Models (DSM) by simple interpolation, as well as Digital Terrain Models (DTM) by identifying and removing the objects present on the terrain (buildings, bridges or trees). The Geomatics Laboratory of the Politecnico di Milano (Como campus) developed a LiDAR data filtering algorithm based on interpolation with bilinear and bicubic splines with Tychonov regularization in a least squares approach. However, in many cases more refined and complex models are still needed, in which the differentiation between buildings and vegetation becomes mandatory. This can be the case for some hydrological risk prevention models, where vegetation is not needed, or for the three-dimensional modelling of urban centres, where vegetation is a problematic factor. (...)
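As a rough sketch of least squares with Tychonov (Tikhonov) regularization (not the laboratory's actual spline-based implementation), the Python snippet below solves the damped normal equations; the design matrix A stands in for the spline basis evaluated at the LiDAR points.

```python
# Tychonov(Tikhonov)-regularized least squares: minimize
# ||A x - b||^2 + lam * ||x||^2 via the damped normal equations.
import numpy as np

def tikhonov_lstsq(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy usage with a random design matrix standing in for the spline basis.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = A @ np.ones(10) + 0.01 * rng.normal(size=50)
print(tikhonov_lstsq(A, b, lam=1e-3))
```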
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly and considerably worse than disattenuated regression.
Abstract:
Colour image segmentation based on the hue component presents some problems due to the physical process of image formation. One of these problems is colour clipping, which appears when at least one of the sensor components is saturated. We have designed a system, which works for a trained set of colours, to recover the chromatic information of those pixels in which colour has been clipped. The chromatic correction method is based on the fact that hue and saturation are invariant to uniform scaling of the three RGB components. The proposed method has been validated by means of a specific colour image processing board that allows its execution in real time. We show experimental results of the application of our method.
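A minimal demonstration of the invariance the correction relies on: scaling all three RGB components uniformly leaves hue and saturation unchanged, while only the value (intensity) scales. The snippet uses Python's standard colorsys module; the colour triple is arbitrary.

```python
# Uniformly scaling (R, G, B) preserves hue and saturation; only the value
# component changes. colorsys expects floats in [0, 1].
import colorsys

r, g, b = 0.6, 0.3, 0.1
for k in (1.0, 0.5, 0.25):   # uniform scalings of the RGB triple
    h, s, v = colorsys.rgb_to_hsv(k * r, k * g, k * b)
    print(f"k={k}: hue={h:.3f} sat={s:.3f} val={v:.3f}")
# hue and sat print identically for every k; only val changes.
```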
Abstract:
The absolute necessity of obtaining 3D information about structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at each instant, the position of the mobile robot with respect to the scene is indispensable, and this information must be obtained in the least possible computing time. Stereo vision is an attractive and widely used method, but it is rather limited for building fast 3D surface maps because of the correspondence problem. The spatial and temporal correspondence between images can be alleviated using a method based on structured light: by codifying the projected light, each imaged region of the projected pattern carries the information needed to solve the correspondence problem directly. We present the most significant coded structured light techniques used in recent years.
Abstract:
This article focuses on innate concepts: their definition, according to the linguistic work of Noam Chomsky, and the outline of a method for their study. As an introduction to the subject, some academic conceptions of concept acquisition are reviewed, and it is argued that an empirical method for the study of innate concepts is lacking. Next, the article presents the definition of such concepts that Chomsky has defended over time. Finally, it presents, in theoretical terms, the conditions for an empirical procedure for the study of innate concepts, called semantic analysis of corpus.
Abstract:
A new formulation of a pose refinement technique using "active" models is described. An error term derived from the detection of image derivatives close to an initial object hypothesis is linearised and solved by least squares. The method is particularly well suited to problems involving external geometrical constraints (such as the ground-plane constraint). We show that the method is able to recover both the pose of a rigid model and the structure of a deformable model. We report an initial assessment of the performance and cost of pose and structure recovery using the active model, in comparison with our previously reported "passive" model-based techniques, in the context of traffic surveillance. The new method is more stable and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence.
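A generic linearize-and-solve iteration of the kind described above can be sketched as a Gauss-Newton loop; the residual in the toy usage below is a stand-in for the paper's image-derivative error term, not its actual formulation.

```python
# Generic Gauss-Newton iteration for pose refinement: residuals r(p) are
# linearized about the current pose and the least-squares update is solved
# with numpy. The residual function is a stand-in, not the paper's error term.
import numpy as np

def gauss_newton(residual, jacobian, p0, iters=10):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # min ||J step + r||^2
        p = p + step
    return p

# Toy example: recover a 2D translation p = (tx, ty) aligning model points
# with observed points.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs = model + np.array([0.3, -0.2])
residual = lambda p: (model + p - obs).ravel()
jacobian = lambda p: np.tile(np.eye(2), (3, 1))
print(gauss_newton(residual, jacobian, p0=[0.0, 0.0]))
```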
Abstract:
A numerical algorithm for the biharmonic equation in domains with piecewise smooth boundaries is presented. It is intended for problems describing Stokes flow in situations where corners or cusps are formed by parts of the domain boundary and where, due to the nature of the boundary conditions on these parts of the boundary, these regions have a global effect on the shape of the whole domain and hence have to be resolved with sufficient accuracy. The algorithm combines the boundary integral equation method for the main part of the flow domain with the finite-element method, which is used to resolve the corner/cusp regions. The two parts of the solution are matched along a numerical 'internal interface' (or, as a variant, two interfaces) and are determined simultaneously by inverting a combined matrix in the course of iterations. The algorithm is illustrated with the flow configuration of 'curtain coating', a flow in which a sheet of liquid impinges onto a moving solid substrate and which is particularly sensitive to what happens in the corner region formed, physically, by the free surface and the solid boundary. The 'moving contact line problem' is addressed in the framework of an earlier developed interface formation model, which treats the dynamic contact angle as part of the solution, as opposed to its being a prescribed function of the contact line speed as in the so-called 'slip models'.
Keywords: dynamic contact angle; finite elements; free surface flows; hybrid numerical technique; Stokes equations.
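For reference, the equation being solved is the biharmonic equation, written here for the stream function of two-dimensional Stokes flow:

```latex
% Biharmonic equation for the stream function \psi of 2D Stokes flow.
\nabla^{4}\psi \equiv \nabla^{2}\!\left(\nabla^{2}\psi\right) = 0
```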
Abstract:
Diffuse reflectance spectroscopy (DRS) is increasingly being used to predict numerous soil physical, chemical and biochemical properties. However, soil properties and processes vary at different scales and, as a result, relationships between soil properties often depend on scale. In this paper we report on how the relationship between one such property, cation exchange capacity (CEC), and the DRS of the soil depends on spatial scale. We show this by means of a nested analysis of covariance of soils sampled on a balanced nested design in a 16 km × 16 km area in eastern England. We used principal components analysis on the DRS to obtain a reduced number of variables while retaining the key variation. The first principal component accounted for 99.8% of the total variance, the second for 0.14%. Nested analysis of the variation in the CEC and the two principal components showed that the substantial variance components are at the > 2000 m scale. This is probably the result of differences in soil composition due to parent material. We then developed a model to predict CEC from the DRS, using partial least squares (PLS) regression to do so. Leave-one-out cross-validation results suggested a reasonable predictive capability (R2 = 0.71 and RMSE = 0.048 molc kg−1). However, the results from the independent validation were not as good, with R2 = 0.27, RMSE = 0.056 molc kg−1 and an overall correlation of 0.52. This would indicate that DRS may not be useful for predictions of CEC. When we applied the analysis of covariance between predicted and observed values, we found significant scale-dependent correlations at scales of 50 and 500 m (0.82 and 0.73 respectively). DRS measurements can therefore be useful for predicting CEC if predictions are required, for example, at the field scale (50 m). This study illustrates that the relationship between DRS and soil properties is scale-dependent and that this scale dependency has important consequences for the prediction of soil properties from DRS data.
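A hedged sketch of the PLS-with-leave-one-out-cross-validation workflow described above, using scikit-learn; the spectra X, the response y and the number of components are all stand-ins, not the study's data or settings.

```python
# PLS regression with leave-one-out cross-validation, as in the workflow
# described above. X (n_samples x n_wavelengths) and y are synthetic
# stand-ins; the component count is illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                        # stand-in spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=40)  # stand-in CEC values

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

rmse = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = np.corrcoef(y, y_cv)[0, 1] ** 2
print(f"LOO-CV RMSE={rmse:.3f}, R^2={r2:.3f}")
```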
Abstract:
Paternity analysis based on eight microsatellite loci was used to investigate the pollen and seed dispersal patterns of the dioecious wind-pollinated tree Araucaria angustifolia. The study sites were a 5.4 ha isolated forest fragment and a small tree group situated 1.7 km away, located in Paraná State, Brazil. In the forest fragment, 121 males, 99 females, 66 seedlings and 92 juveniles were mapped and genotyped, together with 210 seeds. In the tree group, nine male and two female adults were mapped and genotyped, together with 20 seeds. Paternity analysis within the forest fragment indicated that at least 4% of the seeds, 3% of the seedlings and 7% of the juveniles were fertilized by pollen from trees in the adjacent group, and 6% of the seeds were fertilized by pollen from trees outside these stands. The average pollination distance within the forest fragment was 83 m; when the tree group was included, the pollination distance was 2,006 m. The average number of effective pollen donors was estimated as 12.6. Mother-trees within the fragment could be assigned to all seedlings and juveniles, suggesting an absence of seed immigration. The distance of seedlings and juveniles from their assigned mother-trees ranged from 0.35 to 291 m (with an average of 83 m). Significant spatial genetic structure among adult trees, seedlings and juveniles was detected up to 50 m, indicating seed dispersal over short distances. The effective pollination neighborhood ranged from 0.4 to 3.3 ha. The results suggest that seed dispersal is restricted but that there is long-distance pollen dispersal between the forest fragment and the tree group; thus, the two stands of trees are not isolated.