952 results for Three models


Relevance:

30.00%

Publisher:

Abstract:

This paper describes a study and analysis of surface normal-based descriptors for 3D object recognition. Specifically, we evaluate the behaviour of these descriptors in the recognition process using virtual models of objects created with CAD software. Later, we test them in real scenes using synthetic objects created with a 3D printer from the virtual models. In both cases, the same virtual models are used in the matching process to find similarity; the difference between the two experiments lies in the type of views used in the tests. Our analysis evaluates three aspects: the effectiveness of the 3D descriptors depending on the camera viewpoint and the geometric complexity of the model, the runtime of the recognition process, and the success rate in recognizing a view of an object among the models stored in the database.

Relevance:

30.00%

Publisher:

Abstract:

Plane model extraction from three-dimensional point clouds is a necessary step in many different applications such as planar object reconstruction, indoor mapping and indoor localization. Different RANdom SAmple Consensus (RANSAC)-based methods have been proposed for this purpose in recent years. In this study, we propose a novel RANSAC-based method called Multiplane Model Estimation, which can estimate multiple plane models simultaneously from a noisy point cloud, using knowledge extracted from a scene (or an object) in order to reconstruct it accurately. This method comprises two steps: first, it clusters the data into planar faces that preserve constraints defined by knowledge of the object (e.g., the angles between faces); second, the plane models are estimated from these data using a novel multi-constraint RANSAC. Experiments on both the clustering and the RANSAC stages showed that the proposed method outperforms state-of-the-art methods.
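The multi-constraint estimator described in this abstract builds on the classic RANSAC loop. As a point of reference only, here is a minimal single-plane RANSAC sketch in Python with NumPy; the function name, iteration count and inlier threshold are illustrative assumptions, not the paper's Multiplane Model Estimation implementation:

```python
import random
import numpy as np

def ransac_plane(points, iters=300, threshold=0.02, seed=0):
    """Fit one plane n·p + d = 0 to a noisy (N, 3) point cloud via RANSAC."""
    rng = random.Random(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        # Hypothesize a plane from a random minimal sample of three points.
        i, j, k = rng.sample(range(len(points)), 3)
        p0, p1, p2 = points[i], points[j], points[k]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (near-collinear) sample, retry
            continue
        n = n / norm
        d = -n.dot(p0)
        # Score the hypothesis by counting points within the distance threshold.
        dist = np.abs(points @ n + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

The paper's method additionally clusters points into faces and enforces inter-face angle constraints during estimation, which this vanilla sketch omits.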

Relevance:

30.00%

Publisher:

Abstract:

Statistical machine translation (SMT) is an approach to Machine Translation (MT) that uses statistical models whose parameter estimation is based on the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations it provides when trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses was trained and subsequently used for English-to-Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system was assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM) framework.
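The quantitative part of the evaluation above relies on automatic MT metrics. As an illustration of how one such measure works, below is a minimal sentence-level BLEU sketch in plain Python; the whitespace tokenization and the smoothing constant are simplifying assumptions, and the abstract does not state which three metrics the dissertation actually used:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: clipped n-gram precisions plus a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_counts = Counter(ngrams(cand, n))
        r_counts = Counter(ngrams(ref, n))
        # Clip candidate n-gram counts by their counts in the reference.
        clipped = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        total = max(sum(c_counts.values()), 1)
        # Floor at a tiny value so one empty n-gram order doesn't zero the score.
        log_prec += math.log(max(clipped, 1e-9) / total)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec / max_n)
```

In practice, corpus-level implementations (e.g., those shipped with Moses) aggregate counts over all sentences before computing the precisions, which behaves better than averaging per-sentence scores.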

Relevance:

30.00%

Publisher:

Abstract:

Reviews of the sport psychology literature have identified a number of models of athlete development in sport (Alfermann & Stambulova, 2007; Durand-Bush & Salmela, 2001). However, minimal research has investigated the origins of the knowledge from which each model was developed. The purpose of this study was to systematically examine the influential texts that provided the basis of athlete development models in sport. A citation path analysis of the sport psychology literature was used to generate a knowledge development path of seven athlete development models in sport. The analysis identified influential texts and authors in the conceptualization of athlete development. The population of 229 texts (articles, books, book chapters) was selected in two phases. Phase 1 texts were articles citing seven articles depicting models of athlete development (n = 75). Phase 2 included texts cited three or more times by Phase 1 articles (n = 154). The analysis revealed how the scholarship of Benjamin Bloom (1985) has been integrated into the field of sport psychology, and how two articles appearing in 1993 and 2003 helped shape present conceptualizations of athlete development.

Relevance:

30.00%

Publisher:

Abstract:

We report quantitative results from three brittle thrust wedge experiments, comparing numerical results directly with each other and with corresponding analogue results. We first test whether the participating codes reproduce predictions from analytical critical taper theory. Eleven codes pass the stable wedge test, showing negligible internal deformation and maintaining the initial surface slope upon horizontal translation over a frictional interface. Eight codes participated in the unstable wedge test that examines the evolution of a wedge by thrust formation from a subcritical state to the critical taper geometry. The critical taper is recovered, but the models show two deformation modes characterised by either mainly forward dipping thrusts or a series of thrust pop-ups. We speculate that the two modes are caused by differences in effective basal boundary friction related to different algorithms for modelling boundary friction. The third experiment examines stacking of forward thrusts that are translated upward along a backward thrust. The results of the seven codes that run this experiment show variability in deformation style, number of thrusts, thrust dip angles and surface slope. Overall, our experiments show that numerical models run with different numerical techniques can successfully simulate laboratory brittle thrust wedge models at the cm-scale. In more detail, however, we find that it is challenging to reproduce sandbox-type setups numerically, because of frictional boundary conditions and velocity discontinuities. We recommend that future numerical-analogue comparisons use simple boundary conditions and that the numerical Earth Science community defines a plasticity test to resolve the variability in model shear zones.

Relevance:

30.00%

Publisher:

Abstract:

Underwater video transects have become a common tool for quantitative analysis of the seafloor. However, a major difficulty remains in the accurate determination of the area surveyed, as underwater navigation can be unreliable and image scaling does not always compensate for distortions due to perspective and topography. Depending on the camera set-up and available instruments, different methods of surface measurement are applied, which makes it difficult to compare data obtained by different vehicles. 3-D modelling of the seafloor based on 2-D video data and a reference scale can be used to compute subtransect dimensions. Focussing on the length of the subtransect, the data obtained from 3-D models created with the software PhotoModeler Scanner are compared with those determined from underwater acoustic positioning (ultra short baseline, USBL) and bottom tracking (Doppler velocity log, DVL). 3-D model building and scaling was successfully conducted on all three tested set-ups, and the distortion of the reference scales due to substrate roughness was identified as the main source of imprecision. Acoustic positioning was generally inaccurate, and bottom tracking was unreliable on rough terrain. Subtransect lengths assessed with PhotoModeler were on average 20% longer than those derived from acoustic positioning, owing to the higher spatial resolution and the inclusion of slope. On a high-relief wall, bottom tracking and 3-D modelling yielded similar results. At present, 3-D modelling is the most powerful, albeit the most time-consuming, method for accurate determination of video subtransect dimensions.

Relevance:

30.00%

Publisher:

Abstract:

Authors: B.H. Johnson, R.E. Heath, B.B. Hsieh, K.W. Kim, H.L. Butler.

Relevance:

30.00%

Publisher:

Abstract:

"A few ... articles about the Dramatic museum which have appeared in American periodicals": p. 25.

Relevance:

30.00%

Publisher:

Abstract:

High-impact, localized intense rainfall episodes represent a major socio-economic problem for societies worldwide, and at the same time these events are notoriously difficult to simulate properly in climate models. Here, the authors investigate how horizontal resolution and model formulation influence this issue by applying the HARMONIE regional climate model (HCLIM) with three different setups: two using convection parameterization at 15 and 6.25 km horizontal resolution (the latter within the “grey-zone” scale), with lateral boundary conditions provided by the ERA-Interim reanalysis and integrated over a pan-European domain, and one with explicit convection at 2 km resolution (HCLIM2) over the Alpine region, driven by the 15 km model. Seven summer seasons were sampled and validated against two high-resolution observational data sets. All HCLIM versions underestimate the number of dry days and hours by 20-40% and overestimate precipitation over the Alpine ridge. Moreover, only modest added value was found at “grey-zone” resolution. However, the single most important outcome is the substantial added value of HCLIM2 compared to the coarser model versions at sub-daily time scales. It better captures the local-to-regional spatial patterns of precipitation, reflecting a more realistic representation of local and meso-scale dynamics. Further, the duration and spatial frequency of precipitation events, as well as extremes, are closer to observations. These characteristics are key ingredients in heavy rainfall events and associated flash floods, and the strong results obtained with HCLIM in a convection-permitting setting encourage further use of the model to study changes in such events in a changing climate.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Willingness-to-pay models have shown the theoretical relationships between the contingent valuation (CV), cost of illness and avertive behaviour approaches. In this paper, field survey data are used to compare the relationships between these three approaches and to demonstrate that contingent valuation bids exceed the sum of the cost of illness and avertive behaviour estimates. The estimates provide a validity check for CV bids and further support the claim that contingent valuation studies are theoretically consistent.
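The validity check described above reduces to comparing a stated CV bid against the lower bound formed by the cost-of-illness and avertive-behaviour estimates. A minimal sketch of that comparison, with entirely hypothetical figures (not taken from the paper):

```python
def cv_validity_check(cv_bid, cost_of_illness, averting_expenditure):
    """WTP theory predicts the CV bid should be at least COI + averting
    expenditure, since those two approaches capture only part of the
    total welfare loss (they omit, e.g., pain and discomfort)."""
    lower_bound = cost_of_illness + averting_expenditure
    return cv_bid >= lower_bound

# Hypothetical per-household annual figures in arbitrary currency units:
consistent = cv_validity_check(cv_bid=120.0,
                               cost_of_illness=45.0,
                               averting_expenditure=30.0)
```

A bid below the lower bound would flag the response as theoretically inconsistent rather than prove it wrong; in practice such checks are applied to sample means or distributions rather than single responses.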