11 results for Computational Geometry and Object Modelling

at Universidad de Alicante


Relevance: 100.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range-sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbour search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that distance calculation represents a significant part of the computation performed by the algorithm, any reduction in the cost of that operation is expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
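
The central idea, swapping the Euclidean metric for a cheaper one inside the closest-point search, is easy to illustrate. Below is a minimal ICP sketch in Python/numpy, assuming a brute-force correspondence search; the Manhattan and Chebyshev metrics stand in for the reduced-cost metrics analysed, and all names and parameters are illustrative rather than the paper's.

```python
import numpy as np

def closest_points(src, dst, metric="euclidean"):
    """For each source point, return the closest destination point
    under the chosen metric (brute force, O(n*m) distances)."""
    diff = src[:, None, :] - dst[None, :, :]        # shape (n, m, 3)
    if metric == "euclidean":
        d = np.linalg.norm(diff, axis=2)
    elif metric == "manhattan":                     # no squares or sqrt
        d = np.abs(diff).sum(axis=2)
    elif metric == "chebyshev":                     # a single max
        d = np.abs(diff).max(axis=2)
    else:
        raise ValueError(metric)
    return dst[d.argmin(axis=1)]

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, metric="manhattan", iters=50, tol=1e-10):
    """Basic ICP loop; only the correspondence step depends on the metric."""
    cur, prev = src.copy(), np.inf
    for _ in range(iters):
        matched = closest_points(cur, dst, metric)
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        err = np.mean(np.sum((cur - matched) ** 2, axis=1))
        if abs(prev - err) < tol:
            break
        prev = err
    return cur, err
```

Since only `closest_points` changes between metrics, any per-distance saving is multiplied by every point pair in every iteration, which is why the choice of metric dominates the overall cost.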

Relevance: 100.00%

Abstract:

Numerical modelling methodologies are important for their application to engineering and scientific problems, because there are processes for which no analytical mathematical expression can be obtained as a model. When the only available information is a set of experimental values of the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a generalized sense within that space. This approach leads to a linear system whose structure allows a fast solver algorithm, making the method a multidisciplinary tool usable in a variety of fields. The validity of the methodology is studied on two real applications: a problem in hydrodynamics, and an engineering problem involving fluids, heat and transport in an energy-generation plant. The predictive capacity of the methodology is also tested using cross-validation.
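
A one-dimensional sketch of the fitting machinery, under stated assumptions: a uniform mesh, a piecewise-linear "hat" basis, and a small stiffness (first-derivative) penalty to keep the system invertible where elements contain few samples. All names and parameters are illustrative, not the paper's.

```python
import numpy as np

def hat_basis_matrix(x, nodes):
    """Evaluate piecewise-linear 'hat' basis functions at sample points x.
    Returns A with A[i, j] = phi_j(x[i])."""
    h = nodes[1] - nodes[0]                          # uniform mesh spacing
    A = np.zeros((x.size, nodes.size))
    k = np.clip(((x - nodes[0]) // h).astype(int), 0, nodes.size - 2)
    s = (x - nodes[k]) / h                           # local coordinate in [0, 1]
    A[np.arange(x.size), k] = 1.0 - s
    A[np.arange(x.size), k + 1] = s
    return A

def galerkin_fit(x, y, nodes, lam=1e-3):
    """Least-squares (Galerkin-style) fit in the hat-function space,
    with a small stiffness penalty lam * integral(u'^2)."""
    h = nodes[1] - nodes[0]
    M = nodes.size
    A = hat_basis_matrix(x, nodes)
    # Stiffness matrix of hat functions: tridiagonal (1/h) * [-1, 2, -1],
    # with halved diagonal entries at the two boundary nodes.
    K = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h
    K[0, 0] = K[-1, -1] = 1.0 / h
    return np.linalg.solve(A.T @ A + lam * K, A.T @ y)   # nodal values

# Usage: recover a smooth trend from noisy scattered samples.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
nodes = np.linspace(0, 1, 21)
c = galerkin_fit(x, y, nodes)          # c[j] is the fitted value at nodes[j]
```

For a small mesh a dense solve suffices; the banded structure of the system is what a fast solver of the kind the abstract alludes to would exploit, via sparse or tridiagonal routines.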

Relevance: 100.00%

Abstract:

Context. Historically, supergiant (sg)B[e] stars have been difficult to include in theoretical schemes for the evolution of massive OB stars. Aims. The location of Wd1-9 within the coeval starburst cluster Westerlund 1 means that it may be placed in a proper evolutionary context; we therefore aim to use a comprehensive multiwavelength dataset to determine its physical properties and hence its relation to other sgB[e] stars and to the global population of massive evolved stars within Wd1. Methods. Multi-epoch R- and I-band VLT/UVES and VLT/FORS2 spectra are used to constrain the properties of the circumstellar gas, while an ISO-SWS spectrum covering 2.45-45 μm is used to investigate the distribution, geometry and composition of the dust via a semi-analytic irradiated disk model. Radio emission allows a long-term mass-loss history to be determined, while X-ray observations reveal the physical nature of high-energy processes within the system. Results. Wd1-9 exhibits the rich optical emission-line spectrum characteristic of sgB[e] stars. Likewise, its mid-IR spectrum resembles those of the LMC sgB[e] stars R66 and R126, revealing the presence of equatorially concentrated silicate dust with a mass of ~10^-4 M⊙. Extreme historical and ongoing mass loss (≳10^-4 M⊙ yr^-1) is inferred from the radio observations. The X-ray properties of Wd1-9 imply the presence of high-temperature plasma within the system and are directly comparable to those of a number of confirmed short-period colliding-wind binaries within Wd1. Conclusions. The most complete explanation for the observational properties of Wd1-9 is that it is a massive interacting binary currently undergoing, or recently emerged from, rapid Roche-lobe overflow, supporting the hypothesis that binarity mediates the formation of (a subset of) sgB[e] stars. The mass-loss rate of Wd1-9 is consistent with such an assertion, viable progenitor and descendant systems are present within Wd1, and comparable sgB[e] binaries have been identified in the Galaxy. Moreover, the rarity of sgB[e] stars (only two examples are identified in a census of ~68 young massive Galactic clusters and associations containing ~600 post-main-sequence stars) is explicable given the rapidity (~10^4 yr) expected for this phase of massive binary evolution.

Relevance: 100.00%

Abstract:

Array measurements have become a valuable tool for non-invasive site-response characterization. The array design, i.e. its size, geometry and number of stations, has a great influence on the quality of the results. Of these parameters, the number of available stations is usually the main limitation in field experiments, because of the economic and logistical constraints involved. Sometimes one or more stations of the initially planned array layout, carefully designed before the fieldwork campaign, do not work properly, modifying the prearranged geometry; at other times it is not possible to set up the desired layout at all for lack of stations. For a planned array layout, the number of operative stations and their arrangement therefore become a crucial point in the acquisition stage and, subsequently, in the dispersion-curve estimation. In this paper we carry out an experimental study of the minimum number of stations that still provides reliable dispersion curves for three prearranged array configurations (triangular, circular with a central station, and polygonal geometries). For the optimization study, we jointly analyze the theoretical array responses and the experimental dispersion curves obtained with the f-k method. For the f-k method, we compare the dispersion curves obtained for the original (prearranged) arrays with those obtained for the modified arrays, i.e. the curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us determine how robust the studied geometries are to station removal, and which station, or combination of stations, most degrades the array capability when unavailable. This information may be crucial for improving future array designs, by determining when the number of deployed stations can be optimized without losing reliability in the results.
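
The theoretical array response mentioned above is a standard beamforming quantity, R(k) = |Σ_j exp(i k·x_j)|² / N². A small numpy sketch, using a hypothetical circular layout with a central station; the RMS difference at the end is only a stand-in for the paper's misfit function, which is defined on dispersion curves rather than on responses.

```python
import numpy as np

def array_response(coords, kx, ky):
    """Theoretical beamforming response |sum_j exp(i k.x_j)|^2 / N^2
    of a 2-D station layout, evaluated on a wavenumber grid."""
    phase = np.exp(1j * (np.outer(kx.ravel(), coords[:, 0])
                         + np.outer(ky.ravel(), coords[:, 1])))
    r = np.abs(phase.sum(axis=1)) ** 2 / coords.shape[0] ** 2
    return r.reshape(kx.shape)

# Hypothetical circular array with a central station (one of the
# geometry families studied): 8 stations on a 25 m ring plus the centre.
n, radius = 8, 25.0
ang = 2 * np.pi * np.arange(n) / n
coords = np.vstack([[0.0, 0.0],
                    np.c_[radius * np.cos(ang), radius * np.sin(ang)]])

k = np.linspace(-0.2, 0.2, 201)            # wavenumber axis, rad/m
kx, ky = np.meshgrid(k, k)
full = array_response(coords, kx, ky)
reduced = array_response(np.delete(coords, [3, 6], axis=0), kx, ky)

# RMS difference between full and station-depleted responses.
misfit = np.sqrt(np.mean((full - reduced) ** 2))
print(misfit)
```

Sweeping over every combination of removed stations and recording the misfit reproduces, in miniature, the kind of robustness analysis the paper performs.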

Relevance: 100.00%

Abstract:

In this paper we present a complete system for the treatment of both the geographical and the temporal dimension in text, and its application to information retrieval. The system has been evaluated in the GeoTime task of the 8th and 9th NTCIR workshops (2010 and 2011, respectively), making it possible to compare it with contemporary approaches to the topic. In order to participate in this task we added the temporal dimension to our GIR system. The system has a modular architecture, so that features can be added or modified easily. In its development we followed a QA-based approach and used multiple search engines to improve performance.
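
The abstract does not detail the implementation, but the core of a GeoTime-style query ("when and where did X happen?") is the intersection of a geographic constraint with a temporal one. Purely as an illustration, with all data and names hypothetical:

```python
from datetime import date

# Toy documents already annotated with a normalized place and date,
# as a geo-temporal IR system might store them after entity recognition.
docs = [
    {"id": 1, "text": "Earthquake strikes Alicante region",
     "place": "Alicante", "date": date(2010, 5, 3)},
    {"id": 2, "text": "Festival held in Valencia",
     "place": "Valencia", "date": date(2011, 7, 1)},
]

def geotime_filter(docs, place, start, end):
    """Keep documents satisfying both the geographic and the temporal
    constraint, the intersection GeoTime-style queries require."""
    return [d for d in docs
            if d["place"] == place and start <= d["date"] <= end]

print(geotime_filter(docs, "Alicante", date(2010, 1, 1), date(2010, 12, 31)))
```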

Relevance: 100.00%

Abstract:

We have investigated the influence of electrode material and crystallographic structure on electron transfer and biofilm formation in Geobacter sulfurreducens. Single-crystal gold (Au(110), Au(111), Au(210)) and platinum (Pt(100), Pt(110), Pt(111), Pt(210)) electrodes were tested and compared with graphite rods. G. sulfurreducens interacts electrochemically with all of these materials, with different attachment kinetics and final current production, although the redox species involved in electron transfer to the anode are virtually the same in all cases. Initial bacterial colonization was fastest on graphite, up to the monolayer level, whereas gold electrodes led to higher final current densities. Crystal geometry was shown to have an important influence, with Au(210) sustaining a steady-state current density of up to 1442 (±101) μA cm^-2, over Au(111) with 961 (±94) μA cm^-2 and Au(110) with 944 (±89) μA cm^-2. The platinum electrodes, on the other hand, displayed the lowest performance, including Pt(210). Our results indicate that both crystal geometry and electrode material are key parameters for the efficient interaction of bacteria with the substrate, and should be considered in the design of novel materials and microbial devices to optimize energy production.

Relevance: 100.00%

Abstract:

The theory and methods of linear algebra are a useful alternative to those of convex geometry in the framework of Voronoi cells and diagrams, which are basic tools of computational geometry. As shown by Voigt and Weis in 2010, the Voronoi cells of a given set of sites T, which provide a tessellation of the space (the Voronoi diagram) when T is finite, are the solution sets of linear inequality systems indexed by T. This paper systematically exploits this fact to obtain geometrical information on Voronoi cells from sets associated with T (convex and conical hulls, tangent cones, and the characteristic cones of their linear representations). The particular cases of T being a curve, a closed convex set, or a discrete set are analyzed in detail. We also include conclusions on Voronoi diagrams of arbitrary sets.
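
The linear systems in question are easy to write down: x lies in the cell of site s exactly when ‖x - s‖² ≤ ‖x - t‖² for every t in T, which simplifies to 2(t - s)·x ≤ ‖t‖² - ‖s‖². A small sketch for finite T, with illustrative names and numpy assumed:

```python
import numpy as np

def voronoi_cell_inequalities(sites, i):
    """Halfspace representation A x <= b of the Voronoi cell of sites[i]:
    ||x - s||^2 <= ||x - t||^2  <=>  2 (t - s) . x <= ||t||^2 - ||s||^2."""
    s = sites[i]
    others = np.delete(sites, i, axis=0)
    A = 2.0 * (others - s)
    b = (others ** 2).sum(axis=1) - (s ** 2).sum()
    return A, b

def in_cell(x, A, b, tol=1e-12):
    """Membership test for the (closed, convex) cell."""
    return bool(np.all(A @ x <= b + tol))

sites = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
A, b = voronoi_cell_inequalities(sites, 0)
print(in_cell(np.array([0.4, 0.4]), A, b))   # True: closest site is sites[0]
print(in_cell(np.array([1.6, 0.1]), A, b))   # False: closer to sites[1]
```

For infinite T (a curve or a closed convex set, as in the paper), the same inequalities hold for all t in T, and it is the structure of that infinite system that the linear-algebra machinery exploits.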

Relevance: 100.00%

Abstract:

Different kinds of algorithms can be chosen to compute elementary functions. Among them, the shift-and-add algorithms are worth mentioning because they were specifically designed to be very simple and to save computing resources: virtually the only operations involved are additions and shifts, which can be performed easily and efficiently by a digital processor. Shift-and-add algorithms achieve fairly good precision with low-cost iterations. The most famous algorithm of this type is CORDIC, which can approximate a wide variety of functions with only a slight change in its iterations. In this paper, we analyze the requirements of some engineering and industrial problems in terms of the type of operands and the functions to approximate. We then propose the application of shift-and-add algorithms based on CORDIC to these problems, and compare the different methods in terms of the precision of the results and the number of iterations required.
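
For concreteness, here is a floating-point sketch of rotation-mode CORDIC computing sine and cosine; in hardware the multiplications by 2^-i become right shifts of fixed-point registers, leaving only adds, shifts and a small arctangent table. The iteration count and test angle are illustrative.

```python
import math

def cordic_sincos(theta, n_iters=32):
    """Rotation-mode CORDIC: approximate (cos(theta), sin(theta)) for
    theta in [-pi/2, pi/2] by successive micro-rotations of angle
    atan(2^-i), each implementable with adds and shifts."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]  # lookup table
    # Aggregate gain of the micro-rotations, precomputed once:
    # K = prod_i 1 / sqrt(1 + 2^-2i); starting at (K, 0) cancels it.
    K = 1.0
    for i in range(n_iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(n_iters):
        d = 1.0 if z >= 0 else -1.0        # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y                             # (cos(theta), sin(theta))

c, s = cordic_sincos(0.5)
print(c - math.cos(0.5), s - math.sin(0.5))  # both errors are tiny (~1e-10)
```

Changing the rotation rule and the table (hyperbolic or linear variants) yields the other functions CORDIC families approximate, which is the flexibility the abstract refers to.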

Relevance: 100.00%

Abstract:

The aim of the present study is to identify and evaluate the relationship between Woodpigeon (Columba palumbus, Linnaeus, 1758) density and different environmental gradients (thermotype, ombrotype, continentality and latitude), land use and landscape structure, using geographic information systems and multivariate modelling. Transects (n = 396) were carried out to estimate Woodpigeon density in the Marina Baja (Alicante, Spain) from 2006 to 2008. The highest Woodpigeon density occurred in September-October (1.28 birds/10 ha) and the lowest in February-March (0.34 birds/10 ha). Moreover, there were more Woodpigeons in areas with a mesomediterranean thermotype than in thermomediterranean or supramediterranean ones, and density was greater in the intermediate zones than on the coast or in the interior. The natural or cultural landscape had the highest Woodpigeon density (1.53 birds/10 ha), with both dense and clear pine forest standing out. It is therefore very important to conserve these traditional landscapes with adequate management strategies in order to maintain resident and transient Woodpigeon populations. These natural areas are open places where Woodpigeons find food and can detect the presence of predators. This study thus enables more precise knowledge of the ecological factors (habitat variables) that shape the distribution and density of Woodpigeon populations.
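
Densities of this kind are typically derived from strip transects, where D = n / (2 L w) for n birds counted within a half-width w along a walked line of length L. A minimal sketch of that arithmetic; the count, length and half-width below are hypothetical, chosen only so the output matches the reported peak of 1.28 birds/10 ha:

```python
def strip_density_per_10ha(n_birds, length_m, half_width_m):
    """Strip-transect density estimate, expressed in birds per 10 ha."""
    area_ha = 2.0 * half_width_m * length_m / 10_000.0   # strip area in ha
    return 10.0 * n_birds / area_ha

# E.g. 16 birds over 25 km of transect with a 25 m half-width:
print(strip_density_per_10ha(16, 25_000.0, 25.0))        # 1.28 birds/10 ha
```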

Relevance: 100.00%

Abstract:

Evacuation route planning is a fundamental task in building engineering projects. Safety regulations are established so that, in an emergency, all occupants can be brought out of a building to a secure place in time. For example, the Spanish building code requires the planning of evacuation routes in large and, usually, public buildings. Engineers often plan these routes for single building projects by repeatedly assigning clusters of rooms to each emergency exit in a trial-and-error process, but problems arise in building complexes, where changes in distribution and use make visual analysis cumbersome and sometimes unfeasible. This problem can be solved with well-known spatial analysis techniques, implemented as specialized software able to partially emulate the engineer's reasoning. In this paper we propose and test an easily reproducible methodology that uses free and open-source software components, running a complete test on a building floor at the University of Alicante (Spain). This institution offers a web service (WFS) that allows the retrieval of 2D geometries for any building on its campus. We demonstrate how geospatial technologies and computational-geometry algorithms can be used to automate the creation and optimization of evacuation routes. In our case study, the engineers' task is to verify that the occupant load assigned to each emergency exit does not exceed the limits specified by Spain's current regulations. Using Dijkstra's algorithm, we obtain the shortest path from every room to the most appropriate emergency exit. Once these paths are calculated, engineers can run simulations and validate different cluster configurations based on path statistics. The techniques and tools applied in this research would be helpful in the design and risk-management phases of any complex building project.
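
The routing step reduces to shortest paths on a weighted graph of rooms, corridor nodes and exits. A compact sketch, using a multi-source variant of Dijkstra's algorithm seeded at every exit, so that a single pass yields each room's distance to its nearest exit (the floor graph below is hypothetical):

```python
import heapq

def dijkstra_to_exits(graph, exits):
    """Multi-source Dijkstra: distance from every node to its nearest
    emergency exit, over a weighted adjacency dict
    graph[node] = [(neighbor, edge_length_m), ...]."""
    dist = {node: float("inf") for node in graph}
    for e in exits:                          # seed all exits at distance 0
        dist[e] = 0.0
    heap = [(0.0, e) for e in exits]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                      # stale queue entry
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical floor graph: two rooms, a hallway node and two exits.
graph = {
    "room1": [("hall", 8.0)],
    "room2": [("hall", 5.0)],
    "hall":  [("room1", 8.0), ("room2", 5.0), ("exitA", 12.0), ("exitB", 20.0)],
    "exitA": [("hall", 12.0)],
    "exitB": [("hall", 20.0)],
}
print(dijkstra_to_exits(graph, ["exitA", "exitB"]))
```

Tracking the predecessor of each node during the relaxation step additionally recovers which exit serves each room, which is the room-to-exit assignment whose load statistics the engineers then check against the regulations.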

Relevance: 100.00%

Abstract:

A suitable knowledge of the orientation and motion of the Earth in space is a common need in various fields. It has always been necessary for astronomical observations but, with the advent of the space age, it became essential for observing satellites and predicting and determining their orbits, and for observing the Earth from space as well. Given the relevant role it plays in Space Geodesy, Earth rotation is considered one of the three pillars of Geodesy, the other two being geometry and gravity. Research on Earth rotation has, moreover, fostered advances in fields such as Mathematics, Astronomy and Geophysics for centuries. One remarkable feature of the problem is the extreme accuracy required in the near future: roughly speaking, about a millimetre on the tangent plane to the planet's surface. That challenges all of the theories devised and used to date. This paper gives a short review of some of the most relevant methods, which can be viewed as milestones in Earth rotation research, emphasizing the Hamiltonian approach developed by the authors. Some contemporary problems are presented, together with the main lines of future research identified by the International Astronomical Union / International Association of Geodesy Joint Working Group on Theory of Earth Rotation, created in 2013.