3 results for Acceleration data structure
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This work presents the analysis of a retaining wall designed for the basement of a residential building located in Natal/RN, consisting of a spaced pile wall anchored by tiebacks in sand. The structure was instrumented in order to measure the wall's horizontal movements and the load distribution along the anchor fixed length. The horizontal movements were measured with an inclinometer, and the loads in the anchors were measured with strain gages installed at three points along the anchor fixed length. Displacement measurements were taken right after the execution of each stage of the building and right after its conclusion, while the anchor loads were measured during the performance test, at the moment of lock-off, and also right after the conclusion of the building. Velocity and acceleration data for the wall were derived from the displacement data. It was found that the time elapsed in the installation of the bracing was decisive for the magnitude of the displacements. The maximum horizontal displacement of the wall ranged between 0.18% and 0.66% of the final excavation depth. The anchor loads reduced strongly up to approximately half of the anchor fixed length, following an exponential distribution. Furthermore, a loss of load in the anchors over time was observed, reaching 50% in one of them.
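The abstract mentions deriving wall velocity and acceleration from successive displacement readings. One common way to do this, shown here as a hypothetical sketch (the monitoring dates, values, and function name are all invented, not taken from the thesis), is finite differences between monitoring dates:

```python
# Hypothetical sketch (names and numbers invented, not from the thesis):
# deriving wall velocity and acceleration from successive displacement
# readings by finite differences between monitoring dates.

def finite_differences(times, values):
    """Forward differences of `values` sampled at `times`."""
    return [
        (values[i + 1] - values[i]) / (times[i + 1] - times[i])
        for i in range(len(values) - 1)
    ]

# Horizontal displacement (mm) at successive monitoring days (made up).
days = [0.0, 10.0, 20.0, 30.0]
displacement_mm = [0.0, 2.0, 3.0, 3.5]

velocity = finite_differences(days, displacement_mm)    # mm/day
acceleration = finite_differences(days[:-1], velocity)  # mm/day^2
print(velocity, acceleration)
```

With this toy data, the decreasing velocities and negative accelerations would indicate a wall whose movement is slowing down over time.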
Abstract:
We revisit the visibility problem: determining the set of primitives potentially visible in geometry data represented by a data structure such as a mesh of polygons or triangles. We propose a solution for speeding up three-dimensional visualization in applications, introducing a lean structure, in the sense of data abstraction and reduction, that can be used in online and interactive applications. The visibility problem is especially important in the 3D visualization of scenes represented by large volumes of data, when it is not worthwhile to keep all polygons of the scene in memory: doing so implies more time spent in rendering, or may even be impossible for huge volumes of data. In these cases, given a viewing position and direction, the main objective is to determine and load a minimum amount of primitives (polygons) of the scene in order to accelerate the rendering step. For this purpose, our algorithm performs primitive culling using a hybrid paradigm based on three known techniques. The scene is divided into a cell grid, each cell is associated with the primitives that belong to it, and finally the set of potentially visible primitives is determined. The novelty is the use of the Ja1 triangulation to create the subdivision grid. We chose this structure because of its relevant characteristics of adaptivity and algebraic simplicity (ease of calculations). The results show a substantial improvement over the traditional methods when applied separately. The method introduced in this work can be used on devices with low or no dedicated processing power, and also to view data over the Internet, as in virtual museum applications.
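As a rough illustration of the grid-based culling idea described above, here is a minimal 2D sketch. A uniform cell grid stands in for the Ja1 triangulation (which is adaptive and more involved), and "potentially visible" is approximated by a simple view half-space test; all names and coordinates are invented for illustration:

```python
# Minimal sketch of grid-based visibility culling: the scene is divided
# into a uniform cell grid (a simplification; the thesis uses the Ja1
# triangulation), primitives are binned into cells, and only primitives
# in cells roughly in front of the viewer are kept as the potentially
# visible set.
import math

CELL = 10.0  # cell size (arbitrary for this sketch)

def cell_of(point):
    x, y = point
    return (int(math.floor(x / CELL)), int(math.floor(y / CELL)))

def build_grid(primitives):
    """Map each cell to the primitives whose centroid falls inside it."""
    grid = {}
    for prim in primitives:
        cx = sum(p[0] for p in prim) / len(prim)
        cy = sum(p[1] for p in prim) / len(prim)
        grid.setdefault(cell_of((cx, cy)), []).append(prim)
    return grid

def potentially_visible(grid, eye, view_dir):
    """Keep primitives in cells whose center lies in the view half-space."""
    visible = []
    for (i, j), prims in grid.items():
        center = ((i + 0.5) * CELL, (j + 0.5) * CELL)
        to_cell = (center[0] - eye[0], center[1] - eye[1])
        if to_cell[0] * view_dir[0] + to_cell[1] * view_dir[1] > 0:
            visible.extend(prims)
    return visible

# Two triangles: one ahead of the viewer (+x direction), one behind.
tri_ahead = [(25.0, 5.0), (28.0, 5.0), (26.0, 8.0)]
tri_behind = [(-25.0, 5.0), (-28.0, 5.0), (-26.0, 8.0)]
grid = build_grid([tri_ahead, tri_behind])
pvs = potentially_visible(grid, eye=(0.0, 5.0), view_dir=(1.0, 0.0))
print(len(pvs))  # only the triangle in front survives culling
```

The point of the cell grid is that culling decisions are made per cell rather than per primitive, so the rejected half of the scene is discarded without touching its individual triangles.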
Abstract:
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and of computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for localization systems in robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a previously calibrated camera as positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
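A visual odometry pipeline of the kind described ultimately chains frame-to-frame relative motions into a global camera pose. The following toy 2D version shows only that composition step (all motions and names are invented; the thesis works with full 3D poses estimated from image features):

```python
# Toy 2D pose chaining: each frame-to-frame estimate is a rotation
# `dtheta` and a translation (dx, dy) expressed in the previous camera
# frame; composing them yields the camera's global pose, as the final
# stage of a visual-odometry pipeline would.
import math

def compose(pose, delta):
    """Apply relative motion `delta` = (dtheta, dx, dy) to global `pose`."""
    theta, x, y = pose
    dtheta, dx, dy = delta
    # Rotate the local translation into the global frame, then accumulate.
    gx = x + dx * math.cos(theta) - dy * math.sin(theta)
    gy = y + dx * math.sin(theta) + dy * math.cos(theta)
    return (theta + dtheta, gx, gy)

# Four relative motions: move 1 unit forward, turn 90 degrees, repeat.
deltas = [(math.pi / 2, 1.0, 0.0)] * 4
pose = (0.0, 0.0, 0.0)  # (heading, x, y), starting at the origin
for d in deltas:
    pose = compose(pose, d)

# After a full square the camera returns (numerically) to the origin.
print(pose)
```

Because each global pose is built only from the chain of relative estimates, any error in one frame-to-frame motion propagates to all later poses, which is why the thesis compares the estimated trajectory against ground-truth localization data from a robotic platform.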