932 results for Computer Graphics
Abstract:
In this paper, the compression of multispectral images is addressed. Such 3-D data are characterized by a high correlation across the spectral components. The efficiency of the state-of-the-art wavelet-based coder 3-D SPIHT is considered. Although the 3-D SPIHT algorithm provides the obvious way to process a multispectral image as a volumetric block and, consequently, maintain the attractive properties exhibited in 2-D (excellent performance, low complexity, and embeddedness of the bit-stream), its 3-D tree structure is shown to be poorly suited to multispectral images in the 3-D discrete wavelet transform (DWT) domain. Because each parent has eight children in the 3-D structure, the list of insignificant sets (LIS) and the list of insignificant pixels (LIP) grow considerably, since the partitioning of any set produces eight subsets that are all processed in the same way during the sorting pass. Thus, a significant portion of the overall bit budget is wasted sorting insignificant information. Through an analysis of the results, we demonstrate that a straightforward 2-D SPIHT technique, suitably adjusted to maintain rate scalability and carried out in the 3-D DWT domain, overcomes this weakness. In addition, a new SPIHT-based scalable multispectral image compression algorithm exploits, in its initial iterations, the redundancies within each group of two consecutive spectral bands. Numerical experiments on a number of multispectral images show that the proposed scheme provides significant improvements over related works.
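As a rough illustration of the bookkeeping issue described above (a minimal sketch, not taken from the paper), the snippet below contrasts the number of offspring per parent, and hence the number of subsets produced per partitioning step, in 2-D versus 3-D SPIHT trees:

```python
# Minimal sketch (not from the paper): counts how many subsets one
# set-partitioning step produces in 2-D versus 3-D SPIHT trees, which is
# what inflates the LIS/LIP in the volumetric coder.

def offspring_count(dims):
    """Each spatial dimension is split in two, so a parent has 2**dims children."""
    return 2 ** dims

def sets_after_partitions(dims, levels):
    """Number of subsets generated after `levels` successive partitioning steps."""
    return offspring_count(dims) ** levels

if __name__ == "__main__":
    for dims in (2, 3):
        print(f"{dims}-D tree: {offspring_count(dims)} offspring per parent, "
              f"{sets_after_partitions(dims, 3)} subsets after 3 partition levels")
```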
Abstract:
The process of making replicas of heritage objects has traditionally been carried out by public agencies, corporations and museums, and is not commonly used in schools. Technologies now exist that allow cheap replicas to be created: new photograph-based 3D reconstruction software and low-cost 3D printers make it possible to produce replicas at a much lower cost than traditional methods. This article describes the process of creating replicas of the sculpture Goslar Warrior by the artist Henry Moore, located in Santa Cruz de Tenerife. First, a digital model was created using the Autodesk Recap 360, Autodesk 123D Catch, Autodesk Meshmixer and MakerBot MakerWare applications. The physical replica was then produced in polylactic acid (PLA) on a MakerBot Replicator 2 3D printer. In addition, a cost analysis is included that compares, on the one hand, the printer mentioned above and, on the other, both online and local 3D printing services. Finally, a specific activity was carried out with 141 students and 12 high school teachers, who completed a questionnaire about the use of sculptural replicas in education.
Abstract:
Improving performance in sports requires a better understanding of the perception-action loop employed by athletes. Because of its inherent limitations, video playback doesn't permit this type of in-depth analysis. Interactive, immersive virtual reality can overcome these limitations and foster a better understanding of sports performance.
Abstract:
The motivation for this paper is to present procedures for automatically creating idealised finite element models from the 3D CAD solid geometry of a component. The procedures produce an accurate and efficient analysis model with little effort on the part of the user. The technique is applicable to thin walled components with local complex features and automatically creates analysis models where 3D elements representing the complex regions in the component are embedded in an efficient shell mesh representing the mid-faces of the thin sheet regions. As the resulting models contain elements of more than one dimension, they are referred to as mixed dimensional models. Although these models are computationally more expensive than some of the idealisation techniques currently employed in industry, they do allow the structural behaviour of the model to be analysed more accurately, which is essential if appropriate design decisions are to be made. Also, using these procedures, analysis models can be created automatically whereas the current idealisation techniques are mostly manual, have long preparation times, and are based on engineering judgement. In the paper the idealisation approach is first applied to 2D models that are used to approximate axisymmetric components for analysis. For these models 2D elements representing the complex regions are embedded in a 1D mesh representing the midline of the cross section of the thin sheet regions. Also discussed is the coupling, which is necessary to link the elements of different dimensionality together. Analysis results from a 3D mixed dimensional model created using the techniques in this paper are compared to those from a stiffened shell model and a 3D solid model to demonstrate the improved accuracy of the new approach. At the end of the paper a quantitative analysis of the reduction in computational cost due to shell meshing thin sheet regions demonstrates that the reduction in degrees of freedom is proportional to the square of the aspect ratio of the region, and for long slender solids, the reduction can be proportional to the aspect ratio of the region if appropriate meshing algorithms are used.
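To make the quoted scaling concrete, here is a rough back-of-the-envelope sketch (the meshing rules, element sizes and node counts are illustrative assumptions, not the paper's data) comparing the degrees of freedom of a solid-meshed thin sheet with those of a mid-surface shell mesh:

```python
# Rough, illustrative sketch (assumptions are mine, not the paper's data):
# a square thin sheet of side L and thickness t. Solid elements are kept
# near-cubic at size ~t, so the 3-D mesh needs ~(L/t)^2 elements in-plane;
# the shell mesh uses a fixed n x n in-plane grid on the mid-surface.
# Under these assumptions the DOF ratio grows with the square of the
# aspect ratio L/t, matching the trend quoted in the abstract.

def solid_dof(L, t):
    n_inplane = L / t                     # elements per side, ~cubic elements of size t
    nodes = (n_inplane + 1) ** 2 * 2      # two node layers through the thickness
    return 3 * nodes                      # 3 translational DOFs per node

def shell_dof(n):
    nodes = (n + 1) ** 2
    return 6 * nodes                      # 3 translations + 3 rotations per node

if __name__ == "__main__":
    for aspect in (10, 50, 100):
        L, t, n = float(aspect), 1.0, 10
        print(f"aspect ratio {aspect:>3}: DOF reduction ~ {solid_dof(L, t) / shell_dof(n):.1f}x")
```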
Abstract:
HIV-1 integrase (IN) has become an attractive target since drug resistance against HIV-1 reverse transcriptase (RT) and protease (PR) has appeared. Diketo acid (DKA) inhibitors are potent and selective inhibitors of HIV-1 IN; however, their mechanism of action is not well understood. Here, to study the inhibition mechanism of DKAs, we performed 10 ns comparative molecular dynamics simulations on HIV-1 IN bound with three of the most representative DKA inhibitors: the Shionogi inhibitor S-1360 and two Merck inhibitors, L-731,988 and L-708,906. Our simulations show that the acidic part of S-1360 formed a salt bridge and cation-pi interactions with Lys159. In addition, in the S-1360 complex the catalytic Glu152 was pushed away from the active site to form an ion-pair interaction with Arg199. The Merck inhibitors can maintain either one or both of these ion-pair interaction features. The difference in potencies of the DKA inhibitors is thus attributed to their different binding modes at the catalytic site. Such structural information at the atomic level not only demonstrates the modes of action of DKA inhibitors but also provides a novel starting point for structure-based design of HIV-1 IN inhibitors.
Abstract:
This paper presents an approach which enables new parameters to be added to a CAD model for optimization purposes. It aims to remove a common roadblock to CAD-based optimization, where the parameterization of the model does not offer the shape sufficient flexibility for a truly optimized shape to be created. A technique has been developed which uses adjoint-based sensitivity maps to predict the sensitivity of performance to the addition to a model of four different feature types, allowing the feature providing the greatest benefit to be selected. The optimum position at which to add the feature is also discussed. It is anticipated that the approach could be used to iteratively add features to a model, providing greater flexibility to the shape of the model and allowing the newly added parameters to be used as design variables in a subsequent shape optimization.
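A hypothetical sketch of the selection step just described (the feature names, weights and scoring rule below are illustrative assumptions, not the paper's method): each candidate feature type is scored against an adjoint sensitivity map and the feature and location with the largest predicted benefit are picked.

```python
# Hypothetical sketch: rank candidate features by a benefit predicted from an
# adjoint sensitivity map. All names and numbers here are made up for illustration.

from dataclasses import dataclass

@dataclass
class Candidate:
    feature_type: str      # e.g. a pad, pocket, rib or bump added to the CAD model
    position: int          # index of the candidate surface location
    predicted_gain: float  # benefit predicted from the adjoint sensitivity map

def best_feature(sensitivity_map, feature_weights):
    """Pick the feature type and position with the largest predicted benefit.

    feature_weights maps each feature type to a (hypothetical) factor describing
    how strongly that feature can exploit the local sensitivity.
    """
    best = None
    for ftype, weight in feature_weights.items():
        for pos, s in enumerate(sensitivity_map):
            gain = weight * abs(s)
            if best is None or gain > best.predicted_gain:
                best = Candidate(ftype, pos, gain)
    return best

if __name__ == "__main__":
    sensitivities = [0.02, -0.31, 0.07, 0.54, -0.11]              # made-up adjoint samples
    weights = {"pad": 1.0, "pocket": 0.8, "rib": 1.2, "bump": 0.6}
    print(best_feature(sensitivities, weights))
```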
Abstract:
The finite element method plays an extremely important role in forging process design as it provides a valid means to quantify forging errors and thereby govern die shape modification to improve the dimensional accuracy of the component. However, this dependency on process simulation could raise significant problems and present a major drawback if the finite element simulation results were inaccurate. This paper presents a novel approach to assess the dimensional accuracy and shape quality of aeroengine blades formed from finite element hot-forging simulation. The proposed virtual inspection system uses conventional algorithms adopted by modern coordinate measurement processes as well as the latest free-form surface evaluation techniques to provide a robust framework for virtual forging error assessment. Established techniques for the physical registration of real components have been adapted to localise virtual models in relation to a nominal Design Coordinate System. Blades are then automatically analysed using a series of intelligent routines to generate measurement data and compute dimensional errors. The results of a comparison study indicate that the virtual inspection results and actual coordinate measurement data are highly comparable, validating the approach as an effective and accurate means to quantify forging error in a virtual environment. Consequently, this provides adequate justification for the implementation of the virtual inspection system in the virtual process design, modelling and validation of forged aeroengine blades in industry.
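As a generic illustration of the localisation step (a minimal sketch using standard best-fit rigid registration, not the paper's specific routines), the snippet below aligns simulated blade surface points to nominal points and reports point-wise deviations as a stand-in for the virtual measurement:

```python
# Minimal sketch (generic Kabsch/SVD best-fit registration, not the paper's
# routines): rigidly align simulated points to nominal CAD points, then report
# point-wise deviations.

import numpy as np

def rigid_register(measured, nominal):
    """Return rotation R and translation t minimising ||R @ measured_i + t - nominal_i||."""
    mc, nc = measured.mean(axis=0), nominal.mean(axis=0)
    H = (measured - mc).T @ (nominal - nc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = nc - R @ mc
    return R, t

def deviations(measured, nominal):
    R, t = rigid_register(measured, nominal)
    aligned = measured @ R.T + t
    return np.linalg.norm(aligned - nominal, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nominal = rng.random((200, 3))                   # stand-in for nominal surface points
    angle = np.deg2rad(5.0)
    R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0, 0.0, 1.0]])
    measured = nominal @ R.T + np.array([1.0, -2.0, 0.5])   # simulated, mis-located points
    print("max deviation after registration:", deviations(measured, nominal).max())
```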
Abstract:
Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer to discover parallelism. The programmer hand-picks the code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.
This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the hypothesis that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach of tracking whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach discovers higher degrees of parallelism, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
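For context, the sketch below (illustrative only, not the proposed tool) shows the kind of coarse-grain pipeline parallelism in an outer loop that such profiling is meant to expose: whole records flow between three concurrent stages once their whole-data-structure dependencies are known.

```python
# Illustrative sketch of coarse-grain pipeline parallelism in an outer loop:
# each iteration reads a whole record, transforms it, and writes it, and the
# three stages run concurrently once the record-level dependencies are known.

import queue
import threading

def pipeline(records, transform, emit):
    q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)

    def stage_read():
        for r in records:
            q1.put(r)
        q1.put(None)                     # end-of-stream marker

    def stage_transform():
        while (r := q1.get()) is not None:
            q2.put(transform(r))
        q2.put(None)

    def stage_write():
        while (r := q2.get()) is not None:
            emit(r)

    threads = [threading.Thread(target=f) for f in (stage_read, stage_transform, stage_write)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    out = []
    pipeline(range(10), lambda x: x * x, out.append)
    print(out)
```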
Abstract:
Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
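A minimal sketch of this kind of XOR-based randomisation function (the bit positions below are arbitrary examples, not the functions produced by the paper's methodology):

```python
# Minimal sketch: each randomised set-index bit is the XOR of a small subset
# of the original address bits. The chosen bit positions are illustrative only.

def bit(addr, i):
    return (addr >> i) & 1

def xor_index(addr, xor_inputs):
    """xor_inputs[k] lists the address-bit positions XORed to form index bit k."""
    index = 0
    for k, inputs in enumerate(xor_inputs):
        b = 0
        for i in inputs:
            b ^= bit(addr, i)
        index |= b << k
    return index

if __name__ == "__main__":
    # Example for a 2^4-set cache: each of the 4 index bits XORs at most 2 of
    # m+2 = 6 (hypothetically chosen) address bits at positions 6..11.
    xor_inputs = [(6, 10), (7, 11), (8, 10), (9, 11)]
    for addr in (0x1A40, 0x2A40, 0x3F80):
        print(f"address {addr:#06x} -> set {xor_index(addr, xor_inputs)}")
```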
Abstract:
In this paper, a novel framework for dense pixel matching based on dynamic programming is introduced. Unlike most techniques proposed in the literature, our approach assumes neither known camera geometry nor the availability of rectified images. Under such conditions, the matching task cannot be reduced to finding correspondences between a pair of scanlines. We propose to extend existing dynamic programming methodologies to a higher-dimensional space by using a 3D scoring matrix so that correspondences between a line and a whole image can be calculated. After assessing our framework on a standard evaluation dataset of rectified stereo images, experiments are conducted on unrectified and non-linearly distorted images. Results validate our new approach and reveal the versatility of our algorithm.
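For context, here is a simplified sketch of the classic 2-D special case, scanline-to-scanline dynamic programming matching (the paper's contribution is extending the scoring matrix to 3-D so that a line can be matched against a whole image rather than a single corresponding scanline):

```python
# Simplified sketch of scanline-to-scanline dynamic programming matching
# (the 2-D special case; occlusion cost and intensities are illustrative).

import numpy as np

def dp_match(line_a, line_b, occlusion_cost=0.5):
    """Minimum cost of matching two intensity scanlines with skips allowed."""
    n, m = len(line_a), len(line_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = occlusion_cost * np.arange(m + 1)
    cost[:, 0] = occlusion_cost * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1, j - 1] + abs(line_a[i - 1] - line_b[j - 1])
            skip_a = cost[i - 1, j] + occlusion_cost    # pixel of line_a left unmatched
            skip_b = cost[i, j - 1] + occlusion_cost    # pixel of line_b left unmatched
            cost[i, j] = min(match, skip_a, skip_b)
    return cost[n, m]

if __name__ == "__main__":
    a = np.array([10, 10, 40, 80, 80, 10], dtype=float) / 100
    b = np.array([10, 40, 80, 80, 10, 10], dtype=float) / 100
    print("matching cost:", dp_match(a, b))
```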