905 results for Interactive Computer Graphics, Maya 3D, Unity 3D.
Abstract:
Improving performance in sports requires a better understanding of the perception-action loop employed by athletes. Because of its inherent limitations, video playback does not permit this kind of in-depth analysis. Interactive, immersive virtual reality can overcome these limitations and foster a better understanding of sports performance.
Abstract:
HIV-1 integrase (IN) has become an attractive target since drug resistance against HIV-1 reverse transcriptase (RT) and protease (PR) has appeared. Diketo acid (DKA) inhibitors are potent and selective inhibitors of HIV-1 IN; however, their mechanism of action is not well understood. Here, to study the inhibition mechanism of DKAs, we performed 10 ns comparative molecular dynamics simulations of HIV-1 IN bound to three of the most representative DKA inhibitors: the Shionogi inhibitor S-1360 and two Merck inhibitors, L-731,988 and L-708,906. Our simulations show that the acidic part of S-1360 formed a salt bridge and cation-pi interactions with Lys159. In addition, in the S-1360 complex the catalytic Glu152 was pushed away from the active site to form an ion-pair interaction with Arg199. The Merck inhibitors can maintain either one or both of these ion-pair interaction features. The difference in potencies of the DKA inhibitors is thus attributed to their different binding modes at the catalytic site. Such atomic-level structural information not only demonstrates the action modes of DKA inhibitors but also provides a novel starting point for structure-based design of HIV-1 IN inhibitors.
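The salt-bridge analysis mentioned above reduces, per trajectory frame, to a simple distance criterion. The sketch below is illustrative only and is not the authors' analysis code: the atom roles, the 3.5 Angstrom cutoff, and the synthetic coordinates are assumptions standing in for real trajectory data.

```python
# Illustrative sketch: flag frames of an MD trajectory in which a salt
# bridge exists between an inhibitor carboxylate oxygen and the Lys159
# side-chain nitrogen. Coordinates are synthetic stand-ins here; in a real
# analysis they would be read from the simulation trajectory.
import numpy as np

CUTOFF = 3.5  # Angstrom; a common heuristic threshold for salt bridges

rng = np.random.default_rng(0)
n_frames = 1000
# Hypothetical per-frame positions (Angstrom) of the two charged groups.
carboxylate_o = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.5, size=(n_frames, 3))
lys159_nz = rng.normal(loc=(3.0, 0.0, 0.0), scale=0.5, size=(n_frames, 3))

distances = np.linalg.norm(carboxylate_o - lys159_nz, axis=1)
occupancy = (distances < CUTOFF).mean()
print(f"salt-bridge occupancy over the trajectory: {occupancy:.1%}")
```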
Abstract:
This paper presents an approach which enables new parameters to be added to a CAD model for optimization purposes. It aims to remove a common roadblock in CAD-based optimization, where the parameterization of the model does not offer the shape sufficient flexibility for a truly optimized shape to be created. A technique has been developed which uses adjoint-based sensitivity maps to predict the sensitivity of performance to the addition of four different feature types to a model, allowing the feature providing the greatest benefit to be selected. The optimum position at which to add the feature is also discussed. It is anticipated that the approach could be used to iteratively add features to a model, providing greater flexibility to the shape of the model and allowing the newly added parameters to be used as design variables in a subsequent shape optimization.
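A minimal sketch of the selection step, assuming the adjoint solver supplies a sensitivity value dJ/dn at sampled surface points; the four feature names and their displacement fields below are hypothetical placeholders, not the paper's actual feature models.

```python
# Rank candidate features by a first-order adjoint prediction of the
# objective change, then pick the most beneficial one.
import numpy as np

rng = np.random.default_rng(1)
n = 500  # sampled surface points on the CAD model

# Adjoint sensitivity map: dJ/dn at each point (J = objective, n = outward
# normal displacement). In practice this comes from an adjoint CFD/FEA solve.
djdn = rng.normal(size=n)

# Hypothetical normal-displacement fields induced by four candidate features.
features = {name: rng.normal(scale=0.1, size=n)
            for name in ("pad", "pocket", "rib", "fillet")}

# First-order prediction: dJ ~ sum_i (dJ/dn_i) * dn_i. If J is a cost to be
# minimised, the best feature has the most negative predicted dJ.
predicted_dj = {name: float(np.dot(djdn, dn)) for name, dn in features.items()}
best = min(predicted_dj, key=predicted_dj.get)
print(f"add feature '{best}' (predicted dJ = {predicted_dj[best]:.3f})")
```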
Abstract:
The finite element method plays an extremely important role in forging process design as it provides a valid means to quantify forging errors and thereby govern die shape modification to improve the dimensional accuracy of the component. However, this dependency on process simulation could raise significant problems and present a major drawback if the finite element simulation results were inaccurate. This paper presents a novel approach to assessing the dimensional accuracy and shape quality of aeroengine blades predicted by finite element hot-forging simulation. The proposed virtual inspection system uses conventional algorithms adopted by modern coordinate measurement processes as well as the latest free-form surface evaluation techniques to provide a robust framework for virtual forging error assessment. Established techniques for the physical registration of real components have been adapted to localise virtual models in relation to a nominal Design Coordinate System. Blades are then automatically analysed using a series of intelligent routines to generate measurement data and compute dimensional errors. The results of a comparison study indicate that the virtual inspection results and actual coordinate measurement data are highly comparable, validating the approach as an effective and accurate means to quantify forging error in a virtual environment. Consequently, this provides adequate justification for the implementation of the virtual inspection system in the virtual process design, modelling and validation of forged aeroengine blades in industry.
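The localisation step, fitting the virtually forged blade to the nominal Design Coordinate System, is the kind of rigid best-fit commonly solved with the Kabsch/SVD algorithm; the sketch below illustrates that generic idea on synthetic points and is not the paper's inspection routine.

```python
# Rigid best-fit (Kabsch/SVD) of "measured" points onto nominal design
# points, followed by per-point deviation as a forging-error estimate.
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimising ||(R @ P.T).T + t - Q||."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(2)
nominal = rng.random((200, 3))  # synthetic stand-in for blade geometry

# "Measured" points: nominal geometry under a known pose offset plus error.
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
measured = nominal @ Rz.T + np.array([1.0, -2.0, 0.5]) \
    + rng.normal(scale=1e-3, size=(200, 3))

R, t = kabsch(measured, nominal)
aligned = measured @ R.T + t
deviation = np.linalg.norm(aligned - nominal, axis=1)
print(f"max deviation after localisation: {deviation.max():.4f}")
```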
Abstract:
Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.
This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the hypothesis that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach to whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for such compilers. In cases where traditional compilation techniques do find parallelism, our approach discovers higher degrees of parallelism, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
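To make the whole-data-structure idea concrete, the toy sketch below records a single last writer per named structure, so a read adds at most one cross-iteration dependence edge instead of one edge per element or byte. The trace and structure names are invented for illustration; the real tool instruments running programs.

```python
# Whole-data-structure dependence tracking over a recorded access trace.
last_writer = {}     # data structure -> outer-loop iteration that last wrote it
dependences = set()  # (producer iteration, consumer iteration) edges

trace = [  # (outer-loop iteration, structure, access kind) -- invented
    (0, "input_buf", "write"),
    (1, "input_buf", "read"), (1, "huffman_tree", "write"),
    (2, "huffman_tree", "read"), (2, "output_buf", "write"),
]

for iteration, struct, kind in trace:
    if kind == "write":
        last_writer[struct] = iteration
    elif struct in last_writer and last_writer[struct] != iteration:
        dependences.add((last_writer[struct], iteration))

# Chain-shaped edges such as {(0, 1), (1, 2)} indicate pipeline parallelism:
# each iteration may start once its predecessor has produced its structure.
print(sorted(dependences))
```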
Abstract:
Caches hide the growing latency of accesses to main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct-mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
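The following sketch shows what such an XOR-based set-index function looks like for a hypothetical 2^4-set cache; the specific bit subsets are made up here, whereas in the paper they are the output of the profile-guided search.

```python
# XOR-based randomising set-index function: each index bit is the parity
# (XOR) of a small subset of the original address bits.
def set_index(addr: int, bit_subsets: list[list[int]]) -> int:
    index = 0
    for out_bit, subset in enumerate(bit_subsets):
        parity = 0
        for b in subset:          # XOR a few address bits together
            parity ^= (addr >> b) & 1
        index |= parity << out_bit
    return index

# A 2^4-set cache: four index bits, each the XOR of at most 2 of the
# m + 2 = 6 randomised address bits (here bits 6..11, above the block
# offset). These subsets are illustrative, not the generated optimum.
subsets = [[6, 10], [7, 11], [8, 10], [9, 11]]
print(set_index(0x1ABC40, subsets))
```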
Abstract:
The development of an automated system for the quality assessment of aerodrome ground lighting (AGL), in accordance with the associated standards and recommendations, is presented. The system is composed of an image sensor placed inside the cockpit of an aircraft to record images of the AGL during a normal descent to an aerodrome. A model-based methodology is used to ascertain the optimum match between a template of the AGL and the actual image data in order to calculate the position and orientation of the camera at the instant each image was acquired. The camera position and orientation data are used, along with the pixel grey level of each imaged luminaire, to estimate the luminous intensity of that luminaire. This can then be compared with the expected brightness to ensure the luminaire is operating to the required standards. As such, a metric for the quality of the AGL pattern is determined. Experiments on real image data are presented to demonstrate the application and effectiveness of the system.
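As a rough illustration of the final estimation step: once the camera pose is known, so is the distance to each luminaire, and an inverse-square photometric model relates pixel grey level to luminous intensity. The calibration constant, the numbers, and the pass criterion below are assumptions; the system's actual photometric calibration is more involved.

```python
# Inverse-square estimate of luminous intensity from pixel grey level.
def estimate_intensity(grey_level: float, distance_m: float, k: float) -> float:
    # Simplified model: grey level ~ k * I / d^2, hence I ~ grey * d^2 / k.
    return grey_level * distance_m ** 2 / k

# Illustrative values only (k is a hypothetical calibration constant).
i_est = estimate_intensity(grey_level=142.0, distance_m=850.0, k=1.2e4)
i_expected = 10000.0  # candela; nominal output for this luminaire class
print("within standard" if i_est >= 0.7 * i_expected else "below standard")
```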
Abstract:
A major concern in stiffener run-out regions, where the stiffener is terminated due to a cut-out, an intersecting rib, or some other structural feature that interrupts the load path, is the relatively weak skin–stiffener interface in the absence of mechanical fasteners. More damage-tolerant stiffener run-outs are clearly required, and these are investigated in this paper. Using a parametric finite element analysis, the run-out region was optimised for stable debonding crack growth. The modified run-out, as well as a baseline configuration, was manufactured and tested. Damage initiation and propagation were investigated in detail using state-of-the-art monitoring equipment, including Acoustic Emission and Digital Image Correlation. As expected, the baseline configuration failed catastrophically. The modified run-out showed improved crack-growth stability, but subsequent delamination failure in the stiffener promptly led to catastrophic failure.
Harmonic generation and wave mixing in nonlinear metamaterials and photonic crystals (Invited paper)
Abstract:
The basic concepts and phenomenology of wave mixing and harmonic generation are reviewed in the context of recent advances in enhanced nonlinear activity in metamaterials and photonic crystals. The effects of dispersion, field confinement and phase synchronism are illustrated by examples of purpose-designed artificial nonlinear structures. (c) 2012 Wiley Periodicals, Inc. Int J RF and Microwave CAE 22:469–482, 2012.
Abstract:
Social signals and the interpretation of the information they carry are of high importance in Human-Computer Interaction. Often used for affect recognition, the cues within these signals are displayed in various modalities. Fusing multi-modal signals is a natural and interesting way to improve the automatic classification of emotions transported in social signals. In most present studies of uni-modal affect recognition as well as multi-modal fusion, decisions are forced onto fixed annotation segments across all modalities. In this paper, we investigate the less prevalent approach of event-driven fusion, which indirectly accumulates asynchronous events in all modalities for final predictions. We present a fusion approach that handles short-timed events in a vector space, which is of special interest for real-time applications. We compare the results of segmentation-based uni-modal classification and fusion schemes to the event-driven fusion approach. The evaluation is carried out via the detection of enjoyment episodes within the audiovisual Belfast Story-Telling Corpus.
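A minimal sketch of the event-driven idea, assuming each modality emits time-stamped cue events whose evidence decays exponentially until a prediction is read off; the cue names, decay rate, and decision threshold are illustrative assumptions, not values from the paper.

```python
# Event-driven accumulation: asynchronous events from different modalities
# update a shared evidence store, and a prediction can be read at any time.
import math

DECAY = 0.5  # per-second exponential decay of event evidence
state = {}   # cue -> (accumulated evidence, time of last update)

def add_event(cue: str, strength: float, t: float) -> None:
    value, t_last = state.get(cue, (0.0, t))
    state[cue] = (value * math.exp(-DECAY * (t - t_last)) + strength, t)

def evidence(cue: str, t: float) -> float:
    value, t_last = state.get(cue, (0.0, t))
    return value * math.exp(-DECAY * (t - t_last))

add_event("smile", 0.8, t=1.0)     # video event
add_event("laughter", 0.9, t=1.4)  # audio event, asynchronous to the video
score = evidence("smile", t=2.0) + evidence("laughter", t=2.0)
print("enjoyment" if score > 1.0 else "neutral")
```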
Abstract:
Painterly rendering has been linked to computer vision, but we propose to link it to human vision because perception and painting are interwoven processes. Recent progress in developing computational models makes it possible to establish this link. We show that completely automatic rendering can be obtained by applying four image representations from the visual system: (1) colour constancy can be used to correct colours, (2) coarse background brightness in combination with colour coding in cytochrome-oxidase blobs can be used to create a background with a big brush, (3) the multi-scale line and edge representation provides a very natural way to render finer brush strokes, and (4) the multi-scale keypoint representation serves to create saliency maps for Focus-of-Attention, and FoA can be used to render important structures. Basic processes are described, renderings are shown, and important ideas for future research are discussed.
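As a toy illustration of the coarse-to-fine pipeline (a big-brush background overlaid with finer strokes on salient structure), the sketch below substitutes a plain box blur and a crude edge cue for the cortical line/edge and keypoint representations used in the actual model.

```python
# Coarse-to-fine painterly sketch on a synthetic greyscale image.
import numpy as np

def box_blur(img, radius):
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(3)
image = rng.random((64, 64))                 # stand-in for a real photograph
canvas = box_blur(image, 8)                  # big-brush background
edges = np.abs(image - box_blur(image, 2))   # crude multi-scale edge cue
mask = edges > np.percentile(edges, 90)      # most salient 10% of pixels
canvas[mask] = image[mask]                   # finer strokes on top
print(f"refined {mask.mean():.0%} of the canvas with fine strokes")
```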