820 results for Computer-Aided Engineering


Relevance: 80.00%

Abstract:

Valve and cardiac activity were simultaneously measured in the blue mussel (Mytilus edulis) in response to 10 d copper exposure. Valve movements, heart rates and heart-rate variability were obtained non-invasively using a Musselmonitor® (valve activity) and a modified version of the Computer-Aided Physiological Monitoring system (CAPMON; cardiac activity). After 2 d exposure of mussels (4 individuals per treatment group) to a range of dissolved copper concentrations (0 to 12.5 µM as CuCl₂), median valve positions (% open) and median heart rates (beats per minute) declined as a function of copper concentration. Heart-rate variability (coefficient of variation for interpulse durations) rose in a concentration-dependent manner. The 48 h EC50 values (concentrations of copper causing a 50% change) for valve positions, heart rates and heart-rate variability were 2.1, 0.8 and 0.06 µM, respectively. Valve activity was weakly correlated with both heart rate (r = 0.48 ± 0.02) and heart-rate variability (r = 0.32 ± 0.06) for control individuals (0 µM Cu²⁺). This resulted from a number of short closure events that did not coincide with a change in cardiac activity. Exposure of mussels to increasing copper concentrations (≥0.8 µM) progressively reduced the correlation between valve activity and heart rates (r = 0 for individuals dosed with ≥6.3 µM Cu²⁺), while correlations between valve activity and heart-rate variability were unaffected. The poor correlations resulted from periods of valve flapping that were not mimicked by similar fluctuations in heart rate or heart-rate variability. The data suggest that the copper-induced bradycardia observed in mussels is not a consequence of prolonged valve closure.
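The cardiac metrics above can be made concrete with a short calculation; a minimal sketch, using made-up interpulse intervals rather than the study's recordings:

```python
import numpy as np

def heart_rate_variability(interpulse_s):
    """Coefficient of variation of interpulse durations (the HRV metric above)."""
    ipi = np.asarray(interpulse_s, dtype=float)
    return ipi.std(ddof=1) / ipi.mean()

def heart_rate_bpm(interpulse_s):
    """Median heart rate in beats per minute from interpulse durations."""
    return 60.0 / np.median(interpulse_s)

# Hypothetical recording: ~20 beats/min with mild jitter.
rng = np.random.default_rng(0)
ipi = rng.normal(loc=3.0, scale=0.15, size=600)  # seconds between beats
print(f"HR  = {heart_rate_bpm(ipi):.1f} bpm")
print(f"HRV = {heart_rate_variability(ipi):.3f}")
```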

Relevance: 80.00%

Abstract:

Traditional static analysis fails to auto-parallelize programs with a complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer to discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.

This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the belief that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach to whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach discovers higher degrees of parallelism, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
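A minimal illustration of the coarse-grain pipeline parallelism described above (a generic sketch, not the paper's tool): an outer loop is split into two stages that exchange whole data structures through a queue, so the stages overlap in time.

```python
import threading, queue

work = queue.Queue(maxsize=4)

def stage1(items):
    for item in items:
        record = {"id": item, "payload": item * 2}  # produce a whole structure
        work.put(record)
    work.put(None)  # end-of-stream marker

def stage2(results):
    while (record := work.get()) is not None:
        results.append(record["payload"] + 1)       # consume it downstream

results = []
t1 = threading.Thread(target=stage1, args=(range(8),))
t2 = threading.Thread(target=stage2, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)
```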

Relevance: 80.00%

Abstract:

Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct-mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
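A minimal sketch of such an XOR-based set index function, under the constraints just stated (each index bit is the XOR of at most two address bits; the bit assignments below are chosen arbitrarily for illustration, not by the paper's search algorithm):

```python
def xor_index(addr: int, bit_groups) -> int:
    """Compute a cache set index; bit i of the index is the XOR of the
    address bits listed in bit_groups[i]."""
    index = 0
    for i, bits in enumerate(bit_groups):
        parity = 0
        for b in bits:
            parity ^= (addr >> b) & 1
        index |= parity << i
    return index

# Example: a 64-set cache (m = 6) indexed by randomizing m + 2 = 8
# distinct address bits (bits 2..9), with two inputs per XOR.
BIT_GROUPS = [(2, 8), (3, 9), (4, 8), (5, 9), (6, 8), (7, 9)]
print(xor_index(0xDEADBEEF, BIT_GROUPS))  # set index in [0, 63]
```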

Relevance: 80.00%

Abstract:

In this paper, a novel framework for dense pixel matching based on dynamic programming is introduced. Unlike most techniques proposed in the literature, our approach assumes neither known camera geometry nor the availability of rectified images. Under such conditions, the matching task cannot be reduced to finding correspondences between a pair of scanlines. We propose to extend existing dynamic programming methodologies to a larger dimensional space by using a 3D scoring matrix so that correspondences between a line and a whole image can be calculated. After assessing our framework on a standard evaluation dataset of rectified stereo images, experiments are conducted on unrectified and non-linearly distorted images. Results validate our new approach and reveal the versatility of our algorithm.
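For context, the classic scanline-to-scanline dynamic programming formulation that this work generalizes to a 3-D scoring matrix looks roughly as follows (a generic sketch with an arbitrary occlusion penalty, not the paper's algorithm):

```python
import numpy as np

def dp_scanline_match(left, right, occlusion=0.5):
    """Minimum-cost correspondence between two intensity scanlines,
    sequence-alignment style: match, or skip a pixel on either side."""
    n, m = len(left), len(right)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = occlusion * np.arange(m + 1)
    cost[:, 0] = occlusion * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i-1, j-1] + abs(float(left[i-1]) - float(right[j-1]))
            cost[i, j] = min(match,
                             cost[i-1, j] + occlusion,   # pixel occluded in right
                             cost[i, j-1] + occlusion)   # pixel occluded in left
    return cost[n, m]

left  = np.array([10, 12, 40, 41, 90], dtype=float)
right = np.array([11, 40, 42, 88, 90], dtype=float)
print(dp_scanline_match(left, right))
```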

Relevance: 80.00%

Abstract:

During the past century, several epidemics of human African trypanosomiasis, a deadly disease caused by the protist Trypanosoma brucei, have afflicted sub-Saharan Africa. Over 10 000 new victims are reported each year, with hundreds of thousands more at risk. As current drug treatments are either highly toxic or ineffective, novel trypanocides are urgently needed. The T. brucei galactose synthesis pathway is one potential therapeutic target. Although galactose is essential for T. brucei survival, the parasite lacks the transporters required to take up galactose from its environment. UDP-galactose 4'-epimerase (TbGalE) is responsible for the epimerization of UDP-glucose to UDP-galactose and is therefore of great interest to medicinal chemists. Using molecular dynamics simulations, we investigate the atomistic motions of TbGalE in both the apo and holo states. The sampled conformations and protein dynamics depend not only on the presence of a UDP-sugar ligand, but also on the chirality of the UDP-sugar C4 atom. This dependence provides important insights into TbGalE function and may help guide future computer-aided drug discovery efforts targeting this protein.

Relevance: 80.00%

Abstract:

There is a requirement for better integration between design and analysis tools, which is difficult due to their different objectives, separate data representations and workflows. Currently, substantial effort is required to produce a suitable analysis model from design geometry. Robust links are required between these different representations to enable analysis attributes to be transferred between different design and analysis packages for models at various levels of fidelity.

This paper describes a novel approach for integrating design and analysis models by identifying and managing the relationships between the different representations. Three key technologies, Cellular Modeling, Virtual Topology and Equivalencing, have been employed to achieve effective simulation model management. These technologies and their implementation are discussed in detail. Prototype automated tools are introduced demonstrating how multiple simulation models can be linked and maintained to facilitate seamless integration throughout the design cycle.
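To make the Equivalencing idea concrete, here is a minimal sketch (all entity names are hypothetical, not drawn from the paper) of one design face linked to its equivalent representations in analysis models at different fidelities:

```python
# One design-side entity mapped to its equivalents in several analysis
# models; an attribute applied to the design face can be propagated to
# whichever representation a given model uses.
design_to_analysis = {
    "design_face_17": {
        "3D_model":    ["solid_face_17a", "solid_face_17b"],  # partitioned
        "shell_model": ["midsurface_face_9"],
        "beam_model":  ["beam_edge_4"],
    },
}

for model, entities in design_to_analysis["design_face_17"].items():
    print(f"apply load to {entities} in {model}")
```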

Relevance: 80.00%

Abstract:

Defining Simulation Intent involves capturing high level modelling and idealisation decisions in order to create an efficient and fit-for-purpose analysis. These decisions are recorded as attributes of the decomposed design space.

An approach to defining Simulation Intent is described utilising three known technologies: Cellular Modelling, the subdivision of space into volumes of simulation significance (structures, gas paths, internal and external airflows, etc.); Equivalencing, maintaining a consistent and coherent description of the equivalent representations of the spatial cells in different analysis models; and Virtual Topology, which offers tools for partitioning and de-partitioning the model without disturbing the manufacturing-oriented design geometry. The end result is a convenient framework to which high-level analysis attributes can be applied, and from which detailed analysis models can be generated with a high degree of controllability, repeatability and automation. There are multiple novel aspects to the approach, including its reusability, robustness to changes in model topology and the inherent links created between analysis models at different levels of fidelity and physics.

By utilising Simulation Intent, CAD modelling for simulation can be fully exploited and simulation workflows can be more readily automated, reducing many repetitive manual tasks (e.g. the definition of appropriate coupling between elements of different types and the application of boundary conditions). The approach has been implemented and tested with practical examples, and significant benefits are demonstrated.
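A minimal sketch of what recording Simulation Intent might look like in code (the cell names and attributes are illustrative assumptions, not the authors' implementation): high-level analysis attributes are attached to the cells of the decomposed design space and later queried to generate detailed analysis models.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    name: str                      # e.g. a gas path or structural volume
    significance: str              # why this region matters to the analysis
    attributes: dict = field(default_factory=dict)

gas_path = Cell("gas_path_1", "external airflow",
                {"fidelity": "3D", "physics": "CFD", "mesh": "hex-dominant"})
flange   = Cell("flange_2", "structure",
                {"fidelity": "shell", "physics": "structural"})

# Detailed analysis models are generated by querying the attributes,
# e.g. collect every cell destined for a 3-D CFD model.
cfd_cells = [c for c in (gas_path, flange) if c.attributes.get("physics") == "CFD"]
print([c.name for c in cfd_cells])
```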

Relevance: 80.00%

Abstract:

This paper examines the applicability of an immersive virtual reality (VR) system to the process of organizational learning in a manufacturing context. The work focuses on the extent to which realism has to be represented in a simulated product build scenario in order to give the user an effective learning experience for an assembly task. Current technologies allow the visualization and manipulation of objects in VR systems, but physical behaviors such as contact between objects and the effects of gravity are not commonly represented in off-the-shelf simulation solutions, and the computational power required to facilitate these functions remains a challenge. This work demonstrates how physical behaviors can be coded and represented through the development of more effective mechanisms for the interface between computer-aided design (CAD) and VR.

Relevance: 80.00%

Abstract:

Power dissipation and robustness to process variation pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental to power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance, and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size from the conventional one, improving interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of the nominal value, our design provides reasonable image PSNR with 40% power savings. © 2006 IEEE.
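PSNR, the quality metric being traded against power here, is 10·log10(MAX²/MSE); a minimal sketch for 8-bit images (MAX = 255), with made-up data:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(f"{psnr(img, noisy):.1f} dB")
```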

Relevance: 80.00%

Abstract:

Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for process tolerance can be detrimental to power dissipation. However, for certain signal-processing systems such as those used in color image processing, we note that effective trade-offs can be achieved between Vdd scaling, process tolerance and "output quality". In this paper we demonstrate how these trade-offs can be effectively utilized in the development of novel low-power variation-tolerant architectures for color interpolation. The proposed architecture supports a graceful degradation in the PSNR (peak signal-to-noise ratio) under aggressive voltage scaling as well as extreme process variations in sub-70nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement than others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.

Relevance: 80.00%

Abstract:

Social signals, and the interpretation of the information they carry, are of high importance in Human-Computer Interaction. Often used for affect recognition, the cues within these signals are displayed across various modalities. Fusion of multi-modal signals is a natural and interesting way to improve the automatic classification of emotions conveyed in social signals. In most present studies of uni-modal affect recognition as well as multi-modal fusion, decisions are forced for fixed annotation segments across all modalities. In this paper, we investigate the less prevalent approach of event-driven fusion, which indirectly accumulates asynchronous events in all modalities for final predictions. We present a fusion approach that handles short-timed events in a vector space, which is of special interest for real-time applications. We compare results of segmentation-based uni-modal classification and fusion schemes to the event-driven fusion approach. The evaluation is carried out via the detection of enjoyment episodes within the audiovisual Belfast Story-Telling Corpus.
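A minimal sketch of the event-driven idea (a generic illustration, not the paper's method): asynchronous per-modality events are accumulated with an exponential time decay so a fused prediction can be read out at any moment; the event format and half-life are assumptions.

```python
import math

events = [  # (timestamp_s, modality, confidence that the episode is "enjoyment")
    (1.0, "audio_laugh", 0.9),
    (1.3, "video_smile", 0.7),
    (4.0, "audio_laugh", 0.4),
]

def fused_score(t_now: float, half_life: float = 2.0) -> float:
    """Decay-weighted mean of all events observed up to t_now."""
    weights, total = 0.0, 0.0
    for t, _modality, conf in events:
        if t <= t_now:
            w = math.exp(-math.log(2) * (t_now - t) / half_life)
            weights += w
            total += w * conf
    return total / weights if weights else 0.0

print(f"score at t=2s: {fused_score(2.0):.2f}")
print(f"score at t=6s: {fused_score(6.0):.2f}")
```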

Relevance: 80.00%

Abstract:

This paper outlines the importance of robust interface management for facilitating finite element analysis workflows. Topological equivalences between analysis model representations are identified and maintained in an editable and accessible manner. The model and its interfaces are automatically represented using an analysis-specific cellular decomposition of the design space. Rework of boundary conditions following changes to the design geometry or the analysis idealization can be minimized by tracking interface dependencies. Utilizing this information together with the Simulation Intent specified by an analyst, automated decisions can be made to process the interface information required to rebuild analysis models. Through this work, automated boundary condition application is realized within multi-component, multi-resolution and multi-fidelity analysis workflows.
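A minimal sketch of interface dependency tracking of this flavor (names and data layout are hypothetical): boundary conditions reference interfaces, and interfaces reference geometric faces, so a geometry change identifies exactly which conditions need rework.

```python
interface_geometry = {"iface_A": {"face_3", "face_4"}, "iface_B": {"face_9"}}
boundary_conditions = {"bc_pressure": "iface_A", "bc_fixed": "iface_B"}

def affected_bcs(changed_faces: set) -> list:
    """Boundary conditions whose interface touches a modified face."""
    return [bc for bc, iface in boundary_conditions.items()
            if interface_geometry[iface] & changed_faces]

print(affected_bcs({"face_4"}))  # only bc_pressure needs rework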

Relevance: 80.00%

Abstract:

New techniques are presented for using the medial axis to generate decompositions on which high quality block-structured meshes with well-placed mesh singularities can be generated. Established medial-axis-based meshing algorithms are effective for some geometries, but in general, they do not produce the most favourable decompositions, particularly when there are geometric concavities. This new approach uses both the topological and geometric information in the medial axis to establish a valid and effective arrangement of mesh singularities for any 2-D surface. It deals with concavities effectively and finds solutions that are most appropriate to the geometric shapes. Resulting meshes are shown for a number of example models.
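As a concrete starting point for such techniques, the medial axis of a 2-D region can be extracted with scikit-image; a minimal sketch (the notched-square geometry is an arbitrary example, and the decomposition logic itself is beyond this sketch):

```python
import numpy as np
from skimage.morphology import medial_axis

shape = np.ones((100, 100), dtype=bool)
shape[40:60, 0:30] = False          # a concavity in the boundary

skeleton, distance = medial_axis(shape, return_distance=True)
# 'skeleton' marks medial-axis pixels; 'distance' gives the inscribed-circle
# radius at each point -- together, the topological and geometric
# information a decomposition can reason about.
print(skeleton.sum(), distance.max())
```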