850 results for computer aided design


Relevance:

100.00%

Publisher:

Abstract:

The shared-memory programming model can be an effective way to achieve parallelism on shared memory parallel computers. Historically, however, the lack of a programming standard using directives and the limited scalability have affected its take-up. Recent advances in hardware and software technologies have resulted in improvements to both the performance of parallel programs with compiler directives and the issue of portability with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorise the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. These demonstrate the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.
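To make the kind of transformation concrete, the following is a minimal C++ sketch (not output produced by the Computer Aided Parallelisation Toolkit itself) of a loop that interprocedural analysis would classify as fully parallel and annotate with an OpenMP work-sharing directive; the array names and loop are illustrative only.

#include <vector>

// Each iteration writes a distinct element of c and only reads a and b, so there
// is no loop-carried dependence and a simple work-sharing directive is sufficient.
void vector_add(const std::vector<double>& a, const std::vector<double>& b,
                std::vector<double>& c) {
    const int n = static_cast<int>(c.size());
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        c[i] = a[i] + b[i];
    }
}

Compiled with OpenMP enabled (for example, -fopenmp), the iterations are shared across threads; without it, the pragma is ignored and the loop runs serially.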

Relevance:

100.00%

Publisher:

Abstract:

The problems of collaborative engineering design and knowledge management at the conceptual stage in a network of dissimilar enterprises were investigated. This issue in engineering design is a result of the supply chain and virtual enterprise (VE) oriented industry that demands faster time to market and accurate cost/manufacturing analysis from conception. The solution consisted of a de-centralised super-peer network architecture to establish and maintain communications between enterprises in a VE. In the solution outlined here, the enterprises are able to share knowledge in a common format and nomenclature via a building-block shareable super-ontology that can be tailored on a project-by-project basis, whilst maintaining the common nomenclature of the ‘super-ontology’, eliminating knowledge interpretation issues. The two-tier architecture of the solution glues together the peer-to-peer and super-ontologies to form a coherent system for both internal and virtual enterprise knowledge management and product development.
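As a rough illustration of the two-tier idea, the sketch below assumes nothing about the actual ontology language or schema used in the work: a project-level concept extends a shared super-ontology concept, so project-specific detail can be added without departing from the common nomenclature.

#include <iostream>
#include <string>

// Common nomenclature fixed by the shared super-ontology.
struct SuperOntologyPart {
    std::string part_id;
    std::string function;   // e.g. "bracket"; the term is shared across enterprises
};

// Project-level tailoring layered on top, keeping the shared terms unchanged.
struct ProjectPart : SuperOntologyPart {
    double estimated_unit_cost = 0.0;   // hypothetical project-specific attribute
};

int main() {
    ProjectPart p;
    p.part_id = "BRK-001";
    p.function = "bracket";
    p.estimated_unit_cost = 4.2;
    std::cout << p.part_id << " (" << p.function << ") estimated cost "
              << p.estimated_unit_cost << std::endl;
    return 0;
}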

Relevance:

100.00%

Publisher:

Abstract:

Product knowledge support needs are compared in two companies with different production volumes and product complexity. Knowledge support requirements identified include: function, performance data, requirements data, common parts, regulatory guidelines and layout data. A process-based, data-driven knowledge reuse method is evaluated in light of the identified product knowledge needs. The evaluation takes place through developing a pilot case with each company. It is found that the method provides more benefit to the high-complexity design domain, in which a significant amount of work takes place at the conceptual design stages, relying on a conceptual product representation. There is not such a clear value proposition in a design environment whose main challenge is layout design and the application of standard parts and features. The method supports the requirement for conceptual product representation but does not fully support a standard parts library.

Relevance:

100.00%

Publisher:

Abstract:

Advances in silicon technology have been a key development in the realisation of many telecommunication and signal processing systems. In many cases, the development of application-specific digital signal processing (DSP) chips is the most cost-effective solution and provides the highest performance. Advances made in computer-aided design (CAD) tools and design methodologies now allow designers to develop complex chips within months or even weeks. This paper gives an insight into the challenges and design methodologies of implementing advanced high-performance chips for DSP. In particular, the paper reviews some of the techniques used to develop circuit architectures from high-level descriptions and the tools which are then used to realise silicon layout.

Relevance:

100.00%

Publisher:

Abstract:

MinneSPEC proposes reduced input sets that microprocessor designers can use to model representative short-running workloads. A four-step methodology verifies the program behavior similarity of these input sets to reference sets.

Relevance:

100.00%

Publisher:

Abstract:

A major concern in stiffener run-out regions, where the stiffener is terminated due to a cut-out, intersecting rib, or some other structural feature which interrupts the load path, is the relatively weak skin–stiffener interface in the absence of mechanical fasteners. More damage-tolerant stiffener run-outs are clearly required, and these are investigated in this paper. Using a parametric finite element analysis, the run-out region was optimised for stable debonding crack growth. The modified run-out, as well as a baseline configuration, were manufactured and tested. Damage initiation and propagation were investigated in detail using state-of-the-art monitoring equipment including Acoustic Emission and Digital Image Correlation. As expected, the baseline configuration failed catastrophically. The modified run-out showed improved crack-growth stability, but subsequent delamination failure in the stiffener promptly led to catastrophic failure.

Relevance:

100.00%

Publisher:

Abstract:

There is a requirement for better integration between design and analysis tools, which is difficult due to their different objectives, separate data representations and workflows. Currently, substantial effort is required to produce a suitable analysis model from design geometry. Robust links are required between these different representations to enable analysis attributes to be transferred between different design and analysis packages for models at various levels of fidelity.

This paper describes a novel approach for integrating design and analysis models by identifying and managing the relationships between the different representations. Three key technologies, Cellular Modeling, Virtual Topology and Equivalencing, have been employed to achieve effective simulation model management. These technologies and their implementation are discussed in detail. Prototype automated tools are introduced, demonstrating how multiple simulation models can be linked and maintained to facilitate seamless integration throughout the design cycle.
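The equivalencing idea can be pictured with a small sketch; the identifiers below are hypothetical and are not the data model used by the prototype tools, but they show how a design-model face can be linked to the analysis-model regions that represent it, so that an attribute applied to the design geometry can be propagated to every linked analysis entity.

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Equivalence table: design faces (keyed by name) mapped to the mesh regions
    // that represent them in the analysis model. Identifiers are illustrative only.
    std::map<std::string, std::vector<int>> equivalence = {
        {"design_face_upper_skin",    {101, 102}},  // one face split into two shell regions
        {"design_face_stiffener_web", {205}},
    };

    // Transferring an attribute: apply a load defined on the design face to every
    // analysis region recorded as equivalent to it.
    const std::string face = "design_face_upper_skin";
    for (int region : equivalence.at(face)) {
        std::cout << "apply pressure load to analysis region " << region << "\n";
    }
    return 0;
}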

Relevance:

100.00%

Publisher:

Abstract:

Power dissipation and robustness to process variation pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental for power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance, and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak-signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size than the conventional one to improve interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of nominal value, our design provides reasonable image PSNR with 40% power savings.
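A behavioural sketch of the underlying idea (not the hardware datapath described in the paper) is given below: the pixel estimate is split into a coarse term that must always complete and a small correction term mapped to the less important computation path, so a delay failure under scaled Vdd only costs the refinement. The neighbour values, gradient and scaling factors are illustrative.

#include <cstdint>
#include <iostream>

// Hypothetical one-dimensional interpolation between two known neighbours.
uint8_t interpolate(uint8_t left, uint8_t right, int gradient, bool correction_failed) {
    int estimate = (left + right) / 2;   // important computation: always completes
    if (!correction_failed) {
        estimate -= gradient / 4;        // less important refinement: may be lost to a delay failure
    }
    if (estimate < 0) estimate = 0;
    if (estimate > 255) estimate = 255;
    return static_cast<uint8_t>(estimate);
}

int main() {
    std::cout << int(interpolate(100, 120, 16, false)) << "\n";  // 106: full-quality result
    std::cout << int(interpolate(100, 120, 16, true))  << "\n";  // 110: graceful degradation
    return 0;
}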

Relevance:

100.00%

Publisher:

Abstract:

Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems such as those used in color image processing, we note that effective trade-offs can be achieved between Vdd scaling, process tolerance and "output quality". In this paper we demonstrate how these tradeoffs can be effectively utilized in the development of novel low-power variation tolerant architectures for color interpolation. The proposed architecture supports a graceful degradation in the PSNR (Peak Signal to Noise Ratio) under aggressive voltage scaling as well as extreme process variations in sub-70nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement compared to others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd-scaling and process variations. Simulation results show that even at a scaled voltage of 60% of nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.

Relevance:

100.00%

Publisher:

Abstract:

This article examines how the primary objective of validation, whether it is proving a model, a technology or a product, influences the engineering design process. Through the examination of a number of stiffened panel case studies, the relationships between simulation, validation, design and the final product are established and discussed. The work demonstrates the complex interactions between the original (or anticipated) design model, the analysis model, the validation activities and the product in service. The outcome clearly shows some unintended consequences. High-fidelity validation test simulations require a different set of detailed parameters to accurately capture behaviour. In doing so, they diverge from the original computer-aided design model, intrinsically limiting the value of the validation with respect to the product. This work represents a shift from the traditional perspective of encapsulating and controlling errors between simulation and experimental test to consideration of the wider design-test process. Specifically, it is a reflection on the implications of how models are built and validated, and the effect on results and understanding of structural behaviour. The article then identifies key checkpoints in the design process and how these should be used to update the computer-aided design system parameters for a design. This work strikes at a fundamental challenge in understanding the interaction between design, certification and operation of any complex system.

Relevance:

100.00%

Publisher:

Abstract:

Doctoral thesis, Pharmacy (Pharmaceutical and Therapeutic Chemistry), Universidade de Lisboa, Faculdade de Farmácia, 2014

Relevance:

100.00%

Publisher:

Abstract:

With the aid of the cobalt labelling technique, frog spinal cord motor neuron dendrites of the subpial dendritic plexus have been identified in serial electron micrographs. Computer reconstructions of various lengths (2.5-9.8 micron) of dendritic segments showed the contours of these dendrites to be highly irregular, and to present many thorn-like projections 0.4-1.8 micron long. Number, size and distribution of synaptic contacts were also determined. Almost half of the synapses occurred at the origins of the thorns and these synapses had the largest contact areas. Only 8 out of 54 synapses analysed were found on thorns and these were the smallest. For the total length of reconstructed dendrites there was, on average, one synapse per 1.2 micron, while 4.4% of the total dendritic surface was covered with synaptic contacts. The functional significance of these distal dendrites and their capacity to influence the soma membrane potential is discussed.

Relevance:

100.00%

Publisher:

Abstract:

By enhancing a real scene with computer-generated objects, Augmented Reality (AR) has proven itself a valuable Human-Computer Interface (HCI) in numerous application areas such as medical, military, entertainment and manufacturing. It enables higher performance of on-site tasks through the seamless presentation of up-to-date, task-related information to users during operation. AR has potential in design because the interfaces currently provided by Computer-Aided Design (CAD) packages are not intuitive, and reports show that the presence of physical objects helps design thinking and communication. This research explores the use of AR to improve the efficiency of a design process, specifically in mechanical design.