103 results for Reptrack Methodology
Abstract:
The aim of this paper is to investigate the mechanism of nanoscale fatigue using nano-impact and multiple-loading-cycle nanoindentation tests, and to compare it with previously reported findings on nanoscale fatigue obtained using an integrated stiffness and depth sensing approach. Two different film loading mechanisms, loading histories and indenter shapes are compared to assess the influence of test methodology on the nanoscale fatigue failure mechanisms of DLC film. An amorphous 100 nm thick DLC film was deposited on a 500 μm thick silicon substrate by sputtering a graphite target in a pure argon atmosphere. Nano-impact and multiple-load-cycle indentations were performed in the load ranges of 100 μN to 1000 μN and 0.1 mN to 100 mN, respectively. Both test types were conducted using conical and Berkovich indenters. Results indicate that, for the conical indenter, the combination of nano-impact and multiple-loading-cycle nanoindentation tests provides information on the life and failure mechanism of the DLC film that is comparable to the previously reported findings using the integrated stiffness and depth sensing approach. However, the comparison of results is sensitive to the applied load, loading mechanism, test type and probe geometry. The loading mechanism and load history are therefore critical, which also leads to two different definitions of film failure. The choice of test methodology, load and probe geometry should therefore be dictated by the in-service tribological conditions, and where necessary both test methodologies can be used to provide better insight into the failure mechanism. Molecular dynamics (MD) simulations of the elastic response during nanoindentation are also reported; they indicate that the elastic modulus of the film measured by MD simulation was higher than that measured experimentally. This difference is attributed to factors related to the presence of material defects, crystal structure, residual stress, indenter geometry and loading/unloading rate differences between the MD simulations and the experiments.
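The abstract does not state how the experimental modulus was extracted, but in depth-sensing nanoindentation it is typically obtained from the unloading stiffness via the Oliver-Pharr relations. A minimal sketch follows, under stated assumptions: the Oliver-Pharr analysis is assumed (not confirmed by the abstract), the default indenter constants are typical diamond values, and the numbers in the usage example are purely illustrative.

```python
import math

def reduced_modulus(stiffness: float, contact_area: float, beta: float = 1.0) -> float:
    """Reduced modulus E_r (Pa) from unloading stiffness S (N/m) and contact
    area A_c (m^2), via the Oliver-Pharr relation:
        E_r = sqrt(pi) / (2 * beta) * S / sqrt(A_c)
    beta is a geometry correction factor (~1 for axisymmetric indenters)."""
    return math.sqrt(math.pi) / (2.0 * beta) * stiffness / math.sqrt(contact_area)

def film_modulus(e_r: float, nu_film: float,
                 e_indenter: float = 1141e9, nu_indenter: float = 0.07) -> float:
    """Solve 1/E_r = (1 - nu_f^2)/E_f + (1 - nu_i^2)/E_i for the film modulus
    E_f. Defaults are typical diamond indenter constants (assumption)."""
    inv_ef_term = 1.0 / e_r - (1.0 - nu_indenter ** 2) / e_indenter
    return (1.0 - nu_film ** 2) / inv_ef_term

# Illustrative (hypothetical) values: S = 5e4 N/m, A_c = 1e-13 m^2, nu_f = 0.2
e_r = reduced_modulus(5e4, 1e-13)
e_f = film_modulus(e_r, nu_film=0.2)
```

Because the diamond indenter is much stiffer than the film, `e_f` comes out only slightly above `e_r`; substrate effects in a 100 nm film would need additional correction not shown here.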
Abstract:
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and for an implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system whereby different stages, and the synchronization operations between them, alternate between a distributed memory address space and a shared memory address space to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× across a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data processing bandwidth, where HyMR demonstrates improvements by factors of 3.1× and 3.2×, respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and assess its performance to be superior to that of message passing.
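HyMR targets the Intel SCC, but the programming model it implements can be illustrated with a minimal single-node sketch (pure Python, word count as the workload). This shows only the map/shuffle/reduce staging that such runtimes schedule; none of the SCC-specific hybrid address-space machinery is represented.

```python
from collections import defaultdict

def map_reduce(data, mapper, reducer):
    """Minimal MapReduce skeleton: map each input item to (key, value)
    pairs, group values by key (shuffle), then reduce each group."""
    intermediate = defaultdict(list)
    for item in data:                     # map stage
        for key, value in mapper(item):
            intermediate[key].append(value)  # shuffle: group by key
    return {k: reducer(k, vs) for k, vs in intermediate.items()}  # reduce stage

# Usage: word count over two toy "documents"
docs = ["hybrid address spaces", "address spaces scale"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda key, values: sum(values),
)
```

In HyMR the analogous stages run across cores, with the intermediate grouping and inter-stage synchronization placed in whichever address space (distributed or shared) performs better.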
Abstract:
The purposes of this chapter are to argue for (i) the heuristic value of the concepts of mask and masking in research that is grounded in psychodynamic theory but relates it to socio-cultural theory as a means of understanding self-experience; (ii) the value of creating and performing masks as one valuable methodological 'embodied' form in social and educational research, representing individuals' richly textured self-other constructions and allowing for the interrogation of any simplistic dichotomies associated with notions of 'inside' and 'outside' categories; and (iii) an exploration of the possibilities and dilemmas of interpretation within this frame.
Abstract:
Power dissipation and robustness to process variation have conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental for power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance, and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak-signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size than the conventional one to improve interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of the nominal value, our design provides reasonable image PSNR with 40% power savings. © 2006 IEEE.
Abstract:
Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems such as those used in color image processing, we note that effective trade-offs can be achieved between Vdd scaling, process tolerance and "output quality". In this paper we demonstrate how these trade-offs can be effectively utilized in the development of novel low-power variation-tolerant architectures for color interpolation. The proposed architecture supports a graceful degradation in the PSNR (Peak Signal to Noise Ratio) under aggressive voltage scaling as well as extreme process variations in sub-70nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement than others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.
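Both of these abstracts quantify output quality as PSNR. For reference, the metric itself is simple to compute; below is a minimal sketch for 8-bit pixel data, with purely illustrative sample values (the papers' architectures and images are not reproduced here).

```python
import math

def psnr(reference, degraded, peak=255):
    """Peak signal-to-noise ratio (dB) between two equal-length
    sequences of pixel values: 10 * log10(peak^2 / MSE)."""
    if len(reference) != len(degraded):
        raise ValueError("sequences must have equal length")
    mse = sum((a - b) ** 2 for a, b in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Illustrative: small interpolation errors on a flat patch still give high PSNR
quality = psnr([100, 100, 100, 100], [100, 101, 99, 100])
```

A "graceful degradation" design aims to keep this number high as voltage drops, by steering the inevitable delay failures toward the computations that move the MSE least.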
Abstract:
Plasma etch is a key process in modern semiconductor manufacturing facilities, as it offers process simplification together with tighter dimensional tolerances than wet chemical etch technology. The main challenge in operating plasma etchers is to maintain a consistent etch rate, spatially and temporally, for a given wafer and for successive wafers processed in the same etch tool. Etch rate measurements require expensive metrology steps, and therefore in general only limited sampling is performed. Furthermore, the results of measurements are not accessible in real time, limiting the options for run-to-run control. This paper investigates a Virtual Metrology (VM) enabled Dynamic Sampling (DS) methodology as an alternative paradigm for balancing the need to reduce costly metrology with the need to measure more frequently, and in a timely fashion, to enable wafer-to-wafer control. Using a Gaussian Process Regression (GPR) VM model for etch rate estimation of a plasma etch process, the proposed dynamic sampling methodology is demonstrated and evaluated for a number of different predictive dynamic sampling rules. © 2013 IEEE.
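The abstract does not spell out the predictive sampling rules evaluated; one plausible form, sketched here purely as an assumption, triggers a real metrology measurement when the GPR model's predictive uncertainty grows too large, with a cap on how many consecutive wafers may be skipped. The threshold values and the `needs_metrology` interface are hypothetical.

```python
def needs_metrology(pred_sigma, sigma_threshold, wafers_since_measured, max_skip):
    """Illustrative VM-enabled dynamic sampling rule: measure when the GPR
    predictive standard deviation exceeds a threshold, or when too many
    wafers have gone unmeasured since the last metrology step."""
    return pred_sigma > sigma_threshold or wafers_since_measured >= max_skip

# Usage: simulate a lot where model uncertainty grows between measurements
sigmas = [0.10, 0.15, 0.30, 0.60, 0.12]  # hypothetical GPR predictive sigmas
skipped, measured = 0, []
for i, sigma in enumerate(sigmas):
    if needs_metrology(sigma, sigma_threshold=0.5, wafers_since_measured=skipped, max_skip=4):
        measured.append(i)   # take a real etch-rate measurement
        skipped = 0          # measurement also refreshes the GPR model
    else:
        skipped += 1         # trust the virtual-metrology estimate
```

The trade-off the paper studies lives in the two parameters: a lower `sigma_threshold` or `max_skip` buys control fidelity at the cost of more metrology.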