40 results for "Time of processing"
Abstract:
Al-5 wt pct Si alloy is processed by upset forging in the temperature range 300 K to 800 K and in the strain rate range 0.02 to 200 s−1. The hardness and tensile properties of the product have been studied. A “safe” window in the strain rate-temperature field has been identified for processing of this alloy to obtain maximum tensile ductility in the product. For the above strain rate range, the temperature range of processing is 550 K to 700 K for obtaining high ductility in the product. On the basis of microstructure and the ductility of the product, the temperature-strain rate regimes of damage due to cavity formation at particles and wedge cracking have been isolated for this alloy. The tensile fracture features recorded on the product specimens are in conformity with the above damage mechanisms. A high temperature treatment above ≈600 K followed by fairly fast cooling gives solid solution strengthening in the alloy at room temperature.
Abstract:
An efficient measurement technique for studying the effect of transient electromagnetic fields under controlled conditions has been described. Broad-band TEM fields with a rise-time of a few nanoseconds were generated using a stripline method. Theoretical results are obtained and experimental measurements which confirm these results are described. The work will form the basis for a study of the susceptibility of digital integrated circuits and their interconnections to transient electromagnetic fields.
Abstract:
Mycobacterium tuberculosis readily activates both CD4+ and Vdelta2+ gammadelta T cells. Despite similarity in function, these T-cell subsets differ in the antigens they recognize and the manners in which these antigens are presented by M. tuberculosis-infected monocytes. We investigated mechanisms of antigen processing of M. tuberculosis antigens to human CD4 and gammadelta T cells by monocytes. Initial uptake of M. tuberculosis bacilli and subsequent processing were required for efficient presentation not only to CD4 T cells but also to Vdelta2+ gammadelta T cells. For gammadelta T cells, recognition of M. tuberculosis-infected monocytes was dependent on Vdelta2+ T-cell-receptor expression. Recognition of M. tuberculosis antigens by CD4+ T cells was restricted by the class II major histocompatibility complex molecule HLA-DR. Processing of M. tuberculosis bacilli for Vdelta2+ gammadelta T cells was inhibitable by Brefeldin A, whereas processing of soluble mycobacterial antigens for gammadelta T cells was not sensitive to Brefeldin A. Processing of M. tuberculosis bacilli for CD4+ T cells was unaffected by Brefeldin A. Lysosomotropic agents such as chloroquine and ammonium chloride did not affect the processing of M. tuberculosis bacilli for CD4+ and gammadelta T cells. In contrast, both inhibitors blocked processing of soluble mycobacterial antigens for CD4+ T cells. Chloroquine and ammonium chloride insensitivity of processing of M. tuberculosis bacilli was not dependent on the viability of the bacteria, since processing of both formaldehyde-fixed dead bacteria and mycobacterial antigens covalently coupled to latex beads was chloroquine insensitive. Thus, the manner in which mycobacterial antigens were taken up by monocytes (particulate versus soluble) influenced the antigen processing pathway for CD4+ and gammadelta T cells.
Abstract:
Run-time interoperability between different applications based on H.264/AVC is an emerging need in networked infotainment, where media delivery must match the desired resolution and quality of the end terminals. In this paper, we describe the architecture and design of a polymorphic ASIC to support this. The H.264 decoding flow is partitioned into modules such that the polymorphic ASIC meets the design goals of low power, low area, high flexibility, high throughput and fast interoperability between different profiles and levels of H.264. We demonstrate the idea with a multi-mode decoder that can decode baseline, main and high profile H.264 streams and can interoperate at run-time across these profiles. The decoder is capable of processing frame sizes of up to 1024 × 768 at 30 fps. The design, synthesized with UMC 0.13 µm technology, occupies 250 k gates and runs at 100 MHz.
Abstract:
The problem of scheduling divisible loads in distributed computing systems in the presence of processor release times is considered. The objective is to find the optimal sequence of load distribution and the optimal load fractions assigned to each processor in the system such that the processing time of the entire load is minimized. This is a difficult combinatorial optimization problem, and hence a genetic algorithm approach is presented for its solution.
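As a rough illustration of the genetic-algorithm idea described above, the sketch below evolves only the load fractions for a fixed distribution sequence and ignores communication delays; fitness is the makespan when each processor starts at its own release time. All speeds, release times and GA parameters are invented for illustration and are not taken from the paper.

```python
import random

def makespan(fractions, speeds, release, load=1.0):
    """Finish time of the slowest processor: processor i becomes
    available at its release time and computes its fraction of the
    load at its own speed (communication delays ignored)."""
    return max(r + f * load / s for f, s, r in zip(fractions, speeds, release))

def normalise(xs):
    """Rescale so the fractions sum to 1 (a valid load split)."""
    t = sum(xs)
    return [x / t for x in xs]

def ga_fractions(speeds, release, pop=40, gens=200, seed=1):
    """Tiny elitist GA over load-fraction vectors."""
    rng = random.Random(seed)
    n = len(speeds)
    popn = [normalise([rng.random() for _ in range(n)]) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda f: makespan(f, speeds, release))
        elite = popn[: pop // 2]                     # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            i = rng.randrange(n)
            child[i] *= 1 + rng.uniform(-0.2, 0.2)        # mutation
            children.append(normalise(child))
        popn = elite + children
    return min(popn, key=lambda f: makespan(f, speeds, release))

# invented instance: one fast and two slow processors, staggered releases
best = ga_fractions(speeds=[2.0, 1.0, 1.0], release=[0.0, 0.1, 0.2])
```

With these numbers the optimal makespan (all processors finishing together) is 0.325, so the GA result can be checked against that bound.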
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine which can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch is equal to the longest processing time among the jobs in the batch. Owing to the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. We therefore propose a few simple greedy heuristic algorithms and a meta-heuristic, simulated annealing (SA). A series of computational experiments is conducted to evaluate the proposed heuristics against exact solutions on various small problem instances and against estimated optimal solutions on various real-life, large-size problem instances. The computational results show that the SA algorithm, with an initial solution obtained by our proposed greedy heuristic, consistently finds a robust solution in a reasonable amount of computation time.
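The batch-machine model above can be made concrete with a toy greedy heuristic: order jobs by due date (EDD), fill batches up to the capacity, and charge each batch the longest processing time of its jobs. This is only a minimal sketch of the problem setting, not the paper's algorithms; the job data in the usage lines are invented.

```python
from dataclasses import dataclass

@dataclass
class Job:
    p: float        # processing time
    d: float        # due date
    s: float        # size
    w: float        # tardiness weight
    r: float = 0.0  # release time

def edd_batches(jobs, capacity):
    """Greedy EDD batching: take jobs in due-date order and close a
    batch whenever the next job would exceed the machine capacity."""
    batches, current, used = [], [], 0.0
    for j in sorted(jobs, key=lambda j: j.d):
        if used + j.s > capacity and current:
            batches.append(current)
            current, used = [], 0.0
        current.append(j)
        used += j.s
    if current:
        batches.append(current)
    return batches

def total_weighted_tardiness(batches):
    """Run batches back to back; a batch cannot start before the
    latest release time of its jobs and lasts as long as its
    longest job."""
    t, twt = 0.0, 0.0
    for b in batches:
        start = max(t, max(j.r for j in b))
        t = start + max(j.p for j in b)
        twt += sum(j.w * max(0.0, t - j.d) for j in b)
    return twt

# invented instance: two small jobs fit one batch, the large job does not
jobs = [Job(p=2, d=5, s=1, w=1), Job(p=3, d=6, s=1, w=1), Job(p=1, d=10, s=2, w=1)]
batches = edd_batches(jobs, capacity=2)
twt = total_weighted_tardiness(batches)
```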
Abstract:
Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. The technique becomes all the more attractive when we consider progressive transmission in a teleradiology system, where the transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits a fundamental property of the wavelet transform: its ability to analyze an image at different resolution levels, together with the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising error reduction when compared with the unrestored case. It can also adapt to situations where the noise level in the image varies and to the changing requirements of medical experts. The approach has implications for the restoration of medical images in teleradiology systems, and the scheme is computationally efficient.
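The scheme above restores selected bit-planes of wavelet coefficients. As a minimal stand-in for the underlying idea, the sketch below applies plain hard thresholding to the detail sub-band of a one-level Haar transform of an even-length 1-D signal, which is where fine-grained noise concentrates; the paper's bit-plane budgeting and edge-direction logic are not modelled, and the threshold value is arbitrary.

```python
def haar_forward(x):
    """One level of the orthonormal Haar transform: pairwise
    averages (approximation) and differences (detail)."""
    r2 = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / r2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / r2 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    r2 = 2 ** 0.5
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / r2, (ai - di) / r2]
    return x

def denoise(x, threshold):
    """Hard-threshold the detail sub-band: fine-grained noise lives
    mostly in small detail coefficients, so zero them out and
    reconstruct."""
    a, d = haar_forward(x)
    d = [0.0 if abs(di) < threshold else di for di in d]
    return haar_inverse(a, d)
```

For example, `denoise([1.0, 1.1, 1.0, 0.9], 0.2)` suppresses the small fluctuations while preserving the coarse step from ~1.05 down to ~0.95.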
Abstract:
Avoiding the loss of coherence of quantum mechanical states is an important prerequisite for quantum information processing. Dynamical decoupling (DD) is one of the most effective experimental methods for maintaining coherence, especially when one can access only the qubit system and not its environment (bath). It involves applying pulses to the system whose net effect is a reversal of the system-environment interaction. In any real system, however, the environment is not static, and the reversal of the system-environment interaction therefore becomes imperfect when the spacing between refocusing pulses is comparable to or longer than the correlation time of the environment. The efficiency of the refocusing therefore improves as the spacing between the pulses is reduced. Here, we quantify the efficiency of different DD sequences in preserving different quantum states. We use C-13 nuclear spins as qubits and H-1 nuclear spins as the environment, which couples to the qubits via magnetic dipole-dipole interactions. Strong dipole-dipole couplings between the proton spins result in a rapidly fluctuating environment with a correlation time of the order of 100 µs. Our experimental results show that pulse delays that are short compared with the bath correlation time yield better performance. However, as the pulse spacing becomes shorter still, an optimum is reached: for even shorter delays, pulse imperfections dominate over the decoherence losses and cause the quantum state to decay.
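The qualitative claim above, that refocusing works when the pulse spacing is short compared with the bath correlation time, can be sketched with a toy Monte Carlo model: random telegraph noise of correlation time `tau_c` dephases the qubit, and ideal instantaneous pi pulses toggle the sign of further phase accumulation. All rates and durations here are invented, and the real pulse imperfections that set the experimental optimum are not modelled.

```python
import math
import random

def coherence(tau_pulse, tau_c, total=60.0, dt=0.05, delta=0.5,
              trials=200, seed=0):
    """Mean qubit coherence <cos(phi)> under an evenly spaced
    pulse sequence. The bath is random telegraph noise (+/- delta)
    flipping at rate 1/tau_c; each pi pulse reverses the sign with
    which further phase accumulates (toggling frame)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        b = rng.choice([-1, 1])      # current bath state
        sign, phi = 1.0, 0.0         # toggling-frame sign, accumulated phase
        next_pulse, t = tau_pulse, 0.0
        while t < total:
            if rng.random() < dt / tau_c:   # bath fluctuates
                b = -b
            phi += sign * b * delta * dt
            t += dt
            if t >= next_pulse:             # ideal, instantaneous pi pulse
                sign = -sign
                next_pulse += tau_pulse
        acc += math.cos(phi)
    return acc / trials

fast = coherence(tau_pulse=0.2, tau_c=5.0)   # spacing << correlation time
slow = coherence(tau_pulse=20.0, tau_c=5.0)  # spacing >> correlation time
```

In this toy model `fast` stays close to 1 while `slow` collapses toward 0, mirroring the trend reported in the abstract (up to the point where pulse errors, absent here, take over).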
Abstract:
Compressive stress-strain curves have been generated over a range of temperatures (900-1100 °C) and strain rates (0.001-100 s⁻¹) for two starting structures, lath alpha2 and equiaxed alpha2, in a Ti-24Al-11Nb alloy. The data from these tests have been analysed in terms of a dynamic model for processing. The results define domains of strain rate and temperature in which dynamic recrystallization of alpha2 occurs for both starting structures. The rate-controlling process for dynamic recrystallization is suggested to be cross-slip in the alpha2 phase. A region of processing instability has also been defined, within which shear bands form in the lath structure. Recrystallization of the beta phase is shown to occur at different combinations of strain rate and temperature from those at which the alpha2 phase recrystallizes dynamically.
Abstract:
The characteristics of the hot deformation of Zr-2.5Nb (wt-%) in the temperature range 650-950 °C and in the strain rate range 0.001-100 s⁻¹ have been studied using hot compression testing. Two different preform microstructures, equiaxed (alpha + beta) and beta transformed, have been investigated. For this study, the approach of processing maps has been adopted and their interpretation carried out using the dynamic materials model. The efficiency of power dissipation, given by [2m/(m + 1)], where m is the strain rate sensitivity, is plotted as a function of temperature and strain rate to obtain a processing map. A domain of dynamic recrystallisation has been identified in the maps of both the equiaxed (alpha + beta) and beta transformed preforms. In the case of the equiaxed (alpha + beta) preform, the stress-strain curves are steady state, and the dynamic recrystallisation domain in the map occurs with a peak efficiency of 45% at 850 °C and 0.001 s⁻¹. The beta transformed preform, on the other hand, exhibits stress-strain curves with continuous flow softening. The corresponding processing map shows a domain of dynamic recrystallisation, occurring by the shearing of alpha platelets followed by globularisation, with a peak efficiency of 54% at 750 °C and 0.001 s⁻¹. The characteristics of dynamic recrystallisation are analysed on the basis of a simple model which considers the rates of nucleation and growth of recrystallised grains. Calculations show that these two rates are nearly equal and that the nucleation of dynamic recrystallisation is essentially controlled by mechanical recovery involving the cross-slip of screw dislocations. Analysis of flow instabilities using a continuum criterion revealed that Zr-2.5Nb exhibits flow localisation at temperatures lower than 700 °C and strain rates higher than 1 s⁻¹.
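The processing-map quantities used above reduce to two short formulas: the strain rate sensitivity m, which can be estimated from flow stresses measured at two strain rates, and the efficiency of power dissipation 2m/(m + 1) from the dynamic materials model. The flow-stress values in the usage lines below are invented for illustration.

```python
import math

def strain_rate_sensitivity(stress1, rate1, stress2, rate2):
    """m = d(ln sigma)/d(ln strain rate), estimated from two
    flow-stress readings at the same temperature and strain."""
    return (math.log(stress2) - math.log(stress1)) / \
           (math.log(rate2) - math.log(rate1))

def power_dissipation_efficiency(m):
    """Efficiency of power dissipation eta = 2m/(m + 1) from the
    dynamic materials model; mapping eta over the (temperature,
    strain rate) field gives the processing map."""
    return 2.0 * m / (m + 1.0)

# invented flow stresses (MPa) at two strain rates (s^-1)
m = strain_rate_sensitivity(50.0, 0.001, 80.0, 0.1)
eta = power_dissipation_efficiency(m)
```

Note that an ideal linear dissipator (m = 1) gives eta = 1, and the 45% peak efficiency quoted above corresponds to m of roughly 0.29.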
Abstract:
The characteristics of the hot deformation of beta-quenched Zr-2.5Nb-0.5Cu in the temperature range 650-1050 °C and in the strain rate range 0.001-100 s⁻¹ have been studied using hot compression testing. For this study, the approach of processing maps has been adopted and their interpretation carried out using the dynamic materials model. The efficiency of power dissipation, given by [2m/(m + 1)], where m is the strain rate sensitivity, is plotted as a function of temperature and strain rate to obtain a processing map. The processing map for Zr-2.5Nb-0.5Cu within the (alpha + beta) phase field showed a domain of dynamic recrystallization, occurring by shearing of alpha platelets followed by spheroidization, with a peak efficiency of 48% at 750 °C and 0.001 s⁻¹. The stress-strain curves in this domain exhibit continuous flow softening, and these features are similar to those of the Zr-2.5Nb alloy. In the beta phase field, a second domain with a peak efficiency of 47% occurred at 1050 °C and 0.001 s⁻¹; this domain is correlated with the superplasticity of the beta phase. The beta-deformation characteristics of this alloy are similar to those observed in pure beta-zirconium with large grain size. Analysis of flow instabilities using a continuum criterion revealed that Zr-2.5Nb-0.5Cu exhibits flow localization at temperatures higher than 800 °C and strain rates higher than about 30 s⁻¹, and that the addition of copper to Zr-2.5Nb reduces its susceptibility to flow instability, particularly in the (alpha + beta) phase field.
Abstract:
Fork-join queueing systems offer a natural modelling paradigm for parallel processing systems and for assembly operations in automated manufacturing, and their analysis has been an important subject of research in recent years. Existing analysis methodologies, both exact and approximate, assume that the servers are failure-free. In this study, we consider fork-join queueing systems in the presence of server failures and compute the cumulative distribution of performability with respect to the response time of such systems. For this, we employ a computational methodology based on a recent randomization technique. We compare the performability of three fork-join queueing models proposed in the literature: the distributed model, the centralized splitting model, and the split-merge model. The numerical results show that the centralized splitting model offers the highest performability, followed by the distributed and split-merge models.
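As a small illustration of fork-join response time (not of the performability analysis itself, which adds server failures and the randomization technique), the sketch below estimates the response-time distribution of a single fork-join job over k parallel exponential servers by Monte Carlo: the join waits for the slowest sibling task. The parameter values are invented.

```python
import random

def fork_join_response(k, mu, rng):
    """One fork-join job: split into k sibling tasks served in
    parallel by exponential servers of rate mu; the join completes
    when the slowest task finishes."""
    return max(rng.expovariate(mu) for _ in range(k))

def response_cdf(t, k, mu, trials=20000, seed=7):
    """Monte Carlo estimate of P(response time <= t)."""
    rng = random.Random(seed)
    hits = sum(fork_join_response(k, mu, rng) <= t for _ in range(trials))
    return hits / trials
```

For this no-queueing special case the estimate can be checked against the closed form P(max <= t) = (1 - e^(-mu t))^k; with k = 4, mu = 1, t = 2 that is about 0.559.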
Abstract:
Structural Health Monitoring (SHM) has gained wide acceptance in the recent past as a means to monitor a structure and provide an early warning of an unsafe condition using real-time data. Utilization of structurally integrated, distributed sensors to monitor the health of a structure, through accurate interpretation of sensor signals and real-time data processing, can greatly reduce the inspection burden. The rapid recent improvement of fiber optic sensor technology for strain, vibration, ultrasonic and acoustic emission measurements makes it a feasible alternative to the traditional strain gauges, PVDF and conventional piezoelectric sensors used for Non-Destructive Evaluation (NDE) and SHM. Optical fiber-based sensors offer advantages over conventional strain gauges and PZT devices in terms of size, ease of embedment, immunity from electromagnetic interference (EMI) and the potential for multiplexing a number of sensors. The objective of this paper is to demonstrate acoustic wave sensing using an Extrinsic Fabry-Perot Interferometric (EFPI) sensor on GFRP composite laminates. For this purpose, experiments were first carried out for strain measurement with fiber optic sensors on GFRP laminates with intentionally introduced holes of different sizes as defects. The results obtained from these experiments are presented in this paper. Numerical modeling has been carried out to obtain the relationship between the defect size and strain.
Abstract:
In the determination of the response time of u.h.v. damped capacitive impulse voltage dividers using the CIGRE IMR-1MS group (1) method and the arrangement suggested by the International Electrotechnical Commission (the IEC square loop), the surge impedance of the connecting lead has been found to influence the accuracy of determination. To avoid this difficulty, a new graphical procedure is proposed. As this method uses only those data points which can be determined with good accuracy, errors in response-time area evaluation do not influence the result.
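The response-time area referred to above is conventionally obtained as T = integral of (1 - g(t)) dt, where g(t) is the divider's normalised unit-step response. A minimal numeric version (trapezoidal rule, with an invented RC-type step response as sample data) might look like:

```python
import math

def response_time_area(times, g):
    """Response-time area T = integral of (1 - g(t)) dt by the
    trapezoidal rule, where g(t) is the normalised unit-step
    response sampled at the given times."""
    area = 0.0
    for i in range(len(times) - 1):
        h = times[i + 1] - times[i]
        area += h * ((1.0 - g[i]) + (1.0 - g[i + 1])) / 2.0
    return area

# invented example: g(t) = 1 - exp(-t/tau) with tau = 0.5,
# for which the response-time area equals tau
ts = [i * 0.001 for i in range(5001)]          # t from 0 to 5
step = [1.0 - math.exp(-t / 0.5) for t in ts]
T = response_time_area(ts, step)
```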
Abstract:
Instruction reuse is a microarchitectural technique that improves the execution time of a program by removing redundant computations at run-time. Although this is the job of an optimizing compiler, compilers often fail to do so because of their limited knowledge of run-time data. In this paper we examine instruction reuse of integer ALU and load instructions in network processing applications. Specifically, this paper attempts to answer the following questions: (1) How much instruction reuse is inherent in network processing applications? (2) Can reuse be improved by reducing interference in the reuse buffer? (3) What characteristics of network applications can be exploited to improve reuse? (4) What is the effect of reuse on resource contention and memory accesses? We propose an aggregation scheme that combines a high-level concept of network traffic, i.e. "flows", with a low-level microarchitectural feature of programs, i.e. the repetition of instructions and data, together with an architecture that exploits temporal locality in incoming packet data to improve reuse. We find that, for the benchmarks considered, 1% to 50% of instructions are reused, while the speedup achieved varies between 1% and 24%. As a side effect, instruction reuse reduces memory traffic and can therefore also be considered a low-power scheme.
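A reuse buffer of the kind discussed above can be sketched as a direct-mapped table keyed by opcode and operand values: a hit returns the cached result and skips the ALU operation, while a colliding entry evicts the previous one (the interference the paper seeks to reduce). The table size and operation set here are invented for illustration.

```python
class ReuseBuffer:
    """Toy instruction reuse buffer: a direct-mapped table keyed by
    (opcode, operand values). On a hit the cached result is returned
    and the ALU operation is skipped; on a miss the instruction
    executes and the entry is filled, possibly evicting another key
    that maps to the same slot."""

    def __init__(self, entries=64):
        self.entries = entries
        self.table = {}              # slot index -> (key, result)
        self.hits = self.misses = 0

    def execute(self, op, a, b):
        key = (op, a, b)
        index = hash(key) % self.entries      # direct-mapped indexing
        slot = self.table.get(index)
        if slot is not None and slot[0] == key:
            self.hits += 1                    # redundant computation removed
            return slot[1]
        self.misses += 1
        result = {"add": a + b, "sub": a - b, "xor": a ^ b}[op]
        self.table[index] = (key, result)     # interference: evicts old key
        return result
```

In a flow-aware variant, packets of the same flow carry many identical header fields, so the same (opcode, operands) tuples recur and the hit rate rises, which is the intuition behind the aggregation scheme described above.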