Abstract:
A circular array of Piezoelectric Wafer Active Sensors (PWAS) has been employed to detect surface damage such as corrosion using Lamb waves. The array consists of a number of small PWASs, each 10 mm in diameter and 1 mm thick. The advantage of a circular array is its compact arrangement and the large area it can monitor through a small area of physical access. Growth of corrosion is monitored in a laboratory-scale set-up using the PWAS array, and the nature of the reflected and transmitted Lamb wave patterns due to corrosion is investigated. Wavelet time-frequency maps of the sensor signals are employed, and a damage index is plotted against the damage parameters and the varying frequency of the actuation signal (a windowed sine signal). The variation of the wavelet coefficient with growing corrosion is studied. The wavelet coefficient as a function of time gives insight into the effect of corrosion on the time-frequency scale. We present a method to eliminate the time-scale effect, which makes it easy to identify the signature of damage in the measured signals. The proposed method is useful in determining the approximate location of the corrosion with respect to the locations of three neighboring sensors in the circular array. A cumulative damage index is computed for varying damage sizes, and the results appear promising.
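The abstract does not give the exact damage-index formula, so the following is a minimal sketch, assuming a Morlet-based continuous wavelet transform and a normalized energy-difference index; all function names and parameters are hypothetical, not the paper's.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x - 0.5 * x**2) / np.sqrt(scale)

def cwt_magnitude(sig, dt, scales):
    """Magnitude of a continuous wavelet transform (scales x time)."""
    n = len(sig)
    t = (np.arange(n) - n // 2) * dt
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        out[i] = np.abs(np.convolve(sig, np.conj(morlet(t, s))[::-1], "same")) * dt
    return out

def cumulative_damage_index(baseline, damaged, dt, scales):
    """Normalized wavelet-energy difference between damaged and baseline signals."""
    wb = cwt_magnitude(baseline, dt, scales)
    wd = cwt_magnitude(damaged, dt, scales)
    return float(np.sum((wd - wb) ** 2) / np.sum(wb ** 2))

# toy check: a delayed, attenuated echo raises the index above zero
dt, n = 1e-7, 2048
t = np.arange(n) * dt
tone = np.sin(2 * np.pi * 3e5 * t) * np.hanning(n)   # windowed sine burst
echo = tone + 0.2 * np.roll(tone, 300)               # "damage" reflection
print(cumulative_damage_index(tone, echo, dt, scales=np.geomspace(1e-6, 1e-5, 16)))
```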
Abstract:
The Ag-Ni system is characterized by a large difference in atomic sizes (14%) and a positive heat of mixing (+23 kJ mol^-1). The binary equilibrium diagram for this system therefore exhibits a large miscibility gap in both the solid and liquid states. This paper explores the size-dependent changes in microstructure and the suppression of the miscibility gap that occur when free alloy particles of nanometer size are synthesized by co-reduction of Ag and Ni metal precursors. The paper reports that complete mixing between Ag and Ni atoms could be achieved for smaller nanoparticles (<7 nm). These particles exhibit a single-phase solid solution with a face-centered cubic (fcc) structure. With increasing size, the nanoparticles revealed two distinct regions: one composed of pure Ag, which partially surrounds a region of fcc solid solution at an early stage of decomposition. Experimental observations were compared with thermodynamic calculations comparing the free energies of a physical mixture of pure Ag and Ni phases and of an fcc Ag-Ni solid solution for different particle sizes. The theoretical calculations revealed that, for the Ag-Ni system, the solid solution is energetically preferred over the physical mixture for particle sizes of 7 nm and below. The experimentally observed two-phase microstructure for larger particles was thus primarily due to the epitaxial growth of Ag-rich regions on initially formed small fcc Ag-Ni nanoparticles.
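The free-energy comparison lends itself to a back-of-the-envelope sketch. The code below assumes a regular-solution model using the +23 kJ/mol mixing enthalpy quoted above, and folds all size effects into a single illustrative interfacial-cost term for the two-phase particle; the coefficient is tuned only to place the crossover near 7 nm and is not taken from the paper.

```python
import numpy as np

R, T = 8.314, 300.0   # gas constant (J/mol/K) and an assumed temperature (K)
OMEGA = 23e3          # regular-solution parameter from the +23 kJ/mol heat of mixing

def g_solution(x):
    """Molar free energy of mixing for an fcc Ag-Ni solid solution (regular solution)."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return OMEGA * x * (1 - x) + R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

def g_mixture(d_nm, k=29e3):
    """Free energy of a phase-separated particle: zero mixing term plus an
    illustrative interfacial cost ~ k/d (k in J nm/mol, tuned so the
    crossover lands near 7 nm)."""
    return k / d_nm

for d in (3, 5, 7, 10, 20):
    dg = g_solution(0.5) - g_mixture(d)
    print(f"d = {d:2d} nm: solution {'preferred' if dg < 0 else 'disfavoured'} "
          f"(dG = {dg / 1e3:+.1f} kJ/mol)")
```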
Abstract:
Computational grids are increasingly being used for executing large multi-component scientific applications. The most widely reported advantages of application execution on grids are the performance benefits, in terms of speed, problem size, or quality of solution, due to the increased number of processors. We explore the possibility of improved performance on grids without increasing the application's processor space. For this, we consider grids with multiple batch systems. We explore the challenges involved in, and the advantages of, executing long-running multi-component applications on multiple batch sites, using a popular multi-component climate simulation application, CCSM, as the motivation. We have performed extensive simulation studies to estimate the single- and multi-site execution rates of the applications for different system characteristics. Our experiments show that, in many cases, multi-site batch executions can achieve better execution rates than a single-site execution.
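As a toy illustration of why multiple batch sites can beat one, consider a long job executed as a sequence of allocations, each preceded by a queue wait; with several sites, the job resumes at whichever queue clears first. This is a drastic simplification of the paper's simulation studies, with made-up parameters.

```python
import random

def execution_rate(n_sites, mean_wait_h, quantum_h, allocations=10000, seed=7):
    """Fraction of wall-clock time spent computing when each allocation's
    queue wait is exponential and the fastest of n_sites queues is taken."""
    rng = random.Random(seed)
    useful = wall = 0.0
    for _ in range(allocations):
        wait = min(rng.expovariate(1.0 / mean_wait_h) for _ in range(n_sites))
        useful += quantum_h
        wall += wait + quantum_h
    return useful / wall

for sites in (1, 2, 4):
    print(f"{sites} site(s): rate = {execution_rate(sites, mean_wait_h=6, quantum_h=12):.3f}")
```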
Abstract:
DC testing of parametric faults in non-linear analog circuits, based on a new transformation termed the V-Transform that acts on the polynomial coefficient expansion of the circuit function, is presented. The V-Transform serves the dual purpose of making the polynomial coefficients of the circuit function expansion monotonic and increasing the sensitivity of these coefficients to circuit parameters. The sensitivity of the V-Transform Coefficients (VTC) to circuit parameters is up to 3-5 times greater than that of the polynomial coefficients. As a case study, we consider a benchmark elliptic filter to validate our method. The technique is shown to uncover hitherto untestable parametric faults whose sizes are smaller than 10% of the nominal values.
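The V-Transform itself is not defined in this abstract, so no attempt is made to reproduce it here. The sketch below only illustrates the baseline quantity it improves upon: the finite-difference sensitivity of the polynomial coefficients of a DC transfer curve to a circuit parameter, using a made-up tanh stage in place of the benchmark elliptic filter.

```python
import numpy as np

def poly_coeffs(response, x, degree=5):
    """Polynomial expansion coefficients of a sampled DC transfer curve."""
    return np.polyfit(x, response(x), degree)

def coeff_sensitivity(make_response, p_nom, x, degree=5, rel_step=1e-3):
    """Central finite-difference sensitivity d(coeff)/dp at the nominal parameter."""
    dp = rel_step * p_nom
    cp = poly_coeffs(make_response(p_nom + dp), x, degree)
    cm = poly_coeffs(make_response(p_nom - dp), x, degree)
    return (cp - cm) / (2 * dp)

# hypothetical non-linear stage: vout = tanh(g * vin); g stands in for a circuit parameter
stage = lambda g: (lambda vin: np.tanh(g * vin))
vin = np.linspace(-1.0, 1.0, 201)
print(coeff_sensitivity(stage, p_nom=2.0, x=vin))  # most sensitive coefficients
```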
Abstract:
In this paper we propose a concept and report experimental results based on a circular array of Piezoelectric Wafer Active Sensors (PWASs) for rapid localization and parametric identification of corrosion-type damage in metallic plates. The circular array combines the ultrasonic Lamb wave propagation technique with an algorithm based on symmetry breaking in the signal pattern to locate and monitor the growth of a corrosion pit on a metallic plate. Wavelet time-frequency maps of the sensor signals are employed to gain insight into the effect of corrosion growth on Lamb wave transmission on the time-frequency scale. We present a method to eliminate the time scale, which makes it easy to identify the signature of damage in the measured signals. The proposed method is useful in determining the approximate location of the damage with respect to the locations of three neighboring sensors in the circular array. A cumulative damage index is computed from the wavelet coefficients for varying damage sizes, and the results appear promising. The damage index is plotted against the damage parameters for a frequency sweep of the excitation signal (a windowed sine signal). Results for corrosion damage are compared with circular holes of various sizes to demonstrate the applicability of the present method to different types of damage.
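The localization step can be pictured with a small sketch: once per-sensor damage indices are available, the damage direction follows from which neighboring sensors see the strongest change. The weighted-centroid rule below is a hypothetical reading of the symmetry-breaking idea, not the paper's algorithm.

```python
import numpy as np

def approximate_damage_location(sensor_xy, damage_index):
    """Weighted centroid of the three sensors with the largest damage indices."""
    idx = np.argsort(damage_index)[-3:]
    w = damage_index[idx] / damage_index[idx].sum()
    return w @ sensor_xy[idx]

# eight PWASs on a 50 mm radius circle; indices peak near sensors 2-4
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
xy = 50 * np.column_stack([np.cos(theta), np.sin(theta)])
di = np.array([0.02, 0.05, 0.40, 0.55, 0.30, 0.04, 0.01, 0.02])
print(approximate_damage_location(xy, di))   # a point between sensors 2, 3 and 4
```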
Abstract:
Accurate system planning and performance evaluation require knowledge of the joint impact of scheduling, interference, and fading. However, current analyses either require costly numerical simulations or make simplifying assumptions that limit the applicability of the results. In this paper, we derive analytical expressions for the spectral efficiency of cellular systems that use either the channel-unaware but fair round-robin scheduler or the greedy, channel-aware but unfair maximum signal-to-interference-ratio scheduler. As is the case in real deployments, non-identical co-channel interference at each user, both Rayleigh fading and lognormal shadowing, and limited modulation constellation sizes are accounted for in the analysis. We show that using a simple moment generating function-based lognormal approximation technique and an accurate Gaussian Q-function approximation leads to results that match simulations well. These results are more accurate than earlier results that instead used the moment-matching Fenton-Wilkinson approximation method and bounds on the Q-function. The spectral efficiency of cellular systems is strongly influenced by the channel scheduler and by the small constellation sizes typically used in third-generation cellular systems.
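The abstract does not name the specific Q-function approximation used; the widely cited two-term exponential approximation of Chiani et al. is one example of the kind of accuracy involved.

```python
import math

def q_exact(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_approx(x):
    """Chiani et al. two-term exponential approximation, valid for x >= 0."""
    return math.exp(-x * x / 2) / 12 + math.exp(-2 * x * x / 3) / 4

for x in (0.5, 1.0, 2.0, 3.0):
    print(f"x = {x}: exact = {q_exact(x):.2e}, approx = {q_approx(x):.2e}")
```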
Abstract:
We look at graphical descriptions of block codes known as trellises, which illustrate connections between algebra and graph theory and can be used to develop powerful decoding algorithms. Trellis sizes for linear block codes are known to grow exponentially with the code parameters. Of considerable interest to coding theorists, therefore, are more compact descriptions called tail-biting trellises, which in some cases can be much smaller than any conventional trellis for the same code. We derive some interesting properties of tail-biting trellises and present a new decoding algorithm.
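To make the trellis idea concrete, here is a minimal Viterbi decoder over an explicit edge-list trellis for the length-3 even-parity code; a tail-biting decoder would run the same recursion once per candidate start state. The data representation is hypothetical, chosen only for brevity, and is not the paper's new algorithm.

```python
def viterbi(trellis, received):
    """Minimum-Hamming-distance path through a trellis.
    trellis: list of sections; each section maps state -> [(next_state, out_bit)]."""
    dist, path = {0: 0}, {0: []}        # conventional trellis starts in state 0
    for sec, r in zip(trellis, received):
        ndist, npath = {}, {}
        for s, edges in sec.items():
            if s not in dist:
                continue
            for nxt, out in edges:
                d = dist[s] + (out != r)
                if nxt not in ndist or d < ndist[nxt]:
                    ndist[nxt], npath[nxt] = d, path[s] + [out]
        dist, path = ndist, npath
    best = min(dist, key=dist.get)
    return path[best], dist[best]

# two-state trellis (state = running parity) for the length-3 even-weight code
sec = {0: [(0, 0), (1, 1)], 1: [(1, 0), (0, 1)]}
last = {0: [(0, 0)], 1: [(0, 1)]}       # final section forces even parity
codeword, d = viterbi([sec, sec, last], [1, 0, 0])
print(codeword, d)                      # a nearest even-weight word to 100
```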
Abstract:
Structural Health Monitoring (SHM) has gained wide acceptance in the recent past as a means to monitor a structure and provide an early warning of an unsafe condition using real-time data. Utilizing structurally integrated, distributed sensors to monitor the health of a structure through accurate interpretation of sensor signals and real-time data processing can greatly reduce the inspection burden. The rapid improvement of fiber optic sensor technology for strain, vibration, ultrasonic, and acoustic emission measurements makes it a feasible alternative to the traditional strain gauges, PVDF, and conventional piezoelectric sensors used for Non-Destructive Evaluation (NDE) and SHM. Optical fiber-based sensors offer advantages over conventional strain gauges and PZT devices in terms of size, ease of embedment, immunity from electromagnetic interference (EMI), and the potential to multiplex a number of sensors. The objective of this paper is to demonstrate acoustic wave sensing using an Extrinsic Fabry-Perot Interferometric (EFPI) sensor on GFRP composite laminates. For this purpose, experiments were first carried out for strain measurement with fiber optic sensors on GFRP laminates with intentionally introduced holes of different sizes as defects. The results obtained from these experiments are presented in this paper. Numerical modeling has been carried out to obtain the relationship between the defect size and strain.
Abstract:
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm. Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA outperform OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA shows the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets, and they show good performance in both scenarios.
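The paper's random-projection scheme is not specified in this abstract; the sketch below shows the general shape of a one-dimensional SPSA loop over a discrete parameter grid, with randomized rounding standing in for the projection step. Names and constants are illustrative only.

```python
import random

def project(x, grid, rng):
    """Randomized projection: round to a neighbouring grid point with
    probability proportional to proximity (stand-in for the paper's step)."""
    lo = max(g for g in grid if g <= x) if x >= grid[0] else grid[0]
    hi = min(g for g in grid if g >= x) if x <= grid[-1] else grid[-1]
    if lo == hi:
        return lo
    return hi if rng.random() < (x - lo) / (hi - lo) else lo

def spsa_discrete(cost, theta0, grid, iters=200, a=0.5, c=1.0, seed=1):
    """SPSA adapted to a discrete parameter set via randomized projection."""
    rng = random.Random(seed)
    x = float(theta0)
    for k in range(1, iters + 1):
        ak, ck = a / k, c / k ** 0.25
        delta = rng.choice([-1.0, 1.0])
        g = (cost(project(x + ck * delta, grid, rng)) -
             cost(project(x - ck * delta, grid, rng))) / (2 * ck * delta)
        x = min(max(x - ak * g, grid[0]), grid[-1])
    return project(x, grid, rng)

grid = list(range(0, 101))                   # e.g., an admission-control threshold
noisy_cost = lambda n: (n - 37) ** 2 + random.gauss(0, 5)
print(spsa_discrete(noisy_cost, 50, grid))   # should settle near 37
```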
Abstract:
We consider the problem of scheduling semiconductor burn-in operations, where burn-in ovens are modelled as batch processing machines. Most studies assume that the ready times and due dates of jobs are agreeable (i.e., r_i < r_j implies d_i ≤ d_j). In many real-world applications, the agreeable property does not hold. Therefore, in this paper, the scheduling of a single burn-in oven with non-agreeable release times and due dates, non-identical job sizes, and non-identical processing times is formulated as a Non-Linear (0-1) Integer Programming optimisation problem. The objective is to minimise the maximum completion time (makespan) over all jobs. Due to computational intractability, we propose four variants of a two-phase greedy heuristic algorithm. Computational experiments indicate that two of the four proposed algorithms have excellent average performance and are also capable of solving large-scale real-life problems with relatively low computational effort on a Pentium IV computer.
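The four heuristic variants are not detailed in this abstract; the following sketch shows one plausible two-phase greedy of the same flavour, assuming the standard burn-in model: a batch occupies oven capacity according to job sizes, runs for the longest processing time among its jobs, and starts no earlier than its latest release time.

```python
from dataclasses import dataclass

@dataclass
class Job:
    size: int    # capacity units occupied in the oven
    p: float     # processing (burn-in) time
    r: float     # release time

def greedy_makespan(jobs, capacity):
    """Phase 1: order jobs by release time.
    Phase 2: pack consecutive jobs into batches that fit the oven."""
    jobs = sorted(jobs, key=lambda j: j.r)
    t, batch, used = 0.0, [], 0
    for j in jobs + [None]:                  # sentinel flushes the last batch
        if j is None or used + j.size > capacity:
            if batch:
                start = max(t, max(b.r for b in batch))
                t = start + max(b.p for b in batch)
            batch, used = [], 0
        if j is not None:
            batch.append(j)
            used += j.size
    return t

jobs = [Job(3, 8, 0), Job(4, 5, 2), Job(2, 9, 2), Job(5, 4, 6)]
print(greedy_makespan(jobs, capacity=8))     # 19.0 for this toy instance
```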
Abstract:
Due to the importance of collective communications in scientific parallel applications, many strategies have been devised for optimizing collective communications for different kinds of parallel environments. There has been increasing interest in developing efficient broadcast algorithms for computational grids. In this paper, we present application-oriented adaptive techniques that take into account resource characteristics as well as the application's usage of broadcasts when deriving efficient broadcast trees. In particular, we consider two broadcast parameters used by the application, namely the broadcast message sizes and the time intervals between broadcasts. The results indicate that our adaptive strategies can provide a 20% average improvement in performance over the popular MPICH-G2 MPI_Bcast implementation under loaded network conditions.
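A toy version of message-size-adaptive broadcast selection under a latency (alpha) plus per-byte (beta) cost model; the paper's actual strategies also weigh measured grid link characteristics and inter-broadcast intervals, which this sketch ignores.

```python
import math

def t_binomial(n, alpha, beta, m):
    """Binomial tree: ceil(log2 n) rounds, each sending the full message."""
    return math.ceil(math.log2(n)) * (alpha + m * beta)

def t_scatter_allgather(n, alpha, beta, m):
    """Van de Geijn broadcast: scatter the message, then allgather the pieces."""
    return (math.ceil(math.log2(n)) + n - 1) * alpha + 2 * (n - 1) / n * m * beta

def choose_tree(n, alpha, beta, m):
    """Pick the cheaper algorithm for this message size."""
    tb, ts = t_binomial(n, alpha, beta, m), t_scatter_allgather(n, alpha, beta, m)
    return ("binomial", tb) if tb <= ts else ("scatter-allgather", ts)

for m in (1e2, 1e8):                         # small vs large message (bytes)
    print(f"{m:.0e} B -> {choose_tree(16, alpha=1e-4, beta=1e-8, m=m)}")
```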
Abstract:
Modeling the performance behavior of parallel applications to predict their execution times for larger problem sizes and numbers of processors has been an active area of research for several years. The existing curve-fitting strategies for performance modeling utilize data from experiments conducted under uniform loading conditions; hence the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on experiments conducted with this model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions, with average percentage prediction errors of less than 20%.
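A one-dimensional sketch of rational-polynomial fitting (the paper's model is multi-dimensional, and its exact form is not given here): linearize y ≈ (a0 + a1·x)/(1 + b1·x) as y = a0 + a1·x - b1·x·y and solve by least squares. The synthetic data mimics execution time saturating as background load grows.

```python
import numpy as np

def fit_rational(x, y):
    """Least-squares fit of y ≈ (a0 + a1*x) / (1 + b1*x) via linearization."""
    A = np.column_stack([np.ones_like(x), x, -x * y])
    a0, a1, b1 = np.linalg.lstsq(A, y, rcond=None)[0]
    return a0, a1, b1

def predict(x, a0, a1, b1):
    return (a0 + a1 * x) / (1 + b1 * x)

# synthetic "execution time vs background load" data with mild noise
rng = np.random.default_rng(0)
load = np.linspace(0, 0.9, 20)
t_true = 10 * (1 + load) / (1 - 0.8 * load)        # saturating growth with load
t_obs = t_true * (1 + 0.02 * rng.standard_normal(load.size))
coef = fit_rational(load, t_obs)
print(np.abs(predict(load, *coef) - t_true).max())  # small residual expected
```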