Abstract:
We look at graphical descriptions of block codes known as trellises, which illustrate connections between algebra and graph theory and can be used to develop powerful decoding algorithms. Trellis sizes for linear block codes are known to grow exponentially with the code parameters. Of considerable interest to coding theorists, therefore, are more compact descriptions called tail-biting trellises, which in some cases can be much smaller than any conventional trellis for the same code. We derive some interesting properties of tail-biting trellises and present a new decoding algorithm.
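As a concrete reference point for trellis-based decoding, the sketch below runs a standard Viterbi-style shortest-path search over a small conventional trellis represented as edge lists. It is illustrative only: the toy trellis, state numbering and cost function are made up for the example, and this is not the tail-biting decoding algorithm introduced in the paper.

```python
# Minimal Viterbi-style decoding over a conventional trellis (illustrative only).
# A trellis is a list of sections; each section is a list of edges
# (from_state, to_state, output_bit).

def viterbi_decode(trellis, received, bit_cost):
    costs = {0: 0.0}          # best accumulated cost of reaching each state (start in state 0)
    paths = {0: []}           # best partial codeword ending in each state
    for section, r in zip(trellis, received):
        new_costs, new_paths = {}, {}
        for (s, t, out) in section:
            if s not in costs:
                continue
            c = costs[s] + bit_cost(out, r)
            if t not in new_costs or c < new_costs[t]:
                new_costs[t] = c
                new_paths[t] = paths[s] + [out]
        costs, paths = new_costs, new_paths
    best_state = min(costs, key=costs.get)
    return paths[best_state], costs[best_state]

# Toy 2-state trellis whose only codewords are 000 and 111, hard-decision costs.
toy_trellis = [
    [(0, 0, 0), (0, 1, 1)],   # section 1
    [(0, 0, 0), (1, 1, 1)],   # section 2
    [(0, 0, 0), (1, 0, 1)],   # section 3 (paths merge back to state 0)
]
hamming = lambda out, r: 0.0 if out == r else 1.0
print(viterbi_decode(toy_trellis, [1, 1, 0], hamming))   # -> ([1, 1, 1], 1.0)
```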
Abstract:
Structural Health Monitoring has gained wide acceptance in recent years as a means to monitor a structure and provide an early warning of an unsafe condition using real-time data. Utilization of structurally integrated, distributed sensors to monitor the health of a structure, through accurate interpretation of sensor signals and real-time data processing, can greatly reduce the inspection burden. The rapid improvement of fiber optic sensor technology for strain, vibration, ultrasonic and acoustic emission measurements makes it a feasible alternative to the traditional strain gauges, PVDF and conventional piezoelectric sensors used for Non-Destructive Evaluation (NDE) and Structural Health Monitoring (SHM). Optical fiber-based sensors offer advantages over conventional strain gauges and PZT devices in terms of size, ease of embedment, immunity from electromagnetic interference (EMI) and the potential for multiplexing a number of sensors. The objective of this paper is to demonstrate acoustic wave sensing using an Extrinsic Fabry-Perot Interferometric (EFPI) sensor on GFRP composite laminates. For this purpose, experiments were first carried out for strain measurement with fiber optic sensors on GFRP laminates with intentionally introduced holes of different sizes as defects. The results obtained from these experiments are presented in this paper. Numerical modeling has been carried out to obtain the relationship between the defect size and strain.
Abstract:
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm. Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA shows the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
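To illustrate the flavour of SPSA when adapted to a discrete parameter set, here is a minimal sketch. The step-size constants, the rounding-based projection and the noisy_quadratic stand-in for a simulated long-run average cost are all assumptions made for the example; the paper's random-projection technique and convergence conditions are not reproduced here.

```python
import random

# Illustrative SPSA-style search over a discrete parameter grid (not the
# paper's algorithm: the projection below is plain rounding/clipping rather
# than a random projection, and the cost function is a made-up stand-in).

def project(theta, low, high):
    # map a continuous iterate back onto the feasible integer grid
    return [min(high, max(low, round(x))) for x in theta]

def spsa_discrete(simulate_cost, theta0, low, high, iters=300):
    theta = [float(x) for x in theta0]
    for k in range(1, iters + 1):
        a_k = 0.5 / k ** 0.6          # step-size sequence (illustrative constants)
        c_k = 1.0                     # unit perturbation keeps grid points distinct
        delta = [random.choice((-1.0, 1.0)) for _ in theta]   # Bernoulli +/-1 directions
        plus  = project([t + c_k * d for t, d in zip(theta, delta)], low, high)
        minus = project([t - c_k * d for t, d in zip(theta, delta)], low, high)
        diff = simulate_cost(plus) - simulate_cost(minus)
        grad = [diff / (2.0 * c_k * d) for d in delta]        # SPSA gradient estimate
        theta = [t - a_k * g for t, g in zip(theta, grad)]
    return project(theta, low, high)

# Toy usage: minimise a noisy quadratic over integer parameters in [0, 20]
def noisy_quadratic(theta):
    return sum((x - 7) ** 2 for x in theta) + random.gauss(0.0, 1.0)

print(spsa_discrete(noisy_quadratic, theta0=[15, 3], low=0, high=20))
```

Only two simulations are needed per iteration regardless of the parameter dimension, which is the property that makes SPSA attractive when each cost evaluation requires a full simulation run.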
Abstract:
We consider the problem of scheduling semiconductor burn-in operations, where burn-in ovens are modelled as batch processing machines. Most studies assume that the ready times and due dates of jobs are agreeable (i.e., r_i < r_j implies d_i ≤ d_j). In many real-world applications, the agreeable property assumption does not hold. Therefore, in this paper, the problem of scheduling a single burn-in oven with non-agreeable release times and due dates, non-identical job sizes and non-identical processing times is formulated as a Non-Linear (0-1) Integer Programming optimisation problem. The objective of the problem is to minimise the maximum completion time (makespan) of all jobs. Due to computational intractability, we propose four variants of a two-phase greedy heuristic algorithm. Computational experiments indicate that two of the four proposed algorithms have excellent average performance and are also capable of solving large-scale real-life problems with relatively low computational effort on a Pentium IV computer.
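For intuition about the batching problem, the sketch below applies a simple sort-then-first-fit greedy rule and simulates the oven to compute the makespan. The job tuples, the oven capacity and the two "phases" shown are illustrative assumptions; this is not the paper's two-phase heuristic or its non-linear integer programming formulation.

```python
# Illustrative greedy for single burn-in-oven batch scheduling (not the paper's
# heuristic). Jobs are tuples (release_time, due_date, size, proc_time).
# A batch's processing time is the longest proc_time of its jobs, its total size
# is limited by the oven capacity, and it can start only after all its jobs arrive.

def greedy_batch_makespan(jobs, capacity):
    # phase 1: order jobs by release time, breaking ties by due date
    order = sorted(jobs, key=lambda j: (j[0], j[1]))
    # phase 2: first-fit packing of jobs into batches in that order
    batches = []
    for job in order:
        for b in batches:
            if sum(j[2] for j in b) + job[2] <= capacity:
                b.append(job)
                break
        else:
            batches.append([job])
    # simulate the oven: each batch starts when the oven is free and all its jobs have arrived
    t = 0.0
    for b in batches:
        start = max(t, max(j[0] for j in b))
        t = start + max(j[3] for j in b)
    return t, batches

# Toy instance: (release, due, size, proc_time), oven capacity 10
jobs = [(0, 8, 4, 5), (1, 6, 3, 4), (2, 20, 6, 7), (5, 15, 5, 3)]
makespan, batches = greedy_batch_makespan(jobs, capacity=10)
print(makespan)   # 16.0 for this toy instance
```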
Abstract:
Due to the importance of collective communications in scientific parallel applications, many strategies have been devised for optimizing collective communications for different kinds of parallel environments. There has been increasing interest in developing efficient broadcast algorithms for computational grids. In this paper, we present application-oriented adaptive techniques that take into account resource characteristics as well as the application's usage of broadcasts for deriving efficient broadcast trees. In particular, we consider two broadcast parameters used in the application, namely the broadcast message sizes and the time interval between broadcasts. The results indicate that our adaptive strategies can provide a 20% average improvement in performance over the popular MPICH-G2 MPI_Bcast implementation under loaded network conditions.
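As a rough illustration of message-size-adaptive broadcasting, the sketch below chooses between a binomial tree and a scatter-followed-by-allgather scheme using the familiar latency/bandwidth cost model. The cost model, process count and network parameters are assumptions made for the example and do not reproduce the paper's resource-aware tree construction.

```python
import math

# Cost-model-based choice between two standard broadcast algorithms
# (a simplification; parameter values below are made up, not measurements).

def cost_binomial(p, n, alpha, beta):
    # log2(p) rounds, full message each round: good for small messages
    return math.ceil(math.log2(p)) * (alpha + n * beta)

def cost_scatter_allgather(p, n, alpha, beta):
    # scatter the message, then ring allgather: good for large messages
    return (math.ceil(math.log2(p)) + p - 1) * alpha + 2.0 * (p - 1) / p * n * beta

def choose_broadcast(p, n, alpha, beta):
    costs = {
        "binomial": cost_binomial(p, n, alpha, beta),
        "scatter_allgather": cost_scatter_allgather(p, n, alpha, beta),
    }
    best = min(costs, key=costs.get)
    return best, costs[best]

# 16 processes, 50 us latency, 100 MB/s bandwidth (beta in seconds per byte)
print(choose_broadcast(16, 1 << 10, 50e-6, 1e-8))   # 1 KB message -> binomial
print(choose_broadcast(16, 1 << 20, 50e-6, 1e-8))   # 1 MB message -> scatter_allgather
```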
Abstract:
Modeling the performance behavior of parallel applications to predict execution times for larger problem sizes and numbers of processors has been an active area of research for several years. The existing curve-fitting strategies for performance modeling use data from experiments conducted under uniform loading conditions. Hence the accuracy of these models degrades when the load conditions on the machines and the network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times under any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
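A minimal one-dimensional version of rational-polynomial curve fitting can be obtained by multiplying through by the denominator, which turns the fit into a linear least-squares problem. The sketch below does exactly that; the polynomial degrees, the single predictor variable and the timing data are illustrative assumptions, not the paper's multi-dimensional model or its measurements.

```python
import numpy as np

# One-dimensional rational-polynomial fit T(x) ~ p(x)/q(x) with q(0) = 1,
# linearised as p(x) - T*(q(x) - 1) = T and solved by least squares.
# (Simplified sketch; the data below are hypothetical timings, not the paper's.)

def fit_rational(x, t, deg_p=2, deg_q=1):
    x, t = np.asarray(x, float), np.asarray(t, float)
    cols = [x ** i for i in range(deg_p + 1)]               # numerator coefficients a0..am
    cols += [-t * x ** i for i in range(1, deg_q + 1)]      # denominator coefficients b1..bk
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    a, b = coeffs[:deg_p + 1], coeffs[deg_p + 1:]
    def predict(xq):
        xq = np.asarray(xq, float)
        num = sum(ai * xq ** i for i, ai in enumerate(a))
        den = 1.0 + sum(bi * xq ** (i + 1) for i, bi in enumerate(b))
        return num / den
    return predict

# Hypothetical timings vs. problem size
sizes = [100, 200, 400, 800, 1600]
times = [0.9, 1.7, 3.6, 7.5, 15.8]
model = fit_rational(sizes, times)
print(model(3200))   # extrapolated prediction for a larger problem size
```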