52 results for improve acoustic performance
at Indian Institute of Science - Bangalore - India
Abstract:
A fairly comprehensive computer program incorporating explicit expressions for the four-pole parameters of concentric-tube resonators, plug mufflers, and three-duct cross-flow perforated elements has been used for parametric studies. The parameters considered are hole diameter, the center-to-center distance between consecutive holes (which determines porosity), the incoming mean flow Mach number, the area expansion ratio, the number of partitions or chambers within a given overall shell length, and the relative lengths of these partitions or chambers, all normalized with respect to the exhaust pipe diameter. Transmission loss has been plotted as a function of a normalized frequency parameter. The effect of the tail pipe length on insertion loss for an anechoic source has also been studied. These studies have been supplemented by empirical expressions for the normalized static pressure drop of different types of perforated-element mufflers, developed from experimental observations.
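The parametric transmission-loss curves described above come from four-pole (transfer matrix) theory for perforated elements. As a minimal, hedged illustration of the same quantity, the sketch below evaluates the classical transmission-loss formula for a simple unperforated expansion chamber, the limiting case of the chamber configurations studied; the area ratio, chamber length, and sound speed are illustrative values, not taken from the paper.

```python
import math

def expansion_chamber_tl(m, k, L):
    """Transmission loss (dB) of a simple expansion chamber.

    m: area expansion ratio (chamber area / exhaust pipe area)
    k: acoustic wavenumber, 2*pi*f/c (rad/m)
    L: chamber length (m)
    """
    return 10 * math.log10(1 + 0.25 * (m - 1 / m) ** 2 * math.sin(k * L) ** 2)

# TL shows the classic periodic domes: peaks where k*L is an odd multiple
# of pi/2, and 0 dB troughs where k*L is a multiple of pi.
c, L, m = 343.0, 0.3, 9.0
tl_curve = [expansion_chamber_tl(m, 2 * math.pi * f / c, L) for f in range(50, 2000, 10)]
```

Plotting `tl_curve` against frequency reproduces the dome pattern that a normalized frequency parameter collapses onto a single axis.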
Abstract:
Modeling of wave propagation in hoses, unlike in rigid pipes or waveguides, introduces a coupling between the inside medium, the hose wall, and the outside medium. This alters the axial wave number and thence the corresponding effective speed of sound inside the hose, resulting in sound radiation into the outside medium, also called breakout or shell noise. The existing literature on the subject is such that a hose cannot be integrated into the whole piping system made up of sections of hoses, pipes, and mufflers to predict the acoustical performance in terms of transmission loss (TL). The present paper seeks to fill this gap. Three one-dimensional coupled wave equations are written to account for the presence of a yielding wall with a finite lumped transverse impedance of the hose material. The resulting wave equation can readily be reduced to a transfer matrix form using an effective wave number for a moving medium in a hose section. Incorporating the effect of fluid loading due to the outside medium also allows prediction of the transverse TL and the breakout noise. Axial TL and transverse TL have been combined into the net TL needed by designers. Predictions of the axial as well as transverse TL are shown to compare well with those of a rigorous 3-D analysis using only one-hundredth of the computation time. Finally, results of some parametric studies are reported for engineers involved in the acoustical design of hoses. (C) 1996 Institute of Noise Control Engineering.
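The paper reduces the coupled hose equations to transfer matrix form precisely so that hoses, pipes, and mufflers can be cascaded into one system matrix. A minimal sketch of that cascading step, using the standard four-pole matrix of a uniform rigid pipe with a stationary medium (the hose-specific effective wave number and wall coupling from the paper are not reproduced here):

```python
import cmath, math

def pipe_tm(k, L, Y):
    # Four-pole (transfer) matrix of a uniform rigid pipe: relates
    # (pressure, volume velocity) at the inlet to those at the outlet.
    # k: wavenumber, L: length, Y: characteristic impedance c/S.
    c_, s_ = cmath.cos(k * L), cmath.sin(k * L)
    return [[c_, 1j * Y * s_], [1j * s_ / Y, c_]]

def cascade(*mats):
    # Multiply element matrices in flow order to get the system four-pole matrix.
    T = [[1, 0], [0, 1]]
    for M in mats:
        T = [[T[0][0] * M[0][0] + T[0][1] * M[1][0], T[0][0] * M[0][1] + T[0][1] * M[1][1]],
             [T[1][0] * M[0][0] + T[1][1] * M[1][0], T[1][0] * M[0][1] + T[1][1] * M[1][1]]]
    return T

def tl_from_tm(T, Y0):
    # TL for equal inlet/outlet pipes of characteristic impedance Y0.
    A, B, C, D = T[0][0], T[0][1], T[1][0], T[1][1]
    return 20 * math.log10(abs(A + B / Y0 + C * Y0 + D) / 2)

# A chamber of 4x the pipe area, modeled as a pipe section with Y = Y0/4:
T = pipe_tm(k=math.pi / 2, L=1.0, Y=0.25)
tl = tl_from_tm(T, Y0=1.0)
```

Replacing `pipe_tm` for a hose section with a matrix built on the paper's effective wave number is exactly what makes the whole-system TL prediction possible.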
Abstract:
Thermoacoustics is the study of the interaction between heat and sound, which is useful in designing heat engines and heat pumps. Research in the field focuses on the demand to improve performance, which is achieved by altering operational, geometrical, and fluid parameters. The present study deals with improving the performance of a twin thermoacoustic prime mover, which has gained significant importance in recent years for the production of high-amplitude sound waves. The performance of the twin thermoacoustic prime mover is evaluated in terms of onset temperature difference, resonance frequency, and pressure amplitude of the acoustic waves by varying the resonator length and the charge pressure of the nitrogen working fluid. DeltaEC, the free simulation software developed by LANL, USA, is employed in the present study to simulate the performance of the twin thermoacoustic prime mover. Experimental and simulated results are compared, and the deviation is found to be within 10%.
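For orientation, the resonance frequency that the study varies via resonator length can be estimated from one-dimensional acoustics: a half-wavelength resonator resonates at f = c/2L, with c the sound speed of the nitrogen charge. The sketch below is a back-of-envelope ideal-gas estimate, not a substitute for DeltaEC's full thermoacoustic model; all numbers are illustrative.

```python
import math

GAMMA_N2 = 1.4   # ratio of specific heats for nitrogen
R = 8.314        # universal gas constant, J/(mol K)
M_N2 = 0.028     # molar mass of nitrogen, kg/mol

def sound_speed(T):
    # Ideal-gas speed of sound in nitrogen at temperature T (kelvin).
    return math.sqrt(GAMMA_N2 * R * T / M_N2)

def halfwave_resonance(L, T=300.0):
    # Fundamental frequency (Hz) of a half-wavelength resonator of length L (m).
    return sound_speed(T) / (2 * L)
```

The inverse dependence on L matches the qualitative trend one expects when sweeping resonator length in the simulations.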
Abstract:
Surface treatment alters the frictional behaviour of pistons in I.C. engines and can be used to improve engine performance. Surface treatments applied to aluminium alloy pistons of a high speed diesel engine and their effect on the engine performance are described. Certain piston surface treatments improve engine performance and also reduce the run-in period.
Abstract:
In this work, we evaluate the benefits of using Grids with multiple batch systems to improve the performance of multi-component and parameter sweep parallel applications through reduced queue waiting times. Using job traces with different loads, job distributions, and queue waiting times corresponding to three queuing policies (FCFS, conservative backfilling, and EASY backfilling), we conducted a large number of experiments using simulators of two important classes of applications. The first simulator models the Community Climate System Model (CCSM), a prominent multi-component application, and the second models parameter sweep applications. We compare the performance of the applications when executed on multiple batch systems and on a single batch system for different system and application configurations. We show that there are a large number of configurations for which application execution on multiple batch systems gives improved performance over execution on a single system.
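The queue-waiting-time reduction described above can be seen in a toy model. Below, a minimal FCFS simulator (one job at a time per system, no backfilling, so a simplification of the traces and policies actually used) compares mean waits on a single batch system against the same jobs round-robined over two systems; the job list is invented for illustration.

```python
def fcfs_waits(jobs):
    """Mean queue waiting time under FCFS on a system running one job at a time.

    jobs: list of (arrival_time, runtime) tuples, sorted by arrival time.
    """
    free_at, waits = 0.0, []
    for arrival, runtime in jobs:
        start = max(arrival, free_at)   # wait until the system frees up
        waits.append(start - arrival)
        free_at = start + runtime
    return sum(waits) / len(waits)

# Four identical jobs arriving close together:
jobs = [(0, 10), (1, 10), (2, 10), (3, 10)]
single = fcfs_waits(jobs)                                       # one shared system
multi = (fcfs_waits(jobs[0::2]) + fcfs_waits(jobs[1::2])) / 2   # split over two systems
```

Splitting the load over two systems cuts the mean wait sharply, which is the effect the paper quantifies with realistic traces and policies.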
Abstract:
The acoustical behavior of an elliptical chamber muffler having an end-inlet and side-outlet port is analyzed semi-analytically. A uniform piston source is assumed to model the 3-D acoustic field in the elliptical chamber cavity. Towards this end, we consider the modal expansion of the acoustic pressure field in the elliptical cavity in terms of angular and radial Mathieu functions, subject to the rigid wall condition, whereupon, under the assumption of a point source, the Green's function is obtained. On integrating this function over the piston area of the side or end port and dividing it by the piston area, one obtains the acoustic field, whence one can find the impedance matrix parameters characterizing the 2-port system. The acoustic performance of these configurations is evaluated in terms of transmission loss (TL). The analytical results thus obtained are compared with 3-D FEA carried out on commercial software for certain muffler configurations. These show excellent agreement, thereby validating the 3-D semi-analytical piston-driven model. The influence of the chamber length as well as the angular and axial location of the end and side ports on TL performance is also discussed, thus providing useful guidelines to the muffler designer. (c) 2011 Elsevier B.V. All rights reserved.
Abstract:
Data prefetchers identify and exploit any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by the cache. In this work we demonstrate that extending the training information to include secondary misses and hits along with primary misses helps improve the performance of prefetchers. In addition to empirical evaluation, we use the information-theoretic metric, entropy, to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they help corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta-correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and that the best-performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best-performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all the L2 accesses as the prefetcher's history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic requirements.
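The entropy measurement used above can be sketched directly: the Shannon entropy of the delta (successive-difference) stream quantifies how regular a training history is for a delta-correlation prefetcher. The address stream below is a made-up strided pattern, not data from the paper.

```python
import math
from collections import Counter

def entropy(stream):
    # Shannon entropy (bits/symbol) of a symbol stream; lower = more regular.
    counts = Counter(stream)
    n = len(stream)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def deltas(addresses):
    # Delta-correlation prefetchers train on successive address differences.
    return [b - a for a, b in zip(addresses, addresses[1:])]

# A strided access pattern: the raw addresses look irregular in absolute
# terms, but the delta stream collapses to a single symbol (zero entropy).
miss_stream = [100, 164, 228, 292, 356]
h = entropy(deltas(miss_stream))
```

Comparing `h` for a primary-miss-only history against an extended history (primary misses, secondary misses, and hits) is how one can corroborate, numerically, that the extended stream is more regular.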
Abstract:
The acoustical behaviour of an elliptical chamber muffler having a side inlet and side outlet port is analyzed in this paper, wherein a uniform velocity piston source is assumed to model the 3-D acoustic field in the elliptical chamber cavity. Towards this end, we consider the modal expansion of the acoustic pressure field in the elliptical cavity in terms of the angular and radial Mathieu functions, subject to the rigid wall condition. Then, the Green's function due to the point source located on the side (curved) surface of the elliptical chamber is obtained. On integrating this function over the elliptical piston area on the curved surface of the elliptical chamber and subsequent division by the area of the elliptic piston, one obtains the acoustic pressure field due to the piston-driven source, which is equivalent to considering plane wave propagation in the side ports. Thus, one can obtain the acoustic pressure response functions, i.e., the impedance matrix (Z) parameters due to the sources (ports) located on the side surface, from which one may also obtain a progressive wave representation in terms of the scattering matrix (S). Finally, the acoustic performance of the muffler is evaluated in terms of the transmission loss (TL), which is computed from the scattering parameters. The effect of the axial length of the muffler and the angular location of the ports on the TL characteristics is studied in detail. Acoustically long chambers show dominant axial plane wave propagation, while the TL spectrum of short chambers indicates the dominance of the transverse modes. The 3-D analytical results are compared with 3-D FEM simulations carried out on commercial software and are shown to be in excellent agreement, thereby validating the analytical procedure suggested in this work.
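The last step described above, going from impedance (Z) parameters to scattering (S) parameters and then to TL, can be sketched for a generic 2-port. The conversion below assumes equal reference characteristic impedance at both ports and equal port areas; the shunt-element example is a textbook illustration, not one of the paper's muffler configurations.

```python
import math

def z_to_s(Z, Z0=1.0):
    """Convert a 2x2 impedance matrix to a scattering matrix:
    S = (Z - Z0*I) (Z + Z0*I)^-1, with the same reference impedance Z0
    assumed at both ports."""
    a = [[Z[0][0] - Z0, Z[0][1]], [Z[1][0], Z[1][1] - Z0]]
    b = [[Z[0][0] + Z0, Z[0][1]], [Z[1][0], Z[1][1] + Z0]]
    det = b[0][0] * b[1][1] - b[0][1] * b[1][0]
    binv = [[b[1][1] / det, -b[0][1] / det], [-b[1][0] / det, b[0][0] / det]]
    return [[a[0][0] * binv[0][0] + a[0][1] * binv[1][0], a[0][0] * binv[0][1] + a[0][1] * binv[1][1]],
            [a[1][0] * binv[0][0] + a[1][1] * binv[1][0], a[1][0] * binv[0][1] + a[1][1] * binv[1][1]]]

def tl_from_s(S):
    # Transmission loss for equal port areas: TL = -20 log10 |S21|.
    return -20 * math.log10(abs(S[1][0]))

# Example: a shunt impedance equal to the reference impedance, whose
# Z-matrix has all four entries equal to that impedance.
S = z_to_s([[1.0, 1.0], [1.0, 1.0]])
tl = tl_from_s(S)
```

The same two functions apply unchanged once the Z parameters come from the Mathieu-function piston model instead of a lumped element.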
Abstract:
Although it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error-correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets, and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high E-b/N-0 is caused by oscillations in bit-node a posteriori probabilities induced by short cycles and trapping sets in bipartite graphs. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high E-b/N-0. This algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain as compared with BP decoding for LDPC codes of code rate equal to or less than 1/2. The coding gains are modest to significant in the case of regular LDPC codes optimised for bipartite graph conditioning, whereas the coding gains are huge in the case of unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
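For readers unfamiliar with graph-based decoding, the sketch below shows the substrate the proposed algorithm supplements: iterative decoding driven by unsatisfied parity checks, here in its simplest hard-decision bit-flipping form on a (7,4) Hamming parity-check matrix. This is illustrative only; it is neither the authors' reliability-based estimation nor the soft-decision BP decoder they extend.

```python
def decode_bitflip(H, r, max_iters=10):
    """Hard-decision bit-flipping decoder for a binary linear code.

    H: parity-check matrix as a list of 0/1 rows; r: received hard bits.
    """
    x = list(r)
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, x)) % 2 for row in H]
        if not any(syndrome):
            return x                        # all parity checks satisfied
        # Per bit, count the unsatisfied checks it participates in.
        votes = [sum(s for s, row in zip(syndrome, H) if row[j]) for j in range(len(x))]
        worst = max(votes)
        x = [b ^ (v == worst) for b, v in zip(x, votes)]   # flip worst offenders
    return x

# (7,4) Hamming code: corrects any single bit error.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
codeword = [0, 0, 0, 0, 0, 0, 0]
received = [0, 0, 1, 0, 0, 0, 0]            # single bit flipped in transit
decoded = decode_bitflip(H, received)
```

Trapping sets are configurations where iterations like these oscillate instead of converging, which is exactly the failure mode the paper's reliability-based supplement targets.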
Abstract:
The twin demands of energy efficiency and higher performance on DRAM are strongly emphasized in multicore architectures. A variety of schemes have been proposed to address either the latency or the energy consumption of DRAMs. These schemes typically require non-trivial hardware changes and end up improving latency at the cost of energy, or vice versa. One specific DRAM performance problem in multicores is that interleaved accesses from different cores can potentially degrade row-buffer locality. In this paper, based on the temporal and spatial locality characteristics of memory accesses, we propose a reorganization of the existing single large row buffer in a DRAM bank into multiple sub-row buffers (MSRB). This reorganization not only improves row hit rates, and hence the average memory latency, but also brings down the energy consumed by the DRAM. The first major contribution of this work is proposing such a reorganization without requiring any significant changes to the existing widely accepted DRAM specifications. Our proposed reorganization improves weighted speedup by 35.8%, 14.5% and 21.6% in quad-, eight- and sixteen-core workloads, along with a 42%, 28% and 31% reduction in DRAM energy, respectively. The proposed MSRB organization enables opportunities for the management of multiple row buffers at the memory controller level. As the memory controller is aware of the behaviour of individual cores, it allows us to implement coordinated buffer allocation schemes for different cores that take program behaviour into account. We demonstrate two such schemes, namely Fairness Oriented Allocation and Performance Oriented Allocation, which show the flexibility that memory controllers can now exploit in our MSRB organization to improve overall performance and/or fairness. Further, the MSRB organization enables additional opportunities for DRAM intra-bank parallelism and selective early precharging of the LRU row buffer to further improve memory access latencies. These two optimizations together provide an additional 5.9% performance improvement.
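The row-hit benefit of splitting one row buffer into several can be reproduced in a toy simulation. The sketch below counts row-buffer hits for a conventional single buffer versus two sub-row buffers when two cores' accesses interleave at one bank; the access pattern and LRU management are assumptions for illustration, not the paper's workloads.

```python
from collections import OrderedDict

def row_hits(accesses, n_buffers):
    """Count row-buffer hits when a DRAM bank keeps n_buffers open rows,
    managed with LRU replacement (n_buffers=1 models the conventional
    single row buffer). `accesses` is a sequence of row numbers."""
    open_rows = OrderedDict()               # row -> None, in LRU order
    hits = 0
    for row in accesses:
        if row in open_rows:
            hits += 1
            open_rows.move_to_end(row)      # refresh LRU position
        else:
            if len(open_rows) >= n_buffers:
                open_rows.popitem(last=False)   # evict (precharge) LRU row
            open_rows[row] = None
    return hits

# Two cores with good per-core locality whose accesses interleave at the bank:
interleaved = [7, 42, 7, 42, 7, 42, 7, 42]
single = row_hits(interleaved, 1)   # each access closes the other core's row
multi = row_hits(interleaved, 2)    # both rows stay open simultaneously
```

With one buffer every access is a row miss; with two sub-row buffers the interleaved streams stop evicting each other, which is the locality effect the MSRB organization exploits.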
Abstract:
We propose a practical feature-level and score-level fusion approach combining acoustic and estimated articulatory information for both text-independent and text-dependent speaker verification. From a practical point of view, we study how to improve speaker verification performance by combining dynamic articulatory information with conventional acoustic features. On text-independent speaker verification, we find that concatenating articulatory features obtained from measured speech production data with conventional Mel-frequency cepstral coefficients (MFCCs) improves performance dramatically. However, since directly measuring articulatory data is not feasible in many real-world applications, we also experiment with estimated articulatory features obtained through acoustic-to-articulatory inversion. We explore both feature-level and score-level fusion methods and find that the overall system performance is significantly enhanced even with estimated articulatory features. Such a performance boost could be due to the inter-speaker variation information embedded in the estimated articulatory features. Since the dynamics of articulation contain important information, we also include inverted articulatory trajectories in text-dependent speaker verification. We demonstrate that the articulatory constraints introduced by inverted articulatory features help to reject wrong-password trials and improve performance after score-level fusion. We evaluate the proposed methods on the X-ray Microbeam database and the RSR 2015 database, respectively, for the aforementioned two tasks. Experimental results show that we achieve more than 15% relative equal error rate reduction for both speaker verification tasks. (C) 2015 Elsevier Ltd. All rights reserved.
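The two fusion levels compared above can be stated in a few lines: feature-level fusion concatenates per-frame acoustic (e.g., MFCC) and articulatory vectors before modeling, while score-level fusion combines the two systems' verification scores afterwards. The weighted-sum rule and the weight value below are common illustrative choices, not values from the paper.

```python
def feature_fusion(mfcc_frames, artic_frames):
    # Feature-level fusion: concatenate per-frame acoustic and articulatory
    # feature vectors into one longer vector per frame.
    return [a + b for a, b in zip(mfcc_frames, artic_frames)]

def score_fusion(acoustic_score, articulatory_score, w=0.7):
    # Score-level fusion: weighted sum of the two systems' verification
    # scores. The weight w is a tuning parameter chosen on development data.
    return w * acoustic_score + (1 - w) * articulatory_score
```

Feature-level fusion lets one model see both streams jointly; score-level fusion keeps the systems independent and only merges their decisions, which is often more robust when one stream (here, inverted articulation) is noisy.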
Abstract:
In this paper, we present a machine learning approach to measuring the visual quality of JPEG-coded images. The features for predicting perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity, and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is obtained without reference to their original images (a 'no-reference' metric). Here, the problem of quality estimation is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with imbalance in the number of samples per quality class depends critically on the input weights and bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalance conditions for image quality assessment. The experimental results show that the visual quality estimated by the proposed RCGA-ELM emulates the mean opinion score very well. The experimental results are also compared with the existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
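The core of ELM, as stated above, is that input weights and biases are random while only the output weights are computed analytically. A minimal sketch follows, with a sigmoid activation, a pseudoinverse solve, and toy XOR data, all illustrative choices; the paper's KS-ELM and RCGA-ELM weight-selection schemes are not reproduced here.

```python
import numpy as np

def elm_train(X, Y, hidden=10, seed=0):
    """Minimal extreme learning machine: input weights and biases are drawn
    at random and never trained; only the output weights are computed,
    analytically, via the pseudoinverse of the hidden-layer output matrix."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], hidden))   # random input weights
    b = rng.uniform(-1, 1, hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ Y                   # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy binary task (XOR), assumed data purely for illustration:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([0, 1, 1, 0], float)
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta)
```

Because the random projections depend on the seed, different draws give different classifiers, which is precisely why the paper searches over input weights and biases under class imbalance.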
Abstract:
Prequantization has been put forward as a means to improve the performance of double-phase holograms (DPHs). We show here that any improvement (even under the best of conditions) is not large enough for the DPH to compete favourably with other holograms.
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models of compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics, and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines, and radial basis function networks. We use the generated models to (a) predict program performance at arbitrary compiler/microarchitecture configurations, (b) quantify the significance of complex interactions between optimizations and the microarchitecture, and (c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
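The workflow described above, fitting an empirical model from a few measured configurations and then searching it for good flag settings, can be sketched with the simplest of the three learners, linear regression. Everything below (the synthetic flag/performance data and the two-flag search space) is invented for illustration.

```python
import itertools

def fit_linear(X, y):
    """Ordinary least squares via the normal equations, solved with a small
    Gaussian elimination. Each row of X is a configuration: a leading 1 for
    the intercept followed by 0/1 compiler-flag settings."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    bvec = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    for col in range(n):                    # naive elimination, fine at this size
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        bvec[col], bvec[piv] = bvec[piv], bvec[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            bvec[r] -= f * bvec[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):          # back substitution
        w[r] = (bvec[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def predict(w, flags):
    return w[0] + sum(wi * f for wi, f in zip(w[1:], flags))

def best_flags(w, n_flags):
    # Model-based exhaustive search over flag settings (feasible for few flags).
    return max(itertools.product([0, 1], repeat=n_flags), key=lambda f: predict(w, f))

# Synthetic measurements: flag 1 helps (+0.2), flag 2 hurts (-0.1).
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]]
y = [1.00, 1.20, 0.90, 1.10]
w = fit_linear(X, y)
best = best_flags(w, 2)
```

The same fit/predict/search loop carries over to the richer learners in the paper; only `fit_linear` and `predict` change.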