52 results for "improve acoustic performance"
Abstract:
This study examines the thermal efficiency of arc furnace operation and the effects of harmonics and voltage dips at a factory located near Bangkok. It also attempts to find ways to improve the performance of the arc furnace operation and to minimize the effects of both harmonics and voltage dips. A dynamic model of the arc furnace has been developed that incorporates both electrical and thermal characteristics; the model can be used to identify potential areas for improvement of the furnace and its operation. Snapshots of waveforms and measurements of RMS values of voltage, current and power at the furnace, at other feeders and at the point of common coupling were recorded. A harmonic simulation program and an electromagnetic transient simulation program were used to model the effects of harmonics and voltage dips and to identify appropriate static and dynamic filters to minimize these effects within the factory. The effects of harmonics and voltage dips were also identified in records taken at the point of common coupling of another factory supplied by a different feeder of the same substation. Simulation studies were carried out to examine the effects on this second feeder when dynamic filters were used in the factory operating the arc furnace. The methodology and the mitigation strategy identified in the study are applicable to the general situation in a power distribution system where an arc furnace forms part of a customer's load.
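As a rough, hedged illustration of how harmonic distortion at a point of common coupling can be quantified from recorded waveform snapshots, the Python sketch below computes total harmonic distortion (THD) from a sampled voltage signal. The 50 Hz fundamental, sampling rate and synthetic waveform are assumptions chosen for illustration and do not reproduce the study's measurement setup or its simulation programs.

```python
import numpy as np

def thd_from_samples(samples, fs, f0, n_harmonics=13):
    """Estimate total harmonic distortion (THD) of a sampled waveform.

    samples : 1-D array covering an integer number of fundamental cycles
    fs      : sampling frequency in Hz
    f0      : fundamental frequency in Hz
    """
    spectrum = np.fft.rfft(samples) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)

    def mag_at(f):
        # Amplitude of the spectral line closest to frequency f.
        return 2.0 * np.abs(spectrum[np.argmin(np.abs(freqs - f))])

    v1 = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / v1

# Synthetic example: a 50 Hz fundamental with 5th and 7th harmonics,
# roughly the kind of distortion an arc furnace load injects.
fs, f0 = 10_000, 50
t = np.arange(0, 0.2, 1.0 / fs)
v = np.sin(2*np.pi*f0*t) + 0.08*np.sin(2*np.pi*5*f0*t) + 0.05*np.sin(2*np.pi*7*f0*t)
print(f"THD = {thd_from_samples(v, fs, f0):.3f}")   # approx. 0.094
```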
Abstract:
Accurate estimation of mass transport parameters is necessary for the overall design and evaluation of waste disposal facilities. The mass transport parameters, such as the effective diffusion coefficient, retardation factor and diffusion-accessible porosity, are estimated from observed diffusion data by inverse analysis. Recently, the particle swarm optimization (PSO) algorithm has been used to develop inverse models for estimating these parameters, alleviating existing limitations in the inverse analysis. However, a PSO solver yields different solutions in successive runs because of the stochastic nature of the algorithm and because of the presence of multiple optimum solutions, so the mean solution estimated from independent runs is significantly different from the best solution. In this paper, two variants of the PSO algorithm are proposed to improve the performance of the inverse analysis. The proposed algorithms use a perturbation equation for the gbest particle to gather information around the gbest region of the search space, and introduce catfish particles in alternate iterations to improve exploration. A performance comparison of the developed solvers on synthetic test data for two different diffusion problems reveals that one of the proposed solvers, CPPSO, significantly improves overall performance, with improved best, worst and mean fitness values. The developed solver is further used to estimate transport parameters from 12 sets of experimentally observed diffusion data obtained from three diffusion problems, and the estimates are compared with published values from the literature. The proposed solver is quick, simple and robust across different diffusion problems. (C) 2012 Elsevier Ltd. All rights reserved.
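A minimal sketch of the gbest-perturbation idea follows: a plain PSO loop in which the global best is perturbed each iteration to probe its neighbourhood. The objective, bounds, perturbation scale and the toy two-parameter misfit are illustrative assumptions, not the authors' CPPSO implementation (the catfish mechanism is omitted).

```python
import numpy as np

def pso_with_gbest_perturbation(objective, bounds, n_particles=30, n_iters=200,
                                w=0.7, c1=1.5, c2=1.5, sigma=0.05, seed=0):
    """Minimal PSO sketch: a Gaussian perturbation of gbest each iteration
    probes the region around the current best solution."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
        # Perturb gbest to sample its neighbourhood (hypothetical scale sigma).
        g_try = np.clip(g + sigma * (hi - lo) * rng.standard_normal(len(lo)), lo, hi)
        if objective(g_try) < objective(g):
            g = g_try
    return g, objective(g)

# Toy usage: recover two "transport parameters" from a quadratic misfit.
target = np.array([1.2e-10, 3.5])                   # hypothetical true values
misfit = lambda p: np.sum(((p - target) / target) ** 2)
best, f = pso_with_gbest_perturbation(misfit, np.array([[1e-11, 1e-9], [1.0, 10.0]]))
print(best, f)
```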
Abstract:
The constant increase in the number of solved protein structures is of great help in understanding the basic principles behind protein folding and evolution. 3-D structural knowledge is valuable in designing and developing methods for the comparison, modelling and prediction of protein structures. These approaches to structure analysis can be directly applied to the study of protein function and to drug design. The backbone of a protein structure favours certain local conformations, which include alpha-helices, beta-strands and turns. Libraries of a limited number of local conformations (Structural Alphabets) have been developed in the past to obtain a useful categorization of backbone conformation. Protein Blocks (PBs) form one such Structural Alphabet, giving a reasonable structure approximation of 0.42 angstrom. In this study, we use the PB description of local structures to analyse conformations that are preferred sites for structural variations and insertions among groups of related folds. This knowledge can be utilized to improve tools for structure comparison that work by analysing local structure similarities. Conformational differences between homologous proteins are known to occur often in the regions comprising turns and loops. Interestingly, these differences are found to have specific preferences depending upon the structural classes of proteins. Such class-specific preferences are mainly seen in the all-beta class, with changes involving short helical conformations and hairpin turns. A test carried out on a benchmark dataset also indicates that the use of knowledge of the class-specific variations can improve the performance of a PB-based structure comparison approach. The preference for indel sites also seems to be confined to a few backbone conformations involving beta-turns and helix C-caps. These are mainly associated with short loops joining the regular secondary structures that mediate a reversal in the chain direction. Rare beta-turns of type I' and II' are also identified as preferred sites for insertions.
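A hedged sketch of the Protein Block assignment step is shown below: each overlapping five-residue fragment is reduced to one of 16 letters by finding the smallest angular RMSD between its eight backbone dihedral angles and a prototype table. The prototype angles here are random placeholders; the published PB definitions would have to be substituted for any real use.

```python
import numpy as np

# Hypothetical prototype table: 16 PB letters, each an 8-vector of dihedral
# angles in degrees. Placeholder values only; substitute the published PBs.
PB_PROTOTYPES = {letter: np.random.default_rng(i).uniform(-180, 180, 8)
                 for i, letter in enumerate("abcdefghijklmnop")}

def angular_rmsd(a, b):
    """RMS deviation between two dihedral-angle vectors, handling wrap-around."""
    d = (a - b + 180.0) % 360.0 - 180.0
    return np.sqrt(np.mean(d ** 2))

def assign_protein_block(fragment_dihedrals):
    """Return the PB letter whose prototype is closest to the fragment's
    eight dihedral angles."""
    return min(PB_PROTOTYPES,
               key=lambda letter: angular_rmsd(fragment_dihedrals,
                                               PB_PROTOTYPES[letter]))

def encode_chain(phi, psi):
    """Encode a chain: slide a 5-residue window over the phi/psi series and
    keep the 8 central dihedrals of each window."""
    dihedrals = np.column_stack([phi, psi])            # shape (n_residues, 2)
    letters = []
    for i in range(2, len(phi) - 2):
        frag = dihedrals[i - 2:i + 3].flatten()[1:-1]  # 8 angles per fragment
        letters.append(assign_protein_block(frag))
    return "".join(letters)
```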
Abstract:
This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve the performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to the presence of navigation data bits and to frequency, phase and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
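A minimal sketch of the conventional non-coherent PDI statistic mentioned above follows: coherent correlations over short blocks (within which a data bit and the carrier phase are treated as constant) are squared and summed, discarding the unknown bit signs and phase. The signal model, block length and amplitudes are illustrative assumptions, not the paper's derivation or its proposed cyclostationarity-based detector.

```python
import numpy as np

def npdi_statistic(received, code, f_hat, fs, block_len):
    """Non-coherent PDI: |coherent correlation|^2 summed over blocks.

    received : complex baseband samples
    code     : spreading-code replica, same length as `received`
    f_hat    : residual carrier-frequency hypothesis (Hz)
    fs       : sampling rate (Hz)
    """
    n = np.arange(len(received))
    wiped = received * np.conj(code) * np.exp(-2j * np.pi * f_hat * n / fs)
    blocks = wiped[: len(wiped) // block_len * block_len].reshape(-1, block_len)
    return np.sum(np.abs(blocks.sum(axis=1)) ** 2)

# Toy test: weak BPSK-modulated signal with random data bits and phase.
rng = np.random.default_rng(1)
fs, block_len, n_blocks = 1000, 100, 20
code = rng.choice([-1.0, 1.0], size=block_len * n_blocks)
bits = np.repeat(rng.choice([-1.0, 1.0], size=n_blocks), block_len)
phase = np.exp(1j * rng.uniform(0, 2 * np.pi))
signal = 0.2 * bits * code * phase
noise = (rng.standard_normal(len(code)) + 1j * rng.standard_normal(len(code))) / np.sqrt(2)
print("H1:", npdi_statistic(signal + noise, code, 0.0, fs, block_len))
print("H0:", npdi_statistic(noise, code, 0.0, fs, block_len))
```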
Abstract:
MATLAB is an array language that was initially popular for rapid prototyping but is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that affect the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map the identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus, our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of the required data transfers. The proposed compiler was implemented, and an experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X on data-parallel benchmarks over native execution of MATLAB.
Abstract:
It has been shown recently that the acoustic performance of extended-tube expansion chambers can be improved substantially by making the extended inlet and outlet equal to half and a quarter of the chamber length, respectively, duly incorporating the end corrections due to the evanescent higher-order modes generated at the discontinuities. Such chambers, however, suffer from the disadvantages of high back pressure and generation of aerodynamic noise at the area discontinuities. These two disadvantages can be overcome by means of a perforated bridge between the extended inlet and the extended outlet. This paper deals with the design, or tuning, of such extended concentric tube resonators.
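The tuning rule described above amounts to making the corrected (acoustic) lengths of the extensions equal to half and a quarter of the chamber length; a minimal sketch is given below, in which the chamber length and end-correction values are placeholders that would normally come from the evanescent higher-order-mode analysis.

```python
def tuned_extension_lengths(chamber_length, delta_inlet=0.0, delta_outlet=0.0):
    """Physical lengths of the extended inlet and outlet for a tuned
    extended-tube expansion chamber.

    The acoustic lengths should equal L/2 and L/4; subtracting the end
    corrections (due to evanescent higher-order modes at the area
    discontinuities) gives the physical lengths to fabricate.
    """
    l_inlet = chamber_length / 2.0 - delta_inlet
    l_outlet = chamber_length / 4.0 - delta_outlet
    return l_inlet, l_outlet

# Example: a 300 mm chamber with assumed end corrections of about 8 mm each.
print(tuned_extension_lengths(0.300, delta_inlet=0.008, delta_outlet=0.008))
# approx. (0.142, 0.067) metres
```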
Abstract:
This work proposes a boosting-based transfer learning approach for head-pose classification from multiple low-resolution views. Head-pose classification performance is adversely affected when the source (training) and target (test) data arise from different distributions (due to changes in face appearance, lighting, etc.). Under such conditions, we employ Xferboost, a Logitboost-based transfer learning framework that integrates knowledge from a few labeled target samples with the source model to effectively minimize misclassifications on the target data. Experiments confirm that the Xferboost framework can improve classification performance by up to 6% when knowledge is transferred between the CLEAR and FBK four-view head-pose datasets.
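As a hedged illustration of boosting-based transfer with only a few labeled target samples, the sketch below implements a TrAdaBoost-style instance-weighting loop on decision stumps; it is a related, generic technique and not the authors' Logitboost-based Xferboost. Labels are assumed binary (0/1) and all parameter values are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost_sketch(Xs, ys, Xt, yt, n_rounds=20):
    """Instance-weighting transfer boosting (TrAdaBoost-style).

    Misclassified source samples are progressively down-weighted, while
    misclassified target samples are up-weighted as in AdaBoost, so later
    rounds focus on the target distribution."""
    X = np.vstack([Xs, Xt]); y = np.concatenate([ys, yt])
    ns = len(ys)
    w = np.ones(len(y)) / len(y)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(ns) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        err = (h.predict(X) != y).astype(float)
        # Weighted error on the target portion only.
        eps = np.clip(np.sum(w[ns:] * err[ns:]) / np.sum(w[ns:]), 1e-10, 0.499)
        beta_t = eps / (1.0 - eps)
        w[:ns] *= beta_src ** err[:ns]        # shrink misclassified source samples
        w[ns:] *= beta_t ** (-err[ns:])       # grow misclassified target samples
        w /= w.sum()
        learners.append(h); betas.append(beta_t)

    def predict(Xq):
        # Weighted vote of the later, target-adapted rounds.
        half = n_rounds // 2
        votes = sum(np.log(1.0 / b) * (l.predict(Xq) * 2 - 1)
                    for l, b in zip(learners[half:], betas[half:]))
        return (votes > 0).astype(int)
    return predict
```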
Abstract:
In animal populations, the constraints of energy and time can cause intraspecific variation in foraging behaviour. The proximate developmental mediators of such variation are often the mechanisms underlying perception and associative learning. Here, experience-dependent changes in foraging behaviour and their consequences were investigated in an urban population of free-ranging dogs, Canis familiaris, by continually challenging them with the task of extracting food from specially crafted packets. Typically, males and pregnant/lactating (PL) females extracted food using the sophisticated 'gap widening' technique, whereas non-pregnant/non-lactating (NPNL) females used the relatively underdeveloped 'rip opening' technique. In contrast to most males and PL females (and a few NPNL females), which repeatedly used the gap-widening technique and improved their food-extraction performance with experience, most NPNL females (and a few males and PL females) used the two extraction techniques non-preferentially and did not improve over successive trials. Furthermore, the dogs' ability to extract food in the sophisticated manner was positively related to their ability to improve their performance with experience. Collectively, these findings demonstrate that factors such as sex and physiological state can cause differences among individuals in the likelihood of learning new information and hence in the rate of resource acquisition and monopolization.
Abstract:
The design methodology for flexible pavements needs to address the mechanisms of pavement failure and the loading intensities, and also to develop suitable approaches for evaluating pavement performance. In recent years, the use of geocells to improve pavement performance has been receiving considerable attention. This paper studies the influence of geocells on the required pavement thickness when they are placed below the granular layers (base and sub-base) and above the subgrade. The reduction in thickness here refers to the reduction in the thickness of the GSB (Granular Sub-base) layer, with the possibility of eliminating it altogether. To facilitate the analysis, a simple linear elastic approach is used, considering six sections given in the Indian Roads Congress (IRC) code. All analyses were carried out using the pavement analysis package KENPAVE. The results show that the use of geocells enables a reduction in pavement thickness.
Abstract:
H.264/Advanced Video Coding (AVC) surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static-camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating-point computations. The storage pattern of the parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low-computational-cost algorithm to choose the reference frames. The proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit rate savings of up to 94.5% over methods proposed in the literature on video surveillance data sets. The proposed techniques also provide up to a 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence.
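A generic sketch of the per-pixel mixture bookkeeping that underlies such Skip decisions is given below, using standard Stauffer-Grimson-style updates on grey-level pixels: a pixel whose value matches a high-weight background mode is a candidate for Skip. The parameter values and floating-point form are illustrative assumptions and do not reflect the paper's modified, sampling-based, cache-friendly updates.

```python
import numpy as np

class PixelGMM:
    """Per-pixel Gaussian mixture background model (generic form)."""

    def __init__(self, k=3, alpha=0.01, match_sigmas=2.5, bg_threshold=0.7):
        self.w = np.ones(k) / k            # mode weights
        self.mu = np.linspace(0, 255, k)   # mode means (grey level)
        self.var = np.full(k, 225.0)       # mode variances
        self.alpha, self.match_sigmas, self.bg_threshold = alpha, match_sigmas, bg_threshold

    def update(self, x):
        """Update the mixture with observation x; return True if x is background."""
        d2 = (x - self.mu) ** 2
        matched = d2 < (self.match_sigmas ** 2) * self.var
        m = np.zeros_like(self.w)
        if matched.any():
            k = np.argmax(matched * self.w / np.sqrt(self.var))  # best matching mode
            m[k] = 1.0
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * (d2[k] - self.var[k])
        else:
            k = np.argmin(self.w)                                # replace weakest mode
            self.mu[k], self.var[k] = x, 225.0
        self.w = (1 - self.alpha) * self.w + self.alpha * m      # weight update
        self.w /= self.w.sum()
        # Background if the matched mode is in the dominant (background) set.
        order = np.argsort(-self.w / np.sqrt(self.var))
        csum = np.cumsum(self.w[order])
        n_bg = int(np.searchsorted(csum, self.bg_threshold)) + 1
        return bool(matched.any()) and k in order[:n_bg]
```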
Abstract:
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including the Gaussian, Cauchy and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for the optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
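A one-simulation smoothed functional gradient estimate has the generic form ∇f(x) ≈ (1/β) E[η (f(x + βη) − f(x))] for a smoothing perturbation η. The sketch below uses a standard Gaussian kernel (the q → 1 limit of the q-Gaussian family studied in the article) purely for illustration; β and the sample count are arbitrary choices.

```python
import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=200, rng=None):
    """Smoothed-functional gradient estimate with a Gaussian kernel.

    Approximates grad f(x) by averaging eta * (f(x + beta*eta) - f(x)) / beta
    over perturbations eta ~ N(0, I). The q-Gaussian kernels of the article
    generalize this choice; beta and n_samples are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    d = len(x)
    grad = np.zeros(d)
    fx = f(x)
    for _ in range(n_samples):
        eta = rng.standard_normal(d)
        grad += eta * (f(x + beta * eta) - fx) / beta
    return grad / n_samples

# Sanity check on a quadratic: the true gradient at x0 is 2*x0.
x0 = np.array([1.0, -2.0])
print(sf_gradient(lambda x: np.sum(x ** 2), x0))   # roughly [2, -4]
```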
Abstract:
This paper considers cooperative spectrum sensing algorithms for Cognitive Radios that focus on reducing the number of samples needed to make a reliable detection. We propose algorithms based on decentralized sequential hypothesis testing, in which the Cognitive Radios sequentially collect observations, make local decisions and send them to the fusion center for further processing to make a final decision on spectrum usage. The reporting channel between the Cognitive Radios and the fusion center is modelled, more realistically, as a Multiple Access Channel (MAC) with receiver noise. Furthermore, the communication for reporting is limited, thereby reducing the communication cost. We start with an algorithm in which the fusion center uses an SPRT-like (Sequential Probability Ratio Test) procedure and theoretically analyze its performance. Asymptotically, its performance is close to that of the optimal centralized test without fusion center noise. We further modify this algorithm to improve its performance at practical operating points. Finally, we generalize these algorithms to handle uncertainties in SNR and fading. (C) 2014 Elsevier B.V. All rights reserved.
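The fusion-center procedure referred to above follows the classical SPRT pattern: accumulate log-likelihood ratios of the incoming observations and stop as soon as either threshold is crossed. The sketch below uses a simple Gaussian observation model and generic error targets; it illustrates the SPRT mechanics only, not the paper's MAC model or its decentralized extensions.

```python
import numpy as np

def sprt(observations, llr, alpha=0.01, beta_err=0.01):
    """Sequential probability ratio test (Wald).

    llr(obs) returns log p1(obs)/p0(obs). The test stops when the running
    sum crosses log((1 - beta_err)/alpha) (decide H1) or
    log(beta_err/(1 - alpha)) (decide H0).
    """
    upper = np.log((1 - beta_err) / alpha)
    lower = np.log(beta_err / (1 - alpha))
    s, n = 0.0, 0
    for obs in observations:
        n += 1
        s += llr(obs)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n

# Illustrative model: fusion-center input is N(0, 1) under H0 (channel idle)
# and N(0.5, 1) under H1 (primary user present).
rng = np.random.default_rng(3)
mu1 = 0.5
llr = lambda y: mu1 * y - mu1 ** 2 / 2                           # Gaussian LLR
stream = (rng.standard_normal() + mu1 for _ in range(10_000))    # data under H1
print(sprt(stream, llr))   # typically decides "H1" after a few tens of samples
```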
Abstract:
As an alternative to the gold-standard TiO2 photocatalyst, the use of zinc oxide (ZnO) as a robust candidate for wastewater treatment is widespread due to its similarity to TiO2 in charge carrier dynamics upon bandgap excitation and in the generation of reactive oxygen species in aqueous suspensions. However, the large bandgap of ZnO, the massive charge carrier recombination, and the photoinduced corrosion-dissolution at extreme pH conditions, together with the formation of inert Zn(OH)2 during photocatalytic reactions, act as barriers to its extensive applicability. To this end, research has been intensified to improve the performance of ZnO by tailoring its surface-bulk structure and by altering its photogenerated charge transfer pathways, with the intention of inhibiting surface-bulk charge carrier recombination. For the first time, the several strategies that have been successfully employed to improve the photoactivity and stability of ZnO, such as tailoring intrinsic defects, surface modification with organic compounds, doping with foreign ions, noble metal deposition, heterostructuring with other semiconductors and modification with carbon nanostructures, are critically reviewed. Such modifications enhance charge separation and facilitate the generation of reactive oxygenated free radicals, as well as the interaction with pollutant molecules. The synthetic routes to obtain hierarchical nanostructured morphologies and their impact on photocatalytic performance are explained by considering the morphological influence and the defect-rich chemistry of ZnO. Finally, the crystal facet engineering of polar and non-polar facets and its relevance in photocatalysis is outlined. It is with this intention that the present review aims to direct the further design, tailoring and tuning of the physico-chemical and optoelectronic properties of ZnO for better applications, ranging from photocatalysis to photovoltaics.
Abstract:
It has been shown that iterative re-weighted strategies often improve the performance of many sparse reconstruction algorithms. However, these strategies are algorithm-dependent and cannot easily be extended to an arbitrary sparse reconstruction algorithm. In this paper, we propose a general iterative framework and a novel algorithm that iteratively enhance the performance of any given sparse reconstruction algorithm. We theoretically analyze the proposed method using the restricted isometry property and derive sufficient conditions for convergence and performance improvement. We also evaluate the performance of the proposed method through numerical experiments with both synthetic and real-world data. (C) 2014 Elsevier B.V. All rights reserved.
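As a concrete, hedged illustration of iterative re-weighting, the sketch below wraps a weighted soft-thresholding (ISTA) solver in the familiar reweighted-l1 loop of Candes, Wakin and Boyd. Unlike the algorithm-agnostic framework proposed in the paper, this example is specific to an l1 solver, and all parameter values are illustrative.

```python
import numpy as np

def weighted_ista(A, y, weights, lam=0.05, n_iters=500):
    """Weighted l1 solver via iterative soft-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        z = x - step * A.T @ (A @ x - y)
        thresh = step * lam * weights
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

def reweighted_l1(A, y, n_rounds=5, eps=1e-3):
    """Iterative re-weighting: small coefficients get large weights and are
    pushed harder toward zero in the next round."""
    w = np.ones(A.shape[1])
    for _ in range(n_rounds):
        x = weighted_ista(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Toy compressed-sensing example: recover a 5-sparse vector from 40 measurements.
rng = np.random.default_rng(7)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
print("error:", np.linalg.norm(reweighted_l1(A, y) - x_true))   # typically small
```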
Abstract:
NrichD