85 results for Operation based method
at Indian Institute of Science - Bangalore - India
Abstract:
The present paper studies the performance characteristics of a subspace-based algorithm for source localization in shallow water, such as coastal water. Specifically, we study the performance of the Multi Image Subspace Algorithm (MISA). Through first-order perturbation analysis and computer simulation, it is shown that MISA is unbiased and statistically efficient. Further, we bring out the role of multipaths (or images) in reducing the localization error: the presence of multipaths is found to improve the range and depth estimates. This may be attributed to the increased curvature of the wavefront caused by interference from many coherent multipaths.
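The role of wavefront curvature can be seen in a toy delay-misfit search; this is not the authors' MISA, only a minimal illustration in which a surface-reflected image path is added to the direct path and range/depth are recovered by grid search (array geometry, sound speed, and source position are all assumed):

```python
import numpy as np

c = 1500.0                      # sound speed (m/s), assumed
z_rx = np.linspace(10, 60, 11)  # vertical array depths (m), assumed geometry
src_r, src_z = 800.0, 35.0      # true source range and depth (m), assumed

def delays(r, z):
    """Direct-path plus surface-image travel times to each sensor."""
    direct = np.hypot(r, z_rx - z) / c
    image = np.hypot(r, z_rx + z) / c   # mirror source above the surface
    return np.concatenate([direct, image])

obs = delays(src_r, src_z)

# Grid search over range/depth; the image path adds curvature information
# that constrains range and depth beyond what the direct path alone gives.
ranges = np.arange(500, 1101, 5.0)
depths = np.arange(5, 71, 1.0)
err = np.array([[np.sum((delays(r, z) - obs) ** 2) for z in depths] for r in ranges])
i, j = np.unravel_index(err.argmin(), err.shape)
print(f"estimated range={ranges[i]:.0f} m, depth={depths[j]:.0f} m")
```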
Abstract:
With the development of deep sequencing methodologies, it has become important to construct site saturation mutant (SSM) libraries in which every nucleotide/codon in a gene is individually randomized. We describe methodologies for the rapid, efficient, and economical construction of such libraries using inverse polymerase chain reaction (PCR). We show that if the degenerate codon is in the middle of the mutagenic primer, there is an inherent PCR bias due to the thermodynamic mismatch penalty, which decreases the proportion of unique mutants. Introducing a nucleotide bias in the primer can alleviate the problem. Alternatively, if the degenerate codon is placed at the 5' end, there is no PCR bias, which results in a higher proportion of unique mutants. This also facilitates detection of deletion mutants resulting from errors during primer synthesis. This method can be used to rapidly generate SSM libraries for any gene or nucleotide sequence, which can subsequently be screened and analyzed by deep sequencing.
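A minimal sketch of the primer-design step, assuming the bias-free placement described above (degenerate NNK codon at the 5' end of the forward inverse-PCR primer); the gene sequence and the 21-nt annealing length are hypothetical placeholders, and annealing regions are truncated near the ends of the toy gene:

```python
# Inverse-PCR primer pairs for an SSM library, degenerate codon at the 5' end
# of the forward primer. GENE and ANNEAL are illustrative assumptions.
GENE = "ATGGCTAAGGTTCTGACCGAAGGTCGTATCCCGGAAGATTGA"  # toy 14-codon ORF
ANNEAL = 21

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def ssm_primers(gene, codon_index):
    """Primer pair that replaces one codon with NNK via inverse PCR."""
    start = 3 * codon_index
    fwd = "NNK" + gene[start + 3 : start + 3 + ANNEAL]   # degenerate codon at 5' end
    rev = revcomp(gene[max(0, start - ANNEAL) : start])  # anneals upstream, back-to-back
    return fwd, rev

for i in range(1, 4):  # randomize codons 2-4 of the toy gene
    f, r = ssm_primers(GENE, i)
    print(f"codon {i + 1}: fwd={f} rev={r}")
```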
Abstract:
In the domain of manual mechanical assembly, expert knowledge is an important means of supporting assembly planning that leads to fewer issues during actual assembly. Knowledge based systems can be used to provide assembly planners with expert knowledge as advice. However, acquisition of knowledge remains a difficult task to automate, while manual acquisition is tedious, time-consuming, and requires the engagement of knowledge engineers with specialist knowledge to understand and translate expert knowledge. This paper describes the development, implementation, and preliminary evaluation of a method that asks an expert a series of questions, so as to automatically acquire the necessary diagnostic and remedial knowledge as rules for use in a knowledge based system that helps assembly planners diagnose and resolve issues. The method, called a questioning procedure, organizes its questions around an assembly situation, which it presents to the expert as the context, and adapts its questions based on the answers it receives from the expert.
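A minimal sketch of an adaptive questioning loop of this flavor, assuming a toy question set and rule format (both invented for illustration, not taken from the paper): the next question depends on the previous answer, and the answers are assembled into an if-then rule.

```python
def ask(prompt, answers):
    return answers[prompt]  # stand-in for interactive expert input

def acquire_rule(situation, answers):
    """Build one diagnostic/remedial rule around an assembly situation."""
    conditions = [f"situation is '{situation}'"]
    symptom = ask("What issue can arise here?", answers)
    conditions.append(f"issue is '{symptom}'")
    # adaptive step: the follow-up question depends on the previous answer
    if ask(f"Is '{symptom}' caused by part geometry?", answers) == "yes":
        cause = ask("Which geometric feature causes it?", answers)
    else:
        cause = ask("Which process step causes it?", answers)
    remedy = ask(f"How should a planner resolve '{cause}'?", answers)
    return {"if": conditions + [f"cause is '{cause}'"], "then": remedy}

expert = {  # canned expert answers, purely hypothetical
    "What issue can arise here?": "fastener misalignment",
    "Is 'fastener misalignment' caused by part geometry?": "yes",
    "Which geometric feature causes it?": "oversized clearance hole",
    "How should a planner resolve 'oversized clearance hole'?": "use a locating pin before fastening",
}
print(acquire_rule("bolt insertion into bracket", expert))
```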
Abstract:
In this study, we report the gas sensing behavior of BiNbO4 nanopowder prepared by a simple low-temperature solution-based method. Before the sensing study, the as-synthesized nanopowder was characterized by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, UV diffuse reflectance spectroscopy, impedance analysis, and surface area measurement. The NH3 sensing behavior of BiNbO4 was then studied by temperature modulation (50-350 degrees C) as well as concentration modulation (20-140 ppm). At the optimum operating temperature of 325 degrees C, the sensitivity was measured to be 90%. The cross-sensitivity of the as-synthesized BiNbO4 sensor was also investigated by assessing its sensing behavior toward other gases such as hydrogen sulphide (H2S), ethanol (C2H5OH), and liquefied petroleum gas (LPG). Finally, the selectivity of the sensing material toward NH3 was characterized by observing the sensor response over gas concentrations in the range 20-140 ppm. The response and recovery times for NH3 sensing at 120 ppm were about 16 s and 17 s, respectively.
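As a worked example of the reported figure, using the common response definition S = (Ra - Rg)/Ra x 100 for a reducing gas on such oxides (the paper's exact definition may differ):

```python
def sensitivity(r_air, r_gas):
    """Sensor response from baseline (air) and in-gas resistance, in percent."""
    return (r_air - r_gas) / r_air * 100.0

# hypothetical resistances chosen to reproduce the reported 90% figure
print(sensitivity(10e6, 1e6))  # -> 90.0
```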
Abstract:
Recently, efficient scheduling algorithms based on Lagrangian relaxation have been proposed for scheduling parallel machine systems and job shops. In this article, we develop real-world extensions to these scheduling methods. In the first part of the paper, we consider the problem of scheduling single-operation jobs on parallel identical machines and extend the methodology to handle multiple classes of jobs, taking into account setup times and setup costs. The proposed methodology uses Lagrangian relaxation and simulated annealing in a hybrid framework. In the second part of the paper, we consider a Lagrangian relaxation based method for scheduling job shops and extend it to obtain a scheduling methodology for a real-world flexible manufacturing system with centralized material handling.
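The simulated annealing ingredient can be sketched on a toy instance of the first problem: single-operation jobs with class-dependent setup times on identical parallel machines. This is not the paper's hybrid Lagrangian framework, only the annealing move/accept loop under assumed data:

```python
import math, random

random.seed(0)
# toy instance: (processing time, job class); a class change on a machine costs a setup
jobs = [(3, 0), (2, 1), (4, 0), (1, 1), (5, 2), (2, 2), (3, 1), (4, 2)]
M, SETUP = 3, 2.0

def makespan(assign):
    loads = [0.0] * M
    last = [None] * M
    # process each machine's jobs in index order, paying setups on class changes
    for j, m in enumerate(assign):
        p, cls = jobs[j]
        loads[m] += p + (SETUP if last[m] not in (None, cls) else 0.0)
        last[m] = cls
    return max(loads)

assign = [random.randrange(M) for _ in jobs]
best, best_cost = assign[:], makespan(assign)
T = 5.0
for step in range(2000):
    cand = assign[:]
    cand[random.randrange(len(jobs))] = random.randrange(M)  # move one job
    d = makespan(cand) - makespan(assign)
    if d < 0 or random.random() < math.exp(-d / T):  # Metropolis acceptance
        assign = cand
        if makespan(assign) < best_cost:
            best, best_cost = assign[:], makespan(assign)
    T *= 0.999  # geometric cooling
print(best_cost, best)
```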
Abstract:
This paper presents an approach for automatic road extraction in an urban region using structural, spectral, and geometric characteristics of roads. Roads are extracted in two stages: pre-processing and road extraction. Initially, the image is pre-processed to improve tolerance by reducing clutter (which mostly represents buildings, parking lots, vegetation regions, and other open spaces). Road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized cut method is a graph-based method that generates an optimal partition of road segments. The performance (quality measures) of road extraction using TPA and the Normalized cut method is compared. The experimental results show that the Normalized cut method is efficient in extracting road segments in an urban region from high-resolution satellite imagery.
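The graph-partitioning step can be illustrated with the standard normalized-cut relaxation, solving the generalized eigenproblem (D - W) y = lambda D y and thresholding the second eigenvector; the six scalar "pixels" below stand in for road/background features and are purely illustrative:

```python
import numpy as np
from scipy.linalg import eigh

# Toy "pixels": two intensity clusters standing in for road vs. background.
x = np.array([0.1, 0.15, 0.12, 0.8, 0.85, 0.9])
W = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)  # similarity graph
D = np.diag(W.sum(axis=1))

# Normalized cut relaxation: (D - W) y = lambda * D y; the eigenvector for
# the second-smallest eigenvalue gives the (relaxed) optimal bipartition.
vals, vecs = eigh(D - W, D)
partition = vecs[:, 1] > 0
print(partition)  # True/False split of the six "pixels" into two segments
```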
Abstract:
This paper presents a wave propagation based method for identifying damage due to skin-stiffener debonding in a stiffened structure. First, a spectral finite element model (SFEM) is developed for modeling wave propagation in general built-up structures, using the concept of assembling 2D spectral plate elements; the model is then used to simulate wave propagation in a skin-stiffener type structure. The damage force indicator (DFI) technique, which is derived from the dynamic stiffness matrix of the healthy stiffened structure (obtained from the SFEM model) along with the nodal displacements of the debonded stiffened structure (obtained from a 2D finite element model), is used to identify the damage due to the presence of a debond in the stiffened structure.
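The DFI idea can be sketched on a toy 1D spring chain (not the paper's SFEM/2D FE models): multiplying the healthy stiffness matrix by the damaged displacement field leaves a residual force that is concentrated at the damaged element:

```python
import numpy as np

def chain_stiffness(k):
    """Assemble the stiffness matrix of a grounded spring chain (spring i
    connects node i-1 to node i; spring 0 connects node 0 to ground)."""
    n = len(k)
    K = np.zeros((n, n))
    for i, ki in enumerate(k):
        K[i, i] += ki
        if i > 0:
            K[i - 1, i - 1] += ki
            K[i - 1, i] -= ki
            K[i, i - 1] -= ki
    return K

k_healthy = np.ones(6)
k_damaged = k_healthy.copy()
k_damaged[3] = 0.4                     # "debond": stiffness loss in spring 4

f = np.zeros(6); f[-1] = 1.0           # tip load
u_damaged = np.linalg.solve(chain_stiffness(k_damaged), f)

# DFI idea: healthy stiffness times the damaged response, minus the applied
# force, leaves a residual concentrated at the damaged element's nodes.
dfi = np.abs(chain_stiffness(k_healthy) @ u_damaged - f)
print(np.round(dfi, 3))   # peaks at the two nodes of the damaged spring
```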
Abstract:
Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. Designing a GP classifier and making predictions with it are, however, computationally demanding, especially when the training set size is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure for both basis vector selection and hyperparameter adaptation. Experimental results on several real-world benchmark data sets show better or comparable generalization performance over existing methods.
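For binary labels, the NLP measure on a validation set is simply the average negative log of the predictive probability assigned to the true label; a minimal sketch (predictive probabilities are hypothetical stand-ins for a sparse GP classifier's output):

```python
import numpy as np

def nlp_loss(p_pred, y):
    """Negative log predictive loss for binary labels y in {0, 1},
    given predictive probabilities p_pred = P(y=1 | x)."""
    p = np.clip(p_pred, 1e-12, 1 - 1e-12)  # numerical safety
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# hypothetical validation-set predictions and labels
print(nlp_loss(np.array([0.9, 0.2, 0.7]), np.array([1, 0, 1])))
```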
Abstract:
Further improvement in performance, to achieve near-transparent quality LSF quantization, is shown to be possible by using a higher order two-dimensional (2-D) prediction in the coefficient domain. The prediction is performed in a closed-loop manner, so that the LSF reconstruction error is the same as the quantization error of the prediction residual. We show that an optimum 2-D predictor, exploiting both inter-frame and intra-frame correlations, performs better than existing predictive methods. A computationally efficient split vector quantization technique is used to implement the proposed 2-D prediction based method. We show further improvement in performance by using a weighted Euclidean distance.
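The closed-loop property can be demonstrated in one dimension with a first-order predictor and a uniform scalar quantizer standing in for the paper's 2-D predictor and split VQ: because prediction uses the reconstructed past, the per-sample reconstruction error equals the residual quantization error:

```python
import numpy as np

def quantize(e, step=0.1):
    return step * np.round(e / step)     # stand-in for the split VQ

x = np.sin(np.linspace(0, 3, 40))        # toy "LSF trajectory"
a = 0.95                                 # assumed first-order predictor coefficient
x_hat = np.zeros_like(x)
for n in range(len(x)):
    pred = a * x_hat[n - 1] if n > 0 else 0.0  # predict from the *reconstructed* past
    e = x[n] - pred
    x_hat[n] = pred + quantize(e)              # closed loop

# reconstruction error equals the residual quantization error, <= step/2
print(np.max(np.abs(x - x_hat)))  # bounded by 0.05
```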
Abstract:
Two methods based on wavelet/wavelet packet expansion to denoise and compress optical tomography data containing scattered noise are presented. In the first, the wavelet expansion coefficients of noisy data are shrunk using a soft threshold. In the second, the data are expanded into a wavelet packet tree, upon which a best basis search is done. The resulting coefficients are truncated on the basis of energy content. The first method results in efficient denoising of experimental data when the scattering particle density in the medium surrounding the object is up to 12.0 x 10^6 per cm^3, and achieves a compression ratio of approximately 8:1. The wavelet packet based method results in compression of up to 11:1 and also exhibits reasonable noise reduction capability. Tomographic reconstructions obtained from denoised data are presented.
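A minimal sketch of the first method using PyWavelets, with an assumed db4 wavelet, 5-level decomposition, and the universal threshold (the paper's exact choices are not specified here):

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.5) ** 2) / 0.01)          # toy tomography trace
noisy = clean + 0.1 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate from finest scale
thresh = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                 for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```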
Abstract:
The method of stress characteristics has been employed to compute the end-bearing capacity of driven piles. The dependency of the soil internal friction angle on the stress level has been incorporated to achieve more realistic predictions of the end-bearing capacity of piles. The validity of the superposition assumption underlying the bearing capacity equation based on soil plasticity concepts, when applied to deep foundations, has been examined. Fourteen pile case histories were compiled with cone penetration tests (CPT) performed in the vicinity of the different pile locations. The end-bearing capacity of the piles was computed using different methods, namely, static analysis, the effective stress approach, direct CPT, and the proposed approach. The comparison between the predictions made by the different methods and the measured records shows that the stress-level-based method of stress characteristics agrees better with the experimental data. Finally, the end-bearing capacity of driven piles in sand was expressed as a general expression with a new factor that accounts for the different contributions to the bearing capacity. The influence of the soil's nonassociative flow rule has also been included to achieve more realistic results.
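The stress-level effect can be illustrated through the classical bearing-capacity factor N_q = e^(pi tan phi) tan^2(45 deg + phi/2): as the stress level at the pile tip grows, the sand's friction angle drops, and N_q falls steeply. The phi values below are illustrative only:

```python
import numpy as np

def N_q(phi_deg):
    """Classical bearing-capacity factor N_q = e^(pi*tan(phi)) * tan^2(45 + phi/2)."""
    phi = np.radians(phi_deg)
    return np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4 + phi / 2) ** 2

# Hypothetical friction angles of a sand at increasing stress levels.
for phi in (42, 38, 34):
    print(f"phi = {phi} deg -> N_q = {N_q(phi):.0f}")
```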
Abstract:
In this paper, a model for a composite beam with embedded delamination is developed using the wavelet based spectral finite element (WSFE) method, particularly for damage detection using wave propagation analysis. The simulated responses are used as surrogate experimental results for the inverse problem of damage detection using wavelet filtering. The WSFE technique is very similar to the fast Fourier transform (FFT) based spectral finite element (FSFE), except that it uses a compactly supported Daubechies scaling function approximation in time. Unlike the FSFE formulation, with its periodicity assumption, the wavelet-based method allows imposition of initial values and is thus free from wrap-around problems. This helps in the analysis of finite-length undamped structures, where the FSFE method fails to simulate an accurate response. First, numerical experiments are performed to study the effect of delamination on the wave propagation characteristics. The responses are simulated for different delamination configurations for both broad-band and narrow-band excitations. Next, the simulated responses are used for damage detection using wavelet analysis.
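The wrap-around artifact that WSFE avoids is easy to demonstrate: shifting a signal with the FFT is circular, so a pulse delayed past the end of a finite window reappears at its start (a toy illustration, not the SFEM formulation):

```python
import numpy as np

pulse = np.zeros(64)
pulse[50:54] = 1.0               # pulse near the end of the finite window

# Delaying via the FFT is circular: the delayed pulse wraps back to the
# start of the window, which is the artifact the wavelet-based SFEM avoids.
delay = 20
k = np.fft.fftfreq(pulse.size)
shifted = np.fft.ifft(np.fft.fft(pulse) * np.exp(-2j * np.pi * k * delay)).real
print(np.nonzero(shifted > 0.5)[0])   # indices have wrapped past the window end
```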
Abstract:
Frequent episode discovery is a popular framework in temporal data mining with many applications. Over the years, many different notions of the frequency of episodes have been proposed, along with different algorithms for episode discovery. In this paper, we present a unified view of all the apriori-based discovery methods for serial episodes under these different notions of frequency. Specifically, we present a unified view of the various frequency counting algorithms. We propose a generic counting algorithm such that all current algorithms are special cases of it. This unified view allows one to gain insights into the different frequencies, and we present quantitative relationships among them. Our unified view also helps in obtaining correctness proofs for the various counting algorithms, as we show here. It also aids in understanding and obtaining the anti-monotonicity properties satisfied by the various frequencies, the properties exploited by the candidate generation step of any apriori-based method. We also point out how our unified view of counting helps in generalizing the algorithm to count episodes with general partial orders.
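One of the frequency notions covered by such a unified view, the non-overlapped count for a serial episode, can be computed with a single left-to-right automaton pass; a minimal sketch with a hypothetical event encoding:

```python
def count_non_overlapped(events, episode):
    """Count non-overlapped occurrences of a serial episode in an event
    sequence with a single automaton pass (one frequency notion among
    the several that the unified view covers)."""
    count, state = 0, 0
    for e in events:
        if e == episode[state]:
            state += 1
            if state == len(episode):   # full occurrence recognized
                count += 1
                state = 0               # restart: occurrences cannot overlap
    return count

print(count_non_overlapped("ABCABABC", "ABC"))  # -> 2
```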
Abstract:
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
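A minimal sketch of the generalized cross-validation baseline mentioned above (the MRM itself is not reproduced here), selecting the Tikhonov parameter that minimizes GCV(lambda) = ||(I - A)y||^2 / tr(I - A)^2 on a toy Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((40, 25))          # toy Jacobian (sensitivity matrix)
y = J @ rng.standard_normal(25) + 0.05 * rng.standard_normal(40)

def gcv(lmbda):
    # influence matrix A = J (J^T J + lambda I)^{-1} J^T of the Tikhonov solution
    A = J @ np.linalg.solve(J.T @ J + lmbda * np.eye(J.shape[1]), J.T)
    r = (np.eye(len(y)) - A) @ y
    return (r @ r) / np.trace(np.eye(len(y)) - A) ** 2

lambdas = np.logspace(-6, 2, 50)
best = min(lambdas, key=gcv)
print(f"GCV-selected regularization parameter: {best:.2e}")
```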