163 results for iterative methods
Abstract:
Three-component chiral derivatization protocols have been developed for H-1, C-13 and F-19 NMR spectroscopic discrimination of chiral diacids through their coordination and self-assembly with optically active (R)-alpha-methylbenzylamine and 2-formylphenylboronic acid or 3-fluoro-2-formylphenylboronic acid. These protocols yield a mixture of diastereomeric imino-boronate esters that are identified by well-resolved diastereotopic peaks, with chemical shift differences of up to 0.6 and 2.1 ppm in the corresponding H-1 and F-19 NMR spectra, without any racemization or kinetic resolution, thereby enabling the determination of enantiopurity. A protocol has also been developed for the discrimination of chiral alpha-methyl amines, using optically pure trans-1,2-cyclohexanedicarboxylic acid in combination with 2-formylphenylboronic acid or 3-fluoro-2-formylphenylboronic acid. The proposed strategies have been demonstrated on a large number of chiral diacids and chiral alpha-methyl amines.
Abstract:
Precision inspection of manufactured components having multiple complex surfaces and variable tolerance definitions is an involved, complex and time-consuming function. In routine practice, a jig is used to present the part in a known reference frame for the inspection process. Jigs involve both time and cost in their development, manufacture and use. This paper describes 'as-is-where-is inspection' (AIWIN), a new automated inspection technique that accelerates the inspection process by carrying out a fast registration procedure and establishing a quick correspondence between the part to be inspected and its CAD geometry. The main challenge in doing away with a jig is that the inspection reference frame can be far removed from the CAD frame. Traditional techniques based on the iterative closest point (ICP) or Newton methods either require a large number of iterations for convergence or fail in such a situation. A two-step coarse registration process is proposed to provide a good initial guess for a modified ICP algorithm developed earlier (Ravishankar et al., Int J Adv Manuf Technol 46(1-4):227-236, 2010). The first step uses a calibrated sphere for local hard registration and fixing of the translation error. This transformation locates the centre of the sphere in the CAD frame. In the second step, the inverse transformation (involving pure rotation about multiple axes) required to align the inspection points measured on the manufactured part with the CAD point dataset of the model is determined and enforced. This completes the coarse registration, enabling fast convergence of the modified ICP algorithm. The new technique has been implemented on complex freeform machined components, and the inspection results clearly show that the process is precise and reliable with rapid convergence. © 2011 Springer-Verlag London Limited.
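Below is a minimal Python sketch of the closest-point refinement stage that a modified ICP algorithm of this kind builds on, assuming the coarse registration has already brought the measured points close to the CAD point set. It is not the authors' implementation; the array names (meas_pts, cad_pts), the SVD-based best-fit step and the convergence tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(meas_pts, cad_pts, max_iter=50, tol=1e-6):
    """Refine alignment of measured points onto the CAD point set by closest-point iteration."""
    tree = cKDTree(cad_pts)
    src, prev_err = meas_pts.copy(), np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                   # closest-point correspondences
        R, t = best_fit_transform(src, cad_pts[idx])
        src = src @ R.T + t                           # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:                 # negligible improvement: stop
            break
        prev_err = err
    return src, err
```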
Abstract:
Comparison of multiple protein structures has a broad range of applications in the analysis of protein structure, function and evolution. Multiple structure alignment tools (MSTAs) are necessary to obtain a simultaneous comparison of a family of related folds. In this study, we have developed a method for multiple structure comparison largely based on sequence alignment techniques. A widely used structural alphabet, Protein Blocks (PBs), was used to transform the information on 3D protein backbone conformation into a 1D sequence string. A progressive alignment strategy similar to CLUSTALW was adopted for multiple PB sequence alignment (mulPBA). Highly similar stretches identified by the pairwise alignments are given higher weights during the alignment. The residue equivalences from the PB-based alignments are used to obtain a three-dimensional fit of the structures, followed by an iterative refinement of the structural superposition. Systematic comparisons using benchmark datasets of MSTAs show that the alignment quality is better than MULTIPROT, MUSTANG and the alignments in HOMSTRAD in more than 85% of the cases. Comparison with other rigid-body and flexible MSTAs also indicates that mulPBA alignments are superior to most of the rigid-body MSTAs and highly comparable to the flexible alignment methods. (C) 2012 Elsevier Masson SAS. All rights reserved.
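The final superposition stage described above can be illustrated with a rough Python sketch: equivalent C-alpha coordinates obtained from the PB sequence alignment are superposed, and the fit is refined iteratively by discarding poorly superposed pairs. This is not the mulPBA code; the 4 Å cutoff, the use of scipy's Rotation.align_vectors and the stopping rules are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def refine_superposition(mobile, reference, cutoff=4.0, max_rounds=10):
    """Iteratively superpose equivalent C-alpha coordinates (two N x 3 arrays)."""
    keep = np.ones(len(mobile), dtype=bool)
    for _ in range(max_rounds):
        mob_mean = mobile[keep].mean(axis=0)
        ref_mean = reference[keep].mean(axis=0)
        # least-squares rotation aligning the currently kept (core) pairs
        rot, _ = Rotation.align_vectors(reference[keep] - ref_mean, mobile[keep] - mob_mean)
        fitted = rot.apply(mobile - mob_mean) + ref_mean
        dists = np.linalg.norm(fitted - reference, axis=1)
        new_keep = dists < cutoff                     # drop divergent residue pairs
        if new_keep.sum() < 3 or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    core_rmsd = np.sqrt(np.mean(dists[keep] ** 2))
    return fitted, core_rmsd
```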
Abstract:
This paper presents a detailed investigation of the effects of piezoelectricity, spontaneous polarization and charge density on the electronic states and the quasi-Fermi level energy in wurtzite-type semiconductor heterojunctions. This has required a full solution of the coupled Schrodinger-Poisson-Navier model, as a generalization of earlier work on the Schrodinger-Poisson problem. Finite-element-based simulations have been performed on an AlN/GaN quantum well using both a one-step calculation and a self-consistent iterative scheme. Results have been provided for field distributions corresponding to cases with zero-displacement boundary conditions and also stress-free boundary conditions. It has been further demonstrated, using four case-study examples, that a complete self-consistent coupling of the electromechanical fields is essential to accurately capture the electromechanical fields and electronic wavefunctions. We have demonstrated that electronic energies can change by up to approximately 0.5 eV when comparing partial and complete coupling of the electromechanical fields. Similarly, wavefunctions are significantly altered when following a self-consistent procedure as opposed to the partial-coupling case usually considered in the literature. Hence, a complete self-consistent procedure is necessary when addressing problems that require more accurate results on the optoelectronic properties of low-dimensional nanostructures than those obtainable with conventional methodologies.
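The self-consistent iterative scheme mentioned above can be illustrated schematically in one dimension: the Schrodinger and Poisson equations are solved alternately until the potential stops changing. The paper's actual model is a finite-element Schrodinger-Poisson-Navier system with electromechanical coupling; the grid, material constants, charge scaling and mixing factor below are purely illustrative assumptions.

```python
import numpy as np

hbar2_2m = 0.0381          # ~hbar^2/(2*m0) in eV*nm^2 (effective mass ignored)
eps = 10.0                 # illustrative permittivity scale
n_2d = 0.05                # illustrative charge scaling so the Hartree term stays small
n_grid, length = 200, 20.0
z = np.linspace(0.0, length, n_grid)
dz = z[1] - z[0]
v_ext = np.where((z > 8.0) & (z < 12.0), 0.0, 0.5)   # 4 nm well with 0.5 eV barriers

def solve_schrodinger(v):
    """Ground state of -hbar^2/(2m) d2/dz2 + v with zero boundary values."""
    main = 2.0 * hbar2_2m / dz**2 + v[1:-1]
    off = -hbar2_2m / dz**2 * np.ones(n_grid - 3)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    energies, vecs = np.linalg.eigh(H)
    phi = np.zeros(n_grid)
    phi[1:-1] = vecs[:, 0] / np.sqrt(np.sum(vecs[:, 0] ** 2) * dz)   # normalise
    return energies[0], phi

def solve_poisson(rho):
    """Solve -d2V/dz2 = rho/eps with V = 0 at both ends (finite differences)."""
    A = (np.diag(-2.0 * np.ones(n_grid - 2)) + np.diag(np.ones(n_grid - 3), 1)
         + np.diag(np.ones(n_grid - 3), -1)) / dz**2
    v = np.zeros(n_grid)
    v[1:-1] = np.linalg.solve(A, -rho[1:-1] / eps)
    return v

v_h = np.zeros(n_grid)
for it in range(100):
    e0, phi = solve_schrodinger(v_ext + v_h)
    v_new = solve_poisson(n_2d * phi ** 2)            # density of one confined electron
    if np.max(np.abs(v_new - v_h)) < 1e-4:            # self-consistency reached
        break
    v_h = 0.5 * v_h + 0.5 * v_new                     # linear mixing for stability
print(f"self-consistent ground-state energy after {it + 1} iterations: {e0:.3f} eV")
```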
Abstract:
Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ l(2)-norm-based regularization, which is known to remove the high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, where nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependency of the solution between successive frames results in a linear inverse problem. This new framework, combined with l(1)-norm-based regularization, can provide better robustness to noise and better contrast recovery than conventional l(2)-based techniques. Moreover, it is shown that the proposed l(1)-based technique is computationally efficient compared to its l(2)-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame, and any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
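A minimal sketch of the frame-to-frame idea described above: the change between successive frames is recovered from a linear model with an l(1) penalty, here solved by plain iterative soft-thresholding (ISTA) rather than the solver used by the authors. The Jacobian J, step size and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l1(J, d, lam=1e-3, n_iter=200):
    """Minimise 0.5*||J x - d||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(J, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(J.shape[1])
    for _ in range(n_iter):
        grad = J.T @ (J @ x - d)
        x = soft_threshold(x - step * grad, step * lam)
    return x

def reconstruct_frames(J, frame_data, x0, lam=1e-3):
    """Propagate the solution across frames: x_k = x_(k-1) + a sparse update."""
    xs, x = [x0], x0
    for d in frame_data[1:]:
        delta = ista_l1(J, d - J @ x, lam)        # data change explained by a sparse change
        x = x + delta
        xs.append(x)
    return xs
```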
Abstract:
Electrical failure of insulation is known to be an extremal random process wherein nominally identical pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. In order to estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses, and to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress has been assumed to be a temperature-dependent inverse power law model.
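The kind of statistical processing described above can be sketched as follows: failure times at each accelerated stress level are fitted to a two-parameter Weibull distribution, and the characteristic life is then regressed against stress under an inverse power law (the temperature dependence is omitted here). The failure times and stress levels in the sketch are hypothetical, not data from the paper.

```python
import numpy as np
from scipy import stats

# hypothetical times-to-failure (hours) at three accelerated stress levels (kV/mm)
data = {20.0: [180, 260, 340, 410, 520],
        25.0: [60, 95, 130, 170, 220],
        30.0: [20, 33, 47, 60, 85]}

stresses, char_lives = [], []
for stress, times in data.items():
    shape, loc, scale = stats.weibull_min.fit(times, floc=0)   # 2-parameter Weibull fit
    stresses.append(stress)
    char_lives.append(scale)                                   # eta: 63.2% failure life

# inverse power law: life = k * stress^(-n)  =>  log(life) = log(k) - n * log(stress)
slope, intercept, r, _, _ = stats.linregress(np.log(stresses), np.log(char_lives))
n_exponent, k_const = -slope, np.exp(intercept)
print(f"Weibull characteristic lives (h): {np.round(char_lives, 1)}")
print(f"inverse power law exponent n ~ {n_exponent:.2f}, fit r^2 ~ {r**2:.3f}")
```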
Abstract:
In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can, in principle, be solved by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditsky and Nemirovski (2011). It achieves an O(1/T^2) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure datasets shows that the proposed formulations achieve the desired robustness, and the saddle point based algorithm outperforms the IP method significantly.
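A highly simplified sketch of the saddle-point view described above: the SVM dual variable is updated by projected gradient ascent while the kernel-combination weights are updated by projected gradient descent inside a norm-bounded set around the nominal weights. This is plain alternating projected gradient, not the Juditsky-Nemirovski scheme used in the paper; the dual equality constraint is dropped, and the step sizes and uncertainty radius are illustrative assumptions.

```python
import numpy as np

def robust_svm_saddle(kernels, y, mu0, radius=0.2, C=1.0, eta=0.01, n_iter=500):
    """kernels: list of (n, n) PSD arrays; y: labels in {-1, +1}; mu0: nominal weights (array)."""
    n = len(y)
    Y = np.diag(y.astype(float))
    Q = [Y @ K @ Y for K in kernels]                  # label-weighted kernel matrices
    alpha, mu = np.zeros(n), mu0.copy()
    for _ in range(n_iter):
        Kmu = sum(m * q for m, q in zip(mu, Q))
        # ascent in alpha on D(alpha, mu) = 1'a - 0.5 a'Kmu a, projected onto the box [0, C]
        alpha = np.clip(alpha + eta * (1.0 - Kmu @ alpha), 0.0, C)
        # descent in mu (adversarial kernel weights), projected back onto the uncertainty ball
        grad_mu = np.array([-0.5 * alpha @ q @ alpha for q in Q])
        mu = np.maximum(mu - eta * grad_mu, 0.0)      # keep the combination positive
        shift = mu - mu0
        norm = np.linalg.norm(shift)
        if norm > radius:                             # project onto the ball around mu0
            mu = mu0 + shift * (radius / norm)
    return alpha, mu
```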
Abstract:
We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.
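A compact sketch in the spirit of the algorithm described above: a Fienup-style iteration that alternates between enforcing the measured Fourier magnitudes and enforcing support, positivity and a hard sparsity constraint (applied here directly in the sample domain rather than in a general orthonormal basis, which is a simplification). All parameter values are illustrative assumptions.

```python
import numpy as np

def sparse_fienup(magnitude, support, K=20, n_iter=200, seed=0):
    """magnitude: measured Fourier magnitudes; support: boolean object-support mask."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support                # random positive start
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))             # impose measured magnitudes
        x = np.real(np.fft.ifft2(X))
        x = np.where(support & (x > 0), x, 0.0)              # support and positivity
        if np.count_nonzero(x) > K:                          # keep only the K largest entries
            thresh = np.sort(np.abs(x.ravel()))[-K]
            x = np.where(np.abs(x) >= thresh, x, 0.0)
    return x
```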
Abstract:
The present study performs a spatial and temporal trend analysis of annual, monthly and seasonal maximum and minimum temperatures (t(max), t(min)) in India. Recent trends in annual, monthly, winter, pre-monsoon, monsoon and post-monsoon extreme temperatures (t(max), t(min)) have been analyzed for three time slots, viz. 1901-2003, 1948-2003 and 1970-2003. For this purpose, time series of extreme temperatures for India as a whole and for seven homogeneous regions, viz. Western Himalaya (WH), Northwest (NW), Northeast (NE), North Central (NC), East coast (EC), West coast (WC) and Interior Peninsula (IP), are considered. Rigorous trend detection analysis has been carried out using a variety of non-parametric methods that account for the effect of serial correlation. During the last three decades, a trend in minimum temperature is present for all India as well as in all temperature-homogeneous regions of India, at the annual level or at some seasonal level (winter, pre-monsoon, monsoon, post-monsoon). The results agree with the earlier observation that the trend in minimum temperature over India is significant in the last three decades (Kothawale et al., 2010). The sequential Mann-Kendall (MK) test reveals that most of the trends in both maximum and minimum temperature, at annual and seasonal levels, began after 1970. (C) 2012 Elsevier B.V. All rights reserved.
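For reference, a bare-bones Mann-Kendall trend test of the kind referred to above, using the standard normal approximation; it does not include the tie handling or the serial-correlation corrections employed in the study.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Return the MK statistic S, the standardised Z and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))              # two-sided p-value
    return s, z, p

# e.g. annual minimum-temperature anomalies for one homogeneous region:
# s, z, p = mann_kendall(t_min_series)
```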
Abstract:
This paper presents an experimental study that was conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities that are carried out in the design process, depending on the method that is employed; in other words, whether the design methods that are used make a difference in the profile of time spent across the design activities. The second objective was to analyze whether there is any kind of relationship between the time spent on design process activities and the degree of creativity in the solutions that are obtained. Creativity was evaluated by means of the degree of novelty and the level of resolution of the designed solutions, using the creative product semantic scale (CPSS) questionnaire. The results show that the amount of time devoted to activities related to understanding the problem differs significantly depending on whether the design method used is intuitive or logical. While the amount of time spent on analyzing the problem is very small with intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It has also been found that the amount of time spent in each design phase has an influence on the results in terms of creativity, but the results are not strong enough to determine to what extent they are affected. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. [DOI: 10.1115/1.4007362]
Abstract:
Effects of dynamic contact angle models on the flow dynamics of an impinging droplet in sharp interface simulations are presented in this article. In the considered finite element scheme, the free surface is tracked using the arbitrary Lagrangian-Eulerian approach. The contact angle is incorporated into the model by replacing the curvature with the Laplace-Beltrami operator and integration by parts. Further, the Navier-slip with friction boundary condition is used to avoid stress singularities at the contact line. Our study demonstrates that the contact angle models have almost no influence on the flow dynamics of the non-wetting droplets. In computations of the wetting and partially wetting droplets, different contact angle models induce different flow dynamics, especially during recoiling. It is shown that a large value for the slip number has to be used in computations of the wetting and partially wetting droplets in order to reduce the effects of the contact angle models. Among all models, the equilibrium model is simple and easy to implement. Further, the equilibrium model also incorporates the contact angle hysteresis. Thus, the equilibrium contact angle model is preferred in sharp interface numerical schemes.
Abstract:
We propose an iterative algorithm to detect transient segments in audio signals. The short-time Fourier transform (STFT) is used to detect rapid local changes in the audio signal. The algorithm has two steps that are applied iteratively: (a) calculating a function of the STFT and (b) building a transient signal. A dynamic thresholding scheme is used to locate the potential positions of transients in the signal. The iterative procedure ensures that genuine transients are built up while localised spectral noise is suppressed by using an energy criterion. The extracted transient signal is then compared to a ground truth dataset. The algorithm performed well on two databases: on the EBU-SQAM database of monophonic sounds it achieved an F-measure of 90%, while on our database of polyphonic audio an F-measure of 91% was achieved. This technique is being used as a preprocessing step for a tempo analysis algorithm and a TSR (Transients + Sines + Residue) decomposition scheme.
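A rough sketch of the detection-function part of the procedure above: an STFT-based function (here positive spectral flux) with a moving-median dynamic threshold marks candidate transient positions. The iterative build-up of the transient signal and the energy criterion are not reproduced; the window sizes and threshold scale are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def transient_candidates(x, fs, n_fft=1024, hop=256, k=1.5, med_win=15):
    """Return candidate transient times (s) from a mono signal x sampled at fs."""
    f, t, Z = signal.stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    mag = np.abs(Z)
    # positive spectral flux: frame-to-frame increases in magnitude, summed over frequency
    flux = np.sum(np.maximum(mag[:, 1:] - mag[:, :-1], 0.0), axis=0)
    thresh = k * signal.medfilt(flux, kernel_size=med_win)    # moving-median dynamic threshold
    peaks, _ = signal.find_peaks(flux, height=thresh)
    return t[1:][peaks]
```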
Abstract:
Analysis of high-resolution satellite images has been an important research topic for urban analysis. One of the important tasks in urban analysis is automatic road network extraction. Two approaches for road extraction, based on Level Set and Mean Shift methods, are proposed. Extracting roads directly from an original image is difficult and computationally expensive owing to the presence of other road-like features with straight edges. The image is therefore preprocessed to reduce this interference by suppressing noise (buildings, parking lots, vegetation regions and other open spaces); roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1 m resolution IKONOS data have been used for the experiments.
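A toy sketch of the preprocessing and level-set steps outlined above, using a median filter to suppress small non-road structures and a morphological Chan-Vese active contour as a stand-in for the level set evolution; the mean shift branch and the accuracy evaluation are not shown, and the library choice and parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import morphological_chan_vese

def extract_road_mask(gray_image, median_size=5, n_iter=100):
    """Return a rough binary road-candidate mask from a grayscale satellite image."""
    smoothed = ndimage.median_filter(gray_image, size=median_size)   # remove small clutter
    # level-set-style region segmentation (morphological Chan-Vese active contour)
    level_set = morphological_chan_vese(smoothed.astype(float), n_iter)
    return level_set.astype(bool)
```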
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have also been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G(f)) in order to obtain a size-independent value (G(F)). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G(f) measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G(F) is very nearly the same and independent of the size of the specimen. In this paper, we will provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.