28 results for Improved Borsch-Supan Method
at Indian Institute of Science - Bangalore - India
Abstract:
Taylor (1948) suggested a method for determining the settlement, d, corresponding to 90% consolidation, utilizing the characteristics of the plot of the degree of consolidation, U, versus the square root of the time factor, √T. Based on the properties of the slope of the U versus √T curve, a new method is proposed to determine d corresponding to any U above 70% consolidation for evaluation of the coefficient of consolidation, cv. The effect of secondary consolidation on the cv value at different percentages of consolidation can be studied. Values of cv closer to the field values can be determined in less time than with Taylor's method. At any U between 75 and 95% consolidation, cv(U) from the new method lies between Taylor's cv and Casagrande's cv.
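As a gloss (not from the abstract): once the time t_U at which the settlement for a chosen U is reached has been read off the laboratory curve, cv follows from the standard Terzaghi time-factor relation. A minimal sketch in Python; the function names and the numbers in the example are illustrative.

```python
import numpy as np

def time_factor(U):
    """Terzaghi's theoretical time factor T for a degree of
    consolidation U (in percent)."""
    U = np.asarray(U, dtype=float)
    return np.where(U <= 60.0,
                    (np.pi / 4.0) * (U / 100.0) ** 2,
                    1.781 - 0.933 * np.log10(100.0 - U))

def coeff_of_consolidation(U, t_U, H_dr):
    """cv = T(U) * H_dr^2 / t_U, with t_U the laboratory time at which
    the settlement corresponding to U percent is reached and H_dr the
    drainage path length."""
    return time_factor(U) * H_dr ** 2 / t_U

# Example: t90 = 240 s read off the root-time plot, 10 mm drainage path
print(coeff_of_consolidation(90.0, 240.0, 0.010))  # cv in m^2/s
```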
Abstract:
The distribution of zinc cations between crystallographically nonequivalent positions in ZnFe2O4 has been determined by anomalous X-ray scattering near the Zn K absorption edge. The measured intensity ratios at two energies close to the edge can be quantitatively explained only by assigning all zinc cations to the tetrahedral position in the approximately cubic close-packed array of oxygen ions. A similar conclusion has also been reached for ZnxFe3-xO4 solid solutions with x = 0.73, 0.54 and 0.35 employing the improved X-ray method. This is consistent with the EXAFS results, which indicate an almost unchanged environmental structure around the zinc cation in these solid solutions.
Abstract:
Parkin (1978) suggested the velocity method based on the observation that a log-log plot of the theoretical rate of consolidation against the time factor yields an initial slope of 1:2 up to 50% consolidation. A new method is proposed that improves upon Parkin's velocity method by minimizing the problems encountered in using that method. The results obtained agree with those of the other methods in use.
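The 1:2 slope has a simple origin (our gloss, not the abstract's): for U up to about 50%, Terzaghi's solution gives

$$U \approx \sqrt{\frac{4T}{\pi}} \quad\Longrightarrow\quad \log U = \tfrac{1}{2}\log T + \tfrac{1}{2}\log\frac{4}{\pi},$$

so both U and its rate of change plot as straight lines of slope magnitude 1:2 against T on log-log axes.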
Abstract:
The method of Gibbs-Duhem integration suggested by Speiser et al. has been modified to derive activities from distribution equilibria. It is shown that, in general, the activities of components in melts with a common anion can be calculated, without using their standard Gibbs energies of formation, from equilibrium ratios and a knowledge of the activities in the metal phase. Moreover, if the systems are so chosen that the concentration of one element in the metal phase lies in the Henry's law region (less than 1%), information on the activities in the metal phase is not required. Conversely, the activities of elements in an alloy can be readily calculated from equilibrium distribution ratios alone if the salt phase in equilibrium contains very small amounts of one element. Application of the method is illustrated using distribution ratios from the literature on the AgCl-CuCl, AgBr-CuBr, and CuO0.5-PbO systems. The results indicate that covalent bonding and van der Waals repulsive interactions in certain types of fused salt melts can significantly affect the thermodynamic properties of mixing.
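Schematically (our notation; the paper's derivation is not reproduced), the method rests on combining the exchange equilibrium with the salt-phase Gibbs-Duhem relation:

$$\mathrm{A} + \mathrm{BX} \rightleftharpoons \mathrm{B} + \mathrm{AX}, \qquad K = \frac{a_{\mathrm{AX}}\, a_{\mathrm{B}}}{a_{\mathrm{BX}}\, a_{\mathrm{A}}}, \qquad x_{\mathrm{AX}}\, d\ln a_{\mathrm{AX}} + x_{\mathrm{BX}}\, d\ln a_{\mathrm{BX}} = 0.$$

Since K is constant along the equilibrium tie-lines, differentiating ln K eliminates the standard Gibbs energies of formation, and the salt-phase activities follow by integration from the measured distribution ratios together with the metal-phase activities; if one metal obeys Henry's law, its activity coefficient is constant and drops out of the integration.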
Abstract:
Neural data are inevitably contaminated by noise. When such noisy data are subjected to statistical analysis, misleading conclusions can be reached. Here we attempt to address this problem by applying a state-space smoothing method, based on the combined use of Kalman filter theory and the Expectation–Maximization algorithm, to denoise two datasets of local field potentials recorded from monkeys performing a visuomotor task. For the first dataset, it was found that the analysis of high gamma band (60–90 Hz) neural activity in the prefrontal cortex is highly susceptible to the effect of noise, and denoising led to markedly improved, physiologically interpretable results. For the second dataset, Granger causality between the primary motor and primary somatosensory cortices was not consistent across the two monkeys, and the effect of noise was suspected. After denoising, the discrepancy between the two subjects was significantly reduced.
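As an illustration of the general machinery (not the paper's actual state-space model), a scalar local-level model with a Kalman filter, RTS smoother, and crude EM-style updates of the noise variances; all names and the simplified M-step are ours.

```python
import numpy as np

def kalman_smooth(y, q, r, x0=0.0, p0=1.0):
    """RTS smoother for the local-level model
    x_t = x_{t-1} + w_t (var q), y_t = x_t + v_t (var r)."""
    n = len(y)
    xf = np.empty(n); pf = np.empty(n)   # filtered mean / variance
    xp = np.empty(n); pp = np.empty(n)   # predicted mean / variance
    x, p = x0, p0
    for t in range(n):
        xp[t], pp[t] = x, p + q          # predict
        k = pp[t] / (pp[t] + r)          # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])   # update
        p = (1 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy(); ps = pf.copy()       # RTS backward pass
    for t in range(n - 2, -1, -1):
        g = pf[t] / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
        ps[t] = pf[t] + g ** 2 * (ps[t + 1] - pp[t + 1])
    return xs, ps

def em_denoise(y, iters=50):
    """Crude EM-style updates for q and r around the smoother.
    (An exact M-step would also use lag-one smoothed covariances.)"""
    q, r = np.var(np.diff(y)) / 2, np.var(y) / 2   # rough starting values
    for _ in range(iters):
        xs, ps = kalman_smooth(y, q, r)
        r = np.mean((y - xs) ** 2 + ps)             # observation noise
        q = np.mean(np.diff(xs) ** 2) + np.mean(ps) # approximate state noise
    return kalman_smooth(y, q, r)[0]

# Example: denoise a noisy sinusoid standing in for an LFP trace
t = np.linspace(0, 1, 500)
y = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(500)
clean = em_denoise(y)
```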
Abstract:
Two algorithms that improve upon the sequent-peak procedure for reservoir capacity calculation are presented. The first incorporates storage-dependent losses (like evaporation losses) exactly as the standard linear programming formulation does. The second extends the first so as to enable designing with less than maximum reliability even when allowable shortfall in any failure year is also specified. Together, the algorithms provide a more accurate, flexible and yet fast method of calculating the storage capacity requirement in preliminary screening and optimization models.
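For reference, the classical sequent-peak recursion that both algorithms build on (the storage-dependent-loss and reliability extensions themselves are not reproduced); a minimal sketch with arbitrary example data.

```python
import numpy as np

def sequent_peak(inflow, demand):
    """Standard sequent-peak estimate of required reservoir capacity:
    K_{t+1} = max(0, K_t + demand_t - inflow_t); capacity = max_t K_t."""
    k, cap = 0.0, 0.0
    for q, d in zip(inflow, demand):
        k = max(0.0, k + d - q)
        cap = max(cap, k)
    return cap

# Example with a 12-period record (arbitrary units)
inflow = np.array([5, 7, 8, 4, 3, 2, 1, 2, 4, 6, 9, 10], float)
demand = np.full(12, 5.0)
print(sequent_peak(inflow, demand))
```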
Abstract:
A popular dynamic imaging technique, k-t BLAST (ktB), is studied here for fMRI imaging. ktB utilizes correlations in k-space and time to reconstruct the image time series from only a fraction of the data. The algorithm works by unwrapping the aliased Fourier conjugate space of k-t (y-f-space). The unwrapping process utilizes an estimate of the true y-f-space obtained by acquiring densely sampled low k-space data. The drawbacks of this method include a separate training scan, blurred training estimates and aliased phase maps. The proposed changes are the incorporation of phase information from the training map and the use of a generalized-series-extrapolated training map. The proposed technique is compared with ktB on real fMRI data. The proposed changes allow ktB to operate at an acceleration factor of 6. Performance is evaluated by comparing activation maps obtained using the reconstructed images. An improvement of up to 10 dB is observed in the PSNR of the activation maps. Besides, a 10% reduction in RMSE is obtained over the entire time series of fMRI images. The peak improvement of the proposed method over ktB is 35%, averaged over five data sets. (C) 2010 Elsevier Inc. All rights reserved.
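For orientation, the baseline ktB unfolding step in its diagonal form for lattice undersampling (standard k-t BLAST, not the proposed phase-augmented variant); a minimal sketch with illustrative numbers.

```python
import numpy as np

def ktblast_unfold(alias, theta2, psi):
    """Unfold one aliased y-f sample into its R overlapped components.
    alias  : complex measured value (sum of R true y-f values)
    theta2 : length-R array of squared training-map magnitudes
    psi    : noise-variance regularizer
    Implements x_i = theta2_i / (sum_j theta2_j + psi) * alias,
    the diagonal form of the k-t BLAST filter for lattice sampling."""
    w = theta2 / (theta2.sum() + psi)
    return w * alias

# Example: R = 4 overlapped voxels, one dominant in the training map
theta2 = np.array([9.0, 0.5, 0.2, 0.1])
print(ktblast_unfold(1.0 + 0.5j, theta2, psi=0.05))
```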
Abstract:
Hybrid elements, which are based on a two-field variational formulation with the displacements and stresses interpolated separately, are known to deliver very high accuracy and to alleviate, to a large extent, the problems of locking that plague standard displacement-based formulations. The choice of the stress interpolation functions is of course critical in ensuring the high accuracy and robustness of the method. Generally, an attempt is made to keep the stress interpolation to the minimum number of terms that ensures the stiffness matrix has no spurious zero-energy modes, since it is known that the stiffness increases with the number of terms. Although this strategy of keeping the number of interpolation terms to a minimum works very well in static problems, it either results in instabilities or fails to converge in transient problems. This is because choosing the stress interpolation functions merely on the basis of removing spurious energy modes can violate some basic principles that interpolation functions should obey. In this work, we address the issue of choosing the interpolation functions based on such basic principles of interpolation theory and mechanics. Although this procedure results in the use of a greater number of terms than the minimum (and hence slightly increased stiffness) in many elements, we show that the performance continues to be far superior to displacement-based formulations and, more importantly, that it also results in considerably increased robustness.
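As background (our notation, not necessarily the paper's), hybrid stress elements descend from the Hellinger-Reissner two-field functional; with stresses interpolated as σ = Pβ and displacements as u = Nq, the stress parameters β are eliminated at element level:

$$\Pi(\boldsymbol{\sigma},\mathbf{u}) = \int_V \left( -\tfrac{1}{2}\,\boldsymbol{\sigma}^{\mathsf T}\mathbf{S}\,\boldsymbol{\sigma} + \boldsymbol{\sigma}^{\mathsf T}\nabla_{\!s}\mathbf{u} \right) dV - W_{\mathrm{ext}}(\mathbf{u}), \qquad \mathbf{K} = \mathbf{G}^{\mathsf T}\mathbf{H}^{-1}\mathbf{G},$$

$$\mathbf{H} = \int_V \mathbf{P}^{\mathsf T}\mathbf{S}\,\mathbf{P}\,dV, \qquad \mathbf{G} = \int_V \mathbf{P}^{\mathsf T}\mathbf{B}\,dV.$$

Spurious zero-energy modes are absent only if $\mathbf{G}$ has rank $n_q - n_{\mathrm{rigid}}$, which requires $n_\beta \ge n_q - n_{\mathrm{rigid}}$; the minimal-term strategy discussed above chooses $\mathbf{P}$ to meet this bound exactly.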
Abstract:
Accurate, reliable and economical methods of determining stress distributions are important for fastener joints. In the past, the contact stress problems in these mechanically fastened joints using interference-, push- or clearance-fit pins were solved using both inverse and iterative techniques. Inverse techniques were found to be the most efficient, but at times inadequate in the presence of asymmetries. Iterative techniques based on the finite element method of analysis have wider applications, but they have the major drawbacks of being expensive and time-consuming. In this paper, an improved finite element technique for iteration is presented to overcome these drawbacks. The improved iterative technique employs a frontal solver for the elimination of variables not requiring iteration, through the creation of a dummy element. This automatically results in a large reduction in computer time and in the size of the problem to be handled during iteration. Numerical results are compared with those available in the literature. The method is used to study an eccentrically located pin in a quasi-isotropic laminated plate under uniform tension.
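Algebraically, eliminating the variables that do not iterate is a static condensation; a frontal solver performs it element by element, while the dense sketch below (with illustrative data) shows only the algebra.

```python
import numpy as np

def condense(K, f, keep):
    """Statically condense out all DOFs not in `keep`:
    K_kk - K_ke K_ee^{-1} K_ek  and  f_k - K_ke K_ee^{-1} f_e,
    the algebraic effect of eliminating non-iterating variables
    ahead of the contact iteration."""
    n = K.shape[0]
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(n), keep)
    Kkk = K[np.ix_(keep, keep)]; Kke = K[np.ix_(keep, elim)]
    Kek = K[np.ix_(elim, keep)]; Kee = K[np.ix_(elim, elim)]
    X = np.linalg.solve(Kee, np.column_stack([Kek, f[elim]]))
    return Kkk - Kke @ X[:, :-1], f[keep] - Kke @ X[:, -1]

# Example: condense a 4-DOF system down to DOFs 0 and 3
K = np.array([[4., -1, 0, 0], [-1, 4, -1, 0],
              [0, -1, 4, -1], [0, 0, -1, 4]])
f = np.array([1., 0, 0, 2])
Kc, fc = condense(K, f, keep=[0, 3])
```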
Abstract:
Stereospecific synthesis of 4-formylcarane (2) has been achieved through hydroboration-carbonylation of Δ3-carene. Both reactions are optimised using sodium borohydride. The method is utilised for the synthesis of sandatrile (3), a novel perfumery chemical.
Abstract:
A successful protein-protein docking study culminates in the identification of decoys with near-native quaternary structures at top ranks. However, this task remains enigmatic because no generalized scoring functions exist that effectively rank decoys according to their similarity to the near-native quaternary structure. Difficulties arise because of the highly irregular nature of the protein surface and the significant variation of the nonbonding and solvation energies with the chemical composition of the protein-protein interface. In this work, we describe a novel method combining an interface-size filter, a regression model for geometric compatibility (based on two correlated surface and packing parameters), and a normalized interaction energy (calculated from correlated nonbonded and solvation energies) to effectively rank decoys from a set of 10,000 decoys. Tests on 30 unbound binary protein-protein complexes show that in 16 cases we can identify at least one decoy in the top three ranks having <= 10 angstrom backbone root mean square deviation from the true binding geometry. Comparisons with other state-of-the-art methods confirm the improved ranking power of our method without the use of any experiment-guided restraints, evolutionary information, statistical propensities, or modified interaction energy equations. Tests on 118 less-difficult bound binary protein-protein complexes with <= 35% sequence redundancy at the interface showed that in 77% of cases, at least one decoy out of 10,000 was identified with <= 5 angstrom backbone root mean square deviation from the true geometry at the first rank. The work will promote the use of new concepts where correlations among parameters provide more robust scoring models. It will facilitate studies involving molecular interactions, including modeling of large macromolecular assemblies and protein structure prediction. (C) 2010 Wiley Periodicals, Inc. J Comput Chem 32: 787-796, 2011.
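A hypothetical sketch of a composite ranking in the spirit described above: an interface-size filter followed by a weighted combination of a geometric-compatibility term and a normalized interaction energy. The field names, weights, and threshold are illustrative, not the paper's.

```python
import numpy as np

def rank_decoys(features, min_interface=1200.0):
    """Rank decoys: discard those below an interface-area threshold,
    then score the rest by geometric fit minus normalized energy
    (lower, i.e. more negative, energy is better).
    All names and weights here are assumptions for illustration."""
    area = features["interface_area"]    # interface area, A^2
    geom = features["geometric_fit"]     # regression-model output
    energy = features["norm_energy"]     # normalized nonbonded+solvation
    score = np.where(area >= min_interface,
                     0.5 * geom - 0.5 * energy, -np.inf)
    return np.argsort(-score)            # best decoys first

# Example with three decoys
feats = {"interface_area": np.array([1500., 800., 2000.]),
         "geometric_fit": np.array([0.8, 0.9, 0.6]),
         "norm_energy":   np.array([-1.2, -0.5, -2.0])}
print(rank_decoys(feats))
```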
Abstract:
Routing of floods is essential to control the flood flow at the flood control station such that it remains within the specified safe limit. In this paper, the applicability of the extended Muskingum method is examined for routing of floods in a case study of the Hirakud reservoir, Mahanadi river basin, India. The inflows to the flood control station are of two types: one controllable, which comprises reservoir releases for power and spill, and the other uncontrollable, which comprises inflow from lower tributaries and the intermediate catchment between the reservoir and the flood control station. The Muskingum model is improved to incorporate multiple sources of inflow and a single outflow to route the flood in the reach. Instead of the time-lag and prismoidal-flow parameters, suitable coefficients for the various types of inflow were derived using Linear Programming. Presently, decisions about the operation of the gates of Hirakud dam are taken once every 12 h during floods. However, four time intervals of 24, 18, 12 and 6 h are examined to test the sensitivity of the computed flood flow at the flood control station to the routing time interval. It is observed that the mean relative error decreases with decreasing routing interval in both the calibration and testing phases. It is concluded that the extended Muskingum method can be explored for similar reservoir configurations such as the Hirakud reservoir with suitable modifications. (C) 2010 International Association of Hydro-environment Engineering and Research, Asia Pacific Division. Published by Elsevier B.V. All rights reserved.
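A sketch of one plausible reading of the coefficient-fitting step (the abstract does not give the paper's exact LP formulation): an extended Muskingum recursion with per-source coefficients, fitted by least absolute deviations posed as a linear program. Names and data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def fit_extended_muskingum(inflows, outflow):
    """Fit O_{t+1} = sum_k (a_k I_{k,t+1} + b_k I_{k,t}) + c O_t by
    least absolute deviations, posed as an LP. (The paper derives its
    coefficients with Linear Programming; this LAD form is one common
    choice, assumed here.)
    inflows: list of arrays, one per source; outflow: array."""
    X = np.column_stack([v for I in inflows for v in (I[1:], I[:-1])]
                        + [outflow[:-1]])
    y = outflow[1:]
    n, p = X.shape
    # minimize sum(e)  s.t.  -e <= y - X @ theta <= e,  e >= 0
    c = np.r_[np.zeros(p), np.ones(n)]
    A = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b = np.r_[y, -y]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * p + [(0, None)] * n)
    return res.x[:p]

# Example: two inflow sources over a short synthetic record
I1 = np.array([10., 12, 15, 14, 11, 9])
I2 = np.array([3., 4, 6, 7, 5, 4])
O = np.array([11., 13, 17, 18, 14, 11])
print(fit_extended_muskingum([I1, I2], O))
```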
Abstract:
Resolving many flow features accurately, such as the suction peak in subsonic flows, crisp shocks in flows with discontinuities, minimal loss in stagnation pressure in isentropic flows, or flow separation in viscous flows, requires an accurate, low-dissipation numerical scheme. The first-order kinetic flux vector splitting (KFVS) method has been found to be very robust but suffers from much more numerical diffusion than required, resulting in inaccurate computation of the above flow features. Numerical dissipation can be reduced by refining the grid or by using higher-order kinetic schemes. In flows with strong shock waves, however, the higher-order schemes require limiters, which reduce the local order of accuracy to first order, degrading the flow features in many cases. Further, these schemes require more points in the stencil and hence consume more computational time and memory. In this paper, we present a low-dissipation modified KFVS (m-KFVS) method which leads to improved splitting of the inviscid fluxes. The m-KFVS method captures the above flow features more accurately than first-order KFVS, with results comparable to the second-order accurate KFVS method, while still using the first-order stencil. (C) 2011 Elsevier Ltd. All rights reserved.
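For reference (standard first-order KFVS, in our notation; the m-KFVS modification itself is not reproduced), the split fluxes are half-range moments of the Maxwellian. For the 1-D mass flux, with $\beta = 1/(2RT)$ and $s = u\sqrt{\beta}$:

$$F_{\rho}^{\pm} = \int_{v \gtrless 0} v\, f_M \, dv = \rho u\, \frac{1 \pm \operatorname{erf}(s)}{2} \pm \frac{\rho\, e^{-s^{2}}}{2\sqrt{\pi\beta}},$$

and the interface flux is assembled as $F_{i+1/2} = F^{+}(\text{left state}) + F^{-}(\text{right state})$; the exponential term is the source of both the method's robustness and its excess dissipation, which m-KFVS is designed to reduce.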
Abstract:
Due to its wide applicability, semi-supervised learning is an attractive method for using unlabeled data in classification. In this work, we present a semi-supervised support vector classifier that is designed using a quasi-Newton method for nonsmooth convex functions. The proposed algorithm is suitable for dealing with a very large number of examples and features. Numerical experiments on various benchmark datasets showed that the proposed algorithm is fast and gives improved generalization performance over existing methods. Further, a non-linear semi-supervised SVM is proposed based on a multiple label switching scheme. This non-linear semi-supervised SVM is found to converge faster and to improve generalization performance on several benchmark datasets. (C) 2010 Elsevier Ltd. All rights reserved.
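As a toy stand-in for the label-switching idea (the paper's multiple-switching scheme and nonsmooth quasi-Newton solver are not reproduced), a self-labeling loop that switches the worst-violating positive/negative pair of unlabeled points, preserving class balance:

```python
import numpy as np
from sklearn.svm import LinearSVC

def label_switching_s3vm(Xl, yl, Xu, rounds=10):
    """Simplified transductive training: guess labels for Xu, refit,
    and repeatedly switch one +/- pair of margin-violating unlabeled
    points. yl must be in {-1, +1}. Illustrative only."""
    clf = LinearSVC().fit(Xl, yl)
    yu = np.where(clf.decision_function(Xu) >= 0, 1.0, -1.0)  # guesses
    for _ in range(rounds):
        clf = LinearSVC().fit(np.vstack([Xl, Xu]), np.r_[yl, yu])
        m = yu * clf.decision_function(Xu)      # unlabeled margins
        pos = np.where((m < 0) & (yu > 0))[0]
        neg = np.where((m < 0) & (yu < 0))[0]
        if len(pos) == 0 or len(neg) == 0:
            break
        # switch the worst-violating +/- pair, keeping class balance
        i, j = pos[np.argmin(m[pos])], neg[np.argmin(m[neg])]
        yu[i], yu[j] = -1.0, 1.0
    return clf
```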
Abstract:
This paper presents a novel algorithm for the compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher-frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
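The weighted Steiglitz-McBride fit is the paper's contribution and is not reproduced here; as a minimal sketch of the final stage, first-order DPCM of the model parameters under an assumed uniform quantizer (step size illustrative):

```python
import numpy as np

def dpcm_encode(params, step=0.02):
    """Quantize successive differences of a parameter vector
    (first-order DPCM); the decoder state is tracked so encoder
    and decoder stay matched."""
    codes, prev = [], 0.0
    for p in params:
        q = int(round((p - prev) / step))
        codes.append(q)
        prev += q * step
    return codes

def dpcm_decode(codes, step=0.02):
    out, prev = [], 0.0
    for q in codes:
        prev += q * step
        out.append(prev)
    return np.array(out)

# Example: round-trip a small parameter vector
print(dpcm_decode(dpcm_encode([0.91, 0.87, 0.80, 0.95])))
```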