130 results for Suggested Methods
Abstract:
This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It has been undertaken primarily to re-examine one-point methods of determination of liquid limit water contents. It has been shown, from basic characteristics of soils and associated physico-chemical factors, that critical shear strengths at liquid limit water contents arise out of force-field equilibrium and are independent of soil type. This leads to the formation of a scientific base for liquid limit determination by one-point methods, which hitherto was formulated purely on statistical analysis of data. Available methods (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) of one-point liquid limit determination have been critically re-examined. A simple one-point cone penetrometer method of computing the liquid limit has been suggested and compared with other methods. Experimental data of Sherwood & Ryley (1970) have been employed for comparison of the different cone penetration methods. Results indicate that, apart from mere statistical considerations, one-point methods have a strong scientific base in the uniqueness of the modified flow line, irrespective of soil type. The normalized flow line is obtained by normalization of water contents by liquid limit values, thereby nullifying the effects of surface areas and associated physico-chemical factors that are otherwise reflected in different responses at the macrolevel.
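The one-point computation described above can be sketched numerically. The sketch below assumes the linear-in-log normalized flow line w/w_L = 0.77 log10(d), with d the fall-cone penetration in mm (one published fit of this form; the abstract does not state the exact relation, so the coefficient is an assumption for illustration only).

```python
import math

def one_point_liquid_limit(w, d):
    """Estimate the liquid limit w_L (%) from a single cone test.

    w : measured water content (%) at cone penetration d (mm).
    Assumes the normalized flow line w / w_L = 0.77 * log10(d),
    so at d = 20 mm the measured water content is almost exactly w_L.
    """
    if d <= 1.0:
        raise ValueError("penetration depth must exceed 1 mm")
    return w / (0.77 * math.log10(d))
```

At the reference penetration of 20 mm the correction factor is ~1.002, so the estimate essentially returns the measured water content, as expected for a normalized flow line anchored at that depth.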
Abstract:
A simple technique is devised for making prisms with submultiple or half angles. As an application of these prisms, methods are suggested to measure the angles of the Pechan and Pellin-Broca prisms without using expensive spectrometers, autocollimators, and angle gauges. (C) 2002 Society of Photo-Optical Instrumentation Engineers.
Abstract:
The present paper develops a family of explicit algorithms for rotational dynamics and presents their comparison with several existing methods. For rotational motion the configuration space is a non-linear manifold, not a Euclidean vector space. As a consequence the rotation vector and its time derivatives correspond to different tangent spaces of the rotation manifold at different time instants. This renders the usual integration algorithms for Euclidean space inapplicable to rotation. In the present algorithms this problem is circumvented by relating the equation of motion to a particular tangent space. It has been accomplished with the help of an already existing relation between rotation increments which belong to two different tangent spaces. The suggested method could in principle make any integration algorithm on Euclidean space applicable to rotation. However, the present paper is restricted to explicit Runge-Kutta methods adapted to handle rotation. The algorithms developed here are explicit and hence computationally cheaper than implicit methods. Moreover, they appear to have much higher local accuracy and hence are accurate in predicting any constants of motion over reasonably long times. The numerical results for solutions as well as constants of motion indicate superior performance by most of our algorithms when compared to some of the currently known algorithms, namely ALGO-C1, STW, LIEMID[EA], MCG, SUBCYC-M.
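The core idea, expressing the Runge-Kutta increment in a single tangent space and mapping it back to the manifold through the exponential map, can be sketched on SO(3). This is a generic Lie-group explicit-midpoint step, not the paper's specific family of algorithms; the function and variable names are illustrative.

```python
import numpy as np

def hat(w):
    # skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    # Rodrigues formula: closed-form matrix exponential of hat(w)
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + hat(w)          # first-order small-angle limit
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def rk2_step_so3(R, omega, t, h):
    """One explicit-midpoint step for R'(t) = R(t) @ hat(omega(t)),
    with the whole increment expressed in one tangent space via exp."""
    k = omega(t + 0.5 * h)                 # midpoint body-frame angular velocity
    return R @ expm_so3(h * k)
```

For a constant angular velocity the step is exact (the exponential map reproduces the true rotation), which is the kind of structure preservation that keeps constants of motion accurate over long integrations.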
Abstract:
Monte Carlo simulation methods involving splitting of Markov chains have been used in evaluation of multi-fold integrals in different application areas. We examine in this paper the performance of these methods in the context of evaluation of reliability integrals from the point of view of characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, Holmes-Diaconis-Ross method, and generalized splitting algorithm. A few improvisations based on first order reliability method are suggested to select algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed. Also, an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bring out the relative merits of alternative methods. (C) 2015 Elsevier Ltd. All rights reserved.
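The splitting idea behind such estimators can be illustrated with a minimal subset-simulation-style sketch in standard normal space. It uses a plain single-block Metropolis chain rather than the component-wise modified Metropolis of Au and Beck, and the algorithmic parameters (p0 = 0.1, sample sizes) are purely illustrative, not the paper's recommendations.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, max_levels=10, seed=0):
    """Estimate p_F = P[g(X) <= 0], X ~ N(0, I_dim), by splitting the rare
    event into a product of conditional probabilities of size about p0."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    nc = int(p0 * n)                        # number of seed chains per level
    p_f = 1.0
    for _ in range(max_levels):
        order = np.argsort(y)               # smallest g = closest to failure
        thresh = y[order[nc - 1]]           # intermediate threshold
        if thresh <= 0.0:                   # failure region reached
            return p_f * np.mean(y <= 0.0)
        p_f *= p0
        seeds, seeds_y = x[order[:nc]], y[order[:nc]]
        new_x, new_y = [], []
        for s, sy in zip(seeds, seeds_y):   # grow one chain per seed
            cur, cur_y = s.copy(), sy
            for _ in range(n // nc):
                cand = cur + rng.standard_normal(dim)
                # Metropolis acceptance for the standard normal target
                if rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand)):
                    cand_y = g(cand)
                    if cand_y <= thresh:    # stay inside the current level
                        cur, cur_y = cand, cand_y
                new_x.append(cur.copy())
                new_y.append(cur_y)
        x, y = np.array(new_x), np.array(new_y)
    return p_f * np.mean(y <= 0.0)
```

For the limit state g(x) = 3 - x1 the exact failure probability is 1 - Phi(3), about 1.35e-3; a few thousand samples spread over the levels recover its order of magnitude, which a crude Monte Carlo run of the same size would resolve poorly.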
Abstract:
Interaction of tetrathiafulvalene (TTF) and tetracyanoethylene (TCNE) with few-layer graphene samples prepared by the exfoliation of graphite oxide (EG), conversion of nanodiamond (DG) and arc-evaporation of graphite in hydrogen (HG) has been investigated by Raman spectroscopy to understand the role of the graphene surface. The position and full-width at half maximum of the Raman G-band are affected on interaction with TTF and TCNE and the effect is highest with EG and least with HG. The effect of TTF and TCNE on the 2D-band is also maximum with EG. The magnitude of interaction between the donor/acceptor molecules varies in the same order as the surface areas of the graphenes. (C) 2009 Published by Elsevier B. V.
Abstract:
The cum √f rule [Singh (1975)] has been suggested in the literature for finding approximately optimum strata boundaries for proportional allocation, when the stratification is done on the study variable. This paper shows that for the class of density functions arising from the Wang and Aggarwal (1984) representation of the Lorenz Curve (or DBV curves in the case of inventory theory), the cum √f rule, in place of giving approximately optimum strata boundaries, yields exactly optimum boundaries. It is also shown that the conjecture of Mahalanobis (1952) ". . . an optimum or nearly optimum solutions will be obtained when the expected contribution of each stratum to the total aggregate value of Y is made equal for all strata" yields exactly optimum strata boundaries for the case considered in the paper.
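The mechanics of such a cumulative rule can be sketched numerically. Assuming the cum √f form (the rule's exact expression is garbled in this listing), the boundaries are placed so that each stratum receives an equal share of the cumulative √(frequency) total; this is an illustration of the rule, not the paper's optimality derivation.

```python
import numpy as np

def cum_sqrt_f_boundaries(values, freqs, n_strata):
    """Approximate strata boundaries via the cum sqrt(f) rule: cut where
    the running total of sqrt(frequency) reaches equal shares."""
    cum = np.cumsum(np.sqrt(np.asarray(freqs, dtype=float)))
    targets = cum[-1] * np.arange(1, n_strata) / n_strata
    idx = np.searchsorted(cum, targets)     # first class reaching each share
    return [values[i] for i in idx]
```

For a uniform frequency table the rule degenerates to equal-width strata, which is a quick sanity check on any implementation.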
Abstract:
Plates with V-through edge notches subjected to pure bending and specimens with rectangular edge-through-notches subjected to combined bending and axial pull were investigated (under live-load and stress-frozen conditions) in a completely nondestructive manner using scattered-light photoelasticity. Stress-intensity factors (SIFs) were evaluated by analysing the singular stress distributions near crack-tips. Improved methods are suggested for the evaluation of SIFs. The thickness-wise variation of SIFs is also obtained in the investigation. The results obtained are compared with the available theoretical solutions.
Abstract:
Vermicular graphite cast iron is a new addition to the family of cast irons. Various methods for producing vermicular graphite cast iron are briefly discussed in this paper. The mechanical and physical properties of cast irons with vermicular graphite have been found to be intermediate between those of gray and ductile irons. Other properties such as casting characteristics, scaling resistance, damping capacity and machinability have been compared with those of gray and ductile irons. Probable applications of vermicular graphite cast irons are suggested.
Abstract:
Conformational preferences of thiocarbonohydrazide (H2NNHCSNHNH2) in its basic and N,N′-diprotonated forms are examined by calculating the barrier to internal rotation around the C-N bonds, using the theoretical LCAO-MO (ab initio and semiempirical CNDO and EHT) methods. The calculated and experimental results are compared with each other and also with values for N,N′-dimethylthiourea, which is isoelectronic with thiocarbonohydrazide. The suitability of these methods for studying rotational isomerism seems suspect when lone pair interactions are present.
Abstract:
One difficulty in summarising biological survivorship data is that the hazard rates are often neither constant nor monotonically increasing or decreasing over the entire life span. The otherwise promising Weibull model does not work here. The paper demonstrates how bathtub-shaped quadratic models may be used in such a case. Further, sometimes due to a paucity of data actual lifetimes are not ascertainable. It is shown how a concept from queuing theory, namely first in first out (FIFO), can be profitably used here. Another nonstandard situation considered is one in which the lifespan of the individual entity is too long compared to the duration of the experiment. This situation is dealt with by using ancillary information. In each case the methodology is illustrated with numerical examples.
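A quadratic hazard h(t) = a + b·t + c·t² with b < 0 < c gives the bathtub shape (decreasing early-life hazard, flat middle, rising wear-out), and the survival function follows directly from the cumulative hazard. A minimal sketch, with purely illustrative coefficients:

```python
import math

def survival(t, a, b, c):
    """S(t) = exp(-H(t)) for the quadratic hazard h(t) = a + b*t + c*t**2.

    With b < 0 < c (and b**2 < 4*a*c so that h stays positive) the hazard
    is bathtub-shaped, with its minimum at t = -b / (2*c).
    """
    H = a * t + b * t**2 / 2.0 + c * t**3 / 3.0   # cumulative hazard
    return math.exp(-H)
```

Since H(t) is increasing whenever h(t) > 0, the positivity condition b² < 4ac also guarantees that S(t) is a proper, strictly decreasing survival curve.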
Abstract:
A comparison is made of the performance of a weather Doppler radar with a staggered pulse repetition time and a radar with a random (but known) phase. As a standard for this comparison, the specifications of the forthcoming next generation weather radar (NEXRAD) are used. A statistical analysis of the spectral moment estimates for the staggered scheme is developed, and a theoretical expression for the signal-to-noise ratio due to recohering-filtering-recohering for the random phase radar is obtained. Algorithms for assignment of correct ranges to pertinent spectral moments for both techniques are presented.
Abstract:
Non-stationary signal modeling is a well addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach, the ZC approach, etc., are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as in communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We have also proposed three new methods for IF estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC based methods give better results than popular methods such as DESA in both clean and noisy conditions.
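As one concrete instance of the zero-crossing (ZC) idea, the instantaneous frequency of a narrow-band signal can be read off from successive interpolated zero-crossing times: each half-period between crossings yields one local frequency estimate. A minimal sketch, assuming uniform sampling and omitting the paper's non-uniform-sampling and multi-resolution stages:

```python
import numpy as np

def zc_instantaneous_frequency(x, fs):
    """Local frequency estimates (Hz) from successive zero crossings.

    x  : real narrow-band signal, uniformly sampled.
    fs : sampling rate in Hz.
    Crossing times are refined by linear interpolation between the two
    samples that straddle each sign change.
    """
    signs = np.sign(x)
    zc = np.where(np.diff(signs) != 0)[0]          # sample just before a crossing
    t = zc + x[zc] / (x[zc] - x[zc + 1])           # sub-sample crossing times
    half_periods = np.diff(t) / fs                 # seconds per half cycle
    return 1.0 / (2.0 * half_periods)              # one estimate per half period
```

For a clean sinusoid the interpolated estimates sit within a small fraction of a hertz of the true frequency; the robustness question the abstract raises is how quickly this degrades once Gaussian noise perturbs the crossing locations.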
Abstract:
For the problem of speaker adaptation in speech recognition, performance depends on the availability of adaptation data. In this paper, we have compared several existing speaker adaptation methods, viz. maximum likelihood linear regression (MLLR), eigenvoice (EV), eigenspace-based MLLR (EMLLR), segmental eigenvoice (SEV) and hierarchical eigenvoice (HEV) based methods. We also develop a new method by modifying the existing HEV method for achieving further performance improvement in a limited available data scenario. The new modified HEV (MHEV) method is shown to perform better than all the existing methods over the whole range of adaptation-data availability, except against MLLR when more adaptation data are available.
Abstract:
Fundamental investigations in ultrasonics in India date back to the early 20th century. But fundamental and applied research in the field of nondestructive evaluation (NDE) came much later. In the last four decades it has grown steadily in academic institutions, national laboratories and industry. Currently, commensurate with rapid industrial growth and realisation of the benefits of NDE, the activity is becoming much stronger, deeper, broader and very widespread. Acoustic Emission (AE) is a recent entry into the field of nondestructive evaluation. Pioneering efforts in India in AE were carried out at the Indian Institute of Science in the early 1970s. The nuclear industry was the first to utilise it. Current activity in AE in the country spans materials research, incipient failure detection, integrity evaluation of structures, fracture mechanics studies and rock mechanics. In this paper, we attempt to project the current scenario in ultrasonics and acoustic emission research in India.