Abstract:
Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start and stop of the machines, and the amount of wear is so small that it is difficult to measure accurately. Various wear measuring techniques have been used, of which out-of-roundness was found to be the most reliable method of measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities. The method proved to be reliable as well as inexpensive. The paper aims to discuss these issues.
Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The tests were of long duration and the wear quantities achieved were very small. To minimise the test duration, short tests of about 90 min duration were conducted, and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier. The method proved to be reliable and inexpensive.
Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, the weight loss in bearings was calculated, and the result was repeatable and reliable.
Research limitations/implications – This research is basic in nature, and a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is simple but crude. An automated procedure to determine the weight loss directly from the out-of-roundness traces may therefore be required. The method can be very useful in reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small.
Practical implications – This research provides a reliable method of measuring wear of circular geometries. The Talyrond equipment used to measure the change in out-of-roundness due to bearing wear shows high potential for use as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurement in circular machine components.
Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that change with the progression of wear in circular components; it is therefore an effective wear measuring parameter that relates the wear to the change in geometry. The method of increasing the sensitivity by enlarging the out-of-roundness traces is original work through which the area of the worn cross-section can be determined and the weight loss derived, for materials of known density, with higher precision.
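As a rough numerical illustration of the weight-loss calculation described above (a sketch, not the authors' exact procedure), the Python snippet below assumes a single uniform linear magnification of the enlarged trace and a worn region that extends uniformly along the bearing length; the function name and all values are invented for the example.

def weight_loss_from_trace(worn_area_on_trace_mm2, linear_magnification,
                           bearing_length_mm, density_g_per_mm3):
    """Estimate bearing weight loss (grams) from an enlarged out-of-roundness trace."""
    # Areas scale with the square of the linear magnification of the trace.
    true_worn_area_mm2 = worn_area_on_trace_mm2 / linear_magnification ** 2
    # Assume the worn cross-section is constant along the bearing length.
    worn_volume_mm3 = true_worn_area_mm2 * bearing_length_mm
    return worn_volume_mm3 * density_g_per_mm3

# Example: 1800 mm^2 measured on a 10x enlarged trace, 40 mm long bearing,
# bronze density of about 8.8e-3 g/mm^3.
print(weight_loss_from_trace(1800.0, 10.0, 40.0, 8.8e-3))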
Abstract:
A novel combined near- and mid-infrared (NIR and MIR) spectroscopic method has been researched and developed for the analysis of complex substances such as the Traditional Chinese Medicine (TCM) Illicium verum Hook. f. (IVHF) and its noxious adulterant, Illicium lanceolatum A.C. Smith (ILACS). Three types of spectral matrix were submitted for classification with the use of the linear discriminant analysis (LDA) method. The data were pretreated with either the successive projections algorithm (SPA) or the discrete wavelet transform (DWT) method. The SPA method performed somewhat better, principally because it required fewer spectral features for its pretreatment model. Thus, the NIR and MIR matrices, as well as the combined NIR/MIR one, were pretreated by the SPA method and then analysed by LDA. This approach enabled the prediction and classification of the IVHF, ILACS and mixed samples. The MIR spectral data produced somewhat better classification rates than the NIR data. However, the best results were obtained from the combined NIR/MIR data matrix, with 95–100% correct classifications for calibration, validation and prediction. Principal component analysis (PCA) of the three types of spectral data supported the results obtained with the LDA classification method.
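A minimal sketch of the classification step described above is given below, assuming the spectra have already been pretreated (e.g. by variable selection) into a samples-by-features matrix X with class labels y. It uses scikit-learn's LinearDiscriminantAnalysis as a generic LDA classifier; the random data, variable names and train/test split are placeholders rather than the authors' actual pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 50))     # placeholder for pretreated NIR/MIR features
y = np.repeat([0, 1, 2], 30)      # e.g. IVHF, ILACS and mixed samples

# Split into calibration and prediction sets, train LDA and report accuracy.
X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_cal, y_cal)
print("correct classification rate:", lda.score(X_pred, y_pred))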
Abstract:
A novel near-infrared spectroscopy (NIRS) method has been researched and developed for the simultaneous analysis of the chemical components and associated properties of mint (Mentha haplocalyx Briq.) tea samples. The common analytes were: total polysaccharide content, total flavonoid content, total phenolic content, and total antioxidant activity. To resolve the NIRS data matrix for such analyses, least squares support vector machines was found to be the best chemometrics method for prediction, although it was closely followed by the radial basis function/partial least squares model. Interestingly, the commonly used partial least squares method was unsatisfactory in this case. Additionally, principal component analysis and hierarchical cluster analysis were able to distinguish the mint samples according to their four geographical provinces of origin, and this was further facilitated with the use of the chemometrics classification methods: K-nearest neighbors, linear discriminant analysis, and partial least squares discriminant analysis. In general, given the potential savings in sampling and analysis time as well as in the costs of the special analytical reagents required for the standard individual methods, NIRS offers a very attractive alternative for the simultaneous analysis of mint samples.
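The comparison of calibration models described above can be sketched as follows; kernel ridge regression with an RBF kernel is used here as a close stand-in for least squares support vector machines (the two are closely related), alongside ordinary PLS regression. The spectra, property values and hyperparameters are synthetic placeholders, not the paper's data or settings.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))                          # placeholder NIR spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)    # placeholder property (e.g. total phenolics)

models = {
    "RBF kernel ridge (LS-SVM-like)": KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3),
    "PLS": PLSRegression(n_components=5),
}
for name, model in models.items():
    # Cross-validated R^2 as a simple figure of merit for each calibration model.
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, "cross-validated R^2:", round(r2, 3))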
Abstract:
In this paper, we aim at predicting protein structural classes for low-homology data sets based on predicted secondary structures. We propose a new and simple kernel method, named SSEAKSVM, to predict protein structural classes. The secondary structures of all protein sequences are obtained using the tool PSIPRED, and then a linear kernel based on secondary structure element alignment scores is constructed for training a support vector machine classifier without parameter adjustment. Our method SSEAKSVM was evaluated on two low-homology data sets, 25PDB and 1189, with sequence homologies of 25% and 40%, respectively. The jackknife test is used to test and compare our method with other existing methods. The overall accuracies on these two data sets are 86.3% and 84.5%, respectively, which are higher than those obtained by other existing methods. In particular, our method achieves higher accuracies (88.1% and 88.5%) for differentiating the α + β class and the α/β class compared to other methods. This suggests that our method is valuable for predicting protein structural classes, particularly for low-homology protein sequences. The source code of the method in this paper can be downloaded at http://math.xtu.edu.cn/myphp/math/research/source/SSEAK_source_code.rar.
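A minimal sketch of the kernel SVM step is shown below, assuming a precomputed kernel matrix K whose entries play the role of secondary structure element alignment scores between protein pairs; here K is simply built from random placeholder features to keep it positive semi-definite. scikit-learn's SVC with kernel="precomputed" is used, and the real alignment scoring of SSEAKSVM is not reproduced.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
F = rng.normal(size=(40, 8))      # placeholder per-protein scores
K = F @ F.T                       # linear kernel standing in for the alignment-score kernel
y = rng.integers(0, 4, size=40)   # four structural classes (all-alpha, all-beta, alpha/beta, alpha+beta)

clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))

# For unseen proteins, K_test must hold their kernel values against the
# training proteins, i.e. a matrix of shape (n_test, n_train).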
A novel human leucocyte antigen-DRB1 genotyping method based on multiplex primer extension reactions
Abstract:
We have developed and validated a semi-automated fluorescent method of genotyping human leucocyte antigen (HLA)-DRB1 alleles, HLA-DRB1*01-16, by multiplex primer extension reactions. This method is based on the extension of a primer that anneals immediately adjacent to the single-nucleotide polymorphism with fluorescent dideoxynucleotide triphosphates (minisequencing), followed by analysis on an ABI Prism 3700 capillary electrophoresis instrument. The validity of the method was confirmed by genotyping 261 individuals using both this method and polymerase chain reaction with sequence-specific primer (PCR-SSP) or sequencing and by demonstrating Mendelian inheritance of HLA-DRB1 alleles in families. Our method provides a rapid means of performing high-throughput HLA-DRB1 genotyping using only two PCR reactions followed by four multiplex primer extension reactions and PCR-SSP for some allele groups. In this article, we describe the method and discuss its advantages and limitations.
Abstract:
Lentiviral vectors pseudotyped with the vesicular stomatitis virus glycoprotein (VSV-G) are emerging as the vectors of choice for in vitro and in vivo gene therapy studies. However, the current method for harvesting lentivectors relies upon ultracentrifugation at 50 000 g for 2 h. At this ultra-high speed, the rotors currently in use generally have a small volume capacity. Therefore, preparations of large volumes of high-titre vectors are time-consuming and laborious to perform. In the present study, viral vector supernatant harvests from vector-producing cells (VPCs) were pre-treated with various amounts of poly-L-lysine (PLL) and concentrated by low-speed centrifugation. Optimal conditions were established when 0.005% PLL (w/v) was added to the vector supernatant harvests, followed by incubation for 30 min and centrifugation at 10 000 g for 2 h at 4°C. Direct comparison with ultracentrifugation demonstrated that the new method consistently produced larger volumes (6 ml) of high-titre viral vector at 1 × 10^8 transduction units (TU)/ml (from about 3000 ml of supernatant) in one round of concentration. Electron microscopic analysis showed that PLL and viral vectors formed complexes, which probably facilitated precipitation at the low-speed concentration step (10 000 g), a speed which does not usually precipitate viral particles efficiently. Transfection of several cell lines in vitro and transduction in vivo in the liver with the lentivector/PLL complexes demonstrated efficient gene transfer without any significant signs of toxicity. These results suggest that the new method provides a convenient means of harvesting large volumes of high-titre lentivectors, facilitating gene therapy experiments in large animals or human gene therapy trials, in which large amounts of lentiviral vector are a prerequisite.
Abstract:
By using the method of characteristics, the effect of the footing-soil interface friction angle (δ) on the bearing capacity factor Nγ was computed for a strip footing. The analysis was performed by employing a curved trapped wedge under the footing base; this wedge joins the footing base at a distance Bt from the footing edge. For a given footing width (B), the value of Bt increases continuously with a decrease in δ. For δ = 0, no trapped wedge exists below the footing base, that is, Bt/B = 0.5. On the contrary, with δ = φ, the point of emergence of the trapped wedge approaches the footing edge with an increase in φ. The magnitude of Nγ increases substantially with an increase in δ/φ. The maximum depth of the plastic zone becomes greater for higher values of δ/φ. The results from the present analysis were found to compare well with those reported in the literature.
Abstract:
Careful study of various aspects presented in the note reveals basic fallacies in the concept and in the final conclusions. The Authors claim to have presented a new method of determining Cv. However, the note does not contain a new method. In fact, the method proposed is an attempt to generate settlement versus time data using only two values of (t, δ). The Authors have used the rectangular hyperbola method to determine Cv from the predicted δ-t data. In this context, the title of the paper itself is misleading and questionable. The Authors have compared predicted Cv values with measured values, both of them being results of the rectangular hyperbola method.
Abstract:
Reaction of 6-acetoxy-5-bromomethylquinoline (1c) and 2-bromomethyl-4-(2'-pyridyl)phenyl acetate (2b) with tetrachlorocatechol in acetone in the presence of anhydrous potassium carbonate resulted in the formation of diastereomeric products 3c, 3d, 4e and 4f.
Abstract:
We present a generalization of the finite volume evolution Galerkin scheme [M. Lukáčová-Medviďová, J. Saibertová, G. Warnecke, Finite volume evolution Galerkin methods for nonlinear hyperbolic systems, J. Comput. Phys. 183 (2002) 533-562; M. Lukáčová-Medviďová, K.W. Morton, G. Warnecke, Finite volume evolution Galerkin (FVEG) methods for hyperbolic problems, SIAM J. Sci. Comput. 26 (2004) 1-30] to hyperbolic systems with spatially varying flux functions. Our goal is to develop a genuinely multi-dimensional numerical scheme for wave propagation problems in heterogeneous media. We illustrate our methodology for acoustic waves in a heterogeneous medium, but the results can be generalized to more complex systems. The finite volume evolution Galerkin (FVEG) method is a predictor-corrector method combining a finite volume corrector step with an evolutionary predictor step. In order to evolve fluxes along the cell interfaces, we use a multi-dimensional approximate evolution operator. The latter is constructed using the theory of bicharacteristics under the assumption of spatially dependent wave speeds. To approximate the heterogeneous medium, a staggered grid approach is used. Several numerical experiments for wave propagation with continuous as well as discontinuous wave speeds confirm the robustness and reliability of the new FVEG scheme.
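The FVEG scheme itself is genuinely multi-dimensional and is not reproduced here; as a much simpler illustration of how spatially varying wave speeds and impedances enter a finite volume update, the sketch below implements a first-order upwind (wave-propagation) finite volume scheme for 1D acoustics in a piecewise constant medium, following the standard textbook treatment. All material parameters and the initial pulse are illustrative.

import numpy as np

nx, length, t_end = 200, 1.0, 0.3
dx = length / nx
x = (np.arange(nx) + 0.5) * dx

# Piecewise constant medium: the bulk modulus jumps at x = 0.5.
K = np.where(x < 0.5, 1.0, 4.0)
rho = np.ones(nx)
c = np.sqrt(K / rho)          # local sound speed
Z = rho * c                   # local acoustic impedance

p = np.exp(-200.0 * (x - 0.25) ** 2)   # initial pressure pulse
u = np.zeros(nx)                       # initial velocity
dt = 0.4 * dx / c.max()                # CFL-limited time step

t = 0.0
while t < t_end:
    # Zero-gradient ghost cells at both ends (simple outflow boundaries).
    pe = np.concatenate(([p[0]], p, [p[-1]]))
    ue = np.concatenate(([u[0]], u, [u[-1]]))
    Ze = np.concatenate(([Z[0]], Z, [Z[-1]]))
    ce = np.concatenate(([c[0]], c, [c[-1]]))

    dp = pe[1:] - pe[:-1]               # jumps at the nx+1 cell interfaces
    du = ue[1:] - ue[:-1]
    Zl, Zr = Ze[:-1], Ze[1:]
    cl, cr = ce[:-1], ce[1:]
    a1 = (Zr * du - dp) / (Zl + Zr)     # left-going wave strength
    a2 = (Zl * du + dp) / (Zl + Zr)     # right-going wave strength

    # Right-going (A+ dQ) and left-going (A- dQ) fluctuations at each interface.
    apdq_p, apdq_u = cr * a2 * Zr, cr * a2
    amdq_p, amdq_u = cl * a1 * Zl, -cl * a1

    # Each cell is updated by A+ dQ from its left interface and A- dQ from its right one.
    p = p - dt / dx * (apdq_p[:-1] + amdq_p[1:])
    u = u - dt / dx * (apdq_u[:-1] + amdq_u[1:])
    t += dt

print("max |p| after interaction with the material interface:", round(float(np.abs(p).max()), 3))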
Abstract:
Taylor (1948) suggested a method for determining the settlement, d, corresponding to 90% consolidation, utilizing the characteristics of the plot of the degree of consolidation, U, versus the square root of the time factor, √T. Based on the properties of the slope of the U versus √T curve, a new method is proposed to determine d corresponding to any U above 70% consolidation for the evaluation of the coefficient of consolidation, Cv. The effects of secondary consolidation on the Cv value at different percentages of consolidation can be studied. Cv values closer to the field values can be determined in less time compared with Taylor's method. At any U between 75% and 95% consolidation, the Cv(U) obtained by the new method lies between Taylor's Cv and Casagrande's Cv.
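As a worked example of the classical Taylor (1948) relation referred to above (not the new method proposed here), once the time to 90% consolidation t90 has been read off the settlement versus square-root-of-time plot, the coefficient of consolidation follows from Cv = T90 * Hdr^2 / t90 with T90 = 0.848; the specimen dimensions and t90 below are invented.

# Classical Taylor square-root-of-time calculation (illustrative values only).
T90 = 0.848                  # time factor at U = 90%
H_dr = 0.01                  # drainage path in metres (20 mm specimen, double drainage)
t90 = 12.0 * 60.0            # time to 90% consolidation in seconds

Cv = T90 * H_dr ** 2 / t90   # coefficient of consolidation, m^2/s
print("Cv =", Cv, "m^2/s")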
Abstract:
A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents in various matrices. In the case of orthophosphate, the proposed method differs from the available methods, such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.
Abstract:
A one-step, clean and efficient conversion of arylaldehydes, ketones and ketals into the corresponding hydrocarbons under ionic hydrogenation conditions, employing sodium cyanoborohydride in the presence of two to three equivalents of BF3·OEt2, is described.
Abstract:
Many websites presently provide the facility for users to rate item quality based on their opinions. These ratings are later used to produce item reputation scores. The majority of websites apply the mean method to aggregate user ratings. This method is very simple but is not considered an accurate aggregator. Many methods have been proposed to make aggregators produce more accurate reputation scores. In the majority of the proposed methods, the authors use extra information about the rating providers or about the context (e.g. time) in which the rating was given. However, this information is not always available. In such cases these methods fall back on the mean method or other simple alternatives to produce reputation scores. In this paper, we propose a novel reputation model that generates more accurate item reputation scores based on the collected ratings only. Our proposed model embeds previously disregarded statistical properties of a given rating dataset in order to enhance the accuracy of the generated reputation scores. In more detail, we use the Beta distribution to produce weights for ratings and aggregate the ratings using the weighted mean method. Experiments show that the proposed model exhibits performance superior to that of current state-of-the-art models.
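One plausible reading of the weighting idea described above is sketched below: a Beta distribution is fitted (by the method of moments) to the normalised ratings, each rating is weighted by the fitted density at its value, and the reputation score is the weighted mean. The exact weighting scheme of the paper is not reproduced, and the ratings are invented.

import numpy as np
from scipy.stats import beta

ratings = np.array([5, 4, 5, 1, 5, 4, 4, 5, 2, 5], dtype=float)   # 1-5 stars
x = (ratings - 0.5) / 5.0            # map to (0, 1), strictly inside the interval

# Method-of-moments estimates of the Beta distribution parameters.
m, v = x.mean(), x.var()
common = m * (1.0 - m) / v - 1.0
a, b = m * common, (1.0 - m) * common

# Ratings lying in high-density regions of the fitted Beta get larger weights.
weights = beta.pdf(x, a, b)
reputation = np.average(ratings, weights=weights)

print("plain mean:", ratings.mean(), " beta-weighted reputation:", round(reputation, 2))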
Abstract:
Time-frequency analyses of various simulated and experimental signals due to elastic wave scattering from damage are performed using the wavelet transform (WT) and the Hilbert-Huang transform (HHT), and their performances are compared in the context of quantifying the damage. The spectral finite element method is employed for the numerical simulation of wave scattering. An analytical study is carried out to study the effects of higher-order damage parameters on the wave reflected from a damage. Based on this study, error bounds are computed for the signals in the spectral as well as the time-frequency domains. It is shown how such an error bound can provide an estimate of the error in the modelling of wave propagation in a structure with damage. Measures of damage based on the WT and HHT are derived to quantify the damage information hidden in the signal. The aim of this study is to obtain detailed insights into the problems of (1) identifying localised damage, (2) dispersion of multifrequency non-stationary signals after they interact with various types of damage, and (3) quantifying the damage. Sensitivity analysis of the scattered wave signal based on time-frequency representations helps to correlate the variation of the damage index measures with damage parameters such as damage size and material degradation factors.
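A toy illustration of a wavelet-based damage measure (not the paper's spectral finite element simulations) is given below: the relative increase in continuous wavelet transform energy when an extra wave packet, standing in for a reflection from damage, is added to a baseline signal. It uses the PyWavelets package; the signal parameters are invented.

import numpy as np
import pywt

fs = 100e3                              # sampling rate, Hz
t = np.arange(0.0, 2e-3, 1.0 / fs)

def packet(t0, f):
    """Gaussian-windowed tone burst centred at time t0 with carrier frequency f."""
    return np.exp(-((t - t0) * 2e4) ** 2) * np.sin(2 * np.pi * f * (t - t0))

baseline = packet(0.4e-3, 20e3)                    # incident wave only
damaged = baseline + 0.3 * packet(1.2e-3, 20e3)    # plus a reflection from damage

scales = np.arange(1, 64)
def scalogram_energy(signal):
    # Continuous wavelet transform with a Morlet wavelet; total scalogram energy.
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    return float(np.sum(np.abs(coeffs) ** 2))

E0, E1 = scalogram_energy(baseline), scalogram_energy(damaged)
print("wavelet energy damage index:", round((E1 - E0) / E0, 3))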