59 results for root mean square roughness
Abstract:
Finite element analysis (FEA) models of uniaxial loading of pumpkin peel and flesh tissues were developed and validated using experimental results. The tensile model was developed with both linear elastic and plastic material models, while the compression model was developed only with the plastic material model. The force versus time curves obtained from the FEA models followed a pattern similar to the experimental curves; however, the curve produced with the linear elastic material properties deviated more from the experimental curves. The predicted force values were determined and compared with the experimental curves. An error indicator was introduced, computed for each case, and compared. Additionally, Root Mean Square Error (RMSE) values were calculated for each model and compared. The modelling results were used to develop a material model for peel and flesh tissues in FEA modelling of the mechanical peeling of tough-skinned vegetables.
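For reference, the RMSE figure of merit used above reduces to a one-line computation. The following sketch, with hypothetical force samples standing in for the experimental and FEA curves, illustrates it:

```python
import numpy as np

def rmse(experimental, predicted):
    """Root Mean Square Error between two force-time curves
    sampled at the same time points."""
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((experimental - predicted) ** 2))

# Hypothetical force samples (N) at common time points
f_experimental = np.array([0.0, 4.1, 8.3, 12.2, 15.8])
f_fea_plastic = np.array([0.0, 4.0, 8.5, 12.0, 16.1])

print(f"RMSE: {rmse(f_experimental, f_fea_plastic):.3f} N")
```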
Abstract:
Straylight, lens yellowing and ocular aberrations were assessed in a group of people with type 1 diabetes and in an age-matched control group. Most of the former had low levels of neuropathy. Relative to the control group, the type 1 diabetes group demonstrated greater straylight, greater lens yellowing, and differences in some higher-order aberration coefficients without a significant increase in root-mean-square higher-order aberrations. Differences between groups did not increase significantly with age. The results are similar to the findings for ocular biometry reported previously for this group of participants, and suggest that age-related changes in the optics of the eyes of people with well-controlled diabetes need not be accelerated.
Abstract:
Background: Wavefront-guided laser-assisted in situ keratomileusis (LASIK) is a widespread and effective surgical treatment for myopic and astigmatic correction, but whether it induces higher-order aberrations remains controversial. The study was designed to evaluate the changes in higher-order aberrations after wavefront-guided ablation with the IntraLase femtosecond laser in moderate to high astigmatism. Methods: Twenty-three eyes of 15 patients with moderate to high astigmatism (mean cylinder, −3.22 ± 0.59 dioptres) aged between 19 and 35 years (mean age, 25.6 ± 4.9 years) were included in this prospective study. Subjects with cylinder ≥1.50 D and ≤2.75 D were classified as having moderate astigmatism, while high astigmatism was ≥3.00 D. All patients underwent femtosecond laser–enabled (150-kHz IntraLase iFS; Abbott Medical Optics Inc) wavefront-guided ablation. Uncorrected (UDVA) and corrected (CDVA) distance visual acuity in logMAR, keratometry, central corneal thickness (CCT) and higher-order aberrations (HOAs) over a 6 mm pupil were assessed preoperatively and 6 months postoperatively. The relationships between the postoperative change in HOAs and preoperative mean spherical equivalent refraction, mean astigmatism, and postoperative CCT were tested. Results: At the last follow-up, mean UDVA had improved (P < 0.0001) while CDVA remained unchanged (P = 0.48), and no eyes lost ≥2 lines of CDVA. Mean spherical equivalent refraction was reduced (P < 0.0001) and was within the ±0.50 D range in 61% of eyes. The average corneal curvature was flatter by 4 D and CCT was reduced by 83 μm (P < 0.0001 for both), postoperatively. Coma aberrations remained unchanged (P = 0.07), while the postoperative change in trefoil (P = 0.047) was not clinically significant. The 4th-order HOAs (spherical aberration and secondary astigmatism) and the HOA root mean square (RMS) increased from −0.18 ± 0.07 μm, 0.04 ± 0.03 μm and 0.47 ± 0.11 μm preoperatively to 0.33 ± 0.19 μm (P = 0.004), 0.21 ± 0.09 μm (P < 0.0001) and 0.77 ± 0.27 μm (P < 0.0001) six months postoperatively. The change in spherical aberration after the procedure increased with the degree of preoperative myopia. Conclusions: Wavefront-guided IntraLASIK offers a safe and effective option for vision and visual function improvement in astigmatism. Although a reduction of HOAs is possible in a few eyes, spherical-like aberrations increased in the majority of treated eyes.
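For context, the HOA root mean square reported above is the square root of the sum of the squared higher-order Zernike coefficients. A minimal sketch with invented coefficient values:

```python
import math

# Hypothetical higher-order Zernike coefficients (microns) over a
# 6 mm pupil: e.g. trefoil, coma and spherical aberration terms.
coefficients_um = [0.12, -0.05, 0.33, 0.21, -0.08]

# HOA RMS is the square root of the sum of squared coefficients.
hoa_rms = math.sqrt(sum(c ** 2 for c in coefficients_um))
print(f"HOA RMS: {hoa_rms:.2f} um")
```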
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the 'wet' areas where trees and houses are partly covered by water. This can be regarded as a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label each pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and for modelling the primary classes for unmixing, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. Conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. As it is not accurate to assume that every pixel in an image contains all endmember classes, these methods usually cause an overestimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers is derived for every pixel using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
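To make the underlying linear mixture model concrete, the following sketch performs sum-to-one constrained unmixing of a single pixel with invented endmember spectra; it is a generic illustration, not the thesis's implementation:

```python
import numpy as np

def unmix_sum_to_one(pixel, endmembers, weight=1e3):
    """Estimate endmember fractions for one pixel under the linear
    mixture model, softly enforcing that fractions sum to one by
    appending a heavily weighted constraint row."""
    # endmembers: (n_bands, n_endmembers); pixel: (n_bands,)
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight * 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

# Invented 4-band spectra for water, vegetation and soil endmembers
E = np.array([[0.05, 0.30, 0.25],
              [0.04, 0.50, 0.30],
              [0.03, 0.45, 0.40],
              [0.02, 0.60, 0.45]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
print(unmix_sum_to_one(pixel, E))  # ~ [0.6, 0.3, 0.1]
```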
Abstract:
Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS–SVM model is more accurate. In the analysis presented in this paper, statistical estimators including relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performances of the models. It is concluded that the errors decrease after size reduction and that the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
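To illustrate the hybrid structure, PLS can compress the predictors into a few latent components before the SVM regressor is fitted. The sketch below uses scikit-learn with synthetic data in place of the Tehran measurements:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                             # synthetic predictors
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=500)   # synthetic CO levels

# Step 1: PLS compresses the 12 inputs to a few latent components.
pls = PLSRegression(n_components=3)
pls.fit(X, y)
X_latent = pls.transform(X)

# Step 2: SVM regression on the reduced inputs.
svr = SVR(kernel="rbf", C=10.0).fit(X_latent, y)

pred = svr.predict(X_latent)
print(f"In-sample RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```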
Abstract:
Retinal image properties such as contrast and spatial frequency play important roles in the development of normal vision. For example, visual environments consisting solely of low contrasts and/or low spatial frequencies induce myopia. The visual image is processed by the retina, which then locally controls eye growth. In terms of the retinal neurotransmitters that link visual stimuli to eye growth, there is strong evidence for involvement of the retinal dopamine (DA) system. For example, effectively increasing retinal DA levels by using DA agonists can suppress the development of form-deprivation myopia (FDM). However, whether visual feedback controls eye growth by modulating retinal DA release, and/or some other factors, is still being elucidated. This thesis is chiefly concerned with the relationship between the dopaminergic system and retinal image properties in eye growth control. More specifically, it was determined whether the amount of retinal DA release falls as the complexity of the image degrades. For example, we investigated whether the level of retinal DA release decreased as image contrast decreased. In addition, the effects of spatial frequency, spatial energy distribution slope, and spatial phase on retinal DA release and eye growth were examined. When chicks were 8 days old, a cone-lens imaging system was applied monocularly (+30 D, 3.3 cm cone). A short-term treatment period (6 hr) and a longer-term treatment period (4.5 days) were used. The short-term treatment tests for an acute reduction in DA release by the visual stimulus, as is seen with diffusers and lenses, whereas the 4.5-day point tests for a reduction in DA release after more prolonged exposure to the visual stimulus. In the contrast study, 1.35 cyc/deg square wave grating targets of 95%, 67%, 45%, 12% or 4.2% contrast were used. Blank (0% contrast) targets were included for comparison. In the spatial frequency study, both sine and square wave grating targets with fundamental spatial frequencies of either 0.017 cyc/deg or 0.13 cyc/deg and 95% contrast were used. In the spectral slope study, 30% root-mean-squared (RMS) contrast fractal noise targets with spectral fall-offs of 1/f^0.5, 1/f and 1/f^2 were used. In the spatial alignment study, a structured Maltese cross (MX) target, a structured circular patterned (C) target and scrambled versions of these two targets (SMX and SC) were used. Each treatment group comprised 6 chicks for ocular biometry (refraction and ocular dimension measurement) and 4 for analysis of retinal DA release. Vitreal dihydroxyphenylacetic acid (DOPAC) was analysed by ion-paired reversed-phase high performance liquid chromatography with electrochemical detection (HPLC-ED) as a measure of retinal DA release. In the comparison between retinal DA release and eye growth, large reductions in retinal DA release, possibly due to the decreased light level inside the cone-lens imaging system, were observed across all treated eyes, while only those exposed to the low contrast, low spatial frequency sine wave grating, 1/f^2, C and SC targets had myopic shifts in refraction. Amongst these treatment groups, no acute effect was observed, and longer-term effects were found only in the low contrast and 1/f^2 groups. These findings suggest that retinal DA release does not causally link visual stimulus properties to eye growth, and that these target-induced changes in refractive development are not dependent on the level of retinal DA release. Retinal dopaminergic cells might instead be affected indirectly via other retinal cells that respond immediately to changes in the contrast of the retinal image.
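For readers unfamiliar with the stimuli, a fractal noise target with a prescribed spectral fall-off and RMS contrast can be synthesized by shaping white noise in the frequency domain. The sketch below is a generic illustration and is not the stimulus-generation code used in the thesis:

```python
import numpy as np

def fractal_noise(size=256, beta=1.0, rms_contrast=0.30, seed=0):
    """Noise image whose amplitude spectrum falls off as 1/f^beta,
    scaled so that RMS contrast (std / mean luminance) is as requested."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                                   # avoid divide-by-zero at DC
    spectrum = rng.normal(size=(size, size)) + 1j * rng.normal(size=(size, size))
    img = np.fft.ifft2(spectrum / f ** beta).real   # shape white noise by 1/f^beta
    mean_lum = 0.5
    img = img - img.mean()
    img *= rms_contrast * mean_lum / img.std()
    return np.clip(img + mean_lum, 0.0, 1.0)

target = fractal_noise(beta=2.0)                    # a 1/f^2 target
print(target.shape, round(target.std() / target.mean(), 2))
```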
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that this technique yields a better probability of detection for the reduced-order phase signals than the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which gives desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it uses the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
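To convey the flavour of the lattice structure discussed above, the following sketch implements one stage of a textbook gradient adaptive lattice, where each stage adapts a single reflection coefficient from its forward and backward prediction errors; it is not the thesis's least-mean P-norm algorithm:

```python
import numpy as np

def gal_stage(f_in, b_in, mu=0.01):
    """One stage of a gradient adaptive lattice (GAL) filter.
    f_in, b_in: forward/backward prediction errors from the previous
    stage; returns the errors passed to the next stage."""
    n = len(f_in)
    k = 0.0                       # reflection coefficient
    f_out = np.zeros(n)
    b_out = np.zeros(n)
    b_prev = 0.0                  # b_in delayed by one sample
    for t in range(n):
        f_out[t] = f_in[t] + k * b_prev
        b_out[t] = b_prev + k * f_in[t]
        # stochastic gradient update minimizing f_out^2 + b_out^2
        k -= mu * (f_out[t] * b_prev + b_out[t] * f_in[t])
        b_prev = b_in[t]
    return f_out, b_out

rng = np.random.default_rng(1)
x = rng.normal(size=1000)         # stage-0 errors are the input itself
f1, b1 = gal_stage(x, x)
```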
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image, according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used; in this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. To take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of the reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms produce high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
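As a toy illustration of the subband coding pipeline (wavelet decomposition, per-subband quantization, reconstruction), here is a sketch using PyWavelets with a synthetic image; the fixed packet structure, generalized Gaussian modelling and lattice VQ described above are not reproduced:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))          # stand-in for a fingerprint image

# Two-level 2-D wavelet decomposition (a plain dyadic tree,
# not the thesis's fixed wavelet-packet structure).
coeffs = pywt.wavedec2(image, "bior4.4", level=2)

# Uniform scalar quantization per subband as a crude stand-in
# for the per-subband quantizers discussed above.
step = 0.5
quantized = [np.round(coeffs[0] / step) * step]
for detail in coeffs[1:]:
    quantized.append(tuple(np.round(d / step) * step for d in detail))

reconstructed = pywt.waverec2(quantized, "bior4.4")
mse = np.mean((image - reconstructed) ** 2)
print(f"Reconstruction MSE: {mse:.4f}")
```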
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (like frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered in Part I of this thesis. For FM signals, the approach of time-frequency analysis is considered in Part II. In Part I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems, so we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e., signals with time-varying spectra. Nonstationary signals are important in synthetic and real-life applications; examples are the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain, in which multicomponent signals can be identified by multiple energy peaks. Many real-life and synthetic signals are multicomponent in nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than the two-dimensional time-lag kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
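As a simple non-parametric baseline in the same spirit (not the adaptive T-class algorithm itself), the IF of a mono-component FM signal can be estimated from the peak of its spectrogram:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
# Linear FM (chirp): IF sweeps from 50 Hz to 250 Hz
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))

f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
if_estimate = f[np.argmax(S, axis=0)]      # peak frequency at each time
if_true = 50 + 200 * tt                    # analytic IF of the chirp

print(f"Mean abs IF error: {np.mean(np.abs(if_estimate - if_true)):.2f} Hz")
```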
Abstract:
Linear adaptive channel equalization using the least mean square (LMS) algorithm and the recursive least-squares (RLS) algorithm is proposed for an innovative multi-user (MU) MIMO-OFDM wireless broadband communications system. The proposed equalization method adaptively compensates for the channel impairments caused by frequency selectivity in the propagation environment. Simulations of the proposed adaptive equalizer are conducted using a training-sequence method to determine optimal performance through a comparative analysis. Results show an improvement of 0.15 in BER (at an SNR of 16 dB) when using adaptive equalization with the RLS algorithm, compared to the case in which no equalization is employed. In general, adaptive equalization using the LMS and RLS algorithms proved to be significantly beneficial for MU-MIMO-OFDM systems.
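A minimal LMS equalizer trained on a known sequence, with an invented channel, illustrates the training-sequence approach; the MU-MIMO-OFDM system itself is not modelled here:

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)           # BPSK training sequence
channel = np.array([1.0, 0.4, -0.2])                   # invented FIR channel
received = np.convolve(symbols, channel)[: len(symbols)]
received += 0.05 * rng.normal(size=len(symbols))       # AWGN

n_taps, mu = 7, 0.01
w = np.zeros(n_taps)                                   # equalizer taps
for n in range(n_taps, len(symbols)):
    x = received[n - n_taps + 1 : n + 1][::-1]         # tap-delay line
    e = symbols[n - 3] - w @ x                         # desired symbol, delayed
    w += mu * e * x                                    # LMS update

print("Final |error|:", abs(e))
```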
Abstract:
Several approaches have been introduced in the literature for active noise control (ANC) systems. Since the filtered-x least-mean-square (FxLMS) algorithm appears to be the best choice as a controller filter, researchers tend to improve the performance of ANC systems by enhancing and modifying this algorithm. As its first novelty, this paper proposes a new version of the FxLMS algorithm. In many ANC applications, an on-line secondary path modeling method using white noise as a training signal is required to ensure convergence of the system. As its second novelty, this paper proposes a new approach for on-line secondary path modeling based on a new variable-step-size (VSS) LMS algorithm in feedforward ANC systems. The proposed algorithm is designed so that noise injection stops at the optimum point, once the modeling accuracy is sufficient. In this approach, a sudden change in the secondary path during operation makes the algorithm reactivate injection of the white noise to re-adjust the secondary path estimate. Comparative simulation results presented in this paper indicate the effectiveness of the proposed approach in reducing both narrow-band and broad-band noise. In addition, the proposed ANC system is robust against sudden changes of the secondary path model.
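The core FxLMS recursion filters the reference signal through an estimate of the secondary path before the LMS update. The sketch below shows this standard form with invented paths and a perfect secondary-path model; the paper's modified algorithm and VSS modelling are not reproduced:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 5000
x = rng.normal(size=N)                     # reference noise
P = [0.9, 0.5, 0.1]                        # primary path (invented)
S = [0.7, 0.3]                             # secondary path (invented)
S_hat = S                                  # assume a perfect secondary-path model
d = lfilter(P, 1.0, x)                     # noise at the error microphone

L, mu = 8, 0.01
w = np.zeros(L)                            # adaptive controller taps
x_buf = np.zeros(L)                        # reference tap-delay line
y_buf = np.zeros(len(S))                   # controller-output history
fx = lfilter(S_hat, 1.0, x)                # filtered-x signal
fx_buf = np.zeros(L)

e_hist = np.zeros(N)
for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = w @ x_buf                          # anti-noise before the secondary path
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[n] - np.dot(S, y_buf)            # residual at the error mic
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx[n]
    w += mu * e * fx_buf                   # FxLMS update
    e_hist[n] = e

print(f"Residual power, last 500 samples: {np.mean(e_hist[-500:] ** 2):.4f}")
```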
Abstract:
Purpose: To use a large wavefront database of a clinical population to investigate relationships between refractions and higher-order aberrations, and between aberrations of right and left eyes. Methods: Third- and fourth-order aberration coefficients and higher-order root-mean-squared aberrations (HO RMS), scaled to a pupil size of 4.5 mm diameter, were analysed in a population of about 24,000 patients from Carl Zeiss Vision's European wavefront database. Correlations were determined between the aberrations and the variables of refraction, near addition and cylinder. Results: Most aberration coefficients were significantly dependent upon these variables, but the proportions of aberrations that could be explained by these factors were less than 2%, except for spherical aberration (12%), horizontal coma (9%) and HO RMS (7%). Near addition was the major contributor for horizontal coma (8.5% out of 9.5%) and spherical equivalent was the major contributor for spherical aberration (7.7% out of 11.6%). Interocular correlations were highly significant for all aberration coefficients, varying between 0.16 and 0.81. Anisometropia was a variable of significance for three aberrations (vertical coma, secondary astigmatism and tetrafoil), but little importance can be placed on this because of the small proportions of these aberrations that can be explained by refraction (all less than 1.0%). Conclusions: Most third- and fourth-order aberration coefficients were significantly dependent upon spherical equivalent, near addition and cylinder, but only horizontal coma (9%) and spherical aberration (12%) showed dependencies greater than 2%. Interocular correlations were highly significant for all aberration coefficients, but anisometropia had little influence on aberration coefficients.
Abstract:
The movement of molecules inside living cells is a fundamental feature of biological processes. The ability to both observe and analyse the details of molecular diffusion in vivo at the single-molecule and single-cell level can add significant insight into the molecular architectures of diffusing molecules and the nanoscale environment in which they diffuse. The tool of choice for monitoring dynamic molecular localization in live cells is fluorescence microscopy, especially when total internal reflection fluorescence is combined with fluorescent protein (FP) reporters, which offers exceptional imaging contrast for dynamic processes in the cell membrane under relatively physiological conditions compared with competing single-molecule techniques. There exist several different complex modes of diffusion, and discriminating these from each other is challenging at the molecular level owing to underlying stochastic behaviour. Analysis is traditionally performed using the mean square displacements of tracked particles; however, this generally requires more data points than is typical for single FP tracks owing to photophysical instability. Presented here is a novel approach allowing robust Bayesian ranking of diffusion processes to discriminate multiple complex modes probabilistically. It is a computational approach that biologists can use to understand single-molecule features in live cells.
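For reference, the classical mean square displacement analysis mentioned above averages, for each time lag, the squared displacement along a track. A minimal sketch with a synthetic 2-D Brownian track:

```python
import numpy as np

def mean_square_displacement(track, max_lag):
    """MSD of a single 2-D track (shape: (n_points, 2)) for lags 1..max_lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = track[lag:] - track[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

rng = np.random.default_rng(0)
steps = rng.normal(scale=0.1, size=(200, 2))   # Brownian steps (um)
track = np.cumsum(steps, axis=0)

msd = mean_square_displacement(track, max_lag=20)
# For free Brownian motion, MSD grows linearly with lag: MSD = 4 D t
print(np.round(msd[:5], 4))
```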
Abstract:
An Artificial Neural Network (ANN) is a computational modeling tool that has found extensive acceptance in many disciplines for modeling complex real-world problems. An ANN can model problems through learning by example, rather than by fully understanding the detailed characteristics and physics of the system. In the present study, the accuracy and predictive power of an ANN were evaluated in predicting the kinematic viscosity of biodiesels over the wide range of temperatures typically encountered in diesel engine operation. In this model, temperature and the chemical composition of biodiesel were used as input variables. To obtain the necessary data for model development, the chemical composition and temperature-dependent fuel properties of ten different types of biodiesel were measured experimentally using laboratory-standard testing equipment, following internationally recognized testing procedures. The Neural Network Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture was optimised by trial and error to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the absolute fraction of variance (R²), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found that the ANN is highly accurate in predicting the viscosity of biodiesel and demonstrates the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties at different temperature levels. Therefore, the model developed in this study can be a useful tool for accurately predicting biodiesel fuel properties instead of undertaking costly and time-consuming experimental tests.
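To illustrate the modelling setup (in Python with scikit-learn rather than the study's MATLAB toolbox, and with synthetic data in place of the measured fuel properties):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
T = rng.uniform(10, 90, size=300)                  # temperature, deg C
comp = rng.uniform(0, 1, size=(300, 2))            # two composition fractions
X = np.column_stack([T, comp])
# Synthetic viscosity with an Arrhenius-like fall against temperature
y = (2.0 + comp[:, 0]) * np.exp(500.0 / (T + 273.15)) + 0.05 * rng.normal(size=300)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)

pred = model.predict(X)
rms = np.sqrt(np.mean((pred - y) ** 2))
print(f"R^2: {r2_score(y, pred):.3f}, RMS error: {rms:.3f}")
```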