884 results for Mean squared error


Relevância: 90.00%

Resumo:

An Artificial Neural Network (ANN) is a computational modelling tool that has found extensive acceptance in many disciplines for modelling complex real-world problems. An ANN can model problems by learning from examples, rather than by fully understanding the detailed characteristics and physics of the system. In the present study, the accuracy and predictive power of an ANN were evaluated in predicting the kinematic viscosity of biodiesels over the wide range of temperatures typically encountered in diesel engine operation. In this model, temperature and the chemical composition of biodiesel were used as input variables. To obtain the necessary data for model development, the chemical composition and temperature-dependent fuel properties of ten different types of biodiesel were measured experimentally using laboratory-standard testing equipment, following internationally recognized testing procedures. The Neural Networks Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture was optimised by trial and error to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the absolute fraction of variance (R2), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found that the ANN is highly accurate in predicting the viscosity of biodiesel and demonstrates the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties at different temperature levels. Therefore, the model developed in this study can be a useful tool to accurately predict biodiesel fuel properties instead of undertaking costly and time-consuming experimental tests.
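The error measures named here are standard, and a minimal sketch of how they are computed from predicted and measured values may be useful. The viscosity numbers below are invented for illustration, not taken from the study:

```python
import numpy as np

def rms_error(y_true, y_pred):
    """Root mean squared (RMS) error between measured and predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Fraction of variance explained (coefficient of determination)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical kinematic viscosities (mm^2/s), for illustration only.
measured = [4.2, 4.6, 5.1, 5.8]
predicted = [4.3, 4.5, 5.2, 5.7]
print(rms_error(measured, predicted))   # about 0.1
print(r_squared(measured, predicted))   # close to 1 -> good fit
```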

Relevância: 90.00%

Resumo:

Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behaviour has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. CO concentrations from the Rey monitoring station in the south of Tehran, from January 2007 to February 2011, were used to test the effectiveness of this method. Hourly CO concentrations were predicted using the SVM and the hybrid PLS–SVM models, and daily CO concentrations were predicted from the same four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error were employed to compare the performance of the models. The errors decrease after size reduction, and the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. The hybrid PLS–SVM model also required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
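The dimension-reduction stage of such a hybrid can be sketched as follows. This is a minimal NIPALS PLS1 reduction in NumPy on synthetic data (not the Tehran CO series), with an ordinary least-squares fit standing in for the SVM predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

def pls1_scores(X, y, n_components):
    """NIPALS PLS1: latent components of X chosen for covariance with y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    scores = []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight vector
        t = X @ w                       # score (latent variable)
        p = X.T @ t / (t @ t)           # loading
        X = X - np.outer(t, p)          # deflate X
        y = y - (y @ t) / (t @ t) * t   # deflate y
        scores.append(t)
    return np.column_stack(scores)

# Synthetic stand-in for hourly predictor data.
X = rng.normal(size=(200, 20))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

T = pls1_scores(X, y, n_components=2)   # 20 inputs reduced to 2
# A plain least-squares fit stands in for the SVM predictor here.
coef, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
y_hat = T @ coef + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))                     # high: two components capture the signal
```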

Relevância: 90.00%

Resumo:

Biodiesel, produced from renewable feedstocks, represents a more sustainable source of energy and will therefore play a significant role in providing the energy requirements for transportation in the near future. Chemically, all biodiesels are fatty acid methyl esters (FAME), produced from raw vegetable oil and animal fat. However, clear differences in chemical structure are apparent from one feedstock to the next in terms of chain length, degree of unsaturation, number of double bonds and double bond configuration, all of which determine the fuel properties of biodiesel. In this study, prediction models were developed to estimate the kinematic viscosity of biodiesel using an Artificial Neural Network (ANN) modelling technique. In developing the model, 27 parameters based on the chemical composition commonly found in biodiesel were used as input variables, and the kinematic viscosity of biodiesel was used as the output variable. The data needed to develop and simulate the network were collected from more than 120 published peer-reviewed papers. The Neural Networks Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture and learning algorithm were optimised by trial and error to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the coefficient of determination (R2), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found high predictive accuracy of the ANN in predicting the fuel properties of biodiesel and has demonstrated the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties. Therefore, the model developed in this study can be a useful tool to accurately predict biodiesel fuel properties instead of undertaking costly and time-consuming experimental tests.

Relevância: 90.00%

Resumo:

Nowadays, demand for automated gas metal arc welding (GMAW) is growing, and the need for intelligent systems to ensure the accuracy of the procedure has increased accordingly. To date, weld pool geometry has been the factor most used in the quality assessment of intelligent welding systems, but it has recently been found that the Mahalanobis Distance (MD) can not only be used for this purpose but is also more efficient. In the present paper, an Artificial Neural Network (ANN) is used to predict the MD parameter, and the advantages and disadvantages of other methods are also discussed. The Levenberg–Marquardt algorithm was found to be the most effective training algorithm for the GMAW process. Since the number of neurons plays an important role in optimal network design, it was determined by trial and error; 30 neurons proved optimal. The model was investigated with different numbers of layers in a Multilayer Perceptron (MLP) architecture, and for the aim of this work the best result was obtained with a single-layer MLP. The robustness of the system was evaluated by adding noise to the input data and studying its effect on the prediction capability of the network. The experiments for this study were conducted on an automated GMAW setup integrated with a data acquisition system and prepared in a laboratory for welding steel plate 12 mm in thickness. The accuracy of the network was evaluated by the root mean squared (RMS) error between the measured and estimated values. The low error value (about 0.008) reflects the good accuracy of the model, and the comparison of the ANN predictions with the test data set showed very good agreement, revealing the predictive power of the model. The ANN model offered here for the GMA welding process can therefore be used effectively for prediction purposes.

Relevância: 90.00%

Resumo:

Objective: To develop bioelectrical impedance analysis (BIA) equations to predict the total body water (TBW) and fat-free mass (FFM) of Sri Lankan children. Subjects/Methods: Data were collected from 5- to 15-year-old healthy children, randomly assigned to validation (M/F: 105/83) and cross-validation (M/F: 53/41) groups. Height, weight and BIA were measured. TBW was assessed using the isotope dilution method (D2O). Multiple regression analysis was used to develop preliminary equations, which were cross-validated on an independent group. The final prediction equation was constructed by combining the two groups and validated by PRESS (predicted residual sum of squares) statistics. The impedance index (height2/impedance; cm2/Ω), weight and sex code (male = 1; female = 0) were used as variables. Results: The independent variables of the final prediction equation for TBW predicted 86.3% of the variance, with a root mean squared error (RMSE) of 2.1 l; the PRESS statistic was 2.1 l, with PRESS residuals of 1.2 l. The independent variables predicted 86.9% of the variance of FFM, with an RMSE of 2.7 kg; the PRESS statistic was 2.8 kg, with PRESS residuals of 1.4 kg. The Bland–Altman technique showed that the majority of the residuals were within the mean bias ± 1.96 s.d. Conclusions: The results of this study provide BIA equations for the prediction of TBW and FFM in Sri Lankan children. To the best of our knowledge, there are no published BIA prediction equations validated on South Asian populations. The results of this study need to be affirmed by further studies on other closely related populations using multi-component body composition assessment.
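The form of such a prediction equation can be sketched with ordinary multiple regression on the same three predictors. All data and coefficients below are synthetic placeholders, not the published Sri Lankan equation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
height = rng.uniform(110, 170, n)        # cm
impedance = rng.uniform(400, 800, n)     # ohm
weight = rng.uniform(18, 60, n)          # kg
sex = rng.integers(0, 2, n)              # male = 1, female = 0

zindex = height ** 2 / impedance         # impedance index, cm^2/ohm
# Synthetic "true" TBW with noise; all coefficients are made up.
tbw = 0.45 * zindex + 0.15 * weight + 0.8 * sex + 1.5 + rng.normal(0, 0.5, n)

# Least-squares fit of TBW on impedance index, weight, sex and an intercept.
A = np.column_stack([zindex, weight, sex, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, tbw, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((tbw - pred) ** 2))
print(coef.round(2), round(rmse, 2))     # recovers the planted coefficients
```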

Relevância: 90.00%

Resumo:

We report an experimental study of a new type of turbulent flow that is driven purely by buoyancy. The flow is due to an unstable density difference, created using brine and water, across the ends of a long (length/diameter = 9) vertical pipe. The Schmidt number Sc is 670, and the Rayleigh number (Ra) based on the density gradient and diameter is about 10^8. Under these conditions the convection is turbulent, and the time-averaged velocity at any point is 'zero'. The Reynolds number based on the Taylor microscale, Re_λ, is about 65. The pipe is long enough for there to be an axially homogeneous region, with a linear density gradient, about 6–7 diameters long in the midlength of the pipe. In the absence of a mean flow and, therefore, mean shear, turbulence is sustained just by buoyancy. The flow can thus be considered to be an axially homogeneous turbulent natural convection driven by a constant (unstable) density gradient. We characterize the flow using flow visualization and particle image velocimetry (PIV). Measurements show that the mean velocities and the Reynolds shear stresses are zero across the cross-section; the root mean squared (r.m.s.) vertical velocity is larger than the lateral ones (by about one and a half times at the pipe axis). We identify some features of the turbulent flow using velocity correlation maps and the probability density functions of velocities and velocity differences. The flow away from the wall, affected mainly by buoyancy, consists of vertically moving fluid masses continually colliding and interacting, while the flow near the wall appears similar to that in wall-bounded shear-free turbulence. The turbulence is anisotropic, with the anisotropy increasing to large values as the wall is approached. A mixing length model with the diameter of the pipe as the length scale predicts well the scalings for velocity fluctuations and the flux. This model implies that the Nusselt number would scale as Ra^(1/2) Sc^(1/2), and the Reynolds number as Ra^(1/2) Sc^(-1/2). The velocity and flux measurements appear to be consistent with the Ra^(1/2) scaling, although it must be pointed out that the Rayleigh number range was less than 10. The Schmidt number was not varied to check the Sc scaling. The fluxes and the Reynolds numbers obtained in the present configuration are much higher than would be obtained in Rayleigh–Bénard (R–B) convection for similar density differences.

Relevância: 90.00%

Resumo:

The utility of near-infrared spectroscopy as a non-invasive technique for the assessment of internal eating quality parameters of mandarin fruit (Citrus reticulata cv. Imperial) was evaluated. The calibration procedure for the attributes of TSS (total soluble solids) and DM (dry matter) was optimised with respect to the reference sampling technique, scan averaging, spectral window, data pre-treatment (derivative treatment and scatter correction routine) and regression procedure. The recommended procedure involved sampling an equatorial position on the fruit with one scan per spectrum, and modified partial least squares model development on a 720–950 nm window, pre-treated as first-derivative absorbance data (gap size of 4 data points) with standard normal variate and detrend scatter correction. Calibration model performance for TSS and DM content was encouraging (typical Rc2 of >0.75 and >0.90, respectively; typical root mean squared standard error of calibration of <0.4% and <0.6%, respectively), whereas that for juiciness and total acidity was unacceptable. The robustness of the TSS and DM calibrations across new populations of fruit is documented in a companion study.
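The pre-treatment described (gap-4 first derivative followed by scatter correction) can be sketched as below. The spectra are synthetic, and the detrend step is omitted for brevity:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    s = np.asarray(spectra, float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

def first_derivative(spectra, gap=4):
    """Gap-segment first derivative across the wavelength axis."""
    s = np.asarray(spectra, float)
    return s[:, gap:] - s[:, :-gap]

# Two hypothetical absorbance spectra over 720-950 nm differing only by an
# additive offset and a multiplicative scatter factor.
wl = np.linspace(720, 950, 116)
base = np.exp(-((wl - 840) / 40) ** 2)
spectra = np.vstack([base + 0.2, 1.3 * base - 0.1])

treated = snv(first_derivative(spectra))
print(np.allclose(treated[0], treated[1]))  # scatter effects removed
```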

Relevância: 90.00%

Resumo:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on the previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step, I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets; the non-aggregated data can be represented in an even more disaggregated form (firm level); and finally, the use of micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
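The two-step procedure can be sketched on a synthetic panel: principal components stand in for the estimated factors, and the factor-augmented forecasting equation is compared with the univariate autoregressive benchmark via the relative mean squared forecast error. For brevity the comparison below is in-sample, whereas the thesis evaluates out-of-sample forecasts:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic panel: 100 periods x 40 series driven by 2 slow common factors.
T, N, k = 100, 40, 2
F = rng.normal(size=(T, k)).cumsum(axis=0) * 0.1
loadings = rng.normal(size=(k, N))
panel = F @ loadings + 0.3 * rng.normal(size=(T, N))

# Step 1: estimate the factors as principal components of the standardized panel.
Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
U, s, _ = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :k] * s[:k]

# Step 2: forecasting equation regressing y_{t+1} on y_t and the factors at t.
y = panel[:, 0]                      # target series (a stand-in, not real GDP)
X = np.column_stack([np.ones(T - 1), y[:-1], factors[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
fit_factor = X @ beta

# Benchmark: univariate AR(1). Relative MSFE below 1 means the factors help.
Xar = np.column_stack([np.ones(T - 1), y[:-1]])
bar, *_ = np.linalg.lstsq(Xar, y[1:], rcond=None)
fit_ar = Xar @ bar
rel_msfe = np.mean((y[1:] - fit_factor) ** 2) / np.mean((y[1:] - fit_ar) ** 2)
print(round(rel_msfe, 3))
```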

Relevância: 90.00%

Resumo:

The time evolution of the mean-squared displacement based on molecular dynamics is reported for a variety of adsorbate–zeolite systems. A transition from ballistic to diffusive behaviour is observed for all the systems. The transition times are found to be system-dependent and show different types of dependence on temperature. Model calculations on a one-dimensional system are carried out, which show that the characteristic length and transition times depend on the distance between the barriers, their heights, and temperature. In light of these findings, it is shown that it is possible to obtain valuable information about the average potential energy surface sampled under specific external conditions.
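The ballistic-to-diffusive transition can be illustrated with a toy one-dimensional walker whose direction persists for a characteristic time, a crude stand-in for motion between barriers:

```python
import numpy as np

rng = np.random.default_rng(3)

def msd(x, lags):
    """Time-averaged mean-squared displacement of a 1-D trajectory."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Walker with unit speed whose direction is resampled every `tau` steps.
tau, n = 50, 200_000
v = np.repeat(rng.choice([-1.0, 1.0], size=n // tau), tau)
x = np.cumsum(v)

# Ballistic regime (t << tau): MSD ~ t^2; diffusive regime (t >> tau): MSD ~ t.
short_lags, long_lags = [1, 2, 4], [500, 1000, 2000]
slope_short = np.polyfit(np.log(short_lags), np.log(msd(x, short_lags)), 1)[0]
slope_long = np.polyfit(np.log(long_lags), np.log(msd(x, long_lags)), 1)[0]
print(round(slope_short, 2), round(slope_long, 2))  # near 2, then near 1
```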

Relevância: 90.00%

Resumo:

The authors consider the channel estimation problem in the context of a linear equaliser designed for a frequency-selective channel, relying on the minimum bit-error-ratio (MBER) optimisation framework. Previous literature has shown that MBER-based signal detection may outperform its minimum mean square error (MMSE) counterpart in terms of bit-error-ratio performance. In this study, they develop a framework for channel estimation by first discretising the parameter space and then posing estimation as a detection problem. Explicitly, the MBER cost function (CF) is derived and its performance studied when transmitting binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals. It is demonstrated that the MBER-based CF-aided scheme is capable of outperforming existing MMSE and least-squares-based solutions.

Relevância: 90.00%

Resumo:

This study considers linear filtering methods for minimising the end-to-end average distortion of a fixed-rate source quantisation system. For the source encoder, both scalar and vector quantisation are considered. The codebook index output by the encoder is sent over a noisy discrete memoryless channel whose statistics may be unknown at the transmitter. At the receiver, the code vector corresponding to the received index is passed through a linear receive filter, whose output is an estimate of the source instantiation. Under this setup, an approximate expression for the average weighted mean-square error (WMSE) between the source instantiation and the reconstructed vector at the receiver is derived using high-resolution quantisation theory, along with a closed-form expression for the linear receive filter that minimises the approximate average WMSE. The generality of the framework developed is further demonstrated by theoretically analysing the performance of other adaptation techniques that can be employed when the channel statistics are also available at the transmitter, such as joint transmit–receive linear filtering and codebook scaling. Monte Carlo simulation results validate the theoretical expressions and illustrate the improvement in average distortion that can be obtained using linear filtering techniques.
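A minimal sketch of the idea of a linear receive filter minimising an (unweighted) MSE is given below. The noisy channel is crudely modelled as additive noise on the quantised vector rather than as index errors, and the Wiener-type filter W = C_xy C_yy^{-1} is estimated from sample moments:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy setup: source x, quantised x_q, "received" vector y = x_q + noise.
n, d = 20_000, 3
x = rng.normal(size=(n, d))
x_q = np.round(x * 2) / 2                # crude scalar quantiser (step 0.5)
y = x_q + 0.4 * rng.normal(size=(n, d))  # channel modelled as additive noise

# Linear filter minimising E||x - W y||^2: W = C_xy C_yy^{-1} (Wiener/LMMSE).
Cxy = (x - x.mean(0)).T @ (y - y.mean(0)) / n
Cyy = np.cov(y.T, bias=True)
W = Cxy @ np.linalg.inv(Cyy)

mse_raw = np.mean((x - y) ** 2)          # use the received vector directly
mse_filt = np.mean((x - y @ W.T) ** 2)   # after linear receive filtering
print(round(mse_raw, 3), round(mse_filt, 3))
```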

Relevância: 90.00%

Resumo:

We address the problem of denoising images corrupted by multiplicative noise, assumed to follow a Gamma distribution. Compared with additive noise, the effect of multiplicative noise on the visual quality of images is quite severe. We consider the mean-square error (MSE) cost function and derive an expression for an unbiased estimate of the MSE. The resulting multiplicative noise unbiased risk estimator is referred to as MURE. The denoising operation is performed in the wavelet domain by considering the image-domain MURE. The parameters of the denoising function (typically, a shrinkage of wavelet coefficients) are optimized by minimizing MURE. We show that MURE is accurate and close to the oracle MSE, which makes MURE-based image denoising reliable and on par with oracle-MSE-based estimates. Analogous to other popular risk estimation approaches developed for additive, Poisson, and chi-squared noise degradations, the proposed approach does not assume any prior on the underlying noise-free image. We report denoising results for various noise levels and show that the quality of denoising obtained is on par with the oracle result and better than that obtained using some state-of-the-art denoisers.

Relevância: 90.00%

Resumo:

Particle tracking techniques are often used to assess the local mechanical properties of cells and biological fluids. The extracted trajectories are used to compute the mean-squared displacement that characterizes the dynamics of the probe particles. Limited spatial resolution and statistical uncertainty are the limiting factors that affect the accuracy of the mean-squared displacement estimation. We precisely quantified the effect of localization errors on the determination of the mean-squared displacement by separating the sources of these errors into two contributions: a "static error", which arises in the position measurements of immobilized particles, and a "dynamic error", which comes from the particle motion during the finite exposure time required for visualization. We calculated the propagation of these errors on the mean-squared displacement and examined the impact of our error analysis on theoretical model fluids used in biorheology. These theoretical predictions were verified for purely viscous fluids using simulations and a multiple-particle tracking technique performed with video microscopy. We show that the static contribution can be confidently corrected in dynamics studies by using static experiments performed at a similar noise-to-signal ratio. This groundwork allowed us to achieve higher resolution in the mean-squared displacement, and thus to increase the accuracy of microrheology studies.
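The static contribution can be illustrated numerically: white localization noise of variance σ² adds a constant 2σ² to the measured MSD of a purely diffusive trajectory, which is exactly what a static (immobilized-particle) experiment estimates and subtracts. A sketch with synthetic data (the dynamic, exposure-time contribution is not simulated here):

```python
import numpy as np

rng = np.random.default_rng(4)

def msd(x, lags):
    """Time-averaged mean-squared displacement of a 1-D trajectory."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Pure 1-D diffusion: MSD_true(t) = 2 D t (units: px^2 per frame).
D, sigma, n = 0.5, 0.3, 100_000
x_true = np.cumsum(rng.normal(0, np.sqrt(2 * D), n))
x_meas = x_true + rng.normal(0, sigma, n)   # static localization noise

lags = np.array([1, 2, 5, 10])
offset = msd(x_meas, lags) - msd(x_true, lags)
# The offset is approximately the constant 2*sigma^2 at every lag.
print(offset.round(3), 2 * sigma ** 2)
```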

Relevância: 90.00%

Resumo:

Following an internship with the company Hatch, we have datasets consisting of time series of wind speeds measured at various sites around the world over several years. Hatch's wind engineers use these datasets together with Environment Canada's databases to assess wind potential, in order to determine whether it is worth installing wind turbines at those locations. In recent years, companies have been offering mesoscale simulations of wind speeds, based on various environmental indices of the site to be assessed. The wind engineers want to know whether it is worth paying for these simulated data, i.e. whether they can be useful in estimating wind energy production and whether they could be used for long-term wind speed prediction. Moreover, since we have measured wind speed data, we take the opportunity to test, using various statistical methods, different steps of the energy production estimation. We examine methods for extrapolating wind speed to the height of a wind turbine and evaluate these methods using the mean squared error. We also study the modelling of wind speed by the Weibull distribution and the variation of the speed distribution over time. Finally, using cross-validation and the bootstrap, we examine whether mesoscale data are preferable to reference station data, and we also test a model in which both types of data are used to predict wind speed. We test, from a statistical point of view, the overall methodology currently used by wind engineers for energy production estimation, and then attempt to propose changes to this methodology that could improve the estimate of annual energy production.
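Two of the steps discussed, fitting a Weibull distribution to measured wind speeds and extrapolating the speed to turbine height, can be sketched as follows. The sample, the moment-based shape approximation and the 1/7 shear exponent are all illustrative assumptions, not the thesis's actual data or methodology:

```python
from math import gamma

import numpy as np

rng = np.random.default_rng(5)

# Synthetic wind-speed sample (m/s); a stand-in for the measured site data.
k_true, c_true = 2.0, 8.0
v = c_true * rng.weibull(k_true, 50_000)

# Method-of-moments Weibull fit: a common closed-form approximation gives the
# shape k from the coefficient of variation, then the scale c from the mean.
mean, std = v.mean(), v.std()
k_hat = (std / mean) ** -1.086
c_hat = mean / gamma(1 + 1 / k_hat)

# Power-law shear profile: extrapolate the mean speed from a 10 m mast to an
# 80 m hub, with an assumed 1/7 shear exponent (site-dependent in practice).
alpha = 1 / 7
v_hub = mean * (80 / 10) ** alpha
print(round(k_hat, 2), round(c_hat, 2), round(v_hub, 2))
```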