46 results for Mean Absolute Scaled Error (MASE)
Abstract:
This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. RVM is a sparse approximate Bayesian kernel method that can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and its kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models based on the root-mean-square error (RMSE) and mean absolute error (MAE) performance criteria, and it also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for predicting the ultimate capacity of laterally loaded piles in clay.
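For reference, a minimal sketch of the two error criteria named above (NumPy; the arrays are illustrative, not data from the study):

    import numpy as np

    def rmse(y_true, y_pred):
        """Root-mean-square error between observed and predicted values."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return np.sqrt(np.mean((y_true - y_pred) ** 2))

    def mae(y_true, y_pred):
        """Mean absolute error between observed and predicted values."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return np.mean(np.abs(y_true - y_pred))

    # Illustrative pile capacities in kN (not data from the study).
    observed  = [120.0, 95.0, 150.0, 110.0]
    predicted = [118.0, 99.0, 145.0, 112.0]
    print(rmse(observed, predicted), mae(observed, predicted))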
Abstract:
Eleven GCMs (BCCR-BCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCM1, UKMO-HADCM3 and UKMO-HADGEM1) were evaluated for India (covering 73 grid points of 2.5 degrees x 2.5 degrees) for the climate variable 'precipitation rate' using 5 performance indicators: the correlation coefficient, normalised root mean square error, absolute normalised mean bias error, average absolute relative error and skill score. We used a nested bias-correction methodology to remove the systematic biases in the GCM simulations. The entropy method was employed to obtain the weights of these 5 indicators. Ranks of the 11 GCMs were obtained through a multicriterion decision-making outranking method, PROMETHEE-2 (Preference Ranking Organisation Method of Enrichment Evaluation). An equal-weight scenario (assigning a weight of 0.2 to each indicator) was also used to rank the GCMs. An effort was also made to rank the GCMs for 4 river basins (Godavari, Krishna, Mahanadi and Cauvery) in peninsular India. The upper Malaprabha catchment in Karnataka, India, was chosen to demonstrate the entropy and PROMETHEE-2 methods. The Spearman rank correlation coefficient was employed to assess the association between the ranking patterns. Our results suggest that the ensemble of GFDL2.0, MIROC3, BCCR-BCM2.0, UKMO-HADCM3, MPI-ECHAM4 and UKMO-HADGEM1 is suitable for India. The methodology proposed can be extended to rank GCMs for any selected region.
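A minimal sketch of the entropy weighting step as it is conventionally computed (the score matrix is illustrative, not the study's data):

    import numpy as np

    def entropy_weights(X):
        """Entropy method: weight each indicator by its information content.
        X is an (alternatives x indicators) matrix of positive scores."""
        m = X.shape[0]
        P = X / X.sum(axis=0)                          # column-normalised proportions
        E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each indicator
        d = 1.0 - E                                    # degree of diversification
        return d / d.sum()                             # normalised weights

    # Illustrative 4-GCM x 5-indicator score matrix (not the study's data).
    scores = np.array([[0.8, 0.6, 0.7, 0.9, 0.5],
                       [0.7, 0.8, 0.6, 0.8, 0.6],
                       [0.9, 0.5, 0.8, 0.7, 0.7],
                       [0.6, 0.7, 0.9, 0.6, 0.8]])
    print(entropy_weights(scores))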
Abstract:
Computing the maximum of sensor readings arises in several environmental, health, and industrial monitoring applications of wireless sensor networks (WSNs). We characterize several novel design trade-offs that arise when green energy harvesting (EH) WSNs, which promise perpetual lifetimes, are deployed for this purpose. The nodes harvest renewable energy from the environment for communicating their readings to a fusion node, which then periodically estimates the maximum. For a randomized transmission schedule in which a pre-specified number of randomly selected nodes transmit in a sensor data collection round, we analyze the mean absolute error (MAE), which is defined as the mean of the absolute difference between the maximum and that estimated by the fusion node in each round. We optimize the transmit power and the number of scheduled nodes to minimize the MAE, both when the nodes have channel state information (CSI) and when they do not. Our results highlight how the optimal system operation depends on the EH rate, availability and cost of acquiring CSI, quantization, and size of the scheduled subset. Our analysis applies to a general class of sensor reading and EH random processes.
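In symbols (notation ours, consistent with the definition above):

$$\mathrm{MAE} \;=\; \mathbb{E}\Bigl[\bigl|\max_{1 \le i \le N} x_i \;-\; \hat{x}_{\max}\bigr|\Bigr],$$

where $x_i$ is the reading of node $i$ and $\hat{x}_{\max}$ is the fusion node's estimate formed from the scheduled transmissions in that round.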
Abstract:
The method of least squares could be used to refine an imperfectly related trial structure by adopting one of the following two procedures: (i) using all the observed data at one time or (ii) successive refinement in stages with data of increasing resolution. While the former procedure is successful in the case of trial structures which are sufficiently accurate, only the latter has been found to be successful when the mean positional error (i.e. $\langle|\Delta r|\rangle$) for the atoms in the trial structure is large. This paper makes a theoretical study of the variation of the R index, mean phase-angle error, etc. as a function of $\langle|\Delta r|\rangle$ for data corresponding to different resolutions in order to find the best refinement procedure [i.e. (i) or (ii)] which could be successfully employed for refining trial structures in which $\langle|\Delta r|\rangle$ has large, medium and low values. It is found that a trial structure for which the mean positional error is large could be refined only by the method of successive refinement with data of increasing resolution.
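For reference, the R index mentioned above is the standard crystallographic residual (definition supplied by us, not restated in the abstract):

$$R \;=\; \frac{\sum_{hkl}\bigl|\,|F_{\mathrm{o}}| - |F_{\mathrm{c}}|\,\bigr|}{\sum_{hkl} |F_{\mathrm{o}}|},$$

where $F_{\mathrm{o}}$ and $F_{\mathrm{c}}$ are the observed and calculated structure factors.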
Abstract:
Effects of basis set and electron correlation on the equilibrium geometry, force constants and vibrational spectra of BH3NH3 have been studied. A series of basis sets ranging from double zeta to triple zeta, including polarization and diffuse functions, has been utilized. All the SCF-based calculations overestimate the dative B-N bond distance, and considerable improvement occurs when the treatment for electron correlation is introduced. A detailed vibrational analysis for BH3NH3 has been carried out. The mean absolute percentage deviation of the ab initio predicted vibrational frequencies of the ¹¹B isotopomer of BH3NH3 from experiment is about 10% for the SCF-based calculations; the MP2 method shows better agreement, the overall deviation being 5-6%. The ground-state effective force constants of BH3NH3 were obtained using the RECOVES procedure. The RECOVES sets of force constants are found to be highly satisfactory for the prediction of the vibrational frequencies of different isotopomers of BH3NH3. The mean absolute percentage deviation of the calculated frequencies of different isotopomers from experiment is much less than 1%. The RECOVES-MP2/augDZP set of force constants was found to be the best set among the different sets for this molecule. Theoretical infrared intensities are in fair agreement with the observed spectral features.
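For reference, a minimal sketch of the mean absolute percentage deviation used as the error measure above (the frequencies are illustrative, not the study's):

    import numpy as np

    def mapd(calculated, observed):
        """Mean absolute percentage deviation of calculated from observed values."""
        calculated, observed = np.asarray(calculated), np.asarray(observed)
        return 100.0 * np.mean(np.abs(calculated - observed) / np.abs(observed))

    # Illustrative frequencies in cm^-1 (not values from the study).
    obs  = [3386.0, 2340.0, 1343.0, 1052.0]
    calc = [3441.0, 2398.0, 1370.0, 1075.0]
    print(f"MAPD = {mapd(calc, obs):.1f}%")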
Abstract:
We consider the problem of finding the best features for value function approximation in reinforcement learning and develop an online algorithm to optimize the mean square Bellman error objective. For any given feature value, our algorithm performs gradient search in the parameter space via a residual gradient scheme and, on a slower timescale, also performs gradient search in the Grassmann manifold of features. We present a proof of convergence of our algorithm. We show empirical results using our algorithm, as well as a similar algorithm that uses temporal difference learning in place of the residual gradient scheme for the faster-timescale updates.
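A minimal sketch of one residual-gradient step on the mean square Bellman error for a linear value function (a Baird-style update; the feature vectors, step size, and discount are illustrative, not the paper's):

    import numpy as np

    def residual_gradient_step(theta, phi_s, phi_s_next, reward,
                               gamma=0.99, alpha=0.01):
        """One residual-gradient step on the mean square Bellman error for a
        linear value function V(s) = theta . phi(s)."""
        delta = reward + gamma * theta @ phi_s_next - theta @ phi_s  # Bellman error
        grad = delta * (gamma * phi_s_next - phi_s)   # gradient of delta^2 / 2
        return theta - alpha * grad

    # Illustrative transition with three features.
    theta = np.zeros(3)
    theta = residual_gradient_step(theta, np.array([1.0, 0.0, 0.5]),
                                   np.array([0.0, 1.0, 0.5]), reward=1.0)
    print(theta)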
Abstract:
The paper deals with a linearization technique in non-linear oscillations for systems which are governed by second-order non-linear ordinary differential equations. The method is based on approximating the non-linear function by a linear function such that the error is least in the weighted mean-square sense. The method has been applied to cubic, sine, hyperbolic sine, and odd polynomial types of non-linearities, and the results obtained are more accurate than those given by existing linearization methods.
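The criterion can be made concrete as follows (notation ours): the equivalent stiffness $k$ minimizing the weighted mean-square error satisfies

$$k \;=\; \arg\min_{c}\int w(x)\,\bigl[f(x)-cx\bigr]^{2}\,dx \;=\; \frac{\int w(x)\,x\,f(x)\,dx}{\int w(x)\,x^{2}\,dx},$$

obtained by setting the derivative with respect to $c$ to zero. For example, for the cubic non-linearity $f(x)=x^{3}$ with $x=A\cos\theta$ and uniform weighting over one cycle, this gives $k=3A^{2}/4$, the classical equivalent-linearization result.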
Abstract:
Infrared spectra of atmospherically important dimethylquinolines (DMQs), namely 2,4-DMQ, 2,6-DMQ, 2,7-DMQ, and 2,8-DMQ, in the gas phase at 80 degrees C were recorded using a long variable path-length cell. DFT calculations were carried out at the B3LYP/6-31G* level of theory to assign the bands in the experimentally observed spectra. The spectral assignments, particularly for the C-H stretching modes, could not be made unambiguously using calculated anharmonic or scaled harmonic frequencies. To resolve this problem, a scaled force field method of assignment was used. The assignment of fundamental modes was confirmed by potential energy distributions (PEDs) of the normal modes derived from the scaled force fields using a modified version of the UMAT program in the QCPE package. We demonstrate that for large molecules such as the DMQs, scaling of the force field is more effective in arriving at the correct assignment of the fundamentals for a quantitative vibrational analysis. An error analysis of the mean deviation of the calculated harmonic, anharmonic, and force-field-fitted frequencies from the observed frequencies provides strong evidence for the correctness of the assignment.
Abstract:
‘Best’ solutions for the shock-structure problem are obtained by solving the Boltzmann equation for a rigid sphere gas by applying minimum error criteria on the Mott-Smith ansatz. The use of two such criteria minimizing respectively the local and total errors, as well as independent computations of the remaining error, establish the high accuracy of the solutions, although it is shown that the Mott-Smith distribution is not an exact solution of the Boltzmann equation even at infinite Mach number. The minimum local error method is found to be particularly simple and efficient. Adopting the present solutions as the standard of comparison, it is found that the widely used $v_x^2$-moment solutions can be as much as a third in error, but that results based on Rosen's method provide good approximations. Finally, it is shown that if the Maxwell mean free path on the hot side of the shock is chosen as the scaling length, the value of the density-slope shock thickness is relatively insensitive to the intermolecular potential. A comparison is made on this basis of present results with experiment, and very satisfactory quantitative agreement is obtained.
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. Absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their input real quantities, are exact, all the computations that we perform on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error is nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to any or all of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amount of computation and of storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
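In the terminology above (our symbols), for an exact value $x$ and a computed value $\hat{x}$:

$$e_{\mathrm{abs}} \;=\; |x-\hat{x}|, \qquad e_{\mathrm{rel}} \;=\; \frac{|x-\hat{x}|}{|x|} \quad (x \neq 0),$$

and since the exact error is never known, any reported "error" is in practice an upper bound on these quantities.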
Abstract:
Infrared spectra of atmospherically and astronomically important dimethylphenanthrenes (DMPs), namely 1,9-DMP, 2,4-DMP, and 3,9-DMP, were recorded in the gas phase from 400 to 4000 cm(-1) with a resolution of 0.5 cm(-1) at 110 degrees C using a 7.2 m gas cell. DFT calculations at the B3LYP/6-311G** level were carried out to get the harmonic and anharmonic frequencies and their corresponding intensities for the assignment of the observed bands. However, spectral assignments could not be made unambiguously using anharmonic or selectively scaled harmonic frequencies. Therefore, the scaled quantum mechanical (SQM) force field analysis method was adopted to achieve more accurate assignments. In this method, force fields instead of frequencies were scaled. The Cartesian force field matrix obtained from the Gaussian calculations was converted to a nonredundant local coordinate force field matrix, and then the force fields were scaled to match the experimental frequencies in a consistent manner using a modified version of the UMAT program of the QCPE package. Potential energy distributions (PEDs) of the normal modes in terms of nonredundant local coordinates obtained from these calculations helped us derive the nature of the vibration at each frequency. The intensity of the observed bands in the experimental spectra was calculated using estimated vapor pressures of the DMPs. An error analysis of the mean deviation between experimental and calculated intensities reveals that the observed methyl C-H stretching intensity deviates more than the aromatic C-H and non-C-H stretching bands.
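For reference, in the usual SQM convention of Pulay and co-workers (our notation; the abstract does not restate the formula), scale factors $c_i$ assigned to the local coordinates act on the force constants as

$$F'_{ij} \;=\; \sqrt{c_i\,c_j}\;F_{ij},$$

so scaling a diagonal force constant by $c$ shifts the corresponding harmonic frequency by roughly a factor $\sqrt{c}$, since $\nu \propto \sqrt{F/\mu}$.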
Abstract:
Bubble size in a gas-liquid ejector has been measured using an imaging technique and analysed to estimate the Sauter mean diameter. The individual bubble diameter is estimated from the two-dimensional elliptical contour of the actual three-dimensional ellipsoid in the system, by equating the volume of the ellipsoid to that of an equivalent sphere. The bubbles in this air-water system are observed to be oblate and prolate ellipsoids. The bubble diameter is calculated on this basis and the Sauter mean diameter is estimated, and the error between these shape assumptions is reported. The bubble size at different locations from the nozzle of the ejector is presented along with the percentage error, which is around 18%.
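A minimal sketch of the two steps described above, under an assumed spheroid model (the out-of-plane axis choice and all numbers are illustrative, not the study's):

    import numpy as np

    def equivalent_diameter(a, b):
        """Volume-equivalent sphere diameter of a spheroid whose imaged 2-D
        contour has semi-axes a and b; this sketch assumes the out-of-plane
        semi-axis equals a (an assumption, not necessarily the study's)."""
        volume = 4.0 / 3.0 * np.pi * a * a * b        # spheroid volume
        return (6.0 * volume / np.pi) ** (1.0 / 3.0)  # from V = pi d^3 / 6

    def sauter_mean_diameter(diameters):
        """d32 = sum(d^3) / sum(d^2) over the measured bubble population."""
        d = np.asarray(diameters)
        return (d ** 3).sum() / (d ** 2).sum()

    # Illustrative contour semi-axes in mm (not measurements from the study).
    contours = [(1.2, 0.9), (0.8, 0.7), (1.5, 1.1)]
    d_eq = [equivalent_diameter(a, b) for a, b in contours]
    print(f"d32 = {sauter_mean_diameter(d_eq):.2f} mm")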
Abstract:
Artificial Neural Networks (ANNs) have been found to be a robust tool for modelling many non-linear hydrological processes. The present study aims at evaluating the performance of ANNs in simulating and predicting ground water levels in the uplands of a tropical coastal riparian wetland. The study compares two network architectures, the Feed Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN), each trained under five algorithms, namely the Levenberg-Marquardt, Resilient Backpropagation, BFGS Quasi-Newton, Scaled Conjugate Gradient, and Fletcher-Reeves Conjugate Gradient algorithms, by simulating the water levels in a well in the study area. The analysis covers two cases: one with four inputs to the networks and another with eight inputs. The two networks under the five algorithms are compared in both cases to determine the best-performing combination that could simulate and predict the process satisfactorily. An ad hoc (trial-and-error) method is followed in optimizing the network structure in all cases. On the whole, the results indicate that the ANNs have simulated and predicted the water levels in the well with fair accuracy. This is evident from the low values of the Normalized Root Mean Square Error and Relative Root Mean Square Error and the high values of the Nash-Sutcliffe Efficiency Index and Correlation Coefficient (which are taken as the performance measures to calibrate the networks) calculated after the analysis. On comparison of the predicted ground water levels with those at the observation well, the FFNN trained with the Fletcher-Reeves Conjugate Gradient algorithm with four inputs outperformed all other combinations.
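For reference, a minimal sketch of two of the performance measures named above; note that NRMSE normalisations vary (range, mean, or standard deviation), so the range-based form here is an assumption:

    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe Efficiency: 1 is a perfect fit; values <= 0 mean the
        model is no better than predicting the observed mean."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) \
                   / np.sum((observed - observed.mean()) ** 2)

    def nrmse(observed, simulated):
        """RMSE normalised by the observed range (one common convention)."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        rmse = np.sqrt(np.mean((observed - simulated) ** 2))
        return rmse / (observed.max() - observed.min())

    # Illustrative water levels in m (not data from the study).
    obs = [3.2, 3.5, 3.1, 2.8, 3.0]
    sim = [3.3, 3.4, 3.0, 2.9, 3.1]
    print(nash_sutcliffe(obs, sim), nrmse(obs, sim))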
Abstract:
We consider channel estimation in the context of designing a linear equalizer with a finite number of coefficients for a classical linear intersymbol-interference channel with additive Gaussian noise. Previous literature has shown that Minimum Bit Error Rate (MBER) based detection outperforms Minimum Mean Squared Error (MMSE) based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel based on the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared to MMSE-based channel estimation when used with MMSE or MBER detection.
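For context on the MMSE baseline that the proposed scheme is compared against, a minimal sketch of a finite-length MMSE (Wiener) linear equalizer with a Monte Carlo BER estimate for BPSK over an ISI channel; the channel taps, filter length, delay, and noise level are illustrative, and this sketches the MMSE baseline rather than the authors' MBER estimator:

    import numpy as np

    rng = np.random.default_rng(0)

    def mmse_equalizer(h, L, delay, sigma2):
        """Length-L Wiener (MMSE) linear equalizer for FIR channel h and
        unit-power BPSK: w = (H H^T + sigma2 I)^-1 H e_delay."""
        M = len(h)
        H = np.zeros((L, L + M - 1))
        for i in range(L):
            H[i, i:i + M] = h          # r[n-i] = sum_k h[k] s[n-i-k] + noise
        e = np.zeros(L + M - 1)
        e[delay] = 1.0                 # pick off the delayed symbol s[n-delay]
        return np.linalg.solve(H @ H.T + sigma2 * np.eye(L), H @ e)

    def ber_monte_carlo(h, w, delay, sigma2, n_sym=200_000):
        """Monte Carlo BER of equalizer w for BPSK over the ISI channel h."""
        L = len(w)
        s = rng.choice([-1.0, 1.0], size=n_sym)
        r = np.convolve(s, h)[:n_sym] + np.sqrt(sigma2) * rng.standard_normal(n_sym)
        # Windows [r[n], ..., r[n-L+1]] for n = L-1 ... n_sym-1.
        W = np.lib.stride_tricks.sliding_window_view(r, L)[:, ::-1]
        decisions = np.sign(W @ w)
        targets = s[np.arange(L - 1, n_sym) - delay]
        return np.mean(decisions != targets)

    # Illustrative ISI channel and operating point (not from the paper).
    h = np.array([0.8, 0.5, 0.3])
    L, delay, sigma2 = 11, 6, 0.05
    w = mmse_equalizer(h, L, delay, sigma2)
    print(f"BER of the MMSE baseline ~ {ber_monte_carlo(h, w, delay, sigma2):.4f}")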
Abstract:
The authors consider the channel estimation problem in the context of a linear equaliser designed for a frequency-selective channel, relying on the minimum bit-error-ratio (MBER) optimisation framework. Previous literature has shown that MBER-based signal detection may outperform its minimum-mean-square-error (MMSE) counterpart in the bit-error-ratio performance sense. In this study, they develop a framework for channel estimation by first discretising the parameter space and then posing it as a detection problem. Explicitly, the MBER cost function (CF) is derived and its performance studied when transmitting binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals. It is demonstrated that the MBER-based CF-aided scheme is capable of outperforming the existing MMSE and least-squares-based solutions.