981 results for "absolute error"


Relevance:

70.00%

Publisher:

Abstract:

The questions one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining those outputs. The absolutely error-free quantities, as well as the completely errorless computations performed in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real-valued inputs, are exact, the computations we perform on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error - as a matter of hypothesis and not of assumption - is not less than 0.005 per cent. Here, by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error means nothing but error bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems arising from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever this is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
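
A minimal sketch (not from the talk) of the distinction the abstract emphasises: a relative error bound, such as the hypothesised minimum instrument error of 0.005 per cent, implies a different absolute bound for each measured value. The quantities below are illustrative only.

```python
# Minimal sketch: absolute error bound implied by a relative error bound of
# 0.005 per cent (5e-5), the minimum instrument error hypothesised in the abstract.

def error_bounds(measured, relative_bound=5e-5):
    """Return (absolute_bound, interval) implied by a relative error bound."""
    absolute_bound = abs(measured) * relative_bound
    return absolute_bound, (measured - absolute_bound, measured + absolute_bound)

for value in (9.81, 3.0e8):          # e.g. g in m/s^2, speed of light in m/s (illustrative)
    abs_bound, interval = error_bounds(value)
    print(f"measured={value:g}  absolute bound={abs_bound:g}  interval={interval}")
```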

Relevance:

70.00%

Publisher:

Abstract:

As low-carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and we discuss the effect of the permutation restriction.
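
A hedged sketch of the restricted-permutation idea, not the authors' exact adjusted error measure: forecast values may be permuted only within a window of w time steps, and the permutation minimising the point-wise absolute error is found as an assignment problem. The toy series below shows how a peak forecast one step early incurs the double penalty under plain MAE but not under the adjusted measure.

```python
# Hedged sketch: permutation-restricted error measure in the spirit of the abstract.
import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_mae(forecast, actual, w=2):
    n = len(forecast)
    cost = np.abs(forecast[:, None] - actual[None, :])           # |f_i - a_j|
    displacement = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    cost[displacement > w] = 1e9                                  # forbid moves beyond w time steps
    rows, cols = linear_sum_assignment(cost)                      # optimal restricted permutation
    return cost[rows, cols].mean()

forecast = np.array([0., 0., 5., 0., 0., 0.])                     # peak forecast one step early
actual   = np.array([0., 0., 0., 5., 0., 0.])
print("plain MAE:   ", np.abs(forecast - actual).mean())          # double penalty: peak missed and misplaced
print("adjusted MAE:", adjusted_mae(forecast, actual, w=2))       # displaced peak matched, penalty removed
```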

Relevance:

60.00%

Publisher:

Abstract:

The present study compared IQs and Verbal-Performance IQ discrepancies estimated from two seven-subtest short forms of the Wechsler Adult Intelligence Scale-Revised (WAIS-R) in a sample of 100 subjects referred for neuropsychological assessment. The short forms of Warrington, James, and Maciejewski (1986) and Ward (1990) yielded similar correlation coefficients and absolute error rates with respect to WAIS-R IQs, although the Warrington short form requires more time to administer and score. Both short forms were able to detect significant Verbal-Performance IQ discrepancies 70% of the time. However, they incorrectly yielded significant discrepancies for approximately 25% of the sample who did not have significant differences on the full WAIS-R. The results do not support reporting and interpreting significant Verbal-Performance IQ discrepancies estimated from these short forms.

Relevance:

60.00%

Publisher:

Abstract:

This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. The RVM is a sparse approximate Bayesian kernel method; it can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and the kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models considered on the root-mean-square error (RMSE) and mean absolute error (MAE) performance criteria, and it also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for predicting the ultimate capacity of laterally loaded piles in clay.
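
scikit-learn ships no RVM, but ARDRegression fitted on RBF kernel features is a rough stand-in for the sparse Bayesian kernel regression described here, and it reports the same RMSE/MAE criteria plus a predictive standard deviation. The data below are synthetic, not the paper's pile-capacity records; this is a sketch of the evaluation, not the authors' model.

```python
# Hedged sketch: sparse Bayesian kernel regression (ARD on RBF features) as an RVM stand-in.
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(120)
X_train, X_test, y_train, y_test = X[:80], X[80:], y[:80], y[80:]

K_train = rbf_kernel(X_train, X_train, gamma=1.0)   # kernel basis: one regressor per training point
K_test = rbf_kernel(X_test, X_train, gamma=1.0)

model = ARDRegression().fit(K_train, y_train)
y_pred, y_std = model.predict(K_test, return_std=True)   # point prediction and predictive std

print("MAE :", mean_absolute_error(y_test, y_pred))
print("RMSE:", mean_squared_error(y_test, y_pred) ** 0.5)
print("mean predictive std:", y_std.mean())
```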

Relevance:

60.00%

Publisher:

Abstract:

Age estimation from facial images is receiving increasing attention for applications such as age-based access control and age-adaptive targeted marketing. Since even humans can be led into error by the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) in order to obtain a highly discriminative feature representation able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) to classify a subject into an age group, followed by Support Vector Regression (SVR) to estimate a specific age. Errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated for in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, and mean absolute errors (MAE) of 4.50 and 5.86 years were achieved, respectively. The robustness of the proposed approach was also evaluated on a merge of both datasets, where a MAE of 5.20 years was achieved. Furthermore, we compared the age estimation made by humans with that of the proposed approach, and the comparison showed that the machine outperforms humans. The proposed approach is competitive with the current state of the art and provides additional robustness to blur, lighting and expression variance, brought about by the local phase features.
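
A hedged sketch of the hierarchical idea only (multi-class SVM for the age group, then a per-group SVR trained on an overlapping age range so boundary misclassifications are tolerated). The one-dimensional synthetic "face feature", group boundaries and 5-year overlap are assumptions standing in for the AAM/LBP/GW/LPQ descriptors and the paper's actual ranges.

```python
# Hedged sketch: group classification followed by within-group regression with overlapping ranges.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(1)
ages = rng.integers(0, 70, size=400)
X = ages[:, None] + rng.normal(0, 3, size=(400, 1))     # stand-in 1-D "face feature"

groups = np.digitize(ages, bins=[20, 40])               # three age groups: <20, 20-39, >=40
clf = SVC(kernel="rbf", C=10.0).fit(X, groups)

# One SVR per group, trained on an overlapping range (+/- 5 years around the group
# boundaries) so that classification errors near a boundary are compensated for.
bounds = [(-np.inf, 25), (15, 45), (35, np.inf)]
regressors = []
for lo, hi in bounds:
    mask = (ages >= lo) & (ages < hi)
    regressors.append(SVR(kernel="rbf", C=10.0).fit(X[mask], ages[mask]))

def predict_age(x):
    g = int(clf.predict(x.reshape(1, -1))[0])                     # coarse: which age group
    return float(regressors[g].predict(x.reshape(1, -1))[0])      # fine: specific age

test_ages = rng.integers(0, 70, size=100)
X_test = test_ages[:, None] + rng.normal(0, 3, size=(100, 1))
preds = np.array([predict_age(x) for x in X_test])
print("MAE (years):", np.abs(preds - test_ages).mean())
```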

Relevance:

60.00%

Publisher:

Abstract:

Assessing the impacts of climate variability on agricultural productivity at regional, national or global scale is essential for defining adaptation and mitigation strategies. In this study we explore the potential changes in spring wheat yields at Swift Current and Melfort, Canada, for different sowing windows under projected climate scenarios (the representative concentration pathways RCP4.5 and RCP8.5). First, the APSIM model was calibrated and evaluated at the study sites using data from long-term experimental field plots. Then, the impacts of changes in sowing dates on final yield were assessed over the 2030-2099 period against a 1990-2009 baseline period of observed yield data, assuming that other crop management practices remained unchanged. Results showed that the performance of APSIM was quite satisfactory, with an index of agreement of 0.80, R2 of 0.54, and mean absolute error (MAE) and root mean square error (RMSE) of 529 kg/ha and 1023 kg/ha, respectively (MAE = 476 kg/ha and RMSE = 684 kg/ha in the calibration phase). Under the projected climate conditions, a general trend of yield loss was observed regardless of the sowing window, with a range from -24 to -94 depending on the site and the RCP, and noticeable losses during the 2060s and beyond (increasing CO2 effects being excluded). The smallest yield losses were obtained with the earliest possible sowing date (i.e., mid-April) under the projected future climate, suggesting that this option might be explored for mitigating possible adverse impacts of climate variability. Our findings could therefore serve as a basis for using APSIM as a decision support tool for adaptation/mitigation options under potential climate variability within Western Canada.
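
For reference, the evaluation statistics quoted above (Willmott's index of agreement, MAE and RMSE) can be computed as in the short sketch below; the observed and simulated yields shown are illustrative placeholders, not the paper's data.

```python
# Sketch of the model-evaluation statistics named in the abstract.
import numpy as np

def index_of_agreement(obs, sim):
    """Willmott's d = 1 - sum of squared errors / potential error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - np.sum((sim - obs) ** 2) / denom

observed  = np.array([2500., 3100., 1800., 2700., 3300.])   # kg/ha, illustrative values
simulated = np.array([2300., 3350., 2100., 2600., 3000.])

err = simulated - observed
print("d   :", round(index_of_agreement(observed, simulated), 2))
print("MAE :", round(np.abs(err).mean(), 1), "kg/ha")
print("RMSE:", round(np.sqrt((err ** 2).mean()), 1), "kg/ha")
```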

Relevance:

60.00%

Publisher:

Abstract:

In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of these techniques is that there can be multiple solutions for the reconstruction. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial-time (O(n^2 log n) worst-case) algorithm for finding all the n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling the errors in fragment lengths which arise in the laboratory. We give a robust O(n^4) worst-case algorithm that can provably tolerate an absolute error of O(Delta/n) (where Delta is the minimum inter-site distance), while giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest - the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398) - which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm of Chen et al. (C) 2002 Elsevier Science (USA). All rights reserved.
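
The non-uniqueness claim for the unlabeled partial digest can be checked directly: the two site sets below are a classic homometric pair, i.e. different restriction maps that produce identical multisets of pairwise fragment lengths. This sketch illustrates that claim only; it does not implement the paper's labeled-partial-digest algorithm.

```python
# Sketch: two different maps with the same (unlabeled) partial-digest data.
from itertools import combinations
from collections import Counter

def partial_digest(sites):
    """Idealised partial-digest data: the multiset of all pairwise distances."""
    return Counter(abs(a - b) for a, b in combinations(sites, 2))

map_a = [0, 1, 4, 10, 12, 17]
map_b = [0, 1, 8, 11, 13, 17]
print(partial_digest(map_a) == partial_digest(map_b))   # True: identical digests
print(map_a == map_b)                                   # False: yet different maps
```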

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a fractional-order proportional-integral controller is developed for a miniature air vehicle for rectilinear path following and trajectory tracking. The controller is implemented by constructing a vector field surrounding the path to be followed, which is then used to generate course commands for the miniature air vehicle. The fractional-order proportional-integral controller is simulated using the fundamentals of fractional calculus, and the results for this controller are compared with those obtained for a proportional controller and a proportional-integral controller. In order to analyze the performance of the controllers, four performance metrics have been selected, namely (maximum) overshoot, control effort, settling time and the integral of time-weighted absolute error (ITAE). A comparison of the nominal as well as the robust performances of these controllers indicates that the fractional-order proportional-integral controller exhibits the best performance in terms of ITAE while showing comparable performance in all other aspects.
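
The ITAE metric used for the comparison is simply the integral of t·|e(t)| over the simulation horizon. A minimal sketch with an illustrative decaying cross-track error signal (not the paper's MAV simulation):

```python
# Sketch: computing ITAE for a sampled error signal via a Riemann sum.
import numpy as np

def itae(t, error):
    """ITAE = integral over time of t * |e(t)|."""
    dt = t[1] - t[0]
    return float(np.sum(t * np.abs(error)) * dt)

t = np.linspace(0.0, 10.0, 1001)
error = np.exp(-0.8 * t) * np.cos(2.0 * t)   # illustrative decaying cross-track error
print("ITAE:", itae(t, error))
```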

Relevance:

60.00%

Publisher:

Abstract:

This work considers the identification of the available whitespace, i.e., the regions that do not contain any existing transmitter, within a given geographical area. To this end, n sensors are deployed at random locations within the area. These sensors detect the presence of a transmitter within their radio range r_s using a binary sensing model, and their individual decisions are combined to estimate the available whitespace. The limiting behavior of the recovered whitespace as a function of n and r_s is analyzed. It is shown that both the fraction of the available whitespace that the nodes fail to recover and their radio range optimally scale as log(n)/n as n grows large. The problem of minimizing the sum absolute error in transmitter localization is also analyzed, and the corresponding optimal scaling of the radio range and the necessary minimum transmitter separation is determined.
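
A hedged Monte Carlo sketch of the coverage question behind the abstract: with no transmitter present the whole unit square is whitespace, a sensor can only vouch for the area within its radio range r_s, and the fraction of whitespace the nodes fail to recover is the area left uncovered by the union of their sensing disks. The (n, r_s) pairs below are illustrative and do not reproduce the paper's scaling analysis.

```python
# Hedged sketch: estimating the unrecovered whitespace fraction by Monte Carlo.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

def missed_fraction(n, r_s, n_probe=50000):
    sensors = rng.random((n, 2))                 # n sensors, uniform on the unit square
    probes = rng.random((n_probe, 2))            # random probe points of the whitespace
    dist, _ = cKDTree(sensors).query(probes)     # distance from each probe to its nearest sensor
    return float(np.mean(dist > r_s))            # probes covered by no sensor's range

for n, r_s in [(100, 0.10), (1000, 0.05), (10000, 0.02)]:
    print(f"n={n:6d}  r_s={r_s:.2f}  missed whitespace fraction = {missed_fraction(n, r_s):.4f}")
```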

Relevance:

60.00%

Publisher:

Abstract:

Computing the maximum of sensor readings arises in several environmental, health, and industrial monitoring applications of wireless sensor networks (WSNs). We characterize several novel design trade-offs that arise when green energy harvesting (EH) WSNs, which promise perpetual lifetimes, are deployed for this purpose. The nodes harvest renewable energy from the environment for communicating their readings to a fusion node, which then periodically estimates the maximum. For a randomized transmission schedule, in which a pre-specified number of randomly selected nodes transmit in each sensor data collection round, we analyze the mean absolute error (MAE), which is defined as the mean of the absolute difference between the maximum and the value estimated by the fusion node in each round. We optimize the transmit power and the number of scheduled nodes to minimize the MAE, both when the nodes have channel state information (CSI) and when they do not. Our results highlight how the optimal system operation depends on the EH rate, the availability and cost of acquiring CSI, quantization, and the size of the scheduled subset. Our analysis applies to a general class of sensor reading and EH random processes.
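
A hedged sketch of the core estimator being analysed: in each round only a randomly scheduled subset of the n readings reaches the fusion node, which reports the maximum of what it received, and the MAE is the mean absolute gap to the true maximum. Energy harvesting, channel effects and quantization are omitted; the reading distribution and sizes are assumptions.

```python
# Hedged sketch: MAE of the max estimated from a random subset of readings.
import numpy as np

rng = np.random.default_rng(3)

def mae_of_max(n=100, scheduled=10, rounds=20000):
    readings = rng.standard_normal((rounds, n))                           # one row of readings per round
    true_max = readings.max(axis=1)
    idx = np.argsort(rng.random((rounds, n)), axis=1)[:, :scheduled]      # random scheduled subset per round
    est_max = np.take_along_axis(readings, idx, axis=1).max(axis=1)
    return np.abs(true_max - est_max).mean()

for k in (5, 10, 25, 50, 100):
    print(f"scheduled nodes = {k:3d}   MAE = {mae_of_max(scheduled=k):.3f}")
```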

Relevance:

60.00%

Publisher:

Abstract:

We propose a multiple-initialization-based spectral peak tracking (MISPT) technique for heart rate monitoring from the photoplethysmography (PPG) signal. MISPT is applied to the PPG signal after removing motion artifacts using an adaptive noise cancellation filter. MISPT yields several estimates of the heart rate trajectory from the spectrogram of the denoised PPG signal, which are finally combined using a novel measure called trajectory strength. Multiple initializations help in correcting erroneous heart rate trajectories, unlike typical SPT, which uses only a single initialization. Experiments on PPG data from 12 subjects recorded during intensive physical exercise show that MISPT-based heart rate monitoring indeed yields a better heart rate estimate than SPT with a single initialization. On the 12 datasets, MISPT results in an average absolute error of 1.11 BPM, which is lower than the 1.28 BPM obtained by the state-of-the-art online heart rate monitoring algorithm.
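
A hedged sketch of the multiple-initialisation idea only: several greedy peak tracks are started from the strongest bins of the first spectrogram frame, each track is scored by the summed spectral magnitude along it (a stand-in for the paper's trajectory strength), and the strongest track is kept. The synthetic drifting-rate sinusoid, sampling rate and window settings are assumptions, not the paper's denoised PPG data or exact tracker.

```python
# Hedged sketch: multiple-initialisation spectral peak tracking on a spectrogram.
import numpy as np
from scipy.signal import spectrogram

fs = 25.0                                         # Hz, an assumed wrist-PPG sampling rate
t = np.arange(0, 240, 1 / fs)
bpm = 90 + 30 * np.sin(2 * np.pi * t / 240)       # heart rate drifting between 60 and 120 BPM
phase = 2 * np.pi * np.cumsum(bpm / 60) / fs
x = np.sin(phase) + 0.5 * np.random.default_rng(4).standard_normal(t.size)

f, frames, S = spectrogram(x, fs=fs, nperseg=200, noverlap=150)

def track(S, start_bin, radius=2):
    """Greedy peak track: from start_bin, follow the local spectral maximum frame by frame."""
    path, b = [], start_bin
    for col in S.T:
        lo, hi = max(b - radius, 0), min(b + radius + 1, len(col))
        b = lo + int(np.argmax(col[lo:hi]))
        path.append(b)
    return np.array(path)

starts = np.argsort(S[:, 0])[-5:]                                  # multiple initialisations
tracks = [track(S, s) for s in starts]
strengths = [S[tr, np.arange(S.shape[1])].sum() for tr in tracks]  # "trajectory strength" stand-in
best = tracks[int(np.argmax(strengths))]
print("estimated heart rate (BPM), first/last frame:", f[best[0]] * 60, f[best[-1]] * 60)
```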

Relevance:

60.00%

Publisher:

Abstract:

Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
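
A hedged sketch of the virtual-sensor idea: a small recurrent network mapping a spindle-power time series to a roughness-like target. The Keras SimpleRNN architecture, the synthetic data and their units are assumptions standing in for the paper's Layer-Recurrent network and industrial measurements; this assumes TensorFlow/Keras is available.

```python
# Hedged sketch: recurrent-network virtual sensor driven by spindle power.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(5)
power = rng.uniform(0.5, 2.0, size=(64, 100, 1)).astype("float32")   # 64 passes, 100 spindle-power samples (kW, invented)
# Target: a slowly accumulating, power-driven quantity loosely mimicking wear/roughness growth.
roughness = 0.2 * power.cumsum(axis=1) / 100 + 0.02 * rng.standard_normal((64, 100, 1)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(None, 1)),
    keras.layers.SimpleRNN(16, return_sequences=True),   # recurrent layer: the virtual sensor's memory
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")               # trained on mean absolute error
model.fit(power, roughness, epochs=5, verbose=0)

pred = model.predict(power[:1], verbose=0)
print("mean absolute error on one pass:", float(np.abs(pred - roughness[:1]).mean()))
```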

Relevance:

60.00%

Publisher:

Abstract:

From the dimensions of individuals, appropriate dimensioning of products and workstations can be defined, providing safety and comfort to users. With the advance of 3D image digitization (scanning) technology, it is possible to take some measurements more quickly and with a reduced need for the subject's presence during the process. However, there is a lack of studies evaluating these technologies in Brazil, making it necessary to compare the technologies and their respective accuracies before they can be used in research. With the aim of providing comparative methods for choosing the markers and equipment to be used in a three-dimensional anthropometric survey of the Brazilian population, the present study compares two scanning technologies: the WBX laser system from the American company Cyberware and the MHT system from the Russian company Artec Group. The method for evaluating the dimensional accuracy of the data obtained from these 3D scanning devices had five stages: study of the scanning processes; scanning of the anatomical landmark markers; scanning of a cylindrical test specimen; scanning of a mannequin; and scanning of a volunteer whose anatomical landmarks were marked for taking measurements. A comparison was made between measurements taken manually with an anthropometer and virtually with the aid of the three-dimensional modelling software Rhinoceros. Regarding the results obtained in the evaluation of the mannequin and the volunteer, it was concluded that the magnitude of the absolute error is similar for both scanners and remains constant regardless of the dimensions under analysis. The main differences concern the functionalities of the equipment.

Relevance:

60.00%

Publisher:

Abstract:

Active queue management (AQM) is one of the research hotspots in network congestion control, and a key issue is how to design the feedback control strategy. A new proportional-integral-derivative (PID) optimization design method (DITAE-PID), based on a D-stability region and the integral of time-weighted absolute error (ITAE) performance criterion, is proposed and applied to the design of an AQM controller so as to achieve the desired dynamic performance of the closed-loop system. First, a desired D-stability region is specified in the complex plane; then, with ITAE as the objective function, the controller parameters are obtained by a numerical optimization algorithm such that all characteristic roots of the closed-loop system lie within the D-stability region, thereby reducing queuing delay and improving effective throughput. Comparative simulation results show that the algorithm can detect and control congestion in advance, is more robust, achieves higher link utilization and a lower packet loss rate, keeps the average queue length closer to the desired value, and converges to the desired queue length in less time; its overall performance is clearly superior to the typical random early detection (RED) and proportional-integral (PI) algorithms.
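
A hedged sketch of ITAE-based PID tuning only, not the paper's DITAE-PID method: the gains are chosen by numerically minimising the ITAE of the closed-loop step response of a hypothetical first-order plant standing in for the AQM queue dynamics, and the D-stability-region constraint is omitted.

```python
# Hedged sketch: PID gains tuned by minimising ITAE for an assumed first-order plant.
import numpy as np
from scipy.optimize import minimize

DT, T_END, SETPOINT = 0.01, 10.0, 1.0      # step size (s), horizon (s), desired (normalized) queue level

def itae_cost(gains, tau=0.5, k_plant=1.0):
    """ITAE of the closed-loop step response of tau*y' = -y + k_plant*u under PID control."""
    kp, ki, kd = gains
    y, integ, prev_err, itae = 0.0, 0.0, SETPOINT, 0.0
    for step in range(int(T_END / DT)):
        t = step * DT
        err = SETPOINT - y
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = kp * err + ki * integ + kd * deriv
        y += DT * (-y + k_plant * u) / tau         # forward-Euler step of the plant
        itae += t * abs(err) * DT                  # integral of time-weighted absolute error
        prev_err = err
        if not np.isfinite(y):                     # penalize unstable gain combinations
            return 1e9
    return itae

result = minimize(itae_cost, x0=[1.0, 1.0, 0.1], method="Nelder-Mead")
print("tuned (kp, ki, kd):", result.x, " ITAE:", result.fun)
```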

Relevance:

60.00%

Publisher:

Abstract:

In addition to the classical methods, namely kriging, Inverse Distance Weighting (IDW) and splines, which have been frequently used for interpolating the spatial patterns of soil properties, a relatively more accurate surface modelling technique has been developed in recent years, namely high accuracy surface modelling (HASM). It has been used in numerical tests, DEM construction and the interpolation of climate and ecosystem changes. In this paper, HASM was applied to interpolate soil pH in a red soil region of Jiangxi Province, China, to assess its feasibility for soil property interpolation. Soil pH was measured on 150 samples of topsoil (0-20 cm) for the interpolation and for comparing the performance of HASM, kriging, IDW and splines. The mean errors (MEs) of the interpolations indicate little bias in the interpolation of soil pH by the four techniques. HASM has a smaller mean absolute error (MAE) and root mean square error (RMSE) than kriging, IDW and splines. HASM remains the most accurate method when the mean rank and the standard deviation of the ranks are used to avoid outlier effects in assessing the prediction performance of the four methods. Therefore, HASM can be considered an alternative and accurate method for interpolating soil properties. Further research on HASM is needed to combine HASM with ancillary variables to improve the interpolation performance and to develop a user-friendly algorithm that can be implemented in a GIS package. (C) 2009 Elsevier B.V. All rights reserved.
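
For illustration of the comparison protocol (not HASM itself), the sketch below evaluates one of the baseline interpolators mentioned above, inverse distance weighting, with leave-one-out ME/MAE/RMSE on synthetic soil-pH samples; the sample locations and pH values are invented.

```python
# Hedged sketch: IDW interpolation with leave-one-out ME/MAE/RMSE on synthetic soil pH.
import numpy as np

rng = np.random.default_rng(6)
xy = rng.uniform(0, 100, size=(150, 2))                                 # 150 sample locations
ph = 5.0 + 0.01 * xy[:, 0] + 0.3 * np.sin(xy[:, 1] / 15) + 0.1 * rng.standard_normal(150)

def idw(train_xy, train_z, query_xy, power=2.0):
    d = np.linalg.norm(train_xy - query_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power                              # inverse-distance weights
    return np.sum(w * train_z) / np.sum(w)

# Leave-one-out cross-validation, as commonly done for soil-property interpolation.
pred = np.array([
    idw(np.delete(xy, i, axis=0), np.delete(ph, i), xy[i]) for i in range(len(ph))
])
err = pred - ph
print("ME  :", round(err.mean(), 4))
print("MAE :", round(np.abs(err).mean(), 4))
print("RMSE:", round(np.sqrt((err ** 2).mean()), 4))
```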