950 results for Mean Absolute Scaled Error (MASE)
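
None of the abstracts below defines the search term itself. For reference, a minimal sketch of how MASE is usually computed (the Hyndman & Koehler formulation, scaling the forecast MAE by the in-sample MAE of a naive seasonal forecast); the function name and arguments are illustrative:

```python
import numpy as np

def mase(y_true, y_pred, y_train, m=1):
    """Mean Absolute Scaled Error.

    Scales the out-of-sample MAE by the in-sample MAE of a naive
    seasonal forecast with period m (m=1 gives the random-walk naive).
    """
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    mae_forecast = np.mean(np.abs(y_true - y_pred))
    mae_naive = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return mae_forecast / mae_naive
```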


Relevance:

100.00%

Publisher:

Abstract:

Purpose/aim: Myopia incidence is increasing around the world. Myopisation is considered to be caused by a variety of factors. One consideration is whether higher-order aberrations (HOA) influence myopisation. More knowledge of optics in anisometropic eyes might give further insight into the development of refractive error. Materials and methods: To analyse the possible influence of HOA on refractive error development, we compared HOA between anisometropes and isometropes. We analysed HOA up to the 4th order for both eyes of 20 anisometropes (mean age: 43 ± 17 years) and 20 isometropes (mean age: 33 ± 17 years). HOA were measured with the Shack-Hartmann i.Profiler (Carl Zeiss, Germany) and were recalculated for a 4 mm pupil. Mean spherical equivalent (MSE) was based on the subjective refraction. Anisometropia was defined as an interocular difference in MSE of ≥1 D. The mean absolute differences between right and left eyes in spherical equivalent were 0.28 ± 0.21 D in the isometropic group and 2.81 ± 2.04 D in the anisometropic group. Interocular differences in HOA were compared with the interocular difference in MSE using correlations. Results: For isometropes, oblique trefoil, vertical coma, horizontal coma and spherical aberration showed significant correlations between the two eyes. In anisometropes, all analysed higher-order aberrations correlated significantly between the two eyes except oblique secondary astigmatism and secondary astigmatism. When analysing anisometropes and isometropes separately, no significant correlations were found between interocular differences of higher-order aberrations and MSE. For isometropes and anisometropes combined, tetrafoil correlated significantly with MSE in left eyes. Conclusions: The present study could not show that interocular differences of higher-order aberrations increase with increasing interocular difference in MSE.

Relevance:

100.00%

Publisher:

Abstract:

Index tracking is an investment approach whose primary objective is to keep the portfolio return as close as possible to a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and the benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. Uncertainty in the input data is handled with a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows the optimal robust solution to be found with an ordinary optimization software package. The proposed model employs the Morgan Stanley Capital International Index as the target index, and results are reported for six national indices: Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g. the information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model lowers the tracking error while raising the values of the portfolio performance measures.
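
As a point of reference for the quantity being minimized, a minimal sketch (not the authors' robust formulation) that computes the ex-post tracking error of candidate weights against a benchmark; the weights, return matrix and benchmark series are hypothetical:

```python
import numpy as np

def tracking_error(weights, asset_returns, benchmark_returns):
    """Ex-post tracking error: standard deviation of active returns.

    asset_returns:      (T, N) matrix of periodic asset returns
    weights:            (N,) weights of the tracking portfolio
    benchmark_returns:  (T,) returns of the target index
    """
    active = asset_returns @ weights - benchmark_returns
    return float(np.std(active, ddof=1))

# hypothetical usage with random data
rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(250, 5))   # 250 days, 5 assets
bench = R.mean(axis=1)                        # toy benchmark series
w = np.full(5, 0.2)
print(tracking_error(w, R, bench))
```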

Relevance:

100.00%

Publisher:

Abstract:

This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. RVM is a sparse approximate Bayesian kernel method; it can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models considered based on the root-mean-square error (RMSE) and mean absolute error (MAE) performance criteria, and it also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for predicting the ultimate capacity of laterally loaded piles in clay.
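
A minimal sketch of the two performance criteria used for the comparison (the RVM itself is not reproduced here); the observation and prediction arrays are hypothetical:

```python
import numpy as np

def rmse(y_obs, y_pred):
    """Root-mean-square error between observed and predicted capacities."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_obs - y_pred) ** 2)))

def mae(y_obs, y_pred):
    """Mean absolute error between observed and predicted capacities."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_obs - y_pred)))

# hypothetical pile capacities (kN)
obs = np.array([120.0, 85.0, 240.0, 160.0])
pred = np.array([110.0, 90.0, 255.0, 150.0])
print(rmse(obs, pred), mae(obs, pred))
```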

Relevance:

100.00%

Publisher:

Abstract:

Age estimation from facial images is receiving increasing attention for applications such as age-based access control and age-adaptive targeted marketing. Since even humans can be misled by the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) in order to obtain a highly discriminative feature representation that is able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) that classifies a subject into an age group, followed by Support Vector Regression (SVR) that estimates a specific age. Errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, achieving mean absolute errors (MAE) of 4.50 and 5.86 years, respectively. Its robustness was also evaluated on a merge of both datasets, where a MAE of 5.20 years was achieved. Furthermore, we compared the age estimates made by humans with those of the proposed approach, and the machine outperformed the humans. The proposed approach is competitive with the current state of the art and provides additional robustness to blur, lighting and expression variation through the local phase features.
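
A schematic sketch of the hierarchical idea only (age-group classification followed by within-group regression), using scikit-learn's SVC/SVR on hypothetical feature vectors; the AAM/LBP/Gabor/LPQ feature extraction and the overlapping age ranges described in the paper are not reproduced:

```python
import numpy as np
from sklearn.svm import SVC, SVR

# hypothetical fused feature vectors and ground-truth ages
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
ages = rng.integers(5, 70, size=300)
groups = np.digitize(ages, bins=[20, 40, 60])     # 4 coarse age groups (illustrative)

clf = SVC().fit(X, groups)                        # step 1: age-group classifier
regressors = {g: SVR().fit(X[groups == g], ages[groups == g])
              for g in np.unique(groups)}         # step 2: per-group regressors

def predict_age(x):
    """Classify into an age group, then regress a specific age within it."""
    g = clf.predict(x.reshape(1, -1))[0]
    return regressors[g].predict(x.reshape(1, -1))[0]

print(predict_age(X[0]))
```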

Relevance:

100.00%

Publisher:

Abstract:

Assessing the impacts of climate variability on agricultural productivity at regional, national or global scale is essential for defining adaptation and mitigation strategies. In this study we explore the potential changes in spring wheat yields at Swift Current and Melfort, Canada, for different sowing windows under projected climate scenarios (the representative concentration pathways RCP4.5 and RCP8.5). First, the APSIM model was calibrated and evaluated at the study sites using data from long-term experimental field plots. Then, the impacts of changes in sowing date on final yield were assessed over the 2030-2099 period against a 1990-2009 baseline period of observed yield data, assuming that other crop management practices remained unchanged. The performance of APSIM was quite satisfactory, with an index of agreement of 0.80, R2 of 0.54, and mean absolute error (MAE) and root mean square error (RMSE) of 529 kg/ha and 1023 kg/ha, respectively (MAE = 476 kg/ha and RMSE = 684 kg/ha in the calibration phase). Under the projected climate conditions, a general trend of yield loss was observed regardless of the sowing window, with losses ranging from -24% to -94% depending on the site and the RCP, and noticeable losses during the 2060s and beyond (increasing CO2 effects being excluded). The smallest yield losses were obtained with the earliest possible sowing date (mid-April) under the projected future climate, suggesting that this option might be explored for mitigating possible adverse impacts of climate variability. Our findings could therefore serve as a basis for using APSIM as a decision support tool for adaptation/mitigation options under potential climate variability within Western Canada.
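
A minimal sketch of one of the evaluation statistics used alongside MAE/RMSE, the index of agreement; the standard Willmott formulation is assumed here, since the abstract does not spell out the formula:

```python
import numpy as np

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d (0-1, with 1 indicating perfect agreement)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    obar = obs.mean()
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - num / den

# hypothetical observed vs simulated yields (kg/ha)
print(index_of_agreement([2100, 1800, 2500, 3000], [2000, 1900, 2700, 2800]))
```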

Relevance:

100.00%

Publisher:

Abstract:

Eleven GCMs (BCCR-BCCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCMI, UKMO-HADCM3 and UKMO-HADGEM1) were evaluated for India (covering 73 grid points of 2.5 degrees x 2.5 degrees) for the climate variable `precipitation rate' using 5 performance indicators. Performance indicators used were the correlation coefficient, normalised root mean square error, absolute normalised mean bias error, average absolute relative error and skill score. We used a nested bias correction methodology to remove the systematic biases in GCM simulations. The Entropy method was employed to obtain weights of these 5 indicators. Ranks of the 11 GCMs were obtained through a multicriterion decision-making outranking method, PROMETHEE-2 (Preference Ranking Organisation Method of Enrichment Evaluation). An equal weight scenario (assigning 0.2 weight for each indicator) was also used to rank the GCMs. An effort was also made to rank GCMs for 4 river basins (Godavari, Krishna, Mahanadi and Cauvery) in peninsular India. The upper Malaprabha catchment in Karnataka, India, was chosen to demonstrate the Entropy and PROMETHEE-2 methods. The Spearman rank correlation coefficient was employed to assess the association between the ranking patterns. Our results suggest that the ensemble of GFDL2.0, MIROC3, BCCR-BCCM2.0, UKMO-HADCM3, MPIECHAM4 and UKMO-HADGEM1 is suitable for India. The methodology proposed can be extended to rank GCMs for any selected region.
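
A minimal sketch of the entropy weighting step (deriving indicator weights from a GCM-by-indicator performance matrix); the normalise/entropy/diversity formulation is the usual one and is an assumption, since the abstract does not give the formula, and the PROMETHEE-2 ranking is not reproduced:

```python
import numpy as np

def entropy_weights(perf):
    """Entropy weights for the columns (indicators) of a GCM x indicator matrix."""
    perf = np.asarray(perf, float)
    p = perf / perf.sum(axis=0)                          # normalise each indicator
    n = perf.shape[0]
    entropy = -np.sum(p * np.log(p + 1e-12), axis=0) / np.log(n)
    diversity = 1.0 - entropy                            # degree of differentiation
    return diversity / diversity.sum()

# hypothetical 11 GCMs x 5 indicators performance matrix
rng = np.random.default_rng(1)
print(entropy_weights(rng.uniform(0.1, 1.0, size=(11, 5))))
```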

Relevance:

100.00%

Publisher:

Abstract:

Computing the maximum of sensor readings arises in several environmental, health, and industrial monitoring applications of wireless sensor networks (WSNs). We characterize several novel design trade-offs that arise when green energy harvesting (EH) WSNs, which promise perpetual lifetimes, are deployed for this purpose. The nodes harvest renewable energy from the environment for communicating their readings to a fusion node, which then periodically estimates the maximum. For a randomized transmission schedule in which a pre-specified number of randomly selected nodes transmit in a sensor data collection round, we analyze the mean absolute error (MAE), which is defined as the mean of the absolute difference between the maximum and that estimated by the fusion node in each round. We optimize the transmit power and the number of scheduled nodes to minimize the MAE, both when the nodes have channel state information (CSI) and when they do not. Our results highlight how the optimal system operation depends on the EH rate, availability and cost of acquiring CSI, quantization, and size of the scheduled subset. Our analysis applies to a general class of sensor reading and EH random processes.
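
A minimal Monte Carlo sketch of the error measure itself: the MAE between the true maximum of all readings and the maximum over a randomly scheduled subset of k nodes. Channel effects, quantization and the EH dynamics analysed in the paper are omitted, and all parameters are hypothetical:

```python
import numpy as np

def mae_of_max(n_nodes=50, k=10, rounds=10_000, seed=0):
    """MAE between the true max reading and the max over k randomly scheduled nodes."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(rounds):
        readings = rng.normal(25.0, 5.0, size=n_nodes)      # hypothetical sensor field
        scheduled = rng.choice(n_nodes, size=k, replace=False)
        errs.append(abs(readings.max() - readings[scheduled].max()))
    return float(np.mean(errs))

print(mae_of_max())
```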

Relevance:

100.00%

Publisher:

Abstract:

Research on assessment and monitoring methods has primarily focused on fisheries with long multivariate data sets. Less research exists on methods applicable to data-poor fisheries with univariate data sets with a small sample size. In this study, we examine the capabilities of seasonal autoregressive integrated moving average (SARIMA) models to fit, forecast, and monitor the landings of such data-poor fisheries. We use a European fishery on meagre (Sciaenidae: Argyrosomus regius), where only a short time series of landings was available to model (n=60 months), as our case-study. We show that despite the limited sample size, a SARIMA model could be found that adequately fitted and forecasted the time series of meagre landings (12-month forecasts; mean error: 3.5 tons (t); annual absolute percentage error: 15.4%). We derive model-based prediction intervals and show how they can be used to detect problematic situations in the fishery. Our results indicate that over the course of one year the meagre landings remained within the prediction limits of the model and therefore indicated no need for urgent management intervention. We discuss the information that SARIMA model structure conveys on the meagre lifecycle and fishery, the methodological requirements of SARIMA forecasting of data-poor fisheries landings, and the capabilities SARIMA models present within current efforts to monitor the world’s data-poorest resources.
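
A minimal sketch of fitting a monthly SARIMA model and deriving 12-month forecasts with prediction intervals using statsmodels; the (p,d,q)(P,D,Q)s order and the synthetic landings series are assumptions, not the model identified in the paper:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# hypothetical monthly landings series (n = 60 months), in tons
rng = np.random.default_rng(0)
idx = pd.date_range("2010-01", periods=60, freq="MS")
landings = pd.Series(20 + 5 * np.sin(2 * np.pi * np.arange(60) / 12)
                     + rng.normal(0, 2, 60), index=idx)

model = SARIMAX(landings, order=(1, 0, 0), seasonal_order=(1, 1, 0, 12))
fit = model.fit(disp=False)

fc = fit.get_forecast(steps=12)
print(fc.predicted_mean)          # 12-month point forecasts
print(fc.conf_int(alpha=0.05))    # 95% prediction intervals, usable for monitoring
```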

Relevance:

100.00%

Publisher:

Abstract:

In addition to classical methods, namely kriging, inverse distance weighting (IDW) and splines, which have been frequently used for interpolating the spatial patterns of soil properties, a relatively more accurate surface modelling technique has been developed in recent years, namely high accuracy surface modelling (HASM). It has been used in numerical tests, DEM construction and the interpolation of climate and ecosystem changes. In this paper, HASM was applied to interpolate soil pH in a red soil region of Jiangxi Province, China, to assess its feasibility for soil property interpolation. Soil pH was measured on 150 samples of topsoil (0-20 cm) for the interpolation and for comparing the performance of HASM, kriging, IDW and splines. The mean errors (MEs) of the interpolations indicate little bias for soil pH by any of the four techniques. HASM has a smaller mean absolute error (MAE) and root mean square error (RMSE) than kriging, IDW and splines, and it remains the most accurate when the mean rank and the standard deviation of the ranks are used to avoid outlier effects in assessing the prediction performance of the four methods. HASM can therefore be considered an alternative, accurate method for interpolating soil properties. Further research is needed to combine HASM with ancillary variables to improve the interpolation performance and to develop a user-friendly algorithm that can be implemented in a GIS package.
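
For context, a minimal sketch of one of the comparison interpolators (IDW) together with a leave-one-out MAE of the kind used to compare methods; HASM itself is not reproduced, and the sample coordinates and pH values are hypothetical:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighted prediction at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.where(d == 0, 1e-10, d)            # avoid division by zero at coincident points
    w = 1.0 / d ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

def loo_mae(xy, z, power=2.0):
    """Leave-one-out MAE of IDW over the sample points."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        pred = idw(xy[mask], z[mask], xy[i:i + 1], power)[0]
        errs.append(abs(pred - z[i]))
    return float(np.mean(errs))

# hypothetical 150 topsoil pH samples over a 10 km x 10 km area
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10_000, size=(150, 2))
ph = 5.0 + rng.normal(0, 0.4, size=150)
print(loo_mae(xy, ph))
```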

Relevance:

100.00%

Publisher:

Abstract:

Sea surface salinity is a key physical parameter in ocean science, and retrieving it by microwave remote sensing techniques is an important task. Based on in situ measurement data and remote sensing data from the Yellow Sea, we have built a new empirical model that can be used to retrieve the sea surface salinity of the Yellow Sea from the brightness temperature of the sea water at L-band. The model takes the influence of sea surface roughness into account, and the retrieved results are in good agreement with the in situ measurements, with a mean absolute error of the retrieved sea surface salinity of about 0.288 psu. This result shows that our model has greater retrieval precision than similar models.

Relevance:

100.00%

Publisher:

Abstract:

Numerical models are a useful tool for studying tidal waves, but their application raises several practical problems, including the specification of open boundary conditions and the choice of bottom friction and dissipation coefficients. Data assimilation is one way to address these problems: a limited number of tidal observations is used to produce an optimal estimate of the tidal wave, the underlying aim being to force the model prediction towards the observations so that the model does not drift too far from reality. This work adopts an optimized open-boundary method in which tidal elevation information is optimized along the open boundary of the numerical model, so that the numerical solution approaches the observations in the sense of the dynamical constraints and tidal results are obtained for the study region. The boundary values are determined by the solution of a specified optimization problem in order to improve the tidal accuracy in the simulated region; the optimal solution is based on the variation of the energy flux through the open boundary and minimizes the difference between observed and computed values at the open boundary. A radiation-type boundary condition derived by Reid and Bodine (abbreviated RB here) is used, and the optimized RB method adopted (called ORB) is a special case of the optimized open boundary. Tidal waves were simulated for an idealized rectangular sea with an eastern open boundary using the ECOM3D model, and the quality of the results was measured with four error statistics: the mean amplitude bias, the mean absolute deviation, the mean relative error and the root-mean-square deviation. The analytic tidal values optimized into the open boundary follow the method of Fang Guohong, 《海湾的潮汐与潮流》 (Tides and Tidal Currents in Bays, 1966). To verify that our analytic solution is consistent with Fang's, the key quantities a, b and z of his first example were recomputed; the results agree very well, with only slight differences probably caused by differences in the iteration scheme and in the number of decimal digits retained by the computer. In addition, taking m = 20 gives more accurate values: for the first ten terms the parameters improve slightly when going from m = 10 to m = 20, and values for larger m can of course be obtained. To further check the correctness of the analytic solution, the influence of m and l on the boundary values was examined. As m increases, the maximum modulus of u is 6% of the modulus of u1 or u2 at m = 20, 4% at m = 100, and still about 4% at m = 1000, i.e. it changes little. For l < 1 the maximum modulus of u at the boundary is 2; for l = 1 it is 0.1; for l > 1 the modulus of u decreases as l increases; and for l = 10 the maximum is 0.001, which can be regarded as zero. To test the application of the optimization method, simulations were run for the idealized rectangular region with a depth of 30 m, optimizing the tidal values into the open boundary. The resulting model solution agrees very well with the analytic solution: over the whole region the mean absolute amplitude deviation is 9.9 cm, the mean absolute phase deviation is only 4.0°, and the root-mean-square deviation is only 13.3 cm, showing that the optimization method is effective in the tidal model. Three classes of sensitivity experiments were then carried out. In the first class, to show that optimizing the open boundary yields solutions closer to the analytic solution than not optimizing it, the ORB condition was compared with the RB condition for two friction coefficients, k = 0 and k = 0.00006. For both values of k, the ORB solution improves markedly on the RB solution in both amplitude and phase, with the root-mean-square deviation reduced by 84.3% and 83.7% in the two experiments, respectively; the optimized result for k = 0.00006 is better than that for k = 0. In the second class, with the open boundary optimized by the ORB condition, inflow and outflow were added at the eastern and western boundaries, considering both linear and nonlinear flows. Adding the flow degrades the tidal simulation considerably: the root-mean-square deviations for 1 Sv and 5 Sv flows differ by 20 cm, whereas without flow the difference is only 0.2 cm; the solutions for linear and nonlinear flow differ little, with similar amplitude and phase statistics, indicating that the linearity of the flow has little effect on the results. In the third class, the ORB condition was applied not only at the open boundary but also in the model interior, and the deviations from the analytic solution with and without interior optimization were compared; for the different values of k examined, including the six values whose solutions are closest to the analytic solution, the amplitude is simulated well while the phase is comparatively poorer. In summary, using the ORB condition at the open boundary is better than using the RB condition, with large improvements in both amplitude and phase; when inflow/outflow is added, the magnitude of the flow affects the simulation but linear and nonlinear flows differ little; and the interior-optimization results show that the model reproduces the amplitude of the analytic solution well for different values of k.
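
A minimal sketch of the four error statistics used above (mean amplitude bias, mean absolute deviation, mean relative error and root-mean-square deviation) between modelled and analytic tidal values; the arrays are hypothetical:

```python
import numpy as np

def tidal_error_measures(analytic, modelled):
    """Mean bias, mean absolute deviation, mean relative error and RMS deviation."""
    a, m = np.asarray(analytic, float), np.asarray(modelled, float)
    diff = m - a
    return {
        "mean_bias": float(diff.mean()),
        "mean_abs_dev": float(np.abs(diff).mean()),
        "mean_rel_err": float(np.mean(np.abs(diff) / np.abs(a))),
        "rms_dev": float(np.sqrt(np.mean(diff ** 2))),
    }

# hypothetical tidal amplitudes (cm) at a few grid points
print(tidal_error_measures([120.0, 95.0, 80.0], [118.0, 101.0, 84.0]))
```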

Relevance:

100.00%

Publisher:

Abstract:

Software metrics are a key tool in software quality management. In this paper, we propose to use support vector machines for regression applied to software metrics to predict software quality. In experiments we compare this method with other regression techniques such as Multivariate Linear Regression, Conjunctive Rule and Locally Weighted Regression. Results on the benchmark dataset MIS, using mean absolute error and correlation coefficient as regression performance measures, indicate that support vector machine regression is a promising technique for software quality prediction. In addition, our investigation of PCA-based metrics extraction shows that using the first few principal components (PC) we can still obtain relatively good performance.
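
A minimal scikit-learn sketch of the two pieces described above: support vector regression evaluated with MAE and the correlation coefficient, plus a variant that first projects the metrics onto the first few principal components. The synthetic metric matrix stands in for the MIS dataset:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# hypothetical software-metric matrix and fault counts (stand-in for MIS)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))
y = 2 * X[:, 0] + X[:, 3] + rng.normal(0, 0.5, 200)
train, test = slice(0, 150), slice(150, 200)

svr = make_pipeline(StandardScaler(), SVR()).fit(X[train], y[train])
svr_pca = make_pipeline(StandardScaler(), PCA(n_components=3), SVR()).fit(X[train], y[train])

for name, model in [("SVR", svr), ("PCA+SVR", svr_pca)]:
    pred = model.predict(X[test])
    mae = mean_absolute_error(y[test], pred)
    corr = np.corrcoef(y[test], pred)[0, 1]
    print(f"{name}: MAE={mae:.3f}, r={corr:.3f}")
```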

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To study spectacle wear among rural Chinese children. METHODS: Visual acuity, refraction, spectacle wear, and visual function were measured. RESULTS: Among 1892 subjects (84.7% of the sample), the mean (SD) age was 14.7 (0.8) years. Among 948 children (50.1%) potentially benefiting from spectacle wear, 368 (38.8%) did not own them. Among 580 children owning spectacles, 17.9% did not wear them at school. Among 476 children wearing spectacles, 25.0% had prescriptions that could not improve their visual acuity to better than 6/12. Therefore, 62.3% (591 of 948) of children needing spectacles did not benefit from appropriate correction. Children not owning and not wearing spectacles had better self-reported visual function but worse visual acuity at initial examination than children wearing spectacles and had a mean (SD) refractive error of -2.06 (1.15) diopter (D) and -2.78 (1.32) D, respectively. Girls (P < .001) and older children (P = .03) were more likely to be wearing their spectacles. A common reason for nonwear (17.0%) was the belief that spectacles weaken the eyes. Among children without spectacles, 79.3% said their families would pay for them (mean, US $15). CONCLUSIONS: Although half of the children could benefit from spectacle wear, 62.3% were not wearing appropriate correction. These children have significant uncorrected refractive errors. There is potential to support programs through spectacle sales.

Relevance:

100.00%

Publisher:

Abstract:

Background: Although several pharmacogenetic warfarin dose-prediction algorithms have been published, few studies have compared the validity of these algorithms in real clinical practice. Objective: To evaluate three pharmacogenomic algorithms in a population of patients initiating warfarin treatment who have atrial fibrillation or heart valve disease, and to analyse the performance of the algorithms of Gage et al., Michaud et al. and the IWPC in predicting the warfarin dose that achieves a therapeutic INR. Methods: A retrospective cohort design was used to evaluate the validity of the algorithms in 605 patients who started warfarin therapy at the Institut de Cardiologie de Montréal. Pearson's correlation coefficient and the mean absolute error were used to assess the precision of the algorithms. The clinical accuracy of the dose predictions was evaluated by counting the patients for whom the predicted dose was underestimated, ideally estimated or overestimated. Finally, multiple linear regression was used to assess the validity of a warfarin dose-prediction model obtained by adding new covariates. Results: The Gage algorithm achieved the highest proportion of explained variation (adjusted R2 = 44%) and the lowest mean absolute error (MAE = 1.41 ± 0.06). Moreover, comparing the proportions of patients whose predicted dose was within 20% of the observed dose confirmed that the Gage algorithm was also the best performing. Conclusion: The model published by Gage in 2008 is the most accurate pharmacogenetic algorithm for predicting therapeutic warfarin doses in our population.
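
A minimal sketch of the evaluation described above: Pearson correlation, MAE, and the proportion of patients whose predicted dose falls within 20% of the observed dose; the dose arrays are hypothetical:

```python
import numpy as np

def evaluate_dose_algorithm(observed, predicted):
    """Pearson r, MAE, and % of predictions within +/-20% of the observed dose."""
    obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
    r = np.corrcoef(obs, pred)[0, 1]
    mae = float(np.mean(np.abs(obs - pred)))
    within_20 = float(np.mean(np.abs(pred - obs) <= 0.20 * obs) * 100)
    return r, mae, within_20

# hypothetical weekly warfarin doses (mg) for a few patients
obs = np.array([28.0, 35.0, 21.0, 42.0, 30.0])
pred = np.array([30.5, 31.0, 25.0, 40.0, 27.5])
print(evaluate_dose_algorithm(obs, pred))
```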

Relevance:

100.00%

Publisher:

Abstract:

Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, due to their rather small spatial and temporal extent and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on simulating the severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on mean absolute error, root mean square error and correlation coefficient is performed to compare the simulated and observed data for the different convective schemes. This study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective scheme. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activity well in terms of timing, intensity and region of occurrence of the events, compared with the other convective schemes and with the explicit simulation.