960 results for Root mean square error
Abstract:
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean squared error based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices, then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
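As an illustration of the general recipe (not the authors' exact estimator), the sketch below keeps the eigen-directions of a crude signal correlation estimate whose signal power exceeds the projected noise power; the data layout and the selection rule are assumptions.

```python
import numpy as np

def estimate_signal_subspace(X, noise_cov):
    """Illustrative sketch: keep eigenvectors of the signal correlation
    matrix whose eigenvalues exceed the projected noise power.

    X         : (pixels, bands) hyperspectral data matrix (assumed layout)
    noise_cov : (bands, bands) estimated noise correlation matrix
    """
    R_y = X.T @ X / X.shape[0]      # observed correlation matrix
    R_s = R_y - noise_cov           # crude signal correlation estimate
    eigvals, eigvecs = np.linalg.eigh(R_s)
    # Noise power projected onto each eigen-direction: diag(V' N V)
    noise_power = np.einsum('ij,jk,ki->i', eigvecs.T, noise_cov, eigvecs)
    keep = eigvals > noise_power
    return eigvecs[:, keep]
```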
Abstract:
In this work, the dissolution profiles of captopril-hydrochlorothiazide and zidovudine-lamivudine associations were evaluated by a multivariate spectroscopic method. The models were developed by partial least squares regression from 20 synthetic mixtures using mean-centered spectral data. External validation with 5 synthetic mixtures showed a mean prediction error of about 1%. Good agreement was observed between the analyses of commercial drugs (content uniformity and dissolution profile) and the results obtained by the standard chromatographic method, with prediction errors lower than 10%.
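A minimal sketch of this kind of calibration, using scikit-learn's PLS implementation; the spectra, analyte count, and number of latent variables below are placeholders, not the paper's actual data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical shapes: 20 calibration mixtures x 100 wavelengths,
# 2 analytes (e.g., the two drugs of one association).
rng = np.random.default_rng(0)
X_cal = rng.random((20, 100))   # calibration spectra
Y_cal = rng.random((20, 2))     # known concentrations

pls = PLSRegression(n_components=3, scale=False)  # mean-centering is built in
pls.fit(X_cal, Y_cal)

X_val = rng.random((5, 100))    # the 5 external-validation mixtures
Y_pred = pls.predict(X_val)     # predicted concentrations
```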
Abstract:
The objective of this paper was to evaluate the potential of neural networks (NN) as an alternative to the basic epidemiological approach to describing epidemics of coffee rust. The NN was developed from the intensities of coffee (Coffea arabica) rust along with climatic variables collected in Lavras-MG between 13 February 1998 and 20 April 2001. The NN was built with climatic variables selected either by stepwise regression analysis or by the Braincel® system, a software package for NN building. Fifty-nine networks and 26 regression models were tested. The best models were selected based on small values of the mean square deviation (MSD) and the mean prediction error (MPE); for the regression models, the highest coefficients of determination (R²) were also used. The best neural network model had an MSD of 4.36 and an MPE of 2.43%. This model used minimum temperature, production, relative air humidity, and irradiance 30 days before the evaluation of disease. The best regression model was developed from the 29 climatic variables selected in the network. The summary statistics for this model were: MPE=6.58%, MSD=4.36, and R²=0.80. Neural networks elaborated from a time series were also evaluated to describe the epidemic. The incidence of coffee rust at the four previous fortnights resulted in a model with an MPE of 4.72% and an MSD of 3.95.
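The abstract does not spell out the formulas behind MSD and MPE; one common reading, used in the hedged sketch below, is the mean squared deviation and the mean absolute error as a percentage of the observed values.

```python
import numpy as np

def msd(observed, predicted):
    # Mean square deviation between observed and predicted intensities
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.mean((observed - predicted) ** 2)

def mpe(observed, predicted):
    # Mean prediction error, in percent of the observed values (assumed form)
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 100 * np.mean(np.abs(observed - predicted) / observed)
```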
Abstract:
The aim of the present study was to compare the modulation of heart rate in a group of postmenopausal women to that of a group of young women under resting conditions on the basis of R-R interval variability. Ten healthy postmenopausal women (mean ± SD, 58.3 ± 6.8 years) and 10 healthy young women (mean ± SD, 21.6 ± 0.82 years) were submitted to a control resting electrocardiogram (ECG) in the supine and sitting positions over a period of 6 min. The ECG was obtained from a one-channel heart monitor at the CM5 lead and processed and stored using an analog-to-digital converter connected to a microcomputer. R-R intervals were calculated on a beat-to-beat basis from the ECG recording in real time using signal-processing software. Heart rate variability (HRV) was expressed as the standard deviation of the R-R intervals (RMSM) and the root mean square of successive differences (RMSSD). In the supine position, the postmenopausal group showed significantly lower (P<0.05) median values of RMSM (34.9) and RMSSD (22.32) than the young group (RMSM: 62.11 and RMSSD: 49.1). The same occurred in the sitting position (RMSM: 33.0 and RMSSD: 18.9, compared to RMSM: 57.6 and RMSSD: 42.8 for the young group). These results indicate a decrease in parasympathetic modulation in postmenopausal women compared to young women, possibly due to both age and hormonal factors. Thus, time-domain HRV proved to be a noninvasive and sensitive method for the identification of changes in autonomic modulation of the sinus node in postmenopausal women.
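A minimal sketch of the two time-domain indices, assuming R-R intervals are given in milliseconds; RMSM is taken here as the sample standard deviation of the intervals (often called SDNN) and RMSSD as the root mean square of successive differences.

```python
import numpy as np

def time_domain_hrv(rr_intervals_ms):
    """Time-domain HRV indices from a series of R-R intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    rmsm = rr.std(ddof=1)                      # SD of the R-R intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) # RMS of successive differences
    return rmsm, rmssd
```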
Abstract:
Many unit root and cointegration tests require an estimate of the spectral density function at frequency zero of some process. Kernel estimators based on weighted sums of autocovariances constructed using estimated residuals from an AR(1) regression are commonly used. However, it is known that with substantially correlated errors, the OLS estimate of the AR(1) parameter is severely biased. In this paper, we first show that this least-squares bias induces a significant increase in the bias and mean-squared error of kernel-based estimators.
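For concreteness, here is a hedged sketch of a prewhitened kernel estimator of the long-run variance (2π times the spectral density at frequency zero) with Bartlett weights; the exact kernel and recoloring step used in the paper may differ.

```python
import numpy as np

def longrun_variance(u, bandwidth):
    """Bartlett-kernel long-run variance estimate built from the residuals
    of an AR(1) regression on u (a sketch of the standard construction)."""
    u = np.asarray(u, dtype=float)
    # OLS estimate of the AR(1) coefficient (the biased estimate at issue)
    rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])
    e = u[1:] - rho * u[:-1]                    # estimated residuals
    n = e.size
    gamma = lambda j: (e[j:] @ e[:n - j]) / n   # autocovariance at lag j
    lrv = gamma(0)
    for j in range(1, bandwidth + 1):
        w = 1 - j / (bandwidth + 1)             # Bartlett weights
        lrv += 2 * w * gamma(j)
    return lrv / (1 - rho) ** 2                 # recolor to undo prewhitening
```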
Abstract:
This paper proposes a new iterative algorithm for OFDM joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Although it is central to this joint approach, the overfitting problem has received relatively little attention in existing algorithms. Specifically, we apply a hard-decision procedure at every iterative step to overcome overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the phase noise, and a more robust and compact fast procedure based on Givens rotations is proposed to reduce the complexity to a practical level. Numerical simulations are given to verify the proposed algorithm.
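As a toy illustration of the hard-decision step (the constellation is not specified here; QPSK is assumed), each soft symbol estimate is sliced to the nearest constellation point before the next iteration, which keeps the recursion from drifting toward the trivial solution:

```python
import numpy as np

def hard_decision_qpsk(soft_symbols):
    # Slice each soft estimate to the nearest unit-energy QPSK point
    # (illustrative; the paper's constellation may differ).
    s = np.asarray(soft_symbols)
    return (np.sign(s.real) + 1j * np.sign(s.imag)) / np.sqrt(2)
```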
Abstract:
This correspondence proposes a new algorithm for OFDM joint data detection and phase noise (PHN) cancellation for constant modulus modulations. We highlight that it is important to address the overfitting problem, since it is a major detrimental factor impairing the joint detection process. To attack the overfitting problem, we propose an iterative approach based on minimum mean square prediction error (MMSPE), subject to the constraint that the estimated data symbols have constant power. The proposed constrained MMSPE algorithm (C-MMSPE) significantly improves on the performance of existing approaches with little extra complexity. Simulation results are given to verify the proposed algorithm.
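One simple way to impose the constant-power constraint, sketched below, is to renormalise each estimated symbol onto the circle of the target radius; this is an illustrative projection, not necessarily the exact C-MMSPE update.

```python
import numpy as np

def project_constant_modulus(symbols, power=1.0):
    # Enforce |s_k|^2 = power by projecting each estimated symbol
    # onto the circle of radius sqrt(power).
    s = np.asarray(symbols, dtype=complex)
    return np.sqrt(power) * s / np.abs(s)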
Abstract:
Time correlation functions yield profound information about the dynamics of a physical system and hence are frequently calculated in computer simulations. For systems whose dynamics span a wide range of time scales, currently used methods require significant computer time and memory. In this paper, we discuss the multiple-tau correlator method for the efficient on-the-fly calculation of accurate time correlation functions during computer simulations. The multiple-tau correlator is efficient in terms of computational requirements and can be tuned to the desired level of accuracy. Further, we derive estimates for the error arising from the use of the multiple-tau correlator and extend it to the calculation of mean-square particle displacements and dynamic structure factors. The method described here is, in hardware implementation, routinely used in light scattering experiments but has not yet found widespread use in computer simulations.
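A compact Python sketch of a multiple-tau correlator follows; production versions are typically written in C or hardware, and the buffer length p, the number of levels, and the pairwise block averaging below are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

class MultiTauCorrelator:
    """On-the-fly autocorrelation with logarithmically spaced lags.

    Each level stores p coarse-grained samples; a pushed value updates the
    correlation accumulators of its level, then pairs of values are block-
    averaged and forwarded to the next, coarser level.
    """

    def __init__(self, levels=8, p=16):
        self.p = p
        self.buf = [np.zeros(p) for _ in range(levels)]   # ring buffers
        self.n = [0] * levels                             # samples per level
        self.acc = [np.zeros(p) for _ in range(levels)]   # sums of x(t)x(t-tau)
        self.cnt = [np.zeros(p) for _ in range(levels)]   # product counts
        self.carry = [None] * levels                      # pending pair member

    def push(self, x, level=0):
        if level >= len(self.buf):
            return
        buf, n = self.buf[level], self.n[level]
        buf[n % self.p] = x
        self.n[level] = n + 1
        # Correlate the new sample against this level's stored history
        for tau in range(min(n + 1, self.p)):
            self.acc[level][tau] += x * buf[(n - tau) % self.p]
            self.cnt[level][tau] += 1
        # Coarse-grain in pairs and forward to the next level
        if self.carry[level] is None:
            self.carry[level] = x
        else:
            self.push(0.5 * (self.carry[level] + x), level + 1)
            self.carry[level] = None

    def result(self):
        lags, corr = [], []
        for lev in range(len(self.buf)):
            start = 0 if lev == 0 else self.p // 2   # skip duplicated lags
            for tau in range(start, self.p):
                if self.cnt[lev][tau] > 0:
                    lags.append(tau * 2 ** lev)
                    corr.append(self.acc[lev][tau] / self.cnt[lev][tau])
        return np.array(lags), np.array(corr)
```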
Abstract:
This paper analyzes the convergence behavior of the least mean square (LMS) filter when used in an adaptive code division multiple access (CDMA) detector consisting of a tapped delay line with adjustable tap weights. The sampling rate may be equal to or higher than the chip rate, corresponding to chip-spaced (CS) and fractionally spaced (FS) detection, respectively. It is shown that CS and FS detectors with the same time span exhibit identical convergence behavior if the baseband received signal is strictly bandlimited to half the chip rate. Even in the practical case when this condition is not met, deviations from this observation are imperceptible unless the initial tap-weight vector gives an extremely large mean squared error (MSE). This phenomenon is carefully explained with reference to the eigenvalues of the correlation matrix when the input signal is not perfectly bandlimited. The inadequacy of the eigenvalue spread of the tap-input correlation matrix as an indicator of the transient behavior, and the influence of the initial tap-weight vector on convergence speed, are highlighted. Specifically, initialization within the signal subspace or at the origin leads to much faster convergence than initialization in the noise subspace.
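A minimal LMS sketch for a tapped-delay-line detector is given below; the training signal, step size, and origin initialization (one of the cases discussed above) are assumptions for illustration.

```python
import numpy as np

def lms_detector(r, d, taps, mu):
    """Train a tapped-delay-line detector with the LMS rule.

    r    : received samples (chip-spaced or fractionally spaced)
    d    : desired symbol decisions over the training period
    taps : number of adjustable tap weights
    mu   : step size
    """
    w = np.zeros(taps)                 # initial tap-weight vector (origin)
    mse = []
    for k in range(taps, len(r)):
        u = r[k - taps:k][::-1]        # tap-input vector
        e = d[k] - w @ u               # a-priori error
        w = w + mu * e * u             # LMS update
        mse.append(e ** 2)
    return w, np.array(mse)
```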
Abstract:
A dynamic, mechanistic model of enteric fermentation was used to investigate the effect of type and quality of grass forage, dry matter intake (DMI) and proportion of concentrates in dietary dry matter (DM) on variation in methane (CH(4)) emission from enteric fermentation in dairy cows. The model represents substrate degradation and microbial fermentation processes in the rumen and hindgut and, in particular, the effects of type of substrate fermented and of pH on the production of individual volatile fatty acids and CH(4) as end-products of fermentation. Effects of type and quality of fresh and ensiled grass were evaluated by distinguishing two N fertilization rates of grassland and two stages of grass maturity. Simulation results indicated a strong impact of the amount and type of grass consumed on CH(4) emission, with a maximum difference (across all forage types and all levels of DMI) of 49 and 77% in g CH(4)/kg fat and protein corrected milk (FCM) for diets with a proportion of concentrates in dietary DM of 0.1 and 0.4, respectively (values ranging from 10.2 to 19.5 g CH(4)/kg FCM). The lowest emission was established for early cut, highly fertilized grass silage (GS) and highly fertilized grass herbage (GH). The highest emission was found for late cut, low-fertilized GS. The N fertilization rate had the largest impact, followed by stage of grass maturity at harvesting and by the distinction between GH and GS. Emission expressed in g CH(4)/kg FCM declined on average 14% with an increase of DMI from 14 to 18 kg/day for grass forage diets with a proportion of concentrates of 0.1, and on average 29% with an increase of DMI from 14 to 23 kg/day for diets with a proportion of concentrates of 0.4. Simulation results indicated that a high proportion of concentrates in dietary DM may lead to a further reduction of CH(4) emission per kg FCM, mainly as a result of a higher DMI and milk yield, in comparison to low-concentrate diets. Simulation results were evaluated against independent data obtained at three different laboratories in indirect calorimetry trials with cows consuming mainly GH. The model predicted the average of observed values reasonably well, but systematic deviations remained between individual laboratories, and the root mean squared prediction error was a proportion of 0.12 of the observed mean. Both observed and predicted emission expressed in g CH(4)/kg DM intake decreased upon an increase in dietary N:organic matter (OM) ratio. The model reproduced reasonably well the variation in measured CH(4) emission in cattle sheds on Dutch dairy farms and indicated that on average a fraction of 0.28 of the total emissions must have originated from manure under these circumstances.
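The "proportion of the observed mean" statistic corresponds, under a common definition, to the relative root mean squared prediction error sketched here:

```python
import numpy as np

def relative_rmspe(observed, predicted):
    # Root mean squared prediction error as a proportion of the observed mean
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmspe = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmspe / observed.mean()
```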
Abstract:
As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on low voltage networks through smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required that can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called "double penalty" effect, incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters and discuss the effect of the permutation restriction.
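One illustrative reading of the restricted-permutation measure, sketched below, forbids displacing any forecast point by more than w time steps and solves the resulting minimum-cost assignment; the window w and the norm p are tuning choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_error(forecast, actual, w, p=1):
    """Smallest p-norm error over permutations of the forecast that move
    each point by at most w time steps (a sketch, assuming this reading)."""
    f, a = np.asarray(forecast), np.asarray(actual)
    n = len(f)
    cost = np.abs(f[:, None] - a[None, :]) ** p
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    cost[np.abs(i - j) > w] = 1e12     # forbid displacements beyond w
    rows, cols = linear_sum_assignment(cost)
    return (cost[rows, cols].sum() / n) ** (1 / p)
```

With w = 0 this reduces to the usual point-wise error; larger windows progressively forgive displaced peaks.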
Abstract:
Radial transport in the tokamap, which has been proposed as a simple model for the motion in a stochastic plasma, is investigated. A theory for previous numerical findings is presented. The new results are stimulated by the fact that the radial diffusion coefficient is space-dependent. The space-dependence of the transport coefficient has several interesting effects which have not been elucidated so far. Among the new findings are analytical predictions for the scaling of the mean radial displacement with time and the relation between the Fokker-Planck diffusion coefficient and the diffusion coefficient obtained from the mean square displacement. The applicability to other systems is also discussed.
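As a hedged aside, the diffusion coefficient from the mean square displacement is commonly extracted from the scaling <(Δr)²> ≈ 2 D tᵅ; the sketch below fits this on log-log axes (α = 1 recovers ordinary diffusion, α ≠ 1 signals the anomalous scaling discussed above).

```python
import numpy as np

def diffusion_from_msd(t, msd):
    # Fit log(msd) = log(2 D) + alpha * log(t) by least squares
    alpha, log2D = np.polyfit(np.log(t), np.log(msd), 1)
    return np.exp(log2D) / 2, alpha   # (D, scaling exponent)
```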
Abstract:
This dissertation studies the movement of the Brazilian stock market with the objective of testing the price trajectories of pairs of stocks, applied to a pair-trading strategy. The assets studied comprise the stocks that make up the Ibovespa, and pair selection is carried out purely statistically, through the cointegration property between assets, with no fundamental analysis in the choice. The theory applied here concerns the similar price movement of pairs of stocks that evolve so as to return to equilibrium. This evolution is measured by the instantaneous price difference compared to the historical mean. The strategy yields positive results when mean reversion takes place within a predetermined time interval. The data cover the years 2006 to 2010, with intraday prices for the Ibovespa stocks. The tools used for pair selection and for simulating market operation were MATLAB (selection) and Streambase (operation). Selection was performed with the augmented Dickey-Fuller test, applied in MATLAB to check for a unit root in the residuals of the linear combination of the prices of the stocks composing each pair. Operation was simulated through back-testing with the aforementioned intraday data. Within the tested interval, the strategy proved profitable for the years 2006, 2007 and 2010 (with returns above the Selic rate). The parameters calibrated for the first month of 2006 could be applied successfully over the rest of the interval: a return of Selic + 5.8% in 2006, a return quite close to the Selic in 2007, and a return of Selic + 10.8% in 2010. In the years of higher volatility (2008 and 2009), tests with the same 2006 parameters showed losses, indicating that the strategy is strongly affected by the volatility of stock price returns. This behavior suggests that, in a real operation, the parameters should be recalibrated periodically to adapt them to more volatile scenarios.
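The dissertation performs the selection in MATLAB; an equivalent hedged sketch in Python, testing the residuals of the linear combination of the two price series for a unit root with the augmented Dickey-Fuller test, would be:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def is_cointegrated_pair(p1, p2, alpha=0.05):
    """Select a pair if the residuals of the linear combination of the
    two price series have no unit root (ADF test on OLS residuals)."""
    X = sm.add_constant(np.asarray(p2, dtype=float))
    fit = sm.OLS(np.asarray(p1, dtype=float), X).fit()
    pvalue = adfuller(fit.resid)[1]   # second element is the p-value
    return pvalue < alpha
```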
Abstract:
This study aimed to estimate multiple linear regression equations having, as explanatory variables, the other traits evaluated in maize experiments and, as dependent variables, the least significant difference as a percentage of the mean (DMS%) and the error mean square (QMe) for grain weight. With 610 experiments conducted in the Rede de Ensaios Nacionais de Competição de Cultivares de Milho, carried out between 1986 and 1996 (522 experiments) and in 1997 (88 experiments), two regression equations were estimated from the 522 experiments and validated, with the remaining 88, through simple regression between the actual values and those estimated by the equations. It was observed that, for DMS%, the equation did not estimate the same value as the original formula, while for QMe the equation could be used for estimation. Using the Lilliefors test, it was verified that the QMe values followed the standard normal distribution, and a classification table for QMe values was constructed, based on the values observed in the analysis of variance of the experiments and those estimated by the regression equation.
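The validation step described above, regressing actual values on equation estimates, can be sketched as follows (variable names are placeholders): an intercept near 0 and a slope near 1 indicate the equation reproduces the original statistic.

```python
import numpy as np
import statsmodels.api as sm

def validate_equation(actual, estimated):
    # Simple regression of actual on estimated values for the 88
    # held-out experiments (illustrative layout)
    X = sm.add_constant(np.asarray(estimated, dtype=float))
    fit = sm.OLS(np.asarray(actual, dtype=float), X).fit()
    intercept, slope = fit.params
    return intercept, slope, fit.rsquared
```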
Abstract:
Growth curve models provide a visual assessment of growth as a function of time and predict body weight at a specific age. This study aimed at estimating the growth curve of tinamous using different models and at verifying their goodness of fit. A total of 11,639 weight records from 411 birds, 6,671 from females and 3,095 from males, were analyzed. The highest estimates of the a parameter were obtained using the Brody (BD), von Bertalanffy (VB), Gompertz (GP), and Logistic (LG) functions. Adult females were 5.7% heavier than males. The highest estimates of the b parameter were obtained in the LG, GP, BD, and VB models. The estimated k parameter values, in decreasing order, were obtained in the LG, GP, VB, and BD models. The correlation between the a and k parameters showed that heavier birds are less precocious than lighter ones. The estimates of the intercept, the linear regression coefficient, the quadratic regression coefficient, the differences between the quadratic coefficients of the segments, and the estimated knots of the quadratic-quadratic-quadratic segmented polynomial (QQQSP) were 31.1732 +/- 2.41339; 3.07898 +/- 0.13287; 0.02689 +/- 0.00152; -0.05566 +/- 0.00193; 0.02349 +/- 0.00107; and 57 and 145 days, respectively. The estimated predicted mean error (PME) values of the VB, GP, BD, LG, and QQQSP models were, respectively, 0.8353; 0.01715; -0.6939; -2.2453; and -0.7544%. The coefficient of determination (R²) and mean square error (MSE) values showed similar results. The VB and QQQSP models adequately described tinamous growth, but the best model was the Gompertz model, because it presented the highest R² values, ease of convergence, lower PME, and ease of biological interpretation of the parameters.
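A hedged sketch of fitting the Gompertz function with scipy follows; the parameterization and the age/weight records below are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    # a: asymptotic adult weight, b: integration constant, k: maturity rate
    return a * np.exp(-b * np.exp(-k * t))

# Hypothetical age (days) and body weight (g) records for one bird
t = np.array([1, 15, 30, 60, 90, 120, 180, 240])
w = np.array([25, 80, 160, 320, 450, 530, 590, 610])

params, _ = curve_fit(gompertz, t, w, p0=(650, 3, 0.02))
a, b, k = params   # fitted growth-curve parameters
```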