936 results for Predictive Mean Squared Efficiency
Abstract:
This paper considers the high-rate performance of source coding for noisy discrete symmetric channels with random index assignment (IA). Accurate analytical models are developed to characterize the expected distortion performance of vector quantization (VQ) for a large class of distortion measures. It is shown that when the point density is continuous, the distortion can be approximated as the sum of the source quantization distortion and the channel-error-induced distortion. Expressions are also derived for the continuous point density that minimizes the expected distortion. Next, for the case of mean squared error distortion, a more accurate analytical model for the distortion is derived by allowing the point density to have a singular component, and the extent of the singularity is characterized. These results provide analytical models for the expected distortion performance of both conventional VQ and channel-optimized VQ. As a practical example, compression of the linear predictive coding parameters in the wideband speech spectrum is considered, with the log spectral distortion as the performance metric. The theory correctly predicts the channel error rate that is permissible for operation at a particular level of distortion.
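A small Monte Carlo experiment can make the additive decomposition concrete. The following is an illustrative scalar sketch, not the paper's derivation: a 3-bit uniform quantizer for a uniform source, a binary symmetric channel with bit error probability eps, and one random index assignment; all parameter choices are assumptions for illustration.

```python
import numpy as np

# Sketch: expected distortion of a 3-bit scalar quantizer over a BSC with
# random index assignment, illustrating D_total ~ D_quantization + D_channel
# at small channel error rates.
rng = np.random.default_rng(0)
k = 3                                   # bits per sample
L = 2 ** k                              # number of code vectors
codebook = (np.arange(L) + 0.5) / L     # uniform reproduction levels on [0,1)
perm = rng.permutation(L)               # one random index assignment

def transmit(indices, eps):
    """Flip each of the k index bits independently with probability eps."""
    bits = (indices[:, None] >> np.arange(k)) & 1
    flips = rng.random(bits.shape) < eps
    noisy = bits ^ flips
    return (noisy << np.arange(k)).sum(axis=1)

n = 200_000
x = rng.random(n)
idx = np.clip((x * L).astype(int), 0, L - 1)      # nearest-neighbour encoding
sent = perm[idx]                                   # apply index assignment
inverse = np.argsort(perm)                         # decoder's inverse mapping
for eps in [0.0, 0.001, 0.01]:
    received = transmit(sent, eps)
    x_hat = codebook[inverse[received]]
    d_total = np.mean((x - x_hat) ** 2)
    d_q = np.mean((x - codebook[idx]) ** 2)        # channel-free distortion
    print(f"eps={eps:.3f}  D_total={d_total:.2e}  "
          f"D_q={d_q:.2e}  D_channel~={d_total - d_q:.2e}")
```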
Abstract:
Models of river flow time series are essential for efficient management of a river basin; they help policy makers develop efficient water utilization strategies that maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data, and the use of machine learning techniques such as support vector regression (SVR) and neural network models is gaining popularity. In this paper we compare the performance of these techniques by applying them to a long-term time series of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. Flow data over a period of 30 years from three different observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression with an epsilon-insensitive loss function is used. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
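The metrics named above are short formulas. A hedged sketch of the evaluation pipeline follows, with synthetic data in place of the Cauvery inflow series and an assumed 3-month lag structure:

```python
import numpy as np
from sklearn.svm import SVR

# Sketch: fit an epsilon-insensitive SVR to lagged flows and score it with
# RMSE, NRMSE, NSE and Pearson correlation.  Synthetic seasonal series.
rng = np.random.default_rng(1)
flow = 50 + 20 * np.sin(np.arange(400) * 2 * np.pi / 12) + rng.normal(0, 3, 400)

lags = 3  # predict flow(t) from the previous `lags` values (an assumption)
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
y = flow[lags:]
split = int(0.8 * len(y))
model = SVR(kernel="rbf", epsilon=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])
obs = y[split:]

rmse = np.sqrt(np.mean((obs - pred) ** 2))
nrmse = rmse / (obs.max() - obs.min())          # one common normalization
nse = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
corr = np.corrcoef(obs, pred)[0, 1]
print(f"RMSE={rmse:.2f}  NRMSE={nrmse:.3f}  NSE={nse:.3f}  r={corr:.3f}")
```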
Abstract:
A study combining high resolution mass spectrometry (ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry, UPLC-QTof-MS) and chemometrics for the analysis of post-mortem brain tissue from subjects with Alzheimer’s disease (AD) (n = 15) and healthy age-matched controls (n = 15) was undertaken. The huge potential of this metabolomics approach for distinguishing AD cases is underlined by the correct prediction of disease status in 94–97% of cases. Predictive power was confirmed in a blind test set of 60 samples, reaching 100% diagnostic accuracy. The approach also indicated compounds significantly altered in concentration following the onset of human AD. Using orthogonal partial least-squares discriminant analysis (OPLS-DA), a multivariate model was created for each mode of acquisition explaining the maximum amount of variation between sample groups (positive mode: R2 = 97%, Q2 = 93%, root mean squared error of validation (RMSEV) = 13%; negative mode: R2 = 99%, Q2 = 92%, RMSEV = 15%). In brain extracts, 1264 and 1457 ions of interest were detected in the positive and negative modes of acquisition, respectively. Incorporation of gender into the model increased predictive accuracy and decreased RMSEV values. High resolution UPLC-QTof-MS has not previously been employed to biochemically profile post-mortem brain tissue, and the novel methods described and validated herein prove its potential for making new discoveries related to the etiology, pathophysiology, and treatment of degenerative brain disorders.
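For readers unfamiliar with the figures of merit, a rough sketch follows. Scikit-learn offers no OPLS-DA, so plain PLS-DA stands in; the synthetic data, class labels and component count are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

# Sketch: R2 (explained y-variance), cross-validated Q2 and RMSE of
# validation for a PLS-DA model.  X stands in for the ion-intensity
# matrix; y encodes case (1) vs control (0).
rng = np.random.default_rng(2)
n, p = 30, 200
y = np.repeat([0.0, 1.0], n // 2)
X = rng.normal(size=(n, p))
X[:, :5] += y[:, None] * 1.5            # a few discriminating ions

pls = PLSRegression(n_components=2).fit(X, y)
r2 = pls.score(X, y)                    # fraction of y-variance explained

press, ss_tot = 0.0, np.sum((y - y.mean()) ** 2)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m = PLSRegression(n_components=2).fit(X[train], y[train])
    press += np.sum((y[test] - m.predict(X[test]).ravel()) ** 2)
q2 = 1 - press / ss_tot                 # cross-validated predictive ability
rmsev = np.sqrt(press / n)
print(f"R2={r2:.2f}  Q2={q2:.2f}  RMSEV={rmsev:.2f}")
```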
Abstract:
The self-consistent interaction between energetic particles and self-generated hydromagnetic waves in a cosmic ray pressure dominated plasma is considered. Using a three-dimensional hybrid magnetohydrodynamics (MHD)-kinetic code, which utilizes a spherical harmonic expansion of the Vlasov-Fokker-Planck equation, high-resolution simulations of the magnetic field growth including feedback on the cosmic rays are carried out. It is found that for shocks with high cosmic ray acceleration efficiency, the magnetic fields become highly disorganized, resulting in near isotropic diffusion, independent of the initial orientation of the ambient magnetic field. The possibility of sub-Bohm diffusion is demonstrated for parallel shocks, while the diffusion coefficient approaches the Bohm limit from below for oblique shocks. This universal behaviour suggests that Bohm diffusion in the root-mean-squared field inferred from observation may provide a realistic estimate for the maximum energy acceleration time-scale in young supernova remnants. Although disordered, the magnetic field is not self-similar, suggesting a non-uniform energy-dependent behaviour of the energetic particle transport in the precursor. Possible indirect radiative signatures of cosmic ray driven magnetic field amplification are discussed.
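The Bohm-limit estimate mentioned above reduces to simple arithmetic: D_Bohm = r_g c / 3 for an ultra-relativistic particle, and the standard diffusive-shock-acceleration scaling t_acc ~ 8 D / u_sh^2 for a parallel shock with compression ratio 4. A back-of-the-envelope sketch with illustrative numbers, not values from the paper:

```python
import numpy as np

e = 1.602e-19          # elementary charge, C
c = 2.998e8            # speed of light, m/s

def t_acc_days(E_eV, B_rms_gauss, u_sh_km_s):
    E = E_eV * e                        # particle energy in joules
    B = B_rms_gauss * 1e-4              # gauss -> tesla
    r_g = E / (e * c * B)               # gyroradius, ultra-relativistic
    D = r_g * c / 3.0                   # Bohm diffusion coefficient
    t = 8.0 * D / (u_sh_km_s * 1e3) ** 2
    return t / 86400.0

# 100 TeV proton, 100 microgauss amplified rms field, 5000 km/s shock:
print(f"t_acc ~ {t_acc_days(1e14, 1e-4, 5000):.1f} days")
```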
Abstract:
The software packages used are Splus and R.
Abstract:
Adaptive filtering is a primary method for denoising the electrocardiogram (ECG), because it does not require the signal's statistical characteristics. In this paper, an adaptive filtering technique for denoising the ECG based on a Genetic Algorithm (GA) tuned Sign-Data Least Mean Square (SD-LMS) algorithm is proposed. This technique minimizes the mean squared error between the primary input, which is a noisy ECG, and a reference input, which can be either noise that is correlated in some way with the noise in the primary input or a signal that is correlated only with the ECG in the primary input. Noise is used as the reference signal in this work. The algorithm was applied to records from the MIT-BIH Arrhythmia database to remove baseline wander and 60 Hz power line interference. The proposed algorithm gave an average signal-to-noise ratio improvement of 10.75 dB for baseline wander and 24.26 dB for power line interference, which is better than previously reported works.
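The core SD-LMS update is one line, w ← w + μ·e(n)·sign(x(n)). A minimal sketch follows, with a synthetic ECG stand-in and a fixed hand-picked step size in place of the GA tuning; all signal parameters are assumptions:

```python
import numpy as np

# Sketch: cancel a synthetic 60 Hz interference with sign-data LMS.
rng = np.random.default_rng(3)
fs, n_taps, mu = 360.0, 8, 2e-3
t = np.arange(10000) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) ** 3          # crude periodic ECG stand-in
noise = 0.5 * np.sin(2 * np.pi * 60 * t + 0.3)  # power-line interference
primary = ecg + noise
reference = np.sin(2 * np.pi * 60 * t)          # correlated with the noise

w = np.zeros(n_taps)
out = np.zeros_like(primary)
for n in range(n_taps, len(primary)):
    x = reference[n - n_taps:n][::-1]           # tap-delay line
    e = primary[n] - w @ x                      # error = cleaned ECG estimate
    w += mu * e * np.sign(x)                    # sign-data update
    out[n] = e

def snr_db(signal, residual_noise):
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(residual_noise ** 2))

m = slice(len(t) // 2, None)                    # skip the convergence transient
print(f"SNR before: {snr_db(ecg[m], noise[m]):.1f} dB,"
      f" after: {snr_db(ecg[m], (out - ecg)[m]):.1f} dB")
```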
Abstract:
This paper estimates a translog stochastic production function to examine the determinants of technical efficiency of freshwater prawn farming in Bangladesh. Primary data were collected by random sampling from 90 farmers in three villages in southwestern Bangladesh. Prawn farming displayed much variability in technical efficiency, ranging from 9.50 to 99.94% with a mean technical efficiency of 65%, suggesting that a substantial 35% of potential output could be recovered by removing inefficiency. For a land-scarce country like Bangladesh this gain could help increase income and ensure better livelihoods for the farmers. Based on the translog production function specification, farmers could be made scale efficient by providing more input to produce more output. The results suggest that farmers’ education and non-farm income significantly improve efficiency, whilst farmers’ training, farm distance from the water canal and involvement in fish farm associations reduce efficiency. Hence, the study proposes strategies such as less involvement in farming-related associations and raising the effectiveness of the farmers’ training facilities as beneficial adjustments for reducing inefficiency. Moreover, the key policy implication of the analysis is that investment in primary education would greatly improve technical efficiency.
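For reference, the translog stochastic production frontier named above has the standard composed-error form (v_i: symmetric noise; u_i ≥ 0: technical inefficiency):

```latex
\ln y_i = \beta_0 + \sum_{j} \beta_j \ln x_{ji}
        + \tfrac{1}{2} \sum_{j} \sum_{k} \beta_{jk} \ln x_{ji} \ln x_{ki}
        + v_i - u_i,
\qquad \mathrm{TE}_i = \exp(-u_i)
```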
Abstract:
The Gram-Schmidt (GS) orthogonalisation procedure has been used to improve the convergence speed of least mean square (LMS) adaptive code-division multiple-access (CDMA) detectors. However, this algorithm updates two sets of parameters, namely the GS transform coefficients and the tap weights, simultaneously. Because of the additional adaptation noise introduced by the former, it is impossible to achieve the same performance as the ideal orthogonalised LMS filter, unlike the result implied in an earlier paper. The authors provide a lower bound on the minimum achievable mean squared error (MSE) as a function of the forgetting factor λ used in finding the GS transform coefficients, and propose a variable-λ algorithm to balance the conflicting requirements of good tracking and low misadjustment.
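A toy sketch of the two coupled adaptations the abstract describes, assuming a simple system-identification setting rather than the authors' CDMA detector; the forgetting factor lam plays the role of λ:

```python
import numpy as np

# Sketch: Gram-Schmidt (GS) transform coefficients tracked with forgetting
# factor lam, with LMS running on the orthogonalised taps.
rng = np.random.default_rng(4)
N, lam, mu, n_samp = 4, 0.99, 0.05, 20000

w_true = np.array([1.0, -0.5, 0.25, 0.1])       # unknown system to identify
u = rng.normal(size=n_samp + 1)
x_series = u[1:] + 0.8 * u[:-1]                 # MA(1): correlated taps

r = np.zeros((N, N))     # running estimates of E[x_i z_j]
p = np.ones(N)           # running estimates of E[z_j^2]
w = np.zeros(N)
errs = []
for n in range(N, n_samp):
    x = x_series[n - N:n][::-1]
    d = w_true @ x + 0.01 * rng.normal()        # desired signal
    z = np.zeros(N)
    for i in range(N):                          # adaptive GS orthogonalisation
        z[i] = x[i] - sum(r[i, j] / max(p[j], 1e-9) * z[j] for j in range(i))
    for i in range(N):                          # update GS statistics
        p[i] = lam * p[i] + (1 - lam) * z[i] ** 2
        for j in range(i):
            r[i, j] = lam * r[i, j] + (1 - lam) * x[i] * z[j]
    e = d - w @ z
    w += mu * e * z / p                         # power-normalised LMS step
    errs.append(e)
print(f"steady-state RMS error ~ {np.sqrt(np.mean(np.square(errs[-2000:]))):.4f}")
```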
Abstract:
The increase in ultraviolet radiation (UV) at the surface, the high incidence of non-melanoma skin cancer (NMSC) on the coast of Northeast Brazil (NEB) and the reduction of total ozone motivated the present study. The overall objective was to identify and understand the variability of the UV Index in the capitals of the east coast of the NEB and to fit stochastic models to UV Index time series in order to make predictions (interpolations) and forecasts/projections (extrapolations), followed by trend analysis. The methodology consisted of multivariate analysis (principal component analysis and cluster analysis), the Predictive Mean Matching method for filling gaps in the data, autoregressive distributed lag (ADL) models and the Mann-Kendall test. Modeling via ADL consisted of parameter estimation, diagnostics, residual analysis and evaluation of the quality of the predictions and forecasts via mean squared error and the Pearson correlation coefficient. The results indicated that the annual variability of UV in the capital of Rio Grande do Norte (Natal) has a feature in September and October consisting of a stabilization/reduction of the UV Index because of the annual maximum in total ozone concentration; the increased amount of aerosol during this period contributes to the event with lesser intensity. The application of cluster analysis along the east coast of the NEB showed that this event also occurs in the capitals of Paraíba (João Pessoa) and Pernambuco (Recife). Extreme UV events in the NEB were analyzed for the city of Natal and were associated with the absence of cloud cover and total ozone levels below the annual average; they do not occur across the entire region because of the uneven spatial distribution of these variables. The ADL(4, 1) model, fitted to UV Index and total ozone data for the period 2001-2012, projected an increase of approximately one unit in the UV Index by the end of the next 30 years (2013-2043), should total ozone maintain the downward trend observed in the study period.
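An ADL(p, q) model is a regression on lags of the response and of the exogenous series. A hedged sketch of the ADL(4, 1) shape fitted by ordinary least squares, with synthetic placeholders for the Natal UV Index and ozone series:

```python
import numpy as np

# Sketch: ADL(4, 1) by OLS -- UV index regressed on four of its own lags
# plus current and one lagged total-ozone value.  Synthetic data.
rng = np.random.default_rng(5)
T = 240
ozone = 270 + 5 * np.sin(np.arange(T) * 2 * np.pi / 12) + rng.normal(0, 1, T)
uv = 12 - 0.02 * (ozone - 270) + rng.normal(0, 0.3, T)

p, q = 4, 1
rows = []
for t in range(p, T):
    # [intercept, uv(t-1..t-4), ozone(t), ozone(t-1)]
    rows.append(np.r_[1.0, uv[t - p:t][::-1], ozone[t], ozone[t - 1]])
X, y = np.array(rows), uv[p:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
rmse = np.sqrt(np.mean((y - fit) ** 2))
corr = np.corrcoef(y, fit)[0, 1]
print(f"ADL({p},{q}) in-sample RMSE={rmse:.3f}, Pearson r={corr:.3f}")
```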
Abstract:
Graduate program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique which is commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single voxel t-tests have been commonly used to determine whether activation related to the protocol differs across groups. Due to the generally limited number of subjects within each study, accurate estimation of variance at each voxel is difficult. Thus, combining information across voxels in the statistical analysis of fMRI data is desirable in order to improve efficiency. Here we construct a hierarchical model and apply an Empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. The methodology was also applied to experimental data from a cognitive activation task.
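The shrinkage step admits a compact sketch in the spirit of the limma-style estimators the authors borrow from genomics. The prior degrees of freedom d0 and prior variance are set by hand here as assumptions; in practice they are estimated from the data:

```python
import numpy as np

# Sketch: shrink voxel-wise residual variances toward a common value and
# form a moderated t-statistic for a two-group comparison.
rng = np.random.default_rng(6)
n_vox, n_a, n_b = 5000, 10, 10                  # voxels, subjects per group
a = rng.normal(0.0, 1.0, (n_vox, n_a))
b = rng.normal(0.0, 1.0, (n_vox, n_b))
b[:100] += 0.8                                  # 100 truly active voxels

diff = a.mean(1) - b.mean(1)
df = n_a + n_b - 2
s2 = (a.var(1, ddof=1) * (n_a - 1) + b.var(1, ddof=1) * (n_b - 1)) / df

d0, s0_2 = 4.0, np.median(s2)                   # prior df and prior variance
s2_shrunk = (d0 * s0_2 + df * s2) / (d0 + df)   # shrinkage toward s0^2

se = np.sqrt(s2 * (1 / n_a + 1 / n_b))
se_mod = np.sqrt(s2_shrunk * (1 / n_a + 1 / n_b))
t_classic = diff / se
t_moderated = diff / se_mod                     # reference df: d0 + df
print(f"classic |t|>3: {np.sum(np.abs(t_classic) > 3)},"
      f" moderated |t|>3: {np.sum(np.abs(t_moderated) > 3)}")
```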
Abstract:
Surface sediments from 68 small lakes in the Alps and 9 well-dated sediment core samples that cover a gradient of total phosphorus (TP) concentrations of 6 to 520 μg TP l-1 were studied for diatom, chrysophyte cyst, cladocera, and chironomid assemblages. Inference models for mean circulation log10 TP were developed for diatoms, chironomids, and benthic cladocera using weighted-averaging partial least squares. After screening for outliers, the final transfer functions have coefficients of determination (r2), as assessed by cross-validation, of 0.79 (diatoms), 0.68 (chironomids), and 0.49 (benthic cladocera). Planktonic cladocera and chrysophytes show very weak relationships to TP and no TP inference models were developed for these biota. Diatoms showed the best relationship with TP, whereas the other biota all have large secondary gradients, suggesting that variables other than TP have a strong influence on their composition and abundance. Comparison with other diatom – TP inference models shows that our model has high predictive power and a low root mean squared error of prediction, as assessed by cross-validation.
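Plain weighted averaging (WA), the simpler relative of the weighted-averaging partial least squares used in the study, conveys the mechanics of such transfer functions. A sketch with synthetic abundance data and inverse deshrinking; every numeric choice is an assumption:

```python
import numpy as np

# Sketch: WA transfer function -- estimate taxon optima along the observed
# log10 TP gradient, infer TP from assemblages, then deshrink.
rng = np.random.default_rng(7)
n_lakes, n_taxa = 68, 40
x = rng.uniform(0.8, 2.7, n_lakes)              # observed log10 TP, ~6-500 ug/l
optima_true = rng.uniform(0.8, 2.7, n_taxa)
Y = np.exp(-((x[:, None] - optima_true) ** 2) / 0.2) + 0.01
Y /= Y.sum(1, keepdims=True)                    # relative abundances per lake

optima = (Y * x[:, None]).sum(0) / Y.sum(0)     # abundance-weighted optima
x_wa = (Y * optima).sum(1) / Y.sum(1)           # raw WA inferences
b1, b0 = np.polyfit(x_wa, x, 1)                 # inverse deshrinking regression
x_hat = b0 + b1 * x_wa
rmse = np.sqrt(np.mean((x - x_hat) ** 2))       # apparent error; a real study
print(f"apparent RMSE = {rmse:.3f} log10 TP")   # would cross-validate instead
```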
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples from 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, Cp and Sp, each combined with an 'all possible subsets' or 'forward selection' of variables. The estimators of performance utilized include parametric (MSEPm) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures. The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of Cp and Sp. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample. Only the random split estimator is conditionally (on β) unbiased; however, MSEPm is unbiased on average and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, MSEPm and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value within the context of stochastic regressor variables. To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development, and a leave-one-out statistic (e.g. PRESS) be used for assessment.
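The leave-one-out statistic recommended in the closing sentence can be computed without refitting, via the OLS identity e_(i) = e_i / (1 - h_ii). A minimal sketch on synthetic data:

```python
import numpy as np

# Sketch: PRESS for an OLS fit via the hat-matrix shortcut.
rng = np.random.default_rng(8)
n, p = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, 0.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(0, 1.0, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T            # hat matrix
press = np.sum((resid / (1 - np.diag(H))) ** 2)  # leave-one-out SSE
mse = np.sum(resid ** 2) / (n - p - 1)
print(f"PRESS = {press:.2f}  (residual MSE = {mse:.2f})")
```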
Abstract:
Sustainable management of the natural resources involved in water-resource utilization projects (among others) often requires modification of the existing relief. This entails adapting the homogeneous upper layer of the soil, an operation usually called land levelling ("sistematización"), which allows a more uniform distribution of rainfall and irrigation water. This modification of the upper soil layer is carried out according to a design whose slopes follow the natural gradients or those set by the designer. When the design is executed over areas larger than one hectare, the earth is moved with heavy equipment, which does not guarantee high earthmoving efficiency, since part of the material is lost during hauling and, above all, through its non-uniform compaction, associated with the complex textures of the soil being worked. The present study determined the precision index of the execution of the levelling project using an internationally accepted statistical index, the Root Mean Squared Error (RMSE), by comparing the designed elevations with those actually obtained after execution, in three plots with different sequences of operations and machinery but the same soil type, in the Pilar - La Plata axis (Argentina). The results, with RMSE values ranging from 4 to 6 cm, support the conclusion that, for the sites and conditions studied, precision indices below 4 cm cannot be guaranteed in the execution of land-levelling works.
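The precision index used here is the ordinary RMSE between designed and as-built elevations. A minimal sketch with illustrative values, not the field data:

```python
import numpy as np

# Sketch: RMSE of as-built vs designed elevations, reported in centimetres.
designed = np.array([10.00, 10.05, 10.10, 10.15, 10.20])   # metres
asbuilt  = np.array([10.03, 10.01, 10.14, 10.11, 10.26])   # metres
rmse_cm = np.sqrt(np.mean((designed - asbuilt) ** 2)) * 100
print(f"RMSE = {rmse_cm:.1f} cm")
```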
Abstract:
The purpose of this study was to compare a number of state-of-the-art methods in airborne laser scanning (ALS) remote sensing with regards to their capacity to describe tree size inequality and other indicators related to forest structure. The indicators chosen were based on the analysis of the Lorenz curve: Gini coefficient (GC), Lorenz asymmetry (LA), and the proportions of basal area (BALM) and stem density (NSLM) stocked above the mean quadratic diameter. Each method belonged to one of these estimation strategies: (A) estimating indicators directly; (B) estimating the whole Lorenz curve; or (C) estimating a complete tree list. Across these strategies, the most popular statistical methods for the area-based approach (ABA) were used: regression, random forest (RF), and nearest neighbour imputation. The latter included distance metrics based on either RF (NN–RF) or most similar neighbour (MSN). In the case of tree list estimation, methods based on individual tree detection (ITD) and semi-ITD, both combined with MSN imputation, were also studied. The most accurate method was direct estimation by best subset regression, which obtained the lowest cross-validated coefficients of variation of the root mean squared error, CV(RMSE), for most indicators: GC (16.80%), LA (8.76%), BALM (8.80%) and NSLM (14.60%). Similar figures [CV(RMSE) 16.09%, 10.49%, 10.93% and 14.07%, respectively] were obtained by MSN imputation of tree lists by ABA, a method that also showed a number of additional advantages, such as better distributing the residual variance along the predictive range. In light of our results, ITD approaches may be clearly inferior to ABA with regards to describing the structural properties related to tree size inequality in forested areas.
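The Lorenz-curve indicators compared above are all computable from a sorted list of tree basal areas. A hedged sketch with a synthetic sample; the LA formula follows Damgaard and Weiner's (2000) definition, and the BALM line uses the fact that basal area scales with squared diameter, so "above the mean quadratic diameter" is equivalent to "above the mean basal area":

```python
import numpy as np

# Sketch: GC, LA and BALM from individual-tree basal areas (synthetic).
rng = np.random.default_rng(9)
basal_area = rng.lognormal(mean=-3.0, sigma=0.7, size=500)  # m^2 per tree

x = np.sort(basal_area)
n = x.size
# Gini coefficient from the sorted sample.
gini = np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * x.sum())
# Lorenz asymmetry coefficient at the mean (Damgaard & Weiner 2000).
mu = x.mean()
m = np.searchsorted(x, mu)                       # trees below the mean
delta = (mu - x[m - 1]) / (x[m] - x[m - 1])      # interpolation fraction
lac = (m + delta) / n + (x[:m].sum() + delta * x[m]) / x.sum()
# Share of basal area in trees above the mean basal area.
balm = x[x > mu].sum() / x.sum()
print(f"GC={gini:.3f}  LA={lac:.3f}  BALM={balm:.3f}")
```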