939 results for "Mean square error methods"


Relevance: 100.00%

Abstract:

We analyze the effect of packet losses in video sequences and propose a lightweight Unequal Error Protection strategy which, by choosing which packet is discarded, strongly reduces the Mean Square Error of the received sequence.
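As a rough illustration of the selection criterion above (not the authors' actual algorithm), the sketch below picks the packet whose loss least degrades the reconstructed sequence; decode_without is a hypothetical decoder/concealment stub.

import numpy as np

def mse(a, b):
    """Mean square error between two video sequences (frames x H x W)."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def pick_packet_to_drop(original, packets, decode_without):
    """Return the index of the packet whose discard yields the smallest MSE.

    decode_without(packets, i) is a hypothetical function that reconstructs
    the sequence with packet i missing (e.g. using error concealment).
    """
    costs = [mse(original, decode_without(packets, i)) for i in range(len(packets))]
    return int(np.argmin(costs))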

Relevance: 100.00%

Abstract:

The algorithms and graphical user interface software package "OPT-PROx" were developed to meet food engineering needs related to canned food thermal processing simulation and optimization. The adaptive random search algorithm and its modification coupled with a penalty-function approach, and the finite difference method with cubic spline approximation, are utilized by the "OPT-PROx" package (http://tomakechoice.com/optprox/index.html). A diversity of thermal food processing optimization problems with different objectives and required constraints can be solved by the developed software. The geometries supported by "OPT-PROx" are the following: (1) cylinder, (2) rectangle, (3) sphere. The mean square error minimization principle is utilized in order to estimate the heat transfer coefficient of the food to be heated under optimal conditions. The user-friendly dialogue and the numerical procedures employed make the "OPT-PROx" software useful to food scientists in research and education, as well as to engineers involved in optimization of thermal food processing.
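A minimal sketch of the mean-square-error minimization principle mentioned above, assuming a simple lumped-capacitance temperature model rather than the finite-difference solver actually used by "OPT-PROx"; the model form, parameter names and bounds are illustrative only.

import numpy as np
from scipy.optimize import minimize_scalar

def simulate(h, t, T0=20.0, T_env=121.0, area=0.05, mass=0.4, cp=3500.0):
    # Hypothetical lumped-capacitance heating curve, used as a stand-in model.
    k = h * area / (mass * cp)
    return T_env + (T0 - T_env) * np.exp(-k * t)

def fit_h(t_meas, T_meas):
    """Find the heat transfer coefficient h minimizing the MSE between model and data."""
    mse = lambda h: np.mean((simulate(h, t_meas) - T_meas) ** 2)
    res = minimize_scalar(mse, bounds=(1.0, 500.0), method="bounded")
    return res.x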

Relevance: 100.00%

Abstract:

Distributed target tracking in wireless sensor networks (WSN) is an important problem in which agreement on the target state can be achieved using conventional consensus methods, which take a long time to converge. We propose distributed particle filtering based on belief propagation (DPF-BP) consensus, a fast method for target tracking. According to our simulations, DPF-BP provides better performance than DPF based on standard belief consensus (DPF-SBC) in terms of disagreement in the network. However, in terms of root mean square error, it can outperform DPF-SBC only for a specific number of consensus iterations.
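For orientation, a plain average-consensus update of the kind referred to above as a conventional consensus method is sketched below; it is not the DPF-BP or DPF-SBC algorithm itself, and the step size and iteration count are illustrative.

import numpy as np

def average_consensus(x0, A, iters=20, eps=0.2):
    """Plain average-consensus sketch.

    x0: (n_agents, d) local state estimates; A: (n, n) 0/1 adjacency matrix.
    Each iteration mixes a node's estimate with those of its neighbors.
    """
    x = np.asarray(x0, dtype=float).copy()
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1, keepdims=True)
    for _ in range(iters):
        x = x + eps * (A @ x - deg * x)   # Laplacian-based update
    return x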

Relevance: 100.00%

Abstract:

Background: Lean bodyweight (LBW) has been recommended for scaling drug doses. However, the current methods for predicting LBW are inconsistent at extremes of size and could be misleading with respect to interpreting weight-based regimens. Objective: The objective of the present study was to develop a semi-mechanistic model to predict fat-free mass (FFM) from subject characteristics in a population that includes extremes of size. FFM is considered to closely approximate LBW. There are several reference methods for assessing FFM, whereas there are no reference standards for LBW. Patients and methods: A total of 373 patients (168 male, 205 female) were included in the study. These data arose from two populations. Population A (index dataset) contained anthropometric characteristics, FFM estimated by dual-energy x-ray absorptiometry (DXA, a reference method) and bioelectrical impedance analysis (BIA) data. Population B (test dataset) contained the same anthropometric measures and FFM data as population A, but excluded BIA data. The patients in population A had a wide range of age (18-82 years), bodyweight (40.7-216.5 kg) and BMI values (17.1-69.9 kg/m^2). Patients in population B had BMI values of 18.7-38.4 kg/m^2. A two-stage semi-mechanistic model to predict FFM was developed from the demographics of population A. In stage 1, a model was developed to predict impedance; in stage 2, a model that incorporated the predicted impedance was used to predict FFM. These two models were combined to provide an overall model to predict FFM from patient characteristics. The developed model for FFM was externally evaluated by predicting into population B. Results: The semi-mechanistic model to predict impedance incorporated sex, height and bodyweight. The developed model provides a good predictor of impedance for both males and females (r^2 = 0.78, mean error [ME] = 2.30 x 10^-3, root mean square error [RMSE] = 51.56 [approximately 10% of mean]). The final model for FFM incorporated sex, height and bodyweight. The developed model for FFM provided good predictive performance for both males and females (r^2 = 0.93, ME = -0.77, RMSE = 3.33 [approximately 6% of mean]). In addition, the model accurately predicted the FFM of subjects in population B (r^2 = 0.85, ME = -0.04, RMSE = 4.39 [approximately 7% of mean]). Conclusions: A semi-mechanistic model has been developed to predict FFM (and therefore LBW) from easily accessible patient characteristics. This model has been prospectively evaluated and shown to have good predictive performance.
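The evaluation statistics reported above (mean error, root mean square error and r^2) follow their standard definitions; the sketch below is generic, not the authors' evaluation code.

import numpy as np

def prediction_metrics(observed, predicted):
    """Mean error (bias), root mean square error and squared correlation."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - observed
    me = err.mean()                                    # mean error (bias)
    rmse = np.sqrt((err ** 2).mean())                  # root mean square error
    r2 = np.corrcoef(observed, predicted)[0, 1] ** 2   # squared correlation
    return me, rmse, r2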

Relevance: 100.00%

Abstract:

Background: The residue-wise contact order (RWCO) describes the sequence separations between a residue of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure that represents the extent of long-range contacts and is considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable information for reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and give deep insights into protein sequence-structure relationships. Results: We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance, including local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55, and a root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences. Moreover, by incorporating global features such as molecular weight and amino acid composition, we could further improve the prediction performance, raising the CC to 0.57 and lowering the RMSE to 0.79. In addition, combining the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and yielded the best prediction accuracy, with a CC of 0.60 and an RMSE of 0.78, which is at least comparable to other existing methods. Conclusion: The SVR method shows a prediction performance competitive with, or at least comparable to, previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool for extracting the protein sequence-structure relationship and for estimating protein structural profiles from amino acid sequences.
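A bare-bones SVR sketch in the spirit of the approach above, using scikit-learn's SVR with assumed hyperparameters; building the feature matrix from sliding-window PSI-BLAST profiles (and optional global features) is not reproduced here.

import numpy as np
from sklearn.svm import SVR

def train_and_evaluate(X_train, y_train, X_test, y_test):
    """Fit an RBF-kernel SVR and report Pearson CC and RMSE on held-out data."""
    model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    cc = np.corrcoef(y_test, pred)[0, 1]              # Pearson correlation coefficient
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))     # root mean square error
    return model, cc, rmse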

Relevance: 100.00%

Abstract:

Few-mode fiber transmission systems are typically impaired by mode-dependent loss (MDL). In an MDL-impaired link, maximum-likelihood (ML) detection yields a significant advantage in system performance compared to linear equalizers, such as zero-forcing and minimum mean square error (MMSE) equalizers. However, the computational effort of ML detection increases exponentially with the number of modes and the cardinality of the constellation. We present two methods that allow for near-ML performance without being afflicted with the enormous computational complexity of ML detection: improved reduced-search ML detection and sphere decoding. Both algorithms are tested regarding their performance and computational complexity in simulations of three and six spatial modes with QPSK and 16QAM constellations.
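For context, the zero-forcing and MMSE linear equalizers mentioned above can be sketched for a memoryless model y = Hx + n; this is a generic textbook formulation assuming unit-power symbols, not the paper's implementation of reduced-search ML detection or sphere decoding.

import numpy as np

def zf_equalize(H, y):
    """Zero-forcing estimate: pseudo-inverse of the channel applied to y."""
    return np.linalg.pinv(H) @ y

def mmse_equalize(H, y, sigma2):
    """MMSE estimate for unit-power symbols and noise variance sigma2."""
    n_tx = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_tx)) @ H.conj().T
    return W @ y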

Relevance: 100.00%

Abstract:

This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space, and confidence intervals indicated that sampling fluctuations could account for these deviant parameters. The results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½- to 18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies.
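For reference, the Root Mean Square Error of Approximation used above as the primary fit index is conventionally computed from the model chi-square, its degrees of freedom and the sample size; this is the standard textbook definition, and the exact estimator in the authors' software may differ slightly (e.g. using N rather than N - 1):

RMSEA = sqrt( max(chi^2 - df, 0) / (df * (N - 1)) )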

Relevance: 100.00%

Abstract:

Data fluctuation in multiple measurements of Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, the information in the spectral data that follows the normal distribution is retained in the regression model, while the information from outliers is restrained or removed. Copper elemental concentration analysis experiments were carried out on 16 certified standard brass samples. The average relative standard deviation obtained with the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function offered a better overall balance of model robustness and convergence speed than the four known weighting functions.
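A hypothetical segmented weighting function in the spirit of robust/weighted LS-SVM schemes is sketched below: residuals near the centre of the distribution keep full weight, moderate outliers are tapered, and gross outliers are removed. The thresholds and segment boundaries are assumptions, not the ones published for the RLS-SVM above.

import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0):
    """Return per-sample weights from standardized residuals (illustrative segments)."""
    residuals = np.asarray(residuals, dtype=float)
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))  # robust scale (MAD)
    z = np.abs(residuals) / (s + 1e-12)
    w = np.ones_like(z)
    mid = (z > c1) & (z <= c2)
    w[mid] = (c2 - z[mid]) / (c2 - c1)   # linear taper for moderate outliers
    w[z > c2] = 0.0                      # discard gross outliers
    return w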

Relevance: 100.00%

Abstract:

This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors are considered, listed from highest to lowest priority: (1) error rate on the testing set, (2) processing time needed to recognize a segmented character, and (3) total training time and, subsequently, total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for lower-case and upper-case character recognition. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, with 10,000 for each case; recognizing 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
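The mean quartic error described above is shown below alongside the usual mean square error, with their gradients written out explicitly; this is an illustrative sketch rather than the dissertation's training code.

import numpy as np

def mean_square_error(y_pred, y_true):
    """Return (loss, d(loss)/d(y_pred)) for the mean square error."""
    e = y_pred - y_true
    return np.mean(e ** 2), 2.0 * e / e.size

def mean_quartic_error(y_pred, y_true):
    """Return (loss, d(loss)/d(y_pred)) for the mean quartic error."""
    e = y_pred - y_true
    return np.mean(e ** 4), 4.0 * e ** 3 / e.size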

Relevance: 100.00%

Abstract:

Sea-level variations have a significant impact on coastal areas, and predicting them is among the most critical information needs associated with the marine environment; various methods exist for this purpose. In this study, carried out on the northern coast of the Persian Gulf, the influence on sea level of local parameters such as pressure, temperature and wind speed, together with global parameters such as the North Atlantic Oscillation (NAO) index, was examined, and statistical models for predicting sea level were developed. In the next step, an artificial neural network was used to predict sea level for the first time in this region, and the results of the models were compared. Prediction using the statistical models yielded a correlation coefficient of R = 0.84 and a root mean square error (RMSE) of 21.9 cm for the Bushehr station, and R = 0.85 with an RMSE of 48.4 cm for the Rajai station. The neural network, which performed best with 4 layers and six neurons in each hidden layer, produced reliable results with R = 0.90126 and an RMSE of 13.7 cm for the Bushehr station, and R = 0.93916 and an RMSE of 22.6 cm for the Rajai station. The proposed methodology could therefore be successfully used in the study area.
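A sketch of a small feed-forward network matching the architecture described above (input layer, two hidden layers of six neurons, output layer), using scikit-learn's MLPRegressor; the actual predictors, preprocessing and training setup of the study are not reproduced, and the activation and iteration settings are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_sea_level_model(X_train, y_train):
    """Fit a 4-layer network: input, two hidden layers of 6 neurons, output."""
    model = MLPRegressor(hidden_layer_sizes=(6, 6), activation="tanh",
                         max_iter=5000, random_state=0)
    model.fit(X_train, y_train)
    return model

def evaluate(model, X_test, y_test):
    """Report the correlation coefficient R and RMSE on held-out data."""
    pred = model.predict(X_test)
    r = np.corrcoef(y_test, pred)[0, 1]
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    return r, rmse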

Relevance: 100.00%

Abstract:

Background: Appetite and symptoms, conditions generally reported by patients with cancer, are somewhat challenging for professionals to measure directly in clinical routine (latent conditions). Therefore, specific instruments are required for this purpose. This study aimed to perform a cultural adaptation of the Cancer Appetite and Symptom Questionnaire (CASQ) into Portuguese and evaluate its psychometric properties in a sample of Brazilian cancer patients. Methods: This is a validation study with Brazilian cancer patients. The face, content, and construct (factorial and convergent) validities of the Cancer Appetite and Symptom Questionnaire, the study tool, were estimated. Further, a confirmatory factor analysis (CFA) was conducted. The ratio of chi-square to degrees of freedom (χ²/df), the comparative fit index (CFI), the goodness of fit index (GFI) and the root mean square error of approximation (RMSEA) were used to assess model fit. In addition, the reliability of the instrument was estimated using composite reliability (CR) and Cronbach's alpha coefficient (α), and the invariance of the model in independent samples was estimated by a multigroup analysis (Δχ²). Results: Participants included 1,140 cancer patients with a mean age of 53.95 (SD = 13.25) years; 61.3% were women. After the CFA of the original CASQ structure, 2 items with inadequate factor weights were removed. Four correlations between errors were included to provide adequate fit to the sample (χ²/df = 8.532, CFI = .94, GFI = .95, and RMSEA = .08). The model exhibited low convergent validity (average variance extracted [AVE] = .32). The reliability was adequate (CR = .82, α = .82). The refined model showed strong invariance in two independent samples (Δχ²: λ: p = .855; i: p = .824; Res: p = .390). Weak invariance was obtained between patients undergoing chemotherapy and radiotherapy (Δχ²: λ: p = .155; i: p < .001; Res: p < .001), and between patients undergoing chemotherapy combined with radiotherapy and palliative care (Δχ²: λ: p = .058; i: p < .001; Res: p < .001). Conclusion: The Portuguese version of the CASQ had good face and construct validity and reliability, and presented invariance in independent samples of Brazilian patients with cancer. However, the tool has low convergent validity and weak invariance across samples under different treatments.
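The reliability statistics reported above follow standard definitions; the sketch below gives generic formulas for Cronbach's alpha and composite reliability, not the authors' analysis scripts.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def composite_reliability(loadings, error_vars):
    """Composite reliability from standardized factor loadings and error variances."""
    num = np.asarray(loadings, dtype=float).sum() ** 2
    return num / (num + np.asarray(error_vars, dtype=float).sum())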

Relevance: 100.00%

Abstract:

Objective: To evaluate the validity, reliability, and factorial invariance of the complete Portuguese version of the Oral Health Impact Profile (OHIP) and its short version (OHIP-14). Methods: A total of 1,162 adults enrolled at the Faculty of Dentistry of Araraquara/UNESP participated in the study; 73.1% were women, and the mean age was 40.7 ± 16.3 years. We conducted a confirmatory factor analysis, in which χ²/df, the comparative fit index, the goodness of fit index, and the root mean square error of approximation were used as indices of goodness of fit. Convergent validity was judged from the average variance extracted and the composite reliability, and internal consistency was estimated by Cronbach's standardized alpha. The stability of the models was evaluated by multigroup analysis in independent samples (test and validation) and between users and nonusers of dental prostheses. Results: We found the best-fitting models for the OHIP-14 and among dental prosthesis users. Convergent validity was below adequate values for the factors "functional limitation" and "physical pain" in the complete version and for the factors "functional limitation" and "psychological discomfort" in the OHIP-14. Values of composite reliability and internal consistency were below adequate in the OHIP-14 for the factors "functional limitation" and "psychological discomfort." We detected strong invariance between the test and validation samples for the full version and weak invariance for the OHIP-14. The models for users and nonusers of dental prostheses were not invariant for either version. Conclusion: The reduced version of the OHIP was parsimonious, reliable, and valid for capturing the construct "impact of oral health on quality of life," which was more pronounced in prosthesis users.
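The average variance extracted used above to judge convergent validity has a standard definition in terms of standardized factor loadings; a generic sketch, not the authors' code.

import numpy as np

def average_variance_extracted(loadings):
    """AVE as the mean of squared standardized loadings (error variance = 1 - loading^2)."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))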

Relevance: 100.00%

Abstract:

Many efforts are currently oriented toward extracting more information from ocean color than the chlorophyll a concentration. Among the biological parameters potentially accessible from space, estimates of phytoplankton cell size and light absorption by colored detrital matter (CDM) would lead to an indirect assessment of major components of the organic carbon pool in the ocean, which would benefit oceanic carbon budget models. We present here 2 procedures to retrieve simultaneously from ocean color measurements, in a limited number of bands, magnitudes and spectral shapes for both light absorption by CDM and phytoplankton, along with a size parameter for phytoplankton. The performance of the 2 procedures was evaluated using different data sets that correspond to increasing uncertainties: (1) measured absorption coefficients of phytoplankton, particulate detritus, and colored dissolved organic matter (CDOM) and measured chlorophyll a concentrations, and (2) SeaWiFS upwelling radiance measurements and chlorophyll a concentrations estimated from global algorithms. In situ data were acquired during 3 cruises, differing in their relative proportions of CDM and phytoplankton, over a continental shelf off Brazil. No local information was introduced in either procedure, to make them more generally applicable. Over the study area, the absorption coefficient of CDM at 443 nm was retrieved from SeaWiFS radiances with a relative root mean square error (RMSE) of 33%, and phytoplankton light absorption coefficients in SeaWiFS bands (from 412 to 510 nm) were retrieved with RMSEs between 28% and 33%. These results are comparable to or better than those obtained with 3 published models. In addition, a size parameter of phytoplankton and the spectral slope of CDM absorption were retrieved with RMSEs of 17% and 22%, respectively. If these methods are applied at a regional scale, the performance could be substantially improved by locally tuning some empirical relationships.

Relevance: 100.00%

Abstract:

Master's degree in Actuarial Science

Relevance: 100.00%

Abstract:

Purpose: To evaluate and compare the performance of the Ripplet Type-1 transform and the directional discrete cosine transform (DDCT), and their combinations, for improved representation of MRI images while preserving fine features such as edges along smooth curves and textures. Methods: In a novel image representation method based on the fusion of the Ripplet Type-1 and conventional/directional DCT transforms, source images were enhanced in terms of visual quality using Ripplet, DDCT and their various combinations. The enhancement achieved was quantified on the basis of peak signal-to-noise ratio (PSNR), mean square error (MSE), structural content (SC), average difference (AD), maximum difference (MD), normalized cross correlation (NCC), and normalized absolute error (NAE). To determine the attributes of both transforms, they were also combined to represent the entire image. All possible combinations were tested to present a complete study, and the contrasts among all the combinations were evaluated. Results: Using the direct combining method, with DDCT first and then the Ripplet method, a PSNR value of 32.3512 was obtained, which is higher than the PSNR values of the other combinations. This newly designed technique gives a PSNR value approximately equal to the PSNRs of the parent techniques. It was also able to preserve edge information, texture information and various other directional image features. The fusion of DDCT followed by Ripplet reproduced the best images. Conclusion: The transformation of images using Ripplet followed by DDCT ensures a more efficient method for the representation of images, with preservation of fine details such as edges and textures.
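The two error-based metrics cited above (MSE and PSNR) follow their standard definitions for 8-bit images; a generic sketch, not the study's evaluation code.

import numpy as np

def mse(img_a, img_b):
    """Mean square error between two images of the same shape."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    return np.mean((a - b) ** 2)

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    m = mse(img_a, img_b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)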