899 results for Error correction methods
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines applied to the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values of the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of wavelet family seems less relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
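The two kernel functions compared in the study have simple closed forms. A minimal sketch of one common parameterization of each (function names and the example vectors are illustrative, not taken from the paper):

```python
import math

def gaussian_rbf(x, y, radius):
    # Gaussian RBF: k(x, y) = exp(-||x - y||^2 / (2 * radius^2))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * radius ** 2))

def exponential_rbf(x, y, radius):
    # Exponential RBF: k(x, y) = exp(-||x - y|| / (2 * radius^2))
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-dist / (2 * radius ** 2))

# Nearby feature vectors score close to 1; distant ones decay towards 0
print(gaussian_rbf([1.0, 2.0], [1.1, 2.1], radius=1.0))
```

The radius is the single kernel parameter swept over 26 values in the study; small radii make the kernel sharply local, large radii make it nearly flat, which is consistent with the "zones of stability and sharp variation" the sensitivity profiles describe.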
Abstract:
The aim of this paper is to highlight some methods for representing image-based (imagetic) information, reviewing the literature of the area and proposing a methodological model adapted to Brazilian museums. A methodology for representing image-based information is developed, based on Brazilian practices of information treatment, in order to adapt it to museums. Finally, spreadsheets illustrating this methodology are presented.
Abstract:
Aim. To identify the impact of pain on quality of life (QOL) of patients with chronic venous ulcers. Methods. A cross-sectional study was performed on 40 outpatients with chronic venous ulcers who were recruited at one outpatient care center in Sao Paulo, Brazil. The WHOQOL-Bref was used to assess QOL, the McGill Pain Questionnaire-Short Form (MPQ) to identify pain characteristics, and an 11-point numerical pain rating scale to measure pain intensity. Kruskal-Wallis or ANOVA tests, with post-hoc correction (Tukey test), were applied to compare groups. Multiple linear regression models were used. Results. The mean age of the patients was 67 +/- 11 years (range, 39-95 years), and 26 (65%) were women. The prevalence of pain was 90%, with a worst-pain mean intensity of 6.2 +/- 3.5. Severe pain was the most prevalent (21 patients, 52.5%). The pain most frequently reported was sensory-discriminative and evaluative in quality. Pain was significantly and negatively correlated with physical (PY), environmental (EV), and overall QOL. Compared to a no-pain group, those with pain had lower overall QOL. On multiple analyses, pain remained a predictor of overall QOL (beta = -0.73, P = 0.03) and was also predictive of social QOL (beta = -3.85, P = 0.00), whereas pain did not have any impact on physical, emotional, or social relationships QOL when adjusted for age, number, duration and frequency of wounds, pain dimension (MPQ), partnership, and economic status. Conclusion. To improve the QOL of outpatients with chronic venous ulcers, both the qualities and the intensity of pain must be considered.
Abstract:
ARTIOLI, G. G., B. GUALANO, E. FRANCHINI, F. B. SCAGLIUSI, M. TAKESIAN, M. FUCHS, and A. H. LANCHA. Prevalence, Magnitude, and Methods of Rapid Weight Loss among Judo Competitors. Med. Sci. Sports Exerc., Vol. 42, No. 3, pp. 436-442, 2010. Purpose: To identify the prevalence, magnitude, and methods of rapid weight loss among judo competitors. Methods: Athletes (607 males and 215 females; age = 19.3 +/- 5.3 yr, weight = 70 +/- 7.5 kg, height = 170.6 +/- 9.8 cm) completed a previously validated questionnaire developed to evaluate rapid weight loss in judo athletes, which provides a score; the higher the score obtained, the more aggressive the weight loss behaviors. Data were analyzed using descriptive statistics and frequency analyses. Mean scores obtained in the questionnaire were used to compare specific groups of athletes using, when appropriate, the Mann-Whitney U-test or general linear model one-way ANOVA followed by the Tamhane post hoc test. Results: Eighty-six percent of athletes reported that they had already lost weight to compete. When heavyweights are excluded, this percentage rises to 89%. Most athletes reported reductions of up to 5% of body weight (mean +/- SD: 2.5 +/- 2.3%). The most weight ever lost was typically 2%-5%, whereas a large proportion of athletes reported reductions of 5%-10% (mean +/- SD: 6 +/- 4%). The number of reductions undergone in a season was 3 +/- 5. The reductions usually occurred within 7 +/- 7 d. Athletes began cutting weight at 12.6 +/- 6.1 yr. No significant differences were found in the score obtained by male versus female athletes or by athletes from different weight classes. Elite athletes scored significantly higher in the questionnaire than nonelite athletes. Athletes who began cutting weight earlier also scored higher than those who began later. Conclusions: Rapid weight loss is highly prevalent in judo competitors.
The level of aggressiveness in weight management behaviors does not seem to be influenced by gender or weight class, but it does seem to be influenced by competitive level and by the age at which athletes began cutting weight.
Abstract:
The aim of the present study was to compare and correlate the training impulse (TRIMP) estimates proposed by Banister (TRIMP(Banister)), Stagno (TRIMP(Stagno)) and Manzi (TRIMP(Manzi)). The subjects underwent an incremental test on a cycle ergometer with heart rate and blood lactate concentration measurements. On a second occasion, they performed 30 min of exercise at the intensity corresponding to the maximal lactate steady state, and TRIMP(Banister), TRIMP(Stagno) and TRIMP(Manzi) were calculated. The mean values of TRIMP(Banister) (56.5 +/- 8.2 a.u.) and TRIMP(Stagno) (51.2 +/- 12.4 a.u.) were not different (P > 0.05) and were highly correlated (r = 0.90). In addition, they presented a good level of agreement, i.e., low bias and relatively narrow limits of agreement. On the other hand, although highly correlated (r = 0.93), TRIMP(Stagno) and TRIMP(Manzi) (73.4 +/- 17.6 a.u.) were different (P < 0.05), with a low level of agreement. The TRIMP(Banister) and TRIMP(Manzi) estimates were not different (P = 0.06) and were highly correlated (r = 0.82), but showed a low level of agreement. Thus, we conclude that the investigated TRIMP methods are not equivalent. In practical terms, it seems prudent to monitor the training process using only one of the estimates.
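For context, the Banister TRIMP weighs session duration by an exponential function of the fractional heart-rate reserve. A sketch using the commonly cited coefficients (0.64/1.92 for men, 0.86/1.67 for women); the function name and example heart rates are illustrative, and the paper's exact implementation may differ:

```python
import math

def trimp_banister(duration_min, hr_ex, hr_rest, hr_max, male=True):
    # Fractional heart-rate reserve during the session
    delta_hr = (hr_ex - hr_rest) / (hr_max - hr_rest)
    # Standard weighting coefficients (male vs. female)
    b, k = (0.64, 1.92) if male else (0.86, 1.67)
    return duration_min * delta_hr * b * math.exp(k * delta_hr)

# 30 min at an intensity near the maximal lactate steady state
# (illustrative heart rates, not data from the study)
print(trimp_banister(30, hr_ex=160, hr_rest=60, hr_max=190))
```

The Stagno and Manzi variants keep the same duration-times-weighting structure but replace the fixed exponential coefficients with values anchored to lactate thresholds or to the individual heart-rate/lactate profile, which is why the estimates can correlate strongly yet disagree in absolute value.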
Abstract:
Fourier transform near infrared (FT-NIR) spectroscopy was evaluated as an analytical tool for monitoring residual lignin, kappa number and hexenuronic acid (HexA) content in kraft pulps of Eucalyptus globulus. Sets of pulp samples were prepared under different cooking conditions to obtain a wide range of compound concentrations, which were characterised by conventional wet chemistry analytical methods. The sample group was also analysed using FT-NIR spectroscopy in order to establish prediction models for the pulp characteristics. Several models were applied to correlate the chemical composition of samples with the NIR spectral data by means of PCR or PLS algorithms. Calibration curves were built using all the spectral data or selected regions. The best calibration models for the quantification of lignin, kappa number and HexA presented R-2 values of 0.99. The calibration models were used to predict the characteristics of 20 external samples in a validation set. The lignin concentration and kappa number, in the ranges of 1.4-18% and 8-62, respectively, were predicted fairly accurately (standard error of prediction, SEP, 1.1% for lignin and 2.9 for kappa number). The HexA concentration (range of 5-71 mmol kg(-1) pulp) was more difficult to predict: the SEP was 7.0 mmol kg(-1) pulp in a model of HexA quantified by an ultraviolet (UV) technique and 6.1 mmol kg(-1) pulp in a model of HexA quantified by anion-exchange chromatography (AEC). Even among the wet chemical procedures used for HexA determination there is no good agreement, as demonstrated by the UV and AEC methods described in the present work. NIR spectroscopy did provide a rapid estimate of HexA content in kraft pulps prepared in routine cooking experiments.
Abstract:
Molybdenum and tungsten bimetallic oxides were synthesized according to the following methods: Pechini, coprecipitation and solid state reaction (SSR). After characterization, the solids were carburized by temperature-programmed reaction. The carburization process was monitored by tracking the consumption of the carburizing hydrocarbon and the CO produced. This monitoring makes it possible to avoid, or at least reduce, the formation of pyrolytic carbon.
Abstract:
Understanding a product's 'end-of-life' is important to reduce the environmental impact of products' final disposal. When the initial stages of product development consider end-of-life aspects, which can be established by ecodesign (a proactive approach of environmental management that aims to reduce the total environmental impact of products), it becomes easier to close the loop of materials. The 'end-of-life' ecodesign methods generally include more than one 'end-of-life' strategy. Since product complexity varies substantially, some components, systems or sub-systems are easier to recycle, reuse or remanufacture than others. Remanufacture is an effective way to maintain products in a closed loop, reducing both the environmental impacts and the costs of the manufacturing processes. This paper presents some ecodesign methods focused on the integration of different 'end-of-life' strategies, with special attention to remanufacturing, given its increasing importance in the international scenario to reduce the life cycle impacts of products. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
We assess the performance of three unconditionally stable finite-difference time-domain (FDTD) methods for the modeling of doubly dispersive metamaterials: (1) locally one-dimensional FDTD; (2) locally one-dimensional FDTD with Strang splitting; and (3) alternating direction implicit FDTD. We use both double-negative media and zero-index media as benchmarks.
Abstract:
The airflow velocities and pressures are calculated from a three-dimensional model of the human larynx by using the finite element method. The laryngeal airflow is assumed to be incompressible, isothermal, steady, and created by fixed pressure drops. The influence of different laryngeal profiles (convergent, parallel, and divergent), glottal area, and dimensions of false vocal folds in the airflow are investigated. The results indicate that vertical and horizontal phase differences in the laryngeal tissue movements are influenced by the nonlinear pressure distribution across the glottal channel, and the glottal entrance shape influences the air pressure distribution inside the glottis. Additionally, the false vocal folds increase the glottal duct pressure drop by creating a new constricted channel in the larynx, and alter the airflow vortexes formed after the true vocal folds. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement error is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limit case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
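For reference, the classical largest-normalized-residual test that the composed-residual test builds upon can be sketched as follows (the threshold of 3.0 is the conventional choice; the function name and example numbers are illustrative, not from the paper):

```python
import math

def normalized_residuals(residuals, omega_diag, threshold=3.0):
    # Classical gross-error test: r_N,i = |r_i| / sqrt(Omega_ii),
    # where Omega_ii is the residual covariance of measurement i.
    # Measurements whose normalized residual exceeds the threshold
    # are flagged as suspect.
    flagged = []
    for i, (r, w) in enumerate(zip(residuals, omega_diag)):
        if abs(r) / math.sqrt(w) > threshold:
            flagged.append(i)
    return flagged

# Measurement 1 has a residual far larger than its covariance allows
print(normalized_residuals([0.1, 2.0, -0.2], [0.04, 0.04, 0.04]))
```

The paper's point is that this classical test fails when a measurement's error is masked (low II): the residual stays small even under a gross error, so the innovation index is used to recover the masked portion before the test is applied.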
Abstract:
On-line leak detection is a major concern for the safe operation of pipelines. Acoustic and mass balance methods are the most important and most extensively applied technologies in field problems. The objective of this work is to compare these leak detection methods with respect to a given reference situation, i.e., the same pipeline and monitoring signals acquired at the inlet and outlet ends. Experimental tests were conducted in a 749 m long laboratory pipeline transporting water as the working fluid. The instrumentation included pressure transducers and electromagnetic flowmeters. Leaks were simulated by opening solenoid valves placed at known positions and previously calibrated to produce known average leak flow rates. The results clearly show the limitations and advantages of each method. It is also quite clear that acoustic and mass balance technologies are, in fact, complementary. In general, an acoustic leak detection system sends out an alarm more rapidly and locates the leak more precisely, provided that the rupture of the pipeline occurs abruptly enough. On the other hand, a mass balance leak detection method is capable of quantifying the leak flow rate very accurately and of detecting progressive leaks.
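A mass-balance detector reduces, in essence, to monitoring the imbalance between inlet and outlet flow rates against the flowmeter uncertainty. A minimal sketch (the function name and tolerance value are illustrative, not from the paper):

```python
def mass_balance_leak(q_in, q_out, tolerance):
    # For an incompressible fluid at steady state, inlet and outlet
    # flow rates should match; a persistent imbalance larger than the
    # metering tolerance estimates the leak flow rate directly.
    imbalance = q_in - q_out
    return imbalance if imbalance > tolerance else 0.0

# 100 L/s in, 95 L/s out, 2 L/s metering tolerance -> 5 L/s leak estimate
print(mass_balance_leak(100.0, 95.0, 2.0))
```

This also illustrates why the method detects slow, progressive leaks well (the imbalance accumulates steadily) but cannot localize the leak, whereas acoustic methods use the arrival times of the rupture pressure wave at the two ends to do exactly that.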
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option to improve the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are generally the most significant for newer machines. The present work demonstrates the evaluation and modelling of the behaviour of the thermal errors of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
Neodymium-doped and undoped aluminum oxide samples were obtained using two different techniques: Pechini and sol-gel. Fine-grained powders were produced using both procedures and analyzed using Scanning Electron Microscopy (SEM) and Thermo-Stimulated Luminescence (TSL). Results showed that neodymium ion incorporation is responsible for the creation of two new TSL peaks (125 and 265 degrees C) and also for the enhancement of the intrinsic TSL peak at 190 degrees C. An explanation is proposed for these observations. SEM gave the dimensions of the clusters produced by each method, showing that those obtained by Pechini are smaller than the ones produced by sol-gel, which may also explain the higher emission from the Pechini-produced samples. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
This paper deals with the use of simplified methods to predict methane generation in tropical landfills. Methane recovery data obtained on site as part of a research program being carried out at the Metropolitan Landfill, Salvador, Brazil, are analyzed and used to obtain field methane generation over time. Laboratory data from MSW samples of different ages are presented and discussed, and simplified procedures to estimate the methane generation potential, L(o), and the constant related to the biodegradation rate, k, are applied. The first-order decay method is used to fit field and laboratory results. It is demonstrated that, despite the assumptions and the simplicity of the adopted laboratory procedures, the values of L(o) and k obtained are very close to those measured in the field, making this kind of analysis very attractive for first-approach purposes. (C) 2008 Elsevier Ltd. All rights reserved.
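The first-order decay model used to fit the field and laboratory results has a simple form for a single batch of waste. A sketch under that assumption (parameter values in the example are illustrative, not the study's fitted values):

```python
import math

def methane_rate(mass_msw, L0, k, t_years):
    # First-order decay: Q(t) = k * L0 * M * exp(-k * t)
    # mass_msw: waste mass placed (Mg)
    # L0: methane generation potential (m^3 CH4 per Mg of MSW)
    # k: first-order decay constant (1/yr)
    # t_years: time since waste placement (yr)
    return k * L0 * mass_msw * math.exp(-k * t_years)

# 1000 Mg of waste, L0 = 100 m^3/Mg, k = 0.2 1/yr, 5 years after placement
print(methane_rate(1000.0, 100.0, 0.2, 5.0))
```

Real landfill models sum this expression over every yearly increment of waste placed, each with its own age; fitting L0 and k to measured recovery data, as the paper does, is what makes the simplified laboratory estimates testable against the field.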