912 results for Error Correction Models
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines applied to the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
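As an illustration of the kind of experiment described above, the sketch below sweeps the radius of a Gaussian (RBF) kernel for a standard SVM and scores each setting by cross-validated accuracy. This is not the paper's pipeline: the data are synthetic stand-ins for the wavelet/Lyapunov EEG features, and the radius-to-gamma mapping gamma = 1/(2r^2) is one common convention.

```python
# Illustrative sketch only: sweep the RBF kernel radius of a standard SVM
# and score each setting by cross-validation, as done for the 26 kernel
# parameter values in the study. Synthetic features stand in for EEG data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two classes ("normal" vs. "epileptic"), 20 samples each, 8 features.
X = np.vstack([rng.normal(0.0, 1.0, (20, 8)),
               rng.normal(1.5, 1.0, (20, 8))])
y = np.array([0] * 20 + [1] * 20)

# Sweep the Gaussian kernel radius r; gamma = 1 / (2 * r**2) (one common convention).
radii = np.logspace(-1, 1, 10)
scores = {}
for r in radii:
    clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * r ** 2), C=1.0)
    scores[r] = cross_val_score(clf, X, y, cv=5).mean()

best_radius = max(scores, key=scores.get)
print(f"best radius={best_radius:.2f}, CV accuracy={scores[best_radius]:.2f}")
```

Plotting accuracy against the radius for each machine/feature pair is what produces the sensitivity profiles the abstract refers to.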
Abstract:
Today, several unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), built by proposing some modifications to the Independent Component Analysis Mixture Model (ICAMM). These improvements were proposed by considering some of the model's limitations and by analyzing how it should be modified to become more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein. (C) 2008 Published by Elsevier B.V.
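Of the pre-processing steps mentioned, the Sobel edge detector is easy to make concrete. The sketch below is a minimal, self-contained implementation of the Sobel gradient magnitude on a synthetic step-edge image; the SCS denoising stage is omitted.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the Sobel operator (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: the response peaks along the edge, is zero elsewhere.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)
```

In a segmentation pipeline like the one described, the edge map would be combined with the denoised image before clustering.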
Abstract:
The aims of the present study were to compare the effects of two periodization models on metabolic syndrome risk factors in obese adolescents and to verify whether the angiotensin-converting enzyme (ACE) genotype is important in establishing these effects. A total of 32 post-puberty obese adolescents underwent aerobic training (AT) and resistance training (RT) for 14 weeks. The subjects were divided into linear periodization (LP, n = 16) or daily undulating periodization (DUP, n = 16) groups. Body composition, visceral and subcutaneous fat, glycemia, insulinemia, homeostasis model assessment of insulin resistance (HOMA-IR), lipid profiles, blood pressure, maximal oxygen consumption (VO(2max)), resting metabolic rate (RMR), and muscular endurance were analyzed at baseline and after the intervention. Both groups demonstrated a significant reduction in body mass, BMI, body fat, visceral and subcutaneous fat, total and low-density lipoprotein cholesterol, and blood pressure, and an increase in fat-free mass, VO(2max), and muscular endurance. However, only DUP promoted a reduction in insulin concentrations and HOMA-IR. It is important to emphasize that there was no statistical difference between the LP and DUP groups; however, judging by effect size (ES), there may be larger changes in the DUP group than in the LP group in some of the metabolic syndrome risk factors in obese adolescents. Both periodization models presented a large effect on muscular endurance. Despite the limitation of sample size, our results suggest that the ACE genotype may influence the functional and metabolic characteristics of obese adolescents and may be considered in future strategies for massive obesity control.
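Two of the quantities reported above have simple closed forms worth making explicit: HOMA-IR, commonly computed as fasting glucose (mg/dL) × fasting insulin (µU/mL) / 405, and a pooled-SD effect size (Cohen's d), one standard way of expressing the ES the abstract mentions. This is an illustrative sketch, not the study's exact computation.

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR from fasting glucose (mg/dL) and fasting insulin (uU/mL).

    Uses the common mg/dL form: glucose * insulin / 405
    (equivalent to glucose [mmol/L] * insulin / 22.5).
    """
    return glucose_mg_dl * insulin_uU_ml / 405.0

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d with a pooled standard deviation (one common ES definition)."""
    pooled = (((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean1 - mean2) / pooled

print(homa_ir(90.0, 9.0))  # -> 2.0
```

By convention, |d| around 0.8 or above is read as a "large" effect, which is the sense in which both periodization models had a large effect on muscular endurance.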
Abstract:
Background: Leptin-deficient mice (Lep(ob)/Lep(ob), also known as ob/ob) are of great importance for studies of obesity, diabetes and other correlated pathologies. Thus, the generation of animals carrying the Lep(ob) gene mutation as well as additional genomic modifications has been used to associate genes with metabolic diseases. However, the infertility of Lep(ob)/Lep(ob) mice impairs this kind of breeding experiment. Objective: To propose a new method for the production of Lep(ob)/Lep(ob) animals and Lep(ob)/Lep(ob)-derived animal models by restoring the fertility of Lep(ob)/Lep(ob) mice in a stable way through white adipose tissue transplantation. Methods: For this purpose, 1 g of peri-gonadal adipose tissue from lean donors was used in subcutaneous transplantations into Lep(ob)/Lep(ob) animals, and a crossing strategy was established to generate Lep(ob)/Lep(ob)-derived mice. Results: The presented method reduced the number of animals used to generate double transgenic models fourfold (from about 20 to 5 animals per double mutant produced) and minimized the number of genotyping steps (from 3 to 1, reducing the number of Lep gene genotyping assays from 83 to 6). Conclusion: The application of the adipose transplantation technique drastically improves both the production of Lep(ob)/Lep(ob) animals and the generation of Lep(ob)/Lep(ob)-derived animal models. International Journal of Obesity (2009) 33, 938-944; doi: 10.1038/ijo.2009.95; published online 16 June 2009
Abstract:
Aim. To identify the impact of pain on quality of life (QOL) of patients with chronic venous ulcers. Methods. A cross-sectional study was performed on 40 outpatients with chronic venous ulcers who were recruited at one outpatient care center in Sao Paulo, Brazil. The WHOQOL-Bref was used to assess QOL, the McGill Pain Questionnaire-Short Form (MPQ) to identify pain characteristics, and an 11-point numerical pain rating scale to measure pain intensity. Kruskal-Wallis or ANOVA tests, with post-hoc correction (Tukey test), were applied to compare groups. Multiple linear regression models were used. Results. The mean age of the patients was 67 +/- 11 years (range, 39-95 years), and 26 (65%) were women. The prevalence of pain was 90%, with a mean worst-pain intensity of 6.2 +/- 3.5. Severe pain was the most prevalent (21 patients, 52.5%). The pain most frequently reported was sensory-discriminative and evaluative in quality. Pain was significantly and negatively correlated with physical (PY), environmental (EV), and overall QOL. Compared to a no-pain group, those with pain had lower overall QOL. On multiple analyses, adjusted for age, number, duration and frequency of wounds, pain dimension (MPQ), partnership, and economic status, pain remained a predictor of overall QOL (beta = -0.73, P = 0.03) and was also predictive of social relationships QOL (beta = -3.85, P < 0.01), whereas pain did not have a significant impact on the physical, psychological, or environmental QOL domains. Conclusion. To improve the QOL of outpatients with chronic venous ulcers, the qualities and the intensity of pain must be considered differently.
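A minimal sketch of the multiple-linear-regression step described above: regressing overall QOL on pain intensity with an adjustment covariate, via ordinary least squares. The data are simulated (only the pain coefficient of about -0.73 echoes the study), and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
pain = rng.uniform(0, 10, n)   # 11-point numerical rating scale scores
age = rng.uniform(39, 95, n)   # age range reported in the study
# Simulated overall QOL; the pain coefficient (~ -0.73) echoes the study's beta.
qol = 80.0 - 0.73 * pain + 0.05 * age + rng.normal(0, 2.0, n)

# Ordinary least squares: qol ~ intercept + pain + age
X = np.column_stack([np.ones(n), pain, age])
beta, *_ = np.linalg.lstsq(X, qol, rcond=None)
pain_coef = beta[1]            # expected to come out negative
```

In the study, more covariates (wound number, duration, MPQ dimension, etc.) enter the design matrix the same way, as extra columns of X.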
Abstract:
Pires, FO, Hammond, J, Lima-Silva, AE, Bertuzzi, RCM, and Kiss, MAPDM. Ventilation behavior during upper-body incremental exercise. J Strength Cond Res 25(1): 225-230, 2011 - This study tested ventilation (VE) behavior during upper-body incremental exercise using mathematical models that calculate 1 or 2 thresholds, and compared the thresholds identified by the mathematical models with those from the V-slope method, the ventilatory equivalent for oxygen uptake (VE/V̇O2), and the ventilatory equivalent for carbon dioxide uptake (VE/V̇CO2). Fourteen rock climbers underwent an upper-body incremental test on a cycle ergometer with increases of approximately 20 W·min^-1 until exhaustion, at a cranking frequency of approximately 90 rpm. The VE data were smoothed to 10-second averages for VE time plotting. The bisegmental and 3-segmental linear regression models were calculated from the 1 or 2 breakpoints that best divided the VE curve into 2 or 3 linear segments. The ventilatory threshold(s) was determined mathematically from the breakpoint(s) obtained by the bisegmental and 3-segmental models, by the V-slope model, or visually from VE/V̇O2 and VE/V̇CO2. There was no difference between the bisegmental (mean square error [MSE] = 35.3 +/- 32.7 L·min^-1) and 3-segmental (MSE = 44.9 +/- 47.8 L·min^-1) models in fitted data. There was no difference between the ventilatory threshold identified by the bisegmental model (28.2 +/- 6.8 mL·kg^-1·min^-1) and the second ventilatory threshold identified by the 3-segmental model (30.0 +/- 5.1 mL·kg^-1·min^-1), VE/V̇O2 (28.8 +/- 5.5 mL·kg^-1·min^-1), or V-slope (28.5 +/- 5.6 mL·kg^-1·min^-1). However, the first ventilatory threshold identified by the 3-segmental model (23.1 +/- 4.9 mL·kg^-1·min^-1) or by VE/V̇O2 (24.9 +/- 4.4 mL·kg^-1·min^-1) was different from these 4. VE behavior during upper-body exercise tends to show only 1 ventilatory threshold.
These findings have practical implications because this point is frequently used for aerobic training prescription in healthy subjects, athletes, and in elderly or diseased populations. The ventilatory threshold identified by V(E) curve should be used for aerobic training prescription in healthy subjects and athletes.
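The bisegmental model described above amounts to an exhaustive search over candidate breakpoints: fit one straight line to each side, and keep the breakpoint with the smallest total squared error. A minimal sketch on synthetic VE data with a known kink (illustrative values, not the study's data):

```python
import numpy as np

def bisegmental_threshold(x, ve):
    """Fit two joined straight lines to VE vs. workload and return the
    breakpoint (candidate ventilatory threshold) minimizing the total SSE."""
    best_bp, best_sse = None, np.inf
    for k in range(2, len(x) - 2):           # candidate breakpoint indices
        sse = 0.0
        # Left and right segments share the breakpoint sample.
        for xs, ys in ((x[:k + 1], ve[:k + 1]), (x[k:], ve[k:])):
            A = np.column_stack([np.ones(len(xs)), xs])
            coef = np.linalg.lstsq(A, ys, rcond=None)[0]
            sse += ((ys - A @ coef) ** 2).sum()
        if sse < best_sse:
            best_bp, best_sse = x[k], sse
    return best_bp, best_sse

x = np.linspace(20, 320, 16)                 # workload (W), ~20 W steps
ve = np.where(x < 200, 0.2 * x, 40.0 + 0.6 * (x - 200))  # VE (L/min), kink at 200 W
bp, sse = bisegmental_threshold(x, ve)
```

The 3-segmental model is the same idea with two nested breakpoints, and the model comparison in the study is essentially a comparison of these fitted MSE values.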
Abstract:
Fourier transform near infrared (FT-NIR) spectroscopy was evaluated as an analytical tool for monitoring residual lignin, kappa number and hexenuronic acid (HexA) content in kraft pulps of Eucalyptus globulus. Sets of pulp samples were prepared under different cooking conditions to obtain a wide range of compound concentrations and were characterised by conventional wet-chemistry analytical methods. The sample group was also analysed using FT-NIR spectroscopy in order to establish prediction models for the pulp characteristics. Several models were applied to correlate the chemical composition of the samples with the NIR spectral data by means of PCR or PLS algorithms. Calibration curves were built using all the spectral data or selected regions. The best calibration models for the quantification of lignin, kappa number and HexA were proposed, presenting R(2) values of 0.99. The calibration models were used to predict the properties of 20 external samples in a validation set. The lignin concentration and kappa number, in the ranges of 1.4-18% and 8-62, respectively, were predicted fairly accurately (standard error of prediction, SEP, of 1.1% for lignin and 2.9 for kappa number). The HexA concentration (range of 5-71 mmol kg(-1) pulp) was more difficult to predict: the SEP was 7.0 mmol kg(-1) pulp in a model of HexA quantified by an ultraviolet (UV) technique and 6.1 mmol kg(-1) pulp in a model of HexA quantified by anion-exchange chromatography (AEC). Even among the wet-chemical procedures used for HexA determination there is no good agreement, as demonstrated by the UV and AEC methods described in the present work. NIR spectroscopy did provide a rapid estimate of the HexA content in kraft pulps prepared in routine cooking experiments.
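As a concrete (toy) analogue of the PCR calibration route, the sketch below builds synthetic "spectra" from two latent constituents, projects the mean-centred spectra onto leading principal components, and regresses a property on the component scores. All data are simulated; this is not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 30, 200
# Hypothetical spectra: two latent constituents (think "lignin", "HexA") plus noise.
conc = rng.uniform(0, 1, (n_samples, 2))
bands = rng.normal(0, 1, (2, n_wavelengths))          # constituent band shapes
spectra = conc @ bands + rng.normal(0, 0.01, (n_samples, n_wavelengths))
y = conc[:, 0]                                        # property to calibrate

# PCR: project mean-centred spectra onto the leading principal components,
# then regress the property on the component scores.
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                                 # number of components kept
scores = Xc @ Vt[:k].T
A = np.column_stack([np.ones(n_samples), scores])
coef = np.linalg.lstsq(A, y, rcond=None)[0]
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

PLS differs in choosing components that maximize covariance with y rather than spectral variance alone, which is often why it needs fewer components on real spectra.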
Abstract:
The airflow velocities and pressures are calculated from a three-dimensional model of the human larynx by using the finite element method. The laryngeal airflow is assumed to be incompressible, isothermal, steady, and created by fixed pressure drops. The influence of different laryngeal profiles (convergent, parallel, and divergent), glottal area, and dimensions of false vocal folds in the airflow are investigated. The results indicate that vertical and horizontal phase differences in the laryngeal tissue movements are influenced by the nonlinear pressure distribution across the glottal channel, and the glottal entrance shape influences the air pressure distribution inside the glottis. Additionally, the false vocal folds increase the glottal duct pressure drop by creating a new constricted channel in the larynx, and alter the airflow vortexes formed after the true vocal folds. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this study, the innovation approach is used to estimate the measurement total error associated with power system state estimation. This is required because the power system equations are strongly correlated with each other, and as a consequence part of the measurement errors is masked. For that purpose, an innovation index (II) is proposed, which quantifies the amount of new information a measurement contains. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
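For context, the classical normalised-residual test that the composed-residual test builds on can be sketched on a toy linear measurement model z = Hx + e: estimate the state by weighted least squares, compute the residual covariance, and flag the measurement with the largest normalised residual. This is the baseline method only, not the paper's innovation-index procedure, and the measurement model below is invented for illustration.

```python
import numpy as np

# Toy linear measurement model z = H x + e with 4 measurements, 2 states.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
R = np.eye(4) * 0.01 ** 2          # measurement error covariance
x_true = np.array([1.0, 0.5])
z = H @ x_true
z[2] += 0.1                        # inject a gross error into measurement 3

# Weighted least squares state estimate.
W = np.linalg.inv(R)
G = H.T @ W @ H                    # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

# Residuals and their covariance: Omega = R - H G^{-1} H^T.
r = z - H @ x_hat
Omega = R - H @ np.linalg.solve(G, H.T)
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))
suspect = int(np.argmax(r_norm))   # largest normalised residual flags the suspect
```

A critical measurement in the abstract's sense has a zero diagonal entry in Omega: its residual carries no information, which is exactly the masking the innovation index is designed to expose.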
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general, the most important for newer machines. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
This paper analyses the presence of financial constraints in the investment decisions of 367 Brazilian firms from 1997 to 2004, using a Bayesian econometric model with group-varying parameters. The motivation for this paper is the use of clustering techniques to group firms in a totally endogenous form. In order to classify the firms, we used a hybrid clustering method, that is, hierarchical and non-hierarchical clustering techniques applied jointly. To estimate the parameters, a Bayesian approach was adopted. Prior distributions were assumed for the parameters, classifying the model as random- or fixed-effects. The ordinate predictive density criterion was used to select the model providing the best prediction. We tested thirty models; the best-predicting one considers the presence of 2 groups in the sample and assumes the fixed-effects model with a Student t distribution with 20 degrees of freedom for the error. The results indicate robustness in the identification of financial constraints when the firms are classified by the clustering techniques. (C) 2010 Elsevier B.V. All rights reserved.
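A minimal sketch of the hybrid (hierarchical + non-hierarchical) clustering idea on toy two-dimensional "firm" data: a hierarchical pass provides initial group centroids, which then seed a k-means refinement. The library choices and data are illustrative, not the paper's procedure.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(3)
# Toy "firms": two well-separated latent groups of regression features.
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(2.0, 0.3, (20, 2))])

# Step 1 (hierarchical): group the firms and take the group centroids.
hier = AgglomerativeClustering(n_clusters=2).fit(X)
centroids = np.array([X[hier.labels_ == c].mean(axis=0) for c in range(2)])

# Step 2 (non-hierarchical): refine with k-means seeded at those centroids.
km = KMeans(n_clusters=2, init=centroids, n_init=1).fit(X)
labels = km.labels_
```

The appeal of the hybrid scheme is that the hierarchical pass removes the sensitivity of k-means to random initialization, so group membership is determined by the data alone, which is the "totally endogenous" grouping the abstract refers to.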
Abstract:
This paper presents an investigation of design code provisions for steel-concrete composite columns. The study covers the national building codes of the United States, Canada and Brazil, and the transnational EUROCODE. The study is based on experimental results for 93 axially loaded concrete-filled tubular steel columns, comprising 36 unpublished, full-scale experimental results by the authors and 57 results from the literature. The error of the resistance models is determined by comparing experimental ultimate loads with code-predicted column resistances. Regression analysis is used to describe the variation of model error with column slenderness and to describe model uncertainty. The paper shows that the Canadian and European codes are able to predict mean column resistance, since the resistance models of these codes present detailed formulations for concrete confinement by the steel tube. The ANSI/AISC and Brazilian codes have limited allowance for concrete confinement and become very conservative for short columns. Reliability analysis is used to evaluate the safety level of the code provisions; it includes model error and other random problem parameters such as steel and concrete strengths and dead and live loads. Design code provisions are evaluated in terms of sufficient and uniform reliability criteria. Results show that the four design codes studied provide uniform reliability, with the Canadian code being best at achieving this goal. This is the result of a well-balanced code, both in terms of load combinations and the resistance model. The European code is less successful in providing uniform reliability, a consequence of the partial factors used in load combinations. The paper also shows that reliability indexes of columns designed according to the European code can be as low as 2.2, which is well below the target reliability levels of EUROCODE. (C) 2009 Elsevier Ltd. All rights reserved.
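The model-error analysis described above can be made concrete with simulated data: the model error is the ratio of experimental to code-predicted resistance, and a linear regression captures its trend with slenderness. The numbers below are invented for illustration only, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical data for 93 columns: relative slenderness and the
# model error = experimental ultimate load / code-predicted resistance.
lam = rng.uniform(0.2, 2.0, 93)
model_error = 1.15 - 0.10 * lam + rng.normal(0, 0.05, 93)

# Linear trend of model error with slenderness (as in the regression analysis):
# model_error ~ b0 + b1 * lam.
A = np.column_stack([np.ones(lam.size), lam])
(b0, b1), *_ = np.linalg.lstsq(A, model_error, rcond=None)

# Summary statistics feeding the reliability analysis.
mean_err = model_error.mean()
cv_err = model_error.std(ddof=1) / mean_err   # coefficient of variation
```

In a reliability analysis, mean_err and cv_err parameterize the model-uncertainty random variable alongside material strengths and loads; a mean above 1 with a decreasing trend is the signature of a code that is conservative for short columns.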
Abstract:
This paper presents the experimental results of 32 axially loaded concrete-filled steel tubular columns (CFT). The load was introduced only on the concrete core by means of two high strength steel cylinders placed at the column ends to evaluate the passive confinement provided by the steel tube. The columns were filled with structural concretes with compressive strengths of 30, 60, 80 and 100 MPa. The outer diameter (D) of the column was 114.3 mm, and the length/diameter (L/D) ratios considered were 3, 5, 7 and 10. The wall thicknesses of the tubes (t) were 3.35 mm and 6.0 mm, resulting in diameter/thickness (D/t) ratios of 34 and 19, respectively. The force vs. axial strain curves obtained from the tests showed, in general, a good post-peak behavior of the CFT columns, even for those columns filled with high strength concrete. Three analytical models of confinement for short concrete-filled columns found in the literature were used to predict the axial capacity of the columns tested. To apply these models to slender columns, a correction factor was introduced to penalize the calculated results, giving good agreement with the experimental values. Additional results of 63 CFT columns tested by other researchers were also compared to the predictions of the modified analytical models and presented satisfactory results. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The objective of this paper is to provide and verify simplified models that predict the longitudinal stresses that develop in C-section purlins under uplift. The paper begins with the simple case of flexural stress, where the force is applied at the shear center or the section is braced at both flanges. Restrictions on the load application point and restraint of the flanges are then removed until arriving at the more complex problem of bending when movement of the tension flange alone is restricted, as commonly found in purlin-sheeting systems. Winter's model for predicting the longitudinal stresses developed due to direct torsion is reviewed, verified, and then extended to cover the case of a bending member with tension flange restraint. The developed longitudinal stresses from flexure and restrained torsion are used to assess the elastic stability behavior of typical purlin-sheeting systems. Finally, strength predictions of typical C-section purlins are provided for existing AISI methods and for a newly proposed extension to the direct strength method that employs the predicted longitudinal stress distributions within the strength prediction. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
This communication proposes a simple way to introduce fibers into finite element modelling. It is a promising formulation for dealing with fiber-reinforced composites by the finite element method (FEM), as it allows the consideration of short or long fibers placed arbitrarily inside a continuum domain (matrix). The most important feature of the formulation is that no additional degrees of freedom are introduced into the pre-existing finite element numerical system to account for any distribution of fiber inclusions. In other words, the size of the system of equations used to solve a non-reinforced medium is the same as the one used to solve the reinforced counterpart. Another important characteristic is the reduced work required from the user to introduce fibers, avoiding 'rebar' elements, node-by-node geometrical definitions or even complex mesh generation. An additional characteristic of the technique is the possibility of representing unbounded stresses at the ends of fibers using a finite number of degrees of freedom. Further studies are required for non-linear applications in which localization may occur. Throughout the text the linear formulation is presented, and a bonded connection between the fibers and the continuum is assumed. Four examples are presented, including non-linear analysis, to validate and show the capabilities of the formulation. Copyright (c) 2007 John Wiley & Sons, Ltd.