165 results for Measurement error models
Abstract:
Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin, and conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that modified partial least squares regression applied to the spectral data is an alternative to the published pedotransfer functions tested in this study for predicting soil bulk density. The NIRS method presented the closest-to-zero accuracy error (-0.002 g cm-3) and the lowest prediction error (0.13 g cm-3), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Further research is required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density predictions, especially in environments such as the Amazon forest.
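A minimal sketch of the kind of spectra-to-property calibration this abstract describes. The paper uses modified PLS; the standard PLSRegression from scikit-learn is used here as a stand-in, on purely synthetic data, and the reported statistics (bias and prediction error) are computed the usual way; all names and values are illustrative.

```python
# Sketch: predicting soil bulk density from NIR spectra with PLS regression.
# Synthetic data stand in for the paper's 1184 samples; modified PLS is
# replaced by scikit-learn's standard PLSRegression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 200, 700          # hypothetical NIR grid
spectra = rng.normal(size=(n_samples, n_wavelengths))
bulk_density = 1.3 + 0.1 * spectra[:, :5].sum(axis=1) + rng.normal(0, 0.05, n_samples)

pls = PLSRegression(n_components=10)
pred = cross_val_predict(pls, spectra, bulk_density, cv=10).ravel()

bias = np.mean(pred - bulk_density)          # accuracy error (bias)
sep = np.std(pred - bulk_density, ddof=1)    # prediction error
print(f"bias = {bias:.3f} g cm-3, SEP = {sep:.3f} g cm-3")
```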
Abstract:
Southeastern Brazil has seen dramatic landscape modifications in recent decades, due to expansion of agriculture and urban areas; these changes have influenced the distribution and abundance of vertebrates. We developed predictive models of ecological and spatial distributions of capybaras (Hydrochoerus hydrochaeris) using ecological niche modeling. Most occurrences of capybaras were in flat areas with water bodies surrounded by sugarcane and pasture. More than 75% of the Piracicaba River basin was estimated as potentially habitable by capybaras. The models had low omission error (2.3-3.4%) but higher commission error (91.0-98.5%); these "model failures" seem to be more related to local habitat characteristics than to spatial ones. The potential distribution of capybaras in the basin is associated with anthropogenic habitats, particularly with intensive land use for agriculture.
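To make the two error rates concrete, here is a sketch of how omission and commission errors can be computed from a confusion matrix of presence/absence predictions. Definitions of commission error vary between studies; the version below (false positives over true absences) is one common convention, and the labels are invented.

```python
# Sketch: omission and commission errors for a presence/absence prediction,
# computed from a confusion matrix on synthetic labels (illustrative only).
import numpy as np

observed  = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])  # 1 = capybara present
predicted = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])  # model output

fn = np.sum((observed == 1) & (predicted == 0))   # missed presences
tp = np.sum((observed == 1) & (predicted == 1))
fp = np.sum((observed == 0) & (predicted == 1))   # predicted but absent
tn = np.sum((observed == 0) & (predicted == 0))

omission   = fn / (fn + tp)   # fraction of known occurrences missed
commission = fp / (fp + tn)   # fraction of absences predicted as suitable
print(f"omission = {omission:.1%}, commission = {commission:.1%}")
```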
Abstract:
Soils are an important component in the biogeochemical cycle of carbon, storing about four times more carbon than plant biomass and nearly three times more than the atmosphere. Moreover, the carbon content is directly related to water retention capacity and fertility, among other properties. Thus, soil carbon quantification in field conditions is an important challenge related to the carbon cycle and global climatic changes. Nowadays, Laser Induced Breakdown Spectroscopy (LIBS) can be used for qualitative elemental analyses without previous treatment of samples, and the results are obtained quickly. New optical technologies have made portable LIBS systems possible, and the great expectation now is the development of methods that make quantitative measurements with LIBS possible. The goal of this work is to calibrate a portable LIBS system to carry out quantitative measurements of carbon in whole tropical soil samples. For this, six samples from the Brazilian Cerrado region (Argisol) were used. Tropical soils have large amounts of iron in their composition, so the carbon line at 247.86 nm suffers strong interference from this element (iron lines at 247.86 and 247.95 nm). For this reason, the carbon line at 193.03 nm was used in this work. Using statistical methods such as simple linear regression, multivariate linear regression and cross-validation, it was possible to obtain correlation coefficients higher than 0.91. These results show the great potential of portable LIBS systems for quantitative carbon measurements in tropical soils. (C) 2008 Elsevier B.V. All rights reserved.
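A sketch of the univariate calibration step this abstract describes: regressing reference carbon content on the intensity of the 193.03 nm line, with leave-one-out cross-validation, which is the natural choice for only six samples. The intensities and reference values below are invented, not the paper's data.

```python
# Sketch: univariate LIBS calibration for carbon using the 193.03 nm line,
# with leave-one-out cross-validation over six samples (values invented).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

intensity = np.array([[0.12], [0.18], [0.25], [0.31], [0.40], [0.47]])  # a.u.
carbon    = np.array([0.8, 1.1, 1.6, 1.9, 2.5, 2.9])                    # % C (reference)

model = LinearRegression().fit(intensity, carbon)
print(f"calibration: C% = {model.coef_[0]:.2f} * I + {model.intercept_:.2f}")

loo_pred = cross_val_predict(LinearRegression(), intensity, carbon, cv=LeaveOneOut())
r = np.corrcoef(loo_pred, carbon)[0, 1]
print(f"cross-validated correlation coefficient r = {r:.2f}")
```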
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems less relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
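A condensed sketch of the experimental protocol: statistical features from a discrete wavelet transform feed an RBF-kernel SVM, and cross-validated accuracy is swept over the kernel radius. The signals here are synthetic stand-ins for EEG, the feature set is a common simplification of the paper's 21 types, and pywt plus scikit-learn are assumed available.

```python
# Sketch: grid over the RBF kernel radius for an SVM on DWT features of a
# 1-D signal (synthetic EEG stand-in; sigma relates to sklearn's gamma by
# gamma = 1 / (2 * sigma^2) for the Gaussian RBF kernel).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def dwt_features(signal, wavelet="db4", level=4):
    """Mean/std/max/min of each DWT sub-band, a common EEG feature set."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.concatenate([[c.mean(), c.std(), c.max(), c.min()] for c in coeffs])

# Synthetic two-class data: the 'epileptic' class has an extra oscillation.
X = np.array([dwt_features(rng.normal(size=256) +
                           (np.sin(np.linspace(0, 30, 256)) if cls else 0))
              for cls in (i % 2 for i in range(100))])
y = np.arange(100) % 2

for radius in (0.5, 1.0, 2.0, 4.0):          # kernel radius sigma
    clf = SVC(kernel="rbf", gamma=1.0 / (2 * radius**2), C=1.0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"sigma={radius}: CV accuracy = {acc:.2f}")
```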
Abstract:
Today several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied for clustering pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), built by proposing modifications to the Independent Component Analysis Mixture Model (ICAMM). These modifications address some of the model's limitations and make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segment images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results for the proposals presented herein. (C) 2008 Published by Elsevier B.V.
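A sketch of the edge-detection half of the proposed pre-processing, using the Sobel operator from scipy.ndimage on a synthetic image; the Sparse Code Shrinkage denoising step is omitted here, and the image and noise levels are invented.

```python
# Sketch: Sobel gradient magnitude as an image pre-processing step
# (the SCS denoising half of the paper's pipeline is not shown).
import numpy as np
from scipy import ndimage

image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                      # a bright square on dark ground
image += np.random.default_rng(2).normal(0, 0.05, image.shape)  # mild noise

gx = ndimage.sobel(image, axis=0)              # gradient along rows
gy = ndimage.sobel(image, axis=1)              # gradient along columns
edges = np.hypot(gx, gy)                       # gradient magnitude
edges /= edges.max()                           # normalise to [0, 1]
print(f"edge map range: {edges.min():.2f}-{edges.max():.2f}")
```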
Abstract:
The aims of the present study were to compare the effects of two periodization models on metabolic syndrome risk factors in obese adolescents and to verify whether the angiotensin-converting enzyme (ACE) genotype is important in establishing these effects. A total of 32 post-puberty obese adolescents underwent aerobic training (AT) and resistance training (RT) for 14 weeks. The subjects were divided into linear periodization (LP, n = 16) and daily undulating periodization (DUP, n = 16) groups. Body composition, visceral and subcutaneous fat, glycemia, insulinemia, homeostasis model assessment of insulin resistance (HOMA-IR), lipid profiles, blood pressure, maximal oxygen consumption (VO(2max)), resting metabolic rate (RMR), and muscular endurance were analyzed at baseline and after the intervention. Both groups demonstrated a significant reduction in body mass, BMI, body fat, visceral and subcutaneous fat, total and low-density lipoprotein cholesterol, and blood pressure, and an increase in fat-free mass, VO(2max), and muscular endurance. However, only DUP promoted a reduction in insulin concentrations and HOMA-IR. It is important to emphasize that there was no statistical difference between the LP and DUP groups; however, judging by the effect size (ES), changes in some of the metabolic syndrome risk factors may be larger under DUP than under LP in obese adolescents. Both periodization models had a large effect on muscular endurance. Despite the limited sample size, our results suggest that the ACE genotype may influence the functional and metabolic characteristics of obese adolescents and may be considered in future strategies for massive obesity control.
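For reference, the two summary statistics this abstract leans on: HOMA-IR, computed from fasting glucose and insulin, and a between-group effect size (Cohen's d with a pooled standard deviation is one common choice; the paper's exact ES formula is not stated here). All numbers below are invented.

```python
# Sketch: HOMA-IR and an effect size between two groups (illustrative values).
import numpy as np

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def cohens_d(a, b):
    """Effect size between two groups, using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

lp_homa  = np.array([3.1, 2.8, 3.4, 2.9])   # hypothetical post-training values
dup_homa = np.array([2.4, 2.1, 2.6, 2.2])
print(f"HOMA-IR example: {homa_ir(5.0, 12.0):.2f}")
print(f"Cohen's d (LP vs DUP): {cohens_d(lp_homa, dup_homa):.2f}")
```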
Abstract:
Background: Leptin-deficient mice (Lep(ob)/Lep(ob), also known as ob/ob) are of great importance for studies of obesity, diabetes and other correlated pathologies. Thus, generation of animals carrying the Lep(ob) gene mutation as well as additional genomic modifications has been used to associate genes with metabolic diseases. However, the infertility of Lep(ob)/Lep(ob) mice impairs this kind of breeding experiment. Objective: To propose a new method for the production of Lep(ob)/Lep(ob) animals and Lep(ob)/Lep(ob)-derived animal models by stably restoring the fertility of Lep(ob)/Lep(ob) mice through white adipose tissue transplantation. Methods: For this purpose, 1 g of peri-gonadal adipose tissue from lean donors was used in subcutaneous transplantations into Lep(ob)/Lep(ob) animals, and a crossing strategy was established to generate Lep(ob)/Lep(ob)-derived mice. Results: The presented method reduced the number of animals used to generate double transgenic models by a factor of four (from about 20 to 5 animals per double mutant produced) and minimized the genotyping effort (from 3 genotyping steps to 1, reducing the number of Lep gene genotyping assays from 83 to 6). Conclusion: The application of the adipose transplantation technique drastically improves both the production of Lep(ob)/Lep(ob) animals and the generation of Lep(ob)/Lep(ob)-derived animal models. International Journal of Obesity (2009) 33, 938-944; doi: 10.1038/ijo.2009.95; published online 16 June 2009
Abstract:
Pires, FO, Hammond, J, Lima-Silva, AE, Bertuzzi, RCM, and Kiss, MAPDM. Ventilation behavior during upper-body incremental exercise. J Strength Cond Res 25(1): 225-230, 2011. This study tested ventilation (VE) behavior during upper-body incremental exercise using mathematical models that calculate 1 or 2 thresholds, and compared the thresholds identified by the mathematical models with the V-slope, ventilatory equivalent for oxygen uptake (VE/VO(2)), and ventilatory equivalent for carbon dioxide uptake (VE/VCO(2)) methods. Fourteen rock climbers underwent an upper-body incremental test on a cycle ergometer with increases of approximately 20 W.min(-1) until exhaustion, at a cranking frequency of approximately 90 rpm. The VE data were smoothed to 10-second averages for VE-time plotting. The bisegmental and 3-segmental linear regression models were calculated from the 1 or 2 intercepts that best divided the VE curve into 2 or 3 linear segments. The ventilatory threshold(s) was determined mathematically from the intercept(s) obtained by the bisegmental and 3-segmental models or by the V-slope model, or visually from VE/VO(2) and VE/VCO(2). There was no difference in fit between the bisegmental (mean square error [MSE] = 35.3 +/- 32.7 l.min(-1)) and 3-segmental (MSE = 44.9 +/- 47.8 l.min(-1)) models. There was no difference between the ventilatory threshold identified by the bisegmental model (28.2 +/- 6.8 ml.kg(-1).min(-1)) and the second ventilatory threshold identified by the 3-segmental model (30.0 +/- 5.1 ml.kg(-1).min(-1)), VE/VO(2) (28.8 +/- 5.5 ml.kg(-1).min(-1)), or V-slope (28.5 +/- 5.6 ml.kg(-1).min(-1)). However, the first ventilatory threshold identified by the 3-segmental model (23.1 +/- 4.9 ml.kg(-1).min(-1)) or by VE/VO(2) (24.9 +/- 4.4 ml.kg(-1).min(-1)) differed from these 4. VE behavior during upper-body exercise tends to show only 1 ventilatory threshold. These findings have practical implications because this point is frequently used for aerobic training prescription in healthy subjects, athletes, and elderly or diseased populations. The ventilatory threshold identified from the VE curve should be used for aerobic training prescription in healthy subjects and athletes.
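A sketch of the bisegmental model: fit two straight lines to the VE curve and pick the breakpoint that minimises the summed squared error; the paper's 3-segmental model is the two-breakpoint analogue. The ventilation data below are simulated with a known threshold, not taken from the study.

```python
# Sketch: bisegmental (two straight lines) fit of a ventilation curve,
# searching the breakpoint that minimises the total squared error.
import numpy as np

t  = np.linspace(0, 10, 60)                          # exercise time (min)
ve = np.where(t < 6, 20 + 3 * t, 38 + 9 * (t - 6))   # L/min, threshold at t = 6
ve = ve + np.random.default_rng(3).normal(0, 1.5, t.size)

def two_segment_sse(breakpoint):
    left, right = t <= breakpoint, t > breakpoint
    sse = 0.0
    for mask in (left, right):
        coef = np.polyfit(t[mask], ve[mask], 1)      # straight line per segment
        sse += np.sum((np.polyval(coef, t[mask]) - ve[mask]) ** 2)
    return sse

candidates = t[5:-5]                                 # keep >=5 points per segment
best = min(candidates, key=two_segment_sse)
print(f"estimated ventilatory threshold at t = {best:.2f} min")
```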
Abstract:
Fourier transform near infrared (FT-NIR) spectroscopy was evaluated as an analytical tool for monitoring residual lignin, kappa number and hexenuronic acid (HexA) content in kraft pulps of Eucalyptus globulus. Sets of pulp samples were prepared under different cooking conditions to obtain a wide range of compound concentrations, and these were characterised by conventional wet chemistry analytical methods. The sample group was also analysed by FT-NIR spectroscopy in order to establish prediction models for the pulp characteristics. Several models were applied to correlate the chemical composition of the samples with the NIR spectral data by means of PCR or PLS algorithms. Calibration curves were built using all the spectral data or selected regions. The best calibration models for the quantification of lignin, kappa number and HexA presented R(2) values of 0.99. The calibration models were used to predict the properties of 20 external samples in a validation set. The lignin concentration and kappa number, in the ranges of 1.4-18% and 8-62, respectively, were predicted fairly accurately (standard error of prediction, SEP, of 1.1% for lignin and 2.9 for kappa number). The HexA concentration (range of 5-71 mmol kg(-1) pulp) was more difficult to predict: the SEP was 7.0 mmol kg(-1) pulp in a model of HexA quantified by an ultraviolet (UV) technique and 6.1 mmol kg(-1) pulp in a model of HexA quantified by anion-exchange chromatography (AEC). Even among the wet chemical procedures used for HexA determination there is no good agreement, as demonstrated by the UV and AEC methods described in the present work. NIR spectroscopy did provide a rapid estimate of HexA content in kraft pulps prepared in routine cooking experiments.
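A sketch of the PCR half of the calibration (PCA followed by linear regression) with the standard error of prediction evaluated on a 20-sample external set, mirroring the validation design above; the spectra and the "kappa number" response are simulated, and the paper's PLS models and spectral region selection are not reproduced.

```python
# Sketch: principal component regression (PCR) of a pulp property on FT-NIR
# spectra, with SEP on a held-out set of 20 samples (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
spectra = rng.normal(size=(120, 500))
kappa = 30 + 5 * spectra[:, 0] + rng.normal(0, 1.0, 120)   # fake kappa number

X_tr, X_te, y_tr, y_te = train_test_split(spectra, kappa, test_size=20,
                                          random_state=0)
pcr = make_pipeline(PCA(n_components=8), LinearRegression()).fit(X_tr, y_tr)

residuals = pcr.predict(X_te) - y_te
sep = np.sqrt(np.mean(residuals ** 2))        # standard error of prediction
print(f"SEP on 20 external samples: {sep:.2f} kappa units")
```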
Abstract:
The airflow velocities and pressures are calculated from a three-dimensional model of the human larynx using the finite element method. The laryngeal airflow is assumed to be incompressible, isothermal, steady, and created by fixed pressure drops. The influence of different laryngeal profiles (convergent, parallel, and divergent), glottal area, and dimensions of the false vocal folds on the airflow is investigated. The results indicate that vertical and horizontal phase differences in the laryngeal tissue movements are influenced by the nonlinear pressure distribution across the glottal channel, and that the glottal entrance shape influences the air pressure distribution inside the glottis. Additionally, the false vocal folds increase the glottal duct pressure drop by creating a new constricted channel in the larynx, and alter the airflow vortices formed after the true vocal folds. (C) 2007 Elsevier Ltd. All rights reserved.
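This is emphatically not the paper's 3-D finite element model, but a quasi-1-D Bernoulli estimate makes the pressure/area coupling inside a glottal constriction concrete: with incompressible steady flow, velocity rises and pressure falls where the duct narrows. All areas, pressures and velocities below are toy numbers.

```python
# Toy quasi-1-D Bernoulli estimate of pressure along a constricted duct
# (incompressible, steady, inviscid; a gross simplification of the 3-D FEM).
import numpy as np

rho = 1.2                                  # air density, kg/m^3
area = np.array([2.0, 1.2, 0.6, 0.3, 0.6, 1.2, 2.0]) * 1e-4  # duct area, m^2
p_in, u_in = 800.0, 1.0                    # inlet gauge pressure (Pa), velocity (m/s)

flow = u_in * area[0]                      # volume flow, conserved
u = flow / area                            # continuity: velocity at each station
p = p_in + 0.5 * rho * (u_in**2 - u**2)    # Bernoulli along the duct

for a, ui, pi in zip(area, u, p):
    print(f"area={a*1e4:.1f} cm^2  u={ui:5.2f} m/s  p={pi:7.1f} Pa")
```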
Abstract:
This work presents an automated system for the measurement of form errors of mechanical components using an industrial robot. A three-probe error separation technique was employed to allow decoupling between the measured form error and errors introduced by the robotic system. A mathematical model of the measuring system was developed to provide inspection results by means of the solution of a system of linear equations. A new self-calibration procedure, which employs redundant data from several runs, minimizes the influence of the probes' zero-adjustment on the final result. Experimental tests applied to the measurement of straightness errors of mechanical components were carried out and demonstrated the effectiveness of the methodology employed. (C) 2007 Elsevier Ltd. All rights reserved.
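A deliberately simplified sketch of the three-probe idea for straightness: when three probes at fixed spacing all ride on the same carriage, their second difference cancels the carriage's translation error exactly, and the profile can be rebuilt by double summation (up to a reference line, which straightness evaluation removes anyway). Probe zero offsets and carriage pitch are neglected here, and the data are simulated; the paper's full linear-system formulation with self-calibration is richer than this.

```python
# Simplified three-probe error separation for straightness (simulated data).
import numpy as np

rng = np.random.default_rng(5)
n = 50
profile = 0.002 * np.sin(np.linspace(0, 3 * np.pi, n + 2))   # true form error (mm)
carriage = rng.normal(0, 0.01, n)                            # slide motion error (mm)

# Probe readings at spacing of 1 sample; all three share the carriage error.
m0 = profile[:-2] + carriage
m1 = profile[1:-1] + carriage
m2 = profile[2:]   + carriage

second_diff = m0 - 2 * m1 + m2            # carriage error cancels exactly
# Integrate twice; the two free constants only set the removed reference line.
slope = np.concatenate([[0.0], np.cumsum(second_diff)])
rebuilt = np.concatenate([[0.0], np.cumsum(slope)])

def detrend(v):
    """Remove the best-fit line (the straightness reference)."""
    x = np.arange(v.size)
    return v - np.polyval(np.polyfit(x, v, 1), x)

err = np.max(np.abs(detrend(rebuilt) - detrend(profile)))
print(f"max reconstruction error after detrending: {err:.2e} mm")
```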
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are generally the most important for newer machines. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
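The abstract does not specify the model form, but warm-up thermal drift is often fitted with a first-order exponential rise; the sketch below fits that form to simulated drift data with scipy's curve_fit, purely as an illustration of the modelling step.

```python
# Sketch: fitting warm-up thermal drift with a first-order rise
# a * (1 - exp(-t / tau)); model choice and data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def warmup(t, a, tau):
    return a * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 120, 25)                          # minutes since cold start
drift = warmup(t, 18.0, 35.0) + np.random.default_rng(6).normal(0, 0.4, t.size)

(a_hat, tau_hat), _ = curve_fit(warmup, t, drift, p0=(10.0, 20.0))
residual = drift - warmup(t, a_hat, tau_hat)
print(f"asymptotic drift = {a_hat:.1f} um, time constant = {tau_hat:.0f} min")
print(f"compensation residual (rms) = {np.std(residual):.2f} um")
```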
Abstract:
This paper analyses the presence of financial constraints in the investment decisions of 367 Brazilian firms from 1997 to 2004, using a Bayesian econometric model with group-varying parameters. The motivation for this paper is the use of clustering techniques to group firms in a fully endogenous way. In order to classify the firms we used a hybrid clustering method, that is, hierarchical and non-hierarchical clustering techniques jointly. A Bayesian approach was used to estimate the parameters: prior distributions were assumed for the parameters, classifying the model as random- or fixed-effects. The predictive density ordinate criterion was used to select the model providing the best prediction. We tested thirty models, and the best prediction indicates the presence of 2 groups in the sample, under a fixed-effects model with a Student t distribution with 20 degrees of freedom for the error. The results indicate robustness in the identification of financial constraints when the firms are classified by the clustering techniques. (C) 2010 Elsevier B.V. All rights reserved.
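A sketch of one common hybrid clustering recipe consistent with "hierarchical and non-hierarchical techniques jointly": Ward hierarchical clustering proposes the groups, and k-means refines them starting from the hierarchical centroids. The firm-level features are synthetic, and the paper's Bayesian estimation stage is not shown.

```python
# Sketch: hybrid clustering - hierarchical grouping seeds a k-means refinement.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Fake firm features (e.g. cash flow, size) with two latent groups.
firms = np.vstack([rng.normal([0, 0], 0.5, (40, 2)),
                   rng.normal([3, 2], 0.5, (40, 2))])

labels_h = fcluster(linkage(firms, method="ward"), t=2, criterion="maxclust")
centroids = np.vstack([firms[labels_h == g].mean(axis=0) for g in (1, 2)])

km = KMeans(n_clusters=2, init=centroids, n_init=1).fit(firms)
print("cluster sizes:", np.bincount(km.labels_))
```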
Abstract:
This paper presents an investigation of design code provisions for steel-concrete composite columns. The study covers the national building codes of the United States, Canada and Brazil, and the transnational EUROCODE. The study is based on experimental results for 93 axially loaded concrete-filled tubular steel columns, comprising 36 unpublished, full-scale experimental results by the authors and 57 results from the literature. The error of the resistance models is determined by comparing experimental ultimate loads with code-predicted column resistances. Regression analysis is used to describe the variation of model error with column slenderness and to describe model uncertainty. The paper shows that the Canadian and European codes are able to predict mean column resistance, since the resistance models of these codes present detailed formulations for concrete confinement by the steel tube. The ANSI/AISC and Brazilian codes have limited allowance for concrete confinement and become very conservative for short columns. Reliability analysis is used to evaluate the safety level of the code provisions; it includes model error and other random problem parameters such as steel and concrete strengths and dead and live loads. The design code provisions are evaluated in terms of sufficient and uniform reliability criteria. Results show that the four design codes studied provide uniform reliability, with the Canadian code best achieving this goal. This is the result of a well-balanced code, both in terms of load combinations and resistance model. The European code is less successful in providing uniform reliability, a consequence of the partial factors used in load combinations. The paper also shows that reliability indexes of columns designed according to the European code can be as low as 2.2, which is well below the target reliability levels of EUROCODE. (C) 2009 Elsevier Ltd. All rights reserved.
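A sketch of the model-error statistic used in this kind of study: the ratio of experimental to code-predicted resistance, summarised by its mean and coefficient of variation and regressed on slenderness to expose systematic trends. The loads and slenderness values below are invented, not the paper's 93 tests.

```python
# Sketch: model error = experimental / predicted resistance, with a linear
# trend against column slenderness (all numbers are invented).
import numpy as np

slenderness  = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
experimental = np.array([2950, 2600, 2280, 1900, 1560, 1260, 1000])  # kN
predicted    = np.array([2500, 2300, 2100, 1800, 1500, 1230, 990])   # kN

model_error = experimental / predicted          # >1 means the code is conservative
coef = np.polyfit(slenderness, model_error, 1)  # linear trend with slenderness
print(f"mean model error = {model_error.mean():.2f}, "
      f"cov = {model_error.std(ddof=1)/model_error.mean():.2%}")
print(f"trend: model_error ~ {coef[1]:.2f} + {coef[0]:.2f} * slenderness")
```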
Abstract:
The objective of this paper is to provide and verify simplified models that predict the longitudinal stresses that develop in C-section purlins under uplift. The paper begins with the simple case of flexural stress, where the force is applied at the shear center or the section is braced at both flanges. Restrictions on the load application point and restraint of the flanges are then removed until arriving at the more complex problem of bending when movement of the tension flange alone is restricted, as commonly found in purlin-sheeting systems. Winter's model for predicting the longitudinal stresses developed due to direct torsion is reviewed, verified, and then extended to cover the case of a bending member with tension flange restraint. The developed longitudinal stresses from flexure and restrained torsion are used to assess the elastic stability behavior of typical purlin-sheeting systems. Finally, strength predictions for typical C-section purlins are provided using existing AISI methods and a newly proposed extension of the direct strength method that employs the predicted longitudinal stress distributions within the strength prediction. (C) 2009 Elsevier Ltd. All rights reserved.
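For orientation, the baseline case the paper starts from is ordinary flexural stress, sigma = M*y/I at a fibre a distance y from the neutral axis. The section properties below are invented, not those of an actual C-section purlin, and the torsion contribution from Winter's model is not included.

```python
# Sketch: baseline flexural longitudinal stress sigma = M*y/I
# (section properties are illustrative, torsion effects omitted).
moment  = 6.5e6       # applied bending moment, N*mm
inertia = 4.2e6       # second moment of area about the bending axis, mm^4
y_top, y_bottom = 100.0, -100.0   # fibre distances from the neutral axis, mm

sigma_top    = moment * y_top / inertia       # MPa (N/mm^2)
sigma_bottom = moment * y_bottom / inertia
print(f"sigma_top = {sigma_top:.1f} MPa, sigma_bottom = {sigma_bottom:.1f} MPa")
```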