982 results for Prediction Error
Abstract:
Various methods are currently used to predict shallow landslides at the catchment scale. Among them, physically based models have the advantage of describing the physical processes through mathematical equations. The main objective of this research is the prediction of shallow landslides using the TRIGRS model in a pilot catchment located in the Serra do Mar mountain range, Sao Paulo State, southeastern Brazil. Susceptibility scenarios were simulated taking into account different mechanical and hydrological values. These scenarios were analysed against a map of landslide scars from the January 1985 event, upon which two indexes were applied: Scars Concentration (SC, the ratio between the number of cells with scars in each class and the total number of cells with scars within the catchment) and Landslide Potential (LP, the ratio between the number of cells with scars in each class and the total number of cells in that same class). The results showed significant agreement between the simulated scenarios and the scars map. In unstable areas (safety factor SF <= 1), the SC values exceeded 50% in all scenarios. Based on these results, the model should be considered an important tool for shallow landslide prediction, especially in areas where the mechanical and hydrological properties of the materials are not well known.
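The two indexes defined in the abstract can be computed directly from a gridded susceptibility map and a scar mask. The following minimal Python sketch (function and variable names are hypothetical, not from the paper) illustrates SC and LP per stability class:

```python
import numpy as np

def scars_indexes(class_map, scar_mask):
    """Compute Scars Concentration (SC) and Landslide Potential (LP)
    per susceptibility class, following the definitions in the abstract.

    class_map : 2-D int array of susceptibility class labels per cell
    scar_mask : 2-D bool array, True where a landslide scar is mapped
    """
    total_scar_cells = scar_mask.sum()
    results = {}
    for c in np.unique(class_map):
        in_class = class_map == c
        scars_in_class = (in_class & scar_mask).sum()
        sc = scars_in_class / total_scar_cells   # share of all scar cells in this class
        lp = scars_in_class / in_class.sum()     # scar density within this class
        results[int(c)] = (sc, lp)
    return results
```

A class whose LP is high relative to its areal share is over-represented in scars, which is how the abstract's comparison between scenarios and the scars map is read.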
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine directly genotyped marker information with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10(-5) for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers within specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals.
The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
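The pairwise check described above, re-testing association on imputed versus empirical allele counts and flagging markers where only one of the two tests crosses the significance threshold, can be sketched as follows (a simplified illustration with hypothetical function names; for the 1-df chi-square, the p-value is exactly erfc(sqrt(chi2/2))):

```python
from math import erfc, sqrt

def allelic_chi2_p(case_a, case_b, ctrl_a, ctrl_b):
    """2x2 allelic chi-square test (1 degree of freedom).

    Inputs are allele counts (allele A / allele B) in cases and controls.
    """
    n = case_a + case_b + ctrl_a + ctrl_b
    row1, row2 = case_a + case_b, ctrl_a + ctrl_b
    col1, col2 = case_a + ctrl_a, case_b + ctrl_b
    obs = [[case_a, case_b], [ctrl_a, ctrl_b]]
    exp = [[row1 * col1 / n, row1 * col2 / n],
           [row2 * col1 / n, row2 * col2 / n]]
    chi2 = sum((obs[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(2))
    # survival function of chi-square with 1 df
    return erfc(sqrt(chi2 / 2))

def discordant(p_imputed, p_empirical, alpha=1e-5):
    """A marker is discordant when only one of the two tests crosses alpha."""
    return (p_imputed < alpha) != (p_empirical < alpha)
```

Counting `discordant(...)` over all significant markers gives the kind of 35-of-73 figure reported above.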
Abstract:
In symbolic Natural Language Processing (NLP) systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by employing a rule-based grammar. Another approach to NLP uses the connectionist model, which has the benefits of learning, generalization and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor) is proposed, designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture comprises, as input, a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and, as output, the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" the thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture, and adopting a psycholinguistic view of thematic theory.
Abstract:
We use the density functional theory/local-density approximation (DFT/LDA)-1/2 method [L. G. Ferreira, Phys. Rev. B 78, 125116 (2008)], which attempts to fix the electron self-energy deficiency of DFT/LDA by half-ionizing the whole Bloch band of the crystal, to calculate the band offsets of two Si/SiO(2) interface models. Our results are similar to those obtained with a "state-of-the-art" GW approach [R. Shaltaf, Phys. Rev. Lett. 100, 186401 (2008)], with the advantage of being as computationally inexpensive as the usual DFT/LDA. Our band gap and band offset predictions are in excellent agreement with experiments.
Abstract:
Southeastern Brazil has seen dramatic landscape modifications in recent decades due to the expansion of agriculture and urban areas; these changes have influenced the distribution and abundance of vertebrates. We developed predictive models of the ecological and spatial distributions of capybaras (Hydrochoerus hydrochaeris) using ecological niche modeling. Most occurrences of capybaras were in flat areas with water bodies surrounded by sugarcane and pasture. More than 75% of the Piracicaba River basin was estimated as potentially habitable by capybaras. The models had low omission error (2.3-3.4%) but higher commission error (91.0-98.5%); these "model failures" seem to be more related to local habitat characteristics than to spatial ones. The potential distribution of capybaras in the basin is associated with anthropogenic habitats, particularly with intensive land use for agriculture.
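The omission and commission errors used above to evaluate the niche models have straightforward definitions: the fraction of observed presences the model misses, and the fraction of predicted-suitable cells with no recorded presence. A minimal sketch (hypothetical names, not the authors' code):

```python
def omission_commission(predicted, observed):
    """Omission and commission error rates for a presence model.

    predicted : iterable of bool, True where the model predicts suitability
    observed  : iterable of bool, True where the species was actually recorded
    """
    predicted, observed = list(predicted), list(observed)
    # omission: observed presences falling outside the predicted area
    missed = sum(o and not p for p, o in zip(predicted, observed))
    # commission: predicted-suitable cells with no recorded presence
    false_pred = sum(p and not o for p, o in zip(predicted, observed))
    return missed / sum(observed), false_pred / sum(predicted)
```

A high commission rate, as in the abstract, need not mean a bad model: predicted-suitable but unoccupied cells may simply lack records or be locally unsuitable.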
Abstract:
Inductively coupled plasma optical emission spectrometers (ICP OES) allow fast simultaneous measurements of several spectral lines for multiple elements. Combining the signal intensities of two or more emission lines for each element may bring advantages such as improved precision and the minimization of systematic errors caused by spectral interferences and matrix effects. In this work, signal intensities for several spectral lines were combined for the determination of Al, Cd, Co, Cr, Mn, Pb, and Zn in water. Afterwards, parameters for the evaluation of the calibration model were calculated to select the combination of emission lines leading to the best accuracy (lowest values of PRESS, prediction error sum of squares, and RMSEP, root mean square error of prediction). Limits of detection (LOD) obtained using multiple lines were 7.1, 0.5, 4.4, 0.042, 3.3, 28 and 6.7 mu g L(-1) (n = 10) for Al, Cd, Co, Cr, Mn, Pb and Zn, respectively, in the presence of concomitants. On the other hand, the LOD established for the most intense emission line were 16, 0.7, 8.4, 0.074, 23, 26 and 9.6 mu g L(-1) (n = 10) for these same elements in the presence of concomitants. The accuracy of the developed procedure was demonstrated using a water certified reference material. The use of multiple lines improved the sensitivity, making feasible the determination of these analytes according to the target values required by current environmental legislation for water samples; it was also demonstrated that measurements in multiple lines can be employed as a tool to verify the accuracy of an analytical procedure in ICP OES. (C) 2009 Elsevier B.V. All rights reserved.
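PRESS and RMSEP, the two line-selection criteria named above, are standard prediction-error statistics computed over a validation set. A minimal illustrative sketch:

```python
from math import sqrt

def press(y_true, y_pred):
    """Prediction error sum of squares over a validation set."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return sqrt(press(y_true, y_pred) / len(y_true))
```

For each candidate combination of emission lines, one fits the calibration, evaluates `press`/`rmsep` on held-out predictions, and keeps the combination minimizing both.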
Abstract:
Fourier transform near infrared (FT-NIR) spectroscopy was evaluated as an analytical tool for monitoring residual lignin, kappa number and hexenuronic acid (HexA) content in kraft pulps of Eucalyptus globulus. Sets of pulp samples were prepared under different cooking conditions to obtain a wide range of compound concentrations, which were characterised by conventional wet-chemistry analytical methods. The sample group was also analysed by FT-NIR spectroscopy in order to establish prediction models for the pulp characteristics. Several models were applied to correlate the chemical composition of the samples with the NIR spectral data by means of PCR or PLS algorithms. Calibration curves were built using all the spectral data or selected regions. The best calibration models for the quantification of lignin, kappa number and HexA presented R(2) values of 0.99. The calibration models were used to predict the pulp titers of 20 external samples in a validation set. The lignin concentration and kappa number, in the ranges of 1.4-18% and 8-62, respectively, were predicted fairly accurately (standard error of prediction, SEP, of 1.1% for lignin and 2.9 for kappa number). The HexA concentration (range of 5-71 mmol kg(-1) pulp) was more difficult to predict: the SEP was 7.0 mmol kg(-1) pulp in a model of HexA quantified by an ultraviolet (UV) technique and 6.1 mmol kg(-1) pulp in a model of HexA quantified by anion-exchange chromatography (AEC). Even among the wet-chemical procedures used for HexA determination there is no good agreement, as demonstrated by the UV and AEC methods described in the present work. NIR spectroscopy did provide a rapid estimate of HexA content in kraft pulps prepared in routine cooking experiments.
Abstract:
This paper deals with the problem of state prediction for descriptor systems subject to bounded uncertainties. The problem is stated in terms of the optimization of an appropriate quadratic functional. This functional is well suited to derive not only the robust predictor for descriptor systems but also that for usual state-space systems. Numerical examples are included in order to demonstrate the performance of this new filter. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
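For context, the classical normalised residual test that the composed-residual test replaces divides each residual by the square root of the corresponding diagonal entry of the residual covariance matrix. The composed residual itself is not specified in the abstract, so only that baseline is sketched here (hypothetical names; the threshold is a common choice, not the paper's):

```python
import numpy as np

def normalized_residuals(r, omega):
    """Classical normalised measurement residuals.

    r     : residual vector z - h(x_hat)
    omega : residual covariance matrix
    """
    return np.abs(r) / np.sqrt(np.diag(omega))

def flag_gross_errors(r, omega, threshold=3.0):
    # largest-normalized-residual style flagging of suspect measurements
    return normalized_residuals(r, omega) > threshold
```

In the abstract's terms, a critical measurement has a residual covariance that collapses toward zero, so its residual carries no usable information for this baseline test, which motivates composing the masked error back via the innovation index.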
Abstract:
With the relentless quest for improved performance driving ever tighter tolerances for manufacturing, machine tools are sometimes unable to meet the desired requirements. One option to improve the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general for newer machines, the most important. The present work demonstrates the evaluation and modelling of the behaviour of the thermal errors of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
This paper analyses the presence of financial constraints in the investment decisions of 367 Brazilian firms from 1997 to 2004, using a Bayesian econometric model with group-varying parameters. The motivation for this paper is the use of clustering techniques to group firms in a totally endogenous form. To classify the firms we used a hybrid clustering method, that is, hierarchical and non-hierarchical clustering techniques jointly. A Bayesian approach was adopted to estimate the parameters: prior distributions were assumed for the parameters, classifying the model as random or fixed effects. The ordinate predictive density criterion was used to select the model providing the best prediction. We tested thirty models, and the best prediction considers the presence of 2 groups in the sample, assuming the fixed-effects model with a Student t distribution with 20 degrees of freedom for the error. The results indicate robustness in the identification of financial constraints when the firms are classified by the clustering techniques. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The results of a combined experimental and numerical modeling program to evaluate the behavior of ungrouted hollow concrete block prisms under uniaxial compression are addressed. In the numerical program, three distinct approaches were considered using a continuum model with a smeared approach, namely plane-stress, plane-strain and three-dimensional conditions. The response of the numerical simulations is compared with experimental data for masonry prisms made of concrete blocks specifically designed for this purpose. The elastic and inelastic parameters were acquired from laboratory tests on samples of the concrete and mortar that constitute the blocks and the bed joints of the prisms. The results of the numerical simulations are discussed with respect to their ability to reproduce the global response of the experimental tests and the failure behavior obtained. Good agreement between experimental and numerical results was found for the peak load and the failure mode using the three-dimensional model on four different sets of block/mortar types. Less good agreement was found for the plane-stress and plane-strain models.
Abstract:
An updated flow pattern map was developed for CO2 on the basis of the previous Cheng-Ribatski-Wojtan-Thome CO2 flow pattern map [1,2] to extend the flow pattern map to a wider range of conditions. A new annular flow to dryout transition (A-D) and a new dryout to mist flow transition (D-M) were proposed here. In addition, a bubbly flow region which generally occurs at high mass velocities and low vapor qualities was added to the updated flow pattern map. The updated flow pattern map is applicable to a much wider range of conditions: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m(2) s, heat fluxes from 1.8 to 46 kW/m(2) and saturation temperatures from -28 to +25 degrees C (reduced pressures from 0.21 to 0.87). The updated flow pattern map was compared to independent experimental data of flow patterns for CO2 in the literature and it predicts the flow patterns well. Then, a database of CO2 two-phase flow pressure drop results from the literature was set up and the database was compared to the leading empirical pressure drop models: the correlations by Chisholm [3], Friedel [4], Gronnerud [5] and Muller-Steinhagen and Heck [6], a modified Chisholm correlation by Yoon et al. [7] and the flow pattern based model of Moreno Quiben and Thome [8-10]. None of these models was able to predict the CO2 pressure drop data well. Therefore, a new flow pattern based phenomenological model of two-phase flow frictional pressure drop for CO2 was developed by modifying the model of Moreno Quiben and Thome using the updated flow pattern map in this study and it predicts the CO2 pressure drop database quite well overall. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Corresponding to the updated flow pattern map presented in Part I of this study, an updated general flow pattern based flow boiling heat transfer model was developed for CO2 using the Cheng-Ribatski-Wojtan-Thome [L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes, Int. J. Heat Mass Transfer 49 (2006) 4082-4094; L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, Erratum to: "New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside tubes" [Heat Mass Transfer 49 (21-22) (2006) 4082-4094], Int. J. Heat Mass Transfer 50 (2007) 391] flow boiling heat transfer model as the starting basis. The flow boiling heat transfer correlation in the dryout region was updated. In addition, a new mist flow heat transfer correlation for CO2 was developed based on the CO2 data, and a heat transfer method for bubbly flow was proposed for the sake of completeness. The updated general flow boiling heat transfer model for CO2 covers all flow regimes and is applicable to a wider range of conditions for horizontal tubes: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m(2) s, heat fluxes from 1.8 to 46 kW/m(2) and saturation temperatures from -28 to 25 degrees C (reduced pressures from 0.21 to 0.87). The updated general flow boiling heat transfer model was compared to a new experimental database which contains 1124 data points (790 more than in the previous model [Cheng et al., 2006, 2007]). Good agreement between the predicted and experimental data was found in general, with 71.4% of the entire database and 83.2% of the database without the dryout and mist flow data predicted within +/-30%.
However, the predictions for the dryout and mist flow regions were less satisfactory due to the limited number of data points, the higher inaccuracy in such data, scatter in some data sets ranging up to 40%, significant discrepancies from one experimental study to another and the difficulties associated with predicting the inception and completion of dryout around the perimeter of the horizontal tubes. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.