949 results for Statistical Model
DIGITAL ELEVATION MODEL VALIDATION WITH NO GROUND CONTROL: APPLICATION TO THE TOPODATA DEM IN BRAZIL
Abstract:
Digital Elevation Model (DEM) validation is often carried out by comparing the data with a set of ground control points. However, the quality of a DEM can also be assessed in terms of shape realism: beyond visual analysis, one can verify that the physical and statistical properties expected of terrestrial relief are satisfied. This approach is applied to an extract of Topodata, a DEM obtained by resampling the SRTM DEM over Brazilian territory with a geostatistical approach. Several statistical indicators are computed, and they show that the quality of Topodata in terms of shape rendering is improved relative to SRTM.
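As a rough illustration of ground-control-free validation, the sketch below computes simple shape statistics (slope distribution and a curvature-tail indicator) from a DEM grid with NumPy. The indicators, cell size, and synthetic surface are illustrative assumptions, not the indicators used in the paper.

```python
# A minimal sketch (not the paper's exact indicators) of computing
# shape statistics from a DEM grid without ground control.
import numpy as np

def terrain_indicators(dem, cell=30.0):
    """Return simple statistical descriptors of terrain shape."""
    # First derivatives (slope components) by central differences.
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    # Second derivatives (a crude curvature proxy): Laplacian of elevation.
    d2y = np.gradient(dzdy, cell)[0]
    d2x = np.gradient(dzdx, cell)[1]
    curvature = d2x + d2y
    return {
        "slope_mean_deg": float(slope.mean()),
        "slope_std_deg": float(slope.std()),
        # Heavy curvature tails often betray resampling artifacts.
        "curvature_kurtosis": float(((curvature - curvature.mean()) ** 4).mean()
                                    / curvature.var() ** 2),
    }

# Synthetic example: a smooth surface plus noise stands in for SRTM/Topodata.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
dem = 100 * np.sin(x) * np.cos(y) + rng.normal(0, 0.5, x.shape)
print(terrain_indicators(dem))
```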
Abstract:
We consider model selection uncertainty in linear regression. We study theoretically and by simulation the approach of Buckland and co-workers, who proposed estimating a parameter common to all models under study by taking a weighted average over the models, using weights obtained from information criteria or the bootstrap. This approach is compared with the usual approach in which the 'best' model is used, and with Bayesian model averaging. The weighted predictor behaves similarly to model averaging, with generally more realistic mean-squared errors than the usual model-selection-based estimator.
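For concreteness, a minimal sketch of the weighted-average approach using Akaike weights, w_i proportional to exp(-0.5 ΔAIC_i), on simulated data with three nested linear candidate models; the data and candidate set are assumptions for illustration.

```python
# A minimal sketch of Akaike-weight model averaging in the spirit of
# Buckland et al. Simulated data; three nested linear candidate models.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + 0.3 * x2 + rng.normal(size=n)

# Candidate design matrices: intercept-only, +x1, +x1+x2.
candidates = [
    np.ones((n, 1)),
    np.column_stack([np.ones(n), x1]),
    np.column_stack([np.ones(n), x1, x2]),
]

aic, preds = [], []
for X in candidates:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                       # coefficients + error variance
    aic.append(n * np.log(rss / n) + 2 * k)  # Gaussian log-likelihood AIC
    preds.append(X @ beta)

aic = np.array(aic)
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()                                 # Akaike weights
y_avg = sum(wi * p for wi, p in zip(w, preds))  # model-averaged predictor
print("weights:", np.round(w, 3))
```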
Abstract:
The study of short implants is relevant to the biomechanics of dental implants, and research on increased crown height has implications for daily clinical practice. The aim of this study was to analyze the biomechanical interactions of a single implant-supported prosthesis at different crown heights under vertical and oblique force, using the 3-D finite element method. Six 3-D models were designed with InVesalius 3.0, Rhinoceros 3D 4.0, and SolidWorks 2010 software. Each model was constructed from a mandibular segment of bone block, including an implant supporting a screwed metal-ceramic crown. The crown height was set at 10, 12.5, and 15 mm. The applied force was 200 N (axial) and 100 N (oblique). We performed ANOVA and Tukey tests; p < 0.05 was considered statistically significant. Increased crown height did not influence the stress distribution on the prosthetic screw (p > 0.05) under axial load. However, crown heights of 12.5 and 15 mm significantly worsened the stress distribution in the screws and the cortical bone (p < 0.001) under oblique load. A high crown-to-implant (C/I) ratio impaired the microstrain distribution in bone tissue under both axial and oblique loads (p < 0.001). Increased crown height is therefore a potentially deleterious factor for the screws and for the different regions of bone tissue.
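The statistical step of such a study (one-way ANOVA followed by Tukey's HSD) might look like the sketch below; the stress readings are invented placeholders, not results from the paper.

```python
# A minimal sketch of one-way ANOVA plus Tukey's HSD on hypothetical
# peak-stress readings for three crown heights (values are fabricated
# purely for illustration).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

stress = {                       # MPa, illustrative placeholder values
    "10.0mm": [82, 85, 80, 84, 83],
    "12.5mm": [95, 97, 94, 96, 98],
    "15.0mm": [110, 108, 112, 109, 111],
}

f_stat, p = f_oneway(*stress.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

values = np.concatenate(list(stress.values()))
groups = np.repeat(list(stress.keys()), [len(v) for v in stress.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```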
Abstract:
The phase diagram of an asymmetric N = 3 Ashkin-Teller model is obtained by a numerical analysis which combines Monte Carlo renormalization group and reweighting techniques. The present results reveal several differences from those obtained by mean-field calculations and a Hamiltonian approach. In particular, we find Ising critical exponents along a line where Goldschmidt located the Kosterlitz-Thouless multicritical point. On the other hand, we find nonuniversal exponents along another transition line. Symmetry breaking in this case is very similar to the N = 2 case, since the symmetries associated with only two of the Ising variables are broken. However, for large values of the coupling-constant ratio XW = W/K, where the only broken symmetry is that of a hidden variable, we detected first-order phase transitions, giving evidence supporting the existence of a multicritical point, as suggested by Goldschmidt, but in a different region of the phase diagram.
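The elementary building block of such a Monte Carlo study is a local Metropolis update. As a hedged illustration, here is a single-spin-flip sweep for the plain 2D Ising model, not the N = 3 Ashkin-Teller Hamiltonian, and without the renormalization-group and reweighting machinery; lattice size and temperature are placeholders.

```python
# Minimal Metropolis sweep for a 2D Ising model, the kind of elementary
# update on which such Monte Carlo studies are built.
import numpy as np

rng = np.random.default_rng(2)
L, beta = 32, 0.44                 # near the 2D Ising critical point
s = rng.choice([-1, 1], size=(L, L))

def sweep(s, beta):
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbours with periodic boundaries.
        nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2 * s[i, j] * nn      # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

for _ in range(200):               # equilibration sweeps
    sweep(s, beta)
print("magnetization per spin:", abs(s.mean()))
```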
Abstract:
We present both analytical and numerical results on the position of partition function zeros in the complex magnetic field plane of the q=2 state (Ising) and the q=3 state Potts model defined on φ³ Feynman diagrams (thin random graphs). Our analytic results are based on the ideas of destructive interference of coexisting phases and low-temperature expansions. For the Ising model, an argument based on a symmetry of the saddle point equations leads us to a nonperturbative proof that the Yang-Lee zeros are located on the unit circle, although no circle theorem is known for this case of random graphs. For the q=3 state Potts model, our perturbative results indicate that the Yang-Lee zeros lie outside the unit circle. Both analytic results are confirmed by finite-lattice numerical calculations.
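As a numerical illustration of locating Yang-Lee zeros, the sketch below enumerates a small periodic Ising chain (standing in for the random graphs of the paper), builds the partition function as a polynomial in the fugacity y = exp(-2βh), and finds its complex roots with NumPy; the circle theorem predicts |y| = 1 for every zero. N, β, and J are illustrative choices.

```python
# Brute-force Yang-Lee zeros for a small periodic Ising chain: the
# partition function, up to a field-dependent prefactor, is a polynomial
# in y = exp(-2*beta*h) whose coefficient of y**k sums Boltzmann weights
# over configurations with k down spins.
import itertools
import numpy as np

N, beta, J = 8, 0.6, 1.0
coeff = np.zeros(N + 1)            # coeff[k] multiplies y**k
for s in itertools.product([-1, 1], repeat=N):
    e_bond = sum(s[i] * s[(i + 1) % N] for i in range(N))
    k = s.count(-1)                # number of down spins -> power of y
    coeff[k] += np.exp(beta * J * e_bond)

zeros = np.roots(coeff[::-1])      # np.roots expects highest power first
print("|zero| for each Yang-Lee zero:")
print(np.sort(np.abs(zeros)))      # circle theorem: all should be ~1
```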
Abstract:
Reaction norm models have been widely used to study genotype-by-environment interaction (G × E) in animal breeding. The objective of this study was to describe environmental sensitivity across first lactation in Brazilian Holstein cows using a reaction norm approach. A total of 50,168 individual monthly test-day (TD) milk yields (10 test days) from 7,476 complete first lactations of Holstein cattle were analyzed. The statistical models for all traits (10 TDs and 305-day milk yield) included the fixed effects of contemporary group, age of cow (linear and quadratic effects), and days in milk (linear effect; omitted for 305-day milk yield). A hierarchical reaction norm model (HRNM) based on an unknown covariate was used. The present study showed the presence of G × E in milk yield across the first lactation of Holstein cows. The variation in the heritability estimates implies differences in the response to selection depending on the environment in which the animals of this population are evaluated. In the average environment, the heritabilities for all traits were rather similar, ranging from 0.02 to 0.63. The scaling effect of G × E predominated throughout most of lactation. Particularly during the first 2 months of lactation, G × E caused reranking of breeding values. It is therefore important to account for the environmental sensitivity of animals according to the phase of lactation in the genetic evaluations of Holstein cattle in tropical environments.
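A stripped-down sketch of the reaction norm idea, using a known, simulated environmental gradient rather than the paper's unknown covariate: each genotype's performance is a line in the environment, so differing slopes express environmental sensitivity and crossing lines produce the reranking described above.

```python
# Minimal linear reaction norms: intercept = genotype level,
# slope = environmental sensitivity. All values are simulated.
import numpy as np

rng = np.random.default_rng(3)
n_geno, n_env = 5, 40
env = np.linspace(-2, 2, n_env)            # standardized environment
intercept = rng.normal(30, 2, n_geno)      # genotype level
slope = rng.normal(1.0, 0.8, n_geno)       # environmental sensitivity

# Simulated phenotypes: reaction norm plus residual noise.
y = intercept[:, None] + slope[:, None] * env + rng.normal(0, 1, (n_geno, n_env))

# Fit each genotype's reaction norm by least squares.
fits = [np.polyfit(env, y[g], 1) for g in range(n_geno)]
for g, (b, a) in enumerate(fits):
    print(f"genotype {g}: intercept {a:.2f}, sensitivity (slope) {b:.2f}")

# Reranking check: the best genotype in a poor vs. a good environment.
merit = lambda e: np.array([a + b * e for b, a in fits])
print("best at env=-2:", merit(-2.0).argmax(), " best at env=+2:", merit(2.0).argmax())
```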
Abstract:
The south of Minas Gerais, Brazil, stands out among coffee-growing regions for its capacity to produce specialty coffees. Its potential has been recognized by the Cup of Excellence (COE), which has made it one of the most award-winning Brazilian regions in recent years. Given the evident relationship between product quality and the environment, scientific studies are needed to provide a foundation for discriminating product origin, creating new methods for combating possible fraud. The aim of this study was to evaluate the use of carbon and nitrogen isotopes to discriminate the production environments of specialty coffees from the Serra da Mantiqueira of Minas Gerais by means of a discriminant model. Coffee samples were composed of ripe yellow and red fruits collected manually at altitudes below 1,000 m, from 1,000 to 1,200 m, and above 1,200 m. The yellow and red fruits were subjected to dry and wet processing, with five replications. A total of 119 samples were used to discriminate specialty coffee production environments by means of stable isotopes and statistical modeling. The resulting model had an accuracy rate of 89% in discriminating environments and was composed of the isotope variables δ15N, δ13C, %C, %N, δD, and δ18O (meteoric water) and sensory analysis scores. In addition, for the first time, discrimination of environments on a local geographic scale, within a single municipality, was proposed and successfully carried out. This shows that isotope analysis is an effective method for verifying the geographic origin of specialty coffees.
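A minimal sketch of such a discriminant model, using simulated isotope-like features and scikit-learn's linear discriminant analysis; the class means, feature values, and sample sizes are assumptions, not the study's data.

```python
# Linear discriminant analysis separating three altitude classes from
# six isotope-style features (simulated stand-ins for d15N, d13C, %C,
# %N, dD, d18O).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
classes = ["<1000m", "1000-1200m", ">1200m"]
n_per, n_feat = 40, 6

# Each altitude class gets its own assumed mean isotope signature.
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(n_per, n_feat))
               for mu in ([0, 0, 0, 0, 0, 0],
                          [1, .5, 0, .5, 1, .5],
                          [2, 1, .5, 1, 2, 1])])
y = np.repeat(classes, n_per)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```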
Abstract:
We develop spatial statistical models for stream networks that can estimate relationships between a response variable and other covariates, make predictions at unsampled locations, and predict an average or total for a stream or stream segment. There have been very few attempts to develop valid spatial covariance models that incorporate flow, stream distance, or both. The application of typical spatial autocovariance functions based on Euclidean distance, such as the spherical covariance model, is not valid when using stream distance. In this paper we develop a large class of valid models that incorporate flow and stream distance by using spatial moving averages. These methods integrate a moving average function, or kernel, against a white noise process. By running the moving average function upstream from a location, we develop models that use flow, and by construction they are valid models based on stream distance. We show that, with proper weighting, many of the usual spatial models based on Euclidean distance have a counterpart for stream networks. Using sulfate concentrations from an example data set, the Maryland Biological Stream Survey (MBSS), we show that models using flow may be more appropriate than models that only use stream distance. For the MBSS data set, we use restricted maximum likelihood to fit a valid covariance matrix that uses flow and stream distance, and then we use this covariance matrix to estimate fixed effects and make kriging and block kriging predictions.
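As a simple Euclidean counterpart to the stream-network models described above, the sketch below performs ordinary kriging with an exponential covariance; in the paper's setting the Euclidean distances would be replaced by flow-weighted stream distances built from the moving-average construction. Locations, data, and covariance parameters are simulated placeholders.

```python
# Ordinary kriging with an exponential covariance on Euclidean distance,
# the model class the paper shows can be carried over to stream networks
# with proper flow weighting.
import numpy as np

rng = np.random.default_rng(5)
n = 30
sites = rng.uniform(0, 10, (n, 2))               # sampled locations
z = np.sin(sites[:, 0]) + 0.1 * rng.normal(size=n)

def exp_cov(d, sill=1.0, range_par=3.0):
    return sill * np.exp(-d / range_par)          # exponential covariance

d = np.linalg.norm(sites[:, None] - sites[None], axis=-1)
C = exp_cov(d) + 1e-6 * np.eye(n)                 # tiny nugget for stability

# Ordinary kriging at x0: solve [C 1; 1' 0][w; m] = [c0; 1].
x0 = np.array([5.0, 5.0])
c0 = exp_cov(np.linalg.norm(sites - x0, axis=1))
A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
w = np.linalg.solve(A, np.append(c0, 1.0))[:n]    # kriging weights
print("kriged prediction at x0:", float(w @ z))
```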
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike’s information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
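For contrast with the model-based approach above, a minimal sketch of conventional line-transect distance sampling: fit a half-normal detection function to perpendicular distances by maximum likelihood and convert the effective strip width into a density estimate. All values are simulated placeholders.

```python
# Conventional distance sampling with a half-normal detection function,
# g(x) = exp(-x^2 / (2 sigma^2)); the MLE of sigma^2 is the mean squared
# perpendicular distance.
import numpy as np

rng = np.random.default_rng(6)
sigma_true = 8.0                      # metres (assumed)
L_m = 100_000.0                       # total transect length in metres (assumed)
x = np.abs(rng.normal(0, sigma_true, size=120))   # detection distances

sigma_hat = np.sqrt(np.mean(x ** 2))              # half-normal MLE
esw = sigma_hat * np.sqrt(np.pi / 2)              # effective strip half-width
density = len(x) / (2 * esw * L_m)                # animals per square metre
print(f"sigma = {sigma_hat:.2f} m, ESW = {esw:.2f} m, "
      f"density = {density * 1e6:.1f} per km^2")
```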
Abstract:
Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) under the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
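The Monte Carlo logic of such a study, reduced to a toy example: generate data under a true null many times, apply a test, and record the empirical Type I error rate. The paper does this with multi-group CFA models and fit indices; the sketch below uses a simple two-group test purely to show the mechanics.

```python
# Empirical Type I error of a test via Monte Carlo simulation: both
# groups are drawn from the same distribution, so every rejection at
# alpha = .05 is a Type I error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, n_rep, n_per_group = 0.05, 2000, 50
rejections = 0
for _ in range(n_rep):
    g1 = rng.normal(0, 1, n_per_group)   # null is true: identical populations
    g2 = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(g1, g2)
    rejections += (p < alpha)

print(f"empirical Type I error: {rejections / n_rep:.3f} (nominal {alpha})")
# Replacing the normal draws with skewed data (e.g. rng.gamma(2, 1, n))
# lets one probe robustness to non-normality, the question the paper
# studies at much larger scale with CFA models and fit indices.
```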
Abstract:
In this paper we propose a hybrid hazard regression model with threshold stress which includes the proportional hazards and accelerated failure time models as particular cases. To describe the behavior of lifetimes, the generalized gamma distribution is assumed, and an inverse power law model with a threshold stress is considered. For parameter estimation we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo techniques. We assume proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under the assumption of vague priors. Further, some discussion of model selection criteria is given. The methodology is illustrated on simulated and real lifetime data sets.
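A minimal sketch of the sampling-based inference step: random-walk Metropolis for a Weibull lifetime model (a special case of the generalized gamma) with vague normal priors on the log-parameters. The data, proposal scale, and priors are illustrative assumptions, not the paper's model, which also includes the threshold-stress regression structure.

```python
# Random-walk Metropolis for a Weibull lifetime model with vague priors.
import numpy as np

rng = np.random.default_rng(8)
t = rng.weibull(1.5, size=80) * 10.0         # simulated lifetimes (shape 1.5, scale 10)

def log_post(log_shape, log_scale):
    k, lam = np.exp(log_shape), np.exp(log_scale)
    # Weibull log-likelihood: log f = log(k/lam) + (k-1)log(t/lam) - (t/lam)^k
    loglik = np.sum(np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k)
    logprior = -0.5 * (log_shape**2 + log_scale**2) / 100.0  # vague N(0, 100)
    return loglik + logprior

theta = np.array([0.0, 0.0])                 # start at shape = scale = 1
samples, lp = [], log_post(*theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1, size=2)   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept step
        theta, lp = prop, lp_prop
    samples.append(np.exp(theta))

burned = np.array(samples[1000:])            # discard burn-in
print("posterior means (shape, scale):", burned.mean(axis=0).round(2))
```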