899 results for Error correction methods
Abstract:
Influence diagnostics methods are extended in this article to the Grubbs model when the unknown quantity x (latent variable) follows a skew-normal distribution. Diagnostic measures are derived from the case-deletion approach and the local influence approach under several perturbation schemes. The observed information matrix for the postulated model and the Delta matrices for the corresponding perturbed models are derived. Results obtained for one real data set are reported, illustrating the usefulness of the proposed methodology.
Abstract:
We propose a likelihood ratio test (LRT) with Bartlett correction in order to identify Granger causality between sets of time-series gene expression data. The performance of the proposed test is compared to a previously published bootstrap-based approach. The LRT is shown to be significantly faster and statistically powerful even under non-normal distributions. An R package named gGranger containing an implementation of both Granger causality identification tests is also provided.
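The nested-model structure behind such a test can be sketched with ordinary least squares. This is a hedged illustration, not the gGranger implementation: in particular, the small-sample adjustment shown (replacing n by n − k in the LR statistic, in the style of Sims' correction) is only one common Bartlett-type correction, and the paper's exact correction may differ.

```python
# Illustrative Granger-causality LRT with a Bartlett-type small-sample
# adjustment; all data and settings below are synthetic assumptions.
import numpy as np
from scipy import stats

def lag_matrix(series, p):
    """Stack columns [s_{t-1}, ..., s_{t-p}] for t = p .. T-1."""
    T = len(series)
    return np.column_stack([series[p - j - 1:T - j - 1] for j in range(p)])

def granger_lrt(x, y, p=2):
    """Test whether x Granger-causes y via an LRT on nested OLS fits."""
    Y = y[p:]
    X_r = np.column_stack([np.ones(len(Y)), lag_matrix(y, p)])  # restricted: own lags only
    X_f = np.column_stack([X_r, lag_matrix(x, p)])              # full: adds lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    n, k_full = len(Y), X_f.shape[1]
    lr_raw = n * np.log(rss(X_r) / rss(X_f))
    lr_bart = (n - k_full) * np.log(rss(X_r) / rss(X_f))        # Bartlett-type correction
    pval = stats.chi2.sf(lr_bart, df=p)                         # p restrictions
    return lr_raw, lr_bart, pval

rng = np.random.default_rng(0)
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):                  # x drives y with a one-step lag
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
lr_raw, lr_bart, pval = granger_lrt(x, y, p=2)
```

The corrected statistic is always at most the raw one here, since the restricted model is nested in the full one (its RSS cannot be smaller).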
A robust Bayesian approach to null intercept measurement error model with application to dental data
Abstract:
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distribution is a class of asymmetric thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of the skew-normal/independent distribution as a robust alternative for the null intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal, skew-t, skew-slash, and skew contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
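As a purely illustrative aside (not the paper's Bayesian model), the skew-normal density that anchors the skew-normal/independent class can be inspected with `scipy.stats.skewnorm`; its shape parameter controls the asymmetry, and a shape of zero recovers the ordinary normal.

```python
# Illustration of the skew-normal density; a = 0 reduces to the standard normal.
import numpy as np
from scipy import stats

x = np.linspace(-4, 4, 9)
sym = stats.skewnorm.pdf(x, 0)      # shape 0: identical to the normal density
skew = stats.skewnorm.pdf(x, 4)     # shape 4: right-skewed, thin left tail
normal = stats.norm.pdf(x)
```

With positive shape, mass shifts to the right: the skewed density is far below the normal one on the left tail.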
Abstract:
Convex combinations of long-memory estimates using the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing standard methods with their combined versions, the latter achieve a lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
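The variance-reduction idea can be sketched with made-up numbers (none of the paper's estimators or series appear here): two correlated, bias-corrected estimates of the same memory parameter are pooled with the minimum-variance convex weight w = (v2 − c) / (v1 + v2 − 2c), where v1, v2 are the variances and c the covariance.

```python
# Hedged illustration: minimum-variance convex combination of two correlated,
# unbiased estimates of the same quantity (synthetic stand-ins for the
# full-rate and lower-rate long-memory estimates).
import numpy as np

rng = np.random.default_rng(42)
d_true = 0.3                                  # quantity being estimated
n = 10_000
common = rng.normal(0.0, 0.05, n)             # shared noise -> correlated estimates
est_hi = d_true + common + rng.normal(0.0, 0.06, n)   # e.g. full sampling rate
est_lo = d_true + common + rng.normal(0.0, 0.10, n)   # e.g. lower rate, bias-corrected

v1, v2 = est_hi.var(), est_lo.var()
c = np.cov(est_hi, est_lo)[0, 1]
w = (v2 - c) / (v1 + v2 - 2 * c)              # minimum-variance weight on est_hi
combined = w * est_hi + (1 - w) * est_lo

rmse = lambda e: np.sqrt(np.mean((e - d_true) ** 2))
```

Because both inputs are unbiased, the combination's RMSE falls below that of either input; with a residual bias at the lower rate, the RMSE trade-off the abstract describes appears.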
Abstract:
The thermal and aerial environment conditions inside animal housing facilities change during the day because of the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points spatially distributed over the facility area must be monitored. This work proposes that the time variation of the environmental variables of interest for animal production, monitored inside animal housing facilities, can be accurately modeled from records that are discrete in time. The objective of this work was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations become independent of the time spent during measurement. The proposed method brought the values recorded with time delays close to those expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model for environmental variables was validated for the environmental parameter air temperature: the values corrected by the method did not differ, by Tukey's test at 5% probability, from the actual values recorded by dataloggers.
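A minimal sketch of the idea, under our own simplifying assumption (not necessarily the paper's exact formulation) that the drift recorded by a fixed datalogger is used to shift each walked reading to a common reference instant t0:

```python
# Hedged sketch: shift spatially distributed readings, taken at different
# times, to a single instant using the drift seen at a fixed reference sensor.
import numpy as np

def correct_to_instant(values, read_times, t0, ref_times, ref_values):
    """Add to each reading the reference-sensor change between its read time and t0."""
    drift = np.interp(t0, ref_times, ref_values) - np.interp(read_times, ref_times, ref_values)
    return values + drift

# Fixed datalogger: air temperature rising through the morning (minutes, deg C)
ref_times = np.arange(0, 61, 5.0)
ref_values = 22.0 + 0.05 * ref_times             # steady 0.05 deg C per minute warm-up

read_times = np.array([0.0, 10.0, 20.0, 30.0])   # when each grid point was visited
values = np.array([22.0, 22.5, 23.0, 23.5])      # raw readings during the walk
corrected = correct_to_instant(values, read_times, t0=30.0,
                               ref_times=ref_times, ref_values=ref_values)
```

In this toy case the spatial field is uniform and only time drift separates the readings, so the corrected values all coincide at the t0 temperature.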
Abstract:
The development of projects related to the performance of several crops has been increasingly refined through the incorporation of mathematical models, making indispensable the use of ever more consistent equations that allow prediction and a closer approximation of real behavior, reducing the error of the estimates. Among the unit operations demanding further study are those related to crop growth, characterized by the ideal temperature for dry matter accumulation. Given the wide use of mathematical methods for representing, analyzing, and estimating degree-days, together with the great importance of the sugarcane crop for the Brazilian economy, an evaluation was carried out of the mathematical models commonly used and of numerical integration methods for estimating degree-day availability for this crop in the region of Botucatu, São Paulo State. The integration models, with a discretization of 6 h, gave satisfactory results for estimating degree-days. The traditional methodologies performed satisfactorily in estimating degree-days based on the hourly temperature curve for each day and for groupings of three, seven, 15, and 30 days. By the numerical integration method, the region of Botucatu, São Paulo State, showed a mean annual thermal availability of 1,070.6 degree-days for the sugarcane crop.
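The contrast between the traditional degree-day formula and a numerical integration over the hourly temperature curve can be sketched as follows; the base temperature of 18 °C and the sinusoidal hourly curve are illustrative assumptions, not the study's data.

```python
# Hedged sketch: degree-days by trapezoidal integration of the hourly
# temperature curve vs. the classic (Tmax + Tmin)/2 - Tb estimate.
import numpy as np

def trapezoid(yv, xv):
    """Plain trapezoidal rule (avoids numpy version differences)."""
    yv, xv = np.asarray(yv, float), np.asarray(xv, float)
    return float(np.sum((yv[1:] + yv[:-1]) * np.diff(xv)) / 2.0)

def degree_days_numeric(hours, temps, t_base):
    """Integrate max(T - Tb, 0) over the day; result in degree-days."""
    excess = np.maximum(np.asarray(temps, float) - t_base, 0.0)
    return trapezoid(excess, hours) / 24.0

def degree_days_classic(t_max, t_min, t_base):
    """Traditional estimate: daily mean temperature minus the base."""
    return max((t_max + t_min) / 2.0 - t_base, 0.0)

hours = np.arange(0, 25, 1.0)          # 0h .. 24h
# Simple sinusoidal hourly curve: mean 23 deg C, amplitude 6, peak at 17h
temps = 23.0 + 6.0 * np.sin(2 * np.pi * (hours - 11.0) / 24.0)
gd_num = degree_days_numeric(hours, temps, t_base=18.0)
gd_cls = degree_days_classic(temps.max(), temps.min(), t_base=18.0)
```

When the curve dips below the base temperature, the integrated estimate exceeds the classic one, because the integral only accumulates positive excess while the daily mean is pulled down by the cold hours.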
Abstract:
In this work, calibration models were constructed to determine the total lipid and moisture content of powdered milk samples. For this, near-infrared diffuse reflectance spectroscopy was used, combined with multivariate calibration. Initially, the spectral data were submitted to multiplicative scatter correction (MSC) and Savitzky-Golay smoothing. The samples were then divided into subgroups by hierarchical cluster analysis (HCA) with Ward's linkage criterion. This made it possible to build partial least squares (PLS) regression models for the calibration and prediction of total lipid and moisture content, based on values obtained by the reference methods of Soxhlet extraction and oven drying at 105 °C, respectively. We therefore conclude that NIR performed well for the quantification of powdered milk samples, mainly by minimizing analysis time, not destroying the samples, and generating no waste. The prediction model for total lipids gave a correlation coefficient (R) of 0.9955 and an RMSEP of 0.8952, with an average error between Soxhlet and NIR of ±0.70%, while the prediction model for moisture content gave a correlation coefficient (R) of 0.9184, an RMSEP of 0.3778, and an error of ±0.76%.
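A hedged sketch of the MSC preprocessing step mentioned above, in its commonly described form (the study's actual spectra and software are not given in the abstract): each spectrum is regressed on the mean spectrum, and the fitted offset and gain are removed.

```python
# Hedged sketch of multiplicative scatter correction (MSC) on synthetic spectra.
import numpy as np

def msc(spectra):
    """Regress each spectrum on the mean spectrum; return (x - intercept) / slope."""
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        slope, intercept = np.polyfit(ref, x, 1)
        out[i] = (x - intercept) / slope
    return out

base = np.sin(np.linspace(0, 3, 200)) + 2.0            # idealized spectrum
# Simulated scatter: per-sample multiplicative gain and additive offset
spectra = np.array([g * base + o for g, o in [(1.2, 0.3), (0.8, -0.2), (1.05, 0.1)]])
corrected = msc(spectra)
```

Since the simulated distortion is exactly affine, MSC collapses all three spectra onto the same corrected curve.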
Abstract:
This work combines the potential of near-infrared (NIR) spectroscopy and chemometrics to determine the diclofenac content of tablets without destroying the sample; ultraviolet spectroscopy, one of the official methods, was used as the reference. In the construction of the multivariate calibration models, several types of pre-processing of the NIR spectral data were studied, such as scatter correction and the first derivative. The regression method used to build the calibration models was PLS (partial least squares), using NIR spectroscopic data from a set of 90 tablets divided into two sets (calibration and prediction). 54 samples were used for calibration and 36 for prediction, since the calibration procedure used was full cross-validation, which eliminates the need for a separate validation set. The models were evaluated by observing the correlation coefficient R2, the root mean square error of calibration (RMSEC), and the root mean square error of prediction (RMSEP). The values predicted for the remaining 36 samples were consistent with those obtained by UV spectroscopy.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The purpose of this study was to differentiate dentoalveolar and skeletal effects in order to better understand orthodontic treatment. We evaluated the treatment changes associated with the bionator and the removable headgear splint (RHS). Methods: The sample comprised 51 consecutively enrolled Class II patients from one office: 17 had been successfully treated with a bionator, 17 with an RHS appliance, and 17 Class II patients waiting to start treatment later served as controls. A modified version of the Johnston pitchfork analysis was used to quantify the dentoalveolar and skeletal contributions to the anteroposterior correction at the levels of the molars and the incisors. Results: Both appliances significantly improved anteroposterior molar relationships (2.15 mm for the bionator, 2.27 mm for the RHS), primarily by dentoalveolar modifications (1.49 and 2.36 mm for the bionator and the RHS, respectively), with greater maxillary molar distalization in the RHS group. Overjet relationships also improved significantly compared with the controls (3.11 and 2.12 mm for the bionator and the RHS, respectively), due primarily to retroclination of the maxillary incisors (2.2 and 2.38 mm, respectively). The differences between overall corrections and dentoalveolar modifications for both molar and overjet relationships were explained by skeletal responses, with the bionator group showing significantly greater anterior mandibular displacement than the RHS group. Conclusions: The bionator and the RHS effectively corrected the molar relationships and overjets of Class II patients, primarily by dentoalveolar changes. (Am J Orthod Dentofacial Orthop 2008; 134: 732-41)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Statistical analysis of data is crucial in cephalometric investigations. There are certainly excellent examples of good statistical practice in the field, but some articles published worldwide have carried out inappropriate analyses. Objective: The purpose of this study was to show that when the double records of each patient are traced on the same occasion, a control chart for the differences between readings needs to be drawn, and limits of agreement and coefficients of repeatability must be calculated. Material and methods: Data from a well-known paper in Orthodontics were used to illustrate common statistical practices in cephalometric investigations and to propose a new technique of analysis. Results: A scatter plot of the two radiograph readings and of the two model readings, with the respective regression lines, is shown. A control chart for the mean of the differences between radiograph readings was also obtained, and a coefficient of repeatability was calculated. Conclusions: A standard error that assumes the mean differences are zero, referred to in Orthodontics and Facial Orthopedics as the Dahlberg error, can be calculated only for estimating precision if accuracy is already proven. When double readings are collected, limits of agreement and coefficients of repeatability must be calculated. A graph with the differences between readings should be presented and outliers discussed.
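The three quantities this abstract contrasts can be computed directly from the paired differences of double readings; the numbers below are illustrative, not the paper's data.

```python
# Hedged sketch: Dahlberg error vs. Bland-Altman limits of agreement and
# coefficient of repeatability, from double readings d_i = first - second.
import numpy as np

def dahlberg(d):
    """Dahlberg error: sqrt(sum(d^2) / 2n); implicitly assumes zero mean difference."""
    d = np.asarray(d, float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def limits_of_agreement(d):
    """95% limits of agreement: mean difference +/- 1.96 SD of the differences."""
    d = np.asarray(d, float)
    m, s = d.mean(), d.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def coefficient_of_repeatability(d):
    """1.96 times the SD of the paired differences."""
    return 1.96 * np.asarray(d, float).std(ddof=1)

first = np.array([23.1, 25.4, 22.8, 24.0, 26.2, 23.7])    # first tracing (deg)
second = np.array([23.4, 25.1, 23.0, 24.3, 26.0, 23.9])   # repeat tracing (deg)
d = first - second
lo, hi = limits_of_agreement(d)
```

Note that the Dahlberg error stays small even when the mean difference is not zero, which is exactly why the abstract argues it measures precision only after accuracy is established.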
Abstract:
The aims of this study were: (1) to verify the validity of previously proposed models to estimate the lowest exercise duration (T_LOW) and the highest intensity (I_HIGH) at which VO2max is reached, and (2) to test the hypothesis that the parameters involved in these models, and hence the validity of the models, are affected by aerobic training status. Thirteen cyclists (EC), eleven runners (ER), and ten untrained (U) subjects performed several cycle-ergometer exercise tests to fatigue in order to determine and estimate T_LOW (ET_LOW) and I_HIGH (EI_HIGH). The relationship between the time to achieve VO2max and the time to exhaustion (T_lim) was used to estimate ET_LOW. EI_HIGH was estimated using the critical power model. I_HIGH was taken as the highest intensity at which VO2 was equal to or higher than the average of the VO2max values minus one typical error. T_LOW was taken as the T_lim associated with I_HIGH. No differences were found in T_LOW between ER (170 +/- 31 s) and U (209 +/- 29 s); however, both showed higher values than EC (117 +/- 29 s). I_HIGH was similar between U (269 +/- 73 W) and ER (319 +/- 50 W), and both were lower than EC (451 +/- 33 W). EI_HIGH was similar to and significantly correlated with I_HIGH only in U (r = 0.87) and ER (r = 0.62). ET_LOW and T_LOW differed only for U and were not significantly correlated in any group. These data suggest that aerobic training status affects the validity of the proposed models for estimating I_HIGH.
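The two-parameter critical power model used for the intensity estimate is a linear fit of power against the reciprocal of time to exhaustion, P = CP + W'/t_lim. A hedged sketch on made-up trial data (CP and W' values are illustrative, not the study's):

```python
# Hedged sketch: fitting the two-parameter critical power model by least squares.
import numpy as np

def fit_critical_power(power, t_lim):
    """Linear fit of P against 1/t_lim: slope = W' (J), intercept = CP (W)."""
    w_prime, cp = np.polyfit(1.0 / np.asarray(t_lim, float),
                             np.asarray(power, float), 1)
    return cp, w_prime

# Synthetic exhaustive trials generated exactly from CP = 250 W, W' = 20 kJ
t_lim = np.array([120.0, 180.0, 300.0, 600.0])   # time to exhaustion (s)
power = 250.0 + 20_000.0 / t_lim                  # power output (W)
cp, w_prime = fit_critical_power(power, t_lim)
```

Because the synthetic trials lie exactly on the model, the fit recovers both parameters; with real trial data, the residual scatter is what drives differences between estimated and measured intensities across training groups.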
Abstract:
An iterated deferred correction algorithm based on Lobatto Runge-Kutta formulae is developed for the efficient numerical solution of nonlinear stiff two-point boundary value problems. An analysis of the stability properties of general deferred correction schemes based on implicit Runge-Kutta methods is given, and results analogous to those obtained for initial value problems are derived. A revised definition of symmetry is presented, which ensures that each deferred correction produces an optimal increase in order. Finally, some numerical results are given to demonstrate the superior performance of Lobatto formulae compared with mono-implicit formulae on stiff two-point boundary value problems. (C) 1998 Elsevier B.V. Ltd. All rights reserved.
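To make the deferred-correction idea concrete, here is a minimal numpy illustration of one correction sweep on a simple linear model problem (y'' = y with central differences), which is far simpler than the paper's Lobatto Runge-Kutta schemes for stiff nonlinear problems: the O(h^2) truncation error is estimated from the computed solution and the same linear system is re-solved for the error.

```python
# Hedged sketch: one deferred-correction sweep for y'' = y, y(0)=0, y(1)=sinh(1)
# (exact solution y = sinh x), using second-order central differences.
import numpy as np

def solve_fd(n):
    """Second-order finite differences for y'' - y = 0 on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)
    yl, yr = 0.0, np.sinh(1.0)
    A = (np.diag(np.full(n, -2.0 / h**2 - 1.0))
         + np.diag(np.full(n - 1, 1.0 / h**2), 1)
         + np.diag(np.full(n - 1, 1.0 / h**2), -1))
    b = np.zeros(n)
    b[0] -= yl / h**2                   # move known boundary values to the RHS
    b[-1] -= yr / h**2
    y = np.empty(n + 2)
    y[0], y[-1] = yl, yr
    y[1:-1] = np.linalg.solve(A, b)
    return x, y, A, h

def deferred_correction(y, A, h):
    """Estimate tau = (h^2/12) y'''' via fourth differences and solve A e = tau."""
    d4 = (y[:-4] - 4 * y[1:-3] + 6 * y[2:-2] - 4 * y[3:-1] + y[4:]) / h**4
    tau = np.empty(len(y) - 2)          # one value per interior node
    tau[1:-1] = d4
    tau[0], tau[-1] = d4[0], d4[-1]     # extrapolate at the two edge nodes
    tau *= h**2 / 12.0
    e = np.linalg.solve(A, tau)         # error equation: A (y_exact - y_h) ~ tau
    yc = y.copy()
    yc[1:-1] += e
    return yc

x, y, A, h = solve_fd(n=20)
yc = deferred_correction(y, A, h)
exact = np.sinh(x)
err_base = np.max(np.abs(y - exact))
err_corr = np.max(np.abs(yc - exact))
```

A single correction lifts the second-order solution toward fourth-order accuracy, which is the "optimal increase in order" the abstract's symmetry condition is designed to guarantee for the Runge-Kutta setting.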