36 results for Forecast error variance


Relevance:

90.00%

Publisher:

Abstract:

The main object of this paper is to discuss the Bayes estimation of the regression coefficients in the elliptically distributed simple regression model with measurement errors. The posterior distribution for the line parameters is obtained in closed form under the following assumptions: the ratio of the error variances is known, an informative prior distribution is placed on the error variance, and non-informative prior distributions are placed on the regression coefficients and on the incidental parameters. We prove that the posterior distribution of the regression coefficients has at most two real modes. Situations with a single mode are more likely than those with two modes, especially in large samples. The precision of the modal estimators is studied by deriving the Hessian matrix which, although complicated, can be computed numerically. The posterior mean is approximated using the Gibbs sampling algorithm and normal approximations. The results are applied to a real data set, and connections with results in the literature are reported.
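As a hedged illustration of the sampling step mentioned above (not the authors' elliptical measurement-error model), the sketch below runs a Gibbs sampler for an ordinary Gaussian simple regression with an informative inverse-gamma prior on the error variance and a flat prior on the coefficients; all data and prior values are hypothetical.

```python
# Minimal Gibbs sampler for Gaussian simple linear regression
# (illustrative stand-in; the paper's elliptical measurement-error
# model is more involved).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical): y = 1 + 2 x + noise
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

a0, b0 = 2.0, 1.0        # informative inverse-gamma prior on error variance
draws = []
sigma2 = 1.0
for _ in range(5000):
    # beta | sigma2, y  ~  N(beta_hat, sigma2 (X'X)^{-1})
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # sigma2 | beta, y  ~  Inv-Gamma(a0 + n/2, b0 + SSR/2)
    ssr = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + ssr / 2))
    draws.append(beta)

print("posterior mean of (alpha, beta):", np.mean(draws, axis=0))
```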

Relevance:

80.00%

Publisher:

Abstract:

Purpose: To study the oculometric parameters of hyperopia in children with esotropic amblyopia, comparing amblyopic eyes with fellow eyes. Methods: Thirty-seven patients (5-8 years old) with bilateral hyperopia and esotropic amblyopia underwent a comprehensive ophthalmic examination, including cycloplegic refraction, keratometry and A-scan ultrasonography. Anterior chamber depth, lens thickness, vitreous chamber depth and total axial length were recorded. The refractive power of the crystalline lens was calculated using Bennett's equations. Paired Student's t-tests were used to compare ocular biometric measurements between amblyopic eyes and their fellow eyes. The associations of biometric parameters with refractive errors were assessed using Pearson correlation coefficients and linear regression. Multivariable models including axial length, corneal power and lens power were also constructed. Results: Amblyopic eyes were found to have significantly more hyperopic refraction, less corneal power, greater lens power, shorter vitreous chamber depth and shorter axial length, despite similar anterior chamber depth and lens thickness. The strongest correlation with refractive error was observed for the axial length/corneal radius ratio (r(36) = -0.92, p < 0.001 for amblyopic eyes and r(36) = -0.87, p < 0.001 for fellow eyes). Axial length accounted for 39.2% (R(2)) of the refractive error variance in amblyopic eyes and 35.5% in fellow eyes. Adding corneal power to the model increased R(2) to 85.7% and 79.6%, respectively. A statistically significant correlation was found between axial length and corneal power, indicating decreasing corneal power with increasing axial length, and the correlations were similar for amblyopic eyes (r(36) = -0.53, p < 0.001) and fellow eyes (r(36) = -0.57, p < 0.001). A statistically significant correlation was also found between axial length and lens power, indicating decreasing lens power with increasing axial length (r(36) = -0.72, p < 0.001 for amblyopic eyes and r(36) = -0.69, p < 0.001 for fellow eyes). Conclusions: We observed that the correlations among the major oculometric parameters and their individual contributions to hyperopia in esotropic children were similar in amblyopic and non-amblyopic eyes. This finding suggests that the counterbalancing effect of greater corneal and lens power associated with shorter axial length is similar in both eyes of patients with esotropic amblyopia.
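A minimal sketch of the incremental-R(2) computation reported above (axial length alone, then axial length plus corneal power), using ordinary least squares on synthetic data; all values are hypothetical, not the study's measurements.

```python
# Sketch of the incremental-R^2 computation: OLS R^2 with axial length
# alone, then with corneal power added (all values synthetic/hypothetical).
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(1)
axial_length = rng.normal(22.0, 0.7, 37)                            # mm
corneal_power = 60.5 - 0.8 * axial_length + rng.normal(0, 0.5, 37)  # D
refraction = (3.0 - 2.0 * (axial_length - 22.0)
              - 0.8 * (corneal_power - 43.0) + rng.normal(0, 0.4, 37))

r2_al = r_squared(axial_length[:, None], refraction)
r2_both = r_squared(np.column_stack([axial_length, corneal_power]), refraction)
print(f"R^2, axial length alone: {r2_al:.2f}; plus corneal power: {r2_both:.2f}")
```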

Relevance:

80.00%

Publisher:

Abstract:

This work compares different nonlinear functions for describing the growth curves of Nelore females. The growth curve parameters, their (co)variance components, and environmental and genetic effects were estimated jointly through a Bayesian hierarchical model. In the first stage of the hierarchy, 4 nonlinear functions were compared: Brody, Von Bertalanffy, Gompertz, and logistic. The analyses were carried out using 3 different data sets to check goodness of fit when animals with few records are present. Three different assumptions about the SD of the fitting errors were considered: constant throughout the trajectory, increasing linearly until 3 yr of age and constant thereafter, and varying according to the nonlinear function applied in the first stage of the hierarchy. Comparisons of overall goodness of fit were based on the Akaike information criterion, the Bayesian information criterion, and the deviance information criterion. Goodness of fit at different points of the growth curve was compared by applying Gelfand's check function. The posterior means of adult BW ranged from 531.78 to 586.89 kg. Greater estimates of adult BW were observed when the fitting error variance was considered constant along the trajectory. The models were not suitable for describing the SD of fitting errors at the beginning of the growth curve. All functions provided less accurate predictions at the beginning of growth, and predictions were more accurate after 48 mo of age. The prediction of adult BW using nonlinear functions can be accurate when growth curve parameters and their (co)variance components are estimated jointly. The hierarchical model used in the present study can be applied to the prediction of mature BW in herds in which a portion of the animals are culled before adult age. The Gompertz, Von Bertalanffy, and Brody functions were adequate for establishing mean growth patterns and predicting the adult BW of Nelore females. The Brody model was more accurate in predicting the birth weight of these animals and presented the best overall goodness of fit.
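For reference, the four candidate functions named above can be written in the usual three-parameter form with asymptote A (adult BW), integration constant b and maturity rate k. The sketch below fits them by nonlinear least squares on hypothetical weight-age data as a simple, non-Bayesian illustration; the paper's hierarchical Bayesian estimation is not reproduced.

```python
# The four growth functions compared in the first stage of the hierarchy.
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, b, k):
    return A * (1.0 - b * np.exp(-k * t))

def von_bertalanffy(t, A, b, k):
    return A * (1.0 - b * np.exp(-k * t)) ** 3

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def logistic(t, A, b, k):
    return A / (1.0 + b * np.exp(-k * t))

t = np.array([0, 8, 12, 18, 24, 36, 48, 60], dtype=float)           # age, mo
w = np.array([32, 180, 230, 300, 350, 420, 470, 500], dtype=float)  # BW, kg

starts = {brody: (550, 0.95, 0.05), von_bertalanffy: (550, 0.6, 0.05),
          gompertz: (550, 2.5, 0.06), logistic: (550, 15, 0.1)}
for f, p0 in starts.items():
    params, _ = curve_fit(f, t, w, p0=p0, maxfev=20000)
    print(f"{f.__name__}: adult BW estimate = {params[0]:.1f} kg")
```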

Relevance:

20.00%

Publisher:

Abstract:

This work evaluates the performance of seasonal forecasts from the RegCM3 regional climate model nested in the CPTEC/COLA global model. The RegCM3 forecasts used a horizontal resolution of 60 km over a domain covering most of South America. The RegCM3 and CPTEC/COLA forecasts were evaluated against rainfall and air-temperature analyses from the Climate Prediction Center (CPC) and the National Centers for Environmental Prediction (NCEP), respectively. Between May 2005 and July 2007, 27 seasonal forecasts of rainfall and air temperature (except for CPTEC/COLA temperature, with 26 forecasts) were evaluated over three regions of Brazil: Northeast (NDE), Southeast (SDE) and South (SUL). The RegCM3 forecasts were also compared with the climatologies of the analyses. According to the statistical indices (bias, correlation coefficient, root mean square error and coefficient of efficiency), the seasonal rainfall predicted by RegCM3 is closer to the observations than that predicted by CPTEC/COLA in all three regions (NDE, SDE and SUL). Moreover, RegCM3 is also a better predictor of seasonal rainfall than the mean of the observations in the three regions. For temperature, the RegCM3 forecasts are superior to those of CPTEC/COLA in the NDE and SUL regions, while CPTEC/COLA is superior in SDE. Finally, the RegCM3 rainfall and temperature forecasts are closer to the observations than the observed climatology. These results indicate the potential of RegCM3 for seasonal forecasting, which should be further explored in the future through ensemble forecasting.
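The four verification indices named above are standard; a minimal sketch of their computation for paired forecast/observation series follows. The coefficient of efficiency is taken in its usual Nash-Sutcliffe form, an assumption about the exact definition used in the paper.

```python
# Forecast verification indices: bias, correlation, RMSE and the
# Nash-Sutcliffe coefficient of efficiency.
import numpy as np

def verification_indices(forecast, observed):
    forecast = np.asarray(forecast, float)
    observed = np.asarray(observed, float)
    bias = np.mean(forecast - observed)
    corr = np.corrcoef(forecast, observed)[0, 1]
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    # CE = 1 is a perfect forecast; CE = 0 means no better than
    # forecasting the observed mean (the climatology benchmark).
    ce = 1.0 - (np.sum((observed - forecast) ** 2)
                / np.sum((observed - observed.mean()) ** 2))
    return {"bias": bias, "corr": corr, "rmse": rmse, "ce": ce}

print(verification_indices([1.2, 0.8, 1.5], [1.0, 1.0, 1.4]))
```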

Relevance:

20.00%

Publisher:

Abstract:

This article discusses a combined forecasting model for producing climate prognoses on the seasonal scale. In it, point forecasts from stochastic models are aggregated to obtain the best projections over time. Autoregressive integrated moving average stochastic models, exponential smoothing models and forecasts based on canonical correlation analysis are used. Quality control of the forecasts is carried out through residual analysis and by evaluating the percentage reduction in the unexplained variance of the combined model relative to the forecasts of the individual models. Examples of the application of these concepts to models developed at the Instituto Nacional de Meteorologia (INMET) show good results and illustrate that, in most cases, the forecasts of the combined model outperform those of each component model when compared against the observed data.
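One common way to aggregate point forecasts, consistent with the variance-reduction criterion described above, is to weight each model inversely to its historical error variance; the sketch below shows this scheme on hypothetical numbers (the article's exact weighting rule is not specified here).

```python
# Inverse-variance combination of point forecasts (illustrative scheme).
import numpy as np

def combine(forecasts, error_variances):
    """forecasts, error_variances: one entry per component model."""
    w = 1.0 / np.asarray(error_variances, float)
    w /= w.sum()                       # normalise weights to sum to 1
    return float(w @ np.asarray(forecasts, float))

# Hypothetical ARIMA, exponential-smoothing and CCA-based forecasts
print(combine([25.3, 26.1, 24.8], [1.5, 2.0, 3.0]))
```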

Relevance:

20.00%

Publisher:

Abstract:

Introduction. This method is used to forecast the harvest date of banana bunches from as early as the plant shooting stage. It facilitates the harvest of bunches with the same physiological age. The principle, key advantages, time required and expected results are presented. Materials and methods. The four steps of the method (installation of the temperature sensor, tagging of bunches at the flowering stage, temperature sum calculation and estimation of the bunch harvest date) are described in detail. Possible problems are discussed. Results. The application of the method allows a curve to be drawn of the temperature sum accumulated by the bunches, which are to be harvested at a physiological age of exactly 900 degree-days.
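A minimal sketch of the temperature-sum step: accumulate the daily mean temperature above a base temperature from the flowering date, and flag the harvest date when 900 degree-days is reached. The base temperature used here (14 °C) is an assumption for illustration, not a value taken from the paper.

```python
# Degree-day accumulation from flowering to the 900 dd harvest target.
BASE_T = 14.0    # assumed base temperature, deg C (illustrative)
TARGET = 900.0   # physiological age at harvest, degree-days

def harvest_day(daily_mean_temps):
    """Return the day index (from flowering) at which 900 dd is reached."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(t - BASE_T, 0.0)
        if total >= TARGET:
            return day
    return None  # target not yet reached

# e.g. a constant 24 deg C gives 10 dd/day, so harvest near day 90
print(harvest_day([24.0] * 120))
```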

Relevance:

20.00%

Publisher:

Abstract:

Background: Genome-wide association studies (GWAS) are becoming the approach of choice for identifying genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach for comparing and combining data generated in different studies. Several reports have stated that imputed markers have overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10(-5) for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific minor allele frequency (MAF) ranges, located in weak linkage disequilibrium blocks or strongly deviating from local patterns of association, are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
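For context, the per-marker statistic being compared between imputed and empirical data is typically a simple allelic association test; a minimal sketch with hypothetical allele counts follows.

```python
# Basic allelic (2x2) chi-square association test for one marker,
# the kind of statistic compared between imputed and genotyped data.
from scipy.stats import chi2_contingency

# rows: cases / controls; columns: minor / major allele counts
table = [[180, 820],
         [120, 880]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")
```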

Relevance:

20.00%

Publisher:

Abstract:

In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are strongly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
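As background for the residual test mentioned above, the sketch below computes classical normalised residuals in a linear weighted-least-squares state estimator; the measurement model and numbers are hypothetical, and the innovation-index composition itself is not reproduced.

```python
# Classical normalised-residual test in linear WLS state estimation,
# the baseline the composed-residual test extends. Model: z = H x + e.
import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
R = np.diag([0.01, 0.01, 0.02, 0.02])     # measurement error covariances
z = np.array([1.00, 2.01, 3.20, -0.98])   # gross error planted in z[2]

Rinv = np.linalg.inv(R)
G = H.T @ Rinv @ H                          # gain matrix
x_hat = np.linalg.solve(G, H.T @ Rinv @ z)  # WLS state estimate
r = z - H @ x_hat                           # measurement residuals
Omega = R - H @ np.linalg.inv(G) @ H.T      # residual covariance matrix
r_N = r / np.sqrt(np.diag(Omega))           # normalised residuals
print("largest |r_N| flags measurement", int(np.argmax(np.abs(r_N))))
```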

Relevance:

20.00%

Publisher:

Abstract:

With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general, the most important for newer machines. The present work demonstrates the evaluation and modelling of the behaviour of the thermal errors of a CNC cylindrical grinding machine during its warm-up period.
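Thermal warm-up drift is often approximated by a first-order exponential response; the sketch below fits such a curve to hypothetical warm-up measurements. This is a generic modelling choice for illustration, not necessarily the model used in the paper.

```python
# First-order thermal response: e(t) = E_max * (1 - exp(-t / tau)),
# a common approximation for warm-up drift (illustrative assumption).
import numpy as np
from scipy.optimize import curve_fit

def warmup(t, e_max, tau):
    return e_max * (1.0 - np.exp(-t / tau))

t = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)  # minutes
e = np.array([0.0, 6.2, 10.8, 16.0, 18.7, 20.3, 20.9])   # um, hypothetical

(e_max, tau), _ = curve_fit(warmup, t, e, p0=(20.0, 30.0))
print(f"steady-state error ~ {e_max:.1f} um, time constant ~ {tau:.0f} min")
```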

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the time-variant reliability analysis of structures with random resistance or random system parameters. It deals with the problem of a random load process crossing a random barrier level. The implications of approximating the arrival rate of the first overload by an ensemble-crossing rate are studied. The error involved in this so-called "ensemble-crossing rate" approximation is described in terms of load process and barrier distribution parameters, and in terms of the number of load cycles. Existing results are reviewed, and significant improvements involving load process bandwidth, mean-crossing frequency and time are presented. The paper shows that the ensemble-crossing rate approximation can be accurate enough for problems where the load process variance is large in comparison to the barrier variance, but especially when the number of load cycles is small. This includes important practical applications such as random vibration due to impact loading and earthquake loading. Two application examples are presented, one involving earthquake loading and one involving a frame structure subject to wind and snow loadings.
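A minimal numerical sketch of the approximation being analysed: for a stationary Gaussian load process and a Gaussian random barrier, compare the failure probability with barrier randomness kept outside the exponent against the ensemble-crossing rate version; all parameter values are hypothetical.

```python
# Ensemble-crossing rate approximation vs. conditioning on the barrier:
# pf_cond = 1 - E_R[exp(-nu(R) t)]   (Poisson crossings, random barrier R)
# pf_ens  = 1 - exp(-E_R[nu(R)] t)   (the ensemble approximation)
import numpy as np

rng = np.random.default_rng(2)
nu0, mu_s, sig_s = 0.5, 0.0, 1.0   # crossing parameters of the load process
mu_r, sig_r = 3.0, 0.3             # random barrier distribution
t = 100.0                          # duration / number of load cycles

def nu(r):                         # Rice up-crossing rate of level r
    return nu0 * np.exp(-((r - mu_s) ** 2) / (2 * sig_s ** 2))

r = rng.normal(mu_r, sig_r, 200_000)
pf_cond = 1 - np.mean(np.exp(-nu(r) * t))
pf_ens = 1 - np.exp(-np.mean(nu(r)) * t)
print(f"conditional: {pf_cond:.4f}   ensemble approximation: {pf_ens:.4f}")
```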

Relevance:

20.00%

Publisher:

Abstract:

We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error-correcting codes, rather than restricted families such as alternant codes, for which a decoding trapdoor is known to exist.
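A toy illustration of the underlying primitive, not the signature scheme itself: over GF(2), the syndrome of a low-weight error vector e under a parity-check matrix H is s = He^T, and syndrome decoding asks for a low-weight e matching a given s. The matrix below is hypothetical.

```python
# Syndrome computation over GF(2) for a toy parity-check matrix.
import numpy as np

H = np.array([[1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)  # hypothetical H
e = np.array([0, 0, 1, 0, 0, 0, 1], dtype=np.uint8)    # weight-2 error

syndrome = H @ e % 2   # matrix-vector product reduced mod 2
print("syndrome:", syndrome)
```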

Relevance:

20.00%

Publisher:

Abstract:

Reconciliation can be divided into stages, each stage representing the performance of a mining operation, such as: long-term estimation, short-term estimation, planning, mining and mineral processing. The gold industry includes a further stage, the budget, in which the company informs the financial market of its annual production forecast. The division of reconciliation into stages increases the reliability of the annual budget reported by mining companies, while also detecting and correcting the critical steps responsible for the overall estimation error through the optimization of sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, based on correct sampling practices and the subdivision of reconciliation into stages, aiming for better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.

Relevance:

20.00%

Publisher:

Abstract:

The common practice of reconciliation is based on the definition of the mine call factor (MCF) and its application to resource or grade control estimates. The MCF expresses the difference, as a ratio or percentage, between the predicted grade and the grade reported by the plant; its application therefore allows future estimates to be corrected. This practice is named reactive reconciliation. However, the use of generic factors applied across differing time scales and material types often disguises the causes of the error responsible for the discrepancy. The root causes of any given variance can only be identified by analyzing the information behind that variance and then making changes to methodologies and processes. This practice is named prognostication, or proactive reconciliation: an iterative process resulting in constant recalibration of the inputs and the calculations. Prognostication allows personnel to adjust processes so that results align within acceptable tolerance ranges, rather than only correcting model estimates. This study analyzes the reconciliation practices performed at a gold mine in Brazil and suggests a new sampling protocol based on prognostication concepts.
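The MCF definition above amounts to a one-line computation; a sketch with hypothetical grades follows, taking the usual plant-to-model ratio convention.

```python
# Mine call factor: ratio between the grade reported by the plant and
# the grade predicted by the resource/grade-control model; applied as a
# correction to future estimates (reactive reconciliation).
def mine_call_factor(predicted_grade, plant_grade):
    return plant_grade / predicted_grade

mcf = mine_call_factor(predicted_grade=2.10, plant_grade=1.89)  # g/t, hypothetical
print(f"MCF = {mcf:.2f} -> corrected estimate = {2.10 * mcf:.2f} g/t")
```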

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this article is to present a quantitative analysis of the contribution of human failure to collisions and/or groundings of oil tankers, following the recommendations of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the methodology employed is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. This methodology is then applied to a ship operating on the Brazilian coast, and the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. An operator will therefore be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can be considered a reference for human reliability analysis in the maritime industry, which, despite having some guides for risk analysis, has few studies on human reliability effectively applied to the sector.

Relevance:

20.00%

Publisher:

Abstract:

Due to the several kinds of services that use the Internet and data network infrastructures, present-day networks are characterized by a diversity of traffic types with statistical properties such as complex temporal correlation and non-Gaussian distributions. The complex temporal correlation of network traffic may be characterized by short-range dependence (SRD) and long-range dependence (LRD). Models such as fGN (fractional Gaussian noise) may capture the LRD but not the SRD. This work presents two methods for traffic generation that synthesize approximate realizations of a self-similar fGN process with SRD. The first employs the IDWT (Inverse Discrete Wavelet Transform) and the second the IDWPT (Inverse Discrete Wavelet Packet Transform). The variance map concept is developed, which allows the LRD and SRD behaviors to be associated directly with the wavelet transform coefficients. The developed methods are extremely flexible and allow the generation of Gaussian time series with complex statistical behaviors.
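A minimal sketch of the IDWT-based generator, under one common convention for the fGN variance map (per-scale coefficient variance proportional to 2^(-j(2H-1)) for Hurst parameter H): draw independent Gaussian wavelet coefficients scale by scale and apply the inverse transform. The Haar wavelet is used here so the coefficient lengths line up; it is an illustrative choice, not necessarily the paper's, and the SRD shaping step is not reproduced.

```python
# Approximate fGN synthesis via the inverse discrete wavelet transform:
# per-scale coefficient variances follow the assumed fGN variance map.
import numpy as np
import pywt

def fgn_like(n_levels=10, H=0.8, seed=0):
    rng = np.random.default_rng(seed)
    coeffs = [rng.normal(size=1)]               # approximation coefficients
    for j in range(1, n_levels + 1):            # j=1 coarsest detail scale
        var = 2.0 ** (-j * (2 * H - 1))         # variance map across scales
        coeffs.append(rng.normal(scale=np.sqrt(var), size=2 ** (j - 1)))
    return pywt.waverec(coeffs, "haar")         # inverse DWT

x = fgn_like()
print(len(x), x[:4])
```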