989 results for Error serial correlation


Relevance:

80.00%

Publisher:

Abstract:

In this article we investigate the asymptotic and finite-sample properties of predictors of regression models with autocorrelated errors. We prove new theorems associated with the predictive efficiency of generalized least squares (GLS) and incorrectly structured GLS predictors. We also establish the form associated with their predictive mean squared errors as well as the magnitude of these errors relative to each other and to those generated from the ordinary least squares (OLS) predictor. A large simulation study is used to evaluate the finite-sample performance of forecasts generated from models using different corrections for the serial correlation.
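
As an illustration of the comparison described above, the following minimal sketch (not the authors' setup) simulates a regression with AR(1) errors and contrasts out-of-sample forecasts from OLS and from feasible GLS with an AR(1) correction, using statsmodels' GLSAR. A full GLS predictor would also forecast the error term from its autoregressive structure; the sketch only reflects the gain from re-weighted coefficient estimation.

```python
# Minimal sketch (not the authors' setup): compare OLS and AR(1)-corrected GLS
# forecasts on simulated data with serially correlated errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, rho = 200, 0.7

# Regressor and AR(1) errors
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

X = sm.add_constant(x)
X_train, X_test = X[:-20], X[-20:]
y_train, y_test = y[:-20], y[-20:]

ols = sm.OLS(y_train, X_train).fit()
gls = sm.GLSAR(y_train, X_train, rho=1).iterative_fit(maxiter=10)  # feasible GLS, AR(1)

for name, model in [("OLS", ols), ("GLSAR", gls)]:
    pred = model.predict(X_test)
    print(name, "out-of-sample MSE:", np.mean((y_test - pred) ** 2))
```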

Relevance:

80.00%

Publisher:

Abstract:

In this article a partial-adjustment model, which shows how equity prices fail to adjust instantaneously to new information, is estimated using a Kalman filter. For the components of the Dow Jones Industrial 30 index I aim to identify whether overreaction or noise is the cause of the serial correlation and high volatility associated with opening returns. I find that the tendency for overreaction in opening prices is much stronger than for closing prices; therefore, overreaction rather than noise may account for the differences in behavior between opening and closing returns.
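
The following is a hedged sketch of how such a partial-adjustment model can be cast in state-space form and estimated with a hand-rolled Kalman filter; the adjustment-speed parameterization and the simulated prices are illustrative assumptions, not the article's exact specification.

```python
# Hedged sketch: a partial-adjustment price model in state-space form, filtered
# with a hand-rolled Kalman filter. The adjustment speed g, the parameterization
# and the simulated data are illustrative, not the article's specification.
import numpy as np
from scipy.optimize import minimize

def kalman_loglik(params, p):
    g, log_q, log_r = params
    q, r = np.exp(log_q), np.exp(log_r)
    m, P = p[0], 1.0          # initial efficient-price estimate and its variance
    ll = 0.0
    for t in range(1, len(p)):
        # Predict: efficient price m_t follows a random walk
        m_pred, P_pred = m, P + q
        # Observe: p_t = g * m_t + (1 - g) * p_{t-1} + u_t
        y_pred = g * m_pred + (1.0 - g) * p[t - 1]
        S = g * g * P_pred + r            # innovation variance
        v = p[t] - y_pred                 # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + v * v / S)
        # Update
        K = P_pred * g / S
        m = m_pred + K * v
        P = (1.0 - K * g) * P_pred
    return ll

# Simulated opening (log) prices that adjust partially toward a random walk
rng = np.random.default_rng(1)
n, g_true = 1000, 0.6
m = np.cumsum(rng.normal(scale=0.01, size=n))        # efficient price
p = np.empty(n)
p[0] = m[0]
for t in range(1, n):
    p[t] = p[t - 1] + g_true * (m[t] - p[t - 1]) + rng.normal(scale=0.002)

res = minimize(lambda th: -kalman_loglik(th, p),
               x0=[0.5, np.log(1e-4), np.log(1e-5)],
               bounds=[(0.05, 1.5), (None, None), (None, None)])
print("estimated adjustment speed g:", res.x[0])   # g > 1 would suggest overreaction
```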

Relevance:

80.00%

Publisher:

Abstract:

The techniques and insights from two distinct areas of financial economic modelling are combined to provide evidence of the influence of firm size on the volatility of stock portfolio returns. Portfolio returns are characterized by positive serial correlation induced by the varying levels of non-synchronous trading among the component stocks. This serial correlation is greatest for portfolios of small firms. The conditional volatility of stock returns has been shown to be well represented by the GARCH family of statistical processes. Using a GARCH model of the variance of capitalization-based portfolio returns, conditioned on the autocorrelation structure in the conditional mean, striking differences related to firm size are uncovered.
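
A minimal sketch of this modelling strategy, on simulated data rather than capitalization-based portfolios: an AR(1) term in the conditional mean absorbs the serial correlation induced by non-synchronous trading, while a GARCH(1,1) process describes the conditional variance (using the Python arch package).

```python
# Hedged sketch: fit an AR(1)-GARCH(1,1) model to a simulated small-cap-style
# return series. Data and parameter values are illustrative, not the paper's.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
n = 2000
ret = np.zeros(n)
eps = np.zeros(n)
sigma2 = np.full(n, 1e-4)
for t in range(1, n):
    sigma2[t] = 1e-6 + 0.08 * eps[t - 1] ** 2 + 0.90 * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.normal()
    # positive serial correlation mimicking non-synchronous trading
    ret[t] = 0.25 * ret[t - 1] + eps[t]

am = arch_model(100 * ret, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.params)   # AR coefficient (autocorrelation) and GARCH parameters
```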

Relevance:

40.00%

Publisher:

Abstract:

To obtain the desired accuracy of a robot, two techniques are available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters would match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement tightens. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated for. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capacity and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate real experimental validation. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
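
The parameter-identification step can be illustrated with a deliberately small example. The sketch below applies differential evolution (scipy) to a toy planar two-link arm, not the 10-DOF hybrid robot: nominal link lengths carry unknown errors, and the search recovers them from simulated pose measurements.

```python
# Hedged sketch of the identification principle on a toy planar 2R arm.
import numpy as np
from scipy.optimize import differential_evolution

L_NOMINAL = np.array([0.50, 0.35])                   # nominal link lengths [m]
L_ACTUAL = L_NOMINAL + np.array([0.004, -0.002])     # unknown manufacturing errors

def forward_kinematics(lengths, joints):
    """Tool position of a planar 2R arm for each joint configuration."""
    q1, q2 = joints[:, 0], joints[:, 1]
    x = lengths[0] * np.cos(q1) + lengths[1] * np.cos(q1 + q2)
    y = lengths[0] * np.sin(q1) + lengths[1] * np.sin(q1 + q2)
    return np.column_stack([x, y])

rng = np.random.default_rng(3)
joints = rng.uniform(-np.pi, np.pi, size=(30, 2))    # calibration poses
measured = forward_kinematics(L_ACTUAL, joints) + rng.normal(scale=1e-5, size=(30, 2))

def cost(delta):
    """Sum of squared residuals between the error model and measured positions."""
    pred = forward_kinematics(L_NOMINAL + delta, joints)
    return np.sum((pred - measured) ** 2)

result = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 2, seed=3, tol=1e-12)
print("identified length errors [m]:", result.x)     # close to [0.004, -0.002]
```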

Relevance:

40.00%

Publisher:

Abstract:

PURPOSE: The aim was to assess changes in tumour hypoxia during primary radiochemotherapy (RCT) for head and neck cancer (HNC) and to evaluate their relationship with treatment outcome. MATERIAL AND METHODS: Hypoxia was assessed by FMISO-PET in weeks 0, 2 and 5 of RCT. The tumour volume (TV) was determined using FDG-PET/MRI/CT co-registered images. The level of hypoxia was quantified on FMISO-PET as TBRmax (SUVmax of the TV / SUVmean of the background). The hypoxic subvolume (HSV) was defined as the part of the TV that showed FMISO uptake ⩾1.4 times blood pool activity. RESULTS: Sixteen consecutive patients (T3-4, N+, M0) were included (mean follow-up 31 months, median 44 months). Mean TBRmax decreased significantly (p<0.05) from 1.94 to 1.57 (week 2) and 1.27 (week 5). Mean HSVs in week 2 and week 5 (HSV2 = 5.8 ml, HSV3 = 0.3 ml) were significantly (p<0.05) smaller than at baseline (HSV1 = 15.8 ml). Kaplan-Meier plots of local recurrence-free survival stratified at the median TBRmax showed superior local control for less hypoxic tumours, the difference being significant at baseline and after 2 weeks (p=0.031 and p=0.016, respectively). CONCLUSIONS: FMISO-PET documented that in most HNC reoxygenation starts early during RCT and is correlated with better outcome.
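
For readers unfamiliar with the two image-derived quantities, the following sketch computes TBRmax and a hypoxic subvolume on synthetic voxel data; the array shapes, voxel size and SUV values are illustrative assumptions, not study data.

```python
# Hedged sketch of the two image-derived quantities on synthetic voxel data:
# TBRmax (SUVmax inside the tumour volume / mean background SUV) and the
# hypoxic subvolume (tumour voxels with FMISO uptake >= 1.4 x blood pool).
import numpy as np

rng = np.random.default_rng(4)
voxel_volume_ml = 0.004                                  # e.g. 2 x 2 x 1 mm voxels

fmiso_suv = rng.gamma(shape=4.0, scale=0.4, size=(60, 60, 40))  # fake FMISO PET
tv_mask = np.zeros_like(fmiso_suv, dtype=bool)
tv_mask[20:40, 20:40, 10:30] = True                      # tumour volume (from co-registration)

background_mean = 1.0                                    # mean SUV in a background ROI
blood_pool_mean = 1.1                                    # mean SUV in a blood-pool ROI

tbr_max = fmiso_suv[tv_mask].max() / background_mean
hsv_mask = tv_mask & (fmiso_suv >= 1.4 * blood_pool_mean)
hsv_ml = hsv_mask.sum() * voxel_volume_ml

print(f"TBRmax = {tbr_max:.2f}, hypoxic subvolume = {hsv_ml:.1f} ml")
```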

Relevance:

30.00%

Publisher:

Abstract:

Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach that allows the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10⁻⁵ for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false-positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
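
The kind of pairwise comparison performed in the study can be sketched as follows on one simulated case-control marker, contrasting the association statistic from empirical genotypes with the one from noisily imputed dosages; the imputation-noise model is a deliberate simplification.

```python
# Hedged sketch: pairwise comparison of association statistics from empirical
# genotypes versus noisy "imputed" dosages for one simulated case-control marker.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, maf, odds_ratio = 4000, 0.15, 1.2

geno = rng.binomial(2, maf, size=n)                        # empirical genotypes 0/1/2
logit = np.log(0.2) + np.log(odds_ratio) * geno
status = rng.binomial(1, 1 / (1 + np.exp(-logit)))         # case/control labels

dosage = np.clip(geno + rng.normal(scale=0.35, size=n), 0, 2)   # simulated imputed dosages

def trend_pvalue(g, y):
    """Armitage-style trend test via the genotype-status correlation (chi2 = N * r^2)."""
    r = np.corrcoef(g, y)[0, 1]
    chi2 = len(g) * r ** 2
    return stats.chi2.sf(chi2, df=1)

print("empirical genotypes p =", trend_pvalue(geno, status))
print("imputed dosages    p =", trend_pvalue(dosage, status))
```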

Relevance:

30.00%

Publisher:

Abstract:

In spite of considerable technical advances in MRI techniques, the optical resolution of these methods is still limited. Consequently, the delineation of cytoarchitectonic fields based on probabilistic maps, as well as brain volume changes and the small-scale changes seen in MRI scans, needs to be verified by neuroanatomical/neuropathological diagnostic tools. To meet the current interdisciplinary needs of the scientific community, brain banks have to broaden their scope in order to provide high-quality tissue suitable for neuroimaging-neuropathology/anatomy correlation studies. The Brain Bank of the Brazilian Aging Brain Research Group (BBBABSG) of the University of Sao Paulo Medical School (USPMS) collaborates with researchers interested in neuroimaging-neuropathological correlation studies, providing brains submitted to postmortem MRI in situ. In this paper we describe and discuss the parameters established by the BBBABSG to select and handle brains for fine-scale neuroimaging-neuropathological correlation studies, and to exclude inappropriate/unsuitable autopsy brains. We assessed the impact of postmortem time and storage of the corpse on the quality of the MRI scans and sought to establish the fixation protocols most appropriate for these correlation studies. After investigation of a total of 36 brains, postmortem interval and low body temperature proved to be the main factors determining the quality of routine MRI protocols. Perfusion fixation of the brains after autopsy with mannitol 20% followed by formalin 20% was the best method for preserving the original brain shape and volume and for allowing further routine and immunohistochemical staining. Taken together, these parameters offer methodological progress in the screening and processing of human postmortem tissue, guaranteeing high-quality material for unbiased correlation studies and avoiding unnecessary expenditure on post-imaging analyses and histological processing of brain tissue.

Relevance:

30.00%

Publisher:

Abstract:

Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within and between sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
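
A moment-based deattenuation, shown below on simulated multi-site data, conveys the idea, although it is not the maximum-likelihood estimator proposed in the paper: subtracting the known within-sub-population sampling variance from the observed variances approximately removes the attenuation of the Pearson correlation, assuming independent sampling errors in the two measurements.

```python
# Hedged sketch: moment-based deattenuation of the correlation of sub-population
# means, not the paper's maximum-likelihood estimator.
import numpy as np

rng = np.random.default_rng(6)
k, n_per_site = 25, 40                       # sub-populations, subjects per site
true_rho = 0.6

# True site-level means, correlated across sites
cov = np.array([[1.0, true_rho], [true_rho, 1.0]])
site_means = rng.multivariate_normal([0, 0], cov, size=k)

# Observed site means carry within-site sampling error (within-site variance = 4)
x_obs = site_means[:, 0] + rng.normal(scale=np.sqrt(4.0 / n_per_site), size=k)
y_obs = site_means[:, 1] + rng.normal(scale=np.sqrt(4.0 / n_per_site), size=k)
se2_x = se2_y = 4.0 / n_per_site             # known sampling variances of the means

naive = np.corrcoef(x_obs, y_obs)[0, 1]
corrected = np.cov(x_obs, y_obs)[0, 1] / np.sqrt(
    (np.var(x_obs, ddof=1) - se2_x) * (np.var(y_obs, ddof=1) - se2_y))

print("naive Pearson:", round(naive, 3), " deattenuated:", round(corrected, 3))
```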

Relevance:

30.00%

Publisher:

Abstract:

Background and aims: Hip fracture is a devastating event in terms of outcome in the elderly, and the best predictor of hip fracture risk is hip bone density, usually measured by dual X-ray absorptiometry (DXA). However, bone density can also be ascertained from computerized tomography (CT) scans, and mid-thigh scans are frequently employed to assess the muscle and fat composition of the lower limb. Therefore, we examined if it was possible to predict hip bone density using mid-femoral bone density. Methods: Subjects were 803 ambulatory white and black women and men, aged 70-79 years, participating in the Health, Aging and Body Composition (Health ABC) Study. Bone mineral content (BMC, g) and volumetric bone mineral density (vBMD, mg/cm³) of the mid-femur were obtained by CT, whereas BMC and areal bone mineral density (aBMD, g/cm²) of the hip (femoral neck and trochanter) were derived from DXA. Results: In regression analyses stratified by race and sex, the coefficient of determination was low, with mid-femoral BMC explaining 6-27% of the variance in hip BMC and a standard error of estimate (SEE) ranging from 16 to 22% of the mean. For mid-femur vBMD, the variance explained in hip aBMD was 2-17%, with a SEE ranging from 15 to 18%. Adjusting aBMD to approximate volumetric density did not improve the relationships. In addition, the utility of fracture prediction was examined. Forty-eight subjects had one or more fractures (various sites) during a mean follow-up of 4.07 years. In logistic regression analysis, there was no association between mid-femoral vBMD and fracture (all fractures), whereas a 1 SD increase in hip BMD was associated with reduced odds for fracture of approximately 60%. Conclusions: These results do not support the use of CT-derived mid-femoral vBMD or BMC to predict DXA-measured hip bone mineral status, irrespective of race or sex in older adults. Further, in contrast to femoral neck and trochanter BMD, mid-femur vBMD was not able to predict fracture (all fractures). (C) 2003, Editrice Kurtis.
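
The two analyses can be sketched on simulated data as follows (effect sizes and distributions are illustrative, not Health ABC data): a stratified-style OLS regression reporting R² and the standard error of estimate, and a logistic fracture model reporting the change in odds per 1 SD of hip BMD.

```python
# Hedged sketch on simulated data: (i) variance in hip aBMD explained by a
# mid-femur density measure (R^2 and SEE), and (ii) a logistic fracture model
# reporting odds per 1 SD of hip BMD. Values are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 800
midfemur_vbmd = rng.normal(1.0, 0.15, n)                        # rescaled CT density
hip_abmd = 0.7 + 0.1 * midfemur_vbmd + rng.normal(0, 0.12, n)   # weak relationship

ols = sm.OLS(hip_abmd, sm.add_constant(midfemur_vbmd)).fit()
see = np.sqrt(ols.mse_resid)
print(f"R^2 = {ols.rsquared:.2f}, SEE = {100 * see / hip_abmd.mean():.0f}% of mean")

# Fracture risk versus standardized hip BMD
z_hip = (hip_abmd - hip_abmd.mean()) / hip_abmd.std(ddof=1)
p_frac = 1 / (1 + np.exp(-(-2.8 - 0.9 * z_hip)))                # protective effect
fracture = rng.binomial(1, p_frac)
logit = sm.Logit(fracture, sm.add_constant(z_hip)).fit(disp=0)
odds_change = np.exp(logit.params[1]) - 1
print(f"odds change per 1 SD higher hip BMD: {100 * odds_change:.0f}%")
```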

Relevance:

30.00%

Publisher:

Abstract:

The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm make it possible to design novel scalable coding solutions with improved error robustness and low encoding complexity while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimation of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both encoder and decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
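
Correlation noise in distributed video coding is commonly modelled as Laplacian; the generic per-block estimator sketched below illustrates that idea but is not the specific correlation model proposed in this paper.

```python
# Hedged sketch: a generic per-block Laplacian model of the correlation noise
# between an original residue and its side-information estimate. Illustrative
# only; not the paper's proposed correlation modeling solution.
import numpy as np

def laplacian_alpha(original_residue, si_residue, block=8):
    """Per-block Laplacian scale alpha = sqrt(2 / variance) of the correlation
    noise n = original residue - side-information residue."""
    noise = original_residue.astype(float) - si_residue.astype(float)
    h, w = noise.shape
    alphas = np.empty((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            var = noise[i:i + block, j:j + block].var() + 1e-12
            alphas[i // block, j // block] = np.sqrt(2.0 / var)
    return alphas   # larger alpha -> stronger correlation -> fewer bits needed

# Toy usage with random residues standing in for enhancement-layer data
rng = np.random.default_rng(8)
orig = rng.normal(size=(64, 64))
si = orig + rng.normal(scale=0.3, size=(64, 64))   # imperfect decoder-side estimate
print(laplacian_alpha(orig, si).mean())
```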

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with respect to the serial implementation.

Relevance:

30.00%

Publisher:

Abstract:

The Iowa Department of Transportation has been determining a present serviceability index (PSI) on the primary highway system since 1968. A CHLOE profilometer has been used as the standard for calibrating the Roadmeters that do the system survey. The current Roadmeter, an IJK unit developed by the Iowa DOT, is not considered an acceptable Roadmeter for determining the FHWA-required International Roughness Index (IRI). Iowa purchased a commercial version of the South Dakota type profiler (SD Unit) to obtain the IRI. This study was undertaken to correlate the IRI to the IJK Roadmeter and retire the Roadmeter. One hundred forty-seven pavement management sections (IPMS) were tested in June and July 1991 with both units. Correlation coefficients (r) and standard errors of estimate were: PCC pavements, r = 0.81, std. error = 0.15; composite pavements, r = 0.71, std. error = 0.18; ACC pavements, r = 0.77, std. error = 0.17. The correlation equations developed from this work will allow use of the IRI to predict the IJK Roadmeter response with sufficient accuracy. Trend analysis should also not be affected.
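
The regression underlying these figures can be sketched as follows on made-up roughness data: fit the IJK Roadmeter response against the IRI for one pavement type and report the correlation coefficient and the standard error of estimate.

```python
# Hedged sketch on made-up roughness data: correlate a Roadmeter-type index
# with the IRI and report r and the standard error of estimate.
import numpy as np

rng = np.random.default_rng(9)
iri = rng.uniform(0.8, 4.0, 50)                            # m/km, illustrative range
ijk = 0.4 + 0.55 * iri + rng.normal(scale=0.35, size=50)   # fake IJK response

slope, intercept = np.polyfit(iri, ijk, 1)
pred = slope * iri + intercept
r = np.corrcoef(iri, ijk)[0, 1]
see = np.sqrt(np.sum((ijk - pred) ** 2) / (len(iri) - 2))  # standard error of estimate

print(f"IJK = {intercept:.2f} + {slope:.2f} * IRI,  r = {r:.2f}, SEE = {see:.2f}")
```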

Relevance:

30.00%

Publisher:

Abstract:

When researchers introduce a new test, they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme-scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
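
The following Monte Carlo sketch reproduces the kind of demonstration described: with many candidate predictors that are pure noise, selecting the few that look most correlated with the outcome (a crude stand-in for stepwise regression) yields a spuriously large multiple correlation.

```python
# Hedged sketch: under the null (predictors unrelated to the outcome), picking
# the best-looking predictors before fitting inflates the apparent validity.
import numpy as np

rng = np.random.default_rng(10)
n, k, n_sim, keep = 100, 20, 500, 3
inflated_R = []

for _ in range(n_sim):
    X = rng.normal(size=(n, k))                 # noise "predictors"
    y = rng.normal(size=n)                      # outcome unrelated to X
    # pick the 3 predictors most correlated with y (forward-selection caricature)
    cors = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
    Xs = X[:, np.argsort(cors)[-keep:]]
    design = np.column_stack([np.ones(n), Xs])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    yhat = design @ beta
    inflated_R.append(np.corrcoef(yhat, y)[0, 1])

print("mean multiple R under the null:", np.mean(inflated_R))   # clearly above zero
```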