935 results for interval-censored data


Relevance: 40.00%

Abstract:

A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
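As background for interval forecast evaluation, the unconditional-coverage part of a Christoffersen-style test can be sketched as a likelihood ratio on the sequence of interval "hits". This is a minimal Python illustration, not the paper's regression-based or Markov chain independence procedure; the function name and interface are invented for the sketch.

```python
import numpy as np
from scipy.stats import chi2

def lr_unconditional_coverage(hits, p):
    """Likelihood-ratio test of unconditional coverage for interval forecasts.

    hits : sequence of 0/1, 1 when the realized return fell inside the interval
    p    : nominal coverage probability of the interval forecast
    """
    hits = np.asarray(hits, dtype=float)
    n1 = hits.sum()
    n0 = hits.size - n1
    pi_hat = n1 / hits.size
    if pi_hat in (0.0, 1.0):          # degenerate hit sequence: LR undefined
        raise ValueError("all hits or all misses; test not applicable")
    ll_null = n0 * np.log(1.0 - p) + n1 * np.log(p)           # pi fixed at p
    ll_alt = n0 * np.log(1.0 - pi_hat) + n1 * np.log(pi_hat)  # pi estimated
    lr = -2.0 * (ll_null - ll_alt)               # ~ chi-square(1) under H0
    return lr, chi2.sf(lr, df=1)
```

A hit sequence whose empirical coverage matches the nominal level gives a statistic near zero; note that this unconditional test, like the approaches criticized in the abstract, ignores any dependence structure such as periodic heteroscedasticity.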

Relevance: 40.00%

Abstract:

Researchers analyzing spatiotemporal or panel data, which vary across locations and over time, often find that their data have holes or gaps. This thesis explores alternative methods for filling those gaps and proposes a set of techniques for evaluating those gap-filling methods to determine which works best.
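One common way to evaluate a gap-filling method, in the spirit described above, is to mask values that were actually observed, fill them, and score the fill against the truth. A minimal sketch; the function and holdout scheme are illustrative assumptions, not the thesis's protocol.

```python
import numpy as np
import pandas as pd

def evaluate_gap_filler(series, fill_method, holdout_frac=0.2, seed=0):
    """Score a gap-filling method: mask known values, fill, measure error."""
    rng = np.random.default_rng(seed)
    observed = series.dropna().index.to_numpy()
    n_holdout = max(1, int(len(observed) * holdout_frac))
    holdout = rng.choice(observed, size=n_holdout, replace=False)
    masked = series.copy()
    masked.loc[holdout] = np.nan                   # punch artificial holes
    filled = fill_method(masked)
    err = filled.loc[holdout] - series.loc[holdout]
    return float(np.sqrt(np.mean(err.to_numpy() ** 2)))  # RMSE on held-out cells

# Example: compare linear interpolation against forward fill on a smooth signal.
t = pd.Series(np.sin(np.linspace(0.0, 6.0, 200)))
rmse_linear = evaluate_gap_filler(
    t, lambda s: s.interpolate(method="linear", limit_direction="both"))
rmse_ffill = evaluate_gap_filler(t, lambda s: s.ffill().bfill())
```

On smooth data linear interpolation should score better than forward fill; on data with jumps or strong spatial structure the ranking can reverse, which is exactly why a holdout comparison is worth running.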

Relevance: 40.00%

Abstract:

Introduction Jatropha gossypifolia has been used extensively in traditional medicine for the treatment of several diseases in South America and Africa. This medicinal plant has therapeutic potential as a phytomedicine, so the establishment of innovative analytical methods to characterise its active components is crucial to the future development of a quality product. Objective To enhance the chromatographic resolution of HPLC-UV-diode-array detector (DAD) experiments by applying chemometric tools. Methods Crude leaf extracts from J. gossypifolia were analysed by HPLC-DAD. A chromatographic band deconvolution method was designed and applied using interval multivariate curve resolution by alternating least squares (MCR-ALS). Results The MCR-ALS method allowed the deconvolution of up to 117% more bands than the original HPLC-DAD experiments, even in regions where the UV spectra showed high similarity. The method assisted in the dereplication of three pairs of C-glycosylflavone isomers: vitexin/isovitexin, orientin/homorientin and schaftoside/isoschaftoside. Conclusion The MCR-ALS method is shown to be a powerful tool for resolving chromatographic band overlap in complex mixtures such as natural crude samples. Copyright © 2013 John Wiley & Sons, Ltd.
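The alternating least squares core of MCR-ALS can be sketched as a bilinear factorization D ≈ C Sᵀ of the data matrix (elution time × wavelength) into concentration profiles C and spectra S. The sketch below imposes nonnegativity by simple clipping, a crude stand-in for the constrained solvers used in real MCR-ALS software, and is not the authors' implementation.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    """Toy MCR-ALS: factor D (times x wavelengths) into C @ S.T with C, S >= 0.

    Alternates least-squares updates of the concentration profiles C and the
    spectra S, clipping negatives as a crude nonnegativity constraint.
    """
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)    # spectra
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)  # profiles
    return C, S

# Example: two overlapping Gaussian elution bands with distinct UV spectra.
t = np.linspace(0.0, 1.0, 50)
w = np.linspace(0.0, 1.0, 40)
C_true = np.stack([np.exp(-(t - 0.3) ** 2 / 0.01),
                   np.exp(-(t - 0.5) ** 2 / 0.01)], axis=1)
S_true = np.stack([np.exp(-(w - 0.2) ** 2 / 0.02),
                   np.exp(-(w - 0.6) ** 2 / 0.02)], axis=1)
D = C_true @ S_true.T
C_est, S_est = mcr_als(D, 2)
rel_error = np.linalg.norm(D - C_est @ S_est.T) / np.linalg.norm(D)
```

Even when the two elution bands overlap in time, the distinct spectra let the bilinear model separate them, which is the mechanism behind the band deconvolution reported in the abstract.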

Relevance: 40.00%

Abstract:

Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence depends partly on the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) in the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for (a) the overall test statistic/fit indices and (b) the change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers.
However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
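For reference, the fit statistics compared in the study can be computed from model chi-squares with standard formulas; the sketch below uses the common conventions (note that some software divides by N rather than N − 1 in RMSEA, and that a naive chi-square difference is not valid for scaled Yuan-Bentler statistics without a correction).

```python
import math
from scipy.stats import chi2

def rmsea(chi2_stat, df, n):
    """Steiger-Lind RMSEA (some software divides by N rather than N - 1)."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model, df_model, chi2_base, df_base):
    """Comparative fit index relative to the baseline (independence) model."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_base - df_base, num)
    return 1.0 - num / den if den > 0 else 1.0

def chi2_difference_test(chi2_restricted, df_restricted, chi2_free, df_free):
    """Nested-model chi-square difference test (e.g. metric vs. configural)."""
    d_chi2 = chi2_restricted - chi2_free
    d_df = df_restricted - df_free
    return d_chi2, chi2.sf(d_chi2, d_df)
```

The invariance comparisons in the study work on exactly these quantities: ΔCFI and ΔRMSEA are differences of the first two across nested models, while the chi-square difference test gives a formal p-value.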

Relevance: 40.00%

Abstract:

In medical follow-up studies, ordered bivariate survival data are frequently encountered when bivariate failure events are used as the outcomes to identify the progression of a disease. In cancer studies, interest may focus on bivariate failure times, for example, time from birth to cancer onset and time from cancer onset to death. This paper considers a sampling scheme where the first failure event (cancer onset) is identified within a calendar time interval, the time of the initiating event (birth) can be retrospectively confirmed, and the occurrence of the second event (death) is observed subject to right censoring. To analyze this type of bivariate failure time data, it is important to recognize the bias arising from interval sampling. In this paper, nonparametric and semiparametric methods are developed to analyze bivariate survival data with interval sampling under stationary and semi-stationary conditions. Numerical studies demonstrate that the proposed estimation approaches perform well with practical sample sizes across different simulated models. We apply the proposed methods to SEER ovarian cancer registry data to illustrate the methods and theory.
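As background for the right-censored second event, the standard nonparametric survival estimator is Kaplan-Meier. A minimal sketch follows; this plain version does not correct for the interval-sampling bias the paper addresses.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate for right-censored data.

    times  : observed times (event or censoring time)
    events : 1 if the event was observed, 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.unique(times[events == 1])   # distinct observed event times
    surv = []
    s = 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)              # still under observation at t
        d = np.sum((times == t) & (events == 1))  # events exactly at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return event_times, np.array(surv)
```

Censored subjects contribute to the risk sets but never to the event counts; ignoring the interval-sampling scheme on top of this, as the abstract notes, would bias the estimate.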

Relevance: 40.00%

Abstract:

Sizes and powers of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, Score, Likelihood, Asymptotic), logrank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the Asymptotic, Peto-Peto, and Wilcoxon tests perform at nominal 5% size expectations, but the F, Score, and Mantel tests exceeded 5% size confidence limits for one third of the censoring combinations. For n = 32, all tests showed proper size, with the Peto-Peto test most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in power characteristics of the tests within the classes of tests considered. The Mantel test showed 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of logrank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution. Wilcoxon-type tests appear more powerful than logrank tests for late-crossing survival curves and less powerful for early-crossing ones. Guidelines for the appropriate selection of two-sample tests are given.
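The Mantel logrank statistic compared above can be computed directly from the risk-set counts at each observed event time. A minimal two-sample version is sketched below; it is illustrative only, not the simulation code of the study.

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(t1, e1, t2, e2):
    """Two-sample logrank (Mantel) test for right-censored survival data.

    t1, t2 : observed times for the two samples
    e1, e2 : event indicators (1 = event observed, 0 = right-censored)
    """
    times = np.concatenate([t1, t2]).astype(float)
    events = np.concatenate([e1, e2]).astype(int)
    group1 = np.concatenate([np.ones(len(t1), bool), np.zeros(len(t2), bool)])
    o_minus_e = 0.0
    var = 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n = at_risk.sum()                                  # total at risk
        n1 = (at_risk & group1).sum()                      # at risk in sample 1
        d = ((times == t) & (events == 1)).sum()           # total events at t
        d1 = ((times == t) & (events == 1) & group1).sum() # sample-1 events at t
        o_minus_e += d1 - d * n1 / n          # observed minus expected
        if n > 1:                             # hypergeometric variance term
            var += d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, df=1)
```

Because the statistic weights every event time equally, it is most powerful under proportional hazards; the Wilcoxon-type tests in the study down-weight late event times, which explains their different behavior for crossing survival curves.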

Relevance: 40.00%

Abstract:

Ever since its discovery, Eocene Thermal Maximum 2 (ETM2; ~53.7 Ma) has been considered one of the "little brothers" of the Paleocene-Eocene Thermal Maximum (PETM; ~56 Ma), as it displays similar characteristics including abrupt warming, ocean acidification, and biotic shifts. One of the remaining key questions is what effect these lesser climate perturbations had on ocean circulation and ventilation and, ultimately, biotic disruptions. Here we characterize ETM2 sections of the NE Atlantic (Deep Sea Drilling Project Sites 401 and 550) using multispecies benthic foraminiferal stable isotopes, grain size analysis, XRF core scanning, and carbonate content. The magnitude of the carbon isotope excursion (0.85-1.10 per mil) and bottom water warming (2-2.5°C) during ETM2 appears slightly smaller than in South Atlantic records. Comparison of the lateral δ13C gradient between the North and South Atlantic reveals that a transient circulation switch took place during ETM2, similar to the pattern observed for the PETM. New grain size data and published faunal data support this hypothesis by indicating a reduction in deepwater current velocity. Following ETM2, we record a distinct intensification of bottom water currents influencing Atlantic carbonate accumulation and biotic communities, while a dramatic and persistent clay reduction hints at a weakening of the regional hydrological cycle. Our findings highlight the similarities and differences between the PETM and ETM2. Moreover, the heterogeneity of hyperthermal expression emphasizes the need to characterize each hyperthermal event and its background conditions individually, to minimize artifacts in global climate and carbonate burial models for the early Paleogene.