972 results for Portmanteau test statistics


Relevance: 30.00%

Abstract:

This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. In the paper we investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power. Underspecification of the lag length results in size distortion. The performance of the bootstrap ECM cointegration test deteriorates if the correct lag length is not used in the ECM. The bootstrap ECM cointegration test is therefore not robust to model misspecification.
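A minimal sketch of how such bootstrap critical values can be obtained is given below, assuming a single regressor x, a user-chosen lag length p and the hypothetical helper names ols, ecm_tstat and bootstrap_cv; it illustrates residual resampling under the no-cointegration null and is not the paper's exact procedure.

```python
import numpy as np

def ols(X, y):
    # OLS coefficients, residuals and coefficient standard errors.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, resid, se

def ecm_tstat(y, x, p):
    # t-ratio on y_{t-1} in the conditional ECM
    # dy_t = a + phi*y_{t-1} + d*x_{t-1} + lagged dy, dx terms + e_t.
    dy, dx = np.diff(y), np.diff(x)
    rows, target = [], []
    for t in range(p, len(dy)):
        rows.append([1.0, y[t], x[t]]
                    + [dy[t - i] for i in range(1, p + 1)]
                    + [dx[t - i] for i in range(1, p + 1)])
        target.append(dy[t])
    beta, _, se = ols(np.array(rows), np.array(target))
    return beta[1] / se[1]

def bootstrap_cv(y, x, p, alpha=0.05, B=499, seed=0):
    # Impose the null (no cointegration): dy_t depends only on a constant and
    # its own lagged differences; resample those residuals to rebuild y*.
    rng = np.random.default_rng(seed)
    dy = np.diff(y)
    X0 = np.array([[1.0] + [dy[t - i] for i in range(1, p + 1)]
                   for t in range(p, len(dy))])
    beta, resid, _ = ols(X0, dy[p:])
    stats = []
    for _ in range(B):
        e = rng.choice(resid, size=len(dy), replace=True)
        dy_star = np.empty(len(dy))
        dy_star[:p] = dy[:p]                      # start-up values from the data
        for t in range(p, len(dy)):
            ar_part = sum(beta[i] * dy_star[t - i] for i in range(1, p + 1))
            dy_star[t] = beta[0] + ar_part + e[t]
        y_star = np.concatenate(([y[0]], y[0] + np.cumsum(dy_star)))
        stats.append(ecm_tstat(y_star, x, p))
    return np.quantile(stats, alpha)              # left-tail critical value
```

The observed statistic ecm_tstat(y, x, p) would then be compared with this left-tail bootstrap quantile, rejecting the null of no cointegration when it is more negative.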

Relevance: 30.00%

Abstract:

BACKGROUND: A large proportion of students identify statistics courses as the most anxiety-inducing courses in their curriculum. Many students feel impaired by state anxiety in the examination and therefore tend to perform below their ability. AIMS: The study investigates how statistics anxiety, attitudes (e.g., interest, mathematical self-concept) and trait anxiety, as a general disposition to anxiety, influence experiences of anxiety as well as achievement in an examination. SAMPLE: Participants were 284 undergraduate psychology students, 225 females and 59 males. METHODS: Two weeks prior to the examination, participants completed a demographic questionnaire and measures of the STARS, the STAI, self-concept in mathematics, and interest in statistics. At the beginning of the statistics examination, students assessed their present state anxiety with the KUSTA scale. After 25 min, all examination participants gave another assessment of their anxiety at that moment. Students' examination scores were recorded. Structural equation modelling techniques were used to test relationships between the variables in a multivariate context. RESULTS: Statistics anxiety was the only variable related to state anxiety in the examination. Via state anxiety experienced before and during the examination, statistics anxiety had a negative influence on achievement. However, statistics anxiety also had a direct positive influence on achievement. This result may be explained by students' motivational goals in the specific educational setting. CONCLUSIONS: The results provide insight into the relationships between students' attitudes, dispositions, experiences of anxiety in the examination, and academic achievement, and offer recommendations to instructors on how to support students prior to and during the examination.

Relevance: 30.00%

Abstract:

Janet Taylor, Ross D King, Thomas Altmann and Oliver Fiehn (2002). Application of metabolomics to plant genotype discrimination using statistics and machine learning. 1st European Conference on Computational Biology (ECCB). (published as a journal supplement in Bioinformatics 18: S241-S248).

Relevance: 30.00%

Abstract:

This paper presents a statistical-based fault diagnosis scheme for application to internal combustion engines. The scheme relies on an identified model that describes the relationships between a set of recorded engine variables using principal component analysis (PCA). Since combustion cycles are complex in nature and produce nonlinear relationships between the recorded engine variables, the paper proposes the use of nonlinear PCA (NLPCA). The paper further justifies the use of NLPCA by comparing the model accuracy of the NLPCA model with that of a linear PCA model. A new nonlinear variable reconstruction algorithm and bivariate scatter plots are proposed for fault isolation, following the application of NLPCA. The proposed technique allows the diagnosis of different fault types under steady-state operating conditions. More precisely, nonlinear variable reconstruction can remove the fault signature from the recorded engine data, which allows the identification and isolation of the root cause of abnormal engine behaviour. The paper shows that this can lead to (i) an enhanced identification of potential root causes of abnormal events and (ii) the masking of faulty sensor readings. The effectiveness of the enhanced NLPCA based monitoring scheme is illustrated by its application to a sensor fault and a process fault. The sensor fault relates to a drift in the fuel flow reading, whilst the process fault relates to a partial blockage of the intercooler. These faults are introduced to a Volkswagen TDI 1.9 Litre diesel engine mounted on an experimental engine test bench facility.
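The variable-reconstruction idea can be illustrated with ordinary linear PCA (the paper's NLPCA is more involved); the sketch below, with hypothetical arrays X_normal and x_new of engine measurements, monitors the squared prediction error (SPE) and isolates a fault by checking which single variable's reconstruction removes most of it.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_monitor(X_normal, n_components=3):
    # Fit the monitoring model on data recorded under normal operation.
    scaler = StandardScaler().fit(X_normal)
    pca = PCA(n_components=n_components).fit(scaler.transform(X_normal))
    return scaler, pca

def spe(pca, x):
    # Squared prediction error of one scaled sample against the PCA model.
    x_hat = pca.inverse_transform(pca.transform(x.reshape(1, -1)))[0]
    return float(np.sum((x - x_hat) ** 2))

def isolate_by_reconstruction(scaler, pca, x_new):
    # Reconstruct each variable in turn from the model and record how much the
    # SPE drops; the variable whose reconstruction removes most of the fault
    # signature is the most likely root cause.
    x = scaler.transform(x_new.reshape(1, -1))[0]
    base = spe(pca, x)
    drops = {}
    for j in range(len(x)):
        xr = x.copy()
        for _ in range(50):            # iterative reconstruction of variable j
            x_hat = pca.inverse_transform(pca.transform(xr.reshape(1, -1)))[0]
            xr[j] = x_hat[j]
        drops[j] = base - spe(pca, xr)
    return base, drops
```

A large SPE drop for a single variable points to a faulty sensor reading, whereas a fault that no single-variable reconstruction can remove is more consistent with a process fault such as a partial intercooler blockage.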

Relevance: 30.00%

Abstract:

This study aimed to examine the structure of the Statistics Anxiety Rating Scale. Responses from 650 undergraduate psychology students throughout the UK were collected through an online study. Based on previous research, three different models were specified and estimated using confirmatory factor analysis. Fit indices were used to determine whether each model fitted the data, and a likelihood ratio difference test was used to identify the best-fitting model. The original six-factor model was the best explanation of the data. All six subscales were intercorrelated and internally consistent. It was concluded that the Statistics Anxiety Rating Scale measures the six subscales it was designed to assess in a UK population.
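As an illustration of the model comparison reported here, the likelihood ratio difference test for nested models reduces to a chi-square test on twice the gap in log-likelihoods; the numbers in the example call below are hypothetical placeholders, not values from the study.

```python
from scipy.stats import chi2

def lr_difference_test(ll_restricted, k_restricted, ll_full, k_full):
    # For nested models, twice the log-likelihood gap is asymptotically
    # chi-square with df equal to the difference in free parameter counts.
    stat = 2.0 * (ll_full - ll_restricted)
    df = k_full - k_restricted
    return stat, df, chi2.sf(stat, df)

stat, df, p = lr_difference_test(-5210.4, 60, -5149.8, 75)
print(f"chi2({df}) = {stat:.1f}, p = {p:.4f}")   # small p favours the larger model
```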

Relevance: 30.00%

Abstract:

In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that does not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend, and for the model with breaks in the level and in the time trend, Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175] showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the proposed statistics are derived under the null hypothesis and are shown to be normally distributed. We show by simulations that our suggested tests in general perform well in finite samples, with the exception of the modified test. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we found evidence of stationarity once a structural break and cross-sectional dependence are accommodated.
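For orientation, the sketch below implements the original Hadri (2000) statistic for the level-only case without breaks, standardised with the limiting moments quoted for that case (mean 1/6, variance 1/45); the break patterns and the modified statistic of the paper are not attempted, and the simple variance estimator assumes serially uncorrelated errors.

```python
import numpy as np
from scipy.stats import norm

def hadri_lm(Y):
    # Y: (T, N) panel of observations, columns are cross-section units.
    T, N = Y.shape
    lm = np.empty(N)
    for i in range(N):
        resid = Y[:, i] - Y[:, i].mean()   # residuals from the level-only regression
        S = np.cumsum(resid)               # partial sums of residuals
        sigma2 = resid @ resid / T         # naive variance estimate (iid errors assumed)
        lm[i] = (S @ S) / (T ** 2 * sigma2)
    lm_bar = lm.mean()
    xi, zeta = 1.0 / 6.0, 1.0 / 45.0       # limiting mean and variance, level-only case
    z = np.sqrt(N) * (lm_bar - xi) / np.sqrt(zeta)
    return z, 1.0 - norm.cdf(z)            # reject the stationarity null for large z
```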

Relevance: 30.00%

Abstract:

The quick, easy way to master all the statistics you'll ever need. The bad news first: if you want a psychology degree you'll need to know statistics. Now for the good news: Psychology Statistics For Dummies. Featuring jargon-free explanations, step-by-step instructions and dozens of real-life examples, Psychology Statistics For Dummies makes the knotty world of statistics a lot less baffling. Rather than padding the text with concepts and procedures irrelevant to the task, the authors focus only on the statistics psychology students need to know. As an alternative to typical, lead-heavy statistics texts or supplements to assigned course reading, this is one book psychology students won't want to be without.

Ease into statistics – start out with an introduction to how statistics are used by psychologists, including the types of variables they use and how they measure them
Get your feet wet – quickly learn the basics of descriptive statistics, such as central tendency and measures of dispersion, along with common ways of graphically depicting information
Meet your new best friend – learn the ins and outs of SPSS, the most popular statistics software package among psychology students, including how to input, manipulate and analyse data
Analyse this – get up to speed on statistical analysis core concepts, such as probability and inference, hypothesis testing, distributions, Z-scores and effect sizes
Correlate that – get the lowdown on common procedures for defining relationships between variables, including linear regressions, associations between categorical data and more
Analyse by inference – master key methods in inferential statistics, including techniques for analysing independent groups designs and repeated-measures research designs

Open the book and find: ways to describe statistical data; how to use SPSS statistical software; probability theory and statistical inference; descriptive statistics basics; how to test hypotheses; correlations and other relationships between variables; core concepts in statistical analysis for psychology; analysing research designs.

Learn to: use SPSS to analyse data; master statistical methods and procedures using psychology-based explanations and examples; create better reports; identify key concepts and pass your course.

Relevance: 30.00%

Abstract:

Background: Lung clearance index (LCI) derived from sulfur hexafluoride (SF6) multiple breath washout (MBW) is a sensitive measure of lung disease in people with cystic fibrosis (CF). However, it can be time-consuming, limiting its clinical use. Aim: To compare the repeatability, sensitivity and test duration of LCI derived from washout to 1/30th (LCI1/30), 1/20th (LCI1/20) and 1/10th (LCI1/10) of the initial concentration with 'standard' LCI derived from washout to 1/40th of the initial concentration (LCI1/40). Methods: Triplicate MBW test results from 30 clinically stable people with CF and 30 healthy controls were analysed retrospectively. MBW tests were performed using 0.2% SF6 and a modified Innocor device. All LCI end points were calculated using SimpleWashout software. Repeatability was assessed using the coefficient of variation (CV%). The proportions of people with CF with and without abnormal LCI and forced expiratory volume in 1 s (FEV1) % predicted were compared. Receiver operating characteristic (ROC) curve statistics were calculated. Test duration of all LCI end points was compared using paired t tests. Results: In people with CF, LCI1/40 CV% (p=0.16), LCI1/30 CV% (p=0.53), LCI1/20 CV% (p=0.14) and LCI1/10 CV% (p=0.25) were not significantly different from controls. The sensitivity of LCI1/40, LCI1/30 and LCI1/20 to the presence of CF was equal (67%). The sensitivity of LCI1/10 and FEV1% predicted was lower (53% and 47%, respectively). The area under the ROC curve (95% CI) for LCI1/40, LCI1/30, LCI1/20, LCI1/10 and FEV1% predicted was 0.89 (0.80 to 0.97), 0.87 (0.77 to 0.96), 0.87 (0.78 to 0.96), 0.83 (0.72 to 0.94) and 0.73 (0.60 to 0.86), respectively. Test duration of LCI1/30, LCI1/20 and LCI1/10 was significantly shorter than that of LCI1/40 in people with CF (p<0.0001), equating to 5%, 9% and 15% time savings, respectively. Conclusions: In this study, LCI1/20 was a repeatable and sensitive measure with diagnostic performance equal to LCI1/40. LCI1/20 took less time to measure, potentially offering a more feasible research and clinical measure.
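A minimal sketch of how LCI can be read off at the different washout end points is given below; the breath-by-breath arrays and the FRC value are assumed inputs, and this illustrates the definition of the end point rather than the SimpleWashout algorithm used in the study.

```python
import numpy as np

def lci(end_tidal_sf6, expired_volumes, frc, cutoff_fraction):
    # end_tidal_sf6: end-tidal tracer concentration per breath (starting value first)
    # expired_volumes: expired volume per breath (litres)
    # frc: functional residual capacity (litres)
    # cutoff_fraction: e.g. 1/40, 1/30, 1/20 or 1/10 of the starting concentration
    threshold = cutoff_fraction * end_tidal_sf6[0]
    below = np.where(np.asarray(end_tidal_sf6) <= threshold)[0]
    if below.size == 0:
        raise ValueError("washout never reached the requested cut-off")
    end_breath = below[0]
    cumulative_expired_volume = np.sum(expired_volumes[: end_breath + 1])
    return cumulative_expired_volume / frc

# Example: the same washout analysed to four end points.
# lci_1_40 = lci(cet, vols, frc, 1 / 40)
# lci_1_20 = lci(cet, vols, frc, 1 / 20)
```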

Relevance: 30.00%

Abstract:

The ecological sciences have experienced immense growth over the course of this century, and chances are that they will continue to grow well into the next millennium. There are some good reasons for this – ecology encompasses some of the most pressing concerns facing humanity. With recent advances in data collection technology and ambitious field research, ecologists are increasingly calling upon multivariate statistics to explore and test for patterns in their data. The goal of FISH 560 (Applied Multivariate Statistics for Ecologists) at the University of Washington is to introduce graduate students to the multivariate statistical techniques necessary to carry out sophisticated analyses and to critically evaluate scientific papers using these approaches. It is a practical, hands-on course emphasizing the analysis and interpretation of multivariate data, and it covers the majority of approaches in common use by ecologists. To celebrate the hard work of past students, I am pleased to announce the creation of the Electronic Journal of Applied Multivariate Statistics (EJAMS). Each year, students in FISH 560 are required to write a final paper consisting of a statistical analysis of their own multivariate data set. These papers are submitted to EJAMS at the end of the quarter and are peer reviewed by two other class members. A decision on publication is based on the reviewers' recommendations and my own reading of the paper. In closing, there is a need for the rapid dissemination of ecological research using multivariate statistics at the University of Washington, and EJAMS is committed to meeting it.

Relevance: 30.00%

Abstract:

The objective of this study was to develop, test and benchmark a framework and a predictive risk model for hospital emergency readmission within 12 months. We developed the model using routinely collected Hospital Episode Statistics data covering inpatient hospital admissions in England. Three different timeframes were used for training, testing and benchmarking: the 1999 to 2004, 2000 to 2005 and 2004 to 2009 financial years. Each timeframe includes 20% of all inpatients admitted within the trigger year. Comparisons were made using the positive predictive value, sensitivity and specificity for different risk cut-offs, risk bands and top risk segments, together with the receiver operating characteristic curve. The Bayes Point Machine constructed using this feature-selection framework produces a risk probability for each admitted patient, and it was validated for different timeframes, sub-populations and cut-off points. At a risk cut-off of 50%, the positive predictive value was 69.3% to 73.7%, the specificity was 88.0% to 88.9% and the sensitivity was 44.5% to 46.3% across the different timeframes. The area under the receiver operating characteristic curve was 73.0% to 74.3%. The developed framework and model performed considerably better than existing modelling approaches, with high precision and moderate sensitivity.
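The reported figures are standard confusion-matrix and ROC summaries at a chosen risk cut-off; the sketch below shows how they would be computed from hypothetical predicted risk probabilities and observed readmission outcomes (it is not the Bayes Point Machine or the feature-selection framework itself).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_at_cutoff(y_true, risk, cutoff=0.5):
    # y_true: 0/1 readmission outcomes; risk: predicted readmission probabilities.
    y_true = np.asarray(y_true)
    flagged = np.asarray(risk) >= cutoff
    tp = np.sum(flagged & (y_true == 1))
    fp = np.sum(flagged & (y_true == 0))
    fn = np.sum(~flagged & (y_true == 1))
    tn = np.sum(~flagged & (y_true == 0))
    return {
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_true, risk),
    }
```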

Relevance: 30.00%

Abstract:

The European Court of Justice has held that, as from 21 December 2012, insurers may no longer charge men and women differently on the basis of scientific evidence that is statistically linked to their sex, effectively prohibiting the use of sex as a factor in the calculation of premiums and benefits for the purposes of insurance and related financial services throughout the European Union. This ruling marks a sharp turn away from the traditional view that insurers should be allowed to apply just about any risk assessment criterion, so long as it is sustained by the findings of actuarial science. It exposes the naïveté behind the assumption that insurers' recourse to statistical data and probabilistic analysis, given their scientific nature, would suffice to keep them out of harm's way. In this article I examine the flaws of this assumption and question whether this judicial decision, whilst constituting a most welcome landmark in the pursuit of equality between men and women, has nonetheless gone too far by saying too little on the million-dollar question of what separates admissible criteria of differentiation from inadmissible forms of discrimination.

Relevance: 30.00%

Abstract:

A novel test of spatial independence of the distribution of crystals or phases in rocks, based on compositional statistics, is introduced. It improves and generalizes the common joins-count statistics known from map analysis in geographic information systems. Assigning phases independently to objects in R^D is modelled by a single-trial multinomial random function Z(x), where the probabilities of the phases add to one and are explicitly modelled as compositions in the K-part simplex S^K. Thus, apparent inconsistencies of the tests based on the conventional joins-count statistics, and their possibly contradictory interpretations, are avoided. In practical applications we assume that the probabilities of the phases do not depend on location but are identical everywhere in the domain of definition. Thus, the model involves the sum of r independent identical multinomially distributed one-trial random variables, which is an r-trial multinomially distributed random variable. The probabilities of the distribution of the r counts can be considered as a composition in the Q-part simplex S^Q. They span the so-called Hardy-Weinberg manifold H, which is proved to be a (K-1)-affine subspace of S^Q. This is a generalisation of the well-known Hardy-Weinberg law of genetics. If the assignment of phases accounts for some kind of spatial dependence, then the r-trial probabilities do not remain on H. This suggests using the Aitchison distance between the observed probabilities and H to test for dependence. Moreover, when there is a spatial fluctuation of the multinomial probabilities, the observed r-trial probabilities move on H. This shift can be used to check for these fluctuations. A practical procedure and an algorithm to perform the test have been developed. Some cases applied to simulated and real data are presented. Key words: spatial distribution of crystals in rocks, spatial distribution of phases, joins-count statistics, multinomial distribution, Hardy-Weinberg law, Hardy-Weinberg manifold, Aitchison geometry.
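The compositional machinery behind the test can be sketched in a few lines: the centred log-ratio (clr) transform and the Aitchison distance it induces. Projection onto the Hardy-Weinberg manifold H and calibration of the test itself are not shown, and the example compositions below are hypothetical.

```python
import numpy as np

def closure(x):
    # Rescale a vector of positive parts so it sums to one (lies in the simplex).
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def clr(x):
    # Centred log-ratio transform of a composition.
    logx = np.log(closure(x))
    return logx - logx.mean()

def aitchison_distance(x, y):
    # Aitchison distance = Euclidean distance between clr-transformed compositions.
    return float(np.linalg.norm(clr(x) - clr(y)))

# Observed composition of pair counts vs. the composition expected under
# independent assignment of phases (both hypothetical here).
observed = [412, 301, 95]
expected_under_independence = [400, 320, 80]
print(aitchison_distance(observed, expected_under_independence))
```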

Relevance: 30.00%

Abstract:

Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
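A minimal sketch of the compositional idea, under assumptions the abstract does not spell out, is given below: each year's life-table death density (which sums to one and is assumed strictly positive) is mapped to real space with the centred log-ratio transform, a Lee-Carter-style rank-1 SVD model is fitted there, and forecasts are back-transformed so the unit sum constraint is honoured. D is a hypothetical (years x ages) array of death densities.

```python
import numpy as np

def clr(p):
    # Centred log-ratio transform, applied row-wise to compositions.
    logp = np.log(p)
    return logp - logp.mean(axis=-1, keepdims=True)

def clr_inverse(z):
    # Back-transform and re-close so each row again sums to one.
    expz = np.exp(z)
    return expz / expz.sum(axis=-1, keepdims=True)

def fit_compositional_lee_carter(D):
    # D: (years, ages) death densities; fit a rank-1 model in clr space.
    Z = clr(D)
    a = Z.mean(axis=0)                                  # mean age pattern (like a_x)
    U, s, Vt = np.linalg.svd(Z - a, full_matrices=False)
    k = U[:, 0] * s[0]                                  # period index (like k_t)
    b = Vt[0]                                           # age loadings (like b_x)
    return a, b, k

def forecast(a, b, k, horizon):
    # Extrapolate the period index as a random walk with drift, then back-transform.
    drift = (k[-1] - k[0]) / (len(k) - 1)
    k_future = k[-1] + drift * np.arange(1, horizon + 1)
    return clr_inverse(a + np.outer(k_future, b))       # forecast densities sum to 1
```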