945 results for joint hypothesis tests


Relevance: 30.00%

Abstract:

This paper presents both modelling and experimental test data to characterise the performance of four non-destructive tests. The focus is on determining the presence and rough magnitude of thermal fatigue cracks within the solder joints for a surface mount resistor on a strip of FR4 PCB. The tests all operate by applying mechanical loads to the PCB and monitoring the strain response at the top of the resistor. The modelling results show that of the four tests investigated, three are sensitive to the presence of a crack in the joint and its magnitude. Hence these tests show promise in being able to detect cracking caused by accelerated testing. The experimental data supports these results although more validation is required.

Relevance: 30.00%

Abstract:

In the mid-1820s, banks became the first businesses in Great Britain and Ireland to be allowed to form freely on an unlimited liability joint-stock basis. Walter Bagehot warned that their shares would ultimately be owned by widows, orphans, and other impecunious individuals. Another hypothesis is that the governing bodies of these banks, constrained by special legal restrictions on share trading, acted effectively to prevent such shares being transferred to the less wealthy. We test both conjectures using the archives of an Irish joint-stock bank. The results do not support Bagehot's hypothesis.

Relevance: 30.00%

Abstract:

This paper examines the finite sample properties of three testing regimes for the null hypothesis of a panel unit root against stationary alternatives in the presence of cross-sectional correlation. The regimes of Bai and Ng (2004), Moon and Perron (2004) and Pesaran (2007) are assessed in the presence of multiple factors and other non-standard situations. The behaviour of some information criteria used to determine the number of factors in a panel is examined, and new information criteria with improved properties in small-N panels are proposed. An application to the efficient markets hypothesis is also provided. The null hypothesis of a panel random walk is not rejected by any of the tests, supporting the efficient markets hypothesis in the financial services sector of the Australian Stock Exchange.
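The building block shared by these regimes can be sketched in a few lines. Below is a minimal, illustrative version of the cross-sectionally augmented ADF (CADF) regression behind Pesaran's (2007) test, averaged into a CIPS-style statistic; the simulated panel, seed, and variable names are my assumptions, not the paper's code.

```python
# Minimal sketch of a CADF regression and a CIPS-style average (Pesaran, 2007).
# The data-generating process below is an illustrative assumption.
import numpy as np

def cadf_t_stat(y, ybar):
    """t-statistic on the lagged level in
    dy_t = a + b*y_{t-1} + c*ybar_{t-1} + d*dybar_t + e_t."""
    dy = np.diff(y)
    dybar = np.diff(ybar)
    X = np.column_stack([np.ones(len(dy)), y[:-1], ybar[:-1], dybar])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
N, T = 20, 200
# Each unit is a random walk plus a common factor, so units are cross-sectionally correlated.
factor = np.cumsum(rng.normal(size=T))
panel = np.cumsum(rng.normal(size=(N, T)), axis=1) + factor
ybar = panel.mean(axis=0)  # cross-sectional average proxies for the common factor
cips = np.mean([cadf_t_stat(panel[i], ybar) for i in range(N)])
print(cips)  # in practice compared against tabulated CIPS critical values
```

The cross-sectional average terms soak up the common factor, which is the device these tests use to handle cross-sectional correlation.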

Relevance: 30.00%

Abstract:

Integrating evidence from multiple domains is useful in prioritizing disease candidate genes for subsequent testing. We ranked all known human genes (n = 3819) under linkage peaks in the Irish Study of High-Density Schizophrenia Families using three different evidence domains: 1) a meta-analysis of microarray gene expression results using the Stanley Brain collection, 2) a schizophrenia protein-protein interaction network, and 3) a systematic literature search. Each gene was assigned a domain-specific p-value and ranked after evaluating the evidence within each domain. For comparison to this ranking process, a large-scale candidate gene hypothesis was also tested by including genes with Gene Ontology terms related to neurodevelopment. Subsequently, genotypes of 3725 SNPs in 167 genes from a custom Illumina iSelect array were used to evaluate the top ranked vs. hypothesis selected genes. Seventy-three genes were both highly ranked and involved in neurodevelopment (category 1) while 42 and 52 genes were exclusive to neurodevelopment (category 2) or highly ranked (category 3), respectively. The most significant associations were observed in genes PRKG1, PRKCE, and CNTN4 but no individual SNPs were significant after correction for multiple testing. Comparison of the approaches showed an excess of significant tests using the hypothesis-driven neurodevelopment category. Random selection of similar sized genes from two independent genome-wide association studies (GWAS) of schizophrenia showed the excess was unlikely by chance. In a further meta-analysis of three GWAS datasets, four candidate SNPs reached nominal significance. Although gene ranking using integrated sources of prior information did not enrich for significant results in the current experiment, gene selection using an a priori hypothesis (neurodevelopment) was superior to random selection. As such, further development of gene ranking strategies using more carefully selected sources of information is warranted.
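As a toy illustration of the ranking step described above (one domain-specific p-value per gene per evidence domain, combined into a single ordering), here is a sketch using Fisher's method; the choice of combination method, the gene names, and the p-values are invented for illustration and are not taken from the study.

```python
# Toy sketch: combine per-domain p-values into one gene ranking with Fisher's method.
# Gene names and p-values below are hypothetical.
import math

def fisher_combined_p(pvalues):
    """Fisher's method: X = -2*sum(ln p) ~ chi-square with 2k df under H0.
    For even df = 2k the survival function has a closed form."""
    x = -2.0 * sum(math.log(p) for p in pvalues)
    k = len(pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Hypothetical evidence domains: expression meta-analysis, PPI network, literature search.
genes = {
    "GENE_A": [0.01, 0.20, 0.05],
    "GENE_B": [0.40, 0.50, 0.60],
    "GENE_C": [0.03, 0.04, 0.10],
}
ranked = sorted(genes, key=lambda g: fisher_combined_p(genes[g]))
print(ranked)  # genes ordered from strongest to weakest combined evidence
```

With a single p-value the combined value reduces to that p-value, which is a convenient sanity check on the closed-form survival function.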

Relevance: 30.00%

Abstract:

Forearm skin biopsies were obtained from diabetic subjects with and without limited joint mobility, and from non-diabetic control subjects. Collagen purified from these samples was assayed for non-enzymatic glycosylation. The level in all diabetic patients was significantly greater than that in control subjects (p < 0.001), but those diabetic patients with limited joint mobility had a level of collagen glycosylation similar to that in those with normal joints (15.3 +/- 1.3 and 16.5 +/- 1.3 nmol fructose/10 mg protein, respectively; mean +/- SEM). Glycosylation of collagen in the diabetic patients correlated with glycosylated haemoglobin measured at the time of skin biopsy (r = 0.60). These results do not support the hypothesis that non-enzymatic glycosylation of collagen, as reflected by the ketoamine link, plays an important role in the development of limited joint mobility in diabetes.

Relevance: 30.00%

Abstract:

OBJECTIVE:

"Blind" shoulder injections are often inaccurate and infiltrate untargeted structures. We tested a hypothesis that optimizing certain anatomical and positional factors would improve accuracy and reduce dispersal.

METHODS:

We evaluated one subacromial and one glenohumeral injection technique on cadavers.

RESULTS:

Mean accuracy was 91% for subacromial-targeted and 74 and 91% (worst- and best-case scenarios) for joint-targeted injections. Mean dispersal was 19% for subacromial-targeted and 16% for joint-targeted injections. All results bettered those reported previously.

CONCLUSION:

These "optimized" techniques might improve accuracy and limit dispersal of blind shoulder injections in clinical situations, benefiting efficacy and safety. However, evaluation is required in a clinical setting.

Relevance: 30.00%

Abstract:

In this paper, we re-examine two important aspects of the dynamics of relative primary commodity prices, namely the secular trend and the short run volatility. To do so, we employ 25 series, some of them starting as far back as 1650 and powerful panel data stationarity tests that allow for endogenous multiple structural breaks. Results show that all the series are stationary after allowing for endogenous multiple breaks. Test results on the Prebisch–Singer hypothesis, which states that relative commodity prices follow a downward secular trend, are mixed but with a majority of series showing negative trends. We also make a first attempt at identifying the potential drivers of the structural breaks. We end by investigating the dynamics of the volatility of the 25 relative primary commodity prices also allowing for endogenous multiple breaks. We describe the often time-varying volatility in commodity prices and show that it has increased in recent years.

Relevance: 30.00%

Abstract:

This paper proposes the use of an improved covariate unit root test which exploits the cross-sectional dependence information when the panel data null hypothesis of a unit root is rejected. More explicitly, to increase the power of the test, we suggest the utilization of more than one covariate and offer several ways to select the 'best' covariates from the set of potential covariates represented by the individuals in the panel. Employing our methods, we investigate the Prebisch–Singer hypothesis for nine commodity prices. Our results show that this hypothesis holds for all but the price of petroleum.
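The power gain that motivates covariate unit root tests can be illustrated with a minimal sketch: adding a stationary covariate that is correlated with the regression errors to a Dickey-Fuller regression reduces residual variance and sharpens the t-statistic on the lagged level. The data-generating process, seed, and function names below are assumptions for demonstration, not the paper's setup.

```python
# Illustration of the covariate-augmented Dickey-Fuller idea.
# The simulated process is an assumption chosen so the null should be rejected.
import numpy as np

def df_t_stat(y, covariate=None):
    """t-statistic on y_{t-1} in dy_t = a + b*y_{t-1} [+ c*x_t] + e_t."""
    dy = np.diff(y)
    cols = [np.ones(len(dy)), y[:-1]]
    if covariate is not None:
        cols.append(covariate[1:])  # align x_t with dy_t
    X = np.column_stack(cols)
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
T = 300
x = rng.normal(size=T)             # stationary covariate
e = 0.9 * x + rng.normal(size=T)   # errors correlated with the covariate
y = np.empty(T)
y[0] = e[0]
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + e[t]   # stationary AR(1), so a unit root is false
t_plain = df_t_stat(y)
t_aug = df_t_stat(y, covariate=x)
print(t_plain, t_aug)  # the augmented statistic is typically further from zero
```

Selecting which covariates to include (here a single one) is exactly the practical question the paper addresses.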

Relevance: 30.00%

Abstract:

In this paper, we analyze the behavior of real interest rates over the long run using historical data for nine developed economies, to assess the extent to which the recent decline observed in most advanced countries is at odds with the past data, as suggested by the Secular Stagnation hypothesis. Using data going back to 1703 and performing stationarity and structural break tests, we find that the recent decline in interest rates is not explained by a structural break in the time series. Our results also show that considering long-run data leads to different conclusions than using short-run data.

Relevance: 30.00%

Abstract:

Fluid intelligence has been defined as an innate ability to reason which is measured commonly by the Raven's Progressive Matrices (RPM). Individual differences in fluid intelligence are currently explained by the Cascade model (Fry & Hale, 1996) and the Controlled Attention hypothesis (Engle, Kane, & Tuholski, 1999; Kane & Engle, 2002). The first theory is based on a complex relation among age, speed, and working memory which is described as a cascade. The alternative to this theory, the Controlled Attention hypothesis, is based on the proposition that it is the executive attention component of working memory that explains performance on fluid intelligence tests. The first goal of this study was to examine whether the Cascade model is consistent within the visuo-spatial and verbal-numerical modalities. The second goal was to examine whether the executive attention component of working memory accounts for the relation between working memory and fluid intelligence. Two hundred and six undergraduate students between the ages of 18 and 28 completed a battery of cognitive tests selected to measure processing speed, working memory, and controlled attention, drawn from two cognitive modalities, verbal-numerical and visuo-spatial. These were used to predict performance on two standard measures of fluid intelligence: the Raven's Progressive Matrices (RPM) and the Shipley Institute of Living Scales (SILS) subtests. Multiple regression and Structural Equation Modeling (SEM) were used to test the Cascade model and to determine the independent and joint effects of controlled attention and working memory on general fluid intelligence. Among the processing speed measures only spatial scan was related to the RPM. No other significant relations were observed between processing speed and fluid intelligence. As a construct, working memory was related to the fluid intelligence tests.
Consistent with the predictions for the RPM there was support for the Cascade model within the visuo-spatial modality but not within the verbal-numerical modality. There was no support for the Cascade model with respect to the SILS tests. SEM revealed that there was a direct path between controlled attention and RPM and between working memory and RPM. However, a significant path between set switching and RPM explained the relation between controlled attention and RPM. The prediction that controlled attention mediated the relation between working memory and RPM was therefore not supported. The findings support the view that the Cascade model may not adequately explain individual differences in fluid intelligence and this may be due to the differential relations observed between working memory and fluid intelligence across different modalities. The findings also show that working memory is not a domain-general construct and as a result its relation with fluid intelligence may be dependent on the nature of the working memory modality.

Relevance: 30.00%

Abstract:

In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
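The Monte Carlo test technique invoked above can be sketched in a few lines: when a statistic is pivotal under the null, an exact p-value can be built from a finite number of simulated replications. The toy statistic, sample size, and replication count below are illustrative assumptions, not the paper's application.

```python
# Sketch of the Monte Carlo test technique: exact p-value from simulated
# replications of a statistic that is pivotal under the null hypothesis.
import random

def mc_p_value(stat_observed, simulate_stat, n_rep=99, seed=0):
    """Exact MC p-value: p = (1 + #{S_i >= S_0}) / (n_rep + 1)."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(n_rep) if simulate_stat(rng) >= stat_observed)
    return (1 + exceed) / (n_rep + 1)

def simulate_max_abs(rng, n=30):
    # Toy pivotal statistic: max |z_i| over n standard normal draws.
    return max(abs(rng.gauss(0.0, 1.0)) for _ in range(n))

obs = 3.6  # a hypothetical observed value of the statistic
p = mc_p_value(obs, simulate_max_abs)
print(p)
```

With n_rep = 99 the p-value is a multiple of 0.01, and the test has exactly its nominal level in finite samples, which is the property the paper exploits for the bounds tests as well.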

Relevance: 30.00%

Abstract:

In this paper, we review some recent developments in econometrics that may be of interest to researchers in fields other than economics, and we highlight the particular light that econometrics can shed on certain general themes in methodology and the philosophy of science, such as falsifiability as a criterion of the scientific character of a theory (Popper), the underdetermination of theories by data (Quine), and instrumentalism. In particular, we stress the contrast between two styles of modelling, the parsimonious approach and the statistical-descriptive approach, and we discuss the links between the theory of statistical tests and the philosophy of science.

Relevance: 30.00%

Abstract:

A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods. Yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the new tests suggested. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
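One of the classical criteria named above, the Goldfeld-Quandt statistic, is simple enough to sketch: split the ordered sample, fit the regression on each part, and compare residual variances. The simulated data and the plain half-and-half split below are illustrative assumptions (textbook versions drop a band of middle observations, omitted here for brevity), and this sketch uses the usual asymptotic F comparison rather than the paper's exact Monte Carlo p-values.

```python
# Sketch of a Goldfeld-Quandt-style variance-ratio statistic on simulated data
# whose error variance grows with the regressor.
import numpy as np

def residual_variance(X, y):
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid / (len(y) - X.shape[1])

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0, 10, size=n))   # sort so the split separates variance regimes
sigma = 0.5 + 0.3 * x                     # error standard deviation grows with x
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
half = n // 2
gq = residual_variance(X[half:], y[half:]) / residual_variance(X[:half], y[:half])
print(gq)  # compared against an F critical value; here it should clearly exceed 1
```

Under homoskedasticity the ratio is an F statistic; values well above 1, as produced by this design, point to variance increasing with the regressor.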

Relevance: 30.00%

Abstract:

In this paper, we analyze recent developments in econometrics in the light of the theory of statistical tests. We first review some basic principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze some particular cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality, or dynamic specification. We indicate that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we underscore the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be demonstrated in finite samples.

Relevance: 30.00%

Abstract:

In this paper several additional GMM specification tests are studied. A first test is a Chow-type test for structural parameter stability of GMM estimates. The test is inspired by the fact that "taste and technology" parameters are uncovered. The second set of specification tests are VAR encompassing tests. It is assumed that the DGP has a finite VAR representation. The moment restrictions which are suggested by economic theory and exploited in the GMM procedure represent one possible characterization of the DGP. The VAR is a different but compatible characterization of the same DGP. The idea of the VAR encompassing tests is to compare parameter estimates of the Euler conditions and VAR representations of the DGP obtained separately with parameter estimates of the Euler conditions and VAR representations obtained jointly. There are several ways to construct joint systems, which are discussed in the paper. Several applications are also discussed.
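The Chow-type stability idea underlying the first test can be sketched in its simplest setting: compare a pooled fit with split-sample fits of the same regression. The paper applies the idea to GMM estimates; the OLS version below, with simulated data, a known mid-sample break, and invented names, is only an illustrative assumption about the shared logic.

```python
# Sketch of a Chow-style break test via pooled vs. split-sample residual sums
# of squares; data and the break location are illustrative assumptions.
import numpy as np

def rss(X, y):
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(3)
n, k = 100, 2
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
# Structural break at mid-sample: the slope changes from 1.0 to 2.0.
y = np.where(np.arange(n) < n // 2, 1.0 + 1.0 * x, 1.0 + 2.0 * x)
y = y + 0.5 * rng.normal(size=n)
half = n // 2
rss_pooled = rss(X, y)
rss_split = rss(X[:half], y[:half]) + rss(X[half:], y[half:])
chow_f = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
print(chow_f)  # large values reject parameter stability
```

Under stability the statistic is F(k, n - 2k); the deliberate slope change here should push it far above typical critical values.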