928 results for "Modified signed likelihood ratio statistic"
Abstract:
The Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio test tends to be liberal when the sample size is small. We obtain a correction factor that reduces the size distortion of the test, and we also consider a parametric bootstrap scheme to obtain improved critical values and improved p-values for the likelihood ratio test. The numerical results show that the modified tests are more reliable in finite samples than the usual likelihood ratio test. We also present an empirical application.
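A minimal sketch of the parametric bootstrap scheme for a likelihood ratio test, with hypothetical callables (lr_stat, fit_null, simulate) standing in for the Birnbaum-Saunders regression machinery, which is not shown:

```python
def bootstrap_lr_pvalue(data, lr_stat, fit_null, simulate, n_boot=999):
    """Parametric-bootstrap p-value for an observed LR statistic.

    lr_stat(data)      -> LR statistic for the hypothesis of interest
    fit_null(data)     -> parameter estimates under the null model
    simulate(theta, n) -> fresh sample of size n drawn from the fitted null
    (all three callables are hypothetical placeholders)
    """
    lr_obs = lr_stat(data)
    theta0 = fit_null(data)
    n = len(data)
    lr_boot = [lr_stat(simulate(theta0, n)) for _ in range(n_boot)]
    # proportion of bootstrap statistics at least as large as the observed one
    return (1 + sum(b >= lr_obs for b in lr_boot)) / (n_boot + 1)
```

Improved critical values can be read off the empirical quantiles of lr_boot in the same way.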
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In this article, we present a new control chart for monitoring the covariance matrix in a bivariate process. In this method, the n observations of the two variables are pooled as if they came from a single variable (a sample of 2n observations), and a sample variance is calculated. This statistic is used to build the new control chart, called the VMIX chart. The performance of the new chart was compared with its main competitors: the generalized sample variance chart, the likelihood ratio test, Nagao's test, the probability integral transformation (v(t)), and the recently proposed VMAX chart. Among these, only the VMAX chart was competitive with the VMIX chart: for shifts in both variances the VMIX chart outperformed VMAX, whereas VMAX performed better for large shifts (higher than 10%) in one variance.
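An illustrative sketch of the charted statistic. The pooling step follows the abstract; the control limit shown is a generic chi-square bound that assumes standardized, uncorrelated in-control variables, whereas the actual VMIX design accounts for the cross-correlation:

```python
import numpy as np
from scipy.stats import chi2

def vmix_statistic(x, y):
    """Pool the n observations of the two variables into a single sample
    of 2n values and return its sample variance (the charted statistic)."""
    pooled = np.concatenate([x, y])
    return pooled.var(ddof=1)

def upper_control_limit(sigma0_sq, n, alpha=0.005):
    # Generic limit assuming the pooled variance behaves as
    # sigma0_sq * chi2(2n - 1) / (2n - 1) in control (an assumption here,
    # not the paper's derivation)
    df = 2 * n - 1
    return sigma0_sq * chi2.ppf(1 - alpha, df) / df
```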
Abstract:
2000 Mathematics Subject Classification: 62F25, 62F03.
Abstract:
Background: Queensland men aged 50 years and older are at high risk of melanoma. Early detection via skin self-examination (SSE), particularly whole-body SSE, followed by presentation to a doctor with suspicious lesions, may decrease morbidity and mortality from melanoma. The prevalence of whole-body SSE (wbSSE) is lower among older Queensland men than in other population subgroups. With the exception of the present study, no previous research has investigated the determinants of wbSSE in older men, or interventions to increase the behaviour in this population. Furthermore, although past SSE intervention studies in other populations have cited health behaviour models in the development of interventions, no study has tested these models in full.

The Skin Awareness Study: A recent randomised trial, the Skin Awareness Study, tested the impact of a video-delivered intervention, compared with written materials alone, on wbSSE in men aged 50 years or older (n=930). Men were recruited from the general population and interviewed over the telephone at baseline and at 13 months. The proportion of men who reported wbSSE rose from 10% to 31% in the control group, and from 11% to 36% in the intervention group.

Current research: The current research was a secondary analysis of data collected for the Skin Awareness Study. The objectives were as follows:
• To describe how men who did not take up any SSE during the study period differed from those who did take up examining their skin.
• To determine whether the intervention program succeeded in affecting the constructs of the Health Belief Model it targeted (self-efficacy, perceived threat, and outcome expectations), and whether this in turn influenced wbSSE.
• To determine whether the Health Action Process Approach (HAPA) predicted wbSSE behaviour better than the Health Belief Model (HBM).

Methods: For objective 1, men who did not report any past SSE at baseline (n=308) were categorised as having 'taken up SSE' (reported SSE at study end) or 'resisted SSE' (reported no SSE at study end). Bivariate logistic regression, followed by multivariable regression, investigated the association between participant characteristics measured at baseline and resisting SSE. For objective 2, proxy measures of self-efficacy, perceived threat, and outcome expectations were selected. To determine whether these mediated the effect of the intervention on the outcome, a mediator analysis was performed with all participants who completed interviews at both time points (n=830), following the Baron and Kenny approach modified for use with structural equation modelling (SEM). For objective 3, only control group participants were included (n=410). Proxy measures of all HBM and HAPA constructs were selected, and SEM was used to build up the models and test the significance of each hypothesised pathway. A likelihood ratio test compared the HAPA to the HBM.

Results: Among men who did not report any SSE at baseline, 27% had not taken up any SSE by the end of the study. In multivariable analyses, resisting SSE was associated with having more freckled skin (p=0.027); being unsure about the statement 'if I saw something suspicious on my skin, I'd go to the doctor straight away' (p=0.028); not intending to perform SSE (p=0.015); having lower SSE self-efficacy (p<0.001); and having no recommendation for SSE from a doctor (p=0.002). In the mediator analysis, none of the tested variables mediated the relationship between the intervention and wbSSE.
With regard to health behaviour models, the HBM did not predict wbSSE well overall. Only the construct of self-efficacy was a significant predictor of future wbSSE (p=0.001); neither perceived threat (p=0.584) nor outcome expectations (p=0.220) were. By contrast, when the HAPA constructs were added, all three HBM variables predicted intention to perform SSE, which in turn predicted future behaviour (p=0.015). The HAPA construct of volitional self-efficacy was also associated with wbSSE (p=0.046). The HAPA was a significantly better model than the HBM (p<0.001).

Limitations: The items selected to measure HBM and HAPA constructs for objectives 2 and 3 may not have accurately reflected each construct.

Conclusions: This research added to the evidence base on how best to target interventions to older men, and on the appropriateness of particular health behaviour models to guide interventions. The findings indicate that men with more negative pre-existing attitudes to SSE (not intending to do it, lower initial self-efficacy) may need more intensive interventions to overcome resistance. Involving general practitioners in recommending SSE to their patients, alongside disseminating an intervention, may increase its success. Comparison of the HBM and HAPA showed that while two of the three HBM variables examined did not directly predict future wbSSE, all three were associated with intention to self-examine skin. This suggests that, in this population, intervening on these variables may increase intention to examine skin, but not necessarily the behaviour itself. Future interventions could focus on increasing the motivational variables of perceived threat and outcome expectations, as well as a combination of action and volitional self-efficacy, with the aim of increasing intention and its translation into taking up and maintaining regular wbSSE.
Abstract:
In the analysis of tagging data, it has been found that the least-squares method based on the growth-increment function, known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to assess the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
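For context, a sketch of the original (unmodified) Fabens least-squares fit of the von Bertalanffy increment function to tag-recapture data; the paper's modification, a random length asymptote, is not implemented here, and the data values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def fabens_increment(X, L_inf, K):
    """Expected von Bertalanffy growth increment between release and
    recapture: dL = (L_inf - L1) * (1 - exp(-K * dt))."""
    L1, dt = X
    return (L_inf - L1) * (1.0 - np.exp(-K * dt))

# Hypothetical tag-recapture records: length at release (L1),
# time at liberty in years (dt), observed growth increment (dL)
L1 = np.array([20.0, 25.0, 30.0, 35.0, 28.0])
dt = np.array([0.5, 1.0, 0.8, 1.5, 1.2])
dL = np.array([8.0, 7.5, 5.0, 4.0, 6.5])

(L_inf_hat, K_hat), _ = curve_fit(fabens_increment, (L1, dt), dL, p0=(50.0, 0.5))
print(L_inf_hat, K_hat)   # least-squares estimates of the asymptote and K
```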
Abstract:
This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection is derived from the likelihood ratio in the presence of navigation data bits. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to navigation data bits and to frequency, phase, and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
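A sketch of the conventional non-coherent PDI statistic that the paper takes as its baseline, applied to simulated correlator outputs with unknown data bits and carrier phase (all parameter values are hypothetical, and the proposed cyclostationarity-based detector is not shown):

```python
import numpy as np

def noncoherent_pdi(corr_outputs):
    """Conventional post-detection integration: accumulate the squared
    magnitudes of the partial coherent integration outputs; detection
    declares a signal when the sum exceeds a threshold (not shown)."""
    return np.sum(np.abs(corr_outputs) ** 2)

rng = np.random.default_rng(1)
N = 20                                          # number of coherent blocks
bits = rng.choice([-1.0, 1.0], size=N)          # unknown navigation data bits
phase = rng.uniform(0, 2 * np.pi)               # unknown carrier phase
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
outputs = 0.5 * bits * np.exp(1j * phase) + noise

T = noncoherent_pdi(outputs)  # squaring removes the bit-sign and phase dependence
```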
Abstract:
Negabinary (base −2) is a positional number system. A complete set of negabinary arithmetic operations is presented, including the basic addition/subtraction logic, the two-step carry-free addition/subtraction algorithm based on negabinary signed-digit (NSD) representation, parallel multiplication, and fast conversion from NSD to normal negabinary in carry-look-ahead mode. All the arithmetic operations can be performed with binary logic. By programming the binary reference bits, addition and subtraction can be realized in parallel with the same binary logic functions. This offers a technique for performing space-variant arithmetic-logic functions with space-invariant instructions. Multiplication can be performed in a tree structure and is simpler than its modified signed-digit (MSD) counterpart. The parallelism of the algorithms is well suited to optical implementation. Correspondingly, a general-purpose optical logic system using an electron trapping device is suggested. Various complex logic functions can be performed by programming the illumination of the data arrays without additional temporal latency for intermediate results. The system can be compact. These properties make the proposed negabinary arithmetic-logic system a strong candidate for future applications in digital optical computing with the development of smart pixel arrays.
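The radix itself is easy to demonstrate in software. A small sketch of conversion between integers and negabinary digit strings (unrelated to the optical implementation above):

```python
def to_negabinary(n: int) -> str:
    """Digits of n in base -2, most significant first."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:            # force the remainder into {0, 1}
            n += 1
            r += 2
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s: str) -> int:
    """Evaluate a negabinary digit string back to an integer."""
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

assert to_negabinary(-7) == "1001"               # -8 + 0 + 0 + 1 = -7
assert from_negabinary(to_negabinary(13)) == 13  # round trip
```

Note that every integer, positive or negative, has a representation without a sign bit, which is what makes the carry-free signed-digit algorithms attractive.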
Abstract:
We present, for the first time to our knowledge, a generalized look-ahead logic algorithm for number conversion from signed-digit to complement representation. By properly encoding the signed digits, all operations are performed by binary logic, and unified logical expressions are obtained for conversion from modified signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using an electron-trapping device is employed, and experimental results are shown. This optical module is suitable for implementing complex logic functions in sum-of-products form. The algorithm and architecture are compatible with a general-purpose optoelectronic computing system.
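A sketch of the arithmetic relationship involved, using straightforward evaluation and re-encoding rather than the paper's look-ahead logic; the digit lists and the 8-bit width are illustrative:

```python
def msd_to_twos_complement(digits, width=8):
    """Convert a modified signed-digit (MSD) number (radix 2, digits in
    {-1, 0, 1}, most significant first) to a width-bit 2's-complement
    bit string by evaluating it and re-encoding the value."""
    value = 0
    for d in digits:
        value = 2 * value + d        # Horner evaluation of the digit string
    return format(value & ((1 << width) - 1), f"0{width}b")

print(msd_to_twos_complement([1, -1, 0, 1]))    # 8 - 4 + 1 = 5  -> '00000101'
print(msd_to_twos_complement([-1, 0, 1]))       # -4 + 1 = -3    -> '11111101'
```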
Abstract:
Background: We conducted a survival analysis of all confirmed cases of adult tuberculosis (TB) patients treated in Cork City, Ireland. The aim of this study was to estimate survival time (ST), including the median survival time, and to assess the association and impact of covariates (TB risk factors) on event status and ST. The outcome of the survival analysis is reported in this paper. Methods: We used a retrospective cohort study design to review data on 647 bacteriologically confirmed TB patients from the medical records of two teaching hospitals; mean age was 49 years (range 18–112). We collected information on potential risk factors for all confirmed cases of TB treated between 2008 and 2012. For the survival analysis, the outcome of interest was 'treatment failure' or 'death' (whichever came first). A univariate descriptive analysis was conducted using the non-parametric Kaplan-Meier (KM) method to estimate overall survival (OS), while the Cox proportional hazards model was used for the multivariate analysis to determine possible associations of predictor variables and to obtain adjusted hazard ratios. The p-value threshold was set at <0.05, and the log-likelihood ratio test threshold at >0.10. Data were analysed using SPSS version 15.0. Results: There was no significant difference in the survival curves of male and female patients (log-rank statistic = 0.194, df = 1, p = 0.66) or among age groups (log-rank statistic = 1.337, df = 3, p = 0.72). The mean overall survival (OS) was 209 days (95% CI: 92–346), while the median was 51 days (95% CI: 35.7–66). The mean ST for women was 385 days (95% CI: 76.6–694) and for men 69 days (95% CI: 48.8–88.5). Multivariate Cox regression showed that patients with a history of drug misuse had 2.2 times the hazard of those without. Smokers and alcohol drinkers had a hazard ratio of 1.8, patients born in a country of high endemicity (BICHE) had a hazard ratio of 6.3, and HIV co-infection carried a hazard ratio of 1.2. Conclusion: There was no significant difference in survival curves between males and females or among age groups. Women had a longer ST than men, but men had a higher hazard rate than women. Anti-TNF and immunosuppressive medication and diabetes were associated with longer ST, while alcohol, smoking, RICHE, and BICHE were associated with shorter ST.
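A sketch of the same analysis pipeline (Kaplan-Meier overall survival plus a Cox proportional hazards model) using the Python lifelines library; the column names and toy values below are hypothetical, not the study's data:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Toy records: survival time in days, event indicator
# (treatment failure or death), and two example risk factors
df = pd.DataFrame({
    "time":        [51, 209, 385, 69, 120, 300, 45, 180],
    "event":       [1, 1, 0, 1, 0, 1, 1, 0],
    "drug_misuse": [1, 0, 0, 1, 0, 0, 1, 0],
    "smoker":      [1, 1, 0, 0, 1, 0, 1, 1],
})

km = KaplanMeierFitter().fit(df["time"], event_observed=df["event"])
print(km.median_survival_time_)                 # median overall survival

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                             # adjusted hazard ratios
```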
Abstract:
BACKGROUND: The impact of bronchiectasis on sedentary behaviour and physical activity is unknown. It is important to explore this to identify the need for physical activity interventions and how to tailor interventions to this patient population. We aimed to explore the patterns and correlates of sedentary behaviour and physical activity in bronchiectasis.
METHODS: Physical activity was assessed in 63 patients with bronchiectasis using an ActiGraph GT3X+ accelerometer over seven days. Patients completed questionnaires on health-related quality of life and attitudes to physical activity (questions based on an adaptation of the transtheoretical model (TTM) of behaviour change); spirometry; and the modified shuttle test (MST). Multiple linear regression analysis, using forward selection based on likelihood ratio statistics, explored the correlates of sedentary behaviour and physical activity dimensions. Between-group analyses using independent-sample t-tests explored differences for selected variables.
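A sketch of forward selection driven by likelihood ratio statistics for a linear model, written with statsmodels; this is a generic implementation of the idea, not the study's SPSS procedure:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def forward_select_lr(y, X, alpha=0.05):
    """Forward selection for a linear regression: at each step add the
    candidate column with the largest likelihood ratio statistic
    (2 * log-likelihood gain), stopping when no addition is significant."""
    selected = []
    while True:
        remaining = [c for c in X.columns if c not in selected]
        if not remaining:
            break
        base = (sm.OLS(y, sm.add_constant(X[selected])).fit()
                if selected else
                sm.OLS(y, np.ones((len(y), 1))).fit())   # intercept-only model
        best, best_lr = None, 0.0
        for c in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [c]])).fit()
            lr = 2 * (fit.llf - base.llf)
            if lr > best_lr:
                best, best_lr = c, lr
        if best is None or chi2.sf(best_lr, df=1) > alpha:
            break                                        # nothing significant
        selected.append(best)
    return selected
```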
RESULTS: Fifty-five patients had complete datasets. Average daily time, mean (standard deviation), spent in sedentary behaviour was 634 (77) minutes; in light-lifestyle physical activity, 207 (63) minutes; and in moderate-vigorous physical activity (MVPA), 25 (20) minutes. Only 11% of patients met recommended guidelines. Forced expiratory volume in one second percentage predicted (FEV1% predicted) and disease severity were not correlates of sedentary behaviour or physical activity. For sedentary behaviour, the decisional balance 'pros' score was the only correlate. Performance on the MST was the strongest correlate of physical activity. In addition to the MST, other important correlates emerged for MVPA accumulated in ≥10-minute bouts (QOL-B Social Functioning) and for activity energy expenditure (Body Mass Index and QOL-B Respiratory Symptoms).
CONCLUSIONS: Patients with bronchiectasis demonstrated a largely inactive lifestyle and few met the recommended physical activity guidelines. Exercise capacity was the strongest correlate of physical activity, and dimensions of the QOL-B were also important. FEV1% predicted and disease severity were not correlates of sedentary behaviour or physical activity. The inclusion of a range of physical activity dimensions could facilitate in-depth exploration of patterns of physical activity. This study demonstrates the need for interventions targeted at reducing sedentary behaviour and increasing physical activity, and provides information to tailor interventions to the bronchiectasis population.
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
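A sketch of the Monte Carlo test principle the paper builds on: when the null distribution of a statistic is free of nuisance parameters, a modest number of simulated replicates yields an exact p-value. The simulate_stat callable is a hypothetical placeholder for, e.g., recomputing Wilks' LR criterion from standard-normal errors:

```python
import numpy as np

def monte_carlo_pvalue(stat_obs, simulate_stat, n_rep=999, seed=0):
    """Exact Monte Carlo p-value. With n_rep null replicates, the
    resulting test has size exactly alpha whenever (n_rep + 1) * alpha
    is an integer (e.g. 999 replicates for alpha = 0.05)."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    # rank the observed statistic among the simulated null replicates
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)
```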
Abstract:
The last decade has seen growing interest in the problems raised by weak instrumental variables in the econometric literature, that is, situations where the instruments are only weakly correlated with the variable to be instrumented. It is well known that when instruments are weak, the distributions of the Student, Wald, likelihood ratio, and Lagrange multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on returns-to-education models [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on asset-pricing models (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instrumental variables are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy is to use identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if the endogeneity of instrumental variables poses major difficulties for statistical inference, can we propose test procedures that select instruments when they are both strong and valid? Is it possible to propose instrument-selection procedures that remain valid even under weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay is published in the Journal of Statistical Planning and Inference 138 (2008) 2649–2661. In this essay, we analyse the effects of instrument endogeneity on two identification-robust test statistics, the Anderson-Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are generally consistent against the presence of invalid instruments (that is, they detect invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may fail, but where the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least-squares estimator remains consistent but the tests are asymptotically invalid.
Second, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size grows), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations where the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments). The second essay studies the impact of weak instruments on specification tests of the Durbin-Wu-Hausman (DWH) type and on the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis offers several insights as well as extensions of earlier procedures. In particular, characterizing the finite-sample distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterization of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. Guggenberger's (2008) conclusion concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least-squares estimator is more efficient than two-stage least squares when the instruments are weak and endogeneity is moderate [a conclusion similar to Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental-variables method should only be applied when one is confident of having strong instruments. Hence, Guggenberger's (2008) conclusions are qualified and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relation between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test addresses a common problem in empirical work: testing the partial exogeneity of a subset of variables.
We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and endogeneity is moderate. We also show that this test can serve as an instrument-selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that a mother's education explains her son's dropping out of school, that output is an endogenous variable in estimating the firm's cost function, and that the price of fuel is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the original or extended Wald test allows one to build confidence regions and to test linear restrictions on covariances, it assumes that the model parameters are identified. When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is generally no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for building confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples, and it is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrument-selection procedure even when identification is a problem. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator with partial IV estimators. Our simulations show that: (1) like the ordinary least-squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and endogeneity is moderate; (2) the pre-test estimators perform very well overall compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relation between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are strongly weak in the second [Bound (1995), Doko and Dufour (2009)]. Consistent with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
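A sketch of the Anderson-Rubin (1949) statistic discussed throughout, in its standard F form under homoskedastic errors; the data-generating values below are hypothetical:

```python
import numpy as np
from scipy.stats import f as f_dist

def anderson_rubin(y, X, Z, beta0):
    """AR test of H0: beta = beta0 in y = X beta + u with instrument
    matrix Z (n x k). Its null distribution does not depend on
    instrument strength, which is what makes it identification-robust."""
    n, k = Z.shape
    u0 = y - X @ beta0
    Pu = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u0)   # projection of u0 onto span(Z)
    num = (u0 @ Pu) / k
    den = (u0 @ u0 - u0 @ Pu) / (n - k)
    ar = num / den
    return ar, f_dist.sf(ar, k, n - k)            # statistic and p-value

rng = np.random.default_rng(0)
n = 200
Z = rng.standard_normal((n, 3))
X = (Z @ np.full(3, 0.1) + rng.standard_normal(n))[:, None]  # weak instruments
y = X[:, 0] + rng.standard_normal(n)
print(anderson_rubin(y, X, Z, np.array([1.0])))
```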
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1 − p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample scope for the use of graphics in HWE testing, in particular the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained and depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to attractive graphical representations in which large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
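A sketch of the classical chi-square test for HWE from genotype counts (the counts below are hypothetical):

```python
from scipy.stats import chi2

def hwe_chisq(n_AA, n_AB, n_BB):
    """Chi-square test for Hardy-Weinberg equilibrium at a bi-allelic
    locus, without continuity correction; 1 degree of freedom
    (3 genotype classes - 1 - 1 estimated allele frequency)."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)        # estimated frequency of allele A
    q = 1.0 - p
    expected = (n * p**2, 2 * n * p * q, n * q**2)
    observed = (n_AA, n_AB, n_BB)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

print(hwe_chisq(298, 489, 213))            # hypothetical SNP genotype counts
```

The graphical approach described above amounts to expressing this acceptance region in terms of the heterozygote frequency, so it can be drawn once in the ternary plot and reused for every SNP.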
Abstract:
A score test is developed for binary clinical trial data that incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistics for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis by adjusting for patient non-compliance. Sample size formulae are derived, and simulation studies demonstrate that the sample size approximation holds.
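For reference, a sketch of the unadjusted intention-to-treat benchmark the paper compares against: a standard two-proportion score (z) test analysing patients as randomized, regardless of compliance. The counts are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def itt_score_test(x_t, n_t, x_c, n_c):
    """Two-proportion score test of equal event rates between arms,
    analysed as assigned (intention-to-treat)."""
    p_pool = (x_t + x_c) / (n_t + n_c)           # pooled rate under H0
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (x_t / n_t - x_c / n_c) / se
    return z, 2 * norm.sf(abs(z))                # two-sided p-value

print(itt_score_test(45, 200, 30, 200))          # hypothetical trial counts
```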