23 results for Multiple hypothesis testing
in Aston University Research Archive
Abstract:
Spectral and coherence methodologies are ubiquitous for the analysis of multiple time series. Partial coherence analysis may be used to try to determine graphical models for brain functional connectivity. The outcome of such an analysis may be considerably influenced by factors such as the degree of spectral smoothing, line and interference removal, matrix inversion stabilization, the suppression of effects caused by side-lobe leakage, the combination of results from different epochs and people, and multiple hypothesis testing. This paper examines each of these steps in turn and provides a possible path that produces relatively ‘clean’ connectivity plots. In particular, we show how spectral matrix diagonal up-weighting can simultaneously stabilize spectral matrix inversion and reduce effects caused by side-lobe leakage, and use the stepdown multiple hypothesis test procedure to help formulate an interaction strength.
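The diagonal up-weighting and partial-coherence steps described in this abstract can be sketched as follows. This is a minimal illustration on a toy three-channel spectral matrix, not the paper's pipeline; the `upweight` constant and the synthetic data are assumptions.

```python
import numpy as np

def partial_coherence(S, upweight=0.01):
    """Partial coherence from a spectral matrix S at a single frequency.

    Adding a small multiple of the average power to the diagonal
    (diagonal up-weighting) stabilises the matrix inversion.
    """
    p = S.shape[0]
    S_w = S + upweight * np.trace(S).real / p * np.eye(p)
    G = np.linalg.inv(S_w)
    d = np.sqrt(np.abs(np.diag(G)))
    pc = np.abs(G) / np.outer(d, d)      # |G_ij| / sqrt(G_ii * G_jj)
    np.fill_diagonal(pc, 0.0)            # zero the self-coherence entries
    return pc ** 2

# toy three-channel "spectral matrix": channel 2 mixes channels 0 and 1
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 500)) + 1j * rng.standard_normal((3, 500))
X[2] = 0.7 * X[0] + 0.7 * X[1] + 0.2 * X[2]
S = X @ X.conj().T / X.shape[1]
pc = partial_coherence(S)
```

Larger `upweight` values suppress spurious off-diagonal structure at the cost of shrinking genuine partial coherences, so the constant is a tuning choice.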
Abstract:
We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading but not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing this into the artificial binary choice that characterizes hypothesis testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, characteristics of the data, and practical issues like availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
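The model-selection idea in this abstract (quantifying strength of evidence rather than forcing a binary decision) can be illustrated with an AIC comparison on made-up case-series data; the "frequency" effect and all numbers below are assumptions for illustration only.

```python
import numpy as np

def aic_linear(y, X):
    """AIC for an ordinary least-squares fit assuming Gaussian errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the error variance

# hypothetical case-series data: naming errors driven by word frequency
rng = np.random.default_rng(1)
n = 40
freq = rng.standard_normal(n)
errors = 0.8 * freq + 0.1 * rng.standard_normal(n)

X_null = np.ones((n, 1))                       # no-effect model
X_freq = np.column_stack([np.ones(n), freq])   # frequency-effect model
aic_null = aic_linear(errors, X_null)
aic_freq = aic_linear(errors, X_freq)
# the size of the AIC gap, not a reject/accept verdict, expresses how
# strongly the data favour the frequency account
```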
Abstract:
This research sets out to assess whether the PHC system in rural Nigeria is effective by testing the research hypothesis: 'PHC can be effective if and only if the Health Care Delivery System matches the attitudes and expectations of the Community'. The field surveys to accomplish this task were carried out in IBO, YORUBA, and HAUSA rural communities. A variety of techniques were used as research methodology, including questionnaires, interviews and personal observations of events in the rural community. The thesis comprises three main parts. Part I traces the socio-cultural aspects of PHC in rural Nigeria, describes PHC management activities in Nigeria and the practical problems inherent in the system. Part II describes the various theoretical and practical research techniques used for the study and concentrates on the field work programme, data analysis and the testing of the research hypothesis. Part III focuses on general strategies to make the PHC system in Nigeria more effective; the research contributions to knowledge and a summary of the main conclusions of the study are also highlighted in this part. Based on testing and exploring the research hypothesis stated above, the study concludes that PHC in rural Nigeria is ineffective, as revealed in people's low opinions of the system and dissatisfaction with PHC services. Many people expressed the view that they could not obtain health care services in time, at a cost they could afford and in a manner acceptable to them. Following these conclusions, some alternative ways to implement PHC programmes in rural Nigeria are put forward to make the Nigerian PHC system more effective.
Abstract:
Citation information: Armstrong RA, Davies LN, Dunne MCM & Gilmartin B. Statistical guidelines for clinical studies of human vision. Ophthalmic Physiol Opt 2011, 31, 123-136. doi: 10.1111/j.1475-1313.2010.00815.x ABSTRACT: Statistical analysis of data can be complex, and different statisticians may disagree as to the correct approach, leading to conflict between authors, editors, and reviewers. The objective of this article is to provide some statistical advice for contributors to optometric and ophthalmic journals, to provide advice specifically relevant to clinical studies of human vision, and to recommend statistical analyses that could be used in a variety of circumstances. In submitting an article in which quantitative data are reported, authors should clearly describe the statistical procedures they have used and justify each stage of the analysis. This is especially important if more complex or 'non-standard' analyses have been carried out. The article begins with some general comments relating to data analysis concerning sample size and 'power', hypothesis testing, parametric and non-parametric variables, 'bootstrap methods', one- and two-tail testing, and the Bonferroni correction. More specific advice is then given with reference to particular statistical procedures that can be used on a variety of types of data. Where relevant, examples of correct statistical practice are given with reference to recently published articles in the optometric and ophthalmic literature.
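As a worked illustration of the Bonferroni correction mentioned in this abstract (the p-values below are invented for the example):

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H_i only when p_i < alpha / m, where m is the number of tests."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# five hypothetical comparisons from a single study
p_vals = [0.001, 0.012, 0.030, 0.049, 0.20]
rejected = bonferroni(p_vals)
# with m = 5 the per-test threshold drops to 0.01, so only the first
# comparison remains significant after correction
```

Note that three of the uncorrected p-values sit below 0.05, which is exactly the situation where an uncorrected analysis would over-claim.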
Abstract:
Although considerable effort has been invested in the measurement of banking efficiency using Data Envelopment Analysis, hardly any empirical research has focused on comparison of banks in Gulf States countries. This paper employs data on the Gulf States banking sector for the period 2000-2002 to develop efficiency scores and rankings for both Islamic and conventional banks. We then investigate productivity change using the Malmquist Index and decompose productivity into technical change and efficiency change. Further, hypothesis testing and statistical precision in the context of nonparametric efficiency and productivity measurement are employed. Specifically, cross-country analysis of efficiency and comparisons of efficiency between Islamic and conventional banks are investigated using the Mann-Whitney test.
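The Mann-Whitney comparison used in this abstract can be sketched as below. The efficiency scores are invented, and the implementation uses the normal approximation without tie correction, so it is a minimal sketch rather than the authors' analysis.

```python
import numpy as np
from math import erfc, sqrt

def mann_whitney_u(x, y):
    """Mann-Whitney U test with a normal approximation (no tie correction)."""
    nx, ny = len(x), len(y)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1.0
    u = ranks[:nx].sum() - nx * (nx + 1) / 2       # U statistic for sample x
    mu = nx * ny / 2
    sigma = sqrt(nx * ny * (nx + ny + 1) / 12)
    z = (u - mu) / sigma
    return u, erfc(abs(z) / sqrt(2))               # two-sided p-value

# hypothetical efficiency scores: five Islamic vs. five conventional banks
islamic = np.array([0.91, 0.88, 0.95, 0.90, 0.86])
conventional = np.array([0.72, 0.78, 0.70, 0.81, 0.75])
u_stat, p_value = mann_whitney_u(islamic, conventional)
```

Because every Islamic score here exceeds every conventional score, U attains its maximum (nx * ny = 25) and the test rejects equality of the two distributions.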
Abstract:
The incentive dilemma refers to a situation in which incentives are offered but do not work as intended. The authors suggest that, in an interorganizational context, whether a principal-provided incentive works is a function of how it is evaluated by an agent: for its contribution to the agent's bottom line (instrumental evaluation) and for the extent to which it is strategically aligned with the agent's direction (congruence evaluation). To further understand when incentives work, the influence of two key contextual variables, industry volatility and dependence, is examined. A field study featuring 57 semi-structured depth interviews and 386 responses from twin surveys in the information technology and brewing industries provides data for hypothesis testing. When and whether incentives work is demonstrated by certain conditions under which the agent's evaluation of an incentive has positive or negative effects on its compliance and active representation. Further, some outcomes are reversed in the high-volatility condition. © 2013 Academy of Marketing Science.
Abstract:
This work attempts to shed light on the fundamental concepts behind the stability of Multi-Agent Systems. We view the system as a discrete-time Markov chain with a potentially unknown transition probability distribution. The system is considered stable when its state has converged to an equilibrium distribution. Faced with the non-trivial task of establishing convergence to such a distribution, we propose a hypothesis-testing approach in which we test whether a particular system metric has converged. We describe some artificial multi-agent ecosystems that were developed, and we present results based on these systems which confirm that this approach qualitatively agrees with our intuition.
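One simple instance of such a convergence test, sketched here on a two-state chain (the chain, the thinning interval, and the choice of a two-proportion z-test are all illustrative assumptions, not the paper's method):

```python
import numpy as np
from math import erfc, sqrt

def occupancy_z_test(window_a, window_b, state=1):
    """Two-proportion z-test on the occupancy rate of `state` in two windows.

    If the chain has reached an equilibrium distribution, both windows
    should show statistically indistinguishable occupancy rates.  Samples
    are assumed pre-thinned to tame autocorrelation.
    """
    pa = np.mean(window_a == state)
    pb = np.mean(window_b == state)
    n = len(window_a)
    p = (pa + pb) / 2
    se = sqrt(2 * p * (1 - p) / n)
    z = (pa - pb) / se
    return erfc(abs(z) / sqrt(2))        # two-sided p-value

# simulate a two-state chain that stays in its current state w.p. 0.9
rng = np.random.default_rng(2)
states = [0]
for _ in range(4000):
    states.append(states[-1] if rng.random() < 0.9 else 1 - states[-1])
traj = np.array(states)

# thin by 20 to reduce autocorrelation, then compare two post-burn-in windows
p_val = occupancy_z_test(traj[2000:3000:20], traj[3000:4000:20])
```

Failing to reject (a large p-value) is then taken as evidence that the occupancy metric has stabilised; a full treatment would correct properly for the chain's autocorrelation rather than thinning.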
Abstract:
We agree with de Jong et al.'s argument that business historians should make their methods more explicit and welcome a more general debate about the most appropriate methods for business historical research. But rather than advocating one ‘new business history’, we argue that contemporary debates about methodology in business history need greater appreciation for the diversity of approaches that have developed in the last decade. And while the hypothesis-testing framework prevalent in the mainstream social sciences favoured by de Jong et al. should have its place among these methodologies, we identify a number of additional streams of research that can legitimately claim to have contributed novel methodological insights by broadening the range of interpretative and qualitative approaches to business history. Thus, we reject privileging a single method, whatever it may be, and argue instead in favour of recognising the plurality of methods being developed and used by business historians – both within their own field and as a basis for interactions with others.
Abstract:
While the literature has suggested the possibility of breach being composed of multiple facets, no previous study has investigated this possibility empirically. This study examined the factor structure of typical component forms in order to develop a multiple-component-form measure of breach. Two studies were conducted. In study 1 (N = 420), multi-item measures based on causal indicators representing promissory obligations were developed for the five potential component forms (delay, magnitude, type/form, inequity and reciprocal imbalance). Exploratory factor analysis showed that the five components loaded onto one higher-order factor, namely psychological contract breach, suggesting that breach is composed of different aspects rather than distinct types of breach. Confirmatory factor analysis provided further evidence for the proposed model. In addition, the model achieved high construct reliability and showed good construct, convergent, discriminant and predictive validity. Study 2 data (N = 189), used to validate study 1 results, compared the multiple-component measure with an established multiple-item measure of breach (rather than a single item as in study 1) and also tested for discriminant validity with an established multiple-item measure of violation. Findings replicated those in study 1. The findings have important implications for considering alternative, more comprehensive and elaborate ways of assessing breach.
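The single higher-order factor structure reported in this abstract has a simple numerical signature that can be sketched on simulated data (the loadings and sample size below are invented; this is an eigenvalue check, not the authors' full EFA/CFA procedure):

```python
import numpy as np

# hypothetical data: five breach component scores driven by ONE latent factor
rng = np.random.default_rng(3)
n = 420
latent = rng.standard_normal(n)                       # latent "breach" factor
loadings = np.array([0.8, 0.7, 0.75, 0.65, 0.7])
components = np.outer(latent, loadings) + 0.5 * rng.standard_normal((n, 5))

R = np.corrcoef(components, rowvar=False)             # 5 x 5 correlations
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
# one dominant eigenvalue, with no other above 1 (the Kaiser criterion),
# is the classic signature of a single higher-order factor
```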
Abstract:
Researchers often use 3-way interactions in moderated multiple regression analysis to test the joint effect of 3 independent variables on a dependent variable. However, further probing of significant interaction terms varies considerably and is sometimes error prone. The authors developed a significance test for slope differences in 3-way interactions and illustrate its importance for testing psychological hypotheses. Monte Carlo simulations revealed that sample size, magnitude of the slope difference, and data reliability affected test power. Application of the test to published data yielded detection of some slope differences that were undetected by alternative probing techniques and led to changes of results and conclusions. The authors conclude by discussing the test's applicability for psychological research. Copyright 2006 by the American Psychological Association.
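The kind of slope-difference test this abstract describes can be sketched as a linear contrast on the coefficients of a moderated regression. The data, effect sizes, and the specific contrast below are illustrative assumptions, not the authors' published test.

```python
import numpy as np

# hypothetical data with a genuine x * z * w interaction (coefficient 0.5)
rng = np.random.default_rng(4)
n = 300
x, z, w = rng.standard_normal((3, n))
y = 0.3 * x + 0.5 * x * z * w + rng.standard_normal(n)

# moderated regression: columns are 1, x, z, w, xz, xw, zw, xzw
X = np.column_stack([np.ones(n), x, z, w, x * z, x * w, z * w, x * z * w])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)       # covariance of the coefficients

# simple slope of x at (z, w) is b1 + b4*z + b5*w + b7*z*w; the difference
# between the slopes at (z=+1, w=+1) and (z=+1, w=-1) is 2*(b5 + b7)
c = np.zeros(8)
c[5] = 2.0
c[7] = 2.0
diff = c @ beta
t_stat = diff / np.sqrt(c @ cov @ c)        # t statistic for the difference
```

Probing only the four simple slopes individually could miss this difference; the contrast tests it directly, which is the point the abstract makes.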
Abstract:
Discriminant analysis (also known as discriminant function analysis or multiple discriminant analysis) is a multivariate statistical method of testing the degree to which two or more populations may overlap with each other. It was devised independently by several statisticians including Fisher, Mahalanobis, and Hotelling. The technique has several possible applications in Microbiology. First, in a clinical microbiological setting, if two different infectious diseases were defined by a number of clinical and pathological variables, it may be useful to decide which measurements were the most effective at distinguishing between the two diseases. Second, in an environmental microbiological setting, the technique could be used to study the relationships between different populations, e.g., to what extent do the properties of soils in which the bacterium Azotobacter is found differ from those in which it is absent? Third, the method can be used as a multivariate ‘t’ test, i.e., given a number of related measurements on two groups, the analysis can provide a single test of the hypothesis that the two populations have the same means for all the variables studied. This statnote describes one of the most popular applications of discriminant analysis in identifying the descriptive variables that can distinguish between two populations.
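The Azotobacter example in this abstract can be sketched with Fisher's two-group linear discriminant. All numbers (group means, variances, sample sizes, the choice of pH and moisture as variables) are invented for illustration.

```python
import numpy as np

# hypothetical soil data: pH and moisture for plots with/without Azotobacter
rng = np.random.default_rng(5)
with_az = rng.multivariate_normal([6.8, 30.0], [[0.1, 0.0], [0.0, 4.0]], 50)
without = rng.multivariate_normal([5.9, 22.0], [[0.1, 0.0], [0.0, 4.0]], 50)

m1, m2 = with_az.mean(axis=0), without.mean(axis=0)
pooled = (np.cov(with_az.T) + np.cov(without.T)) / 2
w = np.linalg.solve(pooled, m1 - m2)        # Fisher's discriminant direction

# classify by projecting onto w and splitting at the midpoint of the means
threshold = w @ (m1 + m2) / 2
pred_with = with_az @ w > threshold
pred_without = without @ w > threshold
accuracy = (pred_with.sum() + (~pred_without).sum()) / 100
```

The relative sizes of the entries of `w` (after accounting for each variable's scale) indicate which measurements do the most discriminating work.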
Abstract:
Background: Cancer-related self-tests are currently available to buy in pharmacies or over the internet, including tests for faecal occult blood, PSA and haematuria. Self-tests have potential benefits (e.g. convenience) but there are also potential harms (e.g. delays in seeking treatment). The extent of cancer-related self-test use in the UK is not known. This study aimed to determine the prevalence of cancer-related self-test use. Methods: Adults (n = 5,545) in the West Midlands were sent a questionnaire that collected socio-demographic information and data regarding previous and potential future use of 18 different self-tests. Prevalence rates were directly standardised to the England population. The postcode-based Index of Multiple Deprivation 2004 was used as a proxy measure of deprivation. Results: 2,925 (54%) usable questionnaires were returned. 1.2% (95% CI 0.83% to 1.66%) of responders reported having used a cancer-related self-test kit and a further 36% reported that they would consider using one in the future. Logistic regression analyses suggest that increasing age, deprivation category and employment status were associated with cancer-related self-test kit use. Conclusion: We conclude that one in 100 of the adult population have used a cancer-related self-test kit and over a third would consider using one in the future. Self-test kit use could alter perceptions of risk, cause psychological morbidity and impact on the demand for healthcare.
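A confidence interval for a small prevalence like the 1.2% reported here can be computed with the Wilson score interval. The count of 35 users is an assumption (roughly 1.2% of the 2,925 usable questionnaires), and the study's own interval may rest on a different method, so this is only a consistency sketch.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# assumed count: ~1.2% of the 2,925 usable questionnaires
lo, hi = wilson_ci(35, 2925)
```

The resulting interval (roughly 0.9% to 1.7%) is close to the 0.83%-1.66% the abstract reports.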
Abstract:
In 2002, we published a paper [Brock, J., Brown, C., Boucher, J., Rippon, G., 2002. The temporal binding deficit hypothesis of autism. Development and Psychopathology 14, 209-224] highlighting the parallels between the psychological model of 'central coherence' in information processing [Frith, U., 1989. Autism: Explaining the Enigma. Blackwell, Oxford] and the neuroscience model of neural integration or 'temporal binding'. We proposed that autism is associated with abnormalities of information integration caused by a reduction in the connectivity between specialised local neural networks in the brain and possible overconnectivity within isolated individual neural assemblies. The current paper updates this model, providing a summary of theoretical and empirical advances in research implicating disordered connectivity in autism. This is in the context of changes in the approach to the core psychological deficits in autism, of greater emphasis on 'interactive specialisation' and the resultant stress on early and/or low-level deficits and their cascading effects on the developing brain [Johnson, M.H., Halit, H., Grice, S.J., Karmiloff-Smith, A., 2002. Neuroimaging of typical and atypical development: a perspective from multiple levels of analysis. Development and Psychopathology 14, 521-536]. We also highlight recent developments in the measurement and modelling of connectivity, particularly in the emerging ability to track the temporal dynamics of the brain using electroencephalography (EEG) and magnetoencephalography (MEG) and to investigate the signal characteristics of this activity. This advance could be particularly pertinent in testing an emerging model of effective connectivity based on the balance between excitatory and inhibitory cortical activity [Rubenstein, J.L., Merzenich, M.M., 2003. Model of autism: increased ratio of excitation/inhibition in key neural systems. Genes, Brain and Behavior 2, 255-267; Brown, C., Gruber, T., Rippon, G., Brock, J., Boucher, J., 2005. Gamma abnormalities during perception of illusory figures in autism. Cortex 41, 364-376]. Finally, we note that the consequence of this convergence of research developments not only enables a greater understanding of autism but also has implications for prevention and remediation. © 2006.
Abstract:
Gestalt grouping rules imply a process or mechanism for grouping together local features of an object into a perceptual whole. Several psychophysical experiments have been interpreted as evidence for constrained interactions between nearby spatial filter elements and this has led to the hypothesis that element linking might be mediated by these interactions. A common tacit assumption is that these interactions result in response modulation which disturbs a local contrast code. We addressed this possibility by performing contrast discrimination experiments using two-dimensional arrays of multiple Gabor patches arranged either (i) vertically, (ii) in circles (coherent conditions), or (iii) randomly (incoherent condition), as well as for a single Gabor patch. In each condition, contrast increments were applied to either the entire test stimulus (experiment 1) or a single patch whose position was cued (experiment 2). In experiment 3, the texture stimuli were reduced to a single contour by displaying only the central vertical strip. Performance was better for the multiple-patch conditions than for the single-patch condition, but whether the multiple-patch stimulus was coherent or not had no systematic effect on the results in any of the experiments. We conclude that constrained local interactions do not interfere with a local contrast code for our suprathreshold stimuli, suggesting that, in general, this is not the way in which element linking is achieved. The possibility that interactions are involved in enhancing the detectability of contour elements at threshold remains unchallenged by our experiments.
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, and knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study.5 This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
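The two rules of thumb in this abstract (the R-squared benchmark and the 5-10 subjects per variable guideline) can be sketched numerically; the data below are simulated with weak, intercorrelated predictors purely to illustrate the computation.

```python
import numpy as np

def r_squared(y, X):
    """Coefficient of determination for an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# hypothetical exploratory data: three weak, intercorrelated predictors
rng = np.random.default_rng(6)
n_vars = 3
n_subjects = 10 * n_vars            # upper end of the 5-10x rule of thumb
mix = np.array([[1.0, 0.6, 0.6],    # mixing matrix -> correlated columns
                [0.0, 0.8, 0.4],
                [0.0, 0.0, 0.7]])
Z = rng.standard_normal((n_subjects, n_vars)) @ mix
y = 0.3 * Z[:, 0] + rng.standard_normal(n_subjects)

X = np.column_stack([np.ones(n_subjects), Z])
r2 = r_squared(y, X)                # typically well under the 50% benchmark
```

With only 30 subjects for 3 predictors, much of whatever R-squared appears is overfitting, which is exactly why the abstract urges treating such analyses as exploratory.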