951 results for Statistical hypothesis testing
Abstract:
Although considerable effort has been invested in measuring banking efficiency using Data Envelopment Analysis, hardly any empirical research has focused on comparing banks in the Gulf States. This paper employs data on the Gulf States banking sector for the period 2000-2002 to develop efficiency scores and rankings for both Islamic and conventional banks. We then investigate productivity change using the Malmquist Index and decompose it into technical change and efficiency change. Further, we apply hypothesis testing and statistical precision in the context of nonparametric efficiency and productivity measurement. Specifically, cross-country differences in efficiency, and comparisons of efficiency between Islamic and conventional banks, are investigated using the Mann-Whitney test.
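As a minimal sketch of the kind of Mann-Whitney comparison described above (the efficiency scores and group sizes are invented placeholders, not the paper's data):

from scipy.stats import mannwhitneyu

# Hypothetical DEA efficiency scores for the two groups of banks
islamic_scores = [0.91, 0.85, 0.78, 0.88, 0.95, 0.82]
conventional_scores = [0.72, 0.80, 0.69, 0.84, 0.76, 0.71]

# Two-sided test: do the two samples come from the same distribution?
stat, p_value = mannwhitneyu(islamic_scores, conventional_scores,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")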
Abstract:
PURPOSE: The Bonferroni correction adjusts probability (p) values to account for the increased risk of a type I error when making multiple statistical tests. The routine use of this correction has been criticised as deleterious to sound statistical judgment, as testing the wrong hypothesis, and as reducing the chance of a type I error only at the expense of an increased risk of a type II error; yet it remains popular in ophthalmic research. The purpose of this article was to survey the use of the Bonferroni correction in research articles published in three optometric journals, viz. Ophthalmic & Physiological Optics, Optometry & Vision Science, and Clinical & Experimental Optometry, and to provide advice to authors contemplating multiple testing. RECENT FINDINGS: Some authors ignored the problem of multiple testing, while others used the method uncritically, with no rationale or discussion. A variety of methods of correcting p values were employed, the Bonferroni method being the single most popular. Bonferroni was used in a variety of circumstances, most commonly to correct the experiment-wise error rate when using multiple t-tests, or as a post-hoc procedure to correct the family-wise error rate following analysis of variance (ANOVA). Some studies quoted adjusted p values incorrectly or gave an erroneous rationale. SUMMARY: Whether or not to use the Bonferroni correction depends on the circumstances of the study. It should not be used routinely, and should be considered only if: (1) a single test of the 'universal null hypothesis' (H0) that all tests are not significant is required, (2) it is imperative to avoid a type I error, and (3) a large number of tests are carried out without preplanned hypotheses.
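A hedged illustration of the adjustment being surveyed: under Bonferroni, each raw p value is multiplied by the number of tests m (capped at 1), or equivalently the significance threshold is divided by m. The p values below are made up.

from statsmodels.stats.multitest import multipletests

raw_p = [0.01, 0.04, 0.03, 0.20]  # hypothetical p values from m = 4 tests
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")

print(adj_p)   # [0.04 0.16 0.12 0.80] -- each raw p multiplied by m = 4
print(reject)  # only the first test survives the corrected threshold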
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to the electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize either a purely statistical or a purely visual approach to comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
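As a loose sketch of the general pattern behind high-volume hypothesis testing as described above (one test per candidate metric across two cohorts, then a multiple-comparisons correction before flagging differences); the metrics, data, and choice of correction are illustrative assumptions, not the dissertation's implementation:

import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical per-record metrics (e.g., event counts, durations) for two cohorts
metrics = {f"metric_{i}": (rng.normal(0.0, 1.0, 200), rng.normal(0.1 * i, 1.0, 200))
           for i in range(20)}

p_values = [mannwhitneyu(a, b, alternative="two-sided").pvalue
            for a, b in metrics.values()]
# A multiple-comparisons correction (Benjamini-Hochberg FDR here) keeps the
# exploratory screen from drowning in false positives
reject, _, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for name, flag in zip(metrics, reject):
    if flag:
        print(f"{name}: cohorts differ significantly on this metric")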
Abstract:
Doctorate in Economics
Abstract:
Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper reviewing the measles, mumps and rubella (MMR) vaccine and autism controversy in the UK to illustrate a number of threshold concepts underlying good study design and the interpretation of scientific evidence. These include the necessity of sufficient sample size, representative and random sampling, appropriate controls, and the inference of causation.
Abstract:
Ochnaceae s.str. (Malpighiales) are a pantropical family of about 500 species and 27 genera of almost exclusively woody plants. Infrafamilial classification and relationships have been controversial, partially due to the lack of a robust phylogenetic framework. Including all genera except Indosinia and Perissocarpa, and using DNA sequence data for five DNA regions (ITS, matK, ndhF, rbcL, trnL-F), we provide for the first time a nearly complete molecular phylogenetic analysis of Ochnaceae s.l., resolving most of the phylogenetic backbone of the family. Based on this, we present a new classification of Ochnaceae s.l., with Medusagynoideae and Quiinoideae included as subfamilies and the former subfamilies Ochnoideae and Sauvagesioideae recognized at the rank of tribe. Our data support a monophyletic Ochneae, but Sauvagesieae in the traditional circumscription is paraphyletic because Testulea emerges as sister to the rest of Ochnoideae, and the next clade shows Luxemburgia+Philacra as sister group to the remaining Ochnoideae. To avoid paraphyly, we classify Luxemburgieae and Testuleeae as new tribes. The African genus Lophira, which has switched between subfamilies (here tribes) in past classifications, emerges as sister to all other Ochneae. Thus, endosperm-free seeds and ovules with partly to completely united integuments (resulting in an apparently single integument) are characters that unite all members of that tribe. The relationships within its largest clade, Ochnineae (former Ochneae), are poorly resolved, but the former Ochninae (Brackenridgea, Ochna) are polyphyletic. Within Sauvagesieae, the genus Sauvagesia in its broad circumscription is polyphyletic, as Sauvagesia serrata is sister to a clade of Adenarake, Sauvagesia spp., and three other genera. Within Quiinoideae, in contrast to former phylogenetic hypotheses, Lacunaria and Touroulia form a clade that is sister to Quiina. Bayesian ancestral state reconstructions showed that zygomorphic flowers with adaptations to buzz-pollination (poricidal anthers), a syncarpous gynoecium (a near-apocarpous gynoecium evolved independently in Quiinoideae and Ochninae), numerous ovules, septicidal capsules, and winged seeds with endosperm are the ancestral condition in Ochnoideae. Although poricidal anthers were lost secondarily in some lineages, the evolution of poricidal superstructures preserved buzz-pollination in some of these genera, indicating strong selective pressure to maintain that specialized pollination system.
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become a focus of interest in many neuroscience studies. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method is to employ multiple eigen-time series in each ROI to avoid the temporal information loss inherent in averaging (e.g., to yield a single "representative" time series per ROI) during identification of Granger causality. Such information loss may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. Using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first-principal-component estimation from ROIs). The usefulness of the CGA approach on real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI.
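For orientation, a hedged sketch of the conventional baseline the paper compares against, namely a pairwise Granger causality test between two single "representative" ROI time series (synthetic data here); the paper's CGA method itself, combining PCA with partial canonical correlation across multiple eigen-time series, is more involved than this:

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)   # stand-in for ROI 1's representative time series
y = np.zeros(n)          # stand-in for ROI 2's representative time series
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + rng.normal()   # y depends on lagged x by construction

# statsmodels tests whether the second column Granger-causes the first
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)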
Abstract:
Two experiments tested predictions from a theory in which processing load depends on relational complexity (RC), the number of variables related in a single decision. Tasks from six domains (transitivity, hierarchical classification, class inclusion, cardinality, relative-clause sentence comprehension, and hypothesis testing) were administered to children aged 3-8 years. Complexity analyses indicated that the domains entailed ternary relations (three variables). Simpler binary-relation (two-variable) items were included for each domain. Thus RC was manipulated with other factors tightly controlled. Results indicated that (i) ternary-relation items were more difficult than comparable binary-relation items, (ii) the RC manipulation was sensitive to age-related changes, (iii) ternary relations were processed at a median age of 5 years, (iv) cross-task correlations were positive, with all tasks loading on a single factor (RC), (v) RC factor scores accounted for 80% of age-related variance in fluid intelligence (88% in compositionality of sets), (vi) binary- and ternary-relation items formed separate complexity classes, and (vii) the RC approach to defining cognitive complexity is applicable to different content domains.
Abstract:
The aim of this study is to examine the implications of the IPPA for the perception of illness and wellbeing in MS patients. Methods: This was a quasi-experimental, non-randomized study with 24 MS patients, diagnosed at least one year previously and with an EDSS score under 7. We applied the IPPA in three groups of eight people in three Portuguese hospitals (Lisbon, Coimbra, and Porto). The sessions were held once a week for 90 minutes, over a period of 7 weeks. The instruments used were the question "Please classify the severity of your disease" and the Personal Wellbeing Scale (PWS), administered at the beginning (time A) and end (time B) of the IPPA. We used SPSS version 20, and a non-parametric statistical hypothesis test (the Wilcoxon test) for the variable analysis. The intervention followed the recommendations of the Helsinki Declaration. Results: The results suggest that there are differences between times A and B: the perception of illness severity decreased (p<0.08), while wellbeing increased (p<0.01). Conclusions: The IPPA can play an important role in modifying the perception of disease severity and personal wellbeing.
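A minimal sketch of the paired, non-parametric comparison reported above; the study used SPSS, so the scipy call and the PWS scores below are illustrative stand-ins:

from scipy.stats import wilcoxon

wellbeing_A = [55, 60, 48, 62, 58, 50, 65, 53]   # hypothetical PWS scores at time A
wellbeing_B = [61, 66, 50, 70, 63, 57, 72, 60]   # hypothetical PWS scores at time B

# Wilcoxon signed-rank test for paired pre/post measurements
stat, p_value = wilcoxon(wellbeing_A, wellbeing_B)
print(f"W = {stat}, p = {p_value:.3f}")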
Abstract:
In a hypercompetitive world, the affirmation of virtuousness has met considerable resistance, even being regarded as synonymous with weakness or naivety. However, given the evidence of the potential dangers of leadership devoid of values, ethics, and morality, voices have been raised in defense of virtuous leadership, capable of making significantly positive contributions to organizations and their employees. Starting from this premise, this research aimed to analyze, based on followers' perceptions, the impact of virtuous leadership on organizational commitment, as well as the contribution of the latter to individual performance. Using a quantitative methodology, we first surveyed 113 followers from organizations located in Portugal to determine which virtues they most valued in a leader. Data for hypothesis testing were then collected by administering a battery of tests to 351 followers, also working in organizations operating in Portugal. The results suggest that followers' perceptions of three dimensions of leadership virtuousness (values-based leadership, perseverance, and maturity) contribute to organizational commitment, particularly in its affective and normative components, and that the latter, in turn, can positively influence individual performance.
Abstract:
Probability and Statistics—Selected Problems is a unique book for senior undergraduate and graduate students to quickly review basic material in probability and statistics. Descriptive statistics are presented first, followed by a review of probability. Discrete and continuous distributions are then presented. Sampling and estimation, together with hypothesis testing, are presented in the last two chapters. Solutions to the proposed exercises are listed for the reader's reference.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
This paper considers trade secrecy as an appropriation mechanism in the context of the US Economic Espionage Act (EEA) 1996. We examine the relation between trade secret intensity and firm size, using a cross-section of 95 court cases. The paper builds on extant work in three respects. First, we create a unique body of evidence, using EEA prosecutions from 1996 to 2008. Second, we use an econometric approach to measurement, estimation, and hypothesis testing, which allows us to test the robustness of findings comprehensively. Third, we focus on objectively measured valuations, instead of the subjective, self-reported values used elsewhere. We find a stable, robust value for the elasticity of trade secret intensity with respect to firm size, which indicates that a 10% reduction in firm size leads to a 7% increase in trade secret intensity. We find that this result is not sensitive to industrial sector, sample trimming, or functional form.
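As a worked reading of the reported elasticity (assuming the standard log-log specification such estimates are typically drawn from; the notation is ours, not the paper's):

\ln(\mathrm{TSI}_i) = \alpha + \varepsilon \ln(\mathrm{SIZE}_i) + u_i, \qquad \hat{\varepsilon} \approx -0.7,

so that a 10% fall in firm size implies \%\Delta\mathrm{TSI} \approx \hat{\varepsilon} \times \%\Delta\mathrm{SIZE} = (-0.7)\times(-10\%) = +7\%.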
Abstract:
In this study we elicit agents' prior information sets regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay (WTP) for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects, relative to their pre-existing information levels, can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
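To make the competing updating rules concrete, here is a hedged sketch using a common linear-updating parameterization (an illustrative modelling device, not necessarily the paper's specification). The posterior belief mixes the prior belief and the information signal s:

\mu_{\mathrm{post}} = (1-\gamma)\,\mu_{\mathrm{prior}} + \gamma\,s, \qquad 0 \le \gamma \le 1,

where \gamma = 0 corresponds to non-informative updating; \gamma equal to the Bayesian weight (for a normal prior and signal, \gamma^{B} = \sigma^2_{\mathrm{prior}}/(\sigma^2_{\mathrm{prior}} + \sigma^2_{s})) corresponds to Bayesian updating; and 0 < \gamma < \gamma^{B} corresponds to incomplete updating.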