932 results for Qualitative data analysis software


Relevance: 100.00%

Abstract:

This is a brief report of a research project, coordinated by me and funded by the Portuguese Government, entitled 'The Representation of the Feminine in the Portuguese Press' (POCI/COM 55780/2004). The project works on the content analysis of discourse on the feminine in various Portuguese newspapers, covering the time span from February 1st to April 30th, 2006. The paper is divided into two parts: in the first part, I briefly discuss the typology used to code the text units of the selected articles; in the second part, I explore the most salient percentages from the first two weeks of February in the content analysis of the Diário de Notícias newspaper. These percentages were obtained with the NVivo 6 qualitative data analysis software programme.

Relevance: 100.00%

Abstract:

The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. Communication between clients and servers uses remote procedure calls based on XML (the XML-RPC protocol). The integration of the Java language, XML and XML-RPC makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides a simple graphical user interface (GUI). The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks". (C) 2010 Elsevier B.V. All rights reserved.
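
The abstract names the stack (Java, XML, XML-RPC) but contains no code. As a minimal sketch of the access pattern it describes, the following uses Python's standard xmlrpc.client for brevity; the endpoint URL and the getSignal method are invented placeholders, not the actual TCABR/IPFN API:

```python
import xmlrpc.client

# Hypothetical endpoint; the real TCABR server address is not given in the abstract.
SERVER_URL = "http://tcabr.example.org:8080/RPC2"

def fetch_signal(shot: int, signal: str):
    """Retrieve one diagnostic signal for a given shot via XML-RPC."""
    proxy = xmlrpc.client.ServerProxy(SERVER_URL)
    # XML-RPC marshals the call and its arguments as XML over HTTP, so the
    # same client code works against any conforming server, independent of
    # the server's operating system or architecture.
    return proxy.getSignal(shot, signal)  # hypothetical method name

if __name__ == "__main__":
    data = fetch_signal(12345, "plasma_current")
    print(len(data), "samples received")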

Relevance: 100.00%

Abstract:

As GIS (Geographic Information Systems) becomes increasingly common and user-friendly, WM-data has seen that customers would be interested in linking information from their operations to a map image, making it easier to grasp how that information is geographically distributed over an area, for example in order to arrange more efficient transports. WM-data, for whom this work was carried out, intends to produce a prototype that can then be demonstrated to customers and other stakeholders to show that this is achievable by integrating already existing systems. In this work, the prototype was developed with the forestry industry and its inventories as the focus. The existing programs between which the integration is to be created are both web-based and run in a web browser. The analysis program to be used is called Insikt and is developed by the company Trimma; the map program is called GIMS and is WM-data's own product. It should be possible to analyse data in Insikt and create a report, which is then sent to GIMS, where the information is drawn on the map at the location to which each piece of information belongs. It should also be possible to select one or more areas on the map and send them to Insikt in order to analyse information from only the selected areas. A prototype with the desired functionality was produced during the course of the work, but some work remains before there is a saleable product. The prototype has been shown to a number of interested parties, who found it interesting and believe it is something that could be used widely in many areas.

Relevance: 100.00%

Abstract:

BACKGROUND: High intercoder reliability (ICR) is required in qualitative content analysis to assure quality when more than one coder is involved in data analysis. The literature is short of standardized procedures for ICR in qualitative content analysis. OBJECTIVE: To illustrate how ICR assessment can be used to improve codings in qualitative content analysis. METHODS: Key steps of the procedure are presented, drawing on data from a qualitative study on patients' perspectives on low back pain. RESULTS: First, a coding scheme was developed using a comprehensive inductive and deductive approach. Second, 10 transcripts were coded independently by two researchers, and ICR was calculated. The resulting kappa value of .67 can be regarded as satisfactory to solid. Moreover, varying agreement rates helped to identify problems in the coding scheme. Low agreement rates, for instance, indicated that the respective codes were defined too broadly and needed clarification. In a third step, the results of the analysis were used to improve the coding scheme, leading to consistent and high-quality results. DISCUSSION: The quantitative approach of ICR assessment is a viable instrument for quality assurance in qualitative content analysis. Kappa values and close inspection of agreement rates help to estimate and increase the quality of codings. This approach facilitates good practice in coding and enhances the credibility of the analysis, especially when large samples are interviewed, different coders are involved, and quantitative results are presented.
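
As a worked illustration of the ICR statistic reported here (for two coders this is typically Cohen's kappa), kappa compares observed agreement against the agreement expected by chance. A minimal sketch, assuming the two coders' code assignments for the same segments are available as parallel lists; the code labels and values below are invented:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels of the same text segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of segments coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap of each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented codes for ten segments from two independent coders:
a = ["pain", "coping", "pain", "work", "coping", "pain", "work", "pain", "coping", "work"]
b = ["pain", "coping", "work", "work", "coping", "pain", "work", "pain", "pain", "work"]
print(round(cohens_kappa(a, b), 2))  # agreement corrected for chance, ~0.70
```

Per-code agreement rates, computed the same way on the subset of segments carrying each code, are what flag overly broad code definitions.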

Relevance: 100.00%

Abstract:

Ahead of the World Cup in Brazil, the crucial question for the Swiss national coach is the nomination of the centre-back pair for the starting eleven. A fuzzy-set Qualitative Comparative Analysis (fsQCA) assesses the defensive performances of different Swiss centre-back pairs during the World Cup campaign (2011-2014). This analysis advises Ottmar Hitzfeld to nominate Steve von Bergen and Johan Djourou as the starting centre-back pair. The alternative, with substantially weaker empirical validity, would be Johan Djourou together with Philippe Senderos. Furthermore, this paper aims to be a step forward in mainstream football analytics: it analyses the undervalued and understudied defense (Anderson and Sally 2012, Statsbomb 2013) by explaining collective defensive performances instead of assessing individual player or team performances. However, a qualitatively (better defensive metrics) and quantitatively (more games) improved and extended data set would allow for a more sophisticated analysis of collective defensive performances.
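
One core fsQCA quantity behind such an assessment is the consistency of a configuration with the outcome: sum(min(X, Y)) / sum(X) across cases. A minimal sketch with invented fuzzy membership scores; these are illustrative values, not the paper's data:

```python
def consistency(condition, outcome):
    """fsQCA consistency: degree to which condition membership implies outcome membership."""
    assert len(condition) == len(outcome)
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Invented fuzzy membership scores per match: membership in the configuration
# "pairing X started" and in the outcome "solid defensive performance".
pairing_played = [0.9, 0.8, 0.2, 0.7, 0.1]
solid_defense  = [0.8, 0.9, 0.3, 0.6, 0.4]
print(round(consistency(pairing_played, solid_defense), 2))  # ~0.93
```

A configuration whose consistency exceeds a chosen threshold (often around 0.8) is read as sufficient for the outcome, which is the logic behind ranking one pairing's empirical validity above another's.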

Relevance: 100.00%

Abstract:

Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions in the entire genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited to detecting the association of common variants, but are less suitable for rare variants. This poses a great challenge for sequence-based genetic studies of complex diseases. This dissertation used the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus analysis to collectively analysing genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data reduction techniques were used to develop novel and powerful methods for testing the associations of the entire spectrum of genetic variation within a segment of genome or a gene, regardless of whether the variants are common or rare. Classical quantitative genetics suffers from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their applications, the functional linear models were applied to five quantitative traits in the Framingham Heart Study. The project also proposed a novel concept of gene-gene co-association, in which a gene or a genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, which led to the discovery of networks significantly associated with psoriasis.
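
The FPC idea treats a subject's genotype profile across a genomic region as a discretized function of position and extracts the dominant modes of variation. A rough sketch via a plain SVD on a centered genotype matrix, with invented toy data; the dissertation's actual estimators (basis expansions, smoothing) are more elaborate:

```python
import numpy as np

def functional_pcs(genotypes, n_components=2):
    """Leading functional principal component scores of a variant matrix.

    genotypes: (n_subjects, n_positions) array of 0/1/2 allele counts,
    viewed as discretized functions of genomic position.
    """
    centered = genotypes - genotypes.mean(axis=0)
    # SVD of the centered matrix: the right singular vectors approximate the
    # eigenfunctions, and U * S gives the per-subject FPC scores, collapsing
    # common and rare variants in the region into a few coordinates.
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

rng = np.random.default_rng(0)
toy = rng.integers(0, 3, size=(20, 50)).astype(float)  # invented genotypes
scores = functional_pcs(toy)
print(scores.shape)  # (20, 2): two FPC scores per subject, usable in a test
```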

Relevance: 100.00%

Abstract:

This article presents software for determining the statistical behavior of qualitative survey data previously transformed to quantitative data with a Likert scale. The main intention is to offer users a useful tool for learning the statistical characteristics and forecasts of financial risks in a fast and simple way. Additionally, this paper presents a definition of operational risk. The article also explains different techniques for conducting surveys with a Likert scale (Avila, 2008) to capture expert opinion through the transformation of qualitative data to quantitative data. In addition, this paper shows that while it is easy to characterize a single expert's opinion on risk, it becomes very difficult to obtain results when there are many surveys and matrices, because common data must be compared; a representative statistical value must then be extracted from the common data to obtain the weight of each risk. Finally, this article describes the development of the "Qualitative Operational Risk Software" (QORS), which has been designed to determine the root of risks in organizations and their value at operational risk, OpVaR (Jorion, 2008; Chernobai et al., 2008), when the input data come from expert opinion and its associated matrices.
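
The transformation the article relies on can be sketched simply: map Likert responses to numeric scores, scale them into a loss proxy, and read a value-at-risk off the empirical distribution. Everything below (the scale mapping, the ratings, the monetary scaling) is invented for illustration and is not the QORS implementation:

```python
import numpy as np

# Invented five-point Likert mapping from qualitative answers to scores.
LIKERT = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def to_scores(responses):
    """Map qualitative Likert answers to numeric scores."""
    return np.array([LIKERT[r] for r in responses], dtype=float)

def empirical_var(losses, confidence=0.95):
    """Value at risk as the empirical quantile of a loss distribution."""
    return np.quantile(losses, confidence)

# Invented expert ratings of one operational risk, scaled to a loss proxy.
ratings = ["medium", "high", "very high", "low", "high", "medium", "high"]
losses = to_scores(ratings) * 10_000  # hypothetical monetary scaling
print(f"OpVaR(95%) = {empirical_var(losses):,.0f}")
```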

Relevance: 100.00%

Abstract:

In the last several years there has been an increase in the amount of qualitative research using in-depth interviews and comprehensive content analyses in sport psychology. However, no explicit method has been provided for dealing with the large amount of unstructured data. This article provides common guidelines for organizing and interpreting unstructured data. Two main operations are suggested and discussed: first, coding meaningful text segments, or creating tags; and second, regrouping similar text segments, or creating categories. Furthermore, software programs for the microcomputer are presented as a way to facilitate the organization and interpretation of qualitative data.
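
The two operations described, tagging meaningful segments and regrouping tags into categories, map naturally onto a small data structure. A minimal sketch; the segments, tags, and category scheme below are invented examples:

```python
from collections import defaultdict

# Operation 1: code meaningful text segments by attaching tags.
tagged_segments = [
    ("I felt my heart racing before the start", "pre-race anxiety"),
    ("my coach told me to focus on my breathing", "coping strategy"),
    ("I kept repeating 'stay calm' to myself", "self-talk"),
]

# Operation 2: regroup similar tags under higher-level categories.
category_of = {
    "pre-race anxiety": "stress responses",
    "coping strategy": "stress management",
    "self-talk": "stress management",
}

categories = defaultdict(list)
for segment, tag in tagged_segments:
    categories[category_of[tag]].append((tag, segment))

for category, items in categories.items():
    print(category, "->", [tag for tag, _ in items])
```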

Relevance: 100.00%

Abstract:

By switching the level of analysis and aggregating data from the micro-level of individual cases to the macro-level, quantitative data can be analysed within a more case-based approach. This paper presents such an approach in two steps. In a first step, it discusses the combination of Social Network Analysis (SNA) and Qualitative Comparative Analysis (QCA) in a sequential mixed-methods research design. In such a design, quantitative social network data on individual cases and their relations at the micro-level are used to describe the structure of the network that these cases constitute at the macro-level. Different network structures can then be compared by QCA. This strategy adds an element of potential causal explanation to SNA, while SNA indicators allow for a systematic description of the cases to be compared by QCA. Because mixing methods can be a promising but also a risky endeavour, the methodological part also discusses the possibility that the underlying assumptions of the two methods could clash. In a second step, the research design presented beforehand is applied to an empirical study of policy network structures in Swiss politics. Through a comparison of 11 policy networks, causal paths that lead to a conflictual or consensual policy network structure are identified and discussed. The analysis reveals that different theoretical factors matter and that multiple conjunctural causation is at work. Based on both the methodological discussion and the empirical application, a combination of SNA and QCA appears to be a helpful methodological design for social science research and a way of using quantitative data within a more case-based approach.
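
The sequential design can be illustrated in a few lines: compute an SNA indicator for each case network, then calibrate it into a set membership for QCA. The networks, anchors, and linear calibration below are invented placeholders, not the paper's data or calibration procedure:

```python
import networkx as nx

def conflict_condition(edges, lo=0.2, hi=0.6):
    """SNA step: network density; QCA step: calibrate into a fuzzy membership."""
    g = nx.Graph(edges)
    density = nx.density(g)  # macro-level indicator of the network's structure
    # Simple linear calibration between invented anchors lo and hi.
    return min(1.0, max(0.0, (density - lo) / (hi - lo)))

# Two invented policy networks, given as pairs of actors that interact.
network_a = [(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)]
network_b = [(1, 2), (3, 4)]
print(conflict_condition(network_a), conflict_condition(network_b))
```

Each network thus becomes one row in the QCA truth table, with its calibrated SNA indicators as conditions.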

Relevance: 100.00%

Abstract:

In this second article, statistical ideas are extended to the problem of testing whether there is a true difference between two samples of measurements. First, it will be shown that the difference between the means of two samples comes from a population of such differences which is normally distributed. Second, the 't' distribution, one of the most important in statistics, will be applied to a test of the difference between two means using a simple data set drawn from a clinical experiment in optometry. Third, in making a t-test, a statistical judgement is made as to whether there is a significant difference between the means of the two samples. Before the widespread use of statistical software, this judgement was made with reference to a statistical table; even if such tables are no longer used, it is useful to understand their logical structure and how to use them. Finally, the analysis of data that are known to depart significantly from the normal distribution will be described.
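
A worked sketch of the test described, with invented measurements standing in for the optometry data; the p-value computed by software replaces the table lookup discussed above:

```python
from scipy import stats

# Invented measurements for two independent samples.
group_1 = [5.2, 4.8, 5.5, 5.1, 4.9, 5.3, 5.0]
group_2 = [4.6, 4.4, 4.9, 4.5, 4.7, 4.3, 4.8]

# Two-sample t-test: is the difference between the two means significant?
t_stat, p_value = stats.ttest_ind(group_1, group_2)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# The statistical judgement: reject the null hypothesis of equal means
# when p falls below the chosen significance level (e.g. 0.05).
```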

Relevance: 100.00%

Abstract:

Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, together with knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
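
A minimal sketch of the exploratory use described, with invented data; both the R-squared threshold and the variable-to-subject ratio flagged above are easy to check in code:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, k = 60, 3                             # 60 subjects, 3 predictors: n/k = 20, above the 5-10 guide
X = rng.normal(size=(n, k))              # invented predictor values
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only the first predictor truly matters

model = sm.OLS(y, sm.add_constant(X)).fit()
print(f"R^2 = {model.rsquared:.2f}")      # treat the analysis as suspect if below 0.5
print(model.pvalues.round(3))             # which predictors merit more detailed study
```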

Relevance: 100.00%

Abstract:

This article provides a unique contribution to the debates about archived qualitative data by drawing on two uses of the same data - British Migrants in Spain: the Extent and Nature of Social Integration, 2003-2005 - by Jones (2009) and Oliver and O'Reilly (2010), both of which utilise Bourdieu's concepts analytically and produce broadly similar findings. We argue that whilst the insights and experiences of those researchers directly involved in data collection are important resources for developing contextual knowledge used in data analysis, other kinds of critical distance can also facilitate credible data use. We therefore challenge the assumption that the idiosyncratic relationship between context, reflexivity and interpretation limits the future use of data. Moreover, regardless of the complex genealogy of the data itself, given the number of contingencies shaping the qualitative research process and thus the potential for partial or inaccurate interpretation, contextual familiarity need not be privileged over other aspects of qualitative praxis such as sustained theoretical insight, sociological imagination and methodological rigour. © Sociological Research Online, 1996-2012.

Relevance: 100.00%

Abstract:

Using fuzzy-set qualitative comparative analysis (fsQCA), this study investigates the conditions leading to a higher level of innovation. More specifically, the study explores the impact of inter-organisational knowledge transfer networks and organisations' internal capabilities on different types of innovation in small to medium-sized enterprises (SMEs) in the high-tech sector. A survey instrument was used to collect data from a sample of UK SMEs. The findings show that although individual factors are important, there is no need for a company to perform well in all areas. The fsQCA, which enables the examination of the impacts of different combinations of factors, reveals that there are a number of paths to achieving better incremental and radical innovation performance. Companies need to choose the one that is closest to their abilities and fits best with their resources.

Relevance: 100.00%

Abstract:

With the latest developments in computer science, multivariate data analysis methods have become increasingly popular among economists. Pattern recognition in complex economic data and empirical model construction can be more straightforward with the proper application of modern software. However, despite the appealing simplicity of some popular software packages, the interpretation of data analysis results requires strong theoretical knowledge. This book aims at combining the development of both theoretical and application-related data analysis knowledge. The text is designed for advanced-level studies and assumes acquaintance with elementary statistical terms. After a brief introduction to selected mathematical concepts, the highlighting of selected model features is followed by a practice-oriented introduction to the interpretation of SPSS outputs for the described data analysis methods. Learning data analysis is usually time-consuming and requires effort, but with tenacity the learning process can bring about a significant improvement of individual data analysis skills.

Relevance: 100.00%

Abstract:

A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limits. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. The research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, and only the informational content is compressed; the compressed data thus remain transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free, bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated on a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they show substantial improvement in performance and significant reduction in system resource requirements.
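
As background for the Approximated Huffman Compression scheme mentioned above, the classic Huffman construction builds a prefix code from symbol frequencies by repeatedly merging the two least frequent subtrees. A minimal sketch of that baseline; the thesis's hybrid, search-friendly variant adds structure beyond this:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Map each symbol to its Huffman codeword, built from symbol frequencies."""
    # Heap entries: [frequency, unique tie-breaker, {symbol: codeword-so-far}].
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # Prepend one bit on each side as the two least frequent subtrees merge.
        for sym in lo[2]:
            lo[2][sym] = "0" + lo[2][sym]
        for sym in hi[2]:
            hi[2][sym] = "1" + hi[2][sym]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
    return heap[0][2]

sample = "an example of huffman coding"
codes = huffman_codes(sample)
compressed = "".join(codes[c] for c in sample)
print(len(compressed), "bits for", len(sample), "symbols")
```

Frequent symbols receive short codewords, which is what makes the resulting bitstream compact; supporting linear-time pattern search over such a stream is the harder problem the thesis addresses.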