881 results for Qualitative data analysis software


Abstract:

Objective: To outline the importance of the clarity of data analysis in the doing and reporting of interview-based qualitative research.

Approach: We explore the clear links between data analysis and evidence. We argue that transparency in the data analysis process is integral to determining the evidence that is generated. Data analysis must occur concurrently with data collection and comprises an ongoing process of 'testing the fit' between the data collected and analysis. We discuss four steps in the process of thematic data analysis: immersion, coding, categorising and generation of themes.
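The coding-and-categorising steps described above can be sketched as a simple data structure. This is a minimal illustration, not the authors' procedure; the excerpts, codes and categories below are hypothetical:

```python
from collections import defaultdict

# Hypothetical interview excerpts with codes assigned during immersion/coding.
coded_excerpts = [
    {"excerpt": "I never know who to call first", "code": "unclear referral paths"},
    {"excerpt": "The forms take hours", "code": "administrative burden"},
    {"excerpt": "I ring the ward directly now", "code": "workarounds"},
]

# Categorising: the analyst groups related codes under broader categories.
categories = {
    "system navigation": ["unclear referral paths", "workarounds"],
    "workload": ["administrative burden"],
}

def excerpts_by_category(excerpts, categories):
    """Collect supporting excerpts under each category, keeping an audit trail
    from theme back to raw data -- the transparency the paper argues for."""
    code_to_category = {c: cat for cat, codes in categories.items() for c in codes}
    grouped = defaultdict(list)
    for item in excerpts:
        grouped[code_to_category[item["code"]]].append(item["excerpt"])
    return dict(grouped)

print(excerpts_by_category(coded_excerpts, categories))
```

Because every theme can be traced back to its excerpts, the "testing the fit" step amounts to re-reading each category's excerpts against new data as collection continues.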

Conclusion: Rigorous and systematic analysis of qualitative data is integral to the production of high-quality research. Studies that give an explicit account of the data analysis process provide insights into how conclusions are reached, while studies that anchor themes to data and theory produce the strongest evidence.

Abstract:

Conventional methods of qualitative data analysis require transcription of audio-recorded data prior to coding and analysis. In this paper Alison Hutchinson describes and illustrates an innovative method of data analysis that uses audio-editing software to save selected audio bytes from digital audio recordings of meetings. The use of a database to code and manage the linked audio files and to generate detailed and summary reports, including code frequencies by participant code and/or meeting, is also highlighted. The advantage of this approach is that the analysis may be undertaken in the medium in which the data were collected. Though time-consuming, the process negates the need for expensive and time-intensive transcription of recorded data.
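The frequency-report side of this approach can be sketched with a few records linking audio clips to codes. The field names and records here are hypothetical, not Hutchinson's actual database schema:

```python
from collections import Counter

# Hypothetical records linking saved audio clips to codes, participants, meetings.
clip_codes = [
    {"clip": "m1_0012.wav", "meeting": 1, "participant": "P01", "code": "barriers"},
    {"clip": "m1_0034.wav", "meeting": 1, "participant": "P02", "code": "barriers"},
    {"clip": "m2_0005.wav", "meeting": 2, "participant": "P01", "code": "facilitators"},
]

def code_frequencies(records, by):
    """Summary report: frequency of each code per participant or per meeting."""
    return Counter((r[by], r["code"]) for r in records)

print(code_frequencies(clip_codes, by="meeting"))
print(code_frequencies(clip_codes, by="participant"))
```

The same records drive both reports, so a single coding pass over the audio supports frequency summaries by either grouping variable.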

Abstract:

Developers of interactive software are confronted by an increasing variety of software tools to help engineer the interactive aspects of software applications. Not only do these tools fall into different categories in terms of functionality, but within each category there is a growing number of competing tools with similar, although not identical, features. Choice of user interface development tool (UIDT) is therefore becoming increasingly complex.

Abstract:

Background: The Framework Method is becoming an increasingly popular approach to the management and analysis of qualitative data in health research. However, there is confusion about its potential application and limitations. Discussion: The article discusses when it is appropriate to adopt the Framework Method and explains the procedure for using it in multi-disciplinary health research teams, or those that involve clinicians, patients and lay people. The stages of the method are illustrated using examples from a published study. Summary: Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research. © 2013 Gale et al.; licensee BioMed Central Ltd.
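The defining artefact of the Framework Method is the framework matrix: cases in rows, codes in columns, charted summaries in the cells. A minimal sketch, with hypothetical cases and codes (not drawn from Gale et al.'s example study):

```python
# Framework matrix sketch: rows are cases, columns are codes from the
# analytical framework, cells hold short charted summaries of the data.
framework = ["diagnosis experience", "information needs", "support"]
matrix = {
    "P01": {"diagnosis experience": "shock; delayed referral",
            "information needs": "wanted written material"},
    "P02": {"support": "relied on partner; little formal support"},
}

def chart(case, code, summary):
    """Charting: summarise a case's data into the matrix cell for one code."""
    matrix.setdefault(case, {})[code] = summary

def column(code):
    """Read down one column to compare all cases on a single code."""
    return {case: cells[code] for case, cells in matrix.items() if code in cells}

chart("P02", "information needs", "preferred talking to a nurse")
print(column("information needs"))
```

Reading down a column compares cases on one code, and reading across a row keeps each case's account intact, which is what makes the matrix workable for mixed teams.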

Abstract:

- Background Substance use is common among gay/bisexual men and is associated with significant health risks (e.g. HIV transmission). The consequences of substance use, across the range of substances commonly used, have received little attention. The purpose of this study is to map participants' beliefs about the effects of substance use to inform prevention, health promotion and clinical interventions. - Methods Participants were interviewed about experiences regarding their substance use and recruited through medical and sexual health clinics. Data were collected through a consumer panel and individual interviews. Responses regarding perceived consequences of substance use were coded using Consensual Qualitative Research (CQR) methodology. - Results Most participants reported lifetime use of alcohol, cannabis, stimulants and amyl nitrite, and recent alcohol and cannabis use. A wide range of themes were identified regarding participants' thoughts, emotions and behaviours (including sexual behaviours) secondary to substance use, including: cognitive functioning, mood, social interaction, physical effects, sexual activity, sexual risk-taking, perception of sexual experience, arousal, sensation, relaxation, disinhibition, energy/activity level and numbing. Analyses indicated several consequences were consistent across substance types (e.g. cognitive impairment, enhanced mood), whereas others were highly specific to a given substance (e.g. heightened arousal post amyl nitrite use). - Conclusions Prevention and interventions need to consider the variety of effects of substance use in tailoring effective education programs to reduce harms. A diversity of consequences appears to have direct and indirect impacts on decision-making, sexual activity and risk-taking. Findings lend support for the role of specific beliefs (e.g. expectancies) related to substance use on risk-related cognitions, emotions and behaviours.

Abstract:

Collecting regular personal reflections from first year teachers in rural and remote schools is challenging as they are busily absorbed in their practice, and separated from each other and the researchers by thousands of kilometres. In response, an innovative web-based solution was designed to both collect data and be a responsive support system for early career teachers as they came to terms with their new professional identities within rural and remote school settings. Using an emailed link to a web-based application named goingok.com, the participants are charting their first year plotlines using a sliding scale from ‘distressed’, ‘ok’ to ‘soaring’ and describing their self-assessment in short descriptive posts. These reflections are visible to the participants as a developing online journal, while the collections of de-identified developing plotlines are visible to the research team, alongside numerical data. This paper explores important aspects of the design process, together with the challenges and opportunities encountered in its implementation. A number of the key considerations for choosing to develop a web application for data collection are initially identified, and the resultant application features and scope are then examined. Examples are then provided about how a responsive software development approach can be part of a supportive feedback loop for participants while being an effective data collection process. Opportunities for further development are also suggested with projected implications for future research.
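The data model behind such an application can be sketched in a few lines. The field names, the numeric 0-1 encoding of the slider, and the band cut-offs below are assumptions for illustration, not goingok.com's actual implementation:

```python
# Hypothetical reflection records as the web application might store them:
# a sliding-scale value in [0, 1] ('distressed' .. 'soaring') plus a short post.
reflections = [
    {"participant": "T01", "week": 1, "score": 0.35, "post": "Overwhelmed by planning."},
    {"participant": "T01", "week": 2, "score": 0.55, "post": "Found a routine."},
    {"participant": "T02", "week": 1, "score": 0.80, "post": "Loving the community."},
]

def plotline(records, participant):
    """De-identified numeric plotline for one participant, ordered by week."""
    points = [(r["week"], r["score"]) for r in records if r["participant"] == participant]
    return sorted(points)

def band(score):
    """Map the continuous slider value back to its three labelled bands."""
    return "distressed" if score < 0.4 else "ok" if score < 0.7 else "soaring"

print(plotline(reflections, "T01"))
```

Separating the numeric plotline (for the research team) from the text posts (for the participant's journal view) is what lets the same records serve both the data-collection and support roles described above.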

Abstract:

Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, and to use the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate on two distinct problem types the behaviour of the classic and parallel implementations.
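The classic/parallel distinction the paper evaluates can be illustrated with a toy genetic algorithm in which only the fitness-evaluation step changes. This is a generic sketch (OneMax objective, truncation selection, `multiprocessing.Pool` for parallelism), not the authors' implementation or benchmark problems:

```python
import random
from multiprocessing import Pool

def fitness(bits):
    # Toy objective (OneMax): count the ones. Real objectives are far
    # costlier to evaluate, which is what makes parallelisation attractive.
    return sum(bits)

def evolve(pop_size=20, length=30, generations=40, pool=None, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # The only change between the classic and parallel versions is here:
        scores = pool.map(fitness, pop) if pool else [fitness(ind) for ind in pop]
        ranked = [ind for _, ind in sorted(zip(scores, pop), key=lambda t: -t[0])]
        parents = ranked[: pop_size // 2]          # truncation selection
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]              # one-point crossover
            i = rng.randrange(length)
            child[i] ^= int(rng.random() < 0.05)   # occasional bit-flip mutation
            pop.append(child)
    return max(fitness(ind) for ind in pop)

if __name__ == "__main__":
    print(evolve())              # classic, sequential evaluation
    with Pool(4) as p:
        print(evolve(pool=p))    # same algorithm, fitness calls fan out to workers
```

Because evaluation dominates the runtime of most real GAs, this per-generation `map` is the usual first target for parallelisation; whether it pays off depends on the cost of each fitness call relative to the communication overhead, which is the a priori estimation problem the paper raises.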

Abstract:

Ordinal qualitative data are often collected for phenotypical measurements in plant pathology and other biological sciences. Statistical methods, such as t tests or analysis of variance, are usually used to analyze ordinal data when comparing two groups or multiple groups. However, the underlying assumptions such as normality and homogeneous variances are often violated for qualitative data. To this end, we investigated an alternative methodology, rank regression, for analyzing the ordinal data. The rank-based methods are essentially based on pairwise comparisons and, therefore, can deal with qualitative data naturally. They require neither normality assumption nor data transformation. Apart from robustness against outliers and high efficiency, the rank regression can also incorporate covariate effects in the same way as the ordinary regression. By reanalyzing a data set from a wheat Fusarium crown rot study, we illustrated the use of the rank regression methodology and demonstrated that the rank regression models appear to be more appropriate and sensible for analyzing nonnormal data and data with outliers.
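The core of any rank-based method is replacing the raw ordinal scores with their (mid)ranks, which handles ties and requires no normality assumption. A minimal sketch of that idea with hypothetical severity scores, not the wheat Fusarium data or the authors' exact estimator:

```python
def midranks(values):
    """Assign midranks (average rank for ties), the basis of rank methods."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values starting at position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# Hypothetical ordinal disease-severity scores (0-4) for two treatments.
control = [3, 4, 2, 4, 3, 3]
treated = [1, 2, 1, 3, 2, 1]
ranks = midranks(control + treated)
mean_control = sum(ranks[: len(control)]) / len(control)
mean_treated = sum(ranks[len(control):]) / len(treated)
print(mean_control, mean_treated)  # a large gap in mean ranks suggests an effect
```

Comparing mean ranks like this is essentially a two-sample rank test; the rank regression the paper investigates generalises the same idea to models with covariates.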

Abstract:

This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled cycle of simulation and data analysis; typically, a Geant4 computer experiment is used to understand test beam measurements. Thus another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, full CMS detector description, and event reconstruction. Using the ROOT data analysis framework we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.

Abstract:

This thesis is an investigation into the nature of data analysis and computer software systems which support this activity.

The first chapter develops the notion of data analysis as an experimental science which has two major components: data-gathering and theory-building. The basic role of language in determining the meaningfulness of theory is stressed, and the informativeness of a language and data base pair is studied. The static and dynamic aspects of data analysis are then considered from this conceptual vantage point. The second chapter surveys the available types of computer systems which may be useful for data analysis. Particular attention is paid to the questions raised in the first chapter about the language restrictions imposed by the computer system and its dynamic properties.

The third chapter discusses the REL data analysis system, which was designed to satisfy the needs of the data analyzer in an operational relational data system. The major limitation on the use of such systems is the amount of access to data stored on a relatively slow secondary memory. This problem of the paging of data is investigated and two classes of data structure representations are found, each of which has desirable paging characteristics for certain types of queries. One representation is used by most of the generalized data base management systems in existence today, but the other is clearly preferred in the data analysis environment, as conceptualized in Chapter I.

This data representation has strong implications for a fundamental process of data analysis -- the quantification of variables. Since quantification is one of the few means of summarizing and abstracting, data analysis systems are under strong pressure to facilitate the process. Two implementations of quantification are studied: one analogous to the form of the lower predicate calculus and another more closely attuned to the data representation. A comparison of these indicates that the use of the "label class" method results in orders of magnitude improvement over the lower predicate calculus technique.
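The contrast between the two quantification styles can be illustrated in modern terms. The REL system long predates this notation, so the following is only a loose analogy: a "label class" fixes the ordered set of labels once and recodes each observation by direct lookup, whereas a predicate-calculus-style method tests each observation against a condition per label:

```python
# A toy "label class": the ordered list of labels defines the quantification
# once, and every observation is recoded by a single table lookup.
label_class = ["never", "rarely", "sometimes", "often", "always"]
code_of = {label: i for i, label in enumerate(label_class)}

responses = ["often", "never", "sometimes", "often", "always"]

# Label-class method: one dictionary lookup per observation.
quantified = [code_of[r] for r in responses]
print(quantified)

# Predicate-style method: test each observation against every label condition
# in turn, as a lower-predicate-calculus formulation effectively would.
def quantify_by_predicates(value):
    for i, label in enumerate(label_class):
        if value == label:
            return i
```

The lookup version does constant work per observation regardless of how many labels exist, which gives an intuition (though only that) for why the label-class method scales so much better on large data bases.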

Abstract:

The measured time-history of the cylinder pressure is the principal diagnostic in the analysis of processes within the combustion chamber. This paper defines, implements and tests a pressure analysis algorithm for a Formula One racing engine in MATLAB. Evaluation of the software on real data is presented. The sensitivity of the model to the variability of burn parameter estimates is also discussed. Copyright © 1997 Society of Automotive Engineers, Inc.
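One standard burn-parameter calculation from measured cylinder pressure is the Rassweiler-Withrow mass-fraction-burned estimate; the paper's specific algorithm is not given here, so this Python sketch shows only the general technique, with made-up sample data:

```python
def mass_fraction_burned(pressure, volume, n=1.3):
    """Rassweiler-Withrow estimate of mass fraction burned from p-V samples.

    Each measured pressure step is split into a polytropic part
    (compression/expansion, exponent n) and a combustion part; the running
    sum of the combustion part, normalised to its total, approximates the
    mass-fraction-burned curve.
    """
    dp_comb = []
    for i in range(len(pressure) - 1):
        p_motored = pressure[i] * (volume[i] / volume[i + 1]) ** n
        dp_comb.append(pressure[i + 1] - p_motored)
    total = sum(dp_comb)
    mfb, running = [], 0.0
    for dp in dp_comb:
        running += dp
        mfb.append(running / total)
    return mfb

p = [10.0, 20.0, 35.0, 45.0, 50.0]  # hypothetical sampled pressures
v = [1.0] * 5                        # constant volume for this toy check
print(mass_fraction_burned(p, v))    # rises monotonically to 1.0
```

Burn parameters such as the 10%, 50% and 90% burn points are then read off this curve; their sensitivity to choices like the polytropic exponent is the kind of variability the paper discusses.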

Abstract:

Lee M.H., Bell J. and Coghill G.M., Ambiguities and Deviations in Qualitative Circuit Analysis, in Proc. QR'2001, 15th Int. Workshop on Qualitative Reasoning, San Antonio, Texas, May 2001, pp. 51-58.