906 results for Content analysis (Communication) -- Data processing


Relevance: 100.00%

Abstract:

Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables defining the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
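
As a rough illustration of the kind of processing described above (not the authors' R function), the following Python sketch log2-transforms and quantile-normalises a genes-by-arrays intensity matrix, estimates inter-array SD from replicate arrays, and applies multidimensional scaling to the samples; the data and parameter values are placeholders.

    # Minimal sketch (not the published R pipeline): normalise a genes x arrays
    # intensity matrix and visualise the sample structure with MDS.
    import numpy as np
    from sklearn.manifold import MDS

    def quantile_normalise(x):
        """Force every array (column) to share the same intensity distribution."""
        ranks = np.argsort(np.argsort(x, axis=0), axis=0)    # per-column ranks
        reference = np.sort(x, axis=0).mean(axis=1)          # mean sorted distribution
        return reference[ranks]

    intensities = np.random.lognormal(mean=8, sigma=1, size=(2000, 12))  # placeholder data
    norm_expr = quantile_normalise(np.log2(intensities + 1))

    # Inter-array variability: SD of each gene across the 12 (placeholder) arrays.
    inter_array_sd = norm_expr.std(axis=1, ddof=1)
    print("median inter-array SD (log2 units):", np.median(inter_array_sd))

    # Multidimensional scaling on 1 - Pearson correlation between samples.
    dist = 1 - np.corrcoef(norm_expr.T)
    coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
    print(coords[:3])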

Relevance: 100.00%

Abstract:

There is a lack of research on the everyday lives of older people in developing countries. This exploratory study used structured observation and content analysis to examine the presence of older people in public fora, and considered the methods' potential for understanding older people's social integration and inclusion. Structured observation was carried out in public social spaces in six cities, each located in a different developing country, and in one city in the United Kingdom, together with content analysis of the presence of people in newspaper pictures and on television in the selected countries. Results indicated that across all fieldwork sites and data sources there was a low presence of older people, with women considerably less present than men in developing countries. Older people's presence varied across fieldwork sites by place and time of day, and in their accompanied status. The presence of older people in images drawn from newspapers was associated with the news/non-news nature of the source. The utility of the study's methodological approach is considered, as is the degree to which the presence of older people in public fora might relate to social integration and inclusion in different cultural contexts.

Relevance: 100.00%

Abstract:

The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data. © 2010 IOP Publishing Ltd and SISSA.

Relevance: 100.00%

Abstract:

BACKGROUND: High intercoder reliability (ICR) is required in qualitative content analysis for assuring quality when more than one coder is involved in data analysis. The literature offers few standardized procedures for assessing ICR in qualitative content analysis. OBJECTIVE: To illustrate how ICR assessment can be used to improve coding in qualitative content analysis. METHODS: Key steps of the procedure are presented, drawing on data from a qualitative study on patients' perspectives on low back pain. RESULTS: First, a coding scheme was developed using a comprehensive inductive and deductive approach. Second, 10 transcripts were coded independently by two researchers, and ICR was calculated. The resulting kappa value of .67 can be regarded as satisfactory to solid. Moreover, varying agreement rates helped to identify problems in the coding scheme. Low agreement rates, for instance, indicated that the respective codes were defined too broadly and needed clarification. In a third step, the results of the analysis were used to improve the coding scheme, leading to consistent and high-quality results. DISCUSSION: The quantitative approach of ICR assessment is a viable instrument for quality assurance in qualitative content analysis. Kappa values and close inspection of agreement rates help to estimate and increase the quality of coding. This approach facilitates good practice in coding and enhances the credibility of analysis, especially when large samples are interviewed, different coders are involved, and quantitative results are presented.
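
For readers unfamiliar with the calculation, a minimal Python sketch of Cohen's kappa and per-code agreement rates for two coders is given below; the code labels are invented placeholders, not the study's coding scheme.

    # Cohen's kappa and simple per-code agreement rates for two coders (toy labels).
    from collections import Counter

    coder_a = ["pain", "coping", "work", "pain", "family", "coping", "pain", "work"]
    coder_b = ["pain", "coping", "work", "coping", "family", "coping", "pain", "family"]

    def cohens_kappa(a, b):
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        pa, pb = Counter(a), Counter(b)
        expected = sum(pa[c] / n * pb[c] / n for c in set(a) | set(b))
        return (observed - expected) / (1 - expected)

    print("kappa:", round(cohens_kappa(coder_a, coder_b), 2))

    # Per-code agreement: how often the coders agree when either of them used the code.
    for code in set(coder_a) | set(coder_b):
        pairs = [(x, y) for x, y in zip(coder_a, coder_b) if code in (x, y)]
        print(code, "agreement:", round(sum(x == y for x, y in pairs) / len(pairs), 2))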

Relevance: 100.00%

Abstract:

The climate change narrative has changed from one of mitigation to one of adaptation. Governments around the world have created climate change frameworks which address how their countries can better cope with the expected and unexpected changes due to global climate change. In an effort to do so, the federal governments of Canada and the United States, as well as some provinces and states within these countries, have created detailed documents which outline what steps must be taken to adapt to these changes. However, not much is said about how these steps will be translated into policy, and how that policy will eventually be implemented. To examine the ability of governments to acknowledge and incorporate the plethora of scientific information into policy, consideration must be given to policy capacity. This report focuses on three sectors (water supply and demand; drought and flood planning; and forest and grassland ecosystems) and on the word 'capacity' as it relates to nine different forms of policy capacity acknowledged in these frameworks. Qualitative content analysis using NVivo was carried out on fifty-four frameworks, and the results show that there is greater consideration for managerial capacity than for analytical or political capacity. The data also indicated that although more Canadian frameworks referred to policy capacity, the frameworks from the United States actually considered policy capacity to a greater degree.
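
The counting step behind such an analysis can be illustrated with a simplified Python sketch; the capacity categories, keyword lists and framework texts below are invented, and real NVivo coding is manual rather than keyword-based.

    # Simplified sketch of the counting step: tally references to (hypothetical)
    # capacity categories in each framework document, aggregated by country.
    import re
    from collections import defaultdict

    capacity_terms = {                      # placeholder category -> keyword mapping
        "managerial": ["managerial capacity", "administrative capacity"],
        "analytical": ["analytical capacity", "technical capacity"],
        "political":  ["political capacity"],
    }

    frameworks = {                          # placeholder documents
        ("Canada", "drought_plan.txt"): "builds managerial capacity and analytical capacity",
        ("US", "water_framework.txt"):  "strengthen managerial capacity and political capacity",
    }

    counts = defaultdict(lambda: defaultdict(int))
    for (country, _name), text in frameworks.items():
        for category, phrases in capacity_terms.items():
            for phrase in phrases:
                counts[country][category] += len(re.findall(phrase, text, flags=re.I))

    for country, by_category in counts.items():
        print(country, dict(by_category))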

Relevance: 100.00%

Abstract:

In this dissertation, the cytogenetic characteristics of bone marrow cells from 41 multiple myeloma patients were investigated. These cytogenetic data were correlated with the total DNA content as measured by flow cytometry. Both the cytogenetic information and DNA content were then correlated with clinical data to determine if diagnosis and prognosis of multiple myeloma could be improved. One hundred percent of the patients demonstrated abnormal chromosome numbers per metaphase. The average chromosome number per metaphase ranged from 42 to 49.9, with a mean of 44.99. The percent hypodiploidy ranged from 0-100% and the percent hyperdiploidy from 0-53%. Detailed cytogenetic analyses were very difficult to perform because of the paucity of mitotic figures and the poor chromosome morphology; thus, detailed chromosome banding analysis on these patients was impossible. Thirty-seven percent of the patients had normal total DNA content, whereas 63% had abnormal amounts of DNA (one patient with less than normal amounts and 25 patients with greater than normal amounts of DNA). Several clinical parameters were used in the statistical analyses: tumor burden, patient status at biopsy, patient response status, past therapy, type of treatment and percent plasma cells. Statistically significant correlations were found only among these clinical parameters: pretreatment tumor burden versus patient response, patient biopsy status versus patient response, and past therapy versus patient response. No correlations were found between percent hypodiploid, diploid, hyperdiploid or DNA content and the patient response status, nor were any found between those patients with: (a) normal plasma cells, low pretreatment tumor mass burden and more than 50% of the analyzed metaphases with 46 chromosomes; (b) normal amounts of DNA, low pretreatment tumor mass burden and more than 50% of the metaphases with 46 chromosomes; (c) normal amounts of DNA and normal quantities of plasma cells; (d) abnormal amounts of DNA, abnormal amounts of plasma cells, high pretreatment tumor mass burden and less than 50% of the metaphases with 46 chromosomes. Technical drawbacks of both cytogenetic and DNA content analysis in these multiple myeloma patients are discussed, along with the lack of correlations between DNA content and chromosome number. Refined chromosome banding analysis awaits technical improvements before we can understand which chromosome material (if any) makes up the "extra" amounts of DNA in these patients. None of the correlations tested can be used as diagnostic or prognostic aids for multiple myeloma.
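
As a generic illustration of the kind of association test used for such clinical correlations (the contingency table below is made up and does not reproduce the dissertation's data), a chi-square test in Python might look like this:

    # Chi-square test of association between two categorical clinical variables,
    # e.g. pretreatment tumor burden vs. response. Counts are invented.
    import numpy as np
    from scipy.stats import chi2_contingency

    #                 responder  non-responder
    table = np.array([[12,        5],    # low tumor burden
                      [ 6,       18]])   # high tumor burden

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")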

Relevance: 100.00%

Abstract:

In the United States, “binge” drinking among college students is an emerging public health concern due to its significant physical and psychological effects on young adults. The focus is on identifying interventions that can help decrease high-risk drinking behavior among this group of drinkers. One such intervention is motivational interviewing (MI), a client-centered therapy that aims to resolve client ambivalence by developing discrepancy and engaging the client in change talk. Of late, there is a growing interest in determining the active ingredients that influence the alliance between the therapist and the client. This study is a secondary analysis of data obtained from the Southern Methodist Alcohol Research Trial (SMART) project, a dismantling trial of MI and feedback among heavy-drinking college students. The present project examines the relationship between therapist and client language in MI sessions in a sample of “binge” drinking college students. Of the 126 SMART tapes, 30 tapes (‘MI with feedback’ group = 15, ‘MI only’ group = 15) were randomly selected for this study. MISC 2.1, a mutually exclusive and exhaustive coding system, was used to code the audio/videotaped MI sessions, and therapist and client language were analyzed for communication characteristics. Overall, therapists adopted an MI-consistent style and clients were found to engage in change talk. Counselor acceptance, empathy, spirit, and complex reflections were all significantly related to client change talk (p-values ranged from 0.001 to 0.047). Additionally, therapist ‘advice without permission’ and MI-inconsistent therapist behaviors were strongly correlated with client sustain talk (p-values ranged from 0.006 to 0.048). Simple linear regression models showed significant associations between MI-consistent (MICO) therapist language (independent variable) and change talk (dependent variable), and between MI-inconsistent (MIIN) therapist language (independent variable) and sustain talk (dependent variable). The study has several limitations, such as a small sample size, self-selection bias, poor inter-rater reliability for the global scales and the lack of a temporal measure of therapist and client language. Future studies might consider a larger sample size to obtain more statistical power. In addition, the correlation between therapist language, client language and drinking outcomes needs to be explored.
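
A minimal sketch of the simple linear regression step, with invented per-session counts standing in for the MISC-coded MICO and change-talk frequencies:

    # Regress client change-talk frequency on MI-consistent (MICO) therapist
    # behaviour counts per session. All values are placeholders.
    from scipy.stats import linregress

    mico_counts = [12, 18, 25, 9, 30, 22, 15, 27]   # MICO utterances per session
    change_talk = [ 5,  8, 14, 3, 16, 11,  7, 15]   # client change-talk utterances

    fit = linregress(mico_counts, change_talk)
    print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")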

Relevance: 100.00%

Abstract:

In an effort to investigate and document relevant sloshing-type phenomena, a series of experiments has been conducted. The aim of this paper is to describe the setup and data processing of such experiments. A sloshing tank is subjected to angular motion, and as a result pressure records are obtained at several locations, together with the motion data, torque and a collection of image and video information. The experimental rig and the data acquisition systems are described. Useful information for experimental sloshing research practitioners is provided, relating to the liquids used in the experiments, the dyeing techniques, tank building processes, synchronization of acquisition systems, etc. A new procedure for reconstructing experimental data, which takes into account experimental uncertainties, is presented. This procedure is based on a least squares spline approximation of the data. Based on a deterministic approach to the first sloshing wave impact event in a sloshing experiment, an uncertainty analysis procedure for the associated first pressure peak value is described.
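
The least squares spline approximation mentioned above can be sketched as follows, using a noisy placeholder motion signal rather than the actual experimental records:

    # Least squares cubic spline fit to a noisy (synthetic) angular-motion record.
    import numpy as np
    from scipy.interpolate import make_lsq_spline

    time = np.linspace(0, 2, 400)                                             # seconds
    angle = 4 * np.sin(2 * np.pi * time) + 0.2 * np.random.randn(time.size)   # noisy signal

    k = 3                                                          # cubic spline
    interior = np.linspace(0.1, 1.9, 15)                           # interior knots
    knots = np.r_[[time[0]] * (k + 1), interior, [time[-1]] * (k + 1)]
    spline = make_lsq_spline(time, angle, knots, k=k)

    residual_sd = np.std(angle - spline(time))                     # proxy for measurement noise
    print(f"residual SD: {residual_sd:.3f}")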

Relevance: 100.00%

Abstract:

Due to advances in information technology in general, and in databases in particular, data storage devices are becoming cheaper and data processing speeds are increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the Analysis phase, with respect to data processing modeling. We have used, as a starting point, a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides us with all the possible ways to automatically cross check multidimensional model data. Using the aforementioned, we propose diagrams and descriptions of use cases, which can be considered patterns representing the DSS functionality with regard to processing data from the DW on which the DSS is based. We highlight the reusability and automation benefits that can be achieved with this approach, and we think this study can serve as a guide in the development of DSS.
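
One reading of "all the possible ways to automatically cross check multidimensional model data" is the enumeration of every group-by combination over the DW dimensions; the Python sketch below illustrates this on an invented fact table.

    # Enumerate every aggregation of a fact table over subsets of its dimensions
    # (a "cube" of group-bys). Dimension and measure names are placeholders.
    from itertools import combinations
    import pandas as pd

    facts = pd.DataFrame({
        "region":  ["EU", "EU", "US", "US"],
        "product": ["A",  "B",  "A",  "B"],
        "year":    [2023, 2023, 2024, 2024],
        "sales":   [100,  150,  200,  250],
    })
    dimensions = ["region", "product", "year"]

    for r in range(1, len(dimensions) + 1):
        for dims in combinations(dimensions, r):
            print(f"-- group by {dims} --")
            print(facts.groupby(list(dims))["sales"].sum().to_string())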

Relevance: 100.00%

Abstract:

This report sheds light on the fundamental questions and underlying tensions between current policy objectives, compliance strategies and global trends in online personal data processing, assessing the existing and future framework in terms of effective regulation and public policy. Based on the discussions among the members of the CEPS Digital Forum and independent research carried out by the rapporteurs, policy conclusions are derived with the aim of making EU data protection policy more fit for purpose in today’s online technological context. This report constructively engages with the EU data protection framework, but does not provide a textual analysis of the EU data protection reform proposal as such.

Relevance: 100.00%

Abstract:

Cover title.

Relevance: 100.00%

Abstract:

Photonic technologies for data processing in the optical domain are expected to play a major role in future high-speed communications. Nonlinear effects in optical fibres have many attractive features and great, but not yet fully explored potential for optical signal processing. Here we provide an overview of our recent advances in developing novel techniques and approaches to all-optical processing based on fibre nonlinearities.
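
As a generic illustration of modelling fibre nonlinearity (not the specific techniques surveyed here), a symmetric split-step Fourier propagation of a pulse under the scalar nonlinear Schrödinger equation might look like the following Python sketch; all parameter values are illustrative.

    # Split-step Fourier propagation under the scalar NLSE (Kerr nonlinearity + GVD).
    import numpy as np

    n_t, t_window = 1024, 100e-12                   # samples, time window [s]
    t = np.linspace(-t_window / 2, t_window / 2, n_t, endpoint=False)
    omega = 2 * np.pi * np.fft.fftfreq(n_t, d=t[1] - t[0])

    beta2 = -20e-27                                 # group-velocity dispersion [s^2/m]
    gamma = 1.3e-3                                  # Kerr coefficient [1/(W*m)]
    dz, n_steps = 100.0, 100                        # step [m], 10 km in total

    field = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (10e-12) ** 2))   # 10 ps Gaussian, 1 mW peak

    half_disp = np.exp(1j * beta2 / 2 * omega**2 * dz / 2)        # half-step dispersion
    for _ in range(n_steps):
        field = np.fft.ifft(np.fft.fft(field) * half_disp)        # D/2
        field *= np.exp(1j * gamma * np.abs(field) ** 2 * dz)     # full nonlinear step
        field = np.fft.ifft(np.fft.fft(field) * half_disp)        # D/2

    print("output peak power [W]:", np.max(np.abs(field) ** 2))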

Relevance: 100.00%

Abstract:

We present the first experimental implementation of a recently designed quasi-lossless fibre span with strongly reduced signal power excursion. The resulting fibre waveguide medium can be advantageously used both in lightwave communications and in all-optical nonlinear data processing.

Relevance: 100.00%

Abstract:

A large number of studies have been devoted to modeling the content of and interactions between users on Twitter. In this paper, we propose a method inspired by Social Role Theory (SRT), which assumes that a user behaves differently in different roles in the generation process of Twitter content. We consider the two most distinctive social roles on Twitter: originator and propagator, who respectively post original messages and retweet or forward the messages of others. In addition, we also consider role-specific social interactions, especially implicit interactions between users who share some common interests. All the above elements are integrated into a novel regularized topic model. We evaluate the proposed method on real Twitter data. The results show that our method is more effective than existing ones which do not distinguish social roles. Copyright 2013 ACM.
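
The regularized, role-aware topic model itself is not reproduced here; as a simplified baseline only, one could fit a plain LDA model separately to originator posts and to propagator retweets and compare the resulting topics, as in this toy Python sketch.

    # Baseline (not the paper's model): separate LDA topic models for the two roles.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    original_tweets = ["new paper on topic models", "training deep networks at scale",
                       "topic models for short text"]
    retweets = ["RT amazing results on topic models", "RT great thread on deep networks"]

    def top_terms(docs, n_topics=2, n_terms=3):
        vec = CountVectorizer(stop_words="english")
        dtm = vec.fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(dtm)
        terms = vec.get_feature_names_out()
        return [[terms[i] for i in comp.argsort()[-n_terms:]] for comp in lda.components_]

    print("originator topics:", top_terms(original_tweets))
    print("propagator topics:", top_terms(retweets))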