24 results for Content analysis (Communication) -- Data processing


Relevance:

100.00%

Publisher:

Abstract:

Metabolic stable isotope labeling is increasingly employed for accurate protein (and metabolite) quantitation using mass spectrometry (MS). It provides sample-specific isotopologues that can be used to facilitate comparative analysis of two or more samples. Stable Isotope Labeling by Amino acids in Cell culture (SILAC) has been used for almost a decade in proteomic research and analytical software solutions have been established that provide an easy and integrated workflow for elucidating sample abundance ratios for most MS data formats. While SILAC is a discrete labeling method using specific amino acids, global metabolic stable isotope labeling using isotopes such as (15)N labels the entire element content of the sample, i.e. for (15)N the entire peptide backbone in addition to all nitrogen-containing side chains. Although global metabolic labeling can deliver advantages with regard to isotope incorporation and costs, the requirements for data analysis are more demanding because, for instance for polypeptides, the mass difference introduced by the label depends on the amino acid composition. Consequently, there has been less progress on the automation of the data processing and mining steps for this type of protein quantitation. Here, we present a new integrated software solution for the quantitative analysis of protein expression in differential samples and show the benefits of high-resolution MS data in quantitative proteomic analyses.
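To make the composition-dependent mass shift concrete, here is a minimal sketch in Python (not part of the software presented in the abstract): it computes the mass difference a fully 15N-labelled peptide would show relative to its unlabelled form. The residue nitrogen counts and the 15N/14N mass difference are standard values; the example peptides are arbitrary.

```python
# Minimal sketch (not the authors' software): why the mass shift of a fully
# 15N-labelled peptide depends on its amino-acid composition, unlike the fixed
# per-residue shift of a SILAC label.

# Nitrogen atoms per residue (1 backbone N plus side-chain N).
NITROGENS = {
    'G': 1, 'A': 1, 'S': 1, 'P': 1, 'V': 1, 'T': 1, 'C': 1, 'L': 1, 'I': 1,
    'M': 1, 'F': 1, 'Y': 1, 'D': 1, 'E': 1,
    'N': 2, 'Q': 2, 'K': 2, 'W': 2, 'H': 3, 'R': 4,
}

DELTA_15N = 15.0001089 - 14.0030740  # ~0.99703 Da per nitrogen atom


def n15_mass_shift(sequence: str) -> float:
    """Mass difference between the fully 15N-labelled and unlabelled peptide."""
    return sum(NITROGENS[res] for res in sequence.upper()) * DELTA_15N


if __name__ == "__main__":
    # Two arbitrary peptides of equal length but different nitrogen content.
    for peptide in ("SAMPLER", "GNQKWHR"):
        print(f"{peptide}: +{n15_mass_shift(peptide):.4f} Da")
```

Two peptides of equal length thus acquire different label-induced shifts, which is why automated 15N quantitation must compute the shift per peptide rather than apply a constant offset, as SILAC workflows can.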

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a method that employs Earth Observation (EO) data to calculate spatiotemporal estimates of soil heat flux, G, using a physically-based method (the Analytical Method). The method involves a harmonic analysis of land surface temperature (LST) data. It also requires an estimate of near-surface soil thermal inertia; this property depends on soil textural composition and varies as a function of soil moisture content. The EO data needed to drive the model equations, and the ground-based data required to provide verification of the method, were obtained over the Fakara domain within the African Monsoon Multidisciplinary Analysis (AMMA) program. LST estimates (3 km × 3 km, one image every 15 min) were derived from MSG-SEVIRI data. Soil moisture estimates were obtained from ENVISAT-ASAR data, while estimates of leaf area index, LAI (to calculate the effect of the canopy on G, largely due to radiation extinction), were obtained from SPOT-HRV images. The variation of these variables over the Fakara domain, and implications for values of G derived from them, were discussed. Results showed that this method provides reliable large-scale spatiotemporal estimates of G. Variations in G could largely be explained by the variability in the model input variables. Furthermore, it was shown that this method is relatively insensitive to model parameters related to the vegetation or soil texture. However, the strong sensitivity of thermal inertia to soil moisture content at low values of relative saturation (<0.2) means that in arid or semi-arid climates accurate estimates of surface soil moisture content are of utmost importance if reliable estimates of G are to be obtained. This method has the potential to improve large-scale evaporation estimates, to aid land surface model prediction and to advance research that aims to explain failure in energy balance closure of meteorological field studies.
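As a rough illustration of the harmonic-analysis step, the sketch below (Python, with an assumed thermal inertia value, a synthetic diurnal LST series and three harmonics; not the AMMA/Fakara configuration) fits sinusoidal harmonics to an LST series and evaluates the commonly used analytical expression G(t) = Γ Σ A_n √(nω) sin(nωt + φ_n + π/4). The published method additionally derives the thermal inertia Γ from soil moisture and texture and corrects for canopy extinction using LAI.

```python
import numpy as np

# Sketch of the harmonic ("Analytical") approach to soil heat flux:
# fit harmonics to a land-surface-temperature (LST) series, then
# G(t) = Gamma * sum_n A_n * sqrt(n*omega) * sin(n*omega*t + phi_n + pi/4).
# The thermal inertia, LST curve and number of harmonics below are
# illustrative assumptions only.

OMEGA = 2.0 * np.pi / 86400.0   # fundamental (diurnal) angular frequency, s^-1
GAMMA = 800.0                   # assumed thermal inertia, J m^-2 K^-1 s^-1/2
N_HARMONICS = 3

t = np.arange(0, 86400, 900.0)  # one day at 15-min resolution (as for SEVIRI LST)
lst = 305.0 + 8.0 * np.sin(OMEGA * t - np.pi / 2) + 1.5 * np.sin(2 * OMEGA * t)


def harmonics(series, times, n_max):
    """Amplitude A_n and phase phi_n of each harmonic of the LST series."""
    coeffs = []
    for n in range(1, n_max + 1):
        a = 2.0 / len(series) * np.sum(series * np.cos(n * OMEGA * times))
        b = 2.0 / len(series) * np.sum(series * np.sin(n * OMEGA * times))
        coeffs.append((np.hypot(a, b), np.arctan2(a, b)))  # term = A*sin(n*omega*t + phi)
    return coeffs


def soil_heat_flux(times, coeffs, gamma):
    """Surface soil heat flux from the fitted harmonics (phase-shifted by pi/4)."""
    g = np.zeros_like(times, dtype=float)
    for n, (amp, phase) in enumerate(coeffs, start=1):
        g += gamma * amp * np.sqrt(n * OMEGA) * np.sin(n * OMEGA * times + phase + np.pi / 4)
    return g


G = soil_heat_flux(t, harmonics(lst, t, N_HARMONICS), GAMMA)
print(f"G range over the day: {G.min():.1f} to {G.max():.1f} W m^-2")
```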

Relevance:

100.00%

Publisher:

Abstract:

Background: Microarray based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and CGH microarray data for looking at genetic stability in oncogenes, there are none designed specifically for understanding the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process that may be automated in the future to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described, and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, showing that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
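As an illustration of the kernel-density idea (a sketch with simulated log-ratios, not the authors' pipeline), the Python snippet below fits a KDE to bimodal signal ratios and takes the density minimum between the two main modes as the cut-off separating "present" from "absent or divergent" genes.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

# Sketch: KDE-based cut-off between "gene present" and "gene absent/divergent"
# calls from normalised CGH log-ratios. The simulated data are illustrative only.
rng = np.random.default_rng(0)
present = rng.normal(loc=0.0, scale=0.3, size=3500)   # genes shared with the reference
absent = rng.normal(loc=-2.0, scale=0.5, size=1500)   # absent or divergent genes
log_ratios = np.concatenate([present, absent])

# Estimate the (typically bimodal) density of the log-ratios.
kde = gaussian_kde(log_ratios)
grid = np.linspace(log_ratios.min(), log_ratios.max(), 1000)
density = kde(grid)

# Locate the two tallest modes, then use the density minimum between them as cut-off.
peaks = argrelextrema(density, np.greater)[0]
lo, hi = sorted(peaks[np.argsort(density[peaks])[-2:]])
cutoff = grid[lo + np.argmin(density[lo:hi + 1])]

calls = np.where(log_ratios >= cutoff, "present", "absent/divergent")
print(f"cut-off at log-ratio {cutoff:.2f}: "
      f"{(calls == 'present').sum()} present, "
      f"{(calls != 'present').sum()} absent or divergent")
```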

Relevance:

100.00%

Publisher:

Abstract:

This chapter introduces the latest practices and technologies in the interactive interpretation of environmental data. With environmental data becoming ever larger, more diverse and more complex, there is a need for a new generation of tools that provides new capabilities over and above those of the standard workhorses of science. These new tools aid the scientist in discovering interesting new features (and also problems) in large datasets by allowing the data to be explored interactively using simple, intuitive graphical tools. In this way, new discoveries are made that are commonly missed by automated batch data processing. This chapter discusses the characteristics of environmental science data, common current practice in data analysis and the supporting tools and infrastructure. New approaches are introduced and illustrated from the points of view of both the end user and the underlying technology. We conclude by speculating as to future developments in the field and what must be achieved to fulfil this vision.
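By way of illustration only, the minimal sketch below (Python with plotly; synthetic data and hypothetical variable names, not the tools described in the chapter) shows the kind of interactive plot in which sensor glitches that a batch summary statistic would average away become immediately visible through zooming and hovering.

```python
import numpy as np
import pandas as pd
import plotly.express as px

# Sketch of interactive exploration: plot a long environmental time series
# with zoom/pan/hover so that suspect values stand out to the eye.
rng = np.random.default_rng(1)
times = pd.date_range("2010-01-01", periods=5000, freq="h")
temperature = (15 + 5 * np.sin(2 * np.pi * np.arange(5000) / (24 * 365))
               + rng.normal(0, 0.4, 5000))
temperature[[700, 2300, 4100]] += 12   # inject sensor glitches to "discover"

df = pd.DataFrame({"time": times, "sea_surface_temp_C": temperature})
fig = px.scatter(df, x="time", y="sea_surface_temp_C",
                 title="Interactive exploration: zoom and hover to inspect outliers")
fig.show()   # opens an interactive figure in a browser or notebook
```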

Relevance:

100.00%

Publisher:

Abstract:

In the past decade, the analysis of data has faced the challenge of dealing with very large and complex datasets and the real-time generation of data. Technologies to store and access these complex and large datasets are in place. However, robust and scalable analysis technologies are needed to extract meaningful information from these datasets. The research field of Information Visualization and Visual Data Analytics addresses this need. Information visualization and data mining are often used to complement each other. Their common goal is the extraction of meaningful information from complex and possibly large data. However, whereas data mining relies on the processing power of silicon hardware, visualization techniques also draw on the powerful image-processing capabilities of the human brain. This article reviews research on data visualization and visual analytics techniques, and surveys existing visual analytics techniques, systems, and applications, including a perspective on the field from the chemical process industry.

Relevance:

100.00%

Publisher:

Abstract:

Smart healthcare is a complex domain for systems integration because of the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions depend on smart collaboration among multiple systems for effective communication between departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many integration and interoperability challenges, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to analyse data, semantic, and pragmatic interoperability in systems integration within a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital for our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), the Electronic Patient Record (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation, including publications and internal working papers, one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and a secretary. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also developed a pragmatic interoperability framework that summarises the empirical findings and proposes recommendations for guiding integration in the radiology context.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, there has been an increasing interest in the adoption of emerging ubiquitous sensor network (USN) technologies for instrumentation within a variety of sustainability systems. USN is emerging as a sensing paradigm that is being newly considered by the sustainability management field as an alternative to traditional tethered monitoring systems. Researchers have been discovering that USN is an exciting technology that should not be viewed simply as a substitute for traditional tethered monitoring systems. In this study, we investigate how a movement monitoring system for a complex building can be developed as a research environment for USN and related decision-support technologies. To address the apparent danger of building movement, agent-mediated communication concepts have been designed to autonomously manage large volumes of exchanged information. We additionally detail the design of the proposed system, including its principles, data processing algorithms, system architecture, and user interface specifics. Results of the test and case study demonstrate the effectiveness of the USN-based data acquisition system for real-time monitoring of movement operations.

Relevance:

100.00%

Publisher:

Abstract:

The debate associated with the qualifications of business school faculty has raged since the 1959 release of the Gordon–Howell and Pierson reports, which encouraged business schools in the USA to enhance their legitimacy by increasing their faculties’ doctoral qualifications and scholarly rigor. Today, the legitimacy of specific faculty qualifications remains one of the most discussed topics in management education, attracting the interest of administrators, faculty, and accreditation agencies. Based on new institutional theory and the institutional logics perspective, this paper examines convergence and innovation in business schools through an analysis of faculty hiring criteria. The qualifications examined are academic degree, scholarly publications, teaching experience, and professional experience. Three groups of schools are examined based on type of university, position within a media ranking system, and accreditation by the Association to Advance Collegiate Schools of Business. Data are gathered using a content analysis of 441 faculty postings from business schools based in the USA over two time periods. Contrary to claims of global convergence, we find most qualifications still vary by group, even in the mature US market. Moreover, innovative hiring is more likely to be found in non-elite schools.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To investigate the relationship between research data management (RDM) and data sharing in the formulation of RDM policies and development of practices in higher education institutions (HEIs). Design/methodology/approach: Two strands of work were undertaken sequentially: firstly, content analysis of 37 RDM policies from UK HEIs; secondly, two detailed case studies of institutions with different approaches to RDM based on semi-structured interviews with staff involved in the development of RDM policy and services. The data are interpreted using insights from Actor Network Theory. Findings: RDM policy formation and service development has created a complex set of networks within and beyond institutions involving different professional groups with widely varying priorities shaping activities. Data sharing is considered an important activity in the policies and services of HEIs studied, but its prominence can in most cases be attributed to the positions adopted by large research funders. Research limitations/implications: The case studies, as research based on qualitative data, cannot be assumed to be universally applicable but do illustrate a variety of issues and challenges experienced more generally, particularly in the UK. Practical implications: The research may help to inform development of policy and practice in RDM in HEIs and funder organisations. Originality/value: This paper makes an early contribution to the RDM literature on the specific topic of the relationship between RDM policy and services, and openness – a topic which to date has received limited attention.