979 results for Data portal


Relevance:

30.00%

Publisher:

Abstract:

The development of conceptual frameworks for the analysis of social exclusion has somewhat outstripped related methodological developments. This paper seeks to help fill that gap by applying self-organising maps (SOMs) to a detailed set of material deprivation indicators for the Irish case. The SOM approach allows us to offer a differentiated and interpretable picture of the structure of multiple deprivation in contemporary Ireland. Employing this approach, we identify 16 clusters characterised by distinct profiles across 42 deprivation indicators. Exploratory analyses demonstrate that, controlling for equivalised household income, SOM cluster membership adds substantially to our ability to predict subjective economic stress. Moreover, in comparison with an analogous latent class approach, the SOM analysis offers considerable additional discriminatory power in relation to individuals' experience of their economic circumstances. The results suggest that the SOM approach could prove a valuable addition to a 'methodological platform' for analysing the shape and form of social exclusion. (c) 2009 Elsevier Inc. All rights reserved.
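As a rough illustration of the clustering step, the sketch below trains a 4x4 SOM (16 nodes, mirroring the paper's 16 clusters over 42 indicators) on simulated binary deprivation data. It assumes the third-party `minisom` package; the data, grid size and parameters are stand-ins, not the authors' setup.

```python
# Minimal SOM clustering sketch, assuming the `minisom` package.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 42)).astype(float)  # 1000 simulated respondents

som = MiniSom(4, 4, 42, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=10_000)

# Each respondent's cluster is the grid coordinate of their winning node.
winners = np.array([som.winner(x) for x in X])   # shape (1000, 2)
cluster_id = winners[:, 0] * 4 + winners[:, 1]   # flatten to labels 0..15
```

Cluster membership obtained this way could then be entered, alongside equivalised income, as a predictor of economic stress in a regression model.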

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a data model for content representation of temporal media in an IP-based sensor network. The model is formed by introducing the idea of semantic role from linguistics into the underlying concepts of formal event representation, with the aim of developing a common event model. The architecture of a prototype multi-camera surveillance system based on the proposed model is described. The important aspects of the proposed model are its expressiveness, its ability to model the content of temporal media, and its suitability for use with a natural language interface. It also provides a platform for temporal information fusion, as well as for organising sensor annotations with the help of ontologies.
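A semantic-role event record might look roughly like the hypothetical sketch below; the field names (agent, patient, location) are illustrative assumptions drawn from standard linguistic role labels, not the authors' published schema.

```python
# Hypothetical semantic-role event structure for a camera annotation.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Event:
    action: str                        # the predicate, e.g. "enters"
    agent: Optional[str] = None        # who performs the action
    patient: Optional[str] = None      # who/what is acted upon
    location: Optional[str] = None     # where the event occurs
    start: Optional[datetime] = None   # temporal extent of the event
    end: Optional[datetime] = None
    source: str = ""                   # originating sensor/camera id

# "Person P3 enters zone A, observed by camera cam-02 at 14:05."
e = Event(action="enters", agent="person-P3", location="zone-A",
          start=datetime(2024, 1, 1, 14, 5), source="cam-02")
```

Because each slot corresponds to a question word (who, what, where, when), such records map naturally onto a natural-language query interface.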

Relevance:

30.00%

Publisher:

Abstract:

Background: Popular approaches in human tissue-based biomarker discovery include tissue microarrays (TMAs) and DNA microarrays (DMAs), for protein and gene expression profiling respectively. The data generated by these analytic platforms, together with the associated image, clinical and pathological data, currently reside on widely different information platforms, making searching and cross-platform analysis difficult. Consequently, there is a strong need for a single coherent database capable of correlating all available data types.

Method: This study presents TMAX, a database system designed to facilitate biomarker discovery tasks. TMAX organises a variety of biomarker-discovery data into a single database. Both TMA and DMA experimental data are integrated in TMAX and connected through common DNA/protein biomarkers. Patient clinical data (including tissue pathology data), computer-assisted tissue images and the associated analytic data are also included, enabling truly high-throughput processing of ultra-large digital slides, for both TMAs and whole-slide tissue images. A comprehensive web front end was built, with embedded XML parser software and predefined SQL queries, to enable rapid data exchange in the form of standard XML files.

Results & Conclusion: TMAX represents one of the first attempts to integrate TMA data with public gene expression experiment data. Experiments suggest that TMAX is robust in managing large quantities of data from different sources (clinical, TMA, DMA and image analysis). Its web front end is user-friendly and, most importantly, allows rapid and easy exchange of biomarker-discovery data. In conclusion, TMAX is a robust biomarker discovery data repository and research tool, which opens up opportunities for biomarker discovery and further integromics research.
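The kind of cross-platform correlation TMAX enables can be pictured as a join through a shared biomarker table, as in the self-contained sketch below. The schema and values are assumptions invented for illustration, not the actual TMAX schema.

```python
# Illustrative join of TMA protein scores and DMA expression values
# through a common biomarker record (hypothetical schema).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE biomarker (id INTEGER PRIMARY KEY, symbol TEXT);
    CREATE TABLE tma_score (biomarker_id INT, patient_id INT, intensity REAL);
    CREATE TABLE dma_expr  (biomarker_id INT, patient_id INT, log_ratio REAL);
    INSERT INTO biomarker VALUES (1, 'TP53');
    INSERT INTO tma_score VALUES (1, 101, 2.7);
    INSERT INTO dma_expr  VALUES (1, 101, 1.4);
""")

# Protein (TMA) and gene expression (DMA) measurements for the same
# patient, correlated via the shared biomarker.
for row in con.execute("""
        SELECT b.symbol, t.patient_id, t.intensity, d.log_ratio
        FROM biomarker b
        JOIN tma_score t ON t.biomarker_id = b.id
        JOIN dma_expr  d ON d.biomarker_id = b.id
                        AND d.patient_id  = t.patient_id"""):
    print(row)  # ('TP53', 101, 2.7, 1.4)
```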

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Background: Ineffective risk stratification can delay the diagnosis of serious disease in patients with hematuria. We applied a systems biology approach to analyze clinical, demographic and biomarker measurements (n = 29) collected from 157 hematuric patients: 80 with urothelial cancer (UC) and 77 controls with confounding pathologies.

Methods: On the basis of the biomarkers, we conducted agglomerative hierarchical clustering to identify patient and biomarker clusters. We then explored the relationship between the patient clusters and clinical characteristics using chi-square analyses. We determined the classification errors and areas under the receiver operating characteristic (ROC) curve of Random Forest classifiers (RFC) for patient subpopulations, using the biomarker clusters to reduce the dimensionality of the data.

Results: Agglomerative clustering identified five patient clusters and seven biomarker clusters. Final diagnosis categories were non-randomly distributed across the five patient clusters. In addition, two of the patient clusters were enriched with patients with 'low cancer-risk' characteristics, and the biomarkers which contributed to the diagnostic classifiers for these two clusters were similar. In contrast, three of the patient clusters were significantly enriched with patients harboring 'high cancer-risk' characteristics, including proteinuria, aggressive pathological stage and grade, and malignant cytology. Patients in these three clusters included controls, that is, patients with other serious disease, and patients with cancers other than UC. The biomarkers which contributed to the diagnostic classifiers for the largest 'high cancer-risk' cluster differed from those contributing to the classifiers for the 'low cancer-risk' clusters, and the biomarkers contributing to classifiers for subpopulations split by smoking status, gender and medication also differed.

Conclusions: The systems biology approach applied in this study allowed the hematuric patients to cluster naturally, on the basis of the heterogeneity within their biomarker data, into five distinct risk subpopulations. Our findings highlight an approach with the promise to unlock the potential of biomarkers; this will be especially valuable in the field of bladder cancer diagnostics, where biomarkers are urgently required. Clinicians could interpret risk classification scores in the context of clinical parameters at the time of triage. This could reduce cystoscopies and enable priority diagnosis of aggressive disease, leading to improved patient outcomes at reduced cost. © 2013 Emmert-Streib et al; licensee BioMed Central Ltd.
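A minimal sketch of the two-step analysis, using scikit-learn with simulated data in place of the study's 157 patients by 29 biomarkers (the data, parameters and AUC values here are stand-ins):

```python
# Agglomerative clustering of patients, then a Random Forest classifier
# per cluster scored by ROC AUC, on simulated data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(157, 29))       # simulated biomarker measurements
y = rng.integers(0, 2, size=157)     # 1 = urothelial cancer, 0 = control

# Step 1: cluster patients (the paper reports five patient clusters).
patient_cluster = AgglomerativeClustering(n_clusters=5).fit_predict(X)

# Step 2: per-cluster classifier, evaluated by cross-validated ROC AUC.
for c in range(5):
    mask = patient_cluster == c
    if mask.sum() < 10 or np.bincount(y[mask], minlength=2).min() < 3:
        continue  # skip clusters too small or nearly single-class
    auc = cross_val_score(RandomForestClassifier(n_estimators=200),
                          X[mask], y[mask], cv=3, scoring="roc_auc")
    print(f"cluster {c}: mean AUC = {auc.mean():.2f}")
```

Inspecting feature importances of each per-cluster forest is one way to see, as the paper reports, that different biomarkers drive the classifiers for 'high' and 'low cancer-risk' subpopulations.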

Relevance:

30.00%

Publisher:

Abstract:

Medical geology research has recognised a number of potentially toxic elements (PTEs), such as arsenic, cobalt, chromium, copper, nickel, lead, vanadium, uranium and zinc, known to influence human disease through their deficiency or toxicity. As the impact of infectious diseases has decreased and the population has aged, cancer has become the most common cause of death in developed countries, including Northern Ireland. This research explores the relationship between environmental exposure to potentially toxic elements in soil and cancer incidence data across Northern Ireland. The incidence of twelve cancer types (lung, stomach, leukaemia, oesophagus, colorectal, bladder, kidney, breast, mesothelioma, melanoma, and both basal and squamous non-melanoma (NM) skin cancer) was examined in the form of twenty-five coded datasets comprising aggregates over the 12-year period from 1993 to 2006. A local modelling technique, geographically weighted regression (GWR), is used to explore the relationship between environmental exposure and the cancer data. The results compare the geographical incidence of certain cancers (stomach and NM squamous skin cancer) with concentrations of certain PTEs (arsenic levels in soils and radon were identified). Findings from the research have implications for regional human health risk assessments.
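GWR fits a separate regression at every location, weighting nearby observations more heavily, so coefficients vary over space. A minimal sketch, assuming the `mgwr` package from PySAL and simulated ward-level data in place of the soil and cancer datasets:

```python
# Minimal GWR sketch with the `mgwr` package (simulated data).
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(2)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))        # ward centroids
arsenic = rng.lognormal(1.0, 0.5, size=(n, 1))   # soil As (mg/kg), simulated
radon = rng.lognormal(2.0, 0.4, size=(n, 1))     # radon proxy, simulated
X = np.hstack([arsenic, radon])
y = 0.3 * arsenic + 0.1 * radon + rng.normal(0, 1, (n, 1))  # incidence proxy

bw = Sel_BW(coords, y, X).search()   # select an adaptive bandwidth
results = GWR(coords, y, X, bw).fit()
print(results.params.shape)          # (200, 3): local intercept + 2 slopes
```

Mapping the local slope for arsenic would show where the exposure-incidence association strengthens or weakens across the region.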

Relevance:

30.00%

Publisher:

Abstract:

Objective: Several surveillance definitions of influenza-like illness (ILI) have been proposed, based on the presence of symptoms. Symptom data can be obtained from patients, medical records, or both. Past research has found that agreement between health record data and self-report varies depending on the specific symptom. We therefore aimed to explore the implications of using data on influenza symptoms extracted from medical records, similar data collected prospectively from outpatients, and the combined data from both sources as predictors of laboratory-confirmed influenza. Methods: Using data from the Hutterite Influenza Prevention Study, we calculated: 1) the sensitivity, specificity and predictive values of individual symptoms within surveillance definitions; 2) how frequently surveillance definitions correlated with laboratory-confirmed influenza; and 3) the predictive value of the surveillance definitions. Results: Of the 176 participants with both self-reports and medical records, 142 (81%) were tested for influenza and 37 (26%) were PCR-positive for influenza. Fever (alone), and fever combined with cough and/or sore throat, were highly correlated with being PCR-positive for influenza across all data sources. ILI surveillance definitions based on symptom data from medical records only, or from both medical records and self-report, were better predictors of laboratory-confirmed influenza, with higher odds ratios and positive predictive values. Discussion: The choice of data source for determining ILI will depend on the patient population, the outcome of interest, the availability of the data source, and whether the data are used for clinical decision making, research, or surveillance. © Canadian Public Health Association, 2012.
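The metrics in step 1 reduce to 2x2-table arithmetic; a small helper, with illustrative counts rather than the study's data:

```python
# Sensitivity, specificity, PPV and NPV from 2x2 confusion-table counts.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # P(symptom present | influenza+)
        "specificity": tn / (tn + fp),  # P(symptom absent  | influenza-)
        "ppv": tp / (tp + fp),          # P(influenza+ | symptom present)
        "npv": tn / (tn + fn),          # P(influenza- | symptom absent)
    }

# e.g. fever as a predictor of PCR-confirmed influenza (made-up counts):
print(diagnostic_metrics(tp=30, fp=20, fn=7, tn=85))
```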

Relevance:

30.00%

Publisher:

Abstract:

Data obtained with any research tool must be reproducible, a property referred to as reliability. Three techniques are often used to evaluate the reliability of tools producing continuous data in aging research: intraclass correlation coefficients (ICC), Pearson correlations, and paired t tests. These are often construed as equivalent when applied to reliability. This is not correct, and may lead researchers to select instruments based on statistics that do not reflect actual reliability. The purpose of this paper is to compare the reliability estimates produced by these three techniques and to determine the preferable technique. A hypothetical dataset was produced to evaluate the reliability estimates obtained with the ICC, Pearson correlations, and paired t tests in three different situations. For each situation, two sets of 20 observations were created to simulate an intra-rater or inter-rater paradigm, based on 20 participants with two observations per participant. The situations were designed to demonstrate good agreement, systematic bias, or substantial random measurement error. In the situation demonstrating good agreement, all three techniques supported the conclusion that the data were reliable. In the situation demonstrating systematic bias, the ICC and the t test indicated that the data were not reliable, whereas the Pearson correlation suggested high reliability despite the systematic discrepancy. In the situation representing substantial random measurement error, where low reliability was expected, the ICC and the Pearson coefficient accurately reflected this, but the t test suggested the data were reliable. The ICC is the preferred technique for measuring reliability; although some limitations are associated with its use, they can be overcome.
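The systematic-bias situation is easy to reproduce. The sketch below simulates 20 participants rated twice, with rater 2 consistently ~5 points higher, and computes all three statistics; the ICC implementation is a textbook ICC(2,1) (two-way random effects, absolute agreement, single rater), written out rather than the paper's own code.

```python
# Reproducing the systematic-bias scenario: Pearson r stays near 1,
# while the paired t test and ICC(2,1) both flag the disagreement.
import numpy as np
from scipy import stats

def icc_2_1(ratings):
    """ICC(2,1) for an (n targets x k raters) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
r1 = rng.normal(50, 10, 20)             # rater 1
r2 = r1 + 5 + rng.normal(0, 1, 20)      # rater 2: systematic +5 bias

print("Pearson r :", stats.pearsonr(r1, r2)[0])    # near 1: misses the bias
print("paired t p:", stats.ttest_rel(r1, r2)[1])   # small p: detects the bias
print("ICC(2,1)  :", icc_2_1(np.column_stack([r1, r2])))  # penalises the bias
```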

Relevance:

30.00%

Publisher:

Abstract:

We describe an approach to the joint exploitation of control (stream) and data parallelism in a skeleton-based parallel programming environment, based on annotations and refactoring. Annotations drive the efficient implementation of a parallel computation. Refactoring is used to transform the associated skeleton tree into a more efficient, functionally equivalent skeleton tree; in most cases, cost models are used to drive the refactoring process. We show how sample use-case applications and kernels may be optimised, and discuss preliminary experiments with FastFlow assessing the theoretical results. © 2013 Springer-Verlag.
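FastFlow itself is a C++ framework; as a language-neutral analogue, the sketch below shows the skeleton idea the paper builds on: a three-stage pipeline whose middle stage is a farm (a data-parallel worker pool). Refactoring in the paper's sense would rewrite such a tree, e.g. fusing or re-nesting stages, guided by cost models. The stage functions are placeholders.

```python
# A pipeline skeleton whose middle stage is a farm of workers
# (conceptual analogue of a pipe(seq, farm, seq) skeleton tree).
from concurrent.futures import ProcessPoolExecutor

def stage1(x):    # sequential pre-processing (stream stage)
    return x * 2

def worker(x):    # farmed, data-parallel stage
    return x ** 2

def stage3(x):    # sequential post-processing (stream stage)
    return x + 1

def pipeline_with_farm(stream, n_workers=4):
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        pre = map(stage1, stream)        # stage 1
        farmed = pool.map(worker, pre)   # stage 2: farm of n_workers
        yield from map(stage3, farmed)   # stage 3

if __name__ == "__main__":
    print(list(pipeline_with_farm(range(10))))
```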

Relevance:

30.00%

Publisher:

Abstract:

Data flow techniques have been around since the early '70s, when they were used in compilers for sequential languages. Shortly after their introduction they were also considered as a possible model for parallel computing, although the impact here was limited. Recently, however, data flow has been identified as a candidate for the efficient implementation of various programming models on multi-core architectures. In most cases, though, the burden of determining the data flow "macro" instructions is left to the programmer, while the compiler/run-time system manages only the efficient scheduling of these instructions. We discuss a structured parallel programming approach supporting the automatic compilation of programs to macro data flow, and we show experimental results demonstrating the feasibility of the approach and the efficiency of the resulting "object" code on different classes of state-of-the-art multi-core architectures. The experiments use different base mechanisms to implement the macro data flow run-time support, from plain pthreads with condition variables to more modern and effective lock- and fence-free parallel frameworks. Experimental results comparing the efficiency of the proposed approach with that achieved using other, more classical, parallel frameworks are also presented. © 2012 IEEE.
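The execution model is easy to illustrate: a macro instruction fires as soon as all of its input tokens are available, so scheduling is driven by data dependencies rather than program order. The toy interpreter below is a conceptual sketch of that firing rule, not the authors' compiler or run-time.

```python
# Toy macro data flow interpreter: instructions fire when their inputs exist.
def run_macro_dataflow(instructions, initial_tokens):
    """instructions: name -> (function, [input token names], output name)."""
    tokens = dict(initial_tokens)
    pending = dict(instructions)
    while pending:
        ready = [n for n, (f, ins, out) in pending.items()
                 if all(i in tokens for i in ins)]
        if not ready:
            raise RuntimeError("deadlock: unsatisfiable dependencies")
        for name in ready:                       # these could run in parallel
            f, ins, out = pending.pop(name)
            tokens[out] = f(*(tokens[i] for i in ins))  # fire the instruction
    return tokens

# (a + b) * (a - b), expressed as three macro instructions:
result = run_macro_dataflow(
    {"add": (lambda a, b: a + b, ["a", "b"], "s"),
     "sub": (lambda a, b: a - b, ["a", "b"], "d"),
     "mul": (lambda s, d: s * d, ["s", "d"], "p")},
    {"a": 7, "b": 3},
)
print(result["p"])  # 40
```

In a real run-time, the `ready` set would be dispatched to worker threads; the pthreads and lock-free variants mentioned above differ only in how that dispatch is implemented.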

Relevance:

30.00%

Publisher: