71 results for multiple data


Relevance: 30.00%

Abstract:

The 'Troubled Families' policy and intervention agenda is based on a deficit approach that tends to ignore the role of structural disadvantage in the lives of the families it targets. In an effort to support this rhetoric, both quantitative and qualitative data have been used, and misused, to create a representation of these families which emphasizes risk and individual blame and minimizes societal factors. This paper presents findings from an in-depth qualitative study using a biographical narrative approach to explore parents' experiences of multiple adversities at different times over the life course. Key themes relating to the pattern and nature of adversities experienced by participants provide a more nuanced understanding of the lives of families experiencing multiple and complex problems, highlighting how multiple interpretations are often possible within the context of professional intervention. The findings support the increasing call to move away from procedurally driven, risk-averse child protection practice towards more relationally based practice, which addresses not only the needs of all family members but also recognizes parents as individuals in their own right.

Relevance: 30.00%

Abstract:

In this paper we explore ways to address the issue of dataset bias in person re-identification by using data augmentation to increase the variability of the available datasets, and we introduce a novel data augmentation method for re-identification based on changing the image background. We show that the use of data augmentation can improve the cross-dataset generalisation of convolutional-network-based re-identification systems, and that changing the image background yields further improvements.
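
A minimal sketch of the background-swap idea, assuming a binary person mask is already available (for example from a segmentation model); the function and parameter names are illustrative, not the paper's implementation:

```python
# Illustrative sketch: background-swap augmentation for re-identification.
# Assumes a binary person mask is available from an upstream segmentation step.
import numpy as np

def swap_background(person_img: np.ndarray,
                    person_mask: np.ndarray,
                    new_background: np.ndarray) -> np.ndarray:
    """Composite the masked person onto a different background image.

    person_img:     H x W x 3 uint8 image containing the person.
    person_mask:    H x W array, 1 where the person is, 0 elsewhere.
    new_background: H x W x 3 uint8 image used as the replacement background.
    """
    mask = person_mask[..., None].astype(np.float32)          # H x W x 1
    augmented = mask * person_img + (1.0 - mask) * new_background
    return augmented.astype(np.uint8)

# Usage: pair each training image with several randomly chosen backgrounds to
# increase background variability across the training set.
```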

Relevance: 30.00%

Abstract:

Background: Late-onset Alzheimer's disease (AD) is heritable with 20 genes showing genome-wide association in the International Genomics of Alzheimer's Project (IGAP). To identify the biology underlying the disease, we extended these genetic data in a pathway analysis.

Methods: The ALIGATOR and GSEA algorithms were used in the IGAP data to identify associated functional pathways and correlated gene expression networks in human brain.

Results: ALIGATOR identified an excess of curated biological pathways showing enrichment of association. Enriched areas of biology included the immune response (P = 3.27 × 10⁻¹² after multiple testing correction for pathways), regulation of endocytosis (P = 1.31 × 10⁻¹¹), cholesterol transport (P = 2.96 × 10⁻⁹), and proteasome-ubiquitin activity (P = 1.34 × 10⁻⁶). Correlated gene expression analysis identified four significant network modules, all related to the immune response (corrected P = .002-.05).

Conclusions: The immune response, regulation of endocytosis, cholesterol transport, and protein ubiquitination represent prime targets for AD therapeutics. (C) 2015 Published by Elsevier Inc. on behalf of The Alzheimer's Association.
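
As a generic illustration of the pathway-enrichment step, the sketch below tests whether association-carrying genes are over-represented in a pathway using a hypergeometric test; it is a simplified stand-in, not the ALIGATOR or GSEA implementation:

```python
# Generic pathway-enrichment sketch (hypergeometric over-representation test);
# a simplified stand-in for the kind of analysis ALIGATOR/GSEA perform.
from scipy.stats import hypergeom

def pathway_enrichment_p(associated_genes: set,
                         pathway_genes: set,
                         background_genes: set) -> float:
    """P-value for over-representation of associated genes in a pathway."""
    M = len(background_genes)                       # all genes tested
    n = len(pathway_genes & background_genes)       # pathway genes in the background
    N = len(associated_genes)                       # genes passing the association threshold
    k = len(associated_genes & pathway_genes)       # overlap with the pathway
    # Probability of observing k or more pathway genes among the associated set
    return hypergeom.sf(k - 1, M, n, N)
```

A multiple-testing correction across all pathways tested would then be applied to the resulting p-values, consistent with the corrected values reported above.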

Relevance: 30.00%

Abstract:

This paper presents a framework for a telecommunications interface which allows data from sensors embedded in Smart Grid applications to be reliably archived in an appropriate time-series database. The challenge in doing so is two-fold: first, the various formats in which sensor data are represented; second, the problems of telecoms reliability. A prototype of the authors' framework is detailed which showcases its main features in a case study featuring Phasor Measurement Units (PMUs) as the application. Useful analysis of PMU data is achieved whenever data from multiple locations can be compared on a common time axis. The prototype highlights the framework's reliability, extensibility and adoptability; features which industry standards for data representation largely defer to proprietary database solutions. The open source framework presented provides link reliability for any type of Smart Grid sensor and is interoperable with both existing proprietary and open database systems. The features of the authors' framework allow researchers and developers to focus on the core of their real-time or historical analysis applications, rather than having to spend time interfacing with complex protocols.
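
A minimal sketch of the archiving idea under stated assumptions: readings from heterogeneous sensor formats are normalised into one timestamped record and buffered until the time-series store acknowledges the write, so a temporary link outage does not lose data. The class and method names are illustrative, not the framework's API:

```python
# Illustrative sketch: normalise sensor readings to a common record and buffer
# them across link outages before writing to a time-series store.
from dataclasses import dataclass
from collections import deque

@dataclass
class Reading:
    sensor_id: str      # e.g. a PMU identifier
    timestamp: float    # common time axis (seconds since epoch)
    quantity: str       # e.g. "frequency" or "voltage_angle"
    value: float

class ReliableArchiver:
    def __init__(self, store):
        self.store = store          # any backend exposing write(record)
        self.pending = deque()      # buffer survives temporary link outages

    def archive(self, reading: Reading) -> None:
        self.pending.append(reading)
        self.flush()

    def flush(self) -> None:
        while self.pending:
            try:
                self.store.write(self.pending[0])
            except ConnectionError:
                return              # keep buffering; retry on the next call
            self.pending.popleft()
```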

Relevance: 30.00%

Abstract:

BACKGROUND: While the discovery of new drugs is a complex, lengthy and costly process, identifying new uses for existing drugs is a cost-effective approach to therapeutic discovery. Connectivity mapping integrates gene expression profiling with advanced algorithms to connect genes, diseases and small molecule compounds, and has been applied in a large number of studies to identify potential drugs, particularly to facilitate drug repurposing. Colorectal cancer (CRC) is a commonly diagnosed cancer with high mortality rates, presenting a worldwide health problem. With the advancement of high-throughput omics technologies, a number of large-scale gene expression profiling studies have been conducted on CRCs, providing multiple datasets in gene expression data repositories. In this work, we systematically apply gene expression connectivity mapping to multiple CRC datasets to identify candidate therapeutics for this disease.

RESULTS: We developed a robust method to compile a combined gene signature for colorectal cancer across multiple datasets. Connectivity mapping analysis with this signature of 148 genes identified 10 candidate compounds, including irinotecan and etoposide, which are chemotherapy drugs currently used to treat CRCs. These results indicate that we have discovered high-quality connections between the CRC disease state and the candidate compounds, and that the gene signature we created may be used as a potential therapeutic target in treating the disease. The method we proposed is highly effective in generating a quality gene signature from multiple datasets; the publication of the combined CRC gene signature and the list of candidate compounds from this work will benefit both the cancer and systems biology research communities for further development and investigations.
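
The sketch below illustrates one simple way to combine per-dataset differential-expression results into a single signature, keeping genes regulated in the same direction in every dataset; the thresholds and structure are assumptions for illustration, not the paper's exact method:

```python
# Illustrative sketch: combine per-dataset differential-expression results into
# one signature by keeping genes consistently up- or down-regulated everywhere.
def combined_signature(datasets: list[dict[str, float]],
                       min_abs_logfc: float = 1.0) -> dict[str, int]:
    """datasets: one {gene: log2 fold-change} mapping per study.
    Returns {gene: +1/-1} for genes consistently up-/down-regulated."""
    common = set.intersection(*(set(d) for d in datasets))
    signature = {}
    for gene in common:
        folds = [d[gene] for d in datasets]
        if all(f >= min_abs_logfc for f in folds):
            signature[gene] = +1          # consistently up-regulated
        elif all(f <= -min_abs_logfc for f in folds):
            signature[gene] = -1          # consistently down-regulated
    return signature
```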

Relevance: 30.00%

Abstract:

We present the Coordinated Synoptic Investigation of NGC 2264, a continuous 30 day multi-wavelength photometric monitoring campaign on more than 1000 young cluster members using 16 telescopes. The unprecedented combination of multi-wavelength, high-precision, high-cadence, and long-duration data opens a new window into the time domain behavior of young stellar objects. Here we provide an overview of the observations, focusing on results from Spitzer and CoRoT. The highlight of this work is detailed analysis of 162 classical T Tauri stars for which we can probe optical and mid-infrared flux variations to 1% amplitudes and sub-hour timescales. We present a morphological variability census and then use metrics of periodicity, stochasticity, and symmetry to statistically separate the light curves into seven distinct classes, which we suggest represent different physical processes and geometric effects. We provide distributions of the characteristic timescales and amplitudes and assess the fractional representation within each class. The largest category (>20%) is optical "dippers" with discrete fading events lasting ~1-5 days. The degree of correlation between the optical and infrared light curves is positive but weak; notably, the independently assigned optical and infrared morphology classes tend to be different for the same object. Assessment of flux variation behavior with respect to (circum)stellar properties reveals correlations of variability parameters with Hα emission and with effective temperature. Overall, our results point to multiple origins of young star variability, including circumstellar obscuration events, hot spots on the star and/or disk, accretion bursts, and rapid structural changes in the inner disk. Based on data from the Spitzer and CoRoT missions. The CoRoT space mission was developed and is operated by the French space agency CNES, with participation of ESA's RSSD and Science Programmes, Austria, Belgium, Brazil, Germany, and Spain.
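
As a toy illustration of one such symmetry metric, the function below scores how asymmetric a light curve's flux distribution is, which helps separate dipper-like from burster-like behaviour; the exact definition is an assumption for illustration, not necessarily the statistic adopted in the campaign:

```python
# Toy sketch of a flux-asymmetry metric for light-curve classification.
import numpy as np

def flux_asymmetry(mags: np.ndarray, tail_fraction: float = 0.1) -> float:
    """Score the asymmetry of a light curve given in magnitudes.

    Positive values indicate fading-dominated (dipper-like) curves, negative
    values brightening-dominated (burster-like) ones, near zero is symmetric.
    """
    mags = np.sort(mags)
    n_tail = max(1, int(tail_fraction * mags.size))
    extremes = np.concatenate([mags[:n_tail], mags[-n_tail:]])   # both tails
    return (np.mean(extremes) - np.median(mags)) / np.std(mags)
```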

Relevance: 30.00%

Abstract:

Master data management (MDM) integrates data from multiple structured data sources and builds a consolidated 360-degree view of business entities such as customers and products. Today's MDM systems are not prepared to integrate information from unstructured data sources, such as news reports, emails, call-center transcripts, and chat logs. However, those unstructured data sources may contain valuable information about the same entities known to MDM from the structured data sources. Integrating information from unstructured data into MDM is challenging as textual references to existing MDM entities are often incomplete and imprecise and the additional entity information extracted from text should not impact the trustworthiness of MDM data.

In this paper, we present an architecture for making MDM text-aware and showcase its implementation as IBM InfoSphere MDM Extension for Unstructured Text Correlation, an add-on to IBM InfoSphere Master Data Management Standard Edition. We highlight how MDM benefits from additional evidence found in documents when doing entity resolution and relationship discovery. We experimentally demonstrate the feasibility of integrating information from unstructured data sources into MDM.
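
A hedged sketch of the text-correlation idea: an incomplete textual mention is linked to candidate MDM records by fuzzy name similarity, with supporting attributes extracted from the document raising confidence without overwriting trusted MDM data. The structure, field names and thresholds are illustrative, not the product's API:

```python
# Illustrative sketch: link a textual entity mention to the best-matching MDM
# record using fuzzy name similarity plus supporting attribute evidence.
from difflib import SequenceMatcher

def best_entity_match(mention: str,
                      context_attrs: dict[str, str],
                      mdm_records: list[dict[str, str]],
                      threshold: float = 0.8):
    """Return the best-matching MDM record for a textual mention, or None."""
    best, best_score = None, 0.0
    for record in mdm_records:
        score = SequenceMatcher(None, mention.lower(),
                                record["name"].lower()).ratio()
        # Extra evidence from the document (e.g. a city or phone number)
        # raises confidence but never overwrites trusted MDM attributes.
        matches = sum(record.get(k, "").lower() == v.lower()
                      for k, v in context_attrs.items())
        score += 0.05 * matches
        if score > best_score:
            best, best_score = record, score
    return best if best_score >= threshold else None
```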

Relevance: 30.00%

Abstract:

Current data-intensive image processing applications push traditional embedded architectures to their limits. FPGA-based hardware acceleration is a potential solution, but the programmability gap and time-consuming HDL design flow are significant obstacles. The proposed research approach, an FPGA-based programmable hardware acceleration platform built from a large number of Streaming Image processing Processors (SIPPro), potentially addresses these issues. SIPPro is a pipelined, in-order soft-core processor architecture with specific optimisations for image processing applications. Each SIPPro core uses 1 DSP48, 2 Block RAMs and 370 slice registers, making the processor as compact as possible whilst maintaining flexibility and programmability. It is an area-efficient, scalable and high-performance soft-core architecture capable of delivering 530 MIPS per core using a Xilinx Zynq SoC (ZC7Z020-3). To evaluate the feasibility of the proposed architecture, a Traffic Sign Recognition (TSR) algorithm has been prototyped on a Zedboard, with the color and morphology operations accelerated using multiple SIPPros. Simulation and experimental results demonstrate that the processing platform is able to achieve speedups of 15 and 33 times for color filtering and morphology operations respectively, with significantly reduced design effort and time.

Relevance: 30.00%

Abstract:

Objective: To evaluate temporal changes in GCF levels of substance P, cathepsin G, interleukin 1 beta (IL-1β), neutrophil elastase and alpha1-antitrypsin (α1AT) during development of and recovery from experimental gingivitis.

Methods: Healthy human volunteers participated in a split-mouth study: experimental gingivitis was induced using a soft vinyl splint to cover test teeth during brushing over 21 days, after which normal brushing was resumed. Modified gingival index (MGI), gingival bleeding index (BI) and modified Quigley and Hein plaque index (PI) were assessed and 30-second GCF samples taken from 4 paired test and contra-lateral control sites in each subject at days 0, 7, 14, 21, 28 and 42. GCF volume was measured and site-specific quantification of one analyte per GCF sample was performed using radioimmunoassay (substance P), enzyme assay (cathepsin G) or ELISA (IL-1β, elastase, α1AT). Site-specific data were analysed using analysis of repeated measurements and paired sample tests.

Results: 56 subjects completed the study. All measurements at baseline (day 0) and at control sites throughout the study were low. Clinical indices and GCF volumes at the test sites increased from day 0, peaking at day 21 (difference between test and control for PI, BI, MGI and GCF all p<0.0001) and decreased again to control levels by day 28. Levels of four inflammatory markers showed a similar pattern, with significant differences between test and control apparent at 7 days (substance P p=0.0015; cathepsin G p=0.029; IL-1β p=0.026; elastase p=0.0129) and peaking at day 21 (substance P p=0.0023; cathepsin G, IL-1β and elastase all p<0.0001). Levels of α1AT showed no apparent pattern over the course of the study.

Conclusion: GCF levels of substance P, cathepsin G, IL-1β and neutrophil elastase have the potential to act as early markers of experimentally-induced gingival inflammation.

Relevance: 30.00%

Abstract:

This paper proposes a method for the real-time detection and classification of multiple events in an electrical power system: islanding, high-frequency events (loss of load) and low-frequency events (loss of generation). The method is based on principal component analysis of frequency measurements and employs a moving-window approach to combat the time-varying nature of power systems, thereby increasing overall situational awareness of the power system. Numerical case studies using both real data collected from the UK power system and simulated case studies constructed using DigSilent PowerFactory, covering islanding events as well as loss-of-load and generation-dip events, are used to demonstrate the reliability of the proposed method.
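
A minimal sketch of the moving-window principal component analysis idea, assuming a matrix of synchronised frequency measurements from multiple sites; the window length and threshold are illustrative, not the paper's tuned values:

```python
# Illustrative sketch: moving-window PCA over multi-site frequency measurements.
# An event is flagged when the newest sample's reconstruction error against the
# window's principal subspace jumps well above the window's typical error.
import numpy as np

def detect_events(freq: np.ndarray, window: int = 100,
                  n_components: int = 2, threshold: float = 5.0) -> list[int]:
    """freq: samples x sites matrix of frequency measurements.
    Returns sample indices whose reconstruction error exceeds `threshold`
    times the window's median error."""
    events = []
    for t in range(window, freq.shape[0]):
        X = freq[t - window:t]                        # sliding window
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        P = Vt[:n_components]                         # principal subspace
        resid = lambda x: np.linalg.norm((x - mu) - (x - mu) @ P.T @ P)
        baseline = np.median([resid(x) for x in X])
        if resid(freq[t]) > threshold * max(baseline, 1e-9):
            events.append(t)                          # candidate event sample
    return events
```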

Relevance: 30.00%

Abstract:

Rapid and affordable tumor molecular profiling has led to an explosion of clinical and genomic data poised to enhance the diagnosis, prognostication and treatment of cancer. A critical point has now been reached at which the analysis and storage of annotated clinical and genomic information in unconnected silos will stall the advancement of precision cancer care. Information systems must be harmonized to overcome the multiple technical and logistical barriers to data sharing. Against this backdrop, the Global Alliance for Genomics and Health (GA4GH) was established in 2013 to create a common framework that enables responsible, voluntary and secure sharing of clinical and genomic data. This Perspective from the GA4GH Clinical Working Group Cancer Task Team highlights the data-aggregation challenges faced by the field, suggests potential collaborative solutions and describes how GA4GH can catalyze a harmonized data-sharing culture.