826 results for Data mining models
Abstract:
This paper describes a novel system for automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method to identify the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems; they automatically classify a HEp-2 cell image into one of its known patterns (e.g. speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which comprises regional histograms of visual words coupled with the Multiple Kernel Learning framework. We present a study of several variations of generating histograms and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset.
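The regional-histogram idea behind CPM can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 2x2 grid layout, normalised coordinates, and L1 normalisation are assumptions made for the example.

```python
import numpy as np

def regional_histograms(word_ids, positions, n_words, grid=(2, 2)):
    """Build per-region histograms of visual words and concatenate them.

    word_ids  : (N,) visual-word index assigned to each local descriptor
    positions : (N, 2) (x, y) coordinates in [0, 1) normalised image space
    n_words   : codebook size
    grid      : spatial grid of regions (hypothetical 2x2 layout here)
    """
    gx, gy = grid
    hists = []
    for i in range(gx):
        for j in range(gy):
            in_region = (
                (positions[:, 0] >= i / gx) & (positions[:, 0] < (i + 1) / gx) &
                (positions[:, 1] >= j / gy) & (positions[:, 1] < (j + 1) / gy)
            )
            h = np.bincount(word_ids[in_region], minlength=n_words).astype(float)
            h /= max(h.sum(), 1.0)   # L1-normalise each regional histogram
            hists.append(h)
    return np.concatenate(hists)     # one feature vector per cell image

# toy example: 4 descriptors, codebook of 3 words, 2x2 grid
words = np.array([0, 1, 2, 0])
pos = np.array([[0.1, 0.1], [0.6, 0.2], [0.2, 0.7], [0.8, 0.9]])
feat = regional_histograms(words, pos, n_words=3)
```

In a multiple-kernel setting, each region's histogram (or each pyramid level) would feed its own kernel, with the kernel combination learned from data.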
Abstract:
Recent advances in computer vision and machine learning suggest that a wide range of problems can be addressed more appropriately by considering non-Euclidean geometry. In this paper we explore sparse dictionary learning over the space of linear subspaces, which form Riemannian structures known as Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into the space of symmetric matrices by an isometric mapping, which enables us to devise a closed-form solution for updating a Grassmann dictionary, atom by atom. Furthermore, to handle non-linearity in data, we propose a kernelised version of the dictionary learning algorithm. Experiments on several classification tasks (face recognition, action recognition, dynamic texture classification) show that the proposed approach achieves considerable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as kernelised Affine Hull Method and graph-embedding Grassmann discriminant analysis.
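The embedding used above can be made concrete: a point on a Grassmann manifold (a subspace, represented by an orthonormal basis Y) maps to the symmetric matrix YY^T, and distances can then be measured in the Frobenius norm. The sketch below shows only this embedding step, not the dictionary update or the kernelised variant.

```python
import numpy as np

def grassmann_embed(Y):
    """Map a subspace (orthonormal basis Y, shape n x p) to the symmetric
    matrix Y Y^T. The embedding depends only on the subspace, not on the
    particular basis chosen."""
    return Y @ Y.T

def projection_distance(Y1, Y2):
    """Distance between two subspaces via the symmetric-matrix embedding
    (up to a constant factor, the projection metric on the Grassmannian)."""
    return np.linalg.norm(grassmann_embed(Y1) - grassmann_embed(Y2), ord="fro")

# toy example: two 1-dimensional subspaces of R^3
Y1 = np.array([[1.0], [0.0], [0.0]])
Y2 = np.array([[0.0], [1.0], [0.0]])
d = projection_distance(Y1, Y2)
```

Because the embedded points live in a linear space of symmetric matrices, standard sparse-coding machinery can be applied, which is what enables the closed-form atom-by-atom dictionary update described in the abstract.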
Abstract:
Person re-identification is particularly challenging due to significant appearance changes across separate camera views. In order to re-identify people, a representative human signature should effectively handle differences in illumination, pose and camera parameters. While general appearance-based methods are modelled in Euclidean spaces, it has been argued that some applications in image and video analysis are better modelled via non-Euclidean manifold geometry. To this end, recent approaches represent images as covariance matrices, and interpret such matrices as points on Riemannian manifolds. As direct classification on such manifolds can be difficult, in this paper we propose to represent each manifold point as a vector of similarities to class representers, via a recently introduced form of Bregman matrix divergence known as the Stein divergence. This is followed by using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of embedding manifolds into tangent spaces, which can suffer from representing the manifold structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS datasets for the person re-identification task show that the proposed approach obtains better performance than recent techniques such as Histogram Plus Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local Features.
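The Stein divergence named above has a closed form for symmetric positive definite (SPD) matrices A and B: S(A, B) = log det((A + B)/2) - (1/2) log det(AB). A minimal sketch of the divergence and of the similarity-vector representation follows; the choice of class representers is left abstract.

```python
import numpy as np

def stein_divergence(A, B):
    """Stein (S) divergence between SPD matrices:
        S(A, B) = log det((A + B) / 2) - 0.5 * log det(A B)
    Symmetric, and zero iff A == B."""
    _, logdet_mid = np.linalg.slogdet((A + B) / 2.0)
    _, logdet_a = np.linalg.slogdet(A)
    _, logdet_b = np.linalg.slogdet(B)
    return logdet_mid - 0.5 * (logdet_a + logdet_b)

def similarity_vector(X, representers):
    """Represent a covariance descriptor X as a vector of similarities
    (negative divergences) to a set of class representers -- a sketch of
    the representation described above; how representers are selected per
    class is an assumption left out of this example."""
    return np.array([-stein_divergence(X, R) for R in representers])

I = np.eye(3)
sv = similarity_vector(I, [I, 2 * I])   # I is most similar to itself
```

The resulting similarity vectors live in an ordinary Euclidean space, so any discriminative mapping (e.g. a linear classifier) can be applied for the final decision.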
Abstract:
Background: Extreme heat is a leading weather-related cause of illness and death in many locations across the globe, including subtropical Australia. The possibility of increasingly frequent and severe heat waves warrants continued efforts to reduce this health burden, which could be accomplished by targeting intervention measures toward the most vulnerable communities. Objectives: We sought to quantify spatial variability in heat-related morbidity in Brisbane, Australia, to highlight regions of the city with the greatest risk. We also aimed to find area-level social and environmental determinants of high risk within Brisbane. Methods: We used a series of hierarchical Bayesian models to examine city-wide and intracity associations between temperature and morbidity using a 2007–2011 time series of geographically referenced hospital admissions data. The models accounted for long-term time trends, seasonality, and day of week and holiday effects. Results: On average, a 10°C increase in daily maximum temperature during the summer was associated with a 7.2% increase in hospital admissions (95% CI: 4.7, 9.8%) on the following day. Positive statistically significant relationships between admissions and temperature were found for 16 of the city’s 158 areas; negative relationships were found for 5 areas. High-risk areas were associated with a lack of high income earners and higher population density. Conclusions: Geographically targeted public health strategies for extreme heat may be effective in Brisbane, because morbidity risk was found to be spatially variable. Emergency responders, health officials, and city planners could focus on short- and long-term intervention measures that reach communities in the city with lower incomes and higher population densities, including reduction of urban heat island effects.
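The quoted "7.2% increase per 10°C" arises naturally if the temperature-morbidity association is log-linear, as is standard in such count-based time-series models: expected admissions scale as exp(beta * temperature). A small sketch, with beta chosen purely for illustration:

```python
import math

def percent_increase(beta_per_degree, delta_temp):
    """Under a log-linear (e.g. Poisson) model, a temperature rise of
    delta_temp multiplies expected admissions by exp(beta * delta_temp);
    return that effect as a percentage increase."""
    return (math.exp(beta_per_degree * delta_temp) - 1.0) * 100.0

# illustrative coefficient chosen so a 10 degree rise gives +7.2%
beta = math.log(1.072) / 10.0
inc = percent_increase(beta, 10.0)
```

In the paper's hierarchical setting, each of the 158 areas would receive its own coefficient, partially pooled toward the city-wide estimate.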
Abstract:
Through the application of process mining, valuable evidence-based insights can be obtained about business processes in organisations. As a result, the field has seen an increased uptake in recent years, as evidenced by success stories and increased tool support. However, despite this impact, current performance analysis capabilities remain somewhat limited in the context of information-poor event logs. For example, natural daily and weekly patterns are not considered. In this paper a new framework for analysing event logs is defined which is based on the concept of event gap. The framework allows for a systematic approach to sophisticated performance-related analysis of event logs containing varying degrees of information. The paper formalises a range of event gap types and then presents an implementation as well as an evaluation of the proposed approach.
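The simplest event-gap type can be sketched directly: the time between consecutive events of the same case. This is an illustration of the basic concept only; the (case id, activity, timestamp) log format and the single gap type shown are assumptions, and the paper's framework covers a richer range of gap types.

```python
from datetime import datetime

def event_gaps(log):
    """Compute the gap between consecutive events of each case.

    log : iterable of (case_id, activity, timestamp) tuples
    Returns a list of (case_id, from_activity, to_activity, gap_seconds).
    """
    by_case = {}
    for case_id, activity, ts in log:
        by_case.setdefault(case_id, []).append((ts, activity))
    gaps = []
    for case_id, events in by_case.items():
        events.sort()  # order each case's events by timestamp
        for (t1, a1), (t2, a2) in zip(events, events[1:]):
            gaps.append((case_id, a1, a2, (t2 - t1).total_seconds()))
    return gaps

log = [
    ("c1", "register", datetime(2024, 1, 1, 9, 0)),
    ("c1", "approve",  datetime(2024, 1, 1, 9, 30)),
    ("c1", "pay",      datetime(2024, 1, 1, 11, 0)),
]
gaps = event_gaps(log)
```

Aggregating such gaps by activity pair, weekday, or hour of day is what makes patterns like daily and weekly rhythms visible even in logs that record little beyond timestamps.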
Abstract:
Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which places limitations on their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia by using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error=1.4 ppb), while the best monthly model explained 76% (absolute RMS error=1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models were capable of capturing within-urban variability in NO2, and offer the ability to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006 to 2011. We are making our model predictions freely available for research.
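The population-weighted national averages quoted above are formed by weighting each spatial unit's predicted concentration by its population. A minimal sketch with made-up numbers:

```python
import numpy as np

def population_weighted_mean(conc, pop):
    """Population-weighted average concentration over spatial units
    (e.g. census mesh blocks): sum(pop_i * conc_i) / sum(pop_i)."""
    conc = np.asarray(conc, dtype=float)
    pop = np.asarray(pop, dtype=float)
    return float(np.sum(conc * pop) / np.sum(pop))

# three hypothetical mesh blocks: densely populated blocks dominate
no2_ppb = [9.0, 6.0, 2.0]
population = [800, 150, 50]
avg = population_weighted_mean(no2_ppb, population)
```

This is why the national average tracks urban exposure rather than the (mostly low-NO2) land area of the country.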
Abstract:
Early transcriptional activation events that occur in bladder immediately following bacterial urinary tract infection (UTI) are not well defined. In this study, we describe the whole bladder transcriptome of uropathogenic Escherichia coli (UPEC) cystitis in mice using genome-wide expression profiling to define the transcriptome of innate immune activation stemming from UPEC colonization of the bladder. Bladder RNA from female C57BL/6 mice, analyzed using 1.0 ST-Affymetrix microarrays, revealed extensive activation of diverse sets of innate immune response genes, including those that encode multiple IL-family members, receptors, metabolic regulators, MAPK activators, and lymphocyte signaling molecules. These were among 1564 genes differentially regulated at 2 h postinfection, highlighting a rapid and broad innate immune response to bladder colonization. Integrative systems-level analyses using InnateDB (http://www.innatedb.com) bioinformatics and ingenuity pathway analysis identified multiple distinct biological pathways in the bladder transcriptome with extensive involvement of lymphocyte signaling, cell cycle alterations, cytoskeletal, and metabolic changes. A key regulator of IL activity identified in the transcriptome was IL-10, which was analyzed functionally to reveal marked exacerbation of cystitis in IL-10–deficient mice. Studies of clinical UTI revealed significantly elevated urinary IL-10 in patients with UPEC cystitis, indicating a role for IL-10 in the innate response to human UTI. The whole bladder transcriptome presented in this work provides new insight into the diversity of innate factors that determine UTI on a genome-wide scale and will be valuable for further data mining. Identification of protective roles for other elements in the transcriptome will provide critical new insight into the complex cascade of events that underpin UTI.
Abstract:
Determination of sequence similarity is a central issue in computational biology, a problem addressed primarily through BLAST, an alignment based heuristic which has underpinned much of the analysis and annotation of the genomic era. Despite their success, alignment-based approaches scale poorly with increasing data set size, and are not robust under structural sequence rearrangements. Successive waves of innovation in sequencing technologies – so-called Next Generation Sequencing (NGS) approaches – have led to an explosion in data availability, challenging existing methods and motivating novel approaches to sequence representation and similarity scoring, including adaptation of existing methods from other domains such as information retrieval. In this work, we investigate locality-sensitive hashing of sequences through binary document signatures, applying the method to a bacterial protein classification task. Here, the goal is to predict the gene family to which a given query protein belongs. Experiments carried out on a pair of small but biologically realistic datasets (the full protein repertoires of families of Chlamydia and Staphylococcus aureus genomes respectively) show that a measure of similarity obtained by locality-sensitive hashing gives highly accurate results while offering a number of avenues which will lead to substantial performance improvements over BLAST.
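One standard way to realise locality-sensitive binary signatures is random-hyperplane hashing over bag-of-k-mers vectors: agreement on signature bits approximates cosine similarity of the underlying vectors. The sketch below illustrates the general technique, not the paper's specific signature scheme; the k-mer size, bucket count, and 64-bit signature length are assumptions.

```python
import zlib
import numpy as np

def kmer_vector(seq, k=3, dim=1024):
    """Bag-of-k-mers vector for a protein sequence, hashed into a fixed
    number of buckets (crc32 keeps the sketch deterministic)."""
    v = np.zeros(dim)
    for i in range(len(seq) - k + 1):
        v[zlib.crc32(seq[i:i + k].encode()) % dim] += 1
    return v

def binary_signature(v, planes):
    """Locality-sensitive binary signature: one bit per random hyperplane,
    set by the sign of the projection. Similar vectors agree on most bits."""
    return (planes @ v >= 0).astype(np.uint8)

def hamming_similarity(s1, s2):
    return 1.0 - float(np.mean(s1 != s2))

rng = np.random.default_rng(0)
planes = rng.standard_normal((64, 1024))   # 64-bit signatures

sig_a = binary_signature(kmer_vector("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"), planes)
sig_b = binary_signature(kmer_vector("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"), planes)
sig_c = binary_signature(kmer_vector("GGGGGGPPPPPPWWWWWWCCCCCC"), planes)
```

Because comparing fixed-length bit strings is a handful of XOR/popcount operations, candidate matches can be screened orders of magnitude faster than running an alignment, which is the scaling advantage over BLAST-style search.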
Abstract:
Text is the main method of communicating information in the digital age. Messages, blogs, news articles, reviews, and opinionated information abound on the Internet. People commonly purchase products online and post their opinions about purchased items. This feedback is displayed publicly to assist others with their purchasing decisions, creating the need for a mechanism with which to extract and summarize useful information for enhancing the decision-making process. Our contribution is to improve the accuracy of extraction by combining different techniques from three major areas, namely Data Mining, Natural Language Processing, and Ontologies. The proposed framework sequentially mines product aspects and user opinions, groups representative aspects by similarity, and generates an output summary. This paper focuses on the task of extracting product aspects and user opinions by extracting all possible aspects and opinions from reviews using natural language, ontology, and frequent “tag” sets. The proposed framework, when compared with an existing baseline model, yielded promising results.
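The frequency-based core of aspect extraction can be sketched as follows. This is a simplification: the hand-given candidate noun list stands in for the POS-tagging and ontology stages, and the support threshold is an assumption.

```python
from collections import Counter

def frequent_aspects(reviews, candidate_nouns, min_support=0.3):
    """Keep candidate aspect terms that occur in at least `min_support`
    of the reviews -- the frequency-filtering step of aspect mining."""
    n = len(reviews)
    counts = Counter()
    for review in reviews:
        tokens = set(review.lower().split())
        for noun in candidate_nouns:
            if noun in tokens:
                counts[noun] += 1
    return sorted(a for a, c in counts.items() if c / n >= min_support)

reviews = [
    "the battery life is great but the screen is dim",
    "battery lasts two days",
    "screen resolution is superb , battery could be better",
    "love the camera",
]
aspects = frequent_aspects(reviews, ["battery", "screen", "camera", "price"])
```

In the full framework, the surviving aspects would then be grouped by similarity and paired with the opinion words expressed about them before summarisation.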
Abstract:
Due to the popularity of security cameras in public places, it is of interest to design an intelligent system that can efficiently detect events automatically. This paper proposes a novel algorithm for multi-person event detection. To ensure greater than real-time performance, features are extracted directly from compressed MPEG video. A novel histogram-based feature descriptor that captures the angles between extracted particle trajectories is proposed, which allows us to capture motion patterns of multi-person events in the video. To alleviate the need for fine-grained annotation, we propose the use of Labelled Latent Dirichlet Allocation, a “weakly supervised” method that allows the use of coarse temporal annotations which are much simpler to obtain. This novel system is able to run at approximately ten times real-time, while preserving state-of-the-art detection performance for multi-person events on a 100-hour real-world surveillance dataset (TRECVid SED).
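The angle-histogram descriptor can be sketched in its simplest form: estimate a direction per trajectory, compute pairwise angles, and histogram them. This is an illustration of the idea only; the direction estimate (net displacement) and the binning scheme are assumptions, not the paper's exact descriptor.

```python
import numpy as np

def angle_histogram(trajectories, n_bins=8):
    """Normalised histogram of pairwise angles between trajectory
    direction vectors.

    trajectories : list of (T, 2) arrays of tracked point positions
    """
    dirs = []
    for traj in trajectories:
        d = traj[-1] - traj[0]               # net displacement as direction
        norm = np.linalg.norm(d)
        if norm > 0:
            dirs.append(d / norm)
    angles = []
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            cos = np.clip(np.dot(dirs[i], dirs[j]), -1.0, 1.0)
            angles.append(np.arccos(cos))    # angle in [0, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(0, np.pi))
    return hist / max(hist.sum(), 1)

# two people approaching each other, a third walking perpendicular
t1 = np.array([[0.0, 0.0], [1.0, 0.0]])     # heading +x
t2 = np.array([[5.0, 0.0], [4.0, 0.0]])     # heading -x
t3 = np.array([[0.0, 5.0], [0.0, 4.0]])     # heading -y
h = angle_histogram([t1, t2, t3])
```

Mass in the near-pi bins signals people converging or crossing (e.g. a meeting event), while mass in the near-zero bins signals a group moving together.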
Abstract:
The integration of Information and Communication Technologies (ICT) into healthcare processes (“eHealth”) is driving enormous change in healthcare delivery and productivity. The transformations empower patients and present opportunities for new synergies between healthcare professionals, clinical decision makers, policy makers and educators. Technologies that are directly driving changes include telemedicine, electronic health records (EHR), standards to ensure computer systems interoperate, decision support systems, data mining and easy access to medical information. This workshop provides an introduction to key informatics initiatives in eHealth using real examples and suggests how applications can be applied to modern society.
Abstract:
Information and Communication Technologies are dramatically transforming Allopathic medicine. Technological developments including telemedicine, electronic health records, standards to ensure computer systems interoperate, data mining, simulation, decision support and easy access to medical information each contribute to empowering patients in new ways and change the practice of medicine. To date, informatics has had little impact on Ayurvedic medicine. This tutorial provides an introduction to key informatics initiatives in Allopathic medicine using real examples and suggests how applications can be applied to Ayurvedic medicine.
Abstract:
This paper outlines the approach taken by the Speech, Audio, Image and Video Technologies laboratory, and the Applied Data Mining Research Group (SAIVT-ADMRG) in the 2014 MediaEval Social Event Detection (SED) task. We participated in the event based clustering subtask (subtask 1), and focused on investigating the incorporation of image features as another source of data to aid clustering. In particular, we developed a descriptor based on super-pixel segmentation that allows a low-dimensional feature incorporating both colour and texture information to be extracted and used within the popular bag-of-visual-words (BoVW) approach.
Abstract:
The use of ‘topic’ concepts has been shown to improve search performance, given a query, by bringing together relevant documents which use different terms to describe a higher-level concept. In this paper, we propose a method for discovering and utilizing concepts in indexing and search for a domain-specific document collection being utilized in industry. This approach differs from others in that we only collect focused concepts to build the concept space and that, instead of turning a user’s query into a concept-based query, we experiment with different techniques of combining the original query with a concept query. We apply the proposed approach to a real-world document collection and the results show that in this scenario the use of concept knowledge at index and search time can improve the relevancy of results.
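One simple way to combine an original term query with a concept query is linear interpolation of their per-document scores. This is an illustrative instance of the "combine the original query with a concept query" idea, not the paper's specific technique; the weight alpha and the toy score values are assumptions.

```python
def combined_scores(query_scores, concept_scores, alpha=0.6):
    """Interpolate per-document scores from the original term query and
    from the concept-based query; documents missing from either list
    contribute a score of zero for that component."""
    docs = set(query_scores) | set(concept_scores)
    return {
        d: alpha * query_scores.get(d, 0.0) + (1 - alpha) * concept_scores.get(d, 0.0)
        for d in docs
    }

q = {"d1": 0.9, "d2": 0.4}   # term-matching scores (hypothetical)
c = {"d2": 0.8, "d3": 0.7}   # concept-matching scores (hypothetical)
ranked = sorted(combined_scores(q, c).items(), key=lambda kv: -kv[1])
```

Here d2 overtakes d1 because it scores moderately on both components, which is exactly the effect sought: documents that match the concept but not the literal query terms are no longer lost.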