256 results for tumor classification
Abstract:
Purpose The role played by the innate immune system in determining survival from non-small-cell lung cancer (NSCLC) is unclear. The aim of this study was to investigate the prognostic significance of macrophage and mast-cell infiltration in NSCLC. Methods We used immunohistochemistry to identify tryptase+ mast cells and CD68+ macrophages in the tumor stroma and tumor islets in 175 patients with surgically resected NSCLC. Results Macrophages were detected in both the tumor stroma and islets in all patients. Mast cells were detected in the stroma and islets in 99.4% and 68.5% of patients, respectively. Using multivariate Cox proportional hazards analysis, increasing tumor islet macrophage density (P < .001) and tumor islet/stromal macrophage ratio (P < .001) emerged as favorable independent prognostic indicators. In contrast, increasing stromal macrophage density was an independent predictor of reduced survival (P = .001). The presence of tumor islet mast cells (P = .018) and increasing islet/stromal mast-cell ratio (P = .032) were also favorable independent prognostic indicators. Macrophage islet density showed the strongest effect: 5-year survival was 52.9% in patients with an islet macrophage density greater than the median versus 7.7% in patients with a density below the median (P < .0001). In the same groups, respectively, median survival was 2,244 versus 334 days (P < .0001). Patients with a high islet macrophage density but incomplete resection survived markedly longer than patients with a low islet macrophage density but complete resection. Conclusion The tumor islet CD68+ macrophage density is a powerful independent predictor of survival from surgically resected NSCLC. The biologic explanation for this, and its implications for the use of adjunctive treatment, require further study. © 2005 by American Society of Clinical Oncology.
Abstract:
Background L-type amino acid transporters (LATs) take up neutral amino acids including L-leucine into cells, stimulating mammalian target of rapamycin complex 1 signaling and protein synthesis. LAT1 and LAT3 are overexpressed at different stages of prostate cancer, and they are responsible for increasing nutrients and stimulating cell growth. Methods We examined LAT3 protein expression in human prostate cancer tissue microarrays. LAT function was inhibited using a leucine analog (BCH) in androgen-dependent and -independent environments, with gene expression analyzed by microarray. A PC-3 xenograft mouse model was used to study the effects of inhibiting LAT1 and LAT3 expression. Results were analyzed with the Mann-Whitney U or Fisher exact tests. All statistical tests were two-sided. Results LAT3 protein was expressed at all stages of prostate cancer, with a statistically significant decrease in expression after 4–7 months of neoadjuvant hormone therapy (4–7 month mean = 1.571; 95% confidence interval = 1.155 to 1.987 vs 0 month = 2.098; 95% confidence interval = 1.962 to 2.235; P = .0187). Inhibition of LAT function led to activating transcription factor 4–mediated upregulation of amino acid transporters including ASCT1, ASCT2, and 4F2hc, all of which were also regulated via the androgen receptor. LAT inhibition suppressed M-phase cell cycle genes regulated by E2F family transcription factors, including the critical castration-resistant prostate cancer regulatory genes UBE2C, CDC20, and CDK1. In silico analysis of BCH-downregulated genes showed that 90.9% are statistically significantly upregulated in metastatic castration-resistant prostate cancer. Finally, LAT1 or LAT3 knockdown in xenografts inhibited tumor growth, cell cycle progression, and spontaneous metastasis in vivo. Conclusion Inhibition of LAT transporters may provide a novel therapeutic target in metastatic castration-resistant prostate cancer, via suppression of mammalian target of rapamycin complex 1 activity and M-phase cell cycle genes.
Abstract:
Textual documents have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for information organisation and management. It has attracted wide attention from researchers across different fields. This paper first introduces feature selection methods, implementation algorithms, and applications of text classification. However, the knowledge extracted by current data-mining techniques for text classification contains considerable noise, which introduces uncertainty into both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve classification performance. Further improving knowledge extraction, and effectively utilising the extracted knowledge, remains a critical and challenging step. A Rough Set decision-making approach is proposed, which uses Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and a decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to introduce an evaluation metric, named CEI, that is effective for assessing the performance of similar research; and to propose promising research directions for the challenging problems in text classification, text mining, and related fields.
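The abstract above does not give implementation details, but the Rough Set machinery it relies on reduces to lower and upper approximations of a target concept under an indiscernibility relation. A minimal, self-contained sketch of that core idea (the documents, features, and labels below are invented for illustration):

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Compute Rough Set lower/upper approximations of `target`
    with respect to the indiscernibility relation induced by `attributes`."""
    # Group objects into equivalence classes by their attribute values.
    classes = defaultdict(set)
    for obj, features in objects.items():
        key = tuple(features[a] for a in attributes)
        classes[key].add(obj)
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= target:   # class lies entirely inside the concept: certain members
            lower |= eq
        if eq & target:    # class overlaps the concept: possible members
            upper |= eq
    return lower, upper

# Toy documents described by two boolean features.
docs = {
    "d1": {"sports": 1, "finance": 0},
    "d2": {"sports": 1, "finance": 0},
    "d3": {"sports": 0, "finance": 1},
}
target = {"d1", "d3"}  # documents labelled as relevant
lower, upper = approximations(docs, ["sports", "finance"], target)
# d1 and d2 are indiscernible but only d1 is relevant, so the boundary
# region upper - lower = {d1, d2} captures exactly the hard-to-separate documents.
```

Documents falling in the boundary region are the ones classic classifiers struggle to separate, which is where the decision-making approach described above is aimed.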
Abstract:
The detection and correction of defects remain among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data are often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
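The exact construction of the rank sum representation is not spelled out in the abstract. One plausible reading, sketched below under that assumption, replaces each raw metric value by its rank across all modules and sums the ranks per module into a single score that can be thresholded to trade precision against recall:

```python
def rank_sum_features(metric_rows):
    """Replace each raw software metric by its rank across all modules,
    then sum the ranks per module into a single fault-proneness score.
    (An illustrative reading of a 'rank sum representation'; the paper's
    exact construction may differ, and ties are not handled here.)"""
    n_metrics = len(metric_rows[0])
    ranks = [[0] * n_metrics for _ in metric_rows]
    for j in range(n_metrics):
        # Order module indices by the j-th metric; smaller value = lower rank.
        order = sorted(range(len(metric_rows)), key=lambda i: metric_rows[i][j])
        for rank, i in enumerate(order, start=1):
            ranks[i][j] = rank
    return [sum(row) for row in ranks]

# Three modules, two metrics (e.g. lines of code, cyclomatic complexity).
modules = [(120, 4), (300, 9), (50, 2)]
scores = rank_sum_features(modules)  # one score per module
# Sweeping a threshold over these scores yields the precision/recall
# trade-off used to budget inspection effort.
```

Ranking makes the representation robust to the extreme metric-value noise the abstract mentions, since only relative order survives.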
Abstract:
Object classification is plagued by the issue of session variation. Session variation describes any variation that makes one instance of an object look different from another, for instance due to pose or illumination changes. Recent work in the challenging task of face verification has shown that session variability modelling provides a mechanism to overcome some of these limitations; however, for computer vision purposes it has only been applied in the limited setting of face verification. In this paper we propose a local region-based intersession variability (ISV) modelling approach, termed Local ISV, so that local session variations can be modelled, and we apply it to challenging real-world data. We demonstrate the efficacy of this technique on a challenging real-world fish image database that includes images taken underwater, providing significant real-world session variations. The Local ISV approach provides a relative performance improvement of, on average, 23% on the challenging MOBIO, Multi-PIE and SCface face databases. It also provides a relative performance improvement of 35% on our challenging fish image dataset.
Abstract:
Debilitating infectious diseases caused by Chlamydia are major contributors to the decline of Australia's iconic native marsupial species, the koala (Phascolarctos cinereus). An understanding of koala chlamydial disease pathogenesis and the development of effective strategies to control infections continue to be hindered by an almost complete lack of species-specific immunological reagents. The cell-mediated immune response has been shown to play an influential role in the response to chlamydial infection in other hosts. The objective of this study was therefore to provide preliminary data on the role of two key cytokines, pro-inflammatory tumour necrosis factor alpha (TNFα) and anti-inflammatory interleukin 10 (IL10), in the koala response to Chlamydia pecorum. Utilising sequence homology between the cytokine sequences obtained from several recently sequenced marsupial genomes, this report describes the first mRNA sequences of any koala cytokine and the development of koala-specific TNFα and IL10 real-time PCR assays to measure the expression of these genes in koala samples. In preliminary studies comparing wild koalas with overt chlamydial disease, previous evidence of C. pecorum infection, or no signs of C. pecorum infection, we revealed strong but variable expression of TNFα and IL10 in wild koalas with current signs of chlamydiosis. The description of these assays and the preliminary data on the cell-mediated immune response of koalas to chlamydial infection pave the way for future studies characterising the koala immune response to a range of its pathogens, while providing reagents to assist with measuring the efficacy of ongoing attempts to develop a koala chlamydial vaccine.
Abstract:
A cell classification algorithm that uses first-, second- and third-order statistics of pixel intensity distributions over pre-defined regions is implemented and evaluated. A cell image is segmented into 6 regions extending from a boundary layer to an inner circle. First-, second- and third-order statistical features are extracted from histograms of pixel intensities in these regions; the third-order statistical features used are one-dimensional bispectral invariants. A total of 108 features were considered as candidates for AdaBoost-based fusion. The best 10-stage fused classifier was selected for each class and a decision tree constructed for the 6-class problem. The classifier is robust, accurate and fast by design.
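The first-, second- and third-order histogram statistics mentioned above correspond to moments such as mean, variance and skewness of a region's pixel-intensity distribution. A minimal sketch of those per-region features (the bispectral invariants and the AdaBoost fusion stage are not reproduced here, and the region values are invented):

```python
import math

def histogram_statistics(pixels):
    """First-, second- and third-order statistics (mean, variance, skewness)
    of a region's pixel-intensity distribution -- the kind of per-region
    feature fused by boosting in the classifier described above."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    # Skewness is the standardised third central moment (0 for flat regions).
    skew = 0.0 if std == 0 else sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    return mean, var, skew

# One toy region; a bright outlier pixel (30) makes the distribution right-skewed.
region = [10, 12, 11, 30, 12, 11]
mean, var, skew = histogram_statistics(region)
# Features from each of the 6 concentric regions would be concatenated
# before boosting-based feature selection.
```

Concatenating these three moments over 6 regions already yields 18 of the candidate features; higher-order invariants extend the pool further.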
Abstract:
Real-time image analysis and classification onboard robotic marine vehicles, such as AUVs, is a key step in the realisation of adaptive mission planning for large-scale habitat mapping in previously unexplored environments. This paper describes a novel technique to train, process, and classify images collected onboard an AUV operated in relatively shallow waters with poor visibility and non-uniform lighting. The approach utilises Förstner feature detectors and Laws texture energy masks for image characterisation, and a bag-of-words approach for feature recognition. To improve classification performance we propose a usefulness gain to learn the importance of each histogram component for each class. Experimental results illustrate the performance of the system in characterising a variety of marine habitats and its ability to run on an AUV's main processor, making it suitable for real-time mission planning.
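The abstract does not define the usefulness gain precisely; the sketch below only illustrates the general idea of weighting each bag-of-words histogram component per class during nearest-model classification. The class models, weights, and histogram values are all invented placeholders:

```python
def classify(histogram, class_models, weights):
    """Weighted nearest-class assignment for a bag-of-words histogram.
    `weights[c][k]` plays the role of a learned per-class importance of
    histogram component k (a stand-in for the paper's 'usefulness gain';
    the actual learning rule is not reproduced here)."""
    def score(c):
        model = class_models[c]
        return sum(weights[c][k] * abs(histogram[k] - model[k])
                   for k in range(len(histogram)))
    return min(class_models, key=score)  # lowest weighted distance wins

# Two habitat classes described by 3-bin visual-word histograms.
class_models = {"sand": [0.7, 0.2, 0.1], "reef": [0.1, 0.3, 0.6]}
weights = {"sand": [1.0, 0.5, 1.0], "reef": [1.0, 0.5, 1.0]}
print(classify([0.6, 0.3, 0.1], class_models, weights))  # prints sand
```

Down-weighting uninformative histogram components in this way is what lets the classifier tolerate the lighting and visibility variation described above.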
Abstract:
Objective To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Method Scanned images of pathology reports were converted to electronic free text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with those from a human-amended version of the OCR reports. Results The OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site and histological type. Conclusions The automatic cancer classification system used in this work, MEDTEX, has proven robust to errors introduced by acquiring free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
Abstract:
Objective: To develop a system for the automatic classification of pathology reports for Cancer Registry notifications. Method: A two-pass approach is proposed to classify whether pathology reports are cancer notifiable or not. The first pass queries pathology HL7 messages for known report types that are received by the Queensland Cancer Registry (QCR), while the second pass aims to analyse the free-text reports and identify those that are cancer notifiable. Cancer Registry business rules, natural language processing and symbolic reasoning using the SNOMED CT ontology were adopted in the system. Results: The system was developed on a corpus of 500 histology and cytology reports (with 47% notifiable reports) and evaluated on an independent set of 479 reports (with 52% notifiable reports). Results show that the system can reliably classify cancer notifiable reports with a sensitivity, specificity, and positive predictive value (PPV) of 0.99, 0.95, and 0.95, respectively, for the development set, and 0.98, 0.96, and 0.96 for the evaluation set. High sensitivity can be achieved at a slight expense in specificity and PPV. Conclusion: The system demonstrates how medical free-text processing enables the classification of cancer notifiable pathology reports with high reliability for potential use by Cancer Registries and pathology laboratories.
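The two-pass structure described above can be sketched roughly as follows; the report-type codes and keyword list are invented placeholders, not QCR business rules or the system's actual SNOMED CT reasoning:

```python
def classify_report(report, known_notifiable_types, notifiable_keywords):
    """Two-pass triage of a pathology report, loosely following the
    structure described above: structured report type first, free text
    second. All codes and keywords here are hypothetical examples."""
    # Pass 1: decide from the structured report type when it is known.
    if report["type"] in known_notifiable_types:
        return True
    # Pass 2: fall back to analysing the free-text body.
    text = report["text"].lower()
    return any(kw in text for kw in notifiable_keywords)

report = {"type": "HIST-UNKNOWN",
          "text": "Findings consistent with adenocarcinoma."}
print(classify_report(report, {"HIST-MELANOMA"}, {"carcinoma", "sarcoma"}))  # prints True
```

Splitting the decision this way keeps the cheap structured check in front of the more expensive free-text analysis, which is likely why sensitivity can be tuned with only a slight specificity cost.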
Abstract:
The aim of this research is to report initial experimental results and the evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and vast amounts of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While the method achieves promising performance, a number of avenues for improvement involving advanced natural language processing (NLP) techniques were identified.
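A minimal sketch of such a rule-based keyword method, assuming whole-word matching over lower-cased report text (the keyword list is an invented example, not the clinicians' actual list):

```python
import re

def flag_abnormality(report_text, keywords):
    """Flag a radiology report when any clinician-specified keyword occurs
    as a whole word. Whole-word matching avoids spurious hits inside
    longer tokens; negation (e.g. 'no fracture') is not handled here,
    which is one of the NLP improvements the abstract alludes to."""
    text = report_text.lower()
    return any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in keywords)

keywords = {"fracture", "dislocation", "avulsion"}
print(flag_abnormality("Transverse fracture of the distal radius.", keywords))  # True
print(flag_abnormality("No acute bony abnormality.", keywords))  # False
```

The second example also shows the method's main failure mode: without negation handling, a report saying "no fracture seen" would be flagged as abnormal.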
Abstract:
Objective To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports. Materials and Methods 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports. Results Results show that the Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, as well as using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous work that attempted to classify free-text radiology reports. Discussion Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders. Conclusion This investigation shows early promising results, and future work will further validate and strengthen the proposed approaches.
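The stemmed-token-bigram and negation features described above can be sketched roughly as follows; the crude suffix stemmer, the negation cue list, and the unbounded negation scope are all simplifying assumptions, not the paper's actual tools:

```python
def extract_features(report, negation_cues=frozenset({"no", "not", "without"})):
    """Stemmed-token bigram features with a simple negation marker, in the
    spirit of the features described above. A real system would use a
    proper stemmer and bound the negation scope (here it runs to the end
    of the report for brevity)."""
    def stem(token):
        # Crude suffix stripping as a stand-in for a real stemmer.
        for suffix in ("ing", "ed", "es", "s"):
            if token.endswith(suffix) and len(token) > len(suffix) + 2:
                return token[: -len(suffix)]
        return token

    tokens, negated = [], False
    for raw in report.lower().split():
        word = raw.strip(".,;:")
        if word in negation_cues:
            negated = True
            continue
        tokens.append(("NEG_" if negated else "") + stem(word))
    return list(zip(tokens, tokens[1:]))  # bigrams over (negation-marked) stems

feats = extract_features("No fractures identified; soft tissues normal.")
# The NEG_ prefix lets the classifier distinguish 'no fractures' from 'fractures'.
```

These bigram features would then be fed to a Naive Bayes or SVM classifier, optionally alongside SNOMED CT concept features as the abstract describes.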