807 results for Machine Learning, hepatocellular malignancies, HCC, MVI


Relevance: 100.00%

Publisher:

Abstract:

The reproductive performance of cattle may be influenced by several factors, but mineral imbalances are crucial in terms of direct effects on reproduction. Several studies have shown that elements such as calcium, copper, iron, magnesium, selenium, and zinc are essential for reproduction and can prevent oxidative stress, whereas toxic elements such as lead, nickel, and arsenic can have adverse effects on reproduction. In this paper, we applied a simple and fast method of multi-element analysis to bovine semen samples from Zebu and European classes used in reproduction programs and artificial insemination. Samples were analyzed by inductively coupled plasma mass spectrometry (ICP-MS) using aqueous-medium calibration; the samples were diluted 1:50 in a solution containing 0.01% (vol/vol) Triton X-100 and 0.5% (vol/vol) nitric acid. Rhodium, iridium, and yttrium were used as internal standards for the ICP-MS analysis. To develop a reliable method of tracing the class of bovine semen, we used data mining techniques that make it possible to classify unknown samples after verifying the differentiation of known-class samples. Based on the determination of 15 elements in 41 samples of bovine semen, 3 machine-learning classification tools were applied to determine cattle class. Our results demonstrate the potential of support vector machine (SVM), multilayer perceptron (MLP), and random forest (RF) chemometric tools to identify cattle class. Moreover, the selection tools made it possible to reduce the number of chemical elements needed from 15 to just 8.
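As a rough illustration of the classification step described above, the sketch below trains a toy classifier on invented elemental-concentration vectors. The study used SVM, MLP, and random forest models; here a simple nearest-centroid rule stands in, and all data values are hypothetical.

```python
# Toy sketch of the chemometric classification step. The study used
# SVM, MLP and random forest; a nearest-centroid rule stands in here,
# and the element concentrations below are invented for illustration.

def centroid(rows):
    """Element-wise mean of a list of concentration vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_fit(X, y):
    """One centroid per class."""
    return {c: centroid([x for x, lab in zip(X, y) if lab == c])
            for c in sorted(set(y))}

def predict(model, x):
    """Assign x to the class with the closest centroid."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda c: dist(model[c], x))

# Hypothetical concentrations of three elements per semen sample
X = [[1.0, 0.20, 0.05], [1.1, 0.25, 0.06],   # Zebu
     [2.0, 0.80, 0.01], [2.1, 0.90, 0.02]]   # European
y = ["Zebu", "Zebu", "European", "European"]
model = nearest_centroid_fit(X, y)
print(predict(model, [1.05, 0.22, 0.055]))  # prints "Zebu"
```

In practice one would hold out samples for validation and use the models named in the abstract rather than this stand-in.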

Relevance: 100.00%

Publisher:

Abstract:

Multi-element analysis of honey samples was carried out with the aim of developing a reliable method of tracing the origin of honey. Forty-two chemical elements were determined (Al, Cu, Pb, Zn, Mn, Cd, Tl, Co, Ni, Rb, Ba, Be, Bi, U, V, Fe, Pt, Pd, Te, Hf, Mo, Sn, Sb, P, La, Mg, I, Sm, Tb, Dy, Sd, Th, Pr, Nd, Tm, Yb, Lu, Gd, Ho, Er, Ce, Cr) by inductively coupled plasma mass spectrometry (ICP-MS). Then, three machine learning tools for classification and two for attribute selection were applied in order to prove that it is possible to use data mining tools to find the region where honey originated. Our results clearly demonstrate the potential of Support Vector Machine (SVM), Multilayer Perceptron (MLP) and Random Forest (RF) chemometric tools for honey origin identification. Moreover, the selection tools allowed a reduction from 42 trace element concentrations to only 5. (C) 2012 Elsevier Ltd. All rights reserved.
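The attribute-selection step mentioned above (42 elements reduced to 5) can be illustrated with a simple filter ranking. The sketch below scores each feature by a Fisher-like class-separation ratio and keeps the top k; the actual selection tools used in the study are not specified here, and the data are invented two-class examples.

```python
def fisher_score(a, b):
    """Class-separation score for one feature, two classes:
    squared mean difference over pooled variance."""
    mean = lambda v: sum(v) / len(v)
    var = lambda v, m: sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(a), mean(b)
    return (ma - mb) ** 2 / (var(a, ma) + var(b, mb) + 1e-12)

def select_top_k(X, y, k):
    """Indices of the k most discriminative features (two classes)."""
    ca, cb = sorted(set(y))
    scores = []
    for j in range(len(X[0])):
        va = [x[j] for x, lab in zip(X, y) if lab == ca]
        vb = [x[j] for x, lab in zip(X, y) if lab == cb]
        scores.append((fisher_score(va, vb), j))
    return sorted(j for _, j in sorted(scores, reverse=True)[:k])

# Invented data: feature 1 separates the classes, features 0 and 2 do not
X = [[1, 10, 3.0], [1, 11, 3.1], [1, 20, 2.9], [1, 21, 3.0]]
y = ["A", "A", "B", "B"]
print(select_top_k(X, y, 1))  # prints [1]
```

A wrapper-style selector (retraining the classifier on candidate subsets) would be closer to common chemometric practice, but the filter above conveys the idea in a few lines.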

Relevance: 100.00%

Publisher:

Abstract:

Recently, research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. In a different approach to achieving similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (yielding the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE, and UQODE authors. Results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced, and that the success rate improves, if SS is employed first. Such results are also in consonance with the literature on the importance of an adequate starting population. Moreover, SS is more effective at finding initial populations of superior quality than the other three algorithms, which employ oppositional learning. Finally, and most importantly, the performance of SS in finding promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
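A minimal sketch of the "promising region" idea: sample the search space, keep the best fraction of samples, and take their bounding box as the region in which to initialize a metaheuristic's population. The published SS algorithm uses machine-learning classifiers rather than this simple quantile filter, so treat the code as an illustrative stand-in.

```python
import random

def promising_region(f, bounds, n_samples=200, keep=0.1, seed=0):
    """Sample the space uniformly, keep the best `keep` fraction by
    objective value, and return the bounding box of the survivors.
    (Illustrative stand-in for Smart Sampling, which uses ML models.)"""
    rng = random.Random(seed)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_samples)]
    pts.sort(key=f)
    best = pts[: max(1, int(keep * n_samples))]
    return [(min(p[d] for p in best), max(p[d] for p in best))
            for d in range(len(bounds))]

sphere = lambda x: sum(xi * xi for xi in x)   # toy benchmark function
region = promising_region(sphere, [(-100, 100)] * 2)
# `region` is a much narrower box around the origin; a DE population
# initialized inside it starts far closer to the global optimum.
```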

Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: There is clinical evidence that very low and safe levels of amplitude-modulated electromagnetic fields administered via an intrabuccal spoon-shaped probe may elicit therapeutic responses in patients with cancer. However, there is no known mechanism explaining the anti-proliferative effect of very low intensity electromagnetic fields. METHODS: To understand the mechanism of this novel approach, hepatocellular carcinoma (HCC) cells were exposed to 27.12 MHz radiofrequency electromagnetic fields using in vitro exposure systems designed to replicate in vivo conditions. Cancer cells were exposed to tumour-specific modulation frequencies, previously identified by biofeedback methods in patients with a diagnosis of cancer. Control modulation frequencies consisted of randomly chosen modulation frequencies within the same 100 Hz-21 kHz range as cancer-specific frequencies. RESULTS: The growth of HCC and breast cancer cells was significantly decreased by HCC-specific and breast cancer-specific modulation frequencies, respectively. However, the same frequencies did not affect proliferation of nonmalignant hepatocytes or breast epithelial cells. Inhibition of HCC cell proliferation was associated with downregulation of XCL2 and PLP2. Furthermore, HCC-specific modulation frequencies disrupted the mitotic spindle. CONCLUSION: These findings uncover a novel mechanism controlling the growth of cancer cells at specific modulation frequencies without affecting normal tissues, which may have broad implications in oncology. British Journal of Cancer (2012) 106, 307-313. doi:10.1038/bjc.2011.523. www.bjcancer.com. Published online 1 December 2011. (C) 2012 Cancer Research UK.

Relevance: 100.00%

Publisher:

Abstract:

In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods, where the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions from all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependency by themselves. An experimental study using decision trees, a kernel method as well as Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve the multi-label classification performance.
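The decomposition described above can be sketched as a data transformation. Below, each label gets its own binary dataset; to let the binary classifiers "discover label dependency", each dataset also carries the other labels as extra 0/1 input features (a simplified reading of the proposal, with made-up toy data).

```python
def binary_relevance_datasets(X, Y, labels):
    """One binary dataset per label. Each instance is augmented with
    the OTHER labels as 0/1 features, so a base learner can exploit
    label dependency (simplified sketch of the approach above)."""
    datasets = {}
    for target in labels:
        rows = []
        for x, y in zip(X, Y):
            extra = [1 if l in y else 0 for l in labels if l != target]
            rows.append((x + extra, 1 if target in y else 0))
        datasets[target] = rows
    return datasets

# Toy multi-label data: the second example carries both labels "a" and "b"
X = [[0.1], [0.9]]
Y = [{"a"}, {"a", "b"}]
ds = binary_relevance_datasets(X, Y, ["a", "b"])
print(ds["b"])  # prints [([0.1, 1], 0), ([0.9, 1], 1)]
```

At prediction time the true labels of a test example are unknown, so the extra features would be filled in from a first round of plain binary-relevance predictions; that second stage is omitted here.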

Relevance: 100.00%

Publisher:

Abstract:

Decision tree induction algorithms represent one of the most popular techniques for dealing with classification problems. However, traditional decision-tree induction algorithms implement a greedy approach to node splitting that is inherently susceptible to convergence to local optima. Evolutionary algorithms can avoid the problems associated with a greedy search and have been successfully employed for the induction of decision trees. Previously, we proposed a lexicographic multi-objective genetic algorithm for decision-tree induction, named LEGAL-Tree. In this work, we propose substantially extending this approach with respect to two important evolutionary aspects: the initialization of the population and the fitness function. We carry out a comprehensive set of experiments to validate our extended algorithm. The experimental results suggest that it is able to outperform both traditional algorithms for decision-tree induction and another evolutionary algorithm in a variety of application domains.
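A lexicographic multi-objective fitness, as used in this line of work, compares candidates objective by objective in priority order, consulting a lower-priority objective only when two values are closer than a tolerance. A minimal comparator (maximization, with illustrative tolerance values) might look like:

```python
def lexicographic_better(a, b, tolerances):
    """True if fitness vector `a` beats `b` lexicographically:
    objectives are examined in priority order, and a lower-priority
    objective is consulted only when the current one is a near-tie
    (difference below its tolerance). All objectives are maximized."""
    for fa, fb, tol in zip(a, b, tolerances):
        if abs(fa - fb) >= tol:
            return fa > fb
    return False  # near-ties on every objective

# (accuracy, simplicity) with illustrative tolerances (0.05, 1)
print(lexicographic_better((0.90, 5), (0.89, 3), (0.05, 1)))  # prints True:
# accuracies are a near-tie, so the second objective decides
```

The tolerances encode how much of a higher-priority objective one is willing to trade away before a lower-priority one matters; the values above are invented, not those of LEGAL-Tree.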

Relevance: 100.00%

Publisher:

Abstract:

The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently, more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is carried out at every level of the biological information-processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods are needed to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is independence from biases present in the training dataset, which cause over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, creating the so-called balanced SVM. BaCelLo predicts the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo outperformed all the state-of-the-art methods available for this prediction task.
BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from a genome, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work, a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method, called GPIPE, greatly improved the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello ; eSLDB http://gpcr.biocomp.unibo.it/esldb ; GPIPE http://gpcr.biocomp.unibo.it/gpipe
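The "balanced SVM" idea above counters training-set bias toward over-represented localizations. The actual BaCelLo modification rebalances the SVM objective itself; a common, simpler expression of the same idea is class weights inversely proportional to class frequency:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency so that rare
    localizations carry as much total weight as common ones.
    (Sketch of the balancing idea, not the BaCelLo SVM itself.)"""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Toy training set: cytoplasm heavily over-represented
w = balanced_class_weights(["cytoplasm"] * 3 + ["nucleus"])
print(w["nucleus"])  # prints 2.0 -- the rare class is up-weighted
```

Such weights would scale the per-example penalty term of each class during SVM training, so the decision boundary is no longer pulled toward the majority class.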

Relevance: 100.00%

Publisher:

Abstract:

Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems, and the analysis and modelling of primate neural activity. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed, within a statistical framework, to develop a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.

Relevance: 100.00%

Publisher:

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures in the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of kernels in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions that adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower-dimensional space, with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, reducing the computational burden and storage requirements.
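As a concrete reference point for the kernels discussed above, the sketch below implements a minimal subtree kernel: each tree is expanded into canonical strings of its complete subtrees, and the kernel counts matching pairs. Real implementations share substructures across trees (e.g. via the DAG representation mentioned above) instead of materializing strings.

```python
from collections import Counter

def subtrees(tree):
    """All complete subtrees of `tree` = (label, [children]),
    as canonical strings (child order is significant)."""
    out = []
    def canon(node):
        lab, kids = node
        s = lab if not kids else lab + "(" + ",".join(canon(k) for k in kids) + ")"
        out.append(s)
        return s
    canon(tree)
    return out

def subtree_kernel(t1, t2):
    """K(t1, t2) = number of pairs of identical complete subtrees."""
    c1, c2 = Counter(subtrees(t1)), Counter(subtrees(t2))
    return sum(c1[s] * c2[s] for s in c1)

t1 = ("S", [("a", []), ("b", [])])
t2 = ("S", [("a", []), ("c", [])])
print(subtree_kernel(t1, t2))  # prints 1 -- only the leaf "a" is shared
```

The sparsity problem is visible even here: with a large label domain, two trees rarely share any complete subtree, so most kernel values collapse to near zero.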

Relevance: 100.00%

Publisher:

Abstract:

Hepatocellular carcinoma (HCC), with roughly 1,000,000 new cases per year, is one of the most common malignant tumours worldwide. It is most prevalent in Southeast Asia and southern Africa. Risk factors are chronic infection with hepatitis viruses (HBV, HCV), aflatoxin B1 exposure, and chronic alcohol consumption. To investigate changes at the genomic level in HCCs, the present study analysed fresh tissue samples from 21 patients with HCC and formalin-fixed, paraffin-embedded material from 6 dysplastic nodules by comparative genomic hybridization (CGH). In the HCCs examined, gains on 1q (12/21), 6p (6/21), 8q (11/21), 17q (6/21), and 20q (6/21), as well as losses on 4q (7/21), 6q (4/21), 10q (3/21), 13q (4/21), and 16q (3/21), were identified. The validity of the results obtained with this approach was confirmed by independent control experiments using interphase FISH analysis. The changes identified in dysplastic nodules are gains on 1q (50%) and losses on 8p and 17p. 1q therefore represents a candidate region for identifying genes that are already activated at an early stage of hepatocarcinogenesis. Gene expression analysis of an HCC with gains on 1q, 8q, and Xq showed overexpression of several genes located in the amplified regions. It can therefore be speculated that DNA amplification may be a mechanism of activation for some genes in hepatocarcinogenesis. In summary, CGH analysis identified characteristic genomic imbalances of HCC. Comparison with the changes found in premalignant lesions allows early (premalignant) and late (progression-associated) changes to be distinguished.

Relevance: 100.00%

Publisher:

Abstract:

Aim: To evaluate the early response to an antiangiogenic drug (sorafenib) in a heterotopic murine model of hepatocellular carcinoma (HCC) using ultrasonographic molecular imaging. Materials and Methods: The xenograft model was established by injecting a suspension of HuH7 cells subcutaneously into 19 nude mice. When tumors reached a mean diameter of 5-10 mm, the mice were divided into two groups (treatment and vehicle). The treatment group received sorafenib (62 mg/kg) by daily oral gavage for 14 days. Molecular imaging was performed using contrast-enhanced ultrasound (CEUS), by injecting into the mouse venous circulation a suspension of VEGFR-2-targeted microbubbles (BR55, kind gift of Bracco Swiss, Geneva, Switzerland). Video clips were acquired for 6 minutes; the microbubbles (MBs) were then destroyed by a high mechanical index (MI) impulse, and another minute was recorded to evaluate residual circulating MBs. The US protocol was repeated at days 0, +2, +4, +7, and +14 from the beginning of treatment administration. Video clips were analyzed using dedicated software (Sonotumor, Bracco Swiss) to quantify the contrast agent signal. Time/intensity curves were obtained, and the difference between the mean MB signal before and after the high-MI impulse (differential targeted enhancement, dTE) was calculated. dTE is a numeric value in arbitrary units proportional to the amount of bound MBs. At day +14 the mice were euthanized and the tumors analyzed for VEGFR-2, pERK, and CD31 tissue levels using western blot analysis. Results: dTE values decreased from day 0 to day +14 in both the treatment and vehicle groups, and they were significantly higher in the vehicle group than in the treatment group at days +2, +7, and +14. With respect to the degree of tumor volume increase, measured as growth percentage delta (GPD), the treatment group was divided into two sub-groups: non-responders (GPD>350%) and responders (GPD<200%).
Likewise, the vehicle group was divided into a slow-growth group (GPD<400%) and a fast-growth group (GPD>900%). dTE values at day 0 (immediately before treatment start) were higher in the non-responder than in the responder group, with a statistically significant difference at day +2, while dTE values were higher in the fast-growth group than in the slow-growth group only at day 0. A significant positive correlation was found between VEGFR-2 tissue levels and dTE values, confirming that the level of BR55 tissue enhancement reflects the amount of tissue VEGF receptor. Conclusions: The present findings show that, at least in murine experimental models, CEUS with BR55 is feasible and appears to be a useful tool for predicting tumor growth and response to sorafenib treatment in xenograft HCC.
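The dTE quantity described above reduces to a simple computation on the time/intensity curve: average signal before the destruction pulse minus average residual signal after it. A sketch, with an invented toy signal:

```python
def differential_targeted_enhancement(signal, pulse_index):
    """dTE: mean contrast signal before the high-MI destruction pulse
    minus mean residual signal after it; proportional to the amount
    of bound (VEGFR-2-targeted) microbubbles. Units are arbitrary."""
    before = signal[:pulse_index]
    after = signal[pulse_index:]
    return sum(before) / len(before) - sum(after) / len(after)

# Invented intensity samples: plateau of 10 a.u. from bound + circulating
# MBs, then residual 2 a.u. after the destruction pulse at sample index 4
print(differential_targeted_enhancement([10, 10, 10, 10, 2, 2], 4))  # 8.0
```

Subtracting the post-pulse residual removes the contribution of freely circulating microbubbles, leaving the bound fraction.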

Relevance: 100.00%

Publisher:

Abstract:

Background & Aims: This study investigates whether the aetiologic changes in liver disease and the improved management of hepatocellular carcinoma (HCC) have modified the clinical scenario of this tumour over the last 20 years in Italy. Methods: Retrospective study based on the analysis of the ITA.LI.CA (Italian Liver Cancer) database, including 3027 HCC patients managed in 11 centres. Patients were divided into 3 groups according to the period of HCC diagnosis: 1987–1996 (year of the ‘‘Milano criteria’’ publication), 1997–2001 (year of release of the EASL guidelines for HCC), and 2002–2008. Results: The significant changes were: (1) progressive patient ageing; (2) increasing prevalence of HCV infection until 2001, with a subsequent decrease, when the alcoholic aetiology increased; (3) liver function improvement, until 2001; (4) increasing ‘‘incidental’’ at the expense of ‘‘symptomatic’’ diagnoses, until 2001; (5) unchanged prevalence of tumours diagnosed during surveillance (around 50%), with an increasing use of the 6-month schedule; (6) favourable HCC ‘‘stage migration’’, until 2001; (7) increasing use of percutaneous ablation; (8) improving survival, until 2001. Conclusions: Over the last 20 years, several aetiologic and clinical features of HCC have changed. The survival improvement observed until 2001 was due to an increasing number of tumours diagnosed at early stages and against a background of compensated cirrhosis, together with a growing and better use of locoregional treatments. However, the prevalence of early cancers and survival did not increase further in recent years, a result prompting national policies aimed at implementing surveillance programmes for at-risk patients.

Relevance: 100.00%

Publisher:

Abstract:

Aim of the study: To establish whether changes in the perfusion of a target hepatocellular carcinoma (HCC) lesion, assessed quantitatively by contrast-enhanced ultrasound (CEUS) at weeks 2 and 4 of sorafenib therapy, can predict disease progression at week 8, as assessed by contrast-enhanced computed tomography or magnetic resonance imaging (CT/MRI) using the RECIST/modified RECIST criteria (response evaluation criteria in solid tumors). Patients and methods: The ethics committee approved the study, and patients provided written informed consent before enrolment. The study was carried out on a sample of subjects with advanced HCC, or HCC not amenable to curative treatment, on sorafenib monotherapy. Tumour response was assessed by CT or MRI at 2 months using the RECIST/modified RECIST criteria. CEUS was performed within 1 week before the start of sorafenib treatment and during therapy at weeks 2, 4, 8, 16, and 32. Quantitative functional parameters were obtained using dedicated software. Changes in these parameter values between baseline and the subsequent time points were compared with tumour response based on the RECIST/modified RECIST criteria. Results: The reduction in tumour perfusion parameters, in particular WiAUC and PE (parameters correlated with blood volume), at T2/T4 (weeks 2 and 4) predicted tumour response at 2 months, assessed according to the RECIST and modified RECIST criteria, which indicated stable disease (responders). Conclusion: Contrast-enhanced ultrasound can be used to quantify changes in tumour vascularization as early as weeks 2 and 4 after sorafenib administration in patients with HCC.
These early changes in tumour perfusion may be predictive of tumour response at 2 months and have potential for the early evaluation of the efficacy of antiangiogenic therapy in hepatocellular carcinoma.

Relevance: 100.00%

Publisher:

Abstract:

Hepatitis B x protein (HBx) is a non-structural, multifunctional protein of hepatitis B virus (HBV) that modulates a variety of host processes. Due to its transcriptional activity, able to alter the expression of growth-control genes, it has been implicated in hepatocarcinogenesis. Increased expression of HBx has been reported in liver tissue samples of hepatocellular carcinoma (HCC), and a specific anti-HBx immune response can be detected in the peripheral blood of patients with chronic HBV. However, its role has not yet been clarified. Thus, we performed a cross-sectional analysis of the anti-HBx-specific T cell response in HBV-infected patients at different stages of disease. A total of 70 HBV-infected subjects were evaluated: 15 affected by chronic hepatitis (CH, median age 45 yrs), 14 by cirrhosis (median age 55 yrs), 11 with dysplastic nodules (median age 64 yrs), 15 with HCC (median age 60 yrs), and 15 with IC (median age 53 yrs). All patients were infected by virus genotype D, with different levels of HBV viremia, and most of them (91%) were HBeAb positive. The HBx-specific T cell response was evaluated by anti-interferon (IFN)-gamma Elispot assay after in vitro stimulation of peripheral blood mononuclear cells, using 20 overlapping synthetic peptides covering the whole HBx protein sequence. HBx-specific IFN-gamma-secreting T cells were found in 6 out of 15 patients with chronic hepatitis (40%), 3 out of 14 with cirrhosis (21%), 5 out of 11 with cirrhosis with macronodules (45%), and 10 out of 15 HCC patients (67%). The number of responding patients was significantly higher in HCC than in IC (p=0.02) and cirrhosis (p=0.02). A central region of the HBx protein, corresponding to peptides 86-88, was preferentially recognized.
The HBx response did not correlate with clinical disease features (AFP, MELD). The HBx-specific T-cell response seems to increase with progression of the disease, being increased in subjects with dysplastic or neoplastic lesions, and may represent an additional tool to monitor patients at high risk of developing HCC.

Relevance: 100.00%

Publisher:

Abstract:

Background: The recent increasing incidence of intrahepatic cholangiocellular carcinoma (ICC) in cirrhosis has heightened the problem of the noninvasive differential diagnosis between ICC and hepatocellular carcinoma (HCC) in cirrhosis, and there are no literature data on the treatment and prognosis of ICC in cirrhosis. Aim: To investigate the role of the different imaging techniques in the diagnosis of ICC in cirrhosis, and to analyze treatments and prognosis, with particular attention to factors associated with survival. Methods: The data of 30 cirrhotic patients with ICC were retrospectively collected; the patients were referred to Liver Units (S.Orsola-Malpighi and S.Matteo Hospitals) between 2005 and 2011. The results of contrast-enhanced ultrasound (CEUS), computed tomography (CT), and magnetic resonance (MR) imaging were evaluated; the enhancement patterns of the different imaging techniques were analysed, with particular attention to misdiagnosis as HCC. We evaluated the different treatments and the survival of the study group, and then performed a survival analysis of different clinico-pathologic factors. Results: Twenty-five patients underwent CEUS, 27 CT, and 10 MR. CEUS misdiagnosed ICC as HCC in 3 cases (12%), CT in 7 cases (26%), and MR in 1 case (10%). Patients were followed for a mean of 30 months (range: 4-86), with a mean survival of 30 months. Twenty-four of the 30 patients were treated with a curative approach, while the other 6 underwent TACE (n=4), radioembolization (n=1), or systemic treatment with gemcitabine (n=1). Univariate analysis revealed that CA19-9 levels, surveillance programme, and nodule size were significantly related to survival. On multivariate analysis, only nodule size ≤ 40 mm remained significant (p=0.004). Conclusion: Diagnosis of ICC in cirrhosis remains difficult because there is no typical enhancement pattern and, in some cases, it cannot be distinguished from HCC by the different imaging techniques.
The analysis of survival-related factors shows that nodule size ≤ 40 mm is correlated with improved survival.