91 results for Machine Learning, hepatocellular malignancies, HCC, MVI
at Université de Lausanne, Switzerland
Advanced mapping of environmental data: Geostatistics, Machine Learning and Bayesian Maximum Entropy
Abstract:
This book combines geostatistics and global mapping systems to present an up-to-date study of environmental data. Featuring numerous case studies, the reference covers model-dependent (geostatistics) and data-driven (machine learning) analysis techniques such as risk mapping, conditional stochastic simulations, descriptions of spatial uncertainty and variability, artificial neural networks (ANN) for spatial data, Bayesian maximum entropy (BME), and more.
Abstract:
The paper presents an approach for mapping precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce both the spatial patterns and the extreme values simultaneously. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies are considered: first, a 2-day accumulation of heavy precipitation and, second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) for training a neural network to reproduce extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of Doppler radar imagery when used as an external drift or as an input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors and by analyzing data patterns.
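As an illustration of the geostatistical side of this abstract, the following is a minimal sketch of ordinary kriging with an exponential variogram. It is not the paper's implementation; the function names and the variogram parameters (`sill`, `rng`, `nugget`) are illustrative assumptions.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=50.0, nugget=0.0):
    """Exponential variogram model gamma(h) (illustrative parameters)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=50.0, nugget=0.0):
    """Predict z at location xy0 from observations (xy, z) by ordinary kriging."""
    n = len(z)
    # pairwise distances between observation points
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, sill, rng, nugget)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1), sill, rng, nugget)
    sol = np.linalg.solve(A, b)
    w = sol[:n]                                   # kriging weights, sum to 1
    estimate = float(w @ z)
    variance = float(b[:n] @ w + sol[n])          # ordinary kriging variance
    return estimate, variance
```

With a zero nugget the predictor is an exact interpolator: at an observed location it returns the observed value with zero kriging variance.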
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and time. Most machine learning algorithms are universal, adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and support vector machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are typically generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns real-space constraints such as geomorphology, networks and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and models' inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazards risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
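The ANN-based mapping mentioned above can be sketched as a small one-hidden-layer network fitted to spatial inputs (coordinates plus optional features). This is a generic illustration of the idea, not the models used in the case studies; the architecture and all hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducible weight init

def train_mlp(X, y, hidden=8, lr=0.05, epochs=3000):
    """Tiny one-hidden-layer MLP (tanh) fitted by batch gradient descent.

    Returns a prediction function for new inputs.
    """
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        pred = H @ W2 + b2                  # linear output layer
        err = pred - y[:, None]             # residuals
        # backpropagation of the mean squared error
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)    # tanh derivative
        gW1 = X.T @ dH / n;  gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

In a spatial setting `X` would hold coordinates and auxiliary features (e.g. DEM-derived terrain attributes) and `y` the measured variable to be mapped.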
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbor algorithm is considered. PNN is a neural network reformulation of the well-known nonparametric principles of probability density modelling using kernel density estimation together with Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that it can easily be used in decision support systems dealing with automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs were successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
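The PNN described above is, at its core, a Parzen-window classifier: a Gaussian kernel density estimate per class combined with a Bayes decision rule. A minimal sketch follows; the bandwidth `sigma` and the default frequency-based priors are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pnn_classify(Xtr, ytr, Xq, sigma=0.3, priors=None):
    """Parzen/PNN classifier: Gaussian KDE per class + Bayes decision rule."""
    classes = np.unique(ytr)
    if priors is None:                        # default priors: class frequencies
        priors = {c: np.mean(ytr == c) for c in classes}
    scores = []
    for c in classes:
        Xc = Xtr[ytr == c]
        # squared distances from each query to each training point of class c
        d2 = ((Xq[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        dens = np.exp(-d2 / (2 * sigma ** 2)).mean(1)  # kernel density (unnormalized)
        scores.append(priors[c] * dens)
    # maximum a posteriori decision: pick the class with the largest score
    return classes[np.argmax(np.stack(scores, 1), axis=1)]
```

The class scores themselves (before the argmax) give the accuracy quantification the abstract mentions, since they are proportional to posterior class probabilities.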
Abstract:
Objective: Comparison of drug-eluting bead (DEB) TACE with conventional TACE in the treatment of intermediate-stage HCC in patients with cirrhosis. Materials and Methods: 212 patients (185 male, 27 female; mean age 67 years) with Child-Pugh A or B liver cirrhosis and large and/or multinodular, unresectable HCC were randomized to compare treatment response after therapy with DEB (DC Bead; Biocompatibles, UK) loaded with doxorubicin versus conventional TACE with doxorubicin. Randomization was stratified by Child-Pugh status (A or B), performance status (ECOG 0 or 1), bilobar disease (yes/no), and prior curative treatment (yes/no). The primary study endpoint was tumor response at 6 months. An independent, blinded MRI review was performed to assess tumor response according to the RECIST criteria. Results: DEB-TACE with doxorubicin showed higher rates of complete response, objective response, and disease control compared with conventional TACE (27% vs 22%; 52% vs 44%; and 63% vs 52%; P>0.05). Patients with Child-Pugh B cirrhosis, ECOG 1 performance status, bilobar disease, or recurrence after curative treatment showed a significant increase in objective response (p = 0.038) compared with the control group. In patients treated with DEB-TACE, a marked reduction in serious liver toxicity was achieved. The rate of doxorubicin-related side effects was significantly lower (p = 0.0001) in the DEB-TACE group than in the conventional TACE group. Conclusion: DEB-TACE with doxorubicin is safe and effective in the treatment of intermediate-stage HCC and offers a significant benefit to patients with more advanced disease.
Abstract:
Background: Sunitinib (SU) is a multitargeted tyrosine kinase inhibitor with antitumor and antiangiogenic activity. Evidence of clinical activity in HCC was reported in 2 phase II trials [Zhu et al and Faivre et al, ASCO 2007] using either a 37.5 or a 50 mg daily dose in a 4-weeks-on, 2-weeks-off regimen. The objective of this trial was to demonstrate antitumor activity of continuous SU treatment in patients (pts) with HCC. Methods: Key eligibility criteria included unresectable or metastatic HCC, no prior systemic anticancer treatment, measurable disease, and Child-Pugh A or B liver dysfunction. Pts received 37.5 mg SU daily until progression or unacceptable toxicity. The primary endpoint was progression-free survival at 12 weeks (PFS12), defined as 'success' if the patient was alive and without tumor progression as assessed 12 weeks (±7 days) after registration. A PFS12 of ≤20% was considered uninteresting, and ≥40% promising. Using the Simon two-stage minimax design with 90% power and 5% significance, the sample size was 45 pts. Secondary endpoints included safety assessments, measurement of serum cobalamin levels, and tumor density. Results: From September 2007 to August 2008, 45 pts, mostly male (87%), were enrolled in 10 centers. Median age was 63 years, 89% had Child-Pugh A, and 47% had distant metastases. Median largest lesion diameter was 84 mm (range: 18-280) and 18% had prior TACE. Reasons for stopping therapy were: PD 60%, symptomatic deterioration 16%, toxicity 11%, death 2% (due to tumor), and other reasons 4%; 7% remain on therapy. PFS12 was rated as success in 15 pts (33%) (95% CI: 20%, 49%) and failure in 27 (60%); 3 were not evaluable (due to refusal). Over the whole trial period, 1 CR and 40% SD as best response were achieved. Median PFS, duration of disease stabilization, TTP, and OS were 2.8, 3.2, 2.8 and 9.3 months, respectively. Grade 3 and 4 adverse events were infrequent, and all deaths were due to the tumor.
Conclusions: Continuous SU treatment at 37.5 mg daily is feasible and demonstrates moderate activity in pts with advanced HCC and mild to moderate liver dysfunction. Under this trial design the therapy is considered promising (>13 PFS12 successes).
Abstract:
The present research reviews the analysis and modeling of Swiss franc interest rate curves (IRC) using unsupervised (SOM, Gaussian mixtures) and supervised (MLP) machine learning algorithms. IRC are considered as objects embedded into different feature spaces: maturities; maturity-date; and parameters of the Nelson-Siegel model (NSM). Analysis of the NSM parameters and their temporal and clustering structures helps to understand the relevance of the model and its potential use for forecasting. Mapping of IRC in a maturity-date feature space is presented and analyzed for visualization and forecasting purposes.
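For reference, the Nelson-Siegel model mentioned above expresses the yield at maturity τ through level, slope and curvature factors with a single decay parameter. A minimal sketch, using the conventional parameter names β0, β1, β2 and λ (the parameter values in any example are purely illustrative):

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau.

    beta0: level (long-maturity limit), beta1: slope,
    beta2: curvature, lam: decay parameter.
    """
    x = tau / lam
    f = (1 - np.exp(-x)) / x            # slope loading: -> 1 as tau -> 0
    return beta0 + beta1 * f + beta2 * (f - np.exp(-x))
```

The three betas fitted per date are exactly the kind of low-dimensional feature space in which the abstract embeds the curves: yields at very long maturities approach β0, and very short maturities approach β0 + β1.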
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
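The stochastic-simulation idea described above, generating multiple equiprobable realizations of a spatial pattern so that exceedance probabilities and risk maps can be derived, can be sketched for an unconditional Gaussian field via Cholesky factorization of an assumed covariance. This is a generic illustration, not the paper's method; the exponential covariance and its parameters are assumptions.

```python
import numpy as np

def gaussian_field_realizations(xy, n_real=100, sill=1.0, corr_len=30.0, seed=0):
    """Unconditional Gaussian field realizations at locations xy.

    Builds an exponential covariance matrix, Cholesky-factorizes it, and
    colors white noise; each column is one equiprobable realization.
    """
    gen = np.random.default_rng(seed)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-3.0 * d / corr_len)               # exponential covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(xy)))  # jitter for stability
    return L @ gen.standard_normal((len(xy), n_real))

def exceedance_prob(realizations, threshold):
    """Per-location probability of exceeding a threshold, the basis of a risk map."""
    return (realizations > threshold).mean(axis=1)
```

Unlike a single regression-style prediction, averaging an indicator over the realizations directly yields the probabilistic maps (e.g. the probability that contamination exceeds a regulatory level) that the abstract contrasts with single-value estimates.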