43 results for Fractional algorithms

at Université de Lausanne, Switzerland


Relevance:

30.00%

Publisher:

Abstract:

Introduction: According to guidelines, patients with coronary artery disease (CAD) should undergo revascularization if myocardial ischemia is present. While coronary angiography (CXA) allows the morphological assessment of CAD, fractional flow reserve (FFR) has proved to be a complementary invasive test to assess the functional significance of CAD, i.e. to detect ischemia. Perfusion cardiac magnetic resonance (CMR) has turned out to be a robust non-invasive technique to assess myocardial ischemia. Objective: To compare the cost-effectiveness ratio, defined as the cost per patient correctly diagnosed, of two algorithms used to diagnose hemodynamically significant CAD in relation to the pretest likelihood of CAD: 1) CMR to assess ischemia before referring positive patients to CXA (CMR + CXA); 2) CXA in all patients combined with an FFR test in patients with angiographically positive stenoses (CXA + FFR). Methods: The costs, evaluated from the health care system perspective in the Swiss, German, United Kingdom (UK) and United States (US) contexts, included the public prices of the different tests considered as outpatient procedures, the costs of complications and the costs induced by diagnostic errors (false negatives). The effectiveness criterion was the ability to accurately identify a patient with significant CAD. Test performances used in the model were based on the clinical literature. Using a mathematical model, we compared the cost-effectiveness ratio of both algorithms for hypothetical patient cohorts with different pretest likelihoods of CAD. Results: The cost-effectiveness ratio decreased hyperbolically with increasing pretest likelihood of CAD for both strategies. CMR + CXA and CXA + FFR were equally cost-effective at a pretest likelihood of CAD of 62% in Switzerland, 67% in Germany, 83% in the UK and 84% in the US, with costs of CHF 5'794, Euros 1'472, £ 2'685 and $ 2'126 per patient correctly diagnosed. Below these thresholds, CMR + CXA showed lower costs per patient correctly diagnosed than CXA + FFR. Implications for the health care system/professionals/patients/society: These results facilitate decision making for the clinical use of new generations of imaging procedures to detect ischemia. They show to what extent the cost-effectiveness of diagnosing CAD depends on the prevalence of the disease.
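The cost-effectiveness criterion described above can be sketched as a small model: the expected cost of a triage-then-confirm strategy divided by the probability of correctly identifying a patient with significant CAD. All test characteristics and prices below are hypothetical placeholders, not the values used in the study.

```python
# Illustrative sketch, assuming made-up test characteristics and prices.

def cost_per_correct_diagnosis(prevalence, sensitivity, specificity,
                               triage_cost, confirm_cost, fn_cost):
    # probability that the triage test (e.g. CMR) is positive
    p_positive = (prevalence * sensitivity
                  + (1 - prevalence) * (1 - specificity))
    expected_cost = (triage_cost                               # every patient is triaged
                     + p_positive * confirm_cost               # positives go on to CXA
                     + prevalence * (1 - sensitivity) * fn_cost)  # missed CAD
    # effectiveness: a patient with significant CAD is correctly identified
    p_correct = prevalence * sensitivity
    return expected_cost / p_correct

# the ratio falls hyperbolically as the pretest likelihood of CAD rises
for prev in (0.2, 0.5, 0.8):
    print(prev, round(cost_per_correct_diagnosis(prev, 0.89, 0.87,
                                                 1200, 2500, 10000)))
```

With any fixed per-patient triage cost, the denominator shrinks toward zero at low prevalence, which reproduces the hyperbolic shape reported in the Results.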

Relevance:

20.00%

Publisher:

Abstract:

The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive ability instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs-137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
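The multi-scale idea can be sketched with a kernel built as a mixture of a short-range and a long-range RBF kernel. Kernel ridge regression is used here as a simpler stand-in for the full epsilon-insensitive SVR of the paper; the 1-D data, length scales and mixture weights are synthetic assumptions.

```python
import numpy as np

# A minimal multi-scale kernel sketch (kernel ridge stand-in for SVR).

def rbf(x, y, scale):
    """RBF (Gaussian) kernel matrix between 1-D point sets x and y."""
    d = x[:, None] - y[None, :]
    return np.exp(-(d ** 2) / (2 * scale ** 2))

x = np.linspace(0, 10, 200)
# large-scale trend plus a short-scale local anomaly around x = 3
y = np.sin(0.5 * x) + 1.5 * np.exp(-((x - 3) ** 2) / 0.05)

# mixture of two spatial scales; weights chosen by hand here, whereas the
# paper learns the mixture from data
w_short, w_long = 0.5, 0.5
K = w_short * rbf(x, x, 0.1) + w_long * rbf(x, x, 2.0)

alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x)), y)  # ridge-regularized fit
y_hat = K @ alpha

print("max abs training error:", float(np.abs(y_hat - y).max()))
```

A single-scale kernel must compromise between the smooth trend and the sharp anomaly; the two-scale mixture captures both, which is the behaviour the paper exploits for local anomalies.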

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND AND AIMS: In critically ill patients, fractional hepatic de novo lipogenesis increases in proportion to carbohydrate administration during isoenergetic nutrition. In this study, we sought to determine whether this increase may be a consequence of continuous enteral nutrition and bed rest. We therefore measured fractional hepatic de novo lipogenesis in a group of 12 healthy subjects during near-continuous oral feeding (hourly isoenergetic meals with a liquid formula containing 55% carbohydrate). In eight subjects, near-continuous enteral nutrition and bed rest were applied over a 10 h period; in the other four subjects, they were extended to 34 h. Fractional hepatic de novo lipogenesis was measured by infusing (13)C-labeled acetate and monitoring VLDL-(13)C palmitate enrichment with mass isotopomer distribution analysis. Fractional hepatic de novo lipogenesis was 3.2% (range 1.5-7.5%) in the eight subjects after 10 h of near-continuous nutrition and 1.6% (range 1.3-2.0%) in the four subjects after 34 h of near-continuous nutrition and bed rest. This indicates that continuous nutrition and physical inactivity do not increase hepatic de novo lipogenesis. Fractional hepatic de novo lipogenesis previously reported in critically ill patients under similar nutritional conditions (9.3%, range 5.3-15.8%) was markedly higher than in healthy subjects (P < 0.001). These data from healthy subjects indicate that fractional hepatic de novo lipogenesis is specifically increased in critically ill patients.

Relevance:

20.00%

Publisher:

Abstract:

Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin-based, and posterior-probability-based. For each of them, the most recent advances in the remote sensing community are discussed, and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
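The posterior-probability family can be illustrated with a breaking-ties-style heuristic: rank unlabeled pixels by the margin between their two most probable classes and query the smallest margins first. The class probabilities below are made up for illustration; any classifier producing per-pixel posteriors could feed this ranking.

```python
import numpy as np

# Breaking-ties-style uncertainty ranking for active learning.

def breaking_ties_rank(posteriors):
    """posteriors: (n_pixels, n_classes) class-membership probabilities.
    Returns pixel indices ordered from most to least uncertain."""
    part = np.sort(posteriors, axis=1)
    margin = part[:, -1] - part[:, -2]   # gap between the top-2 classes
    return np.argsort(margin)            # small gap = high uncertainty

posteriors = np.array([[0.90, 0.05, 0.05],   # confident pixel
                       [0.40, 0.35, 0.25],   # uncertain pixel
                       [0.34, 0.33, 0.33]])  # most uncertain pixel
print(breaking_ties_rank(posteriors))  # most uncertain pixels first
```

In an active learning loop, the top-ranked pixels would be shown to the user for labeling, the classifier retrained, and the ranking recomputed.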

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: According to recent guidelines, patients with coronary artery disease (CAD) should undergo revascularization if significant myocardial ischemia is present. Both cardiovascular magnetic resonance (CMR) and fractional flow reserve (FFR) allow for a reliable ischemia assessment and, in combination with the anatomical information provided by invasive coronary angiography (CXA), such a work-up sets the basis for a decision to revascularize or not. The cost-effectiveness ratios of these two strategies are compared. METHODS: Strategy 1) CMR to assess ischemia followed by CXA in ischemia-positive patients (CMR + CXA); Strategy 2) CXA followed by FFR in angiographically positive stenoses (CXA + FFR). The costs, evaluated from the third-party payer perspective in Switzerland, Germany, the United Kingdom (UK), and the United States (US), included public prices of the different outpatient procedures and costs induced by procedural complications and by diagnostic errors. The effectiveness criterion was the correct identification of hemodynamically significant coronary lesion(s) (= significant CAD) complemented by full anatomical information. Test performances were derived from the published literature. Cost-effectiveness ratios for both strategies were compared for hypothetical cohorts with different pretest likelihoods of significant CAD. RESULTS: CMR + CXA and CXA + FFR were equally cost-effective at a pretest likelihood of CAD of 62% in Switzerland, 65% in Germany, 83% in the UK, and 82% in the US, with costs of CHF 5'794, euro 1'517, £ 2'680, and $ 2'179 per patient correctly diagnosed. Below these thresholds, CMR + CXA showed lower costs per patient correctly diagnosed than CXA + FFR. CONCLUSIONS: The CMR + CXA strategy is more cost-effective than CXA + FFR below a CAD prevalence of 62%, 65%, 83%, and 82% for the Swiss, German, UK, and US health care systems, respectively. These findings may help to optimize resource utilization in the diagnosis of CAD.

Relevance:

20.00%

Publisher:

Abstract:

The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
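A hybrid model of the kind described above can be sketched as a trend model plus kriging of its residuals. Here a simple linear fit stands in for the neural network, and the 1-D data, sill, range and nugget of the exponential covariance are synthetic assumptions, not values from the case studies.

```python
import numpy as np

# Hybrid sketch: trend model (neural-network stand-in) + simple kriging
# of the residuals under an assumed exponential covariance.

rng = np.random.default_rng(1)
x_obs = np.sort(rng.uniform(0, 10, 40))
y_obs = 2.0 + 0.3 * x_obs + np.sin(x_obs)        # trend + spatial structure

# 1) fit the trend (a linear fit stands in for the neural network)
A = np.vstack([np.ones_like(x_obs), x_obs]).T
coef, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
resid = y_obs - A @ coef

# 2) simple kriging of the residuals, covariance C(h) = sill * exp(-h / range)
def cov(a, b, sill=1.0, crange=2.0):
    return sill * np.exp(-np.abs(a[:, None] - b[None, :]) / crange)

C_obs = cov(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs))   # tiny nugget

def hybrid_predict(x_new):
    trend = coef[0] + coef[1] * x_new
    weights = np.linalg.solve(C_obs, cov(x_obs, x_new))  # kriging weights
    return trend + weights.T @ resid

x_grid = np.linspace(0, 10, 101)
y_pred = hybrid_predict(x_grid)
print("error at observed points:",
      float(np.abs(hybrid_predict(x_obs) - y_obs).max()))
```

The trend component carries the large-scale pattern while the kriged residuals restore local detail and extremes, which is the division of labour the paper's hybrid models rely on.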

Relevance:

20.00%

Publisher:

Abstract:

This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of analysing and modelling geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the incorporation of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached by using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazards risk analysis (avalanches, landslides); and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2, 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
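The geo-feature generation step mentioned above, building a high-dimensional feature space from a digital elevation model, can be sketched as follows. The DEM is synthetic, and the feature set (elevation, slope, aspect, curvature) is a common but purely illustrative choice, not necessarily the one used in the report.

```python
import numpy as np

# Building an (n_pixels, n_features) geo-feature matrix from a synthetic DEM.

n = 64
yy, xx = np.mgrid[0:n, 0:n]
dem = 100 + 10 * np.sin(xx / 8.0) + 5 * np.cos(yy / 5.0)  # synthetic elevation grid

dz_dy, dz_dx = np.gradient(dem)                 # first derivatives of elevation
slope = np.hypot(dz_dx, dz_dy)                  # terrain slope magnitude
aspect = np.arctan2(dz_dy, dz_dx)               # downslope direction
curvature = (np.gradient(dz_dx, axis=1)         # d2z/dx2 + d2z/dy2 (Laplacian)
             + np.gradient(dz_dy, axis=0))

# stack into a feature matrix ready to feed SVM or ANN models
X = np.column_stack([a.ravel() for a in (dem, slope, aspect, curvature)])
print(X.shape)  # one row per grid cell, one column per geo-feature
```

Adding further derived layers (multi-scale smoothing, remote sensing bands) is how the feature-space dimension quickly exceeds 5, the regime where the report argues machine learning outperforms traditional spatial statistics.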

Relevance:

20.00%

Publisher:

Abstract:

The origin of andesite is an important issue in petrology because andesite is the main eruptive product at convergent margins, corresponds to the average crustal composition and is often associated with major Cu-Au mineralization. In this study we present petrographic, mineralogical, geochemical and isotopic data for basaltic andesites of the latest Pleistocene Pilavo volcano, one of the most frontal volcanoes of the Ecuadorian Quaternary arc, situated upon thick (30-50 km) mafic crust composed of accreted Cretaceous oceanic plateau rocks and overlying mafic to intermediate Late Cretaceous-Late Tertiary magmatic arcs. The Pilavo rocks are basaltic andesites (54-57.5 wt % SiO2) with a tholeiitic affinity, as opposed to the typical calc-alkaline high-silica andesites and dacites (SiO2 59-66 wt %) of other frontal arc volcanoes of Ecuador (e.g. Pichincha, Pululahua). They have much higher incompatible element contents (e.g. Sr 650-1350 ppm, Ba 650-1800 ppm, Zr 100-225 ppm, Th 5-25 ppm, La 15-65 ppm) and Th/La ratios (0.28-0.36) than Pichincha and Pululahua, and more primitive Sr (87Sr/86Sr ~ 0.7038-0.7039) and Nd (epsilon Nd ~ +5.5 to +6.1) isotopic signatures. Pilavo andesites have geochemical affinities with modern and recent high-MgO andesites (e.g. low-silica adakites, Setouchi sanukites) and, especially, with Archean sanukitoids, for both of which incompatible element enrichments are believed to result from interactions of slab melts with peridotitic mantle.
Petrographic, mineral chemistry, bulk-rock geochemical and isotopic data indicate that the Pilavo magmatic rocks have evolved through three main stages: (1) generation of a basaltic magma in the mantle wedge region by flux melting induced by slab-derived fluids (aqueous, supercritical or melts); (2) high-pressure differentiation of the basaltic melt (at the mantle-crust boundary or at lower crustal levels) through sustained fractionation of olivine and clinopyroxene, leading to hydrous, high-alumina basaltic andesite melts with a tholeiitic affinity, enriched in incompatible elements and strongly impoverished in Ni and Cr; (3) establishment of one or more mid-crustal magma storage reservoirs in which the magmas evolved through dominant amphibole and clinopyroxene (but no plagioclase) fractionation accompanied by assimilation of the modified plutonic roots of the arc and recharge by incoming batches of more primitive magma from depth. The latter process has resulted in strongly increasing incompatible element concentrations in the Pilavo basaltic andesites, coupled with slightly increasing crustal isotopic signatures and a shift towards a more calc-alkaline affinity. Our data show that, although ultimately originating from the slab, incompatible element abundances in arc andesites with primitive isotopic signatures can be significantly enhanced by intra-crustal processes within a thick juvenile mafic crust, thus providing an additional process for the generation of enriched andesites.