61 results for writing to learn
Abstract:
This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced explanatory variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling, prediction and automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of algorithms covers tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and through machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and detects the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which GRNN significantly outperformed all other methods, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools: Machine Learning Office. This collection has been developed over the past 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, decision-oriented mapping of uncertainties, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to provide a user-friendly, easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical space, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the following: multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations that helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
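At the core of the GRNN used for automatic mapping is Nadaraya-Watson kernel regression: the prediction at a query location is a Gaussian-weighted average of the observed values, controlled by a single smoothing parameter. The sketch below is a minimal Python illustration of that idea only, not the Machine Learning Office implementation; the function names and toy data are hypothetical.

```python
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """GRNN prediction = Nadaraya-Watson kernel regression.

    Each query value is a Gaussian-kernel weighted average of the training
    observations; sigma is the single smoothing (bandwidth) parameter.
    """
    # Pairwise squared distances between query and training locations
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # kernel weights, shape (Q, N)
    return (w @ train_z) / w.sum(axis=1)      # weighted averages, shape (Q,)

# Toy example: interpolate a noisy 2-D field onto a regular grid
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(200, 2))            # sampled locations
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=200)     # noisy observations
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), axis=-1).reshape(-1, 2)
z_hat = grnn_predict(xy, z, grid, sigma=0.5)
print(z_hat.shape)                                    # (2500,)
```

In practice the smoothing parameter sigma is typically tuned by cross-validation, which is what makes this family of models attractive for largely operator-free, automatic mapping.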
Abstract:
BACKGROUND: Systematic reviews and meta-analyses of pre-clinical studies, in vivo animal experiments in particular, can influence clinical care. Publication bias is one of the major threats to validity in systematic reviews and meta-analyses. Previous empirical studies suggested that systematic reviews and meta-analyses became increasingly prevalent up to 2010 and found evidence of compromised methodological rigor, with a trend towards improvement. We aim to comprehensively summarize and update the evidence base on systematic reviews and meta-analyses of animal studies, in particular their methodological quality and assessment of publication bias. METHODS/DESIGN: The objectives of this systematic review are as follows: • to investigate the epidemiology of published systematic reviews of animal studies until the present; • to examine methodological features of systematic reviews and meta-analyses of animal studies, with special attention to the assessment of publication bias; • to investigate the influence of systematic reviews of animal studies on clinical research by examining citations of the systematic reviews by clinical studies. Eligible studies for this systematic review are systematic reviews and meta-analyses that summarize in vivo animal experiments with the purpose of reviewing animal evidence to inform human health. We will exclude genome-wide association studies and animal experiments whose main purpose is to learn more about fundamental biology, physical functioning or behavior. In addition to including systematic reviews and meta-analyses identified by other empirical studies, we will systematically search Ovid Medline, Embase, ToxNet, and ScienceDirect from 2009 to January 2013 for further eligible studies, without language restrictions. Two reviewers working independently will assess titles, abstracts, and full texts for eligibility and extract relevant data from included studies. Data reporting will involve a descriptive summary of meta-analyses and systematic reviews. DISCUSSION: Results are expected to be publicly available later in 2013 and may form the basis for recommendations to improve the quality of systematic reviews and meta-analyses of animal studies and their use with respect to clinical care.
Abstract:
Ullman (2004) suggested that Specific Language Impairment (SLI) results from a general procedural learning deficit. In order to test this hypothesis, we investigated children with SLI using procedural learning tasks in the verbal, motor, and cognitive domains. Results showed that, compared with a control group, the children with SLI (a) were unable to learn a phonotactic learning task, (b) learned a motor learning task, but less efficiently, and (c) succeeded in a cognitive learning task. Regarding the motor learning task (Serial Reaction Time Task), reaction times were longer and learning slower than in controls. The learning effect was not significant in children with an associated Developmental Coordination Disorder (DCD), and future studies should consider comorbid motor impairment in order to clarify whether impairments are related to the motor rather than the language disorder. Our results indicate that a phonotactic learning deficit, but not a cognitive procedural deficit, underlies SLI, thus challenging Ullman's general procedural deficit hypothesis, in line with a few other recent studies.
Abstract:
Background and aim of the study: The formation of implicit memory during general anaesthesia is still debated. Perceptual learning is the ability to learn to perceive. In this study, an auditory perceptual learning paradigm using frequency discrimination was employed to investigate implicit memory. It was hypothesized that auditory stimulation would successfully induce perceptual learning; thus, initial thresholds in the postoperative frequency discrimination task should be lower for the stimulated group (group S) than for the control group (group C). Material and method: Eighty-seven ASA I-III patients undergoing visceral and orthopaedic surgery under general anaesthesia lasting more than 60 minutes were recruited. The anaesthesia procedure was standardized (BIS monitoring included). Group S received auditory stimulation (2000 pure tones applied for 45 minutes) during surgery. Twenty-four hours after the operation, both groups performed ten blocks of the frequency discrimination task. The mean of the thresholds for the first three blocks (T1) was compared between groups. Results: The mean age and BIS value of group S and group C were, respectively, 40 ± 11 vs 42 ± 11 years (p = 0.49) and 42 ± 6 vs 41 ± 8 (p = 0.87). T1 was 31 ± 33 vs 28 ± 34 (p = 0.72) in groups S and C, respectively. Conclusion: In our study, no implicit memory during general anaesthesia was demonstrated. This may be explained by a modulation of the auditory evoked potentials caused by the anaesthesia, or by an insufficient duration of repetitive stimulation to induce perceptual learning.
Abstract:
This study examined the effects of ibotenic acid-induced lesions of the hippocampus, the subiculum, and the hippocampus plus subiculum on the capacity of rats to learn and perform a series of allocentric spatial learning tasks in an open-field water maze. The lesions were made by infusing small volumes of the neurotoxin at a total of 26 (hippocampus) or 20 (subiculum) sites, intended to achieve complete target cell loss but minimal extratarget damage. The regional extent and axon-sparing nature of these lesions were evaluated using both cresyl violet and Fink-Heimer stained sections. The behavioural findings indicated that both the hippocampus and the subiculum lesions impaired the initial postoperative acquisition of place navigation but did not prevent eventual learning to levels of performance almost as effective as those of controls. However, overtraining of the hippocampus plus subiculum lesioned rats did not result in significant place learning. Qualitative observations of the paths taken to find a hidden escape platform indicated that different strategies were deployed by the hippocampal and subiculum lesioned groups. Subsequent training on a delayed matching-to-place task revealed a deficit in all lesioned groups across a range of sample-choice intervals, but the subiculum lesioned group was less impaired than the group with the hippocampal lesion. Finally, unoperated control rats given both the initial training and overtraining were later given either a hippocampal lesion or sham surgery. The hippocampal lesioned rats were impaired during a subsequent retention/relearning phase. Together, these findings suggest that total hippocampal cell loss may cause a dual deficit: a slower rate of place learning and a separate navigational impairment. The prospect of unravelling dissociable components of allocentric spatial learning is discussed.
Abstract:
Medium-chain-length polyhydroxyalkanoates (PHAs) are polyesters with the properties of biodegradable thermoplastics and elastomers that are naturally produced by a variety of pseudomonads. Saccharomyces cerevisiae was transformed with the Pseudomonas aeruginosa PHAC1 synthase modified for peroxisome targeting by the addition of the carboxy-terminal 34 amino acids of the Brassica napus isocitrate lyase. The PHAC1 gene was put under the control of the promoter of the catalase A gene. PHA synthase expression and PHA accumulation were found in recombinant S. cerevisiae growing in media containing fatty acids. PHA containing even-chain monomers of 6 to 14 carbons was found in recombinant yeast grown on oleic acid, while odd-chain monomers of 5 to 15 carbons were found in PHA from yeast grown on heptadecenoic acid. The maximum amount of PHA accumulated was 0.45% of the dry weight. Transmission electron microscopy of recombinant yeast grown on oleic acid revealed the presence of numerous PHA inclusions within membrane-bound organelles. Together, these data show that S. cerevisiae expressing a peroxisomal PHA synthase produces PHA in the peroxisome using the 3-hydroxyacyl coenzyme A intermediates of the beta-oxidation of fatty acids present in the media. S. cerevisiae can thus be used as a powerful model system to learn how fatty acid metabolism can be modified in order to synthesize high amounts of PHA in eukaryotes, including plants.
Abstract:
This paper presents the general regression neural network (GRNN) as a nonlinear regression method for the interpolation of monthly wind speeds in complex Alpine orography. The GRNN is trained on data from Swiss meteorological networks to learn the statistical relationship between topographic features and wind speed. Terrain convexity, slope and exposure are taken into account by extracting features from the digital elevation model at different spatial scales using specialised convolution filters. A database of gridded monthly wind speeds is then constructed by applying the GRNN in prediction mode over the period 1968-2008. This study demonstrates that using topographic features as inputs to the GRNN significantly reduces cross-validation errors with respect to low-dimensional models that use only geographical coordinates and terrain height for the interpolation of wind speed. The spatial predictability of wind speed is found to be lower in summer than in winter due to more complex and weaker wind-topography relationships. The relevance of these relationships is studied using an adaptive version of the GRNN algorithm, which allows the useful terrain features to be selected by eliminating the noisy ones. This research provides a framework for extending low-dimensional interpolation models to high-dimensional spaces by integrating additional features that account for the topographic conditions at multiple spatial scales. Copyright (c) 2012 Royal Meteorological Society.
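The feature-extraction step described above (terrain convexity and slope derived from the digital elevation model at several spatial scales) can be sketched with a generic Gaussian smoothing filter standing in for the paper's specialised convolution filters; the scales, cell size and toy DEM below are assumptions for illustration only. The resulting feature grids, together with coordinates and elevation, would form the high-dimensional input space of the GRNN.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def terrain_features(dem, cell_size, scales=(1, 4, 16)):
    """Derive simple multi-scale terrain descriptors from a DEM grid.

    Convexity is approximated as the difference between the DEM and a
    Gaussian-smoothed copy; slope is the gradient magnitude of the smoothed
    surface. Both are computed at several smoothing scales (in grid cells).
    """
    feats = []
    for s in scales:
        smooth = gaussian_filter(dem, sigma=s)
        feats.append(dem - smooth)                 # convexity proxy
        gy, gx = np.gradient(smooth, cell_size)
        feats.append(np.hypot(gx, gy))             # slope magnitude
    return np.stack(feats, axis=-1)                # (rows, cols, 2 * len(scales))

# Toy DEM; real inputs would be x, y, elevation plus these feature grids,
# fed to a GRNN such as the one sketched earlier in this listing.
dem = np.random.default_rng(1).normal(size=(100, 100)).cumsum(axis=0).cumsum(axis=1)
X_terrain = terrain_features(dem, cell_size=250.0)
print(X_terrain.shape)                             # (100, 100, 6)
```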
Abstract:
Learning is predicted to affect manifold ecological and evolutionary processes, but the extent to which animals rely on learning in nature remains poorly known, especially for short-lived non-social invertebrates. This is particularly the case for Drosophila, a favourite laboratory system for studying the molecular mechanisms of learning. Here we tested whether Drosophila melanogaster use learned information to choose food while free-flying in a large greenhouse emulating the natural environment. In a series of experiments, flies were first given an opportunity to learn which of two food odours was associated with good versus unpalatable taste; subsequently, their preference for the two odours was assessed with olfactory traps set up in the greenhouse. Flies that had experienced palatable apple-flavoured food and unpalatable orange-flavoured food were more likely to be attracted to the odour of apple than flies with the opposite experience. This was true both when the flies first learned in the laboratory and were then released and recaptured in the greenhouse, and when the learning occurred under free-flying conditions in the greenhouse. Furthermore, flies retained the memory of their experience while exploring the greenhouse overnight in the absence of the focal odours, pointing to the involvement of consolidated memory. These results support the notion that even small, short-lived insects that are not central-place foragers make use of learned cues in their natural environments.
Abstract:
Aims: The HR-NBL1 study of the European SIOP Neuroblastoma Group (SIOPEN) randomised two high-dose regimens to learn about potential superiority and toxicity profiles. Patients and Methods: At interim analysis, 1483 high-risk neuroblastoma patients (893 males) had been included since 2002, with either INSS stage 4 disease above 1 year of age (1383 pts), or as infants (59 pts) and stage 2 and 3 of any age (145 pts) with MYCN amplification. The median age at diagnosis was 2.9 years (1 month-19.9 years), with a median follow-up of 3 years. Response eligibility criteria prior to randomisation, after Rapid COJEC induction (J Clin Oncol, 2010) ± 2 courses of TVD (Cancer, 2003), included complete bone marrow remission and at least partial response at skeletal sites with no more than 3, but improved, mIBG-positive spots, and a PBSC harvest of at least 3x10E6 CD34/kg BW. The randomised regimens were BuMel (busulfan, oral until 2006, 4 x 150 mg/m² in 4 ED, or intravenous dosing according to body weight as licensed thereafter; melphalan 140 mg/m²/day) and CEM (carboplatin ctn. infusion, 4 x AUC 4.1 mg/ml.min/day; etoposide ctn. infusion, 4 x 338 mg/m²/day or 4 x 200 mg/m²/day*; melphalan 3 x 70 mg/m²/day or 3 x 60 mg/m²/day*; *reduced dose if GFR < 100 ml/min/1.73 m²). Supportive care followed institutional guidelines. VOD prophylaxis included ursodiol, but randomised patients were not eligible for the prophylactic defibrotide trial. Local control included surgery and radiotherapy of 21 Gy. Results: Of 1483 patients, 584 had been randomised for the high-dose question at data lock. A significant difference in event-free survival (3-year EFS 49% vs 33%, p < 0.001) and overall survival (3-year OS 61% vs 48%, p = 0.003) favouring the BuMel regimen over the CEM regimen was demonstrated. The relapse/progression rate was significantly higher after CEM (0.60 ± 0.03) than after BuMel (0.48 ± 0.03) (p < 0.001). Toxicity data had reached 80% completeness at the last analysis. The severe toxicity rate up to day 100 (ICU and toxic deaths) was below 10%, but was significantly higher for CEM (p = 0.014). The acute toxic death rate was 3% for BuMel and 5% for CEM (NS). The acute HDT toxicity profile favours the BuMel regimen in spite of a total VOD incidence of 18% (grade 3: 5%). Conclusions: The Peto rule of p < 0.001 at interim analysis for the primary endpoint, EFS, was met. Hence randomisation was stopped, with BuMel as the recommended standard treatment in the HR-NBL1/SIOPEN trial, which is still accruing for the randomised immunotherapy question.
Abstract:
Multiple sclerosis (MS), a variable and diffuse disease affecting white and gray matter, is known to cause functional connectivity anomalies in patients. However, related studies published to date are post hoc; our hypothesis was that such alterations could discriminate between patients and healthy controls in a predictive setting, laying the groundwork for imaging-based prognosis. Using functional magnetic resonance imaging resting-state data from 22 minimally disabled MS patients and 14 controls, we developed a predictive model of connectivity alterations in MS: a whole-brain connectivity matrix was built for each subject from the slow oscillations (< 0.11 Hz) of region-averaged time series, and a pattern recognition technique was used to learn a discriminant function indicating which particular functional connections are most affected by disease. Classification performance using strict cross-validation yielded a sensitivity of 82% (above chance at p < 0.005) and specificity of 86% (p < 0.01) for distinguishing between MS patients and controls. The most discriminative connectivity changes were found in subcortical and temporal regions, and contralateral connections were more discriminative than ipsilateral ones. The pattern of decreased discriminative connections can be summarized post hoc in an index that correlates positively (ρ = 0.61) with white matter lesion load, possibly indicating functional reorganisation to cope with increasing lesion load. These results are consistent with a subtle but widespread impact of lesions in white matter and in gray matter structures serving as high-level integrative hubs. These findings suggest that predictive models of resting-state fMRI can reveal specific anomalies due to MS with high sensitivity and specificity, potentially leading to new non-invasive markers.
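The processing chain in the abstract (band-limit the region-averaged time series to slow oscillations, build a whole-brain connectivity matrix per subject, then learn a discriminant function under cross-validation) can be sketched as follows. The abstract does not name the pattern recognition technique, so a linear SVM is used here purely as a stand-in; the data shapes, sampling rate and labels are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def connectivity_features(ts, fs=0.5, f_hi=0.11):
    """Low-pass filter region-averaged time series (keep slow oscillations)
    and return the upper triangle of their correlation matrix as features."""
    b, a = butter(4, f_hi / (fs / 2.0), btype="low")
    ts_filt = filtfilt(b, a, ts, axis=0)            # (time, regions)
    corr = np.corrcoef(ts_filt, rowvar=False)       # (regions, regions)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Hypothetical dataset: 36 subjects, 200 time points, 90 brain regions
rng = np.random.default_rng(0)
X = np.array([connectivity_features(rng.normal(size=(200, 90)))
              for _ in range(36)])
y = np.array([0] * 14 + [1] * 22)                   # 14 controls, 22 patients

clf = LinearSVC(C=1.0, dual=False, max_iter=5000)
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(scores.mean())                                # cross-validated accuracy
```

With a linear model, the weights assigned to individual connections can be inspected afterwards, which is one common way to identify the most discriminative connectivity changes.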
Abstract:
As the production and use of nanomaterials in commercial products grow, it is imperative to ensure these materials are used safely, with minimal unwanted impacts on human health or the environment. Foremost among the populations of potential concern are workers who handle nanomaterials in a variety of occupational settings, including university laboratories, industrial manufacturing plants and other institutions. Knowledge about prudent practices for handling nanomaterials is being developed by many groups around the world but may be communicated in a way that is difficult for practitioners to access or use. The GoodNanoGuide is a collaborative, open-access project aimed at creating an international forum for the development and discussion of prudent practices that can be used by researchers, workers and their representatives, occupational safety professionals, governmental officials and even the public. The GoodNanoGuide is easily accessible to anyone with a web browser and aims to become a living repository of good practices for the nanotechnology enterprise. Interested individuals are invited to learn more about the GoodNanoGuide at http://goodnanoguide.org.
Abstract:
It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Suggesting a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other were university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation; the obstacles that emerged concerned data and were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, the obstacles were related to the computer program, and affordances emerged in the combination of experience-based knowledge with the scientists' skill, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.
Abstract:
Introduction: Occupational therapists could play an important role in facilitating driving cessation for ageing drivers. This, however, requires an easy-to-learn, standardised on-road evaluation method. This study therefore investigates whether the use of P-drive could be reliably taught to occupational therapists via a short half-day training session. Method: Using the English 26-item version of P-drive, two occupational therapists evaluated the driving ability of 24 home-dwelling drivers aged 70 years or over on a standardised on-road route. Experienced driving instructors' subjective on-road evaluations were then compared with P-drive scores. Results: Following a short half-day training session, P-drive was shown to have almost perfect between-rater reliability (ICC2,1 = 0.950, 95% CI 0.889 to 0.978). Reliability was stable across sessions, including the training phase, even if the occupational therapists seemed to become slightly less severe in their ratings with experience. P-drive's score was related to the driving instructors' subjective evaluations of driving skills in a non-linear manner (R² = 0.445, p = 0.021). Conclusion: P-drive is a reliable instrument that can easily be taught to occupational therapists and implemented as a way of standardising the on-road driving test.
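The between-rater reliability reported above is ICC(2,1), i.e. a two-way random-effects, absolute-agreement, single-rater intraclass correlation in the Shrout and Fleiss convention. A minimal numpy sketch of that statistic is given below; the example ratings are invented, not study data.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss). `scores` has shape (n_subjects, n_raters)."""
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_total = ((y - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Invented P-drive totals from two raters for five drivers (not study data)
ratings = np.array([[74, 72], [81, 80], [65, 68], [90, 89], [58, 60]])
print(round(icc_2_1(ratings), 3))
```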
Abstract:
Many species are able to learn to associate behaviours with rewards, as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals that use information about payoffs associated with non-tried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoffs. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask which learning rule is evolutionarily stable under pairwise, symmetric, two-action stochastic repeated games played over an individual's lifetime. We analyse the learning dynamics on the behavioural timescale through stochastic approximation theory and simulations, and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also obtain in which trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
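For concreteness, the trial-and-error side of the comparison (reinforcement of realized payoffs only, with no use of hypothetical payoffs) can be sketched as a simple propensity-updating learner in a repeated two-action game. The payoff matrix (a Prisoner's Dilemma) and the Roth-Erev-style update below are illustrative assumptions, not the model analysed in the paper.

```python
import numpy as np

# Payoff to the focal player given (own action, partner's action);
# here a Prisoner's Dilemma with action 0 = cooperate, 1 = defect.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

def repeated_game(rounds=5000, seed=0):
    """Two trial-and-error learners in a repeated two-action game.

    Each player keeps a propensity per action and reinforces only the action
    it actually played, by the payoff it actually received (no hypothetical
    payoffs are used).
    """
    rng = np.random.default_rng(seed)
    prop = np.ones((2, 2))                           # propensities[player, action]
    for _ in range(rounds):
        probs = prop / prop.sum(axis=1, keepdims=True)
        acts = [rng.choice(2, p=probs[i]) for i in range(2)]
        for i in range(2):
            prop[i, acts[i]] += PAYOFF[acts[i], acts[1 - i]]
    return prop / prop.sum(axis=1, keepdims=True)    # long-run choice probabilities

print(repeated_game())
```

A learner using hypothetical payoffs would differ only in the update step, reinforcing every action by the payoff it would have earned against the partner's observed move.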
Abstract:
OBJECTIVES: To determine the incidence, underlying mechanisms and effectiveness of treatment strategies in patients with central airway and pulmonary parenchymal aorto-bronchial fistulation after thoracic endovascular aortic repair (TEVAR). METHODS: Analysis of an international multicentre registry (European Registry of Endovascular Aortic Repair Complications) between 2001 and 2012, with a total caseload of 4680 TEVAR procedures (14 centres). RESULTS: Twenty-six patients with a median age of 70 years (interquartile range: 60-77; 35% female) were identified. The incidence of either central airway (aorto-bronchial) or pulmonary parenchymal (aorto-pulmonary) fistulation (ABPF) in the entire cohort after TEVAR during the study period was 0.56% (central airway 58%, peripheral parenchymal 42%). Atherosclerotic aneurysm formation was the leading indication for TEVAR in 15 patients (58%). Primary endoleaks after initial TEVAR occurred in 10 patients (38%); of these, 80% were either type I or type III endoleaks. Fourteen patients (54%) developed central left bronchial tree lesions, 11 patients (42%) pulmonary parenchymal lesions and 1 patient (4%) a tracheal lesion. The recognized mechanism of ABPF was external compression of the bronchial tree in 13 patients (50%), the majority due to endoleak formation, and ischaemia due to extensive coverage of bronchial feeding arteries in 3 patients (12%). Inflammation and graft erosion accounted for 4 patients (30%) each. Cumulative survival during the entire study period was 39%. Among deaths, 71% were attributed to ABPF. There was no difference in survival between patients with central airway and those with pulmonary parenchymal ABPF (33 vs 45%, log-rank P = 0.55). Survival with a radical surgical approach was significantly better than with any other treatment strategy, both in terms of overall survival (63 vs 32% and 63 vs 21% at 1 and 2 years, respectively) and in terms of fistula-related survival (63 vs 43% at both 1 and 2 years). CONCLUSIONS: ABPF is a rare but highly lethal complication after TEVAR. The leading mechanism behind ABPF seems to be continuing external compression of either the bronchial tree or the left upper lobe parenchyma. In this setting, persisting or newly developing endoleak formation seems to play a crucial role. Prognosis does not differ between patients with central airway and those with pulmonary parenchymal fistulation. Radical bronchial or pulmonary parenchymal repair in combination with stent-graft removal and aortic reconstruction seems to be the most durable treatment strategy.