56 results for Observational techniques and algorithms
at Université de Lausanne, Switzerland
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence, mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns describable by two-point statistics.
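As a sketch of the two-point statistics involved, a classical (Matheron) experimental semivariogram can be estimated in a few lines. The toy field, sampling locations and lag bins below are purely illustrative:

```python
import numpy as np

def experimental_variogram(coords, values, lag_edges):
    """Classical (Matheron) semivariogram estimator over distance bins."""
    # Pairwise distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Toy example: a smooth field sampled at random locations plus noise
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
vals = np.sin(pts[:, 0]) + 0.1 * rng.standard_normal(200)
gamma = experimental_variogram(pts, vals, np.linspace(0, 5, 6))
```

For a spatially correlated field like this one, the estimated semivariance rises from a small nugget at short lags toward the sill at large lags; anisotropy would be studied by binning pairs by direction as well as distance.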
A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a highly topical problem: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
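The GRNN proposed for automatic mapping is, at its core, a Gaussian-kernel weighted average of training targets (Specht's formulation, equivalent to Nadaraya-Watson regression). A minimal sketch on invented data, with the bandwidth `sigma` normally tuned by cross-validation:

```python
import numpy as np

def grnn_predict(train_X, train_y, query_X, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training
    targets (Nadaraya-Watson form of Specht's GRNN)."""
    d2 = ((query_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # kernel weights
    return (w * train_y[None, :]).sum(1) / w.sum(1)

# Toy spatial field with a simple trend; predict at the centre point
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))
y = X[:, 0] + X[:, 1]
pred = grnn_predict(X, y, np.array([[0.5, 0.5]]), sigma=0.05)
```

The single free parameter (the kernel bandwidth) is what makes the GRNN attractive for automatic mapping: it can be selected by leave-one-out cross-validation without user interaction.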
Abstract:
In European countries and North America, people spend 80 to 90% of their time inside buildings and thus breathe indoor air. In Switzerland, special attention has been devoted to the 16 stations of the national network for observation of atmospheric pollutants (NABEL). The results indicate a reduction in outdoor pollution over the last ten years. Given such a decrease in pollution over these ten years, the question becomes: how can we explain an increase in diseases? Indoor pollution can be the cause. Indoor contaminants that may create indoor air quality (IAQ) problems come from a variety of sources. These can include inadequate ventilation, temperature and humidity dysfunction, and volatile organic compounds (VOCs). The health effects of these contaminants are varied and can range from discomfort, irritation and respiratory diseases to cancer. Among such contaminants, environmental tobacco smoke (ETS) could be considered the most important in terms of both health effects and engineering controls of ventilation. To perform indoor pollution monitoring, several selected ETS tracers can be used, including carbon monoxide (CO), carbon dioxide (CO2), respirable suspended particles (RSP), condensate, nicotine, polycyclic aromatic hydrocarbons (PAHs), nitrosamines, etc. In this paper, some examples are presented of IAQ problems that have occurred following building renovation and energy-saving measures. Using industrial hygiene sampling techniques and focusing on selected priority pollutants used as tracers, various problems have been identified and solutions proposed. [Author]
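As an illustration of the tracer-gas approach mentioned above, the ventilation (air change) rate of a room can be estimated from the decay of an indoor CO2 excess over the outdoor level. The function and the concentrations below are hypothetical, and the standard decay formula assumes a well-mixed zone with no indoor CO2 source during the measurement:

```python
import math

def air_changes_per_hour(c0_ppm, ct_ppm, c_out_ppm, hours):
    """Air change rate from tracer-gas (e.g. CO2) concentration decay:
    ACH = ln((C0 - Cout) / (Ct - Cout)) / t, assuming a well-mixed
    zone and no indoor source while the decay is recorded."""
    return math.log((c0_ppm - c_out_ppm) / (ct_ppm - c_out_ppm)) / hours

# Example: CO2 falls from 1400 ppm to 800 ppm in 1 h, outdoors at 400 ppm
ach = air_changes_per_hour(1400, 800, 400, 1.0)   # ≈ 0.92 air changes per hour
```

A low value like this would flag the "inadequate ventilation" problem discussed above, independently of any specific pollutant measurement.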
Transcatheter aortic valve implantation (TAVI): state of the art techniques and future perspectives.
Abstract:
Transcatheter aortic valve therapies are the newest established techniques for the treatment of high-risk patients affected by severe symptomatic aortic valve stenosis. The transapical approach requires a left anterolateral mini-thoracotomy, whereas the transfemoral method requires adequate peripheral vascular access and can be performed fully percutaneously. Alternatively, the trans-subclavian access has recently been proposed as a third promising approach. Depending on the technique, fine stent-valve positioning can be performed with or without contrast injections. The transapical echo-guided stent-valve implantation without angiography (the Lausanne technique) relies entirely on transoesophageal echocardiographic imaging for fine stent-valve positioning, and this technique has been shown to prevent the onset of postoperative contrast-related acute kidney failure. Recently published reports have shown good hospital outcomes and short-term results after transcatheter aortic valve implantation, but there are no proven advantages of the transfemoral over the transapical technique or vice versa. In particular, the transapical series have a higher mean logistic EuroSCORE of 27-35%, a procedural success rate above 95% and a mean 30-day mortality between 7.5 and 17.5%, whereas the transfemoral results show a lower logistic EuroSCORE of 23-25.5%, a procedural success rate above 90% and a 30-day mortality of 7-10.8%. Nevertheless, further clinical trials and long-term results are needed to confirm this positive trend. Future perspectives in transcatheter aortic valve therapies include the development of intravascular devices for the ablation of diseased valve leaflets and the launch of new stent-valves with improved haemodynamics, different sizes and smaller delivery systems.
Abstract:
OBJECTIVE: This review describes and evaluates the results of laparoscopic aortic surgery. METHODS: We describe the different laparoscopic techniques used to treat aortic disease, including (1) total laparoscopic aortic surgery (TLS), (2) laparoscopy-assisted procedures including hand-assisted laparoscopic surgery (HALS), and (3) robot-assisted laparoscopic surgery, with their current indications. Results of these techniques are analyzed in a systematic review of the clinical series published between 1998 and 2008, each containing >10 patients with complete information concerning operative time, clamping time, conversion rate, length of hospital stay, morbidity, and mortality. RESULTS: We selected and reviewed 29 studies that included 1073 patients. Heterogeneity of the studies and selection of the patients made comparison with current open or endovascular surgery difficult. Median operative time varied widely in TLS, from 240 to 391 minutes. HALS had the shortest operating time. Median clamping time varied from 60 to 146 minutes in TLS and was shorter in HALS. Median hospital stay varied from 4 to 10 days regardless of the laparoscopic technique. The postoperative mortality rate was 2.1% (95% confidence interval, 1.4-3.0), with no significant difference between patients treated for occlusive disease or for aneurysmal disease. Conversion to open surgery was necessary in 8.1% of patients and was slightly higher with TLS than with laparoscopy-assisted techniques (P = .07). CONCLUSIONS: Analysis of these series shows that laparoscopic aortic surgery can be performed safely provided that patient selection is adjusted to the surgeon's experience and conversion is liberally performed. The future of this technique in comparison with endovascular surgery is still unknown, and it is now time for multicenter randomized trials to demonstrate the potential benefit of this type of surgery.
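The pooled mortality above, 2.1% (95% CI, 1.4-3.0), can be approximated with a standard score interval for a proportion. The event count of 23 below is an assumption chosen only to reproduce roughly 2.1% of the 1073 reviewed patients, and the original review may have used a different interval method:

```python
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical: 23 postoperative deaths among 1073 patients (~2.1%)
lo, hi = wilson_ci(23, 1073)   # roughly (0.014, 0.032), close to the reported CI
```

The Wilson interval is preferred over the naive normal approximation for small proportions like a 2% mortality, where the latter can produce negative lower bounds.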
Abstract:
To make a comprehensive evaluation of organ-specific out-of-field doses using Monte Carlo (MC) simulations for different breast cancer irradiation techniques and to compare results with a commercial treatment planning system (TPS). Three breast radiotherapy techniques using 6 MV tangential photon beams were compared: (a) 2DRT (open rectangular fields), (b) 3DCRT (conformal wedged fields), and (c) hybrid IMRT (open conformal + modulated fields). Over 35 organs were contoured in a whole-body CT scan and organ-specific dose distributions were determined with MC and the TPS. Large differences in out-of-field doses were observed between MC and TPS calculations, even for organs close to the target volume such as the heart, the lungs and the contralateral breast (up to 70% difference). MC simulations showed that a large fraction of the out-of-field dose comes from the out-of-field head-scatter fluence (>40%), which is not adequately modeled by the TPS. Based on MC simulations, the 3DCRT technique using external wedges yielded significantly higher doses (up to a factor of 4-5 in the pelvis) than the 2DRT and the hybrid IMRT techniques, which yielded similar out-of-field doses. In sharp contrast to popular belief, the IMRT technique investigated here does not increase the out-of-field dose compared with conventional techniques and may offer the best plan. The 3DCRT technique with external wedges yields the largest out-of-field doses. For accurate out-of-field dose assessment, a commercial TPS should not be used, even for organs near the target volume (contralateral breast, lungs, heart).
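As a toy illustration of the Monte Carlo principle behind such dose calculations (not the clinical engine used in the study, which tracks scatter, energy spectra and full patient geometry), the fraction of photons traversing an attenuating slab can be estimated by sampling exponentially distributed free paths and compared with the analytic answer e^(-μd):

```python
import numpy as np

def mc_transmitted_fraction(mu, depth, n=100_000, seed=0):
    """Toy Monte Carlo: fraction of photons crossing `depth` without
    interacting, for linear attenuation coefficient `mu` (free path
    lengths are exponentially distributed with mean 1/mu)."""
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(1.0 / mu, size=n)
    return (free_paths > depth).mean()

frac = mc_transmitted_fraction(mu=0.2, depth=5.0)   # analytic: e^-1 ≈ 0.368
```

The statistical uncertainty shrinks as 1/sqrt(n), which is why organ-dose MC simulations require very large photon histories for the low out-of-field doses discussed above.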
Abstract:
BACKGROUND: Hypotension, a common intra-operative incident, bears an important potential for morbidity. It is most often manageable and sometimes preventable, which renders its study important. Therefore, we aimed at examining hospital variations in the occurrence of intra-operative hypotension and its predictors. As secondary endpoints, we determined to what extent hypotension relates to the risk of post-operative incidents and death. METHODS: We used the Anaesthesia Databank Switzerland, built on routinely and prospectively collected data on all anaesthesias in 21 hospitals. The three outcomes were assessed using multi-level logistic regression models. RESULTS: Among 147,573 anaesthesias, hypotension ranged from 0.6% to 5.2% in participating hospitals, and from 0.3% up to 12% in different surgical specialties. Most (73.4%) were minor single events. Age, ASA status, combined general and regional anaesthesia techniques, duration of surgery and hospitalization were significantly associated with hypotension. Although significantly associated, the emergency status of the surgery had a weaker effect. Hospitals' odds ratios for hypotension varied between 0.12 and 2.50 (P ≤ 0.001), even after adjusting for patient and anaesthesia factors, and for type of surgery. At least one post-operative incident occurred in 9.7% of the procedures, including 0.03% deaths. Intra-operative hypotension was associated with a higher risk of post-operative incidents and death. CONCLUSION: Wide variations remain in the occurrence of hypotension among hospitals after adjustment for risk factors. Although differential reporting from hospitals may exist, variations in anaesthesia techniques and blood pressure maintenance may also have contributed. Intra-operative hypotension is associated with morbidities and sometimes death, and constant vigilance must thus be advocated.
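A minimal sketch of the modeling idea on synthetic data: the study's multi-level logistic regression is simplified here to a plain logistic regression with hospital dummy variables (fixed rather than random hospital effects), fitted by Newton-Raphson. All names and coefficients below are invented for illustration:

```python
import numpy as np

def logistic_fit(X, y, iters=50):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        W = p * (1 - p)                            # IRLS weights
        H = X.T @ (X * W[:, None]) + 1e-9 * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

# Synthetic data: hypotension risk rising with age, plus hospital effects
rng = np.random.default_rng(2)
n = 5000
hosp = rng.integers(0, 3, n)
age = rng.uniform(20, 90, n)
true_logit = -4.0 + 0.03 * age + np.array([0.0, 0.5, -0.5])[hosp]
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)
X = np.column_stack([np.ones(n), age, hosp == 1, hosp == 2]).astype(float)
beta = logistic_fit(X, y)
# np.exp(beta[1]) is the fitted odds ratio per year of age;
# beta[2] and beta[3] are hospital effects relative to hospital 0.
```

A true multi-level model would instead treat the 21 hospital intercepts as draws from a common distribution, which is what allows the study to report a distribution of hospital odds ratios after adjustment.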
Abstract:
Much medical research is observational. The reporting of observational studies is often of insufficient quality. Poor reporting hampers the assessment of the strengths and weaknesses of a study and the generalisability of its results. Taking into account empirical evidence and theoretical considerations, a group of methodologists, researchers, and editors developed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations to improve the quality of reporting of observational studies. The STROBE Statement consists of a checklist of 22 items, which relate to the title, abstract, introduction, methods, results and discussion sections of articles. Eighteen items are common to cohort studies, case-control studies and cross-sectional studies and four are specific to each of the three study designs. The STROBE Statement provides guidance to authors about how to improve the reporting of observational studies and facilitates critical appraisal and interpretation of studies by reviewers, journal editors and readers. This explanatory and elaboration document is intended to enhance the use, understanding, and dissemination of the STROBE Statement. The meaning and rationale for each checklist item are presented. For each item, one or several published examples and, where possible, references to relevant empirical studies and methodological literature are provided. Examples of useful flow diagrams are also included. The STROBE Statement, this document, and the associated Web site (http://www.strobe-statement.org/) should be helpful resources to improve reporting of observational research.
Abstract:
Voting Advice Applications (VAAs) have become a central component of election campaigns worldwide. By matching the political preferences of voters to parties and candidates, the web application grants voters a look into their political mirror and reveals the most suitable political choices to them in terms of policy congruence. Both the dense, concise information on the electoral offer and the comparative nature of the application make VAAs an unprecedented information source for electoral decision making. In times when electoral choices are highly individualized and driven by political issue positions, an ever-increasing number of voters turn to VAAs before casting their ballots. With VAAs in high demand, the question of their effects on voters has become a pressing research topic. In various countries, survey research has been used to proclaim an impact of VAAs on electoral behavior, yet practically all studies fail to provide the scientific evidence that would allow such claims. In this thesis, I set out to systematically establish the causal link between VAA use and electoral behavior, using various data sources and appropriate statistical techniques. The focus lies on the Swiss VAA smartvote, introduced in the run-up to the 2003 Swiss federal elections and meanwhile an integral part of the national election campaign. smartvote produced over a million voting recommendations in the last Swiss federal elections for an active electorate of two million, potentially guiding a vast number of voters in their choices on the ballot. In order to determine the effect of the VAA on electoral behavior, I analyze both voting preferences and vote choice among Swiss voters during two consecutive election periods.
First, I introduce statistical techniques to adequately examine VAA effects in observational studies and use them to demonstrate that voters who used smartvote prior to the 2007 Swiss federal elections were significantly more likely to swing vote in the elections than non-users. Second, I analyze preference voting during the same election and show that the smartvote voting recommendation inclines politically knowledgeable voters to modify their ballots and cast candidate-specific preference votes. Third, to further probe the indication that smartvote use affects the preference structure of voters, I employ an experimental research design to demonstrate that voters who use the application tend to strengthen their vote propensities for their most preferred party and adapt their overall party preferences such that they consider more than one party an eligible vote option after engaging with the application. Finally, vote choice is examined for the 2011 Swiss federal election, showing once more that the VAA initiated a change of party choice among voters. In sum, this thesis presents empirical evidence for the transformative effect of the Swiss VAA smartvote on electoral behavior.
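The policy-congruence matching at the heart of a VAA can be sketched as a normalised distance between voter and candidate issue positions. This is a generic illustration with made-up positions; smartvote's actual scoring formula and answer scales may differ:

```python
import numpy as np

def matching_scores(voter, candidates):
    """Policy-congruence scores in [0, 1]: one minus the mean absolute
    (city-block) distance per issue, a common VAA matching metric.
    Positions are assumed to be coded in [0, 1] per issue."""
    d = np.abs(candidates - voter[None, :]).mean(axis=1)
    return 1.0 - d

voter = np.array([1.0, 0.0, 0.5, 1.0])              # stances on 4 issues
cands = np.array([[1.0, 0.0, 0.5, 1.0],             # perfect agreement
                  [0.0, 1.0, 0.5, 0.0]])            # largely opposed
scores = matching_scores(voter, cands)               # → [1.0, 0.25]
```

The voting recommendation is then simply the candidates (or parties) ranked by these scores, which is the "political mirror" the abstract describes.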
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate signal-to-noise ratio (SNR). Attenuation and SNR of different protocols and algorithms were analyzed with analysis of variance or Welch test depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). 
CONCLUSION: Endoleaks and in-stent thrombus of thoracic aorta were detectable to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
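The SNR metric used in the protocol comparison above (mean ROI attenuation in Hounsfield units divided by its standard deviation) can be computed directly from ROI samples. The HU values below are made up for illustration:

```python
import numpy as np

def roi_snr(hu_values):
    """Signal-to-noise ratio of a CT region of interest: mean
    attenuation (HU) over its sample standard deviation."""
    hu = np.asarray(hu_values, dtype=float)
    return hu.mean() / hu.std(ddof=1)

# Illustrative aortic ROI samples at two noise levels (invented numbers)
low_noise = roi_snr([308, 312, 310, 314, 306])    # tight spread → high SNR
high_noise = roi_snr([290, 320, 275, 305, 260])   # wide spread → low SNR
```

Raising the noise index lowers the tube current and hence the dose, at the cost of a wider HU spread and lower SNR, which is exactly the trade-off the phantom study quantifies for each reconstruction algorithm.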
Abstract:
Intraoperative cardiac imaging plays a key role during transcatheter aortic valve replacement. In recent years, new techniques and new tools for improved image quality and virtual navigation have been proposed in order to simplify and standardize stent-valve positioning and implantation. But routine performance of the new techniques may require major economic investments or specific knowledge and skills and, for this reason, they may not be accessible to the majority of cardiac centres involved in transcatheter valve replacement projects. Additionally, they still require injections of contrast medium to obtain computed images. Therefore, we have developed and describe here a very simple and intuitive method of positioning balloon-expandable stent-valves, which represents the evolution of the 'dumbbell' technique for echocardiography-guided transcatheter valve replacement without angiography. This method, based on partial inflation of the balloon catheter during positioning, traps the crimped valve in the aortic valve orifice and, consequently, very near the ideal landing zone. It does not require specific echocardiographic knowledge; it does not require angiographies, which increase the risk of postoperative kidney failure in elderly patients; and it can also be performed in centres not equipped with a hybrid operating room.
Abstract:
PURPOSE: Transanal endoscopic microsurgery provides a minimally invasive alternative to radical surgery for excision of benign and malignant rectal tumors. The purpose of this study was to review our experience with transanal endoscopic microsurgery to clarify its role in the treatment of different types of rectal pathology. METHODS: A prospective database documented all patients undergoing transanal endoscopic microsurgery from October 1996 through June 2008. We analyzed patient and operative factors, complications, and tumor recurrence. For recurrence analysis, we excluded patients with fewer than 6 months of follow-up, previous excisions, known metastases at initial presentation, and those who underwent immediate radical resection following transanal endoscopic microsurgery. RESULTS: Two hundred sixty-nine patients underwent transanal endoscopic microsurgery for benign (n = 158) and malignant (n = 111) tumors. Procedure-related complications (21%) included urinary retention (10.8%), fecal incontinence (4.1%), fever (3.8%), suture line dehiscence (1.5%), and bleeding (1.5%). Local recurrence rates for 121 benign and 83 malignant tumors were 5% for adenomas, 9.8% for T1 adenocarcinoma, 23.5% for T2 adenocarcinoma, 100% for T3 adenocarcinoma, and 0% for carcinoid tumors. All 6 (100%) recurrent adenomas were retreated with endoscopic techniques, and 8 of 17 (47%) recurrent adenocarcinomas underwent salvage procedures with curative intent. CONCLUSIONS: Transanal endoscopic microsurgery is a safe and effective method for excision of benign and malignant rectal tumors. Transanal endoscopic microsurgery can be offered for (1) curative resection of benign tumors, carcinoid tumors, and select T1 adenocarcinomas, (2) histopathologic staging in indeterminate cases, and (3) palliative resection in patients medically unfit or unwilling to undergo radical resection.