854 results for Gradient descent algorithms
Abstract:
While 3D thin-slab coronary magnetic resonance angiography (MRA) has traditionally been performed using a Cartesian acquisition scheme, spiral k-space data acquisition offers several potential advantages. However, these strategies have not been directly compared in the same subjects using similar methodologies. Thus, in the present study a comparison was made between 3D coronary MRA using Cartesian segmented k-space gradient-echo and spiral k-space data acquisition schemes. In both approaches the same spatial resolution was used and data were acquired during free breathing using navigator gating and prospective slice tracking. Magnetization preparation (T2 preparation and fat suppression) was applied to increase the contrast. For spiral imaging two different examinations were performed, using one or two spiral interleaves during each R-R interval. Spiral acquisitions were found to be superior to the Cartesian scheme with respect to the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) (both P < 0.001), and image quality. The single spiral per R-R interval acquisition had the same total scan duration as the Cartesian acquisition, but the best image quality and a 2.6-fold increase in SNR. The double-interleaf spiral approach showed a 50% reduction in scanning time, a 1.8-fold increase in SNR, and similar image quality when compared to the standard Cartesian approach. Spiral 3D coronary MRA appears to be preferable to the Cartesian scheme. The increase in SNR may be "traded" for either shorter scanning times using multiple consecutive spiral interleaves, or for enhanced spatial resolution.
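As a rough illustration of the sampling geometry being compared, the sketch below generates Archimedean spiral interleaves; all parameters are invented for illustration and are not the study's actual trajectory design. Rotating one base spiral by 2π/n yields n interleaves that jointly cover k-space, which is why the two-interleaf scan halves the number of heartbeats needed.

```python
import numpy as np

# A minimal sketch of Archimedean spiral k-space interleaves (illustrative
# parameters only): each interleaf is the base spiral rotated by
# 2*pi*i/n_interleaves, so n interleaves jointly cover k-space.
def spiral_interleaves(n_interleaves=2, n_samples=2048, k_max=1.0, n_turns=16):
    t = np.linspace(0.0, 1.0, n_samples)
    radius = k_max * t                      # linearly growing radius
    angle = 2.0 * np.pi * n_turns * t       # winding of the base spiral
    trajectories = []
    for i in range(n_interleaves):
        phi = 2.0 * np.pi * i / n_interleaves   # rotation offset per interleaf
        kx = radius * np.cos(angle + phi)
        ky = radius * np.sin(angle + phi)
        trajectories.append(np.stack([kx, ky], axis=-1))
    return trajectories  # list of (n_samples, 2) k-space coordinates

one_leaf = spiral_interleaves(n_interleaves=1)   # single spiral per R-R interval
two_leaves = spiral_interleaves(n_interleaves=2) # halves the number of heartbeats
```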
Abstract:
The noise power spectrum (NPS) is the reference metric for understanding the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-row and a 128-row MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes, with an identical CT dose index on both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed an important variation in magnitude along the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS remained constant. Strong effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
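For readers unfamiliar with the metric, a minimal sketch of an ensemble-averaged 2D NPS estimate from noise-only ROIs of a uniform phantom is given below. It follows the common normalisation NPS = (Δx·Δy/N²)·⟨|FFT(ROI − mean)|²⟩; conventions (and the background-detrending step) vary between implementations, so this is an assumption, not the authors' exact pipeline.

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """Ensemble-averaged 2D noise power spectrum from square noise-only ROIs.

    rois: array (n_rois, N, N) extracted from a uniform (water) phantom,
    ideally after subtracting a slowly varying background fit.
    """
    n_rois, N, _ = rois.shape
    nps = np.zeros((N, N))
    for roi in rois:
        noise = roi - roi.mean()                  # remove the DC component
        nps += np.abs(np.fft.fft2(noise)) ** 2
    nps *= pixel_size_mm ** 2 / (n_rois * N * N)  # standard NPS normalisation
    return np.fft.fftshift(nps)                   # zero frequency at the centre
```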
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer, leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case for iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we tuned the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water: the contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed leading to the TTF. As expected, the first results showed that the TTF depends on image contrast and noise level for both ASIR and MBIR. Moreover, FBP also proved to be contrast- and noise-dependent when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
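Under one common convention, the NPW figure of merit with the TTF substituted for the MTF reads SNR² = [∫|W|²·TTF² df]² / ∫|W|²·TTF²·NPS df, with W the task function. A hedged sketch in discrete radial form follows; the function and grid names are ours, not the paper's, and normalisation conventions differ between groups.

```python
import numpy as np

def snr_npw(task_ft, ttf, nps, df):
    """NPW model observer SNR with the TTF substituted for the MTF.

    task_ft: magnitude of the task (signal) Fourier transform on a radial
    frequency grid; ttf and nps: TTF and NPS sampled on the same grid;
    df: radial frequency bin width. Radial symmetry in 2D is assumed,
    hence the 2*pi*f weighting of the integrals.
    """
    f = np.arange(len(ttf)) * df
    signal = task_ft ** 2 * ttf ** 2                  # expected signal power
    num = (2 * np.pi * np.sum(f * signal) * df) ** 2
    den = 2 * np.pi * np.sum(f * signal * nps) * df
    return np.sqrt(num / den)
```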
Abstract:
OBJECTIVE: Our objective was to compare two state-of-the-art coronary MRI (CMRI) sequences with regard to image quality and diagnostic accuracy for the detection of coronary artery disease (CAD). SUBJECTS AND METHODS: Twenty patients with known CAD were examined with a navigator-gated and corrected free-breathing 3D segmented gradient-echo (turbo field-echo) CMRI sequence and a steady-state free precession sequence (balanced turbo field-echo). CMRI was performed in a transverse plane for the left coronary artery and a double-oblique plane for the right coronary artery system. Subjective image quality (1- to 4-point scale, with 1 indicating excellent quality) and objective image quality parameters were independently determined for both sequences. Sensitivity, specificity, and accuracy for the detection of significant (≥50% diameter) coronary artery stenoses were determined, with invasive catheter X-ray coronary angiography as the reference standard. RESULTS: Subjective image quality was superior for the balanced turbo field-echo approach (1.8 ± 0.9 vs 2.3 ± 1.0 for turbo field-echo; p < 0.001). Vessel sharpness, signal-to-noise ratio, and contrast-to-noise ratio were all superior for the balanced turbo field-echo approach (p < 0.01 for signal-to-noise ratio and contrast-to-noise ratio). Of the 103 segments, 18% of turbo field-echo segments and 9% of balanced turbo field-echo segments had to be excluded from disease evaluation because of insufficient image quality. Sensitivity, specificity, and accuracy for the detection of significant coronary artery stenoses in the evaluated segments were 92%, 67%, and 85%, respectively, for turbo field-echo and 82%, 82%, and 81%, respectively, for balanced turbo field-echo. CONCLUSION: Balanced turbo field-echo offers improved image quality with significantly fewer nondiagnostic segments when compared with turbo field-echo. For the detection of CAD, both sequences showed comparable accuracy for the visualized segments.
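The per-segment figures reported above follow directly from the standard confusion-matrix definitions; a trivial sketch (the counts in the usage line are illustrative, since the per-sequence 2×2 tables are not given in the abstract):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-segment detection metrics from a 2x2 confusion table, with
    invasive X-ray angiography as the reference standard."""
    sensitivity = tp / (tp + fn)                  # stenoses correctly detected
    specificity = tn / (tn + fp)                  # healthy segments correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts only:
print(diagnostic_metrics(tp=23, fp=10, tn=45, fn=2))  # -> (0.92, 0.818..., 0.85)
```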
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some are based on randomization techniques and others on k-anonymity concepts; either can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques for obtaining an anonymized graph with a desired k-anonymity value, analyzing the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
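As one concrete instance of a k-anonymity notion for graphs (k-degree anonymity, a common choice in this literature; the compared algorithms may target other variants), a graph is k-anonymous when every node shares its degree with at least k−1 other nodes:

```python
from collections import Counter

def k_degree_anonymity(degrees):
    """Smallest degree-class size: the graph is k-degree-anonymous for any
    k up to this value, since every node then shares its degree with at
    least k-1 others (one common k-anonymity notion for graphs)."""
    return min(Counter(degrees).values())

# Example degree sequence of a 6-node graph: every degree value occurs
# at least twice, so the graph is 2-degree-anonymous.
print(k_degree_anonymity([3, 3, 2, 2, 1, 1]))  # -> 2
```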
Abstract:
Upward migration of plant species due to climate change has become evident in several European mountain ranges. It is still, however, unclear whether certain plant traits increase the probability that a species will colonize mountain summits or vanish, and whether these traits differ with elevation. Here, we used data from a repeat survey of the occurrence of plant species on 120 summits, ranging from 2449 to 3418 m asl, in south-eastern Switzerland to identify plant traits that increase the probability of colonization or extinction in the 20th century. Species numbers increased across all plant traits considered. With some traits, however, numbers increased proportionally more. The most successful colonizers seemed to prefer warmer temperatures and well-developed soils. They produced achene fruits and/or seeds with pappus appendages. Conversely, cushion plants and species with capsule fruits were less efficient as colonizers. Observed changes in traits along the elevation gradient mainly corresponded to the natural distribution of traits. Extinctions did not seem to be clearly related to any trait. Our study showed that plant traits varied along both temporal and elevational gradients. While seeds with pappus seemed to be advantageous for colonization, most of the trait changes also mirrored previous gradients of traits along elevation and hence illustrated the general upward migration of plant species. An understanding of the trait characteristics of colonizing species is crucial for predicting future changes in mountain vegetation under climate change.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydro-geological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
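Since the GRNN is the thesis's proposed engine for automatic mapping, a minimal sketch may help: a GRNN is essentially Nadaraya-Watson kernel regression, with a single bandwidth parameter sigma usually tuned by cross-validation. The data and sigma below are invented for illustration.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General Regression Neural Network (Nadaraya-Watson kernel regression):
    the prediction is a Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
    return (w @ y_train) / w.sum(axis=1)          # weighted mean of targets

# Automatic mapping of scattered measurements onto a regular grid
# (coordinates in km; synthetic data for illustration):
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 2))            # sampled locations
y = np.sin(X[:, 0] / 15.0) + 0.1 * rng.standard_normal(200)
grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50)), -1).reshape(-1, 2)
z = grnn_predict(X, y, grid, sigma=5.0)           # map values on the grid
```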
Abstract:
In Amazonia, topographical variations in soil and forest structure within "terra firme" ecosystems are important factors correlated with the distribution of terrestrial invertebrates. The objective of this work was to assess the effects of soil clay content and slope on ant species distribution over a 25 km² grid covering the natural topographic continuum. Using three complementary sampling methods (sardine baits, pitfall traps and litter samples extracted in Winkler sacks), 300 subsamples per method were taken in 30 plots distributed over a wet tropical forest in the Ducke Reserve (Manaus, AM, Brazil). A total of 26,814 individuals from 11 subfamilies, 54 genera, 85 species and 152 morphospecies were recorded (Pheidole represented 37% of all morphospecies). The genus Eurhopalothrix was recorded for the first time in the reserve. Species number was not correlated with slope or clay content, except for the species sampled from litter. However, Principal Coordinate Analysis indicated that the main pattern of species composition from pitfall and litter samples was related to clay content. Almost half of the species were found only in valleys or only on plateaus, which suggests that most of them are habitat specialists. In Central Amazonia, soil texture is usually correlated with vegetation structure and moisture content, creating different microhabitats, which probably accounts for the observed differences in ant community structure.
Abstract:
The objective of this work was to evaluate the effect of the elevation gradient on the diversity of Collembola in a temperate forest on the northeast slope of Iztaccíhuatl Volcano, Mexico. Four expeditions were organized from November 2003 to August 2004, at four altitudes (2,753, 3,015, 3,250 and 3,687 m a.s.l.). At each site, air temperature, CO2 concentration, humidity, and terrain inclination were measured. The influence of abiotic factors on faunal composition at the four collecting sites was evaluated with canonical correspondence analysis (CCA). A total of 24,028 specimens were obtained, representing 12 families, 44 genera and 76 species. Mesaphorura phlorae, Proisotoma ca. tenella and Parisotoma ca. notabilis were the most abundant species. The highest diversity and evenness were recorded at 3,250 m (H' = 2.85; J' = 0.73). Axes 1 and 2 of the CCA explained 67.4% of the variance in species composition, with CO2 and altitude best explaining axis 1, while slope and humidity were more strongly correlated with axis 2. The results showed that CO2 is an important factor for explaining the Collembola species assemblage, together with slope and humidity.
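For reference, the reported H' and J' values follow the standard Shannon diversity and Pielou evenness formulas, H' = -Σ p_i ln p_i and J' = H'/ln S; a minimal sketch:

```python
import numpy as np

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i ln p_i) and Pielou evenness J' = H'/ln(S)
    from per-species abundance counts at one site."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()   # relative abundances
    h_prime = -np.sum(p * np.log(p))
    j_prime = h_prime / np.log(p.size)      # S = number of species present
    return h_prime, j_prime

# Illustrative abundances (not the study's data):
h, j = shannon_diversity([120, 80, 40, 30, 10, 5])
```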
Abstract:
Much of the analytical modeling of morphogen profiles is based on simplistic scenarios, in which the source is abstracted to be point-like and fixed in time, and only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism that allows modeling diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate our theory with the explicit example of Bicoid gradient establishment in Drosophila embryos. Gradient formation arises by protein translation from an mRNA distribution, followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin. Furthermore, the steady-state solutions are fully compatible with a decreasing exponential behavior of the profile. We also consider the case of a dynamic source (e.g. bicoid mRNA diffusion), for which a protein profile similar to the ones obtained from static sources can be achieved.
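The core of the formalism can be sketched concretely. For diffusion with linear degradation, D C'' - k C + s(x) = 0, the free-space steady-state Green's function is G(x) = exp(-|x|/λ) / (2√(Dk)) with decay length λ = √(D/k), and an arbitrary (point-like or extended) source enters by superposition. A numerical sketch under those assumptions (parameters illustrative; an effectively infinite domain is assumed, so boundary effects are ignored):

```python
import numpy as np

# Steady state of D*C'' - k*C + s(x) = 0 via the Green's function
# G(x) = exp(-|x|/lam) / (2*sqrt(D*k)), lam = sqrt(D/k).
def steady_state_profile(x, source, D=1.0, k=0.1):
    lam = np.sqrt(D / k)
    dx = x[1] - x[0]
    G = np.exp(-np.abs(x[:, None] - x[None, :]) / lam) / (2.0 * np.sqrt(D * k))
    return G @ source * dx     # superposition integral over the source profile

x = np.linspace(0, 50, 501)
point_like = np.where(x < 0.1, 1.0, 0.0)    # near-point source at the origin
extended = np.exp(-0.5 * (x / 5.0) ** 2)    # spatially extended source
c_point = steady_state_profile(x, point_like)
c_ext = steady_state_profile(x, extended)   # smoother, more realistic gradient
```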
Abstract:
Inference in Markov random field image segmentation models is usually performed using iterative methods that adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding at least as good segmentation results.
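To make the advocated scheme concrete, here is a hedged sketch (our illustration, not the authors' exact algorithm) of an EM-like E-step for a 2D Potts-prior MRF in which voxel posteriors are updated sequentially and in place, so each update immediately sees its neighbours' refreshed values; this in-place sweep is what "asynchronous" refers to, as opposed to recomputing all posteriors from the previous iterate at once.

```python
import numpy as np

def asynchronous_e_step(post, log_lik, beta, n_sweeps=1):
    """Mean-field-style E-step with asynchronous (in-place) voxel updates.

    post: (X, Y, K) current class posteriors; log_lik: (X, Y, K) class
    log-likelihoods (e.g. Gaussian); beta: Potts interaction weight.
    """
    X, Y, K = post.shape
    for _ in range(n_sweeps):
        for i in range(X):
            for j in range(Y):
                nb = np.zeros(K)                # soft neighbour counts
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    if 0 <= i + di < X and 0 <= j + dj < Y:
                        nb += post[i + di, j + dj]
                logits = log_lik[i, j] + beta * nb
                logits -= logits.max()          # numerical stabilisation
                p = np.exp(logits)
                post[i, j] = p / p.sum()        # in-place (asynchronous) update
    return post
```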
Abstract:
Post-capillary pulmonary hypertension (PH) is defined by a mean pulmonary artery pressure (mPAP) ≥ 25 mmHg and a pulmonary artery occlusion pressure (PAOP) > 15 mmHg. An increase in PAP can be either passive, i.e. retrograde transmission of increased left-heart pressure (transpulmonary gradient, TPG ≤ 12 mmHg), or active, i.e. an out-of-proportion elevation of PAP due to increased vascular tone and vascular remodeling (TPG > 12 mmHg). The gradient between diastolic pulmonary artery pressure (dPAP) and PAOP, which is normal (≤ 5 mmHg) in post-capillary PH, is no longer used for the diagnosis and evaluation of PH in the latest classification (Dana Point 2008). Aims: to analyze the clinical, echocardiographic and hemodynamic data of post-capillary PH in patients referred to a PH referral center, and to evaluate the role of the dPAP-PAOP gradient in the management of PH. Methods: we retrospectively analyzed the clinical, hemodynamic and echocardiographic data of patients diagnosed with PH by cardiac catheterization between January 2009 and June 2011 at the PH referral center of the Centre Hospitalier Universitaire Vaudois (CHUV). Results: 40% of patients met the criteria for post-capillary PH; 33% of patients had PH meeting the definition of "out of proportion" PH with a TPG > 12 mmHg; 74% of patients with post-capillary PH had associated left-heart disease with echocardiographic signs of diastolic dysfunction; of the 27 patients with group 2 PH, 44% had several risk factors (RF) for PH; 75% of the patients with left-heart disease plus another RF for PH had a dPAP-PAOP gradient > 5 mmHg, versus 8% of those without another RF (p-value 0.0075). Conclusion: post-capillary PH is frequent among patients referred to a PH referral center for suspected PH. In our cohort, 85% of patients with post-capillary PH had out-of-proportion PH, of whom 44% had a non-cardiac RF that could underlie the PH. The dPAP-PAOP gradient appears to be a better discriminating factor than the TPG for the characterization and classification of PH.
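The two gradients at issue are simple arithmetic on catheterization pressures. A small sketch of the thresholds used in the study (variable names are ours; the dPAP-PAOP gradient is often called the diastolic pressure gradient, DPG):

```python
def classify_postcapillary_ph(pap_mean, pap_diastolic, paop):
    """Pressure gradients from right-heart catheterization (all in mmHg):
    TPG = mPAP - PAOP ('out of proportion' if > 12),
    DPG = dPAP - PAOP (normal if <= 5 in post-capillary PH)."""
    tpg = pap_mean - paop
    dpg = pap_diastolic - paop
    post_capillary = pap_mean >= 25 and paop > 15
    out_of_proportion = post_capillary and tpg > 12
    return tpg, dpg, post_capillary, out_of_proportion

# Illustrative pressures: mPAP 40, dPAP 22, PAOP 18
print(classify_postcapillary_ph(40, 22, 18))  # -> (22, 4, True, True)
```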
Abstract:
This article reports on a lossless data hiding scheme for digital images in which the data hiding capacity is determined either by the minimum acceptable subjective quality or by the demanded capacity. In the proposed method, data are hidden within the image prediction errors, where well-known prediction algorithms such as the median edge detector (MED), gradient adjusted prediction (GAP) and Jiang prediction are tested for this purpose. First, the histogram of the prediction errors of the image is computed; then, based on the required capacity or desired image quality, the prediction error values of frequencies larger than this capacity are shifted. The empty space created by this shift is used for embedding the data. Experimental results show the distinct superiority of the image prediction-error histogram over the conventional image histogram itself, owing to the much narrower spectrum of the former. We have also devised an adaptive method for hiding data, in which subjective quality is traded for data hiding capacity: the positive and negative error values are chosen such that the sum of their frequencies on the histogram is just above the given capacity or above a certain quality.
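Of the predictors tested, the MED (the median edge detector, as used in LOCO-I/JPEG-LS) is easy to state. The sketch below computes the prediction-error image whose sharply peaked histogram makes the shifting-based embedding efficient; the embedding step itself is not shown, and the border convention is an assumption of ours.

```python
import numpy as np

def med_predict(a, b, c):
    """Median edge detector (MED) predictor: a = left, b = above,
    c = upper-left neighbour of the current pixel."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def prediction_errors(img):
    """Prediction-error image for a grayscale array; the first row and
    column are kept as-is (a common convention, assumed here)."""
    img = img.astype(int)
    err = np.zeros_like(img)
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            err[i, j] = img[i, j] - med_predict(img[i, j - 1],
                                                img[i - 1, j],
                                                img[i - 1, j - 1])
    return err  # its histogram peaks sharply at 0, ideal for histogram shifting
```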