30 results for INTERPOLATION



Introduction: Low brain tissue oxygen pressure (PbtO2) is associated with worse outcome in patients with severe traumatic brain injury (TBI). However, it is unclear whether brain tissue hypoxia is merely a marker of injury severity or a predictor of prognosis, independent of intracranial pressure (ICP) and injury severity. Hypothesis: We hypothesized that brain tissue hypoxia is an independent predictor of outcome in patients with severe TBI, irrespective of elevated ICP and of the severity of cerebral and systemic injury. Methods: This observational study was conducted at the Neurological ICU, Hospital of the University of Pennsylvania, an academic level I trauma center. Patients admitted with severe TBI who had PbtO2 and ICP monitoring were included in the study. PbtO2, ICP, mean arterial pressure (MAP) and cerebral perfusion pressure (CPP = MAP - ICP) were monitored continuously and recorded prospectively every 30 min. Using linear interpolation, the duration and cumulative dose (area under the curve, AUC) of brain tissue hypoxia (PbtO2 < 15 mm Hg), elevated ICP (> 20 mm Hg) and low CPP (< 60 mm Hg) were calculated, and their association with outcome at hospital discharge, dichotomized as good (Glasgow Outcome Score [GOS] 4-5) vs. poor (GOS 1-3), was analyzed. Results: A total of 103 consecutive patients, monitored for an average of 5 days, were studied. Brain tissue hypoxia was observed in 66 (64%) patients even when ICP was < 20 mm Hg and CPP > 60 mm Hg (72 +/- 39% and 49 +/- 41% of brain hypoxic time, respectively). Compared with patients with good outcome, those with poor outcome had a longer duration of brain hypoxia (1.7 +/- 3.7 vs. 8.3 +/- 15.9 hrs, P<0.01), as well as a longer duration (11.5 +/- 16.5 vs. 21.6 +/- 29.6 hrs, P=0.03) and a greater cumulative dose (56 +/- 93 vs. 143 +/- 218 mm Hg*hrs, P<0.01) of elevated ICP. By multivariable logistic regression, admission Glasgow Coma Scale (OR 0.83, 95% CI: 0.70-0.99, P=0.04), Marshall CT score (OR 2.42, 95% CI: 1.42-4.11, P<0.01), APACHE II (OR 1.20, 95% CI: 1.03-1.43, P=0.03), and the duration of brain tissue hypoxia (OR 1.13, 95% CI: 1.01-1.27, P=0.04) were all significantly associated with poor outcome. No independent association was found between the AUC of elevated ICP and outcome (OR 1.01, 95% CI 0.97-1.02, P=0.11) in our prospective cohort. Conclusions: In patients with severe TBI, brain tissue hypoxia is frequent despite normal ICP and CPP and is associated with poor outcome, independent of intracranial hypertension and the severity of cerebral and systemic injury. Our findings indicate that PbtO2 is a strong physiologic prognostic marker after TBI. Further study is warranted to examine whether PbtO2-directed therapy improves outcome in severely head-injured patients.
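The duration and cumulative dose (AUC) metrics described in the Methods can be derived from the half-hourly PbtO2 recordings by linear interpolation. The Python sketch below is only a minimal illustration of that computation, not the study's analysis code; the function name and toy data are hypothetical, while the 15 mm Hg threshold and 30-min sampling follow the abstract.

```python
import numpy as np

def hypoxia_burden(t_hr, pbto2, threshold=15.0):
    """Duration (h) and cumulative dose (mm Hg*h) of PbtO2 below a threshold.

    t_hr: sample times in hours (every 0.5 h in the study); pbto2: PbtO2 in mm Hg.
    The signal is linearly interpolated on a 1-min grid so that threshold
    crossings between samples contribute to the burden.
    """
    t_fine = np.arange(t_hr[0], t_hr[-1], 1.0 / 60.0)
    p_fine = np.interp(t_fine, t_hr, pbto2)          # linear interpolation
    below = p_fine < threshold
    dt = 1.0 / 60.0
    duration = below.sum() * dt                      # hours spent below threshold
    dose = np.sum(threshold - p_fine[below]) * dt    # area under the threshold, mm Hg*h
    return duration, dose

# Toy example: 4 hours of half-hourly readings
t = np.arange(0.0, 4.5, 0.5)
p = np.array([22, 18, 14, 12, 13, 16, 19, 21, 23], float)
print(hypoxia_burden(t, p))
```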


In groundwater applications, Monte Carlo methods are employed to model the uncertainty on geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered to estimate the uncertainty. We propose instead to also use the information contained in the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised; both employ the difference between approximate and exact medoid solutions, but they differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the MsFV results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques for selecting a subset of realizations.
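As a rough illustration of the two correction strategies, the sketch below assumes each realization is summarized by a scalar approximate response. The array names (y_approx, labels, medoid_idx, y_exact_medoid) are hypothetical, and interpolating the medoid errors along the approximate-response axis is only one plausible reading of the Global Error Model.

```python
import numpy as np

def local_error_correction(y_approx, labels, medoid_idx, y_exact_medoid):
    """Local Error Model sketch: correct each realization with the error of the
    medoid of its own cluster (labels[i] = cluster index of realization i)."""
    err = y_exact_medoid - y_approx[medoid_idx]      # error observed at each medoid
    return y_approx + err[labels]

def global_error_correction(y_approx, medoid_idx, y_exact_medoid):
    """Global Error Model sketch: piecewise-linear interpolation of all medoid
    errors, applied to every realization regardless of its cluster."""
    y_med = y_approx[medoid_idx]
    err = y_exact_medoid - y_med
    order = np.argsort(y_med)
    corr = np.interp(y_approx, y_med[order], err[order])
    return y_approx + corr
```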


The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
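A GRNN is, in essence, Nadaraya-Watson kernel regression whose only free parameter is the kernel bandwidth, which is tuned automatically by cross-validation. The minimal sketch below, assuming an isotropic Gaussian kernel and illustrative function names, shows the idea; an anisotropic variant would use one bandwidth per coordinate.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN (Nadaraya-Watson kernel regression) with an isotropic Gaussian kernel."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def tune_sigma(X, y, sigmas):
    """Automatic isotropic tuning by leave-one-out cross-validation."""
    best, best_err = None, np.inf
    for s in sigmas:
        err = 0.0
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            err += (grnn_predict(X[mask], y[mask], X[i:i + 1], s)[0] - y[i]) ** 2
        if err < best_err:
            best, best_err = s, err
    return best
```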


Objective: To implement a carotid-sparing protocol using helical tomotherapy (HT) in T1N0 squamous-cell laryngeal carcinoma. Materials/Methods: Between July and August 2010, 7 men with stage T1N0 laryngeal carcinoma were included in this study. Age ranged from 47 to 74 years. Staging included endoscopic examination, CT scan and MRI when indicated. The planned irradiation dose was 70 Gy in 35 fractions over 7 weeks. A simple treatment planning algorithm for carotid sparing was used: maximum point dose to the carotids 35 Gy, to the spinal cord 30 Gy, and 100% of the PTV volume to be covered with 95% of the prescribed dose. The carotid volume of interest extended to 1 cm above and below the PTV. Doses to the carotid arteries, critical organs, and planned target volume (PTV) were compared with our standard laryngeal irradiation protocol. Daily megavoltage scans were obtained before each fraction. When necessary, the Planned Adaptive software (TomoTherapy Inc., Madison, WI) was used to evaluate the need for re-planning, which was never indicated. Dose data were extracted using the VelocityAI software (Atlanta, GA), and data normalization and dose-volume histogram (DVH) interpolation were performed using the Igor Pro software (Portland, OR). Results: A significant (p < 0.05) carotid dose sparing compared to our standard protocol was achieved, with an average maximum point dose of 38.3 Gy (standard deviation [SD] 4.05 Gy) and an average mean dose of 18.59 Gy (SD 0.83 Gy). In all patients, 95% of the carotid volume received less than 28.4 Gy (SD 0.98 Gy). The average maximum point dose to the spinal cord was 25.8 Gy (SD 3.24 Gy). The PTV was fully covered with more than 95% of the prescribed dose for all patients, with an average maximum point dose of 74.1 Gy and an absolute maximum dose in a single patient of 75.2 Gy. To date, the clinical outcomes have been excellent. Three patients (42%) developed stage 1 mucositis that was conservatively managed, and all patients presented mild to moderate dysphonia. All adverse effects resolved spontaneously in the month following the end of treatment. The early local control rate is 100% at 4-5 months of post-treatment follow-up. Conclusions: HT allows a clinically significant decrease of the carotid irradiation dose compared to standard irradiation protocols, with an acceptable spinal cord dose tradeoff. Moreover, this technique allows the PTV to be homogeneously covered with a curative irradiation dose. Daily control imaging adds a margin of safety, especially when working with high dose gradients. Further investigations and follow-up are underway to better evaluate the late clinical outcomes, especially the local control rate, late laryngeal and vascular toxicity, and the expected potential impact on cerebrovascular events.
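Metrics such as the mean dose or the dose received by 95% of the carotid volume are read from a cumulative DVH, which typically involves interpolating between dose bins. The sketch below is a generic, hypothetical illustration of that step, not the Igor Pro workflow used in the study.

```python
import numpy as np

def cumulative_dvh(dose_voxels, bin_width=0.1):
    """Cumulative DVH: fraction of the structure volume receiving at least dose D."""
    edges = np.arange(0.0, dose_voxels.max() + bin_width, bin_width)
    volume_frac = np.array([(dose_voxels >= d).mean() for d in edges])
    return edges, volume_frac

def dose_at_volume(edges, volume_frac, v=0.95):
    """Interpolate the DVH to read, e.g., the dose covering 95% of the volume."""
    # volume_frac decreases with dose, so both arrays are reversed for np.interp
    return np.interp(v, volume_frac[::-1], edges[::-1])
```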


Introduction. Development of the fetal brain surface with concomitant gyrification is one of the major maturational processes of the human brain. First delineated by postmortem studies or by ultrasound, MRI has recently become a powerful tool for studying in vivo the structural correlates of brain maturation. However, the quantitative measurement of fetal brain development is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution, the partial volume effect and the changing appearance of the developing brain. Today, extensive efforts are made to deal with the "post-acquisition" reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution (Rousseau, F., 2006; Jiang, S., 2007). We here propose a framework devoted to the segmentation of the basal ganglia, the gray-white tissue segmentation, and in turn the 3D cortical reconstruction of the fetal brain. Method. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences in fetuses aged from 29 to 32 gestational weeks (slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired (about 1 min per volume). First, each volume is manually segmented to extract the fetal brain from surrounding fetal and maternal tissues. Inhomogeneity intensity correction and linear intensity normalization are then performed. A high-resolution image with an isotropic voxel size of 1.09 mm is created for each fetus as previously published by others (Rousseau, F., 2006). B-splines are used for the scattered data interpolation (Lee, 1997). Then, basal ganglia segmentation is performed on this super-reconstructed volume using an active contour framework with a level set implementation (Bach Cuadra, M., 2010). Once the basal ganglia are removed from the image, brain tissue segmentation is performed (Bach Cuadra, M., 2009). The resulting white matter image is then binarized and further given as an input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/) to provide accurate three-dimensional reconstructions of the fetal brain. Results. High-resolution images of the cerebral fetal brain, as obtained from the low-resolution acquired MRI, are presented for 4 subjects of age ranging from 29 to 32 GA. An example is depicted in Figure 1. Accuracy of the automated basal ganglia segmentation is compared with manual segmentation using the Dice similarity index (DSI), with values above 0.7 considered to be very good agreement. In our sample we observed DSI values between 0.785 and 0.856. We further show the results of gray-white matter segmentation overlaid on the high-resolution gray-scale images. The results are visually checked for accuracy using the same principles as commonly accepted in adult neuroimaging. Preliminary 3D cortical reconstructions of the fetal brain are shown in Figure 2. Conclusion. We hereby present a complete pipeline for the automated extraction of an accurate three-dimensional cortical surface of the fetal brain. These results are preliminary but promising, with the ultimate goal of providing a "movie" of normal gyral development.
In turn, a precise knowledge of normal fetal brain development will allow the quantification of subtle and early but clinically relevant deviations. Moreover, a precise understanding of the gyral development process may help to build hypotheses to understand the pathogenesis of several neurodevelopmental conditions in which gyrification has been shown to be altered (e.g. schizophrenia, autism). References. Rousseau, F. (2006), 'Registration-Based Approach for Reconstruction of High-Resolution In Utero Fetal MR Brain Images', Academic Radiology, vol. 13, no. 9, pp. 1072-1081. Jiang, S. (2007), 'MRI of Moving Subjects Using Multislice Snapshot Images With Volume Reconstruction (SVR): Application to Fetal, Neonatal, and Adult Brain Studies', IEEE Transactions on Medical Imaging, vol. 26, no. 7, pp. 967-980. Lee, S. (1997), 'Scattered data interpolation with multilevel B-splines', IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228-244. Bach Cuadra, M. (2010), 'Central and Cortical Gray Matter Segmentation of Magnetic Resonance Images of the Fetal Brain', ISMRM Conference. Bach Cuadra, M. (2009), 'Brain tissue segmentation of fetal MR images', MICCAI.
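The Dice similarity index used in the Results to validate the automated basal ganglia segmentation is a simple overlap measure; a minimal sketch (illustrative function name, not the authors' code) is:

```python
import numpy as np

def dice_similarity(seg_a, seg_b):
    """DSI = 2|A intersect B| / (|A| + |B|) between two binary segmentations,
    e.g. automated vs. manual basal ganglia masks."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```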


The present research deals with an important public health threat, namely the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that have to be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data, but it was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K-Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top approach in terms of method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for the modeling of high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
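Declustering, one of the exploratory tools mentioned above, reweights preferentially sampled locations so that the global histogram is better approximated. The cell-declustering sketch below is a generic illustration with hypothetical array names and cell size, not the specific procedure developed in the thesis.

```python
import numpy as np

def cell_declustering_weights(coords, cell_size):
    """Weight each sample inversely to the number of samples sharing its grid cell,
    so clustered measurements do not dominate the global statistics."""
    cells = np.floor(np.asarray(coords) / cell_size).astype(int)
    _, inv, counts = np.unique(cells, axis=0, return_inverse=True, return_counts=True)
    w = 1.0 / counts[inv]
    return w / w.sum()

# Declustered mean of indoor radon values z at 2-D locations xy (hypothetical arrays):
# w = cell_declustering_weights(xy, cell_size=5000.0)   # e.g. 5 km cells
# z_mean = np.sum(w * z)
```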


Images of myocardial strain can be used to diagnose heart disease, plan and monitor treatment, and learn about cardiac structure and function. Three-dimensional (3D) strain is typically quantified using many magnetic resonance (MR) images obtained in two or three orthogonal planes. Problems with this approach include long scan times, image misregistration, and through-plane motion. This article presents a novel method for calculating cardiac 3D strain using a stack of two or more images acquired in only one orientation. The zHARP pulse sequence encodes in-plane motion using MR tagging and out-of-plane motion using phase encoding, and has previously been shown to be capable of computing 3D displacement within a single image plane. Here, data from two adjacent image planes are combined to yield a 3D strain tensor at each pixel; stacks of zHARP images can be used to derive stacked arrays of 3D strain tensors without imaging multiple orientations and without numerical interpolation. The performance and accuracy of the method are demonstrated in vitro on a phantom and in vivo in four healthy adult human subjects.


Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, for extended climatological analyses, for the assimilation of data into numerical weather prediction models, for preparing inputs to hydrological models, and for real-time monitoring and short-term forecasting of weather.

In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography.

With the advent of high-resolution digital elevation models, the field of spatial prediction met new horizons. In fact, by exploiting image processing tools along with physical heuristics, a large number of terrain features which account for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold-air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes.

Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial autocorrelation in the original space, making the use of classical geostatistics intricate.

The challenges explored during the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of the predictions.

The resulting maps of average wind speed find applications within renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
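A minimal sketch of the kind of kernel-based regression described here, assuming a design matrix of geographical coordinates plus DEM-derived terrain features and using scikit-learn's SVR in place of the thesis' specific implementation (the data below are placeholders):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

# Placeholder data: easting, northing, elevation and multi-scale terrain features
# (slope, convexity, exposure) per station; target = measured mean wind speed.
X = np.random.rand(200, 8)
y = np.random.rand(200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    model,
    {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1], "svr__gamma": ["scale", 0.1]},
    cv=5,  # model complexity tuned by cross-validation
)
grid.fit(X, y)
# In practice, prediction is done on the full grid of DEM-derived feature vectors.
wind_speed_grid = grid.predict(X)
```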


The noise power spectrum (NPS) is the reference metric for understanding the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPS were computed for different acquisition and reconstruction parameters. A 64- and a 128-MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. 2D and 3D NPS were then computed. In axial acquisition mode, the 2D axial NPS showed an important variation in magnitude as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed, while the magnitude of the NPS remained constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from the 3D measurements. The noise properties of volumes measured on last-generation MDCTs were studied using a local 3D NPS metric. However, the impact of noise non-stationarity may need further investigation.
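For reference, a 2D NPS is usually estimated by averaging periodograms of detrended ROIs extracted from repeated water-phantom scans. The sketch below, assuming simple mean subtraction for detrending and hypothetical argument names, illustrates the computation but is not the exact procedure of the study.

```python
import numpy as np

def nps_2d(rois, pixel_size):
    """2D noise power spectrum from an ensemble of ROIs.

    rois: array (n_roi, ny, nx) of noise-only patches; pixel_size: (dy, dx) in mm.
    Returns frequency axes (fx, fy) and the NPS in (signal units)^2 * mm^2.
    """
    rois = np.asarray(rois, float)
    n, ny, nx = rois.shape
    nps = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()                      # simple detrending (assumption)
        nps += np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    nps *= pixel_size[0] * pixel_size[1] / (n * nx * ny)
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size[1]))
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size[0]))
    return fx, fy, nps
```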


Résumé. This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient for modeling. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to the geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organizing maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach, based on experimental variography, and according to machine learning principles. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect spatial patterns that can be described by such statistics. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. General regression neural networks are proposed as an efficient solution to this task.
The performance of GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools: Machine Learning Office. This software collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly and easy-to-use interface.

Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence. It is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of the geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. They are competitive in efficiency with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail: from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least as described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools - Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.


This paper presents general regression neural networks (GRNN) as a nonlinear regression method for the interpolation of monthly wind speeds in complex Alpine orography. The GRNN is trained using data from the Swiss meteorological networks to learn the statistical relationship between topographic features and wind speed. Terrain convexity, slope and exposure are taken into account by extracting features from the digital elevation model at different spatial scales using specialised convolution filters. A database of gridded monthly wind speeds is then constructed by applying the GRNN in prediction mode over the period 1968-2008. This study demonstrates that using topographic features as inputs to the GRNN significantly reduces cross-validation errors with respect to low-dimensional models integrating only geographical coordinates and terrain height for the interpolation of wind speed. The spatial predictability of wind speed is found to be lower in summer than in winter due to more complex and weaker wind-topography relationships. The relevance of these relationships is studied using an adaptive version of the GRNN algorithm which allows the useful terrain features to be selected by eliminating the noisy ones. This research provides a framework for extending low-dimensional interpolation models to high-dimensional spaces by integrating additional features accounting for the topographic conditions at multiple spatial scales. Copyright (c) 2012 Royal Meteorological Society.
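The terrain features fed to the GRNN are obtained by convolving the digital elevation model at several spatial scales. A minimal sketch of such multi-scale feature extraction, using Gaussian smoothing as a stand-in for the paper's specialised convolution filters (scales and function names are illustrative), is:

```python
import numpy as np
from scipy import ndimage

def terrain_features(dem, scales_px=(2, 8, 32)):
    """Multi-scale terrain features from a DEM grid (assumes square pixels).

    Convexity is approximated by a difference of Gaussians (elevation minus
    smoothed elevation); slope by the gradient magnitude of the smoothed surface.
    """
    feats = []
    for s in scales_px:
        smooth = ndimage.gaussian_filter(dem, sigma=s)
        convexity = dem - smooth              # >0 on ridges, <0 in valleys
        gy, gx = np.gradient(smooth)
        feats.extend([convexity, np.hypot(gx, gy)])
    return np.stack(feats, axis=-1)           # one feature vector per DEM cell
```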


The paper deals with the development and application of a methodology for the automatic mapping of pollution/contamination data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve this problem. The automatic tuning of isotropic and anisotropic GRNN models using a cross-validation procedure is presented. Results are compared with a k-nearest-neighbours interpolation algorithm using an independent validation data set. The quality of the mapping is controlled by analysing the raw data and the residuals using variography. Maps of the probability of exceeding a given decision level and "thick" isoline visualization of the uncertainties are presented as examples of decision-oriented mapping. The real case study is based on the mapping of radioactively contaminated territories.


In vivo fetal magnetic resonance imaging provides a unique approach for the study of early human brain development [1]. In utero cerebral morphometry could potentially be used as a marker of cerebral maturation and help to distinguish between normal and abnormal development in ambiguous situations. However, this quantitative approach is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution provided by very fast MRI sequences and the partial volume effect. Extensive efforts are made to deal with the reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution [2,3,4]. Frameworks were developed for the segmentation of specific regions of the fetal brain such as the posterior fossa, brainstem or germinal matrix [5,6], or for the entire brain tissue [7,8], applying the Expectation-Maximization Markov Random Field (EM-MRF) framework. However, many of these previous works focused on the young fetus (i.e. before 24 weeks) and use anatomical atlas priors to segment the different tissues or regions. As most of the gyral development takes place after the 24th week, a comprehensive and clinically meaningful study of the fetal brain should not dismiss the third trimester of gestation. To cope with the rapidly changing appearance of the developing brain, some authors proposed a dynamic atlas [8]. In our opinion, this approach however faces a risk of circularity: each brain will be analyzed / deformed using the template of its biological age, potentially biasing the effective developmental delay. Here, we expand our previous work [9] to propose a prior-free post-processing pipeline that allows a comprehensive set of morphometric measurements devoted to clinical application. Data set & Methods: Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired under maternal sedation (about 1 min per volume). First, each volume is segmented semi-automatically using region-growing algorithms to extract the fetal brain from surrounding maternal tissues. Inhomogeneity intensity correction [10] and linear intensity normalization are then performed. Brain tissues (CSF, GM and WM) are then segmented based on the low-resolution volumes as presented in [9]. A high-resolution image with an isotropic voxel size of 1.09 mm is created as proposed in [2], using B-splines for the scattered data interpolation [11]. Basal ganglia segmentation is performed using a level set implementation on the high-resolution volume [12]. The resulting white matter image is then binarized and given as an input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu) to provide topologically accurate three-dimensional reconstructions of the fetal brain according to the local intensity gradient. References: [1] Guibaud, Prenatal Diagnosis 29(4) (2009). [2] Rousseau, Acad. Rad. 13(9), 2006. [3] Jiang, IEEE TMI 2007. [4] Warfield IADB, MICCAI 2009. [5] Claude, IEEE Trans. Bio. Eng. 51(4) 2004. [6] Habas, MICCAI 2008. [7] Bertelsen, ISMRM 2009. [8] Habas, Neuroimage 53(2) 2010. [9] Bach Cuadra, IADB, MICCAI 2009. [10] Styner, IEEE TMI 19(3), 2000. [11] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3), 1997. [12] Bach Cuadra, ISMRM 2010.


In this article we propose a novel method for calculating cardiac 3-D strain. The method requires the acquisition of myocardial short-axis (SA) slices only and produces the 3-D strain tensor at every point within every pair of slices. Three-dimensional displacement is calculated from SA slices using zHARP, and is then used for calculating the local displacement gradient and thus the local strain tensor. There are three main advantages of this method. First, the 3-D strain tensor is calculated for every pixel without interpolation; this is unprecedented in cardiac MR imaging. Second, this method is fast, in part because there is no need to acquire long-axis (LA) slices. Third, the method is accurate because the 3-D displacement components are acquired simultaneously, which reduces motion artifacts without the need for registration. This article presents the theory of computing 3-D strain from two slices using zHARP, the imaging protocol, and both phantom and in-vivo validation.
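The step from the per-pixel 3-D displacement to the local strain tensor can be illustrated with finite differences and the Green-Lagrange strain definition; this is only one common choice, and the sketch below, with hypothetical array names, is not the authors' implementation.

```python
import numpy as np

def green_lagrange_strain(ux, uy, uz, spacing):
    """3-D Green-Lagrange strain tensor field from a voxel-wise displacement field.

    ux, uy, uz: displacement components on a regular grid; spacing: (dx, dy, dz).
    Returns an array of shape grid_shape + (3, 3).
    """
    grads = [np.gradient(u, *spacing) for u in (ux, uy, uz)]        # du_i/dx_j
    gradU = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    F = gradU + np.eye(3)                                           # F = I + grad(u)
    # E = 0.5 * (F^T F - I), evaluated voxel-wise
    return 0.5 * (np.einsum('...ki,...kj->...ij', F, F) - np.eye(3))
```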


A large amount of data for inconspicuous taxa is stored in natural history collections; however, this information is often neglected in studies of biodiversity patterns. Here, we evaluate the performance of direct interpolation of museum collection data, equivalent to the traditional approach used in bryophyte conservation planning, and of stacked species distribution models (S-SDMs) for producing reliable reconstructions of species richness patterns, given that the differences between these methods have been insufficiently evaluated for inconspicuous taxa. Our objective was to test whether species distribution models produce better inferences of species richness than simply selecting the areas with the highest numbers of recorded species. As model species, we selected the Iberian species of the genus Grimmia (Bryophyta), and we used four well-collected areas to compare and validate the following models: 1) four Maxent richness models, each generated without the data from one of the four areas, and a reference model created using all of the data; and 2) four richness models obtained through direct spatial interpolation, each generated without the data from one area, and a reference model created with all of the data. The correlations between the partial and reference Maxent models were higher in all cases (0.45 to 0.99), whereas the correlations between the spatial interpolation models were negative and weak (-0.3 to -0.06). Our results demonstrate for the first time that S-SDMs offer a useful tool for identifying detailed richness patterns for inconspicuous taxa such as bryophytes and for improving incomplete distributions by assessing the potential richness of under-surveyed areas, filling major gaps in the available data. In addition, the proposed strategy would enhance the value of the vast number of specimens housed in biological collections.
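The S-SDM workflow stacks per-species suitability maps into a richness surface, which is then compared with a reference model over the well-collected areas. A minimal sketch of those two steps, with hypothetical array names and Pearson correlation as the comparison metric, is:

```python
import numpy as np

def stacked_richness(suitability_maps):
    """Stack per-species suitability maps (one 2-D array per species, values in
    [0, 1]) into an expected species-richness map by summing them."""
    return np.sum(np.stack(suitability_maps, axis=0), axis=0)

def correlate_with_reference(partial_map, reference_map, area_mask):
    """Pearson correlation between a partial richness model (built without one
    well-collected area) and the reference model, evaluated inside that area."""
    a = partial_map[area_mask].ravel()
    b = reference_map[area_mask].ravel()
    return np.corrcoef(a, b)[0, 1]
```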