86 results for Multi-cycle, Expectation, and Conditional Estimation Method
Abstract:
Synthesis report: Objective: The aim of this work was to study multi-detector CT angiography (CTA) in the assessment of occlusive peripheral arterial disease (PAD) of the abdominal aorta and lower limbs, using an adaptive acquisition method to optimize arterial enhancement, particularly for the distal arterial bed and the arteries of the feet. Materials and methods: Thirty-four patients presenting with PAD underwent both transcatheter angiography (TCA) and CTA within 15 days of each other. CTA was performed from the coeliac trunk down to the arteries of the feet in a single acquisition at high spatial resolution (16x0.625 mm). The table speed and rotation time for each examination were chosen according to the contrast transit time obtained from a test bolus. A total of 130 ml of contrast agent was injected at 4 ml/s. The CTA images were analysed by two observers, and the TCA data were interpreted independently by two other observers. The analysis covered image quality and the detection of stenoses of 50% or more, per patient and per arterial segment. The sensitivity and specificity of CTA were calculated with TCA as the reference standard. Interobserver variability was measured using the kappa statistic. Results: TCA was inconclusive in 0.7% of segments, whereas CTA was conclusive in all segments. In the per-patient analysis, the overall sensitivity and specificity for detecting a significant stenosis of 50% or more were both 100%. The per-segment analysis showed sensitivities and specificities ranging from 91 to 100% and from 81 to 100%, respectively. Analysis of the distal arteries of the feet yielded a sensitivity of 100% and a specificity of 90%. Conclusion: Multi-detector CT angiography using this adaptive acquisition method improves image quality and provides a reliable, non-invasive technique for assessing PAD, including the distal arteries of the feet.
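The "adaptive" element of such a protocol is essentially a timing calculation: the table speed is set so that the acquisition tracks the contrast bolus timed by the test injection. A minimal sketch of that arithmetic follows; the function name, the safety margin, and the exact timing rule are all illustrative assumptions, since the abstract does not spell out the formula.

```python
# Hedged sketch: choose a CT table speed from a test-bolus transit time.
# The timing rule and all names here are illustrative assumptions, not
# the authors' published protocol.

def table_speed(scan_length_mm: float,
                aorta_to_foot_transit_s: float,
                margin_s: float = 5.0) -> float:
    """Table speed (mm/s) chosen so the scan duration matches the bolus
    transit time from the coeliac trunk to the pedal arteries, plus a
    small safety margin so the scan never outruns the bolus."""
    scan_duration_s = aorta_to_foot_transit_s + margin_s
    return scan_length_mm / scan_duration_s

# Example: 1200 mm of coverage, 30 s measured transit time
print(f"{table_speed(1200, 30):.1f} mm/s")  # -> 34.3 mm/s
```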
Abstract:
The human brain displays heterogeneous organization in both structure and function. Here we develop a method to characterize brain regions and networks in terms of information-theoretic measures. We look at how these measures scale when larger spatial regions as well as larger connectome sub-networks are considered. This framework is applied to human brain fMRI recordings of resting-state activity and DSI-inferred structural connectivity. We find that strong functional coupling across large spatial distances distinguishes functional hubs from unimodal low-level areas, and that this long-range functional coupling correlates with structural long-range efficiency on the connectome. We also find a set of connectome regions that are both internally integrated and coupled to the rest of the brain, and which resemble previously reported resting-state networks. Finally, we argue that information-theoretic measures are useful for characterizing the functional organization of the brain at multiple scales.
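For roughly Gaussian signals, one convenient information-theoretic measure of functional coupling between two regional time series follows directly from their correlation. A minimal sketch under that Gaussian assumption; the measure choice and names are illustrative, not necessarily the paper's exact estimator.

```python
import numpy as np

def gaussian_mutual_information(x: np.ndarray, y: np.ndarray) -> float:
    """Mutual information (in nats) between two time series under a
    Gaussian approximation: MI = -0.5 * ln(1 - r^2)."""
    r = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - r**2)

# Example: two coupled signals standing in for regional fMRI time series
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = 0.6 * x + 0.8 * rng.standard_normal(1000)
print(f"{gaussian_mutual_information(x, y):.3f} nats")
```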
Abstract:
The objective of this study was to estimate the potential of method restriction as a public health strategy in suicide prevention. Data from the Swiss Federal Statistical Office and the Swiss Institutes of Forensic Medicine from 2004 were gathered and categorized into suicide submethods according to accessibility to restriction of means. Of suicides in Switzerland, 39.2% are accessible to method restriction. The highest proportions were found in private weapons (13.2%), army weapons (10.4%), and jumps from hot-spots (4.6%). The presented method permits the estimation of the suicide prevention potential of a country by method restriction and the comparison of restriction potentials between suicide methods. In Switzerland, reduction of firearm suicides has the highest potential to reduce the total number of suicides.
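The headline estimate reduces to summing the proportions of suicides falling into submethods judged accessible to restriction. A toy sketch using the figures quoted above; the residual category is invented here so the total matches the reported 39.2%.

```python
# Proportions of all suicides (Switzerland, 2004) in submethods judged
# accessible to means restriction. The three largest are quoted in the
# abstract; the remainder is lumped into one illustrative residual.
accessible = {
    "private weapons": 13.2,
    "army weapons": 10.4,
    "jumps from hot-spots": 4.6,
    "other accessible submethods": 11.0,  # residual so totals match
}
print(f"{sum(accessible.values()):.1f}% accessible to restriction")  # 39.2%
```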
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying 'true' hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
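A one-dimensional cartoon of the two-step logic may help fix ideas. This is a deliberately simplified stand-in, not the authors' Bayesian sequential simulation: white-noise residuals replace variogram-based sequential Gaussian simulation in step one, and a linear petrophysical fit replaces the full conditional distribution in step two.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # fine-scale grid cells

# Spatially exhaustive but poorly resolved geophysical image (ERT-like):
# block averages over 20-cell windows stand in for the low-res data.
sigma_lowres = np.repeat(rng.normal(0.0, 1.0, n // 20), 20)

# Sparse collocated high-resolution data at three "borehole" cells.
x_bh = np.array([30, 100, 170])
sigma_bh = sigma_lowres[x_bh] + rng.normal(0.0, 0.3, 3)   # high-res sigma
logK_bh = 0.8 * sigma_bh + rng.normal(0.0, 0.2, 3)        # collocated log-K

# Step 1: stochastic downscaling of the geophysical parameter.
sigma_fine = sigma_lowres + rng.normal(0.0, 0.3, n)
sigma_fine[x_bh] = sigma_bh               # honour the borehole data

# Step 2: map downscaled sigma to hydraulic conductivity through a
# petrophysical link fitted on the sparse collocated samples.
slope, intercept = np.polyfit(sigma_bh, logK_bh, 1)
resid_sd = np.std(logK_bh - (slope * sigma_bh + intercept))
logK_fine = slope * sigma_fine + intercept + rng.normal(0.0, resid_sd, n)

print(logK_fine[:5])                      # one stochastic realization
```

Repeating both steps yields the ensemble of realizations whose transport behaviour the study validates against the 'true' field.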
Abstract:
The aim of this doctoral work was to develop analytical methods for the determination of ethyl glucuronide and ethyl sulfate. These two substances are direct metabolites of ethanol that can be detected in body fluids for hours to days after ethanol itself has been completely eliminated from the human body; they are therefore markers of recent alcohol consumption. Most of the experiments were carried out using capillary electrophoresis, with the aim of providing methods usable in routine laboratories. Electrophoretic methods were developed and optimized for the determination of ethyl sulfate in serum and urine, as well as of ethyl glucuronide in serum. Urinary ethyl glucuronide could be determined with a commercial immunoassay, which was additionally adapted with success to serum samples. With all these analytical methods it was possible to detect both markers of recent alcohol consumption, even after an intake as low as one glass of an alcoholic beverage. Finally, a study of more than 100 samples was carried out with the aim of establishing reference values for ethyl glucuronide in serum and urine. In addition, the need to normalize urine samples for dilution was investigated. This study made it possible to propose cut-off values and a statistical basis for probabilistic interpretation.
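The dilution-normalization question has a standard arithmetic core: scaling the measured urinary concentration to a reference creatinine level. A minimal sketch follows; the reference value of 100 mg/dL and all names are chosen for illustration, and the thesis itself may use a different normalization.

```python
def normalize_etg(etg_ng_ml: float,
                  creatinine_mg_dl: float,
                  reference_creatinine_mg_dl: float = 100.0) -> float:
    """Creatinine-normalized urinary EtG concentration.

    Scales the measured concentration to a reference creatinine level
    to correct for urine dilution (illustrative convention).
    """
    return etg_ng_ml * reference_creatinine_mg_dl / creatinine_mg_dl

# A dilute urine sample (40 mg/dL creatinine) measured at 300 ng/mL EtG
print(f"{normalize_etg(300, 40):.0f} ng/mL normalized")  # -> 750 ng/mL
```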
Abstract:
High-frequency oscillations in the gamma band reflect rhythmic synchronization of spike timing in active neural networks. The modulation of gamma oscillations is a widely established mechanism in a variety of neurobiological processes, yet its neurochemical basis is not fully understood. Modelling, in-vitro, and in-vivo animal studies suggest that gamma oscillation properties depend on GABAergic inhibition. In humans, the search for evidence linking total GABA concentration to gamma oscillations has led to promising, but also partly diverging, observations. Here, we provide the first evidence of a direct relationship between the density of GABAA receptors and gamma oscillatory responses in human primary visual cortex (V1). By combining flumazenil-PET (to measure resting levels of GABAA receptor density) and MEG (to measure visually induced gamma oscillations), we found that GABAA receptor densities correlated positively with the frequency and negatively with the amplitude of visually induced gamma oscillations in V1. Our findings demonstrate that the gamma-band response profiles of primary visual cortex across healthy individuals are shaped by GABAA-receptor-mediated inhibitory neurotransmission. These results bridge the gap with in-vitro and animal studies and may have future clinical implications, given that altered GABAergic function, including dysregulation of GABAA receptors, has been related to psychiatric disorders including schizophrenia and depression.
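The central statistical claim is a pair of across-subject correlations. A minimal sketch of how such a relationship would be tested, using simulated stand-in data rather than the study's PET and MEG measurements:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects = 20

# Simulated stand-ins: flumazenil-PET GABA-A receptor density in V1,
# plus the peak frequency and amplitude of visually induced gamma (MEG).
density = rng.normal(1.0, 0.1, n_subjects)
gamma_freq = 40 + 25 * (density - 1.0) + rng.normal(0, 1.5, n_subjects)
gamma_amp = 1.0 - 2.0 * (density - 1.0) + rng.normal(0, 0.2, n_subjects)

for name, y in [("frequency", gamma_freq), ("amplitude", gamma_amp)]:
    r, p = pearsonr(density, y)
    print(f"density vs gamma {name}: r = {r:+.2f}, p = {p:.3f}")
```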
Abstract:
Background: The imatinib trough plasma concentration (C(min)) correlates with clinical response in cancer patients. Therapeutic drug monitoring (TDM) of plasma C(min) is therefore suggested. In practice, however, blood sampling for TDM is often not performed at trough. The corresponding measurement is thus only remotely informative about C(min) exposure. Objectives: The objectives of this study were to improve the interpretation of randomly measured concentrations by using a Bayesian approach for the prediction of C(min), incorporating correlation between pharmacokinetic parameters, and to compare the predictive performance of this method with alternative approaches, by comparing predictions with actual measured trough levels, and with predictions obtained by a reference method, respectively. Methods: A Bayesian maximum a posteriori (MAP) estimation method accounting for correlation (MAP-ρ) between pharmacokinetic parameters was developed on the basis of a population pharmacokinetic model, which was validated on external data. Thirty-one paired random and trough levels, observed in gastrointestinal stromal tumour patients, were then used for the evaluation of the Bayesian MAP-ρ method: individual C(min) predictions, derived from single random observations, were compared with actual measured trough levels for assessment of predictive performance (accuracy and precision). The method was also compared with alternative approaches: classical Bayesian MAP estimation assuming uncorrelated pharmacokinetic parameters, linear extrapolation along the typical elimination constant of imatinib, and non-linear mixed-effects modelling (NONMEM) first-order conditional estimation (FOCE) with interaction. Predictions of all methods were finally compared with 'best-possible' predictions obtained by a reference method (NONMEM FOCE, using both random and trough observations for individual C(min) prediction). Results: The developed Bayesian MAP-ρ method accounting for correlation between pharmacokinetic parameters allowed unbiased prediction of imatinib C(min) with a precision of ±30.7%. This predictive performance was similar for the alternative methods that were applied. The range of relative prediction errors was, however, smallest for the Bayesian MAP-ρ method and largest for the linear extrapolation method. When compared with the reference method, predictive performance was comparable for all methods. The time interval between random and trough sampling did not influence the precision of Bayesian MAP-ρ predictions. Conclusion: Clinical interpretation of randomly measured imatinib plasma concentrations can be assisted by Bayesian TDM. Classical Bayesian MAP estimation can be applied even without consideration of the correlation between pharmacokinetic parameters. Individual C(min) predictions are expected to vary less through Bayesian TDM than through linear extrapolation. Bayesian TDM could be developed in the future for other targeted anticancer drugs and for the prediction of other pharmacokinetic parameters that have been correlated with clinical outcomes.
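The MAP-ρ idea lends itself to a compact sketch: maximize the product of the data likelihood and a multivariate log-normal prior whose covariance carries the correlation between clearance and volume. Everything below (the one-compartment model, typical values, covariance, and error magnitudes) is an illustrative assumption, not the published imatinib model.

```python
import numpy as np
from scipy.optimize import minimize

dose, tau = 400.0, 24.0                       # mg, dosing interval (h)
theta_pop = np.log([14.0, 250.0])             # typical log CL (L/h), log V (L)
omega = np.array([[0.09, 0.05],               # log-scale covariance; the
                  [0.05, 0.09]])              # off-diagonal term is "rho"
sigma_prop = 0.2                              # proportional residual error
ka = 0.6                                      # absorption rate (1/h), fixed

def conc(t, cl, v):
    """Steady-state concentration, one-compartment oral model."""
    ke = cl / v
    return (dose * ka / (v * (ka - ke))) * (
        np.exp(-ke * t) / (1 - np.exp(-ke * tau))
        - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

def neg_log_posterior(log_p, t_obs, c_obs):
    cl, v = np.exp(log_p)
    pred = conc(t_obs, cl, v)
    res = np.sum(((c_obs - pred) / (sigma_prop * pred)) ** 2)
    d = log_p - theta_pop
    prior = d @ np.linalg.solve(omega, d)     # full covariance -> MAP-rho
    return 0.5 * (res + prior)

# One random sample: 1.5 mg/L drawn 6 h post-dose; predict the trough.
fit = minimize(neg_log_posterior, theta_pop,
               args=(np.array([6.0]), np.array([1.5])),
               method="Nelder-Mead")
cl_i, v_i = np.exp(fit.x)
print(f"predicted C_min = {conc(np.array([tau]), cl_i, v_i)[0]:.2f} mg/L")
```

Zeroing the off-diagonal elements of omega recovers the classical MAP estimate, assuming uncorrelated parameters, that the study uses as a comparator.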
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can find solutions to classification, regression and probability density modelling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modelling and prediction to automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. General regression neural networks are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and worked examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a user-friendly, easy-to-use interface.
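The GRNN at the core of the automatic-mapping work is mathematically a Nadaraya-Watson kernel regression, which makes a compact illustrative sketch possible (this implementation is mine, not part of Machine Learning Office):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General Regression Neural Network (Nadaraya-Watson kernel
    regression): each prediction is a Gaussian-weighted average of the
    training targets, weighted by distance to the query point."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# Toy spatial interpolation: 50 scattered samples of a smooth field
rng = np.random.default_rng(3)
X = rng.uniform(0, 10, (50, 2))                 # (x, y) coordinates
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
grid = np.array([[2.0, 5.0], [7.5, 1.0]])       # query locations
print(grnn_predict(X, y, grid, sigma=0.8))
```

The single smoothing parameter sigma is part of what makes the GRNN attractive for automatic mapping: it can be tuned by cross-validation without user intervention.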
Abstract:
MOTIVATION: Microarray results accumulated in public repositories are widely reused in meta-analytical studies and secondary databases. The quality of the data obtained with this technology varies from experiment to experiment, and an efficient method for quality assessment is necessary to ensure their reliability. RESULTS: The lack of a good benchmark has hampered the evaluation of existing methods for quality control. In this study, we propose a new independent quality metric based on the evolutionary conservation of expression profiles. Using 11 large organ-specific datasets, we show that IQRray, a new quality metric developed by us, exhibits the highest correlation with this reference metric among the 14 metrics tested. IQRray outperforms other methods in identifying poor-quality arrays in datasets composed of arrays from many independent experiments. In contrast, the performance of methods designed for detecting outliers within a single experiment, such as Normalized Unscaled Standard Error and Relative Log Expression, was low, because these methods cannot detect datasets containing only low-quality arrays and because their scores cannot be directly compared between experiments. AVAILABILITY AND IMPLEMENTATION: The R implementation of IQRray is available at: ftp://lausanne.isb-sib.ch/pub/databases/Bgee/general/IQRray.R. CONTACT: Marta.Rosikiewicz@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
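The benchmarking logic, as opposed to IQRray's internal algorithm (which the abstract does not describe), can be sketched as a rank correlation between a candidate metric's scores and the conservation-based reference scores; all data below are simulated stand-ins.

```python
import numpy as np
from scipy.stats import spearmanr

# Hedged sketch of the evaluation logic only, not IQRray itself:
# a candidate quality metric is scored by how well it ranks arrays
# relative to the independent, conservation-based reference metric.

rng = np.random.default_rng(4)
n_arrays = 60
reference = rng.uniform(0, 1, n_arrays)                # reference quality
candidate = reference + rng.normal(0, 0.2, n_arrays)   # metric under test

rho, p = spearmanr(candidate, reference)
print(f"Spearman rho vs reference metric: {rho:.2f} (p = {p:.1e})")
```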
Abstract:
A new electrical method is proposed for determining the apparent resistivity of multiple earth layers located underwater. The method is based on direct-current geoelectric sounding principles. A layered earth model is used to simulate the stratigraphic target. The measurement array is of pole-pole type; it is located underwater and oriented vertically. This particular electrode configuration is very useful where conventional electrical methods cannot be used, especially when the water depth becomes very large. The calculated apparent resistivity shows a substantial increase in the quality of the signal measured from the underwater targets, from which little or no response is obtained with conventional surface-electrode methods. In practice, however, factors such as water stratification, underwater currents or meteorological conditions complicate the interpretation of field results. A case study is presented in which field surveys carried out on Lake Geneva were interpreted using the calculated apparent-resistivity master curves.
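For a pole-pole array embedded in a full space, i.e., with electrodes deep underwater and far from boundaries, the geometric factor is 4πa rather than the half-space 2πa. A minimal sketch of the apparent-resistivity calculation under that assumption, with illustrative readings:

```python
import math

def apparent_resistivity_pole_pole(delta_v: float, current: float,
                                   spacing_m: float,
                                   full_space: bool = True) -> float:
    """Apparent resistivity (ohm.m) for a pole-pole array.

    full_space=True uses the geometric factor K = 4*pi*a (electrodes
    embedded in water, far from boundaries); False uses the half-space
    factor 2*pi*a.
    """
    k = (4 if full_space else 2) * math.pi * spacing_m
    return k * delta_v / current

# Illustrative reading: 25 mV at 100 mA with 5 m electrode spacing
print(f"{apparent_resistivity_pole_pole(0.025, 0.1, 5.0):.1f} ohm.m")
# -> 15.7 ohm.m
```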
Abstract:
This study aimed to investigate the influence of ankle osteoarthritis (AOA) treatments, i.e., ankle arthrodesis (AA) and total ankle replacement (TAR), on the kinematics of the multi-segment foot and ankle complex during relatively long-distance gait. Forty-five subjects in four groups (AOA, AA, TAR, and control) were equipped with a wearable system consisting of inertial sensors on the tibia, calcaneus, and medial metatarsals. The subjects walked 50 m twice while the system measured the kinematic parameters of the multi-segment foot: the range of motion of the joints between the tibia, calcaneus, and medial metatarsals in the three anatomical planes, and the peak angular velocities of these segments in the sagittal plane. These parameters were then compared among the four groups. The range of motion and peak angular velocities generally improved after TAR and were similar to those of control subjects. However, unlike AOA and TAR, AA was associated with impaired range of motion in the coronal plane for both the tibia-calcaneus and tibia-metatarsal joints. In general, the kinematic parameters correlated significantly with established clinical scales (FFI and AOFAS), demonstrating their convergent validity. Based on the kinematic parameters of the multi-segment foot during 50-m gait, this study showed significant improvements in foot mobility after TAR, while several significant impairments remained after AA.
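The kinematic parameters themselves are straightforward to extract from gyroscope signals: integrate angular velocity to obtain a segment angle, then take its range and the velocity peak. A deliberately simplified sketch on simulated data; real systems use calibrated sensor fusion, drift correction, and angles between segments rather than a single segment.

```python
import numpy as np

fs = 100.0                                     # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)                    # two seconds of gait

# Simulated sagittal-plane angular velocity of one foot segment (deg/s)
rng = np.random.default_rng(5)
omega = 120 * np.sin(2 * np.pi * 1.0 * t) + rng.normal(0, 5, t.size)

angle = np.cumsum(omega) / fs                  # integrate to angle (deg)
rom = angle.max() - angle.min()                # range of motion
peak_velocity = np.abs(omega).max()            # peak angular velocity

print(f"ROM = {rom:.1f} deg, "
      f"peak angular velocity = {peak_velocity:.0f} deg/s")
```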