211 results for Conjugate gradient methods

at Université de Lausanne, Switzerland


Relevance:

80.00%

Publisher:

Abstract:

Many traits and/or strategies expressed by organisms are quantitative phenotypes. Because populations are of finite size and genomes are subject to mutations, these continuously varying phenotypes are under the joint pressure of mutation, natural selection and random genetic drift. This article derives the stationary distribution for such a phenotype under a mutation-selection-drift balance in a class-structured population allowing for demographically varying class sizes and/or changing environmental conditions. The salient feature of the stationary distribution is that it can be entirely characterized in terms of the average size of the gene pool and Hamilton's inclusive fitness effect. The exploration of the phenotypic space varies exponentially with the cumulative inclusive fitness effect over state space, which determines an adaptive landscape. The peaks of the landscape are those phenotypes that are candidate evolutionarily stable strategies and can be determined by standard phenotypic selection gradient methods (e.g. evolutionary game theory, kin selection theory, adaptive dynamics). The curvature of the stationary distribution provides a measure of the convergence stability of candidate evolutionarily stable strategies, and it is evaluated explicitly for two biological scenarios: first, a coordination game, which illustrates that, for a multipeaked adaptive landscape, stochastically stable strategies can be singled out by letting the size of the gene pool grow large; second, a sex-allocation game for diploids and haplo-diploids, which suggests that the equilibrium sex ratio follows a Beta distribution with parameters depending on the features of the genetic system.
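A minimal sketch of the relationship described above, in hypothetical notation (a one-dimensional phenotype z, a selection gradient S given by the inclusive fitness effect, and a gene-pool size N; these symbols are illustrative and not taken from the article itself):

```latex
% "Varies exponentially with the cumulative inclusive fitness effect":
\[
  \pi(z) \;\propto\; \exp\!\left( N \int^{z} S(x)\, \mathrm{d}x \right)
\]
% Peaks of this adaptive landscape (candidate evolutionarily stable
% strategies) satisfy S(z*) = 0; the curvature of the exponent at such a
% peak, N S'(z*) < 0, is what measures its convergence stability.
```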

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: A sacral chordoma is a rare, slow-growing, primary bone tumor arising from embryonic notochordal remnants. Radical surgery is the only hope for cure. The aim of our present study is to analyse our experience with the challenging treatment of this rare tumor, to review current treatment modalities and to assess the outcome based on R status. METHODS: Eight patients were treated in our institution between 2001 and 2011. All patients were discussed by a multidisciplinary tumor board, and an en bloc surgical resection by posterior perineal access only or by combined anterior/posterior accesses was planned based on tumor extension. RESULTS: Seven patients underwent radical surgery, and one was treated by local cryotherapy alone owing to a low performance status. Three misdiagnosed patients had primary surgery at another hospital with R1 margins. Re-resection margins in our institution were R1 in two and R0 in one, and all three recurred. Four patients were primarily operated on at our institution and had en bloc surgery with R0 resection margins. One had local recurrence after 18 months. The overall morbidity rate was 86% (6/7 patients) and was mostly related to the perineal wound. Overall, 3 out of 7 resected patients were disease-free at a median follow-up of 2.9 years (range, 1.6-8.0 years). CONCLUSION: Our experience confirms the importance of early correct diagnosis and of an R0 resection for a sacral chordoma invading pelvic structures. It is a rare disease that requires a challenging multidisciplinary treatment, which should ideally be performed in a tertiary referral center.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methods minimizing the l2 or l1 total-variation (TV) norm with prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically varied in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient-recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r2MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of extremely fast computation time. The l-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
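The iterative single-orientation reconstructions referred to above solve a regularized dipole-inversion problem. As a hedged illustration only (not the implementation used in the study), the sketch below solves an l2-regularized version of that problem with a hand-rolled conjugate gradient loop on the normal equations; the dipole kernel, the edge-weighting mask and all parameter values are assumptions for demonstration.

```python
import numpy as np

def dipole_kernel(shape, voxel=(1.0, 1.0, 1.0)):
    """k-space dipole kernel D = 1/3 - kz^2/|k|^2 (B0 along the first axis)."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n, d) for n, d in zip(shape, voxel)],
                             indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid division by zero at DC
    D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0
    return D

def grad(x):
    """Forward differences along the three axes (periodic boundaries)."""
    return [np.roll(x, -1, axis=a) - x for a in range(3)]

def grad_adjoint(g):
    """Adjoint of `grad` under periodic boundaries."""
    return sum(np.roll(gx, 1, axis=a) - gx for a, gx in enumerate(g))

def qsm_l2_cg(field, edge_weights, lam=1e-2, n_iter=50):
    """Solve (D^T D + lam * G^T W G) chi = D^T field by conjugate gradient.

    `field` is the measured 3D field map; `edge_weights` (values in [0, 1])
    down-weights the regularizer where the magnitude image shows edges,
    playing the role of the prior knowledge mentioned in the abstract.
    """
    D = dipole_kernel(field.shape)

    def A(chi):                             # normal-equations operator (symmetric PSD)
        dtd = np.real(np.fft.ifftn(D**2 * np.fft.fftn(chi)))
        reg = grad_adjoint([edge_weights * gx for gx in grad(chi)])
        return dtd + lam * reg

    b = np.real(np.fft.ifftn(D * np.fft.fftn(field)))
    chi = np.zeros_like(field)
    r = b - A(chi)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(n_iter):                 # plain conjugate gradient iterations
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap)
        chi += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < 1e-8:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return chi
```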

Relevance:

30.00%

Publisher:

Abstract:

The UHPLC strategy, which combines sub-2 µm porous particles and ultra-high pressure (>1000 bar), was investigated considering very-high-resolution criteria in both isocratic and gradient modes, with mobile phase temperatures between 30 and 90 °C. In isocratic mode, the experimental conditions needed to reach the maximal efficiency were determined using the kinetic plot representation for ΔP_max = 1000 bar. It was first confirmed that the molecular weight (MW) of the compounds is a critical parameter that should be considered in the construction of such curves. With a MW around 1000 g mol⁻¹, efficiencies as high as 300,000 plates could theoretically be attained using UHPLC at 30 °C. By limiting the column length to 450 mm, the maximal plate count was around 100,000. In gradient mode, the longest column does not provide the maximal peak capacity for a given analysis time in UHPLC. This was attributed to the fact that peak capacity is related not only to the plate number but also to the column dead time. Therefore, a compromise should be found, and a 150 mm column should preferentially be selected for gradient lengths up to 60 min at 30 °C, while columns coupled in series (3 × 150 mm) were attractive only for t_grad > 250 min. Compared to 30 °C, peak capacities were increased by about 20-30% for a constant gradient length at 90 °C, and the gradient time was decreased two-fold for an identical peak capacity.

Relevance:

30.00%

Publisher:

Abstract:

MRI visualization of devices is traditionally based on signal loss due to T2* effects originating from local susceptibility differences. To visualize nitinol devices with positive contrast, a recently introduced postprocessing method is adapted to map the induced susceptibility gradients. This method operates on regular gradient-echo MR images and maps the shift in k-space in a (small) neighborhood of every voxel by Fourier analysis followed by a center-of-mass calculation. The quantitative map of the local shifts generates the positive contrast image of the devices, while areas without susceptibility gradients render a background with noise only. The positive signal response of this method depends only on the choice of the voxel neighborhood size. The properties of the method are explained and the visualizations of a nitinol wire and two stents are shown for illustration.
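As a rough, hedged illustration of the mapping step described above (local Fourier analysis followed by a center-of-mass calculation), the sketch below computes, for every voxel of a 2D complex gradient-echo image, the k-space shift of a small surrounding patch. The window size, normalization and synthetic test image are arbitrary choices; the original method is 3D and contains details not reproduced here.

```python
import numpy as np

def susceptibility_gradient_map(img, half=2):
    """Map the magnitude of the local k-space shift of a complex 2D image.

    For each voxel, a (2*half+1)^2 patch is Fourier transformed and the
    centre of mass of its magnitude spectrum is taken as the local k-space
    shift; the shift magnitude serves as the positive-contrast value.
    """
    ny, nx = img.shape
    size = 2 * half + 1
    freqs = np.fft.fftshift(np.fft.fftfreq(size))       # centred frequency axis
    out = np.zeros((ny, nx))
    padded = np.pad(img, half, mode="edge")
    for y in range(ny):
        for x in range(nx):
            patch = padded[y:y + size, x:x + size]
            spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
            w = spec.sum()
            if w == 0:
                continue
            ky = (freqs[:, None] * spec).sum() / w       # centre of mass along ky
            kx = (freqs[None, :] * spec).sum() / w       # centre of mass along kx
            out[y, x] = np.hypot(ky, kx)                 # magnitude of the shift
    return out

# Synthetic example: a localized phase ramp (mimicking a susceptibility
# gradient around a device) lights up, while uniform regions stay near zero.
yy, xx = np.mgrid[0:64, 0:64]
phase = 0.8 * np.exp(-((yy - 32)**2 + (xx - 32)**2) / 20.0) * (xx - 32)
img = np.exp(1j * phase)
sgm = susceptibility_gradient_map(img)
```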

Relevance:

30.00%

Publisher:

Abstract:

RATIONALE AND OBJECTIVES: Recent developments in magnetic resonance imaging have enabled free-breathing coronary MRA (cMRA) using steady-state free precession (SSFP) for endogenous contrast. The purpose of this study was a systematic comparison of SSFP cMRA with standard T2-prepared gradient-echo and spiral cMRA. METHODS: Navigator-gated, free-breathing T2-prepared SSFP, T2-prepared gradient-echo, and T2-prepared spiral cMRA were performed in 18 healthy swine (45-68 kg body weight). Image quality was assessed subjectively, and signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and vessel sharpness were compared. RESULTS: SSFP cMRA allowed high-quality cMRA during free breathing, with substantial improvements in SNR, CNR and vessel sharpness when compared with standard T2-prepared gradient-echo imaging. Spiral imaging demonstrated the highest SNR, while image quality score and vessel definition were best for SSFP imaging. CONCLUSION: Navigator-gated, free-breathing T2-prepared SSFP cMRA is a promising new imaging approach for high-signal and high-contrast imaging of the coronary arteries with improved vessel border definition.
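Several of the coronary MRA comparisons in these results rely on SNR and CNR measurements. The snippet below shows one common way such region-of-interest metrics are computed; the ROI definitions and the noise model (standard deviation of a background region) are generic assumptions, not the exact measurement protocol of these studies.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over the standard deviation of noise."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(blood_roi, myocardium_roi, noise_roi):
    """Contrast-to-noise ratio between blood pool and myocardium."""
    return (np.mean(blood_roi) - np.mean(myocardium_roi)) / np.std(noise_roi)

# Example with hypothetical ROI slices taken from an image array `im`:
# blood = im[120:130, 80:90]; myo = im[140:150, 80:90]; bg = im[0:20, 0:20]
# print(snr(blood, bg), cnr(blood, myo, bg))
```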

Relevance:

30.00%

Publisher:

Abstract:

With improved B0 homogeneity along with satisfactory gradient performance at high magnetic fields, snapshot gradient-recalled echo-planar imaging (GRE-EPI) can be performed at long echo times (TEs) on the order of T2*, which intrinsically allows obtaining strongly T2*-weighted images with substantial embedded anatomical detail in an ultrashort time. The aim of this study was to investigate the feasibility and quality of long-TE snapshot GRE-EPI images of the rat brain at 9.4 T. When compensating for B0 inhomogeneities, especially second-order shim terms, a 200 × 200 µm² in-plane resolution image was reproducibly obtained at long TE (>25 ms). The resulting coronal images at a TE of 30 ms had diminished geometric distortions and thus retained substantial anatomical detail. Together with their very consistent stability, such GRE-EPI images should permit resolving functional data not only with high specificity but also with substantial anatomical detail, thereby allowing coregistration of the acquired functional data onto the same image data set.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Few European studies have investigated how cardiovascular risk factors (CRF) in adults relate to those observed in younger generations. OBJECTIVE: To explore this issue in a Swiss region using two population health surveys covering 3636 adolescents aged 9-19 years and 3299 adults aged 25-74 years. METHODS: Age patterns of continuous CRF were estimated by robust locally weighted regression, and those of high-risk groups were calculated using adult criteria with appropriate adjustment for children. RESULTS: Gender differences in height, weight, blood pressure, and HDL cholesterol observed in adults were found to emerge in adolescence. Overweight, affecting 10-12% of adolescents, increased steeply in young adults (threefold among males and twofold among females), in parallel with inactivity. Median age at smoking initiation decreased rapidly, from 18-20 years in young adults to 15 years in adolescents. A statistically significant social gradient disfavoring the lower education level was observed for overweight in all age groups of women above 16 years (odds ratios (ORs) 2.4 to 3.3, P < 0.01), for inactivity in adult males (ORs 1.6 to 2.0, P < 0.05), and for regular smoking in older adolescents (OR 1.9 for males, 2.7 for females, P < 0.005), but not for elevated blood pressure. CONCLUSION: Discontinuities in the cross-sectional age patterns of CRF indicated the emergence of a social gradient and the need for preventive action against the early adoption of persistent unhealthy behaviors, to which girls and women with lower education are particularly exposed.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, many studies have highlighted the toxic effects of organic micropollutants on the species of our lakes and rivers. Most of these studies, however, focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixtures, and the effects of these cocktails are not negligible. This doctoral thesis therefore examined models for predicting the environmental risk of such cocktails for the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and also to take a critical look at the methodologies used, in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk posed by mixtures of pesticides and pharmaceuticals to the River Rhône and to Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, i.e. they provide a general assessment of mixture risk. Such an approach highlights the most problematic substances, those contributing most to the toxicity of the mixture; in our case these were essentially four pesticides. The study also shows that all substances, even in trace amounts, contribute to the effect of the mixture. This observation has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. The proposed approach, however, also carries an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the assessment factors employed. In a second part, the study focused on the use of mixture models in environmental risk calculations. Mixture models were developed and validated species by species, not for an assessment of the ecosystem as a whole. Their use should therefore proceed species by species, which is rarely done owing to the lack of available ecotoxicological data. The aim was thus to compare, using randomly generated values, the risk calculated with a rigorous species-by-species method against the risk calculated in the conventional way, in which the models are applied to the whole community without accounting for inter-species variation. In most cases the results are similar, which validates the traditional approach; this work nevertheless identified cases in which the conventional application can lead to under- or over-estimation of the risk. Finally, the last part of this thesis examined the influence that micropollutant cocktails may have had on communities in situ, using a two-step approach. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the period studied, 2004 to 2009, this herbicide toxicity decreased from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of particular species within the algal community. Statistical analysis made it possible to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time. Interestingly, some of them had already shown similar behavior in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species and that they can be used to explain the role of these substances in the functioning of ecosystems. These models nevertheless have limits and underlying assumptions that must be kept in mind when they are applied.

For several years now, scientists as well as society at large have been concerned by the risk that organic micropollutants may pose to the aquatic environment. Indeed, several studies have shown the toxic effects these substances can induce in organisms living in our lakes and rivers, especially when they are exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of single compounds, i.e. substances considered individually. The same holds for the current European regulatory procedures for the environmental risk assessment of such substances. Yet aquatic organisms are typically exposed every day to thousands of organic compounds simultaneously, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the mixture model of concentration addition and on the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) combined with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, and the outcomes of these applications are discussed. These first-level assessments showed that the mixture risk for the studied cases rapidly exceeds the critical value, an exceedance generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting mixture effects on communities of the aquatic system was investigated. These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution (SSD) curves. Mixture effect predictions have been shown to be consistent only when the mixture models are applied to a single species, not to several species aggregated simultaneously in SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: first apply the CA or RA model to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially generated data sets in order to characterize the robustness of the traditional approach of applying the mixture models directly to species sensitivity distributions. The results showed that applying CA directly to SSDs may lead to an underestimation of the mixture concentration affecting 5% or 50% of species, especially when the substances show a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimation, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, for common real cases of ecotoxicity data, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would, if anything, slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep this source of error in mind, especially when the SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. Long-term monitoring of a large European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, the mixture toxicity of 14 herbicides regularly detected in the lake was calculated over several years, using the concentration addition and response addition models together with species sensitivity distribution curves. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some of the species found to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time showed similar behavior in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of these approaches, before using the results for environmental risk management.
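For readers unfamiliar with the two mixture models named in this abstract, the sketch below shows their textbook single-species forms: concentration addition (toxic units summed against each component's EC50) and response addition (independent action on effect fractions). The example concentrations, EC values and the log-logistic dose-response are invented for illustration and are not the thesis's SSD-level implementation.

```python
import numpy as np

def concentration_addition(conc, ec50):
    """Sum of toxic units; the mixture reaches the EC50 level when the sum is 1."""
    conc, ec50 = np.asarray(conc, float), np.asarray(ec50, float)
    return np.sum(conc / ec50)

def response_addition(effects):
    """Independent action: combined effect from individual effect fractions (0..1)."""
    effects = np.asarray(effects, float)
    return 1.0 - np.prod(1.0 - effects)

def hill_effect(conc, ec50, slope=1.0):
    """Simple log-logistic (Hill) dose-response to get per-substance effect fractions."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

# Invented example: three herbicides at trace concentrations (µg/L) for one species.
conc = [0.05, 0.10, 0.02]
ec50 = [1.0, 5.0, 0.5]

tu_sum = concentration_addition(conc, ec50)                      # CA: sum of toxic units
ra_effect = response_addition([hill_effect(c, e) for c, e in zip(conc, ec50)])
print(f"CA toxic-unit sum: {tu_sum:.3f}   RA combined effect: {ra_effect:.3f}")
```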

Relevance:

30.00%

Publisher:

Abstract:

Abstract: This work is concerned with the development and application of novel unsupervised learning methods, having in mind two target applications: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as on very large problems.
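The second method summarized above trains a clustering model by stochastic gradient descent. As a loose, hedged illustration of that general idea only (not the thesis's algorithm), the sketch below runs online k-means in which centroids are updated by SGD on the quantization error; each step touches a single sample, which is why this family of methods scales to very large data sets.

```python
import numpy as np

def sgd_kmeans(X, n_clusters=3, lr=0.05, n_epochs=10, seed=0):
    """Online k-means: centroids updated by stochastic gradient descent
    on the quantization error 0.5 * ||x - c_nearest||^2, one sample at a time."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            x = X[i]
            j = np.argmin(np.sum((centroids - x) ** 2, axis=1))  # nearest centroid
            centroids[j] += lr * (x - centroids[j])              # gradient step
    labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
    return centroids, labels

# Toy data: three Gaussian blobs standing in for spectral feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in ((0, 0), (3, 0), (0, 3))])
centroids, labels = sgd_kmeans(X, n_clusters=3)
```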

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To compare different techniques for positive contrast imaging of susceptibility markers with MRI for three-dimensional visualization. As several different techniques have been reported, the choice of the suitable method depends on its properties with regard to the amount of positive contrast and the desired background suppression, as well as other imaging constraints needed for a specific application. MATERIALS AND METHODS: Six different positive contrast techniques are investigated for their ability to image at 3 Tesla a single susceptibility marker in vitro. The white marker method (WM), susceptibility gradient mapping (SGM), inversion recovery with on-resonant water suppression (IRON), frequency selective excitation (FSX), fast low flip-angle positive contrast SSFP (FLAPS), and iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) were implemented and investigated. RESULTS: The different methods were compared with respect to the volume of positive contrast, the product of volume and signal intensity, imaging time, and the level of background suppression. Quantitative results are provided, and strengths and weaknesses of the different approaches are discussed. CONCLUSION: The appropriate choice of positive contrast imaging technique depends on the desired level of background suppression, acquisition speed, and robustness against artifacts, for which in vitro comparative data are now available.

Relevance:

30.00%

Publisher:

Abstract:

The authors compared radial steady-state free precession (SSFP) coronary magnetic resonance (MR) angiography, Cartesian k-space sampling SSFP coronary MR angiography, and gradient-echo coronary MR angiography in 16 healthy adults and four pilot study patients. Standard gradient-echo MR imaging with a T2 preparatory pulse and Cartesian k-space sampling was the reference technique. Image quality was compared by using subjective motion artifact level and objective contrast-to-noise ratio and vessel sharpness. Radial SSFP, compared with Cartesian SSFP and gradient-echo MR angiography, resulted in reduced motion artifacts and superior vessel sharpness. Cartesian SSFP resulted in increased motion artifacts (P < .05). Contrast-to-noise ratio with radial SSFP was lower than that with Cartesian SSFP and similar to that with the reference technique. Radial SSFP coronary MR angiography appears preferable because of improved definition of vessel borders.

Relevance:

30.00%

Publisher:

Abstract:

While 3D thin-slab coronary magnetic resonance angiography (MRA) has traditionally been performed using a Cartesian acquisition scheme, spiral k-space data acquisition offers several potential advantages. However, these strategies have not been directly compared in the same subjects using similar methodologies. Thus, in the present study a comparison was made between 3D coronary MRA using Cartesian segmented k-space gradient-echo and spiral k-space data acquisition schemes. In both approaches the same spatial resolution was used, and data were acquired during free breathing using navigator gating and prospective slice tracking. Magnetization preparation (T2 preparation and fat suppression) was applied to increase the contrast. For spiral imaging two different examinations were performed, using one or two spiral interleaves during each R-R interval. Spiral acquisitions were found to be superior to the Cartesian scheme with respect to the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) (both P < 0.001) and image quality. The single-spiral-per-R-R-interval acquisition had the same total scan duration as the Cartesian acquisition, but the single spiral had the best image quality and a 2.6-fold increase in SNR. The double-interleaf spiral approach showed a 50% reduction in scanning time, a 1.8-fold increase in SNR, and similar image quality when compared with the standard Cartesian approach. Spiral 3D coronary MRA appears to be preferable to the Cartesian scheme. The increase in SNR may be "traded" for either shorter scanning times using multiple consecutive spiral interleaves, or for enhanced spatial resolution.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Our objective was to compare two state-of-the-art coronary MRI (CMRI) sequences with regard to image quality and diagnostic accuracy for the detection of coronary artery disease (CAD). SUBJECTS AND METHODS: Twenty patients with known CAD were examined with a navigator-gated and corrected free-breathing 3D segmented gradient-echo (turbo field-echo) CMRI sequence and a steady-state free precession sequence (balanced turbo field-echo). CMRI was performed in a transverse plane for the left coronary artery and a double-oblique plane for the right coronary artery system. Subjective image quality (1- to 4-point scale, with 1 indicating excellent quality) and objective image quality parameters were independently determined for both sequences. Sensitivity, specificity, and accuracy for the detection of significant (≥50% diameter) coronary artery stenoses were determined as defined by invasive catheter X-ray coronary angiography. RESULTS: Subjective image quality was superior for the balanced turbo field-echo approach (1.8 ± 0.9 vs 2.3 ± 1.0 for turbo field-echo; p < 0.001). Vessel sharpness, signal-to-noise ratio, and contrast-to-noise ratio were all superior for the balanced turbo field-echo approach (p < 0.01 for signal-to-noise ratio and contrast-to-noise ratio). Of the 103 segments, 18% of turbo field-echo segments and 9% of balanced turbo field-echo segments had to be excluded from disease evaluation because of insufficient image quality. Sensitivity, specificity, and accuracy for the detection of significant coronary artery stenoses in the evaluated segments were 92%, 67%, and 85%, respectively, for turbo field-echo and 82%, 82%, and 81%, respectively, for balanced turbo field-echo. CONCLUSION: Balanced turbo field-echo offers improved image quality with significantly fewer nondiagnostic segments when compared with turbo field-echo. For the detection of CAD, both sequences showed comparable accuracy for the visualized segments.
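The diagnostic performance figures quoted above follow the standard confusion-matrix definitions, sketched below for reference; the segment counts in the example are made up and unrelated to this study.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # detected stenoses among true stenoses
    specificity = tn / (tn + fp)          # correctly cleared among disease-free segments
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Made-up example counts for illustration only:
print(diagnostic_metrics(tp=11, fp=4, tn=8, fn=1))
```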

Relevance:

30.00%

Publisher:

Abstract:

The water content dynamics in the upper soil surface during evaporation is a key element in land-atmosphere exchanges. Previous experimental studies have suggested that the soil water content increases at depths of 5 to 15 cm below the soil surface during evaporation, while the layer in the immediate vicinity of the soil surface is drying. In this study, the dynamics of water content profiles exposed to solar radiative forcing was monitored at high temporal resolution using dielectric methods, both in the presence and absence of evaporation. A 4-d comparison was made of the moisture content reported in coarse sand in covered and uncovered buckets by a commercial dielectric-based probe (70 MHz ECH2O-5TE, Decagon Devices, Pullman, WA) and by the standard 1-GHz time domain reflectometry method. Both sensors reported a positive correlation between temperature and water content at the 5- to 10-cm depth, most pronounced in the morning during heating and in the afternoon during cooling. Such a positive correlation might have a physical origin, induced by evaporation at the surface and redistribution due to liquid water fluxes resulting from the temperature-gradient dynamics within the sand profile at those depths. Our experimental data suggest that the combined effect of surface evaporation and temperature-gradient dynamics should be considered when analyzing experimental soil water profiles. Additional effects related to the frequency of operation and to protocols for temperature compensation of the dielectric sensors may also affect the probes' response during large temperature changes.