875 results for Radial spaces
Abstract:
In the last few years, a need to account for molecular flexibility in drug-design methodologies has emerged, even if the dynamic behavior of molecular properties is seldom made explicit. For a flexible molecule it is indeed possible to compute different values of a given conformation-dependent property, and the ensemble of such values defines a property space that can be used to describe its molecular variability; a most representative case is the lipophilicity space. In this review, a number of applications of the lipophilicity space and other property spaces are presented, showing that this concept can be fruitfully exploited: to investigate the constraints exerted by media of different levels of structural organization, to examine processes of molecular recognition and binding at an atomic level, to derive informative descriptors to be included in quantitative structure-activity relationships, and to analyze protein simulations by extracting the relevant information. Much molecular information is neglected in the descriptors used by medicinal chemists, and the concept of property space can fill this gap by accounting for the often-disregarded dynamic behavior of both small ligands and biomacromolecules. Property space also introduces innovative concepts such as molecular sensitivity and plasticity, which appear best suited to explore the ability of a molecule to adapt itself to the environment by variously modulating its property and conformational profiles. Globally, such concepts can enhance our understanding of biological phenomena, providing fruitful descriptors for drug design and the pharmaceutical sciences.
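The review treats a conformation-dependent property as an ensemble of per-conformer values. As a rough illustration only (assumed function names and summary statistics, not the authors' descriptors), the sketch below condenses such an ensemble into the kind of spread measures that "sensitivity"-like descriptors build on.

# Illustrative sketch (assumptions, not the review's method): summarizing a
# conformation-dependent property as a "property space".
from statistics import mean, pstdev

def property_space_summary(values):
    """values: property computed for each conformer (e.g. per-conformer logP)."""
    lo, hi = min(values), max(values)
    return {
        "mean": mean(values),   # central tendency of the property space
        "range": hi - lo,       # overall spread across conformers
        "std": pstdev(values),  # dispersion; a crude proxy for "sensitivity"
    }

# Example: hypothetical logP values for five conformers of one molecule
print(property_space_summary([1.8, 2.1, 2.4, 1.9, 2.6]))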
MRI of coronary vessel walls using radial k-space sampling and steady-state free precession imaging.
Abstract:
OBJECTIVE: The objective of our study was to investigate the impact of radial k-space sampling and steady-state free precession (SSFP) imaging on image quality in MRI of coronary vessel walls. SUBJECTS AND METHODS: Eleven subjects were examined on a 1.5-T MR system using three high-resolution navigator-gated and cardiac-triggered 3D black blood sequences (Cartesian gradient-echo [GRE], radial GRE, and radial SSFP) with identical spatial resolution (0.9 × 0.9 × 2.4 mm³). The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel wall sharpness, and motion artifacts were analyzed. RESULTS: The mean SNR and CNR of the coronary vessel wall were improved using radial imaging and were best using radial k-space sampling combined with SSFP imaging. Vessel border definition was similar for all three sequences. Radial k-space sampling was found to be less sensitive to motion. Consistently good image quality was seen with the radial GRE sequence. CONCLUSION: Radial k-space sampling in MRI of coronary vessel walls resulted in fewer motion artifacts and improved SNR and CNR. The use of SSFP imaging, however, did not result in improved coronary vessel wall visualization.
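The abstract quantifies image quality with SNR and CNR. For reference, a common convention (an assumption here, since the abstract does not give the paper's exact definition) is

\[
\mathrm{SNR} = \frac{\bar S_{\text{wall}}}{\sigma_{\text{noise}}}, \qquad
\mathrm{CNR} = \frac{\bar S_{\text{wall}} - \bar S_{\text{blood}}}{\sigma_{\text{noise}}},
\]

where \(\bar S\) denotes the mean signal in a region of interest and \(\sigma_{\text{noise}}\) the standard deviation of the background noise.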
Abstract:
Following a scheme of Levin, we describe the values that functions in Fock spaces take on lattices of critical density in terms of both the size of the values and a cancellation condition that involves discrete versions of the Cauchy and Beurling-Ahlfors transforms.
Abstract:
We characterize the weighted Hardy inequalities for monotone functions on $\mathbb{R}^n_+$. In dimension $n = 1$, this recovers the standard theory of $B_p$ weights. For $n > 1$, the result was previously only known for the case $p = 1$. In fact, our main theorem is proved in the more general setting of partly ordered measure spaces.
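For context, the one-dimensional case referred to is the classical Ariño-Muckenhoupt characterization (stated here from general knowledge, not quoted from the paper): for non-increasing $f \ge 0$ and $p \ge 1$,

\[
\left( \int_0^\infty \Big( \frac{1}{x}\int_0^x f(t)\,dt \Big)^{p} w(x)\,dx \right)^{1/p}
\le C \left( \int_0^\infty f(x)^{p}\, w(x)\,dx \right)^{1/p}
\]

holds if and only if $w \in B_p$, i.e. $\displaystyle\int_r^\infty \frac{w(x)}{x^{p}}\,dx \le \frac{C}{r^{p}} \int_0^r w(x)\,dx$ for all $r > 0$.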
Abstract:
BACKGROUND: The radial artery is routinely used as a graft for surgical arterial myocardial revascularization. The optimal site for the proximal radial artery anastomosis remains unknown. In this study, we analyzed the short-term results and the operative risk determinants after using four different common techniques for radial artery implantation. METHODS: From January 2000 to December 2004, 571 patients underwent coronary artery bypass grafting with radial arteries. Data were analyzed for the entire population and for subgroups according to the proximal radial artery anastomosis site: 140 T-grafts with the mammary artery (group A), 316 free grafts with the proximal anastomosis to the ascending aorta (group B), 55 in situ mammary arteries elongated with the radial artery (group C) and 60 radial arteries elongated with a piece of mammary artery and anastomosed to the ascending aorta (group D). RESULTS: The mean age was 53.8 +/- 7.7 years; 55.5% of patients had a previous myocardial infarction and 73% presented with satisfactory left ventricular function. Complete arterial myocardial revascularization was achieved in 532 cases (93.2%), and 90.2% of the procedures were performed under cardiopulmonary bypass and cardioplegic arrest. The operative mortality rate was 0.9%; a postoperative myocardial infarction was diagnosed in 19 patients (3.3%), an intra-aortic balloon pump was used in 10 patients (1.7%) and a mechanical circulatory device was implanted in 2 patients. The radial artery harvesting site always remained free from complications. The proximal radial artery anastomosis site was not a determinant of early hospital mortality. Group C showed a higher risk of postoperative myocardial infarction (p = 0.09), together with female gender (p = 0.003), hypertension (p = 0.059) and a longer cardiopulmonary bypass time. CONCLUSIONS: The radial artery and the mammary artery can also guarantee multiple arterial revascularization for patients with contraindications to double mammary artery use. The four most common techniques for proximal radial artery anastomosis are not associated with a higher operative risk and can be used alternatively to achieve the best surgical results.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies.
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that mainly concerns the development of techniques and algorithms allowing computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of the geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited to be implemented as predictive engines in decision support systems for environmental questions ranging from pattern recognition to modeling, prediction and automatic mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.
Exploratory data analysis (EDA) is an initial and very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly and easy-to-use interface.
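The abstract singles out the GRNN for automatic mapping. As a rough sketch under stated assumptions (a plain Nadaraya-Watson kernel regression with a single Gaussian bandwidth sigma, not the Machine Learning Office implementation), a GRNN prediction is a kernel-weighted average of the training values:

# Minimal GRNN-style spatial prediction sketch (illustrative, assumed interface).
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """Predict at X_query as a Gaussian-kernel weighted mean of y_train.
    X_train: (n, d) coordinates (and optional geo-features); y_train: (n,)."""
    preds = []
    for q in np.atleast_2d(X_query):
        d2 = np.sum((X_train - q) ** 2, axis=1)       # squared distances to samples
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # normalized weighted average
    return np.array(preds)

# Toy usage: interpolate a value at (0.5, 0.5) from four sample points
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])
print(grnn_predict(X, y, np.array([[0.5, 0.5]]), sigma=0.5))

The bandwidth sigma plays the role of the model's single control parameter, which is why GRNN lends itself to automatic, data-driven mapping.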
Free-breathing whole-heart coronary MRA with 3D radial SSFP and self-navigated image reconstruction.
Abstract:
Respiratory motion is a major source of artifacts in cardiac magnetic resonance imaging (MRI). Free-breathing techniques with pencil-beam navigators efficiently suppress respiratory motion and minimize the need for patient cooperation. However, the correlation between the measured navigator position and the actual position of the heart may be adversely affected by hysteretic effects, navigator position, and temporal delays between the navigators and the image acquisition. In addition, irregular breathing patterns during navigator-gated scanning may result in low scan efficiency and prolonged scan time. The purpose of this study was to develop and implement a self-navigated, free-breathing, whole-heart 3D coronary MRI technique that would overcome these shortcomings and improve the ease of use of coronary MRI. A signal synchronous with respiration was extracted directly from the echoes acquired for imaging, and the motion information was used for retrospective, rigid-body, through-plane motion correction. The images obtained from the self-navigated reconstruction were compared with the results from conventional, prospective, pencil-beam navigator tracking. Image quality was improved in phantom studies using self-navigation, while equivalent results were obtained with both techniques in preliminary in vivo studies.
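The self-navigation approach corrects through-plane (superior-inferior) motion retrospectively. One standard way to apply such a rigid shift to already-acquired k-space data is the Fourier shift theorem, sketched below under the assumption that a per-readout displacement dz has already been estimated; this illustrates the general principle, not the paper's reconstruction pipeline.

# Sketch: retrospective through-plane shift correction of one k-space readout
# via the Fourier shift theorem (illustrative; sign convention depends on the
# FFT definition used by the reconstruction).
import numpy as np

def correct_readout(samples, kz, dz):
    """Shift a readout by dz along z by applying a linear phase.
    samples: complex k-space samples of one readout
    kz:      their k_z coordinates (1/mm), same length as samples
    dz:      estimated superior-inferior displacement (mm) for this readout."""
    return samples * np.exp(-2j * np.pi * kz * dz)

# Toy usage: one readout of 8 samples, displaced by 3 mm
samples = np.ones(8, dtype=complex)
kz = np.linspace(-0.5, 0.5, 8)
print(correct_readout(samples, kz, dz=3.0))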
Abstract:
Phenomena with a constrained sample space appear frequently in practice. This is the case, e.g., with strictly positive data, or with compositional data such as percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic arguments show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models that better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his 1879 introduction to the lognormal distribution are recovered and updated.
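As a concrete instance of the idea (standard material, stated here for illustration rather than quoted from the paper): on the positive real line, taking relative rather than absolute differences as natural leads to the reference measure $d\lambda = dx/x$, and the "normal on $\mathbb{R}_+$" has density, with respect to this measure,

\[
f(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \qquad x > 0,
\]

which, re-expressed with respect to the Lebesgue measure $dx$, is exactly the classical lognormal density $f(x)/x$.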
Abstract:
Linear spaces consisting of σ-finite probability measures and infinite measures (improper priors and likelihood functions) are defined. The commutative group operation, called perturbation, is the updating given by Bayes' theorem; the inverse operation is the Radon-Nikodym derivative. Bayes spaces of measures are sets of classes of proportional measures. In this framework, basic notions of mathematical statistics acquire a simple algebraic interpretation; for example, exponential families appear as affine subspaces with their sufficient statistics as a basis. Bayesian statistics, in particular some well-known properties of conjugate priors and likelihood functions, are revisited and slightly extended.
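To make the algebraic reading of Bayes' theorem explicit (a standard identity given here for illustration, not quoted from the paper): writing perturbation as pointwise multiplication of densities up to a proportionality class,

\[
(f \oplus g)(x) \;\propto\; f(x)\,g(x),
\]

Bayes' theorem becomes posterior = prior $\oplus$ likelihood, i.e.

\[
\pi(\theta \mid x) \;\propto\; \pi(\theta)\, L(\theta \mid x),
\]

so updating by data is simply the group operation in the Bayes space, and the inverse operation (the Radon-Nikodym derivative) removes the contribution of a likelihood or prior.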
Abstract:
Qualitative differences in strategy selection during foraging in a partially baited maze were assessed in young and old rats. The baited and non-baited arms were at fixed positions in space and marked by a specific olfactory cue. The senescent rats made more re-entries during the first four-trial block but were quicker than the young rats at selecting the reinforced arms during the first visits. Dissociating the olfactory cues from the spatial reference by rotating the maze revealed that only a few old subjects relied on olfactory cues to select the baited arms; the remainder relied mainly on the visuo-spatial cues.
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its applications. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the adjustable variables including the centers, widths, and weights of the basis functions, and with the control parameters either kept fixed or adjusted by a fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to the parameter setting, and that the best setting is problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than the variants using fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
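For orientation, the classic variant such studies build on is DE/rand/1/bin, sketched below from general knowledge (an illustration, not the thesis code); F and CR are the control parameters that a fuzzy-adaptive variant would tune during the run.

# One generation of DE/rand/1/bin (illustrative sketch).
import numpy as np

def de_step(pop, fitness, objective, F=0.8, CR=0.9, rng=None):
    """pop: (NP, D) population; fitness: (NP,) objective values (minimization)."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])  # differential mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True               # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])     # binomial crossover
        f_trial = objective(trial)
        if f_trial <= fitness[i]:                   # greedy one-to-one selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

# Toy usage: minimize the sphere function in 5 dimensions
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 5))
sphere = lambda x: float(np.sum(x ** 2))
fit = np.array([sphere(x) for x in pop])
for _ in range(100):
    pop, fit = de_step(pop, fit, sphere, rng=rng)
print(fit.min())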
Abstract:
This thesis presents an alternative approach to the analytical design of surface-mounted axial-flux permanent-magnet machines. Emphasis has been placed on the design of axial-flux machines with a one-rotor-two-stators configuration. The design model developed in this study incorporates facilities to include both the electromagnetic and the thermal design of the machine, as well as to take into consideration the complexity of the permanent-magnet shapes, which is a typical requirement in the design of high-performance permanent-magnet motors. A prototype machine with a rated output power of 5 kW at a rotation speed of 300 min-1 has been designed and constructed for the purpose of verifying the results obtained from the analytical design model. A comparative study of low-speed axial-flux and low-speed radial-flux permanent-magnet machines is presented. The comparative study concentrates on 55 kW machines with rotation speeds of 150, 300 and 600 min-1 and is based on calculated designs. A novel comparison method is introduced. The method takes into account the mechanical constraints of the machine and enables comparison of the designed machines with respect to the volume, efficiency and cost of each machine. It is shown that an axial-flux permanent-magnet machine with a one-rotor-two-stators configuration generally has a lower efficiency than a radial-flux permanent-magnet machine if the same electric loading, air-gap flux density and current density are applied to all designs. On the other hand, axial-flux machines are usually smaller in volume, especially when compared to radial-flux machines whose length ratio (axial length of the stator stack vs. air-gap diameter) is below 0.5. The comparison results also show that radial-flux machines with a low number of pole pairs, p < 4, outperform the corresponding axial-flux machines.