Abstract:
Some introduced ant populations have an extraordinary social organization, called unicoloniality, whereby individuals mix freely within large supercolonies. We investigated whether this mode of social organization also exists in native populations of the Argentine ant Linepithema humile. Behavioral analyses revealed the presence of 11 supercolonies (width 1 to 515 m) over a 3-km transect. As in the introduced range, there was always strong aggression between, but never within, supercolonies. The genetic data were in perfect agreement with the behavioral tests, with all nests assigned to the same supercolonies by the different methods. There was strong genetic differentiation between supercolonies but no genetic differentiation among nests within supercolonies. We never found more than a single mitochondrial haplotype per supercolony, further supporting the view that supercolonies are closed breeding units. Genetic and chemical distances between supercolonies were positively correlated, but there were no other significant associations between geographic, genetic, chemical, and behavioral distances. A comparison of supercolonies sampled in 1999 and 2005 revealed a very high turnover, with about one-third of the supercolonies replaced yearly. This dynamic is likely to involve strong competition between supercolonies and may thus act as a potent selective force maintaining unicoloniality over evolutionary time.
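Distance-matrix correlations of the kind reported here (genetic vs. chemical, geographic, and behavioral distances) are conventionally assessed with a Mantel permutation test. A minimal sketch, assuming plain NumPy arrays of pairwise distances; the `mantel` helper and its defaults are illustrative, not the authors' code:

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Pearson correlation between the upper triangles of two symmetric
    distance matrices, with a permutation-based two-sided p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    x, y = d1[iu], d2[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    n = d1.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                      # relabel one matrix
        y_perm = d2[np.ix_(p, p)][iu]
        if abs(np.corrcoef(x, y_perm)[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

Permuting rows and columns of one matrix together preserves its internal structure while breaking any association with the other, which is what makes the p-value valid for distance matrices.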
Abstract:
The Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES), a 19-item instrument developed to assess readiness to change alcohol use among individuals presenting for specialized alcohol treatment, has been used in various populations and settings. Its factor structure and concurrent validity have been described for specialized alcohol treatment settings and primary care. The purpose of this study was to determine the factor structure and concurrent validity of the SOCRATES among medical inpatients with unhealthy alcohol use not seeking specialized alcohol treatment. The subjects were 337 medical inpatients with unhealthy alcohol use, identified during their hospital stay. Most of them had alcohol dependence (76%). We performed an Alpha Factor Analysis (AFA) and a Principal Component Analysis (PCA) of the 19 SOCRATES items, forcing 3 factors and 2 components respectively, in order to replicate the findings of Miller and Tonigan (Miller, W. R., & Tonigan, J. S. (1996). Assessing drinkers' motivations for change: The Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES). Psychology of Addictive Behavior, 10, 81-89.) and Maisto et al. (Maisto, S. A., Conigliaro, J., McNeil, M., Kraemer, K., O'Connor, M., & Kelley, M. E. (1999). Factor structure of the SOCRATES in a sample of primary care patients. Addictive Behavior, 24(6), 879-892.). Our analysis supported the view that the 2-component solution proposed by Maisto et al. (1999) is more appropriate for our data than the 3-factor solution proposed by Miller and Tonigan (1996).
The first component, Perception of Problems, was more strongly correlated with the severity of alcohol-related consequences, the presence of alcohol dependence, and alcohol consumption levels (average number of drinks per day and total number of binge-drinking days over the past 30 days) than was the second component, Taking Action. Our findings support the view that the SOCRATES comprises two important readiness constructs in general medical patients identified by screening.
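A principal component analysis with a forced number of retained components, as used above, can be sketched as follows. The data here are simulated and the `pca_forced` helper is hypothetical, not the study's code:

```python
import numpy as np

def pca_forced(X, n_components=2):
    """PCA of the item correlation matrix with a forced number of retained
    components; returns item loadings and explained-variance ratios."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize items
    R = np.corrcoef(Z, rowvar=False)                # item correlation matrix
    vals, vecs = np.linalg.eigh(R)                  # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]   # keep the top components
    loadings = vecs[:, order] * np.sqrt(vals[order])
    return loadings, vals[order] / vals.sum()
```

Forcing the number of components (here 2, to mirror Maisto et al.'s solution) simply means retaining a fixed count of leading eigenvectors rather than choosing the count from the data.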
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how best to quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data.
Essentially, the relationship that best predicted the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than the other, more elementary techniques considered. Further, the developed calibration procedure proved very effective for predictions of transport, even at the scale of tomographic resolution. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results give us confidence for future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
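The core of SA-based conditional simulation can be illustrated with a toy 1-D sketch: the field is perturbed by swapping values (which preserves the histogram) until a target spatial statistic is matched, while values at conditioning points, standing in for borehole log data, are never moved. This is a hedged stand-in for the thesis approach, whose objective function additionally honors GPR constraints; all names and parameters below are illustrative:

```python
import numpy as np

def sa_condition(values, cond_idx, cond_val, target_corr, n_iter=20000, seed=0):
    """Rearrange the free entries of a 1-D field by annealed swaps until its
    lag-1 autocorrelation approaches target_corr; conditioning points are
    fixed, so borehole-type data are honored exactly."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).copy()
    x[cond_idx] = cond_val
    free = np.setdiff1d(np.arange(len(x)), cond_idx)

    def energy_of(v):
        lag1 = np.corrcoef(v[:-1], v[1:])[0, 1]     # spatial statistic
        return (lag1 - target_corr) ** 2

    energy = energy_of(x)
    temp = 1.0
    for _ in range(n_iter):
        i, j = rng.choice(free, size=2, replace=False)
        x[i], x[j] = x[j], x[i]                     # swap keeps the histogram
        e_new = energy_of(x)
        if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
            energy = e_new                          # accept the move
        else:
            x[i], x[j] = x[j], x[i]                 # reject: undo the swap
        temp *= 0.9995                              # cooling schedule
    return x, energy
```

The annealing temperature lets the search accept occasional worse moves early on, so it does not get trapped in the first local minimum of the mismatch.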
Abstract:
Whether or not species participating in specialized and obligate interactions display similar and simultaneous demographic variations at the intraspecific level remains an open question in phylogeography. In the present study, we used the mutualistic nursery pollination occurring between the European globeflower Trollius europaeus and its specialized pollinators in the genus Chiastocheta as a case study. Explicitly, we investigated whether the phylogeographies of the pollinating flies are significantly different from the expectation under a scenario of plant-insect congruence. Based on large-scale sampling, we first used mitochondrial data to infer the phylogeographical history of each fly species. Then, we defined phylogeographical scenarios of congruence with the plant history, and used maximum likelihood and Bayesian approaches to test for plant-insect phylogeographical congruence for the three Chiastocheta species. We show that the phylogeographical histories of the three fly species differ. Only Chiastocheta lophota and Chiastocheta dentifera display strong spatial genetic structures, which do not appear to be statistically different from those expected under scenarios of phylogeographical congruence with the plant. The results of the present study indicate that the fly species responded in independent and different ways to shared evolutionary forces, displaying varying levels of congruence with the plant genetic structure.
Abstract:
Myelination requires a massive increase in glial cell membrane synthesis. Here we demonstrate that the acute phase of myelin lipid synthesis is regulated by SREBP cleavage-activating protein (SCAP), an activator of sterol regulatory element-binding proteins (SREBPs). Deletion of SCAP in Schwann cells led to a loss of SREBP-mediated gene expression, congenital hypomyelination, and abnormal gait. Interestingly, aging SCAP mutant mice showed a partial regain of function: they exhibited improved gait and produced small amounts of myelin, indicating a slow, SCAP-independent uptake of external lipids. Accordingly, extracellular lipoproteins promoted myelination by SCAP mutant Schwann cells. However, SCAP mutant myelin never reached normal thickness and had biophysical abnormalities concordant with its abnormal lipid composition. These data demonstrate that SCAP-mediated regulation of glial lipogenesis is key to the proper synthesis of myelin membrane. The described defects in SCAP mutant myelination provide new insights into the pathogenesis of peripheral neuropathies associated with lipid metabolic disorders and open new avenues for their treatment.
Abstract:
Crushed seeds of the Moringa oleifera tree have traditionally been used as natural flocculants to clarify drinking water. We previously showed that one of the seed peptides mediates both the sedimentation of suspended particles, such as bacterial cells, and a direct bactericidal activity, raising the possibility that the two activities might be related. In this study, conformational modeling of the peptide was coupled to a functional analysis of synthetic derivatives. This indicated that partly overlapping structural determinants mediate the sedimentation and antibacterial activities. Sedimentation requires a positively charged, glutamine-rich portion of the peptide that aggregates bacterial cells. The bactericidal activity was localized to a sequence prone to form a helix-loop-helix structural motif. Amino acid substitution showed that the bactericidal activity requires hydrophobic proline residues within the protruding loop. Vital dye staining indicated that treatment with peptides containing this motif results in bacterial membrane damage. Assembly of multiple copies of this structural motif into a branched peptide enhanced the antibacterial activity: low concentrations effectively killed bacteria such as Pseudomonas aeruginosa and Streptococcus pyogenes without displaying a toxic effect on human red blood cells. This study thus identifies a synthetic peptide with potent antibacterial activity against specific human pathogens. It also suggests partly distinct molecular mechanisms for the two activities: sedimentation may result from coupled flocculation and coagulation effects, while the bactericidal activity would require bacterial membrane destabilization by a hydrophobic loop.
Abstract:
(3R)-Hydroxyacyl-CoA dehydrogenase is part of multifunctional enzyme type 2 (MFE-2) of peroxisomal fatty acid beta-oxidation. The MFE-2 protein from yeasts contains, in the same polypeptide chain, two dehydrogenases (A and B) that differ in substrate specificity. The crystal structure of the Candida tropicalis (3R)-hydroxyacyl-CoA dehydrogenase AB heterodimer, consisting of dehydrogenase A and B, determined at 2.2 Å resolution, shows overall similarity with the prototypic counterpart from rat, but also important differences that explain the observed differences in substrate specificity. Docking studies suggest that dehydrogenase A binds the hydrophobic fatty acyl chain of a medium-chain-length ((3R)-OH-C10) substrate bent into the binding pocket, whereas short-chain substrates are dislocated by two mechanisms: (i) a short-chain-length 3-hydroxyacyl group ((3R)-OH-C4) does not reach the hydrophobic contacts needed for anchoring the substrate in the active site; and (ii) Leu44 in the loop above the NAD(+) cofactor attracts short-chain-length substrates away from the active site. Dehydrogenase B, which can use a (3R)-OH-C4 substrate, has a shallower binding pocket in which the substrate is correctly placed for catalysis. Based on the current structure, together with the structure of the 2-enoyl-CoA hydratase 2 unit of yeast MFE-2, it becomes evident that in yeast and mammalian MFE-2s, despite basically identical functional domains, the assembly of these domains into a mature, dimeric multifunctional enzyme is very different.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, allowing one to study, in an analogous manner, processes on scales ranging from a few meters below the surface down to depths of several hundred kilometers. Unfortunately, they suffer from a significant loss of resolution with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. During my thesis, I developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work demonstrates improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity.
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume, together with its location, are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume owing to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
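The pixel-based MCMC idea can be sketched with a toy linear problem: Metropolis-Hastings sampling of pixel values under a Gaussian likelihood, with a simple smoothness prior standing in for the regularization discussed above. Everything here (the forward operator G, the step size, the function names) is illustrative, not the thesis implementation:

```python
import numpy as np

def mcmc_pixels(G, d, sigma, beta, n_steps=5000, step=0.1, seed=0):
    """Metropolis-Hastings over pixel values m, with Gaussian data errors
    (std sigma) and a smoothness prior of weight beta on neighbor diffs."""
    rng = np.random.default_rng(seed)
    m = np.zeros(G.shape[1])

    def log_post(v):
        misfit = -0.5 * np.sum((d - G @ v) ** 2) / sigma ** 2
        smooth = -0.5 * beta * np.sum(np.diff(v) ** 2)   # regularization
        return misfit + smooth

    lp = log_post(m)
    samples = np.empty((n_steps, len(m)))
    for t in range(n_steps):
        k = rng.integers(len(m))                  # perturb a single pixel
        prop = m.copy()
        prop[k] += step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis rule
            m, lp = prop, lp_prop
        samples[t] = m
    return samples
```

The spread of the retained samples is what provides the posterior uncertainty estimates; the smoothness weight beta plays the role of the model regularization whose effect on those uncertainties is compared in the thesis.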
Abstract:
In 2010, 284 million people worldwide suffered from type 2 diabetes mellitus (T2DM); approximately half of them will develop diabetic peripheral neuropathy (DPN). Although DPN is the most common complication of diabetes mellitus and the leading cause of non-traumatic amputations, its pathophysiology is still poorly understood. To gain more insight into the molecular mechanisms underlying DPN in T2DM, I used a rodent model of T2DM, the db/db mouse.

In vivo electrophysiological recordings of diabetic animals indicated that, in addition to reduced nerve conduction velocity, db/db mice also present increased nerve excitability. Further ex vivo evaluation of the electrophysiological properties of db/db nerves clearly established the presence of a peripheral nerve hyperexcitability (PNH) phenotype in diabetic animals. Using pharmacological inhibitors, we demonstrated that PNH is mostly mediated by decreased activity of Kv1 channels. In agreement with these data, I observed that the diabetic condition led to a reduced presence of Kv1.2 subunits in juxtaparanodal regions of db/db peripheral nerves, whereas their mRNA and protein expression levels were not affected. Importantly, I confirmed a loss of juxtaparanodal Kv1.2 subunits in nerve biopsies from type 2 diabetic patients. Together, these observations indicate that the type 2 diabetic condition leads to potassium-channel-mediated changes in nerve excitability, identifying these channels as potential drug targets to treat some of the DPN-related symptoms.

Schwann cells ensheath and isolate peripheral axons by producing myelin, which consists of lipids and proteins in a 2:1 ratio. Peripheral myelin protein 2 (P2, Pmp2, or FABP8) was originally described as one of the most abundant myelin proteins in the peripheral nervous system.
P2, which is a member of the fatty acid binding protein (FABP) family, is a 14.8 kDa cytosolic protein expressed on the cytoplasmic side of compact myelin membranes. As indicated by their name, the principal role of FABPs is thought to be the binding and transport of fatty acids.

To study its role in myelinating glial cells, I generated a complete P2 knockout mouse model (P2-/-). I confirmed the loss of P2 in the sciatic nerve of P2-/- mice at both the mRNA and protein levels. Electrophysiological analysis of adult (P56) mutant mice revealed a mild but significant reduction in motor nerve conduction velocity. Interestingly, this functional change was not accompanied by any detectable alterations in general myelin structure. However, I observed significant alterations in the mRNA expression levels of other FABPs, predominantly FABP9, in the PNS of P2-/- mice compared with age-matched P2+/+ mice, indicating a role for P2 in glial myelin lipid metabolism.
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of these data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality, and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist knowledge extraction from spatio-temporal geo-referenced data and thus improve decision-making processes.

I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Map Component Planes. In addition, I present methodologies that, combined with the FGHSON and the Tree-structured SOM Component Planes, allow space and time to be integrated seamlessly and simultaneously in order to extract knowledge embedded in a temporal context.

The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability in cluster shapes, variances, densities, and numbers of clusters. The most important characteristics of the FGHSON include: (1) it does not require an a priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so when dealing with large datasets the processes can be distributed, reducing the computational cost.
(3) Only three parameters are necessary to set up the algorithm.¦In the case of the Tree-structured SOM Component Planes, the novelty of this algorithm¦lies in its ability to create a structure that allows the visual exploratory data analysis¦of large high-dimensional datasets. This algorithm creates a hierarchical structure¦of Self-Organizing Map Component Planes, arranging similar variables' projections in¦the same branches of the tree. Hence, similarities on variables' behavior can be easily¦detected (e.g. local correlations, maximal and minimal values and outliers).¦Both FGHSON and the Tree-structured SOM Component Planes were applied in¦several agroecological problems proving to be very efficient in the exploratory analysis¦and clustering of spatio-temporal datasets.¦In this thesis I also tested three soft competitive learning algorithms. Two of them¦well-known non supervised soft competitive algorithms, namely the Self-Organizing¦Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs); and the¦third was our original contribution, the FGHSON. Although the algorithms presented¦here have been used in several areas, to my knowledge there is not any work applying¦and comparing the performance of those techniques when dealing with spatiotemporal¦geospatial data, as it is presented in this thesis.¦I propose original methodologies to explore spatio-temporal geo-referenced datasets¦through time. Our approach uses time windows to capture temporal similarities and¦variations by using the FGHSON clustering algorithm. The developed methodologies¦are used in two case studies. 
In the first, the objective was to find similar agroecozones¦through time and in the second one it was to find similar environmental patterns¦shifted in time.¦Several results presented in this thesis have led to new contributions to agroecological¦knowledge, for instance, in sugar cane, and blackberry production.¦Finally, in the framework of this thesis we developed several software tools: (1)¦a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called¦BIS (Bio-inspired Identification of Similar agroecozones) an interactive graphical user¦interface tool which integrates the FGHSON algorithm with Google Earth in order to¦show zones with similar agroecological characteristics.
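The self-organizing processes at the core of the FGHSON follow the classic SOM update rule: find the best-matching unit for each sample and pull it, together with its grid neighbours, toward that sample with decaying strength. The following is a minimal sketch of that rule on a 1-D grid; it is not the author's FGHSON implementation, and the function name and parameters are illustrative:

```python
import math
import random

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a minimal 1-D self-organizing map (classic SOM, not the FGHSON).

    data: list of equal-length feature vectors.
    Returns the unit weight vectors after training.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    # Initialize unit weights with small random values.
    units = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        # Linearly decay the learning rate and neighbourhood width.
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.1
        for x in data:
            # Best-matching unit: smallest Euclidean distance to the sample.
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(units[i], x)))
            for i in range(n_units):
                # Gaussian neighbourhood over the 1-D grid index.
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                units[i] = [w + lr * h * (v - w) for w, v in zip(units[i], x)]
    return units
```

After training on data drawn from two well-separated clusters, at least one unit ends up near each cluster centre, which is the property the component-plane visualizations build on.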
Resumo:
The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources.
The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. 
Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
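The matching step described above can be sketched as greedy 1:1 nearest-neighbour matching on propensity scores. In the study the scores would come from a model of age, gender, and education; here they are assumed precomputed, and the function name and caliper value are illustrative:

```python
def greedy_nn_match(treated, control, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on precomputed propensity scores.

    treated, control: dicts mapping unit id -> propensity score.
    caliper: maximum allowed score distance for a valid match.
    Returns a dict pairing each matched treated id with a control id.
    """
    pairs = {}
    available = dict(control)
    # Process treated units in score order so results are deterministic.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Closest remaining control unit by absolute score difference.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs[t_id] = c_id
            del available[c_id]  # matching without replacement
    return pairs
```

Matching without replacement within a caliper, as sketched here, is one common choice; the caliper discards treated units with no sufficiently similar control, trading sample size for covariate balance.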
Resumo:
This paper presents an ITK implementation for exporting the contours of automated segmentation results to the DICOM-RT Structure Set format. The "radiotherapy structure set" (RTSTRUCT) object of the DICOM standard is used for the transfer of patient structures and related data between the devices found within and outside the radiotherapy department. It mainly contains the information of regions of interest (ROIs) and points of interest (e.g. dose reference points). In many cases, rather than manually drawing these ROIs on the CT images, one can benefit from the automated segmentation algorithms already implemented in ITK. But at present, it is not possible to export the ROIs obtained from ITK to the RTSTRUCT format. In order to bridge this gap, we have developed a framework for exporting contour data to RTSTRUCT. We provide here the complete implementation of the RTSTRUCT exporter and present the details of the pipeline used. Results on a 3-D CT image of the head and neck (H&N) region are presented.
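The exporter's core task is reorganizing segmented contours into the nested sequences the RTSTRUCT object expects. A schematic sketch in Python, with plain dicts standing in for DICOM sequence items: the attribute names follow the DICOM standard's keywords, but this is an illustration of the data layout, not the paper's ITK code:

```python
def contours_to_rtstruct(rois):
    """Arrange per-ROI contours into the two RTSTRUCT sequences.

    rois: dict mapping ROI name -> list of planar contours, each contour a
    list of (x, y, z) points in patient coordinates.
    Returns a plain-dict stand-in for the DICOM sequences involved.
    """
    structure_set_roi_seq = []
    roi_contour_seq = []
    for number, (name, contours) in enumerate(sorted(rois.items()), start=1):
        # Structure Set ROI Sequence: one item per region of interest.
        structure_set_roi_seq.append({"ROINumber": number, "ROIName": name})
        contour_items = []
        for points in contours:
            # ContourData is a flat x1,y1,z1,x2,y2,z2,... list in DICOM.
            flat = [coord for point in points for coord in point]
            contour_items.append({
                "ContourGeometricType": "CLOSED_PLANAR",
                "NumberOfContourPoints": len(points),
                "ContourData": flat,
            })
        # ROI Contour Sequence: geometry, linked back by ROI number.
        roi_contour_seq.append({
            "ReferencedROINumber": number,
            "ContourSequence": contour_items,
        })
    return {"StructureSetROISequence": structure_set_roi_seq,
            "ROIContourSequence": roi_contour_seq}
```

The split into an ROI-naming sequence and a separate geometry sequence, linked by ROI number, mirrors how the RTSTRUCT object separates ROI identity from contour data.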
Resumo:
The species of the common shrew (Sorex araneus) group are morphologically very similar but exhibit high levels of karyotypic variation. Here we used genetic variation at 10 microsatellite markers in a data set of 212 individuals, mostly sampled in the western Alps and comprising five karyotypic taxa (Sorex coronatus, Sorex antinorii and the S. araneus chromosome races Cordon, Bretolet and Vaud), to investigate the concordance between genetic and karyotypic structure. Bayesian analysis confirmed the taxonomic status of the three sampled species, since individuals consistently grouped according to their taxonomic assignment. However, introgression can still be detected between S. antinorii and the Cordon race of S. araneus. This observation is consistent with the expected low karyotypic complexity of hybrids between these two taxa. Geographically based cryptic substructure was discovered within S. antinorii, a pattern consistent with the different postglaciation recolonization routes of this species. Additionally, we detected two genetic groups within S. araneus notwithstanding the presence of three chromosome races. This pattern can be explained by the probable hybrid status of the Bretolet race, but it also suggests a relatively low impact of chromosomal differences on genetic structure compared to historical factors. Finally, we propose that the current data set (available at http://www.unil.ch/dee/page7010_en.html#1) could be used as a reference by those wanting to identify Sorex individuals sampled in the western Alps.
Resumo:
The complex regional pain syndrome (CRPS) is a rare but debilitating pain disorder that mostly occurs after injuries to the upper limb. A number of studies have indicated altered brain function in CRPS, whereas possible influences on brain structure remain poorly investigated. We acquired structural magnetic resonance imaging data from CRPS type I patients and applied voxel-by-voxel statistics to compare white and gray matter brain segments of CRPS patients with matched controls. Patients and controls were statistically compared in two different ways: first, we applied a two-sample t-test to compare whole-brain white and gray matter structure between patients and controls. Second, we aimed to assess structural alterations specifically of the primary somatosensory (S1) and motor cortex (M1) contralateral to the CRPS-affected side. To this end, MRI scans of patients with left-sided CRPS (and matched controls) were horizontally flipped before preprocessing and region-of-interest-based group comparison. The unpaired t-test of the "non-flipped" data revealed that CRPS patients presented increased gray matter density in the dorsomedial prefrontal cortex. The same test applied to the "flipped" data showed further increases in gray matter density, not in the S1, but in the M1 contralateral to the CRPS-affected limb, which were inversely related to decreased white matter density of the internal capsule within the ipsilateral brain hemisphere. The gray-white matter interaction between motor cortex and internal capsule suggests compensatory mechanisms within the central motor system, possibly due to motor dysfunction. Altered gray matter structure in dorsomedial prefrontal cortex may occur in response to emotional processes such as pain-related suffering or elevated analgesic top-down control.
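At each voxel, the group comparison reduces to a two-sample t-test between patients and controls, with the scans of left-sided patients mirrored first so the affected hemisphere lines up across subjects. A minimal sketch of both steps, illustrative rather than the study's actual VBM pipeline:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic, applied voxel-wise to compare
    tissue density values between two groups (illustrative only)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances of each group.
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def flip_horizontal(axial_slice):
    """Mirror one axial slice left-right, as done for the left-sided CRPS
    scans so that 'contralateral to the affected side' is consistent."""
    return [list(reversed(row)) for row in axial_slice]
```

Welch's variant is used here because it does not assume equal variances in the two groups; a pooled-variance t-test would be the stricter classical alternative.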
Resumo:
Expression data contribute significantly to the biological value of the sequenced human genome, providing extensive information about gene structure and the pattern of gene expression. ESTs, together with SAGE libraries and microarray experiment information, provide a broad and rich view of the transcriptome. However, it is difficult to perform large-scale expression mining of the data generated by these diverse experimental approaches. Not only is the data stored in disparate locations, but there is frequent ambiguity in the meaning of terms used to describe the source of the material used in the experiment. Untangling semantic differences between the data provided by different resources is therefore largely reliant on the domain knowledge of a human expert. We present here eVOC, a system which associates labelled target cDNAs for microarray experiments, or cDNA libraries and their associated transcripts, with controlled terms in a set of hierarchical vocabularies. eVOC consists of four orthogonal controlled vocabularies suitable for describing the domains of human gene expression data, including Anatomical System, Cell Type, Pathology and Developmental Stage. We have curated and annotated 7016 cDNA libraries represented in dbEST, as well as 104 SAGE libraries, with expression information, and provide this as an integrated, public resource that allows the linking of transcripts and libraries with expression terms. Both the vocabularies and the vocabulary-annotated libraries can be retrieved from http://www.sanbi.ac.za/evoc/. Several groups are involved in developing this resource with the aim of unifying transcript expression information.
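The point of using hierarchical vocabularies is that queries become transitive: a library annotated with a specific term should also be retrieved by a query for any ancestor of that term. A minimal sketch with a hypothetical term hierarchy; the term names are illustrative, not actual eVOC identifiers:

```python
# Hypothetical vocabulary fragment: child term -> parent term (None = root).
PARENT = {
    "hippocampus": "brain",
    "brain": "nervous system",
    "nervous system": None,
}

def ancestors(term, parent=PARENT):
    """All terms on the path from `term` up to the vocabulary root,
    including `term` itself."""
    path = []
    while term is not None:
        path.append(term)
        term = parent.get(term)
    return path

def libraries_for_term(query, annotations):
    """Retrieve libraries whose annotated term falls under `query` in the
    hierarchy, so a general query also matches more specific annotations.

    annotations: dict mapping library id -> annotated vocabulary term.
    """
    return sorted(lib for lib, term in annotations.items()
                  if query in ancestors(term))
```

In practice each library would carry one term from each of the four orthogonal vocabularies, and a query would intersect the four retrieval sets; the single-vocabulary case above shows the core lookup.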