932 results for Probabilistic latent semantic model
Abstract:
The paper discusses the maintenance challenges of organisations with a huge number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes connectivity of instruments to report relevant features for monitoring. It also requires enough historical records with diagnosed breakdowns to make the probabilistic models reliable and useful for the predictive maintenance strategies based on them. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and Dynamic Bayesian Networks are proposed to model cause-effect relationships that trigger predictive maintenance services based on the influence between observed features and previously documented diagnostics.
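For the availability part, the simplest instance is a two-state (up/down) Markov model with constant failure and repair rates; a minimal sketch, with illustrative rates rather than values from the paper:

```python
def steady_state_availability(failure_rate: float, repair_rate: float) -> float:
    """Steady-state availability of a two-state (up/down) Markov model.

    With constant failure rate lam and constant repair rate mu, the
    stationary probability of the 'up' state is mu / (lam + mu).
    """
    return repair_rate / (failure_rate + repair_rate)

# Illustrative figures: one failure per 1000 h (lam = 0.001/h) and a mean
# repair time of 10 h (mu = 0.1/h) give an availability just above 99%.
availability = steady_state_availability(0.001, 0.1)
```

The same stationary-distribution calculation extends to larger Markov chains with one state per degradation level.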
Abstract:
BACKGROUND: Solexa/Illumina short-read ultra-high throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data poses new challenges; currently a fair proportion of the tags are routinely discarded due to an inability to match them to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base calling algorithm using model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and number of usable tags as compared with Solexa's data processing pipeline by an average of 15%. An R package is provided which allows fast and accurate base calling of Solexa's fluorescence intensity files and the production of informative diagnostic plots.
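The ambiguity-coding idea can be sketched as follows; the 0.9 cumulative-probability threshold and the per-base posterior inputs are assumptions for illustration, not the paper's exact algorithm:

```python
# Map from a frozenset of candidate bases to its IUPAC symbol (subset shown;
# unlisted combinations fall back to 'N').
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C",
    frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y",
    frozenset("GC"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M",
    frozenset("ACGT"): "N",
}

def call_base(probs: dict, threshold: float = 0.9) -> str:
    """Call a base, or an IUPAC ambiguity code when no base is certain enough.

    `probs` maps 'A'/'C'/'G'/'T' to posterior probabilities. Bases are added
    in decreasing order of probability until the cumulative mass reaches
    `threshold`; the set of retained bases determines the symbol.
    """
    ranked = sorted(probs, key=probs.get, reverse=True)
    chosen, mass = [], 0.0
    for base in ranked:
        chosen.append(base)
        mass += probs[base]
        if mass >= threshold:
            break
    return IUPAC.get(frozenset(chosen), "N")
```

A confident call yields a plain base, while a near-tie between two bases yields the corresponding two-base ambiguity symbol.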
Abstract:
The most evident symptoms of schizophrenia are severe impairments of cognitive functions such as attention, abstract reasoning and working memory. The latter has been defined as the ability to maintain and manipulate on-line a limited amount of information. Whereas several studies show that working memory processes are impaired in schizophrenia, the specificity of this deficit is still unclear. Results obtained with a new paradigm, involving visuospatial, dynamic and static working memory processing, suggest that schizophrenic patients rely on a specific compensatory strategy. An animal model of schizophrenia with a transient deficit in glutathione during development reveals similar substitutive processing, masking the impairment in working memory functions only in specific test conditions. Taken together, these results show coherence between working memory deficits in schizophrenic patients and in animal models. More generally, the pathological state may be interpreted as a reduced homeostatic reserve. However, this may be balanced in specific situations by efficient allostatic strategies. Thus, the pathological condition would remain latent in several situations, owing to such allostatic regulations. However, maintaining performance based on highly specific strategies in turn requires specific conditions, limiting adaptive resources in humans and in animals. In summary, we suggest that the psychological and physical load of maintaining this rigid allostatic state is very high in patients and animal subjects.
Abstract:
The lithium-pilocarpine model mimics most features of human temporal lobe epilepsy. Following our prior studies of cerebral metabolic changes, here we explored the expression of transporters for glucose (GLUT1 and GLUT3) and monocarboxylates (MCT1 and MCT2) during and after status epilepticus (SE) induced by lithium-pilocarpine in PN10, PN21, and adult rats. In situ hybridization was used to study the expression of transporter mRNAs during the acute phase (1, 4, 12 and 24h of SE), the latent phase, and the early and late chronic phases. During SE, GLUT1 expression was increased throughout the brain between 1 and 12h of SE, more strongly in adult rats; GLUT3 increased only transiently, at 1 and 4h of SE and mainly in PN10 rats; MCT1 was increased at all ages but 5-10-fold more in adult than in immature rats; MCT2 expression increased mainly in adult rats. At all ages, MCT1 and MCT2 up-regulation was limited to the circuit of seizures while GLUT1 and GLUT3 changes were more widespread. During the latent and chronic phases, the expression of nutrient transporters was normal in PN10 rats. In PN21 rats, GLUT1 was up-regulated in all brain regions. In contrast, in adult rats GLUT1 expression was down-regulated in the piriform cortex, hilus and CA1 as a result of extensive neuronal death. The changes in nutrient transporter expression reported here further support previous findings in other experimental models demonstrating rapid transcriptional responses to marked changes in cerebral energetic/glucose demand.
Abstract:
This paper compares two well-known scan matching algorithms: the MbICP and the pIC. Based on this study, the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV), is proposed. The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS), and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contributions are: 1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty, and 2) a method to group all the data grabbed along the path described by the robot into a unique scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment with satisfactory results.
Abstract:
The complexity of the current business world is making corporate disclosure more and more important for information users. These users, including investors, financial analysts, and government authorities, rely on the disclosed information to make their investment decisions, analyze and recommend shares, and draft regulation policies. Moreover, the globalization of capital markets has made it difficult for information users to understand the differences in corporate disclosure across countries and across firms. Using a sample of 797 firms from 34 countries, this thesis advances the literature on disclosure by comprehensively illustrating the disclosure determinants originating in firm systems and national systems based on the multilevel latent variable approach. Under this approach, the overall variation associated with the firm-specific variables is decomposed into two parts, the within-country part and the between-country part. Accordingly, the model estimates the latent association between corporate disclosure and information demand at two levels, the within-country and the between-country level. The results indicate that the variables originating from corporate systems are hierarchically correlated with those from the country environment. The information demand factor, indicated by the number of exchange listings and the number of analyst recommendations, can significantly explain the variation of corporate disclosure both within and between countries. The exogenous influences of firm fundamentals (firm size and performance) are exerted indirectly through the information demand factor. Specifically, if the between-country variation in firm variables is taken into account, only the variables of legal systems and economic growth remain significant in explaining the disclosure differences across countries.
These findings strongly support the hypothesis that disclosure is a response to both corporate systems and national systems, but the influence of the latter on disclosure is reflected significantly through that of the former. In addition, the results based on ADR (American Depositary Receipt) firms suggest that the globalization of capital markets is harmonizing the disclosure behavior of cross-boundary listed firms, but it cannot entirely eliminate the national features in disclosure and other firm-specific characteristics.
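The within/between-country decomposition at the heart of the multilevel approach can be sketched with a minimal sum-of-squares split; the data below are hypothetical, and the actual thesis estimates a full latent variable model rather than this simple ANOVA-style decomposition:

```python
from statistics import mean

def variance_decomposition(groups):
    """Split the total variation of a firm-level variable into
    between-group (country) and within-group (firm) sums of squares.

    `groups` is a list of lists: one inner list of firm values per country.
    Returns (between, within); their sum equals the total sum of squares.
    """
    grand = mean(v for g in groups for v in g)
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return between, within
```

A large between-group share signals that country-level determinants (legal system, economic growth) matter, which is the situation the multilevel model is designed to handle.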
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
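The MCMC machinery behind such probabilistic inversions can be illustrated with a minimal random-walk Metropolis sampler for a single toy log-resistivity parameter; the Gaussian target and all tuning values here are illustrative and are not the thesis's 2D/3D algorithm:

```python
import math
import random

def metropolis(log_post, x0, prop_std, n_steps, seed=0):
    """Minimal random-walk Metropolis sampler.

    `log_post` is the (unnormalized) log posterior density; candidates are
    drawn from a Gaussian proposal with standard deviation `prop_std` and
    accepted with probability min(1, exp(lp_cand - lp_current)).
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, prop_std)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy target: posterior of a log-resistivity with mean 2.0 and std 0.3.
target = lambda x: -0.5 * ((x - 2.0) / 0.3) ** 2
chain = metropolis(target, x0=0.0, prop_std=0.5, n_steps=20000)
```

Discarding an initial burn-in and summarizing the remaining samples gives the posterior mean and spread that a deterministic inversion cannot provide.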
Abstract:
Aim Conservation strategies are in need of predictions that capture spatial community composition and structure. Currently, the methods used to generate these predictions generally focus on deterministic processes and omit important stochastic processes and other unexplained variation in model outputs. Here we test a novel approach of community models that accounts for this variation and determine how well it reproduces observed properties of alpine butterfly communities. Location The western Swiss Alps. Methods We propose a new approach to process probabilistic predictions derived from stacked species distribution models (S-SDMs) in order to predict community properties and assess the uncertainty in those predictions. We test the utility of our novel approach against a traditional threshold-based approach. We used mountain butterfly communities spanning a large elevation gradient as a case study and evaluated the ability of our approach to model the species richness and phylogenetic diversity of communities. Results S-SDMs reproduced the observed decrease in phylogenetic diversity and species richness with elevation, syndromes of environmental filtering. The prediction accuracy of community properties varies along the environmental gradient: variability in predictions of species richness was higher at low elevation, while it was lower for phylogenetic diversity. Our approach allowed us to map the variability in species richness and phylogenetic diversity projections. Main conclusion Using our probabilistic approach to process species distribution model outputs to reconstruct communities furnishes an improved picture of the range of possible assemblage realisations under similar environmental conditions given stochastic processes, and helps inform managers of the uncertainty in the modelling results.
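The difference between thresholding and the probabilistic treatment of S-SDM outputs can be sketched as follows; the per-species occurrence probabilities are hypothetical, and independent Bernoulli draws are a simplifying assumption:

```python
import random

def expected_richness(probs):
    """Expected species richness: the sum of the stacked SDM occurrence
    probabilities (no thresholding)."""
    return sum(probs)

def richness_distribution(probs, n_draws=10000, seed=0):
    """Monte Carlo distribution of richness: each species occurs
    independently with its predicted probability (one Bernoulli draw
    per species per assemblage realisation)."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for p in probs) for _ in range(n_draws)]

probs = [0.9, 0.6, 0.3, 0.1]   # hypothetical per-species occurrence probabilities
# Expected richness is 1.9, whereas thresholding at 0.5 would predict
# exactly two species and report no uncertainty at all.
```

The spread of the Monte Carlo distribution is exactly the "range of possible assemblage realisations" that the thresholded approach hides.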
Abstract:
A new model for dealing with decision making under risk by considering subjective and objective information in the same formulation is presented here, together with the uncertain probabilistic weighted average (UPWA). The main advantage of the UPWA is that it unifies the probability and the weighted average in the same formulation while considering the degree of importance that each case has in the analysis. Moreover, it is able to deal with uncertain environments represented in the form of interval numbers. We study some of its main properties and particular cases. The applicability of the UPWA is also studied, and it is seen to be very broad because all previous studies that use the probability or the weighted average can be revisited with this new approach. Focus is placed on a multi-person decision making problem regarding the selection of strategies using the theory of expertons.
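A minimal sketch of an aggregation of this kind, assuming the simplest unification beta * (probabilistic part) + (1 - beta) * (weighted-average part) applied endpoint-wise to interval arguments; this is an illustration of the idea, not the paper's exact operator definition:

```python
def upwa(intervals, probs, weights, beta):
    """Uncertain probabilistic weighted average (sketch).

    Each argument is an interval (lo, hi). The operator unifies the
    probabilistic aggregation and the weighted average, with `beta` in
    [0, 1] expressing the relative importance of the probabilistic part:
        UPWA = beta * sum(p_i * a_i) + (1 - beta) * sum(w_i * a_i)
    Interval arithmetic here is endpoint-wise (weights are nonnegative).
    """
    def agg(ws):
        lo = sum(w * a[0] for w, a in zip(ws, intervals))
        hi = sum(w * a[1] for w, a in zip(ws, intervals))
        return lo, hi
    p_lo, p_hi = agg(probs)
    w_lo, w_hi = agg(weights)
    return (beta * p_lo + (1 - beta) * w_lo,
            beta * p_hi + (1 - beta) * w_hi)
```

Setting beta to 1 recovers the pure probabilistic aggregation and beta to 0 the pure weighted average, which is how the operator subsumes both special cases.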
Abstract:
Recent single-cell studies in monkeys (Romo et al., 2004) show that the activity of neurons in the ventral premotor cortex covaries with the animal's decisions in a perceptual comparison task regarding the frequency of vibrotactile events. The firing rate response of these neurons was dependent only on the frequency differences between the two applied vibrations, the sign of that difference being the determining factor for correct task performance. We present a biophysically realistic neurodynamical model that can account for the most relevant characteristics of this decision-making-related neural activity. One of the nontrivial predictions of this model is that Weber's law will underlie the perceptual discrimination behavior. We confirmed this prediction in behavioral tests of vibrotactile discrimination in humans and propose a computational explanation of perceptual discrimination that accounts naturally for the emergence of Weber's law. We conclude that the neurodynamical mechanisms and computational principles underlying the decision-making processes in this perceptual discrimination task are consistent with a fluctuation-driven scenario in a multistable regime.
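Weber's law predicts that discrimination depends on the ratio, not the absolute difference, of the two frequencies. A toy signal-detection sketch of this prediction; the multiplicative-noise model and the Weber fraction of 0.1 are assumptions for illustration, not the paper's neurodynamical model:

```python
import random

def discriminate(f1, f2, weber_fraction=0.1, n_trials=10000, seed=0):
    """Fraction of correct 'which frequency is higher?' judgments.

    Each percept is corrupted by Gaussian noise whose standard deviation
    scales with the stimulus magnitude; this multiplicative noise is what
    makes accuracy depend on the ratio f1/f2 (Weber's law) rather than on
    the absolute difference f1 - f2.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        p1 = rng.gauss(f1, weber_fraction * f1)
        p2 = rng.gauss(f2, weber_fraction * f2)
        correct += (p1 > p2) == (f1 > f2)
    return correct / n_trials

# Pairs with equal ratios yield similar accuracy: 22 vs 20 Hz behaves
# like 44 vs 40 Hz, even though the absolute difference doubles.
```

With additive (constant-variance) noise instead, the 44 vs 40 Hz pair would be easier, violating Weber's law.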
Abstract:
Quality management has become a strategic issue for organisations and is very valuable for producing quality software. However, quality management systems (QMS) are not easy to implement and maintain. The authors' experience shows the benefits of developing a QMS by first formalising it using semantic web ontologies and then putting them into practice through a semantic wiki. The QMS ontology that has been developed captures the core concepts of a traditional QMS and combines them with concepts coming from the MPIu'a development process model, which is geared towards obtaining usable and accessible software products. The ontology semantics is then directly put into play by a semantics-aware tool, the Semantic MediaWiki. The developed QMS tool has been used for two years by the GRIHO research group, where it has managed almost 50 software development projects while taking quality management issues into account. It has also been externally audited by a quality certification organisation. Its users are very satisfied with their daily work with the tool, which manages all the documents created during project development and also allows them to collaborate, thanks to the wiki features.
Abstract:
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic inversion approaches, probabilistic inversion provides the full posterior probability density function of the saturation field and accounts for the uncertainties inherent in the petrophysical parameters relating the resistivity to saturation. In this study, the data are from benchtop ERT experiments conducted during gas injection into a quasi-2D brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. The saturation fields are estimated by Markov chain Monte Carlo inversion of the measured data and compared to independent saturation measurements from light transmission through the chamber. Different model parameterizations are evaluated in terms of the recovered saturation and petrophysical parameter values. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values in structural elements whose shape and location are assumed known or represented by an arbitrary Gaussian bell structure. Results show that the estimated saturation fields are in overall agreement with saturations measured by light transmission, but differ strongly in terms of parameter estimates, parameter uncertainties and computational intensity. Discretization in the frequency domain (as in the discrete cosine transform parameterization) provides more accurate models at a lower computational cost compared to spatially discretized (Cartesian) models. A priori knowledge about the expected geologic structures allows for non-discretized model descriptions with markedly reduced degrees of freedom. 
Constraining the solutions to the known injected gas volume improved estimates of saturation and parameter values of the petrophysical relationship.
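The petrophysical link between resistivity and saturation used in such inversions is commonly expressed with Archie's law; a minimal sketch, where the default parameter values are textbook placeholders rather than the values estimated in the study:

```python
def saturation_from_resistivity(rho, rho_w, phi, a=1.0, m=2.0, n=2.0):
    """Invert Archie's law for brine saturation.

    rho     : bulk resistivity (ohm-m)
    rho_w   : pore-water resistivity (ohm-m)
    phi     : porosity (fraction)
    a, m, n : Archie parameters (tortuosity factor, cementation
              exponent, saturation exponent)

    Archie's law: rho = a * rho_w * phi**(-m) * S**(-n)
    solved for S: S = (a * rho_w / (phi**m * rho)) ** (1 / n)
    """
    return (a * rho_w / (phi ** m * rho)) ** (1.0 / n)
```

Because a, m and n are themselves uncertain, the probabilistic inversion samples them jointly with the saturation field rather than fixing them.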
Abstract:
The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to perform a sensitivity analysis of the rainfall time resolution on the results of a distributed hydrologic model in a flash-flood prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective of the current paper is to propose a methodology that improves radar rainfall estimation at a higher spatial and temporal resolution. Composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km2 spatial resolution, were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (a gauge-adjustment method) was applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological basin behaviour.
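A Z/R relationship of the power-law form Z = a * R**b underlies the gauge-adjustment step; a minimal sketch, using the classical Marshall-Palmer coefficients as placeholder defaults rather than the recalibrated convective/stratiform values from the paper:

```python
def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from radar reflectivity via Z = a * R**b.

    `dbz` is reflectivity in dBZ; the linear reflectivity factor is
    Z = 10**(dbz / 10) in mm^6/m^3. Inverting the power law gives
    R = (Z / a) ** (1 / b). The defaults a=200, b=1.6 are the classic
    Marshall-Palmer values, commonly replaced by locally calibrated
    coefficients for convective versus stratiform echoes.
    """
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)
```

Fitting separate (a, b) pairs for convective and stratiform pixels is what allows the adjustment to reduce the systematic underestimation of rainfall.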
Abstract:
The traditional net present value (NPV) method for analyzing the economic profitability of an investment (based on a deterministic approach) does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach for evaluating the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that not including the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study illustrates the complexity of the interrelationships between uncertain variables and their impact on the viability of carrying out this type of business in Chile. The steps for the analysis of economic viability were as follows. First, fitted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes, and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV. This gave information to estimate the probability distributions of the stochastic output variables (SOV), such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin, and return on capital. Fourth, the simulation results of the complete stochastic model were used to analyze alternative scenarios and to provide the results to decision makers in the form of probabilities, probability distributions, and probabilistic forecasts of the SOV. The main conclusion is that this project is a profitable alternative investment in fruit trees in Chile.
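The simulation steps above can be sketched as follows; all monetary figures, distributions, and the correlation value are illustrative placeholders, not data from the study. Correlation between the two stochastic inputs is induced with a 2x2 Cholesky factor:

```python
import math
import random

def correlated_normals(rng, rho):
    """Draw two standard normals with correlation `rho`
    (Cholesky factorization of the 2x2 correlation matrix)."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2

def npv_distribution(n_iter=10000, rho=0.6, rate=0.08, seed=0):
    """Monte Carlo NPV of a hypothetical 10-year project with correlated
    price and yield. Illustrative figures: initial investment 100, and a
    single annual cash flow of price * yield minus a fixed cost of 20,
    held constant over the horizon within each iteration.
    """
    rng = random.Random(seed)
    npvs = []
    for _ in range(n_iter):
        z_price, z_yield = correlated_normals(rng, rho)
        price = 5.0 + 0.5 * z_price      # stochastic input: mean 5, sd 0.5
        crop = 10.0 + 1.0 * z_yield      # stochastic input: mean 10, sd 1
        cash = price * crop - 20.0
        npvs.append(sum(cash / (1 + rate) ** t for t in range(1, 11)) - 100.0)
    return npvs

npvs = npv_distribution()
# Probabilistic output for decision makers: the chance the project loses money.
prob_loss = sum(v < 0 for v in npvs) / len(npvs)
```

Setting rho to 0 in this sketch narrows the NPV distribution, which is exactly the underestimation of risk that ignoring intratemporal correlation produces.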
Abstract:
Résumé: The impact of Alzheimer's disease (AD) is devastating for the daily life of the affected person, with progressive loss of memory and other cognitive faculties leading to dementia. There is still no treatment for this disease, and considerable uncertainty also surrounds the diagnosis of the early stages of AD. The anatomical signature of AD, in particular atrophy of the medial temporal lobe (MTL) measured with neuroimaging, can be used as an early, in vivo biomarker of the first stages of AD. However, despite the evident role of the MTL in memory processes, we know that predictive anatomical models of AD based solely on measures of MTL atrophy do not explain all clinical cases. During my thesis, I conducted three projects to understand the anatomy and functioning of the MTL in (1) the disease process, (2) memory processes and (3) learning processes. I focused on a population with mild cognitive impairment (MCI), at risk for AD. The aim of the first project was to test the hypothesis that factors other than cognitive ones, such as personality traits, can explain inter-individual differences in the MTL. Moreover, the phenotypic diversity of the preclinical manifestations of AD also stems from our limited knowledge of memory and learning processes in the healthy brain. The objective of the second project concerns the investigation of MTL sub-regions, and more particularly their contribution to different components of recognition memory in healthy subjects. To study this, I used a new multivariate method together with high-resolution MRI to test the contribution of these sub-regions to the processes of familiarity ("Know") and recollection. 
Finally, the objective of the third project was to test the contribution of the MTL as a memory system in learning, and the dynamic interaction between different memory systems during learning. The results of the first project show that, beyond the cognitive deficit observed in a population with MCI, personality traits can explain inter-individual differences in the MTL, notably with a larger contribution of neuroticism, linked to vulnerability to stress and depression. My study identified a pattern of anatomical abnormality in the MTL associated with personality, based on measures of volume and mean tissue diffusion. This pattern is characterized by a right-left asymmetry of the MTL and an anteroposterior gradient within the MTL. I interpreted this result in terms of tissue and neurochemical properties that are differently sensitive to stress. The results of my second project contributed to the current debate on the contribution of MTL sub-regions to the processes of familiarity and recollection. Using a new multivariate method, the results first support a dissociation of the sub-regions associated with the different memory components: the hippocampus is most associated with recollection-type memory and the parahippocampal cortex with familiarity-type memory. Secondly, the activation corresponding to the memory trace for each type of memory is characterized by a distinct spatial distribution. The specific "sparse-distributed" neuronal representation associated with recollection in the hippocampus would be the best way to rapidly encode detailed memories without interfering with previously stored ones. In my third project, I set up a functional MRI learning task to study the learning of probabilistic associations based on feedback/reward. 
This study allowed me to highlight the role of the MTL in learning and the interaction between different memory systems such as procedural memory, perceptual memory or priming, and working memory. We found activations in the MTL corresponding to an episodic memory process; in the basal ganglia (BG), to procedural memory and reward; in the occipito-temporal (OT) cortex, to the perceptual representation system or priming; and in the prefrontal cortex, to working memory. We also observed that these regions can interact; the type of relationship between the MTL and the BG was interpreted as a competition, as already reported in recent studies. Moreover, using a dynamic causal model, I demonstrated the existence of effective connectivity between regions, characterized by a "top-down" causal influence from cortical regions associated with higher-level processes, such as the prefrontal cortex, onto more primary cortical regions such as the OT cortex. This influence decreases over the course of learning, which could correspond to a mechanism of reduction of the prediction error. My interpretation is that this is at the origin of semantic knowledge. I also showed that subjects' choices and the associated brain activation are influenced by personality traits and negative affective states. The results of this thesis led me to propose (1) a model explaining the possible mechanisms underlying the influence of personality on the MTL in a population with MCI, and (2) a dissociation of MTL sub-regions across different memory types, with a neuronal representation specific to these regions, which could help resolve the current debates on recognition memory. 
Finally, (3) the MTL is also a memory system involved in learning, which can interact with the BG through competition. We also highlighted a dynamic "top-down" and "bottom-up" interaction between the prefrontal cortex and the OT cortex. In conclusion, these results may provide clues for better understanding certain memory dysfunctions related to aging and Alzheimer's disease, and for improving the development of treatments. Abstract: The impact of Alzheimer's disease (AD) is devastating for the daily life of the affected patients, with progressive loss of memory and other cognitive skills leading to dementia. We still lack a disease-modifying treatment, and there is also great uncertainty regarding the accuracy of diagnostic classification in the early stages of AD. The anatomical signature of AD, in particular the medial temporal lobe (MTL) atrophy measured with neuroimaging, can be used as an early in vivo biomarker in the first stages of AD. However, despite the evident role of the MTL in memory, we know that predictive anatomical models based only on measures of brain atrophy in the MTL do not explain all clinical cases. Throughout my thesis, I conducted three projects to understand the anatomy and functioning of the MTL in (1) disease progression, (2) memory processes and (3) learning processes. I was interested in a population with mild cognitive impairment (MCI), at risk for AD. The objective of the first project was to test the hypothesis that factors other than cognitive ones, such as personality traits, can explain inter-individual differences in the MTL. Moreover, the phenotypic diversity in the manifestations of preclinical AD arises also from the limited knowledge of memory and learning processes in the healthy brain. 
The objective of the second project concerns the investigation of sub-regions of the MTL, and more particularly their contribution to the different components of recognition memory in healthy subjects. To study this, I used a new multivariate method as well as high-resolution MRI to test the contribution of those sub-regions to the processes of familiarity and recollection. Finally, the objective of the third project was to test the contribution of the MTL as a memory system in learning, and the dynamic interaction between memory systems during learning. The results of the first project show that, beyond the cognitive impairment observed in the MCI population, personality traits can explain the inter-individual differences in the MTL, notably with a higher contribution of neuroticism, linked to proneness to stress and depression. My study identified a pattern of anatomical abnormality in the MTL related to personality, based on measures of volume and mean diffusion of the tissue. That pattern is characterized by a right-left asymmetry in the MTL and an anterior-to-posterior gradient within the MTL. I interpreted this result in terms of tissue and neurochemical properties that are differently sensitive to stress. The results of my second project contributed to the current debate on the contribution of MTL sub-regions to the processes of familiarity and recollection. Using a new multivariate method, the results first support a dissociation of the sub-regions associated with different memory components: the hippocampus was mostly associated with recollection and the surrounding parahippocampal cortex with the familiarity type of memory. Secondly, the activation corresponding to the memory trace for each type of memory is characterized by a distinct spatial distribution. 
The specific "sparse-distributed" neuronal representation associated with recollection in the hippocampus would be the best way to rapidly encode detailed memories without overwriting previously stored memories. In the third project, I created a learning task with functional MRI to study the learning of probabilistic associations based on feedback/reward. That study allowed me to highlight the role of the MTL in learning and the interaction between different memory systems such as procedural memory, perceptual memory or priming, and working memory. We found activations in the MTL corresponding to an episodic memory process; in the basal ganglia (BG), to procedural memory and reward; in the occipito-temporal (OT) cortex, to perceptual memory or priming; and in the prefrontal cortex, to working memory. We also observed that those regions can interact; the type of relationship between the MTL and the BG was interpreted as a competition. In addition, with a dynamic causal model, I demonstrated a "top-down" influence from cortical regions associated with higher-level processing, such as the prefrontal cortex, on lower-level cortical regions such as the OT cortex. This influence decreases during learning, which could correspond to a mechanism of reduction of the prediction error. My interpretation is that this is at the origin of semantic knowledge. I also showed that the subjects' choices and the associated brain activation are influenced by personality traits and negative affective states. Overall, the results of this thesis led me to propose (1) a model explaining the possible mechanisms underlying the influence of personality on the MTL in a population with MCI, and (2) a dissociation of MTL sub-regions across different memory types, with a neuronal representation specific to each region. This could help resolve the current debates on recognition memory. 
Finally, (3) the MTL is also a memory system involved in learning, which can interact with the BG through competition. We also showed a dynamic interaction of the "top-down" and "bottom-up" types between the prefrontal cortex and the OT cortex. In conclusion, the results may provide clues for better understanding some memory dysfunctions in aging and Alzheimer's disease, and for improving the development of treatments.
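The feedback-driven learning of probabilistic associations and the decreasing prediction error described in the abstract can be illustrated with a simple delta-rule (Rescorla-Wagner) learner. This is a generic sketch, not the thesis's actual task or model: the contingency `p_reward`, the learning rate `alpha` and the trial count are invented for the example.

```python
import random

random.seed(0)

# Minimal delta-rule learner for one probabilistic association:
# a cue is rewarded with probability p_reward (assumed value).
p_reward = 0.8   # true cue-reward contingency (assumed)
alpha = 0.1      # learning rate (assumed)
v = 0.0          # learned associative strength / expected reward

errors = []
for trial in range(500):
    reward = 1.0 if random.random() < p_reward else 0.0
    delta = reward - v        # prediction error (the feedback signal)
    v += alpha * delta        # delta-rule update toward the outcome
    errors.append(abs(delta))

# As the association is learned, v approaches p_reward and the average
# prediction error tends to shrink toward its irreducible minimum.
early = sum(errors[:50]) / 50
late = sum(errors[-50:]) / 50
print(f"mean |error| early: {early:.2f}, late: {late:.2f}")
```

In this toy model the residual late-phase error reflects the irreducible uncertainty of the probabilistic outcome; the decline from the early phase mirrors, in a very schematic way, the learning-related decrease of the top-down influence reported in the abstract.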