20 results for "Equation of prediction"
Abstract:
BACKGROUND: The Outpatient Bleeding Risk Index (OBRI) and the Kuijer, RIETE and Kearon scores are clinical prognostic scores for bleeding in patients receiving oral anticoagulants for venous thromboembolism (VTE). We prospectively compared the performance of these scores in elderly patients with VTE. METHODS: In a prospective multicenter Swiss cohort study, we studied 663 patients aged ≥ 65 years with acute VTE. The outcome was a first major bleeding at 90 days. We classified patients into three categories of bleeding risk (low, intermediate and high) according to each score and dichotomized patients as high vs. low or intermediate risk. We calculated the area under the receiver-operating characteristic (ROC) curve, positive predictive values and likelihood ratios for each score. RESULTS: Overall, 28 out of 663 patients (4.2%, 95% confidence interval [CI] 2.8-6.0%) had a first major bleeding within 90 days. According to different scores, the rate of major bleeding varied from 1.9% to 2.1% in low-risk, from 4.2% to 5.0% in intermediate-risk and from 3.1% to 6.6% in high-risk patients. The discriminative power of the scores was poor to moderate, with areas under the ROC curve ranging from 0.49 to 0.60 (P = 0.21). The positive predictive values and positive likelihood ratios were low and varied from 3.1% to 6.6% and from 0.72 to 1.59, respectively. CONCLUSION: In elderly patients with VTE, existing bleeding risk scores do not have sufficient accuracy and power to discriminate between patients with VTE who are at a high risk of short-term major bleeding and those who are not.
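As an illustration of the dichotomized-score metrics reported above, here is a minimal sketch of how a positive predictive value and positive likelihood ratio can be computed from a 2x2 table. The counts are hypothetical placeholders, not the cohort data of the study.

```python
# Hedged illustration: PPV and positive likelihood ratio for a dichotomized
# bleeding-risk score. Counts below are hypothetical, not the reported data.

def score_metrics(tp, fp, fn, tn):
    """Return (PPV, LR+) from a 2x2 table of high-risk classification vs. bleeding."""
    ppv = tp / (tp + fp)                       # P(bleeding | classified high risk)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_plus = sensitivity / (1 - specificity)  # how much a "high risk" result shifts the odds
    return ppv, lr_plus

# Hypothetical example: 10 bleeders and 190 non-bleeders flagged high risk,
# 18 bleeders and 445 non-bleeders flagged low/intermediate risk.
ppv, lr_plus = score_metrics(tp=10, fp=190, fn=18, tn=445)
print(f"PPV = {ppv:.1%}, LR+ = {lr_plus:.2f}")
```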
Abstract:
BACKGROUND AND PURPOSE: Several prognostic scores have been developed to predict the risk of symptomatic intracranial hemorrhage (sICH) after ischemic stroke thrombolysis. We compared the performance of these scores in a multicenter cohort. METHODS: We merged prospectively collected data of consecutive patients with ischemic stroke who received intravenous thrombolysis in 7 stroke centers. We identified and evaluated 6 scores that can provide an estimate of the risk of sICH in hyperacute settings: MSS (Multicenter Stroke Survey); HAT (Hemorrhage After Thrombolysis); SEDAN (blood sugar, early infarct signs, [hyper]dense cerebral artery sign, age, NIH Stroke Scale); GRASPS (glucose at presentation, race [Asian], age, sex [male], systolic blood pressure at presentation, and severity of stroke at presentation [NIH Stroke Scale]); SITS (Safe Implementation of Thrombolysis in Stroke); and SPAN (stroke prognostication using age and NIH Stroke Scale)-100 positive index. We included only patients with available variables for all scores. We calculated the area under the receiver operating characteristic curve (AUC-ROC) and also performed logistic regression and the Hosmer-Lemeshow test. RESULTS: The final cohort comprised 3012 eligible patients, of whom 221 (7.3%) had sICH per National Institute of Neurological Disorders and Stroke, 141 (4.7%) per European Cooperative Acute Stroke Study II, and 86 (2.9%) per Safe Implementation of Thrombolysis in Stroke criteria. The performance of the scores assessed with AUC-ROC for predicting European Cooperative Acute Stroke Study II sICH was: MSS, 0.63 (95% confidence interval, 0.58-0.68); HAT, 0.65 (0.60-0.70); SEDAN, 0.70 (0.66-0.73); GRASPS, 0.67 (0.62-0.72); SITS, 0.64 (0.59-0.69); and SPAN-100 positive index, 0.56 (0.50-0.61). SEDAN had significantly higher AUC-ROC values compared with all other scores, except for GRASPS, where the difference was nonsignificant. SPAN-100 performed significantly worse compared with the other scores. The discriminative ranking of the scores was the same for the National Institute of Neurological Disorders and Stroke and Safe Implementation of Thrombolysis in Stroke definitions, with SEDAN performing best, GRASPS second, and SPAN-100 worst. CONCLUSIONS: SPAN-100 had the worst predictive power, and SEDAN consistently the highest. However, none of the scores had better than moderate performance.
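As a sketch of the calibration check mentioned above, a Hosmer-Lemeshow statistic can be computed by grouping predicted probabilities into deciles. This is an illustrative implementation on synthetic data, not the authors' code.

```python
# Hedged sketch of a Hosmer-Lemeshow goodness-of-fit test: group predicted
# sICH probabilities into deciles and compare observed vs. expected events.
import numpy as np
from scipy import stats

def hosmer_lemeshow(y_true, y_prob, groups=10):
    order = np.argsort(y_prob)
    y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
    chi2 = 0.0
    for idx in np.array_split(np.arange(len(y_prob)), groups):
        obs = y_true[idx].sum()            # observed events in this risk decile
        exp = y_prob[idx].sum()            # expected events = sum of predicted risks
        n = len(idx)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    p_value = stats.chi2.sf(chi2, df=groups - 2)
    return chi2, p_value

rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.3, size=3000)      # hypothetical predicted risks
y = rng.binomial(1, p)                     # outcomes consistent with those risks
print(hosmer_lemeshow(y, p))
```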
Abstract:
PURPOSE: The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than either individual data type. METHODS: The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients' clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. RESULTS: The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679±0.068, Akaike's information criterion 566.7, P<0.001). CONCLUSION: A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.
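A hedged sketch of the kind of multivariable Cox model described above, using the lifelines package; the column names and data are hypothetical placeholders, not the TCGA/VASARI variables.

```python
# Hedged sketch: Cox proportional-hazards model combining clinical, imaging
# and genomic covariates, in the spirit of the analysis described above.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 102
df = pd.DataFrame({
    "survival_days":    rng.exponential(400, n),   # follow-up time (synthetic)
    "death_observed":   rng.integers(0, 2, n),     # event indicator (synthetic)
    "chemotherapy":     rng.integers(0, 2, n),     # clinical factor (placeholder)
    "prop_enhancement": rng.uniform(0, 1, n),      # imaging feature (placeholder)
    "hras_copy_number": rng.normal(0, 1, n),       # genomic feature (placeholder)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_days", event_col="death_observed")
cph.print_summary()   # hazard ratios for each covariate class
```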
Abstract:
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations on univariate feature-selection methods and choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
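A hedged sketch of a predictor-building pipeline of the kind examined above (univariate feature selection plus a classifier), with accuracy estimated both by cross-validation and by bootstrap (out-of-bag) resampling. Synthetic data, not the MAQC-II breast-cancer expression sets.

```python
# Hedged sketch: feature selection + classifier, accuracy estimated by
# cross-validation and by bootstrap out-of-bag resampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample

X, y = make_classification(n_samples=230, n_features=2000, n_informative=20, random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=50), LogisticRegression(max_iter=1000))

cv_acc = cross_val_score(model, X, y, cv=5).mean()

boot_scores = []
for i in range(50):
    idx = resample(np.arange(len(y)), random_state=i)      # bootstrap training indices
    oob = np.setdiff1d(np.arange(len(y)), idx)              # out-of-bag samples as test set
    model.fit(X[idx], y[idx])
    boot_scores.append(accuracy_score(y[oob], model.predict(X[oob])))

print(f"cross-validation accuracy: {cv_acc:.3f}, bootstrap (OOB) accuracy: {np.mean(boot_scores):.3f}")
```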
Abstract:
OBJECTIVES: To test whether the Global Positioning System (GPS) could be potentially useful for assessing the velocity of walking and running in humans. SUBJECT: A young man was equipped with a GPS receiver while walking, running and cycling at various velocities on an athletic track. The speed of displacement assessed by GPS was compared with that directly measured by chronometry (76 tests). RESULTS: Under walking and running conditions (2-20 km/h) as well as cycling conditions (20-40 km/h), there was a significant relationship between the speed assessed by GPS and that actually measured (r = 0.99, P < 0.0001), with little bias in the prediction of velocity. The overall error of prediction (s.d. of the differences) averaged +/-0.8 km/h. CONCLUSION: The GPS technique appears very promising for speed assessment, although its relative accuracy at walking speeds is still insufficient for research purposes. It may be improved by using differential GPS measurements.
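A minimal sketch of the regression and error-of-prediction analysis described above; the speed values are synthetic, not the 76 original tests.

```python
# Hedged illustration: correlation, calibration line and SD of differences
# between GPS-assessed and chronometry-measured speed (synthetic values).
import numpy as np

rng = np.random.default_rng(1)
true_speed = rng.uniform(2, 40, 76)                       # km/h, chronometry
gps_speed = true_speed + rng.normal(0, 0.8, 76)           # GPS estimate with ~0.8 km/h scatter

r = np.corrcoef(gps_speed, true_speed)[0, 1]              # strength of the linear relationship
slope, intercept = np.polyfit(true_speed, gps_speed, 1)   # calibration line
sd_diff = np.std(gps_speed - true_speed, ddof=1)          # "error of prediction" (SD of differences)

print(f"r = {r:.3f}, GPS = {intercept:.2f} + {slope:.3f} x chronometry, "
      f"SD of differences = {sd_diff:.2f} km/h")
```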
Abstract:
Integrating the differential equation of Fick's second law, applied to the diffusion of chemical elements in a semi-infinite solid, made it easier to estimate the residence time of olivine megacrystals in contact with the host lava. The results of this research show the existence of two groups of olivine: the first remained in contact with the magmatic liquid for 19 to 22 days, while the second did so for only 5 to 9 days. This distinction is consistent with the one based on qualitative observation.
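For readers unfamiliar with this approach, the standard textbook solution behind such residence-time estimates is sketched below; the paper's exact boundary conditions and notation are not given in the abstract, so this is only the generic semi-infinite-solid form with constant surface concentration.

```latex
% Fick's second law in one dimension and its semi-infinite-solid solution
% (constant surface concentration C_s, initial concentration C_0):
\[
\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2},
\qquad
\frac{C(x,t) - C_0}{C_s - C_0} = \operatorname{erfc}\!\left(\frac{x}{2\sqrt{D t}}\right),
\]
% so a measured concentration ratio at depth x yields the contact time
\[
t = \frac{x^2}{4 D \left[\operatorname{erfc}^{-1}\!\left(\dfrac{C(x,t)-C_0}{C_s-C_0}\right)\right]^{2}} .
\]
```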
Abstract:
We evaluated a new combined sensor for monitoring transcutaneous carbon dioxide tension (PtcCO2) and oxygen tension (PtcO2) in 20 critically ill newborn infants. Arterial oxygen tension (PaO2) ranged from 16 to 126 torr and arterial carbon dioxide tension (PaCO2) from 14 to 72 torr. Linear correlation analysis (100 paired values) of PtcO2 versus PaO2 showed an r value of 0.75 with a regression equation of PtcO2 = 8.59 + 0.905 (PaO2), while PtcCO2 versus PaCO2 revealed a correlation coefficient of r = 0.89 with an equation of PtcCO2 = 2.53 + 1.06 (PaCO2). The bias between PaO2 and PtcO2 was -2.8 with a precision of +/- 16.0 torr (range, -87 to +48 torr). The bias between PaCO2 and PtcCO2 was -5.1 with a precision of +/- 7.3 torr (range, -34 to +8 torr). The transcutaneous sensor detected 83% of hypoxia (PaO2 less than 45 torr), 75% of hyperoxia (PaO2 greater than 90 torr), 45% of hypocapnia (PaCO2 less than 35 torr), and 96% of hypercapnia (PaCO2 greater than 45 torr). We conclude that the reliability of the combined transcutaneous PO2 and PCO2 monitor in sick neonates is good for detecting hypercapnia, fair for hypoxia and hyperoxia, but poor for hypocapnia. It is an improvement in that it spares available skin surface and requires less handling, but it appears to be slightly less accurate than the single electrodes.
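A hedged sketch of the agreement analysis used in studies of this kind: bias, precision (SD of the differences) and detection rate at a clinical threshold. The paired values below are synthetic, not the neonatal measurements.

```python
# Hedged sketch: bias, precision and hypoxia detection rate for a
# transcutaneous sensor compared with arterial values (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
pao2 = rng.uniform(16, 126, 100)                   # arterial PO2, torr
ptco2 = pao2 + rng.normal(-2.8, 16.0, 100)         # transcutaneous estimate

diff = pao2 - ptco2
bias, precision = diff.mean(), diff.std(ddof=1)    # Bland-Altman style bias and precision

hypoxia = pao2 < 45                                 # true hypoxic samples
detected = ptco2[hypoxia] < 45                      # sensor also reads < 45 torr
detection_rate = detected.mean() if hypoxia.any() else float("nan")

print(f"bias = {bias:.1f} torr, precision = +/-{precision:.1f} torr, "
      f"hypoxia detection rate = {detection_rate:.0%}")
```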
Abstract:
In this thesis, we study the use of prediction markets for technology assessment. We particularly focus on their ability to assess complex issues, the design constraints required for such applications and their efficacy compared to traditional techniques. To achieve this, we followed a design science research paradigm, iteratively developing, instantiating, evaluating and refining the design of our artifacts. This allowed us to make multiple contributions, both practical and theoretical. We first showed that prediction markets are adequate for properly assessing complex issues. We also developed a typology of design factors and design propositions for using these markets in a technology assessment context. Then, we showed that they are able to solve some issues related to the R&D portfolio management process and we proposed a roadmap for their implementation. Finally, by comparing the instantiation and the results of a multi-criteria decision method and a prediction market, we showed that the latter is more efficient while offering similar results. We also proposed a framework for comparing forecasting methods, to identify the constraints based on contingency factors. In conclusion, our research opens a new field of application for prediction markets and should help hasten their adoption by enterprises.
Abstract:
The subject of this PhD thesis can be summarized by one famous paradox of evolutionary biology: the maintenance of polymorphism in the face of selection, and one classical equation of theoretical population genetics: the change in gametic frequencies due to selection and recombination. The frequency of gamete xi at generation (t + 1) is given by an equation that is truncated in the source record. This equation is used to generate data on selection at two, three, and four diallelic loci for the different parts of this work. The first part focuses on the potential of heterozygote advantage to maintain genetic polymorphism. Results of previous studies are used to (re)define heterozygote advantage for multilocus systems, since the classical definition applies to a single diallelic locus. Using five different definitions of heterozygote advantage, I show that heterozygote advantage is not a general mechanism for the maintenance of polymorphism.
The study of the influence of undetected loci on evolutionary processes (second part of this work) is motivated by molecular studies that aim at discovering the loci coding for a trait; in most of them, some coding loci remain undetected. I show that undetected loci increase the probability of maintaining polymorphism under selection. In addition, conclusions about the factors that maintain polymorphism can be misleading if not all loci are considered. It is therefore only when all loci are detected that accurate conclusions on the level of maintained polymorphism, or on the factor(s) that maintain(s) it, can be drawn. In the third part, the focus is on the expected release of additive genetic variance after a bottleneck for selected traits. A previous study showed that the expected release of additive variance increases with the number of loci. I show that the expected release of additive variance after a bottleneck does increase for selected traits (compared with neutral ones), but that this increase is a function of the recombination rate rather than of the number of loci. Finally, the last part of this PhD thesis describes a package for the statistical software R that implements the equation given above. It allows data to be generated for different scenarios of selection, recombination, and population size. This package opens perspectives for theoretical population genetics, which mainly focuses on single loci, whereas this work shows that increasing the number of loci does not necessarily lead to straightforward results.
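The recursion referred to above is truncated in the source record. For reference, a hedged reconstruction of the standard two-locus, two-allele selection-recombination recursion is given below; the thesis presumably generalizes this to three and four loci, and its exact formulation may differ.

```latex
% Classical two-locus recursion for gamete frequencies under selection and
% recombination (standard population-genetics form). eta_i = +1 for the
% coupling gametes (1, 4) and -1 for the repulsion gametes (2, 3).
\[
x_i(t+1) \;=\; \frac{x_i(t)\, w_i \;-\; \eta_i\, r\, w_{14}\, D(t)}{\bar{w}(t)},
\qquad
D(t) = x_1 x_4 - x_2 x_3,
\qquad
\bar{w}(t) = \sum_{j,k} w_{jk}\, x_j x_k ,
\]
% where w_i is the marginal fitness of gamete i, w_14 the fitness of the
% double heterozygote, and r the recombination rate.
```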
Abstract:
We tested the performance of transcutaneous oxygen monitoring (TcPO2) and pulse oximetry (tcSaO2) in detecting hypoxia in critically ill neonatal and pediatric patients. In 54 patients (178 data sets) with a mean age of 2.4 years (range 1 to 19 years), arterial saturation (SaO2) ranged from 9.5 to 100%, and arterial oxygen tension (PaO2) from 16.4 to 128 mmHg. Linear correlation analysis of pulse oximetry vs measured SaO2 revealed an r value of 0.95 (p less than 0.001) with an equation of y = 21.1 + 0.749x, while PaO2 vs tcPO2 showed a correlation coefficient of r = 0.95 (p less than 0.001) with an equation of y = -1.04 + 0.876x. The mean difference between measured SaO2 and tcSaO2 was -2.74 +/- 7.69% (range +14 to - 29%) and the mean difference between PaO2 and tcPO2 was +7.43 +/- 8.57 mmHg (range -14 to +49 mmHg). Pulse oximetry was reliable at values above 65%, but was inaccurate and overestimated the arterial SaO2 at lower values. TcPO2 tended to underestimate the arterial value with increasing PaO2. Pulse oximetry had the best sensitivity to specificity ratio for hypoxia between 65 and 90% SaO2; for tcPO2 the best results were obtained between 35 and 55 mmHg PaO2.
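A hedged sketch of how the sensitivity/specificity trade-off for hypoxia detection can be scanned over candidate alarm thresholds. The paired values are synthetic (generated from the regression quoted above plus scatter), not the 178 original data sets, and the hypoxia cutoff is illustrative.

```python
# Hedged sketch: sensitivity and specificity of tcPO2 alarm thresholds for
# detecting arterial hypoxia (synthetic paired measurements).
import numpy as np

rng = np.random.default_rng(3)
pao2 = rng.uniform(16, 128, 178)
tcpo2 = -1.04 + 0.876 * pao2 + rng.normal(0, 8.6, 178)   # reported regression + scatter

hypoxic = pao2 < 60                                       # illustrative reference cutoff
for cutoff in (40, 50, 60, 70):
    flagged = tcpo2 < cutoff
    sens = (flagged & hypoxic).sum() / hypoxic.sum()
    spec = (~flagged & ~hypoxic).sum() / (~hypoxic).sum()
    print(f"tcPO2 < {cutoff} mmHg: sensitivity {sens:.0%}, specificity {spec:.0%}")
```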
Abstract:
Methods used to analyze one type of nonstationary stochastic process, the periodically correlated process, are considered. Two methods of one-step-ahead prediction of periodically correlated time series are examined: an autoregression model and an artificial neural network with one hidden layer of neurons and a mechanism for adapting the network parameters in a moving time window. Their one-step-ahead predictions were compared in terms of efficiency. The comparison showed that, for one-step-ahead prediction of time series of mean monthly water discharge, the simpler autoregression model is more efficient.
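A hedged sketch of one-step-ahead autoregressive prediction in a moving window, the simpler of the two methods compared above; the series, model order and window length are illustrative choices, not the study's monthly discharge data.

```python
# Hedged sketch: moving-window AR(p) one-step-ahead prediction on a
# synthetic periodic series.
import numpy as np

def ar_one_step(history, order=3):
    """Fit AR(order) by least squares on `history` and predict the next value."""
    X = np.column_stack([history[i:len(history) - order + i] for i in range(order)])
    y = history[order:]
    design = np.column_stack([np.ones(len(y)), X])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    last = np.concatenate(([1.0], history[-order:]))   # regressors for the next step
    return last @ coefs

rng = np.random.default_rng(4)
t = np.arange(240)
series = 10 + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)  # monthly-like signal

window = 60
preds = [ar_one_step(series[i - window:i]) for i in range(window, len(series))]
rmse = np.sqrt(np.mean((np.array(preds) - series[window:]) ** 2))
print(f"one-step RMSE over the test period: {rmse:.2f}")
```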
Abstract:
Purpose: To evaluate the reproducibility of Cirrus SD-OCT measurements and to compare central macular thickness (CMT) measurements between TD (Stratus) and SD (Cirrus) OCT in patients with active exudative AMD. Methods: Consecutive case series of patients with active exudative AMD seen in the Medical Retina Department. Patients underwent 1 scan with Stratus (macular thickness map protocol) and 5 scans with Cirrus (macular cube protocol) at the same visit, performed by the same experienced examiner. To be included, patients' best-corrected visual acuity (BCVA) had to be >20/200, all scans had to be of sufficient quality and well centered, and at least one Cirrus scan had to show a CMT >300 microns. The repeatability of the SD Cirrus was estimated using all 5 CMT measurements, and the mean of the Cirrus measurements was compared with the CMT obtained by TD Stratus. Results: Cirrus OCT demonstrated high intraobserver repeatability at the central foveal region (ICC 96%). The mean CMT was 321 microns for Stratus and 387 microns for Cirrus; the average difference was 65 microns (SD = 30). The coefficient of concordance between Stratus and Cirrus CMT measurements was rho = 0.749, with high precision and moderate accuracy. The regression line between Stratus and mean Cirrus measurements is given by: M_stratus = 0.848 x m_cirrus - 4.496 (1). Conclusions: The Cirrus macular cube protocol allows reproducible CMT measurements in patients with active exudative AMD. When switching from TD to SD devices (or vice versa), measurements can be predicted using equation (1). These real-life data and conclusions can help improve the clinical management of patients with neovascular AMD.
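A small helper applying the reported regression, equation (1), to convert a Cirrus CMT value into an expected Stratus value. The coefficients are those quoted in the abstract; applicability outside the studied setting (active exudative AMD, CMT > 300 microns) is not established.

```python
# Hedged helper: map a Cirrus central macular thickness to the expected
# Stratus value using the regression reported in the abstract.

def cirrus_to_stratus(cmt_cirrus_um: float) -> float:
    """Predict Stratus CMT (microns) from a Cirrus CMT measurement (microns)."""
    return 0.848 * cmt_cirrus_um - 4.496

# Worked example with the mean Cirrus value reported above:
print(cirrus_to_stratus(387))   # ~323.7 microns, close to the reported Stratus mean of 321
```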
Abstract:
The impact of Alzheimer's disease (AD) is devastating for the daily life of affected patients, with progressive loss of memory and other cognitive skills until dementia. We still lack a disease-modifying treatment, and there is also great uncertainty regarding the accuracy of diagnostic classification in the early stages of AD. The anatomical signature of AD, in particular the medial temporal lobe (MTL) atrophy measured with neuroimaging, can be used as an early in vivo biomarker of the early stages of AD. However, despite the evident role of the MTL in memory, we know that predictive anatomical models based only on measures of brain atrophy in the MTL do not explain all clinical cases.
Throughout my thesis, I conducted three projects to understand the anatomy and functioning of the MTL in relation to (1) disease progression, (2) memory processes and (3) learning processes. I was interested in a population with mild cognitive impairment (MCI), at risk for AD. The objective of the first project was to test the hypothesis that factors other than cognitive ones, such as personality traits, can explain inter-individual differences in the MTL. Moreover, the phenotypic diversity in the manifestations of preclinical AD also arises from our limited knowledge of memory and learning processes in the healthy brain. The second project concerns the investigation of sub-regions of the MTL, and more particularly their contributions to the different components of recognition memory in healthy subjects. To study this, I used a new multivariate method together with high-resolution MRI to test the contribution of these sub-regions to the processes of familiarity and recollection. Finally, the objective of the third project was to test the contribution of the MTL as a memory system in learning, and the dynamic interaction between memory systems during learning. The results of the first project show that, beyond the cognitive impairment observed in the MCI population, personality traits can explain inter-individual differences in the MTL, with a notably higher contribution of neuroticism, linked to proneness to stress and depression. My study identified a pattern of anatomical abnormality in the MTL related to personality, based on measures of volume and mean diffusivity of the tissue. This pattern is characterized by a right-left asymmetry and an anterior-to-posterior gradient within the MTL. I interpreted this result in terms of tissue and neurochemical properties that are differentially sensitive to stress. The results of my second project contributed to the current debate on the contribution of MTL sub-regions to the processes of familiarity and recollection. Using a new multivariate method, the results first support a dissociation of the sub-regions associated with the different memory components: the hippocampus was mostly associated with recollection and the surrounding parahippocampal cortex with familiarity. Secondly, the activation corresponding to the memory trace for each type of memory is characterized by a distinct spatial distribution. The specific, sparse-distributed neuronal representation associated with recollection in the hippocampus would be the best way to rapidly encode detailed memories without overwriting previously stored memories. In the third project, I designed a learning task with functional MRI to study the learning of probabilistic associations based on feedback/reward. This study allowed me to highlight the role of the MTL in learning and the interaction between different memory systems, such as procedural memory, perceptual memory or priming, and working memory. We found activations in the MTL corresponding to an episodic memory process; in the basal ganglia (BG), to procedural memory and reward; in the occipito-temporal (OT) cortex, to perceptual memory or priming; and in the prefrontal cortex, to working memory. We also observed that these regions can interact; the relationship between the MTL and the BG was interpreted as a competition.
In addition, using a dynamic causal model, I demonstrated a top-down influence from cortical regions associated with higher-level processing, such as the prefrontal cortex, on lower-level cortical regions such as the OT cortex. This influence decreases during learning, which could correspond to a mechanism of decreasing prediction error; my interpretation is that this is at the origin of semantic knowledge. I also showed that the subject's choices and the associated brain activation are influenced by personality traits and negative affective states. The overall results of this thesis led me to propose (1) a model explaining the possible mechanisms linking the influence of personality to the MTL in a population with MCI, and (2) a dissociation of MTL sub-regions across different memory types, with a neuronal representation specific to each region, which could be a cue to resolving current debates on recognition memory. Finally, (3) the MTL is also a memory system involved in learning that can interact with the BG through competition, and we showed a dynamic top-down and bottom-up interaction between the prefrontal cortex and the OT cortex. In conclusion, these results could provide cues to better understand some memory dysfunctions in aging and Alzheimer's disease and to improve the development of treatments.
Abstract:
The aim of predictive medicine is to assess the probability that individuals carrying germ-line mutations will develop certain diseases, for instance cancer (oncogenetics). In predictive oncology, particular surveillance and prevention measures are discussed with these patients in relation to risk assessment and the results of genetic testing, including preventive care which can, in extreme cases, lead to prophylactic surgery (e.g. mastectomy and/or ovariectomy).
This study is based on a psychoanalytic interpretation of subjects' narration of the oncogenetic process and aims at analyzing the psychological impact of a) genetic testing and b) the construction of the family tree. It was carried out at the Oncogenetics and cancer prevention unit (Unité d'oncogénétique et de prévention des cancers, UOPC) of the Geneva University Hospitals (Hôpitaux Universitaires de Genève, HUG), which provides genetic counselling for individuals with a personal and/or family history suggestive of a genetic predisposition to cancer. The study population comprises 125 patients followed during the successive steps of genetic counselling, for a total of 289 consultations and 50 personal interviews. This research shows that asymptomatic subjects re-elaborate in a personal way either the results of genetic testing (negative or positive) or the act of prediction. Conversely, those who have developed cancer express feelings of anguish, as if they were undergoing the effects of an inescapable destiny that has actually come true. Moreover, the family tree is reinterpreted in a personal way, and psychological aspects that were repressed or denied can re-appear. When family members are solicited to help reconstruct the genetic relationships, sometimes being themselves submitted first to genetic testing, the subject expresses the difficulty of depending on other people to learn about his or her own biological status. In this study, we observe that, where predictive medicine delivers its predictions, the subject actually responds unpredictably. From a psychoanalytic perspective, this unpredictability is related to "unconscious desire". We also find that genetic screening cannot be considered a direct cause of psychological trauma. The effort must focus on allowing subjects to re-appropriate what is happening to them and to progressively express the specific suffering at stake in the process of prediction, in order to create a gap between the medical truth and their own. Speech thus becomes the privileged space for this work. Psychoanalysis operates so that the result of genetic testing becomes detached from the act of prediction, that is, so that it becomes again a moment of the subject's life that can be articulated as part of his or her own personal history.
Abstract:
Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. To compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies at four sample sizes. The sampled data were then fed into a GLM to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's kappa agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
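A hedged sketch of the virtual-species evaluation loop described above, comparing a 'random' and a 'regular' sampling strategy on a synthetic landscape with a logistic GLM, scored by Pearson's correlation (habitat suitability) and Cohen's kappa (presence/absence). This is not the original data or the full four-strategy design.

```python
# Hedged sketch: sample a known virtual-species distribution with two
# strategies, fit a logistic GLM, and score both prediction types.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
side = 100
xx, yy = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side))
env = np.column_stack([xx.ravel(), yy.ravel()])                        # two environmental gradients
true_suit = 1 / (1 + np.exp(-(4 * env[:, 0] - 3 * env[:, 1] - 0.5)))   # known suitability
presence = rng.binomial(1, true_suit)                                  # virtual species occurrences

def evaluate(sample_idx):
    glm = LogisticRegression().fit(env[sample_idx], presence[sample_idx])
    suit_hat = glm.predict_proba(env)[:, 1]
    r = np.corrcoef(suit_hat, true_suit)[0, 1]                          # suitability accuracy (Pearson)
    kappa = cohen_kappa_score(presence, (suit_hat > 0.5).astype(int))   # presence/absence accuracy
    return r, kappa

n = 200
random_idx = rng.choice(env.shape[0], n, replace=False)           # 'random' strategy
regular_idx = np.arange(0, env.shape[0], env.shape[0] // n)[:n]   # 'regular' (systematic) strategy

print("random :", evaluate(random_idx))
print("regular:", evaluate(regular_idx))
```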