27 results for encoding-error model

at Université de Lausanne, Switzerland


Relevância: 90.00%

Resumo:

In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and a large number of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; then, the uncertainty is estimated from the exact responses that are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations that are considered to estimate the uncertainty. We propose to also use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which the single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
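The cluster-medoid correction described above can be sketched numerically. This is a minimal illustration on synthetic curves (not the paper's groundwater models), assuming `numpy` and `scikit-learn`; the inverse-distance weighting below stands in for the Global Error Model's linear interpolation of medoid errors.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row is one realization's response curve.
n_real, n_t = 200, 50
exact = rng.normal(size=(n_real, 1)) * np.linspace(0, 1, n_t) \
        + rng.normal(scale=0.05, size=(n_real, n_t))
proxy = 0.8 * exact + 0.1   # hypothetical biased approximate model

# 1) Cluster the realizations by their approximate (proxy) responses.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(proxy)

# 2) Medoid = realization closest to each cluster centre; the exact
#    model is run only for these k realizations.
medoids = np.array([
    np.where(km.labels_ == c)[0][
        np.argmin(np.linalg.norm(proxy[km.labels_ == c]
                                 - km.cluster_centers_[c], axis=1))
    ]
    for c in range(k)
])
medoid_err = exact[medoids] - proxy[medoids]   # error known at medoids only

# 3a) Local Error Model: each member inherits its own medoid's error.
local_corrected = proxy + medoid_err[km.labels_]

# 3b) Global Error Model (sketched as inverse-distance weighting of
#     all medoid errors, regardless of cluster membership).
d = np.linalg.norm(proxy[:, None, :] - proxy[medoids][None, :, :], axis=2) + 1e-9
w = (1.0 / d) / (1.0 / d).sum(axis=1, keepdims=True)
global_corrected = proxy + w @ medoid_err

for name, est in [("proxy", proxy), ("local", local_corrected),
                  ("global", global_corrected)]:
    print(name, float(np.abs(est - exact).mean()))
```

Both corrections exploit only the k exact runs already paid for by DKM, so the extra cost over plain DKM is negligible.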

Relevância: 90.00%

Resumo:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
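A discretized stand-in for the FPCA error model can be sketched as follows: ordinary PCA plays the role of FPCA on curves sampled at discrete times, and a linear regression maps proxy scores to exact scores on the learning set. The curves and all parameters are synthetic assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical response curves: an exact solver and a biased, smoothed proxy.
t = np.linspace(0, 1, 60)
amp = rng.uniform(0.5, 2.0, size=300)
exact = amp[:, None] * np.exp(-((t - 0.40) ** 2) / 0.02)
proxy = 0.7 * amp[:, None] * np.exp(-((t - 0.45) ** 2) / 0.03)

# Learning set: realizations for which both solvers were run.
learn = np.arange(40)

# Discretized stand-in for FPCA: PCA scores of each family of curves.
pca_p = PCA(n_components=3).fit(proxy[learn])
pca_e = PCA(n_components=3).fit(exact[learn])
sp = pca_p.transform(proxy)           # proxy scores, all realizations
se = pca_e.transform(exact[learn])    # exact scores, learning set only

# Error model: linear map from proxy scores to exact scores.
reg = LinearRegression().fit(sp[learn], se)

# Predict the exact curve of any realization from its proxy response alone.
pred = pca_e.inverse_transform(reg.predict(sp))

held_out = np.arange(40, 300)
print("proxy RMSE:", float(np.sqrt(((proxy[held_out] - exact[held_out]) ** 2).mean())))
print("predicted RMSE:", float(np.sqrt(((pred[held_out] - exact[held_out]) ** 2).mean())))
```

The regression is fit in the low-dimensional score space, which is what makes the ill-posed curve-to-curve regression tractable; the explained-variance ratios of the two PCAs give the diagnostic of how much information the retained components carry.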

Relevância: 90.00%

Resumo:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set, and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
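The two-stage MCMC idea can be sketched with a toy one-dimensional target: the cheap proxy screens each proposal, and the exact model is evaluated only for proposals that survive the first stage. The second-stage rule below is the standard Christen-Fox correction; the Gaussian "exact" and "proxy" log-densities are stand-ins for the flow model and its approximation.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_exact(x):    # stand-in for the expensive exact model
    return -0.5 * x**2

def log_proxy(x):    # cheap, slightly biased approximation
    return -0.5 * (0.9 * x)**2

def two_stage_mcmc(n, step=2.5):
    x, lp_e, lp_p = 0.0, log_exact(0.0), log_proxy(0.0)
    chain, n_exact = [], 0
    for _ in range(n):
        y = x + step * rng.normal()
        # Stage 1: screen the proposal using the proxy only.
        if np.log(rng.uniform()) < log_proxy(y) - lp_p:
            n_exact += 1   # only now pay for the exact solver
            # Stage 2: correct with the exact model (Christen-Fox ratio
            # pi(y) pi*(x) / (pi(x) pi*(y)) for a symmetric proposal).
            a = (log_exact(y) - lp_e) - (log_proxy(y) - lp_p)
            if np.log(rng.uniform()) < a:
                x, lp_e, lp_p = y, log_exact(y), log_proxy(y)
        chain.append(x)
    return np.array(chain), n_exact

chain, n_exact = two_stage_mcmc(20000)
print("exact-model calls:", n_exact, "of", 20000)
print("posterior mean/std:", float(chain.mean()), float(chain.std()))
```

Proposals rejected at stage 1 never touch the exact solver, which is where the savings come from; the thesis's contribution is to sharpen the stage-1 screen by correcting the proxy with an error model.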

Relevância: 80.00%

Resumo:

The evolution of continuous traits is the central component of comparative analyses in phylogenetics, and the comparison of alternative models of trait evolution has greatly improved our understanding of the mechanisms driving phenotypic differentiation. Several factors influence the comparison of models, and we explore the effects of random errors in trait measurement on the accuracy of model selection. We simulate trait data under a Brownian motion model (BM) and introduce different magnitudes of random measurement error. We then evaluate the resulting statistical support for this model against two alternative models: Ornstein-Uhlenbeck (OU) and accelerating/decelerating rates (ACDC). Our analyses show that even small measurement errors (10%) consistently bias model selection towards erroneous rejection of BM in favour of more parameter-rich models (most frequently the OU model). Fortunately, methods that explicitly incorporate measurement errors in phylogenetic analyses considerably improve the accuracy of model selection. Our results call for caution in interpreting the results of model selection in comparative analyses, especially when complex models garner only modest additional support. Importantly, as measurement errors occur in most trait data sets, we suggest that estimation of measurement errors should always be performed during comparative analysis to reduce chances of misidentification of evolutionary processes.
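The remedy the authors recommend can be sketched numerically: trait data simulated under BM on a tree are contaminated with measurement error, and a BM-only fit is compared (via AIC) with a fit that adds a measurement-error variance to the diagonal of the covariance. The balanced 128-tip tree, the error magnitude, and the optimizer are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

# Balanced binary tree of depth 7: C[i, j] = branch length shared from the
# root, computed from the common leading bits of the tip labels.
depth, n = 7, 128
C = np.array([[depth - (i ^ j).bit_length() for i in range(n)]
              for j in range(n)], dtype=float)

# Simulate a BM trait and contaminate it with random measurement error.
sigma2 = 1.0
z = rng.multivariate_normal(np.zeros(n), sigma2 * C)
z_obs = z + rng.normal(scale=np.sqrt(0.10 * z.var()), size=n)

def nll(params, with_me):
    V = np.exp(params[0]) * C                 # BM rate (log-parameterized)
    if with_me:
        V = V + np.exp(params[1]) * np.eye(n)  # measurement-error variance
    return -multivariate_normal(np.zeros(n), V, allow_singular=True).logpdf(z_obs)

fit_bm = minimize(nll, [0.0], args=(False,), method="Nelder-Mead")
fit_me = minimize(nll, [0.0, -2.0], args=(True,), method="Nelder-Mead")
aic_bm = 2 * 1 + 2 * fit_bm.fun
aic_me = 2 * 2 + 2 * fit_me.fun
print("AIC BM only:", round(aic_bm, 1), " AIC BM + ME:", round(aic_me, 1))
```

Because the BM-only model is nested in the BM-plus-error model, the latter always fits at least as well; the AIC comparison shows whether the extra variance parameter is supported, which is the decision the paper argues should be made explicitly rather than letting the error masquerade as OU-like dynamics.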

Relevância: 80.00%

Resumo:

Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distribution and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high breakdown point S estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or down-weighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
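The three-step idea can be illustrated in a deliberately simplified setting: log-normal responses without covariates or censoring, a median/MAD initial fit standing in for the S estimate, and a fixed cutoff standing in for the adaptive one. None of these simplifications are the paper's estimator; they only show the structure of the procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Log-normal AFT without covariates or censoring: log T ~ N(mu, s).
mu, s, n = 2.0, 0.5, 500
logt = rng.normal(mu, s, size=n)
logt[:50] = rng.normal(8.0, 0.3, size=50)   # 10% gross outliers

# Step 1: high-breakdown initial estimate (median / MAD here, standing in
# for the initial S estimate of the paper).
mu0 = np.median(logt)
s0 = 1.4826 * np.median(np.abs(logt - mu0))

# Step 2: reject observations that are unlikely under the initial fit
# (fixed cutoff standing in for the adaptive one).
cut = 2.5
keep = np.abs(logt - mu0) / s0 < cut

# Step 3: (weighted) maximum likelihood on the retained observations.
mu_hat, s_hat = logt[keep].mean(), logt[keep].std(ddof=1)

# Scale enters the mean response exp(mu + s^2/2), so it is not a nuisance.
mean_hat = np.exp(mu_hat + s_hat**2 / 2)

naive_mu = logt.mean()
print("naive:", round(float(naive_mu), 2),
      " robust:", round(float(mu_hat), 2), " true:", mu)
```

The gross outliers drag the naive mean far from the truth, while the screened maximum-likelihood step recovers it; the paper's adaptive cutoff additionally guarantees that, asymptotically, no clean observation is rejected.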

Relevância: 80.00%

Resumo:

Data characteristics and species traits are expected to influence the accuracy with which species' distributions can be modeled and predicted. We compare 10 modeling techniques in terms of predictive power and sensitivity to location error, change in map resolution, and sample size, and assess whether some species traits can explain variation in model performance. We focused on 30 native tree species in Switzerland and used presence-only data to model current distribution, which we evaluated against independent presence-absence data. While there are important differences between the predictive performance of modeling methods, the variance in model performance is greater among species than among techniques. Within the range of data perturbations in this study, some extrinsic parameters of data affect model performance more than others: location error and sample size reduced performance of many techniques, whereas grain had little effect on most techniques. No technique can rescue species that are difficult to predict. The predictive power of species-distribution models can partly be predicted from a series of species characteristics and traits based on growth rate, elevational distribution range, and maximum elevation. Slow-growing species or species with narrow and specialized niches tend to be better modeled. The Swiss presence-only tree data produce models that are reliable enough to be useful in planning and management applications.

Relevância: 30.00%

Resumo:

Eukaryotic cells generate energy in the form of ATP, through a network of mitochondrial complexes and electron carriers known as the oxidative phosphorylation system. In mammals, mitochondrial complex I (CI) is the largest component of this system, comprising 45 different subunits encoded by mitochondrial and nuclear DNA. Humans diagnosed with mutations in the gene NDUFS4, encoding a nuclear DNA-encoded subunit of CI (NADH dehydrogenase ubiquinone Fe-S protein 4), typically suffer from Leigh syndrome, a neurodegenerative disease with onset in infancy or early childhood. Mitochondria from NDUFS4 patients usually lack detectable NDUFS4 protein and show a CI stability/assembly defect. Here, we describe a recessive mouse phenotype caused by the insertion of a transposable element into Ndufs4, identified by a novel combined linkage and expression analysis. Designated Ndufs4(fky), the mutation leads to aberrant transcript splicing and absence of NDUFS4 protein in all tissues tested of homozygous mice. Physical and behavioral symptoms displayed by Ndufs4(fky/fky) mice include temporary fur loss, growth retardation, unsteady gait, and abnormal body posture when suspended by the tail. Analysis of CI in Ndufs4(fky/fky) mice using blue native PAGE revealed the presence of a faster migrating crippled complex. This crippled CI was shown to lack subunits of the "N assembly module", which contains the NADH binding site, but contained two assembly factors not present in intact CI. Metabolomic analysis of the blood by tandem mass spectrometry showed increased hydroxyacylcarnitine species, implying that the CI defect leads to an imbalanced NADH/NAD(+) ratio that inhibits mitochondrial fatty acid β-oxidation.

Relevância: 30.00%

Resumo:

Male and female Wistar rats were treated postnatally (PND 5-16) with BSO (l-buthionine-(S,R)-sulfoximine) to provide a rat model of schizophrenia based on transient glutathione deficit. In the watermaze, BSO-treated male rats perform very efficiently in conditions where a diversity of visual information is continuously available during orientation trajectories [1]. Our hypothesis is that the treatment impairs proactive strategies anticipating future sensory information, while supporting a tight visual adjustment on memorized snapshots, i.e. compensatory reactive strategies. To test this hypothesis, BSO rats' performance was assessed in two conditions using an 8-arm radial maze task: a semi-transparent maze with no available view on the environment from the maze centre [2], and a modified 2-parallel maze known to induce a neglect of the parallel pair in normal rats [3-5]. Male rats, but not females, were affected by the BSO treatment. In the semi-transparent maze, BSO males expressed a higher error rate, especially in completing the maze after an interruption. In the 2-parallel maze, BSO males, unlike controls, expressed no neglect of the parallel arms. This second result was in accord with a reactive strategy using accurate memory images of the contextual environment instead of a representation based on integrating relative directions. These results are coherent with a treatment-induced deficit in proactive decision strategy based on multimodal cognitive maps, compensated by accurate reactive adaptations based on the memory of local configurations. Control females did not express an efficient proactive capacity in the semi-transparent maze, neither did they show the significant neglect of the parallel arms, which might have masked the BSO-induced effect. Their reduced sensitivity to BSO treatment is discussed with regard to a sex-biased basal cognitive style.

Relevância: 30.00%

Resumo:

Background. Streptococcus gallolyticus is a causative agent of infective endocarditis associated with colon cancer. The genome sequence of strain UCN34 revealed the existence of 3 pilus loci (pil1, pil2, and pil3). Pili are long filamentous structures playing a key role as adhesive organelles in many pathogens. The pil1 locus encodes 2 LPXTG proteins (Gallo2178 and Gallo2179) and 1 sortase C (Gallo2177). Gallo2179, displaying a functional collagen-binding domain, was referred to as the adhesin, whereas Gallo2178 was designated as the major pilin. Methods. S. gallolyticus UCN34, Pil1(+) and Pil1(-), expressing various levels of pil1, and recombinant Lactococcus lactis strains, constitutively expressing pil1, were studied. Polyclonal antibodies raised against the putative pilin subunits Gallo2178 and Gallo2179 were used in immunoblotting and immunogold electron microscopy. The role of pil1 was tested in a rat model of endocarditis. Results. We showed that the pil1 locus (gallo2179-78-77) forms an operon differentially expressed among S. gallolyticus strains. Short pilus appendages were identified both on the surface of S. gallolyticus UCN34 and on recombinant L. lactis expressing pil1. We demonstrated that the Pil1 pilus is involved in binding to collagen, biofilm formation, and virulence in experimental endocarditis. Conclusions. This study identifies Pil1 as the first virulence factor characterized in S. gallolyticus.

Relevância: 30.00%

Resumo:

Glutaric aciduria type I (glutaryl-CoA dehydrogenase deficiency) is an inborn error of metabolism that usually manifests in infancy by an acute encephalopathic crisis and often results in permanent motor handicap. Biochemical hallmarks of this disease are elevated levels of glutarate and 3-hydroxyglutarate in blood and urine. The neuropathology of this disease is still poorly understood, as low lysine diet and carnitine supplementation do not always prevent brain damage, even in early-treated patients. We used a 3D in vitro model of rat organotypic brain cell cultures in aggregates to mimic glutaric aciduria type I by repeated administration of 1 mM glutarate or 3-hydroxyglutarate at two time points representing different developmental stages. Both metabolites were deleterious for the developing brain cells, with 3-hydroxyglutarate being the most toxic metabolite in our model. Astrocytes were the cells most strongly affected by metabolite exposure. In the culture medium, we observed an up to 11-fold increase of ammonium with a concomitant decrease of glutamine. We further observed an increase in lactate and a concomitant decrease in glucose. Exposure to 3-hydroxyglutarate led to a significantly increased cell death rate. Thus, we propose a three-step model for brain damage in glutaric aciduria type I: (i) 3-hydroxyglutarate (3-OHGA) causes the death of astrocytes, (ii) deficiency of the astrocytic enzyme glutamine synthetase leads to intracerebral ammonium accumulation, and (iii) high ammonium triggers secondary death of other brain cells. These unexpected findings need to be further investigated and verified in vivo. They suggest that intracerebral ammonium accumulation might be an important target for the development of more effective treatment strategies to prevent brain damage in patients with glutaric aciduria type I.

Relevância: 30.00%

Resumo:

Excessive exposure to solar ultraviolet (UV) is the main cause of skin cancer. Specific prevention should be further developed to target overexposed or highly vulnerable populations. A better characterisation of anatomical UV exposure patterns is, however, needed for specific prevention. We aimed to develop a regression model for predicting the UV exposure ratio (ER, the ratio between the anatomical dose and the corresponding ground-level dose) for each body site without requiring individual measurements. A 3D numeric model (SimUVEx) was used to compute ER for various body sites and postures. A multiple fractional polynomial regression analysis was performed to identify predictors of ER. The regression model was fit to simulation data and its performance was tested on an independent data set. Two input variables were sufficient to explain ER: the cosine of the maximal daily solar zenith angle and the fraction of the sky visible from the body site. The regression model was in good agreement with the simulated ER (R(2)=0.988). Relative errors up to +20% and -10% were found in daily dose predictions, whereas an average relative error of only 2.4% (-0.03% to 5.4%) was found in yearly dose predictions. The regression model accurately predicts ER and UV doses on the basis of readily available data such as global UV erythemal irradiance measured at ground surface stations or inferred from satellite information. It renders the development of exposure data on a wide temporal and geographical scale possible and opens broad perspectives for epidemiological studies and skin cancer prevention.
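The regression idea can be sketched under stated assumptions: the ER surface, its coefficients, and the dose value below are synthetic placeholders, not SimUVEx output; only the two predictors (the cosine of the maximal daily solar zenith angle and the sky view fraction) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical training data over the two predictors identified by the study.
n = 400
cos_sza = rng.uniform(0.3, 0.95, size=n)  # cos of maximal daily solar zenith angle
svf = rng.uniform(0.1, 1.0, size=n)       # fraction of sky visible from the body site

# Synthetic stand-in for the simulated exposure ratio (NOT the SimUVEx model).
er = 0.1 + 0.5 * svf + 0.4 * cos_sza * svf + rng.normal(scale=0.02, size=n)

# Fractional-polynomial-style design: candidate powers plus an interaction.
X = np.column_stack([np.ones(n), svf, np.sqrt(svf), cos_sza, cos_sza * svf])
beta, *_ = np.linalg.lstsq(X, er, rcond=None)

pred = X @ beta
r2 = 1 - ((er - pred) ** 2).sum() / ((er - er.mean()) ** 2).sum()
print("R^2:", round(float(r2), 3))

# Anatomical daily dose = ER x ambient (ground-level) erythemal dose.
ambient_dose = 3.5   # hypothetical daily ambient dose, kJ/m^2
print("predicted anatomical dose:", round(float(pred[0] * ambient_dose), 2), "kJ/m^2")
```

Once such a model is fit, anatomical doses follow from routinely available ambient irradiance (ground stations or satellite products), which is the point of the study.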

Relevância: 30.00%

Resumo:

The protection elicited by the intramuscular injection of two plasmid DNAs encoding Leishmania major cysteine proteinase type I (CPb) and type II (CPa) was evaluated in a murine model of experimental cutaneous leishmaniasis. BALB/c mice were immunized either separately or with a cocktail of the two plasmids expressing CPa or CPb. It was only when the cpa and cpb genes were co-injected that long-lasting protection against parasite challenge was achieved. Similar protection was also observed when animals were first immunized with cpa/cpb DNA followed by a recombinant CPa/CPb boost. Analysis of the immune response showed that protected animals developed a specific Th1 immune response, which was associated with an increase of IFN-gamma production. This is the first report demonstrating that co-injection of two genes expressing different antigens induces a long-lasting protective response, whereas the separate injection of cysteine protease genes is not protective.

Relevância: 30.00%

Resumo:

Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model where one regressor is mismeasured, allowing the measurement error to correlate with model error. Zero correlation between measurement error and model error is a special case in our model where correlated measurement error equals zero. We ask two research questions. First, we wonder if the correlated measurement error can be identified in the context of panel data. Second, we wonder if classical instrumental variables in panel data need to be adjusted when correlation between measurement error and model error cannot be ignored. Under some regularity conditions the answer is yes to both questions. We then propose a two-step estimation corresponding to the two questions. The first step estimates correlated measurement error from a reverse regression; and the second step estimates usual coefficients of interest using adjusted instruments.
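The attenuation mechanism, and why instruments help, can be illustrated for the classical zero-correlation special case that the paper generalizes. The panel dimensions and variances below are arbitrary; the lagged-regressor instrument is valid here only because the unit component persists while the measurement error is serially independent.

```python
import numpy as np

rng = np.random.default_rng(6)

# Panel of N units over T periods; true regressor x*, coefficient beta = 1.
N, T, beta = 500, 6, 1.0
xs = rng.normal(size=(N, T)) + rng.normal(size=(N, 1))    # persistent unit component
y = beta * xs + rng.normal(scale=0.5, size=(N, T))        # model error
x_obs = xs + rng.normal(scale=0.7, size=(N, T))           # classical measurement error

x, yv = x_obs.ravel(), y.ravel()

# OLS on the mismeasured regressor is attenuated toward zero.
b_ols = (x * yv).sum() / (x * x).sum()

# IV: instrument x_t with x_{t-1}, which is correlated with x*_t through the
# persistent component but uncorrelated with the period-t measurement error.
z = x_obs[:, :-1].ravel()
xc, yc = x_obs[:, 1:].ravel(), y[:, 1:].ravel()
b_iv = (z * yc).sum() / (z * xc).sum()

print("OLS:", round(float(b_ols), 3), " IV:", round(float(b_iv), 3), " true:", beta)
```

When the measurement error is itself correlated with the model error, the case studied in the paper, this plain instrument no longer suffices; their two-step procedure first estimates the correlated component from a reverse regression and then adjusts the instruments accordingly.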

Relevância: 30.00%

Resumo:

Purpose: In this study, we investigated the expression of the gene encoding beta-galactosidase (Glb)-1-like protein 3 (Glb1l3), a member of the glycosyl hydrolase 35 family, during retinal degeneration in the retinal pigment epithelium (RPE)-specific 65-kDa protein knockout (Rpe65(-/-)) mouse model of Leber congenital amaurosis (LCA). Additionally, we assessed the expression of the other members of this protein family, including beta-galactosidase-1 (Glb1), beta-galactosidase-1-like (Glb1l), and beta-galactosidase-1-like protein 2 (Glb1l2). Methods: The structural features of Glb1l3 were assessed using bioinformatic tools. mRNA expression of Glb-related genes was investigated by oligonucleotide microarray, real-time PCR, and reverse transcription (RT)-PCR. The localized expression of Glb1l3 was assessed by combined in situ hybridization and immunohistochemistry. Results: Glb1l3 was the only Glb-related member strongly downregulated in Rpe65(-/-) retinas before the onset and during progression of the disease. Glb1l3 mRNA was only expressed in the retinal layers and the RPE/choroid. The other Glb-related genes were ubiquitously expressed in different ocular tissues, including the cornea and lens. In the healthy retina, expression of Glb1l3 was strongly induced during postnatal retinal development; age-related increased expression persisted during adulthood and aging. Conclusions: These data highlight early-onset downregulation of Glb1l3 in Rpe65-related disease. They further indicate that impaired expression of Glb1l3 is mostly due to the absence of the chromophore 11-cis retinal, suggesting that Rpe65 deficiency may have many metabolic consequences in the underlying neuroretina.

Relevância: 30.00%

Resumo:

Background: Glutathione (GSH), a major cellular redox regulator and antioxidant, is decreased in cerebrospinal fluid and prefrontal cortex of schizophrenia patients. The gene of the key GSH-synthesizing enzyme, glutamate-cysteine ligase, modifier (GCLM) subunit, is associated with schizophrenia, suggesting that the deficit in the GSH system is of genetic origin. Using the GCLM knock-out (KO) mouse as a model system with 60% decreased brain GSH levels and, thus, strong vulnerability to oxidative stress, we have shown that GSH dysregulation results in abnormal mouse brain morphology (e.g., reduced parvalbumin, PV, immuno-reactivity in frontal areas) and function. Additional oxidative stress, induced by GBR12909 (a dopamine re-uptake inhibitor), enhances morphological changes even further. Aim: In the present study we use the GCLM KO mouse model system, asking now whether GSH dysregulation also compromises mouse behaviour and cognition. Methods: Male and female wildtype (WT) and GCLM-KO mice are treated with GBR12909 or phosphate-buffered saline (PBS) from postnatal day (P) 5 to 10, and are behaviourally tested at P60 and older. Results: In comparison to WT, KO animals of both sexes are hyperactive in the open field, display more frequent open arm entries on the elevated plus maze, longer float latencies in the Porsolt swim test, and more frequent contacts with novel and familiar objects. Contrary to other reports of animal models with reduced PV immuno-reactivity, GCLM-KO mice display normal rule learning capacity and perform normally on a spatial recognition task. GCLM-KO mice do, however, show a strong deficit in object recognition after a 15-minute retention delay. GBR12909 treatment exerts no additional effect. Conclusions: The results suggest that animals with impaired regulation of brain oxidative stress are impulsive and have reduced behavioural control in novel, unpredictable contexts.
Moreover, GSH dysregulation seems to induce a selective attentional or stimulus-encoding deficit: despite intensive object exploration, GCLM-KO mice cannot discriminate between novel and familiar objects. In conclusion, the present data indicate that GSH dysregulation may contribute to the manifestation of behavioural and cognitive anomalies that are associated with schizophrenia.