954 results for Non-Gaussian dynamic models
Abstract:
BACKGROUND: By analyzing human immunodeficiency virus type 1 (HIV-1) pol sequences from the Swiss HIV Cohort Study (SHCS), we explored whether the prevalence of non-B subtypes reflects domestic transmission or migration patterns. METHODS: Swiss non-B sequences and sequences collected abroad were pooled to construct maximum likelihood trees, which were analyzed for Swiss-specific subepidemics (subtrees including ≥80% Swiss sequences, bootstrap >70%; macroscale analysis) or evidence for domestic transmission (sequence pairs with genetic distance <1.5%, bootstrap ≥98%; microscale analysis). RESULTS: Of 8287 SHCS participants, 1732 (21%) were infected with non-B subtypes, of which A (n = 328), C (n = 272), CRF01_AE (n = 258), and CRF02_AG (n = 285) were studied further. The macroscale analysis revealed that 21% (A), 16% (C), 24% (CRF01_AE), and 28% (CRF02_AG) belonged to Swiss-specific subepidemics. The microscale analysis identified 26 possible transmission pairs: 3 (12%) including only homosexual Swiss men of white ethnicity; 3 (12%) including homosexual white men from Switzerland and partners from foreign countries; and 10 (38%) involving heterosexual white Swiss men and female partners of different nationality and predominantly nonwhite ethnicity. CONCLUSIONS: Of all non-B infections diagnosed in Switzerland, <25% could be prevented by domestic interventions. Awareness should be raised among immigrants and Swiss individuals with partners from high-prevalence countries to contain the spread of non-B subtypes.
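The two sets of thresholds above lend themselves to simple predicates. The sketch below is illustrative only; the function names and inputs are assumptions, not the SHCS analysis code:

```python
# Illustrative predicates for the macro- and microscale criteria described
# in the abstract. Thresholds follow the text; data structures are hypothetical.

def is_swiss_subepidemic(n_swiss, n_total, bootstrap):
    """Macroscale: subtree with >=80% Swiss sequences and bootstrap support >70%."""
    return n_total > 0 and n_swiss / n_total >= 0.80 and bootstrap > 70

def is_transmission_pair(genetic_distance, bootstrap):
    """Microscale: sequence pair with genetic distance <1.5% and bootstrap >=98%."""
    return genetic_distance < 0.015 and bootstrap >= 98
```

Applied over all subtrees and candidate pairs of a maximum likelihood tree, these two tests partition the dataset exactly as the macroscale/microscale analyses describe.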
Abstract:
The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be partly seen as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences of match and close non-match populations. Lastly, experimentation performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images is presented, illustrating that the proposed LR model reliably supports the correct proposition in the identification assessment of match and close non-match populations. Results further indicate that the proposed model is a promising tool for fingerprint practitioners to use for analysing the spatial consistency of corresponding minutiae configurations.
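The step from classifier scores to likelihood ratios can be illustrated with a generic score-based LR: fit the score distributions of the match and close non-match populations separately, then evaluate their density ratio at a new score. This is a simplified stand-in for the paper's SVM-based probabilistic framework (here plain Gaussians on 1-D scores; all names are hypothetical):

```python
import math

def fit_gaussian(scores):
    """Mean and (sample) standard deviation of a 1-D score distribution."""
    m = sum(scores) / len(scores)
    v = sum((s - m) ** 2 for s in scores) / (len(scores) - 1)
    return m, math.sqrt(v)

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(score, match_scores, nonmatch_scores):
    """LR = p(score | match) / p(score | close non-match)."""
    mu_m, sd_m = fit_gaussian(match_scores)
    mu_n, sd_n = fit_gaussian(nonmatch_scores)
    return gaussian_pdf(score, mu_m, sd_m) / gaussian_pdf(score, mu_n, sd_n)
```

With match scores concentrated around +1 and close non-match scores around -1, a comparison scoring near +1 yields LR >> 1 (supporting the same-source proposition) and a score near -1 yields LR << 1.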
Abstract:
Stem cells are defined as undifferentiated cells capable of a) proliferation, b) self-maintenance (self-renewal), c) production of many differentiated functional postmitotic cells (multipotency), and d) regenerating tissue after injury.
For instance, hematopoietic stem cells, located in the bone marrow, can expand, divide and generate differentiated cells of the diverse lineages throughout life, the stem cells themselves conserving their status. In the villus crypts, the intestinal stem cells are likewise able to regenerate the intestine throughout life. The retina is composed of six classes of neurons and one glial cell type. All these cell types are produced by the retinal progenitor cell. The peak of photoreceptor production is reached around the first postnatal days in rodents; thus, at this stage the retina contains highly proliferative cells. In our research, we analyzed the phenotype of these cells and their potential as possible progenitor or stem cells. We also focused on the effect of epigenetic factor(s) on cell fate determination. All the proliferating cells isolated from mouse postnatal neuroretina harbored the radial glia marker RC2, expressed transcription factors usually found in radial glia (Mash1, Pax6), and met the criteria of stem cells: high capacity of expansion, maintenance of an undifferentiated state, and multipotency demonstrated by clonal analysis. We analyzed differentiation seven days after transferring the cells into different culture media. In the absence of serum, EGF led to the expression of the neuronal marker β-tubulin-III and the acquisition of neuronal morphology in 15% of the cells. Analysis of cell proliferation by bromodeoxyuridine (BrdU) incorporation revealed that EGF mainly induced the formation of neurons without massively stimulating cell cycle progression (only about 20% of the cells incorporated BrdU). Moreover, a 2-h pulse of EGF stimulation was sufficient to induce neuronal differentiation. Some neurons were committed to the retinal ganglion cell (RGC) phenotype, as revealed by the expression of retinal ganglion cell markers (Ath5, Brn3b and melanopsin), and in a few cases to other retinal phenotypes (photoreceptors (PRs) and bipolar cells).
We confirmed that the late RSCs were not restricted over time and conserved their multipotency by generating retinal phenotypes that usually appear at early (RGC) or late (PR) developmental stages. Our results show that EGF is not only a factor controlling glial development, as previously shown, but also a potent differentiation factor for retinal neurons, at least in vitro. On the other hand, we wanted to find out whether the adult human eye contains retinal stem cells. The eye of some fishes and amphibians continues to grow during adulthood due to the persistent activity of retinal stem cells (RSCs). In fish, the RSCs are located in the ciliary marginal zone (CMZ) at the periphery of the retina. Although the adult mammalian eye does not grow during adult life, several groups have shown that the adult mouse eye contains retinal stem cells in the homologous zone (i.e. the ciliary margin), in the pigmented epithelium and not in the neuroretina. These RSCs meet some criteria of stem cells. We identified and characterized the human retinal stem cells. We showed that they possess the same features as their rodent counterparts, i.e. they self-renew, expand and differentiate into retinal neurons in vitro and in vivo (indicated by immunostaining and microarray analysis). Moreover, they can be greatly expanded while conserving their stemness potential, as revealed by gene expression profile analysis (microarray approach). They also expressed genes common to various stem cells: nucleostemin, nestin, Bmi1, Notch2, ABCG2, c-kit and its ligand, as well as cyclin D3, which acts downstream of c-kit. Furthermore, Bmi1 and Oct-4 were required for RSC proliferation, reinforcing their stem cell identity. Our data indicate that the mouse postnatal neuroretina and the pigmented epithelium of the adult human ciliary margin contain retinal stem cells. We developed a system to easily expand and culture RSCs that can be used to investigate retinogenesis.
For example, it can help to screen drugs or factors involved, for instance, in the survival or generation of retinal cells. It could also help to dissect genes or factors involved in the restriction or specification of retinal cell fate. In Western countries, retinitis pigmentosa (RP) affects 1 in 3,500 individuals and age-related macular degeneration (AMD) strikes 1% to 3% of the population over 60. In vitro generation of retinal cells is thus a promising tool to provide an unlimited cell source for cellular transplantation studies in the retina.
Abstract:
Glioblastomas are highly diffuse, malignant tumors that have so far evaded clinical treatment. The strongly invasive behavior of cells in these tumors makes them very resistant to treatment, and for this reason both experimental and theoretical efforts have been directed toward understanding the spatiotemporal pattern of tumor spreading. Although usual models assume a standard diffusion behavior, recent experiments with cell cultures indicate that cells tend to move in directions close to that of glioblastoma invasion, suggesting that a biased random walk model may be much more appropriate. Here we show analytically that, for realistic parameter values, the speeds predicted by biased dispersal are consistent with experimentally measured data. We also find that models beyond reaction–diffusion–advection equations are necessary to capture this substantial effect of biased dispersal on glioblastoma spread.
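The effect of bias can be illustrated with a toy one-dimensional walk: a forward-step probability above 1/2 adds a drift (advection) term that a purely diffusive model lacks. This is an illustrative sketch under stated assumptions, not the authors' model:

```python
import random

def walk(n_steps, p_forward, step=1.0, seed=0):
    """1-D random walk; p_forward > 0.5 biases motion in the +x (invasion) direction."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_steps):
        x += step if rng.random() < p_forward else -step
    return x

def mean_displacement(n_walkers, n_steps, p_forward):
    """Average final position over an ensemble of independent walkers."""
    return sum(walk(n_steps, p_forward, seed=i) for i in range(n_walkers)) / n_walkers

# The expected drift per step is (2*p_forward - 1)*step: zero for an unbiased
# walk (pure diffusion), positive for a biased one (diffusion + advection).
```

With p_forward = 0.6 the ensemble mean advances at roughly 0.2 steps per time unit, while the unbiased ensemble stays near the origin; this drift is the extra invasion speed that biased dispersal contributes.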
Abstract:
We present a non-equilibrium theory for a system with heat and radiative fluxes. The obtained expression for the entropy production is applied to a simple one-dimensional climate model based on the first law of thermodynamics. In the model, the dissipative fluxes are assumed to be independent variables, following the criteria of Extended Irreversible Thermodynamics (EIT), which enlarges, with respect to the classical expression, the applicability of a macroscopic thermodynamic theory to systems far from equilibrium. We analyze the second differential of the classical and the generalized entropy as a criterion for the stability of the steady states. Finally, the extreme state is obtained using variational techniques, observing that the system is close to the maximum dissipation rate.
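For context, an EIT-style generalized entropy supplements the local-equilibrium entropy with a term quadratic in the dissipative flux; the sketch below is schematic (generic coefficients for a heat flux q, not necessarily the paper's exact expression):

```latex
s = s_{\mathrm{eq}}(u) \;-\; \frac{\tau}{2\rho\lambda T^{2}}\,\mathbf{q}\cdot\mathbf{q},
\qquad
\sigma^{s} = \frac{\mathbf{q}\cdot\mathbf{q}}{\lambda T^{2}} \;\geq\; 0,
```

where tau is the relaxation time of the flux and lambda the thermal conductivity. The stability of a steady state is then assessed through the sign of the second differential, delta^2 s <= 0, for both the classical and the generalized entropy.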
Abstract:
Aim: The imperfect detection of species may lead to erroneous conclusions about species-environment relationships. Accuracy in species detection usually requires temporal replication at sampling sites, a time-consuming and costly monitoring scheme. Here, we applied a lower-cost alternative based on a double-sampling approach to incorporate the reliability of species detection into regression-based species distribution modelling.
Location: Doñana National Park (south-western Spain).
Methods: Using species-specific monthly detection probabilities, we estimated the detection reliability as the probability of having detected the species given the species-specific survey time. Such reliability estimates were used to account explicitly for data uncertainty by weighting each absence. We illustrated how this novel framework can be used to evaluate four competing hypotheses as to what constitutes the primary environmental control of amphibian distribution: breeding habitat, aestivating habitat, spatial distribution of surrounding habitats and/or major ecosystem zonation. The study was conducted on six pond-breeding amphibian species during a 4-year period.
Results: Non-detections should not be considered equivalent to real absences, as their reliability varied considerably. The occurrence of Hyla meridionalis and Triturus pygmaeus was related to a particular major ecosystem of the study area, where suitable habitat for these species seemed to be widely available. Characteristics of the breeding habitat (area and hydroperiod) were of high importance for the occurrence of Pelobates cultripes and Pleurodeles waltl. Terrestrial characteristics were the most important predictors of the occurrence of Discoglossus galganoi and Lissotriton boscai, along with the spatial distribution of breeding habitats for the last species.
Main conclusions: We did not find a single best-supported hypothesis valid for all species, which stresses the importance of multiscale and multifactor approaches.
More importantly, this study shows that estimating the reliability of non-detection records, an exercise that had been previously seen as a naïve goal in species distribution modelling, is feasible and could be promoted in future studies, at least in comparable systems.
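Under the independence assumptions of such a design, the reliability of a non-detection is the probability that the species would have been detected at least once over the accumulated survey time; absences can then be weighted by that reliability. A minimal sketch (the record layout is hypothetical, not the authors' data format):

```python
def detection_reliability(p_monthly, n_months):
    """Probability of at least one detection in n_months of survey,
    given a per-month detection probability p_monthly (independent visits)."""
    return 1.0 - (1.0 - p_monthly) ** n_months

def absence_weights(records, p_monthly):
    """Weight each record for a regression-based distribution model.
    records: list of (detected: bool, n_months: int) tuples (hypothetical).
    Detections keep full weight; non-detections are down-weighted by the
    probability that they represent a true absence."""
    return [1.0 if detected else detection_reliability(p_monthly, n_months)
            for detected, n_months in records]
```

A non-detection after many survey months at a species with high monthly detectability is thus treated as a near-certain absence, while a non-detection after a short survey contributes little to the model fit.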
Abstract:
AIM: To use an animal model to study aqueous dynamics and the histological findings after deep sclerectomy with (DSCI) and without (DS) collagen implant. METHODS: Deep sclerectomy was performed on rabbit eyes, which were randomly assigned to receive collagen implants. Measurements of intraocular pressure (IOP) and aqueous outflow facility, using the constant pressure method through cannulation of the anterior chamber, were performed. The system was filled with BSS and cationised ferritin. Histological assessment of the operative site was performed. Sections were stained with haematoxylin and eosin and with Prussian blue. Aqueous drainage vessels were identified by the reaction between ferritin and Prussian blue. All eyes were coded so that the investigator was blind to the type of surgery until the evaluation was completed. RESULTS: A significant decrease in IOP (p<0.05) was observed during the first 6 weeks after DSCI (mean IOP was 13.07 (2.95) mm Hg preoperatively and 9.08 (2.25) mm Hg at 6 weeks); DS without collagen implant showed a significant decrease in IOP at weeks 4 and 8 after surgery (mean IOP 12.57 (3.52) mm Hg preoperatively, 9.45 (3.38) mm Hg at 4 weeks, and 9.22 (3.39) mm Hg at 8 weeks). Outflow facility was significantly increased throughout the 9 months of follow-up in both the DSCI and DS groups (p<0.05). The preoperative outflow facility (OF) was 0.15 (0.02) µl/min/mm Hg. At 9 months, OF was 0.52 (0.28) µl/min/mm Hg and 0.46 (0.07) µl/min/mm Hg for DSCI and DS, respectively. Light microscopy showed the appearance of new aqueous drainage vessels in the sclera adjacent to the dissection site in DSCI and DS, and the appearance of spindle cells lining the collagen implant in DSCI after 2 months. CONCLUSION: A significant IOP decrease was observed during the first weeks after DSCI and DS. DS with or without collagen implant provided a significant increase in outflow facility throughout the 9 months of follow-up.
This might be partly explained by new drainage vessels in the sclera surrounding the operated site. Microscopic studies revealed the appearance of spindle cells lining the collagen implant in DSCI after 2 months.
Abstract:
Neuroimaging studies typically compare experimental conditions using average brain responses, thereby overlooking the stimulus-related information conveyed by the distributed spatio-temporal patterns of single-trial responses. Here, we take advantage of this rich information at the single-trial level to decode stimulus-related signals in two event-related potential (ERP) studies. Our method models the statistical distribution of the voltage topographies with a Gaussian Mixture Model (GMM), which reduces the dataset to a small number of representative voltage topographies. The degree of presence of these topographies across trials at specific latencies is then used to classify experimental conditions. We tested the algorithm using a cross-validation procedure in two independent EEG datasets. In the first ERP study, we classified left- versus right-hemifield checkerboard stimuli for upper and lower visual hemifields. In the second ERP study, where functional differences cannot be assumed, we classified initial versus repeated presentations of visual objects. With minimal a priori information, the GMM provides neurophysiologically interpretable features (the voltage topographies themselves) as well as dynamic information about brain function. This method can in principle be applied to any ERP dataset to test the functional relevance of specific time periods for stimulus processing, the predictability of subjects' behavior and cognitive states, and the discrimination between healthy and clinical populations.
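The "degree of presence" feature can be illustrated with a minimal stand-in: assign each single-trial topography to its most similar representative map by spatial correlation and count the fraction of trials per map. This replaces the full GMM fit with simple template matching (all structures hypothetical):

```python
import math

def corr(a, b):
    """Pearson correlation between two voltage topographies (electrode vectors)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ca = [x - ma for x in a]
    cb = [x - mb for x in b]
    num = sum(x * y for x, y in zip(ca, cb))
    den = math.sqrt(sum(x * x for x in ca) * sum(y * y for y in cb))
    return num / den if den else 0.0

def best_template(topography, templates):
    """Index of the representative map most similar to a single-trial topography."""
    return max(range(len(templates)), key=lambda k: corr(topography, templates[k]))

def presence(trials, templates):
    """Fraction of trials assigned to each representative map: the
    'degree of presence' used as a classification feature."""
    counts = [0] * len(templates)
    for t in trials:
        counts[best_template(t, templates)] += 1
    return [c / len(trials) for c in counts]
```

Computing these fractions separately per condition and latency window yields the feature vectors that a classifier can then separate.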
Abstract:
Functionally relevant large-scale brain dynamics operate within the framework imposed by anatomical connectivity and the time delays due to finite transmission speeds. To gain insight into the reliability and comparability of large-scale brain network simulations, we investigate the effects of variations in the anatomical connectivity. Two different sets of detailed global connectivity structures are explored, the first extracted from the CoCoMac database and rescaled to the spatial extent of the human brain, the second derived from white-matter tractography applied to diffusion spectrum imaging (DSI) for a human subject. We use the combination of graph theoretical measures of the connection matrices and numerical simulations to explicate the importance of both connectivity strength and delays in shaping dynamic behaviour. Our results demonstrate that the brain dynamics derived from the CoCoMac database are more complex and biologically more realistic than those based on the DSI database. We propose that the reason for this difference is the absence of directed weights in the DSI connectivity matrix.
Abstract:
A new parameter is introduced: the lightning potential index (LPI), which is a measure of the potential for charge generation and separation that leads to lightning flashes in convective thunderstorms. The LPI is calculated within the charge separation region of clouds, between 0°C and −20°C, where the noninductive mechanism involving collisions of ice and graupel particles in the presence of supercooled water is most effective. As shown in several case studies using the Weather Research and Forecasting (WRF) model with explicit microphysics, the LPI is highly correlated with observed lightning. It is suggested that the LPI may be a useful parameter for predicting lightning as well as a tool for improving weather forecasting of convective storms and heavy rainfall.
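Schematically, the idea is to restrict a charging proxy to the 0°C to −20°C layer and let it grow with updraft strength and with the coexistence of supercooled water and ice-phase particles. The sketch below illustrates only that idea; it is not the published LPI formula, and the data layout is hypothetical:

```python
def toy_potential_index(levels):
    """Toy charging-potential index for one model column.
    levels: list of (temp_c, updraft_w, supercooled_q, ice_graupel_q) tuples.
    Only levels inside the 0 to -20 C charge-separation layer with upward
    motion contribute; each contribution grows with the updraft and with the
    smaller of the liquid/ice mixing ratios (both must coexist to charge)."""
    contrib = [w * min(q_liq, q_ice)
               for t, w, q_liq, q_ice in levels
               if -20.0 <= t <= 0.0 and w > 0.0]
    return sum(contrib) / len(contrib) if contrib else 0.0
```

Note how a level with no supercooled water (or no graupel/ice) contributes nothing, mirroring the requirement that the noninductive mechanism needs both phases present.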
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes, caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes), and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure. 
Moreover, the technique presented is not computationally expensive, so it seems well suited for implementation in an operational environment.
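Step 1 of the correction chain, the Dynamic Elevation Map, amounts to picking for each pixel the lowest antenna elevation that is free of clutter or blockage. A minimal sketch with a hypothetical data layout (the real computation derives the contamination flags from radiosonde-based beam trajectories):

```python
def dynamic_elevation_map(contaminated, elevations):
    """For each pixel, return the lowest clutter-free antenna elevation.
    contaminated[e][p] is True when elevation index e is affected by ground
    clutter, anaprop, or blockage at pixel p (illustrative layout).
    elevations: antenna elevation angles in degrees, lowest first."""
    n_pix = len(contaminated[0])
    dem = []
    for p in range(n_pix):
        chosen = None  # None means every elevation is contaminated at this pixel
        for e, elev in enumerate(elevations):
            if not contaminated[e][p]:
                chosen = elev
                break
        dem.append(chosen)
    return dem
```

Reading reflectivity from the mapped elevation rather than a fixed lowest tilt is what removes most ground clutter while keeping the beam as close to the surface as conditions allow.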
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) Gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. 
However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the Gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
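The two derived quantities used throughout this comparison are straightforward to reproduce (sketch; the phantom volumes are those quoted above):

```python
def lvef(edv_ml, esv_ml):
    """LV ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def em_equivalent_iterations(n_iterations, n_subsets):
    """OSEM convergence is commonly summarized by the I x S product,
    the number of expectation maximization-equivalent iterations."""
    return n_iterations * n_subsets

# Phantom ground truth from the study: EDV 112 mL, ESV 37 mL -> LVEF ~67%.
# The recommended setting (10 I, 8 S) corresponds to 80 EM-equivalent iterations.
```

Plotting the measured volumes and LVEF against em_equivalent_iterations reproduces the bell-shaped and reversed curves the abstract describes.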
Abstract:
L'utilisation efficace des systèmes géothermaux, la séquestration du CO2 pour limiter le changement climatique et la prévention de l'intrusion d'eau salée dans les aquifères costaux ne sont que quelques exemples qui démontrent notre besoin en technologies nouvelles pour suivre l'évolution des processus souterrains à partir de la surface. Un défi majeur est d'assurer la caractérisation et l'optimisation des performances de ces technologies à différentes échelles spatiales et temporelles. Les méthodes électromagnétiques (EM) d'ondes planes sont sensibles à la conductivité électrique du sous-sol et, par conséquent, à la conductivité électrique des fluides saturant la roche, à la présence de fractures connectées, à la température et aux matériaux géologiques. Ces méthodes sont régies par des équations valides sur de larges gammes de fréquences, permettant détudier de manières analogues des processus allant de quelques mètres sous la surface jusqu'à plusieurs kilomètres de profondeur. Néanmoins, ces méthodes sont soumises à une perte de résolution avec la profondeur à cause des propriétés diffusives du champ électromagnétique. Pour cette raison, l'estimation des modèles du sous-sol par ces méthodes doit prendre en compte des informations a priori afin de contraindre les modèles autant que possible et de permettre la quantification des incertitudes de ces modèles de façon appropriée. Dans la présente thèse, je développe des approches permettant la caractérisation statique et dynamique du sous-sol à l'aide d'ondes EM planes. Dans une première partie, je présente une approche déterministe permettant de réaliser des inversions répétées dans le temps (time-lapse) de données d'ondes EM planes en deux dimensions. Cette stratégie est basée sur l'incorporation dans l'algorithme d'informations a priori en fonction des changements du modèle de conductivité électrique attendus. 
Ceci est réalisé en intégrant une régularisation stochastique et des contraintes flexibles par rapport à la gamme des changements attendus en utilisant les multiplicateurs de Lagrange. J'utilise des normes différentes de la norme l2 pour contraindre la structure du modèle et obtenir des transitions abruptes entre les régions du model qui subissent des changements dans le temps et celles qui n'en subissent pas. Aussi, j'incorpore une stratégie afin d'éliminer les erreurs systématiques de données time-lapse. Ce travail a mis en évidence l'amélioration de la caractérisation des changements temporels par rapport aux approches classiques qui réalisent des inversions indépendantes à chaque pas de temps et comparent les modèles. Dans la seconde partie de cette thèse, j'adopte un formalisme bayésien et je teste la possibilité de quantifier les incertitudes sur les paramètres du modèle dans l'inversion d'ondes EM planes. Pour ce faire, je présente une stratégie d'inversion probabiliste basée sur des pixels à deux dimensions pour des inversions de données d'ondes EM planes et de tomographies de résistivité électrique (ERT) séparées et jointes. Je compare les incertitudes des paramètres du modèle en considérant différents types d'information a priori sur la structure du modèle et différentes fonctions de vraisemblance pour décrire les erreurs sur les données. Les résultats indiquent que la régularisation du modèle est nécessaire lorsqu'on a à faire à un large nombre de paramètres car cela permet d'accélérer la convergence des chaînes et d'obtenir des modèles plus réalistes. Cependent, ces contraintes mènent à des incertitudes d'estimations plus faibles, ce qui implique des distributions a posteriori qui ne contiennent pas le vrai modèledans les régions ou` la méthode présente une sensibilité limitée. Cette situation peut être améliorée en combinant des méthodes d'ondes EM planes avec d'autres méthodes complémentaires telles que l'ERT. 
De plus, je montre que le poids de régularisation des paramètres et l'écart-type des erreurs sur les données peuvent être retrouvés par une inversion probabiliste. Finalement, j'évalue la possibilité de caractériser une distribution tridimensionnelle d'un panache de traceur salin injecté dans le sous-sol en réalisant une inversion probabiliste time-lapse tridimensionnelle d'ondes EM planes. Etant donné que les inversions probabilistes sont très coûteuses en temps de calcul lorsque l'espace des paramètres présente une grande dimension, je propose une stratégie de réduction du modèle ou` les coefficients de décomposition des moments de Legendre du panache de traceur injecté ainsi que sa position sont estimés. Pour ce faire, un modèle de résistivité de base est nécessaire. Il peut être obtenu avant l'expérience time-lapse. Un test synthétique montre que la méthodologie marche bien quand le modèle de résistivité de base est caractérisé correctement. Cette méthodologie est aussi appliquée à un test de trac¸age par injection d'une solution saline et d'acides réalisé dans un système géothermal en Australie, puis comparée à une inversion time-lapse tridimensionnelle réalisée selon une approche déterministe. L'inversion probabiliste permet de mieux contraindre le panache du traceur salin gr^ace à la grande quantité d'informations a priori incluse dans l'algorithme. Néanmoins, les changements de conductivités nécessaires pour expliquer les changements observés dans les données sont plus grands que ce qu'expliquent notre connaissance actuelle des phénomenès physiques. Ce problème peut être lié à la qualité limitée du modèle de résistivité de base utilisé, indiquant ainsi que des efforts plus grands devront être fournis dans le futur pour obtenir des modèles de base de bonne qualité avant de réaliser des expériences dynamiques. 
The studies described in this thesis show that plane-wave EM methods are very useful for characterizing and monitoring temporal variations of the subsurface at large scales. The present approaches improve the appraisal of the obtained models, both in terms of the incorporation of prior information and of posterior uncertainty quantification. Moreover, the developed strategies can be applied to other geophysical methods, and offer great flexibility for incorporating additional information when available. -- The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, making it possible to study, in an analogous manner, processes on scales ranging from a few meters near the surface down to several hundred kilometers of depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimates of subsurface models obtained with these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
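The abstract describes reducing the plume model to the coefficients of a low-order Legendre moment decomposition so that MCMC sampling stays tractable. The thesis does not give the implementation; as a minimal sketch under stated assumptions (a 1-D conductivity-change profile standing in for the 3-D plume, illustrative grid and anomaly shape), the reduction looks like this:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical 1-D slice of a conductivity-change anomaly (the "plume"),
# sampled on a grid mapped to [-1, 1]; all numbers are illustrative.
x = np.linspace(-1.0, 1.0, 201)
plume = np.exp(-((x - 0.2) / 0.3) ** 2)  # smooth, localized anomaly

# Model reduction: keep only the first few Legendre coefficients.
order = 8
coeffs = L.legfit(x, plume, order)   # least-squares Legendre fit
reduced = L.legval(x, coeffs)        # reconstruction from order+1 numbers

# A 9-parameter description replaces 201 grid values, which is what makes
# probabilistic (MCMC) sampling of the reduced model affordable.
rel_err = np.linalg.norm(reduced - plume) / np.linalg.norm(plume)
print(order + 1, "coefficients, relative misfit:", round(rel_err, 3))
```

In the thesis's setting the sampler would propose moves on `coeffs` (plus the plume position) rather than on per-cell conductivities, and each proposal would be mapped back to a resistivity model before the forward simulation.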
Resumo:
This study investigated the spatial, spectral, temporal and functional properties of functional brain connections involved in the concurrent execution of unrelated visual perception and working memory tasks. Electroencephalography data were analysed using a novel data-driven approach assessing source coherence at the whole-brain level. Three connections in the beta-band (18-24 Hz) and one in the gamma-band (30-40 Hz) were modulated by dual-task performance. Beta-coherence increased within two dorsofrontal-occipital connections in dual-task conditions compared to the single-task condition, with the highest coherence seen during low working memory load trials. In contrast, beta-coherence in a prefrontal-occipital functional connection and gamma-coherence in an inferior frontal-occipitoparietal connection were not affected by the addition of the second task and only showed elevated coherence under high working memory load. Analysis of coherence as a function of time suggested that the dorsofrontal-occipital beta-connections were relevant to working memory maintenance, while the prefrontal-occipital beta-connection and the inferior frontal-occipitoparietal gamma-connection were involved in top-down control of concurrent visual processing. This interpretation is supported by the finding that the increase in gamma-connection coherence from low to high working memory load was negatively correlated with reaction time on the perception task, i.e., associated with faster responses. Together, these results demonstrate that dual-task demands trigger non-linear changes in functional interactions between frontal-executive and occipitoparietal-perceptual cortices.
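The study's central quantity is band-averaged coherence between pairs of sources. The abstract does not describe the pipeline's internals; as a hedged illustration (synthetic signals in place of reconstructed frontal and occipital sources, illustrative sampling rate and noise levels), band-limited coherence can be estimated like this:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 20, 1 / fs)    # 20 s of data

# Two synthetic signals sharing a 21 Hz (beta-band) component, standing in
# for activity at the two ends of a frontal-occipital connection.
shared = np.sin(2 * np.pi * 21 * t)
sig_frontal = shared + rng.standard_normal(t.size)
sig_occipital = shared + rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence across frequencies.
f, cxy = coherence(sig_frontal, sig_occipital, fs=fs, nperseg=512)

# Average coherence inside the beta band (18-24 Hz), as in the study.
beta = (f >= 18) & (f <= 24)
beta_coh = cxy[beta].mean()
print("mean beta-band coherence:", round(beta_coh, 2))
```

Comparing such band averages across single-task, dual-task, and load conditions (and across time windows) is the kind of contrast the abstract reports; the actual study did this at the whole-brain source level.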
Resumo:
The method of stochastic dynamic programming is widely used in behavioral ecology, but it has shortcomings that stem from its use of finite time horizons. The authors present an alternative approach based on the methods of renewal theory. The suggested method uses the cumulative energy reserve per unit time as its criterion, which leads to stationary cycles in the state space. This approach allows optimal feeding to be studied by analytic methods.
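The abstract does not give the authors' model, but the renewal-theory criterion it names (long-run energy gain per unit time over a stationary cycle) is the same rate-maximization idea as in the classical marginal value theorem. As a minimal sketch with illustrative numbers (travel time, gain curve, and parameter values are all assumptions), the stationary-cycle optimum can be found numerically:

```python
import numpy as np

# Renewal-reward sketch: each foraging cycle = travel time tau plus patch
# residence time t, with diminishing cumulative gain g(t) in the patch.
# The long-run rate of energy intake equals the per-cycle rate
# g(t) / (tau + t), and the stationary optimal strategy maximizes it.
tau = 2.0                                       # travel time (illustrative)
g = lambda t: 10.0 * (1.0 - np.exp(-0.5 * t))   # saturating gain curve

t = np.linspace(0.01, 20.0, 2000)
rate = g(t) / (tau + t)           # energy per unit time over one full cycle
t_opt = t[np.argmax(rate)]
print("optimal patch residence time:", round(t_opt, 2))
```

At the optimum the marginal gain g'(t) equals the achieved long-run rate, which is why the solution is a repeating (stationary) cycle rather than a horizon-dependent policy as in finite-horizon dynamic programming.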