24 results for: preconditioning convection-diffusion equation matrix equation

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Abridged version: Cerebral ischemia is the third leading cause of death in developed countries and the disease responsible for the most serious neurological disabilities. Understanding the molecular and anatomical bases of functional recovery after cerebral ischemia is therefore extremely important and represents a crucial field of interest for basic and clinical research. Over the last two decades, researchers have tried to counteract the harmful effects of cerebral ischemia with exogenous substances which, although tested successfully in experimental settings, have shown contradictory effects in clinical application. A different but complementary approach is to stimulate intrinsic neuroprotective mechanisms using the "preconditioning model": a brief insult protects against more severe ischemic episodes by stimulating endogenous signaling pathways that increase resistance to ischemia. This approach can provide important elements for clarifying endogenous mechanisms of neuroprotection and yield new strategies for making neurons and glia more resistant to ischemic stroke. We therefore first studied the intrinsic neuroprotective mechanisms stimulated by thrombin, a "preconditioning" neuroprotectant shown in in vitro and in vivo experimental models to reduce neuronal death. Using a microsurgical technique to induce transient cerebral ischemia in mice, we showed, through a molecular approach and the in vivo analysis of a specific JNK inhibitor (L-JNK), that thrombin can stimulate the intracellular signaling pathways mediated by MAPK and JNK. We also studied the impact of thrombin on functional recovery after stroke and were able to demonstrate that these molecular mechanisms can improve motor recovery. The second part of this study of recovery mechanisms after cerebral ischemia is based on the investigation of the anatomical bases of the plasticity of cerebral connections, both in the animal model of transient ischemia and in humans. From results previously published by various groups, we know that plasticity mechanisms leading to varying degrees of functional recovery are engaged after an ischemic lesion. The result of this reorganization is a new functional and structural architecture, which varies from one individual to another according to the anatomy of the lesion, the age of the subject and the chronicity of the lesion. The success of any therapeutic intervention will therefore depend on its interaction with this new anatomical architecture. For this reason, we applied two diffusion magnetic resonance techniques that can detect changes in cerebral microstructure and anatomical connections after stroke: diffusion tensor MRI (DT-MRI) and diffusion spectrum MRI (DS-MRI). Using DT-MRI, we performed a long-term follow-up study in mice subjected to transient cerebral ischemia, which showed that microstructural changes in the infarct, as well as the remodeling of anatomical pathways, correlate with functional recovery. In addition, we observed axonal reorganization in areas showing increased expression of a plasticity protein expressed in axonal growth cones (GAP-43). Applying the same technique, we also carried out two studies, one retrospective and one prospective, which showed how parameters obtained with DT-MRI can monitor the speed of recovery and reveal structural changes in the pathways involved in the clinical manifestations. In the last part of this work, we describe how DS-MRI can be applied in experimental and clinical settings to study cerebral plasticity after ischemia.

Abstract: Ischemic stroke is the third leading cause of death in developed countries and the disease responsible for the most serious long-term neurological disability. Understanding the molecular and anatomical basis of stroke recovery is therefore extremely important and represents a major field of interest for basic and clinical research. Over the past two decades, much attention has focused on counteracting the noxious effects of the ischemic insult with exogenous substances (oxygen radical scavengers, AMPA and NMDA receptor antagonists, MMP inhibitors, etc.) that were successfully tested in the experimental field but turned out to have controversial effects in clinical trials. A different but complementary approach to address ischemia pathophysiology and treatment options is to stimulate and investigate intrinsic mechanisms of neuroprotection using the "preconditioning effect": applying a brief insult protects against subsequent prolonged and detrimental ischemic episodes by up-regulating powerful endogenous pathways that increase resistance to injury. We believe that this approach might offer important insight into the molecular mechanisms responsible for endogenous neuroprotection. In addition, results from preconditioning experiments may provide new strategies for making brain cells "naturally" more resistant to ischemic injury and for accelerating their rate of functional recovery. In the first part of this work, we investigated downstream mechanisms of neuroprotection induced by thrombin, a well-known neuroprotectant which has been demonstrated to reduce stroke-induced cell death in in vitro and in vivo experimental models. Using microsurgery to induce transient brain ischemia in mice, we showed, through a molecular biology approach and an in vivo analysis of a specific kinase inhibitor (L-JNK1), that thrombin can stimulate both the MAPK and JNK intracellular pathways. We also studied thrombin's impact on functional recovery, demonstrating that these molecular mechanisms could enhance post-stroke motor outcome. The second part of this study is based on investigating the anatomical basis underlying connectivity remodeling, which leads to functional improvement after stroke. To do this, we used both a mouse model of experimental ischemia and human subjects with stroke. It is known from previously published data that the brain adapts to damage in a way that attempts to preserve motor function. The result of this reorganization is a new functional and structural architecture, which will vary from patient to patient depending on the anatomy of the damage, the biological age of the patient and the chronicity of the lesion. The success of any given therapeutic intervention will depend on how well it interacts with this new architecture.
For this reason, we applied diffusion magnetic resonance techniques able to detect microstructural and connectivity changes following an ischemic lesion: diffusion tensor MRI (DT-MRI) and diffusion spectrum MRI (DS-MRI). Using DT-MRI, we performed a long-term follow-up study of stroke mice, which showed that diffusion changes in the stroke region and fiber tract remodeling correlate with stroke recovery. In addition, axonal reorganization was shown in areas of increased expression of a plasticity-related protein (GAP-43, a growth-cone-associated protein). Applying the same technique, we then performed a retrospective and a prospective study in humans, demonstrating how specific DTI parameters can help monitor the speed of recovery and reveal longitudinal changes in damaged tracts involved in clinical symptoms. Finally, in the last part of this study, we showed how DS-MRI can be applied to both experimental and human stroke and which perspectives it opens for further investigating post-stroke plasticity.
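The abstract refers to "specific DTI parameters" without naming them; fractional anisotropy (FA) and mean diffusivity (MD) are the scalars most commonly derived from the diffusion tensor in such studies. Below is a minimal sketch of how they are computed from the tensor's eigenvalues; the example tensor is invented and is not data from this work.

```python
# Illustrative only: FA and MD for a 3x3 symmetric diffusion tensor (units: mm^2/s).
import numpy as np

def dti_scalars(D):
    """Return (FA, MD) for a 3x3 symmetric diffusion tensor D."""
    ev = np.linalg.eigvalsh(D)                 # eigenvalues of the tensor
    md = ev.mean()                             # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return fa, md

# Example: an anisotropic tensor with values typical of white matter
D = np.diag([1.7e-3, 0.4e-3, 0.3e-3])
print(dti_scalars(D))                          # FA ~ 0.76, MD = 0.8e-3 mm^2/s
```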

Relevance:

100.00%

Publisher:

Abstract:

An epidemic model is formulated by a reaction–diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation, two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
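As a rough illustration of the kind of system described (not the paper's finite volume schemes, nonlinear constitutive laws, or adaptive multiresolution strategy), the sketch below integrates a susceptible–infected reaction–diffusion model with constant self- and cross-diffusion coefficients by explicit finite differences on a periodic grid; all parameter values are invented.

```python
# Minimal sketch: explicit finite differences for an SIS-type reaction-diffusion
# system with constant self- and cross-diffusion, on a periodic 2D grid.
import numpy as np

n, h, dt = 128, 1.0, 0.01
beta, gamma = 1.0, 0.3                     # infection and recovery rates
dS, dI, dSI, dIS = 1.0, 0.5, 0.2, 0.1      # self- and cross-diffusion coefficients

rng = np.random.default_rng(1)
S = np.ones((n, n))
I = 0.01 * rng.random((n, n))              # small random infected seed

def lap(u):                                # 5-point Laplacian, periodic boundaries
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2

for _ in range(5000):
    f = beta * S * I                       # local infection term
    S += dt * (-f + gamma * I + dS * lap(S) + dSI * lap(I))
    I += dt * ( f - gamma * I + dI * lap(I) + dIS * lap(S))
```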

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new type of very fine grid hydrological model based on the spatiotemporal distribution of a PMP (Probable Maximum Precipitation) and on the topography. The goal is to estimate the influence of this rain on a PMF (Probable Maximum Flood) for a catchment area in Switzerland. The spatiotemporal distribution of the PMP was generated using six clouds modeled by the advection-diffusion equation, which describes the movement of the clouds over the terrain and gives the evolution of the rain intensity in time. This hydrological modeling is followed by hydraulic modeling of the surface and subterranean flows, taking into account the factors that contribute to the hydrological cycle, such as infiltration, resurgence and snowmelt. These added factors bring the model closer to reality and also offer flexibility in the initial conditions attached to the PMP, such as the duration of the rain and the speed and direction of the wind. Taken together, these initial conditions provide a complete picture of the PMF.
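A minimal sketch of the advection-diffusion mechanism used to move the rain clouds over the terrain, reduced here to one dimension with a first-order upwind scheme on a periodic domain; the wind speed, diffusivity, grid and initial cloud shape are illustrative assumptions, not values from the study.

```python
# dc/dt + u dc/dx = D d2c/dx2, solved explicitly; c is a rain-intensity proxy.
import numpy as np

nx, dx, dt = 400, 100.0, 5.0                # 40 km domain, 100 m cells, 5 s steps
u, D = 5.0, 50.0                            # wind speed [m/s], diffusivity [m^2/s]
x = np.arange(nx) * dx
c = np.exp(-((x - 5000.0) / 1000.0) ** 2)   # initial Gaussian cloud

for _ in range(1000):
    adv = -u * (c - np.roll(c, 1)) / dx                          # upwind advection (u > 0)
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # diffusion
    c = c + dt * (adv + dif)
# after 1000 steps (~1.4 h) the cloud has moved ~25 km downwind and spread out
```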

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a very fine grid hydrological model based on the spatiotemporal distribution of precipitation and on the topography. The goal is to estimate the flood on a catchment area, using a Probable Maximum Precipitation (PMP) leading to a Probable Maximum Flood (PMF). The spatiotemporal distribution of the precipitation was generated using six clouds modeled by the advection-diffusion equation, which describes the movement of the clouds over the terrain and gives the evolution of the rain intensity in time. This hydrological modeling is followed by hydraulic modeling of the surface and subterranean flows, taking into account the factors that contribute to the hydrological cycle, such as infiltration, exfiltration and snowmelt. The model was applied to several Swiss basins using measured rain, and the results show a good correlation between the simulated and observed flows. This good correlation supports the validity of the model and gives us confidence that the results can be extrapolated to extreme rainfall phenomena of the PMP type. In this article we present some results obtained using a PMP rainfall and the developed model.
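The agreement between simulated and observed flows mentioned above is typically quantified with a correlation coefficient or the Nash-Sutcliffe efficiency; the paper does not specify its metric beyond "correlation", so the discharge series below are invented purely to illustrate the computation.

```python
# Two standard agreement measures for simulated vs observed discharge (made-up data).
import numpy as np

obs = np.array([12.0, 15.0, 30.0, 55.0, 42.0, 28.0, 20.0, 16.0])   # observed, m^3/s
sim = np.array([11.0, 16.0, 27.0, 58.0, 45.0, 26.0, 21.0, 15.0])   # simulated, m^3/s

r = np.corrcoef(obs, sim)[0, 1]                                    # Pearson correlation
nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe
print(f"r = {r:.3f}, NSE = {nse:.3f}")
```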

Relevance:

40.00%

Publisher:

Abstract:

The empirical literature on the efficiency of measures for reducing persistent government deficits has mainly focused on explaining the deficit directly. By contrast, this paper models government revenue and expenditure within a simultaneous framework and derives the fiscal balance (surplus or deficit) equation as the difference between the two variables. This setting makes it possible not only to judge how relevant the explanatory variables are in explaining the fiscal balance, but also to understand their impact on revenue and/or expenditure. Our empirical results, obtained using a panel data set on Swiss cantons for the period 1980-2002, confirm the relevance of this approach by providing unambiguous evidence of a simultaneous relationship between revenue and expenditure. They also reveal strong dynamic components in revenue, expenditure, and fiscal balance. Among the significant determinants of the public fiscal balance we find not only the usual business cycle elements, but also, and more importantly, institutional factors such as the number of administrative units and the ease with which people can resort to direct democracy instruments such as popular initiatives and referendums.
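To make the simultaneous setting concrete, here is a hedged sketch (not the paper's specification or data) in which revenue and expenditure each depend on the other, each equation is estimated by two-stage least squares using the other equation's exogenous variable as an instrument, and the fiscal balance is then derived as the difference.

```python
# Illustrative simultaneous-equations sketch with simulated data and 2SLS in numpy.
import numpy as np

rng = np.random.default_rng(0)
n = 500
cycle = rng.normal(size=n)         # business-cycle indicator (exogenous)
units = rng.normal(size=n)         # number of administrative units (exogenous)
e1, e2 = rng.normal(size=n), rng.normal(size=n)

# "true" structural system: rev = a1*exp + b1*cycle + e1 ; exp = a2*rev + c2*units + e2
a1, b1, a2, c2 = 0.5, 0.8, 0.4, 0.6
exp_ = (a2 * b1 * cycle + c2 * units + a2 * e1 + e2) / (1.0 - a1 * a2)   # reduced form
rev = a1 * exp_ + b1 * cycle + e1

def tsls(y, X, Z):
    """2SLS: project the regressors on the instruments, then run OLS on the fitted values."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]

one = np.ones(n)
# revenue equation: expenditure is endogenous, instrumented by 'units'
beta_rev = tsls(rev, np.column_stack([exp_, cycle, one]), np.column_stack([units, cycle, one]))
# expenditure equation: revenue is endogenous, instrumented by 'cycle'
beta_exp = tsls(exp_, np.column_stack([rev, units, one]), np.column_stack([cycle, units, one]))
balance = rev - exp_               # fiscal balance derived from the two equations
print(beta_rev, beta_exp)          # roughly (0.5, 0.8, 0) and (0.4, 0.6, 0)
```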

Relevance:

40.00%

Publisher:

Abstract:

The MDRD (Modification of Diet in Renal Disease) equation enables glomerular filtration rate (GFR) estimation from serum creatinine alone. The laboratory can therefore report an estimated GFR (eGFR) with each serum creatinine assessment, thereby increasing the recognition of renal failure. The predictive performance of the MDRD equation is better for GFR < 60 ml/min/1.73 m². A normal or near-normal renal function is often underestimated by this equation. Overall, MDRD provides more reliable estimations of renal function than the Cockcroft-Gault (C-G) formula, but both lack precision. MDRD is not superior to C-G for drug dosing. Because it is indexed to 1.73 m², the MDRD eGFR has to be back-adjusted to the patient's actual body surface area for drug dosing. In addition, C-G has the advantage of greater simplicity and longer-standing use.
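For reference, a sketch of the commonly published 4-variable (IDMS-traceable) MDRD equation, the Cockcroft-Gault formula and the de-indexing step mentioned above; the coefficients are the standard published ones and are not taken from this article.

```python
# Commonly cited forms; serum creatinine in mg/dl.
def mdrd_egfr(scr_mg_dl, age, female, black=False):
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203   # ml/min/1.73 m^2
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)  # ml/min (not BSA-indexed)
    return crcl * 0.85 if female else crcl

def dubois_bsa(height_cm, weight_kg):
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425   # body surface area, m^2

def deindex_egfr(egfr_indexed, bsa_m2):
    return egfr_indexed * bsa_m2 / 1.73        # absolute ml/min, as used for drug dosing

# Example: 70-year-old woman, creatinine 1.2 mg/dl, 60 kg, 160 cm
egfr = mdrd_egfr(1.2, 70, female=True)
print(egfr, cockcroft_gault(1.2, 70, 60, True), deindex_egfr(egfr, dubois_bsa(160, 60)))
```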

Relevance:

40.00%

Publisher:

Abstract:

The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), with a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor represented the structure of the WISC-IV better than the 4-factor structure and the higher order models did. Because the direct hierarchical CHC model was more adequate, we concluded that the general factor should be considered a breadth factor rather than a superordinate factor. Because we could estimate the influence of each of the latent variables on the 15 subtest scores, BSEM improved both the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
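A minimal sketch of the BSEM ingredient described above, i.e. replacing exactly-zero cross-loadings with approximate zeros via small-variance priors, written with PyMC on simulated data; the two-factor toy model, priors and data are illustrative and are not the authors' WISC-IV model.

```python
# Toy BSEM-style confirmatory factor model: 6 indicators, 2 factors, cross-loadings
# shrunk toward zero by a small-variance ("approximate zero") prior.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
N, P, F = 249, 6, 2
true_lam = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.2],   # item 3 truly cross-loads
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])
eta_true = rng.normal(size=(N, F))
Y = eta_true @ true_lam.T + rng.normal(scale=0.5, size=(N, P))

main_idx = np.array([0, 0, 0, 1, 1, 1])        # factor each item "belongs" to
cross_idx = 1 - main_idx

with pm.Model() as bsem:
    eta = pm.Normal("eta", 0.0, 1.0, shape=(N, F))           # standardized factor scores
    lam_main = pm.HalfNormal("lam_main", 1.0, shape=P)       # target loadings (positive)
    lam_cross = pm.Normal("lam_cross", 0.0, 0.1, shape=P)    # approximate-zero cross-loadings
    sigma = pm.HalfNormal("sigma", 1.0, shape=P)
    mu = eta[:, main_idx] * lam_main + eta[:, cross_idx] * lam_cross
    pm.Normal("obs", mu, sigma, observed=Y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9, chains=2)
```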

Relevance:

40.00%

Publisher:

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with estimation problems of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps, and it therefore became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered in the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: which jump process should be used to model the returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any other ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
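To illustrate the characteristic-function matching principle behind the Continuous ECF estimator, in a deliberately simplified setting (a constant-volatility jump-diffusion with i.i.d. increments and a known closed-form characteristic function, rather than the joint unconditional characteristic function of the stochastic volatility models treated in the thesis), the sketch below fits parameters by minimizing a weighted integrated squared distance between the empirical and model characteristic functions; all parameter values, the grid and the weight function are illustrative.

```python
# ECF-style estimation for a Merton-type jump-diffusion with normal jump sizes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dt, n = 1.0 / 252.0, 5000
mu, sigma, lam, mu_j, sig_j = 0.05, 0.2, 10.0, -0.02, 0.03     # "true" parameters

# simulate daily log-returns: drift + diffusion + compound Poisson normal jumps
jumps = rng.poisson(lam * dt, n)
x = (mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
     + mu_j * jumps + sig_j * np.sqrt(jumps) * rng.normal(size=n))

u = np.linspace(-200.0, 200.0, 201)               # grid for the CF argument
w = np.exp(-(u / 120.0) ** 2)                     # weight function
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)    # empirical characteristic function

def model_cf(u, th):
    mu, sigma, lam, mu_j, sig_j = th
    jump = np.exp(1j * u * mu_j - 0.5 * sig_j ** 2 * u ** 2) - 1.0
    return np.exp(1j * u * mu * dt - 0.5 * sigma ** 2 * u ** 2 * dt + lam * dt * jump)

def objective(th):
    return np.sum(w * np.abs(ecf - model_cf(u, th)) ** 2)

res = minimize(objective, x0=[0.0, 0.15, 5.0, 0.0, 0.05], method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-10})
print(res.x)      # estimates should move toward the true values (mu is the hardest to pin down)
```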

Relevance:

40.00%

Publisher:

Abstract:

Lutetium zoning in garnet within eclogites from the Zermatt-Saas Fee zone, Western Alps, reveals sharp, exponentially decreasing central peaks, which can be used to constrain the maximum extent of Lu volume diffusion in garnet. A prograde garnet growth temperature interval of 450-600 °C has been estimated based on pseudosection calculations and garnet-clinopyroxene thermometry. The maximum pre-exponential diffusion coefficient that fits the measured central peak is on the order of D₀ = 5.7 × 10⁻⁶ m²/s, taking an estimated activation energy of 270 kJ/mol based on diffusion experiments for other rare earth elements in garnet. This corresponds to a maximum diffusion rate of D(600 °C) = 4.0 × 10⁻²² m²/s. The diffusion estimate for Lu can be used to estimate the minimum closure temperature, Tc, for Sm-Nd and Lu-Hf age data that have been obtained in eclogites of the Western Alps, postulating, based on a literature review, that D(Hf) < D(Nd) < D(Sm) ≤ D(Lu). Tc calculations using the Dodson equation yielded minimum closure temperatures of about 630 °C, assuming a rapid initial cooling rate of 50 °C/m.y. and an average garnet crystal size of r = 1 mm. This suggests that differences between Sm/Nd and Lu/Hf isochron ages in eclogites from the Western Alps, where peak temperatures rarely exceeded 600 °C, must be interpreted in terms of prograde metamorphism.
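A quick numerical check of the quoted values (a sketch, not the authors' calculation): the Arrhenius extrapolation of D to 600 °C and a fixed-point solution of the Dodson closure-temperature equation for a spherical garnet of radius 1 mm, using the standard spherical geometry factor A = 55, which the abstract does not state explicitly.

```python
import numpy as np

R = 8.314                      # gas constant, J/mol/K
D0, Ea = 5.7e-6, 270e3         # m^2/s, J/mol (values quoted in the abstract)

def D(T_kelvin):               # Arrhenius law D(T) = D0 * exp(-Ea / (R T))
    return D0 * np.exp(-Ea / (R * T_kelvin))

print(D(873.15))               # ~4e-22 m^2/s at 600 °C, as stated

# Dodson: Tc = Ea / (R * ln(A * tau * D0 / a^2)), with tau = R * Tc^2 / (Ea * dT/dt)
A, a = 55.0, 1e-3                              # sphere geometry factor, radius [m]
dTdt = 50.0 / (1e6 * 365.25 * 24 * 3600)       # cooling rate: 50 °C per m.y., in K/s

Tc = 900.0                                     # initial guess [K]
for _ in range(50):                            # fixed-point iteration converges quickly
    tau = R * Tc ** 2 / (Ea * dTdt)
    Tc = Ea / (R * np.log(A * tau * D0 / a ** 2))

print(Tc - 273.15)             # ~625-630 °C, consistent with the quoted minimum Tc
```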