984 results for "coupled reaction diffusion equation"
Abstract:
Starting from the radiative transfer equation, we obtain an analytical solution for both the free propagator along one of the axes and an arbitrary phase function in the Fourier-Laplace domain. We also find the effective absorption parameter, which turns out to be very different from the one provided by the diffusion approximation. We finally present an analytical approximation procedure and obtain a differential equation that accurately reproduces the transport process. We test our approximations by means of simulations that use the Henyey-Greenstein phase function with very satisfactory results.
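Since the abstract tests its approximations against the Henyey-Greenstein phase function, a minimal sketch of how that function is commonly sampled may be helpful. This is the standard inverse-CDF formula, not the authors' code, and the anisotropy value g = 0.7 is an arbitrary choice for illustration:

```python
import random

def sample_hg_cos_theta(g, u):
    """Inverse-CDF sample of cos(theta) for the Henyey-Greenstein phase function."""
    if abs(g) < 1e-8:
        return 2.0 * u - 1.0                  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    mu = (1.0 + g * g - s * s) / (2.0 * g)
    return max(-1.0, min(1.0, mu))            # guard against floating-point rounding

random.seed(0)
g = 0.7                                       # anisotropy parameter (assumed value)
samples = [sample_hg_cos_theta(g, random.random()) for _ in range(200_000)]
mean_cos = sum(samples) / len(samples)        # for HG, <cos(theta)> equals g
```

A quick sanity check on such a sampler is that the empirical mean cosine of the scattering angle reproduces g.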
Abstract:
All derivations of the one-dimensional telegrapher's equation based on the persistent random walk model assume a constant speed of signal propagation. We generalize the model here to allow for a variable propagation speed and study several limiting cases in detail. We also show the connections of this model with anomalous diffusion behavior and with inertial dichotomous processes.
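The constant-speed persistent random walk behind the telegrapher's equation is easy to simulate directly. The sketch below is an illustration with assumed parameters, not the paper's generalized variable-speed model: it checks that the long-time mean-squared displacement approaches the diffusive law 2Dt with D = v^2/(2a), where a is the direction-flip rate:

```python
import random

def telegraph_position(t_final, v, flip_rate, rng):
    """Final position of a particle moving at speed v that reverses direction
    at exponentially distributed times with the given flip rate."""
    x, t = 0.0, 0.0
    direction = rng.choice((-1.0, 1.0))
    while True:
        tau = rng.expovariate(flip_rate)
        if t + tau >= t_final:
            return x + direction * v * (t_final - t)
        x += direction * v * tau
        t += tau
        direction = -direction

rng = random.Random(42)
v, a, t_final, n = 1.0, 1.0, 50.0, 8000       # assumed parameter values
msd = sum(telegraph_position(t_final, v, a, rng) ** 2 for _ in range(n)) / n
D = v * v / (2.0 * a)                         # long-time effective diffusion coefficient
```

At short times the motion is ballistic; only for t much larger than 1/a does the 2Dt law take over, which is the regime probed here.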
Abstract:
A simple model of the diffusion of innovations in a social network with upgrading costs is introduced. Agents are characterized by a single real variable, their technological level. According to local information, agents decide whether or not to upgrade their level, balancing their possible benefit against the upgrading cost. A critical point at which technological avalanches display power-law behavior is also found. This critical point is characterized by a macroscopic observable that turns out to optimize technological growth in the stationary state. Analytical results supporting our findings are obtained for the globally coupled case.
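The abstract does not spell out the precise update rule, so the following toy version makes assumptions of its own: agents sit on a ring, and an agent copies its best neighbor's technological level whenever the gain exceeds a fixed upgrading cost C. A single innovation then triggers an avalanche of adoptions:

```python
import random

random.seed(3)
N, C = 100, 0.5                        # number of agents and upgrading cost (assumed)
levels = [random.random() for _ in range(N)]

def relax(levels, cost):
    """Sweep until no agent gains more than `cost` by copying its best neighbor.
    Returns the total number of upgrades performed (the avalanche size)."""
    n, size, changed = len(levels), 0, True
    while changed:
        changed = False
        for i in range(n):
            best = max(levels[i - 1], levels[(i + 1) % n])
            if best - levels[i] > cost:   # benefit outweighs the upgrading cost
                levels[i] = best
                size += 1
                changed = True
    return size

relax(levels, C)                          # settle the random initial condition
levels[0] += 2.0                          # one agent innovates
avalanche = relax(levels, C)              # adoption cascade triggered by the seed
```

With a large enough seed advance, the innovation sweeps the whole ring, so the avalanche size equals N - 1 and all agents end at the same level.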
Abstract:
We study the motion of a particle governed by a generalized Langevin equation. We show that, when no fluctuation-dissipation relation holds, the long-time behavior of the particle may range from stationary to superdiffusive, passing through the subdiffusive and diffusive regimes. When the random force is Gaussian, we derive the exact equations for the joint and marginal probability density functions of the position and velocity of the particle and find their solutions.
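As a point of reference for the fluctuation-dissipation discussion, here is a minimal sketch (an ordinary memoryless Langevin equation with assumed parameters, not the generalized equation studied in the paper): when the noise strength is tied to the damping by the fluctuation-dissipation relation, the velocity variance settles at kT/m.

```python
import math, random

random.seed(7)
gamma, kT_over_m, dt = 1.0, 1.0, 0.01            # assumed parameter values
sigma = math.sqrt(2.0 * gamma * kT_over_m * dt)  # fluctuation-dissipation choice

# Euler-Maruyama integration of dv = -gamma*v*dt + noise
v, acc, n_steps = 0.0, 0.0, 200_000
for _ in range(n_steps):
    v += -gamma * v * dt + sigma * random.gauss(0.0, 1.0)
    acc += v * v
var_v = acc / n_steps                            # time-averaged velocity variance
```

Breaking the relation between sigma and gamma would move the stationary variance away from kT/m, the simplest echo of the non-equilibrium regimes the abstract describes.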
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? Within the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, due to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computer power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
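The mechanics of a characteristic-function estimator can be illustrated on a toy one-dimensional problem. This is only a schematic analogue with assumed values, far simpler than the thesis's joint unconditional characteristic function for jump-diffusions: the empirical CF of a Gaussian sample is matched to the model CF exp(i·mu·t - sigma²·t²/2) over a small grid of arguments.

```python
import cmath, random

random.seed(1)
mu_true, sigma_true = 1.5, 0.8                 # assumed "true" parameters
data = [random.gauss(mu_true, sigma_true) for _ in range(5000)]

ts = [0.5, 1.0, 1.5, 2.0]                      # CF arguments (arbitrary grid)
ecf = {t: sum(cmath.exp(1j * t * x) for x in data) / len(data) for t in ts}

def objective(mu, sigma):
    """Squared distance between the Gaussian model CF and the empirical CF."""
    return sum(abs(cmath.exp(1j * t * mu - 0.5 * (sigma * t) ** 2) - ecf[t]) ** 2
               for t in ts)

# Crude grid search over (mu, sigma); a real estimator would use an optimizer.
_, mu_hat, sigma_hat = min((objective(m / 20, s / 20), m / 20, s / 20)
                           for m in range(0, 61) for s in range(4, 41))
```

The recovered (mu_hat, sigma_hat) should sit close to the simulated truth, which is exactly the kind of simulation-based check the third chapter performs at much larger scale.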
Abstract:
Exact solutions to Fokker-Planck equations with nonlinear drift are considered. Applications of these exact solutions to concrete models are studied. We conclude that for certain drifts we obtain divergent moments (and an infinite relaxation time) if the diffusion process can extend without any obstacle to the whole space. But if we introduce a potential barrier that limits the diffusion process, the moments converge, with a finite relaxation time.
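The divergence of moments can be made concrete with a toy drift (an illustrative choice, not necessarily one of the paper's models): for f(x) = -g·x/(1+x²) the stationary Fokker-Planck density is p(x) ∝ (1+x²)^(-g/2D), and for g/(2D) = 1 the second moment grows without bound as the integration domain is widened.

```python
# Stationary density p(x) ∝ (1 + x^2)^(-g/(2D)) for drift f(x) = -g*x/(1+x^2).
g, D = 2.0, 1.0                       # illustrative values: exponent g/(2D) = 1
expo = g / (2.0 * D)

def truncated_second_moment(cutoff, n=100_000):
    """<x^2> for the density restricted to [-cutoff, cutoff] (midpoint rule)."""
    dx = 2.0 * cutoff / n
    norm = moment = 0.0
    for i in range(n):
        x = -cutoff + (i + 0.5) * dx
        p = (1.0 + x * x) ** (-expo)
        norm += p * dx
        moment += x * x * p * dx
    return moment / norm

m10, m100 = truncated_second_moment(10.0), truncated_second_moment(100.0)
```

Widening the domain tenfold inflates the truncated second moment roughly tenfold: the moment never converges on the whole space, while any finite barrier (cutoff) renders it finite, mirroring the abstract's conclusion.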
Abstract:
In this paper we consider diffusion of a passive substance C in a temporally and spatially inhomogeneous two-dimensional medium. As a realization of the latter we choose a phase-separating medium consisting of two substances A and B, whose dynamics is determined by the Cahn-Hilliard equation. Assuming different diffusion coefficients of C in A and B, we find that the variance of the distribution function of the said substance grows less than linearly in time. We derive a simple identity for the variance using a probabilistic ansatz and are then able to identify the interface between A and B as the main cause of this nonlinear dependence. We argue that, finally, for very large times the time-dependent diffusion "constant" considered here approaches, as t^(-1/3), a constant asymptotic value D∞. The latter is calculated approximately by employing the effective-medium approximation and by fitting the simulation data to the said time dependence.
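The effective-medium step invoked at the end can be sketched independently (a generic 2D Bruggeman construction with made-up diffusivities, not the authors' fit): for a symmetric two-phase medium in two dimensions, the self-consistent effective diffusivity is the geometric mean of the two phase values.

```python
def ema_2d(d_a, d_b, phi_a=0.5):
    """Solve the 2D Bruggeman (effective-medium) condition by bisection:
    phi_a*(d_a - D)/(d_a + D) + (1 - phi_a)*(d_b - D)/(d_b + D) = 0."""
    f = lambda d: (phi_a * (d_a - d) / (d_a + d)
                   + (1.0 - phi_a) * (d_b - d) / (d_b + d))
    lo, hi = min(d_a, d_b), max(d_a, d_b)   # the root lies between the two phases
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

D_eff = ema_2d(1.0, 0.25)   # symmetric case: expect sqrt(1.0 * 0.25) = 0.5
```

Bisection suffices here because the self-consistency function is monotone in D between the two phase diffusivities.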
Abstract:
Lutetium zoning in garnet within eclogites from the Zermatt-Saas Fee zone, Western Alps, reveals sharp, exponentially decreasing central peaks. These can be used to constrain maximum Lu volume diffusion in garnets. A prograde garnet growth temperature interval of 450-600 °C has been estimated based on pseudosection calculations and garnet-clinopyroxene thermometry. The maximum pre-exponential diffusion coefficient which fits the measured central peak is on the order of D₀ = 5.7×10⁻⁶ m²/s, taking an estimated activation energy of 270 kJ/mol based on diffusion experiments for other rare earth elements in garnet. This corresponds to a maximum diffusion rate of D(600 °C) = 4.0×10⁻²² m²/s. The diffusion estimate for Lu can be used to estimate the minimum closure temperature, Tc, for Sm-Nd and Lu-Hf age data that have been obtained in eclogites of the Western Alps, postulating, based on a literature review, that D(Hf) < D(Nd) < D(Sm) ≤ D(Lu). Tc calculations using the Dodson equation yielded minimum closure temperatures of about 630 °C, assuming a rapid initial exhumation rate of 50°/m.y. and an average garnet crystal size of r = 1 mm. This suggests that Sm/Nd and Lu/Hf isochron age differences in eclogites from the Western Alps, where peak temperatures rarely exceeded 600 °C, must be interpreted in terms of prograde metamorphism.
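The closure-temperature estimate can be reproduced with a simple fixed-point iteration of the Dodson equation, using the parameters quoted in the abstract. The spherical geometry factor A = 55 is a standard assumption and is not stated in the text:

```python
import math

R = 8.314                 # gas constant, J/(mol K)
E = 270e3                 # activation energy from the abstract, J/mol
D0 = 5.7e-6               # pre-exponential diffusion coefficient, m^2/s
a = 1e-3                  # grain radius r = 1 mm, in m
A = 55.0                  # Dodson geometry factor for a sphere (assumed)
cooling = 50.0 / (1e6 * 365.25 * 24 * 3600.0)  # 50 degrees per m.y. in K/s

Tc = 900.0                # initial guess, kelvin
for _ in range(50):
    tau = R * Tc ** 2 / (E * cooling)              # Dodson's characteristic time
    Tc = E / (R * math.log(A * tau * D0 / a ** 2)) # closure-temperature fixed point
Tc_celsius = Tc - 273.15
```

Under these assumptions the iteration converges within a few steps to the mid-620s °C, consistent with the "about 630 °C" quoted in the abstract.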
Abstract:
We study biased, diffusive transport of Brownian particles through narrow, spatially periodic structures in which the motion is constrained in lateral directions. The problem is analyzed under the perspective of the Fick-Jacobs equation, which accounts for the effect of the lateral confinement by introducing an entropic barrier in a one-dimensional diffusion. The validity of this approximation, based on the assumption of an instantaneous equilibration of the particle distribution in the cross section of the structure, is analyzed by comparing the different time scales that characterize the problem. A validity criterion is established in terms of the shape of the structure and of the applied force. It is analytically corroborated and verified by numerical simulations that the critical value of the force up to which this description holds true scales as the square of the periodicity of the structure. The criterion can be visualized by means of a diagram representing the regions where the Fick-Jacobs description becomes inaccurate in terms of the scaled force versus the periodicity of the structure.
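The entropic barrier underlying the Fick-Jacobs picture can be visualized with a toy channel (an illustrative shape and parameter values, not those of the paper): for a periodic half-width w(x), the effective one-dimensional potential is V(x) = -F·x - kT·ln w(x), and a barrier survives only while the applied force is small enough.

```python
import math

L, a, b = 1.0, 1.0, 0.5    # channel period; half-width w(x) = a + b*sin(2*pi*x/L)
kT, F = 1.0, 2.0           # thermal energy and applied force (assumed values)

def v_eff(x):
    """Effective Fick-Jacobs potential: tilt plus entropic term -kT*ln w(x)."""
    w = a + b * math.sin(2.0 * math.pi * x / L)
    return -F * x - kT * math.log(w)

xs = [i * L / 1000 for i in range(1001)]
vals = [v_eff(x) for x in xs]
# A surviving interior local maximum signals an entropic barrier despite the tilt.
has_barrier = any(vals[i - 1] < vals[i] > vals[i + 1] for i in range(1, 1000))
```

Raising F eventually washes out the local maximum entirely, the qualitative counterpart of the critical force in the abstract's validity diagram.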
Abstract:
Diabetes is a recognized risk factor for cardiovascular diseases and heart failure. Diabetic cardiovascular dysfunction also underscores the development of diabetic retinopathy, nephropathy and neuropathy. Despite the broad availability of antidiabetic therapy, glycemic control still remains a major challenge in the management of diabetic patients. Hyperglycemia triggers the formation of advanced glycosylation end products (AGEs), activates protein kinase C, and enhances the polyol pathway and glucose autoxidation, which, coupled with elevated levels of free fatty acids and leptin, have been implicated in the increased generation of superoxide anion by mitochondria, NADPH oxidases and xanthine oxidoreductase in the diabetic vasculature and myocardium. Superoxide anion interacts with nitric oxide, forming the potent toxin peroxynitrite via a diffusion-limited reaction, which in concert with other oxidants triggers activation of stress kinases, endoplasmic reticulum stress, and mitochondrial and poly(ADP-ribose) polymerase 1-dependent cell death, dysregulates autophagy/mitophagy, inactivates key proteins involved in myocardial calcium handling/contractility and antioxidant defense, and activates matrix metalloproteinases and redox-dependent pro-inflammatory transcription factors (e.g. nuclear factor kappaB), promoting inflammation and AGE formation, eventually culminating in myocardial dysfunction, remodeling and heart failure. Understanding the complex interplay of oxidative/nitrosative stress with pro-inflammatory, metabolic and cell death pathways is critical to devising novel targeted therapies for diabetic cardiomyopathy, which will be overviewed in this brief synopsis. This article is part of a Special Issue entitled: Autophagy and protein quality control in cardiometabolic diseases.
Abstract:
In bubbly flow simulations, the bubble size distribution is an important factor in the determination of the hydrodynamics. Besides the hydrodynamics, it is crucial for predicting the interfacial area available for mass transfer and the reaction rate in gas-liquid reactors such as bubble columns. Solving population balance equations is a method that can help to model the size distribution by considering continuous bubble coalescence and breakage. Therefore, in Computational Fluid Dynamics simulations it is necessary to couple CFD with a Population Balance Model (CFD-PBM) to obtain a reliable distribution. In the current work a coupled CFD-PBM model is implemented as FORTRAN subroutines in ANSYS CFX 10 and tested for bubbly flow. This model uses the idea of the Multi Phase Multi Size Group approach previously presented by Sha et al. (2006) [18]. The current coupled CFD-PBM method considers an inhomogeneous flow field for the different bubble size groups in Eulerian multi-dispersed phase systems. Considering different velocity fields for the bubbles gives the advantage of a more accurate solution of the hydrodynamics. It is also an improved method for predicting the bubble size distribution in multiphase flow compared with available commercial packages.
Abstract:
We consider a renormalizable two-dimensional model of dilaton gravity coupled to a set of conformal fields as a toy model for quantum cosmology. We discuss the cosmological solutions of the model and study the effect of including the back reaction due to quantum corrections. As a result, when the matter density is below some threshold, new singularities form in a weak-coupling region, which suggests that they will not be removed in the full quantum theory. We also solve the Wheeler-DeWitt equation. Depending on the quantum state of the Universe, the singularities may appear in a quantum region where the wave function is not oscillatory, i.e., when there is no well-defined notion of classical spacetime.
Abstract:
The effect of the heat flux on the rate of chemical reaction in dilute gases is shown to be important for reactions characterized by high activation energies and in the presence of very large temperature gradients. This effect, obtained from the second-order terms in the distribution function (similar to those obtained in the Burnett approximation to the solution of the Boltzmann equation), is derived on the basis of information theory. It is shown that the analytical results describing the effect are simpler if the kinetic definition of the nonequilibrium temperature is introduced than if the thermodynamic definition is used. The numerical results are nearly the same for both definitions.
Abstract:
The objective of this work was to combine the advantages of the dried blood spot (DBS) sampling process with the highly sensitive and selective negative-ion chemical ionization tandem mass spectrometry (NICI-MS-MS) to analyze for recent antidepressants including fluoxetine, norfluoxetine, reboxetine, and paroxetine from micro whole blood samples (i.e., 10 microL). Before analysis, DBS samples were punched out, and antidepressants were simultaneously extracted and derivatized in a single step by use of pentafluoropropionic acid anhydride and 0.02% triethylamine in butyl chloride for 30 min at 60 degrees C under ultrasonication. Derivatives were then separated on a gas chromatograph coupled with a triple-quadrupole mass spectrometer operating in negative selected reaction monitoring mode for a total run time of 5 min. To establish the validity of the method, trueness, precision, and selectivity were determined on the basis of the guidelines of the "Société Française des Sciences et des Techniques Pharmaceutiques" (SFSTP). The assay was found to be linear in the concentration ranges 1 to 500 ng mL(-1) for fluoxetine and norfluoxetine and 20 to 500 ng mL(-1) for reboxetine and paroxetine. Despite the small sampling volume, the limit of detection was estimated at 20 pg mL(-1) for all the analytes. The stability of DBS was also evaluated at -20 degrees C, 4 degrees C, 25 degrees C, and 40 degrees C for up to 30 days. Furthermore, the method was successfully applied to a pharmacokinetic investigation performed on a healthy volunteer after oral administration of a single 40-mg dose of fluoxetine. Thus, this validated DBS method combines an extractive-derivative single step with a fast and sensitive GC-NICI-MS-MS technique. Using microliter blood samples, this procedure offers a patient-friendly tool in many biomedical fields such as checking treatment adherence, therapeutic drug monitoring, toxicological analyses, or pharmacokinetic studies.
Abstract:
The existence of a supramolecular organization of the G protein-coupled receptor (GPCR) is now widely accepted by the scientific community. Indeed, GPCR oligomers may enhance the diversity and performance with which extracellular signals are transferred to the G proteins in the process of receptor transduction, although the mechanism underlying this phenomenon still remains unsolved. Recently, it has been proposed that a trans-conformational switching model could be the mechanism allowing direct inhibition/activation of receptor activation/inhibition, respectively. Thus, heterotropic receptor-receptor allosteric regulations are behind the oligomeric function of GPCRs. In this paper we review how GPCR oligomerization impinges on several important receptor functions, such as biosynthesis, plasma membrane diffusion or velocity, pharmacology and signaling. In particular, the rationale for receptor oligomerization might lie in the need to sense complex whole-cell extracellular signals and translate them into a simple computational model.