969 results for Stochastic particle dynamics (theory)


Relevance: 30.00%

Abstract:

Nanoparticles emitted from road traffic are the largest source of respiratory exposure for the general public living in urban areas. It has been suggested that the adverse health effects of airborne particles may scale with the airborne particle number, which, if correct, focuses attention on the nanoparticle (less than 100 nm) size range that dominates the number count in urban areas. Urban measurements of particle size distributions have tended to show a broadly similar pattern dominated by a mode centred on 20–30 nm diameter particles emitted by diesel engine exhaust. In this paper we report the results of measurements of particle number concentration and size distribution made in a major London park as well as on the 160 m high BT Tower. These measurements, taken during the REPARTEE project (Regents Park and BT Tower experiment), show a remarkable shift in particle size distributions, with major losses of the smallest particle class as particles are advected away from the traffic source. In the Park, the traffic-related mode at 20–30 nm diameter is much reduced and a new mode appears at <10 nm. Size distribution measurements also revealed higher number concentrations of sub-50 nm particles at the BT Tower during days affected by higher turbulence, as determined by Doppler lidar measurements, and indicate a loss of nanoparticles from air aged during less turbulent conditions. These results suggest that nanoparticles are lost by evaporation rather than by coagulation. The results have major implications for understanding the impacts of traffic-generated particulate matter on human health.

Relevance: 30.00%

Abstract:

We propose a new modelling framework suitable for the description of atmospheric convective systems as a collection of distinct plumes. The literature contains many examples of models for collections of plumes in which strong simplifying assumptions are made, a diagnostic dependence of convection on the large-scale environment and the limit of many plumes often being imposed from the outset. Some recent studies have sought to remove one or the other of those assumptions. The proposed framework removes both, and is explicitly time-dependent and stochastic in its basic character. The statistical dynamics of the plume collection are defined through simple probabilistic rules applied at the level of individual plumes, and van Kampen's system size expansion is then used to construct the macroscopic limit of the microscopic model. Through suitable choices of the microscopic rules, the model is shown to encompass previous studies in the appropriate limits, and to allow their natural extensions beyond those limits.
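The idea of defining the plume collection through probabilistic rules at the level of individual plumes, whose macroscopic limit is then deterministic, can be illustrated with a minimal sketch: a plume population with a constant initiation rate and independent per-plume decay, simulated exactly with the Gillespie algorithm. The rules and rate values below are illustrative assumptions, not the paper's actual microscopic model.

```python
import numpy as np

def gillespie_plume_count(birth_rate=50.0, death_rate=1.0, t_end=20.0, seed=1):
    """Gillespie simulation of a toy plume population: new plumes initiate
    at constant rate `birth_rate`, and each existing plume decays
    independently at rate `death_rate`. In the macroscopic (system size)
    limit the mean plume number approaches birth_rate / death_rate."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    while t < t_end:
        total_rate = birth_rate + death_rate * n
        t += rng.exponential(1.0 / total_rate)   # waiting time to next event
        if rng.random() < birth_rate / total_rate:
            n += 1                               # a new plume initiates
        else:
            n -= 1                               # an existing plume decays
    return n
```

For these rates the stationary plume-number distribution is Poisson with mean 50, so individual realizations fluctuate about that value; van Kampen's system size expansion formalizes exactly this splitting into a deterministic mean plus Gaussian fluctuations.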

Relevance: 30.00%

Abstract:

New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a particle filter and show how the 'curse of dimensionality' might be beaten. In the standard particle filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ensuring almost equal weights, we avoid performing model runs that turn out to be useless. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
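A rough illustration of steering model runs towards the observations: the sketch below nudges every particle of a toy scalar model towards the upcoming observation before weighting and resampling. The model, noise levels and nudging strength are assumptions for illustration only; a consistent scheme would also include the proposal density in the importance weights, a correction omitted here for brevity.

```python
import numpy as np

def nudged_particle_filter(obs, n_particles=500, model_noise=0.5,
                           obs_noise=0.5, nudge=0.5, seed=0):
    """Toy particle filter for the scalar model x_k = 0.9 x_{k-1} + noise
    with direct observations. Each particle is relaxed ("nudged") towards
    the next observation before weighting, a crude stand-in for the
    observation-steered proposal density discussed in the text."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        # proposal: model step, relaxation towards y, then model noise
        x = 0.9 * x
        x = x + nudge * (y - x) + rng.normal(0.0, model_noise, n_particles)
        # importance weights from the observation likelihood
        logw = -0.5 * ((y - x) / obs_noise) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * x)))
        # multinomial resampling keeps the ensemble weights near-equal
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return estimates
```

Because every particle is pulled towards the data before the analysis, far fewer particles end up with negligible weight than in a pure Monte Carlo (bootstrap) filter.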

Relevance: 30.00%

Abstract:

A particle filter is a data assimilation scheme that employs a fully nonlinear, non-Gaussian analysis step. Unfortunately, as the size of the state grows, the number of ensemble members required for the particle filter to converge to the true solution increases exponentially. To overcome this, Vaswani [Vaswani N. 2008. IEEE Trans Signal Process 56:4583–97] proposed a method known as mode tracking to improve the efficiency of the particle filter. When mode tracking, the state is split into two subspaces. One subspace is forecast using the particle filter; the other is treated so that its values are set equal to the mode of the marginal pdf. There are many ways to split the state. One hypothesis is that the best results should be obtained from the particle filter with mode tracking when we mode track the maximum number of unimodal dimensions. The aim of this paper is to test this hypothesis using the three-dimensional stochastic Lorenz equations with direct observations. It is found that mode tracking the maximum number of unimodal dimensions does not always provide the best result. The best choice of states to mode track depends on the number of particles used and on the accuracy and frequency of the observations.
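The split between a particle-filtered subspace and a mode-tracked subspace can be sketched in a two-dimensional toy problem (not the stochastic Lorenz equations used in the paper): dimension 0 is updated by a bootstrap particle filter, while dimension 1 is collapsed to the mode of an assumed Gaussian marginal posterior. All model and noise parameters below are illustrative.

```python
import numpy as np

def pf_with_mode_tracking(obs, n_particles=300, seed=0):
    """Toy two-dimensional filter illustrating the mode-tracking split:
    dimension 0 is updated by a bootstrap particle filter; dimension 1 is
    "mode tracked", i.e. set to the mode of its marginal posterior, taken
    here to be Gaussian (so the mode equals the precision-weighted mean)."""
    rng = np.random.default_rng(seed)
    sigma_model, sigma_obs = 0.3, 0.4
    x = rng.normal(0.0, 1.0, (n_particles, 2))
    estimates = []
    for y in obs:                      # y: direct observation of both dims
        x = 0.95 * x + rng.normal(0.0, sigma_model, x.shape)
        # particle-filter analysis for dimension 0
        logw = -0.5 * ((y[0] - x[:, 0]) / sigma_obs) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]
        # mode tracking for dimension 1: Gaussian forecast times observation
        var_f = x[:, 1].var() + 1e-12
        mode = ((x[:, 1].mean() / var_f + y[1] / sigma_obs ** 2)
                / (1.0 / var_f + 1.0 / sigma_obs ** 2))
        x[:, 1] = mode
        estimates.append(x.mean(axis=0))
    return np.array(estimates)
```

Collapsing a dimension removes its sampling noise entirely, which is why the choice of which (and how many) dimensions to mode track interacts with the particle count and the observation quality.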

Relevance: 30.00%

Abstract:

Many numerical models for weather prediction and climate studies are run at resolutions that are too coarse to resolve convection explicitly, but too fine to justify the local equilibrium assumed by conventional convective parameterizations. The Plant-Craig (PC) stochastic convective parameterization scheme, developed in this paper, solves this problem by removing the assumption that a given grid-scale situation must always produce the same sub-grid-scale convective response. Instead, for each timestep and gridpoint, one of the many possible convective responses consistent with the large-scale situation is randomly selected. The scheme requires as input the large-scale state, as opposed to the instantaneous grid-scale state, but must nonetheless be able to account for genuine variations in the large-scale situation. Here we investigate the behaviour of the PC scheme in three-dimensional simulations of radiative-convective equilibrium, demonstrating in particular that the space-time averaging required to produce a good representation of the input large-scale state is not in conflict with the requirement to capture large-scale variations. The resulting equilibrium profiles agree well with those obtained from established deterministic schemes, and with corresponding cloud-resolving model simulations. Unlike the conventional schemes, the statistics for mass flux and rainfall variability from the PC scheme also agree well with relevant theory and vary appropriately with spatial scale. The scheme is further shown to adapt automatically to changes in grid length and in forcing strength.
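A minimal sketch of the random selection of a convective response, assuming (as in the equilibrium plume statistics underlying the PC scheme) a Poisson-distributed plume number and exponentially distributed individual plume mass fluxes; the numerical values are arbitrary.

```python
import numpy as np

def sample_convective_response(mean_total_flux, mean_plume_flux, rng):
    """Draw one random convective response consistent with a given
    large-scale state: the plume number is Poisson-distributed with mean
    <M>/<m>, and each plume's mass flux is exponentially distributed with
    mean <m>. Returns the array of individual plume mass fluxes."""
    n_plumes = rng.poisson(mean_total_flux / mean_plume_flux)
    return rng.exponential(mean_plume_flux, n_plumes)

# Repeated draws for the same large-scale state differ, but their total
# mass flux fluctuates about mean_total_flux, with relative variability
# shrinking as more plumes fit in the grid box (i.e. at larger grid length).
```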

Relevance: 30.00%

Abstract:

Using the record of 30 flank eruptions over the last 110 years at Nyamuragira, we have tested the relationship between the eruption dynamics and the local stress field. There are two groups of eruptions based on their duration (shorter or longer than 80 days) that are also clustered in space and time. We find that the eruptions fed by dykes parallel to the East African Rift Valley have longer durations (and larger volumes) than those eruptions fed by dykes with other orientations. This is compatible with a model for compressible magma transported through an elastic-walled dyke in a differential stress field from an over-pressured reservoir (Woods et al., 2006). The observed pattern of eruptive fissures is consistent with a local stress field modified by a northwest-trending, right-lateral slip fault that is part of the northern transfer zone of the Kivu Basin rift segment. We have also re-tested, with new data, the stochastic eruption models for Nyamuragira of Burt et al. (1994). The time-predictable, pressure-threshold model remains the best fit and is consistent with the typically observed declining rate of sulphur dioxide emission during the first few days of eruption, with lava emission from a depressurising, closed, crustal reservoir. The 2.4-fold increase in long-term eruption rate that occurred after 1977 is confirmed in the new analysis. Since that change, the record has been dominated by short-duration eruptions fed by dykes perpendicular to the Rift. We suggest that the intrusion of a major dyke during the 1977 volcano-tectonic event at neighbouring Nyiragongo volcano inhibited subsequent dyke formation on the southern flanks of Nyamuragira, and this may also have resulted in more dykes reaching the surface elsewhere. Thus that sudden change in output was a result of a changed stress field that forced more of the deep magma supply to the surface. Another volcano-tectonic event in 2002 may also have changed the magma output rate at Nyamuragira.

Relevance: 30.00%

Abstract:

We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems.
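The three surrogate terms (a deterministic correction, a stochastic forcing, and a memory term convolving the past trajectory with a kernel) can be sketched for a scalar system of interest. Every functional form, kernel and amplitude below is an illustrative placeholder, not an expression derived from the response theory.

```python
import numpy as np

def integrate_with_surrogate_coupling(x0=1.0, t_end=20.0, dt=0.01,
                                      det_corr=0.2, noise_amp=0.3, seed=0):
    """Euler-Maruyama integration of a scalar system of interest,
    dx = [-x + det_corr + memory(t)] dt + noise_amp dW, in which the
    coupling to a second system is surrogated by a first-order
    deterministic correction, a stochastic forcing, and a memory term
    built from an (assumed) exponentially decaying kernel."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    kernel = 0.1 * np.exp(-np.arange(200) * dt / 0.5)   # assumed memory kernel
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        # memory term: discrete convolution of the kernel with the history
        hist = x[max(0, k - len(kernel) + 1):k + 1][::-1]  # most recent first
        memory = dt * float(np.dot(kernel[:len(hist)], hist))
        drift = -x[k] + det_corr + memory
        x[k + 1] = x[k] + dt * drift + noise_amp * np.sqrt(dt) * rng.normal()
    return x
```

In the actual theory the noise would carry prescribed spectral properties and the kernel would be determined by the correlations of the second system; here both are fixed by hand purely to show where each term enters the integration.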

Relevance: 30.00%

Abstract:

The understanding of the statistical properties and of the dynamics of multistable systems is gaining more and more importance in a vast variety of scientific fields. This is especially relevant for the investigation of the tipping points of complex systems. Sometimes, in order to understand the time series of given observables exhibiting bimodal distributions, simple one-dimensional Langevin models are fitted to reproduce the observed statistical properties, and used to investigate the projected dynamics of the observable. This is of great relevance for studying potential catastrophic changes in the properties of the underlying system or resonant behaviours like those related to stochastic resonance-like mechanisms. In this paper, we propose a framework for this kind of study, using simple box models of the oceanic circulation and choosing as observable the strength of the thermohaline circulation. We study the statistical properties of the transitions between the two modes of operation of the thermohaline circulation under symmetric boundary forcings and test their agreement with simplified one-dimensional phenomenological theories. We extend our analysis to include stochastic resonance-like amplification processes. We conclude that fitted one-dimensional Langevin models, when closely scrutinised, may turn out to be more ad hoc than they seem, lacking robustness and/or well-posedness. They should be treated with care, more as an empirical descriptive tool than as a methodology with predictive power.
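A minimal example of the kind of one-dimensional Langevin model discussed here, simulated with the Euler-Maruyama scheme in a symmetric double-well potential; the potential and noise level are illustrative choices, not values fitted to any box-model output.

```python
import numpy as np

def langevin_double_well(sigma=0.7, dt=0.01, n_steps=100_000, seed=0):
    """Euler-Maruyama sample path of the one-dimensional Langevin model
    dx = -V'(x) dt + sigma dW with the symmetric double-well potential
    V(x) = x**4/4 - x**2/2, the kind of phenomenological model the text
    describes being fitted to bimodal observables such as the strength of
    the thermohaline circulation."""
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps - 1)
    x = np.empty(n_steps)
    x[0] = 1.0
    for k in range(n_steps - 1):
        drift = x[k] - x[k] ** 3          # -V'(x)
        x[k + 1] = x[k] + drift * dt + noise[k]
    return x

# The stationary histogram of x is bimodal, with modes near x = +/-1 and
# noise-induced transitions between the wells; stronger (weaker) noise
# shortens (lengthens) the mean residence time in each well.
```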

Relevance: 30.00%

Abstract:

We present molecular dynamics (MD) and slip-springs model simulations of the chain segmental dynamics in entangled linear polymer melts. The time-dependent behavior of the segmental orientation autocorrelation functions and mean-square segmental displacements are analyzed for both flexible and semiflexible chains, with particular attention paid to the scaling relations among these dynamic quantities. Effective combination of the two simulation methods at different coarse-graining levels allows us to explore the chain dynamics for chain lengths ranging from Z ≈ 2 to 90 entanglements. For a given chain length of Z ≈ 15, the time scales accessed span more than 10 decades, covering all of the interesting relaxation regimes. The obtained time dependence of the monomer mean-square displacements, g1(t), is in good agreement with the tube theory predictions. Results on the first- and second-order segmental orientation autocorrelation functions, C1(t) and C2(t), demonstrate a clear power-law relationship of C2(t) ∝ C1(t)^m with m = 3, 2, and 1 in the initial, free Rouse, and entangled (constrained Rouse) regimes, respectively. The return-to-origin hypothesis, which leads to inverse proportionality between the segmental orientation autocorrelation functions and g1(t) in the entangled regime, is convincingly verified by the simulation result of C1(t) ∝ g1(t)^(-1) ∝ t^(-1/4) in the constrained Rouse regime, where for well-entangled chains both C1(t) and g1(t) are rather insensitive to constraint release effects. However, the second-order correlation function, C2(t), shows much stronger sensitivity to constraint release effects and experiences a protracted crossover from the free Rouse to the entangled regime. This crossover region extends for at least one decade in time longer than that of C1(t). The predicted time scaling behavior of C2(t) ∝ t^(-1/4) is observed in slip-springs simulations only at a chain length of 90 entanglements, whereas shorter chains show higher scaling exponents.
The reported simulation work can be applied to interpret the observations of NMR experiments.
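The correlation functions discussed above are straightforward to evaluate from trajectory data. The sketch below computes C1(t), C2(t) and g1(t) from arrays of unit bond vectors and monomer positions, correlating against a single time origin for brevity, whereas a production analysis would average over time origins.

```python
import numpy as np

def orientation_acfs(bonds):
    """First- and second-order segmental orientation autocorrelation
    functions, C1(t) = <P1(u(t).u(0))> and C2(t) = <P2(u(t).u(0))>, from
    a trajectory of unit bond vectors shaped (n_frames, n_bonds, 3)."""
    cos = np.einsum('tbi,bi->tb', bonds, bonds[0])   # u(t) . u(0), per bond
    c1 = cos.mean(axis=1)                            # P1(x) = x
    c2 = (1.5 * cos ** 2 - 0.5).mean(axis=1)         # P2(x) = (3x^2 - 1)/2
    return c1, c2

def g1(positions):
    """Monomer mean-square displacement g1(t) against the first frame,
    for monomer positions shaped (n_frames, n_monomers, 3)."""
    disp = positions - positions[0]
    return (disp ** 2).sum(axis=2).mean(axis=1)
```

Plotting C2 against C1 on logarithmic axes then exposes the regime-dependent exponent m, and plotting C1 against 1/g1 tests the return-to-origin proportionality in the constrained Rouse regime.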

Relevance: 30.00%

Abstract:

We consider the relation between so-called continuous localization models (i.e. non-linear stochastic Schrödinger evolutions) and the discrete GRW model of wave function collapse. The former can be understood as a scaling limit of the GRW process. The proof relies on a stochastic Trotter formula, which is of interest in its own right. Our Trotter formula also allows us to complement results on the existence theory of stochastic Schrödinger evolutions by Holevo and Mora/Rebolledo.

Relevance: 30.00%

Abstract:

We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the life course? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.

Relevance: 30.00%

Abstract:

The characteristics of the boundary layer separating a turbulence region from an irrotational (or non-turbulent) flow region are investigated using rapid distortion theory (RDT). The turbulence region is approximated as homogeneous and isotropic far away from the bounding turbulent/non-turbulent (T/NT) interface, which is assumed to remain approximately flat. Inviscid effects resulting from the continuity of the normal velocity and pressure at the interface, in addition to viscous effects resulting from the continuity of the tangential velocity and shear stress, are taken into account by considering a sudden insertion of the T/NT interface, in the absence of mean shear. Profiles of the velocity variances, turbulent kinetic energy (TKE), viscous dissipation rate (epsilon), turbulence length scales, and pressure statistics are derived, showing an excellent agreement with results from direct numerical simulations (DNS). Interestingly, the normalized inviscid flow statistics at the T/NT interface do not depend on the form of the assumed TKE spectrum. Outside the turbulent region, where the flow is irrotational (except inside a thin viscous boundary layer), epsilon decays as z^{-6}, where z is the distance from the T/NT interface. The mean pressure distribution is calculated using RDT, and exhibits a decrease towards the turbulence region due to the associated velocity fluctuations, consistent with the generation of a mean entrainment velocity. The vorticity variance and epsilon display large maxima at the T/NT interface due to the inviscid discontinuities of the tangential velocity variances existing there, and these maxima are quantitatively related to the thickness delta of the viscous boundary layer (VBL). For an equilibrium VBL, the RDT analysis suggests that delta ~ eta (where eta is the Kolmogorov microscale), which is consistent with the scaling law identified in a very recent DNS study for shear-free T/NT interfaces.

Relevance: 30.00%

Abstract:

We study inverse problems in neural field theory, i.e., the construction of synaptic weight kernels yielding a prescribed neural field dynamics. We address the issues of existence, uniqueness, and stability of solutions to the inverse problem for the Amari neural field equation as a special case, and prove that these problems are generally ill-posed. In order to construct solutions to the inverse problem, we first recast the Amari equation into a linear perceptron equation in an infinite-dimensional Banach or Hilbert space. In a second step, we construct sets of biorthogonal function systems allowing the approximation of synaptic weight kernels by a generalized Hebbian learning rule. Numerically, this construction is implemented by the Moore–Penrose pseudoinverse method. We demonstrate the instability of these solutions and use the Tikhonov regularization method for stabilization and to prevent numerical overfitting. We illustrate the stable construction of kernels by means of three instructive examples.
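The pseudoinverse construction with Tikhonov stabilization can be sketched for a crude spatial discretization of the Amari equation; the discretization and parameter values below are illustrative assumptions, not the paper's construction via biorthogonal function systems.

```python
import numpy as np

def kernel_from_dynamics(u, dudt, f, dy, tau=1.0, alpha=1e-3):
    """Tikhonov-regularized reconstruction of a synaptic weight kernel W
    for a space-discretized Amari equation,
        tau * du_i/dt = -u_i + sum_j W[i, j] * f(u_j) * dy,
    from field snapshots u and derivatives dudt, both shaped
    (n_times, n_space). Each row of W solves a linear least-squares
    problem; alpha > 0 is the Tikhonov parameter, and alpha -> 0 recovers
    the plain Moore-Penrose pseudoinverse solution, which the text shows
    is unstable."""
    A = f(u) * dy                          # design matrix, (n_times, n_space)
    B = tau * dudt + u                     # right-hand sides, one column per x_i
    G = A.T @ A + alpha * np.eye(A.shape[1])
    return np.linalg.solve(G, A.T @ B).T   # W[i, j]: weight from y_j to x_i
```

Because the linear perceptron problem is ill-posed, the unregularized solution amplifies noise in u and dudt; increasing alpha trades fidelity for stability, which is exactly the role the text assigns to Tikhonov regularization.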