979 results for Stochastic particle dynamics (theory)


Relevance: 40.00%

Publisher:

Abstract:

The aim of the present study was to evaluate the influence of seasonality on the behavior of phytoplankton associations in eutrophic reservoirs with different depths in northeastern Brazil. Five collections were carried out at each of the reservoirs at two depths (0.1 m and near the sediment) at three-month intervals in each season (dry and rainy). The phytoplankton samples were preserved in Lugol's solution and quantified under an inverted microscope for the determination of density values, which were subsequently converted to biomass values based on cellular biovolume and classified into phytoplankton associations. The following abiotic variables were analyzed: water temperature, dissolved oxygen, pH, turbidity, water transparency, total phosphorus, total dissolved phosphorus, orthophosphate and total nitrogen. The data were investigated using canonical correspondence analysis. The influence of seasonality on the dynamics of the phytoplankton community was weaker in the deeper reservoirs. Depth affected the behavior of the algal associations. Variation in light availability was a determinant of changes in the phytoplankton structure. Urosolenia and Anabaena associations were more abundant in shallow ecosystems with a larger euphotic zone, whereas the Microcystis association was more related to deep ecosystems with adequate availability of nutrients. The distribution of the Cyclotella, Geitlerinema, Planktothrix, Pseudanabaena and Cylindrospermopsis associations was different from that seen in subtropical regions, and the substitution of these associations was related to a reduction in the euphotic zone rather than in the mixing zone.
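The density-to-biomass conversion via cellular biovolume mentioned above is simple arithmetic; the sketch below illustrates it with purely hypothetical taxa, densities and mean cell biovolumes (none of these numbers come from the study).

```python
# density (cells mL^-1) and mean cell biovolume (um^3 cell^-1) per taxon: assumed values
density   = {"Microcystis": 4.2e4, "Cylindrospermopsis": 1.1e4, "Cyclotella": 3.5e3}
biovolume = {"Microcystis": 65.0,  "Cylindrospermopsis": 180.0, "Cyclotella": 950.0}

for taxon in density:
    # biovolume in mm^3 L^-1: cells/mL * um^3/cell * 1e3 mL/L * 1e-9 mm^3/um^3
    bv_mm3_per_L = density[taxon] * biovolume[taxon] * 1e3 * 1e-9
    # assuming a specific density of ~1 g cm^-3, 1 mm^3 L^-1 corresponds to ~1 mg L^-1 fresh weight
    print(f"{taxon}: {bv_mm3_per_L:.3f} mm^3/L (~{bv_mm3_per_L:.3f} mg/L)")
```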

Relevance: 40.00%

Publisher:

Abstract:

Competitive learning is an important machine learning approach which is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. Each particle's walking rule is a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative approach to the study of competitive learning.
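A minimal sketch of the random-plus-preferential walk idea, not the authors' full algorithm (which also tracks particle energy and actively rejects intruders): two particles walk on a toy two-community graph, each preferring nodes it has already visited often, and nodes are finally labelled by the particle that dominates them. The toy graph, particle count and mixing probability are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy undirected network: two 4-node cliques joined by a single bridge edge
A = np.zeros((8, 8), int)
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1

n_particles, p_pref = 2, 0.6
pos = rng.integers(0, 8, n_particles)       # current node of each particle
dominance = np.ones((8, n_particles))       # visit counts = domination levels

for step in range(2000):
    for k in range(n_particles):
        nbrs = np.flatnonzero(A[pos[k]])
        if rng.random() < p_pref:
            # preferential move: favour neighbours this particle already dominates
            w = dominance[nbrs, k] / dominance[nbrs, k].sum()
            pos[k] = rng.choice(nbrs, p=w)
        else:
            pos[k] = rng.choice(nbrs)       # purely random move
        dominance[pos[k], k] += 1

labels = dominance.argmax(axis=1)           # community = particle dominating the node
print("detected community labels per node:", labels)
```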

Relevance: 40.00%

Publisher:

Abstract:

In this Letter we analyze the evolution of the energy distribution of test particles injected in three-dimensional (3D) magnetohydrodynamic (MHD) simulations of different magnetic reconnection configurations. When considering a single Sweet-Parker topology, the particles accelerate predominantly through a first-order Fermi process, as predicted in [3] and demonstrated numerically in [8]. When turbulence is included within the current sheet, the acceleration rate is highly enhanced, because reconnection becomes fast and independent of resistivity [4,11] and allows the formation of a thick volume filled with multiple simultaneously reconnecting magnetic fluxes. Charged particles trapped within this volume undergo several head-on scatterings with the contracting magnetic fluctuations, which significantly increase the acceleration rate and result in a first-order Fermi process. For comparison, we also tested acceleration in MHD turbulence, where particles suffer collisions with both approaching and receding magnetic irregularities, resulting in a reduced acceleration rate. We argue that the dominant acceleration mechanism approaches a second-order Fermi process in this case.
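The difference between the two regimes can be illustrated with a deliberately crude toy Monte Carlo (not the test-particle-in-MHD calculation of the Letter): in the first-order case every scattering is head-on and the fractional energy gain per scattering is of order 2V/c, while in the second-order case head-on collisions are only slightly more probable than tail-on ones, leaving a net gain of order (V/c)^2 per scattering. The speed beta and the scattering counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.05                    # scatterer speed in units of c (assumed)
n_particles, n_scatter = 10_000, 400

def fermi(first_order):
    E = np.ones(n_particles)   # energies in units of the injection energy
    for _ in range(n_scatter):
        if first_order:
            # converging magnetic mirrors: every scattering is head-on
            E *= 1 + 2*beta
        else:
            # approaching and receding scatterers: head-on slightly more likely
            head_on = rng.random(n_particles) < (1 + beta)/2
            E *= np.where(head_on, 1 + 2*beta, 1 - 2*beta)
    return E

print("first-order  mean energy gain:", fermi(True).mean())
print("second-order mean energy gain:", fermi(False).mean())
```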

Relevance: 40.00%

Publisher:

Abstract:

This thesis deals with inflation theory, focusing on the Jarrow & Yildirim model, which is nowadays the standard choice when pricing inflation derivatives. After recalling the main results about short-rate and forward interest rate models, the dynamics of the main components of the market are derived. Then the most important inflation-indexed derivatives are explained (zero-coupon swap, year-on-year swap, cap and floor), and their pricing procedure is shown step by step. Calibration is explained and performed both with a standard method and with a heuristic, non-standard one. The model is then enriched with credit risk, which makes it possible to take into account the possibility of default of the counterparty of a contract. In this context, the general pricing method is derived, with the introduction of defaultable zero-coupon bonds, and the Monte Carlo method is treated in detail and used to price a concrete example of a contract. Appendices: A: martingale measures, Girsanov's theorem and the change of numeraire. B: some aspects of the theory of stochastic differential equations; in particular, the solution of linear SDEs, and the Feynman-Kac theorem, which shows the connection between SDEs and partial differential equations. C: some useful results about the normal distribution.
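For the simplest product mentioned above, the zero-coupon inflation-indexed swap, the fair fixed rate can be quoted model-independently from the nominal and real discount curves, K = (P_r(0,T)/P_n(0,T))^(1/T) - 1; the Jarrow-Yildirim dynamics only become essential for year-on-year swaps, caps and floors. A minimal sketch, with made-up discount factors rather than market data:

```python
def zciis_fair_rate(P_nominal: float, P_real: float, T: float) -> float:
    """Model-independent fair fixed rate K of a zero-coupon inflation swap.

    At maturity T the fixed leg pays (1+K)**T - 1 and the inflation leg pays
    I(T)/I(0) - 1; equating the two leg values P_n(0,T)*((1+K)**T - 1) and
    P_r(0,T) - P_n(0,T) gives K.
    """
    return (P_real / P_nominal) ** (1.0 / T) - 1.0

# illustrative zero-coupon bond prices (assumptions, not calibrated values)
P_n, P_r, T = 0.78, 0.92, 10.0
K = zciis_fair_rate(P_n, P_r, T)
print(f"breakeven inflation over {T:.0f} years: {100*K:.2f}% per annum")
```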

Relevance: 40.00%

Publisher:

Abstract:

The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed. In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows.

An important range of applications for the lattice Boltzmann method is formed by microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations which are able to complement the achievements of theory and experiment. Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, where the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced symmetry model. In the case of a Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
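For orientation, the deterministic core of the method fits in a few lines. The sketch below is a bare D2Q9 BGK lattice Boltzmann step (collision plus streaming on a fully periodic grid), not the fluctuating or tunable-slip variants developed in the thesis; in the fluctuating version, properly scaled noise would additionally be injected into the non-conserved kinetic modes during the collision step. Grid size, relaxation time and the initial shear wave are arbitrary choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
nx, ny, tau = 64, 64, 0.8                       # grid and BGK relaxation time

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

# initial condition: unit density plus a small shear wave that decays viscously
rho0 = np.ones((nx, ny))
ux0 = 0.05*np.sin(2*np.pi*np.arange(ny)/ny)[None, :]*np.ones((nx, ny))
f = equilibrium(rho0, ux0, np.zeros((nx, ny)))

for step in range(500):
    rho = f.sum(axis=0)                          # conserved moments
    ux = (c[:, 0, None, None]*f).sum(axis=0)/rho
    uy = (c[:, 1, None, None]*f).sum(axis=0)/rho
    f += (equilibrium(rho, ux, uy) - f)/tau      # BGK collision towards equilibrium
    for i in range(9):                           # streaming along each lattice link
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print("max velocity after viscous decay:", np.abs(ux).max())
```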

Relevance: 40.00%

Publisher:

Abstract:

We have performed Monte Carlo and molecular dynamics simulations of suspensions of monodisperse, hard ellipsoids of revolution. Hard-particle models play a key role in statistical mechanics. They are conceptually and computationally simple, and they offer insight into systems in which particle shape is important, including atomic, molecular, colloidal, and granular systems. In the high-density phase diagram of prolate hard ellipsoids we have found a new crystal, which is more stable than the stretched FCC structure proposed previously. The new phase, SM2, has a simple monoclinic unit cell containing a basis of two ellipsoids with unequal orientations. The angle of inclination is very soft for length-to-width (aspect) ratio l/w=3, while the other angles are not. A symmetric state of the unit cell exists, related to the densest-known packings of ellipsoids; it is not always the stable one. Our results remove the stretched FCC structure for aspect ratio l/w=3 from the phase diagram of hard, uniaxial ellipsoids. We provide evidence that this holds between aspect ratios 3 and 6, and possibly beyond. Finally, ellipsoids in SM2 at l/w=1.55 exhibit end-over-end flipping, warranting studies of the cross-over to where this dynamics is not possible.

Secondly, we studied the dynamics of nearly spherical ellipsoids. In equilibrium, they show a first-order transition from an isotropic phase to a rotator phase, where positions are crystalline but orientations are free. When over-compressing the isotropic phase into the rotator regime, we observed super-Arrhenius slowing down of diffusion and relaxation, and signatures of the cage effect. These features of glassy dynamics are sufficiently strong that asymptotic scaling laws of the Mode-Coupling Theory of the glass transition (MCT) could be tested, and were found to apply. We found strong coupling of positional and orientational degrees of freedom, leading to a common value for the MCT glass-transition volume fraction. Flipping modes were not slowed down significantly. We demonstrated that the results are independent of simulation method, as predicted by MCT. Further, we determined that even intra-cage motion is cooperative. We confirmed the presence of dynamical heterogeneities associated with the cage effect. The transit between cages was seen to occur on short time scales, compared to the time spent in cages; but the transit was shown not to involve displacements distinguishable in character from intra-cage motion. The presence of glassy dynamics was predicted by molecular MCT (MMCT). However, as MMCT disregards crystallization, a test by simulation was required. Glassy dynamics is unusual in monodisperse systems. Crystallization typically intervenes unless polydispersity, network-forming bonds or other asymmetries are introduced. We argue that particle anisometry acts as a sufficient source of disorder to prevent crystallization. This sheds new light on the question of which ingredients are required for glass formation.
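The Monte Carlo machinery behind such phase-diagram studies is illustrated below for the simpler case of hard spheres; the hard-ellipsoid version additionally needs orientation moves and a non-trivial overlap criterion (e.g. Perram-Wertheim), which is omitted here. System size, packing fraction and step size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(6)
n, phi, dim = 64, 0.30, 3                       # particles, packing fraction, dimension
L = (n*(np.pi/6)/phi) ** (1/3)                  # box length for unit-diameter spheres
max_step = 0.1

# start from a simple cubic arrangement inside the periodic box (no overlaps)
m = int(np.ceil(n ** (1/3)))
grid = np.array([(i, j, k) for i in range(m) for j in range(m) for k in range(m)])
pos = (grid[:n] + 0.5)*(L/m)

def overlaps(k, trial):
    d = pos - trial
    d -= L*np.round(d/L)                        # minimum-image convention
    r2 = (d*d).sum(axis=1)
    r2[k] = np.inf                              # ignore the particle itself
    return (r2 < 1.0).any()                     # unit diameter: overlap if r < 1

accepted = 0
for sweep in range(200):
    for k in range(n):
        trial = (pos[k] + rng.uniform(-max_step, max_step, dim)) % L
        if not overlaps(k, trial):              # hard-core Metropolis: reject overlaps
            pos[k] = trial
            accepted += 1

print("acceptance ratio:", accepted/(200*n))
```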

Relevance: 40.00%

Publisher:

Abstract:

This thesis presents several techniques designed to drive a swarm of robots in an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. Each theory, from its own point of view, exploits the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent up to date across the swarm. Swarm Intelligence has been applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus, i.e. the agreement protocol, with the aim of maintaining the units in a desired, controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
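A minimal kinematic sketch of the PSO-plus-consensus combination described above, assuming point robots with no obstacles: each robot follows a standard PSO velocity update toward the best known positions relative to the target area, plus a consensus term toward the average of its neighbours on a ring communication graph that keeps the formation compact. All gains, the goal position and the topology are illustrative assumptions, not the thesis's controller.

```python
import numpy as np

rng = np.random.default_rng(3)
n_robots, dim, eps = 12, 2, 0.2
goal = np.array([10.0, 10.0])                   # centre of the target area (assumed)

x = rng.uniform(0, 1, (n_robots, dim))          # positions
v = np.zeros((n_robots, dim))                   # velocities
pbest = x.copy()                                # personal best positions

# ring communication topology (adjacency matrix of the swarm graph)
A = np.zeros((n_robots, n_robots))
for i in range(n_robots):
    A[i, (i - 1) % n_robots] = A[i, (i + 1) % n_robots] = 1

def cost(p):                                    # distance to the target area
    return np.linalg.norm(p - goal, axis=-1)

for step in range(200):
    gbest = pbest[cost(pbest).argmin()]
    r1, r2 = rng.random((2, n_robots, dim))
    v = 0.7*v + 1.5*r1*(pbest - x) + 1.5*r2*(gbest - x)   # PSO navigation term
    v += eps*(A @ x / A.sum(1, keepdims=True) - x)        # consensus keeps formation
    x = x + v
    better = cost(x) < cost(pbest)
    pbest[better] = x[better]

print("swarm centroid after 200 steps:", x.mean(axis=0).round(2))
```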

Relevance: 40.00%

Publisher:

Abstract:

This work contains several applications of the mode-coupling theory (MCT) and is separated into three parts. In the first part we investigate the liquid-glass transition of hard spheres for dimensions d→∞ analytically and numerically up to d=800 in the framework of MCT. We find that the critical packing fraction ϕ_c(d) scales as d² 2^(-d), which is larger than the Kauzmann packing fraction ϕ_K(d) found by a small-cage expansion by Parisi and Zamponi [J. Stat. Mech.: Theory Exp. 2006, P03017 (2006)]. The scaling of the critical packing fraction is different from the relation ϕ_c(d) ∼ d 2^(-d) found earlier by Kirkpatrick and Wolynes [Phys. Rev. A 35, 3072 (1987)]. This is due to the fact that the k dependence of the critical collective and self nonergodicity parameters f_c(k;d) and f_c^s(k;d) was assumed to be Gaussian in the previous theories. We show that in MCT this is not the case. Instead f_c(k;d) and f_c^s(k;d), which become identical in the limit d→∞, converge to a non-Gaussian master function on the scale k ∼ d^(3/2). We find that the numerically determined value for the exponent parameter λ, and therefore also the critical exponents a and b, depend on the dimension d, even at the largest evaluated dimension d=800.

In the second part we compare the results of a molecular-dynamics simulation of liquid Lennard-Jones argon far away from the glass transition [D. Levesque, L. Verlet, and J. Kurkijärvi, Phys. Rev. A 7, 1690 (1973)] with MCT. We show that the agreement between theory and computer simulation can be improved by taking binary collisions into account [L. Sjögren, Phys. Rev. A 22, 2866 (1980)]. We find that an empirical prefactor of the memory function of the original MCT equations leads to similar results.

In the third part we derive the equations for a mode-coupling theory for the spherical components of the stress tensor. Unfortunately it turns out that they are too complex to be solved numerically.
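The kind of fixed-point problem solved throughout such MCT work can be illustrated with the schematic F2 model (a stand-in, not the d-dimensional hard-sphere memory kernel of the thesis): the nonergodicity parameter obeys f/(1-f) = v2 f², and iterating from f = 1 converges to the largest solution, which jumps from 0 to 1/2 at the critical coupling v2 = 4 (the idealized liquid-glass transition).

```python
def nonergodicity(v2: float, n_iter: int = 10_000, tol: float = 1e-12) -> float:
    """Largest solution of f/(1-f) = v2*f**2 via the standard fixed-point iteration."""
    f = 1.0
    for _ in range(n_iter):
        F = v2*f*f                     # schematic F2 memory kernel
        f_new = F/(1.0 + F)
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f

for v2 in (3.5, 4.0, 4.5, 6.0):
    print(f"v2 = {v2:>4}: f = {nonergodicity(v2):.4f}")
```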

Relevance: 40.00%

Publisher:

Abstract:

One of the fundamental interactions in the Standard Model of particle physics is the strong force, which can be formulated as a non-abelian gauge theory called Quantum Chromodynamics (QCD). In the low-energy regime, where the QCD coupling becomes strong and quarks and gluons are confined to hadrons, a perturbative expansion in the coupling constant is not possible. However, the introduction of a four-dimensional Euclidean space-time lattice allows for an ab initio treatment of QCD and provides a powerful tool to study the low-energy dynamics of hadrons. Some hadronic matrix elements of interest receive contributions from diagrams including quark-disconnected loops, i.e. disconnected quark lines from one lattice point back to the same point. The calculation of such quark loops is computationally very demanding, because it requires knowledge of the all-to-all propagator. In this thesis we use stochastic sources and a hopping parameter expansion to estimate such propagators. We apply this technique to study two problems which rely crucially on the calculation of quark-disconnected diagrams, namely the scalar form factor of the pion and the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon.

The scalar form factor of the pion describes the coupling of a charged pion to a scalar particle. We calculate the connected and the disconnected contribution to the scalar form factor for three different momentum transfers. The scalar radius of the pion is extracted from the momentum dependence of the form factor. The use of several different pion masses and lattice spacings allows for an extrapolation to the physical point. The chiral extrapolation is done using chiral perturbation theory ($\chi$PT). We find that our pion mass dependence of the scalar radius is consistent with $\chi$PT at next-to-leading order. Additionally, we are able to extract the low-energy constant $\ell_4$ from the extrapolation, and our result is in agreement with results from other lattice determinations. Furthermore, our result for the scalar pion radius at the physical point is consistent with a value that was extracted from $\pi\pi$-scattering data.

The hadronic vacuum polarization (HVP) is the leading-order hadronic contribution to the anomalous magnetic moment $a_\mu$ of the muon. The HVP can be estimated from the correlation of two vector currents in the time-momentum representation. We explicitly calculate the corresponding disconnected contribution to the vector correlator. We find that the disconnected contribution is consistent with zero within its statistical errors. This result can be converted into an upper limit for the maximum contribution of the disconnected diagram to $a_\mu$ by using the expected time dependence of the correlator and comparing it to the corresponding connected contribution. We find the disconnected contribution to be smaller than $\approx 5\%$ of the connected one. This value can be used as an estimate for the systematic error that arises from neglecting the disconnected contribution.
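The core of the stochastic-source technique is the noisy trace estimator Tr M⁻¹ ≈ (1/N) Σₙ ηₙ† M⁻¹ ηₙ with Z2 noise vectors, which is what makes the all-to-all propagator and hence disconnected quark loops affordable; the hopping parameter expansion used in the thesis is a separate variance-reduction refinement not shown here. A toy numpy sketch with a random Hermitian matrix standing in for the lattice Dirac operator:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n_noise = 200, 100

# stand-in for the Dirac operator: a well-conditioned Hermitian positive matrix
A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
M = A @ A.conj().T / N + np.eye(N)

# stochastic estimate of Tr(M^-1), i.e. the diagonal of the all-to-all propagator
# summed over the lattice; Z2 noise eta_i = +-1 satisfies E[eta eta^T] = identity
estimates = []
for _ in range(n_noise):
    eta = rng.choice([-1.0, 1.0], size=N)
    x = np.linalg.solve(M, eta)          # one inversion ("propagator solve") per source
    estimates.append(eta @ x)

est = np.mean(estimates).real
err = np.std(estimates) / np.sqrt(n_noise)
exact = np.trace(np.linalg.inv(M)).real  # affordable only for this tiny toy matrix

print(f"stochastic: {est:.3f} +/- {err:.3f}   exact: {exact:.3f}")
```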

Relevance: 40.00%

Publisher:

Abstract:

This thesis studies the stochastic dynamics of non-interacting particles on networks with finite transport capacity. The problem is addressed by introducing an operator formalism for the system. After verifying its consistency on analytically solvable models, this formalism is used to demonstrate the emergence of an entropic force acting on the particles, arising from the dynamical limitations of the network. In addition, a qualitative explanation is proposed for the effect of mutual attraction between empty nodes in the case of synchronous processes.
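A minimal simulation of the setting just described, under assumed simplifications (a ring network, a hard node capacity, synchronous proposals resolved in random order); it only illustrates how finite transport capacity constrains the particle flow, not the operator formalism or the entropic-force derivation of the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes, capacity, n_particles, steps = 20, 2, 30, 5000

# place particles on a ring, never exceeding the node capacity
position = np.empty(n_particles, int)
occupancy = np.zeros(n_nodes, int)
for k in range(n_particles):
    while True:
        s = rng.integers(0, n_nodes)
        if occupancy[s] < capacity:
            position[k] = s
            occupancy[s] += 1
            break

for t in range(steps):
    # synchronous update: every particle proposes a hop to a random neighbour;
    # conflicts are resolved in random order and hops to full nodes are rejected
    proposals = (position + rng.choice([-1, 1], n_particles)) % n_nodes
    for k in rng.permutation(n_particles):
        tgt = proposals[k]
        if occupancy[tgt] < capacity:
            occupancy[position[k]] -= 1
            occupancy[tgt] += 1
            position[k] = tgt

print("mean occupancy:", occupancy.mean(), " empty nodes:", (occupancy == 0).sum())
```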

Relevance: 40.00%

Publisher:

Abstract:

Alternans of cardiac action potential duration (APD) is a well-known arrhythmogenic mechanism which results from dynamical instabilities. The propensity to alternans is classically investigated by examining APD restitution and by deriving APD restitution slopes as predictive markers. However, experiments have shown that such markers are not always accurate for the prediction of alternans. Using a mathematical ventricular cell model known to exhibit unstable dynamics of both membrane potential and Ca2+ cycling, we demonstrate that an accurate marker can be obtained by pacing at cycle lengths (CLs) varying randomly around a basic CL (BCL) and by evaluating the transfer function between the time series of CLs and APDs using an autoregressive-moving-average (ARMA) model. The first pole of this transfer function corresponds to the eigenvalue (λ_alt) of the dominant eigenmode of the cardiac system, which predicts that alternans occurs when λ_alt ≤ −1. For different BCLs, control values of λ_alt were obtained using eigenmode analysis and compared to the first pole of the transfer function estimated using ARMA model fitting in simulations of random pacing protocols. In all versions of the cell model, this pole provided an accurate estimation of λ_alt. Furthermore, during slow ramp decreases of BCL or simulated drug application, this approach predicted the onset of alternans by extrapolating the time course of the estimated λ_alt. In conclusion, stochastic pacing and ARMA model identification represent a novel approach to predict alternans without making any assumptions about its ionic mechanisms. It should therefore be applicable experimentally for any type of myocardial cell.
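A linearized toy version of the identification idea, in which an exponential restitution map stands in for the ionic ventricular cell model of the study: cycle lengths are varied randomly around the BCL, APDs are generated beat by beat, and a least-squares fit of δAPD(n+1) on (δAPD(n), δCL(n)) recovers the dominant pole, approximating λ_alt; alternans is predicted when the estimate approaches −1. Restitution parameters and pacing noise below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def apd_restitution(di, a_max=300.0, b=120.0, tau=60.0):
    # toy exponential APD restitution curve (ms); the slope grows as DI shrinks
    return a_max - b*np.exp(-di/tau)

bcl, n_beats = 400.0, 2000
cl = bcl + rng.uniform(-20, 20, n_beats)       # stochastic pacing around the BCL
apd = np.empty(n_beats)
apd[0] = 250.0
for n in range(n_beats - 1):
    di = cl[n] - apd[n]                         # diastolic interval of beat n
    apd[n + 1] = apd_restitution(di)

# least-squares fit of the linearized beat-to-beat map:
# dAPD(n+1) ~ lam*dAPD(n) + g*dCL(n)
dapd, dcl = apd - apd.mean(), cl - cl.mean()
X = np.column_stack([dapd[:-1], dcl[:-1]])
coef, *_ = np.linalg.lstsq(X, dapd[1:], rcond=None)
lam = coef[0]
print(f"estimated dominant eigenvalue: {lam:.3f} (alternans predicted if <= -1)")
```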

Relevance: 40.00%

Publisher:

Abstract:

Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes. An emerging field combines node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function will require an understanding of networks, dynamics, and neurophysiology. It is with this work that modeling techniques are developed to work at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, i.e. the way in which species evolve over time. Spatial game theory offers a two-dimensional view of analyzing the state of neighbors and updating based on the surroundings. Our work builds upon this foundation by studying evolutionary game theory networks with respect to neural networks. The novel concept is that neurons may adopt a particular strategy that allows propagation of information; the strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategies of its neighbors. The techniques of spatial game theory already established by Nowak are reproduced to explain two basic cases and validate the implementation of the code. Two novel modifications are introduced in Chapters 3 and 4 that build on this network and may reflect neural networks. Results: The introduction of the two novel modifications, mutation and rewiring, in large parametric studies resulted in dynamics in which an intermediate number of nodes fire at any given time. Further, even small mutation rates result in different dynamics, more representative of the hypothesized ideal state. Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
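The baseline case referred to above is the Nowak-May spatial prisoner's dilemma: players on a lattice earn payoffs against their neighbours and then deterministically imitate the best-performing neighbour. A compact sketch is given below (lattice size, temptation payoff b, the 50/50 initial mix and the omission of self-interaction are assumptions); the thesis's mutation and rewiring modifications would correspond to occasionally flipping strategies at random and rewiring edges after each update, neither of which is implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)
L, b = 50, 1.85                                   # lattice size, temptation payoff
strategy = (rng.random((L, L)) < 0.5).astype(int)  # 1 = cooperate, 0 = defect

def neighbour_offsets():
    return [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]

for step in range(50):
    # cooperating neighbours of every site (Moore neighbourhood, periodic boundaries)
    coop_nb = sum(np.roll(np.roll(strategy, i, 0), j, 1) for i, j in neighbour_offsets())
    # payoff: cooperators earn 1 per cooperating neighbour, defectors earn b
    payoff = np.where(strategy == 1, coop_nb, b*coop_nb)
    # each site imitates the strategy of its best-performing neighbour (or keeps its own)
    best_payoff = payoff.copy()
    best_strategy = strategy.copy()
    for i, j in neighbour_offsets():
        p = np.roll(np.roll(payoff, i, 0), j, 1)
        s = np.roll(np.roll(strategy, i, 0), j, 1)
        better = p > best_payoff
        best_payoff = np.where(better, p, best_payoff)
        best_strategy = np.where(better, s, best_strategy)
    strategy = best_strategy

print("final cooperator fraction:", strategy.mean())
```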

Relevance: 40.00%

Publisher:

Abstract:

High-density oligonucleotide expression arrays are a widely used tool for the measurement of gene expression on a large scale. Affymetrix GeneChip arrays appear to dominate this market. These arrays use short oligonucleotides to probe for genes in an RNA sample. Due to optical noise, non-specific hybridization, probe-specific effects, and measurement error, ad hoc measures of expression that summarize probe intensities can lead to imprecise and inaccurate results. Various researchers have demonstrated that expression measures based on simple statistical models can provide great improvements over the ad hoc procedure offered by Affymetrix. Recently, physical models based on molecular hybridization theory have been proposed as useful tools for the prediction of, for example, non-specific hybridization. These physical models show great potential in terms of improving existing expression measures. In this paper we demonstrate that the system producing the measured intensities is too complex to be fully described with these relatively simple physical models, and we propose empirically motivated stochastic models that complement the above-mentioned molecular hybridization theory to provide a comprehensive description of the data. We discuss how the proposed model can be used to obtain improved measures of expression useful to data analysts.
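As an illustration of the kind of probe-level summarization involved, the sketch below simulates toy probe intensities as specific signal plus additive background, applies a crude background correction, and summarizes each array with a median-polish fit of log2 intensities (an RMA-like additive model: probe effect + chip effect + residual). It is a generic illustration under stated assumptions, not the stochastic model proposed in the paper.

```python
import numpy as np

def median_polish(x, n_iter=10):
    """Additive fit of log2 intensities: probe effect + chip effect + residual."""
    r = x.copy()
    row = np.zeros(x.shape[0])        # probe effects (absorb the overall level)
    col = np.zeros(x.shape[1])        # relative chip (expression) effects
    for _ in range(n_iter):
        m = np.median(r, axis=1); row += m; r -= m[:, None]
        m = np.median(r, axis=0); col += m; r -= m[None, :]
    return row, col, r

rng = np.random.default_rng(2)
n_probes, n_chips = 11, 6
true_expr = rng.uniform(6, 12, n_chips)        # "true" log2 expression per chip
probe_eff = rng.normal(0, 0.5, n_probes)       # probe-specific affinities
# toy probe-level intensities: specific signal plus additive optical/non-specific background
pm = 2**(true_expr[None, :] + probe_eff[:, None]) + rng.exponential(50, (n_probes, n_chips))

background = 30.0                              # crude assumed additive background level
corrected = np.maximum(pm - background, 1.0)
row, col, _ = median_polish(np.log2(corrected))
expression = col + np.median(row)              # per-chip log2 expression estimates

print("estimated:", np.round(expression, 2))
print("truth:    ", np.round(true_expr, 2))
```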