869 results for Random walk


Relevance: 60.00%

Abstract:

E. coli performs chemotaxis by executing a biased random walk composed of alternating periods of swimming (runs) and reorientations (tumbles). Tumbles are typically modelled as complete directional randomisations, but it is known that in wild-type E. coli, successive run directions are actually weakly correlated, with a mean directional difference of ∼63°. We recently presented a model of the evolution of chemotactic swimming strategies in bacteria which is able to quantitatively reproduce the emergence of this correlation. The agreement between model and experiments suggests that directional persistence may serve some function, a hypothesis supported by the results of an earlier model. Here we investigate the effect of persistence on chemotactic efficiency, using a spatial Monte Carlo model of bacterial swimming in a gradient, combined with simulations of natural selection based on chemotactic efficiency. A direct search of the parameter space reveals two attractant gradient regimes: (a) a low-gradient regime, in which efficiency is unaffected by directional persistence, and (b) a high-gradient regime, in which persistence can improve chemotactic efficiency. The value of the persistence parameter that maximises this effect corresponds very closely to the value observed experimentally. This result is matched by independent simulations of the evolution of directional memory in a population of model bacteria, which also predict the emergence of persistence in high-gradient conditions. The relationship between optimality and persistence in different environments may reflect a universal property of random-walk foraging algorithms, which must strike a compromise between two competing aims: exploration and exploitation. We also present a new general graphical way to illustrate the evolution of a particular trait in a population, in terms of variations in an evolvable parameter.
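
A minimal sketch of a persistent run-and-tumble walk of the kind discussed above, assuming tumble angles drawn from a zero-mean normal distribution whose scale is set so that the mean absolute deflection matches the ∼63° experimental figure; this tumble-angle model and all parameter values are illustrative, not the authors' spatial Monte Carlo model.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_and_tumble(n_runs, mean_deflection_deg=63.0, run_len=1.0):
    """2D run-and-tumble walk with directional persistence (hypothetical model)."""
    # E|N(0, sigma)| = sigma*sqrt(2/pi), so choose sigma to hit the target deflection.
    sigma = np.deg2rad(mean_deflection_deg) / np.sqrt(2.0 / np.pi)
    pos = np.zeros((n_runs + 1, 2))
    theta = rng.uniform(0.0, 2.0 * np.pi)
    for i in range(n_runs):
        pos[i + 1] = pos[i] + run_len * np.array([np.cos(theta), np.sin(theta)])
        theta += rng.normal(0.0, sigma)  # persistent tumble: directions stay correlated
    return pos

walk = run_and_tumble(1000)  # mean |deflection| ~ 63 degrees, as in wild-type E. coli
```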

Relevance: 60.00%

Abstract:

Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
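
A toy illustration of the ABC rejection idea described above: sample the diffusivity D from a uniform prior, push it through a forward model, and keep draws whose simulated leading-edge position lands near the observation. The square-root-of-time forward model, the numbers, and the tolerance are all stand-in assumptions, not the paper's discrete spreading model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_edge(D, t):
    """Toy forward model: leading edge advances diffusively, plus observation noise."""
    return np.sqrt(4.0 * D * t) + rng.normal(0.0, 5.0)

observed, t_obs, tol = 120.0, 24.0, 10.0   # hypothetical edge position (um), time (h)

# ABC rejection: accept prior draws whose simulated summary is within `tol` of data.
prior = rng.uniform(0.0, 500.0, size=100_000)
post = np.array([D for D in prior if abs(simulate_edge(D, t_obs) - observed) < tol])
print(f"posterior mean D = {post.mean():.1f}, CV = {post.std() / post.mean():.2%}")
```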

Relevance: 60.00%

Abstract:

In some delay-tolerant communication systems, such as vehicular ad-hoc networks, information flow can be represented as an infectious process, where each entity that has already received the information tries to share it with its neighbours. The random walk and random waypoint models are popular analysis tools for these epidemic broadcasts, and represent two types of random mobility. In this paper, we introduce a simulation framework investigating the impact of a gradual increase of bias in path selection (i.e. a reduction of randomness) when moving from the former to the latter. Randomness in path selection can significantly alter system performance, in both regular and irregular network structures. The implications of these results for real systems are discussed in detail.
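
One simple way to realise a "gradual increase of bias in path selection" is to let a walker move toward a waypoint with probability `bias` and to a uniformly random neighbour otherwise, so that bias = 0 recovers the random walk and bias = 1 approaches waypoint-directed motion. This interpolation and its parameters are an illustrative stand-in for the paper's simulation framework.

```python
import numpy as np

rng = np.random.default_rng(2)

def biased_walk(n_steps, bias, size=50):
    """Walker on a periodic grid, mixing random steps with waypoint-seeking steps."""
    pos = np.array([0, 0])
    waypoint = rng.integers(0, size, 2)
    moves = [np.array(m) for m in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    for _ in range(n_steps):
        if rng.random() < bias:
            step = np.sign(waypoint - pos)   # head for the waypoint (diagonals allowed for brevity)
            if (pos == waypoint).all():      # reached it: draw a new one
                waypoint = rng.integers(0, size, 2)
        else:
            step = moves[rng.integers(4)]    # uniform random neighbour
        pos = (pos + step) % size            # periodic boundaries
    return pos

final = biased_walk(10_000, bias=0.3)
```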

Relevance: 60.00%

Abstract:

Collective cell spreading is frequently observed in development, tissue repair and disease progression. Mathematical modelling used in conjunction with experimental investigation can provide key insights into the mechanisms driving the spread of cell populations. In this study, we investigated how experimental and modelling frameworks can be used to identify several key features underlying collective cell spreading. In particular, we were able to independently quantify the roles of cell motility and cell proliferation in a spreading cell population, and investigate how these roles are influenced by factors such as the initial cell density, type of cell population and the assay geometry.

Relevance: 60.00%

Abstract:

We demonstrate the phenomenon of self-organized criticality (SOC) in a simple model based on the random walk of a myopic ant, i.e., a walker who can see only its nearest neighbours. The ant acts on the underlying lattice, aiming at uniform digging, i.e., reduction of the height profile of the surface, but its motion is unaffected by the underlying lattice. We have explored this model in one, two, and three dimensions and have obtained power laws in the time intervals between consecutive "digging" events. The walk being a simple random walk, the power laws in space translate to power laws in time. We also study the finite-size scaling of the asymptotic scale-invariant process, as well as dynamic scaling in this system. This model differs qualitatively from the cascade models of SOC.
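
A one-dimensional sketch of the myopic-ant idea: a simple random walker that lowers the surface at its current site and records waiting times between digging events. The digging criterion used here (dig when the local column is higher than its lowest neighbour) is a guess at "uniform digging" for illustration; the paper's exact rule may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

L = 200
height = rng.integers(1, 10, size=L)       # initial rough height profile
pos, last_dig = L // 2, 0
intervals = []

for t in range(1, 100_000):
    pos = (pos + rng.choice((-1, 1))) % L  # the walk is unaffected by the heights
    if height[pos] > min(height[(pos - 1) % L], height[(pos + 1) % L]):
        height[pos] -= 1                   # "dig": lower the surface by one unit
        intervals.append(t - last_dig)
        last_dig = t

# Look for a power law in the waiting times between consecutive digging events.
counts, edges = np.histogram(intervals, bins=np.logspace(0, 4, 25))
```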

Relevance: 60.00%

Abstract:

We study, by means of experiments and Monte Carlo simulations, the scattering of light in random media, to determine the distance up to which photons travel along almost undeviated paths within a scattering medium, and are therefore capable of casting a shadow of an opaque inclusion embedded within the medium. Such photons are isolated by polarisation discrimination, wherein the plane of linear polarisation of the input light is continuously rotated and the polarisation-preserving component of the emerging light is extracted by means of a Fourier transform. This technique is a software implementation of lock-in detection. We find that images may be recovered to a depth far in excess of that predicted by the diffusion theory of photon propagation. To understand our experimental results, we perform Monte Carlo simulations to model the random-walk behaviour of the multiply scattered photons. We present a new definition of a diffusing photon in terms of the memory of its initial direction of propagation, which we then quantify in terms of an angular correlation function. This redefinition yields the penetration depth of the polarisation-preserving photons. Based on these results, we have formulated a model of shadow formation in a turbid medium, the predictions of which are in good agreement with our experimental results.
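
The "memory of initial direction" can be quantified by tracking the mean cosine of the angle between each photon's current and initial propagation directions as scattering events accumulate. The sketch below assumes Henyey-Greenstein scattering (a standard phase function in photon-transport Monte Carlo, not necessarily the one used by the authors) with the usual direction-update formulas.

```python
import numpy as np

rng = np.random.default_rng(4)

def scatter(d, g):
    """One Henyey-Greenstein scattering event (standard direction update)."""
    u = rng.random()
    ct = (1 + g * g - ((1 - g * g) / (1 - g + 2 * g * u)) ** 2) / (2 * g)
    st = np.sqrt(max(0.0, 1 - ct * ct))
    phi = rng.uniform(0, 2 * np.pi)
    ux, uy, uz = d
    if abs(uz) > 0.99999:                  # direction nearly parallel to the z-axis
        return np.array([st * np.cos(phi), st * np.sin(phi), np.sign(uz) * ct])
    den = np.sqrt(1 - uz * uz)
    return np.array([
        st * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / den + ux * ct,
        st * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / den + uy * ct,
        -st * np.cos(phi) * den + uz * ct,
    ])

# Angular correlation <cos(angle to initial direction)> versus scattering order.
n_photons, n_events, g = 2000, 30, 0.9
corr = np.zeros(n_events)
for _ in range(n_photons):
    d0 = np.array([0.0, 0.0, 1.0])
    d = d0.copy()
    for s in range(n_events):
        d = scatter(d, g)
        corr[s] += d @ d0
corr /= n_photons                          # decays roughly as g**(s+1)
```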

Relevance: 60.00%

Abstract:

Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a linear program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold-standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational–experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
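
The core regression step can be sketched as LASSO-based neighbourhood selection: regress each gene's profile on all the others with an l1 penalty and propose an undirected edge for every nonzero coefficient. The sketch uses scikit-learn's coordinate-descent Lasso on toy data for brevity; the paper instead solves the LASSO problem as a linear program.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

# Toy transcript-profile matrix: m samples x n genes (standardised values).
m, n = 50, 200
X = rng.standard_normal((m, n))
X[:, 0] = 0.8 * X[:, 1] + 0.2 * rng.standard_normal(m)   # plant one true association

edges = set()
for j in range(n):
    others = np.delete(np.arange(n), j)
    fit = Lasso(alpha=0.1).fit(X[:, others], X[:, j])    # sparse linear regression
    for k, coef in zip(others, fit.coef_):
        if abs(coef) > 1e-6:
            edges.add(frozenset((j, k)))                 # undirected edge j -- k

print(len(edges), frozenset((0, 1)) in edges)            # planted edge is recovered
```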

Relevance: 60.00%

Abstract:

Diffusive transport is a universal phenomenon throughout both the biological and physical sciences, and models of diffusion are routinely used to interrogate diffusion-driven processes. However, most models neglect the role of volume exclusion, which can significantly alter diffusive transport, particularly within biological systems where the diffusing particles may occupy a significant fraction of the available space. In this work we use a random walk approach to provide a means to reconcile models that incorporate crowding effects on different spatial scales. Our work demonstrates that coarse-grained models incorporating simplified descriptions of excluded volume can be used in many circumstances, but that care must be taken not to push the coarse-graining process too far.
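
At the finest scale, volume exclusion is often modelled as a lattice random walk in which a move is simply aborted when the target site is occupied. The sketch below is a minimal such exclusion process; the lattice size, occupancy, and step count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

L, n_agents, n_steps = 100, 2000, 500
occupied = np.zeros((L, L), dtype=bool)
agents = []
while len(agents) < n_agents:                   # place agents without overlap
    x, y = rng.integers(0, L, 2)
    if not occupied[x, y]:
        occupied[x, y] = True
        agents.append((x, y))

moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
for _ in range(n_steps):
    for i, (x, y) in enumerate(agents):
        dx, dy = moves[rng.integers(4)]
        nx, ny = (x + dx) % L, (y + dy) % L     # periodic boundaries
        if not occupied[nx, ny]:                # exclusion: abort blocked moves
            occupied[x, y] = False
            occupied[nx, ny] = True
            agents[i] = (nx, ny)
```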

Relevance: 60.00%

Abstract:

This thesis studies the informational efficiency of the European Union emission allowance (EUA) market. In an efficient market, the market price is unpredictable and above-average profits are impossible in the long run. The main research question is whether the EUA price follows a random walk. The method is an econometric analysis of the price series, which includes an autocorrelation coefficient test and a variance ratio test. The results reveal that the price series is autocorrelated and therefore not a random walk. In order to find out the extent of predictability, the price series is modelled with an autoregressive model. The conclusion is that the EUA price is autocorrelated only to a small degree and that the predictability cannot be used to make extra profits. The EUA market is therefore considered informationally efficient, although the price series does not fulfill the requirements of a random walk. A market review supports the conclusion, but it is clear that the maturing of the market is still in progress.
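
A variance ratio test compares the variance of q-period returns with q times the variance of one-period returns; under a random walk the ratio is close to one. The sketch below is the simple (homoskedastic) version of the statistic, demonstrated on simulated prices rather than actual EUA data, and omits the asymptotic test statistic used for formal inference.

```python
import numpy as np

def variance_ratio(prices, q):
    """Var of q-period log returns over q times Var of 1-period log returns."""
    r1 = np.diff(np.log(prices))
    rq = np.log(prices[q:]) - np.log(prices[:-q])
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(7)
prices = np.exp(np.cumsum(rng.normal(0.0, 0.02, 2000)))  # a true random walk
print([round(variance_ratio(prices, q), 3) for q in (2, 5, 10)])  # all near 1
```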

Relevance: 60.00%

Abstract:

Many biological environments are crowded by macromolecules, organelles and cells which can impede the transport of other cells and molecules. Previous studies have sought to describe these effects using either random walk models or fractional order diffusion equations. Here we examine the transport of both a single agent and a population of agents through an environment containing obstacles of varying size and shape, whose relative densities are drawn from a specified distribution. Our simulation results for a single agent indicate that smaller obstacles are more effective at retarding transport than larger obstacles; these findings are consistent with our simulations of the collective motion of populations of agents. In an attempt to explore whether these kinds of stochastic random walk simulations can be described using a fractional order diffusion equation framework, we calibrate the solution of such a differential equation to our averaged agent density information. Our approach suggests that these kinds of commonly used differential equation models ought to be used with care since we are unable to match the solution of a fractional order diffusion equation to our data in a consistent fashion over a finite time period.
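
The single-agent simulations can be sketched as a lattice walk among immobile obstacles: at a fixed covered-area fraction, many small obstacles fragment the free space more than a few large ones and so retard the walker more, consistent with the finding above. The lattice size, density, and square obstacle shapes below are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(8)

def msd(obstacle_fraction, obstacle_size, L=200, n_steps=2000):
    """Squared displacement of one walker among square obstacles (one realisation)."""
    blocked = np.zeros((L, L), dtype=bool)
    for _ in range(int(obstacle_fraction * L * L / obstacle_size**2)):
        x, y = rng.integers(0, L - obstacle_size + 1, 2)
        blocked[x:x + obstacle_size, y:y + obstacle_size] = True
    start = np.array([L // 2, L // 2])
    blocked[tuple(start)] = False
    pos = start.copy()
    moves = [np.array(m) for m in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    for _ in range(n_steps):
        new = pos + moves[rng.integers(4)]
        if (new >= 0).all() and (new < L).all() and not blocked[tuple(new)]:
            pos = new                              # blocked moves are aborted
    return ((pos - start) ** 2).sum()

# Average over realisations: smaller obstacles at equal coverage hinder more.
print(np.mean([msd(0.3, 1) for _ in range(50)]),
      np.mean([msd(0.3, 4) for _ in range(50)]))
```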

Relevance: 60.00%

Abstract:

We have investigated the impact of dissipationless minor galaxy mergers on the angular momentum of the remnant. Our simulations cover a range of initial orbital characteristics, and the system consists of a massive galaxy with a bulge and disk merging with a much less massive (one-tenth or one-twentieth) gasless companion that has a variety of morphologies (disk- or elliptical-like) and central baryonic mass concentrations. During the merger, the orbital angular momentum is redistributed into the internal angular momentum of the final system; the internal angular momentum of the primary galaxy can increase or decrease depending on the relative orientation of the orbital spin vectors (direct or retrograde), while the initially non-rotating dark matter halo always gains angular momentum. The specific angular momentum of the stellar component always decreases, independently of the orbital parameters or morphology of the satellite; the decrease in the rotation velocity of the primary galaxy is accompanied by a change in the anisotropy of the orbits, and the ratio of rotation speed to velocity dispersion of the merger remnant is lower than the initial value, not only because of an increase in the dispersion but also because of the slowing-down of the disk rotation. We briefly discuss several astrophysical implications of these results, suggesting that minor mergers do not cause a "random walk" of the angular momentum of the stellar disk component of galaxies, but rather a steady decrease. Minor mergers may play a role in producing the large scatter observed in the Tully-Fisher relation for S0 galaxies, as well as in the increase of the velocity dispersion and the decrease in v/σ at large radii observed in S0 galaxies.

Relevance: 60.00%

Abstract:

In this thesis we deal with the concept of risk. The objective is to bring together and draw conclusions from some normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. The serial dependency behaves differently in bull and bear markets, however: it is strongly positive in rising markets, whereas in bear markets it is closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is non-stationary from time to time. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates, and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we can conclude that volatility is not easily estimated, even from high-frequency data. It is neither very well behaved in terms of stability nor in terms of dependency over time. Based on these observations, we recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
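
Realized variance itself is just the sum of squared intraday log returns; the microstructure bias examined in the third essay can be seen by adding i.i.d. noise to a simulated efficient price, which induces return autocorrelation and inflates the estimate. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def realized_variance(intraday_prices):
    """Sum of squared intraday log returns."""
    r = np.diff(np.log(intraday_prices))
    return np.sum(r ** 2)

sigma_daily = 0.01                                  # assumed daily volatility
steps = 390                                         # one-minute sampling grid
efficient = np.exp(np.cumsum(rng.normal(0.0, sigma_daily / np.sqrt(steps), steps)))
noisy = efficient * np.exp(rng.normal(0.0, 5e-4, steps))   # microstructure noise

# Noise inflates the estimate well above the true daily variance sigma_daily**2.
print(realized_variance(efficient), realized_variance(noisy), sigma_daily**2)
```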

Relevance: 60.00%

Abstract:

Ever since its introduction some fifty years ago, the rational expectations paradigm has dominated the way economic theory handles uncertainty. The main assertion made by John F. Muth (1961), seen by many as the father of the paradigm, is that the expectations of rational economic agents should essentially be equal to the predictions of relevant economic theory, since rational agents should use the information available to them in an optimal way. This assumption often has important consequences for the results and interpretations of the models where it is applied. Although the rational expectations assumption can be applied to virtually any economic theory, the focus in this thesis is on macroeconomic theories of consumption, especially the Rational Expectations–Permanent Income Hypothesis proposed by Robert E. Hall in 1978. The much-debated theory suggests that, assuming agents have rational expectations about their future income, consumption decisions should follow a random walk, and the best forecast of the future consumption level is the current consumption level. Changes in consumption are then unforecastable. This thesis constructs an empirical test of the Rational Expectations–Permanent Income Hypothesis using Finnish Consumer Survey data as well as various Finnish macroeconomic data. The data sample covers the years 1995–2010. Consumer survey data may be interpreted as directly representing household expectations, which makes it an interesting tool for this particular test. The variable to be predicted is the growth of total household consumption expenditure. The main empirical result is that the Consumer Confidence Index (CCI), a balance figure computed from the most important consumer survey responses, does have statistically significant predictive power over the change in total consumption expenditure. The history of consumption expenditure growth itself, however, fails to predict its own future values. This indicates that the CCI contains some information that the history of consumption decisions does not, and that consumption decisions are not optimal in the theoretical context. However, when conditioned on various macroeconomic variables, the CCI loses its predictive ability. This finding suggests that the index is merely a (partial) summary of macroeconomic information, and does not contain any significant private information on the consumption intentions of households not directly deducible from the objective economic variables. In conclusion, the Rational Expectations–Permanent Income Hypothesis is strongly rejected by the empirical results in this thesis. This result is in accordance with most earlier studies on the topic.
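
The predictive test can be sketched as a regression of consumption growth on the lagged CCI and on its own lag; under the hypothesis both slope coefficients should be indistinguishable from zero. The simulated stand-in data below (where the CCI genuinely leads growth) merely illustrates the mechanics, not the Finnish series or the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(10)

T = 64                                       # ~16 years of quarterly data (assumed)
cci = rng.normal(0.0, 1.0, T)
noise = rng.normal(0.0, 1.0, T)
dc = np.empty(T)                             # consumption expenditure growth
dc[0] = noise[0]
dc[1:] = 0.3 * cci[:-1] + noise[1:]          # planted predictability via lagged CCI

# OLS of dc_t on a constant, CCI_{t-1} and dc_{t-1}.
X = np.column_stack([np.ones(T - 1), cci[:-1], dc[:-1]])
beta, *_ = np.linalg.lstsq(X, dc[1:], rcond=None)
print(beta)   # under the REPIH, both slope coefficients should be ~0
```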

Relevance: 60.00%

Abstract:

The statistical properties of fractional Brownian walks are used to construct a path integral representation of the conformations of polymers with different degrees of bond correlation. We specifically derive an expression for the distribution function of the chains' end-to-end distance, and evaluate it by several independent methods, including direct evaluation of the discrete limit of the path integral, decomposition into normal modes, and solution of a partial differential equation. The distribution function is found to be Gaussian in the spatial coordinates of the monomer positions, as in the random walk description of the chain, but the contour variables, which specify the location of the monomer along the chain backbone, now depend on an index h, the degree of correlation of the fractional Brownian walk. The special case h=1/2 corresponds to the random walk. In constructing the normal mode picture of the chain, we conjecture the existence of a theorem regarding the zeros of the Bessel function.
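
The claimed Gaussianity and the h-dependent scaling of the end-to-end distance can be checked numerically by generating fractional Brownian paths from a Cholesky factor of their covariance; this verification sketch is not one of the paper's analytical methods.

```python
import numpy as np

rng = np.random.default_rng(11)

def fbm_endpoints(n_paths, n_steps, h):
    """Endpoints of fractional Brownian paths, built from the exact covariance
    0.5*(s^{2h} + t^{2h} - |t-s|^{2h}) via Cholesky factorisation."""
    t = np.arange(1, n_steps + 1, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * h) + t[None, :] ** (2 * h)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * h))
    chol = np.linalg.cholesky(cov)
    return (rng.standard_normal((n_paths, n_steps)) @ chol.T)[:, -1]

# Endpoint variance scales as N**(2h); h = 1/2 recovers the ordinary random walk.
for h in (0.25, 0.5, 0.75):
    ends = fbm_endpoints(5000, 256, h)
    print(h, ends.var() / 256 ** (2 * h))   # ratio ~ 1 for each h
```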

Relevance: 60.00%

Abstract:

The statistically steady humidity distribution resulting from an interaction of advection, modelled as an uncorrelated random walk of moist parcels on an isentropic surface, and a vapour sink, modelled as immediate condensation whenever the specific humidity exceeds a specified saturation humidity, is explored with theory and simulation. A source supplies moisture at the deep-tropical southern boundary of the domain, and the saturation humidity is specified as a monotonically decreasing function of distance from the boundary. The boundary source balances the interior condensation sink, so that a stationary, spatially inhomogeneous humidity distribution emerges. An exact solution of the Fokker-Planck equation delivers a simple expression for the resulting probability density function (PDF) of the water-vapour field, and also of the relative humidity. This solution agrees completely with a numerical simulation of the process, and the humidity PDF exhibits several features of interest, such as bimodality close to the source and unimodality further from the source. The PDFs of specific and relative humidity are broad and non-Gaussian. The domain-averaged relative humidity PDF is bimodal, with distinct moist and dry peaks, a feature which we show agrees with middleworld isentropic PDFs derived from the ERA-Interim dataset.
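
A minimal simulation of the advection-condensation process described above: parcels random-walk on [0, 1], resaturate when they hit the moist boundary, and instantly condense down to the local saturation value elsewhere. The exponential saturation profile and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(12)

n, n_steps, dy = 10_000, 20_000, 0.01
q_sat = lambda y: np.exp(-3.0 * y)       # saturation humidity, largest at the source (y=0)
y = rng.random(n)                        # parcel positions
q = q_sat(y)                             # start saturated

for _ in range(n_steps):
    y += dy * rng.choice((-1.0, 1.0), n) # uncorrelated random walk on the isentrope
    hit = y < 0.0
    y[hit] = -y[hit]                     # reflect at the moist (deep-tropical) boundary
    q[hit] = q_sat(0.0)                  # boundary source: parcels leave saturated
    far = y > 1.0
    y[far] = 2.0 - y[far]                # reflect at the dry boundary
    np.minimum(q, q_sat(y), out=q)       # immediate condensation above saturation

rh = q / q_sat(y)                        # histogram q and rh to recover the PDFs
```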