869 results for Random walk


Relevance:

60.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: Primary 60J45, 60J50, 35Cxx; Secondary 31Cxx.

Relevance:

60.00%

Publisher:

Abstract:

In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence sampler and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
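
To make the proposal structure concrete, here is a minimal, hypothetical sketch of a Metropolis-Hastings step that mixes an independence proposal drawn from a fixed Gaussian approximation with a symmetric random-walk proposal, applied to a toy double-well target rather than a posterior over diffusion paths; the target, approximation parameters, and tuning constants are all illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Toy double-well log-density standing in for the path posterior."""
    return -(x**2 - 1.0)**2 / 0.5

# Parameters of a Gaussian approximation to the target (assumed known here;
# in the paper's setting it would come from the variational approximation).
approx_mean, approx_std = 0.0, 1.0

def log_approx(x):
    return -0.5 * ((x - approx_mean) / approx_std)**2 - np.log(approx_std)

def mh_mixture(n_iter=20000, p_indep=0.5, rw_scale=0.3):
    x = 0.0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        if rng.random() < p_indep:
            # Independence proposal drawn from the approximating Gaussian.
            prop = rng.normal(approx_mean, approx_std)
            log_alpha = (log_target(prop) - log_target(x)
                         + log_approx(x) - log_approx(prop))
        else:
            # Symmetric random-walk proposal.
            prop = x + rw_scale * rng.normal()
            log_alpha = log_target(prop) - log_target(x)
        if np.log(rng.random()) < log_alpha:
            x = prop
        samples[i] = x
    return samples

samples = mh_mixture()
print("posterior mean ~", samples.mean())
```

In the paper's setting the independence proposal would instead be a diffusion bridge drawn from the variational Gaussian process approximation over blocks of the path, which is what enables the blocking strategy described above.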

Relevance:

60.00%

Publisher:

Abstract:

This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies. © 2010 Elsevier B.V. All rights reserved.
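
As a rough illustration of the forecasting comparison described above, the sketch below fits a kernel (RBF) ridge autoregression to a synthetic series and compares its one-step-ahead RMSE against the naive random-walk (no-change) forecast. The data, lag order, and kernel settings are hypothetical, and kernel ridge regression is a simplified stand-in for the recurrent-network and kernel recursive least squares methods used in the study.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)

# Hypothetical synthetic "inflation" series with a mild nonlinear AR structure.
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t-1] + 0.2 * np.tanh(y[t-1]) + 0.3 * rng.normal()

# One-step-ahead forecasts from a kernel (RBF) autoregression vs. a naive
# random-walk forecast y_hat[t+1] = y[t].
p = 3                                        # number of lags used as predictors
X = np.column_stack([y[p-k-1:T-k-1] for k in range(p)])
target = y[p:]

split = 300
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5)
model.fit(X[:split - p], target[:split - p])

pred_kernel = model.predict(X[split - p:])
pred_rw = y[split - 1:T - 1]                 # no-change forecasts
actual = y[split:]

def rmse(err):
    return float(np.sqrt(np.mean(err**2)))

print("kernel AR RMSE:   ", rmse(actual - pred_kernel))
print("random walk RMSE: ", rmse(actual - pred_rw))
```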

Relevance:

60.00%

Publisher:

Abstract:

Exchange rate economics has achieved substantial development in the past few decades. Despite extensive research, a large number of unresolved problems remain in the exchange rate debate. This dissertation studied three puzzling issues aiming to improve our understanding of exchange rate behavior. Chapter Two used advanced econometric techniques to model and forecast exchange rate dynamics. Chapter Three and Chapter Four studied issues related to exchange rates using the theory of New Open Economy Macroeconomics.

Chapter Two empirically examined the short-run forecastability of nominal exchange rates. It analyzed important empirical regularities in daily exchange rates. Through a series of hypothesis tests, a best-fitting fractionally integrated GARCH model with a skewed Student-t error distribution was identified. The forecasting performance of the model was compared with that of a random walk model. Results supported the contention that nominal exchange rates seem to be unpredictable over the short run, in the sense that the best-fitting model cannot beat the random walk model in forecasting exchange rate movements.

Chapter Three assessed the ability of dynamic general-equilibrium sticky-price monetary models to generate volatile foreign exchange risk premia. It developed a tractable two-country model where agents face a cash-in-advance constraint and set prices to the local market; the exogenous money supply process exhibits time-varying volatility. The model yielded approximate closed-form solutions for risk premia and real exchange rates. Numerical results provided quantitative evidence that volatile risk premia can endogenously arise in a new open economy macroeconomic model. Thus, the model had potential to rationalize the Uncovered Interest Parity Puzzle.

Chapter Four sought to resolve the consumption-real exchange rate anomaly, which refers to the inability of most international macro models to generate negative cross-correlations between real exchange rates and relative consumption across two countries, as observed in the data. While maintaining the assumption of complete asset markets, this chapter introduced endogenously segmented asset markets into a dynamic sticky-price monetary model. Simulation results showed that such a model could replicate the stylized fact that real exchange rates tend to move in the opposite direction with respect to relative consumption.
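
A much-simplified sketch of the volatility-model side of Chapter Two is given below: a plain GARCH(1,1) with symmetric Student-t innovations is simulated and a one-step-ahead variance forecast is formed. The parameter values are hypothetical, and the model is a stand-in for the fractionally integrated GARCH with skewed Student-t errors identified in the dissertation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simplified stand-in for a FIGARCH-skewed-t model: plain GARCH(1,1)
# with symmetric Student-t innovations and assumed parameter values.
omega, alpha, beta, nu = 0.02, 0.05, 0.90, 6.0
T = 1000

h = np.empty(T)                       # conditional variances
r = np.empty(T)                       # simulated log-returns
h[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
for t in range(T):
    if t > 0:
        h[t] = omega + alpha * r[t-1]**2 + beta * h[t-1]
    # Student-t innovation rescaled to unit variance (requires nu > 2).
    z = stats.t.rvs(df=nu, random_state=rng) * np.sqrt((nu - 2.0) / nu)
    r[t] = np.sqrt(h[t]) * z

# One-step-ahead conditional variance forecast.
h_next = omega + alpha * r[-1]**2 + beta * h[-1]
print("forecast variance:", h_next,
      " unconditional:", omega / (1 - alpha - beta))
```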

Relevance:

60.00%

Publisher:

Abstract:

Random walk models with temporal correlation (i.e. memory) are of interest in the study of anomalous diffusion phenomena. The random walk and its generalizations occupy a prominent place in the characterization of various physical, chemical and biological phenomena. Temporal correlation is an essential feature in anomalous diffusion models: models with long-range temporal correlation can be called non-Markovian, whereas their short-range counterparts are Markovian. Within this context, we review the existing models with temporal correlation: the elephant walk model, which retains the entire memory, and models with partial memory, namely the Alzheimer walk model and the walk model with a Gaussian memory profile. These models show superdiffusion, with a Hurst exponent H > 1/2. In this work we study a superdiffusive random walk model with exponentially decaying memory. This seems to be a self-contradictory statement, since it is well known that random walks with exponentially decaying temporal correlations can be approximated arbitrarily well by Markov processes, and that central limit theorems prohibit superdiffusion for Markovian walks with finite variance of step sizes. The solution to the apparent paradox is that the model is genuinely non-Markovian, due to a time-dependent decay constant associated with the exponential behavior. In the end, we discuss ideas for future investigations.
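
The sketch below illustrates, under assumed parameters, the kind of walk discussed here: an elephant-style walk in which the recalled step is drawn with weights that decay exponentially in the age of the memory, together with a Hurst-exponent estimate from the scaling of the walk's variance. The kernel and parameter values are illustrative choices, not the exact model of the abstract (which involves a time-dependent decay constant).

```python
import numpy as np

rng = np.random.default_rng(3)

def memory_walk(n_steps=1000, p=0.9, lam=0.01):
    """Elephant-style walk: the recalled time is drawn with weights
    proportional to exp(-lam * (t - t')); the recalled step is repeated
    with probability p and reversed otherwise (hypothetical kernel,
    not the exact model described in the abstract)."""
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1
    for t in range(1, n_steps):
        ages = t - np.arange(t)                 # t - t' for t' = 0..t-1
        w = np.exp(-lam * ages)
        recalled = steps[rng.choice(t, p=w / w.sum())]
        steps[t] = recalled if rng.random() < p else -recalled
    return np.cumsum(steps)

# Estimate the Hurst exponent from the scaling of the walk's variance:
# Var[x(t)] ~ t^(2H), so H is half the log-log slope.
n_real = 100
walks = np.array([memory_walk() for _ in range(n_real)])
t = np.arange(1, walks.shape[1] + 1)
var = walks.var(axis=0)
H = 0.5 * np.polyfit(np.log(t[10:]), np.log(var[10:]), 1)[0]
print("estimated Hurst exponent:", H)
```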

Relevance:

60.00%

Publisher:

Abstract:

In 2006, a large and prolonged bloom of the dinoflagellate Karenia mikimotoi occurred in Scottish coastal waters, causing extensive mortalities of benthic organisms including annelids and molluscs and some species of fish (Davidson et al., 2009). A coupled hydrodynamic-algal transport model was developed to track the progression of the bloom around the Scottish coast during June–September 2006 and hence investigate the processes controlling the bloom dynamics. Within this individual-based model, cells were capable of growth, mortality and phototaxis and were transported by the physical processes of advection and turbulent diffusion, using current velocities extracted from operational simulations of the MRCS ocean circulation model of the North-west European continental shelf. Vertical and horizontal turbulent diffusion of cells were treated using a random walk approach. Comparison of model output with remotely sensed chlorophyll concentrations and cell counts from coastal monitoring stations indicated that it was necessary to include multiple spatially distinct seed populations of K. mikimotoi at separate locations on the shelf edge to capture the qualitative pattern of bloom transport and development. We interpret this as indicating that the source population was being transported northwards by the Hebridean slope current, from where colonies of K. mikimotoi were injected onto the continental shelf by eddies or other transient exchange processes. The model was used to investigate the effects on simulated K. mikimotoi transport and dispersal of: (1) the distribution of the initial seed population; (2) algal growth and mortality; (3) water temperature; (4) the vertical movement of particles by diurnal migration and eddy diffusion; (5) the relative roles of the shelf edge and coastal currents; (6) the role of wind forcing. The numerical experiments emphasized the requirement for a physiologically based biological model and indicated that improved modelling of future blooms will potentially benefit from better parameterisation of the temperature dependence of both growth and mortality and from finer spatial and temporal hydrodynamic resolution.
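
For reference, a naive random-walk treatment of turbulent diffusion in a Lagrangian particle model looks like the following sketch. The constant diffusivities, uniform current, and time step are hypothetical; an operational model such as the one described would use spatially varying diffusivities (with the appropriate correction terms) and velocities taken from the hydrodynamic model.

```python
import numpy as np

rng = np.random.default_rng(4)

def step_particles(x, y, z, u, v, w, kh, kv, dt):
    """Advance particle positions by advection plus a naive random-walk
    representation of turbulent diffusion: dx = u*dt + sqrt(2*K*dt)*N(0,1).
    kh and kv are assumed constant horizontal/vertical diffusivities."""
    n = x.size
    x = x + u * dt + np.sqrt(2.0 * kh * dt) * rng.normal(size=n)
    y = y + v * dt + np.sqrt(2.0 * kh * dt) * rng.normal(size=n)
    z = z + w * dt + np.sqrt(2.0 * kv * dt) * rng.normal(size=n)
    return x, y, z

# Example: 1000 cells advected by a uniform 0.1 m/s eastward current for one day.
n = 1000
x = np.zeros(n); y = np.zeros(n); z = np.full(n, -5.0)
for _ in range(24 * 60):                       # 1-minute time steps
    x, y, z = step_particles(x, y, z, u=0.1, v=0.0, w=0.0,
                             kh=10.0, kv=1e-4, dt=60.0)
print("mean drift (km):", x.mean() / 1e3,
      " horizontal spread (km):", x.std() / 1e3)
```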

Relevance:

60.00%

Publisher:

Abstract:

The St. Lawrence River valley, in eastern Canada, is one of the most seismically active regions in eastern North America and is characterized by numerous intraplate earthquakes. After the rigid rotation of the tectonic plate, glacial isostatic adjustment is by far the largest source of geophysical signal in eastern Canada. The crustal deformations and deformation rates of this region were studied using more than 14 years of observations (9 years on average) from 112 continuously operating GPS stations. The velocity field was obtained from cleaned daily GPS coordinate time series by applying a combined model using least-squares weighting. Velocities were estimated with noise models that include the temporal correlations of the three-dimensional coordinate time series. The horizontal velocity field shows the counterclockwise rotation of the North American plate, with an average velocity of 16.8±0.7 mm/yr in a no-net-rotation model relative to ITRF2008. The vertical velocity field confirms uplift due to glacial isostatic adjustment throughout eastern Canada, with a maximum rate of 13.7±1.2 mm/yr, and subsidence to the south, mainly in the northern United States, with a typical rate of −1 to −2 mm/yr and a minimum rate of −2.7±1.4 mm/yr. The noise behaviour of the three-dimensional GPS coordinate time series was analysed using spectral analysis and the maximum likelihood method to test five noise models: power law; white noise; white noise and flicker noise; white noise and random walk; and white noise, flicker noise and random walk. The results show that the combination of white noise and flicker noise is the best model for describing the stochastic part of the time series. The amplitudes of all noise models are smallest in the north direction and largest in the vertical direction. White noise amplitudes are roughly equal across the study area and are therefore exceeded, in all directions, by the flicker and random walk noise. The flicker noise model increases the uncertainty of the estimated velocities by a factor of 5 to 38 relative to the white noise model. The velocities estimated with all noise models are statistically consistent. The estimated parameters of the Euler pole of rotation for this region are slightly, but significantly, different from the overall rotation of the North American plate. This difference potentially reflects local stresses in this seismic region and stresses caused by the difference in intraplate velocities between the two shores of the St. Lawrence River. The crustal deformation of the region was studied using the least-squares collocation method. The interpolated horizontal velocities show spatially coherent motion: radially outward motion from the centres of maximum uplift in the north and radially inward motion toward the centres of maximum subsidence in the south, with a typical velocity of 1 to 1.6±0.4 mm/yr. However, this pattern becomes more complex near the margins of the formerly glaciated zones.
Based on their directions, the intraplate horizontal velocities can be divided into three distinct zones. This confirms the conclusions of other researchers on the existence of three ice domes in the study region before the Last Glacial Maximum. A spatial correlation is observed between the zones of higher-magnitude intraplate horizontal velocities and the seismic zones along the St. Lawrence River. The vertical velocities were then interpolated to model the vertical deformation. The model shows a maximum uplift rate of 15.6 mm/yr southeast of Hudson Bay and a typical subsidence rate of 1 to 2 mm/yr in the south, mainly in the northern United States. Along the St. Lawrence River, the horizontal and vertical motions are spatially coherent: there is a southeastward displacement of about 1.3 mm/yr and an average uplift of 3.1 mm/yr relative to the North American plate. The vertical deformation rate is about 2.4 times larger than the intraplate horizontal deformation rate. The results of the deformation analysis show the current state of deformation in eastern Canada as extension in the northern part (which is uplifting) and compression in the southern part (which is subsiding). Rotation rates average 0.011°/Myr. We observed NNW-SSE compression at a rate of 3.6 to 8.1 nstrain/yr in the Lower St. Lawrence seismic zone. In the Charlevoix seismic zone, extension at a rate of 3.0 to 7.1 nstrain/yr is oriented ENE-WSW. In the Western Quebec seismic zone, the deformation has a shear mechanism, with a compression rate of 1.0 to 5.1 nstrain/yr and an extension rate of 1.6 to 4.1 nstrain/yr. These measurements are consistent, to first order, with glacial isostatic adjustment models and with the maximum horizontal compressive stress of the World Stress Map project, obtained from focal mechanisms (focal mechanism method).
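
To illustrate why the choice of noise model matters for velocity uncertainties, the sketch below simulates daily position series with a linear trend plus white noise and, as a simple stand-in for the coloured-noise component (the study found white plus flicker noise best), random-walk noise. It then compares the scatter of ordinary least-squares velocity estimates with and without the coloured component; amplitudes, durations, and the true velocity are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

days = 9 * 365                       # ~9 years of daily positions
t_yr = np.arange(days) / 365.25
true_vel = 3.0                       # mm/yr uplift (illustrative)
sig_wn = 3.0                         # white-noise amplitude (mm)
sig_rw = 0.5                         # random-walk step std per day (mm)

def simulate():
    wn = sig_wn * rng.normal(size=days)
    rw = np.cumsum(sig_rw * rng.normal(size=days))   # random-walk noise
    return true_vel * t_yr + wn + rw

# Scatter of OLS velocity estimates over many realizations shows how much
# coloured noise inflates the uncertainty beyond the white-noise-only case.
A = np.column_stack([np.ones(days), t_yr])
def ols_vel(y):
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

vel_colored = [ols_vel(simulate()) for _ in range(200)]
vel_white = [ols_vel(true_vel * t_yr + sig_wn * rng.normal(size=days))
             for _ in range(200)]
print("velocity scatter, white noise only (mm/yr):     ", np.std(vel_white))
print("velocity scatter, white + random walk (mm/yr):  ", np.std(vel_colored))
```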

Relevance:

60.00%

Publisher:

Abstract:

The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve on the performance of individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images while edges are similarities between image pairs. Our first approach employs a mixture Markov model, based on a random walk over multiple graphs, to fuse the graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images from multiple ranked lists obtained by different features. We then study a more specialized application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identifications from multiple cameras are regarded as related tasks to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to embed original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered. To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve a Fast RCNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). The SDP improves detection accuracy by exploiting appropriate convolutional features depending on the scale of the input object proposals. The CRC effectively utilizes convolutional features and greatly eliminates negative proposals in a cascaded manner, while maintaining a high recall for true objects. The two strategies together improve the detection accuracy and reduce the computational cost.
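
A minimal sketch of the graph-fusion ingredient is shown below: a random walk with restart over a weighted combination of feature-specific similarity graphs, used to rerank database images for a query. The fusion weights, restart probability, and toy graphs are assumptions for illustration; the mixture Markov model and probabilistic weight estimation described above are more involved.

```python
import numpy as np

def rerank_random_walk(similarity_graphs, weights, query_idx,
                       restart=0.15, n_iter=100):
    """Random walk with restart on a fused similarity graph.
    similarity_graphs: list of (n, n) nonnegative similarity matrices,
    one per visual feature; weights: their fusion weights (sum to 1);
    query_idx: indices of the query image(s) used as restart nodes."""
    fused = sum(w * g for w, g in zip(weights, similarity_graphs))
    P = fused / fused.sum(axis=1, keepdims=True)   # row-stochastic transitions
    n = P.shape[0]
    e = np.zeros(n)
    e[query_idx] = 1.0 / len(query_idx)            # restart distribution
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        pi = (1 - restart) * pi @ P + restart * e
    return np.argsort(-pi)                         # database images, best first

# Toy example with two random feature graphs and image 0 as the query.
rng = np.random.default_rng(6)
g1 = rng.random((50, 50)); g2 = rng.random((50, 50))
order = rerank_random_walk([g1, g2], weights=[0.6, 0.4], query_idx=[0])
print("top-5 after reranking:", order[:5])
```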

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents quantitative studies of T cell and dendritic cell (DC) behaviour in mouse lymph nodes (LNs) in the naive state and following immunisation. These processes are of importance and interest in basic immunology, and better understanding could improve both diagnostic capacity and therapeutic manipulations, potentially helping to produce more effective vaccines or to develop treatments for autoimmune diseases. The problem is also interesting conceptually, as it is relevant to other fields where 3D movement of objects is tracked with a discrete scanning interval. A general immunology introduction is presented in chapter 1. In chapter 2, I apply quantitative methods to multi-photon imaging data to measure how T cells and DCs are spatially arranged in LNs. This has been previously studied to describe differences between the naive and immunised states and as an indicator of the magnitude of the immune response in LNs, but previous analyses have been generally descriptive. The quantitative analysis shows that some of the previous conclusions may have been premature. In chapter 3, I use Bayesian state-space models to test some hypotheses about the mode of T cell search for DCs. A two-state mode of movement, in which T cells can be classified as either interacting with a DC or freely migrating, is supported over a model in which T cells would home in on DCs at a distance, for example through the action of chemokines. In chapter 4, I study whether T cell migration is linked to the geometric structure of the fibroblast reticular cell (FRC) network. I find support for the hypothesis that the movement is constrained to the FRC network over an alternative 'random walk with persistence time' model in which cells would move randomly, with a short-term persistence driven by a hypothetical T cell intrinsic 'clock'. I also present unexpected results on the FRC network geometry. Finally, a quantitative method is presented for addressing some measurement biases inherent to multi-photon imaging. In all three chapters, novel findings are made, and the methods developed have the potential for further use to address important problems in the field. In chapter 5, I present a summary and synthesis of results from chapters 3-4 and a more speculative discussion of these results and potential future directions.
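
As a concrete example of the 'random walk with persistence time' alternative mentioned above, the sketch below simulates a 3D cell track sampled at a fixed imaging interval, where the direction of motion is kept or resampled according to an exponential persistence time. The speed, interval, and persistence values are hypothetical, not fitted quantities from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

def persistent_random_walk(n_steps=500, speed=10.0, dt=0.5, persistence=5.0):
    """3D 'random walk with persistence time' toy model: at each imaging
    interval dt (minutes) the cell keeps its previous direction with a
    probability set by an exponential persistence time, otherwise it picks
    a new uniformly random direction. Speed in um/min; values hypothetical."""
    pos = np.zeros((n_steps + 1, 3))
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    p_keep = np.exp(-dt / persistence)
    for t in range(n_steps):
        if rng.random() > p_keep:
            direction = rng.normal(size=3)
            direction /= np.linalg.norm(direction)
        pos[t + 1] = pos[t] + speed * dt * direction
    return pos

track = persistent_random_walk()
disp = np.linalg.norm(track[-1] - track[0])
print("net displacement (um):", round(disp, 1))
```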

Relevance:

60.00%

Publisher:

Abstract:

Simulations of droplet dispersion behind cylinder wakes and downstream of icing tunnel spray bars were conducted. In both cases, a range of droplet sizes was investigated numerically with a Lagrangian particle trajectory approach, while the turbulent air flow was investigated with a hybrid Reynolds-Averaged Navier-Stokes/Large-Eddy Simulation approach. In the first study, droplets were injected downstream of a cylinder at sub-critical conditions (i.e. with laminar boundary layer separation). A stochastic continuous random walk (CRW) turbulence model was used to capture the effects of sub-grid turbulence. Small-inertia droplets (characterized by small Stokes numbers) were affected by both the large-scale and small-scale vortex structures and closely followed the air flow, exhibiting a dispersion consistent with that of a scalar flow field. Droplets with intermediate Stokes numbers were centrifuged by the vortices to the outer edges of the wake, yielding an increased dispersion. Large Stokes number droplets were found to be less responsive to the vortex structures and exhibited the least dispersion. Particle concentration was also correlated with the vorticity distribution, which yielded preferential bias effects as a function of particle size. This trend was qualitatively similar to results seen in homogeneous isotropic turbulence, though the influence of particle inertia was less pronounced for the cylinder wake case. A similar study was completed for droplet dispersion within the Icing Research Tunnel (IRT) at the NASA Glenn Research Center, where it is important to obtain a nearly uniform liquid water content (LWC) distribution in the test section (to recreate atmospheric icing conditions). For this goal, droplets are dispersed by the mean and turbulent flow generated by the nozzle air jets, the upstream spray bars, and the vertical strut wakes. To understand the influence of these three components, a set of simulations was conducted with a sequential inclusion of these components. Firstly, a jet in an otherwise quiescent airflow was simulated to capture the impact of the air jet on flow turbulence and droplet distribution, and the predictions compared well with experimental results. The effects of the spray bar wake and vertical strut wake were then included with two further simulation conditions, from which it was found that the air jets were the primary driving force for droplet dispersion, i.e. that the spray bar and vertical strut wake effects were secondary.
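
A one-dimensional sketch of the stochastic CRW idea is given below: the fluid velocity fluctuation seen by the droplet is modelled as an Ornstein-Uhlenbeck (Langevin) process, and the droplet responds through Stokes drag with a response time that sets its Stokes number. All parameter values are hypothetical, and the models used in the study are three-dimensional with inhomogeneous turbulence statistics.

```python
import numpy as np

rng = np.random.default_rng(8)

def track_droplet(n_steps=5000, dt=1e-4, tau_p=5e-3, tau_L=2e-3, sigma_u=1.0):
    """1D continuous random walk (CRW) sketch: the seen fluid fluctuation
    follows an Ornstein-Uhlenbeck process with integral time scale tau_L
    and rms sigma_u; the droplet obeys Stokes drag with response time tau_p
    (Stokes number St = tau_p / tau_L). Values are hypothetical."""
    a = np.exp(-dt / tau_L)          # exact OU decay factor per time step
    u_fluct, v, x = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        # Langevin update of the fluid velocity fluctuation seen by the droplet.
        u_fluct = a * u_fluct + sigma_u * np.sqrt(1.0 - a**2) * rng.normal()
        # Stokes drag: dv/dt = (u - v) / tau_p (explicit Euler step).
        v += (u_fluct - v) / tau_p * dt
        x += v * dt
    return x

# Dispersion over an ensemble of droplets after 0.5 s of simulated time.
spread = np.std([track_droplet() for _ in range(200)])
print("rms dispersion (m):", spread)
```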

Relevance:

60.00%

Publisher:

Abstract:

We study spatially localized states of a spiking neuronal network populated by a pulse-coupled phase oscillator known as the lighthouse model. We show that, in the continuum limit with slow synaptic interactions, the dynamics reduce to those of the standard Amari model. For non-slow synaptic connections we are able to go beyond the standard firing rate analysis of localized solutions, allowing us to explicitly construct a family of co-existing one-bump solutions and then track bump width and firing pattern as a function of system parameters. We also present an analysis of the model on a discrete lattice. We show that bump states of multiple widths can co-exist, and we uncover a mechanism for bump wandering linked to the speed of synaptic processing. Moreover, beyond a wandering transition point we show that the bump undergoes an effective random walk with a diffusion coefficient that scales exponentially with the rate of synaptic processing and linearly with the lattice spacing.
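
The effective-random-walk picture of bump wandering can be quantified as in the sketch below, where the bump centre is treated as an unbiased lattice random walk and the diffusion coefficient is read off from the slope of the mean squared displacement. The hop rate and lattice spacing are placeholders, not quantities derived from the lighthouse model.

```python
import numpy as np

rng = np.random.default_rng(9)

def bump_diffusion(n_steps=5000, n_real=500, dx=0.1, hop_rate=0.2, dt=1.0):
    """Treat the bump centre as an unbiased random walk on a lattice with
    spacing dx: each time step it hops left or right with total probability
    hop_rate (a stand-in for the wandering dynamics, not the lighthouse
    model itself). Returns the diffusion coefficient estimated from the
    mean squared displacement, MSD(t) = 2*D*t in 1D."""
    steps = rng.choice([-1, 0, 1], size=(n_real, n_steps),
                       p=[hop_rate / 2, 1 - hop_rate, hop_rate / 2]) * dx
    pos = np.cumsum(steps, axis=1)
    msd = (pos**2).mean(axis=0)
    t = dt * np.arange(1, n_steps + 1)
    return np.polyfit(t, msd, 1)[0] / 2.0

# Theory for this toy walk: D = hop_rate * dx^2 / (2 * dt).
print("estimated D:", bump_diffusion(), " theory:", 0.2 * 0.1**2 / 2)
```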

Relevance:

60.00%

Publisher:

Abstract:

Chemotaxis, the phenomenon in which cells move in response to extracellular chemical gradients, plays a prominent role in the mammalian immune response. During this process, a number of chemical signals, called chemoattractants, are produced at or proximal to sites of infection and diffuse into the surrounding tissue. Immune cells sense these chemoattractants and move in the direction where their concentration is greatest, thereby locating the source of attractants and their associated targets. Leading the assault against new infections is a specialized class of leukocytes (white blood cells) known as neutrophils, which normally circulate in the bloodstream. Upon activation, these cells emigrate out of the vasculature and navigate through interstitial tissues toward target sites. There they phagocytose bacteria and release a number of proteases and reactive oxygen intermediates with antimicrobial activity. Neutrophils recruited by infected tissue in vivo are likely confronted by complex chemical environments consisting of a number of different chemoattractant species. These signals may include end target chemicals produced in the vicinity of the infectious agents, and endogenous chemicals released by local host tissues during the inflammatory response. To successfully locate their pathogenic targets within these chemically diverse and heterogeneous settings, activated neutrophils must be capable of distinguishing between the different signals and employing some sort of logic to prioritize among them. This ability to simultaneously process and interpret multiple signals is thought to be essential for efficient navigation of the cells to target areas. In particular, aberrant cell signaling and defects in this functionality are known to contribute to medical conditions such as chronic inflammation, asthma and rheumatoid arthritis. To elucidate the biomolecular mechanisms underlying the neutrophil response to different chemoattractants, a number of efforts have been made toward understanding how cells respond to different combinations of chemicals. Most notably, recent investigations have shown that in the presence of both end target and endogenous chemoattractant variants, the cells migrate preferentially toward the former type, even at very low relative concentrations of the latter. Interestingly, however, when the cells are exposed to two different endogenous chemical species, they exhibit a combinatorial response in which distant sources are favored over proximal sources. Some additional results also suggest that cells located between two endogenous chemoattractant sources will respond to the vectorial sum of the combined gradients. In the long run, this peculiar behavior could result in oscillatory cell trajectories between the two sources. To further explore the significance of these and other observations, particularly in the context of physiological conditions, we introduce in this work a simplified phenomenological model of neutrophil chemotaxis. In particular, this model incorporates a trait commonly known as directional persistence - the tendency for migrating neutrophils to continue moving in the same direction (much like momentum) - while also accounting for the dose-response characteristics of cells to different chemical species. Simulations based on this model suggest that the efficiency of cell migration in complex chemical environments depends significantly on the degree of directional persistence.
In particular, with appropriate values for this parameter, cells can improve their odds of locating end targets by drifting through a network of attractant sources in a loosely-guided fashion. This corroborates the prediction that neutrophils randomly migrate from one chemoattractant source to the next while searching for their end targets. These cells may thus use persistence as a general mechanism to avoid being trapped near sources of endogenous chemoattractants - the mathematical analogue of local maxima in a global optimization problem. Moreover, this general foraging strategy may apply to other biological processes involving multiple signals and long-range navigation.
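
A minimal sketch of the two ingredients of such a phenomenological model, directional persistence and a response to attractant gradients, is given below; it is not the model of the work described. The concentration field, persistence and bias weights, and source layout are all hypothetical, chosen only to illustrate how tuning persistence changes how readily a cell drifts past a nearby source rather than settling at its local maximum.

```python
import numpy as np

rng = np.random.default_rng(10)

def chemotaxis_walk(sources, n_steps=2000, dt=1.0, speed=0.1,
                    persistence=0.8, bias=20.0, noise=0.2):
    """Phenomenological 2D sketch: at each step the cell's new heading is a
    weighted mix of its previous heading (directional persistence) and the
    local chemoattractant gradient, plus noise. Sources are (x, y, strength)
    tuples producing a simple strength/(1 + r^2) concentration field."""
    pos = np.zeros(2)
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    path = [pos.copy()]
    for _ in range(n_steps):
        grad = np.zeros(2)
        for sx, sy, strength in sources:
            d = np.array([sx, sy]) - pos
            r2 = d @ d
            grad += 2.0 * strength * d / (1.0 + r2)**2   # gradient of s/(1+r^2)
        new_dir = persistence * direction + bias * grad + noise * rng.normal(size=2)
        direction = new_dir / np.linalg.norm(new_dir)
        pos = pos + speed * dt * direction
        path.append(pos.copy())
    return np.array(path)

# Two endogenous-type sources; varying `persistence` changes how readily the
# cell drifts past the nearer source instead of settling at its local maximum.
path = chemotaxis_walk(sources=[(5.0, 0.0, 1.0), (12.0, 3.0, 1.0)])
print("final position:", np.round(path[-1], 2))
```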

Relevance:

60.00%

Publisher:

Abstract:

Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike the existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals - for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data frequency from the crisis, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time-variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond 1 month. At shorter horizons, however, our methods fail to forecast better than the RW, and we identify uncertainty in the estimation of the coefficients, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for the statistical and economic evaluation of forecasting performance, we find that our approach of pre-selecting and validating fundamentals across bootstrap replications generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time-variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supplies and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive strong support.
The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
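
For reference, the random walk Metropolis-Hastings idea reduces to the sketch below, here applied to the coefficients of a simple Gaussian regression standing in for a MIDAS regression; the data, prior, and step size are hypothetical, and the chapter's actual estimation targets the MIDAS specification described above.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical regression data standing in for a MIDAS-type regression.
n, k = 200, 3
X = rng.normal(size=(n, k))
beta_true = np.array([0.5, -1.0, 0.3])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def log_post(beta, sigma=0.5, prior_sd=10.0):
    """Gaussian likelihood with a diffuse Gaussian prior on the coefficients."""
    resid = y - X @ beta
    return (-0.5 * np.sum(resid**2) / sigma**2
            - 0.5 * np.sum(beta**2) / prior_sd**2)

def rw_metropolis(n_iter=20000, step=0.05):
    """Random walk Metropolis-Hastings: propose beta' = beta + step * N(0, I),
    accept with probability min(1, posterior ratio)."""
    beta = np.zeros(k)
    lp = log_post(beta)
    draws = np.empty((n_iter, k))
    for i in range(n_iter):
        prop = beta + step * rng.normal(size=k)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        draws[i] = beta
    return draws

draws = rw_metropolis()
print("posterior means (after burn-in):", draws[5000:].mean(axis=0).round(2))
```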