929 results for Biased correlated random walk
Abstract:
This paper examines the finite-sample properties of three testing regimes for the null hypothesis of a panel unit root against stationary alternatives in the presence of cross-sectional correlation. The regimes of Bai and Ng (2004), Moon and Perron (2004) and Pesaran (2007) are assessed in the presence of multiple factors and in other non-standard situations. The behaviour of some information criteria used to determine the number of factors in a panel is examined, and new information criteria with improved properties in small-N panels are proposed. An application to the efficient markets hypothesis is also provided: the null hypothesis of a panel random walk is not rejected by any of the tests, supporting the efficient markets hypothesis in the financial services sector of the Australian Stock Exchange.
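A minimal sketch of the cross-sectionally augmented Dickey-Fuller (CADF) regression underlying Pesaran's (2007) CIPS statistic, one of the three regimes assessed above. The simulated panel, factor loadings and sample sizes are illustrative assumptions, and in practice the statistic is compared against Pesaran's tabulated critical values rather than standard normal ones.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 100

# Simulate a panel of random walks sharing one common factor (the unit-root null).
factor = np.cumsum(rng.normal(size=T))
loadings = rng.uniform(0.5, 1.5, size=N)
y = np.cumsum(rng.normal(size=(N, T)), axis=1) + loadings[:, None] * factor

def cadf_tstat(yi, ybar):
    """t-statistic on y_{i,t-1} in the CADF regression
    dy_it = a + b*y_{i,t-1} + c*ybar_{t-1} + d*dybar_t + e_it."""
    dy, dybar = np.diff(yi), np.diff(ybar)
    X = np.column_stack([np.ones(len(dy)), yi[:-1], ybar[:-1], dybar])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

ybar = y.mean(axis=0)  # the cross-section average proxies the common factor
cips = np.mean([cadf_tstat(y[i], ybar) for i in range(N)])
print(f"CIPS statistic: {cips:.3f}")
```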
Abstract:
This article applies the panel stationarity test with a break proposed by Hadri and Rao (2008) to examine whether 14 macroeconomic variables of OECD countries are best represented as random walks or as stationary fluctuations around a deterministic trend. In contrast to previous studies, which relied essentially on visual inspection of the break type or simply applied the most general break model, we use a model selection procedure based on BIC. We do this for each time series, so that heterogeneous break models are allowed for in the panel. Our results suggest, overwhelmingly, that if we account for a structural break and cross-sectional dependence, and choose the break models to be congruent with the data, then the null of stationarity cannot be rejected for any of the 14 macroeconomic variables examined in this article. This is in sharp contrast with the results obtained by Hurlin (2004), using the same data but a different methodology.
Abstract:
The cellular response to radiation damage is mediated by a complex network of pathways and feedback loops whose spatiotemporal organization is still unclear, despite its decisive role in determining the fate of the damaged cell. The single-cell approach and the high spatial resolution offered by microbeams provide the perfect tool to study and quantify the dynamic processes associated with the induction and repair of DNA damage. The soft X-ray microbeam has been used to follow the development of radiation-induced foci in live cells by monitoring their size and intensity as a function of dose and time using yellow fluorescent protein (YFP) tagging techniques. Preliminary data indicate a delayed, linear rise of the intensity signal, pointing to slow kinetics for the accumulation of the DNA repair protein 53BP1. Slow and limited diffusion of the foci has also been observed. Further investigations are required to assess whether such diffusion is consistent with a random walk pattern or whether it is the result of a more structured lesion-processing phenomenon. In conclusion, our data indicate that microbeams coupled to live-cell microscopy represent a sophisticated approach for visualizing and quantifying the dynamic changes of DNA proteins at the damaged sites.
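Whether a focus trajectory is consistent with a random walk is conventionally checked with a mean-squared-displacement (MSD) analysis: for pure diffusion the MSD grows linearly with lag time, while constrained processing gives sub-linear growth. A hedged sketch on a simulated trajectory (the step scale and frame count are assumptions, not experimental values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 2D focus trajectory: positions at successive frames (a random walk).
steps = rng.normal(scale=0.05, size=(500, 2))  # step scale in microns (assumed)
traj = np.cumsum(steps, axis=0)

def msd(traj, max_lag=100):
    """Time-averaged mean squared displacement for lags 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                           for lag in lags])

lags, m = msd(traj)
# Fit MSD ~ lag^alpha on log-log axes: alpha near 1 indicates a random walk,
# alpha < 1 sub-diffusion (e.g. structured lesion processing), alpha > 1 directed motion.
alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]
print(f"anomalous-diffusion exponent alpha = {alpha:.2f}")
```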
Abstract:
We propose an exchange rate model that is a hybrid of the conventional specification with monetary fundamentals and the Evans–Lyons microstructure approach. We estimate a model augmented with order flow variables, using a unique data set: almost 100 monthly observations on interdealer order flow on dollar/euro and dollar/yen. The augmented macroeconomic, or "hybrid," model exhibits greater in-sample stability and out-of-sample forecasting improvement vis-à-vis the basic macroeconomic and random walk specifications.
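As a rough illustration of the "hybrid" idea, one can regress monthly exchange-rate changes on a macro fundamental plus interdealer order flow. The series below are simulated placeholders, not the Evans–Lyons data, and the specification is schematic:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100  # roughly the monthly sample length mentioned above

# Placeholder series: a macro fundamental (e.g. relative money/income
# differentials), interdealer order flow, and log exchange-rate changes.
fundamentals = rng.normal(size=T)
order_flow = rng.normal(size=T)
ds = 0.2 * fundamentals + 0.5 * order_flow + rng.normal(scale=0.5, size=T)

# OLS for the hybrid model: ds_t = b0 + b1*fund_t + b2*flow_t + e_t
X = np.column_stack([np.ones(T), fundamentals, order_flow])
beta, *_ = np.linalg.lstsq(X, ds, rcond=None)
print("intercept, macro, order-flow coefficients:", np.round(beta, 3))

# The paper's forecasting exercise then compares out-of-sample RMSE of this
# model against the basic macro model and a random walk (forecast ds = 0).
```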
Abstract:
This paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete n-node networks that runs in O(1) rounds and, with high probability, uses only O(√n log^(3/2) n) messages to elect a unique leader. This algorithm is then extended to solve leader election on any connected non-bipartite n-node graph G in O(τ(G)) time and O(τ(G)·√n log^(3/2) n) messages, where τ(G) is the mixing time of a random walk on G. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, an almost-tight lower bound is presented for randomized leader election, showing that Ω(√n) messages are needed for any O(1)-time leader election algorithm which succeeds with high probability. It is also shown that Ω(n^(1/3)) messages are needed by any leader election algorithm that succeeds with high probability, regardless of the number of rounds. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks.
Abstract:
The IntCal09 and Marine09 radiocarbon calibration curves have been revised utilizing newly available and updated data sets from ¹⁴C measurements on tree rings, plant macrofossils, speleothems, corals, and foraminifera. The calibration curves were derived from the data using the random walk model (RWM) used to generate IntCal09 and Marine09, which has been revised to account for additional uncertainties and error structures. The new curves were ratified at the 21st International Radiocarbon Conference in July 2012 and are available as Supplemental Material at www.radiocarbon.org. The database can be accessed at http://intcal.qub.ac.uk/intcal13/.
Abstract:
This paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete n-node networks that runs in O(1) rounds and, with high probability, uses only O(√n log^(3/2) n) messages to elect a unique leader. When considering the "explicit" variant of leader election, where eventually every node knows the identity of the leader, our algorithm yields the asymptotically optimal bounds of O(1) rounds and O(n) messages. This algorithm is then extended to one solving leader election on any connected non-bipartite n-node graph G in O(τ(G)) time and O(τ(G)·√n log^(3/2) n) messages, where τ(G) is the mixing time of a random walk on G. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, we present an almost matching lower bound for randomized leader election, showing that Ω(√n) messages are needed for any leader election algorithm that succeeds with probability at least 1/e + ε, for any small constant ε > 0. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks.
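A toy simulation of the complete-graph algorithm's core idea, under assumed (simplified) parameters: a few random candidates each query a random set of "referee" nodes, referees report the highest candidate identifier they have seen, and every dominated candidate withdraws. The birthday paradox makes any two candidates share a referee with high probability; the paper's tighter O(√n log^(3/2) n) message bound relies on a more careful candidate-selection scheme than this sketch.

```python
import random, math

def elect_leader(n, seed=0):
    """Toy round-based simulation of randomized leader election in a
    complete n-node network; parameters are simplified assumptions."""
    rng = random.Random(seed)
    p = min(1.0, 2 * math.log(n) / n)                # ~Theta(log n) expected candidates
    candidates = [v for v in range(n) if rng.random() < p] or [rng.randrange(n)]
    k = min(n, int(2 * math.sqrt(n) * math.log(n)))  # referees per candidate

    # Round 1: every candidate sends its identifier to k random referees.
    contacts = {c: rng.sample(range(n), k) for c in candidates}
    seen = {}                                        # referee -> highest ID received
    for c, refs in contacts.items():
        for r in refs:
            seen[r] = max(seen.get(r, -1), c)

    # Round 2: referees reply with the highest ID they saw; a candidate
    # withdraws if any of its referees reports a higher ID. W.h.p. any two
    # candidates share a referee, so only the maximum-ID candidate survives.
    survivors = [c for c, refs in contacts.items() if all(seen[r] <= c for r in refs)]
    messages = 2 * sum(len(refs) for refs in contacts.values())  # queries + replies
    return survivors, messages

leaders, msgs = elect_leader(10_000)
print(f"unique leader elected: {len(leaders) == 1}; messages used: {msgs}")
```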
Abstract:
The relationship between retention loss in single-crystal PbTiO3 ferroelectric thin films and leakage currents is demonstrated by piezoresponse and conductive atomic force microscopy measurements. It was found that the polarization reversal in the absence of an electric field followed a stretched-exponential behavior 1 − exp[−(t/k)^d] with exponent d > 1, which is distinct from a dispersive random walk process with d < 1. The latter has been observed in polycrystalline films, for which retention loss was associated with grain boundaries. The leakage current exhibits power-law scaling at short length scales, which strongly depends on the applied electric field. Additional information on the microstructure, which helps explain the presence of the leakage currents, is provided by high-resolution transmission electron microscopy analysis.
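A brief sketch of fitting the stretched-exponential law quoted above to retention data; the time series here is synthetic, and the parameters k and d are recovered with a generic least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def reversed_fraction(t, k, d):
    """Stretched-exponential polarization reversal: 1 - exp[-(t/k)^d]."""
    return 1.0 - np.exp(-(t / k) ** d)

# Synthetic retention-loss data (time in seconds, reversed polarization fraction).
t = np.logspace(0, 5, 30)
y = reversed_fraction(t, k=2e3, d=1.4) + np.random.default_rng(3).normal(0, 0.01, t.size)

(k_fit, d_fit), _ = curve_fit(reversed_fraction, t, y, p0=(1e3, 1.0))
print(f"k = {k_fit:.3g} s, d = {d_fit:.2f}")
# d > 1, as reported for single-crystal films; a dispersive random-walk
# process would instead give d < 1.
```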
Stochastic Analysis of Saltwater Intrusion in Heterogeneous Aquifers using Local Average Subdivision
Abstract:
This study investigates the effects of ground heterogeneity, considering permeability as a random variable, on an intruding saltwater (SW) wedge using Monte Carlo simulations. Random permeability fields were generated using the method of Local Average Subdivision (LAS), based on a lognormal probability density function. The LAS method allows the creation of spatially correlated random fields, generated using coefficients of variation (COV) and horizontal and vertical scales of fluctuation (SOF). The numerical modelling code SUTRA was employed to solve the coupled flow and transport problem. The well-defined 2D dispersive Henry problem was used as the test case for the method. The intruding SW wedge is characterized by two key parameters, the toe penetration length (TL) and the width of the mixing zone (WMZ). These parameters were compared to the results of a homogeneous case simulated using effective permeability values. The simulation results revealed: (1) an increase in COV resulted in a seaward movement of the TL; (2) the WMZ extended with increasing COV; (3) a general increase in horizontal and vertical SOF produced a seaward movement of the TL, with the WMZ increasing slightly; (4) as the anisotropy ratio increased, the TL intruded further inland and the WMZ reduced in size. The results show that for large values of COV, effective permeability parameters are inadequate for reproducing the effects of heterogeneity on SW intrusion.
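LAS itself involves a recursive subdivision scheme; the sketch below generates a statistically comparable input, a spatially correlated lognormal permeability realization with a target COV and scales of fluctuation, via Cholesky factorization of an exponential covariance instead. Grid dimensions and statistics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Grid and target statistics (illustrative values, not the study's).
nx, nz, dx, dz = 40, 20, 1.0, 0.5        # cells and spacing (m)
k_mean, cov = 1e-11, 0.5                 # mean permeability (m^2) and COV
sof_h, sof_v = 10.0, 2.0                 # horizontal/vertical scales of fluctuation (m)

# Underlying Gaussian log-field parameters for a lognormal permeability.
sigma2_ln = np.log(1.0 + cov**2)
mu_ln = np.log(k_mean) - sigma2_ln / 2

# Separable exponential covariance on the grid, factorized by Cholesky.
xs, zs = np.meshgrid(np.arange(nx) * dx, np.arange(nz) * dz, indexing="ij")
pts = np.column_stack([xs.ravel(), zs.ravel()])
dh = np.abs(pts[:, None, 0] - pts[None, :, 0])
dv = np.abs(pts[:, None, 1] - pts[None, :, 1])
C = sigma2_ln * np.exp(-2 * dh / sof_h - 2 * dv / sof_v)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(pts)))

log_k = mu_ln + L @ rng.normal(size=len(pts))
perm = np.exp(log_k).reshape(nx, nz)     # one random permeability realization
print(perm.mean(), perm.std() / perm.mean())  # should be near k_mean and COV
```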
Footprints in the sand: a persistent spatial impression of fishing in a mobile groundfish assemblage
Abstract:
Fishing is well known to curtail the size distribution of fish populations. This paper reports the discovery of small-scale spatial patterns in length appearing in several exploited species of Celtic Sea demersal 'groundfish'. These patterns match well with spatial distributions of fishing activity, estimated from vessel monitoring records taken over a period of 6 years, suggesting that this 'mobile' fish community retains a persistent impression of local-scale fishing pressure. An individual random-walk model of fish movement best matched these exploitation 'footprints' when individual movement rates were set to <35 km per year. We propose that Celtic Sea groundfish may have surprisingly low movement rates for much of the year, such that fishing impact is spatially heterogeneous and related to local fishing intensity.
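A minimal sketch of an uncorrelated individual random-walk movement model with the annual movement rate capped at the reported 35 km per year; the daily time step and fish count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def fish_tracks(n_fish=500, days=365, annual_rate_km=35.0):
    """Uncorrelated 2D random walk: each fish moves a fixed daily distance
    in a uniformly random direction (annual path length = annual_rate_km)."""
    step = annual_rate_km / days
    theta = rng.uniform(0, 2 * np.pi, size=(n_fish, days))
    return np.cumsum(step * np.cos(theta), axis=1), np.cumsum(step * np.sin(theta), axis=1)

x, y = fish_tracks()
net = np.hypot(x[:, -1], y[:, -1])
print(f"median net yearly displacement = {np.median(net):.1f} km")
# A small net displacement relative to the scale of local fishing grounds is
# what allows spatial patterns in length to persist in such a model.
```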
Abstract:
In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves on the conventional Jaccard and/or Simpson indices, and reconciles geodesic information (random walk) with co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients by applying linguistic graph information to neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, by correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
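MiF itself is specific to the paper, but the Jaccard and Simpson neighborhood-overlap indices it improves on are standard; a small sketch on a toy word-association graph (networkx assumed available):

```python
import networkx as nx

# Toy word-association graph standing in for the Edinburgh thesaurus subgraph.
G = nx.Graph([("dog", "cat"), ("dog", "bone"), ("cat", "mouse"),
              ("mouse", "cheese"), ("bone", "cheese")])

def jaccard(G, u, v):
    """Shared neighbors over the union of neighborhoods."""
    a, b = set(G[u]), set(G[v])
    return len(a & b) / len(a | b) if a | b else 0.0

def simpson(G, u, v):
    """Shared neighbors over the smaller neighborhood."""
    a, b = set(G[u]), set(G[v])
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

print(jaccard(G, "dog", "mouse"), simpson(G, "dog", "mouse"))
# MiF additionally folds random-walk (geodesic) information and a
# degree-balance adjustment on top of overlap indices like these.
```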
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
A PhD Dissertation, presented as part of the requirements for the Degree of Doctor of Philosophy from the NOVA - School of Business and Economics
Abstract:
The term structure of interest rates, also known as the yield curve, defines the relationship between interest rates and the maturity of investments. The development of models that yield accurate forecasts of the term structure and allow the dynamics of interest rates to be studied is therefore of crucial importance in several areas of finance. This study investigates the performance of different forecasting methods for the Euro-area term structure of interest rates over the period between 2009 and 2015. More specifically, we analyse the predictive ability of the Nelson-Siegel & Svensson model, assuming that the parameters estimated from the parametric specification can be modelled with univariate time-series methods (ARIMA and random walk models), multivariate methods (VAR models), and individual and joint Artificial Neural Networks (ANN). The results of this study show that (i) ANNs forecasting the parameters jointly exhibit the lowest error values for short and medium maturities (3 months to 5 years); (ii) individual ANNs are better at forecasting interest rates at maturities between 7 and 10 years; and (iii) for long and very long maturities (15 and 30 years, respectively) the VAR(1) model should be chosen. These results are robust and consistent across all forecast horizons analysed (1, 2 and 3 months). However, over the period analysed, none of the models tested achieves lower error values than the random walk model.
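For reference, a minimal sketch of the Nelson-Siegel & Svensson yield function whose parameters the study forecasts; the parameter values below are illustrative, not estimates from the Euro-area data:

```python
import numpy as np

def nss_yield(tau, b0, b1, b2, b3, lam1, lam2):
    """Nelson-Siegel-Svensson zero-coupon yield at maturity tau (years)."""
    f1 = (1 - np.exp(-tau / lam1)) / (tau / lam1)
    f2 = f1 - np.exp(-tau / lam1)
    f3 = (1 - np.exp(-tau / lam2)) / (tau / lam2) - np.exp(-tau / lam2)
    return b0 + b1 * f1 + b2 * f2 + b3 * f3

# Illustrative parameters; in the study these are re-estimated each month and
# then forecast with ARIMA, random walk, VAR and neural-network models
# (a random-walk forecast simply carries the current parameters forward).
maturities = np.array([0.25, 1, 2, 5, 7, 10, 15, 30])
print(nss_yield(maturities, b0=0.03, b1=-0.02, b2=0.01, b3=0.005, lam1=1.5, lam2=10.0))
```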
Abstract:
For predicting future volatility, empirical studies find mixed results regarding two issues: (1) whether model-free implied volatility has more information content than Black-Scholes model-based implied volatility; (2) whether implied volatility outperforms historical volatilities. In this thesis, we address these two issues using Canadian financial data. First, we examine the information content and forecasting power of VIXC, a model-free implied volatility, and MVX, a model-based implied volatility. The GARCH in-sample test indicates that VIXC subsumes all information that is reflected in MVX. The out-of-sample examination indicates that VIXC is superior to MVX for predicting the realized volatility over the next 1, 5, 10, and 22 trading days. Second, we investigate the predictive power of VIXC against alternative volatility forecasts derived from historical index prices. We find that for time horizons shorter than 10 trading days, VIXC provides more accurate forecasts. However, for longer time horizons, the historical volatilities, particularly the random walk, provide better forecasts. We conclude that VIXC cannot incorporate all information contained in historical index prices for predicting future volatility.
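A sketch of the random-walk volatility benchmark referred to above: the forecast of future realized volatility is simply the current realized volatility, and competing forecasters are ranked by out-of-sample RMSE. The volatility series here is a simulated placeholder for the Canadian data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Placeholder daily realized-volatility series (e.g. from rolling 22-day windows).
rv = np.abs(np.cumsum(rng.normal(0, 0.01, 500)) + 0.2)

# Random-walk forecast: sigma_hat_{t+h} = sigma_t (here h = 22 trading days).
h = 22
rw_forecast = rv[:-h]
actual = rv[h:]
rmse_rw = np.sqrt(np.mean((rw_forecast - actual) ** 2))
print(f"random-walk RMSE over a {h}-day horizon: {rmse_rw:.4f}")

# In the thesis the same RMSE comparison pits the VIXC and MVX implied
# volatilities against historical forecasters like this one.
```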