29 results for Random-Walk Hypothesis


Relevance:

80.00%

Abstract:

Early empirical studies of exchange rate determinants demonstrated that fundamentals-based monetary models were unable to outperform the benchmark random walk model in out-of-sample forecasts, while later papers found evidence in favor of long-run exchange rate predictability. More recent theoretical work has adopted a microeconomic structure: a utility-based new open economy macroeconomic framework and a rational expectations present value model. Some recent empirical work argues that if the models are adjusted for parameter instability, they become good predictors of nominal exchange rates, while other work uses aggregate idiosyncratic volatility to generate good predictions. This latest research supports the idea that fundamental economic variables are likely to influence exchange rates, especially in the long run, and further that the emphasis should shift to economic value or utility-based measures when assessing these macroeconomic models.

Relevance:

80.00%

Abstract:

Many environmental studies require accurate simulation of water and solute fluxes in the unsaturated zone. This paper evaluates one- and multi-dimensional approaches for soil water flow as well as different spreading mechanisms to model solute behavior at different scales. For quantification of soil water fluxes, the Richards equation has become the standard. Although current numerical codes show perfect water balances, the calculated soil water fluxes in the case of head boundary conditions may depend largely on the method used for spatial averaging of the hydraulic conductivity. Atmospheric boundary conditions, especially in the case of phreatic groundwater levels fluctuating above and below a soil surface, require sophisticated solutions to ensure convergence. Concepts for flow in soils with macropores and unstable wetting fronts are still in development. One-dimensional flow models are formulated to work with lumped parameters in order to account for soil heterogeneity and preferential flow. They can be used at temporal and spatial scales that are of interest to water managers and policymakers. Multi-dimensional flow models are hampered by data and computation requirements. Their main strength is detailed analysis of typical multi-dimensional flow problems, including soil heterogeneity and preferential flow. Three physically based solute-transport concepts have been proposed to describe solute spreading during unsaturated flow: the stochastic-convective model (SCM), the convection-dispersion equation (CDE), and the fractional advection-dispersion equation (FADE). A less physical concept is the continuous-time random-walk process (CTRW). Of these, the SCM and the CDE are well established, and their strengths and weaknesses are identified. The FADE and the CTRW are more recent, and only a tentative strength-weakness-opportunity-threat (SWOT) analysis can be presented at this time. We discuss the effect of the number of dimensions in a numerical model and the spacing between model nodes on solute spreading and the values of the solute-spreading parameters. In order to meet the increasing complexity of environmental problems, two approaches of model combination are used: model integration and model coupling. A main drawback of model integration is the complexity of the resulting code. Model coupling requires a systematic physical domain and model communication analysis. The setup and maintenance of a hydrologic framework for model coupling requires substantial resources, but on the other hand, contributions can be made by many research groups.
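
Since the CDE plays a central role here, a minimal particle-tracking sketch may help fix ideas: the CDE's spreading can be reproduced by a random walk in which each particle takes a convective step plus a Gaussian dispersive step. The velocity, dispersion coefficient, and step counts below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal particle-tracking analogue of the 1-D convection-dispersion
# equation (CDE): each particle takes a deterministic convective step
# plus a Gaussian dispersive step. v, D, dt and the particle count are
# illustrative assumptions.
rng = np.random.default_rng(0)

v, D = 0.5, 0.02          # pore-water velocity [m/d], dispersion coeff. [m^2/d]
dt, n_steps = 0.1, 500    # time step [d], number of steps
n_particles = 10_000

x = np.zeros(n_particles)  # all solute starts at the surface (x = 0)
for _ in range(n_steps):
    x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

t = n_steps * dt
# For the CDE the plume mean should be v*t and the variance 2*D*t.
print(f"mean depth: {x.mean():.3f} (theory {v * t:.3f})")
print(f"variance  : {x.var():.3f} (theory {2 * D * t:.3f})")
```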

Relevance:

80.00%

Abstract:

This paper develops a model of exchange rate determination within an error correction framework. The intention is to identify both long- and short-term determinants that can be used to forecast the AUD/US exchange rate. The paper identifies a set of significant variables associated with exchange rate movements over a twenty-year period from 1984 to 2004. Specifically, the overnight interest rate differential, Australia's foreign trade-weighted exposure to commodity prices, and exchange rate volatility are identified as variables able to explain movements in the AUD/US dollar relationship. An error correction model is subsequently constructed that incorporates an equilibrium correction term, a short-term interest rate differential variable, a commodity price variable, and a proxy for exchange rate volatility. The model is then used to forecast out of sample and is found to dominate a naïve random walk model on three different metrics.
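
As a rough illustration of this kind of forecasting horse race, the sketch below pits a stylised one-equation error correction model against the naive random walk on synthetic cointegrated data. The single equilibrium term stands in for the paper's interest-differential, commodity-price, and volatility regressors, and all numbers are assumptions.

```python
import numpy as np

# Hedged sketch: one-step out-of-sample comparison of a stylised error
# correction model (ECM) against the naive random walk benchmark.
# The cointegrating coefficient (0.8) is assumed known for brevity.
rng = np.random.default_rng(1)

T = 400
x = np.cumsum(rng.standard_normal(T))          # I(1) fundamental
y = 0.8 * x + rng.standard_normal(T) * 0.5     # cointegrated "exchange rate"

split = 300
ecm_err, rw_err = [], []
for t in range(split, T - 1):
    # Re-estimate on the expanding window: dy_s = a + g * (y - 0.8x)_{s-1}
    dy = np.diff(y[: t + 1])
    ect = (y - 0.8 * x)[:t]                    # lagged equilibrium error
    A = np.column_stack([np.ones(t), ect])
    coef, *_ = np.linalg.lstsq(A, dy, rcond=None)
    ecm_fc = y[t] + coef[0] + coef[1] * (y[t] - 0.8 * x[t])
    ecm_err.append(y[t + 1] - ecm_fc)
    rw_err.append(y[t + 1] - y[t])             # random walk: no change

rmse = lambda e: np.sqrt(np.mean(np.square(e)))
print(f"ECM RMSE: {rmse(ecm_err):.3f}  RW RMSE: {rmse(rw_err):.3f}")
```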

Relevance:

80.00%

Abstract:

Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track as increasing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy μ, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥ 1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on error level and spatial scale. Failure to account for large errors relative to the scale of movement can produce substantial biases in the interpretation of movement patterns. This study provides researchers with a framework for understanding the limitations of their data and identifies how temporal subsampling can help to reduce the influence of spatial error on their conclusions.
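
The core simulation idea translates into a short sketch: draw a Lévy flight with power-law step lengths, blur the positions with Gaussian location error, and re-estimate the exponent μ from the blurred track. The exponent estimator is the standard maximum-likelihood form for a power law; all parameter values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: Lévy flight (power-law step lengths, exponent mu),
# positions blurred with Gaussian location error, mu re-estimated.
rng = np.random.default_rng(2)

mu, l_min, n_steps = 2.0, 1.0, 5000
u = 1.0 - rng.random(n_steps)                 # uniform on (0, 1]
lengths = l_min * u ** (-1.0 / (mu - 1.0))    # inverse-transform power law
angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
track = np.cumsum(
    np.column_stack([lengths * np.cos(angles), lengths * np.sin(angles)]), axis=0
)

def mu_hat(positions, l_min=1.0):
    """Standard maximum-likelihood power-law exponent from step lengths."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    steps = steps[steps >= l_min]             # estimator requires l >= l_min
    return 1.0 + steps.size / np.log(steps / l_min).sum()

for error_sd in (0.0, 1.0, 10.0):             # location-error standard deviation
    blurred = track + rng.normal(0.0, error_sd, track.shape)
    print(f"error SD {error_sd:5.1f} -> estimated mu {mu_hat(blurred):.2f}")
```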

Relevance:

80.00%

Abstract:

Peer-to-peer (P2P) networks are gaining increased attention from both the scientific community and the larger Internet user community. Data retrieval algorithms lie at the center of P2P networks, and this paper addresses the problem of efficiently searching for files in unstructured P2P systems. We propose an Improved Adaptive Probabilistic Search (IAPS) algorithm that is fully distributed and bandwidth efficient. IAPS uses ant-colony optimization and takes file types into consideration in order to search for file container nodes with a high probability of success. We have performed extensive simulations to study the performance of IAPS, and we compare it with the Random Walk and Adaptive Probabilistic Search algorithms. Our experimental results show that IAPS achieves high success rates, high response rates, and significant message reduction.
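
A toy version of the search problem can be sketched as follows. The pheromone update below is a crude stand-in for IAPS's ant-colony mechanism (the paper's actual update rule and file-type weighting are not reproduced), and the topology and constants are assumptions.

```python
import random

# Hedged sketch of random-walk file search in an unstructured P2P overlay,
# with a crude pheromone bias standing in for IAPS's ant-colony update.
random.seed(3)

N, DEG, TTL = 500, 4, 50
neighbors = {u: random.sample([v for v in range(N) if v != u], DEG) for u in range(N)}
holders = set(random.sample(range(N), 5))            # nodes holding the file
pheromone = {(u, v): 1.0 for u in neighbors for v in neighbors[u]}

def search(start, biased):
    node = start
    for hop in range(TTL):
        if node in holders:
            return hop                               # hit: hops taken
        nbrs = neighbors[node]
        if biased:
            w = [pheromone[(node, v)] for v in nbrs]
            node = random.choices(nbrs, weights=w)[0]
        else:
            node = random.choice(nbrs)
    return None                                      # miss within TTL

# Train the biased walker: reinforce edges along successful walks.
for _ in range(2000):
    path, node = [], random.randrange(N)
    for _ in range(TTL):
        nxt = random.choice(neighbors[node])
        path.append((node, nxt))
        node = nxt
        if node in holders:
            for e in path:
                pheromone[e] += 0.5                  # deposit pheromone
            break

for biased in (False, True):
    hits = [search(random.randrange(N), biased) for _ in range(2000)]
    ok = [h for h in hits if h is not None]
    print(f"biased={biased}: success {len(ok) / len(hits):.2%}, "
          f"mean hops {sum(ok) / max(len(ok), 1):.1f}")
```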

Relevance:

80.00%

Abstract:

This paper proposes a practical and cost-effective approach to construct a fully distributed roadside communication infrastructure that facilitates localized content dissemination to vehicles in urban areas. The proposed infrastructure is composed of distributed lightweight low-cost devices called roadside buffers (RSBs), where each RSB has limited buffer storage and can wirelessly transmit cached contents to fast-moving vehicles. To enable the distributed RSBs to work toward globally optimal performance (e.g., minimal average file download delays), we propose a fully distributed algorithm to optimally determine the content replication strategy at RSBs. Specifically, we first develop a generic analytical model to evaluate the download delay of files, given the probability density of file distribution at RSBs. Then, we formulate the RSB content replication process as an optimization problem and devise a fully distributed content replication scheme accordingly to enable vehicles to intelligently recommend desirable content files to RSBs. The proposed infrastructure is designed to optimize the global network utility, which accounts for the integrated download experience of users and the download demands of files. Using extensive simulations, we validate the effectiveness of the proposed infrastructure and show that the proposed distributed protocol can approach the optimal performance and significantly outperform traditional heuristics.
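
The delay model admits a compact illustration: if each RSB independently caches file i with probability p_i, a vehicle encounters a holder after roughly 1/p_i buffers, so the demand-weighted delay is the sum of q_i/p_i over files with demands q_i. The square-root allocation below is the textbook optimum for that objective and is used here only as a stand-in for the paper's distributed scheme.

```python
import numpy as np

# Hedged sketch of the download-delay model behind content replication.
# Demands are Zipf-like; all constants are illustrative assumptions.
rng = np.random.default_rng(4)

n_files = 20
q = rng.zipf(1.5, n_files).astype(float)   # Zipf-like download demands
q /= q.sum()

def avg_delay(p):
    return np.sum(q / p)                   # expected RSBs visited per request

uniform = np.full(n_files, 1.0 / n_files)
proportional = q.copy()
square_root = np.sqrt(q) / np.sqrt(q).sum()  # minimises sum(q/p) s.t. sum(p)=1

for name, p in [("uniform", uniform), ("proportional", proportional),
                ("square-root", square_root)]:
    print(f"{name:12s}: average delay {avg_delay(p):7.1f}")
```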

Relevance:

80.00%

Abstract:

The recent wide adoption of electronic medical records (EMRs) presents great opportunities and challenges for data mining. EMR data are largely temporal, often noisy, irregular, and high dimensional. This paper constructs a novel ordinal regression framework for predicting medical risk stratification from EMRs. First, a conceptual view of the EMR as a temporal image is constructed to extract a diverse set of features. Second, ordinal modeling is applied to predict cumulative or progressive risk. The challenge is to build a transparent predictive model that works with a large number of weakly predictive features and, at the same time, is stable against resampling variations. Our solution employs sparsity methods that are stabilized through domain-specific feature interaction networks. We introduce two indices that measure model stability against data resampling. Feature networks are used to generate two multivariate Gaussian priors with sparse precision matrices (the Laplacian and the random walk). We apply the framework to a large short-term suicide risk prediction problem and demonstrate that our methods outperform clinicians by a large margin, discover suicide risk factors that conform with mental health knowledge, and produce models with enhanced stability.
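
One way to picture the network-stabilized prior: turn the feature-interaction network into a graph Laplacian L and use it as the sparse precision matrix of a Gaussian prior, so the penalty w^T L w sums (w_i - w_j)^2 over interacting feature pairs and pulls connected features toward similar weights. The tiny network and data below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: a feature-interaction network as a Laplacian precision
# matrix for a Gaussian prior on regression weights.
rng = np.random.default_rng(5)

edges = [(0, 1), (1, 2), (3, 4)]            # assumed feature interactions
d = 5
L = np.zeros((d, d))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

X = rng.standard_normal((40, d))
w_true = np.array([1.0, 1.0, 1.0, -1.0, -1.0])
y = X @ w_true + 0.3 * rng.standard_normal(40)

lam, eps = 5.0, 1e-3                        # prior strength; eps keeps it proper
# MAP estimate under the N(0, (lam*L + eps*I)^{-1}) prior:
w_map = np.linalg.solve(X.T @ X + lam * L + eps * np.eye(d), X.T @ y)
print(np.round(w_map, 2))                   # connected features share weight
```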

Relevance:

80.00%

Abstract:

A common explanation for the inability of the monetary model to beat the random walk in forecasting future exchange rates is that conventional time series tests may have low power, and that panel data should generate more powerful tests. This paper provides an extensive evaluation of this power argument for the use of panel data in the forecasting context. In particular, simulations show that although pooling the individual prediction tests can lead to substantial power gains, pooling only the parameters of the forecasting equation, as suggested in the previous literature, does not seem to generate more powerful tests. The simulation results are illustrated through an empirical application.
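
The pooling of individual prediction tests can be sketched directly: compute a per-country out-of-sample t-statistic on the loss differential, then combine them as the sum of t_i divided by the square root of N, which is asymptotically standard normal under independence. The data-generating process below is an assumption chosen only to show the power gain.

```python
import numpy as np

# Hedged sketch: pooled vs single-country out-of-sample prediction tests.
rng = np.random.default_rng(6)

N, T, effect = 20, 100, 0.15        # countries, forecast periods, mean gain
single_rej, pooled_rej = 0, 0
for _ in range(1000):
    d = effect + rng.standard_normal((N, T))            # loss differentials
    t_i = d.mean(axis=1) / (d.std(axis=1, ddof=1) / np.sqrt(T))
    single_rej += t_i[0] > 1.645                        # one country, 5% one-sided
    pooled_rej += t_i.sum() / np.sqrt(N) > 1.645        # pooled across countries

print(f"power of single-country test: {single_rej / 1000:.2f}")
print(f"power of pooled test        : {pooled_rej / 1000:.2f}")
```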

Relevance:

80.00%

Abstract:

Recommender systems have successfully addressed the problem of information overload. However, most recommendation methods suit scenarios where explicit feedback, e.g., ratings, is available, and may not be suitable for the more common scenarios with only implicit feedback. In addition, most existing methods focus only on the user and item dimensions and neglect additional contextual information, such as time and location. In this paper, we propose a graph-based generic recommendation framework that constructs a Multi-Layer Context Graph (MLCG) from implicit feedback data and then runs ranking algorithms on the MLCG for context-aware recommendation. Specifically, the MLCG incorporates a variety of contextual information into the recommendation process and models the interactions between users and items. Moreover, based on the MLCG, two novel ranking methods are developed: Context-aware Personalized Random Walk (CPRW) captures user preferences and current situations, and Semantic Path-based Random Walk (SPRW) incorporates the semantics of paths in the MLCG into the random walk model for recommendation. Experiments on two real-world datasets demonstrate the effectiveness of our approach.
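
A personalized random walk with restart on a toy context graph illustrates the ranking step; the graph, restart probability, and node roles below are assumptions, not the paper's MLCG construction.

```python
import numpy as np

# Hedged sketch: personalized random walk with restart on a small
# user-item-context graph. Nodes: 0-1 users, 2-4 items, 5 a context node.
A = np.array([
    [0, 0, 1, 1, 0, 1],   # user0 interacted with item0, item1 in context
    [0, 0, 0, 1, 1, 1],   # user1 interacted with item1, item2 in context
    [1, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
], dtype=float)
P = A / A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix

alpha, user = 0.15, 0                      # restart probability, target user
r = np.full(6, 1 / 6)
e = np.zeros(6); e[user] = 1.0             # restart distribution
for _ in range(100):
    r = alpha * e + (1 - alpha) * P.T @ r  # walk step with restart

items = {2: "item0", 3: "item1", 4: "item2"}
ranked = sorted(items, key=lambda i: -r[i])
print("recommendations for user0:", [items[i] for i in ranked])
```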

Relevance:

80.00%

Abstract:

Identifying influential spreaders in networks, which contributes to optimizing the use of available resources and the efficient spreading of information, is of great theoretical significance and practical value. A random-walk-based algorithm, LeaderRank, has been shown to be an effective and efficient method for recognizing leaders in social networks, even outperforming the well-known PageRank method. As LeaderRank was initially developed for binary directed networks, extensions to weighted networks deserve study. In this paper, a generalized algorithm, PhysarumSpreader, is proposed by combining LeaderRank with a positive feedback mechanism inspired by an amoeboid organism called Physarum polycephalum. By taking edge weights into consideration and adding the positive feedback mechanism, PhysarumSpreader is applicable to both directed and undirected weighted networks. Taking two real networks as examples, the effectiveness of the proposed method is demonstrated by comparison with other standard centrality measures.
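
The LeaderRank procedure itself is simple enough to sketch: add a ground node linked bidirectionally to every node (which makes the walk irreducible without a damping parameter), iterate the score-passing update to a steady state, and finally share the ground node's score evenly among the nodes. The toy network below is an assumption.

```python
import numpy as np

# Hedged sketch of LeaderRank on a toy directed network.
A = np.array([                   # toy adjacency, A[i, j] = edge i -> j
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
N = A.shape[0]

# Append the ground node (index N), bidirectionally linked to all nodes.
G = np.zeros((N + 1, N + 1))
G[:N, :N] = A
G[:N, N] = 1.0
G[N, :N] = 1.0

s = np.append(np.ones(N), 0.0)   # unit score everywhere, 0 on the ground
P = G / G.sum(axis=1, keepdims=True)
for _ in range(200):
    s = P.T @ s                  # each node passes its score to successors

scores = s[:N] + s[N] / N        # redistribute the ground node's score
print(np.round(scores, 3))
```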

Relevance:

80.00%

Abstract:

Identifying influential peers is an important issue for businesses promoting commercial strategies in social networks. This paper proposes a conductance eigenvector centrality (CEC) model to measure peer influence in complex social networks. The CEC model treats the social network as a conductance network and constructs methods to calculate the conductance matrix of the network. Through a novel random walk mechanism, the CEC model obtains stable CEC values that measure peer influence in the network. Experiments show that the CEC model achieves robust performance in identifying peer influence. It outperforms the benchmark algorithms and performs especially well when the network has a high clustering coefficient.
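
Since the paper's conductance-matrix construction is specific to the CEC model, the sketch below substitutes a plain symmetric weight matrix and shows only the generic step that yields a stable score: power iteration converging to the dominant eigenvector, which exists and is nonnegative by the Perron-Frobenius theorem.

```python
import numpy as np

# Hedged sketch: eigenvector centrality by power iteration on a weighted
# symmetric matrix (a stand-in for the paper's conductance matrix).
W = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [2.0, 0.0, 3.0, 0.5],
    [1.0, 3.0, 0.0, 1.0],
    [0.0, 0.5, 1.0, 0.0],
])

v = np.ones(W.shape[0])
for _ in range(200):
    v = W @ v
    v /= np.linalg.norm(v)       # normalise to keep the iteration stable

print("centrality scores:", np.round(v, 3))
```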

Relevance:

30.00%

Abstract:

Two-dimensional Principal Component Analysis (2DPCA) is a robust method for face recognition. Much recent research shows that 2DPCA is more reliable than the well-known PCA method in recognising human faces. However, in many cases this method tends to overfit the sample data. In this paper, we propose a novel method named random subspace two-dimensional PCA (RS-2DPCA), which combines 2DPCA with the random subspace (RS) technique. RS-2DPCA inherits the advantages of both 2DPCA and the RS technique, so it can avoid the overfitting problem and achieve high recognition accuracy. Experimental results on three benchmark face data sets (the ORL database, the Yale face database, and the extended Yale face database B) confirm our hypothesis that RS-2DPCA is superior to 2DPCA itself.
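
A compact sketch of the combination: 2DPCA keeps the top eigenvectors of the image covariance G = mean((A - Abar)^T (A - Abar)) and represents each image matrix A by the projection A X; the random subspace step lets each ensemble member classify with a random subset of those axes before a majority vote. The random 8x8 "faces" below are illustrative stand-ins for the ORL/Yale images.

```python
import numpy as np

# Hedged sketch of random-subspace 2DPCA with nearest-neighbour voting.
rng = np.random.default_rng(7)

h, w, n_per_class, n_classes = 8, 8, 5, 3
labels = np.repeat(np.arange(n_classes), n_per_class)
imgs = np.stack([rng.standard_normal((h, w)) + 2.0 * labels[k]
                 for k in range(labels.size)])

Abar = imgs.mean(axis=0)
G = np.mean([(A - Abar).T @ (A - Abar) for A in imgs], axis=0)
eigvecs = np.linalg.eigh(G)[1]
X = eigvecs[:, ::-1][:, :6]                  # top-6 2DPCA projection axes

def nn_predict(train_f, train_y, test_f):
    """1-nearest-neighbour on projected feature matrices."""
    d = [np.linalg.norm(train_f - f, axis=(1, 2)) for f in test_f]
    return train_y[np.argmin(d, axis=1)]

test_idx = np.arange(0, labels.size, n_per_class)   # one test image per class
train = np.ones(labels.size, dtype=bool)
train[test_idx] = False

votes = []
for _ in range(11):                          # 11 random-subspace members
    cols = rng.choice(6, size=3, replace=False)
    feats = imgs @ X[:, cols]                # project every image matrix
    votes.append(nn_predict(feats[train], labels[train], feats[test_idx]))

pred = np.array([np.bincount(v).argmax() for v in np.array(votes).T])
print("toy accuracy:", (pred == labels[test_idx]).mean())
```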

Relevance:

30.00%

Abstract:

An analytic solution to the multi-target Bayes recursion known as the δ-Generalized Labeled Multi-Bernoulli (δ-GLMB) filter was recently proposed by Vo and Vo in ["Labeled Random Finite Sets and Multi-Object Conjugate Priors," IEEE Trans. Signal Process., vol. 61, no. 13, pp. 3460-3475, 2013]. As a sequel to that paper, the present paper details efficient implementations of the δ-GLMB multi-target tracking filter. Each iteration of this filter involves an update operation and a prediction operation, both of which result in weighted sums of multi-target exponentials with intractably large numbers of terms. To truncate these sums, the ranked assignment and K-th shortest path algorithms are used in the update and prediction, respectively, to determine the most significant terms without exhaustively computing all of them. In addition, using tools derived from the same framework, such as probability hypothesis density filtering, we present inexpensive (relative to the δ-GLMB filter) look-ahead strategies to reduce the number of computations. A characterization of the L1 error in the multi-target density arising from the truncation is presented.
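
The truncation logic has a simple core: a δ-GLMB density is a weighted sum of components, and discarding all but the K highest-weight components incurs an L1 error equal to the discarded weight mass. The paper finds those top-K terms via ranked assignment and K-shortest paths without enumerating everything; the toy below simply enumerates a small weight vector to show the error behaviour.

```python
import numpy as np

# Hedged sketch: L1 truncation error of a weighted mixture equals the
# discarded (normalised, nonnegative) weight mass.
rng = np.random.default_rng(8)

weights = rng.dirichlet(np.ones(1000))      # 1000 normalised component weights
order = np.argsort(weights)[::-1]           # rank components by weight

for K in (10, 50, 200):
    kept = weights[order[:K]]
    l1_error = 1.0 - kept.sum()             # discarded mass = L1 error
    print(f"K={K:4d}: kept mass {kept.sum():.3f}, "
          f"L1 truncation error {l1_error:.3f}")
```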