969 results for RANDOM-WALK SIMULATIONS


Relevance: 80.00%

Abstract:

See the thesis's bibliography for the references in the summary.

Relevance: 80.00%

Abstract:

Routine activity theory, introduced by Cohen & Felson in 1979, states that criminal acts arise from the convergence of criminals and victims in time and place in the absence of guardians. As the number of such convergences increases, criminal acts will also increase, even if the number of criminals or civilians within the vicinity of a city remains the same. Street robbery is a typical example, and its occurrence can be predicted using routine activity theory. Agent-based models allow the simulation of diversity among individuals, so an agent-based simulation of street robbery can be used to visualize how chronological aspects of human activity influence the incidence of street robbery. The conceptual model identifies three classes of people (criminals, civilians and police), each with its own activity areas. Police exist only as agents of formal guardianship. Criminals with a tendency for crime search for their victims. Civilians without criminal tendency can be either victims or guardians. In addition to criminal tendency, each civilian in the model has a unique set of characteristics such as wealth, employment status and ability for guardianship. These agents are subjected to a random walk through a street environment guided by a Q-learning module, and the possible outcomes are analyzed.
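As a rough illustration of the random-walk component described above, the sketch below moves criminal, civilian and police agents at random on a street grid and counts convergences of an offender with an unguarded civilian. It omits the Q-learning module, and all class names, parameters and thresholds are illustrative assumptions rather than the authors' model.

```python
import random

GRID = 20          # street environment is a GRID x GRID grid of cells
STEPS = 1000       # number of simulation ticks
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

class Agent:
    def __init__(self, kind):
        self.kind = kind                      # "criminal", "civilian" or "police"
        self.x = random.randrange(GRID)
        self.y = random.randrange(GRID)
        # assumption: 30% of civilians are capable guardians
        self.guardian = kind == "civilian" and random.random() < 0.3

    def step(self):
        dx, dy = random.choice(MOVES)         # unbiased random-walk step
        self.x = (self.x + dx) % GRID
        self.y = (self.y + dy) % GRID

def simulate(n_criminals=10, n_civilians=80, n_police=10):
    agents = ([Agent("criminal") for _ in range(n_criminals)]
              + [Agent("civilian") for _ in range(n_civilians)]
              + [Agent("police") for _ in range(n_police)])
    robberies = 0
    for _ in range(STEPS):
        for a in agents:
            a.step()
        cells = {}
        for a in agents:                      # group agents by cell
            cells.setdefault((a.x, a.y), []).append(a)
        for occupants in cells.values():
            has_criminal = any(a.kind == "criminal" for a in occupants)
            has_victim = any(a.kind == "civilian" for a in occupants)
            guarded = any(a.kind == "police" or a.guardian for a in occupants)
            if has_criminal and has_victim and not guarded:
                robberies += 1                # routine-activity convergence
    return robberies

if __name__ == "__main__":
    random.seed(1)
    print("simulated robberies:", simulate())
```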

Relevance: 80.00%

Abstract:

Exam and solutions in LaTeX

Relevance: 80.00%

Abstract:

Exam and solutions in LaTeX

Relevance: 80.00%

Abstract:

Exam and solutions in PDF

Relevance: 80.00%

Abstract:

Exam and solutions in PDF

Relevance: 80.00%

Abstract:

Exam and solutions in PDF

Relevance: 80.00%

Abstract:

Exam and solutions in LaTeX

Relevance: 80.00%

Abstract:

This paper estimates linear and nonlinear error-correction models for the spot prices of four types of coffee. Consistent with economic reasoning, we find evidence that when prices are above their equilibrium level they return to it more slowly than when they are below it. This may reflect the fact that, in the short run, it is easier for coffee-producing countries to restrict supply in order to raise prices than to increase supply in order to lower them. We also find evidence that the adjustment is faster when deviations from equilibrium are larger. The forecasts obtained from the nonlinear and asymmetric error-correction models considered in the paper offer a slight improvement over those produced by a random walk model.
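One common way to write the kind of asymmetric error-correction equation described above (the paper's exact specification may differ) is, for the log spot price $p_t$ and the equilibrium error $z_{t-1} = p_{t-1} - \beta' x_{t-1}$:

$$
\Delta p_t = \alpha^{+}\, z_{t-1}\,\mathbf{1}\{z_{t-1} > 0\} + \alpha^{-}\, z_{t-1}\,\mathbf{1}\{z_{t-1} \le 0\} + \sum_{i=1}^{k} \gamma_i\, \Delta p_{t-i} + \varepsilon_t ,
$$

where the finding that prices return to equilibrium more slowly from above than from below corresponds to $|\alpha^{+}| < |\alpha^{-}|$.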

Relevance: 80.00%

Abstract:

In this paper we employ the most representative models in the term structure of interest rates literature. In particular, we explore affine one-factor models and polynomial-type approximations such as Nelson and Siegel. Our empirical application uses monthly data for the USA and Colombia for estimation and forecasting. We find that affine models do not provide adequate performance either in-sample or out-of-sample. By contrast, parsimonious models such as Nelson and Siegel perform adequately in-sample; out-of-sample, however, they are not able to systematically improve upon the random walk benchmark forecast.
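For reference, the Nelson and Siegel curve mentioned above is commonly written in the Diebold-Li parameterization (the paper's exact variant may differ):

$$
y_t(\tau) = \beta_{0t} + \beta_{1t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau} + \beta_{2t}\!\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right),
$$

where $\tau$ is maturity, $\lambda$ governs the exponential decay, and $\beta_{0t}$, $\beta_{1t}$ and $\beta_{2t}$ act as level, slope and curvature factors.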

Relevance: 80.00%

Abstract:

New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration data sets extend an additional 2000 yr, from 0-26 cal kyr BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision, and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically-dated tree-ring samples, converted with a box diffusion model to marine mixed-layer ages, cover the period from 0-10.5 cal kyr BP. Beyond 10.5 cal kyr BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific C-14 reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring data sets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al. (this issue).
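As a schematic of the random-walk idea only (not the Buck and Blackwell model, which also handles calendar-age uncertainty), the sketch below smooths noisy determinations under a local-level, i.e. random-walk, prior using a Kalman filter and a Rauch-Tung-Striebel smoother; all data and variances are made up for illustration.

```python
import numpy as np

def kalman_smooth_random_walk(y, obs_var, proc_var, m0=0.0, p0=1e6):
    """Smooth observations y with per-point variances obs_var under a
    random-walk prior: mu_t = mu_{t-1} + w_t, y_t = mu_t + v_t."""
    n = len(y)
    m_f = np.zeros(n); p_f = np.zeros(n)        # filtered mean / variance
    m_p = np.zeros(n); p_p = np.zeros(n)        # predicted mean / variance
    m, p = m0, p0
    for t in range(n):
        m_p[t], p_p[t] = m, p + proc_var        # predict (random-walk step)
        k = p_p[t] / (p_p[t] + obs_var[t])      # Kalman gain
        m = m_p[t] + k * (y[t] - m_p[t])        # update with observation
        p = (1 - k) * p_p[t]
        m_f[t], p_f[t] = m, p
    m_s = m_f.copy(); p_s = p_f.copy()          # backward (RTS) smoothing pass
    for t in range(n - 2, -1, -1):
        c = p_f[t] / p_p[t + 1]
        m_s[t] = m_f[t] + c * (m_s[t + 1] - m_p[t + 1])
        p_s[t] = p_f[t] + c**2 * (p_s[t + 1] - p_p[t + 1])
    return m_s, p_s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_curve = np.cumsum(rng.normal(0, 5.0, 100))   # synthetic random-walk "curve"
    sigma = rng.uniform(10.0, 30.0, 100)              # per-point measurement errors
    obs = true_curve + rng.normal(0, sigma)
    curve, var = kalman_smooth_random_walk(obs, sigma**2, 25.0)
    print("posterior mean of first five points:", curve[:5].round(1))
```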

Relevance: 80.00%

Abstract:

A new calibration curve for the conversion of radiocarbon ages to calibrated (cal) ages has been constructed and internationally ratified to replace IntCal98, which extended from 0-24 cal kyr BP (Before Present, 0 cal BP = AD 1950). The new calibration data set for terrestrial samples extends from 0-26 cal kyr BP, but with much higher resolution beyond 11.4 cal kyr BP than IntCal98. Dendrochronologically-dated tree-ring samples cover the period from 0-12.4 cal kyr BP. Beyond the end of the tree rings, data from marine records (corals and foraminifera) are converted to the atmospheric equivalent with a site-specific marine reservoir correction to provide terrestrial calibration from 12.4-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a coherent statistical approach based on a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The tree-ring data sets, sources of uncertainty, and regional offsets are discussed here. The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed in brief, but details are presented in Hughen et al. (this issue a). We do not make a recommendation for calibration beyond 26 cal kyr BP at this time; however, potential calibration data sets are compared in another paper (van der Plicht et al., this issue).

Relevance: 80.00%

Abstract:

Capturing the pattern of structural change is a relevant task in applied demand analysis, as consumer preferences may vary significantly over time. Filtering and smoothing techniques have recently played an increasingly relevant role. A dynamic Almost Ideal Demand System with random walk parameters is estimated in order to detect modifications in consumer habits and preferences, as well as changes in the behavioural response to prices and income. Systemwise estimation, consistent with the underlying constraints from economic theory, is achieved through the EM algorithm. The proposed model is applied to UK aggregate consumption of alcohol and tobacco, using quarterly data from 1963 to 2003. Increased alcohol consumption is explained by a preference shift, addictive behaviour and a lower price elasticity. The dynamic and time-varying specification is consistent with the theoretical requirements imposed at each sample point. (c) 2005 Elsevier B.V. All rights reserved.
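A schematic of the time-varying specification described above (the paper's exact system, price index and EM details differ) writes each budget share equation with coefficients that follow random walks:

$$
w_{it} = \alpha_{it} + \sum_{j}\gamma_{ijt}\ln p_{jt} + \beta_{it}\ln\!\left(\frac{x_t}{P_t}\right) + \varepsilon_{it}, \qquad \theta_t = \theta_{t-1} + \eta_t ,
$$

where $w_{it}$ is the budget share of good $i$, $p_{jt}$ are prices, $x_t$ is total expenditure, $P_t$ is a price index, and $\theta_t$ stacks the time-varying coefficients, on which the adding-up, homogeneity and symmetry restrictions are imposed at each sample point.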

Relevance: 80.00%

Abstract:

Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
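To see why multiple statistics per clade inflate the false-positive rate, consider the simplifying assumption of $k$ independent tests at level $\alpha$ (the clade statistics are in fact correlated, so this is only indicative):

$$
\Pr(\text{at least one significant}) = 1 - (1 - \alpha)^{k}, \qquad \text{e.g. } 1 - 0.95^{4} \approx 0.19 .
$$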

Relevance: 80.00%

Abstract:

The performance of various statistical models and commonly used financial indicators for forecasting securitised real estate returns is examined for five European countries: the UK, Belgium, the Netherlands, France and Italy. Within a VAR framework, it is demonstrated that the gilt-equity yield ratio is in most cases a better predictor of securitised returns than the term structure or the dividend yield. In particular, investors should consider in their real estate return models the predictability of the gilt-equity yield ratio in Belgium, the Netherlands and France, and the term structure of interest rates in France. Predictions obtained from the VAR and univariate time-series models are compared with the predictions of an artificial neural network model. It is found that, whilst no single model is universally superior across all series, accuracy measures and horizons considered, the neural network model is generally able to offer the most accurate predictions for 1-month horizons. For quarterly and half-yearly forecasts, the random walk with a drift is the most successful for the UK, Belgian and Dutch returns, and the neural network for French and Italian returns. Although this study underscores market context and forecast horizon as parameters relevant to the choice of the forecast model, it strongly indicates that analysts should exploit the potential of neural networks and assess their forecast performance more fully against more traditional models.
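As a point of reference for the benchmark mentioned above, the sketch below implements a generic random-walk-with-drift forecast; the series, horizons and parameters are placeholders rather than the paper's data.

```python
import numpy as np

def rw_drift_forecast(series, horizon):
    """h-step-ahead random-walk-with-drift forecast:
    last observed value plus horizon times the mean one-step change."""
    drift = np.diff(series).mean()
    return series[-1] + horizon * drift

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    index = np.cumsum(rng.normal(0.2, 1.0, 120)) + 100.0   # synthetic monthly return index
    for h in (1, 3, 6):                                     # 1-month, quarterly, half-yearly
        print(f"{h}-step forecast:", round(rw_drift_forecast(index, h), 2))
```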