190 results for Random Processes
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two aspects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the size of the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
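Purely as an illustration of the environment described (not the paper's equilibrium analysis), the sketch below simulates random pairwise matching with an arbitrary distribution of producer types and stochastic consumption wants; the parameter names (`producer_shares`, `want_prob`) and the double-coincidence trading rule are assumptions made for this sketch.

```python
"""Illustrative sketch (not the paper's model solution): random pairwise
matching with an arbitrary distribution of producer types and stochastic
consumption wants, counting double-coincidence trades."""
import numpy as np

rng = np.random.default_rng(0)

n_agents = 10_000
producer_shares = np.array([0.5, 0.3, 0.2])      # assumed fractions of producers of goods 0,1,2
want_prob = np.array([[0.0, 0.8, 0.4],           # assumed prob. that a producer of good i
                      [0.6, 0.0, 0.7],           # wants to consume good j in a given meeting
                      [0.5, 0.9, 0.0]])

# Assign each agent a production type according to producer_shares.
types = rng.choice(len(producer_shares), size=n_agents, p=producer_shares)

def one_round(types):
    """Randomly pair agents and return the fraction of double-coincidence trades."""
    order = rng.permutation(len(types))
    a, b = types[order[0::2]], types[order[1::2]]
    a_wants_b = rng.random(len(a)) < want_prob[a, b]   # a wants what b produces
    b_wants_a = rng.random(len(a)) < want_prob[b, a]   # b wants what a produces
    return np.mean(a_wants_b & b_wants_a)

trade_freq = np.mean([one_round(types) for _ in range(50)])
print(f"frequency of double-coincidence (barter) trades: {trade_freq:.3f}")
```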
Abstract:
Confidence in decision making is an important dimension of managerial behavior. However, what is the relation between confidence, on the one hand, and the fact of receiving or expecting to receive feedback on decisions taken, on the other hand? To explore this and related issues in the context of everyday decision making, use was made of the ESM (Experience Sampling Method) to sample decisions taken by undergraduates and business executives. For several days, participants received 4 or 5 SMS messages daily (on their mobile telephones) at random moments, at which point they completed brief questionnaires about their current decision making activities. Issues considered here include differences between the types of decisions faced by the two groups, their structure, feedback (received and expected), and confidence in decisions taken as well as in the validity of feedback. No relation was found between confidence in decisions and whether participants received or expected to receive feedback on those decisions. In addition, although participants are clearly aware that feedback can provide both confirming and disconfirming evidence, their ability to specify appropriate feedback is imperfect. Finally, difficulties experienced in using the ESM are discussed, as are possibilities for further research using this methodology.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area specific estimates of bias and variance; and, b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
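As a rough sketch of the kind of composite estimator described (a convex combination of a direct and an indirect estimator), the snippet below uses the textbook MSE-minimizing weight; the function name and toy numbers are assumptions, not the weighting schemes (a) and (b) studied in the paper.

```python
"""Illustrative sketch of a composite small-area estimator: a convex
combination of a direct and an indirect (synthetic) estimator, with the
weight chosen to minimize an estimated mean squared error. The weight
formula below is the textbook expression, not necessarily the exact
weights studied in the paper."""

def composite_estimate(direct, indirect, var_direct, mse_indirect):
    """w minimizes w^2*V_d + (1-w)^2*MSE_i when the two components are independent."""
    w = mse_indirect / (var_direct + mse_indirect)
    return w * direct + (1 - w) * indirect, w

# Toy numbers for one small area (assumed, for illustration only).
direct, var_direct = 12.4, 4.0        # design-based direct estimate and its variance
indirect, mse_indirect = 10.1, 1.5    # synthetic estimate and its estimated MSE (bias^2 + variance)

est, w = composite_estimate(direct, indirect, var_direct, mse_indirect)
print(f"composite estimate = {est:.2f} (weight on direct = {w:.2f})")
```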
Abstract:
We study the existence of moments and the tail behaviour of the densities of storage processes. We give sufficient conditions for the existence and non-existence of moments using the integrability conditions of submultiplicative functions with respect to Lévy measures. Then, we study the asymptotic behaviour of the tails of these processes using the concave or convex envelope of the release rate function.
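For context, the classical integrability criterion that conditions of this type build on (for a Lévy process $X_t$ with Lévy measure $\nu$ and a submultiplicative, locally bounded function $g$; see, e.g., Sato's monograph) reads
$$E\,[g(X_t)] < \infty \ \text{ for some (equivalently, all) } t>0 \quad\Longleftrightarrow\quad \int_{|x|>1} g(x)\,\nu(dx) < \infty,$$
with the paper adapting criteria of this kind to storage processes and their release rates.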
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
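A minimal sketch in the spirit of the individual-sequence prediction ideas mentioned, using exponentially weighted experts and a randomized forecast; this is a generic construction, not necessarily the paper's algorithm, and the expert class, learning rate and Markov example are assumptions.

```python
"""Sketch of a randomized binary-sequence predictor using exponentially
weighted experts (a standard construction from the individual-sequence
prediction literature; not necessarily the paper's exact algorithm)."""
import numpy as np

rng = np.random.default_rng(1)

# A tiny class of "experts": each maps the previous bit to a forecast.
experts = [
    lambda prev: 0,            # always predict 0
    lambda prev: 1,            # always predict 1
    lambda prev: prev,         # repeat the last bit
    lambda prev: 1 - prev,     # flip the last bit
]

def mistake_rate(bits, eta=0.5):
    """Run the randomized predictor over the sequence and return its mistake rate."""
    weights = np.ones(len(experts))
    prev, mistakes = 0, 0
    for y in bits:
        forecasts = np.array([e(prev) for e in experts])
        p1 = np.dot(weights, forecasts) / weights.sum()   # weighted probability of '1'
        pred = int(rng.random() < p1)                     # randomized prediction
        mistakes += int(pred != y)
        weights *= np.exp(-eta * (forecasts != y))        # penalize wrong experts
        prev = y
    return mistakes / len(bits)

# Example source: a stationary, ergodic two-state Markov chain.
P = {0: 0.9, 1: 0.2}                 # P(next bit = 1 | current bit)
bits, b = [], 0
for _ in range(20_000):
    b = int(rng.random() < P[b])
    bits.append(b)
print(f"average mistake rate: {mistake_rate(bits):.3f}")
```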
Abstract:
A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared correlations of residuals obtained after filtering a series through ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model. This is so because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness-of-fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
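A simplified sketch of the criterion described, for the purely fractional FI(d) case: filter the series by (1-L)^d for a candidate d, compute the residual autocorrelations and minimize their sum of squares (which, scaled by the sample size, is the Box-Pierce statistic). The truncated filter and grid search are illustrative choices, not the paper's implementation.

```python
"""Sketch of a minimum-distance estimator for the fractional parameter d of a
pure FI(d) process: filter by (1-L)^d, compute residual autocorrelations and
minimize their sum of squares. A simplified illustration only."""
import numpy as np

def frac_diff(y, d):
    """Apply the truncated (1-L)^d filter via its binomial expansion."""
    n = len(y)
    coefs = np.empty(n)
    coefs[0] = 1.0
    for k in range(1, n):
        coefs[k] = coefs[k - 1] * (k - 1 - d) / k
    out = np.empty(n)
    for t in range(n):
        out[t] = np.dot(coefs[: t + 1], y[t::-1])
    return out

def criterion(y, d, m=20):
    """Sum of squared autocorrelations (lags 1..m) of the filtered series."""
    e = frac_diff(y, d)
    e = e - e.mean()
    denom = np.dot(e, e)
    rho = np.array([np.dot(e[k:], e[:-k]) / denom for k in range(1, m + 1)])
    return np.sum(rho ** 2)

# Simulate an FI(d0) series by applying (1-L)^(-d0) to white noise.
rng = np.random.default_rng(2)
d0, n = 0.3, 800
y = frac_diff(rng.standard_normal(n), -d0)

grid = np.arange(-0.5, 1.0, 0.01)
d_hat = grid[np.argmin([criterion(y, d) for d in grid])]
print(f"true d = {d0}, estimated d = {d_hat:.2f}")
```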
Abstract:
This paper proposes a framework to examine business ethical dilemmas and business attitudes towards such dilemmas. Business ethical dilemmas can be understood as reflecting a contradiction between a socially detrimental process and a self-interested profitable consequence. This representation allows us to distinguish two forms of behavior differing by whether priority is put on consequences or on processes. We argue that these forms imply very different business attitudes towards society: controversial or competitive for the former and aligned or cooperative for the latter. These attitudes are then analyzed at the discursive level in order to address the question of good faith in business argumentation, i.e. to what extent these attitudes are consistent with actual business behaviors. We argue that consequential attitudes mostly involve communication and lobbying actions aimed at eluding the dilemma. Therefore, the question of good faith for consequential attitudes lies in the consistency between beliefs and discourse. On the other hand, procedural attitudes acknowledge the dilemma and claim a change of the process of behavior. They thus raise the question of the consistency between discourses and actual behavior. We apply this processes/consequences framework to the case of the oil industry's climate change ethical dilemma, which comes forth as a dilemma between emitting greenhouse gases and making more profits. We then examine the different attitudes of two oil corporations, BP Amoco and ExxonMobil, towards the dilemma.
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or whether (iii) correlation between effects and regressors is allowed, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
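A small numerical illustration of the equivalence claim in one familiar special case: treating the effects as parameters (group dummies) and sweeping them out (within transformation) give the same slope estimate. The data-generating values and numpy-only estimation are assumptions of this sketch, not the paper's framework.

```python
"""Numerical illustration of slope equivalence in a constant-slope
variable-intercept model: group dummies (effects as parameters) versus the
within (demeaning) transformation. Simulated data; numpy only."""
import numpy as np

rng = np.random.default_rng(3)
n_groups, n_per = 50, 10
g = np.repeat(np.arange(n_groups), n_per)

alpha = rng.normal(0, 2, n_groups)                 # group-specific intercepts
x = rng.normal(size=g.size) + 0.5 * alpha[g]       # regressor correlated with the effects
y = 1.5 * x + alpha[g] + rng.normal(size=g.size)   # constant slope = 1.5

# (i) Effects as parameters: regress y on x and a full set of group dummies.
D = (g[:, None] == np.arange(n_groups)).astype(float)
X = np.column_stack([x, D])
beta_dummies = np.linalg.lstsq(X, y, rcond=None)[0][0]

# (ii) Effects swept out: within (demeaning) transformation, then OLS on x alone.
group_mean = lambda v: np.bincount(g, v) / np.bincount(g)
x_w = x - group_mean(x)[g]
y_w = y - group_mean(y)[g]
beta_within = np.dot(x_w, y_w) / np.dot(x_w, x_w)

print(f"slope with dummies:               {beta_dummies:.6f}")
print(f"slope with within transformation: {beta_within:.6f}")
```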
Abstract:
This paper analyses the integration process that firms follow to implement Supply Chain Management (SCM). The study was inspired by the integration model proposed by Stevens (1989), who suggests that companies first integrate internally and then extend integration to other supply chain members, such as customers and suppliers. To analyse the integration process, a survey was conducted among Spanish food manufacturers. The results show that there are companies in three different integration stages. In stage I, companies are not integrated. In stage II, companies have a medium-high level of internal integration in the Logistics-Production interface, a low level of internal integration in the Logistics-Marketing interface, and a medium level of external integration. And, in stage III, companies have high levels of integration in both internal interfaces and in some of their supply chain relationships.
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and on k-anonymity concepts, and both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques for obtaining an anonymized graph with a desired k-anonymity value, analysing the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
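As a toy illustration of the two ingredients being compared, the sketch below applies a simple random edge perturbation and checks k-degree anonymity (every observed degree shared by at least k nodes); the perturbation rule and the toy graph are assumptions, not the algorithms evaluated in the paper.

```python
"""Sketch of the two ingredients mentioned in the abstract: randomization
(random edge rewiring) and a k-anonymity check on node degrees. Illustrative
only; real anonymization algorithms are more careful about graph utility."""
import random
from collections import Counter

random.seed(4)

def random_perturbation(edges, n_nodes, p=0.1):
    """Delete each edge with probability p and add the same number of random edges."""
    edges = set(edges)
    kept = {e for e in edges if random.random() > p}
    while len(kept) < len(edges):
        u, v = random.sample(range(n_nodes), 2)
        kept.add((min(u, v), max(u, v)))
    return kept

def k_degree_anonymity(edges, n_nodes):
    """Largest k such that every observed degree is shared by at least k nodes."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    for node in range(n_nodes):
        deg.setdefault(node, 0)
    return min(Counter(deg.values()).values())

# Toy graph: a ring with a few chords.
n = 30
edges = {(i, (i + 1) % n) if i < (i + 1) % n else ((i + 1) % n, i) for i in range(n)}
edges |= {(0, 10), (5, 20), (7, 25)}

anon = random_perturbation(edges, n, p=0.2)
print("k-degree anonymity before:", k_degree_anonymity(edges, n))
print("k-degree anonymity after :", k_degree_anonymity(anon, n))
```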
Abstract:
Empirical studies have shown little evidence to support the presence of all the unit roots in the $\Delta_4$ filter in quarterly seasonal time series. This paper analyses the performance of the Hylleberg, Engle, Granger and Yoo (1990) (HEGY) procedure when the roots under the null are not all present. We exploit the Vector of Quarters representation and the cointegration relationship between the quarters when the factors $(1-L)$, $(1+L)$, $(1+L^2)$, $(1-L^2)$ and $(1+L+L^2+L^3)$ are a source of nonstationarity in a process, in order to obtain the distribution of the tests of the HEGY procedure when the underlying processes have a root at the zero frequency, the Nyquist frequency, two complex conjugate roots of frequency $\pi/2$, and two combinations of the previous cases. We show both theoretically and through a Monte Carlo analysis that the t-ratios $t_{\hat\pi_1}$ and $t_{\hat\pi_2}$ and the F-type tests used in the HEGY procedure have the same distribution as under the null of a seasonal random walk when the corresponding root(s) is/are present, although this is not the case for the t-ratio tests associated with unit roots at frequency $\pi/2$.
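For reference, a sketch of how the HEGY auxiliary regression is typically set up for a quarterly series, with transformed regressors isolating the roots at frequencies 0, $\pi$ and $\pi/2$; deterministic terms and lag augmentation are kept minimal here and the toy series is an assumption.

```python
"""Sketch of the HEGY auxiliary regression for a quarterly series: build the
transformed regressors isolating the unit roots at frequencies 0, pi and pi/2,
then run OLS of the fourth difference on their lags. Illustrative only."""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200
y = np.cumsum(rng.standard_normal(n + 4))        # toy series with a zero-frequency unit root

L = lambda x, k=1: np.roll(x, k)                 # lag operator (first k values invalid)

y1 = y + L(y) + L(y, 2) + L(y, 3)                # (1 + L + L^2 + L^3) y  -> frequency 0
y2 = -(y - L(y) + L(y, 2) - L(y, 3))             # -(1 - L + L^2 - L^3) y -> Nyquist frequency
y3 = -(y - L(y, 2))                              # -(1 - L^2) y           -> frequency pi/2
y4 = y - L(y, 4)                                 # (1 - L^4) y, the dependent variable

# Regress y4_t on y1_{t-1}, y2_{t-1}, y3_{t-2}, y3_{t-1} (drop invalid start-up rows).
X = np.column_stack([L(y1), L(y2), L(y3, 2), L(y3)])[8:]
res = sm.OLS(y4[8:], sm.add_constant(X)).fit()
print(res.tvalues[1:3])                          # t-ratios associated with pi_1 and pi_2
```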
Abstract:
The study of the processes through which political economy became an academic discipline is an area of growing interest in the history of economic thought. This study has been approached through the analysis of the importance of political economy in a set of institutions regarded as key to the expansion of economics in Western societies in the second half of the nineteenth century and the first decades of the twentieth: universities, economic societies, periodical publications with economic content, and national parliaments. This paper presents a comparison between the development of the process of institutionalization of political economy in Spain and Italy, through the study of the presence of this discipline in the aforementioned institutions for the period 1860-1900. The aim is to gauge the possible existence of a common path in the institutionalization of political economy in both countries, as a first step towards the elaboration of a supranational model of the institutionalization of economics in this period.
Abstract:
This study presents new evidence concerning the uneven processes of industrialization in nineteenth-century Spain and Italy, based on a disaggregate analysis of the productive sectors of which the aggregate indices are composed. The use of multivariate time-series analysis techniques can aid our understanding and characterization of these two processes of industrialization. The identification of those sectors with key roles in leading industrial growth provides new evidence concerning the factors that governed the behaviour of the aggregates in the two economies. In addition, the analysis of the existence of interindustry linkages reveals the scale of the industrialization process and, where significant differences exist, accounts for many of the divergences recorded in the historiography for the period 1850-1913.
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random effects covariance matrix and the true correlation pattern of the residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Leaving aside other criteria, such as the convenience of avoiding over-parameterised models, it appears worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
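A skeleton of the performance measures used in such studies (bias, MSE and true coverage of asymptotic confidence intervals), wrapped around a deliberately simple estimation problem with a misspecified correlation structure; the nonlinear mixed-model fitting via the Lindstrom-Bates linearization in nlme is not reproduced here.

```python
"""Skeleton of Monte Carlo performance measures (bias, MSE, CI coverage),
illustrated on a simple problem where within-group correlation is ignored;
the actual study fits nonlinear mixed models with nlme, not reproduced here."""
import numpy as np

rng = np.random.default_rng(6)

def one_replicate(theta=2.0, n=100):
    """Estimate a mean under within-group correlated errors; return the
    estimate and a naive i.i.d.-based standard error (the misspecification)."""
    u = np.repeat(rng.normal(0, 1.0, n // 10), 10)   # ignored group effect
    y = theta + u + rng.normal(0, 1.0, n)
    est = y.mean()
    se = y.std(ddof=1) / np.sqrt(n)                  # naive s.e. assuming independence
    return est, se

theta, n_rep = 2.0, 2000
ests, ses = np.array([one_replicate(theta) for _ in range(n_rep)]).T

bias = ests.mean() - theta
mse = np.mean((ests - theta) ** 2)
covered = np.abs(ests - theta) <= 1.96 * ses         # true coverage of the asymptotic 95% CI
print(f"bias = {bias:+.3f}, MSE = {mse:.3f}, 95% CI coverage = {covered.mean():.3f}")
```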