975 results for random number generator
Abstract:
In this paper, we use a unique long-run dataset of regulatory constraints on capital account openness to explain stock market correlations. Since stock returns themselves are highly volatile, any examination of what drives correlations needs to focus on long runs of data. This is particularly true since some of the short-term changes in co-movements appear to reverse themselves (Delroy Hunter 2005). We argue that changes in the co-movement of indices have not been random. Rather, they are mainly driven by greater freedom to move funds from one country to another. In related work, Geert Bekaert and Campbell Harvey (2000) show that equity correlations increase after liberalization of capital markets, using a number of case studies from emerging countries. We examine this pattern systematically for the last century, and find it to be most pronounced in the recent past. We compare the importance of capital account openness with one main alternative explanation, the growing synchronization of economic fundamentals. We conclude that greater openness has been the single most important cause of growing correlations during the last quarter of a century, though increasingly correlated economic fundamentals also matter. In the conclusion, we offer some thoughts on why the effects of greater openness appear to be so much stronger today than they were during the last era of globalization before 1914.
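The paper's methodological point above, that co-movement must be measured over long runs because short-term shifts in correlation are noisy and partly self-reversing, can be illustrated with a rolling long-window correlation. A minimal sketch on synthetic data; the return series, window length, and co-movement strength are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly returns for two markets (illustrative only).
T = 600                                     # 50 years of monthly data
r_a = rng.standard_normal(T)
r_b = 0.3 * r_a + rng.standard_normal(T)    # built-in co-movement

# A 10-year rolling window smooths the short-term swings in
# co-movement that make short samples unreliable.
window = 120
rolling = np.array([
    np.corrcoef(r_a[t:t + window], r_b[t:t + window])[0, 1]
    for t in range(T - window + 1)
])
print(f"rolling correlations: min {rolling.min():.2f}, max {rolling.max():.2f}")
```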
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two respects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
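A minimal simulation sketch of the matching friction this generalization turns on: an arbitrary distribution of producer types and probabilistic consumption preferences jointly determine how often a random meeting yields a double coincidence of wants. This is an illustration under assumed parameter values, not the authors' model; all names and numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

G = 3                                          # number of goods (illustrative)
producer_shares = np.array([0.5, 0.3, 0.2])    # arbitrary distribution of specialists
want_prob = np.array([                         # want_prob[i, g]: prob. a producer of
    [0.0, 0.9, 0.4],                           # good i wants to consume good g
    [0.6, 0.0, 0.7],                           # (zero diagonal: no taste for own good)
    [0.8, 0.5, 0.0],
])

N = 10_000
types = rng.choice(G, size=N, p=producer_shares)   # each agent produces one good

# One round of pairwise random matching.
idx = rng.permutation(N)
a, b = idx[: N // 2], idx[N // 2 :]

# Double coincidence of wants: each side wants the good the other produces.
wants_ab = rng.random(N // 2) < want_prob[types[a], types[b]]
wants_ba = rng.random(N // 2) < want_prob[types[b], types[a]]
print(f"double-coincidence rate: {(wants_ab & wants_ba).mean():.3f}")
```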
Abstract:
Several studies have reported high performance of simple decision heuristics in multi-attribute decision making. In this paper, we focus on situations where attributes are binary and analyze the performance of Deterministic-Elimination-By-Aspects (DEBA) and similar decision heuristics. We consider non-increasing weights and two probabilistic models for the attribute values: one where attribute values are independent Bernoulli random variables; the other where they are binary random variables with inter-attribute positive correlations. Using these models, we show that the good performance of DEBA is explained by the presence of cumulative as opposed to simple dominance. We therefore introduce the concepts of cumulative dominance compliance and fully cumulative dominance compliance and show that DEBA satisfies both properties. We derive a lower bound on the probability with which cumulative dominance compliant heuristics choose a best alternative and show that, even with many attributes, this bound is not small. We also derive an upper bound for the expected loss of fully cumulative dominance compliant heuristics and show that this is moderate even when the number of attributes is large. Both bounds are independent of the values of the weights.
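For concreteness, here is a compact sketch of DEBA on binary attributes, assuming, as in the setup above, that columns are already ordered by non-increasing weight. The tie-breaking rule and the example data are illustrative assumptions.

```python
import numpy as np

def deba(x: np.ndarray) -> int:
    """Deterministic Elimination-By-Aspects on a 0/1 attribute matrix.

    x: (alternatives, attributes), columns ordered by non-increasing weight.
    Returns the index of the chosen alternative.
    """
    alive = np.arange(x.shape[0])
    for j in range(x.shape[1]):
        has = alive[x[alive, j] == 1]
        if len(has) == 1:
            return int(has[0])
        if len(has) > 1:
            alive = has        # eliminate alternatives lacking attribute j
        # if no survivor has attribute j, skip it and keep everyone
    return int(alive[0])       # tie-break: first remaining alternative

# Illustrative comparison against the weighted-sum optimum.
rng = np.random.default_rng(1)
X = (rng.random((5, 4)) < 0.5).astype(int)   # independent Bernoulli attributes
w = np.array([0.4, 0.3, 0.2, 0.1])           # non-increasing weights
print("DEBA choice:", deba(X), "| weighted-sum best:", int(np.argmax(X @ w)))
```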
Abstract:
Confidence in decision making is an important dimension of managerial behavior. However, what is the relation between confidence, on the one hand, and the fact of receiving or expecting to receive feedback on decisions taken, on the other? To explore this and related issues in the context of everyday decision making, we used the Experience Sampling Method (ESM) to sample decisions taken by undergraduates and business executives. For several days, participants received 4 or 5 SMS messages daily (on their mobile telephones) at random moments, at which point they completed brief questionnaires about their current decision-making activities. Issues considered here include differences between the types of decisions faced by the two groups, their structure, feedback (received and expected), and confidence in decisions taken as well as in the validity of feedback. No relation was found between confidence in decisions and whether participants received or expected to receive feedback on those decisions. In addition, although participants are clearly aware that feedback can provide both confirming and disconfirming evidence, their ability to specify appropriate feedback is imperfect. Finally, difficulties experienced in using the ESM are discussed, as are possibilities for further research using this methodology.
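As a side note on the mechanics of the sampling protocol described above, a toy sketch of drawing 4 or 5 random daily signal times within a waking-hours window; the window and counts are illustrative assumptions, not the study's actual schedule.

```python
import numpy as np

rng = np.random.default_rng(42)

def daily_prompts(n_min=4, n_max=5, start_h=9.0, end_h=21.0):
    """Draw 4-5 random prompt times (in hours) within a waking window."""
    n = rng.integers(n_min, n_max + 1)
    return np.sort(rng.uniform(start_h, end_h, size=n))

for day in range(3):
    times = daily_prompts()
    print(day, [f"{int(t):02d}:{int(t % 1 * 60):02d}" for t in times])
```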
Abstract:
Business News from the Iowa Department of Economic Development
Abstract:
Minkowski's ?(x) function can be seen as the confrontation of two number systems: regular continued fractions and the alternated dyadic system. This way of looking at it permits us to prove that its derivative, as also happens for many other non-decreasing singular functions from [0,1] to [0,1], can attain only two values when it exists: zero and infinity. It is also proved that if the average of the partial quotients in the continued fraction expansion of x is greater than k* = 5.31972 and ?'(x) exists, then ?'(x) = 0. In the same way, if that average is less than k** = 2 log2(Φ) ≈ 1.3885, where Φ is the golden ratio, then ?'(x) = infinity. Finally, some results are presented concerning metric properties of continued fraction and alternated dyadic expansions.
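The confrontation of the two number systems is directly computable: for x = [0; a1, a2, ...] in regular continued fraction form, ?(x) = Σ_{k≥1} (-1)^{k+1} 2^{1-(a1+...+ak)}, i.e. the partial quotients become run lengths in the dyadic expansion. A minimal sketch for rational arguments; the function name and the term cap are illustrative.

```python
from fractions import Fraction

def question_mark(x: Fraction, max_terms: int = 60) -> float:
    """Minkowski's ?(x) on [0, 1] via the continued-fraction series
    ?([0; a1, a2, ...]) = sum_k (-1)**(k+1) * 2**(1 - (a1 + ... + ak))."""
    result, sign, exponent = 0.0, 1.0, 0
    x = Fraction(x)
    while x and exponent < max_terms:
        a = int(1 / x)           # next partial quotient
        x = 1 / x - a            # continued-fraction step
        exponent += a
        result += sign * 2.0 ** (1 - exponent)
        sign = -sign
    return result

print(question_mark(Fraction(1, 3)))   # 0.25  (?(1/3) = 1/4)
print(question_mark(Fraction(2, 5)))   # 0.375 (?(2/5) = 3/8)
```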
Abstract:
Whereas people are typically thought to be better off with more choices, studies show that they often prefer to choose from small as opposed to large sets of alternatives. We propose that satisfaction from choice is an inverted U-shaped function of the number of alternatives. This proposition is derived theoretically by considering the benefits and costs of different numbers of alternatives and is supported by four experimental studies. We also manipulate the perceptual costs of information processing and demonstrate how this affects the resulting satisfaction function. We further indicate that satisfaction when choosing from a given set is diminished if people are made aware of the existence of other choice sets. The role of individual differences in satisfaction from choice is documented by noting effects due to gender and culture. We conclude by emphasizing the need to have an explicit rationale for knowing how much choice is enough.
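The inverted-U proposition can be made concrete with a toy benefit-cost construction (ours, not the authors' model): a concave benefit of having more options minus a roughly linear evaluation cost peaks at an intermediate set size. All functional forms and constants below are illustrative assumptions.

```python
import numpy as np

n = np.arange(1, 31)           # number of alternatives in the choice set
benefit = np.log1p(n)          # concave benefit of more options (illustrative)
cost = 0.08 * n                # roughly linear evaluation cost (illustrative)
satisfaction = benefit - cost  # inverted U-shaped in n

print("satisfaction peaks at n =", int(n[np.argmax(satisfaction)]))
```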
Abstract:
Establishes a Green Government Initiative for the State of Iowa.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area-specific estimates of bias and variance; and b) those that use weights that involve a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
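A minimal sketch of the generic composite form described above: a weighted combination of a direct and an indirect estimator, with the weight driven by an estimated variance and squared bias. The weight shown is the standard MSE-minimizing choice under simplifying assumptions (unbiased direct estimator, indirect error dominated by its bias); it is an illustration, not necessarily the paper's estimator.

```python
def composite(direct, indirect, var_direct, bias_sq_indirect):
    """Composite small-area estimate w*direct + (1-w)*indirect, with
    w = B / (B + V) minimizing the approximate mean squared error."""
    w = bias_sq_indirect / (bias_sq_indirect + var_direct)
    return w * direct + (1 - w) * indirect

# Toy numbers for one small area (all values illustrative).
print(composite(direct=12.0, indirect=10.5, var_direct=4.0, bias_sq_indirect=1.0))
```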
Abstract:
Background: Alcohol is a major risk factor for the global burden of disease and injuries. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources.
Methods: The computation was based on previous work on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. A Monte Carlo approach was applied to derive the variance of each AAF by generating random sets of all the parameters; a large number of random samples was thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented, as are sensitivity analyses that estimate the number of samples required to determine the variance with predetermined precision and identify which parameter had the most impact on the variance of the AAFs.
Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases.
Conclusions: Within reasonable computation time, the method yielded very accurate values for the variances of AAFs.
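The Monte Carlo step is simple to sketch. The paper models exposure with a gamma distribution; the toy version below instead uses a categorical AAF with lognormal uncertainty on the relative risks, purely to show how repeated random parameter sets yield a percentile-based 95% interval. All inputs are illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative inputs: prevalences of two drinking categories and
# relative risks with uncertainty on the log scale (hypothetical values).
p = np.array([0.10, 0.05])
log_rr = np.log([1.5, 2.5])
se_log_rr = np.array([0.10, 0.15])

n_sim = 150_000                                   # sample size found adequate above
rr = np.exp(rng.normal(log_rr, se_log_rr, size=(n_sim, 2)))

excess = (p * (rr - 1.0)).sum(axis=1)
aaf = excess / (1.0 + excess)                     # categorical attributable fraction

lo, hi = np.percentile(aaf, [2.5, 97.5])
print(f"95% CI for the AAF: ({lo:.3f}, {hi:.3f})")
```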
Abstract:
This Executive Order establishes an Iowa Task Force to Rebuild Iowa after the storms and floods of May and June 2008.
Abstract:
This Executive Order establishes an Independent Contractor Reform Task Force.
Abstract:
Business news from the Iowa Department of Economic Development
Abstract:
Business Development news from the Iowa Department of Economic Development
Abstract:
Business Development news from the Iowa Department of Economic Development