43 results for SIZE DEFECT MODEL
Abstract:
This article presents a formal model of policy decision-making in an institutional framework of separation of powers in which the main actors are pivotal political parties with voting discipline. The basic model previously developed from pivotal politics theory for the analysis of United States lawmaking is modified here to account for policy outcomes and institutional performance in other presidential regimes, especially in Latin America. Legislators' party indiscipline at voting and multi-partism emerge as favorable conditions for reducing the size of the equilibrium set containing collectively inefficient outcomes, while a two-party system with strong party discipline is most prone to produce 'gridlock', that is, stability of socially inefficient policies. The article provides a framework for analysis which can induce significant revisions of empirical data, especially regarding the effects of situations of (newly defined) unified and divided government, different decision rules, the number of parties and their discipline. These implications should be testable and may inspire future analytical and empirical work.
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two aspects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the size of the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
Abstract:
In this paper we attempt to describe the general reasons behind the world population explosion in the 20th century. The size of the population at the end of the century in question, deemed excessive by some, was a consequence of a dramatic improvement in life expectancies, attributable, in turn, to scientific innovation, the circulation of information and economic growth. Nevertheless, fertility is a variable that plays a crucial role in differences in demographic growth. We identify infant mortality, female education levels and racial identity as important exogenous variables affecting fertility. It is estimated that in poor countries one additional year of primary schooling for women leads to 0.614 fewer children per couple on average (worldwide). While it may be possible to identify a global tendency towards convergence in demographic trends, particular attention should be paid to the case of Africa, not only due to its different demographic patterns, but also because much of the continent's population has yet to experience the improvement in quality of life generally enjoyed across the rest of the planet.
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
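The selection principle described above (split the data, fit one candidate rule per class, then pick the candidate minimizing empirical risk plus a complexity penalty) can be illustrated with a deliberately simplified sketch. The polynomial classes, the toy cubic data, and the crude parameter-count penalty are illustrative assumptions; the paper's actual procedure builds empirical covers of each class and derives the penalty from their size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy cubic (hypothetical example, not from the paper).
X = rng.uniform(-1, 1, 200)
y = X**3 - 0.5 * X + rng.normal(0, 0.1, 200)

# Split the sample in two halves: the first half fits a candidate
# rule per class, the second half supplies the empirical risk used
# for the final selection.
X1, y1, X2, y2 = X[:100], y[:100], X[100:], y[100:]

def select_degree(max_degree=8, penalty_weight=0.05):
    """Pick a polynomial degree by penalized empirical risk.

    The penalty here grows crudely with the number of parameters;
    the paper instead estimates class complexity empirically from
    the size of an empirical cover.
    """
    best = None
    for d in range(1, max_degree + 1):
        coef = np.polyfit(X1, y1, d)                       # candidate rule from class F_d
        risk = np.mean((np.polyval(coef, X2) - y2) ** 2)   # empirical risk on held-out half
        penalized = risk + penalty_weight * (d + 1) / len(X2)
        if best is None or penalized < best[1]:
            best = (d, penalized)
    return best[0]

print(select_degree())
```

With low noise and a mild penalty, the procedure tends to land near the true degree rather than the most complex class.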
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one as in cash-in-advance models, and another where velocity fluctuates as in Baumol (1952). Despite its simplicity, in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors derived from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
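As a point of comparison for the fluctuating-velocity regime, the classical Baumol (1952) square-root rule already makes velocity an increasing function of the interest rate. The sketch below implements that textbook formula, not the paper's general equilibrium model; all numbers are illustrative.

```python
import math

def baumol_velocity(income, interest_rate, trip_cost):
    """Income velocity implied by the Baumol (1952) square-root rule.

    Average money holdings are M = sqrt(trip_cost * income / (2 * i)),
    so velocity income / M rises with the interest rate i.
    """
    money = math.sqrt(trip_cost * income / (2 * interest_rate))
    return income / money

# Velocity responds to the interest rate, as in the fluctuating-velocity
# region of the model's parameter space (numbers purely illustrative).
for i in (0.02, 0.05, 0.10):
    print(f"i={i:.2f}  velocity={baumol_velocity(100.0, i, 1.0):.2f}")
```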
Abstract:
In the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited area model (LAM) intercomparison of intense events that caused severe damage to people and territory is performed. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for producing a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event, also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift forecast error and to identify the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work is needed to support this statement, including verification using a wider observational data set.
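The non-parametric skill scores mentioned above are computed from the elements of a 2x2 contingency table (hits, misses, false alarms). A minimal sketch, using made-up rainfall values rather than Hydroptimet data:

```python
import numpy as np

def skill_scores(forecast, observed, threshold):
    """Standard QPF skill scores from a 2x2 contingency table.

    An event is "forecast"/"observed" when rainfall meets the threshold;
    the table elements then give the usual verification scores.
    """
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    return pod, far, csi

# Illustrative 24h rainfall amounts (mm) at four stations.
pod, far, csi = skill_scores([30, 5, 60, 25], [25, 22, 50, 2], threshold=20)
print(pod, far, csi)  # -> 0.666..., 0.333..., 0.5
```

The same table elements, mapped per region, are what allow misses and false alarms to be localized as described in the abstract.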
Abstract:
Defects in SnO2 nanowires have been studied by cathodoluminescence, and the obtained spectra have been compared with those measured on SnO2 nanocrystals of different sizes in order to reveal information about point defects not determined by other characterization techniques. The dependence of the luminescence bands on the thermal treatment temperatures and pre-treatment conditions has been determined, pointing to their possible relation, given the treatment conditions used, with the oxygen vacancy concentration. To explain these cathodoluminescence spectra and their behavior, a model based on first-principles calculations of the surface oxygen vacancies in the different crystallographic directions is proposed to corroborate the existence of surface state bands localized at energy values compatible with the observed cathodoluminescence bands and with the gas sensing mechanisms. CL bands centered at 1.90 and 2.20 eV are attributed to the surface oxygen vacancies 100° coordinated with tin atoms, whereas CL bands centered at 2.37 and 2.75 eV are related to the surface oxygen vacancies 130° coordinated. This combined approach of cathodoluminescence and ab initio calculations is shown to be a powerful tool for nanowire defect analysis.
Abstract:
We study the behavior of the random-bond Ising model at zero temperature by numerical simulations for a variable amount of disorder. The model is an example of systems exhibiting a fluctuationless first-order phase transition similar to some field-induced phase transitions in ferromagnetic systems and the martensitic phase transition appearing in a number of metallic alloys. We focus on the study of the hysteresis cycles appearing when the external field is swept from positive to negative values. By using a finite-size scaling hypothesis, we analyze the disorder-induced phase transition between the phase exhibiting a discontinuity in the hysteresis cycle and the phase with the continuous hysteresis cycle. Critical exponents characterizing the transition are obtained. We also analyze the size and duration distributions of the magnetization jumps (avalanches).
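The T=0 metastable dynamics behind such hysteresis studies can be sketched in a few lines: sweep the external field down adiabatically and let every unstable spin flip, propagating avalanches through its neighbors. The sketch below uses a 1D ring with random fields rather than the paper's random-bond model in higher dimensions, so it only illustrates the mechanics of avalanche counting, not the reported transition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: ring of N spins, ferromagnetic coupling J,
# Gaussian random local fields of width sigma (all values arbitrary).
N, J, sigma = 400, 1.0, 1.5
h = rng.normal(0.0, sigma, N)   # quenched disorder
s = np.ones(N, dtype=int)       # start saturated at +1

def local_field(i, H):
    # Effective field on spin i: neighbors on the ring, the quenched
    # random field h[i], and the external field H.
    return J * (s[(i - 1) % N] + s[(i + 1) % N]) + h[i] + H

avalanche_sizes = []
for H in np.linspace(3.0, -3.0, 1200):   # sweep the field downwards
    unstable = [i for i in range(N) if s[i] == 1 and local_field(i, H) < 0]
    size = 0
    while unstable:                       # adiabatic avalanche propagation
        i = unstable.pop()
        if s[i] == 1 and local_field(i, H) < 0:
            s[i] = -1
            size += 1
            unstable.extend(((i - 1) % N, (i + 1) % N))
    if size:
        avalanche_sizes.append(size)

print(len(avalanche_sizes), sum(avalanche_sizes))
```

Collecting `avalanche_sizes` over many disorder realizations and system sizes is what feeds the size-distribution and finite-size scaling analyses described in the abstract.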
Abstract:
Spanning avalanches in the 3D Gaussian Random Field Ising Model (3D-GRFIM) with metastable dynamics at T=0 have been studied. Statistical analysis of the field values for which avalanches occur has enabled a Finite-Size Scaling (FSS) study of the avalanche density to be performed. Furthermore, a direct measurement of the geometrical properties of the avalanches has confirmed an earlier hypothesis that several types of spanning avalanches with two different fractal dimensions coexist at the critical point. We finally compare the phase diagram of the 3D-GRFIM with metastable dynamics with the same model in equilibrium at T=0.
Abstract:
We study the nonequilibrium behavior of the three-dimensional Gaussian random-field Ising model at T=0 in the presence of a uniform external field using a two-spin-flip dynamics. The deterministic, history-dependent evolution of the system is compared with the one obtained with the standard one-spin-flip dynamics used in previous studies of the model. The change in the dynamics yields a significant suppression of coercivity, but the distribution of avalanches (in number and size) stays remarkably similar, except for the largest ones that are responsible for the jump in the saturation magnetization curve at low disorder in the thermodynamic limit. By performing a finite-size scaling study, we find strong evidence that the change in the dynamics does not modify the universality class of the disorder-induced phase transition.
Abstract:
The influence of vacancy concentration on the behavior of the three-dimensional random field Ising model with metastable dynamics is studied. We have focused our analysis on the number of spanning avalanches, which allows a clear determination of the critical line where the hysteresis loops change from continuous to discontinuous. By a detailed finite-size scaling analysis we determine the phase diagram and numerically estimate the critical exponents along the whole critical line. Finally, we discuss the origin of the curvature of the critical line at high vacancy concentration.
Abstract:
Intensive numerical studies of exact ground states of the two-dimensional ferromagnetic random field Ising model at T=0, with a Gaussian distribution of fields, are presented. Standard finite size scaling analysis of the data suggests the existence of a transition at σc = 0.64 ± 0.08. Results are compared with existing theories and with the study of metastable avalanches in the same model.
Abstract:
In this paper, we study dynamical aspects of the two-dimensional (2D) gonihedric spin model using both numerical and analytical methods. This spin model has vanishing microscopic surface tension and actually describes an ensemble of loops living on a 2D surface. The self-avoidance of loops is parametrized by a parameter κ. The κ=0 model can be mapped to one of the six-vertex models discussed by Baxter, and it does not have critical behavior. We have found that allowing for κ≠0 does not lead to critical behavior either. Finite-size effects are rather severe, and in order to understand these effects, a finite-volume calculation for non-self-avoiding loops is presented. This model, like its 3D counterpart, exhibits very slow dynamics, but a careful analysis of dynamical observables reveals nonglassy evolution (unlike its 3D counterpart). We find, also in this κ=0 case, the law that governs the long-time, low-temperature evolution of the system, through a dual description in terms of defects. A power, rather than logarithmic, law for the approach to equilibrium has been found.
Abstract:
We study the problem of the partition of a system of initial size V into a sequence of fragments s1, s2, s3, …. By assuming a scaling hypothesis for the probability p(s;V) of obtaining a fragment of a given size, we deduce that the final distribution of fragment sizes exhibits power-law behavior. This minimal model is useful for understanding the distribution of avalanche sizes in first-order phase transitions at low temperatures.
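The sequential partition above can be sketched directly: each fragment removes a random fraction of what remains until a cutoff is reached. The uniform choice of p(s;V) here is a hypothetical stand-in, since the paper only assumes a scaling form for that probability; the sketch shows the bookkeeping, not the power-law result.

```python
import random

random.seed(42)

def sequential_partition(V=1.0, cutoff=1e-6):
    """Break a system of size V into fragments s1, s2, s3, ...

    Each step removes a uniformly random fraction of the current
    remainder (an illustrative, not the paper's, choice of p(s;V))
    and stops once the remainder falls below the cutoff.
    """
    fragments = []
    remainder = V
    while remainder > cutoff:
        s = random.random() * remainder
        fragments.append(s)
        remainder -= s
    return fragments, remainder

frags, rest = sequential_partition()
print(len(frags))
```

By construction the fragments plus the leftover remainder always sum back to V, which makes conservation easy to check for any candidate p(s;V).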