26 results for Similarity Laws


Relevance:

20.00%

Publisher:

Abstract:

This paper analyzes a panel of 18 European countries spanning from 1950 to 2003 to examine the extent to which the legal reforms leading to easier divorce that took place during the second half of the 20th century have contributed to the increase in divorce rates across Europe. We use a quasi-experimental set-up and exploit the different timing of the reforms in divorce laws across countries. We account for unobserved country-specific factors by introducing country fixed effects, and we include country-specific trends to control for time-varying factors at the country level that may be correlated with divorce rates and divorce laws, such as changing social norms or slow-moving demographic trends. We find that the reforms were followed by significant increases in divorce rates. Overall, we estimate that the introduction of no-fault, unilateral divorce increased the divorce rate by about 1 divorce per 1,000 married people, a sizeable effect given the average rate of 4.2 divorces per 1,000 married people in 2002.
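
As a rough illustration of the quasi-experimental design described above, the sketch below fits a two-way fixed-effects regression with country-specific trends in Python. The data file and column names (divorce_rate, unilateral, country, year) are hypothetical placeholders, not the paper's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per country-year, 1950-2003.
# Columns: country, year, divorce_rate (per 1,000 married people),
# unilateral (1 after a no-fault/unilateral reform, 0 before).
df = pd.read_csv("divorce_panel.csv")  # placeholder data source

# Country fixed effects, year effects, and country-specific linear
# trends, in the spirit of the design sketched in the abstract.
model = smf.ols(
    "divorce_rate ~ unilateral + C(country) + C(year) + C(country):year",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(result.params["unilateral"])  # estimated reform effect
```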

Relevance:

20.00%

Publisher:

Abstract:

This article studies the effects of interest rate restrictions on loan allocation. The British government tightened the usury laws in 1714, reducing the maximum permissible interest rate from 6% to 5%. A sample of individual loan transactions reveals that average loan size and minimum loan size increased strongly, while access to credit worsened for those with little social capital. Collateralised credits, which had accounted for a declining share of total lending, returned to their former role of prominence. Our results suggest that the usury laws distorted credit markets significantly; we find no evidence that they offered a form of Pareto-improving social insurance.
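
A minimal sketch of the kind of before/after comparison the abstract describes, assuming a hypothetical transaction-level dataset; the file and column names are illustrative, not the authors' data.

```python
import pandas as pd

# Hypothetical loan-transaction records around the 1714 reform:
# columns year, amount, collateralised (bool).
loans = pd.read_csv("loan_transactions.csv")  # placeholder data source
loans["post_1714"] = loans["year"] >= 1714

# Compare average and minimum loan size, and the collateralised share,
# before and after the rate cap fell from 6% to 5%.
summary = loans.groupby("post_1714").agg(
    mean_size=("amount", "mean"),
    min_size=("amount", "min"),
    collateral_share=("collateralised", "mean"),
)
print(summary)
```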

Relevance:

20.00%

Publisher:

Abstract:

A class of composite estimators of small area quantities that exploit spatial (distance-related) similarity is derived. It is based on a distribution-free model for the areas, but the estimators are intended to have optimal design-based properties. Composition is also applied to estimate some of the global parameters on which the small area estimators depend. It is shown that the commonly adopted assumption of random effects is not necessary for exploiting the similarity of the districts (borrowing strength across the districts). The methods are applied in the estimation of the mean household sizes and the proportions of single-member households in the counties (comarcas) of Catalonia. The simplest version of the estimators is more efficient than the established alternatives, even though the extent of spatial similarity is quite modest.
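
The sketch below illustrates the generic composite-estimator idea (a convex combination of a direct and a synthetic estimator); the paper's actual estimators and weighting scheme, which exploit distance-related similarity, are more elaborate. All numbers are toy values.

```python
import numpy as np

def composite_estimate(direct, synthetic, var_direct, var_bias):
    """Composite small-area estimator: a convex combination of the
    direct (design-based) estimator and a synthetic estimator that
    borrows strength across similar areas.  The weight trades off the
    direct estimator's variance against the synthetic estimator's
    squared bias (both assumed estimated elsewhere)."""
    w = var_bias / (var_bias + var_direct)  # shrink noisy direct estimates
    return w * direct + (1.0 - w) * synthetic

# Toy example: three small areas (e.g., comarcas).
direct = np.array([3.1, 2.4, 2.9])       # direct survey estimates
synthetic = np.array([2.8, 2.8, 2.8])    # synthetic (pooled) estimate
var_direct = np.array([0.20, 0.05, 0.10])
var_bias = np.array([0.08, 0.08, 0.08])
print(composite_estimate(direct, synthetic, var_direct, var_bias))
```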

Relevance:

20.00%

Publisher:

Abstract:

We show that the statistics of an edge-type variable in natural images exhibit self-similarity properties which resemble those of local energy dissipation in turbulent flows. Our results show that self-similarity and extended self-similarity hold remarkably well for the statistics of the local edge variance, and that the very same models can be used to predict all of the associated exponents. These results suggest using natural images as a laboratory for testing more elaborate scaling models of interest for the statistical description of turbulent flows. The properties we have exhibited are relevant for the modeling of the early visual system: they should be included in models designed for the prediction of receptive fields.
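
A minimal sketch of an extended self-similarity (ESS) check, assuming simple increment moments of a synthetic field as a stand-in for the local edge variance studied in the paper.

```python
import numpy as np

def structure_moments(field, scales, orders):
    """Moments <|increment|^p> of a 2-D field at several separations,
    the raw ingredients of (extended) self-similarity analysis."""
    moments = np.empty((len(scales), len(orders)))
    for i, r in enumerate(scales):
        inc = np.abs(field[:, r:] - field[:, :-r])  # horizontal increments
        for j, p in enumerate(orders):
            moments[i, j] = np.mean(inc ** p)
    return moments

rng = np.random.default_rng(0)
img = rng.standard_normal((256, 256))        # stand-in for a natural image
scales, orders = [1, 2, 4, 8, 16], [1, 2, 3]
m = structure_moments(img, scales, orders)

# Extended self-similarity: the log-log slope of S_p against a reference
# moment (here p = 2) gives the relative scaling exponent rho(p, 2).
for j, p in enumerate(orders):
    rho = np.polyfit(np.log(m[:, 1]), np.log(m[:, j]), 1)[0]
    print(f"rho({p}, 2) = {rho:.3f}")
```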

Relevance:

20.00%

Publisher:

Abstract:

We demonstrate that the self-similarity of some scale-free networks with respect to a simple degree-thresholding renormalization scheme finds a natural interpretation in the assumption that network nodes exist in hidden metric spaces. Clustering, i.e., cycles of length three, plays a crucial role in this framework as a topological reflection of the triangle inequality in the hidden geometry. We prove that a class of hidden variable models with underlying metric spaces is able to accurately reproduce the self-similarity properties that we measured in the real networks. Our findings indicate that hidden geometries underlying these real networks are a plausible explanation for their observed topologies and, in particular, for their self-similarity with respect to the degree-based renormalization.
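
A minimal sketch of the degree-thresholding renormalization scheme mentioned above, using networkx and a toy scale-free graph; the real networks and measurements in the paper are, of course, different.

```python
import networkx as nx

def degree_threshold_renormalize(G, k_min):
    """Degree-thresholding renormalization: keep only nodes whose
    degree in the original graph is at least k_min, and take the
    induced subgraph as the renormalized network."""
    keep = [n for n, k in G.degree() if k >= k_min]
    return G.subgraph(keep).copy()

# Toy scale-free network as a stand-in for the real networks studied.
G = nx.barabasi_albert_graph(2000, 3, seed=42)
for k_min in (1, 4, 8):
    H = degree_threshold_renormalize(G, k_min)
    # For a self-similar network, clustering stays roughly invariant
    # across thresholding levels.
    print(k_min, H.number_of_nodes(), f"{nx.average_clustering(H):.3f}")
```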

Relevance:

20.00%

Publisher:

Abstract:

The kinetics and microstructure of solid-phase crystallization under continuous heating conditions and random distribution of nuclei are analyzed. An Arrhenius temperature dependence is assumed for both nucleation and growth rates. Under these circumstances, the system has a scaling law such that the behavior of the scaled system is independent of the heating rate. Hence, the kinetics and microstructure obtained at different heating rates differ only in time and length scaling factors. Concerning the kinetics, it is shown that the extended volume evolves with time according to α_ex = [exp(κ_C t′)]^(m+1), where t′ is the dimensionless time. This scaled solution not only represents a significant simplification of the system description; it also provides new tools for its analysis. For instance, it has been possible to find an analytical dependence of the final average grain size on the kinetic parameters. Concerning the microstructure, the existence of a length scaling factor has allowed the grain-size distribution to be numerically calculated as a function of the kinetic parameters.
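
A minimal sketch of the scaled kinetics, assuming the standard KJMA (Avrami) relation alpha = 1 - exp(-alpha_ex) between the extended volume and the actual crystallized fraction; parameter values are illustrative.

```python
import numpy as np

def transformed_fraction(t_prime, kappa_c, m):
    """Scaled crystallization kinetics under continuous heating: the
    extended volume grows as alpha_ex = [exp(kappa_c * t')]**(m + 1)
    in the dimensionless time t', and the standard KJMA (Avrami)
    correction converts it to the actual crystallized fraction."""
    alpha_ex = np.exp(kappa_c * t_prime) ** (m + 1)
    return 1.0 - np.exp(-alpha_ex)

t_prime = np.linspace(-6.0, 2.0, 9)   # dimensionless time
print(transformed_fraction(t_prime, kappa_c=1.0, m=3))  # m = 3: 3-D growth
```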

Relevance:

20.00%

Publisher:

Abstract:

We analyse the use of the ordered weighted average (OWA) in decision-making, giving special attention to business and economic decision-making problems. We present several aggregation techniques that are very useful for decision-making, such as the Hamming distance, the adequacy coefficient and the index of maximum and minimum level. We suggest a new approach using immediate weights, that is, using the weighted average and the OWA operator in the same formulation. We further generalize them by using generalized and quasi-arithmetic means. We also analyse the applicability of the OWA operator in business and economics, and we see that it can be used instead of the weighted average. We end the paper with an application to a business multi-person decision-making problem regarding production management.
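
A minimal sketch of the basic OWA aggregation described above; weights and arguments are toy values.

```python
def owa(values, weights):
    """Ordered weighted average: weights attach to rank positions of
    the sorted arguments, not to the arguments themselves."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)  # descending reordering step
    return sum(w * v for w, v in zip(weights, ordered))

# With w = (0.4, 0.3, 0.2, 0.1) the largest argument gets weight 0.4
# regardless of which alternative produced it.
print(owa([70, 40, 90, 60], [0.4, 0.3, 0.2, 0.1]))
# 90*0.4 + 70*0.3 + 60*0.2 + 40*0.1 = 73.0
```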


Relevance:

20.00%

Publisher:

Abstract:

PLFC is a first-order possibilistic logic dealing with fuzzy constants and fuzzily restricted quantifiers. The refutation proof method in PLFC is mainly based on a generalized resolution rule which allows an implicit graded unification among fuzzy constants. However, unification for precise object constants is classical. In order to use PLFC for similarity-based reasoning, in this paper we extend a Horn-rule sublogic of PLFC with similarity-based unification of object constants. The Horn-rule sublogic of PLFC we consider deals only with disjunctive fuzzy constants and is equipped with a simple and efficient version of the PLFC proof method. At the semantic level, it is extended by equipping each sort with a fuzzy similarity relation, and at the syntactic level, by fuzzily “enlarging” each non-fuzzy object constant in the antecedent of a Horn rule by means of a fuzzy similarity relation.
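
A simplified sketch of similarity-based unification of object constants, reduced to looking up a unification degree in a fuzzy similarity relation; the constants and similarity values are hypothetical, and the actual PLFC proof method is far richer.

```python
def unification_degree(c1, c2, similarity):
    """Similarity-based unification of (non-fuzzy) object constants:
    two constants unify to the degree given by the fuzzy similarity
    relation on their sort (degree 1.0 recovers classical
    unification of identical constants)."""
    if c1 == c2:
        return 1.0
    return similarity.get((c1, c2), similarity.get((c2, c1), 0.0))

# Hypothetical fuzzy similarity relation on a sort of city constants.
sim = {("barcelona", "badalona"): 0.7}
print(unification_degree("barcelona", "badalona", sim))   # 0.7
print(unification_degree("barcelona", "barcelona", sim))  # 1.0
```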

Relevance:

20.00%

Publisher:

Abstract:

The regulation of speed limits in the US had been centralized at the federal level since 1974, until decisions were devolved to the states in 1995. However, the centralization debate has reemerged in recent years. Here, we conduct the first econometric analysis of the determinants of speed limit laws. Using economic, geographic and political variables, our results suggest that geography (which affects private mobility needs and preferences) is the main factor influencing speed limit laws. We also highlight the role played by political ideology, with Republican constituencies being associated with higher speed limits. Furthermore, we identify the presence of regional and time dependence effects. By contrast, poor road safety outcomes do not impede the enactment of high speed limits. Overall, we present the first evidence of the role played by geographical, ideological and regional characteristics, which provides us with a better understanding of the formulation of speed limit policies.
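
As a rough illustration of the econometric exercise described above, the sketch below fits a linear specification with regional and time effects; the data file and covariate names are hypothetical placeholders, not the paper's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-year panel: speed_limit (mph), plus economic,
# geographic and political covariates; names are illustrative only.
df = pd.read_csv("speed_limits_panel.csv")  # placeholder data source

# Economic, geographic and political determinants, with regional and
# time effects, in the spirit of the analysis in the abstract.
model = smf.ols(
    "speed_limit ~ income + land_area + road_density"
    " + republican_share + fatality_rate + C(region) + C(year)",
    data=df,
)
print(model.fit(cov_type="HC1").summary())
```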

Relevance:

20.00%

Publisher:

Abstract:

One main assumption in the theory of rough sets applied to information tables is that elements exhibiting the same information are indiscernible (similar) and form blocks that can be understood as elementary granules of knowledge about the universe. We propose a variant of this concept, defining a measure of similarity between the elements of the universe so that two objects can be considered indiscernible even though they do not share all attribute values, because the knowledge is partial or uncertain. The set of similarities defines a matrix of a fuzzy relation satisfying reflexivity and symmetry but not transitivity; thus a partition of the universe is not attained. This problem can be solved by calculating its transitive closure, which ensures a partition for each level belonging to the unit interval [0,1]. This procedure allows the theory of rough sets to be generalized depending on the minimum level of similarity accepted. This new point of view increases the rough character of the data because it enlarges the set of indiscernible objects. Finally, we apply our results to an artificial (not real) application in order to highlight the differences and improvements of this methodology over the classical one.
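
A minimal sketch of the max-min transitive closure construction mentioned above: starting from a reflexive, symmetric fuzzy relation, the closure yields a fuzzy equivalence relation whose alpha-cuts are partitions of the universe. Values are toy numbers.

```python
import numpy as np

def max_min_transitive_closure(R):
    """Max-min transitive closure of a reflexive, symmetric fuzzy
    relation: iterate R <- max(R, R o R) until it stabilizes, where
    (R o R)[i, j] = max_k min(R[i, k], R[k, j])."""
    R = R.copy()
    while True:
        comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        nxt = np.maximum(R, comp)
        if np.array_equal(nxt, R):
            return R
        R = nxt

# Toy fuzzy similarity among four objects (reflexive and symmetric).
R = np.array([[1.0, 0.8, 0.0, 0.1],
              [0.8, 1.0, 0.4, 0.0],
              [0.0, 0.4, 1.0, 0.9],
              [0.1, 0.0, 0.9, 1.0]])
T = max_min_transitive_closure(R)
alpha = 0.5
print(T >= alpha)  # each alpha-cut of T is an equivalence relation,
                   # i.e. a partition of the universe at that level
```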