997 results for Axioms of Huzita-Hatori
Abstract:
This article presents and explores the axioms and core ideas, or idées-force, of the Fascist ideologies of the first third of the twentieth century. The aim is to identify the features that define the term “Classical Fascism” as a conceptual category in the study of politics and to uncover the core ideas of its political theory. This analysis requires an appraisal both of the idées-force themselves and of the political use that is made of them. If these appraisals are correct, Classical Fascism is characterized by a set of ideological and political aims and methods in which, first, ideas, attitudes and behaviours are determined by an anti-democratic palingenetic ultranationalism underpinned by a sacralized ideology; second, there is the quest for a united, indissoluble society as a political system and, at the same time, as the collective myth that mobilizes and redeems the nation; and third, violence serves as a political vehicle applied unchecked against internal opposition and against external enemies who challenge the nation's progression towards the dream of rebirth and the culmination of this progression in the form of an empire.
Abstract:
Transreal arithmetic is a total arithmetic that contains real arithmetic, but which has no arithmetical exceptions. It allows the specification of the Universal Perspex Machine, which unifies geometry with the Turing Machine. Here we axiomatise the algebraic structure of transreal arithmetic so that it provides a total arithmetic on any appropriate set of numbers. This opens up the possibility of specifying a version of floating-point arithmetic that does not have any arithmetical exceptions and in which every number is a first-class citizen. We find that literal numbers in the axioms are distinct. In other words, the axiomatisation does not require special axioms to force non-triviality. It follows that transreal arithmetic must be defined on a set of numbers that contains {−∞, −1, 0, 1, ∞, Φ} as a proper subset. We note that the axioms have been shown to be consistent by machine proof.
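As a sketch of how such exception-free arithmetic behaves, the transreal division rules can be written down directly. This is a minimal illustration of the conventions described above (a string sentinel stands in for the nullity element Φ), not the paper's axiomatisation:

```python
INF = float('inf')
NINF = float('-inf')
PHI = "nullity"  # sentinel for the transreal nullity element

def t_div(a, b):
    """Total division: never raises, following transreal conventions."""
    if a == PHI or b == PHI:
        return PHI                        # nullity absorbs every operation
    if b == 0:
        if a > 0: return INF              # x/0 = +inf for x > 0
        if a < 0: return NINF             # x/0 = -inf for x < 0
        return PHI                        # 0/0 = nullity
    if b in (INF, NINF):
        if a in (INF, NINF): return PHI   # inf/inf has no determinate value
        return 0.0                        # finite / infinite = 0
    if a == INF:  return INF if b > 0 else NINF
    if a == NINF: return NINF if b > 0 else INF
    return a / b                          # ordinary real division otherwise

print(t_div(1, 0))   # → inf
print(t_div(0, 0))   # → nullity
```

Every input produces a first-class result, which is exactly the property the proposed exception-free floating-point arithmetic would need.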
Abstract:
We characterize the prekernel of NTU games by means of consistency, converse consistency, and five axioms of the Nash type on bilateral problems. The intersection of the prekernel and the core is also characterized with the same axioms over the class of games where the core is nonempty.
Abstract:
If you want to know whether a property is true or not in a specific algebraic structure, you need to test that property on the given structure. This can be done by hand, which can be cumbersome and error-prone. In addition, the time consumed in testing depends on the size of the structure where the property is applied. We present an implementation of a system for finding counterexamples and testing properties of models of first-order theories. This system is intended to provide a convenient and paperless environment for researchers and students investigating or studying such models, and algebraic structures in particular. To implement a first-order theory in the system, a suitable first-order language and some axioms are required. The components of a language are given by a collection of variables, a set of predicate symbols, and a set of operation symbols. Variables and operation symbols are used to build terms. Terms, predicate symbols, and the usual logical connectives are used to build formulas. A first-order theory now consists of a language together with a set of closed formulas, i.e. formulas without free occurrences of variables. The set of formulas is also called the axioms of the theory. The system uses several different formats to allow the user to specify languages, to define axioms and theories, and to create models. Besides the obvious operations and tests on these structures, we have introduced the notion of a functor between classes of models in order to generate more complex models from given ones automatically. As an example, we will use the system to create several lattice structures starting from a model of the theory of pre-orders.
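The kind of counterexample search such a system performs can be sketched by brute force over a finite domain: enumerate all tuples and report the first one that violates the property. This is a toy illustration under assumed names, not the system's actual input format:

```python
from itertools import product

def find_counterexample(domain, prop):
    """Return the first tuple of domain elements violating prop, else None."""
    arity = prop.__code__.co_argcount     # infer arity from the predicate
    for args in product(domain, repeat=arity):
        if not prop(*args):
            return args
    return None

# Z4 under subtraction mod 4: closed, but neither associative nor commutative
dom = range(4)
op = lambda a, b: (a - b) % 4
assoc = lambda a, b, c: op(op(a, b), c) == op(a, op(b, c))
comm = lambda a, b: op(a, b) == op(b, a)

print(find_counterexample(dom, assoc))    # → (0, 0, 1)
```

The running time grows as |domain| raised to the arity of the formula, which is exactly the size-dependence the abstract mentions for hand testing.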
Abstract:
We reconsider the following cost-sharing problem: agent i = 1,...,n demands a quantity xi of good i; the corresponding total cost C(x1,...,xn) must be shared among the n agents. The Aumann-Shapley prices (p1,...,pn) are given by the Shapley value of the game where each unit of each good is regarded as a distinct player. The Aumann-Shapley cost-sharing method assigns the cost share pixi to agent i. When goods come in indivisible units, we show that this method is characterized by the two standard axioms of Additivity and Dummy, and the property of No Merging or Splitting: agents never find it profitable to split or merge their demands.
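The unit-player construction can be sketched directly: treat every demanded unit as a separate Shapley player and sum each agent's unit payoffs. This brute-force version enumerates all orderings of units, so it is feasible only for tiny demands, and the cost function used is hypothetical:

```python
from itertools import permutations

def unit_shapley_shares(demands, C):
    """Cost shares from the Shapley value of the game where each unit of each
    good is a distinct player; agent i's share is the sum over its units."""
    units = [i for i, d in enumerate(demands) for _ in range(d)]  # owner of each unit
    shares = [0.0] * len(demands)
    perms = list(permutations(range(len(units))))
    for perm in perms:
        x = [0] * len(demands)            # quantities built up unit by unit
        prev = C(x)
        for u in perm:
            x[units[u]] += 1
            cur = C(x)
            shares[units[u]] += cur - prev   # marginal cost of this unit
            prev = cur
    return [s / len(perms) for s in shares]

# two agents demand 1 and 2 units; total cost is (total quantity)^2 (hypothetical)
print(unit_shapley_shares([1, 2], lambda x: sum(x) ** 2))  # → [3.0, 6.0]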
Abstract:
We consider the question whether there exists a Banach space X of density continuum such that every Banach space of density at most continuum isomorphically embeds into X (called a universal Banach space of density c). It is well known that ℓ∞/c₀ is such a space if we assume the continuum hypothesis. Some additional set-theoretic assumption is indeed needed, as we prove in the main result of this paper that it is consistent with the usual axioms of set theory that there is no universal Banach space of density c. Thus, the problem of the existence of a universal Banach space of density c is undecidable using the usual axioms of set theory. We also prove that it is consistent that there are universal Banach spaces of density c, but ℓ∞/c₀ is not among them. This relies on the proof of the consistency of the nonexistence of an isomorphic embedding of C([0, c]) into ℓ∞/c₀.
Abstract:
Researchers have long believed the concept of "excitement" in games to be subjective and difficult to measure. This paper presents the development of a mathematically computable index that measures this concept from the viewpoint of an audience. One of the key aspects of the index is the differential of the probability of "winning" before and after one specific "play" in a given game. If that play shifts the probability of winning sharply in either direction, then the audience will feel the game to be "exciting." The index makes a large contribution to the study of games and enables researchers to compare and analyze the "excitement" of various games. It may be applied to many fields, especially the area of welfare economics, ranging from allocative efficiency to axioms of justice and equity.
A Mathematical Representation of "Excitement" in Games: A Contribution to the Theory of Game Systems
Abstract:
Researchers have long believed the concept of "excitement" in games to be subjective and difficult to measure. This paper presents the development of a mathematically computable index that measures the concept from the viewpoint of an audience and from that of a player. One of the key aspects of the index is the differential of the probability of "winning" before and after one specific "play" in a given game. The index makes a large contribution to the study of games and enables researchers to compare and analyze the “excitement” of various games. It may be applied in many fields, especially the area of welfare economics, and applications may range from those related to allocative efficiency to axioms of justice and equity.
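The index's key ingredient, the change in winning probability caused by each play, can be sketched as a simple aggregate. This is a hypothetical simplification for illustration, not the paper's actual formula:

```python
def excitement_index(win_probs):
    """Sum of |change| in one side's win probability across successive plays."""
    return sum(abs(b - a) for a, b in zip(win_probs, win_probs[1:]))

# a back-and-forth game registers as more "exciting" than a one-sided one,
# even though both end with the same side winning
seesaw  = [0.5, 0.8, 0.3, 0.7, 0.2, 1.0]
blowout = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
assert excitement_index(seesaw) > excitement_index(blowout)
```

Any play that swings the win probability sharply, for either side, raises the index, matching the intuition described in the abstract.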
Abstract:
Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot easily be modelled using binary relations alone. In this context, this kind of representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph that is especially suited to representing many-to-many relationships. Whereas an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the node set of a directed hypergraph, and each equivalence class is known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can lead to a better understanding of the structure of this kind of hypergraph when it is large. For directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been shown that the structure of the WWW has a “bow-tie” shape, in which more than 70% of the nodes are distributed across three large sets, one of which is a strongly-connected component. This kind of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature could not be carried out on directed hypergraphs, because no algorithms existed for computing the strongly-connected components of this kind of hypergraph.
In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem and have established their correctness and computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations of both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph that can be built by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms on several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation. Specifically, this kind of hypergraph has been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relationships, while a module can be understood as a subset of the ontology's axioms that captures all the knowledge the ontology holds about a specific set of symbols and their relationships. In the thesis we focus only on modules computed using the syntactic locality technique. Since ontologies can be very large, computing modules can facilitate the reuse and maintenance of these ontologies.
Analysing all possible modules of an ontology, however, is in general very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the concept of an "axiom dependency hypergraph", which generalizes the concept of the atomic decomposition of an ontology. A module of an ontology corresponds to a connected component in this kind of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work with axiom dependency hypergraphs and can thereby compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application we have developed for extracting modules and computing the atomic decomposition of ontologies. We have called the application HyS, and we have studied its running times using a selection of well-known ontologies from the biomedical domain, most of them available on the NCBO BioPortal. The evaluation results show that the running times of HyS are much better than those of the fastest known applications. ABSTRACT Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning.
Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong-connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a “bow-tie” diagram where more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component. This particular structure has also been observed in complex networks in other fields such as, e.g., biology. Similar studies cannot be conducted on a directed hypergraph because there does not exist any algorithm for computing the strongly-connected components of the hypergraph. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs.
The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two nodes of a graph that are strongly-connected can also reach the same nodes. In other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performances. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős-Rényi and Newman-Watts-Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology.
A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
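For the directed-graph baseline the thesis generalises, Tarjan's algorithm computes strongly-connected components in linear time. A compact sketch for plain graphs (not the thesis's hypergraph algorithms) looks like this:

```python
def tarjan_scc(graph):
    """Tarjan's strongly-connected components of a directed graph,
    given as a dict mapping each node to a list of successors."""
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:            # v is the root of a component
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v: break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

g = {1: [2], 2: [3], 3: [1], 4: [3]}   # 1-2-3 form a cycle, 4 stands alone
print(tarjan_scc(g))                    # → [[3, 2, 1], [4]]
```

Collapsing each returned component into a single node is exactly the size-reduction step described above; the open problem the thesis addresses is doing the same when edges connect *sets* of nodes.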
Abstract:
The concepts of substantive beliefs and derived beliefs are defined, together with a set of substantive beliefs S as an open set and the neighbourhood of an element (a substantive belief). A semantic operation of conjunction is defined with the structure of an Abelian group. Mathematical structures such as poset beliefs and join-semilattice beliefs exist. A metric space of beliefs and a distance of belief depending on the believer are defined, along with the concepts of closed and open balls. S′ is defined as a subgroup of the metric space of beliefs Σ, and S′ is a totally bounded set. The term s (substantive belief) is defined in terms of the closure of S′. It is deduced that Σ is paracompact by Stone's theorem. The pseudometric space of beliefs is defined to show how the metric of the non-believing subject yields a topological space, a non-material abstract ideal space formed in the mind of the believing subject, satisfying the Kuratowski closure axioms. To establish patterns of the materialization of beliefs, we consider that beliefs have definite mathematical structures. This allows a better understanding of cultural processes such as text, architecture, norms and education, which are forms of the materialization of an ideology. This materialization is the conversion, by means of certain mathematical correspondences, of an abstract set whose elements are beliefs or ideas into an impure set whose elements are material or energetic. A text is a materialization of ideology.
Abstract:
Using a modified deprivation (or poverty) function, in this paper we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requirement. Following these results, we make quantitative predictions to correlate a developing with a developed economy. © 2006 Elsevier B.V. All rights reserved.
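The log-normal part of the argument can be checked numerically: fix the overall mean income, vary the dispersion, and watch the share of people below the poverty line. This sketch uses the simple headcount ratio rather than the paper's modified deprivation function:

```python
from math import erf, log, sqrt

def headcount_poverty(mean_income, sigma, z):
    """P(income < z) when income is log-normal with the given overall mean
    and log-scale dispersion sigma (headcount ratio)."""
    mu = log(mean_income) - sigma ** 2 / 2        # hold the mean fixed as sigma varies
    t = (log(z) - mu) / sigma
    return 0.5 * (1 + erf(t / sqrt(2)))           # standard normal CDF

z = 1.0                                           # poverty line (arbitrary units)
# rising mean income lowers poverty ...
assert headcount_poverty(3.0, 0.8, z) < headcount_poverty(2.0, 0.8, z)
# ... while a wider income distribution with the same mean raises it
assert headcount_poverty(2.0, 1.2, z) > headcount_poverty(2.0, 0.8, z)
```

The second comparison holds whenever the poverty line lies below the mean income, which is the regime the abstract describes for the log-normal case.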
Abstract:
Capital allocation is an important question in finance, both theoretically and from the point of view of applications. How should the risk of a given portfolio be divided among its sub-portfolios? How should capital be reserved to cover the outstanding risks, and how should the reserves be assigned to the business units? We take an axiomatic approach to capital allocation, that is, we work by requiring basic properties. The starting point of our article is the result of Csóka-Pintér [2010] that the axioms of coherent risk measures and the fairness, incentive and stability requirements on capital allocation are not compatible with one another. In this article we examine these requirements with analytical and simulation tools. We analyse both the capital allocation methods used in practical applications and methods that are interesting from a theoretical point of view. The main conclusion of the article is that the problem raised by Csóka-Pintér [2010] is also relevant from a practical point of view: it arises not only in theoretical investigations but is a very frequently occurring practical problem. A further result of the article is that, by characterizing the capital allocation methods examined, it helps practitioners choose among the different methods. / === / Risk capital allocation in finance is important both theoretically and in practical applications. How can the risk of a portfolio be shared among its sub-portfolios? How should capital reserves be set to cover risks, and how should the reserves be assigned to the business units? The study uses an axiomatic approach to analyse risk capital allocation, working with required basic properties. The starting point is a 2010 study by Csóka and Pintér, who showed that the axioms of coherent measures of risk are not compatible with some fairness, incentive compatibility and stability requirements of risk allocation. This paper discusses these requirements using analytical and simulation tools.
It analyses methods used in practical applications as well as methods with theoretically interesting properties. The main conclusion is that the problems identified by Csóka and Pintér (2010) remain relevant in practical applications: this is not just a theoretical issue but a common practical problem. A further contribution is that the analysis of the risk allocation methods helps practitioners choose among the different methods available.
Abstract:
Properly measuring and allocating risk is essential for the internal capital allocation and performance evaluation of banks, insurers, investment funds and other financial enterprises. In this article we show that the axioms of coherent risk measures can also be required in the case of illiquid portfolios. Measuring risk in this way, we review two papers that apply cooperative game theory to risk allocation. The first is optimistic: there always exists a stable, general method for allocating risk (capital) that is acceptable to every coalition of the subunits. The second paper is pessimistic, because it states that if, besides stability, we also want to be fair, then we run into an impossibility theorem. / === / Measuring and allocating risk properly are crucial for performance evaluation and internal capital allocation of portfolios held by banks, insurance companies, investment funds and other entities subject to financial risk. We argue that the axioms of coherent measures of risk are valid for illiquid portfolios as well. Then, we present the results of two papers on allocating risk measured by a coherent measure of risk. Assume a bank has some divisions. According to the first paper, there is always a stable allocation of risk capital that is not blocked by any coalition of the divisions; that is, there is a core-compatible allocation rule (we present some examples of risk allocation rules). The second paper considers two more natural requirements, Equal Treatment Property and Strong Monotonicity. Equal Treatment Property makes sure that similar divisions are treated symmetrically: if two divisions make the same marginal risk contribution to all coalitions of divisions not containing them, then the rule should allocate them the very same risk capital. Strong Monotonicity requires that if the risk environment changes in such a way that the marginal contribution of a division is not decreasing, then its allocated risk capital should not decrease either. However, if risk is evaluated by any coherent measure of risk, then there is no risk allocation rule satisfying Core Compatibility, Equal Treatment Property and Strong Monotonicity: we encounter an impossibility result.
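A concrete coherent measure of risk helps make the axioms tangible. The sketch below implements expected shortfall on a discrete loss sample and numerically checks subadditivity, one of the coherence axioms; it is an illustration only, not the allocation rules of the papers discussed:

```python
def expected_shortfall(losses, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of losses in a sample:
    a standard coherent risk measure, in discrete-sample form."""
    k = max(1, int(round(len(losses) * (1 - alpha))))  # size of the tail
    return sum(sorted(losses, reverse=True)[:k]) / k

# subadditivity axiom: risk(X + Y) <= risk(X) + risk(Y) (diversification helps)
X = [1, 2, 0, 5, 1, 0, 3, 2, 1, 4]          # hypothetical losses of division X
Y = [0, 1, 4, 0, 2, 3, 1, 1, 0, 2]          # hypothetical losses of division Y
XY = [x + y for x, y in zip(X, Y)]          # merged portfolio, scenario by scenario
assert expected_shortfall(XY, 0.8) <= expected_shortfall(X, 0.8) + expected_shortfall(Y, 0.8)
```

Allocating `expected_shortfall(XY, 0.8)` back to the divisions is precisely the problem where the impossibility result above bites: no rule can be core-compatible, symmetric and strongly monotonic at once.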
Abstract:
The paper reviews some axioms of additivity concerning ranking methods used for generalized tournaments with possible missing values and multiple comparisons. It is shown that one of the most natural properties, called consistency, has strong links to independence of irrelevant comparisons, an axiom judged unfavourable when players have different opponents. Therefore some directions for weakening consistency are suggested, and several ranking methods, the score, generalized row sum and least squares as well as fair bets and two of its variants (one of them entirely new), are analysed to determine whether they satisfy the properties discussed. It turns out that least squares and generalized row sum with an appropriate parameter choice preserve the relative ranking of two objects if the ranking problems added have the same comparison structure.
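The consistency property discussed can be illustrated with the simplest of these methods, the score: adding two ranking problems with the same comparison structure preserves the relative order of players. This is a toy sketch; generalized row sum and least squares require a linear solve and are omitted:

```python
def scores(results):
    """Score (wins minus losses) of each player from (winner, loser) pairs."""
    s = {}
    for w, l in results:
        s[w] = s.get(w, 0) + 1
        s[l] = s.get(l, 0) - 1
    return s

# two ranking problems over the same comparison structure (same pairs meet)
p1 = [("a", "b"), ("b", "c"), ("a", "c")]
p2 = [("a", "b"), ("c", "b"), ("a", "c")]
s1, s2, combined = scores(p1), scores(p2), scores(p1 + p2)

# consistency: a is ranked above b in both problems, and stays above in the sum
assert s1["a"] > s1["b"] and s2["a"] > s2["b"] and combined["a"] > combined["b"]
```

The paper's point is that this additivity becomes delicate for the more refined methods once the two problems' comparison structures differ.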