226 results for Knotted axioms
Abstract:
In the present paper we investigate the life cycles of formalized theories that appear in decision-making instruments and in science. In short, mixed theories are built in the following steps. Initially, a small collection of facts forms the kernel of the theory, and a special formalized language is constructed to express these facts. As the collection grows, inference rules and axioms are added to compress the knowledge. The next step is to generalize these rules to all expressions of the formalized language, and a conclusion procedure is introduced for them. In this way small theories are built for restricted fields of knowledge. The most important procedure is the mixing of these partial knowledge systems: the theories are glued together and the contradictions are eliminated. This last operation is the most complicated one, and some simplifying procedures are proposed.
Abstract:
The main concern of this paper is to present some improvements to results on the existence or non-existence of countably additive Borel measures that are not Radon measures on Banach spaces taken with their weak topologies, under the standard axioms (ZFC) of set theory. However, to put the results in perspective, we shall need to say something about consistency results concerning measurable cardinals.
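For orientation (a standard definition, not part of the abstract): a Borel measure $\mu$ on a topological space is a Radon measure if it is inner regular with respect to compact sets, i.e.
\[
\mu(B) = \sup\{\mu(K) : K \subseteq B,\ K \text{ compact}\} \quad \text{for every Borel set } B,
\]
with some authors additionally requiring local finiteness; the question above is whether every countably additive Borel measure on a Banach space with its weak topology is of this kind.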
Abstract:
The current INFRAWEBS European research project aims at developing an ICT framework enabling software and service providers to generate and establish open and extensible development platforms for Web Service applications. One of the concrete project objectives is developing a full-life-cycle software toolset for creating and maintaining Semantic Web Services (SWSs) supporting specific applications based on the Web Service Modelling Ontology (WSMO) framework. According to WSMO, the functional and behavioural descriptions of a SWS may be represented by means of complex logical expressions (axioms). The paper describes a specialized user-friendly tool for constructing and editing such axioms – the INFRAWEBS Axiom Editor. After discussing the main design principles of the Editor, its functional architecture is briefly presented. The tool is implemented in the Eclipse Graphical Environment Framework and the Eclipse Rich Client Platform.
Abstract:
The "recursive" definition of Default Logic is shown to be representable in a monotonic Modal Quantificational Logic whose modal laws are stronger than S5. Specifically, it is proven that a set of sentences of First Order Logic is a fixed-point of the "recursive" fixed-point equation of Default Logic with an initial set of axioms and defaults if and only if the meaning of the fixed-point is logically equivalent to a particular modal functor of the meanings of that initial set of sentences and of the sentences in those defaults. This is important because the modal representation allows the use of powerful automatic deduction systems for Modal Logic and because unlike the original "recursive" definition of Default Logic, it is easily generalized to the case where quantified variables may be shared across the scope of the components of the defaults.
Abstract:
The nonmonotonic logic called Reflective Logic is shown to be representable in a monotonic Modal Quantificational Logic whose modal laws are stronger than S5. Specifically, it is proven that a set of sentences of First Order Logic is a fixed-point of the fixed-point equation of Reflective Logic with an initial set of axioms and defaults if and only if the meaning of that set of sentences is logically equivalent to a particular modal functor of the meanings of that initial set of sentences and of the sentences in those defaults. This result is important because the modal representation allows the use of powerful automatic deduction systems for Modal Logic and because, unlike the original Reflective Logic, it is easily generalized to the case where quantified variables may be shared across the scope of the components of the defaults, thus allowing such defaults to produce quantified consequences. Furthermore, this generalization properly treats such quantifiers since all the laws of First Order Logic hold and since both the Barcan Formula and its converse hold.
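The Barcan Formula and its converse, invoked at the end of the abstract, are the standard principles governing the interaction of quantifiers with the modal operator:
\[
\forall x\, \Box\, \varphi(x) \rightarrow \Box\, \forall x\, \varphi(x) \qquad \text{(Barcan Formula)},
\]
\[
\Box\, \forall x\, \varphi(x) \rightarrow \forall x\, \Box\, \varphi(x) \qquad \text{(Converse Barcan Formula)}.
\]
Their joint validity is what permits quantified variables to be shared across the scope of the default components, as claimed above.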
Abstract:
The nonmonotonic logic called Default Logic is shown to be representable in a monotonic Modal Quantificational Logic whose modal laws are stronger than S5. Specifically, it is proven that a set of sentences of First Order Logic is a fixed-point of the fixed-point equation of Default Logic with an initial set of axioms and defaults if and only if the meaning, or rather the disquotation, of that set of sentences is logically equivalent to a particular modal functor of the meanings of that initial set of sentences and of the sentences in those defaults. This result is important because the modal representation allows the use of powerful automatic deduction systems for Modal Logic and because, unlike the original Default Logic, it is easily generalized to the case where quantified variables may be shared across the scope of the components of the defaults, thus allowing such defaults to produce quantified consequences. Furthermore, this generalization properly treats such quantifiers since both the Barcan Formula and its converse hold.
Abstract:
The nonmonotonic logic called Autoepistemic Logic is shown to be representable in a monotonic Modal Quantificational Logic whose modal laws are stronger than S5. Specifically, it is proven that a set of sentences of First Order Logic is a fixed-point of the fixed-point equation of Autoepistemic Logic with an initial set of axioms if and only if the meaning, or rather the disquotation, of that set of sentences is logically equivalent to a particular modal functor of the meaning of that initial set of sentences. This result is important because the modal representation allows the use of powerful automatic deduction systems for Modal Logic and, unlike the original Autoepistemic Logic, it is easily generalized to the case where quantified variables may be shared across the scope of modal expressions, thus allowing the derivation of quantified consequences. Furthermore, this generalization properly treats such quantifiers since both the Barcan Formula and its converse hold.
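For reference, the fixed-point equation of Autoepistemic Logic in its standard propositional form (recalled here, not quoted from the paper) says that $T$ is a stable expansion of an initial set $A$ when
\[
T = \{\varphi : A \cup \{L\psi : \psi \in T\} \cup \{\neg L\psi : \psi \notin T\} \vdash \varphi\},
\]
where $L\psi$ is read as "$\psi$ is believed". The paper characterizes such fixed points up to logical equivalence by a modal functor applied to the meaning of $A$.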
Abstract:
Petra G. Staynova - Quasi-Lindelöf spaces were introduced by Arhangelskii as a strengthening of weakly Lindelöf spaces. In this paper several properties of quasi-Lindelöf spaces are considered and compared with the corresponding results for Lindelöf and weakly Lindelöf spaces. Several examples are given, including a weakly Lindelöf space that is not quasi-Lindelöf; a space that is the topological product of two Lindelöf spaces but is not even quasi-Lindelöf; and a space that is quasi-Lindelöf but not Suslin. Finally, several open questions are posed.
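For convenience, the covering properties compared in the paper are usually defined as follows (a sketch of the standard definitions, not taken from the abstract): a space $X$ is weakly Lindelöf if every open cover has a countable subfamily $\mathcal{V}$ with
\[
X = \overline{\bigcup \mathcal{V}},
\]
and quasi-Lindelöf if every open cover has a countable subfamily $\mathcal{V}$ with
\[
X = \bigcup_{V \in \mathcal{V}} \overline{V}.
\]
Since a union of closures is contained in the closure of the union, every quasi-Lindelöf space is weakly Lindelöf, which is why the property is a strengthening.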
Abstract:
2000 Mathematics Subject Classification: 49J52, 49J50, 58C20, 26B09.
Abstract:
In a framework with two parties, deterministic voter preferences and a type of geographical constraints, we propose a set of simple axioms and show that they jointly characterize the districting rule that maximizes the number of districts one party can win, given the distribution of individual votes (the "optimal gerrymandering rule"). As a corollary, we obtain that no districting rule can satisfy our axioms and treat parties symmetrically.
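As a rough formalization of the rule being characterized (our notation, not the paper's): given a profile of individual votes $v$ and the set $\mathcal{D}(v)$ of districtings admissible under the geographical constraints, the optimal gerrymandering rule for the favoured party $A$ selects
\[
F(v) \in \arg\max_{D \in \mathcal{D}(v)} \#\{\, d \in D : \text{party } A \text{ wins district } d \text{ under } v \,\}.
\]
The asymmetry built into this objective is what the corollary exploits: any rule satisfying the axioms must favour one designated party and hence cannot treat the parties symmetrically.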
Abstract:
Risk capital allocation in finance is important both theoretically and in practical applications. How can the risk of a portfolio be shared among its sub-portfolios? How should capital reserves be set to cover the risks held, and how should those reserves be assigned to the business units? We use an axiomatic approach to analyse risk capital allocation, that is, we work by requiring basic properties. The starting point is the result of Csóka and Pintér (2010) that the axioms of coherent measures of risk are not compatible with some fairness, incentive-compatibility and stability requirements of risk allocation. This paper examines these requirements using analytical and simulation tools, and analyses capital allocation methods used in practical applications as well as methods that are interesting from a theoretical point of view. The main conclusion is that the problem identified by Csóka and Pintér (2010) is relevant in practical applications as well: it is not merely a theoretical issue, but a frequently occurring practical problem. A further contribution is that, by characterizing the capital allocation methods examined, the paper helps practitioners choose among the different methods available.
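The coherence axioms referred to above are, in the standard Artzner–Delbaen–Eber–Heath form (recalled for orientation, not quoted from the paper), the requirements that the risk measure $\rho$ satisfy, for portfolios $X, Y$, a scalar $\lambda \ge 0$ and a riskless amount $a$,
\[
\rho(X + Y) \le \rho(X) + \rho(Y), \qquad \rho(\lambda X) = \lambda\, \rho(X),
\]
\[
X \le Y \;\Rightarrow\; \rho(Y) \le \rho(X), \qquad \rho(X + a) = \rho(X) - a,
\]
that is, subadditivity, positive homogeneity, monotonicity and translation invariance. The incompatibility result concerns allocation rules built on top of measures satisfying these four axioms.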
Abstract:
In this paper shortest path games are considered. The transportation of a good in a network has both costs and benefits. The problem is to divide the profit of the transportation among the players. Fragnelli et al (2000) introduce the class of shortest path games, which coincides with the class of monotone games. They also give a characterization of the Shapley value on this class of games. In this paper we consider four further characterizations of the Shapley value (Shapley (1953)'s, Young (1985)'s, Chun (1989)'s, and van den Brink (2001)'s axiomatizations) and conclude that all the mentioned axiomatizations are valid for shortest path games. Fragnelli et al (2000)'s axioms are based on the graph behind the problem; in this paper we do not consider graph-specific axioms but take TU axioms only, that is, we consider all shortest path problems and take the view of an abstract decision maker who focuses on the abstract problem rather than on the concrete situations.
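The solution being characterized is, in the standard TU notation (recalled here for orientation), the Shapley value of a game $v$ on player set $N$:
\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\, \bigl(v(S \cup \{i\}) - v(S)\bigr),
\]
player $i$'s expected marginal contribution when the players are ordered uniformly at random. The four cited axiomatizations single out this formula through different auxiliary properties (e.g. additivity, strong monotonicity, fairness), and the point of the paper is that each characterization remains valid when attention is restricted to shortest path games.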
Abstract:
Measuring and allocating risk properly are crucial for the performance evaluation and internal capital allocation of portfolios held by banks, insurance companies, investment funds and other entities subject to financial risk. We argue that the axioms of coherent measures of risk are valid for illiquid portfolios as well. Then we present the results of two papers on allocating risk measured by a coherent measure of risk. Assume a bank has some divisions. According to the first paper there is always a stable allocation of risk capital, one that is not blocked by any coalition of the divisions; that is, there is a core compatible allocation rule (we present some examples of risk allocation rules). The second paper considers two more natural requirements, Equal Treatment Property and Strong Monotonicity. Equal Treatment Property makes sure that similar divisions are treated symmetrically: if two divisions make the same marginal risk contribution to every coalition of divisions not containing them, then the rule should allocate them the very same risk capital. Strong Monotonicity requires that if the risk environment changes in such a way that the marginal contribution of a division does not decrease, then its allocated risk capital should not decrease either. However, if risk is evaluated by any coherent measure of risk, then there is no risk allocation rule satisfying Core Compatibility, Equal Treatment Property and Strong Monotonicity: we encounter an impossibility result.
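A sketch of the Core Compatibility requirement in the usual risk allocation setting (our notation; the exact setup may differ in the papers discussed): if division $i$ holds portfolio $X_i$ and $\rho$ is the coherent measure of risk, an allocation $(x_1, \ldots, x_n)$ of the total risk capital is core compatible when
\[
\sum_{i \in N} x_i = \rho\Bigl(\sum_{i \in N} X_i\Bigr) \quad \text{and} \quad \sum_{i \in S} x_i \le \rho\Bigl(\sum_{i \in S} X_i\Bigr) \ \text{for every coalition } S \subseteq N,
\]
so that no group of divisions is charged more risk capital than it would need on its own.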
Abstract:
In everyday life we encounter countless situations in which the demand for a good exceeds the available supply. Examples include compensation claims, the claims of the creditors of a bankrupt firm, the queue of patients waiting for an organ transplant, and so on. In such situations the scarce quantity is divided among the agents according to some procedure. It is customary to distinguish deterministic and stochastic rationing procedures, although in many cases only deterministic procedures are applied. From a fairness point of view, however, stochastic rationing procedures are also often used, as the United States Army did after the end of the Second World War when withdrawing its soldiers stationed abroad, and when selecting the persons to be drafted during the Vietnam War. We investigated the minimal variance methods introduced in Tasnádi [6] with respect to seven popular axioms. We proved that if a deterministic rationing method satisfies demand monotonicity, resource monotonicity, equal treatment of equals and self-duality, then the minimal variance methods associated with the given deterministic rationing method also satisfy demand monotonicity, resource monotonicity, equal treatment of equals and self-duality. Furthermore, we found that the consistency, lower composition and upper composition of a deterministic rationing method do not imply the consistency, lower composition and upper composition of a minimal variance method associated with the given deterministic rationing method.
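One of the axioms carried over, self-duality, can be sketched in the usual rationing notation (a reminder of the standard definition, not quoted from the paper): for a claims vector $c$ and an amount $E \le \sum_i c_i$ to be distributed, the dual of a rationing method $r$ is
\[
r^{*}(c, E) = c - r\Bigl(c, \sum_i c_i - E\Bigr),
\]
i.e. the dual awards exactly what $r$ would leave unsatisfied, and $r$ is self-dual when $r = r^{*}$, so that gains and losses are divided symmetrically.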