946 results for Deep Inference, Proof Theory, Teoria della Dimostrazione, Cut elimination, Gentzen Hauptsatz
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory; in practice it may well matter less if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode under flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work comprises five articles, all of which apply probability modeling to problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two hierarchical Bayesian modeling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, the introductory part of this work presents a framework for probability-based inference using only Bayesian concepts. Some results from the original articles are re-derived with the toolbox provided herein, to show that they are also justifiable under this more general framework. The assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning applies under sampling from a finite population. The main emphasis is on probability-based inference under incomplete observation due to study design, illustrated with a generic two-phase cohort sampling design as an example.
The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set and conditions on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates together with a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible in this case as well.
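The abstract above repeatedly appeals to posterior distributions and posterior predictive inference. A minimal illustration of the idea, using a conjugate Beta-Binomial model (this is a generic textbook sketch, not the hierarchical models of the thesis; all numbers are invented):

```python
# Minimal conjugate Beta-Binomial sketch: posterior update and posterior
# predictive probability for a success probability theta.
# Illustrative only; the thesis's hierarchical Bayesian models are far richer.

def beta_binomial_posterior(successes, failures, a=1.0, b=1.0):
    """Update a Beta(a, b) prior with binomial data; returns posterior (a, b)."""
    return a + successes, b + failures

def posterior_predictive_success(a_post, b_post):
    """P(next observation is a success) = posterior mean of theta."""
    return a_post / (a_post + b_post)

# Flat prior Beta(1, 1), then 7 successes and 3 failures observed.
a_post, b_post = beta_binomial_posterior(successes=7, failures=3)
p_next = posterior_predictive_success(a_post, b_post)
print(round(p_next, 3))  # 8/12 -> 0.667
```

With a flat prior, the posterior mode coincides with the maximum likelihood estimate, which mirrors the abstract's remark that maximum likelihood can be read as a special case of Bayesian inference.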
Abstract:
This study focuses on the theory of individual rights that the German theologian Conrad Summenhart (1455-1502) explicated in his massive work Opus septipartitum de contractibus pro foro conscientiae et theologico. The central question is: how does Summenhart understand the concept of an individual right and its immediate implications? The basic premise of this study is that in Opus septipartitum Summenhart composed a comprehensive theory of individual rights as a contribution to the ongoing medieval discourse on rights. With this rationale, the first part of the study concentrates on earlier discussions of rights as the background to Summenhart's theory. Special attention is paid to the language in which 'right' was defined in terms of 'power'. In the fourteenth century, writers like Hervaeus Natalis and William Ockham maintained that 'right' signifies a power by which the right-holder can use material things licitly. It is also shown how attempts to describe what is meant by the term 'right' became more specific and refined. Gerson followed the implications that the term 'power' had in natural philosophy and attributed rights to animals and other creatures. To secure 'right' as a normative concept, Gerson invoked the ancient ius suum cuique principle of justice and introduced a definition in which right was seen as derived from justice. The latter part of this study reconstructs Summenhart's theory of individual rights in three sections. The first section clarifies Summenhart's discussion of the right of the individual, that is, the concept of an individual right. Summenhart specified Gerson's description of right as power, making further use of the language of natural philosophy. In this respect, Summenhart's theory brought to a close a particular continuity of thought centered on the view that right signifies a power to licit action.
Perhaps the most significant feature of Summenhart's discussion was the way he explicated the implication of liberty present in Gerson's language of rights. Summenhart assimilated libertas to the self-mastery or dominion that, in the economic context of the discussion, took the form of (a moderate) self-ownership. Summenhart's discussion also introduced two apparent extensions to Gerson's terminology. First, Summenhart classified right as a relation, and second, he equated right with dominion. It is distinctive of Summenhart's view that he took action as the primary determinant of right: everyone has as much right or dominion in regard to a thing as there are actions it is licit for him to exercise in regard to it. The second section elaborates Summenhart's discussion of the species of dominion, which answered the question of what kinds of rights exist and thereby clarified the implications of the concept of an individual right. The central feature of Summenhart's discussion was his conscious effort to systematize Gerson's language by combining classifications of dominion into a coherent whole. In this respect, his treatment of natural dominion is emblematic. Summenhart constructed the concept of natural dominion using the concepts of foundation (founded on a natural gift) and law (according to the natural law). In defining natural dominion as dominion founded on a natural gift, Summenhart attributed natural dominion to animals and even to heavenly bodies. In discussing man's natural dominion, Summenhart pointed out that natural dominion is not sufficiently identified by its foundation but requires further specification, which he finds in the idea that natural dominion is appropriate to its subject according to the natural law. This characterization led him to treat God's dominion as natural dominion.
In part, this was due to Summenhart's specific understanding of the natural law, which made reasonableness the primary criterion for natural dominion at the expense of any metaphysical considerations. The third section clarifies Summenhart's discussion of the property rights defined by positive human law. By giving an account of juridical property rights, Summenhart connected his philosophical and theological theory of rights to the juridical language of his times and demonstrated that his own language of rights was compatible with current juridical terminology. Summenhart prepared his discussion of property rights with an account of the justification for private property, which gave private property a direct and strong natural-law-based justification. Summenhart's discussion of the four property rights usus, usufructus, proprietas, and possession aimed at a detailed report of the usage of these concepts in juridical discourse. His discussion was characterized by extensive use of juridical source texts, and it became more direct and verbatim the more it became entangled with the details of juridical doctrine. At the same time he promoted his own language of rights, especially by applying the idea of right as a relation. He also made a recognizable effort to systematize the juridical language related to property rights.
Abstract:
The book presents a reconstruction, interpretation and critical evaluation of the Schumpeterian theoretical approach to socio-economic change. The analysis focuses on the problem of social evolution, on the interpretation of the innovation process and business cycles and, finally, on Schumpeter's optimistic neglect of ecological-environmental conditions as possible factors influencing socio-economic change. The author investigates how the Schumpeterian approach describes the process of social and economic evolution, and how the logic of transformations is described, explained and understood in Schumpeterian theory. The material of the study includes Schumpeter's works written after 1925, a related part of the commentary literature on these works, and a selected part of the related literature on the innovation process, technological transformations and the problem of long waves. Concerning the period after 1925, the Schumpeterian oeuvre is conceived and analysed as a more or less homogeneous corpus of texts. The book is divided into nine chapters. Chapters 1-2 describe the research problems and methods. Chapter 3 provides a systematic reconstruction of Schumpeter's ideas concerning social and economic evolution. Chapters 4 and 5 focus on the innovation process. Chapters 6 and 7 examine Schumpeter's theory of business cycles. Chapter 8 evaluates Schumpeter's relative neglect of ecological-environmental conditions as possible factors influencing socio-economic change. Finally, Chapter 9 draws the main conclusions.
Abstract:
This thesis examines interculturality in intercultural bilingual education (Educación Intercultural Bilingüe, EIB) in Bolivia, and in particular in the master's programme in intercultural bilingual education (Maestría en Educación Intercultural Bilingüe), coordinated by PROEIB Andes in cooperation with the Universidad Mayor de San Simón in Cochabamba. The aim of the thesis is to establish how interculturality is defined and what it means in practice in the various areas of teaching: content, teaching methods and materials, and assessment. Since the implementation and study of intercultural bilingual education in Latin American countries has so far concentrated almost exclusively on basic education, this work focuses on manifestations of interculturality specifically in the Bolivian higher-education context. The data consist of eight thematic interviews with EIB experts, conducted in Bolivia in 2004 by Eila Isotalus, MA. The interview data were analysed using theory-guided, i.e. abductive, content analysis. The theoretical background comprises, on the one hand, definitions of concepts related to interculturality and multiculturalism and, on the other, a review of models of intercultural education. The analysis draws in particular on James A. Banks's theory of the five dimensions of multicultural education, which makes it possible to consider how interculturality is realized in the different areas of teaching and to highlight the distinctive features of Bolivian intercultural education. The analysis shows that defining the concept of interculturality is a strongly context-dependent and continuous process, shaped by the views and demands of different actors. The definitions offered by the EIB experts can be divided into macro- and micro-social categories, according to whether interculturality is seen primarily as a societal or an individual-level concept.
The data emphasize the idea of Latin American interculturality as a political concept centred on a demand for change in societal power relations. One of the greatest challenges in implementing interculturality in Bolivian higher education lies in the traditions of academic culture, which hinder the adoption of new practices. So far, interculturality in teaching has mainly meant diversifying content by adding elements of local cultures to curricula. An important step in the development of EIB is the shift of emphasis from questions of content to the creation of intercultural teaching methods. These methods should be grounded in an understanding of learning as a holistic, communal process, and should thus bridge the gap between school and the everyday life of communities. With regard to teaching methods and materials, a central question of interculturality is how to draw on the oral culture and knowledge-sharing traditions of indigenous peoples in teaching. The assessment of students in the master's programme seeks to take into account the holistic development of the individual rather than mere course performance, but the grade-based assessment practice has not yet been abandoned because of university requirements. All in all, implementing interculturality in EIB and in the master's programme is a long-term process that requires questioning traditional teaching practices and, in higher education, also challenging academic culture. It is essential that, alongside experts, students, indigenous communities and indigenous-organization activists also take part in this process.
Abstract:
A theory and a generalized synthesis procedure are advocated for the design of weir notches and orifice-notches having a base of any given shape, to a depth a, such that the discharge through the notch is proportional to any single monotonically increasing function of the depth of flow measured above a certain datum. The problem is reduced to finding an exact solution of a Volterra integral equation of Abel type. The maximization of the depth of the datum below the crest of the notch is investigated. It is proved that for a weir notch made of one continuous curve, with flow proportional to the mth power of the head, it is impossible to bring the datum lower than (2m − 1)a below the crest of the notch. A new concept of an orifice-notch, having a discontinuity in the curve and a division of the flow into two distinct portions, is presented. The division of flow is shown to have the beneficial effect of lowering the datum below (2m − 1)a from the crest of the weir while still maintaining proportionality of the flow. Experiments with one such orifice-notch gave a constant coefficient of discharge of 0.625. The importance of this analysis in the design of grit chambers is emphasized.
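The Abel-type integral equation behind such designs has a well-known classical consequence: if the notch width varies as w(y) = y^(m − 3/2), the discharge is proportional to the mth power of the head. The sketch below checks this numerically for m = 2 (it uses the classical proportional-weir relation with all physical constants dropped; it is not the paper's orifice-notch construction):

```python
# Numerical sanity check of the classical proportional-weir relation:
# with notch width w(y) = y**(m - 1.5), the strip-integrated discharge
#     Q(h) = integral_0^h w(y) * sqrt(h - y) dy   (constants dropped)
# scales as h**m, so Q(2h)/Q(h) should equal 2**m.

import math

def discharge(h, m, n=200_000):
    """Midpoint-rule approximation of the discharge integral."""
    dy = h / n
    return sum(((i + 0.5) * dy) ** (m - 1.5) * math.sqrt(h - (i + 0.5) * dy)
               for i in range(n)) * dy

m = 2.0
ratio = discharge(2.0, m) / discharge(1.0, m)
print(round(ratio, 3))  # close to 2**m = 4.0
```

The exact value of the integral is h^m · B(m − 1/2, 3/2), a Beta function, which is where the proportionality comes from; the midpoint rule merely confirms the scaling.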
Abstract:
Governance has been one of the most popular buzzwords in recent political science. As with any term shared by numerous fields of research, as well as everyday language, governance is encumbered by a jungle of definitions and applications. This work elaborates on the concept of network governance. Network governance refers to complex policy-making situations in which a variety of public and private actors collaborate to produce and define policy. Governance consists of processes of autonomous, self-organizing networks of organizations exchanging information and deliberating. Network governance is a theoretical concept that corresponds to an empirical phenomenon. Often, this phenomenon is used to describe a historical development: governance frequently denotes the changes in the political processes of Western societies since the 1980s. In this work, empirical governance networks are used as an organizing framework, and the concepts of autonomy, self-organization and network structure are developed as tools for the empirical analysis of any complex decision-making process. This work develops this framework and explores governance networks in the case of environmental policy-making in the City of Helsinki, Finland. Crafting a local ecological sustainability programme required support and knowledge from all sectors of administration, a number of entrepreneurs and companies, and the inhabitants of Helsinki. The policy process relied explicitly on networking, with public and private actors collaborating to design policy instruments. Communication between individual organizations led to the development of network structures and patterns. This research analyses these patterns and their effects on policy choice by applying the methods of social network analysis. A variety of social network analysis methods are used to uncover different features of the networked process.
Links between individual network positions, network subgroup structures and macro-level network patterns are compared to the types of organizations involved and final policy instruments chosen. By using governance concepts to depict a policy process, the work aims to assess whether they contribute to models of policy-making. The conclusion is that the governance literature sheds light on events that would otherwise go unnoticed, or whose conceptualization would remain atheoretical. The framework of network governance should be in the toolkit of the policy analyst.
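One of the simplest measures this kind of social network analysis starts from is degree centrality, which relates an actor's number of ties to the number of other actors. A toy illustration on an invented governance network (the actor names and ties here are hypothetical, not the Helsinki data):

```python
# Degree centrality on a toy governance network, stored as an adjacency
# dict. Actor names and ties are invented for illustration only.

toy_network = {
    "city_env_office": {"ngo", "energy_company", "residents_assoc"},
    "ngo": {"city_env_office", "residents_assoc"},
    "energy_company": {"city_env_office"},
    "residents_assoc": {"city_env_office", "ngo"},
}

def degree_centrality(graph):
    """Degree divided by the number of other nodes (Freeman's normalization)."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

for actor, c in sorted(degree_centrality(toy_network).items()):
    print(actor, round(c, 2))
```

In this toy example the coordinating city office is tied to every other actor and so has centrality 1.0, the pattern one would expect of a hub organization in a networked policy process.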
Abstract:
We present a general formalism for deriving bounds on the shape parameters of the weak and electromagnetic form factors, using as input correlators calculated from perturbative QCD and exploiting analyticity and unitarity. The values resulting from the symmetries of QCD at low energies, or from lattice calculations at special points inside the analyticity domain, can be included in an exact way. We write down the general solution of the corresponding Meiman problem for an arbitrary number of interior constraints, and the integral equations that allow one to include the phase of the form factor along a part of the unitarity cut. A formalism that includes the phase and some information on the modulus along a part of the cut is also given. For illustration we present constraints on the slope and curvature of the K_l3 scalar form factor and discuss our findings in some detail. The techniques are useful for checking the consistency of various inputs and for controlling the parameterizations of the form factors entering precision predictions in flavor physics.
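For readers outside flavor physics, the "slope" and "curvature" being bounded are the first Taylor coefficients of the form factor around t = 0. A common convention (conventions vary between papers, so this is indicative only) writes the K_l3 scalar form factor as:

```latex
% Low-energy expansion of the K_{l3} scalar form factor, defining the
% slope \lambda_0' and curvature \lambda_0'' that analyses of this kind
% constrain (normalization conventions differ between papers):
f_0(t) = f_0(0)\left[\, 1
  + \lambda_0'\,\frac{t}{M_{\pi}^2}
  + \frac{1}{2}\,\lambda_0''\,\frac{t^2}{M_{\pi}^4}
  + \cdots \right]
```

Analyticity and unitarity then translate the perturbative-QCD correlator input into allowed regions in the (λ₀′, λ₀″) plane.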
Abstract:
In this thesis the current status and some open problems of noncommutative quantum field theory are reviewed. The introduction aims to put these theories in their proper context as a part of the larger program to model the properties of quantized space-time. Throughout the thesis, special focus is put on the role of noncommutative time and how its nonlocal nature presents us with problems. Applications in scalar field theories as well as in gauge field theories are presented. The infinite nonlocality of space-time introduced by the noncommutative coordinate operators leads to interesting structure and new physics. High energy and low energy scales are mixed, causality and unitarity are threatened and in gauge theory the tools for model building are drastically reduced. As a case study in noncommutative gauge theory, the Dirac quantization condition of magnetic monopoles is examined with the conclusion that, at least in perturbation theory, it cannot be fulfilled in noncommutative space.
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census had developed a sampling design for the Current Population Survey (CPS) in the 1940s. A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's principle of inverse probability and on his derivation of the central limit theorem, published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he expressed by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. At the International Statistical Institute meeting in 1894, the Norwegian Anders Kiaer presented the idea of the representative method of drawing samples, namely that the sample should be a miniature of the population. This idea still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed a theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics.
In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw repeated samples from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory of double sampling, which gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method whose data collection costs were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations of the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since such general domains and measures were not previously allowed. The third article deals with local Tb theorems in both the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article has to do with sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties introduced by the non-linearity of maximal truncations.
Abstract:
A generalization of Nash-Williams' lemma is proved for the structure of m-uniform null (m − k)-designs. It is then applied to various graph reconstruction problems. A short combinatorial proof of the edge reconstructibility of digraphs having regular underlying undirected graphs (e.g., tournaments) is given. A Nash-Williams-type lemma is conjectured for the vertex reconstruction problem.
Abstract:
In the distributed storage setting introduced by Dimakis et al., B units of data are stored across n nodes in the network in such a way that the data can be recovered by connecting to any k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading at most β units of data from each node. In this paper, we introduce a flexible framework in which the data can be recovered by connecting to any number of nodes as long as the total amount of data downloaded is at least B. Similarly, regeneration of a failed node is possible if the new node connects to the network using links whose individual capacity is bounded above by β_max and whose sum capacity equals or exceeds a predetermined parameter γ. In this flexible setting, we obtain the cut-set lower bound on the repair bandwidth, along with a constructive proof of the existence of codes meeting this bound for all values of the parameters. An explicit code construction is provided which is optimal in certain parameter regimes.
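For orientation, the classical (fixed-parameter) cut-set bound of Dimakis et al. that this flexible framework generalizes can be evaluated directly; the sketch below computes it for a per-node storage capacity α, which the abstract does not name explicitly (the symbol α and the example numbers are assumptions for illustration):

```python
# Classical cut-set bound for (n, k, d) regenerating codes
# (Dimakis et al.): the recoverable file size B satisfies
#     B <= sum_{i=0}^{k-1} min(alpha, (d - i) * beta),
# where alpha is per-node storage and beta is per-link repair download.
# The paper's flexible setting relaxes both the recovery and repair rules.

def cut_set_bound(k, d, alpha, beta):
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# Example: k=3, d=4, alpha=2, beta=1 gives
# min(2,4) + min(2,3) + min(2,2) = 6 units of data.
print(cut_set_bound(3, 4, 2, 1))  # 6
```

The two extreme trade-off points of this bound are the familiar minimum-storage and minimum-bandwidth regenerating (MSR/MBR) points, depending on whether α or (d − i)β is the binding term.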
Abstract:
Critical buckling loads of laminated fibre-reinforced plastic square panels have been obtained using the finite element method. Various boundary conditions, lay-up details, fibre orientations and cut-out sizes are considered. A 36-degrees-of-freedom triangular element based on classical lamination theory (CLT) has been used for the analysis. The performance of this element is validated by comparing results with some of those available in the literature. New results are given for several cases of boundary conditions for [0°/±45°/90°]_s laminates. The effect of fibre orientation in the ply on the buckling loads has been investigated by considering [±θ]_6s laminates.
Abstract:
We give a simple linear algebraic proof of the following conjecture of Frankl and Füredi [7, 9, 13]. (Frankl-Füredi Conjecture) If F is a hypergraph on X = {1, 2, 3, ..., n} such that 1 ≤ |E ∩ F| ≤ k for all E, F ∈ F, E ≠ F, then |F| ≤ ∑_{i=0}^{k} C(n−1, i). We generalise a method of Palisse, and our proof technique can be viewed as a variant of the technique used by Tverberg to prove a result of Graham and Pollak [10, 11, 14]. Our proof technique is easily described. First, we derive an identity satisfied by a hypergraph F using its intersection properties. From this identity we obtain a set of homogeneous linear equations. We then show that this system defines the zero subspace of R^{|F|}. Finally, the desired bound on |F| is obtained from the bound on the number of linearly independent equations. This proof technique can also be used to prove a more general theorem (Theorem 2). We conclude by indicating how the technique can be generalised to uniform hypergraphs by proving the uniform Ray-Chaudhuri-Wilson theorem. (C) 1997 Academic Press.
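The bound ∑_{i=0}^{k} C(n−1, i) is tight: the family of all subsets of {1, ..., n} that contain a fixed element and have size at most k + 1 satisfies the intersection condition and attains it exactly. A small brute-force check of this extremal example (a standard construction, not taken from the paper):

```python
# Brute-force check that the Frankl-Furedi bound is attained by the
# family of all subsets of {1, ..., n} containing the element 1 and of
# size at most k + 1: any two distinct members share at least the
# element 1 and at most k elements, and the family has exactly
# sum_{i=0}^{k} C(n-1, i) members.

from itertools import combinations
from math import comb

def extremal_family(n, k):
    rest = range(2, n + 1)
    return [frozenset({1} | set(c))
            for i in range(k + 1)
            for c in combinations(rest, i)]

n, k = 6, 2
fam = extremal_family(n, k)
# Verify the pairwise intersection condition 1 <= |E & F| <= k.
assert all(1 <= len(E & F) <= k for E in fam for F in fam if E != F)
print(len(fam), sum(comb(n - 1, i) for i in range(k + 1)))  # 16 16
```

Two distinct members cannot intersect in k + 1 elements, since both would then coincide with their intersection; this is why the upper intersection bound holds.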
Abstract:
The design and development of a nonresonant edge slot antenna for phased array applications is presented. The radiating element is a slot cut on the narrow wall of a rectangular waveguide (an edge slot). The admittance characteristics of the edge slot have been rigorously studied using a novel hybrid method. Nonresonant arrays have been fabricated using the present slot characterization data and earlier published data. The experimentally measured electrical characteristics of the antenna are presented, which clearly bring out the accuracy of the present method.