900 results for probabilistic rules
Abstract:
A novel framework for probabilistic structural assessment of existing structures is presented in this paper, combining model identification and reliability assessment procedures and treating different sources of uncertainty in an objective way. A short description of structural assessment applications reported in the literature is given first. The developed model identification procedure, supported by a robust optimization algorithm, is then presented. Special attention is given to the experimental and numerical errors to be considered in this algorithm's convergence criterion. An updated numerical model is obtained from this process. The reliability assessment procedure, which considers a probabilistic model for the structure under analysis, is then introduced, incorporating the results of the model identification procedure. The model is subsequently updated as new data are acquired, through a Bayesian inference algorithm that explicitly addresses statistical uncertainty. Finally, the developed framework is validated with a set of reinforced concrete beams that were loaded up to failure in the laboratory.
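The Bayesian updating step described in this abstract can be illustrated with the simplest conjugate case. The normal-normal update below is only a sketch of the idea, not the paper's algorithm; the prior, the measurement variance, and the concrete-strength readings are all invented for illustration.

```python
def normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal Bayesian update for a scalar parameter.

    prior_mean, prior_var: current belief about the parameter
    obs: a new measurement; obs_var: its (known) measurement variance
    Returns the posterior mean and variance.
    """
    w = prior_var / (prior_var + obs_var)           # weight on the new datum
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Sequentially absorb measurements, e.g. of a beam's concrete strength (MPa).
# Prior and data are hypothetical.
mean, var = 30.0, 25.0
for y in [33.1, 34.0, 32.5]:
    mean, var = normal_update(mean, var, y, obs_var=4.0)
print(round(mean, 2), round(var, 2))
```

Each observation shrinks the posterior variance, so statistical uncertainty decreases monotonically as test data accumulate.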
Abstract:
Many democratic decision-making institutions involve quorum rules. Such rules are commonly motivated by concerns about the “legitimacy” or “representativeness” of decisions reached when only a subset of eligible voters participates. A prominent example can be found in direct democracy mechanisms such as referenda and initiatives. We conduct a laboratory experiment to investigate the consequences of the two most common types of quorum rules: a participation quorum and an approval quorum. We find that both types of quorum lead to lower participation rates, dramatically increasing the likelihood of full-fledged electoral boycotts on the part of those who endorse the status quo. This discouraging effect is significantly larger under a participation quorum than under an approval quorum.
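The mechanical difference between the two quorum rules studied above can be captured in a stylized sketch. The electorate size, thresholds, and vote counts below are hypothetical; the point is only that a boycott by status-quo supporters can defeat a participation quorum but not an approval quorum.

```python
def referendum_outcome(yes, no, eligible, quorum, kind):
    """Decide a stylized referendum under a quorum rule.

    kind='participation': valid only if turnout >= quorum * eligible
    kind='approval':      valid only if yes-votes >= quorum * eligible
    Returns 'change' if the proposal passes, else 'status quo'.
    """
    turnout = yes + no
    if kind == 'participation' and turnout < quorum * eligible:
        return 'status quo'
    if kind == 'approval' and yes < quorum * eligible:
        return 'status quo'
    return 'change' if yes > no else 'status quo'

# If all no-voters abstain, low turnout invalidates the vote under a
# participation quorum, while an approval quorum only counts yes-votes:
print(referendum_outcome(40, 0, 100, 0.5, 'participation'))  # status quo
print(referendum_outcome(55, 0, 100, 0.5, 'approval'))       # change
```

Under the participation quorum, abstaining is a better strategy for status-quo supporters than voting "no", which is exactly the boycott incentive the experiment documents.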
Abstract:
In recent decades, increased interest has been evident in research on multi-scale hierarchical modelling in the field of mechanics, including the field of wood products and timber engineering. One of the main motivations for hierarchical modelling is to understand how properties, composition, and structure at lower scale levels may influence, and be used to predict, material properties at the macroscopic and structural engineering scales. This chapter presents the applicability of statistical and probabilistic methods, such as the maximum likelihood method and Bayesian methods, to the representation of timber's mechanical properties and to inference that accounts for prior information obtained at different scales of importance. These methods allow distinct reference properties of timber, such as density, bending stiffness, and strength, to be analysed, and information obtained through different non-destructive, semi-destructive, or destructive tests to be considered hierarchically. The basis and fundamentals of the methods are described, and recommendations and limitations are discussed. The methods may be used in several contexts, but they require expert knowledge to assess the correct statistical fit and to define the correlation structure between properties.
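As a minimal illustration of the maximum likelihood step mentioned above, a normal model has closed-form estimates: the sample mean and the 1/n sample variance. The density values below are invented, and fitting a normal to timber density is only one of several distributional choices the chapter discusses.

```python
import math

def normal_mle(samples):
    """Closed-form maximum likelihood estimates for a normal model:
    the sample mean and the (biased, 1/n) standard deviation."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, math.sqrt(var)

# Hypothetical density measurements for a timber batch (kg/m^3)
densities = [452.0, 438.0, 465.0, 449.0, 457.0]
mu, sigma = normal_mle(densities)
print(round(mu, 1), round(sigma, 2))
```

In the hierarchical setting described in the chapter, estimates like these at one scale would serve as priors for inference at the next scale up.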
Abstract:
Master's in Finance
Abstract:
The verification and analysis of programs with probabilistic features is a necessary task in current scientific and technological practice. The success, and subsequent widespread adoption, of hardware-level implementations of communication protocols and of probabilistic solutions to distributed problems makes the use of stochastic agents as programming elements more than interesting. In many of these cases the use of randomized agents produces better and more efficient solutions; in others it provides solutions where traditional methods cannot find any. These algorithms are generally embedded in multiple hardware mechanisms, so an error in them can produce an undesired multiplication of their harmful effects.
Currently, the greatest effort in the analysis of probabilistic programs is devoted to the study and development of tools called probabilistic model checkers. Given a finite model of the stochastic system, these tools automatically obtain several performance measures for it. Although this can be quite useful for verifying programs, for general-purpose systems it becomes necessary to check more complete specifications that bear on the correctness of the algorithm. It would even be interesting to obtain the system's properties automatically, in the form of invariants and counterexamples.
This project aims to address the problem of static analysis of probabilistic programs through the use of deductive tools such as theorem provers and SMT solvers, which have shown their maturity and effectiveness in attacking problems of traditional programming. In order not to lose automation in the methods, we will work within the framework of "Abstract Interpretation", which provides an outline for our theoretical development.
At the same time, we will put these foundations into practice through concrete implementations that use those tools.
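The abstract-interpretation framework mentioned above can be illustrated with the classical interval domain. This toy sketch ignores probabilities entirely (a proper probabilistic analysis would track distributions, which is the project's research question) and only derives a sound reachability invariant for a two-way branch.

```python
def interval_add(a, b):
    """Abstract addition in the interval domain:
    [l1, u1] + [l2, u2] = [l1 + l2, u1 + u2]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_join(a, b):
    """Least upper bound: the smallest interval containing both."""
    return (min(a[0], b[0]), max(a[1], b[1]))

# Abstractly execute: x = 0; then (probabilistically) x += 1 or x += 2.
x = (0, 0)
branch1 = interval_add(x, (1, 1))    # x += 1
branch2 = interval_add(x, (2, 2))    # x += 2
x = interval_join(branch1, branch2)  # merge after the probabilistic choice
print(x)  # (1, 2): the sound invariant 1 <= x <= 2, probabilities ignored
```

The join over-approximates the set of reachable states, which is what makes the invariant automatically derivable yet still sound.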
Abstract:
Fuzzy classification, semi-supervised learning, data mining
Abstract:
In this paper, we study individual incentives to report preferences truthfully for the special case when individuals have dichotomous preferences on the set of alternatives and preferences are aggregated in the form of scoring rules. In particular, we show that (a) the Borda Count coincides with Approval Voting on the dichotomous preference domain, (b) the Borda Count is the only strategy-proof scoring rule on the dichotomous preference domain, and (c) if at least three individuals participate in the election, then the dichotomous preference domain is the unique maximal rich domain under which the Borda Count is strategy-proof.
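Claim (a) can be checked numerically: on a dichotomous profile, ties-averaged Borda scores are an affine transformation of approval scores, so the two rules induce the same ranking. The random profiles below are arbitrary test data, and the ties-averaging convention for Borda is an assumption stated in the comments.

```python
import random

def scores(profile, m):
    """Approval and ties-averaged Borda scores on a dichotomous profile.

    profile: list of approval sets, one per voter; alternatives are 0..m-1.
    Each voter ranks approved alternatives above all others; tied positions
    receive the average of the Borda points they span (a common convention).
    """
    approval = [0.0] * m
    borda = [0.0] * m
    for A in profile:
        k = len(A)
        top = m - (k + 1) / 2      # average of positions m-1 .. m-k
        bottom = (m - k - 1) / 2   # average of positions m-k-1 .. 0
        for a in range(m):
            approval[a] += a in A
            borda[a] += top if a in A else bottom
    return approval, borda

def ranking(s):
    return sorted(range(len(s)), key=lambda a: -s[a])

random.seed(0)
m, n = 5, 7
profile = [{a for a in range(m) if random.random() < 0.4} for _ in range(n)]
approval, borda = scores(profile, m)
print(ranking(approval) == ranking(borda))  # True
```

The reason is that each approval is worth exactly m/2 extra Borda points relative to a non-approval, so Borda scores equal a voter-dependent constant plus (m/2) times the approval count.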
Abstract:
Constitutional arrangements affect the decisions made by a society. We study how this effect leads to preferences of citizens over constitutions, and ultimately how this feeds back to determine which constitutions can survive in a given society. Constitutions are stylized here to consist of a voting rule for ordinary business and a possibly different voting rule for making changes to the constitution. We define an equilibrium notion for constitutions, called self-stability, whereby under the rules of a self-stable constitution, the society would not vote to change the constitution. We argue that only self-stable constitutions will endure. We prove that self-stable constitutions always exist, but that most constitutions (even very prominent ones) may not be self-stable for some societies. We show that constitutions in which the voting rule used to amend the constitution is the same as the voting rule used for ordinary business are dangerously simplistic, and that there are (many) societies for which no such constitution is self-stable. We conclude with a characterization of the set of self-stable constitutions that use majority rule for ordinary business.
Abstract:
The division problem consists of allocating an amount of a perfectly divisible good among a group of n agents with single-peaked preferences. A rule maps preference profiles into n shares of the amount to be allocated. A rule is bribe-proof if no group of agents can compensate another agent to misrepresent his preference and, after an appropriate redistribution of their shares, each obtain a strictly preferred share. We characterize all bribe-proof rules as the class of efficient, strategy-proof, and weak replacement monotonic rules. In addition, we identify the functional form of all bribe-proof and tops-only rules.
Abstract:
We study the assignment of indivisible objects with quotas (houses, jobs, or offices) to a set of agents (students, job applicants, or professors). Each agent receives at most one object and monetary compensations are not possible. We characterize efficient priority rules by efficiency, strategy-proofness, and renegotiation-proofness. Such a rule respects an acyclical priority structure and the allocations can be determined using the deferred acceptance algorithm.
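The deferred acceptance computation mentioned in the last sentence can be sketched in a few lines. The agent-proposing variant below, with made-up agents, objects, priorities, and quotas, illustrates the algorithm in general; it is not the paper's characterization argument.

```python
def deferred_acceptance(prefs, priority, quota):
    """Agent-proposing deferred acceptance with object quotas.

    prefs: agent -> list of objects, most preferred first
    priority: object -> list of agents, highest priority first
    quota: object -> number of copies of the object
    Returns agent -> assigned object (or None if unmatched).
    """
    next_choice = {a: 0 for a in prefs}   # index of next object to propose to
    held = {o: [] for o in priority}      # tentatively accepted proposers
    free = list(prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(prefs[a]):
            continue                      # agent exhausted her list
        o = prefs[a][next_choice[a]]
        next_choice[a] += 1
        held[o].append(a)
        if len(held[o]) > quota[o]:
            held[o].sort(key=priority[o].index)
            free.append(held[o].pop())    # reject the lowest-priority proposer
    assignment = {a: None for a in prefs}
    for o, agents in held.items():
        for a in agents:
            assignment[a] = o
    return assignment

# Hypothetical instance: eve has top priority at x, ann at y.
prefs = {'ann': ['x', 'y'], 'bob': ['x', 'y'], 'eve': ['x']}
priority = {'x': ['eve', 'ann', 'bob'], 'y': ['ann', 'bob', 'eve']}
quota = {'x': 1, 'y': 1}
match = deferred_acceptance(prefs, priority, quota)
print(match)  # {'ann': 'y', 'bob': None, 'eve': 'x'}
```

With an acyclical priority structure, as the paper shows, this procedure produces the efficient priority rule's allocation.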
Abstract:
We consider the following allocation problem: A fixed number of public facilities must be located on a line. Society is composed of $N$ agents, who must be allocated to one and only one of these facilities. Agents have single peaked preferences over the possible location of the facilities they are assigned to, and do not care about the location of the rest of facilities. There is no congestion. In this context, we observe that if a public decision is a Condorcet winner, then it satisfies nice properties of internal and external stability. Though in many contexts and for some preference profiles there may be no Condorcet winners, we study the extent to which stability can be made compatible with the requirement of choosing Condorcet winners whenever they exist.
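A Condorcet winner in this setting can be found by brute force. The sketch below assumes distance-based single-peaked preferences over a single facility's location (a simplification of the paper's multi-facility setting), with hypothetical ideal points; with an odd number of such voters the median peak wins every pairwise vote.

```python
def condorcet_winner(locations, peaks):
    """Return a location that beats every other in pairwise majority
    voting, or None if no Condorcet winner exists. Each voter prefers
    the location closer to her peak (distance-based single-peakedness).
    """
    for x in locations:
        if all(sum(abs(x - p) < abs(y - p) for p in peaks) >
               sum(abs(y - p) < abs(x - p) for p in peaks)
               for y in locations if y != x):
            return x
    return None

peaks = [1, 4, 6, 6, 9]                    # hypothetical ideal points
print(condorcet_winner(range(11), peaks))  # 6, the median peak
```

With an even number of voters and opposed peaks, pairwise votes can tie everywhere and no Condorcet winner exists, which is the case the paper's stability analysis must handle.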
Abstract:
The division problem consists of allocating an amount M of a perfectly divisible good among a group of n agents. Sprumont (1991) showed that if agents have single-peaked preferences over their shares, the uniform rule is the unique strategy-proof, efficient, and anonymous rule. Ching and Serizawa (1998) extended this result by showing that the set of single-plateaued preferences is the largest domain, for all possible values of M, admitting a rule (the extended uniform rule) satisfying strategy-proofness, efficiency and symmetry. We identify, for each M and n, a maximal domain of preferences under which the extended uniform rule also satisfies the properties of strategy-proofness, efficiency, continuity, and "tops-onlyness". These domains (called weakly single-plateaued) are strictly larger than the set of single-plateaued preferences. However, their intersection, when M varies from zero to infinity, coincides with the set of single-plateaued preferences.
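Sprumont's uniform rule for single-peaked preferences (the rule the extended uniform rule generalizes) can be sketched with a simple bisection on the rationing level. The peaks and the amount M below are made-up numbers, and bisection is just one convenient way to solve for the level.

```python
def uniform_rule(peaks, M, iters=100):
    """Sprumont's uniform rule, computed by bisection.

    If total demand sum(peaks) >= M, agent i gets min(peak_i, lam);
    otherwise agent i gets max(peak_i, lam); in both cases lam is
    chosen so that the shares sum exactly to M.
    """
    clip = min if sum(peaks) >= M else max
    lo, hi = 0.0, max(max(peaks), M)
    for _ in range(iters):
        lam = (lo + hi) / 2
        if sum(clip(p, lam) for p in peaks) > M:
            hi = lam          # total is increasing in lam, so shrink it
        else:
            lo = lam
    return [clip(p, lam) for p in peaks]

# Demand 2 + 4 + 10 = 16 exceeds M = 9, so large demands are rationed:
print([round(s, 3) for s in uniform_rule([2.0, 4.0, 10.0], 9.0)])  # [2.0, 3.5, 3.5]
```

Agents whose peaks fall below the rationing level receive exactly their peak, which is what makes truthful reporting a dominant strategy under the rule.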
Abstract:
The aim of this article is to analyse those situations in which learning and socialisation take place within the context of the Common Foreign and Security Policy (CFSP), in particular, at the level of experts in the Council Working Groups. Learning can explain the institutional development of CFSP and changes in the foreign policies of the Member States. Some scope conditions for learning and channels of institutionalisation are identified. Socialisation, resulting from learning within a group, is perceived as a strategic action by reflective actors. National diplomats, once they arrive in Brussels, learn the new code of conduct of their Working Groups. They are embedded in two environments and faced with two logics: the European one in the Council and the national one in the Ministries of Foreign Affairs (MFA). The empirical evidence supports the argument that neither rational nor sociological approaches alone can account for these processes.
Abstract:
Scandals of selective reporting of clinical trial results by pharmaceutical firms have underlined the need for more transparency in clinical trials. We provide a theoretical framework which reproduces incentives for selective reporting and yields three key implications concerning regulation. First, a compulsory clinical trial registry complemented by a voluntary clinical trial results database can implement full transparency (the existence of all trials as well as their results is known). Second, full transparency comes at a price. It has a deterrence effect on the incentives to conduct clinical trials, as it reduces the firms' gains from trials. Third, in principle, a voluntary clinical trial results database without a compulsory registry is a superior regulatory tool; but we provide some qualified support for additional compulsory registries when medical decision-makers cannot correctly anticipate the drug companies' decisions whether to conduct trials.
Keywords: pharmaceutical firms, strategic information transmission, clinical trials, registries, results databases, scientific knowledge. JEL classification: D72, I18, L15