968 results for incomplete information
Abstract:
As a sequel to a paper that dealt with the analysis of two-way quantitative data in large germplasm collections, this paper presents analytical methods appropriate for two-way data matrices consisting of mixed data types, namely, ordered multicategory and quantitative data types. While various pattern analysis techniques have been identified as suitable for analysis of the mixed data types which occur in germplasm collections, the clustering and ordination methods used often cannot deal explicitly with the computational consequences of large data sets (i.e. greater than 5000 accessions) with incomplete information. However, it is shown that the ordination technique of principal component analysis and the mixture maximum likelihood method of clustering can be employed to achieve such analyses. Germplasm evaluation data for 11436 accessions of groundnut (Arachis hypogaea L.) from the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), Andhra Pradesh, India were examined. Data for nine quantitative descriptors measured in the post-rainy season and five ordered multicategory descriptors were used. Pattern analysis results generally indicated that the accessions could be distinguished into four regions along the continuum of growth habit (or plant erectness). Interpretation of accession membership in these regions was found to be consistent with taxonomic information, such as subspecies. Each growth habit region contained accessions from three of the most common groundnut botanical varieties. This implies that within each of the habit types there is the full range of expression for the other descriptors used in the analysis. Using these types of insights, the patterns of variability in germplasm collections can provide scientists with valuable information for their plant improvement programs.
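As an illustration of the kind of analysis this abstract describes, here is a minimal sketch (not the paper's exact pipeline): impute missing descriptor values, ordinate accessions by principal components, then cluster the component scores with a maximum-likelihood Gaussian mixture. The descriptor counts, the 5% missingness rate and the four-group choice are assumptions for the example.

```python
# Minimal sketch (not the paper's exact pipeline): ordination by principal
# components plus mixture-model clustering of incomplete accession data.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(11436, 14))          # 9 quantitative + 5 ordinal descriptors, coded numerically
X[rng.random(X.shape) < 0.05] = np.nan    # incomplete information (assumed 5% missing)

X_imputed = SimpleImputer(strategy="mean").fit_transform(X)   # handle missing values
X_scaled = StandardScaler().fit_transform(X_imputed)          # put descriptors on a common scale

scores = PCA(n_components=3).fit_transform(X_scaled)                          # ordination
groups = GaussianMixture(n_components=4, random_state=0).fit_predict(scores)  # mixture ML clustering
print(np.bincount(groups))                # accession counts per (growth-habit-like) group
```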
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and make a decision on which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework, Bayesian in nature, through which these predictive probabilities can be obtained. As an illustrative example, we apply the framework to the assessment of performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
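A minimal Beta-Binomial sketch of the idea of reporting posterior predictive probabilities rather than raw test accuracy; the uniform prior and the success/trial counts below are illustrative, and the paper's framework may be richer.

```python
# Minimal Beta-Binomial sketch of a posterior predictive "probability that the
# next detection is correct"; the paper's actual framework may differ in detail.
def posterior_predictive_correct(successes: int, trials: int,
                                 alpha: float = 1.0, beta: float = 1.0) -> float:
    """P(next outcome correct | data) under a Beta(alpha, beta) prior."""
    return (alpha + successes) / (alpha + beta + trials)

# Two hypothetical algorithms with the same empirical accuracy (90%):
print(posterior_predictive_correct(9, 10))      # ~0.833 (little evidence)
print(posterior_predictive_correct(900, 1000))  # ~0.899 (much more evidence)
```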
Abstract:
When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journals on very incomplete information about how well the journals serve the authors’ purposes of informing about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals, providing more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate and service provided by the journal during the review and publication process. Data directly obtainable from the web, data that can be calculated from such data, data obtained from publishers and editors, and data obtained through author surveys are used in the method, which has been tested on three different sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (e.g. journal circulation, web downloads, acceptance rate). The calculation of some important parameters (for instance average time from submission to publication, or regional spread of authorship) can be done but requires quite a lot of work. It can be difficult to get reasonable response rates to author surveys. All in all, we believe that the method we propose, taking a “service to authors” perspective as a basis for benchmarking scientific journals, is useful and can provide information that is valuable to prospective authors in selected scientific disciplines.
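One of the parameters the abstract says can be calculated but "requires quite a lot of work" is the average time from submission to publication. A small sketch of how it could be computed; the per-article date columns are hypothetical placeholders.

```python
# Sketch of one benchmarking parameter: average submission-to-publication time.
# The column names and the three example articles are hypothetical.
import pandas as pd

articles = pd.DataFrame({
    "submitted": pd.to_datetime(["2010-01-05", "2010-02-10", "2010-03-01"]),
    "published": pd.to_datetime(["2010-07-20", "2010-09-01", "2010-08-15"]),
})
delay_days = (articles["published"] - articles["submitted"]).dt.days
print(f"mean submission-to-publication delay: {delay_days.mean():.0f} days")
```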
Abstract:
Financing trade between economic agents located in different countries is affected by many types of risks, resulting from incomplete information about the debtor, the problems of enforcing international contracts, or the prevalence of political and financial crises. Trade is important for economic development and the availability of trade finance is essential, especially for developing countries. Relatively few studies treat the topic of political risk, particularly in the context of international lending. This thesis explores new ground to identify links between political risk and international debt defaults. The core hypothesis of the study is that the default probability of debt increases with increasing political risk in the country of the borrower. The thesis consists of three essays that support the hypothesis from different angles of the credit evaluation process. The first essay takes the point of view of an international lender assessing the credit risk of a public borrower. The second investigates creditworthiness assessment of companies. The obtained results are substantiated in the third essay, which deals with an extensive political risk survey among finance professionals in developing countries. The financial instruments of core interest are export credit guaranteed debts initiated between the Export Credit Agency of Finland and buyers in 145 countries between 1975 and 2006. Default events of the foreign credit counterparts are conditioned on country-specific macroeconomic variables, corporate-specific accounting information, as well as political risk indicators from various international sources. Essay 1 examines debt issued to government-controlled institutions and conditions public default events on traditional macroeconomic fundamentals, in addition to selected political and institutional risk factors. Confirming previous research, the study finds country indebtedness and the GDP growth rate to be significant indicators of public default. Further, it is shown that public defaults respond to various political risk factors. However, the impact of the risk varies between countries at different stages of economic development. Essay 2 proceeds by investigating political risk factors as conceivable drivers of corporate default and uses traditional accounting variables together with new political risk indicators in the credit evaluation of private debtors. The study finds links between corporate default and leverage, as well as between corporate default and the general investment climate and measures of conflict in the debtor country. Essay 3 concludes the thesis by offering survey evidence on the impact of political risk on debt default, as perceived and experienced by 103 finance professionals in 38 developing countries. Taken together, the results of the thesis suggest that various forms of political risk are associated with international debt defaults and continue to pose great concerns for both international creditors and borrowers in developing countries. The study provides new insights into the importance of variable selection in country risk analysis, and shows how political risk is actually perceived and experienced in the riskier, often lower-income countries of the global economy.
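A hedged sketch of the kind of conditioning the essays describe (not the thesis's actual specification): a logistic model of debt default on indebtedness, GDP growth and a political-risk index. The variable names, the synthetic data and the coefficients are made up for illustration.

```python
# Illustrative sketch only (not the thesis's estimated model): conditioning a
# default indicator on macroeconomic and political-risk covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
indebtedness   = rng.normal(size=n)    # e.g. external debt / GDP (standardised, hypothetical)
gdp_growth     = rng.normal(size=n)
political_risk = rng.normal(size=n)    # higher = riskier (hypothetical index)

# Synthetic defaults generated so that default becomes more likely with risk:
logit = -1.0 + 0.8 * indebtedness - 0.6 * gdp_growth + 0.9 * political_risk
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([indebtedness, gdp_growth, political_risk])
model = LogisticRegression().fit(X, default)
print(dict(zip(["indebtedness", "gdp_growth", "political_risk"],
               model.coef_[0].round(2))))
```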
Abstract:
We consider the problem of Probably Approximately Correct (PAC) learning of a binary classifier from noisy labeled examples acquired from multiple annotators (each characterized by a respective classification noise rate). First, we consider the complete information scenario, where the learner knows the noise rates of all the annotators. For this scenario, we derive a sample complexity bound for the Minimum Disagreement Algorithm (MDA) on the number of labeled examples to be obtained from each annotator. Next, we consider the incomplete information scenario, where each annotator is strategic and holds the respective noise rate as private information. For this scenario, we design a cost-optimal procurement auction mechanism along the lines of Myerson's optimal auction design framework in a non-trivial manner. This mechanism satisfies the incentive compatibility property, thereby facilitating the learner to elicit the true noise rates of all the annotators.
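For the complete information case, a textbook-style (Angluin-Laird flavour) sample-size calculation for minimum-disagreement learning under classification noise shows why the learner cares so much about noise rates; the paper's multi-annotator bound is more refined, so this is only an order-of-magnitude illustration.

```python
# Textbook-style sample-complexity calculation for minimum-disagreement learning
# with classification noise rate eta < 1/2 (finite hypothesis class). This is a
# standard bound, not necessarily the one derived in the paper.
import math

def noisy_pac_sample_size(eps: float, delta: float, hyp_count: int, eta: float) -> int:
    """Examples sufficient for error <= eps with prob. >= 1 - delta."""
    return math.ceil(2.0 / (eps ** 2 * (1 - 2 * eta) ** 2)
                     * math.log(2 * hyp_count / delta))

for eta in (0.0, 0.2, 0.4):
    print(eta, noisy_pac_sample_size(eps=0.05, delta=0.05, hyp_count=1000, eta=eta))
# The required sample size blows up as an annotator's noise rate approaches 1/2,
# which is why eliciting true noise rates matters in the incomplete information case.
```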
Abstract:
We generalise and extend the work of Iñarra and Laruelle (2011) by studying two-person symmetric evolutionary games with two strategies, a heterogeneous population with two possible types of individuals, and incomplete information. Comparing such games with their classic homogeneous, complete-information version found in the literature, we show that for the class of anti-coordination games the only evolutionarily stable strategy vanishes. Instead, we find infinitely many neutrally stable strategies. We also model the evolutionary process using two different replicator dynamics setups, each with a different inheritance rule, and we show that both lead to the same results with respect to stability.
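A minimal replicator-dynamics sketch for a symmetric two-strategy anti-coordination game with a homogeneous population (the classic benchmark the paper compares against, not its heterogeneous-type extension); the payoff matrix is illustrative.

```python
# Replicator dynamics for a symmetric 2-strategy anti-coordination game
# (homogeneous-population benchmark; the paper's heterogeneous-type dynamics differ).
import numpy as np

A = np.array([[0.0, 3.0],
              [1.0, 0.0]])           # anti-coordination: best reply is the other strategy

def replicator_step(x: float, dt: float = 0.01) -> float:
    """x = population share playing strategy 0; dx/dt = x(1-x)(f0 - f1)."""
    pop = np.array([x, 1.0 - x])
    f0, f1 = A @ pop                 # average payoffs of the two strategies
    return x + dt * x * (1.0 - x) * (f0 - f1)

x = 0.9
for _ in range(5000):
    x = replicator_step(x)
print(round(x, 3))   # converges to the mixed ESS x* = 3/4 of this payoff matrix
```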
Abstract:
We assume that 2x2 matrix games are publicly known and that players perceive a dichotomous characteristic in their opponents, which defines two types for each player. In turn, each type has beliefs concerning her opponent's types, and payoffs are assumed to be type-independent. We analyze whether the mere possibility of different types playing different strategies generates discriminatory equilibria. Given a specific information structure, we find that in equilibrium a player discriminates between her types if and only if her opponent does so. We also find that for dominance-solvable 2x2 games no discriminatory equilibrium exists, while under different conditions of concordance between players' beliefs discrimination appears for coordination games and for competitive games. A complete characterization of the set of Bayesian equilibria is provided.
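A brute-force sketch of the kind of equilibrium analysis described above: enumerate type-contingent pure strategies of a publicly known 2x2 game with type-independent payoffs and check interim best responses. The coordination payoffs and the belief structure are assumptions for the example, not the paper's.

```python
# Brute-force pure-strategy Bayesian equilibria of a 2x2 game with two types per
# player and type-independent payoffs; game and beliefs are illustrative only.
from itertools import product
import numpy as np

# Payoffs indexed [own action, opponent's action]; a symmetric coordination game.
U_row = np.array([[2, 0],
                  [0, 1]])
U_col = np.array([[2, 0],
                  [0, 1]])
# belief[t] = probability a player of type t assigns to the opponent being type 0.
belief = {0: 0.8, 1: 0.2}

def interim_payoff(payoff, action, opp_strategy, own_type):
    p = belief[own_type]
    return p * payoff[action, opp_strategy[0]] + (1 - p) * payoff[action, opp_strategy[1]]

def best_replying(payoff, own_strategy, opp_strategy):
    return all(
        interim_payoff(payoff, own_strategy[t], opp_strategy, t)
        >= max(interim_payoff(payoff, a, opp_strategy, t) for a in (0, 1))
        for t in (0, 1)
    )

strategies = list(product((0, 1), repeat=2))     # (action of type 0, action of type 1)
equilibria = [(r, c) for r in strategies for c in strategies
              if best_replying(U_row, r, c) and best_replying(U_col, c, r)]
print(equilibria)   # discriminatory equilibria are those where a strategy plays different actions per type
```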
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first one is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution system, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focuses on achieving this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the decomposition of the system-level objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
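A minimal sketch of this design philosophy, assuming a toy two-resource congestion game (an exact potential game): once the local objectives form a potential game, a simple asynchronous best-response learning rule, run independently by each agent, converges to a pure Nash equilibrium of the designed game.

```python
# Minimal potential-game sketch: asynchronous best-response dynamics in a
# two-resource congestion game; the game and parameters are illustrative.
import random

N = 11
choice = [0] * N                      # every agent starts on resource 0

def cost(agent, resource, choice):
    """Congestion cost: number of agents (including this one) on the resource."""
    others = sum(1 for j, r in enumerate(choice) if r == resource and j != agent)
    return others + 1

random.seed(0)
for _ in range(200):                  # asynchronous revision opportunities
    i = random.randrange(N)
    choice[i] = min((0, 1), key=lambda r: cost(i, r, choice))   # local best response

loads = [choice.count(0), choice.count(1)]
print(loads)   # settles at a balanced split such as [6, 5], a pure Nash equilibrium
```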
Abstract:
This thesis consists of three essays in the areas of political economy and game theory, unified by their focus on the effects of pre-play communication on equilibrium outcomes.
Communication is fundamental to elections. Chapter 2 extends canonical voter turnout models, where citizens, divided into two competing parties, choose between costly voting and abstaining, to include any form of communication, and characterizes the resulting set of Aumann's correlated equilibria. In contrast to previous research, high-turnout equilibria exist in large electorates and uncertain environments. This difference arises because communication can coordinate behavior in such a way that citizens find it incentive compatible to follow their correlated signals to vote more. The equilibria have expected turnout of at least twice the size of the minority for a wide range of positive voting costs.
In Chapter 3 I introduce a new equilibrium concept, called subcorrelated equilibrium, which fills the gap between Nash and correlated equilibrium, extending the latter to multiple mediators. Subcommunication equilibrium similarly extends communication equilibrium for incomplete information games. I explore the properties of these solutions and establish an equivalence between a subset of subcommunication equilibria and Myerson's quasi-principals' equilibria. I characterize an upper bound on expected turnout supported by subcorrelated equilibrium in the turnout game.
Chapter 4, co-authored with Thomas Palfrey, reports a new study of the effect of communication on voter turnout using a laboratory experiment. Before voting occurs, subjects may engage in various kinds of pre-play communication through computers. We study three communication treatments: No Communication, a control; Public Communication, where voters exchange public messages with all other voters; and Party Communication, where messages are exchanged only within one's own party. Our results point to a strong interaction effect between the form of communication and the voting cost. With a low voting cost, party communication increases turnout, while public communication decreases turnout. The data are consistent with correlated equilibrium play. With a high voting cost, public communication increases turnout. With communication, we find essentially no support for the standard Nash equilibrium turnout predictions.
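As a pointer to the solution concept running through these chapters (Aumann's correlated equilibrium), a small linear-programming sketch that computes a correlated equilibrium of a 2x2 game (the game of Chicken with illustrative payoffs, not the turnout game studied in the thesis), maximising expected total payoff subject to the obedience constraints.

```python
# LP sketch: a correlated equilibrium of a 2x2 game (Chicken, illustrative payoffs)
# that maximises expected total payoff subject to the obedience constraints.
import numpy as np
from scipy.optimize import linprog

U1 = np.array([[0, 7],
               [2, 6]])      # row player: actions 0 = Dare, 1 = Chicken
U2 = U1.T                    # column player (symmetric game)

A_ub, b_ub = [], []
for a in range(2):           # obedience constraints for the row player
    for a_dev in range(2):
        if a_dev == a:
            continue
        row = np.zeros(4)    # distribution p(a_row, a_col), flattened row-major
        for b in range(2):
            row[2 * a + b] = U1[a_dev, b] - U1[a, b]
        A_ub.append(row)
        b_ub.append(0.0)
for b in range(2):           # obedience constraints for the column player
    for b_dev in range(2):
        if b_dev == b:
            continue
        row = np.zeros(4)
        for a in range(2):
            row[2 * a + b] = U2[a, b_dev] - U2[a, b]
        A_ub.append(row)
        b_ub.append(0.0)

c = -(U1 + U2).flatten()     # maximise expected total payoff
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[np.ones(4)], b_eq=[1.0], bounds=[(0, 1)] * 4)
print(res.x.reshape(2, 2).round(3))   # a correlated-equilibrium distribution over action profiles
```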
Abstract:
This is a thesis centred on the strategies voters employ to process political information, in the context of the 2006 Brazilian presidential campaign. In this work we propose a statistical model of political information processing, built on contributions from studies in the social sciences, economics, cognitive psychology and communication, and, above all, on evidence drawn from our research design. The design combined qualitative and quantitative methods with an analysis of the rhetorical strategies employed by candidates and political parties in the free electoral advertising slots (Horário Gratuito de Propaganda Eleitoral, HGPE), the dynamic element of our study, since it synthesises the information flows in the environment of political campaigns. This set of methodological approaches was used in a case study of voters in Belo Horizonte, embedded in the complex informational environment of the presidential campaigns. With incomplete information, voters had to choose whom to believe, coping with uncertainty about the outcome of the election and about the future behaviour of the actors, aware that campaign rhetoric was oriented toward persuasion. Our work sought to map the strategies voters employ in selecting which debate topics to attend to and in processing the new political information acquired through multiple interactions over the course of the campaign. This complex task amounted to choosing by whom to be persuaded. Drawing on the empirical evidence, we sought to address several concerns of this field of knowledge, among them: 1) Among the many topics raised in the contest between parties and candidates, which ones does the individual choose to attend to and believe, and why? 2) Which variables mediate the interaction with new information, and what is their weight in explaining the voting decision? 3) Do the priorities of the voter's political agenda change over the course of the campaign? 4) Do voters broaden their general repertoire of political information? 5) Do perceptions of government performance and of the priority topics on the voter's agenda change over the course of the campaign?
Abstract:
This paper introduces the meaning and manifestations of information incompleteness in relational databases, and discusses the possibility and inevitability of different categories of incompleteness coexisting in a relational database, introducing a mixed class of incomplete information into the relation schema. In order to represent and handle this mixed class of incomplete information in relational databases, the paper concludes by presenting a simplified extended relational model.
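A generic illustration (not the paper's specific extended relational model) of how an attribute in a relation schema might carry mixed kinds of incomplete information, with three-valued evaluation of a selection condition; the value kinds and the "maybe" semantics are assumptions for the example.

```python
# Generic illustration of mixed incomplete information in a relation attribute:
# known, unknown (applicable but missing), inapplicable, and partial (disjunctive)
# values, with three-valued evaluation of an equality selection.
from dataclasses import dataclass

@dataclass
class Value:
    kind: str                 # "known" | "unknown" | "inapplicable" | "partial"
    candidates: tuple = ()    # the value itself if known, the possible values if partial

def equals(v: Value, constant) -> str:
    """Three-valued comparison of an attribute value with a constant."""
    if v.kind == "known":
        return "true" if v.candidates[0] == constant else "false"
    if v.kind == "partial":
        if constant not in v.candidates:
            return "false"
        return "true" if len(set(v.candidates)) == 1 else "maybe"
    return "maybe" if v.kind == "unknown" else "false"    # inapplicable never matches

salary = Value("partial", (3000, 3500))     # value known only up to two candidates
print(equals(salary, 3500))                 # 'maybe' -- the tuple may satisfy the selection
print(equals(Value("unknown"), 3500))       # 'maybe' -- applicable but missing
print(equals(Value("inapplicable"), 3500))  # 'false' -- the attribute does not apply
```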
Abstract:
Stochastic reservoir modeling is a technique used in reservoir description. Through this technique, multiple data sources with different scales can be integrated into the reservoir model, and its uncertainty can be conveyed to researchers and supervisors. Because its models are numerical, its scales are adjustable, it honors known information and data, and it conveys the uncertainty in its models, stochastic reservoir modeling provides a mathematical framework, or platform, on which researchers can integrate multiple data sources and information of different scales into their prediction models. As a relatively new method, stochastic reservoir modeling is on the upswing. Based on related work, this paper, starting with the Markov property in reservoirs, illustrates how to construct spatial models for categorical variables and continuous variables by means of Markov random fields. In order to explore reservoir properties, researchers should study the properties of the rocks embedded in reservoirs. Apart from laboratory methods, geophysical measurements and their subsequent interpretation are the main sources of the information and data used in petroleum exploration and exploitation. Building a model for flow simulation from incomplete information amounts to predicting the spatial distributions of the different reservoir variables. Considering data sources, digital extent and methods, reservoir modeling can be divided into four categories: methods based on reservoir sedimentology, reservoir seismic prediction, kriging, and stochastic reservoir modeling. The application of Markov chain models to the simulation of sedimentary strata is introduced in the third part of the paper. The concept of a Markov chain model, the N-step transition probability matrix, the stationary distribution, the estimation of the transition probability matrix, tests of the Markov property, two ways of organizing sections (based on equal intervals and based on rock facies), the embedded Markov matrix, the semi-Markov chain model, the hidden Markov chain model, etc., are presented in this part. Building on the 1-D Markov chain model, the conditional 1-D Markov chain model is discussed in the fourth part. By extending the 1-D Markov chain model to 2-D and 3-D settings, conditional 2-D and 3-D Markov chain models are presented. This part also discusses the estimation of vertical transition probabilities, lateral transition probabilities and the initialization of the top boundary. Corresponding numerical examples are used to illustrate or verify the related discussions. The fifth part, building on the fourth part and on the application of MRFs in image analysis, discusses an MRF-based method for simulating the spatial distribution of categorical reservoir variables. In this part, the probability of a particular categorical-variable body, the definition of an energy function for a categorical-variable body treated as a Markov random field, the Strauss model, and the estimation of the components of the energy function are presented. Corresponding numerical examples are used to illustrate or verify the related discussions. For the simulation of the spatial distribution of continuous reservoir variables, the sixth part mainly explores two methods. The first is a purely GMRF-based method; related content includes the GMRF model and its neighborhood, parameter estimation, and the MCMC iteration method, and a numerical example illustrates the corresponding method. The second is a two-stage model method.
Based on the results of the categorical-variable distribution simulation, this method takes a GMRF as the prior distribution for the continuous variables and, using the relationship between categorical variables such as rock facies and continuous variables such as porosity, permeability and fluid saturation, produces a series of stochastic images of the spatial distribution of the continuous variables. Integrating multiple data sources into the reservoir model is one of the merits of stochastic reservoir modeling. After discussing how to model the spatial distributions of categorical and continuous reservoir variables, the paper explores how to combine conceptual depositional models, well logs, cores, seismic attributes and production history.
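Two of the quantities named in the abstract, the estimated transition probability matrix of a vertical facies succession and its stationary distribution, can be illustrated with a short sketch; the facies sequence below is invented for the example.

```python
# Estimate a one-step facies transition probability matrix from a vertical
# succession and compute its stationary distribution; the sequence is made up.
import numpy as np

sequence = list("AABBBCABBCCABBBACC")        # vertical succession of three facies
states = sorted(set(sequence))
idx = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for upper, lower in zip(sequence[:-1], sequence[1:]):
    counts[idx[upper], idx[lower]] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # one-step transition probabilities

eigvals, eigvecs = np.linalg.eig(P.T)            # stationary distribution: pi P = pi
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(np.round(P, 2))
print(np.round(pi, 2))
```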
Abstract:
We consider the Battle of the Sexes game with incomplete information and allow two-sided cheap talk before the game is played. We characterise the set of fully revealing symmetric cheap talk equilibria. The best fully revealing symmetric cheap talk equilibrium, when it exists, has a desirable characteristic: when the players' types are different, it fully coordinates on the ex-post efficient pure Nash equilibrium. We also analyse the mediated communication equilibria of the game. We find the range of the prior for which this desirable equilibrium exists under unmediated and mediated communication processes.
Abstract:
Artifact removal from physiological signals is an essential component of the biosignal processing pipeline. The need for powerful and robust methods for this process has become particularly acute as healthcare technology deployment undergoes transition from the current hospital-centric setting toward a wearable and ubiquitous monitoring environment. Currently, determining the relative efficacy and performance of the multiple artifact removal techniques available on real-world data can be problematic, due to incomplete information on the uncorrupted desired signal. The majority of techniques are presently evaluated using simulated data, and therefore the quality of the conclusions is contingent on the fidelity of the model used. Consequently, in the biomedical signal processing community, there is considerable focus on the generation and validation of appropriate signal models for use in artifact suppression. Most approaches rely on mathematical models which capture suitable approximations to the signal dynamics or underlying physiology and, therefore, introduce some uncertainty into subsequent predictions of algorithm performance. This paper describes a more empirical approach to modeling the desired signal, demonstrated for functional brain monitoring tasks, which allows the procurement of a ground-truth signal that is highly correlated with the true desired signal contaminated by artifacts. The availability of this ground truth, together with the corrupted signal, can then aid in determining the efficacy of selected artifact removal techniques. A number of commonly implemented artifact removal techniques were evaluated using the described methodology to validate the proposed novel test platform. © 2012 IEEE.
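A generic sketch of how the availability of a ground-truth signal permits direct scoring of an artifact-removal step, using signal-to-error ratio and correlation against the ground truth; the synthetic signals and the crude clipping "removal" step are stand-ins, not the paper's data or platform.

```python
# Generic scoring sketch: with a ground-truth signal available, an artifact-removal
# step can be scored directly. Signals and the removal step are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 2000)
ground_truth = np.sin(2 * np.pi * t)                  # clean desired signal
artifacts = (rng.random(t.size) < 0.01) * 4.0         # sparse large spikes
corrupted = ground_truth + artifacts + 0.1 * rng.normal(size=t.size)

cleaned = np.clip(corrupted, -1.5, 1.5)               # crude stand-in for a removal algorithm

def snr_db(reference, estimate):
    """Signal-to-error ratio of an estimate against the known reference, in dB."""
    error = estimate - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(error ** 2))

print(f"SNR before: {snr_db(ground_truth, corrupted):.1f} dB, "
      f"after: {snr_db(ground_truth, cleaned):.1f} dB, "
      f"correlation after: {np.corrcoef(ground_truth, cleaned)[0, 1]:.3f}")
```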
Abstract:
In a human-computer dialogue system, the dialogue strategy can range from very restrictive to highly flexible. Each specific dialogue style has its pros and cons, and a dialogue system needs to select the most appropriate style for a given user. During the course of interaction, the dialogue style can change based on the user's responses and the system's observation of the user. This allows a dialogue system to understand a user better and provide a more suitable mode of communication. Since measures of the quality of the user's interaction with the system can be incomplete and uncertain, frameworks for reasoning with uncertain and incomplete information can help the system make better decisions when it chooses a dialogue strategy. In this paper, we investigate how to select a dialogue strategy by aggregating the factors detected during the interaction with the user. For this purpose, we use probabilistic logic programming (PLP) to model probabilistic knowledge about how these factors will affect the degree of freedom of a dialogue. When a dialogue system needs to know which strategy is more suitable, an appropriate query can be executed against the PLP and a probabilistic solution with a degree of satisfaction is returned. The degree of satisfaction reveals how much the system can trust the probability attached to the solution.
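A hand-rolled stand-in for the PLP-based reasoning described above (the actual probabilistic logic programs, predicates and the paper's definition of the degree of satisfaction are not reproduced here): probabilistic rules whose conditions were observed during the interaction are combined by noisy-or to score a "use the flexible strategy" query.

```python
# Simplified stand-in for a PLP query: rules vote for a flexible dialogue
# strategy via noisy-or; rule names, probabilities and the "degree of
# satisfaction" below are illustrative assumptions, not the paper's model.
rules = [
    # (probability the rule holds, factor it depends on)
    (0.7, "user_is_expert"),
    (0.5, "few_misrecognitions"),
    (0.6, "user_takes_initiative"),
]

observed = {"user_is_expert": True, "few_misrecognitions": False, "user_takes_initiative": True}

def prob_flexible(rules, observed):
    """Noisy-or combination of the rules whose condition was observed."""
    p_none = 1.0
    for p, factor in rules:
        if observed.get(factor, False):
            p_none *= (1.0 - p)
    return 1.0 - p_none

p = prob_flexible(rules, observed)
degree_of_satisfaction = abs(p - 0.5) * 2   # illustrative confidence in the returned answer
print(round(p, 3), round(degree_of_satisfaction, 2))
```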