858 results for 080401 Coding and Information Theory
Abstract:
Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly assigned to learn about probabilistic reasoning from one of four computer-based lessons generated from a 2 (color coding/no color coding) by 2 (labeling/no labeling) between-subjects design. Learners added the labels or color coding at their own pace by clicking buttons in a computer-based lesson. Participants' eye movements were recorded while they viewed the lesson. Labeling was beneficial for learning, but color coding was not. In addition, labeling, but not color coding, increased attention to important information in the table and time spent with the lesson. Both labeling and color coding increased looks between the text and corresponding information in the table. The findings provide support for the multimedia principle, and they suggest that labeling enhances learning about probabilistic reasoning from text and tables.
Abstract:
Software as a service (SaaS) is a service model in which applications are accessible from various client devices through the internet. Several studies report possible factors driving the adoption of SaaS, but none has considered both the perception of SaaS features and the pressures existing in the organization's environment. We propose an integrated research model that combines process virtualization theory (PVT) and institutional theory (INT). PVT seeks to explain whether SaaS processes are suitable for migration into virtual environments via an information technology-based mechanism. INT seeks to explain the effects of the institutionalized environment on the structure and actions of the organization. The research makes three contributions. First, it addresses a gap in the SaaS adoption literature by studying the internal perception of the technical features of SaaS and the external coercive, normative, and mimetic pressures faced by an organization. Second, it empirically tests many of the propositions of PVT and INT in the SaaS context, thereby helping to determine how the theories operate in practice. Third, the integration of PVT and INT contributes to the information systems (IS) discipline by deepening the applicability and strengths of these theories.
Abstract:
The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique for creating two-stage operators that circumvents this difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined into a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design.
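The abstract does not give the algorithm itself, but a minimal sketch of the criterion it describes would score a candidate group of window pixels by the empirical mutual information between the group's joint value and the operator's output. Everything below is assumed for illustration: the binary training data (`windows`, `outputs`) are hypothetical, and the exhaustive search over groups stands in for whatever guided search the paper actually uses.

```python
# Sketch: rank pixel groups by empirical mutual information I(X_G; Y).
from collections import Counter
from itertools import combinations
import math

def mutual_information(windows, outputs, group):
    """Empirical I(X_group; Y) for binary windows and binary outputs."""
    n = len(windows)
    joint, gx, gy = Counter(), Counter(), Counter(outputs)
    for w, y in zip(windows, outputs):
        x = tuple(w[i] for i in group)
        joint[(x, y)] += 1
        gx[x] += 1
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((gx[x] / n) * (gy[y] / n)))
    return mi

def best_groups(windows, outputs, group_size, top_k):
    """Exhaustively score all groups of a given size; keep the top_k."""
    pixels = range(len(windows[0]))
    scored = [(mutual_information(windows, outputs, g), g)
              for g in combinations(pixels, group_size)]
    return sorted(scored, reverse=True)[:top_k]
```

Groups whose joint value carries high mutual information with the output are natural inputs for the first-level operators of a stacked design.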
Abstract:
This doctoral thesis is devoted to the study of financial instability and dynamics in monetary theory. It is shown that bank runs are eliminated at no cost in the standard model of banking theory when the population is not small. An extension is proposed in which aggregate uncertainty is more severe and the cost of financial stability is relevant. Finally, the optimality of transitions in the distribution of money is established for economies in which trading opportunities are scarce and heterogeneous. In particular, the optimality of inflation depends on the dynamic incentives provided by such transitions. Chapter 1 establishes the costless-stability result for large economies by studying the effects of population size on the Peck & Shell analysis of bank runs. In Chapter 2, the optimality of dynamics is studied in the Kiyotaki & Wright monetary model when society is able to implement an inflationary policy. Although it adopts a mechanism design approach, this chapter parallels the analysis of Sargent & Wallace (1981) by highlighting the effects of dynamic incentives on the interaction between monetary and fiscal policy. Chapter 3 returns to the theme of financial stability by quantifying the costs involved in the optimal design of a run-proof banking sector and by proposing an alternative informational structure that allows for insolvent banks. The first analysis shows that the optimal stability scheme exhibits high long-run interest rates, and the second shows that imperfect monitoring can lead to bank runs with insolvency.
Abstract:
In non-extensive statistical mechanics [14], it is meaningless to say that the entropy of a system is extensive (or not) without specifying a law of composition for its elements. In this theory, quantum correlations may be probed through quantum information processes. This article, which extends recent work [4], is a comparative study of the von Neumann and Tsallis entropies, with some implementations of the effect of entropy on quantum entanglement, an important resource for the transmission of quantum information. We consider two factorized (Fock number) states, which interact through a bilinear beam-splitter Hamiltonian with two input ports. The comparison shows that the Tsallis and von Neumann entropies behave differently depending on the reflectance of the beam splitter.
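As a concrete illustration of the kind of comparison described (a sketch under assumed inputs, not the article's actual computation): when a single photon and the vacuum, both Fock states, enter the two ports of a beam splitter with reflectance r, the reduced state of one output mode has eigenvalues {1 - r, r}, so both entropies can be evaluated and compared directly.

```python
# Sketch: von Neumann vs. Tsallis entropy of the reduced single-mode
# state after a |1>|0> input meets a beam splitter of reflectance r.
import numpy as np

def von_neumann(p):
    """S = -sum_i p_i ln p_i over the nonzero eigenvalues."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1); recovers S as q -> 1."""
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

for r in (0.1, 0.25, 0.5):
    eigs = np.array([1.0 - r, r])  # reduced-state spectrum
    print(f"r={r:.2f}  S={von_neumann(eigs):.4f}  S_2={tsallis(eigs, 2.0):.4f}")
```

Both entropies peak at r = 0.5, where the output modes are maximally entangled, but they vary at different rates with r, which is the kind of difference the article examines.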
Abstract:
This thesis presents several techniques designed to drive a swarm of robots in an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. Each theory, from its own point of view, exploits the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, which helps keep the environmental information detected by each agent up to date across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization (PSO) algorithm, which serves as the navigation system. Graph Theory is applied through Consensus, using the agreement protocol to maintain the units in a desired and controlled formation. This approach preserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
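A minimal sketch of the agreement protocol referred to above, under assumed notation: each unit i nudges its state toward those of its graph neighbours, x_i ← x_i − ε(Lx)_i, where L is the Laplacian of the communication topology. The ring graph, step size, and initial states below are illustrative only, not taken from the thesis.

```python
# Consensus sketch: repeated local averaging drives all states together.
import numpy as np

def consensus_step(x, adjacency, eps=0.1):
    """One agreement-protocol iteration: x <- x - eps * L @ x."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return x - eps * laplacian @ x

# Four units on a ring; scalar states converge to their common mean.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([0.0, 2.0, 5.0, 9.0])
for _ in range(100):
    x = consensus_step(x, A)
print(x)  # all entries close to the initial mean, 4.0
```

Adding a fixed per-unit offset to each state turns agreement into formation keeping, which is how Consensus can constrain the randomness of the PSO navigation layer.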
Abstract:
Chapter 1 studies how consumers' switching costs affect the pricing and profits of firms competing in two-sided markets, such as Apple and Google in the smartphone market. When two-sided markets are dynamic rather than merely static, I show that switching costs lower the first-period price if network externalities are strong, in contrast to what has been found in one-sided markets. By contrast, switching costs soften price competition in the initial period if network externalities are weak and consumers are more patient than the platforms. Moreover, an increase in switching costs on one side decreases the first-period price on the other side. Chapter 2 examines firms' incentives to invest in local and flexible resources when demand is uncertain and correlated. I find that the market power of a monopolist providing flexible resources distorts investment incentives, while competition mitigates these distortions. The extent of the improvement depends critically on demand correlation and the cost of capacity: under the social optimum and monopoly, the relationship between investment and correlation is positive if the flexible resource is cheap and negative if it is costly; under duopoly, the relationship is positive. The analysis also sheds light on policy discussions in markets such as cloud computing. Chapter 3 develops a theory of sequential investments in cybersecurity. The regulator can use safety standards and liability rules to increase security. I show that the joint use of an optimal standard and a full liability rule leads to underinvestment ex ante and overinvestment ex post. Switching to a partial liability rule instead can correct these inefficiencies. This suggests that to improve security, the regulator should encourage not only firms but also consumers to invest in security.
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades, or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and it cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it is shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time costs and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality, and the quality of the good is known only to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When repeated offers are allowed, however, both types of goods trade with probability one in equilibrium. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. Those findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
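For readers unfamiliar with partition functions, the toy example below (hypothetical numbers, and only the embedded coalitions needed for the point; a full specification would cover every coalition in every structure) shows what they record: the worth of a coalition S depends on how the outsiders are organized, which is exactly how externalities such as free riding enter.

```python
# Toy partition function for players {1, 2, 3} with hypothetical worths.
def structure(*coalitions):
    """A coalition structure: a frozenset of frozensets."""
    return frozenset(frozenset(c) for c in coalitions)

partition_function = {
    (frozenset({1, 2, 3}), structure({1, 2, 3})): 10.0,  # grand coalition
    (frozenset({1, 2}), structure({1, 2}, {3})): 5.0,
    (frozenset({3}), structure({1, 2}, {3})): 4.0,       # outsider benefits
    (frozenset({1}), structure({1}, {2}, {3})): 2.0,
    (frozenset({2}), structure({1}, {2}, {3})): 2.0,
    (frozenset({3}), structure({1}, {2}, {3})): 2.0,
}
# Player 3's worth rises from 2.0 to 4.0 when {1, 2} cooperate, without 3
# contributing anything: a positive externality that fuels free riding.
```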
Abstract:
A new proposal for secure communications is reported. The basis is the use of synchronized digital chaotic systems, sending the information signal added to an initial chaotic signal. The received signal is analyzed by another chaos generator located at the receiver and, by a Boolean logic function of the chaotic and received signals, the original information is recovered. One of the most important features of this system is that the bandwidth it requires remains the same with and without chaos.
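A minimal sketch of this style of scheme, with assumed details the abstract leaves open (a logistic-map bit generator standing in for the chaotic system, XOR as the Boolean function, and synchronization idealized as a shared seed):

```python
# Chaos-masked transmission sketch: identical generators at both ends.
def chaotic_bits(x0, n, r=3.99):
    """Threshold a logistic-map orbit x -> r*x*(1-x) into a bit stream."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def transmit(message_bits, seed=0.123456):
    key = chaotic_bits(seed, len(message_bits))
    return [m ^ k for m, k in zip(message_bits, key)]   # add chaos to signal

def receive(channel_bits, seed=0.123456):
    key = chaotic_bits(seed, len(channel_bits))         # synchronized generator
    return [c ^ k for c, k in zip(channel_bits, key)]   # Boolean recovery

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert receive(transmit(msg)) == msg
```

One masked bit is sent per message bit, which is consistent with the abstract's point that the required bandwidth is the same with and without chaos.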
Abstract:
A molecular model of poorly understood hydrophobic effects is heuristically developed using the methods of information theory. Because primitive hydrophobic effects can be tied to the probability of observing a molecular-sized cavity in the solvent, the probability distribution of the number of solvent centers in a cavity volume is modeled on the basis of the two moments available from the density and radial distribution of oxygen atoms in liquid water. The modeled distribution then yields the probability that no solvent centers are found in the cavity volume. This model is shown to account quantitatively for the central hydrophobic phenomena of cavity formation and association of inert gas solutes. The connection of information theory to statistical thermodynamics provides a basis for clarification of hydrophobic effects. The simplicity and flexibility of the approach suggest that it should permit applications to conformational equilibria of nonpolar solutes and hydrophobic residues in biopolymers.
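In symbols, a sketch of the construction the abstract describes (the notation is assumed here, not taken from the paper): the distribution p_n of the number of solvent centers n in the cavity volume v is modeled in maximum-entropy form subject to its first two moments, which are available from the density \rho and the oxygen radial distribution function g,

\[
  p_n \propto \exp(\lambda_1 n + \lambda_2 n^2), \qquad
  \sum_n n\, p_n = \rho v, \qquad
  \sum_n n(n-1)\, p_n = \rho^2 \int_v \int_v g(|\mathbf{r} - \mathbf{r}'|)\, d\mathbf{r}\, d\mathbf{r}',
\]

with \lambda_1, \lambda_2 the Lagrange multipliers fixed by the constraints. The work of forming the cavity then follows from the probability of finding it empty,

\[
  \Delta\mu^{\mathrm{ex}} = -k_B T \ln p_0 .
\]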
Abstract:
In this paper, we propose a novel filter for feature selection. The filter relies on the estimation of the mutual information between features and classes. We bypass the estimation of the probability density function with the aid of the entropic-graph approximation of the Rényi entropy and the subsequent approximation of the Shannon entropy. The complexity of this bypass does not depend on the number of dimensions but on the number of patterns/samples, and thus the curse of dimensionality is circumvented. We show that it is then possible to outperform a greedy algorithm based on the maximal-relevance minimal-redundancy criterion. We successfully test our method in the contexts of both image classification and microarray data classification.
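A sketch of the entropic-graph estimate underlying such a filter, in the standard minimal-spanning-tree form (assumed here; the additive constant beta, which does not depend on the data distribution, is omitted, and the Shannon entropy would be approximated by taking the Rényi order alpha toward 1):

```python
# MST-based Renyi entropy sketch: for n samples in R^d and alpha in (0,1),
# with gamma = d * (1 - alpha) and L_gamma the sum of MST edge lengths
# raised to gamma,
#   H_alpha ~ (log(L_gamma / n**alpha) - log(beta)) / (1 - alpha).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import distance_matrix

def renyi_entropy_mst(samples, alpha=0.5):
    """Entropic-graph Renyi entropy estimate, up to the beta constant."""
    n, d = samples.shape
    gamma = d * (1.0 - alpha)
    mst = minimum_spanning_tree(distance_matrix(samples, samples))
    L_gamma = np.sum(mst.data ** gamma)  # mst.data holds the n-1 edge lengths
    return np.log(L_gamma / n ** alpha) / (1.0 - alpha)

rng = np.random.default_rng(0)
tight = renyi_entropy_mst(rng.normal(0.0, 0.1, size=(500, 2)))
broad = renyi_entropy_mst(rng.normal(0.0, 1.0, size=(500, 2)))
print(tight < broad)  # True: more spread-out data has higher entropy
```

No density estimate appears anywhere, and the cost grows with the number of samples rather than exponentially with the dimension, which is the property the abstract exploits.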
Abstract:
The theory of deliberate practice (Ericsson, Krampe, & Tesch-Römer, 1993) is predicated on the concept that engagement in specific forms of practice is necessary for the attainment of expertise. The purpose of this paper was to examine the quantity and type of training performed by expert UE triathletes. Twenty-eight UE triathletes were stratified into expert, middle-of-the-pack, and back-of-the-pack groups based on previous finishing times. All participants provided detailed information regarding their involvement in sports in general and the three triathlon sports in particular. Results showed that experts performed more training than non-experts, but that the relationship between training and performance was not monotonic as suggested by Ericsson et al. Further, experts' training was designed so that periods of high training stress were followed by periods of low stress. However, early specialization was not a requirement for expertise. This work indicates that the theory of deliberate practice does not fully explain expertise development in UE triathlon.