869 results for Error-correcting codes (Information theory)


Relevance:

30.00%

Publisher:

Abstract:

We define an applicative theory of truth TPT which proves totality exactly for the polynomial time computable functions. TPT has natural and simple axioms, since nearly all its truth axioms are standard for truth theories over an applicative framework. The only exception is the axiom dealing with the word predicate: the truth predicate can only reflect elementhood in the words for terms that have smaller length than a given word. This makes it possible to achieve the theory's very low proof-theoretic strength. Truth induction can be allowed without any constraints. For these reasons, the system TPT has the high expressive power one expects from truth theories; it allows embeddings of feasible systems of explicit mathematics and bounded arithmetic. The proof that the theory TPT is feasible is not easy, since a standard realisation approach cannot be applied. For this reason we develop a new realisation approach whose realisation functions work on directed acyclic graphs. In this way, we can express and manipulate realisation information more efficiently.

Relevance:

30.00%

Publisher:

Abstract:

Information-centric networking (ICN) has been proposed to cope with the drawbacks of the Internet Protocol, namely scalability and security. The majority of research efforts in ICN have focused on routing and caching in wired networks, while little attention has been paid to optimizing the communication and caching efficiency in wireless networks. In this work, we study the application of Raptor codes to Named Data Networking (NDN), a popular ICN architecture, in order to minimize the number of transmitted messages and accelerate content retrieval times. We propose RC-NDN, an NDN-compatible Raptor-code architecture. In contrast to other coding-based NDN solutions that employ network codes, RC-NDN considers the security architectures inherent to NDN. Moreover, unlike existing network-coding-based solutions for NDN, RC-NDN does not require significant computational resources, which renders it appropriate for low-cost networks. We evaluate RC-NDN in scenarios with high mobility. Evaluations show that RC-NDN outperforms the original NDN significantly. RC-NDN is particularly efficient in dense environments, where retrieval times can be reduced by 83% and the number of Data transmissions by 84.5% compared to NDN.
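Raptor codes belong to the family of rateless (fountain) codes: the sender can generate an unbounded stream of encoded packets, and the receiver can decode as soon as it has collected slightly more packets than there are source blocks, regardless of which packets arrive. The paper's architecture is not reproduced here; the following sketch illustrates only the underlying fountain-code principle with a simplified LT-style code over integer blocks (real Raptor codes add an outer precode and a carefully tuned degree distribution; the uniform degree choice below is a simplification).

```python
import random

def lt_encode(blocks, n_packets, seed=1):
    """Generate rateless packets; each is the XOR of a random subset of source blocks."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, k)            # naive uniform degree; real LT codes
        idx = rng.sample(range(k), degree)    # draw from a soliton distribution
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append([set(idx), val])
    return packets

def lt_decode(packets, k):
    """Peeling decoder: resolve degree-1 packets, then subtract known blocks."""
    packets = [[set(idx), val] for idx, val in packets]
    decoded = [None] * k
    progress = True
    while progress:
        progress = False
        for p in packets:
            if len(p[0]) == 1:                # a degree-1 packet reveals one block
                i = p[0].pop()
                if decoded[i] is None:
                    decoded[i] = p[1]
                    progress = True
                for q in packets:             # peel the known block everywhere
                    if i in q[0]:
                        q[0].discard(i)
                        q[1] ^= decoded[i]
    return decoded

# Hand-built example over three blocks [5, 9, 12]:
pkts = [[{0}, 5], [{0, 1}, 5 ^ 9], [{1, 2}, 9 ^ 12]]
print(lt_decode(pkts, 3))                     # -> [5, 9, 12]
```

In an NDN-like setting the appeal is that any sufficiently large subset of coded Data packets completes the retrieval, so retransmissions need not target specific lost packets.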

Relevance:

30.00%

Publisher:

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
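On a common time grid with uniform weights, FPCA of discretized curves reduces to an SVD of the centered data matrix. The sketch below illustrates the overall workflow under strong simplifying assumptions: the synthetic one-parameter "realizations", the toy solvers, and the plain polynomial regression from proxy scores to exact scores are all stand-ins, not the paper's models.

```python
import numpy as np

def fpca(curves, n_comp):
    """Discretized functional PCA: SVD of the centered curves on a common grid."""
    mean = curves.mean(axis=0)
    _, _, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    basis = Vt[:n_comp]                      # leading principal-component "functions"
    return (curves - mean) @ basis.T, basis, mean

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
p = rng.normal(size=(40, 1))                 # stand-in for geostatistical realizations

# Learning set: run BOTH solvers for every realization in the learning set.
proxy = np.sin(2 * np.pi * t) * p            # cheap, biased solver
exact = proxy + 0.3 * p**2                   # "exact" solver with extra physics

scores_p, basis_p, mean_p = fpca(proxy, 1)
scores_e, basis_e, mean_e = fpca(exact, 2)

# Error model: regress exact scores on polynomial features of the proxy scores.
X = np.c_[scores_p, scores_p**2, np.ones_like(scores_p)]
coef, *_ = np.linalg.lstsq(X, scores_e, rcond=None)

def predict_exact(proxy_curve):
    """Predict the exact response from the proxy response alone."""
    s = (proxy_curve - mean_p) @ basis_p.T
    return np.c_[s, s**2, np.ones_like(s)] @ coef @ basis_e + mean_e

# For a new realization, only the proxy is run; the exact response is predicted.
p_new = 1.5
pred = predict_exact(np.sin(2 * np.pi * t) * p_new)[0]
reference = np.sin(2 * np.pi * t) * p_new + 0.3 * p_new**2
```

The error model lives entirely in the low-dimensional score space, which is what makes diagnostics of the learning set and of the proxy's fidelity tractable.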

Relevance:

30.00%

Publisher:

Abstract:

Many technological developments of the past two decades come with the promise of greater IT flexibility, i.e. greater capacity to adapt IT. These technologies are increasingly used to improve organizational routines that are not affected by large, hard-to-change IT such as ERP. Yet, most findings on the interaction of routines and IT stem from contexts where IT is hard to change. Our research explores how routines and IT co-evolve when IT is flexible. We review the literature on routines to suggest that IT may act as a boundary object that mediates the learning process unfolding between the ostensive and the performative aspect of the routine. Although prior work has concluded from such conceptualizations that IT stabilizes routines, we qualify that flexible IT can also stimulate change because it enables learning in short feedback cycles. However, we suggest that such change might not always materialize because it is contingent on governance choices and technical knowledge. We describe the case-study method used to explore how routines and flexible IT co-evolve and how governance and technical knowledge influence this process. We expect to contribute towards a stronger theory of routines and to develop recommendations for the effective implementation of flexible IT in loosely coupled routines.

Relevance:

30.00%

Publisher:

Abstract:

The International GNSS Service (IGS) issues four sets of so-called ultra-rapid products per day, which are based on the contributions of the IGS Analysis Centers. The traditional (“old”) ultra-rapid orbit and earth rotation parameters (ERP) solution of the Center for Orbit Determination in Europe (CODE) was based on the output of three consecutive 3-day long-arc rapid solutions. Information from the IERS Bulletin A was required to generate the predicted part of the old CODE ultra-rapid product. The current (“new”) product, activated in November 2013, is based on the output of exactly one multi-day solution. A priori information from the IERS Bulletin A is no longer required for generating and predicting the orbits and ERPs. This article discusses the transition from the old to the new CODE ultra-rapid orbit and ERP products and the associated improvement in reliability and performance. All solutions used in this article were generated with the development version of the Bernese GNSS Software. The package was slightly extended to meet the needs of the new CODE ultra-rapid generation.

Relevance:

30.00%

Publisher:

Abstract:

Information systems (IS) outsourcing projects often fail to achieve initial goals. To avoid project failure, managers need to design formal controls that meet the specific contextual demands of the project. However, the dynamic and uncertain nature of IS outsourcing projects makes it difficult to design such specific formal controls at the outset of a project. It is hence crucial to translate high-level project goals into specific formal controls during the course of a project. This study seeks to understand the underlying patterns of such translation processes. Based on a comparative case study of four outsourced software development projects, we inductively develop a process model that consists of three unique patterns. The process model shows that the performance implications of emergent controls with higher specificity depend on differences in the translation process. Specific formal controls have positive implications for goal achievement if only the stakeholder context is adapted, but negative implications if tasks are unintentionally adapted in the translation process. In the latter case, projects incrementally drift away from their initial direction. Our findings help to better understand control dynamics in IS outsourcing projects. We contribute to a process-theoretic understanding of IS outsourcing governance and we derive implications for control theory and the IS project escalation literature.

Relevance:

30.00%

Publisher:

Abstract:

We regularize compact and non-compact Abelian Chern–Simons–Maxwell theories on a spatial lattice using the Hamiltonian formulation. We consider a doubled theory with gauge fields living on a lattice and its dual lattice. The Hilbert space of the theory is a product of local Hilbert spaces, each associated with a link and the corresponding dual link. The two electric field operators associated with the link-pair do not commute. In the non-compact case with gauge group R, each local Hilbert space is analogous to the one of a charged “particle” moving in the link-pair group space R^2 in a constant “magnetic” background field. In the compact case, the link-pair group space is a torus U(1)^2 threaded by k units of quantized “magnetic” flux, with k being the level of the Chern–Simons theory. The holonomies of the torus U(1)^2 give rise to two self-adjoint extension parameters, which form two non-dynamical background lattice gauge fields that explicitly break the manifest gauge symmetry from U(1) to Z(k). The local Hilbert space of a link-pair then decomposes into representations of a magnetic translation group. In the pure Chern–Simons limit of a large “photon” mass, this results in a Z(k)-symmetric variant of Kitaev’s toric code, self-adjointly extended by the two non-dynamical background lattice gauge fields. Electric charges on the original lattice and on the dual lattice obey mutually anyonic statistics with a statistics angle fixed by the level k. Non-Abelian U(k) Berry gauge fields that arise from the self-adjoint extension parameters may be interesting in the context of quantum information processing.
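The magnetic translation group mentioned above has a standard finite-dimensional realization by the k×k clock and shift matrices, which commute only up to a k-th root of unity. The following numerical check, an illustration rather than material from the paper, verifies this algebra for k = 3:

```python
import numpy as np

k = 3
omega = np.exp(2j * np.pi / k)               # primitive k-th root of unity

# Clock and shift matrices acting on a k-dimensional local Hilbert space.
Z = np.diag(omega ** np.arange(k))           # clock: Z|n> = omega**n |n>
X = np.roll(np.eye(k), 1, axis=0)            # shift: X|n> = |n+1 mod k>

# Magnetic translation algebra: the generators commute only up to a phase.
print(np.allclose(Z @ X, omega * (X @ Z)))   # -> True

# Both generators have order k.
print(np.allclose(np.linalg.matrix_power(X, k), np.eye(k)))  # -> True
```

The phase omega obstructing commutativity is exactly what forces the local Hilbert space to decompose into (projective) representations of the translation group rather than ordinary ones.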

Relevance:

30.00%

Publisher:

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have been intrigued for a long time by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories’ efforts without incurring costs. These free riding opportunities give rise to incentives to strategically improve one’s bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties.
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker’s quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker’s type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller.
Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model’s qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.

Relevance:

30.00%

Publisher:

Abstract:

A vast amount of temporal information is provided on the Web. Even though many facts expressed in documents are time-related, the temporal properties of Web presentations have not received much attention. In database research, temporal databases have become a mainstream topic in recent years. In Web documents, temporal data may exist as metadata in the header and as user-directed data in the body of a document. Whereas temporal data can easily be identified in the semi-structured metadata, it is more difficult to identify temporal data and determine its role in the body. We propose procedures for maintaining the temporal integrity of Web pages and outline different approaches to applying bitemporal data concepts to Web documents. In particular, we consider desirable functionalities of Web repositories and other Web-related tools that may support Webmasters in managing the temporal data of their Web documents. Some properties of a prototype environment are described.
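Bitemporal data attaches two time dimensions to every fact: valid time (when the fact holds in the real world) and transaction time (when the statement was recorded). A minimal sketch of how such records could be queried for a Web document's history follows; the class and field names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from datetime import date

FOREVER = date.max

@dataclass
class BitemporalFact:
    content: str
    valid_from: date        # valid time: when the fact holds in the real world
    valid_to: date
    tx_from: date           # transaction time: when this version was recorded
    tx_to: date = FOREVER   # superseded versions get a finite tx_to

def facts_as_of(facts, valid_at, recorded_by):
    """Facts valid at `valid_at`, according to the document state at `recorded_by`."""
    return [f for f in facts
            if f.valid_from <= valid_at < f.valid_to
            and f.tx_from <= recorded_by < f.tx_to]

# A price published in January and corrected retroactively in March.
history = [
    BitemporalFact("price=10", date(2024, 1, 1), FOREVER,
                   date(2024, 1, 1), date(2024, 3, 1)),
    BitemporalFact("price=12", date(2024, 1, 1), FOREVER,
                   date(2024, 3, 1)),
]

# What the page said in February (before the correction) vs. in April.
print([f.content for f in facts_as_of(history, date(2024, 2, 1), date(2024, 2, 1))])  # -> ['price=10']
print([f.content for f in facts_as_of(history, date(2024, 4, 1), date(2024, 4, 1))])  # -> ['price=12']
```

Keeping superseded versions with a closed transaction interval, rather than overwriting them, is what lets a repository answer "what did the page claim at time T?" in addition to "what was true at time T?".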

Relevance:

30.00%

Publisher:

Abstract:

The present study provided further information about stuttering among bilingual populations and attempted to assess the significance of repeated oral-motor movements during an adaptation task in two bilingual adults. This was accomplished by requesting that bilingual people who stutter complete an adaptation task of the same written passage in two different languages. The following research question was explored: in bilingual speakers who stutter, what is the effect of altering the oral-motor movements by changing the language of the passage read during an adaptation task? Two bilingual adults were each requested to complete an adaptation task consisting of 10 readings in two separate conditions. Participants 1 and 2 completed two conditions, each of which contained a separate passage. Condition B consisted of an adaptation procedure in which the participants read five successive readings in English followed by five additional successive readings in Language 1 (L1). Following the completion of the first randomly assigned condition, the participant was given a rest period of 30 minutes before beginning the remaining condition and passage. Condition A consisted of an adaptation procedure in which the participants read five successive readings in L1 followed by five additional successive readings in English. Results across participants, conditions, and languages indicated an atypical adaptation curve over 10 readings characterized by a dramatic increase in stuttering following a change of language. Closer examination of individual participants revealed differences in stuttering and adaptation among languages and conditions. Participants 1 and 2 demonstrated differences in adaptation and stuttering among languages. Participant 1 demonstrated an increase in stuttering following a change in language read in Condition B and a decrease in stuttering following a change in language read in Condition A.
It is speculated that language proficiency contributed to the observed differences in stuttering following a change of language. Participant 2 demonstrated an increase in stuttering following a change in language read in Condition A and a minimal increase in stuttering following a change in language read in Condition B. It is speculated that a change in the oral-motor plan contributed to the increase in stuttering in Condition A. Collectively, findings from this exploratory study lend support to an interactive effect between language proficiency and a change in the oral-motor plan contributing to increased stuttering following a change of language during an adaptation task.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this research and development project was to develop a method, a design, and a prototype for gathering, managing, and presenting data about occupational injuries. State-of-the-art systems analysis and design methodologies were applied to the long-standing problem in the field of occupational safety and health of processing workplace injuries data into information for safety and health program management as well as preliminary research about accident etiologies. The top-down planning and bottom-up implementation approach was utilized to design an occupational injury management information system. A description of a managerial control system and a comprehensive system to integrate safety and health program management was provided. The project showed that current management information systems (MIS) theory and methods could be applied successfully to the problems of employee injury surveillance and control program performance evaluation. The model developed in the first section was applied at The University of Texas Health Science Center at Houston (UTHSCH). The system in current use at the UTHSCH was described and evaluated, and a prototype was developed for the UTHSCH. The prototype incorporated procedures for collecting, storing, and retrieving records of injuries and the procedures necessary to prepare reports, analyses, and graphics for management in the Health Science Center. Examples of reports, analyses, and graphics presenting UTHSCH and computer-generated data were included. It was concluded that a pilot test of this MIS should be implemented and evaluated at the UTHSCH and other settings. Further research and development efforts for the total safety and health management information systems, control systems, component systems, and variable selection should be pursued. Finally, integration of the safety and health program MIS into the comprehensive or executive MIS was recommended.

Relevance:

30.00%

Publisher:

Abstract:

The current study investigates the relationship between individual differences in attachment style and the recall of autobiographical memories. According to attachment theory, affect regulation strategies employed by individuals high in attachment anxiety and high in attachment avoidance are likely to influence how information about the past is recalled. This study examines how attachment anxiety and attachment avoidance relate to the presence of negative emotions in autobiographical memories of upsetting events with important relationship figures (i.e., mother, father, or roommate). Participants included 248 undergraduate students, aged 18 to 22, attending a public university in the northeast. As hypothesized, individuals with an avoidant attachment expressed less sadness in their responses to the written narrative task, especially when prompted for memories involving their primary caregiver. Contrary to the hypothesis, anxiously attached individuals did not display higher levels of worry/fear emotions in their responses to the written narrative. Attachment anxiety was related to some differences in emotional content; however, this varied by relationship partner. The results provide evidence linking attachment style to emotion selection and retrieval in autobiographical memories of ‘upsetting’ events. Implications for close relationships and therapy are discussed.

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a general theory of land inheritance rules. We distinguish between two classes of rules: those that allow a testator discretion in disposing of his land (like a best-qualified rule), and those that constrain his choice (like primogeniture). The primary benefit of the latter is to prevent rent seeking by heirs, but the cost is that testators cannot make use of information about the relative abilities of their heirs to manage the land. We also account for the impact of scale economies in land use. We conclude by offering some empirical tests of the model using a cross-cultural sample of societies.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes asymptotically optimal tests for unstable parameter processes in the realistic setting where the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution, which is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.