869 results for Error-correcting codes (Information theory)


Relevance:

30.00%

Publisher:

Abstract:

The traditional theory of monopolistic screening tackles individual self-selection but does not address the possibility that buyers could form a coalition to coordinate their purchases and to reallocate the goods. In this paper, we design the optimal sale mechanism which takes into account both individual and coalition incentive compatibility, focusing on the role of asymmetric information among buyers. We show that when a coalition of buyers is formed under asymmetric information, the monopolist can do as well as when there is no coalition. Although in the optimal sale mechanism marginal rates of substitution are not equalized across buyers (hence there exists room for arbitrage), they fail to realize the gains from arbitrage because of the transaction costs in coalition formation generated by asymmetric information.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this note is to analyze some implications of the model of commodity money described in Banerjee and Maskin (1996) which may seem paradoxical. In order to do this, we incorporate a general production cost structure into the model. We focus on two different results. First, the existence of technologies that make counterfeiting a commodity more difficult may exclude it from being used as a medium of exchange. Second, allocative distortions due to problems of asymmetric information may become larger in the presence of such technologies.

Relevance:

30.00%

Publisher:

Abstract:

Registering originative business contracts allows entrepreneurs and creditors to choose, and courts to enforce, market-friendly contract rules that protect innocent third parties when adjudicating disputes on subsequent contracts. This reduces information asymmetry for third parties, which enhances impersonal trade. It does so without seriously weakening property rights, because it is rightholders who choose or activate the legal rules and can, therefore, minimize the cost of any possible weakening. Registries are essential not only to make the chosen rules public but to ensure rightholders' commitment and avoid rule-gaming, because independent registries make rightholders' choices verifiable by courts. The theory is supported by comparative and historical analyses.

Relevance:

30.00%

Publisher:

Abstract:

We analyze the implications of a market imperfection related to the inability to establish intellectual property rights, which we label "unverifiable communication". Employees are able to collude with external parties, selling the "knowledge capital" of the firm. The firm organizer engages in strategic interaction simultaneously with employees and competitors, as she introduces endogenous transaction costs in the market for information between those agents. Incentive schemes and communication costs are the key strategic variables used by the firm to induce frictions in collusive markets. Unverifiable communication introduces severe allocative distortions, both in internal product development and in the intended sale of information (technology transfer). We derive implications of the model for observable decisions such as characteristics of the employment relationship (full employment, incompatibility with other jobs), firms' preferences over cluster characteristics for location decisions, optimal size at entry, in-house development vs. sale strategies for innovations, and industry evolution.

Relevance:

30.00%

Publisher:

Abstract:

When can a single variable be more accurate in binary choice than multiple sources of information? We derive analytically the probability that a single variable (SV) will correctly predict one of two choices when both criterion and predictor are continuous variables. We further provide analogous derivations for multiple regression (MR) and equal weighting (EW) and specify the conditions under which the models differ in expected predictive ability. Key factors include variability in cue validities, intercorrelation between predictors, and the ratio of predictors to observations in MR. Theory and simulations are used to illustrate the differential effects of these factors. Results directly address why and when one-reason decision making can be more effective than analyses that use more information. We thus provide analytical backing to intriguing empirical results that, to date, have lacked theoretical justification. There are predictable conditions for which one should expect less to be more.
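A minimal simulation sketch of the comparison described in this abstract, written in Python under purely illustrative assumptions (five predictors, a fixed intercorrelation, unequal cue validities, a small training sample); it contrasts out-of-sample hit rates of multiple regression (MR), the single best variable (SV), and equal weighting (EW) when choosing between paired alternatives.

    # Sketch comparing MR, SV and EW predictions in binary choice; all
    # parameter values are illustrative, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    k, n_train, n_pairs = 5, 20, 5000              # predictors, training size, test pairs
    rho = 0.3                                       # intercorrelation between predictors
    cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)
    true_w = np.array([0.6, 0.4, 0.3, 0.2, 0.1])    # unequal cue validities

    def draw(n):
        X = rng.multivariate_normal(np.zeros(k), cov, size=n)
        y = X @ true_w + rng.normal(scale=1.0, size=n)
        return X, y

    X_tr, y_tr = draw(n_train)

    # MR: ordinary least squares on the small training sample
    beta = np.linalg.lstsq(np.column_stack([np.ones(n_train), X_tr]), y_tr, rcond=None)[0]

    # SV: keep only the cue with the highest training validity (|correlation| with y)
    validities = [abs(np.corrcoef(X_tr[:, j], y_tr)[0, 1]) for j in range(k)]
    best = int(np.argmax(validities))

    def scores(X):
        mr = np.column_stack([np.ones(len(X)), X]) @ beta
        sv = X[:, best]
        ew = ((X - X_tr.mean(0)) / X_tr.std(0)).sum(1)  # unit weights on standardized cues
        return mr, sv, ew

    XA, yA = draw(n_pairs)
    XB, yB = draw(n_pairs)
    truth = yA > yB                                  # which of the two options is really better
    for name, (sA, sB) in zip(["MR", "SV", "EW"], zip(scores(XA), scores(XB))):
        print(name, "hit rate:", round(float(np.mean((sA > sB) == truth)), 3))

Varying the intercorrelation, the spread of the true weights, and the training-sample size in this sketch reproduces the kind of conditions the paper analyses, under which the single variable can match or beat the more information-hungry models.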

Relevance:

30.00%

Publisher:

Abstract:

In recent years there has been explosive growth in the development of adaptive, data-driven methods. One efficient data-driven approach is based on statistical learning theory (SLT; Vapnik 1998). The theory is built on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, that is, to fit the available data with a model, but also to reduce the complexity of the model and the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM produce robust, nonlinear data models with excellent generalisation ability, which is very important both for monitoring and for forecasting. SVM are particularly effective when the input space is high dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only the support vectors to derive decision boundaries. This opens the way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in Kanevski and Maignan (2004).
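A minimal sketch of the properties stressed above, assuming scikit-learn and NumPy are available and using illustrative hyperparameters: an RBF-kernel SVM fitted to a small, noisy, nonlinear two-dimensional data set, reporting test accuracy and how many training points end up as support vectors.

    # Minimal SVM sketch (assumes scikit-learn); only the support vectors
    # determine the decision boundary, as emphasised in the abstract.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 2))                 # small 2-D data set
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)   # nonlinear (circular) classes
    flip = rng.random(200) < 0.05                          # 5% label noise
    y = np.where(flip, 1 - y, y)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    model = SVC(kernel="rbf", C=10.0, gamma=1.0)           # illustrative hyperparameters
    model.fit(X_tr, y_tr)

    print("test accuracy:", model.score(X_te, y_te))
    print("support vectors used:", int(model.n_support_.sum()), "of", len(X_tr))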

Relevance:

30.00%

Publisher:

Abstract:

There is no doubt about the necessity of protecting digital communication: citizens entrust their most confidential and sensitive data to digital processing and communication, and so do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures on which we depend heavily in our daily lives. Transportation services, financial services, energy grids, and food production and distribution networks are only a few examples of such infrastructures. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. Yet most digital communication is not secure today, even though some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies. Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution (QKD). QKD is difficult to understand, complex, technically challenging, and costly, yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. It is disputed whether technically and economically feasible applications can be found. Our vision is that, despite the technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
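As a hedged illustration of the key-distribution idea only (not of the specific systems analysed in the thesis), the sketch below simulates the sifting step of the well-known BB84 protocol in Python: the two parties keep just the bits for which their randomly chosen measurement bases coincide, which on average retains half of the raw bits and leaves them with identical keys on an ideal, eavesdropper-free channel.

    # Toy simulation of BB84 sifting (classical, illustrative only:
    # ideal channel, no eavesdropper, no error correction or privacy amplification).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 32
    alice_bits  = rng.integers(0, 2, n)   # Alice's random raw bits
    alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal
    bob_bases   = rng.integers(0, 2, n)   # Bob measures in independently chosen bases

    # With matching bases Bob reads the bit correctly; otherwise his outcome is random.
    bob_bits = np.where(bob_bases == alice_bases,
                        alice_bits,
                        rng.integers(0, 2, n))

    keep = alice_bases == bob_bases        # bases are compared publicly, matches are kept
    sifted_alice = alice_bits[keep]
    sifted_bob   = bob_bits[keep]

    print("raw length:", n, "sifted length:", int(keep.sum()))
    print("keys identical:", np.array_equal(sifted_alice, sifted_bob))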

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to study the diffusion and transformation of scientific information in everyday discussions. Based on rumour models and social representations theory, the impact of interpersonal communication and pre-existing beliefs on transmission of the content of a scientific discovery was analysed. In three experiments, a communication chain was simulated to investigate how laypeople make sense of a genetic discovery first published in a scientific outlet, then reported in a mainstream newspaper and finally discussed in groups. Study 1 (N=40) demonstrated a transformation of information when the scientific discovery moved along the communication chain. During successive narratives, scientific expert terminology disappeared while scientific information associated with lay terminology persisted. Moreover, the idea of a discovery of a faithfulness gene emerged. Study 2 (N=70) revealed that transmission of the scientific message varied as a function of attitudes towards genetic explanations of behaviour (pro-genetics vs. anti-genetics). Pro-genetics employed more scientific terminology than anti-genetics. Study 3 (N=75) showed that endorsement of genetic explanations was related to descriptive accounts of the scientific information, whereas rejection of genetic explanations was related to evaluative accounts of the information.

Relevance:

30.00%

Publisher:

Abstract:

Aim: The paper examines the current situation of recognition of patients' right to information in international standards and in the national laws of Belgium, France, Italy, Spain (and Catalonia), Switzerland and the United Kingdom. Methodology: International standards, laws and codes of ethics of physicians and librarians that are currently in force were identified and analyzed with regard to patients' right to information and the ownership of this right. The related subjects of access to clinical history, advance directives and informed consent were not taken into account. Results: All the standards, laws and codes analyzed deal with guaranteeing access to information. The codes of ethics of both physicians and librarians establish the duty to inform. Conclusions: Librarians must collaborate with physicians in the process of informing patients.

Relevance:

30.00%

Publisher:

Abstract:

Using Monte Carlo simulations and reanalyzing the data of a validation study of the AEIM emotional intelligence test, we demonstrated that an atheoretical approach and the use of weak statistical procedures can result in biased validity estimates. These procedures included stepwise regression (and the general case of failing to include important theoretical controls), extreme scores analysis, and ignoring heteroscedasticity as well as measurement error. The authors of the AEIM test responded by offering more complete information about their analyses, allowing us to further examine the perils of ignoring theory and correct statistical procedures. In this paper we show with extended analyses that the AEIM test is invalid.
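A small Monte Carlo sketch of one of the problems named above, under purely illustrative assumptions: when all candidate predictors are pure noise, picking the "best" few on the same sample (a stepwise-like selection) still yields a spuriously positive in-sample R-squared, while an out-of-sample check reveals there is nothing to predict.

    # Illustrative Monte Carlo: post hoc predictor selection inflates apparent validity.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k, keep, n_sims = 50, 20, 3, 500    # observations, candidates, kept predictors, runs

    def fit_r2(X, y, cols):
        Xc = np.column_stack([np.ones(len(X)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        return 1 - resid.var() / y.var(), beta

    in_r2, out_r2 = [], []
    for _ in range(n_sims):
        X, y = rng.normal(size=(n, k)), rng.normal(size=n)   # y is unrelated to X
        cors = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
        cols = np.argsort(cors)[-keep:]                       # stepwise-like pick of "best" cues
        r2_in, beta = fit_r2(X, y, cols)
        X2, y2 = rng.normal(size=(n, k)), rng.normal(size=n)  # fresh data for an honest check
        pred = np.column_stack([np.ones(n), X2[:, cols]]) @ beta
        r2_out = 1 - ((y2 - pred) ** 2).mean() / y2.var()
        in_r2.append(r2_in)
        out_r2.append(r2_out)

    print("mean in-sample R^2:    ", round(float(np.mean(in_r2)), 3))   # spuriously positive
    print("mean out-of-sample R^2:", round(float(np.mean(out_r2)), 3))  # near zero or negative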

Relevance:

30.00%

Publisher:

Abstract:

We explore the ability of the recently established quasilocal density functional theory to describe the isoscalar giant monopole resonance. Within this theory we use the scaling approach and perform constrained calculations to obtain the cubic and inverse energy weighted moments (sum rules) of the RPA strength. The meaning of the sum rule approach in this case is discussed. Numerical calculations are carried out using Gogny forces, and excellent agreement is found with HF+RPA results previously reported in the literature. The nuclear matter compression modulus predicted in our model lies in the range 210–230 MeV, which agrees with earlier findings. The information provided by the sum rule approach in the case of nuclei near the neutron drip line is also discussed.
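For readers unfamiliar with the sum-rule shorthand, the textbook relations behind the quantities mentioned above can be written as follows (standard definitions, not the paper's specific derivation): with m_k the energy-weighted moments of the RPA strength S(omega), the scaling approach and the constrained calculation provide the mean excitation energies

    \[
      m_k = \int_0^\infty \omega^{\,k}\, S(\omega)\, d\omega , \qquad
      E_{\mathrm{scaling}} = \sqrt{\frac{m_3}{m_1}} , \qquad
      E_{\mathrm{constrained}} = \sqrt{\frac{m_1}{m_{-1}}} .
    \]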

Relevance:

30.00%

Publisher:

Abstract:

Quantum states can be used to encode the information contained in a direction, i.e., in a unit vector. We present the best encoding procedure when the quantum state is made up of N spins (qubits). We find that the quality of this optimal procedure, which we quantify in terms of the fidelity, depends solely on the dimension of the encoding space. We also investigate the use of spatial rotations on a quantum state, which provide a natural and less demanding encoding. In this case we prove that the fidelity is directly related to the largest zeros of the Legendre and Jacobi polynomials. We also discuss our results in terms of the information gain.

Relevance:

30.00%

Publisher:

Abstract:

Human cooperation is often based on reputation gained from previous interactions with third parties. Such reputation can be built on generous or punitive actions, and both one's own reputation and the reputation of others have been shown to influence decision making in experimental games that control for confounding variables. Here we test how reputation-based cooperation and punishment react to disruption of cognitive processing in different kinds of helping games with observers. Saying a few superfluous words before each interaction was used as a possible way of interfering with working memory. In a first set of experiments, where reputation could only be based on generosity, the disruption reduced the frequency of cooperation and lowered mean final payoffs. In a second set of experiments, where reputation could only be based on punishment, the disruption increased the frequency of antisocial punishment (i.e. of punishing those who helped) and reduced the frequency of punishing defectors. Our findings suggest that working memory can easily be constraining in reputation-based interactions within experimental games, even if these games are based on a few simple rules with a visual display that provides all the information the subjects need to play the strategies predicted from current theory. Our findings also highlight a weakness of experimental games, namely that they can be very sensitive to environmental variation and that quantitative conclusions about antisocial punishment or other behavioral strategies can easily be misleading.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this essay is to reflect on a possible relation between entropy and emergence. A qualitative, relational approach is followed. We begin by highlighting that entropy includes the concept of dispersal, relevant to our enquiry. Emergence in complex systems arises from the coordinated behavior of their parts. Coordination in turn necessitates recognition between parts, i.e., information exchange. What will be argued here is that the scope of recognition processes between parts is increased when preceded by their dispersal, which multiplies the number of encounters and creates a richer potential for recognition. A process intrinsic to emergence is dissolvence (aka submergence or top-down constraints), which participates in the information-entropy interplay underlying the creation, evolution and breakdown of higher-level entities.

Relevance:

30.00%

Publisher:

Abstract:

In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to, or more extreme than, the one observed, presuming that the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
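A minimal numerical sketch of the distinction drawn above, using SciPy on toy data (illustrative sample sizes and effect size): the p value measures the strength of evidence against the null of no difference, whereas the Neyman-Pearson procedure only asks whether the test statistic falls into the rejection region fixed in advance by the chosen Type I error level.

    # Illustration of the p value vs. fixed-level hypothesis testing on toy data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    control   = rng.normal(loc=0.0, scale=1.0, size=30)   # group with no effect
    treatment = rng.normal(loc=0.5, scale=1.0, size=30)    # group with a true effect of 0.5

    t_stat, p_value = stats.ttest_ind(treatment, control)

    # Fisher's view: p is the probability, under the null, of a difference at least
    # this extreme; it is NOT the probability that the null hypothesis is true.
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Neyman-Pearson view: fix alpha (Type I error) in advance and reject or not.
    alpha = 0.05
    critical = stats.t.ppf(1 - alpha / 2, df=len(control) + len(treatment) - 2)
    print("reject H0 at alpha = 0.05:", abs(t_stat) > critical)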