954 results for inductive inference


Relevance:

100.00%

Publisher:

Abstract:

"Invited paper for the NATO Advanced Study Institute Seminar on Computer Oriented Learning Processes, Aug. 26-Sept. 7, 1974, Bonas, France."

Relevance:

100.00%

Publisher:

Abstract:

"UILU-ENG 77 1727."--Cover.

Relevance:

70.00%

Publisher:

Abstract:

Recent axiomatic derivations of the maximum entropy principle from consistency conditions are critically examined. We show that proper application of consistency conditions alone allows a wider class of functionals, essentially of the form ∫ dx p(x)[p(x)/g(x)]^s for some real number s, to be used for inductive inference; the commonly used form −∫ dx p(x) ln[p(x)/g(x)] is only a particular case. The role of the prior density g(x) is clarified. It can be regarded as a geometric factor describing the coordinate system used; it does not represent information of the same kind as that obtained by measurements on the system in the form of expectation values.
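A minimal discrete sketch of the functional family described above (function names are illustrative, not from the paper): for small s, the generalized functional Σ p(p/g)^s expands to 1 + s·Σ p ln(p/g), so the logarithmic form appears as a limiting member of the wider class.

```python
import math

def functional(p, g, s):
    """Discrete analog of the wider class of inference functionals,
    sum_i p_i * (p_i / g_i)**s, for a real exponent s."""
    return sum(pi * (pi / gi) ** s for pi, gi in zip(p, g))

def kl(p, g):
    """Discrete analog of the commonly used form, sum_i p_i * ln(p_i / g_i)."""
    return sum(pi * math.log(pi / gi) for pi, gi in zip(p, g))

p = [0.5, 0.3, 0.2]
g = [1 / 3, 1 / 3, 1 / 3]  # uniform prior density: a pure "coordinate" factor

# For small s, (F(s) - 1)/s approaches the logarithmic functional,
# exhibiting it as a particular (limiting) case of the family.
s = 1e-6
approx = (functional(p, g, s) - 1.0) / s
print(approx, kl(p, g))
```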

Relevance:

60.00%

Publisher:

Abstract:

We generalize the classical notion of Vapnik–Chervonenkis (VC) dimension to ordinal VC-dimension, in the context of logical learning paradigms. Logical learning paradigms encompass the numerical learning paradigms commonly studied in Inductive Inference. A logical learning paradigm is defined as a set W of structures over some vocabulary, and a set D of first-order formulas that represent data. The sets of models of ϕ in W, where ϕ ranges over D, generate a natural topology over W. We show that if D is closed under boolean operators, then the notion of ordinal VC-dimension offers an exact characterization of the problem of predicting the truth of the members of D in a member of W, with an ordinal bound on the number of mistakes. This shows that the notion of VC-dimension has a natural interpretation in Inductive Inference when cast into a logical setting. We also study the relationships between predictive complexity, selective complexity (a variation on predictive complexity), and mind change complexity. The assumptions that D is closed under boolean operators and that W is compact often play a crucial role in establishing connections between these concepts. We then consider a computable setting with effective versions of the complexity measures, and show that the equivalence between ordinal VC-dimension and predictive complexity fails. More precisely, we prove that the effective ordinal VC-dimension of a paradigm can be defined when all other effective notions of complexity are undefined. On a better note, when W is compact, all effective notions of complexity are defined, though they are not related as in the noncomputable version of the framework.
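For orientation, the classical (finite) VC dimension that the paper generalizes can be computed by brute force for a finite hypothesis class; the threshold class below is a standard textbook example, not one from the paper.

```python
from itertools import combinations

def shatters(hypotheses, subset):
    """A class shatters a subset iff every +/- labeling of the subset is
    realized by some hypothesis (each hypothesis viewed as a set of positives)."""
    patterns = {tuple(x in h for x in subset) for h in hypotheses}
    return len(patterns) == 2 ** len(subset)

def vc_dimension(hypotheses, domain):
    """Largest d such that some d-element subset of the domain is shattered."""
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(hypotheses, set(s)) for s in combinations(domain, k)):
            d = k
    return d

# Threshold classifiers on {0,1,2,3}: the sets {x : x >= t}
domain = [0, 1, 2, 3]
thresholds = [set(x for x in domain if x >= t) for t in range(5)]
print(vc_dimension(thresholds, domain))  # thresholds shatter only singletons -> 1
```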

Relevance:

60.00%

Publisher:

Abstract:

The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin's notion of finite thickness and Wright's work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara's notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Omega be a notation for the first limit ordinal. Then it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length <= m:
• is identifiable in the limit from positive data with a mind change bound of Omega^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Omega × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro's linear programs, Arimura and Shinohara's depth-bounded linearly covering programs, and Krishna Rao's depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
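A toy sketch of the underlying setting, identification in the limit with a mind change counter (the language class and learner here are illustrative, far simpler than the logic-program classes in the paper):

```python
def learn_in_the_limit(text):
    """Toy learner for the class L_n = {0, ..., n} from positive data:
    conjecture the largest element seen so far.  Each revision of the
    conjecture counts as a mind change; for this class the count is finite."""
    hypothesis = None
    mind_changes = 0
    for datum in text:
        guess = max(datum, hypothesis) if hypothesis is not None else datum
        if guess != hypothesis:
            if hypothesis is not None:
                mind_changes += 1  # the learner revised an earlier conjecture
            hypothesis = guess
    return hypothesis, mind_changes

# A finite prefix of a text (enumeration) for L_3 = {0, 1, 2, 3}
hyp, changes = learn_in_the_limit([1, 0, 3, 2, 3, 1])
print(hyp, changes)  # converges to 3 after one mind change
```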

Relevance:

60.00%

Publisher:

Abstract:

In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty and interpret 'desirable' as a reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds, for hypotheses and for data, is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.

Relevance:

60.00%

Publisher:

Abstract:

The aim of this research was to study how European churches contributed to the shaping of the Constitutional Treaty during the work of the Convention on the Future of Europe through the public discussion Forum, established by the Convention for this specific purpose in the years 2002-2003. In particular, this study sought to uncover the areas of interest brought up by the churches in their contributions, the objectives they pursued, and the approaches and arguments they employed to reach those objectives. The data for this study comprised all official submissions by European churches and church alliances to the Forum, totalling 21 contributions. A central criterion for inclusion was that the organization could reasonably be assumed to represent the official position of one or more Christian churches within the European Union before the 2004 expansion. The contributing churches and organizations represent the vast majority of Christians in Europe. The data was analyzed using primarily qualitative content analysis. The research approach was a combination of abductive and inductive inference. Based on the analysis, a two-fold theoretical framework was adopted, focusing on theories of public religion, secularization and deprivatization of religion, and of legitimation and collective identity. The main areas of interest found in the contributions of the churches were the value foundation of the European Union, which they demand should coherently permeate all policies and actions of the EU, and the social dimension of Europe, which they argue must be given equal status with the political and economic dimensions. In both areas the churches claim significant experience and expertise, which they want to see recognized in the Constitutional Treaty through a formally guaranteed status for churches and religious communities in the EU.
In their contributions the churches show a strong determination to secure a significant role for both religion and religious communities in the public life of Europe. As for the role of religion, they point to its potential as a motivating and cohesive force in society and as a building block for a collective European identity, which is still missing. The churches also pursue a substantial public role for themselves beyond the spiritual dimension, permeating the secular areas of the social, political and economic dimensions. The arguments in support of such a role are embedded in their interest and expertise in spiritual and other fundamental values and their broad involvement in providing social services. In this context the churches use expressions inclusive of all religions and convictions, albeit clearly advocating the primacy of Europe's Christian heritage. Based on their historical role, their social involvement and their spiritual mission, they use the public debate on the Constitutional Treaty to gain formal legitimacy for the public status of religion and religious communities, both nationally and on a European level, through appropriate provisions in the constitutional text. In return they offer the European Union ways of improving its own legitimacy by reducing the democratic and ideological deficit of the EU and advancing the development of a collective European identity.

Relevance:

60.00%

Publisher:

Abstract:

Discovering a precise causal structure that accurately reflects the given data is one of the most essential tasks in the area of data mining and machine learning. One of the successful causal discovery approaches is the information-theoretic approach using the Minimum Message Length (MML) principle [19]. This paper presents an improved MML discovery algorithm and further experimental results. We introduce a new encoding scheme for measuring the cost of describing the causal structure. Stirling's approximation is also applied to further simplify the computation, so the algorithm works more efficiently. The experimental results of the current version of the discovery system show that: (1) the current version is capable of discovering everything discovered by the previous system; (2) the current system is capable of discovering more complicated causal models with a large number of variables; (3) the new version is more efficient than the previous version in terms of time complexity.
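The MML idea can be illustrated with a generic two-part message length: the cost of stating a model plus the cost of the data encoded under it. The coin-model comparison below is a hypothetical toy (the paper's actual encoding scheme for causal structures is different and more elaborate).

```python
import math

def two_part_length(model_bits, data_bits):
    """MML-style two-part message length (in bits): cost of stating the
    model plus cost of encoding the data given that model."""
    return model_bits + data_bits

def nll_bernoulli(data):
    """Data cost (bits) for an i.i.d. Bernoulli model with MLE parameter."""
    n, k = len(data), sum(data)
    if k in (0, n):
        return 0.0
    p = k / n
    return -sum(math.log2(p if x else 1 - p) for x in data)

# Hypothetical comparison: a biased-coin model (one parameter, stated in,
# say, 8 bits) versus a fair-coin model (no parameter to state).
data = [1] * 18 + [0] * 2
biased = two_part_length(8.0, nll_bernoulli(data))
fair = two_part_length(0.0, len(data) * 1.0)  # fair coin: 1 bit per flip
print(biased < fair)  # the biased model wins despite its structure cost
```

The same trade-off drives structure discovery: a richer causal model is accepted only when its shorter data encoding pays for its longer structure description.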

Relevance:

60.00%

Publisher:

Abstract:

This paper presents an examination report on the performance of the improved MML-based causal model discovery algorithm. We first describe our improvement to the causal discovery algorithm, which introduces a new encoding scheme for measuring the cost of describing the causal structure. Stirling's approximation is also applied to further simplify the computation, so the algorithm works more efficiently. This is followed by a detailed examination report on the performance of our improved discovery algorithm. The experimental results of the current version of the discovery system show that: (1) the current version is capable of discovering everything discovered by the previous system; (2) the current system is capable of discovering more complicated causal networks with a large number of variables; (3) the new version is more efficient than the previous version in terms of time complexity.

Relevance:

60.00%

Publisher:

Abstract:

Binary signatures have been widely used to detect malicious software on the current Internet. However, this approach is unable to achieve accurate identification of polymorphic malware variants, which can be easily generated by malware authors using code generation engines. Code generation engines randomly produce varying code sequences that perform the same desired malicious functions. Previous research used flow graphs and signature trees to identify polymorphic malware families; its key difficulty is the generation of precisely defined state machine models from polymorphic variants. This paper proposes a novel approach, using a Hierarchical Hidden Markov Model (HHMM), to provide accurate inductive inference of the malware family. This model can capture the self-similar and hierarchical structure of polymorphic malware family signature sequences. To demonstrate the effectiveness and efficiency of this approach, we evaluate it with real malware samples. Using more than 15,000 real malware samples, we find our approach achieves high true positive rates, low false positive rates, and low computational cost.
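The classification step can be sketched with a plain (flat) HMM as a stand-in for the paper's hierarchical model: score a signature sequence under each family's model via the forward algorithm and pick the family with the higher likelihood. All models and sequences below are made up for illustration.

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) for a plain HMM, computed with the forward algorithm.
    obs: list of symbol indices; start: initial state distribution;
    trans[i][j]: transition prob; emit[s][o]: emission prob."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha)))
                 * emit[j][o]
                 for j in range(len(alpha))]
    return sum(alpha)

# Two hypothetical 2-state family models over a binary symbol alphabet
start = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
family_a = [[0.9, 0.1], [0.8, 0.2]]  # family A mostly emits symbol 0
family_b = [[0.1, 0.9], [0.2, 0.8]]  # family B mostly emits symbol 1

seq = [0, 0, 1, 0, 0]  # an observed signature sequence
score_a = forward_likelihood(seq, start, trans, family_a)
score_b = forward_likelihood(seq, start, trans, family_b)
print(score_a > score_b)  # True: the sequence is attributed to family A
```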

Relevance:

60.00%

Publisher:

Relevance:

60.00%

Publisher:

Abstract:

The following work interprets and analyzes the problem of induction from a viewpoint founded on set theory and probability theory, as a basis for resolving its negative philosophical implications for systems of inductive logic in general. Given the importance of the problem and the relatively recent developments in these fields of knowledge (early 20th century), as well as the visible relations between them and the process of inductive inference, a relatively unexplored and promising field of possibilities has opened. The key point of the study consists in modeling the information acquisition process using concepts of set theory, followed by a treatment using probability theory. Throughout the study, two major obstacles to the probabilistic justification were identified: the problem of defining the concept of probability and that of rationality, as well as the subtle connection between the two. This finding called for greater care in choosing the criterion of rationality to be considered, in order to facilitate the treatment of the problem through specific situations without losing their original characteristics, so that the conclusions can be extended to classic cases such as the question of the continuity of the sunrise.
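The classic probabilistic treatment of the sunrise question is Laplace's rule of succession: with a uniform prior over the unknown success probability, the posterior predictive probability of another success after k successes in n trials is (k+1)/(n+2). A one-function sketch (offered as standard background, not as this work's specific model):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: with a uniform prior on the unknown
    probability, P(next success | k successes in n trials) = (k+1)/(n+2)."""
    return Fraction(successes + 1, trials + 2)

# The sunrise question: after 100 consecutive observed sunrises,
# how confident should induction make us about tomorrow?
print(rule_of_succession(100, 100))  # 101/102
```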

Relevance:

60.00%

Publisher:

Abstract:

An approach is proposed for inferring implicative logical rules from examples. The concept of a good diagnostic test for a given set of positive examples lies at the basis of this approach. The process of inferring good diagnostic tests is considered as a process of inductive common-sense reasoning. An incremental approach to learning is implemented in the algorithm DIAGaRa for inferring implicative rules from examples.
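A toy stand-in for this kind of inference: extract single-premise implicative rules i -> j that hold in every positive example (attribute names and examples below are hypothetical; DIAGaRa itself works with good diagnostic tests and is more sophisticated).

```python
def implicative_rules(examples, attributes):
    """Return all rules (i, j), read 'if i then j', such that every
    positive example containing attribute i also contains attribute j."""
    rules = []
    for i in attributes:
        for j in attributes:
            if i != j and all(j in ex for ex in examples if i in ex):
                rules.append((i, j))
    return rules

# Hypothetical positive examples, each given as a set of attributes
positives = [{"fever", "cough", "flu"}, {"fever", "flu"}, {"cough"}]
print(implicative_rules(positives, ["fever", "cough", "flu"]))
# e.g. "fever -> flu" holds, while "cough -> flu" is refuted by the third example
```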