922 results for: Complexity syntactic
                                
Abstract:
We investigate the relative complexity of two free-variable labelled modal tableaux (KEM and Single Step Tableaux, SST). We discuss the reasons why p-simulation is not a proper measure of the relative complexity of tableaux-like proof systems, and we propose an improved comparison scale (p-search-simulation). Finally, we show that KEM p-search-simulates SST, while SST cannot p-search-simulate KEM.
                                
Abstract:
The assertion about the unique 'complexity' or the peculiarly intricate character of social phenomena has, at least within sociology, a long, venerable and virtually uncontested tradition. At the turn of the last century, classical social theorists, for example Georg Simmel and Emile Durkheim, made prominent and repeated reference to this attribute of the subject matter of sociology and the degree to which it complicates, even inhibits, the development and application of social scientific knowledge. Our paper explores the origins, the basis and the consequences of this assertion, and asks in particular whether the classic complexity assertion still deserves to be invoked in analyses of the production and the utilization of social scientific knowledge in modern society. We present John Maynard Keynes' economic theory and its practical applications as an illustration. We conclude that the practical value of social scientific knowledge does not depend on a faithful, in the sense of complete, representation of social reality. Instead, social scientific knowledge that aims to optimize its practicality must attend, and attach itself, to elements of social situations that can be altered or are actionable.
                                
Abstract:
In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
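As an illustration of the regularization approach the abstract favours, here is a minimal sketch (not from the paper) using ridge regression, where an L2 penalty on the weights plays the complexity-control role; the data and penalty strength are invented for the example:

```python
import numpy as np

# Illustrative only: an L2 penalty (weight decay / ridge) shrinks the
# weight vector, limiting the effective complexity of the fitted model.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))            # few samples, many features
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]                 # only two features matter
y = X @ w_true + 0.1 * rng.normal(size=20)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_unreg = ridge_fit(X, y, 0.0)           # unregularized least squares
w_reg = ridge_fit(X, y, 1.0)             # penalized fit

# The ridge solution's norm is non-increasing in the penalty strength,
# so the regularized weights are smaller in norm than the unregularized ones.
assert np.linalg.norm(w_reg) < np.linalg.norm(w_unreg)
```

The same effect carries over to feed-forward networks, where the penalty is added to the training loss rather than solved in closed form.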
                                
Abstract:
The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimensions, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
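The kind of algorithm-independent bound the abstract contrasts with can be sketched numerically. Below is one textbook PAC upper bound on sample size in terms of the VC dimension (the Blumer et al. variant; constants differ between sources), not the paper's own algorithm-dependent bound:

```python
import math

def pac_sample_bound(d, eps, delta):
    """One textbook PAC upper bound on the number of examples sufficient
    to learn a class of VC dimension d to accuracy eps with confidence
    1 - delta:  m >= (4/eps) * (d * log2(12/eps) + log2(2/delta)).
    Constants vary between sources; this is illustrative, not tight."""
    return math.ceil((4.0 / eps) * (d * math.log2(12.0 / eps)
                                    + math.log2(2.0 / delta)))

# For fixed accuracy parameters the bound grows linearly in d,
# leaving room for much smaller algorithm-dependent requirements.
m1 = pac_sample_bound(1, 0.1, 0.05)
m5 = pac_sample_bound(5, 0.1, 0.05)
assert m5 > m1
```

Such generic bounds hold for every consistent learner, which is precisely why exact, algorithm-specific requirements (as for the VC dimension 1 algorithm above) can improve substantially on them.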