855 results for Fuzzy rules
Abstract:
Opportunistic land encroachment occurs in many low-income countries, gradually yet pervasively, until discrete areas of common land disappear. This paper, motivated by field observations in Karnataka, India, demonstrates that such an evolution of property rights from common to private may be efficient when the boundaries between common and private land are poorly defined, or "fuzzy." Using a multi-period optimization model, and introducing the concept of stock and flow enforcement, I show how the effectiveness of enforcement effort, the reversibility of encroachment, and punitive fines influence whether an area of common land is fully defined and protected, gradually encroached, or rapidly encroached.
Abstract:
This paper contributes to a fast-growing literature which introduces game theory into the analysis of real option investments in a competitive setting. Specifically, in this paper we focus on the issue of multiple equilibria and on the implications that different equilibrium selections may have for the pricing of real options and for subsequent strategic decisions. We present some theoretical results on the necessary conditions for multiple equilibria and we show under which conditions different tie-breaking rules result in different economic decisions. We then present a numerical exercise using the information set obtained on a real estate development in South London. We find that risk aversion reduces option value, and this reduction decreases marginally as negative externalities decrease.
Abstract:
This paper summarises an initial report carried out by the Housing Business Research Group of the University of Reading into Design and Build procurement, together with a number of research projects undertaken by the National Federation of Housing Associations (NFHA) into their members' development programmes. The paper collates existing statistics from these sources and examines the way in which Design and Build procurement can be adapted for the provision of social housing. The paper comments on these changes and questions how risk-averse the adopted strategies are in relation to long-term housing business management issues arising from the quality of the product produced by the new system.
Abstract:
Risk and uncertainty are, to say the least, poorly considered by most individuals involved in real estate analysis - in both development and investment appraisal. Surveyors continue to express 'uncertainty' about the value (risk) of using relatively objective methods of analysis to account for these factors. These methods attempt to identify the risk elements more explicitly. Conventionally, this is done by deriving probability distributions for the uncontrolled variables in the system. A suggested 'new' way of "being able to express our uncertainty or slight vagueness about some of the qualitative judgements and not entirely certain data required in the course of the problem..." uses the application of fuzzy logic. This paper discusses and demonstrates the terminology and methodology of fuzzy analysis. In particular it attempts a comparison of the procedures with those used in 'conventional' risk analysis approaches and critically investigates whether a fuzzy approach offers an alternative to probability-based analysis for dealing with aspects of risk and uncertainty in real estate analysis.
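The fuzzy analysis the abstract describes rests on membership functions that grade how well a value fits a vague judgement. As a minimal sketch (the paper's own formulation is not given here, and the rent figures are purely illustrative assumptions), a triangular fuzzy number can encode an appraiser's "roughly 100, certainly between 80 and 130" estimate:

```python
def triangular_membership(x, a, b, c):
    """Membership degree of x in a triangular fuzzy number (a, b, c):
    a and c bound the support, b is the modal (most plausible) value."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising left slope
    return (c - x) / (c - b)       # falling right slope

# Illustrative rent estimate: "roughly 100, certainly between 80 and 130"
print(triangular_membership(100, 80, 100, 130))  # 1.0 at the modal value
print(triangular_membership(90, 80, 100, 130))   # 0.5 halfway up the left slope
```

Unlike a probability density, these degrees need not integrate to one, which is the crux of the fuzzy-versus-probabilistic comparison the paper investigates.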
Abstract:
The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture and its implementation for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We present initial results of a prototype implementation based on the Prism algorithm.
Abstract:
Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist to scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation of classification rule induction algorithms, most of the work has concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the 'divide and conquer' approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternative algorithms follow the 'separate and conquer' approach of inducing rules, but very little work has been done to make the 'separate and conquer' approach scale better on large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the 'separate and conquer' approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve an accuracy comparable with decision trees and in some cases even outperform them. Both kinds of algorithms tend to overfit on large and noisy datasets, and this has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees many pre-pruning and post-pruning methods exist; however, for Prism algorithms only one pre-pruning method has been developed, J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms and develops a new pruning method, Jmax-pruning, discussing it in theoretical terms and evaluating it empirically.
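J-pruning (and by extension Jmax-pruning) takes its name from the information-theoretic J-measure of Smyth and Goodman, which estimates the information content of a rule "if antecedent then class". The abstract does not spell out the pruning criterion, so the following is only a rough sketch of the underlying measure, assuming base-2 logarithms and the standard Smyth-Goodman definition:

```python
import math

def j_measure(p_x, p_y, p_y_given_x):
    """Smyth-Goodman J-measure of the rule 'if X=x then Y=y'.
    p_x: probability the antecedent fires, p_y: prior class probability,
    p_y_given_x: class probability given the antecedent (rule precision)."""
    def term(p, q):
        # Contribution p * log2(p / q), with the convention 0 * log(0/q) = 0
        return 0.0 if p == 0.0 else p * math.log2(p / q)
    # Cross-entropy-style inner term over {y, not-y}, weighted by coverage
    j_inner = term(p_y_given_x, p_y) + term(1.0 - p_y_given_x, 1.0 - p_y)
    return p_x * j_inner

# A rule that fires on 30% of the data and lifts the class from 0.5 to 0.9
# carries positive information; one that leaves the class probability
# unchanged carries none.
print(j_measure(0.3, 0.5, 0.9))  # positive
print(j_measure(0.3, 0.5, 0.5))  # 0.0
```

In J-pruning-style schemes, rule growth stops when adding further terms no longer increases such a measure; the precise stopping rule here is an assumption for illustration.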
Abstract:
The Prism family of algorithms induces modular classification rules, in contrast to the Top Down Induction of Decision Trees (TDIDT) approach, which induces classification rules in the intermediate form of a tree structure. Both approaches achieve a comparable classification accuracy; however, in some cases Prism outperforms TDIDT. For both approaches, pre-pruning facilities have been developed in order to prevent the induced classifiers from overfitting on noisy datasets, by cutting rule terms or whole rules or by truncating decision trees according to certain metrics. Many pre-pruning mechanisms have been developed for the TDIDT approach, but for the Prism family the only existing pre-pruning facility is J-pruning. J-pruning works not only on Prism algorithms but also on TDIDT. Although it has been shown that J-pruning produces good results, this work points out that J-pruning does not use its full potential. The original J-pruning facility is examined and the use of a new pre-pruning facility, called Jmax-pruning, is proposed and evaluated empirically. A possible pre-pruning facility for TDIDT based on Jmax-pruning is also discussed.
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale and thus databases are growing at a fast pace. This leads to the utilisation of parallel computing technologies in order to cope with large amounts of data. In the area of classification rule induction, parallelisation efforts have focused on the divide and conquer approach, also known as the Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate and conquer, which has only recently become a focus of parallelisation efforts. This work introduces and evaluates empirically a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms. All members of the Prism family of algorithms follow the separate and conquer approach.
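The 'separate and conquer' strategy the Prism family follows can be sketched compactly: grow one rule at a time by greedily adding attribute-value terms until the rule covers only the target class, then separate out (remove) the covered instances and conquer the remainder. This is a simplified serial sketch, not the parallel framework the paper evaluates, and it assumes clean categorical data with no tie-breaking or pruning:

```python
def induce_rules_for_class(instances, target_class):
    """Prism-style 'separate and conquer' sketch for one class.
    instances: list of (features_dict, label) pairs with categorical values.
    Returns a list of rules, each a list of (attribute, value) terms."""
    rules = []
    remaining = list(instances)
    while any(label == target_class for _, label in remaining):
        rule, covered = [], remaining
        # Grow the rule until it covers only target-class instances
        while any(label != target_class for _, label in covered):
            candidates = {(a, v) for feats, _ in covered
                          for a, v in feats.items() if (a, v) not in rule}
            def precision(term):
                a, v = term
                subset = [(f, l) for f, l in covered if f.get(a) == v]
                hits = sum(1 for _, l in subset if l == target_class)
                return hits / len(subset) if subset else 0.0
            best = max(candidates, key=precision)  # greedy term selection
            rule.append(best)
            a, v = best
            covered = [(f, l) for f, l in covered if f.get(a) == v]
        rules.append(rule)
        # 'Separate': drop covered instances, then 'conquer' the rest
        remaining = [inst for inst in remaining if inst not in covered]
    return rules

# Toy weather data: three 'play' days, one 'stay' day
data = [({'outlook': 'sunny', 'windy': 'no'}, 'play'),
        ({'outlook': 'sunny', 'windy': 'yes'}, 'play'),
        ({'outlook': 'rain', 'windy': 'yes'}, 'stay'),
        ({'outlook': 'rain', 'windy': 'no'}, 'play')]
rules = induce_rules_for_class(data, 'play')
```

Because each induced rule stands alone rather than hanging off a shared tree root, the per-rule work can in principle be distributed, which is the opening the parallelisation framework exploits.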
Abstract:
There are several scoring rules that one can choose from in order to score probabilistic forecasting models or estimate model parameters. Whilst it is generally agreed that proper scoring rules are preferable, there is no clear criterion for preferring one proper scoring rule above another. This manuscript compares and contrasts some commonly used proper scoring rules and provides guidance on scoring rule selection. In particular, it is shown that the logarithmic scoring rule prefers erring with more uncertainty, the spherical scoring rule prefers erring with lower uncertainty, whereas the other scoring rules are indifferent to either option.
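The three families of rules contrasted above are easy to state concretely. As a minimal sketch (the forecast and outcome below are invented for illustration), the logarithmic, quadratic (Brier), and spherical scores of a categorical forecast are:

```python
import math

def log_score(probs, outcome):
    """Logarithmic score: log-probability assigned to the realised outcome
    (positively oriented: higher is better)."""
    return math.log(probs[outcome])

def brier_score(probs, outcome):
    """Quadratic (Brier) score, negatively oriented: squared error of the
    forecast vector against the one-hot outcome indicator."""
    return sum((p - (1.0 if k == outcome else 0.0)) ** 2
               for k, p in probs.items())

def spherical_score(probs, outcome):
    """Spherical score: probability of the outcome divided by the
    Euclidean norm of the whole forecast vector (positively oriented)."""
    norm = math.sqrt(sum(p * p for p in probs.values()))
    return probs[outcome] / norm

forecast = {'rain': 0.6, 'sun': 0.3, 'snow': 0.1}
print(log_score(forecast, 'rain'))        # ln 0.6, about -0.51
print(brier_score(forecast, 'rain'))      # 0.26
print(spherical_score(forecast, 'rain'))  # about 0.88
```

All three are proper, so a forecaster maximises the expected score by reporting honest probabilities; the differences the abstract highlights, such as the logarithmic rule's harsh penalty for overconfident misses, only emerge in how they trade off errors.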
Abstract:
Native-like use of preterit and imperfect morphology in all contexts by English learners of L2 Spanish is the exception rather than the rule, even for successful learners. Nevertheless, recent research has demonstrated that advanced English learners of L2 Spanish attain a native-like morphosyntactic competence for the preterit/imperfect contrast, as evidenced by their native-like knowledge of associated semantic entailments (Goodin-Mayeda and Rothman 2007, Montrul and Slabakova 2003, Slabakova and Montrul 2003, Rothman and Iverson 2007). In addition to an L2 disassociation of morphology and syntax (e.g., Bruhn de Garavito 2003, Lardiere 1998, 2000, 2005, Prévost and White 1999, 2000, Schwartz 2003), I hypothesize that a system of learned pedagogical rules contributes to target-deviant L2 performance in this domain through the most advanced stages of L2 acquisition via its competition with the generative system. I call this hypothesis the Competing Systems Hypothesis. To test its predictions, I compare and contrast the use of the preterit and imperfect in two production tasks by native, tutored (classroom), and naturalistic learners of L2 Spanish.