841 results for Sharing rules
Abstract:
The guiding principle of compulsory purchase of interests in land in England and Wales is that of fairness, best stated in the words of Lord Justice Scott in Horn v Sunderland Corporation when he said that the owner has “the right to be put, so far as money can do it, in the same position as if his land had not been taken from him”. In many instances, land acquired by compulsion subsequently becomes surplus to the requirements of the acquiring authority. This may be because the intended development scheme was scrapped or substantially modified, or because, after the passage of time, the land is no longer required for the use for which it was purchased. More controversially, it may be that for ‘operational reasons’ the acquiring authority knowingly purchased more land than the scheme required. Under these circumstances, the Crichel Down Rules (‘the Rules’) require government departments and other statutory bodies to offer land previously acquired by, or under the threat of, compulsory purchase back to the former owners or their successors.
Abstract:
The Virtual Lightbox for Museums and Archives (VLMA) is a tool for collecting and reusing, in a structured fashion, the online contents of museum and archive datasets. It is not restricted to datasets with visual components, although VLMA includes a lightbox service that enables comparison and manipulation of visual information. With VLMA, one can browse and search collections, construct personal collections, annotate them, export these collections to XML or Impress (OpenOffice) presentation format, and share collections with other VLMA users. VLMA was piloted as an e-Learning tool as part of JISC’s e-Learning focus in its first phase (2004-2005), and in its second phase (2005-2006) it incorporated new partner collections while improving and expanding interfaces and services. This paper concerns its development as a research and teaching tool, especially for teachers using museum collections, and discusses the recent development of VLMA.
Abstract:
This paper contributes to a fast-growing literature that introduces game theory into the analysis of real option investments in a competitive setting. Specifically, we focus on the issue of multiple equilibria and on the implications that different equilibrium selections may have for the pricing of real options and for subsequent strategic decisions. We present some theoretical results on the necessary conditions for multiple equilibria and show under which conditions different tie-breaking rules result in different economic decisions. We then present a numerical exercise using the information set obtained on a real estate development in South London. We find that risk aversion reduces option value, and this reduction decreases marginally as negative externalities decrease.
Abstract:
This paper summarises an initial report by the Housing Business Research Group of the University of Reading into Design and Build procurement, together with a number of research projects undertaken by the National Federation of Housing Associations (NFHA) into their members' development programmes. The paper collates existing statistics from these sources and examines the way in which Design and Build procurement can be adapted for the provision of social housing. It comments on these changes and questions how risk-averse the adopted strategies are in relation to long-term housing business management issues arising from the quality of the product produced by the new system.
Abstract:
Pervasive computing is a continually and rapidly growing field, although it still remains in relative infancy. The possible applications for the technology are numerous and stand to fundamentally change the way users interact with technology. However, alongside these are equally numerous potential undesirable effects and risks. The lack of naturalistic empirical data from the real world makes studying the true impacts of this technology difficult. This paper describes how two independent research projects shared such valuable empirical data on the relationship between pervasive technologies and users. Each project had different aims and adopted different methods, but both successfully used the same data and arrived at the same conclusions. This paper demonstrates the benefit of sharing research data in multidisciplinary pervasive computing research, where real-world implementations are not widely available.
Abstract:
The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture, and its implementation, for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We report initial results from a prototype implementation based on the Prism algorithm.
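As an illustration only (not the DRI implementation the abstract describes), the blackboard division of labour can be sketched in a much-simplified, single-process form: each "expert" searches its own attribute partition for the locally best rule term and posts it to a shared blackboard, and a moderator picks the global winner. All names and the toy data below are assumptions for the sketch:

```python
class Blackboard:
    """Toy shared blackboard: 'expert' processes post partial results
    and a moderator combines them. A greatly simplified, single-process
    sketch of the division of labour, not the distributed system itself."""
    def __init__(self):
        self.posts = {}

    def post(self, expert, result):
        self.posts[expert] = result


def locally_best_term(partition, target):
    """Find the attribute-value term with the highest target-class
    probability on this expert's own data partition."""
    best, best_prob = None, -1.0
    for attrs, _ in partition:                  # candidate terms
        for a, v in attrs.items():
            subset = [(x, y) for x, y in partition if x[a] == v]
            prob = sum(y == target for _, y in subset) / len(subset)
            if prob > best_prob:
                best, best_prob = (a, v), prob
    return best, best_prob


# Each expert holds a different attribute partition (toy data).
part_a = [({"outlook": "sunny"}, "play"),
          ({"outlook": "sunny"}, "play"),
          ({"outlook": "rain"},  "stay")]
part_b = [({"windy": "no"}, "play"),
          ({"windy": "no"}, "stay")]

bb = Blackboard()
bb.post("expert_A", locally_best_term(part_a, "play"))
bb.post("expert_B", locally_best_term(part_b, "play"))
winner = max(bb.posts.values(), key=lambda r: r[1])  # globally best term
# winner == (("outlook", "sunny"), 1.0)
```

In the real system the experts run on separate machines and communicate through the blackboard over the network; here the posting is sequential purely for clarity.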
Abstract:
Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist for scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation, most of the work has concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the ‘divide and conquer’ approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternatives follow the ‘separate and conquer’ approach to inducing rules, but very little work has been done to make the ‘separate and conquer’ approach scale better on large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the ‘separate and conquer’ approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.
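The ‘separate and conquer’ strategy contrasted with TDIDT above can be sketched as a minimal sequential Prism-style learner: grow one rule term by term until it covers only the target class, remove the instances it covers, and repeat. This is a toy illustration under the standard description of Prism, not the parallel J-PMCRI implementation the paper evaluates; the data and names are assumptions:

```python
def prism_rules_for_class(instances, target_class):
    """Toy Prism-style 'separate and conquer' learner.
    instances: list of (attribute-dict, class-label) pairs.
    Returns a list of rules, each a dict of attr == value terms."""
    remaining = list(instances)
    rules = []
    while any(label == target_class for _, label in remaining):
        rule = {}                       # conjunction of attr == value terms
        covered = list(remaining)
        # Specialise until the rule covers only target-class instances.
        while any(label != target_class for _, label in covered):
            best, best_prob = None, -1.0
            for attrs, _ in covered:    # candidate attribute-value terms
                for a, v in attrs.items():
                    if a in rule:
                        continue
                    subset = [(x, y) for x, y in covered if x[a] == v]
                    prob = sum(y == target_class
                               for _, y in subset) / len(subset)
                    if prob > best_prob:
                        best, best_prob = (a, v), prob
            if best is None:            # no attribute left to specialise on
                break
            rule[best[0]] = best[1]
            covered = [(x, y) for x, y in covered if x[best[0]] == best[1]]
        rules.append(rule)
        # "Separate": discard the instances this rule covers.
        remaining = [(x, y) for x, y in remaining
                     if not all(x.get(a) == v for a, v in rule.items())]
    return rules


# Toy weather data (illustrative only).
data = [
    ({"outlook": "sunny", "windy": "no"},  "play"),
    ({"outlook": "sunny", "windy": "yes"}, "play"),
    ({"outlook": "rain",  "windy": "no"},  "stay"),
    ({"outlook": "rain",  "windy": "yes"}, "stay"),
]
rules = prism_rules_for_class(data, "play")   # [{'outlook': 'sunny'}]
```

Note how the induced rule is modular: it stands on its own rather than forming a branch of a tree, which is exactly the property that makes this family of algorithms an alternative to TDIDT.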
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable with decision trees and in some cases even outperform them. Both kinds of algorithms tend to overfit on large and noisy datasets, and this has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees many pre-pruning and post-pruning methods exist; however, for Prism algorithms only one pre-pruning method, J-pruning, has been developed. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms, develops a new pruning method, Jmax-pruning, discusses it in theoretical terms and evaluates it empirically.
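J-pruning is based on the J-measure of Smyth and Goodman, which scores a rule "IF Y=y THEN X=x" by how much information its body carries about the class. A minimal sketch of that metric follows; the function name and the example probabilities are illustrative assumptions, not taken from the paper:

```python
from math import log2

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of a rule 'IF Y=y THEN X=x' (Smyth & Goodman):
    p(y) times the relative entropy (KL divergence) between the class
    distribution given the rule body and the prior class distribution.
    J-pruning-style methods stop specialising a rule when adding
    terms no longer increases this score."""
    def term(p, q):
        # Contribution p * log2(p / q), with the 0 * log 0 = 0 convention.
        return 0.0 if p == 0 else p * log2(p / q)
    return p_y * (term(p_x_given_y, p_x) +
                  term(1 - p_x_given_y, 1 - p_x))


# A rule covering 40% of the data whose predictions are 90% accurate,
# against a 50% class prior (numbers are purely illustrative):
j = j_measure(0.4, 0.5, 0.9)   # ≈ 0.212 bits
```

As expected, a perfectly accurate rule with the same coverage scores higher (`j_measure(0.4, 0.5, 1.0)` gives 0.4 bits), which is the kind of comparison a J-based pruning decision rests on.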