982 results for Home rule
Abstract:
This paper examines the relationship between embodied individuals and the home they inhabit. Although there has been some work both on embodied practices in the home and on the material nature of the home itself, this has not been integrated with the majority of research on home, which has focused on meaning. It is argued that a unifying framework capable of incorporating both the use and meaning elements of home is lacking. A way of incorporating these elements through adoption of the concept of affordances is put forward; however, the affordance approach needs further development to achieve this. The paper develops it first by incorporating the concept of intentionality of actions and then through the concept of well-being. Debates about housing for people with a physical disability, and the practical help provided to this group, are used to illustrate how the approach could work.
Abstract:
Using NCANDS data on US child maltreatment reports for 2009, logistic regression, probit analysis, discriminant analysis and an artificial neural network are used to determine the factors that explain the decision to place a child in out-of-home care. In addition to developing a new model for 2009, a previous study using 2005 data is replicated. While there are many small differences, the four estimation techniques give broadly the same results, demonstrating their robustness. Similarly, apart from age and sexual abuse, the 2005 and 2009 results are roughly similar. For 2009, child characteristics (particularly child emotional problems) are more important than the nature of the abuse and the situation of the household, while caregiver characteristics are the least important. All these models have low explanatory power.
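A minimal sketch of the kind of cross-technique comparison this abstract describes, assuming synthetic stand-in data: the NCANDS variables are not reproduced here, and the feature names and effect sizes below are hypothetical, not the paper's estimates.

```python
# Hedged sketch: fit logit and probit models to the same synthetic data and
# compare coefficients. Features and coefficients are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 18, n),   # child age in years (hypothetical)
    rng.integers(0, 2, n),    # child emotional problems flag (hypothetical)
    rng.integers(0, 2, n),    # sexual abuse flag (hypothetical)
]).astype(float)

# Synthetic out-of-home placement decision driven by the same covariates.
lin = -2.0 + 0.05 * X[:, 0] + 1.2 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

Xc = sm.add_constant(X)
logit_fit = sm.Logit(y, Xc).fit(disp=False)
probit_fit = sm.Probit(y, Xc).fit(disp=False)

# Logit and probit coefficients differ in scale but usually agree in sign and
# relative importance; agreement of this kind across estimators is the sort
# of robustness check the paper reports.
print(logit_fit.params)
print(probit_fit.params)
```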
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework for parallelising algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
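For readers unfamiliar with the Prism family, the following is a minimal sketch of basic Prism-style rule induction on categorical data (after Cendrowska's original Prism; PrismTCS's target-class ordering and pre-pruning are omitted, and the function name is ours, not from the paper).

```python
# Minimal sketch of Prism-style separate-and-conquer rule induction.
def induce_rules_for_class(rows, attrs, target, cls):
    """rows: list of dicts including the target attribute.
    Returns rules for `cls`, each a list of (attribute, value) conditions."""
    rules, remaining = [], list(rows)
    while any(r[target] == cls for r in remaining):
        subset, rule, unused = remaining, [], set(attrs)
        # Grow one rule: greedily add the attribute-value condition with the
        # highest probability of the target class on the current subset.
        while unused and any(r[target] != cls for r in subset):
            best = max(
                ((a, v) for a in unused for v in {r[a] for r in subset}),
                key=lambda av: sum(1 for r in subset
                                   if r[av[0]] == av[1] and r[target] == cls)
                               / sum(1 for r in subset if r[av[0]] == av[1]),
            )
            rule.append(best)
            unused.discard(best[0])
            subset = [r for r in subset if r[best[0]] == best[1]]
        rules.append(rule)
        # Separate and conquer: drop instances covered by the finished rule.
        remaining = [r for r in remaining
                     if not all(r[a] == v for a, v in rule)]
    return rules
```

Running this once per class value yields a modular rule set; scaling work like that described in the abstract targets the repeated counting passes in the inner loop, which dominate on large datasets.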
Abstract:
In a world where data is captured on a large scale, the major challenge for data mining algorithms is to be able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide and conquer approach, also known as the top down induction of decision trees; the other is called the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach; however, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harvest additional computing resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
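As an illustration only (the paper's own framework distributes work across a network of machines, and its details are not reproduced here), one simple single-machine decomposition exploits the fact that basic Prism induces rules for each class independently, reusing `induce_rules_for_class` from the sketch above.

```python
# Hedged sketch: task-parallel Prism, one worker process per class value.
# This is a simple stand-in, not the paper's distributed framework.
from functools import partial
from multiprocessing import Pool

def induce_all_rules(rows, attrs, target, n_workers=4):
    classes = sorted({r[target] for r in rows})
    worker = partial(induce_rules_for_class, rows, attrs, target)
    with Pool(n_workers) as pool:
        rule_sets = pool.map(worker, classes)  # one induction task per class
    return dict(zip(classes, rule_sets))
```

Per-class task parallelism caps the speed-up at the number of classes; frameworks like the one the abstract describes instead partition the training data itself across machines, which scales with the size of the network.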
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than the rules induced by TDIDT. However, along with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, most parallel approaches to inducing classification rules are based on it, even though there are strongly competitive alternative algorithms. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
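Once rule sets like these are induced, in parallel or not, classifying previously unseen data is a first-match lookup; a minimal sketch, assuming the rule-set format from the sketches above:

```python
# Hedged sketch: classify an unseen instance with modular rules by returning
# the class of the first rule whose conditions all hold; None if none fire.
def classify(rule_sets, x):
    """rule_sets: {class_value: [rule, ...]}, each rule a list of
    (attribute, value) conditions, as produced by induce_all_rules."""
    for cls, rules in rule_sets.items():
        for rule in rules:
            if all(x.get(a) == v for a, v in rule):
                return cls
    return None   # no rule fires: leave the instance unclassified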
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed, such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT in order to find analogous approaches that could reduce those of Prism.
Abstract:
The rapid increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.
Abstract:
Advances in hardware and software over the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example if the data stream is infinite, fast-changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drifts. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of Rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
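A hedged sketch of the overall eRules-style workflow described above, assuming an external batch rule learner: the class name, thresholds and buffer policy are our illustrative choices, not the paper's.

```python
# Illustrative sketch: keep an evolving rule set, abstain when no rule fires,
# buffer uncovered instances, retire decayed rules, and periodically learn
# new rules from the buffer. Parameters are illustrative, not eRules' own.
from collections import deque

class StreamRuleClassifier:
    def __init__(self, learn, min_accuracy=0.8, buffer_size=500):
        self.learn = learn        # batch learner: instances -> (conds, cls) pairs
        self.rules = []           # entries: [conds, cls, hits, misses]
        self.buffer = deque(maxlen=buffer_size)
        self.min_accuracy = min_accuracy

    def predict(self, x):
        for conds, cls, _, _ in self.rules:
            if all(x.get(a) == v for a, v in conds):
                return cls
        return None               # abstain rather than force a guess

    def update(self, x, true_cls):
        fired = False
        for rule in self.rules:
            conds, cls = rule[0], rule[1]
            if all(x.get(a) == v for a, v in conds):
                fired = True
                rule[2] += cls == true_cls
                rule[3] += cls != true_cls
        if not fired:
            self.buffer.append((x, true_cls))
        # Remove rules whose observed accuracy has decayed (concept drift).
        self.rules = [r for r in self.rules
                      if r[2] + r[3] == 0
                      or r[2] / (r[2] + r[3]) >= self.min_accuracy]
        # Learn new rules once enough uncovered instances have accumulated.
        if len(self.buffer) == self.buffer.maxlen:
            new = self.learn(list(self.buffer))
            self.rules.extend([conds, cls, 0, 0] for conds, cls in new)
            self.buffer.clear()
```

On each arriving labelled instance, `predict` is called before `update`, so abstentions can be counted as unclassified rather than as errors.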
Abstract:
With the fast development of the Internet, wireless communications and semiconductor devices, home networking has received significant attention. Consumer products can collect and transmit various types of data in the home environment. Typical consumer sensors are often equipped with tiny, irreplaceable batteries, and it is therefore of the utmost importance to design energy-efficient algorithms to prolong the home network lifetime and reduce the number of devices going to landfill. Sink mobility is an important technique for improving home network performance, including energy consumption, lifetime and end-to-end delay; it can also largely mitigate the hot spots near the sink node. The selection of an optimal moving trajectory for sink node(s) is an NP-hard problem, and jointly optimizing routing algorithms with the mobile sink moving strategy is a significant and challenging research issue. The influence of multiple static sink nodes on energy consumption in networks of different scales is first studied, and an Energy-efficient Multi-sink Clustering Algorithm (EMCA) is proposed and tested. Then, the influence of mobile sink velocity, position and number on network performance is studied and a Mobile-sink based Energy-efficient Clustering Algorithm (MECA) is proposed. Simulation results validate the performance of the two proposed algorithms, which can be deployed in a consumer home network environment.
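EMCA and MECA are the paper's own algorithms and their details are not reproduced here; the sketch below only illustrates the generic building blocks such clustering protocols share, with an illustrative free-space radio cost model.

```python
# Illustrative building blocks only, not EMCA or MECA themselves: pick the
# highest-energy node in each cluster as its head, and cost the heads'
# transmissions to the current (possibly moving) sink position.
import math

def pick_cluster_heads(nodes, clusters):
    """nodes: {node_id: (x, y, residual_energy)};
    clusters: {cluster_id: [node_id, ...]} -> {cluster_id: head node_id}."""
    return {cid: max(members, key=lambda n: nodes[n][2])
            for cid, members in clusters.items()}

def heads_to_sink_cost(nodes, heads, sink_xy):
    # Free-space model: transmission energy grows with the square of the
    # distance, so moving the sink toward a hot spot directly lowers the
    # energy drain on nearby cluster heads.
    return sum(math.dist(nodes[h][:2], sink_xy) ** 2 for h in heads.values())

# A mobile-sink strategy can then compare candidate sink positions:
def best_sink_position(nodes, heads, candidates):
    return min(candidates, key=lambda p: heads_to_sink_cost(nodes, heads, p))
```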
Abstract:
Purpose – The aim of this paper is to present a conceptual valuation framework that allows telecare service stakeholders to assess telecare devices in the home in terms of their social, psychological and practical effects. The framework enables telecare service operators to engage more effectively with the social and psychological issues resulting from telecare technology deployment in the home and, as a result, to design and develop appropriate responses.
Design/methodology/approach – The paper provides a contextual background for the need for sociologically pitched tools that engage with the social and cultural feelings of telecare service users, before presenting the valuation framework and how it could be used.
Findings – A conceptual valuation framework is presented for potential development and use.
Research limitations/implications – The valuation framework has yet to be extensively tested or verified.
Practical implications – The valuation framework needs to be tested and deployed by a telecare service operator, but the core messages of the paper are valid and interesting for the readership.
Social implications – In addressing the social and cultural perspectives of telecare service stakeholders, the paper makes a link between the technologies in the home, the feelings and orientations of service users (e.g. residents, emergency services, wardens, etc.) and the telecare service operator.
Originality/value – The paper is an original contribution to the field as it details how the sociological orientations of telecare technology service users should be valued and addressed by service operators. It has value both through the conceptual arguments made and through the valuation framework presented.
Abstract:
This article explores the problematic nature of the label “home ownership” through a case study of the English model of shared ownership, one of the methods used by the UK government to make home ownership affordable. Adopting a legal and socio-legal analysis, the article considers whether shared ownership is capable of fulfilling the aspirations households have for home ownership. To do so, the article considers the financial and nonfinancial meanings attached to home ownership and suggests that the core expectation lies in ownership of the value. The article demonstrates that the rights and responsibilities of shared owners are different in many respects from those of traditional home owners, including their rights as regards ownership of the value. By examining home ownership through the lens of shared ownership the article draws out lessons of broader significance to housing studies. In particular, it is argued that shared ownership shows the limitations of two dichotomies commonly used in housing discourse: that between private and social housing; and the classification of tenure between owner-occupiers and renters. The article concludes that a much more nuanced way of referring to home ownership is required, and that there is a need for a change of expectations amongst consumers as to what sharing ownership means.