901 results for Rule enforcement
Abstract:
Forest managers in developing countries enforce extraction restrictions to limit forest degradation. In response, villagers may displace some of their extraction to other forests, which generates “leakage” of degradation. Managers also implement poverty alleviation projects to compensate for lost resource access or to induce conservation. We develop a model of spatial joint production of bees and fuelwood that is based on forest-compatible projects such as beekeeping in Thailand, Tanzania, and Mexico. We demonstrate that managers can better determine the amount and pattern of degradation by choosing the location of both enforcement and the forest-based activity.
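For intuition, here is a minimal sketch of the kind of trade-off such a model captures; it is not the paper's model, and every functional form and number below is an illustrative assumption. A villager has a fixed extraction-effort budget to split between two forest sites, enforcement at one site raises the expected fine there, and a beekeeping project rewards leaving that site's forest standing.

```python
# Illustrative sketch only (not the paper's model): a villager splits one unit
# of extraction effort between sites A and B; enforcement at A lowers the net
# return there, and bees at A reward the standing forest stock at A.

def village_payoff(extract_a, extract_b, enforce_at_a, bees_at_a):
    fine_prob_a = 0.6 if enforce_at_a else 0.1        # assumed detection probabilities
    fuelwood = extract_a * (1 - fine_prob_a) + extract_b * 0.8  # B is farther, lower return
    stock_a = max(0.0, 1.0 - extract_a)               # forest stock left standing at A
    honey = 1.5 * stock_a if bees_at_a else 0.0       # bees reward conservation at A
    return fuelwood + honey

def best_allocation(enforce_at_a, bees_at_a, steps=21):
    # brute-force the villager's split of one unit of effort between A and B
    best = None
    for i in range(steps):
        a = i / (steps - 1)
        b = 1.0 - a
        pay = village_payoff(a, b, enforce_at_a, bees_at_a)
        if best is None or pay > best[0]:
            best = (pay, a, b)
    return best

# Enforcement at A displaces effort to B ("leakage"); locating the bee project
# at A shifts effort away from A even without enforcement, so the manager can
# steer where degradation falls by choosing both locations.
for enforce, bees in [(False, False), (True, False), (False, True)]:
    pay, a, b = best_allocation(enforce, bees)
    print(f"enforce_at_A={enforce}, bees_at_A={bees}: effort at A={a:.2f}, at B={b:.2f}")
```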
Abstract:
This paper relates the key findings of the optimal economic enforcement literature to practical issues of enforcing forest and wildlife management access restrictions in developing countries. Our experiences, particularly from Tanzania and eastern India, provide detail of the key pragmatic issues facing those responsible for protecting natural resources. We identify large gaps in the theoretical literature that limit its ability to inform practical management, including issues of limited funding and cost recovery, multiple tiers of enforcement and the incentives facing enforcement officers, and conflict between protected area managers and rural people's needs.
The impact of buffer zone size and management on illegal extraction, park protection and enforcement
Abstract:
Many protected areas or parks in developing countries have buffer zones at their boundaries to achieve the dual goals of protecting park resources and providing resource benefits to neighbouring people. Despite the prevalence of these zoning policies, few behavioural models of people’s buffer zone use inform the sizing and management of those zones. This paper uses a spatially explicit resource extraction model to examine the impact of buffer zone size and management on extraction by local people, both legal and illegal, and the impact of that extraction on forest quality in the park’s core and buffer zone. The results demonstrate trade-offs between the level of enforcement, the size of a buffer zone, and the amount of illegal extraction in the park; and describe implications for “enrichment” of buffer zones and evaluating patterns of forest degradation.
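As a rough illustration of the spatial trade-off the abstract describes (not the paper's model; all numbers are assumptions), consider a 1-D transect from the village into the park, with legal extraction inside a buffer of a given width and an expected fine beyond it:

```python
# Toy sketch: net benefit of extracting at a given distance from the village,
# with legal extraction inside the buffer and an expected fine in the core.
# Prices, travel costs and fines are illustrative assumptions.

def net_benefit(distance, buffer_width, enforcement, price=1.0, travel_cost=0.05):
    value = price - travel_cost * distance       # resource value net of walking cost
    if distance > buffer_width:                  # inside the core: risk a fine
        value -= enforcement * 0.8               # expected fine, assumed
    return value

def chosen_sites(buffer_width, enforcement, park_length=20):
    # the villager extracts at every distance where the net benefit is positive
    return [d for d in range(park_length) if net_benefit(d, buffer_width, enforcement) > 0]

# Larger buffers legalise more extraction; stronger enforcement shortens how far
# illegal extraction penetrates the core -- the trade-off the paper examines.
for bw, enf in [(5, 0.2), (5, 0.8), (10, 0.2)]:
    sites = chosen_sites(bw, enf)
    illegal = [d for d in sites if d > bw]
    print(f"buffer={bw}, enforcement={enf}: extraction out to {max(sites)}, "
          f"{len(illegal)} illegal sites")
```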
Abstract:
Rensch’s rule, which states that the magnitude of sexual size dimorphism tends to increase with increasing body size, has evolved independently in three lineages of large herbivorous mammals: bovids (antelopes), cervids (deer), and macropodids (kangaroos). This pattern can be explained by a model that combines allometry, life-history theory, and energetics. The key features are that female group size increases with increasing body size and that males have evolved under sexual selection to grow large enough to control these groups of females. The model predicts relationships among body size and female group size, male and female age at first breeding, death and growth rates, and energy allocation of males to produce body mass and weapons. Model predictions are well supported by data for these megaherbivores. The model suggests hypotheses for why some other sexually dimorphic taxa, such as primates and pinnipeds (seals and sea lions), do or do not conform to Rensch’s rule.
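For readers unfamiliar with the rule, it is commonly stated as an allometric relationship between male and female size across related species; the formulation below is the standard textbook version, not necessarily the notation used in the paper.

```latex
% Standard statement of Rensch's rule (in taxa where males are the larger sex):
% on a log-log scale, male size is hyperallometric to female size.
\[
  \log M_{\text{male}} = \alpha + \beta \,\log M_{\text{female}},
  \qquad \beta > 1
  \;\Longrightarrow\;
  \mathrm{SSD} = \frac{M_{\text{male}}}{M_{\text{female}}}
  \text{ increases with body size.}
\]
```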
Abstract:
Where joint forest management has been introduced into Tanzania, ‘volunteer’ patrollers take responsibility for enforcing restrictions over the harvesting of forest resources, often receiving as an incentive a share of the collected fine revenue. Using an optimal enforcement model, we explore how that share, and whether villagers have alternative sources of forest products, determine the effort patrollers put into enforcement and whether they choose to take a bribe rather than honestly report the illegal collection of forest resources. Without funds for paying and monitoring patrollers, policy makers face trade-offs over illegal extraction, forest protection and revenue generation through fine collection.
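A stylised sketch of the patroller's problem described here (not the authors' model; the detection technology, fine, bribe and cost values are all assumptions) shows how a low share of fine revenue can undermine both effort and honest reporting:

```python
# Toy model: effort is costly, detection rises with effort, and on detecting an
# offence the patroller either reports (keeping a share of the fine) or accepts
# a bribe, whichever pays more. All parameter values are assumptions.

def patroller_payoff(effort, fine_share, fine=10.0, bribe=3.0, effort_cost=2.0):
    detect_prob = min(1.0, 0.5 * effort)        # assumed detection technology
    report_value = fine_share * fine            # patroller's cut if reporting honestly
    per_detection = max(report_value, bribe)    # takes the better of report vs bribe
    return detect_prob * per_detection - effort_cost * effort

def best_effort(fine_share, grid=101):
    # brute-force the effort choice on [0, 2]
    efforts = [i / (grid - 1) * 2.0 for i in range(grid)]
    return max(efforts, key=lambda e: patroller_payoff(e, fine_share))

# A small fine share yields no patrol effort and makes bribe-taking attractive;
# a larger share sustains effort and honest reporting.
for share in (0.1, 0.3, 0.6):
    e = best_effort(share)
    honest = share * 10.0 >= 3.0   # reports honestly when the fine share beats the bribe
    print(f"fine share {share:.1f}: effort {e:.2f}, reports honestly: {honest}")
```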
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies that can extract knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well to large datasets. One such alternative to TDIDT is the PrismTCS algorithm, which performs particularly well on noisy data. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework for parallelising algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
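For context, the separate-and-conquer loop at the heart of the Prism family (after Cendrowska) can be sketched as follows; this is a simplified illustration that assumes categorical attributes and consistent data, and it omits PrismTCS's target-class-first ordering and any pruning.

```python
# Simplified Prism sketch: for a target class, repeatedly grow a rule by adding
# the attribute-value term with the highest target-class probability among the
# instances still covered, then remove the covered instances and repeat.
# Assumes categorical attributes and consistent data (no conflicting duplicates).

def prism(instances, labels, target_class):
    rules, remaining = [], [(x, y) for x, y in zip(instances, labels)]
    while any(y == target_class for _, y in remaining):
        covered, rule = remaining, {}
        # specialise the rule until it only covers the target class
        while any(y != target_class for _, y in covered):
            best = None
            for x, _ in covered:
                for attr, val in x.items():
                    if attr in rule:
                        continue
                    subset = [(xx, yy) for xx, yy in covered if xx.get(attr) == val]
                    prob = sum(yy == target_class for _, yy in subset) / len(subset)
                    if best is None or prob > best[0]:
                        best = (prob, attr, val, subset)
            _, attr, val, covered = best
            rule[attr] = val
        rules.append(rule)
        # "separate": drop the instances the new rule covers
        remaining = [(x, y) for x, y in remaining
                     if not all(x.get(a) == v for a, v in rule.items())]
    return rules

data = [{"outlook": "sunny", "windy": "no"}, {"outlook": "rain", "windy": "yes"},
        {"outlook": "sunny", "windy": "yes"}, {"outlook": "rain", "windy": "no"}]
print(prism(data, ["play", "stay", "play", "stay"], "play"))  # -> [{'outlook': 'sunny'}]
```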
Abstract:
In a world where data is captured on a large scale, the major challenge for data mining algorithms is to scale up to large datasets. There are two main approaches to inducing classification rules: the divide and conquer approach, also known as the top down induction of decision trees, and the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach; however, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms harvest additional computing resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
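One simple way to picture this kind of parallelisation (a toy sketch, not the authors' framework) is to give each worker a vertical partition of the attributes, let it score its own candidate attribute-value pairs for the next rule term, and have a coordinator pick the global best:

```python
# Toy data-parallel step for a Prism-style learner: workers hold attribute
# columns, score candidate terms locally, and the coordinator combines them.

from multiprocessing import Pool

def best_local_term(args):
    attr, column, labels, target = args
    best = None
    for val in set(column):
        idx = [i for i, v in enumerate(column) if v == val]
        prob = sum(labels[i] == target for i in idx) / len(idx)
        if best is None or prob > best[0]:
            best = (prob, attr, val)
    return best

if __name__ == "__main__":
    # toy vertical partitions: one attribute column per worker
    columns = {"outlook": ["sunny", "rain", "sunny", "rain"],
               "windy":   ["no", "yes", "yes", "no"]}
    labels, target = ["play", "stay", "play", "stay"], "play"
    tasks = [(a, col, labels, target) for a, col in columns.items()]
    with Pool(2) as pool:
        local_bests = pool.map(best_local_term, tasks)
    print(max(local_bests))   # globally best attribute-value pair for the next rule term
```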
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules that are qualitatively better than the rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT, in order to find analogous approaches that could reduce the computational requirements of Prism.
Abstract:
The fast increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.
Abstract:
Advances in hardware and software over the past decade make it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example when the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision tree based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
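To make the abstaining, self-adapting behaviour concrete, here is a highly simplified sketch in the spirit of eRules (not the authors' implementation; the rule-induction step is stubbed out and all thresholds and window sizes are assumptions):

```python
# Simplified adaptive rule set: rules vote only when they fire, the classifier
# abstains otherwise, and rules whose recent accuracy degrades are retired so
# the rule set can adapt to concept drift. Thresholds are assumptions.

class AdaptiveRuleSet:
    def __init__(self, min_accuracy=0.6):
        self.rules = []            # each rule: (conditions dict, predicted class, recent hits)
        self.min_accuracy = min_accuracy

    def predict(self, x):
        for conds, cls, _ in self.rules:
            if all(x.get(a) == v for a, v in conds.items()):
                return cls
        return None                # abstain rather than force a possibly wrong class

    def update(self, x, true_class):
        for conds, cls, hits in self.rules:
            if all(x.get(a) == v for a, v in conds.items()):
                hits.append(cls == true_class)
                del hits[:-50]     # keep a sliding window of recent outcomes
        # retire rules whose windowed accuracy has degraded (concept drift)
        self.rules = [r for r in self.rules
                      if len(r[2]) < 10 or sum(r[2]) / len(r[2]) >= self.min_accuracy]

    def add_rule(self, conditions, predicted_class):
        # in eRules new rules are induced (Prism-style) from recently buffered
        # unclassified instances; here they are simply added by hand
        self.rules.append((conditions, predicted_class, []))

model = AdaptiveRuleSet()
model.add_rule({"outlook": "sunny"}, "play")
print(model.predict({"outlook": "sunny"}))   # -> 'play'
print(model.predict({"outlook": "rain"}))    # -> None (left unclassified)
```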