96 results for allocation rules
Abstract:
In recognition of their competitive vulnerability, a set of special rules has been devised for managing sectors such as steel and cement within the EU ETS. These rules seek to set sector-specific performance benchmarks and reward top performers. However, the steel sector as a whole will receive the vast majority of its allowances for free in Phase III. Perceptions of competitive vulnerability have been largely based on inherently hypothetical analyses which rely heavily on counterfactual scenarios and abatement-cost estimates often provided by firms themselves. This paper diverges from these approaches by providing a qualitative assessment of the two key reasons underpinning the competitive vulnerability argument of the EU steel companies, based on interviews and case studies involving the three largest producers of steel within the EU – ArcelorMittal, Corus, and ThyssenKrupp. We find that these arguments provide only partial and weak justifications for competitive loss and discriminatory treatment in the EU ETS. This strategy is difficult for governments to counter due to information asymmetry, and it appears to have proved very successful insofar as it has helped the industry to achieve free allocation in Phases I–III of the EU ETS by playing up the risk of carbon leakage.
Abstract:
The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture and its implementation for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We present initial results of a prototype implementation based on the Prism algorithm.
Abstract:
Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist for scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation, most of the work has concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the ‘divide and conquer’ approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternative algorithms follow the ‘separate and conquer’ approach to inducing rules, but very little work has been done to make the ‘separate and conquer’ approach scale better on large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the ‘separate and conquer’ approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable with decision trees and in some cases even outperform them. Both kinds of algorithms tend to overfit on large and noisy datasets, and this has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees many pre-pruning and post-pruning methods exist; however, for Prism algorithms only one pre-pruning method has been developed, J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms, develops a new pruning method, Jmax-pruning, discusses it in theoretical terms, and evaluates it empirically.
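The separate-and-conquer induction at the heart of the Prism family can be sketched as follows. This is a simplified, hypothetical rendering (the `prism` function, the attribute names, and the toy dataset are illustrative, and the sketch omits pruning facilities such as J-pruning):

```python
def prism(instances, attributes, target_class):
    """Simplified sketch of Prism-style separate-and-conquer induction.

    instances: list of dicts mapping each attribute name to a value,
    plus a 'class' key. Returns a list of rules; each rule is a list
    of (attribute, value) terms that jointly predict target_class.
    """
    rules = []
    remaining = list(instances)
    # 'Conquer' one rule at a time while uncovered target instances remain.
    while any(x['class'] == target_class for x in remaining):
        covered, rule = remaining, []
        # Specialise: greedily add the term with the highest probability
        # of the target class until the rule covers only target instances.
        while (any(x['class'] != target_class for x in covered)
               and len(rule) < len(attributes)):
            used = {a for a, _ in rule}
            best = None
            for a in (a for a in attributes if a not in used):
                for v in {x[a] for x in covered}:
                    subset = [x for x in covered if x[a] == v]
                    prob = (sum(x['class'] == target_class for x in subset)
                            / len(subset))
                    if best is None or prob > best[2]:
                        best = (a, v, prob, subset)
            rule.append((best[0], best[1]))
            covered = best[3]
        rules.append(rule)
        # 'Separate': remove the instances the new rule covers.
        remaining = [x for x in remaining
                     if not all(x[a] == v for a, v in rule)]
    return rules

# Toy weather-style dataset (illustrative only)
data = [
    {'outlook': 'sunny', 'windy': 'no',  'class': 'play'},
    {'outlook': 'sunny', 'windy': 'yes', 'class': 'play'},
    {'outlook': 'rain',  'windy': 'yes', 'class': 'stay'},
    {'outlook': 'rain',  'windy': 'no',  'class': 'play'},
]
rules = prism(data, ['outlook', 'windy'], 'play')
```

Because each rule is induced and removed independently, the resulting rule set is modular: unlike a decision tree, the rules need not share a common root attribute.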
Abstract:
The Prism family of algorithms induces modular classification rules, in contrast to the Top Down Induction of Decision Trees (TDIDT) approach, which induces classification rules in the intermediate form of a tree structure. Both approaches achieve a comparable classification accuracy; however, in some cases Prism outperforms TDIDT. For both approaches, pre-pruning facilities have been developed to prevent the induced classifiers from overfitting on noisy datasets, by cutting rule terms or whole rules or by truncating decision trees according to certain metrics. Many pre-pruning mechanisms have been developed for the TDIDT approach, but for the Prism family the only existing pre-pruning facility is J-pruning. J-pruning works not only on Prism algorithms but also on TDIDT. Although it has been shown that J-pruning produces good results, this work points out that J-pruning does not use its full potential. The original J-pruning facility is examined, and the use of a new pre-pruning facility, called Jmax-pruning, is proposed and evaluated empirically. A possible pre-pruning facility for TDIDT based on Jmax-pruning is also discussed.
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale, and thus databases are growing at a fast pace. This leads to the use of parallel computing technologies to cope with large amounts of data. In the area of classification rule induction, parallelisation has focused on the divide and conquer approach, also known as the Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate and conquer, which has only recently become a focus of parallelisation efforts. This work introduces and evaluates empirically a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms. All members of the Prism family follow the separate and conquer approach.
Abstract:
This paper examines the implications of using marketing margins in applied commodity price analysis. The marketing-margin concept has a long and distinguished history, but it has caused considerable controversy, particularly in the context of analyzing the distribution of research gains in multi-stage production systems. We derive optimal tax schemes for raising revenues to finance research and promotion in a downstream market, derive the rules for efficient allocation of the funds, and compare the rules with and without the marketing-margin assumption. Applying the methodology to quarterly time series on the Australian beef-cattle sector, and with several caveats, we conclude that, during the period 1978:2–1988:4, the Australian Meat and Livestock Corporation optimally allocated research resources.
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using the Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
Abstract:
There are several scoring rules that one can choose from in order to score probabilistic forecasting models or estimate model parameters. Whilst it is generally agreed that proper scoring rules are preferable, there is no clear criterion for preferring one proper scoring rule above another. This manuscript compares and contrasts some commonly used proper scoring rules and provides guidance on scoring rule selection. In particular, it is shown that the logarithmic scoring rule prefers erring with more uncertainty, the spherical scoring rule prefers erring with lower uncertainty, whereas the other scoring rules are indifferent to either option.
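The contrast among proper scoring rules described above can be made concrete with a small sketch (a minimal illustration; the function names and the binary-forecast example are my own, not drawn from the paper):

```python
import math

def log_score(p, outcome):
    # Logarithmic score: log-probability assigned to the realised outcome.
    return math.log(p[outcome])

def spherical_score(p, outcome):
    # Spherical score: probability of the outcome, normalised by the
    # Euclidean norm of the whole forecast vector.
    return p[outcome] / math.sqrt(sum(q * q for q in p))

def brier_score(p, outcome):
    # Negative Brier (quadratic) score, oriented so that higher is better.
    return -sum((q - (1.0 if i == outcome else 0.0)) ** 2
                for i, q in enumerate(p))

# Two forecasts that both favour the wrong outcome of a binary event
# (the realised outcome is index 0):
confident = [0.1, 0.9]   # errs with low uncertainty
hedged = [0.3, 0.7]      # errs with more uncertainty

for score in (log_score, spherical_score, brier_score):
    print(score.__name__, score(confident, 0), score(hedged, 0))
```

All three rules are proper, yet they penalise the two wrong forecasts to different degrees; this kind of behavioural difference is what guides the choice among proper scoring rules.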
Abstract:
Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information using various combination rules. It makes use of both low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum. The support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. With two newly defined features as variables, i.e. the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are developed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results of both visual inspection and quantitative evaluation indicate the superiority of the proposed method over its counterparts in fusing infrared polarization and intensity images.
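Trapezoidal membership functions of the kind mentioned above take the usual piecewise-linear form; a generic sketch follows (the parameter names and breakpoints are illustrative, not the paper's tuned values):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], rises linearly on
    [a, b], holds at 1 on [b, c], and falls linearly on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Degree of membership of a normalised difference feature in a
# hypothetical 'large difference' fuzzy set:
mu = trapezoid(0.5, 0.0, 0.2, 0.8, 1.0)   # inside the plateau -> 1.0
```

In a fuzzy combination rule, such memberships would weight the competing sub-combination methods according to the computed difference features.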
Abstract:
Native-like use of preterit and imperfect morphology in all contexts by English learners of L2 Spanish is the exception rather than the rule, even for successful learners. Nevertheless, recent research has demonstrated that advanced English learners of L2 Spanish attain a native-like morphosyntactic competence for the preterit/imperfect contrast, as evidenced by their native-like knowledge of associated semantic entailments (Goodin-Mayeda and Rothman 2007, Montrul and Slabakova 2003, Slabakova and Montrul 2003, Rothman and Iverson 2007). In addition to an L2 disassociation of morphology and syntax (e.g., Bruhn de Garavito 2003, Lardiere 1998, 2000, 2005, Prévost and White 1999, 2000, Schwartz 2003), I hypothesize that a system of learned pedagogical rules contributes to target-deviant L2 performance in this domain through the most advanced stages of L2 acquisition via its competition with the generative system. I call this hypothesis the Competing Systems Hypothesis. To test its predictions, I compare and contrast the use of the preterit and imperfect in two production tasks by native speakers and by tutored (classroom) and naturalistic learners of L2 Spanish.