974 results for Linear decision rules
Abstract:
Associative memory networks such as Radial Basis Function, Neurofuzzy and Fuzzy Logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD), in that as the input dimension increases the parameterization, computational cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, which overcomes the COD and generates locally linear models directly amenable to linear control and estimation algorithms. Training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
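As a rough illustration of the general idea (not the authors' algorithm), the sketch below partitions a two-dimensional input space with a Delaunay triangulation over a small set of vertices and fits an independent affine model inside each simplex; the data, the vertex choice and the `fit_piecewise_linear` helper are all hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Hypothetical training data: y = sin(x1) + 0.5*x2 + noise
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(500)

# One candidate partition: Delaunay triangulation of chosen vertices
vertices = np.array([[-3, -3], [-3, 3], [3, -3], [3, 3], [0, 0], [1.5, -1.0]])
tri = Delaunay(vertices)

def fit_piecewise_linear(X, y, tri):
    """Fit an affine model y ~ w.x + b separately in each Delaunay simplex."""
    simplex = tri.find_simplex(X)      # index of the simplex containing each point
    models = {}
    for s in range(tri.nsimplex):
        mask = simplex == s
        if mask.sum() < 3:             # too few points for a stable local fit
            continue
        A = np.hstack([X[mask], np.ones((mask.sum(), 1))])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        models[s] = coef
    return models

models = fit_piecewise_linear(X, y, tri)
print(f"fitted {len(models)} local linear models")
```

In the paper's setting the vertex positions themselves would be the object of the global (VFSR) search; here they are simply fixed for illustration.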
Abstract:
This paper addresses the effects of synchronisation errors (time delay, carrier phase, and carrier frequency) on the performance of linear decorrelating detectors (LDDs). A major effect is that all LDDs require a certain degree of power control in the presence of synchronisation errors. The multi-shot sliding window algorithm (SLWA) and the hard decision method (HDM) are analysed and their power control requirements are examined. In addition, a more efficient one-shot detection scheme, called “hard-decision based coupling cancellation”, is proposed and analysed. These schemes are then compared with the isolation bit insertion (IBI) approach in terms of power control requirements.
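For orientation, here is a minimal sketch of a textbook synchronous-CDMA linear decorrelating detector (not the paper's multi-shot or coupling-cancellation schemes, and with no synchronisation errors modelled); the cross-correlation matrix, amplitudes and bits are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synchronous CDMA model: r = R A b + n, where R is the
# normalised cross-correlation matrix of the users' spreading codes.
R = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
A = np.diag([1.0, 0.7, 1.3])            # received amplitudes (imperfect power control)
b = rng.choice([-1.0, 1.0], size=3)     # transmitted bits
n = 0.1 * rng.standard_normal(3)

r = R @ A @ b + n                       # matched-filter outputs

# Linear decorrelating detector: apply R^{-1}, then take hard decisions.
z = np.linalg.solve(R, r)
b_hat = np.sign(z)
print("sent:", b, "detected:", b_hat)
```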
Abstract:
Linear models of market performance may be misspecified if the market is subdivided into distinct regimes exhibiting different behaviour. Price movements in the US Real Estate Investment Trust and UK Property Company markets are explored using a Threshold Autoregressive (TAR) model, with regimes defined by the real rate of interest. In both the US and UK markets distinctive behaviour emerges, with the TAR model offering better predictive power than a more conventional linear autoregressive model. The research points to the possibility of developing trading rules to exploit the systematically different behaviour across regimes.
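For illustration only (not the paper's specification), the sketch below fits a two-regime threshold autoregressive model of order one, with the regime chosen by a threshold on an exogenous variable standing in for the real interest rate; the data, the threshold value and the `fit_tar` helper are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400

# Hypothetical data: returns r_t and a real-interest-rate series q_t.
q = rng.normal(2.0, 1.0, size=T)        # threshold variable
r = np.zeros(T)
for t in range(1, T):
    if q[t - 1] <= 2.0:                 # low-rate regime
        r[t] = 0.02 + 0.6 * r[t - 1] + 0.1 * rng.standard_normal()
    else:                               # high-rate regime
        r[t] = -0.01 + 0.1 * r[t - 1] + 0.1 * rng.standard_normal()

def fit_tar(r, q, c):
    """Fit AR(1) models separately in the two regimes defined by q_{t-1} <= c."""
    y, y_lag, q_lag = r[1:], r[:-1], q[:-1]
    params = {}
    for name, mask in [("low", q_lag <= c), ("high", q_lag > c)]:
        X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
        params[name], *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return params

print(fit_tar(r, q, c=2.0))
```

In practice the threshold c would itself be estimated (e.g. by a grid search over candidate values), rather than fixed as here.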
Abstract:
Conventional economic theory, applied to information released by listed companies, equates ‘useful’ with ‘price-sensitive’. Stock exchange rules accordingly prohibit the selective, private communication of price-sensitive information. Yet, even in the absence of such communication, UK equity fund managers routinely meet privately with the senior executives of the companies in which they invest. Moreover, they consider these brief, formal and formulaic meetings to be their most important sources of investment information. In this paper we ask how that can be. Drawing on interview and observation data with fund managers and CFOs, we find evidence for three, non-mutually exclusive explanations: that the characterisation of information in conventional economic theory is too restricted, that fund managers fail to act with the rationality that conventional economic theory assumes, and/or that the primary value of the meetings for fund managers is not related to their investment decision making but to the claims of superior knowledge made to clients in marketing their active fund management expertise. Our findings suggest a disconnect between economic theory and economic policy based on that theory, as well as a corresponding limitation in research studies that test information-usefulness by assuming it to be synonymous with price-sensitivity. We draw implications for further research into the role of tacit knowledge in equity investment decision-making, and also into the effects of the principal–agent relationship between fund managers and their clients.
Abstract:
Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist to scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation of classification rule induction algorithms, most of the work has concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the ‘divide and conquer’ approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternative algorithms follow the ‘separate and conquer’ approach of inducing rules, but very little work has been done to make the ‘separate and conquer’ approach scale better on large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the ‘separate and conquer’ approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.
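To make the ‘separate and conquer’ idea concrete, here is a deliberately simplified, sequential sketch of Prism-style modular rule induction (it is not the J-PMCRI parallel methodology itself): a rule is specialised until it covers only the target class, the covered examples are removed, and the process repeats. The toy dataset and helper names are hypothetical, and the sketch assumes a pure rule can always be found.

```python
# Toy categorical dataset: (attribute dict, class label)
data = [
    ({"outlook": "sunny", "windy": "no"},  "play"),
    ({"outlook": "sunny", "windy": "yes"}, "stay"),
    ({"outlook": "rain",  "windy": "no"},  "play"),
    ({"outlook": "rain",  "windy": "yes"}, "stay"),
    ({"outlook": "sunny", "windy": "no"},  "play"),
]

def precision(examples, target, attr, value):
    """Fraction of examples matching attr=value that belong to the target class."""
    matched = [(ex, lb) for ex, lb in examples if ex.get(attr) == value]
    if not matched:
        return 0.0
    return sum(lb == target for _, lb in matched) / len(matched)

def induce_rules(data, target):
    """Simplified separate-and-conquer: specialise a rule until it covers only
    the target class, remove the covered examples, then repeat."""
    rules, remaining = [], list(data)
    while any(label == target for _, label in remaining):
        rule, covered = {}, remaining
        while any(label != target for _, label in covered):
            # Greedily add the attribute-value term with the highest precision.
            best = max(
                ((a, v) for ex, _ in covered for a, v in ex.items() if a not in rule),
                key=lambda av: precision(covered, target, *av),
            )
            rule[best[0]] = best[1]
            covered = [(ex, lb) for ex, lb in covered if ex.get(best[0]) == best[1]]
        rules.append(rule)
        remaining = [(ex, lb) for ex, lb in remaining
                     if not all(ex.get(a) == v for a, v in rule.items())]
    return rules

print(induce_rules(data, "play"))   # e.g. [{'windy': 'no'}]
```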
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable with decision trees and in some cases even outperform decision trees. Both kinds of algorithms tend to overfit on large and noisy datasets, and this has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees many pre-pruning and post-pruning methods exist; however, for Prism algorithms only one pre-pruning method has been developed, J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms, develops a new pruning method, Jmax-pruning, discusses it in theoretical terms and evaluates it empirically.
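The J-measure underlying J-pruning and Jmax-pruning can be computed directly from rule coverage statistics. The sketch below follows the usual Smyth-and-Goodman form for a rule "IF antecedent THEN class", together with a commonly quoted upper bound on the J-value of any further specialisation of the rule; the probabilities in the example call are invented.

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    """J(X; Y=y) = p(y) * [ p(x|y) log2(p(x|y)/p(x))
                            + (1-p(x|y)) log2((1-p(x|y))/(1-p(x))) ]
    p_y: probability the rule antecedent fires, p_x: class prior,
    p_x_given_y: probability of the class given the antecedent."""
    def term(p, q):
        return 0.0 if p == 0.0 else p * math.log2(p / q)
    return p_y * (term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x))

def j_max(p_y, p_x, p_x_given_y):
    """Upper bound on the J-value reachable by further specialising the rule."""
    return p_y * max(p_x_given_y * math.log2(1 / p_x),
                     (1 - p_x_given_y) * math.log2(1 / (1 - p_x)))

# Hypothetical rule: antecedent covers 40% of examples, class prior is 30%,
# and 80% of the covered examples belong to the target class.
print(j_measure(p_y=0.4, p_x=0.3, p_x_given_y=0.8))
print(j_max(p_y=0.4, p_x=0.3, p_x_given_y=0.8))
```

J-pruning stops specialising a rule when the J-value starts to fall, whereas Jmax-pruning also consults the upper bound to decide whether further specialisation could still pay off.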
Abstract:
The Prism family of algorithms induces modular classification rules, in contrast to the Top Down Induction of Decision Trees (TDIDT) approach, which induces classification rules in the intermediate form of a tree structure. Both approaches achieve comparable classification accuracy, although in some cases Prism outperforms TDIDT. For both approaches pre-pruning facilities have been developed in order to prevent the induced classifiers from overfitting on noisy datasets, by cutting rule terms or whole rules or by truncating decision trees according to certain metrics. Many pre-pruning mechanisms have been developed for the TDIDT approach, but for the Prism family the only existing pre-pruning facility is J-pruning. J-pruning works not only on Prism algorithms but also on TDIDT. Although it has been shown that J-pruning produces good results, this work points out that J-pruning does not use its full potential. The original J-pruning facility is examined and the use of a new pre-pruning facility, called Jmax-pruning, is proposed and evaluated empirically. A possible pre-pruning facility for TDIDT based on Jmax-pruning is also discussed.
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale and thus databases are growing at a fast pace. This leads to the utilisation of parallel computing technologies to cope with large amounts of data. In the area of classification rule induction, parallelisation has focused on the divide and conquer approach, also known as the Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate and conquer, which has only recently become a focus of parallelisation. This work introduces and evaluates empirically a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms. All members of the Prism family follow the separate and conquer approach.
Abstract:
Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a level of classification accuracy similar to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is that of overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have been introduced recently for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are treated as dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed, together with the integer frequencies of cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. The results show that, by combining the problems and using an exact method, it is possible to obtain significant gains compared to the usual industrial practice, which solves them in sequence.
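As a toy illustration of this kind of combined model (with setups relaxed and pattern frequencies continuous, as described in the abstract), the sketch below builds a small two-period LP that couples final-product production with cutting-pattern choices and trades storage costs against trim loss. The instance data, pattern set and costs are invented, and no column generation is performed: the cutting patterns are simply enumerated.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-product, two-period instance.
T = 2
demand = [30.0, 50.0]                    # demand for the final product per period
need = {"A": 3.0, "B": 2.0}              # parts needed per unit of final product
patterns = {                             # parts obtained per raw bar, and trim loss
    "p1": {"A": 3, "B": 0, "waste": 1.0},
    "p2": {"A": 0, "B": 2, "waste": 2.0},
    "p3": {"A": 2, "B": 1, "waste": 0.0},
}
h_prod, h_part, c_waste = 0.5, 0.1, 1.0  # storage and trim-loss cost coefficients

# Variable layout: x[t], z[pattern, t], product inventory, part inventories.
names = ([f"x{t}" for t in range(T)]
         + [f"z_{p}_{t}" for p in patterns for t in range(T)]
         + [f"Iprod{t}" for t in range(T)]
         + [f"I{k}{t}" for k in need for t in range(T)])
idx = {n: i for i, n in enumerate(names)}

c = np.zeros(len(names))
for p, pat in patterns.items():
    for t in range(T):
        c[idx[f"z_{p}_{t}"]] = c_waste * pat["waste"]
for t in range(T):
    c[idx[f"Iprod{t}"]] = h_prod
    for k in need:
        c[idx[f"I{k}{t}"]] = h_part

A_eq, b_eq = [], []
for t in range(T):
    # Final-product balance: Iprod[t-1] + x[t] - Iprod[t] = demand[t]
    row = np.zeros(len(names))
    row[idx[f"x{t}"]] = 1.0
    row[idx[f"Iprod{t}"]] = -1.0
    if t > 0:
        row[idx[f"Iprod{t-1}"]] = 1.0
    A_eq.append(row); b_eq.append(demand[t])
    # Part balance: I_k[t-1] + parts cut in t - need_k * x[t] - I_k[t] = 0
    for k in need:
        row = np.zeros(len(names))
        for p, pat in patterns.items():
            row[idx[f"z_{p}_{t}"]] = pat[k]
        row[idx[f"x{t}"]] = -need[k]
        row[idx[f"I{k}{t}"]] = -1.0
        if t > 0:
            row[idx[f"I{k}{t-1}"]] = 1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * len(names), method="highs")
print(res.status, dict(zip(names, np.round(res.x, 2))))
```

In the paper's approach the set of cutting patterns is not enumerated up front but generated on demand by pricing (column generation) against the LP duals; the small LP above only shows how the two planning decisions become coupled.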
Abstract:
We use QCD sum rules to test the nature of the recently observed mesons Y(4260), Y(4350) and Y(4660), assumed to be exotic four-quark ($c\bar{c}q\bar{q}$) or ($c\bar{c}s\bar{s}$) states with $J^{PC} = 1^{--}$. We work at leading order in $\alpha_s$, consider the contributions of higher dimension condensates and keep terms which are linear in the strange quark mass $m_s$. For the $c\bar{c}s\bar{s}$ state we find a mass $m_Y = (4.65 \pm 0.10)$ GeV, which is compatible with the experimental candidate Y(4660), while for the $c\bar{c}q\bar{q}$ state we find a mass $m_Y = (4.49 \pm 0.11)$ GeV, which is still consistent with the mass of the experimental candidate Y(4350). With the tetraquark structure we are working with, we cannot explain the Y(4260) as a tetraquark state. We also consider molecular $D_{s0}\bar{D}_s^*$ and $D_0\bar{D}^*$ states. For the $D_{s0}\bar{D}_s^*$ molecular state we get $m_{D_{s0}\bar{D}_s^*} = (4.42 \pm 0.10)$ GeV, which is consistent, considering the errors, with the mass of the meson Y(4350), and for the $D_0\bar{D}^*$ molecular state we get $m_{D_0\bar{D}^*} = (4.27 \pm 0.10)$ GeV, in excellent agreement with the mass of the meson Y(4260).
Abstract:
The purpose of this work is to develop a web-based decision support system, based on fuzzy logic, to assess the motor state of Parkinson patients from their performance in on-screen motor tests in a test battery on a hand computer. A set of well-defined rules, based on an expert's knowledge, was created to diagnose the current state of the patient. At the end of a period, an overall score is calculated which represents the overall state of the patient during that period. Acceptability of the rules is based on the absolute difference between the patient's own assessment of his condition and the diagnosed state. Any inconsistency can be tracked, as it is highlighted as an alert in the system. Graphical presentation of data aims at enhanced analysis of the patient's state and performance monitoring by the clinic staff. In general, the system is beneficial for the clinic staff, patients, project managers and researchers.
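Purely as an illustrative sketch (the actual rule base is not given in the abstract), the snippet below scores a single test occasion with a few hypothetical expert rules and raises an alert when the diagnosed state and the patient's self-assessment differ by more than a tolerance; all thresholds, field names and the tolerance are made up.

```python
def diagnose(test):
    """Map hypothetical on-screen motor test results to a state in [-3, +3]
    (negative = 'off'/bradykinetic, positive = dyskinetic, 0 = normal)."""
    if test["tapping_rate"] < 2.0 and test["reaction_time"] > 1.2:
        return -2
    if test["tapping_rate"] < 3.0:
        return -1
    if test["tremor_amplitude"] > 0.8:
        return 2
    return 0

def check_acceptability(test, self_assessment, tolerance=1):
    """Flag an inconsistency when diagnosis and self-assessment disagree."""
    state = diagnose(test)
    alert = abs(state - self_assessment) > tolerance
    return state, alert

# Hypothetical test-occasion data and the patient's own rating.
test = {"tapping_rate": 2.5, "reaction_time": 0.9, "tremor_amplitude": 0.3}
print(check_acceptability(test, self_assessment=2))   # -> (-1, True)
```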
Abstract:
The aim of this work was to design a set of rules for levodopa infusion dose adjustment in Parkinson's disease based on simulation experiments. Using this simulator, the optimal infusion dose under different conditions was calculated. There are seven conditions (-3 to +3) appearing in a rating scale for Parkinson's disease patients. By finding the mean of the differences between conditions and the optimal dose, two sets of rules were designed. The rules were then refined through several rounds of testing. Their usefulness for optimizing the titration procedure of new infusion patients based on rule-based reasoning was investigated. Results show that both the number of steps and the errors in finding the optimal dose were reduced by the new rules. Finally, the new rules predicted the dose well on each single occasion for the majority of patients in the simulation experiments.
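A minimal sketch of the rule-derivation idea as described (mean difference between rated condition and optimal dose), using invented simulation records; the rating scale interpretation, units and helper names are assumptions.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical simulator output: (rated condition in -3..+3,
# current dose, optimal dose found by the simulator), doses in mg/h.
records = [(-2, 1.0, 1.6), (-2, 1.2, 1.7), (-1, 1.1, 1.4),
           (0, 1.3, 1.3), (1, 1.5, 1.3), (2, 1.6, 1.2)]

# Rule design: average dose adjustment observed for each condition level.
adjustments = defaultdict(list)
for condition, dose, optimal in records:
    adjustments[condition].append(optimal - dose)
rules = {cond: mean(diffs) for cond, diffs in adjustments.items()}

def titrate(current_dose, condition):
    """Apply the derived rule: adjust the dose by the mean difference
    observed for this condition (no change if the condition was never seen)."""
    return current_dose + rules.get(condition, 0.0)

print(rules)
print(titrate(current_dose=1.2, condition=-2))
```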
Abstract:
This thesis presents a system to recognise and classify road and traffic signs for the purpose of developing an inventory of them which could assist the highway engineers' tasks of updating and maintaining them. It uses images taken by a camera from a moving vehicle. The system is based on three major stages: colour segmentation, recognition, and classification. Four colour segmentation algorithms are developed and tested: a shadow and highlight invariant algorithm, a dynamic threshold algorithm, a modification of de la Escalera's algorithm and a fuzzy colour segmentation algorithm. All algorithms are tested using hundreds of images and the shadow-highlight invariant algorithm is eventually chosen as the best performer, because it is immune to shadows and highlights. It is also robust, having been tested in different lighting conditions, weather conditions, and times of the day. A successful segmentation rate of approximately 97% was achieved using this algorithm. Recognition of traffic signs is carried out using a fuzzy shape recogniser. Based on four shape measures (rectangularity, triangularity, ellipticity, and octagonality), fuzzy rules were developed to determine the shape of the sign. Among these shape measures, octagonality has been introduced in this research. The final decision of the recogniser is based on the combination of both the colour and shape of the sign. The recogniser was tested in a variety of conditions, giving an overall performance of approximately 88%. Classification was undertaken using a Support Vector Machine (SVM) classifier. The classification is carried out in two stages: classification of the rim's shape followed by classification of the interior of the sign. The classifier was trained and tested using binary images in addition to five different types of moments and features: geometric moments, Zernike moments, Legendre moments, orthogonal Fourier-Mellin moments, and binary Haar features. The performance of the SVM was tested using different features, kernels, SVM types, SVM parameters, and moment orders. The average classification rate achieved is about 97%. Binary images show the best testing results, followed by Legendre moments. The linear kernel gives the best testing results, followed by RBF. C-SVM shows very good performance, but ν-SVM gives better results in some cases.
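As a rough, self-contained illustration of one of the shape measures mentioned and of kernel choice in an SVM classifier (not the thesis's actual feature pipeline), the sketch below computes a simple rectangularity measure from a binary mask and trains scikit-learn SVC classifiers with linear and RBF kernels on made-up feature vectors.

```python
import numpy as np
from sklearn.svm import SVC

def rectangularity(mask):
    """Ratio of the shape's area to the area of its axis-aligned bounding box
    (1.0 for a perfect axis-aligned rectangle)."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return mask.sum() / ((r1 - r0 + 1) * (c1 - c0 + 1))

# A filled 10x20 rectangle inside a 32x32 mask -> rectangularity of 1.0
mask = np.zeros((32, 32), dtype=bool)
mask[5:15, 6:26] = True
print(round(rectangularity(mask), 3))

# Hypothetical sign features and labels; compare linear and RBF kernels.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X[:150], y[:150])
    print(kernel, round(clf.score(X[150:], y[150:]), 3))
```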
Abstract:
The clausal resolution method for propositional linear-time temporal logic is well known and provides the basis for a number of temporal provers. The method is based on an intuitive clausal form, called SNF, comprising three main clause types and a small number of resolution rules. In this paper, we show how the normal form can be radically simplified and, consequently, how a simplified clausal resolution method can be defined for this important variety of logics.
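For orientation only, the sketch below represents the three SNF clause kinds usually described in the literature (initial, step, and sometime clauses) as simple Python data structures; the exact clause forms and the simplifications introduced in this paper are not reproduced here.

```python
from dataclasses import dataclass
from typing import FrozenSet

Literal = str  # e.g. "p" or "~p"

@dataclass(frozen=True)
class InitialClause:          # start => (l1 | ... | ln)
    disjuncts: FrozenSet[Literal]

@dataclass(frozen=True)
class StepClause:             # (k1 & ... & km) => NEXT (l1 | ... | ln)
    conjuncts: FrozenSet[Literal]
    disjuncts: FrozenSet[Literal]

@dataclass(frozen=True)
class SometimeClause:         # (k1 & ... & km) => EVENTUALLY l
    conjuncts: FrozenSet[Literal]
    eventuality: Literal

# A tiny, made-up SNF clause set.
clauses = [
    InitialClause(frozenset({"p"})),
    StepClause(frozenset({"p"}), frozenset({"q", "~p"})),
    SometimeClause(frozenset({"q"}), "~r"),
]
print(clauses)
```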