37 results for Filter selection paper in Aston University Research Archive
Abstract:
In this paper, we discuss some practical implications of implementing adaptable network algorithms applied to non-stationary time series problems. Using electricity load data and training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise and forgetting factors for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. We also find that a recently proposed alternative novelty criterion, found to be more robust in stationary environments, does not fare so well in the non-stationary case due to the need for filter adaptability during training.
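As background to the sensitivity result, the following is a minimal sketch of the two-part novelty criterion that governs model-order increments in a RAN; the threshold names dist_thresh and err_thresh are illustrative, and the published algorithm additionally shrinks the distance threshold over time.

```python
import numpy as np

def ran_novelty_check(x, y_true, y_pred, centers, dist_thresh, err_thresh):
    """Two-part novelty criterion of the resource allocating network (RAN):
    a new RBF unit is allocated only if the input lies far from every
    existing centre AND the current prediction error is large."""
    if len(centers) == 0:
        return True
    nearest = min(np.linalg.norm(x - c) for c in centers)
    return nearest > dist_thresh and abs(y_true - y_pred) > err_thresh
```

Because allocation requires both conditions to hold, small changes to either threshold can radically alter how quickly the model grows, which is the sensitivity reported above.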
Abstract:
We propose a Bayesian framework for regression problems, covering areas usually dealt with by function approximation. An online learning algorithm is derived which solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting. In the infinite-dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets demonstrate the method and highlight important issues concerning the choice of priors.
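A minimal sketch of the kind of Kalman-filter regression update described here, for a linear-in-features model with static weights; the paper's full Bayesian framework, including its treatment of model complexity and priors, goes well beyond this.

```python
import numpy as np

def kalman_regression_update(mu, P, phi, y, sigma2):
    """One online Bayesian update for the model y = phi @ w + noise,
    treating the weight vector w as the Kalman state.
    mu, P: prior mean and covariance of w; phi: feature vector;
    y: observed target; sigma2: observation noise variance."""
    S = phi @ P @ phi + sigma2        # innovation variance
    K = P @ phi / S                   # Kalman gain
    mu = mu + K * (y - phi @ mu)      # posterior mean
    P = P - np.outer(K, phi) @ P      # posterior covariance
    return mu, P
```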
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To show that it is of more general interest than the test cases they consider, in this paper the technique is applied to the subset sum problem, a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single spin flip dynamics. It is a problem which exhibits interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
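The quantities the theory tracks can be measured directly in a simulated population; below is a sketch for a population of +/-1 spin strings, where the mean correlation is taken as the average pairwise overlap (an assumption about the paper's exact definition).

```python
import numpy as np
from scipy.stats import kstat

def population_statistics(pop, energies):
    """Cumulants of the population's energy distribution (its shape) and
    the mean pairwise overlap of its members (microscopic similarity).
    pop: (P, N) array of +/-1 spins; energies: length-P array."""
    cumulants = [kstat(energies, n) for n in (1, 2, 3, 4)]
    overlaps = pop @ pop.T / pop.shape[1]       # q_ab for every pair a, b
    iu = np.triu_indices(len(pop), k=1)         # distinct pairs only
    return cumulants, overlaps[iu].mean()
```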
Abstract:
Construction projects are risky. However, the characteristics of the risk depend strongly on the type of procurement adopted for managing the project. A build-operate-transfer (BOT) project is recognized as one of the most risky project schemes, and there are instances of project failure where a BOT scheme was employed; ineffective risk management is one of the reasons. Projects are increasingly being managed using various risk management tools and techniques. However, the application of those tools depends on the nature of the project, the organization's policy, the project management strategy, the risk attitude of the project team members, and the availability of resources. Understanding the contents and contexts of BOT projects, together with a thorough understanding of risk management tools and techniques, helps in selecting risk management processes for effective project implementation under a BOT scheme. This paper studies the application of risk management tools and techniques in BOT projects through a review of the relevant literature and develops a model for selecting a risk management process for BOT projects. The application to BOT projects is considered from the viewpoints of the major project participants, and political risks are also discussed. This study contributes to the establishment of a framework for systematic risk management in BOT projects.
Abstract:
This paper introduces a compact form for the maximum value of the non-Archimedean epsilon in Data Envelopment Analysis (DEA) models applied to technology selection, without the need to solve a linear program (LP). Using this method, the computational performance of the common-weight multi-criteria decision-making (MCDM) DEA model proposed by Karsak and Ahiska (International Journal of Production Research, 2005, 43(8), 1537-1554) is improved. This improvement is significant when computational issues and complexity analysis are a concern.
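For reference, this is the standard input-oriented CCR multiplier model into which such a non-Archimedean bound enters, sketched with scipy; the paper's actual contribution, a closed form for the maximum feasible epsilon, is not reproduced here, and eps below is simply a user-supplied constant.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o, eps=1e-6):
    """Efficiency of DMU `o` under the CCR multiplier model with a
    non-Archimedean lower bound `eps` on all weights.
    X: (n, m) input matrix; Y: (n, s) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])             # maximise u . y_o
    A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m))
    return -res.fun                                      # efficiency score
```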
Abstract:
The supplier evaluation and selection problem has been studied extensively, and various decision-making approaches have been proposed to tackle it. In contemporary supply chain management, the performance of potential suppliers is evaluated against multiple criteria rather than a single factor, cost. This paper reviews the literature on multi-criteria decision-making approaches for supplier evaluation and selection. Related articles appearing in international journals from 2000 to 2008 are gathered and analyzed so that the following three questions can be answered: (i) Which approaches were prevalently applied? (ii) Which evaluating criteria were paid more attention to? (iii) Is there any inadequacy in the approaches? Based on any inadequacy found, improvements and possible future work are recommended. This research not only provides evidence that multi-criteria decision-making approaches are better than the traditional cost-based approach, but also aids researchers and decision makers in applying the approaches effectively.
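As a baseline for the families of approaches such reviews cover, here is a sketch of simple additive weighting, one of the most elementary multi-criteria scoring methods; the normalisation scheme and the benefit/cost split are illustrative choices.

```python
import numpy as np

def saw_supplier_scores(scores, weights, benefit):
    """Simple additive weighting (SAW) over a decision matrix.
    scores: (suppliers, criteria) raw matrix; weights: criterion weights
    summing to 1; benefit: boolean mask per criterion, True where larger
    is better (e.g. quality), False where smaller is better (e.g. cost)."""
    norm = np.where(benefit,
                    scores / scores.max(axis=0),
                    scores.min(axis=0) / scores)
    return norm @ weights          # higher total = preferred supplier
```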
Abstract:
This paper extends previous analyses of the choice between internal and external R&D to consider the costs of internal R&D. The Heckman two-stage estimator is used to estimate the determinants of internal R&D unit cost (i.e. cost per product innovation) allowing for sample selection effects. Theory indicates that R&D unit cost will be influenced by scale issues and by the technological opportunities faced by the firm. Transaction costs encountered in research activities are allowed for and, in addition, consideration is given to issues of market structure which influence the choice of R&D mode without affecting the unit cost of internal or external R&D. The model is tested on data from a sample of over 500 UK manufacturing plants which have engaged in product innovation. The key determinants of R&D mode are the scale of plant and R&D input, and market structure conditions. In terms of the R&D cost equation, scale factors are again important and have a non-linear relationship with R&D unit cost. Specificities in physical and human capital also affect unit cost, but have no clear impact on the choice of R&D mode. There is no evidence of technological opportunity affecting either R&D cost or the internal/external decision.
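A sketch of the Heckman two-step procedure referred to above, using statsmodels; the variable roles are illustrative, and the paper's actual choice of selection instruments and cost regressors differs.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

def heckman_two_step(y, X, Z, selected):
    """Step 1: probit of the selection indicator (e.g. doing internal
    R&D) on instruments Z.  Step 2: OLS of the outcome (e.g. R&D unit
    cost) on X plus the inverse Mills ratio, selected cases only."""
    W = sm.add_constant(Z)
    probit = sm.Probit(selected, W).fit(disp=0)
    xb = W @ probit.params
    mills = norm.pdf(xb) / norm.cdf(xb)           # inverse Mills ratio
    mask = selected == 1
    Xs = sm.add_constant(np.column_stack([X[mask], mills[mask]]))
    return sm.OLS(y[mask], Xs).fit()              # selection-corrected OLS
```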
Abstract:
The aim of our paper is to examine whether Exchange Traded Funds (ETFs) diversify away the private information of informed traders. We apply the spread decomposition models of Glosten and Harris (1988) and Madhavan, Richardson and Roomans (1997) to a sample of ETFs and their control securities. Our results indicate that ETFs have significantly lower adverse selection costs than their control securities, suggesting that private information is diversified away for these securities. Our results therefore offer one explanation for the rapid growth of the ETF market.
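For concreteness, a sketch of the Glosten and Harris regression on signed trades; the specification and the conversion of the permanent (adverse-selection) component into a cost follow common practice and may differ in detail from the paper's implementation.

```python
import numpy as np
import statsmodels.api as sm

def glosten_harris(price, q, vol):
    """Estimate dP_t = c0*dq_t + c1*d(q_t v_t) + z0*q_t + z1*q_t*v_t + e_t,
    where q is the trade direction (+1 buy, -1 sell) and vol the volume.
    The z-terms capture the permanent, adverse-selection component."""
    dP = np.diff(price)
    qv = q * vol
    X = np.column_stack([np.diff(q), np.diff(qv), q[1:], qv[1:]])
    res = sm.OLS(dP, X).fit()
    c0, c1, z0, z1 = res.params
    adverse = 2 * (z0 + z1 * vol.mean())   # adverse-selection spread component
    return res, adverse
```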
Abstract:
When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine them to meet the investors' risk profiles. A recently developed tool for performing such optimization is full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems involving 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes the large-scale problem computationally feasible and that the solutions retrieved are stable. The study also lends further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved from traditional methods.
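A sketch of how differential evolution can drive FSO-style stock selection; maximising average power utility over historical return scenarios is an illustrative choice, and the study's utility specifications and constraints differ.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fso_weights(returns, gamma=0.5):
    """Long-only weights maximising mean power utility of end-of-period
    wealth over historical scenarios.  returns: (T, n) matrix of asset
    returns; gamma: utility curvature (0 < gamma < 1 here)."""
    n = returns.shape[1]

    def neg_expected_utility(w):
        w = w / (w.sum() + 1e-12)                  # normalise to the simplex
        wealth = 1.0 + returns @ w                 # assumes returns > -100%
        return -np.mean(wealth ** gamma / gamma)   # power utility

    res = differential_evolution(neg_expected_utility,
                                 bounds=[(0.0, 1.0)] * n,
                                 seed=0, maxiter=200, polish=False)
    return res.x / res.x.sum()
```

Unlike gradient-based optimisers, differential evolution only evaluates the objective on randomly recombined candidate weight vectors, which is what makes it usable for the non-smooth utility functions FSO allows.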
Abstract:
Producing biofuels and chemicals from biomass entails the gasification of biogenic feedstocks and synthesis via methanol, dimethyl ether (DME) or Fischer-Tropsch products. To prevent the sensitive synthesis catalysts from poisoning, the syngas must be free of tar and particulates, and the trace concentrations of S-, Cl- and N-species, alkali and heavy metals must be of the order of a few ppb. Moreover, maximum conversion efficiency is achieved by performing the gas cleaning above the synthesis conditions. The concept of an innovative dry high-temperature, high-pressure (HTHP) syngas cleaning is presented. Based on HT particle filtration and suitable sorption and catalysis processes for the relevant contaminants, a total concept is derived which achieves the syngas quality required by synthesis catalysts in only two combined stages. The experimental setup for the HT gas cleaning downstream of the institute's 60 kW(th) entrained flow gasifier REGA is described. Results from HT filter experiments at pilot scale are presented. The performance of two natural minerals for HCl and H2S sorption is discussed with respect to temperature, surface area and residence time. Finally, results from lab-scale investigations of the performance of low-temperature tar catalysts (commercial and proprietary developments) are discussed.
Abstract:
Artifact selection decisions typically involve the selection of one from a number of possible/candidate options (decision alternatives). In order to support such decisions, it is important to identify and recognize relevant key issues of problem solving and decision making (Albers, 1996; Harris, 1998a, 1998b; Jacobs & Holten, 1995; Loch & Conger, 1996; Rumble, 1991; Sauter, 1999; Simon, 1986). Sauter classifies four problem solving/decision making styles: (1) left-brain style, (2) right-brain style, (3) accommodating, and (4) integrated (Sauter, 1999). The left-brain style employs analytical and quantitative techniques and relies on rational and logical reasoning. In an effort to achieve predictability and minimize uncertainty, problems are explicitly defined, solution methods are determined, orderly information searches are conducted, and analysis is increasingly refined. Left-brain style decision making works best when it is possible to predict/control, measure, and quantify all relevant variables, and when information is complete. In direct contrast, right-brain style decision making is based on intuitive techniques—it places more emphasis on feelings than facts. Accommodating decision makers use their non-dominant style when they realize that it will work best in a given situation. Lastly, integrated style decision makers are able to combine the left- and right-brain styles—they use analytical processes to filter information and intuition to contend with uncertainty and complexity.
Abstract:
In this paper, we discuss some practical implications of implementing adaptable network algorithms applied to non-stationary time series problems. Two real-world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. The results of our experiments show that there are advantages to be gained in tracking real-world non-stationary data through the use of more complex adaptive models.
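The role of system noise is visible in a single extended Kalman filter step; below is a sketch for a random-walk weight model, where adding the system noise covariance Q each step keeps the state covariance, and hence the learning gain, from collapsing to zero.

```python
import numpy as np

def ekf_weight_update(theta, P, grad, y, y_pred, R, Q):
    """One EKF step for tracking network weights theta on a stream.
    grad: Jacobian of the network output w.r.t. theta at the current
    input; R: observation noise variance; Q: system noise covariance."""
    P = P + Q                         # random-walk state: inflate covariance
    S = grad @ P @ grad + R           # innovation variance
    K = P @ grad / S                  # Kalman gain
    theta = theta + K * (y - y_pred)  # weight update
    P = P - np.outer(K, grad) @ P     # posterior covariance
    return theta, P
```

With Q = 0 the gain K shrinks as evidence accumulates and the model effectively stops adapting; a non-zero Q trades estimation variance for the plasticity needed on non-stationary data.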
Abstract:
The main objective of the project is to enhance the already effective health and usage monitoring system (HUMS) for helicopters by analysing structural vibrations to recognise different flight conditions directly from sensor information. The goal of this paper is to develop a new method to select the sensors and frequency bands that are best for detecting changes in flight conditions. We projected frequency information to a 2-dimensional space in order to visualise flight-condition transitions, using the Generative Topographic Mapping (GTM) and a variant which supports simultaneous feature selection. We created an objective measure of the separation between different flight conditions in the visualisation space by calculating the Kullback-Leibler (KL) divergence between Gaussian mixture models (GMMs) fitted to each class: the higher the KL divergence, the better the interclass separation. To find the optimal combination of sensors, they were considered in pairs, triples and groups of four sensors. The sensor triples provided the best result in terms of KL divergence. We also found that the use of a variational training algorithm for the GMMs gave more reliable results.
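A sketch of the separation measure: the KL divergence between two Gaussian mixtures has no closed form, so it can be estimated by Monte Carlo sampling from the first mixture; the component count and sample size below are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def kl_between_classes(X_a, X_b, n_components=2, n_samples=10000, seed=0):
    """Estimate KL(p_a || p_b) between GMMs fitted to the projected
    data of two flight conditions; larger values mean better separation."""
    gmm_a = GaussianMixture(n_components=n_components, random_state=seed).fit(X_a)
    gmm_b = GaussianMixture(n_components=n_components, random_state=seed).fit(X_b)
    samples, _ = gmm_a.sample(n_samples)
    return np.mean(gmm_a.score_samples(samples)
                   - gmm_b.score_samples(samples))
```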
Abstract:
This paper presents the results of a study examining the methods used to select employees in 579 UK organizations representing a range of organization sizes and industry sectors. Overall, a smaller proportion of organizations in this sample reported using formalized methods (e.g., assessment centres) than informal methods (e.g., unstructured interviews). The curriculum vitae (CV) was the most commonly used selection method, followed by the traditional triad of application form, interviews, and references. Findings also indicated that the use of different selection methods was similar in both large organizations and small-to-medium-sized enterprises. Differences were found across industry sectors, with the public and voluntary sectors being more likely to use formalized techniques (e.g., application forms rather than CVs, and structured rather than unstructured interviews). The results are discussed in relation to their implications, both for practice and for future research.
Abstract:
In this paper, a microwave photonic filter using a superstructured fiber Bragg grating and dispersive fiber is investigated. A theoretical model describing the transfer function of the filter, taking into consideration the spectral width of the light source, is established, and experiments are carried out to verify the theoretical analysis. Both theoretical and experimental results indicate that, due to chromatic dispersion, the source spectral width introduces an additional power penalty to the microwave photonic response of the filter.
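To illustrate the qualitative behaviour, here is a sketch of an N-tap incoherent delay-line filter response multiplied by a Gaussian envelope modelling the dispersion/source-width power penalty; the envelope is the commonly quoted first-order approximation, not necessarily the transfer function derived in the paper.

```python
import numpy as np

def photonic_filter_response(f, taps, T, DL, sigma_lambda):
    """Magnitude response of an N-tap microwave photonic filter.
    f: RF frequencies (Hz); taps: tap amplitudes; T: tap delay (s);
    DL: accumulated dispersion D*L (s/m); sigma_lambda: RMS source
    spectral width (m).  The Gaussian factor models the additional
    penalty that a wider source suffers through chromatic dispersion."""
    comb = np.abs(sum(a * np.exp(-2j * np.pi * f * k * T)
                      for k, a in enumerate(taps)))
    envelope = np.exp(-2 * (np.pi * f * DL * sigma_lambda) ** 2)
    return comb * envelope
```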