26 results for Linear and multilinear programming
in Aston University Research Archive
Abstract:
In this paper, the exchange rate forecasting performance of neural network models is evaluated against the random walk, autoregressive moving average and generalised autoregressive conditional heteroskedasticity models. There are no guidelines available that can be used to choose the parameters of neural network models and therefore the parameters are chosen according to what the researcher considers to be the best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why in many studies neural network models do not consistently perform better than their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, thereby giving them a better chance of performing well. The results show that, in general, neural network models perform better than the traditionally used time series models in forecasting exchange rates.
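A minimal sketch (not the authors' code) of the kind of comparison described above: a small feed-forward neural network forecasting one-step-ahead exchange-rate returns from lagged returns, benchmarked against a random-walk (no-change) forecast. The series, lag order and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
log_rate = np.cumsum(0.01 * rng.standard_normal(500))   # stand-in exchange-rate series
returns = np.diff(log_rate)

p = 4                                                    # assumed lag order
X = np.column_stack([returns[i:len(returns) - p + i] for i in range(p)])
y = returns[p:]
split = int(0.8 * len(y))

nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
nn.fit(X[:split], y[:split])

nn_rmse = np.sqrt(np.mean((nn.predict(X[split:]) - y[split:]) ** 2))
rw_rmse = np.sqrt(np.mean(y[split:] ** 2))               # random walk: predicted return = 0
print(f"NN RMSE: {nn_rmse:.5f}  random-walk RMSE: {rw_rmse:.5f}")
```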
Abstract:
Logistics distribution network design is one of the major decision problems arising in contemporary supply chain management. The decision involves many quantitative and qualitative factors that may be conflicting in nature. This paper applies an integrated multiple criteria decision making approach to design an optimal distribution network. In the approach, the analytic hierarchy process (AHP) is used first to determine the relative importance weightings, or priorities, of alternative warehouses with respect to both deliverer-oriented and customer-oriented criteria. Then, a goal programming (GP) model incorporating the system, resource, and AHP-priority constraints is formulated to select the best set of warehouses without exceeding the limited available resources. In this paper, two commercial packages are used: Expert Choice for determining the AHP priorities of the warehouses, and LINDO for solving the GP model. © 2007 IEEE.
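A minimal sketch of the AHP step described above (an assumption for illustration, not the paper's Expert Choice/LINDO workflow): priorities of three hypothetical warehouses on one criterion are taken as the normalised principal eigenvector of a pairwise-comparison matrix, with a consistency check.

```python
import numpy as np

# A[i, j] = how strongly warehouse i is preferred to warehouse j (Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()                         # AHP priority weights
print(priorities)                                # roughly [0.65, 0.23, 0.12]

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix in Saaty's tables).
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("CR:", ci / 0.58)
```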
Abstract:
We compare the Q parameter obtained from scalar, semi-analytical and full vector models for realistic transmission systems. One set of systems is operated in the linear regime, while the other uses solitons at high peak power. We report in detail on the different results obtained for the same system using different models. Polarisation mode dispersion is also taken into account, and a novel method to average Q parameters over several independent simulation runs is described. © 2006 Elsevier B.V. All rights reserved.
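For reference, a minimal sketch of how a Q parameter is commonly estimated from received decision samples (a generic assumption, not the paper's scalar, semi-analytical or vector models): Q = (mu1 - mu0) / (sigma1 + sigma0) from the means and standard deviations of the 'one' and 'zero' rails. The per-run averaging shown at the end is a naive placeholder, not the paper's dedicated averaging method.

```python
import numpy as np

def q_parameter(ones: np.ndarray, zeros: np.ndarray) -> float:
    """Gaussian-approximation Q factor from sampled rail amplitudes."""
    return (ones.mean() - zeros.mean()) / (ones.std() + zeros.std())

rng = np.random.default_rng(1)
# Illustrative samples for one simulation run of a binary optical channel.
ones = rng.normal(1.0, 0.12, 1000)
zeros = rng.normal(0.1, 0.08, 1000)
print("Q =", q_parameter(ones, zeros))           # roughly (1.0 - 0.1) / (0.12 + 0.08) = 4.5

# Naive mean of per-run Q values over independent runs (e.g. different PMD realisations).
runs = [q_parameter(rng.normal(1.0, 0.12, 1000), rng.normal(0.1, 0.08, 1000)) for _ in range(10)]
print("naive average over runs:", np.mean(runs))
```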
Abstract:
We compare the Q parameter obtained from the semi-analytical model with that obtained from scalar and vector models for two realistic transmission systems: first, a linear system with a compensated dispersion map, and second, a soliton transmission system.
Abstract:
In this paper, the exchange rate forecasting performance of neural network models is evaluated against the random walk and a range of time series models. There are no guidelines available that can be used to choose the parameters of neural network models and therefore the parameters are chosen according to what the researcher considers to be the best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why in many studies neural network models do not consistently perform better than their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, thereby giving them a better chance of performing well. Our results show that, in general, neural network models perform better than traditionally used time series models in forecasting exchange rates.
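A minimal sketch (an assumption, not the authors' protocol) of one way to reduce the subjectivity the abstract refers to: network hyperparameters are chosen by time-series cross-validation over a candidate grid rather than by the researcher's judgement.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(2)
returns = 0.01 * rng.standard_normal(400)            # stand-in exchange-rate returns
p = 4                                                # assumed lag order
X = np.column_stack([returns[i:len(returns) - p + i] for i in range(p)])
y = returns[p:]

param_grid = {                                       # illustrative candidate settings
    "hidden_layer_sizes": [(2,), (4,), (8,)],
    "alpha": [1e-4, 1e-3, 1e-2],
}
search = GridSearchCV(
    MLPRegressor(max_iter=5000, random_state=0),
    param_grid,
    cv=TimeSeriesSplit(n_splits=4),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print("selected configuration:", search.best_params_)
```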
Abstract:
Objective: This study aimed to explore methods of assessing interactions between neuronal sources using MEG beamformers. However, beamformer methodology is based on the assumption of no linear long-term source interdependencies [Van Veen BD, van Drongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 1997;44:867-80; Robinson SE, Vrba J. Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Recent advances in Biomagnetism. Sendai: Tohoku University Press; 1999. p. 302-5]. Although such long-term correlations are not efficient and should not be anticipated in a healthy brain [Friston KJ. The labile brain. I. Neuronal transients and nonlinear coupling. Philos Trans R Soc Lond B Biol Sci 2000;355:215-36], transient correlations seem to underlie functional cortical coordination [Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron 1999;49-65; Rodriguez E, George N, Lachaux J, Martinerie J, Renault B, Varela F. Perception's shadow: long-distance synchronization of human brain activity. Nature 1999;397:430-3; Bressler SL, Kelso J. Cortical coordination dynamics and cognition. Trends Cogn Sci 2001;5:26-36]. Methods: Two periodic sources were simulated and the effects of transient source correlation on the spatial and temporal performance of the MEG beamformer were examined. Subsequently, the interdependencies of the reconstructed sources were investigated using coherence and phase synchronization analysis based on Mutual Information. Finally, two interacting nonlinear systems served as neuronal sources and their phase interdependencies were studied under realistic measurement conditions. Results: Both the spatial and the temporal beamformer source reconstructions were accurate as long as the transient source correlation did not exceed 30-40 percent of the duration of beamformer analysis. In addition, the interdependencies of periodic sources were preserved by the beamformer and phase synchronization of interacting nonlinear sources could be detected. Conclusions: MEG beamformer methods in conjunction with analysis of source interdependencies could provide accurate spatial and temporal descriptions of interactions between linear and nonlinear neuronal sources. Significance: The proposed methods can be used for the study of interactions between neuronal sources. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
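A minimal sketch of the linearly constrained minimum variance beamformer underlying the cited methodology (the lead field, sensor count and noise level are hypothetical assumptions, not the study's simulation): for a lead field l and data covariance C, the weights are w = C^{-1} l / (l^T C^{-1} l), and the source time course is w applied to the sensor data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_samples = 32, 2000
lead_field = rng.standard_normal(n_sensors)                       # hypothetical forward model
source = np.sin(2 * np.pi * 10 * np.arange(n_samples) / 1000.0)   # 10 Hz simulated source
data = np.outer(lead_field, source) + 0.5 * rng.standard_normal((n_sensors, n_samples))

C = np.cov(data)                                            # sensor covariance
C_inv = np.linalg.pinv(C)
w = C_inv @ lead_field / (lead_field @ C_inv @ lead_field)  # LCMV beamformer weights
reconstructed = w @ data                                    # estimated source time course
print("correlation with true source:", np.corrcoef(reconstructed, source)[0, 1])
```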
Abstract:
Hydrogels are a unique class of polymer which swell in water but do not dissolve. A range of 2-hydroxyethyl methacrylate based copolymer hydrogels containing both cyclic and linear polyethers have been synthesised and are described in this thesis. Initially, cyclic polyethers were occluded within the polymer matrix and the transport properties investigated. The results indicated that the presence of an ionophore can be used to modulate ion transport and that ion transport is described by a dual-sorption mechanism. However, these studies were limited by ionophore loss during hydration. Hence, the synthesis of a range of acrylate based crown ether monomers was considered. A pure sample of 4-acryloylaminobenzo-15-crown-5 was obtained and a terpolymer containing this monomer was prepared. Transport studies illustrated that the presence of a 'bound' ionophore modulates ion transport in a similar way to the occluded systems. The transport properties of a series of terpolymers containing linear polyethers were then investigated. The results indicated that the dual-sorption mechanism is observed for these systems with group II metal cations, while the transport of group I metal cations, with the exception of sodium, is enhanced. Finally, the equilibrium water content (EWC), surface and mechanical properties of these terpolymers containing linear polyethers were examined. Although subtle variations in EWC are observed as the structure of the polyether side chain varies, generally EWC is enhanced due to the hydrophilicity of the polyether side chain. The macroscopic surface properties were investigated using a sessile drop technique and FTIR spectroscopy. At a molecular level, surface properties were probed using an in vitro ocular spoilation model and preliminary cell adhesion studies. The results indicate that the polyethylene oxide side chains are expressed at the polymer surface, thus reducing the adhesion of biological species.
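For reference, the standard form of the dual-mode (dual-sorption) model referred to above is shown below; the thesis's exact formulation for ion transport in these hydrogels may differ.

```latex
% Dual-mode sorption: Henry's-law dissolution plus Langmuir-type site filling.
C = k_D\,p \;+\; \frac{C'_H\, b\, p}{1 + b\,p}
```

Here C is the sorbed concentration, k_D the Henry's-law constant, C'_H the capacity of the Langmuir (site-binding) mode, b its affinity constant and p the penetrant activity.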
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means of gaining insight into the complicated processes making up a petroleum system. Typically, linear visualisation methods such as principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and they struggle to capture global non-linear structures in the data, although they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis, a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
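A minimal sketch of the core of the generative topographic mapping (an assumption for illustration, not the thesis's implementation or its extensions): latent grid points are mapped through an RBF basis into data space, responsibilities are computed under an isotropic Gaussian noise model, and each data point is visualised at the posterior mean of its responsibilities. The mapping weights here are untrained and the data are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
latent = np.array([[i, j] for i in np.linspace(-1, 1, 10) for j in np.linspace(-1, 1, 10)])
centres = latent[::12]                                       # RBF centres on the latent grid
phi = np.exp(-np.sum((latent[:, None, :] - centres[None]) ** 2, axis=2) / (2 * 0.3 ** 2))

W = rng.standard_normal((phi.shape[1], 5)) * 0.1             # illustrative (untrained) mapping
beta = 10.0                                                  # noise precision
data = rng.standard_normal((200, 5))                         # stand-in geochemical samples

proj = phi @ W                                               # grid images in data space
d2 = np.sum((data[:, None, :] - proj[None]) ** 2, axis=2)    # squared distances (N x K)
log_r = -0.5 * beta * d2
r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
r /= r.sum(axis=1, keepdims=True)                            # responsibilities (E-step)
means = r @ latent                                           # posterior-mean latent positions
print(means[:3])                                             # 2-D coordinates used for plotting
```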
Abstract:
We present a new class of multi-channel fiber Bragg grating, which provides the characteristics of channelized dispersion but does so with only a single reflection band. Such gratings can provide pure phase control of optical pulses without introducing any deleterious insertion-loss variation. © 2006 Optical Society of America.
Abstract:
This article presents a potential method to assist developers of future bioenergy schemes when selecting from available suppliers of biomass materials. The method aims to allow tacit requirements made on biomass suppliers to be considered at the design stage of new developments. The method used is a combination of the Analytical Hierarchy Process and the Quality Function Deployment methods (AHP-QFD). The output of the method is a ranking and relative weighting of the available suppliers, which could be used to improve optimization algorithms such as linear and goal programming. The paper is at a conceptual stage and no results have been obtained. The aim is to use the AHP-QFD method to bridge the gap between the treatment of explicit and tacit requirements of bioenergy schemes, allowing decision makers to identify the most successful supply strategy available.
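A minimal sketch of how an AHP-QFD combination of this kind could produce the supplier ranking and weighting mentioned above (the weights, relationship matrix and suppliers are hypothetical, not the article's data): AHP-derived requirement weights are pushed through a QFD relationship matrix to score the candidate suppliers.

```python
import numpy as np

requirement_weights = np.array([0.5, 0.3, 0.2])      # from an AHP pairwise-comparison step
# QFD relationship matrix: rows = requirements, columns = suppliers, 0-9 strength scale.
relationships = np.array([[9, 3, 1],
                          [3, 9, 3],
                          [1, 3, 9]])

scores = requirement_weights @ relationships         # weighted supplier scores
weights = scores / scores.sum()                      # relative weighting for later optimisation
ranking = np.argsort(scores)[::-1]
print("relative weights:", np.round(weights, 3))
print("supplier ranking (best first):", ranking)
```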
Abstract:
Linear models reach their limitations in applications with nonlinearities in the data. In this paper, new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed to a certain extent to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
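For reference, the standard Tornqvist-Theil form of the Divisia monetary index mentioned above is given below (the paper's exact construction of the index may differ).

```latex
% Growth of the Divisia aggregate as a share-weighted sum of component growth rates.
\ln M_t - \ln M_{t-1} = \sum_i \bar{s}_{it}\,\bigl(\ln m_{it} - \ln m_{i,t-1}\bigr),
\qquad
\bar{s}_{it} = \tfrac{1}{2}\bigl(s_{it} + s_{i,t-1}\bigr),
\qquad
s_{it} = \frac{\pi_{it}\, m_{it}}{\sum_j \pi_{jt}\, m_{jt}}
```

Here m_{it} is the holding of monetary asset i, pi_{it} its user cost and s_{it} its expenditure share; the Simple Sum measure is simply the unweighted sum of the m_{it}.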
Abstract:
How do signals from the two eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), where the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision. © 2007 VSP.
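A minimal sketch of a plausible instantiation of the two-stage architecture described above (a nearly linear monocular transducer with interocular suppression, binocular summation, then a second gain-control stage); the algebraic form and parameter values are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def two_stage_response(c_left, c_right, m=1.3, S=1.0, p=8.0, q=6.5, Z=0.08):
    """Binocular contrast response for left/right eye contrasts (in %); parameters illustrative."""
    stage1_L = c_left ** m / (S + c_left + c_right)    # monocular gain control (left)
    stage1_R = c_right ** m / (S + c_left + c_right)   # monocular gain control (right)
    B = stage1_L + stage1_R                            # binocular summation
    return B ** p / (Z + B ** q)                       # second (post-summation) gain control

# Discrimination depends on the local gradient of this function; matching equates its value.
for cL, cR in [(8.0, 8.0), (8.0, 0.3), (0.3, 8.0)]:
    print(cL, cR, round(two_stage_response(cL, cR), 3))
```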
Abstract:
Linear typing schemes can be used to guarantee non-interference, and hence the soundness of in-place update with respect to a functional semantics. But linear schemes are restrictive in practice, and more restrictive than necessary to guarantee soundness of in-place update. This limitation has prompted research into static analysis and more sophisticated typing disciplines to determine when in-place update may be safely used, or to combine linear and non-linear schemes. Here we contribute to this direction by defining a new typing scheme that better approximates the semantic property of soundness of in-place update for a functional semantics. We begin from the observation that some data are used only in a read-only context, after which they may safely be re-used before being destroyed. Formalising the in-place update interpretation in a machine model semantics allows us to refine this observation, motivating three usage aspects, apparent from the semantics, that are used to annotate function argument types. The aspects are (1) used destructively, (2) used read-only but shared with the result, and (3) used read-only and not shared with the result. The main novelty is aspect (2), which allows a linear value to be safely read and even aliased with a result of a function without being consumed. This novelty makes our type system more expressive than previous systems for functional languages in the literature. The system remains simple and intuitive, but it enjoys a strong soundness property whose proof is non-trivial. Moreover, our analysis features principal types and feasible type reconstruction, as shown in M. Konečný (In TYPES 2002 workshop, Nijmegen, Proceedings, Springer-Verlag, 2003).
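A minimal illustration of the three usage aspects, written in Python rather than the typed functional language the paper targets; the functions are hypothetical examples of the behaviours each aspect annotates, not the paper's calculus.

```python
from typing import List

def destructive_sort(xs: List[int]) -> List[int]:
    # Aspect (1): the argument is used destructively (updated in place),
    # so the caller must not rely on its old contents afterwards.
    xs.sort()
    return xs

def longest(xs: List[int], ys: List[int]) -> List[int]:
    # Aspect (2): each argument is only read, but the result is one of the
    # arguments themselves, so the result aliases its storage; neither list may be
    # destructively updated while the result is live, although both may be read again.
    return xs if len(xs) >= len(ys) else ys

def total(xs: List[int]) -> int:
    # Aspect (3): the argument is used read-only and the result shares nothing
    # with it, so the argument may later be re-used or destructively updated.
    return sum(xs)
```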
Abstract:
The main theme of research of this project concerns the study of neural networks to control uncertain and non-linear control systems. This involves the control of continuous time, discrete time, hybrid and stochastic systems with input, state or output constraints, while ensuring good performance. A great part of this project is devoted to opening frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. To design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. To design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. To develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models for many non-linear systems for use in adaptive control. Properties of static neural networks, which enabled successful design of stable adaptive control in the state feedback case, are also identified. A survey of the existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO control problems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
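A minimal sketch of the classical self-tuning scheme that the neural-network controllers described above generalise (the plant, noise level and reference are illustrative assumptions, not the project's simulations): recursive least squares identifies a simple model y[t+1] = a*y[t] + b*u[t] + noise, and a certainty-equivalence minimum-variance control law is recomputed from the current estimates at every step.

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, b_true = 0.8, 0.5                 # unknown plant parameters
theta = np.zeros(2)                       # estimates of [a, b]
P = 1000.0 * np.eye(2)                    # RLS covariance
y, r = 0.0, 1.0                           # current output, constant reference

for t in range(200):
    a_hat, b_hat = theta
    u = (r - a_hat * y) / b_hat if abs(b_hat) > 1e-3 else 1.0   # certainty-equivalence control
    y_next = a_true * y + b_true * u + 0.05 * rng.standard_normal()

    phi = np.array([y, u])                # regressor for the one-step-ahead model
    k = P @ phi / (1.0 + phi @ P @ phi)   # RLS gain
    theta = theta + k * (y_next - phi @ theta)
    P = P - np.outer(k, phi) @ P
    y = y_next

print("estimated [a, b]:", np.round(theta, 3), " final output:", round(y, 3))
```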