18 results for robust hedging
in Aston University Research Archive
Abstract:
We introduce a technique for quantifying and then exploiting uncertainty in nonlinear stochastic control systems. The approach is suboptimal though robust, and relies upon the approximation of the forward and inverse plant models by neural networks, which also estimate the intrinsic uncertainty. Sampling from the resulting Gaussian distributions of the inversion-based neurocontroller allows us to introduce a control law which is demonstrably more robust than traditional adaptive controllers.
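A minimal sketch of the sampling idea described in this abstract, under invented models: a hypothetical inverse model returns a Gaussian over candidate controls, candidates are drawn from it, and a hypothetical forward model keeps the candidate whose predicted output best matches the target. Neither model is from the paper; both are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def inverse_model(target):
    # Stand-in for the inverse plant network: mean control and its uncertainty.
    mu = 0.5 * target
    sigma = 0.1 + 0.05 * abs(target)
    return mu, sigma

def forward_model(u):
    # Stand-in for the forward plant network.
    return 2.0 * u + 0.01 * u**3

def robust_control(target, n_samples=200):
    mu, sigma = inverse_model(target)
    candidates = rng.normal(mu, sigma, n_samples)        # sample the Gaussian
    errors = np.abs(forward_model(candidates) - target)  # score with forward model
    return candidates[np.argmin(errors)]                 # keep the best candidate

print(robust_control(1.5))
```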
Abstract:
We introduce a novel inversion-based neuro-controller for solving control problems involving uncertain nonlinear systems that can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty for the outputs of neural networks can be obtained using the statistical properties of the networks. More generally, multicomponent distributions can be modelled by the mixture density network. In this work, a novel robust inverse control approach is obtained based on importance sampling from these distributions. This importance sampling provides a structured and principled approach to constrain the complexity of the search space for the ideal control law. The performance of the new algorithm is illustrated through simulations with example systems.
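As a hedged illustration of the mixture-density idea, the sketch below samples control candidates from a small Gaussian mixture (mixing coefficients, means and variances, as a mixture density network would output) and then weights them by how closely an assumed forward plant hits the target; the mixture parameters and the quadratic plant are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MDN output for one target state: three mixture components.
pi    = np.array([0.5, 0.3, 0.2])    # mixing coefficients (sum to 1)
mu    = np.array([0.2, 1.0, -0.7])   # component means of the control
sigma = np.array([0.1, 0.3, 0.2])    # component standard deviations

def sample_mdn(n):
    comp = rng.choice(len(pi), size=n, p=pi)   # pick components
    return rng.normal(mu[comp], sigma[comp])   # sample within each component

def forward_plant(u):
    return u**2 + 0.5 * u                      # illustrative plant response

target = 1.2
u = sample_mdn(500)
# Importance weights: favour candidates whose predicted output hits the target.
w = np.exp(-0.5 * ((forward_plant(u) - target) / 0.05) ** 2)
print(u[np.argmax(w)])
```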
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
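The Huber function mentioned above can be read as a weighting rule on standardised residuals. The sketch below shows only that weighting idea on a toy set of sensor readings; it is not the REML or projected process kriging machinery of the paper, and the threshold c = 1.345 is just a common default.

```python
import numpy as np

def huber_weight(r, c=1.345):
    # Full weight within c standard deviations, down-weighted beyond.
    r = np.asarray(r, dtype=float)
    w = np.ones_like(r)
    large = np.abs(r) > c
    w[large] = c / np.abs(r[large])
    return w

obs = np.array([1.02, 0.97, 1.05, 0.99, 9.50])   # last reading: faulty sensor
resid = (obs - np.median(obs)) / 0.05            # crude standardisation
w = huber_weight(resid)
robust_mean = np.sum(w * obs) / np.sum(w)        # outlier barely contributes
print(w, robust_mean)
```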
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol models allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
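To make the place/transition vocabulary concrete, here is a minimal, generic Petri net firing sketch (two sites becoming ready and a single commit transition); it is purely illustrative and does not reproduce the timed or temporal extensions, nor the modular projection technique, developed in the thesis.

```python
# Marking: tokens currently held by each place.
marking = {"ready_A": 1, "ready_B": 1, "committed": 0}

# Each transition maps input-place token demands to output-place token yields.
transitions = {"commit": ({"ready_A": 1, "ready_B": 1}, {"committed": 1})}

def enabled(name):
    ins, _ = transitions[name]
    return all(marking[p] >= n for p, n in ins.items())

def fire(name):
    ins, outs = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p, n in ins.items():
        marking[p] -= n
    for p, n in outs.items():
        marking[p] += n

if enabled("commit"):
    fire("commit")
print(marking)   # {'ready_A': 0, 'ready_B': 0, 'committed': 1}
```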
Abstract:
Modern distributed control systems comprise a set of processors which are interconnected using a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of applications are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
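A hedged sketch of the deadline idea only: a two-phase-commit-style coordinator that collects synchronous votes but aborts once a deadline is exceeded. The participants, delays and deadline are simulated values, and the fault-tolerance machinery of the thesis is not modelled.

```python
import time

DEADLINE = 0.05   # seconds allowed for the whole decision (illustrative)

def participant_vote(name, delay, vote=True):
    time.sleep(delay)   # simulated communication and processing delay
    return vote

def coordinate(participants):
    start = time.monotonic()
    for name, delay in participants:
        if not participant_vote(name, delay):
            return "ABORT (negative vote)"
        if time.monotonic() - start > DEADLINE:
            return "ABORT (deadline exceeded)"
    return "COMMIT"

print(coordinate([("site_1", 0.01), ("site_2", 0.01)]))   # fast votes -> COMMIT
print(coordinate([("site_1", 0.04), ("site_2", 0.04)]))   # slow votes -> ABORT
```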
Abstract:
This thesis focuses on the theoretical examination of exchange rate economic (operating) exposure within the context of the theory of the firm, and proposes some hedging solutions using currency options. The examination of economic exposure is based on such parameters as firms' objectives, industry structure and production cost efficiency. In particular, it examines a hypothetical exporting firm with costs in domestic currency, which faces competition from foreign firms in overseas markets and has a market share expansion objective. Within this framework, the hypothesis is established that economic exposure, portrayed in a diagram connecting export prices and real exchange rates, is asymmetric (i.e. the negative effects of a currency appreciation are greater than the positive effects of a currency depreciation). In this case, export business can be seen as a real option, given by exporting firms to overseas customers. Different scenarios about the asymmetry hypothesis can be derived for different assumptions about the determinants of economic exposure. Having established the asymmetry hypothesis, hedging against this exposure is analysed. The hypothesis is established that a currency call option should be used in hedging against asymmetric economic exposure. Further, some advanced currency options strategies are discussed, and their use in hedging several scenarios of exposure is indicated, establishing the hypothesis that the optimal options strategy is a function of the determinants of exposure. Some extensions of the theoretical analysis are examined. These include the hedging of multicurrency exposure using options, and the exposure of a purely domestic firm facing import competition. The empirical work addresses two issues: the empirical validity of the asymmetry hypothesis and the examination of the hedging effectiveness of currency options.
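A purely numerical illustration of the call-option hedge discussed above: an exporter with domestic-currency costs loses margin when the home currency appreciates, and a call option on the home currency (equivalently a put on the foreign currency) pays off in exactly those states. All prices, volumes and the fixed-volume assumption are invented for the example, and the option premium is ignored.

```python
# s = foreign-currency price of one unit of home currency (higher s = appreciation).
def export_profit(s, foreign_price=1.0, unit_cost_home=0.6, volume=100):
    revenue_home = volume * foreign_price / s    # foreign revenue converted home
    return revenue_home - volume * unit_cost_home

def call_on_home_currency(s, strike=1.0, notional=100):
    return notional * max(s - strike, 0.0)       # pays when the home currency is strong

for s in (0.90, 1.00, 1.10):                     # depreciation, flat, appreciation
    unhedged = export_profit(s)
    hedged = unhedged + call_on_home_currency(s)
    print(f"s={s:.2f}  unhedged={unhedged:7.2f}  hedged={hedged:7.2f}")
```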
Abstract:
MRI of fluids containing lipid-coated microbubbles has been shown to be an effective tool for measuring the local fluid pressure. However, the intrinsically buoyant nature of these microbubbles precludes lengthy measurements due to their vertical migration under gravity and pressure-induced coalescence. A novel preparation is presented which is shown to minimize both these effects for at least 25 min. By using a 2% polysaccharide gel base with a small concentration of glycerol and 1,2-distearoyl-sn-glycero-3-phosphocholine-coated gas microbubbles, MR measurements are made for pressures between 0.95 and 1.44 bar. The signal drifts due to migration and amalgamation are shown to be minimized for such an experiment whilst yielding very high NMR sensitivities of up to 38% signal change per bar.
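As a back-of-envelope reading of the quoted sensitivity (up to 38% signal change per bar), the snippet below inverts an observed fractional signal change into a pressure change, assuming the response is roughly linear over the 0.95-1.44 bar range studied; the numbers are illustrative, not the paper's calibration.

```python
sensitivity = 0.38        # fractional MR signal change per bar (quoted upper value)
observed_change = 0.10    # e.g. a 10% change in MR signal
delta_pressure = observed_change / sensitivity
print(f"Estimated pressure change: {delta_pressure:.2f} bar")   # about 0.26 bar
```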
Abstract:
Recently introduced surface nanoscale axial photonics (SNAP) makes it possible to fabricate high-Q-factor microresonators and other photonic microdevices by dramatically small deformation of the optical fiber surface. To become a practical and robust technology, the SNAP platform requires methods enabling reproducible modification of the optical fiber radius at the nanoscale. In this Letter, we demonstrate superaccurate fabrication of high-Q-factor microresonators by nanoscale modification of the optical fiber radius and refractive index using CO2 laser and UV excimer laser beam exposures. The achieved fabrication accuracy is better than 2 Å in variation of the effective fiber radius. © 2011 Optical Society of America.
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide−MHC binding affinity. The ISC−PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide−MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms - q2, SEP, and NC - ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
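The leave-one-out statistics quoted above (q2 and SEP) can be computed as sketched below. The sketch uses ordinary least squares on random stand-in data rather than the ISC-PLS models of the paper, and adopts PRESS/N for SEP, which is only one of the conventions in use.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))                               # 30 peptides, 4 descriptors
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(scale=0.2, size=30)

def loo_predictions(X, y):
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                      # leave sample i out
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        preds[i] = X[i] @ coef
    return preds

pred = loo_predictions(X, y)
press = np.sum((y - pred) ** 2)                            # predictive residual sum of squares
q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
sep = np.sqrt(press / len(y))                              # standard error of prediction
print(f"q2 = {q2:.3f}, SEP = {sep:.3f}")
```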
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
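A generic sketch of a segmented weighting function of the kind described (not the paper's own segmentation): residuals consistent with the bulk distribution keep full weight, moderate outliers are tapered, and gross outliers are removed. The cut-offs c1 and c2 and the MAD-based scale are illustrative choices.

```python
import numpy as np

def segmented_weights(residuals, c1=2.0, c2=3.0):
    med = np.median(residuals)
    scale = 1.4826 * np.median(np.abs(residuals - med))   # robust scale (MAD)
    r = np.abs(residuals - med) / scale
    w = np.ones_like(r)                                   # segment 1: full weight
    mid = (r > c1) & (r <= c2)
    w[mid] = (c2 - r[mid]) / (c2 - c1)                    # segment 2: linear taper
    w[r > c2] = 0.0                                       # segment 3: discard
    return w

res = np.array([0.1, -0.3, 0.2, 1.5, -8.0])
print(segmented_weights(res))   # roughly [1, 1, 1, 0.6, 0]
```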
Abstract:
Predicting species' potential and future distributions has become a relevant tool in biodiversity monitoring and conservation. In this data article we present the suitability map of a virtual species generated based on two bioclimatic variables, and a dataset containing more than 700,000 random observations at the extent of Europe. The dataset includes spatial attributes such as distance to roads, protected areas, country codes, and the habitat suitability of two spatially clustered species (a grassland and a forest species) and a widespread species.
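One simple way to define such a suitability surface from two bioclimatic variables is the product of two Gaussian response curves, as sketched below; the variables, optima and breadths are invented and may differ from those used to build the published dataset.

```python
import numpy as np

def gaussian_response(x, optimum, breadth):
    return np.exp(-0.5 * ((x - optimum) / breadth) ** 2)

def suitability(mean_temp_c, annual_precip_mm):
    # Product of unimodal responses to two (assumed) bioclimatic variables.
    return (gaussian_response(mean_temp_c, optimum=12.0, breadth=4.0) *
            gaussian_response(annual_precip_mm, optimum=800.0, breadth=250.0))

print(suitability(11.0, 750.0))   # near-optimal cell: suitability close to 1
print(suitability(25.0, 200.0))   # hot, dry cell: suitability close to 0
```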
Abstract:
Concurrent coding is an encoding scheme with 'holographic'-type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, but does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the process of decoding and can determine the computational load that would be expected, as well as describing the critical levels of noise and missing data at which false messages begin to be generated.
Abstract:
This paper proposes the use of 2-D differential decoding to improve the robustness of dual-polarization optical packet receivers, and demonstrates it in a wavelength-switching scenario for the first time.
Abstract:
In ensuring the quality of learning and teaching in Higher Education, self-evaluation is an important component of the process. An example would be the approach taken within the CDIO community, whereby self-evaluation against the CDIO standards is part of the quality assurance process. Eight European universities (Reykjavik University, Iceland; Turku University of Applied Sciences, Finland; Aarhus University, Denmark; Helsinki Metropolia University of Applied Sciences, Finland; Umeå University, Sweden; Telecom Bretagne, France; Aston University, United Kingdom; Queen's University Belfast, United Kingdom) are engaged in an EU-funded Erasmus+ project that is exploring the quality assurance process associated with active learning. The project involves the development of a new self-evaluation framework that feeds into a 'Marketplace' where participating institutions can be paired up and then engage in peer evaluations and sharing around each institution's approach to, and implementation of, active learning. All of the partner institutions are engaged in the application of CDIO within their engineering programmes, and this has provided a common starting point for the partnership to form and the project to be developed. Although the initial focus will be CDIO, the longer term aim is that the approach could be of value beyond CDIO and within other disciplines. The focus of this paper is the process by which the self-evaluation framework is being developed and the form of the draft framework. In today's Higher Education environment, the need to comply with Quality Assurance standards is an ever-present feature of programme development and review. When engaging in a project that spans several countries, the wealth of applicable standards and guidelines is significant. In working towards the development of a robust Self Evaluation Framework for this project, the project team decided to take a wide view of the available resources to ensure a full consideration of different requirements and practices. The approach to developing the framework considered: a) institutional standards and processes; b) national standards and processes, e.g. QAA in the UK; c) documents relating to regional / global accreditation schemes, e.g. ABET; d) requirements / guidelines relating to particular learning and teaching frameworks, e.g. CDIO. The resulting draft self-evaluation framework is to be implemented within the project team to start with, to support the initial 'Marketplace' pairing process. Following this initial work, changes will be considered before a final version is made available as part of the project outputs. Particular consideration has been paid to the extent of the framework, as a key objective of the project is to ensure that the approach to quality assurance has impact but is not overly demanding in terms of time or paperwork. In other words, it is focused on action and value added to staff, students and the programmes being considered.