843 results for Discrete-time control
Abstract:
We revisit the one-unit gradient ICA algorithm derived from the kurtosis function. By carefully studying the properties of the stationary points of the discrete-time one-unit gradient ICA algorithm, we prove convergence under a suitable condition on the learning rate. This condition helps alleviate the guesswork that accompanies the choice of a suitable learning rate in practical computation. These results may be useful for extracting independent source signals on-line.
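To make the setting concrete, here is a minimal sketch of a one-unit gradient ICA iteration with a kurtosis contrast, assuming zero-mean, whitened data; the function name, fixed step size, and iteration count are illustrative choices, not the paper's prescription (whose learning-rate condition is precisely the point of the analysis).

```python
import numpy as np

def one_unit_gradient_ica(Z, lr=0.1, n_iter=500, seed=0):
    """One-unit gradient ICA on whitened data Z (dim x samples) using the
    kurtosis contrast. A minimal sketch: the learning-rate condition from the
    abstract is replaced by a small fixed step size chosen by hand."""
    rng = np.random.default_rng(seed)
    d, n = Z.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ Z                                  # current estimate of one source
        kurt = np.mean(y**4) - 3.0                 # excess kurtosis (unit-variance y)
        grad = (Z * y**3).mean(axis=1) - 3.0 * w   # kurtosis gradient for whitened data
        w += lr * np.sign(kurt) * grad             # ascend the magnitude of the kurtosis
        w /= np.linalg.norm(w)                     # project back onto the unit sphere
    return w
```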
Abstract:
This paper presents results from the first use of neural networks for the real-time feedback control of high-temperature plasmas in a Tokamak fusion experiment. The Tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the Tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most Tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multi-layer perceptron, using a hybrid of digital and analogue technology, has been developed.
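As a rough illustration of the controller structure described above, the sketch below gives the forward pass of a small multi-layer perceptron mapping magnetic diagnostic signals to plasma control parameters. The layer sizes, tanh nonlinearity, and random weights are placeholder assumptions standing in for the paper's trained, hardware-implemented network.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Single-hidden-layer perceptron of the kind described above:
    magnetic diagnostic signals in, plasma shape/position controls out."""
    h = np.tanh(W1 @ x + b1)   # hidden layer (analogue hardware in the paper)
    return W2 @ h + b2         # linear output layer

# Illustrative dimensions: 16 flux-loop measurements -> 4 control parameters
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 16)), np.zeros(32)
W2, b2 = rng.standard_normal((4, 32)), np.zeros(4)
controls = mlp_forward(rng.standard_normal(16), W1, b1, W2, b2)
```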
Abstract:
When making predictions with complex simulators it can be important to quantify the various sources of uncertainty. Errors in the structural specification of the simulator, for example due to missing processes or incorrect mathematical specification, can be a major source of uncertainty but are often ignored. We introduce a methodology for inferring the discrepancy between the simulator and the system in discrete-time dynamical simulators. We assume a structural form for the discrepancy function, and show how to infer the maximum-likelihood parameter estimates using a particle filter embedded within a Monte Carlo expectation maximization (MCEM) algorithm. We illustrate the method on a conceptual rainfall-runoff simulator (logSPM) used to model the Abercrombie catchment in Australia, and assess the simulator and discrepancy model on the basis of their predictive performance using proper scoring rules. Supplementary material for this article is available online.
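The sketch below outlines the general shape of such a scheme: a bootstrap particle filter embedded in an MCEM loop, assuming a scalar state, a discrepancy that is linear in the state, and Gaussian noise with hand-picked scales. All names, noise levels, and the linear discrepancy form are illustrative assumptions; this is not the logSPM setup.

```python
import numpy as np

def particle_mcem(y, simulator, theta0, n_particles=500, n_em=20, seed=1):
    """MCEM with an embedded bootstrap particle filter (illustrative sketch).
    Model assumed: x_t = simulator(x_{t-1}) + delta(x_{t-1}; theta) + noise,
    y_t = x_t + observation noise, with delta linear in the state."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_em):
        # E-step: bootstrap particle filter under the current theta
        x = rng.standard_normal(n_particles)
        pairs = []
        for yt in y:
            x_new = simulator(x) + np.polyval(theta, x) \
                    + 0.1 * rng.standard_normal(n_particles)      # propagate
            w = np.exp(-0.5 * ((yt - x_new) / 0.2) ** 2)          # likelihood weights
            w /= w.sum()
            idx = rng.choice(n_particles, n_particles, p=w)       # resample
            pairs.append((x[idx], x_new[idx]))                    # matched transitions
            x = x_new[idx]
        # M-step: refit the discrepancy to the one-step residuals
        # (filtered paths stand in for full smoothing, for brevity)
        xs = np.concatenate([p[0] for p in pairs])
        resid = np.concatenate([p[1] for p in pairs]) - simulator(xs)
        theta = np.polyfit(xs, resid, deg=1)
    return theta

# Usage sketch: a toy simulator missing a damping term the discrepancy absorbs
theta_hat = particle_mcem(y=np.sin(0.1 * np.arange(100)),
                          simulator=lambda x: 0.9 * x, theta0=[0.0, 0.0])
```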
Abstract:
Despite the availability of various control techniques and project control software, many construction projects still do not achieve their cost and time objectives. Research in this area has so far mainly been devoted to identifying the causes of cost and time overruns; there is limited research on the factors inhibiting the ability of practitioners to effectively control their projects. To fill this gap, a survey was conducted of 250 construction project organizations in the UK, followed by face-to-face interviews with experienced practitioners from 15 of these organizations. The common factors that inhibit both time and cost control during construction projects were first identified. Subsequently, 90 mitigating measures were developed and recommended for the top five inhibiting factors: design changes, risks/uncertainties, inaccurate evaluation of project time/duration, complexities, and non-performance of subcontractors. These mitigating measures were classified as preventive, predictive, corrective, and organizational. They can be used as a checklist of good practice and can help project managers improve the effectiveness of control of their projects.
Abstract:
Numerous techniques have been developed to control the cost and time of construction projects. However, there is limited research on issues surrounding the practical use of these techniques. To address this, a survey of the top 150 construction companies and 100 construction consultancies in the UK was conducted, aimed at identifying common project control practices and the factors inhibiting effective project control in practice. It found that, despite the widespread application of control techniques, a high proportion of respondents still experienced cost and time overruns on a significant proportion of their projects. Analysis of the survey results concluded that more effort should be directed at managing the top project-control inhibiting factors identified. This paper outlines some measures for mitigating these inhibiting factors so that the outcome of project time and cost control can be improved in practice.
Abstract:
We study a class of models that have been used successfully in the modelling of climatological sequences. These models are based on the notion of renewal. We first examine the probabilistic aspects of these models, and then study the estimation of their parameters and the asymptotic properties of the estimators, in particular consistency and asymptotic normality. As applications, we discuss two particular classes of discrete-time alternating renewal processes. The first class is defined by sojourn-time distributions that are translated negative binomial distributions; the second class, suggested by Green, is derived from a continuous-time alternating renewal process whose sojourn-time distributions are exponential with parameters α₀ and α₁ respectively.
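For the first class, a simulation sketch makes the construction explicit: alternate between two states, drawing each sojourn length from a translated (shifted) negative binomial distribution. The parameter values, the shift of one, and the wet/dry reading are illustrative assumptions.

```python
import numpy as np

def simulate_alternating_renewal(n_steps, p0, p1, r=1, shift=1, seed=0):
    """Discrete-time alternating renewal sequence: sojourn times in each of the
    two states are drawn from translated negative binomial distributions."""
    rng = np.random.default_rng(seed)
    seq, state = [], 0
    while len(seq) < n_steps:
        p = p0 if state == 0 else p1
        sojourn = shift + rng.negative_binomial(r, p)  # translated NB sojourn time
        seq.extend([state] * sojourn)
        state = 1 - state                              # alternate between the states
    return np.array(seq[:n_steps])

wet_dry = simulate_alternating_renewal(365, p0=0.3, p1=0.5)  # e.g. a wet/dry sequence
```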
Abstract:
In this paper, we propose a new edge-based matching kernel for graphs based on discrete-time quantum walks. To this end, we commence by transforming a graph into a directed line graph. The reasons for using the line-graph structure are twofold. First, the directed line graph is a dual representation of the graph: each vertex of the line graph represents a corresponding edge of the original graph. Second, we show that the discrete-time quantum walk can be seen as a walk on the line graph, whose state space is the vertex set of the line graph, i.e., the edge set of the original graph. As a result, the directed line graph provides an elegant way of developing a new edge-based matching kernel based on discrete-time quantum walks. For a pair of graphs, we compute the h-layer depth-based representation for each vertex of their directed line graphs by computing entropic signatures (derived from discrete-time quantum walks on the line graphs) on the family of K-layer expansion subgraphs rooted at that vertex; that is, we compute depth-based representations for the edges of the original graphs through their directed line graphs. Based on these representations, we define an edge-based matching method for a pair of graphs by aligning the h-layer depth-based representations computed through their directed line graphs. The new edge-based matching kernel is then computed by counting the number of matched vertices identified by the matching method on the directed line graphs. Experiments on standard graph datasets demonstrate the effectiveness of the new kernel.
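A sketch of the walk ingredient named above, under common assumptions: the state space is the set of directed arcs (the vertices of the directed line graph), and one step is a Grover coin on the arcs leaving each vertex followed by the flip-flop shift onto reversed arcs. The Grover coin is the usual choice for such walks, assumed here; the entropic signatures and depth-based alignment of the kernel itself are not reproduced.

```python
import numpy as np

def dtqw_step(psi, arcs, adj):
    """One discrete-time quantum walk step on the arc set of a graph
    (equivalently, on the vertices of its directed line graph)."""
    arc_index = {a: i for i, a in enumerate(arcs)}
    mixed = np.zeros_like(psi)
    # Grover coin: mix amplitudes among arcs leaving the same vertex
    for v, out in adj.items():
        idx = [arc_index[(v, u)] for u in out]
        d = len(idx)
        coin = 2.0 / d * np.ones((d, d)) - np.eye(d)   # Grover diffusion on out-arcs
        mixed[idx] = coin @ psi[idx]
    # Flip-flop shift: amplitude on arc (v, u) moves to the reversed arc (u, v)
    shifted = np.zeros_like(mixed)
    for (v, u), i in arc_index.items():
        shifted[arc_index[(u, v)]] = mixed[i]
    return shifted

# Triangle graph: arcs are both orientations of each edge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
arcs = [(v, u) for v in adj for u in adj[v]]
psi = np.ones(len(arcs), dtype=complex) / np.sqrt(len(arcs))
psi = dtqw_step(psi, arcs, adj)
```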
Abstract:
In this paper, we develop a new graph kernel by using the quantum Jensen-Shannon divergence and the discrete-time quantum walk. To this end, we commence by performing a discrete-time quantum walk to compute a density matrix over each graph being compared. For a pair of graphs, we compare the mixed quantum states represented by their density matrices using the quantum Jensen-Shannon divergence. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the pair of graphs is defined by exponentiating the negative quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets, and demonstrate the effectiveness of the new kernel.
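The kernel itself is direct to state once the density matrices are in hand. The sketch below computes the quantum Jensen-Shannon divergence from von Neumann entropies and exponentiates its negative, leaving abstract how the quantum walk produces the density matrices.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log rho), via the eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]            # drop numerically zero eigenvalues
    return float(-(lam * np.log(lam)).sum())

def qjsd_kernel(rho1, rho2):
    """Kernel value k = exp(-QJSD(rho1, rho2)) between two graph density
    matrices, as in the construction described above."""
    qjsd = von_neumann_entropy(0.5 * (rho1 + rho2)) \
           - 0.5 * (von_neumann_entropy(rho1) + von_neumann_entropy(rho2))
    return np.exp(-qjsd)
```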
Abstract:
In this study, discrete-time one-factor models of the term structure of interest rates and their application to the pricing of interest rate contingent claims are examined theoretically and empirically. The first chapter discusses the issues involved in the pricing of interest rate contingent claims and describes the Ho and Lee (1986), Maloney and Byrne (1989), and Black, Derman, and Toy (1990) discrete-time models. In the second chapter, a general discrete-time model of the term structure is presented, from which the Ho and Lee, Maloney and Byrne, and Black, Derman, and Toy models can all be obtained. The general model also allows the specification of an additional model, the ExtendedMB model. The third chapter illustrates the application of the discrete-time models to the pricing of a variety of interest rate contingent claims. In the final chapter, the performance of the Ho and Lee, Black, Derman, and Toy, and ExtendedMB models in the pricing of Eurodollar futures options is investigated empirically. The results indicate that the Black, Derman, and Toy and ExtendedMB models outperform the Ho and Lee model; little difference is detected between the Black, Derman, and Toy and ExtendedMB models.
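As a flavour of the lattice machinery involved, the sketch below prices a zero-coupon bond by backward induction on a recombining Ho-Lee-style short-rate lattice with risk-neutral probability 1/2. The constant drift is an illustrative simplification: in the Ho and Lee model the drift is time-dependent and calibrated to the initial term structure, a step omitted here.

```python
import numpy as np

def holee_lattice(r0, drift, sigma, n):
    """Recombining short-rate lattice: r(t, i) = r0 + drift*t + sigma*(2i - t),
    where i counts up-moves. Parameter values below are illustrative."""
    return [np.array([r0 + drift * t + sigma * (2 * i - t) for i in range(t + 1)])
            for t in range(n + 1)]

def price_zcb(lattice, maturity):
    """Backward induction with risk-neutral up/down probability 1/2 and
    one-period discounting at each node's short rate."""
    v = np.ones(maturity + 1)                   # payoff of 1 at maturity
    for t in range(maturity - 1, -1, -1):
        r = lattice[t]
        v = 0.5 * (v[1:] + v[:-1]) / (1.0 + r)  # discounted expected value
    return float(v[0])

lat = holee_lattice(r0=0.05, drift=0.001, sigma=0.005, n=10)
print(price_zcb(lat, maturity=10))
```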
Abstract:
The future power grid will effectively utilize renewable energy resources and distributed generation to respond to energy demand while incorporating information technology and communication infrastructure for optimum operation. This dissertation contributes to the development of real-time techniques for wide-area monitoring and for secure real-time control and operation of hybrid power systems.

To handle the increased level of real-time data exchange, this dissertation develops a supervisory control and data acquisition (SCADA) system equipped with a state estimation scheme driven by the real-time data. The system is verified on a specially developed laboratory-based test bed, a hardware and software platform that emulates the scenarios of a real hybrid power system with a high degree of similarity to practical utility systems. It includes phasor measurements at hundreds of points on the system, obtained from a specially developed laboratory-based Phasor Measurement Unit (PMU) used alongside existing commercial PMUs on the interconnected system. The studies conducted include a new technique for detecting partially islanded microgrids, in addition to several real-time techniques for synchronization and parameter identification of hybrid systems.

Moreover, given the widespread integration of renewable energy resources through DC microgrids, the dissertation examines several practical cases for improving the interoperability of such systems. The increasing number of small, dispersed generating stations, and their need to connect quickly and properly to AC grids, also led this work to explore the challenges that arise in synchronizing generators to the grid, and to introduce a Dynamic Brake system that improves the process of connecting distributed generators to the power grid.

Real-time operation and control require secure data communication. A research effort in this dissertation therefore develops a Trusted Sensing Base (TSB) process for data communication security. The innovative TSB approach improves the security of the power grid as a cyber-physical system; it is based on available GPS synchronization technology and provides protection against confidentiality attacks on critical power system infrastructure.
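As one concrete building block of such a SCADA scheme, the sketch below shows classic weighted least-squares state estimation from real-time measurements. This is a generic linear (DC) formulation standing in for the dissertation's estimator, and the matrices are illustrative; practical estimators iterate over the nonlinear AC equations.

```python
import numpy as np

def wls_state_estimate(H, z, R):
    """Weighted least-squares state estimation: minimize (z - Hx)' R^-1 (z - Hx).
    H: measurement matrix, z: measurements, R: measurement error covariance."""
    W = np.linalg.inv(R)                       # weight by inverse covariance
    G = H.T @ W @ H                            # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)     # state estimate x_hat

# Illustrative: 3 redundant measurements of a 2-state system
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
z = np.array([1.02, 0.98, 0.05])
R = np.diag([1e-4, 1e-4, 4e-4])
print(wls_state_estimate(H, z, R))
```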
Abstract:
We introduce a discrete-time fibre channel model that provides an accurate analytical description of signal-signal and signal-noise interference, with memory defined by the interplay of nonlinearity and dispersion. We also derive the conditional pdf of the signal distortion, which captures non-circular complex multivariate symbol interactions, providing the necessary platform for the analysis of channel statistics and for capacity estimation in fibre-optic links.
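For orientation, the sketch below implements a generic first-order perturbation model of this kind, in which each received symbol picks up nonlinear signal-signal triplet terms with memory. The kernel C and the noise treatment are illustrative placeholders and do not reproduce the paper's model, which additionally captures signal-noise interaction and the non-circular statistics.

```python
import numpy as np

def perturbative_channel(x, C, noise_std=0.0, seed=0):
    """Discrete-time channel with nonlinear memory (illustrative sketch):
    y_k = x_k + sum_{m,n} C[m,n] x_{k+m} x_{k+n} conj(x_{k+m+n}) + noise.
    x is assumed to be a complex symbol sequence; C has shape (2M+1, 2M+1)."""
    rng = np.random.default_rng(seed)
    M = C.shape[0] // 2                        # memory half-width
    xp = np.pad(x, 2 * M)                      # zero-pad to handle the edges
    y = np.empty_like(x)
    for k in range(len(x)):
        kk = k + 2 * M
        acc = 0.0 + 0.0j
        for m in range(-M, M + 1):
            for n in range(-M, M + 1):
                acc += C[m + M, n + M] * xp[kk + m] * xp[kk + n] * np.conj(xp[kk + m + n])
        y[k] = x[k] + acc                      # linear term plus nonlinear distortion
    noise = rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x))
    return y + noise_std * noise

x = np.exp(1j * 2 * np.pi * np.random.default_rng(2).random(64))  # unit-power symbols
C = 1e-3 * np.ones((3, 3), dtype=complex)                          # toy memory-1 kernel
y = perturbative_channel(x, C, noise_std=0.01)
```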
Abstract:
The BlackEnergy malware targeting critical infrastructures has a long history. It has evolved over time from a simple DDoS platform into quite sophisticated plug-in-based malware. The plug-in architecture couples a persistent malware core with easily installable attack-specific modules for DDoS, spamming, info-stealing, remote access, boot-sector formatting, and so on. BlackEnergy has been involved in several high-profile cyber-physical attacks, including the Ukraine power grid attack of December 2015. This paper investigates the evolution of BlackEnergy and its cyber attack capabilities. It presents a basic cyber attack model used by BlackEnergy for targeting industrial control systems. In particular, the paper analyzes the cyber threats BlackEnergy poses to synchrophasor-based systems, which are used for real-time control and monitoring functions in the smart grid. Several BlackEnergy-based attack scenarios are investigated by exploiting vulnerabilities in two widely used synchrophasor communication standards: (i) IEEE C37.118 and (ii) IEC 61850-90-5. Specifically, the paper addresses reconnaissance, DDoS, man-in-the-middle, and replay/reflection attacks on IEEE C37.118 and IEC 61850-90-5. Further, the paper investigates protection strategies for the detection and prevention of BlackEnergy-based cyber-physical attacks.
Abstract:
In-situ characterisation of thermocouple sensors is a challenging problem. Recently the authors presented a blind characterisation technique based on the cross-relation method of blind identification. The method allows in-situ identification of two thermocouple probes, each with a different dynamic response, using only sampled sensor measurement data. While the technique offers certain advantages over alternative methods, including low estimation variance and the ability to compensate for noise-induced bias, the robustness of the method is limited by the multimodal nature of the cost function. In this paper, a normalisation term is proposed which improves the convexity of the cost function. Further, a hybrid approach is presented that exploits the advantages of both normalisation and bias compensation. It is found that the optimum of the hybrid cost function is less biased and more stable than when only normalisation is applied. All results were verified by simulation.
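A sketch of the cross-relation idea with a normalisation term, assuming first-order sensor models: passing each sensor's output through the other sensor's candidate model should yield identical signals at the true parameters. The norm-based normalisation below is an illustrative stand-in for the paper's term; it penalises the degenerate solution where both candidate time constants grow without bound (a -> 1), which otherwise drives the raw cost to zero.

```python
import numpy as np
from scipy.signal import lfilter

def cr_cost(a_hat, y1, y2, normalise=True):
    """Cross-relation cost for two first-order sensor models with transfer
    function H(a): z[k] = a*z[k-1] + (1-a)*u[k]. At the true (a1, a2) the two
    cross-filtered signals coincide, since LTI filters commute."""
    a1, a2 = a_hat
    z12 = lfilter([1 - a2], [1, -a2], y1)   # sensor-2 model applied to sensor-1 data
    z21 = lfilter([1 - a1], [1, -a1], y2)   # sensor-1 model applied to sensor-2 data
    e = z12 - z21
    J = float(e @ e)
    if normalise:
        J /= (1 - a1) ** 2 + (1 - a2) ** 2  # illustrative normalisation term
    return J

# Usage sketch: two simulated probes observing the same temperature signal
u = np.cumsum(np.random.default_rng(3).standard_normal(2000)) * 0.01
y1 = lfilter([1 - 0.90], [1, -0.90], u)     # true a1 = 0.90
y2 = lfilter([1 - 0.97], [1, -0.97], u)     # true a2 = 0.97
print(cr_cost((0.90, 0.97), y1, y2), cr_cost((0.95, 0.95), y1, y2))
```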