80 results for Wind power, Gaussian Process, Similar Pattern, Forecasting
Abstract:
This chapter discusses network protection of high-voltage direct current (HVDC) transmission systems for large-scale offshore wind farms where the HVDC system utilizes voltage-source converters. The multi-terminal HVDC network topology, protection allocation, and configuration are discussed, and DC circuit breaker and protection relay configurations are studied for different fault conditions. A detailed protection scheme is designed with a solution that does not require relay communication. Advanced understanding of protection system design and operation is necessary for reliable and safe operation of meshed HVDC systems under fault conditions. Meshed HVDC systems are important as they will be used to interconnect large-scale offshore wind generation projects. Offshore wind generation is growing rapidly and offers a means of securing energy supply and addressing emissions targets whilst minimising community impacts. There are ambitious plans for such projects in Europe and in the Asia-Pacific region, all of which will require a reliable yet economic system to generate, collect, and transmit electrical power from renewable resources. Clustered offshore wind farms are efficient and have potential as a significant low-carbon energy source, but they require a reliable collection and transmission system. Offshore wind power generation is a relatively new area and lacks the systematic fault analysis and operational experience needed to support further development. Appropriate fault protection schemes are therefore required, and this chapter highlights the process of developing and assessing such schemes. The chapter illustrates the basic meshed topology, identifies the need for distance evaluation and appropriate cable models, and then details the design and operation of the protection scheme, with simulation results used to illustrate its operation. © Springer Science+Business Media Singapore 2014.
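The chapter's own scheme is not reproduced here, but as a generic, hypothetical illustration of what a communication-free DC relay criterion can look like, a local trip decision might combine an undervoltage check with a current rate-of-rise check. All names, thresholds, and the logic below are assumptions for illustration only, not the chapter's design.

```python
# Generic illustration of a communication-free DC-side relay criterion:
# trip on local measurements only (undervoltage plus current rate-of-rise).
# Thresholds and logic are hypothetical, not the chapter's actual scheme.

def relay_trip(v_dc_pu, di_dt_ka_per_ms, v_min_pu=0.8, di_dt_max=1.5):
    """Return True if local measurements indicate an in-zone DC fault."""
    undervoltage = v_dc_pu < v_min_pu            # DC voltage collapse
    steep_current = di_dt_ka_per_ms > di_dt_max  # fast current rise
    return undervoltage and steep_current

print(relay_trip(v_dc_pu=0.45, di_dt_ka_per_ms=3.2))   # fault case -> True
print(relay_trip(v_dc_pu=0.97, di_dt_ka_per_ms=0.1))   # normal case -> False
```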
Abstract:
Although maximum power point tracking (MPPT) is crucial in the design of a wind power generation system, control strategies must also be considered for conditions that require a power reduction, called de-loading in this paper. A coordinated control scheme for a proposed current source converter (CSC) based DC wind energy conversion system is presented. The scheme combines coordinated control of the pitch angle, a DC load-dumping chopper, and the DC/DC converter to achieve rapid wind farm de-loading. MATLAB/Simulink simulations and experiments, both conducted at the same power level, are used to validate the effectiveness of the control scheme. © 2013 IEEE.
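The paper gives the full converter-level design; purely as a rough picture of the coordination idea, the sketch below shows one way a supervisory routine might split a requested power reduction across a converter set-point, a dump chopper, and pitch action. The function name, priority ordering, and limits are hypothetical assumptions, not the authors' scheme.

```python
# Hypothetical sketch of coordinated de-loading: reduce output quickly by
# combining a DC/DC converter set-point change, a dump chopper, and pitch
# action. Priority ordering and limits below are illustrative assumptions.

def coordinate_deloading(p_current_mw, p_target_mw,
                         chopper_capacity_mw=2.0, pitch_rate_mw_per_s=0.5):
    """Return actuator commands for one supervisory control step."""
    reduction = max(p_current_mw - p_target_mw, 0.0)

    # Fast path: the DC/DC converter set-point absorbs the command immediately.
    converter_setpoint = p_target_mw

    # The chopper burns off power the turbine cannot shed instantly.
    chopper_mw = min(reduction, chopper_capacity_mw)

    # Pitch control removes the remainder over the following seconds.
    pitch_seconds = max(reduction - chopper_mw, 0.0) / pitch_rate_mw_per_s

    return {"converter_setpoint_mw": converter_setpoint,
            "chopper_dump_mw": chopper_mw,
            "pitch_ramp_seconds": pitch_seconds}

print(coordinate_deloading(p_current_mw=10.0, p_target_mw=6.5))
```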
Abstract:
The Bayesian analysis of neural networks is difficult because the prior over functions has a complex form, leading to implementations that either make approximations or use Monte Carlo integration techniques. In this paper I investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations. The method has been tested on two challenging problems and has produced excellent results.
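The exact matrix computations this abstract refers to are the standard GP regression equations. A minimal sketch, assuming a squared-exponential covariance and Gaussian noise (my own illustration, not code from the paper):

```python
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between inputs A (n x d) and B (m x d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, noise=0.1):
    """Exact GP posterior mean and variance via matrix operations."""
    K = sq_exp_kernel(X, X) + noise**2 * np.eye(len(X))
    K_s = sq_exp_kernel(X, X_star)
    K_ss = sq_exp_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)                        # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha                             # predictive means
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - (v**2).sum(0)              # predictive variances
    return mean, var

X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=20)
print(gp_predict(X, y, np.array([[2.5], [6.0]])))
```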
Abstract:
The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters, have been tested on a number of challenging problems and have produced excellent results.
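The optimization route mentioned here typically maximises the log marginal likelihood of the data with respect to the hyperparameters. A minimal sketch of that objective for a squared-exponential kernel (the kernel choice and log-space parametrisation are my assumptions; the HMC averaging alternative is not shown):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_params, X, y):
    """Negative log p(y | X, theta) for a squared-exponential GP."""
    ell, sig2, noise2 = np.exp(log_params)       # log-space keeps params positive
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = sig2 * np.exp(-0.5 * d2 / ell**2) + noise2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.log(np.diag(L)).sum()
            + 0.5 * len(X) * np.log(2 * np.pi))

X = np.linspace(0, 5, 30)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=30)
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(X, y))
print("fitted lengthscale, signal var, noise var:", np.exp(res.x))
```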
Abstract:
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models, and the use of Gaussian processes in classification problems.
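The change of viewpoint described here can be checked numerically: Bayesian linear regression with a Gaussian prior on the weights gives the same prediction as a GP whose covariance is the implied linear kernel k(x, x') = sw2 * x . x'. A small sketch of that equivalence (my own illustration, with assumed prior and noise variances):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(15, 2))                 # inputs
w_true = np.array([1.5, -0.7])
y = X @ w_true + 0.1 * rng.normal(size=15)
x_star = np.array([0.3, -1.2])
sw2, sn2 = 1.0, 0.01                         # prior weight variance, noise variance

# Weight-space view: Gaussian posterior over w, then predict w_post . x_star
A = X.T @ X / sn2 + np.eye(2) / sw2
w_post = np.linalg.solve(A, X.T @ y / sn2)
pred_weights = w_post @ x_star

# Function-space view: GP with linear kernel k(x, x') = sw2 * x . x'
K = sw2 * X @ X.T + sn2 * np.eye(15)
k_star = sw2 * X @ x_star
pred_gp = k_star @ np.linalg.solve(K, y)

print(pred_weights, pred_gp)   # identical up to numerical precision
```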
Abstract:
We consider the problem of assigning an input vector x to one of m classes by predicting P(c|x) for c = 1, ..., m. For a two-class problem, the probability of class 1 given x is estimated by s(y(x)), where s(y) = 1/(1 + e^(-y)). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace's approximation. The method is generalized to multi-class problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.
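The Laplace approximation referred to here finds the mode of the posterior over y by Newton iteration. A minimal sketch for the two-class case using the standard stable textbook formulation (the covariance choice and toy data are my assumptions, and this is not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_mode(K, t, n_iter=20):
    """Newton iteration for the mode of p(f | t), t in {0,1}, logistic likelihood."""
    n = len(t)
    f = np.zeros(n)
    for _ in range(n_iter):
        pi = sigmoid(f)
        W = pi * (1.0 - pi)                 # -d^2 log p(t|f) / df^2
        sqrtW = np.sqrt(W)
        B = np.eye(n) + sqrtW[:, None] * K * sqrtW[None, :]
        L = np.linalg.cholesky(B)
        b = W * f + (t - pi)
        a = b - sqrtW * np.linalg.solve(L.T, np.linalg.solve(L, sqrtW * (K @ b)))
        f = K @ a                           # updated mode estimate
    return f

# Toy data: 1-D inputs with an RBF covariance (illustrative choice)
X = np.linspace(-3, 3, 40)[:, None]
t = (X.ravel() > 0).astype(float)
K = np.exp(-0.5 * (X - X.T) ** 2)
f_hat = laplace_mode(K, t)
print(sigmoid(f_hat)[:5], sigmoid(f_hat)[-5:])   # class-1 probabilities at the mode
```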
Abstract:
We develop an approach for sparse representations of Gaussian Process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the GP model. By using an appealing parametrisation and projection techniques that use the RKHS norm, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows for the propagation of both predictions and Bayesian error measures. The significance and robustness of our approach is demonstrated on a variety of experiments.
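As one concrete picture of what a sparse subsample buys, the sketch below computes a projected-process-style predictive mean from m basis points, so the expensive n x n solve never appears. The paper's sequential, online selection rule is not reproduced; choosing the subsample at random here is an assumption for illustration only.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sparse_gp_mean(X, y, X_star, m=15, noise=0.1, seed=0):
    """Projected-process predictive mean using m randomly chosen basis points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)   # stand-in for the paper's
    Xm = X[idx]                                       # sequential selection rule
    Kmm = rbf(Xm, Xm)
    Kmn = rbf(Xm, X)
    # Sigma = (Kmm + noise^-2 Kmn Kmn^T)^-1, an m x m solve instead of n x n
    A = Kmm + (Kmn @ Kmn.T) / noise**2
    w = np.linalg.solve(A, Kmn @ y) / noise**2
    return rbf(X_star, Xm) @ w

X = np.linspace(0, 10, 500)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(1).normal(size=500)
print(sparse_gp_mean(X, y, np.array([[2.0], [7.5]])))
```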
Abstract:
Based on a statistical mechanics approach, we develop a method for approximately computing average case learning curves and their sample fluctuations for Gaussian process regression models. We give examples for the Wiener process and show that universal relations (that are independent of the input distribution) between error measures can be derived.
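Average-case learning curves of the kind analysed here can also be estimated by plain Monte Carlo: sample functions from the prior, train on n points, and average the squared test error over many draws. A small sketch for the Wiener-process covariance k(s, t) = min(s, t) (my own illustration, not the paper's statistical-mechanics calculation; grid, noise level, and trial counts are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
t_grid = np.linspace(0.01, 1.0, 200)
K_full = np.minimum.outer(t_grid, t_grid)       # Wiener covariance min(s, t)
L_full = np.linalg.cholesky(K_full + 1e-10 * np.eye(200))
noise = 0.1

def avg_error(n, trials=200):
    """Average squared test error after training on n random points."""
    errs = []
    for _ in range(trials):
        f = L_full @ rng.normal(size=200)        # sample a true function
        idx = rng.choice(200, size=n, replace=False)
        K = K_full[np.ix_(idx, idx)] + noise**2 * np.eye(n)
        k_s = K_full[idx, :]                     # train-to-grid covariances
        y = f[idx] + noise * rng.normal(size=n)
        f_hat = k_s.T @ np.linalg.solve(K, y)    # posterior mean on the grid
        errs.append(np.mean((f_hat - f) ** 2))
    return np.mean(errs)

for n in (2, 5, 10, 20, 40):                     # empirical learning curve
    print(n, avg_error(n))
```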
Abstract:
This note explores the regulatory process of UK privatised utilities through the periodic review of prices. It provides a brief history of the privatisation programme in the UK and the theoretical arguments for the price-cap regulation that has been used. It argues that the regulatory process appears to involve a covert dialogue and exchange of information between the regulator and the regulated, as well as a second, separate review process that consists of an overt dialogue. Using a semiotic analysis, the authors suggest that the unfolding of each of these overt reviews follows a very similar pattern that is constantly re-enacted. It is concluded that further research is required into the relative importance of the two separate review processes in the setting of the price-cap.
Abstract:
Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneities in the error characteristics of different sensors, in terms of both distribution and magnitude, present problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, assumes that a Gaussian process prior is imposed over the (latent) process being studied and that the sensor model forms part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
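The simplest heterogeneity the abstract mentions, sensors with different Gaussian error magnitudes, already fits ordinary GP prediction with a per-observation noise term on the diagonal; the sketch below shows only that case. The arbitrary-likelihood extension the paper develops needs the approximate-inference machinery and is not reproduced. Kernel, field, and noise levels are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Two sensor types observing the same latent field, with different noise levels.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(60, 1))
sensor_sd = np.where(np.arange(60) < 30, 0.05, 0.5)   # precise vs cheap sensors
f = np.sin(X).ravel()                                  # latent field
y = f + sensor_sd * rng.normal(size=60)

# Heteroscedastic GP prediction: per-sensor variances on the diagonal.
K = rbf(X, X) + np.diag(sensor_sd**2)
X_star = np.linspace(0, 10, 5)[:, None]
mean = rbf(X_star, X) @ np.linalg.solve(K, y)
print(mean, np.sin(X_star).ravel())   # predictions vs true field values
```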
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
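The general distance matrix discussed here replaces the diagonal metric of a standard ARD kernel with a full positive-definite matrix M, e.g. k(x, x') = exp(-0.5 (x - x')^T M (x - x')). Writing M = Lambda^T Lambda makes this equivalent to projecting the inputs through Lambda before an isotropic kernel, which is what lets it uncover a low-dimensional feature space. A minimal sketch of such a kernel (the parametrisation via Lambda is my assumed illustration):

```python
import numpy as np

def general_metric_kernel(A, B, Lambda):
    """k(x, x') = exp(-0.5 (x - x')^T M (x - x')) with M = Lambda^T Lambda.
    Equivalent to projecting inputs through Lambda, then an isotropic RBF."""
    PA, PB = A @ Lambda.T, B @ Lambda.T              # project to feature space
    d2 = ((PA[:, None, :] - PB[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

# A 1-row Lambda encodes the belief that only one linear combination
# of the 3 manifest inputs matters (effective dimensionality 1).
Lambda = np.array([[2.0, -1.0, 0.0]])
X = np.random.default_rng(0).normal(size=(4, 3))
print(general_metric_kernel(X, X, Lambda))
```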
Abstract:
Cancer cachexia is characterised by selective depletion of skeletal muscle protein reserves. The ubiquitin-proteasome proteolytic pathway has been shown to be responsible for muscle wasting in a range of cachectic conditions including cancer cachexia. To establish the importance of this pathway in muscle wasting during cancer (and sepsis), a quantitative competitive RT-PCR (QcRT-PCR) method was developed to measure the mRNA levels of the proteasome subunits C2α and C5β and the ubiquitin-conjugating enzyme E214k. Western blotting was also used to measure 20S proteasome and E214k protein expression. In vivo studies in mice bearing a cachexia-inducing murine colon adenocarcinoma (MAC16) demonstrated the effect of progressive weight loss on the mRNA and protein expression of 20S proteasome subunits, as well as the ubiquitin-conjugating enzyme E214k, in gastrocnemius and pectoral muscles. QcRT-PCR measurements showed a good correlation between expression of the proteasome subunits (C2 and C5) and the E214k enzyme mRNA and weight loss in gastrocnemius muscle, where expression increased with increasing weight loss followed by a decrease in expression at higher weight losses (25-27%). Similar results were obtained in pectoral muscles, but with expression several fold lower than in gastrocnemius muscle, reflecting the different degrees of protein degradation in the two muscles during the process of cancer cachexia. Western blot analysis of 20S and E214k protein expression followed a similar pattern with respect to weight loss as that found with mRNA. In addition, mRNA and protein expression of the 20S proteasome subunits and the E214k enzyme was measured in biopsies from cachectic cancer patients, which also showed a good correlation between weight loss and proteasome expression, demonstrating a progressive increase in expression of the proteasome subunits and E214k mRNA and protein in cachectic patients with progressively increasing weight loss. The effect of the cachexia-inducing tumour product PIF (proteolysis-inducing factor) and of 15-hydroxyeicosatetraenoic acid (15-HETE), the arachidonic acid metabolite thought to be the intracellular transducer of PIF action, has also been determined. Using a surrogate model system for skeletal muscle, C2C12 myotubes in vitro, it was shown that both PIF and 15-HETE increased proteasome subunit expression (C2α and C5β) as well as the E214k enzyme. This increased gene expression was attenuated by preincubation with EPA or the 15-lipoxygenase inhibitor CV-6504; immunoblotting also confirmed these findings. Similarly, in sepsis-induced cachexia in NMRI mice there was increased mRNA and protein expression of the 20S proteasome subunits and the E214k enzyme, which was inhibited by EPA treatment. These results suggest that 15-HETE is the intracellular mediator of PIF-induced protein degradation in skeletal muscle, and that elevated muscle catabolism is accomplished through upregulation of the ubiquitin-proteasome proteolytic pathway. Furthermore, both EPA and CV-6504 have shown anti-cachectic properties, which could be used in the future for the treatment of cancer cachexia and other similar catabolic conditions.
Abstract:
This thesis addresses data assimilation, which typically refers to the estimation of the state of a physical system given a model and observations, and its application to short-term precipitation forecasting. A general introduction to data assimilation is given, both from a deterministic and a stochastic point of view. Data assimilation algorithms are reviewed, first in the static case (when no dynamics are involved), then in the dynamic case. A double experiment on two non-linear models, the Lorenz 63 and the Lorenz 96 models, is run, and the comparative performance of the methods is discussed in terms of quality of the assimilation, robustness in the non-linear regime, and computational time. Following the general review and analysis, data assimilation is discussed in the particular context of very short-term rainfall forecasting (nowcasting) using radar images. An extended Bayesian precipitation nowcasting model is introduced. The model is stochastic in nature and relies on the spatial decomposition of the rainfall field into rain "cells". Radar observations are assimilated using a Variational Bayesian method in which the true posterior distribution of the parameters is approximated by a more tractable distribution. The motion of the cells is captured by a 2D Gaussian process. The model is tested on two precipitation events, the first dominated by convective showers, the second by precipitation fronts. Several deterministic and probabilistic validation methods are applied and the model is shown to retain reasonable prediction skill at up to 3 hours lead time. Extensions to the model are discussed.
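The Lorenz 63 system used as a testbed here is a three-variable chaotic ODE. A minimal RK4 integration with the standard parameter values (illustration only, not the thesis code) shows the kind of model trajectory the assimilation experiments work with:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

state = np.array([1.0, 1.0, 1.0])
traj = [state]
for _ in range(1000):                 # 10 model time units at dt = 0.01
    state = rk4_step(lorenz63, state, 0.01)
    traj.append(state)
print(np.array(traj)[-1])             # final point on the attractor
```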
Surface roughness after excimer laser ablation using a PMMA model: profilometry and effects on vision
Abstract:
PURPOSE: To show, using a polymethylmethacrylate (PMMA) model, that the limited quality of surfaces produced by one model of excimer laser system can degrade visual performance. METHODS: A range of lenses of different powers was ablated in PMMA sheets using five DOS-based Nidek EC-5000 laser systems (Nidek Technologies, Gamagori, Japan) from different clinics. Surface quality was objectively assessed using profilometry. Contrast sensitivity and visual acuity were measured through the lenses when their powers were neutralized with suitable spectacle trial lenses. RESULTS: Average surface roughness was found to increase with lens power, with roughness values higher for negative lenses than for positive lenses. Losses in visual contrast sensitivity and acuity measured in two subjects were found to follow a similar pattern. The findings are similar to those previously published for other excimer laser systems. CONCLUSIONS: Levels of surface roughness produced by some laser systems may be sufficient to degrade visual performance under some circumstances.
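For reference, the average surface roughness reported by a profilometer is conventionally the arithmetic mean deviation Ra = (1/n) * sum |z_i - zbar| of the measured profile heights about their mean line. A tiny sketch on a synthetic profile (illustrative numbers only, not data from the study):

```python
import numpy as np

def average_roughness(profile_heights):
    """Arithmetic mean roughness Ra: mean absolute deviation from the mean line."""
    z = np.asarray(profile_heights, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Synthetic profile: small-scale height deviations about the mean line (microns).
profile = 0.05 * np.random.default_rng(0).normal(size=500)
print(f"Ra = {average_roughness(profile):.4f} um")
```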