47 results for electricity distribution networks


Relevance:

30.00%

Publisher:

Abstract:

Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
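
As a sketch of how such a model can be realised in code (a hypothetical PyTorch-style implementation, chosen for illustration; the paper itself predates such libraries), the network emits the mixing coefficients, means and widths of a Gaussian mixture over the target, and training minimises the negative log of the resulting conditional density:

    import torch
    import torch.nn as nn

    class MDN(nn.Module):
        """Minimal Mixture Density Network for a scalar target."""
        def __init__(self, n_in, n_hidden, n_kernels):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
            self.alpha = nn.Linear(n_hidden, n_kernels)      # mixing coefficients
            self.mu = nn.Linear(n_hidden, n_kernels)         # kernel means
            self.log_sigma = nn.Linear(n_hidden, n_kernels)  # kernel widths

        def forward(self, x):
            h = self.hidden(x)
            return (torch.softmax(self.alpha(h), dim=-1),
                    self.mu(h),
                    torch.exp(self.log_sigma(h)))

    def mdn_nll(alpha, mu, sigma, t):
        # Negative log of p(t|x) = sum_j alpha_j N(t; mu_j, sigma_j^2)
        log_norm = torch.distributions.Normal(mu, sigma).log_prob(t.unsqueeze(-1))
        return -torch.logsumexp(torch.log(alpha) + log_norm, dim=-1).mean()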

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Using electricity load data and training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise and forgetting factors for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. We also find that a recently proposed alternative novelty criterion, found to be more robust in stationary environments, does not fare so well in the non-stationary case due to the need for filter adaptability during training.
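
As an illustration of the plasticity mechanisms mentioned (system noise and a forgetting factor), a minimal numpy sketch of one extended Kalman filter update for the network weights; the names and the scalar-output simplification are chosen here for illustration, not taken from the paper:

    import numpy as np

    def ekf_step(w, P, jac, y, y_hat, R=1.0, Q=1e-4, lam=0.99):
        """One EKF weight update. jac is the Jacobian d(y_hat)/dw;
        Q injects system noise and lam < 1 discounts old data,
        both of which keep the filter plastic."""
        P = P / lam                                # forgetting factor
        H = jac.reshape(1, -1)
        S = float(H @ P @ H.T) + R                 # innovation variance
        K = (P @ H.T) / S                          # Kalman gain
        w = w + (K * (y - y_hat)).ravel()          # weight update
        P = P - K @ H @ P + Q * np.eye(len(w))     # covariance + system noise
        return w, P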

Relevance:

30.00%

Publisher:

Abstract:

Online model order complexity estimation remains one of the key problems in neural network research. The problem is further exacerbated in situations where the underlying system generator is non-stationary. In this paper, we introduce a novelty criterion for resource allocating networks (RANs) which is capable of being applied to both stationary and slowly varying non-stationary problems. The deficiencies of existing novelty criteria are discussed and the relative performances are demonstrated on two real-world problems: electricity load forecasting and exchange rate prediction.
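
For reference, the classical RAN novelty criterion (due to Platt; the new criterion introduced in this paper is not reproduced here) allocates a new kernel only when both the prediction error and the distance to the nearest existing centre exceed thresholds, e.g.:

    import numpy as np

    def is_novel(x, y, y_hat, centres, e_min=0.05, eps_t=0.5):
        """Allocate a new RBF unit only if the prediction error is large
        AND the input lies far from every existing centre."""
        error = abs(y - y_hat)
        nearest = min(np.linalg.norm(x - c) for c in centres) if centres else np.inf
        return error > e_min and nearest > eps_t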

Relevance:

30.00%

Publisher:

Abstract:

Mixture Density Networks are a principled method to model conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian mixture model for which the parameters are generated by a neural network. This thesis presents a novel method to introduce regularisation in this context for the special case where the mean and variance of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the `early stopping' methods that have previously been used. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are presented.
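
In the notation usual for mixture density networks, the constrained model studied here can be sketched as

    p(t \mid x) = \sum_{j=1}^{M} \alpha_j(x) \, \phi_j(t), \qquad
    \phi_j(t) = \frac{1}{(2\pi\sigma^2)^{c/2}} \exp\left( -\frac{\| t - \mu_j \|^2}{2\sigma^2} \right),

where the kernel means \mu_j and the common variance \sigma^2 are fixed in advance, so that only the mixing coefficients \alpha_j(x), a softmax over the network outputs, depend on the input.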

Relevance:

30.00%

Publisher:

Abstract:

The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of radar backscatter generated by small ripples on the ocean surface induced by instantaneous local winds. Operational methods that extract wind vectors from satellite scatterometer data are based on the local inversion of a forward model, mapping scatterometer observations to wind vectors, by the minimisation of a cost function in the scatterometer measurement space.

This report uses mixture density networks, a principled method for modelling conditional probability density functions, to model the joint probability distribution of the wind vectors given the satellite scatterometer measurements in a single cell (the `inverse' problem). The complexity of the mapping and the structure of the conditional probability density function are investigated by varying the number of units in the hidden layer of the multi-layer perceptron and the number of kernels in the Gaussian mixture model of the mixture density network respectively. The optimal model for networks trained per trace has twenty hidden units and four kernels. Further investigation shows that models trained with incidence angle as an input have results comparable to those models trained by trace. A hybrid mixture density network that incorporates geophysical knowledge of the problem confirms other results that the conditional probability distribution is dominantly bimodal.

The wind retrieval results improve on previous work at Aston, but do not match other neural network techniques that use spatial information in the inputs, which is to be expected given the ambiguity of the inverse problem. Current work uses the local inverse model for autonomous ambiguity removal in a principled Bayesian framework. Future directions in which these models may be improved are given.
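
Schematically (the notation here is mine, not the report's), the two approaches contrasted above differ as

    \hat{w} = \arg\min_w \, \| \sigma^0 - f(w) \|^2 \quad \text{(local inversion of the forward model } f\text{)},

versus modelling the full conditional density p(w \mid \sigma^0) = \sum_j \alpha_j(\sigma^0) \, \phi_j(w \mid \sigma^0), which retains multimodal (e.g. bimodal) ambiguity in the wind direction instead of committing to a single wind vector.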

Relevance:

30.00%

Publisher:

Abstract:

Using methods of statistical physics, we investigate the generalization performance of support vector machines (SVMs), which have recently been introduced as a general alternative to neural networks. For nonlinear classification rules, the generalization error saturates on a plateau when the number of examples is too small to properly estimate the coefficients of the nonlinear part. When trained on simple rules, we find that SVMs overfit only weakly. The performance of SVMs is strongly enhanced when the distribution of the inputs has a gap in feature space.

Relevance:

30.00%

Publisher:

Abstract:

A novel approach, based on statistical mechanics, to analyze the typical performance of optimum code-division multiple-access (CDMA) multiuser detectors is reviewed. A `black-box' view of the basic CDMA channel is introduced, based on which the CDMA multiuser detection problem is regarded as a `learning-from-examples' problem of the `binary linear perceptron' in the neural network literature. Adopting a Bayesian framework, analysis of the performance of the optimum CDMA multiuser detectors is reduced to evaluation of the average of the cumulant generating function of a relevant posterior distribution. The evaluation of the average cumulant generating function is done, based on a formal analogy with a similar calculation appearing in spin glass theory in statistical mechanics, by making use of the replica method, a method developed in spin glass theory.
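
The central quantity is the disorder-averaged cumulant generating function, which the replica method evaluates through the standard identity (stated here in generic form, not quoted from the review)

    \mathbb{E}[\ln Z] = \lim_{n \to 0} \frac{\ln \mathbb{E}[Z^n]}{n},

where Z is the normalisation (partition function) of the relevant posterior: \mathbb{E}[Z^n] is computed for integer n by introducing n coupled replicas of the system, and the result is analytically continued to n \to 0.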

Relevance:

30.00%

Publisher:

Abstract:

This is a theoretical paper that examines the interplay between individual and collective capabilities and competencies and value transactions in collaborative environments. The theory behind value creation is examined and two types of value are identified: internal value (shareholder value) and external value (the value proposition). The literature on collaborative enterprises/networks is also examined, with particular emphasis on supply chains, extended/virtual enterprises and clusters as representatives of different forms and maturities of collaboration. The interplay of value transactions and competencies and capabilities is examined and discussed in detail. Finally, a model of value transactions is presented, together with a table comparing the characteristics of different types of collaborative enterprises/networks. It is proposed that this model provides a platform for further research to develop an in-depth understanding of how value may be created and managed in collaborative enterprises/networks.

Relevance:

30.00%

Publisher:

Abstract:

The advent of the Integrated Services Digital Network (ISDN) led to the standardisation of the first video codecs for interpersonal video communications, followed closely by the development of standards for the compression, storage and distribution of digital video in the PC environment, mainly targeted at CD-ROM storage. At the same time the second-generation digital wireless networks, and the third-generation networks being developed, have enough bandwidth to support digital video services. The radio propagation medium is a difficult environment in which to deploy low bit error rate, real-time services such as video. The video coding standards designed for ISDN and storage applications were targeted at low bit error rates, orders of magnitude lower than the typical bit error rates experienced on wireless networks. This thesis is concerned with the transmission of digital, compressed video over wireless networks. It investigates the behaviour of motion-compensated, hybrid interframe DPCM/DCT video coding algorithms, which form the basis of current coding algorithms, in the presence of the high bit error rates commonly found on digital wireless networks. A group of video codecs, based on the ITU-T H.261 standard, is developed which is robust to the burst errors experienced on radio channels. The radio link is simulated at a low level, to generate typical error files that closely model real-world situations, in a Rayleigh fading environment perturbed by co-channel interference, and on frequency-selective channels which introduce intersymbol interference. Typical anti-multipath techniques, such as antenna diversity, are deployed to mitigate the effects of the channel. Link layer error control techniques are also investigated.
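
Burst errors of the kind described are commonly modelled with a two-state Gilbert-Elliott channel; the sketch below (with purely illustrative parameters, and not necessarily the error model used in the thesis) generates an error pattern of the sort used to corrupt a coded bitstream:

    import random

    def gilbert_elliott(n_bits, p_gb=0.001, p_bg=0.1, e_good=1e-6, e_bad=0.1):
        """Two-state burst-error model: 1 marks a bit error.
        p_gb / p_bg are good->bad / bad->good transition probabilities;
        e_good / e_bad are the per-bit error rates in each state."""
        errors, bad = [], False
        for _ in range(n_bits):
            bad = (random.random() < p_gb) if not bad else (random.random() >= p_bg)
            errors.append(1 if random.random() < (e_bad if bad else e_good) else 0)
        return errors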

Relevance:

30.00%

Publisher:

Abstract:

A local area network that can support both voice and data packets offers economic advantages due to the use of only a single network for both types of traffic, greater flexibility in meeting changing user demands, and more efficient use of the transmission capacity. The latter aspect is very important in local broadcast networks, where capacity is a scarce resource, for example in mobile radio. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. With such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling on the channel between the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single-server queue, due to the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice; rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance evaluation of the protocols has been carried out using discrete event simulation and, in some cases, also by mathematical modelling. All the protocols use either implicit or explicit reservation schemes, with their efficiency dependent on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations. This protocol has been improved, firstly, by utilising two channels, a packet transmission channel and a packet contention channel; packet contention is then performed in parallel with a packet transmission to increase throughput. The second protocol uses variable-length packets to reduce the contention time between transmissions on a single channel. A third protocol developed is based on contention for explicit reservations. Once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station within the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation remained robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions where control was distributed among the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
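
The performance measure emphasised here is simple to compute once packet delays have been simulated; a sketch (the helper and its deadline value are illustrative, not taken from the thesis):

    def fraction_within_bound(delays_ms, sizes_bits, bound_ms=50.0):
        """Fraction of bits delivered within the delay bound; voice
        packets arriving after the bound count as lost."""
        on_time = sum(s for d, s in zip(delays_ms, sizes_bits) if d <= bound_ms)
        return on_time / sum(sizes_bits)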

Relevance:

30.00%

Publisher:

Abstract:

The generating functional method is employed to investigate the synchronous dynamics of Boolean networks, providing an exact result for the system dynamics via a set of macroscopic order parameters. The topology of the networks studied and their constituent Boolean functions represent the system's quenched disorder and are sampled from a given distribution. The framework accommodates a variety of topologies and Boolean function distributions and can be used to study both the noisy and noiseless regimes; it enables one to calculate correlation functions at different times that are inaccessible via commonly used approximations. It is also used to determine conditions for the annealed approximation to be valid, explore phases of the system under different levels of noise and obtain results for models with strong memory effects, where existing approximations break down. Links between Boolean networks and general Boolean formulas are identified and results common to both system types are highlighted.
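
The synchronous dynamics analysed take the generic form (the notation is mine)

    n_i(t+1) = \phi_i\big( n_{i_1}(t), \ldots, n_{i_K}(t) \big), \qquad i = 1, \ldots, N,

where the Boolean functions \phi_i and the wiring \{ i_1, \ldots, i_K \} are drawn at random and then held fixed (the quenched disorder), and noise can be introduced by flipping each updated value with some probability; the generating functional averages over the paths of this dynamics to yield the macroscopic order parameters.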

Relevance:

30.00%

Publisher:

Abstract:

Large monitoring networks are becoming increasingly common and can generate large datasets from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneities in the error characteristics of different sensors, both in terms of distribution and magnitude, present problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, assumes that a Gaussian process prior is imposed over the (latent) process being studied and that the sensor model forms part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated, in a projected process kriging framework which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
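
In model-based geostatistics the setup can be sketched (standard notation, not quoted from the paper) as a latent Gaussian process observed through an arbitrary sensor likelihood,

    f \sim \mathcal{GP}(0, k), \qquad p(f \mid y) \propto \mathcal{N}(f; 0, K) \prod_{i=1}^{N} p(y_i \mid f_i),

where K is the covariance matrix built from the kernel k. The posterior is Gaussian only when every likelihood term p(y_i \mid f_i) is Gaussian; otherwise a sequential approximate scheme of the kind described processes one observation at a time and projects each non-Gaussian update back onto a Gaussian.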

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this thesis concerns the application of Demand Side Management (DSM), by industrial subsector, to the UK electricity industry. A review of the origins of DSM in the US and the relevance of the experience gained to the UK electricity industry is made. Reviews are also made of the current status of the UK electricity industry, the regulatory system, and the potential role of DSM within the prevalent industry environment. A financial appraisal of DSM in respect of the distribution business of a Regional Electricity Company (REC) is also made. This financial appraisal highlights the economic viability of DSM within the context of the current UK electricity industry. This background work is followed by the construction of a framework detailing the necessary requirements for expanding the commercial role of DSM to encompass benefits for the supply business of a REC. The derived framework is then applied, in part, to the UK ceramics manufacturing industry, and in full to the UK sanitaryware manufacturing industry. The application of the framework to the UK sanitaryware manufacturing industry has required the undertaking of a unique first-order energy audit of every such manufacturing site in the UK. As such, the audit has revealed previously unknown data on the timing and magnitude of electricity demand and consumption attributable to end-use manufacturing technologies and processes. The audit also served to reveal the disparity in attitudes toward energy services, and thus by implication toward DSM, of manufacturers within the same Standard Industrial Classification (SIC) code. In response to this, an attempt is made to identify the underlying drivers which could cause this variation in attitude. A novel approach to the market segmentation of the companies within the UK ceramics manufacturing sector has been utilised to classify these companies in terms of their likelihood to participate in DSM programmes through the derived Energy Services approach. The market segmentation technique, although requiring further development to progress from a research-based concept, highlights the necessity to look beyond the purely energy-based needs of manufacturing industries when considering the utilisation of the Energy Services approach to facilitate DSM programmes.

Relevance:

30.00%

Publisher:

Abstract:

Fast pyrolysis of biomass produces a liquid bio-oil that can be used for electricity generation. Bio-oil can be stored and transported, so it is possible to decouple the pyrolysis process from the generation process. This allows each process to be separately optimised. It is necessary to have an understanding of the transport costs involved in order to carry out techno-economic assessments of combinations of remote pyrolysis plants and generation plants. Published fixed and variable costs for freight haulage have been used to calculate the transport cost for trucks running between field stores and a pyrolysis plant. It was found that the key parameter for estimating these costs was the number of round trips a day a truck could make rather than the distance covered. This zone costing approach was used to estimate the transport costs for a range of pyrolysis plant sizes for willow woodchips and baled miscanthus. The possibility of saving transport costs by producing bio-oil near to the field stores and transporting the bio-oil to a central plant was investigated, and it was found that this would only be cost effective for large generation plants.
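
The round-trips-per-day observation lends itself to a simple zone-costing calculation; a sketch with purely illustrative cost figures (none are taken from the paper):

    def cost_per_tonne(trips_per_day, payload_t=25.0,
                       fixed_cost_per_day=400.0, variable_cost_per_trip=60.0):
        """Haulage cost per tonne when one truck makes trips_per_day
        round trips between a field store and the pyrolysis plant.
        All cost figures are hypothetical placeholders."""
        daily_cost = fixed_cost_per_day + variable_cost_per_trip * trips_per_day
        return daily_cost / (payload_t * trips_per_day)

    # e.g. cost_per_tonne(2) -> 10.4 and cost_per_tonne(4) -> 6.4,
    # illustrating why trips per day, not distance, dominates the cost.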