843 results for Elliptical Basis Function Network
Abstract:
Training Mixture Density Network (MDN) configurations within the NETLAB framework is time-consuming because of the way the error function and its gradient are computed. By optimising the computation of these functions, so that gradient information is computed in parameter space, training time is decreased by at least a factor of sixty for the example given. Decreased training time widens the spectrum of problems to which MDNs can be practically applied, making the MDN framework an attractive method for the applied problem solver.
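NETLAB itself is a MATLAB toolbox, so the following is only a language-neutral illustration: a minimal NumPy sketch (all names ours) of a fully vectorised MDN error function, the negative log-likelihood of a spherical Gaussian mixture, evaluated over a whole batch at once. Replacing per-pattern loops with this kind of batched computation is the flavour of optimisation that yields speedups of the order reported above.

    import numpy as np

    def mdn_nll(pi, mu, sigma, t):
        """Negative log-likelihood of targets t under a spherical Gaussian
        mixture, vectorised over the whole batch (no per-pattern loop).
        pi: (N, M) mixing coefficients; mu: (N, M, D) kernel centres;
        sigma: (N, M) kernel widths; t: (N, D) target vectors."""
        D = t.shape[1]
        d2 = np.sum((t[:, None, :] - mu) ** 2, axis=2)          # (N, M)
        log_phi = (-0.5 * d2 / sigma**2 - D * np.log(sigma)
                   - 0.5 * D * np.log(2 * np.pi))
        a = np.log(pi) + log_phi
        amax = a.max(axis=1, keepdims=True)                     # log-sum-exp
        return -np.sum(amax.squeeze(1) + np.log(np.exp(a - amax).sum(axis=1)))

    # Toy usage: N=4 patterns, M=3 kernels, D=2 target dimensions.
    rng = np.random.default_rng(0)
    pi = rng.dirichlet(np.ones(3), size=4)
    mu = rng.normal(size=(4, 3, 2))
    sigma = np.abs(rng.normal(size=(4, 3))) + 0.1
    t = rng.normal(size=(4, 2))
    print(mdn_nll(pi, mu, sigma, t))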
Abstract:
A conventional neural network approach to regression problems approximates the conditional mean of the output vector. For mappings which are multi-valued this approach breaks down, since the average of two solutions is not necessarily a valid solution. In this article mixture density networks, a principled method to model conditional probability density functions, are applied to retrieving Cartesian wind vector components from satellite scatterometer data. A hybrid mixture density network is implemented to incorporate prior knowledge of the predominantly bimodal function branches. An advantage of a fully probabilistic model is that more sophisticated and principled methods can be used to resolve ambiguities.
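As a concrete illustration of the failure mode described above, the following sketch (values ours) shows that for a two-branch mapping the conditional mean a least-squares network would learn falls between the branches, while a mixture model keeps both.

    import numpy as np

    # Two equally likely solution branches at t = +1 and t = -1,
    # e.g. two wind vectors consistent with the same scatterometer signal.
    pi = np.array([0.5, 0.5])          # mixing coefficients
    mu = np.array([+1.0, -1.0])        # branch locations

    cond_mean = np.sum(pi * mu)        # what a sum-of-squares network learns
    print(cond_mean)                   # 0.0 -- between the branches, on neither

    # A mixture density model retains both modes, so either branch
    # can be selected later when ambiguities are resolved.
    print(mu[pi > 0.1])                # [ 1. -1.]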
Abstract:
Obtaining wind vectors over the ocean is important for weather forecasting and ocean modelling. Several satellite systems used operationally by meteorological agencies utilise scatterometers to infer wind vectors over the oceans. In this paper we present the results of using novel neural network based techniques to estimate wind vectors from such data. The problem is partitioned into estimating wind speed and wind direction. Wind speed is modelled using a multi-layer perceptron (MLP) and a sum of squares error function. Wind direction is a periodic variable and a multi-valued function for a given set of inputs; a conventional MLP fails at this task, and so we model the full periodic probability density of direction conditioned on the satellite derived inputs using a Mixture Density Network (MDN) with periodic kernel functions. A committee of the resulting MDNs is shown to improve the results.
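The abstract does not spell out the periodic kernel; assuming the usual circular-normal (von Mises) choice, a sketch of evaluating such a periodic mixture density (all names ours) might look like this.

    import numpy as np
    from scipy.special import i0   # modified Bessel function, order 0

    def von_mises_mixture_pdf(theta, pi, mu, kappa):
        """Density of angle theta (radians) under a mixture of circular
        normal (von Mises) kernels; periodic in theta by construction.
        pi, mu, kappa: (M,) mixing coefficients, mean directions,
        concentrations."""
        kernels = np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))
        return np.sum(pi * kernels)

    # A bimodal direction density with branches 180 degrees apart,
    # the classic scatterometer direction ambiguity.
    pi = np.array([0.6, 0.4])
    mu = np.array([0.0, np.pi])
    kappa = np.array([4.0, 4.0])
    print(von_mises_mixture_pdf(0.0, pi, mu, kappa))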
Abstract:
The thrust of the argument presented in this chapter is that inter-municipal cooperation (IMC) in the United Kingdom reflects local government's constitutional position and its exposure to the exigencies of Westminster (elected central government) and Whitehall (the centre of the professional civil service that serves central government). For the most part councils are without general powers of competence and are restricted in what they can do by Parliament. This suggests that the capacity for locally driven IMC is restricted and operates principally within a framework constructed by central government's policy objectives and legislation and the political expediencies of the governing political party. In practice, however, recent examples of IMC demonstrate that the practices are more complex than this initial analysis suggests. Central government may exert top-down pressures and impose hierarchical directives, but there are important countervailing forces. Constitutional changes in Scotland and Wales have shifted the locus of central-local relations away from Westminster and Whitehall. In England, the seeding of English government regional offices in 1994 has evolved into an important structural arrangement that encourages councils to work together. Within the local government community there is now widespread acknowledgement that to achieve the ambitious targets set by central government, councils are, by necessity, bound to cooperate and work with other agencies. In recent years, the fragmentation of public service delivery has affected the scope of IMC. Elected local government in the UK is now only one piece of a complex jigsaw of agencies that provide services to the public; whether it is with non-elected bodies such as health authorities, public protection authorities (police and fire), voluntary non-profit organisations or for-profit bodies, councils are expected to cooperate widely with agencies in their localities. Indeed, for projects such as regeneration and community renewal, councils may act as the coordinating agency, but the success of such projects is measured by collaboration and partnership working (Davies 2002). To place these developments in context, IMC is an example of how, in spite of the fragmentation of traditional forms of government, councils work with other public service agencies and other councils through the medium of inter-agency partnerships, collaboration between organisations and a mixed economy of service providers. Such an analysis suggests that, following changes to the system of local government, contemporary forms of IMC are less dependent on vertical arrangements (top-down direction from central government), which are being replaced by horizontal modes (the expansion of networks and partnership arrangements). Evidence suggests, however, that central government continues to steer local authorities through the agency of inspectorates and regulatory bodies, and through policy initiatives such as local strategic partnerships and local area agreements (Kelly 2006), thus questioning whether, in the case of UK local government, the shift from hierarchy to network and market solutions is less differentiated and the transformation less complete than some literature suggests. Vertical or horizontal pressures may promote IMC, yet similar drivers may deter collaboration between local authorities. An example of negative vertical pressure was central government's change of the system of local taxation during the 1980s.
The new taxation regime replaced a tax on property with a tax on individual residency. Although the community charge lasted only a few years, it was a high point of the then Conservative government's policy of encouraging councils to compete with each other on the basis of the level of local taxation. In practice, however, the complexity of local government funding in the UK rendered worthless any meaningful ambition of councils competing with each other, especially as central government grants to local authorities are predicated (however imperfectly) on at least notional equalisation between areas with lower tax yields and the more prosperous locations. Horizontal pressures comprise factors such as planning decisions. Over the last quarter century, councils have competed in the granting of permission for out-of-town retail and leisure complexes, now recognised as detrimental to neighbouring authorities because economic forces prevail and local, independent shops are unable to compete with multiple retailers. These examples illustrate tensions at the core of the UK polity over whether IMC is feasible when competition between local authorities, heightened by local differences, reduces opportunities for collaboration. An alternative perspective on IMC is to explore whether specific purposes or functions promote or restrict it. Whether in the principal areas of local government responsibility relating to social welfare, development and maintenance of the local infrastructure or environmental matters, there are examples of IMC. But opportunities have diminished considerably as councils lost responsibility for service provision as a result of privatisation and the transfer of powers to new government agencies or to central government. Over the last twenty years councils have lost their role in the provision of further or higher education, public transport and water/sewerage. Councils have commissioning powers but only a limited presence in providing housing, social care and waste management. In other words, as a result of central government policy, there are, in practice, currently far fewer opportunities for councils to cooperate. Since 1997, the New Labour government has promoted IMC through vertical drivers and the development of policy initiatives; the operation of these initiatives is discussed following the framework of the editors. Current examples of IMC are notable for being driven by higher tiers of government, working with subordinate authorities in principal-agent relations. Collaboration between local authorities and intra-, inter- and cross-sectoral partnerships are initiated by central government. In other words, IMC is shaped by hierarchical drivers from higher levels of government but, in practice, is locally varied and determined less by formula than by necessity and function. © 2007 Springer.
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If such extreme data destabilise the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event affecting a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
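As a small illustration of the robust-likelihood idea, here is a sketch of Huber weighting of standardised residuals (the threshold and all names are ours); the paper embeds such a function inside a REML covariance fit rather than using it in isolation.

    import numpy as np

    def huber_weights(residuals, scale, c=1.345):
        """Bound the influence of standardised residuals beyond the Huber
        threshold c: inliers keep full weight, while extreme values
        (sporadic sensor malfunctions, genuine anomalies) are
        down-weighted rather than discarded outright."""
        z = np.abs(residuals) / scale
        return np.where(z <= c, 1.0, c / z)

    r = np.array([0.2, -0.8, 1.1, 15.0])    # one gross outlier
    print(huber_weights(r, scale=1.0))       # [1.  1.  1.  0.0897]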
Abstract:
The fast spread of the Internet and increasing service demands are leading to radical changes in the structure and management of the underlying telecommunications systems. Active networks (ANs) offer the ability to program the network on a per-router, per-user, or even per-packet basis, and thus promise greater flexibility than current networks. For this new network paradigm to be widely accepted, many issues need to be solved, and management of the active network is one of the challenges. This thesis investigates an adaptive management solution based on a genetic algorithm (GA). The solution uses a distributed, bacterium-inspired GA running on the active nodes within an active network to provide adaptive management for the network, especially for the service provision problems associated with future networks. The thesis also reviews the concepts, theories and technologies associated with the management solution. By exploring the implementation of these active nodes in hardware, this thesis demonstrates the possibility of implementing GA-based adaptive management in the real networks in use today. The concurrent programming language Handel-C is used to describe the design, and a reconfigurable computing platform based on an FPGA processing element is used for the hardware implementation. The experimental results demonstrate both the feasibility of the hardware implementation and the efficiency of the proposed management solution.
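The thesis implements a distributed, bacterium-inspired GA in Handel-C on FPGA-based nodes; purely as a software-level illustration of the underlying algorithm, here is a minimal single-population GA sketch (all parameters ours).

    import random

    def evolve(fitness, n_bits=16, pop_size=20, generations=50, p_mut=0.02):
        """Minimal generational GA: tournament selection, one-point
        crossover, bit-flip mutation. Illustrative only; the thesis
        distributes the population across active network nodes."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        for _ in range(generations):
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = pick(), pick()
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
                child = [b ^ (random.random() < p_mut) for b in child]
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    # Toy objective: maximise the number of set bits.
    best = evolve(fitness=sum)
    print(best, sum(best))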
Abstract:
Respiration is a complex activity. If the relationship between all neurological and skeletomuscular interactions were perfectly understood, an accurate dynamic model of the respiratory system could be developed and the interaction between different inputs and outputs could be investigated in a straightforward fashion. Unfortunately, this is not the case and does not appear to be viable at this time. In addition, the provision of appropriate sensor signals for such a model would be a considerably invasive task. Useful quantitative information on respiratory performance can, however, be gained from non-invasive monitoring of chest and abdomen motion, and currently available devices are not well suited to spirometric measurement for ambulatory monitoring. A sensor matrix measurement technique is investigated to identify suitable sensing elements on which to base an upper-body surface measurement device that monitors respiration. This thesis is divided into two main areas of investigation: model-based and geometrically based surface plethysmography. In the first instance, chapter 2 deals with an array of tactile sensors used as a progression of existing, previously investigated volumetric measurement schemes based on models of respiration. Chapter 3 details a non-model-based geometrical approach to surface (and hence volumetric) profile measurement. Later sections of the thesis concentrate on the development of a functioning prototype sensor array. To broaden the application area, the study has been conducted as it would be for a generically configured sensor array. In experimental form, the system's performance on volume estimation compares favourably with existing systems, and it additionally provides continuous transient measurement of respiratory motion to an acceptable accuracy using approximately 20 sensing elements. Because of the potential size and complexity of the system, it can be deployed as a fully mobile ambulatory monitoring device for use outside the laboratory. It provides a means of isolating coupled physiological functions so that individual contributions can be analysed separately, facilitating greater understanding of respiratory physiology and improved diagnostic capability. The outcome of the study is the basis for a three-dimensional surface contour sensing system that is suitable for respiratory function monitoring and, with future development, could be incorporated into a garment-based clinical tool.
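As an illustration of the calibration such a device requires, here is a sketch (synthetic data, all names ours) that fits volume as a least-squares weighted sum of the sensor signals, in the spirit of classical two-compartment surface plethysmography extended to an array of roughly 20 elements.

    import numpy as np

    # Hypothetical calibration set: displacement signals from K sensing
    # elements (N samples x K) against reference spirometer volumes (N,).
    rng = np.random.default_rng(0)
    K = 20
    X = rng.normal(size=(200, K))                    # surface sensor signals
    w_true = rng.normal(size=K)
    v = X @ w_true + 0.05 * rng.normal(size=200)     # spirometer reference

    # Least-squares fit: volume as a weighted sum of the sensor signals.
    w, *_ = np.linalg.lstsq(X, v, rcond=None)
    rmse = float(np.sqrt(np.mean((X @ w - v) ** 2)))
    print(rmse)   # residual volume error on the calibration data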
Abstract:
The rapidly increasing demand for cellular telephony is placing greater demand on the limited bandwidth resources available. This research is concerned with techniques which enhance the capacity of a Direct-Sequence Code-Division Multiple-Access (DS-CDMA) mobile telephone network. The capacities of both Private Mobile Radio (PMR) and cellular networks are derived, the many techniques currently available are reviewed, and areas for further investigation are identified. One technique which is developed is the sectorisation of a cell into toroidal rings. This is shown to increase system capacity when the cell is split into these concentric rings, and it is compared with cell clustering and other sectorisation schemes. Another technique for increasing the capacity is to add to the amount of inherent randomness within the transmitted signal so that the system is better able to extract the wanted signal. A system model has been produced for a cellular DS-CDMA network and results are presented for two possible strategies. The first strategy is the variation of the chip duration over a single bit period. Several different variation functions are tried, and a sinusoidal function is shown to provide the greatest increase in the maximum number of system users for any given signal-to-noise ratio. The second strategy is the use of additive amplitude modulation together with data/chip phase-shift keying. The amplitude variations are determined by a sparse code so that the average system power is held near its nominal level. This strategy is shown to provide no further capacity, since the system is sensitive to amplitude variations. When both strategies are employed, however, the sensitivity to amplitude variations is shown to reduce, indicating that the first strategy increases both the capacity and the ability to handle fluctuations in the received signal power.
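As a sketch of the first strategy, the following generates chip boundaries whose durations vary sinusoidally across one bit period while the bit period itself stays fixed; the exact variation law and all names are our assumptions.

    import numpy as np

    def chip_boundaries(n_chips=31, depth=0.3):
        """Chip edges within one bit period where the chip duration varies
        sinusoidally about its nominal value (illustrative variation law).
        depth = 0 recovers the conventional uniform chip duration."""
        k = np.arange(n_chips)
        durations = 1.0 + depth * np.sin(2 * np.pi * k / n_chips)
        durations /= durations.sum()           # bit period normalised to 1
        return np.concatenate(([0.0], np.cumsum(durations)))

    edges = chip_boundaries()
    print(edges[:5], edges[-1])   # last edge is exactly 1.0 (one bit period)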
Abstract:
Neuronal network oscillations are a unifying phenomenon in neuroscience research, with comparable measurements across scales and species. Cortical oscillations are of central importance in the characterization of neuronal network function in health and disease and are influential in effective drug development. Whilst animal in vitro and in vivo electrophysiology is able to characterize pharmacologically induced modulations in neuronal activity, present human counterparts have spatial and temporal limitations. Consequently, the potential applications for a human equivalent are extensive. Here, we demonstrate a novel implementation of contemporary neuroimaging methods called pharmaco-magnetoencephalography. This approach determines the spatial profile of neuronal network oscillatory power change across the cortex following drug administration and reconstructs the time course of these modulations at focal regions of interest. As a proof of concept, we characterize the nonspecific GABAergic modulator diazepam, which has a broad range of therapeutic applications. We demonstrate that diazepam variously modulates θ (4–7 Hz), α (7–14 Hz), β (15–25 Hz), and γ (30–80 Hz) frequency oscillations in specific regions of the cortex, with a pharmacodynamic profile consistent with that of drug uptake. We examine the relevance of these results with regard to the spatial and temporal observations from other modalities and the various therapeutic consequences of diazepam and discuss the potential applications of such an approach in terms of drug development and translational neuroscience.
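As an illustration of the band-power analysis behind such results, here is a minimal sketch using Welch's method; the band edges are taken from the abstract, while the function names and parameters are ours.

    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    BANDS = {"theta": (4, 7), "alpha": (7, 14), "beta": (15, 25), "gamma": (30, 80)}

    def band_powers(x, fs):
        """Integrate the Welch power spectral density of signal x over
        the four oscillatory bands reported in the study."""
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        return {name: trapezoid(pxx[(f >= lo) & (f < hi)],
                                f[(f >= lo) & (f < hi)])
                for name, (lo, hi) in BANDS.items()}

    fs = 250.0
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)  # 10 Hz tone
    print(band_powers(x, fs))   # the alpha band should dominate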
Abstract:
The chemical functionality within porous architectures dictates their performance as heterogeneous catalysts; however, synthetic routes to control the spatial distribution of individual functions within porous solids are limited. Here we report the fabrication of spatially orthogonal bifunctional porous catalysts, through the stepwise template removal and chemical functionalization of an interconnected silica framework. Selective removal of polystyrene nanosphere templates from a lyotropic liquid crystal-templated silica sol–gel matrix, followed by extraction of the liquid crystal template, affords a hierarchical macroporous–mesoporous architecture. Decoupling of the individual template extractions allows independent functionalization of macropore and mesopore networks on the basis of chemical and/or size specificity. Spatial compartmentalization of, and directed molecular transport between, chemical functionalities affords control over the reaction sequence in catalytic cascades; herein illustrated by the Pd/Pt-catalysed oxidation of cinnamyl alcohol to cinnamic acid. We anticipate that our methodology will prompt further design of multifunctional materials comprising spatially compartmentalized functions.
Abstract:
Satellite-borne scatterometers are used to measure backscattered microwave radiation from the ocean surface. These data may be used to infer surface wind vectors where no direct measurements exist. Inherent in the data are outliers, owing to aberrations on the water surface and measurement errors within the equipment. We present two techniques for identifying outliers using neural networks; the outliers may then be removed to improve models derived from the data. First, the generative topographic mapping (GTM) is used to create a probability density model; data with low probability under the model may be classed as outliers. In the second part of the paper, a sensor model with input-dependent noise is used, and outliers are identified based on their probability under this model. GTM was successfully modified to incorporate prior knowledge of the shape of the observation manifold; however, GTM could not learn the double-skinned nature of the manifold. Learning this double-skinned manifold necessitated the use of a sensor model which imposes strong constraints on the mapping. The results using GTM with a fixed noise level suggested that the noise level may vary as a function of wind speed. This was confirmed by experiments using a sensor model with input-dependent noise, where the variation in noise is most sensitive to the wind speed input. Both models successfully identified gross outliers, with the largest differences between models occurring at low wind speeds. © 2003 Elsevier Science Ltd. All rights reserved.
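As an illustration of the first technique, here is a sketch of density-threshold outlier flagging; a generic Gaussian mixture (scikit-learn) stands in for the GTM, which is itself a constrained Gaussian mixture, and the threshold choice is ours.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 2))      # bulk of the observations
    X[:5] += 8.0                       # a few gross outliers

    # Stand-in density model; the paper fits a GTM to the data instead.
    gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
    log_p = gmm.score_samples(X)       # log density of each observation

    # Flag the observations least probable under the fitted density.
    threshold = np.quantile(log_p, 0.02)
    outliers = np.flatnonzero(log_p < threshold)
    print(outliers)                    # the shifted points dominate this list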
Abstract:
This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages, which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines - neural nets, machine learning and knowledge acquisition - hence it is referred to as a "coalesced" machine. To this unifying aspect are added the various advantages of its orthogonal lattice structure, as against less structured nets. We discuss the case for such a fundamental and low-level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure, the Hilbert hypercube of an n-dimensional space using a complemented distributed lattice as a basis for supervised learning, is derived from first principles. The resulting "subhypercube theory" was implemented in a development machine, which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence, invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be further researched.
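The machine's search space is the Boolean n-cube; as a small illustration of the lattice adjacency underlying its "metrical" search, here is a sketch (names ours) enumerating hypercube vertices and their Hamming-distance-1 neighbours.

    from itertools import product

    def hypercube_vertices(n):
        """All 2**n vertices of the n-dimensional Boolean hypercube."""
        return list(product((0, 1), repeat=n))

    def neighbours(vertex):
        """Vertices at Hamming distance 1: the lattice adjacency that
        makes a deterministic, global search of the cube possible."""
        return [vertex[:i] + (1 - vertex[i],) + vertex[i + 1:]
                for i in range(len(vertex))]

    v = (0, 1, 1)
    print(len(hypercube_vertices(3)))   # 8
    print(neighbours(v))                # [(1, 1, 1), (0, 0, 1), (0, 1, 0)]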