95 results for Analytic network process


Relevance:

30.00%

Publisher:

Abstract:

It is generally assumed when using Bayesian inference methods for neural networks that the input data contains no noise or corruption. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework which allows for input noise, given that some model of the noise process exists. In the limit where this noise process is small and symmetric it is shown, using the Laplace approximation, that there is an additional term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable and sampling this jointly with the network's weights, using Markov chain Monte Carlo methods, it is demonstrated that it is possible to infer the unbiased regression over the noiseless input.
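The extra error-bar term described above can be sketched numerically. This is an illustrative toy, not the paper's code: under the Laplace approximation, input noise of variance sx2 adds a term (df/dx)^2 * sx2 to the usual predictive variance, here computed with a finite-difference gradient for a stand-in regressor.

```python
import numpy as np

# Hypothetical sketch of the Laplace-approximation result: small symmetric
# input noise (variance sx2) inflates the Bayesian error bar by (df/dx)^2*sx2.
# All names here are illustrative, not from the paper.

def noisy_input_error_bar(f, x, model_var, sx2, h=1e-5):
    """Inflate a model's predictive variance to account for input noise."""
    grad = (f(x + h) - f(x - h)) / (2 * h)   # finite-difference df/dx
    return model_var + grad**2 * sx2

f = np.sin                                   # stand-in for a trained regressor
var = noisy_input_error_bar(f, x=0.0, model_var=0.01, sx2=0.04)
# at x=0, df/dx = cos(0) = 1, so the variance grows from 0.01 to 0.05
```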

Relevance:

30.00%

Publisher:

Abstract:

Word of mouth (WOM) communication is a major part of online consumer interactions, particularly within the environment of online communities. Nevertheless, existing (offline) theory may be inappropriate to describe online WOM and its influence on evaluation and purchase. The authors report the results of a two-stage study aimed at investigating online WOM: a set of in-depth qualitative interviews followed by a social network analysis of a single online community. Combined, the results provide strong evidence that individuals behave as if Web sites themselves are primary "actors" in online social networks and that online communities can act as a social proxy for individual identification. The authors offer a conceptualization of online social networks which takes the Web site into account as an actor, an initial exploration of the concept of a consumer-Web site relationship, and a conceptual model of the online interaction and information evaluation process. © 2007 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Recent developments in the new economic geography and the literature on regional innovation systems have emphasised the potentially important role of networking and the characteristics of firms' local operating environment in shaping their innovative activity. Modeling UK, German and Irish plants' investments in R&D, technology transfer and networking, and their effect on the extent and success of plants' innovation activities, casts some doubt on the importance of both of these relationships. In particular, our analysis provides no support for the contention that firms or plants in the UK, Ireland or Germany with more strongly developed external links (collaborative networks or technology transfer) develop greater innovation intensity. However, although inter-firm links also have no effect on the commercial success of plants' innovation activity, intra-group links are important in terms of achieving commercial success. We also find evidence that R&D, technology transfer and networking inputs are substitutes rather than complements in the innovation process, and that there are systematic sectoral and regional influences in the efficiency with which such inputs are translated into innovation outputs. © 2001 Elsevier Science B.V.

Relevance:

30.00%

Publisher:

Abstract:

Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context where observations are collected and reported by a network of sensors, and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moment estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. 
We discuss the issue of whether to treat or ignore extreme values, making the distinction between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study, based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
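The robustness idea behind the Huber function mentioned above can be illustrated in a few lines. This is a minimal sketch, not the authors' REML algorithm: the Huber loss is quadratic for small residuals and linear for large ones, so the weights it implies prevent a single faulty sensor from dominating a location estimate.

```python
import numpy as np

# Illustrative sketch (hypothetical code, not the paper's estimator): a robust
# mean via iteratively reweighted least squares with Huber weights, compared
# with the plain mean on data containing one extreme sensor reading.

def huber_weight(r, k=1.345):
    """IRLS weight implied by the Huber loss: 1 inside [-k, k], k/|r| outside."""
    r = np.abs(r)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

def huber_location(x, k=1.345, iters=50):
    """Robust location estimate by iteratively reweighted least squares."""
    mu = np.median(x)
    for _ in range(iters):
        w = huber_weight(x - mu, k)
        mu = np.sum(w * x) / np.sum(w)
    return mu

readings = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 50.0])  # one faulty sensor
naive = float(np.mean(readings))        # ~9.17, dragged toward the outlier
robust = float(huber_location(readings))  # ~1.27, stays near the bulk
```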

Relevance:

30.00%

Publisher:

Abstract:

The fast spread of the Internet and the increasing demands on services are leading to radical changes in the structure and management of the underlying telecommunications systems. Active networks (ANs) offer the ability to program the network on a per-router, per-user, or even per-packet basis, and thus promise greater flexibility than current networks. For this new network paradigm to become widely accepted, many issues need to be solved; management of the active network is one of these challenges. This thesis investigates an adaptive management solution based on a genetic algorithm (GA). The solution uses a bacterium-inspired distributed GA running on the active nodes of an active network to provide adaptive management for the network, especially for the service-provision problems associated with future networks. The thesis also reviews the concepts, theories and technologies associated with the management solution. By exploring the implementation of these active nodes in hardware, the thesis demonstrates the possibility of implementing GA-based adaptive management in the real networks in use today. The concurrent programming language Handel-C is used to describe the designed system, and a reconfigurable computing platform based on an FPGA processing element is used for the hardware implementation. The experimental results demonstrate both the feasibility of the hardware implementation and the efficiency of the proposed management solution.
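The core GA loop underlying such a management solution can be sketched briefly. This is an illustrative toy in plain Python, not the thesis's distributed, bacterium-inspired GA in Handel-C: a population of bit strings evolves toward a fitness optimum through selection, crossover and mutation.

```python
import random

# Minimal genetic-algorithm sketch (hypothetical example, not the thesis's
# hardware implementation). Fitness here is simply the number of set bits,
# a stand-in for a service-provision quality measure.

random.seed(0)
GENOME_LEN = 20

def fitness(bits):
    return sum(bits)

def mutate(bits, p=0.05):
    return [b ^ (random.random() < p) for b in bits]   # flip each bit w.p. p

def crossover(a, b):
    cut = random.randrange(1, len(a))                  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                 # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)   # should be at or near the all-ones optimum
```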

Relevance:

30.00%

Publisher:

Abstract:

It is generally assumed when using Bayesian inference methods for neural networks that the input data contains no noise. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework which accounts for input noise provided that a model of the noise process exists. In the limit where the noise process is small and symmetric it is shown, using the Laplace approximation, that this method adds an extra term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable, and sampling this jointly with the network's weights, using a Markov chain Monte Carlo method, it is demonstrated that it is possible to infer the regression over the noiseless input. This leads to the possibility of training an accurate model of a system using less accurate, or more uncertain, data. This is demonstrated on both the synthetic noisy sine wave problem and a real problem of inferring the forward model for a satellite radar backscatter system used to predict sea surface wind vectors.
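The joint-sampling idea can be sketched on a toy linear model (hypothetical code, not the paper's neural network): the true inputs are treated as hidden variables and sampled together with a weight by Metropolis-Hastings, with the input-noise model entering the posterior alongside the output likelihood.

```python
import numpy as np

# Illustrative sketch under simplifying assumptions: a linear model y = w*x,
# inputs observed with Gaussian noise of std 0.1, outputs with std 0.05.
# The true inputs x are sampled jointly with w, as in the paper's MCMC idea.

rng = np.random.default_rng(1)
x_true = np.linspace(-1, 1, 10)
w_true = 2.0
xn = x_true + rng.normal(0, 0.1, x_true.size)          # noisy measured inputs
y = w_true * x_true + rng.normal(0, 0.05, x_true.size)

def log_post(w, x):
    return (-0.5 * np.sum((y - w * x) ** 2) / 0.05**2   # output likelihood
            - 0.5 * np.sum((xn - x) ** 2) / 0.1**2      # input-noise model
            - 0.5 * w**2 / 10.0)                        # weak prior on w

w, x = 0.0, xn.copy()
lp = log_post(w, x)
samples = []
for _ in range(6000):                                   # Metropolis-Hastings
    w_p = w + rng.normal(0, 0.02)
    x_p = x + rng.normal(0, 0.01, x.size)
    lp_p = log_post(w_p, x_p)
    if np.log(rng.uniform()) < lp_p - lp:
        w, x, lp = w_p, x_p, lp_p
    samples.append(w)

w_hat = float(np.mean(samples[3000:]))   # posterior mean, near w_true = 2
```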

Relevance:

30.00%

Publisher:

Abstract:

Large monitoring networks are becoming increasingly common and can generate datasets of thousands to millions of observations, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneities in the error characteristics of different sensors, both in terms of distribution and magnitude, present problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, assumes that a Gaussian process prior is imposed over the (latent) process being studied and that the sensor model forms part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
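Why the sensor model belongs in the likelihood can be seen even in the simpler case the paper generalises: Gaussian errors of heterogeneous magnitude, which stay analytically tractable. The sketch below is illustrative, not the paper's sequential approximation: each observation contributes its own noise variance to the kernel diagonal of a Gaussian-process predictor, so a poor sensor is automatically downweighted.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): GP / simple kriging with a
# per-observation noise variance. The sensor at site 0.5 reads far too high;
# declaring it noisy downweights it instead of letting it distort the map.

def rbf(a, b, ell=0.15):
    """Squared-exponential covariance between 1-D site arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

sites = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.sin(2 * np.pi * sites)
y[2] += 2.0                                # faulty reading at site 0.5

xs = np.array([0.5])                       # predict at the faulty sensor's site
ks = rbf(xs, sites)

# (a) all sensors trusted equally: the prediction chases the bad reading
K_naive = rbf(sites, sites) + np.diag(np.full(5, 1e-4))
mean_naive = ks @ np.linalg.solve(K_naive, y)

# (b) the faulty sensor assigned a large error variance: heavily downweighted
noise_var = np.array([1e-4, 1e-4, 4.0, 1e-4, 1e-4])
K_het = rbf(sites, sites) + np.diag(noise_var)
mean_het = ks @ np.linalg.solve(K_het, y)
```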

Relevance:

30.00%

Publisher:

Abstract:

The subject of this thesis is the n-tuple network (RAMnet). The major advantage of RAMnets is their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, the method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as associative memories, Kanerva's Sparse Distributed Memory, radial basis functions, general regression networks and Bayesian classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The frequency-weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in frequency-weighted RAMnets is adequate probability estimation and propose Good-Turing estimates in place of the more commonly used maximum likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance. It gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification based on memorisation of random features can be a powerful alternative to slower, cost-driven models. The speed of the method comes at the expense of its optimality. RAMnets will fail for certain datasets, but the cases when they do so are relatively easy to determine with the analytical tools we provide.
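A minimal binary n-tuple (RAMnet) classifier can be written in a few lines (toy data here; the thesis benchmarks on the StatLog datasets): random n-bit tuples address per-class memory cells, training memorises the addressed cells with no cost minimisation, and classification counts how many tuples have seen the test pattern's address.

```python
import random

# Illustrative binary RAMnet sketch. Each of N_TUPLES random tuples reads
# TUPLE_SIZE bits of the input and forms an address; training stores the
# address per class, classification counts matching addresses per class.

random.seed(42)
N_BITS, N_TUPLES, TUPLE_SIZE = 16, 8, 4

tuples = [random.sample(range(N_BITS), TUPLE_SIZE) for _ in range(N_TUPLES)]

def addresses(bits):
    """The address each tuple reads from a binary input pattern."""
    return [sum(bits[i] << k for k, i in enumerate(t)) for t in tuples]

class RAMnet:
    def __init__(self, n_classes):
        self.ram = [[set() for _ in range(N_TUPLES)] for _ in range(n_classes)]

    def train(self, bits, label):
        for j, a in enumerate(addresses(bits)):
            self.ram[label][j].add(a)      # pure memorisation, no cost function

    def score(self, bits):
        return [sum(a in self.ram[c][j] for j, a in enumerate(addresses(bits)))
                for c in range(len(self.ram))]

net = RAMnet(n_classes=2)
net.train([1] * 8 + [0] * 8, 0)      # class-0 prototype: left half set
net.train([0] * 8 + [1] * 8, 1)      # class-1 prototype: right half set
pred = net.score([1] * 7 + [0] * 9)  # one bit away from the class-0 prototype
```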

Relevance:

30.00%

Publisher:

Abstract:

The introduction of Regional Development Agencies (RDAs) in the English regions in 1999 presented a new set of collaborative challenges to existing local institutions. The key objectives of the new policy impetus emphasise increased joined-up thinking and holistic regional governance. Partners were enjoined to promote cross-sector collaboration and present a coherent regional voice. This study aims to evaluate the impact of an RDA on the partnership infrastructure of the West Midlands. The RDA network incorporates a wide spectrum of interest and organisations with diverse collaborative histories, competencies and capacities. The study has followed partners through the process over an eighteen-month period and has sought to explore the complexities and tensions of partnership working 'on the ground'. A strong qualitative methodology has been employed in generating 'thick descriptions' of the policy domain. The research has probed beyond the 'rhetoric' of partnerships and explores the sensitivities of the collaboration process. A number of theoretical frameworks have been employed, including policy network theory; partnership and collaboration theory; organisational learning; and trust and social capital. The structural components of the West Midlands RDA network are explored, including the structural configuration of the network and stocks of human and social capital assets. These combine to form the asset base of the network. Three sets of network behaviours are then explored, namely, strategy, the management of perceptions, and learning. The thesis explores how the combination of assets and behaviours affect, and in turn are affected by, each other. The findings contribute to the growing body of knowledge and understanding surrounding policy networks and collaborative governance.

Relevance:

30.00%

Publisher:

Abstract:

The study examines factors influencing language planning decisions in contemporary France. It focuses upon the period 1992-1994, which witnessed the introduction of two major language policy measures, the first an amendment to the French Constitution, in 1992, proclaiming the language of the Republic as French, the second, in 1994, legislation to extend the ambit of the loi Bas-Lauriol, governing the use of the French language in France. The thesis posits a significant role for the pro-reform movement led by the French language association Avenir de la Langue Française (ALF) in the introduction and formulation of the policy measures concerned. The movement is depicted as continuing the traditional pattern of intellectual involvement in language planning, whilst also marking the beginning of a highly proactive, and increasingly political approach. Detailed examination of the movement's activities reveals that contextual factors and strategic strength combined to facilitate access to the levers of power, and enabled those involved to exert an impact on policy initiation, formulation, and ultimately implementation. However, ALF's decision to pursue the legislative route led to the expansion of the network of actors involved in language policymaking, and the development of counter-pressure from sectoral groups. It is suggested that this more interventionist approach destabilised the traditionally consensual language policy community, and called into question the quasi-monopoly of the intelligentsia in respect of language policymaking. It raised broader questions relating to freedom of expression and the permissible limits of language regulation in a democracy such as France. It also exposed ongoing ambiguities and inconsistencies in the interpretation of the tenets of language planning.

Relevance:

30.00%

Publisher:

Abstract:

At rest, the primary motor cortex (M1) exhibits spontaneous neuronal network oscillations in the beta (15–30 Hz) frequency range, mediated by inhibitory interneuron drive via GABA-A receptors. However, questions remain regarding the neuropharmacological basis of movement-related oscillatory phenomena, such as movement-related beta desynchronisation (MRBD), post-movement beta rebound (PMBR) and movement-related gamma synchronisation (MRGS). To address this, we used magnetoencephalography (MEG) to study the movement-related oscillatory changes in M1 cortex of eight healthy participants, following administration of the GABA-A modulator diazepam. Results demonstrate that, contrary to initial hypotheses, neither MRGS nor PMBR appears to be GABA-A dependent, whilst MRBD is facilitated by increased GABAergic drive. These data demonstrate that while movement-related beta changes appear to be dependent upon spontaneous beta oscillations, they occur independently of one another. Crucially, MRBD is a GABA-A mediated process, offering a possible mechanism by which motor function may be modulated. In contrast, the transient increase in synchronous power observed in PMBR and MRGS appears to be generated by a non-GABA-A receptor mediated process, the elucidation of which may offer important insights into motor processes.

Relevance:

30.00%

Publisher:

Abstract:

The process framework comprises three phases: scope the supply chain/network; identify the options for supply system architecture; and select the supply system architecture. It facilitates a structured approach that analyses the supply chain/network contextual characteristics, in order to ensure alignment with the appropriate supply system architecture. The process framework was derived from a comprehensive literature review and archival case study analysis. The review led to the classification of supply system architectures according to their orientation: integrated, partially integrated, co-ordinated or independent. The classification was combined with the characteristics that influence the selection of supply system architecture to encapsulate the conceptual framework. It builds upon existing frameworks and methodologies by focusing on structured procedure, supporting project management, facilitating participation and clarifying the point of entry. The process framework was initially tested in three case study applications from the food, automobile and hand tool industries. A variety of industrial settings was chosen to illustrate transferability. The case study applications indicate that the process framework is a valid approach to the problem; however, further testing is required. In particular, the use of group support system technologies to support the process, and the steps involving the participation of software vendors, need further testing. However, the process framework can be followed due to the clarity of its presentation. It considers the issue of timing by including alternative decision-making techniques, dependent on the constraints. It is useful for ensuring a sound business case is developed, with supporting documentation and analysis that identifies the strategic and functional requirements of supply system architecture.

Relevance:

30.00%

Publisher:

Abstract:

We introduce a continuum model describing data losses in a single node of a packet-switched network (like the Internet) which preserves the discrete nature of the data loss process. By construction, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that such a model exhibits strong fluctuations in the loss rate at the critical point and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process. The continuum model allows for rather general incoming data packet distributions and can be naturally generalized to consider the buffer server idleness statistics.
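The sharp transition described above is easiest to see in the textbook finite-buffer M/M/1/K queue, used here purely as an illustration (it is not the paper's continuum model): below the critical arrival rate (utilisation ρ < 1) the stationary loss probability is exponentially small in the buffer size K, while above it a finite fraction of packets is lost.

```python
# Textbook M/M/1/K loss probability, for illustration of the threshold
# behaviour only (the paper's model is a continuum model of a single node).

def loss_probability(rho, K):
    """Stationary probability that an arriving packet finds the buffer full."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

K = 50
sub = loss_probability(0.8, K)    # below the critical point: ~3e-6
sup = loss_probability(1.25, K)   # above it: about 0.2 = 1 - 1/rho
```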

Relevance:

30.00%

Publisher:

Abstract:

The performance of wireless networks is limited by multiple access interference (MAI) in the traditional communication approach, where the interfering signals of concurrent transmissions are treated as noise. In this paper, we treat the interfered signals from a new perspective, on the basis of additive electromagnetic (EM) waves, and propose a network coding based interference cancellation (NCIC) scheme. In the proposed scheme, adjacent nodes can transmit simultaneously with careful scheduling; therefore, network performance will not be limited by the MAI. Additionally, we design a space segmentation method for general wireless ad hoc networks, which organizes the network into clusters with regular shapes (e.g., square and hexagon) to reduce the number of relay nodes. The segmentation method works with the scheduling scheme and can help achieve better scalability and reduced complexity. We derive accurate analytic models for the probability of connectivity between two adjacent cluster heads, which is important for successful information relay. We prove that with the proposed NCIC scheme, transmission efficiency can be improved by at least 50% for general wireless networks as compared with traditional interference avoidance schemes. Numerical results also show that the space segmentation is feasible and effective. Finally, we propose and discuss a method to implement the NCIC scheme in practical orthogonal frequency division multiplexing (OFDM) communication networks. Copyright © 2009 John Wiley & Sons, Ltd.
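The gain from coding at relays can be seen in the classic two-way relay example, shown here as a payload-level XOR purely for illustration (the proposed NCIC scheme operates on additive EM waves, not on packet payloads): the relay broadcasts the XOR of two packets once, and each endpoint recovers the other's packet using its own.

```python
# Classic two-way relay network-coding example (illustrative only): instead of
# forwarding A's and B's packets separately, the relay broadcasts their XOR
# once; each side cancels its own packet out to recover the other's.

def xor(p, q):
    return bytes(a ^ b for a, b in zip(p, q))

pkt_a = b"hello from A"
pkt_b = b"greetings, B"          # padded to equal length in a real protocol

coded = xor(pkt_a, pkt_b)        # one relay broadcast replaces two forwards
at_a = xor(coded, pkt_a)         # node A recovers B's packet
at_b = xor(coded, pkt_b)         # node B recovers A's packet
```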