792 results for Peer-to-Peer networks
Abstract:
In this paper we propose a neural network model to simplify 2D meshes. This model is based on the Growing Neural Gas model and is able to simplify meshes of different topologies and sizes. A triangulation process is included with the objective of reconstructing the mesh. This model is applied to some problems related to urban networks.
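A minimal sketch of the core Growing Neural Gas adaptation step referred to above, assuming 2D sample points drawn from the mesh; the hyperparameter values are illustrative placeholders, and the periodic insertion of new units and the triangulation/reconstruction stage are not reproduced here.

```python
import numpy as np

# Illustrative hyperparameters, not the paper's settings.
EPS_WINNER, EPS_NEIGHBOR, MAX_AGE = 0.05, 0.006, 50

units = np.random.rand(4, 2)          # unit positions in the plane
edges = {}                            # (i, j) with i < j -> edge age
error = np.zeros(len(units))          # accumulated squared error per unit

def gng_step(x):
    """One Growing Neural Gas adaptation step for a single 2D sample x."""
    d = np.linalg.norm(units - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                       # winner and runner-up
    error[s1] += d[s1] ** 2
    units[s1] += EPS_WINNER * (x - units[s1])        # pull the winner toward x
    for (i, j) in list(edges):
        if s1 in (i, j):
            other = j if i == s1 else i
            units[other] += EPS_NEIGHBOR * (x - units[other])
            edges[(i, j)] += 1                       # age the winner's edges
    edges[tuple(sorted((int(s1), int(s2))))] = 0     # refresh winner/runner-up edge
    for e in [e for e, age in edges.items() if age > MAX_AGE]:
        del edges[e]                                 # drop edges that grew too old
```

The full model would also insert a unit between the highest-error unit and its worst neighbour every fixed number of samples and remove isolated units; the simplified mesh is then rebuilt from the surviving units and edges by the triangulation stage the abstract mentions.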
Abstract:
Food policy is one of the most regulated policy fields at the EU level. Unholy alliances are collaborative patterns that temporarily bring together antagonistic stakeholders behind a common cause. This paper deals with such transversal co-operations between citizens' groups (NGOs, consumer associations) and economic stakeholders (food industries, retailers), focusing on their ambitions and consequences. It builds on two case studies that enable a more nuanced view of the prospects for the development of transversal networks at the EU level. The main findings are that (i) the rationale behind the adoption of collaborative partnerships actually comes from a case-by-case cost/benefit analysis leading to hopes of improved access to institutions; (ii) membership of a collaborative network leads to a learning process closely linked to the network's performance; and (iii) coalitions can enjoy a better reception, rather than automatically better access, depending on several factors independent of the stakeholders themselves.
Abstract:
Hearings held Jan. 26, 1956 - Feb. 5, 1960, pursuant to Senate resolution 18, 84th Cong. [and others]. Volume 8 also has a special subtitle: The final phase of the Committee's inquiry with reference to overall television allocations.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Traditional methods of R&D management are no longer sufficient for embracing innovations and leveraging complex new technologies to fully integrated positions in established systems. This paper presents the view that the technology integration process is a result of fundamental interactions embedded in inter-organisational activities. Emerging industries, high technology companies and knowledge intensive organisations owe a large part of their viability to complex networks of inter-organisational interactions and relationships. R&D organisations are the gatekeepers in the technology integration process with their initial sanction and motivation to develop technologies providing the first point of entry. Networks rely on the activities of stakeholders to provide the foundations of collaborative R&D activities, business-to-business marketing and strategic alliances. Such complex inter-organisational interactions and relationships influence value creation and organisational goals as stakeholders seek to gain investment opportunities. A theoretical model is developed here that contributes to our understanding of technology integration (adoption) as a dynamic process, which is simultaneously structured and enacted through the activities of stakeholders and organisations in complex inter-organisational networks of sanction and integration.
Abstract:
Boolean models of genetic regulatory networks (GRNs) have been shown to exhibit many of the characteristic dynamics of real GRNs, with gene expression patterns settling to point attractors or limit cycles, or displaying chaotic behaviour, depending upon the connectivity of the network and the relative proportions of excitatory and inhibitory interactions. This range of behaviours is only apparent, however, when the nodes of the GRN are updated synchronously, a biologically implausible state of affairs. In this paper we demonstrate that evolution can produce GRNs with interesting dynamics under an asynchronous update scheme. We use an Artificial Genome to generate networks which exhibit limit cycle dynamics when updated synchronously, but collapse to a point attractor when updated asynchronously. Using a hill-climbing algorithm, the networks are then evolved using a fitness function which rewards patterns of gene expression that revisit as many previously seen states as possible. The final networks exhibit fuzzy limit cycle dynamics when updated asynchronously.
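A minimal sketch of the asynchronous update scheme and the revisit-rewarding fitness described above, assuming a random Boolean network in which a single randomly chosen node is updated per step; the Artificial Genome encoding and the hill-climbing mutation operator are not reproduced here.

```python
import random
from collections import Counter

def async_trajectory(rules, inputs, state, steps=500):
    """Asynchronously update a Boolean network: one random node per step.

    rules[i]  -- dict mapping the tuple of input values to node i's next state
    inputs[i] -- tuple of node indices feeding node i
    """
    trajectory = [tuple(state)]
    for _ in range(steps):
        i = random.randrange(len(state))             # asynchronous: one node at a time
        state[i] = rules[i][tuple(state[j] for j in inputs[i])]
        trajectory.append(tuple(state))
    return trajectory

def revisit_fitness(trajectory):
    """Count distinct states visited more than once: a point attractor revisits
    only one state, whereas a (fuzzy) limit cycle keeps revisiting many."""
    counts = Counter(trajectory)
    return sum(1 for c in counts.values() if c > 1)
```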
Abstract:
This paper reports preliminary progress on a principled approach to modelling nonstationary phenomena using neural networks. We are concerned with both parameter and model order complexity estimation. The basic methodology assumes a Bayesian foundation. However, to allow the construction of pragmatic models, successive approximations have to be made to permit computational tractability. The lowest order corresponds to the (Extended) Kalman filter approach to parameter estimation, which has already been applied to neural networks. We illustrate some of the deficiencies of the existing approaches and discuss our preliminary generalisations by considering the application to nonstationary time series.
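A minimal sketch of the Extended Kalman filter treatment of network weights that the abstract identifies as the lowest-order approximation, assuming a scalar-output network and a finite-difference Jacobian; the noise variances Q and R are illustrative, and adapting them over time is what would track nonstationarity.

```python
import numpy as np

def ekf_update(w, P, x, y, f, Q=1e-4, R=0.1, eps=1e-6):
    """One EKF step treating the network weights w as the state.

    f(x, w) -- scalar network output; P -- weight covariance matrix;
    Q, R    -- process and observation noise variances (illustrative values).
    """
    P = P + Q * np.eye(len(w))                    # random-walk model for the weights
    # Jacobian of the output w.r.t. the weights, by finite differences.
    H = np.array([(f(x, w + eps * np.eye(len(w))[i]) - f(x, w)) / eps
                  for i in range(len(w))])
    S = H @ P @ H + R                             # innovation variance (scalar output)
    K = P @ H / S                                 # Kalman gain
    w = w + K * (y - f(x, w))                     # correct weights with the residual
    P = P - np.outer(K, H @ P)                    # (I - K H) P covariance update
    return w, P
```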
Abstract:
Bayesian techniques have been developed over many years in a range of different fields, but have only recently been applied to the problem of learning in neural networks. As well as providing a consistent framework for statistical pattern recognition, the Bayesian approach offers a number of practical advantages including a potential solution to the problem of over-fitting. This chapter aims to provide an introductory overview of the application of Bayesian methods to neural networks. It assumes the reader is familiar with standard feed-forward network models and how to train them using conventional techniques.
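One concrete point of contact between the Bayesian framework and the over-fitting problem mentioned above is that a zero-mean Gaussian prior over the weights turns maximum a posteriori training into ordinary weight decay. The sketch below illustrates that correspondence for a squared-error regression network; it is a standard textbook connection stated for orientation, not an excerpt from the chapter, and alpha and beta are placeholder precisions that a full Bayesian treatment would itself infer.

```python
import numpy as np

def map_objective(w, model, X, y, alpha=1e-2, beta=1.0):
    """Negative log posterior for a regression network.

    Gaussian observation noise (precision beta) gives the squared-error data
    term; a zero-mean Gaussian prior on the weights (precision alpha) gives the
    weight-decay term, so MAP training coincides with L2-regularised training.
    """
    residuals = y - model(X, w)
    data_term = 0.5 * beta * np.sum(residuals ** 2)    # -log likelihood (up to const.)
    prior_term = 0.5 * alpha * np.sum(w ** 2)          # -log prior      (up to const.)
    return data_term + prior_term
```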
Abstract:
Using methods of Statistical Physics, we investigate the generalization performance of support vector machines (SVMs), which have recently been introduced as a general alternative to neural networks. For nonlinear classification rules, the generalization error saturates on a plateau when the number of examples is too small to properly estimate the coefficients of the nonlinear part. When trained on simple rules, we find that SVMs overfit only weakly. The performance of SVMs is strongly enhanced when the distribution of the inputs has a gap in feature space.
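The plateau behaviour described above can be probed empirically. A minimal sketch, assuming scikit-learn, an RBF-kernel SVM, and a synthetic nonlinear target rule rather than the statistical-physics setting of the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_test = rng.uniform(-1, 1, size=(2000, 2))
y_test = np.sign(X_test[:, 0] * X_test[:, 1])          # simple nonlinear target rule

for n in [20, 50, 100, 300, 1000]:                     # growing training-set sizes
    X = rng.uniform(-1, 1, size=(n, 2))
    y = np.sign(X[:, 0] * X[:, 1])
    clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
    err = np.mean(clf.predict(X_test) != y_test)       # estimated generalization error
    print(f"n={n:5d}  error={err:.3f}")
```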
Abstract:
This thesis proposes a novel graphical model for inference called the Affinity Network, which displays the closeness between pairs of variables and is an alternative to Bayesian Networks and Dependency Networks. The Affinity Network shares some similarities with Bayesian Networks and Dependency Networks but avoids their heuristic and stochastic graph construction algorithms by using a message passing scheme. A comparison with the above two instances of graphical models is given for sparse discrete and continuous medical data and data taken from the UCI machine learning repository. The experimental study reveals that the Affinity Network graphs tend to be more accurate, on the basis of an exhaustive search, for the small datasets. Moreover, the graph construction algorithm is faster than the other two methods on huge datasets. The Affinity Network is also applied to data produced by a synchronised system. A detailed analysis and numerical investigation into this dynamical system is provided, and it is shown that the Affinity Network can be used to characterise its emergent behaviour even in the presence of noise.
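The thesis' message-passing construction is not reproduced here; as a rough illustration of what an affinity-style graph over variables looks like, the sketch below connects each variable to its most strongly related partners, using absolute Pearson correlation purely as a stand-in affinity score.

```python
import numpy as np

def pairwise_affinity_graph(data, top_k=2):
    """Connect each variable to its top_k most strongly related partners.

    Absolute Pearson correlation serves as a stand-in affinity score; the
    Affinity Network's actual message-passing scheme is not reproduced here.
    data: (n_samples, n_variables) array of continuous observations.
    """
    corr = np.abs(np.corrcoef(data, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    edges = set()
    for i in range(corr.shape[1]):
        for j in np.argsort(corr[i])[-top_k:]:
            edges.add(tuple(sorted((i, int(j)))))      # undirected edge i -- j
    return edges
```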
Abstract:
* Supported by INTAS 00-626 and TIC 2003-09319-c03-03.
Abstract:
In this paper, we investigate the hop distance optimization problem in ad hoc networks where cooperative multi-input single-output (MISO) is adopted to improve the energy efficiency of the network. We first establish the energy model of multihop cooperative MISO transmission. Based on the model, the energy consumption per bit of the network with high node density is minimized numerically by finding an optimal hop distance, and, to get the global minimum energy consumption, both hop distance and the number of cooperating nodes around each relay node for multihop transmission are jointly optimized. We also compare the performance between multihop cooperative MISO transmission and single-input single-output (SISO) transmission, under the same network condition (high node density). We show that cooperative MISO transmission could be energy-inefficient compared with SISO transmission when the path-loss exponent becomes high. We then extend our investigation to networks with varied node densities and show the effectiveness of the joint optimization method in this scenario using simulation results. It is shown that the optimal results depend on network conditions such as node density and path-loss exponent, and the simulation results are closely matched to those obtained using the numerical models for high node density cases.
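A minimal numerical sketch of the hop-distance optimization described above, assuming a generic first-order radio energy model rather than the paper's cooperative MISO model; the parameter values, the m-fold diversity approximation, and the omission of intra-cluster data exchange are all illustrative simplifications.

```python
import numpy as np

# Placeholder radio-energy parameters; not the paper's model.
E_ELEC = 50e-9     # J/bit per-node circuit energy
E_AMP = 100e-12    # J/bit/m^ALPHA amplifier energy
ALPHA = 3.5        # path-loss exponent
D_TOTAL = 1000.0   # end-to-end distance in metres

def energy_per_bit(d_hop, m=1):
    """Energy per bit to relay over D_TOTAL in hops of length d_hop with m
    cooperating transmitters per hop. Cooperation is modelled crudely as an
    m-fold reduction of the amplifier term at the cost of m circuit energies."""
    hops = D_TOTAL / d_hop
    per_hop = m * E_ELEC + E_AMP * d_hop ** ALPHA / m + E_ELEC   # m tx nodes + 1 rx
    return hops * per_hop

d_grid = np.linspace(10, 500, 2000)
for m, label in [(1, "SISO"), (2, "cooperative MISO, m=2")]:
    e = np.array([energy_per_bit(d, m) for d in d_grid])
    i = int(np.argmin(e))
    print(f"{label}: optimal hop distance ~{d_grid[i]:.0f} m, energy/bit ~{e[i]:.2e} J")
```

Sweeping the path-loss exponent in the same way would show the regime the abstract points to, where the extra circuit energy of cooperation is no longer repaid by the amplifier savings.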
Abstract:
Developing analytical models that can accurately describe behaviors of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems. First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with large-scale network models. Second, this work deals with reducing memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments. Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but, as real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
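A minimal sketch of the structural-duplication idea: identical subnetwork descriptions are stored once and shared by reference, so replicated campus or stub networks in an Internet-scale model do not multiply topology memory. This illustrates the general flyweight technique, not the dissertation's specific implementation; the class and field names below are hypothetical.

```python
class SubnetTemplate:
    """Immutable description of a subnetwork's internal topology, stored once."""
    def __init__(self, name, links):
        self.name = name
        self.links = tuple(links)              # e.g. (("r0", "h0"), ("r0", "h1"), ...)

class SubnetInstance:
    """A lightweight placement of a shared template at one point in the model."""
    __slots__ = ("template", "gateway_addr")   # per-instance state only
    def __init__(self, template, gateway_addr):
        self.template = template               # shared by reference, never copied
        self.gateway_addr = gateway_addr

# One template, many placements: topology memory stays constant in the replica count.
campus = SubnetTemplate("campus", [("r0", f"h{i}") for i in range(1000)])
instances = [SubnetInstance(campus, f"10.{i}.0.1") for i in range(10000)]
```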
Abstract:
I explore and analyze a problem of finding the socially optimal capital requirements for financial institutions considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults amongst the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns. In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be efficiently executed. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution. Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule. I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex and a convex mixed-integer problem, respectively. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
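A minimal sketch of the direct-exposures channel, assuming an Eisenberg-Noe-style clearing model as a baseline (a common starting point for this kind of analysis, not necessarily the thesis' exact formulation): interbank liabilities plus outside assets determine clearing payments as a fixed point, and a bank defaults when it cannot pay its obligations in full. Default costs, fire sales, and the capital-requirement optimization layered on top are omitted.

```python
import numpy as np

def clearing_payments(L, outside_assets, tol=1e-10):
    """Clearing payments for an interbank liability matrix L (L[i, j] = what
    bank i owes bank j), computed by fixed-point iteration from full payment.
    Eisenberg-Noe-style baseline; default costs and fire sales are omitted."""
    p_bar = L.sum(axis=1)                                  # total obligations per bank
    with np.errstate(invalid="ignore", divide="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)  # relative liabilities
    p = p_bar.copy()
    while True:
        assets = outside_assets + Pi.T @ p                 # cash plus interbank receipts
        p_new = np.minimum(p_bar, assets)                  # pay in full, or all you have
        if np.max(np.abs(p_new - p)) < tol:
            return p_new, p_new < p_bar - tol              # payments, default indicators
        p = p_new

# Toy example: bank 0 owes more than it can recover and defaults partially.
L = np.array([[0.0, 20.0, 0.0],
              [0.0, 0.0, 10.0],
              [10.0, 0.0, 0.0]])
payments, defaulted = clearing_payments(L, outside_assets=np.array([2.0, 5.0, 5.0]))
print(payments, defaulted)
```

The optimal-capital problem described above would then choose each bank's capital buffer to trade off the cost of higher requirements against the expected losses from defaults produced by such clearing outcomes over random shocks.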