913 results for NETWORK THEORY
Abstract:
Wireless sensor networks (WSNs) will be among the most important users of wireless communication technologies in the coming years, and some challenges in this area must be addressed for their complete development. Energy consumption and spectrum availability are two of the most severe constraints on WSNs due to their intrinsic nature. Cognitive capabilities were introduced into these networks to address spectrum scarcity, but their new range of communication possibilities can be used to address energy challenges as well. In this paper a new game-theory-based strategy for cognitive WSNs is discussed. The strategy improves energy consumption by taking advantage of the new capability to change the communication channel: based on game theory, it decides when to change the transmission channel depending on the behavior of the rest of the network nodes. The strategy is lightweight yet achieves higher energy-saving rates than noncognitive networks and even than other strategies based on scheduled spectrum sensing. Simulations of several scenarios demonstrate energy-saving rates of around 65% compared to WSNs without cognitive techniques.
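The abstract does not give the strategy's payoff functions, so the following is only a minimal sketch of the general idea: a node weighs a hypothetical cost of staying on a congested channel against a fixed cost of switching. All names and numbers are invented for illustration, not taken from the paper.

```python
def change_payoffs(congestion, stay_energy=1.0, switch_energy=1.4):
    """Hypothetical payoffs (illustrative numbers, not the paper's):
    staying costs energy that grows with channel congestion
    (collisions and retransmissions); switching carries a fixed
    sensing/handshake cost."""
    stay = -stay_energy * (1 + congestion)
    switch = -switch_energy
    return stay, switch

def decide(neighbour_busy):
    """Switch channel only when the observed fraction of busy
    neighbours makes staying more expensive than switching."""
    congestion = sum(neighbour_busy) / len(neighbour_busy)
    stay, switch = change_payoffs(congestion)
    return "switch" if switch > stay else "stay"

print(decide([1, 1, 1, 0]))  # crowded channel -> switch
print(decide([0, 0, 0, 0]))  # quiet channel -> stay
```

The point of the game-theoretic framing is that each node's best response depends on what the other nodes do: as more neighbours crowd a channel, the expected cost of staying rises until switching becomes the better move.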
Abstract:
The Biomolecular Interaction Network Database (BIND; http://binddb.org) is a database designed to store full descriptions of interactions, molecular complexes and pathways. Development of the BIND 2.0 data model has led to the incorporation of virtually all components of molecular mechanisms including interactions between any two molecules composed of proteins, nucleic acids and small molecules. Chemical reactions, photochemical activation and conformational changes can also be described. Everything from small molecule biochemistry to signal transduction is abstracted in such a way that graph theory methods may be applied for data mining. The database can be used to study networks of interactions, to map pathways across taxonomic branches and to generate information for kinetic simulations. BIND anticipates the coming large influx of interaction information from high-throughput proteomics efforts including detailed information about post-translational modifications from mass spectrometry. Version 2.0 of the BIND data model is discussed as well as implementation, content and the open nature of the BIND project. The BIND data specification is available as ASN.1 and XML DTD.
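As a toy illustration of why this abstraction supports graph-theory data mining, interaction records can be loaded into an undirected graph and complexes or pathway neighbourhoods recovered as connected components. The molecule names below are invented examples, not records from BIND:

```python
from collections import defaultdict, deque

# Toy interaction records in the spirit of BIND: each entry links two
# molecules (proteins, nucleic acids, or small molecules).
interactions = [("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "RAS"),
                ("ATP", "HK1")]

graph = defaultdict(set)
for a, b in interactions:
    graph[a].add(b)
    graph[b].add(a)

def complex_members(seed):
    """Return the connected component containing `seed`: one simple
    way graph methods can recover a complex or pathway neighbourhood."""
    seen, queue = {seed}, deque([seed])
    while queue:
        node = queue.popleft()
        for nbr in graph[node] - seen:
            seen.add(nbr)
            queue.append(nbr)
    return sorted(seen)

print(complex_members("EGFR"))  # -> ['EGFR', 'GRB2', 'RAS', 'SOS1']
```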
Abstract:
Visual classification is the way we relate to different images in our environment as if they were the same, while relating differently to other collections of stimuli (e.g., human vs. animal faces). It is still not clear, however, how the brain forms such classes, especially when introduced with new or changing environments. To isolate a perception-based mechanism underlying class representation, we studied unsupervised classification of an incoming stream of simple images. Classification patterns were clearly affected by stimulus frequency distribution, although subjects were unaware of this distribution. There was a common bias to locate class centers near the most frequent stimuli and their boundaries near the least frequent stimuli. Responses were also faster for more frequent stimuli. Using a minimal, biologically based neural-network model, we demonstrate that a simple, self-organizing representation mechanism based on overlapping tuning curves and slow Hebbian learning suffices to ensure classification. Combined behavioral and theoretical results predict large tuning overlap, implicating posterior infero-temporal cortex as a possible site of classification.
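A minimal sketch of such a mechanism, assuming Gaussian tuning curves with broad overlap and a slow winner-take-all Hebbian-style update; the parameters and stimulus distribution are illustrative, not the paper's:

```python
import math
import random

random.seed(0)

# Units with overlapping Gaussian tuning curves; a slow Hebbian-style
# update pulls each winner's preferred value toward the stimulus, so
# class centres drift toward the most frequent stimuli.
prefs = [0.2, 0.5, 0.8]      # preferred stimulus values (assumed)
WIDTH, RATE = 0.3, 0.02      # broad tuning overlap, slow learning

def response(pref, s):
    return math.exp(-((s - pref) ** 2) / (2 * WIDTH ** 2))

def present(s):
    winner = max(range(len(prefs)), key=lambda i: response(prefs[i], s))
    prefs[winner] += RATE * (s - prefs[winner])   # Hebbian-like shift
    return winner

# A skewed stimulus distribution: values near 0.3 are most frequent,
# so class centres should migrate toward that region.
stream = [random.triangular(0.0, 1.0, 0.3) for _ in range(2000)]
classes = [present(s) for s in stream]
print([round(p, 2) for p in prefs])
```

After the stream, the preferred values have shifted toward the frequent stimuli, mirroring the reported bias to locate class centres near the most frequent stimuli.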
Abstract:
The role of intrinsic cortical connections in processing sensory input and in generating behavioral output is poorly understood. We have examined this issue in the context of the tuning of neuronal responses in cortex to the orientation of a visual stimulus. We analytically study a simple network model that incorporates both orientation-selective input from the lateral geniculate nucleus and orientation-specific cortical interactions. Depending on the model parameters, the network exhibits orientation selectivity that originates from within the cortex, by a symmetry-breaking mechanism. In this case, the width of the orientation tuning can be sharp even if the lateral geniculate nucleus inputs are only weakly anisotropic. By using our model, several experimental consequences of this cortical mechanism of orientation tuning are derived. The tuning width is relatively independent of the contrast and angular anisotropy of the visual stimulus. The transient population response to changing of the stimulus orientation exhibits a slow "virtual rotation." Neuronal cross-correlations exhibit long time tails, the sign of which depends on the preferred orientations of the cells and the stimulus orientation.
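Ring models of this general kind are often written with a weakly anisotropic feedforward drive and a cosine-tuned cortical coupling; the notation below is generic textbook form, not symbols quoted from the paper:

```latex
% Weakly tuned LGN drive to a cell preferring orientation \theta,
% for stimulus orientation \theta_0 and contrast c:
h_{\mathrm{LGN}}(\theta) = c\,\bigl[1 - \epsilon + \epsilon \cos 2(\theta - \theta_0)\bigr]
% Orientation-specific cortical interaction:
J(\theta - \theta') = J_0 + J_2 \cos 2(\theta - \theta')
% When the modulated coupling J_2 exceeds a critical value, sharp tuning
% emerges by symmetry breaking even for small anisotropy \epsilon, which
% is why the tuning width can be nearly contrast-independent.
```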
Abstract:
This project attempts to answer the question "What holds the construction of money together?" by asserting that it is money's religious nature which provides the moral compulsion for people to use, and continue to uphold, money as a socially constructed concept. This project is primarily descriptive and focuses on the religious nature of money by employing a sociological theory of religion in viewing money as a technical concept. This is an interdisciplinary work between religious studies, economics, and sociology and draws heavily from Emile Durkheim's 'The Elementary Forms of Religious Life' as well as work related to heterodox theories of money developed by Geoffrey Ingham, A. Mitchell Innes, and David Graeber. Two new concepts are developed: the idea of monetary sacrality and monetary effervescence, both of which serve to recharge the religious saliency of money. By developing the concept of monetary sacrality, this project shows how money acts to interpret our economic relations while also obfuscating complex power dynamics in society, making them seem naturally occurring and unchangeable. The project also shows how our contemporary fractional reserve banking system contributes to money's collective effervescence and serves to animate economic acting within a monetary network. The project concludes by outlining multiple implications for religious studies, economics, sociology, and central banking.
Abstract:
Network governance of collective learning processes is an essential approach to sustainable development. The first section of the article briefly refers to recent theories about both market and government failures that express scepticism about the way framework conditions for market actors are set. For this reason, the development of networks for collective learning processes seems advantageous if new solutions are to be developed in policy areas concerned with long-term changes and a stepwise internalisation of externalities. With regard to corporate actors’ interests, the article shows recent insights from theories about the knowledge-based firm, where the creation of new knowledge is based on the absorption of societal views. This concept shifts the focus towards knowledge generation as an essential element in the evolution of sustainable markets, which in turn involves the development of new policies. In this context innovation-inducing regulation is suggested and discussed. The evolution of the Swedish, German and Dutch wind turbine industries is analysed based on the approach to governance put forward in this article. We conclude that these coevolutionary mechanisms may take for granted some of the stabilising and orientating functions previously exercised by basic regulatory activities of the state. In this context, the main function of governments is to facilitate learning processes, a departure from the government functions suggested by welfare economics.
Abstract:
Vita.
Abstract:
"Grant no. US NSF MCS75-21758."
Abstract:
Bibliography: p. 25-28.
Abstract:
"Supported in part by the National Science Foundation under grant no. NSF GJ 28289."
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
In this paper, we present the results of the prediction of the high-pressure adsorption equilibrium of supercritical gases (Ar, N2, CH4, and CO2) on various activated carbons (BPL, PCB, and Norit R1 extra) at various temperatures using a density-functional-theory-based finite wall thickness (FWT) model. Pore size distribution results for the carbons are taken from our recent previous work [1,2] using this approach for characterization. To validate the model, isotherms calculated from the density functional theory (DFT) approach are comprehensively verified against those determined by grand canonical Monte Carlo (GCMC) simulation, before the theoretical adsorption isotherms of the investigated carbons calculated by the model are compared with the experimental adsorption measurements. We illustrate the accuracy and consistency of the FWT model for the prediction of adsorption isotherms of all the investigated gases. The pore network connectivity problem occurring in the examined carbons is also discussed; on the basis of the success of predictions assuming a similar pore size distribution for accessible and inaccessible regions, it is suggested that this problem is largely related to the disordered nature of the carbon.
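PSD-based predictions of this kind follow the usual generalised adsorption integral: the total isotherm is the pore-size-distribution-weighted sum of single-pore (kernel) isotherms. The sketch below uses a toy Langmuir-like kernel in place of a real DFT kernel, and the PSD values are hypothetical:

```python
def kernel(pressure, width, k0=0.5):
    """Toy single-pore isotherm: a Langmuir-like form in which smaller
    pores fill at lower pressure. A real FWT/DFT kernel would be
    tabulated from density functional theory, not this stand-in."""
    k = k0 / width
    return k * pressure / (1 + k * pressure)

def predicted_isotherm(pressure, psd):
    """psd: list of (pore_width_nm, volume_fraction) pairs; the total
    uptake is the PSD-weighted sum of the single-pore isotherms."""
    return sum(frac * kernel(pressure, w) for w, frac in psd)

# Hypothetical pore size distribution for an activated carbon.
psd = [(0.7, 0.40), (1.2, 0.35), (2.0, 0.25)]
for P in (1.0, 10.0, 50.0):
    print(P, round(predicted_isotherm(P, psd), 3))
```

The same weighted-sum structure is what lets a PSD obtained from characterization be reused to predict isotherms of other gases and temperatures, which is the consistency check the abstract describes.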
Abstract:
The theoretical impacts of anthropogenic habitat degradation on genetic resources have been well articulated. Here we use a simulation approach to assess the magnitude of expected genetic change, and review 31 studies of 23 neotropical tree species to assess whether empirical case studies conform to theory. Major differences in the sensitivity of measures to detect the genetic health of degraded populations were obvious. Most studies employing genetic diversity (nine out of 13) found no significant consequences, yet most that assessed progeny inbreeding (six out of eight), reproductive output (seven out of 10) and fitness (all six) highlighted significant impacts. These observations are in line with theory, where inbreeding is observed immediately following impact, but genetic diversity is lost slowly over subsequent generations, which for trees may take decades. Studies also highlight the ecological, not just genetic, consequences of habitat degradation that can cause reduced seed set and progeny fitness. Unexpectedly, two studies examining pollen flow using paternity analysis highlight an extensive network of gene flow at smaller spatial scales (less than 10 km). Gene flow can thus mitigate against loss of genetic diversity and assist in long-term population viability, even in degraded landscapes. Unfortunately, the surveyed studies were too few and heterogeneous to examine concepts of population size thresholds and genetic resilience in relation to life history. Future suggested research priorities include undertaking integrated studies on a range of species in the same landscapes; better documentation of the extent and duration of impact; and most importantly, combining neutral marker, pollination dynamics, ecological consequences, and progeny fitness assessment within single studies.
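The slow loss of diversity the review describes follows the standard drift result H_t = H_0 (1 - 1/(2N_e))^t (textbook population genetics, not a formula quoted from the review); a quick calculation shows why long tree generation times stretch this over decades:

```python
def heterozygosity(h0, n_e, generations):
    """Expected heterozygosity after t generations of drift in a
    population of effective size n_e (standard Wright-Fisher result)."""
    return h0 * (1 - 1 / (2 * n_e)) ** generations

# A hypothetical fragmented stand with N_e = 50 trees: diversity
# erodes by only ~10% after 10 generations, consistent with impacts
# on inbreeding appearing long before losses of genetic diversity.
for t in (1, 10, 50):
    print(t, round(heterozygosity(0.5, 50, t), 4))
```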
Abstract:
Consider a network of unreliable links, modelling for example a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: Crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
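Of the three baselines, Crude Monte Carlo is simple to sketch: sample each link's up/down state and count how often the terminal nodes remain connected. The toy network and per-link reliability below are invented for illustration; the Cross-Entropy improvement (adaptively tilting the sampling distribution) is not shown:

```python
import random

random.seed(1)

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # toy 4-node network
p_up = 0.9                                 # per-link reliability

def connected(up_edges, s=0, t=3):
    """Depth-first search over the surviving links: is t reachable
    from s?"""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in up_edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return t in seen

def crude_mc(trials=20000):
    """Crude Monte Carlo: sample link states, average the indicator
    that the terminals are connected."""
    hits = 0
    for _ in range(trials):
        up = [e for e in edges if random.random() < p_up]
        hits += connected(up)
    return hits / trials

print(round(crude_mc(), 3))
```

For this small network the exact reliability is 0.9 x (0.9 + 0.1 x 0.81) = 0.8829, and the estimate lands close to it; Crude Monte Carlo degrades badly for highly reliable networks, which is the regime where Cross-Entropy-driven importance sampling pays off.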
Abstract:
The Great Barrier Reef Marine Park, an area almost the size of Japan, has a new network of no-take areas that significantly improves the protection of biodiversity. The new marine park zoning implements, in a quantitative manner, many of the theoretical design principles discussed in the literature. For example, the new network of no-take areas has at least 20% protection per bioregion, minimum levels of protection for all known habitats and special or unique features, and minimum sizes for no-take areas of at least 10 or 20 km across at the smallest diameter. Overall, more than 33% of the Great Barrier Reef Marine Park is now in no-take areas (previously 4.5%). The steps taken leading to this outcome were to clarify to the interested public why the existing level of protection was inadequate; detail the conservation objectives of establishing new no-take areas; work with relevant and independent experts to define, and contribute to, the best scientific process to deliver on the objectives; describe the biodiversity (e.g., map bioregions); define operational principles needed to achieve the objectives; invite community input on all of the above; gather and layer the data gathered in round-table discussions; report the degree of achievement of principles for various options of no-take areas; and determine how to address negative impacts. Some of the key success factors in this case have global relevance and include focusing initial communication on the problem to be addressed; applying the precautionary principle; using independent experts; facilitating input to decision making; conducting extensive and participatory consultation; having an existing marine park that encompassed much of the ecosystem; having legislative power under federal law; developing high-level support; ensuring agency priority and ownership; and being able to address the issue of displaced fishers.