333 results for PROBABILISTIC NETWORKS
Abstract:
We study coverage in sensor networks having two types of nodes, namely, sensor nodes and backbone nodes. Each sensor is capable of transmitting information over relatively small distances. The backbone nodes collect information from the sensors. This information is processed and communicated over an ad hoc network formed by the backbone nodes, which are capable of transmitting over much larger distances. We consider two models of deployment for the sensor and backbone nodes: one is a Poisson-Poisson cluster model and the other a dependently thinned Poisson point process. We deduce limit laws for functionals of vacancy in both models using properties of association for random measures.
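As a rough illustration of the first deployment model, here is a minimal Monte Carlo sketch (all parameters and the Gaussian cluster scatter are our assumptions, not the paper's) that estimates vacancy under a Poisson-Poisson cluster deployment:

```python
# A minimal Monte Carlo sketch (not the paper's analysis): estimate the
# vacancy (uncovered fraction) left by sensors deployed as a
# Poisson-Poisson cluster process. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L = 10.0          # side of the square region
lam_parent = 0.5  # intensity of the parent (backbone) Poisson process
mu = 5.0          # mean number of sensors per cluster
sigma = 0.3       # Gaussian scatter of sensors around their parent
r = 0.4           # sensing radius of each sensor

# Parents: Poisson number of points, uniform on the square.
n_parents = rng.poisson(lam_parent * L * L)
parents = rng.uniform(0, L, size=(n_parents, 2))

# Daughters: Poisson(mu) sensors per parent, Gaussian displacement.
sensors = [p + sigma * rng.standard_normal((rng.poisson(mu), 2))
           for p in parents]
sensors = np.vstack(sensors) if sensors else np.empty((0, 2))

# Monte Carlo estimate of vacancy: fraction of test points not covered.
test = rng.uniform(0, L, size=(5000, 2))
if len(sensors):
    d2 = ((test[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)
    covered = d2.min(axis=1) <= r * r
else:
    covered = np.zeros(len(test), dtype=bool)
print("estimated vacancy:", 1.0 - covered.mean())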
Abstract:
The key problem tackled in this paper is the development of a stand-alone self-powered sensor to directly sense the spectrum of mechanical vibrations. Such a sensor could be deployed in wide area sensor networks to monitor structural vibrations of large machines (e.g., aircraft) and initiate corrective action if the structure approaches resonance. In this paper, we study the feasibility of using stretched membranes of the polymer piezoelectric polyvinylidene fluoride for low-frequency vibration spectrum sensing. We design and evaluate a low-frequency vibration spectrum sensor that accepts an incoming vibration and directly provides the spectrum of the vibration as the output.
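For readers who want the digital analogue of what the analog sensor computes, a short sketch (the sampling rate and the synthetic signal are made up) of extracting a low-frequency vibration spectrum:

```python
# A digital-domain analogue (illustrative only) of what the analog PVDF
# sensor computes: take a vibration signal in, return its spectrum.
import numpy as np

fs = 1000.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of data
# Synthetic low-frequency vibration: 12 Hz and 47 Hz components + noise.
x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 47 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(x)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant vibration component near {peak:.1f} Hz")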
Abstract:
It is increasingly being recognized that resting state brain connectivity derived from functional magnetic resonance imaging (fMRI) data is an important marker of brain function both in healthy and clinical populations. Though linear correlation has been extensively used to characterize brain connectivity, it is limited to detecting first order dependencies. In this study, we propose a framework wherein phase synchronization (PS) between brain regions is characterized using a new metric, "correlation between probabilities of recurrence" (CPR), followed by graph-theoretic analysis of the ensuing networks. We applied this method to resting state fMRI data obtained from human subjects with and without administration of propofol anesthetic. Our results showed decreased PS during anesthesia and a biologically more plausible community structure using CPR rather than linear correlation. We conclude that CPR provides an attractive nonparametric method for modeling interactions in brain networks, as compared to standard correlation, for obtaining physiologically meaningful insights about brain function.
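The CPR metric itself is easy to sketch. Below is a minimal version for two scalar time series, assuming a simple recurrence threshold and skipping the delay embedding used in practice:

```python
# A minimal sketch of the CPR metric (correlation between probabilities
# of recurrence) for two scalar time series; epsilon and the lag range
# are assumptions, and delay embedding is omitted for brevity.
import numpy as np

def recurrence_prob(x, tau, eps):
    """P(tau): fraction of points that recur within eps after lag tau."""
    return np.mean(np.abs(x[tau:] - x[:-tau]) < eps)

def cpr(x, y, max_lag=100, eps_frac=0.1):
    ex, ey = eps_frac * np.std(x), eps_frac * np.std(y)
    px = np.array([recurrence_prob(x, t, ex) for t in range(1, max_lag)])
    py = np.array([recurrence_prob(y, t, ey) for t in range(1, max_lag)])
    # Pearson correlation of the two recurrence-probability curves.
    return np.corrcoef(px, py)[0, 1]

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
a = np.sin(t) + 0.2 * rng.standard_normal(t.size)
b = np.sin(t + 0.3) + 0.2 * rng.standard_normal(t.size)  # phase-shifted copy
print("CPR(a, b) =", cpr(a, b))   # near 1 for phase-synchronized signals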
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in the Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of throughput performance.
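The abstract builds on Bianchi's saturated model; the classical fixed point at its core can be solved numerically as below (W, m, and n are illustrative values, and this is the textbook model, not the paper's extension with scan durations):

```python
# A sketch of Bianchi's saturated-throughput fixed point, solved by
# damped iteration. tau is the per-slot transmission probability of a
# station, p its conditional collision probability.
def bianchi_tau(n, W=32, m=5, iters=500):
    """Fixed-point tau for n saturated stations, min window W, m backoff stages."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * new   # damping for stable convergence
    return tau

for n in (5, 10, 20):
    print(n, "stations -> tau =", round(bianchi_tau(n), 4))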
Abstract:
In a networked society, governing advocacy groups and networks through decentralized systems of policy implementation has been a central interest of the governance network literature. This paper addresses the topic of governing networks in the context of Indian agrarian societies by taking the case example of a welfare scheme for the Indian rural poor. We explore context-specific regulatory dynamics through a situated agent-based architectural framework. The effects of various regulatory strategies that can be adopted by the governing node are tested under various action arenas through experimental design. Results show the impact of regulatory strategies on the resource dependencies and asymmetries in the network relationships. This indicates that the optimal feasible regulatory strategy in a networked society is institutionally rational and context dependent. Further, we show that the situated MAS architecture is a natural fit for an institutional understanding of the dynamics (Ostrom et al. in Rules, games, and common-pool resources, 1994).
Abstract:
The rapid development of communication and networking has lessened geographical boundaries among actors in social networks. In social networks, actors often want to access databases depending upon their access rights, privacy, context, privileges, etc. Managing and handling knowledge-based access for actors is complex and hard, and calls for a broad range of technologies. Access based on the dynamic access rights and circumstances of actors places significant demands on access systems. In this paper, we present an Access Mechanism for Social Networks (AMSN) to grant actors access to databases, taking the privacy and status of actors into consideration. The designed AMSN model is tested over an Agriculture Social Network (ASN), which utilises the distinct access rights and privileges of actors in the agriculture occupation and provides them access to databases.
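As a purely hypothetical illustration of the kind of decision such a mechanism makes (the roles, privacy levels, and context flag below are invented for this sketch, not taken from the AMSN model):

```python
# Hypothetical sketch of an AMSN-style access decision: access depends
# on the actor's role, the resource's privacy level, and the context.
ROLE_CLEARANCE = {"farmer": 1, "extension_officer": 2, "administrator": 3}

def grant_access(actor_role, resource_privacy_level, context_ok=True):
    """Allow access iff the role's clearance covers the resource and the
    situational context (time, location, delegation) checks out."""
    clearance = ROLE_CLEARANCE.get(actor_role, 0)
    return context_ok and clearance >= resource_privacy_level

print(grant_access("farmer", 1))                            # True
print(grant_access("farmer", 3))                            # False
print(grant_access("administrator", 3, context_ok=False))   # False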
Abstract:
In this paper, we study the diversity-multiplexing gain tradeoff (DMT) of wireless relay networks under the half-duplex constraint. It is often unclear what penalty, if any, is imposed by the half-duplex constraint on the DMT of such networks. We study two classes of networks. The first class, called KPP(I) networks, is the class of networks with the relays organized in K parallel paths between the source and the destination; while we assume that there is no direct source-destination path, the K relaying paths can interfere with each other. The second class, termed layered networks, is comprised of relays organized in layers, where links exist only between adjacent layers. We present a communication scheme based on static schedules and amplify-and-forward relaying for these networks. We show that for KPP(I) networks with K >= 3, the proposed schemes can achieve the full-duplex DMT performance, thus demonstrating that there is no performance hit on the DMT due to the half-duplex constraint. We also show that, for layered networks, a linear DMT of d_max(1 - r)^+ between the maximum diversity d_max and the maximum multiplexing gain r_max = 1 is achievable. We adapt existing DMT-optimal coding schemes to these networks, thus specifying the end-to-end communication strategy explicitly.
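For reference, the DMT definitions behind the d_max(1 - r)^+ claim, in the standard Zheng-Tse formulation (not specific to this paper):

```latex
% A scheme operating at multiplexing gain r and diversity gain d satisfies
\[
  r = \lim_{\mathrm{SNR}\to\infty} \frac{R(\mathrm{SNR})}{\log \mathrm{SNR}},
  \qquad
  d = -\lim_{\mathrm{SNR}\to\infty}
      \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}}.
\]
% The linear tradeoff stated for layered networks, with (x)^+ = max(x, 0):
\[
  d(r) = d_{\max}\,(1-r)^{+}, \qquad 0 \le r \le r_{\max} = 1.
\]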
Abstract:
The delineation of seismic source zones plays an important role in the evaluation of seismic hazard. In most studies, seismic source delineation is based on geological features. In the present study, an attempt has been made to delineate seismic source zones in the study area (south India) based on seismicity parameters. Seismicity parameters and the maximum probable earthquake for these source zones were evaluated and used in the hazard evaluation. The probabilistic evaluation of seismic hazard for south India was carried out using a logic tree approach. Two different types of seismic sources, linear and areal, were considered in the present study to model the seismic sources in the region more precisely. In order to properly account for the attenuation characteristics of the region, three different attenuation relations were used with different weighting factors. Seismic hazard evaluation was done for probabilities of exceedance (PE) of 10% and 2% in 50 years. The spatial variation of rock-level peak horizontal acceleration (PHA) and spectral acceleration (Sa) values corresponding to return periods of 475 and 2500 years for the entire study area are presented in this work. The peak ground acceleration (PGA) values at ground surface level were estimated based on different NEHRP site classes by considering local site effects.
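The correspondence between the quoted exceedance probabilities and return periods follows from the usual Poisson occurrence assumption; the 2500-year figure is the commonly rounded 2475-year value:

```latex
% Under Poisson occurrence, the probability of exceedance in t years is
\[
  P_{\mathrm{exc}} = 1 - e^{-t/T}
  \quad\Longrightarrow\quad
  T = \frac{-t}{\ln(1 - P_{\mathrm{exc}})}.
\]
% For t = 50 years:
\[
  T_{10\%} = \frac{-50}{\ln 0.90} \approx 475~\text{yr},
  \qquad
  T_{2\%} = \frac{-50}{\ln 0.98} \approx 2475~\text{yr}.
\]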
Abstract:
Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to the time scale of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time varying networks. We perform a full stochastic analysis using a continuous time Markov chain approach for calculating the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory for calculating epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an "adaptive threshold": when the ratio of susceptibility (or infectivity) rate to recovery rate is below the threshold value, adaptive behavior can prevent the epidemic; if it is above the threshold, no amount of behavioral adaptation can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
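A discrete-time toy version of the adaptive mechanism (not the paper's continuous-time Markov chain analysis, and all rates are illustrative) shows the idea: susceptibles sever links to infected contacts as the epidemic unfolds:

```python
# Toy discrete-time SIS with behavioral adaptation on a random graph:
# each step, susceptible nodes drop a fraction of links to infected nodes.
import numpy as np

rng = np.random.default_rng(1)
N, beta, gamma, omega = 200, 0.06, 0.05, 0.3   # infect, recover, rewire rates
A = (rng.random((N, N)) < 0.05).astype(int)    # Erdos-Renyi contact graph
A = np.triu(A, 1); A = A + A.T
state = np.zeros(N, dtype=bool)
state[rng.choice(N, 5, replace=False)] = True  # 5 initial infections

for step in range(300):
    inf_neighbors = A @ state                    # infected contacts per node
    p_inf = 1 - (1 - beta) ** inf_neighbors      # per-step infection prob.
    new_inf = (~state) & (rng.random(N) < p_inf)
    recover = state & (rng.random(N) < gamma)
    # Adaptation: susceptibles sever each S-I edge with probability omega.
    s_idx = np.where(~state)[0]; i_idx = np.where(state)[0]
    cut = rng.random((len(s_idx), len(i_idx))) < omega
    sub = A[np.ix_(s_idx, i_idx)] * (~cut)
    A[np.ix_(s_idx, i_idx)] = sub; A[np.ix_(i_idx, s_idx)] = sub.T
    state = (state | new_inf) & ~recover

print("final infected fraction:", state.mean())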
Abstract:
We have developed SmartConnect, a tool that addresses the growing need for the design and deployment of multihop wireless relay networks for connecting sensors to a control center. Given the locations of the sensors, the traffic that each sensor generates, the quality of service (QoS) requirements, and the potential locations at which relays can be placed, SmartConnect helps design and deploy a low-cost wireless multihop relay network. SmartConnect adopts a field-interactive, iterative approach, with model-based network design, field evaluation, and relay augmentation performed iteratively until the desired QoS is met. The design process is based on approximate combinatorial optimization algorithms. In this paper, we present the design choices made in SmartConnect and describe the experimental work that led to them. Finally, we provide results from some experimental deployments.
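The field-interactive loop can be summarized in a skeleton; every function below is a hypothetical stand-in, since the actual design step is the paper's combinatorial optimization and the evaluation step is a physical measurement:

```python
# Skeleton (hypothetical names throughout) of the described loop:
# design, evaluate in the field, augment relays, repeat until QoS is met.
def design_network(sensors, candidate_sites, qos):
    """Stand-in for the approximate combinatorial-optimization step that
    picks relay sites; here it just takes the first few candidates."""
    return set(candidate_sites[:3])

def field_evaluate(network):
    """Stand-in for an on-site QoS measurement (delivery rate, delay)."""
    return 0.9 + 0.02 * len(network)   # toy: more relays, better QoS

def smartconnect_loop(sensors, candidate_sites, qos, max_rounds=10):
    chosen = design_network(sensors, candidate_sites, qos)
    for _ in range(max_rounds):
        if field_evaluate(chosen) >= qos:    # QoS met: stop iterating
            return chosen
        # Otherwise augment: re-run design over the remaining sites.
        remaining = [s for s in candidate_sites if s not in chosen]
        chosen = chosen | design_network(sensors, remaining, qos)
    return chosen

print(smartconnect_loop(["s1", "s2"], list("ABCDEFG"), qos=0.98))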
Abstract:
Latent variable methods, such as PLCA (Probabilistic Latent Component Analysis), have been successfully used for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. This hard constraint facilitates ML boundary estimation using dynamic programming. Unlike earlier ML segmentation techniques, the PLCS framework does not impose a parametric assumption. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared to state-of-the-art speech segmentation algorithms.
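The dynamic-programming boundary search is generic enough to sketch; the version below substitutes a per-segment Gaussian log-likelihood for PLCS's latent-component score, so it illustrates the DP, not the PLCS model itself:

```python
# Generic ML segmentation by dynamic programming: find boundaries that
# maximize the summed per-segment log-likelihood (Gaussian score here).
import numpy as np

def segment(x, n_segments):
    N = len(x)
    def score(i, j):
        """Gaussian ML log-likelihood of modeling x[i:j] as one segment."""
        var = x[i:j].var() + 1e-6
        return -0.5 * (j - i) * (np.log(2 * np.pi * var) + 1.0)

    best = np.full((n_segments + 1, N + 1), -np.inf)
    back = np.zeros((n_segments + 1, N + 1), dtype=int)
    best[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, N + 1):
            for i in range(k - 1, j):
                s = best[k - 1, i] + score(i, j)
                if s > best[k, j]:
                    best[k, j], back[k, j] = s, i
    bounds, j = [], N                  # trace back from (n_segments, N)
    for k in range(n_segments, 0, -1):
        j = back[k, j]
        bounds.append(j)
    return sorted(bounds[:-1])         # drop the leading 0

x = np.concatenate([np.random.default_rng(0).normal(m, 0.3, 100)
                    for m in (0.0, 2.0, -1.0)])
print("estimated boundaries:", segment(x, 3))  # near [100, 200]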
Abstract:
Mycobacterium tuberculosis owes its high pathogenic potential to its ability to evade host immune responses and thrive inside the macrophage. The outcome of infection is largely determined by the cellular response comprising a multitude of molecular events. The complexity and inter-relatedness of these processes makes it essential to adopt systems approaches to study them. In this work, we construct a comprehensive network of infection-related processes in a human macrophage comprising 1888 proteins and 14,016 interactions. We then compute response networks based on available gene expression profiles corresponding to states of health, disease and drug treatment. We use a novel formulation for mining response networks that identifies the highest-activity paths in the cell. These highest-activity paths provide mechanistic insights into pathogenesis and response to treatment. The approach used here serves as a generic framework for mining dynamic changes in genome-scale protein interaction networks.
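One common way to make "highest-activity path" mining concrete (our illustration, not necessarily the paper's exact formulation) is to map edge activities in (0, 1] to additive costs via -log and run Dijkstra; the protein names below are placeholders:

```python
# Highest-product-activity path via Dijkstra on -log(activity) costs.
import heapq, math

edges = {  # protein -> [(neighbor, activity weight in (0, 1])]
    "A": [("B", 0.9), ("C", 0.4)],
    "B": [("D", 0.8)],
    "C": [("D", 0.95)],
    "D": [("E", 0.7)],
    "E": [],
}

def highest_activity_path(src, dst):
    dist = {src: 0.0}; prev = {}; pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, w in edges[u]:
            nd = d - math.log(w)          # additive cost of this edge
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dst])

path, activity = highest_activity_path("A", "E")
print(path, "product activity =", round(activity, 3))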
Abstract:
Many networks, such as social networks and organizational networks in global companies, consist of self-interested agents. The topology of these networks often plays a crucial role in important tasks such as information diffusion and information extraction. Consequently, growing a stable network having a certain topology is of interest. Motivated by this, we study the following important problem: given a certain desired network topology, under what conditions would best-response (link addition/deletion) strategies played by self-interested agents lead to the formation of a stable network having that topology? We study this reverse engineering problem by proposing a natural model of recursive network formation and a utility model that captures many key features. Based on this model, we analyze relevant network topologies and derive a set of sufficient conditions under which these topologies emerge as pairwise stable networks, wherein no node wants to delete any of its links and no two nodes would want to create a link between them.
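Checking whether a given topology is pairwise stable is mechanical once a utility model is fixed. The sketch below uses a toy distance-based utility in the spirit of connections models (the paper's utility model differs), with illustrative delta and c:

```python
# Pairwise stability check: no node gains by cutting a link, and no pair
# gains (one strictly, the other weakly) by adding one.
import itertools
import networkx as nx

delta, c = 0.7, 0.4   # benefit decay with distance, per-link cost

def utility(G, i):
    lengths = nx.single_source_shortest_path_length(G, i)
    benefit = sum(delta ** d for j, d in lengths.items() if j != i)
    return benefit - c * G.degree(i)

def pairwise_stable(G):
    for i, j in itertools.combinations(G.nodes, 2):
        if G.has_edge(i, j):             # neither endpoint wants to cut it
            for k in (i, j):
                H = G.copy(); H.remove_edge(i, j)
                if utility(H, k) > utility(G, k):
                    return False
        else:                            # no mutually beneficial new link
            H = G.copy(); H.add_edge(i, j)
            if (utility(H, i) > utility(G, i)
                    and utility(H, j) >= utility(G, j)):
                return False
    return True

star = nx.star_graph(4)                  # hub-and-spoke topology
print("star pairwise stable:", pairwise_stable(star))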
Abstract:
In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given the individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents. Moreover, determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model developed from real-world data obtained through a survey of human subjects, and exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property which we call expected weak insensitivity. We demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties, which makes our methodology attractive for scalable preference aggregation over large-scale social networks. We conclude that our approach is superior to random polling when aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
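A minimal illustration of aggregating from a sampled subset (Borda count is our choice of rule here, and plain random sampling stands in for the paper's critical-node selection):

```python
# Aggregate rankings from a small sample and compare with the full
# population; preferences are synthetic, noise model is ours.
import random
from collections import defaultdict

alternatives = ["a", "b", "c", "d"]

def borda(rankings):
    """Aggregate best-first ranking lists by Borda score."""
    score = defaultdict(int)
    for r in rankings:
        for pos, alt in enumerate(r):
            score[alt] += len(alternatives) - 1 - pos
    return sorted(alternatives, key=lambda a: -score[a])

random.seed(0)
def noisy_ranking():
    """Ground ranking a>b>c>d with an occasional adjacent swap."""
    r = alternatives[:]
    if random.random() < 0.3:
        i = random.randrange(3); r[i], r[i + 1] = r[i + 1], r[i]
    return r

population = [noisy_ranking() for _ in range(1000)]
sample = random.sample(population, 50)   # elicit from 50 agents only
print("full-population Borda:", borda(population))
print("sampled Borda        :", borda(sample))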
Abstract:
Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which applies to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built along these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
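The Chebyshev step can be made concrete in a few lines; the CPI samples below are invented for illustration:

```python
# Chebyshev's inequality, as used here: for any CPI distribution with
# mean mu and std sigma, P(|X - mu| >= k*sigma) <= 1/k^2, so the bound
# mu + k*sigma holds with probability at least 1 - 1/k^2.
import math
import statistics

cpi_samples = [1.10, 1.15, 1.08, 1.22, 1.30, 1.12, 1.18, 1.25]
mu = statistics.mean(cpi_samples)
sigma = statistics.stdev(cpi_samples)

for p in (0.90, 0.95, 0.99):
    k = math.sqrt(1.0 / (1.0 - p))   # from 1 - 1/k^2 = p
    print(f"p={p}: k={k:.2f}, CPI bound={mu + k * sigma:.3f}")
# High-variance phases inflate sigma and hence the bound, which is why
# the paper refines such phases into lower-variance sub-phases.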