980 results for Bayesian approaches
Abstract:
Wetlands are among the most productive and biologically diverse ecosystems, yet they are very fragile and vulnerable to even small changes in their biotic and abiotic factors. In recent years, there has been concern over the continuous degradation of wetlands due to unplanned developmental activities. This necessitates inventorying, mapping, and monitoring of wetlands in order to implement sustainable management approaches. The principal objective of this work is to evolve a strategy to identify and monitor wetlands using temporal remote sensing (RS) data. Pattern classifiers were used to extract wetlands automatically from the NIR bands of MODIS, Landsat MSS, and Landsat TM remote sensing data. MODIS provided data from 2002 to 2007, while IR bands of Landsat MSS (79 m) and Landsat TM (30 m) data were used for 1973 and 1992, respectively. Principal components of the IR bands of MODIS (250 m) were fused with IRS LISS-3 NIR (23.5 m). To extract wetlands, statistical unsupervised learning of the IR bands of the respective temporal data was performed using a Bayesian approach based on prior probability, mean, and covariance. Temporal analysis indicates a sharp 58% decline of wetlands in Greater Bangalore, attributable to intense urbanization, evident from a 466% increase in built-up area from 1973 to 2007.
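To make the classification step concrete, the sketch below is a minimal, illustrative take on unsupervised Bayesian (Gaussian mixture) clustering of NIR reflectance values, in the spirit of the prior/mean/covariance description above. The scikit-learn EM implementation and the synthetic reflectance values are assumptions for the sketch, not the study's actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic NIR reflectance values: water absorbs NIR strongly, so
# wetland-like pixels cluster at low values and land pixels at higher values.
rng = np.random.default_rng(0)
nir = np.concatenate([
    rng.normal(0.05, 0.02, 500),   # assumed wetland-like pixels
    rng.normal(0.35, 0.08, 4500),  # assumed land / built-up pixels
]).reshape(-1, 1)

# Two-class Bayesian model: each class has a prior, mean and covariance,
# all estimated from the data itself (unsupervised, via EM).
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(nir)

labels = gmm.predict(nir)
wet_class = int(np.argmin(gmm.means_.ravel()))   # darker cluster taken as wetland
print(f"estimated priors: {gmm.weights_.round(3)}")
print(f"wetland pixel fraction: {np.mean(labels == wet_class):.3f}")
```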
Abstract:
Urbanisation is the increase in the population of cities in proportion to the region's rural population. Urbanisation in India is very rapid, with the urban population growing at around 2.3 percent per annum. Urban sprawl refers to dispersed development along highways, around the city, and in the rural countryside, with implications such as the loss of agricultural land, open space, and ecologically sensitive habitats. Sprawl is thus a pattern and pace of land use in which the rate of land consumed for urban purposes exceeds the rate of population growth, resulting in an inefficient and consumptive use of land and its associated resources. This unprecedented urbanisation trend, driven by a burgeoning population, has posed serious challenges to decision makers in the city planning and management process, involving a plethora of issues such as infrastructure development, traffic congestion, and basic amenities (electricity, water, and sanitation). In this context, the analysis and visualisation of urban growth patterns and their impact on natural resources have gained importance as aids to decision makers pursuing holistic approaches to city and urban planning. This communication analyses urbanisation patterns and trends using temporal remote sensing data, based on supervised learning with maximum likelihood estimation of multivariate normal density parameters and a Bayesian classification approach. The technique is implemented for Greater Bangalore, one of the fastest growing cities in the world, with Landsat data of 1973, 1992 and 2000, IRS LISS-3 data of 1999 and 2006, and MODIS data of 2002 and 2007. The study shows a growth of 466% in the urban areas of Greater Bangalore across 35 years (1973 to 2007). The study unravels the pattern of growth in Greater Bangalore and its implications for the local climate and natural resources, necessitating appropriate strategies for sustainable management.
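As a rough illustration of the supervised approach described above (maximum likelihood estimation of multivariate normal density parameters followed by Bayesian classification), the sketch below estimates per-class priors, means, and covariances from labelled training pixels and assigns a pixel to the class with the highest log-posterior. The two-band feature values and class names are synthetic assumptions, not the study's data.

```python
import numpy as np

# Per-class multivariate normal densities estimated by maximum likelihood
# from labelled training pixels, then Bayes-rule assignment of a new pixel.
rng = np.random.default_rng(1)
train = {
    "built-up":   rng.normal([0.30, 0.25], 0.05, (300, 2)),
    "vegetation": rng.normal([0.10, 0.45], 0.05, (300, 2)),
}

n_total = sum(len(x) for x in train.values())
params = {}
for cls, x in train.items():
    params[cls] = {
        "prior": len(x) / n_total,            # class prior
        "mean": x.mean(axis=0),               # ML estimate of the mean
        "cov": np.cov(x, rowvar=False),       # ML-style covariance estimate
    }

def log_posterior(pixel, p):
    # log prior + log Gaussian likelihood (class-independent constants dropped)
    d = pixel - p["mean"]
    inv = np.linalg.inv(p["cov"])
    _, logdet = np.linalg.slogdet(p["cov"])
    return np.log(p["prior"]) - 0.5 * (logdet + d @ inv @ d)

pixel = np.array([0.28, 0.27])
print(max(params, key=lambda c: log_posterior(pixel, params[c])))  # "built-up"
```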
Abstract:
Lifetime calculations for large, dense sensor networks with fixed energy resources and remaining residual energy have shown that, for a constant energy resource in a sensor network, the fault rate at the cluster head is invariant with network size when using the network layer with no MAC losses. Even after increasing the battery capacities of the nodes, the total lifetime does not increase beyond a limit of about 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic, and channel-polling needs of sensor networks. Many MAC protocols allow control over the channel polling of the new radios available to sensor nodes for communication. This further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding of a correlated data source at the single hop; (2a) estimating cluster-head errors using the Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored a priori at the multi-hop nodes. In this paper we evaluate several MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1 and ω2 and expected error P*, the single-hop error rate is bounded above by P = 2P*. We study the effects of energy losses using cross-layer simulation of a large sensor network MAC setup, and the error rate that affects finding sufficient node densities for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetimes are comparable, the expected Bayesian posterior probability error is close to, or higher than, the 2P* bound (P ≥ 2P*).
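The 2P* bound mentioned above can be illustrated numerically: for two known class densities ω1 and ω2, the Bayes error P* is the minimum achievable, and the classical Cover-Hart result bounds the asymptotic nearest-neighbour error by 2P*. The Monte Carlo sketch below uses assumed Gaussian class densities with equal priors purely for illustration.

```python
import numpy as np
from math import erf, sqrt

# Assumed class densities: omega_1 ~ N(0,1), omega_2 ~ N(2,1), equal priors.
rng = np.random.default_rng(2)
mu1, mu2, sigma = 0.0, 2.0, 1.0

def sample(n):
    y = rng.integers(0, 2, n)                          # equal class priors
    x = rng.normal(np.where(y == 0, mu1, mu2), sigma)
    return x, y

# Closed-form Bayes error for this symmetric two-Gaussian case: Phi(-delta/2)
p_star = 0.5 * (1 + erf((-(mu2 - mu1) / (2 * sigma)) / sqrt(2)))

# Empirical 1-nearest-neighbour error on held-out samples
xtr, ytr = sample(2000)
xte, yte = sample(2000)
nearest = np.abs(xte[:, None] - xtr[None, :]).argmin(axis=1)
p_nn = np.mean(ytr[nearest] != yte)
print(f"P* = {p_star:.3f}   1-NN error = {p_nn:.3f}   2*P* = {2 * p_star:.3f}")
```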
Broadcast in Adhoc Wireless Networks with Selfish Nodes: A Bayesian Incentive Compatibility Approach
Abstract:
We consider the incentive compatible broadcast (ICB) problem in ad hoc wireless networks with selfish nodes. We design a Bayesian incentive compatible broadcast (BIC-B) protocol to address this problem. Schemes based on the VCG mechanism have been widely used in the literature to design dominant strategy incentive compatible (DSIC) protocols for ad hoc wireless networks. VCG-based mechanisms have two critical limitations: (i) the network is required to be bi-connected, and (ii) the resulting protocol is not budget balanced. Our proposed BIC-B protocol overcomes these difficulties. We also prove the optimality of the proposed scheme.
Abstract:
Applications in various domains often generate very large and frequently high-dimensional datasets. Successful algorithms must avoid the curse of dimensionality while remaining computationally efficient. Finding useful patterns in large datasets has attracted considerable interest in recent years. The primary goal of this paper is to implement an efficient hybrid tree-based clustering method built on the CF-Tree and the KD-Tree, and to combine this clustering with KNN classification. The implementation must address several concerns, including good accuracy, low space usage, and low running time. We evaluate time and space efficiency, sensitivity to data input order, and clustering quality through several experiments.
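A minimal sketch of the KD-Tree/KNN step is shown below (the CF-Tree stage is omitted): synthetic 2-D training points are indexed in a KD-tree via SciPy so that each classification query only needs a local neighbourhood search rather than a scan of the whole dataset. The data, number of classes, and the parameter k are assumptions chosen for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Three synthetic 2-D clusters with known labels (illustrative only).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.3, (200, 2)) for c in ([0, 0], [3, 3], [0, 3])])
y = np.repeat([0, 1, 2], 200)

tree = cKDTree(X)                          # KD-tree index over the training points

def knn_predict(query, k=5):
    _, idx = tree.query(query, k=k)        # k nearest training points
    votes = np.bincount(y[idx], minlength=3)
    return int(votes.argmax())             # majority vote among the neighbours

print(knn_predict(np.array([2.8, 2.9])))   # expected: class 1
```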
Abstract:
In this paper, we develop a low-complexity message passing algorithm for joint support and signal recovery of approximately sparse signals. The problem of recovering strictly sparse signals from noisy measurements can be viewed as one of recovering approximately sparse signals from noiseless measurements, making the approach applicable to strictly sparse signal recovery from noisy measurements. The support recovery embedded in the approach makes it suitable for recovering signals with the same sparsity profile, as in the multiple measurement vector (MMV) problem. Simulation results show that the proposed algorithm, termed the JSSR-MP (joint support and signal recovery via message passing) algorithm, achieves performance comparable to that of the sparse Bayesian learning (M-SBL) algorithm in the literature, at an order of magnitude lower complexity than M-SBL.
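The JSSR-MP and M-SBL algorithms themselves are not reproduced here. As a generic point of reference for the recovery problem y = Ax + n with an approximately sparse x, the sketch below implements plain iterative soft-thresholding (ISTA), a standard baseline rather than the paper's method, with problem sizes and noise level chosen only for illustration.

```python
import numpy as np

# Generic sparse-recovery baseline (ISTA), *not* JSSR-MP or M-SBL:
# recover a sparse x from noisy measurements y = A x + n.
rng = np.random.default_rng(4)
n, m, k = 256, 80, 8                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.02                                     # regularisation weight (assumed)
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step size from the spectral norm
x = np.zeros(n)
for _ in range(500):
    r = x + step * A.T @ (y - A @ x)           # gradient step on the data-fit term
    x = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)   # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```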
Abstract:
The widely used Bayesian classifier is based on the assumption of equal prior probabilities for all classes. However, equal prior probabilities may not guarantee high classification accuracy for the individual classes. Here, we propose a novel technique, the Hybrid Bayesian Classifier (HBC), in which the class prior probabilities assigned to every pixel of high spatial but low spectral resolution multispectral (MS) data are determined by unmixing supplemental low spatial but high spectral resolution MS data, and are then used in Bayesian classification. This is demonstrated with two separate experiments. In the first, class abundances are estimated per pixel by unmixing Moderate Resolution Imaging Spectroradiometer data and used as prior probabilities, while posterior probabilities are determined from training data obtained from the ground; these are used to classify Indian Remote Sensing Satellite LISS-III MS data with the Bayesian classifier. In the second experiment, abundances obtained by unmixing Landsat Enhanced Thematic Mapper Plus data are used as priors, and posterior probabilities are determined from the ground data to classify IKONOS MS images with the Bayesian classifier. The results indicated that HBC systematically exploited the information from the two image sources, improving the overall accuracy of LISS-III MS classification by 6% and of IKONOS MS classification by 9%. Inclusion of the prior probabilities increased the average producer's and user's accuracies by 5.5% and 6.5% for LISS-III MS with six classes, and by 12.5% and 5.4% for IKONOS MS with the five classes considered.
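The core idea of HBC, replacing a single global prior with a per-pixel prior obtained from unmixing a coarser but spectrally richer image, can be sketched as follows. The class means, covariances, and abundance values below are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Class-conditional Gaussian likelihoods (assumed, stand-ins for values
# estimated from ground training data).
classes = ["water", "vegetation", "built-up"]
means = [np.array([0.05, 0.02]), np.array([0.10, 0.45]), np.array([0.30, 0.25])]
covs = [np.eye(2) * 0.002] * 3

pixel = np.array([0.09, 0.40])           # observed high-resolution spectrum
prior = np.array([0.10, 0.70, 0.20])     # per-pixel abundances from unmixing (assumed)

likelihood = np.array([multivariate_normal(m, c).pdf(pixel)
                       for m, c in zip(means, covs)])
posterior = prior * likelihood           # Bayes rule with the per-pixel prior
posterior /= posterior.sum()
print(dict(zip(classes, posterior.round(3))))   # class with max posterior wins
```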
Abstract:
A major challenge in wireless communications is overcoming the deleterious effects of fading, a phenomenon largely responsible for the seemingly inevitable dropped call. Multiple-antenna communication systems, commonly referred to as MIMO systems, employ multiple antennas at both the transmitter and the receiver, thereby creating a multitude of signalling pathways between them. These multiple pathways give the signal a diversity advantage with which to combat fading. Apart from helping overcome the effects of fading, MIMO systems can also be shown to provide a manyfold increase in the amount of information that can be transmitted from transmitter to receiver. Not surprisingly, MIMO has played, and continues to play, a key role in the advancement of wireless communication. Space-time codes refer to a signalling format in which information about the message is dispersed across both the spatial (or antenna) and time dimensions. Algebraic techniques, drawing on algebraic structures such as rings, fields, and algebras, have been extensively employed in the construction of optimal space-time codes that enable the potential of MIMO communication to be realized, some of which have found their way into the IEEE wireless communication standards. In this tutorial article, reflecting the authors' interests in this area, we survey some of these techniques.
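As a concrete example of dispersing information across the spatial and time dimensions, the sketch below encodes and decodes one codeword of the Alamouti code, the simplest orthogonal space-time block code (2 transmit antennas, 2 time slots). The QPSK symbols, fading coefficients, and noise level are synthetic assumptions, and this is only one of the many code families the article surveys.

```python
import numpy as np

rng = np.random.default_rng(5)
s1, s2 = (rng.choice([-1, 1]) + 1j * rng.choice([-1, 1]) for _ in range(2))  # QPSK symbols

# Alamouti codeword (rows = time slots, columns = antennas):
#   t1: ( s1 ,  s2 )
#   t2: (-s2*,  s1*)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # flat-fading gains
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

r1 = h[0] * s1 + h[1] * s2 + noise[0]                       # received at time 1
r2 = -h[0] * np.conj(s2) + h[1] * np.conj(s1) + noise[1]    # received at time 2

# Linear decoding exploits the code's orthogonality: each symbol is recovered
# with combined gain |h1|^2 + |h2|^2 (full transmit diversity).
s1_hat = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
s2_hat = np.conj(h[1]) * r1 - h[0] * np.conj(r2)
print(np.sign(s1_hat.real) + 1j * np.sign(s1_hat.imag), "vs transmitted", s1)
print(np.sign(s2_hat.real) + 1j * np.sign(s2_hat.imag), "vs transmitted", s2)
```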
Abstract:
A central scheduling problem in wireless communications is that of allocating resources to one of many mobile stations sharing a common radio channel. Much attention has been given to the design of efficient and fair scheduling schemes that are centrally controlled by a base station (BS) whose decisions depend on the channel conditions reported by each mobile. The BS is the only entity taking decisions in this framework, and its decisions are based on the mobiles' reports of their radio channel conditions. In this paper, we study the scheduling problem from a game-theoretic perspective in which some of the mobiles may be noncooperative or strategic and may not necessarily report their true channel conditions. We model this situation as a signaling game and study its equilibria. We demonstrate that the only Perfect Bayesian Equilibria (PBE) of the signaling game are of the babbling type: the noncooperative mobiles send signals independent of their channel states, the BS simply ignores them, and it allocates channels based only on prior information about the channel statistics. We then propose various approaches to enforce truthful signaling of the radio channel conditions: a pricing approach, an approach based on some knowledge of the mobiles' policies, and an approach that replaces this knowledge with a stochastic approximation scheme combining estimation and control. We further identify other equilibria that involve non-truthful signaling.
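A toy simulation can convey what the babbling outcome costs: the base station schedules using only the prior channel statistics and ignores the reports, thereby forgoing the opportunistic gain that a truthful-reporting benchmark would obtain. The uniform channel model and prior means below are assumptions made purely for this sketch, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)
prior_mean = np.array([0.8, 0.5, 0.3])                 # BS prior on mean channel quality
state = rng.uniform(0.0, 2.0 * prior_mean, size=(100_000, 3))  # true per-slot channel states

babbling = state[:, np.argmax(prior_mean)].mean()      # babbling PBE: ignore reports, use priors
benchmark = state.max(axis=1).mean()                   # achievable only with truthful reports
print(f"babbling throughput: {babbling:.3f}   truthful-reporting benchmark: {benchmark:.3f}")
```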
Abstract:
Results from elasto-plastic numerical simulations of jointed rocks using both the equivalent continuum and the discrete continuum approaches are presented and compared with experimental measurements. Initially, triaxial compression tests on different types of rocks with wide variation in uniaxial compressive strength are simulated using both approaches, and the results are compared. The applicability and the relative merits and limitations of both approaches for the simulation of jointed rocks are discussed. It is observed that both approaches are reasonably good at predicting the real response; however, the equivalent continuum approach predicted somewhat higher stiffness values at low strains. Considering the modelling effort involved in the discrete continuum approach, it is suggested that, for problems with complex geometry, a proper equivalent continuum model can be used without compromising much on the accuracy of the results. The numerical analysis of a tunnel in Japan is then taken up using the continuum approach. The predicted deformations compare well with the field measurements and with the predictions from discontinuum analysis.