901 results for probabilistic roadmap
Abstract:
The stability of a bioreactor landfill slope is influenced by the quantity and method of leachate recirculation as well as by the degree of decomposition. Other factors include variation in the properties of the waste material and the geometrical configuration, i.e., the height and slope of the landfill. Conventionally, the stability of slopes is evaluated using a factor-of-safety approach, in which the variability in the engineering properties of MSW is not considered directly and stability issues are resolved from past experience and sound engineering judgment. A probabilistic approach, on the other hand, treats variability within a mathematical framework and assesses stability in a rational manner that supports decision making. The objective of the present study is to perform a parametric study on the stability of a bioreactor landfill slope in a probabilistic framework, systematically considering important influencing factors such as variation in MSW properties, amount of leachate recirculation, and age of degradation. The results are discussed in the light of existing relevant regulations and design and operation issues.
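As a rough illustration of the probabilistic framework described above, the sketch below estimates a slope's probability of failure by Monte Carlo sampling of MSW properties. The infinite-slope factor-of-safety model and all distribution parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative MSW property distributions (hypothetical parameters).
gamma = rng.normal(10.0, 1.5, n)              # unit weight (kN/m^3)
c = rng.lognormal(np.log(15.0), 0.3, n)       # cohesion (kPa)
phi = np.radians(rng.normal(25.0, 4.0, n))    # friction angle (rad)

H, beta = 20.0, np.radians(18.0)              # slope height (m) and angle

# Infinite-slope factor of safety (dry conditions, for illustration):
# FS = (c + gamma*H*cos^2(beta)*tan(phi)) / (gamma*H*sin(beta)*cos(beta))
fs = (c + gamma * H * np.cos(beta) ** 2 * np.tan(phi)) / (
    gamma * H * np.sin(beta) * np.cos(beta))

print(f"mean FS = {fs.mean():.2f}, P(FS < 1) = {np.mean(fs < 1.0):.4f}")
```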
Abstract:
This study presents an overview of seismic microzonation and existing methodologies, along with a newly proposed methodology that covers all aspects. Earlier seismic microzonation methods focused on parameters that affect structures or foundation-related problems, but seismic microzonation has come to be recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible earthquake hazards and represent them by their spatial distribution. This paper presents a new methodology for seismic microzonation, developed based on the location of the study area and the possible associated hazards. The new method consists of seven important steps, each with a defined output, and these steps are linked with one another. Addressing a single step and its result, as is widely practiced, does not amount to seismic microzonation. This paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated from the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated through a site-specific study of local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at through a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and ranks according to a consensus opinion about their relative significance to the seismic hazard. The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but this study shows that the hazard index values depend on site-specific geotechnical parameters.
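To make the AHP step concrete, here is a minimal sketch of how theme weights and a hazard index might be computed from a pairwise comparison matrix. The matrix entries, themes, and ranks are hypothetical; the eigenvector weighting and consistency check follow the standard AHP procedure.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty's 1-9 scale) for three
# hazard themes, e.g. ground shaking, site effects, liquefaction.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Theme weights = principal eigenvector of A, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = vecs[:, k].real
w = w / w.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)
print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))

# Hazard index for one grid cell = weighted sum of normalized theme ranks.
ranks = np.array([0.8, 0.6, 0.3])   # illustrative normalized ranks
print("hazard index:", round(float(w @ ranks), 3))
```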
Abstract:
Non-negative matrix factorization (NMF) is a well-known tool for unsupervised machine learning. It can be viewed as a generalization of K-means clustering, Expectation-Maximization-based clustering, and aspect modeling by Probabilistic Latent Semantic Analysis (PLSA). Specifically, PLSA is related to NMF with a KL-divergence objective function. Further, it has been shown that K-means clustering is a special case of NMF with a matrix L2-norm-based error function. In this paper, our objective is to analyze the relation between K-means clustering and PLSA by examining the KL-divergence function and the matrix L2-norm-based error function.
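For readers who want to experiment with the two objective functions, the sketch below implements the standard Lee-Seung multiplicative updates for NMF under both the Frobenius (matrix L2) norm and the KL divergence; it is a generic textbook implementation, not the authors' code.

```python
import numpy as np

def nmf(V, r, kl=False, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates: KL divergence if kl=True
    (the PLSA-related objective), else the Frobenius norm."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)) + eps, rng.random((r, n)) + eps
    for _ in range(iters):
        if kl:
            R = V / (W @ H + eps)
            H *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps)
            R = V / (W @ H + eps)
            W *= (R @ H.T) / (H.sum(axis=1)[None, :] + eps)
        else:
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 30))  # non-negative toy data
W, H = nmf(V, r=4, kl=True)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```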
Abstract:
In recent years, there has been an upsurge of research interest in cooperative wireless communications in both academia and industry. This article presents a simple overview of the pivotal topics in both mobile station (MS)- and base station (BS)-assisted cooperation in the context of cellular radio systems. Owing to the ever-increasing amount of literature in this particular field, this article is by no means exhaustive but is intended to serve as a roadmap by assembling a representative sample of recent results and to stimulate further research. The emphasis is initially on relay-based cooperation relying on network coding, followed by the design of cross-layer cooperative protocols conceived for MS cooperation and the concept of coalition network element (CNE)-assisted BS cooperation. Then, a range of complexity and backhaul traffic reduction techniques that have been proposed for BS cooperation are reviewed. A more detailed discussion is provided in the context of MS cooperation concerning the pros and cons of dispensing with high-complexity, power-hungry channel estimation. Finally, generalized design guidelines conceived for cooperative wireless communications are presented.
Abstract:
Past studies use deterministic models to evaluate the optimal cache configuration or to explore its design space. However, with the increasing number of components present on a chip multiprocessor (CMP), deterministic approaches do not scale well. Hence, we apply probabilistic genetic algorithms (GAs) to determine a near-optimal cache configuration for a sixteen-tile CMP. We propose and implement a faster trace-based approach to estimate the fitness of a chromosome. It shows up to 218x simulation speedup over cycle-accurate architectural simulation. Our methodology can be applied to solve other cache optimization problems, such as design space exploration of a cache and its partitioning among applications/virtual machines.
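A minimal sketch of the kind of GA loop described above is given below. The chromosome encodes a (ways, sets) pair per tile, and the fitness function is only a stand-in for the paper's trace-based estimator, using an arbitrary diminishing-returns score.

```python
import math
import random

random.seed(0)
WAYS, SETS = [1, 2, 4, 8, 16], [256, 512, 1024, 2048]
N_TILES = 16

def random_chromosome():
    # One (associativity, number-of-sets) gene per tile of the CMP.
    return [(random.choice(WAYS), random.choice(SETS)) for _ in range(N_TILES)]

def fitness(chrom):
    # Stand-in for the paper's trace-based fitness estimator: diminishing
    # returns on capacity minus an area/latency penalty (purely illustrative).
    return sum(math.log(w * s) - 1e-4 * w * s for w, s in chrom)

def crossover(a, b):
    cut = random.randrange(1, N_TILES)
    return a[:cut] + b[cut:]

def mutate(chrom, p=0.1):
    return [(random.choice(WAYS), random.choice(SETS))
            if random.random() < p else gene for gene in chrom]

pop = [random_chromosome() for _ in range(40)]
for _ in range(50):                      # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                     # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 2))
```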
Abstract:
The uncertainty in material properties and traffic characterization in the design of flexible pavements has led to significant efforts in recent years to incorporate reliability methods and probabilistic design procedures for the design, rehabilitation, and maintenance of pavements. In the mechanistic-empirical (ME) design of pavements, despite the fact that there are multiple failure modes, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually treated as independent failure events. This study carries out a reliability analysis of a flexible pavement section for these failure criteria based on the first-order reliability method (FORM), the second-order reliability method (SORM), and crude Monte Carlo simulation. Through a sensitivity analysis, the most critical parameter affecting the design reliability for both the fatigue and rutting failure criteria was identified as the surface layer thickness. However, reliability analysis in pavement design is most useful if it can be efficiently and accurately applied to the components of pavement design and to the combination of these components in an overall system analysis. The study shows that, for the pavement section considered, there is a high degree of dependence between the two failure modes, and it demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of the component failures. Thus, the need to consider system reliability in pavement analysis is highlighted, and the study indicates that the improvement of pavement performance should be tackled by reducing this undesirable event of simultaneous failure and not merely by considering the more critical failure mode. Furthermore, this probability of simultaneous occurrence of failures is seen to increase considerably with small increments in the mean traffic loads, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds on the probability of failure, which provide a better estimate of the probability of failure, as validated against the results obtained from Monte Carlo simulation (MCS).
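The interplay between component and system failure probabilities can be illustrated with a small Monte Carlo experiment. The sketch below uses hypothetical lognormal capacity/demand margins, with a shared traffic load inducing the dependence between the fatigue and rutting modes; none of the numbers are from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A shared traffic load drives both failure modes (source of dependence).
traffic = rng.lognormal(np.log(1.0), 0.2, n)

# Illustrative capacities for each failure mode (hypothetical parameters).
fatigue_cap = rng.lognormal(np.log(1.5), 0.25, n)
rutting_cap = rng.lognormal(np.log(1.4), 0.25, n)

fail_fatigue = fatigue_cap < traffic
fail_rutting = rutting_cap < traffic

p1, p2 = fail_fatigue.mean(), fail_rutting.mean()
p_both = (fail_fatigue & fail_rutting).mean()
p_sys = (fail_fatigue | fail_rutting).mean()

# First-order bounds for a series system: max(p1,p2) <= p_sys <= p1+p2.
print(f"P(fatigue)={p1:.4f}  P(rutting)={p2:.4f}")
print(f"P(both)={p_both:.4f}  P(system)={p_sys:.4f}")
print(f"bounds: [{max(p1, p2):.4f}, {min(1.0, p1 + p2):.4f}]")
```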
Abstract:
In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver is proposed that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation. The proposed detection algorithm alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a mixed Gibbs sampling (MGS) strategy coupled with a multiple restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for large numbers of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and to scale well to large dimensions.
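A bare-bones sketch of the mixed sampling idea is shown below for BPSK symbols (the paper targets M-QAM): in each coordinate update, the sampler chooses between a Gibbs draw from the conditional distribution and a uniform random draw. The mixing probability, system size, and SNR are illustrative choices, and the multiple-restart machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                       # BS antennas = uplink users (illustrative)
snr_db = 10.0
q = 1.0 / N                  # mixing probability (illustrative choice)

H = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
x_true = rng.choice([-1.0, 1.0], N)          # BPSK symbols for simplicity
sigma2 = 10 ** (-snr_db / 10)
y = H @ x_true + np.sqrt(sigma2) * rng.normal(0.0, 1.0, N)

def cost(x):
    """Negative log-likelihood up to constants: ||y - Hx||^2."""
    r = y - H @ x
    return float(r @ r)

x = rng.choice([-1.0, 1.0], N)               # random initialization
best, best_cost = x.copy(), cost(x)
for _ in range(20):                          # sweeps over all coordinates
    for i in range(N):
        if rng.random() < q:
            x[i] = rng.choice([-1.0, 1.0])   # random uniform sample
        else:                                # Gibbs draw for coordinate i
            c = []
            for s in (-1.0, 1.0):
                x[i] = s
                c.append(cost(x) / sigma2)
            cmin = min(c)
            p_plus = np.exp(-(c[1] - cmin)) / (
                np.exp(-(c[0] - cmin)) + np.exp(-(c[1] - cmin)))
            x[i] = 1.0 if rng.random() < p_plus else -1.0
        if cost(x) < best_cost:              # track best vector visited
            best, best_cost = x.copy(), cost(x)

print("bit errors vs. transmitted vector:", int(np.sum(best != x_true)))
```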
Abstract:
The Himalayas are one of the most seismically active regions in the world, where the devastating 1803 Bihar-Nepal, 1897 Shillong, 1905 Kangra, 1934 Bihar-Nepal, 1950 Assam and 2011 Sikkim earthquakes were reported. Several researchers have highlighted a central seismic gap based on the stress accumulation in the central part of the Himalaya and the non-occurrence of a major earthquake between the 1905 Kangra and 1934 Bihar-Nepal events. The region has the potential to produce a great seismic event in the near future. As a result of this seismic gap, all regions adjacent to the active Himalayan region face potentially high seismic hazard from future earthquakes in the Himalayan region. In this study, the Lucknow urban centre, which lies within 350 km of the central seismic gap, has been considered for a detailed assessment of seismic hazard. The city of Lucknow also lies close to the Lucknow-Faizabad fault, which has a seismic gap of 350 years. Considering the possible seismic gap in the Himalayan region and the seismic gap in the Lucknow-Faizabad fault, the seismic hazard of Lucknow has been studied based on deterministic and probabilistic seismic hazard analysis. The results show that the northern and western parts of Lucknow have a peak ground acceleration of 0.11-0.13 g, which is 1.6 to 2.0 times higher than the seismic hazard in the other parts of Lucknow.
Abstract:
Impoverishment of particles, i.e. of the discretely simulated sample paths of the process dynamics, poses a major obstacle to employing particle filters for large-dimensional nonlinear system identification. A known route to alleviating this impoverishment, namely using an ensemble size that increases exponentially with the system dimension, remains computationally infeasible in most cases of practical importance. In this work, we explore the unscented transformation of Gaussian random variables, incorporated within a scaled Gaussian sum stochastic filter, as a means of applying nonlinear stochastic filtering theory to higher-dimensional structural system identification problems. As an additional strategy to reconcile the evolving process dynamics with the observation history, the proposed filtering scheme also modifies the process model through the incorporation of gain-weighted innovation terms. The reported numerical work on the identification of structural dynamic models of dimension up to 100 is indicative of the potential of the proposed filter to realize the stated aim of successfully treating relatively large-dimensional filtering problems.
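For reference, a compact implementation of the scaled unscented transform, the building block referred to above, is sketched below; it uses the standard sigma-point weights and an arbitrary nonlinearity for demonstration.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    the scaled unscented transform (standard sigma-point weights)."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    # 2n + 1 sigma points: the mean plus/minus scaled Cholesky columns.
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha ** 2 + beta)
    Y = np.array([f(p) for p in pts])
    m = wm @ Y                      # transformed mean
    d = Y - m
    P = (wc[:, None] * d).T @ d     # transformed covariance
    return m, P

# Example: a mildly nonlinear map applied to a 2-D Gaussian.
m, P = unscented_transform(np.array([1.0, 0.5]),
                           np.diag([0.1, 0.2]),
                           lambda x: np.array([np.sin(x[0]), x[0] * x[1]]))
print(m, P, sep="\n")
```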
Abstract:
Sensory receptors determine the type and the quantity of information available for perception. Here, we quantified and characterized the information transferred by primary afferents in the rat whisker system using neural system identification. Quantification of "how much" information is conveyed by primary afferents, using the direct method (DM), a classical information-theoretic tool, revealed that primary afferents transfer huge amounts of information (up to 529 bits/s). Information-theoretic analysis of instantaneous spike-triggered kinematic stimulus features was used to gain functional insight into "what" is coded by primary afferents. Amongst the kinematic variables tested (position, velocity, and acceleration), primary afferent spikes encoded velocity best. The other two variables contributed to information transfer, but only when combined with velocity. We further revealed three additional characteristics that play a role in information transfer by primary afferents. Firstly, primary afferent spikes show a preference for well-separated multiple stimuli (i.e., well-separated sets of combinations of the three instantaneous kinematic variables). Secondly, neurons are sensitive to short strips of the stimulus trajectory (up to 10 ms pre-spike time), and thirdly, they show spike patterns (precise doublet and triplet spiking). To deal with these complexities, we used a flexible probabilistic neuron model fitting mixtures of Gaussians to the spike-triggered stimulus distributions, which quantitatively captured the contribution of the mentioned features and allowed us to achieve a full functional analysis of the total information rate indicated by the DM. We found that instantaneous position, velocity, and acceleration explained about 50% of the total information rate. Adding a 10 ms pre-spike interval of the stimulus trajectory achieved 80-90%. The final 10-20% was found to be due to non-linear coding by spike bursts.
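A toy version of the mixture-of-Gaussians fit to spike-triggered features, using scikit-learn on synthetic data, might look as follows; the clusters, dimensions, and model-selection criterion (BIC) are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic spike-triggered (position, velocity, acceleration) samples:
# two well-separated clusters, mimicking multimodal tuning (illustrative).
a = rng.multivariate_normal([0.5, 2.0, -1.0], 0.1 * np.eye(3), 500)
b = rng.multivariate_normal([-0.5, -2.0, 1.0], 0.1 * np.eye(3), 500)
X = np.vstack([a, b])

# Fit mixtures with 1-4 components and select by BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 5)]
best = min(models, key=lambda m: m.bic(X))
print("components chosen:", best.n_components)
print("cluster means:\n", np.round(best.means_, 2))
```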
Abstract:
In this paper, we propose a low-complexity algorithm based on the Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in the conventional MCMC algorithm and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple-restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for large numbers of BS antennas and users (e.g., 64, 128, 256 BS antennas/users).
Abstract:
In this work, we consider two-dimensional (2-D) binary channels in which the 2-D error patterns are constrained so that errors cannot occur in adjacent horizontal or vertical positions. We consider probabilistic and combinatorial models for such channels. A probabilistic model is obtained from a 2-D random field defined by Roth, Siegel and Wolf (2001). Based on the conjectured ergodicity of this random field, we obtain an expression for the capacity of the 2-D non-adjacent-errors channel. We also derive an upper bound for the asymptotic coding rate in the combinatorial model.
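The combinatorial side of the problem can be explored numerically: the number of admissible 2-D error patterns is governed by the hard-square constraint (no two 1s adjacent horizontally or vertically), whose growth rate can be estimated with a transfer matrix over constrained rows, as in the illustrative sketch below.

```python
import numpy as np

def rows(w):
    # All w-bit rows with no two horizontally adjacent 1s (errors).
    return [r for r in range(1 << w) if r & (r >> 1) == 0]

def capacity_estimate(w):
    R = rows(w)
    # Row r may sit directly above row s iff they share no vertically
    # adjacent 1s, i.e. r & s == 0 (the hard-square compatibility rule).
    T = np.array([[1.0 if r & s == 0 else 0.0 for s in R] for r in R])
    lam = np.max(np.abs(np.linalg.eigvals(T)))
    return np.log2(lam) / w   # per-symbol growth rate for strip width w

for w in (4, 6, 8, 10):
    print(w, round(capacity_estimate(w), 5))
# The estimates decrease toward the hard-square entropy (~0.58789).
```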
Abstract:
This paper highlights the seismic microzonation carried out for a nuclear power plant site. Nuclear power plants are considered to be among the most important and critical structures, designed to withstand all natural disasters. Seismic microzonation is the process of demarcating a region into individual areas having different levels of various seismic hazards. This helps in identifying regions of high seismic hazard, which is vital for engineering design and land-use planning. The main objective of this paper is to carry out the seismic microzonation of a nuclear power plant site situated on the east coast of South India, based on the spatial distribution of the hazard index value. The hazard index represents the consolidated effect of all major earthquake hazards and hazard-influencing parameters. The present work will provide new directions for assessing the seismic hazards of new power plant sites in the country. The major seismic hazards considered for the evaluation of the hazard index are (1) the intensity of ground shaking at bedrock, (2) site amplification, (3) liquefaction potential and (4) the predominant frequency of the earthquake motion at the surface. The intensity of ground shaking in terms of peak horizontal acceleration (PHA) was estimated for the study area using both deterministic and probabilistic approaches with a logic tree methodology. The site characterization of the study area has been carried out using the multichannel analysis of surface waves test and available borehole data. One-dimensional ground response analysis was carried out at major locations within the study area to evaluate the PHA and spectral accelerations at the ground surface. Based on the standard penetration test data, deterministic as well as probabilistic liquefaction hazard analyses have been carried out for the entire study area. Finally, all the major earthquake hazards estimated above, together with other significant parameters representing the local geology, were integrated using the analytic hierarchy process, and a hazard index map for the study area was prepared. Maps showing the spatial variation of the seismic hazards (intensity of ground shaking, liquefaction potential and predominant frequency) and the hazard index are presented in this work.
Abstract:
The seismic hazard value of any region depends upon three important components: the probable earthquake location, the maximum earthquake magnitude and the attenuation equation. This paper presents a representative way of estimating these three components considering region-specific seismotectonic features. Rupture Based Seismic Hazard Analysis (RBSHA), given by Anbazhagan et al. (2011), is used to determine the probable future earthquake locations. This approach is verified on the earthquake data of the Bhuj region: the probable earthquake locations for this region were identified considering earthquake data up to the year 2000, and these identified locations match well with the locations reported after 2000. Further, Coimbatore City is selected as the study area to develop a representative seismic hazard map using the RBSHA approach and to compare it with deterministic seismic hazard analysis. Probable future earthquake zones for Coimbatore are located considering the rupture phenomenon as per the energy release theory discussed by Anbazhagan et al. (2011). The rupture character of the region has been established by estimating the subsurface rupture length of each source and normalizing it with respect to the length of the source. The average rupture length of a source with respect to its total length is found to be similar for most of the sources in the region, and this is called the rupture character of the region. Maximum magnitudes of the probable zones are estimated considering nearby seismic sources and the established regional rupture character. Representative GMPEs for the study area have been selected by carrying out an efficacy test using the average log-likelihood value (LLH) as a ranking estimator and considering the isoseismal map. A new seismic hazard map of Coimbatore has been developed using the above regional representative parameters: probable earthquake locations, maximum earthquake magnitudes and the best-suited GMPEs. The new hazard map gives acceleration values at bedrock for the maximum possible earthquakes. These results are compared with the deterministic seismic hazard map and recently published probabilistic seismic hazard values.
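A minimal sketch of an LLH-based efficacy test, in the spirit of the ranking estimator mentioned above, is given below; the observations and the candidate GMPE predictions are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

def llh(log_obs, log_pred_mean, sigma):
    """Average negative log2-likelihood of the observations under a
    GMPE's lognormal prediction; a lower LLH indicates a better fit."""
    g = norm.pdf(log_obs, loc=log_pred_mean, scale=sigma)
    return -np.mean(np.log2(g))

# Hypothetical observed ln(PGA) values at recording stations.
rng = np.random.default_rng(0)
obs = rng.normal(-2.0, 0.5, 30)

# Two candidate GMPEs' predictions (illustrative means and sigmas).
gmpe_a = llh(obs, log_pred_mean=-2.1, sigma=0.6)
gmpe_b = llh(obs, log_pred_mean=-1.5, sigma=0.4)
print(f"LLH A = {gmpe_a:.3f}, LLH B = {gmpe_b:.3f}")
print("preferred GMPE:", "A" if gmpe_a < gmpe_b else "B")
```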
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and these approaches fail to take into account the uncertainty in earthquake loading. Seismic hazard analysis based on the probabilistic method clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard and proceeding to the performance-based method used to evaluate the liquefaction return period. A case study has been done for Bangalore, India, based on SPT data and converted CPT values, and the results obtained from the two methods are compared. In an area of 220 km2 in Bangalore city, the site class was assessed based on a large number of borehole data and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was done based on 450 borehole data obtained in the study area. The results for CPT match well with the results obtained from a similar analysis with SPT data.
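Schematically, the performance-based calculation convolves a hazard curve with a liquefaction fragility to obtain an annual rate of liquefaction, whose reciprocal is the return period. The sketch below uses a toy power-law hazard curve and a hypothetical lognormal fragility standing in for the SPT/CPT-based model; all parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Toy PGA hazard curve: annual exceedance rate (illustrative power law).
pga = np.linspace(0.05, 0.6, 56)
rate_exceed = 1e-3 * (pga / 0.1) ** -2.5

# Rate of occurrence within each PGA bin from the exceedance curve.
d_rate = -np.diff(rate_exceed)
pga_mid = 0.5 * (pga[:-1] + pga[1:])

# Fragility: P(liquefaction | PGA), lognormal in PGA (hypothetical median
# and dispersion standing in for the SPT/CPT-based resistance model).
p_liq = norm.cdf(np.log(pga_mid / 0.25) / 0.5)

# Annual rate of liquefaction = sum over bins of P(liq | a) * rate(a).
annual_rate = np.sum(p_liq * d_rate)
print(f"annual rate = {annual_rate:.2e}, "
      f"return period = {1 / annual_rate:.0f} years")
```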