901 results for probabilistic roadmap


Relevance:

10.00%

Publisher:

Abstract:

In this paper, an improved probabilistic linearization approach is developed to study the response of nonlinear single-degree-of-freedom (SDOF) systems under narrow-band inputs. An integral equation for the probability density function (PDF) of the response envelope is derived and solved using an iterative scheme. The technique is applied to study the hardening-type Duffing oscillator under narrow-band excitation. The results compare favorably with those obtained using numerical simulation. In particular, the bimodal nature of the PDF of the response envelope for certain parameter ranges is brought out.
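
As an illustrative complement (not the paper's iterative integral-equation scheme), the envelope PDF can also be estimated by brute-force Monte Carlo: simulate a hardening Duffing oscillator driven by narrow-band noise and histogram the Hilbert envelope of the response. All parameter values below are assumed for illustration only.

```python
# Brute-force Monte Carlo sketch: hardening Duffing oscillator driven by
# narrow-band noise (white noise through a lightly damped linear filter),
# envelope via the Hilbert transform.  Not the paper's iterative scheme;
# all parameter values are assumptions.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
w0, zeta, alpha = 1.0, 0.05, 0.5   # natural frequency, damping, hardening
wf, zf, S0 = 1.0, 0.02, 0.02       # narrow-band filter centred at w0
dt, n = 0.02, 400_000

x = v = y = u = 0.0                # oscillator (x, v) and filter (y, u) states
xs = np.empty(n)
for i in range(n):
    w = rng.normal() * np.sqrt(2.0 * np.pi * S0 / dt)   # white-noise sample
    u += (-2.0 * zf * wf * u - wf * wf * y + w) * dt    # filter output y = f(t)
    y += u * dt
    v += (-2.0 * zeta * w0 * v - w0 * w0 * (x + alpha * x**3) + y) * dt
    x += v * dt
    xs[i] = x

env = np.abs(hilbert(xs[n // 2:]))           # envelope, transient discarded
pdf, edges = np.histogram(env, bins=100, density=True)
centres = 0.5 * (edges[1:] + edges[:-1])
print("modal envelope value:", centres[pdf.argmax()])  # plot pdf vs centres to see bimodality
```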

Relevance:

10.00%

Publisher:

Abstract:

Free-living amoebae of the cellular slime mould Dictyostelium discoideum aggregate when starved and give rise to a long, thin multicellular structure, the slug. The slug resembles a metazoan embryo, and as with other embryos it is possible to specify a fate map. In the case of Dictyostelium discoideum the map is especially simple: cells in the anterior fifth of the slug die and form a stalk, while the majority of those in the posterior differentiate into spores. The genesis of this anterior-posterior distinction is the subject of our review. In particular, we ask: what are the relative roles of individual pre-aggregative predispositions and post-aggregative position in determining cell fate? We review the literature on the subject and conclude that both factors are important. Variations in nutritional status, or in cell cycle phase at starvation, can bias the probability that an amoeba differentiates into a stalk cell or a spore. On the other hand, isolates, or slug fragments, consisting of only prestalk cells or only prespore cells can regulate so as to produce a normal range of both cell types. We identify three levels of control, each responsible for guiding patterning in normal development: (i) 'coin tossing', whereby a cell autonomously exhibits a preference for developing along either the stalk or the spore pathway, with relative probabilities that can be influenced by the environment; (ii) 'chemical kinetics', whereby prestalk and prespore cells originate from undifferentiated amoebae on a probabilistic basis but, having originated, interact (e.g. via positive and negative feedbacks), and the interaction influences the conversion of one cell type into the other; and (iii) 'positional information', in which the spatial distribution of morphogens in the slug influences the pathway of differentiation. In cases (i) and (ii), sorting out of like cell types leads to the final spatial pattern; in case (iii), the pattern arises in situ.
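
The first two levels of control lend themselves to a toy simulation: a biased 'coin toss' assigns initial fates, and a feedback rule converts cells until the canonical ~1:4 stalk:spore proportion is restored. The bias values and target ratio below are illustrative assumptions, not parameters from the review.

```python
# Toy model of levels (i) and (ii): each amoeba first "tosses a biased
# coin" for stalk vs spore, then feedback interactions convert cells
# until the ~1:4 stalk:spore proportion (the anterior fifth) returns.
# Bias values and the target ratio handling are assumptions.
import random

def develop(n_cells, stalk_bias, target_stalk=0.2, steps=5000):
    # level (i): cell-autonomous probabilistic choice, biased by
    # nutritional status / cell-cycle phase (encoded in stalk_bias)
    fates = ['stalk' if random.random() < stalk_bias else 'spore'
             for _ in range(n_cells)]
    # level (ii): negative feedback converts the over-represented type
    for _ in range(steps):
        frac = fates.count('stalk') / n_cells
        if abs(frac - target_stalk) < 1 / n_cells:
            break
        i = random.randrange(n_cells)
        if frac > target_stalk and fates[i] == 'stalk':
            fates[i] = 'spore'
        elif frac < target_stalk and fates[i] == 'spore':
            fates[i] = 'stalk'
    return fates.count('stalk') / n_cells

# even a pure-prespore isolate (stalk_bias=0) regulates back to ~20% stalk
print(develop(1000, stalk_bias=0.0))
```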

Relevance:

10.00%

Publisher:

Abstract:

We report enhanced emission and gain narrowing in Rhodamine 590 perchlorate dye in an aqueous suspension of polystyrene microspheres. A systematic experimental study of the threshold condition for, and the gain narrowing of, the stimulated emission over a wide range of dye concentrations and scatterer number densities showed several interesting features, even though the transport mean free path far exceeded the system size. The conventional diffusive-reactive approximation to radiative transfer in an inhomogeneously illuminated random amplifying medium, which is valid for a transport mean free path much smaller than the system size, is clearly inapplicable here. We propose a new probabilistic approach for the present case of dense, random, weak scatterers involving the otherwise rare and ignorable sub-mean-free-path scatterings, now made effective by the high gain in the medium, which is consistent with the experimentally observed features. (C) 1997 Optical Society of America.
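
A minimal random-walk sketch (under assumed, dimensionless parameters) illustrates the mechanism at stake: with the transport mean free path l_t larger than the system size L, most photons escape quickly, but the gain factor exp(path/l_g) weights the rare multiply scattered, long-path trajectories heavily.

```python
# 1-D random-walk sketch of a weakly scattering, strongly amplifying
# slab: mean free path l_t exceeds the system size L, yet amplification
# exp(path / l_g) makes rare long paths dominate the emitted intensity.
# All lengths are dimensionless assumed values.
import numpy as np

rng = np.random.default_rng(1)
L, l_t, l_g = 1.0, 5.0, 0.3   # system size, mean free path, gain length

def mean_amplification(n_photons=100_000):
    total = 0.0
    for _ in range(n_photons):
        pos, direction, path = 0.5 * L, 1.0, 0.0   # start mid-slab
        while True:
            step = rng.exponential(l_t)
            to_boundary = (L - pos) if direction > 0 else pos
            if step >= to_boundary:                # photon escapes the slab
                path += to_boundary
                break
            path += step                           # scatters inside the slab
            pos += direction * step
            direction = -direction if rng.random() < 0.5 else direction
        total += np.exp(path / l_g)                # gain along the full path
    return total / n_photons

print("mean amplification per emitted photon:", mean_amplification())
```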

Relevance:

10.00%

Publisher:

Abstract:

A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source based on the network code in order to correct network-errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval which depends on the convolutional code selected. Bounds on this interval and the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network-error correcting codes (CNECCs) designed for unit-delay networks using simulations of CNECCs on an example network under a probabilistic error model.
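
For concreteness, a generic rate-1/2 binary convolutional encoder of the kind that would sit at the source is sketched below. The paper derives its generator polynomials from the network code; the standard (7, 5) octal generators used here are purely illustrative.

```python
# A generic rate-1/2 binary convolutional encoder.  The paper designs
# its generators from the network code; the textbook (7, 5) octal pair
# below is used purely for illustration.
def conv_encode(bits, g=(0b111, 0b101), K=3):
    state = 0
    out = []
    for b in bits + [0] * (K - 1):           # zero-tail termination
        state = ((state << 1) | b) & ((1 << K) - 1)
        for gen in g:                         # one output bit per generator
            out.append(bin(state & gen).count('1') & 1)
    return out

print(conv_encode([1, 0, 1, 1]))   # codeword for the message 1011
```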

Relevance:

10.00%

Publisher:

Abstract:

A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received at their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, as a result of which some or all sinks require a large amount of memory for decoding. In this work, we address this problem by also utilizing memory elements at the internal nodes of the network, which reduces the number of memory elements used at the sinks. We give an algorithm which employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for networks with unit delay, and illustrate the performance gain of CNECCs from using memory at the intermediate nodes through simulations on an example network under a probabilistic network error model.
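
A minimal sketch of the underlying idea, assuming GF(2) arithmetic and illustrative delay values: an internal node buffers its faster incoming edge in delay registers so that the linear combination it forwards mixes symbols of the same generation only.

```python
# Sketch: an internal node holds incoming symbols in delay registers so
# that the linear combination it forwards mixes symbols of the SAME
# generation rather than across generations.  The field (GF(2), XOR)
# and the delay values are assumptions for illustration.
from collections import deque

class Node:
    def __init__(self, delays):
        # one FIFO per incoming edge; delays[i] = extra buffering needed
        # on edge i to align it with the slowest incoming edge
        self.bufs = [deque([0] * d) for d in delays]

    def step(self, symbols):
        aligned = []
        for buf, s in zip(self.bufs, symbols):
            buf.append(s)
            aligned.append(buf.popleft())
        out = 0
        for a in aligned:              # linear combination over GF(2)
            out ^= a
        return out

# edge 0 arrives 2 generations early, edge 1 is on time
node = Node(delays=[2, 0])
for gen, (s0, s1) in enumerate([(1, 0), (0, 1), (1, 1), (0, 0)]):
    print("generation", gen, "-> forwarded symbol", node.step((s0, s1)))
```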

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an overview of seismic microzonation and the grade/level-based study, along with the methods used for estimating hazard. The principles of seismic microzonation and some current practices are discussed, and a summary of seismic microzonation experiments carried out in India is presented. A detailed seismic microzonation of Bangalore is presented as a case study. In this case study, a seismotectonic map for the microzonation area has been developed covering a 350 km radius around Bangalore, India, using the seismicity and seismotectonic parameters of the region. For seismic microzonation, the Bangalore Mahanagar Palike (BMP) area of 220 km² has been selected as the study area. Seismic hazard analysis has been carried out using deterministic as well as probabilistic approaches. Synthetic ground motions at 653 locations, a recurrence relation, and peak ground acceleration maps at rock level have been generated. A detailed site characterization has been carried out using borehole data with standard penetration test (SPT) 'N' values and geophysical data. A base map and a three-dimensional subsurface borehole model have been generated for the study area using a geographical information system (GIS). The multichannel analysis of surface waves (MASW) method has been used to generate one-dimensional shear wave velocity profiles at 58 locations and two-dimensional profiles at 20 locations. These shear wave velocities are used to estimate the equivalent shear wave velocity in the study area at 5 m intervals up to a depth of 30 m. Because of the wide variation in rock depth, the equivalent shear wave velocity for the soil overburden thickness alone has been estimated and mapped using ArcGIS 9.2. Based on the equivalent shear wave velocity of the soil overburden, the study area is classified as 'site class D'. A site response study has been carried out using geotechnical properties and synthetic ground motions with the program SHAKE2000. The soil in the study area is classified as soil with moderate amplification potential. Site response results obtained using SPT 'N' values and shear wave velocity are compared; the results based on shear wave velocity are found to be lower than those based on SPT 'N' values. Further, the predominant frequency of the soil column has been estimated from ambient noise survey measurements using L4-3D short-period sensors equipped with Reftek 24-bit digital acquisition systems. The predominant frequency obtained from the site response study is compared with that from the ambient noise survey; in general, predominant frequencies in the study area vary from 3 Hz to 12 Hz. Owing to the flat terrain of the study area, the induced landslide hazard is considered remote; however, the induced liquefaction hazard has been estimated and mapped. Finally, by integrating the above hazard parameters, two hazard index maps have been developed using the Analytic Hierarchy Process (AHP) on a GIS platform: one based on the deterministic hazard analysis and the other on the probabilistic hazard analysis. A general guideline is then proposed, bringing out the advantages and disadvantages of the different approaches.
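
The "equivalent shear wave velocity" over the top 30 m used for the site classification above is the standard travel-time average. A minimal sketch, with a hypothetical layer profile and the standard NEHRP class boundaries:

```python
# Travel-time-averaged ("equivalent") shear wave velocity over the top
# 30 m, the quantity used to assign site class D above.  The layer
# profile is hypothetical; the NEHRP class boundaries are standard.
def vs30(thicknesses_m, velocities_mps):
    # Vs30 = 30 / sum(d_i / v_i), with layers truncated at 30 m depth
    depth, t_sum = 0.0, 0.0
    for d, v in zip(thicknesses_m, velocities_mps):
        d = min(d, 30.0 - depth)
        t_sum += d / v
        depth += d
        if depth >= 30.0:
            break
    return 30.0 / t_sum

def nehrp_class(v):
    for cls, lower in (('A', 1500), ('B', 760), ('C', 360), ('D', 180)):
        if v > lower:
            return cls
    return 'E'

v = vs30([5, 5, 5, 5, 10], [150, 220, 280, 350, 500])
print(round(v), nehrp_class(v))   # a soil profile like this comes out class D
```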

Relevance:

10.00%

Publisher:

Abstract:

Feature selection is an important first step in regional hydrologic studies (RHYS). Over the past few decades, advances in data collection facilities have resulted in the development of data archives on a variety of hydro-meteorological variables that may be used as features in RHYS. Currently there are no established procedures for selecting features from such archives, so hydrologists often use subjective methods to arrive at a set of features, which may lead to misleading results. To alleviate this problem, a probabilistic clustering method for regionalization is presented to determine appropriate features from the available dataset. The effectiveness of the method is demonstrated by application to regionalization of watersheds in the conterminous United States for low flow frequency analysis. Plausible homogeneous regions formed by the proposed clustering method are compared with those from conventional methods of regionalization using L-moment-based homogeneity tests. Results show that the proposed methodology is promising for RHYS.
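
As a sketch of probabilistic clustering for regionalization (using a Gaussian mixture model as a stand-in for the paper's method, on synthetic watershed features):

```python
# Probabilistic (soft) clustering of watersheds into candidate regions,
# with a Gaussian mixture model standing in for the paper's clustering
# method.  The feature matrix is synthetic: four hypothetical features
# such as drainage area, elevation, precipitation and baseflow index.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4)) + 3.0 * np.eye(4)[rng.integers(0, 4, size=200)]

Xs = StandardScaler().fit_transform(X)
gmm = GaussianMixture(n_components=4, random_state=0).fit(Xs)
membership = gmm.predict_proba(Xs)   # soft membership: each row sums to 1
print(membership[:3].round(2))       # a watershed can straddle two regions
```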

Relevance:

10.00%

Publisher:

Abstract:

Landslides are hazards encountered during the monsoon in the undulating terrain of the Western Ghats, causing a geomorphic makeover of the earth's surface and resulting in significant damage to life and property. An attempt is made in this paper to identify landslide susceptibility regions in the downstream Sharavathi river basin using the frequency ratio method, based on field investigations during July-November 2007. In this regard, base layers of spatial data such as topography, land cover, geology and soil were considered, supplemented with field investigations of landslides. Factors that influence landslides were extracted from the spatial database, and the probabilistic model (frequency ratio) is computed based on these factors. Landslide susceptibility indices were computed and grouped into five classes. Validation of the landslide susceptibility map showed an accuracy of 89%, as 25 of the 28 regions tallied with the field condition of highly vulnerable landslide regions. The landslide susceptibility map generated for the downstream region would be useful for district officials in implementing appropriate mitigation measures to reduce hazards.
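
The frequency ratio statistic itself is simple to compute: for each class of a causal factor, FR is the ratio of the landslide percentage to the area percentage of that class, and summing FR values over factors gives the susceptibility index. A minimal sketch on tiny hypothetical rasters:

```python
# Frequency ratio per factor class:
#   FR = (% of landslide cells in class) / (% of all cells in class)
# Summing per-cell FR values over factors gives the susceptibility
# index.  The rasters below are tiny hypothetical stand-ins.
import numpy as np

def frequency_ratio(factor, slides):
    """factor: integer class raster; slides: boolean landslide raster."""
    fr = {}
    for c in np.unique(factor):
        in_class = factor == c
        pct_slides = slides[in_class].sum() / max(slides.sum(), 1)
        pct_area = in_class.sum() / factor.size
        fr[int(c)] = pct_slides / pct_area
    return fr

slope_cls = np.array([[0, 1, 2], [1, 2, 2], [0, 0, 1]])   # e.g. slope classes
slides = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 0]], bool)
fr = frequency_ratio(slope_cls, slides)
lsi = np.vectorize(fr.get)(slope_cls)   # per-cell index (one factor here)
print(fr, lsi, sep="\n")
```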

Relevance:

10.00%

Publisher:

Abstract:

Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, storage requirements for the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive context-sensitive inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object, and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% for these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is often unaffected even when there is some loss of precision in the points-to representation. We find that the NoModRef percentage is within 2% of the exact analysis while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
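
The soundness-preserving approximation can be illustrated with an ordinary (single-dimension) Bloom filter over (pointer, object) pairs: queries may return false positives (lost precision) but never false negatives. The paper's multi-dimensional variant refines this basic structure; the sizes below are arbitrary.

```python
# A plain Bloom filter over (pointer, object) pairs, illustrating the
# property the analysis relies on: membership queries may yield false
# positives (reduced precision) but never false negatives, so the
# may-points-to analysis stays sound.  Sizes are arbitrary choices.
import hashlib

class PointsToBloom:
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k, self.bits = m_bits, k, bytearray(m_bits // 8)

    def _hashes(self, ptr, obj):
        for i in range(self.k):
            h = hashlib.blake2b(f"{ptr}->{obj}#{i}".encode(), digest_size=8)
            yield int.from_bytes(h.digest(), "big") % self.m

    def add(self, ptr, obj):
        for h in self._hashes(ptr, obj):
            self.bits[h // 8] |= 1 << (h % 8)

    def may_point_to(self, ptr, obj):
        return all(self.bits[h // 8] & (1 << (h % 8))
                   for h in self._hashes(ptr, obj))

pts = PointsToBloom()
pts.add("p", "heap_obj_1")
print(pts.may_point_to("p", "heap_obj_1"))   # True: never a false negative
print(pts.may_point_to("q", "heap_obj_1"))   # almost surely False
```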

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the use of probability theory in the reliability-based optimum design of reinforced gravity retaining walls is described. The formulation for computing the system reliability index is presented. A parametric study is conducted using the advanced first-order second-moment method (AFOSM) developed by Hasofer-Lind and Rackwitz-Fiessler (HL-RF) to assess the effect of uncertainties in design parameters on the probability of failure of the reinforced gravity retaining wall. In total, eight failure modes are considered, viz. overturning, sliding, eccentricity, bearing capacity failure, and shear and moment failure in the toe slab and heel slab. The analysis is performed by treating the backfill soil properties, foundation soil properties, geometric properties of the wall, reinforcement properties and concrete properties as random variables. These results are used to investigate optimum wall proportions for different coefficients of variation of φ (5% and 10%) and a target system reliability index (βt) in the range 3-3.2.
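
The HL-RF iteration at the core of AFOSM is compact enough to sketch for a single limit state with independent normal variables (the paper handles eight failure modes; the limit state below is a hypothetical sliding-type margin, not one from the paper):

```python
# Hasofer-Lind / Rackwitz-Fiessler (HL-RF) iteration for the reliability
# index beta of ONE limit state with independent normal variables -- a
# single-mode sketch of what the paper applies to eight failure modes.
# The limit state g() is a hypothetical resistance-minus-load margin.
import numpy as np

mu = np.array([40.0, 15.0])     # means of (resistance, load)
sig = np.array([4.0, 3.0])      # standard deviations

def g(x):                        # failure corresponds to g < 0
    return x[0] - 1.5 * x[1]

def grad_g(x):
    return np.array([1.0, -1.5])

u = np.zeros(2)                  # start at the mean (standard normal space)
for _ in range(50):
    x = mu + sig * u                     # map back to physical space
    gu = grad_g(x) * sig                 # chain rule: dg/du
    u_new = (gu @ u - g(x)) / (gu @ gu) * gu
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

print("reliability index beta =", round(np.linalg.norm(u), 3))  # ~2.907 here
```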

Relevance:

10.00%

Publisher:

Abstract:

Many downscaling techniques have been developed in the past few years for projecting station-scale hydrological variables from the large-scale atmospheric variables simulated by general circulation models (GCMs), in order to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, which belongs to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool valued for its ability to generalize and to capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied for downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to the reproduction of various statistics, such as dry and wet spell length distributions, the daily rainfall distribution, and intersite correlations, is examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian global climate model (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for the projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
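
The KNN analogue step described above can be sketched as follows, with synthetic predictor and rainfall data as placeholders and the common decreasing-rank kernel w_j ∝ 1/j as an assumed weighting:

```python
# KNN analogue downscaling step: find the K training days whose
# large-scale predictor vectors are closest to the target day, then
# sample one with decreasing-rank weights w_j = (1/j) / sum_i(1/i).
# Predictors and rainfall below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
X_train = rng.normal(size=(1000, 5))        # predictors (e.g. GCM fields)
y_train = rng.gamma(2.0, 3.0, size=1000) * (rng.random(1000) < 0.4)  # rain

def knn_downscale(x_new, K=10):
    d = np.linalg.norm(X_train - x_new, axis=1)
    nn = np.argsort(d)[:K]                  # K closest training days
    w = 1.0 / np.arange(1, K + 1)
    j = rng.choice(K, p=w / w.sum())        # rank-weighted random analogue
    return y_train[nn[j]]

print([round(float(knn_downscale(rng.normal(size=5))), 2) for _ in range(5)])
```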

Relevance:

10.00%

Publisher:

Abstract:

Fuzzy multiobjective programming for a deterministic case involves maximizing the minimum goal satisfaction level among conflicting goals of different stakeholders using the max-min approach. Uncertainty due to randomness in fuzzy multiobjective programming may be addressed by modifying the constraints using a probabilistic inequality (e.g., Chebyshev's inequality) or by adding new constraints using statistical moments (e.g., skewness). Such modifications may reduce the optimal value of the system performance. In the present study, a methodology is developed that allows some violation in the newly added and modified constraints and then minimizes the violation of those constraints while maximizing the minimum goal satisfaction level. Fuzzy goal programming is used to solve the multiobjective model. The proposed methodology is demonstrated with an application in the field of waste load allocation (WLA) in a river system.
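
The max-min step can be written as a small linear program: introduce λ, the minimum goal satisfaction level, and maximize it subject to every membership function exceeding λ. The two membership functions below are hypothetical stand-ins for stakeholder goals in a waste load allocation setting:

```python
# Max-min fuzzy multiobjective step as a linear program: maximize
# lambda subject to every (linearized) membership function >= lambda.
# The two memberships are hypothetical stakeholder goals, not from the
# paper's WLA model.
from scipy.optimize import linprog

# decision variable x in [0, 1] = treatment fraction; memberships:
#   pollution-control board: mu1(x) = x          (wants high treatment)
#   dischargers:             mu2(x) = 1 - 0.8*x  (want low treatment cost)
# maximize lambda s.t. mu1 >= lambda, mu2 >= lambda
# variables: [x, lam]; linprog minimizes, hence c = [0, -1]
res = linprog(c=[0, -1],
              A_ub=[[-1.0, 1.0],    # lam - x      <= 0  (mu1 >= lam)
                    [0.8, 1.0]],    # lam + 0.8*x  <= 1  (mu2 >= lam)
              b_ub=[0.0, 1.0],
              bounds=[(0, 1), (0, 1)])
x_opt, lam = res.x
print(f"treatment fraction = {x_opt:.3f}, satisfaction level = {lam:.3f}")
```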

Relevance:

10.00%

Publisher:

Abstract:

In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium are dependent on each other and also on the data written. We consider a simple 1-D combinatorial model of this medium. In our model, we assume a setting where binary data is sequentially written on the medium and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct this type of error, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium and derive lower and upper estimates of its capacity. A lower bound is derived by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under the assumption of a uniform input distribution. An upper bound is found by showing that the original channel is a stochastic degradation of another, related channel model whose capacity we can compute explicitly.
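
The error model is easy to state operationally: as bits are written left to right, each bit may, with some probability p, take the value of the immediately preceding stored bit. A tiny Monte Carlo sketch (the value of p, and the interpretation that the copy comes from the stored neighbour, are assumptions):

```python
# 1-D error model sketch: while writing left to right, each bit may,
# with probability p, be overwritten by the value of the immediately
# preceding stored bit.  p is an assumed value.
import random

def write_medium(data, p=0.1):
    out = []
    for i, b in enumerate(data):
        if i > 0 and random.random() < p:
            out.append(out[i - 1])     # bit copies its left neighbour
        else:
            out.append(b)
    return out

random.seed(0)
n, p = 100_000, 0.1
data = [random.randint(0, 1) for _ in range(n)]
stored = write_medium(data, p)
errs = sum(a != b for a, b in zip(data, stored))
print("raw error rate:", errs / n)   # ~ p/2: a copy only corrupts the bit
                                     # when the neighbouring values differ
```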

Relevance:

10.00%

Publisher:

Abstract:

The problem of on-line recognition and retrieval of relatively weak industrial signals, such as partial discharges (PD) buried in excessive noise, is addressed in this paper. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), owing to the overlapping broadband frequency spectra of PI and PD pulses; as a result, on-line, on-site PD measurement is hardly possible with conventional frequency-based DSP techniques. The observed PD signal is modeled as a linear combination of systematic and random components employing probabilistic principal component analysis (PPCA), and the PDF of the underlying stochastic process is obtained. The PD/PI pulses are taken as the mean of the process and modeled using non-parametric methods based on smooth FIR filters, with a maximum a posteriori (MAP) procedure employed to estimate the filter coefficients. The classification of the pulses is undertaken using a simple PCA classifier. The methods proposed by the authors were found to be effective in automatically retrieving PD pulses while completely rejecting PI.
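
The separation into systematic and random components can be sketched with scikit-learn, whose PCA estimator also fits the PPCA noise variance; the synthetic pulse records below stand in for real PD/PI measurements, and this is not the authors' full MAP/FIR estimation procedure:

```python
# Separating systematic from random components of noisy pulse records
# with PCA; scikit-learn's PCA also estimates the PPCA noise variance
# (noise_variance_).  Synthetic damped-sinusoid pulses stand in for PD
# records; this is NOT the authors' full MAP/FIR procedure.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
template = np.exp(-60.0 * t) * np.sin(2.0 * np.pi * 25.0 * t)

amps = rng.uniform(0.5, 2.0, size=300)              # random pulse amplitudes
records = amps[:, None] * template + rng.normal(0.0, 0.5, size=(300, 200))

pca = PCA(n_components=2)                           # systematic subspace
denoised = pca.inverse_transform(pca.fit_transform(records))

# true injected noise variance is 0.25, so the estimate should be close
print("estimated noise variance:", round(float(pca.noise_variance_), 3))
```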