991 results for Probabilistic approaches
Assessment of seismic hazard and liquefaction potential of Gujarat based on probabilistic approaches
Abstract:
Gujarat is one of the fastest-growing states of India, with high industrial activity coming up in its major cities. It is indispensable to analyse the seismic hazard, as the region is considered the most seismically active part of the stable continental region of India. The 2001 Bhuj earthquake caused extensive damage in terms of casualties and economic loss. In the present study, the seismic hazard of Gujarat was evaluated using a probabilistic approach within a logic tree framework that minimizes the uncertainties in hazard assessment. The peak horizontal acceleration (PHA) and spectral acceleration (Sa) values were evaluated for 10 and 2 % probabilities of exceedance in 50 years. Two important geotechnical effects of earthquakes, site amplification and liquefaction, were also evaluated, considering site characterization based on site classes. The liquefaction return period for the entire state of Gujarat was evaluated using a performance-based approach. The maps of PHA and Sa values prepared in this study are useful for seismic hazard mitigation of the region in the future.
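The two hazard levels quoted above correspond, under the Poisson occurrence model that is standard in probabilistic seismic hazard analysis, to fixed return periods. A minimal sketch of the conversion (the function name is illustrative, not from the paper):

```python
import math

def return_period(poe, t_years):
    """Return period implied by a probability of exceedance over an
    exposure time, under the usual Poisson occurrence assumption:
    poe = 1 - exp(-t / T)  =>  T = -t / ln(1 - poe)."""
    return -t_years / math.log(1.0 - poe)

# The two hazard levels used in the abstract:
print(round(return_period(0.10, 50)))  # 10 % in 50 years -> 475 years
print(round(return_period(0.02, 50)))  # 2 % in 50 years  -> 2475 years
```

These are the familiar 475-year and 2475-year return periods that underlie most design-code hazard maps.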
Abstract:
The inherent stochastic character of most of the physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability. The computational efficiency and robustness are very good too, even when dealing with strongly non-Gaussian distributions. Moreover, the resulting samples possess all the relevant, well-defined and desired properties of “translation fields”, including crossing rates and distributions of extremes. The topic of the second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars in existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage affecting only a small portion of a bar, genetic algorithms have proved to be an effective tool for solving the problem.
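The "translation field" idea referred to above maps a Gaussian sample through the standard normal CDF and then through the inverse of the target marginal. A point-wise sketch with an exponential target marginal (this illustrates only the marginal transform, not the thesis's spectrum-matching algorithm):

```python
import math
import random

def std_normal_cdf(g):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(g / math.sqrt(2.0)))

def translate_to_exponential(g, rate=1.0):
    """Translation mapping x = F^{-1}(Phi(g)) for an exponential
    target marginal F, whose inverse CDF is available in closed form."""
    return -math.log(1.0 - std_normal_cdf(g)) / rate

random.seed(0)
gaussian_samples = [random.gauss(0.0, 1.0) for _ in range(20000)]
translated = [translate_to_exponential(g) for g in gaussian_samples]

# the translated samples should reproduce the exponential mean (= 1/rate)
mean = sum(translated) / len(translated)
print(mean)
```

For a full translation *field*, the same point-wise map would be applied to a correlated Gaussian field generated, e.g., by spectral representation; the difficulty the thesis addresses is choosing the underlying Gaussian spectrum so that the translated field matches a target spectrum.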
Abstract:
This paper explores the current state of the art in performance indicators and probabilistic approaches used in climate change impact studies. It presents a critical review of recent publications in this field, focussing on (1) metrics for energy use for heating and cooling, emissions, overheating and high-level performance aspects, and (2) the uptake of uncertainty and risk analysis. This is followed by a case study, which is used to explore some of the contextual issues around the broader uptake of climate change impact studies in practice. The work concludes that probabilistic predictions of the impact of climate change are feasible, but only under strict and explicitly stated assumptions. © 2011 Elsevier B.V. All rights reserved.
Abstract:
This paper presents an approach for the probabilistic analysis of unbalanced three-phase weakly meshed distribution systems considering uncertainty in load demand. In order to achieve high computational efficiency, this approach combines an efficient method for probabilistic analysis with a radial power flow. The probabilistic approach used is the well-known two-point estimate method, while the compensation-based radial power flow is used to exploit the topological characteristics of distribution systems. The proposed generation model allows modeling either a PQ or a PV bus at the connection point between the network and the distributed generator. In addition, it allows control of the generator operating conditions, such as the field current and the power delivered at the terminals. Results of tests with the IEEE 37-bus system are given to illustrate the operation and effectiveness of the proposed approach. Monte Carlo simulation is used to validate the results. © 2011 IEEE.
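Hong's two-point estimate scheme evaluates the model at only two points per uncertain input (2m deterministic runs for m inputs) instead of thousands of Monte Carlo samples. A minimal sketch of the zero-skewness case, with a toy linear "power flow" standing in for the real three-phase solver:

```python
import math

def two_point_estimate(f, means, stds):
    """Hong's 2m-point estimate (zero-skewness inputs): evaluate f at
    mu_k +/- sqrt(m)*sigma_k for each input k, weight each run 1/(2m),
    and accumulate the first two raw moments of the output."""
    m = len(means)
    e1 = e2 = 0.0
    for k in range(m):
        for sign in (+1.0, -1.0):
            x = list(means)
            x[k] = means[k] + sign * math.sqrt(m) * stds[k]
            y = f(x)
            w = 1.0 / (2 * m)
            e1 += w * y
            e2 += w * y * y
    return e1, e2 - e1 * e1  # mean and variance of the output

# toy model: total feeder load as the sum of three uncertain demands;
# for this linear case the exact answers are mean 60 and variance 14
mean, var = two_point_estimate(sum, [10.0, 20.0, 30.0], [1.0, 2.0, 3.0])
print(mean, var)
```

Six deterministic runs recover the exact moments here because the model is linear; for a nonlinear power flow the scheme gives an approximation, which is what the paper validates against Monte Carlo simulation.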
Abstract:
Structural durability is an important criterion that must be evaluated for every type of structure. Concerning reinforced concrete members, the chloride diffusion process is widely used to evaluate durability, especially when these structures are constructed in aggressive atmospheres. Chloride ingress triggers the corrosion of the reinforcement; therefore, by modelling this phenomenon, the corrosion process, and hence the structural durability, can be better evaluated. Corrosion begins when a threshold level of chloride concentration is reached at the steel bars of the reinforcement. Despite the robustness of several models proposed in the literature, deterministic approaches fail to predict accurately the corrosion initiation time due to the inherent randomness observed in this process. In this regard, structural durability can be represented more realistically using probabilistic approaches. This paper addresses the probabilistic analysis of the corrosion initiation time in reinforced concrete structures exposed to chloride penetration. Chloride penetration is modelled using Fick's diffusion law, which simulates the chloride diffusion process considering time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the first-order reliability method, with a direct coupling approach. Some examples are considered in order to study these phenomena. Moreover, a simplified method is proposed to determine optimal values for the concrete cover.
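Fick's second law with constant surface chloride has the closed-form solution C(x,t) = C_s[1 − erf(x/(2√(Dt)))], so the probability that the threshold is exceeded at the rebar depth can be estimated by crude Monte Carlo over the random inputs. A sketch in which every distribution is purely illustrative (not the paper's calibrated data):

```python
import math
import random

def chloride_content(x, t, d_coef, c_s):
    """Fick's-law chloride concentration at depth x (m) after time t (s)."""
    return c_s * (1.0 - math.erf(x / (2.0 * math.sqrt(d_coef * t))))

def prob_corrosion_initiated(t_years, n_samples=20000, seed=1):
    """Crude Monte Carlo estimate of P[C(cover, t) >= C_crit].
    All distributions below are illustrative placeholders."""
    random.seed(seed)
    year = 365.25 * 24 * 3600.0
    failures = 0
    for _ in range(n_samples):
        cover = random.gauss(0.05, 0.005)                      # cover depth, m
        d_coef = random.lognormvariate(math.log(1e-12), 0.2)   # diffusivity, m^2/s
        c_s = random.gauss(0.006, 0.001)                       # surface chloride
        c_crit = random.gauss(0.0012, 0.0002)                  # corrosion threshold
        if chloride_content(cover, t_years * year, d_coef, c_s) >= c_crit:
            failures += 1
    return failures / n_samples

print(prob_corrosion_initiated(50))
```

The failure probability is monotone in the exposure time, since the concentration at a fixed depth can only grow; the paper's direct coupling with FORM replaces this brute-force counting with a far cheaper reliability estimate.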
Abstract:
In Germany the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyse the uncertainty associated with it. It was found that the method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from the corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35 000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performances of the upscaling and probabilistic approaches have been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz.
It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
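The frequency-weighted averaging step described above is simply an expectation over the discrete parameter statistics. A minimal sketch, in which both the plant model and the frequency table are invented for illustration:

```python
import math

def expected_power(power_model, weighted_param_sets):
    """Most probable output: the power evaluated for each frequently
    occurring parameter set, weighted by its frequency of occurrence."""
    total = sum(freq for _, freq in weighted_param_sets)
    return sum(freq * power_model(**params)
               for params, freq in weighted_param_sets) / total

def toy_pv_model(tilt_deg, azimuth_deg):
    """Crude normalized-yield proxy: best at 30 deg tilt, facing due south."""
    return math.cos(math.radians(tilt_deg - 30)) * math.cos(math.radians(azimuth_deg))

# illustrative (tilt, azimuth) classes for one region / capacity class
classes = [
    ({"tilt_deg": 30, "azimuth_deg": 0},   0.5),
    ({"tilt_deg": 45, "azimuth_deg": 20},  0.3),
    ({"tilt_deg": 15, "azimuth_deg": -20}, 0.2),
]
print(expected_power(toy_pv_model, classes))
```

In the thesis the frequency tables come from the statistical analysis of the ~35 000 plants, assessed separately per capacity class and two-digit zip-code region.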
Abstract:
This paper presents an overview of seismic microzonation and the grade/level-based study, along with the methods used for estimating hazard. The principles of seismic microzonation and some current practices are discussed, and a summary of the seismic microzonation experiments carried out in India is presented. A detailed seismic microzonation of Bangalore is presented as a case study. In this case study, a seismotectonic map for the microzonation area has been developed covering a 350 km radius around Bangalore, India, using the seismicity and seismotectonic parameters of the region. For the seismic microzonation, the Bangalore Mahanagar Palike (BMP) area of 220 km2 has been selected as the study area. Seismic hazard analysis has been carried out using deterministic as well as probabilistic approaches. Synthetic ground motions at 653 locations, a recurrence relation and peak ground acceleration maps at rock level have been generated. A detailed site characterization has been carried out using boreholes with standard penetration test (SPT) "N" values and geophysical data. A base map and a 3-dimensional subsurface borehole model have been generated for the study area using a geographical information system (GIS). The multichannel analysis of surface waves (MASW) method has been used to generate one-dimensional shear wave velocity profiles at 58 locations and two-dimensional profiles at 20 locations. These shear wave velocities are used to estimate the equivalent shear wave velocity in the study area at 5 m intervals up to a depth of 30 m. Because of the wide variation in rock depth, the equivalent shear wave velocity for the soil overburden thickness alone has been estimated and mapped using ArcGIS 9.2. Based on the equivalent shear wave velocity of the soil overburden, the study area is classified as site class D.
A site response study has been carried out using geotechnical properties and synthetic ground motions with the program SHAKE2000. The soil in the study area is classified as soil with moderate amplification potential. Site response results obtained using standard penetration test (SPT) "N" values and shear wave velocity were compared; the results based on shear wave velocity are lower than those based on SPT "N" values. Further, the predominant frequency of the soil column has been estimated from ambient noise survey measurements using L4-3D short-period sensors equipped with Reftek 24-bit digital acquisition systems. The predominant frequency obtained from the site response study is compared with that from the ambient noise survey. In general, predominant frequencies in the study area vary from 3 Hz to 12 Hz. Due to the flat terrain in the study area, the induced effect of landslides is considered to be remote. However, the induced effect of liquefaction hazard has been estimated and mapped. Finally, by integrating the above hazard parameters, two hazard index maps have been developed using the Analytic Hierarchy Process (AHP) on a GIS platform: one based on the deterministic hazard analysis and the other on the probabilistic hazard analysis. A general guideline is proposed by bringing out the advantages and disadvantages of the different approaches.
Abstract:
This paper presents a detailed study of the seismicity pattern of the state of Karnataka and quantifies the seismic hazard for the entire state. In the present work, historical and instrumental seismicity data for Karnataka (within 300 km of the Karnataka political boundary) were compiled, and the hazard analysis was based on these data. Geographically, Karnataka forms a part of peninsular India, which is tectonically identified as an intraplate region of the Indian plate. Due to the convergent movement of the Indian plate with the Eurasian plate, movements are occurring along major intraplate faults, resulting in seismic activity in the region; hence the hazard assessment of this region is very important. Apart from referring to the seismotectonic atlas for identifying faults and fractures, major lineaments in the study area were also mapped using satellite data. The earthquake events reported by various national and international agencies were collected up to 2009. The earthquake catalog was declustered to remove foreshocks and aftershocks. Seismic hazard analysis was carried out for the state of Karnataka using both deterministic and probabilistic approaches, incorporating a logic tree methodology. The peak ground acceleration (PGA) at rock level was evaluated for the entire state considering a grid size of 0.05° × 0.05°. The attenuation relations proposed for stable continental shield regions were used in evaluating the seismic hazard, with appropriate weighting factors. Response spectra at rock level were evaluated for important Tier II cities and Bangalore. Contour maps showing the spatial variation of PGA values at bedrock are presented in this work.
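The recurrence behaviour fitted to a declustered catalog in such studies is the Gutenberg–Richter law, log10 N(≥M) = a − bM. A minimal least-squares sketch over a synthetic catalog (all numbers invented for illustration):

```python
import math

def gutenberg_richter_fit(magnitudes, m_min=3.0, bin_width=0.5):
    """Least-squares fit of log10 N(>=M) = a - b*M to a catalog,
    using cumulative counts at magnitude thresholds."""
    edges, logn = [], []
    m = m_min
    while m <= max(magnitudes):
        count = sum(1 for mag in magnitudes if mag >= m)
        if count > 0:
            edges.append(m)
            logn.append(math.log10(count))
        m += bin_width
    n = len(edges)
    sx, sy = sum(edges), sum(logn)
    sxx = sum(x * x for x in edges)
    sxy = sum(x * y for x, y in zip(edges, logn))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope, (sy - slope * sx) / n  # b-value, a-value

# synthetic catalog built so that N(>=M) drops roughly tenfold
# per magnitude unit, i.e. a true b-value near 1
catalog = ([3.0] * 684 + [3.5] * 216 + [4.0] * 68 + [4.5] * 22
           + [5.0] * 7 + [5.5] * 2 + [6.0] * 1)
b, a = gutenberg_richter_fit(catalog)
print(round(b, 2), round(a, 2))
```

Declustering before the fit matters because foreshocks and aftershocks violate the independence assumption behind the Poissonian hazard integration.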
Abstract:
This paper highlights the seismic microzonation carried out for a nuclear power plant site. Nuclear power plants are considered to be among the most important and critical structures, designed to withstand all natural disasters. Seismic microzonation is the process of demarcating a region into individual areas having different levels of various seismic hazards. This helps in identifying regions of high seismic hazard, which is vital for engineering design and land-use planning. The main objective of this paper is to carry out the seismic microzonation of a nuclear power plant site situated on the east coast of South India, based on the spatial distribution of the hazard index value. The hazard index represents the consolidated effect of all major earthquake hazards and hazard-influencing parameters. The present work will provide new directions for assessing the seismic hazards of new power plant sites in the country. The major seismic hazards considered for the evaluation of the hazard index are (1) the intensity of ground shaking at bedrock, (2) site amplification, (3) liquefaction potential and (4) the predominant frequency of the earthquake motion at the surface. The intensity of ground shaking in terms of peak horizontal acceleration (PHA) was estimated for the study area using both deterministic and probabilistic approaches with a logic tree methodology. The site characterization of the study area has been carried out using the multichannel analysis of surface waves test and available borehole data. One-dimensional ground response analysis was carried out at major locations within the study area to evaluate the PHA and spectral accelerations at the ground surface. Based on the standard penetration test data, deterministic as well as probabilistic liquefaction hazard analyses have been carried out for the entire study area.
Finally, all the major earthquake hazards estimated above and other significant parameters representing the local geology were integrated using the analytic hierarchy process, and a hazard index map for the study area was prepared. Maps showing the spatial variation of the seismic hazards (intensity of ground shaking, liquefaction potential and predominant frequency) and the hazard index are presented in this work.
Abstract:
The objective of this paper is to develop seismic hazard maps of Patna district considering the region-specific maximum magnitude and ground motion prediction equations (GMPEs), using worst-case deterministic and classical probabilistic approaches. Patna, located near the seismically active Himalayan region, has been subjected to destructive earthquakes such as the 1803 and 1934 Bihar-Nepal earthquakes. Based on past seismicity and the earthquake damage distribution, linear sources and seismic events have been considered within a radius of about 500 km around the Patna district center. The maximum magnitude (M_max) has been estimated using conventional approaches, namely the maximum observed magnitude (M_max,obs) and/or an increment of 0.5, the Kijko method, and regional rupture characteristics. The maximum of these three is taken as the maximum probable magnitude for each source. Twenty-seven GMPEs were found applicable for the Patna region. Of these, suitable region-specific GMPEs were selected by performing the 'efficacy test,' which makes use of the log-likelihood. The maximum magnitude and the selected GMPEs were used to estimate PGA and spectral acceleration at 0.2 and 1 s, which were mapped for the worst-case deterministic approach and for 2 and 10 % probabilities of exceedance in 50 years. Furthermore, the seismic hazard results were used to develop deaggregation plots to quantify the contribution of seismic sources in terms of magnitude and distance. In this study, a normalized site-specific design spectrum has been developed by dividing the hazard map into four zones based on the peak ground acceleration values. This site-specific response spectrum has been compared with the recent 2011 Sikkim earthquake and the Indian seismic code IS 1893.
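The 'efficacy test' mentioned above ranks candidate GMPEs by the average log-likelihood they assign to observed ground motions. A minimal sketch assuming normally distributed (log-space) GMPE residuals; all names and numbers are illustrative:

```python
import math

def llh_score(observed_ln, predicted_ln, sigma_ln):
    """Average negative log2-likelihood of the observations under a GMPE
    whose residuals are modeled as Normal(predicted, sigma) in log space.
    A smaller score indicates a better-performing GMPE."""
    total = 0.0
    for obs, pred, sig in zip(observed_ln, predicted_ln, sigma_ln):
        pdf = math.exp(-0.5 * ((obs - pred) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))
        total += math.log2(pdf)
    return -total / len(observed_ln)

# a GMPE whose median predictions match the data scores lower (better)
obs = [0.1, 0.2, 0.15]
good = llh_score(obs, [0.1, 0.2, 0.15], [0.5, 0.5, 0.5])
bad = llh_score(obs, [1.1, 1.2, 1.15], [0.5, 0.5, 0.5])
print(good < bad)
```

In practice the scores are also converted into the weights used to combine the selected GMPEs in the logic tree.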
Abstract:
This paper surveys vision-based localization techniques for fully autonomous mobile robots in RoboCup soccer, covering both robot self-localization and cooperative multi-robot object localization. The development and classification of localization techniques are introduced. Various self-localization methods are systematically analyzed and compared with respect to how the robot's environment is represented and whether prior pose information and probabilistic methods are applied. For cooperative multi-robot object localization, static methods and dynamic tracking methods are described. Key research issues in the localization process are summarized, including sensor model construction, image processing, feature matching, and the problems involved in cooperation. Finally, open problems in vision-based localization and trends in the technology are discussed.
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, we reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: "The Jama Model. On Legal Narratives and Interpretation Patterns"), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for AI researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars. Nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen) among those legal scholars who are involved in the controversy are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; or take Boole, who was a starry-eyed believer in probability applications to judicial decision making (Rosoni 1995). Not unlike Boole, the founding father of computing, nowadays computer scientists approaching the field may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, I reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: 'The JAMA Model and Narrative Interpretation Patterns'), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for Artificial Intelligence (AI) researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars; nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen), among those legal scholars who are involved in the controversy, are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; or take Boole, who was a starry-eyed believer in probability applications to judicial decision making. Not unlike Boole, the founding father of computing, nowadays computer scientists approaching the field may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.
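The Bayesian side of this controversy rests on updating in odds form: posterior odds = prior odds × likelihood ratio of the evidence. A toy numeric sketch (the numbers are invented, not drawn from the Jama case):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayesian updating in odds form: posterior = prior * LR,
    where LR = P(evidence | guilt) / P(evidence | innocence)."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1.0 + odds)

# prior odds of 1:1000, evidence with a likelihood ratio of 10 000
post = posterior_odds(1 / 1000, 10_000)
print(round(odds_to_prob(post), 3))  # posterior odds 10:1, i.e. ~0.909
```

The Bayesio-skeptics' objection is precisely that neither the prior odds nor the likelihood ratio can be given defensible numbers in most evidentiary settings, which is why the paper points beyond narrowly meant probability toward models of plausibility.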
Abstract:
Inferring population admixture from genetic data and quantifying it is a difficult but crucial task in evolutionary and conservation biology. Unfortunately, state-of-the-art probabilistic approaches are computationally demanding. Effectively exploiting the computational power of modern multiprocessor systems can thus have a positive impact on Monte Carlo-based simulation of admixture modeling. A novel parallel approach is briefly described, and promising results on its message passing interface (MPI)-based C implementation are reported.