874 results for Automated Hazard Analysis


Relevance: 80.00%

Abstract:

We address the problem of detecting cells in biological images. The problem is important in many automated image analysis applications. We identify the problem as one of clustering and formulate it within the framework of robust estimation using loss functions. We show how suitable loss functions may be chosen based on a priori knowledge of the noise distribution. Specifically, in the context of biological images, since the measurement noise is not Gaussian, quadratic loss functions yield suboptimal results. We show that by incorporating the Huber loss function, cells can be detected robustly and accurately. To initialize the algorithm, we also propose a seed selection approach. Simulation results show that Huber loss exhibits better performance compared with some standard loss functions. We also provide experimental results on confocal images of yeast cells. The proposed technique exhibits good detection performance even when the signal-to-noise ratio is low.
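The core of the approach, robust centre estimation under the Huber loss, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical implementation of Huber-loss location estimation via iteratively reweighted least squares; the function name, the delta value and the synthetic data are assumptions, and the paper's full pipeline (seed selection, clustering of multiple cells) is not reproduced.

```python
# Hypothetical sketch: robust estimation of a single cell centre under the Huber
# loss via iteratively reweighted least squares (IRLS). All names and the delta
# value are illustrative assumptions.
import numpy as np

def huber_center(points, delta=1.0, n_iter=50, tol=1e-6):
    """Estimate a robust centre of `points` (N x 2 array) under the Huber loss."""
    center = points.mean(axis=0)                      # quadratic-loss (least-squares) start
    for _ in range(n_iter):
        r = np.linalg.norm(points - center, axis=1)   # residual magnitudes
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))  # Huber weights
        new_center = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(new_center - center) < tol:
            break
        center = new_center
    return center

# Usage: a gross outlier pulls the plain mean but barely moves the Huber estimate.
pts = np.vstack([np.random.randn(100, 2), [[20.0, 20.0]]])
print(pts.mean(axis=0), huber_center(pts))
```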

Relevance: 80.00%

Abstract:

This paper presents a new hierarchical clustering algorithm for crop stage classification using hyperspectral satellite imagery. Among the many benefits and uses of remote sensing, one important application is crop stage classification. Modern commercial imaging satellites, owing to the large volume of imagery they produce, offer greater opportunities for automated image analysis. We therefore propose an unsupervised algorithm, the Hierarchical Artificial Immune System (HAIS), consisting of two steps: splitting the cluster centres and merging them. The high dimensionality of the data is reduced with the help of Principal Component Analysis (PCA). The classification results are compared with the K-means and Artificial Immune System algorithms. From the results obtained, we conclude that the proposed hierarchical clustering algorithm is accurate.
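As an illustration of the pre-processing described above, the sketch below reduces a synthetic hyperspectral cube with PCA and then clusters the pixel spectra. It is a minimal, assumption-laden sketch: the HAIS split/merge steps are not reproduced, and K-means merely stands in as the baseline the paper compares against.

```python
# Hypothetical sketch of the pre-processing step: PCA reduction of a hyperspectral
# cube before clustering. The cube here is random synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

cube = np.random.rand(100, 100, 220)              # rows x cols x spectral bands (synthetic)
pixels = cube.reshape(-1, cube.shape[-1])         # one spectrum per pixel

scores = PCA(n_components=5).fit_transform(pixels)        # keep the first few components
labels = KMeans(n_clusters=4, n_init=10).fit_predict(scores)
label_map = labels.reshape(cube.shape[:2])        # per-pixel crop-stage labels
print(label_map.shape)
```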

Relevance: 80.00%

Abstract:

This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structures or foundations, but seismic microzonation is now generally recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible earthquake-induced hazards and represent them through their spatial distribution. This paper presents a new methodology for seismic microzonation based on the location of the study area and its possible associated hazards. The new method consists of seven important steps, each with a defined output, and the steps are linked to one another; addressing a single step and its result, as is widely practiced, does not constitute seismic microzonation. The paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated from the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated through a site-specific study of local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is obtained through a multi-criteria evaluation technique (AHP), in which each theme and its features are assigned weights and ranks according to a consensus opinion about their relative significance to the seismic hazard. The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but this study shows that the hazard index values depend on site-specific geotechnical parameters.
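The AHP integration step can be sketched as follows: theme weights are taken from the principal eigenvector of a pairwise-comparison matrix, and the hazard index at a grid cell is a weighted sum of normalised theme ranks. The matrix entries, theme list and rank values below are illustrative assumptions, not those used in the study.

```python
# Hypothetical sketch of AHP-based hazard index computation.
import numpy as np

# Pairwise comparisons between themes (e.g. PGA, amplification, liquefaction), Saaty scale.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 3.0],
              [1/5., 1/3., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                   # AHP theme weights (principal eigenvector)

ranks = np.array([0.8, 0.6, 0.3])                 # normalised theme ranks at one grid cell
hazard_index = float(w @ ranks)                   # weighted linear combination
print(w, hazard_index)
```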

Relevance: 80.00%

Abstract:

The Himalayan region is one of the most seismically active regions in the world, and many researchers have highlighted the possibility of a great seismic event in the near future owing to the seismic gap. Seismic hazard analysis and microzonation of highly populated places in the region are therefore essential at a regional scale. A region-specific Ground Motion Predictive Equation (GMPE) is an important input to seismic hazard analysis for both macro- and micro-zonation studies. The few GMPEs developed for India are based on recorded data and are applicable only over particular ranges of magnitude and distance. This paper focuses on the development of a new GMPE for the Himalayan region considering both recorded and simulated earthquakes of moment magnitude 5.3-8.7. A finite-fault simulation model has been used for the ground motion simulation, with region-specific seismotectonic parameters taken from past earthquakes and source models. Simulated acceleration time histories and response spectra are compared with available records. In the absence of a large number of recorded data, simulations have been performed at unrecorded locations by adopting the Apparent Stations concept. Earthquakes recorded up to 2007 have been used to develop the new GMPE, and earthquake records after 2007 are used to validate it. The proposed GMPE matches well with recorded data and with other highly ranked GMPEs developed elsewhere that are applicable to the region. Comparison of response spectra also shows good agreement with recorded earthquake data. Quantitative analysis of the residuals of the proposed GMPE and of existing region-specific GMPEs in predicting the records of the 2011 Nepal-India border earthquake (Mw 5.7) shows that the proposed GMPE predicts peak ground acceleration and spectral acceleration over the entire distance and period range with lower residuals than the existing region-specific GMPEs.
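A minimal sketch of the general idea of fitting an attenuation relation and examining its residuals is given below. The functional form, coefficients and synthetic records are assumptions; the paper's GMPE is derived from recorded plus simulated motions using a richer functional form.

```python
# Hypothetical sketch: fit a simple attenuation form and inspect per-record residuals.
import numpy as np

# Synthetic "records": magnitude, hypocentral distance (km), observed ln(PGA in g)
M   = np.array([5.5, 6.0, 6.8, 7.2, 8.0])
R   = np.array([30., 80., 50., 120., 200.])
lnY = np.array([-3.1, -3.8, -2.9, -3.9, -3.5])

# Assumed form: ln(Y) = c1 + c2*M + c3*ln(R)  -> linear least squares
X = np.column_stack([np.ones_like(M), M, np.log(R)])
coef, *_ = np.linalg.lstsq(X, lnY, rcond=None)

residuals = lnY - X @ coef        # the kind of residuals used to rank candidate GMPEs
print(coef, residuals)
```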

Relevance: 80.00%

Abstract:

This paper highlights the seismic microzonation carried out for a nuclear power plant site. Nuclear power plants are among the most important and critical structures, designed to withstand all natural disasters. Seismic microzonation is the process of demarcating a region into individual areas having different levels of various seismic hazards. It helps identify regions of high seismic hazard, which is vital for engineering design and land-use planning. The main objective of this paper is to carry out the seismic microzonation of a nuclear power plant site on the east coast of South India, based on the spatial distribution of the hazard index value. The hazard index represents the consolidated effect of all major earthquake hazards and hazard-influencing parameters. The present work provides new directions for assessing the seismic hazards of new power plant sites in the country. The major seismic hazards considered for the evaluation of the hazard index are (1) the intensity of ground shaking at bedrock, (2) site amplification, (3) liquefaction potential and (4) the predominant frequency of the earthquake motion at the surface. The intensity of ground shaking in terms of peak horizontal acceleration (PHA) was estimated for the study area using both deterministic and probabilistic approaches with a logic tree methodology. The site characterization of the study area was carried out using the multichannel analysis of surface waves test and available borehole data. One-dimensional ground response analysis was carried out at major locations within the study area to evaluate PHA and spectral accelerations at the ground surface. Based on the standard penetration test data, both deterministic and probabilistic liquefaction hazard analyses were carried out for the entire study area. Finally, all the major earthquake hazards estimated above, together with other significant parameters representing the local geology, were integrated using the analytic hierarchy process, and a hazard index map for the study area was prepared. Maps showing the spatial variation of the seismic hazards (intensity of ground shaking, liquefaction potential and predominant frequency) and of the hazard index are presented in this work.
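The deterministic liquefaction screening mentioned above can be sketched with the Seed-Idriss simplified procedure, in which the factor of safety at a given depth is the ratio of cyclic resistance to cyclic stress. The numbers below are illustrative, and the cyclic resistance ratio is simply assumed rather than derived from an SPT correlation.

```python
# Hypothetical sketch of a deterministic liquefaction check (Seed-Idriss simplified
# procedure): FS = CRR / CSR at one depth. All input values are illustrative.
import numpy as np

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """CSR = 0.65 * a_max * (sigma_v / sigma_v') * r_d, with a simple r_d(z)."""
    r_d = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

csr = cyclic_stress_ratio(a_max_g=0.16, sigma_v=95.0, sigma_v_eff=60.0, depth_m=5.0)
crr = 0.22                               # assumed SPT-based cyclic resistance ratio
print("FS against liquefaction:", crr / csr)
```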

Relevance: 80.00%

Abstract:

Detection of the QRS complex serves as a first step in many automated ECG analysis techniques. Motivated by the strong similarities between the signal structure of an ECG signal and that of the integrated linear prediction residual (ILPR) of voiced speech, an algorithm proposed earlier for epoch detection from the ILPR is extended to the problem of QRS detection. The ECG signal is pre-processed by high-pass filtering to remove baseline wander and by half-wave rectification to reduce ambiguities. Initial estimates of the QRS locations are obtained iteratively using a non-linear temporal feature, the dynamic plosion index, which is suited to detecting transients in a signal. These estimates are then refined to obtain higher temporal accuracy. Unlike most high-performance algorithms, this technique does not use any threshold or differencing operation. The proposed algorithm is validated on the MIT-BIH database using the standard metrics, and its performance is found to be comparable to state-of-the-art algorithms despite its threshold independence and simple decision logic.
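The pre-processing stage alone can be sketched as below: high-pass filtering to remove baseline wander followed by half-wave rectification. The cut-off frequency and the toy signal are assumptions; the dynamic plosion index and the iterative refinement are not reproduced here.

```python
# Hypothetical sketch of the pre-processing stage only (high-pass + half-wave rectify).
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_ecg(x, fs=360.0, fc=0.5):
    """High-pass filter to remove baseline wander, then half-wave rectify."""
    b, a = butter(2, fc / (fs / 2.0), btype="highpass")
    filtered = filtfilt(b, a, x)                 # zero-phase filtering
    return np.maximum(filtered, 0.0)             # half-wave rectification

fs = 360.0                                        # MIT-BIH sampling rate
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)  # toy signal + drift
y = preprocess_ecg(ecg, fs)
```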

Relevance: 80.00%

Abstract:

This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and so fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value is contributed to by earthquakes of different magnitudes with varying probabilities. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated from the SPT and CPT data. The article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard and proceeding to the performance-based evaluation of the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and the results obtained from the two methods are compared. In an area of 220 km2 in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on 450 borehole records obtained in the study area. The CPT-based results match well with those obtained from the corresponding analysis with SPT data.
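The performance-based idea can be sketched as follows: instead of a single design acceleration and magnitude, the liquefaction probability is summed over the full deaggregated (PGA, magnitude) hazard to obtain an annual rate and hence a return period. The rates and the placeholder fragility below are illustrative stand-ins for PSHA deaggregation and an SPT/CPT-based liquefaction model.

```python
# Hypothetical sketch of a performance-based liquefaction return period.
import numpy as np

# Annual rate contributions from deaggregated (PGA, magnitude) bins (illustrative)
bins = [  # (pga_g, magnitude, annual_rate)
    (0.05, 5.5, 1e-2),
    (0.10, 6.0, 3e-3),
    (0.20, 6.5, 8e-4),
    (0.30, 7.0, 2e-4),
]

def p_liquefaction(pga, mag):
    """Assumed probability of liquefaction given shaking level (placeholder fragility)."""
    return 1.0 / (1.0 + np.exp(-(12.0 * pga + 0.8 * (mag - 6.0) - 2.5)))

annual_rate = sum(rate * p_liquefaction(pga, mag) for pga, mag, rate in bins)
print("Return period of liquefaction (years):", 1.0 / annual_rate)
```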

Relevance: 80.00%

Abstract:

In this study, an attempt has been made to prepare a seismic intensity map for south India considering the probable earthquakes in the region. Anbazhagan et al. (Nat Hazards 60:1325-1345, 2012) identified eight probable future earthquake zones in south India based on rupture-based seismic hazard analysis, and Anbazhagan et al. (Eng Geol 171:81-95, 2014) estimated the maximum future earthquake magnitude at these eight zones using regional rupture character. In this study, the whole of south India is divided into grids of size 1° × 1°, and the intensity at each grid point is calculated using the regional intensity model for the maximum earthquake magnitude at each of the eight zones. The intensity due to earthquakes at these zones is mapped, yielding eight seismic intensity maps. The final seismic intensity map of south India is obtained by taking the maximum intensity at each grid point over the estimated earthquakes. From the seismic intensity map, slight to heavy damage can be expected from the probable earthquake magnitudes, with heavy damage likely close to the probable earthquake zones.
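A minimal sketch of the mapping step is given below: for each grid point, the intensity from every probable source zone is computed with an intensity attenuation relation and the maximum is kept. The attenuation coefficients, zone list and grid extent are illustrative assumptions, not the regional model used in the study.

```python
# Hypothetical sketch: grid-wise maximum intensity over a set of probable source zones.
import numpy as np

zones = [  # (lon, lat, assumed maximum magnitude) - illustrative
    (76.5, 11.0, 6.8),
    (78.0, 13.5, 6.2),
    (80.2, 12.8, 6.0),
]

def intensity(mag, dist_km, a=1.5, b=1.0, c=1.1):
    """Assumed attenuation form: I = a + b*M - c*ln(R)."""
    return a + b * mag - c * np.log(max(dist_km, 1.0))

lons = np.arange(74.0, 81.0, 1.0)     # 1-degree grid (extent is an assumption)
lats = np.arange(8.0, 16.0, 1.0)
imap = np.zeros((lats.size, lons.size))
for i, lat in enumerate(lats):
    for j, lon in enumerate(lons):
        dists = [111.0 * np.hypot(lon - zlon, lat - zlat) for zlon, zlat, _ in zones]
        imap[i, j] = max(intensity(m, d) for (_, _, m), d in zip(zones, dists))
print(imap.round(1))
```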

Relevance: 80.00%

Abstract:

The standardization of the manufacture of stainless-steel endodontic instruments contributed to the development of new geometric features. Proposals emerged for changes in the design of the helical shaft, the cross-section, the tip, the taper and the diameter at the tip (D0). At the same time, the use of nickel-titanium alloys enabled the production of engine-driven instruments, which are widely used today. Every year the industry launches instruments with various modifications without, however, providing sufficient information on the clinical implications of these modifications. There is a growing interest in studying the different geometric features and in their precise metrology. Traditionally, the geometric features of endodontic instruments are measured visually by optical microscopy; this visual procedure, however, is slow and subjective. This work proposes a new method for the metrology of endodontic instruments based on scanning electron microscopy (SEM) and digital image analysis. The depth of field of the SEM makes it possible to image the entire relief of an endodontic instrument at a constant working distance. In addition, images obtained with the backscattered electron (BSE) detector have fewer artifacts and shadows, making image acquisition and analysis easier, and image analysis allows more efficient measurement, with greater speed and quality. A dedicated sample holder was adapted for imaging the endodontic instruments. It consists of a multiple electrical connector with twelve 4 mm screw terminals on an aluminum base covered with gold discs. The sockets of the connector (female terminals) have an appropriate diameter (2.5 mm) to hold the endodontic instruments, and the ordered positioning of the instruments in the connector allows automated image acquisition in the SEM. In the backscattered electron images, the gold targets produce better atomic-number contrast between the gold background and the instruments; in the sample holder developed here, the discs forming the gold background are in fact sputter-coater targets, commonly found in SEM laboratories. For each instrument, images of four to six adjacent fields at 100X magnification are automatically acquired to cover the entire length of the instrument at the required magnification and resolution (3.12 µm/pixel). The images are processed and analyzed with the Axiovision and KS400 software packages. First they are assembled into a single extended field for each instrument by a semi-automatic alignment procedure based on interaction with Axiovision. The image of each instrument then passes through an automated image analysis routine in KS400, following a standard sequence: pre-processing, segmentation, post-processing and measurement of the geometric features.
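A minimal sketch of such an automated measurement routine is given below, using scikit-image in place of the Axiovision/KS400 software actually used: pre-processing, segmentation by thresholding, simple post-processing and a geometric measurement. The synthetic image and the assumption that the instrument appears darker than the gold background in BSE contrast are illustrative; only the pixel size (3.12 µm/pixel) is taken from the text.

```python
# Hypothetical sketch of the analysis routine: pre-process, segment, post-process, measure.
import numpy as np
from skimage import filters, morphology, measure

UM_PER_PIXEL = 3.12                                  # pixel size reported in the text

# Synthetic stand-in for a BSE montage: bright gold background, darker instrument.
img = np.full((200, 600), 0.9)
img[80:120, 50:550] = 0.3                            # hypothetical instrument silhouette

smooth = filters.gaussian(img, sigma=1.0)            # pre-processing
thresh = filters.threshold_otsu(smooth)              # segmentation threshold
binary = smooth < thresh                             # instrument assumed darker than gold
binary = morphology.remove_small_objects(binary, min_size=500)   # post-processing

labels = measure.label(binary)
props = max(measure.regionprops(labels), key=lambda p: p.area)   # the instrument blob
print("Approximate length (um):", props.major_axis_length * UM_PER_PIXEL)
```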

Relevance: 80.00%

Abstract:

As a typical geological and environmental hazard, landslides have been causing increasing property and life losses. However, predicting their exact time of occurrence is very difficult, or even impossible, because of the complex nature of landslides, and it has been recognized that spending large sums to treat and prevent every landslide is not a practical solution. The research trend is therefore to study the spatial distribution of landslides and to predict potential hazard zones for a given region under given conditions. GIS (Geographical Information System) is a powerful tool for data management, spatial analysis based on suitable spatial models, and visualization, and GIS-based landslide hazard analysis and prediction is a new and promising field of study. This paper systematically studies the theory and methods of GIS-based landslide hazard analysis. On the basis of the project "Mountainous hazard study - landslides and debris flows" supported by the Chinese Academy of Sciences, and of earlier foundational work, this paper carries out model research, application, verification and analysis of the model results. The occurrence of a landslide has its triggering factors, and landslides have characteristic landforms and topographic features that can be identified from field work and remote sensing imagery (aerial photographs); historical landslide records are the key to predicting future behavior. These form the basis for constructing the landslide spatial database. Based on an extensive literature review, a conceptual framework of model integration and unit combination is formed, and two types of model are put forward: a CF multiple regression model and a coupled landslide stability and hydrological distribution model. The CF multiple regression model comes from statistics and probability theory and is driven by data; the data themselves contain the uncertain and random nature of landslide hazard, so this is a good way to study and understand the complex features and mechanics of landslides. The model integrates the landslide Certainty Factor (CF) with a multiple regression prediction model. The CF can easily handle the problems of quantifying data and combining heterogeneous data types, and the combination of CF values helps determine the key landslide triggering factors, which are then input into the multiple regression model. The CF regression model provides better predictions than traditional models. The landslide process can also be described by a suitable physical and mechanical model. The coupled landslide stability and hydrological distribution model is such a physically based, deterministic model that can readily be used for landslide hazard analysis and prediction. It couples the general limit equilibrium method with a DEM-based hydrological distribution model and is an effective approach both for predicting the occurrence of landslides under different precipitation conditions and for landslide mechanics research. It can not only explain pre-existing landslides but also predict potentially hazardous regions as environmental conditions change. Finally, this paper carries out landslide hazard analysis and prediction in the Xiaojiang watershed, Yunnan, including landslide hazard sensitivity analysis and a regression prediction model based on selected key factors, determining the relationship between landslide occurrence probability and the triggering factors. The results of the hazard analysis and prediction obtained with the coupled model are discussed in detail.

On the basis of model verification and validation, the modeling results show high accuracy and good potential for application in landslide research.
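A minimal sketch of a coupled stability/hydrology cell model in the spirit of the deterministic approach described above is given below: an infinite-slope factor of safety in which a relative wetness term controls the pore pressure. The parameter values are illustrative assumptions, and the DEM-based hydrological routing is not reproduced.

```python
# Hypothetical sketch: infinite-slope factor of safety with a wetness term (per DEM cell).
import numpy as np

def factor_of_safety(slope_rad, wetness, c=5.0e3, phi=np.radians(30.0),
                     gamma_s=18.0e3, gamma_w=9.81e3, depth=2.0):
    """Infinite-slope FS; `wetness` is the fraction of the soil depth that is saturated."""
    m = np.clip(wetness, 0.0, 1.0)
    num = c + (gamma_s - m * gamma_w) * depth * np.cos(slope_rad) ** 2 * np.tan(phi)
    den = gamma_s * depth * np.cos(slope_rad) * np.sin(slope_rad)
    return num / den

# A steeper, wetter cell is predicted to be less stable (FS drops below 1):
print(factor_of_safety(np.radians(20), 0.2), factor_of_safety(np.radians(35), 0.9))
```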

Relevance: 80.00%

Abstract:

This dissertation, which covers most of the author's post-doctoral research work during 2001-2002, addresses the large-scale distribution of continental earthquakes in mainland China, the mechanism and statistical features of grouped strong earthquakes in relation to tidal triggering, some results in earthquake prediction obtained with correlation analysis methods, and the lessons from the two strong continental earthquakes in South Asia in 2001. Mainland China is the only continental sub-plate that is compressed by collision boundaries on two sides, within which earthquakes are dispersed and distributed along seismic belts of different widths. The capability of the continental block boundaries to control strong earthquakes and seismic hazards is calculated and analyzed in this dissertation. By mapping the distribution of the 31,282 earthquakes of ML >= 2.0, it was found that the depth of continental earthquakes depends on the tectonic zoning: events on the boundaries of relatively intact blocks are deep, while those on newly developed ruptures are shallow. The average depth of earthquakes in the west of China is about 5 km greater than that in the east, and the western and southwestern rims of the Tarim Basin generate the deepest earthquakes in mainland China. The statistical results from the correlation between grouped M7 earthquakes and tidal stress show that the strong events were modulated by tidal stress during active periods. Taking the Taiwan area as an example, the dependence of moderate events on the moon phase angle (D) is analyzed; the number of earthquakes in Taiwan when D is 50°, 50° + 90° and 50° + 180° is more than two standard deviations above the average frequency per degree, corresponding to the 4th, 12th and 19th solar days after the new moon. The probability of an earthquake striking the densely populated island of Taiwan on the 4th solar day is about four times that on other solar days. Regarding the practice of earthquake prediction, the temporal correlation of earthquakes in the Xinjiang, Qinghai-Tibet, west Yunnan and North China areas with those in their adjacent areas was calculated and analyzed, and it was predicted at the end of 2000 that 2001 to 2003 would be a special interval within which moderate to strong earthquakes would be more active in the west of China; what happened in 2001 partly validated this prediction. Within 10 months there were two great continental earthquakes in south Asia: the M7.8 event in India on 26 January and the M8.1 event in China on 14 November 2001, the largest earthquakes of the past 50 years for India and China respectively. There is no previous record of two great earthquakes in Asia within such a short interval. The two events prompt the following reflections: the influence of the flawed deployment of seismic stations on the precise location and focal mechanism determination of strong earthquakes must be confronted; it is very important to introduce comparative seismology into seismic hazard analysis and earthquake prediction research; improvements or changes in the real-time prediction of strong earthquakes from precursors are urgently needed; and methods for protecting the environment and historical relics in earthquake-prone areas need to be updated.

Relevance: 80.00%

Abstract:

The work reported here lies in the area of overlap between artificial intelligence and software engineering. As research in artificial intelligence, it is a step towards a model of problem solving in the domain of programming. In particular, this work focuses on the routine aspects of programming which involve the application of previous experience with similar programs; I call this programming by inspection. Programming is viewed here as a kind of engineering activity. Analysis and synthesis by inspection are a prominent part of expert problem solving in many other engineering disciplines, such as electrical and mechanical engineering, and the notion of inspection methods in programming developed in this work is motivated by similar notions in those areas. This work is also motivated by current practical concerns in the area of software engineering. The inadequacy of current programming technology is universally recognized, and part of the solution to this problem will be to increase the level of automation in programming. I believe that the next major step in the evolution of more automated programming will be interactive systems which provide a mixture of partially automated program analysis, synthesis and verification. One such system being developed at MIT, called the programmer's apprentice, is the immediate intended application of this work. This report concentrates on the knowledge base of the programmer's apprentice, which takes the form of a taxonomy of commonly used algorithms and data structures. To the extent that a programmer is able to construct and manipulate programs in terms of the forms in such a taxonomy, he may relieve himself of many details and generally raise the conceptual level of his interaction with the system, as compared with present-day programming environments. Also, since it is practical to expend a great deal of effort pre-analyzing the entries in a library, the difficulty of verifying the correctness of programs constructed this way is correspondingly reduced. The feasibility of this approach is demonstrated by the design of an initial library of common techniques for manipulating symbolic data. This document also reports on the further development of a formalism called the plan calculus for specifying computations in a programming-language-independent manner. This formalism combines both data and control abstraction in a uniform framework that has facilities for representing multiple points of view and side effects.

Relevance: 80.00%

Abstract:

Ellis, D. I., Broadhurst, D., Kell, D. B., Rowland, J. J., Goodacre, R. (2002). Rapid and quantitative detection of the microbial spoilage of meat by Fourier transform infrared spectroscopy and machine learning. Applied and Environmental Microbiology, 68(6), 2822-2828. Sponsorship: BBSRC

Relevance: 80.00%

Abstract:

This paper presents innovative work in the development of policy-based autonomic computing. The core of the work is a powerful and flexible policy-expression language AGILE, which facilitates run-time adaptable policy configuration of autonomic systems. AGILE also serves as an integrating platform for other self-management technologies including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies to each perform various aspects of self-management within a single application. The various technologies are implemented as object components. Self-management behaviour is specified using the policy language semantics to bind the various components together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. Additional benefits include the standardisation of the application programmer interface, terminology and semantics, and only a single point of embedding is required.
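A minimal sketch of the policy-driven composition idea is given below, written in plain Python rather than in the AGILE language itself: self-management technologies are wrapped as components, and a run-time-editable policy decides which component handles a monitored metric. All names, thresholds and the policy structure are illustrative assumptions.

```python
# Hypothetical sketch of policy-driven binding of self-management components.
from typing import Callable, Dict

components: Dict[str, Callable[[float], str]] = {
    "trend_analysis":   lambda load: "scale_out" if load > 0.8 else "hold",
    "utility_function": lambda load: "scale_out" if load * 0.9 > 0.7 else "hold",
}

# A policy chosen (and re-configurable) at run time rather than compiled in.
policy = {"metric": "cpu_load", "handler": "trend_analysis"}

def self_manage(metrics: Dict[str, float]) -> str:
    handler = components[policy["handler"]]          # dynamic binding via the policy
    return handler(metrics[policy["metric"]])

print(self_manage({"cpu_load": 0.85}))               # decision from the bound component
policy["handler"] = "utility_function"               # run-time re-configuration
print(self_manage({"cpu_load": 0.85}))
```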

Relevance: 80.00%

Abstract:

This paper describes an autonomics development tool which serves as both a powerful and flexible policy-expression language and a policy-based framework that supports the integration and dynamic composition of several autonomic computing techniques including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies to each perform various aspects of self-management within a single application. Self-management behaviour is specified using the policy language semantics to bind the various technologies together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. The policy language and implementation library have integrated support for self-stabilising behaviour, enabling oscillation and other forms of instability to be handled at the policy level with very little effort on the part of the application developer. Example applications are presented to illustrate the integration of different autonomics techniques, and the achievement of dynamic composition.
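The self-stabilising idea can be sketched as below: separate scale-out and scale-in thresholds (hysteresis) plus a minimum dwell time, expressed at the policy level, prevent a metric hovering near a threshold from causing oscillation. The thresholds, dwell time and class structure are illustrative assumptions, not AGILE's actual constructs.

```python
# Hypothetical sketch of policy-level self-stabilisation via hysteresis and dwell time.
import time

class StabilisedPolicy:
    def __init__(self, upper=0.8, lower=0.5, dwell_s=60.0):
        self.upper, self.lower, self.dwell_s = upper, lower, dwell_s
        self.state = "normal"
        self.last_change = float("-inf")

    def decide(self, load, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_change < self.dwell_s:        # dwell: ignore flapping input
            return self.state
        if self.state == "normal" and load > self.upper:
            self.state, self.last_change = "scaled_out", now
        elif self.state == "scaled_out" and load < self.lower:
            self.state, self.last_change = "normal", now
        return self.state

p = StabilisedPolicy()
for t, load in enumerate([0.85, 0.78, 0.82, 0.79]):      # load hovers around 0.8
    print(p.decide(load, now=t * 61.0))                  # stays scaled_out: no oscillation
```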