36 results for multi-criteria analysis

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Different seismic hazard components pertaining to Bangalore city, namely soil overburden thickness, effective shear-wave velocity, factor of safety against liquefaction potential, peak ground acceleration at the seismic bedrock, site response in terms of amplification factor, and predominant frequency, have been individually evaluated. The overburden thickness distribution, predominantly in the range of 5-10 m in the city, has been estimated through a sub-surface model from geotechnical bore-log data. The effective shear-wave velocity distribution, established through a Multi-channel Analysis of Surface Waves (MASW) survey and subsequent data interpretation through dispersion analysis, exhibits site class D (180-360 m/s), site class C (360-760 m/s), and site class B (760-1500 m/s) in compliance with the National Earthquake Hazards Reduction Program (NEHRP) nomenclature. The peak ground acceleration has been estimated through a deterministic approach, based on a maximum credible earthquake of Mw = 5.1 assumed to nucleate from the closest active seismic source (the Mandya-Channapatna-Bangalore Lineament). The 1-D site response factor, computed at each borehole through geotechnical analysis across the study region, ranges from an amplification of about one to as high as four times. Correspondingly, the predominant frequency estimated from the Fourier spectrum lies mostly in the range of 3.5-5.0 Hz. The soil liquefaction hazard has been assessed in terms of the factor of safety against liquefaction potential, using standard penetration test data and the underlying soil properties, which indicate that 90% of the study region is non-liquefiable. The spatial distributions of the different hazard entities are placed on a GIS platform and subsequently integrated through the analytic hierarchy process.
The resulting deterministic hazard map shows high hazard coverage in the western areas. The microzonation thus achieved is envisaged as a first-cut assessment of the site-specific hazard, laying out a framework for higher-order seismic microzonation, as well as a useful decision-support tool in overall land-use planning and hazard management. (C) 2010 Elsevier Ltd. All rights reserved.
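The AHP integration step described above can be illustrated with a minimal sketch using the geometric-mean approximation of AHP priority weights. The pairwise judgments, the three hazard themes chosen, and the per-cell layer values below are hypothetical, not those of the study:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights by the geometric-mean method:
    the normalized geometric mean of each row of the pairwise matrix."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparisons (Saaty's 1-9 scale) for three themes:
# PGA vs. site amplification vs. liquefaction susceptibility.
pairwise = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
w = ahp_weights(pairwise)

def hazard_index(layers, weights):
    """Weighted linear overlay of normalized (0-1) hazard layers at one cell."""
    return sum(wi * xi for wi, xi in zip(weights, layers))

cell = hazard_index([0.8, 0.6, 0.2], w)
```

In a GIS workflow, the overlay would be applied cell by cell across the rasterized hazard layers to produce the integrated hazard map.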

Relevance:

90.00%

Publisher:

Abstract:

This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structure- or foundation-related problems, but seismic microzonation is now generally recognized as an important component of urban planning and disaster management. It should therefore evaluate all possible earthquake-induced hazards and represent them through spatial distributions. This paper presents a new methodology for seismic microzonation based on the location of the study area and the possible associated hazards. The method consists of seven important steps with a defined output for each step, and the steps are linked with one another; addressing a single step and its result, as is widely practiced, does not by itself constitute seismic microzonation. The paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated considering the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated through site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at through a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and ranks according to a consensus opinion about their relative significance to the seismic hazard.
The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but the study shows that the hazard index values are governed by site-specific geotechnical parameters.

Relevance:

90.00%

Publisher:

Abstract:

A supply chain ecosystem consists of the elements of the supply chain and the entities that influence the goods, information, and financial flows through the supply chain. These influences come through government regulations; human, financial, and natural resources; logistics infrastructure and management; and so on, and thus affect supply chain performance. Similarly, all the ecosystem elements also contribute to the risk. The aim of this paper is to identify both performance-based and risk-based decision criteria that are important and critical to the supply chain. A two-step approach using fuzzy AHP and the fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is proposed for multi-criteria decision-making and illustrated with a numerical example. The first step performs the selection without considering risks; in the next step, suppliers are ranked according to their risk profiles, and the two ranks are then consolidated into one. The method is also extended to multi-tier supplier selection. In short, this paper presents a method for the design of a resilient supply chain.
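The TOPSIS half of the two-step procedure can be sketched with a crisp (non-fuzzy) variant of the method; the supplier scores, weights, and criteria below are hypothetical, not the paper's numerical example:

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by closeness to the ideal solution.

    matrix[i][j] is supplier i's score on criterion j; benefit[j] is True
    when larger is better (e.g. quality), False when smaller is better
    (e.g. cost or risk exposure)."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # closeness coefficient: distance from the anti-ideal, normalized
    scores = [math.dist(row, worst) / (math.dist(row, ideal) + math.dist(row, worst))
              for row in v]
    return sorted(range(m), key=lambda i: -scores[i]), scores

# Three hypothetical suppliers scored on (cost: lower better, quality: higher better).
order, cc = topsis_rank([[2, 9], [5, 5], [9, 2]], [0.5, 0.5], [False, True])
```

The fuzzy version replaces the crisp scores with triangular fuzzy numbers, but the normalize-weight-distance-rank pipeline is the same.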

Relevance:

90.00%

Publisher:

Abstract:

Compliant mechanisms are elastic continua used to transmit or transform force and motion mechanically. The topology optimization methods developed for compliant mechanisms also give the shape for a chosen parameterization of the design domain with a fixed mesh. However, in these methods, the shapes of the flexible segments in the resulting optimal solutions are restricted either by the type or by the resolution of the design parameterization. This limitation is overcome in this paper by focusing on optimizing the skeletal shape of the compliant segments in a given topology. This is accomplished by identifying such segments in the topology and representing them using Bezier curves. The vertices of the Bezier control polygon are used to parameterize the shape-design space. Uniform parameter steps of the Bezier curves naturally enable adaptive finite element discretization of the segments as their shapes change. Practical constraints, such as avoiding intersections with other segments and self-intersections, and restrictions on the available space and material, are incorporated into the formulation. A multi-criteria function from our prior work is used as the objective. Analytical sensitivity analysis for the objective and constraints is presented and used in the numerical optimization. Examples are included to illustrate the shape optimization method.
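The Bezier representation of a segment's skeletal shape can be sketched as follows. The control polygon here is hypothetical; evaluating the curve at uniform parameter steps gives the discretization nodes mentioned above:

```python
def bezier_point(control, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm:
    repeated linear interpolation of the control polygon."""
    pts = [tuple(p) for p in control]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical control polygon for one compliant segment's skeletal shape.
poly = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

# Uniform parameter steps yield the nodes for adaptive FE discretization.
nodes = [bezier_point(poly, k / 10) for k in range(11)]
```

Moving the control-polygon vertices (the design variables) reshapes the whole segment smoothly, which is what makes this parameterization convenient for gradient-based shape optimization.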

Relevance:

80.00%

Publisher:

Abstract:

Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach, and the zero-crossing (ZC) approach, are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for instantaneous frequency (IF) estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
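A minimal zero-crossing frequency estimator conveys the idea behind the ZC approach for a clean narrow-band frame; this sketch assumes uniform sampling and a single tone, not the paper's non-uniform-sample methods:

```python
import math

def zc_frequency(x, fs):
    """Estimate the frequency of a real narrow-band frame from its
    zero-crossing count; a sinusoid crosses zero twice per cycle."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    duration = (len(x) - 1) / fs
    return crossings / (2.0 * duration)

# 440 Hz tone sampled at 8 kHz for 100 ms.
fs = 8000.0
tone = [math.sin(2 * math.pi * 440.0 * n / fs) for n in range(800)]
f_hat = zc_frequency(tone, fs)
```

For an AM-FM signal, the same count applied to short sliding frames tracks the instantaneous frequency, at the cost of time resolution.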

Relevance:

80.00%

Publisher:

Abstract:

Ductility-based design of reinforced concrete structures implicitly assumes a certain level of damage under the action of a design-basis earthquake. The damage undergone by a structure needs to be quantified in order to assess the post-seismic reparability and functionality of the structure. The paper presents an analytical method for the quantification and location of seismic damage through system identification methods. Soft-first-storey buildings are among the major casualties in any earthquake, and hence the example structure is a soft or weak first-storey building, whose seismic response and temporal variation of damage are computed using a non-linear dynamic analysis program (IDARC) and compared with a normal structure. A time-period-based damage identification model is used and suitably calibrated against classic damage models. The regenerated stiffness of the three-degree-of-freedom model (for the three-storey frame) is used to locate the damage, both online and after the seismic event. Multi-resolution analysis using wavelets is also used for localized damage identification in the soft-storey columns.
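A stiffness-degradation damage index of the kind used with regenerated stiffness can be sketched as follows; the storey stiffness values are hypothetical, and the simple index below is an illustration, not the paper's calibrated model. Since a mode's natural period scales as T proportional to 1/sqrt(k), the same index can equivalently be written from period elongation:

```python
def stiffness_damage_index(k_initial, k_damaged):
    """Per-storey damage index: 0 = undamaged, 1 = complete stiffness loss.
    For a single mode this equals 1 - (T_initial / T_damaged)**2."""
    return 1.0 - k_damaged / k_initial

# Hypothetical storey stiffnesses (kN/mm) before and after the event; the
# soft first storey degrades the most, localizing the damage there.
before = [40.0, 60.0, 60.0]
after = [22.0, 54.0, 57.0]
damage = [stiffness_damage_index(b, a) for b, a in zip(before, after)]
```

Tracking this index over time windows during the response gives the temporal variation of damage; comparing it across storeys localizes the damage.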

Relevance:

80.00%

Publisher:

Abstract:

Learning to rank from relevance judgments is an active research area. Itemwise score regression, pairwise preference satisfaction, and listwise structured learning are the major techniques in use. Listwise structured learning has recently been applied to optimize important non-decomposable ranking criteria such as AUC (area under the ROC curve) and MAP (mean average precision). We propose new, almost-linear-time algorithms to optimize two other criteria widely used to evaluate search systems, MRR (mean reciprocal rank) and NDCG (normalized discounted cumulative gain), in the max-margin structured learning framework. We also demonstrate that, for different ranking criteria, one may need to use different feature maps. Search applications should not be optimized in favor of a single criterion, because they need to cater to a variety of queries. For example, MRR is best for navigational queries, while NDCG is best for informational queries. A key contribution of this paper is to fold multiple ranking loss functions into a multi-criteria max-margin optimization. The result is a single, robust ranking model that is close to the best accuracy of learners trained on individual criteria. In fact, experiments over the popular LETOR and TREC data sets show that, contrary to conventional wisdom, a test criterion is often not best served by training with the same individual criterion.
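The two evaluation criteria themselves are easy to compute directly; the sketch below implements their standard definitions (this is the metric computation only, not the paper's learning algorithm):

```python
import math

def mrr(first_relevant_ranks):
    """Mean reciprocal rank: input is, per query, the 1-based rank at which
    the first relevant result appears."""
    return sum(1.0 / r for r in first_relevant_ranks) / len(first_relevant_ranks)

def ndcg(gains, k=None):
    """NDCG@k for one query, given graded relevance labels in ranked order,
    using the common (2**rel - 1) gain and log2 position discount."""
    k = k or len(gains)
    def dcg(g):
        return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(g[:k]))
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0
```

MRR rewards only the first relevant hit (good for navigational queries), while NDCG credits the whole graded list (good for informational queries), which is why a single model optimized for one can underserve the other.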

Relevance:

80.00%

Publisher:

Abstract:

The efficiency of track foundation material gradually decreases due to insufficient lateral confinement, ballast fouling, and loss of shear strength of the subsurface soil under cyclic loading. This paper presents a characterization of the rail track subsurface, using seismic survey, to identify ballast fouling and the shear wave velocity of the subsurface layers. The seismic surface wave method of multi-channel analysis of surface waves (MASW) has been carried out on a model track and a field track to determine the shear wave velocity of clean and fouled ballast and of the track subsurface. The shear wave velocity (SWV) of fouled ballast increases with increasing fouling percentage, reaches a maximum value, and then decreases. This behavior is similar to a typical soil compaction curve and is used to define the optimum and critical fouling percentages (OFP and CFP). A critical fouling percentage of 15% is observed for coal-fouled ballast and 25% for clayey-sand-fouled ballast; coal-fouled ballast reaches the OFP and CFP before clayey-sand-fouled ballast. Fouling reduces the voids in the ballast and thereby decreases the drainage. A combined plot of permeability and SWV against fouling percentage shows that beyond the critical fouling point, the drainage of fouled ballast falls below the acceptable limit. Shear wave velocities were measured at selected locations in the Wollongong field track through a similar seismic survey. In-situ samples were collected and the degrees of fouling measured. Field SWV values are higher than the model track SWV values for the same degree of fouling, which might be due to sleeper confinement. This article also reviews the ballast gradations widely followed in different countries and compares the Indian ballast gradation with international gradation standards. Indian ballast contains coarser particles than that of other countries.
The upper limit of the Indian gradation curve matches the lower limit of the American and Australian ballast gradation curves. The gradation followed by Indian Railways is poorly graded and hence more favorable for drainage. Indian ballast engineering needs extensive research to improve present track conditions.

Relevance:

80.00%

Publisher:

Abstract:

The tonic is a fundamental concept in Indian art music. It is the base pitch which an artist chooses in order to construct the melodies during a rāga rendition, and all accompanying instruments are tuned to the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis, and rāga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts, such as the presence or absence of additional metadata, the quality of the audio data, the duration of the audio data, the music tradition (Hindustani/Carnatic), and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average) and are robust across the aforementioned contexts compared with the approaches based on expert knowledge. In addition, we show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.

Relevance:

80.00%

Publisher:

Abstract:

This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and thus fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated from the SPT and CPT data. The article explains the performance-based methodology for liquefaction analysis, from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard through the performance-based evaluation of the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and the results of the two methods are compared. In an area of 220 km2 in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multi-channel analysis of surface waves surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on 450 borehole records obtained in the study area. The CPT results match well with those obtained from a similar analysis with SPT data.
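The factor-of-safety computation at the core of both the SPT- and CPT-based analyses can be sketched with the Seed-Idriss simplified cyclic stress ratio; all layer values below (stresses, PGA, CRR, stress-reduction factor) are hypothetical illustrations:

```python
def cyclic_stress_ratio(pga_g, sigma_v, sigma_v_eff, rd=1.0):
    """Seed-Idriss simplified cyclic stress ratio induced by shaking:
    CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d."""
    return 0.65 * pga_g * (sigma_v / sigma_v_eff) * rd

def factor_of_safety(crr, csr, msf=1.0):
    """FS against liquefaction; FS < 1 flags a potentially liquefiable layer.
    msf is the magnitude scaling factor."""
    return crr * msf / csr

# Hypothetical layer at ~6 m depth: total and effective vertical stresses in
# kPa, surface PGA of 0.15 g, and CRR of 0.22 inferred from SPT blow counts.
csr = cyclic_stress_ratio(0.15, 110.0, 75.0, rd=0.95)
fs = factor_of_safety(0.22, csr)
```

The performance-based extension repeats this computation over the full PSHA hazard curve (all acceleration-magnitude pairs with their probabilities) to obtain a return period of liquefaction rather than a single FS.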

Relevance:

80.00%

Publisher:

Abstract:

Bush frogs of the genus Raorchestes are distributed mainly in the Western Ghats Escarpment of Peninsular India. The inventory of species in this genus is incomplete, and there is ambiguity in the systematic status of species recognized by morphological criteria. To address the dual problem of taxon sampling and systematic uncertainty in bush frogs, we used a large-scale spatial sampling design explicitly incorporating the geographic and ecological heterogeneity of the Western Ghats. We then used a hierarchical multi-criteria approach, combining mitochondrial phylogeny, genetic distance, geographic range, morphology, and advertisement call, to delimit bush frog lineages. Our analyses revealed the existence of a large number of new lineages with varying levels of genetic divergence. Here, we provide diagnoses and descriptions for nine lineages that exhibit divergence across multiple axes. The discovery of new lineages exhibiting high divergence across wide ranges of elevation and across the major massifs highlights the large gaps in historical sampling. These discoveries underscore the significance of the inadequate knowledge of species distributions, namely the "Wallacean shortfall", in addressing the problem of taxon sampling and unknown diversity in tropical hotspots. A biogeographically informed sampling and analytical approach was critical in detecting and delineating lineages in a consistent manner across the genus. Through increased taxon sampling, we were also able to discern a number of well-supported sub-clades that were either unresolved or absent in earlier phylogenetic reconstructions, and to identify a number of shallowly divergent lineages which require further examination to assess their taxonomic status.

Relevance:

80.00%

Publisher:

Abstract:

Multi-temporal land-use information was derived using two decades of remote sensing data and simulated for 2012 and 2020 with Cellular Automata (CA), considering scenarios, change probabilities (through a Markov chain), and Multi-Criteria Evaluation (MCE). Agents and constraints were considered for modeling the urbanization process. Agents were normalized through fuzzification, and priority weights were assigned through Analytic Hierarchy Process (AHP) pairwise comparison for each factor (in the MCE) to derive behavior-oriented transition rules for each land-use class. The simulation shows good agreement with the classified data. Fuzzy logic and AHP helped in clearly analyzing the effects of the agents of growth, and CA-Markov proved a powerful modeling tool, helping to capture and visualize the spatio-temporal patterns of urbanization. This provides a rapid land evaluation framework with essential insights into the urban trajectory for effective, sustainable city planning.
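The Markov-chain step, deriving per-class change probabilities from two classified dates, can be sketched as follows; the toy rasters and class labels are hypothetical:

```python
from collections import Counter

def transition_matrix(lu_t1, lu_t2, classes):
    """First-order Markov transition probabilities between two classified
    land-use rasters, given as equal-length flattened lists of class labels."""
    counts = Counter(zip(lu_t1, lu_t2))
    matrix = {}
    for a in classes:
        total = sum(counts[(a, b)] for b in classes)
        matrix[a] = {b: counts[(a, b)] / total if total else 0.0 for b in classes}
    return matrix

# Toy rasters for two dates: U = urban, V = vegetation, W = water.
t1 = list("UUVVVVWW")
t2 = list("UUUVVVWW")
p = transition_matrix(t1, t2, "UVW")
```

In CA-Markov, these probabilities drive the expected per-class areas, while the MCE suitability surfaces and neighborhood rules decide where on the map the transitions are allocated.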

Relevance:

80.00%

Publisher:

Abstract:

We performed Gaussian network model (GNM) based normal mode analysis of the 3-dimensional structures of multiple active and inactive forms of protein kinases. In 14 different kinases, more residues (1095) show higher structural fluctuations in inactive states than in active states (525), suggesting that, in general, the mobility of inactive states is higher than that of active states. This statistically significant difference is consistent with higher crystallographic B-factors and conformational energies for inactive than for active states, suggesting lower stability of the inactive forms. Only a small number of inactive conformations with the DFG motif in the "in" state were found to have fluctuation magnitudes comparable to the active conformation. Our study therefore reports, for the first time, intrinsically higher structural fluctuations for almost all inactive conformations compared with the active forms. Regions with higher fluctuations in the inactive states are often localized to the αC-helix, αG-helix, and activation loop, which are involved in regulation and/or in structural transitions between active and inactive states. Further analysis of 476 kinase structures involved in interactions with another domain or protein showed that many of the regions with higher inactive-state fluctuation correspond to contact interfaces. We also performed extensive GNM analysis of (i) the insulin receptor kinase bound to another protein and (ii) holo and apo forms of active and inactive conformations, followed by multi-factor analysis of variance. We conclude that binding of small molecules or other domains/proteins reduces the extent of fluctuation, irrespective of active or inactive form. Finally, we show that the perceived fluctuations serve as a useful input to predict the functional state of a kinase.
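The GNM computation begins from a Kirchhoff (connectivity) matrix built from C-alpha coordinates; mean-square residue fluctuations are then proportional to the diagonal of its pseudo-inverse over the nonzero modes. A minimal construction of the matrix (the cutoff and toy coordinates below are illustrative):

```python
import math

def kirchhoff_matrix(coords, cutoff=7.0):
    """GNM Kirchhoff matrix: -1 for residue pairs whose C-alpha atoms lie
    within the distance cutoff (angstroms), node degree on the diagonal.
    Rows sum to zero by construction."""
    n = len(coords)
    g = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                g[i][j] = g[j][i] = -1.0
                g[i][i] += 1.0
                g[j][j] += 1.0
    return g

# Toy three-residue chain with ~3.8-angstrom consecutive C-alpha spacing;
# only adjacent residues fall within the 7-angstrom cutoff.
gamma = kirchhoff_matrix([(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)])
```

For a real kinase structure one would read the C-alpha coordinates from a PDB file and diagonalize the matrix; comparing the resulting fluctuation profiles between active and inactive conformations is the comparison the abstract describes.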

Relevance:

40.00%

Publisher:

Abstract:

The results of applying multi-time-scale analysis, using the singular perturbation technique, to the long-time simulation of power system problems are presented. A linear system represented in state-space form can be decoupled into slow and fast subsystems. These subsystems can be simulated with different time steps and then recombined to obtain the system response. Simulation results with a two-time-scale analysis of a power system show a large saving in computational cost.
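Once the system is decoupled, the slow and fast modes can be integrated with different time steps and recombined; a scalar sketch with hypothetical eigenvalues shows where the saving comes from:

```python
import math

def euler(lmbda, x0, t_end, dt):
    """Explicit-Euler trajectory of the scalar mode x' = lambda * x."""
    steps = round(t_end / dt)
    x = x0
    for _ in range(steps):
        x += dt * lmbda * x
    return x

# Hypothetical decoupled modes: slow eigenvalue -0.1, fast eigenvalue -50.
# Explicit Euler needs dt << 2/|lambda| for stability, so the fast mode
# forces a tiny step, while the slow mode tolerates a step 1000x larger.
slow = euler(-0.1, 1.0, 10.0, 0.1)     # 100 steps over the 10 s window
fast = euler(-50.0, 1.0, 10.0, 1e-4)   # 100000 steps over the same window
response = slow + fast                  # recombined system response
```

Simulating the full coupled system would force the tiny step on every state; the decoupled scheme confines it to the (typically small) fast subsystem, which is the computational saving the abstract reports.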

Relevance:

40.00%

Publisher:

Abstract:

Relay selection for cooperative communications promises significant performance improvements and is, therefore, attracting considerable attention. While several criteria have been proposed for selecting one or more relays, distributed mechanisms that perform the selection have received relatively less attention. In this paper, we develop a novel yet simple asymptotic analysis of a splitting-based multiple access selection algorithm to find the single best relay. The analysis leads to simpler, alternate expressions for the average number of slots required to find the best user. By introducing a new 'contention load' parameter, the analysis shows that the parameter settings used in the existing literature can be improved upon. New and simple bounds are also derived. Furthermore, we propose a new algorithm that addresses the general problem of selecting the best Q >= 1 relays, and we analyze and optimize it. Even for a large number of relays, the scalable algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. We also propose a new and simple scheme for the practically relevant case of discrete metrics. Altogether, our results develop a unifying perspective on the general problem of distributed selection in cooperative systems and several other multi-node systems.
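The flavor of splitting-based selection can be conveyed with a simplified bisection-style scheme (an illustrative sketch, not the paper's exact threshold-update rule): each slot, relays whose metric lies in the current window transmit, and the base station's idle/single/collision feedback narrows the window until exactly one relay remains.

```python
def splitting_select(metrics):
    """Isolate the single best relay by bisecting on the metric value.

    metrics[i] is relay i's local channel metric, assumed distinct and in
    (0, 1]. Each slot, relays with metric in (mid, hi] transmit; collision
    means the best lies above mid, idle means it lies at or below mid.
    Returns (index of best relay, number of slots used)."""
    lo, hi = 0.0, 1.0
    slots = 0
    while True:
        slots += 1
        mid = (lo + hi) / 2.0
        contenders = [i for i, m in enumerate(metrics) if mid < m <= hi]
        if len(contenders) == 1:   # single transmission: best relay resolved
            return contenders[0], slots
        if contenders:             # collision: best metric is above mid
            lo = mid
        else:                      # idle slot: best metric is at or below mid
            hi = mid

best, used = splitting_select([0.12, 0.77, 0.54, 0.31])
```

The schemes analyzed in the paper refine exactly this feedback loop, choosing the thresholds to minimize the expected number of slots; the contention-load parameter governs how many relays are expected to fall in each probed interval.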