30 results for Multi-objective analysis


Relevance: 100.00%

Abstract:

A new multi-sensor image registration technique is proposed, based on detecting feature corner points using a modified Harris Corner Detector (HCD). These feature points are matched using multi-objective optimization (a distance condition and an angle criterion) based on Discrete Particle Swarm Optimization (DPSO). The optimization is made more efficient by considering both the distance and angle criteria through multi-objective switching in the fitness function. It selects three corresponding corner points detected in the sensed and base images, from which an affine transformation is estimated and the sensed image is aligned with the base image. The results show that the new approach offers a new dimension in solving multi-sensor image registration problems. From the obtained results, the performance of the image registration is evaluated, and it is concluded that the proposed approach is efficient.
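
As an illustration of the alignment step described above, the sketch below estimates a 2-D affine transform from three matched corner pairs, which could then be used to warp the sensed image onto the base image. It is a minimal sketch, not the authors' implementation; the coordinates and the NumPy/OpenCV usage are assumptions.

```python
import numpy as np

def estimate_affine(base_pts, sensed_pts):
    """Estimate the 2x3 affine transform mapping sensed points onto base points.

    base_pts, sensed_pts: (3, 2) arrays of matched corner coordinates.
    """
    base_pts = np.asarray(base_pts, dtype=float)
    sensed_pts = np.asarray(sensed_pts, dtype=float)
    # Linear system: [x, y, 1] @ X = [x', y'] for each of the three pairs.
    A = np.hstack([sensed_pts, np.ones((3, 1))])
    coeffs, *_ = np.linalg.lstsq(A, base_pts, rcond=None)
    return coeffs.T                      # 2x3 matrix usable with cv2.warpAffine

# Hypothetical matched corners (pixel coordinates) from the DPSO matching step.
base = [(10.0, 12.0), (200.0, 15.0), (30.0, 180.0)]
sensed = [(14.2, 10.1), (204.8, 18.9), (33.5, 183.0)]
print(estimate_affine(base, sensed))
```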

Relevance: 100.00%

Abstract:

This paper investigates a novel approach for point matching of multi-sensor satellite imagery. The feature (corner) points extracted using an improved version of the Harris Corner Detector (HCD) are matched using multi-objective optimization based on a Genetic Algorithm (GA). An objective-switching approach to optimization that incorporates an angle criterion, a distance condition and a point matching condition in the multi-objective fitness function is applied to match corresponding corner points between the reference image and the sensed image. The matched points obtained in this way are used to align the sensed image with the reference image by applying an affine transformation. From the results obtained, the performance of the image registration is evaluated and compared with existing methods, namely Nearest Neighbor-RANdom SAmple Consensus (NN-RANSAC) and multi-objective Discrete Particle Swarm Optimization (DPSO). From the experiments performed it can be concluded that the proposed approach is an accurate method for registration of multi-sensor satellite imagery. (C) 2014 Elsevier Inc. All rights reserved.
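
A minimal sketch of the objective-switching idea, assuming a hypothetical tolerance and triangle-based criteria: the angle objective is optimised first and, once satisfied, the search switches to the distance objective. The paper's GA fitness additionally includes a point-matching condition, which is omitted here.

```python
import numpy as np

def triangle_features(pts):
    """Side lengths and interior angles of the triangle formed by three points."""
    pts = np.asarray(pts, dtype=float)
    sides = np.array([np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)])
    angles = []
    for i in range(3):
        a, b, c = np.roll(sides, -i)
        angles.append(np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1, 1)))
    return sides, np.array(angles)

def switching_fitness(ref_triple, sensed_triple, angle_tol=0.05):
    """Minimise the angle mismatch first; once the triangles are nearly similar,
    switch to the distance (side-length) mismatch."""
    s_ref, a_ref = triangle_features(ref_triple)
    s_sen, a_sen = triangle_features(sensed_triple)
    angle_err = float(np.abs(a_ref - a_sen).sum())
    if angle_err > angle_tol:                   # first objective: shape (angles)
        return angle_err
    return float(np.abs(s_ref - s_sen).sum())   # second objective: distances

ref = [(0, 0), (10, 0), (0, 8)]
good = [(1, 1), (11, 1), (1, 9)]                # same triangle, translated
bad = [(0, 0), (6, 0), (0, 8)]
print(switching_fitness(ref, good), switching_fitness(ref, bad))  # 0.0 vs > 0
```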

Relevance: 90.00%

Abstract:

Background: The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation and functional properties of the assembly. Hence detailed analysis of such structures is highly rewarding. However, the common problem in such analyses is the low resolution of these structures. In recent times a number of attempts that combine low-resolution cryo-EM data with higher-resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling, have been reported. Even in such attempts the best result one arrives at is a very coarse idea of the assembly structure, in terms of a trace of the C alpha atoms modeled with modest accuracy. Methodology/Principal Findings: In this paper we first present an objective approach to identify potentially solvent-exposed and buried residues solely from the positions of C alpha atoms and the amino acid sequence, using residue type-dependent thresholds for the accessible surface areas of C alpha atoms. We extend the method further to recognize potential protein-protein interface residues. Conclusion/Significance: Our approach to identify buried and exposed residues solely from the positions of C alpha atoms resulted in an accuracy of 84%, sensitivity of 83-89% and specificity of 67-94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70-96% and specificity of 58-94%. Interestingly, detailed analysis of cases of mismatch between recognition of interface residues from C alpha positions and from all-atom models suggested that recognition of interfacial residues using C alpha atoms only corresponds better with the intuitive notion of what an interfacial residue is. Our method should be useful in the objective analysis of structures of protein assemblies when only the positions of C alpha atoms are available as, for example, in the case of integration of cryo-EM data with high-resolution structures of the components of the assembly.
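
The buried/exposed classification described above can be pictured with a small sketch: given per-residue accessible surface areas computed from C alpha atoms only, compare each value against a residue type-dependent cutoff. The threshold values and inputs below are placeholders, not those derived in the paper.

```python
# Hypothetical C-alpha ASA cutoffs in square angstroms (illustrative only).
CA_ASA_THRESHOLD = {"ALA": 18.0, "ARG": 30.0, "ASP": 22.0, "LEU": 20.0, "TRP": 28.0}
DEFAULT_THRESHOLD = 22.0

def classify_residues(ca_asa):
    """ca_asa: iterable of (residue_name, C-alpha ASA) -> 'buried'/'exposed' labels."""
    labels = []
    for res, asa in ca_asa:
        cutoff = CA_ASA_THRESHOLD.get(res, DEFAULT_THRESHOLD)
        labels.append("exposed" if asa >= cutoff else "buried")
    return labels

print(classify_residues([("ALA", 5.2), ("ARG", 41.0), ("LEU", 19.5)]))
# ['buried', 'exposed', 'buried']
```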

Relevance: 90.00%

Abstract:

Different seismic hazard components pertaining to Bangalore city, namely soil overburden thickness, effective shear-wave velocity, factor of safety against liquefaction potential, peak ground acceleration at the seismic bedrock, site response in terms of amplification factor, and predominant frequency, have been individually evaluated. The overburden thickness distribution, predominantly in the range of 5-10 m in the city, has been estimated through a sub-surface model from geotechnical bore-log data. The effective shear-wave velocity distribution, established through Multi-channel Analysis of Surface Wave (MASW) surveys and subsequent data interpretation through dispersion analysis, exhibits site class D (180-360 m/s), site class C (360-760 m/s), and site class B (760-1500 m/s) in compliance with the National Earthquake Hazard Reduction Program (NEHRP) nomenclature. The peak ground acceleration has been estimated through a deterministic approach, based on a maximum credible earthquake of Mw = 5.1 assumed to nucleate from the closest active seismic source (Mandya-Channapatna-Bangalore Lineament). The 1-D site response factor, computed at each borehole through geotechnical analysis across the study region, ranges from an amplification of about one to as high as four. Correspondingly, the predominant frequency estimated from the Fourier spectrum is found to lie mostly in the range of 3.5-5.0 Hz. The soil liquefaction hazard has been assessed in terms of the factor of safety against liquefaction potential using standard penetration test data and the underlying soil properties, which indicates that 90% of the study region is non-liquefiable. The spatial distributions of the different hazard entities are placed on a GIS platform and subsequently integrated through the analytic hierarchy process. The resulting deterministic hazard map shows high hazard coverage in the western areas. The microzonation thus achieved is envisaged as a first-cut assessment of the site-specific hazard, laying out a framework for higher-order seismic microzonation as well as a useful decision support tool in overall land-use planning and hazard management. (C) 2010 Elsevier Ltd. All rights reserved.
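
The site classification quoted above maps directly onto velocity ranges, so a small helper makes the NEHRP bins explicit. This is only a sketch of the nomenclature; the boundary handling and the classes outside the quoted ranges are assumptions.

```python
def nehrp_site_class(vs_avg):
    """Classify a site by average shear-wave velocity (m/s) using the NEHRP
    ranges quoted in the abstract."""
    if vs_avg >= 1500:
        return "A"
    if vs_avg >= 760:
        return "B"   # 760-1500 m/s
    if vs_avg >= 360:
        return "C"   # 360-760 m/s
    if vs_avg >= 180:
        return "D"   # 180-360 m/s
    return "E"

for v in (250, 450, 900):
    print(v, nehrp_site_class(v))   # D, C, B
```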

Relevance: 90.00%

Abstract:

Package-board co-design plays a crucial role in determining the performance of high-speed systems. Although several commercial solutions exist for electromagnetic analysis and verification, the lack of Computer Aided Design (CAD) tools for SI-aware design and synthesis leads to longer design cycles and non-optimal package-board interconnect geometries. In this work, the functional similarities between package-board design and radio-frequency (RF) imaging are explored. Consequently, qualitative methods common in the imaging community, like Tikhonov Regularization (TR) and the Landweber method, are applied to solve multi-objective, multi-variable package design problems. In addition, a new hierarchical iterative piecewise-linear algorithm is developed as a wrapper over LBP for an efficient solution in the design space.
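
The two qualitative imaging methods named above can be summarised with a toy linear inverse problem. The sketch below is not the package-board formulation; it merely shows Tikhonov Regularization and the Landweber iteration on a made-up ill-conditioned system.

```python
import numpy as np

def tikhonov(A, b, lam=1e-2):
    """Tikhonov-regularized least squares: minimise ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def landweber(A, b, omega=None, iters=200):
    """Landweber iteration: x_{k+1} = x_k + omega * A^T (b - A x_k)."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size small enough to converge
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * (A.T @ (b - A @ x))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)) @ np.diag(np.logspace(0, -6, 20))  # ill-conditioned
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
print(np.linalg.norm(tikhonov(A, b) - x_true), np.linalg.norm(landweber(A, b) - x_true))
```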

Relevance: 80.00%

Abstract:

Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach, the ZC approach, etc., are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as in communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for IF estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
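
A minimal zero-crossing (ZC) instantaneous-frequency estimator illustrates the kind of method compared above; it is a crude sketch on a synthetic chirp, not one of the three proposed methods, and the sampling rate and test signal are assumptions.

```python
import numpy as np

def zc_instantaneous_frequency(x, fs):
    """Half a period elapses between consecutive sign changes, so
    IF ~= fs / (2 * samples between successive zero crossings)."""
    zc = np.where(np.diff(np.sign(x)) != 0)[0]   # indices just before a crossing
    gaps = np.diff(zc).astype(float)
    return zc[1:] / fs, fs / (2.0 * gaps)        # (time stamps, IF estimates)

fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)
x = np.cos(2 * np.pi * (500 * t + 2000 * t**2))  # chirp: IF = 500 + 4000*t Hz
times, f_est = zc_instantaneous_frequency(x, fs)
print(f_est[:5])                                 # starts near 500 Hz and rises
```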

Relevance: 80.00%

Abstract:

This paper extends the iterative linear matrix inequality (ILMI) algorithm to systems having non-ideal PI, PD and PID implementations. The new algorithm uses the practical implementation of the feedback blocks to form the equivalent static output feedback plant. LMI-based synthesis techniques are used in the algorithm to design a multi-loop, multi-objective fixed-structure controller. The benefits of such a control design technique are brought out by applying it to the lateral stabilization and tracking feedback control problem of a 30 cm wingspan micro air vehicle.
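
The equivalent static output feedback construction mentioned above can be sketched for an ideal PI controller: augmenting the plant with an integrator of the output turns u = Kp*y + Ki*integral(y) into a static gain on an extended output vector. The paper handles the non-ideal (practically implemented) PI/PD/PID blocks, which this sketch does not; the toy plant is an assumption.

```python
import numpy as np

def augment_for_pi(A, B, C):
    """Augment (A, B, C) with output integrators so a PI controller becomes the
    static output feedback u = [Kp  Ki] @ [y; z], where z' = y."""
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    A_aug = np.block([[A, np.zeros((n, p))],
                      [C, np.zeros((p, p))]])          # extra states: z' = y
    B_aug = np.vstack([B, np.zeros((p, m))])
    C_aug = np.block([[C, np.zeros((p, p))],
                      [np.zeros((p, n)), np.eye(p)]])  # extended outputs: [y; z]
    return A_aug, B_aug, C_aug

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy second-order plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Aa, Ba, Ca = augment_for_pi(A, B, C)
print(Aa.shape, Ba.shape, Ca.shape)        # (3, 3) (3, 1) (2, 3)
```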

Relevance: 80.00%

Abstract:

Ductility-based design of reinforced concrete structures implicitly assumes a certain amount of damage under the action of a design basis earthquake. The damage undergone by a structure needs to be quantified so as to assess the post-seismic reparability and functionality of the structure. This paper presents an analytical method for quantification and location of seismic damage through system identification methods. It may be noted that soft-ground-storey buildings are the major casualties in any earthquake, and hence the example structure is a soft or weak first-storey one, whose seismic response and temporal variation of damage are computed using a non-linear dynamic analysis program (IDARC) and compared with a normal structure. A time-period-based damage identification model is used and suitably calibrated against classic damage models. The regenerated stiffness of the three-degrees-of-freedom model (for the three-storeyed frame) is used to locate the damage, both on-line as well as after the seismic event. Multi-resolution analysis using wavelets is also used for localized damage identification of the soft-storey columns.
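
One common period-based damage measure makes the idea concrete: if the mass is unchanged, stiffness degradation shows up as period elongation. This is an illustrative index under that assumption, not necessarily the calibrated model used in the paper.

```python
def period_damage_index(T_initial, T_damaged):
    """Stiffness-degradation damage index from period elongation:
    D = 1 - k_d / k_0 = 1 - (T_0 / T_d)**2 (constant mass assumed)."""
    return 1.0 - (T_initial / T_damaged) ** 2

# A 20% period elongation corresponds to roughly a 31% stiffness loss.
print(round(period_damage_index(0.50, 0.60), 3))   # 0.306
```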

Relevance: 80.00%

Abstract:

Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and lower energy consumption. The memory architecture of an embedded system strongly influences critical system design objectives like area, power and performance. Hence the embedded system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce short design cycle times. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of exhaustive-search-based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache based architectures at the inner level. We present a two-step integrated approach to data layout for SPRAM-Cache based hybrid architectures, with the first step being data partitioning, which partitions data between SPRAM and Cache, and the second step being cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to 34% improvement over an existing approach and also optimizes the off-chip memory address space. We evaluated our approach on three embedded multimedia applications; it explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
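
A greedy stand-in for the first (data-partitioning) step gives a feel for the problem: rank data objects by accesses per byte and fill the SPRAM budget, leaving the rest to the cached off-chip space. The authors' actual partitioning and the cache-conscious graph-partitioning step are not reproduced here; the object list is hypothetical.

```python
def greedy_spram_partition(objects, spram_size):
    """objects: (name, size_bytes, access_count) tuples.
    Returns (names placed in SPRAM, names left to the cached off-chip space)."""
    ranked = sorted(objects, key=lambda o: o[2] / o[1], reverse=True)
    spram, cached, used = [], [], 0
    for name, size, _accesses in ranked:
        if used + size <= spram_size:
            spram.append(name)
            used += size
        else:
            cached.append(name)
    return spram, cached

objs = [("coeff_tbl", 4096, 90000), ("frame_buf", 65536, 120000), ("lut", 1024, 50000)]
print(greedy_spram_partition(objs, 8192))   # (['lut', 'coeff_tbl'], ['frame_buf'])
```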

Relevance: 80.00%

Abstract:

The efficiency of track foundation material gradually decreases due to insufficient lateral confinement, ballast fouling, and loss of shear strength of the subsurface soil under cyclic loading. This paper presents a characterization of rail track subsurface to identify ballast fouling and subsurface layer shear-wave velocities using seismic surveys. The seismic surface-wave method of multi-channel analysis of surface waves (MASW) has been carried out on a model track and a field track to determine the shear-wave velocity of clean and fouled ballast and of the track subsurface. The shear-wave velocity (SWV) of fouled ballast increases with increasing fouling percentage, reaches a maximum value, and then decreases. This behaviour is similar to a typical soil compaction curve and is used to define the optimum and critical fouling percentages (OFP and CFP). A critical fouling percentage of 15% is noticed for coal-fouled ballast and 25% for clayey-sand-fouled ballast; coal-fouled ballast reaches the OFP and CFP before clayey-sand-fouled ballast. Fouling of ballast reduces the voids in the ballast and thereby decreases the drainage. A combined plot of permeability and SWV against percentage of fouling shows that beyond the critical fouling point the drainage condition of fouled ballast falls below the acceptable limit. Shear-wave velocities were also measured at selected locations on the Wollongong field track by carrying out a similar seismic survey. In-situ samples were collected and the degrees of fouling were measured. Field SWV values are higher than the model-track SWV values for the same degree of fouling, which might be due to sleeper confinement. This article also highlights the ballast gradations widely followed in different countries and presents a comparison of the Indian ballast gradation with international gradation standards. Indian ballast contains coarser particle sizes compared to other countries. The upper limit of the Indian gradation curve matches the lower limit of the ballast gradation curves of America and Australia. The ballast gradation followed by Indian railways is poorly graded and more favorable for drainage. Indian ballast engineering needs extensive research to improve present track conditions.
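
The optimum fouling percentage is simply the peak of the SWV-versus-fouling curve, by analogy with a compaction curve; a short sketch with made-up values (not the model-track measurements) shows the bookkeeping.

```python
import numpy as np

def optimum_fouling_point(fouling_pct, swv):
    """Return the fouling percentage at which the shear-wave velocity peaks."""
    fouling_pct, swv = np.asarray(fouling_pct), np.asarray(swv)
    i = int(np.argmax(swv))
    return fouling_pct[i], swv[i]

fouling = [0, 5, 10, 15, 25, 35]           # percent (hypothetical)
velocity = [155, 170, 182, 188, 176, 160]  # m/s (hypothetical)
print(optimum_fouling_point(fouling, velocity))   # (15, 188)
```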

Relevance: 80.00%

Abstract:

The tonic is a fundamental concept in Indian art music. It is the base pitch, which an artist chooses in order to construct the melodies during a rāg(a) rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rāga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts such as the presence/absence of additional metadata, the quality of audio data, the duration of audio data, music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average), and are robust across the aforementioned contexts compared to the approaches based on expert knowledge. In addition, we also show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.
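
A toy multi-pitch, histogram-based tonic estimator illustrates the family of approaches reviewed above: fold pitch estimates into one octave and pick the most salient peak. Real systems add machine-learned selection among candidate peaks and metadata constraints; the pitch values below are hypothetical.

```python
import numpy as np

def tonic_from_pitch_histogram(pitches_hz, fmin=100.0, bins_per_semitone=10):
    """Fold multi-pitch estimates into a one-octave cent histogram above fmin
    and return the frequency of its most salient bin."""
    pitches = np.asarray([p for p in pitches_hz if p > 0], dtype=float)
    cents = np.mod(1200 * np.log2(pitches / fmin), 1200)   # collapse octaves
    hist, edges = np.histogram(cents, bins=12 * bins_per_semitone, range=(0, 1200))
    i = int(np.argmax(hist))
    peak_cents = 0.5 * (edges[i] + edges[i + 1])
    return fmin * 2 ** (peak_cents / 1200)

# Hypothetical pitch estimates clustered around D (146.8 Hz) and its octaves.
print(round(tonic_from_pitch_histogram([146.8, 147.1, 220.0, 293.7, 440.5, 146.5]), 1))
# -> 146.8
```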

Relevance: 80.00%

Abstract:

This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and these approaches fail to take into account the uncertainty in earthquake loading. Seismic hazard analysis based on the probabilistic method clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for the liquefaction analysis, starting from the probabilistic seismic hazard analysis (PSHA) used to evaluate the seismic hazard through to the performance-based evaluation of the liquefaction return period. A case study has been done for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from both methods is presented. In an area of 220 km2 in Bangalore city, the site class was assessed based on a large number of borehole data and 58 multi-channel analysis of surface wave (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was done based on 450 borehole data sets obtained in the study area. The results from CPT match well with the results obtained from a similar analysis with SPT data.
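
The performance-based idea can be sketched as a weighted sum over the deaggregated hazard: each (acceleration, magnitude) bin contributes its annual rate times the conditional probability of liquefaction, and the reciprocal of the total is the return period. The bins and the fragility function below are made up for illustration and are not the Bangalore results.

```python
def liquefaction_return_period(hazard_bins, p_liq_given_am):
    """hazard_bins: (pga, magnitude, annual_rate) triples from a PSHA deaggregation.
    p_liq_given_am: function (pga, magnitude) -> P[liquefaction] at the site."""
    annual_rate = sum(rate * p_liq_given_am(a, m) for a, m, rate in hazard_bins)
    return float("inf") if annual_rate == 0 else 1.0 / annual_rate

bins = [(0.05, 5.0, 1e-2), (0.10, 5.5, 4e-3), (0.20, 6.0, 8e-4)]   # hypothetical
frag = lambda a, m: min(1.0, (a / 0.15) ** 2 * (m / 7.5))          # hypothetical
print(round(liquefaction_return_period(bins, frag), 1))            # ~352 years here
```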

Relevance: 80.00%

Abstract:

We performed Gaussian network model (GNM) based normal mode analysis of the 3-dimensional structures of multiple active and inactive forms of protein kinases. In 14 different kinases, a larger number of residues (1095) show higher structural fluctuations in the inactive states than in the active states (525), suggesting that, in general, the mobility of inactive states is higher than that of active states. This statistically significant difference is consistent with the higher crystallographic B-factors and conformational energies of inactive states compared to active states, suggesting lower stability of the inactive forms. Only a small number of inactive conformations with the DFG motif in the "in" state were found to have fluctuation magnitudes comparable to the active conformation. Therefore our study reports, for the first time, intrinsically higher structural fluctuations for almost all inactive conformations compared to the active forms. Regions with higher fluctuations in the inactive states are often localized to the αC-helix, αG-helix and activation loop, which are involved in regulation and/or in structural transitions between active and inactive states. Further analysis of 476 kinase structures involved in interactions with another domain/protein showed that many of the regions with higher inactive-state fluctuation correspond to contact interfaces. We also performed extensive GNM analysis of (i) insulin receptor kinase bound to another protein and (ii) holo and apo forms of active and inactive conformations, followed by multi-factor analysis of variance. We conclude that binding of small molecules or other domains/proteins reduces the extent of fluctuation, irrespective of active or inactive forms. Finally, we show that the perceived fluctuations serve as a useful input to predict the functional state of a kinase.
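
A bare-bones Gaussian network model calculation shows where such fluctuation profiles come from: build the Kirchhoff matrix from C alpha contacts and read residue fluctuations off the diagonal of its pseudo-inverse. The cutoff and the toy chain are assumptions, and scale factors are omitted.

```python
import numpy as np

def gnm_fluctuations(ca_coords, cutoff=7.0):
    """Residue mean-square fluctuations (up to a constant) from the diagonal of
    the pseudo-inverse of the GNM Kirchhoff matrix."""
    xyz = np.asarray(ca_coords, dtype=float)
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)                # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))    # diagonal = contact degree
    return np.diag(np.linalg.pinv(kirchhoff))

coords = [(3.8 * i, 0.5 * np.sin(i), 0.0) for i in range(20)]   # toy chain, not a kinase
flucs = gnm_fluctuations(coords)
print(flucs.argmax(), flucs.argmin())   # termini fluctuate most, the middle least
```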

Relevance: 80.00%

Abstract:

Routing is a very important step in VLSI physical design. In multi-net global routing, a set of nets is routed under delay and resource constraints. In this paper a delay-driven, congestion-aware global routing algorithm is developed, which is a heuristic-based method to solve a multi-objective NP-hard optimization problem. The proposed delay-driven Steiner tree construction method is of O(n^2 log n) complexity, where n is the number of terminal points, and it provides an n-approximation solution to the critical-time minimization problem for a certain class of grid graphs. The existing timing-driven method (Hu and Sapatnekar, 2002) has a complexity of O(n^4) and is implemented on nets with a small number of sinks. Next we propose an FPTAS Gradient algorithm for minimizing the total overflow. This is a concurrent approach considering all the nets simultaneously, contrary to the existing approaches of sequential rip-up and reroute. The algorithms are implemented on ISPD98-derived benchmarks and a drastic reduction in overflow is observed. (C) 2014 Elsevier Inc. All rights reserved.
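
Total overflow, the quantity minimised by the concurrent algorithm above, is easy to state in code: for every grid-graph edge, overflow is the usage in excess of capacity, summed over all edges. The tiny grid below is hypothetical.

```python
def total_overflow(edge_capacity, routed_nets):
    """edge_capacity: {edge: capacity}. routed_nets: per-net lists of used edges."""
    usage = {}
    for net_edges in routed_nets:
        for e in net_edges:
            usage[e] = usage.get(e, 0) + 1
    return sum(max(0, u - edge_capacity.get(e, 0)) for e, u in usage.items())

caps = {("a", "b"): 2, ("b", "c"): 1, ("c", "d"): 2}
nets = [[("a", "b"), ("b", "c")], [("b", "c"), ("c", "d")], [("a", "b")]]
print(total_overflow(caps, nets))   # edge ("b","c") used twice with capacity 1 -> 1
```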

Relevance: 80.00%

Abstract:

Producing a high tip deflection in a piezoelectric bimorph laminar actuator by applying a high voltage is limited by many physical constraints. Therefore, a piezoelectric bimorph actuator with a rigid extension of non-piezoelectric material at its tip is used to increase the tip deflection of such an actuator. Research on this type of piezoelectric bending actuator is either limited to first-order constitutive relations, which do not include the non-linear behavior of the piezoelectric element at high electric fields, or limited to curve-fitting techniques. Therefore, this paper considers high electric fields and analytically models a tapered piezoelectric bimorph actuator with a rigid extension of non-piezoelectric material at its tip. The stiffness, capacitance, effective tip deflection, block force, output strain energy, output energy density, input electrical energy and energy efficiency of the actuator are calculated analytically. The paper also discusses the multi-objective optimization of this type of actuator subject to mechanical and electrical constraints.
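
For orientation, the textbook first-order relations for an ideal, untapered, series-connected bimorph without a tip extension (precisely the regime the paper goes beyond) are sketched below; the material values are typical PZT magnitudes, assumed for illustration.

```python
def bimorph_free_deflection(d31, V, L, h):
    """First-order free tip deflection of an ideal series-connected bimorph
    cantilever: delta = 3 * d31 * V * L**2 / (2 * h**2), h = total thickness."""
    return 3.0 * d31 * V * L**2 / (2.0 * h**2)

def block_force(delta_free, E, w, h, L):
    """Blocking force as free deflection times the cantilever bending stiffness
    k = 3*E*I/L**3 with I = w*h**3/12 (linear, untapered, no tip extension)."""
    inertia = w * h**3 / 12.0
    return delta_free * 3.0 * E * inertia / L**3

d31, V = -190e-12, 100.0                 # m/V and volts (typical PZT magnitude)
L, w, h, E = 30e-3, 5e-3, 0.6e-3, 62e9   # length, width, thickness (m), modulus (Pa)
delta = bimorph_free_deflection(d31, V, L, h)
print(abs(delta) * 1e6, "um;", block_force(abs(delta), E, w, h, L) * 1e3, "mN")
```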