66 results for Graph-based approach
Abstract:
Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm (GA) based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented, and the performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger. To account for the relatively quick convergence of the gradient descent methods, we analyze the landscape of the COP-based cost function. We prove that the cost function is unimodal in the search space, a feature that makes it amenable to optimization by gradient-descent techniques as compared to random search methods such as Genetic Algorithms.
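As an illustration of the kind of search loop described above, the sketch below evolves a population of input-signal probabilities with one-point crossover and Gaussian mutation. The `cop_cost` function is a hypothetical placeholder: the real cost would be computed from COP signal probabilities of the circuit under test.

```python
import random

# Toy GA evolving input-signal probabilities (one per primary input).
# `cop_cost` is a hypothetical stand-in for the COP-based testability cost;
# lower cost means the distribution is expected to detect faults faster.

def cop_cost(probs):
    # Placeholder objective; the real cost is derived from COP measures.
    return sum((p - 0.5) ** 2 for p in probs)

def evolve(n_inputs=8, pop_size=30, generations=100,
           crossover_rate=0.9, mutation_rate=0.05):
    pop = [[random.random() for _ in range(n_inputs)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cop_cost)                    # rank by fitness (lower = better)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_inputs)   # one-point crossover
            child = a[:cut] + b[cut:] if random.random() < crossover_rate else a[:]
            child = [min(1.0, max(0.0, p + random.gauss(0, 0.1)))
                     if random.random() < mutation_rate else p
                     for p in child]              # Gaussian mutation, clipped to [0, 1]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cop_cost)

best = evolve()
print(best)
```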
Abstract:
In this paper, we consider the application of belief propagation (BP) to achieve near-optimal signal detection in large multiple-input multiple-output (MIMO) systems at low complexity. Large-MIMO architectures based on spatial multiplexing (V-BLAST) as well as non-orthogonal space-time block codes (STBC) from cyclic division algebra (CDA) are considered. We adopt graphical models based on Markov random fields (MRF) and factor graphs (FG). In the MRF-based approach, we use pairwise compatibility functions even though the graphical models of MIMO systems are fully/densely connected. In the FG approach, we employ a Gaussian approximation (GA) of the multi-antenna interference, which significantly reduces the complexity while achieving very good performance for large dimensions. We show that i) both MRF- and FG-based BP approaches exhibit large-system behavior, where increasingly closer-to-optimal performance is achieved with an increasing number of dimensions, and ii) damping of messages/beliefs significantly improves the bit error performance.
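The damping of messages mentioned here is, in its usual form, a convex blend of the previous and freshly computed messages; a minimal sketch follows, where the damping factor `delta` is a free tuning parameter, not a value from the paper:

```python
import numpy as np

# Damped belief-propagation update: instead of replacing a message with the
# freshly computed one, blend old and new. delta = 0 recovers plain BP.

def damped_update(old_msgs: np.ndarray, new_msgs: np.ndarray, delta: float = 0.3):
    """Return damped messages: convex combination of previous and current."""
    return (1.0 - delta) * new_msgs + delta * old_msgs

# Example: one damping step on a vector of log-likelihood-ratio messages.
prev = np.array([0.8, -1.2, 0.1])
curr = np.array([1.5, -0.4, 0.6])
print(damped_update(prev, curr))
```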
Abstract:
Numerical modeling of several turbulent nonreacting and reacting spray jets is carried out using a fully stochastic separated flow (FSSF) approach. As is widely done, the carrier phase is considered in an Eulerian framework, while the dispersed phase is tracked in a Lagrangian framework following the stochastic separated flow (SSF) model. Various interactions between the two phases are taken into account by means of two-way coupling. Spray evaporation is described using a thermal model with infinite conductivity in the liquid phase. The gas-phase turbulence terms are closed using the k-epsilon model. A novel mixture-fraction-based approach is used to stochastically model the fluctuating temperature and composition in the gas phase, and these are then used to refine the estimates of the heat and mass transfer rates between the droplets and the surrounding gas phase. In classical SSF (CSSF) methods, stochastic fluctuations of only the gas-phase velocity are modeled. Successful implementation of the FSSF approach for turbulent nonreacting and reacting spray jets is demonstrated. Results are compared against experimental measurements as well as with predictions using the CSSF approach for both nonreacting and reacting spray jets. The FSSF approach shows little difference from the CSSF predictions for nonreacting spray jets, but the differences are significant for reacting spray jets. In general, the FSSF approach gives good predictions of the flame length and structure, but further modeling improvements may be needed to capture some details more accurately. (C) 2011 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
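One standard way to realize the stochastic mixture-fraction modelling alluded to above is presumed-PDF sampling; the sketch below draws instantaneous mixture-fraction values from a beta distribution moment-matched to the local mean and variance. This is a common device offered purely as illustration; the paper's FSSF construction may differ in detail.

```python
import numpy as np

# Presumed beta-PDF sampling of mixture-fraction fluctuations. Given the
# local mean and variance of mixture fraction Z from the gas-phase
# solution, sample instantaneous Z values seen by droplets. Requires
# z_var < z_mean * (1 - z_mean) for valid beta parameters.

def sample_mixture_fraction(z_mean, z_var, rng, size=1):
    # Beta parameters from the first two moments of Z on (0, 1).
    s = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * s, (1.0 - z_mean) * s
    return rng.beta(a, b, size=size)

rng = np.random.default_rng(2)
z = sample_mixture_fraction(z_mean=0.3, z_var=0.02, rng=rng, size=5)
print(z)  # instantaneous Z values, used to refine local T and composition
```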
Abstract:
We address the problem of robust formant tracking in continuous speech in the presence of additive noise. We propose a new approach based on mixture modeling of the formant contours. Our approach consists of two main steps: (i) computation of a pyknogram based on multiband amplitude-modulation/frequency-modulation (AM/FM) decomposition of the input speech; and (ii) statistical modeling of the pyknogram using mixture models. We experiment with both a Gaussian mixture model (GMM) and a Student's-t mixture model (tMM) and show that the latter is robust with respect to handling outliers in the pyknogram data, parameter selection, accuracy, and smoothness of the estimated formant contours. Experimental results on simulated data as well as noisy speech data show that the proposed tMM-based approach is also robust to additive noise. We present performance comparisons with a recently developed adaptive filterbank technique from the literature and the classical Burg's spectral estimator technique, which show that the proposed technique is more robust to noise.
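As a rough illustration of step (ii), the sketch below fits a Gaussian mixture to synthetic pyknogram-like (time, frequency) points with scikit-learn. A Student's-t mixture, which the abstract finds more robust, has no stock scikit-learn estimator and would need a custom EM loop; all data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a Gaussian mixture to synthetic pyknogram-like (time, frequency)
# points clustered around three formant-like tracks.

rng = np.random.default_rng(0)
pts = np.concatenate([
    rng.normal([0.5, 500.0], [0.3, 40.0], size=(300, 2)),
    rng.normal([0.5, 1500.0], [0.3, 60.0], size=(300, 2)),
    rng.normal([0.5, 2500.0], [0.3, 80.0], size=(300, 2)),
])
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(pts)
centers = gmm.means_[np.argsort(gmm.means_[:, 1])]
print(centers)  # component means; the frequency column approximates formants
```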
Assessment of seismic hazard and liquefaction potential of Gujarat based on probabilistic approaches
Abstract:
Gujarat is one of the fastest-growing states of India, with high industrial activity coming up in the major cities of the state. It is indispensable to analyse the seismic hazard, as the region is considered the most seismically active part of the stable continental region of India. The Bhuj earthquake of 2001 caused extensive damage in terms of casualties and economic loss. In the present study, the seismic hazard of Gujarat is evaluated using a probabilistic approach within a logic tree framework that minimizes the uncertainties in hazard assessment. The peak horizontal acceleration (PHA) and spectral acceleration (Sa) values were evaluated for 10% and 2% probabilities of exceedance in 50 years. Two important geotechnical effects of earthquakes, site amplification and liquefaction, are also evaluated, considering site characterization based on site classes. The liquefaction return period for the entire state of Gujarat is evaluated using a performance-based approach. The maps of PHA and PGA values prepared in this study are very useful for seismic hazard mitigation of the region in the future.
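For context, hazard values like these come from the standard PSHA (Cornell-McGuire) integral; the generic form is sketched below, and the study's exact logic tree formulation may differ:

\[
\lambda(\mathrm{PGA} > a) \;=\; \sum_{i=1}^{N_s} \nu_i \int_m \int_r P\big[\mathrm{PGA} > a \,\big|\, m, r\big]\, f_{M_i}(m)\, f_{R_i}(r)\,\mathrm{d}r\,\mathrm{d}m ,
\]

where \(\nu_i\) is the activity rate of seismic source \(i\) and \(f_{M_i}\), \(f_{R_i}\) are its magnitude and distance densities. Under the usual Poisson assumption, \(P = 1 - e^{-\lambda T}\), so the quoted 10% and 2% probabilities of exceedance in \(T = 50\) years correspond to return periods of roughly 475 and 2475 years.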
Abstract:
In this letter, we characterize the extrinsic information transfer (EXIT) behavior of a factor graph based message passing algorithm for detection in large multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas. The EXIT curves of a joint detection-decoding receiver are obtained for low density parity check (LDPC) codes of given degree distributions. From the obtained EXIT curves, an optimization of the LDPC code degree profiles is carried out to design irregular LDPC codes matched to the large-MIMO channel and joint message passing receiver. With low complexity joint detection-decoding, these codes are shown to perform better than off-the-shelf irregular codes in the literature by about 1 to 1.5 dB at a coded BER of 10^-5 in 16 x 16, 64 x 64 and 256 x 256 MIMO systems.
Abstract:
The problem of designing good space-time block codes (STBCs) with low maximum-likelihood (ML) decoding complexity has gathered much attention in the literature. All the known low ML decoding complexity techniques utilize the same approach of exploiting either the multigroup decodable or the fast-decodable (conditionally multigroup decodable) structure of a code. We refer to this well-known technique of decoding STBCs as conditional ML (CML) decoding. In this paper, we introduce a new framework to construct ML decoders for STBCs based on the generalized distributive law (GDL) and the factor-graph-based sum-product algorithm. We say that an STBC is fast GDL decodable if the order of GDL decoding complexity of the code, with respect to the constellation size M, is strictly less than M^lambda, where lambda is the number of independent symbols in the STBC. We give sufficient conditions for an STBC to admit fast GDL decoding, and show that both multigroup and conditionally multigroup decodable codes are fast GDL decodable. For any STBC, whether fast GDL decodable or not, we show that the GDL decoding complexity is strictly less than the CML decoding complexity. For instance, for any STBC obtained from cyclic division algebras which is not multigroup or conditionally multigroup decodable, the GDL decoder provides about a 12-fold reduction in complexity compared to the CML decoder. Similarly, for the Golden code, which is conditionally multigroup decodable, the GDL decoder is only half as complex as the CML decoder.
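The complexity saving GDL delivers can be seen on a toy objective that factors: maximizing f(x1, x2, x3) = g(x1, x2) + h(x2, x3) by brute force costs M^3 evaluations, while distributing the max through the factorization costs O(M^2). A sketch with arbitrary factor tables (not an STBC decoder):

```python
import numpy as np

# GDL/max-sum on a toy factorization f(x1,x2,x3) = g(x1,x2) + h(x2,x3).
# Brute force enumerates M^3 triples; GDL pushes the max through the
# factors in O(M^2). The tables g, h are random stand-ins.

M = 16
rng = np.random.default_rng(3)
g = rng.normal(size=(M, M))      # g[x1, x2]
h = rng.normal(size=(M, M))      # h[x2, x3]

# GDL: eliminate x3, then x1, then combine at x2.
msg_h = h.max(axis=1)            # best x3 for each x2  -> O(M^2)
msg_g = g.max(axis=0)            # best x1 for each x2  -> O(M^2)
gdl_max = (msg_g + msg_h).max()  # combine at x2        -> O(M)

# Brute force for verification: O(M^3).
brute = max(g[x1, x2] + h[x2, x3]
            for x1 in range(M) for x2 in range(M) for x3 in range(M))
assert np.isclose(gdl_max, brute)
print(gdl_max)
```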
Abstract:
A wave propagation based approach for the detection of damage in components of structures having periodic damage is proposed. Periodic damage patterns may arise in a structure due to periodicity in geometry and in loading. The method exploits the Bloch-Floquet band formation mechanism, a feature specific to structures with periodicity, to identify propagation bands (pass bands) and attenuation bands (stop bands) at different frequency ranges. The presence of damage modifies the wave propagation behaviour that forms these bands. With proper positioning of sensors, a damage force indicator (DFI) method can be used to locate the defect to an accuracy of the sensor-to-sensor distance. A wide range of transducer frequencies may be used to obtain further information about the shape and size of the damage. The methodology is demonstrated using a few 1-D structures with different kinds of periodicity and damage. For this purpose, the dynamic stiffness matrix is formed for the periodic elements to obtain the dispersion relationship using the frequency domain spectral element and spectral super element methods. The sensitivity of the damage force indicator for different types of periodic damage is also analysed.
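The pass/stop band mechanism exploited here can be illustrated independently of the spectral element machinery. For a simple periodic spring-mass chain (an illustrative model, not the paper's structures), a frequency lies in a pass band exactly when the unit-cell transfer matrix has |trace| <= 2, so the Bloch eigenvalues sit on the unit circle:

```python
import numpy as np

# Bloch-Floquet pass/stop band test for a 1-D diatomic spring-mass chain.
# A frequency w lies in a pass band when the unit-cell transfer matrix T
# has |trace(T)| <= 2, i.e. the Bloch eigenvalues e^{+i mu}, e^{-i mu}
# lie on the unit circle.

k, m1, m2 = 1.0, 1.0, 2.0   # spring stiffness and alternating masses

def step(mass, w):
    # Maps (u_j, u_{j-1}) to (u_{j+1}, u_j) for one mass in the chain.
    return np.array([[2.0 - mass * w**2 / k, -1.0],
                     [1.0, 0.0]])

def in_pass_band(w):
    T = step(m2, w) @ step(m1, w)   # one full periodic unit cell
    return abs(np.trace(T)) <= 2.0

freqs = np.linspace(1e-3, 2.5, 1000)
bands = [in_pass_band(w) for w in freqs]
# Print the band edges where propagation switches on/off.
for a, b, w in zip(bands, bands[1:], freqs[1:]):
    if a != b:
        print(f"band edge near w = {w:.3f} ({'stop->pass' if b else 'pass->stop'})")
```

For these parameters the test recovers the classic diatomic band edges (an acoustic pass band, a stop band, then an optical pass band); damage perturbs the unit cell and hence shifts these edges, which is the signature the DFI method picks up.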
Abstract:
The Lovász θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovász θ function is equivalent to a kernel learning problem related to the one-class SVM. This interesting connection opens up many opportunities for bridging graph-theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-θ graphs, on which the Lovász θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems on large graphs, e.g. identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach for this problem involves computing the θ function; however, it is not scalable due to the SDP computation. We show that the random graph with a planted clique is an example of an SVM-θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a "common orthogonal labelling", which extends the notion of an "orthogonal labelling" of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a Multiple Kernel Learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order of magnitude better scalability than the state of the art.
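A sketch of the SVM-θ idea on a planted-clique instance, under two assumptions drawn from the SVM-θ line of work rather than verbatim from this paper: the kernel K = I + A/ρ with ρ >= |λ_min(A)| is taken as a valid labelling kernel, and the one-class-SVM value ω(K) = max_{α >= 0} 2·Σα − α'Kα serves as the θ approximation, with large weights α flagging likely clique members.

```python
import numpy as np
from scipy.optimize import minimize

# SVM-theta sketch: approximate the Lovasz theta function of a planted-
# clique graph via a one-class-SVM-style convex QP (assumptions in the
# lead-in; not a reproduction of the paper's algorithm).

rng = np.random.default_rng(0)
n, k = 200, 30

# G(n, 1/2) with a planted k-clique.
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
clique = rng.choice(n, size=k, replace=False)
block = np.zeros((n, n))
block[np.ix_(clique, clique)] = 1.0
A = np.triu(np.maximum(A, block), 1)
A = A + A.T

rho = abs(np.linalg.eigvalsh(A)[0])          # |lambda_min(A)| keeps K PSD
K = np.eye(n) + A / rho

res = minimize(lambda a: a @ K @ a - 2.0 * a.sum(),
               x0=np.full(n, 1.0 / n),
               jac=lambda a: 2.0 * (K @ a) - 2.0,
               bounds=[(0.0, None)] * n, method="L-BFGS-B")
theta_approx = -res.fun                      # omega(K) = -(minimum value)
found = set(np.argsort(res.x)[-k:])          # large weights flag the dense part
print(theta_approx, len(found & set(clique)) / k)
```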
Abstract:
The problem of designing good Space-Time Block Codes (STBCs) with low maximum-likelihood (ML) decoding complexity has gathered much attention in the literature. All the known low ML decoding complexity techniques utilize the same approach of exploiting either the multigroup decodable or the fast-decodable (conditionally multigroup decodable) structure of a code. We refer to this well-known technique of decoding STBCs as Conditional ML (CML) decoding. In [1], we introduced a framework to construct ML decoders for STBCs based on the Generalized Distributive Law (GDL) and the factor-graph-based Sum-Product Algorithm, and showed that for two specific families of STBCs, the Toeplitz codes and the Overlapped Alamouti Codes (OACs), the GDL-based ML decoders have strictly less complexity than the CML decoders. In this paper, we introduce a 'traceback' step to the GDL decoding algorithm of STBCs, which enables roughly a 4-fold reduction in the complexity of the GDL decoders proposed in [1]. Utilizing this complexity reduction from 'traceback', we then show that for any STBC (not just the Toeplitz and Overlapped Alamouti Codes), the GDL decoding complexity is strictly less than the CML decoding complexity. For instance, for any STBC obtained from Cyclic Division Algebras that is not multigroup or conditionally multigroup decodable, the GDL decoder provides approximately a 12-fold reduction in complexity compared to the CML decoder. Similarly, for the Golden code, which is conditionally multigroup decodable, the GDL decoder is only about half as complex as the CML decoder.
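The 'traceback' device can be illustrated on a toy max-sum problem: during elimination one stores argmax tables, then recovers the maximizing assignment by walking back through them instead of re-solving conditional subproblems. A sketch with arbitrary factor tables, not an actual STBC decoding problem:

```python
import numpy as np

# Traceback in GDL/max-sum on f(x1,x2,x3) = g(x1,x2) + h(x2,x3): record
# the argmax at each elimination, then walk back to read off the best
# symbols instead of recomputing them.

M = 16
rng = np.random.default_rng(4)
g = rng.normal(size=(M, M))          # g[x1, x2]
h = rng.normal(size=(M, M))          # h[x2, x3]

best_x3 = h.argmax(axis=1)           # traceback table: best x3 given x2
best_x1 = g.argmax(axis=0)           # traceback table: best x1 given x2
score = g.max(axis=0) + h.max(axis=1)

x2 = int(score.argmax())                      # forward pass picks x2 ...
x1, x3 = int(best_x1[x2]), int(best_x3[x2])   # ... traceback fills the rest
assert np.isclose(g[x1, x2] + h[x2, x3], score[x2])
print(x1, x2, x3)
```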
Abstract:
Sustainability has emerged as an important planning concept from its beginnings in economics and ecological thinking, and has widely been applied to assessing urban development. Different methods, techniques and instruments for urban sustainability assessment, which help determine how cities can become more sustainable, have emerged over a period of time. Among these, indicator-based approaches contribute to the building of sustainable self-regulated systems that integrate development and environmental protection. Hence, they provide a solid foundation for decision-making at all levels and are being increasingly used. The present paper builds on the available literature and suggests the need for benchmarking the indicator-based approach in a given urban area while incorporating various local issues, thus enhancing the long-term sustainability of cities, which can be fostered by introducing sustainability indicators into the urban planning process. (C) 2013 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
Abstract:
Rapid and facile synthesis of ~7 nm and ~100-400 nm nanostructures of anatase titania is achieved by exploiting the chemical nature of solvents through a microwave-based approach. After using these nanostructures as a photoanode in dye-sensitized solar cells, a modest yet appreciable efficiency of 6.5% was achieved under the illumination of AM 1.5 G one sun (100 mW cm^-2).
Abstract:
Classification of the pharmacologic activity of a chemical compound is an essential step in any drug discovery process. We develop two new atom-centered fragment descriptors (vertex indices): one based solely on topological considerations without discriminating atom or bond types, and another based on topological and electronic features. We also assess their usefulness by devising a method to rank and classify molecules with regard to their antibacterial activity. The classification performance of our method is found to be superior to that of two previous studies on large heterogeneous data sets for hit-finding and hit-to-lead studies, even though we use far fewer parameters. It is found that for hit-finding studies topological features (simple graph) alone provide significant discriminating power, while for the hit-to-lead process a small but consistent improvement can be made by additionally including electronic features (colored graph). Our approach is simple, interpretable, and suitable for the design of molecules, as we do not use any physicochemical properties. The singular use of a vertex index as descriptor, novel range-based feature extraction, and rigorous statistical validation are the key elements of this study.
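The abstract does not fully specify the two vertex indices, so the sketch below computes a generic atom-centered topological descriptor, the sum of neighbour degrees weighted by inverse topological distance over the molecular graph, purely to illustrate the descriptor family:

```python
import numpy as np
from itertools import product

# Illustrative atom-centered topological vertex index (hypothetical; the
# paper's two descriptors are not specified in the abstract). For each
# atom i, sum the degrees of all other atoms weighted by the inverse of
# their topological (shortest-path) distance from i.

def vertex_indices(adjacency: np.ndarray) -> np.ndarray:
    n = len(adjacency)
    # All-pairs shortest paths (Floyd-Warshall) on the molecular graph.
    dist = np.where(adjacency > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k, i, j in product(range(n), repeat=3):
        dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])
    degree = adjacency.sum(axis=1)
    index = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and np.isfinite(dist[i, j]):
                index[i] += degree[j] / dist[i, j]
    return index

# Example: the carbon skeleton of 2-methylbutane as a "simple graph" input.
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
print(vertex_indices(A))
```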
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and these approaches fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard through the performance-based method to evaluate the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from both methods is presented. In an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole data and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on 450 borehole data sets obtained in the study area. The results from the CPT data match well with those obtained from a similar analysis with SPT data.
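In outline, the performance-based step described here accumulates liquefaction probability over the full deaggregated hazard; a sketch in generic notation, following the standard formulation rather than necessarily the article's exact symbols:

\[
\Lambda_{FS_L < FS^{*}} \;=\; \sum_{i=1}^{N_a} \sum_{j=1}^{N_m} P\big[FS_L < FS^{*} \,\big|\, a_i, m_j\big]\, \Delta\lambda_{a_i, m_j} ,
\]

where \(FS_L\) is the factor of safety against liquefaction, \(FS^{*}\) a chosen threshold, and \(\Delta\lambda_{a_i, m_j}\) the incremental mean annual rate of ground motions with amplitude \(a_i\) and magnitude \(m_j\) from PSHA deaggregation; the liquefaction return period at a site is then \(1/\Lambda\).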
Abstract:
An optimal measurement selection strategy based on incoherence among rows (corresponding to measurements) of the sensitivity (or weight) matrix for near-infrared diffuse optical tomography is proposed. As incoherence among the measurements can be seen as providing maximally independent information for the estimation of optical properties, it offers the high level of optimization required to know how independent a particular measurement is from its counterparts. The proposed method was compared with the recently established data-resolution matrix-based approach for the optimal choice of independent measurements and was shown, using simulated and experimental gelatin phantom data sets, to be superior, as it does not require an optimal regularization parameter to provide the same information. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
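A plausible reading of the selection rule (offered as a sketch, not the authors' exact algorithm) is a greedy minimization of mutual coherence over rows of the sensitivity matrix J:

```python
import numpy as np

# Greedy measurement selection by row incoherence: repeatedly add the row
# of the sensitivity matrix J that is least coherent (smallest maximum
# normalized inner product) with the rows already chosen.

def select_measurements(J: np.ndarray, n_select: int) -> list[int]:
    rows = J / np.linalg.norm(J, axis=1, keepdims=True)   # unit-norm rows
    chosen = [int(np.argmax(np.linalg.norm(J, axis=1)))]  # seed: strongest row
    while len(chosen) < n_select:
        coherence = np.abs(rows @ rows[chosen].T).max(axis=1)
        coherence[chosen] = np.inf                        # exclude picked rows
        chosen.append(int(np.argmin(coherence)))
    return chosen

# Example with a random stand-in for a NIR-DOT Jacobian (measurements x voxels).
J = np.random.default_rng(1).normal(size=(120, 500))
print(select_measurements(J, 10))
```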