906 results for General Linear Methods
Abstract:
Many problems of state estimation in structural dynamics permit a partitioning of system states into nonlinear and conditionally linear substructures. This enables a part of the problem to be solved exactly, using the Kalman filter, and the remainder using Monte Carlo simulations. The present study develops an algorithm that combines sequential importance sampling-based particle filtering with Kalman filtering for a fairly general form of process equations, and demonstrates the application of a substructuring scheme to problems of hidden state estimation in structures with local nonlinearities, response sensitivity model updating in nonlinear systems, and characterization of residual displacements in instrumented inelastic structures. The paper also demonstrates theoretically that the sampling variance associated with the substructuring scheme does not exceed that of Monte Carlo filtering without substructuring. (C) 2012 Elsevier Ltd. All rights reserved.
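A minimal sketch of the general idea (not the paper's algorithm): the nonlinear substate x is handled by sequential importance sampling, while the conditionally linear substate z is propagated exactly with a per-particle Kalman filter. The toy model below, including the coefficients a, b, c and the noise variances, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 0.9, 0.1, 1.0          # conditionally linear dynamics/observation (assumed)
qx, qz, r = 1.0, 0.1, 0.5        # process and measurement noise variances (assumed)
T, N = 50, 500                   # time steps, particles

def f(x):                        # nonlinear transition for the x-substate
    return 0.5 * x + 25 * x / (1 + x**2)

# simulate a ground-truth trajectory and measurements
x_true, z_true = 0.1, 0.0
y, xs_true = np.zeros(T), np.zeros(T)
for k in range(T):
    x_true = f(x_true) + rng.normal(0, np.sqrt(qx))
    z_true = a * z_true + b * x_true + rng.normal(0, np.sqrt(qz))
    y[k] = x_true**2 / 20 + c * z_true + rng.normal(0, np.sqrt(r))
    xs_true[k] = x_true

# particle set for x, plus a per-particle Kalman mean/variance for z
xp = rng.normal(0.1, 1.0, N)
m, P = np.zeros(N), np.ones(N)
for k in range(T):
    xp = f(xp) + rng.normal(0, np.sqrt(qx), N)   # propagate particles
    m_pred = a * m + b * xp                      # Kalman prediction for z
    P_pred = a**2 * P + qz
    # marginal likelihood of y[k] given each particle (z integrated out)
    y_pred = xp**2 / 20 + c * m_pred
    S = c**2 * P_pred + r
    w = np.exp(-0.5 * (y[k] - y_pred)**2 / S) / np.sqrt(S)
    w /= w.sum()
    # Kalman measurement update for z, conditional on each particle
    K = P_pred * c / S
    m = m_pred + K * (y[k] - y_pred)
    P = (1 - K * c) * P_pred
    # multinomial resampling of particles together with their Kalman statistics
    idx = rng.choice(N, N, p=w)
    xp, m, P = xp[idx], m[idx], P[idx]
    print(f"k={k:2d}  E[x]={xp.mean():7.3f}  true x={xs_true[k]:7.3f}")
```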
Abstract:
Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ ℓ2-norm-based regularization, which is known to remove the high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed; nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependency of the solution between successive frames results in a linear inverse problem. This new framework, combined with ℓ1-norm-based regularization, can provide better robustness to noise and better contrast recovery compared with conventional ℓ2-based techniques. Moreover, it is shown that the proposed ℓ1-based technique is computationally efficient compared to its ℓ2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame, and any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
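A minimal sketch of the frame-to-frame idea described above, not the authors' reconstruction code: given a linear sensitivity matrix A and a good estimate x0 for the initial frame, each later frame is recovered by solving an ℓ1-regularized linear problem for the sparse update dx using ISTA (iterative soft thresholding). A, x0, the sparse change and the noise level are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 80, 200
A = rng.normal(size=(m, n))                  # stand-in for the sensitivity/Jacobian matrix
x0 = np.zeros(n); x0[::20] = 1.0             # "initial frame" solution (assumed known)
dx_true = np.zeros(n); dx_true[[5, 60, 150]] = [0.8, -0.5, 0.6]   # sparse change
y1 = A @ (x0 + dx_true) + 0.01 * rng.normal(size=m)               # data for the next frame

def ista_l1(A, b, lam, iters=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)     # soft threshold
    return x

dx = ista_l1(A, y1 - A @ x0, lam=0.05)
x1 = x0 + dx
print("support recovered:", np.nonzero(np.abs(dx) > 0.1)[0])
print("reconstruction error:", np.linalg.norm(x1 - (x0 + dx_true)))
```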
Abstract:
Unlike zero-sum stochastic games, a difficult problem in general-sum stochastic games is to obtain verifiable conditions for Nash equilibria. We show in this paper that by splitting an associated non-linear optimization problem into several sub-problems, characterization of Nash equilibria in general-sum discounted stochastic games is possible. Using the aforementioned sub-problems, we derive a set of necessary and sufficient verifiable conditions (termed KKT-SP conditions) for a strategy-pair to result in a Nash equilibrium. Also, we show that any algorithm which tracks the zero of the gradient of the Lagrangian of every sub-problem provides a Nash strategy-pair. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
The study extends the first-order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM- and inverse-reliability-based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab-based reliability modules with finite element models residing in the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
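A minimal sketch (not the paper's implementation) of the exactly solvable case mentioned above: for independent Gaussian variables and a linear performance function g(X) = a0 + aᵀX, the reliability index and failure probability follow in closed form; measurement-based updating would modify the means and standard deviations before this step. The numbers below are illustrative.

```python
import numpy as np
from scipy.stats import norm

a0 = 5.0
a = np.array([-1.0, -0.5])        # g(X) = a0 + a.X ; failure when g(X) < 0
mu = np.array([2.0, 1.0])         # prior (or measurement-updated) means
sigma = np.array([0.8, 0.4])      # standard deviations, independent variables assumed

g_mean = a0 + a @ mu
g_std = np.sqrt(np.sum((a * sigma) ** 2))
beta = g_mean / g_std             # Hasofer-Lind reliability index
pf = norm.cdf(-beta)              # failure probability
print(f"beta = {beta:.3f}, Pf = {pf:.3e}")
```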
Abstract:
In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can be solved in principle by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditski and Nemirovski (2011). It achieves an O(1/T²) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle point based algorithm outperforms the IP method significantly.
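A minimal sketch of the saddle-point view of the robust SVM described above, only to show the mechanics, not the authors' gradient scheme: the kernel is a combination K(eta) = sum_j eta_j K_j with eta confined to a box (an ℓ∞-type uncertainty set, assumed here for simplicity), and plain projected gradient ascent on the dual variables alpha alternates with descent on eta. The SVM is taken without a bias term so that the alpha-projection is a simple box clip; data, candidate kernels and step sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, C = 60, 1.0
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.array([-1.0] * (n // 2) + [1.0] * (n // 2))

def rbf(X, gamma):
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

Ks = np.stack([rbf(X, g) for g in (0.1, 0.5, 2.0)])            # candidate kernels
lo, hi = np.full(3, 0.1), np.full(3, 1.0)                      # box for the mixing weights

alpha, eta = np.zeros(n), np.full(3, 0.5)
Y = np.outer(y, y)
step_a, step_e = 1e-2, 1e-3
for t in range(2000):
    Keta = np.tensordot(eta, Ks, axes=1)                       # K(eta)
    Q = Y * Keta
    alpha = np.clip(alpha + step_a * (1.0 - Q @ alpha), 0.0, C)  # ascent in alpha
    grad_eta = np.array([-0.5 * alpha @ (Y * K) @ alpha for K in Ks])
    eta = np.clip(eta - step_e * grad_eta, lo, hi)               # descent in eta (adversary)
print("kernel weights eta:", np.round(eta, 3))
print("support vectors:", int((alpha > 1e-6).sum()))
```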
Abstract:
A current-error space-vector-based hysteresis current controller for a general n-level voltage-source inverter (VSI)-fed three-phase induction motor (IM) drive is proposed here, with control of the switching frequency variation for the full linear modulation range. The proposed current controller monitors the space-vector-based current error of an n-level VSI-fed IM to keep the current error within a parabolic boundary, using the information of the current triangular sector in which the tip of the reference vector lies. Information on the reference voltage vector is estimated using the measured current-error space vectors along the alpha- and beta-axes. Appropriate dimensioning and orientation of this parabolic boundary ensure a switching frequency spectrum similar to that of a constant-switching-frequency voltage-controlled space-vector pulsewidth modulation (SVPWM)-based IM drive. Like SVPWM for multilevel inverters, the proposed controller selects, for the hysteresis PWM control, the inverter switching vectors forming the triangular sector in which the tip of the reference vector stays. The sector in the n-level inverter space vector diagram in which the tip of the fundamental stator voltage stays is precisely detected using the sampled reference space vector estimated from the instantaneous current-error space vectors. The proposed controller retains all the advantages of a conventional hysteresis controller, such as fast current control, with smooth transition to the overmodulation region. The proposed controller is implemented on a five-level VSI-fed 7.5-kW IM drive.
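A minimal sketch of plain hysteresis-band current control of a single RL phase, which is the basic mechanism the proposed space-vector/parabolic-boundary controller refines; it is not the proposed n-level scheme. The inverter leg output toggles between +Vdc/2 and -Vdc/2 whenever the current error leaves a fixed band. Circuit values, band width and the sinusoidal reference are illustrative.

```python
import numpy as np

Vdc, R, L = 400.0, 1.0, 10e-3            # dc-link voltage, load resistance/inductance
band, dt, T = 0.5, 1e-6, 0.04            # hysteresis band [A], time step [s], sim length [s]
t = np.arange(0, T, dt)
i_ref = 10.0 * np.sin(2 * np.pi * 50 * t)   # 50 Hz reference current

i, v = 0.0, +Vdc / 2
switchings, err_max = 0, 0.0
for k in range(len(t)):
    err = i_ref[k] - i
    if err > band and v < 0:             # current too low -> switch leg high
        v, switchings = +Vdc / 2, switchings + 1
    elif err < -band and v > 0:          # current too high -> switch leg low
        v, switchings = -Vdc / 2, switchings + 1
    i += dt * (v - R * i) / L            # RL load dynamics (no back-EMF term)
    err_max = max(err_max, abs(err))
print("switchings over two 50 Hz periods:", switchings)
print("worst current error [A]:", round(err_max, 3))
```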
Abstract:
We provide new analytical results concerning the spread of information or influence under the linear threshold social network model introduced by Kempe et al., in the information dissemination context. The seeder starts by providing the message to a set of initial nodes and is interested in maximizing the number of nodes that will ultimately receive the message. A node's decision to forward the message depends on the set of nodes from which it has received the message. Under the linear threshold model, the decision to forward the information depends on comparing the total influence of the nodes from which a node has received the packet with the node's own threshold of influence. We derive analytical expressions for the expected number of nodes that ultimately receive the message, as a function of the initial set of nodes, for a generic network. We show that the problem can be recast in the framework of Markov chains. We then use the analytical expression to gain insights into information dissemination in some simple network topologies such as the star, ring, mesh, and acyclic graphs. We also derive the optimal initial set in the above networks and hint at general heuristics for picking a good initial set.
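A minimal sketch of the linear threshold model itself, estimating the expected spread by Monte Carlo rather than by the closed-form/Markov-chain analysis developed in the paper. The star graph, the edge weights and the seed sets below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
# influence weights: a star with node 0 at the centre; W[u, v] is the
# influence of u on v, with incoming weights at each node summing to <= 1
W = np.zeros((n, n))
W[0, 1:] = 0.6             # centre strongly influences each leaf
W[1:, 0] = 1.0 / (n - 1)   # each leaf weakly influences the centre

def spread(seeds, trials=2000):
    """Average number of eventually active nodes under the linear threshold model."""
    total = 0
    for _ in range(trials):
        theta = rng.uniform(size=n)              # random node thresholds
        active = np.zeros(n, dtype=bool)
        active[list(seeds)] = True
        changed = True
        while changed:
            influence = W.T @ active             # summed weight of active in-neighbours
            newly = (~active) & (influence >= theta)
            changed = newly.any()
            active |= newly
        total += active.sum()
    return total / trials

print("seed = centre node 0 : expected spread =", spread({0}))
print("seed = leaf node 3   : expected spread =", spread({3}))
```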
Abstract:
Analysis of high-resolution satellite images has been an important research topic for urban analysis. One of the important tasks in urban analysis is automatic road network extraction. Two approaches for road extraction, based on Level Set and Mean Shift methods, are proposed. From an original image it is difficult and computationally expensive to extract roads due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing the noise (buildings, parking lots, vegetation regions, and other open spaces); roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks comprise a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1-m resolution IKONOS data have been used for the experiment.
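A minimal sketch of only the median-filter cleanup step mentioned above: isolated nonlinear noise segments are suppressed while long, thin road-like structures survive. The Level Set / Mean Shift extraction itself is not reproduced, and the synthetic binary image stands in for a classified satellite scene.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(4)
img = np.zeros((100, 100), dtype=float)
img[48:52, :] = 1.0                           # a horizontal "road" 4 pixels wide
img[:, 30:33] = 1.0                           # a vertical "road" 3 pixels wide
noise = rng.random((100, 100)) < 0.02         # isolated salt noise (buildings, cars, ...)
img_noisy = np.clip(img + noise, 0, 1)

cleaned = median_filter(img_noisy, size=3)    # 3x3 median suppresses isolated pixels
print("pixels on before cleaning:", int(img_noisy.sum()))
print("pixels on after cleaning :", int(cleaned.sum()))
print("road pixels preserved    :", int((cleaned * img).sum()), "of", int(img.sum()))
```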
Abstract:
State estimation is one of the most important functions in an energy control centre. A computationally efficient state estimator which is free from numerical instability/ill-conditioning is essential for security assessment of the electric power grid. Whereas approaches to successfully overcome the numerical ill-conditioning issues have been proposed, an efficient algorithm for addressing the convergence issues in the presence of topological errors is yet to be developed. Trust region (TR) methods have been successfully employed to overcome the divergence problem to a certain extent. In this study, case studies are presented in which the conventional algorithms, including the existing TR methods, fail to converge. A linearised model-based TR method for successfully overcoming the convergence issues is proposed. On the computational front, unlike the existing TR methods for state estimation which employ quadratic models, the proposed linear model-based estimator is computationally efficient because the model minimiser can be computed in a single step. The model minimiser at each step is computed by minimising the linearised model subject to TR and measurement mismatch constraints. The infinity norm is used to define the geometry of the TR. Measurement mismatch constraints are employed to improve the accuracy. The proposed algorithm is compared with the quadratic model-based TR algorithm through case studies on the IEEE 30-bus system and on 205-bus and 514-bus equivalent systems of part of the Indian grid.
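A minimal sketch of a linear-model trust-region step with an infinity-norm region, not the authors' state estimator: minimising the linear model m(d) = gᵀd subject to ||d||∞ ≤ Δ has the closed-form solution d = -Δ·sign(g), so each model minimiser costs a single step, as the abstract notes. The measurement-mismatch constraints used in the paper are omitted, and the toy residual function below is synthetic.

```python
import numpy as np

def h(x):                                   # toy nonlinear "measurement" model
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 2, x[0] - x[1]])

z = h(np.array([0.8, -0.3]))                # measurements from a known true state

def cost(x):
    r = z - h(x)
    return 0.5 * r @ r

def grad(x, eps=1e-6):                      # central finite-difference gradient of the cost
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (cost(x + e) - cost(x - e)) / (2 * eps)
    return g

x, Delta = np.array([0.0, 0.0]), 0.5
for it in range(60):
    g = grad(x)
    d = -Delta * np.sign(g)                 # single-step minimiser of the linear model
    pred = -g @ d                           # predicted decrease from the linear model
    actual = cost(x) - cost(x + d)
    rho = actual / pred if pred > 0 else -1.0
    if rho > 0.1:                           # accept the step
        x = x + d
    Delta = Delta * 2 if rho > 0.75 else Delta * 0.5 if rho < 0.25 else Delta
    if Delta < 1e-8:
        break
print("estimated state:", np.round(x, 3), " cost:", cost(x))
```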
Abstract:
The objective in this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution based on the existing low-spatial-resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta, where MODIS-derived inundation extent has been estimated at a 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA), which is particularly effective for understanding the inundation variability, interpolating in time, and filling in missing values. Two innovative methods are developed (linear regression and matrix inversion), both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data. The downscaled fields show the expected space-time behaviors from MODIS. A 20-yr dataset of the inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to variables other than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water and Ocean Topography (SWOT) mission to develop and test the instrument simulator as well as to select the calibration/validation sites (with high space-time inundation variability). In addition, once SWOT observations are available, the downscaling methodology will be calibrated on them in order to downscale the GIEMS datasets and to extend the SWOT benefits back in time to 1993.
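A minimal sketch of the regression flavour of the downscaling idea described above, not the GIEMS/MODIS processing chain: the coarse fields are decomposed with a PCA in time, the fine-resolution pixels are regressed onto the leading temporal components over a calibration period, and fine fields are then reconstructed for all dates from the coarse components alone. All fields below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_coarse, n_fine = 120, 25, 400            # time steps, coarse and fine pixel counts

# synthetic coarse and fine fields sharing two temporal modes (seasonal + trend)
t = np.arange(T)
modes = np.vstack([np.sin(2 * np.pi * t / 12), t / T])                      # (2, T)
coarse = (rng.normal(size=(n_coarse, 2)) @ modes).T + 0.05 * rng.normal(size=(T, n_coarse))
fine = (rng.normal(size=(n_fine, 2)) @ modes).T + 0.05 * rng.normal(size=(T, n_fine))

# PCA (in time) of the coarse fields via SVD of the anomalies
anom = coarse - coarse.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U[:, :2] * s[:2]                        # leading temporal components, (T, 2)

# calibrate a linear regression from the components to each fine pixel
calib = slice(0, 60)                          # period with fine-resolution data available
A = np.hstack([pcs[calib], np.ones((60, 1))])
coef, *_ = np.linalg.lstsq(A, fine[calib], rcond=None)

# downscale the remaining dates from the coarse components only
A_all = np.hstack([pcs, np.ones((T, 1))])
fine_hat = A_all @ coef
err = np.abs(fine_hat[60:] - fine[60:]).mean()
print("mean absolute downscaling error outside calibration:", round(float(err), 4))
```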
Abstract:
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of ninety-five per cent only when we combined the results of both methods. This study is the first to use Mel-frequency cepstral coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that, in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.
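A minimal sketch of the feature-plus-distance idea, not the study's analysis pipeline or its RASTA-filtered LPC features: MFCCs are extracted from synthetic "calls" with librosa and compared using a simple Euclidean distance between their time-averaged coefficient vectors. Real mimicry analysis would use recorded calls and the distance metrics described above.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
call_a = np.sin(2 * np.pi * 2000 * t) * np.hanning(sr)   # 2 kHz synthetic "call"
call_b = np.sin(2 * np.pi * 2100 * t) * np.hanning(sr)   # spectrally similar call
call_c = np.sin(2 * np.pi * 6000 * t) * np.hanning(sr)   # spectrally dissimilar call

def mfcc_profile(y):
    m = librosa.feature.mfcc(y=y.astype(np.float32), sr=sr, n_mfcc=13)
    return m.mean(axis=1)                                 # time-averaged MFCC vector

pa, pb, pc = map(mfcc_profile, (call_a, call_b, call_c))
print("distance(a, b):", round(float(np.linalg.norm(pa - pb)), 2))
print("distance(a, c):", round(float(np.linalg.norm(pa - pc)), 2))
```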
Abstract:
Elastic net regularizers have shown much promise in designing sparse classifiers for linear classification. In this work, we propose an alternating optimization approach to solve the dual problems of elastic net regularized linear classification Support Vector Machines (SVMs) and logistic regression (LR). One of the sub-problems turns out to be a simple projection. The other sub-problem can be solved using dual coordinate descent methods developed for non-sparse L2-regularized linear SVMs and LR, without altering their iteration complexity and convergence properties. Experiments on very large datasets indicate that the proposed dual coordinate descent-projection (DCD-P) methods are fast and achieve comparable generalization performance after the first pass through the data, with extremely sparse models.
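A minimal reference sketch of the problem class discussed above, namely elastic-net-regularized logistic regression, solved here by a plain proximal gradient (ISTA) loop; it is not the dual coordinate descent-projection (DCD-P) method proposed in the paper. Data and regularization constants are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 500, 50
w_true = np.zeros(d); w_true[:5] = [2, -1.5, 1, -1, 0.5]         # sparse ground truth
X = rng.normal(size=(n, d))
y = np.where(X @ w_true + 0.1 * rng.normal(size=n) > 0, 1.0, -1.0)

lam1, lam2, step = 0.01, 0.01, 0.1                                # l1 / l2 weights, step size
w = np.zeros(d)
for _ in range(500):
    margins = y * (X @ w)
    grad = -(X.T @ (y / (1 + np.exp(margins)))) / n + lam2 * w    # logistic loss + l2 part
    z = w - step * grad
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)     # prox of the l1 part
print("nonzero weights:", int((np.abs(w) > 1e-6).sum()), "of", d)
print("training accuracy:", float((np.sign(X @ w) == y).mean()))
```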
Abstract:
This paper presents a simple technique for reducing the computational effort in solving any geotechnical stability problem using upper bound finite element limit analysis and linear optimization. In the proposed method, the problem domain is discretized into a number of different regions, in each of which a particular order (number of sides) of the polygon is chosen to linearize the Mohr-Coulomb yield criterion. A higher-order polygon needs to be selected only in those regions where the plastic strain rates become higher. The computational effort required to solve the problem with this implementation is reduced considerably. Using the proposed method, the bearing capacity has been computed for smooth and rough strip footings, and the results are found to be quite satisfactory.
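A minimal sketch of the yield-surface linearization idea on which the method rests, not the finite-element limit-analysis code itself: a circular yield surface of radius R is replaced by p tangent half-planes (a circumscribed regular polygon), and the over-estimation at the polygon vertices, R/cos(π/p) - R, shrinks as the polygon order p grows; the Mohr-Coulomb criterion is linearized with the same kind of construction in stress space.

```python
import numpy as np

R = 1.0
for p in (6, 12, 24, 48):
    theta = 2 * np.pi * np.arange(p) / p
    A = np.column_stack([np.cos(theta), np.sin(theta)])   # p tangent half-planes A s <= R
    # sample the polygon boundary and measure its worst overshoot beyond the circle
    phi = np.linspace(0, 2 * np.pi, 10000, endpoint=False)
    dirs = np.column_stack([np.cos(phi), np.sin(phi)])
    scale = R / (dirs @ A.T).max(axis=1)                   # polygon boundary along each direction
    overshoot = scale.max() - R
    print(f"p = {p:2d}:  max overshoot of the polygon = {overshoot:.4f} "
          f"(theory {R / np.cos(np.pi / p) - R:.4f})")
```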
Abstract:
A space-vector-based hysteresis current controller for any general n-level three-phase inverter-fed induction motor drive is proposed in this study. It offers fast dynamics, inherent overload protection, and low harmonic distortion for the phase voltages and currents. The controller performs online current-error boundary calculations, and a nearly constant switching frequency is obtained throughout the linear modulation range. The proposed scheme uses only the adjacent voltage vectors of the present sector, similar to space vector pulse-width modulation, and exhibits fast dynamic behaviour under different transient conditions. The steps involved in the boundary calculation include the estimation of phase voltages from the current ripple and the computation of switching time and voltage error vectors. Experimental results are given to show the performance of the drive at various speeds, the effect of sudden load changes, acceleration, and speed reversal, and to validate the proposed advantages.
Abstract:
A numerical formulation has been proposed for solving an axisymmetric stability problem in geomechanics with upper bound limit analysis, finite elements, and linear optimization. The Drucker-Prager yield criterion is linearized by simulating a sphere with a circumscribed truncated icosahedron. The analysis considers only the velocities and plastic multiplier rates, not the stresses, as the basic unknowns. The formulation is simple to implement, and it has been employed for finding the collapse loads of a circular footing placed over the surface of a cohesive-frictional material. The formulation can be used to solve any general axisymmetric geomechanics stability problem.