820 results for Lanczos, Linear systems, Generalized cross validation


Relevance: 100.00%

Abstract:

Gaussian Processes (GPs) are promising Bayesian methods for classification and regression problems. They have also been used for semi-supervised learning tasks. In this paper, we propose a new algorithm for solving the semi-supervised binary classification problem using sparse GP regression (GPR) models. It is closely related to semi-supervised learning based on support vector regression (SVR) and maximum margin clustering. The proposed algorithm is simple and easy to implement. Unlike the SVR-based algorithm, it directly yields a sparse solution. Also, the hyperparameters are estimated easily, without resorting to expensive cross-validation techniques. Use of the sparse GPR model helps make the proposed algorithm scalable. Preliminary results on synthetic and real-world data sets demonstrate the efficacy of the new algorithm.
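The basic device of treating binary classification as GP regression can be sketched as follows. This is an illustrative toy example, not the authors' sparse semi-supervised algorithm: a plain (non-sparse) GP regressor is fit to labels coded as ±1 and its predictive mean is thresholded at zero; the kernel, length scale, noise level, and data are arbitrary choices for the sketch.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    # Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gpr_predict(X, y, X_new, noise=0.1):
    # Standard GP regression predictive mean: K(X*, X) (K + sigma^2 I)^{-1} y
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return rbf_kernel(X_new, X) @ alpha

# Toy binary problem: labels in {-1, +1}, classify by the sign of the GP mean
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])            # ground-truth rule: sign of the first feature
mean = gpr_predict(X, y, X)
labels = np.sign(mean)
accuracy = (labels == y).mean()
```

The sparse variant in the paper would restrict the kernel expansion to a small set of basis points; the thresholding step is the same.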

Relevance: 100.00%

Abstract:

Indian logic has a long history. It broadly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely, Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Advances in Computer Science, in particular in Artificial Intelligence, have drawn researchers in these areas to the basic problems of language, logic and cognition over the past three decades. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and transferring that knowledge to a computing system. The second important issue in such systems is the validation of the knowledge base of the system, i.e., ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help computer scientists understand the deeper implications of the terms and concepts they are currently using and attempting to develop.

Relevance: 100.00%

Abstract:

Presented here, in a vector formulation, is an O(mn^2) direct concise algorithm that prunes/identifies the linearly dependent (ld) rows of an arbitrary m × n matrix A and computes its reflexive type minimum norm inverse A(mr)-, which will be the true inverse A-1 if A is nonsingular and the Moore-Penrose inverse A+ if A is full row-rank. The algorithm, without any additional computation, produces the projection operator P = (I - A(mr)-A), which provides a means to compute any of the solutions of the consistent linear equation Ax = b, since the general solution may be expressed as x = A(mr)-b + Pz, where z is an arbitrary vector. The rank r of A is also produced in the process. Some of the salient features of this algorithm are that (i) the algorithm is concise, (ii) the minimum norm least squares solution for consistent/inconsistent equations is readily computable when A is full row-rank (else, a minimum norm solution for consistent equations is obtainable), (iii) the algorithm identifies ld rows, if any, which reduces the computation concerned and improves the accuracy of the result, (iv) error bounds for the inverse as well as the solution x of Ax = b are readily computable, (v) error-free computation of the inverse, solution vector, rank, and projection operator, and its inherent parallel implementation, are straightforward, (vi) it is suitable for vector (pipeline) machines, and (vii) the inverse produced by the algorithm can be used to solve under-/overdetermined linear systems.
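The identities the abstract states can be checked numerically. The sketch below is not the paper's O(mn^2) algorithm; it uses NumPy's Moore-Penrose inverse (which coincides with the minimum norm inverse when A has full row rank) to verify that x = A+b + Pz solves Ax = b for any z, with P = I - A+A, and that A+b is the minimum-norm solution. The matrix and vectors are made up for illustration.

```python
import numpy as np

# A is 2 x 3 with full row rank, so Ax = b is consistent for every b
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 2.0])

A_pinv = np.linalg.pinv(A)        # Moore-Penrose inverse (3 x 2)
P = np.eye(3) - A_pinv @ A        # projector onto the null space of A

x_min = A_pinv @ b                # minimum-norm solution
z = np.array([1.0, -2.0, 0.5])    # arbitrary vector
x_gen = x_min + P @ z             # another solution of Ax = b

residual_min = np.linalg.norm(A @ x_min - b)
residual_gen = np.linalg.norm(A @ x_gen - b)
rank = np.linalg.matrix_rank(A)   # the algorithm would also report r = 2
```

Both residuals vanish, and the norm of x_min never exceeds that of any other solution x_gen, exactly as the general solution formula requires.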

Relevance: 100.00%

Abstract:

In this paper, the reduced level of rock in Bangalore, India is derived from data for 652 boreholes in an area covering 220 sq. km. Ordinary kriging and Support Vector Machine (SVM) models have been developed to predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth. In ordinary kriging, knowledge of the semivariogram of the reduced level of rock from the 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface of Bangalore where field measurements are not available. A cross-validation (Q1 and Q2) analysis is also carried out for the developed ordinary kriging model. The SVM, a type of learning machine based on statistical learning theory that performs regression by introducing an ε-insensitive loss function, has been used to predict the reduced level of rock from a large set of data. A comparison between the ordinary kriging and SVM models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
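The ε-insensitive loss at the heart of SVR can be written down directly: errors inside a tube of half-width ε cost nothing, larger errors are penalized linearly beyond the tube. A minimal NumPy sketch with made-up numbers:

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    # SVR loss: zero inside the eps-tube, linear outside it
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

y_true = np.array([10.0, 12.0, 11.5])     # e.g. observed rock levels (m)
y_pred = np.array([10.05, 12.5, 11.5])    # model predictions
loss = eps_insensitive_loss(y_true, y_pred, eps=0.1)
# errors 0.05, 0.5, 0.0 -> losses 0.0, 0.4, 0.0
```

This loss is what makes SVR solutions sparse: only points outside the tube become support vectors.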

Relevance: 100.00%

Abstract:

A current injection pattern in Electrical Impedance Tomography (EIT) has its own current distribution profile within the domain under test. Hence, different current patterns have different sensitivity, spatial resolution and distinguishability. Image reconstruction studies with practical phantoms are essential to assess the performance of EIT systems for validation, calibration and comparison purposes. Impedance imaging of real tissue phantoms with different current injection methods is also essential for better assessment of biomedical EIT systems. Chicken tissue paste phantoms and chicken tissue block phantoms are developed and resistivity image reconstruction is studied with different current injection methods. A 16-electrode array is placed inside the phantom tank and the tank is filled with chicken muscle tissue paste or chicken tissue blocks as the background medium. Chicken fat tissue, chicken bone, an air hole and nylon cylinders are used as inhomogeneities to obtain different phantom configurations. A low-magnitude, low-frequency constant sinusoidal current is injected at the phantom boundary with the opposite and neighboring current patterns and the boundary potentials are measured. Resistivity images are reconstructed from the boundary data using EIDORS and the reconstructed images are analyzed with contrast parameters calculated from their elemental resistivity profiles. Results show that the resistivity profiles of all the phantom domains are successfully reconstructed, with a proper background resistivity and high inhomogeneity resistivity, for both current injection methods. The reconstructed images show that, for all the chicken tissue phantoms, the inhomogeneities are suitably reconstructed with both current injection protocols, though the chicken tissue block phantom and the opposite method are found more suitable.
It is observed that the boundary potentials of the chicken tissue block phantoms are higher than those of the chicken tissue paste phantoms. The SNR of the chicken tissue block phantoms is found comparatively higher, and hence the chicken tissue block phantom is found more suitable for its lower noise. The background noise is found to be less with the opposite method for all the phantom configurations, which yields better resistivity images with high PCR and COC and proper IRMean and IRMax. The neighboring method showed a higher noise level for both the chicken tissue paste phantoms and the chicken tissue block phantoms with all the inhomogeneities. The opposite method is found more suitable for both chicken tissue phantom types, and the chicken tissue block phantoms are found more suitable than the chicken tissue paste phantoms. (C) 2012 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

The study extends the first order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM and inverse reliability based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab based reliability modules with finite element models residing on the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
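The exactly solvable case mentioned above (linear performance function with Gaussian uncertainties) can be illustrated directly. For g(X) = a·X + b with X ~ N(mu, Sigma), g itself is Gaussian, so the reliability index and failure probability are closed-form; the numbers below are made up for the sketch, only the formulas are standard.

```python
import math
import numpy as np

# Linear performance function g(X) = a.X + b, X ~ N(mu, Sigma).
# Then g ~ N(a.mu + b, a' Sigma a), so
#   beta = (a.mu + b) / sqrt(a' Sigma a)  (Hasofer-Lind reliability index)
#   Pf = P[g < 0] = Phi(-beta)            (exact, no FORM approximation)
a = np.array([2.0, -1.0])
b = 3.0
mu = np.array([1.0, 0.5])
Sigma = np.array([[0.25, 0.05],
                  [0.05, 0.16]])

mean_g = a @ mu + b
std_g = math.sqrt(a @ Sigma @ a)
beta = mean_g / std_g
pf = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta) via erfc
```

For nonlinear g or non-Gaussian X this exactness is lost, which is where the FORM and inverse FORM machinery of the study comes in.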

Relevance: 100.00%

Abstract:

Recently, the authors published a method to indirectly measure the series capacitance (C-s) of a single, isolated, uniformly wound transformer winding from its measured frequency response. The next step was to implement it on an actual three-phase transformer. This task is not as straightforward as it might appear at first glance, since the measured frequency response of a three-phase transformer is influenced by the nontested windings and their terminal connections, the core, the tank, etc. To extract the correct value of C-s from this composite frequency response, the formulation has to be reworked to first identify all significant influences and then include their effects. Initially, the modified method and experimental results on a three-phase transformer (4 MVA, 33 kV/433 V) are presented, along with results on the winding considered in isolation (for cross-validation). Later, the method is directly implemented on another three-phase unit (3.5 MVA, 13.8 kV/765 V) to show repeatability.

Relevance: 100.00%

Abstract:

The problem of identification of multi-component and (or) spatially varying earthquake support motions based on measured responses in instrumented structures is considered. The governing equations of motion are cast in state space form and a time domain solution to the input identification problem is developed based on the Kalman and particle filtering methods. The method allows for noise in measured responses, imperfections in the mathematical model of the structure, and possible nonlinear behavior of the structure. The unknown support motions are treated as hypothetical additional system states and a prior model for these motions is taken to be given in terms of white noise processes. For linear systems, the solution is developed within the Kalman filtering framework, while, for nonlinear systems, Monte Carlo simulation based particle filtering tools are employed. In the latter case, the question of controlling sampling variance based on the idea of Rao-Blackwellization is also explored. Illustrative examples include identification of multi-component and spatially varying support motions in linear/nonlinear structures.
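A minimal sketch of the augmented-state idea for the linear case, using a scalar toy model x_{k+1} = a x_k + u_k invented for illustration (not the study's structural model): the unknown input u is appended to the state vector with a random-walk (integrated white noise) prior and recovered by a standard Kalman filter from noisy measurements of x.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 200
a = 0.9
u_true = np.sin(0.05 * np.arange(n_steps))    # hidden "support motion"

# Simulate the true response and noisy measurements of it
x, ys = 0.0, []
for k in range(n_steps):
    x = a * x + u_true[k]
    ys.append(x + 0.05 * rng.normal())

# Augmented model: z = [x, u]; u is modeled as a random walk
F = np.array([[a, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-3])                     # process noise (drives u)
R = np.array([[0.05 ** 2]])                   # measurement noise

z, P, u_est = np.zeros(2), np.eye(2), []
for y in ys:
    z = F @ z                                 # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                       # update
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (y - H @ z)).ravel()
    P = (np.eye(2) - K @ H) @ P
    u_est.append(z[1])

# After a burn-in, the filtered input tracks the hidden motion
rmse = np.sqrt(np.mean((np.array(u_est)[50:] - u_true[50:]) ** 2))
```

The nonlinear/particle-filter version in the paper replaces the Kalman recursions with sequential Monte Carlo over the same augmented state.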

Relevance: 100.00%

Abstract:

This paper proposes a novel approach to solve the ordinal regression problem using Gaussian processes. The proposed approach, probabilistic least squares ordinal regression (PLSOR), obtains the probability distribution over ordinal labels using a particular likelihood function. It performs model selection (hyperparameter optimization) using the leave-one-out cross-validation (LOO-CV) technique. PLSOR has the conceptual simplicity and ease of implementation of the least squares approach. Unlike the existing Gaussian process ordinal regression (GPOR) approaches, PLSOR does not use any approximation techniques for inference. We compare the proposed approach with the state-of-the-art GPOR approaches on some synthetic and benchmark data sets. Experimental results show the competitiveness of the proposed approach.
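One reason LOO-CV is attractive for GP-style models is that the leave-one-out residuals have a closed form, so no model is ever refit. The abstract does not spell out PLSOR's exact procedure, so the sketch below shows the generic identity for GP/regularized least squares regression: with C = K + sigma^2 I, the LOO prediction at point i is y_i - [C^{-1}y]_i / [C^{-1}]_ii.

```python
import numpy as np

def loo_predictions(K, y, noise=0.1):
    # Closed-form leave-one-out predictions: no refitting required
    C = K + noise * np.eye(len(y))
    Cinv = np.linalg.inv(C)
    alpha = Cinv @ y
    return y - alpha / np.diag(Cinv)

def rbf(X1, X2, ell=1.0):
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, size=30))
y = np.sin(X) + 0.05 * rng.normal(size=30)

K = rbf(X, X)
mu_loo = loo_predictions(K, y)

# Brute-force check: actually hold out point i = 7 and predict it
i = 7
mask = np.arange(30) != i
C = K + 0.1 * np.eye(30)
mu_i = K[i, mask] @ np.linalg.solve(C[np.ix_(mask, mask)], y[mask])
```

The closed-form and brute-force values agree to machine precision, which is what makes LOO-CV-based hyperparameter optimization cheap.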

Relevance: 100.00%

Abstract:

Practical phantoms are essential to assess electrical impedance tomography (EIT) systems for validation, calibration and comparison purposes. Metal surface electrodes are generally used in practical phantoms, which reduces the SNR of the boundary data due to their design and development errors. Novel flexible and biocompatible gold electrode arrays of high geometric precision are proposed to improve the boundary data quality in EIT. The flexible gold electrode arrays are developed on flexible FR4 sheets using thin film technology, and practical gold electrode phantoms are developed with different configurations. Injecting a constant current at the phantom boundary, the surface potentials are measured by a LabVIEW based data acquisition system and the resistivity images are reconstructed in EIDORS. The boundary data profile and the resistivity images obtained from the gold electrode phantoms are compared with those of identical phantoms developed with stainless steel electrodes. Surface profilometry, microscopy and impedance spectroscopy show that the gold electrode arrays are smooth, geometrically precise and less resistive. Results show that the boundary data accuracy and image quality are improved with the gold electrode arrays; in particular, the diametric resistivity plot (DRP), contrast to noise ratio (CNR), percentage of contrast recovery (PCR) and coefficient of contrast (COC) of the reconstructed images are improved in the gold electrode phantoms. (C) 2013 Elsevier Ltd. All rights reserved.
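Two of the image-quality indices named above can be illustrated on a toy "reconstructed image". The definitions below are common generic forms, not necessarily the exact ones used in this paper: CNR compares the inclusion/background intensity difference to the background noise, and PCR expresses the recovered contrast as a percentage of the true contrast. All numbers are invented.

```python
import numpy as np

def cnr(image, roi_mask):
    # Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background)
    roi, bg = image[roi_mask], image[~roi_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

def pcr(recovered_contrast, true_contrast):
    # Percentage of contrast recovery
    return 100.0 * recovered_contrast / true_contrast

# Toy 16x16 "resistivity image": ~50 ohm-m background, ~80 ohm-m inclusion
rng = np.random.default_rng(5)
image = 50.0 + rng.normal(0.0, 1.0, (16, 16))
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True
image[mask] += 30.0                       # true contrast of 30 ohm-m

q_cnr = cnr(image, mask)
q_pcr = pcr(image[mask].mean() - image[~mask].mean(), 30.0)
```

Higher CNR and PCR closer to 100% both indicate a cleaner reconstruction, which is the sense in which the gold electrode phantoms improve on the stainless steel ones.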

Relevance: 100.00%

Abstract:

Regionalization of extreme rainfall is useful for various applications in hydro-meteorology. There is a dearth of regionalization studies on extreme rainfall in India. In this context, a set of 25 regions that are homogeneous in 1-, 2-, 3-, 4- and 5-day extreme rainfall is delineated based on a seasonality measure of extreme rainfall and location indicators (latitude, longitude and altitude) using global fuzzy c-means (GFCM) cluster analysis. The regions are validated for homogeneity in the L-moment framework. One application of the regions is in arriving at quantile estimates of extreme rainfall at sparsely gauged/ungauged locations using options such as regional frequency analysis (RFA). The RFA involves use of rainfall-related information from gauged sites in a region as the basis to estimate quantiles of extreme rainfall for target locations that resemble the region in terms of rainfall characteristics. A procedure for RFA based on GFCM-delineated regions is presented and its effectiveness is evaluated by leave-one-out cross-validation. Error in quantile estimates for ungauged sites is compared with that resulting from the use of the region-of-influence (ROI) approach, which forms site-specific regions exclusively for quantile estimation. Results indicate that errors in quantile estimates based on the GFCM regions and ROI are fairly close, and neither of them is consistent in yielding the least error over all the sites. The cluster analysis approach was effective in reducing the number of regions to be delineated for RFA.
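The clustering step can be illustrated with a generic fuzzy c-means loop (fuzzifier m = 2). This is a textbook sketch on made-up 2-D data, not the specific GFCM algorithm or the seasonality/location features used in the study: memberships and weighted centroids are updated alternately until the partition stabilizes.

```python
import numpy as np

def fuzzy_cmeans(X, c, n_iter=100, seed=0):
    # Fuzzy c-means with fuzzifier m = 2
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # memberships (n x c)
    for _ in range(n_iter):
        W = U ** 2                                       # fuzzified weights
        centers = (W.T @ X) / W.sum(0)[:, None]          # weighted centroids
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        U = (1.0 / d2) / (1.0 / d2).sum(1, keepdims=True)  # m = 2 update
    return U, centers

# Two well-separated blobs -> memberships become near-crisp
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.2, (25, 2)),
               rng.normal(5.0, 0.2, (25, 2))])
U, centers = fuzzy_cmeans(X, c=2)
hard = U.argmax(1)          # hardened cluster labels
```

In the regionalization setting, the hardened clusters play the role of candidate homogeneous regions, which are then screened with L-moment homogeneity tests.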

Relevance: 100.00%

Abstract:

Precise information on streamflows is of major importance for planning and monitoring of water resources schemes related to hydro power, water supply, irrigation and flood control, and for maintaining ecosystems. Engineers encounter challenges when streamflow data are either unavailable or inadequate at target locations. To address these challenges, there have been efforts to develop methodologies that facilitate prediction of streamflow at ungauged sites. Conventionally, time-intensive and data-exhaustive rainfall-runoff models are used to arrive at streamflow at ungauged sites. Most recent studies show improved methods based on regionalization using Flow Duration Curves (FDCs). An FDC is a graphical representation of streamflow variability, a plot of streamflow values against their corresponding exceedance probabilities, which are determined using a plotting position formula. It provides information on the percentage of time any specified magnitude of streamflow is equaled or exceeded. The present study assesses the effectiveness of two methods to predict streamflow at ungauged sites by application to catchments in the Mahanadi river basin, India. The methods considered are (i) the regional flow duration curve method, and (ii) the area ratio method. The first method involves (a) the development of regression relationships between percentile flows and attributes of catchments in the study area, (b) use of the relationships to construct a regional FDC for the ungauged site, and (c) use of a spatial interpolation technique to decode information in the FDC to construct a streamflow time series for the ungauged site. The area ratio method is conventionally used to transfer streamflow-related information from gauged sites to ungauged sites. Attributes considered for the analysis include variables representing hydrology, climatology, topography, land-use/land-cover and soil properties corresponding to catchments in the study area.
Effectiveness of the presented methods is assessed using jackknife cross-validation. Conclusions based on the study are presented and discussed. (C) 2015 The Authors. Published by Elsevier B.V.
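Constructing an FDC from a flow record is mechanical: sort the flows in descending order and pair them with plotting-position exceedance probabilities. The abstract does not name the formula used, so the sketch below adopts the common Weibull plotting position p_i = i / (n + 1); the flow values are invented.

```python
import numpy as np

def flow_duration_curve(flows):
    # Sort flows in descending order and attach Weibull plotting positions
    q = np.sort(np.asarray(flows, dtype=float))[::-1]
    n = len(q)
    p_exceed = np.arange(1, n + 1) / (n + 1)   # exceedance probabilities
    return p_exceed, q

flows = [12.0, 45.0, 3.0, 30.0, 7.0, 22.0, 60.0, 15.0, 9.0]  # e.g. m^3/s
p, q = flow_duration_curve(flows)
# q[0] is the largest flow (rarely exceeded), q[-1] the smallest
```

Reading the curve at a given p answers "what flow is equaled or exceeded p of the time", which is exactly the information the regional FDC method regresses onto catchment attributes.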

Relevance: 100.00%

Abstract:

The response of structural dynamical systems excited by multiple random excitations is considered. Two new procedures for evaluating global response sensitivity measures with respect to the excitation components are proposed. The first procedure is valid for stationary response of linear systems under stationary random excitations and is based on the notion of Hellinger's metric of distance between two power spectral density functions. The second procedure is more generally valid and is based on the l2 norm based distance measure between two probability density functions. Specific cases which admit exact solutions are presented, and solution procedures based on Monte Carlo simulations for more general class of problems are outlined. Illustrations include studies on a parametrically excited linear system and a nonlinear random vibration problem involving moving oscillator-beam system that considers excitations attributable to random support motions and guide-way unevenness. (C) 2015 American Society of Civil Engineers.
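The first procedure's distance notion can be sketched numerically. The paper's exact construction is not given in the abstract, so this is a generic discretized Hellinger distance between two normalized spectral densities on a common frequency grid; the two PSDs below are illustrative single-mode shapes with different damping, not the paper's systems.

```python
import numpy as np

def hellinger(p, q):
    # Discretized Hellinger distance between normalized densities:
    # H = sqrt(1 - sum(sqrt(p_i * q_i))), 0 for identical, bounded by 1
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(np.clip(1.0 - np.sqrt(p * q).sum(), 0.0, None))

w = np.linspace(0.1, 10.0, 500)
# Illustrative SDOF-like magnitude-squared FRFs (natural freq 3, two dampings)
psd1 = 1.0 / ((1 - (w / 3) ** 2) ** 2 + (2 * 0.05 * w / 3) ** 2)
psd2 = 1.0 / ((1 - (w / 3) ** 2) ** 2 + (2 * 0.20 * w / 3) ** 2)

d_self = hellinger(psd1, psd1)   # identical densities -> distance 0
d_diff = hellinger(psd1, psd2)   # different damping -> positive distance
```

A sensitivity measure of this kind compares the response PSD with one excitation component perturbed against the nominal PSD; a larger distance flags a more influential excitation.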

Relevance: 100.00%

Abstract:

Acoustic feature based speech (syllable) rate estimation and syllable nuclei detection are important problems in automatic speech recognition (ASR), computer assisted language learning (CALL) and fluency analysis. A typical solution for both problems consists of two stages. The first stage involves computing a short-time feature contour such that most of the peaks of the contour correspond to syllabic nuclei. In the second stage, the peaks corresponding to the syllable nuclei are detected. In this work, instead of peak detection, we perform mode-shape classification, which is formulated as a supervised binary classification problem, with mode-shapes representing syllabic nuclei as one class and the remaining ones as the other. We use the temporal correlation and selected sub-band correlation (TCSSBC) feature contour, and the mode-shapes in the TCSSBC feature contour are converted into a set of feature vectors using an interpolation technique. A support vector machine classifier is used for the classification. Experiments are performed separately using the Switchboard, TIMIT and CTIMIT corpora in a five-fold cross-validation setup. The average correlation coefficients for syllable rate estimation turn out to be 0.6761, 0.6928 and 0.3604 for the three corpora, respectively, which outperform those obtained by the best of the existing peak detection techniques. Similarly, the average F-scores (syllable level) for syllable nuclei detection are 0.8917, 0.8200 and 0.7637 for the three corpora, respectively. (C) 2016 Elsevier B.V. All rights reserved.
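The interpolation step can be sketched generically: mode-shapes are variable-length segments of the feature contour, so each is resampled to a fixed-length vector that a standard classifier such as an SVM can consume. The segment values and target length below are invented; the paper's exact interpolation scheme is not specified in the abstract.

```python
import numpy as np

def mode_shape_to_vector(segment, n_points=20):
    # Resample a variable-length contour segment to a fixed-length vector
    # by linear interpolation on a normalized [0, 1] axis
    segment = np.asarray(segment, dtype=float)
    src = np.linspace(0.0, 1.0, len(segment))
    dst = np.linspace(0.0, 1.0, n_points)
    return np.interp(dst, src, segment)

short_peak = [0.1, 0.8, 0.2]                       # 3-sample mode-shape
long_peak = [0.1, 0.3, 0.7, 0.9, 0.6, 0.2, 0.1]    # 7-sample mode-shape
v1 = mode_shape_to_vector(short_peak)
v2 = mode_shape_to_vector(long_peak)
# Both are now length-20 vectors, directly comparable as classifier inputs
```

Fixing the dimensionality this way is what lets one SVM handle nuclei of very different durations.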

Relevance: 100.00%

Abstract:

Statistical Process Control (SPC) techniques are well established across a wide range of industries. In particular, the plotting of key steady-state variables with their statistical limits against time (Shewhart charting) is a common approach for monitoring the normality of production. This paper is concerned with extending Shewhart charting techniques to the quality monitoring of variables driven by uncertain dynamic processes, which has particular application in the process industries, where it is desirable to monitor process variables on-line as well as the final product. The robust approach to dynamic SPC is based on previous work on guaranteed cost filtering for linear systems and is intended to provide a basis both for a wider application of SPC monitoring and to motivate unstructured fault detection.
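The classical static baseline that the paper extends is simple to state: estimate the in-control mean and standard deviation, then flag any observation outside mean ± 3 sigma. The data below are simulated for illustration; the paper's contribution is precisely to replace this static limit computation with a filtered one for dynamic processes.

```python
import numpy as np

def shewhart_limits(x, n_sigma=3.0):
    # Shewhart chart for individual observations: limits at mean +/- 3 sigma
    mu = np.mean(x)
    sigma = np.std(x, ddof=1)
    return mu - n_sigma * sigma, mu + n_sigma * sigma

rng = np.random.default_rng(4)
in_control = rng.normal(10.0, 0.5, size=200)   # phase-I (in-control) data
lcl, ucl = shewhart_limits(in_control)

new_points = np.array([10.2, 9.8, 13.5])       # last point is a clear shift
alarms = (new_points < lcl) | (new_points > ucl)
```

For a variable driven by process dynamics, the residual of a filter (here, the guaranteed cost filter of the paper) would be charted instead of the raw observation, since the raw values are autocorrelated and the static limits no longer apply.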