988 results for Mean vector
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to separate fully inundated areas from the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images. Most classification algorithms assign each pixel to the single class label with the greatest likelihood. However, these hard classification methods often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve the mixed-pixel problem, advanced image processing techniques are adopted; linear spectral unmixing is one of the most popular soft classification techniques used for mixed-pixel analysis. The performance of linear spectral unmixing depends on two important issues: the method of selecting endmembers and the method of modelling the endmembers for unmixing. This paper presents an improvement in spectral unmixing for reliable flood mapping based on adaptive selection of an endmember subset for each pixel. Using a fixed set of endmembers to unmix all pixels in an entire image can over-estimate the endmember spectra residing in a mixed pixel and hence reduce the performance of spectral unmixing. In contrast, applying an estimated adaptive subset of endmembers for each pixel can decrease the residual error in the unmixing results and provide reliable output. This paper also shows that the proposed method improves the accuracy of conventional linear unmixing methods and is easy to apply. Three different linear spectral unmixing methods were applied to test the improvement in unmixing results.
Experiments were conducted on three sets of Landsat-5 TM images of three different flood events in Australia to examine the method under different flooding conditions, and satisfactory flood mapping outcomes were achieved.
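The linear mixing model underlying these methods treats each pixel spectrum as a weighted sum of endmember spectra, the weights being the fractional abundances. A minimal sketch using non-negative least squares (the endmember spectra, band count and noise level below are invented for illustration, not values from the paper):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra over 4 bands (columns: water,
# vegetation, soil); values are illustrative only.
E = np.array([
    [0.05, 0.30, 0.70],
    [0.04, 0.40, 0.65],
    [0.03, 0.50, 0.60],
    [0.02, 0.60, 0.55],
])

# A mixed pixel: 60% water, 40% vegetation, plus slight sensor noise.
rng = np.random.default_rng(0)
pixel = E @ np.array([0.6, 0.4, 0.0]) + 0.001 * rng.standard_normal(4)

# Non-negative least squares; the sum-to-one constraint is imposed
# softly by appending a row of ones to E and a 1 to the pixel.
E_aug = np.vstack([E, np.ones(3)])
p_aug = np.append(pixel, 1.0)
fractions, residual = nnls(E_aug, p_aug)
```

With well-separated endmember spectra, `fractions` recovers the true abundances closely; the mixed-pixel difficulty the abstract describes arises precisely when the endmember spectra are similar or when irrelevant endmembers are included.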
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical instance of the mixed-pixel problem. A number of automatic information-extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, with most labelling each pixel as a single class. However, they often fail to generate reliable flood inundation mapping because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, the two most important issues in spectral unmixing are investigated: the method of selecting endmembers and the method of modelling the primary classes for unmixing. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed through error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. The conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results.
Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. As it is not accurate to assume that every pixel in an image contains all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, an adaptive subset of endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
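The pixel-dependent idea can be sketched in a stylized way (this illustrates the general principle only, not the thesis's actual endmember index matrix algorithm): unmix once with the full endmember set, discard endmembers whose estimated fraction falls below a small threshold, and re-unmix with the reduced subset.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(E, pixel):
    """Non-negative unmixing via NNLS, with sum-to-one imposed
    softly through an appended row of ones."""
    E_aug = np.vstack([E, np.ones(E.shape[1])])
    p_aug = np.append(pixel, 1.0)
    return nnls(E_aug, p_aug)

def adaptive_unmix(E, pixel, threshold=0.05):
    """Stylized pixel-dependent endmember selection: unmix with the
    full endmember set, keep endmembers whose fraction exceeds the
    threshold, then re-unmix with the reduced subset."""
    f_full, _ = unmix(E, pixel)
    keep = np.flatnonzero(f_full > threshold)
    f_sub, _ = unmix(E[:, keep], pixel)
    fractions = np.zeros(E.shape[1])
    fractions[keep] = f_sub
    return fractions, keep

# Hypothetical endmember spectra (columns: water, vegetation, soil).
E = np.array([
    [0.05, 0.30, 0.70],
    [0.04, 0.40, 0.65],
    [0.03, 0.50, 0.60],
    [0.02, 0.60, 0.55],
])
# A pixel that actually contains only water and vegetation.
pixel = E @ np.array([0.7, 0.3, 0.0])
fractions, keep = adaptive_unmix(E, pixel)
```

For this pixel the soil endmember is dropped from the subset, so its abundance cannot be over-estimated in the second unmixing pass.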
Abstract:
A better-performing product code vector quantization (VQ) method, referred to as sequential split vector quantization (SeSVQ), is proposed for coding line spectrum frequency (LSF) parameters. The split sub-vectors of the full LSF vector are quantized in sequence, using conditional distributions derived from the previously quantized sub-vectors. Unlike the traditional split vector quantization (SVQ) method, SeSVQ exploits the inter-sub-vector correlation and thus provides improved rate-distortion performance, at the expense of higher memory. We investigate the quantization performance of SeSVQ against traditional SVQ and transform-domain split VQ (TrSVQ) methods. Compared to SVQ, SeSVQ saves 1 bit and nearly 3 bits for telephone-band and wide-band speech coding applications, respectively.
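For context, the SVQ baseline that SeSVQ improves upon can be sketched as follows (the synthetic training data, codebook sizes and two-way split are illustrative assumptions): each half of the LSF vector gets its own independently trained codebook, so correlation between the halves is ignored. SeSVQ would instead select or train the second-stage codebook conditioned on the first sub-vector's quantization result, which is where both the rate-distortion gain and the extra memory come from.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
# Synthetic stand-in for training LSF vectors: 10 sorted frequencies
# per frame (real LSFs come from LPC analysis of speech).
train = np.sort(rng.uniform(0.1, 3.0, size=(2000, 10)), axis=1)

# Plain SVQ: two independent 32-entry codebooks, one per 5-dim half.
cb1, _ = kmeans2(train[:, :5], 32, minit='++', seed=1)
cb2, _ = kmeans2(train[:, 5:], 32, minit='++', seed=1)

def quantize(v):
    """Quantize each half against its own codebook, ignoring any
    correlation between the halves (which SeSVQ would exploit)."""
    i = np.argmin(((cb1 - v[:5]) ** 2).sum(axis=1))
    j = np.argmin(((cb2 - v[5:]) ** 2).sum(axis=1))
    return np.concatenate([cb1[i], cb2[j]])

x = np.sort(rng.uniform(0.1, 3.0, 10))
xq = quantize(x)
```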
Abstract:
During restoration, transmission line switching is one of the major causes of transient overvoltages. Though detailed electromagnetic transient studies are carried out extensively for the planning and design of transmission systems, such studies are not common in the day-to-day operation of power systems. However, it is important for the operator to ensure during restoration of supply that peak overvoltages resulting from switching operations are well within safe limits. This paper presents a support vector machine approach to classify the various cases of line energization as safe or unsafe based upon the peak value of overvoltage at the receiving end of the line. The operator can define the threshold voltage used to assign a data pattern to either class. To illustrate the proposed approach, the power system used for the switching transient peak overvoltage tests is a 400 kV equivalent system of an Indian southern grid.
Abstract:
A constant switching frequency current-error space-vector-based hysteresis controller for two-level voltage source inverter-fed induction motor (IM) drives is proposed in this study. The proposed controller is capable of driving the IM over the entire speed range, extending to the six-step mode. It uses the parabolic boundary, reported earlier, for vector selection within a sector, but uses simple, fast and self-adaptive sector identification logic for sector change detection over the entire modulation range. The new scheme detects a sector change using the change in direction of the current error along the axes jA, jB and jC. Most previous schemes use an outer boundary for sector change detection, so the current error goes outside the boundary six times per cycle during sector changes, introducing additional fifth and seventh harmonic components in the phase current. This may cause sixth-harmonic torque pulsations in the motor and spread in the harmonic spectrum of the phase voltage. The proposed scheme detects sector changes quickly and accurately, eliminating the chance of introducing additional fifth and seventh harmonic components in the phase current, and provides a phase-voltage harmonic spectrum that exactly matches that of constant switching frequency voltage-controlled space vector pulse width modulation (VC-SVPWM)-based two-level inverter-fed drives.
Abstract:
Part I: Parkinson’s disease is a slowly progressive neurodegenerative disorder in which the dopaminergic neurons of the substantia nigra pars compacta, in particular, degenerate and die. Current conventional treatment is based on alleviating symptoms but has no effect on the progression of the disease. Gene therapy research has focused on the possibility of restoring the lost brain function by at least two means: substitution of critical enzymes needed for the synthesis of dopamine, and slowing down the progression of the disease by supporting the functions of the remaining nigral dopaminergic neurons with neurotrophic factors. The striatal levels of enzymes such as tyrosine hydroxylase, dopa decarboxylase and GTP-CH1 decrease as the disease progresses. By replacing one or all of these enzymes, dopamine levels in the striatum may be restored to normal and behavioral impairments caused by the disease may be ameliorated, especially in the later stages of the disease. The neurotrophic factors glial cell line-derived neurotrophic factor (GDNF) and neurturin have been shown to protect and restore the functions of dopaminergic cell somas and terminals, as well as improve behavior, in animal lesion models. This therapy may be best suited to the early stages of the disease, when there are more dopaminergic neurons for the neurotrophic factors to reach. Viral vector-mediated gene transfer provides a tool to deliver proteins with complex structures into specific brain locations and provides long-term protein over-expression. Part II: The aim of our study was to investigate the effects of two orally dosed COMT inhibitors, entacapone (10 and 30 mg/kg) and tolcapone (10 and 30 mg/kg), with subsequent administration of a peripheral dopa decarboxylase inhibitor, carbidopa (30 mg/kg), and L-dopa (30 mg/kg), on dopamine and its metabolite levels in the dorsal striatum and nucleus accumbens of freely moving rats using dual-probe in vivo microdialysis.
Earlier similarly designed studies have only been conducted in the dorsal striatum. We also confirmed the results of earlier ex vivo studies regarding the effects of intraperitoneally dosed tolcapone (30 mg/kg) and entacapone (30 mg/kg) on striatal and hepatic COMT activity. The results obtained from the dorsal striatum were generally in line with earlier studies: tolcapone tended to increase dopamine and DOPAC levels and decrease HVA levels, while entacapone tended to keep striatal dopamine and HVA levels elevated longer than in controls and also tended to elevate DOPAC levels. Surprisingly, dopamine levels in the nucleus accumbens were not elevated after either dose of entacapone or tolcapone. Accumbal DOPAC levels, especially in the tolcapone 30 mg/kg group, were elevated to nearly the same extent as in the dorsal striatum. Entacapone 10 mg/kg elevated accumbal HVA levels more than the 30 mg/kg dose, and the effect was more pronounced in the nucleus accumbens than in the dorsal striatum. This suggests that entacapone 30 mg/kg has minor central effects. Our ex vivo results from the dorsal striatum also suggest that entacapone 30 mg/kg has minor and transient central effects, even though central HVA levels were not suppressed below those of the control group in either brain area in the microdialysis study. Both entacapone and tolcapone suppressed hepatic COMT activity more than striatal COMT activity, and tolcapone was more effective than entacapone in the dorsal striatum. The differences between dopamine and metabolite levels in the dorsal striatum and nucleus accumbens may be due to different properties of the two brain areas.
Abstract:
Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values have been corrected (N_c) for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an N_c value, is approximated so that the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
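LSSVM regression is essentially ridge regression in a kernel-induced feature space, so kernel ridge regression gives a faithful small-scale stand-in for the first algorithm. A sketch on synthetic borehole data (the coordinates, the N_c trend and all hyperparameters are invented; the actual models were fitted to the Bangalore SPT data):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
# Synthetic stand-in for borehole data: plan coordinates X, Y (km)
# and depth Z (m), with corrected SPT values N_c generated from an
# invented smooth trend plus noise.
coords = np.column_stack([
    rng.uniform(0, 15, 800),   # X
    rng.uniform(0, 15, 800),   # Y
    rng.uniform(0, 30, 800),   # Z (depth)
])
N_c = 10 + 0.8 * coords[:, 2] + 2 * np.sin(coords[:, 0]) \
      + rng.normal(0, 2, 800)

# Kernel ridge with an RBF kernel: the ridge-regression-in-feature-
# space formulation that LSSVM regression reduces to (up to the bias
# term). The target is centred so shrinkage acts on deviations only.
y_mean = N_c[:600].mean()
model = KernelRidge(kernel='rbf', alpha=0.1, gamma=0.05)
model.fit(coords[:600], N_c[:600] - y_mean)

pred = model.predict(coords[600:]) + y_mean
ss_res = ((pred - N_c[600:]) ** 2).sum()
ss_tot = ((N_c[600:] - N_c[600:].mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
```

The fitted model plays the role of the paper's N_c = N_c(X, Y, Z) surface: given any coordinate triple in the covered half-space, `model.predict` returns an interpolated corrected SPT value.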
Abstract:
We present the first observation in hadronic collisions of the electroweak production of vector boson pairs (VV, V=W, Z) where one boson decays to a dijet final state. The data correspond to 3.5 fb-1 of integrated luminosity of pp̅ collisions at √s=1.96 TeV collected by the CDF II detector at the Fermilab Tevatron. We observe 1516±239(stat)±144(syst) diboson candidate events and measure a cross section σ(pp̅ →VV+X) of 18.0±2.8(stat)±2.4(syst)±1.1(lumi) pb, in agreement with the expectations of the standard model.
Abstract:
We present the result of a search for a massive color-octet vector particle (e.g. a massive gluon) decaying to a pair of top quarks in proton-antiproton collisions with a center-of-mass energy of 1.96 TeV. This search is based on 1.9 fb⁻¹ of data collected using the CDF detector during Run II of the Tevatron at Fermilab. We study tt̄ events in the lepton+jets channel with at least one b-tagged jet. A massive gluon is characterized by its mass, decay width, and the strength of its coupling to quarks. These parameters are determined from the observed invariant mass distribution of top quark pairs. We set limits on the massive gluon coupling strength for masses between 400 and 800 GeV/c² and width-to-mass ratios between 0.05 and 0.50. The coupling strength of the hypothetical massive gluon to quarks is consistent with zero within the explored parameter space.
Abstract:
The paper deals with a linearization technique in non-linear oscillations for systems which are governed by second-order non-linear ordinary differential equations. The method is based on approximation of the non-linear function by a linear function such that the error is least in the weighted mean square sense. The method has been applied to cubic, sine, hyperbolic sine, and odd polynomial types of non-linearities and the results obtained are more accurate than those given by existing linearization methods.
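The core of such linearization is the least-mean-square coefficient k = ⟨f(x)·x⟩ / ⟨x²⟩ taken over one cycle of an assumed harmonic motion x = A·cos θ (shown here with uniform weighting; a weighted mean-square criterion changes only the averaging). A quick numerical check against the classical closed form for the cubic case:

```python
import numpy as np

def equivalent_stiffness(f, A, n=4096):
    """Least-mean-square linear coefficient k such that f(x) ~ k*x
    over one cycle of the assumed motion x = A*cos(theta), with
    uniform weighting: k = <f(x)*x> / <x^2>."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = A * np.cos(theta)
    return np.sum(f(x) * x) / np.sum(x * x)

# Cubic (Duffing-type) restoring force f(x) = x + eps*x^3, for which
# the classical closed-form result is k = 1 + (3/4)*eps*A^2.
eps, A = 0.5, 1.2
k_num = equivalent_stiffness(lambda x: x + eps * x ** 3, A)
k_exact = 1.0 + 0.75 * eps * A ** 2
```

For the oscillator x'' + f(x) = 0 the linearized equation x'' + k·x = 0 then gives the first-approximation frequency ω ≈ √k, which for the cubic case reproduces the familiar harmonic-balance result.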
Abstract:
In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small-sample correction for the likelihood ratio (LR) test of cointegrating rank with the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite-sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970-2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen’s LR tests for cointegration. In all papers we work with two data sets. The first is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)-2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
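The eigenvalue idea behind the first paper can be illustrated in stylized form (this sketch is not the paper's test statistic and has no critical values; the data-generating process is invented): in a cointegrated bivariate VAR(1), the LS estimate of the autoregressive matrix should have one eigenvalue close to the unit circle and one well inside it.

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated bivariate cointegrated system: y1 is a random walk and
# y2 = y1 + stationary noise, so the true VAR(1) matrix
# [[1, 0], [1, 0]] has eigenvalues 1 and 0.
T = 2000
y1 = np.cumsum(rng.standard_normal(T))
y2 = y1 + rng.standard_normal(T)
Y = np.column_stack([y1, y2])

# Least-squares estimate of A in Y_t = A @ Y_{t-1} + e_t.
X, Z = Y[:-1], Y[1:]
A_hat = np.linalg.lstsq(X, Z, rcond=None)[0].T
moduli = np.sort(np.abs(np.linalg.eigvals(A_hat)))
```

A formal test would compare the eigenvalue moduli against appropriate critical values; here the gap between the near-unit eigenvalue and the small one simply visualizes a cointegrating rank of one.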