838 results for Vector error correction model


Relevance:

30.00%

Publisher:

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is to map the 'wet' areas where trees and houses are partly covered by water. This is a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label each pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the mixed pixels in the images. Spectral unmixing methods have been developed to address this problem. This thesis investigates the two most important issues in spectral unmixing: how to select endmembers and how to model the primary classes. We conduct comparative studies of three typical spectral unmixing algorithms: Partially Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. Conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class, and a newly developed fuzzy error matrix is used to obtain a clear picture of error distributions at the pixel level. The thesis shows that the Extended Support Vector Machine method provides a more reliable estimation of fractional abundances and allows a complete set of training samples to model a defined pure class. Furthermore, it can be applied to both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies a serious drawback of current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an image. Since it is not accurate to assume that every pixel contains all endmember classes, these methods usually over-estimate the fractional abundances in a particular pixel. In this thesis, an adaptive subset of endmembers is derived for every pixel using the proposed methods, forming an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
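To make the unmixing step concrete, here is a minimal sketch of constrained linear spectral unmixing with a conventional RMSE check, in the spirit of the methods compared above. It is illustrative only: the toy endmember matrix, the renormalisation step and all names are our assumptions, not the thesis code.

```python
import numpy as np
from scipy.optimize import lsq_linear

def unmix_pixel(E, x):
    """Estimate fractional abundances f for one pixel spectrum x given
    an endmember matrix E (bands x endmembers): minimise ||E f - x||
    subject to 0 <= f <= 1, then renormalise so the fractions sum to
    one (an approximation to the full sum-to-one constraint)."""
    f = lsq_linear(E, x, bounds=(0.0, 1.0)).x
    return f / f.sum()

def rmse(f_est, f_true):
    """Conventional root mean square error of the estimated fractions."""
    return np.sqrt(np.mean((np.asarray(f_est) - np.asarray(f_true)) ** 2))

# Three-band toy example with three endmembers (water, vegetation, soil):
E = np.array([[0.05, 0.30, 0.40],
              [0.02, 0.60, 0.35],
              [0.01, 0.45, 0.55]])
x = 0.5 * E[:, 0] + 0.5 * E[:, 1]     # a half-water, half-vegetation pixel
print(unmix_pixel(E, x))              # ~[0.5, 0.5, 0.0]
```

A pixel-dependent scheme, as advocated in the abstract, would restrict the columns of E to the endmember subset selected for each pixel before solving.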

Relevance:

30.00%

Publisher:

Abstract:

"We report on a search for the standard-model Higgs boson in pp collisions at s=1.96 TeV using an integrated luminosity of 2.0 fb(-1). We look for production of the Higgs boson decaying to a pair of bottom quarks in association with a vector boson V (W or Z) decaying to quarks, resulting in a four-jet final state. Two of the jets are required to have secondary vertices consistent with B-hadron decays. We set the first 95% confidence level upper limit on the VH production cross section with V(-> qq/qq('))H(-> bb) decay for Higgs boson masses of 100-150 GeV/c(2) using data from run II at the Fermilab Tevatron. For m(H)=120 GeV/c(2), we exclude cross sections larger than 38 times the standard-model prediction."

Relevance:

30.00%

Publisher:

Abstract:

We present the first observation in hadronic collisions of the electroweak production of vector boson pairs (VV, V = W, Z) where one boson decays to a dijet final state. The data correspond to 3.5 fb⁻¹ of integrated luminosity of pp̄ collisions at √s = 1.96 TeV collected by the CDF II detector at the Fermilab Tevatron. We observe 1516 ± 239(stat) ± 144(syst) diboson candidate events and measure a cross section σ(pp̄ → VV + X) of 18.0 ± 2.8(stat) ± 2.4(syst) ± 1.1(lumi) pb, in agreement with the expectations of the standard model.

Relevance:

30.00%

Publisher:

Abstract:

The error introduced in depolarisation measurements due to the convergence of the incident beam has been investigated theoretically as well as experimentally for the case of colloid scattering, where the particles are not small compared to the wavelength of light. Assuming the scattering particles to be anisotropic rods, it is shown that, when the incident unpolarised light is condensed by means of a lens with a circular aperture, the observed depolarisation ratio ϱ_u is given by ϱ_u = ϱ_u0 + (5/3)θ², where ϱ_u0 is the true depolarisation for incident parallel light and θ the semi-angle of convergence. Appropriate formulae are derived when the incident beam is polarised vertically and horizontally. Experiments performed on six typical colloids support the theoretical conclusions. Other immediate consequences of the theory are also discussed.
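As a quick worked example of the correction above (a sketch; the function name and the sample numbers are ours, not the paper's), the convergence term (5/3)θ² can simply be subtracted from an observed ratio:

```python
import numpy as np

def true_depolarisation(rho_obs, theta):
    """Invert rho_u = rho_u0 + (5/3) * theta**2 to recover the true
    depolarisation rho_u0 from the observed ratio; theta is the
    semi-angle of convergence in radians."""
    return rho_obs - (5.0 / 3.0) * theta ** 2

theta = np.deg2rad(5.0)                   # a 5-degree semi-angle
print(true_depolarisation(0.050, theta))  # ~0.0373
```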

Relevance:

30.00%

Publisher:

Abstract:

A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviations between the simulation results and the liquid drop values are accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction, and they indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations, and we show that the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
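For orientation, the liquid drop model that the thesis tests gives a simple expression for the work of forming an n-molecule cluster; the sketch below locates the critical cluster and the barrier height. The reduced units and all parameter values are illustrative assumptions, not results from the thesis.

```python
import numpy as np

def cnt_barrier(n, S, sigma_a1):
    """Liquid-drop (classical nucleation theory) work of formation of an
    n-molecule cluster, in units of kT:
        W(n)/kT = -n * ln(S) + sigma_a1 * n**(2/3)
    where S is the saturation ratio and sigma_a1 the surface energy of a
    monomer-sized cluster in units of kT (illustrative values below)."""
    return -n * np.log(S) + sigma_a1 * n ** (2.0 / 3.0)

n = np.arange(1, 400)
w = cnt_barrier(n, S=1.5, sigma_a1=3.0)
i = np.argmax(w)
print(n[i], w[i])   # critical cluster size (~120) and barrier height (~24 kT)
```

The correction factors computed in the thesis would, in this picture, adjust W(n) for the smallest n where the liquid drop description fails.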

Relevance:

30.00%

Publisher:

Abstract:

Variation of switching frequency over the entire operating speed range of an induction motor (IM) drive is the major problem associated with the conventional two-level three-phase hysteresis controller as well as with the space-phasor-based PWM hysteresis controller. This paper describes a simple hysteresis current controller for controlling the switching frequency variation in two-level PWM inverter fed IM drives over a range of operating speeds. A novel concept of a continuously variable hysteresis boundary of the current error space phasor, varying with the speed of the IM drive, is proposed in the present work. A variable parabolic boundary for the current error space phasor is suggested, for the first time in this paper, to obtain a switching frequency pattern with the hysteresis controller similar to that of a constant-switching-frequency voltage-controlled space vector PWM (VC-SVPWM) based inverter fed IM drive. A generalized algorithm is also developed to determine the parabolic boundary for controlling the switching frequency variation for any IM load. Only the adjacent inverter voltage vectors forming the triangular sector in which the tip of the machine voltage vector lies are switched, to keep the current error space vector within the parabolic boundary. The controller uses a self-adaptive sector identification logic, which provides smooth transition between the sectors and is capable of taking the inverter up to six-step mode of operation if demanded by the drive system. The proposed scheme is simulated and experimentally verified on a 3.7 kW IM drive.
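A toy illustration of the boundary test at the heart of such a controller (not the paper's algorithm: the parabola parametrisation, the speed scaling and all names are our assumptions): the controller switches the inverter voltage vector only when the current error space phasor leaves a parabolic region whose size varies with speed.

```python
import numpy as np

def parabola_half_width(x, speed, rated_speed, k0=0.05):
    # Hypothetical parabolic boundary y**2 = 4*k*(x + k) whose focal
    # parameter k grows with speed, widening the band at higher speed.
    k = k0 * (1.0 + speed / rated_speed)
    return np.sqrt(max(4.0 * k * (x + k), 0.0))

def needs_switching(err_d, err_q, speed, rated_speed):
    """True when the current-error space phasor (err_d, err_q) lies
    outside the parabolic hysteresis boundary, i.e. a new inverter
    voltage vector must be applied to pull the error back."""
    return abs(err_q) > parabola_half_width(err_d, speed, rated_speed)

print(needs_switching(0.02, 0.20, speed=750.0, rated_speed=1500.0))  # True
```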

Relevance:

30.00%

Publisher:

Abstract:

Mandelstam's argument that PCAC follows from assigning Lorentz quantum number M = 1 to the massless pion is examined in the context of the multiparticle dual resonance model. We construct a factorisable dual model for pions which is formulated operatorially on the harmonic oscillator Fock space along the lines of the Neveu-Schwarz model. The model has both m_π and m_ρ as arbitrary parameters unconstrained by the duality requirement. The Adler self-consistency condition is satisfied if and only if the condition m_ρ² − m_π² = 1/2 is imposed, in which case the model reduces to the chiral dual pion model of Neveu and Thorn, and of Schwarz. The Lorentz quantum number of the pion in the dual model is shown to be M = 0.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large number of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset consists of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent: the forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets, the non-aggregated data can be represented in an even more disaggregated form (firm level), and the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
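A minimal sketch of the two-step procedure described above (principal-component factor extraction, then a diffusion-index forecasting regression). The function, its defaults and the variable names are illustrative assumptions, not the thesis code.

```python
import numpy as np

def factor_forecast(X, y, r=3, h=1):
    """Step 1: estimate r common factors from the standardised panel X
    (T x N) by principal components. Step 2: regress y_{t+h} on a
    constant, the factors and y_t, then forecast h steps ahead."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :r] * s[:r]                      # PCA factor estimates
    T = len(y)
    W = np.column_stack([np.ones(T - h), F[:T - h], y[:T - h]])
    beta, *_ = np.linalg.lstsq(W, y[h:], rcond=None)
    return np.concatenate(([1.0], F[T - 1], [y[T - 1]])) @ beta
```

Evaluation in the spirit of the thesis would compare the mean squared forecast error of such forecasts against that of a univariate autoregression over a hold-out period.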

Relevance:

30.00%

Publisher:

Abstract:

A microscopic expression for the frequency- and wave-vector-dependent dielectric constant of a dense dipolar liquid is derived starting from linear response theory. The new expression properly takes into account the effects of the translational modes in the polarization relaxation. The longitudinal and the transverse components of the dielectric constant show vastly different behavior at intermediate values of the wave vector k. We find that the microscopic structure of the dense liquid plays an important role at intermediate wave vectors. The continuum-model description of the dielectric constant, although appropriate at very small values of the wave vector, breaks down completely at intermediate values of k. Numerical results for the longitudinal and the transverse dielectric constants are obtained by using the direct correlation function from the mean-spherical approximation for dipolar hard spheres. We show that our results are consistent with all the limiting expressions known for the dielectric function of matter.

Relevance:

30.00%

Publisher:

Abstract:

The shape of the vector and scalar K_ℓ3 form factors is investigated by exploiting analyticity and unitarity in a model-independent formalism. The method uses as input dispersion relations for certain correlators computed in perturbative QCD in the deep Euclidean region, soft-meson theorems, and experimental information on the phase and modulus of the form factors along the elastic part of the unitarity cut. We derive constraints on the coefficients of the parameterizations valid in the semileptonic range and on the truncation error. The method also predicts low-energy domains in the complex t plane where zeros of the form factors are excluded. The results are useful for K_ℓ3 data analyses and provide theoretical underpinning for recent phenomenological dispersive representations of the form factors.

Relevance:

30.00%

Publisher:

Abstract:

An a priori error analysis of discontinuous Galerkin methods for a general elliptic problem is derived under a mild elliptic regularity assumption on the solution. This is accomplished by using some techniques from a posteriori error analysis. The model problem is assumed to satisfy a Gårding-type inequality. Optimal-order L² norm a priori error estimates are derived for an adjoint-consistent interior penalty method.
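For reference, a Gårding-type inequality for the bilinear form a(·,·) of the weak problem takes the standard form below, with generic constants; the precise spaces and constants assumed in the paper may differ.

```latex
% Garding-type inequality: coercivity up to a lower-order L^2 term.
\[
  a(v,v) \;\ge\; \alpha \,\|v\|_{H^1(\Omega)}^2 \;-\; \beta \,\|v\|_{L^2(\Omega)}^2
  \qquad \text{for all } v \in H^1(\Omega),
\]
% with constants \alpha > 0 and \beta \ge 0 independent of v.
```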

Relevance:

30.00%

Publisher:

Abstract:

An associative memory with parallel architecture is presented. The neurons are modelled by perceptrons having only binary, rather than continuous-valued, inputs. To store m elements, each having n features, m neurons each with n connections are needed. The n features are coded as an n-bit binary vector, and the weights of the n connections that store the n features of an element take only two values, -1 and 1, corresponding to the absence or presence of a feature. This makes the learning very simple and straightforward. For an input corrupted by binary noise, the associative memory indicates the stored element that is closest (in terms of Hamming distance) to the noisy input. In the case where the noisy input is equidistant from two or more stored vectors, the associative memory indicates two or more elements simultaneously. From some simple experiments performed on human memory and also on the associative memory, it can be concluded that the associative memory presented in this paper is in some respects more akin to a human memory than the Hopfield model.
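A compact sketch of the memory just described (the class name and the toy data are ours): each stored element is a perceptron whose ±1 weights are copied from its n-bit code, and recall returns all elements at the minimum Hamming distance from the input.

```python
import numpy as np

class BinaryAssociativeMemory:
    def __init__(self, patterns):
        # patterns: m x n array of 0/1 feature vectors, one per element;
        # weights are -1/+1 for absent/present features.
        self.W = 2 * np.asarray(patterns) - 1

    def recall(self, x):
        """Indices of the stored element(s) closest to the binary input
        x in Hamming distance; several indices are returned on a tie."""
        scores = self.W @ (2 * np.asarray(x) - 1)  # n - 2 * Hamming distance
        return np.flatnonzero(scores == scores.max())

mem = BinaryAssociativeMemory([[1, 0, 1, 0], [1, 1, 1, 1]])
print(mem.recall([1, 0, 0, 0]))   # [0]: nearest stored element
```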

Relevance:

30.00%

Publisher:

Abstract:

A state-of-the-art model of the coupled ocean-atmosphere system, the Climate Forecast System (CFS) from the National Centers for Environmental Prediction (NCEP), USA, has been ported onto the PARAM Padma parallel computing system at the Centre for Development of Advanced Computing (CDAC), Bangalore, and retrospective predictions for the summer monsoon (June-September) season of 2009 have been generated using five initial conditions for the atmosphere and one initial condition for the ocean for May 2009. Whereas a large deficit in the Indian summer monsoon rainfall (ISMR; June-September) was experienced over the Indian region (with all-India rainfall 22% below the average), the ensemble-average prediction was for above-average rainfall during the summer monsoon. The retrospective predictions of ISMR with the CFS from NCEP for 1981-2008 have been analysed. The retrospective predictions from NCEP for the summer monsoon of 1994 and those from CDAC for 2009 have been compared with simulations of each season by the stand-alone atmospheric component of the model, the Global Forecast System (GFS), and with observations. It is shown that the simulation with GFS for 2009 produced deficit rainfall, as observed. The large error in the prediction for the monsoon of 2009 can be attributed to a positive Indian Ocean Dipole event present in the prediction from July onwards, which was not present in the observations. This suggests that the error could be reduced by improving the ocean model over the equatorial Indian Ocean.