115 results for "two-Gaussian mixture model"
Abstract:
In the design of practical web page classification systems, one often encounters a situation in which the labeled training set is created by choosing some examples from each class, but the class proportions in this set are not the same as those in the test distribution to which the classifier will actually be applied. The problem is made worse when the amount of training data is also small. In this paper we explore and adapt binary SVM methods that make use of unlabeled data from the test distribution, viz., Transductive SVMs (TSVMs) and expectation regularization/constraint (ER/EC) methods, to deal with this situation. We empirically show that when the labeled training data is small, a TSVM designed using the class ratio tuned by minimizing the loss on the labeled set yields the best performance; its performance is good even when the deviation between the class ratios of the labeled training set and the test set is quite large. When the labeled training data is sufficiently large, an unsupervised Gaussian mixture model can be used to get a very good estimate of the class ratio in the test set; also, when this estimate is used, both TSVM and ER/EC give their best possible performance, with TSVM coming out superior. The ideas in the paper can easily be extended to multi-class SVMs and MaxEnt models.
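The class-ratio estimation step described above can be sketched as a two-component one-dimensional Gaussian mixture fitted by EM to classifier decision scores on the unlabeled set, with the mixing weight of the higher-mean component read off as the positive-class ratio. This is a minimal illustration under assumed Gaussian score distributions, not the paper's implementation:

```python
import numpy as np

def em_two_gaussian(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns (weights, means, stds); the mixing weights serve as an
    estimate of the class proportions in the unlabeled set.
    """
    x = np.asarray(x, dtype=float)
    # crude initialisation: split the data at the median
    lo, hi = x[x <= np.median(x)], x[x > np.median(x)]
    mu = np.array([lo.mean(), hi.mean()])
    sd = np.array([lo.std() + 1e-6, hi.std() + 1e-6])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current parameters
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sd

rng = np.random.default_rng(0)
# synthetic "decision scores": 30% positives near +2, 70% negatives near -2
scores = np.concatenate([rng.normal(2.0, 1.0, 300), rng.normal(-2.0, 1.0, 700)])
w, mu, sd = em_two_gaussian(scores)
ratio_pos = w[np.argmax(mu)]  # weight of the higher-mean component
```

With well-separated score distributions, `ratio_pos` recovers the true positive-class proportion closely; this estimate could then be supplied to the TSVM or ER/EC objective.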
Abstract:
A characterization of the voice source (VS) signal by the pitch synchronous (PS) discrete cosine transform (DCT) is proposed. With the integrated linear prediction residual (ILPR) as the VS estimate, the PS DCT of the ILPR is evaluated as a feature vector for speaker identification (SID). On TIMIT and YOHO databases, using a Gaussian mixture model (GMM)-based classifier, it performs on par with existing VS-based features. On the NIST 2003 database, fusion with a GMM-based classifier using MFCC features improves the identification accuracy by 12% in absolute terms, proving that the proposed characterization has good promise as a feature for SID studies. (C) 2015 Acoustical Society of America
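The feature extraction described above can be sketched as follows: length-normalise one pitch-synchronous residual cycle and take the leading DCT-II coefficients as the feature vector. The function name, frame length, and coefficient count are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.fft import dct

def ps_dct_feature(frame, n_coeffs=12, length=64):
    """Resample one pitch-synchronous residual cycle to a fixed length
    and return its first DCT-II coefficients as a feature vector."""
    frame = np.asarray(frame, dtype=float)
    # length-normalise the cycle so features are comparable across pitch periods
    resampled = np.interp(np.linspace(0, len(frame) - 1, length),
                          np.arange(len(frame)), frame)
    return dct(resampled, type=2, norm='ortho')[:n_coeffs]

# toy stand-in for a voice-source cycle: a decaying pulse
cycle = np.exp(-np.arange(80) / 10.0)
feat = ps_dct_feature(cycle)
```

In a full SID pipeline, such vectors would be pooled per speaker and modelled with a GMM-based classifier, optionally fused with MFCC scores as the abstract describes.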
Abstract:
Gene expression in living systems is inherently stochastic, and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t -> infinity. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency (and noise levels) are high.
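As a numerical illustration of the functional form quoted above (a Gaussian multiplied by a complementary error function), the sketch below evaluates and normalises such a density. The parameters mu, sigma, a, b are placeholders for shape parameters, not the paper's derived expressions in terms of the kinetic rates:

```python
import numpy as np
from scipy.special import erfc

def protein_pdf(p, mu, sigma, a, b):
    """Unnormalised density of the form suggested by the abstract:
    a Gaussian times a complementary error function."""
    gauss = np.exp(-0.5 * ((p - mu) / sigma) ** 2)
    return gauss * erfc(a * (b - p))

p = np.linspace(0.0, 100.0, 2001)        # protein-number grid
f = protein_pdf(p, mu=40.0, sigma=10.0, a=0.1, b=20.0)
f /= np.trapz(f, p)                      # normalise numerically to a proper density
```

Such a normalised curve is what one would fit against histograms from stochastic simulations of the two-stage model.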
Abstract:
The recently discovered twist phase is studied in the context of the full ten-parameter family of partially coherent general anisotropic Gaussian Schell-model beams. It is shown that the nonnegativity requirement on the cross-spectral density of the beam demands that the strength of the twist phase be bounded from above by the inverse of the transverse coherence area of the beam. The twist phase as a two-point function is shown to have the structure of the generalized Huygens kernel or Green's function of a first-order system. The ray-transfer matrix of this system is exhibited. Wolf-type coherent-mode decomposition of the twist phase is carried out. Imposition of the twist phase on an otherwise untwisted beam is shown to result in a linear transformation in the ray phase space of the Wigner distribution. Though this transformation preserves the four-dimensional phase-space volume, it is not symplectic and hence it can, when impressed on a Wigner distribution, push it out of the convex set of all bona fide Wigner distributions unless the original Wigner distribution was sufficiently deep into the interior of the set.
Abstract:
Ground-state properties of the two-dimensional Hubbard model with point-defect disorder are investigated numerically in the Hartree-Fock approximation. The phase diagram in the p(point defect concentration)-delta(deviation from half filling) plane exhibits antiferromagnetic, spin-density-wave, paramagnetic, and spin-glass-like phases. The disorder stabilizes the antiferromagnetic phase relative to the spin-density-wave phase. The presence of U strongly enhances the localization in the antiferromagnetic phase. The spin-density-wave and spin-glass-like phases are weakly localized.
Abstract:
A rare example of a two-dimensional Heisenberg model with an exact dimerized ground state is presented. This model, which can be regarded as a variation on the kagomé lattice, has several features of interest: it has a highly (but not macroscopically) degenerate ground state; it is closely related to spin chains studied by earlier authors; in particular, it exhibits domain-wall-like "kink" excitations normally associated only with one-dimensional systems. In some limits it decouples into noninteracting chains; unusually, this happens in the limit of strong, rather than weak, interchain coupling. [S0163-1829(99)50338-X].
Abstract:
A two-state Ising model has been applied to the two-dimensional condensation of thymine at the mercury-water interface. The model predicts a quadratic dependence of the transition potential on temperature and on the logarithm of the adsorbate concentration. Both predictions have been confirmed experimentally.
Abstract:
Theoretical optimization studies of the performance of a combustion-driven premixed two-phase flow gasdynamic laser are presented. A steady, inviscid, nonreacting, quasi-one-dimensional two-phase flow model including appropriate finite-rate vibrational kinetics has been used in the analysis. The analysis shows that the effect of the particles on the optimum performance of the two-phase laser is very small. The results are presented in graphical form. Applied Physics Letters is copyrighted by The American Institute of Physics.
Abstract:
In this paper, numerical modelling of fracture in concrete using a two-dimensional lattice model is presented, and a few issues related to the lattice modelling technique as applied to concrete fracture are reviewed. Acoustic emission (AE) events are compared with the number of fractured elements. To implement the heterogeneity of plain concrete, two methods are used: generating the grain structure of the concrete using Fuller's distribution, and randomly distributing the concrete material properties following a Gaussian distribution. In the first method, the modelling of the concrete at the meso level follows existing methods available in the literature. The aggregates present in the concrete are assumed to be perfect spheres, so their counterparts in the two-dimensional lattice network are circular. A three-point bend (TPB) specimen is tested experimentally under crack mouth opening displacement (CMOD) control at a rate of 0.0004 mm/sec, and the fracture process in the same TPB specimen is modelled using a regular triangular 2D lattice network. Load versus CMOD plots obtained using both methods are compared with the experimental results. It was observed that the number of fractured elements increases near and beyond the peak load, i.e., once the crack starts to propagate; AE hits also increase rapidly beyond the peak load. It should be mentioned that although the lattice modelling of concrete fracture used in the present study is very similar to that already available in the literature, the present work brings out certain finer details not made explicit in earlier works.
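The second heterogeneity method above (Gaussian-distributed material properties) can be sketched by assigning each lattice element an independent random strength and flagging elements whose stress exceeds it. The numbers below (mean strength, scatter, applied stress) are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative parameters: mean tensile strength (MPa) and its scatter,
# assigned independently to each lattice element
n_elements, mean_ft, std_ft = 1000, 3.0, 0.5
strengths = rng.normal(mean_ft, std_ft, n_elements)
strengths = np.clip(strengths, 0.1, None)  # strengths must stay positive

# during a simulated loading step, an element "fractures" when its stress
# exceeds its own randomly assigned strength
stresses = np.full(n_elements, 2.5)
fractured = stresses > strengths
```

Counting `fractured` elements per load step is what the abstract compares against the recorded AE hits.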
Abstract:
In this paper, we introduce the three-user cognitive radio channels with asymmetric transmitter cooperation, and derive achievable rate regions under several scenarios depending on the type of cooperation and decoding capability at the receivers. Two of the most natural cooperation mechanisms for the three-user channel are considered here: cumulative message sharing (CMS) and primary-only message sharing (PMS). In addition to the message sharing mechanism, the achievable rate region is critically dependent on the decoding capability at the receivers. Here, we consider two scenarios for the decoding capability, and derive an achievable rate region for each one of them by employing a combination of superposition and Gel'fand-Pinsker coding techniques. Finally, to provide a numerical example, we consider the Gaussian channel model to plot the rate regions. In terms of achievable rates, CMS turns out to be a better scheme than PMS. However, the practical aspects of implementing such message-sharing schemes remain to be investigated.
Abstract:
Distributed space time coding for wireless relay networks where the source, the destination, and the relays have multiple antennas has been studied by Jing and Hassibi. In that setup, the transmit and receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are colocated. In this paper, a wireless relay network with a single antenna at the source and the destination and two antennas at each of the R relays is considered. In the first phase of the two-phase transmission model, a T-length complex vector is transmitted from the source to all the relays. At each relay, the in-phase and quadrature component vectors of the complex vectors received at the two antennas are interleaved before processing. After processing, in the second phase, a T x 2R matrix codeword is transmitted to the destination. The collection of all such codewords is called a Co-ordinate Interleaved Distributed Space-Time Code (CIDSTC). Compared to the scheme proposed by Jing-Hassibi, for T >= 4R, it is shown that while both schemes give the same asymptotic diversity gain, the CIDSTC scheme also gives an additional asymptotic coding gain, at the cost of only a negligible increase in the processing complexity at the relays.
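The interleaving step above can be sketched as swapping the quadrature (imaginary) components of the vectors received at a relay's two antennas; this is one plausible reading of coordinate interleaving for illustration, not the paper's exact mapping:

```python
import numpy as np

def coordinate_interleave(r1, r2):
    """Exchange the quadrature components of the complex vectors
    received at a relay's two antennas, keeping the in-phase parts."""
    y1 = r1.real + 1j * r2.imag
    y2 = r2.real + 1j * r1.imag
    return y1, y2

rng = np.random.default_rng(1)
# toy received vectors (T = 4) at the relay's two antennas
r1 = rng.normal(size=4) + 1j * rng.normal(size=4)
r2 = rng.normal(size=4) + 1j * rng.normal(size=4)
y1, y2 = coordinate_interleave(r1, r2)
```

The interleaved vectors `y1`, `y2` would then be linearly processed at the relay to form its two columns of the T x 2R codeword.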
Abstract:
In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The next model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes total relay transmit power under constraints on MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed design through simulations.
Abstract:
We interpret the recent discovery of a 125 GeV Higgs-like state in the context of a two-Higgs-doublet model with a heavy fourth sequential generation of fermions, in which one Higgs doublet couples only to the fourth-generation fermions, while the second doublet couples to the lighter fermions of the first three families. This model is designed to accommodate the apparent heaviness of the fourth-generation fermions and to effectively address the low-energy phenomenology of a dynamical electroweak-symmetry-breaking scenario. The physical Higgs states of the model are, therefore, viewed as composites primarily of the fourth-generation fermions. We find that the lightest Higgs, h, is a good candidate for the recently discovered 125 GeV spin-zero particle, when tan beta ~ O(1), for typical fourth-generation fermion masses of M_4G = 400-600 GeV, and with a large t-t' mixing in the right-handed quark sector. This, in turn, leads to BR(t' -> th) ~ O(1), which drastically changes the t' decay pattern. We also find that, based on the current Higgs data, this two-Higgs-doublet model generically predicts an enhanced production rate (compared to the Standard Model) in the pp -> h -> tau tau channel, and reduced rates in the VV -> h -> gamma gamma and ppbar/pp -> V -> hV -> Vbb channels. Finally, the heavier CP-even Higgs is excluded by the current data up to m_H ~ 500 GeV, while the pseudoscalar state, A, can be as light as 130 GeV. These heavier Higgs states, and the expected deviations from the Standard Model in some of the Higgs production channels, can be further excluded or discovered with more data.
Abstract:
A Variable Endmember Constrained Least Square (VECLS) technique is proposed to account for endmember variability in the linear mixture model by incorporating the variance of each class, whose signals vary from pixel to pixel due to changes in urban land cover (LC) structure. VECLS is first tested on a computer-simulated dataset with three endmember classes and four bands, having small, medium, and large variability, at three different spatial resolutions. The technique is then validated with real datasets from IKONOS, Landsat ETM+, and MODIS. The results show that the correlation between actual and estimated proportions is higher by an average of 0.25 for the artificial datasets compared to a situation where variability is not considered. With IKONOS, Landsat ETM+, and MODIS data, the average correlation increased by 0.15 for 2 and 3 classes and by 0.19 for 4 classes, when compared to a single endmember per class. (C) 2013 COSPAR. Published by Elsevier Ltd. All rights reserved.
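The baseline linear-mixture step that VECLS extends can be sketched as fully constrained (non-negative, sum-to-one) least-squares unmixing. The sum-to-one constraint is enforced softly here by appending a heavily weighted row of ones, a standard trick; this is the plain constrained baseline, not VECLS itself, and the endmember signatures are made up for illustration:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, sum_weight=100.0):
    """Non-negative least-squares unmixing with a soft sum-to-one
    constraint (extra heavily weighted row of ones)."""
    A = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, sum_weight)
    frac, _ = nnls(A, b)
    return frac

# three 4-band endmember signatures (columns) and a 60/30/10 mixed pixel
E = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.5, 0.8],
              [0.7, 0.2, 0.4],
              [0.4, 0.3, 0.9]])
pixel = E @ np.array([0.6, 0.3, 0.1])
frac = unmix(pixel, E)
```

VECLS would additionally weight the fit by a per-class variance to absorb pixel-to-pixel endmember variability.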
Abstract:
An important question in kernel regression is one of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observations model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine structure preservation are simultaneously achieved. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression results in better quality estimation compared with its spatially non-adaptive counterparts. The denoising results obtained are comparable to those obtained using other state-of-the-art techniques, and in some scenarios, superior.
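For a linear smoother f_hat = S y under i.i.d. Gaussian noise, SURE takes the closed form ||y - Sy||^2 + 2*sigma^2*tr(S) - n*sigma^2, which can be minimised over the bandwidth. The sketch below applies this to a simple global-bandwidth Gaussian kernel smoother; the spatially adaptive, order-and-bandwidth version in the paper is more elaborate:

```python
import numpy as np

def gaussian_smoother_matrix(n, h):
    """Row-normalised Gaussian kernel smoother on a regular 1-D grid
    (bandwidth h in grid-index units)."""
    i = np.arange(n)
    W = np.exp(-0.5 * ((i[:, None] - i[None, :]) / h) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def sure(y, S, sigma2):
    """Stein's unbiased estimate of the MSE risk of the linear smoother S."""
    n = len(y)
    resid = y - S @ y
    return resid @ resid + 2.0 * sigma2 * np.trace(S) - n * sigma2

rng = np.random.default_rng(2)
n, sigma = 200, 0.3
x = np.linspace(0, 2 * np.pi, n)
y = np.sin(x) + sigma * rng.normal(size=n)   # noisy observations

# pick the bandwidth that minimises the estimated risk
bandwidths = [0.5, 1, 2, 4, 8, 16]
risks = [sure(y, gaussian_smoother_matrix(n, h), sigma**2) for h in bandwidths]
h_best = bandwidths[int(np.argmin(risks))]
```

Making `h` (and the regression order) vary per location, with the risk estimated in local windows, yields the spatially adaptive behaviour the abstract describes.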