921 results for Conditional moments


Relevance:

20.00%

Publisher:

Abstract:

In this paper, an improved probabilistic linearization approach is developed to study the response of nonlinear single-degree-of-freedom (SDOF) systems under narrow-band inputs. An integral equation for the probability density function (PDF) of the response envelope is derived. This equation is solved using an iterative scheme. The technique is applied to the hardening-type Duffing oscillator under narrow-band excitation. The results compare favorably with those obtained from numerical simulation. In particular, the bimodal nature of the PDF of the response envelope over certain parameter ranges is brought out.
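To make the comparison with numerical simulation concrete, here is a minimal sketch of simulating a hardening Duffing oscillator, x'' + 2ζx' + x + εx³ = f0·cos(ωt), with classical RK4. The damping, nonlinearity, and forcing values are illustrative assumptions, and a single-tone forcing is used as a stand-in for a narrow-band input; this is not the paper's probabilistic linearization scheme.

```python
import math

def duffing_rhs(t, x, v, zeta=0.1, eps=0.5, f0=0.3, omega=1.0):
    """x'' + 2*zeta*x' + x + eps*x**3 = f0*cos(omega*t) (hardening for eps > 0)."""
    return v, f0 * math.cos(omega * t) - 2.0 * zeta * v - x - eps * x ** 3

def simulate(t_end=100.0, dt=0.01, x0=0.0, v0=0.0):
    """Classical RK4 integration; returns the sampled displacement history."""
    t, x, v, xs = 0.0, x0, v0, []
    for _ in range(int(round(t_end / dt))):
        k1x, k1v = duffing_rhs(t, x, v)
        k2x, k2v = duffing_rhs(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = duffing_rhs(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = duffing_rhs(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        xs.append(x)
    return xs

xs = simulate()
```

From a trajectory like `xs`, an envelope estimate (e.g., via peak amplitudes) could then be histogrammed to approximate the envelope PDF studied in the paper.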

Genetic Algorithms (GAs) are efficient and robust search methods that are employed in a plethora of applications with extremely large search spaces. The directed search mechanism of Genetic Algorithms performs a simultaneous and balanced exploration of new regions of the search space and exploitation of already discovered regions. This paper introduces the notion of fitness moments for analyzing the working of GAs. We show that the fitness moments in any generation may be predicted from those of the initial population. Since knowledge of the fitness moments allows us to estimate the fitness distribution of strings, this approach provides a method for characterizing the dynamics of GAs. In particular, the average fitness and fitness variance of the population in any generation may be predicted. We introduce the technique of fitness-based disruption of solutions for improving the performance of GAs. Using fitness moments, we demonstrate the advantages of fitness-based disruption. We also present experimental results comparing the performance of a standard GA with GAs (CDGA and AGA) that incorporate the principle of fitness-based disruption. The experimental evidence clearly demonstrates the power of fitness-based disruption.
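The first two fitness moments determine the population's average fitness and fitness variance, as the abstract notes. A minimal sketch, assuming a toy OneMax fitness and a randomly initialized binary population (both illustrative, not the paper's test problems):

```python
import random

def fitness_moment(population, fitness, k):
    """k-th raw moment of the population's fitness distribution."""
    return sum(fitness(s) ** k for s in population) / len(population)

def onemax(bits):
    """Toy fitness: the number of ones in the string."""
    return sum(bits)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(200)]
m1 = fitness_moment(pop, onemax, 1)      # average fitness of the generation
m2 = fitness_moment(pop, onemax, 2)
fitness_variance = m2 - m1 ** 2          # variance recovered from the first two moments
```

Tracking (m1, m2, …) generation by generation is the kind of moment sequence the paper predicts from the initial population.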

This paper presents recursive algorithms for fast computation of Legendre and Zernike moments of a grey-level image intensity distribution. For a binary image, a contour integration method is developed for the evaluation of Legendre moments using only the boundary information. A method for recursive calculation of Zernike polynomial coefficients is also given. A square-to-circular image transformation scheme is introduced to minimize the computation involved in Zernike moment functions. The recursive formulae can also be used in inverse moment transforms to reconstruct the original image from moments. The mathematical framework of the algorithms is given in detail, and illustrated with binary and grey-level images.
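The recursive evaluation of Legendre polynomials can be sketched with Bonnet's three-term recursion, n·Pn(x) = (2n−1)·x·Pn−1(x) − (n−1)·Pn−2(x). The discrete normalization and the pixel-to-[−1, 1] mapping below are one common convention for Legendre image moments, not necessarily the exact scheme of the paper:

```python
def legendre_values(n_max, x):
    """P_0..P_{n_max} at x via Bonnet's recursion:
    n*P_n = (2n-1)*x*P_{n-1} - (n-1)*P_{n-2}."""
    p = [1.0, x]
    for n in range(2, n_max + 1):
        p.append(((2 * n - 1) * x * p[n - 1] - (n - 1) * p[n - 2]) / n)
    return p[:n_max + 1]

def legendre_moment(image, p, q):
    """Discrete approximation of the (p, q) Legendre moment of a grey-level
    image (rows, cols >= 2), with pixel centres mapped onto [-1, 1]."""
    rows, cols = len(image), len(image[0])
    lam = 0.0
    for i in range(rows):
        y = -1.0 + 2.0 * i / (rows - 1)
        pq_y = legendre_values(q, y)[q]
        for j in range(cols):
            x = -1.0 + 2.0 * j / (cols - 1)
            lam += legendre_values(p, x)[p] * pq_y * image[i][j]
    # (2p+1)(2q+1)/4 normalisation times the pixel area in the mapped domain
    return lam * (2 * p + 1) * (2 * q + 1) / 4.0 * (2.0 / (rows - 1)) * (2.0 / (cols - 1))

img = [[1.0] * 3 for _ in range(3)]
m00 = legendre_moment(img, 0, 0)
m10 = legendre_moment(img, 1, 0)   # vanishes for an image symmetric in x
```

Caching the recursion per row/column, as the paper's recursive formulae do, avoids recomputing the polynomials for every pixel.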

A systematic assessment of the submodels of the conditional moment closure (CMC) formalism for the autoignition problem is carried out using direct numerical simulation (DNS) data. An initially non-premixed n-heptane/air system, subjected to three-dimensional, homogeneous, isotropic, decaying turbulence, is considered. Two kinetic schemes, (1) a one-step and (2) a reduced four-step reaction mechanism, are considered for the chemistry. An alternative formulation is developed for closure of the mean chemical source term, based on the condition that the instantaneous fluctuation of the excess temperature is small. With this model, it is shown that the CMC equations describe the autoignition process all the way up to near the equilibrium limit. The effect of second-order terms (namely, the conditional variance of the temperature excess, σ², and the conditional correlations of species, q_ij) on the modeling is examined. Comparison with DNS data shows that σ² has little effect on the predicted conditional mean temperature evolution if the average conditional scalar dissipation rate is properly modeled. Using DNS data, a correction factor is introduced in the modeling of the nonlinear terms to include the effect of species fluctuations. Computations including such a correction factor show that the species conditional correlations q_ij have little effect on model predictions with a one-step reaction, but the q_ij involving intermediate species are found to be crucial when the four-step reduced kinetics is considered. The "most reactive mixture fraction" is found to vary with time when the four-step kinetics is considered. First-order CMC results are found to be qualitatively wrong if the conditional mean scalar dissipation rate is not modeled properly. The autoignition delay time predicted by the CMC model agrees closely with DNS results and shows a trend similar to experimental data over a range of initial temperatures.
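A minimal sketch of why second-order terms such as the conditional variance matter in source-term closure: for a convex (Arrhenius-type) rate, averaging the rate over temperature fluctuations differs from evaluating the rate at the mean temperature. The rate constants and temperature statistics below are illustrative assumptions, not values from the paper.

```python
import math
import random

def arrhenius(T, A=1.0, Ta=4000.0):
    """Illustrative one-step Arrhenius rate, omega = A * exp(-Ta / T)."""
    return A * math.exp(-Ta / T)

random.seed(1)
T_mean, T_sigma = 1000.0, 50.0
samples = [random.gauss(T_mean, T_sigma) for _ in range(100_000)]
mean_of_rate = sum(arrhenius(T) for T in samples) / len(samples)
rate_of_mean = arrhenius(T_mean)
# For Ta/T > 2 the rate is convex in T, so averaging the nonlinear source over
# fluctuations exceeds evaluating it at the mean; this gap is what a
# sigma^2 (conditional-variance) correction to a first-order closure captures.
```

First-order CMC corresponds to using `rate_of_mean`; the correction factor discussed in the abstract accounts for the difference.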

Experimental realization of quantum information processing in the field of nuclear magnetic resonance (NMR) has been well established. Implementation of the conditional phase-shift gate has been a significant step, which has led to the realization of important algorithms such as Grover's search algorithm and the quantum Fourier transform. This gate has so far been implemented in NMR by using the coupling evolution method. We demonstrate here the implementation of the conditional phase-shift gate using transition-selective pulses. As an application of the gate, we demonstrate Grover's search algorithm and the quantum Fourier transform by simulations and experiments using transition-selective pulses. (C) 2002 Elsevier Science (USA). All rights reserved.
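In matrix form, the two-qubit conditional phase-shift gate acts on the computational basis as diag(1, 1, 1, e^{iφ}). A minimal sketch, independent of any particular NMR implementation:

```python
import cmath

def conditional_phase_gate(phi):
    """Two-qubit conditional phase-shift gate: diag(1, 1, 1, exp(i*phi))."""
    gate = [[0j] * 4 for _ in range(4)]
    for k in range(4):
        gate[k][k] = cmath.exp(1j * phi) if k == 3 else 1 + 0j
    return gate

# At phi = pi the gate reduces to controlled-Z, the operation used, e.g., in
# the oracle and diffusion steps of two-qubit Grover search.
cz = conditional_phase_gate(cmath.pi)
```

Composing this gate with single-qubit rotations yields the Grover and quantum-Fourier-transform circuits mentioned in the abstract.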

This paper addresses the problem of maximum margin classification given the moments of class conditional densities and the false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. The resultant classifiers are applied to the case of classification of unbalanced datasets with asymmetric costs for misclassification. Experimental results on benchmark datasets show the efficacy of the proposed method.
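The Chebyshev-inequality argument yields a second-order cone constraint of the form w·μ − b ≥ κ(η)·sqrt(wᵀΣw), with κ(η) = sqrt(η/(1−η)) guaranteeing correct classification with probability at least η for any distribution matching the given moments. A minimal sketch of checking such a constraint (the construction of the maximum-margin problem itself, and its ellipsoid-distance dual, are not reproduced here):

```python
import math

def kappa(eta):
    """Chebyshev-based margin factor for correct classification w.p. >= eta."""
    return math.sqrt(eta / (1.0 - eta))

def cone_constraint_holds(w, b, mu, cov, eta):
    """Second-order cone constraint  w.mu - b >= kappa(eta) * sqrt(w' cov w)."""
    margin = sum(wi * mi for wi, mi in zip(w, mu)) - b
    quad = sum(w[i] * cov[i][j] * w[j]
               for i in range(len(w)) for j in range(len(w)))
    return margin >= kappa(eta) * math.sqrt(quad)
```

For example, with class mean (3, 0), identity covariance, and η = 0.8 (so κ = 2), the hyperplane w = (1, 0), b = 0 satisfies the constraint, while b = 2.5 does not.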

In this talk I discuss some aspects of the study of electric dipole moments (EDMs) of the fermions, in the context of R-parity-violating (RPV) supersymmetry (SUSY). I will start with a brief general discussion of how dipole moments, in general, serve as a probe of physics beyond the Standard Model (SM) and an even briefer summary of RPV SUSY. I will follow by discussing a general method of analysis for obtaining the leading fermion-mass dependence of the dipole moments and present its application to the RPV SUSY case. Then I will summarise the constraints that the analysis of electron, neutron, and Hg EDMs provides for the case of trilinear RPV SUSY couplings and make a few comments on the case of bilinear RPV, where the general method of analysis proposed by us does not work.

This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with a high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well for large datasets when compared to the state-of-the-art classifiers, Support Vector Machines (SVMs). Experiments on real world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time and achieves similar accuracies.
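The per-cluster second-order statistics can be maintained incrementally with BIRCH-style clustering-feature (CF) triples (N, linear sum, sum of squared norms), which are additive under merges. A minimal sketch under the spherical-covariance assumption stated above; the sample points are illustrative:

```python
def cf_insert(cf, x):
    """Add a point to a BIRCH-style clustering feature (N, linear sum, squared sum)."""
    n, ls, ss = cf
    return (n + 1, [a + b for a, b in zip(ls, x)], ss + sum(v * v for v in x))

def cf_merge(cf1, cf2):
    """CF triples are additive, so clusters merge without revisiting points."""
    n1, ls1, ss1 = cf1
    n2, ls2, ss2 = cf2
    return (n1 + n2, [a + b for a, b in zip(ls1, ls2)], ss1 + ss2)

def cf_moments(cf):
    """Per-cluster mean and scalar variance (trace of the covariance),
    matching a spherical-covariance model."""
    n, ls, ss = cf
    mean = [v / n for v in ls]
    variance = ss / n - sum(m * m for m in mean)
    return mean, variance

cf = (0, [0.0, 0.0], 0.0)
for point in [[1.0, 2.0], [3.0, 2.0], [2.0, 5.0]]:
    cf = cf_insert(cf, point)
mean, var = cf_moments(cf)
```

These per-cluster moments are exactly what a cone constraint per cluster consumes, so the optimization size scales with the number of clusters rather than the number of points.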

The transport of reactive solutes through fractured porous formations has been analyzed. Transport through the porous block is represented by a general multiprocess nonequilibrium (MPNE) equation, while transport through the fracture is represented by an advection-dispersion equation with linear equilibrium sorption and first-order transformation. An implicit finite-difference technique has been used to solve the two coupled equations. The transport characteristics have been analyzed in terms of the zeroth, first, and second temporal moments of the solute in the fracture. The solute behavior in fractured impermeable and fractured permeable formations is first compared, and the effects of various fracture and matrix transport parameters are analyzed. Subsequently, transport through a fractured permeable formation is analyzed to ascertain the effects of equilibrium sorption, rate-limited sorption, and the multiprocess nonequilibrium transport process. It was found that the temporal moments were nearly identical for the fractured impermeable and permeable formations when both the diffusion coefficient and the first-order transformation coefficient were relatively large. The multiprocess nonequilibrium model resulted in a smaller mass recovery in the fracture and higher dispersion than the equilibrium and rate-limited sorption models. DOI: 10.1061/(ASCE)HE.1943-5584.0000586. (C) 2012 American Society of Civil Engineers.
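The temporal moments used in such analyses can be computed from a breakthrough curve c(t) as m_n = ∫ tⁿ c(t) dt, with mean arrival time m1/m0 and arrival-time variance m2/m0 − (m1/m0)². A minimal sketch using trapezoidal quadrature on a synthetic, illustrative curve:

```python
def temporal_moments(times, conc):
    """Zeroth, first, and second temporal moments of a breakthrough curve,
    m_n = integral of t**n * c(t) dt, via trapezoidal quadrature."""
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) * (times[i + 1] - times[i]) / 2.0
                   for i in range(len(times) - 1))
    m0 = trapz(conc)
    m1 = trapz([t * c for t, c in zip(times, conc)])
    m2 = trapz([t * t * c for t, c in zip(times, conc)])
    mean_arrival = m1 / m0
    spread = m2 / m0 - mean_arrival ** 2   # variance of the arrival time
    return m0, mean_arrival, spread

times = list(range(11))                       # synthetic curve, not data from the paper
conc = [0, 0, 0, 0, 1, 2, 1, 0, 0, 0, 0]     # pulse centred at t = 5
m0, mean_arrival, spread = temporal_moments(times, conc)
```

In this setting m0 tracks recovered mass, mean_arrival the effective velocity, and spread the dispersion, which is how the abstract's model comparisons are quantified.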

A low-complexity, essentially ML decoding technique for the Golden code and the three-antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, the simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low-complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include as a special case the decoding technique of Sirianunpiboon et al. We show that both ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three- and four-antenna Perfect codes, the three-antenna Threaded Algebraic Space-Time code and the four-antenna rate-2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all these five codes. These STBCs, together with the proposed decoding algorithm, have the least decoding complexity and best error performance among all known codes for the corresponding numbers of transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere decoding implementation.

A sequence of moments obtained from statistical trials encodes a classical probability distribution. However, it is well known that an incompatible set of moments arises in the quantum scenario, when correlation outcomes associated with measurements on spatially separated entangled states are considered. This feature, viz., the incompatibility of moments with a joint probability distribution, is reflected in the violation of Bell inequalities. Here, we focus on sequential measurements on a single quantum system and investigate if moments and joint probabilities are compatible with each other. By considering sequential measurement of a dichotomic dynamical observable at three different time intervals, we explicitly demonstrate that the moments and the probabilities are inconsistent with each other. Experimental results using a nuclear magnetic resonance system are reported here to corroborate these theoretical observations, viz., the incompatibility of the three-time joint probabilities with those extracted from the moment sequence when sequential measurements on a single-qubit system are considered.
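For three dichotomic (±1-valued) observables, a candidate joint distribution can be reconstructed from the moment sequence via the standard expansion P(q1,q2,q3) = (1/8)[1 + Σ⟨Qi⟩qi + Σ⟨QiQj⟩qiqj + ⟨Q1Q2Q3⟩q1q2q3], and the moments admit a genuine joint probability exactly when every reconstructed value is nonnegative. A minimal sketch of that compatibility test:

```python
from itertools import product

def joint_from_moments(m1, m2, m3):
    """Candidate three-time joint distribution from the moment sequence:
    m1 = (<Q1>, <Q2>, <Q3>), m2 = (<Q1Q2>, <Q1Q3>, <Q2Q3>), m3 = <Q1Q2Q3>.
    Returns {(q1, q2, q3): probability}."""
    P = {}
    for q1, q2, q3 in product((-1, 1), repeat=3):
        P[(q1, q2, q3)] = (1 + m1[0] * q1 + m1[1] * q2 + m1[2] * q3
                             + m2[0] * q1 * q2 + m2[1] * q1 * q3 + m2[2] * q2 * q3
                             + m3 * q1 * q2 * q3) / 8.0
    return P

def moments_admit_joint(m1, m2, m3, tol=1e-12):
    """True iff every reconstructed 'probability' is nonnegative."""
    return all(p >= -tol for p in joint_from_moments(m1, m2, m3).values())
```

For example, all-zero moments give the uniform distribution and pass, while three mutually anticorrelated pairwise moments (−1, −1, −1) fail: no joint distribution over ±1 outcomes realizes them, which is the kind of incompatibility the abstract reports for sequential quantum measurements.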

Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from the individual class conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. However, this formulation, and such relaxations in general, depends heavily on the second-order moments and is therefore susceptible to moment-estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment-estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of clusters with a spherical normal distribution, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with the true moments, even when the moment estimates are erroneous. The results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
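The one-sided Chebyshev-Cantelli bound underlying these relaxations, P(X − μ ≥ t) ≤ σ²/(σ² + t²), holds for any distribution with the given mean and variance. A minimal sketch checking it empirically; the Gaussian sampling distribution is an illustrative choice:

```python
import math
import random

def cantelli_bound(sigma2, t):
    """One-sided Chebyshev-Cantelli bound: P(X - mu >= t) <= sigma^2 / (sigma^2 + t^2)."""
    return sigma2 / (sigma2 + t * t)

random.seed(2)
mu, sigma, t = 0.0, 1.0, 2.0
samples = [random.gauss(mu, sigma) for _ in range(100_000)]
tail = sum(1 for x in samples if x - mu >= t) / len(samples)
bound = cantelli_bound(sigma ** 2, t)   # 0.2 here, while the Gaussian tail is far smaller
```

Because the bound depends only on the first two moments, errors in the estimated μ and σ² translate directly into an unreliable constraint, which is the vulnerability the robust formulations above address.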