68 results for Probability distributions
Abstract:
The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k ≥ 3. The best known polynomial time approximation algorithms require n^δ (for a positive constant δ depending on k) colours to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than the first type of algorithms. Our algorithms work as long as p(n) ≥ n^(-1+ε), where ε is a constant greater than 1/4.
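As an illustration of the random model described in this abstract, the following sketch generates a k-colourable graph by first partitioning the vertices into k colour classes and then including each allowed (cross-class) edge independently with probability p. The greedy colouring at the end is only a naive baseline, not one of the paper's algorithms, and all parameter values are arbitrary.

```python
import random
from collections import defaultdict

def planted_k_colourable_graph(n, k, p, seed=0):
    """Sample a graph from the planted model described in the abstract:
    vertices are split into k colour classes, then every edge whose
    endpoints lie in different classes is included independently with
    probability p.  (Illustrative sketch, not the authors' code.)"""
    rng = random.Random(seed)
    colour = {v: rng.randrange(k) for v in range(n)}
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if colour[u] != colour[v] and rng.random() < p:
                edges.add((u, v))
    return colour, edges

def greedy_colouring(n, edges):
    """First-fit greedy colouring in decreasing-degree order, used here
    only as a baseline; the paper's algorithms are more sophisticated and
    come with almost-sure or average-case guarantees."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    col = {}
    for v in sorted(range(n), key=lambda x: -len(adj[x])):
        used = {col[w] for w in adj[v] if w in col}
        col[v] = next(c for c in range(n) if c not in used)
    return col

planted, E = planted_k_colourable_graph(n=300, k=3, p=0.2)
print("greedy used", len(set(greedy_colouring(300, E).values())), "colours")
```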
Abstract:
The probability distribution of the eigenvalues of a second-order stochastic boundary value problem is considered. The solution is characterized in terms of the zeros of an associated initial value problem. It is further shown that the probability distribution is related to the solution of a first-order nonlinear stochastic differential equation. Solutions of this equation based on the theory of Markov processes and also on the closure approximation are presented. A string with stochastic mass distribution is considered as an example for numerical work. The theoretical probability distribution functions are compared with digital simulation results. The comparison is found to be reasonably good.
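For readers unfamiliar with the reduction mentioned above, the following hedged sketch shows how a second-order eigenvalue problem for a string with random mass density can be recast as a first-order nonlinear stochastic equation via a Riccati substitution; the paper's exact formulation, boundary conditions, and closure approximations may differ.

```latex
% Hedged illustration: a vibrating string with random mass density m(x)
% (the paper's exact formulation may differ),
\[
  u''(x) + \lambda\, m(x)\, u(x) = 0, \qquad u(0) = u(L) = 0 .
\]
% The Riccati substitution z(x) = u'(x)/u(x) yields the first-order
% nonlinear (stochastic, since m is random) equation
\[
  z'(x) = -\lambda\, m(x) - z^{2}(x),
\]
% whose blow-ups mark the zeros of the associated initial value problem;
% counting those zeros below a trial value of \lambda gives the random
% number of eigenvalues below it, and hence their probability distribution.
```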
Abstract:
A method is presented for optimising the performance indices of aperture antennas in the presence of blockage. An N-dimensional objective function is formed for maximising the directivity factor of a circular aperture with blockage under sidelobe-level constraints, and is minimised using the simplex search method. Optimum aperture distributions are computed for a circular aperture with blockage of circular geometry that gives the maximum directivity factor under sidelobe-level constraints.
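A minimal sketch of the kind of constrained simplex-search optimisation described above follows, assuming a toy far-field pattern model; the directivity proxy, main-lobe mask, and penalty handling of the sidelobe-level constraint are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

# The aperture is described by N ring-amplitude coefficients a_1..a_N; the
# simplex (Nelder-Mead) search minimises an objective that rewards the
# directivity proxy and penalises sidelobe levels above a limit.
N = 6
theta = np.linspace(1e-3, np.pi / 2, 400)          # observation angles

def pattern(a, theta):
    """Toy far-field pattern: a weighted sum of ring contributions."""
    rings = np.arange(1, N + 1)
    return np.abs(np.sum(a[:, None] * np.sinc(rings[:, None] * theta[None, :]), axis=0))

def objective(a, sll_limit_db=-25.0, penalty=1e3):
    f = pattern(np.asarray(a), theta)
    f = f / f.max()
    mainlobe = theta < 0.15                         # crude main-lobe mask
    sll_db = 20 * np.log10(f[~mainlobe].max() + 1e-12)
    directivity_proxy = f[mainlobe].sum()           # stand-in for the directivity factor
    return -directivity_proxy + penalty * max(0.0, sll_db - sll_limit_db)

result = minimize(objective, x0=np.ones(N), method="Nelder-Mead")
print(result.x, objective(result.x))
```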
Abstract:
In this paper, a novel genetic algorithm is developed by generating artificial chromosomes with probability control to solve machine scheduling problems. Generating artificial chromosomes for the Genetic Algorithm (ACGA) is closely related to Evolutionary Algorithms Based on Probabilistic Models (EAPM). The artificial chromosomes are generated by a probability model that extracts the gene information from the current population. ACGA is considered a hybrid algorithm because both the conventional genetic operators and a probability model are integrated. The ACGA proposed in this paper further employs the "evaporation concept" applied in Ant Colony Optimization (ACO) to solve the permutation flowshop problem. The "evaporation concept" is used to reduce the effect of past experience and to explore new alternative solutions. In this paper, we propose three different methods for the probability of evaporation. This probability of evaporation is applied as soon as a job is assigned to a position in the permutation flowshop problem. Experimental results show that our ACGA with the evaporation concept gives better performance than some algorithms in the literature.
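One plausible reading of the artificial-chromosome generation with evaporation is sketched below for a permutation problem: a job-position frequency matrix is extracted from the current population, a new chromosome is sampled position by position, and the selected job's frequencies are "evaporated" by a factor (1 - rho) as soon as it is assigned. The frequency model and evaporation rule here are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def artificial_chromosome(population, rho=0.1, rng=None):
    """Sample one artificial chromosome from a job-position frequency model
    built from the current population, with an ACO-style evaporation step
    applied each time a job is fixed to a position (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    pop = np.asarray(population)               # shape: (pop_size, n_jobs)
    n = pop.shape[1]
    # freq[j, p] counts how often job j occupies position p in the population.
    freq = np.zeros((n, n))
    for perm in pop:
        for pos, job in enumerate(perm):
            freq[job, pos] += 1
    freq += 1e-6                               # avoid zero probabilities
    chromosome, available = [], list(range(n))
    for pos in range(n):
        w = freq[available, pos]
        job = int(rng.choice(available, p=w / w.sum()))
        chromosome.append(job)
        available.remove(job)
        freq[job, :] *= (1.0 - rho)            # "evaporate" past experience for this job
    return chromosome

pop = [np.random.permutation(5) for _ in range(20)]
print(artificial_chromosome(pop, rho=0.2))
```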
Abstract:
We have derived explicitly the large-scale distribution of the quantum Ohmic resistance of a disordered one-dimensional conductor. We show that in the thermodynamic limit this distribution is characterized by two independent parameters for strong disorder, leading to a two-parameter scaling theory of localization. Only in the limit of weak disorder do we recover single-parameter scaling, consistent with existing theoretical treatments.
Abstract:
We study the photon-number distribution in squeezed states of a single-mode radiation field. A U(1)-invariant squeezing criterion is compared and contrasted with a more restrictive criterion, with the help of suggestive geometric representations. The U(1) invariance of the photon-number distribution in a squeezed coherent state, with arbitrary complex squeeze and displacement parameters, is explicitly demonstrated. The behavior of the photon-number distribution for a representative value of the displacement and various values of the squeeze parameter is numerically investigated. A new kind of giant oscillation riding as an envelope over more rapid oscillations in this distribution is demonstrated.
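For orientation, the special case of zero displacement (a squeezed vacuum with squeeze parameter r) has a simple closed-form photon-number distribution, shown below; the general squeezed coherent state analysed in the paper involves Hermite polynomials and is more involved.

```latex
% Photon-number distribution of a squeezed vacuum (zero displacement),
% with squeeze parameter r: only even photon numbers are populated.
\[
  P(2n) \;=\; \frac{(2n)!}{2^{2n}\,(n!)^{2}}\,
              \frac{\tanh^{2n} r}{\cosh r},
  \qquad
  P(2n+1) \;=\; 0 .
\]
```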
Abstract:
Probably the most informative description of the ground state of a magnetic molecular species is provided by the spin density map. Such a map may be experimentally obtained from polarized neutron diffraction (PND) data or theoretically calculated using quantum chemical approaches. Density functional theory (DFT) methods have proved to be well adapted for this. Spin distributions in one-dimensional compounds may also be computed using the density matrix renormalization group (DMRG) formalism. These three approaches, PND, DFT, and DMRG, have been utilized to obtain new insights on the ground state of two antiferromagnetically coupled Mn2+Cu2+ compounds, namely [Mn(Me-6-[14]ane-N-4)Cu(oxpn)](CF3SO3)2 and MnCu(pba)(H2O)3·2H2O, with Me-6-[14]ane-N-4 = (+/-)-5,7,7,12,14,14-hexamethyl-1,4,8,11-tetraazacyclotetradecane, oxpn = N,N'-bis(3-aminopropyl)oxamido and pba = 1,3-propylenebis(oxamato). Three problems in particular have been investigated: the spin distribution in the mononuclear precursors [Cu(oxpn)] and [Cu(pba)]2-, the spin density maps in the two Mn2+Cu2+ compounds, and the evolution of the spin distributions on the Mn2+ and Cu2+ sites when passing from a pair to a one-dimensional ferrimagnet.
Abstract:
In this paper, we report an analysis of the protein sequence length distribution for 13 bacteria, four archaea and one eukaryote whose genomes have been completely sequenced. The frequency distributions of protein sequence length for all 18 organisms are remarkably similar, independent of genome size, and can be described in terms of a lognormal probability distribution function. A simple stochastic model based on multiplicative processes has been proposed to explain the sequence length distribution. The stochastic model supports the random-origin hypothesis of protein sequences in genomes. Distributions of large proteins deviate from the overall lognormal behavior. Their cumulative distribution follows a power law analogous to Pareto's law used to describe the income distribution of the wealthy. The protein sequence length distribution in genomes of organisms has important implications for microbial evolution and applications. (C) 1999 Elsevier Science B.V. All rights reserved.
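The multiplicative-process argument can be illustrated with a few lines of simulation: products of many independent positive factors are approximately lognormal because their logarithms are sums of independent terms. The seed length, step count, and factor distribution below are arbitrary illustrative choices, not the paper's fitted parameters.

```python
import numpy as np

# Minimal sketch of a multiplicative growth process producing approximately
# lognormal lengths (illustrative parameters only).
rng = np.random.default_rng(1)
n_sequences, n_steps = 100_000, 30
factors = rng.uniform(0.85, 1.20, size=(n_sequences, n_steps))
lengths = 50 * np.prod(factors, axis=1)        # start from a short seed length

log_len = np.log(lengths)
print(f"mean(log L) = {log_len.mean():.2f}, std(log L) = {log_len.std():.2f}")
# Fitting a lognormal to `lengths` should reproduce the bulk of the histogram;
# the upper tail can be examined separately for Pareto-like behaviour, as
# reported for large proteins.
```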
Abstract:
The growth and dissolution dynamics of nonequilibrium crystal size distributions (CSDs) can be determined by solving the governing population balance equations (PBEs) representing reversible addition or dissociation. New PBEs are considered that intrinsically incorporate growth dispersion and yield complete CSDs. We present two approaches to solving the PBEs, a moment method and a numerical scheme. The results of the numerical scheme agree with the moment technique, which can be solved exactly when powers on mass-dependent growth and dissolution rate coefficients are either zero or one. The numerical scheme is more general and can be applied when the powers of the rate coefficients are non-integers or greater than unity. The influence of the size dependent rates on the time variation of the CSDs indicates that as equilibrium is approached, the CSDs become narrow when the exponent on the growth rate is less than the exponent on the dissolution rate. If the exponent on the growth rate is greater than the exponent on the dissolution rate, then the polydispersity continues to broaden. The computation method applies for crystals large enough that interfacial stability issues, such as ripening, can be neglected. (C) 2002 Elsevier Science B.V. All rights reserved.
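As a hedged illustration of the type of population balance involved, a generic continuous PBE with mass-dependent growth and dissolution rate coefficients raised to powers g and d is written below; the paper's equations additionally incorporate growth dispersion and may differ in detail.

```latex
% Generic population balance for the number density n(m,t) of crystals of
% mass m, with mass-dependent growth and dissolution rates (illustrative
% form; the paper's PBEs also build in growth dispersion):
\[
  \frac{\partial n(m,t)}{\partial t}
  + \frac{\partial}{\partial m}
    \Bigl[\bigl(k_g\,c(t)\,m^{\,g} - k_d\,m^{\,d}\bigr)\, n(m,t)\Bigr]
  = 0,
\]
% coupled to a solute mass balance for c(t).  Moment methods integrate this
% equation against powers of m, which closes exactly when g and d are 0 or 1,
% consistent with the cases the abstract says can be solved exactly.
```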
Abstract:
The distribution of fluorescence resonance energy transfer (FRET) efficiency between the two ends of a Lennard-Jones polymer chain, both at equilibrium and during folding and unfolding, has been calculated, for the first time, by Brownian dynamics simulations. The distribution of FRET efficiency becomes bimodal during folding of the extended state subsequent to a temperature quench, with the width of the distribution for the extended state broader than that for the folded state. The reverse process of unfolding subsequent to an upward temperature jump shows different characteristics. The distributions show a significant viscosity dependence, which can be tested against experiments.
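The FRET efficiency between the chain ends is presumably obtained from the instantaneous end-to-end distance r through the standard Förster relation shown below (R_0 is the Förster radius); this is the usual convention and is stated here as an assumption rather than taken from the paper.

```latex
% Standard F\"orster relation between donor-acceptor (end-to-end) distance r
% and transfer efficiency E; R_0 is the F\"orster radius.
\[
  E(r) \;=\; \frac{1}{1 + (r/R_{0})^{6}} .
\]
```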
Abstract:
The statistically steady humidity distribution resulting from an interaction of advection, modelled as an uncorrelated random walk of moist parcels on an isentropic surface, and a vapour sink, modelled as immediate condensation whenever the specific humidity exceeds a specified saturation humidity, is explored with theory and simulation. A source supplies moisture at the deep-tropical southern boundary of the domain and the saturation humidity is specified as a monotonically decreasing function of distance from the boundary. The boundary source balances the interior condensation sink, so that a stationary, spatially inhomogeneous humidity distribution emerges. An exact solution of the Fokker-Planck equation delivers a simple expression for the resulting probability density function (PDF) of the water-vapour field and also the relative humidity. This solution agrees completely with a numerical simulation of the process, and the humidity PDF exhibits several features of interest, such as bimodality close to the source and unimodality further from the source. The PDFs of specific and relative humidity are broad and non-Gaussian. The domain-averaged relative humidity PDF is bimodal with distinct moist and dry peaks, a feature which we show agrees with middleworld isentropic PDFs derived from the ERA-Interim dataset. Copyright (C) 2011 Royal Meteorological Society
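The advection-condensation model described above is straightforward to simulate; the sketch below random-walks parcels on [0, 1], resets humidity to saturation at the moist boundary, and condenses any excess above a monotonically decreasing saturation profile. The saturation profile, boundary rule, and reflecting walls are illustrative assumptions, not the paper's exact set-up.

```python
import numpy as np

# Monte-Carlo sketch of the advection-condensation model (illustrative
# parameters only): moist parcels random-walk in y, qs(y) decreases away
# from the source boundary at y = 0, and any excess humidity condenses.
rng = np.random.default_rng(0)
n_parcels, n_steps, step = 20_000, 2_000, 0.01
qs = lambda y: np.exp(-3.0 * y)                      # decreasing saturation humidity

y = rng.uniform(0.0, 1.0, n_parcels)
q = qs(y).copy()

for _ in range(n_steps):
    y += step * rng.choice([-1.0, 1.0], size=n_parcels)   # uncorrelated random walk
    y = np.abs(y)                                    # reflect at y = 0 ...
    y = np.where(y > 1.0, 2.0 - y, y)                # ... and at y = 1
    at_source = y < step                             # parcels touching the moist boundary
    q[at_source] = qs(y[at_source])                  # resaturated by the boundary source
    q = np.minimum(q, qs(y))                         # instantaneous condensation sink

rh = q / qs(y)                                       # relative humidity field
hist, edges = np.histogram(rh, bins=40, range=(0, 1), density=True)
print("domain-mean relative humidity:", rh.mean())
print("PDF values in driest / moistest bins:", hist[0], hist[-1])
```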
Abstract:
This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with a high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well for large datasets when compared to the state-of-the-art classifiers, Support Vector Machines (SVMs). Experiments on real world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time and achieves similar accuracies.
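The cluster-level chance constraints referred to above are typically converted to second order cone constraints through the Chebyshev-Cantelli inequality; a standard form of this conversion is sketched below for a positive-class cluster with mean mu and spherical covariance sigma^2 I, with the caveat that the paper's notation and exact formulation may differ.

```latex
% Sketch of a cluster-level chance constraint turned into a second order cone
% constraint via the Chebyshev--Cantelli inequality (standard form; the
% paper's notation may differ).  For a positive-class cluster with mean \mu,
% spherical covariance \sigma^{2} I and required confidence \eta:
\[
  \Pr\!\bigl(w^{\top}x + b \ge 1\bigr) \ge \eta
  \quad\Longleftarrow\quad
  w^{\top}\mu + b \;\ge\; 1 + \kappa\,\sigma\,\lVert w\rVert_{2},
  \qquad
  \kappa = \sqrt{\tfrac{\eta}{1-\eta}} .
\]
```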
Abstract:
Many downscaling techniques have been developed in the past few years for projection of station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs) to assess the hydrological impacts of climate change. This article compares the performances of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, belonging to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool useful in terms of its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied for downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performances with respect to reproduction of various statistics, such as dry and wet spell length distributions, daily rainfall distribution, and intersite correlations, are examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian global climate model (CGCM3) for three scenarios, viz. A1B, A2 and B1, is used for projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
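A minimal sketch of the analogue-type KNN step described above follows: for a day's large-scale feature vector, the K most similar training days are found and one of their observed precipitation values is sampled, with nearer neighbours weighted more heavily. The inverse-rank weighting and the synthetic data are assumptions for illustration only, not necessarily the weighting used in the paper.

```python
import numpy as np

def knn_downscale(feature, train_features, train_precip, k=10, rng=None):
    """Analogue-type KNN resampling: pick one of the k nearest training days,
    weighting nearer neighbours more heavily (rank-based weights assumed)."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(train_features - feature, axis=1)   # similarity by Euclidean distance
    nearest = np.argsort(d)[:k]
    weights = 1.0 / np.arange(1, k + 1)                    # inverse-rank weights
    weights /= weights.sum()
    return train_precip[rng.choice(nearest, p=weights)]

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                    # training-day predictors
P = rng.gamma(shape=2.0, scale=3.0, size=1000)    # training-day precipitation
print(knn_downscale(rng.normal(size=5), X, P, k=10, rng=rng))
```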
Abstract:
Evaluation of the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives upper and lower bounds on the probability of a single error and of multiple error patterns. The bounds are fairly tight and can also be used to select proper tap gains for the equalizer.