101 results for Probability generating function


Relevance:

20.00%

Publisher:

Abstract:

Following a Migdal-Kadanoff-type bond-moving procedure, we derive the renormalisation-group equations for the characteristic function of the full probability distribution of resistance (conductance) of a three-dimensional disordered system. The resulting recursion relations for the first two cumulants, $K_1$, the mean resistance, and $K_2$, the mean-square deviation of resistance, exhibit a mobility edge dominated by large dispersion, i.e., $K_2^{1/2}/K_1 \gg 1$, suggesting inadequacy of the one-parameter scaling ansatz.

Relevance:

20.00%

Publisher:

Abstract:

The paper deals with the approximate analysis of non-linear non-conservative systems of two degrees of freedom subjected to step-function excitation. The method of averaging of Krylov and Bogoliubov is used to arrive at the approximate equations for amplitude and phase. An example of a spring-mass-damper system is presented to illustrate the method, and a comparison with numerical results brings out the validity of the approach.
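For reference, the first-order Krylov-Bogoliubov averaged equations for a single weakly non-linear oscillator take the standard form below (a generic statement, not the specific two-degree-of-freedom equations derived in the paper; the coupled system leads to analogous amplitude and phase equations for each mode):

$$\ddot{x} + \omega_0^2 x = \epsilon f(x, \dot{x}), \qquad x = a(t)\cos\psi, \quad \psi = \omega_0 t + \varphi(t),$$

$$\dot{a} = -\frac{\epsilon}{2\pi\omega_0}\int_0^{2\pi} f\!\left(a\cos\psi,\, -a\omega_0\sin\psi\right)\sin\psi \,\mathrm{d}\psi, \qquad \dot{\varphi} = -\frac{\epsilon}{2\pi a\,\omega_0}\int_0^{2\pi} f\!\left(a\cos\psi,\, -a\omega_0\sin\psi\right)\cos\psi \,\mathrm{d}\psi.$$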

Relevance:

20.00%

Publisher:

Abstract:

By applying the theory of the asymptotic distribution of extremes and a certain stability criterion to the question of the domain of convergence, in the probability sense, of the renormalized perturbation expansion (RPE) for the site self-energy in a cellularly disordered system, an expression has been obtained in closed form for the probability of nonconvergence of the RPE on the real-energy axis. Hence, the intrinsic mobility $\mu(E)$ as a function of the carrier energy $E$ is deduced to be $\mu(E) = \mu_0 \exp\{-\exp[(|E| - E_c)/\Delta]\}$, where $E_c$ is a nominal 'mobility edge' and $\Delta$ is the width of the random site-energy distribution. Thus the mobility falls off sharply but continuously for $|E| > E_c$, in contradistinction with the notion of an abrupt 'mobility edge' proposed by Cohen et al. and Mott. Also, the calculated electrical conductivity shows a temperature dependence in qualitative agreement with experiments on disordered semiconductors.
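As a quick numerical illustration of the functional form quoted above, a minimal sketch follows; it reads the exponent as $(|E| - E_c)/\Delta$ (an assumption about the reconstructed formula), and the parameter values are placeholders, not values from the paper.

```python
import numpy as np

def mobility(E, mu0=1.0, Ec=1.0, Delta=0.2):
    """Intrinsic mobility mu(E) = mu0 * exp(-exp((|E| - Ec) / Delta)).

    mu0, Ec and Delta are illustrative placeholder values only.
    """
    return mu0 * np.exp(-np.exp((np.abs(E) - Ec) / Delta))

# The mobility falls off sharply but continuously beyond the nominal edge Ec.
E = np.linspace(0.0, 2.0, 9)
print(np.round(mobility(E), 6))
```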

Relevance:

20.00%

Publisher:

Abstract:

The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. This then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in $\binom{m}{K}$ ways ($m > K$) or $\binom{K}{m}$ ways ($K > m$). (All possible assignments are feasible, i.e., a region can contain 0, 1, ..., m concentrators.) Each possible assignment is assumed to represent a state of the variable-structure stochastic automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
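A highly simplified sketch of the search strategy described above appears below: a variable-structure automaton whose state probabilities are kept inversely proportional to the running average cost of each state. The helper names, the convergence handling and the cost/sampling callbacks are placeholders, not the paper's implementation.

```python
import random

def automaton_search(num_states, sample_point, cost, iterations=1000, seed=0):
    """Variable-structure stochastic automaton over 'states' (e.g. concentrator-to-
    region assignments).  Each visit samples a 'point' (a concrete concentrator
    layout) inside the chosen state, updates that state's running average cost,
    and re-weights all states inversely to their average cost."""
    rng = random.Random(seed)
    avg_cost = [None] * num_states   # running average cost per state
    visits = [0] * num_states
    for _ in range(iterations):
        state = rng.choices(range(num_states), weights=_probabilities(avg_cost))[0]
        c = cost(sample_point(state, rng))      # point chosen uniformly inside the state
        visits[state] += 1
        avg_cost[state] = c if avg_cost[state] is None else (
            avg_cost[state] + (c - avg_cost[state]) / visits[state])
    # The state the automaton keeps revisiting; a local gradient search inside it
    # would then fix the exact concentrator locations.
    return max(range(num_states), key=lambda s: _probabilities(avg_cost)[s])

def _probabilities(avg_cost):
    # Unvisited states are given the current worst cost so they stay reachable.
    worst = max((c for c in avg_cost if c is not None), default=1.0)
    inv = [1.0 / max(c if c is not None else worst, 1e-9) for c in avg_cost]
    return [w / sum(inv) for w in inv]
```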

Relevance:

20.00%

Publisher:

Abstract:

The recent trend towards minimizing the interconnections in large-scale integration (LSI) circuits has led to intensive investigation into the development of ternary circuits and the improvement of their design. The ternary multiplexer is a convenient and useful logic module which can be used as a basic building block in the design of a ternary system. This paper discusses a systematic procedure for the simplification and realization of ternary functions using ternary multiplexers as building blocks. Both single-level and multilevel multiplexing techniques are considered. The importance of the design procedure is highlighted by considering two specific applications, namely, the development of a ternary adder/subtractor and a TCD-to-ternary converter.
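A minimal sketch of single-level multiplexing for a one-variable ternary function is shown below; the function names and the example (increment modulo 3) are illustrative and not taken from the paper.

```python
def ternary_mux(select, d0, d1, d2):
    """3-to-1 ternary multiplexer: routes one of three ternary data inputs
    (values 0, 1 or 2) according to the ternary select line."""
    return (d0, d1, d2)[select]

def realize_unary(f):
    """Single-level realization of a one-variable ternary function f:
    the constants f(0), f(1), f(2) drive the three data inputs."""
    table = (f(0), f(1), f(2))
    return lambda x: ternary_mux(x, *table)

# Example: ternary increment modulo 3 realized with a single multiplexer.
inc = realize_unary(lambda x: (x + 1) % 3)
print([inc(x) for x in (0, 1, 2)])   # [1, 2, 0]
```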

Relevance:

20.00%

Publisher:

Abstract:

A method is presented for obtaining, approximately, the response covariance and probability distribution of a non-linear oscillator under Gaussian excitation. The method has similarities with the hierarchy closure and the equivalent linearization approaches, but is distinct from both. A Gaussianization technique is used to arrive at the output autocorrelation and the input-output cross-correlation; this, along with an energy equivalence criterion, is used to estimate the response distribution function. The method is applicable to both transient and steady-state response analysis under either stationary or non-stationary excitations. Good agreement has been observed between the predicted and the exact steady-state probability distribution of a Duffing oscillator under a white noise input.
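For the benchmark mentioned above, the exact stationary density of a white-noise-driven Duffing oscillator follows from the Fokker-Planck equation; in one common parameterization (assumed here, not necessarily the paper's) it reads:

$$\ddot{x} + 2\beta\dot{x} + \omega_0^2 x + \gamma x^3 = w(t), \qquad \mathrm{E}[w(t)\,w(t+\tau)] = 2D\,\delta(\tau),$$

$$p_s(x, \dot{x}) = C \exp\!\left[-\frac{2\beta}{D}\left(\frac{\dot{x}^2}{2} + \frac{\omega_0^2 x^2}{2} + \frac{\gamma x^4}{4}\right)\right].$$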

Relevance:

20.00%

Publisher:

Abstract:

Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one of the methods to address such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability being represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with a number of GCMs. An imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
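A minimal sketch of the ensemble step described above is given below: GCM-wise empirical CDFs on a common grid, their weighted mean, and the lower/upper envelope that an imprecise (interval-valued) CDF would bound. The function names, the equal-weight fallback and the toy data are illustrative assumptions.

```python
import numpy as np

def empirical_cdf(samples, grid):
    """Empirical CDF of one GCM's downscaled rainfall, evaluated on a common grid."""
    samples = np.sort(np.asarray(samples))
    return np.searchsorted(samples, grid, side="right") / samples.size

def ensemble_cdfs(gcm_samples, grid, weights=None):
    """Weighted-mean CDF plus the lower/upper envelope over all GCM CDFs."""
    cdfs = np.array([empirical_cdf(s, grid) for s in gcm_samples])
    if weights is None:                      # equal weights if no skill-based weights given
        weights = np.full(len(gcm_samples), 1.0 / len(gcm_samples))
    return weights @ cdfs, cdfs.min(axis=0), cdfs.max(axis=0)

# Toy usage with three synthetic "GCM" rainfall samples.
rng = np.random.default_rng(0)
gcms = [rng.gamma(shape=k, scale=100.0, size=500) for k in (8, 9, 10)]
grid = np.linspace(0.0, 2000.0, 50)
mean_cdf, lower_env, upper_env = ensemble_cdfs(gcms, grid)
```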

Relevance:

20.00%

Publisher:

Abstract:

We explore the semi-classical structure of the Wigner functions $\Psi(q, p)$ representing bound energy eigenstates $|\psi\rangle$ for systems with $f$ degrees of freedom. If the classical motion is integrable, the classical limit of $\Psi$ is a delta function on the $f$-dimensional torus to which classical trajectories corresponding to $|\psi\rangle$ are confined in the $2f$-dimensional phase space. In the semi-classical limit ($\hslash$ small but not zero) the delta function softens to a peak of order $\hslash^{-\frac{2}{3}f}$ and the torus develops fringes of a characteristic 'Airy' form. Away from the torus, $\Psi$ can have semi-classical singularities that are not delta functions; these are discussed (in full detail when $f = 1$) using Thom's theory of catastrophes. Brief consideration is given to problems raised when $\Psi$ is calculated in a representation based on operators derived from angle coordinates and their conjugate momenta. When the classical motion is non-integrable, the phase space is not filled with tori and existing semi-classical methods fail. We conjecture that: (a) for a given value of the non-integrability parameter $\epsilon$, the system passes through three semi-classical regimes as $\hslash$ diminishes; (b) for states $|\psi\rangle$ associated with regions in phase space filled with irregular trajectories, $\Psi$ will be a random function confined near that region of the 'energy shell' explored by these trajectories (this region has more than $f$ dimensions); (c) for $\epsilon \neq 0$, $\hslash$ blurs the infinitely fine classical path structure, in contrast to the integrable case $\epsilon = 0$, where $\hslash$ imposes oscillatory quantum detail on a smooth classical path structure.
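For reference, the Wigner function discussed above can be written, in one degree of freedom and in one standard convention (the $f$-dimensional case replaces the single integral by an $f$-fold one), as

$$\Psi(q, p) = \frac{1}{2\pi\hslash}\int_{-\infty}^{\infty} \psi^{*}\!\left(q + \tfrac{x}{2}\right)\,\psi\!\left(q - \tfrac{x}{2}\right)\, e^{\,i p x/\hslash}\,\mathrm{d}x.$$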

Relevance:

20.00%

Publisher:

Abstract:

Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
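A minimal sketch of the decoding step described above follows: Viterbi over a linear chain with two labels (dry/wet), where the per-day emission and transition scores would come from the fitted CRF feature weights. The score arrays here are placeholders, not the paper's model.

```python
import numpy as np

def viterbi(emission_scores, transition_scores):
    """Most likely label sequence for a linear-chain model (scores in log space).

    emission_scores:   (T, L) array, score of label l on day t (from CRF features).
    transition_scores: (L, L) array, score of moving from label i to label j.
    Returns the arg-max label sequence of length T (e.g. 0 = dry, 1 = wet).
    """
    T, L = emission_scores.shape
    delta = np.zeros((T, L))             # best score of any path ending in label l at day t
    back = np.zeros((T, L), dtype=int)   # back-pointers for path recovery
    delta[0] = emission_scores[0]
    for t in range(1, T):
        cand = delta[t - 1][:, None] + transition_scores   # (L, L) candidate scores
        back[t] = cand.argmax(axis=0)
        delta[t] = cand.max(axis=0) + emission_scores[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```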

Relevance:

20.00%

Publisher:

Abstract:

The PRP17 gene product is required for the second step of pre-mRNA splicing reactions. The C-terminal half of this protein bears four repeat units with homology to the beta-transducin repeat. Missense mutations in three temperature-sensitive prp17 mutants map to a region in the N-terminal half of the protein. We have generated, in vitro, 11 missense alleles at the beta-transducin repeat units and find that only one affects function in vivo. A phenotypically silent missense allele at the fourth repeat unit enhances the slow-growing phenotype conferred by an allele at the third repeat, suggesting an interaction between these domains. Although many missense mutations in highly conserved amino acids lack phenotypic effects, deletion analysis suggests an essential role for these units. Only mutations in the N-terminal nonconserved domain of PRP17 are synthetically lethal in combination with mutations in PRP16 and PRP18, which encode two other gene products required for the second splicing reaction. A mutually allele-specific interaction between prp17 and snr7 alleles, the latter carrying mutations in U5 snRNA, was observed. We therefore suggest that the functional region of Prp17p that interacts with Prp18p, Prp16p, and U5 snRNA is the N-terminal region of the protein.

Relevance:

20.00%

Publisher:

Abstract:

Investigations on the structure and function of hemoglobin (Hb) confined inside sol-gel template-synthesized silica nanotubes (SNTs) are discussed here. Immobilization of hemoglobin inside SNTs resulted in enhanced direct electron transfer during an electrochemical reaction. The extent of the influence of nanoconfinement on protein activity is further probed via ligand binding and thermal stability studies. Electrochemical investigations show reversible binding of n-donor liquid ligands, such as pyridine and its derivatives, and the predictive variation in their redox potentials suggests an absence of any adverse effect on the structure and function of Hb confined inside the nanometer-sized channels of SNTs. Immobilization also resulted in enhanced thermal stability of Hb: the melting or denaturation temperature of Hb immobilized inside SNTs increases by approximately 4 degrees C compared with that of free Hb in solution.

Relevance:

20.00%

Publisher:

Abstract:

We study the problem of matching applicants to jobs under one-sided preferences: that is, each applicant ranks a non-empty subset of jobs in order of preference, possibly with ties. A matching M is said to be more popular than T if the applicants that prefer M to T outnumber those that prefer T to M. A matching is said to be popular if there is no matching more popular than it. Equivalently, a matching M is popular if phi(M, T) >= phi(T, M) for all matchings T, where phi(X, Y) is the number of applicants that prefer X to Y. Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function phi that compares two matchings generalizes in a natural manner to mixed matchings by taking expectations. A mixed matching P is popular if phi(P, Q) >= phi(Q, P) for all mixed matchings Q. We show that popular mixed matchings always exist, and we design polynomial-time algorithms for finding them. We then study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem.
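A minimal sketch of the comparison function phi used above, written for two ordinary (deterministic) matchings given the applicants' preference ranks; the dictionaries and the rank encoding are illustrative assumptions.

```python
def phi(M, T, ranks):
    """Number of applicants who strictly prefer matching M to matching T.

    M, T:   dicts mapping applicant -> assigned job (missing key = unmatched).
    ranks:  dict mapping applicant -> {job: rank}; lower rank means more preferred,
            and being unmatched is treated as worse than any ranked job.
    """
    def rank_of(applicant, job):
        return ranks[applicant].get(job, float("inf")) if job is not None else float("inf")

    return sum(1 for a in ranks if rank_of(a, M.get(a)) < rank_of(a, T.get(a)))

# M is more popular than T exactly when phi(M, T, ranks) > phi(T, M, ranks).
```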

Relevance:

20.00%

Publisher:

Abstract:

Grain misorientation was studied in relation to the nearest neighbor's mutual distance using electron back-scattered diffraction measurements. The misorientation correlation function was defined as the probability density for the occurrence of a certain misorientation between pairs of grains separated by a certain distance. Scale-invariant spatial correlation between neighbor grains was manifested by a power law dependence of the preferred misorientation vs. inter-granular distance in various materials after diverse strain paths. The obtained negative scaling exponents were in the range of -2 +/- 0.3 for high-angle grain boundaries. The exponent decreased in the presence of low-angle grain boundaries or dynamic recrystallization, indicating faster decay of correlations. The correlations vanished in annealed materials. The results were interpreted in terms of lattice incompatibility and continuity conditions at the interface between neighboring grains. Grain-size effects on texture development, as well as the implications of such spatial correlations on texture modeling, were discussed.
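A minimal sketch of how such a misorientation correlation function could be tabulated from grain-pair data (distance binning followed by a normalized misorientation histogram per bin); the array names and bin choices are placeholders.

```python
import numpy as np

def misorientation_correlation(distances, misorientations, dist_bins, angle_bins):
    """Probability density of the misorientation angle, conditioned on the
    inter-granular distance bin: one normalized histogram per distance bin.

    dist_bins, angle_bins: arrays of bin edges.
    """
    distances = np.asarray(distances)
    misorientations = np.asarray(misorientations)
    bin_index = np.digitize(distances, dist_bins)
    densities = []
    for b in range(1, len(dist_bins)):
        angles = misorientations[bin_index == b]
        if angles.size:
            hist, _ = np.histogram(angles, bins=angle_bins, density=True)
        else:
            hist = np.zeros(len(angle_bins) - 1)   # empty distance bin
        densities.append(hist)
    return np.array(densities)   # shape: (number of distance bins, number of angle bins)
```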

Relevance:

20.00%

Publisher:

Abstract:

Learning automata are adaptive decision making devices that are found useful in a variety of machine learning and pattern recognition applications. Although most learning automata methods deal with the case of finitely many actions for the automaton, there are also models of continuous-action-set learning automata (CALA). A team of such CALA can be useful in stochastic optimization problems where one has access only to noise-corrupted values of the objective function. In this paper, we present a novel formulation for noise-tolerant learning of linear classifiers using a CALA team. We consider the general case of nonuniform noise, where the probability that the class label of an example is wrong may be a function of the feature vector of the example. The objective is to learn the underlying separating hyperplane given only such noisy examples. We present an algorithm employing a team of CALA and prove, under some conditions on the class conditional densities, that the algorithm achieves noise-tolerant learning as long as the probability of wrong label for any example is less than 0.5. We also present some empirical results to illustrate the effectiveness of the algorithm.
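A simplified sketch in the spirit of a continuous-action learning-automata team is given below: each learner keeps a Gaussian over its own coordinate, the team samples a joint action, and the means are nudged against a score-function estimate of the gradient built from two noisy objective evaluations. This is an illustrative stochastic-search sketch under those assumptions, not the exact CALA update rule analyzed in the paper.

```python
import numpy as np

def cala_team_optimize(noisy_objective, dim, steps=5000, lr=0.05, sigma=0.3, seed=0):
    """Team of `dim` continuous-action learners minimizing a noisy objective.

    Each learner keeps a Gaussian N(mu_i, sigma^2) over its own coordinate; the
    team samples a joint action x, and every mean is moved against the estimate
    (f(x) - f(mu)) * (x - mu) / sigma^2 of the gradient at mu.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    for _ in range(steps):
        x = mu + sigma * rng.standard_normal(dim)        # joint sampled action
        f_x, f_mu = noisy_objective(x), noisy_objective(mu)
        mu -= lr * (f_x - f_mu) * (x - mu) / sigma**2    # move means toward lower cost
    return mu

# Toy usage: noisy quadratic whose minimizer is the all-ones vector; a noise-corrupted
# linear-classifier loss could be plugged in the same way.
noise_rng = np.random.default_rng(1)
noisy = lambda w: float(np.sum((w - 1.0) ** 2) + 0.1 * noise_rng.standard_normal())
print(np.round(cala_team_optimize(noisy, dim=3), 2))
```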