107 results for simple algorithms


Relevance: 20.00%

Abstract:

A simple extended finite field nuclear relaxation procedure for calculating vibrational contributions to degenerate four-wave mixing (also known as the intensity-dependent refractive index) is presented. As a by-product one also obtains the static vibrationally averaged linear polarizability, as well as the first and second hyperpolarizability. The methodology is validated by illustrative calculations on the water molecule. Further possible extensions are suggested.
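
The finite field idea itself is easy to illustrate numerically: central differences of a field-dependent energy recover the static (hyper)polarizabilities. The model energy and parameter values below are made-up illustrations, not results from the paper.

```python
# Illustrative model energy of a molecule in a uniform static field F
# (arbitrary units; all parameter values below are invented):
# E(F) = E0 - mu*F - (1/2)*alpha*F**2 - (1/6)*beta*F**3 - (1/24)*gamma*F**4
E0, mu, alpha, beta, gamma = -76.0, 0.73, 9.8, -11.0, 980.0

def energy(F):
    return E0 - mu*F - 0.5*alpha*F**2 - beta*F**3/6 - gamma*F**4/24

h = 0.002  # finite field step

# Central differences of E(F) recover alpha = -d2E/dF2 and
# beta = -d3E/dF3 evaluated at zero field:
alpha_fd = -(energy(h) - 2*energy(0.0) + energy(-h)) / h**2
beta_fd = -(energy(2*h) - 2*energy(h) + 2*energy(-h) - energy(-2*h)) / (2*h**3)
```

The step size trades truncation error (too large) against round-off in the energy differences (too small), which is why finite field procedures are usually validated on small systems first.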

Relevance: 20.00%

Abstract:

In the static field limit, the vibrational hyperpolarizability consists of two contributions due to: (1) the shift in the equilibrium geometry (known as nuclear relaxation), and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.

Relevance: 20.00%

Abstract:

Our new simple method for calculating accurate Franck-Condon factors including nondiagonal (i.e., mode-mode) anharmonic coupling is used to simulate the C2H4+ X̃2B3u ← C2H4 X̃1Ag band in the photoelectron spectrum. An improved vibrational basis set truncation algorithm, which permits very efficient computations, is employed. Because the torsional mode is highly anharmonic it is separated from the other modes and treated exactly. All other modes are treated through second-order perturbation theory. The perturbation-theory corrections are significant and lead to good agreement with experiment, although the separability assumption for torsion causes the C2D4 results to be not as good as those for C2H4. A variational formulation to overcome this circumstance, and to deal with large anharmonicities in general, is suggested.

Relevance: 20.00%

Abstract:

We present a method for analyzing the curvature (second derivatives) of the conical intersection hyperline at an optimized critical point. Our method uses the projected Hessians of the degenerate states after elimination of the two branching space coordinates, and is equivalent to a frequency calculation on a single Born-Oppenheimer potential-energy surface. Based on the projected Hessians, we develop an equation for the energy as a function of a set of curvilinear coordinates where the degeneracy is preserved to second order (i.e., the conical intersection hyperline). The curvature of the potential-energy surface in these coordinates is the curvature of the conical intersection hyperline itself, and thus determines whether one has a minimum or saddle point on the hyperline. The equation used to classify optimized conical intersection points depends in a simple way on the first- and second-order degeneracy splittings calculated at these points. As an example, for fulvene, we show that the two optimized conical intersection points of C2v symmetry are saddle points on the intersection hyperline. Accordingly, there are further intersection points of lower energy, and one of C2 symmetry, presented here for the first time, is found to be the global minimum in the intersection space.
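
The classification step can be sketched numerically: project two branching-space vectors out of a Hessian and inspect the signs of the remaining eigenvalues. The matrix and vectors below are random stand-ins, not fulvene data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # toy number of internal coordinates

# Hypothetical symmetric Hessian and two branching-space vectors g and h
H = rng.standard_normal((n, n))
H = 0.5 * (H + H.T)
g, h = rng.standard_normal(n), rng.standard_normal(n)

# Orthonormal basis of the branching space; projector onto its complement
B, _ = np.linalg.qr(np.column_stack([g, h]))
P = np.eye(n) - B @ B.T

# Projected Hessian: two eigenvalues vanish (the removed branching
# directions); the signs of the rest classify the optimized point on the
# intersection hyperline (all positive -> minimum, any negative -> saddle).
evals = np.linalg.eigvalsh(P @ H @ P)
hyperline_evals = evals[np.abs(evals) > 1e-8]
is_minimum = bool(np.all(hyperline_evals > 0))
```

The (n - 2)-dimensional eigenvalue spectrum plays the same role as the frequencies in an ordinary stationary-point characterization on a single surface.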

Relevance: 20.00%

Abstract:

Annotation of protein-coding genes is a key goal of genome sequencing projects. In spite of tremendous recent advances in computational gene finding, comprehensive annotation remains a challenge. Peptide mass spectrometry is a powerful tool for researching the dynamic proteome and suggests an attractive approach to discover and validate protein-coding genes. We present algorithms to construct and efficiently search spectra against a genomic database, with no prior knowledge of encoded proteins. By searching a corpus of 18.5 million tandem mass spectra (MS/MS) from human proteomic samples, we validate 39,000 exons and 11,000 introns at the level of translation. We present translation-level evidence for novel or extended exons in 16 genes, confirm translation of 224 hypothetical proteins, and discover or confirm over 40 alternative splicing events. Polymorphisms are efficiently encoded in our database, allowing us to observe variant alleles for 308 coding SNPs. Finally, we demonstrate the use of mass spectrometry to improve automated gene prediction, adding 800 correct exons to our predictions using a simple rescoring strategy. Our results demonstrate that proteomic profiling should play a role in any genome sequencing project.
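
The search space such algorithms index (peptides encoded anywhere in the genome, with no prior annotation) can be illustrated by a minimal six-frame translation sketch. The paper's pipeline additionally handles splicing across introns, spectrum indexing, polymorphisms, and scoring, none of which is shown here.

```python
# Standard genetic code packed in TCAG order
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {a + b + c: AA[16 * i + 4 * j + k]
         for i, a in enumerate(BASES)
         for j, b in enumerate(BASES)
         for k, c in enumerate(BASES)}

def revcomp(dna):
    """Reverse complement of a DNA string."""
    return dna.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def six_frame_peptides(dna, min_len=2):
    """Collect stop-to-stop peptides from all six reading frames."""
    peps = set()
    for strand in (dna, revcomp(dna)):
        for frame in range(3):
            aas = "".join(CODON.get(strand[i:i + 3], "X")
                          for i in range(frame, len(strand) - 2, 3))
            for pep in aas.split("*"):  # '*' marks a stop codon
                if len(pep) >= min_len:
                    peps.add(pep)
    return peps
```

For example, `six_frame_peptides("ATGGCTTAA")` includes the forward-frame peptide "MA"; spectra would then be searched against an index built over such candidate peptides.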

Relevance: 20.00%

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
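
A stripped-down version of the scan can be sketched as follows. Exons are triples (acceptor, donor, score), and a running maximum over already-closed exons plays the role of the stored highest scoring gene. Frame compatibility and the Gene Model are omitted here (the full algorithm keeps one such maximum per feature class), and this sketch sorts internally, so it is O(n log n) rather than linear unless the exon lists arrive pre-sorted.

```python
def best_gene_score(exons):
    """exons: list of (acceptor, donor, score) with acceptor <= donor.
    Returns the score of the best chain of nonoverlapping exons."""
    n = len(exons)
    by_acc = sorted(range(n), key=lambda i: exons[i][0])
    by_don = sorted(range(n), key=lambda i: exons[i][1])
    ending = [0.0] * n   # best gene score ending in exon i
    best_prefix = 0.0    # best score among exons whose donor is already passed
    j = 0
    for i in by_acc:
        acc, don, score = exons[i]
        # Fold in every exon whose donor lies strictly before this acceptor;
        # each exon is "closed" exactly once, so the scan is linear.
        while j < n and exons[by_don[j]][1] < acc:
            best_prefix = max(best_prefix, ending[by_don[j]])
            j += 1
        ending[i] = best_prefix + score
    return max(ending, default=0.0)
```

For instance, with exons (0, 10, 5.0), (12, 20, 4.0) and (5, 15, 8.0), the best chain is the first two exons (score 9.0), since the third overlaps both.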

Relevance: 20.00%

Abstract:

In this paper, I consider a general and informationally efficient approach to determine the optimal access rule and show that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. My approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set, there is a unique access rule that achieves the Ramsey outcome as the unique equilibrium as long as there exists at least a mild degree of substitutability among networks' services.

Relevance: 20.00%

Abstract:

Which projects should be financed through separate non-recourse loans (or limited-liability companies) and which should be bundled into a single loan? In the presence of bankruptcy costs, this conglomeration decision trades off the benefit of co-insurance with the cost of risk contamination. This paper characterizes this tradeoff for projects with binary returns, depending on the mean, variability, and skewness of returns, the bankruptcy recovery rate, the correlation across projects, the number of projects, and their heterogeneous characteristics. In some cases, separate financing dominates joint financing, even though it increases the interest rate or the probability of bankruptcy.
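
The tradeoff can be made concrete with a two-project numerical sketch (illustrative numbers only, not the paper's full model): each project pays R with probability p and 0 otherwise, and each separate loan has face value D.

```python
p = 0.8   # success probability per project
R = 1.5   # project return on success (0 on failure)
D = 1.0   # face value of debt per project

# Separate non-recourse loans: each defaults iff its own project fails.
p_default_separate = 1 - p                # 0.2 per loan

# Joint loan of 2*D on both projects: default iff combined return < 2*D.
# Here R < 2*D, so one success cannot cover both loans ("risk
# contamination"): the joint loan defaults unless both projects succeed.
p_default_joint = 1 - p**2                # 0.36

# With a larger return R_big >= 2*D, one success covers both loans
# ("co-insurance"): the joint loan defaults only if both projects fail.
R_big = 2.5
p_default_joint_coinsured = (1 - p)**2    # 0.04
```

The sign of the comparison flips with the return size, which is the binary-returns version of the co-insurance vs. risk-contamination tradeoff the abstract describes.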

Relevance: 20.00%

Abstract:

We present simple procedures for the prediction of a real-valued sequence. The algorithms are based on a combination of several simple predictors. We show that if the sequence is a realization of a bounded stationary and ergodic random process then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. We offer an analogous result for the prediction of stationary Gaussian processes.
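
One classical way to combine several simple predictors (a plausible stand-in, not necessarily the paper's exact construction) is exponential weighting under squared loss; the two "simple predictors" below are hypothetical examples.

```python
import math

def combine_predict(xs, experts, eta=0.5):
    """Predict each x_t from the history via an exponentially weighted
    average of expert predictions, down-weighting past squared errors."""
    w = [1.0] * len(experts)
    preds = []
    for t, x in enumerate(xs):
        history = xs[:t]
        guesses = [f(history) for f in experts]
        preds.append(sum(wi * g for wi, g in zip(w, guesses)) / sum(w))
        # Shrink the weight of each expert by its squared error this round
        w = [wi * math.exp(-eta * (g - x) ** 2) for wi, g in zip(w, guesses)]
    return preds

# Two simple predictors: last observed value, and the running mean
experts = [lambda h: h[-1] if h else 0.0,
           lambda h: sum(h) / len(h) if h else 0.0]
preds = combine_predict([1.0, 1.0, 1.0, 1.0, 1.0], experts)
```

On a constant sequence both experts lock on after the first observation, so the combined prediction is exact from the second step onward.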

Relevance: 20.00%

Abstract:

Hierarchical clustering is a popular method for finding structure in multivariate data, resulting in a binary tree constructed on the particular objects of the study, usually sampling units. The user faces the decision where to cut the binary tree in order to determine the number of clusters to interpret, and there are various ad hoc rules for arriving at a decision. A simple permutation test is presented that diagnoses whether non-random levels of clustering are present in the set of objects and, if so, indicates the specific level at which the tree can be cut. The test is validated against random matrices to verify the type I error probability, and a power study is performed on data sets with known clusteredness to study the type II error.
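
A minimal version of such a test might look as follows. The statistic and permutation scheme here are plausible stand-ins, not necessarily the paper's: column-wise shuffling preserves each variable's marginal distribution while destroying multivariate structure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def cluster_pvalue(X, n_perm=199, seed=0):
    """Permutation test for non-random clustering structure.

    Statistic: height gap between the final two merges of a Ward tree,
    which is large when the data split into two well-separated groups.
    """
    rng = np.random.default_rng(seed)

    def stat(M):
        heights = linkage(M, method="ward")[:, 2]
        return heights[-1] - heights[-2]

    obs = stat(X)
    hits = sum(stat(np.column_stack([rng.permutation(col) for col in X.T])) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Two well-separated Gaussian blobs: structure should be flagged
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(10, 2)),
               rng.normal(5.0, 0.3, size=(10, 2))])
p = cluster_pvalue(X)
```

Extending the idea to every level of the tree, rather than only the top split, is what lets the test indicate where to cut.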

Relevance: 20.00%

Abstract:

Working Paper no longer available. Please contact the author.

Relevance: 20.00%

Abstract:

We propose a simple adaptive procedure for playing a game. In this procedure, players depart from their current play with probabilities that are proportional to measures of regret for not having used other strategies (these measures are updated every period). It is shown that our adaptive procedure guarantees that, with probability one, the sample distributions of play converge to the set of correlated equilibria of the game. To compute these regret measures, a player needs to know his payoff function and the history of play. We also offer a variation where every player knows only his own realized payoff history (but not his payoff function).
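
A close cousin of the procedure, plain regret matching (mixing over actions in proportion to positive cumulative regret, without the departure-from-current-play inertia described above), is easy to sketch; the rock-paper-scissors payoff below is just an illustration.

```python
import random

def regret_matching(payoff, n_actions, T=5000, seed=0):
    """Repeated play of a two-player game given as payoff[a][b] = (u1, u2).
    Each player mixes over actions in proportion to positive cumulative
    regret; returns the empirical joint distribution of play."""
    rng = random.Random(seed)
    regret = [[0.0] * n_actions for _ in range(2)]
    counts = {}
    for _ in range(T):
        acts = []
        for p in (0, 1):
            pos = [max(r, 0.0) for r in regret[p]]
            if sum(pos) > 0:
                acts.append(rng.choices(range(n_actions), weights=pos)[0])
            else:
                acts.append(rng.randrange(n_actions))
        a, b = acts
        counts[(a, b)] = counts.get((a, b), 0) + 1
        u1, u2 = payoff[a][b]
        # Regret for not having played each alternative against the same opponent move
        for alt in range(n_actions):
            regret[0][alt] += payoff[alt][b][0] - u1
            regret[1][alt] += payoff[a][alt][1] - u2
    return {k: v / T for k, v in counts.items()}

# Zero-sum rock-paper-scissors: the marginal play frequencies approach 1/3
W = {(0, 1): -1, (1, 0): 1, (1, 2): -1, (2, 1): 1, (2, 0): -1, (0, 2): 1}
rps = [[(W.get((a, b), 0), -W.get((a, b), 0)) for b in range(3)]
       for a in range(3)]
dist = regret_matching(rps, 3)
```

Note that each player only needs the history of play and its own payoff function, matching the informational assumptions of the abstract's main procedure.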

Relevance: 20.00%

Abstract:

The effectiveness of decision rules depends on characteristics of both rules and environments. A theoretical analysis of environments specifies the relative predictive accuracies of the lexicographic rule 'take-the-best' (TTB) and other simple strategies for binary choice. We identify three factors: how the environment weights variables; characteristics of choice sets; and error. For cases involving from three to five binary cues, TTB is effective across many environments. However, hybrids of equal weights (EW) and TTB models are more effective as environments become more compensatory. In the presence of error, TTB and similar models do not predict much better than a naïve model that exploits dominance. We emphasize psychological implications and the need for more complete theories of the environment that include the role of error.
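
The two strategy families compared above are tiny to implement; a minimal sketch with binary cues (cue names hypothetical):

```python
def take_the_best(a, b, cue_order):
    """Lexicographic TTB: decide on the first cue (in validity order)
    whose values differ; fall through to a tie if none discriminates."""
    for cue in cue_order:
        if a[cue] != b[cue]:
            return "a" if a[cue] > b[cue] else "b"
    return "tie"

def equal_weights(a, b):
    """EW: sum the binary cues and pick the larger total."""
    sa, sb = sum(a.values()), sum(b.values())
    return "a" if sa > sb else ("b" if sb > sa else "tie")

# The rules can disagree: TTB follows the single best cue, EW the tally.
x = {"size": 1, "fame": 0, "capital": 0}
y = {"size": 0, "fame": 1, "capital": 1}
```

In a compensatory environment the EW tally tends to win; in a noncompensatory one the top cue dominates and TTB's choice is the right one, which is the dependence on environment structure the analysis formalizes.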

Relevance: 20.00%

Abstract:

In this paper we attempt to describe the general reasons behind the world population explosion in the 20th century. The size of the population at the end of the century in question, deemed excessive by some, was a consequence of a dramatic improvement in life expectancies, attributable, in turn, to scientific innovation, the circulation of information and economic growth. Nevertheless, fertility is a variable that plays a crucial role in differences in demographic growth. We identify infant mortality, female education levels and racial identity as important exogenous variables affecting fertility. It is estimated that in poor countries one additional year of primary schooling for women leads to 0.614 fewer children per couple on average (worldwide). While it may be possible to identify a global tendency towards convergence in demographic trends, particular attention should be paid to the case of Africa, not only due to its different demographic patterns, but also because much of the continent's population has yet to experience the improvement in quality of life generally enjoyed across the rest of the planet.