967 results for Function theory


Relevance: 30.00%

Abstract:

Cloud-point curves reported for the system polyethersulfone (PES)/phenoxy were calculated by means of the Sanchez-Lacombe (SL) lattice fluid theory. The one adjustable parameter ε12*/k (quantifying the interaction energy between mers of the different components) can be evaluated by comparison of the theoretical and experimental phase diagrams. The Flory-Huggins (FH) interaction parameters are computed based on the evaluated ε12*/k and are approximately a linear function of volume fraction and of inverse temperature. The calculated enthalpies of mixing of PES/phenoxy blends for different compositions are consistent with the experimental values obtained previously by Singh and Walsh [1].
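The reported form of the FH parameter, approximately linear in volume fraction and inverse temperature, amounts to a three-coefficient least-squares problem. A minimal sketch with synthetic data; the coefficients below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Fit chi(phi, T) = c0 + c1*phi + c2/T by least squares.
# The "true" coefficients here are assumptions for the demo only.
rng = np.random.default_rng(1)
phi = rng.uniform(0.1, 0.9, 50)         # volume fractions
T = rng.uniform(400.0, 500.0, 50)       # temperatures (K)
chi = 0.02 + 0.05 * phi + 12.0 / T      # synthetic, noise-free data
A = np.column_stack([np.ones_like(phi), phi, 1.0 / T])
coef, *_ = np.linalg.lstsq(A, chi, rcond=None)
```

With noise-free data the fit recovers the generating coefficients essentially exactly; with experimental scatter the same design matrix gives the best linear-in-(φ, 1/T) approximation.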

Relevance: 30.00%

Abstract:

General expressions used for transforming raw laser-induced fluorescence (LIF) intensity into the population and alignment parameters of a symmetric top molecule are derived by employing the density matrix approach. The molecular population and alignment are described by molecular state multipoles. The results are presented for a general excitation-detection geometry and then applied to some special geometries. In general cases, the LIF intensity is a complex function of the initial molecular state multipoles, the dynamic factors and the excitation-detection geometrical factors. It contains a population and 14 alignment multipoles. How to extract all initial state multipoles from the rotationally unresolved emission LIF intensity is discussed in detail.

Relevance: 30.00%

Abstract:

Expressions used for extracting the population and alignment parameters of a symmetric top molecule from (n + 1) laser-induced fluorescence (LIF) are derived by employing the tensor density matrix method. The molecular population and alignment are described by molecular state multipoles. The LIF intensity is a complex function of the initial molecular state multipoles, the dynamic factors, and the excitation-detection geometrical factors. The problem of how to extract the initial molecular state multipoles from (2 + 1) LIF, as an example, is discussed in detail. (C) 2000 American Institute of Physics. [S0021-9606(00)30744-9].

Relevance: 30.00%

Abstract:

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
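The regularization-to-RBF construction can be illustrated with a minimal sketch: Gaussian radial units at fixed centers, with output weights from a ridge-regularized least-squares solve. This is a generic RBF fit under assumed settings (fixed centers, simple ridge term), not the paper's full GRBF machinery, which also adapts the centers:

```python
import numpy as np

def rbf_fit(X, y, centers, width, reg=1e-6):
    """Output weights of an RBF network by regularized least squares:
    solve (G^T G + reg*I) w = G^T y for the Gaussian design matrix G."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * width ** 2))
    return np.linalg.solve(G.T @ G + reg * np.eye(len(centers)), G.T @ y)

def rbf_predict(X, centers, width, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w

# Toy 1-D hypersurface reconstruction: approximate sin(x) from samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(40, 1))
y = np.sin(X[:, 0])
centers = np.linspace(0.0, 2.0 * np.pi, 10)[:, None]
w = rbf_fit(X, y, centers, width=0.8)
err = np.abs(rbf_predict(X, centers, 0.8, w) - y).max()
```

The regularization parameter trades data fit against smoothness of the reconstructed surface, exactly the trade-off regularization theory formalizes.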

Relevance: 30.00%

Abstract:

Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
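The "unreliable examples" extension can be imitated generically by replacing the quadratic loss with a robust one. The sketch below uses Huber-style iteratively reweighted least squares, a standard robust-statistics device chosen here for illustration, not the specific functional proposed in the note:

```python
import numpy as np

def huber_irls(X, y, delta=1.0, iters=50):
    """Robust linear fit: iteratively reweighted least squares with
    Huber weights, so large residuals (outliers) are down-weighted."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # plain least-squares start
    for _ in range(iters):
        r = y - X @ w
        s = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        Xw = X * s[:, None]                    # row-weighted design matrix
        w = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return w

# Data on the line y = 2x + 0.5, with one corrupted ("unreliable") example.
x = np.linspace(0.0, 1.0, 20)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 0.5
y[10] += 10.0
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]    # pulled off by the outlier
w_rob = huber_irls(X, y, delta=0.5)            # nearly recovers the line
```

The robust fit recovers the slope far more accurately than ordinary least squares, which is the practical point of down-weighting sparse-data outliers.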

Relevance: 30.00%

Abstract:

I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are:

- the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems;
- these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b);
- HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry.

The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.

Relevance: 30.00%

Abstract:

Gatherer, D., and McEwan, N.R. (2003). Analysis of sequence periodicity in E. coli proteins: empirical investigation of the 'duplication and divergence' theory of protein evolution. Journal of Molecular Evolution 57, 149-158.

Relevance: 30.00%

Abstract:

Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.

Relevance: 30.00%

Abstract:

Based on Pulay's direct inversion iterative subspace (DIIS) approach, we present a method to accelerate self-consistent field (SCF) convergence. In this method, the quadratic augmented Roothaan-Hall (ARH) energy function, proposed recently by Høst and co-workers [J. Chem. Phys. 129, 124106 (2008)], is used as the object of minimization for obtaining the linear coefficients of Fock matrices within DIIS. This differs from the traditional DIIS of Pulay, which uses an objective function derived from the commutator of the density and Fock matrices. Our results show that the present algorithm, abbreviated ADIIS, is more robust and efficient than the energy-DIIS (EDIIS) approach. In particular, several examples demonstrate that the combination of ADIIS and DIIS ("ADIIS+DIIS") is highly reliable and efficient in accelerating SCF convergence.
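For reference, the commutator-based DIIS step that ADIIS modifies solves a small constrained least-squares problem: find coefficients c_i, summing to 1, that minimize the norm of the combined error vector, via a bordered (Lagrange-multiplier) linear system. A minimal sketch with generic error vectors standing in for the actual commutator residuals:

```python
import numpy as np

def diis_coefficients(errors):
    """Pulay DIIS: coefficients c with sum(c) = 1 that minimize
    || sum_i c_i e_i ||^2, via the bordered Lagrange system B c = rhs."""
    n = len(errors)
    B = np.empty((n + 1, n + 1))
    for i, ei in enumerate(errors):
        for j, ej in enumerate(errors):
            B[i, j] = np.dot(ei.ravel(), ej.ravel())   # error overlaps
    B[n, :n] = B[:n, n] = -1.0                         # constraint border
    B[n, n] = 0.0
    rhs = np.zeros(n + 1)
    rhs[n] = -1.0                                      # enforces sum(c) = 1
    return np.linalg.solve(B, rhs)[:n]

# Two orthogonal unit error vectors: by symmetry the weights are equal.
c = diis_coefficients([np.array([1.0, 0.0]), np.array([0.0, 1.0])])
# -> [0.5, 0.5]
```

The extrapolated Fock matrix is then the same linear combination of the stored Fock matrices; ADIIS replaces this norm objective with the ARH energy function.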

Relevance: 30.00%

Abstract:

© 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. A key component in calculations of exchange and correlation energies is the Coulomb operator, which requires the evaluation of two-electron integrals. For localized basis sets, these four-center integrals are most efficiently evaluated with the resolution of identity (RI) technique, which expands basis-function products in an auxiliary basis. In this work we show the practical applicability of a localized RI-variant ('RI-LVL'), which expands products of basis functions only in the subset of those auxiliary basis functions which are located at the same atoms as the basis functions. We demonstrate the accuracy of RI-LVL for Hartree-Fock calculations, for the PBE0 hybrid density functional, as well as for RPA and MP2 perturbation theory. Molecular test sets used include the S22 set of weakly interacting molecules, the G3 test set, as well as the G2-1 and BH76 test sets, and heavy elements including titanium dioxide, copper and gold clusters. Our RI-LVL implementation paves the way for linear-scaling RI-based hybrid functional calculations for large systems and for all-electron many-body perturbation theory with significantly reduced computational and memory cost.
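In its simplest form, expanding a basis-function product in an auxiliary basis is a metric-weighted least-squares fit: solve S c = t for the auxiliary overlap matrix S and the product's projections t. A 1-D grid sketch with an overlap metric (production codes typically use a Coulomb metric, and RI-LVL further restricts the auxiliary set to the atoms carrying the product):

```python
import numpy as np

def ri_expand(product, aux, dx):
    """Least-squares expansion of a basis-function product in an
    auxiliary basis (overlap metric, on a 1-D grid): solve S c = t."""
    S = aux @ aux.T * dx      # auxiliary overlaps  S_mn = <P_m|P_n>
    t = aux @ product * dx    # projections         t_m  = <P_m|f>
    return np.linalg.solve(S, t)

# 1-D toy: the product of two Gaussians is again a Gaussian, so an
# auxiliary set centred between them represents it essentially exactly.
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
g = lambda c0, a: np.exp(-a * (x - c0) ** 2)
product = g(-0.5, 1.0) * g(0.5, 1.0)             # = exp(-0.5) * exp(-2 x^2)
aux = np.array([g(0.0, a) for a in (0.5, 1.0, 2.0, 4.0)])
coef = ri_expand(product, aux, dx)
resid = np.abs(aux.T @ coef - product).max()     # fitting error on the grid
```

Because the product is exactly representable here (it equals exp(-0.5) times the a = 2 auxiliary function), the fit recovers that single coefficient and the residual is at machine-precision level.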

Relevance: 30.00%

Abstract:

Whether a small cell, a small genome or a minimal set of chemical reactions with self-replicating properties, simplicity is beguiling. As Leonardo da Vinci reportedly said, 'simplicity is the ultimate sophistication'. Two diverging views of simplicity have emerged in accounts of symbiotic and commensal bacteria and cosmopolitan free-living bacteria with small genomes. The small genomes of obligate insect endosymbionts have been attributed to genetic drift caused by small effective population sizes (Ne). In contrast, streamlining theory attributes small cells and genomes to selection for efficient use of nutrients in populations where Ne is large and nutrients limit growth. Regardless of the cause of genome reduction, lost coding potential eventually dictates loss of function. Consequences of reductive evolution in streamlined organisms include atypical patterns of prototrophy and the absence of common regulatory systems, which have been linked to difficulty in culturing these cells. Recent evidence from metagenomics suggests that streamlining is commonplace, may broadly explain the phenomenon of the uncultured microbial majority, and might also explain the highly interdependent (connected) behavior of many microbial ecosystems. Streamlining theory is belied by the observation that many successful bacteria are large cells with complex genomes. To fully appreciate streamlining, we must look to the life histories and adaptive strategies of cells, which impose minimum requirements for complexity that vary with niche.

Relevance: 30.00%

Abstract:

A many-body theory approach is developed for the problem of positron-atom scattering and annihilation. Strong electron-positron correlations are included nonperturbatively through the calculation of the electron-positron vertex function. It corresponds to the sum of an infinite series of ladder diagrams, and describes the physical effect of virtual positronium formation. The vertex function is used to calculate the positron-atom correlation potential and nonlocal corrections to the electron-positron annihilation vertex. Numerically, we make use of B-spline basis sets, which ensures rapid convergence of the sums over intermediate states. We have also devised an extrapolation procedure that allows one to achieve convergence with respect to the number of intermediate-state orbital angular momenta included in the calculations. As a test, the present formalism is applied to positron scattering and annihilation on hydrogen, where it is exact. Our results agree with those of accurate variational calculations. We also examine in detail the properties of the large correlation corrections to the annihilation vertex.
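Extrapolation over the intermediate-state orbital angular momenta can be illustrated generically: if the partial sums approach their limit as A - B/(L + 1/2)^p, a linear fit in (L + 1/2)^(-p) yields the L → ∞ value A. The power p and the synthetic partial sums below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Synthetic partial sums obeying  partial(L) = A - B/(L + 1/2)**p
# with assumed A = 1.0, B = 0.3, p = 2; a linear fit in the variable
# (L + 1/2)**(-p) recovers the L -> infinity limit A exactly.
p = 2.0
L = np.arange(4.0, 11.0)
partial = 1.0 - 0.3 / (L + 0.5) ** p
A = np.column_stack([np.ones_like(L), (L + 0.5) ** (-p)])
limit, slope = np.linalg.lstsq(A, partial, rcond=None)[0]
```

In practice one would fit the computed partial sums over the highest few L values, where the assumed asymptotic form is most accurate.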

Relevance: 30.00%

Abstract:

The continuum distorted-wave eikonal initial-state (CDW-EIS) theory of Crothers and McCann (J Phys B 1983, 16, 3229) used to describe ionization in ion-atom collisions is generalized (G) to GCDW-EIS to incorporate the azimuthal angle dependence of each CDW in the final-state wave function. This is accomplished by the analytic continuation of hydrogenic-like wave functions from below to above threshold, using parabolic coordinates and quantum numbers including magnetic quantum numbers, thus providing a more complete set of states. At impact energies lower than 25 keV u^(-1), the total ionization cross-section falls off, with decreasing energy, too quickly in comparison with experimental data. The idea behind and motivation for the GCDW-EIS model is to improve the theory with respect to experiment by including contributions from nonzero magnetic quantum numbers. We also therefore incidentally provide a new derivation of the theory of continuum distorted waves for zero magnetic quantum numbers while simultaneously generalizing it. (C) 2004 Wiley Periodicals, Inc.

Relevance: 30.00%

Abstract:

We study the influence of non-ideal boundary and initial conditions (BIC) of a temporal analysis of products (TAP) reactor model on the data (observed exit flux) analysis. The general theory of multi-response state-defining experiments for a multi-zone TAP reactor is extended and applied to model several alternative boundary and initial conditions proposed in the literature. The method used is based on the Laplace transform and the transfer matrix formalism for multi-response experiments. Two non-idealities are studied: (1) the inlet pulse not being narrow enough (gas pulse not entering the reactor in Dirac delta function shape) and (2) the outlet non-ideality due to imperfect vacuum. The effect of these non-idealities is analyzed to the first and second order of approximation. The corresponding corrections were obtained and discussed in detail. It was found that they are negligible. Therefore, the model with ideal boundary conditions is proven to be completely adequate to the description and interpretation of transport-reaction data obtained with TAP-2 reactors.
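For orientation, the ideal limit against which such non-idealities are measured is the textbook one-zone Knudsen-diffusion TAP response to a Dirac delta inlet pulse with a vacuum outlet; its dimensionless exit flux integrates to one, since every pulsed molecule eventually exits. A sketch of that single-zone standard diffusion curve (not the paper's multi-zone transfer-matrix model):

```python
import numpy as np

def tap_exit_flux(tau, nterms=200):
    """Dimensionless exit flux of the ideal one-zone Knudsen TAP model
    (Dirac delta inlet pulse, vacuum outlet):
    F(tau) = pi * sum_n (-1)^n (2n+1) exp(-(n+1/2)^2 pi^2 tau)."""
    n = np.arange(nterms)[:, None]
    terms = (-1.0) ** n * (2 * n + 1) * np.exp(
        -((n + 0.5) ** 2) * np.pi ** 2 * tau)
    return np.pi * terms.sum(axis=0)

tau = np.linspace(0.005, 6.0, 8001)
flux = tap_exit_flux(tau)
# Trapezoidal integral of the exit flux over dimensionless time:
# mass conservation requires it to be (very nearly) 1.
area = float(np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(tau)))
```

Deviations of a measured exit-flux curve from this shape are exactly what the first- and second-order BIC corrections in the paper quantify.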