910 results for Set theory.
Abstract:
The ground state properties of the Pb isotopes are studied using the axially deformed relativistic mean field (RMF) approach with the parameter set TM1. The pairing correlation is treated by the BCS method with an isospin-dependent pairing force. The 'blocking' method is used to deal with unpaired nucleons. The theoretical results show that the relativistic mean field theory with non-linear self-interactions of mesons provides a good description of the binding energy and neutron separation energy. The present paper focuses on the physical mechanism of the Pb isotope shifts.
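For orientation, the non-linear meson self-interactions mentioned above are conventionally written as cubic and quartic self-couplings of the scalar field plus a quartic vector self-coupling; the sketch below shows the generic form (the specific TM1 coupling values are not quoted in this abstract, and sign conventions vary between papers):

$$
U(\sigma) = \frac{1}{2} m_\sigma^2 \sigma^2 + \frac{1}{3} g_2 \sigma^3 + \frac{1}{4} g_3 \sigma^4,
\qquad
\mathcal{L}_{\omega,\mathrm{NL}} = \frac{1}{4} c_3 \left( \omega_\mu \omega^\mu \right)^2 .
$$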
Abstract:
By using a combinatorial screening method based on the self-consistent field theory, we investigate the equilibrium morphologies of linear ABCBA and H-shaped (AB)(2)C(BA)(2) block copolymers in two dimensions. The triangle phase diagrams of both block copolymers are constructed by systematically varying the volume fractions of blocks A, B, and C. In this study, the interaction energies between species A, B, and C are set to be equal. Four different equilibrium morphologies are identified, i.e., the lamellar phase (LAM), the hexagonal lattice phase (HEX), the core-shell hexagonal lattice phase (CSH), and the phase of two interpenetrating tetragonal lattices (TET2). For the linear ABCBA block copolymer, reflection symmetry is observed in the phase diagram except at some special grid points, and most of the grid points are occupied by the LAM morphology. However, for the H-shaped (AB)(2)C(BA)(2) block copolymer, most of the grid points in the triangle phase diagram are occupied by the CSH morphology, which is ascribed to the different chain architectures of the two block copolymers. These results may help in the design of block copolymers with different microstructures.
Abstract:
LaC3n+ (n = 0, 1, 2) clusters have been studied using the B3LYP (Becke 3-parameter Lee-Yang-Parr) density functional method. The basis set is the Dunning/Huzinaga valence double zeta for carbon and [2s2p2d] for lanthanum, denoted LANL1DZ. Four isomers are presented for each cluster; two of them are edge-binding isomers with C2v symmetry, and the other two are linear chains with C-infinity-v symmetry. Meanwhile, two spin states for each isomer, that is, singlet and triplet for LaC3+, and doublet and quartet for LaC3 and LaC32+, respectively, are also considered. Geometries, vibrational frequencies, infrared intensities, and other quantities are reported and discussed. The results indicate that for some spin states the C2v isomers are the dominant structures, while for the other spin states, linear isomers are energetically favored. (C) 1998 John Wiley & Sons, Inc.
Abstract:
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
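To illustrate the kind of network this abstract describes, the following is a minimal sketch (not the authors' implementation) of a Gaussian radial basis function approximator with fixed centers whose output weights are obtained by regularized least squares; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def fit_rbf(x, y, centers, width, lam=1e-3):
    """Fit the output weights of a Gaussian RBF network by regularized least squares."""
    # Design matrix: one Gaussian basis function per center.
    G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    # Ridge-regularized normal equations; lam plays the role of the
    # regularization (stabilizer) parameter in regularization theory.
    return np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)

def predict_rbf(x, centers, width, w):
    G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    return G @ w

# Toy usage: approximate a one-dimensional mapping from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)
centers = np.linspace(0.0, 1.0, 10)
w = fit_rbf(x, y, centers, width=0.1)
y_hat = predict_rbf(x, centers, width=0.1, w=w)
```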
Abstract:
Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
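A compact way to state the regularization view invoked above: the approximating function f minimizes a functional combining a data-fit term and a smoothness stabilizer, and replacing the usual quadratic data term with a robust loss V is one standard way to cope with unreliable examples. The functional below is a generic sketch of this idea, not the specific extension developed in the note:

$$
H[f] \;=\; \sum_{i=1}^{N} V\!\left( y_i - f(\mathbf{x}_i) \right) \;+\; \lambda \,\lVert P f \rVert^2 .
$$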
Abstract:
This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on the personal pronouns and on noun phrases used with a definite article the, this, or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge.
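The three-part focusing mechanism named above (an element in focus, ordered alternate foci, and a stack of old foci) lends itself to a very small data structure; the following is a hypothetical sketch of such a structure, not the report's actual algorithm, with all names chosen for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FocusState:
    """Illustrative focus-tracking state: an element in focus, an ordered
    collection of alternate foci, and a stack of old foci."""
    focus: Optional[str] = None
    alternates: List[str] = field(default_factory=list)
    old_foci: List[str] = field(default_factory=list)

    def shift_focus(self, new_focus: str) -> None:
        # Moving attention to a new element pushes the current focus onto
        # the stack of old foci so it can be returned to later.
        if self.focus is not None:
            self.old_foci.append(self.focus)
        self.focus = new_focus

    def return_to_old_focus(self) -> None:
        # Popping the stack restores the focus of an enclosing discourse segment.
        if self.old_foci:
            self.focus = self.old_foci.pop()
```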
Abstract:
This report describes the implementation of a theory of edge detection, proposed by Marr and Hildreth (1979). According to this theory, the image is first processed independently through a set of different size filters, whose shape is the Laplacian of a Gaussian, ***. Zero-crossings in the output of these filters mark the positions of intensity changes at different resolutions. Information about these zero-crossings is then used for deriving a full symbolic description of changes in intensity in the image, called the raw primal sketch. The theory is closely tied with early processing in the human visual systems. In this report, we first examine the critical properties of the initial filters used in the edge detection process, both from a theoretical and practical standpoint. The implementation is then used as a test bed for exploring aspects of the human visual system; in particular, acuity and hyperacuity. Finally, we present some preliminary results concerning the relationship between zero-crossings detected at different resolutions, and some observations relevant to the process by which the human visual system integrates descriptions of intensity changes obtained at different resolutions.
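The filtering-plus-zero-crossing step described above can be sketched in a few lines; this is a minimal illustration under assumed parameters (the sigma values and the sign-change test are illustrative choices), not the report's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossings(response):
    """Mark pixels where the filter response changes sign along rows or columns."""
    zc = np.zeros_like(response, dtype=bool)
    zc[:, :-1] |= np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
    zc[:-1, :] |= np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
    return zc

def multiscale_edges(image, sigmas=(1.0, 2.0, 4.0)):
    """Filter the image with Laplacian-of-Gaussian operators of several widths
    and return the zero-crossing map at each scale."""
    return {s: zero_crossings(gaussian_laplace(image, sigma=s)) for s in sigmas}
```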
Abstract:
Booth, Ken, Theory of World Security (Cambridge: Cambridge University Press, 2008), pp. xviii+489. RAE2008
Abstract:
Binding, David; Bell, D.; Walters, K. (2006) 'The oscillatory squeeze flow rheometer: comprehensive theory and a new experimental facility', Rheologica Acta 46, pp. 111-121. RAE2008
Abstract:
Various restrictions on the terms allowed for substitution give rise to different cases of semi-unification. Semi-unification on finite and regular terms has already been considered in the literature. We introduce a general case of semi-unification where substitutions are allowed on non-regular terms, and we prove the equivalence of this general case to a well-known undecidable database dependency problem, thus establishing the undecidability of general semi-unification. We present a unified way of looking at the various problems of semi-unification. We give some properties that are common to all the cases of semi-unification. We also consider the principality property and the solution set for those problems. We prove that semi-unification on general terms has the principality property. Finally, we present a recursive inseparability result between semi-unification on regular terms and semi-unification on general terms.
Abstract:
Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. This search model is algorithmically specified to quantitatively simulate search data using a single set of parameters, as well as to qualitatively explain a still larger database, including data of Aks and Enns (1992), Bravo and Blake (1990), Chelazzi, Miller, Duncan, and Desimone (1993), Egeth, Virzi, and Garbart (1984), Cohen and Ivry (1991), Enns and Rensink (1990), He and Nakayama (1992), Humphreys, Quinlan, and Riddoch (1989), Mordkoff, Yantis, and Egeth (1990), Nakayama and Silverman (1986), Treisman and Gelade (1980), Treisman and Sato (1990), Wolfe, Cave, and Franzel (1989), and Wolfe and Friedman-Hill (1992). The model hereby provides an alternative to recent variations on the Feature Integration and Guided Search models, and grounds the analysis of visual search in neural models of preattentive vision, attentive object learning and categorization, and attentive spatial localization and orientation.
Abstract:
Future high speed communications networks will transmit data predominantly over optical fibres. As consumer and enterprise computing will remain the domain of electronics, the electro-optical conversion will get pushed further downstream towards the end user. Consequently, efficient tools are needed for this conversion, and, due to many potential advantages including low cost and high output powers, long wavelength Vertical Cavity Surface Emitting Lasers (VCSELs) are a viable option. Drawbacks, such as broader linewidths than competing options, can be mitigated through additional techniques such as Optical Injection Locking (OIL), which can require significant expertise and expensive equipment. This thesis addresses these issues by removing some of the experimental barriers to achieving performance increases via remote OIL. Firstly, numerical simulations of the phase and the photon and carrier numbers of an OIL semiconductor laser allowed the classification of the stable locking phase limits into three distinct groups. The frequency detuning of constant phase values (φ) was considered, in particular φ = 0, where the modulation response parameters were shown to be independent of the linewidth enhancement factor, α. A new method to estimate α and the coupling rate in a single experiment was formulated. Secondly, a novel technique to remotely determine the locked state of a VCSEL based on voltage variations of 2 mV-30 mV during detuned injection has been developed, which can identify oscillatory and locked states. 2D and 3D maps of voltage, optical and electrical spectra illustrate the corresponding behaviours. Finally, the use of directly modulated VCSELs as light sources for passive optical networks was investigated by successful transmission of data at 10 Gbit/s over 40 km of single mode fibre (SMF) using cost-effective electronic dispersion compensation to mitigate errors due to wavelength chirp. A widely tuneable MEMS-VCSEL was established as a good candidate for an externally modulated colourless source after a record error-free transmission at 10 Gbit/s over 50 km of SMF across a 30 nm single mode tuning range. The ability to remotely set the emission wavelength using the novel methods developed in this thesis was demonstrated.
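For orientation, the coupled quantities simulated above (photon number S, phase φ, and carrier number N of the injection-locked laser) are typically evolved with rate equations of the following standard form, where g(N) is the gain, τ_p and τ_N the photon and carrier lifetimes, κ the coupling rate, Δω the frequency detuning, α the linewidth enhancement factor, and S_inj the injected photon number. This is a generic textbook sketch rather than the exact model of the thesis:

$$
\begin{aligned}
\frac{dS}{dt} &= \left[ g(N) - \frac{1}{\tau_p} \right] S + 2\kappa \sqrt{S_{\mathrm{inj}} S}\,\cos\phi, \\
\frac{d\phi}{dt} &= \frac{\alpha}{2}\left[ g(N) - \frac{1}{\tau_p} \right] - \Delta\omega - \kappa \sqrt{\frac{S_{\mathrm{inj}}}{S}}\,\sin\phi, \\
\frac{dN}{dt} &= \frac{I}{q} - \frac{N}{\tau_N} - g(N)\,S .
\end{aligned}
$$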
Abstract:
The wonder of the last century has been the rapid development of technology. One of the sectors it has touched immensely is the electronics industry. There has been exponential development in the field, and scientists are pushing new horizons. There is an increasing dependence on technology for individuals from every stratum of society. Atomic Layer Deposition (ALD) is a unique technique for growing thin films. It is widely used in the semiconductor industry. Films as thin as a few nanometers can be deposited using this technique. Although this process has been explored for a variety of oxides, sulphides and nitrides, a proper method for the deposition of many metals is missing. Metals are often used in the semiconductor industry and hence are of significant importance. A deficiency in understanding the basic chemistry of possible reactions at the nanoscale has delayed improvement in metal ALD. In this thesis, we study the intrinsic chemistry involved in Cu ALD. This work reports a computational study using Density Functional Theory as implemented in the TURBOMOLE program. Both gas-phase and surface reactions are studied in most cases. The merits and demerits of a promising transmetallation reaction are evaluated at the beginning of the study. Further improvements in the structure of precursors and co-reagents are then proposed. This has led to the proposal of metallocenes as co-reagents and Cu(I) carbene compounds as a new set of precursors. A three-step process for Cu ALD that generates a ligand-free Cu layer after every ALD pulse has also been studied. Although the chemistry has been studied under the umbrella of Cu ALD, the basic principles hold true for ALD of other metals (e.g. Co, Ni, Fe) and for other areas such as non-ALD thin film deposition and electrochemical reactions.
Abstract:
© 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. A key component in calculations of exchange and correlation energies is the Coulomb operator, which requires the evaluation of two-electron integrals. For localized basis sets, these four-center integrals are most efficiently evaluated with the resolution of identity (RI) technique, which expands basis-function products in an auxiliary basis. In this work we show the practical applicability of a localized RI-variant ('RI-LVL'), which expands products of basis functions only in the subset of those auxiliary basis functions which are located at the same atoms as the basis functions. We demonstrate the accuracy of RI-LVL for Hartree-Fock calculations, for the PBE0 hybrid density functional, as well as for RPA and MP2 perturbation theory. Molecular test sets used include the S22 set of weakly interacting molecules, the G3 test set, as well as the G2-1 and BH76 test sets, and heavy elements including titanium dioxide, copper and gold clusters. Our RI-LVL implementation paves the way for linear-scaling RI-based hybrid functional calculations for large systems and for all-electron many-body perturbation theory with significantly reduced computational and memory cost.
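To make the RI idea referenced above concrete: each product of two basis functions is expanded in an auxiliary basis {P}, so that four-center two-electron integrals reduce to two- and three-center quantities; in the localized RI-LVL variant the sum over P is restricted to auxiliary functions centered on the two atoms carrying μ and ν. The notation below is a generic sketch of this expansion, not an equation quoted from the paper:

$$
|\mu\nu) \;\approx\; \sum_{P} C^{P}_{\mu\nu}\,|P),
\qquad
(\mu\nu \mid \lambda\sigma) \;\approx\; \sum_{P,Q} C^{P}_{\mu\nu}\,(P \mid Q)\,C^{Q}_{\lambda\sigma},
$$

with P, Q restricted in RI-LVL to auxiliary functions located on the atoms of the basis-function pair being expanded.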
Abstract:
Whether a small cell, a small genome or a minimal set of chemical reactions with self-replicating properties, simplicity is beguiling. As Leonardo da Vinci reportedly said, 'simplicity is the ultimate sophistication'. Two diverging views of simplicity have emerged in accounts of symbiotic and commensal bacteria and cosmopolitan free-living bacteria with small genomes. The small genomes of obligate insect endosymbionts have been attributed to genetic drift caused by small effective population sizes (Ne). In contrast, streamlining theory attributes small cells and genomes to selection for efficient use of nutrients in populations where Ne is large and nutrients limit growth. Regardless of the cause of genome reduction, lost coding potential eventually dictates loss of function. Consequences of reductive evolution in streamlined organisms include atypical patterns of prototrophy and the absence of common regulatory systems, which have been linked to difficulty in culturing these cells. Recent evidence from metagenomics suggests that streamlining is commonplace, may broadly explain the phenomenon of the uncultured microbial majority, and might also explain the highly interdependent (connected) behavior of many microbial ecosystems. Streamlining theory is belied by the observation that many successful bacteria are large cells with complex genomes. To fully appreciate streamlining, we must look to the life histories and adaptive strategies of cells, which impose minimum requirements for complexity that vary with niche.