995 results for basis sets
Abstract:
Objective: The aims of this study were to establish the structure of the potent anticonvulsant enaminone methyl 4-(4′-bromophenyl)amino-6-methyl-2-oxocyclohex-3-en-1-oate (E139), and to determine the energetically preferred conformation of the molecule, which is responsible for the biological activity. Materials and Methods: The structure of the molecule was determined by X-ray crystallography. Theoretical ab initio calculations with different basis sets were used to compare the energies of the different enantiomers and of other structurally related compounds. Results: The X-ray crystal structure revealed two independent molecules of E139, both with absolute configuration C11(S), C12(R), and their inverse. Ab initio calculations with the 6-31G, 3-21G and STO-3G basis sets confirmed that the C11(S), C12(R) enantiomer with both substituents equatorial had the lowest energy. Compared to relevant crystal structures, the geometry of the theoretical structures shows a longer C-N and a shorter C=O distance, with more cyclohexene ring puckering in the isolated molecule. Conclusion: Based on a pharmacophoric model, it is suggested that the enaminone system HN-C=C-C=O and the 4-bromophenyl group in E139 are necessary to confer the anticonvulsant property, which could lead to the design of new and improved anticonvulsant agents. Copyright © 2003 S. Karger AG, Basel.
Abstract:
An Ab Initio/RRKM study of the reaction mechanism and product branching ratios of neutral-radical ethynyl (C2H) and cyano (CN) radical species with unsaturated hydrocarbons is performed. The reactions studied apply to cold conditions such as planetary atmospheres including Titan, the Interstellar Medium (ISM), icy bodies and molecular clouds. The reactions of C2H and CN additions to gaseous unsaturated hydrocarbons are an active area of study. NASA's Cassini/Huygens mission found a high concentration of C2H and CN from photolysis of ethyne (C2H2) and hydrogen cyanide (HCN), respectively, in the organic haze layers of the atmosphere of Titan. The reactions involved in the atmospheric chemistry of Titan lead to a vast array of larger, more complex intermediates and products and may also serve as a chemical model of Earth's primordial atmospheric conditions. The C2H and CN additions are rapid and exothermic, and often occur barrierlessly at various carbon sites of unsaturated hydrocarbons. The reaction mechanism is proposed on the basis of the resulting potential energy surface (PES), which includes all the possible intermediates and transition states that can occur, and all the products that lie on the surface. The B3LYP/6-311G(d,p) level of theory is employed to determine optimized electronic structures, moments of inertia, vibrational frequencies, and zero-point energies. These are followed by higher-level single-point CCSD(T)/cc-pVTZ calculations, including complete basis set (CBS) extrapolations for the reactants and products. A microcanonical RRKM study predicts single-collision (zero-pressure limit) rate constants for all reaction paths on the potential energy surface, which are then used to compute the branching ratios of the resulting products. These theoretical calculations are conducted either jointly or in parallel with experimental work to elucidate the chemical composition of Titan's atmosphere, the ISM, and cold celestial bodies.
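The two RRKM steps described above are easy to state compactly: a microcanonical rate constant is the transition-state sum of states divided by Planck's constant times the reactant density of states, and zero-pressure branching ratios follow by normalizing the competing rate constants. The sketch below illustrates this; all numerical inputs and channel names are placeholders, not values from the study.

```python
# Illustrative sketch of microcanonical RRKM rates and branching ratios.
H_PLANCK = 6.62607015e-34  # Planck constant, J*s

def rrkm_rate(sum_of_states, density_of_states):
    """k(E) = W‡(E) / (h * rho(E)): transition-state sum of states over
    Planck's constant times the reactant density of states (per J)."""
    return sum_of_states / (H_PLANCK * density_of_states)

def branching_ratios(rates):
    """In the zero-pressure limit, competing channels from one energized
    intermediate branch in proportion to their rate constants."""
    total = sum(rates.values())
    return {channel: k / total for channel, k in rates.items()}

# Hypothetical channels from a single energized intermediate:
k = {
    "H-loss":   rrkm_rate(5.0e4, 1.0e-10),  # placeholder W‡(E), rho(E)
    "CH3-loss": rrkm_rate(2.0e4, 1.0e-10),
}
ratios = branching_ratios(k)  # H-loss dominates, 5/7 vs 2/7
```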
Abstract:
We present a reformulation of the hairy-probe method for introducing electronic open boundaries that is appropriate for steady-state calculations involving nonorthogonal atomic basis sets. As a check on the correctness of the method we investigate a perfect atomic wire of Cu atoms and a perfect nonorthogonal chain of H atoms. For both atom chains we find that the conductance has a value of exactly one quantum unit and that this is rather insensitive to the strength of coupling of the probes to the system, provided values of the coupling are of the same order as the mean interlevel spacing of the system without probes. For the Cu atom chain we find in addition that away from the regions with probes attached, the potential in the wire is uniform, while within them it follows a predicted exponential variation with position. We then apply the method to an initial investigation of the suitability of graphene as a contact material for molecular electronics. We perform calculations on a carbon nanoribbon to determine the correct coupling strength of the probes to the graphene and obtain a conductance of about two quantum units corresponding to two bands crossing the Fermi surface. We then compute the current through a benzene molecule attached to two graphene contacts and find only a very weak current because of the disruption of the π conjugation by the covalent bond between the benzene and the graphene. In all cases we find that very strong or weak probe couplings suppress the current.
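The "quantum units" of conductance quoted above come from the Landauer picture: the conductance is the conductance quantum G0 = 2e²/h times the sum of channel transmission probabilities. A minimal illustration (the transmission values are idealized, not taken from the paper's calculations):

```python
# Sketch of the Landauer conductance formula G = G0 * sum(T_n),
# with G0 = 2 e^2 / h, the conductance quantum (~7.748e-5 S).
E_CHARGE = 1.602176634e-19   # elementary charge, C
H_PLANCK = 6.62607015e-34    # Planck constant, J*s
G0 = 2 * E_CHARGE ** 2 / H_PLANCK

def conductance(transmissions):
    """Total conductance from per-channel transmission probabilities."""
    return G0 * sum(transmissions)

# An ideal single-channel wire (like the Cu and H chains): T = 1 -> G = G0.
g_wire = conductance([1.0])
# Two bands crossing the Fermi level (like the nanoribbon): G = 2 G0.
g_ribbon = conductance([1.0, 1.0])
```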
Abstract:
The main goal of the research presented in this work is to provide some important insights about computational modeling of open-shell species. Such projects are: the investigation of the size-extensivity error in Equation-of-Motion Coupled Cluster methods, the analysis of the Long-Range corrected scheme in predicting UV-Vis spectra of Cu(II) complexes with the 4-imidazole acetate and its ethylated derivative, and the exploration of the importance of choosing a proper basis set for the description of systems such as the lithium monoxide anion. The most significant findings of this research are: (i) The contribution of the left operator to the size-extensivity error of the CR-EOMCC(2,3) approach, (ii) The cause of d-d shifts when varying the range-separation parameter and the amount of the exact exchange arising from the imbalanced treatment of localized vs. delocalized orbitals via the "tuned" CAM-B3LYP* functional, (iii) The proper acidity trend of the first-row hydrides and their lithiated analogs that may be reversed if the basis sets are not correctly selected.
Abstract:
The background to this review paper is research we have performed over recent years aimed at developing a simulation system capable of handling large-scale, real-world applications implemented in an end-to-end parallel, scalable manner. The particular focus of this paper is the use of a Level Set solid modeling geometry kernel within this parallel framework to enable automated design optimization without topological restrictions and on geometries of arbitrary complexity. Also described is another interesting application of Level Sets: their use in guiding the export of a body-conformal mesh from our basic cut-Cartesian background octree mesh; this permits third-party flow solvers to be deployed. As practical demonstrations, meshes of guaranteed quality are generated and flow-solved for a B747 in full landing configuration, and an automated optimization is performed on a cooled turbine tip geometry. Copyright © 2009 by W.N.Dawes.
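The reason a Level Set kernel imposes no topological restrictions is that a shape is represented implicitly as the region where a signed-distance-like field is negative, so Boolean operations reduce to pointwise min/max and topology changes come for free. A toy 2-D sketch (the circle primitives and test points are illustrative, not the paper's kernel):

```python
# Sketch: implicit Level Set geometry. A shape is the set of points where
# its field is negative; Booleans are pointwise min/max of fields.

def circle(cx, cy, r):
    """Signed distance field of a disc: negative inside, positive outside."""
    return lambda x, y: ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

def union(f, g):
    return lambda x, y: min(f(x, y), g(x, y))    # inside if inside either

def subtract(f, g):
    return lambda x, y: max(f(x, y), -g(x, y))   # cut g out of f

# A disc of radius 2 with a bite of radius 1 removed around (1, 0):
shape = subtract(circle(0, 0, 2), circle(1, 0, 1))
inside = shape(-1.0, 0.0) < 0    # negative field value means "inside"
```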
Abstract:
Background: Protein phosphorylation is a generic way to regulate signal transduction pathways in all kingdoms of life. In many organisms, it is achieved by the large family of Ser/Thr/Tyr protein kinases, which are traditionally classified into groups and subfamilies on the basis of the amino acid sequence of their catalytic domains. Many protein kinases are multidomain in nature, but the diversity of the accessory domains and their organization are usually not taken into account while classifying kinases into groups or subfamilies. Methodology: Here, we present an approach which considers amino acid sequences of complete gene products, in order to suggest refinements in sets of pre-classified sequences. The strategy is based on alignment-free similarity scores and iterative Area Under the Curve (AUC) computation. Similarity scores are computed by detecting common patterns between two sequences and scoring them using a substitution matrix, with a consistent normalization scheme. This allows us to handle full-length sequences, and implicitly takes into account domain diversity and domain shuffling. We quantitatively validate our approach on a subset of 212 human protein kinases. We then employ it on the complete repertoire of human protein kinases and suggest a few qualitative refinements in the subfamily assignment stored in the KinG database, which is based on catalytic domains only. Based on our new measure, we delineate 37 cases of potential hybrid kinases: sequences for which classical classification based entirely on catalytic domains is inconsistent with the full-length similarity scores computed here, which implicitly consider multi-domain nature and regions outside the catalytic kinase domain. We also provide some examples of hybrid kinases of the protozoan parasite Entamoeba histolytica. Conclusions: The implicit consideration of multi-domain architectures is a valuable inclusion to complement other classification schemes.
The proposed algorithm may also be employed to classify other families of enzymes with multidomain architecture.
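The core scoring idea above can be sketched compactly: detect patterns (here, k-mers) shared by two full-length sequences, score them with a substitution-style weight, and normalize so self-similarity is 1, which makes scores comparable across sequence lengths. The k-mer length, identity weight, and toy sequences below are illustrative assumptions, not the authors' actual scoring scheme.

```python
# Sketch of an alignment-free, normalized similarity score over k-mers.

def kmers(seq, k=3):
    """All overlapping substrings of length k."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def raw_score(a, b, k=3, match=2):
    """Score k-mers common to both sequences with a flat identity weight
    (a real implementation would use a substitution matrix)."""
    common = set(kmers(a, k)) & set(kmers(b, k))
    return sum(match * k for _ in common)

def similarity(a, b, k=3):
    """Normalize so that similarity(x, x) == 1."""
    denom = (raw_score(a, a, k) * raw_score(b, b, k)) ** 0.5
    return raw_score(a, b, k) / denom if denom else 0.0
```

Because scoring is alignment-free, shuffled domain orders still contribute shared patterns, which is what lets the measure see accessory domains that a catalytic-domain-only alignment ignores.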
Abstract:
Fuzzy sets in the subject space are transformed to fuzzy solid sets in an increased object space on the basis of the development of the local umbra concept. Further, a counting transform is defined for reconstructing the fuzzy sets from the fuzzy solid sets, and the dilation and erosion operators in mathematical morphology are redefined in the fuzzy solid-set space. The algebraic structures of fuzzy solid sets can lead not only to fuzzy logic but also to arithmetic operations. Thus a fuzzy solid-set image algebra of two image transforms and five set operators is defined that can formulate binary and gray-scale morphological image-processing functions consisting of dilation, erosion, intersection, union, complement, addition, subtraction, and reflection in a unified form. A cellular set-logic array architecture is suggested for executing this image algebra. The optical implementation of the architecture, based on area coding of gray-scale values, is demonstrated. (C) 1995 Optical Society of America
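The two primitive operators that the fuzzy solid-set algebra unifies with set and arithmetic operations are flat gray-scale dilation (windowed maximum) and erosion (windowed minimum). A minimal 1-D sketch, with border handling by window clamping as an assumption:

```python
# Sketch: flat gray-scale morphological dilation and erosion in 1-D.

def dilate(signal, radius=1):
    """Each output sample is the max over a sliding window."""
    n = len(signal)
    return [max(signal[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def erode(signal, radius=1):
    """Each output sample is the min over a sliding window."""
    n = len(signal)
    return [min(signal[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

f = [0, 1, 3, 1, 0]
opened = dilate(erode(f))   # morphological opening: erosion then dilation
```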
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
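For intuition about the key equations mentioned above: the classical incremental solver for the Reed-Solomon key equation is the Berlekamp-Massey algorithm, which Gröbner-basis approaches of this kind generalize. It too consumes its input one symbol at a time and updates a minimal solution, mirroring the incremental step described. A minimal GF(2) version (returning the linear complexity, i.e. the shortest LFSR length) is sketched below as background, not as the thesis's algorithm:

```python
# Sketch: Berlekamp-Massey over GF(2), the classical incremental
# key-equation solver that Gröbner-basis methods generalize.

def berlekamp_massey(bits):
    """Return the length of the shortest LFSR generating `bits`."""
    c = [1]        # current connection polynomial c(x)
    b = [1]        # previous c(x) before the last length change
    L = 0          # current LFSR length
    m = 1          # steps since the last length change
    for n in range(len(bits)):
        # Discrepancy: does the current LFSR predict bits[n]?
        d = bits[n]
        for i in range(1, L + 1):
            d ^= c[i] & bits[n - i]
        if d == 0:
            m += 1
            continue
        t = c[:]
        # c(x) <- c(x) + x^m * b(x)
        c += [0] * (len(b) + m - len(c))
        for i, bi in enumerate(b):
            c[i + m] ^= bi
        if 2 * L <= n:          # length must grow
            L = n + 1 - L
            b = t
            m = 1
        else:
            m += 1
    return L
```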
Abstract:
A new type of advanced encryption standard (AES) implementation using a normal basis is presented. The method is based on a lookup technique that makes use of inversion and shift registers, which leads to a smaller S-box lookup table than corresponding implementations. The reduction in the lookup size is based on grouping sets of inverses into conjugate sets, which in turn leads to a reduction in the number of lookup values. The above technique is implemented in a regular AES architecture using register files, which requires less interconnect and area and is suitable for security applications. The results of the implementation are competitive in throughput and area compared with the corresponding solutions in a polynomial basis.
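The conjugate-set grouping works because squaring (the Frobenius map) commutes with inversion in GF(2^8): the inverse of one stored representative determines the inverses of its whole conjugate class {x, x², x⁴, ...}. A sketch of the grouping, using the AES field polynomial x⁸+x⁴+x³+x+1 (0x11B); this illustrates the counting argument, not the paper's hardware architecture:

```python
# Sketch: partition GF(2^8) into conjugate sets under repeated squaring.

def gf_mul(a, b):
    """Carry-less multiply modulo the AES polynomial 0x11B."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def conjugate_class(x):
    """Orbit of x under the Frobenius map y -> y^2."""
    orbit, y = [], x
    while y not in orbit:
        orbit.append(y)
        y = gf_mul(y, y)
    return frozenset(orbit)

classes = {conjugate_class(x) for x in range(256)}
# The 256 field elements collapse into 36 conjugate sets, so one stored
# inverse per set covers the whole field.
```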
Abstract:
Mycorrhizal associations, including ericoid, arbuscular and ecto-mycorrhizas, are found colonising highly metal-contaminated soils. How do mycorrhizal fungi achieve metal resistance, and does this metal resistance confer enhanced metal resistance on plant symbionts? These are the questions explored in this review by considering the mechanistic basis of mycorrhizal adaptation to metal cations. Recent molecular and physiological studies are discussed. The review reappraises what constitutes metal resistance in the context of mycorrhizal associations and sets out the constitutive and adaptive mechanisms available for mycorrhizas to adapt to contaminated sites. The only direct evidence of mycorrhizal adaptation to metal cation pollutants is the exudation of organic acids to alter pollutant availability in the rhizosphere. This is not to say that other mechanisms of adaptation do not exist, but conclusive evidence of adaptive tolerance mechanisms is lacking. For constitutive mechanisms of resistance there is much more evidence, and mycorrhizas possess the same constitutive mechanisms for dealing with metal contaminants as other organisms. Rhizosphere chemistry is critical to understanding the interactions of mycorrhizas with polluted soils. Soil pH, mineral weathering, and pollutant precipitation with plant-excreted organic acids may all have a key role in constitutive and adaptive tolerance of mycorrhizal associations present on contaminated sites. The responses of mycorrhizal fungi to toxic metal cations are diverse. This, linked to the fact that mycorrhizal diversity is normally high even on highly contaminated sites, suggests that this diversity may have a significant role in colonisation of contaminated sites by mycorrhizas. That is, the environment selects for the fungal community that can best cope with it, so having diverse physiological attributes will enable colonisation of a wide range of metal-contaminated micro-habitats.
Abstract:
Perchlorate-reducing bacteria fractionate chlorine stable isotopes, giving a powerful approach to monitor the extent of microbial consumption of perchlorate in contaminated sites undergoing remediation or in natural perchlorate-containing sites. This study reports the full experimental data and methodology used to re-evaluate the chlorine isotope fractionation of perchlorate reduction, Δ37Cl(Cl⁻–ClO4⁻), in duplicate culture experiments of Azospira suillum strain PS at 37 °C, previously reported without a supporting data set by Coleman et al. [Coleman, M.L., Ader, M., Chaudhuri, S., Coates, J.D., 2003. Microbial Isotopic Fractionation of Perchlorate Chlorine. Appl. Environ. Microbiol. 69, 4997-5000] in a reconnaissance study, with the goal of increasing the accuracy and precision of the isotopic fractionation determination. The method, fully described here for the first time, allows the determination of a higher-precision Δ37Cl(Cl⁻–ClO4⁻) value, either from the accumulated chloride content and isotopic composition or from the residual perchlorate content and isotopic composition. The result sets agree perfectly, within error, giving an average Δ37Cl(Cl⁻–ClO4⁻) = -14.94 ± 0.15‰. Complementary use of chloride and perchlorate data allowed the identification and rejection of poor-quality data by applying mass and isotopic balance checks. This precise Δ37Cl(Cl⁻–ClO4⁻) value can serve as a reference point for comparison with future in situ or microcosm studies, but we also note its similarity to the theoretical equilibrium isotopic fractionation between a hypothetical chlorine species of redox state +6 and perchlorate at 37 °C, and suggest that the first electron transfer during perchlorate reduction may occur at isotopic equilibrium between an enzyme-bound chlorine and perchlorate. (C) 2008 Elsevier B.V. All rights reserved.
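Fractionation factors like this are typically applied to field or microcosm data through the Rayleigh distillation model: as perchlorate is consumed, the residual pool becomes isotopically enriched at a rate set by the fractionation. The sketch below uses the study's -14.94‰ value; the starting composition and remaining fractions are illustrative inputs only.

```python
# Sketch: Rayleigh model for the residual perchlorate pool.
EPSILON = -14.94  # per mil, the Delta37Cl(Cl- - ClO4-) reported above

def rayleigh_delta(delta0, f, epsilon=EPSILON):
    """delta37Cl (per mil) of residual perchlorate when fraction f remains,
    from the exact Rayleigh form (1000 + delta) = (1000 + delta0) * f**(eps/1000)."""
    return (1000.0 + delta0) * f ** (epsilon / 1000.0) - 1000.0

# With a negative epsilon (product depleted), the residual pool is enriched:
enrichment_at_half = rayleigh_delta(0.0, 0.5)   # roughly +10 per mil
```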
Nonlinear system identification using particle swarm optimisation tuned radial basis function models
Abstract:
A novel particle swarm optimisation (PSO) tuned radial basis function (RBF) network model is proposed for identification of non-linear systems. At each stage of the orthogonal forward regression (OFR) model construction process, PSO is adopted to tune one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO-aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO-tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is often more efficient in model construction. The effectiveness of the proposed PSO-aided OFR algorithm for constructing tunable-node RBF models is demonstrated using three real data sets.
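The inner loop of such a method is a swarm searching over one RBF unit's parameters. The toy sketch below tunes a single Gaussian centre on 1-D data with a plain training MSE and standard PSO update rules (inertia 0.7, cognitive/social weights 1.5); the real algorithm tunes centre vectors and covariances against the leave-one-out MSE inside OFR, so everything here is a simplified illustration.

```python
import math
import random

# Toy PSO tuning one Gaussian RBF centre on synthetic 1-D data.
random.seed(0)

def rbf(x, centre, width=1.0):
    return math.exp(-((x - centre) ** 2) / (2 * width ** 2))

xs = [i / 10.0 for i in range(-20, 21)]
ys = [rbf(x, 0.7) for x in xs]          # target generated by a centre at 0.7

def mse(centre):
    return sum((y - rbf(x, centre)) ** 2 for x, y in zip(xs, ys)) / len(xs)

pos = [random.uniform(-2, 2) for _ in range(8)]   # particle positions
vel = [0.0] * 8
pbest = pos[:]                                    # per-particle best
gbest = min(pos, key=mse)                         # swarm best
for _ in range(60):
    for i in range(8):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]
                  + 1.5 * r1 * (pbest[i] - pos[i])
                  + 1.5 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if mse(pos[i]) < mse(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest + [gbest], key=mse)
# gbest should now sit near the true centre 0.7
```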
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to deal with classification for imbalanced two-class data. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of RBF kernels are determined using a particle swarm optimization algorithm based on the criterion of minimizing the leave-one-out misclassification rate. The experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
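The SMOTE step described above is simple to state: each synthetic positive instance lies at a random point on the segment between a minority sample and one of its k nearest minority neighbours. A minimal sketch (feature vectors as plain lists; k and the data are illustrative):

```python
import random

# Sketch of the SMOTE oversampling step for the minority (positive) class.
random.seed(1)

def smote(minority, n_synthetic, k=2):
    synthetic = []
    for _ in range(n_synthetic):
        x = random.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = random.choice(neighbours)
        gap = random.random()               # position along the segment
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return synthetic

minority = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.3]]
new_points = smote(minority, n_synthetic=4)
```

Because synthetic points are convex combinations of real minority samples, they stay inside the minority region rather than scattering randomly, which is what enlarges the positive decision region for the downstream RBF classifier.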
Abstract:
It has long been supposed that preference judgments between sets of to-be-considered possibilities are made by means of initially winnowing down the most promising-looking alternatives to form smaller “consideration sets” (Howard, 1963; Wright & Barbour, 1977). In preference choices with more than two options, it is standard to assume that a “consideration set”, based upon some simple criterion, is established to reduce the options available. Inferential judgments, in contrast, have more frequently been investigated in situations in which only two possibilities need to be considered (e.g., which of these two cities is the larger?). Proponents of the “fast and frugal” approach to decision-making suggest that such judgments are also made on the basis of limited, simple criteria. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. A multinomial processing tree model is outlined which provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments between three possible options.
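To make the multinomial-processing-tree idea concrete, consider the branch for a three-option trial where exactly one option is recognized. With probability r, recognition forms the consideration set (so the recognized option is chosen); with probability 1 - r, all three options are considered and the recognized one is chosen with knowledge/guessing accuracy g. Both this tree structure and the parameter values below are illustrative, not the paper's fitted model.

```python
# Sketch: one branch of a multinomial processing tree for the
# recognition-based consideration set, three options, one recognized.

def p_choose_recognized(r, g):
    """P(choose the single recognized option):
    r      = prob. recognition alone fixes the consideration set,
    g      = accuracy of knowledge-based choice among all options."""
    return r * 1.0 + (1 - r) * g

# Strong reliance on recognition, chance-level knowledge (g = 1/3):
p = p_choose_recognized(r=0.8, g=1 / 3)
```

Fitting r from observed choice frequencies across many such trials is what lets the model estimate how often recognition actually establishes the consideration set.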