172 results for computer programming
Abstract:
Several recent theoretical and computer simulation studies have considered solvation dynamics in a Brownian dipolar lattice, which provides a simple model solvent for which detailed calculations can be carried out. In this article a fully microscopic calculation of the solvation dynamics of an ion in a Brownian dipolar lattice is presented. The calculation is based on the non‐Markovian molecular hydrodynamic theory developed recently. The main assumption of the present calculation is that the two‐particle orientational correlation functions of the solid can be replaced by those of the liquid state. It is shown that such a calculation provides excellent agreement with the computer simulation results. More importantly, the present calculations clearly demonstrate that the frequency‐dependent dielectric friction plays an important role in the long‐time decay of the solvation time correlation function. We also find that the present calculation provides somewhat better agreement than either the dynamic mean spherical approximation (DMSA) or the Fried–Mukamel theory, both of which use the simulated frequency‐dependent dielectric function. It is found that the dissipative kernels used in the molecular hydrodynamic approach and in the Fried–Mukamel theory are vastly different, especially at short times. In spite of this disagreement, however, the two theories still lead to comparable results in good agreement with computer simulation, which suggests that even a semiquantitatively accurate dissipative kernel may be sufficient to obtain a reliable solvation time correlation function. A new wave‐vector‐ and frequency‐dependent dissipative kernel (or memory function) is proposed which correctly goes over to the appropriate expressions in both the single‐particle and the collective limits. This form is expected to lead to better results than the existing descriptions.
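For orientation, the central quantity in such studies, the solvation time correlation function, is conventionally defined as follows (a standard definition in the solvation-dynamics literature, not something specific to this paper):

```latex
S(t) = \frac{E_{\mathrm{solv}}(t) - E_{\mathrm{solv}}(\infty)}{E_{\mathrm{solv}}(0) - E_{\mathrm{solv}}(\infty)}
```

Here E_solv(t) is the time-dependent solvation energy of the ion; S(t) decays from 1 to 0 as the solvent relaxes around the newly created charge, and its long-time decay is where the frequency-dependent dielectric friction discussed above enters.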
Abstract:
Electronic, magnetic, and structural properties of graphene flakes depend sensitively upon the type of edge atoms. We present a simple software tool for determining the type of edge atoms in a honeycomb lattice. The algorithm is based on nearest-neighbor counting: whether an edge atom is of armchair or zigzag type is decided by the unique pattern of its nearest neighbors. Particular attention is paid to the practical aspects of using the tool; additional features, such as extracting the edges from the lattice, could help in analyzing images from transmission microscopy or other experimental probes. Ultimately, the tool in combination with density-functional theory or tight-binding methods can also be helpful in correlating the properties of graphene flakes with their different armchair-to-zigzag ratios.
Program summary
Program title: edgecount
Catalogue identifier: AEIA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 66685
No. of bytes in distributed program, including test data, etc.: 485 381
Distribution format: tar.gz
Programming language: FORTRAN 90/95
Computer: Most UNIX-based platforms
Operating system: Linux, Mac OS
Classification: 16.1, 7.8
Nature of problem: Detection and classification of edge atoms in a finite patch of honeycomb lattice.
Solution method: Build a nearest-neighbor (NN) list; assign types to edge atoms on the basis of their NN pattern.
Running time: Typically of the order of seconds for all examples.
(C) 2010 Elsevier B.V. All rights reserved.
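The published edgecount program is FORTRAN 90/95; the following Python fragment is only a minimal sketch of the nearest-neighbor-counting idea, with the distance cutoff and the armchair/zigzag decision rule chosen for illustration rather than taken from the program:

```python
import numpy as np

def classify_edges(coords, bond=1.42, tol=0.1):
    """Classify atoms of a finite honeycomb patch by nearest-neighbor count.

    coords : (N, 2) array of atom positions (angstroms).
    Returns one label per atom: 'bulk', 'armchair', 'zigzag', or 'other'.
    """
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    nn = [np.where((d[i] > 0) & (d[i] < bond * (1 + tol)))[0] for i in range(n)]
    deg = np.array([len(x) for x in nn])

    labels = []
    for i in range(n):
        if deg[i] == 3:
            labels.append('bulk')
        elif deg[i] == 2:
            # Hypothetical rule for illustration: an armchair edge atom is
            # bonded to another 2-coordinated atom, while a zigzag edge atom
            # has only 3-coordinated neighbors.
            labels.append('armchair' if any(deg[j] == 2 for j in nn[i])
                          else 'zigzag')
        else:
            labels.append('other')  # corners, adatoms, dangling atoms
    return labels

# Example: a single hexagon -- every atom is a 2-coordinated edge atom
theta = np.linspace(0, 2 * np.pi, 7)[:-1]
ring = 1.42 * np.column_stack([np.cos(theta), np.sin(theta)])
print(classify_edges(ring))
```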
Abstract:
Indian logic has a long history. It broadly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Advances in Computer Science, in particular in Artificial Intelligence, have drawn researchers in these areas to the basic problems of language, logic and cognition over the past three decades. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and the transfer of that knowledge to a computing system. The second important issue in such systems is the validation of the knowledge base of the system, i.e. ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help computer scientists understand the deeper implications of the terms and concepts they are currently using and attempting to develop.
Abstract:
In the past two decades RNase A has been the focus of diverse investigations aimed at understanding the nature of substrate binding and the mechanism of enzyme action. Although this system is reasonably well characterized with respect to some of the binding sites, the details of the interactions at the second base-binding (B2) site remain insufficiently characterized. Further, the nature of the ligand-protein interaction is generally elucidated by studies on RNase A-substrate-analog complexes (mainly with the help of X-ray crystallography); hence, the atomic-level interactions arising from the substrates themselves are inferred indirectly. In the present paper, the dinucleotide substrate UpA is fitted into the active site of RNase A. Several possible substrate conformations are investigated and the binding modes are selected based on contact criteria. The RNase A-UpA complexes thus identified are energy-minimized in coordinate space and analysed in terms of conformations, energetics and interactions. The ligand conformations most favourable for binding to RNase A are identified from experimentally known interactions and from the energetics. The changes upon binding of UpA to RNase A in the protein backbone, and in the side chains in general and at the binding sites in particular, are described. Further, the detailed interactions between UpA and RNase A are characterized in terms of hydrogen bonds and energetics. This extensive study has helped in interpreting the diverse results obtained from a number of experiments and in evaluating the extent of the changes the protein and the substrate undergo in order to maximize their interactions.
Abstract:
A theoretical analysis of the three currently popular microscopic theories of solvation dynamics, namely the dynamic mean spherical approximation (DMSA), the molecular hydrodynamic theory (MHT), and the memory function theory (MFT), is carried out. It is shown that in the underdamped limit of momentum relaxation, all three theories lead to nearly identical results when the translational motions of both the solute ion and the solvent molecules are neglected. In this limit, the theoretical prediction is in almost perfect agreement with the computer simulation results of solvation dynamics in the model Stockmayer liquid. However, the situation changes significantly in the presence of the translational motion of the solvent molecules. In this case, DMSA breaks down, but the other two theories correctly predict the acceleration of solvation in agreement with the simulation results. We find that the translational motion of a light solute ion can play an important role in its own solvation; none of the existing theories describes this aspect. A generalization of the extended hydrodynamic theory is presented which, for the first time, includes the contribution of solute motion towards its own solvation dynamics. The extended theory gives excellent agreement with the simulations in which solute motion is allowed. It is further shown that in the absence of translation, the memory function theory of Fried and Mukamel can be recovered from the hydrodynamic equations if the wave-vector-dependent dissipative kernel in the hydrodynamic description is replaced by its long-wavelength value. We suggest a convenient memory kernel which is superior to the limiting forms used in earlier descriptions. We also present an alternative, quite general, statistical mechanical expression for the time-dependent solvation energy of an ion. This expression bears a remarkable similarity to that for the translational dielectric friction on a moving ion.
Abstract:
Gauss and Fourier have together provided us with the essential techniques for symbolic computation with linear arithmetic constraints over the reals and the rationals. These variable elimination techniques for linear constraints have particular significance in the context of the constraint logic programming languages developed in recent years. Variable elimination in linear equations (Gaussian elimination) is a fundamental technique in computational linear algebra and is therefore quite familiar to most of us. Elimination in linear inequalities (Fourier elimination), on the other hand, is intimately related to polyhedral theory and to aspects of linear programming that are not quite as familiar. In addition, the high complexity of elimination in inequalities has forced the consideration of intricate specializations of Fourier's original method. The intent of this survey article is to acquaint the reader with these connections and developments. The latter part of the article dwells on the thesis that variable elimination in linear constraints over the reals extends quite naturally to constraints over certain discrete domains.
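Fourier elimination is compact enough to sketch. The following illustrative Python routine (not taken from the survey) eliminates one variable from a system of inequalities a.x <= b by pairing each inequality with a positive coefficient on that variable against each with a negative coefficient:

```python
from itertools import product

def fourier_eliminate(rows, j):
    """Eliminate variable j from inequalities sum_k a[k]*x[k] <= b.

    rows : list of (a, b) pairs, a a coefficient list, b a number.
    Returns an equivalent system over the remaining variables.
    """
    pos = [(a, b) for a, b in rows if a[j] > 0]
    neg = [(a, b) for a, b in rows if a[j] < 0]
    zero = [(a, b) for a, b in rows if a[j] == 0]

    out = list(zero)
    # Every (positive, negative) pair yields one new inequality in which
    # variable j cancels -- the source of the method's exponential blow-up.
    for (ap, bp), (an, bn) in product(pos, neg):
        lp, ln = ap[j], -an[j]
        a_new = [ln * p + lp * n for p, n in zip(ap, an)]
        out.append((a_new, ln * bp + lp * bn))
    return out

# Example: eliminate x0 from {x0 + x1 <= 4, -x0 + x1 <= 2, -x1 <= 0}
system = [([1, 1], 4), ([-1, 1], 2), ([0, -1], 0)]
print(fourier_eliminate(system, 0))  # -> [([0, -1], 0), ([0, 2], 6)], i.e. x1 <= 3
```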
Abstract:
This paper studies the problem of constructing robust classifiers when the training data are plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP), which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP to a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations had been exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Owing to this more efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
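To make the construction concrete, here is a minimal sketch (using the cvxpy modeling library and a generic moment-based second-order cone relaxation, rather than the paper's specific Bernstein bounds) of a robust linear classifier in which each uncertain training point must be classified with a margin that grows with its uncertainty:

```python
import numpy as np
import cvxpy as cp

# Toy data (illustrative): X[i] are point means, S[i] shape each point's
# uncertainty (e.g. Sigma_i^{1/2}), y[i] in {-1, +1}.
rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=20)
X = rng.normal(size=(20, 2)) + 3.0 * y[:, None]
S = [0.1 * np.eye(2) for _ in range(20)]

w = cp.Variable(2)
b = cp.Variable()
kappa = 1.0  # grows with the required classification probability

# Each chance constraint is replaced by a second-order cone constraint:
# classify the mean correctly with a margin proportional to the point's
# uncertainty along w.
cons = [y[i] * (X[i] @ w + b) >= 1 + kappa * cp.norm(S[i] @ w, 2)
        for i in range(20)]
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)), cons)
prob.solve()
print("w =", w.value, "b =", b.value)
```

The tightness of kappa as a function of the required probability is exactly where the Chebyshev and Bernstein schemes differ; the Bernstein route exploits richer partial information to justify a smaller kappa.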
Abstract:
In this paper we develop a Linear Programming (LP) based decentralized algorithm for a group of autonomous agents to achieve positional consensus. Each agent is capable of exchanging information about its position and orientation with other agents within its sensing region. The method is computationally feasible and easy to implement. Analytical results are presented, and the effectiveness of the approach is illustrated with simulation results.
Abstract:
We describe a compiler for the Flat Concurrent Prolog language on a message-passing multiprocessor architecture. This compiler permits symbolic and declarative programming in the syntax of Guarded Horn Rules. The implementation has been verified and tested on the 64-node PARAM parallel computer developed by C-DAC (Centre for the Development of Advanced Computing, India). Flat Concurrent Prolog (FCP) is a logic programming language designed for concurrent programming and parallel execution. It is a process-oriented language which embodies dataflow synchronization and guarded commands as its basic control mechanisms. An identical algorithm is executed on every processor in the network. We assume regular network topologies such as mesh, ring, etc. Each node has a local memory. The algorithm comprises two important parts: reduction and communication. The most difficult task is to integrate the solutions of the problems that arise in the implementation in a coherent and efficient manner. We have tested the efficacy of the compiler on various benchmark problems of the ICOT project that have been reported in the recent book by Evan Tick; these include Quicksort, 8-queens, and prime number generation. The results of the preliminary tests are favourable. We are currently examining issues such as indexing and load balancing to further optimize our compiler.
Abstract:
This paper presents a DNA-based evolutionary approach for solving control problems. Three selected control problems, viz. the linear-quadratic, harvest, and push-cart problems, are solved using the proposed approach. Results are compared with those of the evolutionary programming (EP) approach. In most cases, the proposed approach succeeds in obtaining (near-)optimal solutions for these selected problems.
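The abstract does not detail the encoding, but the flavor of such evolutionary approaches to discrete-time control is easy to illustrate. Below is a minimal, generic sketch (illustrative only, not the paper's algorithm, with made-up problem parameters) that evolves a control sequence for a one-dimensional linear-quadratic problem:

```python
import numpy as np

# Toy LQ problem (illustrative): x_{t+1} = a*x_t + b*u_t,
# cost = sum_t (q*x_t^2 + r*u_t^2) + terminal cost.
a, b, q, r, T, x0 = 1.0, 0.5, 1.0, 0.1, 20, 5.0

def cost(u):
    x, c = x0, 0.0
    for t in range(T):
        c += q * x**2 + r * u[t]**2
        x = a * x + b * u[t]
    return c + q * x**2  # terminal state cost

rng = np.random.default_rng(1)
pop = rng.normal(scale=2.0, size=(50, T))  # 50 candidate control sequences

for gen in range(200):
    # (mu + lambda) selection: mutate every parent, keep the best half.
    children = pop + rng.normal(scale=0.3, size=pop.shape)
    both = np.vstack([pop, children])
    both = both[np.argsort([cost(u) for u in both])]
    pop = both[:50]

print("best cost:", cost(pop[0]))
```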
Abstract:
The modes of binding of adenosine 2'-monophosphate (2'-AMP) to the enzyme ribonuclease (RNase) T1 were determined by computer modelling studies. The phosphate moiety of 2'-AMP binds at the primary phosphate-binding site. However, adenine can occupy two distinct sites: (1) the primary base-binding site, where the guanine of 2'-GMP binds, and (2) the subsite close to the N1 subsite for the base on the 3'-side of guanine in a guanyl dinucleotide. The minimum-energy conformers corresponding to the two modes of binding of 2'-AMP to RNase T1 were found to be of nearly the same energy, implying that in solution 2'-AMP binds to the enzyme in both modes. The conformation of the inhibitor and the predicted hydrogen-bonding scheme for the RNase T1-2'-AMP complex in the second binding mode (S) agree well with the reported X-ray crystallographic study. The existence of the first mode of binding explains the experimental observation that RNase T1 catalyses the hydrolysis of phosphodiester bonds adjacent to adenosine at high enzyme concentrations. A comparison of the interactions of 2'-AMP and 2'-GMP with RNase T1 reveals that Glu58 and Asn98 at the phosphate-binding site and Glu46 at the base-binding site preferentially stabilise the enzyme-2'-GMP complex.
Abstract:
Bacteriorhodopsin has been the subject of intense study aimed at understanding its photochemical function. The recent atomic model proposed by Henderson and coworkers on the basis of electron cryo-microscopic studies has helped in understanding many of the structural and functional aspects of bacteriorhodopsin. However, the accuracy of the positions of the side chains is not very high, since the model is based on low-resolution data. In this study, we have energy-minimized this structure of bacteriorhodopsin and analyzed various types of interactions, such as intrahelical and interhelical hydrogen bonds and the retinal environment. In order to understand the photochemical action, it is necessary to obtain information on the structures adopted in the intermediate states. To this end, we have generated some intermediate structures by computer modeling, taking certain experimental data into account. Various isomers of retinal with 13-cis and/or 15-cis conformations, and all possible staggered orientations of the Lys-216 side chain, were generated. The resultant structures were examined for the distance between the Lys-216 Schiff-base nitrogen and the carboxylate oxygen atoms of Asp-96, a residue known to reprotonate the Schiff base at later stages of the photocycle. Some of the structures were selected on the basis of suitable retinal orientation, and the stability of these structures was tested by energy minimization. Further, the minimized structures were analyzed for hydrogen-bond interactions and the retinal environment, and the results are compared with those of the minimized rest-state structure. The importance of functional groups in stabilizing the structure of bacteriorhodopsin and their dynamic participation during the photocycle are discussed.
Abstract:
An intelligent computer-aided defect analysis (ICADA) system, based on artificial intelligence techniques, has been developed to identify the design, process or material parameters that could be responsible for the occurrence of defective castings in a manufacturing campaign. The data on defective castings for a particular time frame, which are an input to the ICADA system, were analysed. It was observed that a large proportion, i.e. 50-80%, of all the defective castings produced in a foundry have two, three or four types of defects occurring above a threshold proportion, say 10%. Also, a large number of defect types are either not found at all or found in a very small proportion, below a threshold value of 2%. An important feature of the ICADA system is the recognition of this pattern in the analysis. Thirty casting defect types, each with a large number of causes numbering between 50 and 70, as identified in the AFS analysis of casting defects (the standard reference source for the casting process), constituted the foundation for building the knowledge base. The scientific rationale underlying the formation of each defect during the casting process was identified and 38 metacauses were coded. Process, material and design parameters which contribute to the metacauses were systematically examined and 112 were identified as root causes. The interconnections between defects, metacauses and root causes were represented as a three-tier structured graph, and uncertainty in the occurrence of events such as defects, metacauses and root causes was handled by Bayesian analysis. A hill-climbing search technique with forward reasoning was employed to recognize one or several root causes.
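The three-tier defect/metacause/rootcause structure lends itself to a compact sketch. The following illustrative Python fragment (with invented names and weights, not the ICADA knowledge base, and an additive scoring surrogate in place of the system's full Bayesian analysis) propagates evidence from observed defects down the graph and reports the best-scoring root causes first:

```python
# Illustrative three-tier cause graph: defects -> metacauses -> rootcauses.
# All names and weights below are invented for this sketch.
defect_to_meta = {
    "blowhole":  {"gas_entrapment": 0.7, "poor_venting": 0.5},
    "shrinkage": {"poor_feeding": 0.8},
}
meta_to_root = {
    "gas_entrapment": {"high_moisture_sand": 0.6, "low_permeability": 0.4},
    "poor_venting":   {"low_permeability": 0.7},
    "poor_feeding":   {"undersized_riser": 0.8},
}

def root_scores(observed_defects):
    """Propagate evidence from observed defects down both tiers of arcs.

    A real system would apply Bayes' rule with proper priors; this additive
    surrogate only illustrates the forward-reasoning flow over the graph.
    """
    scores = {}
    for d in observed_defects:
        for m, p_dm in defect_to_meta.get(d, {}).items():
            for r, p_mr in meta_to_root.get(m, {}).items():
                scores[r] = scores.get(r, 0.0) + p_dm * p_mr
    return scores

observed = ["blowhole", "shrinkage"]
# Greedy ranking standing in for the hill-climbing search: report the
# best-supported root causes first.
for root, s in sorted(root_scores(observed).items(), key=lambda kv: -kv[1]):
    print(f"{root}: {s:.2f}")
```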