978 results for computer art


Relevance:

20.00%

Publisher:

Abstract:

The modes of binding of Gp(2',5')A, Gp(2',5')C, Gp(2',5')G and Gp(2',5')U to RNase T1 have been determined by computer modelling studies. All these dinucleoside phosphates assume extended conformations in the active site, leading to better interactions with the enzyme. The 5'-terminal guanine of all these ligands is placed in the primary base binding site of the enzyme in an orientation similar to that of 2'-GMP in the RNase T1-2'-GMP complex. The 2'-terminal purines are placed close to the hydrophobic pocket formed by the residues Gly71, Ser72, Pro73 and Gly74, which occur in a loop region. However, the orientation of the 2'-terminal pyrimidines differs from that of the 2'-terminal purines. This perhaps explains the higher binding affinity of the 2',5'-linked guanine dinucleoside phosphates with 2'-terminal purines compared with those with 2'-terminal pyrimidines. A comparison of the binding of the guanine dinucleoside phosphates with 2',5'- and 3',5'-linkages suggests significant differences in the ribose pucker and in the hydrogen-bonding interactions between the catalytic residues and the bound nucleoside phosphate, implying that 2',5'-linked dinucleoside phosphates may not be the ideal ligands for probing the role of the catalytic amino acid residues. A change in the amino acid sequence in the surface loop region formed by the residues Gly71 to Gly74 drastically affects the conformation of the base binding subsite, and this may account for the inactivity of the enzyme with the altered sequence, i.e., with Pro, Gly and Ser at positions 71 to 73, respectively. These results thus suggest that, in addition to the recognition and catalytic sites, interactions at the loop regions that constitute the subsite for base binding are also crucial in determining substrate specificity.

Relevance:

20.00%

Publisher:

Abstract:

Several recent theoretical and computer simulation studies have considered solvation dynamics in a Brownian dipolar lattice which provides a simple model solvent for which detailed calculations can be carried out. In this article a fully microscopic calculation of the solvation dynamics of an ion in a Brownian dipolar lattice is presented. The calculation is based on the non‐Markovian molecular hydrodynamic theory developed recently. The main assumption of the present calculation is that the two‐particle orientational correlation functions of the solid can be replaced by those of the liquid state. It is shown that such a calculation provides an excellent agreement with the computer simulation results. More importantly, the present calculations clearly demonstrate that the frequency‐dependent dielectric friction plays an important role in the long time decay of the solvation time correlation function. We also find that the present calculation provides somewhat better agreement than either the dynamic mean spherical approximation (DMSA) or the Fried–Mukamel theory which use the simulated frequency‐dependent dielectric function. It is found that the dissipative kernels used in the molecular hydrodynamic approach and in the Fried–Mukamel theory are vastly different, especially at short times. However, in spite of this disagreement, the two theories still lead to comparable results in good agreement with computer simulation, which suggests that even a semiquantitatively accurate dissipative kernel may be sufficient to obtain a reliable solvation time correlation function. A new wave vector and frequency‐dependent dissipative kernel (or memory function) is proposed which correctly goes over to the appropriate expressions in both the single particle and the collective limits. This form is expected to lead to better results than all the existing descriptions.
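For reference, the solvation time correlation function discussed in this and the related abstracts below is conventionally defined as follows (a standard definition in the solvation dynamics literature, not spelled out in the abstract itself; E_solv(t) denotes the solvation energy of the ion at time t after excitation):

% Normalized solvation time correlation function (standard definition)
S(t) = \frac{E_{\mathrm{solv}}(t) - E_{\mathrm{solv}}(\infty)}
            {E_{\mathrm{solv}}(0) - E_{\mathrm{solv}}(\infty)}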

Relevance:

20.00%

Publisher:

Abstract:

In spite of the numerous research advances made in recent years in the area of formal techniques, the specification of real-time systems is still proving to be a very challenging and difficult problem. In this context, this paper critically examines state-of-the-art specification techniques for real-time systems and analyzes the emerging trends.

Relevance:

20.00%

Publisher:

Abstract:

Maurice Merleau-Ponty (1908-1961) has been known as the philosopher of painting. His interest in the theory of perception was intertwined with questions concerning the artist's perception, the experience of an artwork and the possible interpretations of the artwork. For him, aesthetics was not a sub-field of philosophy, and art was not simply a subject matter for aesthetic experience, but a form of thinking. This study proposes an opening for a dialogue between Merleau-Pontian phenomenology and contemporary art. The thesis examines his phenomenology through certain works of contemporary art and presents readings of these artworks through his phenomenology. The thesis both shows the potential of a method and engages in the critical task of finding the possible limitations of his approach. The first part lays out the methodological and conceptual points of departure of Merleau-Ponty's phenomenological approach to perception as well as the features that determined his discussion of encountering art. Merleau-Ponty referred to the experience of perceiving art using the notion of seeing with (voir selon). He stressed a correlative reciprocity, described in Eye and Mind (1961) as the switching of the roles of the visible and the painter. The choice of artworks is motivated by certain restrictions in the phenomenological readings of visual arts. The examined works include paintings by Tiina Mielonen, a photographic work by Christian Mayer, a film by Douglas Gordon and Philippe Parreno, and an installation by Monika Sosnowska. These works resonate with, and challenge, his phenomenological approach. The chapters with case studies take up different themes that are central to Merleau-Ponty's phenomenology: space, movement, time, and touch. All of the themes are interlinked with the examined artworks. There are also topics that reappear in the thesis, such as the notion of écart and the question of encountering the other. As Merleau-Ponty argued, the sphere of art has a particular capability to address our being in the world. The thesis presents an interpretation that emphasises the notion of écart, which refers to an experience of divergence or dispossession: the sudden dissociation, surprise or rupture that is needed for a meeting between the spectator and the artwork, or between two persons, to be possible. Further, the thesis suggests that through artworks it is possible to take into consideration the écart, the divergence, that defines our subjectivity.

Relevance:

20.00%

Publisher:

Abstract:

A state-of-the-art model of the coupled ocean-atmosphere system, the Climate Forecast System (CFS) from the National Centers for Environmental Prediction (NCEP), USA, has been ported onto the PARAM Padma parallel computing system at the Centre for Development of Advanced Computing (CDAC), Bangalore, and retrospective predictions for the summer monsoon (June-September) season of 2009 have been generated using five initial conditions for the atmosphere and one initial condition for the ocean for May 2009. Whereas a large deficit in the Indian summer monsoon rainfall (ISMR; June-September) was experienced over the Indian region (with the all-India rainfall deficient by 22% relative to the average), the ensemble average prediction was for above-average rainfall during the summer monsoon. The retrospective predictions of ISMR with CFS from NCEP for 1981-2008 have been analysed. The retrospective predictions from NCEP for the summer monsoon of 1994 and that from CDAC for 2009 have been compared with simulations for each of these seasons with the stand-alone atmospheric component of the model, the Global Forecast System (GFS), and with observations. It has been shown that the simulation with GFS for 2009 showed deficit rainfall, as observed. The large error in the prediction for the monsoon of 2009 can be attributed to a positive Indian Ocean Dipole event seen in the prediction from July onwards, which was not present in the observations. This suggests that the error could be reduced with improvement of the ocean model over the equatorial Indian Ocean.
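As a purely illustrative aside (not from the abstract), the quoted all-India figure is a percentage departure of the seasonal total from its long-period average, and the forecast quantity is an ensemble mean over the individual member runs. The sketch below uses hypothetical member totals and a hypothetical climatological mean, not the actual CFS output or the observed 2009 rainfall:

import numpy as np

# Hypothetical ensemble-member seasonal rainfall totals (cm) and a
# hypothetical long-period average; illustrative numbers only.
members = np.array([92.0, 95.5, 88.0, 97.2, 90.3])
climatology = 89.0

ensemble_mean = members.mean()
departure_pct = 100.0 * (ensemble_mean - climatology) / climatology
print(f"ensemble mean = {ensemble_mean:.1f} cm, "
      f"departure from average = {departure_pct:+.1f}%")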

Relevance:

20.00%

Publisher:

Abstract:

We propose a novel formulation of points-to analysis as a system of linear equations. With this, the efficiency of points-to analysis can be significantly improved by leveraging advances in solution procedures for systems of linear equations. However, such a formulation is non-trivial and becomes challenging due to several factors, namely multiple pointer indirections, address-of operators and multiple assignments to the same variable. Further, the problem is exacerbated by the need to keep the transformed equations linear. Despite this, we successfully model all the pointer operations. We propose a novel inclusion-based context-sensitive points-to analysis algorithm based on prime factorization, which can model all the pointer operations. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that our approach is competitive with state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
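The abstract's linear-equation and prime-factorization encoding is not reproduced here; as a point of reference only, the following is a minimal sketch of a conventional inclusion-based (Andersen-style) points-to analysis handling the same four pointer operations (address-of, copy, load, store). All variable names are hypothetical:

# Minimal sketch of a conventional inclusion-based (Andersen-style)
# points-to analysis; NOT the paper's linear-equation formulation.
def andersen(variables, addr_of, copies, loads, stores):
    pts = {v: set() for v in variables}
    for p, x in addr_of:              # p = &x   =>  x in pts(p)
        pts[p].add(x)

    def include(dst, src):            # enforce pts(dst) >= pts(src)
        before = len(pts[dst])
        pts[dst] |= pts[src]
        return len(pts[dst]) != before

    changed = True
    while changed:                    # iterate to a fixed point
        changed = False
        for p, q in copies:           # p = q    =>  pts(p) >= pts(q)
            changed |= include(p, q)
        for p, q in loads:            # p = *q   =>  pts(p) >= pts(r), r in pts(q)
            for r in list(pts[q]):
                changed |= include(p, r)
        for p, q in stores:           # *p = q   =>  pts(r) >= pts(q), r in pts(p)
            for r in list(pts[p]):
                changed |= include(r, q)
    return pts

# Toy program:  a = &x;  t = &y;  b = a;  *b = t;  c = *a
result = andersen(variables={"a", "b", "c", "t", "x", "y"},
                  addr_of=[("a", "x"), ("t", "y")],
                  copies=[("b", "a")],
                  loads=[("c", "a")],
                  stores=[("b", "t")])
print(result)   # pts(c) == {"y"}: *a aliases x, and x now points to y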

Relevance:

20.00%

Publisher:

Abstract:

Indian logic has a long history. It broadly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Over the past three decades, advances in Computer Science, and in Artificial Intelligence in particular, have drawn researchers in these areas to the basic problems of language, logic and cognition. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and the transfer of that knowledge to a computing system. The second important issue in such systems is the validation of the knowledge base of the system, i.e., ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help computer scientists understand the deeper implications of the terms and concepts they are currently using and attempting to develop.

Relevance:

20.00%

Publisher:

Abstract:

Over the past two decades RNase A has been the focus of diverse investigations aimed at understanding the nature of substrate binding and the mechanism of enzyme action. Although this system is reasonably well characterized with respect to some of the binding sites, the details of the interactions in the second base binding (B2) site remain insufficiently characterized. Further, the nature of the ligand-protein interaction is generally elucidated by studies on RNase A-substrate analogue complexes (mainly with the help of X-ray crystallography); hence the atomic-level details of the interactions arising from substrates are inferred indirectly. In the present paper, the dinucleotide substrate UpA is fitted into the active site of RNase A. Several possible substrate conformations are investigated and the binding modes are selected based on contact criteria. The RNase A-UpA complexes thus identified are energy minimized in coordinate space and are analysed in terms of conformations, energetics and interactions. The best possible ligand conformations for binding to RNase A are identified from experimentally known interactions and from the energetics. The changes upon binding of UpA to RNase A associated with the protein backbone and side chains in general, and at the binding sites in particular, are described. Further, the detailed interactions between UpA and RNase A are characterized in terms of hydrogen bonds and energetics. This extensive study has helped in interpreting the diverse results obtained from a number of experiments and in evaluating the extent of the changes the protein and the substrate undergo in order to maximize their interactions.

Relevance:

20.00%

Publisher:

Abstract:

A theoretical analysis of the three currently popular microscopic theories of solvation dynamics, namely, the dynamic mean spherical approximation (DMSA), the molecular hydrodynamic theory (MHT), and the memory function theory (MFT) is carried out. It is shown that in the underdamped limit of momentum relaxation, all three theories lead to nearly identical results when the translational motions of both the solute ion and the solvent molecules are neglected. In this limit, the theoretical prediction is in almost perfect agreement with the computer simulation results of solvation dynamics in the model Stockmayer liquid. However, the situation changes significantly in the presence of the translational motion of the solvent molecules. In this case, DMSA breaks down but the other two theories correctly predict the acceleration of solvation in agreement with the simulation results. We find that the translational motion of a light solute ion can play an important role in its own solvation. None of the existing theories describe this aspect. A generalization of the extended hydrodynamic theory is presented which, for the first time, includes the contribution of solute motion towards its own solvation dynamics. The extended theory gives excellent agreement with the simulations where solute motion is allowed. It is further shown that in the absence of translation, the memory function theory of Fried and Mukamel can be recovered from the hydrodynamic equations if the wave vector dependent dissipative kernel in the hydrodynamic description is replaced by its long wavelength value. We suggest a convenient memory kernel which is superior to the limiting forms used in earlier descriptions. We also present an alternate, quite general, statistical mechanical expression for the time dependent solvation energy of an ion. This expression has remarkable similarity with that for the translational dielectric friction on a moving ion.

Relevance:

20.00%

Publisher:

Abstract:

Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study; instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.
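As a minimal illustration of the kind of generational loop such surveys describe (selection, crossover, mutation), here is a sketch of a simple GA on a toy one-max problem. The bit-string encoding, tournament selection and all parameter values are illustrative choices, not taken from the paper; Holland's original GA used fitness-proportionate selection rather than tournaments:

import random

def one_max(bits):                      # toy fitness: count of 1-bits
    return sum(bits)

def tournament(pop, fitness, k=3):      # pick the best of k random individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):                    # single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.01):            # bit-flip mutation
    return [1 - b if random.random() < rate else b for b in bits]

def simple_ga(n_bits=32, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):        # build each new generation from the old one
        pop = [mutate(crossover(tournament(pop, one_max),
                                tournament(pop, one_max)))
               for _ in range(pop_size)]
    return max(pop, key=one_max)

print(one_max(simple_ga()))             # fitness of the best individual found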

Relevance:

20.00%

Publisher:

Abstract:

This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP to a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations had been exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Owing to this more efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state-of-the-art in many cases.
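For orientation only, a sketch of the general shape of such a second-order cone relaxation for a robust linear classifier is given below, written with cvxpy. The per-point uncertainty factors S_i, the bounding constant kappa and the synthetic data are placeholders; the generic constraint shown corresponds to the simpler Chebyshev-style relaxation mentioned as prior work, not to the paper's Bernstein-based scheme:

import cvxpy as cp
import numpy as np

# Synthetic, purely illustrative two-class data.
rng = np.random.default_rng(0)
n, d = 40, 2
X = np.vstack([rng.normal(+1, 0.5, (n // 2, d)),
               rng.normal(-1, 0.5, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

S = [0.3 * np.eye(d) for _ in range(n)]   # placeholder uncertainty shape per point
kappa, C = 2.0, 1.0                       # placeholder bounding constant, slack penalty

w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)          # slack variables
constraints = [
    # each uncertain point must clear the margin by an extra norm term
    y[i] * (X[i] @ w + b) >= 1 - xi[i] + kappa * cp.norm(S[i] @ w)
    for i in range(n)
]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)),
                  constraints)
prob.solve()
print("weights:", w.value, "bias:", float(b.value))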

Relevance:

20.00%

Publisher:

Abstract:

This paper presents novel algorithms and applications for a particular class of mixed-norm regularization based Multiple Kernel Learning (MKL) formulations. The formulations assume that the given kernels are grouped and employ l1 norm regularization for promoting sparsity within the RKHS norms of each group and ls (s >= 2) norm regularization for promoting non-sparse combinations across groups. Various sparsity levels in combining the kernels can be achieved by varying the grouping of kernels; hence we name the formulations Variable Sparsity Kernel Learning (VSKL) formulations. While previous attempts have a non-convex formulation, here we present a convex formulation which admits efficient Mirror-Descent (MD) based solving techniques. The proposed MD based algorithm optimizes over a product of simplices and has a computational complexity of O(m^2 n_tot log(n_max) / epsilon^2), where m is the number of training data points, n_max and n_tot are the maximum number of kernels in any group and the total number of kernels, respectively, and epsilon is the error in approximating the objective. A detailed proof of convergence of the algorithm is also presented. Experimental results show that the VSKL formulations are well suited for multi-modal learning tasks like object categorization. Results also show that the MD based algorithm outperforms state-of-the-art MKL solvers in terms of computational efficiency.
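As background to the solver, the basic mirror-descent update on a probability simplex (entropic mirror map, i.e. exponentiated gradient) is sketched below. The quadratic toy objective, step size and iteration count are illustrative assumptions; this is not the VSKL solver itself:

import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.1, steps=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))   # multiplicative (entropic) update
        x /= x.sum()                     # re-normalize onto the simplex
    return x

# Toy objective f(x) = 0.5 * x^T A x with a fixed positive semidefinite A.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])
x_star = mirror_descent_simplex(lambda x: A @ x, x0=np.ones(3) / 3)
print(x_star, x_star.sum())              # iterates stay on the simplex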