118 results for computer art


Relevance: 20.00%

Publisher:

Abstract:

Despite the numerous research advances made in recent years in the area of formal techniques, the specification of real-time systems remains a very challenging and difficult problem. In this context, this paper critically examines state-of-the-art specification techniques for real-time systems and analyzes the emerging trends.

Relevance: 20.00%

Publisher:

Abstract:

A state-of-the-art model of the coupled ocean-atmosphere system, the Climate Forecast System (CFS) from the National Centers for Environmental Prediction (NCEP), USA, has been ported onto the PARAM Padma parallel computing system at the Centre for Development of Advanced Computing (CDAC), Bangalore, and retrospective predictions for the summer monsoon (June-September) season of 2009 have been generated using five initial conditions for the atmosphere and one initial condition for the ocean for May 2009. Whereas a large deficit in the Indian summer monsoon rainfall (ISMR; June-September) was experienced over the Indian region (with the all-India rainfall deficit at 22% of the average), the ensemble-average prediction was for above-average rainfall during the summer monsoon. The retrospective predictions of ISMR with the CFS from NCEP for 1981-2008 have been analysed. The retrospective predictions from NCEP for the summer monsoon of 1994 and those from CDAC for 2009 have been compared with simulations of each of the seasons with the stand-alone atmospheric component of the model, the Global Forecast System (GFS), and with observations. It has been shown that the simulation with the GFS for 2009 showed a rainfall deficit, as observed. The large error in the prediction for the monsoon of 2009 can be attributed to a positive Indian Ocean Dipole event seen in the prediction from July onwards, which was not present in the observations. This suggests that the error could be reduced by improving the ocean model over the equatorial Indian Ocean.

Relevance: 20.00%

Publisher:

Abstract:

We propose a novel formulation of the points-to analysis as a system of linear equations. With this, the efficiency of the points-to analysis can be significantly improved by leveraging advances in solution procedures for systems of linear equations. Such a formulation, however, is non-trivial and becomes challenging due to several factors, namely multiple pointer indirections, address-of operators and multiple assignments to the same variable. Further, the problem is exacerbated by the need to keep the transformed equations linear. Despite this, we successfully model all the pointer operations. We propose a novel inclusion-based context-sensitive points-to analysis algorithm based on prime factorization. Experimental evaluation on the SPEC 2000 benchmarks and two large open-source programs reveals that our approach is competitive with state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
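The abstract does not spell out the prime-factorization encoding, but the general idea of representing points-to sets as products of primes can be sketched as follows (the location names and helper functions are illustrative assumptions, not the authors' implementation):

```python
from math import gcd

# Assign a distinct prime to each abstract memory location (names hypothetical).
primes = {'o1': 2, 'o2': 3, 'o3': 5}

def singleton(loc):
    """Points-to set {loc} encoded as that location's prime."""
    return primes[loc]

def union(s, t):
    """Set union corresponds to the least common multiple of the encodings."""
    return s * t // gcd(s, t)

def contains(s, loc):
    """Membership test corresponds to divisibility."""
    return s % primes[loc] == 0

p = singleton('o1')            # p = &o1
q = union(p, singleton('o2'))  # q points to everything p does, plus o2
```

Under such an encoding, inclusion constraints like "q contains everything p points to" reduce to arithmetic on integers, which is what allows a points-to analysis to piggyback on numeric solution procedures.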

Relevance: 20.00%

Publisher:

Abstract:

Indian logic has a long history. It broadly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism; in other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages, the ancient, the medieval and the modern, spanning almost thirty centuries. Advances in computer science, in particular in artificial intelligence, have drawn researchers in these areas to the basic problems of language, logic and cognition over the past three decades. In the 1980s, artificial intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and the transfer of that knowledge to a computing system. The second important issue in such systems is the validation of the knowledge base, i.e. ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering can help the computer scientist understand the deeper implications of the terms and concepts currently in use and under development.

Relevance: 20.00%

Publisher:

Abstract:

In the past two decades RNase A has been the focus of diverse investigations aimed at understanding the nature of substrate binding and the mechanism of enzyme action. Although this system is reasonably well characterized with respect to some of the binding sites, the interactions at the second base-binding (B2) site remain insufficiently detailed. Further, the nature of the ligand-protein interaction is generally elucidated from studies on RNase A-substrate-analog complexes (mainly with the help of X-ray crystallography), so the atomic-level interactions arising from actual substrates are inferred indirectly. In the present paper, the dinucleotide substrate UpA is fitted into the active site of RNase A. Several possible substrate conformations are investigated and the binding modes are selected based on contact criteria. The RNase A-UpA complexes thus identified are energy-minimized in coordinate space and analysed in terms of conformations, energetics and interactions. The best possible ligand conformations for binding to RNase A are identified from experimentally known interactions and from the energetics. The changes upon binding of UpA to RNase A associated with the protein backbone, with side chains in general, and with the binding sites in particular are described. Further, the detailed interactions between UpA and RNase A are characterized in terms of hydrogen bonds and energetics. This extensive study has helped in interpreting the diverse results obtained from a number of experiments and in evaluating the extent of the changes the protein and the substrate undergo in order to maximize their interactions.

Relevance: 20.00%

Publisher:

Abstract:

A theoretical analysis of the three currently popular microscopic theories of solvation dynamics, namely, the dynamic mean spherical approximation (DMSA), the molecular hydrodynamic theory (MHT), and the memory function theory (MFT) is carried out. It is shown that in the underdamped limit of momentum relaxation, all three theories lead to nearly identical results when the translational motions of both the solute ion and the solvent molecules are neglected. In this limit, the theoretical prediction is in almost perfect agreement with the computer simulation results of solvation dynamics in the model Stockmayer liquid. However, the situation changes significantly in the presence of the translational motion of the solvent molecules. In this case, DMSA breaks down but the other two theories correctly predict the acceleration of solvation in agreement with the simulation results. We find that the translational motion of a light solute ion can play an important role in its own solvation. None of the existing theories describe this aspect. A generalization of the extended hydrodynamic theory is presented which, for the first time, includes the contribution of solute motion towards its own solvation dynamics. The extended theory gives excellent agreement with the simulations where solute motion is allowed. It is further shown that in the absence of translation, the memory function theory of Fried and Mukamel can be recovered from the hydrodynamic equations if the wave vector dependent dissipative kernel in the hydrodynamic description is replaced by its long wavelength value. We suggest a convenient memory kernel which is superior to the limiting forms used in earlier descriptions. We also present an alternate, quite general, statistical mechanical expression for the time dependent solvation energy of an ion. This expression has remarkable similarity with that for the translational dielectric friction on a moving ion.

Relevance: 20.00%

Publisher:

Abstract:

Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study; instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.
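As a minimal illustration of the Holland-style GA surveyed above, the following sketch evolves bitstrings toward the all-ones optimum (the classic OneMax problem); the population size, rates and fitness function are arbitrary choices for the example, not values from the survey:

```python
import random

random.seed(0)

def fitness(bits):
    """OneMax: the number of 1-bits; maximal for the all-ones string."""
    return sum(bits)

def select(pop):
    """Binary tournament selection."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """One-point crossover."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.01):
    """Independent bit-flip mutation."""
    return [b ^ (random.random() < rate) for b in bits]

def ga(n=30, pop_size=40, gens=60):
    """Directed random search: select, recombine and mutate each generation."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = ga()
```

The three operators here (selection, crossover, mutation) mirror the natural-search analogy drawn in the survey.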

Relevance: 20.00%

Publisher:

Abstract:

This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP to a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations had been exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Owing to this more efficient modeling of uncertainty, the resulting classifiers achieve larger classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
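The gap between the two bounding schemes can be seen on a toy calculation: for the mean of n i.i.d. variables supported on an interval of width b with variance sigma2, the one-sided Bernstein tail bound is far smaller than the Chebyshev bound at the same deviation t (the numbers below are arbitrary illustrative values, not from the paper):

```python
import math

# Illustrative parameters: n samples bounded by b_max, variance sigma2, deviation t.
n, b_max, sigma2, t = 100, 1.0, 0.05, 0.2

# Chebyshev: P(|mean - mu| >= t) <= sigma2 / (n * t^2)
chebyshev = sigma2 / (n * t ** 2)

# Bernstein: P(mean - mu >= t) <= exp(-n t^2 / (2 sigma2 + (2/3) b_max t))
bernstein = math.exp(-n * t ** 2 / (2 * sigma2 + (2.0 / 3.0) * b_max * t))
```

Because the Bernstein bound uses the variance and the support bound jointly, it decays exponentially in n where Chebyshev decays only linearly, which is the sense in which it is "far less conservative".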

Relevance: 20.00%

Publisher:

Abstract:

This paper presents novel algorithms and applications for a particular class of mixed-norm-regularization-based Multiple Kernel Learning (MKL) formulations. The formulations assume that the given kernels are grouped, and employ l1-norm regularization for promoting sparsity within the RKHS norms of each group and ls-norm (s >= 2) regularization for promoting non-sparse combinations across groups. Various sparsity levels in combining the kernels can be achieved by varying the grouping of kernels; hence we name the formulations Variable Sparsity Kernel Learning (VSKL) formulations. While previous attempts used a non-convex formulation, here we present a convex formulation which admits efficient Mirror-Descent (MD) based solving techniques. The proposed MD-based algorithm optimizes over a product of simplices and has a computational complexity of O(m^2 n_tot log(n_max) / epsilon^2), where m is the number of training data points, n_max and n_tot are the maximum number of kernels in any group and the total number of kernels respectively, and epsilon is the error in approximating the objective. A detailed proof of convergence of the algorithm is also presented. Experimental results show that the VSKL formulations are well suited for multi-modal learning tasks like object categorization. Results also show that the MD-based algorithm outperforms state-of-the-art MKL solvers in terms of computational efficiency.
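Mirror descent over a simplex is not detailed in the abstract; a generic entropic (multiplicative-update) sketch on a single simplex, minimizing a least-squares surrogate rather than the actual VSKL objective, looks like this (the dimensions, step size and objective are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def objective(x):
    """Convex surrogate ||Ax - b||^2, standing in for the MKL objective."""
    r = A @ x - b
    return float(r @ r)

def mirror_descent(steps=300, eta=0.01):
    x = np.full(5, 0.2)                 # start at the simplex centre
    for _ in range(steps):
        g = 2.0 * A.T @ (A @ x - b)     # gradient of the surrogate objective
        x = x * np.exp(-eta * g)        # entropic mirror (exponentiated) step
        x /= x.sum()                    # renormalise back onto the simplex
    return x

x_opt = mirror_descent()
```

The multiplicative update keeps the iterate nonnegative and the renormalisation keeps it on the simplex, which is why MD is a natural fit for optimizing over products of simplices.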

Relevance: 20.00%

Publisher:

Abstract:

The modes of binding of adenosine 2'-monophosphate (2'-AMP) to the enzyme ribonuclease (RNase) T1 were determined by computer modelling studies. The phosphate moiety of 2'-AMP binds at the primary phosphate-binding site. However, adenine can occupy two distinct sites: (1) the primary base-binding site where the guanine of 2'-GMP binds, and (2) the subsite close to the N1 subsite for the base on the 3'-side of guanine in a guanyl dinucleotide. The minimum-energy conformers corresponding to the two modes of binding of 2'-AMP to RNase T1 were found to be of nearly the same energy, implying that in solution 2'-AMP binds to the enzyme in both modes. The conformation of the inhibitor and the predicted hydrogen-bonding scheme for the RNase T1-2'-AMP complex in the second binding mode (S) agree well with the reported X-ray crystallographic study. The existence of the first mode of binding explains the experimental observation that RNase T1 catalyses the hydrolysis of phosphodiester bonds adjacent to adenosine at high enzyme concentrations. A comparison of the interactions of 2'-AMP and 2'-GMP with RNase T1 reveals that Glu58 and Asn98 at the phosphate-binding site and Glu46 at the base-binding site preferentially stabilise the enzyme-2'-GMP complex.

Relevance: 20.00%

Publisher:

Abstract:

Bacteriorhodopsin has been the subject of intense study aimed at understanding its photochemical function. The recent atomic model proposed by Henderson and coworkers on the basis of electron cryo-microscopic studies has helped in understanding many of the structural and functional aspects of bacteriorhodopsin. However, the accuracy of the positions of the side chains is not very high, since the model is based on low-resolution data. In this study, we have minimized the energy of this structure of bacteriorhodopsin and analyzed various types of interactions, such as intrahelical and interhelical hydrogen bonds and the retinal environment. To understand the photochemical action, it is necessary to obtain information on the structures adopted in the intermediate states. In this direction, we have generated some intermediate structures by computer modeling studies, taking certain experimental data into account. Various isomers of retinal with 13-cis and/or 15-cis conformations and all possible staggered orientations of the Lys-216 side chain were generated. The resultant structures were examined for the distance between the Lys-216 Schiff-base nitrogen and the carboxylate oxygen atoms of Asp-96, a residue which is known to reprotonate the Schiff base at later stages of the photocycle. Some of the structures were selected on the basis of suitable retinal orientation, and the stability of these structures was tested by energy-minimization studies. Further, the minimized structures were analyzed for hydrogen-bond interactions and the retinal environment, and the results are compared with those of the minimized rest-state structure. The importance of functional groups in stabilizing the structure of bacteriorhodopsin and in participating dynamically during the photocycle is discussed.

Relevance: 20.00%

Publisher:

Abstract:

An intelligent computer-aided defect analysis (ICADA) system, based on artificial intelligence techniques, has been developed to identify design, process or material parameters which could be responsible for the occurrence of defective castings in a manufacturing campaign. The data on defective castings for a particular time frame, which is an input to the ICADA system, has been analysed. It was observed that a large proportion, i.e. 50-80%, of all the defective castings produced in a foundry have two, three or four types of defects occurring above a threshold proportion, say 10%. Also, a large number of defect types are either not found at all or found in a very small proportion, below a threshold value of 2%. An important feature of the ICADA system is the recognition of this pattern in the analysis. Thirty casting defect types, together with a large number of causes (between 50 and 70 for each) as identified in the AFS analysis of casting defects, the standard reference source for the casting process, constituted the foundation for building the knowledge base. The scientific rationale underlying the formation of a defect during the casting process was identified and 38 metacauses were coded. Process, material and design parameters which contribute to the metacauses were systematically examined, and 112 were identified as root causes. The interconnections between defects, metacauses and root causes were represented as a three-tier structured graph, and the handling of uncertainty in the occurrence of events such as defects, metacauses and root causes was achieved by Bayesian analysis. The hill-climbing search technique, associated with forward reasoning, was employed to recognize one or several root causes.
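The Bayesian handling of uncertainty over the three-tier graph can be illustrated with a toy fragment; the defect, metacause and root-cause names and all probabilities below are hypothetical, chosen only to show the posterior computation, and are not taken from the ICADA knowledge base:

```python
# Hypothetical three-tier fragment: one defect, one metacause, two root causes.
p_root = {'low_pouring_temp': 0.3, 'high_sand_moisture': 0.7}   # prior P(root)

p_meta_given_root = {          # P(metacause | root cause)
    'gas_entrapment': {'low_pouring_temp': 0.2, 'high_sand_moisture': 0.9},
}

p_defect_given_meta = {        # P(defect | metacause)
    'blowhole': {'gas_entrapment': 0.8},
}

def root_posterior(defect, meta):
    """P(root | defect) is proportional to
    P(defect | meta) * P(meta | root) * P(root), then normalised."""
    scores = {
        r: p_defect_given_meta[defect][meta] * p_meta_given_root[meta][r] * p_root[r]
        for r in p_root
    }
    z = sum(scores.values())
    return {r: s / z for r, s in scores.items()}

posterior = root_posterior('blowhole', 'gas_entrapment')
```

A search such as hill climbing with forward reasoning would then pursue the root causes with the highest posterior first.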

Relevance: 20.00%

Publisher:

Abstract:

Real-time simulation of deformable solids is essential for applications such as biological organ simulation in surgical simulators. In this work, deformable solids are approximated as linear elastic, and a simple and straightforward numerical technique, the Finite Point Method (FPM), is used to model three-dimensional linear elastostatics. A Graphics Processing Unit (GPU) is used to accelerate the computations. Results show that the Finite Point Method, together with the GPU, can compute three-dimensional linear elastostatic responses of solids at rates suitable for real-time graphics, for solids represented by a reasonable number of points.

Relevance: 20.00%

Publisher:

Abstract:

The distribution of particle reinforcements in cast composites is determined by the morphology of the solidification front. Interestingly, during solidification, the morphology of the interface is intrinsically affected by the presence of dispersed reinforcements. Thus the dispersoid distribution and the length scale of the matrix microstructure result from the interplay between these two. A proper combination of material and process parameters can be used to obtain composites with tailored microstructures. This requires the generation of a broad database and optimization of the complete solidification process. The length scale of the solidification microstructure has a large influence on the mechanical properties of the composites. This presentation addresses the concept of a particle distribution map, which can help in predicting particle distribution under different solidification conditions. Future research directions are also indicated.