55 results for Software visualization

at Indian Institute of Science - Bangalore - India


Relevance:

30.00%

Publisher:

Abstract:

Background: Understanding channel structures that lead to active sites or traverse the molecule is important in the study of molecular functions such as ion, ligand, and small molecule transport. Efficient methods for extracting, storing, and analyzing protein channels are required to support such studies. Further, there is a need for an integrated framework that supports computation of the channels, interactive exploration of their structure, and detailed visual analysis of their properties. Results: We describe a method for molecular channel extraction based on the alpha complex representation. The method computes geometrically feasible channels, stores both the volume occupied by the channel and its centerline in a unified representation, and reports significant channels. The representation also supports efficient computation of channel profiles that help in understanding channel properties. We describe methods for effective visualization of the channels and their profiles. These methods and the visual analysis framework are implemented in a software tool, CHEXVIS. We apply the method to a number of known channel-containing proteins to extract pore features. Results from these experiments show that the performance of CHEXVIS is comparable to, and in some cases better than, existing channel extraction techniques. Using several case studies, we demonstrate how CHEXVIS can be used to study channels, extract their properties and gain insights into molecular function. Conclusion: CHEXVIS supports the visual exploration of multiple channels together with their geometric and physico-chemical properties, thereby enabling the understanding of the basic biology of transport through protein channels. The CHEXVIS web-server is freely available at http://vgl.serc.iisc.ernet.in/chexvis/. The web-server is supported on all modern browsers with the latest Java plug-in.
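The channel-profile idea above can be sketched numerically: given a centerline and atom coordinates, the channel radius at each centerline point is the clearance to the nearest atom surface. This is only a minimal illustration of a profile computation, not the alpha-complex algorithm CHEXVIS uses, and the toy atoms, radii and centerline below are invented:

```python
import math

def channel_profile(centerline, atoms):
    """For each centerline point, the channel radius is taken as the distance
    to the nearest atom surface (distance to atom centre minus its van der
    Waals radius). The minimum over the profile is the channel bottleneck."""
    profile = []
    for p in centerline:
        r = min(math.dist(p, (x, y, z)) - vdw
                for x, y, z, vdw in atoms)
        profile.append(r)
    return profile

# Hypothetical toy data: four atoms ringing a straight channel along z.
atoms = [(2.0, 2.0, 0.0, 1.5), (-2.0, 2.0, 0.0, 1.5),
         (2.0, -2.0, 0.0, 1.5), (-2.0, -2.0, 0.0, 1.5)]
centerline = [(0.0, 0.0, z) for z in (-1.0, 0.0, 1.0)]
profile = channel_profile(centerline, atoms)
bottleneck = min(profile)   # narrowest point along the channel
```

The profile makes bottlenecks visible at a glance, which is the kind of property the visual analysis framework exposes interactively.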

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors, and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software “bugs”, the failure history of the software system in the various phases of its lifecycle, reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process are all considered in varying degrees of detail. We also discuss the notion of software fault-tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.

Relevance:

20.00%

Publisher:

Abstract:

The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, achieved by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, one which would enable at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given.
The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
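A minimal sketch of the filter-plus-random-number-generator idea: white noise shaped by a second-order all-pole (rational transfer function) resonator placed near the alpha band. This is an illustration only, with arbitrary filter parameters; the actual Zetterberg-based program uses its own transfer functions and band scaling:

```python
import math
import random

def alpha_resonator(n, fs=100.0, f0=10.0, r=0.95, seed=1):
    """Drive a 2nd-order all-pole filter with Gaussian white noise:
    y[k] = x[k] + a1*y[k-1] + a2*y[k-2], poles at radius r and frequency f0.
    A fixed seed reproduces the same output, mirroring the property that the
    same selected parameters always produce the same statistical output."""
    rng = random.Random(seed)
    a1 = 2.0 * r * math.cos(2.0 * math.pi * f0 / fs)
    a2 = -r * r
    y = [0.0, 0.0]
    for _ in range(n):
        y.append(rng.gauss(0.0, 1.0) + a1 * y[-1] + a2 * y[-2])
    return y[2:]

# 25 s of a 'stationary EEG'-like series at an assumed 100 Hz sampling rate.
sig = alpha_resonator(2500)
```

Summing several such filtered streams, one per frequency band and each scaled to a realistic band power, gives a signal of the kind the paper describes.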

Relevance:

20.00%

Publisher:

Abstract:

The routine use of proton NMR for the visualization of enantiomers, aligned in the chiral liquid crystal solvent poly-γ-benzyl-l-glutamate (PBLG), is restricted by the severe loss of resolution arising from the large number of pairwise interactions of nuclear spins. In the present study, we have designed two experimental techniques for their visualization utilizing natural-abundance 13C-edited selective refocusing of single quantum (CH-SERF) and double quantum (CH-DQSERF) coherences. The methods achieve chiral discrimination and aid in the simultaneous determination of homonuclear couplings between active and passive spins and heteronuclear couplings between the excited protons and the participating 13C spin. CH-SERF also overcomes the problem of overlap of the central transitions of the methyl selective refocusing (SERF) experiment, resulting in better chiral discrimination. A theoretical description of the evolution of magnetization in both sequences is given using the polarization operator formalism.

Relevance:

20.00%

Publisher:

Abstract:

The NUVIEW software package allows skeletal models of any double-helical nucleic acid molecule to be displayed on a graphics monitor, with various rotations, translations and scaling transformations applied interactively through the keyboard. The skeletal model is generated by connecting any pair of representative points, one from each of the bases in the base pair. In addition to the above-mentioned manipulations, base residues can be identified using a locator, and the distance between any pair of residues can be obtained. A sequence-based color-coded display allows easy identification of sequence repeats, such as runs of adenines. The real-time interactive manipulation of such skeletal models for large DNA/RNA double helices can be used to trace the path of the nucleic acid chain in three dimensions and hence get a better idea of its topology, the location of linear or curved regions, distances between far-off regions in the sequence, etc. A physical picture of these features will assist in understanding the relationship between base sequence, structure and biological function in nucleic acids.
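The interactive rotations, translations and scaling described above reduce to coordinate transforms applied to the representative points of the skeletal model. A hedged sketch of such a transform (not NUVIEW's actual implementation, which the abstract does not give; a single z-axis rotation stands in for the general case):

```python
import math

def transform(points, angle_deg=0.0, shift=(0.0, 0.0, 0.0), scale=1.0):
    """Apply a rotation about the z-axis, uniform scaling and a translation
    to a skeletal model given as a list of (x, y, z) representative points."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    out = []
    for x, y, z in points:
        rx, ry = c * x - s * y, s * x + c * y       # rotate in the xy plane
        out.append((scale * rx + shift[0],
                    scale * ry + shift[1],
                    scale * z + shift[2]))
    return out

# A keyboard-driven viewer would reapply this to the model on each keystroke.
rotated = transform([(1.0, 0.0, 0.0)], angle_deg=90.0)
```

Redrawing the transformed point list on every input event is what gives the real-time manipulation the abstract describes.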

Relevance:

20.00%

Publisher:

Abstract:

The software packages NUPARM and NUCGEN are described, which can be used to understand sequence-directed structural variations in nucleic acids through the analysis and generation of non-uniform structures. A set of local inter-basepair parameters (viz. tilt, roll, twist, shift, slide and rise) has been defined, which uses the geometry and coordinates of two successive basepairs only and can be used to generate polymeric structures with varying geometries for each of the 16 possible dinucleotide steps. Intra-basepair parameters (propeller, buckle, opening and the C6...C8 distance) can also be varied, if required, while the sugar-phosphate backbone atoms are fixed in some standard conformation in each of the nucleotides. NUPARM can be used to analyse both DNA and RNA structures, with single- as well as double-stranded helices. The NUCGEN software generates double-helical models with the backbone fixed in the B-form DNA, but with appropriate modifications in the input data it can also generate A-form DNA and RNA duplex structures.
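The idea of generating a structure from per-step parameters can be illustrated with just two of them, twist and rise. This is a deliberately simplified sketch, not NUCGEN's generation scheme: the full local parameter set also includes tilt, roll, shift and slide, plus the fixed backbone, and the display radius below is arbitrary:

```python
import math

def generate_axis(steps):
    """Place one representative point per base pair by accumulating the local
    twist (degrees) and rise (Angstrom) of each dinucleotide step. Non-uniform
    structures arise from giving each of the 16 step types its own values."""
    pts = [(0.0, 0.0, 0.0)]
    phase, z, radius = 0.0, 0.0, 1.0   # radius: arbitrary display offset
    for twist, rise in steps:
        phase += math.radians(twist)
        z += rise
        pts.append((radius * math.cos(phase), radius * math.sin(phase), z))
    return pts

# Uniform B-DNA-like geometry: 36 deg twist and 3.4 A rise at every step.
helix = generate_axis([(36.0, 3.4)] * 10)
```

Replacing the uniform list with values looked up per dinucleotide step is what produces the sequence-directed, non-uniform geometries the packages are built around.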

Relevance:

20.00%

Publisher:

Abstract:

In the presence of ATP, recA protein forms a presynaptic complex with single-stranded DNA that is an obligatory intermediate in homologous pairing. Presynaptic complexes of recA protein and circular single strands that are active in forming joint molecules can be isolated by gel filtration. These isolated active complexes are nucleoprotein filaments with the following characteristics: (i) a contour length that is at least 1.5 times that of the corresponding duplex DNA molecule, (ii) an ordered structure visualized by negative staining as a striated filament with a repeat distance of 9.0 nm and a width of 9.3 nm, (iii) approximately 8 molecules of recA protein and 20 nucleotide residues per striation. The widened spacing between bases in the nucleoprotein filament means that the initial matching of complementary sequences must involve intertwining of the filament and duplex DNA, unwinding of the latter, or some combination of both to equalize the spacing between nascent base pairs. These experiments support the concept that recA protein first forms a filament with single-stranded DNA, which in turn binds to duplex DNA to mediate both homologous pairing and subsequent strand exchange.
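The widened base spacing claimed above can be checked with simple arithmetic from the reported striation geometry; the B-DNA rise of 0.34 nm per base pair used for comparison is a standard textbook value, not a number from this abstract:

```python
# Back-of-the-envelope check of the filament geometry reported above:
# about 20 nucleotide residues per 9.0 nm striation repeat, compared with
# the canonical 0.34 nm rise per base pair of B-form duplex DNA.
striation_nm = 9.0
residues_per_striation = 20
b_dna_rise_nm = 0.34                  # standard B-DNA value (assumption)

filament_rise = striation_nm / residues_per_striation   # nm per nucleotide
extension = filament_rise / b_dna_rise_nm               # relative stretching
```

The per-residue spacing comes out to 0.45 nm, well above the B-DNA rise, consistent with the substantially extended contour length and with the conclusion that nascent base pairing requires the duplex spacing to be equalized.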

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE. To understand the molecular features underlying autosomal dominant congenital cataracts caused by the deletion mutations W156X in human gammaD-crystallin and W157X in human gammaC-crystallin. METHODS. Normal and mutant cDNAs (with the enhanced green fluorescent protein [EGFP] tag at the front) were cloned into the pEGFP-C1 vector, transfected into various cell lines, and observed under a confocal microscope for EGFP fluorescence. Normal and W156X gammaD cDNAs were also cloned into the pET21a(+) vector, and the recombinant proteins were overexpressed in the BL-21(DE3) pLysS strain of Escherichia coli, purified, and isolated. The conformational features, structural stability, and solubility in aqueous solution of the mutant protein were compared with those of the wild type using spectroscopic methods. Comparative molecular modeling was performed to provide additional structural information. RESULTS. Transfection of the EGFP-tagged mutant cDNAs into several cell lines led to the visualization of aggregates, whereas that of wild-type cDNAs did not. Turning to the properties of the expressed proteins, the mutant molecules show a remarkable reduction in solubility. They also seem to have a greater degree of surface hydrophobicity than the wild-type molecules, most likely accounting for self-aggregation. Molecular modeling studies support these features. CONCLUSIONS. The deletion of the C-terminal 18 residues of human gammaC- and gammaD-crystallins exposes the side chains of several hydrophobic residues in the sequence to the solvent, causing the molecule to self-aggregate. This feature appears to be reflected in situ on the introduction of the mutants into human lens epithelial cells.

Relevance:

20.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem - both scheduling and assignment of filters to processors - as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single-threaded CPU.
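The assignment half of the problem, mapping filters to processors, can be illustrated with a toy makespan-minimizing search. The paper formulates the combined scheduling-and-assignment problem as an ILP and uses ILP solvers; the brute force below, with invented filter execution costs, only illustrates the objective on a toy size:

```python
from itertools import product

def best_assignment(loads, n_procs):
    """Exhaustively search filter-to-processor assignments, minimising the
    makespan (the load of the busiest processor). Exponential in the number
    of filters, so usable only for intuition on tiny stream graphs."""
    best, best_cost = None, float("inf")
    for assign in product(range(n_procs), repeat=len(loads)):
        per_proc = [0.0] * n_procs
        for load, p in zip(loads, assign):
            per_proc[p] += load
        cost = max(per_proc)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# Hypothetical execution times for a 6-filter stream graph on 2 processors.
assign, makespan = best_assignment([4.0, 3.0, 3.0, 2.0, 2.0, 2.0], 2)
```

An ILP formulation expresses the same objective with binary assignment variables and load constraints, which is what lets real solvers handle realistic graph sizes.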

Relevance:

20.00%

Publisher:

Abstract:

Automatic identification of software faults has enormous practical significance. It requires characterizing program execution behavior and the use of appropriate data mining techniques on the chosen representation. In this paper, we use the sequence of system calls to characterize program execution. The data mining tasks addressed are learning to map system call streams to fault labels and automatic identification of fault causes. Spectrum kernels and SVMs are used for the former, while latent semantic analysis is used for the latter. The techniques are demonstrated on an intrusion dataset containing system call traces. The results show that the kernel techniques are as accurate as the best available results but are faster by orders of magnitude. We also show that latent semantic indexing is capable of revealing fault-specific features.
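A spectrum kernel compares two sequences by counting shared k-grams; over system-call streams, the "alphabet" is the set of call names. A minimal sketch (the toy traces below are illustrative, not taken from the intrusion dataset):

```python
from collections import Counter

def spectrum_kernel(seq_a, seq_b, k=3):
    """k-spectrum kernel: the inner product of the two sequences' k-mer
    count vectors, here over sequences of system-call names."""
    def kmers(seq):
        return Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    ca, cb = kmers(seq_a), kmers(seq_b)
    return sum(ca[m] * cb[m] for m in ca if m in cb)

# Hypothetical system-call traces from two executions.
trace1 = ["open", "read", "write", "open", "read", "write", "close"]
trace2 = ["open", "read", "write", "close"]
sim = spectrum_kernel(trace1, trace2, k=3)
```

Because the kernel is computed from k-mer counts rather than an explicit high-dimensional feature vector, an SVM can train and classify quickly, which is the source of the speed advantage reported above.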

Relevance:

20.00%

Publisher:

Abstract:

The proton NMR spectral complexity arising from severe overlap of peaks hampers their analysis in diverse situations, even with the application of two-dimensional experiments. The selective or complete removal of the couplings, and retention of only the chemical shift interactions in the indirect dimension, aids in the simplification of the spectrum to a large extent with little investment of instrument time. The present study provides precise enantiodiscrimination employing more anisotropic NMR parameters in a chiral liquid crystalline medium and differentiates the overlapped peaks of many organic molecules and peptides dissolved in isotropic solvents.

Relevance:

20.00%

Publisher:

Abstract:

Simple two-dimensional C-13-satellite J/D-resolved experiments are proposed for the visualization of enantiomers, the extraction of homo- and heteronuclear residual dipolar couplings, and also the H-1 chemical shift differences between the enantiomers in an anisotropic medium. The significant advantage of the techniques is in the determination of scalar couplings of bigger organic molecules. Scalar couplings specific to a second abundant spin, such as F-19, can be selectively extracted from a severely overlapped spectrum. The methodologies are demonstrated on a chiral molecule aligned in a chiral liquid crystal medium and on two different organic molecules in isotropic solutions.