47 results for Discovery Tools

at Indian Institute of Science - Bangalore - India


Relevance: 20.00%

Abstract:

Abstract is not available.

Relevance: 20.00%

Abstract:

Background: Adjuvants enhance or modify the immune response that is made to an antigen. An antagonist of the chemokine receptor CCR4 can display adjuvant-like properties by diminishing the ability of CD4+CD25+ regulatory T cells (Tregs) to down-regulate immune responses.

Methodology: Here, we have used protein modelling to create a plausible chemokine receptor model with the aim of using virtual screening to identify potential small-molecule chemokine antagonists. A combination of homology modelling and molecular docking was used to create a model of the CCR4 receptor in order to investigate potential lead compounds that display antagonistic properties. Three-dimensional structure-based virtual screening of the CCR4 receptor identified 116 small molecules that were calculated to have a high affinity for the receptor; these were tested experimentally for CCR4 antagonism. Fifteen of these small molecules were shown to specifically inhibit CCR4-mediated cell migration, including that of CCR4(+) Tregs.

Significance: Our CCR4 antagonists act as adjuvants, augmenting human T cell proliferation in an in vitro immune response model, and compound SP50 increases T cell and antibody responses in vivo when combined with vaccine antigens of Mycobacterium tuberculosis and Plasmodium yoelii in mice.
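The abstract describes ranking docked compounds by predicted affinity and carrying the best forward for experimental testing. A minimal sketch of that triage step is below; the compound names other than SP50 and all scores and the threshold are invented for illustration, not taken from the paper.

```python
# Hypothetical hit-selection step after structure-based virtual screening:
# rank compounds by predicted binding energy (more negative = tighter) and
# keep those past a cutoff. Scores and threshold are illustrative only.

def select_hits(docking_results, threshold=-8.0, max_hits=116):
    """Keep compounds whose predicted binding energy (kcal/mol) is at or
    below `threshold`, ranked best (most negative) first."""
    ranked = sorted(docking_results.items(), key=lambda kv: kv[1])
    return [name for name, score in ranked if score <= threshold][:max_hits]

scores = {"SP50": -9.4, "SP12": -7.1, "SP33": -8.6, "SP07": -6.2}
print(select_hits(scores))  # → ['SP50', 'SP33']
```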

Relevance: 20.00%

Abstract:

Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. Currently, the field is witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combinations, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation.

Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion of levels of abstraction of biological systems and describes the different modeling methodologies available for this purpose. It then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and considering them during the target identification stage itself.

What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed, together with an idea of the promise and limitations of the various approaches and perspectives for future development.

Take home message: Systems thinking has now come of age, enabling a 'bird's eye view' of the biological systems under study, while at the same time allowing us to 'zoom in', where necessary, for a detailed description of individual components. A number of the methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.
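One of the modeling methodologies the review surveys is dynamic (ODE-based) simulation for probing targetability. The sketch below is not from the review: it is a minimal two-step pathway under mass-action kinetics, with invented rate constants, where the conversion step is inhibited to see the effect on pathway output.

```python
# Minimal dynamic-model sketch for targetability: A is produced, converted
# to output B, and both species degrade. Inhibiting the conversion step
# (the "target") lowers the steady-state output. Rates are invented.

def simulate(k_prod, k_conv, k_degA, k_degB, t_end=80.0, dt=0.01):
    """Euler integration of:
       dA/dt = k_prod - (k_conv + k_degA) * A
       dB/dt = k_conv * A - k_degB * B"""
    a = b = 0.0
    for _ in range(int(t_end / dt)):
        da = k_prod - (k_conv + k_degA) * a
        db = k_conv * a - k_degB * b
        a += da * dt
        b += db * dt
    return b  # pathway output near steady state

baseline  = simulate(1.0, 0.5,  0.1, 0.2)   # steady-state B ≈ 4.17
inhibited = simulate(1.0, 0.05, 0.1, 0.2)   # conversion step 90% inhibited
```

Comparing `baseline` and `inhibited` (about 4.17 versus 1.67 here) is the kind of in-silico perturbation experiment that helps judge whether a reaction is worth targeting before any wet-lab work.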

Relevance: 20.00%

Abstract:

In recent times, CFD tools have become increasingly useful in engineering design studies, especially in the area of aerospace vehicles. This is largely due to the advent of high-speed computing platforms, in addition to the development of new, efficient algorithms. Algorithms based on kinetic schemes have been shown to be very robust, and meshless methods offer certain advantages over other methods. Preliminary investigations of blood-flow visualization through an artery using a CFD tool have shown encouraging results, which further need to be verified and validated.
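The kinetic and meshless schemes mentioned above are too involved for a short sketch, but the upwind-flux idea underlying kinetic flux-vector splitting can be shown on the simplest possible case: first-order upwind differencing for 1-D linear advection. Grid size, CFL number and the pulse profile below are all illustrative.

```python
# First-order upwind scheme for u_t + a*u_x = 0 on a periodic 1-D grid;
# information is taken from the upwind (left) side for a > 0, which is the
# same one-sided flux idea used in kinetic flux-vector splitting.

def advect(u, a, dx, dt, steps):
    """Advance the solution `steps` time steps; stable for 0 < a*dt/dx <= 1."""
    u = list(u)
    c = a * dt / dx  # CFL number
    for _ in range(steps):
        # build the new time level from the old one (u[-1] gives periodicity)
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u

n = 100
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]   # square pulse
u1 = advect(u0, a=1.0, dx=1.0, dt=0.5, steps=40)        # moves ~20 cells right
```

The scheme conserves the integral of `u` exactly on a periodic grid, and its numerical diffusion smears the pulse, which is the classic trade-off of first-order upwinding.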

Relevance: 20.00%

Abstract:

Glycomics is the study of the comprehensive structural elucidation and characterization of all glycoforms found in nature and their dynamic spatiotemporal changes associated with biological processes. The glycocalyx of mammalian cells actively participates in cell-cell, cell-matrix, and cell-pathogen interactions, which impact embryogenesis, growth and development, homeostasis, infection and immunity, signaling, malignancy, and metabolic disorders. Relative to genomics and proteomics, glycomics is just growing out of infancy, with great potential in biomedicine for biomarker discovery, diagnosis, and treatment. However, the immense diversity and complexity of glycan structures and their multiple modes of interaction with proteins pose great challenges for the development of analytical tools for delineating structure-function relationships and understanding the glycocode. Several tools are being developed for glycan profiling based on chromatography, mass spectrometry, glycan microarrays, and glyco-informatics. Lectins, which have long been used in glyco-immunology, printed on a microarray provide a versatile platform for rapid high-throughput analysis of glycoforms of biological samples. Herein, we summarize technological advances in lectin microarrays and critically review their impact on glycomics analysis. Challenges remain in terms of expansion to include non-plant-derived lectins, standardization for routine clinical use, development of recombinant lectins, and exploration of the plant kingdom for the discovery of novel lectins.
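A lectin-microarray readout ultimately reduces to per-spot intensities that must be background-corrected and normalized before glycoprofiles can be compared across samples. The sketch below shows one common minimal pipeline (background subtraction plus median scaling); the lectin names are real lectins, but the intensity values and this particular normalization choice are illustrative, not from the review.

```python
# Hypothetical lectin-microarray preprocessing: subtract local background,
# floor at zero, and scale by the array median so that two samples become
# comparable. Intensities below are invented.

def normalize(raw, background):
    """Return a median-scaled, background-corrected lectin-binding profile."""
    corrected = {l: max(v - background[l], 0.0) for l, v in raw.items()}
    vals = sorted(corrected.values())
    median = vals[len(vals) // 2]
    return {l: v / median for l, v in corrected.items()}

sample = {"ConA": 5200.0, "WGA": 1800.0, "SNA": 950.0, "MAL-II": 400.0}
bg     = {"ConA": 200.0,  "WGA": 300.0,  "SNA": 150.0, "MAL-II": 100.0}
profile = normalize(sample, bg)  # e.g. strong ConA (high-mannose) signal
```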

Relevance: 20.00%

Abstract:

Purpose - There are many library automation packages available as open-source software, comprising two modules: a staff-client module and an online public access catalogue (OPAC). Although the OPACs of these library automation packages provide advanced search and retrieval of bibliographic records, none of them facilitates full-text searching. Most of the available open-source digital library software facilitates indexing and searching of full-text documents in different formats. This paper makes an effort to enable full-text search in the widely used open-source library automation package Koha, by integrating it with two open-source digital library software packages, Greenstone Digital Library Software (GSDL) and Fedora Generic Search Service (FGSS), independently.

Design/methodology/approach - The implementation makes use of the Search and Retrieval by URL (SRU) feature available in Koha, GSDL and FGSS. The full-text documents are indexed in Koha as well as in GSDL and FGSS.

Findings - Full-text searching capability in Koha is achieved by integrating either GSDL or FGSS into Koha and passing an SRU request to GSDL or FGSS from Koha. The full-text documents are indexed both in the library automation package (Koha) and in the digital library software (GSDL, FGSS).

Originality/value - This is the first implementation enabling the full-text search feature in library automation software by integrating it with digital library software.
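The glue in such an integration is the SRU searchRetrieve request itself. The sketch below builds one; the host, port and path are illustrative, while the parameter names (`operation`, `version`, `query`, `maximumRecords`, `recordSchema`) are standard SRU 1.1.

```python
# Construct an SRU 1.1 searchRetrieve URL of the kind Koha would send to a
# GSDL or FGSS endpoint. Base URL below is a made-up example endpoint.
from urllib.parse import urlencode

def sru_search_url(base, cql_query, max_records=10, schema="dc"):
    """Build a searchRetrieve request; `cql_query` is a CQL expression."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": cql_query,
        "maximumRecords": str(max_records),
        "recordSchema": schema,
    }
    return base + "?" + urlencode(params)

url = sru_search_url("http://gsdl.example.org:8080/sru", 'text any "discovery"')
```

The digital library software answers with an XML `searchRetrieveResponse`, which the automation package then parses and merges into its own result list.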

Relevance: 20.00%

Abstract:

Service discovery is vital in ubiquitous applications, where a large number of devices and software components collaborate unobtrusively and provide numerous services without user intervention. Existing service discovery schemes use a service matching process in order to offer services of interest to users. Potentially, the context information of the users and the surrounding environment can be used to improve the quality of service matching. To make use of context information in service matching, a service discovery technique needs to address certain challenges. Firstly, the context information must have an unambiguous representation. Secondly, the devices in the environment must be able to disseminate high-level and low-level context information seamlessly across different networks. Thirdly, the dynamic nature of the context information must be taken into account. We propose a C-IOB (Context-Information, Observation and Belief) based service discovery model which deals with these challenges by processing the context information and formulating beliefs based on observations. With these formulated beliefs, the required services are provided to the users. The method has been tested with a typical ubiquitous museum-guide application over different cases. The simulation results are quite encouraging and show the method to be time-efficient.
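The context-to-observation-to-belief pipeline described above can be sketched as follows. All names, thresholds and the museum scenario details are hypothetical illustrations of the idea, not the paper's actual model.

```python
# Hypothetical C-IOB-style flow: raw context readings are abstracted into
# observations, repeated observations are promoted to beliefs, and services
# are then matched against the held beliefs.

def observe(context):
    """Map low-level context readings to a high-level observation."""
    if context["location"] == "gallery" and context["dwell_seconds"] > 60:
        return "interested_in_exhibit"
    return "passing_through"

def update_beliefs(beliefs, observation, threshold=2):
    """Count observations; promote one to a belief once seen `threshold` times."""
    counts, held = beliefs
    counts[observation] = counts.get(observation, 0) + 1
    if counts[observation] >= threshold:
        held.add(observation)
    return beliefs

def match_services(services, held_beliefs):
    """Offer only the services whose required belief is currently held."""
    return [s for s, needed in services.items() if needed in held_beliefs]

beliefs = ({}, set())
for ctx in [{"location": "gallery", "dwell_seconds": 90},
            {"location": "gallery", "dwell_seconds": 120}]:
    update_beliefs(beliefs, observe(ctx))

services = {"audio_guide": "interested_in_exhibit", "exit_map": "passing_through"}
print(match_services(services, beliefs[1]))  # → ['audio_guide']
```

Requiring an observation to recur before it becomes a belief is one simple way to handle the dynamic, noisy nature of context information.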

Relevance: 20.00%

Abstract:

An understanding of application I/O access patterns is useful in several situations. First, gaining insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that closely mimic realistic application behavior. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop.

All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces.

In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in the traces, and to mark out regions in a long trace that encapsulate sets of primitive operations representing higher-level application actions. It is robust enough to work around discrepancies between training and target traces, such as differences in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting.

We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications, such as compilations, as well as industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
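Profile HMMs add position-specific match, insert and delete states, which is more machinery than fits a short sketch. The simplified example below shows the underlying idea with ordinary HMMs: score an operation sequence under per-application models via the forward algorithm and pick the most likely one. The states, operations and all probabilities are invented for illustration.

```python
# Recognize the application behind an NFS-like operation trace by scoring
# it under two hypothetical 2-state HMMs and choosing the likelier model.

def forward(seq, states, start, trans, emit):
    """Forward algorithm: P(observed operation sequence | model)."""
    alpha = {s: start[s] * emit[s][seq[0]] for s in states}
    for op in seq[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][op]
                 for s in states}
    return sum(alpha.values())

def classify(seq, models):
    """Pick the application model under which the trace is most likely."""
    return max(models, key=lambda name: forward(seq, *models[name]))

compile_model = (  # lookup-heavy scanning, then write-heavy building
    ["scan", "build"],
    {"scan": 0.9, "build": 0.1},
    {"scan": {"scan": 0.7, "build": 0.3}, "build": {"scan": 0.2, "build": 0.8}},
    {"scan": {"lookup": 0.6, "read": 0.3, "write": 0.1},
     "build": {"lookup": 0.1, "read": 0.3, "write": 0.6}},
)
oltp_model = (  # read/write-dominated, few lookups
    ["query", "update"],
    {"query": 0.5, "update": 0.5},
    {"query": {"query": 0.5, "update": 0.5},
     "update": {"query": 0.5, "update": 0.5}},
    {"query": {"lookup": 0.05, "read": 0.8, "write": 0.15},
     "update": {"lookup": 0.05, "read": 0.2, "write": 0.75}},
)
models = {"compile": compile_model, "oltp": oltp_model}
trace = ["lookup", "lookup", "read", "write", "write"]
print(classify(trace, models))  # → compile
```

In the paper's setting, longer traces make the likelihood gap between the right and wrong model grow rapidly, which is what gives the method its discriminatory power.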

Relevance: 20.00%

Abstract:

This paper presents concepts, designs, and working prototypes of enhanced laparoscopic surgical tools. The enhancements equip the tool with force and temperature sensing as well as image acquisition for stereo vision. Just as the pupils of our eyes are adequately spaced out and the distance between them is adjustable, two minute cameras mounted on a mechanism in our design can be moved closer together or farther apart inside the inflated abdomen during surgery. The cameras are fitted to a deployable mechanism consisting of flexural joints so that they can be inserted through a small incision and then deployed and moved as needed. A temperature sensor and a force sensor are mounted on either gripping face of the surgical grasping tool to measure the temperature and gripping force, which need to be controlled for safe laparoscopic surgery. The sensors are small enough that they do not cause interference during surgery and insertion. Prototyping and working of the enhanced laparoscopic tool are presented in detail.

Relevance: 20.00%

Abstract:

Segmental dynamic time warping (DTW) has been demonstrated to be a useful technique for finding acoustic similarity scores between segments of two speech utterances. Due to its high computational requirements, it has had to be computed in an offline manner, limiting the applications of the technique. In this paper, we present results of parallelizing this task by distributing the workload either statically or dynamically on an 8-processor cluster, and discuss the trade-offs among the different distribution schemes. We show that online unsupervised pattern discovery using segmental DTW is feasible with as few as 8 processors. This brings the task within reach of today's general-purpose multi-core servers. We also show results on a 32-processor system, and discuss the factors affecting the scalability of our methods.
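The core computation being parallelized is DTW over many pairs of utterances. The sketch below shows classic DTW (with 1-D features for brevity, whereas speech work uses frame-level feature vectors) and a simple static round-robin split of the pairwise comparisons across workers; the feature values are illustrative.

```python
# Classic DTW distance plus a static workload partition: the set of all
# utterance pairs is dealt out round-robin, one chunk per worker, which is
# the "static" end of the static-vs-dynamic distribution trade-off.

def dtw(x, y):
    """Dynamic time warping distance with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(x), len(y)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def static_partition(pairs, n_workers):
    """Round-robin assignment of utterance pairs to workers."""
    return [pairs[w::n_workers] for w in range(n_workers)]

utts = [[0.0, 1.0, 2.0], [0.0, 1.1, 2.1], [5.0, 4.0, 3.0]]
pairs = [(i, j) for i in range(len(utts)) for j in range(i + 1, len(utts))]
work = static_partition(pairs, n_workers=2)
scores = {p: dtw(utts[p[0]], utts[p[1]]) for chunk in work for p in chunk}
```

A static split costs nothing at runtime but can leave workers idle when pair lengths differ widely; a dynamic scheme hands out pairs from a shared queue at the price of coordination overhead, which is exactly the trade-off the paper examines.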