77 results for methodologies
at Indian Institute of Science - Bangalore - India
Abstract:
The scalar coupled proton NMR spectra of many organic molecules possessing more than one phenyl ring are generally complex due to degeneracy of transitions arising from the closely resonating protons, in addition to several short- and long-range couplings experienced by each proton. Analogous situations are generally encountered in derivatives of halogenated benzanilides. Extraction of information from such spectra is challenging and demands the differentiation of the spectra pertaining to each phenyl ring and the simplification of their spectral complexity. The present study employs a combination of independent spin-system filtering and the spin-state selective detection of single quantum (SQ) transitions by the two-dimensional multiple quantum (MQ) methodology to achieve this goal. The precise values of scalar couplings of very small magnitude have been derived by double quantum resolved experiments. The experiments also provide the relative signs of heteronuclear couplings. Studies on four isomers of dihalogenated benzanilides are reported in this work.
Abstract:
It is well known that enantiomers cannot be distinguished by NMR spectroscopy unless diastereomorphic interactions are imposed. Several chiral aligning media have therefore been reported for their visualization, although the most extensive studies have been carried out using the liquid crystal made of the polypeptide poly-γ-benzyl-L-glutamate (PBLG) in an organic solvent. In PBLG medium the spin systems are weakly coupled and first-order analyses of the spectra are generally possible. However, the large number of pairwise interactions of nuclear spins results in many degenerate transitions, making the 1H NMR spectra not only complex but also broad and featureless, in addition to an indistinguishable overlap of the spectra of the enantiomers. This enormous loss of resolution severely hinders the analysis of proton spectra, even for spin systems with 5–6 interacting protons, thereby restricting its routine application. In this review we discuss several one- and multi-dimensional NMR experiments we recently developed to circumvent these difficulties, taking specific examples of molecules containing a single chiral centre.
Abstract:
Understanding the mechanism by which an unfolded polypeptide chain folds to its unique, functional structure is a primary unsolved problem in biochemistry. Fundamental advances towards understanding how proteins fold have come from kinetic studies, which allow the dissection of the folding pathway of a protein into individual steps that are defined by partially structured folding intermediates. Improvements in both the structural and temporal resolution of the physical methods that are used to monitor the folding process, as well as the development of new methodologies, are now making it possible to obtain detailed structural information on protein folding pathways. The protein engineering methodology has been particularly useful in characterizing the structures of folding intermediates as well as the transition state of folding. Several characteristics of protein folding pathways have begun to emerge as general features for the folding of many different proteins. Progress in our understanding of how structure develops during folding is reviewed here.
Abstract:
Background: Tuberculosis still remains one of the largest killer infectious diseases, warranting the identification of newer targets and drugs. Identification and validation of appropriate targets for designing drugs are critical steps in drug discovery, and at present they are major bottlenecks. A majority of drugs in current clinical use for many diseases have been designed without knowledge of their targets, perhaps because standard methodologies to identify such targets in a high-throughput fashion do not really exist. With the different kinds of 'omics' data that are now available, computational approaches can be powerful means of obtaining short-lists of possible targets for further experimental validation. Results: We report a comprehensive in silico target identification pipeline, targetTB, for Mycobacterium tuberculosis. The pipeline incorporates a network analysis of the protein-protein interactome, a flux balance analysis of the reactome, experimentally derived phenotype essentiality data, sequence analyses and a structural assessment of targetability, using novel algorithms recently developed by us. Using flux balance analysis and network analysis, proteins critical for survival of M. tuberculosis are first identified, followed by comparative genomics with the host, finally incorporating a novel structural analysis of binding sites to assess the feasibility of a protein as a target. Further analyses include correlation with expression data and non-similarity to gut flora proteins as well as 'anti-targets' in the host, leading to the identification of 451 high-confidence targets. Through phylogenetic profiling against 228 pathogen genomes, the shortlisted targets have been further explored to identify broad-spectrum antibiotic targets, while also identifying those specific to tuberculosis. Targets that address mycobacterial persistence and drug resistance mechanisms are also analysed.
Conclusion: The pipeline developed provides a rational schema for drug target identification that is likely to have a high rate of success, which is expected to save enormous amounts of money, resources and time in the drug discovery process. A thorough comparison with previously suggested targets in the literature demonstrates the usefulness of the integrated approach used in our study, highlighting the importance of systems-level analyses in particular. The method has the potential to be used as a general strategy for target identification and validation and hence to significantly impact most drug discovery programmes.
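The successive filtering described in this abstract can be sketched as simple set operations. The gene IDs and category sets below are illustrative placeholders, not the paper's actual data or thresholds:

```python
# Illustrative sketch (hypothetical data): successive filtering of candidate
# proteins, mirroring the shortlisting logic of a targetTB-style pipeline.

def shortlist_targets(proteome, essential, host_homologs, gut_flora_like, targetable):
    """Keep proteins that are essential, lack a host homolog, are dissimilar
    to gut-flora proteins, and have a targetable binding site."""
    candidates = set(proteome)
    candidates &= essential        # flux-balance / network / phenotype essentiality
    candidates -= host_homologs    # comparative genomics against the host
    candidates -= gut_flora_like   # avoid disrupting gut flora
    candidates &= targetable       # structural assessment of binding sites
    return candidates

proteome = {"Rv0001", "Rv0667", "Rv1908c", "Rv2245", "Rv3846"}
essential = {"Rv0001", "Rv0667", "Rv2245", "Rv3846"}
host_homologs = {"Rv3846"}
gut_flora_like = {"Rv0001"}
targetable = {"Rv0667", "Rv2245"}

print(sorted(shortlist_targets(proteome, essential, host_homologs,
                               gut_flora_like, targetable)))
# → ['Rv0667', 'Rv2245']
```

Each set difference or intersection stands in for one analysis stage of the pipeline; the real study additionally weighs expression data and phylogenetic profiles.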
Abstract:
This paper presents three methodologies for determining optimum locations and magnitudes of reactive power compensation in power distribution systems. Method I and Method II are suitable for complex distribution systems with a combination of both radial and ring-main feeders and having different voltage levels. Method III is suitable for low-tension, single-voltage-level radial feeders. Method I is based on an iterative scheme with successive power-flow analyses, with the optimization problem formulated and solved using linear programming. Method II and Method III are essentially based on the steady-state performance of distribution systems. These methods are simple to implement and yield satisfactory results comparable with those of Method I. The proposed methods have been applied to a few distribution systems, and results obtained for two typical systems are presented for illustration.
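As an illustration of the kind of optimization involved, the minimal sketch below (hypothetical feeder data, not the paper's Methods I-III) greedily places one fixed-size capacitor on a radial feeder so as to minimize the reactive component of branch losses, sum of R_i * Q_i^2:

```python
# Greedy capacitor-placement sketch on a radial feeder (illustrative only).

def branch_q_flows(loads):
    """Reactive flow on branch i feeds all loads downstream of it (radial feeder)."""
    return [sum(loads[i:]) for i in range(len(loads))]

def reactive_losses(r, q_flows):
    """Reactive component of branch losses: sum of R_i * Q_i^2."""
    return sum(ri * qi * qi for ri, qi in zip(r, q_flows))

def best_capacitor_bus(r, loads, qc):
    """Try the capacitor at every bus and keep the placement saving the most."""
    base = reactive_losses(r, branch_q_flows(loads))
    best_bus, best_saving = None, 0.0
    for bus in range(len(loads)):
        # A capacitor at `bus` offsets qc kvar on every branch upstream of it.
        flows = branch_q_flows(loads)
        trial = [qi - qc if i <= bus else qi for i, qi in enumerate(flows)]
        saving = base - reactive_losses(r, trial)
        if saving > best_saving:
            best_bus, best_saving = bus, saving
    return best_bus, best_saving

r = [0.05, 0.10, 0.08]          # branch resistances (ohm), hypothetical
loads = [200.0, 300.0, 150.0]   # reactive loads at buses 0..2 (kvar)
qc = 150.0                      # capacitor size (kvar)
bus, saving = best_capacitor_bus(r, loads, qc)
```

Method I would instead pose this as a linear program solved alongside successive power-flow analyses; the greedy search here only conveys the flavor of the placement problem.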
Abstract:
Knowledge of the drag force is an important design parameter in aerodynamics. Measurement of aerodynamic forces at hypersonic speeds is a challenge, and ground test facilities such as shock tunnels are usually used to carry out such tests. Accelerometer-based force balances are commonly employed for measuring aerodynamic drag around bodies in hypersonic shock tunnels. In this study, we present an analysis of the effect of model material on the performance of an accelerometer balance used for drag measurement in impulse facilities. From the experimental studies performed on models constructed out of Bakelite HYLEM and aluminum, it is clear that the rigid-body assumption does not hold during the short testing duration available in shock tunnels. This is notwithstanding the fact that the rubber bush used for supporting the model allows unconstrained motion of the model during the short testing time available in the shock tunnel. The vibrations induced in the model on impact loading in the shock tunnel are damped out in the metallic model, resulting in a smooth acceleration signal, while the signal becomes noisy and non-linear when non-isotropic materials like Bakelite HYLEM are used. This also implies that careful analysis and proper data reduction methodologies are necessary for measuring aerodynamic drag for non-metallic models in shock tunnels. The results from drag measurements carried out using a 60-degree half-angle blunt cone are given in the present analysis.
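Under the rigid-body assumption discussed above, drag reduction from an accelerometer record is just Newton's second law applied over the steady test window. The sketch below uses hypothetical numbers, not data from the study:

```python
# Drag from an accelerometer trace, assuming rigid-body motion: D = m * a,
# with `a` taken as the mean acceleration over the steady test window.

def drag_from_accel(mass_kg, accel_samples, window):
    start, end = window
    segment = accel_samples[start:end]
    a_mean = sum(segment) / len(segment)
    return mass_kg * a_mean

# Hypothetical acceleration record (m/s^2); samples 2..6 form the test window.
signal = [0.0, 0.0, 48.0, 52.0, 50.0, 49.5, 50.5, 0.0]
drag = drag_from_accel(1.2, signal, (2, 7))   # 1.2 kg model → 60.0 N
```

For a non-metallic model the signal would be noisy and non-linear, and a simple mean over the window would no longer be an adequate data-reduction step.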
Abstract:
The paper presents a method for the evaluation of the external stability of reinforced soil walls subjected to earthquakes in the framework of the pseudo-dynamic method. The seismic reliability of the wall is evaluated by considering the different possible failure modes, such as sliding along the base, overturning about the toe point of the wall, bearing capacity and the eccentricity of the resultant force. The analysis is performed considering the properties of the reinforced backfill, the foundation soil below the base of the wall, the length of the geosynthetic reinforcement and characteristics of earthquake ground motions such as shear-wave and primary-wave velocities as random variables. The optimum length of reinforcement needed to maintain stability against the four modes of failure by targeting various component reliability indices is obtained. Differences between pseudo-static and pseudo-dynamic methods are clearly highlighted in the paper. A complete analysis of the pseudo-static and pseudo-dynamic methodologies shows that the pseudo-dynamic method results in realistic design values for the length of geosynthetic reinforcement under earthquake conditions.
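A component reliability index of the kind targeted above can be illustrated with a first-order sketch. The load and resistance statistics below are hypothetical and stand in for a single failure mode (e.g. base sliding); the paper's full pseudo-dynamic formulation is considerably richer:

```python
import math

# First-order (FOSM-style) component reliability index for one failure mode,
# with resistance R and load S treated as independent normal variables.

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

# Hypothetical sliding mode: resisting force vs. seismic driving force (kN/m).
beta = reliability_index(mu_r=420.0, sigma_r=40.0, mu_s=300.0, sigma_s=30.0)
print(beta)  # → 2.4
```

In a design loop, the reinforcement length would be increased until each mode's beta meets its target value.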
Abstract:
It is well known that the notions of normal forms and acyclicity capture many practical desirable properties for database schemes. The basic schema design problem is to develop design methodologies that strive toward these ideals. The usual approach is to first normalize the database scheme as far as possible. If the resulting scheme is cyclic, then one tries to transform it into an acyclic scheme. In this paper, we argue in favor of carrying out these two phases of design concurrently. In order to do this efficiently, we need to be able to incrementally analyze the acyclicity status of a database scheme as it is being designed. To this end, we propose the formalism of "binary decompositions". Using this, we characterize design sequences that exactly generate θ-acyclic schemes, for θ = α, β. We then show how our results can be put to use in database design. Finally, we also show that our formalism can be effectively used as a proof tool in dependency theory. We demonstrate its power by showing that it leads to a significant simplification of the proofs of some previous results connecting sets of multivalued dependencies and acyclic join dependencies.
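The acyclicity status mentioned above is classically checked with the GYO reduction, sketched below for α-acyclicity. This is the standard after-the-fact test, not the paper's binary-decomposition formalism:

```python
# GYO (Graham / Yu-Ozsoyoglu) reduction: a database scheme, viewed as a
# hypergraph of attribute sets, is alpha-acyclic iff repeated "ear removal"
# empties it.

def is_alpha_acyclic(scheme):
    edges = [set(e) for e in scheme]
    changed = True
    while changed:
        changed = False
        # Rule 1: delete attributes occurring in exactly one relation scheme.
        all_attrs = [a for e in edges for a in e]
        for e in edges:
            lone = {a for a in e if all_attrs.count(a) == 1}
            if lone:
                e -= lone
                changed = True
        # Rule 2: delete schemes that are empty or contained in another scheme.
        kept = []
        for i, e in enumerate(edges):
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

print(is_alpha_acyclic([{"A", "B"}, {"B", "C"}, {"C", "D"}]))  # chain → True
print(is_alpha_acyclic([{"A", "B"}, {"B", "C"}, {"C", "A"}]))  # 3-cycle → False
```

Incremental analysis during design, as the paper proposes, avoids rerunning a whole-scheme test like this after every decomposition step.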
Abstract:
A measure of the stability of a given epitope is an important parameter in exploring the utility of a desired MAb. It defines the conditions necessary for using MAbs as an investigative tool in several research methodologies and therapeutic protocols. Despite these obvious interests, the lack of simple and rapid assay systems for quantitating MAb-Ag interactions has largely hampered these studies. A single-step MAb Solid-Phase Radioimmunoassay (SS-SPRIA) is described which eliminates errors that may arise with multistep sandwich assays. SS-SPRIA has been used to demonstrate the differential stability of the assembled epitopes on gonadotropins. Differential stability towards specific reagents can be exploited to identify amino acid residues at the epitopic site. Inactivation of an epitopic region is indicative of the presence of the modified group, provided conformational relaxations are not induced by modifications at distant sites. Here we provide evidence to validate these conclusions.
Abstract:
Knowledge of hydrological variables (e.g. soil moisture, evapotranspiration) is of pronounced importance in various applications, including flood control, agricultural production and effective water resources management. These applications require the accurate prediction of hydrological variables, spatially and temporally, within a watershed/basin. Though hydrological models can simulate these variables at the desired resolution (spatial and temporal), they are often validated against variables that are either sparse in resolution (e.g. soil moisture) or averaged over large regions (e.g. runoff). A combination of a distributed hydrological model (DHM) and remote sensing (RS) has the potential to improve resolution. Data assimilation schemes can optimally combine DHM and RS. Retrieval of hydrological variables (e.g. soil moisture) from remote sensing and assimilating them in a hydrological model requires validation of the algorithms using field studies. Here we present a review of methodologies developed to assimilate RS in DHM and demonstrate the application for soil moisture in a small experimental watershed in south India.
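The assimilation step can be illustrated with a minimal scalar Kalman-style update; this is a generic sketch of how such schemes blend a model forecast with an observation, not the specific scheme reviewed, and all numbers are hypothetical:

```python
# Scalar Kalman-style update: blend the DHM soil-moisture forecast with a
# remotely sensed retrieval, weighted by their respective error variances.

def assimilate(model_sm, model_var, obs_sm, obs_var):
    gain = model_var / (model_var + obs_var)           # Kalman gain
    analysis = model_sm + gain * (obs_sm - model_sm)   # updated state
    analysis_var = (1.0 - gain) * model_var            # reduced uncertainty
    return analysis, analysis_var

# Hypothetical volumetric soil moisture (m^3/m^3) and error variances.
sm, var = assimilate(model_sm=0.25, model_var=0.004, obs_sm=0.31, obs_var=0.002)
print(sm, var)  # analysis lies between forecast and observation
```

Because the observation here is the more certain of the two, the analysis (0.29) is pulled two-thirds of the way toward it, and the analysis variance drops below both inputs.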
Abstract:
Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. Currently, the field is witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combinations, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion on levels of abstraction of biological systems and describes the different modeling methodologies that are available for this purpose. The review then focuses on how such modeling and simulations can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches and perspectives for future development will also be obtained.
Take home message: Systems thinking has now come of age, enabling a `bird's eye view' of the biological systems under study, while at the same time allowing us to `zoom in', where necessary, for a detailed description of individual components. A number of different methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.
Abstract:
The present study deals with the application of cluster analysis, Fuzzy Cluster Analysis (FCA) and Kohonen Artificial Neural Networks (KANN) methods for classification of 159 meteorological stations in India into meteorologically homogeneous groups. Eight parameters, namely latitude, longitude, elevation, average temperature, humidity, wind speed, sunshine hours and solar radiation, are considered as the classification criteria for grouping. The optimal number of groups is determined as 14 based on the Davies-Bouldin index approach. It is observed that the FCA approach performed better than the other two methodologies for the present study.
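The Davies-Bouldin index used above to pick the number of groups can be computed as sketched below, on hypothetical 2-D data rather than the eight-parameter station dataset:

```python
import math

# Davies-Bouldin index for a given grouping; across candidate clusterings,
# the optimal number of groups is the k that minimizes this index.

def davies_bouldin(points, labels):
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    # Centroid and mean within-cluster scatter for each group.
    cents = {l: tuple(sum(c) / len(members) for c in zip(*members))
             for l, members in clusters.items()}
    scatter = {l: sum(math.dist(p, cents[l]) for p in members) / len(members)
               for l, members in clusters.items()}
    ids = list(clusters)
    # For each cluster, its worst-case similarity ratio to any other cluster.
    db = sum(max((scatter[i] + scatter[j]) / math.dist(cents[i], cents[j])
                 for j in ids if j != i)
             for i in ids)
    return db / len(ids)

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(round(davies_bouldin(pts, [0, 0, 1, 1]), 3))  # → 0.071 (well separated)
```

Lower values indicate compact, well-separated groups; the study found k = 14 minimized the index for the 159 stations.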
Abstract:
With the increased utilization of advanced composites in strategic industries, the concept of Structural Health Monitoring (SHM), with its inherent advantages, is gaining ground over the conventional methods of NDE and NDI. The most attractive feature of this concept is on-line evaluation using embedded sensors. Consequently, the development of methodologies, along with the identification of appropriate sensors such as PVDF films, becomes the key to exploiting the new concept. Of the methods used for on-line evaluation, acoustic emission has been the most effective. Thus, the Acoustic Emission (AE) generated during static tensile loading of glass fiber reinforced plastic composites was monitored using a polyvinylidene fluoride (PVDF) film sensor. The frequency response of the film sensor was obtained with pencil-lead breakage tests to choose the appropriate band of operation. The specimens considered for the experiments were chosen to characterize the differences in the operation of the failure mechanisms through AE parametric analysis. The results of the investigations, characterized using AE parameters, indicate that a PVDF film sensor is effective as an AE sensor for on-line structural health monitoring.
Abstract:
In positron emission tomography (PET), image reconstruction is a demanding problem. Since PET image reconstruction is an ill-posed inverse problem, new methodologies need to be developed. Although previous studies show that the incorporation of spatial and median priors improves image quality, image artifacts such as over-smoothing and streaking are evident in the reconstructed image. In this work, we use a simple yet powerful technique to tackle the PET image reconstruction problem. The proposed technique is based on the integration of a Bayesian approach with a finite impulse response (FIR) filter. A FIR filter is designed whose coefficients are determined based on the surface diffusion model. The resulting reconstructed image is iteratively filtered and fed back to obtain the new estimate. Experiments are performed on a simulated PET system. The results show that the proposed approach is better than the recently proposed MRP algorithm in terms of image quality and normalized mean square error.
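A toy 1-D version of the filter-in-the-loop idea can be sketched as follows: an MLEM-style update whose estimate is smoothed by a small FIR filter and fed back each iteration. The system matrix, filter taps and data below are all hypothetical; the paper's actual coefficients come from the surface diffusion model, not a fixed smoothing kernel:

```python
# Toy 1-D reconstruction with a FIR filter in the iteration loop.

def forward(a, x):                       # y = A x
    return [sum(a[i][j] * x[j] for j in range(len(x))) for i in range(len(a))]

def mlem_fir(a, y, n_iter=50, taps=(0.25, 0.5, 0.25)):
    n = len(a[0])
    col_sum = [sum(a[i][j] for i in range(len(a))) for j in range(n)]
    x = [sum(y) / n] * n                 # flat initial estimate
    for _ in range(n_iter):
        yhat = forward(a, x)
        # MLEM multiplicative update (keeps the estimate non-negative).
        x = [x[j] / col_sum[j] *
             sum(a[i][j] * y[i] / yhat[i] for i in range(len(a)))
             for j in range(n)]
        # FIR smoothing, clamped and renormalized at the boundaries, fed back.
        sm = []
        for j in range(n):
            w = [(taps[k + 1], x[j + k]) for k in (-1, 0, 1) if 0 <= j + k < n]
            sm.append(sum(t * v for t, v in w) / sum(t for t, _ in w))
        x = sm
    return x

# Hypothetical 5-pixel system: each detector blurs neighbouring pixels.
A = [[0.5 if i == j else 0.25 if abs(i - j) == 1 else 0.0
      for j in range(5)] for i in range(5)]
true_x = [1.0, 2.0, 4.0, 2.0, 1.0]
y = forward(A, true_x)
estimate = mlem_fir(A, y)
```

Feeding the filtered estimate back each iteration regularizes the ill-posed inversion, at the cost of some bias toward smooth solutions, which is why the choice of filter coefficients matters.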