10 results for stacking faults

at Cochin University of Science and Technology


Relevance:

10.00%

Publisher:

Abstract:

The purpose of the present study is to understand the surface deformation associated with the Killari and Wadakkancheri earthquakes and to examine whether there is any evidence of the occurrence of paleo-earthquakes in this region or its vicinity. The study is an attempt to characterize active tectonic structures at two sites within peninsular India: the sites of the 1993 Killari (Latur) (Mb 6.3) and 1994 Wadakkancheri (M 4.3) earthquakes in the Precambrian shield. The main objectives are to isolate structures related to active tectonism, constrain the style of near-surface deformation and identify previous events by interpreting the deformational features. The study indicates the existence of a NW-SE trending pre-existing fault passing through the epicentral area of the 1993 Killari earthquake. It presents the salient features obtained during the field investigations in and around the rupture zone; details of the mapping of the scarp, trenching and shallow drilling are discussed. It also presents the geologic and tectonic settings of the Wadakkancheri area and the local seismicity, the interpretation of remote sensing data and a detailed geomorphic analysis. Quantitative geomorphic analysis around the epicenter of the Wadakkancheri earthquake indicates neotectonic rejuvenation. Evaluation of remote sensing data shows distinct linear features, including a potentially active WNW-ESE trending fault within the Precambrian shear zone. The study concludes that earthquakes in the shield area are mostly associated with discrete faults developed in association with pre-existing shear zones or structurally weak zones.

Relevance:

10.00%

Publisher:

Abstract:

The mononuclear cobalt(II) complex [CoL2]·H2O (where HL is quinoxaline-2-carboxalidine-2-amino-5-methylphenol) has been prepared and characterized by elemental analysis, conductivity measurements, IR and UV-Vis spectroscopy, TG-DTA, and X-ray structure determination. The crystallographic study shows that the cobalt(II) centre is distorted octahedral, with each tridentate NNO Schiff base in a cis arrangement. The crystal exhibits a 2-D polymeric structure parallel to the (010) plane, formed by O-H···N and O-H···O intermolecular hydrogen bonds and π-stacking interactions, as a racemic mixture of optical enantiomers. The ligand is a Schiff base derived from quinoxaline-2-carboxaldehyde.

Relevance:

10.00%

Publisher:

Abstract:

The Schiff base compounds N,N′-bis[(E)-quinoxalin-2-ylmethylidene]propane-1,3-diamine, C21H18N6, (I), and N,N′-bis[(E)-quinoxalin-2-ylmethylidene]butane-1,4-diamine, C22H20N6, (II), crystallize in the monoclinic crystal system. Both molecules have crystallographically imposed symmetry: compound (I) is located on a crystallographic twofold axis and (II) on an inversion centre. The molecular conformations of these crystal structures are stabilized by aromatic π-stacking interactions.

Relevance:

10.00%

Publisher:

Abstract:

Sharing of information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron.

The investigations are focused on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) system, a distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports.

Most distributed systems of the nature of ECRS possess a "fragile architecture" that makes them prone to collapse when minor faults occur. This is resolved with the help of the proposed penta-tier architecture, which employs five different technologies at the different tiers of the architecture. The results of the experiments conducted and their analysis show that such an architecture helps keep the different components of the software intact and insulated from internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary.

The other empirical study was to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were then applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and delivers the information required to satisfy the need of the information discoverer by utilizing the documents available at its disposal (as its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with the multiple infotron dictionaries maintained in the system.

In order to demonstrate the working of the IDS and to discover information without modification of a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action and shows that any legacy system can be augmented effectively with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.
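As an illustration of the infotron idea described above, the following is a minimal sketch under a deliberately simplified reading: an infotron is a named clue mapped to the terms that signal it in a document, an infotron dictionary is a collection of such entries, and discovery ranks documents by the clue infotrons they exhibit. The class and function names (InfotronDictionary, discover) and the election-flavoured sample entries are hypothetical, not the thesis's actual data structures.

```python
# Minimal sketch of an "infotron dictionary" and discovery step; names and entries
# are illustrative assumptions, not the thesis's actual design.
from collections import Counter

class InfotronDictionary:
    def __init__(self, entries):
        # entries: infotron name -> set of indicative terms
        self.entries = {name: {t.lower() for t in terms} for name, terms in entries.items()}

    def match(self, text):
        """Return the infotrons whose indicative terms occur in the text."""
        words = set(text.lower().split())
        return [name for name, terms in self.entries.items() if terms & words]

def discover(clues, documents, dictionary):
    """Rank documents by how many of the clue infotrons they exhibit."""
    scores = Counter()
    for doc_id, text in documents.items():
        matched = set(dictionary.match(text))
        scores[doc_id] = len(matched & set(clues))
    return scores.most_common()

if __name__ == "__main__":
    election_dictionary = InfotronDictionary({
        "candidate": {"candidate", "nominee"},
        "constituency": {"constituency", "district"},
        "tally": {"votes", "count", "tally"},
    })
    docs = {
        "doc1": "Vote count per constituency and candidate",
        "doc2": "Library circulation report",
    }
    print(discover(["candidate", "tally"], docs, election_dictionary))  # doc1 ranks first
```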

Relevance:

10.00%

Publisher:

Abstract:

Drainage basins are durable geomorphic features that provide insights into the long-term evolution of the landscape. River basin geometry develops in response to the nature and distribution of uplift and subsidence, the spatial arrangement of lineaments (faults and joints), the relative resistance of different rock types, and climatically influenced hydrological parameters. Developing a drainage basin evolution history requires an understanding of the physiography, drainage patterns, geomorphic features and their structural control, and erosion status. The present study records evidence of active tectonism found to be responsible for the present-day geomorphic setup of the study area since the evolution of the Western Ghats. A model was developed to explain the evolution of the Chaliar River drainage basin based on detailed interpretation of morphometry and the genesis of landforms, with special emphasis on tectonic geomorphic indices and markers.
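The abstract emphasizes tectonic geomorphic indices without listing them; as a purely illustrative sketch, the snippet below computes two widely used ones, the stream-length gradient index SL = (ΔH/ΔL)·L and the hypsometric integral HI = (Hmean − Hmin)/(Hmax − Hmin), on toy numbers. These particular indices and the function names are assumptions for illustration, not necessarily those used in the thesis.

```python
# Illustrative tectonic geomorphic indices on toy data; not the thesis's actual workflow.
import numpy as np

def stream_length_gradient(elev_upstream, elev_downstream, reach_length, dist_from_divide):
    """SL index for one channel reach: (drop / reach length) * distance from divide (all in metres)."""
    gradient = (elev_upstream - elev_downstream) / reach_length
    return gradient * dist_from_divide

def hypsometric_integral(elevations):
    """Approximate HI of a basin from a sample of elevation values (e.g. DEM cells)."""
    e = np.asarray(elevations, dtype=float)
    return (e.mean() - e.min()) / (e.max() - e.min())

if __name__ == "__main__":
    print(stream_length_gradient(120.0, 95.0, 500.0, 4000.0))    # SL of a single reach
    print(hypsometric_integral([40, 55, 70, 90, 120, 180, 240]))  # HI of a toy basin
```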

Relevance:

10.00%

Publisher:

Abstract:

Development of organic molecules that exhibit selective interactions with different biomolecules has immense significance in biochemical and medicinal applications. In this context, our main objective has been to design a few novel functionalized molecules that can selectively bind and recognize nucleotides and DNA in aqueous medium through non-covalent interactions. Our strategy was to design novel cyclophane receptor systems based on the anthracene chromophore linked through different bridging moieties and spacer groups. It was proposed that such systems would have a rigid structure with a well-defined cavity, wherein the aromatic chromophore can undergo π-stacking interactions with guest molecules. The viologen and imidazolium moieties were chosen as bridging units, since such groups could, in principle, enhance the solubility of these derivatives in aqueous medium as well as stabilize the inclusion complexes through electrostatic interactions. We synthesized a series of water-soluble, novel functionalized cyclophanes and investigated their interactions with nucleotides, DNA and oligonucleotides through photophysical, chiroptical, electrochemical and NMR techniques. The results indicate that these systems have favorable photophysical properties and exhibit selective interactions with ATP, GTP and DNA involving electrostatic, hydrophobic and π-stacking interactions inside the cavity, and hence can potentially be used as probes in biology.

Relevance:

10.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly augment software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making the early detection of software bugs that are otherwise hard to detect more effective, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. Incorrect sequences of machine-code patterns are identified using slicing techniques on the control flow graph generated from the machine code.

An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly from machine-code patterns, which drastically reduces state-space creation and contributes to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
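As a rough illustration of the redundant bank-switching detection described above, the sketch below tracks the active memory bank along a straight-line instruction sequence and flags bank-select instructions that re-select the already active bank. The abstract ("SELECT_BANK") instruction encoding, the function name and the toy program are assumptions for illustration; the thesis works on real PIC16F87X machine code through a control flow graph and a relation matrix, which this sketch does not model.

```python
# Toy redundant bank-select detector over a linear instruction list; real PIC16F87X
# bank selects are BSF/BCF on the STATUS register RP0/RP1 bits, abstracted away here.
def find_redundant_bank_selects(instructions, initial_bank=0):
    """Flag bank-select instructions that re-select the already active bank."""
    active = initial_bank
    redundant = []
    for idx, (op, arg) in enumerate(instructions):
        if op == "SELECT_BANK":
            if arg == active:
                redundant.append(idx)  # no state change: candidate for removal
            active = arg               # record the newly active bank
    return redundant

if __name__ == "__main__":
    program = [
        ("SELECT_BANK", 1),
        ("OTHER", "movwf TRISB"),
        ("SELECT_BANK", 1),            # redundant: bank 1 is already active
        ("OTHER", "movwf TRISC"),
        ("SELECT_BANK", 0),
        ("OTHER", "movwf PORTB"),
    ]
    print(find_redundant_bank_selects(program))  # -> [2]
```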

Relevance:

10.00%

Publisher:

Abstract:

Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is concerned with finding interesting hidden patterns in large historical databases. For example, from a sales database one can discover a pattern such as "people who buy magazines tend to buy newspapers also"; from the sales point of view, the advantage is that these items can then be placed together in the shop to increase sales. In this research work, data mining is applied to the domain of placement chance prediction, since making a wise career decision is crucial for any student. In India, technical manpower analysis is carried out by the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology, where placement information is collected by sending postal questionnaires to graduating students on a regular basis. From the raw data available at the nodal centre, a history database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex and engineering branch, and for each such combination of attributes the corresponding placement chance is computed and stored. From these data, various popular data mining models are built and tested; these models can be used to predict the most suitable branch for a new student with a given combination of the above criteria. A detailed performance comparison of the various data mining models is also carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. Strategies to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, are also proposed. Finally, this work puts forward a new data mining algorithm, C4.5*stat, for numeric data sets, which is shown to have competitive accuracy on the standard UCI benchmark data sets, and proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work spans all four dimensions of a typical data mining study: application to a domain, development of classifier models, optimization, and ensemble methods.
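As a hedged sketch of the hybrid stacking ensemble idea, the snippet below combines several base classifiers through a meta-learner using scikit-learn's StackingClassifier. The choice of base models, the logistic-regression meta-learner and the synthetic data are illustrative assumptions, not the thesis's configuration or the NTMIS data; in particular, scikit-learn's DecisionTreeClassifier (CART) stands in for C4.5 here.

```python
# Illustrative stacking ensemble on synthetic data standing in for student records.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic placeholder for records (rank range, reservation, sector, sex, branch, ...).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=6)),   # CART as a stand-in for C4.5
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),   # meta-learner over base predictions
    cv=5,
)
stack.fit(X_train, y_train)
print("stacking accuracy:", stack.score(X_test, y_test))
```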

Relevance:

10.00%

Publisher:

Abstract:

Light-emitting polymers (LEPs) have drawn considerable attention because of their numerous potential applications in the field of optoelectronic devices. To date, a large number of organic molecules and polymers have been designed, and devices have been fabricated based on these materials. Optoelectronic devices such as polymer light-emitting diodes (PLEDs) have attracted widespread research attention owing to their superior properties, such as flexibility, lower operating power, colour tunability and the possibility of obtaining large-area coatings. PLEDs can be utilized for the fabrication of flat panel displays and as replacements for incandescent lamps. The internal efficiency of an LED depends mainly on the electroluminescent behaviour of the emissive polymer, such as its quantum efficiency and the luminance-voltage profile of the LED, and on the balanced injection of electrons and holes. Poly(p-phenylenevinylene) (PPV) and regioregular polythiophenes are interesting electroactive polymers which exhibit good electrical conductivity, electroluminescent activity and good film-forming properties. A combination of red, green and blue emitting polymers is necessary for the generation of white light, which can replace high-energy-consuming incandescent lamps. Most of these polymers, however, show very low solubility and stability and poor mechanical properties. Many of these light-emitting polymers are based on conjugated extended chains of alternating phenyl and vinyl units, and the intra-chain or inter-chain interactions within these polymer chains can change the emitted colour. An effective way of synthesizing polymers with reduced π-stacking, high solubility, high thermal stability and high light-emitting efficiency therefore remains a challenge for chemists, and new copolymers have to be designed to address these issues. Hence, in the present work, a few novel copolymers with very high thermal stability, excellent solubility, intense light emission (blue, cyan and green) and high glass transition temperatures have been investigated for their suitability as emissive layers in polymer light-emitting diodes.

Relevance:

10.00%

Publisher:

Abstract:

Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs (siRNAs). This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially in AIDS, neurodegenerative diseases, cholesterol regulation and cancer in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. While designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So before carrying out gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods have already been developed that consider both the inhibition efficiency and the off-target possibility of an siRNA against a gene, but only a few achieve good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficacy of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs, which may be useful in the area of gene silencing and drug design targeting tumor development. This study investigates the currently available siRNA prediction approaches and devises a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in devising an improved solution. The approaches proposed in this study extend some of the well-performing previous state-of-the-art techniques by incorporating machine learning and statistical approaches and thermodynamic features, such as whole stacking energy, to improve the prediction accuracy, inhibition efficiency, sensitivity and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, classification is used to decide whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency to degrade target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences, and the validations are conducted using the Pearson Correlation Coefficient, the Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity and specificity. It is found that OpsiD is capable of predicting the inhibition capacity of an siRNA against a target mRNA with improved results over the state-of-the-art techniques, and the study also clarifies the influence of whole stacking energy on siRNA efficiency.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since efforts have been taken to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects during gene silencing can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing. The approach may also be found useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
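As an illustration of the SVM classification step mentioned above, the sketch below encodes siRNA guide strands as a few simple numeric features and trains a support vector classifier to label them efficient or inefficient. The feature set (GC content plus a few position-specific nucleotide indicators, loosely inspired by common rational-design rules) and the randomly labelled toy data are assumptions for illustration only, not the thesis's actual features, thermodynamic descriptors or training data.

```python
# Toy SVM classifier for siRNA efficiency; features and labels are synthetic placeholders.
import random
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

BASES = "AUGC"

def features(sirna):
    """Encode a 19-nt guide strand as [GC fraction, A at pos 1, U at pos 10, G/C at pos 19]."""
    gc = (sirna.count("G") + sirna.count("C")) / len(sirna)
    return [gc,
            1.0 if sirna[0] == "A" else 0.0,
            1.0 if sirna[9] == "U" else 0.0,
            1.0 if sirna[18] in "GC" else 0.0]

# Synthetic placeholder data set: random 19-mers with random efficiency labels.
random.seed(0)
seqs = ["".join(random.choice(BASES) for _ in range(19)) for _ in range(400)]
X = np.array([features(s) for s in seqs])
y = np.array([random.randint(0, 1) for _ in seqs])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```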