917 results for STACKING-FAULTS
Abstract:
Kissing-loop RNA/RNA interactions are tertiary-structure elements that often play key functional and structural roles in RNAs. Indeed, this type of interaction is crucial for several RNA-dependent processes, notably translation initiation, antisense RNA recognition, and retroviral genome dimerization. Kissing-loop interactions are also important for RNA folding, since they establish long-range contacts between different RNAs or between distant domains of the same RNA. This type of interaction also stabilizes the complex structures of functional RNAs such as tRNAs, riboswitches, and ribozymes. Like other functional RNAs, the Neurospora VS ribozyme contains an important kissing-loop interaction. It is involved in substrate recognition and forms between stem-loop I (SLI) of the substrate and stem-loop V (SLV) of the catalytic domain. Biochemical studies have shown that the magnesium-dependent I/V kissing-loop interaction involves three Watson-Crick (W-C) base pairs. Moreover, this interaction is associated with a rearrangement of the substrate structure, converting it from an inactive ("unshifted") conformation to an active ("shifted") conformation. The work presented in this thesis is a structural and thermodynamic characterization of the I/V kissing-loop interaction of the VS ribozyme, reconstituted from RNA fragments representing stem-loops I and V derived from the VS ribozyme (SLI and SLV).
This characterization was carried out mainly by nuclear magnetic resonance (NMR) spectroscopy and isothermal titration calorimetry (ITC), using different SLI/SLV complexes in which the SLV RNA is common to all complexes while different variants of the SLI RNA were used, in either a shiftable or a preshifted conformation. The ITC data demonstrated that, at a saturating magnesium concentration, the affinity of a preshifted SLI substrate for SLV is extremely high, making this interaction more stable than predicted for an equivalent RNA duplex. Moreover, the ITC study shows that preshifted SLI RNAs have a higher affinity for SLV than shiftable SLI RNAs, which allowed calculation of the energetic cost associated with the structural rearrangement of the substrate. In addition to confirming the formation of the three predicted W-C base pairs at the I/V junction, the NMR studies provided direct structural evidence of the rearrangement of shiftable SLI substrates in the presence of magnesium and the SLV RNA. The NMR structure of a high-affinity SLI/SLV complex shows that the terminal loops of SLI and SLV each form a U-turn motif, which facilitates intermolecular W-C base pairing. Several other interactions were defined at the I/V interface, notably base triples and base stacking. These interactions contribute to a structure with continuous stacking, that is, one that propagates from the center of the interaction to the ends of the SLI and SLV stems. These NMR studies thus provide a better structural understanding of the exceptional stability of the I/V kissing-loop interaction and lead to a kinetic model of substrate activation by the VS ribozyme.
Taken together, the ITC and NMR data suggest that the surprising stability of the I/V interaction is probably explained by a combination of factors, including the U-turn motifs, the presence of a nucleotide excluded from the SLV loop (U700), the binding of magnesium cations, and the continuous base stacking at the I/V junction.
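The rearrangement cost mentioned above can be extracted from ITC data via the standard relation ΔG = −RT·ln Ka: the difference between the binding free energies of a shiftable and a preshifted substrate gives the energetic penalty of the unshifted-to-shifted transition. A minimal sketch, using hypothetical association constants rather than the measured values:

```python
from math import log

R = 1.987e-3  # gas constant, kcal/(mol·K)

def delta_g(ka: float, temp_k: float = 298.15) -> float:
    """Standard binding free energy (kcal/mol) from an ITC association constant."""
    return -R * temp_k * log(ka)

# Hypothetical association constants for illustration only (not the thesis values):
ka_preshifted = 1e9   # preshifted SLI binds SLV tightly
ka_shiftable  = 1e7   # shiftable SLI must pay the rearrangement cost first

dg_pre = delta_g(ka_preshifted)
dg_shift = delta_g(ka_shiftable)

# The energetic cost of the unshifted -> shifted rearrangement is the difference:
ddg_rearrangement = dg_shift - dg_pre
print(f"dG(preshifted) = {dg_pre:.1f} kcal/mol")
print(f"dG(shiftable)  = {dg_shift:.1f} kcal/mol")
print(f"ddG(rearrangement) = {ddg_rearrangement:.1f} kcal/mol")
```

A positive ΔΔG indicates that the shiftable substrate binds less favorably, consistent with part of its binding energy being spent on the conformational change.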
Abstract:
The imidazole motif, a five-membered heterocycle containing two nitrogen atoms and three carbon atoms, exhibits interesting physicochemical properties that make it a compound of choice for several applications. Among these properties, the straightforward functionalization of the two nitrogen atoms to form an imidazolium salt is particularly attractive. These salts are excellent precursors of N-heterocyclic carbenes (NHCs) and are commonly used to synthesize ligands for organometallic catalysis. This family of compounds also possesses anionophoric properties that enable their use in anion transport. The present work reports results in both of these areas, namely catalysis and anion transport. First, the properties of imidazole derivatives are exploited to form a palladium-NHC catalyst used to catalyze the Suzuki-Miyaura reaction in aqueous medium. The efficiency of this catalyst was demonstrated using as little as 0.001 mol% for a quantitative yield. This is the first report of a heterogeneous and recyclable process in water using a Pd-NHC catalyst that requires no additive or co-solvent. Recycling was demonstrated for up to 10 cycles without apparent loss of catalyst activity. Second, several imidazolium salts were tested as transmembrane transporters of chloride anions, and the intrinsic properties that make them efficient transporters were elucidated. The parameters that appear to affect anion transport the most are the choice of counter-anion of the imidazolium salt and its propensity to self-assemble through successive π-stacking interactions.
Furthermore, the transport properties were elucidated, showing the formation of transmembrane channels that allow not only the diffusion of Cl− ions but also the transport of protons and Ca2+ ions. The interest of this research lies primarily in the treatment of various pathologies originating in dysfunctional anion transport. However, the bactericidal properties of the imidazolium salts used were also identified in the final experiments.
Abstract:
Vues imprenables is a narrative composed of the successive monologues of six characters spending a weekend in a luxury hotel. Through the discursive detours each character takes, and the textual mechanisms he or she uses to avoid speaking of, and confronting, the reminiscences of past faults, the question of seeing and blindness becomes closely tied to that of the passage to the act. What crimes have these men and women committed? Are they capable of truly "seeing themselves"? What bearing does the gaze have on the act they previously committed? Drawing inspiration from, among other things, the board game Clue, the Ten Commandments, and the aesthetics of the film The Shining, Vues imprenables interrogates the notion of repentance, seeking to determine how far the "veil" of speech can conceal certain acts, and to what extent seeing can prove elusive. The essay entitled "Paradoxes du voir et de l'aveuglement dans Ceux d'à côté de Laurent Mauvignier" also weaves links with Vues imprenables: by questioning the limits and possibilities of seeing in Mauvignier's novel, it analyzes how the advent of sight in this narrative always portends its possible loss, and also in what ways the criminal act becomes "blind" at the very moment it is perpetrated. Revisiting some of the greatest Greek myths, such as those of Oedipus, Tiresias, and Gorgo, this essay studies in particular the figure of the alter ego, this "self alongside," at times guilty and at times witness, that haunts Mauvignier's narrative, and it proposes a reflection on the paradoxes of the relation to the seen based on the work of Hélène Cixous, Georges Didi-Huberman, J.-B. Pontalis, and Maurice Merleau-Ponty.
Abstract:
The Neurospora VS ribozyme catalyzes cleavage and ligation reactions of a specific phosphodiester bond that are essential to its replication cycle. It is formed of six helical regions (I to VI), divided into two domains: the substrate (SLI) and the catalytic domain (stems II to VI). The latter comprises two three-way junctions that allow specific recognition of the stem-loop substrate. This unique recognition mode could be exploited to target folded RNAs for various applications. Although the VS ribozyme has been exhaustively characterized biochemically, no high-resolution structure of the complete ribozyme has yet been published, which limits understanding of the mechanisms underlying its function. Previously, a divide-and-conquer approach was initiated to study the structures of important VS ribozyme subdomains by nuclear magnetic resonance (NMR) spectroscopy, but it remained to be completed. In this thesis, the structures of the A730 loop and of the III-IV-V and II-III-VI junctions were determined by heteronuclear NMR spectroscopy. In addition, an NMR approach was developed for localizing divalent ions, and various isotopic labeling strategies were implemented for the study of larger RNAs. The NMR structures of the A730 loop and of the two three-way junctions reveal that these subdomains are well defined, that they are formed of several recurrent structural elements (U-turn, S-turn, base triples, and coaxial stacking), and that they contain several metal-binding sites. Furthermore, a model of the VS ribozyme active site was built on the basis of similarities identified between the active sites of the VS and hairpin ribozymes. Overall, these studies contribute significantly to the understanding of the global architecture of the VS ribozyme.
They will also make it possible to build a high-resolution model of the VS ribozyme while fostering future engineering studies.
Abstract:
The present research analyzes nodal-governance arrangements for local security in France, whereas the paradigm emerged and developed in strongly decentralized Anglo-Saxon countries. In France, nodal-governance arrangements resemble a dialogue between central and local government far more than one between the public and private sectors. The research thus identifies the characteristics of nodal governance at the heart of the partnership arrangements for local security, supported by the Contrat Local de Sécurité (CLS), the Conseil Local de Sécurité et de Prévention de la Délinquance (CLSPD), and the Groupe Local de Traitement de la Délinquance (GLTD). The research also identifies the strategies by which the State de-centers itself and transfers the production of security to a diversity of local actors, including mayors and municipal services. A diversity of local public security policies of varying relevance then emerges. The first lesson of this research is the importance of the role played by the super-structural node, which we call the super-node, and which brings together the mayor or the local elected official responsible for security, the head of the State police, the head of the municipal police, and the State's representative. Within the nodal-governance arrangement, this informal group generates the collective dynamic that brings together both the producers and the consumers of local security gravitating within the local security network. Some forty qualitative interviews also reveal that the justice system, a producer of security just as private security or social mediation can be, appears more distant than the study of the regulatory texts organizing the partnership might have suggested.
Social housing bodies, transport operators, and the national education system clearly appear as important but peripheral security actors, joining this "extended family" of local security. The second lesson concerns the actual functioning of the nodal arrangement and of the super-node, the research identifying the resources pooled by all the nodes. It also identifies the mechanisms for dividing tasks among the different actors, particularly between the two police organizations, State and municipal, which work as much in competition as in complementarity. This research also explores the role played by information in the functioning of the super-node, as well as the importance of trust in the interpersonal relations of the nodes' representatives within the super-node. Finally, the study puts into perspective the limits of the current nodal-governance arrangement: the proven lack of effective tools to properly inform the super-node about violence phenomena and to evaluate the arrangement's efficiency. This also calls into question the autonomy of nodal-governance arrangements, since trust can open the way to deviance, and collegiality to a lack of traceability of responsibility. The divide with civil society appears clearly and does not facilitate oversight of a mode of security production that is developing in parallel with the traditional arrangements of local democracy.
Abstract:
The purpose of the present study is to understand the surface deformation associated with the Killari and Wadakkancheri earthquakes and to examine whether there is any evidence of paleo-earthquakes in this region or its vicinity. The study attempts to characterize active tectonic structures in two areas within peninsular India: the sites of the 1993 Killari (Latur) (Mb 6.3) and 1994 Wadakkancheri (M 4.3) earthquakes in the Precambrian shield. The main objectives are to isolate structures related to active tectonism, constrain the style of near-surface deformation, and identify previous events by interpreting the deformational features. The study indicates the existence of a NW-SE-trending pre-existing fault passing through the epicentral area of the 1993 Killari earthquake. It presents the salient features obtained during field investigations in and around the rupture zone; details of the mapping of the scarp, trenching, and shallow drilling are discussed. It also presents the geologic and tectonic settings of the Wadakkancheri area and the local seismicity, along with an interpretation of remote sensing data and a detailed geomorphic analysis. Quantitative geomorphic analysis around the epicenter of the Wadakkancheri earthquake indicates neotectonic rejuvenation. Evaluation of remote sensing data shows distinct linear features, including a potentially active WNW-ESE-trending fault within the Precambrian shear zone. The study concludes that earthquakes in the shield area are mostly associated with discrete faults developed in association with pre-existing shear zones or structurally weak zones.
Abstract:
The mononuclear cobalt(II) complex [CoL2]·H2O (where HL is quinoxaline-2-carboxalidine-2-amino-5-methylphenol) has been prepared and characterized by elemental analysis, conductivity measurement, IR and UV-Vis spectroscopy, TG-DTA, and X-ray structure determination. The crystallographic study shows that the cobalt(II) centre is distorted octahedral, with each tridentate NNO Schiff base in a cis arrangement. The crystal exhibits a 2-D polymeric structure parallel to the (010) plane, formed by O-H...N and O-H...O intermolecular hydrogen bonds and π-stacking interactions, as a racemic mixture of optical enantiomers. The ligand is a Schiff base derived from quinoxaline-2-carboxaldehyde.
Abstract:
The Schiff base compounds N,N′-bis[(E)-quinoxalin-2-ylmethylidene]propane-1,3-diamine, C21H18N6, (I), and N,N′-bis[(E)-quinoxalin-2-ylmethylidene]butane-1,4-diamine, C22H20N6, (II), crystallize in the monoclinic crystal system. Both molecules have crystallographically imposed symmetry: compound (I) is located on a crystallographic twofold axis, and (II) is located on an inversion centre. The molecular conformations of these crystal structures are stabilized by aromatic π-stacking interactions.
Abstract:
Sharing information with those who need it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that well-organized schemes for retrieval, and also for discovery, are imperative. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigation focuses on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights. This is manifested in the Election Counting and Reporting Software (ECRS) System, an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS normally possess a "fragile architecture" that makes them liable to collapse on the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which contains five different technologies at its different tiers. The results of the experiment conducted, and their analysis, show that such an architecture helps keep the different components of the software insulated from internal or external faults.
The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. Another empirical study was carried out to find which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were then applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and distills the information required to satisfy the need of the information discoverer from the documents available at its disposal (its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with the multiple infotron dictionaries maintained in the system. To demonstrate the working of the IDS, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, yielding an information discovery service. IDLIS demonstrates IDS in action and proves that any legacy system can be effectively augmented with IDS to provide this additional functionality. Possible applications of IDS and the scope for further research in the field are also covered.
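The clue-driven discovery that IDS performs can be loosely illustrated with a dictionary-guided lookup; the sketch below uses a plain inverted index as a stand-in for the infotron dictionary (the thesis's actual data structures are not specified here), with made-up documents and terms:

```python
# Illustrative analogy only: term -> document "dictionary" used to resolve clues.
from collections import defaultdict

documents = {
    "doc1": "election counting report for district one",
    "doc2": "library catalogue of science journals",
    "doc3": "district election results and statutory reports",
}

# Build the dictionary: each term maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def discover(*clues: str) -> set:
    """Return documents matching every clue term (the 'infotron' clues)."""
    sets = [index.get(c, set()) for c in clues]
    return set.intersection(*sets) if sets else set()

print(sorted(discover("election", "district")))  # → ['doc1', 'doc3']
```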
Abstract:
Drainage basins are durable geomorphic features that provide insights into the long-term evolution of the landscape. River basin geometry develops in response to the nature and distribution of uplift and subsidence, the spatial arrangement of lineaments (faults and joints), the relative resistance of different rock types, and climatically influenced hydrological parameters. To develop a drainage-basin evolution history, it is necessary to understand the physiography, drainage patterns, and geomorphic features, along with their structural control and erosion status. The present study records evidence for active tectonic activity found to be responsible for the present-day geomorphic set-up of the study area since the evolution of the Western Ghats. A model was developed to explain the evolution of the Chaliar River drainage basin, based on a detailed interpretation of morphometry and the genesis of landforms, with special emphasis on tectonic geomorphic indices and markers.
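Tectonic geomorphic indices of the kind mentioned above are simple quantitative measures computed from basin data. As one illustrative example (not necessarily among the indices used in the study), the hypsometric integral can be approximated from elevation samples:

```python
def hypsometric_integral(elevations):
    """Approximate hypsometric integral of a basin: (mean - min) / (max - min).
    Values near 1 suggest a youthful, tectonically rejuvenated basin;
    values near 0 suggest an old, deeply eroded one."""
    lo, hi = min(elevations), max(elevations)
    mean = sum(elevations) / len(elevations)
    return (mean - lo) / (hi - lo)

# Hypothetical elevation samples (m) from a basin DEM, for illustration only:
samples = [40, 60, 90, 150, 260, 420, 610, 800]
hi_value = hypsometric_integral(samples)
print(f"Hypsometric integral = {hi_value:.2f}")
```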
Abstract:
Development of organic molecules that exhibit selective interactions with different biomolecules is of immense significance for biochemical and medicinal applications. In this context, our main objective has been to design a few novel functionalized molecules that can selectively bind and recognize nucleotides and DNA in aqueous medium through non-covalent interactions. Our strategy was to design novel cyclophane receptor systems based on the anthracene chromophore linked through different bridging moieties and spacer groups. It was proposed that such systems would have a rigid structure with a well-defined cavity, wherein the aromatic chromophore can undergo π-stacking interactions with guest molecules. The viologen and imidazolium moieties were chosen as bridging units, since such groups could, in principle, enhance the solubility of these derivatives in aqueous medium as well as stabilize the inclusion complexes through electrostatic interactions. We synthesized a series of novel water-soluble functionalized cyclophanes and investigated their interactions with nucleotides, DNA and oligonucleotides through photophysical, chiroptical, electrochemical and NMR techniques. The results indicate that these systems have favorable photophysical properties and exhibit selective interactions with ATP, GTP and DNA, involving electrostatic, hydrophobic and π-stacking interactions inside the cavity, and hence can potentially be used as probes in biology.
Abstract:
Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-find software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thus improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded as propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control-flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state-transition diagram, formed for the active-memory-bank state transitions corresponding to each bank-selection instruction, are used for the detection of redundant code. Instances of code redundancy, based on the stipulated rules for the target processor, are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented; hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool should be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
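The core of the redundant bank-switch elimination described above can be sketched as tracking the active-bank state along one execution path and flagging any bank-select instruction that re-selects the bank already active. The instruction names, register names, and trace below are hypothetical simplifications of PIC-style bank selection, not the tool's actual input format:

```python
def find_redundant_bank_switches(path):
    """path: list of (op, arg) machine-code tuples along one execution path.
    Returns the indices of redundant ('banksel', n) instructions."""
    active_bank = None  # bank state is unknown at path entry
    redundant = []
    for i, (op, arg) in enumerate(path):
        if op == "banksel":
            if arg == active_bank:
                redundant.append(i)  # re-selects the current bank: removable
            active_bank = arg
    return redundant

# Hypothetical instruction trace (generic register names, for illustration):
trace = [
    ("banksel", 0),
    ("movwf", "REG_A"),
    ("banksel", 0),      # redundant: bank 0 is already active
    ("movwf", "REG_B"),
    ("banksel", 1),      # necessary switch
]
print(find_redundant_bank_switches(trace))  # → [2]
```

A full tool would run this per path of the control-flow graph and also reorder data across banks to minimize the necessary switches, which is the optimization part the abstract describes.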
Abstract:
Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is all about finding interesting hidden patterns in a large historical database. As an example, from a sales database one can find an interesting pattern like "people who buy magazines tend to buy newspapers also" using data mining. From the sales point of view, the advantage is that these items can be placed together in the shop to increase sales. In this research work, data mining is applied to the domain of placement-chance prediction, since making a wise career decision is crucial for anybody. In India, technical manpower analysis is carried out by an organization named the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The nodal centre collects placement information by sending postal questionnaires to graduated students on a regular basis. From this raw data available at the nodal centre, a historical database was prepared. Each record in this database includes entrance-rank range, reservation, sector, sex, and a particular engineering branch. For each such combination of attributes from the historical database of student records, the corresponding placement chance is computed and stored in the database. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student with one of the above combinations of criteria.
A detailed performance comparison of the various data mining models is also carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. A strategy to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, is also proposed. Finally, this research work puts forward a new data mining algorithm, namely C4.5*stat, for numeric data sets, which has been shown to have competitive accuracy on the standard benchmarking (UCI) data sets. It also proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work passes through all four dimensions of a typical data mining research work: application to a domain, development of classifier models, optimization, and ensemble methods.
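The magazines-and-newspapers rule quoted above can be made concrete with the two standard association-rule measures, support and confidence, computed over a toy transaction database (hypothetical data for illustration):

```python
# Toy transaction database: each set is one customer's basket.
transactions = [
    {"magazine", "newspaper", "bread"},
    {"magazine", "newspaper"},
    {"newspaper", "milk"},
    {"magazine", "newspaper", "milk"},
    {"bread", "milk"},
]

def support(itemset, db):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """P(consequent | antecedent): support of both over support of antecedent."""
    return support(antecedent | consequent, db) / support(antecedent, db)

# Rule: magazine -> newspaper
sup = support({"magazine", "newspaper"}, transactions)
conf = confidence({"magazine"}, {"newspaper"}, transactions)
print(f"support = {sup:.2f}, confidence = {conf:.2f}")  # support = 0.60, confidence = 1.00
```

High support means the pair occurs often; confidence of 1.0 means every magazine buyer in this toy data also bought a newspaper, which is what makes the rule actionable for shelf placement.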
Abstract:
Light emitting polymers (LEP) have drawn considerable attention because of their numerous potential applications in the field of optoelectronic devices. Till date, a large number of organic molecules and polymers have been designed and devices fabricated based on these materials. Optoelectronic devices like polymer light emitting diodes (PLED) have attracted wide-spread research attention owing to their superior properties like flexibility, lower operational power, colour tunability and possibility of obtaining large area coatings. PLEDs can be utilized for the fabrication of flat panel displays and as replacements for incandescent lamps. The internal efficiency of the LEDs mainly depends on the electroluminescent efficiency of the emissive polymer such as quantum efficiency, luminance-voltage profile of LED and the balanced injection of electrons and holes. Poly (p-phenylenevinylene) (PPV) and regio-regular polythiophenes are interesting electro-active polymers which exhibit good electrical conductivity, electroluminescent activity and high film-forming properties. A combination of Red, Green and Blue emitting polymers is necessary for the generation of white light which can replace the high energy consuming incandescent lamps. Most of these polymers show very low solubility, stability and poor mechanical properties. Many of these light emitting polymers are based on conjugated extended chains of alternating phenyl and vinyl units. The intra-chain or inter-chain interactions within these polymer chains can change the emitted colour. Therefore an effective way of synthesizing polymers with reduced π-stacking, high solubility, high thermal stability and high light-emitting efficiency is still a challenge for chemists. New copolymers have to be effectively designed so as to solve these issues. 
Hence, in the present work, the suitability of a few novel copolymers with very high thermal stability, excellent solubility, intense light emission (blue, cyan and green) and high glass transition temperatures has been investigated for use as emissive layers in polymer light emitting diodes.
Abstract:
Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs (siRNAs). This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, notably in AIDS, neurodegenerative diseases, cholesterol regulation and cancer; it has been demonstrated in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand gene silencing mediated by exogenous siRNA. Designing efficient exogenous siRNA sequences is challenging for several reasons. Target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts through sequence similarity. Before performing gene silencing with siRNAs, it is therefore essential to analyze their off-target effects in addition to their inhibition efficiency against the intended target. Designing exogenous siRNA with good knock-down efficiency and target specificity is thus an important problem to address. Some methods have already been developed that consider both the inhibition efficiency and the off-target potential of an siRNA against a gene, but only a few achieve good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods to optimize siRNA efficacy, in terms of both inhibition capacity and off-target possibility, against target mRNAs, which may be useful for gene silencing and for drug design targeting tumor development. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to the problem of siRNA efficacy, covering both inhibition capacity and off-target possibility.
The strengths and limitations of the available approaches are investigated and taken into consideration in developing an improved solution. The approaches proposed in this study extend some of the best-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches, together with thermodynamic features such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. The SVM model classifies whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, optimizes the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes, with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences. The validations are conducted using the Pearson correlation coefficient, the Matthews correlation coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity and specificity. The OpsiD approach is found to predict the inhibition capacity of an siRNA against a target mRNA with improved results over the state-of-the-art techniques. We are also able to quantify the influence of whole stacking energy on siRNA efficiency. The model is further improved by adding the ability to identify the off-target possibility of a predicted siRNA on non-target genes.
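The SVM classification step described above can be sketched as follows. This is an illustrative toy, not the thesis's model: the position-wise one-hot encoding and the short example sequences are assumptions (the actual feature set includes richer terms such as whole stacking energy, and real siRNAs are ~19–21 nt).

```python
# Sketch: classifying siRNAs as efficient (1) or inefficient (0) with an SVM.
# Encoding and toy sequences are illustrative assumptions only.
from sklearn.svm import SVC

def one_hot(seq):
    """Encode an RNA sequence as a flat position-wise one-hot vector."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "U": [0, 0, 0, 1]}
    return [bit for base in seq for bit in table[base]]

# Toy training data: hypothetical sequences with efficiency labels.
train_seqs = ["AUGCUAGCA", "UUGGCCAAU", "GGGCCCAAA", "AAAUUUGGG"]
labels = [1, 0, 1, 0]

clf = SVC(kernel="rbf", C=1.0)  # non-linear decision boundary over encodings
clf.fit([one_hot(s) for s in train_seqs], labels)
pred = int(clf.predict([one_hot("AUGCUAGCA")])[0])
```

In practice the feature vector would be augmented with thermodynamic descriptors (e.g. stacking energy) alongside the sequence encoding, so the kernel operates on a mixed sequence/energy feature space.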
Thus the proposed model, OpsiD, can predict optimized siRNAs by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since we have optimized siRNA efficacy in terms of both inhibition efficiency and off-target possibility, we hope that the risk of off-target effects during gene silencing can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing, and the approach may prove useful for designing exogenous siRNAs for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
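The validation metrics named in the abstract (sensitivity, specificity and the Matthews correlation coefficient) all derive from the confusion-matrix counts. A minimal sketch of their standard definitions, with illustrative counts:

```python
# Standard confusion-matrix metrics used to validate binary classifiers.
import math

def confusion_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity and the Matthews correlation
    coefficient from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate (recall)
    specificity = tn / (tn + fp)  # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, mcc

# Illustrative counts, not results from the thesis.
sens, spec, mcc = confusion_metrics(tp=40, fp=10, tn=45, fn=5)
```

MCC is often preferred over raw accuracy for siRNA data sets because the efficient/inefficient classes are typically imbalanced, and MCC stays near zero for a classifier that merely predicts the majority class.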