Abstract:
Which genetic differences make us different from our closest relatives, the chimpanzees, and, on the other hand, so similar to them? What we aim to study and to understand is the complex relationship between the multiple genetic and epigenetic differences whose interaction with diverse environmental and cultural factors results in the observed phenotypic differences. To clarify whether chromosomal rearrangements contributed to the divergence between human and chimpanzee, and which selective forces shaped their evolution, I examined the coding sequences of 2 Mb regions flanking the pericentric inversion breakpoints on chromosomes 1, 4, 5, 9, 12, 17 and 18. As controls, 4 Mb collinear regions on the rearranged chromosomes, located at least 10 Mb away from the breakpoint regions, were used. I found no elevated rate of protein evolution in the breakpoint-flanking regions compared with the control regions. My results do not support the chromosomal speciation hypothesis for human and chimpanzee, since the proportion of positively selected genes was similar in both regions (5.1% in the breakpoint-flanking regions and 7% in the control regions). By comparing the numbers of positively and negatively selected genes per chromosome, I found that chromosome 9 contains the most and chromosome 5 the fewest positively selected genes in the breakpoint-flanking and control regions. The number of negatively selected genes (68) was much higher than the number of positively selected genes (17). A bioinformatic analysis of published microarray expression data (Affymetrix chips U95 and U133v2) identified 31 genes that are differentially expressed between human and chimpanzee. By examining the dN/dS ratios of these 31 genes, I identified 7 genes as negatively selected and only 1 gene as positively selected. This finding is consistent with the concept that gene expression levels evolve under stabilizing selection. Moreover, most of the positively selected genes play a role in reproduction. Many of these species differences result from changes in gene regulation rather than from structural changes in the gene products. Most differences in gene regulation are thought to manifest at the transcriptional level. In this work, differences in DNA methylation between human and chimpanzee were investigated. To this end, the methylation patterns of the promoter CpG islands of 12 genes were analyzed in the cortex of humans and chimpanzees by classical bisulfite sequencing and bisulfite pyrosequencing. The candidate genes were selected because of their differential expression patterns between human and chimpanzee and because of their association with human diseases or with genomic imprinting. With the exception of a few individual positions, the majority of the analyzed genes showed no high intra- or interspecific variation in DNA methylation between the two species. Only one gene, CCRK, displayed clear intraspecific and interspecific differences in the degree of DNA methylation. The differentially methylated CpG positions were located within a repetitive Alu Sg1 element.
The investigation of the CCRK gene provides a comprehensive analysis of the intra- and interspecific variability of the DNA methylation of an Alu insertion in a regulatory region. The observed species differences suggest that the methylation patterns of the CCRK gene are likely evolving under positive selection, as an adaptation to specific requirements for the fine-tuning of CCRK regulation. The promoter of the CCRK gene is susceptible to epigenetic modification by DNA methylation, which can lead to complex transcription patterns. Because of their genomic mobility, their high CpG content and their influence on gene expression, Alu insertions are excellent candidates for promoting changes in the developmental regulation of primate genes. Comparing the intra- and interspecific methylation of specific Alu insertions in other genes and tissues is a promising strategy.
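To make the measurement concrete: in bisulfite sequencing, unmethylated cytosines are converted and read as T, while methylated cytosines remain C, so the methylation level of a CpG is simply the fraction of reads carrying a C at that position. The following minimal Python sketch (toy reads and positions, not data from this work) illustrates the computation:

```python
# Minimal sketch: per-CpG methylation levels from bisulfite-converted reads.
# Assumption: reads are already aligned to the reference (same coordinates),
# and cpg_positions lists the 0-based offsets of the C in each reference CpG.

def methylation_levels(reads, cpg_positions):
    """For each CpG position, return the fraction of reads with an
    unconverted 'C' (methylated) versus a converted 'T' (unmethylated)."""
    levels = {}
    for pos in cpg_positions:
        meth = unmeth = 0
        for read in reads:
            if pos >= len(read):
                continue
            base = read[pos]
            if base == "C":          # cytosine protected by methylation
                meth += 1
            elif base == "T":        # unmethylated C converted by bisulfite
                unmeth += 1
        total = meth + unmeth
        levels[pos] = meth / total if total else None
    return levels

# Toy example: 4 aligned reads covering two CpGs at offsets 2 and 7.
reads = ["ATCGATTCGA",
         "ATTGATTCGA",
         "ATCGATTTGA",
         "ATCGATTCGA"]
print(methylation_levels(reads, [2, 7]))  # {2: 0.75, 7: 0.75}
```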
Abstract:
This work focused on the synthesis of novel monomers for the design of a series of oligo(p-benzamide)s following two approaches: iterative solution synthesis and automated solid-phase protocols. These approaches provide a useful route to the sequence-controlled synthesis of side-chain- and main-chain-functionalized oligomers for the preparation of an immense variety of nanoscaffolds. The challenge in the synthesis of such materials was their modification while maintaining their characteristic properties (physicochemical properties, shape persistence and anisotropy). The strategy for the preparation of predictable superstructures was devoted to the selective control of noncovalent interactions, monodispersity and monomer sequence. In addition, the structure-property correlation of the prepared rod-like soluble materials was examined. The first approach involved solution-based aramide synthesis via the introduction of a 2,4-dimethoxybenzyl N-amide protective group in an iterative synthetic strategy. The second approach focused on the implementation of the salicylic acid scaffold to introduce substituents on the aromatic backbone for the stabilization of the OPBA rotamers. The prepared oligomers were analyzed with regard to their solubility and aggregation properties by systematically changing the degree of rotational freedom of the amide bonds, the side-chain polarity, the monomer sequence and the degree of oligomerization. The syntheses were performed on a modified commercial peptide synthesizer using a combination of fluorenylmethoxycarbonyl (Fmoc) and aramide chemistry. The automated synthesis allowed the preparation of aramides with potential applications as nanoscaffolds in supramolecular chemistry, e.g. comb-like-
Abstract:
Over the last few decades, bioinformatics has played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem of characterizing as much as possible of its coding regions becomes crucial. Protein sequence annotation is challenging and, owing to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. This thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, filtered by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is done by way of cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications were developed during my doctorate, including the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
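As a rough illustration of this kind of clustering pipeline (a toy sketch with assumed thresholds, not the actual BAR+ implementation or its metric), clusters can be built as connected components of a graph whose edges are alignment pairs passing a stringent identity/coverage filter, after which annotations can be projected within each cluster:

```python
# Sketch: non-hierarchical clustering from all-against-all alignment results.
# Assumptions (illustrative, not the BAR+ thresholds): an edge is kept when
# sequence identity >= 40% and alignment coverage >= 90%. 'alignments' would
# come from an all-against-all run (e.g. BLAST); here it is toy data.
import networkx as nx

alignments = [
    # (query, subject, identity %, coverage %)
    ("P1", "P2", 95.0, 98.0),
    ("P2", "P3", 52.0, 95.0),
    ("P3", "P4", 22.0, 60.0),   # too divergent: no edge
]

g = nx.Graph()
for q, s, ident, cov in alignments:
    g.add_nodes_from((q, s))
    if ident >= 40.0 and cov >= 90.0:   # stringent metric (assumed values)
        g.add_edge(q, s)

# Clusters are the connected components of the filtered graph.
clusters = [sorted(c) for c in nx.connected_components(g)]
print(clusters)   # e.g. [['P1', 'P2', 'P3'], ['P4']]

# Annotation transfer: a GO term annotated on some cluster members can be
# projected onto the unannotated ones (BAR+ additionally validates terms
# statistically before transferring them).
go = {"P1": {"GO:0016301"}, "P2": {"GO:0016301"}}
for cluster in clusters:
    shared = set.union(*(go.get(p, set()) for p in cluster))
    for p in cluster:
        go.setdefault(p, set()).update(shared)
print(go["P3"])   # {'GO:0016301'}
```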
Abstract:
The market's challenges lead firms to collaborate with other organizations in order to create joint ventures, alliances and consortia, which are defined as "interorganizational networks" (IONs) (Provan, Fish and Sydow, 2007). Some of these IONs are managed through shared participant governance (Provan and Kenis, 2008): a team composed of entrepreneurs and/or directors from each firm of an ION. The research focuses on these kinds of management teams and is based on an input-process-output model: some input variables (work-group diversity, intra-team friendship network density) have a direct influence on the process (team identification, shared leadership, interorganizational trust, team trust and intra-team communication network density), which in turn influences the team outputs, namely individual innovation behaviors and team effectiveness (team performance, work-group satisfaction and ION affective commitment). Data were collected on a sample of 101 entrepreneurs grouped in 28 ION governance teams, and the research hypotheses were tested through path analysis and multilevel models. As expected, trust in the team and shared leadership are positively and directly related to team effectiveness, while team identification and interorganizational trust are indirectly related to the team outputs. The friendship network density among the team's members has positive effects on trust in the team and on the communication network density; moreover, through the communication network density it improves the level of the teammates' ION affective commitment. Shared leadership and its effects on team effectiveness are fostered by higher levels of team identification and weakened by higher levels of work-group diversity, specifically gender diversity. Finally, communication network density and shared leadership at the individual level are related to the frequency of individual innovative behaviors. The dissertation's results give a broader and more precise indication of how to manage interfirm networks through "shared" forms of governance.
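To make the analytical approach concrete, the sketch below shows what a small input-process-output path model could look like in Python with the semopy package; all variable names and data are invented placeholders, not the model or estimates from the dissertation:

```python
# Sketch of a path analysis for an input-process-output model using semopy.
# All variable names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
n = 101  # sample size matching the study; the data itself is simulated
friendship = rng.normal(size=n)                           # input
trust = 0.5 * friendship + rng.normal(size=n)             # process
leadership = rng.normal(size=n)                           # process
effectiveness = 0.4 * trust + 0.3 * leadership + rng.normal(size=n)  # output

df = pd.DataFrame({
    "friendship_density": friendship,
    "trust_in_team": trust,
    "shared_leadership": leadership,
    "team_effectiveness": effectiveness,
})

# Input -> process -> output structure, in lavaan-style syntax.
desc = """
trust_in_team ~ friendship_density
team_effectiveness ~ trust_in_team + shared_leadership
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients and standard errors
```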
Abstract:
The recent advent of next-generation sequencing technologies has revolutionized the way the genome is analyzed. This innovation yields deeper information at lower cost and in less time, and provides data in the form of discrete measurements. One of the most important applications of these data is differential analysis, that is, investigating whether a gene exhibits a different expression level under two (or more) biological conditions (such as disease states, treatments received and so on). The final aim of the statistical analysis is hypothesis testing, and for modeling these data the Negative Binomial distribution is considered the most adequate choice, especially because it allows for overdispersion. However, the estimation of the dispersion parameter is a very delicate issue, because little information is usually available for estimating it. Many strategies have been proposed, but they often result in procedures based on plug-in estimates, and in this thesis we show that this discrepancy between the estimation and the testing framework can lead to uncontrolled type I errors. We propose a mixture model that allows each gene to share information with other genes that exhibit similar variability. Three consistent statistical tests are then developed for differential expression analysis. We show that the proposed method improves the sensitivity of detecting differentially expressed genes with respect to common procedures, since it is the best at attaining the nominal type I error rate while keeping high power. The method is finally illustrated on prostate cancer RNA-seq data.
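To illustrate the testing framework discussed here, the following sketch runs a Negative Binomial likelihood-ratio test for a single gene on simulated counts, re-estimating the dispersion in each fit rather than plugging in a fixed value; it is a generic textbook procedure, not the mixture-model method proposed in the thesis:

```python
# Sketch: a Negative Binomial likelihood-ratio test for one gene's counts
# across two conditions, on simulated data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
# Simulated counts: 5 samples per condition, condition B with ~2x higher mean.
counts = np.concatenate([rng.negative_binomial(5, 0.5, 5),    # mean ~5
                         rng.negative_binomial(10, 0.5, 5)])  # mean ~10
condition = np.repeat([0, 1], 5)

# Full model: intercept + condition effect; reduced model: intercept only.
X_full = sm.add_constant(condition.astype(float))
X_null = np.ones((len(counts), 1))
fit_full = sm.NegativeBinomial(counts, X_full).fit(disp=0)
fit_null = sm.NegativeBinomial(counts, X_null).fit(disp=0)

# Likelihood-ratio statistic, chi-squared with 1 degree of freedom.
lr = 2 * (fit_full.llf - fit_null.llf)
p_value = chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```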
Abstract:
The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952, Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve". The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. The membrane potential behaves differently for various types of external input. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential spikes repetitively; (ii) Aihara, Matsumoto and Ikegaya [1] showed that, depending on the amplitude and the frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are described through examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model in the computer simulations of chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behavior for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to study the influence of noise on the results of chapter 2.
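For concreteness, here is a minimal numerical sketch of the four-dimensional Hodgkin-Huxley system with standard textbook parameters (as given, e.g., in Izhikevich [12]), driven by a constant applied current as in input type (i); the parameter values are the conventional ones and are assumed rather than taken from this thesis:

```python
# Minimal sketch of the Hodgkin-Huxley equations, integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance.
g_na, g_k, g_l = 120.0, 36.0, 0.3
e_na, e_k, e_l = 50.0, -77.0, -54.387
c_m = 1.0  # uF/cm^2

# Voltage-dependent opening/closing rates of the gating variables m, h, n.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def hh(t, y, i_ext):
    v, m, h, n = y
    i_na = g_na * m**3 * h * (v - e_na)   # sodium current
    i_k = g_k * n**4 * (v - e_k)          # potassium current
    i_l = g_l * (v - e_l)                 # leak current
    dv = (i_ext - i_na - i_k - i_l) / c_m
    dm = alpha_m(v) * (1.0 - m) - beta_m(v) * m
    dh = alpha_h(v) * (1.0 - h) - beta_h(v) * h
    dn = alpha_n(v) * (1.0 - n) - beta_n(v) * n
    return [dv, dm, dh, dn]

# A constant input of 10 uA/cm^2 lies in the repetitive-spiking regime.
y0 = [-65.0, 0.05, 0.6, 0.32]  # approximate resting state
sol = solve_ivp(hh, (0.0, 100.0), y0, args=(10.0,), max_step=0.05)
print(f"peak membrane potential: {sol.y[0].max():.1f} mV")
```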
Abstract:
Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and of cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
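To give a flavor of threshold-based feature detection and overlap-based tracking (a toy stand-in, not the thesis's algorithm, which works on real atmospheric fields and handles genesis, lysis, merging and splitting explicitly), consider the following sketch:

```python
# Sketch: feature detection and tracking in a gridded 3D field.
# Features are thresholded connected components; tracking links features
# in consecutive time steps by voxel overlap.
import numpy as np
from scipy import ndimage

def segment(field, threshold):
    """Label connected 3D regions where the field exceeds a threshold."""
    labels, n_features = ndimage.label(field > threshold)
    return labels, n_features

def track(labels_t0, labels_t1):
    """Match features across two time steps by spatial overlap."""
    matches = {}
    for feature in range(1, labels_t0.max() + 1):
        overlap = labels_t1[labels_t0 == feature]
        overlap = overlap[overlap > 0]
        if overlap.size:
            # Successor = feature at t1 sharing the most voxels.
            matches[feature] = int(np.bincount(overlap).argmax())
    return matches

rng = np.random.default_rng(2)
field = ndimage.gaussian_filter(rng.random((20, 40, 40)), sigma=3)  # toy blobs
field_next = np.roll(field, 1, axis=2)        # same blobs, slightly advected
threshold = np.percentile(field, 99)          # keep the strongest 1% of voxels
l0, n0 = segment(field, threshold)
l1, n1 = segment(field_next, threshold)
print(n0, "features at t0; links to t1:", track(l0, l1))
```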
Abstract:
E. coli can use C4-dicarboxylic acids for energy conservation under both aerobic and anaerobic conditions. The DcuS/DcuR two-component system detects these acids and regulates the genes for C4-dicarboxylate transport and metabolism. The sensitivity of the sensor kinase DcuS for C4-dicarboxylic acids depends on the presence of the aerobic symporter DctA or the anaerobic antiporter DcuB. These bifunctional transporters form sensor units with DcuS through direct protein-protein interactions. In this work, the functions of DctA and DcuS in the DctA/DcuS sensor complex were analyzed. With DctA(S380D), a variant of the transporter was identified in which the regulatory property is uncoupled from the catalytic function. E. coli strains containing the DctA(S380D)/DcuS sensor complex were able to sense C4-dicarboxylic acids even though the transport function of DctA was inactivated. In addition, differences in the substrate spectra of DctA and DcuS were found. Citrate, a good effector of the DctA/DcuS sensor complex, was neither bound nor transported by DctA. Titration experiments with varying amounts of DctA further demonstrated that the sensitivity of DcuS for its effectors depends on the DctA concentration. It could be shown that DctA is not involved in the recognition of C4-dicarboxylic acids in the DctA/DcuS sensor complex. DcuS represents the signal input site of the complex, while the presence of DctA converts the sensor kinase into a functional, or sensitive, form that can respond to effectors. Furthermore, the role of the transmembrane helices TM1 and TM2 of DcuS in the function and dimerization of the sensor kinase was investigated. Sequence analyses identified "SmallxxxSmall" motifs, whose relevance as dimerization interfaces had already been demonstrated in transmembrane helices of other proteins, in both TM1 and TM2. The homodimerization of both transmembrane domains was demonstrated in the GALLEX two-hybrid system, with the TM2-TM2 interaction being the stronger one. Moreover, the substitution G190A/G194A in the SxxxGxxxG tandem motif of TM2 caused a marked loss of function of the sensor kinase. This loss of activity correlated with impaired homodimerization of TM2(G190A/G194A) and of DcuS(G190A/G194A) in bacterial two-hybrid measurements with the GALLEX and BACTH systems. Transmembrane helix 2, with its SxxxGxxxG sequence motif, therefore acts as the essential homodimerization site in DcuS. The dimerization of DcuS is essential for the function of the histidine kinase. In addition, fluorescence microscopy studies with coexpression of DcuS or DctA demonstrated the cellular colocalization of DctA and DcuR with DcuS, as well as of DauA with DctA. The DctA/DcuS sensor unit can therefore be extended to a DauA/DctA/DcuS/DcuR complex.
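As an aside, scanning a transmembrane-helix sequence for "SmallxxxSmall" motifs of the kind mentioned above (two small residues G, A or S separated by three arbitrary residues) is easy to automate; the sketch below uses an invented helix sequence for illustration:

```python
# Sketch: find "SmallxxxSmall" motifs (small residues G, A or S separated
# by three residues) in a helix sequence. The sequence below is invented;
# overlapping matches (e.g. a SxxxGxxxG tandem) are caught via a lookahead.
import re

SMALL = "GAS"
pattern = re.compile(rf"(?=([{SMALL}])...([{SMALL}]))")

helix = "LIVFSLLLGWWLGIRIS"  # hypothetical TM segment
for m in pattern.finditer(helix):
    i = m.start()
    print(f"position {i + 1}: {helix[i:i + 5]}")
# Prints SLLLG, GWWLG and GIRIS: an overlapping SxxxGxxxG-style tandem.
```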
Abstract:
Accurate placement of lesions is crucial for the effectiveness and safety of a retinal laser photocoagulation treatment. Computer assistance offers the potential to improve treatment accuracy and execution time. The idea is to use video frames acquired from a scanning digital ophthalmoscope (SDO) to compensate for retinal motion during laser treatment. This paper presents a method for the multimodal registration of the initial frame of an SDO retinal video sequence to a retinal composite image, which may contain a treatment plan. The retinal registration procedure comprises the following steps: 1) detection of vessel centerline points and identification of the optic disc; 2) prealignment of the video frame and the composite image based on optic disc parameters; and 3) iterative matching of the detected vessel centerline points in expanding matching regions. This registration algorithm was designed to initialize a real-time registration procedure that registers the subsequent video frames to the composite image. The algorithm demonstrated its capability to register various pairs of SDO video frames and composite images acquired from patients.
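The iterative point-matching step can be pictured with the following simplified sketch: given vessel centerline points extracted from the video frame and from the composite image (and a rough prealignment, e.g. from the optic disc), nearest neighbours within a fixed radius are matched and a rigid transform is refitted, in the manner of a basic ICP; the paper's expanding-region scheme and real-time constraints are not reproduced here:

```python
# Sketch: iterative rigid point-set registration of centerline points.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation + translation mapping src onto dst (Kabsch)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:      # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, dc - r @ sc

def register(frame_pts, composite_pts, iterations=20, radius=30.0):
    """Iteratively match nearest centerline points and refit the transform."""
    r, t = np.eye(2), np.zeros(2)
    tree = cKDTree(composite_pts)
    for _ in range(iterations):
        moved = frame_pts @ r.T + t
        dist, idx = tree.query(moved, distance_upper_bound=radius)
        ok = np.isfinite(dist)   # keep only points with a match in range
        if ok.sum() < 3:
            break
        r, t = rigid_fit(frame_pts[ok], composite_pts[idx[ok]])
    return r, t

# Toy check: recover a known 2-degree rotation and small shift.
rng = np.random.default_rng(3)
pts = rng.random((100, 2)) * 300.0
a = np.deg2rad(2.0)
rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
r, t = register(pts, pts @ rot.T + np.array([3.0, -2.0]))
print(np.degrees(np.arctan2(r[1, 0], r[0, 0])))  # recovered angle, ~2.0
```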
Abstract:
To evaluate, in a prospective pilot study, the feasibility of identifying pathogens in urine using real-time polymerase chain reaction (PCR), and to compare the results with conventional urine culture-based procedures.
Abstract:
Over the last decade, the end-state comfort effect (e.g., Rosenbaum et al., 2006) has received a considerable amount of attention. However, some of the underlying mechanisms remain to be investigated, amongst others how sequential planning affects end-state comfort and how this effect develops over learning. In a two-step sequencing task, for example, postural comfort can be planned for the intermediate position (next state) or for the actual end position (final state). It might be hypothesized that, in initial acquisition, the next state's comfort is crucial for action planning but that, in the course of learning, the final state's comfort is taken more and more into account. To test this hypothesis, a variant of Rosenbaum's vertical stick transportation task was used. Participants (N = 16, right-handed) received extensive practice on a two-step transportation task (10,000 trials over 12 sessions). From the initial position on the middle stair of a staircase in front of the participant, the stick had to be transported either 20 cm upwards and then 40 cm downwards, or 20 cm downwards and then 40 cm upwards (N = 8 per subgroup). Participants were instructed to produce fluid movements without changing grasp. In the pre- and posttest, participants were tested on both two-step sequencing tasks as well as on 20 cm single-step upwards and downwards movements (10 trials per condition). For the test trials, grasp height was measured kinematographically. In the pretest, large end/next/final-state comfort effects were found for single-step transportation tasks, and large next-state comfort effects for sequenced tasks. However, no change in grasp height from pre- to posttest could be detected. The results show that, in vertical stick transportation sequences, the final state is not taken into account when planning grasp height. Instead, action planning seems to be based solely on aspects of the next action goal to be reached.