932 results for DOMAIN-DOMAIN INTERACTIONS
Abstract:
This thesis addresses both the development of a mixed physical-domain simulation environment for RF-MEMS devices and the definition of an ad-hoc fabrication process for their packaging and integration. Concerning the first topic, the development of a library of MEMS models within the Cadence integrated-circuit simulation environment will be shown in detail. The approach chosen to describe the electromechanical behaviour of the MEMS is based on the concept of compact modeling. This means that the physical behaviour of each elementary component of the library is described by means of a limited set of interconnection points (nodes) towards the outside world. The library includes elementary components, such as flexible beams, rigid suspended plates and anchor points, whose suitable interconnection leads to complete devices (such as switches and variable capacitors) to be simulated in Cadence. All the MEMS models are implemented in the VerilogA HDL (Hardware Description Language), which is supported by the Spectre circuit simulator. Both the VerilogA language and the Spectre simulator are available within the Cadence environment. The resulting multi-domain (i.e. electromechanical) simulation environment makes it possible to interface the MEMS devices with the standard CMOS component libraries and, consequently, to simulate mixed RF-MEMS/CMOS functional blocks. As an example, a VCO (Voltage Controlled Oscillator), in which the LC-tank is implemented in MEMS technology while the active part relies on library MOS transistors, will be simulated in Spectre. Moreover, the following pages will present a technological solution for the fabrication of a protective substrate (package) to be applied to RF-MEMS devices, based on electrical interconnect vias through a silicon wafer. The chosen packaging solution enables techniques for the hybrid integration of the RF-MEMS and CMOS parts (hybrid packaging). Issues concerning the parasitic effects (capacitive and inductive couplings) introduced by the package, which affect the RF performance of the encapsulated MEMS devices, will also be highlighted. In detail, all the degrees of freedom of the technological process for obtaining the package will be optimized by means of an electromagnetic simulator (Ansoft HFSS) in order to reduce the parasitic effects introduced by the protective substrate. Moreover, experimental results collected from measurements on encapsulated test structures will be shown, on the one hand to validate the Ansoft HFSS simulator and, on the other, to demonstrate the feasibility of the proposed packaging solution. Beyond the apparently weak link between the two topics mentioned above, a single objective can be identified. It lies in the development of a unified simulation environment within which the electromechanical behaviour of RF-MEMS devices can be studied and analysed. Within such an environment, the influence of the package on the electromagnetic behaviour of the RF-MEMS can be taken into account by means of lumped-element models extracted from experimental measurements and Finite Element (FEM) simulations of the package part.
Finally, the possibility offered by the Cadence environment of simulating RF-MEMS devices interfaced with the CMOS part makes it possible to analyse complete hybrid RF-MEMS/CMOS functional blocks.
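To make the notion of compact modeling concrete, the following relations sketch the kind of lumped electromechanical description such a model encodes for a parallel-plate MEMS actuator; this is textbook physics given for illustration, not an equation taken from the thesis or its VerilogA library:

\[
m\ddot{x} + b\dot{x} + kx = \frac{\varepsilon_0 A V^2}{2\,(g_0 - x)^2},
\qquad
C(x) = \frac{\varepsilon_0 A}{g_0 - x},
\]

where \(x\) is the displacement of the suspended plate, \(g_0\) the rest gap, \(A\) the electrode area, \(V\) the voltage at the electrical port, and \(m\), \(b\), \(k\) the lumped mass, damping and stiffness. A compact model exposes exactly this electrical/mechanical port behaviour through a handful of interconnection nodes, instead of a full finite-element description of the structure.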
Abstract:
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents which can adapt autonomously to their environments. Though autonomy allows agents to decide which behaviour to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents' behaviour by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and augmenting system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent will follow norms or violate them depending on the conditions. Under which conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events may not occur at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain, such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their representation in the system, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear amongst mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized into a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
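As a minimal illustration of defeasible reasoning (a standard textbook example, not drawn from the thesis), consider two rules and a superiority relation:

\[
r_1:\ \mathit{bird}(x) \Rightarrow \mathit{flies}(x), \qquad
r_2:\ \mathit{penguin}(x) \Rightarrow \neg\mathit{flies}(x), \qquad
r_2 > r_1.
\]

From \(\mathit{bird}(\mathit{tweety})\) alone, \(\mathit{flies}(\mathit{tweety})\) is defeasibly derivable; learning \(\mathit{penguin}(\mathit{tweety})\) activates the stronger rule \(r_2\) and invalidates that conclusion. The logic developed in the dissertation adds temporal and deontic dimensions on top of this basic non-monotonic mechanism.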
Abstract:
The aim of the present study is to understand the properties of a new group of redox proteins having in common a DOMON-type domain with characteristics of cytochromes b. The superfamily of proteins containing a DOMON of this type includes a few protein families. With the aim of better characterizing this new protein family, the present work addresses both a CyDOM protein (a cytochrome b561) and a protein consisting only of a DOMON domain (AIR12), both of plant origin. Apoplastic ascorbate can be regenerated from monodehydroascorbate by a trans-plasma membrane redox system which uses cytosolic ascorbate as a reductant and comprises a high-potential cytochrome b. We identified the major plasma membrane (PM) ascorbate-reducible b-type cytochrome of bean (Phaseolus vulgaris) and soybean (Glycine max) hypocotyls as orthologs of the Arabidopsis auxin-responsive gene air12. The protein, which is glycosylated and glycosylphosphatidylinositol-anchored to the external side of the PM in vivo, was expressed in Pichia pastoris in a recombinant form lacking the glycosylphosphatidylinositol-modification signal and purified from the culture medium. Recombinant AIR12 is a soluble protein predicted to fold into a β-sandwich domain and belonging to the DOMON superfamily. It is shown to be a b-type cytochrome with a symmetrical α-band at 561 nm, fully reduced by ascorbate and fully oxidized by monodehydroascorbate. Redox potentiometry suggests that AIR12 binds two high-potential hemes (Em,7 +135 and +236 mV). Phylogenetic analyses reveal that the auxin-responsive genes AIR12 constitute a new family of plasma membrane b-type cytochromes specific to flowering plants. Although AIR12 is one of the few redox proteins of the PM characterized to date, its role in trans-PM electron transfer would imply interaction with other partners which are still to be identified. Another part of the present project was aimed at understanding a soybean protein comprising a DOMON fused with a well-defined b561 cytochrome domain (CyDOM). Various bioinformatic approaches show this protein to be composed of an N-terminal DOMON followed by a b561 domain. The latter contains five transmembrane helices featuring highly conserved histidines, which might bind haem groups. CyDOM has been cloned and expressed in the yeast Pichia pastoris, and spectroscopic analyses have been carried out on solubilized yeast membranes. CyDOM clearly shows the properties of a b-type cytochrome. The results highlight the fact that CyDOM is able to drive an electron flux across the plasma membrane. Voltage clamp experiments demonstrate that Xenopus laevis oocytes transformed with soybean CyDOM exhibit negative electrical currents in the presence of an external electron acceptor. Analogous investigations were carried out with SDR2, a CyDOM of Drosophila melanogaster, which shows an electron transport capacity even higher than that of the plant CyDOM. As noted above, these data reinforce those obtained with the plant CyDOM on the one hand, and on the other hand allow the properties assigned to CyDOM to be attributed to SDR2-like proteins. Regenerated tobacco roots transiently transformed with a chimeric GFP:CyDOM construct (by A. rhizogenes infection) reveal a plasma membrane localization of CyDOM both in epidermal cells of the root elongation zone and in root hairs.
In conclusion, although the data presented here have yet to be expanded and in part clarified, it is safe to say that they open a new perspective on the role of this group of proteins. The biological relevance of the functional and physiological implications of DOMON redox domains seems noteworthy, and it can only increase with future advances in research. Beyond the finding itself, however interesting, of DOMON domains acting as extracellular cytochromes, the present study testifies to the fact that cytochrome proteins containing DOMON domains of the "CyDOM" type can transfer electrons across membranes and may represent the most important redox component of the plasma membrane discovered to date.
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process which defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. For this analysis we decided to use the CWT, which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first used to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application to ambient vibrations yields accurate modal parameters of the system, although some important remarks must be made about the damping estimates. The fourth chapter still deals with the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained with the DWT are compared with those obtained by applying the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests performed in 2008 by the University of Porto and the University of Sheffield on the Humber Bridge in England, an FE model of the bridge is defined in order to establish which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research.
They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in identification tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
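As a hedged illustration of the classical FFT-based FRF estimation recalled in the second chapter, the following sketch uses the standard H1 estimator; segment length and variable names are illustrative choices, not the procedure implemented in the thesis:

```python
import numpy as np
from scipy.signal import csd, welch

def frf_h1(x, y, fs, nperseg=1024):
    """H1 estimate of the frequency response function between an input x(t)
    and a measured output y(t), using Welch-averaged spectra."""
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg)   # cross-spectral density S_xy
    _, Sxx = welch(x, fs=fs, nperseg=nperseg)    # input auto-spectral density S_xx
    return f, Sxy / Sxx                          # H1(f) = S_xy / S_xx

# Peaks of |H1| locate the natural frequencies; half-power bandwidths give
# rough modal damping estimates, to be refined by a proper modal fit.
```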
Abstract:
This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by numerical solution of the Navier-Stokes equation on a lattice. To this end, the Lattice-Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which gives us the advantage of treating dense systems, in particular away from thermal equilibrium. The Lattice-Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e. the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the Lattice-Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The Fluctuation-Dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm scales linearly, too. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particles-fields system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail. For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only nearest-neighbor sites of the lattice interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmarking results of the algorithm are presented and discussed.
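For orientation, the friction coupling mentioned above typically has the following generic form (standard for this class of hybrid Lattice-Boltzmann/particle schemes; the notation is illustrative and not copied from the thesis):

\[
\mathbf{F}_i = -\Gamma\bigl(\mathbf{v}_i - \mathbf{u}(\mathbf{r}_i)\bigr) + \mathbf{f}_i,
\qquad
\langle f_i^{\alpha}(t)\, f_j^{\beta}(t')\rangle = 2 k_B T\, \Gamma\, \delta_{ij}\, \delta^{\alpha\beta}\, \delta(t-t'),
\]

where \(\mathbf{v}_i\) is the velocity of particle \(i\), \(\mathbf{u}(\mathbf{r}_i)\) the fluid velocity interpolated to the particle position, and \(\Gamma\) the bare friction coefficient. The fluctuation-dissipation relation on the random force \(\mathbf{f}_i\) guarantees the correct temperature, and the effective friction, and hence a Stokes radius, emerges from the interplay of \(\Gamma\) with the viscosity and grid spacing of the lattice fluid.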
Abstract:
Host-pathogen interaction is a very vast field of the biological sciences; indeed, every year many unknown pathogens are uncovered, leading to an exponential growth of this field. The present work lies within its boundaries, touching different aspects of host-pathogen interaction. We have evaluated the permissiveness of mesenchymal stem cells (FM-MSC from now on) to all known herpesviruses affecting humans. Our study demonstrates that FM-MSC are fully permissive to HSV1, HSV2, HCMV and VZV. On the other hand, FM-MSC are susceptible to HHV6, HHV7, EBV and HHV8, but these viruses fail to activate a lytic infection program. FM-MSC are pluripotent stem cells and have been studied intensely in the last decade; they are employed in some clinical applications. For this reason it is important to know their degree of susceptibility to transmittable pathogens. Our attention then moved to bacterial pathogens: we performed a proteome-wide in silico analysis of the Chlamydiaceae family, searching for putative Nuclear Localization Signals (NLS). Chlamydiaceae are a family of obligate intracellular parasites. It is reasonable to think that their members could deliver effector proteins to the nucleus via NLS sequences: if that were the case, the identification of NLS-carrying proteins could open the way to therapeutic approaches. Our results strengthen this hypothesis: we have identified 72 proteins bearing NLSs and verified their functionality with in vivo assays. Finally, we have conceived a molecular scissor, creating a fusion protein between the HIV-1 IN protein and the FokI catalytic domain (an endonuclease domain). Our aim is to obtain a chimeric enzyme (trojIN) which selectively identifies the naturally occurring IN targets (HIV LTR sites) and subsequently cleaves LTR-carrying DNA (for example, integrated HIV-1 DNA). Our preliminary results are promising, since we have identified a mutated version of trojIN capable of selectively recognizing LTR-carrying DNA in in vitro experiments.
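The proteome-wide NLS screen is described only at a high level; purely as an illustrative sketch (not the pipeline used in this work), one could scan protein sequences for a classical monopartite NLS consensus such as K(K/R)X(K/R):

```python
import re

# Classical monopartite NLS consensus K(K/R)X(K/R); purely illustrative.
# Real NLS predictors are far more sophisticated than a single regex.
NLS_PATTERN = re.compile(r"K[KR].[KR]")

def find_putative_nls(proteome):
    """proteome: dict mapping protein id -> amino-acid sequence (one-letter code)."""
    hits = {}
    for pid, seq in proteome.items():
        matches = [(m.start() + 1, m.group()) for m in NLS_PATTERN.finditer(seq)]
        if matches:
            hits[pid] = matches          # 1-based position and matched motif
    return hits

# toy example
print(find_putative_nls({"demo": "MSTKKRKVAAA"}))  # -> {'demo': [(4, 'KKRK')]}
```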
Abstract:
Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid material in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian formulation coupled with a non-linear structure model, treating the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework, which solves simultaneously for the fluid and structure unknowns within a single solver. We used the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result has been proved for the Backward-Euler time-stepping scheme applied to both the fluid and the solid part, with the finite element method used for the space discretization. The resulting linear systems have been solved by multilevel domain decomposition techniques. Our strategy is to solve several local subproblems over subdomain patches using the Schur-complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present corresponding results for a set of two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in a laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we considered an academic numerical test which consists in simulating pressure wave propagation through a straight compliant vessel. All the tests show the applicability and numerical efficiency of our approach for both two-dimensional and three-dimensional problems.
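To make "monolithic" concrete: after linearization, each time step leads to a single block system in all unknowns, schematically of the form below (the block names are illustrative and not the notation of the thesis):

\[
\begin{pmatrix}
A_{ff} & B_{fp} & C_{fs} \\
B_{pf} & 0 & 0 \\
C_{sf} & 0 & A_{ss}
\end{pmatrix}
\begin{pmatrix} \delta \mathbf{u} \\ \delta p \\ \delta \mathbf{d} \end{pmatrix}
= -\begin{pmatrix} \mathbf{r}_f \\ \mathbf{r}_p \\ \mathbf{r}_s \end{pmatrix},
\]

where \(\delta \mathbf{u}\), \(\delta p\) and \(\delta \mathbf{d}\) are the updates of fluid velocity, pressure and solid displacement, and the off-diagonal blocks \(C\) carry the interface coupling that a partitioned scheme would instead enforce by sub-iteration. The multilevel domain decomposition smoothers described above act on local patches of this single matrix.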
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems, both collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, whether AEM or ground data, careful processing, inversion, post-processing, data integration and data calibration constitute the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we thoroughly investigate the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further attempt to use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
Abstract:
Soluble epoxide hydrolase (sEH) belongs to the epoxide hydrolase family of enzymes. The classical role of sEH is detoxification, through the conversion of potentially harmful epoxides into their harmless diol form. sEH mainly converts endogenous, arachidonic-acid-related signalling molecules, such as the epoxyeicosatrienoic acids, into the corresponding diols. sEH could therefore serve as a target enzyme in the therapy of hypertension and inflammation, as well as various other diseases. sEH is a homodimer in which each subunit is composed of two domains. The catalytic centre of the epoxide hydrolase activity is located in the 35 kDa C-terminal domain. This region of sEH has already been studied in detail, and nearly all of the catalytic properties of the enzyme and their associated functions are known in connection with this domain. In contrast, little is known about the 25 kDa N-terminal domain. The N-terminal domain of sEH is assigned to the haloacid dehalogenase (HAD) superfamily of hydrolases, but the function of this domain remained unclear for a long time. Our group showed for the first time that mammalian sEH is a bifunctional enzyme which, in addition to the well-known enzymatic activity in the C-terminal region, possesses a further enzymatic function with Mg2+-dependent phosphatase activity in the N-terminal domain. Based on the homology of the N-terminal domain with other enzymes of the HAD family, a two-step reaction is assumed for the phosphatase function (dephosphorylation). To further elucidate the catalytic mechanism of dephosphorylation, biochemical analyses of the human sEH phosphatase were performed by generating mutations in the active site by means of site-directed mutagenesis. The aim was to identify the active-site amino acid residues involved in the catalytic activity and to specify their role in dephosphorylation. On the basis of the structural and possible functional similarities between sEH and other members of the HAD superfamily, amino acids (conserved and partially conserved residues) in the active site of the sEH phosphatase domain were selected as candidates. Of the amino acids forming the phosphatase domain, eight were chosen (Asp9 (D9), Asp11 (D11), Thr123 (T123), Asn124 (N124), Lys160 (K160), Asp184 (D184), Asp185 (D185), Asn189 (N189)) to be replaced by non-functional amino acids using site-directed mutagenesis. Each of the selected amino acids was replaced by at least two alternative amino acids: either by alanine or by an amino acid similar to the one in the wild-type enzyme. In total, 18 different recombinant clones were generated, each encoding a mutant sEH phosphatase domain in which only a single amino acid was exchanged with respect to the wild-type enzyme. The 18 mutants as well as the wild type (the sequence of the N-terminal domain without mutation) were cloned into an expression vector in E. coli, and the nucleotide sequence was confirmed by restriction digestion and sequencing. The N-terminal domain of sEH generated in this way (the 25 kDa subunit) was then successfully purified by metal-affinity chromatography and tested for phosphatase activity towards the generic substrate 4-nitrophenyl phosphate.
Those mutants that showed phosphatase activity were subsequently subjected to kinetic tests. Based on the results of these investigations, kinetic parameters were calculated using four well-established methods, and the results were interpreted with the direct linear plot method. The results showed that most of the 18 generated mutants were inactive or had lost a large part of the enzymatic activity (Vmax) compared with the wild type (WT: Vmax = 77.34 nmol mg⁻¹ min⁻¹). This loss of enzymatic activity could not be explained by a loss of structural integrity, since the wild-type and mutant proteins behaved identically during chromatography. All substitutions of Asp9 (D9), Lys160 (K160), Asp184 (D184) and Asn189 (N189) led to a complete loss of phosphatase activity, indicating their catalytic function in the N-terminal region of sEH. For some of the substitutions carried out for Asp11 (D11), Thr123 (T123), Asn124 (N124) and Asp185 (D185), phosphatase activity was strongly reduced compared with the wild type, but was still measurable to different extents for the individual protein mutants (2-10% and 40% of the WT enzyme activity). In addition, the mutants of this group showed altered kinetic properties (Vmax alone, or Vmax and Km). The kinetic analysis of the Asp11Asn mutant was of particular interest because of the strong Vmax reduction detectable only for this mutant (8.1 nmol mg⁻¹ min⁻¹) together with a significant reduction of the Km (Asp11: Km = 0.54 mM, WT: Km = 1.3 mM), implying a role for Asp11 (D11) in the second, hydrolysis step of the catalytic cycle. In summary, the results show that all the amino acids investigated in this work are required for the phosphatase activity of sEH and form the active site of the sEH phosphatase in the N-terminal region of the enzyme. Furthermore, these results help clarify the potential role of the investigated amino acids and support the hypothesis that the dephosphorylation reaction proceeds in two steps. A combined reaction mechanism, similar to that of other enzymes of the HAD family, is therefore conceivable for the dephosphorylation function. This assumption is supported by the 3D structure of the N-terminal domain, the results of this work and the results of further biochemical analyses. The two-step dephosphorylation mechanism involves a nucleophilic attack on the substrate phosphorus by the active-site nucleophile Asp9 (D9), forming an acyl-phosphate enzyme intermediate, followed by the release of the dephosphorylated substrate. In the second step, the hydrolysis of the enzyme-phosphate intermediate takes place, assisted by Asp11 (D11), and the phosphate group is released. The other investigated amino acids are involved in the binding of Mg2+ and/or substrate. This work has thus further elucidated the catalytic mechanism of the sEH phosphatase, and important open questions, such as the physiological role of the sEH phosphatase, its endogenous physiological substrates and the exact functional mechanism of the bifunctional enzyme (the communication between the two catalytic units), have been identified and discussed.
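The kinetic parameters quoted above follow the usual Michaelis-Menten description of the initial rate (standard background, not a result specific to this work):

\[
v = \frac{V_{\max}\,[S]}{K_m + [S]},
\]

so a mutation that lowers \(V_{\max}\) reflects a slower catalytic turnover, whereas a lower \(K_m\), as observed for the Asp11Asn mutant, indicates a tighter apparent substrate binding.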
Abstract:
In chronic myeloid leukemia (CML) and Philadelphia-positive acute lymphoblastic leukemia (Ph+ ALL) patients resistant to tyrosine kinase inhibitors (TKIs), BCR-ABL kinase domain (KD) mutation status is an essential component of the therapeutic decision algorithm. The recent development of the Ultra-Deep Sequencing (UDS) approach has opened the way to a more accurate characterization of the mutant clones surviving TKIs, combining assay sensitivity and throughput. We decided to set up and validate a UDS-based approach for BCR-ABL KD mutation screening in order to i) resolve qualitatively and quantitatively the complexity and the clonal structure of the mutated populations surviving TKIs, ii) study the dynamics of expansion of mutated clones in relation to TKI therapy, and iii) assess whether UDS allows more sensitive detection of emerging clones harboring critical 2GTKI-resistant mutations predictive of an impending relapse, earlier than Sanger sequencing (SS). UDS was performed on a Roche GS Junior instrument, according to an amplicon sequencing design and protocol set up and validated in the framework of the IRON-II (Interlaboratory Robustness of Next-Generation Sequencing) international consortium. Samples from CML and Ph+ ALL patients who had developed resistance to one or multiple TKIs, collected at regular time-points during treatment, were selected for this study. Our results indicate the technical feasibility, accuracy and robustness of our UDS-based BCR-ABL KD mutation screening approach. UDS was found to provide a more accurate picture of BCR-ABL KD mutation status, both in terms of presence/absence of mutations and in terms of clonal complexity, and showed that the BCR-ABL KD mutations detected by SS are only the "tip of the iceberg". In addition, UDS may reliably pick up 2GTKI-resistant mutations earlier than SS in a significantly greater proportion of patients. The enhanced sensitivity, as well as the possibility to identify low-level mutations, points to the UDS-based approach as an ideal alternative to conventional sequencing for BCR-ABL KD mutation screening in TKI-resistant Ph+ leukemia patients.
Abstract:
Magnetic Resonance Spectroscopy (MRS) is an advanced clinical and research application which guarantees a specific biochemical and metabolic characterization of tissues by the detection and quantification of key metabolites for diagnosis and disease staging. The Associazione Italiana di Fisica Medica (AIFM) has promoted the activity of the "Interconfronto di spettroscopia in RM" working group. The purpose of the study is to compare and analyze results obtained by performing MRS on scanners from different manufacturers, in order to compile a robust protocol for spectroscopic examinations in clinical routine. This thesis takes part in this project by using the GE Signa HDxt 1.5 T scanner at Pavilion no. 11 of the S.Orsola-Malpighi hospital in Bologna. The spectral analyses have been performed with the jMRUI package, which includes a wide range of preprocessing and quantification algorithms for signal analysis in the time domain. After quality assurance on the scanner with standard and innovative methods, spectra both with and without suppression of the water peak were acquired on the GE test phantom. The comparison of the ratios of the metabolite amplitudes over Creatine computed by the workstation software, which works in the frequency domain, and by jMRUI shows good agreement, suggesting that quantifications in both domains may lead to consistent results. The characterization of an in-house phantom provided by the working group achieved its goal of assessing the solution content and the metabolite concentrations with good accuracy. The soundness of the experimental procedure and data analysis has been demonstrated by the correct estimation of the T2 of water, the observed biexponential relaxation curve of Creatine, and the correct TE value at which modulation by J coupling causes the Lactate doublet to be inverted in the spectrum. The work of this thesis has demonstrated that it is possible to perform measurements and establish protocols for data analysis, based on the physical principles of NMR, which are able to provide robust values for the spectral parameters of clinical use.
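Time-domain quantification of the kind performed with jMRUI typically fits the acquired free-induction decay to a sum of exponentially damped sinusoids; as a schematic reminder (the standard model function, not a protocol detail of this thesis):

\[
s(t) = \sum_{k=1}^{K} a_k \, e^{i\varphi_k}\, e^{(-d_k + i 2\pi f_k)\,t},
\]

where \(a_k\) is the amplitude (proportional to the metabolite concentration), \(f_k\) the frequency, \(d_k\) the damping and \(\varphi_k\) the phase of the \(k\)-th spectral component; metabolite ratios such as the amplitude over Creatine follow directly from the fitted \(a_k\).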
Abstract:
Nowadays communication is switching from a centralized scenario, where communication media like newspapers, radio and TV programs produce information and people are just consumers, to a completely different, decentralized scenario, where everyone is potentially an information producer through the use of social networks, blogs and forums that allow a real-time worldwide information exchange. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques like Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This could lead to determining, for instance, the degree of user satisfaction about products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model, which is language independent and whose key features are simplicity and generality, which make it interesting compared with previous, more sophisticated techniques. Every discussed technique has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing its performance with that of two previous works. The performed analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, with reference to both single-domain and cross-domain tasks, in 2-class (i.e. positive and negative) Document Sentiment Classification. However, there is still room for improvement, and this work also indicates the path to follow in order to enhance performance: a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also address validating these results in tasks with more than 2 classes.
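The abstract does not spell out the classifier, so the following is only a hedged sketch of one plausible Markov-chain formulation (one first-order word-transition model per class, documents scored by log-likelihood); the class names, smoothing and tokenization are illustrative assumptions, not the method of the dissertation:

```python
from collections import defaultdict
import math

class MarkovSentimentClassifier:
    """Toy 2-class sentiment classifier: one first-order word-transition
    model per class; a document is assigned to the class that maximizes
    its transition log-likelihood."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha      # additive smoothing constant
        self.trans = {}         # class -> {(w1, w2): count}
        self.totals = {}        # class -> {w1: outgoing-transition count}
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, y in zip(docs, labels):
            t = self.trans.setdefault(y, defaultdict(int))
            n = self.totals.setdefault(y, defaultdict(int))
            words = doc.lower().split()
            self.vocab.update(words)
            for w1, w2 in zip(words, words[1:]):
                t[(w1, w2)] += 1
                n[w1] += 1

    def _log_prob(self, words, y):
        t, n, V = self.trans[y], self.totals[y], len(self.vocab)
        return sum(
            math.log((t[(w1, w2)] + self.alpha) / (n[w1] + self.alpha * V))
            for w1, w2 in zip(words, words[1:])
        )

    def predict(self, doc):
        words = doc.lower().split()
        return max(self.trans, key=lambda y: self._log_prob(words, y))

# usage sketch
clf = MarkovSentimentClassifier()
clf.fit(["great movie really good", "bad plot really boring"], ["pos", "neg"])
print(clf.predict("really good movie"))  # -> 'pos'
```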
Abstract:
This thesis aims to investigate the technologies available for implementing programming languages and domain-specific languages in the Java environment. In particular, three tools on the market are presented and analysed: JavaCC, ANTLR and Xtext. By the end of the work, the reader should have a general idea of the main mechanisms and systems involved (such as lexers, parsers, ASTs, parse trees, etc.), as well as of how the three presented tools work. Furthermore, the goal is to identify the advantages and disadvantages of each tool through an analysis of the features it offers, so as to provide a critical judgement for choosing and evaluating the systems to be used.