956 results for Orthogonal projectors
Abstract:
Synchronization is a key issue in any communication system, but it becomes fundamental in navigation systems, which are entirely based on estimating the time delay of the signals coming from the satellites. Thus, even though synchronization has been a well-known topic for many years, the introduction of new modulations and new physical-layer techniques in modern standards makes the traditional synchronization strategies ineffective. For this reason, the design of advanced and innovative synchronization techniques for modern communication systems, such as DVB-SH, DVB-T2, DVB-RCS, WiMAX and LTE, and for modern navigation systems, such as Galileo, has been the focus of this activity. Recent years have seen the consolidation of two different trends: the introduction of Orthogonal Frequency Division Multiplexing (OFDM) in communication systems, and of the Binary Offset Carrier (BOC) modulation in modern Global Navigation Satellite Systems (GNSS). Particular attention has therefore been given to the investigation of synchronization algorithms in these two areas.
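At the core of any such delay-estimation stage lies a correlation of the received samples against a local replica of the transmitted code. A minimal illustrative sketch (toy binary code, noiseless samples; not a signal from any of the standards above):

```python
# Illustrative sketch: the basic time-delay estimator underlying GNSS
# synchronization. Correlate the received samples against a local replica
# and take the delay that maximizes the correlation.
def estimate_delay(received, replica):
    """Return the sample delay maximizing the correlation with the replica."""
    n = len(received) - len(replica) + 1
    def corr(d):
        return sum(r * c for r, c in zip(received[d:], replica))
    return max(range(n), key=corr)

replica = [1, -1, 1, 1, -1, -1, 1, -1]      # toy binary spreading code
received = [0, 0, 0, 0] + replica + [0, 0]  # code arriving 4 samples late
print(estimate_delay(received, replica))    # → 4
```

Real receivers refine this integer-sample estimate with sub-sample interpolation and tracking loops, but the correlation peak is the common starting point.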
Abstract:
Investigations on formation and specification of neural precursor cells in the central nervous system of the Drosophila melanogaster embryo.
Specification of a unique cell fate during the development of a multicellular organism is often a function of position. The Drosophila central nervous system (CNS) provides an ideal system to dissect the signalling events during development that lead to cell-specific patterns. The different cell types in the CNS are formed from relatively few precursor cells, the neuroblasts (NBs), which delaminate from the neurogenic region of the ectoderm. Delamination occurs in five waves, S1-S5, finally leading to a subepidermal layer of about 30 NBs, each with a unique identity, arranged in a stereotyped spatial pattern in each hemisegment. This identity depends on several factors, such as the concentrations of various morphogens, cell-cell interactions and long-range signals present at the position and time of the NB's birth. The early NBs, delaminating during S1 and S2, form an orthogonal array of four rows (2/3, 4, 5, 6/7) and three columns (medial, intermediate and lateral). However, this three-column, four-row arrangement is only transitory during early neurogenesis and is obscured by the late-emerging (S3-S5) neuroblasts (Doe and Goodman, 1985; Goodman and Doe, 1993). The aim of my study has therefore been to identify novel genes which play a role in the formation or specification of late-delaminating NBs. In this study the gene anterior open, or yan, was picked up in a genetic screen designed to identify novel, as yet uncharacterized genes involved in late neuroblast formation and specification. I have shown that yan is responsible for maintaining the cells of the neuroectoderm in an undifferentiated state by interfering with the Notch signalling mechanism.
Secondly, I have studied the function and interactions of segment polarity genes within a certain neuroectodermal region, namely the engrailed (en) expressing domain, with regard to the fate specification of a set of late neuroblasts, NB 6-4 and NB 7-3. I have dissected the regulatory interactions of the segment polarity genes wingless (wg), hedgehog (hh) and engrailed (en), which maintain each other's expression, showing that En is a prerequisite for neurogenesis, and that the interplay of the segmentation genes naked (nkd) and gooseberry (gsb), both of which are targets of wingless (wg) activity, leads to differential commitment to the NB 7-3 and NB 6-4 cell fates. I have shown that in the absence of either nkd or gsb one NB fate is replaced by the other. However, the temporal sequence of delamination is maintained, suggesting that formation and specification of these two NBs are under independent control.
Abstract:
“Cartographic heritage” is different from “cartographic history”. The latter refers to the study of the development, through time, of the surveying and drawing techniques related to maps, i.e. through the different types of cultural environment that formed the background for the creation of maps. The former concerns the whole body of ancient maps, together with those cultural environments, which history has brought down to us and which we perceive as cultural values to be preserved and made available to many users (the public, institutions, experts). Unfortunately, ancient maps often suffer from preservation problems of their analog support, mostly due to aging. Today, metric recovery in digital form and digital processing of historical cartography allow us to preserve this map heritage. Moreover, modern geomatic techniques offer new possibilities for using historical information that would be unachievable on analog supports. In this PhD thesis, the whole digital workflow for the recovery and elaboration of ancient cartography is reported, with special emphasis on the use of digital tools in the preservation and elaboration of cartographic heritage.
It is possible to divide the workflow into three main steps, which reflect the chapter structure of the thesis itself:
• map acquisition: conversion of the ancient map support from analog to digital, by means of high-resolution scanning or 3D surveying (digital photogrammetry or laser scanning); this process must be performed carefully, with special instruments, in order to reduce deformation as much as possible;
• map georeferencing: reproducing in the digital image the native metric content of the map, or even improving it, by selecting a large number of still-existing ground control points; in this way it is possible to understand the projection features of the historical map, and to evaluate and represent the degree of deformation induced by the old cartographic transformation (which may be unknown to us), by surveying errors or by support deformation, errors that are usually large with respect to modern standards;
• data elaboration and management in a digital environment, by means of modern software tools: vectorization, giving the map a new and more attractive graphic form (for instance by creating a 3D model), superimposing it on current base maps, comparing it with other maps, and finally inserting it into a GIS or WebGIS environment as a specific layer.
The study is supported by several case histories, each of them relevant to at least one step of the digital cartographic workflow.
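In its simplest form, the georeferencing step amounts to fitting a transformation to ground control points by least squares. As a hedged sketch (a six-parameter affine transform with invented coordinates; the historical maps studied here may call for other, more complex models):

```python
# Sketch of GCP-based georeferencing: fit an affine transform
# (X, Y) = (a*x + b*y + c, d*x + e*y + f) to control points by
# least squares via the normal equations.
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(gcps):
    """gcps: list of ((map_x, map_y), (ground_x, ground_y)) pairs."""
    rows = [(x, y, 1.0) for (x, y), _ in gcps]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    def rhs(k):
        return [sum(r[i] * t[k] for r, (_, t) in zip(rows, gcps)) for i in range(3)]
    return solve3(A, rhs(0)), solve3(A, rhs(1))  # coefficients for X and Y

# three exact GCPs of the transform X = 2x + 10, Y = 3y + 20
gcps = [((0, 0), (10, 20)), ((1, 0), (12, 20)), ((0, 1), (10, 23))]
(ax, bx, cx), (ay, by, cy) = fit_affine(gcps)
print(round(ax, 6), round(by, 6))  # → 2.0 3.0
```

With redundant GCPs, the residuals of this fit are exactly what makes it possible to map and visualise the deformation of the historical support.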
The ancient maps taken into account are the following:
• three maps of the Po river delta, made at the end of the 16th century by a famous land surveyor, Ottavio Fabri (sole author of the first map, co-author with Gerolamo Pontara on the second, co-author with Bonajuto Lorini and others on the third), who wrote a methodological textbook in which he explains a new topographical instrument, the squadra mobile (mobile square), invented and used by himself; today all three maps are preserved in the State Archive of Venice;
• the Ichnoscenografia of Bologna by Filippo de' Gnudi, made in 1702 and today preserved in the Archiginnasio Library of Bologna; it is a scenographic bird's-eye view of the city, but with an ichnographic value as well, as the author himself declares;
• the map of Bologna by the periti Gregorio Monari and Antonio Laghi, the first map of the city derived from a systematic survey, even though it was made only ten years later (1711–1712) than the map by de' Gnudi; here the scenographic view was abandoned in favor of a more correct representation by orthogonal projection; today the map is preserved in the State Archive of Bologna;
• the Gregorian Cadastre of Bologna, made in 1831 and updated until 1927, now preserved in the State Archive of Bologna; it is composed of 140 maps and 12 brogliardi (register volumes).
In particular, the three maps of the Po river delta and the Cadastre were studied with respect to their acquisition procedure. Moreover, the Po delta maps were analyzed from the georeferencing point of view, and the Cadastre with respect to a possible GIS insertion. Finally, the Ichnoscenografia was used to illustrate a possible application of digital elaboration, namely 3D modeling.
Last but not least, we must not forget that the study of an ancient map should start, whenever possible, from the consultation of the precious original analog document; analysis by means of current digital techniques then opens up new research opportunities in a rich and modern multidisciplinary context.
Abstract:
Scattering lengths describe the s-wave scattering of low-energy neutrons by nuclei. Such scattering processes proceed almost exclusively via the strong interaction. Because of the spin dependence of the strong interaction, the multiplet scattering lengths, i.e. the scattering lengths of the total-spin states J, are in general assigned different values. In experiments on macroscopic samples, the multiplet scattering states are usually not directly accessible. What can be measured, however, are the polarization-dependent and polarization-independent parts of the scattering length, which are called the incoherent and coherent scattering lengths and are linear combinations of the multiplet scattering lengths. Complex scattering lengths extend the formalism developed for pure scattering processes: the imaginary part of the scattering length then describes the absorption of projectiles in the target. All reaction cross sections can be expressed as functions of the scattering length. Improved measurements of the 3He scattering lengths are important for the development of theoretical models of few-nucleon systems. For the systems (n,D) and (n,T), precise theoretical predictions of the multiplet scattering lengths have been made in recent years. Their agreement with the experimental results corroborates that the theoretical uncertainties of these values amount to only about one part per thousand. By contrast, the theoretical treatment of the n-3He system is more demanding. Up to the beginning of the 1980s, a number of predictions for the multiplet scattering lengths were made, based on successful three-nucleon potential models but entirely incompatible with one another. In addition, two disjoint pairs of values for the multiplet scattering lengths were compatible with the experimental results. Although there were well-founded arguments in favour of one of the two pairs, the hope of an experimental verification by direct measurement of the incoherent scattering length already existed in 1980.
Determining the real part of the incoherent scattering length yields a straight line in the plane of multiplet scattering lengths that runs almost orthogonally to the band of the real part of the coherent scattering length. Presumably owing to the insufficient knowledge of the real parts, no notable further development of models for the n-3He system has taken place in recent years. This work was undertaken with the intention of providing, through polarized and unpolarized experiments on 3He, quantitative facts for assessing competing four-nucleon models and thereby giving fresh impetus to theoretical work in this field. A recently published theoretical paper on the spin-dependent scattering length of 3He [H. M. Hofmann and G. M. Hale. Phys. Rev. C, 68(021002(R)): 1–4, Apr. 2003] shows that the efforts undertaken in this work are met with keen interest. Using two very different experimental concepts, precision measurements of the real parts of the coherent and incoherent neutron scattering lengths of 3He were performed. While neutron interferometry has been established since the late 1970s as the standard method for measuring spin-independent scattering lengths, the measurement of the pseudomagnetic precession angle at a spin-echo spectrometer is a new experimental technique. From the experiments we obtain new values for the bound coherent and incoherent scattering lengths, which reduce the uncertainties by an order of magnitude in the case of the coherent scattering length and even by a factor of 30 in the case of the incoherent scattering length. Combining these results yields improved values of the singlet and triplet scattering lengths, which are important for nuclear few-body theory.
We also obtain new values for the coherent and incoherent parts of the bound scattering cross section, for the ratio of the incoherent to the coherent scattering cross section, which is important for neutron scattering from the 3He quantum liquid, and for the free total scattering cross section.
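For a spin-1/2 target such as 3He, the coherent and incoherent scattering lengths mentioned above are the standard linear combinations of the triplet (b_+) and singlet (b_-) multiplet scattering lengths; these are general textbook relations, not results specific to this work:

```latex
b_c = \tfrac{3}{4}\,b_{+} + \tfrac{1}{4}\,b_{-}, \qquad
b_i = \frac{\sqrt{3}}{4}\,\bigl(b_{+} - b_{-}\bigr),
\qquad\text{and conversely}\qquad
b_{+} = b_c + \frac{b_i}{\sqrt{3}}, \qquad
b_{-} = b_c - \sqrt{3}\,b_i .
```

Measuring both b_c and b_i therefore pins down the singlet and triplet scattering lengths separately, which is why the two experiments together constrain the four-nucleon models.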
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy translates directly into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the tools needed to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems.
Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
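GiNaC itself is a C++ library; purely as an illustration of the object-oriented expression-tree idea it embodies (fine-grained objects, operators building trees, recursive differentiation), here is a toy Python sketch. It mimics the design pattern only and is not GiNaC's actual API:

```python
# Toy object-oriented symbolic expression tree: every node is a small
# object, arithmetic operators build trees, and differentiation is a
# recursive method on the tree.
class Expr:
    def __add__(self, other): return Add(self, wrap(other))
    def __mul__(self, other): return Mul(self, wrap(other))

class Num(Expr):
    def __init__(self, v): self.v = v
    def diff(self, s): return Num(0)
    def eval(self, env): return self.v

class Symbol(Expr):
    def __init__(self, name): self.name = name
    def diff(self, s): return Num(1 if s is self else 0)
    def eval(self, env): return env[self.name]

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def diff(self, s): return Add(self.a.diff(s), self.b.diff(s))
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def diff(self, s):  # product rule
        return Add(Mul(self.a.diff(s), self.b), Mul(self.a, self.b.diff(s)))
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)

def wrap(x): return x if isinstance(x, Expr) else Num(x)

x = Symbol("x")
e = x * x + x * 3        # x^2 + 3x
d = e.diff(x)            # 2x + 3, as an unsimplified tree
print(d.eval({"x": 5}))  # → 13
```

A production system like GiNaC adds automatic simplification, reference-counted immutable nodes and many more node classes, but the granularity shown here, one small object per subexpression, is the design point the thesis argues is feasible.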
Abstract:
The aim of this thesis was to investigate novel techniques for creating complex hierarchical chemical patterns on silica surfaces with micro- to nanometer-sized features. These surfaces were used for the site-selective assembly of colloidal particles and oligonucleotides. To this end, functionalised alkoxysilanes (commercial and synthesised ones) were deposited onto planar silica surfaces. The functional groups can form reversible attractive interactions with the complementary surface layers of the opposing objects to be assembled. These interactions determine the final location and density of the objects on the surface. Photolithographically patterned silica surfaces were modified with commercial silanes in order to create hydrophilic and hydrophobic regions on the surface. The assembly of hydrophobic silica particles onto these surfaces was investigated and, finally, pH and charge effects on the colloidal assembly were analysed. In the second part of this thesis the concept of novel, "smart" alkoxysilanes is introduced, which allows parallel surface activation and patterning in a one-step irradiation process. These novel species bear a photoreactive head group in a protected form. Surface layers made from these molecules can be irradiated through a mask to remove the protecting group from selected regions and thus generate lateral chemical patterns of active and inert regions on the substrate. The synthesis of an azide-reactive alkoxysilane was successfully accomplished. Silanisation conditions were carefully optimised so as to guarantee a smooth surface layer without the formation of micellar clusters. NMR and DLS experiments corroborated the absence of clusters when neither water nor NaOH was used as a catalyst during hydrolysis, but only the organic solvent itself. Upon irradiation of the azide layer, the resulting nitrene may undergo a variety of reactions depending on the irradiation conditions.
Contact angle measurements demonstrated that the irradiated surfaces were more hydrophilic than the non-irradiated azide layer, and the formation of an amine upon irradiation was therefore postulated. Successful photoactivation could also be demonstrated using condensation patterns, which showed a change in wettability on the wafer surface upon irradiation. Colloidal deposition with COOH-functionalised particles further underlined the formation of more hydrophilic species. Orthogonal photoreactive silanes are described in the third part of this thesis. Their advantage is the possibility of having chemical functionalities coexisting, homogeneously distributed within the same layer, through the use of appropriate protecting groups. For this purpose, a 3',5'-dimethoxybenzoin-protected carboxylic acid silane was successfully synthesised, and the kinetics of its hydrolysis and condensation in solution were analysed in order to optimise the silanisation conditions. This compound was used together with a nitroveratryl-protected amino silane to obtain bicomponent surface layers. The optimum conditions for an orthogonal deprotection of surfaces modified with these two groups were determined. A two-step deprotection process through a mask generated a complex pattern on the substrate by activating two different chemistries at different sites. This was demonstrated by colloidal adsorption and fluorescence labelling of the resulting substrates. Moreover, two different single-stranded oligodeoxynucleotides were immobilised onto the two activated areas and then captured by hybridisation with their respective complementary, fluorescently labelled strands. Selective hybridisation could be shown, although issues of non-selective adsorption still need to be resolved to make this technique attractive for possible DNA microarrays.
Abstract:
Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic in that they allow components to be updated, added or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since such models intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated.
Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because of the different syntax adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments: for example, experiments with tuple-based coordination within Semantic Web middleware (analogous approaches are surveyed in the references). However, these appear to be designed to tackle coordination for specific application contexts, such as the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space in which the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within the coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within the coordination media, semantic tuple centres promote semantic uncoupling among coordinated components.
The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) an overview of the application scenarios in which semantic tuple centres appear suitable as coordination media.
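As a rough illustration of the abstractions involved (the Linda primitives plus a programmable reaction hook), here is a toy sketch; the names and structure are illustrative only and do not reproduce any real tuple-centre API:

```python
# Minimal sketch of a Linda-style tuple space with a programmable
# reaction hook, loosely inspired by the tuple-centre idea.
class TupleCentre:
    def __init__(self):
        self.tuples = []
        self.reactions = []   # callbacks fired on each insertion

    def out(self, tup):       # Linda "out": insert a tuple
        self.tuples.append(tup)
        for react in self.reactions:
            react(self, tup)

    def rd(self, template):   # Linda "rd": non-destructive read by matching
        for t in self.tuples:
            if self.matches(template, t):
                return t
        return None

    def in_(self, template):  # Linda "in": destructive read
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

    @staticmethod
    def matches(template, tup):
        # a None field in the template acts as a wildcard
        return len(template) == len(tup) and all(
            f is None or f == v for f, v in zip(template, tup))

tc = TupleCentre()
# a "coordination law": whenever a task tuple appears, emit a log tuple
tc.reactions.append(lambda c, t: c.tuples.append(("log", t[0]))
                    if t[0] == "task" else None)
tc.out(("task", 42))
print(tc.rd(("log", None)))   # → ('log', 'task')
```

The semantic enrichment discussed in the thesis would replace the purely syntactic `matches` above with matching against an ontological description of the domain of discourse, leaving the primitives untouched.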
Abstract:
In this work, the QCD radiative corrections at first order in the strong coupling constant are calculated for various polarization observables in semileptonic decays of a bottom quark into a charm quark and a lepton pair. In the first part, the decay of an unpolarized b quark into a polarized c quark plus a charged lepton and an antineutrino is analysed in the rest frame of the b quark. The radiative corrections to the unpolarized and polarized contributions to the decay rate differential in the c-quark energy are calculated, with the charged lepton considered light so that its mass can be neglected. The inclusive differential rate is represented in analytic form by two structure functions. The structure functions and the polarization of the c quark are then evaluated numerically. After introducing helicity projectors, the second part deals with the cascade decay of a polarized b quark into an unpolarized c quark and a virtual W boson, which subsequently decays into a pair of light leptons. The inclusive radiative corrections to three unpolarized and five polarized helicity structure functions are calculated in analytic form; these describe the angular distribution of the decay rate differential in the four-momentum squared of the W boson. The structure functions contain the information both on the polar angular distribution between the spin vector of the b quark and the momentum vector of the W boson, and on the spatial angular distribution between the momenta of the W boson and the lepton pair. The momentum and spin vector of the b quark and the momentum of the W boson are analysed in the b rest frame, while the momenta of the lepton pair are evaluated in the W rest frame.
In addition to these structure functions, the unpolarized and polarized scalar structure functions are given, which play a role in applications to hadronic decays. A numerical evaluation of all calculated structure functions follows. In the third part, the non-perturbative HQET corrections to inclusive semileptonic decays of heavy hadrons containing a b quark are discussed. They describe hadronic corrections caused by the binding of the b quark inside hadrons. A total of five unpolarized and nine polarized helicity structure functions are given in analytic form, which also take into account a finite mass and the spin of the charged lepton. The structure functions are presented both in differential form, as functions of the squared four-momentum of the W boson, and in integrated form. Finally, the results obtained before are applied to the semi-inclusive hadronic decays of a polarized Lambda_b baryon or a B meson into a D_s or a D_s^* meson, taking the D_s^* polarization into account. For the corresponding angular distributions, the inclusive QCD and the non-perturbative HQET corrections to the helicity structure functions are given in analytic form and then evaluated numerically.
Abstract:
In this dissertation, a new method was developed, based on the parallel/orthogonal space method, for the calculation of general massive two-loop three-point tensor integrals with planar and crossed reduced planar topologies. A tensor reduction for integrals that may carry a general tensor structure in Minkowski space was worked out and implemented. The development and implementation of an algorithm for the semi-analytical calculation of the most difficult integrals remaining after the tensor reduction was completed. (For the other basis integrals, well-known methods can be used.) The implementation is complete with respect to the UV-finite parts of the master integrals that still possess the aforementioned topologies after tensor reduction. The numerical integrations have proven to be stable. For the remaining parts of the project, well-known methods can be used; to a large extent, only links to existing programs still have to be written. For the few remaining special topologies still to be covered, (well-known) methods are to be implemented. The computer programs created in the course of this project will also flow into the xloops project for more general processes; they were therefore developed and implemented for general processes as far as possible. The algorithm mentioned above was developed in particular for the evaluation of the fermionic NNLO corrections to the leptonic weak mixing angle and to similar processes. Within this dissertation, a large part of the work necessary for the fermionic NNLO corrections to the effective coupling constants of the Z decay (and thus for the weak mixing angle) was carried out.
Abstract:
The common thread of this thesis is the will to investigate the properties and behavior of assemblies. Groups of objects display peculiar properties, which can be very far from the simple sum of the properties of their components. This is all the more true the smaller the inter-object distance, i.e. the higher their density, and the smaller the container. “Confinement” is in fact a key concept in many of the topics explored and reported here. It can be conceived as a spatial limitation that nevertheless gives rise to unexpected processes and phenomena based on inter-object communication. Such phenomena eventually result in “non-linear properties”, responsible for the low predictability of large assemblies. Chapter 1 provides two insights into surface chemistry, namely (i) a supramolecular assembly based on orthogonal forces, and (ii) selective and sensitive fluorescent sensing in a thin polymeric film. In chapters 2 to 4 the confinement of molecules plays a major role. Most of the work focuses on FRET within core-shell nanoparticles, investigated both through a simulation model and through experiments. Exciting results of great applicative interest are obtained, such as a method of tuning the emission wavelength at constant excitation, and a way of overcoming self-quenching processes by setting up a competitive deactivation channel. We envisage applications of these materials as labels for multiplexing analysis, and in all fields of fluorescence imaging where brightness coupled with biocompatibility and water solubility is required. Adducts of nanoparticles and molecular photoswitches are investigated in the context of superresolution techniques for fluorescence microscopy. In chapter 5 a method is proposed to prepare a library of functionalized Pluronic F127, which gives access to a twofold “smart” nanomaterial, namely both (i) luminescent and (ii) surface-functionalized SCSSNPs. In chapter 6 the focus shifts to confinement effects on a larger size scale.
Moving from nanometers to micrometers, we investigate the interplay between microparticles flowing in microchannels, where a constriction affects the structure and dynamics of the colloidal paste at very long range.
Abstract:
Immunotherapy represents a promising alternative to established treatments for cancer. By activating the immune system, a selective killing of tumour cells is hoped for. Such an activation can be achieved by vaccination with glycopeptides that represent partial structures of tumour-associated surface glycoproteins. To achieve an effective immune response, however, conjugation of these glycopeptides to immunogenic carriers is necessary. To prepare such conjugates, several amino-acid building blocks glycosylated with tumour-associated carbohydrates were first synthesised in this work. These building blocks were then used in the solid-phase synthesis of glycopeptides. By means of a novel chemoselective coupling procedure, these tumour-associated glycopeptides could be attached to an immunogenic carrier protein. Furthermore, a dendrimeric glycopeptide (MAP) was produced by solid-phase peptide synthesis starting from a tetrafunctional lysine building block. Fully synthetic vaccines were prepared in the form of conjugates consisting of a universal T-cell epitope and a tumour-associated glycopeptide. These syntheses were carried out starting from a solid-phase-bound, orthogonally protected lysine. Finally, the synthesis of conjugates consisting of a tumour-associated glycopeptide and the mitogen Pam3Cys was investigated.
Abstract:
Aminoglycoside antibiotics such as neomycin B, and cyclopeptide antibiotics such as viomycin, are known to bind selectively to RNA. These interactions rest both on electrostatic interactions and on hydrogen bonds. Furthermore, the defined spatial arrangement of donor and acceptor residues in the structures of the RNA ligands is important for affinity. One way to mimic natural RNA ligands is the use of polyfunctional templates such as the 2,6-diamino-2,6-dideoxy-D-glucose scaffold. With the aid of such scaffolds, various positively charged residues, hydrogen-bond donors and acceptors, or intercalators can be presented in a spatially defined manner. Independent functionalisation of each position requires a set of orthogonally stable protecting groups, with one hydroxyl group replaced by a linker that allows attachment of the scaffold to a polymeric support. The newly developed 2,6-diamino-2,6-dideoxy-D-glucose scaffold is the first monosaccharide template blocked in all five positions with orthogonally stable protecting groups. All positions can be selectively deblocked in any order and subsequently derivatised. The scaffold can be reacted with amino acids, guanidines or intercalators in order to mimic naturally occurring RNA-binding aminoglycosides or peptides. Building on this monosaccharide template, a library of more than 100 potential RNA ligands was synthesised, which were tested in cell assays within the Collaborative Research Centre 579 (RNA-ligand interactions) for their ability to inhibit the Tat/TAR interaction; so far, 9 compounds with an inhibitory effect in the micromolar range have been found.
Resumo:
Parallel mechanisms show desirable characteristics such as a large payload-to-weight ratio, considerable stiffness, low inertia and high dynamic performance. In particular, parallel manipulators with fewer than six degrees of freedom have recently attracted researchers' attention, as they may prove valuable in applications where higher mobility is unnecessary. This dissertation focuses on translational parallel manipulators (TPMs), that is, parallel manipulators whose output link (platform) performs a pure translational motion with respect to the frame. The first part deals with the general problem of the topological synthesis and classification of TPMs: it identifies the architectures that TPM legs must possess for the platform to translate freely in space without altering its orientation. The second part studies both constraint and direct singularities of TPMs. In particular, special families of fully-isotropic mechanisms are identified. Such manipulators exhibit outstanding properties, as they are free from singularities and have a constant orthogonal Jacobian matrix throughout their workspace. As a consequence, both the direct and the inverse position problems are linear, and the kinematic analysis proves straightforward.
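The consequence stated above can be sketched numerically. In this hypothetical illustration (the matrix and function names are invented for the example, not taken from the dissertation), a constant orthogonal Jacobian makes both position problems linear, with the inverse map given simply by the transpose, and the condition number equal to 1 everywhere (isotropy):

```python
import numpy as np

def rotation_z(theta):
    """An example of a constant orthogonal matrix: a rotation about z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Assume (hypothetically) a fully-isotropic TPM whose Jacobian is this
# constant orthogonal matrix over the whole workspace.
J = rotation_z(np.pi / 6)

def direct_position(q):
    """Platform translation from joint coordinates: linear in q."""
    return J @ q

def inverse_position(x):
    """Joint coordinates from platform translation: J^-1 = J^T."""
    return J.T @ x

q = np.array([0.1, 0.2, 0.3])
x = direct_position(q)
assert np.allclose(inverse_position(x), q)   # both maps are linear
assert np.isclose(np.linalg.cond(J), 1.0)    # isotropic: never singular
```

Because the condition number of an orthogonal matrix is 1, velocity and force transmission are uniform in all directions, which is the property that makes such manipulators singularity-free.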
Resumo:
The southern Apennines of Italy have experienced several destructive earthquakes, both in historic and in recent times. The present-day seismicity, characterized by small-to-moderate-magnitude earthquakes, was used as a probe to gain deeper knowledge of the fault structures where the largest earthquakes occurred in the past. To infer a three-dimensional seismic image, both the problem of data quality and the selection of a reliable and robust tomographic inversion strategy were addressed. Data quality was ensured by developing optimized procedures for the measurement of P- and S-wave arrival times, based on polarization filtering and on a refined re-picking technique that uses waveform cross-correlation. A linearized, damped, iterative tomographic inversion combined with a multiscale inversion strategy was adopted. The retrieved P-wave velocity model indicates a strong velocity variation along the direction orthogonal to the Apenninic chain. This variation defines two domains, characterized by relatively low and high velocity values. By comparing the inferred P-wave velocity model with a portion of a structural section available in the literature, the high-velocity body was correlated with the Apulian carbonate platform, whereas the low-velocity bodies were associated with the basinal deposits. The deduced Vp/Vs ratio is lower than 1.8 in the shallower part of the model, while at depths between 5 km and 12 km it increases up to 2.1 in correspondence with the area of higher seismicity. This confirms that areas characterized by higher Vp/Vs values are more prone to generating earthquakes, in response to the presence of fluids and higher pore pressures.
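One standard way to estimate a Vp/Vs ratio from the P- and S-wave arrival times mentioned above is a Wadati diagram. The sketch below uses synthetic, invented station times (not data from this study) and assumes a common origin time t0, so that ts - tp = (Vp/Vs - 1)(tp - t0) and the ratio follows from a linear fit:

```python
import numpy as np

# Hypothetical sketch with made-up numbers: recover Vp/Vs from arrival
# times alone, without knowing the velocity model.
t0 = 10.0                   # event origin time [s] (assumed known here)
vp_vs_true = 1.9            # ratio used to synthesize the arrivals
tp = t0 + np.array([1.2, 2.5, 3.1, 4.8, 6.0])   # P arrival times [s]
ts = t0 + vp_vs_true * (tp - t0)                 # S arrival times [s]

# Slope of the S-P times against the P arrival times gives Vp/Vs - 1.
slope, _ = np.polyfit(tp, ts - tp, 1)
vp_vs_est = 1.0 + slope
print(round(vp_vs_est, 3))   # recovers 1.9 for noise-free data
```

With real picks the scatter around the fitted line also gives a quick consistency check on the cross-correlation-based re-picking.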
Resumo:
Multi-Processor SoC (MPSoC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the question of how best to provide on-chip communication resources is clearly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failures and variations and to operate with a performance, power or latency overhead. This dissertation focuses on techniques at different levels of abstraction to address reliability and variability issues in on-chip interconnection networks. By presenting the test results of a GALS NoC testchip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the MPSoC interconnection infrastructure. As a physical design technique, we propose the bundle-routing framework as an effective way to route the global links of Networks-on-Chip. At the architecture level, two cases are addressed: (i) intra-cluster communication, for which we propose a low-latency interconnect robust to variability; (ii) inter-cluster communication, for which online functional testing together with a reliable NoC configuration is proposed. We also propose dual-Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative to the design-time techniques, since it applies the compensation post-silicon.