Abstract:
Context-aware computing is currently considered the most promising approach to overcoming information overload and speeding up access to relevant information and services. Context-awareness may be derived from many sources, including user profile and preferences, network information and sensor analysis; usually it relies on the ability of computing devices to interact with the physical world, i.e. with the natural and artificial objects hosted within the "environment". Ideally, context-aware applications should not be intrusive and should be able to react to the user's context with minimum user effort. Context is an application-dependent multidimensional space, and location has been an important part of it since the very beginning. Location can be used to guide applications in providing the information or functions that are most appropriate for a specific position; hence location systems play a crucial role. There are several technologies and systems for computing location to varying degrees of accuracy, each tailored to a specific space model, i.e. indoors or outdoors, structured or unstructured spaces. The research challenge faced by this thesis is pedestrian positioning in heterogeneous environments, with a particular focus on pedestrian identification, localization, orientation and activity recognition. This research was mainly carried out within the "mobile and ambient systems" workgroup of EPOCH, an FP6 Network of Excellence on the application of ICT to Cultural Heritage. Cultural Heritage sites were therefore the main target of the context-aware services discussed. Such sites are considered significant test-beds for context-aware computing for many reasons: for example, building a smart environment in museums or in protected sites is a challenging task, because localization and tracking are usually based on technologies that are difficult to hide or to harmonize with the environment.
It is therefore expected that the experience gained in this research may also be useful in domains other than Cultural Heritage. This work presents three different approaches to pedestrian identification, positioning and tracking: pedestrian navigation by means of a wearable inertial sensing platform assisted by a vision-based tracking system for initial settings and real-time calibration; pedestrian navigation by means of a wearable inertial sensing platform augmented with GPS measurements; and pedestrian identification and tracking combining a vision-based tracking system with WiFi localization. The proposed localization systems have mainly been used to enhance Cultural Heritage applications by providing information and services that depend on the user's actual context, in particular on the user's location.
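The common thread of the three approaches is correcting drifting dead-reckoned estimates with occasional absolute fixes (vision, GPS or WiFi). A minimal one-dimensional sketch of that idea, with all step lengths, drift values and the filter gain invented for illustration:

```python
# Minimal 1D sketch of fusing dead-reckoned inertial steps with
# occasional absolute fixes (e.g. GPS or a vision-based tracker).
# All numbers are illustrative assumptions, not values from the thesis.

def dead_reckon(position, step_length, heading_cos=1.0):
    """Advance the inertial estimate by one detected step."""
    return position + step_length * heading_cos

def fuse_fix(predicted, fix, gain=0.8):
    """Complementary-filter correction toward an absolute fix.

    A gain close to 1 trusts the fix; close to 0 trusts dead reckoning.
    """
    return predicted + gain * (fix - predicted)

# Walk 10 steps of 0.70 m; the inertial estimate drifts by +2 cm/step.
true_pos, est = 0.0, 0.0
for step in range(10):
    true_pos += 0.70
    est = dead_reckon(est, 0.72)           # biased step length -> drift
    if step % 5 == 4:                      # an absolute fix every 5th step
        est = fuse_fix(est, true_pos)

drift_no_fix = 10 * 0.02                   # 0.2 m without any correction
assert abs(est - true_pos) < drift_no_fix  # fixes bound the drift
```

The sketch shows only the structure of the fusion; the thesis's systems operate in 2D/3D with heading estimation and real sensor noise models.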
Abstract:
This work introduces a new technique for tetrahedral mesh optimization. The procedure relocates boundary and inner nodes without changing the mesh topology. In order to maintain the boundary approximation while boundary nodes are moved, a local refinement of tetrahedra with faces on the solid boundary is necessary in some cases. New nodes are projected onto the boundary by using a surface parameterization. In this work, the proposed method is applied to tetrahedral meshes of genus-zero solids generated by the meccano method. In this case, the solid boundary is automatically decomposed into six surface patches, which are parameterized onto the six faces of a cube with the Floater parameterization...
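The core idea of relocating nodes while leaving connectivity untouched can be illustrated with a toy 2D analogue: Laplacian smoothing moves each inner node to the centroid of its neighbours while boundary nodes stay fixed. This is only a sketch of the relocation principle, not the thesis's tetrahedral algorithm or its surface parameterization:

```python
# Toy sketch of relocating inner nodes without changing mesh topology:
# each inner node moves to the average of its neighbours (Laplacian
# smoothing), while boundary nodes stay fixed. A 2D analogue of the
# idea only; the thesis works on tetrahedral meshes.

def smooth(nodes, edges, boundary, iterations=50):
    nodes = dict(nodes)
    neigh = {v: set() for v in nodes}
    for a, b in edges:                     # connectivity never changes
        neigh[a].add(b)
        neigh[b].add(a)
    for _ in range(iterations):
        for v in nodes:
            if v in boundary:
                continue                   # boundary approximation kept
            xs = [nodes[u][0] for u in neigh[v]]
            ys = [nodes[u][1] for u in neigh[v]]
            nodes[v] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return nodes

# Unit square with one badly placed interior node.
nodes = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (0.9, 0.05)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
out = smooth(nodes, edges, boundary={0, 1, 2, 3})
assert out[4] == (0.5, 0.5)                # interior node re-centred
```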
Abstract:
The aim of this PhD thesis was to study different liquid crystal (LC) systems at a microscopic level in order to determine their physical properties, resorting to two distinct methodologies: computer simulations on the one hand, and spectroscopic techniques, in particular electron spin resonance (ESR) spectroscopy, on the other. With the computer simulation approach we sought to demonstrate the effectiveness of this tool for calculating anisotropic static properties of an LC material, as well as for predicting its behaviour and features. This required the development and adoption of suitable molecular models based on convenient intermolecular potentials reflecting the essential molecular features of the investigated system. Concerning the simulation approach, we set up models for discotic liquid crystal dimers and studied, by means of Monte Carlo simulations, their phase behaviour and self-assembly properties with respect to the simple monomer case. Each discotic dimer is described by two oblate Gay-Berne ellipsoids connected by a flexible spacer, modelled as a harmonic "spring" of three different lengths. In particular, we investigated the effects of dimerization on the transition temperatures, on the characteristics of the molecular aggregates displayed, and on the relative orientational order. Moving to the experimental results: among the many techniques typically employed to evaluate the distinctive features of LC systems, ESR has proved to be a powerful tool for microscopic-scale investigation of the properties, structure, order and dynamics of these materials.
We have taken advantage of the high sensitivity of the ESR spin-probe technique to investigate increasingly complex LC systems, ranging from devices consisting of a polymer matrix in which LC molecules are confined in the shape of nanodroplets, to biaxial liquid crystalline elastomers, and dimers whose monomeric units or lateral groups are constituted by rod-like mesogens (11BCB). Reflection-mode holographic polymer-dispersed liquid crystals (H-PDLCs) are devices in which LCs are confined in nanosized (50-300 nm) droplets, arranged in layers that alternate with polymer layers to form a diffraction grating. We determined the configuration of the LC local director and derived a model of the nanodroplet organization inside the layers. Resorting also to additional information on the nanodroplet size and shape distribution provided by SEM images of the H-PDLC cross-section, the observed director configuration has been modeled as a two-dimensional distribution of elongated nanodroplets whose long axes are, on average, parallel to the layers and whose internal director configuration is a uniaxial quasi-monodomain aligned along the nanodroplet long axis. The results suggest that the molecular organization is dictated mainly by the confinement, explaining, at least in part, the need for significantly higher switching voltages and the faster turn-off times observed in H-PDLCs compared with standard PDLC devices. Liquid crystal elastomers consist of cross-linked polymers in which mesogens constitute the monomers of the main chain or the laterally attached side groups. They bring together three important aspects: orientational order in amorphous soft materials, responsive molecular shape, and quenched topological constraints.
In biaxial nematic liquid crystalline elastomers (BLCEs), two orthogonal directions, rather than the single one of normal uniaxial nematics, can be controlled, greatly enhancing their potential value for applications such as novel actuators. Two versions of side-chain BLCEs were characterized: side-on and end-on. Many tests were carried out on both types of LCE, the main features detected being the lack of significant dynamical behaviour, a strong permanent alignment along the principal director, and confirmation of the transition temperatures already determined by DSC measurements. The end-on sample shows a less hindered rotation of the side-group mesogenic units and a greater freedom of alignment with the magnetic field, as already shown by previous NMR studies. Biaxial nematic ESR static spectra were also computed from Molecular Dynamics-generated biaxial configurations and compared with the experimentally determined ones, as a means of establishing a possible relation between biaxiality and the spectral features. This provides a concrete example of the advantages of combining the computer simulation and spectroscopic approaches. Finally, the dimer α,ω-bis(4'-cyanobiphenyl-4-yl)undecane (11BCB), synthesized in the "quest" for the biaxial nematic phase, was analysed. Its importance lies in the significance of dimers as building blocks for new materials in innovative technological applications, such as faster-switching displays that exploit the easier alignment of the secondary director in biaxial phases. A preliminary series of tests revealed that the population of mesogenic molecules is divided into two groups: one of elongated, straightened conformers sharing a common director, and one of bent molecules that display no order, being equally distributed in the three dimensions.
Employing this model, the calculated values show a consistent trend, confirming the transition temperatures indicated by the DSC measurements, with rotational diffusion tensor values that closely follow those of the constituent monomer 5CB.
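The Monte Carlo machinery behind the phase-behaviour studies can be sketched in its simplest form: a Metropolis walk sampling one degree of freedom of a dimer joined by a harmonic "spring". The potential, temperature and step size here are illustrative stand-ins, not the Gay-Berne model actually simulated:

```python
import math
import random

# Minimal Metropolis Monte Carlo sketch in the spirit of the simulations
# described: one spacer length r of a dimer bound by a harmonic "spring",
# sampled at reduced temperature T. Illustrative parameters only; the
# thesis simulates full Gay-Berne discotic dimers.

def spring_energy(r, k=10.0, r0=1.0):
    """Harmonic spacer energy with stiffness k and rest length r0."""
    return 0.5 * k * (r - r0) ** 2

def metropolis(steps=20000, T=0.5, seed=1):
    rng = random.Random(seed)
    r, samples = 1.0, []
    for _ in range(steps):
        trial = r + rng.uniform(-0.1, 0.1)        # propose a small move
        dE = spring_energy(trial) - spring_energy(r)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            r = trial                             # Metropolis acceptance
        samples.append(r)
    return samples

samples = metropolis()
mean_r = sum(samples) / len(samples)
assert abs(mean_r - 1.0) < 0.05            # chain samples around r0
```

The same accept/reject rule, applied to positions and orientations of many ellipsoids, is what produces the transition temperatures and aggregation behaviour discussed above.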
Abstract:
The AMANDA-II detector is primarily designed for the direction-resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective rise in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae, as well as experimental data from supernova SN1987A, were studied. In addition, the sensitivities of the optical modules were redetermined; for this purpose, the energy losses of charged particles in South Polar ice had to be investigated and a simulation of photon propagation developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. Within the scope of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It includes various improvements over the version previously used by the AMANDA collaboration. Owing to an optimization for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules showing unsuitable behavior occurs in real time. However, the behavior of the modules must be assessed for this purpose from buffered data, so the analysis of the data from the qualified modules cannot proceed without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals for a period of several minutes for later evaluation.
Since the noise-rate data of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis can be freely chosen in units of 500 ms. Within the scope of this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s, and one with a time base of 10 s. This maximizes the sensitivity to signals with a characteristic exponential decay time of 3 s while preserving good sensitivity over a wide range of exponential decay times. These analyses were investigated in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and were correspondingly well understood. Based on the measured data, the expected signals from supernovae were simulated. From a comparison between this simulation, the measured data from 2000 to 2003, and the simulation of the expected statistical background, it can be concluded at a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. The number of events expected from the statistical background at this level is less than one in a million; nevertheless, one such event was measured. With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the last result published by the AMANDA collaboration, an upper limit of only 2.6 supernovae per year is obtained.
Within the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate increase before a message about the detection of a supernova candidate is sent out. This raises the monitored fraction of stars in the Galaxy to 81%, but the false-alarm rate also rises to about 2 events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and are soon to contribute to SNEWS, the worldwide network for the early detection of supernovae.
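The detection principle above reduces to a counting-statistics test: a collective excess over the expected noise counts, expressed in units of the background's standard deviation, is compared against a threshold (7.4 sigma offline, 5.5 sigma online). A sketch with invented module counts and rates, not actual AMANDA numbers:

```python
import math

# Sketch of the supernova-trigger principle: flag a collective
# noise-rate increase whose Gaussian significance exceeds a threshold.
# Module count, noise rate and excess below are illustrative only.

def significance(observed, expected):
    """Gaussian significance of a counting excess over a known mean
    (valid for large expected counts, where Poisson ~ Gaussian)."""
    return (observed - expected) / math.sqrt(expected)

# Expected background counts in one 500 ms analysis bin:
background = 500 * 1.0e3 * 0.5             # 500 modules * 1 kHz * 0.5 s
burst = background + 12000                 # hypothetical collective excess

assert significance(background, background) == 0.0
assert significance(burst, background) > 7.4    # would be flagged offline
```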
Abstract:
The research interest of this study is to investigate surface immobilization strategies for proteins and other biomolecules by the surface plasmon field-enhanced fluorescence spectroscopy (SPFS) technique. The recrystallization features of S-layer proteins, and the possibility of combining S-layer lattice arrays with other functional molecules, make this protein a prime candidate for supramolecular architectures. The recrystallization behavior on gold or on the secondary cell wall polymer (SCWP) was recorded by SPR, and the optical thicknesses and surface densities of different protein layers were calculated. In DNA hybridization tests performed to discriminate different mismatches, recombinant S-layer-streptavidin fusion protein matrices showed their potential for new microarrays. Moreover, SCWP-coated gold chips covered with a controlled, oriented assembly of S-layer fusion proteins represent an even more sensitive fluorescence testing platform. Additionally, S-layer fusion proteins as a matrix for LHCII immobilization proved clearly superior to routine approaches, demonstrating their possible use as a new strategy for biomolecular coupling. In the study of the SPFS hCG immunoassay, the biophysical and immunological characteristics of this glycoprotein hormone were presented first. After investigating the effect of biotin-thiol dilution on the coupling efficiency, an interfacial binding model comprising an appropriate binary SAM structure and the versatile streptavidin-biotin interaction was chosen as the basic supramolecular architecture for the fabrication of an SPFS-based immunoassay. Next, the affinities between different antibodies and hCG were measured via an equilibrium binding analysis, the first titration of such a high-affinity interaction by SPFS; the results agree very well with constants reported in the literature.
Finally, a sandwich assay and a competitive assay were selected as templates for SPFS-based hCG detection, and an excellent LOD of 0.15 mIU/ml was attained via the "one-step" sandwich method. Such high sensitivity not only fulfills clinical requirements but is also better than that of most other biosensors. A full understanding of how LHCII complexes transfer sunlight energy directionally and efficiently to the reaction center is potentially useful for constructing biomimetic devices such as solar cells. After an introduction to the structural and spectroscopic features of LHCII, different surface immobilization strategies for LHCII were summarized. Among them, the strategy based on the His-tag and the immobilized metal (ion) affinity chromatography (IMAC) technique was of great interest and resulted in different kinds of home-fabricated His-tag chelating chips. Their substantial protein coupling capacity, maintenance of high biological activity, and remarkably repeatable binding on the same chip after regeneration were demonstrated. Moreover, different parameters related to the stability of surface-coupled reconstituted complexes, including sucrose, detergent, lipid, oligomerization, temperature and circulation rate, were evaluated in order to standardize the most effective immobilization conditions. In addition, partial lipid bilayers obtained from the fusion of LHCII-containing proteoliposomes on the surface were observed by the QCM technique. Finally, the inter-complex energy transfer between neighboring LHCIIs on a gold-protected silver surface, excited with a blue laser (λ = 473 nm), was recorded for the first time, and the factors influencing the energy transfer efficiency were evaluated.
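The equilibrium binding analysis mentioned above rests on a Langmuir-type isotherm: surface coverage saturates with analyte concentration, and the dissociation constant K_D is the concentration at half-maximal signal. A minimal sketch with an assumed affinity, not the hCG/antibody constants from the thesis:

```python
# Sketch of an equilibrium (titration) analysis of the kind used to
# extract an affinity constant by SPFS: the surface signal follows a
# Langmuir isotherm, and K_D is the concentration giving half-maximal
# coverage. The 2 nM affinity below is an invented example value.

def langmuir(c, kd, smax=1.0):
    """Equilibrium surface coverage at analyte concentration c."""
    return smax * c / (kd + c)

KD = 2.0e-9                                       # assumed 2 nM affinity
concs = [KD * f for f in (0.1, 0.5, 1, 2, 10)]    # titration series
signals = [langmuir(c, KD) for c in concs]

assert abs(langmuir(KD, KD) - 0.5) < 1e-12        # half-saturation at K_D
assert signals == sorted(signals)                 # monotonic binding curve
```

Fitting measured equilibrium signals against this curve yields K_D; the thesis reports that the values so obtained agree well with literature constants.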
Abstract:
Gels are materials that are easier to recognize than to define. For all practical purposes, a material is termed a gel if the whole volume of liquid is completely immobilized, as usually tested by the 'tube inversion' method. Recently, supramolecular gels obtained from low molecular weight gelators (LMWGs) have attracted considerable attention in materials science, since they represent a new class of smart materials sensitive to external stimuli such as temperature, ultrasound, light and chemical species. Accordingly, a large variety of potential applications of these soft materials have been reported in recent years: in optoelectronics, as electronic devices, light-harvesting systems and sensors, in biomaterials, and in drug delivery. Spontaneous self-assembly of low molecular weight molecules is a powerful tool for building complex supramolecular nanoscale structures. Weak, non-covalent interactions such as hydrogen bonding, π–π stacking, coordination, electrostatic and van der Waals interactions are usually considered the most important features promoting sol-gel equilibria. However, gelation is also governed by further "external" factors, among which the temperature and the nature of the solvents employed are of crucial importance. For example, some gelators prefer aromatic or halogenated solvents, and in some cases both the gelation temperature and the type of solvent affect the morphology of the final aggregates. Functionalized cyclopentadienones are fascinating systems, widely employed as building blocks for the synthesis of polyphenylene derivatives. In addition, structures containing π-extended conjugated chromophores with enhanced absorption properties are of current interest in materials science, since they can be used as "organic metals", as semiconductors, and as emissive or absorbing layers for OLEDs or photovoltaics.
The possibility of decorating the framework of such structures prompted us to study the synthesis of new hydroxy propargyl arylcyclopentadienone derivatives. Given the ability of such systems to engage in π–π stacking interactions, introducing polar substituents able to generate hydrogen bonding onto a polyaromatic structure could open the possibility of forming gels, although gelation had never previously been observed for these extensively studied systems. We have synthesized a new class of 3,4-bis(4-(3-hydroxypropynyl)phenyl)-2,5-diphenylcyclopentadienone derivatives, one of which (1a) proved to be, for the first time, a powerful organogelator. The experimental results indicate that the hydroxydimethylalkynyl substituents are fundamental in guaranteeing the gelation properties of the tetraarylcyclopentadienone unit. Combining the results of FT-IR, 1H NMR, UV-vis and fluorescence emission spectra, we believe that H-bonding and π–π interactions are the driving forces of gel formation. The importance of soft materials lies in their ability to respond to external stimuli, which can also be of a chemical nature; in particular, much attention has recently been devoted to the anion-responsive properties of gels. Therefore, the behaviour of organogels of 1a in toluene, ACN and MeNO2 towards the addition of 1 equivalent of various tetrabutylammonium salts was investigated. The rheological properties of the gels in toluene, ACN and MeNO2, with and without the addition of Bu4N+X- salts, were measured, and a qualitative analysis of cation recognition was performed. Finally, the nature of the cyclic core of the gelator was changed in order to verify whether the carbonyl group is essential for gelling solvents. So far, 4,5-diarylimidazoles have been synthesized.
Abstract:
Introduction. Neutrophil gelatinase-associated lipocalin (NGAL) belongs to the family of lipocalins and is produced by several cell types, including the renal tubular epithelium. In the kidney its production increases during acute damage, which is reflected by an increase in serum and urine levels. In animal studies and clinical trials, NGAL was found to be a sensitive and specific indicator of acute kidney injury (AKI). Purpose. The aim of this work was to investigate prospectively whether urinary NGAL can be used as a marker in preeclampsia, kidney transplantation, very-low-birth-weight (VLBW) infants and diabetic nephropathy. Materials and methods. The study involved 44 consecutive patients who received a renal transplant, 18 women affected by preeclampsia (PE), 55 infants weighing ≤1500 g, and 80 patients with Type 1 diabetes. Results. A positive correlation was found between urinary NGAL (uNGAL) and 24-hour proteinuria within the PE group. The detection of higher uNGAL values in cases of severe PE, even in the absence of statistical significance, confirms that these women suffer from initial renal damage. In our population of VLBW infants, we found a positive correlation of uNGAL values at birth with the changes in sCreat and eGFR values from birth to day 21, but no correlation between uNGAL values at birth and sCreat and eGFR at day 7. Systolic and diastolic blood pressure decreased with increasing levels of uNGAL: patients with uNGAL <25 ng/ml had significantly higher systolic blood pressure than patients with uNGAL >50 ng/ml (p<0.005). Our results also indicate the ability of NGAL to predict delayed functional recovery of the graft. Conclusions. In acute renal pathology, urinary NGAL is confirmed to be a valuable predictive marker of the progress and status of acute injury.
Abstract:
This thesis investigates context-aware wireless networks, capable of adapting their behavior to the context and the application thanks to the ability to combine communication, sensing and localization. Problems of signal demodulation, parameter estimation and localization are addressed using analytical methods, simulations and experimentation, for the derivation of fundamental limits, the performance characterization of the proposed schemes, and their experimental validation. Ultrawide-bandwidth (UWB) signals are considered in certain cases, and non-coherent receivers, which allow the multipath channel diversity to be exploited without adopting complex architectures, are investigated. Closed-form expressions for the achievable bit error probability of the newly proposed architectures are derived. The problem of time delay estimation (TDE), which enables network localization through ranging measurements, is addressed from a theoretical point of view. New fundamental bounds on TDE are derived for the case in which the received signal is partially known or unknown at the receiver side, as often occurs due to propagation or to the adoption of low-complexity estimators. Practical estimators, such as energy-based estimators, are revisited and their performance compared with the new bounds. The localization issue is addressed experimentally for the characterization of cooperative networks. Practical algorithms able to improve the accuracy in non-line-of-sight (NLOS) channel conditions are evaluated on measured data. With the purpose of enhancing the localization coverage in NLOS conditions, non-regenerative relaying techniques for localization are introduced and ad hoc position estimators are devised. An example of a context-aware network is given with the study of a UWB-RFID system for detecting and locating semi-passive tags.
In particular, an in-depth investigation of low-complexity receivers capable of dealing with multi-tag interference, synchronization mismatches and clock drift is presented. Finally, theoretical bounds on the localization accuracy of this and other passive localization networks (e.g., radar) are derived, also accounting for different configurations such as monostatic and multistatic networks.
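The energy-based estimators mentioned above can be sketched in their simplest form: the received signal energy is integrated over consecutive windows, and the first window exceeding a noise threshold gives the time-of-arrival estimate. Signal, noise and threshold values below are invented for illustration:

```python
import random

# Sketch of an energy-based time-of-arrival estimator of the kind
# revisited in the thesis: integrate received energy in consecutive
# windows and take the first window exceeding a noise threshold.
# Amplitudes, window size and threshold are illustrative assumptions.

def energy_detector_toa(samples, window, threshold):
    """Return the start index of the first window whose energy
    exceeds the threshold, or None if no window does."""
    for start in range(0, len(samples) - window + 1, window):
        energy = sum(x * x for x in samples[start:start + window])
        if energy > threshold:
            return start
    return None

rng = random.Random(0)
signal = [rng.gauss(0, 0.1) for _ in range(200)]   # background noise
pulse_at = 120                                     # true arrival index
for i in range(pulse_at, pulse_at + 10):
    signal[i] += 1.0                               # received pulse energy

toa = energy_detector_toa(signal, window=10, threshold=2.0)
assert toa == 120                    # detected in the true window
```

Such estimators trade accuracy for complexity, which is exactly why the thesis compares their performance against the new theoretical bounds.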
Abstract:
Proxy data are essential for the investigation of climate variability on time scales longer than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter to the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied for their analysis, focusing on two specific types of proxies: lake sediment data and stable water isotopes.

In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied to the calibration of various climate proxies. A particular focus is placed on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented: maxima in the quartz grain concentration from a lake sediment core are compared with recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index combining local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with a Monte Carlo method. This correlation is fundamental for employing lake sediment data as a new proxy for reconstructing windstorm records of the geological past.

The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales.
In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperatures. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during the evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel), provided by the Weizmann Institute of Science, are used. First, a Lagrangian moisture source diagnostic is employed in order to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation is found between relative humidity in the source regions and measured deuterium excess. By contrast, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although it requires confirmation by isotope data from different regions and longer time scales, this finding might be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints on uncertain model parameters can be obtained. A comparison of the simulated deuterium excess with the measurements reveals that much better agreement is achieved with a wind-speed-independent formulation of the non-equilibrium fractionation factor than with the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs.
Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared with measurements in an event-based manner by using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run at high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
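The Monte Carlo significance test used for the windstorm/sediment correlation can be sketched generically: the observed correlation is compared against correlations obtained from randomly shuffled series, and the fraction of shuffles reaching the observed value estimates the p-value. The data here are synthetic placeholders for the sediment and windstorm series:

```python
import random

# Sketch of a Monte Carlo (permutation) significance test of the kind
# described above: shuffle one series many times and count how often
# the shuffled correlation reaches the observed one. Synthetic data
# stand in for the quartz-grain and windstorm-index series.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def mc_p_value(x, y, trials=2000, seed=7):
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    hits = 0
    for _ in range(trials):
        shuffled = y[:]
        rng.shuffle(shuffled)              # destroy any real pairing
        if abs(pearson(x, shuffled)) >= observed:
            hits += 1
    return hits / trials

x = list(range(30))
y = [2.0 * v + 1.0 for v in x]             # strongly correlated series
p = mc_p_value(x, y)
assert pearson(x, y) > 0.999
assert p < 0.01                            # correlation not by chance
```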
Abstract:
Throughout the Alpine domain, shallow landslides represent a serious geologic hazard, often causing severe damage to infrastructure, private property and natural resources, and in the most catastrophic events threatening human lives. Landslides are a major factor in landscape evolution in mountainous and hilly regions and a critical issue for mountain land management, since they cause the loss of pastoral land. In several Alpine contexts, the distribution of shallow landslides is strictly connected to the presence and condition of vegetation on the slopes. With the aid of high-resolution satellite images, it is possible to automatically divide mountainous territory into land cover classes, which contribute with different magnitudes to the stability of the slopes. The aim of this research is to combine EO (Earth Observation) land cover maps with ground-based measurements of land cover properties. To achieve this goal, a new procedure has been developed to automatically detect grass mantle degradation patterns from satellite images. Moreover, innovative surveying techniques and instruments are tested to measure in situ the shear strength of the grass mantle and the geomechanical and geotechnical properties of these Alpine soils. The distribution of shallow landslides is then assessed with the aid of physically based models, which use the EO-based map to distribute the resistance parameters across the landscape.
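A common physically based model for shallow landsliding is the infinite-slope factor of safety, in which vegetation can enter as an extra root-cohesion term distributed according to the land cover class. A minimal sketch with illustrative parameter values (not measurements from this research):

```python
import math

# Sketch of an infinite-slope stability model in which vegetation
# contributes an additional root cohesion term, as in the physically
# based assessment described above. All parameter values are
# illustrative assumptions, not measured Alpine soil properties.

def factor_of_safety(slope_deg, soil_c=2.0, root_c=0.0,
                     gamma=18.0, depth=1.0, phi_deg=30.0):
    """FS = (c_soil + c_root + W*cos^2(b)*tan(phi)) / (W*sin(b)*cos(b)),
    with overburden W = gamma * depth (kPa per unit area)."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    w = gamma * depth
    resisting = soil_c + root_c + w * math.cos(b) ** 2 * math.tan(phi)
    driving = w * math.sin(b) * math.cos(b)
    return resisting / driving

bare = factor_of_safety(40.0, root_c=0.0)       # degraded grass mantle
vegetated = factor_of_safety(40.0, root_c=5.0)  # intact grass mantle

assert vegetated > bare              # root cohesion stabilizes the slope
assert bare < 1.0 < vegetated        # unstable bare, stable vegetated
```

Mapping land cover classes to root-cohesion values is exactly where the EO-based maps feed into such a model.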
Abstract:
This dissertation deals with the development and application of an alternative sample introduction technique for liquid samples in mass spectrometry. Although some efforts at improvement have already been made, the conventional pneumatic nebulizer and spray chamber systems used as standard in trace element analysis with the inductively coupled plasma (ICP) exhibit low overall efficiency. Pneumatically generated aerosol is characterized by a broad droplet size distribution, which necessitates the use of a spray chamber to adapt the aerosol characteristics to the operating conditions of the ICP. Generating droplets with a very narrow droplet size distribution, or even monodisperse droplets, could improve the efficiency of sample introduction. One aim of this work is therefore to use droplets generated by thermal inkjet printing for sample introduction in elemental mass spectrometry. In analytical chemistry, thermal inkjet printing has previously been used for the targeted, reproducible deposition of droplets on surfaces, in surface analysis by TXRF or in laser ablation. To enable continuous droplet generation, an electronic microcontroller was developed that can control a dosing unit independently of the printer's hardware and software, with all parameters relevant to droplet generation (frequency, heating pulse energy) adjustable independently of one another. The dosing unit, the "drop-on-demand" aerosol generator (DOD), was mounted on an aerosol transport chamber that carries the generated droplets into the ionization source.
In inorganic trace analysis, combining the DOD with an autosampler made it possible to investigate 53 elements and to determine the achievable sensitivities as well as, for 15 elements by way of example, the limits of detection and the background equivalent concentrations. To exploit these advantages conveniently, a coupling of the DOD system with miniaturized flow injection analysis (FIA) and with miniaturized separation techniques such as µHPLC was developed. The flow injection method was validated with a certified reference material, and the certified values for vanadium and cadmium were reproduced well. Transient signals could be recorded when the dosing system, in combination with ICP-MS, was coupled to a µHPLC. Modifying the dosing unit for connection to a continuous sample flow still requires a further reduction of the remaining dead volume. To this end, independence from the commercially available printer cartridges used so far should be pursued by fabricating the dosing unit in-house. The versatility of the dosing system was demonstrated by coupling it to a recently developed atmospheric-pressure ionization method, the "flowing atmospheric-pressure afterglow" desorption/ionization source (FAPA). Direct introduction of liquid samples into this source had previously not been possible; only desorption of dried residues or directly from the liquid surface could be performed, and the precision of the analysis was limited by the variable sample position. With the DOD system, liquid samples can now be introduced directly into the FAPA, which also enables calibration for quantitative analyses of organic compounds.
In addition to illegal drugs and their metabolites, over-the-counter pharmaceuticals and an explosive analogue could be detected in correspondingly spiked pure solvent. The same was achieved in urine samples spiked with drugs and drug metabolites. It should be emphasized that no sample preparation was necessary and that no internal or isotopically labeled standards were used to determine the limits of detection of the individual species. Nevertheless, the determined limits of detection are considerably lower than those achievable with the previous procedure for the analysis of liquid samples. To accelerate solvent evaporation compared with the "pin-to-plate" geometry of the FAPA used so far, an alternative electrode arrangement was developed in which the sample remains in contact with the afterglow zone for longer. This glow discharge source is ring-shaped and allows sample introduction via a central gas flow. Because of the ring-shaped discharge, the name "halo-FAPA" (h-FAPA) is used for this discharge geometry. A fundamental physical and spectroscopic characterization showed that it is indeed a FAPA desorption/ionization source.
Abstract:
Lantibiotics are peptide molecules produced by a large number of Gram-positive bacteria; they possess antibacterial activity against a broad spectrum of germs and represent a potential solution to the growing problem of multi-resistant pathogens. Their activity consists of binding to the membrane of the target, which is then destabilized through the induction of pores that cause the death of the pathogen. Typically, lantibiotics consist of a "leader peptide" and a "core peptide". The former is required for recognition of the molecule by enzymes that carry out post-translational modifications of the latter, which becomes the region with bactericidal activity once cleaved from the leader peptide. These post-translational modifications determine the content of the amino acids lanthionine (Lan) and methyl-lanthionine (MeLan), characterized by thioether bridges that confer greater resistance to proteases and make it possible to circumvent the main limitation to the therapeutic use of peptides. Nisin is the most studied and best characterized lantibiotic; it is produced by the bacterium L. lactis and has been used for over twenty years in the food industry. Nisin is a 34-amino-acid peptide containing lanthionine and methyl-lanthionine rings, introduced by the action of the enzymes nisB and nisC, while cleavage of the leader peptide is performed by the enzyme nisP. This thesis addresses the engineering of the synthesis and modification of lantibiotics in the bacterium E. coli. In particular, it addresses the implementation of heterologous expression in E. coli of the lantibiotic cinnamycin, naturally produced by the bacterium Streptomyces cinnamoneus.
This particular lantibiotic, nineteen amino acids long after leader cleavage, is modified by the enzyme CinM, responsible for introducing the Lan and MeLan amino acids; by the enzyme CinX, responsible for the hydroxylation of aspartic acid (Asp); and finally by the enzyme cinorf7, responsible for introducing the lysinoalanine (Lal) bridge. Once the activity of cinnamycin, and consequently that of the enzyme CinM, was confirmed, it was decided to attempt the modification of nisin by CinM. For this purpose it was necessary to design a synthetic gene encoding nisin with a chimeric leader, formed by the fusion of the cinnamycin leader and the nisin leader. The final product, after leader cleavage by nisP, is a fully modified nisin. This result allows nisin to be modified using a single enzyme instead of two, reducing the metabolic load on the producing bacterium; it also opens the way to using CinM to modify other lantibiotics following the same approach, as well as to introducing the lysinoalanine bridge, since the enzyme cinorf7 requires the presence of CinM to perform its function.
Abstract:
Human activities strongly influence environmental processes, and as human domination increases, biodiversity progressively declines in ecosystems worldwide. High genetic and phenotypic variability ensures the functionality and stability of ecosystem processes through time and increases the resilience and adaptive capacity of populations and communities, while a reduction in functional diversity leads to a decreased ability to respond to a changing environment. Pollution is becoming one of the major threats to aquatic ecosystems, and pharmaceutical and personal care products (PPCPs) in particular are a relatively new group of environmental contaminants suspected of having adverse effects on aquatic organisms. There is still a lack of knowledge on the responses of communities to complex chemical mixtures in the environment. We used an individual-trait-based approach to assess the response of a phytoplankton community in a scenario of combined pollution and environmental change (a steady increase in temperature). We manipulated individual-level trait diversity directly (by filtering out size classes) and indirectly (through exposure to a PPCP mixture), and studied how the reduction in trait diversity affected community structure, biomass production, and the ability of the community to track a changing environment. We found that exposure to PPCPs slows down the ability of the community to respond to increasing temperature. Our study also highlights how physiological responses (induced by PPCP exposure) are important for ecosystem processes: although from an ecological point of view the experimental communities converged to a similar structure, they were functionally different.
Abstract:
Context: Through overexpression and aberrant activation in many human tumors, the IGF system plays a key role in tumor development and tumor cell proliferation. Different strategies targeting the IGF-I receptor (IGFI-R) have been developed, and recent studies demonstrated that combined treatments with cytostatic drugs enhance the potency of anti-IGFI-R therapies. Objective: The objective of the study was to examine the IGFI-R expression status in neuroendocrine tumors of the gastroenteropancreatic system (GEP-NETs) in comparison with healthy tissues and to use potential overexpression as a target for novel anti-IGFI-R immunoliposomes. Experimental Design: A human tumor tissue array and samples from different normal tissues were investigated by immunohistochemistry. An IGFI-R antagonistic antibody (1H7) was coupled to the surface of sterically stabilized liposomes loaded with doxorubicin. Cell lines from different tumor entities were investigated in liposomal association studies in vitro. For in vivo experiments, neuroendocrine tumor xenografts were used to evaluate the pharmacokinetic and therapeutic properties of the novel compound. Results: Immunohistochemistry revealed significant IGFI-R overexpression in all investigated GEP-NETs (n = 59; staining index, 229.1 +/- 3.1%) in comparison with normal tissues (115.7 +/- 3.7%). Furthermore, anti-IGFI-R immunoliposomes displayed specific tumor cell association (44.2 +/- 1.6% vs. IgG liposomes, 0.8 +/- 0.3%; P < 0.0001) and internalization in human neuroendocrine tumor cells in vitro, as well as superior antitumor efficacy in vivo (life span 31.5 +/- 2.2 d vs. untreated control, 19 +/- 0.6 d; P = 0.008). Conclusion: IGFI-R overexpression seems to be a common characteristic of otherwise heterogeneous NETs. Novel anti-IGFI-R immunoliposomes have been developed and successfully tested in a preclinical model of human GEP-NETs. Moreover, in vitro experiments indicate that the use of this agent could also be a promising approach for other tumor entities.
Abstract:
The spine is a complex structure that provides motion in three directions: flexion and extension, lateral bending, and axial rotation. So far, the investigation of the mechanical and kinematic behavior of the basic unit of the spine, a motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively in an in vivo environment use a distractor; however, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements to determine both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations could be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean values of the stiffness computed with the proposed concept were within ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
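A stiffness value of the kind compared above can be obtained as the slope of the measured moment-angle curve. The following is a minimal sketch assuming a linear fit over the tested range; the function name and the synthetic data are purely illustrative, not the thesis' actual processing pipeline.

```python
import numpy as np

def bending_stiffness(angles_deg, moments_nm):
    """Estimate lateral bending stiffness (Nm/deg) as the least-squares
    slope of the moment-angle curve, assuming an approximately linear
    response over the tested range.
    """
    slope, _intercept = np.polyfit(angles_deg, moments_nm, 1)
    return slope

# Synthetic moment-angle samples under less than 5 Nm of applied load,
# generated from a true stiffness of 2.0 Nm/deg plus small noise:
angles = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                        # degrees
moments = 2.0 * angles + np.array([0.0, 0.05, -0.04, 0.03, -0.02])  # Nm

k = bending_stiffness(angles, moments)
```

The same slope estimate can be applied to the intraoperative moment-angle data and to the loading-simulator data, so the two stiffness values are directly comparable, which is how a ±15% agreement between the methods can be stated.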