Abstract:
Understanding the complex relationships between the quantities measured by volcanic monitoring networks and shallow magma processes is a crucial step towards the comprehension of volcanic processes and a more realistic evaluation of the associated hazard. This question is very relevant at Campi Flegrei, a quiescent volcanic caldera immediately north-west of Napoli (Italy). The system's activity shows strong fumarolic release and periodic slow ground movement (bradyseism) accompanied by high seismicity. This activity, together with the high population density and the presence of military and industrial facilities, makes Campi Flegrei one of the areas with the highest volcanic hazard in the world. In this context my thesis has focused on magma dynamics due to the refilling of shallow magma chambers, and on the geophysical signals associated with these phenomena that are detectable by seismic, deformation and gravimetric monitoring networks. Indeed, the refilling of magma chambers is a process that frequently occurs just before a volcanic eruption; therefore, the ability to identify these dynamics by means of recorded signal analysis is important for evaluating the short-term volcanic hazard. The space-time evolution of the dynamics due to the injection of new magma into the magma chamber has been studied by performing numerical simulations with, and implementing additional features in, the code GALES (Longo et al., 2006), recently developed and still being upgraded at the Istituto Nazionale di Geofisica e Vulcanologia in Pisa (Italy). GALES is a finite element code based on a two-dimensional, transient physico-mathematical model able to treat fluids as multiphase homogeneous mixtures, from compressible to incompressible. The fundamental equations of mass, momentum and energy balance are discretised both in time and space using the Galerkin Least-Squares and discontinuity-capturing stabilisation technique (a generic sketch of such balance laws follows this abstract). The physical properties of the mixture are computed as a function of the local conditions of magma composition, pressure and temperature. The model's features enable the study of a broad range of phenomena characterising pre- and syn-eruptive magma dynamics in a wide domain, from the volcanic crater to the deep magma feeding zones. The study of the displacement field associated with the simulated fluid dynamics has been carried out with a numerical code developed by the geophysics group at University College Dublin (O'Brien and Bean, 2004b), with whom we started a very profitable collaboration. In this code, seismic wave propagation in heterogeneous media with a free surface (e.g. the Earth's surface) is simulated using a discrete elastic lattice in which particle interactions are governed by Hooke's law. This method makes it possible to account for medium heterogeneities and complex topography. The initial and boundary conditions for the simulations have been defined within a coordinated project (INGV-DPC 2004-06 V3_2 "Research on active volcanoes, precursors, scenarios, hazard and risk - Campi Flegrei"), to which this thesis contributes, involving many researchers with expertise on Campi Flegrei in the volcanological, seismic, petrological and geochemical fields. Numerical simulations of magma and rock dynamics have been coupled as described in the thesis. The first part of the thesis consists of a parametric study aimed at understanding the effect of carbon dioxide dissolved in magma on convection dynamics.
Indeed, this volatile was relevant in many Campi Flegrei eruptions, including some commonly considered as references for the future activity of this volcano. A set of simulations has been performed considering a compositionally uniform, elliptical magma chamber, refilled from below by a magma with a volatile content equal to or different from that of the resident magma. To do this, a multicomponent non-ideal magma saturation model (Papale et al., 2006), which considers the simultaneous presence of CO2 and H2O, has been implemented in GALES. Results show that the presence of CO2 in the incoming magma increases its buoyancy, promoting convection and mixing. The simulated dynamics produce pressure transients with frequency and amplitude within the sensitivity range of modern geophysical monitoring networks such as the one installed at Campi Flegrei. In the second part, simulations more closely tailored to the Campi Flegrei volcanic system have been performed. The simulated system has been defined on the basis of conditions consistent with the bulk of knowledge of Campi Flegrei, and in particular of the Agnano-Monte Spina eruption (4100 B.P.), commonly considered as the reference for a future high-intensity eruption in this area. The magmatic system has been modelled as a long dyke refilling a small shallow magma chamber; magmas with trachytic and phonolitic compositions and variable H2O and CO2 contents have been considered. The simulations have been carried out changing the magma injection conditions, the system configuration (magma chamber geometry, dyke size) and the composition and volatile content of the resident and refilling magmas, in order to study the influence of these factors on the simulated dynamics. The simulation results make it possible to follow each step of the ascent of the gas-rich magma through the denser magma, highlighting the details of magma convection and mixing. In particular, a higher CO2 content in the deep magma results in more efficient and faster dynamics. Through these simulations the variation of the gravimetric field has been determined. Afterwards, the space-time distribution of stress resulting from the numerical simulations has been used as the boundary condition for simulations of the displacement field imposed by the magmatic dynamics on the rocks. The properties of the simulated domain (rock density, P and S wave velocities) have been based on literature data from active and passive tomographic experiments, obtained through a collaboration with A. Zollo at the Department of Physics of the Federico II University in Napoli. The elasto-dynamic simulations make it possible to determine the space-time distribution of the deformation and the seismic signals associated with the studied magmatic dynamics. In particular, the results show that these dynamics induce deformations similar to those measured at Campi Flegrei, and seismic signals with energy concentrated in the frequency bands typically observed in volcanic areas. The present work shows that an approach based on the solution of the equations describing the physics of processes within the magmatic fluid and the surrounding rock system is able to recognise and describe the relationships between geophysical signals detectable at the surface and deep magma dynamics. Therefore, the results suggest that the combined study of geophysical data and information from numerical simulations may in the near future allow a more efficient evaluation of short-term volcanic hazard.
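As an illustrative aside, a minimal sketch of the kind of transient balance laws such a homogeneous mixture model solves, in generic conservative form only; the actual GALES formulation, constitutive closures and GLS stabilisation are those of Longo et al. (2006):

$$
\begin{aligned}
&\partial_t \rho + \nabla\cdot(\rho\mathbf{u}) = 0 &&\text{(mass)}\\
&\partial_t(\rho\mathbf{u}) + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u}) = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\mathbf{g} &&\text{(momentum)}\\
&\partial_t(\rho e) + \nabla\cdot\big[(\rho e + p)\,\mathbf{u}\big] = \nabla\cdot(\boldsymbol{\tau}\mathbf{u} - \mathbf{q}) + \rho\,\mathbf{g}\cdot\mathbf{u} &&\text{(energy)}
\end{aligned}
$$

where ρ, u, p, e, τ and q denote the mixture density, velocity, pressure, total specific energy, viscous stress and heat flux, with the mixture properties computed from the local composition, pressure and temperature as stated above.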
Abstract:
In recent years, a growing number of researchers have focused their attention on strategies for characterising the ADMET properties of drug candidates as early as possible. This trend stems from the awareness that about half of the drugs under development are never marketed because of shortcomings in their ADME characteristics, and that at least half of the molecules that do reach the market still present some toxicological or ADME problem [1]. Indeed, it matters little how active or specific a molecule may be: to become a drug it must be well absorbed, distributed throughout the organism, metabolised neither too quickly nor too slowly, and completely eliminated. Moreover, neither the molecule nor its metabolites should be toxic to the organism. It is therefore clear that a rapid determination of ADMET parameters in the early phases of drug development saves time and money, making it possible to select the most promising compounds from the outset and to discard those with unfavourable characteristics. This thesis is set in this context and shows the application of a simple technique, biochromatography, to rapidly characterise the binding of compound libraries to human serum albumin (HSA). It also shows the use of another, independent technique, circular dichroism, which allows the same drug-protein systems to be studied in solution, providing additional information on the stereochemistry of the binding process. HSA is the most abundant protein in blood. It acts as a carrier for a large number of molecules, both endogenous (e.g. bilirubin, thyroxine, steroid hormones, fatty acids) and xenobiotic. It also increases the solubility of lipophilic molecules that are poorly soluble in aqueous media, such as the taxanes. Binding to HSA is generally stereoselective and occurs at high-affinity binding sites. It is also well known that competition between drugs, or between a drug and endogenous metabolites, can significantly change their free fraction, modifying their activity and toxicity. Owing to these properties, HSA can influence both the pharmacokinetic and the pharmacodynamic properties of drugs. It is not unusual for an entire drug development project to be abandoned because of too high an affinity for HSA, too short a half-life, or poor distribution due to weak HSA binding. From a pharmacokinetic point of view, therefore, HSA is the most important plasma transport protein. A large number of publications demonstrate the reliability of the biochromatographic technique in the study of biorecognition phenomena between proteins and small molecules [2-6]. My work focused mainly on the use of biochromatography as a method to evaluate the HSA-binding characteristics of several series of compounds of pharmaceutical interest, and on the improvement of this technique. To gain a better understanding of the binding mechanisms of the molecules studied, the same drug-HSA systems were also investigated by circular dichroism (CD). Initially, HSA was immobilised on a packed epoxy-silica column, 50 x 4.6 mm internal diameter, using a procedure previously reported in the literature [7] with some minor modifications.
Briefly, immobilisation was carried out by recirculating a solution of HSA, under defined pH and ionic strength conditions, through a pre-packed column. The column was then characterised with respect to the amount of correctly immobilised protein by frontal analysis of L-tryptophan [8]. Next, racemic solutions of molecules known to bind HSA enantioselectively were injected onto the column, to verify that the immobilisation procedure had not altered the binding properties of the protein. After characterisation, the column was used to determine the binding percentage of a small series of HIV protease inhibitors (PIs) and to identify their binding site(s). The binding percentage was calculated from the capacity factor (k) of the samples. The value of this parameter in the aqueous phase was obtained by linear extrapolation from the plot of log k against the percentage (v/v) of 1-propanol in the mobile phase (a numerical sketch of this extrapolation follows this abstract). Only for two of the five compounds analysed could the value of k be measured directly in the absence of organic solvent. All the PIs analysed showed a high binding percentage to HSA: in particular, the values for ritonavir, lopinavir and saquinavir were greater than 95%. These results agree with literature data obtained with an optical biosensor [9]. They are also consistent with the significant reduction in the inhibitory activity of these compounds observed in the presence of HSA, a reduction that appears larger for the compounds that bind the protein more strongly [10]. Competition studies were then performed by zonal chromatography. In this method a solution of known concentration of a competitor is used as the mobile phase, while small amounts of analyte are injected onto the HSA-functionalised column. The competitors were selected on the basis of their selective binding to one of the main binding sites of the protein. In particular, sodium salicylate, ibuprofen and sodium valproate were used as markers of site I, site II and the bilirubin site, respectively. These studies showed independent binding of the PIs at sites I and II, whereas weak anti-cooperativity was observed at the bilirubin site. The same drug-protein system was finally investigated in solution by circular dichroism. In particular, the change in the induced CD signal of an equimolar [HSA]/[bilirubin] complex was monitored upon the addition of aliquots of ritonavir, chosen as representative of the series. The results confirm the slight anti-cooperativity at the bilirubin site previously observed in the biochromatographic studies. Subsequently, the same protocol was applied to a monolithic epoxy-silica column, 50 x 4.6 mm, to evaluate the reliability of the monolithic support for biochromatographic applications. The monolithic support showed good chromatographic characteristics in terms of back-pressure, efficiency and stability, as well as reliability in the determination of HSA binding parameters.
This column was used to determine the HSA binding percentage of a series of polyamine-quinones developed within a research project on Alzheimer's disease. All the compounds showed binding percentages above 95%. Moreover, a correlation was observed between the binding percentage and the characteristics of the side chain (length and number of amino groups). Competition studies on these compounds were then carried out by circular dichroism, revealing an anti-cooperative effect of the polyamine-quinones at sites I and II, whereas binding at the bilirubin site proved independent. The knowledge acquired with the monolithic support described above was then applied to a shorter epoxy-silica column (10 x 4.6 mm). The method used in the previous studies to determine the binding percentage relies on data from several experiments, so considerable time is needed to obtain the final result. Using a shorter column reduces the retention times of the analytes, making the determination of the HSA binding percentage much faster. The analysis thus moves from medium throughput to high-throughput screening (HTS). Furthermore, the reduction in analysis time makes it possible to avoid organic solvents in the mobile phase. After the 10 mm column had been characterised by the same method described for the other columns, a series of standards was injected at different mobile phase flow rates, to assess the possibility of using high flow rates. The column was then employed to estimate the binding percentage of a series of molecules with different chemical characteristics. The possibility of using such a short column for competition studies was also evaluated, and the binding of a series of compounds to site I was investigated. Finally, the stability of the column after extensive use was assessed. The use of chromatographic supports functionalised with albumins of different origin (rat, dog, guinea pig, hamster, mouse, rabbit) can be proposed as a future application of these HTS columns. Indeed, the possibility of obtaining information on the binding of drug candidates to the different albumins would allow a better comparison between in vitro data and data obtained from animal experiments, facilitating the subsequent extrapolation to humans with the speed of an HTS method. It would also reduce the number of animals used in experimentation. Several works in the literature demonstrate the reliability of columns functionalised with albumins of different origin [11-13]: the use of shorter columns could widen their applications.
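A minimal numerical sketch of the retention-based estimate described above. The log k values are hypothetical, and the final step assumes one common convention relating the bound fraction to the aqueous-phase retention factor, b% = 100·k/(1+k), which may differ from the exact relation used in the thesis:

```python
import numpy as np

# Hypothetical log k measurements at 4, 6 and 8 % (v/v) 1-propanol.
pct = np.array([4.0, 6.0, 8.0])
logk = np.array([1.05, 0.82, 0.60])

# Linear extrapolation of log k to 0 % organic modifier (aqueous phase).
slope, intercept = np.polyfit(pct, logk, 1)
k0 = 10.0 ** intercept

# Assumed convention for the bound fraction (see lead-in above).
percent_bound = 100.0 * k0 / (1.0 + k0)
print(f"k (aqueous) = {k0:.2f}  ->  bound = {percent_bound:.1f}%")
```

The same fit also makes explicit why direct measurement at 0% organic solvent is preferable when retention is low enough to allow it: the extrapolation inherits the uncertainty of the regression.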
Abstract:
The present study analyses rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment, hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, urbanization has led to the construction of new buildings beside abandoned or derelict rural ones. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that have acted on building allocation behaviour. In fact, one of the most worrying dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of a number of conceptual steps covering the different aspects of spatial model development: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and method used, the calibration process, and the evaluation of the model. Different combinations of factors in various parts of the territory generated conditions that were more or less favourable for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it represents the expression of the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamics and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and are therefore generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of the explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area chosen for testing is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics have occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out using spatial data on the periurban and rural parts of the study area for the 1975-2005 time period, by means of a generalised linear model. The output of the model fit is a continuous grid surface whose cells assume values, ranging from 0 to 1, of the probability of building occurrence across the rural and periurban parts of the study area. The response variable thus captures the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression (a minimal sketch follows this abstract). By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretative capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends that occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, together with the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies could focus on using the proposed model to predict short/medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
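A minimal sketch of a presence/absence logistic GLM of this kind. The data are synthetic and the predictor names are hypothetical stand-ins for the driving-force variables; the actual variables and calibration procedure are those described in the thesis:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical dataset: one row per sampled location, response is
# presence (1) / absence (0) of a building; the three predictors stand
# in for geomorphologic / infrastructural / socio-economic drivers.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 10, n),   # e.g. distance to nearest road (km)
    rng.uniform(0, 30, n),   # e.g. terrain slope (degrees)
    rng.uniform(0, 5, n),    # e.g. distance to urban centre (km)
])
# Synthetic "truth" used only to generate plausible labels.
logit = 1.0 - 0.4 * X[:, 0] - 0.05 * X[:, 1] - 0.3 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Binomial GLM with logit link = logistic regression.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial())
fit = model.fit()
print(fit.summary())

# Applying fit.predict() to the predictor values of every grid cell
# yields the 0-1 probability surface described in the abstract.
```

The fitted coefficients (sign and magnitude) are what identify which candidate driving forces matter and how strongly, which is exactly the explanatory use of the model described above.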
Abstract:
Human reactions to vibration have been extensively investigated in the past. Vibration, and in particular whole-body vibration (WBV), has commonly been considered an occupational hazard for its detrimental effects on human condition and comfort. Although long-term exposure to vibrations may produce undesirable side effects, a large part of the literature is dedicated to the positive effects of WBV when used as a method for muscular stimulation and as an exercise intervention. Whole body vibration training (WBVT) aims to mechanically activate muscles by eliciting neuromuscular activity (muscle reflexes) via vibrations delivered to the whole body. The most frequently cited mechanism to explain the neuromuscular outcomes of vibration is the elicited neuromuscular activation. Local tendon vibrations induce activity of the muscle spindle Ia fibers, mediated by monosynaptic and polysynaptic pathways: a reflex muscle contraction known as the Tonic Vibration Reflex (TVR) arises in response to such a vibratory stimulus. In WBVT, mechanical vibrations in a range from 10 to 80 Hz, with peak-to-peak displacements from 1 to 10 mm, are usually transmitted to the patient's body through oscillating platforms. Vibrations are then transferred from the platform to a specific muscle group through the subject's body. To customize WBV treatments, surface electromyography (SEMG) signals are often used to reveal the best stimulation frequency for each subject. The use of concise SEMG parameters, such as the root mean square values of the recordings, is also common practice; frequently a preliminary session takes place in order to find the most appropriate stimulation frequency. Soft tissues act as wobbling masses vibrating in a damped manner in response to mechanical excitation; the muscle tuning hypothesis suggests that the neuromuscular system works to damp the soft tissue oscillations that occur in response to vibrations: muscles alter their activity to dampen the vibrations, preventing any resonance phenomenon. Muscle response to vibration is, however, a complex phenomenon, as it depends on several parameters, such as muscle tension, muscle or segment stiffness, and the amplitude and frequency of the mechanical vibration. Additionally, while in TVR studies the applied vibratory stimulus and the muscle conditions are completely characterised (a known vibration source is applied directly to a stretched/shortened muscle or tendon), in WBV studies only the stimulus applied to a distal part of the body is known. Moreover, the mechanical response changes with posture: the transmissibility of the vibratory stimulus along the body segments strongly depends on the position held by the subject. The aim of this work was to investigate the effects that the use of vibrations, in particular whole body vibrations, may have on muscular activity. A new approach for finding the most appropriate stimulation frequency, based on the use of accelerometers, was also explored. Several subjects, not affected by any known neurological or musculoskeletal disorders, were voluntarily involved in the study and gave their informed, written consent to participate. The device used to deliver vibration to the subjects was a vibrating platform. The vibrations imparted by the platform were exclusively vertical; the platform displacement was sinusoidal, with an intensity (peak-to-peak displacement) set to 1.2 mm and a frequency ranging from 10 to 80 Hz. All the subjects familiarized themselves with the device and the proper positioning.
Two different postures were explored in this study: position 1 - hack squat; position 2 - subject standing on toes with heels raised. SEMG signals from the Rectus Femoris (RF), Vastus Lateralis (VL) and Vastus Medialis (VM) were recorded. SEMG signals were amplified using a multi-channel, isolated biomedical signal amplifier; the gain was set to 1000 V/V and a band-pass filter (-3 dB band 10 - 500 Hz) was applied; no notch filters were used to suppress line interference. Tiny and lightweight (less than 10 g) three-axial MEMS accelerometers (Freescale Semiconductor) were used to measure accelerations directly on the subjects' skin, at the EMG electrode level. The acceleration signals provided information on the muscle belly oscillation of each individual's RF, Biceps Femoris (BF) and Gastrocnemius Lateralis (GL); they were pre-processed in order to exclude the influence of gravity. As demonstrated by our results, vibrations generate peculiar, non-negligible motion artifacts on skin electrodes. The artifact amplitude is generally unpredictable; it appeared in all the quadriceps muscles analysed, but in different amounts. The artifact harmonics extend throughout the EMG spectrum, making classic high-pass filters ineffective; however, their contribution was easy to filter out from the raw EMG signal with a series of sharp notch filters (1.5 Hz wide) centred at the vibration frequency and its higher harmonics (a filtering sketch follows this abstract). However, the use of these simple filters prevents the detection of potential EMG power variations in the filtered bands. Moreover, our experience suggests that reducing the motion artifact by using particular electrodes and by accurately preparing the subject's skin is not easily viable; even though some small improvements were obtained, it was not possible to substantially decrease the artifact. Admittedly, removing these artifacts leads to some loss of true EMG signal. Nevertheless, our preliminary results suggest that the use of notch filters at the vibration frequency and its harmonics is suitable for motion artifact filtering. In RF SEMG recordings during vibratory stimulation, only a small EMG power increment should be contained in the filtered bands as a result of electromyographic activity synchronous with the vibration; moreover, it is better to remove the artifact, which in our experience amounted to more than 40% of the total signal power. In summary, many variables have to be taken into account: in addition to the amplitude, frequency and duration of the vibration treatment, other fundamental variables were found to be the subject's anatomy, individual physiological condition, and positioning on the platform. Studies on WBV treatments that include surface EMG analysis to assess muscular activity during vibratory stimulation should take into account the presence of motion artifacts. Appropriate filtering of the artifacts, to reveal the actual muscle contraction elicited by the vibration stimulus, is mandatory; as a result of our preliminary study, a simple multi-band notch filtering scheme may help to reduce the randomness of the results. The muscle tuning hypothesis seemed to be confirmed. Our results suggest that the effects of WBV are linked to the actual muscle motion (displacement): the greater the muscle belly displacement, the higher the muscle activity. The maximum muscle activity was found in correspondence with the local mechanical resonance, suggesting a more effective stimulation at the specific system resonance frequency.
Under the hypothesis that muscle activation is proportional to muscle displacement, treatment optimization could be obtained by simply monitoring local acceleration (resonance). However, our study revealed only short-term effects of the vibratory stimulus; prolonged studies should be set up in order to assess the long-term persistence of these results. Since the local stimulus depends on the kinematic chain involved, WBV muscle stimulation has to take into account the transmissibility of the stimulus along the body segments, in order to ensure that the vibratory stimulation effectively reaches the target muscle. The combination of local resonance and muscle response should also be further investigated to prevent hazards to individuals undergoing WBV treatments.
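A minimal sketch of the multi-band notch filtering described above. The sampling rate, vibration frequency and the synthetic test signal are illustrative assumptions, not the study's actual recordings:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_vibration_artifact(emg, fs, f_vib, width=1.5, n_harmonics=8):
    """Cascade of sharp notch filters at the vibration frequency and
    its harmonics (each notch ~width Hz wide), applied zero-phase."""
    out = np.asarray(emg, dtype=float).copy()
    for h in range(1, n_harmonics + 1):
        f0 = h * f_vib
        if f0 >= fs / 2:                 # stay below the Nyquist limit
            break
        b, a = iirnotch(f0, Q=f0 / width, fs=fs)
        out = filtfilt(b, a, out)        # zero-phase filtering
    return out

# Hypothetical use: 1 kHz sampling, 30 Hz platform vibration.
fs, f_vib = 1000.0, 30.0
t = np.arange(0, 2.0, 1 / fs)
emg = np.random.randn(t.size) + 2.0 * np.sin(2 * np.pi * f_vib * t)
clean = remove_vibration_artifact(emg, fs, f_vib)
```

As noted in the abstract, such notches inevitably discard any genuine EMG power that falls inside the filtered bands, which is the trade-off the study discusses.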
Abstract:
There is a widening consensus around the fact that, in many developed countries, food production-consumption patterns have in recent years been undergoing a process of deep change towards diversification and re-localisation practices, as a counter-tendency to the growing disconnection between farming and food, producers and consumers. The relevance of these initiatives certainly does not lie in their economic dimension, but rather in their intense diffusion and growth rate, their spontaneous and autonomous nature and, especially, their intrinsic innovative potential. These dynamics involve a wide range of actors around local food patterns, embedding short food supply chain initiatives within a more complex and wider process of rural development based on principles of sustainability, multifunctionality and valorisation of endogenous resources. In this work we analyse these features through a multi-level perspective, with reference to the dynamics between niche and regime and the inherent characteristics of the innovation paths. We apply this approach, through a qualitative methodology, to the analysis of the experience of farmers' markets and Solidarity-Based Consumer Groups (Gruppi di Acquisto Solidale) ongoing in Tuscany, seeking to highlight the dynamics that are affecting the establishment of this alternative food production-consumption model (and its related innovative potential) from within and from without. To verify whether, and under which conditions, they can constitute a niche, a protected space where radical innovations can develop, we refer to the three interrelated analytic dimensions of socio-technical systems: the actors (i.e. individuals, social groups, organisations), the system of rules and institutions, and the artefacts (i.e. the material and immaterial contexts in which the actors move). Through this framework we analyse the innovative potential of the niches and their level of structuration and, then, the mechanisms of system transition, focusing on the new dynamics within the niche and between the niche and the policy regime that emerged after the growth of interest by mass media and public institutions and their direct involvement in the initiatives. Following the development of these significant experiences, we explore more deeply the social, economic, cultural, political and organisational factors affecting innovation in face-to-face interactions, highlighting the critical aspects (sharing of alternative values, coherence at the level of individual choices, frictions over organisational aspects, inclusion/exclusion, attitudes towards integration at territorial level), up to the emergence of tensions and the risks of opportunistic behaviour that might arise from their growth. Finally, a comparison with similar experiences abroad is drawn (specifically with Provence), in order to draw food for thought potentially useful for leading regional initiatives towards a transition path.
Abstract:
Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step ---beta-reduction--- involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory in the case of calculi with basic (higher-order) primitives, this indirect approach falls short in the case of higher-order process calculi featuring constructs for phenomena such as, e.g., localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We therefore observe that foundational studies for higher-order process calculi must be carried out directly on them, exploiting their peculiarities. This dissertation contributes to such foundational studies for higher-order process calculi. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of constructs universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
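The core calculus is not spelled out in the abstract; as an illustrative sketch, a minimal higher-order calculus of this kind (in the style of HOcore) can be given by the grammar and communication rule

$$
P ::= a(x).\,P \;\mid\; \overline{a}\langle P\rangle \;\mid\; P \parallel P \;\mid\; x \;\mid\; \mathbf{0}
\qquad\qquad
a(x).\,P \parallel \overline{a}\langle Q\rangle \;\longrightarrow\; P\{Q/x\}
$$

where communication substitutes the transmitted process Q for the variable x in the receiver's continuation, the process-calculus analogue of the beta-reduction step mentioned above.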
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at the estimation of planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. Once solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link of the order of 3×10⁻¹⁵ at 1000 s integration time. The future ESA BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute a value of 8×10⁻¹⁵ to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapour content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, the ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide a continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique supporting both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, Chapter 2 investigates the troposphere path delay in detail, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 deals with an analysis of the status and performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of a GNSS software (S/W) developed to estimate the troposphere calibration for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken in order to evaluate its performance.
A description of the campaign and its main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of the microwave radiometers to be used to support radio science experiments. The analysis was carried out considering radiometric measurements from the ESA/ESTEC instruments installed at Cabauw (NL), compared with the MORE requirements. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account in the development phase of future instrumentation.
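Since the requirements above are expressed as Allan deviations at 1000 s, here is a minimal sketch of how such a figure is computed from fractional frequency residuals using the overlapping estimator; the noise level and sampling are illustrative assumptions:

```python
import numpy as np

def allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency data y
    (one sample per tau0 seconds) at averaging factor m, i.e. at
    integration time tau = m * tau0."""
    # m-sample moving averages of the fractional frequency
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]          # overlapping differences tau apart
    return np.sqrt(0.5 * np.mean(d ** 2))

# Hypothetical link dominated by white frequency noise, 1 s sampling.
rng = np.random.default_rng(1)
y = 1e-13 * rng.standard_normal(200_000)
print(f"sigma_y(1000 s) ~ {allan_deviation(y, m=1000):.1e}")
# For white frequency noise sigma_y falls as tau**-0.5, so 1e-13 at
# 1 s corresponds to roughly 3e-15 at 1000 s, the Cassini-class goal.
```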
Abstract:
As time goes by, networks of permanent GNSS (Global Navigation Satellite System) stations are increasingly becoming a valuable support for satellite surveying techniques. They are at once an effective materialisation of the reference frame and a useful aid to topographic surveying and deformation monitoring applications. The now classical static post-processing applications are being joined by real-time measurements, increasingly used and requested by professional users. In all cases, the determination of precise coordinates for the permanent stations is very important, to the point that it was decided to perform it with different processing environments. Bernese, Gamit (which share the double-difference approach) and Gipsy (which uses the undifferenced approach) were compared. The use of three software packages made it essential to define a common processing strategy able to guarantee that the ancillary data and the physical parameters adopted would not be a source of discrepancy between the solutions obtained. The analysis of networks of national extent, or of local networks over long time spans, involves the processing of thousands if not tens of thousands of files; in addition, sometimes because of trivial errors, or in order to run scientific tests, it is often necessary to repeat the processing. Many resources were therefore invested in developing automatic procedures aimed, on the one hand, at preparing the archives and, on the other, at analysing the results and comparing them whenever several solutions are available. These procedures were developed by processing the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna). It was thus possible, at the same time, to compute the positions of the permanent stations of some important local and national networks and to compare some of the most important scientific codes that perform this task. As regards the comparison between the different software packages, it was verified that: • the solutions obtained from Bernese and Gamit (the two double-difference packages) are always in perfect agreement; • the Gipsy solutions (undifferenced method) are almost always slightly more scattered than those of the other packages and sometimes show appreciable numerical differences from the other solutions, especially in the East coordinate; the differences are, however, confined to a few millimetres, and the lines describing the trends are in any case practically parallel to those of the other two codes; • the above-mentioned East bias between Gipsy and the differenced solutions is more evident for certain antenna/radome combinations and seems to be linked to the use of absolute calibrations by the different packages. It must also be considered that Gipsy is considerably faster than the differenced codes and, above all, that with the undifferenced procedure the file of each station for each day is processed independently of the others, with evidently greater flexibility of management: if an instrumental error is found at a single station, or if it is decided to add or remove a station from the network, it is not necessary to reprocess the entire network.
Together with the other networks, it was possible to analyse the Rete Dinamica Nazionale (RDN), not only over the 28 days that led to its first definition, but also over four further 28-day intervals, spaced six months apart and therefore covering a total time span of two years. It was thus verified that the RDN can be used to frame any Italian regional network in ITRF05 (International Terrestrial Reference Frame), despite the still limited time span. On the one hand, the (purely indicative, non-official) ITRF velocities of the RDN stations were estimated; on the other, a test was carried out framing a regional network in ITRF via the RDN, and it was verified that there are no appreciable differences with respect to framing it in ITRF via a suitable number of IGS/EUREF stations (International GNSS Service / European REference Frame, SubCommission for Europe of the International Association of Geodesy).
Abstract:
The last decade has witnessed an exponential growth of activities in the field of nanoscience and nanotechnology worldwide, driven both by the excitement of understanding new science and by the potential hope for applications and economic impacts. The largest activity in this field to date has been in the synthesis and characterization of new materials consisting of particles with dimensions of the order of a few nanometers, so-called nanocrystalline materials. [1-8] Semiconductor nanomaterials such as III/V or II/VI compound semiconductors exhibit strong quantum confinement behavior in the size range from 1 to 10 nm. Therefore, the preparation of high quality semiconductor nanocrystals has been a challenge for synthetic chemists, leading to the recent rapid progress in delivering a wide variety of semiconducting nanomaterials. Semiconductor nanocrystals, also called quantum dots, possess physical properties distinctly different from those of the bulk material. Typically, in the size range from 1 to 10 nm, when the particle size is changed, the band gap between the valence and the conduction band changes, too. In a simple approximation, a particle in a box model has been used to describe the phenomenon [9] (a textbook form is sketched after this abstract): at nanoscale dimensions the degenerate energy states of a semiconductor separate into discrete states and the system behaves like one big molecule. The size-dependent transformation of the energy levels of the particles is called the "quantum size-effect". Quantum confinement of both the electron and the hole in all three dimensions leads to an increase in the effective band gap of the material with decreasing crystallite size. Consequently, both the optical absorption and emission of semiconductor nanocrystals shift to the blue (higher energies) as the size of the particles gets smaller. This color tuning is well documented for CdSe nanocrystals, whose absorption and emission cover almost the whole visible spectral range. As particle sizes become smaller, the ratio of surface atoms to those in the interior increases, which also has a strong impact on particle properties. Prominent examples are the low melting point [8] and the size/shape dependent pressure resistance [10] of semiconductor nanocrystals. Given the size dependence of particle properties, chemists and material scientists now have the unique opportunity to change the electronic and chemical properties of a material by simply controlling the particle size. In particular, CdSe nanocrystals have been widely investigated, mainly due to their size-dependent optoelectronic properties [11, 12] and flexible chemical processability [13]; they have played a distinguished role in a number of seminal studies [11, 12, 14, 15]. Potential technical applications have been discussed, too. [8, 16-27] Improvement of the optoelectronic properties of semiconductor nanocrystals is still a prominent research topic. One of the most important approaches is the fabrication of composite type-I core-shell structures, which exhibit improved properties, making them attractive from both a fundamental and a practical point of view. Overcoating nanocrystallites with higher band gap inorganic materials has been shown to increase the photoluminescence quantum yields by eliminating surface nonradiative recombination sites. [28] Particles passivated with inorganic shells are more robust than nanocrystals covered by organic ligands only and have greater tolerance to the processing conditions necessary for incorporation into solid state structures or for other applications.
Some examples of core-shell nanocrystals reported earlier include CdS on CdSe [29], CdSe on CdS [30], ZnS on CdS [31], ZnS on CdSe [28, 32], ZnSe on CdSe [33] and CdS/HgS/CdS [34]. The preparation and characterization of a new core-shell structure, CdSe nanocrystals overcoated by different shells (CdS, ZnS), is presented in chapter 4. Type-I core-shell structures as mentioned above greatly improve the photoluminescence quantum yield and the chemical and photochemical stability of nanocrystals. The emission wavelengths of type-I core/shell nanocrystals typically show only a small red-shift when compared to the plain core nanocrystals. [30, 31, 35] In contrast to type-I core-shell nanocrystals, only few studies have been conducted on colloidal type-II core/shell structures [36-38], which are characterized by a staggered alignment of conduction and valence bands giving rise to a broad tunability of absorption and emission wavelengths, as was shown for CdTe/CdSe core-shell nanocrystals. [36] The emission of type-II core/shell nanocrystals mainly originates from the radiative recombination of electron-hole pairs across the core-shell interface, leading to a long photoluminescence lifetime. Type-II core/shell nanocrystals are promising with respect to photoconduction or photovoltaic applications, as has been discussed in the literature. [39] Novel type-II core-shell structures with ZnTe cores are reported in chapter 5. The recent progress in the shape control of semiconductor nanocrystals opens new fields of applications. For instance, rod-shaped CdSe nanocrystals can enhance the photo-electro conversion efficiency of photovoltaic cells [40, 41] and also allow for polarized emission in light emitting diodes. [42, 43] Shape control of anisotropic nanocrystals can be achieved by the use of surfactants, [44, 45] regular or inverse micelles as regulating agents, [46, 47] electrochemical processes, [48] template-assisted growth [49, 50] and solution-liquid-solution (SLS) growth mechanisms. [51-53] Recently, the formation of various CdSe nanocrystal shapes has been reported by the groups of Alivisatos [54] and Peng, [55] respectively. Furthermore, it has been reported by the group of Prasad [56] that noble metal nanoparticles can induce anisotropic growth of CdSe nanocrystals at lower temperatures than typically used in other methods for preparing anisotropic CdSe structures. Although several approaches for anisotropic crystal growth have been reported by now, developing new synthetic methods for the shape control of colloidal semiconductor nanocrystals remains an important goal. Accordingly, we have attempted to utilize a crystal phase control approach for the controllable synthesis of colloidal ZnE/CdSe (E = S, Se, Te) heterostructures in a variety of morphologies. The complex heterostructures obtained are presented in chapter 6. The unique optical properties of nanocrystals make them appealing as in vivo and in vitro fluorophores in a variety of biological and chemical investigations, in which traditional fluorescence labels based on organic molecules fall short of providing long-term stability and simultaneous detection of multiple emission colours [References]. The ability to prepare water soluble nanocrystals with high stability and quantum yield has led to promising applications in cellular labeling, [57, 58] deep-tissue imaging, [59, 60] and assay labeling [61, 62]. Furthermore, appropriately solubilized nanocrystals have been used as donors in fluorescence resonance energy transfer (FRET) couples.
[63-65] Despite recent progress, much work still needs to be done to achieve reproducible and robust surface functionalization and to develop flexible (bio-)conjugation techniques. Based on multi-shell CdSe nanocrystals, several new solubilization and ligand exchange protocols have been developed, which are presented in chapter 7. The organization of this thesis is as follows: a short overview describing the synthesis and properties of CdSe nanocrystals is given in chapter 2. Chapter 3 is the experimental part, providing some background information about the optical and analytical methods used in this thesis. The following chapters report the results of this work: the synthesis and characterization of type-I multi-shell and type-II core/shell nanocrystals are described in chapter 4 and chapter 5, respectively. In chapter 6, a high-yield synthesis of various CdSe architectures by crystal phase control is reported. Experiments on the surface modification of nanocrystals are described in chapter 7. Finally, a short summary of the results is given in chapter 8.
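As a quantitative aside on the quantum size-effect described above, a common effective-mass ("particle in a box"-type, Brus) estimate of the size-dependent band gap reads (an illustrative textbook form, not taken from the thesis):

$$
E_g(R) \approx E_g^{\text{bulk}} + \frac{\hbar^2\pi^2}{2R^2}\left(\frac{1}{m_e^*}+\frac{1}{m_h^*}\right) - \frac{1.8\,e^2}{4\pi\varepsilon\varepsilon_0 R}
$$

where R is the nanocrystal radius and m_e*, m_h* are the effective electron and hole masses. The 1/R² confinement term dominates at small sizes, which is why absorption and emission shift to the blue as the particles shrink.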
Abstract:
As the title of the thesis suggests, the research carried out by the candidate during the doctoral programme concerned the study of transport law within several European legal systems, with particular attention to the practical consequences that the interpretation of uniform rules may produce in the various jurisdictions, as well as to the different approaches and the wide range of interpretative solutions that may be adopted for similar problems depending on whether a given question is discussed in one legal system rather than another. From the advent of container shipping to the need to regulate the whole subject through multimodal legislation the step is extremely short, given that, precisely because of the specific characteristics of containerised transport, cargo interests are mainly concerned with the completion of the door-to-door transfer as a whole, rather than with the successful outcome of carriage on the single maritime leg. The project for the revision of the Hague-Visby Rules adopted by the United Nations Commission on International Trade Law (UNCITRAL) and the Comité Maritime International (CMI) is by definition a project limited to multimodal transport necessarily including a maritime leg, but it nevertheless represents an interesting test bench for assessing the functionality of recently introduced instruments, such as the so-called e-documents, a concept already included in the UNCITRAL draft, although with few elements of real novelty compared with the traditional rules on paper documents. The part concerning transport documents and documents of title deserves particular attention, especially with reference to the problems connected with container traffic, and in particular to the concept of transhipment, and to the consequent need for the bill of lading to guarantee the lawful holder complete coverage over the entire journey of the goods, as well as enabling him to easily identify his contractual counterpart, namely the carrier.
Abstract:
The DNA double helix is a relatively thick (Ø ≈ 2 nm), compact and therefore, on short length scales, relatively stiff molecule (lp[dsDNA] ≈ 50-60 nm) with a well-defined structure that can be manipulated very precisely by biological methods. The influence of the primary sequence on three-dimensional structure formation is well understood and exactly predictable. Furthermore, DNA can be linked to other molecules at various positions without disturbing its self-recognition. Owing to the helical structure, there is also a relationship between the position and the spatial orientation of introduced modifications. Modern synthesis methods allow arbitrary oligonucleotide sequences of up to about 150-200 bases to be produced relatively cheaply on the milligram scale. These properties make DNA an ideal candidate for the construction of complex structures formed by self-recognition of the corresponding sequences. In the work presented here, single-stranded DNA segments (ssDNA) were used as addressable linking sites to join various molecular building blocks into discrete, non-periodic structures. The building blocks were flexible synthetic polymer blocks and semiflexible double-stranded DNA segments (dsDNA) "functionalised" at both ends with different oligonucleotide sequences. The oligonucleotide segments used for linking were chosen (n > 20 bases) such that their hybridisation leads to duplex formation that is stable at room temperature. By combining the phosphoramidite synthesis of DNA with a solid-phase block-coupling reaction, a very effective synthetic route to ssDNA1-PEO-ssDNA2 triblock copolymers was developed, demonstrated with polyethylene oxides, which should be readily transferable to other polymers. The lengths and base sequences of the two oligonucleotide segments can be chosen freely and independently of each other. This created the prerequisites for exploiting the self-recognition of oligonucleotides, by combining different triblock copolymers, to generate multiblock copolymers that are not accessible by classical synthesis techniques. Semiflexible structural elements can be realised by synthesising double-strand fragments with long overhanging ends (sticky ends). The classical approaches of molecular genetics to generating sticky ends are not practicable in this case, since they impose restrictions on the length and sequence of the overhanging ends. Two different variants of the polymerase chain reaction (PCR), based on the use of partially complementary primers, proved to be the methods of choice. The actual primer sequences were linked at the 5' end, either via a 2'-deoxyuridine or via a short polyethylene oxide spacer (n = 6), to a freely selectable sticky-end sequence. With these methods both 3' and 5' overhangs are accessible, and the length of the double-strand segments can be adjusted very precisely over a wide range of molar masses. By combining such double-strand fragments with the biosynthetic triblock copolymers, structures can be generated that serve as model systems for the investigation of various biomolecules that take the form of a multiply kinked rod. In the last section it was shown that, by suitable choice of the overhanging ends or
durch Hybridisierung der Doppelstrangfragmente mit passenden Oligonukleotiden verzweigte DNA-Strukturen mit Armlängen von einigen hundert Nanometern zugänglich sind. Im Vergleich zu den bisher veröffentlichten Methoden bietet diese Herangehensweise zwei entscheidende Vorteile: Zum einen konnte der Syntheseaufwand auf ein Minimum reduziert werden, zum anderen ist es auf diesem Weg möglich die Längen der einzelnen Arme, unabhängig voneinander, über einen breiten Molmassenbereich zu variieren.
Resumo:
It is usual to hear a strange short sentence: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, all of them tied to the topic at hand. In this thesis I will discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are obtained by shaping the second-order statistics of antipodal sequences used in intermediate signal-processing stages. The first topic is in the area of analog-to-digital conversion and is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression, so that a signal can be acquired directly in a compressed form. In this thesis, after an extensive description of the CS methodology and its related architectures, I will present a new approach that achieves high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option to produce and decode UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple-access interference (MAI) and narrowband interference (NBI) on a simple matched-filter receiver. I will show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences minimizing MAI only.
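A minimal sketch may make the CS acquisition model concrete. It assumes a random antipodal (±1) sensing matrix and a greedy orthogonal matching pursuit reconstruction, both standard textbook choices rather than the specific architecture developed in the thesis; all dimensions are illustrative.

    # Compressive sensing sketch: acquire a k-sparse signal of length n with
    # m << n random antipodal waveforms, then reconstruct it greedily.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                    # signal length, measurements, sparsity

    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # antipodal sensing matrix
    y = Phi @ x                                              # compressed acquisition

    # Orthogonal matching pursuit: pick the column most correlated with the
    # residual, then re-fit the coefficients on the selected support.
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    # At this sparsity the support is typically recovered exactly.
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))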
Resumo:
A 2D Unconstrained Third-Order Shear Deformation Theory (UTSDT) is presented for the evaluation of tangential and normal stresses in moderately thick functionally graded conical and cylindrical shells subjected to mechanical loadings. Several types of graded materials are investigated. The functionally graded material consists of ceramic and metallic constituents, described by a four-parameter power-law function. The UTSDT allows for a finite transverse shear stress at the top and bottom surfaces of the graded shell. In addition, the initial curvature effect included in the formulation leads to a generalization of the present theory (GUTSDT). The Generalized Differential Quadrature (GDQ) method is used to discretize the derivatives in the governing equations, the external boundary conditions and the compatibility conditions. Transverse shear and normal stresses are also calculated by integrating the three-dimensional equilibrium equations in the thickness direction. In this way, the six components of the stress tensor at a point of the conical or cylindrical shell or panel can be obtained. The initial curvature effect and the role of the power-law functions are shown for a wide range of functionally graded conical and cylindrical shells under various loading and boundary conditions. Finally, numerical examples from the available literature are worked out.
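As an illustration of the discretization step, the following sketch builds the GDQ first-derivative weighting coefficients from Shu's explicit formula on a Chebyshev-Gauss-Lobatto grid; the grid size and the test function are illustrative assumptions, not the settings used in the thesis.

    # GDQ idea: d/dx f(x_i) is approximated as sum_j a_ij f(x_j), where the
    # weights a_ij follow from the Lagrange interpolation polynomials.
    import numpy as np

    n = 21
    x = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))  # CGL grid on [0, 1]

    # M(x_i) = prod_{k != i} (x_i - x_k)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    M = diff.prod(axis=1)

    # Off-diagonal weights a_ij = M(x_i) / ((x_i - x_j) M(x_j)), i != j;
    # the added identity only guards the diagonal against division by zero.
    A = M[:, None] / ((x[:, None] - x[None, :] + np.eye(n)) * M[None, :])
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))   # rows sum to zero (exact on constants)

    f = np.sin(2 * np.pi * x)
    df = A @ f                            # GDQ approximation of f'
    # Spectral accuracy: the error should be tiny (~1e-10) at this grid size.
    print(np.max(np.abs(df - 2 * np.pi * np.cos(2 * np.pi * x))))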
Resumo:
Key technology applications like magnetoresistive sensors or the Magnetic Random Access Memory (MRAM) require reproducible magnetic switching mechanisms, i.e. predefined remanent states. At the same time, advanced magnetic recording schemes push the magnetic switching time into the gyromagnetic regime. According to the Landau-Lifshitz-Gilbert formalism, the relevant questions here are associated with magnetic excitations (eigenmodes) and damping processes in confined magnetic thin-film structures.

Objects of study in this thesis are antiparallel pinned synthetic spin valves, as they are extensively used as read heads in today's magnetic storage devices. In such devices a ferromagnetic layer of high coercivity is stabilized via an exchange-bias field by an antiferromagnet. A second hard magnetic layer, separated by a non-magnetic spacer of defined thickness, aligns antiparallel to the first. The orientation of the magnetization vector in the third ferromagnetic NiFe layer of low coercivity - the free layer - is then sensed via the Giant MagnetoResistance (GMR) effect. This thesis reports results of element-specific Time-Resolved Photo-Emission Electron Microscopy (TR-PEEM), used to image the magnetization dynamics of the free layer alone via X-ray Magnetic Circular Dichroism (XMCD) at the Ni-L3 X-ray absorption edge.

The ferromagnetic systems, i.e. micron-sized spin-valve stacks with typically ΔR/R = 15% and Permalloy single layers, were deposited onto the pulse-leading centre stripe of coplanar waveguides fabricated in thin-film wafer technology. The ferromagnetic platelets were prepared with varying geometry (rectangles, ellipses and squares), lateral dimension (in the range of several micrometres) and orientation to the magnetic field pulse in order to study the magnetization behaviour as a function of these parameters. The observation of magnetic switching processes in the gigahertz range became possible only through the combined effort of producing ultra-short X-ray pulses at the synchrotron source BESSY II (operated in the so-called low-alpha mode) and optimizing the waveguide design of the samples for high-frequency electromagnetic excitation (FWHM typically several hundred ps). The space and time resolution of the experiment reached d = 100 nm and Δt = 15 ps, respectively.

In conclusion, it could be shown that the magnetization dynamics of the free layer of a synthetic GMR spin-valve stack deviates significantly from a simple phase-coherent rotation. In fact, the dynamic response of the free layer is a superposition of an averaged, critically damped precessional motion and localized higher-order spin-wave modes. In a square platelet a standing spin wave with a period of 600 ps (1.7 GHz) was observed. At first glance the damping coefficient was found to be independent of the shape of the spin-valve element, favouring the model of homogeneous rotation and damping. Only by taking the difference of the magnetic rotation between the central region and the outer rim of the platelet does the spin wave become visible. As they provide an additional efficient channel for energy dissipation, spin waves contribute to a higher effective damping coefficient (α = 0.01). Damping and magnetic switching behaviour in spin valves thus depend on the geometry of the element. Micromagnetic simulations reproduce the observed higher-order spin-wave mode.

Besides the short-time behaviour of the magnetization of spin valves, Permalloy single layers with thicknesses ranging from 3 to 40 nm have been studied. The phase velocity of a spin wave in a 3 nm thick ellipse was determined to be 8,100 m/s. In a rectangular structure exhibiting a Landau-Lifshitz-like domain pattern, the speed of the field-pulse-induced displacement of a 90° Néel wall was determined to be 15,000 m/s.
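To make the gyromagnetic regime concrete, the following is a minimal macrospin sketch of the Landau-Lifshitz-Gilbert equation. The field step is an illustrative assumption, the damping is of the order of the α = 0.01 reported above, and a single homogeneous moment of course cannot reproduce the localized spin-wave modes discussed in the abstract.

    # Macrospin LLG: the unit magnetization m precesses about the effective
    # field H and relaxes towards it through Gilbert damping.
    import numpy as np

    gamma = 1.76e11                   # gyromagnetic ratio [rad/(s*T)]
    alpha = 0.01                      # Gilbert damping
    H = np.array([0.0, 0.1, 0.0])     # 100 mT in-plane field step [T]

    m = np.array([1.0, 0.0, 0.0])     # initial magnetization along x
    dt, steps = 1e-13, 20000          # 0.1 ps time step, 2 ns total

    def llg_rhs(m, H):
        """Landau-Lifshitz form of the LLG equation, dm/dt."""
        pre = -gamma / (1.0 + alpha**2)
        return pre * (np.cross(m, H) + alpha * np.cross(m, np.cross(m, H)))

    for _ in range(steps):
        m = m + dt * llg_rhs(m, H)    # explicit Euler is fine for a sketch
        m /= np.linalg.norm(m)        # keep |m| = 1

    # After ~5 precession periods m has begun to relax towards +y; with this
    # field the precession frequency is gamma*H/2pi, i.e. a few GHz.
    print("final m:", m)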
Resumo:
To assist rational compound design of organic semiconductors, two problems need to be addressed. First, the material morphology has to be known at an atomistic level. Second, with the morphology at hand, an appropriate charge-transport model needs to be developed in order to link charge-carrier mobility to structure.

The former can be addressed by generating atomistic morphologies using molecular dynamics simulations. However, the accessible range of time and length scales is limited. To overcome these limitations, systematic coarse-graining methods can be used. In the first part of the thesis, the Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) is introduced, which provides a platform for the implementation of coarse-graining methods. Tools to perform Boltzmann inversion, iterative Boltzmann inversion, inverse Monte Carlo, and force-matching are available and have been tested on a set of model systems (water, methanol, propane and a single hexane chain). Advantages and problems of each specific method are discussed.

In partially disordered systems, the second issue is closely connected to constructing appropriate diabatic states between which charge transfer occurs. In the second part of the thesis, the description initially used for small conjugated molecules is extended to conjugated polymers. Here, charge transport is modeled by introducing conjugated segments on which charge carriers are localized. Inter-chain transport is then treated within high-temperature non-adiabatic Marcus theory, while an adiabatic rate expression is used for intra-chain transport. The charge dynamics is simulated using the kinetic Monte Carlo method.

The entire framework is finally employed to establish a relation between the morphology and the charge mobility of the neutral and doped states of polypyrrole, a conjugated polymer. It is shown that for short oligomers the charge-carrier mobility is insensitive to the orientational molecular ordering and is determined by the threshold transfer integral which connects percolating clusters of molecules forming interconnected networks. The value of this transfer integral can be related to the radial distribution function. Hence, charge mobility is mainly determined by the local molecular packing and is independent of the global morphology, at least in such a non-crystalline state of a polymer.
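A minimal sketch of the inter-chain transport scheme named above: non-adiabatic high-temperature Marcus rates between localized sites, sampled with a kinetic Monte Carlo step. The site energies, transfer integrals and reorganization energy are random illustrative numbers, not the polypyrrole parameters of the thesis.

    # Marcus rates + one KMC hop on a toy set of fully connected sites.
    import numpy as np

    kB, hbar = 8.617e-5, 6.582e-16        # Boltzmann const [eV/K], hbar [eV*s]
    T, lam = 300.0, 0.3                   # temperature [K], reorganization energy [eV]

    rng = np.random.default_rng(1)
    n_sites = 6
    E = rng.normal(0.0, 0.05, n_sites)    # site energies [eV]
    J = np.abs(rng.normal(5e-3, 2e-3, (n_sites, n_sites)))
    J = (J + J.T) / 2                     # symmetric transfer integrals [eV]

    def marcus_rate(J_ij, dE):
        """Non-adiabatic high-temperature Marcus rate [1/s], dE = E_j - E_i."""
        return (2 * np.pi / hbar) * J_ij**2 / np.sqrt(4 * np.pi * lam * kB * T) \
            * np.exp(-(dE + lam) ** 2 / (4 * lam * kB * T))

    def kmc_step(i, t):
        """One kinetic Monte Carlo hop from site i; returns (new site, new time)."""
        rates = np.array([marcus_rate(J[i, j], E[j] - E[i]) if j != i else 0.0
                          for j in range(n_sites)])
        total = rates.sum()
        j = rng.choice(n_sites, p=rates / total)   # hop chosen proportional to rate
        return j, t - np.log(rng.random()) / total # exponential waiting time

    site, t = 0, 0.0
    for _ in range(10):
        site, t = kmc_step(site, t)
    print(f"after 10 hops: site {site}, elapsed time {t:.3e} s")

In a production setting the hop statistics would be accumulated over long trajectories and many morphology snapshots to extract a mobility; the sketch only shows the rate expression and the sampling step.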