958 results for Finite Difference Time Domain Method
Abstract:
The need for high bandwidth, driven by the explosion of new multimedia-oriented IP-based services as well as by increasing broadband access requirements, is leading to the need for flexible and highly reconfigurable optical networks. While transmission bandwidth does not represent a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network represent the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. In this Ph.D. thesis, solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks, are proposed. In particular, solutions based on devices and components that are expected to mature in the near future are proposed, with the aim of limiting the employment of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching has been carried out within three relevant research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union in the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and to contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in chapter 2. Then, chapter 4 presents multi-fiber switches, which jointly employ the wavelength and space domains to solve contention. Chapter 5 shows buffered switches, which solve contention in the time domain besides the wavelength domain. Finally, chapter 6 presents a cost model to compare different switch architectures in terms of cost.
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation against data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA).

PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to succeed in grouping together the features of the same object (binding problem) and in keeping segregated the properties belonging to different objects simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).

PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, like multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the 6-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
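The abstract does not give the oscillator equations of the Part 1 models; as a point of reference only, a minimal Wilson-Cowan excitatory/inhibitory pair can be sketched as below, using the classic Wilson-Cowan (1972) coupling constants. The time constant tau is an illustrative choice that places the rhythm at gamma-like frequencies, not a value taken from the thesis.

```python
import numpy as np

def S(x, a, th):
    """Wilson-Cowan sigmoid, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - th))) - 1.0 / (1.0 + np.exp(a * th))

# Classic Wilson-Cowan limit-cycle constants (Wilson & Cowan, 1972).
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
P, tau, dt = 1.25, 0.003, 1e-4          # drive, time constant (s), step (s)

E, I, trace = 0.3, 0.3, []
for _ in range(10000):                   # simulate 1 s of activity
    dE = (-E + (1.0 - E) * S(c1 * E - c2 * I + P, a_e, th_e)) / tau
    dI = (-I + (1.0 - I) * S(c3 * E - c4 * I, a_i, th_i)) / tau
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)

# With millisecond-scale time constants the self-sustained rhythm lands
# in the tens-of-Hz range; binding models couple many such pairs.
x = np.array(trace) - np.mean(trace)
freqs = np.fft.rfftfreq(x.size, d=dt)
print(f"dominant rhythm: {freqs[np.abs(np.fft.rfft(x)).argmax()]:.0f} Hz")
```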
Abstract:
Possible deformation mechanisms leading to the various types of mica fish and mineral fish are: intracrystalline deformation, crystal rotation, bending and folding, pressure solution in combination with precipitation, and dynamic recrystallization, or mechanisms that split a large mineral into several small, fish-shaped crystals. Experiments with a new deformation apparatus and objects embedded in two different matrix materials are described. One matrix is PDMS (a Newtonian viscous polymer), the other consists of tapioca pearls (Mohr-Coulomb behaviour). The rotation of fish-shaped objects in PDMS agrees with the theoretical rotation rate for elliptical objects in a Newtonian material. In a matrix of tapioca pearls the objects assume a stable orientation, comparable to that of mica fish. Deformation in the tapioca-pearl matrix is concentrated in thin shear zones. These results imply that deformation in natural rocks is likewise concentrated in thin shear zones. Computer simulations are described which investigate the influence of the matrix properties on the rotation of objects and on the distribution of deformation. These experiments show that the orientation of mica fish cannot be explained by deformation in a non-linear viscous material, although such a non-linear rheology is generally assumed for the Earth's crust. The stable orientation of an object can instead be explained by softer layers in the matrix.
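The "theoretical rotation rate for elliptical objects in a Newtonian material" referred to above is presumably the classical Ghosh & Ramberg (1976) result for a rigid ellipse in two-dimensional simple shear; a minimal sketch of it follows, with aspect ratio, shear strain and starting angle chosen for illustration rather than taken from the experiments.

```python
import numpy as np

def rotation_rate(phi, R, gamma_dot=1.0):
    """Rotation rate of a rigid ellipse of aspect ratio R in 2-D simple
    shear (Ghosh & Ramberg, 1976); phi is the angle between the long
    axis and the shear plane."""
    return gamma_dot * (R**2 * np.cos(phi)**2 + np.sin(phi)**2) / (R**2 + 1.0)

# In a Newtonian matrix an elongate, fish-shaped object rotates
# continuously, slowing but never stopping when its long axis lies near
# the shear plane; a truly stable orientation therefore needs another
# explanation (e.g. softer layers in the matrix, as argued above).
R, dt, phi = 3.0, 1e-3, np.radians(10.0)
for _ in range(int(10.0 / dt)):          # integrate to shear strain gamma = 10
    phi += dt * rotation_rate(phi, R)
print(f"orientation after gamma = 10: {np.degrees(phi) % 180:.1f} deg")
```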
Abstract:
Traditional procedures for rainfall-runoff model calibration are generally based on fitting the individual values of simulated and observed hydrographs. An alternative option is used here, carried out by matching, in the optimisation process, a set of statistics of the river flow. Such an approach has the additional, significant advantage of also allowing a straightforward regional calibration of the model parameters, based on the regionalisation of the selected statistics. The minimisation of the set of objective functions is carried out using the AMALGAM algorithm, leading to the identification of behavioural parameter sets. The procedure is applied to a set of river basins located in central Italy: the basins are treated alternately as gauged and ungauged and, as a term of comparison, the results obtained with a traditional time-domain calibration are also presented. The results show that a suitable choice of the statistics to be optimised leads to interesting results in real-world case studies as far as the reproduction of the different flow regimes is concerned.
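The abstract does not list which flow statistics are matched; purely as an illustration, an objective vector of this kind can be assembled as follows. The particular statistics (mean flow, coefficient of variation, flow-duration-curve quantiles) are assumptions, not necessarily those used in the thesis.

```python
import numpy as np

def flow_signatures(q):
    """Summary statistics ('signatures') of a flow series; the choice of
    statistics here is illustrative, not the thesis's."""
    q = np.asarray(q, dtype=float)
    return np.array([
        q.mean(),                      # mean flow
        q.std() / q.mean(),            # coefficient of variation
        np.quantile(q, 0.05),          # low-flow quantile of the FDC
        np.quantile(q, 0.95),          # high-flow quantile of the FDC
    ])

def objective_vector(q_sim, sig_target):
    """Relative error on each signature; a multi-objective optimiser such
    as AMALGAM would minimise these jointly. For an ungauged basin the
    target signatures would come from regionalised statistics."""
    return np.abs(flow_signatures(q_sim) - sig_target) / np.abs(sig_target)

# toy usage with synthetic flow series
rng = np.random.default_rng(0)
q_obs = rng.lognormal(mean=1.0, sigma=0.8, size=3650)
q_sim = rng.lognormal(mean=1.1, sigma=0.7, size=3650)
print(objective_vector(q_sim, flow_signatures(q_obs)))
```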
Abstract:
Conjugated polymers have attracted tremendous academic and industrial research interest over the past decades due to the appealing advantages that organic/polymeric materials offer for electronic applications and devices such as organic light emitting diodes (OLED), organic field effect transistors (OFET), organic solar cells (OSC), photodiodes and plastic lasers. The optimization of organic materials for applications in optoelectronic devices requires detailed knowledge of their photophysical properties, for instance the energy levels of excited singlet and triplet states, excited-state decay mechanisms and charge carrier mobilities. In the present work a variety of different conjugated (co)polymers, mainly polyspirobifluorene- and polyfluorene-type materials, was investigated using time-resolved photoluminescence spectroscopy in the picosecond to second time domain, to study their elementary photophysical properties and to gain deeper insight into structure-property relationships. The experiments cover fluorescence spectroscopy using streak camera techniques as well as time-delayed gated detection techniques for the investigation of delayed fluorescence and phosphorescence. All measurements were performed in the solid state, i.e. on thin polymer films, as well as in dilute solution. Starting from the elementary photophysical properties of conjugated polymers, the experiments were extended to studies of singlet and triplet energy transfer processes in polymer blends, polymer-triplet emitter blends and copolymers. The phenomenon of photon energy upconversion was investigated in blue light-emitting polymer matrices doped with metallated porphyrin derivatives, assuming a bimolecular annihilation upconversion mechanism which could be experimentally verified on a series of copolymers. This mechanism allows for more efficient photon energy upconversion than previously reported for polyfluorene derivatives. In addition to the spectroscopic experiments described above, amplified spontaneous emission (ASE) in thin-film polymer waveguides was studied employing a fully-arylated poly(indenofluorene) as the gain medium. It was found that the material exhibits a very low threshold value for the amplification of blue light combined with an excellent oxidative stability, which makes it interesting as an active material for organic solid-state lasers. Apart from the spectroscopic experiments, transient photocurrent measurements on conjugated polymers were also performed to elucidate the charge carrier mobility in the solid state, an important material parameter for device applications. A modified time-of-flight (TOF) technique using a charge carrier generation layer made it possible to study hole transport in a series of spirobifluorene copolymers and to unravel the structure-mobility relationship by comparison with the homopolymer. Not only could the charge carrier mobility be determined for the series of polymers, but field- and temperature-dependent measurements analyzed in the framework of the Gaussian disorder model also showed that the results coincide very well with the predictions of the model. Thus, the validity of the disorder concept for charge carrier transport in amorphous glassy materials could be verified for the investigated series of copolymers.
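For reference, the Gaussian disorder model mentioned above predicts, in Bässler's standard formulation (notation assumed here, not quoted from the thesis), a field and temperature dependence of the mobility of the form

```latex
\mu(E,T) \;=\; \mu_{\infty}\,
  \exp\!\left[-\left(\frac{2\sigma}{3k_{B}T}\right)^{2}\right]
  \exp\!\left\{C\left[\left(\frac{\sigma}{k_{B}T}\right)^{2}-\Sigma^{2}\right]\sqrt{E}\right\},
```

where σ is the width of the Gaussian density of states, Σ the positional disorder parameter and C an empirical constant; fitting field- and temperature-dependent TOF mobilities to this expression is the usual way of testing the disorder concept.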
Abstract:
Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. For this, a complete renormalization scheme for models that allow for fermion mixing is much needed. The correct treatment of unstable particles makes this task difficult, and as yet no satisfactory and general solution can be found in the literature. In the present work, we study the renormalization of the fermion Lagrange density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second is an application to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave function renormalization constants are determined such that the subtracted propagator is diagonal on-shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion are different from the ones that should renormalize the outgoing fermion and the incoming antifermion, and are not related to them by hermiticity, as would be desired. Instead of defining field renormalization constants identical to the wave function renormalization ones, we differentiate the two by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections of external legs. In the case of Majorana fermions, the constraints on the model are less restrictive, and here one might have a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counterterm for fermion mixing matrices cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge parameter dependence in its counterterm. Therefore, we propose to determine the mixing matrix counterterm by fixing the complete correction terms for a physical process to experimental measurements. As an example, we calculate the decay rate of a top quark and of a heavy neutrino. We provide in each of the chosen models sample calculations that can be easily extended to other theories.
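As a reminder of the general setting (the notation below is generic, assumed here rather than quoted from the thesis), with mixing the full propagator is matrix-valued, and the physical masses M_i and widths Γ_i are identified from its complex poles,

```latex
S(p) \;=\; \frac{i}{\slashed{p}\,\mathbb{1} - m - \Sigma(p)}\,,
\qquad
\det\!\left[\slashed{p}\,\mathbb{1} - m - \Sigma(p)\right]\Big|_{p^{2}=\bar{s}_{i}} = 0\,,
\qquad
\bar{s}_{i} = M_{i}^{2} - i\,M_{i}\,\Gamma_{i}\,,
```

while the wave function renormalization constants are then fixed by requiring the subtracted propagator to be diagonal on-shell, as stated above.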
Abstract:
Networks of smart objects are a reality that is establishing itself in everyday life. Devices capable of communicating with each other, in addition to performing their primary function, can form a cloud associated with their legitimate owner. A fundamental aspect of this scenario is security, in particular guaranteeing protected communication. Meeting this requirement is also fundamental for other issues such as the integrity of the shared information and authentication. The oldest tool, and one still suited to the confidentiality of a communication, is cryptography. A cryptographic technique is schematically composed of an algorithm that, depending on a key and on the input message, outputs an encrypted message, the ciphertext. This is then sent to the legitimate recipient who, being in possession of the key and the algorithm, converts it back into the original message. The goal is to make it impossible for a malicious user, one without the key, to reconstruct the message. The assumption that the algorithm may also be known to third parties focuses attention on the key. The key must be sufficiently long and random, but above all it must be known to the two users who intend to establish the communication. This last problem, known as key distribution, has been solved by the RSA public-key system. The computational cost of this technique, however, especially in a context of devices without large computing power, suggests the search for alternative and less costly approaches. A promising solution, currently under study, appears to be offered by the properties of the wireless channel. A radio link is characterised by a transfer function that depends on the surrounding environment and, by the reciprocity theorem, is the same for the two users who established the link. The subject of this thesis is the study and comparison of some of the possible techniques for extracting a secret key from a shared medium such as the wireless channel. The context in which the work is developed is presented first. Two cases of interest are then addressed: the current narrowband signal propagation technology (used for most wireless transmissions) and the relatively more recent Ultra-wideband (UWB) technology. Techniques for obtaining bit strings from the acquired signals are then illustrated, and methods for correcting possible discrepancies between them are proposed. Finally, conclusions on the work carried out and possible future directions are reported.
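As an illustration of the kind of technique the thesis compares (the exact schemes are not specified in the abstract), a simple mean-threshold quantizer can turn each user's channel-gain measurements into a bit string; the small residual mismatch between the two strings would then be removed by a reconciliation step. All parameters below are illustrative assumptions.

```python
import numpy as np

def quantize_channel(gains, guard=0.1):
    """Threshold channel-gain samples at their mean to obtain bits;
    samples inside a guard band around the threshold are dropped, a
    common trick to reduce disagreements between the two users. This is
    a generic illustrative scheme, not the thesis's exact method."""
    gains = np.asarray(gains, dtype=float)
    mu, sigma = gains.mean(), gains.std()
    keep = np.abs(gains - mu) > guard * sigma   # kept indices (agreed publicly)
    bits = (gains[keep] > mu).astype(int)
    return bits, np.flatnonzero(keep)

# toy reciprocal channel: both users observe the same fading plus noise
rng = np.random.default_rng(1)
h = rng.rayleigh(scale=1.0, size=256)           # shared channel gains
alice, idx_a = quantize_channel(h + 0.05 * rng.standard_normal(256))
bob, idx_b = quantize_channel(h + 0.05 * rng.standard_normal(256))
common = np.intersect1d(idx_a, idx_b)           # samples both users kept
a, b = dict(zip(idx_a, alice)), dict(zip(idx_b, bob))
mismatch = sum(a[i] != b[i] for i in common) / len(common)
print(f"key length: {len(common)} bits, mismatch rate: {mismatch:.2%}")
```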
Abstract:
The aim of this study was to investigate the influence of diaphragm flexibility on the behavior of out-of-plane walls in masonry buildings. Simplified models have been developed to perform kinematic and dynamic analyses in order to compare the response of walls with different restraint conditions. Non-linear kinematic analyses of assemblages of rigid blocks have been performed to obtain the acceleration-displacement curves for walls with different restraint conditions at the top. A simplified 2DOF model has been developed to analyse the dynamic response of the wall with an elastic spring at the top, following the Housner rigid-behaviour hypothesis. The dissipation of energy is concentrated at every impact at the base of the wall and is modelled through the introduction of the coefficient of restitution. The sets of equations for the possible configurations of the wall, depending on the different positions of the centre of rotation at the base and at the intermediate hinge, have been obtained. An algorithm for the numerical integration of the sets of equations of motion in the time domain has been developed. Dynamic analyses of a set of walls with Gaussian impulse and recorded accelerogram inputs have been performed in order to compare the response of the simply supported wall with that of the wall with an elastic spring at the top. The influence of the diaphragm stiffness Kd has been investigated by determining the variation of the maximum displacement demand with the value of Kd. A more regular trend has been obtained for the Gaussian input than for the recorded accelerograms.
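To make the Housner-type integration scheme concrete, the sketch below integrates the classical single rocking block (a simpler case than the thesis's 2DOF wall model) under a Gaussian acceleration pulse, with energy loss applied through a restitution coefficient at each impact. Geometry, pulse and sign conventions are illustrative assumptions.

```python
import numpy as np

# Classical Housner rocking block: rigid block with semi-diagonal R and
# slenderness angle alpha, rocking about its base corners.
g, R, alpha = 9.81, 1.0, 0.15              # m/s^2, m, rad (illustrative)
p2 = 3.0 * g / (4.0 * R)                   # p^2 = mgR/I_O, rectangular block
e = 1.0 - 1.5 * np.sin(alpha) ** 2         # Housner's restitution coefficient
dt = 1e-4

def ground_accel(t):
    """Gaussian acceleration pulse, echoing the abstract's Gaussian inputs."""
    return 4.0 * np.exp(-((t - 0.5) / 0.05) ** 2)

theta = omega = t = peak = 0.0
while t < 5.0:
    ag = ground_accel(t)
    if theta == 0.0 and omega == 0.0 and abs(ag) <= g * np.tan(alpha):
        t += dt                            # at rest: no uplift below threshold
        continue
    s = np.sign(theta) if theta != 0.0 else -np.sign(ag)
    omega += dt * (-p2) * (np.sin(s * alpha - theta)
                           + (ag / g) * np.cos(s * alpha - theta))
    new_theta = theta + dt * omega
    if theta != 0.0 and np.sign(new_theta) != np.sign(theta):
        new_theta, omega = 0.0, e * omega  # impact: velocity reduced by e
    theta = new_theta
    peak = max(peak, abs(theta))
    t += dt
print(f"peak rotation: {peak / alpha:.2f} x alpha")
```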
Abstract:
This Ph.D. thesis focuses on the investigation of some chemical and sensory analytical parameters linked to the quality and purity of different categories of oils obtained from olives: extra virgin olive oils, both those sold in large retail trade (supermarkets and discount stores) and those collected directly at some Italian mills, and lower-quality oils (refined, lampante and "repaso"). Alongside the adoption of traditional and well-known analytical procedures such as gas chromatography and high-performance liquid chromatography, I carried out the set-up of innovative, fast and environmentally friendly methods. For example, I developed analytical approaches based on Fourier transform mid-infrared spectroscopy (FT-MIR) and time domain reflectometry (TDR), coupled with a robust chemometric elaboration of the results. I also investigated some freshness and quality markers that are not included among the official parameters (in Italian and European regulations): the adoption of such a full chemical and sensory analytical plan allowed me to obtain interesting information about the degree of quality of the EVOOs, mostly within the Italian market. Here the range of quality of EVOOs proved to be very wide, in terms of sensory attributes, price classes and chemical parameters. Thanks to the collaboration with other Italian and foreign research groups, I carried out several applicative studies, especially focusing on the shelf-life of oils obtained from olives and on the effects of thermal stresses on the quality of the products. I also studied some innovative technological treatments, such as clarification using inert gases as an alternative to traditional filtration. Moreover, during a three-and-a-half-month research stay at the University of Applied Sciences in Zurich, I carried out a study on the application of statistical methods to the elaboration of sensory results, obtained thanks to the official Swiss Panel and to some consumer tests.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous work through the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
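As a reminder of the classical power-balance idea that the first model builds on (the symbols below are generic, not the thesis's notation), the absorbed laser power is equated with the enthalpy flux of the material removed by a beam moving at speed v:

```latex
\eta P \;=\; \rho\, v\, w\, d \,\left[\, c_{p}\,(T_{v}-T_{0}) + L_{m} + L_{v} \,\right],
```

where η is the absorptivity, w and d the kerf width and depth, c_p the specific heat, T_v the vaporisation temperature and L_m, L_v the latent heats of melting and vaporisation; a multi-layer version applies this balance layer by layer with each layer's own optical and thermal properties.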
Abstract:
The first aim of this experimental project was to evaluate the chemical and sensory quality of virgin olive oils obtained from olives stored for different lengths of time before processing. Indeed, when olives arrive at the mill, their processing can often be delayed because the processing capacity of the plant is small compared with the quantity of product to be processed. The storage time and conditions of the drupes are very important factors for the organoleptic quality of the oil produced. To this end, olives of two different varieties (Correggiolo and Canino) were purchased and divided into two aliquots. The first aliquot of each batch of olives was crushed on delivery (time 0); the second was stored under non-ideal storage and humidity conditions for 7 days (Correggiolo variety) or 15 days (Canino variety) and then crushed. For all samples, the quality parameters established by Regulation (EEC) No 2568/91 and subsequent amendments were evaluated: free acidity and peroxide value by titration, specific extinctions in the ultraviolet by spectrophotometry, gas chromatographic determination of fatty acid alkyl esters, and sensory analysis according to the "Panel test". Compositional parameters not currently covered by the official regulations were also evaluated: the qualitative and quantitative gas chromatographic profiles of diglycerides and of volatile components (SPME). The results obtained for samples from "fresh" olives and from olives stored for different times were compared with the legal limits defining the commercial categories of virgin olive oils (extra virgin, virgin or lampante) and, for the non-regulated parameters, the main differences in relation to product quality were evaluated. The second aim of the project was to evaluate the possibility of rapidly discriminating olive oils characterised by different concentrations of fatty acid alkyl esters by means of Time Domain Reflectometry (TDR). The chemical method proposed by Regulation (EU) No 61/2011 is in fact long and expensive, requiring a preparative separation followed by gas chromatographic analysis. TDR, by contrast, offers a rapid, inexpensive and non-destructive alternative method for evaluating and discriminating olive oils on the basis of this quality parameter.
Abstract:
The aim of this research is the development and validation of a comprehensive multibody motorcycle model featuring rigid-ring tires, taking into account both the slope and the roughness of road surfaces. A novel parametrization of the general kinematics of the motorcycle is proposed, using a mixed reference-point and relative-coordinates approach. The resulting description, developed in terms of dependent coordinates, makes it possible to efficiently include rigid-ring kinematics as well as road elevation and slope. The equations of motion for the multibody system are derived symbolically, and the constraint equations arising from the dependent-coordinate formulation are handled using a projection technique. The resulting system of equations can therefore be integrated in the time domain using a standard ODE algorithm. The model is validated against maneuvers experimentally measured on the race track, showing consistent results and excellent computational efficiency. In more detail, it is also capable of reproducing the chatter vibration of racing motorcycles. The chatter phenomenon, appearing during high-speed cornering maneuvers, consists of a self-excited vertical oscillation of both the front and rear unsprung masses in the frequency range between 17 and 22 Hz. A critical maneuver is numerically simulated, and a self-excited vibration appears, consistent with the experimentally measured chatter vibration. Finally, the driving mechanism of the self-excitation is highlighted and a physical interpretation is proposed.
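For context, the projection idea mentioned above can be summarised as follows (generic notation, assumed here rather than quoted from the thesis). With dependent coordinates q, constraints Φ(q) = 0 and constraint Jacobian Φ_q, the constrained equations of motion

```latex
M(q)\,\ddot{q} + \Phi_{q}^{\mathsf{T}}\lambda = Q(q,\dot{q}), \qquad \Phi(q)=0,
```

are projected onto the null space of Φ_q by any matrix R(q) with Φ_q R = 0 and q̇ = R ż, which eliminates the Lagrange multipliers:

```latex
R^{\mathsf{T}} M R\,\ddot{z} \;=\; R^{\mathsf{T}}\!\left(Q - M\,\dot{R}\,\dot{z}\right).
```

The result is an ODE in the independent velocities ż that a standard time-domain integrator can handle, which is what makes the symbolic multibody formulation compatible with off-the-shelf ODE solvers.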
Abstract:
The lung represents a major site of CMV latency. Acute CMV infection is terminated by infiltrating antiviral CD8 T cells. The viral genome, however, is retained in the lung tissue in a non-replicative state, latency. It had already been shown that during latency the major immediate-early (MIE) genes ie1 and ie2 are sporadically transcribed. So far, this incipient reactivation of latent CMV genomes could only be demonstrated as a snapshot (Kurz et al., 1999; Grzimek et al., 2001; Simon et al., 2005; for review: Reddehase et al., 2008). The sporadic expression of the MIE genes, however, leads to the presentation of an antigenic IE1 peptide and thus to the stimulation of antiviral IE1-peptide-specific CD8 T cells, which through their effector function terminate the incipient reactivation. This led us to the hypothesis that MIE gene expression, viewed over a period of time (period prevalence), takes place more frequently than can be observed in a snapshot (point prevalence).

To capture the frequency of MIE gene expression dynamically over a defined period, a method had to be developed that, for the first time, makes it possible to selectively and conditionally delete transcriptionally active cells, both during acute infection and during latency. To this end, a recombinant death-tagged virus was generated by two-step BAC mutagenesis, carrying the gene for the diphtheria toxin receptor (DTR) under the control of the ie2 promoter (P2). When P2 is transcriptionally active, the DTR is presented on the cell surface and the cell becomes susceptible to its ligand, diphtheria toxin (DT). Administration of DT therefore deletes all cells in which viral genomes are transcriptionally active. With increasing duration of DT treatment, the amount of latent viral genomes should thus decrease.

In Western blot analyses the DTR protein could be detected as early as 2 h after infection. The presentation of the DTR on the cell surface was demonstrated indirectly through its functionality: the recombinant virus could no longer replicate in fibroblasts in the presence of DT. In acutely infected animals the amount of viral DNA was significantly reduced by a single intravenous (i.v.) dose of DT, and this effect was strengthened by repetitive i.v. DT administration. During latency as well, the number of latent viral genomes was reduced by repetitive i.v. followed by intraperitoneal (i.p.) DT administration, with a reduction of 60% achieved depending on the duration of DT treatment. Corresponding to the reduction in the amount of DNA, the reactivation frequency of the recombinant virus in lung explant cultures also decreased.

To be able to calculate the reactivation frequency during latency, the number of latent viral genomes per cell was determined by limiting dilution analysis, yielding a copy number of 9 (6 to 13). Based on these results it can be calculated that, with respect to the whole lung, over the tested period of 184 h the DT treatment deleted 1,000 to 2,500 genomes per hour, corresponding to the deletion of 110 to 280 MIE-gene-expressing lung cells per hour. Thus, this work depicts for the first time the dynamics of latency-associated gene expression.
Abstract:
Yeasts constitute a large and important part of the microbiota during winemaking, since without their alcoholic fermentation the transformation of must into wine would not be possible. Moreover, their wide range of metabolic products lends additional complexity to the aroma of the finished wine. On the other hand, the metabolism of various so-called wild yeasts carries the risk of quality degradation of the wines, generally regarded as "wine faults". The aim of this work was, on the one hand, the taxonomic classification of Saccharomyces species and, on the other, the quantification and inhibition of selected wild yeasts during winemaking.

One part of this work comprised the identification of the closely related members of the Saccharomyces sensu stricto group. Using the DNA fingerprinting system SAPD-PCR, all species of this group could be detected by means of specific band patterns, making it possible to classify these hard-to-differentiate species. The differentiation between individual species was in every case clearer than that achieved by sequencing the 5.8S rDNA and its flanking ITS regions. The SAPD-PCR was also characterised by low pattern variance among different strains of a species, and could reliably identify unknown strains and reclassify already deposited strains. In addition, this system made it possible to detect hybrids of Saccharomyces cerevisiae and S. bayanus, or of S. cerevisiae and S. kudriavzevii, provided these hybrids consisted of roughly equal genomic contributions from the parents.

Furthermore, a quantitative PCR system was developed to detect and quantify the genera Saccharomyces, Hanseniaspora and Brettanomyces in must and wine. The primers developed for this purpose proved specific for the species investigated. By serial dilution of defined amounts of DNA, a calibration curve was established for each of the three systems, with the help of which the actual quantifications were performed. The qPCR analysis yielded cell counts similar to viable cell counts and was not disturbed by other species or by grape juice. The maximum detectable cell count was 2 x 10^7 cells/ml, while the minimum detection limit, depending on the species, lay between 1 x 10^2 cells/ml and 1 x 10^3 cells/ml. However, effective DNA isolation from these low cell counts could only be achieved when the cell count was artificially raised with yeasts of other species. The analysis of a must fermentation with the three species finally showed that quantitative PCR reliably and quickly detects changes and successions, and thus represents a suitable tool for monitoring population dynamics during winemaking.

The last part of this work dealt with the inhibition of spoilage yeasts by cell-wall-hydrolysing enzymes. An endoglycosidically acting β-1,3-glucanase was isolated from the bacterium Delftia tsuruhatensis. It had an approximate mass of 28 kDa, an isoelectric point of about 4.3, and acted with a specific activity of 10 U/mg protein against the glucan laminarin. The enzyme showed a temperature optimum of 50 °C and a pH optimum at pH 4.0. Wine parameters such as elevated concentrations of ethanol, phenols and sulfite influenced the activity of the enzyme little or not at all. Besides its general activity against β-1,3-glucans, it could also be shown that the β-1,3-glucans in the cell walls of various yeasts were hydrolysed equally well. Fluorescence and scanning electron micrographs of yeast cells after incubation with the β-1,3-glucanase additionally showed the destruction of the yeast cell surface. The lytic activity of the enzyme was tested on various wine-related yeasts, revealing strain-specific differences in sensitivity to the enzyme. It was also found that both the growth phase and the growth medium of the yeasts influence their cell wall and thus also the action of the enzyme.
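As a sketch of how such a dilution-series calibration is typically used (the numbers below are illustrative, not the thesis's data), the Ct values of a serial dilution are fitted linearly against log10 of the known cell count, and unknown samples are then quantified by inverting the fit:

```python
import numpy as np

# Hypothetical dilution series: known cell counts and measured Ct values.
cells = np.array([1e3, 1e4, 1e5, 1e6, 1e7])      # cells/ml in the standards
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])    # illustrative Ct readings

# Linear standard curve: Ct = slope * log10(cells) + intercept.
slope, intercept = np.polyfit(np.log10(cells), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency
print(f"slope {slope:.2f}, efficiency {efficiency:.1%}")

def quantify(ct_sample):
    """Invert the standard curve to estimate cells/ml from a measured Ct."""
    return 10 ** ((ct_sample - intercept) / slope)

print(f"sample with Ct 25.0 -> {quantify(25.0):.2e} cells/ml")
```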
Abstract:
A new control scheme is presented in this thesis. Based on the NonLinear Geometric Approach, the proposed Active Control System represents a new way to see reconfigurable controllers for aerospace applications. The presence of the Diagnosis module (providing the estimation of generic signals which, depending on the case, can be faults, disturbances or system parameters), the main feature of the depicted Active Control System, is a characteristic shared by three well-known control systems: Active Fault Tolerant Controls, Indirect Adaptive Controls and Active Disturbance Rejection Controls. The standard NonLinear Geometric Approach (NLGA) has been thoroughly investigated and then improved to extend its applicability to more complex models. The standard NLGA procedure has been modified to take account of feasible and estimable sets of unknown signals. Furthermore, the application of the Singular Perturbations approximation has led to the solution of detection and isolation problems in scenarios too complex to be solved by the standard NLGA. The estimation process has also been improved, where multiple redundant measurements are available, by the introduction of a new algorithm, here called "Least Squares - Sliding Mode". It guarantees optimality, in the sense of least squares, and finite estimation time, in the sense of the sliding mode. The Active Control System concept has been formalized in two controllers: a nonlinear backstepping controller and a nonlinear composite controller. Particularly interesting is the integration, in the controller design, of the estimations coming from the Diagnosis module. Stability proofs are provided for both control schemes. Finally, different aerospace applications are presented to show the applicability and effectiveness of the proposed NLGA-based Active Control System.