951 results for finite-difference time-domain (FDTD) method
Abstract:
Nuclear magnetic resonance (NMR) is one of the most versatile analytical techniques for chemical, biochemical and medical applications. Despite this great success, NMR is seldom used as a tool in industrial applications. The first application of NMR to flowing samples was published in 1951. However, only in the last ten years has flow NMR gained momentum, with new and potential applications being proposed. In this review we present the historical evolution of flow (or online) NMR spectroscopy and imaging, and current developments for its use in the automation of industrial processes.
Abstract:
This study compared the effectiveness of multifocal visual evoked cortical potentials (mfVEP) elicited by pattern pulse stimulation with that of pattern reversal in producing reliable responses (signal-to-noise ratio > 1.359). Participants were 14 healthy subjects. Visual stimulation was delivered using a 60-sector dartboard display consisting of 6 concentric rings presented in either pulse or reversal mode. Each sector, consisting of 16 checks at 99% Michelson contrast and 80 cd/m² mean luminance, was controlled by a binary m-sequence in the time domain. The signal-to-noise ratio was generally larger in the pattern reversal mode than in the pattern pulse mode. The number of reliable responses was similar in the central sectors for the two stimulation modes. At the periphery, pattern reversal yielded a larger number of reliable responses. Pattern pulse stimuli performed similarly to pattern reversal stimuli in generating reliable waveforms in R1 and R2. The advantage of using both protocols to study mfVEP responses is their complementarity: in some patients, reliable waveforms in specific sectors may be obtained with only one of the two methods. The joint analysis of pattern reversal and pattern pulse stimuli increased the rate of reliability for central sectors by 7.14% in R1, 5.35% in R2, 4.76% in R3, 3.57% in R4, 2.97% in R5, and 1.78% in R6. From R1 to R4 the reliability of generating mfVEPs was above 70% when both protocols were used. Thus, for very high reliability and a thorough examination of visual performance, it is recommended to use both stimulation protocols.
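The binary m-sequence driving each sector is a maximal-length pseudo-random binary sequence, conventionally generated with a linear-feedback shift register. A minimal Python sketch, assuming an illustrative 7-bit register (the register length and taps of the actual stimulator are not stated in the abstract):

    # LFSR-based m-sequence generator (illustrative sketch).
    # Taps (7, 6) correspond to the primitive polynomial x^7 + x^6 + 1,
    # giving a maximal period of 2**7 - 1 = 127 bits.
    def m_sequence(n_bits=7, taps=(7, 6), length=127):
        state = [1] * n_bits            # any non-zero seed works
        out = []
        for _ in range(length):
            out.append(state[-1])       # output the last register stage
            fb = 0
            for t in taps:              # XOR the tapped stages
                fb ^= state[t - 1]
            state = [fb] + state[:-1]   # shift, feeding back the XOR
        return out

    seq = m_sequence()
    print(sum(seq), len(seq))           # 64 ones in 127 bits: nearly balanced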
Abstract:
This work focuses on the application and integration of different geophysical survey techniques in environmental and engineering/archaeological contexts. Several examples are described in order to demonstrate the usefulness of geophysical methods in solving a variety of problems. Specifically, attention was devoted to the use of Ground Penetrating Radar and Time Domain Reflectometry in measurements carried out on a sandy body simulating an unsaturated zone. The experiment was performed inside a test site built at the experimental farm of the University of La Tuscia in Viterbo. The project involved the universities of Roma Tre, Roma La Sapienza and La Tuscia, with the technical support of Sensore&Software. The study adopted a so-called hydrogeophysical approach in order to obtain information from measurements of the physical parameters of the unsaturated zone simulated in the test site. The comparison and integration of the two different survey techniques made it possible to extend the investigation depth within the sandy body and to verify the usefulness of the GPR technique in studying the effects of variations in soil water content, as well as to determine the position of the water table for the different saturation scenarios. A specific study of the radar signal was carried out to establish the factors influencing its propagation through the soil. The behaviour of the dielectric parameters under drainage and imbibition of the sandy body was reproduced by modelling the dielectric and hydrological properties on the basis of the size, shape and distribution of the rock grains and pores, as well as of the history of the distribution of the saturating fluids within the medium. The modelling was based on the conceptual framework of the Differential Effective Medium Approximation.
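The thesis models the dielectric behaviour with the Differential Effective Medium Approximation; as a simpler, hedged illustration of how bulk permittivity, water content and radar velocity are linked, here is a sketch of the CRIM (Complex Refractive Index Model) mixing rule, a standard alternative in hydrogeophysics. Porosity and component permittivities are illustrative values, not those of the test site:

    import math

    def crim_permittivity(porosity, saturation,
                          eps_grain=4.6, eps_water=80.0, eps_air=1.0):
        # CRIM rule: sqrt(eps) is the volume-weighted sum of sqrt(eps_i).
        sqrt_eps = ((1.0 - porosity) * math.sqrt(eps_grain)
                    + porosity * saturation * math.sqrt(eps_water)
                    + porosity * (1.0 - saturation) * math.sqrt(eps_air))
        return sqrt_eps ** 2

    def gpr_velocity(eps_r, c=0.299792458):   # c in m/ns
        # Radar wave velocity in a low-loss medium.
        return c / math.sqrt(eps_r)

    for s in (0.1, 0.5, 0.9):                 # drainage/imbibition scenarios
        eps = crim_permittivity(porosity=0.35, saturation=s)
        print(f"S={s:.1f}  eps_r={eps:5.1f}  v={gpr_velocity(eps):.3f} m/ns")

The monotonic drop of velocity with saturation is what lets GPR track water-content changes and the position of the water table.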
Abstract:
For seven years now, the permanent GPS station at Baia Terranova has been acquiring daily data which, when suitably processed, contribute to the understanding of Antarctic geodynamics and make it possible to verify whether global geophysical models are consistent with the area covered by the permanent GPS station. A literature survey showed that a GPS time series is subject to numerous possible perturbations, mainly due to errors in the modelling of some of the ancillary data required for processing. Moreover, several analyses have shown that such time series derived from geodetic surveys are affected by different types of noise which, if not properly accounted for, can bias the parameters of interest for the geophysical interpretation of the data. This thesis aims to understand to what extent these errors can affect the dynamic parameters that characterise the motion of the permanent station, with particular reference to the velocity of the point on which the station is installed and to any periodic signals that may be identified.
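A standard way to separate the station velocity from the periodic signals mentioned above is a least-squares fit of a linear trend plus annual and semi-annual sinusoids to the daily coordinate series. A minimal sketch on synthetic data (the actual Baia Terranova processing is more elaborate, and coloured noise, if ignored, mainly biases the formal uncertainty of the velocity):

    import numpy as np

    # Synthetic daily coordinate series (mm): trend + annual term + white noise.
    t = np.arange(2555) / 365.25                  # about seven years, in years
    rng = np.random.default_rng(0)
    y = 12.0 * t + 3.0 * np.sin(2 * np.pi * t + 0.4) + rng.normal(0, 2.0, t.size)

    # Design matrix: intercept, velocity, annual and semi-annual sine/cosine.
    A = np.column_stack([
        np.ones_like(t), t,
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
        np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
    ])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"estimated velocity: {coef[1]:.2f} mm/yr (true value 12.00)")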
Abstract:
The need for high bandwidth, due to the explosion of new multimedia-oriented IP-based services as well as increasing broadband access requirements, is driving the need for flexible and highly reconfigurable optical networks. While transmission bandwidth does not represent a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network represent the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. In this Ph.D. thesis, solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks, are proposed. In particular, solutions based on devices and components that are expected to mature in the near future are proposed, with the aim of limiting the employment of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching has been carried out within three relevant research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union in the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in chapter 2. Then, chapter 4 presents multi-fiber switches, which employ the wavelength and space domains jointly to solve contention. Chapter 5 shows buffered switches, which solve contention in the time domain in addition to the wavelength domain. Finally, chapter 6 presents a cost model to compare different switch architectures in terms of cost.
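As a toy illustration of contention resolution in the wavelength domain (the subject of chapters 2-4), the sketch below estimates packet loss at one output fibre of a synchronous switch when full wavelength conversion is available; the traffic model and figures are illustrative, not taken from the thesis:

    import random

    def loss_ratio(n_inputs=16, p_arrival=0.4, n_wavelengths=4, n_slots=100_000):
        # In each slot, every input independently offers a packet to the
        # tagged output with probability p_arrival; up to n_wavelengths
        # packets can be forwarded (one per wavelength), the rest are lost.
        rng = random.Random(7)
        sent = lost = 0
        for _ in range(n_slots):
            k = sum(rng.random() < p_arrival for _ in range(n_inputs))
            sent += min(k, n_wavelengths)
            lost += max(0, k - n_wavelengths)
        return lost / (sent + lost)

    for w in (1, 2, 4, 8, 12):
        print(f"{w:2d} wavelengths -> loss {loss_ratio(n_wavelengths=w):.3%}")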
Abstract:
The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and their validation by means of data present in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans), ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity has been focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)), and which are fundamental for orienting motor and attentive responses to external world stimuli. This activity has been realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to succeed in grouping together the characteristics of the same object (binding problem) and in keeping segregated the properties belonging to different objects simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules), to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of the neural oscillatory activity in the γ-band (30-100 Hz), to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons in conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
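As a hedged sketch of the elementary building block of the Part 1 models, here is a single Wilson-Cowan excitatory-inhibitory unit integrated with forward Euler, using the classic parameter set for which the pair exhibits limit-cycle oscillations (the thesis networks couple many such units through "Gestalt rule" synapses; the coupling is omitted here):

    import math

    def S(x, a, th):
        # Logistic response shifted so that S(0) = 0 (Wilson & Cowan, 1972).
        return 1.0 / (1.0 + math.exp(-a * (x - th))) - 1.0 / (1.0 + math.exp(a * th))

    def simulate(P=1.25, T=100.0, dt=0.01):
        # Classic coupling constants for the oscillatory regime.
        c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
        E, I = 0.0, 0.0
        trace = []
        for _ in range(int(T / dt)):
            dE = -E + (1.0 - E) * S(c1 * E - c2 * I + P, 1.3, 4.0)
            dI = -I + (1.0 - I) * S(c3 * E - c4 * I, 2.0, 3.7)
            E += dt * dE
            I += dt * dI
            trace.append(E)
        return trace

    tr = simulate()
    print(f"E oscillates between {min(tr[5000:]):.3f} and {max(tr[5000:]):.3f}")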
Abstract:
Possible deformation mechanisms leading to the various mica fish and mineral fish are: intracrystalline deformation, crystal rotation, bending and folding, pressure solution combined with precipitation, and dynamic recrystallisation, or mechanisms that split a large mineral into several small, fish-shaped crystals. Experiments with a new deformation apparatus and objects embedded in two different matrix materials are described. One matrix is PDMS (a Newtonian viscous polymer), the other tapioca pearls (Mohr-Coulomb behaviour). The rotation of fish-shaped objects in PDMS agrees with the theoretical rotation rate of elliptical objects in a Newtonian material. In a matrix of tapioca pearls the objects assume a stable orientation. This orientation is comparable to that of mica fish. Deformation in the tapioca-pearl matrix is concentrated in thin shear zones. These results imply that deformation in natural rocks is likewise concentrated in thin shear zones. Computer simulations are described that investigate the influence of the matrix properties on the rotation of objects and the distribution of deformation. These experiments show that the orientation of mica fish cannot be explained by deformation in a non-linear viscous material; such a non-linear rheology is generally assumed for the Earth's crust. The stable orientation of an object can instead be explained by softer layers in the matrix.
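The theoretical rotation rate referred to above is the classical Jeffery-type result for a rigid ellipse in slow Newtonian simple shear, in the Ghosh & Ramberg form; a minimal sketch with illustrative values for the axial ratio and shear strain:

    import math

    def rotation_rate(phi, R, gamma_dot=1.0):
        # Rotation rate of a rigid ellipse in Newtonian simple shear;
        # phi = angle between long axis and shear direction, R = axial ratio.
        return gamma_dot * (R**2 * math.sin(phi)**2 + math.cos(phi)**2) / (R**2 + 1)

    # Integrate the orientation up to a shear strain of 5 for R = 3.
    phi, dt = math.radians(80.0), 1e-3
    for _ in range(5000):
        phi += dt * rotation_rate(phi, 3.0)
    print(f"final long-axis angle: {math.degrees(phi) % 180:.1f} deg")

Because the rate is strictly positive for finite R, the object never stops rotating in a Newtonian matrix, consistent with the PDMS experiments; the stable orientations observed in tapioca pearls therefore require a different mechanism, such as strain localisation in thin shear zones.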
Abstract:
Traditional procedures for rainfall-runoff model calibration are generally based on fitting the individual values of simulated and observed hydrographs. An alternative option is used here, carried out by matching, in the optimisation process, a set of statistics of the river flow. Such an approach has the additional, significant advantage of also allowing a straightforward regional calibration of the model parameters, based on the regionalisation of the selected statistics. The minimisation of the set of objective functions is carried out using the AMALGAM algorithm, leading to the identification of behavioural parameter sets. The procedure is applied to a set of river basins located in central Italy: the basins are treated alternately as gauged and ungauged and, as a term of comparison, the results obtained with a traditional time-domain calibration are also presented. The results show that a suitable choice of the statistics to be optimised leads to interesting results in real-world case studies as far as the reproduction of the different flow regimes is concerned.
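A minimal sketch of the statistics-matching idea, with illustrative signature statistics, a toy linear-reservoir model in place of the actual rainfall-runoff model, and a plain grid search standing in for the AMALGAM multi-objective algorithm:

    import numpy as np

    def flow_statistics(q):
        # Illustrative flow signatures: mean, coefficient of variation,
        # lag-1 autocorrelation. A regional calibration would replace the
        # observed values with regionalised estimates.
        return np.array([q.mean(), q.std() / q.mean(),
                         np.corrcoef(q[:-1], q[1:])[0, 1]])

    def toy_model(params, rainfall):
        # Single linear reservoir: storage s drains at rate k per step.
        (k,) = params
        q, s = np.zeros_like(rainfall), 0.0
        for i, p in enumerate(rainfall):
            s += p
            q[i] = k * s
            s -= q[i]
        return q

    rng = np.random.default_rng(1)
    rain = rng.exponential(2.0, 1000)
    obs_stats = flow_statistics(toy_model([0.3], rain))

    def objective(params):
        return np.abs(flow_statistics(toy_model(params, rain)) - obs_stats).sum()

    best = min(([k] for k in np.linspace(0.05, 0.9, 50)), key=objective)
    print("recovered k:", best)  # close to the true value 0.3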
Abstract:
Conjugated polymers have attracted tremendous academic and industrial research interest over the past decades due to the appealing advantages that organic/polymeric materials offer for electronic applications and devices such as organic light emitting diodes (OLED), organic field effect transistors (OFET), organic solar cells (OSC), photodiodes and plastic lasers. The optimization of organic materials for applications in optoelectronic devices requires detailed knowledge of their photophysical properties, for instance the energy levels of excited singlet and triplet states, excited-state decay mechanisms and charge carrier mobilities. In the present work a variety of different conjugated (co)polymers, mainly polyspirobifluorene- and polyfluorene-type materials, was investigated using time-resolved photoluminescence spectroscopy in the picosecond to second time domain to study their elementary photophysical properties and to gain deeper insight into structure-property relationships. The experiments cover fluorescence spectroscopy using streak camera techniques as well as time-delayed gated detection techniques for the investigation of delayed fluorescence and phosphorescence. All measurements were performed both in the solid state, i.e. on thin polymer films, and in dilute solution. Starting from the elementary photophysical properties of conjugated polymers, the experiments were extended to studies of singlet and triplet energy transfer processes in polymer blends, polymer-triplet emitter blends and copolymers. The phenomenon of photon energy upconversion was investigated in blue light-emitting polymer matrices doped with metallated porphyrin derivatives, assuming a bimolecular annihilation upconversion mechanism, which could be experimentally verified on a series of copolymers. This mechanism allows for more efficient photon energy upconversion than previously reported for polyfluorene derivatives. In addition to the spectroscopic experiments described above, amplified spontaneous emission (ASE) in thin-film polymer waveguides was studied employing a fully-arylated poly(indenofluorene) as the gain medium. The material was found to exhibit a very low threshold for the amplification of blue light combined with excellent oxidative stability, which makes it interesting as an active material for organic solid-state lasers. Apart from the spectroscopic experiments, transient photocurrent measurements on conjugated polymers were performed to elucidate the charge carrier mobility in the solid state, an important material parameter for device applications. A modified time-of-flight (TOF) technique using a charge carrier generation layer allowed the study of hole transport in a series of spirobifluorene copolymers, unravelling the structure-mobility relationship by comparison with the homopolymer. Not only could the charge carrier mobility be determined for the series of polymers, but field- and temperature-dependent measurements analyzed in the framework of the Gaussian disorder model also showed that the results coincide very well with the predictions of the model. Thus, the validity of the disorder concept for charge carrier transport in amorphous glassy materials could be verified for the investigated series of copolymers.
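For reference, a hedged sketch of the Bässler Gaussian disorder model expression commonly used to analyse such field- and temperature-dependent mobilities (parameter values are illustrative, not fitted to the materials of this work):

    import math

    K_B = 8.617333e-5   # Boltzmann constant, eV/K

    def gdm_mobility(E, T, mu0=1e-2, sigma=0.08, Sigma=2.0, C=2.9e-4):
        # E in V/cm, T in K, sigma = energetic disorder (eV),
        # Sigma = positional disorder, C in (cm/V)**0.5.
        # Returns mobility in cm^2 / (V s).
        s = sigma / (K_B * T)
        pos = Sigma**2 if Sigma >= 1.5 else 2.25
        return (mu0 * math.exp(-(2.0 * s / 3.0) ** 2)
                    * math.exp(C * (s**2 - pos) * math.sqrt(E)))

    for T in (200, 250, 300):
        print(f"T = {T} K  ->  mu = {gdm_mobility(2.5e5, T):.2e} cm^2/Vs")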
Abstract:
In this thesis the factorization method for detecting regions with discontinuously varying material parameters is investigated. Using an abstract formulation, we prove the range identity underlying the method for general real elliptic problems and deduce both known and new applications of the method. For the specific problem of locating magnetic or perfectly electrically conducting objects by low-frequency electromagnetic radiation, we show the unique solvability of the direct problem for sufficiently small frequencies and the convergence of the solutions to those of the elliptic equations of magnetostatics. Applying our general result, we obtain the unique reconstructability of the sought objects from electromagnetic measurements and a numerical algorithm for locating them. Using a model problem, we study how inclusions described by parabolic differential equations can be reconstructed inside a region described by elliptic differential equations. We prove the unique solvability of the underlying parabolic-elliptic direct problem and, by an extension of the factorization method, obtain the unique reconstructability of the inclusions as well as a numerical algorithm for the practical implementation of the method.
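In schematic form, the range identity at the heart of the factorization method characterises the unknown region D directly from the measurement operator. A hedged sketch in the notation common in the factorization-method literature, not necessarily the exact formulation proved in the thesis:

    \[
      z \in D
      \;\Longleftrightarrow\;
      \phi_z \in \mathcal{R}\bigl(|M|^{1/2}\bigr)
      \;\Longleftrightarrow\;
      \sum_{k=1}^{\infty} \frac{|\langle \phi_z, \psi_k \rangle|^2}{|\lambda_k|} < \infty,
    \]

where M is the measured (relative) boundary operator with eigensystem (\lambda_k, \psi_k) and \phi_z an explicitly known test function attached to the sampling point z. Evaluating this Picard-type series on a grid of sampling points is what turns the identity into a numerical reconstruction algorithm.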
Abstract:
Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. For this, a complete renormalization scheme for models that allow for fermion mixing is highly desirable. The correct treatment of unstable particles makes this task difficult, and as yet no satisfactory general solution can be found in the literature. In the present work, we study the renormalization of the fermion Lagrange density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second is an application to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave function renormalization constants are determined such that the subtracted propagator is diagonal on-shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion differ from the ones that should renormalize the outgoing fermion and the incoming antifermion, and are not related by hermiticity, as would be desired. Instead of defining field renormalization constants identical to the wave function renormalization ones, we differentiate the two by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections of external legs. In the case of Majorana fermions the constraints on the model are less restrictive, and here one might have a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counter term for fermion mixing matrices cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge-parameter dependence in its counter term. Therefore, we propose to determine the mixing matrix counter term by fixing the complete correction terms for a physical process to experimental measurements. As an example, we calculate the decay rate of a top quark and of a heavy neutrino. In each of the chosen models we provide sample calculations that can easily be extended to other theories.
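Schematically, the on-shell scheme referred to above requires the renormalized fermion two-point function to be diagonal with unit residue on the mass shell. A hedged sketch in one standard notation (conventions differ between schemes, and this is not necessarily the exact form used in the thesis):

    \[
      \hat{\Gamma}_{ij}(p)\, u_j(p) = 0
      \quad (i \neq j,\; p^2 = m_j^2),
      \qquad
      \lim_{p^2 \to m_i^2} \frac{\slashed{p} + m_i}{p^2 - m_i^2}\,
      \hat{\Gamma}_{ii}(p)\, u_i(p) = \mathrm{i}\, u_i(p),
    \]

with u_j(p) the on-shell spinors. It is the absorptive parts of the self-energies entering these conditions that obstruct choosing field renormalization constants related by hermiticity, as discussed above.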
Abstract:
Networks of smart objects are becoming a reality in everyday life. Devices capable of communicating with one another, besides performing their primary function, can form a cloud associated with their legitimate owner. A fundamental aspect of this scenario is security, in particular guaranteeing protected communication. Meeting this requirement is also essential for other goals, such as the integrity of the shared information and authentication. The oldest tool, and one still well suited to keeping a communication confidential, is cryptography. A cryptographic technique is schematically composed of an algorithm that, given a key and an input message, outputs an encrypted message, the ciphertext. This is then sent to the legitimate recipient who, possessing the key and the algorithm, converts it back into the original message. The goal is to make it impossible for a malicious user without the key to reconstruct the message. The assumption that the algorithm may be known to third parties focuses attention on the key. The key must be sufficiently long and random, but above all it must be known to the two users who intend to establish the communication. This last problem, known as key distribution, was solved by the RSA public-key system. The computational cost of this technique, however, especially on devices with limited computing power, motivates the search for less expensive alternatives. A promising solution, currently under study, appears to be offered by the properties of the wireless channel. A radio link is characterised by a transfer function that depends on the surrounding environment and, by the reciprocity theorem, is the same for the two users who established it. The subject of this thesis is the study and comparison of some possible techniques for extracting a secret key from a shared medium such as the wireless channel. The context in which the work was developed is presented first. Two cases of interest are addressed in particular: the current narrowband signal propagation technology (used for most wireless transmissions) and the relatively more recent Ultra-Wideband (UWB) technology. Techniques for obtaining bit strings from the acquired signals are then illustrated, and methods for correcting possible discrepancies between them are proposed. Finally, conclusions on the work and possible future directions are reported.
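A minimal sketch of the simplest of these techniques: both ends quantise their (reciprocal, hence similar) channel-gain measurements against their own median to obtain a bit string; the residual disagreements caused by noise are what the reconciliation step must correct. The signal model and figures are illustrative:

    import numpy as np

    rng = np.random.default_rng(42)

    # Reciprocal channel: both users observe the same fading gains,
    # each corrupted by independent measurement noise.
    true_gain = rng.rayleigh(1.0, 256)
    alice = true_gain + rng.normal(0, 0.05, 256)
    bob = true_gain + rng.normal(0, 0.05, 256)

    def quantise(samples):
        # One bit per sample: above/below the user's own median.
        return (samples > np.median(samples)).astype(int)

    key_a, key_b = quantise(alice), quantise(bob)
    print(f"{key_a.size} bits, disagreement {np.mean(key_a != key_b):.1%}")
    # An eavesdropper at another location sees an essentially independent
    # channel and so, ideally, learns nothing about the extracted bits.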
Abstract:
The aim of this study was to investigate the influence of diaphragm flexibility on the behaviour of out-of-plane walls in masonry buildings. Simplified models have been developed to perform kinematic and dynamic analyses in order to compare the response of walls with different restraint conditions. Kinematic non-linear analyses of assemblages of rigid blocks have been performed to obtain the acceleration-displacement curves for walls with different restraint conditions at the top. A simplified 2DOF model has been developed to analyse the dynamic response of the wall with an elastic spring at the top, following the Housner rigid-behaviour hypothesis. The dissipation of energy is concentrated at every impact at the base of the wall and is modelled through the introduction of a coefficient of restitution. The sets of equations for the possible configurations of the wall, depending on the different positions of the centre of rotation at the base and at the intermediate hinge, have been obtained. An algorithm for the numerical integration of the sets of equations of motion in the time domain has been developed. Dynamic analyses of a set of walls with Gaussian impulse and recorded accelerogram inputs have been performed in order to compare the response of the simply supported wall with that of the wall with an elastic spring at the top. The influence of the diaphragm stiffness Kd has been investigated by determining the variation of the maximum displacement demand with the value of Kd. A more regular trend has been obtained for the Gaussian input than for the recorded accelerograms.
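A minimal sketch of the Housner-type time-domain integration described above, reduced to a single rigid wall in free rocking (the thesis's 2DOF model adds the elastic spring at the top and the seismic input); geometry and time step are illustrative:

    import numpy as np

    g = 9.81

    def free_rocking(theta0, b=0.6, h=2.4, dt=1e-4, t_end=10.0):
        # Rigid wall of half-width b and half-height h released from
        # rotation theta0; linearised Housner equation with his
        # restitution rule applied at every impact at the base.
        R = np.hypot(b, h)
        alpha = np.arctan2(b, h)               # slenderness angle
        p2 = 3.0 * g / (4.0 * R)               # m g R / I0, rectangular block
        e = 1.0 - 1.5 * np.sin(alpha) ** 2     # velocity ratio at impact
        th, om = theta0, 0.0
        peaks = [abs(theta0)]
        for _ in range(int(t_end / dt)):
            om_new = om + dt * p2 * (th - alpha * np.sign(th))
            th_new = th + dt * om_new          # semi-implicit Euler
            if om * om_new < 0.0:
                peaks.append(abs(th))          # turning point: peak rotation
            if th * th_new < 0.0:
                om_new *= e                    # impact: energy dissipated
            th, om = th_new, om_new
        return peaks

    print([round(p, 4) for p in free_rocking(0.15)[:6]])  # decaying amplitudes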
Abstract:
This Ph.D. thesis focuses on the investigation of some chemical and sensorial analytical parameters linked to the quality and purity of different categories of oils obtained from olives: extra virgin olive oils, both those sold in large retail trade (supermarkets and discount stores) and those collected directly at some Italian mills, and lower-quality oils (refined, lampante and "repaso"). Alongside the adoption of traditional and well-known analytical procedures such as gas chromatography and high-performance liquid chromatography, I carried out a set-up of innovative, fast and environmentally friendly methods. For example, I developed some analytical approaches based on Fourier transform medium infrared spectroscopy (FT-MIR) and time domain reflectometry (TDR), coupled with a robust chemometric elaboration of the results. I also investigated some other freshness and quality markers that are not included among the official parameters (in Italian and European regulations): the adoption of such a full chemical and sensorial analytical plan allowed me to obtain interesting information about the degree of quality of the EVOOs, mostly within the Italian market. Here the range of quality of EVOOs proved to be very wide, in terms of sensory attributes, price classes and chemical parameters. Thanks to the collaboration with other Italian and foreign research groups, I carried out several applicative studies, especially focusing on the shelf-life of oils obtained from olives and on the effects of thermal stresses on the quality of the products. I also studied some innovative technological treatments, such as clarification using inert gases as an alternative to traditional filtration. Moreover, during a three-and-a-half-month research stay at the University of Applied Sciences in Zurich, I also carried out a study related to the application of statistical methods for the elaboration of sensory results, obtained thanks to the official Swiss Panel and to some consumer tests.
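As a hedged sketch of the kind of chemometric elaboration mentioned above, a principal component analysis of (synthetic) FT-MIR spectra via SVD; this stands in for, and is much simpler than, the actual pipeline used in the thesis:

    import numpy as np

    def pca_scores(X, n_components=2):
        # PCA on mean-centred spectra: rows = oil samples,
        # columns = absorbance variables.
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return U[:, :n_components] * s[:n_components]

    rng = np.random.default_rng(0)
    # Two synthetic quality classes differing in one spectral band.
    evoo = rng.normal(0.0, 0.05, (20, 300));  evoo[:, 100:120] += 0.5
    lamp = rng.normal(0.0, 0.05, (20, 300));  lamp[:, 100:120] += 0.1
    scores = pca_scores(np.vstack([evoo, lamp]))
    print("PC1 class separation:",
          abs(scores[:20, 0].mean() - scores[20:, 0].mean()))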
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works through the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
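As a hedged illustration of the power-balance idea in the first model, the sketch below bounds the cutting speed of a single layer by equating absorbed beam power with the enthalpy flux of the removed material; conduction losses are neglected and the property values are illustrative:

    def max_cut_speed(P_laser, absorptivity, thickness, kerf_width,
                      rho, c_p, T_vap, T0, L_fusion, L_vap):
        # Steady-state balance: absorbed power = mass removal rate times
        # the specific enthalpy needed to heat, melt and vaporise the
        # material. Neglecting conduction makes this an upper bound.
        enthalpy = c_p * (T_vap - T0) + L_fusion + L_vap    # J/kg
        mass_per_metre = rho * thickness * kerf_width       # kg per m of cut
        return absorptivity * P_laser / (mass_per_metre * enthalpy)

    # Thin aluminium layer with illustrative properties (SI units).
    v = max_cut_speed(P_laser=50.0, absorptivity=0.1, thickness=9e-6,
                      kerf_width=30e-6, rho=2700.0, c_p=900.0,
                      T_vap=2740.0, T0=300.0, L_fusion=4.0e5, L_vap=1.08e7)
    print(f"upper-bound cut speed: {v:.2f} m/s")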