893 results for wavelet transforms
Abstract:
Doctoral programme: Actividad Física, Salud y Rendimiento Deportivo
Abstract:
Premio Extraordinario de Doctorado (Extraordinary Doctorate Award), Sciences branch.
Abstract:
For seven years now, the permanent GPS station at Baia Terranova has been acquiring daily data which, when suitably processed, contribute to the understanding of Antarctic dynamics and make it possible to verify whether global geophysical models are consistent with the area covered by the permanent GPS station. A literature review showed that a GPS time series is subject to many possible perturbations, mainly due to errors in the modelling of some of the ancillary data needed for processing. Moreover, several analyses have shown that such time series derived from geodetic surveys are affected by different types of noise which, if not properly accounted for, can bias the parameters of interest for the geophysical interpretation of the data. The aim of this thesis is to understand to what extent these errors can affect the dynamic parameters that characterize the motion of the permanent station, with particular reference to the velocity of the point on which the station is installed and to any periodic signals that can be identified.
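To illustrate the kind of parameter estimation referred to above, the sketch below fits a linear trend (the site velocity) plus annual and semiannual terms to a daily coordinate series by least squares. The data, noise level and parameter values are synthetic placeholders, not the Baia Terranova series, and coloured noise is deliberately ignored.

```python
import numpy as np

# Least-squares fit of offset + velocity + annual/semiannual terms
# to a synthetic 7-year daily GPS coordinate series (values in mm).
rng = np.random.default_rng(0)
t = np.arange(7 * 365) / 365.25                        # time in years
truth = 2.0 + 12.0 * t + 3.0 * np.sin(2 * np.pi * t)   # offset, 12 mm/yr, annual term
y = truth + rng.normal(scale=2.0, size=t.size)         # white noise only (illustrative)

# Design matrix: offset, velocity, annual and semiannual sine/cosine terms
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
])
params, *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"estimated velocity: {params[1]:.2f} mm/yr (true value 12.00)")
```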
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation between digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At first, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the significant geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference), was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times are removed, or at least reduced. The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of the cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of the cross-correlation is that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another notable feature of our procedure is that it does not require long processing times, so the user can immediately check the results. During a field survey, this feature makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if so suggested by the results at an early stage.
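As a rough illustration of the waveform cross-correlation step described above, the following sketch estimates the relative delay between two similar waveforms from the peak of their cross-correlation. The pulse shape, sampling interval and shift are synthetic stand-ins, not IMS or array data.

```python
import numpy as np

def cc_delay(w1, w2, dt):
    """Delay (s) of w2 relative to w1, from the peak of the full cross-correlation."""
    w1 = (w1 - w1.mean()) / w1.std()
    w2 = (w2 - w2.mean()) / w2.std()
    cc = np.correlate(w2, w1, mode="full")
    lag = np.argmax(cc) - (len(w1) - 1)      # lag in samples
    return lag * dt

# Two copies of a simple wavelet, the second delayed by 5 samples
dt = 0.01                                    # sampling interval, s
n = np.arange(400)
pulse = np.exp(-((n - 150) / 20.0) ** 2) * np.sin(2 * np.pi * 0.05 * n)
shifted = np.roll(pulse, 5)
print(cc_delay(pulse, shifted, dt))          # ~0.05 s
```

In practice the picking error this removes is the random part; as noted above, systematic travel-time errors must be handled by the double-difference formulation itself.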
Abstract:
Machines with moving parts give rise to vibrations and, consequently, noise. The set-up and condition of each machine produce a distinctive vibration signature. A change in the vibration signature, due to a change in the machine state, can therefore be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. There is a large number of signal processing techniques that can be used to extract useful information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques, including synchronous time averaging, Hilbert transform-based demodulation, the continuous wavelet transform, the Wigner-Ville distribution and the spectral correlation density function. The detection and diagnostic capabilities of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. a fatigue crack at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
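As an example of one of the techniques listed above, the sketch below performs Hilbert-transform-based envelope demodulation on a simulated gear-mesh vibration and looks for a modulation line at the shaft frequency; all signal parameters are illustrative assumptions, not the experimental data of the thesis.

```python
import numpy as np
from scipy.signal import hilbert

# Envelope demodulation of a simulated gear-mesh vibration: a carrier at the
# mesh frequency, amplitude-modulated once per revolution as a faulty tooth
# passes through mesh. Frequencies and noise level are illustrative.
fs = 20_000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_mesh, f_rev = 1200.0, 20.0     # gear-mesh and shaft rotation frequencies, Hz
x = (1 + 0.5 * np.cos(2 * np.pi * f_rev * t)) * np.sin(2 * np.pi * f_mesh * t)
x += 0.1 * np.random.default_rng(1).normal(size=t.size)

envelope = np.abs(hilbert(x))                          # analytic-signal magnitude
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"dominant envelope line: {freqs[spec.argmax()]:.1f} Hz")   # ~ f_rev
```

The envelope spectrum isolates the shaft-rate modulation produced by the defect, which is the quantity the demodulation-based diagnostics compare against healthy-gear signatures.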
Abstract:
Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an increased dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. By combining statistical simulations with experimental data, the potential and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, which boost the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
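A toy illustration of the idea behind wavelet-driven grid refinement (not the WAM implementation itself): one level of 2-D Haar detail coefficients is computed on a sampled field, and coarse cells whose detail coefficients are large are flagged for refinement. The field, transform level and threshold are arbitrary assumptions.

```python
import numpy as np

def haar_details(f):
    """Horizontal, vertical and diagonal Haar detail coefficients (one level)."""
    a = f[0::2, 0::2]; b = f[0::2, 1::2]; c = f[1::2, 0::2]; d = f[1::2, 1::2]
    cH = (a + b - c - d) / 2.0
    cV = (a - b + c - d) / 2.0
    cD = (a - b - c + d) / 2.0
    return cH, cV, cD

# Sample a field with a sharp transition (e.g. across a junction-like region)
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
field = np.tanh(20 * X)

# Flag coarse cells where any detail coefficient exceeds a fraction of the maximum
detail = np.max(np.abs(haar_details(field)), axis=0)
refine = detail > 0.05 * detail.max()
print(f"cells flagged for refinement: {refine.sum()} of {refine.size}")
```

The flagged cells cluster around the steep transition, which is the multiresolution cue an adaptive mesher can use to place additional grid points only where the solution varies rapidly.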
Abstract:
During the last few years, several methods have been proposed in order to study and to evaluate characteristic properties of the human skin by using non-invasive approaches. Mostly, these methods cover aspects related to either dermatology, to analyze skin physiology and to evaluate the effectiveness of medical treatments in skin diseases, or dermocosmetics and cosmetic science to evaluate, for example, the effectiveness of anti-aging treatments. To these purposes a routine approach must be followed. Although very accurate and high resolution measurements can be achieved by using conventional methods, such as optical or mechanical profilometry for example, their use is quite limited primarily to the high cost of the instrumentation required, which in turn is usually cumbersome, highlighting some of the limitations for a routine based analysis. This thesis aims to investigate the feasibility of a noninvasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a CMOS portable capacitive device which gives 50 micron/pixel resolution capacitance map of the skin micro-relief. In order to extract characteristic features of the skin topography, image analysis techniques, such as watershed segmentation and wavelet analysis, have been used to detect the main structures of interest: wrinkles and plateau of the typical micro-relief pattern. In order to validate the method, the features extracted from a dataset of skin capacitive images acquired during dermatological examinations of a healthy group of volunteers have been compared with the age of the subjects involved, showing good correlation with the skin ageing effect. Detailed analysis of the output of the capacitive sensor compared with optical profilometry of silicone replica of the same skin area has revealed potentiality and some limitations of this technology. Also, applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are discussed.
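A minimal sketch of the watershed step on a synthetic micro-relief-like image, assuming scikit-image is available: markers are seeded at smoothed local maxima so that each bright plateau becomes one basin. The test pattern, filter scale and marker spacing are placeholders for the capacitive data described above.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic pattern of bright "plateaux" separated by darker lines; a
# capacitive skin image would be loaded here instead.
rng = np.random.default_rng(2)
x = np.linspace(0, 6 * np.pi, 256)
img = np.outer(np.sin(x), np.sin(x)) + 0.1 * rng.normal(size=(256, 256))

# Seed one marker per plateau at the local maxima of the smoothed image
smooth = ndi.gaussian_filter(img, sigma=3)
coords = peak_local_max(smooth, min_distance=10)
markers = np.zeros(img.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = watershed(-smooth, markers)       # flood basins from the plateau centres
print(f"number of segmented regions: {labels.max()}")
```

Region-level statistics (area, spacing, boundary length) computed from such a label map are the kind of topographic features that can then be correlated with subject age.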
Abstract:
This thesis will describe the development of a relationship which is not necessarily verbal, but which generates communication, creates sense and meaning between human beings and produces “becomings” in the body that feels, perceives and physically transforms itself. This leads to a biosemiotic understanding of both the seen and unseen figure.
Abstract:
The aim of this work was to elucidate the setting reaction of zinc phosphate cement and thereby provide a basis for its targeted modification and optimization, in particular with a view to its use as a permanent filling material. Until now, all that was known about the setting chemistry was that the primarily formed reaction products are X-ray-amorphous phases which, after weeks or months, transform into the thermodynamically stable reaction product alpha-hopeite (alpha-Zn3(PO4)2·4H2O). In the present work, infrared reflectance spectroscopy (DRIFT) made it possible to identify dizinc cyclotetraphosphate octahydrate (Zn2P4O12·8H2O) as the X-ray-amorphous precursor phase. This condensed phosphate is sensitive to hydrolysis and thermodynamically unstable. Time-resolved 1H NMR spectroscopy showed that after only about 10 minutes a phase transformation (topochemical disproportionation) into an initially X-ray-amorphous orthophosphate takes place. 31P double-quantum NMR spectroscopy demonstrated that locally ordered (nanocrystalline) regions occur within the X-ray-amorphous material, with order extending over a length scale of 10 to 30 Å. The nanocrystallites undergo a growth process that finally leads to alpha-hopeite crystals with dimensions in the micrometre range. The primary formation of X-ray-amorphous reaction products can be attributed to dissolved aluminophosphate complexes, which aggregate into inorganic polymers in the course of the setting reaction and thereby act as crystallization inhibitors.
Abstract:
Thermal infrared (IR, 10.5–12.5 µm) images from the Meteosat Visible and Infrared Imager (MVIRI) of cold cloud episodes (cloud-top brightness temperature < 241 K) are used as a proxy for precipitating clouds to derive a warm-season (May-August) climatology of their coherency, duration, span, and speed over Europe and the Mediterranean. The analysis focuses on the 30°-54°N, 15°W-40°E domain in May-August 1996-2005. Harmonic analysis using discrete Fourier transforms is applied, together with a statistical analysis and an investigation of the diurnal cycle. The objective of this study is to make available a set of results on the propagation dynamics of the cloud systems, with the aim of assisting numerical modellers in improving summer convection parameterization. The zonal propagation of cold cloud systems is accompanied by a weak meridional component confined to narrow latitude belts. The persistence of cold clouds over the area highlights the role of orography: the Pyrenees, the Alps, the Balkans and Anatolia. A diurnal oscillation is found, with a maximum marking the initiation of convection in the lee of the mountains and shifting from about 1400 UTC at 40°E to 1800 UTC at 0°. A moderate eastward propagation of the frequency maximum from all mountain chains across the domain exists, and the diurnal maxima are completely suppressed west of 5°W. The mean power spectrum of the cold cloud frequency distribution shows a period of one day all over Europe, which disappears over the ocean (west of 10°W). Other maxima are found at periods of 6 to 10 days at longitudes from 15°W to 0° and indicate the activity of the westerlies, with frontal passages over the continent. Longer-period activity (from 15 up to 30 days) was stronger around 10°W and from 5°W to 15°E and is likely related to the influence of the Madden-Julian Oscillation. The maxima of the diurnal signal are in phase with the presence of elevated terrain and with land masses. A median zonal phase speed of 16.1 m s⁻¹ is found for all events ≥ 1000 km and ≥ 20 h, and a full set of results divided by years and recurrence categories is also presented.
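A small sketch of the spectral step: the power spectrum of an hourly occurrence series shows the diurnal component as a peak at a one-day period. The series below is synthetic, standing in for the MVIRI-derived cold-cloud frequency, and the amplitudes are arbitrary.

```python
import numpy as np

# Detect the diurnal cycle in an hourly cold-cloud-occurrence series via the DFT.
rng = np.random.default_rng(3)
hours = np.arange(120 * 24)                                   # 120 days, hourly samples
series = 0.3 + 0.1 * np.cos(2 * np.pi * hours / 24 - np.pi)   # afternoon maximum
series += 0.05 * rng.normal(size=hours.size)

spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(hours.size, d=1.0)                    # cycles per hour
peak = freqs[spec.argmax()]
print(f"dominant period: {1 / peak / 24:.2f} days")           # ~1.00
```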
Abstract:
To reach a better understanding of the process of biomineralization, the interplay of the different types of biological macromolecules involved in the nucleation and growth of the minerals has to be taken into account. In this work a new model system is introduced, consisting of a SAM (self-assembled monolayer) with different functionalities and various dissolved macromolecules. It could be shown that the crystallization of vaterite (CaCO3) and of strontianite (SrCO3) nanowires can be attributed to the presence of polyacrylate in cooperation with a COOH-functionalized SAM surface. The combination of a polar SAM surface and polyacrylate acts as an interface for the structure-directing crystallization of nanowire crystals. It could further be shown that the phase selection of CaCO3 is controlled by the cooperative interaction between a SAM surface and a hyperbranched (hb) polyglycerol adsorbed on it. The functionality of a SAM surface in the presence of carboxymethyl cellulose likewise exerts a decisive influence on the phase selection of the resulting product. In the present work, small-angle neutron scattering was used to investigate the homogeneous nucleation of CaCO3, its nucleation in the presence of a protein, and its nucleation on colloids acting as templates. Homogeneous crystallization in aqueous solution turned out to be a multi-step process. In the presence of the egg-white protein ovalbumin, three phases could be identified, among them an initially present amorphous phase and two crystalline phases.
Abstract:
The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows the integral to be expanded up to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to use S. Weinzierl's computer library nestedsums. We were able to show that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We showed the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
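For context, Mellin-Barnes representations of Feynman integrals such as the one above rest on the standard splitting identity, which trades a sum in a propagator-like denominator for a contour integral:

\[
\frac{1}{(A+B)^{\lambda}} \;=\; \frac{1}{\Gamma(\lambda)}\,\frac{1}{2\pi i}\int_{-i\infty}^{+i\infty}\! dz\;\Gamma(\lambda+z)\,\Gamma(-z)\,\frac{B^{z}}{A^{\lambda+z}},
\]

where the integration contour separates the poles of \(\Gamma(\lambda+z)\) from those of \(\Gamma(-z)\). Applying this identity twice yields a double contour integral of the type described in the abstract, whose residues, once the contours are closed, produce the nested sums handled by nestedsums.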
Abstract:
This research focuses on the definition of the complex relationship between theory and project which, in the architectural work of Oswald Mathias Ungers, is based on several essays and publications that, though never collected in an organic text, make up an articulated corpus, so that it is possible to consider it the foundation of a theory. More specifically, this thesis deals with the role of metaphor in Ungers' theory and its subsequent practical application to his projects. The path leading from theoretical analysis to architectural project is, in Ungers' view, a slow and mediated path, where theory is an instrument without which it would not be possible to create the project's foundations. The metaphor is a figure of speech taken from disciplines such as philosophy, aesthetics and linguistics. Using a metaphor implies a transfer of meaning, as it is essentially based on the replacement of a real object with a figurative one. The research is articulated in three parts, each corresponding to a text by Ungers that is considered crucial to understanding the development of his architectural thinking. Each text marks a decade of Ungers' work: the sixties, the seventies and the eighties. The first part of the research deals with the topic of Großform expressed by Ungers in his 1966 publication Grossformen im Wohnungsbau, where he defines four criteria by which a piece of architecture identifies with a Großform. One of the hypotheses underlying this study is that there is a relationship between the notion of Großform and the figure of the metaphor. The second part of the thesis analyzes the period between the end of the sixties and the seventies, i.e. the time during which Ungers lived in the USA and taught at Cornell University in Ithaca. The analysis focuses on the text Entwerfen und Denken in Vorstellungen, Metaphern und Analogien, written by Ungers in 1976 for the exhibition MAN transFORMS organized at the Cooper-Hewitt Museum in New York. This text, through which Ungers creates a sort of vocabulary to explain the notions of metaphor, analogy, signs, symbols and allegories, can be defined as the manifesto of his architectural theory, the latter being strictly intertwined with the metaphor as a design instrument; it finds its fullest expression when he introduces, with R. Koolhaas, P. Riemann, H. Kollhoff and A. Ovaska, the eleven theses of Die Stadt in der Stadt. Berlin das grüne Stadtarchipel in 1977. The third part analyzes the indissoluble tie between the use of metaphor and the choice of the topic on which the project is based and, starting from Ungers' 1982 publication Architecture as Theme, explains the relationship between idea/theme and image/metaphor. Playing with shapes requires metaphorical thinking, i.e. taking references from the world of shapes, and not just from architecture, to create new ideas. The metaphor as a tool to interpret reality becomes for Ungers a method of inquiry that precedes a project and makes it possible to define the theme on which the project will be based. In Ungers' case, the architecture of ideas matches the idea of architecture; for Ungers the notions of idea and theme, image and metaphor cannot be separated from each other, and the text on the thematization of architecture is not a report of his projects, but represents the need to put them in order and to highlight the theme on which they are based.
Abstract:
Poly-N-isopropylacrylamide (PNIPAM) colloidal particles form crystal phases that show a thermosensitive behaviour and can be used as atomic model systems. This polymer has both hydrophilic and hydrophobic character and has interesting stimuli-responsive properties in aqueous solution, of which the most important is the temperature response. Above a certain temperature, called the Lower Critical Solution Temperature (LCST), the system undergoes a volume phase transition (VPT): the water is expelled from the polymer network and the swollen state at low temperature transforms into a shrunken state at high temperature. The thermoresponsive behaviour of PNIPAM can be influenced by pH and ionic strength, as well as by the presence of copolymers such as acrylic acid. In a system formed by particles of both PNIPAM and PNIPAM doped with acrylic acid, one can control the size ratio of the two components by changing the temperature of the mixture, while keeping the particle interactions relatively unchanged. It is therefore possible to obtain a thermoresponsive colloidal crystal in which temperature changes induce defects whose formation processes and dynamics can be analysed in an optical microscope at convenient spatial and temporal scales. The goal of this thesis project was to find the conditions under which such a system could be formed, using characterization techniques such as Static Light Scattering, Dynamic Light Scattering and Confocal Laser Scanning Microscopy. Two PNIPAM-AAc systems were available, and after characterization it was possible to select a suitable one on the basis of its low polydispersity and the lack of a VPT regardless of the external conditions (system JPN_7). The synthesis of a PNIPAM system was then attempted, with particle dimensions matching the JPN_7 system and, unlike JPN_7, displaying a VPT; one suitable candidate for the mixed system was finally found (system CB_5). The best conditions to obtain a thermoresponsive crystal were selected, and the formation and healing of defects were investigated with CLSM temperature scans. The results obtained show that the approach is the correct one and that the present report could represent a useful starting point for future developments in defect analysis and defect dynamics studies.
Abstract:
Background: epilepsy is a brain disease that today affects about 1% of the world population and causes, in those who suffer from it, recurrent and sudden seizures that disrupt the patient's daily life. Seizures are events that instantaneously interrupt normal brain activity; moreover, they differ between patients, so there is no single generalized treatment. Neurologists usually prescribe drugs and, in rare cases, epilepsy is treated with neurosurgical operations. However, while surgery is effective in reducing seizures, it rarely eliminates them completely. In recent years, scientific research has shown that the EEG signal contains information useful for diagnosing the onset of an epileptic seizure, and several automatic algorithms have been developed to detect seizures automatically. Aim: the final goal of this research is to assess the applicability and reliability of a portable automatic device capable of detecting seizures and usable as a monitoring system. The analysis carried out in this project uses both classical and advanced measurement techniques, so as to technically demonstrate the reliability of such a system. The comparison was performed on electroencephalographic signals acquired with two different EEG systems: the standard method used in clinics and the new portable device. Methods: a solid validation of the EEG signals recorded with the new device is needed. The signals are processed with classical and advanced techniques. After cleaning and alignment, a new method for representing and comparing signals is used: the Bump model. In this thesis the method is extensively described, tested, validated and adapted to the needs of the project. The model is defined as an economical approach to the wavelet time-frequency mapping of a signal; in particular, only events carrying a large amount of energy are retained. Results: the Bump model was implemented as a MATLAB toolbox by its developer F. Vialatte and improved by the author to handle EEG recordings from different systems. The method was validated on artificial signals in order to guarantee its reliability and was then applied to processed and aligned EEG signals containing epileptic events, in order to assess the similarity of the two acquisition systems. Conclusions: the visual results confirm the similarity between the two systems, which is particularly evident when comparing plots of background EEG activity with those of artefacts or epileptic events. The Bump model is a reliable tool for this application and could also be used in future work (for example with the Stochastic Event Synchrony (SES) method) or in different applications; likewise, the information extracted from Bump models could serve as input for synchrony measures, from which useful results can be derived.
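A crude sketch of the idea behind the bump-modelling step described above: build a Morlet wavelet time-frequency map of an EEG-like trace and keep only its highest-energy local maximum, which stands in for fitting parametric bumps to the high-energy events. The sampling rate, analysis frequencies and test signal are assumptions, not data or code from the project.

```python
import numpy as np

fs = 256.0                                   # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(4)
sig = 0.3 * rng.normal(size=t.size)
sig += np.exp(-((t - 2.0) / 0.15) ** 2) * np.sin(2 * np.pi * 8 * t)   # 8 Hz burst

def morlet_scalogram(x, fs, freqs, w0=6.0):
    """|CWT|^2 with a Morlet-like wavelet, one row per analysis frequency."""
    out = np.empty((len(freqs), x.size))
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                                 # scale for this frequency
        tt = np.arange(-4 * s, 4 * s, 1 / fs)
        wav = np.exp(2j * np.pi * f * tt) * np.exp(-tt**2 / (2 * s**2))
        out[i] = np.abs(np.convolve(x, wav, mode="same")) ** 2
    return out

freqs = np.arange(2, 30, 1.0)
E = morlet_scalogram(sig, fs, freqs)
i, j = np.unravel_index(E.argmax(), E.shape)
print(f"strongest event: {freqs[i]:.0f} Hz at t = {t[j]:.2f} s")     # expected ~8 Hz near 2 s
```

In the actual Bump toolbox such high-energy regions of the map are fitted with parametric bumps, whose parameters can then feed the synchrony measures mentioned in the conclusions.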