857 results for Discrete wavelet packet transform
Abstract:
The main goal of this thesis is to facilitate the development of industrial automated systems by applying formal methods that ensure system reliability. A new formulation of the distributed diagnosability problem is presented in terms of Discrete Event Systems theory and the automata framework; it is then used to enforce the desired property of the system rather than merely verifying it. The approach tackles the state explosion problem with modeling patterns and new algorithms for verifying the diagnosability property in the distributed setting. The concepts are validated with a newly developed software tool.
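To make the diagnosability question concrete, here is a minimal, hypothetical sketch (not the thesis's algorithm or tool): a plant is modelled as a finite automaton with observable and unobservable events, one of which is a fault, and a bounded brute-force search looks for two runs that yield the same observation while disagreeing on whether the fault occurred, which is exactly the ambiguity that makes a system non-diagnosable. The automaton, event names and search depth are illustrative assumptions.

```python
# Hypothetical plant: a finite automaton as state -> [(event, next_state)].
# 'f' is an unobservable fault event; 'u' is an unobservable normal event.
TRANSITIONS = {
    0: [("f", 1), ("u", 2)],
    1: [("a", 3)],
    2: [("a", 4)],
    3: [("b", 3)],
    4: [("b", 4)],
}
OBSERVABLE = {"a", "b"}
FAULT = "f"

def runs(state, depth):
    """Enumerate all event sequences of length <= depth starting in `state`."""
    yield []
    if depth == 0:
        return
    for event, nxt in TRANSITIONS.get(state, []):
        for tail in runs(nxt, depth - 1):
            yield [event] + tail

def projection(run):
    """Observation of a run: the subsequence of observable events."""
    return tuple(e for e in run if e in OBSERVABLE)

def ambiguous_observations(depth):
    """Observations produced both by a faulty and by a fault-free run."""
    faulty, normal = set(), set()
    for run in runs(0, depth):
        (faulty if FAULT in run else normal).add(projection(run))
    return faulty & normal

if __name__ == "__main__":
    # A non-empty result lists observations that do not reveal the fault,
    # i.e. bounded witnesses against diagnosability of this toy plant.
    print(ambiguous_observations(depth=4))
```

A real diagnoser construction avoids this enumeration, but the ambiguity test above is essentially the property such constructions decide.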
Abstract:
Oceanic islands can be divided, according to their origin, into volcanic and tectonic. Volcanic islands are due to excess volcanism. Tectonic islands are mainly formed by vertical tectonic motions of blocks of oceanic lithosphere along the transverse ridges flanking transform faults at slow and ultraslow mid-ocean ridges. These vertical motions result from a reorganization of the geometry of the transform plate boundary, with the transition from transcurrent tectonics to transtensive and/or transpressive tectonics and the formation of the transverse ridges. Tectonic islands can also be located at the ridge–transform intersection; in this case the uplift is due to the movement of long-lived detachment faults along the flanks of the mid-ocean ridge. The "Vema" paleoisland (equatorial Atlantic) lies at the summit of the southern transverse ridge of the Vema transform; it is now 450 m below sea level (bsl) and is capped by a 500 m-thick carbonate platform dated by 87Sr/86Sr at 10 Ma. Three tectonic paleoislands sit on the summit of the transverse ridge flanking the Romanche megatransform (equatorial Atlantic); they are now about 1,000 m bsl and consist of 300 m-thick carbonate platforms dated by 87Sr/86Sr at between 11 and 6 Ma. The tectonic paleoisland "Atlantis Bank" lies on the Southwest Indian Ridge, along the Atlantis II transform, and is today 700 m bsl. The only modern example of an oceanic tectonic island is the St. Paul Rocks (equatorial Atlantic), located along the St. Paul transform; this archipelago is the top of a peridotitic massif now at a left overstep undergoing transpression. Oceanic volcanic islands are characterized by rapid growth followed by thermal subsidence and drowning; in contrast, oceanic tectonic islands may have one or more stages of emersion related to vertical tectonic events along the large oceanic fracture zones.
Abstract:
Until a few years ago, 3D modelling was a topic confined to a professional environment. Nowadays, technological innovations, the 3D printer first among them, have attracted novice users to this application field. This sudden breakthrough has not been supported by adequate software solutions: the 3D editing tools currently available do not assist the non-expert user during the various stages of generation, interaction and manipulation of 3D virtual models. This is mainly due to the current paradigm, which largely relies on two-dimensional input/output devices and is strongly affected by obvious geometrical constraints. We identified three main phases that characterize the creation and management of 3D virtual models, and we investigated these directions by evaluating and simplifying classic editing techniques in order to propose more natural and intuitive tools in a pure 3D modelling environment. In particular, we focused on freehand sketch-based modelling for creating 3D virtual models, on interaction and navigation in a 3D modelling environment, and on advanced editing tools for free-form deformation and object composition. To pursue these goals, we asked how new gesture-based interaction technologies can be successfully employed in 3D modelling environments, how depth perception and interaction in 3D environments could be improved, and which operations could be developed to simplify the classical editing paradigm for virtual models. Our main aim was to propose a set of solutions with which an ordinary user can turn an idea into a 3D virtual model, drawing in the air just as they would on paper. Moreover, we used gestures and mid-air movements to explore and interact with 3D virtual environments, and we studied simple and effective transformations of 3D shapes. The work was carried out using the discrete representation of the models, chosen for its intuitiveness but above all because it is full of open challenges.
Abstract:
Autism Spectrum Disorders (ASDs) are a set of neurodevelopmental disorders and represent a significant public health problem. Currently, ASDs are not diagnosed before the second year of life, but early identification would be crucial, since early interventions are much more effective than specific therapies started in later childhood. To this aim, cheap and contact-less automatic approaches have recently aroused great clinical interest. Among them, the cry and the movements of the newborn, both involving the central nervous system, have been proposed as possible indicators of neurological disorders. This PhD work is a first step towards solving this challenging problem. An integrated system is presented that enables the recording of audio (crying) and video (movements) data of the newborn, their automatic analysis with innovative techniques for the extraction of clinically relevant parameters, and their classification with data mining techniques. New robust algorithms were developed for the selection of the voiced parts of the cry signal, the estimation of acoustic parameters based on the wavelet transform, and the analysis of the infant's general movements (GMs) through a new body model for segmentation and 2D reconstruction. A thorough literature review of the state of the art on these topics is also presented, showing that no studies exist on normative ranges for newborn infant cry in the first 6 months of life, nor on the correlation between cry and movements. Using the new automatic methods, a population of control ("low-risk", LR) infants was compared with a group of "high-risk" (HR) infants, i.e. siblings of children already diagnosed with ASD. A subset of LR infants clinically diagnosed with Typical Development (TD) and one infant affected by ASD were also compared. The results show that the selected acoustic parameters allow good differentiation between the two groups, opening new diagnostic and therapeutic perspectives.
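As an illustration of wavelet-based acoustic features of the kind mentioned above, the following is a minimal sketch (the wavelet family, decomposition level, sampling rate and test signal are assumptions, not the parameters used in the thesis): the relative energy of each sub-band of a discrete wavelet decomposition of a voiced segment.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_band_energies(signal, wavelet="db4", level=5):
    """Relative energy of each sub-band of a 1-D discrete wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

if __name__ == "__main__":
    fs = 8000                       # Hz, assumed sampling rate
    t = np.arange(0, 1.0, 1 / fs)
    # Toy stand-in for a voiced cry segment: a 450 Hz tone plus noise.
    segment = np.sin(2 * np.pi * 450 * t) + 0.1 * np.random.randn(t.size)
    print(wavelet_band_energies(segment))
```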
Abstract:
The Hilbert transform is an important tool in both pure and applied mathematics and is widely used in signal processing. Lately it has also been used in mathematical finance, since the fast Hilbert transform method is an efficient and accurate algorithm for pricing discretely monitored barrier and Bermudan-style options. The purpose of this report is to show the basic properties of the Hilbert transform and to examine the domain of definition of this operator.
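For reference, the standard definition and Fourier characterization that such basic properties rest on (general facts about the operator, not results quoted from the report):

```latex
% Hilbert transform of f on the real line, as a Cauchy principal value:
\[
  (Hf)(x) = \frac{1}{\pi}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{f(t)}{x-t}\,dt,
  \qquad
  \widehat{Hf}(\xi) = -i\,\operatorname{sgn}(\xi)\,\hat{f}(\xi).
\]
% Hence H is bounded on L^p(R) for 1 < p < infinity and is an isometry on L^2(R).
```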
Abstract:
This thesis aims to give a description of wavelet transforms oriented towards image coding in the JPEG2000 format. After describing the first stages of image coding, we study the artifacts introduced by analysis with the Discrete Cosine Transform (used in the predecessor format, JPEG). We then describe multiresolution analysis and the features that distinguish it from the DCT, and analyse the wavelet transform, giving only a few theoretical hints and deriving it in a way more oriented towards the application. The thesis concludes by describing the coding of the computed coefficients and by giving examples of the many applications of multiresolution analysis in different scientific fields and in signal transmission.
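As an illustration of the multiresolution decomposition underlying JPEG2000, here is a minimal sketch with PyWavelets; it uses the Haar filter for brevity and a random array as a stand-in for an image, whereas the JPEG2000 standard itself uses the CDF 9/7 and 5/3 filter banks and a full coding chain not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

# One level of the 2-D discrete wavelet transform: the image splits into an
# approximation band (LL) and three detail bands (LH, HL, HH).
image = np.random.rand(256, 256)            # stand-in for a grayscale image
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")

# Crude "compression": drop all detail bands, reconstruct, measure the error.
approx_only = (LL, (np.zeros_like(LH), np.zeros_like(HL), np.zeros_like(HH)))
reconstructed = pywt.idwt2(approx_only, "haar")
print("max reconstruction error:", np.abs(image - reconstructed).max())
```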
Abstract:
Technological progress confronts medical imaging, like no other branch of medicine, with a rapid increase in the amount of data to be stored. Acquisition, maintenance and expansion of the necessary infrastructure are increasingly becoming an economic factor. One technique that could counter this trend is irreversible (lossy) image data compression. It has been the subject of many studies for more than ten years, whose results are in turn reflected in recommendations on the use of irreversible compression issued by several national and international organisations such as CAR, DRG, RCR and ESR. The tenor of these recommendations is that moderate irreversible image compression is safe and sensible. The recommendations also specify the amount of compression, expressed as compression ratios, that is considered safely applicable for a given examination type and anatomical region without producing a diagnostically relevant loss in the compressed images.

Various compression algorithms have been proposed. In the end, the two widely used algorithms JPEG and JPEG2000 have proven themselves; the latter has recently seen increasing use because of its simpler handling and its extensive additional functions.

Because of legal and ethical concerns, irreversible compression has not found broad practical application. One reason is the uncertainty about how irreversible compression affects the post-processing of medical images, such as segmentation, volumetry or 3D rendering. Previous studies on this topic cover four different post-processing algorithms. Within the range of the published compression ratios mentioned above, the algorithms examined proved largely unaffected by lossy compression; only the computer-assisted measurement of stenosis grades in digital coronary angiography conflicts with the recommendations in force in the United Kingdom. The use of different compression algorithms further limits the generalisability of these results.

To extend the evidence, four further post-processing algorithms were examined for their tolerance to compression. Compression ratios of 8:1, 10:1 and 15:1 were used, which lie around the ratios recommended by CAR, DRG, RCR and ESR and thus provide a realistic setting. JPEG2000 was used as the compression algorithm because of its increasing use in studies and its advantages in handling and additional functions mentioned above. The four algorithms comprised 3D volume rendering of CT angiographies of the pelvic and leg vessels, computer-assisted detection of pulmonary nodules, automated volumetry of liver lesions, and functional determination of the ejection fraction in CT images of the heart.

None of the four algorithms was affected by irreversible image compression at the chosen compression ratios (8:1, 10:1 and 15:1). Together with the existing literature, the results suggest that moderate irreversible compression within current recommendations has no influence on the post-processing of medical images.
However, an explicit prediction for a particular algorithm that has not yet been examined is not reliably possible, because such algorithms differ in how they work and how they are implemented. If a post-processing algorithm is to be applied to compressed images, it must first be tested for its tolerance to compression, and the test must establish a legal and ethical basis for using the algorithm on compressed material. Two options are conceivable: in-house testing, possibly with the help of ready-made libraries, or testing by the manufacturer of the algorithm.
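A minimal sketch of how such a compression-tolerance test might measure the compression ratio actually achieved, assuming Pillow's JPEG 2000 plugin with its quality_mode/quality_layers save options (an OpenJPEG-backed build is required; the actual tooling used in the study is not specified here):

```python
import io
import numpy as np
from PIL import Image

def jp2_ratio(pixels: np.ndarray, target_ratio: float) -> float:
    """Compress a grayscale image to JPEG 2000 at a requested rate and return
    the achieved compression ratio (uncompressed bytes / compressed bytes)."""
    buf = io.BytesIO()
    # quality_mode="rates" asks the encoder for an approximate compression ratio.
    Image.fromarray(pixels).save(buf, format="JPEG2000",
                                 quality_mode="rates",
                                 quality_layers=[target_ratio])
    return pixels.nbytes / buf.getbuffer().nbytes

if __name__ == "__main__":
    # Stand-in for a CT slice: 512 x 512, 8-bit grayscale (real CT data is 12/16-bit).
    slice_ = (np.random.rand(512, 512) * 255).astype(np.uint8)
    for r in (8, 10, 15):
        print(f"requested {r}:1, achieved about {jp2_ratio(slice_, r):.1f}:1")
```

Any post-processing algorithm under test would then be run on the decompressed images and its output compared with the uncompressed baseline.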
Abstract:
This thesis aims to present game theory, in particular cooperative game theory, together with decision theory, framing both formally in terms of discrete mathematics. These are two fields in which the investigation ideally originates from applied questions, and in which, nevertheless, more typically theoretical problems have arisen and continue to arise, attracting the interest of the mathematical and computer science communities. Although the initial contributions were often formulated in a continuous setting and with tools from measure theory, today the choice of discrete models and methods appears the most suitable. The general idea is therefore to look from the outset at the models and results to be presented through the lens of lattice theory. This yields a sharper overall picture and makes it possible to interweave the discussion, treating game theory and decision theory jointly. After introducing the necessary tools, models and problems are considered with the precise aim of first analysing classical, well-established results and then moving on to more recent and more complex situations in which the results obtained may raise some perplexity. Finally, some open questions are presented, together with related ideas for research.
Abstract:
With the outlook of improving seismic vulnerability assessment for the city of Bishkek (Kyrgyzstan), the global dynamic behaviour of four nine-storey reinforced-concrete large-panel buildings is studied in the elastic regime. The four buildings were built during the Soviet era within a serial production system; since they all belong to the same series, they have very similar geometries both in plan and in elevation. First, ambient vibration measurements are performed in the four buildings. The data analysis, consisting of the discrete Fourier transform, modal analysis (frequency domain decomposition) and deconvolution interferometry, yields the modal characteristics and an estimate of the linear impulse response function for the structures of the four buildings. Finite element models are then set up for all four buildings and the results of the numerical modal analysis are compared with the experimental ones. Finally, the numerical models are calibrated on the first three global modes, and their results match the experimental ones with an error of less than 20%.
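As a small illustration of the first step of such an analysis, here is a minimal spectral peak-picking sketch on a synthetic ambient vibration record (sampling rate, record length and mode frequencies are assumptions; the frequency domain decomposition and deconvolution interferometry used in the study are not reproduced):

```python
import numpy as np

def dominant_frequencies(record, fs, n_peaks=3):
    """Pick the strongest spectral peaks of an ambient vibration record."""
    spectrum = np.abs(np.fft.rfft(record * np.hanning(record.size))) ** 2
    freqs = np.fft.rfftfreq(record.size, d=1 / fs)
    # Local maxima of the power spectrum, sorted by amplitude.
    peaks = [i for i in range(1, spectrum.size - 1)
             if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)
    return freqs[peaks[:n_peaks]]

if __name__ == "__main__":
    fs = 100.0                      # Hz, assumed sampling rate
    t = np.arange(0, 600, 1 / fs)   # 10-minute record
    # Toy record: two "structural modes" (1.8 Hz and 5.6 Hz) buried in noise.
    rec = (np.sin(2 * np.pi * 1.8 * t) + 0.5 * np.sin(2 * np.pi * 5.6 * t)
           + 0.3 * np.random.randn(t.size))
    print(dominant_frequencies(rec, fs))
```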
Abstract:
This work focuses on the study of the Fourier transform and the wavelet transform. The first part of the thesis analyses the fundamental aspects of the Fourier transform. The Fourier transform on finite abelian groups is then defined, recalling the structure of such groups as needed, and it is shown that computing the Fourier transform on a quotient requires fewer operations than computing it directly on the original group. The last part of the work deals with wavelets. The Haar system is presented, which allows a function to be expanded as a series of Haar functions as an alternative to the Fourier series. A genuine method for constructing wavelets is then proposed, and it is observed that this construction is closely tied to multiresolution analysis. A crucial role is played by the scaling identity, an algebraic identity that defines certain coefficients which completely determine the wavelets. The Fourier transform then comes into play, reducing the search for these coefficients to the search for certain suitable functions that determine the wavelets explicitly. Not every choice of these functions is admissible; there are various approaches, and here the approach of Ingrid Daubechies is presented. Wavelets form bases for the space of square-integrable functions and are particularly interesting for signal decomposition; they are therefore related to harmonic analysis and are adopted in a great number of applications, often replacing the conventional Fourier transform.
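For concreteness, the Haar scaling function and wavelet and the scaling (two-scale) identity referred to above, in their standard form (stated here for reference, not quoted from the thesis):

```latex
% Haar scaling function and mother wavelet:
\[
  \varphi(x)=\mathbf{1}_{[0,1)}(x), \qquad
  \psi(x)=\begin{cases} 1 & 0\le x<\tfrac12,\\ -1 & \tfrac12\le x<1,\\ 0 & \text{otherwise.}\end{cases}
\]
% Scaling identity: the coefficients h_k completely determine the wavelet.
\[
  \varphi(x)=\sqrt{2}\sum_{k\in\mathbb{Z}} h_k\,\varphi(2x-k), \qquad
  \psi(x)=\sqrt{2}\sum_{k\in\mathbb{Z}} (-1)^{k}\,h_{1-k}\,\varphi(2x-k).
\]
% For the Haar system, h_0 = h_1 = 1/sqrt(2) and all other h_k vanish.
```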
Abstract:
The performance of parallel vector implementations of the one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementations; the structure of the algorithm must also be considered when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
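As a small example of computing one of these transforms through an FFT kernel, the following sketch obtains the discrete Hartley transform from the DFT via the standard identity H[k] = Re X[k] - Im X[k]; this is a textbook relation used for illustration, not the paper's Cray X-MP implementation.

```python
import numpy as np

def dht_via_fft(x):
    """Discrete Hartley transform from an FFT kernel: H[k] = Re(X[k]) - Im(X[k])."""
    X = np.fft.fft(x)
    return X.real - X.imag

def dht_direct(x):
    """Direct O(N^2) DHT with the cas kernel (cos + sin), for verification."""
    N = len(x)
    n = np.arange(N)
    arg = 2 * np.pi * np.outer(n, n) / N
    return (np.cos(arg) + np.sin(arg)) @ x

if __name__ == "__main__":
    x = np.random.randn(64)
    print(np.allclose(dht_via_fft(x), dht_direct(x)))  # expect True
```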