Abstract:
This dissertation focuses on two central challenges posed by whale acoustic signals: detection and classification.
In detection, we evaluated the influence of the uncertain ocean environment on spectrogram-based detectors and derived the likelihood ratio for the proposed Short Time Fourier Transform (STFT) based detector. Experimental results showed that the proposed detector outperforms spectrogram-based detectors. Because it retains phase information, the proposed detector is also more sensitive to environmental changes.
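As a rough illustration of this distinction (not the dissertation's exact likelihood-ratio statistic), the sketch below contrasts an energy statistic built on spectrogram magnitudes with a coherent statistic that keeps the complex STFT, and hence the phase, of a template; the sampling rate, template chirp, and noise level are placeholders.

```python
# Minimal sketch: spectrogram (magnitude-only) statistic vs. a coherent
# statistic that retains the complex STFT phase of a template.
import numpy as np
from scipy.signal import stft

fs = 4000                                                # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
template = np.cos(2 * np.pi * (200 * t + 150 * t**2))    # toy chirp "call"
received = template + 0.5 * np.random.randn(t.size)      # call in noise

_, _, X = stft(received, fs=fs, nperseg=256)             # complex STFT of the data
_, _, S = stft(template, fs=fs, nperseg=256)             # complex STFT of the template

spectrogram_stat = np.sum(np.abs(X) ** 2 * np.abs(S) ** 2)   # phase discarded
coherent_stat = np.abs(np.sum(X * np.conj(S)))                # phase retained

print(f"spectrogram statistic: {spectrogram_stat:.3e}")
print(f"coherent STFT statistic: {coherent_stat:.3e}")
```

Only the coherent statistic changes when the phase of the received signal is perturbed, which is the sense in which a phase-aware detector responds to environmental changes that a magnitude-only spectrogram detector cannot see.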
In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, the calls can be represented by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp-rate information, and a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and Mel Frequency Cepstral Coefficients (MFCC) when applied to our collected data.
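A minimal sketch of the polynomial-phase view, assuming a noiseless quadratic-phase (linear-chirp) signal with made-up coefficients: the phase coefficients, including the chirp rate that the Weyl features capture, can be read off by fitting a polynomial to the unwrapped analytic phase.

```python
# Generate a polynomial-phase signal and recover its phase coefficients.
import numpy as np

fs = 2000
t = np.arange(0, 0.5, 1 / fs)
coeffs = (50.0, 300.0, 400.0)                       # assumed [c0, c1, c2] in the phase polynomial
phase = 2 * np.pi * (coeffs[0] + coeffs[1] * t + coeffs[2] * t**2)
x = np.exp(1j * phase)                              # complex polynomial-phase signal

unwrapped = np.unwrap(np.angle(x)) / (2 * np.pi)    # instantaneous phase in cycles
fit = np.polyfit(t, unwrapped, deg=2)               # returns [c2, c1, c0]
print("recovered (c2, c1, c0):", fit)               # c2 is the chirp rate
```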
Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of the high-dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmaps and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the structure of the whale acoustic data is nonlinear.
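A minimal sketch of the embedding comparison using scikit-learn; the random matrix stands in for the high-dimensional whale call features, and SpectralEmbedding is scikit-learn's implementation of Laplacian Eigenmaps.

```python
# Compare linear (PCA, MDS) and nonlinear (ISOMAP, Laplacian Eigenmaps) embeddings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, Isomap, SpectralEmbedding

X = np.random.rand(200, 64)        # placeholder for high-dimensional call features

embeddings = {
    "PCA": PCA(n_components=2).fit_transform(X),
    "MDS": MDS(n_components=2).fit_transform(X),
    "ISOMAP": Isomap(n_components=2, n_neighbors=10).fit_transform(X),
    "Laplacian Eigenmaps": SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X),
}
for name, Y in embeddings.items():
    print(name, Y.shape)           # each mapping yields a 2-D embedding of the data
```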
We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, from which different physical information can be extracted. Experimental results showed that our PCANet and DCTNet achieve a high classification rate on the whale vocalization data set. The word error rate of the DCTNet feature is similar to that of MFSC features in speech recognition tasks, suggesting that the convolutional network is able to reveal the acoustic content of speech signals.
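The sketch below shows, under simplifying assumptions, what a single DCTNet-style layer could look like: the data-dependent PCA filters are replaced by a fixed DCT-II filter bank whose rows are convolved with a one-dimensional input frame (filter length and count are arbitrary here).

```python
# One DCTNet-style layer: convolve the input with a fixed DCT-II filter bank.
import numpy as np
from scipy.fft import dct
from scipy.signal import convolve

def dct_filter_bank(filter_len=32, n_filters=8):
    """Rows of a DCT-II matrix used as convolution filters."""
    eye = np.eye(filter_len)
    return dct(eye, type=2, norm="ortho", axis=0)[:n_filters]

def dctnet_layer(x, filters):
    """Convolve the input with every filter in the bank (one 'layer')."""
    return np.stack([convolve(x, f, mode="same") for f in filters])

x = np.random.randn(1024)            # placeholder waveform frame
out = dctnet_layer(x, dct_filter_bank())
print(out.shape)                     # (8, 1024): one feature map per DCT filter
```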
Abstract:
We propose a novel method to harmonize diffusion MRI data acquired from multiple sites and scanners, which is imperative for joint analysis of the data to significantly increase the sample size and statistical power of neuroimaging studies. Our method incorporates the following main novelties: i) we take into account the scanner-dependent spatial variability of the diffusion signal in different parts of the brain; ii) our method is independent of compartmental modeling of diffusion (e.g., tensor or intra/extra-cellular compartments), and the acquired signal itself is corrected for scanner-related differences; and iii) inter-subject variability, as measured by the coefficient of variation, is maintained at each site. We represent the signal in a basis of spherical harmonics and compute several rotation-invariant spherical harmonic features to estimate a region- and tissue-specific linear mapping between the signal from different sites (and scanners). We validate our method on diffusion data acquired from seven different sites (including two GE, three Philips, and two Siemens scanners) on a group of age-matched healthy subjects. Since the extracted rotation-invariant spherical harmonic features depend on the accuracy of the brain parcellation provided by FreeSurfer, we propose a feature-based refinement of the original parcellation such that it better characterizes the anatomy and provides robust linear mappings to harmonize the dMRI data. We demonstrate the efficacy of our method by statistically comparing diffusion measures such as fractional anisotropy, mean diffusivity and generalized fractional anisotropy across multiple sites before and after data harmonization. We also show results using tract-based spatial statistics before and after harmonization for independent validation of the proposed methodology. Our experimental results demonstrate that, for a nearly identical acquisition protocol across sites, scanner-specific differences can be accurately removed using the proposed method.
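One simplified reading of the core step, under stated assumptions and not the authors' exact procedure: per region and tissue class, rotation-invariant spherical harmonic (RISH) features are computed for a reference and a target site, and a per-order scale is applied to the target SH coefficients so that their RISH features match the reference.

```python
# Hedged sketch: per-order linear scaling of SH coefficients driven by RISH features.
import numpy as np

def rish_features(sh_coeffs, orders):
    """RISH feature per order l: region mean of the sum over m of squared SH coefficients."""
    return np.array([np.sum(sh_coeffs[:, orders == l] ** 2, axis=1).mean()
                     for l in np.unique(orders)])

def harmonize(sh_target, sh_reference, orders):
    """Scale each order of the target coefficients so its RISH matches the reference."""
    scale = np.sqrt(rish_features(sh_reference, orders) /
                    (rish_features(sh_target, orders) + 1e-12))
    harmonized = sh_target.copy()
    for s, l in zip(scale, np.unique(orders)):
        harmonized[:, orders == l] *= s
    return harmonized

# Toy example: 500 voxels in one region/tissue class, SH orders 0 and 2 (1 + 5 coefficients).
orders = np.array([0, 2, 2, 2, 2, 2])
sh_ref = np.random.randn(500, 6)
sh_tgt = 1.3 * np.random.randn(500, 6)            # stand-in for a scanner-related offset
print(rish_features(harmonize(sh_tgt, sh_ref, orders), orders))
print(rish_features(sh_ref, orders))
```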
Abstract:
Many dynamical processes are subject to abrupt changes in state. Often these perturbations can be periodic and of short duration relative to the evolving process. These types of phenomena are described well by what are referred to as impulsive differential equations: systems of differential equations coupled with discrete mappings in state space. In this thesis we employ impulsive differential equations to model disease transmission within an industrial livestock barn. In particular we focus on the poultry industry and a viral disease of poultry called Marek's disease. This system lends itself well to impulsive differential equations. Entire cohorts of poultry are introduced and removed from a barn concurrently. Additionally, Marek's disease is transmitted indirectly and the viral particles can survive outside the host for weeks. Therefore, depopulating, cleaning, and restocking of the barn are integral factors in modelling disease transmission and can be completely captured by the impulsive component of the model. Our model allows us to investigate how modern broiler farm practices can make disease elimination difficult or impossible to achieve. It also enables us to investigate factors that may contribute to virulence evolution. Our model suggests that by decreasing the cohort duration or the flock density, Marek's disease can be eliminated from a barn with no increase in cleaning effort. Unfortunately, our model also suggests that these practices will lead to disease evolution towards greater virulence. Additionally, our model suggests that if intensive cleaning between cohorts does not rid the barn of disease, it may drive evolution and cause the disease to become more virulent.
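An illustrative sketch of the impulsive structure, not the thesis's actual model or parameter values: continuous transmission dynamics run within a cohort, and a discrete depopulate/clean/restock map is applied at the end of each cohort.

```python
# Continuous SI-type dynamics inside a cohort + a discrete impulsive map between cohorts.
import numpy as np
from scipy.integrate import solve_ivp

beta, mu, delta = 1e-5, 0.05, 0.1     # transmission, bird turnover, virus decay (assumed)
cohort_days, n_cohorts = 40, 5
cleaning_efficacy, flock_size = 0.9, 2e4

def rhs(t, y):
    S, I, V = y                        # susceptible birds, infected birds, free virus in barn
    return [-beta * S * V,
             beta * S * V - mu * I,
             I - delta * V]

state = np.array([flock_size, 0.0, 50.0])      # start with a contaminated barn
for k in range(n_cohorts):
    sol = solve_ivp(rhs, (0, cohort_days), state, rtol=1e-8)
    S, I, V = sol.y[:, -1]
    # impulsive map: depopulate, clean a fraction of the virus, restock fully
    state = np.array([flock_size, 0.0, (1 - cleaning_efficacy) * V])
    print(f"cohort {k + 1}: residual virus after cleaning = {state[2]:.2f}")
```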
Abstract:
Default invariance is the idea that default does not change at any scale of law and finance. Default is a conserved quantity in a universe where fundamental principles of law and finance operate. It exists at the micro-level as part of the fundamental structure of every financial transaction, and at the macro-level as a fixed critical point within the relatively stable phases of the law and finance cycle. A key point is that, at the micro-level, default is equivalent to maximizing uncertainty and, at the macro-level, to the phase transition where unbearable fluctuations occur in all forms of risk transformation, including maturity, liquidity and credit. As such, default invariance is the glue that links the micro and macro structures of law and finance. In this essay, we apply naïve category theory (NCT), a type of mapping logic, to these types of phenomena. The purpose of using NCT is to introduce a rigorous (but simple) mathematical methodology to law and finance discourse and to show that these types of structural considerations are of prime practical importance and significance to law and finance practitioners. These mappings imply a number of novel areas of investigation. From the micro-structure, three macro-approximations are implied. These approximations form the core analytical framework which we will use to examine the phenomena and hypothesize rules governing law and finance. Our observations from these approximations are grouped into five findings. While the entirety of the five findings can be encapsulated by the three approximations, since the intended audience of this paper is the non-specialist in law, finance and category theory, for ease of access we will illustrate the use of the mappings with relatively common concepts drawn from law and finance, focusing especially on financial contracts, derivatives, Shadow Banking, credit rating agencies and credit crises.
Abstract:
Let $A$ be a unital dense algebra of linear mappings on a complex vector space $X$. Let $\varphi = \sum_{i=1}^{n} M_{a_i, b_i}$ be a locally quasi-nilpotent elementary operator of length $n$ on $A$. We show that, if $\{a_1, \dots, a_n\}$ is locally linearly independent, then the local dimension of $V(\varphi) = \operatorname{span}\{b_i a_j : 1 \le i, j \le n\}$ is at most $\frac{n(n-1)}{2}$. If $\operatorname{ldim} V(\varphi) = \frac{n(n-1)}{2}$, then there exists a representation of $\varphi$ as $\varphi = \sum_{i=1}^{n} M_{u_i, v_i}$ with $v_i u_j = 0$ for $i \ge j$. Moreover, we give a complete characterization of locally quasi-nilpotent elementary operators of length 3.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Carbon anodes are consumable elements serving as electrodes in the electrochemical reaction of a Hall-Héroult cell. They are mass-produced on a production line in which forming is one of the critical steps, since it defines part of their quality. The current forming process is not fully optimized. Large density gradients within the anodes reduce their performance in the electrolysis cells. Even today, carbon anodes are produced with their overall density and final mechanical properties as the only quality criteria. Anode manufacturing is optimized empirically, directly on the production line. However, anode quality comes down to a uniform electrical conductivity that minimizes current concentrations, which have several detrimental effects on anode performance and on aluminum production costs. This thesis is based on the hypothesis that the electrical conductivity of the anode is influenced only by its density, assuming a uniform chemical composition. The objective is to characterize the parameters of a model in order to feed a constitutive law that will make it possible to model the forming of anode blocks. Numerical modeling makes it possible to analyze the behaviour of the paste during forming. It thus becomes possible to predict the density gradients within the anodes and to optimize the forming parameters to improve their quality. The selected model is based on the actual mechanical and tribological properties of the paste. The thesis begins with a behavioural study aimed at improving the understanding of the constitutive behaviours of the paste observed during preliminary pressing tests. This study is based on pressing tests of hot carbon paste produced in a rigid mould and on pressing tests of dry aggregates in the same mould, instrumented with a piezoelectric sensor to record acoustic emissions. This analysis preceded the characterization of the paste properties in order to better interpret its mechanical behaviour, given the complex nature of this carbonaceous material, whose mechanical properties evolve with density. A first experimental setup was specifically developed to characterize the Young's modulus and Poisson's ratio of the paste. The same setup was also used to characterize the viscosity (time-dependent behaviour) of the paste. No suitable test exists to characterize these properties for this type of material heated to 150°C. A deformable-wall mould instrumented with strain gauges was used to carry out the tests. A second setup was developed to characterize the static and kinetic friction coefficients of the paste, also heated to 150°C. The model was used to characterize the mechanical properties of the paste by inverse identification and to simulate the forming of laboratory anodes. The mechanical properties of the paste obtained from the experimental characterization were compared with those obtained by the inverse identification method. The density maps obtained from the simulations were also compared with the maps of anodes pressed in the laboratory. Computed tomography was used to produce the latter density maps.
The simulation results confirm that numerical modeling has major potential as an optimization tool for the carbon paste forming process. Numerical modeling makes it possible to evaluate the influence of each forming parameter without interrupting production and/or implementing costly changes to the production line. This tool therefore makes it possible to explore avenues such as modulating the frequency parameters, modifying the initial distribution of the paste in the mould, or moulding the anode upside down, in order to optimize the forming process and increase anode quality.
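The inverse-identification step mentioned above might, in a highly simplified form, look like the sketch below: parameters of an assumed pressure-density compaction law are fitted to a measured pressing curve by least squares (the law, units, and data are placeholders, not the thesis's constitutive model).

```python
# Least-squares inverse identification of placeholder compaction-law parameters.
import numpy as np
from scipy.optimize import least_squares

def compaction_law(pressure, params):
    """Assumed saturating law: rho = rho0 + (rho_max - rho0) * (1 - exp(-p / p_ref))."""
    rho0, rho_max, p_ref = params
    return rho0 + (rho_max - rho0) * (1 - np.exp(-pressure / p_ref))

pressure = np.linspace(0.5, 60.0, 30)                     # MPa, assumed test range
measured = compaction_law(pressure, (1.10, 1.65, 15.0))   # stand-in for lab data
measured += 0.005 * np.random.randn(pressure.size)        # measurement noise

fit = least_squares(lambda p: compaction_law(pressure, p) - measured,
                    x0=[1.0, 1.5, 10.0])
print("identified (rho0, rho_max, p_ref):", fit.x)
```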
Abstract:
This thesis examines the effects of self-defined extensions on the compatibility of SKOS thesauri with one another. To this end, the workings of RDF, SKOS, SKOS-XL, and Dublin Core metadata are first explained as a foundation, and the syntax used is clarified. This is followed by a description of the structure of conventional thesauri, including the standards that apply to them. The process of converting a conventional thesaurus into SKOS is then presented. In order to then examine the self-defined extensions and their consequences, five SKOS thesauri are described as examples. This includes general information, their structure, the extensions used, and a diagram presenting the structure as an overview. Based on these thesauri, it is then described how mappings between the thesauri are created and what challenges arise in doing so.
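As a minimal illustration of what such a mapping looks like in SKOS, the sketch below uses rdflib and hypothetical concept URIs to assert skos:exactMatch and skos:broadMatch links between two thesauri.

```python
# Build a tiny SKOS mapping graph between two thesauri and print it as Turtle.
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

g = Graph()
concept_a = URIRef("http://example.org/thesaurusA/concept/0123")
concept_b = URIRef("http://example.org/thesaurusB/concept/4567")

g.add((concept_a, SKOS.exactMatch, concept_b))                  # same meaning in both thesauri
g.add((concept_a, SKOS.broadMatch,
       URIRef("http://example.org/thesaurusB/concept/4500")))   # mapped to a broader concept

print(g.serialize(format="turtle"))
```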
Abstract:
The process of building Data Warehouses (DW) is well known and has well-defined stages, but it is still mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW, which can be configured according to different domains, through the selection of the web sources and the definition of data processing characteristics. A Business Process Management (BPM) System allows modeling and executing Business Processes (BPs), providing support for the automation of processes. To support the process of building flexible WW, we propose two levels of BPs: a configuration process to support the selection of web sources and the definition of schemas and mappings, and a feeding process which takes the defined configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.
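A hypothetical sketch (all names invented here) of the kind of artefact the configuration process could produce and the feeding process could consume: selected web sources plus field-to-schema mappings for the WW.

```python
# Hypothetical configuration artefact shared by the configuration and feeding processes.
from dataclasses import dataclass, field

@dataclass
class WebSource:
    name: str
    url: str
    extraction_rule: str            # e.g. an XPath or CSS selector for the source page

@dataclass
class WWConfiguration:
    domain: str
    sources: list[WebSource] = field(default_factory=list)
    mappings: dict[str, str] = field(default_factory=dict)   # source field -> DW attribute

config = WWConfiguration(
    domain="real-estate listings",
    sources=[WebSource("site-a", "http://example.org/listings", "//div[@class='price']")],
    mappings={"price": "fact_listing.price_amount"},
)
print(config)
```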
Abstract:
The Dendritic Cell algorithm (DCA) is inspired by recent work in innate immunity. In this paper a formal description of the DCA is given. The DCA is described in detail, and its use as an anomaly detector is illustrated within the context of computer security. A port scan detection task is performed to substantiate the influence of signal selection on the behaviour of the algorithm. Experimental results provide a comparison of differing input signal mappings.
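As a rough illustration of the signal-to-output mapping inside a single dendritic cell (the weight values below are illustrative, not the paper's exact parameters): PAMP, danger and safe signals are combined linearly into co-stimulatory, semi-mature and mature output signals, and the dominant context decides whether the sampled antigen is treated as anomalous.

```python
# Simplified dendritic-cell signal mapping with illustrative weights.
import numpy as np

# columns: PAMP, danger, safe signals; rows: csm, semi-mature, mature outputs
WEIGHTS = np.array([[2.0, 1.0,  2.0],
                    [0.0, 0.0,  3.0],
                    [2.0, 1.0, -3.0]])

def dc_outputs(pamp, danger, safe):
    """Map the three input signals to (csm, semi, mature) output signals."""
    return WEIGHTS @ np.array([pamp, danger, safe])

csm, semi, mature = dc_outputs(pamp=0.8, danger=0.6, safe=0.1)
context = "anomalous" if mature > semi else "normal"
print(csm, semi, mature, context)
```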
Abstract:
This paper reports a direct observation of an interesting split of the (022)(022) four-beam secondary peak into two (022) and (022) three-beam peaks, in a synchrotron radiation Renninger scan (φ-scan), as evidence of the layer tetragonal distortion in two InGaP/GaAs (001) epitaxial structures with different thicknesses. The thickness, composition, perpendicular lattice parameter (a⊥), and in-plane lattice parameter (a∥) of the two epitaxial ternary layers were obtained from rocking curves (ω-scan) as well as from the simulation of the (022)(022) split, which then allowed for the determination of the perpendicular and parallel (in-plane) strains. Furthermore, (022)(022) ω:φ mappings were measured in order to exhibit the multiple diffraction condition of this four-beam case with their split measurement.
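A back-of-the-envelope sketch of the strain-determination step, assuming a pseudomorphic In(x)Ga(1-x)P layer on GaAs and Vegard's law; the literature lattice constants are in angstroms, and the measured perpendicular lattice parameter and composition below are placeholders, not the paper's values.

```python
# Perpendicular and in-plane strains of an In(x)Ga(1-x)P layer on GaAs via Vegard's law.
A_GAP, A_INP, A_GAAS = 5.4505, 5.8687, 5.6533   # GaP, InP, GaAs lattice constants (angstrom)

def relaxed_lattice(x_in):
    """Vegard's law for In(x)Ga(1-x)P."""
    return x_in * A_INP + (1 - x_in) * A_GAP

def strains(a_perp, a_par, x_in):
    a_r = relaxed_lattice(x_in)
    return (a_perp - a_r) / a_r, (a_par - a_r) / a_r   # perpendicular, in-plane

# Placeholder measurements: pseudomorphic layer (a_par locked to GaAs), x_In = 0.49.
eps_perp, eps_par = strains(a_perp=5.668, a_par=A_GAAS, x_in=0.49)
print(f"perpendicular strain: {eps_perp:.4e}, in-plane strain: {eps_par:.4e}")
```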
Abstract:
The purpose of this dissertation is to study literary representations of Eastern Europe in the works of celebrated and less-known American authors, who visited and narrated the region between the mid-1960s and early 2000s. The main critical body focuses on Eastern Europe before 1989 and encompasses three major voices of American literature: John Updike, Joyce Carol Oates, and Philip Roth. However, in the last chapter I also explore American literary perceptions of the area following the collapse of communism. Importantly, the term “Eastern Europe” as used in this dissertation is charged with significance. I approach it not only as a space on the map or the geopolitical construct which emerged in the aftermath of the Second World War, but rather as a conceptual category and a repository of meanings built out of fact and fantasy: specific historical, political and cultural realities interlaced with subjective worldviews, preconceptions, and mental images. The critical framework of this dissertation is twofold. I reach for the concept of liminality to elucidate the indeterminacy and malleability which lies at the heart of the object of study—the idea, image, and experience of Eastern Europe. Bearing in mind the nature of the works under analysis, all of which were inspired by actual visits behind the Iron Curtain, I propose to interpret these transatlantic literary journeys in terms of generative experience, where Eastern Europe is mapped as a liminal space of possibility; a contact zone between cultures and, potentially, the locus of self-discovery and individual transformation. If liminality is the metaphor or a lens that I employ in order to account for the nature of the analyzed works and the complex terrain they map, imagology, whose purpose is to study the processes of constructing selfhood and otherness in literature, provides me with the method and the critical vocabulary for analyzing selected literary representations. The dissertation is divided into six chapters, the last of which serves as coda to the previous discussion. The first two chapters constitute the critical foundation of this work. Then, in chapters 3, 4, and 5 I study American images of Eastern Europe in the works written by John Updike, Joyce Carol Oates, and Philip Roth, respectively. The last, sixth chapter of this dissertation is divided into two parts. In the first one, I discuss new critical perspectives and avenues of research in the study of Eastern Europe following the collapse of communism. Then, I carry out a joint analysis of four works written after 1989 by Eva Hoffman, Arthur Phillips, John Beckman, and Gary Shteyngart. The dissertation ends with conclusions in which I summarize my findings and reflections, and suggest implications for future research. As this dissertation seeks to demonstrate, Eastern Europe portrayed in the analyzed works oscillates between contradictory representations which are contingent upon a number of factors, most importantly who maps it and in what context. Even though each experience of Eastern Europe is distinct and fueled by the profiles, identities, and interests of the characters and their creators, I have found out that certain patterns of othering are present in all the works. Thus, my research seems to suggest that there is something of a recurrent literary image of Eastern Europe, which goes beyond the context of the Cold War. 
Accordingly, while this dissertation hopes to be a valid contribution to the study of literary and cultural mappings of Eastern Europe, it also generates new questions regarding the current, post-communist representation of the area and its relationship to the national tropes explored in my work.
Abstract:
We studied the Paraíba do Sul river watershed, São Paulo state (PSWSP), Southeastern Brazil, in order to assess land use and land cover (LULC) and their implications for the amount of carbon (C) stored in the forest cover between the years 1985 and 2015. The region covers an area of 1,395,975 ha. We used images from the Operational Land Imager (OLI) sensor (OLI/Landsat-8) to produce mappings, and image segmentation techniques to produce vectors with homogeneous characteristics. The training samples and the samples used for classification and validation were collected from the segmented image. To quantify the C stored in aboveground live biomass (AGLB), we used an indirect method and applied literature-based reference values. The recovery of 205,690 ha of secondary Native Forest (NF) after 1985 sequestered 9.7 Tg (teragrams) of C. Considering the whole NF area (455,232 ha), the amount of C accumulated along the whole watershed was 35.5 Tg, and the whole Eucalyptus crop (EU) area (113,600 ha) sequestered 4.4 Tg of C. Thus, the total amount of C sequestered in the whole watershed (NF + EU) was 39.9 Tg of C, or 145.6 Tg of CO2, and the NF areas were responsible for the largest C stock in the watershed (89%). Therefore, the increase in NF cover contributes positively to the reduction of CO2 concentration in the atmosphere, and Reducing Emissions from Deforestation and Forest Degradation (REDD+) may become one of the most promising compensation mechanisms for farmers who have increased forest cover on their farms.