388 results for Lipschitz Mappings
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Carbon anodes are consumable elements that serve as the electrode in the electrochemical reaction of a Hall-Héroult cell. They are mass-produced on a production line in which forming is one of the critical steps, since it defines part of their quality. The current forming process is not fully optimized. Large density gradients inside the anodes reduce their performance in the electrolysis cells. Even today, carbon anodes are produced with their overall density and final mechanical properties as the only quality criteria. Anode manufacturing is optimized empirically, directly on the production line. Yet anode quality comes down to a uniform electrical conductivity, which minimizes the current concentrations that have several detrimental effects on anode performance and on aluminum production costs. This thesis is based on the hypothesis that, given a uniform chemical composition, the electrical conductivity of the anode is influenced only by its density. The objective is to characterize the parameters of a model in order to feed a constitutive law that will make it possible to model the forming of anode blocks. Numerical modeling makes it possible to analyze the behaviour of the paste during forming. It thus becomes possible to predict the density gradients inside the anodes and to optimize the forming parameters to improve anode quality. The selected model is based on the real mechanical and tribological properties of the paste. The thesis begins with a behavioural study aimed at improving the understanding of the constitutive behaviours of the paste observed in preliminary pressing tests. This study is based on pressing tests of hot carbon paste produced in a rigid mould, and on pressing tests of dry aggregates in the same mould instrumented with a piezoelectric sensor recording acoustic emissions. This analysis preceded the characterization of the paste properties in order to better interpret its mechanical behaviour, given the complex nature of this carbon material, whose mechanical properties evolve with density. A first experimental setup was developed specifically to characterize the Young's modulus and Poisson's ratio of the paste. The same setup was also used to characterize the viscosity (time-dependent behaviour) of the paste. No existing test is suited to characterizing these properties for this type of material heated to 150°C. A mould with a deformable wall instrumented with strain gauges was used to carry out the tests. A second setup was developed to characterize the static and kinetic friction coefficients of the paste, also heated to 150°C. The model was used to characterize the mechanical properties of the paste by inverse identification and to simulate the forming of laboratory anodes. The mechanical properties of the paste obtained by experimental characterization were compared with those obtained by the inverse identification method. The density maps extracted from the simulations were also compared with the maps of the anodes pressed in the laboratory. Computed tomography was used to produce the latter density maps.
The simulation results confirm that numerical modeling has major potential as a tool for optimizing the carbon paste forming process. Numerical modeling makes it possible to evaluate the influence of each forming parameter without interrupting production and/or implementing costly changes to the production line. This tool therefore makes it possible to explore avenues such as modulating the frequency parameters, modifying the initial distribution of the paste in the mould, or moulding the anode upside down, in order to optimize the forming process and increase anode quality.
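A hedged sketch of the inverse-identification step described above. The abstract does not specify the optimization scheme, so the toy forward model, parameter names, and synthetic "measured" data below are illustrative placeholders; a real implementation would couple the optimizer to the forming simulation and CT density maps.

```python
# Minimal sketch of inverse parameter identification against a density map.
# The toy forward model is purely illustrative, not the thesis' FEM model.
import numpy as np
from scipy.optimize import minimize

z = np.linspace(0.0, 1.0, 50)  # normalized depth through the anode

def forward_model(params, z):
    """Toy stand-in for the forming simulation: density decays with
    depth at a rate controlled by an effective friction parameter."""
    rho_top, drop, friction = params
    return rho_top - drop * (1.0 - np.exp(-friction * z))

true_params = np.array([1.60, 0.15, 3.0])   # hypothetical "measured" anode
measured = forward_model(true_params, z)

def mismatch(params):
    # Least-squares difference between simulated and "measured" density maps.
    return np.sum((forward_model(params, z) - measured) ** 2)

x0 = np.array([1.5, 0.1, 1.0])              # initial guess
result = minimize(mismatch, x0, method="Nelder-Mead")
print("Identified parameters:", result.x)   # should approach true_params
```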
Abstract:
Given a Lipschitz continuous multifunction $F$ on ${\mathbb{R}}^{n}$, we construct a probability measure on the set of all solutions to the Cauchy problem $\dot x\in F(x)$ with $x(0)=0$. With probability one, the derivatives of these random solutions take values within the set $\operatorname{ext} F(x)$ of extreme points for a.e.~time $t$. This provides an alternative approach in the analysis of solutions to differential inclusions with non-convex right-hand side.
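A standard example, added here for orientation (it is not taken from the abstract itself), shows why the extreme-point property matters:

```latex
% Classical non-convex inclusion on $\mathbb{R}$:
\[
  \dot x \in F(x) = \{-1,\, 1\}, \qquad x(0) = 0 .
\]
% The convexified inclusion $\dot x \in [-1,1]$ admits $x \equiv 0$,
% whereas every solution of the original inclusion satisfies
\[
  \dot x(t) \in \operatorname{ext} F(x(t)) = \{-1,\, 1\}
  \quad \text{for a.e. } t ,
\]
% which is exactly the property the random solutions above enjoy
% with probability one.
```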
Abstract:
This thesis examines the effects of self-defined extensions on the mutual compatibility of SKOS thesauri. To this end, the workings of RDF, SKOS, SKOS-XL, and Dublin Core metadata are first explained and the syntax used is clarified. A description of the structure of conventional thesauri follows, including the standards that apply to them. The process of converting a conventional thesaurus into SKOS is then presented. In order to examine the self-defined extensions and their consequences, five SKOS thesauri are described as examples, covering general information, their structure, the extensions used, and a diagram presenting the structure as an overview. On the basis of these thesauri, it is then described how mappings between the thesauri are created and which challenges arise in the process.
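As a hedged illustration of what an inter-thesaurus mapping looks like in practice (the thesauri and concept URIs below are invented placeholders; the thesis works with five real SKOS thesauri), SKOS mapping relations can be asserted with rdflib:

```python
# Minimal sketch: asserting SKOS mapping relations between concepts of
# two thesauri. The URIs are hypothetical placeholders.
from rdflib import Graph, Namespace, URIRef

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")

g = Graph()
g.bind("skos", SKOS)

concept_a = URIRef("http://example.org/thesaurusA/concept/informatik")
concept_b = URIRef("http://example.org/thesaurusB/concept/computer-science")

# exactMatch: the two concepts can be used interchangeably across
# thesauri; closeMatch would signal merely similar concepts.
g.add((concept_a, SKOS.exactMatch, concept_b))

print(g.serialize(format="turtle"))
```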
Abstract:
The quotient of a finite-dimensional Euclidean space by a finite linear group inherits different structures from the initial space, e.g. a topology, a metric and a piecewise linear structure. The question when such a quotient is a manifold leads to the study of finite groups generated by reflections and rotations, i.e. by orthogonal transformations whose fixed point subspace has codimension one or two. We classify such groups and thereby complete earlier results by M. A. Mikhaîlova from the 70s and 80s. Moreover, we show that a finite group is generated by reflections and rotations if and only if the corresponding quotient is a Lipschitz, or equivalently piecewise linear, manifold (with boundary). For the proof of this statement we show in addition that each piecewise linear manifold of dimension up to four on which a finite group acts by piecewise linear homeomorphisms admits a compatible smooth structure with respect to which the group acts smoothly. This solves a challenge posed by Thurston and confirms a conjecture by Kwasik and Lee. In the topological category a counterexample to the above-mentioned characterization is given by the binary icosahedral group. We show that this is the only counterexample up to products. In particular, we answer the question by Davis of when the underlying space of an orbifold is a topological manifold. As a corollary of our results we generalize a fixed point theorem by Steinberg on unitary reflection groups to finite groups generated by reflections and rotations. As an application thereof we answer a question by Petrunin on quotients of spheres.
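A low-dimensional illustration (standard material, added here for orientation): a finite rotation group in the plane yields a manifold quotient, while a reflection produces boundary.

```latex
% Cyclic rotation group $C_n \subset SO(2)$ acting on $\mathbb{R}^2$:
\[
  \mathbb{R}^2 / C_n \;\cong\; \mathbb{R}^2
  \quad \text{(a cone of angle } 2\pi/n \text{, PL-homeomorphic to the plane)},
\]
% whereas the group generated by a single reflection gives a manifold
% with boundary:
\[
  \mathbb{R}^2 / \mathbb{Z}_2 \;\cong\; \text{a closed half-plane}.
\]
```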
Abstract:
The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and identically- and independently-distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. Infinite-horizon problems with linear penalty for detection delay and identically- and independently-distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
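As a hedged sketch of the cumulative-sum (CUSUM) procedure mentioned above (the Gaussian likelihood model and threshold are chosen arbitrarily for illustration; the dissertation's epsilon-optimal parameterization is not reproduced here):

```python
# Minimal CUSUM change detector for i.i.d. observations with known
# pre-/post-change densities, illustrated with two Gaussians.
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    # log f1(x)/f0(x) for N(mu1, sigma^2) vs N(mu0, sigma^2)
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

def cusum(observations, threshold=8.0):
    """Return the first index at which the CUSUM statistic crosses the
    threshold, or None if no change is declared."""
    s = 0.0
    for t, x in enumerate(observations):
        s = max(0.0, s + log_likelihood_ratio(x))  # reflected random walk
        if s >= threshold:
            return t
    return None

# 100 pre-change samples from N(0,1), then post-change samples from N(1,1).
data = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
print("Alarm at index:", cusum(data))  # expected shortly after index 100
```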
Abstract:
The process of building Data Warehouses (DW) is well known, with well-defined stages, but it is still mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW that can be configured for different domains through the selection of the web sources and the definition of data-processing characteristics. A Business Process Management (BPM) system allows modeling and executing Business Processes (BPs), providing support for the automation of processes. To support the process of building flexible WW, we propose two levels of BPs: a configuration process that supports the selection of web sources and the definition of schemas and mappings, and a feeding process that takes the resulting configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.
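A hedged sketch of what the artifact handed from the configuration process to the feeding process might look like (field names, sources, and schema are invented for illustration; the paper's actual configuration format is not specified here):

```python
# Hypothetical configuration produced by the configuration BP:
# selected web sources plus schema and mapping definitions.
ww_configuration = {
    "domain": "real-estate-listings",          # invented example domain
    "sources": [
        {"url": "http://example.org/listings.rss", "format": "rss"},
        {"url": "http://example.org/agencies.csv", "format": "csv"},
    ],
    "target_schema": {
        "fact_listing": ["price", "area_m2", "listed_on"],
        "dim_location": ["city", "neighbourhood"],
    },
    "mappings": [
        # source field -> warehouse field
        {"source": "listings.rss:price", "target": "fact_listing.price"},
        {"source": "listings.rss:city",  "target": "dim_location.city"},
    ],
}

def feed(config):
    """Feeding process stub: iterate over the configured sources and
    load rows according to the declared mappings."""
    for src in config["sources"]:
        print(f"extracting {src['url']} as {src['format']}")
        # extraction/transformation/loading would happen here

feed(ww_configuration)
```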
Abstract:
The Dendritic Cell algorithm (DCA) is inspired by recent work in innate immunity. In this paper a formal description of the DCA is given. The DCA is described in detail, and its use as an anomaly detector is illustrated within the context of computer security. A port scan detection task is performed to substantiate the influence of signal selection on the behaviour of the algorithm. Experimental results provide a comparison of differing input signal mappings.
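A hedged, much-simplified sketch of the signal-processing core of the DCA (the weights, threshold, and input values below are illustrative placeholders, not those used in the paper):

```python
# Simplified Dendritic Cell Algorithm core: each cell fuses input
# signals into costimulation (csm), semi-mature and mature outputs,
# then labels its collected antigens by the dominant context.
WEIGHTS = {
    "csm":    {"pamp": 2.0, "danger": 1.0, "safe": 2.0},
    "semi":   {"pamp": 0.0, "danger": 0.0, "safe": 3.0},
    "mature": {"pamp": 2.0, "danger": 1.0, "safe": -3.0},
}

def fuse(signals, kind):
    w = WEIGHTS[kind]
    return sum(w[name] * signals.get(name, 0.0) for name in w)

class DendriticCell:
    def __init__(self, migration_threshold=10.0):
        self.threshold = migration_threshold
        self.csm = self.semi = self.mature = 0.0
        self.antigens = []

    def sample(self, antigen, signals):
        """Collect one antigen and accumulate the fused signals; return
        ('normal'|'anomalous', antigens) once the cell migrates."""
        self.antigens.append(antigen)
        self.csm += fuse(signals, "csm")
        self.semi += fuse(signals, "semi")
        self.mature += fuse(signals, "mature")
        if self.csm >= self.threshold:
            context = "anomalous" if self.mature > self.semi else "normal"
            return context, self.antigens
        return None

cell = DendriticCell()
for port in [22, 23, 80]:  # e.g. ports touched during a port scan
    verdict = cell.sample(port, {"pamp": 1.0, "danger": 2.0, "safe": 0.1})
    if verdict:
        print(verdict)  # high danger/pamp relative to safe -> 'anomalous'
```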
Abstract:
We extend previous papers in the literature concerning the homogenization of Robin type boundary conditions for quasilinear equations, in the case of microscopic obstacles of critical size: here we consider nonlinear boundary conditions involving some maximal monotone graphs which may correspond to discontinuous or non-Lipschitz functions arising in some catalysis problems.
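For orientation, a model form of the boundary condition in question (a hedged sketch; the paper's precise operator and scaling are not reproduced here):

```latex
% Robin-type condition on the boundary of the microscopic obstacles
% $S_\varepsilon$ of critical size, with a maximal monotone graph
% $\beta$ (possibly multivalued, hence the inclusion):
\[
  -\partial_{\nu} u_{\varepsilon} \;\in\;
  \varepsilon^{-\gamma}\,\beta(u_{\varepsilon})
  \quad \text{on } \partial S_{\varepsilon},
\]
% where $\beta$ may correspond to a discontinuous or non-Lipschitz
% function, as in the catalysis problems mentioned above.
```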
Abstract:
This paper reports a direct observation of an interesting split of the $(022)(0\bar{2}2)$ four-beam secondary peak into two $(022)$ and $(0\bar{2}2)$ three-beam peaks in a synchrotron radiation Renninger scan ($\phi$-scan), as evidence of the layer tetragonal distortion in two InGaP/GaAs (001) epitaxial structures with different thicknesses. The thickness, composition, perpendicular lattice parameter ($a_\perp$), and in-plane lattice parameter ($a_\parallel$) of the two epitaxial ternary layers were obtained from rocking curves ($\omega$-scan) as well as from the simulation of the $(022)(0\bar{2}2)$ split, which then allowed for the determination of the perpendicular and parallel (in-plane) strains. Furthermore, $(022)(0\bar{2}2)$ $\omega{:}\phi$ mappings were measured in order to exhibit the multiple diffraction condition of this four-beam case with their split measurement.
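For reference, the strains mentioned above are conventionally defined from the measured lattice parameters (a standard formulation added here, not quoted from the paper; $a_s$ denotes the substrate lattice parameter and $C_{ij}$ the elastic constants of the layer):

```latex
\[
  \varepsilon_{\perp} = \frac{a_{\perp} - a_{s}}{a_{s}},
  \qquad
  \varepsilon_{\parallel} = \frac{a_{\parallel} - a_{s}}{a_{s}},
\]
% For a pseudomorphic layer on a (001) substrate, the tetragonal
% distortion links the two strains through the elastic constants:
\[
  \varepsilon_{\perp} = -\frac{2\,C_{12}}{C_{11}}\,\varepsilon_{\parallel}.
\]
```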
Abstract:
The purpose of this dissertation is to study literary representations of Eastern Europe in the works of celebrated and lesser-known American authors, who visited and narrated the region between the mid-1960s and early 2000s. The main critical body focuses on Eastern Europe before 1989 and encompasses three major voices of American literature: John Updike, Joyce Carol Oates, and Philip Roth. However, in the last chapter I also explore American literary perceptions of the area following the collapse of communism. Importantly, the term “Eastern Europe” as used in this dissertation is charged with significance. I approach it not only as a space on the map or the geopolitical construct which emerged in the aftermath of the Second World War, but rather as a conceptual category and a repository of meanings built out of fact and fantasy: specific historical, political and cultural realities interlaced with subjective worldviews, preconceptions, and mental images. The critical framework of this dissertation is twofold. I reach for the concept of liminality to elucidate the indeterminacy and malleability which lies at the heart of the object of study—the idea, image, and experience of Eastern Europe. Bearing in mind the nature of the works under analysis, all of which were inspired by actual visits behind the Iron Curtain, I propose to interpret these transatlantic literary journeys in terms of generative experience, where Eastern Europe is mapped as a liminal space of possibility; a contact zone between cultures and, potentially, the locus of self-discovery and individual transformation. If liminality is the metaphor or a lens that I employ in order to account for the nature of the analyzed works and the complex terrain they map, imagology, whose purpose is to study the processes of constructing selfhood and otherness in literature, provides me with the method and the critical vocabulary for analyzing selected literary representations. The dissertation is divided into six chapters, the last of which serves as a coda to the previous discussion. The first two chapters constitute the critical foundation of this work. Then, in chapters 3, 4, and 5 I study American images of Eastern Europe in the works written by John Updike, Joyce Carol Oates, and Philip Roth, respectively. The last, sixth chapter of this dissertation is divided into two parts. In the first, I discuss new critical perspectives and avenues of research in the study of Eastern Europe following the collapse of communism. Then, I carry out a joint analysis of four works written after 1989 by Eva Hoffman, Arthur Phillips, John Beckman, and Gary Shteyngart. The dissertation ends with conclusions in which I summarize my findings and reflections, and suggest implications for future research. As this dissertation seeks to demonstrate, Eastern Europe portrayed in the analyzed works oscillates between contradictory representations which are contingent upon a number of factors, most importantly who maps it and in what context. Even though each experience of Eastern Europe is distinct and fueled by the profiles, identities, and interests of the characters and their creators, I have found that certain patterns of othering are present in all the works. Thus, my research seems to suggest that there is something of a recurrent literary image of Eastern Europe, which goes beyond the context of the Cold War.
Accordingly, while this dissertation hopes to be a valid contribution to the study of literary and cultural mappings of Eastern Europe, it also generates new questions regarding the current, post-communist representation of the area and its relationship to the national tropes explored in my work.
Abstract:
For each quasi-metric space $X$ we consider the convex lattice $\mathrm{SLip}_1(X)$ of all semi-Lipschitz functions on $X$ with semi-Lipschitz constant not greater than 1. If $X$ and $Y$ are two complete quasi-metric spaces, we prove that every convex lattice isomorphism $T$ from $\mathrm{SLip}_1(Y)$ onto $\mathrm{SLip}_1(X)$ can be written in the form $Tf = c\cdot(f\circ\tau) + \varphi$, where $\tau$ is an isometry, $c > 0$ and $\varphi \in \mathrm{SLip}_1(X)$. As a consequence, we obtain that two complete quasi-metric spaces are almost isometric if, and only if, there exists an almost-unital convex lattice isomorphism between $\mathrm{SLip}_1(X)$ and $\mathrm{SLip}_1(Y)$.
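An elementary sanity check of the composition part of the representation (an observation added here for orientation, not taken from the paper):

```latex
% If $\tau : X \to Y$ is an isometry of quasi-metric spaces, then for
% all $f \in \mathrm{SLip}_1(Y)$ and $x, x' \in X$ with $d_X(x,x') > 0$,
\[
  \frac{(f\circ\tau)(x) - (f\circ\tau)(x')}{d_X(x,x')}
  \;=\;
  \frac{f(\tau x) - f(\tau x')}{d_Y(\tau x,\,\tau x')}
  \;\le\; 1 ,
\]
% so $f \mapsto f\circ\tau$ carries $\mathrm{SLip}_1(Y)$ into
% $\mathrm{SLip}_1(X)$ (and onto, by the same argument applied to
% $\tau^{-1}$); the theorem states that every convex lattice
% isomorphism has this shape up to the affine terms $c$ and $\varphi$.
```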
Abstract:
We studied the Paraíba do Sul river watershed, São Paulo state (PSWSP), Southeastern Brazil, in order to assess the land use and land cover (LULC) and their implications for the amount of carbon (C) stored in the forest cover between the years 1985 and 2015. The region covers an area of 1,395,975 ha. We used images made by the Operational Land Imager sensor (OLI/Landsat-8) to produce mappings, and image segmentation techniques to produce vectors with homogeneous characteristics. The training samples and the samples used for classification and validation were collected from the segmented image. To quantify the C stocked in aboveground live biomass (AGLB), we used an indirect method and applied literature-based reference values. The recovery of 205,690 ha of secondary Native Forest (NF) after 1985 sequestered 9.7 Tg (teragrams) of C. Considering the whole NF area (455,232 ha), the amount of C accumulated along the whole watershed was 35.5 Tg, and the whole Eucalyptus crop (EU) area (113,600 ha) sequestered 4.4 Tg of C. Thus, the total amount of C sequestered in the whole watershed (NF + EU) was 39.9 Tg of C, or 145.6 Tg of CO2, and the NF areas were responsible for the largest C stock in the watershed (89%). Therefore, the increase of the NF cover contributes positively to the reduction of CO2 concentration in the atmosphere, and Reducing Emissions from Deforestation and Forest Degradation (REDD+) may become one of the most promising compensation mechanisms for farmers who have increased forest cover on their farms.
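A hedged arithmetic check of the carbon-to-CO2 conversion reported above (the standard 44/12 molar-mass ratio is assumed; the small difference from the reported 145.6 Tg presumably reflects rounding of the intermediate totals):

```python
# C -> CO2 conversion using molar masses (CO2/C = 44/12 ≈ 3.667).
c_native_forest = 35.5   # Tg of C in the whole NF area
c_eucalyptus = 4.4       # Tg of C in the EU area

c_total = c_native_forest + c_eucalyptus   # 39.9 Tg of C
co2_total = c_total * 44.0 / 12.0          # ≈ 146.3 Tg of CO2
nf_share = c_native_forest / c_total       # ≈ 0.89, i.e. 89 %

print(f"total C: {c_total:.1f} Tg, CO2: {co2_total:.1f} Tg, NF share: {nf_share:.0%}")
```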