823 results for Boolean Computations
Abstract:
This thesis deals with optimal control problems for the incompressible magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as nuclear fission reactors with liquid-metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to control the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems pose various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. We propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We give a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions of the control problem can be obtained. For the numerical solution of this system, a finite element discretization is combined with an appropriate gradient-type algorithm. An object-oriented finite element library has been developed to obtain a parallel, multigrid implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum of the control problem can be computed in a robust and accurate manner.
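For orientation, a minimal sketch of the constrained-minimization structure described above, written in generic notation (the domain $\Omega$, target velocity $\mathbf{u}_d$, penalty parameter $\alpha$ and coupling constant $S$ are placeholders, not the thesis's exact functional): the lifted boundary control $\mathbf{g}$ enters a quadratic cost functional whose minimization is constrained by the steady incompressible MHD system,
\[
\min_{\mathbf{g}}\; J(\mathbf{u},\mathbf{B},\mathbf{g}) = \frac{1}{2}\int_\Omega |\mathbf{u}-\mathbf{u}_d|^2\,dx + \frac{\alpha}{2}\int_\Omega |\nabla \mathbf{g}|^2\,dx
\]
subject to
\[
(\mathbf{u}\cdot\nabla)\mathbf{u} - \nu\Delta\mathbf{u} + \nabla p = S\,(\nabla\times\mathbf{B})\times\mathbf{B},\qquad \nabla\cdot\mathbf{u}=0,
\]
\[
\nabla\times(\mathbf{u}\times\mathbf{B}) + \frac{1}{\mathrm{Re}_m}\,\Delta\mathbf{B} = \mathbf{0},\qquad \nabla\cdot\mathbf{B}=0,
\]
with the magnetic field split as $\mathbf{B}=\mathbf{B}_0+\mathbf{g}$ so that the boundary condition is carried by the lifting function; setting the first variation of the associated Lagrangian to zero yields the kind of optimality system mentioned above.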
Abstract:
The main goal of this thesis is to understand and link together some of the early works of Michel Rumin and Pierre Julg. The work is centred around the so-called Rumin complex, a construction in sub-Riemannian geometry. A Carnot manifold is a manifold endowed with a horizontal distribution; if, in addition, a metric is given, one obtains a sub-Riemannian manifold. Such data arise in different contexts, such as the formulation of the second principle of thermodynamics; optimal control; propagation of singularities for sums of squares of vector fields; real hypersurfaces in complex manifolds; ideal boundaries of rank-one symmetric spaces; asymptotic geometry of nilpotent groups; and the modelling of human vision. Differential forms on a Carnot manifold have weights, which produces a filtered complex. In view of applications to nilpotent groups, Rumin has defined a substitute for the de Rham complex adapted to this filtration. The presence of a filtered complex also suggests using the formal machinery of spectral sequences in the study of cohomology. The goal was indeed to understand the link between Rumin's operator and the differentials which appear in the various spectral sequences we have worked with: the weight spectral sequence; a special spectral sequence introduced by Julg and called by him Forman's spectral sequence; and Forman's spectral sequence (which turns out to be unrelated to the previous one). We will see that in general Rumin's operator depends on choices; however, in some special cases it does not, because it has an alternative interpretation as a differential in a natural spectral sequence. After defining Carnot groups and analysing their main properties, we introduce the notion of the weight of a form, which produces a splitting of the exterior differential operator d. We show how the Rumin complex arises from this splitting and carry out the complete computations in some key examples. From the third chapter onwards we focus on Julg's paper, describing his new filtration and its relationship with the weight spectral sequence. We study the connection between the spectral sequences and Rumin's complex in the n-dimensional Heisenberg group and the 7-dimensional quaternionic Heisenberg group, and then generalize the result to Carnot groups using the weight filtration. Finally, we explain why Julg required independence of choices for some special Rumin operators, introducing the Szegő map and describing its main properties.
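As a schematic pointer to the construction referred to above (stated in standard notation, not necessarily that of the thesis): on a Carnot group the weight grading of differential forms splits the exterior differential into pieces of non-negative weight,
\[
d = d_0 + d_1 + d_2 + \cdots, \qquad d_i : \Omega^{k,w} \longrightarrow \Omega^{k+1,\,w+i},
\]
where $\Omega^{k,w}$ denotes the $k$-forms of weight $w$. The piece $d_0$ is algebraic (of order zero), Rumin's complex is carried by $E_0 = \ker d_0 / \operatorname{im}\, d_0$ with a differential $d_c$ induced by $d$, and the same filtration by weight is what feeds the spectral sequences discussed in the thesis.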
Abstract:
This thesis provides efficient and robust algorithms, based on algebraic and numerical methods, for computing the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric, or another torus). The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points on each connected component of the intersection curve are determined. The required computations are carried out efficiently, since only polynomials of degree at most four have to be solved, and exactly, by using exact arithmetic. The numerical part includes algorithms for tracing each component of the intersection curve, starting from the previously computed reference points. Using interval arithmetic, accidental errors such as jumping between branches or skipping parts of the curve are prevented. Furthermore, the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the real intersection curve to any given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of algebraic equations of high degree is avoided.
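One way to see why polynomials of degree at most four suffice, as stated above (a standard observation about tori, not a summary of the thesis's actual algorithms): a torus with major radius $R$ and minor radius $r$, centred at the origin with the $z$-axis as its axis, satisfies
\[
\bigl(x^2+y^2+z^2+R^2-r^2\bigr)^2 = 4R^2\,(x^2+y^2),
\]
so substituting a parametric line $\mathbf{p}(t)=\mathbf{a}+t\,\mathbf{v}$ produces a univariate polynomial of degree four in $t$. Reference points on the intersection curve can therefore be obtained from quartic or lower-degree equations, which can be solved exactly.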
Abstract:
Let $\pi: X \rightarrow S$ be a family of Calabi-Yau threefolds defined over $\mathbb{Z}$. Suppose there exists a rank-four submodule $M \subset H^3_{DR}(X/S)$, invariant under the Gauss-Manin connection, such that the Picard-Fuchs operator $P$ on $M$ is a so-called Calabi-Yau operator of order four. Let $k$ be a finite field of characteristic $p$, and let $\pi_0: X_0 \rightarrow S_0$ be the reduction of $\pi$ over $k$. For the ordinary fibres $X_{t_0}$ of the family we derive an explicit formula for computing the characteristic polynomial of the Frobenius endomorphism, the Frobenius polynomial, on the corresponding submodule $M_{cris} \subset H^3_{cris}(X_{t_0})$. Now let $f_0(z)$ be the power series solution of the differential equation $Pf = 0$ in a neighbourhood of zero. Since a reciprocal root of the Frobenius polynomial at a Teichmüller point $t$ is given by $f_0(z)/f_0(z^p)|_{z=t}$, a crucial step in the computation of the Frobenius polynomial is the construction of a $p$-adic analytic continuation of the quotient $f_0(z)/f_0(z^p)$ to the boundary of the $p$-adic unit disc. If the coefficients of $f_0$ can be expressed in terms of the constant terms of the powers of a Laurent polynomial whose Newton polyhedron contains the origin as its only interior lattice point, we prove certain congruence properties among the coefficients of $f_0$; these are crucial for the construction of the analytic continuation. If the fibre $X_{t_0}$ contains an ordinary double point, we expect that in the limit the Frobenius polynomial splits into two factors of degree one and one factor of degree two, the degree-two factor being uniquely determined by a coefficient $a_p$. As $p$ runs through the set of all primes, the modularity theorem leads us to expect a modular form of weight four whose coefficients are given by the $a_p$; this expectation is confirmed by our extensive computations. Moreover, we derive further formulas for determining the Frobenius polynomial in which the non-holomorphic solutions of the equation $Pf = 0$ in a neighbourhood of zero also play a role.
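For orientation, the congruence properties alluded to above are of Dwork type; schematically (precise hypotheses and proofs as in the thesis), writing $f_0(z)=\sum_{n\ge 0}a_n z^n$ and $F_s(z)=\sum_{n<p^s}a_n z^n$ for its truncations, one has
\[
\frac{f_0(z)}{f_0(z^p)} \;\equiv\; \frac{F_s(z)}{F_{s-1}(z^p)} \pmod{p^s},
\]
so the quotient extends to a $p$-adic analytic function on the closed unit disc, and its value at a Teichmüller point $t$, and hence the corresponding reciprocal root of the Frobenius polynomial, can be computed as the limit of the truncated ratios.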
Abstract:
Over the years the Differential Quadrature (DQ) method has distinguished itself through its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in this topic has grown, and several researchers have driven significant developments in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate; the result is referred to as the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been proven to fail for problems with strong material discontinuities as well as for problems involving singularities and irregularities. On the other hand, the very well-known Finite Element (FE) method can overcome these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be referred to here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
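A minimal sketch of the basic DQ idea in Python, using the standard explicit (Shu-type) formulae for the first-derivative weighting coefficients on arbitrary nodes; this illustrates differential quadrature in general and is not taken from the thesis's GDQFEM implementation:

import numpy as np

def dq_first_derivative_weights(x):
    """First-derivative DQ weights a[i, j] such that f'(x[i]) ~ sum_j a[i, j] * f(x[j])."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)           # so the row products skip k == i
    M1 = np.prod(diff, axis=1)            # M1[i] = prod_{k != i} (x[i] - x[k])
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M1[i] / ((x[i] - x[j]) * M1[j])
        a[i, i] = -a[i].sum()             # each row of weights sums to zero
    return a

# Example: approximate d/dx sin(x) on Chebyshev-Gauss-Lobatto points mapped to [0, pi]
n = 15
x = 0.5 * np.pi * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
A = dq_first_derivative_weights(x)
print("max error:", np.max(np.abs(A @ np.sin(x) - np.cos(x))))

For smooth functions and well-chosen (e.g. Chebyshev-type) nodes the error decays rapidly with the number of nodes, which is the accuracy advantage exploited by GDQ-based methods.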
Abstract:
Recent research has shown that a single, arbitrarily efficient algorithm can be significantly outperformed by a portfolio of (possibly on-average slower) algorithms. Within the Constraint Programming (CP) context, a portfolio solver can be seen as a particular constraint solver that exploits the synergy between the constituent solvers of its portfolio to predict which is (or which are) the best solver(s) to run on a new, unseen instance. In this thesis we examine the benefits of portfolio solvers in CP. Although portfolio approaches have been extensively studied for Boolean Satisfiability (SAT) problems, in the more general CP field these techniques have been only marginally studied and used. We conducted this work through the investigation, analysis and construction of several portfolio approaches for solving both satisfaction and optimization problems. We focused in particular on sequential approaches, i.e., single-threaded portfolio solvers always running on the same core. We started from a first empirical evaluation of portfolio approaches for solving Constraint Satisfaction Problems (CSPs), and then improved on it by introducing new data, solvers, features, algorithms, and tools. Afterwards, we addressed the more general Constraint Optimization Problems (COPs) by implementing and testing a number of models for COP portfolio solvers. Finally, we came full circle by developing sunny-cp: a sequential CP portfolio solver that also proved competitive in the MiniZinc Challenge, the reference competition for CP solvers.
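As a rough illustration of the prediction step of a portfolio solver, here is a simplified k-nearest-neighbour selection in Python (the feature vectors, runtimes and solver names are invented; the actual algorithms studied in the thesis, such as the schedule-based one behind sunny-cp, are more elaborate):

import numpy as np

def select_solver(new_features, train_features, train_runtimes, k=10, timeout=1800.0):
    """Choose the solver that behaves best on the k training instances whose
    feature vectors are closest (Euclidean distance) to the new instance."""
    dists = np.linalg.norm(train_features - new_features, axis=1)
    neighbours = np.argsort(dists)[:k]
    solvers = train_runtimes[neighbours[0]].keys()
    def score(s):
        solved = sum(train_runtimes[i][s] < timeout for i in neighbours)
        time = sum(min(train_runtimes[i][s], timeout) for i in neighbours)
        return (-solved, time)            # maximise solved instances, then minimise time
    return min(solvers, key=score)

# Tiny synthetic example
train_features = np.array([[0.1, 2.0], [0.2, 1.8], [5.0, 0.3], [4.8, 0.4]])
train_runtimes = [
    {"solverA": 3.0, "solverB": 1800.0},
    {"solverA": 5.0, "solverB": 1800.0},
    {"solverA": 1800.0, "solverB": 2.0},
    {"solverA": 1800.0, "solverB": 4.0},
]
print(select_solver(np.array([0.15, 1.9]), train_features, train_runtimes, k=2))  # solverA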
Abstract:
Uncertainty in the determination of the stratigraphic profile of natural soils is one of the main problems in geotechnics, in particular for landslide characterization and modeling. This study deals with a new approach to geotechnical modeling which relies on the stochastic generation of different soil layer distributions following a Boolean logic; the method has thus been called BoSG (Boolean Stochastic Generation). In this way it is possible to randomize the presence of a specific material interdigitated in a uniform matrix. When building a geotechnical model it is common to discard some stratigraphic data in order to simplify the model, assuming that the significance of the modeling results is not affected. With the proposed technique it is possible to quantify the error associated with this simplification. Moreover, it can be used to identify the most significant zones, where further investigations and surveys would be most effective for building the geotechnical model of the slope. The commercial software FLAC was used for the 2D and 3D geotechnical models. The distribution of the materials was randomized through a purpose-written MATLAB program that automatically generates text files, each representing a specific soil configuration. In addition, a routine was designed to automate the FLAC computations over the different data files in order to maximize the sample size. The methodology is applied to a simplified slope in 2D, a simplified slope in 3D and an actual landslide, namely the Mortisa mudslide (Cortina d’Ampezzo, BL, Italy). However, it could be extended to numerous other cases, especially hydrogeological analyses and landslide stability assessments, in different geological and geomorphological contexts.
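A toy sketch of the randomized-generation step in Python (standing in for the MATLAB program described above; the grid size, material codes and output format are invented for illustration and are not FLAC's own input format):

import numpy as np

def generate_realizations(nx, nz, n_real, frac=0.3, seed=0):
    """Yield n_real Boolean maps of an nx-by-nz grid: True marks cells where
    the interdigitated material replaces the uniform matrix."""
    rng = np.random.default_rng(seed)
    for _ in range(n_real):
        yield rng.random((nz, nx)) < frac

def write_config(mask, path, matrix_code=1, lens_code=2):
    """Dump one realization as a text table of material codes, one grid row per line."""
    np.savetxt(path, np.where(mask, lens_code, matrix_code), fmt="%d")

for i, mask in enumerate(generate_realizations(nx=80, nz=20, n_real=5)):
    write_config(mask, f"soil_config_{i:03d}.txt")

Each generated file then corresponds to one soil configuration to be run through the geotechnical model, so that the spread of the computed results quantifies the error introduced by ignoring the uncertain stratigraphy.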
Abstract:
The asymptotic safety scenario makes it possible to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which allows renormalization conditions to be formulated that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as the primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations and is shown to be a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm for systematically constructing the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product, several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking into account the effect of a running ghost field renormalization on the gravitational coupling constants; a detailed numerical analysis reveals a further stabilization of the non-Gaussian fixed point found there. Finally, the proposed algorithm is applied to higher-derivative gravity including all curvature-squared interactions. This improves on existing computations by taking into account the independent running of the Euler topological term. Known perturbative results are reproduced in this case from the renormalization group equation, which, however, also identifies a unique non-Gaussian fixed point.
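For reference, the exact functional renormalization group equation referred to above is commonly written in the Wetterich form (conventions for the regulator and the supertrace vary):
\[
\partial_t \Gamma_k = \frac{1}{2}\,\mathrm{STr}\!\left[\left(\Gamma_k^{(2)} + \mathcal{R}_k\right)^{-1} \partial_t \mathcal{R}_k\right], \qquad t = \ln k,
\]
where $\Gamma_k^{(2)}$ is the second functional derivative of the effective average action and $\mathcal{R}_k$ the infrared regulator; the off-diagonal heat kernel techniques mentioned above are used to evaluate the supertrace when $\Gamma_k^{(2)}$ contains non-minimal differential operators.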
Abstract:
In this thesis the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). The language aims at meeting the main requirements coming from the RDF community. In particular it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously defined formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose query constraints; fully customizable and highly structured query results with a 4-dimensional geometry; and some constructs taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities that allow the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
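As a loose illustration of graph-oriented access with Boolean composition of constraints, here is a sketch in Python using the rdflib library; the namespace, resources and predicates are invented, and this is only a stand-in for the query language and engine defined in the thesis:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

HELM = Namespace("http://example.org/helm#")   # hypothetical namespace

g = Graph()
g.add((HELM.lemma1, RDF.type, HELM.Theorem))
g.add((HELM.lemma1, HELM.uses, HELM.nat_induction))
g.add((HELM.lemma1, HELM.title, Literal("Addition is commutative")))
g.add((HELM.def1, RDF.type, HELM.Definition))

# Boolean operators realised as predicates over the graph, composed with and/or/not
is_theorem = lambda s: (s, RDF.type, HELM.Theorem) in g
uses_induction = lambda s: (s, HELM.uses, HELM.nat_induction) in g

results = [s for s in set(g.subjects()) if is_theorem(s) and uses_induction(s)]
print(results)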
Abstract:
Automatic design has become a common approach to evolving complex networks, such as artificial neural networks (ANNs) and random Boolean networks (RBNs), and many evolutionary setups have been proposed to increase the efficiency of this process. However, networks evolved in this way have a few limitations that should not be overlooked. One of them is the black-box problem, i.e., the impossibility of analyzing the internal behaviour of complex networks in an efficient and meaningful way. The aim of this study is to develop a methodology that makes it possible to extract finite-state automaton (FSA) descriptions of robot behaviours from the dynamics of automatically designed complex controller networks. These FSAs, unlike the complex networks from which they are extracted, are both readable and editable, which makes the resulting designs much more valuable.
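A minimal sketch of the extraction idea in Python (a toy random Boolean network rather than an evolved robot controller, so the sizes and update rules are invented): simulate the network, record which global states are reachable and how each maps to its successor, and read the result off as a finite-state automaton.

import itertools, random

random.seed(1)
N, K = 4, 2                                   # nodes and inputs per node (toy sizes)
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{bits: random.randint(0, 1) for bits in itertools.product((0, 1), repeat=K)}
          for _ in range(N)]

def step(state):
    """Synchronous RBN update: each node reads its K inputs and applies its Boolean table."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(N))

# Extract the reachable transition structure as a (deterministic) finite-state automaton
fsa, frontier = {}, [(0,) * N]
while frontier:
    s = frontier.pop()
    if s not in fsa:
        fsa[s] = step(s)
        frontier.append(fsa[s])

for s, t in sorted(fsa.items()):
    print(s, "->", t)

For an actual controller the states would additionally be labelled with sensor inputs and actuator outputs, which is what turns the transition graph into a readable and editable description of the robot's behaviour.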
Abstract:
Almost ten years after the article “The Free Lunch Is Over”, in which the need to parallelize programs started to become a real and mainstream issue, a lot has happened: processor manufacturers are reaching the physical limits of most of their approaches to boosting CPU performance and are instead turning to hyperthreading and multicore architectures; applications increasingly need to support concurrency; and programming languages and systems are increasingly forced to deal well with concurrency. This thesis proposes an overview of a paradigm that aims to properly abstract the problem of propagating data changes: Reactive Programming (RP). This paradigm proposes an asynchronous, non-blocking approach to concurrency and computation, abstracting away from low-level concurrency mechanisms.
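A tiny sketch of the change-propagation idea in plain Python (synchronous and single-threaded for brevity, whereas the paradigm discussed is asynchronous and non-blocking; this illustrates the concept only and is not taken from any particular RP library):

class Observable:
    """A value that notifies its subscribers whenever it changes."""
    def __init__(self, value):
        self._value, self._subscribers = value, []
    def subscribe(self, callback):
        self._subscribers.append(callback)
        callback(self._value)                 # push the current value immediately
    def set(self, value):
        self._value = value
        for callback in self._subscribers:
            callback(value)

def map_obs(source, f):
    """Derived observable, recomputed automatically whenever `source` changes."""
    out = Observable(None)
    source.subscribe(lambda v: out.set(f(v)))
    return out

celsius = Observable(20)
fahrenheit = map_obs(celsius, lambda c: c * 9 / 5 + 32)
fahrenheit.subscribe(lambda v: print("display:", v))
celsius.set(25)                               # change propagates: prints "display: 77.0"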
Abstract:
The first part of this work investigates the available and promising technologies for biogas and bio-hydrogen production from the anaerobic digestion of different organic substrates. It aims to show all the peculiarities of this complicated process, such as continuity, number of stages, moisture, biomass preservation and feeding rate. The main outcome of this part is an awareness of the huge number of reactor configurations, each of which is suitable for a few types of substrate and circumstance. Among the most remarkable options are, first of all, wet continuous stirred tank reactors (CSTR), suited to the high waste production rate of urbanised and industrialised areas. Then there is the up-flow anaerobic sludge blanket reactor (UASB), aimed at preserving biomass in the case of highly heterogeneous feedstock, which can also be treated in a wise co-digestion scheme. On the other hand, smaller and scattered rural settings can be served either by wet low-rate digesters for homogeneous agricultural by-products (e.g. fixed-dome) or by cheap dry batch reactors for lignocellulosic waste and energy crops (e.g. hybrid batch-UASB). The biological and technical aspects raised in the first chapters are then supported by bibliographic research on the important and multifarious large-scale applications that the products of anaerobic digestion may have. After the upgrading techniques, particular care is devoted to their importance as biofuels, highlighting a further and more flexible solution consisting in reforming to syngas. Electricity generation and the associated heat conversion are then presented, stressing the high potential of fuel cells (FC) as electricity converters. Last but not least, both use as a vehicle fuel and injection into the gas grid are considered promising applications. Consideration of the still important issues of bio-hydrogen management (e.g. storage and delivery) leads to the conclusion that it would be far more challenging to implement than bio-methane, which can potentially “inherit” the assets of the similar fossil natural gas. Building on this knowledge, a chapter is devoted to the energetic and financial study of a hybrid power system supplied by biogas and made up of different pieces of equipment (a natural gas thermocatalytic unit, a molten carbonate fuel cell and a combined-cycle gas turbine). A parallel analysis of a bio-methane-fed CCGT system is carried out in order to compare the two solutions. Both studies show that the apparent inconvenience of the hybrid system actually emphasises the importance of extending the computations to a broader picture, i.e. the upstream processes for biofuel production and the environmental and social drawbacks of fossil-derived emissions. Thanks to this “boundary widening”, the hidden benefits of the hybrid system over the CCGT system become apparent.
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards for how computational research should be conducted and published. From Euclid’s reasoning and Galileo’s experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of “replication by other scientists” applied to computations is more commonly known as “reproducible research”. In this context the journal “EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems” had the exciting and original idea of allowing scientists to submit, together with the article, the computational materials (software, data, etc.) used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the chosen OS, to confirm or invalidate it, and above all to allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support: raw data sets and software are difficult to exploit without the logic that guided their use or production. This led us to conclude that, in addition to the data sets and the software, one further element must be provided: the workflow that links all of them together.
Abstract:
The application of measures derived from information theory provides a valuable tool for quantifying some of the properties of complex systems. The same measures can be used in robotics to support the analysis and synthesis of robot control systems. In this thesis, the correlation between several complexity measures and the ability of robots to successfully complete three different tasks was analysed. The results obtained suggest that these complexity measures are a promising tool in the field of robotics as well, but that their use can become difficult when applied to composite tasks.
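By way of example, one of the simplest information-theoretic measures that can be applied to a robot's sensorimotor data is the mutual information between two discretized signals; the following Python sketch is generic and does not reproduce the specific measures or tasks studied in the thesis:

import numpy as np

def mutual_information(x, y, bins=8):
    """Mutual information (in bits) between two 1-D signals, estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# Toy sensorimotor traces: the actuator signal is partly driven by the sensor signal
rng = np.random.default_rng(0)
sensor = rng.normal(size=2000)
actuator = 0.7 * sensor + 0.3 * rng.normal(size=2000)
print("I(sensor; actuator) =", round(mutual_information(sensor, actuator), 2), "bits")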
Abstract:
In dentistry the restoration of decayed teeth is challenging and makes great demands on both the dentist and the materials. Hence, fiber-reinforced posts have been introduced. The effects of different variables on the ultimate load of teeth restored using fiber-reinforced posts are controversial, possibly because the published results are mostly based on non-standardized in vitro tests, which give inhomogeneous results. This study combines the advantages of in vitro tests and finite element analysis (FEA) to clarify the effects of ferrule height, post length and the cementation technique used for the restoration. Sixty-four single-rooted premolars were decoronated (ferrule height 1 or 2 mm), endodontically treated and restored using fiber posts (length 2 or 7 mm), composite fillings and metal crowns (resin-bonded or conventionally cemented). After thermocycling and chewing simulation, the samples were loaded until fracture, recording first-damage events. Analysis of the recorded fracture loads by UNIANOVA showed that ferrule height and cementation technique were significant: increased ferrule height and resin bonding of the crown resulted in higher fracture loads, whereas post length had no significant effect. All conventionally cemented crowns with a 1-mm ferrule height failed during artificial ageing, in contrast to the resin-bonded crowns (75% survival rate). FEA confirmed these results and provided information about the stress and force distribution within the restoration. Based on the findings of the in vitro tests and the computations, we concluded that crowns, especially those with a small ferrule height, should be resin bonded. Finally, centrally positioned fiber-reinforced posts did not contribute to load transfer as long as the bond between the tooth and the composite core was intact.