4 results for Random finite set theory
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This dissertation has two main themes: first, the economic impact of tourism on cities and, second, the determinants of European long-run development, with a focus on the pre-Industrial era. The common thread is the attempt to develop economic geography models that incorporate spatial frictions and can be given empirical content. Chapter 1, written in conjunction with G. Alfredo Minerva, provides an empirical analysis of the relationship between tourism and economic activity across Italian municipalities, and lays down the basic elements of an urban theory of tourism in an a-spatial setting. Chapter 2 extends these ideas to a quantitative urban framework to study the economic impact and the welfare consequences of tourism in the city of Venice. The model is given empirical content thanks to a large collection of data at the Census-tract level for the Municipality of Venice, and is then used to perform counterfactual policy analysis. In Chapter 3, with Matteo Santacesaria, we consider a setting where agents are continuously distributed over a two-dimensional heterogeneous geography and are allowed to do business at a finite set of markets. We study the equilibrium partition of the economic space into a collection of mutually exclusive market areas, and provide conditions for this equilibrium partition to exist and to be unique. Finally, Chapter 4, "The rise of (urban) Europe: a Quantitative-Spatial analysis", co-authored with Matteo Cervellati and Alex Lehner, sets up a quantitative economic geography model to understand the roots of the Industrial Revolution, in an attempt to match the evolution of the European urban network, and the corresponding city-size distribution, over the period A.D. 1000-1850. It highlights the importance of agricultural trade across cities for the emergence of large manufacturing hubs.
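Returning to the Chapter 3 setting, a minimal sketch of what such an equilibrium market-area condition can look like, under assumed mill pricing and linear transport costs (the thesis' actual specification may differ), is:

```latex
% Markets m_1,...,m_K sit at locations x_1,...,x_K in the region \Omega,
% charge mill prices p_1,...,p_K, and shipping costs t per unit of travel
% distance d(x, x_k) (all assumed here for illustration). An agent at x
% trades at the cheapest market, so the market area of m_k is
\[
  A_k = \bigl\{\, x \in \Omega \;:\; p_k + t\, d(x, x_k)
        \le p_j + t\, d(x, x_j) \ \ \forall j \neq k \,\bigr\},
\]
% an additively weighted Voronoi cell; the equilibrium object studied in
% the chapter is the partition \{A_1, \dots, A_K\} of \Omega.
```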
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to "understand" and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our Ph.D. to designing and developing the Matita interactive theorem prover. The software was born in the Computer Science Department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving accessibility through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:
• to study the architecture of these tools, with the aim of understanding the source of their complexity;
• to exploit such knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.
Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts where we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences:
• our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the Four Colour Theorem;
• our collaboration on the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem.
The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using "black box" automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC.
In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user-driven.
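As a rough illustration of the inheritance-by-coercion technique mentioned above, here is a minimal sketch written in Lean 4 rather than Matita; the structure names and the single-inheritance shape are illustrative assumptions, not the thesis' actual hierarchy:

```lean
-- CIC-style type theories have no subtyping, so "a monoid is a magma"
-- is encoded by a projection plus a coercion inserted by the elaborator.
structure Magma where
  carrier : Type
  op      : carrier → carrier → carrier

structure Monoid where
  toMagma : Magma
  unit    : toMagma.carrier
  -- unit laws omitted in this sketch

-- The coercion lets any Monoid be used where a Magma is expected,
-- mimicking inheritance; sharing substructures between several parents
-- is the harder case the thesis' implementation addresses.
instance : Coe Monoid Magma := ⟨Monoid.toMagma⟩
```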
Abstract:
The aim of this dissertation is to improve the knowledge of knots and links in lens spaces. If the lens space L(p,q) is defined as a 3-ball with suitable boundary identifications, then a link in L(p,q) can be represented by a disk diagram, i.e. a regular projection of the link onto a disk. In this context, we obtain a complete finite set of Reidemeister-type moves establishing equivalence up to ambient isotopy. Moreover, the connections of this new diagram with both grid and band diagrams for links in lens spaces are shown. A Wirtinger-type presentation for the group of the link and a diagrammatic method giving the first homology group are described. A class of twisted Alexander polynomials for links in lens spaces is computed, showing its correlation with Reidemeister torsion. One of the most important geometric invariants of links in lens spaces is the lift to the 3-sphere of a link L in L(p,q), that is, the preimage of L under the universal covering of L(p,q). Starting from the disk diagram of the link, we obtain a diagram of the lift in the 3-sphere. Using this construction it is possible to find different knots and links in L(p,q) having equivalent lifts; hence we cannot distinguish different links in lens spaces from their lift alone. The two final chapters investigate whether several existing invariants for links in lens spaces are essential, i.e. whether they may assume different values on links with equivalent lifts. Namely, we consider the fundamental quandle, the group of the link, the twisted Alexander polynomials, the Kauffman Bracket Skein Module and an HOMFLY-PT-type invariant.
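For readers less familiar with the lift construction used above, the standard description of L(p,q) as a quotient of the 3-sphere (equivalent to the 3-ball model adopted in the thesis) makes the definition explicit:

```latex
% L(p,q) is the quotient of
% S^3 = \{(z_1,z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\}
% by the free action of \mathbb{Z}/p generated by
\[
  (z_1, z_2) \;\longmapsto\; \bigl(e^{2\pi i/p}\, z_1,\; e^{2\pi i q/p}\, z_2\bigr),
\]
% so the quotient map g \colon S^3 \to L(p,q) is the universal covering, and
% the lift of a link L \subset L(p,q) is
\[
  \widetilde{L} \;=\; g^{-1}(L) \;\subset\; S^3 .
\]
% The thesis reads a diagram of \widetilde{L} off the disk diagram of L and
% shows that non-equivalent links in L(p,q) may share the same lift.
```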
Abstract:
The Three-Dimensional Single-Bin-Size Bin Packing Problem is one of the most studied problems in the Cutting & Packing category. From a strictly mathematical point of view, it consists of packing a finite set of strongly heterogeneous "small" boxes, called items, into a finite set of identical "large" rectangular boxes, called bins, minimizing the unused volume and requiring that the items are packed without overlapping. The great interest is mainly due to the number of real-world applications in which it arises, such as pallet and container loading, cutting objects out of a piece of material, and packaging design. Depending on these real-world applications, additional objective functions and practical constraints may be needed. After a brief discussion of the real-world applications of the problem and an exhaustive literature review, the design of a two-stage algorithm to solve the aforementioned problem is presented. The algorithm must be able to provide the spatial coordinates of the vertices of the placed boxes as well as the optimal box input sequence, while guaranteeing geometric, stability and fragility constraints and a reduced computational time. Due to the NP-hard complexity of this type of combinatorial problem, a fusion of metaheuristic and machine learning techniques is adopted. In particular, a hybrid genetic algorithm coupled with a feedforward neural network is used. In the first stage, a rich dataset is created starting from a set of real input instances provided by an industrial company, and the feedforward neural network is trained on it. After its training, given a new input instance, the hybrid genetic algorithm runs using the neural network output as its input parameter vector, providing the optimal solution as output. The effectiveness of the proposed approach is confirmed by several experimental tests.
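A hypothetical, much-simplified sketch of the two-stage idea (a feedforward network predicting genetic-algorithm parameters from instance features, then a GA searching over box input sequences) is given below; the feature set, the volume-only fitness proxy and all names are illustrative assumptions, not the thesis' implementation:

```python
# Minimal two-stage sketch: a tiny feedforward net maps instance features to
# GA parameters; a genetic algorithm then evolves box input sequences.
import random
import numpy as np

def predict_ga_params(features, W1, b1, W2, b2):
    """Forward pass of a small feedforward net: features -> (pop_size, mutation_rate)."""
    h = np.tanh(features @ W1 + b1)
    out = h @ W2 + b2
    pop_size = int(20 + 80 * (1 / (1 + np.exp(-out[0]))))        # range 20..100
    mut_rate = float(0.01 + 0.3 * (1 / (1 + np.exp(-out[1]))))   # range 0.01..0.31
    return pop_size, mut_rate

def fitness(sequence, volumes, bin_volume):
    """Greedy volume-only proxy: count bins opened by first-fit on the sequence."""
    bins = []  # remaining free volume of each open bin
    for v in (volumes[i] for i in sequence):
        for j, free in enumerate(bins):
            if free >= v:
                bins[j] -= v
                break
        else:
            bins.append(bin_volume - v)
    return len(bins)

def genetic_algorithm(volumes, bin_volume, pop_size, mut_rate, generations=50):
    """Evolve permutations of item indices; fewer bins is better."""
    n = len(volumes)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, volumes, bin_volume))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + [g for g in b if g not in a[:cut]]  # order crossover
            if random.random() < mut_rate:                        # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: fitness(s, volumes, bin_volume))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    volumes = rng.uniform(5, 40, size=30)                       # item volumes
    features = np.array([len(volumes), volumes.mean(), volumes.std()])
    # Untrained placeholder weights; in the thesis the network is trained
    # on a dataset built from real industrial instances.
    W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
    pop_size, mut_rate = predict_ga_params(features, W1, b1, W2, b2)
    best = genetic_algorithm(volumes.tolist(), bin_volume=100.0,
                             pop_size=pop_size, mut_rate=mut_rate)
    print("bins used (volume proxy):", fitness(best, volumes.tolist(), 100.0))
```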