12 results for Caristi theorem

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 20.00%

Abstract:

Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is also the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store the textual commands sent to the system. In the LCF tradition, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique behind the popularity of Proof General, the generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of whom this thesis' author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers, to which this thesis' author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.

Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset the way mathematicians write them on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation.

Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to change. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not allow positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.

Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too.

Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof.

Another innovative aspect of Matita, only marginally touched upon by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
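To make the tinycals trade-off concrete, here is a minimal sketch, in Python rather than Matita's OCaml, of two classic LCF tacticals; all names (Tactic, then_, orelse_) are illustrative, not Matita's actual API. A tactic maps a goal to the list of subgoals it leaves open; a classic script-management UI can only evaluate a composite such as then_(t1, t2) atomically, while a tinycals-style evaluator can stop the execution point between t1 and t2.

    from typing import Callable, List

    Goal = str
    Tactic = Callable[[Goal], List[Goal]]

    def then_(t1: Tactic, t2: Tactic) -> Tactic:
        """Sequential composition: run t1, then run t2 on every subgoal it opens."""
        def tac(g: Goal) -> List[Goal]:
            return [g2 for g1 in t1(g) for g2 in t2(g1)]
        return tac

    def orelse_(t1: Tactic, t2: Tactic) -> Tactic:
        """Backtracking choice: try t1; if it fails, run t2 instead."""
        def tac(g: Goal) -> List[Goal]:
            try:
                return t1(g)
            except Exception:
                return t2(g)
        return tac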

Relevance: 20.00%

Abstract:

Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans to communicate and reason about mathematics and the lower-level language that a machine is able to "understand" and process. The user perceives this gap in terms of missing features or inefficiencies; the developer tries to accommodate user requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our Ph.D. to designing and developing the Matita interactive theorem prover. The software was born in the Computer Science Department of the University of Bologna as the result of composing all the technologies developed by the HELM team (to which we belong) for the MoWGLI project, which aimed at giving web access to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:

- to study the architecture of these tools, with the aim of understanding the source of their complexity;
- to exploit that knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.

Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts, describing our experience respectively as a user and as a developer of interactive provers. In particular, the first part is based on two formalisation experiences:

- our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the Four Colour Theorem;
- our collaboration on the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem.

The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using "black box" automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted by ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC.

In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user driven.
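As a minimal illustration of the small-step execution semantics mentioned above, here is a Python sketch; the names and the string-based state are illustrative stand-ins, not Matita's actual engine. The execution point advances one atomic command at a time, and each step snapshots the prover state so that retraction is a simple pop.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ScriptEngine:
        commands: List[str]                 # atomic proof commands, already parsed
        point: int = 0                      # index of the next command to execute
        states: List[str] = field(default_factory=lambda: ["<initial>"])

        def step(self) -> None:
            """Advance the execution point by one atomic command (small step)."""
            cmd = self.commands[self.point]
            self.states.append(self.states[-1] + "; " + cmd)  # stand-in for the prover state
            self.point += 1

        def retract(self) -> None:
            """Move the execution point back one command, restoring the old state."""
            self.states.pop()
            self.point -= 1

A big-step engine would instead execute, or undo, a whole structured command at once, which is exactly the interaction problem described above.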

Relevance: 10.00%

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of interest, yet their use in clustering had not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, because their performance is compared. Then the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed to evaluate the performance of the K-means and bottom-up hierarchical clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying several conditions, e.g. the kind of margins (distinct, overlapping or nested) and the value of the dependence parameter, and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, with their output, are given. The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and compared with model-based clustering using different measures of performance, such as the percentage of runs in which the number of clusters is correctly identified and the percentage of non-rejections of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Several notable characteristics of CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
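As a hedged illustration of the copula machinery behind a CoClust-style criterion, here is a minimal Python sketch assuming a Gaussian copula with IFM-flavoured estimation: margins are handled nonparametrically through ranks, and a candidate cluster is scored by its maximized copula log-likelihood. Function names are illustrative; this is not the published R implementation.

    import numpy as np
    from scipy.stats import norm

    def pseudo_observations(x: np.ndarray) -> np.ndarray:
        """Rank-transform each column of x (n samples, d dimensions) into (0, 1)."""
        n = x.shape[0]
        ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
        return ranks / (n + 1.0)

    def gaussian_copula_loglik(u: np.ndarray) -> float:
        """Maximized Gaussian-copula log-likelihood of pseudo-observations u."""
        z = norm.ppf(u)                     # map to normal scores
        r = np.corrcoef(z, rowvar=False)    # estimate of the copula correlation matrix
        r_inv = np.linalg.inv(r)
        _, logdet = np.linalg.slogdet(r)
        quad = np.einsum("ij,jk,ik->i", z, r_inv - np.eye(z.shape[1]), z)
        return -0.5 * (u.shape[0] * logdet + quad.sum())

A CoClust-style iteration would then grow clusters by repeatedly choosing the allocation of observations that maximizes this criterion.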

Relevance: 10.00%

Abstract:

The recent defaults of important Italian agri-business companies provide a challenging issue to be investigated through an appropriate scientific approach. The events involving CIRIO, FERRUZZI and PARMALAT raise an important research question: what are the determinants of performance for companies in the Italian agri-food sector? My aim is not to investigate all the factors that are relevant in explaining performance: performance depends on a wide set of political, social and economic variables that are strongly interconnected and often very difficult to express by formal or mathematical tools. Rather, in my thesis I mainly focus on those aspects strictly related to the governance and ownership structure of agri-food companies, a strand of research that has been quite neglected by previous scholars. The conceptual framework I start from to justify the existence of a relationship between the ownership structure of a company, governance and performance is the model set up by Airoldi and Zattoni (2005). In particular, the authors investigate the existence of complex relationships, arising within the company and between the company and the environment, that can bring about different strategies and performances. They do not try to find the "best" ownership structure; rather, they outline which variables are connected and how they could vary endogenously within the whole economic system. Although the Airoldi and Zattoni model highlights the existence of a relationship between ownership structure and performance that is crucial for the setup of this thesis, the authors do not apply quantitative analyses to verify the magnitude, sign and causal direction of the impact. To fill this gap, we start from the literature investigating the determinants of performance. Even in this strand of research, studies analysing the relationship between different forms of ownership and performance are still lacking. In this thesis, after a brief description of the Italian agri-food sector and an introduction with a short explanation of the definitions of performance and ownership structure, I implement a model in which the performance level (measured here as Return on Investments and Return on Sales) is related to variables previously identified as important by the literature: financial variables (cash and leverage indices), firm location (Northern, Central or Southern Italy), power concentration (lower than 25%, between 25% and 50%, and between 50% and 100% of ownership control) and the specific agri-food sector (agriculture, food and beverage). Moreover, we add a categorical variable representing different forms of ownership structure (public limited company, limited liability company, cooperative), which is the core of our study. All these variables are first examined through a preliminary descriptive analysis. As in many previous contributions, we apply a panel least squares analysis to 199 Italian firms over the period 1998-2007, with data taken from the Bureau van Dijk dataset. We estimate two models in which the dependent variables are respectively the Return on Investments (ROI) and the Return on Sales (ROS) indicators. Not surprisingly, we find that companies located in Northern Italy, the richest area of the country, perform better than those located in Central and Southern Italy. In contrast with the Modigliani-Miller theorem, financial variables can be significant, and the specific sector within the agri-food market can play a relevant role. As for power concentration, we find that both strong ownership control (higher than 50%) and fragmented concentration (lower than 25%) are associated with better performance. This result suggests that "hybrid" forms of concentration may impair the decision process. As for our key variables representing ownership structure, we find that public limited companies and limited liability companies perform better than cooperatives; this is easily explained by the fact that the law establishes that cooperatives are less profit-oriented. Cooperatives aside, public limited companies perform better than limited liability companies and show a more stable path over time. Results are quite consistent whether we consider ROI or ROS as the dependent variable. These results should not lead us to claim that the public limited company is the "best" among all possible governance structures. First, every governance solution should be evaluated in its specific situation; second, more robustness analyses are needed to confirm our results. At this stage we believe these findings, the model set up and our approach represent original contributions that could stimulate fruitful future studies investigating the intriguing effect of ownership structure on performance levels.
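A minimal sketch of the kind of panel least-squares specification described above, in Python with statsmodels; the column names, the CSV file and the clustered-error choice are our own illustrative assumptions, not the thesis' exact estimation setup.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical firm-year panel: one row per firm and year, 1998-2007.
    df = pd.read_csv("agrifood_panel.csv")

    model = smf.ols(
        "roi ~ cash_index + leverage + C(location) + C(ownership_form)"
        " + C(power_concentration) + C(subsector) + C(year)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
    print(model.summary())   # repeat with 'ros' as the dependent variable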

Relevance: 10.00%

Abstract:

Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and by proofs related to the metatheory of programming languages. This last field, an obvious application of interactive theorem proving, nonetheless poses a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed and to difficulties in managing notions typical of programming languages, like variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, part I covers:

- the results of our effort to provide a better framework for the development of tactics for Matita, making their implementation and debugging easier and resulting in much clearer code;
- a discussion of the implementation of two tactics providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advance over the state of the art.

In the second part of the thesis we focus on aspects related to the formalization of programming languages. We describe two works of ours:

- a discussion of basic issues we encountered in our formalizations of part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita;
- a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; for the encoding of binding, this work adopts an extension of Masahiko Sato's canonical locally named representation that we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
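As a hedged illustration of the variable-binding problem and of the idea behind locally named representations, here is a Python sketch; the thesis works in Matita's CIC, not Python, and the datatype and names are illustrative only. Bound and free variables are drawn from two disjoint name spaces, so substituting a term with no free local names for a global name can never capture a binder.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Local:        # bound variable: lives in its own name space
        name: str

    @dataclass(frozen=True)
    class Global:       # free variable
        name: str

    @dataclass(frozen=True)
    class App:
        fn: "Term"
        arg: "Term"

    @dataclass(frozen=True)
    class Lam:          # binds a Local name in its body
        binder: str
        body: "Term"

    Term = Local | Global | App | Lam

    def subst(t: Term, x: str, s: Term) -> Term:
        """Substitute s for the Global variable x in t. Since binders only
        bind Local names and s has no free Local names, no capture check
        is needed."""
        match t:
            case Global(name) if name == x:
                return s
            case App(fn, arg):
                return App(subst(fn, x, s), subst(arg, x, s))
            case Lam(binder, body):
                return Lam(binder, subst(body, x, s))
            case _:
                return t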

Relevance: 10.00%

Abstract:

Non-equilibrium statistical mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, with systems in a steady non-equilibrium state, and with more general situations. These systems are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict with networks entirely analogous to electrical networks. We define suitable observables and derive their linear-regime relationships; we discuss a duality between external and internal observables that reverses the roles of the system and of the environment; and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects regarding linear-response determinants, which are related to spanning-tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains, give a physical interpretation of observables in terms of locally detailed balanced rates, prove many variants of the fluctuation theorem, and show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
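For reference, the well-known Schnakenberg expression mentioned above can be sketched in its standard textbook form (conventions vary across the literature): for a continuous-time Markov chain with transition rates w_{ij} from state i to state j and occupation probabilities p_i, the entropy production rate is

    \sigma \;=\; \frac{1}{2} \sum_{i \neq j} \left( p_i\, w_{ij} - p_j\, w_{ji} \right)
    \ln \frac{p_i\, w_{ij}}{p_j\, w_{ji}} \;\ge\; 0,

which vanishes exactly when detailed balance p_i w_{ij} = p_j w_{ji} holds, i.e. at equilibrium.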

Relevance: 10.00%

Abstract:

The research activity reported in this thesis has focused on the development and implementation of techniques for the nonlinear/electromagnetic co-simulation and co-design of unconventional wireless systems. This work presents a rigorous method to account for the interactions between two systems placed in both near-field and far-field conditions. In essence, the effects of the transmitting system are represented by a Norton equivalent generator placed in parallel with the antenna of the receiving system, computed by means of the reciprocity theorem and the equivalence theorem. The correctness of the method has been verified through simulations and measurements, which agree with each other. The same theory, extended with the introduction of scattering effects, has been used to evaluate an analogous configuration in which the transmitting element coincides with the receiving one (DIE), enclosed within a metallic structure (package). The results have been compared with those obtainable by FEM and FDTD/FIT techniques, which require simulation times longer by an order of magnitude. Thanks to the nonlinear/EM co-simulation methods described above, a system for the localization and identification of tagged objects in indoor environments has been designed and verified. This was achieved by equipping the reader system, called RID (Remotely Identify and Detect), with angular scanning functions and the mono-pulse RADAR technique. The experimental system, built with low-cost devices, operates at 2.5 GHz and has dimensions comparable to a normal PDA. The ability of the RID to localize static and moving objects in indoor scenarios has been experimentally verified.
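A hedged sketch of the textbook reciprocity argument behind such a Norton equivalent (normalizations and sign conventions vary, and this is not the thesis' exact derivation): if driving the receiving antenna with a current I_t radiates the field E_t, the Lorentz reciprocity theorem gives the open-circuit voltage induced at its terminals by the transmitting system's current distribution J_src, and the Norton current source follows by dividing by the antenna impedance Z_A:

    V_{oc} \;=\; -\,\frac{1}{I_t} \iiint_{V} \mathbf{E}_{t} \cdot \mathbf{J}_{src}\; \mathrm{d}V,
    \qquad
    I_{N} \;=\; \frac{V_{oc}}{Z_{A}} .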

Relevance: 10.00%

Abstract:

The present PhD thesis builds on the design skills I have been improving since my master's thesis research. A brief description of the chapters' content follows.

Chapter 1: the simulation of a complete front-end is a very complex problem and, in particular, is the basis upon which the prediction of the overall performance of the system becomes possible. By means of a commercial EM simulation tool and a rigorous nonlinear/EM circuit co-simulation based on the Reciprocity Theorem, this prediction can be achieved and exploited for the characterization of wireless links. This represents the theoretical basis of the entire thesis and is supported by two RF applications.

Chapter 2: an extensive discussion of Magneto-Dielectric (MD) materials is presented, together with their peculiar characteristics as substrates for antenna miniaturization. A designed and tested device for RF on-body applications is described in detail. Finally, future research is discussed.

Chapter 3: this chapter deals with the exploitation of renewable energy sources for low-energy-consumption devices. The problem of so-called energy harvesting is tackled, and a first attempt to exploit THz solar energy in an innovative way is presented and discussed. Future research is proposed as well.

Chapter 4: graphene is a very promising material for devices operating in the RF and THz frequency ranges for a wide range of engineering applications, including those marked as the main research goal of the present thesis. This chapter presents the results obtained during my research period at the National Institute for Research and Development in Microtechnologies (IMT) in Bucharest, Romania. It concerns the design and manufacturing of antennas and diodes in graphene-based technology for detection/rectification purposes.
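The rationale for the MD substrates of Chapter 2 can be sketched with the standard textbook relations (not thesis-specific results): on a substrate with relative permittivity \varepsilon_r and permeability \mu_r, the guided wavelength, and hence the resonant antenna size, shrinks by the miniaturization factor, while the wave impedance of an MD material with \mu_r \approx \varepsilon_r stays close to that of free space, easing matching:

    \frac{\lambda_g}{\lambda_0} \;=\; \frac{1}{\sqrt{\varepsilon_r\, \mu_r}},
    \qquad
    \eta \;=\; \eta_0 \sqrt{\frac{\mu_r}{\varepsilon_r}} .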

Relevance: 10.00%

Abstract:

This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast, then explain the consistency and comparison tests used in CSEP experiments to evaluate model performance. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of probabilistic seismic hazard analysis (PSHA): the declustering of seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the need to decluster seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of the exceedances usually considered fundamental for transforming exceedance rates into exceedance probabilities in the PSHA framework. We then present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods commonly used to account for epistemic uncertainty in PSHA. The most widely used is the logic tree, which underlies the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree, and then show that this structure is not adequate to describe epistemic uncertainty. We therefore propose a new probabilistic framework, based on ensemble modelling, that properly accounts for epistemic uncertainties in PSHA.
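A minimal sketch of the two ingredients named above: combining model forecasts into an ensemble and scoring them with a Poisson joint log-likelihood, in the spirit of CSEP-style tests. The weights, rates and bin structure are illustrative assumptions, not the experiment's actual data.

    import numpy as np
    from scipy.stats import poisson

    def poisson_loglik(rates: np.ndarray, observed: np.ndarray) -> float:
        """Joint log-likelihood of observed counts per space-magnitude bin."""
        return poisson.logpmf(observed, rates).sum()

    def ensemble(forecasts: list, weights: np.ndarray) -> np.ndarray:
        """Weighted average of the per-bin expected rates of several models."""
        return np.tensordot(weights, np.stack(forecasts), axes=1)

    # Toy example with two models over three bins and one observation period.
    rates_a = np.array([0.2, 1.5, 0.1])
    rates_b = np.array([0.4, 0.9, 0.3])
    observed = np.array([0, 2, 0])
    combined = ensemble([rates_a, rates_b], np.array([0.5, 0.5]))
    for name, r in [("A", rates_a), ("B", rates_b), ("ensemble", combined)]:
        print(name, poisson_loglik(r, observed))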

Relevance: 10.00%

Abstract:

In this thesis we consider a class of second-order partial differential operators with non-negative characteristic form and smooth coefficients. The main assumptions on the relevant operators are hypoellipticity and the existence of a well-behaved global fundamental solution. We first carry out a deep analysis of the L-Green function for arbitrary open sets and of its applications to Riesz-type representation theorems for L-subharmonic and L-superharmonic functions. Then we prove an Inverse Mean Value Theorem characterizing the superlevel sets of the fundamental solution by means of L-harmonic functions. Furthermore, we establish a Lebesgue-type result showing the role of the mean-integral operator in solving the homogeneous Dirichlet problem related to L in the Perron-Wiener sense. Finally, we compare Perron-Wiener and weak variational solutions of the homogeneous Dirichlet problem, under specific hypotheses on the boundary datum.
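To fix ideas, in the classical model case L = \Delta on \mathbb{R}^N, N \ge 3 (a standard illustration, not part of the thesis' general setting), the fundamental solution, its superlevel sets and the mean value property being generalized read:

    \Gamma(x,y) \;=\; \frac{c_N}{|x-y|^{N-2}}, \qquad
    \{\, y : \Gamma(x,y) > 1/r \,\} \;=\; B\!\left(x, (c_N r)^{\frac{1}{N-2}}\right),
    \qquad
    u(x) \;=\; \frac{1}{|B_\rho(x)|} \int_{B_\rho(x)} u(y)\, \mathrm{d}y

for every u harmonic in a neighbourhood of the closed ball B_\rho(x).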

Relevance: 10.00%

Abstract:

A method for the automatic scaling of oblique ionograms has been introduced. The method also provides a rejection procedure for ionograms considered to lack sufficient information, and achieves a very good success rate. Observing the Kp index of each autoscaled ionogram, it can be noticed that the behavior of the autoscaling program does not depend on geomagnetic conditions. The comparison between the values of the maximum usable frequency (MUF) provided by the presented software and those obtained by an experienced operator indicates that the procedure developed for detecting the nose of oblique ionogram traces is sufficiently efficient, and becomes much more so as the quality of the ionograms improves. These results demonstrate that the program allows the real-time evaluation of MUF values associated with a particular radio link through an oblique radio sounding. The automatic recognition of a part of the trace makes it possible to determine, for certain frequencies, the time taken by the radio wave to travel the path between transmitter and receiver. The reconstruction of the ionogram traces suggests the possibility of estimating the electron density between the transmitter and the receiver from an oblique ionogram. The results shown have been obtained with a ray-tracing procedure based on the integration of the eikonal equation and on an analytical ionospheric model with free parameters. This indicates the possibility of applying an adaptive model and a ray-tracing algorithm to estimate the electron density in the ionosphere between the transmitter and the receiver. An additional study has been conducted on a data set of high-quality ionospheric soundings, and another algorithm has been designed for the conversion of an oblique ionogram into a vertical one using Martyn's theorem. This allows a further analysis of oblique soundings through the use of the INGV Autoscala program for the automatic scaling of vertical ionograms.
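A hedged sketch of the flat-Earth form of the conversion that the last algorithm relies on (the actual procedure must also handle Earth-curvature corrections; function and variable names are illustrative): by Martyn's equivalent-path theorem the oblique and equivalent vertical echoes share the same virtual height, and the secant law relates the two frequencies.

    import math

    def oblique_to_vertical(f_ob_mhz: float, group_path_km: float,
                            ground_range_km: float) -> tuple:
        """Map an oblique echo (frequency f_ob, group path P') observed on a
        link of ground range D to the equivalent vertical-incidence point."""
        half_p = group_path_km / 2.0
        half_d = ground_range_km / 2.0
        h_virtual = math.sqrt(half_p**2 - half_d**2)  # Martyn: same virtual height
        cos_theta = h_virtual / half_p                # zenith angle at the midpoint
        f_vertical = f_ob_mhz * cos_theta             # secant law: f_ob = f_v * sec(theta)
        return f_vertical, h_virtual

    # Example: a 14 MHz echo with a 1200 km group path over a 1000 km link.
    print(oblique_to_vertical(14.0, 1200.0, 1000.0))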

Relevance: 10.00%

Abstract:

The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks: the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of this thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that accords a central role to entanglement, namely the Everett interpretation. It is argued that, while cluster-state quantum computation does not expose an Everettian failure in accounting for the computational processes, it does threaten that interpretation with being non-explanatory. The analysis presented here should be integrated into a more general work covering further frameworks of quantum computation, e.g. topological quantum computation. What this work reveals, however, is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. The existence of alternative, equivalent quantum computational models then suggests that the ultimate question should be moved from the speed-up to a sort of "representation theorem" for quantum computation, understood as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled "quantum computation".