8 results for lambda-cyhalothrin

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 10.00%

Abstract:

Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step, beta-reduction, involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory in the case of calculi with basic (higher-order) primitives, this indirect approach falls short in the case of higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We therefore observe that foundational studies for higher-order process calculi must be carried out directly on them and exploit their peculiarities. This dissertation contributes to such foundational studies for higher-order process calculi. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of features universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
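
To make the analogy with the lambda-calculus concrete, a schematic comparison (standard notation assumed for illustration, not quoted from the dissertation): beta-reduction instantiates a term variable, and higher-order communication in a process-passing calculus instantiates a process variable in the same way,

$$(\lambda x.\, M)\, N \;\rightarrow\; M\{N/x\} \qquad\qquad \overline{a}\langle P \rangle.\, Q \;\mid\; a(x).\, R \;\rightarrow\; Q \mid R\{P/x\}$$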

Relevance: 10.00%

Abstract:

In this work we investigate the influence of dark energy on structure formation within five different cosmological models, namely a concordance $\Lambda$CDM model, two models with dynamical dark energy viewed as a quintessence scalar field (using an RP and a SUGRA potential form), and two extended quintessence models (EQp and EQn) where the quintessence scalar field interacts non-minimally with gravity (scalar-tensor theories). For all models we adopt the normalization of the matter power spectrum $\sigma_{8}$ that matches the CMB data. For each model, we perform hydrodynamical simulations in a cosmological box of $(300 \ {\rm{Mpc}} \ h^{-1})^{3}$ including baryons and allowing for cooling and star formation. We find that, in models with dynamical dark energy, the evolving cosmological background leads to different star formation rates and different formation histories of galaxy clusters, but the baryon physics is not affected in a relevant way. We investigate several proxies for the cluster mass function based on X-ray observables such as temperature, luminosity, $M_{gas}$, and $Y_{X}$. We confirm that the overall baryon fraction is almost independent of the dark energy model, to within a few percentage points. The same is true for the gas fraction. This evidence reinforces the use of galaxy clusters as cosmological probes of the matter and energy content of the Universe. We also study the $c-M$ relation in the different cosmological scenarios, using both dark-matter-only and hydrodynamical simulations. We find that the normalization of the $c-M$ relation is directly linked to $\sigma_{8}$ and the evolution of the density perturbations for $\Lambda$CDM, RP and SUGRA, while for EQp and EQn it also depends on the evolution of the linear density contrast. These differences in the $c-M$ relation provide another way to use galaxy clusters to constrain the underlying cosmology.
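
As a reminder of the $c-M$ relation referred to above (generic definitions, with parameter names chosen here purely for illustration): the concentration of a halo fitted with an NFW profile is the ratio of its overdensity radius to the profile scale radius, and the $c-M$ relation is commonly modelled as a power law whose amplitude tracks $\sigma_{8}$,

$$c_{200} \equiv \frac{R_{200}}{r_{s}}, \qquad c(M,z) = A \left(\frac{M}{M_{\rm pivot}}\right)^{B} (1+z)^{C},$$

where $A$, $B$, $C$ and $M_{\rm pivot}$ are fit parameters whose thesis-specific values are not reproduced here.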

Relevance: 10.00%

Abstract:

In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, ideally suited to estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large-scale structure (LSS) maps, as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic analyses present in the literature. We describe the implementation of the QML method in the {\it BolISW} code and demonstrate its accuracy on simulated maps through a Monte Carlo analysis. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. By contrast, we neglect the noise in temperature, since WMAP is already cosmic-variance dominated on large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: the first one with a constant bias $b$ and the second one with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model through different likelihood prescriptions. When using just the cross-correlation between WMAP7 and NVSS maps with 1.8° resolution, we show that $\Omega_\Lambda$ is about 70\% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ confidence level.
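
For reference, the QML estimator described above is usually written in the standard Tegmark-style quadratic form (the notation below is assumed here, not quoted from the thesis): given a data vector $\mathbf{x}$ whose total covariance $\mathbf{C}$ depends on the spectra $C_{\ell}$,

$$\hat{C}_{\ell} = \sum_{\ell'} (F^{-1})_{\ell\ell'} \left[\, \mathbf{x}^{T} \mathbf{E}^{\ell'} \mathbf{x} - {\rm tr}\,(\mathbf{N}\, \mathbf{E}^{\ell'}) \,\right], \qquad \mathbf{E}^{\ell} = \frac{1}{2}\, \mathbf{C}^{-1} \frac{\partial \mathbf{C}}{\partial C_{\ell}}\, \mathbf{C}^{-1}, \qquad F_{\ell\ell'} = \frac{1}{2}\, {\rm tr} \left[\, \mathbf{C}^{-1} \frac{\partial \mathbf{C}}{\partial C_{\ell}}\, \mathbf{C}^{-1} \frac{\partial \mathbf{C}}{\partial C_{\ell'}} \,\right],$$

where $\mathbf{N}$ is the noise covariance and $F$ the Fisher matrix; unbiasedness and minimum variance follow from this construction.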

Relevance: 10.00%

Abstract:

The thesis applies ICC techniques to probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time): we give a syntactical characterization of PP and a static complexity analyser able to recognize whether an imperative program computes in probabilistic polynomial time. The thesis is divided into two parts. The first part addresses the problem by designing a prototype functional language (a probabilistic variation of the lambda-calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in probabilistic polynomial time. This thesis can be seen as one of the first steps for Implicit Computational Complexity over probabilistic classes. There are still hard open problems to investigate, and many theoretical aspects strongly connected with these topics; I expect that in the future there will be wide attention to ICC and probabilistic classes.
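
To give a flavour of what a probabilistic variation of the lambda-calculus can look like, a minimal Haskell sketch (this toy calculus and evaluator are our own illustration, not the language defined in the thesis; in particular, its bounded-recursion construct is omitted):

    -- A toy probabilistic lambda-calculus with a fair binary choice.
    data Term
      = Var String
      | Lam String Term
      | App Term Term
      | Flip Term Term            -- probabilistic choice, each branch with probability 1/2
      deriving Show

    type Dist a = [(a, Rational)] -- a (sub-)distribution over outcomes

    scale :: Rational -> Dist a -> Dist a
    scale c = map (\(v, p) -> (v, c * p))

    -- Call-by-name evaluation to weak head normal form, collecting the
    -- probability of each possible result.
    eval :: Term -> Dist Term
    eval (Var x)     = [(Var x, 1)]
    eval t@(Lam _ _) = [(t, 1)]
    eval (Flip l r)  = scale (1/2) (eval l) ++ scale (1/2) (eval r)
    eval (App f a)   =
      [ (v, p * q)
      | (Lam x body, p) <- eval f              -- only beta-redexes fire
      , (v, q)          <- eval (subst x a body) ]

    -- Naive substitution; adequate for closed examples without shadowing.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)    = if x == y then s else Var y
    subst x s (Lam y b)  = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a)  = App (subst x s f) (subst x s a)
    subst x s (Flip l r) = Flip (subst x s l) (subst x s r)

    -- (\x. x) applied to a coin flip: each outcome with probability 1/2.
    example :: Dist Term
    example = eval (App (Lam "x" (Var "x")) (Flip (Var "heads") (Var "tails")))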

Relevance: 10.00%

Abstract:

The Curry-Howard isomorphism is the idea that proofs in natural deduction can be put in correspondence with lambda-terms in such a way that this correspondence is preserved by normalization. The concept can be extended from Intuitionistic Logic to other systems, such as Linear Logic. One of the nice consequences of this isomorphism is that we can reason about functional programs with formal tools which are typical of proof systems: such analyses can also cover quantitative properties of programs, such as the number of steps a program takes to terminate. Another is the possibility of describing the execution of these programs in terms of abstract machines. In 1990 Griffin proved that the correspondence can be extended to Classical Logic and control operators; that is, Classical Logic adds the possibility of manipulating continuations. In this thesis we examine how the notions described above carry over to this larger context.
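
A tiny illustration of Griffin's observation in Haskell terms (our own example, using the continuation monad from the mtl package; it is not taken from the thesis): under Curry-Howard, a control operator such as call/cc inhabits Peirce's law $((A \to B) \to A) \to A$, the classical principle that intuitionistic logic lacks.

    import Control.Monad.Cont (Cont, callCC, runCont)

    -- Peirce's law, inhabited by call/cc when "A, classically" is read as Cont r a.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC

    -- The captured continuation behaves as an early exit (a jump).
    example :: Int
    example = runCont (callCC (\exit -> do
                _ <- exit 42     -- invoke the continuation: abort with 42
                return 0)) id
    -- example == 42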

Relevance: 10.00%

Abstract:

The Slot and van Emde Boas Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P, PSPACE become robust, i.e. machine independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model of functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient to obtain a space-reasonable cost model. By a fine complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
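
For context, a compact Haskell sketch of the textbook Krivine abstract machine for call-by-name evaluation, the machine that the variant discussed above starts from (this is our own illustration and does not include the space-optimized, pointer-free refinements studied in the thesis):

    -- Terms in de Bruijn notation.
    data Term = Var Int | Lam Term | App Term Term
      deriving Show

    data Closure = Closure Term Env deriving Show
    type Env   = [Closure]   -- one closure per free de Bruijn index
    type Stack = [Closure]   -- pending (unevaluated) arguments

    -- One machine transition; Nothing when weak head normal form is reached.
    step :: (Closure, Stack) -> Maybe (Closure, Stack)
    step (Closure (Var n)   env, s)     = Just (env !! n, s)                       -- variable lookup
    step (Closure (App t u) env, s)     = Just (Closure t env, Closure u env : s)  -- push argument
    step (Closure (Lam t)   env, c : s) = Just (Closure t (c : env), s)            -- beta step
    step (Closure (Lam _)   _,   [])    = Nothing

    -- Iterate transitions on a closed term until the machine stops.
    run :: Term -> Closure
    run t = go (Closure t [], [])
      where go st = maybe (fst st) go (step st)

    -- (\x. x) (\y. y) reduces to the closure of \y. y.
    example :: Closure
    example = run (App (Lam (Var 0)) (Lam (Var 0)))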

Relevance: 10.00%

Abstract:

This dissertation investigates the relations between logic and TCS in the probabilistic setting. It is motivated by two main considerations. On the one hand, since their appearance in the 1960s-1970s, probabilistic models have become increasingly pervasive in several fast-growing areas of CS. On the other, the study and development of (deterministic) computational models has considerably benefited from the mutual interchanges between logic and CS. Nevertheless, probabilistic computation has only been marginally touched by such fruitful interactions. The goal of this thesis is precisely to start bridging this gap, by developing logical systems corresponding to specific aspects of randomized computation and, therefore, by generalizing standard achievements to the probabilistic realm. To do so, our key ingredient is the introduction of new, measure-sensitive quantifiers associated with quantitative interpretations. The dissertation is tripartite. In the first part, we focus on the relation between logic and counting complexity classes. We show that, by means of our classical counting propositional logic, it is possible to generalize to counting classes the standard results by Cook and by Meyer and Stockmeyer linking propositional logic and the polynomial hierarchy. Indeed, we show that the validity problem for counting-quantified formulae captures the corresponding level in Wagner's hierarchy. In the second part, we consider programming language theory. Type systems for randomized $\lambda$-calculi, also guaranteeing various forms of termination properties, were introduced in the last decades, but these are not "logically oriented" and no Curry-Howard correspondence is known for them. Following intuitions coming from counting logics, we define the first probabilistic version of the correspondence. Finally, we consider the relationship between arithmetic and computation. We present a quantitative extension of the language of arithmetic able to formalize basic results from probability theory. This language is also our starting point for defining randomized bounded theories and, thus, for generalizing canonical results by Buss.
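
To give a flavour of the measure-sensitive quantifiers mentioned above (a simplified illustration of the general idea; the precise syntax and semantics are those of the thesis and are not reproduced here): a counting quantifier can be read as asserting that a formula holds on at least a given fraction of the possible valuations of the quantified variable,

$$\mathbf{C}^{1/2} x.\, \phi \quad \text{holds} \iff \mu\big(\{\, v \mid v \models \phi \,\}\big) \,\geq\, \tfrac{1}{2},$$

where $\mu$ is a measure on the valuations of $x$.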

Relevance: 10.00%

Abstract:

The enhanced production of strange hadrons in heavy-ion collisions relative to that in minimum-bias pp collisions is historically considered one of the first signatures of the formation of a deconfined quark-gluon plasma. At the LHC, the ALICE experiment observed that the ratio of strange to non-strange hadron yields increases with the charged-particle multiplicity at midrapidity, starting from pp collisions and evolving smoothly across interaction systems and energies, ultimately reaching Pb-Pb collisions. The understanding of the origin of this effect in small systems remains an open question. This thesis presents a comprehensive study of the production of the $K^{0}_{S}$, $\Lambda$ ($\bar{\Lambda}$) and $\Xi^{-}$ ($\bar{\Xi}^{+}$) strange hadrons in pp collisions at $\sqrt{s}$ = 13 TeV collected during LHC Run 2 with ALICE. A novel approach is exploited, introducing, for the first time, the concept of effective energy in the study of strangeness production in hadronic collisions at the LHC. In this work, the ALICE Zero Degree Calorimeters are used to measure the energy carried by forward emitted baryons in pp collisions, which reduces the effective energy available for particle production with respect to the nominal centre-of-mass energy. The results presented in this thesis provide new insights into the interplay, for strangeness production, between the initial stages of the collision and the final hadronic state. Finally, the first Run 3 results on the production of $\Omega^{-}$ ($\bar{\Omega}^{+}$) multi-strange baryons are presented, measured in pp collisions at $\sqrt{s}$ = 13.6 TeV and 900 GeV, the highest and lowest collision energies reached so far at the LHC. This thesis also presents the development and validation of the ALICE Time-Of-Flight (TOF) data quality monitoring system for LHC Run 3. This work was fundamental to assessing the performance of the TOF detector during the commissioning phase, in the Long Shutdown 2, and during the data-taking period.
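
Schematically, the effective-energy approach described above amounts to subtracting the energy of the forward leading baryons, measured in the Zero Degree Calorimeters on the two sides of the interaction point, from the nominal centre-of-mass energy (notation ours; the exact definition and corrections used in the thesis are not reproduced here):

$$\sqrt{s_{\rm eff}} \;\simeq\; \sqrt{s} \,-\, E_{\rm leading}, \qquad E_{\rm leading} \approx E_{\rm ZDC,\,A} + E_{\rm ZDC,\,C}.$$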