947 results for PROBABILISTIC FORECASTS


Relevance: 20.00%

Publisher:

Abstract:

The cosmological constant Λ does not seem to be a satisfactory explanation of the late-time accelerated expansion of the Universe, for which a substantial body of observational evidence exists; it has therefore become necessary in recent years to consider alternative models of dark energy as the cause of the accelerated expansion. In the study of dark energy models, it is important to understand which quantities can be determined from observational data without assuming any hypothesis about the cosmological model; such quantities were determined in Amendola, Kunz et al., 2012. The same paper further showed that it is possible to establish a relation between the model-independent parameters and the anisotropic stress η, which can also be expressed as a combination of the functions appearing in the most general Lagrangian for scalar-tensor theories, the Horndeski Lagrangian. In the present thesis, the Fisher matrix formalism is used to forecast the constraints that it will be possible to place on the anisotropic stress η, starting from the estimated uncertainties of the galaxy clustering and weak lensing measurements to be performed by the European Space Agency Euclid mission, scheduled for launch in 2020. Constraints coming from type Ia supernova observations are also considered. The forecast is performed for two cases: (a) η depends on redshift only, and (b) η is constant and equal to one, as in the ΛCDM model.
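The Fisher-matrix forecast described above can be sketched in a few lines. This is a toy model, not the thesis's actual Euclid specification: the observable m(z) = A(1 + ηz), the redshift bins, and the per-bin errors are all illustrative assumptions, chosen only to show how a marginalized forecast error on η falls out of the formalism.

```python
import numpy as np

# Toy Fisher-matrix forecast for an anisotropic-stress-like parameter eta.
# Mock observable m(z; A, eta) = A * (1 + eta * z), measured in a few
# redshift bins with known Gaussian errors (all values are assumptions).

z = np.array([0.5, 0.9, 1.3, 1.7])          # mock redshift bins
sigma = np.array([0.04, 0.03, 0.03, 0.05])  # assumed 1-sigma error per bin
fid = {"A": 1.0, "eta": 1.0}                # fiducial values (eta = 1 as in LCDM)

def model(z, A, eta):
    return A * (1.0 + eta * z)

def derivative(param, eps=1e-5):
    """Numerical derivative of the observable w.r.t. one parameter."""
    up, dn = dict(fid), dict(fid)
    up[param] += eps
    dn[param] -= eps
    return (model(z, **up) - model(z, **dn)) / (2 * eps)

params = ["A", "eta"]
d = {p: derivative(p) for p in params}

# F_ij = sum_k  (dm_k/dp_i)(dm_k/dp_j) / sigma_k^2
F = np.array([[np.sum(d[pi] * d[pj] / sigma**2) for pj in params]
              for pi in params])

cov = np.linalg.inv(F)            # forecast parameter covariance
sigma_eta = np.sqrt(cov[1, 1])    # marginalized 1-sigma error on eta
```

In the real forecast the derivatives are taken with respect to the full set of cosmological and nuisance parameters and the errors come from the Euclid galaxy clustering and weak lensing specifications, but the marginalization step is the same matrix inversion.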


The thesis applies ICC (Implicit Computational Complexity) techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time): a syntactical characterization of PP is given, together with a static complexity analyser able to recognise whether an imperative program computes in probabilistic polynomial time. The thesis is divided into two parts. The first part approaches the problem by creating a prototype functional language (a probabilistic variant of the lambda calculus with bounded recursion) that is sound and complete with respect to probabilistic polynomial time. The second part reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in probabilistic polynomial time or not. This thesis can be seen as one of the first steps for Implicit Computational Complexity over probabilistic classes. Hard open problems remain to be investigated, and many theoretical aspects are strongly connected with these topics; I expect that in the future ICC and probabilistic classes will receive wide attention.
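The two ingredients of the prototype functional language — a probabilistic choice primitive and recursion bounded a priori — can be sketched as combinators. The syntax and combinators below are illustrative assumptions, not the thesis's calculus; the point is only that an explicit fuel bound makes every probabilistic program terminate within a budget fixed before execution.

```python
import random

# Sketch of a probabilistic language with bounded recursion: programs may
# flip a fair coin, and every recursive call consumes one unit of "fuel",
# so total work is bounded before the program runs.  Illustrative only.

def coin():
    """Fair probabilistic choice, the only source of randomness."""
    return random.random() < 0.5

def bounded_rec(step, base, fuel):
    """Build a function whose recursion depth is capped by `fuel`.

    step: (recurse, x) -> value; each call to `recurse` implicitly
          decrements the fuel, and `base` handles the fuel-0 case.
    """
    def run(x, fuel):
        if fuel == 0:
            return base(x)
        return step(lambda y: run(y, fuel - 1), x)
    return lambda x: run(x, fuel)

# Example: a bounded random walk on the naturals.  Each step moves up or
# down with probability 1/2; the fuel caps the number of steps, so the
# program always terminates in time linear in the bound.
walk = bounded_rec(
    step=lambda rec, n: rec(n + 1 if coin() else max(0, n - 1)),
    base=lambda n: n,
    fuel=10,
)
result = walk(5)   # always in [0, 15]
```

A static analyser in the spirit of the second part of the thesis would check that such a bound exists syntactically, rather than enforcing it at run time as done here.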


In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTMs), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among its initial functions, one that returns the identity or the successor, each with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of the polynomial-time computable probabilistic functions.
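The random base function at the heart of this generalization is easy to illustrate. The sketch below is an informal reading of the abstract, not the thesis's formal algebra: it shows the identity-or-successor primitive and how composing it with ordinary recursion already yields a nontrivial probabilistic function (an n-fold composition starting from 0 is Binomial(n, 1/2)-distributed).

```python
import random

# The random base function: on input x, return x (identity) or x + 1
# (successor), each with probability 1/2.  Illustrative sketch only.

def rand_base(x: int) -> int:
    """Return x or x + 1, each with probability 1/2."""
    return x + (1 if random.random() < 0.5 else 0)

def binomial_via_composition(n: int) -> int:
    """n-fold composition of rand_base starting from 0.

    Each application adds 0 or 1 with equal probability, so the result
    is distributed as Binomial(n, 1/2).
    """
    x = 0
    for _ in range(n):
        x = rand_base(x)
    return x

samples = [binomial_via_composition(10) for _ in range(2000)]
mean = sum(samples) / len(samples)   # close to 5 = 10 * 1/2
```

All other initial functions and the composition, primitive recursion, and minimization schemes stay exactly as in Kleene's deterministic algebra; only this one coin-flipping base function is added.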


This work aims to evaluate the reliability of levee systems by calculating the probability of failure of selected levee stretches under different loads, using probabilistic methods based on fragility curves obtained through the Monte Carlo method. Overtopping and piping are considered as failure mechanisms, since these are the most frequent, and the major levee system of the Po River is analysed, with a primary focus on the section between Piacenza and Cremona, in the lower-middle area of the Padana Plain. The novelty of this approach is that it checks the reliability of individual embankment stretches, not just a single cross-section, while taking into account the variability of the levee geometry from one stretch to another. For each levee stretch analysed, the work also considers a probability distribution of the load variables involved in the definition of the fragility curves, influenced by the differences in the topography and morphology of the riverbed along the analysed reach as it pertains to the levee system as a whole. A classification is proposed, for both failure mechanisms, to give an indication of the reliability of the levee system based on the information obtained from the fragility-curve analysis. To this end, a hydraulic model has been developed in which a 500-year flood is simulated to determine the residual hazard of failure for each levee stretch at the corresponding water depth; the results are then compared with the proposed classifications. This work also aims to act as an interface between the fields of applied geology and environmental hydraulic engineering, where strong collaboration between the two professions is needed to improve the estimation of hydraulic risk.
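The core of a Monte Carlo fragility curve can be sketched compactly. This is a minimal illustration for one stretch and one mechanism (overtopping), assuming a normally distributed crest elevation; the distribution, its parameters, and the water levels are made-up values, not the thesis's calibrated data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo fragility curve for a single levee stretch: estimate
# P(failure | water level h) by sampling the uncertain resistance (here
# the crest elevation, for the overtopping mechanism) and counting
# exceedances.  All parameter values are illustrative assumptions.

def fragility_overtopping(levels, crest_mean=8.0, crest_sd=0.3, n=20_000):
    """Return the estimated P(failure) for each water level in `levels` (m).

    A failure is counted whenever the water level exceeds the sampled
    crest elevation; the same crest sample is reused across levels so
    the empirical curve is exactly non-decreasing.
    """
    crest = rng.normal(crest_mean, crest_sd, size=n)   # uncertain crest height
    return np.array([(h > crest).mean() for h in levels])

levels = np.linspace(6.5, 9.5, 13)   # water levels to evaluate (m)
pf = fragility_overtopping(levels)   # rises from ~0 toward ~1 with level
```

In the full analysis each stretch gets its own geometry and resistance distributions, piping is treated with its own limit-state function, and the 500-year flood model supplies the water level at which each stretch's curve is read off.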


The human brain is composed of a complex network of axon bundles connecting the different cerebral areas. The arcuate fasciculus links the area responsible for language comprehension with the one dedicated to language production. The arcuate fasciculus is present in both cerebral hemispheres, although the left one is often predominantly used. In this thesis, the differences between the right and left arcuate fasciculus were evaluated in a sample of healthy subjects using tractography, an advanced, non-invasive technique that allows the reconstruction of fibre trajectories from diffusion-weighted MR (Magnetic Resonance) images. For this purpose I used a probabilistic algorithm, which estimates the probability of connection of the fibre under study with the different cerebral areas, even where it crosses fibres belonging to other bundles. Thanks to the implementation of this method, it was possible to obtain an accurate reconstruction of the arcuate fasciculus even in the right hemisphere, where the reconstruction is often critical, to the point of being impossible with other tractography algorithms. By parametrizing the geometry of the tract, I then divided the arcuate fasciculus into twenty segments and compared the diffusion measures evaluated in the right and left hemispheres. These analyses reveal wide variability in the geometry of the arcuate fasciculus, both between subjects and between hemispheres. In the right hemisphere the arcuate crosses fibres belonging to other bundles more extensively. In the left hemisphere the arcuate fibres are more compact, and a greater connectivity is also measured with other brain areas involved in language functions. In the second phase of the study I applied the same method in two patients with brain lesions, with the aim of testing the damage to the arcuate fasciculus ipsilateral to the lesion and of assessing whether structural plasticity mechanisms were triggered in the contralateral hemisphere.
This method can be implemented, in a homogeneous group of patients, to identify diagnostic MR markers in the pre-surgical planning phase and prognostic MR markers of functional language recovery.
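The along-tract comparison step (twenty segments, left vs. right) can be sketched as follows. The synthetic FA values here are purely illustrative stand-ins for real diffusion metrics sampled along probabilistic streamlines; the segment count is the only detail taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Along-tract analysis sketch: bin samples by normalized arc length into
# 20 segments and compare a diffusion metric (e.g. fractional anisotropy,
# FA) segment by segment between hemispheres.  Synthetic data only.

N_SEGMENTS = 20

def along_tract_profile(arc_positions, fa_values, n_segments=N_SEGMENTS):
    """Mean FA in each of n_segments equal bins of normalized arc length
    (arc_positions are assumed to lie in [0, 1])."""
    bins = np.clip((arc_positions * n_segments).astype(int), 0, n_segments - 1)
    return np.array([fa_values[bins == s].mean() for s in range(n_segments)])

# Synthetic example: the left profile has slightly higher FA than the right.
pos = rng.random(5000)
fa_left = 0.50 + 0.05 * rng.standard_normal(5000)
fa_right = 0.45 + 0.05 * rng.standard_normal(5000)

profile_l = along_tract_profile(pos, fa_left)
profile_r = along_tract_profile(pos, fa_right)
diff = profile_l - profile_r   # per-segment left-right difference
```

With real tractography output, each segment's left-right difference would then be tested statistically across subjects rather than read off a single synthetic comparison.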


With the advent of cheaper and faster DNA sequencing technologies, assembly methods have changed greatly. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing reads between 35 and 400 base pairs in length. Reconstructing an organism’s genome from these millions of reads is a computationally expensive task. Our algorithm solves this problem by organizing and indexing the reads using n-grams, which are short, fixed-length DNA sequences of length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was used to determine the most likely reads to join, through the development of a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16 to 20x coverage. We were able to successfully re-assemble entire genomes up to 100,000 nucleotides in length.
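The indexing idea is simple to sketch: map every n-gram to the reads containing it, then propose joins only between reads that share an n-gram. The value n = 6 and the toy reads below are illustrative assumptions; the probabilistic join metric itself is not reproduced here.

```python
from collections import defaultdict

# n-gram read indexing sketch: candidate overlaps between reads are found
# via shared n-grams instead of an exhaustive all-pairs comparison.

N = 6  # illustrative n-gram length

def build_index(reads):
    """Map each n-gram to the set of read ids containing it."""
    index = defaultdict(set)
    for rid, read in enumerate(reads):
        for i in range(len(read) - N + 1):
            index[read[i:i + N]].add(rid)
    return index

def candidate_joins(reads, index):
    """Pairs of reads sharing at least one n-gram (putative joins)."""
    pairs = set()
    for rid, read in enumerate(reads):
        for i in range(len(read) - N + 1):
            for other in index[read[i:i + N]]:
                if other != rid:
                    pairs.add(tuple(sorted((rid, other))))
    return pairs

# Reads 0 and 1 overlap on "GTGGAT"; read 2 shares no n-gram with either.
reads = ["ACGTACGTGGAT", "GTGGATCCAGTA", "TTTTCCCCAAAA"]
index = build_index(reads)
pairs = candidate_joins(reads, index)
```

In the full method each candidate pair would then be scored by the probability metric, and the most likely joins applied iteratively until the genome is assembled.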


A protein in a biological sample is usually quantified by immunological techniques based on antibodies. Mass spectrometry offers alternative approaches that do not depend on antibody affinity and avidity, protein isoforms, quaternary structures, or steric hindrance of antibody-antigen recognition in the case of multiprotein complexes. One approach is the use of stable isotope-labeled internal standards; another is the direct exploitation of mass spectrometric signals recorded by LC-MS/MS analysis of protein digests. Here we assessed the peptide match score summation index, based on probabilistic peptide scores calculated by the PHENYX protein identification engine, for absolute protein quantification in accordance with the protein abundance index proposed by Mann and co-workers (Rappsilber, J., Ryder, U., Lamond, A. I., and Mann, M. (2002) Large-scale proteomic analysis of the human spliceosome. Genome Res. 12, 1231-1245). Using synthetic protein mixtures, we demonstrated that this approach works well, although proteins can have different response factors. Applied to high density lipoproteins (HDLs), this new approach compared favorably to alternative protein quantitation methods such as UV detection of protein peaks separated by capillary electrophoresis or quantitation of protein spots on SDS-PAGE. We compared the protein composition of a well-defined HDL density class isolated from the plasma of seven hypercholesterolemia subjects having low or high HDL cholesterol with HDL from nine normolipidemia subjects. The quantitative protein patterns distinguished individuals according to the corresponding concentration and distribution of cholesterol from serum lipid measurements of the same samples, and revealed that hypercholesterolemia in unrelated individuals is the result of different deficiencies. The presented approach is complementary to HDL lipid analysis; does not rely on complicated sample treatment, e.g. chemical reactions, or antibodies; and can be used for prospective clinical studies of larger patient groups.
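The score-summation idea can be sketched as follows. The protein names and score values are made-up examples, not PHENYX output, and the normalization shown is one simple choice; it is meant only to show the shape of the computation, in which each protein's abundance measure is the sum of its peptides' probabilistic scores.

```python
# Sketch of a score-summation quantitation index: sum the probabilistic
# scores of the peptides matched to each protein, then normalize to
# fractions of the total so proteins are comparable within one sample.
# Scores and protein names below are illustrative, not real data.

peptide_scores = {
    # protein id -> scores of peptides matched to it
    "APOA1": [42.1, 38.7, 55.0, 29.3, 61.2],
    "APOA2": [33.0, 27.5],
    "APOC3": [18.9, 22.4, 15.1],
}

def score_summation_index(scores_by_protein):
    """Per-protein sum of peptide scores, normalized to sum to 1."""
    totals = {p: sum(s) for p, s in scores_by_protein.items()}
    grand = sum(totals.values())
    return {p: t / grand for p, t in totals.items()}

index = score_summation_index(peptide_scores)
# APOA1 dominates, as it carries the most (and highest-scoring) peptides
```

Because proteins have different response factors, absolute comparisons across proteins still require calibration (e.g. against synthetic mixtures, as done in the study); within one protein across samples the index compares more directly.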


The past decade has brought significant advances in seasonal climate forecasting. However, water resources decision support and management continue to be based almost entirely on historical observations and do not take advantage of climate forecasts. This study builds on previous work that conditioned streamflow ensemble forecasts on observable climate indicators, such as the El Niño-Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO), for use in a decision support model for the Highland Lakes multi-reservoir system in central Texas operated by the Lower Colorado River Authority (LCRA). In the current study, seasonal soil moisture is explored as a climate indicator and predictor of annual streamflow for the LCRA region. The main purpose of this study is to evaluate the correlation of fractional soil moisture with streamflow using the 1950-2000 Variable Infiltration Capacity (VIC) Retrospective Land Surface Data Set over the LCRA region. Correlations were determined by examining different annual and seasonal combinations of VIC-modeled fractional soil moisture and observed streamflow. The applicability of the VIC Retrospective Land Surface Data Set as a data source for this study is also tested, and patterns of climatology for the watershed study area are established and analyzed using the selected data source (the VIC model) and historical data. The correlation results showed potential for the use of soil moisture as a predictor of streamflow over the LCRA region, as evidenced by the good correlations found between seasonal soil moisture and seasonal streamflow during coincident seasons, as well as between seasonal and annual soil moisture and annual streamflow during coincident years.
Given the good correlation found between seasonal soil moisture from the VIC Retrospective Land Surface Data Set and observed annual streamflow, future research will evaluate the application of NOAA Climate Prediction Center (CPC) soil moisture forecasts in predicting annual streamflow for use in the decision support model for the LCRA.
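The correlation analysis at the heart of the study can be sketched briefly. The synthetic series below stand in for the VIC soil-moisture fractions and the observed streamflow record; the values, units, and noise model are assumptions used only to show the computation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of correlating seasonal soil-moisture fractions with streamflow
# over a multi-year record.  Synthetic stand-ins for the 1950-2000 VIC
# Retrospective Land Surface Data Set and observed streamflow.

years = np.arange(1950, 2001)
soil_moisture = rng.uniform(0.2, 0.8, size=years.size)  # seasonal mean fraction
streamflow = 120 * soil_moisture + rng.normal(0, 8, years.size)  # m^3/s, assumed

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    xd, yd = x - x.mean(), y - y.mean()
    return (xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd))

r = pearson_r(soil_moisture, streamflow)
# a strong positive r motivates using soil moisture as a streamflow
# predictor, as the study found for the LCRA region
```

In the actual study this calculation is repeated for the different annual and seasonal combinations of soil moisture and streamflow, both coincident and lagged, to identify which seasonal windows carry predictive skill.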