906 results for Timed and Probabilistic Automata


Relevance: 30.00%

Abstract:

A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables RECs to be used for design purposes in ungauged basins. The aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing Depth-Duration Envelope Curves (DDECs), defined as the regional upper bound on all record rainfall depths observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration and large T values. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods for gauged and ungauged basins. The study focuses on two national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9 and 24 h from 700 Austrian raingauges, and the annual maximum series (AMS) of rainfall depths with durations spanning from 5 min to 24 h collected at 220 raingauges located in northern-central Italy. The estimation of the recurrence interval of a DDEC requires quantifying the equivalent number of independent data, which, in turn, is a function of the cross-correlation among sequences. While the quantification and modelling of intersite dependence is a straightforward task for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to address this problem.
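
The probabilistic envelope-curve idea lends itself to a compact numerical illustration. The sketch below, on hypothetical AMS-like data, deflates the pooled sample size for cross-correlation with a simple variance-inflation formula and then attaches a plotting-position recurrence interval to the regional record; both the n_eff formula and the T ≈ n_eff + 1 estimator are illustrative assumptions, not the estimator developed in the paper.

```python
import numpy as np

# Hypothetical AMS data: rows = years, columns = rain gauges (mm, one duration)
rng = np.random.default_rng(42)
n_years, n_gauges = 50, 20
ams = rng.gamma(shape=2.0, scale=15.0, size=(n_years, n_gauges))

# Envelope value for this duration: the largest depth on record in the region
envelope_depth = ams.max()

# Average inter-site (cross-) correlation of the AMS sequences
corr = np.corrcoef(ams.T)
rho_bar = corr[np.triu_indices(n_gauges, k=1)].mean()

# Equivalent number of independent data (variance-inflation approximation,
# assumed here for illustration; the paper develops its own estimator)
n_total = n_years * n_gauges
n_eff = n_total / (1.0 + (n_gauges - 1) * max(rho_bar, 0.0))

# Plotting-position style estimate: the regional record is the largest of
# n_eff effectively independent values, so T ~ n_eff + 1 (assumed estimator)
T_envelope = n_eff + 1.0

print(f"envelope depth = {envelope_depth:.1f} mm, rho_bar = {rho_bar:.2f}, "
      f"n_eff = {n_eff:.0f}, T ~ {T_envelope:.0f} yr")
```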

Relevance: 30.00%

Abstract:

Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave radiometers, is currently an unsolved problem. The challenge results from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work investigates a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectroradiometer MODIS (on Aqua) were co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products were used as truth to calibrate and validate all the proposed algorithms. The methodological approach can be summarised in two steps. In the first step, an empirical search for a threshold discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce satisfactory results, a more statistically sound approach was attempted. Two techniques, which compute the probability above and below a brightness temperature (BT) threshold, were applied to the available data. The first is based on a logistic distribution representing the probability of snow given the predictors. The second, termed the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that requires no hypothesis on the shape of the probabilistic model (such as, for instance, the logistic) and only requires the estimation of the BT thresholds. The results show that both methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
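
A minimal sketch of the first, logistic technique: a logistic-regression classifier returning P(snow | BT predictors), trained against radar-derived labels. All data below are synthetic stand-ins (the toy linear rule plays the role of the CloudSat truth); only the modelling pattern reflects the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical co-located training data: brightness temperatures (K) from a
# few microwave channels, labelled 1/0 by the radar "truth"
n = 2000
bt = rng.normal(loc=[250.0, 240.0, 230.0], scale=8.0, size=(n, 3))
# Toy rule standing in for the real physics: colder high-frequency channels
# tend to indicate scattering by snowfall
snow = (bt @ np.array([-0.02, -0.03, -0.05])
        + rng.normal(0, 0.5, n) > -23.7).astype(int)

model = LogisticRegression().fit(bt, snow)
p_snow = model.predict_proba(bt)[:, 1]     # P(snow | BT predictors)
detected = p_snow > 0.5

pod = (detected & (snow == 1)).sum() / (snow == 1).sum()
print(f"probability of detection on training data: {pod:.2f}")
```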

Relevance: 30.00%

Abstract:

Marine soft-bottom systems show high variability across multiple spatial and temporal scales. Natural and anthropogenic sources of disturbance act together in shaping benthic sedimentary characteristics and species distribution. Describing such spatial variability is required to understand the ecological processes behind it. However, in order to better estimate spatial patterns, methods that take into account the complexity of the sedimentary system are required. This PhD thesis aims to make a significant contribution both to improving the methodological approaches to the study of biological variability in soft-bottom habitats and to increasing knowledge of the effects that different processes (both natural and anthropogenic) can have on the benthic communities of a large area in the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker's index has become the most widely used measure of beta diversity. However, application of the Whittaker index to soft-bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species biases estimates of heterogeneity, making it difficult to compare assemblages containing a high proportion of rare species. In benthic communities, the unusually large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index, but it behaves more favourably when species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages. Although the new index makes an important contribution to the study of biodiversity in sedimentary environments, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna depends strongly on both the spatial and temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches and the interpretation of results. Problems often arise when unrepresentative samples are taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although globally recognized, are far from being resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish trawling) sources of disturbance. The few studies in this area that aimed at understanding which of these processes mainly affect macrobenthos were conducted at small spatial scales, as they were designed to examine local changes in benthic communities or particular species. However, in order to better describe all the putative processes occurring in the entire area, a high sampling effort at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was extensively studied in this thesis.
I have described, in detail, spatial patterns both in terms of sedimentary characteristics and macrobenthic organisms, and have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular, I have examined the effect of offshore gas platforms on benthic diversity and tested their effect against a background of natural spatial variability. The results suggest that natural processes in the North Adriatic, such as river outflow and eutrophication, show an inter-annual variability that might have important consequences for benthic assemblages, affecting for example their spatial pattern moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature and salinity, play an important role in explaining large-scale benthic spatial variability (i.e., affecting both abundance patterns and beta diversity). Nonetheless, more local effects, probably related to organic enrichment or pollution from the Po River input, have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a greater effect on macrofauna distribution than any effect related to the presence of gas platforms. The main effect of gas platforms is restricted to small spatial scales and related to a change in habitat complexity due to natural dislodgement, or removal during structure cleaning, of the mussels that colonize their legs. The accumulation of mussels on the sediment plausibly affects benthic infauna composition. All the components of the study presented in this thesis highlight the need to carefully consider methodological aspects related to the study of sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful in understanding whether any anthropogenic source of disturbance has a negative effect on the marine environment and, if so, in planning sustainable strategies for the proper management of the affected area.
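
For concreteness, a small sketch of the quantities involved: Whittaker's index computed from a presence/absence matrix, followed by one plausible Michaelis-Menten-type rescaling. The abstract does not reproduce the exact formula of the proposed index, so the transformation below is an assumption for illustration only.

```python
import numpy as np

# Hypothetical presence/absence matrix: rows = samples, columns = species
rng = np.random.default_rng(1)
pa = (rng.random((10, 60)) < 0.15).astype(int)

S = (pa.sum(axis=0) > 0).sum()      # gamma diversity: total species richness
alpha_bar = pa.sum(axis=1).mean()   # mean per-sample (alpha) richness
beta_w = S / alpha_bar              # Whittaker's beta diversity

# A Michaelis-Menten-type rescaling of beta_w (one plausible variant, not the
# thesis formula): it saturates as beta_w grows, damping the influence of
# species recorded in a single sample.
beta_mm = (beta_w - 1.0) / beta_w

print(f"S = {S}, mean alpha = {alpha_bar:.1f}, "
      f"beta_W = {beta_w:.2f}, Michaelis-Menten form = {beta_mm:.2f}")
```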

Relevance: 30.00%

Abstract:

Forecasting the time, location, nature and scale of volcanic eruptions is one of the most urgent tasks of modern applied volcanology. The reliability of probabilistic forecasting procedures is strongly related to the reliability of the input information provided, which calls for objective criteria for interpreting historical and monitoring data. For this reason, both detailed analysis of past data and more basic research into the processes of volcanism are fundamental tasks of a continuous information-gain process; in this way the precursors of eruptions can be better interpreted in terms of their physical meaning, with the associated uncertainties. This should lead to better predictions of the nature of eruptive events. In this work we have studied different problems associated with long- and short-term eruption forecasting. First, we discuss different approaches for the analysis of the eruptive history of a volcano, most of them generally applied for long-term eruption forecasting purposes; furthermore, we present a model based on a Brownian passage-time process to describe recurrent eruptive activity, and apply it to long-term, time-dependent eruption forecasting (Chapter 1). Conversely, in an effort to define further monitoring parameters as input data for short-term eruption forecasting in probabilistic models (such as the Bayesian Event Tree for eruption forecasting, BET_EF), we analyze some characteristics of the typical seismic activity recorded in active volcanoes; in particular, we apply methodologies suited to the analysis of long-period (LP) events (Chapter 2) and volcano-tectonic (VT) seismic swarms (Chapter 3); our analyses are generally oriented toward tracking phenomena that can provide information about magmatic processes. Finally, we discuss possible ways to integrate the results presented in Chapter 1 (for long-term EF) and Chapters 2 and 3 (for short-term EF) into the BET_EF model (Chapter 4).
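
A compact sketch of long-term, time-dependent forecasting with a Brownian passage-time renewal model: given the time elapsed since the last eruption, the conditional probability of an eruption in a forward window follows from the BPT density. The parameter values (mu, alpha) and the forecast window are hypothetical.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with mean mu and
    aperiodicity (coefficient of variation) alpha."""
    return (np.sqrt(mu / (2 * np.pi * alpha**2 * t**3))
            * np.exp(-(t - mu) ** 2 / (2 * mu * alpha**2 * t)))

def bpt_cdf(t, mu, alpha, n=20000):
    """CDF by trapezoidal integration (adequate for a sketch)."""
    x = np.linspace(1e-9, t, n)
    y = bpt_pdf(x, mu, alpha)
    return np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0

# Hypothetical recurrence model: mean inter-eruption time 80 yr, alpha = 0.6
mu, alpha = 80.0, 0.6
t_elapsed, window = 60.0, 5.0   # 60 yr of quiescence, 5-yr forecast window

F_t = bpt_cdf(t_elapsed, mu, alpha)
F_tw = bpt_cdf(t_elapsed + window, mu, alpha)
p_cond = (F_tw - F_t) / (1.0 - F_t)   # P(eruption in next 5 yr | quiet so far)
print(f"conditional 5-yr eruption probability: {p_cond:.3f}")
```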

Relevance: 30.00%

Abstract:

The objective of this work is the evaluation of the potential of navigation satellite signals to retrieve basic atmospheric parameters. A thorough study has been performed of the assumptions more or less explicitly contained in the common processing steps of navigation signals. A probabilistic procedure has been designed for measuring vertically discretised profiles of pressure, temperature and water vapour together with their associated errors. Numerical experiments on a synthetic dataset have been performed with the main objective of quantifying the information that can be gained from such an approach, using entropy and relative entropy as test parameters. A simulator of the phase delay and bending of a GNSS signal travelling across the atmosphere has been developed for this purpose.
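
The entropy-based evaluation can be sketched directly: information gain is the entropy reduction from prior to posterior, and relative entropy (Kullback-Leibler divergence) measures how far the data move the distribution. The discretised temperature distributions below are hypothetical placeholders for retrieval output.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discretised probability distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) for discretised distributions."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Hypothetical discretised distributions of temperature at one level:
# a broad climatological prior vs. a posterior narrowed by GNSS delay data
grid = np.linspace(250.0, 290.0, 81)
prior = np.exp(-0.5 * ((grid - 270.0) / 8.0) ** 2)
posterior = np.exp(-0.5 * ((grid - 268.0) / 3.0) ** 2)
prior /= prior.sum()
posterior /= posterior.sum()

gain = entropy(prior) - entropy(posterior)   # reduction in uncertainty
kl = relative_entropy(posterior, prior)      # information gained from data
print(f"entropy reduction: {gain:.2f} nats, D(post || prior): {kl:.2f} nats")
```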

Relevance: 30.00%

Abstract:

The aim of this thesis was to investigate the respective contributions of prior information and sensorimotor constraints to action understanding, and to estimate their consequences for the evolution of human social learning. Even though a huge amount of literature is dedicated to the study of action understanding and its role in social learning, these issues are still largely debated. Here, I critically describe two main perspectives. The first interprets faithful social learning as the outcome of a fine-grained representation of others' actions and intentions that requires sophisticated socio-cognitive skills. In contrast, the second highlights the role of simpler decision heuristics, whose recruitment is determined by individual and ecological constraints. The present thesis aims to show, through four experimental works, that these two contributions are not mutually exclusive. The first study investigates the role of the inferior frontal cortex (IFC), the anterior intraparietal area (AIP) and the primary somatosensory cortex (S1) in the recognition of other people's actions, using a transcranial magnetic stimulation adaptation paradigm (TMSA). The second work studies whether, and how, higher-order and lower-order prior information (acquired from the probabilistic sampling of past events vs. derived from an estimation of the biomechanical constraints of observed actions) interact during the prediction of other people's intentions. Using a single-pulse TMS procedure, the third study investigates whether the interaction between these two classes of priors modulates motor system activity. The fourth study tests the extent to which behavioural and ecological constraints influence the emergence of faithful social learning strategies at the population level. The collected data help elucidate how higher-order and lower-order prior expectations interact during action prediction, and clarify the neural mechanisms underlying this interaction. Finally, these works open promising perspectives for a better understanding of social learning, with possible extensions to animal models.

Relevance: 30.00%

Abstract:

This doctoral dissertation presents a new method to assess the influence of clearance in the kinematic pairs on the configuration of planar and spatial mechanisms. The subject has been widely investigated in both past and present scientific literature, and is approached in different ways: a static/kinetostatic way, which looks for the clearance take-up due to the external loads on the mechanism; a probabilistic way, which expresses clearance-due displacements using probability density functions; and a dynamic way, which evaluates dynamic effects such as the actual forces in the pairs caused by impacts, or the consequent vibrations. This dissertation approaches the problem of clearance from a purely kinematic perspective. With reference to a given mechanism configuration, the pose (position and orientation) error of the mechanism link of interest is expressed as a vector function of the degrees of freedom introduced in each pair by clearance: the presence of clearance in a kinematic pair, in fact, causes the actual pair to have more degrees of freedom than the theoretical clearance-free one. The clearance-due degrees of freedom are bounded by the pair geometry, and proper modelling of clearance-affected pairs allows such bounds to be expressed through analytical functions. It is then possible to cast the problem as a maximization problem, in which a continuous function (the pose error of the link of interest) subject to some constraints (the analytical functions bounding the clearance-due degrees of freedom) has to be maximized. Revolute, prismatic, cylindrical and spherical clearance-affected pairs have been modelled analytically; for mechanisms involving such pairs, the solution to the maximization problem has been obtained in closed form.
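
The maximization formulation can be illustrated on the simplest case, a revolute pair with radial clearance r: the tip displacement of the attached link is maximised subject to the circular bound on the clearance-due translations. It is written with a generic constrained optimiser for clarity (the closed-form answer here is simply r, which makes a handy check); dimensions are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# A rigid link of length L whose base revolute pair has radial clearance r.
# Clearance adds two bounded DOFs (dx, dy) to the pair; we seek the values
# that maximise the position error of the link tip.
L, r, theta = 0.30, 0.5e-3, np.deg2rad(35.0)   # hypothetical dimensions

tip_nominal = L * np.array([np.cos(theta), np.sin(theta)])

def tip_error(d):
    dx, dy = d
    tip = np.array([dx, dy]) + tip_nominal     # pin translates inside the hole
    return np.linalg.norm(tip - tip_nominal)

res = minimize(lambda d: -tip_error(d), x0=[r / 2, 0.0],
               constraints=[{"type": "ineq",
                             "fun": lambda d: r**2 - d[0]**2 - d[1]**2}])
print(f"max tip position error: {-res.fun * 1e3:.3f} mm "
      f"(closed-form answer: r = {r * 1e3} mm)")
```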

Relevance: 30.00%

Abstract:

This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast, and then explain the consistency and comparison tests used in CSEP experiments to evaluate model performance. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of PSHA: the declustering of seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the declustering of seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of exceedances that is usually considered fundamental to transform exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods commonly used to take into account epistemic uncertainty in PSHA. The most widely used is the logic tree, which underlies the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree and show that this structure is not adequate to describe epistemic uncertainty. We then propose a new probabilistic framework based on ensemble modelling that properly accounts for epistemic uncertainties in PSHA.
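
A minimal sketch of the ensemble result from the first chapter: rate forecasts on space-magnitude bins are combined as a weighted average and scored with the joint Poisson log-likelihood used in CSEP-style tests. Rates and observations are synthetic, and the equal weighting is an illustrative choice.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
n_bins = 500                                   # space-magnitude bins

# Hypothetical expected earthquake counts from two forecasting models
lam_a = rng.gamma(2.0, 0.05, n_bins)
lam_b = rng.gamma(2.0, 0.05, n_bins)
lam_true = 0.5 * lam_a + 0.5 * lam_b           # synthetic "true" rates
observed = rng.poisson(lam_true)

def joint_log_likelihood(lam, obs):
    """CSEP-style score: independent Poisson likelihood over bins."""
    return poisson.logpmf(obs, lam).sum()

# Weighted-average ensemble of the two rate forecasts
w = 0.5
lam_ens = w * lam_a + (1 - w) * lam_b

for name, lam in [("model A", lam_a), ("model B", lam_b),
                  ("ensemble", lam_ens)]:
    print(f"{name}: log-likelihood = {joint_log_likelihood(lam, observed):.1f}")
```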

Relevance: 30.00%

Abstract:

Spatial prediction of hourly rainfall via radar calibration is addressed. The change-of-support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and by the right-skewness of the distribution of positive amounts. Direct rain-gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined in a two-part semicontinuous model. Three model specifications that address the COSP differently are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C and linked to the R software. The communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing the calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (reliability plot, sharpness histogram, PIT histogram, Brier score plot and quantile decomposition plot), proper scoring rules (Brier score, continuous ranked probability score) and consistent scoring functions (root mean square error and mean absolute error, addressing the predictive mean and median, respectively). Calibration is achieved, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
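
The two-part semicontinuous structure can be sketched generatively: a probit model on log-radar drives rain occurrence, and a Gamma distribution with a log-linear mean drives positive amounts. The sketch below omits the spatially correlated Gaussian effects and the COSP weighting, and all coefficients are hypothetical.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(3)

# Hypothetical hourly radar estimates (mm) at rain-gauge locations
radar = np.exp(rng.normal(0.0, 1.0, size=1000))
log_radar = np.log(radar)

# Stage 1: rain occurrence via a probit link on log-radar
a0, a1 = -0.3, 0.9
p_rain = norm.cdf(a0 + a1 * log_radar)
is_wet = rng.random(1000) < p_rain

# Stage 2: positive amounts via a Gamma with log-linear mean in log-radar
b0, b1, shape = 0.2, 0.8, 2.0
mean_pos = np.exp(b0 + b1 * log_radar)
amount = np.where(is_wet,
                  gamma.rvs(shape, scale=mean_pos / shape, random_state=rng),
                  0.0)

print(f"fraction of zeros: {(amount == 0).mean():.2f}, "
      f"mean wet-hour rainfall: {amount[is_wet].mean():.2f} mm")
```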

Relevance: 30.00%

Abstract:

This work focuses on the study of saltwater intrusion in coastal aquifers, and in particular on the construction of conceptual schemes to evaluate the associated risk. Saltwater intrusion depends on different natural and anthropic factors, both exhibiting strongly aleatory behaviour, which should be considered for an optimal management of the territory and of water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model-reduction technique based on Polynomial Chaos Expansion, able to combine an accurate description of the model with a modest computational burden. When the assumptions of classical analytical models are not respected, as often occurs in applications to real case studies, including the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. A model can then be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
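
As a stand-in for the PCE-based analysis, the sketch below estimates first-order Sobol indices by plain pick-freeze Monte Carlo on a Ghyben-Herzberg sharp-interface relation; the input ranges are hypothetical, and a real application would replace the brute-force sampling with the Polynomial Chaos surrogate.

```python
import numpy as np

rng = np.random.default_rng(11)
rho_f = 1000.0                           # freshwater density (kg/m^3)

def interface_depth(h, d_rho):
    """Sharp-interface (Ghyben-Herzberg) depth of the fresh/salt interface
    below sea level for freshwater head h and density difference d_rho."""
    return h * rho_f / d_rho

def sample(n):
    h = rng.uniform(0.5, 1.5, n)         # hypothetical head range (m)
    d_rho = rng.uniform(20.0, 30.0, n)   # hypothetical density contrast
    return h, d_rho

# Pick-freeze estimator of first-order Sobol indices
n = 200_000
h_a, dr_a = sample(n)
h_b, dr_b = sample(n)
y = interface_depth(h_a, dr_a)
var_y = y.var()

s_h = np.cov(y, interface_depth(h_a, dr_b))[0, 1] / var_y   # freeze h
s_dr = np.cov(y, interface_depth(h_b, dr_a))[0, 1] / var_y  # freeze d_rho

print(f"first-order Sobol indices: head = {s_h:.2f}, density = {s_dr:.2f}")
```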

Relevance: 30.00%

Abstract:

The thesis presents a probabilistic approach to the theory of semigroups of operators, with particular attention to Markov and Feller semigroups. The first goal of this work is the proof of the fundamental Feynman-Kac formula, which gives the solution of certain parabolic Cauchy problems in terms of the expected value of the initial condition evaluated along the associated stochastic diffusion process. The second goal is the characterization of the principal eigenvalue of the generator of a semigroup with Markov transition probability function, and of second-order elliptic operators with real coefficients that are not necessarily self-adjoint. The thesis is divided into three chapters. In the first chapter we study Brownian motion and some of its main properties, stochastic processes, the stochastic integral and the Itô formula, arriving in the last section at the proof of the Feynman-Kac formula. The second chapter is devoted to the probabilistic approach to semigroup theory, and it is here that we introduce Markov and Feller semigroups. Special emphasis is given to the Feller semigroup associated with Brownian motion. The third and last chapter is divided into two sections. In the first we present the abstract characterization of the principal eigenvalue of the infinitesimal generator of a semigroup of operators acting on continuous functions over a compact metric space. In the second this approach is used to study the principal eigenvalue of elliptic partial differential operators with real coefficients. Finally, in the appendix, we gather some of the technical results used in the thesis in more detail. Appendix A is devoted to the Sion minimax theorem, while in Appendix B we prove the Chernoff product formula for not necessarily self-adjoint operators.
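
The Feynman-Kac formula is easy to test numerically in its simplest instance: for the heat equation, the solution is the expectation of the initial condition evaluated at a Brownian displacement. Below, a Monte Carlo sketch with a Gaussian initial condition, for which the exact solution is known.

```python
import numpy as np

# Feynman-Kac sketch: the heat equation u_t = (1/2) u_xx with u(0, x) = u0(x)
# has the probabilistic solution u(t, x) = E[u0(x + W_t)], W_t Brownian motion.
rng = np.random.default_rng(5)

def u0(x):
    return np.exp(-0.5 * x**2)

def u_monte_carlo(t, x, n_paths=200_000):
    w_t = rng.normal(0.0, np.sqrt(t), n_paths)   # W_t ~ N(0, t)
    return u0(x + w_t).mean()

def u_exact(t, x):
    # Gaussian convolution of a Gaussian initial condition
    return np.exp(-0.5 * x**2 / (1.0 + t)) / np.sqrt(1.0 + t)

t, x = 0.8, 0.5
print(f"Monte Carlo: {u_monte_carlo(t, x):.4f}, exact: {u_exact(t, x):.4f}")
```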

Relevance: 30.00%

Abstract:

This work aims to evaluate the reliability of river levee systems by calculating the probability of failure of given levee stretches under different loads, using probabilistic methods that rely on fragility curves obtained through the Monte Carlo method. Overtopping and piping are considered as failure mechanisms (since these are the most frequent), and the major levee system of the Po River is analysed, with a primary focus on the section between Piacenza and Cremona in the lower-middle Padana Plain. The novelty of this approach is that it checks the reliability of individual embankment stretches, not just a single cross-section, while taking into account the variability of the levee-system geometry from one stretch to another. For each levee stretch analysed, the work also considers a probability distribution of the load variables involved in the definition of the fragility curves, which is influenced by differences in the topography and morphology of the riverbed along the analysed reach of the levee system as a whole. A classification is proposed, for both failure mechanisms, to give an indication of the reliability of the levee system based on the information obtained from the fragility-curve analysis. To accomplish this, a hydraulic model has been developed in which a 500-year flood is simulated to determine the residual hazard of failure for each levee stretch at the corresponding water depth, and the results are compared with the obtained classifications. This work has the additional aim of acting as an interface between the worlds of applied geology and environmental hydraulic engineering, where strong collaboration between the two professions is needed to improve the estimation of hydraulic risk.
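
A minimal sketch of a Monte Carlo fragility curve for the overtopping mechanism: the crest elevation of a stretch is treated as a random variable, and the failure probability is estimated at each load level. All numbers are hypothetical, and piping would require a different limit-state function.

```python
import numpy as np

rng = np.random.default_rng(9)
n_sim = 100_000

# Hypothetical stretch geometry and resistance uncertainty (overtopping only):
# the crest elevation is random to reflect survey uncertainty and settlement
# along the stretch.
crest = rng.normal(loc=12.0, scale=0.25, size=n_sim)    # m above datum

water_levels = np.linspace(10.5, 13.0, 26)              # load levels (m)
fragility = [(h > crest).mean() for h in water_levels]  # P(failure | level)
print(f"fragility at {water_levels[0]} m: {fragility[0]:.3f}, "
      f"at {water_levels[-1]} m: {fragility[-1]:.3f}")

# Residual hazard for a design event: failure probability at the water depth
# produced by, e.g., a simulated 500-year flood at this stretch
h_500 = 12.2
p_fail_500 = (h_500 > crest).mean()
print(f"P(overtopping | 500-yr level {h_500} m) = {p_fail_500:.3f}")
```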

Relevance: 30.00%

Abstract:

With the advent of cheaper and faster DNA sequencing technologies, assembly methods have changed greatly. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing read lengths between 35 and 400 base pairs. Reconstructing an organism's genome from these millions of reads is a computationally expensive task. Our algorithm solves this problem by organizing and indexing the reads using n-grams, which are short, fixed-length DNA sequences of length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was used to determine the most likely reads to join, through the development of a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16 to 20x coverage. We were able to successfully re-assemble entire genomes up to 100,000 nucleotides in length.
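
A small sketch of the indexing step: reads are indexed by their n-grams so that candidate joins are only sought among reads sharing at least one n-gram. The probabilistic join-scoring metric of the abstract is not reproduced here, and the helper names are ours.

```python
from collections import defaultdict

def build_ngram_index(reads, n=8):
    """Map every length-n substring (n-gram) to the reads containing it."""
    index = defaultdict(set)
    for rid, read in enumerate(reads):
        for i in range(len(read) - n + 1):
            index[read[i:i + n]].add(rid)
    return index

def candidate_joins(index):
    """Pairs of reads sharing at least one n-gram: putative joins to score,
    avoiding the exhaustive all-pairs comparison."""
    pairs = set()
    for rids in index.values():
        rids = sorted(rids)
        for i, a in enumerate(rids):
            for b in rids[i + 1:]:
                pairs.add((a, b))
    return pairs

reads = ["ACGTACGGTTAC", "GGTTACCATAGC", "CATAGCTTGACA"]  # toy short reads
index = build_ngram_index(reads, n=6)
print(sorted(candidate_joins(index)))   # [(0, 1), (1, 2)]
```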

Relevance: 30.00%

Abstract:

Amplifications and deletions of chromosomal DNA, as well as copy-neutral loss of heterozygosity, have been associated with disease processes. High-throughput single nucleotide polymorphism (SNP) arrays are useful for making genome-wide estimates of copy number and genotype calls. Because neighboring SNPs in high-throughput SNP arrays are likely to have dependent copy number and genotype, owing to the underlying haplotype structure and linkage disequilibrium, hidden Markov models (HMMs) may improve on genotype calls and copy number estimates that do not incorporate information from nearby SNPs. We improve previous approaches that utilize an HMM framework for inference in high-throughput SNP arrays by integrating copy number, genotype calls, and the corresponding confidence scores when available. Using simulated data, we demonstrate how confidence scores control smoothing in a probabilistic framework. Software for fitting HMMs to SNP array data is available in the R package ICE.
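
The role of confidence scores can be sketched with a small Viterbi smoother in which per-SNP emission log-probabilities are tempered by the confidence score, so low-confidence calls contribute less evidence against neighbouring states. This is a simplified stand-in for the model in ICE, with invented transition probabilities and synthetic calls.

```python
import numpy as np

states = ["deletion", "normal", "amplification"]
log_trans = np.log(np.array([[0.98, 0.01, 0.01],
                             [0.01, 0.98, 0.01],
                             [0.01, 0.01, 0.98]]))

def viterbi(log_emit, log_trans, log_init):
    """Most likely hidden state path for a chain of SNPs."""
    n, k = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + log_trans    # k x k predecessor scores
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

rng = np.random.default_rng(2)
n_snps = 12
raw = rng.dirichlet(np.ones(3), size=n_snps)   # per-SNP call probabilities
conf = rng.uniform(0.3, 1.0, n_snps)           # per-SNP confidence scores
log_emit = conf[:, None] * np.log(raw)         # tempering by confidence

log_init = np.log(np.array([0.05, 0.9, 0.05]))
print([states[s] for s in viterbi(log_emit, log_trans, log_init)])
```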

Relevance: 30.00%

Abstract:

Diabetic nephropathy and end-stage renal failure are still a major cause of mortality among patients with diabetes mellitus (DM). In this study, we evaluated the Clinitek-Microalbumin (CM) screening test strip for the detection of microalbuminuria (MA) in a random morning spot urine sample in comparison with the quantitative assessment of albuminuria in the timed overnight urine collection (the "gold standard"). One hundred thirty-four children, adolescents and young adults with insulin-dependent type 1 DM were studied at 222 outpatient visits. Because of urinary tract infection and/or haematuria, the data from 13 visits were excluded. In all, 165 timed overnight urine collections were obtained over the remaining 209 visits (79% sample-per-visit rate). Ten (6.1%) patients presented MA of ≥15 µg/min. In comparison, 200 spot urine samples could be screened (96% sample-per-visit rate), a significant increase in compliance and screening rate (P<.001, McNemar test). Furthermore, on 156 occasions the gold standard and CM could be directly compared. The sensitivity and specificity of CM in the spot urine (cut-off ≥30 mg albumin/l) were 0.89 [95% confidence interval (CI) 0.56-0.99] and 0.73 (CI 0.66-0.80), respectively. The positive and negative predictive values were 0.17 (CI 0.08-0.30) and 0.99 (CI 0.95-1.00), respectively. With the CM albumin-to-creatinine ratio, the results were poorer than with the albumin concentration alone. Using CM instead of the quantitative assessment of albuminuria is not cost-effective (US$35 versus US$60 per patient per year). In conclusion, to exclude MA, CM used on a random spot urine sample is reliable and easy to handle, but positive screening results of ≥30 mg albumin/l must be confirmed by analysis of the timed overnight urine collection. Although screening compliance is improved by analysing a random morning spot urine sample for MA, we cannot recommend CM in a paediatric diabetic outpatient setting because the specificity is far too low.
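
The reported diagnostic metrics follow from an ordinary 2x2 screening table. The sketch below recomputes sensitivity, specificity, PPV and NPV with Wilson confidence intervals; the cell counts are an illustrative reconstruction consistent with the reported values and the 156 compared samples, not the study's actual table.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical 2x2 table (counts chosen to reproduce the reported metrics
# and the total of 156 directly compared samples; not the study's raw data)
tp, fn, fp, tn = 8, 1, 40, 107

for name, k, n in [("sensitivity", tp, tp + fn),
                   ("specificity", tn, tn + fp),
                   ("PPV", tp, tp + fp),
                   ("NPV", tn, tn + fn)]:
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k / n:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```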