967 results for probability distributions


Relevance:

60.00%

Publisher:

Abstract:

We review some advances in the theory of homogeneous, isotropic turbulence. Our emphasis is on the new insights that have been gained from recent numerical studies of the three-dimensional Navier-Stokes equation and simpler shell models for turbulence. In particular, we examine the status of multiscaling corrections to Kolmogorov scaling, extended self-similarity, generalized extended self-similarity, and non-Gaussian probability distributions for velocity differences and related quantities. We recount our recent proposal of a wave-vector-space version of generalized extended self-similarity and show how it allows us to explore an intriguing and apparently universal crossover from inertial- to dissipation-range asymptotics.
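
As a concrete illustration of the extended self-similarity (ESS) procedure mentioned above, the sketch below computes structure functions S_p(r) = <|du(r)|^p> for a synthetic one-dimensional velocity record and regresses log S_p against log S_3 to obtain relative scaling exponents. The signal and separations are placeholders, not the paper's data; for a Brownian-like record the relative exponents come out near p/3.

    # ESS sketch: regress log S_p against log S_3 instead of against log r.
    import numpy as np

    rng = np.random.default_rng(0)
    u = np.cumsum(rng.standard_normal(2**16))   # stand-in "velocity" record

    def structure_function(u, p, separations):
        return np.array([np.mean(np.abs(u[r:] - u[:-r])**p) for r in separations])

    seps = np.unique(np.logspace(0, 3, 25).astype(int))
    S3 = structure_function(u, 3, seps)
    for p in (2, 4, 6):
        Sp = structure_function(u, p, seps)
        slope, _ = np.polyfit(np.log(S3), np.log(Sp), 1)  # zeta_p / zeta_3
        print(f"p={p}: zeta_p/zeta_3 ~ {slope:.3f}")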

Relevance:

60.00%

Publisher:

Abstract:

In a detailed model for reservoir irrigation taking into account the soil moisture dynamics in the root zone of the crops, the data set for reservoir inflow and rainfall in the command area will usually be of sufficient length to enable their variations to be described by probability distributions. However, the potential evapotranspiration of the crop itself depends on the characteristics of the crop and the reference evaporation, the quantification of both being associated with a high degree of uncertainty. The main purpose of this paper is to propose a mathematical programming model to determine the annual relative yield of crops and to determine its reliability, for a single reservoir meant for irrigation of multiple crops, incorporating variations in inflow, rainfall in the command area, and crop consumptive use. The inflow to the reservoir and rainfall in the reservoir command area are treated as random variables, whereas potential evapotranspiration is modeled as a fuzzy set. The model's application is illustrated with reference to an existing single-reservoir system in Southern India.
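
To make the fuzzy treatment of potential evapotranspiration (PET) concrete, here is a minimal sketch using a triangular membership function and its alpha-cuts, a standard way for a fuzzy quantity to enter a mathematical programming model. The support values are invented for illustration and are not from the paper.

    # Triangular fuzzy number for PET (mm/day); all values are hypothetical.
    def triangular_membership(x, low, mode, high):
        """Degree to which a PET value x belongs to the fuzzy set."""
        if x <= low or x >= high:
            return 0.0
        if x <= mode:
            return (x - low) / (mode - low)
        return (high - x) / (high - mode)

    def alpha_cut(alpha, low, mode, high):
        """Interval of PET values with membership >= alpha, as used in fuzzy LP."""
        return (low + alpha * (mode - low), high - alpha * (high - mode))

    print(triangular_membership(5.0, 3.0, 5.5, 8.0))   # ~0.8
    print(alpha_cut(0.8, 3.0, 5.5, 8.0))               # (5.0, 6.0)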

Relevance:

60.00%

Publisher:

Abstract:

Denial-of-service (DoS) attacks form a very important category of security threats that are prevalent in MIPv6 (Mobile Internet Protocol version 6) today. Many schemes have been proposed to alleviate such threats, including one of our own [9]. However, reasoning about the correctness of such protocols is not trivial. In addition, new solutions to mitigate attacks may need to be deployed in the network on a frequent basis, as and when attacks are detected, since it is practically impossible to anticipate all attacks and provide solutions in advance. This makes it necessary to validate the solutions in a timely manner before deployment in the real network. However, the threshold schemes needed in group protocols make analysis complex. Model checking of threshold-based group protocols that employ cryptography has not been successful so far. Here, we propose a new simulation-based approach for validation using a tool called FRAMOGR that supports executable specification of group protocols that use cryptography. FRAMOGR allows one to specify attackers and track probability distributions of values or paths. We believe that infrastructure such as FRAMOGR will be required in the future for validating new group-based threshold protocols that may be needed to make MIPv6 more robust.

Relevance:

60.00%

Publisher:

Abstract:

Using all-atom molecular dynamics simulations, we report spontaneous unzipping and strong binding of small interfering RNA (siRNA) on graphene. Our dispersion-corrected density functional theory based calculations suggest that nucleosides of RNA have stronger attractive interactions with graphene than DNA residues do. These stronger interactions force the double-stranded siRNA to spontaneously unzip and bind to the graphene surface. Unzipping always nucleates at one end of the siRNA and propagates to the other end after a few base pairs get unzipped. While both ends get unzipped, the middle part remains double-stranded because of the torsional constraint. Unzipping probability distributions fitted to a single-exponential function give an unzipping time (tau) of the order of a few nanoseconds, which decreases exponentially with temperature. From the temperature variation of the unzipping time we estimate the energy barrier to unzipping. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4742189]
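
The two fits described above can be sketched as follows: a single-exponential fit to the distribution of unzipping times to extract tau, and an Arrhenius fit of tau versus temperature to estimate the energy barrier. All numbers below are synthetic placeholders, not the simulation data.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import expon

    kB = 0.0019872  # Boltzmann constant, kcal/(mol K)

    rng = np.random.default_rng(1)
    times_ns = rng.exponential(scale=4.0, size=500)   # fake unzipping times
    tau = expon.fit(times_ns, floc=0)[1]              # MLE of the exponential scale
    print(f"tau ~ {tau:.2f} ns")

    # Arrhenius form tau(T) = tau0 * exp(Eb / (kB T)); made-up (T, tau) pairs.
    T = np.array([300.0, 320.0, 340.0, 360.0])
    taus = np.array([6.1, 3.4, 2.1, 1.4])
    def arrhenius(T, tau0, Eb):
        return tau0 * np.exp(Eb / (kB * T))
    (tau0, Eb), _ = curve_fit(arrhenius, T, taus, p0=(1e-3, 5.0))
    print(f"energy barrier ~ {Eb:.2f} kcal/mol")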

Relevance:

60.00%

Publisher:

Abstract:

Our work is motivated by geographical forwarding of sporadic alarm packets to a base station in a wireless sensor network (WSN), where the nodes are sleep-wake cycling periodically and asynchronously. We seek to develop local forwarding algorithms that can be tuned so as to trade off the end-to-end delay against a total cost, such as the hop count or total energy. Our approach is to solve, at each forwarding node en route to the sink, the local forwarding problem of minimizing the one-hop waiting delay subject to a lower-bound constraint on a suitable reward offered by the next-hop relay; the constraint serves to tune the tradeoff. The reward metric used for the local problem is based on the end-to-end total cost objective (for instance, when the total cost is hop count, we choose to use the progress toward the sink made by a relay as the reward). The forwarding node, to begin with, is uncertain about the number of relays, their wake-up times, and the reward values, but knows the probability distributions of these quantities. At each relay wake-up instant, when a relay reveals its reward value, the forwarding node's problem is to forward the packet or to wait for further relays to wake up. In terms of the operations research literature, our work can be considered a variant of the asset selling problem. We formulate our local forwarding problem as a partially observable Markov decision process (POMDP) and obtain inner and outer bounds for the optimal policy. Motivated by the computational complexity involved in the policies derived from these bounds, we formulate an alternate simplified model, the optimal policy for which is a simple threshold rule. We provide simulation results to compare the performance of the inner and outer bound policies against the simple policy, and also against the optimal policy when the source knows the exact number of relays. Observing the good performance and the ease of implementation of the simple policy, we apply it to our motivating problem, i.e., local geographical routing of sporadic alarm packets in a large WSN. We compare the end-to-end performance (i.e., average total delay and average total cost) obtained by the simple policy, when used for local geographical forwarding, against that obtained by the globally optimal forwarding algorithm proposed by Kim et al. [1].
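
The flavor of the simple threshold rule can be conveyed with a small simulation: at each relay wake-up the revealed reward is compared against a fixed threshold, and the packet is forwarded at the first success. The wake-up and reward distributions below are illustrative assumptions, not the paper's model, and the end-of-horizon fallback is a simplification.

    import numpy as np

    rng = np.random.default_rng(2)

    def forward_with_threshold(threshold, n_relays=5, horizon=1.0):
        wake_times = np.sort(rng.uniform(0, horizon, n_relays))
        rewards = rng.uniform(0, 1, n_relays)
        for t, r in zip(wake_times, rewards):
            if r >= threshold:
                return t, r            # delay incurred, reward obtained
        return horizon, rewards[-1]    # deadline reached: settle for the last relay

    delays, rewards = zip(*(forward_with_threshold(0.7) for _ in range(10000)))
    print(f"mean delay {np.mean(delays):.3f}, mean reward {np.mean(rewards):.3f}")

Raising the threshold increases the mean reward obtained at the cost of a longer mean wait, which is exactly the delay-versus-cost tradeoff the constraint tunes.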

Relevance:

60.00%

Publisher:

Abstract:

The distributed, low-feedback, timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases in which the probability distribution of the number of nodes is either known a priori or is unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
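
A small simulation conveys how a metric-to-timer mapping and the Δ-collision rule interact. The linear inverse mapping below is only a placeholder, not the optimal or robust mapping derived in the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    T_max, delta = 1.0, 0.05

    def trial(n_nodes):
        metrics = rng.uniform(0, 1, n_nodes)
        timers = T_max * (1.0 - metrics)      # higher metric -> earlier expiry
        order = np.argsort(timers)
        best_first = metrics[order[0]] == metrics.max()
        no_collision = timers[order[1]] - timers[order[0]] > delta
        return best_first and no_collision

    successes = sum(trial(10) for _ in range(10000))
    print(f"success probability ~ {successes / 10000:.3f}")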

Relevance:

60.00%

Publisher:

Abstract:

We consider the basic bidirectional relaying problem, in which two users in a wireless network wish to exchange messages through an intermediate relay node. In the compute-and-forward strategy, the relay computes a function of the two messages using the naturally occurring sum of symbols simultaneously transmitted by user nodes in a Gaussian multiple-access channel (MAC), and the computed function value is forwarded to the user nodes in an ensuing broadcast phase. In this paper, we study the problem under an additional security constraint, which requires that each user's message be kept secure from the relay. We consider two types of security constraints: 1) perfect secrecy, in which the MAC channel output seen by the relay is independent of each user's message and 2) strong secrecy, which is a form of asymptotic independence. We propose a coding scheme based on nested lattices, the main feature of which is that given a pair of nested lattices that satisfy certain goodness properties, we can explicitly specify probability distributions for randomization at the encoders to achieve the desired security criteria. In particular, our coding scheme guarantees perfect or strong secrecy even in the absence of channel noise. The noise in the channel only affects reliability of computation at the relay, and for Gaussian noise, we derive achievable rates for reliable and secure computation. We also present an application of our methods to the multihop line network in which a source needs to transmit messages to a destination through a series of intermediate relays.
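
The role of randomization at the encoders can be illustrated with a noiseless finite-field analogue: if each user adds an independent uniform dither to its message modulo a prime, the sum observed by the relay is uniform, hence independent of either message, yet still carries the function to be forwarded. This toy replaces nested lattices with integers mod p and, for brevity, assumes user 1 knows user 2's dither; it illustrates only the secrecy mechanism, not the paper's coding scheme.

    import random

    random.seed(4)
    p = 257

    def encode(msg):
        d = random.randrange(p)       # uniform dither (the randomization)
        return (msg + d) % p, d

    m1, m2 = 42, 99
    x1, d1 = encode(m1)
    x2, d2 = encode(m2)
    relay_sees = (x1 + x2) % p        # uniform mod p, independent of m1 and m2
    broadcast = relay_sees            # relay forwards the computed sum

    # User 1 strips its own contribution (and, by assumption, user 2's dither).
    recovered_m2 = (broadcast - m1 - d1 - d2) % p
    assert recovered_m2 == m2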

Relevance:

60.00%

Publisher:

Abstract:

There is a need to use probability distributions with power-law decaying tails to describe the large variations exhibited by some physical phenomena. The Weierstrass Random Walk (WRW) shows promise for modeling such phenomena. The theory of anomalous diffusion is now well established and has found a number of applications in physics, chemistry, and biology. However, its applications are limited in structural mechanics in general, and in structural engineering in particular. The aim of this paper is to present some mathematical preliminaries related to the WRW that would help in possible applications. In the limiting case, the WRW represents a diffusion process whose evolution is governed by a fractional partial differential equation. Three applications of superdiffusion processes in mechanics, illustrating their effectiveness in handling large variations, are presented.
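
For concreteness, steps of a Weierstrass random walk can be sampled as below: the step magnitude is b**j with a geometrically decaying probability over the scale index j, which produces the power-law tails (tail exponent mu = ln a / ln b) that motivate its use for large variations. The parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)
    a, b = 2.0, 3.0               # scale-probability and step-length bases

    def wrw_steps(n):
        j = rng.geometric(1.0 - 1.0 / a, size=n) - 1   # P(j) = (1 - 1/a) * a**-j
        signs = rng.choice([-1.0, 1.0], size=n)
        return signs * b**j

    walk = np.cumsum(wrw_steps(10000))
    print(f"tail exponent mu = {np.log(a) / np.log(b):.3f}")
    print(f"largest single step: {np.max(np.abs(np.diff(walk))):.1f}")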

Relevance:

60.00%

Publisher:

Abstract:

Sand velocity in aeolian sand transport was measured using the laser Doppler technique of PDPA (Phase Doppler Particle Analyzer) in a wind tunnel. The sand velocity profile, probability distribution of particle velocity, particle velocity fluctuation, and particle turbulence were analyzed in detail. The experimental results verified that the sand horizontal velocity profile can be expressed by a logarithmic function above 0.01 m, while a deviation occurs below 0.01 m. The mean vertical velocity of grains generally ranges from -0.2 m/s to 0.2 m/s, and is downward at lower heights and upward at higher heights. The probability distributions of the horizontal velocity of ascending and descending particles have a typical peak and are right-skewed at a height of 4 mm in the lower part of the saltation layer. The vertical profile of the horizontal RMS velocity fluctuation of particles shows a single peak. The horizontal RMS velocity fluctuation of sand particles is generally larger than the vertical RMS velocity fluctuation. The RMS velocity fluctuations of grains in both the horizontal and vertical directions increase with wind velocity. The particle turbulence intensity decreases with height. The present investigation is helpful in understanding the sand movement mechanism in windblown sand transport and also provides a reference for the study of blowing sand velocity. (C) 2007 Elsevier B.V. All rights reserved.
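
The logarithmic profile reported above can be fit with a short script: horizontal sand velocity u(z) = A ln z + B is regressed only over heights above 0.01 m, where the abstract says the log law holds. The height-velocity pairs below are invented for illustration.

    import numpy as np

    z = np.array([0.005, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32])  # height, m
    u = np.array([1.9, 2.2, 2.6, 3.0, 3.4, 3.8, 4.2])          # m/s (made up)

    mask = z > 0.01                      # log law valid only above 0.01 m
    A, B = np.polyfit(np.log(z[mask]), u[mask], 1)
    print(f"u(z) ~ {A:.2f} ln z + {B:.2f} for z > 0.01 m")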

Relevance:

60.00%

Publisher:

Abstract:

Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity $\mathcal{O}(N)$ and the other with complexity $\mathcal{O}(N^{2})$, where $N$ is the number of importance sampling draws. Although cheaper, the performance of the $\mathcal{O}(N)$ method degrades quickly in time, as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension increases linearly with time. In particular, even under strong mixing assumptions, the variance of the estimates computed with the $\mathcal{O}(N)$ method increases at least quadratically in time. The $\mathcal{O}(N^{2})$ method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
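
The SMC machinery underlying both implementations can be made concrete with a minimal bootstrap particle filter on a linear-Gaussian state-space model; the paper's score and observed-information recursions build on a filter of this kind. Only the log-likelihood estimate is computed in this sketch, and the model and parameter values are placeholders.

    import numpy as np

    rng = np.random.default_rng(6)
    phi, sigma_v, sigma_w, N, T = 0.9, 1.0, 1.0, 1000, 100

    # Simulate data from x_t = phi * x_{t-1} + v_t,  y_t = x_t + w_t.
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t-1] + sigma_v * rng.standard_normal()
    y = x + sigma_w * rng.standard_normal(T)

    particles = rng.standard_normal(N)
    loglik = 0.0
    for t in range(T):
        particles = phi * particles + sigma_v * rng.standard_normal(N)  # propagate
        logw = -0.5 * ((y[t] - particles) / sigma_w) ** 2               # weight
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sigma_w**2)
        particles = particles[rng.choice(N, N, p=w / w.sum())]          # resample
    print(f"log-likelihood estimate: {loglik:.1f}")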

Relevance:

60.00%

Publisher:

Abstract:

In this paper we deal with probability distributions over permutation spaces. The probability model in use is the Mallows model, and the distance for permutations that the model uses is the Ulam distance.
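
The Ulam distance in question is d(sigma, pi) = n minus the length of the longest increasing subsequence (LIS) of sigma relabeled by pi's inverse; a short patience-sorting implementation follows.

    from bisect import bisect_left

    def ulam_distance(sigma, pi):
        inv = {v: i for i, v in enumerate(pi)}
        seq = [inv[v] for v in sigma]     # sigma expressed in pi's order
        tails = []                        # patience sorting for the LIS
        for v in seq:
            i = bisect_left(tails, v)
            if i == len(tails):
                tails.append(v)
            else:
                tails[i] = v
        return len(seq) - len(tails)      # n - LIS

    print(ulam_distance([2, 0, 1, 3], [0, 1, 2, 3]))   # -> 1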

Relevance:

60.00%

Publisher:

Abstract:

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several learning algorithms have been proposed to learn probability distributions based on decomposable models, due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of implied random variables. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one with higher likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
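
For reference, the k = 2 base case that fractal trees extend is Chow and Liu's algorithm: estimate pairwise mutual information from data, then take a maximum-weight spanning tree over the variables. The sketch below uses binary data and is illustrative only.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def mutual_information(a, b):
        mi = 0.0
        for va in (0, 1):
            for vb in (0, 1):
                pab = np.mean((a == va) & (b == vb))
                pa, pb = np.mean(a == va), np.mean(b == vb)
                if pab > 0:
                    mi += pab * np.log(pab / (pa * pb))
        return mi

    rng = np.random.default_rng(7)
    X = rng.integers(0, 2, size=(500, 4))
    X[:, 1] = X[:, 0] ^ (rng.random(500) < 0.1)   # make variable 1 depend on 0

    n_vars = X.shape[1]
    W = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            W[i, j] = mutual_information(X[:, i], X[:, j])

    tree = minimum_spanning_tree(-W)        # max-weight tree via negated weights
    print(np.argwhere(tree.toarray() < 0))  # edges of the Chow-Liu tree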

Relevance:

60.00%

Publisher:

Abstract:

In this thesis we uncover a new relation which links thermodynamics and information theory. We consider time as a channel and the detailed state of a physical system as a message. As the system evolves with time, ever-present noise ensures that the "message" is corrupted. Thermodynamic free energy measures the approach of the system toward equilibrium. Information-theoretic mutual information measures the loss of memory of the initial state. We regard the free energy and the mutual information as operators which map probability distributions over state space to real numbers. In the limit of long times, we show how the free energy operator and the mutual information operator asymptotically attain a very simple relationship to one another. This relationship is founded on the common appearance of entropy in the two operators and on an identity between internal energy and conditional entropy. The use of conditional entropy is what distinguishes our approach from previous efforts to relate thermodynamics and information theory.
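
Two standard identities (not the thesis's own derivation) make the shared role of entropy concrete: the nonequilibrium free energy of the state distribution exceeds its equilibrium value by the relative entropy to the equilibrium distribution, and the mutual information between initial and current state is an entropy difference built from the conditional entropy.

    F[p_t] - F_{\mathrm{eq}} = k_B T \, D(p_t \,\|\, p_{\mathrm{eq}}),
    \qquad
    I(X_0; X_t) = S(X_t) - S(X_t \mid X_0)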

Relevance:

60.00%

Publisher:

Abstract:

Credible source models from large-magnitude past earthquakes are sparse. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, are used as the PGV and PGD data.

Relevance:

60.00%

Publisher:

Abstract:

The time series of abundance indices for many groundfish populations, as determined from trawl surveys, are often imprecise and short, causing stock assessment estimates of abundance to be imprecise. To improve precision, prior probability distributions (priors) have been developed for parameters in stock assessment models by using meta-analysis, expert judgment on catchability, and empirically based modeling. This article presents a synthetic approach for formulating priors for rockfish trawl survey catchability (qgross). A multivariate prior for qgross for different surveys is formulated by using 1) a correction factor for bias in estimating fish density between trawlable and untrawlable areas, 2) expert judgment on trawl net catchability, 3) observations from trawl survey experiments, and 4) data on the fraction of population biomass in each of the areas surveyed. The method is illustrated by using bocaccio (Sebastes paucispinis) in British Columbia. Results indicate that expert judgment can be updated markedly by observing the catch-rate ratio from different trawl gears in the same areas. The marginal priors for qgross are consistent with empirical estimates obtained by fitting a stock assessment model to the survey data under a noninformative prior for qgross. Despite high prior uncertainty (prior coefficients of variation ≥ 0.8) and high prior correlation between the qgross values, the prior for qgross still enhances the precision of key stock assessment quantities.