32 results for JOINT DISTRIBUTION

at Indian Institute of Science - Bangalore - India


Relevance:

60.00%

Publisher:

Abstract:

The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second, this is governed by a differential equation with an underlying parameter sequence characterized by a continuous time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature that assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state system behavior of the random dynamical system while our focus is on analyzing the time-dependent statistical characteristics which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities with regular sample average estimators being a specific instance of these. We also present an application of the proposed scheme on a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics as obtained using our algorithm in each case exhibit excellent agreement with exact results. (C) 2010 Elsevier Inc. All rights reserved.
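
As a rough sketch of the sample-average special case mentioned in this abstract, the Python snippet below runs a running-average (step size 1/n) estimator of the mean and variance of X(t) at a fixed time, using only black-box samples of the process so that the parameter law stays hidden from the estimator. The random ODE dX/dt = -theta*X and the uniform law for theta are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path_value(t, x0=1.0):
    """Return X(t) for the toy random ODE dX/dt = -theta * X, X(0) = x0.
    theta is drawn internally, so the estimator never sees the parameter law."""
    theta = rng.uniform(0.5, 1.5)          # hypothetical parameter distribution
    return x0 * np.exp(-theta * t)

def sa_estimate(t, n_iter=20000):
    """Running (Robbins-Monro style, step 1/n) estimates of mean and variance of X(t)."""
    mean, var = 0.0, 0.0
    for n in range(1, n_iter + 1):
        x = sample_path_value(t)
        step = 1.0 / n                     # diminishing step size
        delta = x - mean
        mean += step * delta
        var += step * (delta * (x - mean) - var)
    return mean, var

m, v = sa_estimate(t=1.0)
print(f"estimated mean {m:.4f}, variance {v:.4f}")
```

For this toy law, the exact mean at t = 1 is exp(-0.5) - exp(-1.5), approximately 0.383, which gives a direct check on the printed estimate.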

Relevance:

60.00%

Publisher:

Abstract:

Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Γ, in which the (m × s) matrix Γ has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Korner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum-rate of the nested-code approach is no larger than that under the Slepian-Wolf (SW) approach, in which W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested-code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
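
As intuition for why encoding towards the function can beat recovering all the sources, the snippet below works through the classical two-source binary special case (m = 2, q = 2, W spanned by the modulo-2 sum X_1 + X_2): the Korner-Marton sum rate 2H(X_1 + X_2) against the Slepian-Wolf sum rate H(X_1, X_2). This standard example is only illustrative; it is not the subspace-chain or nested-code construction of the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Doubly symmetric binary source: X1 ~ Bernoulli(1/2), X2 = X1 XOR N with N ~ Bernoulli(p).
# The receiver only wants Z = X1 XOR X2 = N.
for p in (0.05, 0.11, 0.25):
    sw_sum_rate = 1.0 + h2(p)   # Slepian-Wolf: H(X1, X2), recover both sources first
    km_sum_rate = 2.0 * h2(p)   # Korner-Marton: both encoders use the same linear code
    print(f"p = {p:.2f}: Slepian-Wolf {sw_sum_rate:.3f} bits, Korner-Marton {km_sum_rate:.3f} bits")
```

Whenever h(p) < 1, computing the sum directly needs strictly less total rate than recovering both sources.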

Relevance:

60.00%

Publisher:

Abstract:

This study borrows the measures developed for the operation of water resources systems as a means of characterizing droughts in a given region. It is argued that the common approach of assessing drought using a univariate measure (severity or reliability) is inadequate, as decision makers need assessment of the other facets considered here. It is proposed that the joint distribution of reliability, resilience, and vulnerability (referred to as RRV in a reservoir operation context), assessed using soil moisture data over the study region, be used to characterize droughts. Use is made of copulas to quantify the joint distribution between these variables. As reliability and resilience vary in a nonlinear but almost deterministic way, the joint probability distribution of only resilience and vulnerability is modeled. Recognizing the negative association between the two variables, a Plackett copula is used to formulate the joint distribution. The developed drought index, referred to as the drought management index (DMI), is able to differentiate the drought proneness of a given area when compared to other areas. An assessment of the sensitivity of the DMI to the length of the data segments used in evaluation indicates that relative stability is achieved if the data segments are 5 years or longer. The proposed approach is illustrated with reference to the Malaprabha River basin in India, using four adjoining Climate Prediction Center grid cells of soil moisture data that cover an area of approximately 12,000 km². (C) 2013 American Society of Civil Engineers.
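
For reference, the Plackett copula named above has a simple closed-form CDF; a minimal sketch is given below. The dependence parameter and the evaluation point are purely illustrative; in the study the parameter would be fitted to the resilience and vulnerability pseudo-observations.

```python
import numpy as np

def plackett_cdf(u, v, theta):
    """Plackett copula C(u, v; theta): theta < 1 models negative dependence,
    theta = 1 independence, theta > 1 positive dependence."""
    if np.isclose(theta, 1.0):
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    return (s - np.sqrt(s * s - 4.0 * u * v * theta * (theta - 1.0))) / (2.0 * (theta - 1.0))

# Joint non-exceedance probability of resilience and vulnerability pseudo-observations,
# with an illustrative theta < 1 reflecting their negative association.
u, v, theta = 0.7, 0.4, 0.3
print(plackett_cdf(u, v, theta))       # below u*v = 0.28, as expected for negative dependence
print(plackett_cdf(u, 1.0, theta))     # boundary check: C(u, 1) recovers the marginal u
```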

Relevance:

60.00%

Publisher:

Abstract:

Gene expression in living systems is inherently stochastic, and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t → ∞. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency (and the noise level) is high.
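
For readers who want simulation data on the same two-stage process to compare against, a minimal Gillespie-type simulation of the standard two-stage (mRNA to protein) kinetic model is sketched below; the rate constants are illustrative placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_stage_ssa(k_r=1.0, g_r=1.0, k_p=10.0, g_p=0.1, t_end=100.0):
    """Gillespie simulation of the two-stage model; returns the protein count at t_end.
    Reactions: mRNA birth (k_r), mRNA decay (g_r*r), translation (k_p*r), protein decay (g_p*p)."""
    t, r, p = 0.0, 0, 0
    while True:
        rates = np.array([k_r, g_r * r, k_p * r, g_p * p])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        if t + dt > t_end:
            return p                      # state at t_end is the current state
        t += dt
        event = rng.choice(4, p=rates / total)
        if event == 0:
            r += 1
        elif event == 1:
            r -= 1
        elif event == 2:
            p += 1
        else:
            p -= 1

samples = [two_stage_ssa() for _ in range(200)]
print(np.mean(samples), np.var(samples))  # mean should be near (k_r/g_r)*(k_p/g_p) = 100
```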

Relevance:

60.00%

Publisher:

Abstract:

The recent focus of flood frequency analysis (FFA) studies has been on the development of methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven, flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
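
As a point of reference for the comparison described above, the snippet below fits the conventional fixed-bandwidth Gaussian kernel (one of the baselines the D-kernel is judged against) to a synthetic bivariate sample standing in for (peak flow, volume) pairs. The data-generating model is invented for illustration, and the D-kernel itself is not implemented here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic stand-in for (peak flow, flood volume) pairs with positive dependence.
n = 200
peak = rng.gamma(shape=3.0, scale=100.0, size=n)
volume = 5.0 * peak + rng.normal(0.0, 100.0, size=n)

# Conventional fixed-bandwidth bivariate Gaussian KDE of the joint density
# (one baseline the adaptive D-kernel is compared against).
kde = gaussian_kde(np.vstack([peak, volume]))

# Joint density evaluated at a query point (peak, volume).
print(kde([[300.0], [1500.0]]))
```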

Relevance:

60.00%

Publisher:

Abstract:

We consider information theoretic secret key (SK) agreement and secure function computation by multiple parties observing correlated data, with access to an interactive public communication channel. Our main result is an upper bound on the SK length, which is derived using a reduction of binary hypothesis testing to multiparty SK agreement. Building on this basic result, we derive new converses for multiparty SK agreement. Furthermore, we derive converse results for the oblivious transfer problem and the bit commitment problem by relating them to SK agreement. Finally, we derive a necessary condition for the feasibility of secure computation by trusted parties that seek to compute a function of their collective data, using an interactive public communication that by itself does not give away the value of the function. In many cases, we strengthen and improve upon previously known converse bounds. Our results are single-shot and use only the given joint distribution of the correlated observations. For the case when the correlated observations consist of independent and identically distributed (in time) sequences, we derive strong versions of previously known converses.

Relevance:

60.00%

Publisher:

Abstract:

We consider the nonabelian sandpile model defined on directed trees by Ayyer et al. (2015 Commun. Math. Phys. 335 1065), and restrict it to the special case of a one-dimensional lattice of n sites with open boundaries and disordered hopping rates. We focus on the joint distribution of the integrated currents across each bond simultaneously, and calculate its cumulant generating function exactly. Surprisingly, the process conditioned on seeing specified currents across each bond turns out to be a renormalised version of the same process. We also remark on a duality property of the large deviation function. Lastly, all eigenvalues and both Perron eigenvectors of the tilted generator are determined.

Relevance:

30.00%

Publisher:

Abstract:

Soil-cement blocks are employed for load-bearing masonry buildings. This paper studies the influence of bed joint thickness and of the elastic properties of the soil-cement blocks and the mortar on the strength and behavior of soil-cement block masonry prisms. The influence of joint thickness on compressive strength has been examined through an experimental program. The nature of the stresses developed, and their distribution in the block and the mortar of the soil-cement block masonry prism under compression, has been analyzed by an elastic analysis using FEM. The influence of various parameters such as joint thickness, the ratio of block to mortar modulus, and the Poisson's ratios of the block and the mortar is considered in the FEM analysis. Some of the major conclusions of the study are: (1) masonry compressive strength is sensitive to the ratio of the block modulus to the mortar modulus (Eb/Em), and decreases as the mortar joint thickness is increased when this ratio is greater than 1; (2) the lateral tensile stresses developed in the masonry unit are sensitive to the Eb/Em ratio and to the Poisson's ratios of the mortar and the masonry unit; and (3) the lateral stresses developed in the masonry unit are more sensitive to the Poisson's ratio of the mortar than to that of the masonry unit.

Relevance:

30.00%

Publisher:

Abstract:

A sample of 96 compact flat-spectrum extragalactic sources, spread evenly over all galactic latitudes, has been studied at 327 MHz for variability over a time interval of about 15 yr. The variability shows a dependence on galactic latitude, being lower at both low and high latitudes and peaking around |b| ≈ 15°. The latitude dependence is surprisingly similar in both the galactic centre and anticentre directions. Assuming various single- and multi-component distributions for the ionized, irregular interstellar plasma, we have tried to reproduce the observed dependence using a semi-qualitative treatment of refractive interstellar scintillations. We find that it is difficult to fit our data with any single or double component cylindrical distribution. Our data suggest that the observed variability could be influenced by the spiral structure of our Galaxy.

Relevance:

30.00%

Publisher:

Abstract:

Noting that practical impinging injectors are likely to have skewness, an experimental study has been made to understand the behavior of such jets using water as the simulant. In perfectly impinging jets, a high-aspect-ratio, ellipse-like mass distribution pattern is obtained with its major axis normal to the plane of the two jets, whereas in skewed jets the major axis turns away from this position. A simple analysis shows that this angle of turn is a function of the skewness fraction and the impingement angle only, and is independent of the injection velocity. Experimental data from both the mass distribution measurements and the photographic technique validate this prediction.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a unified framework using the unit cube for the measurement, representation and usage of the range of motion (ROM) of body joints with multiple degrees of freedom (d.o.f.), to be used for digital human models (DHM). Traditional goniometry needs skill and knowledge; it is intrusive and has limited applicability for multi-d.o.f. joints. Measurements using motion capture systems often involve complicated mathematics which itself needs validation. In this paper we use change of orientation as the measure of rotation; this definition does not require the identification of any fixed axis of rotation. A two-d.o.f. joint ROM can be represented as a Gaussian map. The spherical polygon representation of ROM, though popular, remains inaccurate, vulnerable to singularities on the parametric sphere, and difficult to use for point classification. The unit cube representation overcomes these difficulties. In the work presented here, electromagnetic trackers have been effectively used for measuring the relative orientation of a body segment of interest with respect to another body segment. The orientation is then mapped onto a surface-gridded cube. As the body segment is moved, the grid cells visited are identified and visualized. Using the visual display as feedback, the subject is instructed to cover as many grid cells as possible. In this way we obtain a connected patch of contiguous grid cells. The boundary of this patch represents the active ROM of the joint concerned. The tracker data are converted into the motion of a direction aligned with the axis of the segment and, subsequently, a rotation about this axis. The direction identifies the grid cells on the cube, and the rotation about the axis is represented as a range and visualized using color codes. Thus the present methodology provides a simple, intuitive and accurate determination and representation of joints with up to 3 d.o.f. Basic results are presented for the shoulder. The measurement scheme to be used for the wrist and neck, and an approach for estimating the statistical distribution of ROM for a given population, are also discussed.
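
A minimal sketch of the orientation-to-grid-cell mapping is given below, assuming an n x n grid per face and the usual dominant-axis (cube-map) projection; the grid resolution and face numbering are illustrative choices, not necessarily those adopted in the paper.

```python
import numpy as np

def direction_to_cube_cell(d, n=16):
    """Map a 3D direction onto an n x n surface-gridded unit cube.
    Returns (face, i, j); faces 0..5 correspond to +x, -x, +y, -y, +z, -z."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    axis = int(np.argmax(np.abs(d)))          # dominant axis selects the face
    face = 2 * axis + (0 if d[axis] > 0 else 1)
    # Project onto the selected face; the remaining two coordinates lie in [-1, 1].
    others = [k for k in range(3) if k != axis]
    u = d[others[0]] / abs(d[axis])
    v = d[others[1]] / abs(d[axis])
    i = min(int((u + 1.0) / 2.0 * n), n - 1)  # grid index along the first face axis
    j = min(int((v + 1.0) / 2.0 * n), n - 1)  # grid index along the second face axis
    return face, i, j

# A segment axis pointing mostly along +z with a slight tilt.
print(direction_to_cube_cell([0.2, -0.1, 0.97]))
```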

Relevance:

30.00%

Publisher:

Abstract:

A sequence of moments obtained from statistical trials encodes a classical probability distribution. However, it is well known that an incompatible set of moments arises in the quantum scenario, when correlation outcomes associated with measurements on spatially separated entangled states are considered. This feature, viz., the incompatibility of moments with a joint probability distribution, is reflected in the violation of Bell inequalities. Here, we focus on sequential measurements on a single quantum system and investigate if moments and joint probabilities are compatible with each other. By considering sequential measurement of a dichotomic dynamical observable at three different time intervals, we explicitly demonstrate that the moments and the probabilities are inconsistent with each other. Experimental results using a nuclear magnetic resonance system are reported here to corroborate these theoretical observations, viz., the incompatibility of the three-time joint probabilities with those extracted from the moment sequence when sequential measurements on a single-qubit system are considered.
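
The consistency check at the heart of this question can be made concrete: for three dichotomic (±1) observables, a putative joint distribution is determined linearly by the moment sequence, and a negative reconstructed 'probability' certifies that the moments admit no joint distribution. The moments in the sketch below are illustrative numbers chosen to trigger such negativity; they are not the measured NMR values.

```python
import itertools

def joint_from_moments(m1, m2, m3, m12, m13, m23, m123):
    """Candidate joint probabilities for three dichotomic (+/-1) observables,
    reconstructed from the moment sequence; a negative entry certifies that
    no legitimate joint distribution reproduces these moments."""
    probs = {}
    for q1, q2, q3 in itertools.product([+1, -1], repeat=3):
        probs[(q1, q2, q3)] = (1 + q1 * m1 + q2 * m2 + q3 * m3
                               + q1 * q2 * m12 + q1 * q3 * m13 + q2 * q3 * m23
                               + q1 * q2 * q3 * m123) / 8.0
    return probs

# Illustrative moments: zero means with strong pairwise anticorrelations.
probs = joint_from_moments(0, 0, 0, -0.8, -0.8, -0.8, 0.0)
for outcome, p in sorted(probs.items()):
    print(outcome, round(p, 3))            # note P(+1, +1, +1) = -0.175 < 0
print("sum =", sum(probs.values()))        # still normalised to 1
```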

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this paper involves the stochastic finite element analysis of composite-epoxy adhesive lap joints using Monte Carlo simulation. A set of composite adhesive lap joints was prepared and loaded to failure to obtain their strength. The peel and shear strains in the bond line region at different levels of load were obtained using digital image correlation (DIC). The corresponding stresses were computed assuming a plane strain condition. The finite element model was verified by comparing the numerical and experimental stresses. The stresses exhibited similar behavior and a good correlation was obtained. Further, the finite element model was used to perform the stochastic analysis using Monte Carlo simulation. The parameters influencing the stress distribution were provided as random input variables and the resulting probabilistic variation of the maximum peel and shear stresses was studied. It was found that the adhesive modulus and bond line thickness had a significant influence on the maximum stress variation. While the adherend thickness had a major influence, the effect of variation in the longitudinal and shear moduli on the stresses was found to be small. (C) 2014 Elsevier Ltd. All rights reserved.
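
The Monte Carlo workflow described above can be sketched as follows, with a deliberately crude algebraic surrogate standing in for the validated finite element model and invented means and scatter for the random inputs; none of the numbers are taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def peak_peel_stress(e_adhesive, t_bond, t_adherend):
    """Hypothetical algebraic surrogate returning a peak peel stress (MPa); in the
    study this role is played by the validated finite element model, not a formula."""
    return 30.0 * np.sqrt(e_adhesive) / np.sqrt(t_bond * t_adherend)

# Random input variables for the Monte Carlo loop (illustrative means and scatter).
n = 10000
e_adhesive = rng.normal(3.0, 0.3, n)    # adhesive modulus, GPa
t_bond = rng.normal(0.2, 0.03, n)       # bond line thickness, mm
t_adherend = rng.normal(2.0, 0.1, n)    # adherend thickness, mm

stress = peak_peel_stress(e_adhesive, t_bond, t_adherend)
print(f"mean {stress.mean():.1f} MPa, std {stress.std():.1f} MPa, "
      f"95th percentile {np.percentile(stress, 95):.1f} MPa")
```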

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a new load distribution strategy called `send-and-receive' for scheduling divisible loads in a linear network of processors with communication delay. This strategy is designed to optimally utilize the network resources and thereby minimize the processing time of the entire processing load. Closed-form expressions for the optimal sizes of the load fractions and the processing time are derived for the cases in which the processing load originates at a processor located at the boundary or in the interior of the network. A condition on processor and link speeds is also derived to ensure that the processors are continuously engaged in load distribution. This paper also presents a parallel implementation of the `digital watermarking problem' on a personal computer-based Pentium Linear Network (PLN) topology. Experiments are carried out to study the performance of the proposed strategy, and the results are compared with those of other strategies found in the literature.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present an improved load distribution strategy, for arbitrarily divisible processing loads, to minimize the processing time in a distributed linear network of communicating processors through efficient utilization of their front-ends. Closed-form solutions are derived, with the processing load originating at the boundary and at the interior of the network, under some important conditions on the arrangement of processors and links in the network. Asymptotic analysis is carried out to explore the ultimate performance limits of such networks. Two important theorems are stated regarding the optimal load sequence and the optimal load origination point. A comparative study of this new strategy with an earlier strategy is also presented.
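
To make the underlying divisible-load timing argument concrete (all participating processors stop computing at the same instant), the sketch below solves the simplest two-processor case with the load originating at a boundary processor equipped with a front-end, so that communication overlaps its computation. The timing model and parameter values are generic divisible-load-theory placeholders, not the specific strategy or notation of these papers.

```python
def optimal_fraction(w1, w2, z):
    """Fraction of a unit divisible load kept by the originating processor P1
    (which has a front-end, so it computes while its front-end transmits) such
    that P1 and P2 finish at the same instant:
        a * w1 = (1 - a) * (z + w2)  =>  a = (z + w2) / (w1 + z + w2)."""
    return (z + w2) / (w1 + z + w2)

# w1, w2: time to process the whole load on P1, P2; z: time to ship the whole load.
w1, w2, z = 1.0, 2.0, 0.2                 # illustrative speed and link parameters
a = optimal_fraction(w1, w2, z)
print(f"alpha_1 = {a:.3f}, alpha_2 = {1 - a:.3f}")
print(f"finish times: P1 {a * w1:.3f}, P2 {(1 - a) * (z + w2):.3f}")  # equal by design
```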