317 results for Random parameters
Abstract:
An attempt has been made to quantify the variability in the seismic activity rate across the whole of India and adjoining areas (0–45°N and 60–105°E) using an earthquake database compiled from various sources. Both historical and instrumental data were compiled, and a complete catalog of Indian earthquakes up to 2010 has been prepared. Region-specific earthquake magnitude scaling relations correlating different magnitude scales were developed to produce a homogeneous earthquake catalog for the region in a unified moment magnitude scale. The dependent events (75.3%) in the raw catalog were removed, and the effect of aftershocks on the variation of the b value has been quantified. The study area was divided into 2,025 grid points (1° × 1°), and the spatial variation of seismicity across the region was analyzed considering all events within a 300 km radius of each grid point. A significant decrease in the seismic b value was seen when the declustered catalog was used, which indicates that a larger proportion of the dependent events in the earthquake catalog are associated with lower-magnitude events. A list of 203,448 earthquakes (including aftershocks and foreshocks) that occurred in the region during the period from 250 B.C. to 2010 A.D., with all available details, is available on the website http://www.civil.iisc.ernet.in/~sreevals/resource.htm.
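The b value above is the slope of the Gutenberg-Richter frequency-magnitude relation, log10 N(M) = a - bM. As a minimal illustrative sketch (not the authors' code), the standard Aki maximum-likelihood estimator for b can be computed as follows; the function name and the synthetic catalog are hypothetical:

```python
import numpy as np

def b_value_aki(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value estimate.

    magnitudes : catalog magnitudes (unified moment magnitude)
    mc         : completeness magnitude; smaller events are discarded
    dm         : magnitude bin width (half-bin correction for binned data)
    """
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    # b = log10(e) / (mean(M) - (Mc - dm/2)) for exponentially distributed M
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with b = 1, binned to 0.1 units
rng = np.random.default_rng(0)
mags = np.round(rng.exponential(scale=1 / np.log(10), size=50_000) + 4.0, 1)
print(b_value_aki(mags, mc=4.0))  # ~1.0; rerun on a declustered subset to compare
```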
Abstract:
Consider a J-component series system that is put on an Accelerated Life Test (ALT) involving K stress variables. First, a general formulation of ALT is provided for the log-location-scale family of distributions. A general stress translation function of the location parameter of the component log-lifetime distribution is proposed, which can accommodate standard ones like the Arrhenius, power-rule, and log-linear models as special cases. Later, the component lives are assumed to be independent Weibull random variables with a common shape parameter. A full Bayesian methodology is then developed by letting only the scale parameters of the Weibull component lives depend on the stress variables through the general stress translation function. Priors on all the parameters, namely the stress coefficients and the Weibull shape parameter, are assumed to be log-concave and independent of each other; this assumption facilitates Gibbs sampling from the joint posterior. The samples thus generated from the joint posterior are then used to obtain Bayesian point and interval estimates of the system reliability at the usage condition.
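As an illustrative sketch (not the paper's implementation): with independent Weibull component lives sharing a shape parameter beta and having usage-condition scale parameters eta_j, the series-system reliability is R(t) = exp(-sum_j (t/eta_j)^beta), and a Bayesian point estimate simply averages this quantity over posterior draws. All names and the stand-in "posterior" draws below are hypothetical:

```python
import numpy as np

def series_system_reliability(t, beta, etas):
    """R(t) = exp(-sum_j (t / eta_j)**beta) for a series system of
    independent Weibull components with a common shape parameter."""
    return np.exp(-np.sum((t / np.asarray(etas)) ** beta))

def posterior_reliability(t, beta_draws, eta_draws):
    """Posterior mean and 95% credible interval of system reliability."""
    samples = np.array([series_system_reliability(t, b, e)
                        for b, e in zip(beta_draws, eta_draws)])
    return samples.mean(), np.percentile(samples, [2.5, 97.5])

# Stand-in draws; in the paper these would come from the Gibbs sampler
rng = np.random.default_rng(1)
beta_draws = rng.normal(2.0, 0.1, size=2000)                  # common shape
eta_draws = rng.normal([1500.0, 2000.0, 1800.0], 50.0,        # J = 3 scales
                       size=(2000, 3))
print(posterior_reliability(1000.0, beta_draws, eta_draws))
```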
Abstract:
Our work is motivated by impromptu (or "as-you-go") deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where the data sink, e.g., the control center, is located) and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate of the source is very low and that scheduling is simple link-by-link, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance-threshold-based heuristic usually assumed in the literature is close to optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
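To make the total-cost formulation concrete, here is a heavily simplified value-iteration sketch (our construction, not the paper's algorithm): the path is taken as a straight line, the hop cost is assumed quadratic, and the state is the distance r (in steps) from the last placed relay; all parameter values are hypothetical:

```python
import numpy as np

p, xi, R_MAX = 0.1, 5.0, 60      # path-end probability, relay cost, state cap
d = lambda r: float(r) ** 2      # assumed convex hop cost

V = np.zeros(R_MAX + 1)
for _ in range(10_000):
    V_new = np.empty_like(V)
    for r in range(R_MAX + 1):
        r1 = min(r + 1, R_MAX)
        # place a relay here: pay xi + d(r), then the walk continues from r = 0
        place = xi + d(r) + p * d(1) + (1 - p) * V[1]
        # keep walking: the path may end one step ahead, closing with hop d(r+1)
        skip = p * d(r1) + (1 - p) * V[r1]
        V_new[r] = min(place, skip)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# The optimal policy is a placement boundary in r: place once "place" wins
threshold = next(r for r in range(R_MAX + 1)
                 if xi + d(r) + p * d(1) + (1 - p) * V[1]
                 <= p * d(min(r + 1, R_MAX)) + (1 - p) * V[min(r + 1, R_MAX)])
print("place the next relay once r >=", threshold)
```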
Abstract:
Since its introduction, the selective-identity (sID) model for identity-based cryptosystems and its relationship with various other notions of security have been extensively studied. As a result, it is a general consensus that the sID model is much weaker than the full-identity (ID) model. In this paper, we study the sID model for the particular case of identity-based signatures (IBS). The main focus is on the problem of constructing an ID-secure IBS given an sID-secure IBS without using random oracles (the so-called standard model) and with reasonable security degradation. We accomplish this by devising a generic construction which uses as black boxes: (i) a chameleon hash function and (ii) a weakly secure public-key signature scheme. We argue that the resulting IBS is ID-secure but with a tightness gap of O(q_s), where q_s is the upper bound on the number of signature queries that the adversary is allowed to make. To the best of our knowledge, this is the first attempt at such a generic construction.
Abstract:
A new representation of spatio-temporal random processes is proposed in this work. In practical applications, such processes are used to model velocity fields, temperature distributions, and the response of vibrating systems, to name a few. Finding an efficient representation for a random process leads to encapsulation of information, which makes it more convenient for practical implementation, for instance in a computational mechanics problem. For a single-parameter process, such as a spatial or temporal process, the eigenvalue decomposition of the covariance matrix leads to the well-known Karhunen-Loève (KL) decomposition. However, for multiparameter processes, such as a spatio-temporal process, the covariance function itself can be defined in multiple ways. Here the process is assumed to be measured at a finite set of spatial locations and a finite number of time instants. The spatial covariance matrices at the different time instants are then taken to define the covariance of the process. This set of square, symmetric, positive semi-definite matrices is represented as a third-order tensor. A suitable decomposition of this tensor can identify the dominant components of the process, and these components are then used to define a closed-form representation of the process. The procedure is analogous to the KL decomposition for a single-parameter process; however, the decompositions and interpretations vary significantly. The tensor decompositions are successfully applied to (i) a heat conduction problem, (ii) a vibration problem, and (iii) a covariance function taken from the literature that was fitted to measured wind velocity data. It is observed that the proposed representation provides an efficient approximation for some processes. Furthermore, a comparison with the KL decomposition showed that the proposed method is computationally cheaper, in terms of both computer memory and execution time.
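As a rough illustration of the idea (our reading, not the authors' code): the spatial covariance matrices at the sampled time instants can be stacked into a third-order tensor whose mode-1 unfolding yields dominant spatial modes, analogous to KL modes of a single-parameter process. The data below are synthetic:

```python
import numpy as np

n_space, n_time, n_samples = 40, 25, 500
rng = np.random.default_rng(2)

# Hypothetical ensemble of a spatio-temporal process: (location, time, sample)
X = rng.standard_normal((n_space, n_time, n_samples))
C = np.empty((n_space, n_space, n_time))
for t in range(n_time):
    C[:, :, t] = np.cov(X[:, t, :])       # spatial covariance at time t

# Mode-1 unfolding, shape (n_space, n_space * n_time); its left singular
# vectors are the dominant spatial components of the stacked covariances
unfolded = C.reshape(n_space, -1)
U, s, _ = np.linalg.svd(unfolded, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99)) + 1
print(f"{k} spatial modes capture 99% of the unfolding energy")
```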
Abstract:
A discrete-time dynamics of a non-Markovian random walker is analyzed using a minimal model where memory of the past drives the present dynamics. In recent work [N. Kumar et al., Phys. Rev. E 82, 021101 (2010)] we proposed a model that exhibits asymptotic superdiffusion, normal diffusion, and subdiffusion with the sweep of a single parameter. Here we propose an even simpler model, with minimal options for the walker: either move forward or stay at rest. We show that this model can also give rise to diffusive, subdiffusive, and superdiffusive dynamics at long times as a single parameter is varied. We show that, in order to have subdiffusive dynamics, the memory of the rest states must be perfectly correlated with the present dynamics. We show explicitly that if this condition is not satisfied in a unidirectional walk, the dynamics is only either diffusive or superdiffusive (but not subdiffusive) at long times.
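For intuition, a minimal simulation of one memory-driven move-or-rest walk (our simplification of this class of models, not the exact model of the paper): at each step the walker recalls a uniformly random past action and repeats it with probability q; the variance of the displacement then grows as t^(2H):

```python
import numpy as np

def memory_walk(n_steps, q, rng):
    """Move (1) or rest (0); each step repeats a uniformly random past
    action with probability q, otherwise does the opposite."""
    actions = np.empty(n_steps, dtype=np.int8)
    actions[0] = 1                                  # first step: move forward
    for t in range(1, n_steps):
        recalled = actions[rng.integers(t)]         # memory of the past
        actions[t] = recalled if rng.random() < q else 1 - recalled
    return np.cumsum(actions)                       # walker positions x(t)

rng = np.random.default_rng(3)
walks = np.array([memory_walk(2_000, q=0.8, rng=rng) for _ in range(300)])
t = np.arange(1, 2_001)
var = walks.var(axis=0)                             # Var[x(t)] ~ t**(2H)
H = np.polyfit(np.log(t[100:]), np.log(var[100:]), 1)[0] / 2
print(f"effective exponent H ~ {H:.2f} (0.5 diffusive, >0.5 superdiffusive)")
```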
Abstract:
Despite decades of research, it remains to be established whether the transformation of a liquid into a glass is fundamentally thermodynamic or dynamic in origin. Although observations of growing length scales are consistent with thermodynamic perspectives, the purely dynamic approach of the Dynamical Facilitation (DF) theory lacks experimental support. Further, for vitrification induced by randomly freezing a subset of particles in the liquid phase, simulations support the existence of an underlying thermodynamic phase transition, whereas the DF theory remains unexplored. Here, using video microscopy and holographic optical tweezers, we show that DF in a colloidal glass-forming liquid grows with density as well as the fraction of pinned particles. In addition, we observe that heterogeneous dynamics in the form of string-like cooperative motion emerges naturally within the framework of facilitation. Our findings suggest that a deeper understanding of the glass transition necessitates an amalgamation of existing theoretical approaches.
Abstract:
This paper proposes a novel experimental test procedure to estimate the reliability of structural dynamical systems under excitations specified via random process models. The samples of random excitations to be used in the test are modified by the addition of an artificial control force. An unbiased estimator for the reliability, based on the tenets of the Girsanov transformation, is derived from the measured ensemble of responses under these modified inputs. The control force is selected so as to reduce the sampling variance of the estimator. The study observes that an acceptable choice of the control force can be made solely using experimental techniques, and the estimator for the reliability can be deduced without recourse to a mathematical model of the structure under study. This permits the proposed procedure to be applied in the experimental study of the time-variant reliability of complex structural systems that are difficult to model mathematically. The illustrative example consists of a multi-axes shake table study on a bending-torsion coupled, geometrically non-linear, five-storey frame under uni/bi-axial, non-stationary, random base excitation. Copyright (c) 2014 John Wiley & Sons, Ltd.
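The estimator itself is standard importance sampling under a change of measure. A minimal numerical sketch (our simplification: a linear single-degree-of-freedom oscillator in place of the frame, a fixed deterministic control force, and hypothetical parameter values throughout):

```python
import numpy as np

m, c, k = 1.0, 0.4, 40.0                 # mass, damping, stiffness
dt, n, sigma = 0.01, 2_000, 1.0          # time step, steps, excitation std
threshold = 0.08                         # failure: |displacement| exceeds this

def max_response(w):
    """Semi-implicit Euler integration of m*x'' + c*x' + k*x = w(t)."""
    x = v = xmax = 0.0
    for wk in w:
        v += (wk - c * v - k * x) / m * dt
        x += v * dt
        xmax = max(xmax, abs(x))
    return xmax

# Control force: a resonant burst over the last 500 steps (a crude choice)
tg = np.arange(n) * dt
u = np.where(tg >= (n - 500) * dt, 0.2 * np.sin(np.sqrt(k / m) * tg), 0.0)

est = []
rng = np.random.default_rng(4)
for _ in range(500):
    w_mod = rng.normal(0.0, sigma, size=n) + u     # modified test input
    # Girsanov / importance-sampling weight: density ratio p(w_mod)/q(w_mod)
    lr = np.exp(np.sum(-u * w_mod / sigma**2 + u**2 / (2 * sigma**2)))
    est.append(lr * (max_response(w_mod) > threshold))
pf = np.mean(est)                                  # unbiased estimate
print(f"failure probability ~ {pf:.3e}, reliability ~ {1 - pf:.6f}")
```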
Abstract:
Using numerical diagonalization, we study the crossover among different random matrix ensembles (Poissonian, Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE), and Gaussian symplectic ensemble (GSE)) realized in two different microscopic models. The specific diagnostic tool used to study the crossovers is the level spacing distribution. The first model is a one-dimensional lattice model of interacting hard-core bosons (or, equivalently, spin-1/2 objects) and the second a higher-dimensional model of non-interacting particles with disorder and spin-orbit coupling. We find that the perturbation causing the crossover among the different ensembles scales to zero with system size as a power law, with an exponent that depends on the ensembles between which the crossover takes place. This exponent is independent of the microscopic details of the perturbation. We also find that the crossover from the Poissonian ensemble to the other three is dominated by the Poissonian-to-GOE crossover, which introduces level repulsion, while the crossovers from GOE to GUE or GOE to GSE, associated with symmetry breaking, introduce a subdominant contribution. We also conjecture that the exponent depends on whether the system contains interactions among the elementary degrees of freedom and is independent of the dimensionality of the system.
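For reference, the diagnostic itself is easy to reproduce on a pure random matrix. A short sketch (a generic GOE illustration, not the paper's microscopic models) comparing the bulk level-spacing histogram with the Wigner surmise:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1000
A = rng.standard_normal((N, N))
H = (A + A.T) / 2                            # GOE: real symmetric Gaussian
E = np.sort(np.linalg.eigvalsh(H))

bulk = E[N // 4 : 3 * N // 4]                # stay in the spectral bulk
s = np.diff(bulk)
s /= s.mean()                                # crude unfolding by mean spacing

hist, edges = np.histogram(s, bins=25, range=(0, 3), density=True)
mid = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi / 2) * mid * np.exp(-np.pi * mid**2 / 4)   # GOE surmise
poisson = np.exp(-mid)                                      # Poissonian case
print(np.c_[mid, hist, wigner, poisson][:5])
```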
Abstract:
We apply the objective method of Aldous to the problem of finding the minimum-cost edge cover of the complete graph with random, independent and identically distributed edge costs. The limit, as the number of vertices goes to infinity, of the expected minimum cost for this problem is known via a combinatorial approach of Hessler and Wästlund. We provide a proof of this result using the machinery of the objective method and local weak convergence, which was used to prove the ζ(2) limit of the random assignment problem. A proof via the objective method is useful because it provides more information on the nature of the edges incident on a typical root in the minimum-cost edge cover. We further show that a belief propagation algorithm converges asymptotically to the optimal solution. This can be applied to a computational linguistics problem of semantic projection. The belief propagation algorithm yields a near-optimal solution with lower complexity than the best known algorithms designed for optimality in worst-case settings.
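For concreteness, the object being studied can be computed exactly on finite instances via the classical reduction of minimum-cost edge cover to maximum-weight matching; the Monte Carlo sketch below (our construction, not the paper's method) estimates the expected cost for iid uniform edge costs:

```python
import numpy as np
import networkx as nx

def min_cost_edge_cover(W):
    """Exact minimum-cost edge cover of a complete graph with symmetric
    cost matrix W: cost = sum_v c_v + min over matchings of the reduced
    costs w(u,v) - c_u - c_v, where c_v is v's cheapest incident edge."""
    n = W.shape[0]
    c = np.array([min(W[v, u] for u in range(n) if u != v) for v in range(n)])
    G = nx.Graph()
    for u in range(n):
        for v in range(u + 1, n):
            red = W[u, v] - c[u] - c[v]
            if red < 0:                      # only cost-saving pairings help
                G.add_edge(u, v, weight=-red)
    M = nx.max_weight_matching(G)            # maximizes the total saving
    saved = sum(G[u][v]["weight"] for u, v in M)
    return c.sum() - saved                   # unmatched vertices take c_v

rng = np.random.default_rng(6)
n, costs = 30, []
for _ in range(50):
    T = np.triu(rng.random((n, n)), 1)
    costs.append(min_cost_edge_cover(T + T.T))
print("empirical E[min cost]:", np.mean(costs))  # compare with the known limit
```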
Abstract:
Although uncertainties in material properties have been addressed in the design of flexible pavements, most current modeling techniques assume that pavement layers are homogeneous. This paper addresses the influence of the spatial variability of the resilient moduli of pavement layers by evaluating the effect of the variance and correlation length on the pavement responses to loading. The integration of the spatially varying log-normal random field with the finite-difference method has been achieved through an exponential autocorrelation function. The variation in the correlation length was found to have a marginal effect on the mean values of the critical strains and a noticeable effect on the standard deviation, which decreases as the correlation length decreases. This reduction in the variance arises from spatial averaging over the softer and stiffer zones generated by the spatial variability. The increase in the mean value of the critical strains with decreasing correlation length, although minor, illustrates that pavement performance is adversely affected by the presence of spatially varying layers. The study also confirmed that the higher the variability in the pavement layer moduli, introduced through a higher coefficient of variation (COV), the higher the variability in the pavement response. The study concludes that ignoring spatial variability by modeling the pavement layers as homogeneous, or assigning them very short correlation lengths, can result in underestimation of the critical strains and thus an inaccurate assessment of pavement performance. (C) 2014 American Society of Civil Engineers.
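To illustrate the kind of input used in such an analysis (our construction, not the paper's code), a one-dimensional log-normal resilient-modulus field with exponential autocorrelation can be sampled by a Cholesky factorization of the autocovariance; all parameter values are hypothetical:

```python
import numpy as np

def lognormal_field(x, mean, cov, corr_len, rng):
    """Sample E(x) log-normal with target mean and COV, and exponential
    autocorrelation rho(d) = exp(-|d| / corr_len) of the underlying field."""
    s2 = np.log(1 + cov**2)                  # variance of the log-field
    mu = np.log(mean) - s2 / 2               # mean of the log-field
    D = np.abs(x[:, None] - x[None, :])
    C = s2 * np.exp(-D / corr_len)           # exponential autocovariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))
    return np.exp(mu + L @ rng.standard_normal(len(x)))

rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 200)              # positions along the layer (m)
E = lognormal_field(x, mean=300.0, cov=0.3, corr_len=1.0, rng=rng)  # MPa
print(E.mean(), E.std() / E.mean())          # check target mean and COV
```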
Abstract:
The potential of Citrobacter freundii, a Gram-negative bacterium, for the remediation of hexavalent chromium (Cr(VI)) and trivalent chromium (Cr(III)) from aqueous solutions was investigated. Bioremediation of Cr(VI) involved both biosorption and bioreduction processes, as compared to the biosorption-only process observed for Cr(III). In the Cr(VI) bioremediation studies, about 59% biosorption was achieved at an equilibrium time of 2 h, an initial Cr(VI) concentration of 4 mg/L, pH 1, and a biomass loading of 5×10^11 cells/mL. The remainder, 41%, was found to be in the form of Cr(III) ions owing to bioreduction of Cr(VI) by the bacteria, resulting in the absence of Cr(VI) ions in the residue and thereby meeting the USEPA specifications. Similar studies were carried out using a Cr(III) solution for an equilibrium time of 2 h, a Cr(III) concentration of 4 mg/L, pH 3, and a biomass loading of 6.3×10^11 cells/mL, wherein a maximum biosorption of about 30% was achieved.
Abstract:
Friction stir processing (FSP) is emerging as one of the most competent severe plastic deformation (SPD) methods for producing bulk ultra-fine-grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP on the way to its commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach has been attempted to optimize the major independent parameters of the process, such as plunge depth, tool rotation speed, and traverse speed. Tensile properties of the optimum friction stir processed sample were correlated with the microstructural characterization done using Scanning Electron Microscopy (SEM) and Electron Back-Scattered Diffraction (EBSD). Optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed an increased strength of 1.3 times that of the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which is attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement, with little variation in grain size across the thickness, and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods owing to the complex interactions between work-piece, tool, and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone, which provided better mechanical properties than the base metal. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Designing a robust algorithm for visual object tracking has been a challenging task for many years. There are trackers in the literature that are reasonably accurate for many tracking scenarios, but most of them are computationally expensive. This narrows their applicability, as many tracking applications demand real-time response. In this paper, we present a tracker based on random ferns. Tracking is posed as a classification problem, and classification is done using ferns. We use ferns because they rely on binary features and are extremely fast at both training and classification compared to other classification algorithms. Our experiments show that the proposed tracker performs well on some of the most challenging tracking datasets and executes much faster than one of the state-of-the-art trackers, without much difference in tracking accuracy.
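As background, a random fern reduces a patch to a fixed set of binary pixel comparisons whose outcomes index a leaf with per-class counts, and several ferns are combined naively. A generic sketch of the technique (not the tracker's actual feature set or training loop; everything here is hypothetical):

```python
import numpy as np

class RandomFerns:
    def __init__(self, n_ferns, depth, n_features, n_classes, rng):
        self.pairs = rng.integers(0, n_features, size=(n_ferns, depth, 2))
        self.counts = np.ones((n_ferns, 2**depth, n_classes))  # Laplace prior
        self.powers = 2 ** np.arange(depth)

    def _leaf(self, x):
        bits = x[self.pairs[..., 0]] > x[self.pairs[..., 1]]   # binary tests
        return bits.astype(int) @ self.powers                  # leaf per fern

    def train(self, x, label):
        self.counts[np.arange(len(self.counts)), self._leaf(x), label] += 1

    def classify(self, x):
        p = self.counts[np.arange(len(self.counts)), self._leaf(x)]
        p /= p.sum(axis=1, keepdims=True)             # per-fern class posterior
        return int(np.argmax(np.log(p).sum(axis=0)))  # naive-Bayes combination

rng = np.random.default_rng(8)
ferns = RandomFerns(n_ferns=20, depth=8, n_features=256, n_classes=2, rng=rng)
patch = rng.random(256)                  # hypothetical 16x16 patch, flattened
ferns.train(patch, label=1)
print(ferns.classify(patch))             # -> 1
```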
Abstract:
Diffusion couple experiments are conducted to study phase evolution in the Co-rich part of the Co-Ni-Ta phase diagram. This helps to examine the available phase diagram and to propose a correction to the stability of the Co2Ta phase based on compositional measurements and X-ray analysis. The growth rate of this phase decreases with an increase in Ni content. The same is reflected in the estimated integrated interdiffusion coefficients of the components in this phase. The possible reasons for this change are discussed in terms of defects, crystal structure, and the driving forces for diffusion. The diffusion rate of Co in the Co2Ta phase at the Co-rich composition is higher because of the larger number of Co-Co bonds present compared to Ta-Ta bonds and the presence of Co antisites accommodating the deviation from stoichiometry. The decrease in the diffusion coefficients on Ni addition indicates that Ni preferentially replaces the Co antisites, thereby decreasing the diffusion rate. (C) 2014 Elsevier B.V. All rights reserved.