934 results for PROBABILISTIC TELEPORTATION
Abstract:
Link-level reliable multicast requires a channel access protocol to resolve collisions among feedback messages sent by multicast data receivers. Several deterministic media access control protocols have been proposed that attain high reliability, but with large delay; other protocols offer only probabilistic reliability guarantees, but with minimal delay. In this paper, we propose a virtual token-based channel access and feedback protocol (VTCAF) for link-level reliable multicasting. The VTCAF protocol introduces a virtual (implicit) token-passing mechanism based on carrier sensing to avoid collisions between feedback messages, and improves delay performance by reducing the number of feedback messages. Moreover, the VTCAF protocol is parametric in nature and can easily trade off reliability against delay as required by the underlying application. Such a cross-layer design approach is useful for a variety of multicast applications that require reliable communication with different levels of reliability and delay performance. We have analyzed our protocol to evaluate various performance parameters at different packet loss rates and compared its performance with that of existing protocols. Our protocol has also been simulated using the Castalia network simulator to evaluate the same performance parameters. Simulation and analytical results together show that the VTCAF protocol considerably reduces average access delay while ensuring very high reliability at the same time.
Abstract:
High wind poses hazards in a number of areas, such as structural safety, aviation, wind energy (where low wind speed is also a concern), and pollutant transport, to name a few. A good prediction tool for wind speed is therefore necessary in these areas. Like many other natural processes, the behavior of wind is associated with considerable uncertainties stemming from different sources, and a reliable prediction tool for wind speed should take these uncertainties into account. In this work, we propose a probabilistic framework for prediction of wind speed from measured spatio-temporal data. The framework is based on decompositions of the spatio-temporal covariance and simulation using these decompositions. A novel simulation method based on a tensor decomposition is used in this context. The proposed framework is composed of four modules, which are flexible enough to accommodate further modifications. The framework is applied to measured wind speed data in Ireland. Both short- and long-term predictions are addressed.
Abstract:
A nonlinear stochastic filtering scheme based on a Gaussian sum representation of the filtering density and an annealing-type iterative update, which is additive and uses an artificial diffusion parameter, is proposed. The additive nature of the update relieves the problem of weight collapse often encountered with filters employing weighted-particle empirical approximations to the filtering density. The proposed Monte Carlo filter bank conforms in structure to the parent nonlinear filtering (Kushner-Stratonovich) equation and possesses excellent mixing properties, enabling adequate exploration of the phase space of the state vector. The performance of the filter bank, presently assessed against a few carefully chosen numerical examples, provides ample evidence of its remarkable filter convergence and estimation accuracy vis-a-vis most other competing filters, especially in higher-dimensional dynamic system identification problems, including cases that demand estimating relatively minor variations in parameter values from their reference states. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Monte Carlo simulation methods involving splitting of Markov chains have been used to evaluate multi-fold integrals in different application areas. In this paper we examine the performance of these methods in evaluating reliability integrals, from the point of view of characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, the Holmes-Diaconis-Ross method, and the generalized splitting algorithm. A few refinements based on the first-order reliability method are suggested for selecting the algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed, and an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bringing out the relative merits of the alternative methods. (C) 2015 Elsevier Ltd. All rights reserved.
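The reliability integral these splitting schemes target is P_f = P(g(X) <= 0) for a limit-state function g. A minimal crude Monte Carlo baseline, using a toy Gaussian limit state rather than any of the paper's examples, illustrates the estimand and the sampling variance that splitting methods are designed to reduce:

```python
import random

def crude_mc_failure_prob(limit_state, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate of P(g(X) <= 0), the reliability integral.

    Splitting methods (subset simulation, Holmes-Diaconis-Ross, generalized
    splitting) outperform this baseline for rare events; the estimand is the
    same, only the sampling strategy differs.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0.0)
    p = hits / n
    var = p * (1.0 - p) / n  # sampling variance of the binomial estimator
    return p, var

# Toy limit state g(x) = 3 - x with x ~ N(0, 1), so P_f = 1 - Phi(3) ~ 1.35e-3
p, var = crude_mc_failure_prob(lambda x: 3.0 - x, lambda rng: rng.gauss(0.0, 1.0))
```

For smaller target probabilities the relative error of this baseline blows up as 1/sqrt(n * P_f), which is precisely the regime where chain-splitting methods pay off.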
Abstract:
Understanding the changing nature of the intraseasonal oscillatory (ISO) modes of the Indian summer monsoon, manifested by active and break phases, and their association with extreme rainfall events is necessary for probabilistic estimation of flood-related risks in a warming climate. Here, using ground-based observed rainfall, we define an index to measure the strength of monsoon ISOs and show that the relative strength of the northward-propagating low-frequency ISO (20-60 days) modes has had a significant decreasing trend during the past six decades, possibly attributable to the weakening of large-scale circulation in the region during the monsoon season. This reduction is compensated by a gain in synoptic-scale (3-9 days) variability. The decrease in low-frequency ISO variability is associated with a significant decreasing trend in the percentage of extreme events during the active phase of the monsoon. However, this decrease is balanced by significant increasing trends in the percentage of extreme events in the break and transition phases. We also find a significant rise in the occurrence of extremes during the early and late monsoon months, mainly over the eastern coastal regions. Our study highlights the redistribution of rainfall intensity among periodic (low-frequency) and non-periodic (extreme) modes in a changing climate scenario.
Abstract:
The main objective of the paper is to develop a new method to estimate the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both intraplate and active regions. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of subsurface rupture length (RLD) to total fault length (TFL). PFR is used to arrive at RLD, which in turn is used to estimate the maximum magnitude of each seismic source. Maximum magnitudes for both regions were estimated and compared with the values given by existing methods. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the ``a'' and ``b'' parameters, and the maximum observed magnitude (M_max^obs) were determined for each SSA and used to estimate M_max with all the existing methods. The study shows that the existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters, and M_max^obs, whereas M_max determined by the proposed method is a function of the rupture character rather than of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
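The rupture-based idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's calibrated procedure: the magnitude-length coefficients below are from the widely used Wells and Coppersmith (1994) M-RLD regression for all slip types, and the fault length and PFR values are invented for the example.

```python
import math

def percentage_fault_rupture(rld_km, tfl_km):
    """PFR: ratio of subsurface rupture length (RLD) to total fault length (TFL)."""
    return rld_km / tfl_km

def m_max_from_rupture(tfl_km, pfr):
    """Estimate M_max from the rupture length implied by the regional PFR.

    Uses the Wells & Coppersmith (1994) M-RLD relation M = 4.38 + 1.49*log10(RLD)
    as a stand-in magnitude-length scaling law (an assumption here).
    """
    rld_km = pfr * tfl_km  # expected subsurface rupture length for this source
    return 4.38 + 1.49 * math.log10(rld_km)

# Hypothetical source: a 100 km fault in a region whose earthquakes
# typically rupture about 26% of the total fault length
m = m_max_from_rupture(100.0, 0.26)
```

Because the estimate depends only on TFL and the regional PFR, it is insensitive to the SSA radius and catalog-derived seismicity parameters, which is the property the abstract emphasizes.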
Abstract:
The objective of this paper is to develop seismic hazard maps of the Patna district, considering the region-specific maximum magnitude and ground motion prediction equations (GMPEs), by worst-case deterministic and classical probabilistic approaches. Patna, located near the active Himalayan seismic region, has been subjected to destructive earthquakes such as the 1803 and 1934 Bihar-Nepal earthquakes. Based on past seismicity and earthquake damage distribution, linear sources and seismic events have been considered within a radius of about 500 km around the center of the Patna district. The maximum magnitude (M_max) has been estimated by conventional approaches, namely the maximum observed magnitude (M_max^obs) and/or an increment of 0.5, the Kijko method, and regional rupture characteristics; the maximum of these three is taken as the maximum probable magnitude for each source. Twenty-seven GMPEs were found applicable for the Patna region, from which suitable region-specific GMPEs were selected by performing the `efficacy test,' which makes use of the log-likelihood. The maximum magnitude and selected GMPEs were used to estimate PGA and spectral acceleration at 0.2 and 1 s, which were mapped for the worst-case deterministic approach and for 2 and 10% probabilities of exceedance in 50 years. Furthermore, the seismic hazard results were used to develop deaggregation plots quantifying the contribution of seismic sources in terms of magnitude and distance. In this study, a normalized site-specific design spectrum has been developed by dividing the hazard map into four zones based on the peak ground acceleration values. This site-specific response spectrum has been compared with the recent 2011 Sikkim earthquake and the Indian seismic code IS 1893.
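The efficacy test mentioned above is commonly implemented as an average-sample log-likelihood (LLH) score: each candidate GMPE is scored by how much probability its predictive distribution assigns to the observed ground motions, and lower scores win. The sketch below assumes that formulation and uses invented residuals and Gaussian predictive densities purely for illustration:

```python
import math

def llh_score(pdf_values):
    """Negative average log2-likelihood of observed ground motions under a
    candidate GMPE's predictive pdf; a smaller LLH indicates a better fit."""
    return -sum(math.log2(g) for g in pdf_values) / len(pdf_values)

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical observed log-residuals at five stations
obs = [0.1, -0.2, 0.05, 0.3, -0.1]

# GMPE A predicts the observations well; GMPE B is systematically biased
llh_a = llh_score([normal_pdf(x, 0.0, 0.5) for x in obs])
llh_b = llh_score([normal_pdf(x, 1.0, 0.5) for x in obs])
```

Ranking the twenty-seven candidate GMPEs by such a score and keeping the lowest-LLH models is one plausible reading of the selection step described in the abstract.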
Abstract:
The study introduces two new alternatives for global response sensitivity analysis based on the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices based on the L2-norm involve second-order moments of the response and, when applied to an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function; this measure is shown to be conceptually a more satisfying alternative to the Kullback-Leibler divergence based analysis reported in the existing literature. Other issues addressed in the study include Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on global sensitivity analysis of the natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function. (C) 2015 Elsevier Ltd. All rights reserved.
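For the special case the abstract mentions, where the L2-norm indices reduce to classical Sobol indices for independent inputs, a standard pick-freeze Monte Carlo estimator makes the connection concrete. This is the textbook Saltelli-style estimator applied to a toy additive model, not the paper's L2 or Hellinger procedures:

```python
import random

def first_order_sobol(f, dim, n=50_000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for a
    model f with independent U(0,1) inputs (Saltelli's estimator)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # AB_i: matrix A with its i-th column replaced by B's i-th column
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s = sum(yb * (yi - ya) for yb, yi, ya in zip(fB, fABi, fA)) / n
        indices.append(s / var)
    return indices

# Additive toy model Y = X1 + 2*X2: exact first-order indices are 0.2 and 0.8
s1, s2 = first_order_sobol(lambda x: x[0] + 2.0 * x[1], dim=2)
```

The ratio s/var is the variance of the conditional expectation E[Y | X_i] divided by the total variance, which is precisely the first-order Sobol index the L2-based indices recover in the iid case.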
Abstract:
Identification of dominant modes is an important step in studying linearly vibrating systems, including flow-induced vibrations. In the presence of uncertainty, when some of the system parameters and the external excitation are modeled as random quantities, this step becomes more difficult. This work aims to give a systematic treatment to this end. The ability to capture the time-averaged kinetic energy is chosen as the primary criterion for the selection of modes. Accordingly, a methodology is proposed based on the overlap of the probability density functions (pdfs) of the natural and excitation frequencies, the proximity of the natural frequencies of the mean or baseline system, the modal participation factor, and the stochastic variation of mode shapes in terms of the modes of the baseline system, termed here statistical modal overlapping. The probabilistic descriptors of the natural frequencies and mode shapes are found by solving a random eigenvalue problem. Three distinct vibration scenarios are considered: (i) undamped and damped free vibrations of a bladed disk assembly, (ii) forced vibration of a building, and (iii) flutter of a bridge model. Through numerical studies, it is observed that the proposed methodology gives an accurate selection of modes. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
Granular filters are provided for the safety of water-retaining structures as protection against piping failure. Piping is triggered when the base soil to be protected starts migrating in the direction of seepage flow under the influence of seepage forces. To protect the base soil from migration, the voids in the filter media should be small enough, yet not so small as to block the smooth passage of seeping water. Fulfilling these two contradictory design requirements at the same time is a major concern for the successful performance of granular filter media. Conventionally, since the Terzaghi era, the particle size distribution (PSD) of granular filters has been designed from the particle size distribution characteristics of the base soil to be protected. This design approach provides a range of D15f values within which the PSD of the granular filter media should fall, leaving infinitely many possibilities, and safety against the two critical design requirements cannot be ensured. Although used successfully for many decades, the existing filter design guidelines are purely empirical in nature, supplemented by experience and good engineering judgment. In the present study, the authors' analytical solutions for obtaining the factor of safety with respect to base soil particle migration and soil permeability are first discussed. The solution takes into consideration the basic geotechnical properties of the base soil and filter media as well as the prevailing hydraulic conditions, and provides a comprehensive approach to granular filter design with the ability to assess stability in terms of a factor of safety. Considering that geotechnical properties are variable in nature, a probabilistic analysis is further suggested to evaluate the system reliability of the filter media, which may help in risk assessment and risk management for decision making.
Abstract:
Identifying translations from comparable corpora is a well-known problem with several applications, e.g. dictionary creation in resource-scarce languages. Scarcity of high-quality corpora, especially in Indian languages, makes this problem hard; e.g. state-of-the-art techniques achieve a mean reciprocal rank (MRR) of 0.66 for English-Italian, but a mere 0.187 for Telugu-Kannada. Comparable corpora exist in many Indian languages paired with other ``auxiliary'' languages. We observe that translations have many topically related words in common in the auxiliary language. To model this, we define the notion of a translingual theme, a set of topically related words from auxiliary language corpora, and present a probabilistic framework for translation induction. Extensive experiments on 35 comparable corpora using English and French as auxiliary languages show that this approach can yield dramatic improvements in performance (e.g. MRR improves by 124% to 0.419 for Telugu-Kannada). A user study on WikiTSu, a system for cross-lingual Wikipedia title suggestion that uses our approach, shows a 20% improvement in the quality of the titles suggested.
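The MRR figures quoted above follow the standard definition: for each query word, take the reciprocal of the rank at which the first correct translation appears in the system's ranked candidate list, then average over queries. A minimal sketch, with invented queries and candidates:

```python
def mean_reciprocal_rank(ranked_candidates, gold):
    """MRR: average over queries of 1/rank of the first correct translation.

    ranked_candidates: {query: [candidate, ...]} in ranked order
    gold:              {query: set of acceptable translations}
    Queries whose list contains no correct candidate contribute 0.
    """
    total = 0.0
    for query, candidates in ranked_candidates.items():
        for rank, cand in enumerate(candidates, start=1):
            if cand in gold[query]:
                total += 1.0 / rank
                break
    return total / len(ranked_candidates)

# Two hypothetical queries: correct answers at ranks 1 and 4 -> MRR = (1 + 1/4) / 2
mrr = mean_reciprocal_rank(
    {"q1": ["a", "b"], "q2": ["x", "y", "z", "w"]},
    {"q1": {"a"}, "q2": {"w"}},
)
```

An improvement from 0.187 to 0.419 thus roughly corresponds to the correct Telugu-Kannada translation moving from around rank 5 to better than rank 3 on average.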
Abstract:
Anonymity and authenticity are both important yet often conflicting security goals in a wide range of applications. On the one hand, for many applications (say, access control) it is crucial to be able to verify the identity of a given legitimate party (entity authentication); alternatively, an application might require that no one but a party can communicate on its behalf (message authentication). On the other hand, privacy concerns dictate that the anonymity of a legitimate party be preserved: no information concerning the identity of parties should be leaked to an outside entity eavesdropping on the communication. This conflict becomes even more acute when considering anonymity with respect to an active entity that may attempt to impersonate other parties in the system. In this work we resolve this conflict in two steps. First, we formalize what it means for a system to provide both authenticity and anonymity, even in the presence of an active man-in-the-middle adversary, for various specific applications such as message and entity authentication, using the constructive cryptography framework of [Mau11, MR11]. Our approach inherits the composability statement of constructive cryptography and can therefore be directly used in any higher-level context. Next, we demonstrate several simple protocols for realizing these systems, at times relying on a new type of (probabilistic) Message Authentication Code (MAC) called key indistinguishable (KI) MACs. Similar to the key-hiding encryption schemes of [BBDP01], they guarantee that tags leak no discernible information about the keys used to generate them.
Abstract:
The problem of characterizing global sensitivity indices of structural response when system uncertainties are represented using probabilistic and (or) non-probabilistic modeling frameworks (which include intervals, convex functions, and fuzzy variables) is considered. These indices are characterized in terms of distance measures between a fiducial model, in which uncertainties in all the pertinent variables are taken into account, and a family of hypothetical models, in which uncertainty in one or more selected variables is suppressed. The distance measures considered include various probability distance measures (the Hellinger, L2, and Kantorovich metrics, and the Kullback-Leibler divergence) and the Hausdorff distance as applied to intervals and fuzzy variables. Illustrations include studies on an uncertainly parametrized building frame carrying uncertain loads. (C) 2015 Elsevier Ltd. All rights reserved.
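Of the probability metrics listed, the Hellinger distance is the easiest to evaluate numerically on discretized response densities. The sketch below compares two toy Gaussian densities on a common grid; it is a generic illustration of the metric, not the paper's building-frame computation:

```python
import math

def hellinger(p, q, dx):
    """Hellinger distance between two densities sampled on a common grid:
    H(p, q) = sqrt(0.5 * integral (sqrt(p) - sqrt(q))^2 dx), in [0, 1]."""
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q)) * dx
    return math.sqrt(0.5 * s)

def gauss(x, mu):
    """Standard-deviation-1 Gaussian density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Discretize two unit-variance Gaussians whose means differ by one sigma;
# the closed-form distance is sqrt(1 - exp(-1/8)) ~ 0.343
dx = 0.01
grid = [-10.0 + i * dx for i in range(2001)]
h = hellinger([gauss(x, 0.0) for x in grid], [gauss(x, 1.0) for x in grid], dx)
```

Suppressing the uncertainty in a variable and recomputing such a distance against the fiducial response density yields a sensitivity index of the kind the abstract describes: variables whose suppression moves the density furthest are the most influential.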
Abstract:
In this paper, a methodology to reduce the operational maintenance cost of composite structures using SHM systems is addressed. Based on real-time SHM data, in-service lifetime prognosis and remaining useful lifetime (RUL) estimation can be performed, and the maintenance timetable can therefore be predicted by optimizing inspection times. A probabilistic approach is combined with phenomenological fatigue damage models for composite materials to assess the maintenance cost-effectiveness of composite structures. A Monte Carlo method is used to estimate the probability of failure of composite structures and to compute the average number of structural components to be replaced over the component lifetime; the replacement frequency of a given structural component over the aircraft lifetime is thereby assessed. A first application to aeronautical composite structure maintenance is considered, using two composite fatigue-life models and several laminates. Our study shows that maintenance cost-effectiveness depends on the material and the applied fatigue loading.
Abstract:
The spatial variability that exists in pavement systems can be conveniently represented by means of random fields; in this study, a probabilistic analysis that considers this spatial variability, including the anisotropic nature of the pavement layer properties, is presented. The integration of spatially varying log-normal random fields into a linear-elastic finite-difference analysis is achieved through the expansion optimal linear estimation method. For the estimation of the critical pavement responses, metamodels based on polynomial chaos expansion (PCE) are developed to replace the computationally expensive finite-difference model. A sparse polynomial chaos expansion, based on an adaptive regression algorithm and enhanced by global sensitivity analysis (GSA), is used, with significant savings in computational effort. The effect of anisotropy in each layer on the pavement responses is studied separately, and an effort is made to identify the pavement layer in which the introduction of anisotropic characteristics has the most significant impact on the critical strains. It is observed that anisotropy in the base layer has a significant but diverse effect on both critical strains: the compressive strain tends to be considerably higher than that observed for the isotropic section, while the tensile strain shows a decrease in mean value with the introduction of base-layer anisotropy. Furthermore, asphalt-layer anisotropy also tends to decrease the critical tensile strain while having little effect on the critical compressive strain. (C) 2015 American Society of Civil Engineers.