93 results for probability of precocious pregnancy


Relevance: 100.00%

Abstract:

In this paper, the use of probability theory in the reliability-based optimum design of reinforced gravity retaining walls is described. The formulation for computing the system reliability index is presented. A parametric study is conducted using the advanced first-order second-moment (AFOSM) method of Hasofer-Lind and Rackwitz-Fiessler (HL-RF) to assess the effect of uncertainties in design parameters on the probability of failure of a reinforced gravity retaining wall. Eight modes of failure are considered in total, viz. overturning, sliding, eccentricity, bearing capacity failure, and shear and moment failure in the toe slab and heel slab. The analysis is performed by treating the backfill soil properties, foundation soil properties, geometric properties of the wall, reinforcement properties and concrete properties as random variables. These results are used to investigate optimum wall proportions for different coefficients of variation of φ (5% and 10%) and a target system reliability index (βt) in the range of 3–3.2.
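
As an illustration of the HL-RF iteration named above (a minimal sketch, not the paper's implementation; the two-variable limit state and its parameters are hypothetical): in independent standard normal space, the design point is found by repeatedly projecting onto the linearized limit state, the reliability index β is its distance from the origin, and Pf ≈ Φ(-β).

```python
import numpy as np
from scipy.stats import norm

def hlrf(g, grad_g, n, tol=1e-8, max_iter=100):
    """Hasofer-Lind / Rackwitz-Fiessler iteration in independent standard
    normal space; returns the reliability index beta and the design point."""
    u = np.zeros(n)
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        u_new = ((gr @ u - gv) / (gr @ gr)) * gr  # project onto linearized g = 0
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Hypothetical linear limit state g = R - S, with capacity R ~ N(150, 15^2)
# and demand S ~ N(100, 20^2), written in standard normal coordinates u.
g = lambda u: (150.0 + 15.0 * u[0]) - (100.0 + 20.0 * u[1])
grad_g = lambda u: np.array([15.0, -20.0])

beta, u_star = hlrf(g, grad_g, n=2)
print(beta, norm.cdf(-beta))  # beta = 2.0, Pf = Phi(-beta) ~ 0.0228
```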

Relevance: 100.00%

Abstract:

In this work, an attempt has been made to evaluate the spatial variation of peak horizontal acceleration (PHA) and spectral acceleration (SA) values at rock level for south India based on probabilistic seismic hazard analysis (PSHA). These values were estimated by considering the uncertainties involved in magnitude, hypocentral distance and attenuation of seismic waves. Different models were used for the hazard evaluation, and they were combined using a logic tree approach. For evaluating the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1°, and the hazard parameters were calculated at the centre of each of these cells by considering all the seismic sources within a radius of 300 km. Rock level PHA values and SA at 1 s corresponding to a 10% probability of exceedance in 50 years were evaluated for all the grid points. Maps showing the spatial variation of rock level PHA values and SA at 1 s for the whole of south India are presented in this paper. To compare the seismic hazard for some of the important cities, the seismic hazard curves and the uniform hazard response spectrum (UHRS) at rock level with a 10% probability of exceedance in 50 years are also presented in this work.
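
As a quick sanity check on the hazard level used here (a standard conversion, not specific to this paper): under a Poisson occurrence model, a 10% probability of exceedance in 50 years corresponds to the familiar 475-year return period, and 2% in 50 years to roughly 2475 years (the 2500-year figure used later in these results).

```python
import math

def return_period(p_exceed, t_years):
    # Poisson occurrence: P = 1 - exp(-t / T_R)  =>  T_R = -t / ln(1 - P)
    return -t_years / math.log(1.0 - p_exceed)

print(return_period(0.10, 50))  # ~475 years (10% in 50 years)
print(return_period(0.02, 50))  # ~2475 years (2% in 50 years), often rounded to 2500
```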

Relevance: 100.00%

Abstract:

The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed point, i.e., monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm which get around this limitation. The first replaces the selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms, and provide conceptually new schemes for error correction.
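
A minimal numerical check of the first variation's key property (a sketch restricted to the two-dimensional subspace spanned by the start and target states; the initial failure probability is arbitrary): with selective $\pi/3$ phase shifts, one recursion level $U \to U R_s(\pi/3) U^{\dagger} R_t(\pi/3) U$ cubes the failure probability, so $n$ levels take $\epsilon$ to $\epsilon^{3^n}$; the abstract's $\epsilon^{2q+1}$ is the optimal bound for $q$ queries.

```python
import numpy as np

def phase(v, phi):
    # Selective phase shift: multiply the component along unit vector v by e^{i phi}.
    return np.eye(2, dtype=complex) + (np.exp(1j * phi) - 1.0) * np.outer(v, v.conj())

eps = 0.3                                  # initial failure probability (arbitrary)
alpha = np.arcsin(np.sqrt(1.0 - eps))      # U rotates the start state to within eps of the target
U = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]], dtype=complex)
s = np.array([1.0, 0.0], dtype=complex)    # start state
t = np.array([0.0, 1.0], dtype=complex)    # target state
Rs, Rt = phase(s, np.pi / 3), phase(t, np.pi / 3)

D = U
for n in range(1, 4):
    D = D @ Rs @ D.conj().T @ Rt @ D       # one level: U -> U R_s U^dag R_t U
    fail = 1.0 - abs(t.conj() @ D @ s) ** 2
    print(n, fail, eps ** 3 ** n)          # simulated vs predicted eps^(3^n)
```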

Relevance: 100.00%

Abstract:

Optimal maintenance policies for a machine whose performance degrades with age and which is subject to failure are derived using optimal control theory. The optimal policies are shown to be, normally, of a bang-coast nature, except when the probability of machine failure is a function of maintenance. It is also shown, in the deterministic case, that a higher depreciation rate tends to reverse this policy to coast-bang. When the probability of failure is a function of maintenance, considerable computational effort is needed to obtain an optimal policy, and the resulting policy is not easily implementable. For this case too, an optimal policy in the class of bang-coast policies is derived using a semi-Markov decision model. A simple procedure for modifying the probability of machine failure with maintenance is employed. The results obtained extend and unify recent results for this problem along both theoretical and practical lines. Numerical examples are presented to illustrate the results.

Relevance: 100.00%

Abstract:

The firing characteristics of a simple triggered vacuum gap (TVG) using lead zirconate titanate as the dielectric material in the trigger gap are described. This TVG has a long life of about 2000 firings without appreciable deterioration of its electrical properties for main discharge currents up to 3 kA, and is much superior to those made with Supramica (Mycalex Corporation of America) and silicon carbide as used in our earlier investigations. The effects of varying the trigger voltage, trigger circuit, trigger pulse duration, trigger pulse energy, main gap voltage, main gap separation and main circuit energy on the firing characteristics have been studied. The trigger resistance progressively decreases with the number of firings of the trigger gap as well as of the main gap. This decrease in the trigger resistance is more pronounced for main discharge currents exceeding 10 kA. The minimum trigger current required for reliable firing decreases with increasing trigger voltage up to a threshold value of 1.2 kV and thereafter saturates at 3.0 A. This value is less than that obtained with Supramica as the dielectric material. One hundred percent firing probability of the TVG at main gap voltages as low as 50 V is possible, and this low-voltage breakdown of the main gap appears to be similar to the breakdown at low pressures between a moving plasma and the cold electrodes immersed in it, as reported by other workers.

Relevance: 100.00%

Abstract:

We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol as standardized in the IEEE 802.11 Distributed Coordination Function (DCF). The approximation is that, when n of the M queues are non-empty, the (transmission) attempt probability of each of the n non-empty nodes is given by the long-term (transmission) attempt probability of n saturated nodes. With packets arriving at the M queues according to independent Poisson processes, the SDAR approximation reduces a single cell with non-saturated nodes to a Markovian coupled queueing system. We provide a sufficient condition under which the joint queue length Markov chain is positive recurrent. For the symmetric case of equal arrival rates and finite, equal buffers, we develop an iterative method that leads to accurate predictions for important performance measures such as collision probability, throughput and mean packet delay. We replace the MAC layer with the SDAR model of contention by modifying the NS-2 source code pertaining to the MAC layer, keeping all other layers unchanged. By this model-based simulation technique at the MAC layer, we achieve speed-ups (w.r.t. MAC layer operations) of up to 5.4. Through extensive model-based simulations and numerical results, we show that the SDAR model is an accurate model for the DCF MAC protocol in single cells.
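
The saturated attempt probabilities that SDAR plugs in can be obtained from Bianchi's well-known fixed point; a minimal sketch (the standard model, not the paper's code; 802.11b-like backoff parameters W and m are assumed):

```python
# Bianchi fixed point for n saturated 802.11 DCF nodes: tau is the per-slot
# attempt probability, p the conditional collision probability. The SDAR
# approximation uses tau(n) as the attempt probability of each of the n
# non-empty queues.
def saturated_attempt_prob(n, W=32, m=5, iters=500):
    tau = 0.1                                 # initial guess
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_next = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * (tau + tau_next)          # damped update for stable convergence
    return tau

for n in (1, 2, 5, 10, 20):
    print(n, round(saturated_attempt_prob(n), 4))
```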

Relevance: 100.00%

Abstract:

A lightning strike in the neighborhood can induce significant currents in tall down conductors. Though the magnitude of the induced current in this case is much smaller than that encountered during a direct strike, the probability of occurrence and the frequency content are higher. In view of this, appropriate knowledge of the characteristics of such induced currents is relevant for the scrutiny of recorded currents and for evaluating interference to electrical and electronic systems in the vicinity. Previously, a study was carried out on the characteristics of induced currents under the idealized assumption that there were no influencing objects in the vicinity of the down conductor and the channel. However, some influencing conducting bodies will always be present, such as trees, electricity and communication towers, buildings, and other elevated objects that can affect the induced currents in a down conductor. The present work is carried out to understand the influence of nearby conducting objects on the characteristics of the currents induced by a strike to ground in the vicinity of a tall down conductor. For the study, an electromagnetic model is employed to model the down conductor, channel and neighboring conducting objects, and Numerical Electromagnetic Code-2 is used for the numerical field computations. Neighboring objects of different heights, of different shapes, and at different locations are considered. It is found that neighboring objects have a significant influence on the magnitude and nature of the induced currents in a down conductor when the height of the nearby conducting object is comparable to that of the down conductor.

Relevance: 100.00%

Abstract:

Gujarat is one of the fastest-growing states of India, with high industrial activity coming up in its major cities. Analysing the seismic hazard is indispensable, as the region is considered the most seismically active part of the stable continental region of India. The Bhuj earthquake of 2001 caused extensive damage in terms of casualties and economic loss. In the present study, the seismic hazard of Gujarat was evaluated using a probabilistic approach within a logic tree framework that minimizes the uncertainties in hazard assessment. The peak horizontal acceleration (PHA) and spectral acceleration (Sa) values were evaluated for 10% and 2% probabilities of exceedance in 50 years. Two important geotechnical effects of earthquakes, site amplification and liquefaction, are also evaluated, considering site characterization based on site classes. The liquefaction return period for the entire state of Gujarat is evaluated using a performance-based approach. The maps of PHA and PGA values prepared in this study are very useful for seismic hazard mitigation of the region in the future.

Relevance: 100.00%

Abstract:

This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with the probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors and the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric for comparing different detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense for a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of the error exponent performance. The error exponents with a certain probability at both the individual sensors and the fusion center are derived, under both Rayleigh fading and log-normal shadowing. Numerical results are used to illustrate and provide a visual feel for the theoretical expressions obtained.
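
The K-out-of-N fusion step can be made concrete with a small calculation (an illustration with assumed per-sensor error rates; the paper's CLT-based detector is not reproduced here):

```python
from math import comb

def at_least(k, n, p):
    # P(at least k successes out of n independent Bernoulli(p) trials)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def fusion_error(N, K, pfa, pmd, prior1=0.5):
    # Bayesian probability of error for a K-out-of-N fusion rule:
    # declare the spectrum occupied when at least K of N sensors say so.
    p_fa = at_least(K, N, pfa)               # H0 true, fusion declares H1
    p_md = 1.0 - at_least(K, N, 1.0 - pmd)   # H1 true, fewer than K sensors detect
    return (1.0 - prior1) * p_fa + prior1 * p_md

# Hypothetical per-sensor false-alarm and missed-detection rates.
print(fusion_error(N=10, K=5, pfa=0.1, pmd=0.2))
```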

Relevance: 100.00%

Abstract:

We studied the development of surface instabilities leading to the generation of multielectron bubbles (MEBs) in superfluid helium upon the application of a pulsed electric field. We found the statistical distribution of the charge of individual instabilities to be strongly dependent on the duration of the electric field pulse. The rate and probability of generation of these instabilities in relation to the temporal characteristics of the applied field were also investigated.

Relevance: 100.00%

Abstract:

Future space-based gravity wave (GW) experiments such as the Big Bang Observer (BBO), with their excellent projected one-sigma angular resolution, will measure the luminosity distance to a large number of GW sources to high precision, and the redshifts of the single galaxies in the narrow solid angles towards the sources will provide the redshifts of the GW sources. One-sigma BBO beams contain the actual source in only 68% of the cases; the beams that do not contain the source may contain a spurious single galaxy, leading to misidentification. To increase the probability of the source falling within the beam, larger beams have to be considered, decreasing the chances of finding single galaxies in the beams. Saini et al. [T.D. Saini, S.K. Sethi, and V. Sahni, Phys. Rev. D 81, 103009 (2010)] argued, largely analytically, that identifying even a small number of GW source galaxies furnishes a rough distance-redshift relation, which could be used to further resolve sources that have multiple objects in the angular beam. In this work we develop this idea further by introducing a self-calibrating iterative scheme which works in conjunction with Monte Carlo simulations to determine the luminosity distance to GW sources with progressively greater accuracy. This iterative scheme allows one to determine the equation of state of dark energy to within an accuracy of a few percent for a gravity wave experiment possessing a beam width an order of magnitude larger than BBO (and therefore having a far poorer angular resolution). This is achieved with no prior information about the nature of dark energy from other data sets such as type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background, etc. DOI: 10.1103/PhysRevD.87.083001
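
The beam-size trade-off described above can be illustrated with a toy calculation (the localization scale and galaxy density below are assumed placeholders, not values from the paper): for a 2D Gaussian angular error, enlarging the beam raises the chance of containing the true host, but the expected number of unrelated galaxies grows with the beam's solid angle.

```python
import numpy as np

# For a 2D Gaussian angular error of scale sigma, a circular beam of radius r
# contains the true host with probability 1 - exp(-r^2 / (2 sigma^2)), while
# interlopers enter as a Poisson process with mean density * beam area.
sigma = 1.0                       # localization scale (arbitrary angular units, assumed)
density = 0.2                     # galaxies per unit solid angle (assumed)
for r in (1.0, 1.51, 2.0, 3.0):   # r = 1.51 * sigma gives ~68% containment
    p_contain = 1.0 - np.exp(-r**2 / (2.0 * sigma**2))
    n_bg = density * np.pi * r**2
    print(f"r={r:4.2f}*sigma  P(contain)={p_contain:.2f}  E[interlopers]={n_bg:.2f}")
```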

Relevance: 100.00%

Abstract:

In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of throughput performance.

Relevance: 100.00%

Abstract:

We theoretically explore the annihilation of vortex dipoles, generated when an obstacle moves through an oblate Bose-Einstein condensate, and examine the energetics of the annihilation event. We show that the grey soliton, which results from the vortex dipole annihilation, is lower in energy than the vortex dipole. We also investigate the annihilation events numerically and observe that annihilation occurs only when the vortex dipole overtakes the obstacle and comes closer than the coherence length. Furthermore, we find that noise reduces the probability of annihilation events. This may explain the lack of annihilation events in experimental realizations.

Relevance: 100.00%

Abstract:

The delineation of seismic source zones plays an important role in the evaluation of seismic hazard. In most studies, seismic source delineation is based on geological features. In the present study, an attempt has been made to delineate seismic source zones in the study area (south India) based on seismicity parameters. The seismicity parameters and the maximum probable earthquake for these source zones were evaluated and used in the hazard evaluation. The probabilistic evaluation of seismic hazard for south India was carried out using a logic tree approach. Two different types of seismic sources, linear and areal, were considered in the present study to model the seismic sources in the region more precisely. In order to properly account for the attenuation characteristics of the region, three different attenuation relations were used with different weighting factors. The seismic hazard evaluation was done for probabilities of exceedance (PE) of 10% and 2% in 50 years. The spatial variation of rock level peak horizontal acceleration (PHA) and spectral acceleration (Sa) values corresponding to return periods of 475 and 2500 years for the entire study area is presented in this work. The peak ground acceleration (PGA) values at ground surface level were estimated for different NEHRP site classes by considering local site effects.

Relevance: 100.00%

Abstract:

We propose a simple, reliable method, based on the probability of transitions and the distribution of adjacent pixel pairs, for steganalysis of digital images in the spatial domain subjected to Least Significant Bit (LSB) replacement steganography. Our method is sensitive to the statistics of the underlying cover image and is a variant of the Sample Pair Method. We use the new method to reliably estimate the length of the hidden message. The novelty of our method is that it detects, from statistics of the underlying image that are invariant under embedding, whether the results it calculates are reliable. To our knowledge, no steganalytic method so far predicts from the properties of the stego image whether its results are accurate.
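
The abstract does not give enough detail to reproduce the proposed sample-pair variant, but the statistical effect this family of detectors exploits can be illustrated with the classic pairs-of-values chi-square test (a different, related detector, shown here only as a sketch on synthetic data): LSB replacement tends to equalize the histogram counts h(2k) and h(2k+1).

```python
import numpy as np
from scipy.stats import chi2

def embedding_probability(pixels):
    # Pairs-of-values chi-square test: under full LSB replacement the counts
    # h(2k) and h(2k+1) are equalized up to binomial noise, so the statistic
    # is small and the survival function (the "probability of embedding")
    # approaches 1; for an unmodified cover it stays near 0.
    h = np.bincount(pixels.ravel(), minlength=256).astype(float)
    even, odd = h[0::2], h[1::2]
    expected = (even + odd) / 2.0
    mask = expected > 5                       # drop sparse bins
    stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
    return chi2.sf(stat, int(mask.sum()) - 1)

rng = np.random.default_rng(1)
cover = np.clip(rng.normal(128, 8, (256, 256)), 0, 255).astype(np.uint8)
bits = rng.integers(0, 2, cover.shape).astype(np.uint8)
stego = (cover & 0xFE) | bits                 # LSB replacement at full capacity

print(embedding_probability(cover))           # near 0: no embedding suspected
print(embedding_probability(stego))           # near 1: embedding suspected
```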