120 results for "Probability of detection"
Abstract:
The firing characteristics of a simple triggered vacuum gap (TVG) using lead zirconate titanate as the dielectric material in the trigger gap are described. This TVG has a long life of about 2000 firings without appreciable deterioration of its electrical properties for main discharge currents up to 3 kA, and is much superior to those made with Supramica (Mycalex Corporation of America) and silicon carbide used in our earlier investigations. The effects of the variation of trigger voltage, trigger circuit, trigger pulse duration, trigger pulse energy, main gap voltage, main gap separation and main circuit energy on the firing characteristics have been studied. The trigger resistance progressively decreases with the number of firings of the trigger gap as well as of the main gap. This decrease in the trigger resistance is more pronounced for main discharge currents exceeding 10 kA. The minimum trigger current required for reliable firing decreases with increasing trigger voltage up to a threshold value of 1.2 kV, beyond which it saturates at 3.0 A. This value is less than that obtained with Supramica as the dielectric material. One hundred percent firing probability of the TVG at main gap voltages as low as 50 V is possible, and this low-voltage breakdown of the main gap appears to be similar to the breakdown at low pressures between a moving plasma and the cold electrodes immersed in it, as reported by other workers.
Abstract:
In this letter, we investigate the circular differential deflection of a light beam refracted at the interface of an optically active medium. We show that the difference between the angles of deviation of the two circularly polarized components of the transmitted beam is enhanced manyfold near total internal reflection, which suggests a simple way of increasing the limit of detection of chiro-optical measurements. (C) 2012 Optical Society of America
Abstract:
We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol as standardized in the IEEE 802.11 Distributed Coordination Function (DCF). The approximation is that, when n of the M queues are non-empty, the (transmission) attempt probability of each of the n non-empty nodes is given by the long-term (transmission) attempt probability of n saturated nodes. With the arrival of packets into the M queues according to independent Poisson processes, the SDAR approximation reduces a single cell with non-saturated nodes to a Markovian coupled queueing system. We provide a sufficient condition under which the joint queue length Markov chain is positive recurrent. For the symmetric case of equal arrival rates and finite and equal buffers, we develop an iterative method which leads to accurate predictions for important performance measures such as collision probability, throughput and mean packet delay. We replace the MAC layer with the SDAR model of contention by modifying the NS-2 source code pertaining to the MAC layer, keeping all other layers unchanged. By this model-based simulation technique at the MAC layer, we achieve speed-ups (w.r.t. MAC layer operations) up to 5.4. Through extensive model-based simulations and numerical results, we show that the SDAR model is an accurate model for the DCF MAC protocol in single cells. (C) 2012 Elsevier B.V. All rights reserved.
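The saturated attempt probability that the SDAR approximation plugs in for n non-empty nodes can be computed from Bianchi's fixed point for saturated DCF nodes. A minimal sketch of that fixed point, assuming illustrative backoff parameters (W = 16, m = 6 are defaults for the sketch, not values taken from the paper):

```python
def saturated_attempt_probability(n, W=16, m=6, iters=500):
    """Damped fixed-point iteration for Bianchi's saturated DCF model:
      tau = 2(1 - 2p) / ((1 - 2p)(W + 1) + p*W*(1 - (2p)^m)),
      p   = 1 - (1 - tau)^(n - 1),
    where tau is the per-slot attempt probability of one of n saturated
    nodes and p its conditional collision probability."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * new  # damping keeps the iteration stable
    return tau, p
```

Under SDAR, when n of the M queues are non-empty, each non-empty node attempts with this tau(n); for n = 1 the formula reduces to tau = 2/(W + 1), since the collision probability is zero.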
Abstract:
A lightning strike in the neighborhood can induce significant currents in tall down conductors. Though the magnitude of the induced current in this case is much smaller than that encountered during a direct strike, the probability of occurrence and the frequency content are higher. In view of this, appropriate knowledge of the characteristics of such induced currents is relevant for the scrutiny of recorded currents and for the evaluation of interference to electrical and electronic systems in the vicinity. Previously, a study was carried out on the characteristics of induced currents assuming ideal conditions, i.e., that there were no influencing objects in the vicinity of the down conductor and the channel. However, some influencing conducting bodies will always be present, such as trees, electricity and communication towers, buildings, and other elevated objects that can affect the induced currents in a down conductor. The present work is carried out to understand the influence of nearby conducting objects on the characteristics of induced currents due to a strike to ground in the vicinity of a tall down conductor. For the study, an electromagnetic model is employed to model the down conductor, channel, and neighboring conducting objects, and the Numerical Electromagnetic Code-2 is used for numerical field computations. Neighboring objects of different heights, of different shapes, and at different locations are considered. It is found that the neighboring objects have a significant influence on the magnitude and nature of induced currents in a down conductor when the height of the nearby conducting object is comparable to that of the down conductor.
Assessment of seismic hazard and liquefaction potential of Gujarat based on probabilistic approaches
Abstract:
Gujarat is one of the fastest-growing states of India, with high industrial activity coming up in the major cities of the state. It is indispensable to analyse seismic hazard, as the region is considered the most seismically active in the stable continental region of India. The Bhuj earthquake of 2001 caused extensive damage in terms of casualties and economic loss. In the present study, the seismic hazard of Gujarat was evaluated using a probabilistic approach with a logic tree framework that minimizes the uncertainties in hazard assessment. The peak horizontal acceleration (PHA) and spectral acceleration (Sa) values were evaluated for 10% and 2% probabilities of exceedance in 50 years. Two important geotechnical effects of earthquakes, site amplification and liquefaction, are also evaluated, considering site characterization based on site classes. The liquefaction return period for the entire state of Gujarat is evaluated using a performance-based approach. The maps of PHA and PGA values prepared in this study are very useful for seismic hazard mitigation of the region in the future.
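The 10% and 2% in 50 years exceedance levels correspond to the conventional 475-year and ~2475-year (often rounded to 2500) return periods via the Poisson hazard relation P = 1 − exp(−t/T). A quick sketch of the conversion (the function name is ours):

```python
import math

def return_period(p_exceed, t_years=50.0):
    """Return period T (years) for exceedance probability p_exceed over
    an exposure of t_years, assuming Poissonian event occurrences:
      p = 1 - exp(-t / T)  =>  T = -t / ln(1 - p)."""
    return -t_years / math.log(1.0 - p_exceed)
```

return_period(0.10) gives about 475 years and return_period(0.02) about 2475 years, which is why hazard maps at these exceedance levels are labelled with those return periods.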
Abstract:
We studied the development of surface instabilities leading to the generation of multielectron bubbles (MEBs) in superfluid helium upon the application of a pulsed electric field. We found the statistical distribution of the charge of individual instabilities to be strongly dependent on the duration of the electric field pulse. The rate and probability of generation of these instabilities in relation to the temporal characteristics of the applied field were also investigated.
Abstract:
Future space-based gravity wave (GW) experiments such as the Big Bang Observatory (BBO), with their excellent projected one-sigma angular resolution, will measure the luminosity distance to a large number of GW sources to high precision, and the redshifts of the single galaxies in the narrow solid angles towards the sources will provide the redshifts of the gravity wave sources. One-sigma BBO beams contain the actual source in only 68% of the cases; the beams that do not contain the source may contain a spurious single galaxy, leading to misidentification. To increase the probability of the source falling within the beam, larger beams have to be considered, decreasing the chances of finding single galaxies in the beams. Saini et al. [T.D. Saini, S.K. Sethi, and V. Sahni, Phys. Rev. D 81, 103009 (2010)] argued, largely analytically, that identifying even a small number of GW source galaxies furnishes a rough distance-redshift relation, which could be used to further resolve sources that have multiple objects in the angular beam. In this work we further develop this idea by introducing a self-calibrating iterative scheme which works in conjunction with Monte Carlo simulations to determine the luminosity distance to GW sources with progressively greater accuracy. This iterative scheme allows one to determine the equation of state of dark energy to within an accuracy of a few percent for a gravity wave experiment possessing a beam width an order of magnitude larger than BBO (and therefore having a far poorer angular resolution). This is achieved with no prior information about the nature of dark energy from other data sets such as type Ia supernovae, baryon acoustic oscillations, cosmic microwave background, etc. DOI: 10.1103/PhysRevD.87.083001
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots, to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in the Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of its throughput performance.
Abstract:
We theoretically explore the annihilation of vortex dipoles, generated when an obstacle moves through an oblate Bose-Einstein condensate, and examine the energetics of the annihilation event. We show that the grey soliton, which results from the vortex dipole annihilation, is lower in energy than the vortex dipole. We also investigate the annihilation events numerically and observe that annihilation occurs only when the vortex dipole overtakes the obstacle and comes closer than the coherence length. Furthermore, we find that noise reduces the probability of annihilation events. This may explain the lack of annihilation events in experimental realizations.
Abstract:
The delineation of seismic source zones plays an important role in the evaluation of seismic hazard. In most of the studies the seismic source delineation is done based on geological features. In the present study, an attempt has been made to delineate seismic source zones in the study area (south India) based on the seismicity parameters. Seismicity parameters and the maximum probable earthquake for these source zones were evaluated and were used in the hazard evaluation. The probabilistic evaluation of seismic hazard for south India was carried out using a logic tree approach. Two different types of seismic sources, linear and areal, were considered in the present study to model the seismic sources in the region more precisely. In order to properly account for the attenuation characteristics of the region, three different attenuation relations were used with different weightage factors. Seismic hazard evaluation was done for the probability of exceedance (PE) of 10% and 2% in 50 years. The spatial variation of rock level peak horizontal acceleration (PHA) and spectral acceleration (Sa) values corresponding to return periods of 475 and 2500 years for the entire study area are presented in this work. The peak ground acceleration (PGA) values at ground surface level were estimated based on different NEHRP site classes by considering local site effects.
Abstract:
We propose a simple, reliable method based on the probability of transitions and the distribution of adjacent pixel pairs for steganalysis of digital images in the spatial domain subjected to Least Significant Bit (LSB) replacement steganography. Our method is sensitive to the statistics of the underlying cover image and is a variant of the Sample Pair Method. We use the new method to estimate the length of the hidden message reliably. The novelty of our method is that it detects, from the statistics of the underlying image (which are invariant with embedding), whether the results it calculates are reliable or not. To our knowledge, no steganalytic method so far predicts from the properties of the stego image whether its results are accurate or not.
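The embedding operation that sample-pair-style detectors target, LSB replacement, overwrites the least significant bit of each cover pixel with a message bit, which statistically equalizes pixel pairs of the form 2k/2k+1. A minimal sketch on hypothetical byte-valued pixels:

```python
def embed_lsb(pixels, bits):
    """LSB replacement: overwrite the least significant bit of the
    first len(bits) pixels with the message bits."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | (b & 1)
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits message bits from the stego pixels."""
    return [p & 1 for p in pixels[:n_bits]]
```

For example, embed_lsb([10, 11, 12, 13], [1, 0, 1, 1]) yields [11, 10, 13, 13], and extract_lsb recovers [1, 0, 1, 1]; note that embedding never changes a pixel by more than 1, which is why detection must rely on pair statistics rather than visible distortion.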
Abstract:
The uncertainty in material properties and traffic characterization in the design of flexible pavements has led to significant efforts in recent years to incorporate reliability methods and probabilistic design procedures for the design, rehabilitation, and maintenance of pavements. In the mechanistic-empirical (ME) design of pavements, despite the fact that there are multiple failure modes, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually considered as independent failure events. This study carries out the reliability analysis for a flexible pavement section for these failure criteria based on the first-order reliability method (FORM) and the second-order reliability method (SORM) techniques and the crude Monte Carlo simulation. Through a sensitivity analysis, the most critical parameter affecting the design reliability for both fatigue and rutting failure criteria was identified as the surface layer thickness. However, reliability analysis in pavement design is most useful if it can be efficiently and accurately applied to components of pavement design and the combination of these components in an overall system analysis. The study shows that for the pavement section considered, there is a high degree of dependence between the two failure modes, and demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of component failures. Thus, the need to consider the system reliability in the pavement analysis is highlighted, and the study indicates that the improvement of pavement performance should be tackled in the light of reducing this undesirable event of simultaneous failure and not merely the consideration of the more critical failure mode. 
Furthermore, this probability of simultaneous occurrence of failures is seen to increase considerably with small increments in the mean traffic loads, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds to the probability of failure, which provides a better estimate of the probability of failure, as validated from the results obtained from Monte Carlo simulation (MCS).
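The point about strong dependence between the two failure modes can be illustrated with a crude Monte Carlo sketch in the spirit of the study: two limit states sharing a common random variable standing in for the surface layer thickness. All distributions and limit-state functions below are hypothetical, chosen only to show the joint failure probability approaching the component probabilities:

```python
import random

def mc_failure_probabilities(n=50_000, seed=1):
    """Crude Monte Carlo for two correlated failure modes (fatigue,
    rutting) that share a common thickness variable and a common load.
    Returns (P_fatigue, P_rutting, P_both). All parameters are
    illustrative, not calibrated pavement values."""
    random.seed(seed)
    f1 = f2 = f12 = 0
    for _ in range(n):
        h = random.gauss(150.0, 15.0)                # shared layer thickness (mm)
        load = random.gauss(1.0, 0.25)               # normalized traffic load
        cap1 = 0.008 * h * random.gauss(1.0, 0.05)   # fatigue capacity
        cap2 = 0.009 * h * random.gauss(1.0, 0.05)   # rutting capacity
        a, b = load > cap1, load > cap2
        f1 += a
        f2 += b
        f12 += a and b
    return f1 / n, f2 / n, f12 / n
```

Because both modes see the same load and thickness, P_both comes out close to min(P_fatigue, P_rutting) rather than to the product that independence would predict, which is the behavior the abstract highlights.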
Abstract:
The evolution of sexually dimorphic, elaborate male traits that are seemingly maladaptive may be driven by sexual selection (male-male competition and/or female mate choice). Tusk possession in the Asian elephant is sexually dimorphic and exaggerated, but its role in male-male competition has not yet been determined. We examined the role of the tusks in establishing dominance along with two other known male-male signals, namely, body size and musth (a temporary physiologically heightened sexual state), in an Asian elephant population in northeastern India with equal proportions of tusked and tuskless males. We observed 116 agonistic interactions with clear dominance outcomes between adult (>15 years) males during 458 field days in the dry season months of 2008-2011. A generalized linear mixed-effects model was used to predict the probability of winning as a function of body size, tusk possession and musth status relative to the opponent. A hierarchy of the three male-male signals emerged from this analysis, with musth overriding body size and body size overriding tusk possession. In this elephant population tusk possession thus plays a relatively minor role in male-male competition. An important implication of musth and body size being stronger determinants of dominance than tusk possession is that it could facilitate rapid evolution of tuskless males in the population under artificial selection against tusked individuals, which are poached for ivory. (C) 2013 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.
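The model's structure, the probability of winning as a logistic function of relative musth status, body size, and tusk possession, can be sketched with hypothetical fixed-effect coefficients ordered to reflect the reported hierarchy (musth > body size > tusks); the numbers are illustrative, not the fitted estimates:

```python
import math

def win_probability(d_musth, d_size, d_tusk, beta=(2.5, 1.2, 0.4)):
    """Logistic sketch of the GLMM's fixed effects: each predictor is
    coded relative to the opponent (-1, 0, or +1), and the coefficient
    ordering beta_musth > beta_size > beta_tusk mirrors the signal
    hierarchy found in the study. Coefficients are hypothetical."""
    eta = beta[0] * d_musth + beta[1] * d_size + beta[2] * d_tusk
    return 1.0 / (1.0 + math.exp(-eta))
```

With these illustrative coefficients, a non-musth male that is both larger and tusked still loses more often than not to a musth opponent (win_probability(-1, 1, 1) < 0.5), capturing the sense in which musth overrides the other two signals.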
Abstract:
The detection of contaminated food at every stage of processing requires new technologies for the fast identification and isolation of toxins in food. Since the effects of food contaminants on human health are severe, the need for pioneering technologies has also been increasing over the last few decades. In the current study, MDA (malonaldehyde) was prepared by hydrolysis of 1,1,3,3-tetramethoxypropane in HCl media and used in the electrochemical studies. The electrochemical sensor was fabricated with a glassy carbon electrode modified with polyaniline. These sensors were used for the detection of the sodium salt of malonaldehyde and showed high sensitivity in the concentration range of ~1 x 10(-1) M to 1 x 10(-2) M. Tafel plots show the variation of overpotential from -1.73 V to -3.74 V up to 10(-5) mol/L, indicating the lower limit of detection of the system. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
The industrial production and commercial applications of titanium dioxide nanoparticles have increased considerably in recent times, which has increased the probability of environmental contamination with these agents and of their adverse effects on living systems. This study was designed to assess the genotoxic potential of TiO2 NPs at high exposure concentrations, their bio-uptake, and the oxidative stress they generated, a recognised cause of genotoxicity. Allium cepa root tips were treated with TiO2 NP dispersions at four different concentrations (12.5, 25, 50, 100 mu g/mL). A dose-dependent decrease in the mitotic index (from 69 to 21) and an increase in the number of distinctive chromosomal aberrations were observed. Optical, fluorescence and confocal laser scanning microscopy revealed chromosomal aberrations, including chromosomal breaks and sticky, multipolar, and laggard chromosomes, and micronucleus formation. The chromosomal aberrations and DNA damage were also validated by the comet assay. The bio-uptake of TiO2 in particulate form was the key cause of reactive oxygen species generation, which in turn was probably the cause of the DNA aberrations and genotoxicity observed in this study.