120 results for Probability of detection
Abstract:
The effect of uncertainty in composite material properties on the aeroelastic response, vibratory loads, and stability of a hingeless helicopter rotor is investigated. The uncertainty impact on rotating natural frequencies of the blade is studied with Monte Carlo simulations and first-order reliability methods. The stochastic aeroelastic analyses in hover and forward flight are carried out with Monte Carlo simulations. The flap, lag, and torsion responses show considerable scatter from their baseline values, and the uncertainty impact varies with the azimuth angle. Furthermore, the blade response shows finite probability of resonance-type conditions caused by modal frequencies approaching multiples of the rotor speed. The 4/rev vibratory forces show large deviations from their baseline values. The lag mode damping shows considerable scatter due to uncertain material properties with an almost 40% probability of instability in hover.
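As a hedged illustration of the kind of uncertainty propagation described above, the following minimal Python sketch pushes a 10% scatter in an assumed bending stiffness through a simplified single-mode, Southwell-type rotating-frequency relation; all numerical values are placeholders, not the paper's composite rotor data.

```python
# Minimal sketch: Monte Carlo propagation of composite stiffness uncertainty
# to a rotating flap frequency via a Southwell-type relation.
# All numbers are illustrative placeholders, not the paper's rotor data.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

omega_rot = 40.0               # rotor speed, rad/s (assumed)
k_southwell = 1.15             # Southwell coefficient for the flap mode (assumed)
EI_mean, EI_cov = 1.0e4, 0.10  # bending stiffness mean (N m^2) and 10% COV (assumed)
m_per_len = 5.0                # blade mass per unit length, kg/m (assumed)
L = 5.0                        # blade radius, m (assumed)

EI = rng.normal(EI_mean, EI_cov * EI_mean, n_samples)
# Non-rotating first-flap frequency of a cantilever-like blade: 3.516*sqrt(EI/(m L^4))
omega_nr = 3.516 * np.sqrt(EI / (m_per_len * L**4))
# Rotating frequency from the Southwell relation
omega_r = np.sqrt(omega_nr**2 + k_southwell * omega_rot**2)

per_rev = omega_r / omega_rot
print(f"rotating flap frequency: {per_rev.mean():.3f}/rev +/- {per_rev.std():.3f}/rev")
# Probability of drifting close to an integer multiple of the rotor speed
# (a resonance-type condition)
print("P(within 0.02/rev of an integer):",
      np.mean(np.abs(per_rev - np.round(per_rev)) < 0.02))
```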
Abstract:
It is well known that increasing space activities pose a serious threat to future missions. This is mainly due to the presence of spent stages, rockets, spacecraft, and fragments, which can lead to collisions. The calculation of the collision probability of future space vehicles with orbital debris is necessary for estimating the risk. There is a lack of adequately catalogued and openly available detailed information on the explosion characteristics of trackable and untrackable debris. Such a situation compels one to develop a suitable mathematical model of the explosion and the resultant debris environment. Based on a study of the available information regarding the fragmentation, its subsequent evolution, and observation, it turns out to be possible to develop such a mathematical model connecting the dynamical features of the fragmentation with the geometrical/orbital characteristics of the debris and representing the environment through the idea of an equivalent breakup. (C) 1997 COSPAR.
Abstract:
Timer-based mechanisms are often used in several wireless systems to help a given (sink) node select the best helper node among many available nodes. Specifically, a node transmits a packet when its timer expires, and the timer value is a function of its local suitability metric. In practice, the best node gets selected successfully only if no other node's timer expires within a `vulnerability' window after its timer expiry. In this paper, we provide a complete closed-form characterization of the optimal metric-to-timer mapping that maximizes the probability of success for any probability distribution function of the metric. The optimal scheme is scalable, distributed, and much better than the popular inverse metric timer mapping. We also develop an asymptotic characterization of the optimal scheme that is elegant and insightful, and accurate even for a small number of nodes.
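The selection mechanism described above can be illustrated with a short Monte Carlo sketch of the popular inverse-metric timer mapping (the baseline the paper improves upon); the uniform metric distribution, maximum timer value, and vulnerability window are assumptions for illustration only.

```python
# Minimal sketch: success probability of timer-based selection with an
# inverse-metric mapping, T_i = T_max * (1 - mu_i), under a vulnerability window.
# Metric distribution, T_max, and the window are assumptions for illustration;
# this is the popular baseline mapping, not the paper's optimal mapping.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_trials = 10, 100_000
T_max, window = 1.0, 0.05             # max timer value and vulnerability window (assumed units)

mu = rng.random((n_trials, n_nodes))  # local suitability metrics, i.i.d. Uniform(0, 1)
timers = T_max * (1.0 - mu)           # best metric -> earliest expiry

sorted_timers = np.sort(timers, axis=1)
# Success: the earliest timer (belonging to the best node under a monotone
# mapping) expires at least `window` before the second earliest.
success = (sorted_timers[:, 1] - sorted_timers[:, 0]) >= window
print(f"P(best node selected) ~= {success.mean():.3f}")
```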
Abstract:
This paper is on the design and performance analysis of practical distributed space-time codes for wireless relay networks with multiple-antenna terminals. The amplify-and-forward scheme is used in such a way that each relay transmits a scaled version of the linear combination of the received symbols. We propose distributed generalized quasi-orthogonal space-time codes which are distributed among the source antennas and relays, and are valid for any number of relays. Assuming M-PSK and M-QAM signals, we derive a formula for the symbol error probability of the investigated scheme over Rayleigh fading channels. For sufficiently large SNR, a closed-form expression for the average symbol error rate (SER) is derived. The simplicity of the asymptotic results provides valuable insights into the performance of cooperative networks and suggests means of optimizing them. Our analytical results have been confirmed by simulation results, using full-rate full-diversity distributed codes.
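As a hedged point-to-point baseline (not the paper's distributed quasi-orthogonal code), the following sketch checks a Monte Carlo BPSK error rate over flat Rayleigh fading against the standard closed form.

```python
# Minimal sketch: Monte Carlo BER of BPSK over a flat Rayleigh fading channel,
# checked against the standard closed form 0.5*(1 - sqrt(g/(1+g))).
# This is a point-to-point baseline, not the paper's distributed space-time code.
import numpy as np

rng = np.random.default_rng(2)
n_bits = 500_000
snr_db = 10.0
g = 10 ** (snr_db / 10)        # average SNR per bit (assumed operating point)

bits = rng.integers(0, 2, n_bits)
x = 2 * bits - 1                                                            # BPSK symbols +/-1
h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)   # Rayleigh fading
n = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2 * g)
y = h * x + n
bits_hat = (np.real(np.conj(h) * y) > 0).astype(int)                        # coherent detection

ber_mc = np.mean(bits_hat != bits)
ber_cf = 0.5 * (1 - np.sqrt(g / (1 + g)))
print(f"simulated BER = {ber_mc:.4e}, closed form = {ber_cf:.4e}")
```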
Abstract:
Seismic design of reinforced soil structures involves many uncertainties that arise from the backfill soil properties and the tensile strength of the reinforcement, which are not addressed in current design guidelines. This paper highlights the significance of variability in the internal stability assessment of reinforced soil structures. Reliability analysis is applied to estimate the probability of failure, and a pseudo-static approach has been used to calculate the tensile strength and length of the reinforcement needed to maintain internal stability against tension and pullout failures. A logarithmic spiral failure surface has been considered in conjunction with the limit equilibrium method. Two modes of failure, namely tension failure and pullout failure, have been considered. The influence of variations in the backfill soil friction angle, the tensile strength of the reinforcement, and the horizontal seismic acceleration on the reliability indices against tension failure and pullout failure of the reinforced earth structure has been discussed.
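A minimal sketch of the reliability computation idea, assuming a toy capacity-versus-demand limit state with illustrative distributions rather than the paper's pseudo-static formulation:

```python
# Minimal sketch: Monte Carlo estimate of a failure probability and the
# corresponding reliability index for a toy limit state g = R - S
# (reinforcement capacity R vs. seismic demand S). Distributions and numbers
# are illustrative assumptions, not the paper's pseudo-static formulation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 1_000_000

R = rng.lognormal(mean=np.log(60.0), sigma=0.10, size=n)   # tensile capacity, kN/m (assumed)
S = rng.normal(loc=40.0, scale=8.0, size=n)                # pseudo-static demand, kN/m (assumed)

pf = np.mean(R - S <= 0.0)
beta = -norm.ppf(pf)
print(f"P_f ~= {pf:.4f}, reliability index beta ~= {beta:.2f}")
```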
Abstract:
Inspired by the demonstration that tool-use variants among wild chimpanzees and orangutans qualify as traditions (or cultures), we developed a formal model to predict the incidence of these acquired specializations among wild primates and to examine the evolution of their underlying abilities. We assumed that the acquisition of the skill by an individual in a social unit is crucially controlled by three main factors, namely probability of innovation, probability of socially biased learning, and the prevailing social conditions (sociability, or number of potential experts at close proximity). The model reconfirms the restriction of customary tool use in wild primates to the most intelligent radiation, great apes; the greater incidence of tool use in more sociable populations of orangutans and chimpanzees; and tendencies toward tool manufacture among the most sociable monkeys. However, it also indicates that sociable gregariousness is far more likely to produce the maintenance of invented skills in a population than solitary life, where the mother is the only accessible expert. We therefore used the model to explore the evolution of the three key parameters. The most likely evolutionary scenario is that where complex skills contribute to fitness, sociability and/or the capacity for socially biased learning increase, whereas innovative abilities (i.e., intelligence) follow indirectly. We suggest that the evolution of high intelligence will often be a byproduct of selection on abilities for socially biased learning that are needed to acquire important skills, and hence that high intelligence should be most common in sociable rather than solitary organisms. Evidence for increased sociability during hominin evolution is consistent with this new hypothesis. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
We have investigated a mathematical model of the process of activation of the X chromosomes in eutherian mammals. The model assumes that the activation is brought about over some definite time interval T by the complete saturation of N receptor sites on an X chromosome by M activating molecules (or multiples of M). The probability λ of a first hit on the receptor site is considered to be very much lower than that of subsequent hits; that is, we assume strong co-operative binding. Assuming further that an incomplete saturation of receptor sites is malfunctional, we can show that for proper activation of X chromosomes in normal diploid males and females, we must have λMT ≥ 3 and 0.96 ≤ N/M ≤ 1. An extension of this analysis for the triploid cases shows that under these conditions, we cannot explain the activation of two X's if the number of activating molecules is fixed at M. This suggests that there must be two classes of triploid embryos differing from each other in a step-wise manner in the number of activating molecules. In other words, triploids with two active X chromosomes would require 2M activating molecules as opposed to M molecules in triploids with a single active X. This interpretation of the two classes of triploids would be consistent with differing imprinting histories of the parental contributions to the triploid zygote.
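A hedged reading of the λMT ≥ 3 condition, under the added assumption (not stated in the abstract) that first hits arrive as a Poisson process of rate λM:

```latex
\[
  P(\text{no first hit within } T) \;=\; e^{-\lambda M T} \;\le\; e^{-3} \;\approx\; 0.05
  \qquad \text{whenever } \lambda M T \ge 3 .
\]
```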
Abstract:
We derive the computational cutoff rate, R_o, for coherent trellis-coded modulation (TCM) schemes on independent, identically distributed (i.i.d.) Rayleigh fading channels with (K, L) generalized selection combining (GSC) diversity, which combines the K paths with the largest instantaneous signal-to-noise ratios (SNRs) among the L available diversity paths. The cutoff rate is shown to be a simple function of the moment generating function (MGF) of the SNR at the output of the (K, L) GSC receiver. We also derive the union bound on the bit error probability of TCM schemes with (K, L) GSC in the form of a simple, finite integral. The effectiveness of this bound is verified through simulations.
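The MGF that the cutoff rate depends on can be estimated directly by simulation; the sketch below does so for assumed values of (K, L) and the branch SNR, without reproducing the paper's closed-form expressions.

```python
# Minimal sketch: Monte Carlo estimate of the MGF E[exp(-s * Gamma)] of the
# (K, L) generalized selection combining output SNR, where Gamma is the sum of
# the K largest of L i.i.d. exponential branch SNRs (Rayleigh fading).
# Branch SNR and (K, L) are assumed values; the paper's closed-form cutoff-rate
# expression is not reproduced here.
import numpy as np

rng = np.random.default_rng(4)
K, L = 2, 4
gamma_bar = 10 ** (8.0 / 10)          # average branch SNR, 8 dB (assumed)
n_trials = 200_000

branch_snr = rng.exponential(gamma_bar, size=(n_trials, L))
gsc_snr = np.sort(branch_snr, axis=1)[:, -K:].sum(axis=1)   # combine the K best branches

for s in (0.25, 0.5, 1.0):
    print(f"MGF at -{s}: ~= {np.mean(np.exp(-s * gsc_snr)):.4f}")
```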
Abstract:
This paper compares and analyzes the performance of distributed cophasing techniques for uplink transmission over wireless sensor networks. We focus on a time-division duplexing approach, and exploit the channel reciprocity to reduce the channel feedback requirement. We consider periodic broadcast of known pilot symbols by the fusion center (FC), and maximum likelihood estimation of the channel by the sensor nodes for the subsequent uplink cophasing transmission. We assume carrier and phase synchronization across the participating nodes for analytical tractability. We study binary signaling over frequency-flat fading channels, and quantify the system performance such as the expected gains in the received signal-to-noise ratio (SNR) and the average probability of error at the FC, as a function of the number of sensor nodes and the pilot overhead. Our results show that a modest amount of accumulated pilot SNR is sufficient to realize a large fraction of the maximum possible beamforming gain. We also investigate the performance gains obtained by censoring transmission at the sensors based on the estimated channel state, and the benefits obtained by using maximum ratio transmission (MRT) and truncated channel inversion (TCI) at the sensors in addition to cophasing transmission. Simulation results corroborate the theoretical expressions and show the relative performance benefits offered by the various schemes.
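A minimal sketch of the beamforming-gain behaviour described above, using an assumed Gaussian phase-error model whose variance shrinks with the accumulated pilot SNR (a simplification, not the paper's ML-estimation analysis):

```python
# Minimal sketch: beamforming gain of distributed cophasing with noisy phase
# estimates. Each node pre-rotates by its estimated channel phase; phase errors
# are modeled as zero-mean Gaussian with variance ~ 1/(2 * pilot SNR), an
# assumed simplification of channel estimation at the nodes.
import numpy as np

rng = np.random.default_rng(5)
n_nodes, n_trials = 20, 20_000

for pilot_snr_db in (0.0, 5.0, 10.0):
    pilot_snr = 10 ** (pilot_snr_db / 10)
    h = (rng.normal(size=(n_trials, n_nodes)) +
         1j * rng.normal(size=(n_trials, n_nodes))) / np.sqrt(2)
    phase_err = rng.normal(scale=np.sqrt(1 / (2 * pilot_snr)), size=(n_trials, n_nodes))
    # Each node rotates its transmission by the (noisy) estimated channel phase.
    combined = np.sum(h * np.exp(-1j * (np.angle(h) + phase_err)), axis=1)
    gain = np.mean(np.abs(combined) ** 2)
    ideal = np.mean(np.sum(np.abs(h), axis=1) ** 2)   # perfect-cophasing benchmark
    print(f"pilot SNR {pilot_snr_db:4.1f} dB: fraction of ideal gain = {gain / ideal:.3f}")
```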
Abstract:
Fluorescence quenching of a biologically active carboxamide, namely (E)-2-(4-chlorobenzylideneamino)-N-(2-chlorophenyl)-4,5,6,7-tetrahydrobenzo[b]thiophene-3-carboxamide [ECNCTTC], by aniline and carbon tetrachloride (CCl4) quenchers has been carried out at room temperature in different solvents using the steady-state method, and in one solvent using the time-resolved method, in order to understand the role of the quenching mechanisms. The Stern-Volmer plot has been found to be linear for all the solvents studied. The probability of quenching per encounter, p (p'), was determined in all the solvents and was found to be less than unity. Further, studies of the rate parameters and lifetime measurements in n-heptane and cyclohexane with aniline and carbon tetrachloride as quenchers show that the quenching is generally governed by the well-known Stern-Volmer (S-V) relation. The activation energy of quenching, E_a (or E_a'), was determined using the literature values of the activation energy of diffusion E_d and the experimentally determined values of p (or p'). It has been found that the activation energy E_a (or E_a') is greater than the activation energy for diffusion E_d in all solvents. Hence, the magnitudes of E_a (or E_a') as well as p (or p') indicate that the quenching mechanism is not solely due to material diffusion, but also includes a contribution from the activation energy. (C) 2011 Elsevier B.V. All rights reserved.
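For reference, the Stern-Volmer relation underlying the linear plots is I0/I = 1 + K_SV[Q]; the sketch below fits synthetic quenching data to it, with the concentrations, K_SV, and noise level chosen purely for illustration.

```python
# Minimal sketch: linear fit of synthetic quenching data to the standard
# Stern-Volmer relation I0/I = 1 + K_SV * [Q]. The quencher concentrations,
# the "true" K_SV, and the noise level are illustrative assumptions, not the
# measured ECNCTTC data.
import numpy as np

rng = np.random.default_rng(6)
q_conc = np.linspace(0.0, 0.10, 11)      # quencher concentration, mol/L (assumed)
k_sv_true = 25.0                         # Stern-Volmer constant, L/mol (assumed)
i0_over_i = 1 + k_sv_true * q_conc + rng.normal(0, 0.02, q_conc.size)

slope, intercept = np.polyfit(q_conc, i0_over_i, 1)
print(f"fitted K_SV = {slope:.1f} L/mol, intercept = {intercept:.3f} (expected ~1)")
# Given the fluorescence lifetime tau0, the bimolecular quenching rate is k_q = K_SV / tau0.
```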
Abstract:
The capacity region of a two-user Gaussian Multiple Access Channel (GMAC) with complex finite input alphabets and a continuous output alphabet is studied. When both users are equipped with the same code alphabet, it is shown that rotating one of the users' alphabets by an appropriate angle can make the new pair of alphabets not only uniquely decodable, but also enlarge the capacity region. For this set-up, we identify the primary problem to be finding appropriate angle(s) of rotation between the alphabets such that the capacity region is maximally enlarged. It is shown that the angle of rotation which provides the maximum enlargement of the capacity region also minimizes the union bound on the probability of error of the sum alphabet, and vice versa. The optimum angle(s) of rotation vary with the SNR. Through simulations, optimal angle(s) of rotation that give the maximum enlargement of the capacity region of the GMAC with some well-known alphabets, such as M-QAM and M-PSK for some M, are presented for several values of SNR. It is shown that for a large number of points in the alphabets, the capacity gains due to rotation progressively reduce. As the number of points N tends to infinity, our results match the results in the literature, wherein the capacity region of the Gaussian code alphabet does not change with rotation at any SNR.
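A small sketch of the rotation idea: sweep the rotation angle of one user's QPSK alphabet and track the minimum distance of the resulting sum alphabet (zero distance means coinciding sum points, i.e. not uniquely decodable). Using this distance as a stand-in for the union bound, and the QPSK choice itself, are illustrative assumptions.

```python
# Minimal sketch: minimum distance of the sum alphabet {x1 + exp(j*theta)*x2}
# for two unit-energy QPSK users, swept over the rotation angle theta.
# A zero minimum distance means two sum points coincide, so the pair is not
# uniquely decodable; maximizing this distance is used here only as a rough
# proxy for minimizing the union bound on the sum-alphabet error probability.
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK

best_theta, best_dmin = 0.0, -1.0
for theta in np.linspace(0.0, np.pi / 2, 181):
    sums = (qpsk[:, None] + np.exp(1j * theta) * qpsk[None, :]).ravel()
    d = np.abs(sums[:, None] - sums[None, :])
    np.fill_diagonal(d, np.inf)        # ignore self-distances
    dmin = d.min()                     # 0 when two sum points coincide
    if dmin > best_dmin:
        best_dmin, best_theta = dmin, theta

print(f"best rotation ~= {np.degrees(best_theta):.1f} deg, "
      f"sum-alphabet d_min ~= {best_dmin:.3f}")
```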
Abstract:
In this paper, the use of probability theory in the reliability-based optimum design of a reinforced gravity retaining wall is described. The formulation for computing the system reliability index is presented. A parametric study is conducted using the advanced first-order second-moment method (AFOSM) developed by Hasofer-Lind and Rackwitz-Fiessler (HL-RF) to assess the effect of uncertainties in design parameters on the probability of failure of the reinforced gravity retaining wall. In total, 8 modes of failure are considered, viz. overturning, sliding, eccentricity, bearing capacity failure, and shear and moment failure in the toe slab and heel slab. The analysis is performed by treating the backfill soil properties, foundation soil properties, geometric properties of the wall, reinforcement properties, and concrete properties as random variables. These results are used to investigate optimum wall proportions for different coefficients of variation of φ (5% and 10%) and a target system reliability index (βt) in the range of 3-3.2.
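A minimal HL-RF (AFOSM) iteration for a toy two-variable limit state, with assumed statistics rather than any of the paper's eight failure modes; for this linear case the result can be checked analytically.

```python
# Minimal sketch: Hasofer-Lind / Rackwitz-Fiessler (HL-RF) iteration for the
# reliability index of a toy limit state g = R - S with independent normal
# variables R (resisting moment) and S (overturning moment). The statistics are
# illustrative assumptions, not any of the paper's eight failure modes.
import numpy as np

mu = np.array([150.0, 100.0])      # means of R and S (assumed, kN m)
sigma = np.array([15.0, 20.0])     # standard deviations (assumed, kN m)

def g(x):                          # limit state in physical space
    return x[0] - x[1]

def grad_g(x):                     # gradient with respect to the physical variables
    return np.array([1.0, -1.0])

u = np.zeros(2)                    # start at the origin of standard normal space
for _ in range(20):
    x = mu + sigma * u             # map back to physical space (independent normals)
    grad_u = grad_g(x) * sigma     # chain rule: dg/du = dg/dx * dx/du
    # HL-RF update: project onto the linearized limit-state surface
    u = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u

beta = np.linalg.norm(u)
print(f"reliability index beta ~= {beta:.3f}")   # analytic: 50 / sqrt(15^2 + 20^2) = 2.0
```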
Abstract:
In this work, an attempt has been made to evaluate the spatial variation of peak horizontal acceleration (PHA) and spectral acceleration (SA) values at rock level for south India based on probabilistic seismic hazard analysis (PSHA). These values were estimated by considering the uncertainties involved in magnitude, hypocentral distance, and attenuation of seismic waves. Different models were used for the hazard evaluation, and they were combined using a logic tree approach. For evaluating the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1°, and the hazard parameters were calculated at the centre of each of these grid cells by considering all the seismic sources within a radius of 300 km. Rock level PHA values and SA at 1 s corresponding to a 10% probability of exceedance in 50 years were evaluated for all the grid points. Maps showing the spatial variation of rock level PHA values and SA at 1 s for the whole of south India are presented in this paper. To compare the seismic hazard for some of the important cities, the seismic hazard curves and the uniform hazard response spectrum (UHRS) at rock level with 10% probability of exceedance in 50 years are also presented in this work.
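For reference, the stated hazard level converts to a mean return period under the usual Poisson-occurrence assumption (generic hazard arithmetic, not a value taken from the paper):

```latex
\[
  P(\text{exceedance in } t) = 1 - e^{-t/T_R}
  \;\Rightarrow\;
  T_R = \frac{-t}{\ln(1 - P)} = \frac{-50}{\ln(0.90)} \approx 475 \text{ years}.
\]
```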
Abstract:
The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed-point, i.e. a monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm, which get around this limitation. The first replaces selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms, and provide conceptually new schemes for error correction.
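A small numerical check of the single-level $\frac{\pi}{3}$ construction (the recursion that yields $\epsilon^{2q+1}$ with $q$ queries is not reproduced here): for $V = U R_s U^\dagger R_t U$ with $\frac{\pi}{3}$-phase selective operations, the failure probability $1 - |\langle t|V|s\rangle|^2$ equals $\epsilon^3$, where $\epsilon = 1 - |\langle t|U|s\rangle|^2$.

```python
# Minimal numerical check of the pi/3 fixed-point construction in a small
# Hilbert space: for V = U R_s U^dagger R_t U with pi/3-phase selective
# operations, the failure probability 1 - |<t|V|s>|^2 equals eps^3, where
# eps = 1 - |<t|U|s>|^2. A random unitary U is used for illustration.
import numpy as np

rng = np.random.default_rng(7)
dim = 4
# Random unitary via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)

s = np.zeros(dim, dtype=complex); s[0] = 1.0      # source state |s>
t = np.zeros(dim, dtype=complex); t[1] = 1.0      # target state |t>

phase = np.exp(1j * np.pi / 3)
R_s = np.eye(dim, dtype=complex) - (1 - phase) * np.outer(s, s.conj())
R_t = np.eye(dim, dtype=complex) - (1 - phase) * np.outer(t, t.conj())

eps = 1 - abs(t.conj() @ U @ s) ** 2              # failure probability of U alone
V = U @ R_s @ U.conj().T @ R_t @ U
fail = 1 - abs(t.conj() @ V @ s) ** 2             # failure probability after one level
print(f"eps = {eps:.6f}, eps^3 = {eps**3:.6f}, failure after V = {fail:.6f}")
```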
Abstract:
Optimal maintenance policies for a machine whose performance degrades with age and which is subject to failure are derived using optimal control theory. The optimal policies are shown to be, normally, of a bang-coast nature, except when the probability of machine failure is a function of maintenance. It is also shown, in the deterministic case, that a higher depreciation rate tends to reverse this policy to coast-bang. When the probability of failure is a function of maintenance, considerable computational effort is needed to obtain an optimal policy, and the resulting policy is not easily implementable. For this case also, an optimal policy in the class of bang-coast policies is derived using a semi-Markov decision model. A simple procedure for modifying the probability of machine failure with maintenance is employed. The results obtained extend and unify recent results for this problem along both theoretical and practical lines. Numerical examples are presented to illustrate the results obtained.