Abstract:
We study the tradeoff between the average error probability and the average queueing delay of messages that randomly arrive at the transmitter of a point-to-point discrete memoryless channel using variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay, in the regime of large average delay, are obtained. Upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem for characterizing the rate of transmission as a function of queue size given an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted into a problem of minimizing the average cost of an unconstrained Markov decision problem. A simple heuristic policy is proposed which approximately achieves the optimal average cost.
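The Lagrangian conversion described above can be sketched on a toy queue (all numbers below are illustrative assumptions, not the paper's model): states are queue lengths, actions are transmission rates, the per-step cost is the queue length (a delay proxy) plus a Lagrange multiplier times a made-up error-probability proxy that grows with the rate, and the resulting unconstrained average-cost problem is solved by relative value iteration.

```python
# Toy sketch (assumed numbers, not the paper's model) of converting a
# constrained average-cost problem into an unconstrained one via a
# Lagrange multiplier, then solving it by relative value iteration.
Q = 10                 # queue capacity (states 0..Q)
P_ARRIVAL = 0.4        # Bernoulli arrivals per slot
RATES = [0, 1, 2]      # messages served per slot (the "rate" action)
LAM = 2.0              # Lagrange multiplier on the error-probability proxy

def err_proxy(r):
    # hypothetical stand-in: error probability grows with transmission rate
    return 0.1 * r * r

def step_cost(s, r):
    # queueing-delay cost (queue length) + lambda * error cost
    return s + LAM * err_proxy(r)

def expected_h(h, s, r):
    # expected relative value after serving r and one Bernoulli arrival
    base = max(0, s - r)
    return P_ARRIVAL * h[min(Q, base + 1)] + (1 - P_ARRIVAL) * h[base]

def relative_value_iteration(tol=1e-9, max_iter=100000):
    h = [0.0] * (Q + 1)
    gain = 0.0
    for _ in range(max_iter):
        t = [min(step_cost(s, r) + expected_h(h, s, r) for r in RATES)
             for s in range(Q + 1)]
        gain = t[0]                       # normalize at reference state 0
        new_h = [v - gain for v in t]
        if max(abs(a - b) for a, b in zip(new_h, h)) < tol:
            h = new_h
            break
        h = new_h
    policy = [min(RATES, key=lambda r: step_cost(s, r) + expected_h(h, s, r))
              for s in range(Q + 1)]
    return gain, policy
```

The recovered policy has the queue-dependent structure the abstract alludes to: it idles when the queue is empty and transmits at higher rates as the queue grows.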
Abstract:
Low-complexity near-optimal detection of large-MIMO signals has attracted recent research attention. Recently, we proposed a local neighborhood search algorithm, namely the reactive tabu search (RTS) algorithm, as well as a factor-graph-based belief propagation (BP) algorithm, for low-complexity large-MIMO detection. The motivation for the present work arises from two observations on these algorithms: i) although RTS achieves close-to-optimal performance for 4-QAM in large dimensions, significant performance improvement is still possible for higher-order QAM (e.g., 16- and 64-QAM); ii) BP also achieves near-optimal performance in large dimensions, but only for the {±1} alphabet. In this paper, we improve the large-MIMO detection performance for higher-order QAM signals using a hybrid algorithm that employs both RTS and BP. In particular, motivated by the observation that when a detection error occurs at the RTS output, the least significant bits (LSBs) of the symbols are most often in error, we propose to first reconstruct and cancel the interference due to bits other than the LSBs at the RTS output, and to feed the interference-cancelled received signal to the BP algorithm to improve the reliability of the LSBs. The output of the BP is then fed back to RTS for the next iteration. Simulation results show that the proposed algorithm outperforms the RTS algorithm as well as the semidefinite relaxation (SDR) and Gaussian tree approximation (GTA) algorithms.
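The neighborhood-search ingredient can be illustrated with a plain (non-reactive) tabu search on a toy detection problem: minimize ||y - Hx||² over x in {-1,+1}^n by single-coordinate flips. This is a generic sketch, not the authors' RTS (there is no reactive adaptation of the tabu duration), and the tenure and iteration counts are assumptions.

```python
# Plain tabu-search sketch for MIMO-style detection: minimize ||y - Hx||^2
# over x in {-1,+1}^n by single-coordinate flips. Not the reactive variant
# (RTS) of the abstract; sizes, tenure and iteration budget are assumptions.
def cost(H, y, x):
    # squared Euclidean distance between y and Hx
    return sum((yi - sum(hij * xj for hij, xj in zip(row, x))) ** 2
               for row, yi in zip(H, y))

def tabu_detect(H, y, x0, iters=50, tenure=3):
    x = list(x0)
    best_x, best_c = list(x), cost(H, y, x)
    tabu = {}   # coordinate -> iteration at which it becomes admissible again
    for it in range(iters):
        candidates = []
        for i in range(len(x)):
            x[i] = -x[i]                 # tentatively flip coordinate i
            c = cost(H, y, x)
            x[i] = -x[i]                 # undo the flip
            # tabu moves are allowed only if they beat the global best
            # (the usual aspiration criterion)
            if tabu.get(i, -1) < it or c < best_c:
                candidates.append((c, i))
        if not candidates:
            continue
        c, i = min(candidates)           # best admissible neighbor
        x[i] = -x[i]
        tabu[i] = it + tenure
        if c < best_c:
            best_x, best_c = list(x), c
    return best_x, best_c
```

The tabu list is what lets the search escape local minima: unlike greedy descent, the move to the best admissible neighbor is taken even when it worsens the cost, while the global best found so far is retained.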
Abstract:
Future space-based gravity wave (GW) experiments such as the Big Bang Observatory (BBO), with their excellent projected one-sigma angular resolution, will measure the luminosity distance to a large number of GW sources to high precision, and the redshifts of the single galaxies lying in the narrow solid angles towards the sources will provide the redshifts of the GW sources. One-sigma BBO beams contain the actual source in only 68% of the cases; the beams that do not contain the source may contain a spurious single galaxy, leading to misidentification. To increase the probability of the source falling within the beam, larger beams have to be considered, which decreases the chances of finding single galaxies in the beams. Saini et al. [T.D. Saini, S.K. Sethi, and V. Sahni, Phys. Rev. D 81, 103009 (2010)] argued, largely analytically, that identifying even a small number of GW source galaxies furnishes a rough distance-redshift relation, which could be used to further resolve sources that have multiple objects in the angular beam. In this work we develop this idea further by introducing a self-calibrating iterative scheme which works in conjunction with Monte Carlo simulations to determine the luminosity distance to GW sources with progressively greater accuracy. This iterative scheme allows one to determine the equation of state of dark energy to within an accuracy of a few percent for a gravity wave experiment possessing a beam width an order of magnitude larger than BBO (and therefore a far poorer angular resolution). This is achieved with no prior information about the nature of dark energy from other data sets such as type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background, etc. DOI: 10.1103/PhysRevD.87.083001
Abstract:
Spatial information at the landscape scale is extremely important for conservation planning, especially in the case of long-ranging vertebrates. The biodiversity-rich Anamalai hill ranges in the Western Ghats of southern India hold a viable population for the long-term conservation of the Asian elephant. Through rapid but extensive field surveys we mapped elephant habitat, corridors, vegetation and land-use patterns, estimated the elephant population density and structure, and assessed elephant-human conflict across this landscape. GIS and remote sensing analyses indicate that elephants are distributed among three blocks over a total area of about 4600 km². Approximately 92% remains contiguous because of four corridors; however, under 4000 km² of this area may be effectively used by elephants. Nine landscape elements were identified, including five natural vegetation types, of which tropical moist deciduous forest is dominant. Population density, assessed through the dung count method using line transects covering 275 km of walk across the effective elephant habitat of the landscape, yielded a mean density of 1.1 (95% CI = 0.99-1.2) elephants/km². Population structure from direct sightings of elephants showed that adult males constitute just 2.9% and adult females 42.3% of the population, with the rest being sub-adults (27.4%), juveniles (16%) and calves (11.4%). Sex ratios show an increasing skew toward females from juvenile (1:1.8) to sub-adult (1:2.4) and adult (1:14.7), indicating higher mortality of sub-adult and adult males that is most likely due to historical poaching for ivory.
A rapid questionnaire survey and secondary data on elephant-human conflict from forest department records reveal that villages in and around the forest divisions on the eastern side of the landscape experience higher levels of elephant-human conflict than those on the western side; this seems to relate to a greater degree of habitat fragmentation and a higher percentage of farmers cultivating annual crops in the east. We provide several recommendations that could help maintain population viability and reduce elephant-human conflict in the Anamalai elephant landscape. (C) 2013 Deutsche Gesellschaft für Säugetierkunde. Published by Elsevier GmbH. All rights reserved.
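For reference, the standard dung-count conversion from transect data can be sketched as below. This is a simplified strip-transect version, not necessarily the exact line-transect estimator used in the study, and every parameter value in the usage example is hypothetical.

```python
# Simplified strip-transect dung-count density conversion (a common textbook
# form, NOT necessarily the study's estimator). The key identity:
#   elephant density = dung-pile density * dung decay rate / defecation rate
def elephant_density(dung_piles, transect_km, strip_half_width_km,
                     decay_rate_per_day, defecation_rate_per_day):
    area = 2 * strip_half_width_km * transect_km   # strip on both sides, km^2
    dung_density = dung_piles / area               # piles per km^2
    return dung_density * decay_rate_per_day / defecation_rate_per_day
```

For example, 1000 dung piles counted along 275 km of transect with a hypothetical 10 m half-width, a decay rate of 0.02/day and 15 defecations/day would give roughly 0.24 elephants/km²; the actual survey's detection-function-based estimate cannot be reproduced from the abstract alone.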
Abstract:
The effects of the initial height on the temporal persistence probability of steady-state height fluctuations in up-down symmetric linear models of surface growth are investigated. We study the (1 + 1)-dimensional Family model and the (1 + 1)- and (2 + 1)-dimensional larger curvature (LC) model. Both the Family and LC models have up-down symmetry, so the positive and negative persistence probabilities in the steady state, averaged over all values of the initial height h_0, are equal to each other. However, these two probabilities are not equal if one considers a fixed nonzero value of h_0. Plots of the positive persistence probability for negative initial height versus time exhibit power-law behavior if the magnitude of the initial height is larger than the interface width at saturation. By symmetry, the negative persistence probability for positive initial height exhibits the same behavior. The persistence exponent that describes this power-law decay decreases as the magnitude of the initial height is increased. The dependence of the persistence probability on the initial height, the system size, and the discrete sampling time is found to exhibit scaling behavior.
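Measuring a persistence probability in such a model can be sketched as follows: simulate the (1 + 1)-dimensional Family model (random deposition with relaxation to the lowest neighbouring site) to its steady state, then record the fraction of sites whose height fluctuation has stayed positive. The lattice size, warm-up time and measurement protocol below are illustrative assumptions, not the paper's.

```python
import random

# Sketch (illustrative sizes, assumed measurement protocol): simulate the 1D
# Family model to saturation, then track the positive persistence probability
# of the height fluctuations delta_h = h - mean(h).
def family_persistence(L=64, warm_sweeps=2000, meas_sweeps=200, seed=1):
    rng = random.Random(seed)
    h = [0] * L

    def sweep():
        for _ in range(L):
            i = rng.randrange(L)
            left, right = (i - 1) % L, (i + 1) % L
            # particle relaxes to the lowest of {i-1, i, i+1}
            # (ties broken deterministically by site index)
            j = min((h[left], left), (h[i], i), (h[right], right))[1]
            h[j] += 1

    def fluct():
        m = sum(h) / L
        return [x - m for x in h]

    for _ in range(warm_sweeps):          # reach the steady state
        sweep()
    alive = {i for i, d in enumerate(fluct()) if d > 0}
    n0 = len(alive) or 1                  # guard against an empty start set
    prob = []
    for _ in range(meas_sweeps):
        sweep()
        d = fluct()
        alive = {i for i in alive if d[i] > 0}   # sites still persisting
        prob.append(len(alive) / n0)
    return prob
```

By construction the returned sequence is non-increasing; extracting the persistence exponent would require averaging over many realizations and fitting the power-law regime, which is beyond this sketch.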
Abstract:
In underlay cognitive radio (CR), a secondary user (SU) can transmit concurrently with a primary user (PU) provided that it does not cause excessive interference at the primary receiver (PRx). The interference constraint fundamentally changes how the SU transmits, and makes link adaptation in underlay CR systems different from that in conventional wireless systems. In this paper, we develop a novel, symbol error probability (SEP)-optimal transmit power adaptation policy for an underlay CR system that is subject to two practically motivated constraints, namely, a peak transmit power constraint and an interference outage probability constraint. For the optimal policy, we derive its SEP and a tight upper bound for MPSK and MQAM constellations when the links from the secondary transmitter (STx) to its receiver and to the PRx follow the versatile Nakagami-m fading model. We also characterize the impact of imperfectly estimating the STx-PRx link on the SEP and the interference. Extensive simulation results are presented to validate the analysis and evaluate the impact of the constraints, fading parameters, and imperfect estimates.
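A common underlay baseline, useful as a point of comparison (and NOT the SEP-optimal policy derived in the paper), caps the transmit power at both the peak constraint and the level that keeps the estimated instantaneous interference at the PRx below the threshold. A minimal sketch, with hypothetical parameter names:

```python
# Hedged baseline (not the paper's SEP-optimal policy): transmit at the peak
# power unless that would push the estimated instantaneous interference
# g_sp * p at the primary receiver above the threshold i_max.
def underlay_power(p_peak, i_max, g_sp_est):
    """g_sp_est: estimated STx->PRx channel power gain (imperfect in practice,
    which is exactly the estimation-error effect the paper analyzes)."""
    return min(p_peak, i_max / g_sp_est) if g_sp_est > 0 else p_peak
```

When the STx-PRx link is weak the peak constraint binds; when it is strong the interference constraint binds, which is the fundamental change in transmit behaviour the abstract refers to.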
Abstract:
In this letter, we analyze the end-to-end average bit error probability (ABEP) of space shift keying (SSK) in cooperative relaying with the decode-and-forward (DF) protocol, considering multiple relays with threshold-based best relay selection, and selection combining of the direct and relayed paths at the destination. We derive an exact analytical expression for the end-to-end ABEP in closed form for binary SSK, where the analytical results agree with simulation results. For non-binary SSK, approximate analytical and simulation results are presented.
Abstract:
We address the problem of designing an optimal pointwise shrinkage estimator in the transform domain, based on the minimum probability of error (MPE) criterion. We assume an additive model for the noise corrupting the clean signal. The proposed formulation is general in the sense that it can handle various noise distributions. We consider several noise distributions (Gaussian, Student's-t, and Laplacian) and compare the denoising performance of the resulting estimator with that of mean-squared error (MSE)-based estimators. The MSE optimization is carried out using an unbiased estimator of the MSE, namely Stein's Unbiased Risk Estimate (SURE). Experimental results show that the MPE estimator outperforms the SURE estimator in terms of the SNR of the denoised output, for low (0-10 dB) and medium (10-20 dB) values of the input SNR.
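The SURE side of the comparison can be sketched concretely for the classical case of soft thresholding under i.i.d. Gaussian noise of known variance (the Donoho-Johnstone SureShrink form); the threshold grid in the example is an assumption, and the paper's MPE estimator itself is not reproduced here.

```python
# Stein's Unbiased Risk Estimate for soft thresholding of coefficients x
# under i.i.d. Gaussian noise with known variance sigma^2:
#   SURE(t) = n*sigma^2 - 2*sigma^2 * #{i : |x_i| <= t} + sum_i min(x_i^2, t^2)
def sure_soft(x, t, sigma=1.0):
    n = len(x)
    s2 = sigma * sigma
    inside = sum(1 for v in x if abs(v) <= t)
    return n * s2 - 2 * s2 * inside + sum(min(v * v, t * t) for v in x)

def best_threshold(x, grid, sigma=1.0):
    # pick the threshold minimizing the unbiased risk estimate over a grid
    return min(grid, key=lambda t: sure_soft(x, t, sigma))
```

Because SURE is an unbiased estimate of the MSE, minimizing it over the grid is a data-driven surrogate for MSE-optimal shrinkage, which is exactly the baseline the MPE estimator is compared against.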
Abstract:
Detailed pedofacies characterization, along with lithofacies investigations, of the Mio-Pleistocene Siwalik sediments exposed in the Ramnagar sub-basin has been carried out to elucidate the variability in time and space of fluvial processes and the role of intra- and extra-basinal controls on fluvial sedimentation during the evolution of the Himalayan foreland basin (HFB). The dominance of multiple, moderately to strongly developed palaeosol assemblages during deposition of the Lower Siwalik (~12-10.8 Ma) sediments suggests that the HFB was marked by the Upland set-up of Thomas et al. (2002). Activity of intra-basinal faults on the uplands and deposition of terminal fans at different times caused the development of multiple soils. Further, the detailed pedofacies and lithofacies studies indicate the prevalence of stable tectonic conditions and the development of meandering streams with broad floodplains. However, the Middle Siwalik (~10.8-4.92 Ma) sub-group is marked by multistoried sandstones, minor mudstone and mainly weakly developed palaeosols, indicating deposition by large braided rivers in the form of megafans in a Lowland set-up of Thomas et al. (2002). A significant change in the nature and size of rivers from the Lower to the Middle Siwalik at ~10 Ma is found almost throughout the basin, from the Kohat Plateau (Pakistan) to Nepal, because the Himalayan orogeny witnessed its greatest tectonic upheaval at this time, leading to the attainment of great heights by the Himalaya, intensification of the monsoon, development of large river systems and a high rate of sedimentation, and thereby a major change from the Upland set-up to the Lowland set-up over major parts of the HFB. An interesting geomorphic environmental set-up prevailed in the Ramnagar sub-basin during deposition of the studied Upper Siwalik (~4.92 to <1.68 Ma) sediments, as observed from the degree of pedogenesis and the type of palaeosols.
In general, the Upper Siwalik sub-group in the Ramnagar sub-basin is subdivided, from bottom to top, into the Purmandal sandstone (4.92-4.49 Ma), Nagrota (4.49-1.68 Ma) and Boulder Conglomerate (<1.68 Ma) formations on the basis of sedimentological characters and changes in dominant lithology. The presence of mudstone, a few thin gravel beds and a dominant sandstone lithology with weakly to moderately developed palaeosols in the Purmandal sandstone Fm. indicates deposition by shallow braided fluvial streams. The deposition of the mudstone-dominated Nagrota Fm., with moderately to well developed palaeosols and a zone of gleyed palaeosols with laminated mudstones and thin sandstones, took place in a setting marked by numerous small lakes, water-logged regions and small streams just south of the Piedmont zone, perhaps similar to what is happening presently in the Upland region/the Upper Gangetic plain. This area is locally called the 'Trai region' (Pascoe, 1964). Deposition of the Boulder Conglomerate Fm. took place by a gravelly braided river system close to the Himalayan Ranges. Activity along the Main Boundary Fault led to the progradation of these environments distal-ward and to the development of an overall coarsening-upward sequence. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
In a complete bipartite graph with vertex sets of cardinalities n and n', assign random weights drawn from the exponential distribution with mean 1, independently to each edge. We show that, as n -> infinity, with n' = [n/alpha] for any fixed alpha > 1, the minimum weight of many-to-one matchings converges to a constant (depending on alpha). Many-to-one matching arises as an optimization step in an algorithm for genome sequencing and as a measure of distance between finite sets. We prove that a belief propagation (BP) algorithm converges asymptotically to the optimal solution. We use the objective method of Aldous to prove our results. We build on previous works on the minimum weight matching and minimum weight edge cover problems to extend the objective method and to further the applicability of belief propagation to random combinatorial optimization problems.
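The optimization itself can be checked by brute force on a tiny instance, under one plausible formalization (an assumption made here for illustration, not necessarily the paper's exact definition): each of the n vertices on the large side is assigned to exactly one of the n' vertices on the small side, and every small-side vertex must receive at least one.

```python
import itertools

# Brute-force minimum-weight many-to-one matching on a tiny bipartite graph,
# under an assumed formalization: every right vertex assigned to exactly one
# left vertex, every left vertex covered at least once.
def min_many_to_one(w):
    """w[j][i]: weight of the edge between right vertex j and left vertex i."""
    n, n_left = len(w), len(w[0])
    best = float("inf")
    for assign in itertools.product(range(n_left), repeat=n):
        if len(set(assign)) == n_left:                # every left vertex used
            best = min(best, sum(w[j][assign[j]] for j in range(n)))
    return best
```

To mimic the abstract's random model one would draw each `w[j][i]` with `random.expovariate(1)`; the brute force is exponential in n and serves only as a correctness oracle against which a BP implementation could be tested.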
Abstract:
We consider carrier frequency offset (CFO) estimation in the context of multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over noisy frequency-selective wireless channels, in both single- and multiuser scenarios. We conceived a new approach to parameter estimation by discretizing the continuous-valued CFO parameter into a discrete set of bins and then invoking detection theory, analogous to the minimum-bit-error-ratio optimization framework for detecting the finite-alphabet received signal. Using this radical approach, we propose a novel CFO estimation method and study its performance using both analytical results and Monte Carlo simulations. We obtain expressions for the variance of the CFO estimation error and the resultant BER degradation for the single-user scenario. Our simulations demonstrate that the overall BER performance of a MIMO-OFDM system using the proposed method is substantially improved for all the modulation schemes considered, albeit this is achieved at increased complexity.
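The discretize-and-detect idea can be sketched in a few lines: quantize the CFO into a grid of bins and pick the bin whose de-rotation best matches a known pilot. The signal model, pilot and grid below are assumptions for illustration, not the paper's system model or detector.

```python
import cmath

# Sketch of CFO estimation by bin detection: de-rotate the received samples
# by each candidate offset and keep the bin with the largest correlation
# against a known pilot. All quantities here are illustrative assumptions.
def estimate_cfo(rx, pilot, bins, N):
    def metric(eps):
        # |sum_n rx[n] * e^{-j 2 pi eps n / N} * pilot[n]*|
        return abs(sum(r * cmath.exp(-2j * cmath.pi * eps * n / N)
                       * p.conjugate()
                       for n, (r, p) in enumerate(zip(rx, pilot))))
    return max(bins, key=metric)
```

The bin width trades estimation accuracy against the number of "hypotheses" to test, which is the detection-theoretic flavour of the approach; the paper's analytical variance and BER-degradation expressions are not reproduced here.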
Abstract:
The study introduces two new alternatives for global response sensitivity analysis, based on the application of the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices obtained based on the L2-norm involve second-order moments of the response and, when applied to the case of an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol's response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function. This measure is shown to be a conceptually more satisfying alternative to the Kullback-Leibler divergence based analysis reported in the existing literature. Other issues addressed in the study cover Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on the global sensitivity analysis of the natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function. (C) 2015 Elsevier Ltd. All rights reserved.
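For the classical Sobol' baseline that the L2-norm indices are related to, a Monte Carlo estimator can be sketched with the pick-freeze identity S_i = Cov(Y, Y_i') / Var(Y), where Y_i' reuses X_i and resamples the other inputs. The model, sample size and independent-uniform inputs below are illustrative assumptions (the study itself also handles dependent non-Gaussian inputs, which this sketch does not).

```python
import random

# Pick-freeze Monte Carlo estimator of the first-order Sobol' index
# S_i = Cov(Y, Y_i') / Var(Y) for independent Uniform(0,1) inputs.
# Model, sample size and seed are illustrative assumptions.
def sobol_first_order(f, dim, i, n=20000, seed=7):
    rng = random.Random(seed)
    y, y_frozen = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        x2 = [rng.random() for _ in range(dim)]
        x2[i] = x[i]                      # freeze the i-th input only
        y.append(f(x))
        y_frozen.append(f(x2))
    my = sum(y) / n
    mf = sum(y_frozen) / n
    cov = sum((a - my) * (b - mf) for a, b in zip(y, y_frozen)) / n
    var = sum((a - my) ** 2 for a in y) / n
    return cov / var
```

For the additive test model Y = X1 + 2*X2 the exact indices are S1 = 1/5 and S2 = 4/5 (variances 1/12 and 4/12 out of a total 5/12), which the estimator recovers to within Monte Carlo error.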