172 results for Signal variability


Relevance: 20.00%

Abstract:

We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. This problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly is impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and exactly computing the level-crossing times from those samples. The computational complexity of the technique is low, comprising only simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective on the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. Finally, we discuss some concrete practical applications of the sampling technique.
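
The abstract states the problem but not the algorithm, so the sketch below shows only a naive Python baseline for what is being computed: locating level-crossings from uniform samples by sign changes and linear interpolation. The function name and test signal are our illustrative choices; the paper's sub-Nyquist technique replaces exactly this approximate interpolation step with an exact computation.

    import numpy as np

    def level_crossings(samples, t, level):
        """Naive estimate of the times at which a sampled signal crosses a
        level: bracket crossings by sign changes, then interpolate linearly.
        This is NOT the paper's method, only a baseline for the problem."""
        d = samples - level
        idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]
        frac = d[idx] / (d[idx] - d[idx + 1])   # zero of the local chord
        return t[idx] + frac * (t[idx + 1] - t[idx])

    # Example: crossings of a 5 Hz sine through the level 0.5
    t = np.linspace(0.0, 1.0, 200)
    print(level_crossings(np.sin(2 * np.pi * 5 * t), t, 0.5))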

Relevance: 20.00%

Abstract:

It is possible to sample signals at a sub-Nyquist rate and still reconstruct them with reasonable accuracy, provided they exhibit local Fourier sparsity. The underdetermined systems of equations that arise from undersampling can be solved for sparse solutions using compressed sensing algorithms. In this paper, we propose a framework for real-time sampling of multiple analog channels with a single A/D converter, achieving a higher effective sampling rate. Signal reconstruction from noisy measurements is demonstrated on two different synthetic signals. A scheme for implementing the algorithm in hardware is also suggested.
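
The abstract names compressed sensing but not a specific solver, so below is a minimal sketch of one standard choice, orthogonal matching pursuit, recovering a sparse vector from the underdetermined system y = Ax. The dimensions, sensing matrix, and sparsity level are illustrative assumptions.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: greedily recover a k-sparse x
        from the undersampled measurements y = A @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            # pick the column most correlated with the current residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # least-squares fit restricted to the current support
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5               # 64 measurements of a length-256 signal
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    print(np.allclose(omp(A, A @ x, k), x, atol=1e-6))   # True with high probability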

Relevance: 20.00%

Abstract:

The issue of dynamic spectrum scene analysis in any cognitive radio network becomes extremely complex when low-probability-of-intercept, spread spectrum systems are present in the environment. Detection and estimation become more complex still if the frequency hopping spread spectrum is adaptive in nature. In this paper, we propose a two-phase approach for the detection and estimation of frequency hopping signals. A polyphase filter bank is proposed as the architecture of choice for the detection phase, to efficiently detect the presence of a frequency hopping signal. Based on the modeling of the frequency hopping signal, it can be shown that parametric methods of line spectral analysis are well suited for estimating frequency hopping signals, provided the issues of order estimation and time localization are resolved. An algorithm using line spectra parameter estimation and wavelet-based transient detection is proposed which resolves these issues in a computationally efficient manner suitable for implementation in a cognitive radio. Simulations show promising results, demonstrating that adaptive frequency hopping signals can be detected and demodulated in a non-cooperative context in real time, even at a very low signal-to-noise ratio.
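
As a sketch of the detection front end, the uniform DFT filter bank below is written in its direct, unoptimized form: mix each channel to baseband, lowpass filter, decimate. A polyphase realization, as proposed here, computes the same channel outputs at a fraction of the cost. The prototype filter length is an assumed design parameter.

    import numpy as np
    from scipy.signal import firwin, lfilter

    def channelize(x, n_channels, taps=128):
        """Uniform DFT filter bank (direct form). A hopping signal shows up
        as energy migrating between rows of the returned array over time."""
        h = firwin(taps, 1.0 / n_channels)     # prototype lowpass filter
        n = np.arange(len(x))
        outputs = []
        for k in range(n_channels):
            mixed = x * np.exp(-2j * np.pi * k * n / n_channels)  # mix down
            outputs.append(lfilter(h, 1.0, mixed)[::n_channels])  # filter, decimate
        return np.stack(outputs)               # (n_channels, decimated length)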

Relevance: 20.00%

Abstract:

A sample of 96 compact flat-spectrum extragalactic sources, spread evenly over all galactic latitudes, has been studied at 327 MHz for variability over a time interval of about 15 yr. The variability depends on galactic latitude, being lower at both low and high latitudes and peaking around |b| ≈ 15°. The latitude dependence is surprisingly similar in the galactic centre and anticentre directions. Assuming various single- and multi-component distributions for the ionized, irregular interstellar plasma, we have tried to reproduce the observed dependence using a semi-qualitative treatment of refractive interstellar scintillations. We find that it is difficult to fit our data with any single- or double-component cylindrical distribution. Our data suggest that the observed variability could be influenced by the spiral structure of our Galaxy.

Relevance: 20.00%

Abstract:

Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in this paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described, and experimental results based on the ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient-descent search, its overheads in computing the input distributions are larger. To account for the relatively quick convergence of the gradient-descent methods, we analyze the landscape of the COP-based cost function and prove that it is unimodal in the search space. This feature makes the cost function more amenable to optimization by gradient-descent techniques than by random search methods such as Genetic Algorithms.
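
A minimal real-coded GA over primary-input signal probabilities, in the spirit of the approach described. The actual COP-based cost propagates signal probabilities through the circuit and is not reproduced in the abstract, so a toy unimodal surrogate stands in for it (consistent with the unimodality result mentioned above); all GA constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def cop_cost(p):
        """Stand-in for the COP-based cost (higher is better). The real
        measure is circuit-dependent; this toy surrogate is unimodal."""
        return -np.sum((p - 0.7) ** 2)

    def ga_input_distributions(n_inputs, pop=40, gens=100, sigma=0.05):
        """Evolve a population of input-probability vectors."""
        population = rng.uniform(0.0, 1.0, (pop, n_inputs))
        for _ in range(gens):
            fitness = np.array([cop_cost(p) for p in population])
            parents = population[np.argsort(fitness)[-pop // 2:]]   # truncation selection
            a, b = parents[rng.integers(len(parents), size=(2, pop))]
            mask = rng.random((pop, n_inputs)) < 0.5                 # uniform crossover
            children = np.where(mask, a, b)
            children += sigma * rng.standard_normal(children.shape)  # Gaussian mutation
            population = np.clip(children, 0.0, 1.0)                 # keep valid probabilities
        return max(population, key=cop_cost)

    print(ga_input_distributions(8))   # converges near the surrogate's optimum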

Relevance: 20.00%

Abstract:

Stochastic structural systems having a stochastic distribution of material properties and stochastic external loadings in space are analysed when a crack of deterministic size is present. The material properties and external loadings are modelled as independent, two-dimensional, univariate, real, homogeneous stochastic fields, characterized by their means, variances, autocorrelation functions (or the equivalent power spectral density functions), and scales of fluctuation. The Young's modulus and Poisson's ratio are treated as stochastic quantities, and the external loading as a stochastic field in space. The energy release rate is derived using the method of virtual crack extension, and a deterministic relationship is derived for the sensitivities of the energy release rate with respect to both virtual crack extension and real system parameter fluctuations. A Taylor series expansion, truncated at first order, then yields the second-order properties of the output quantities to first order. Using linear perturbations about the mean values, statistical information about the energy release rates, stress intensity factors (SIFs), and crack opening displacements is obtained. Both plane stress and plane strain cases are considered. General expressions for the SIF in all three fracture modes are derived, and a more detailed analysis is conducted for the mode I situation. A numerical example is given.
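
The first-order truncation described here is the standard first-order second-moment step; stated compactly in LaTeX, with g the response functional (e.g., the energy release rate), X the vector of discretized field parameters, and C_X its covariance matrix (the symbols are ours, not the paper's):

    Y = g(\mathbf{X}) \approx g(\boldsymbol{\mu})
        + \nabla g(\boldsymbol{\mu})^{\mathsf{T}} (\mathbf{X} - \boldsymbol{\mu}),
    \qquad
    \mathrm{E}[Y] \approx g(\boldsymbol{\mu}),
    \qquad
    \mathrm{Var}[Y] \approx \nabla g(\boldsymbol{\mu})^{\mathsf{T}}
        \mathbf{C}_{\mathbf{X}} \, \nabla g(\boldsymbol{\mu}).

The gradient with respect to virtual crack extension and the system parameters is precisely the sensitivity relationship referred to above.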

Relevance: 20.00%

Abstract:

We propose, for the first time, a reinforcement learning (RL) algorithm with function approximation for traffic signal control. Our algorithm incorporates state-action features and is easily implementable in high-dimensional settings. Prior work on the application of RL to traffic signal control, e.g., that of Abdulhai et al., requires full-state representations and cannot be implemented even in moderate-sized road networks, because the computational complexity grows exponentially in the numbers of lanes and junctions. We tackle this curse of dimensionality by using feature-based state representations built on a coarse characterization of the level of congestion as low, medium, or high. One advantage of our algorithm is that, unlike prior RL-based work, it does not require precise information on queue lengths and elapsed times at each lane but instead works with these features. The number of features our algorithm requires is linear in the number of signaled lanes, leading to a reduction in computational complexity of several orders of magnitude. We implement our algorithm in various settings and compare its performance with other algorithms in the literature, including those of Abdulhai et al. and Cools et al., as well as the fixed-timing and longest-queue algorithms. For comparison, we also develop an RL algorithm that uses a full-state representation and incorporates prioritization of traffic, unlike the work of Abdulhai et al. We observe that our algorithm outperforms all the other algorithms on all the road network settings we consider.
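
A sketch of the core update, Q-learning with linear function approximation over coarse congestion features, matching the description above at the level of structure only; the feature encoding, problem sizes, learning constants, and reward are our assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    N_LANES, N_ACTIONS = 8, 4       # illustrative sizes
    ALPHA, GAMMA = 0.05, 0.9

    def features(congestion, action):
        """One-hot low/medium/high level per lane, per action: the feature
        count is linear in the number of lanes, as claimed above."""
        phi = np.zeros(3 * N_LANES * N_ACTIONS)
        base = action * 3 * N_LANES
        for lane, level in enumerate(congestion):   # level in {0, 1, 2}
            phi[base + 3 * lane + level] = 1.0
        return phi

    theta = np.zeros(3 * N_LANES * N_ACTIONS)

    def q(s, a):
        return theta @ features(s, a)

    def td_update(s, a, reward, s_next):
        """One Q-learning step on the linear weights."""
        global theta
        target = reward + GAMMA * max(q(s_next, b) for b in range(N_ACTIONS))
        theta += ALPHA * (target - q(s, a)) * features(s, a)

    # Usage with a random congestion snapshot and an assumed reward
    s = rng.integers(0, 3, N_LANES)
    td_update(s, a=0, reward=-float(s.sum()), s_next=rng.integers(0, 3, N_LANES))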

Relevance: 20.00%

Abstract:

Interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications, and the increased use of bandwidth-limited media such as video conferencing and full-motion video, have not only sustained the need for efficient ways to encode analog signals but made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compressing image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block-match algorithms that are based on human perception. Simulations with image sequences have shown an improved bit rate, while maintaining image quality, when compared to conventional motion estimation techniques using the MAD block-match criterion.
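
For reference, the conventional baseline named above: exhaustive-search motion estimation under the MAD criterion. The paper's perceptually weighted block matches are not specified in the abstract, so only the plain MAD search is sketched.

    import numpy as np

    def mad(a, b):
        """Mean absolute difference between two equal-sized blocks."""
        return np.mean(np.abs(a.astype(float) - b.astype(float)))

    def full_search(cur, ref, y, x, bsize=16, radius=7):
        """Find the motion vector for the block at (y, x) in `cur` by
        exhaustively minimizing MAD over a search window in `ref`."""
        block = cur[y:y + bsize, x:x + bsize]
        best_cost, best_mv = np.inf, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= ref.shape[0] - bsize and 0 <= xx <= ref.shape[1] - bsize:
                    cost = mad(block, ref[yy:yy + bsize, xx:xx + bsize])
                    if cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
        return best_mv, best_cost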

Relevance: 20.00%

Abstract:

Cardiac autonomic neuropathy is known to occur in alcoholics, but the extent of its subclinical form is not usually recognized. Heart Rate Variability (HRV) analysis can detect subclinical autonomic neuropathy. In this study, HRV parameters were compared in 20 neurologically asymptomatic alcoholics, 20 age-matched normal subjects, and 16 depressives. All were males. ECG was recorded in a quiet room for four minutes in the supine position, and time- and frequency-domain parameters of HRV were computed by a researcher blind to the clinical details. Alcoholics had a significantly smaller coefficient of variation of R-R intervals (CVR-R) on time-domain analysis and smaller HF-band (0.15-0.5 Hz) power on spectral analysis. The decreased heart rate variability indicates cardiac autonomic dysfunction.
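
The two reported measures have standard definitions, sketched below; the tachogram resampling rate and the spectral-estimation settings are assumed analysis choices, not details taken from the study.

    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.signal import welch

    def hrv_metrics(rr_ms, fs=4.0):
        """CV of R-R intervals (percent) and HF-band (0.15-0.5 Hz) power
        from a sequence of R-R intervals in milliseconds."""
        rr = np.asarray(rr_ms, dtype=float)
        cv_rr = 100.0 * rr.std() / rr.mean()
        # spectral analysis needs an evenly resampled tachogram
        t = np.cumsum(rr) / 1000.0
        grid = np.arange(t[0], t[-1], 1.0 / fs)
        tach = interp1d(t, rr)(grid)
        f, pxx = welch(tach - tach.mean(), fs=fs, nperseg=256)
        band = (f >= 0.15) & (f <= 0.5)
        hf_power = pxx[band].sum() * (f[1] - f[0])   # integrate the HF band
        return cv_rr, hf_power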

Relevance: 20.00%

Abstract:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f(s) and f(g), and the problem reduces to determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
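
The second approach admits a very short sketch: each node broadcasts one bit from a threshold test (the quantization rule proved optimal under simplifying assumptions), and a fusion center combines the bits. The counting fusion rule and the toy signal model below are our assumptions, not the paper's.

    import numpy as np

    rng = np.random.default_rng(3)

    def sensor_bit(reading, tau):
        """Two-level quantization at a node; tau is a design threshold."""
        return int(reading > tau)

    def fuse(bits, k):
        """Declare 'intruder' if at least k of the n broadcast bits are set."""
        return int(sum(bits) >= k)

    # Toy experiment: an intruder shifts the local mean of nearby sensors
    n, tau, k = 10, 0.5, 3
    readings = rng.standard_normal(n) + 1.0          # intruder present
    print(fuse([sensor_bit(r, tau) for r in readings], k))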

Relevance: 20.00%

Abstract:

High-sensitivity detection techniques are required for indoor navigation using Global Navigation Satellite System (GNSS) receivers; typically, a combination of coherent and non-coherent integration is used as the test statistic for detection. Coherent integration exploits the deterministic part of the signal but is limited by residual frequency error, navigation data bits, and user dynamics, which are not known a priori. Non-coherent integration, which involves squaring the coherent integration output, is therefore used to improve the detection sensitivity. Owing to this squaring, it is robust against the artifacts introduced by data bits and/or frequency error; however, it is susceptible to uncertainty in the noise variance, which can impose fundamental sensitivity limits on detecting weak signals. In this work, the performance of conventional non-coherent integration-based GNSS signal detection is studied in the presence of noise uncertainty. It is shown that the performance of current state-of-the-art GNSS receivers is close to the theoretical SNR limit for reliable detection at moderate levels of noise uncertainty. Alternative robust post-coherent detectors are also analyzed and are shown to alleviate the noise uncertainty problem. Monte Carlo simulations confirm the theoretical predictions.
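
The conventional test statistic analyzed here can be stated compactly: coherent sums of length n_coh, squared and accumulated n_noncoh times. The sketch assumes one complex correlator output per code period; the parameter names are ours.

    import numpy as np

    def detection_statistic(correlator_out, n_coh, n_noncoh):
        """Coherent integration followed by non-coherent accumulation.
        `correlator_out` holds complex correlator outputs, one per code
        period; the statistic is compared against a threshold set from
        the (assumed known) noise variance."""
        z = np.asarray(correlator_out)[: n_coh * n_noncoh].reshape(n_noncoh, n_coh)
        coherent = z.sum(axis=1)            # limited by data bits / frequency error
        return float(np.sum(np.abs(coherent) ** 2))   # squaring removes sign flips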

Relevance: 20.00%

Abstract:

A distinctive feature of the Nhecolandia, a sub-region of the Pantanal wetland in Brazil, is the presence of both saline and freshwater lakes. The saline lakes used to be attributed to a past arid phase during the Pleistocene. However, recent studies have shown that the saline and freshwater lakes are linked by a continuous water table, indicating that the saline water could come from a contemporary concentration process. This concentration process could also be responsible for the large chemical variability of the waters observed in the area. Regional water sampling was conducted in surface water, sub-surface water, and the water table, and the results of the geochemical and statistical analyses are presented. Based on sodium contents, the concentration spans a 1:4443 ratio. All the samples belong to the same chemical family and evolve along a sodic alkaline pathway. Calcite or magnesian calcite precipitates very early in the concentration process, probably followed by the precipitation of magnesian silicates. The most concentrated solutions remain under-saturated with respect to sodium carbonate salts, even if this equilibrium is likely reached around the saline lakes. Apparently, significant amounts of sulfate and chloride are lost simultaneously from the solutions, and this cannot be explained solely by evaporative concentration; it could be attributed to sorption on reduced minerals in a green sub-surface horizon in the "cordilhieira" areas. In the saline lakes, low potassium, phosphate, magnesium, and sulfate contents are attributed to algal blooms. Under the influence of evaporation, the concentration of solutions and the associated chemical precipitations are identified as the main factors responsible for the geochemical variability in this environment (about 92% of the variance). The saline lakes of Nhecolandia therefore have to be managed as landscape units in equilibrium with the present water flows, not as relics inherited from a past arid phase. To elaborate hydrochemical tracers for a quantitative estimation of water flows, three points have to be investigated more precisely: (1) the quantification of magnesium involved in the Mg-calcite precipitation; (2) the identification of the precise stoichiometry of the Mg-silicate; and (3) the verification of the loss of chloride and sulfate by sorption onto labile iron minerals.