84 results for Auditory Warning Signals.
Abstract:
In this paper, we develop a low-complexity message passing algorithm for joint support and signal recovery of approximately sparse signals. The problem of recovering strictly sparse signals from noisy measurements can be viewed as one of recovering approximately sparse signals from noiseless measurements, which makes the approach applicable to strictly sparse signal recovery from noisy measurements. The support recovery embedded in the approach makes it suitable for recovering signals that share the same sparsity profile, as in the multiple measurement vector (MMV) problem. Simulation results show that the proposed algorithm, termed the JSSR-MP (joint support and signal recovery via message passing) algorithm, achieves performance comparable to that of the sparse Bayesian learning (M-SBL) algorithm in the literature, at an order of magnitude lower complexity.
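The abstract does not spell out the message-passing updates, so the JSSR-MP algorithm is not reproduced here. As a hedged point of reference only, the sketch below shows a generic approximate message passing (AMP) iteration with soft thresholding for sparse recovery from linear measurements; it illustrates the class of low-complexity message-passing recovery methods referred to, and the function names and parameter choices (alpha, n_iter) are illustrative assumptions.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding operator.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_recover(A, y, alpha=1.5, n_iter=50):
    # Generic AMP iteration with soft thresholding (illustrative sketch only).
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)            # per-iteration threshold
        x_new = soft(x + A.T @ z, tau)
        z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z   # residual with Onsager term
        x = x_new
    return x

# Toy usage: recover a k-sparse vector from m < n Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 100, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = amp_recover(A, y)
```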
Abstract:
The nanocrystallites (≈ 3 nm) of LiNbO3 evolved in the (100−x)LiBO2-xNb2O5 (5 ≤ x ≤ 20, in molar ratio) glass system exhibited intense second-harmonic signals in transmission mode when exposed to infrared (IR) light at λ = 1064 nm. The second-harmonic waves were found to undergo optical diffraction, which was attributed to the presence of self-organized submicrometer-sized LiNbO3 crystallites grown within the glass matrix along the parallel damage fringes created by the IR laser radiation. Micro-Raman studies carried out on the laser-irradiated samples confirmed the self-organized crystallites to be LiNbO3.
Abstract:
A method for calculating the bends in the glide slope due to uneven terrain is presented. A computer program written for the purpose enables calculation of the difference in depth of modulation (DDM) at any point, taking the effect of uneven terrain into account. Results are presented for a hypothetical case of a hill in front of the runway. The program makes it possible to predict glide slope bends due to irregular terrain, so that the glide slope antenna location can be properly selected.
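For readers unfamiliar with the quantity being computed, the sketch below shows one simple way to estimate the DDM of a received ILS glide slope envelope, assuming DDM is taken as the depth of the 150 Hz modulation minus that of the 90 Hz modulation. The sign convention, detector details, and any terrain modelling are assumptions made for this illustration, not the paper's method.

```python
import numpy as np

def depth_of_modulation(envelope, fs, tone_hz):
    # Single-bin DFT estimate of one navigation tone's modulation depth,
    # assuming `envelope` is the already-demodulated carrier envelope.
    n = len(envelope)
    t = np.arange(n) / fs
    tone_amp = 2.0 / n * np.abs(np.sum(envelope * np.exp(-2j * np.pi * tone_hz * t)))
    return tone_amp / np.mean(envelope)   # depth relative to the mean (carrier) level

def ddm(envelope, fs):
    # Difference in depth of modulation between the 150 Hz and 90 Hz tones
    # (sign convention is an assumption; conventions differ in the literature).
    return depth_of_modulation(envelope, fs, 150.0) - depth_of_modulation(envelope, fs, 90.0)

# Toy usage: an envelope with 42% depth at 90 Hz and 38% at 150 Hz gives DDM of about -0.04.
fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
env = 1.0 + 0.42 * np.cos(2 * np.pi * 90 * t) + 0.38 * np.cos(2 * np.pi * 150 * t)
print(ddm(env, fs))
```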
Abstract:
The flux tube model offers a pictorial description of what happens during the deconfinement phase transition in QCD. The three-point vertices of a flux tube network lead to formation of baryons upon hadronization. Therefore, correlations in the baryon number distribution at the last scattering surface are related to the preceding pattern of the flux tube vertices in the quark-gluon plasma, and provide a signature of the nearby deconfinement phase transition. I discuss the nature of the expected signal, and how to extract it from the experimental data for heavy ion collisions at RHIC and LHC.
Abstract:
An automated geo-hazard warning system is the need of the hour: it integrates automation into hazard evaluation and warning communication. The primary objective of this paper is to describe a geo-hazard warning system based on an Internet-resident concept and the available cellular mobile infrastructure, making use of geo-spatial data. The system has a modular architecture comprising input, understanding, expert, output, and warning modules. It therefore provides flexibility in integrating different types of hazard evaluation and communication systems, leading to a generalized hazard warning system. The developed system has been validated for landslide hazard under Indian conditions. It has been realized using landslide causative factors, rainfall forecasts from NASA's TRMM (Tropical Rainfall Measuring Mission), and a knowledge base of landslide hazard intensity maps, and it invokes the warning as warranted. The hazard evaluated by the system agreed with expert evaluation to within 5-6 % variability, and warning message delivery was found to be virtually instantaneous, with a maximum recorded time lag of 50 s and a minimum of 10 s. It can therefore be concluded that a novel, stand-alone system for dynamic hazard warning has been developed and implemented. Such a handy system could be very useful in a densely populated country where people are unaware of impending hazards.
Abstract:
The advent and evolution of geohazard warning systems make for a very interesting study. The two broad fields that are immediately visible are geohazard evaluation and subsequent warning dissemination. Evidently, the latter field lacks any systematic study or standards; arbitrarily organized and vague data and information on warning techniques create confusion and indecision. The purpose of this review is to systematize the available bulk of information on warning systems so that meaningful insights can be derived through decidable flowcharts and a developmental process can be undertaken. Hence, the methods and technologies of numerous geohazard warning systems have been assessed by placing them into suitable categories for a better understanding of possible ways to analyze their efficacy as well as their shortcomings. By establishing a classification scheme based on extent, control, time period, and advancements in technology, the geohazard warning systems reported in the literature could be comprehensively analyzed and evaluated. Although major advancements have taken place in geohazard warning systems in recent times, they have lacked a complete purpose: some systems merely assess the hazard and wait for other means to communicate it, while others are designed only for communication and wait for the hazard information to be provided, which usually happens after the mishap. Primarily, systems are left at the mercy of administrators and service providers and do not operate in real time. An integrated hazard evaluation and warning dissemination system could solve this problem. Warning systems have also suffered from complexity of nature, the requirement of expert-level monitoring, extensive and dedicated infrastructural setups, and so on. The user community, which would greatly appreciate a convenient, fast, and generalized warning methodology, is surveyed in this review. The review concludes with the future scope of research in the field of hazard warning systems and some suggestions for developing an efficient mechanism toward an automated, integrated geohazard warning system. DOI: 10.1061/(ASCE)NH.1527-6996.0000078. (C) 2012 American Society of Civil Engineers.
Abstract:
Compressive Sensing (CS) is a new sensing paradigm that permits sampling of a signal at its intrinsic information rate, which can be much lower than the Nyquist rate, while guaranteeing good-quality reconstruction for signals that are sparse in a linear transform domain. We explore the application of the CS formulation to music signals. Since music signals comprise both tonal and transient components, we examine several transforms, such as the discrete cosine transform (DCT), discrete wavelet transform (DWT), and Fourier basis, as well as non-orthogonal warped transforms, to explore the effectiveness of CS theory and the reconstruction algorithms. We show that, for a given sparsity level, the DCT, overcomplete, and warped Fourier dictionaries result in better reconstruction, with the warped Fourier dictionary giving perceptually better reconstruction. "MUSHRA" test results show that a moderate-quality reconstruction is possible with about half the Nyquist sampling rate.
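As a hedged illustration of the CS setup the abstract describes (not the paper's specific dictionaries or reconstruction algorithms), the sketch below reconstructs a frame that is sparse in a DCT dictionary from roughly half the Nyquist number of random measurements using orthogonal matching pursuit; all sizes and the choice of OMP are assumptions made for the example.

```python
import numpy as np
from scipy.fft import idct

def dct_dictionary(n):
    # Orthonormal synthesis dictionary whose columns are DCT basis vectors.
    return idct(np.eye(n), norm='ortho', axis=0)

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedy recovery of a k-sparse coefficient vector.
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# Toy usage: a frame sparse in the DCT domain, measured with about half the Nyquist samples.
rng = np.random.default_rng(1)
n, m, k = 512, 256, 20
D = dct_dictionary(n)
alpha = np.zeros(n)
alpha[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
frame = D @ alpha                               # stand-in for a short music frame
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ frame                                 # compressive measurements
alpha_hat = omp(Phi @ D, y, k)                  # recover DCT coefficients
frame_hat = D @ alpha_hat                       # reconstructed frame
```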
Abstract:
We consider the design of a linear equalizer with a finite number of coefficients in the context of a classical linear intersymbol-interference channel with additive Gaussian noise, with the goal of channel estimation. Previous literature has shown that Minimum Bit Error Rate (MBER) based detection outperforms Minimum Mean Squared Error (MMSE) based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel based on the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared to MMSE based channel estimation when used with either MMSE or MBER detection.
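As a hedged illustration of the MBER framework mentioned above (applied here to a linear equalizer for BPSK as a stand-in; the paper itself applies the framework to channel estimation), the sketch below minimizes a smoothed BER surrogate built from the Gaussian Q-function, starting from a least-squares baseline. The channel taps, decision delay, and optimizer are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def smoothed_ber(w, R, d, sigma):
    # Smoothed BER surrogate for a BPSK linear equalizer w:
    # mean of Q(d_k * w'r_k / (sigma * ||w||)) over the training set.
    z = R @ w
    return np.mean(norm.sf(d * z / (sigma * np.linalg.norm(w))))

# Toy setup: BPSK over a 3-tap ISI channel with additive Gaussian noise.
rng = np.random.default_rng(2)
h = np.array([0.407, 0.815, 0.407])             # assumed channel taps (illustrative)
K, L, sigma = 400, 5, 0.2                       # training length, equalizer taps, noise std
s = rng.choice([-1.0, 1.0], size=K + L)
y = np.convolve(s, h, mode='full')[:len(s)] + sigma * rng.standard_normal(len(s))
R = np.array([y[k:k + L][::-1] for k in range(K)])   # received vectors (one per row)
d = s[np.arange(K) + 1]                         # training symbols at an assumed decision delay

w_ls = np.linalg.lstsq(R, d, rcond=None)[0]     # least-squares (MMSE-style) baseline equalizer
w_mber = minimize(smoothed_ber, w_ls, args=(R, d, sigma)).x   # MBER-style refinement
```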
Abstract:
Low-complexity near-optimal detection of large-MIMO signals has attracted recent research interest. Recently, we proposed a local neighborhood search algorithm, namely the reactive tabu search (RTS) algorithm, as well as a factor-graph based belief propagation (BP) algorithm for low-complexity large-MIMO detection. The motivation for the present work arises from the following two observations on these two algorithms: i) although RTS achieves close-to-optimal performance for 4-QAM in large dimensions, significant performance improvement is still possible for higher-order QAM (e.g., 16- and 64-QAM); ii) BP also achieves near-optimal performance in large dimensions, but only for the {±1} alphabet. In this paper, we improve the large-MIMO detection performance for higher-order QAM signals by using a hybrid algorithm that employs both RTS and BP. In particular, motivated by the observation that when a detection error occurs at the RTS output the least significant bits (LSBs) of the symbols are most often in error, we propose to first reconstruct and cancel the interference due to bits other than the LSBs at the RTS output, and then feed the interference-cancelled received signal to the BP algorithm to improve the reliability of the LSBs. The output of the BP is then fed back to RTS for the next iteration. Simulation results show that the proposed algorithm performs better than the RTS algorithm as well as the semi-definite relaxation (SDR) and Gaussian tree approximation (GTA) algorithms.
Abstract:
Signal acquisition under a compressed sensing scheme offers the possibility of acquiring and reconstructing signals that are sparse in some basis incoherent with the measurement kernel, using a sub-Nyquist number of measurements. In particular, when the sole objective of the acquisition is detection of the frequency of a signal rather than exact reconstruction, an undersampling framework such as CS is able to perform the task. In this paper we explore the possibility of acquiring and detecting the frequencies of multiple analog signals heavily corrupted by additive white Gaussian noise. We improve upon the MOSAICS architecture proposed in our previous work to include a wider class of signals having non-integral frequency components. This makes it possible to perform multiplexed compressed sensing for general frequency-sparse signals.
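The MOSAICS architecture itself is not reproduced here. As a minimal, hedged illustration of the detection-only setting the abstract emphasizes, namely identifying a non-integral frequency from far fewer noisy samples than uniform Nyquist sampling would require, the sketch below correlates non-uniform samples against a grid of candidate frequencies; the sampling pattern and candidate grid are assumptions made for the example.

```python
import numpy as np

def detect_tone(t_samples, y_samples, candidate_freqs):
    # Pick the candidate frequency whose complex sinusoid correlates best with the
    # non-uniform, noisy samples; detection only, no waveform reconstruction.
    scores = [np.abs(np.sum(y_samples * np.exp(-2j * np.pi * f * t_samples)))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Toy usage: a 312.7 Hz tone (non-integral frequency) buried in strong noise,
# observed at 200 random times over 1 s, far fewer than the ~1000 uniform samples
# that Nyquist sampling of a 500 Hz band would require.
rng = np.random.default_rng(4)
f_true, m = 312.7, 200
t = np.sort(rng.uniform(0.0, 1.0, m))
y = np.cos(2 * np.pi * f_true * t) + rng.standard_normal(m)
f_hat = detect_tone(t, y, np.arange(0.0, 500.0, 0.1))
```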
Abstract:
We analyze the spectral zero-crossing rate (SZCR) properties of transient signals and show that the SZCR carries accurate localization information about the transient. For a train of pulses containing transient events, the SZCR computed on a sliding-window basis is useful for locating the impulse positions accurately. We present the properties of the SZCR on standard stylized signal models and then show how it may be used to estimate the epochs in speech signals. We also present comparisons with some state-of-the-art techniques that are based on the group-delay function. Experiments on real speech show that the proposed SZCR technique is better than other group-delay-based epoch detectors. In the presence of noise, a comparison with the zero-frequency filtering (ZFF) technique and the dynamic programming projected phase-slope algorithm (DYPSA) showed that the SZCR technique performs better than DYPSA but worse than ZFF. For highpass-filtered speech, where ZFF performance suffers drastically, the identification rates of the SZCR are better than those of DYPSA.
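As a hedged illustration, the sketch below computes one plausible version of the SZCR, namely the zero-crossing rate of the real part of a frame's spectrum, on a sliding window; the exact definition, windowing, and phase conventions used in the paper may differ, and the frame sizes are assumptions made for the example.

```python
import numpy as np

def spectral_zcr(frame, nfft=1024):
    # Zero-crossing rate of the real part of the frame's spectrum
    # (one plausible definition of the SZCR; conventions may differ).
    spec = np.fft.rfft(frame, n=nfft).real
    signs = np.sign(spec)
    signs[signs == 0] = 1.0
    return np.count_nonzero(np.diff(signs)) / len(spec)

def sliding_szcr(x, frame_len=256, hop=32, nfft=1024):
    # SZCR profile computed on a sliding window over the signal x.
    starts = range(0, len(x) - frame_len, hop)
    return np.array([spectral_zcr(x[s:s + frame_len], nfft) for s in starts])

# Toy check of the localization property: for an isolated impulse at offset p inside a
# frame, the real spectrum is ~cos(2*pi*k*p/nfft), so the zero-crossing count is ~p and
# the SZCR directly encodes the impulse position within the frame.
frame = np.zeros(256)
p = 100
frame[p] = 1.0
print(spectral_zcr(frame), p / (1024 // 2 + 1))   # the two values are close
```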
Abstract:
Distributed compressed sensing exploits the information redundancy inherent in multi-signal ensembles with inter- as well as intra-signal correlations to reconstruct undersampled signals. In this paper we revisit this problem from a different perspective: streaming data from several correlated sources is taken as input to a real-time system which, without any a priori information, incrementally learns and admits each source into the system.
Abstract:
We address the problem of sampling and reconstruction of two-dimensional (2-D) finite-rate-of-innovation (FRI) signals. We propose a three-channel sampling method for efficiently solving the problem. We consider the sampling of a stream of 2-D Dirac impulses and of a sum of 2-D unit-step functions. We propose a 2-D causal exponential function as the sampling kernel; by causality in 2-D, we mean that the function has its support restricted to the first quadrant. The advantage of using a multichannel sampling method with a causal exponential sampling kernel is that standard annihilating-filter or root-finding algorithms are not required. Further, the proposed method admits an inexpensive hardware implementation and is numerically stable as the number of Dirac impulses increases.
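As a hedged illustration of the forward model only (the paper's three-channel recovery procedure is not reproduced), the sketch below defines a 2-D exponential kernel supported on the first quadrant and generates uniform-grid samples of a 2-D Dirac stream filtered by it; the decay parameters and grid are assumptions made for the example.

```python
import numpy as np

def causal_exp_kernel(x, y, a=1.0, b=1.0):
    # 2-D exponential kernel with support restricted to the first quadrant ("causal" in 2-D).
    return np.where((x >= 0) & (y >= 0), np.exp(-a * x - b * y), 0.0)

def sample_dirac_stream(locations, amplitudes, grid, a=1.0, b=1.0):
    # Uniform-grid samples of a stream of 2-D Dirac impulses filtered by the causal
    # exponential kernel (forward sampling model only; no recovery step here).
    X, Y = np.meshgrid(grid, grid, indexing='ij')
    samples = np.zeros_like(X)
    for (x0, y0), c in zip(locations, amplitudes):
        samples += c * causal_exp_kernel(X - x0, Y - y0, a, b)
    return samples

# Toy usage: two Dirac impulses in the plane, sampled on an 8x8 grid.
grid = np.arange(0.0, 4.0, 0.5)
s = sample_dirac_stream([(1.2, 2.1), (2.6, 0.7)], [1.0, 0.5], grid)
```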
Enhancing fluorescence signals from aluminium thin films and foils using polyelectrolyte multilayers
Abstract:
In this paper we investigate the application of polyelectrolyte multilayer (PEM) coated metal slides for enhancing fluorescence signals. We observed an approximately eight-fold enhancement in fluorescence for protein incubated on PEM coated on an aluminium mirror surface relative to functionalized bare glass slides. The fluorescence intensities were also compared with commercially available FAST (R) slides (Whatman), which offer 3D immobilization of proteins, and the results were found to be comparable. We also showed that PEM coated on low-cost, commonly available aluminium foil results in fluorescence enhancement comparable to that of sputtered aluminium mirrors. Immunoassays were also performed, using model proteins, on aluminium mirror as well as aluminium foil based devices to confirm the activity of the proteins. This work demonstrates the potential of PEMs in the large-scale, roll-to-roll manufacturing of fluorescence enhancement substrates for developing disposable, low-cost devices for fluorescence-based diagnostic methods.
Abstract:
Neutral and niche theories give contrasting explanations for the maintenance of tropical tree species diversity. Both have some empirical support, but methods to disentangle their effects have not yet been developed. We applied a statistical measure of spatial structure to data from 14 large tropical forest plots to test a prediction of niche theory that is incompatible with neutral theory: that species in heterogeneous environments should separate out in space according to their niche preferences. We chose plots across a range of topographic heterogeneity, and tested whether pairwise spatial associations among species were more variable in more heterogeneous sites. We found strong support for this prediction, based on a strong positive relationship between variance in the spatial structure of species pairs and topographic heterogeneity across sites. We interpret this pattern as evidence of pervasive niche differentiation, which increases in importance with increasing environmental heterogeneity.