285 results for fire detection
Abstract:
In [8], we recently presented two computationally efficient algorithms named B-RED and P-RED for random early detection. In this letter, we present the mathematical proof of convergence of these algorithms to local minima under general conditions.
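The abstract does not describe B-RED and P-RED themselves, so as background the sketch below shows the classic RED drop-probability rule on which such algorithms build; the parameter values are illustrative only, not taken from [8].

```python
# Minimal sketch of classic random early detection (RED); parameter names and
# values (min_th, max_th, max_p, w) are illustrative, not those of B-RED/P-RED.

def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED: drop probability grows linearly between the two thresholds."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def update_average_queue(avg_q, inst_q, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1.0 - w) * avg_q + w * inst_q
```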
Abstract:
The present paper records the results of a case study on the impact of an extensive grassland fire on the physical and optical properties of aerosols at a semi-arid station in southern India, reported for the first time from ground-based measurements using a MICROTOPS-II sunphotometer, an aethalometer and a quartz crystal microbalance impactor (QCM). Observations revealed a substantial increase in aerosol optical depth (AOD) at all wavelengths during burning days compared to normal days. High AOD values observed at shorter wavelengths suggest the dominance of accumulation-mode particle loading over the study area. The daily mean aerosol size spectra follow, most of the time, a power-law distribution. To characterize AOD, the Angstrom parameters (i.e., alpha and beta) were used. The wavelength exponent (1.38) and turbidity coefficient (0.21) are higher during burning days than on normal days, suggesting an increase in accumulation-mode particle loading. The aerosol size distribution likewise indicated the dominance of accumulation-mode particles during burning days compared to normal days. A significant positive correlation was observed between AOD at 500 nm and water vapour, and a negative correlation between AOD at 500 nm and wind speed, for both burning and non-burning days. Black carbon (BC) aerosol mass concentrations in the diurnal cycle increased by a factor of ~2 in the morning and afternoon hours during the burning period compared to normal days.
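For context on the Angstrom parameters quoted above, the sketch below shows how alpha (wavelength exponent) and beta (turbidity coefficient) are conventionally obtained from spectral AOD via the Angstrom law AOD(lambda) = beta * lambda^(-alpha), with lambda in micrometers; the wavelength pair and AOD values are made-up illustrations, not data from the study.

```python
import numpy as np

def angstrom_parameters(aod1, aod2, lam1_um, lam2_um):
    """Angstrom law: AOD(lambda) = beta * lambda**(-alpha), lambda in micrometers."""
    alpha = -np.log(aod1 / aod2) / np.log(lam1_um / lam2_um)
    beta = aod1 * lam1_um ** alpha   # turbidity coefficient = AOD at 1 micrometer
    return alpha, beta

# Example with hypothetical AODs at 440 nm and 870 nm (illustration only).
alpha, beta = angstrom_parameters(0.55, 0.21, 0.44, 0.87)
```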
Abstract:
One-dimensional (1D) proton NMR spectra of enantiomers are generally undecipherable in the chiral orienting poly-gamma-benzyl-L-glutamate (PBLG)/CDCl3 solvent. This arises from the large number of couplings, in addition to the superposition of spectra from both enantiomers, which severely hinders H-1 detection. In the present study, on the other hand, benefit is derived from the presence of several couplings among the entire network of interacting protons. A transition-selective 1D H-1-H-1 correlation experiment (1D-COSY), which utilizes coupling-assisted transfer of magnetization not only for unraveling the overlap but also for the selective detection of an enantiopure spectrum, is reported. The experiment is simple and easy to implement, and provides accurate enantiomeric excess in addition to the determination of the proton-proton couplings of an enantiomer within a short experimental time (a few minutes). (C) 2009 Elsevier Inc. All rights reserved.
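As a reminder of how enantiomeric excess is computed once the two enantiomer sub-spectra are resolved, a minimal sketch follows; the peak integrals are hypothetical.

```python
def enantiomeric_excess(integral_major, integral_minor):
    """ee (%) from the integrated intensities of the two resolved enantiomer peaks."""
    return 100.0 * (integral_major - integral_minor) / (integral_major + integral_minor)

# e.g. hypothetical peak areas 0.80 and 0.20 give ee = 60%
ee = enantiomeric_excess(0.80, 0.20)
```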
Abstract:
In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large multiple-input multiple-output (MIMO) systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexities. The fact that we could show such good results for large STBCs, like 16 x 16 and 32 x 32 STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads of pilot-based training for channel estimation and turbo coding), establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiency become practical, enabling interesting high-data-rate wireless applications.
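A minimal sketch of a training-based iterative detection/channel-estimation loop of the kind described above, assuming a least-squares pilot estimate and using a linear MMSE detector as a stand-in for the paper's M-LAS detector; matrix shapes, the QPSK slicing, and the number of iterations are illustrative choices.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE estimate followed by hard QPSK slicing (stand-in detector)."""
    Nt = H.shape[1]
    x_soft = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(Nt), H.conj().T @ y)
    return (np.sign(x_soft.real) + 1j * np.sign(x_soft.imag)) / np.sqrt(2)

def ls_channel_estimate(Y, X):
    """Least-squares channel estimate from Y = H X + N, with X of shape (Nt, T)."""
    return Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

def iterative_detection_estimation(Y_pilot, X_pilot, Y_data, noise_var, iters=3):
    H_hat = ls_channel_estimate(Y_pilot, X_pilot)        # initial pilot-only estimate
    for _ in range(iters):
        X_hat = np.column_stack([mmse_detect(H_hat, y, noise_var) for y in Y_data.T])
        # Re-estimate the channel using the detected data as additional "pilots".
        H_hat = ls_channel_estimate(np.hstack([Y_pilot, Y_data]),
                                    np.hstack([X_pilot, X_hat]))
    return H_hat, X_hat
```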
Abstract:
Along with useful microorganisms, there are some that cause potential damage to animals and plants. Detection and identification of these harmful organisms in a cost- and time-effective way is a challenge for researchers. The future of detection methods for microorganisms will be guided by biosensors, which have already contributed enormously to sensing and detection technology. Here, we aim to review the use of various biosensors, developed by integrating biological and physicochemical/mechanical properties (of transducers), which can have enormous implications in healthcare, food, agriculture and biodefence. We have also highlighted ways to improve the functioning of biosensors.
Abstract:
In this paper, we present a low-complexity detector that achieves near maximum-likelihood (ML) performance for large MIMO systems having tens of transmit and receive antennas. Such large MIMO systems are of interest because of the high spectral efficiencies possible in such systems. The proposed detection algorithm, termed the multistage likelihood-ascent search (M-LAS) algorithm, is rooted in Hopfield neural networks and is shown to possess excellent performance as well as complexity attributes. In terms of performance, in a 64 x 64 V-BLAST system with 4-QAM, the proposed algorithm achieves an uncoded BER of $10^{-3}$ at an SNR just about 1 dB away from the AWGN-only SISO performance given by $Q(\sqrt{\mathrm{SNR}})$. In terms of coded BER, with a rate-3/4 turbo code at a spectral efficiency of 96 bps/Hz, the algorithm performs within about 4.5 dB of theoretical capacity, which is remarkable in terms of both the high spectral efficiency and the nearness to theoretical capacity. Our simulation results show that the above performance is achieved with a complexity of just $O(N_t N_r)$ per symbol, where $N_t$ and $N_r$ denote the number of transmit and receive antennas.
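A minimal sketch of the likelihood-ascent search idea underlying M-LAS, restricted for simplicity to single-symbol flips over a real-valued +/-1 model; the paper's multistage algorithm and its QAM handling are more elaborate.

```python
import numpy as np

def las_detect(H, y, x_init):
    """One-bit-flip likelihood ascent search over +/-1 symbols (real-valued model).

    Starting from x_init (e.g. the sign of an MMSE output), repeatedly flip the
    single symbol that most reduces the ML cost ||y - H x||^2 until no flip helps.
    """
    x = x_init.copy()
    r = y - H @ x                       # current residual
    col_norm2 = np.sum(H * H, axis=0)   # ||h_k||^2 for each column of H
    while True:
        # Cost change for flipping coordinate k: 4*x_k*(h_k . r) + 4*||h_k||^2
        delta = 4.0 * x * (H.T @ r) + 4.0 * col_norm2
        k = np.argmin(delta)
        if delta[k] >= 0:               # local minimum reached
            return x
        r = r + 2.0 * x[k] * H[:, k]    # update the residual for the flip
        x[k] = -x[k]
```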
Abstract:
The need to pay with mobile devices has urged the development of payment systems for mobile electronic commerce. In this paper we consider two important abuses of electronic payment systems for detection: fraud, which is an intentional deception accomplished to secure an unfair gain, and intrusion, which is any set of actions that attempt to compromise the integrity, confidentiality or availability of a resource. Most of the available fraud and intrusion detection systems for e-payments are specific to the systems in which they have been incorporated. This paper proposes a generic model, called the Activity-Event-Symptoms (AES) model, for detecting fraud and intrusion attacks that appear during the payment process in the mobile commerce environment. The AES model is designed to identify the symptoms of fraud and intrusion by observing various events/transactions that occur during mobile commerce activity. Symptom identification is followed by computing suspicion factors for event attributes, and the certainty factor for a fraud or intrusion is generated from these suspicion factors. We have tested the proposed system by conducting various case studies on an in-house mobile commerce environment over a wired and wireless network test bed.
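The abstract does not give the exact formula that folds suspicion factors into a certainty factor, so the sketch below assumes a MYCIN-style certainty-factor combination purely for illustration.

```python
def combine_certainty(cf_a, cf_b):
    """MYCIN-style combination of two certainty factors in [0, 1].

    This combination rule is an assumption made here for illustration; the AES
    paper's abstract does not spell out its formula.
    """
    return cf_a + cf_b * (1.0 - cf_a)

def certainty_from_suspicions(suspicion_factors):
    """Fold per-attribute suspicion factors into one fraud/intrusion certainty."""
    cf = 0.0
    for s in suspicion_factors:
        cf = combine_certainty(cf, s)
    return cf

# e.g. suspicion factors 0.4 and 0.5 combine to a certainty of 0.7
```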
Abstract:
In this paper, we present a new feature-based approach for mosaicing of camera-captured document images. A novel block-based scheme is employed to ensure that corners can be reliably detected over a wide range of images. A 2-D discrete cosine transform is computed for image blocks defined around each of the detected corners, and a small subset of the coefficients is used as a feature vector. A 2-pass feature matching is performed to establish point correspondences, from which the homography relating the input images can be computed. The algorithm is tested on a number of complex document images casually taken with a hand-held camera, yielding convincing results.
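A rough sketch of the corner + block-DCT feature pipeline described above, using OpenCV's Shi-Tomasi corner detector as a stand-in for the paper's block-based corner scheme and a single-pass nearest-neighbour match instead of the 2-pass matching; the block size and descriptor length are arbitrary choices.

```python
import cv2
import numpy as np

BLOCK = 16   # even block size (OpenCV's dct requires even dimensions)

def dct_descriptors(gray):
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=8)
    feats, pts = [], []
    h, w = gray.shape
    for cx, cy in corners.reshape(-1, 2):
        x, y = int(cx), int(cy)
        if x < BLOCK // 2 or y < BLOCK // 2 or x + BLOCK // 2 > w or y + BLOCK // 2 > h:
            continue
        block = gray[y - BLOCK // 2:y + BLOCK // 2,
                     x - BLOCK // 2:x + BLOCK // 2].astype(np.float32)
        coeffs = cv2.dct(block)
        feats.append(coeffs[:4, :4].flatten())   # keep low-frequency coefficients
        pts.append((x, y))
    return np.array(feats), np.array(pts, dtype=np.float32)

def mosaic_homography(gray1, gray2):
    f1, p1 = dct_descriptors(gray1)
    f2, p2 = dct_descriptors(gray2)
    # Nearest-neighbour matching on the DCT descriptors (single pass here).
    d = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=2)
    matches = np.argmin(d, axis=1)
    H, _ = cv2.findHomography(p1, p2[matches], cv2.RANSAC, 3.0)
    return H
```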
Abstract:
This paper presents the results of computational modeling of a damage identification process for an axial rod representing an end-bearing pile foundation with known damage and a simply supported beam representing a bridge girder. The paper proposes a methodology for damage identification from the measured natural frequencies of a contiguously damaged reinforced concrete axial rod and beam, idealized with a distributed damage model. Damage is identified from Equal Eigenvalue Change (Iso-Eigenvalue-Change) contours plotted between pairs of different frequencies. The performance of the method is checked over a wide variation of damage positions and extents. An experiment conducted on a free-free axially loaded reinforced concrete member and a flexural beam is shown as an example to demonstrate the pros and cons of this method. (C) 2009 Elsevier Ltd. All rights reserved.
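A toy illustration of the iso-eigenvalue-change idea, assuming a textbook energy-based frequency-change model for a simply supported beam rather than the paper's distributed damage model; the "measured" changes below are synthetic.

```python
import numpy as np

L = 1.0  # beam length (normalised)

def freq_change(n, xc, alpha, ell=0.05):
    """Toy fractional drop in the n-th natural frequency for a small stiffness
    loss alpha over a segment of length ell centred at xc (sin mode shapes)."""
    x = np.linspace(max(xc - ell / 2, 0.0), min(xc + ell / 2, L), 200)
    lost = alpha * np.mean(np.sin(n * np.pi * x / L) ** 2) * (x[-1] - x[0])
    total = L / 2.0                     # integral of sin^2 over the whole span
    return 0.5 * lost / total           # d(f)/f ~ 0.5 * d(f^2)/f^2

# Synthetic "measured" changes for a damage at xc = 0.3 with alpha = 0.2.
measured = {n: freq_change(n, 0.3, 0.2) for n in (1, 2, 3)}

# Scan candidate (location, extent) pairs; the point where the iso-change
# contours of the different modes intersect is the damage estimate.  Note that
# mirror-image locations are indistinguishable in this symmetric toy model.
locs = np.linspace(0.05, 0.95, 91)
alphas = np.linspace(0.01, 0.5, 50)
mismatch = np.array([[sum((freq_change(n, xc, a) - measured[n]) ** 2 for n in (1, 2, 3))
                      for a in alphas] for xc in locs])
i, j = np.unravel_index(np.argmin(mismatch), mismatch.shape)
print("estimated damage location, extent:", locs[i], alphas[j])
```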
Abstract:
We propose a simple and energy-efficient distributed change detection scheme for sensor networks based on Page's parametric CUSUM algorithm. The sensor observations are IID over time and across the sensors, conditioned on the change variable. Each sensor runs a CUSUM and transmits only when its CUSUM statistic is above some threshold. The transmissions from the sensors are fused at the physical layer. The channel is modeled as a multiple access channel (MAC) corrupted with IID noise. The fusion center, which is the global decision maker, performs another CUSUM to detect the change. We provide analysis and simulation results for our scheme and compare its performance with an existing scheme that ensures energy efficiency via optimal power selection.
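A minimal sketch of the per-sensor Page CUSUM recursion assumed above, written for a Gaussian mean-shift change; the means, variance and threshold are illustrative. In the scheme described, each sensor runs this recursion locally and the fusion center runs a similar one on the fused channel output.

```python
def cusum_gaussian(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=5.0):
    """Page's CUSUM for a mean shift mu0 -> mu1 in IID Gaussian observations.

    Returns the first index at which the CUSUM statistic crosses the threshold,
    or None if it never does.
    """
    w = 0.0
    for k, xk in enumerate(x):
        # Log-likelihood ratio of the k-th sample under H1 vs H0.
        llr = (mu1 - mu0) * (xk - (mu0 + mu1) / 2.0) / sigma ** 2
        w = max(0.0, w + llr)
        if w > threshold:
            return k
    return None
```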
Abstract:
We study the problem of decentralized sequential change detection with conditionally independent observations. The sensors form a star topology with a central node, called the fusion center, as the hub. The sensors transmit a simple function of their observations in an analog fashion over a wireless Gaussian multiple access channel and operate under either a power constraint or an energy constraint. Simulations demonstrate that the proposed techniques have lower detection delays than existing schemes. Moreover, we demonstrate that the energy-constrained formulation enables better use of the total available energy than a power-constrained formulation.
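A minimal sketch of the analog Gaussian MAC fusion step assumed above: the sensors' transmissions superpose in the channel and the fusion center observes their sum plus Gaussian noise; the gain term is a placeholder for the power/energy constraint.

```python
import numpy as np

rng = np.random.default_rng(1)

def mac_fusion(local_stats, gain, noise_std=1.0):
    """Analog transmission over a Gaussian MAC: the channel adds the amplified
    sensor statistics and corrupts the sum with Gaussian noise; 'gain' stands
    in for the per-sensor power/energy constraint."""
    return gain * np.sum(local_stats) + noise_std * rng.normal()
```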
Abstract:
The use of space-frequency block coded (SFBC) OFDM signals is advantageous in high-mobility broadband wireless access, where the channel is highly time- as well as frequency-selective, because of which the receiver experiences both inter-symbol interference (ISI) and inter-carrier interference (ICI). ISI occurs due to the violation of the 'quasi-static' fading assumption caused by the frequency- and/or time-selectivity of the channel. In addition, ICI occurs due to the time-selectivity of the channel, which results in loss of orthogonality among the subcarriers. In this paper, we are concerned with the detection of SFBC-OFDM signals on time- and frequency-selective MIMO channels. Specifically, we propose and evaluate the performance of an interference-cancelling receiver for SFBC-OFDM which alleviates the effects of ISI and ICI in highly time- and frequency-selective channels.
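A generic sketch of one interference-cancellation iteration of the kind described above (not the paper's exact receiver): the ISI/ICI contribution is rebuilt from the current symbol estimates and the estimated channel, then subtracted before re-detection.

```python
import numpy as np

def cancel_and_redetect(y, H_full, H_diag, x_hat, detect):
    """One interference-cancellation iteration (generic sketch).

    y       : received frequency-domain vector
    H_full  : full channel matrix including the ISI/ICI coupling terms
    H_diag  : diagonal (desired-signal) part of H_full
    x_hat   : current symbol estimates
    detect  : per-subcarrier detector applied after cancellation
    """
    interference = (H_full - H_diag) @ x_hat     # reconstructed ISI + ICI
    y_clean = y - interference                   # cancel it from the received signal
    return detect(y_clean, H_diag)               # re-detect on the cleaned signal
```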
Abstract:
Multicode operation in space-time block coded (STBC) multiple-input multiple-output (MIMO) systems can provide additional degrees of freedom in the code domain to achieve high data rates. In such multicode STBC systems, the receiver experiences code domain interference (CDI) in frequency-selective fading. In this paper, we propose a linear parallel interference cancellation (LPIC) approach to cancel the CDI in multicode STBC systems in frequency-selective fading. The proposed detector first performs LPIC, followed by STBC decoding. We evaluate the bit error performance of the detector and show that it effectively cancels the CDI and achieves improved error performance. Our results further illustrate how the combined effect of interference cancellation, transmit diversity, and RAKE diversity affects the bit error performance of the system.
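A generic sketch of one LPIC stage in the code domain, written as a plain linear cancellation over a known code cross-correlation matrix; the paper's detector additionally couples this with STBC decoding and RAKE combining.

```python
import numpy as np

def lpic_stage(y_mf, R, b_prev):
    """One linear parallel interference-cancellation stage (generic sketch).

    y_mf   : matched-filter outputs, one per spreading code
    R      : code cross-correlation matrix (unit diagonal)
    b_prev : soft symbol estimates from the previous stage
    """
    interference = (R - np.eye(R.shape[0])) @ b_prev   # CDI rebuilt from the other codes
    return y_mf - interference                          # cancelled outputs fed to STBC decoding

# The first stage usually starts from the matched-filter outputs themselves:
# b1 = y_mf; b2 = lpic_stage(y_mf, R, b1); ...
```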
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates, even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rate. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power and has 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
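A behavioral toy model of the double sampling check, included only to show where the false detections come from: any disagreement between the nominal and delayed samples raises the error flag, including late but harmless transitions.

```python
def double_sampling_check(value_at_clk, value_at_delayed_clk):
    """Behavioral model of a double-sampling (Razor-style) checker: flag an
    error whenever the main sample and the delayed sample disagree.  A late
    but harmless transition also triggers the flag, which is the source of
    the false detections discussed above."""
    return value_at_clk != value_at_delayed_clk
```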
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with the intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as the quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, which include intruder tracking using a naive polynomial-regression algorithm.
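A minimal sketch of the second (one-bit quantization) approach, assuming a counting rule at the local fusion center; the paper only requires that some fusion rule be fixed in advance, so the k-out-of-n choice here is illustrative.

```python
import numpy as np

def sensor_bit(reading, tau):
    """Two-level quantization at a sensor: broadcast 1 if the reading exceeds
    the threshold tau (the threshold test discussed above), else 0."""
    return int(reading > tau)

def fusion_decision(bits, k):
    """Counting (k-out-of-n) rule at the local fusion center: declare an
    intruder if at least k of the broadcast bits are 1."""
    return int(np.sum(bits) >= k)
```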