854 results for event detection algorithm
Abstract:
We present a motion detection algorithm which detects the direction of motion at a sufficient number of points and thus segregates the edge image into clusters of coherently moving points. Unlike most algorithms for motion analysis, we do not estimate the magnitude of velocity vectors or obtain dense motion maps. The motivation is that motion direction information at a number of points seems to be sufficient to evoke the perception of motion and hence should be useful in many image processing tasks requiring motion analysis. The algorithm essentially updates the motion estimate from the previous time instant using the current image frame as input, in a dynamic fashion. One of the novel features of the algorithm is the use of a feedback mechanism for evidence segregation. This kind of motion analysis can identify regions in the image that are moving together coherently, and such information could be sufficient for many applications that utilize motion, such as segmentation, compression, and tracking. We present an algorithm for tracking objects using our motion information to demonstrate the potential of this motion detection algorithm.
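The abstract intentionally gives no implementation details, so the following is only a rough, hedged sketch of the direction-only idea: estimate dense optical flow (here with OpenCV's Farneback estimator, which is not the authors' method), keep only the flow direction at edge points, and group neighbouring points that fall into the same direction bin. All parameter values are illustrative assumptions.

```python
# Rough sketch: cluster edge points by coherent motion direction.
# This is NOT the authors' algorithm; it only illustrates the idea of
# using direction-only motion information for grouping.
import cv2
import numpy as np

def direction_clusters(prev_gray, curr_gray, n_bins=8, mag_thresh=1.0):
    # Dense optical flow (Farneback). The paper avoids velocity magnitudes;
    # here the magnitude is used only to reject near-static points.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    edges = cv2.Canny(curr_gray, 50, 150) > 0          # restrict to edge points
    moving = (mag > mag_thresh) & edges                 # points with detectable motion
    bins = (ang / (2 * np.pi) * n_bins).astype(np.int32) % n_bins

    labels = np.zeros(curr_gray.shape, dtype=np.int32)
    next_label = 1
    for b in range(n_bins):
        mask = ((bins == b) & moving).astype(np.uint8)
        n, comp = cv2.connectedComponents(mask)
        labels[comp > 0] = comp[comp > 0] + next_label - 1
        next_label += n - 1
    return labels       # 0 = background, >0 = id of a coherently moving cluster
```

Discarding the magnitude after `cartToPolar` mirrors the paper's premise that direction information alone is often enough to delineate coherently moving regions.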
Abstract:
Recently, we reported a low-complexity likelihood ascent search (LAS) detection algorithm for large MIMO systems with several tens of antennas that can achieve high spectral efficiencies of the order of tens to hundreds of bps/Hz. Through simulations, we showed that this algorithm achieves increasingly near-SISO AWGN performance for an increasing number of antennas in i.i.d. Rayleigh fading. However, no bit error performance analysis of the algorithm was reported. In this paper, we extend our work on this low-complexity large-MIMO detector in two directions: i) we report an asymptotic bit error probability analysis of the LAS algorithm in the large system limit, where N_t, N_r → ∞ keeping N_t = N_r, where N_t and N_r are the number of transmit and receive antennas, respectively. Specifically, we prove that the error performance of the LAS detector for V-BLAST with 4-QAM in i.i.d. Rayleigh fading converges to that of the maximum-likelihood (ML) detector as N_t, N_r → ∞ keeping N_t = N_r. ii) We present simulated BER and nearness-to-capacity results for V-BLAST as well as high-rate non-orthogonal STBCs from Division Algebras (DA), in a more realistic spatially correlated MIMO channel model. Our simulation results show that a) at an uncoded BER of 10^-3, the performance of the LAS detector in decoding the 16 × 16 STBC from DA with N_t = 16 and 16-QAM degrades in spatially correlated fading by about 7 dB compared to that in i.i.d. fading, and b) with a rate-3/4 outer turbo code and 48 bps/Hz spectral efficiency, the performance degrades by about 6 dB at a coded BER of 10^-4. Our results further show that by providing asymmetry in the number of antennas such that N_r > N_t, keeping the total receiver array length the same as that for N_r = N_t, the detector is able to pick up the extra receive diversity, thereby significantly improving the BER performance.
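The greedy, symbol-flipping structure of a likelihood ascent search is easy to sketch. The following real-valued toy version (complex QAM systems are usually handled through the standard real-valued decomposition) is not the exact LAS update rule analyzed in the paper; it simply starts from a quantized matched-filter estimate and keeps applying the single-symbol change that most reduces the ML cost ||y - Hx||² until no change helps.

```python
# Minimal likelihood-ascent-style search for real-valued MIMO detection.
# Illustrative only: greedy single-symbol updates until a local minimum.
import numpy as np

def las_detect(y, H, alphabet, max_iter=100):
    """y: (nr,), H: (nr, nt), alphabet: 1-D array of real symbol values
    (e.g. [-3, -1, 1, 3] for 4-PAM)."""
    x0 = H.T @ y                                   # matched-filter statistic
    x = alphabet[np.argmin(np.abs(x0[:, None] - alphabet[None, :]), axis=1)]
    cost = np.sum((y - H @ x) ** 2)
    for _ in range(max_iter):
        best = (None, None, cost)
        for i in range(H.shape[1]):                # try changing one symbol at a time
            for s in alphabet:
                if s == x[i]:
                    continue
                x_try = x.copy()
                x_try[i] = s
                c = np.sum((y - H @ x_try) ** 2)
                if c < best[2]:
                    best = (i, s, c)
        if best[0] is None:                        # no improving neighbor: local minimum
            break
        x[best[0]], cost = best[1], best[2]
    return x
```

In a practical LAS implementation the cost change of a one-symbol update is computed incrementally from cached H^T H and H^T y terms rather than recomputed from scratch, which is what keeps the per-symbol complexity low; the brute-force recomputation above is only for clarity.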
Abstract:
In this paper, we are interested in high spectral efficiency multicode CDMA systems with a large number of users employing single/multiple transmit antennas and higher-order modulation. In particular, we consider a local neighborhood search based multiuser detection algorithm which offers very good performance at low complexity, suited for systems with a large number of users employing M-QAM/M-PSK. We apply the algorithm to the chip matched filter output vector. We demonstrate near-single-user (SU) performance of the algorithm in CDMA systems with a large number of users using 4-QAM/16-QAM/64-QAM/8-PSK on AWGN, frequency-flat, and frequency-selective fading channels. We further show that the algorithm performs very well in multicode multiple-input multiple-output (MIMO) CDMA systems as well, outperforming other linear detectors and interference cancelers reported in the literature for such systems. The per-symbol complexity of the search algorithm is O(K²n_t²n_c²M), where K is the number of users, n_t the number of transmit antennas at each user, n_c the number of spreading codes multiplexed on each transmit antenna, and M the modulation alphabet size, making the algorithm attractive for multiuser detection in large-dimension multicode MIMO-CDMA systems with M-QAM.
Abstract:
We develop several novel signal detection algorithms for two-dimensional intersymbol-interference channels. The contribution of the paper is two-fold: (1) We extend the one-dimensional maximum a-posteriori (MAP) detection algorithm to operate over multiple rows and columns in an iterative manner. We study the performance vs. complexity trade-offs for various algorithmic options ranging from single row/column non-iterative detection to a multi-row/column iterative scheme and analyze the performance of the algorithm. (2) We develop a self-iterating 2-D linear minimum mean-squared error based equalizer by extending the 1-D linear equalizer framework, and present an analysis of the algorithm. The iterative multi-row/column detector and the self-iterating equalizer are further connected together within a turbo framework. We evaluate the combined 2-D iterative equalization and detection engine through analysis and simulations. The performance of the overall equalizer and detector is near the MAP estimate with tractable complexity, and beats the Marrow-Wolf detector by at least 0.8 dB over certain 2-D ISI channels. The coded performance indicates a significant SNR gain of about 8 dB over the uncoded 2-D equalizer-detector system.
Abstract:
The design and development of a Bottom Pressure Recorder for a Tsunami Early Warning System is described here. The special requirements it must satisfy for the specific application of deployment on the ocean bed and pressure monitoring of the water column above are dealt with; high-resolution data digitization and low circuit power consumption are typical examples. The implementation details of the data sensing and acquisition part that meet these requirements are also brought out. The data processing part typically encompasses a tsunami detection algorithm that should detect an event of significance against a background of a variety of periodic and aperiodic noise signals. Such an algorithm and its simulation are presented. Further, the results of sea trials carried out on the system off the Chennai coast are presented. The high quality and fidelity of the data prove that the system design is robust despite its low cost and, with suitable augmentations, is ready for a full-fledged deployment on the ocean bed.
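The abstract does not spell out the detection algorithm, so the sketch below only illustrates the general shape of such detectors (similar in spirit to the widely used DART-style approach, not the authors' algorithm): predict the slowly varying tidal pressure from recent samples and declare an event when the measured pressure departs from the prediction by more than a threshold. Window length, polynomial order, and threshold are placeholder values.

```python
# Illustrative tsunami-event trigger on bottom-pressure samples (not the
# authors' algorithm): fit the recent tidal trend and flag large deviations.
import numpy as np

def detect_event(pressure, window=240, threshold=30.0):
    """pressure: 1-D array of pressure samples (e.g. one per 15 s).
    Returns indices where |observed - predicted tide| exceeds `threshold`
    (threshold in the same units as the samples, e.g. mm of water column)."""
    events = []
    for k in range(window, len(pressure)):
        t = np.arange(window)
        # Low-order polynomial fit to the previous `window` samples
        coeffs = np.polyfit(t, pressure[k - window:k], deg=2)
        predicted = np.polyval(coeffs, window)      # extrapolate one step ahead
        if abs(pressure[k] - predicted) > threshold:
            events.append(k)
    return events
```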
Abstract:
In this paper, we consider signal detection in n_t × n_r underdetermined MIMO (UD-MIMO) systems, where i) n_t > n_r with an overload factor α = n_t/n_r > 1, ii) n_t symbols are transmitted per channel use through spatial multiplexing, and iii) n_t, n_r are large (in the range of tens). A low-complexity detection algorithm based on reactive tabu search is considered. A variable-threshold based stopping criterion is proposed which offers near-optimal performance in large UD-MIMO systems at low complexities. A lower bound on the maximum-likelihood (ML) bit error performance of large UD-MIMO systems is also obtained for comparison. The proposed algorithm is shown to achieve BER performance close to the ML lower bound, within 0.6 dB at an uncoded BER of 10^-2, in a 16 × 8 V-BLAST UD-MIMO system with 4-QAM (32 bps/Hz). Similar near-ML performance results are shown for 32 × 16 and 32 × 24 V-BLAST UD-MIMO with 4-QAM/16-QAM as well. A performance and complexity comparison between the proposed algorithm and the λ-generalized sphere decoder (λ-GSD) algorithm for UD-MIMO shows that the proposed algorithm achieves almost the same performance as λ-GSD but at significantly lower complexity.
Abstract:
Lattice reduction (LR) aided detection algorithms are known to achieve the same diversity order as maximum-likelihood (ML) detection at low complexity. However, they suffer an SNR loss compared to ML performance. The SNR loss is mainly due to imperfect orthogonalization and imperfect nearest-neighbor quantization. In this paper, we propose an improved LR-aided (ILR) detection algorithm which specifically targets reducing the effects of both imperfect orthogonalization and imperfect nearest-neighbor quantization. The proposed ILR detection algorithm is shown to achieve near-ML performance in large-MIMO systems and outperform other LR-aided detection algorithms in the literature. Specifically, the SNR loss incurred by the proposed ILR algorithm compared to ML performance is just 0.1 dB for 4-QAM and < 0.5 dB for 16-QAM in a 16 × 16 V-BLAST MIMO system. This performance is superior to that of other LR-aided detection algorithms, whose SNR losses are in the 2 dB to 9 dB range.
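For context on how conventional LR-aided detection works (this is the baseline the paper improves upon, not the proposed ILR algorithm), the sketch below shows LR-aided zero-forcing on an integer lattice: reduce H to a better-conditioned basis H_red = H·T, detect in the reduced domain with per-component rounding, and map back with T. The `lll_reduce` helper is a hypothetical placeholder (for example, any LLL implementation), and the QAM shifting/scaling bookkeeping is omitted.

```python
# LR-aided zero-forcing sketch (the conventional baseline, not the ILR
# algorithm proposed in the paper). Assumes symbols lie on an integer
# lattice; QAM shift/scale details are omitted for brevity.
import numpy as np

def lr_zf_detect(y, H, lll_reduce):
    """`lll_reduce` is a hypothetical helper returning (H_red, T) with
    H_red = H @ T and T unimodular (e.g. an LLL reduction routine)."""
    H_red, T = lll_reduce(H)
    z_hat = np.rint(np.linalg.pinv(H_red) @ y)   # ZF plus rounding in the reduced basis
    x_hat = T @ z_hat                            # map back to the original symbol space
    return x_hat                                 # quantize to the constellation afterwards
```

The rounding step in the reduced basis is exactly the "imperfect nearest neighbor quantization" the abstract refers to, which is why improving that step recovers part of the SNR loss.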
Abstract:
Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial response maximum likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in the case of 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we also develop techniques to design 2-D separable and nonseparable targets for generalized partial response equalization for TDMR. This can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, to achieve a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is observed to give a gain of about 10% in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
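As a toy illustration of the granular-medium idea behind contribution 1 (not the Voronoi media model used in the paper), the sketch below jitters grain centers around a regular grid and assigns each grain the value of the rectangular bit cell containing its center; the position jitter is the source of the data-dependent media noise the DDNP filter bank targets. Pitches and jitter values are placeholder assumptions.

```python
# Toy granular-medium sketch: jittered grain centers, each grain taking the
# value of the bit cell that contains its center. Illustrative only; not the
# Voronoi media model used in the paper.
import numpy as np

def write_bits_to_grains(bits, bit_pitch=4.0, grain_pitch=1.0, jitter=0.3, seed=0):
    """bits: 2-D array of +/-1 values. Returns grain coordinates and grain values."""
    rng = np.random.default_rng(seed)
    ny, nx = bits.shape
    gy, gx = np.meshgrid(np.arange(0, ny * bit_pitch, grain_pitch),
                         np.arange(0, nx * bit_pitch, grain_pitch), indexing="ij")
    gy = gy + rng.normal(0, jitter, gy.shape)    # irregular grain positions (jitter)
    gx = gx + rng.normal(0, jitter, gx.shape)
    # Each grain magnetizes to the bit whose rectangular cell contains its center
    by = np.clip((gy // bit_pitch).astype(int), 0, ny - 1)
    bx = np.clip((gx // bit_pitch).astype(int), 0, nx - 1)
    return gx, gy, bits[by, bx]
```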
Abstract:
In this paper, we propose an anomaly detection algorithm based on Histograms of Oriented Motion Vectors (HOMV) [1] in a sparse representation framework. Usual behavior is learned at each location by sparsely representing the HOMVs over normal feature bases learnt using an online dictionary learning algorithm. Finally, an anomaly is detected based on the likelihood of the occurrence of the sparse coefficients at that location. The proposed approach is found to be robust compared to existing methods, as demonstrated in experiments on the UCSD Ped1 and UCSD Ped2 datasets.
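A rough sketch of the sparse-representation step is given below, with scikit-learn's mini-batch (online) dictionary learner standing in for the particular learner used in the paper and reconstruction error standing in for the likelihood-based anomaly score: learn a dictionary from normal HOMV-like descriptors, then flag test descriptors that the learnt dictionary represents poorly. The descriptor dimensionality, number of atoms, and sparsity level are placeholder assumptions.

```python
# Sketch of sparse-coding-based anomaly scoring (reconstruction error used
# in place of the paper's likelihood criterion; parameters are placeholders).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def fit_normal_dictionary(normal_descriptors, n_atoms=64):
    """normal_descriptors: (n_samples, n_features) HOMV-like vectors from normal video."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          transform_algorithm="omp",
                                          transform_n_nonzero_coefs=5)
    return learner.fit(normal_descriptors)

def anomaly_scores(learner, test_descriptors):
    codes = learner.transform(test_descriptors)               # sparse coefficients
    recon = codes @ learner.components_                       # reconstruction from dictionary
    return np.linalg.norm(test_descriptors - recon, axis=1)   # high error = likely anomaly
```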
Abstract:
Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.
This thesis focuses on how learning at both the sensor and network levels can provide scalable techniques for data collection and event detection. First, this thesis considers the abstract problem of distributed algorithms for data collection, and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection, and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
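As a schematic of the decentralized detection idea (not the thesis' algorithm, and without its theoretical guarantees), each device applies a cheap local rule and sends a binary "pick" when its own anomaly score exceeds a per-sensor threshold; a fusion step then declares an event only when enough picks coincide within a short time window. The thresholds and window sizes below are placeholders that trade detection rate against false-alarm rate.

```python
# Schematic two-level decentralized event detector (illustrative only):
# sensors emit binary picks, the fusion step counts coincident picks.
import numpy as np

def sensor_pick(samples, baseline_mean, baseline_std, k=4.0):
    """Local rule: pick if the latest sample is k standard deviations
    above this sensor's own baseline (per-sensor threshold)."""
    return samples[-1] > baseline_mean + k * baseline_std

def fuse_picks(pick_times, window=10.0, min_picks=20):
    """pick_times: timestamps of picks received from all sensors.
    Declare an event when at least `min_picks` picks fall inside any
    `window`-second interval."""
    pick_times = np.sort(np.asarray(pick_times))
    for i in range(len(pick_times)):
        j = np.searchsorted(pick_times, pick_times[i] + window)
        if j - i >= min_picks:
            return True, pick_times[i]
    return False, None
```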
Abstract:
In a Text-to-Speech system based on time-domain techniques that employ pitch-synchronous manipulation of the speech waveforms, one of the most important issues affecting output quality is how the analysis points of the speech signal, i.e. the analysis pitchmarks, are estimated. In this paper we present our methodology for calculating the pitchmarks of a speech waveform, a pitchmark detection algorithm which, after thorough experimentation and comparison with other algorithms, proves to perform better with our TD-PSOLA-based Text-to-Speech synthesizer (Time-Domain Pitch-Synchronous Overlap-Add Text-to-Speech system).
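The abstract does not describe the algorithm itself, so the snippet below only sketches what a pitchmark detector does in the simplest terms (not the paper's method): estimate the local pitch period from the autocorrelation of a voiced segment, then place marks on waveform peaks roughly one period apart. The pitch range and peak-spacing factor are placeholder assumptions.

```python
# Naive pitchmark sketch: autocorrelation pitch estimate plus peak picking.
# Illustrative only; real pitchmarking (and the paper's algorithm) is more robust.
import numpy as np
from scipy.signal import find_peaks

def naive_pitchmarks(voiced, fs, f0_min=60.0, f0_max=400.0):
    """voiced: 1-D array containing a voiced speech segment, fs: sample rate."""
    x = voiced - np.mean(voiced)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]     # autocorrelation, lags >= 0
    lo, hi = int(fs / f0_max), int(fs / f0_min)
    period = lo + int(np.argmax(ac[lo:hi]))               # lag of the strongest peak
    # Place marks at local maxima separated by roughly one pitch period
    peaks, _ = find_peaks(x, distance=int(0.8 * period))
    return peaks
```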
Abstract:
This paper describes the ground target detection, classification and sensor fusion problems in a distributed fiber seismic sensor network. Compared with the conventional piezoelectric seismic sensors used in UGS, fiber optic sensors have the advantages of high sensitivity and resistance to electromagnetic disturbance. We have developed a fiber seismic sensor network for target detection and classification. However, ground target recognition based on seismic sensors is a very challenging problem because of the non-stationary characteristics of seismic signals and the complicated real-life application environment. To address these difficulties, we study robust feature extraction and classification algorithms adapted to the fiber sensor network. A united multi-feature (UMF) method is used. An adaptive threshold detection algorithm is proposed to minimize the false alarm rate. Three kinds of targets are considered in the system: personnel, wheeled vehicles, and tracked vehicles. The classification simulation results show that the SVM classifier outperforms the GMM and BPNN classifiers. A sensor fusion method based on D-S evidence theory is discussed to fully utilize the information of the fiber sensor array and improve the overall performance of the system. A field experiment is organized to test the performance of the fiber sensor network and gather real target signals for classification testing.
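The adaptive threshold idea can be illustrated with a generic energy detector whose threshold tracks the background noise level (a common construction, not the authors' exact algorithm): frames whose short-term energy exceeds the tracked background by a margin of k standard deviations are flagged, and the background statistics are updated only from non-detected frames, so the threshold follows slow changes in ambient noise and keeps the false-alarm rate down. Frame length, k, and the update rate are placeholders.

```python
# Illustrative adaptive-threshold energy detector for seismic frames
# (a generic construction, not the algorithm proposed in the paper).
import numpy as np

def adaptive_detect(signal, frame_len=256, k=5.0, alpha=0.01):
    """Return indices of frames whose energy exceeds the adaptive threshold."""
    n_frames = len(signal) // frame_len
    energies = np.array([np.sum(signal[i * frame_len:(i + 1) * frame_len] ** 2)
                         for i in range(n_frames)])
    bg_mean, bg_var = energies[0], 1e-9          # background (noise) statistics
    detections = []
    for i, e in enumerate(energies):
        if e > bg_mean + k * np.sqrt(bg_var):
            detections.append(i)                 # detection: leave the background untouched
        else:
            # Exponential update of the noise statistics (variance uses the old mean)
            bg_var = (1 - alpha) * bg_var + alpha * (e - bg_mean) ** 2
            bg_mean = (1 - alpha) * bg_mean + alpha * e
    return detections
```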
Abstract:
This paper presents a novel vision chip architecture for fast traffic lane detection (FTLD). The architecture consists of a 32×32 SIMD processing element (PE) array processor and a dual-core RISC processor. The PE array processor performs low-level pixel-parallel image processing at high speed and outputs image features for high-level image processing without an I/O bottleneck. The dual-core processor carries out the high-level image processing. A parallel fast lane detection algorithm for this architecture is developed. An FPGA system with a CMOS image sensor is used to implement the architecture. Experimental results show that the system can perform fast traffic lane detection at a 50 fps rate. It is much faster than previous works and is robust enough to operate under various lighting intensities. The novel vision chip architecture is able to meet the demands of a real-time lane departure warning system.
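For comparison, the same task in plain software is typically done with an edge detector followed by a probabilistic Hough transform; the sketch below shows that conventional pipeline. It is not the pixel-parallel algorithm mapped onto the PE array, and all thresholds are placeholders.

```python
# Conventional software lane detection for reference (edges + Hough lines);
# not the pixel-parallel algorithm executed on the vision chip.
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    mask = np.zeros_like(edges)
    mask[h // 2:, :] = 255                     # keep the lower half, where lanes usually are
    edges = cv2.bitwise_and(edges, mask)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]   # (x1, y1, x2, y2) segments
```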
Abstract:
While Histograms of Oriented Gradients (HOG) plus Support Vector Machine (SVM) (HOG+SVM) is the most successful human detection algorithm, it is time-consuming. This paper proposes two ways to deal with this problem. One way is to reuse the features in blocks to construct the HOG features for intersecting detection windows. Another way is to utilize sub-cell based interpolation to efficiently compute the HOG features for each block. The combination of the two ways results in a significant speed-up in detecting humans, more than five times faster. To evaluate the proposed method, we have established a top-view human database. Experimental results on the top-view database and the well-known INRIA data set have demonstrated the effectiveness and efficiency of the proposed method.
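The first speed-up (reusing block features across intersecting windows) can be sketched as follows: compute the block descriptors once for the whole image, then assemble each detection window's HOG vector by indexing into that precomputed array instead of recomputing histograms per window. The sketch assumes cell-aligned window positions, 2×2-cell blocks, and a precomputed `block_feats` array (all assumptions for illustration), and it omits the sub-cell interpolation that provides the second speed-up.

```python
# Sketch of block-feature reuse across overlapping detection windows.
# `block_feats[by, bx]` is the precomputed descriptor of the block whose
# top-left cell is (by, bx); windows are assumed to be cell-aligned.
import numpy as np

def window_descriptor(block_feats, win_cell_y, win_cell_x,
                      win_cells=(16, 8), block_cells=2):
    """Assemble one window's HOG vector from precomputed block features
    instead of recomputing gradients and histograms inside the window."""
    blocks_y = win_cells[0] - block_cells + 1
    blocks_x = win_cells[1] - block_cells + 1
    parts = [block_feats[win_cell_y + by, win_cell_x + bx]
             for by in range(blocks_y) for bx in range(blocks_x)]
    return np.concatenate(parts)

def scan_image(block_feats, win_cells=(16, 8), stride_cells=1):
    # Assumes 2x2-cell blocks, so the cell grid is one larger than the block grid.
    n_cells_y, n_cells_x = block_feats.shape[0] + 1, block_feats.shape[1] + 1
    for y in range(0, n_cells_y - win_cells[0] + 1, stride_cells):
        for x in range(0, n_cells_x - win_cells[1] + 1, stride_cells):
            yield (y, x), window_descriptor(block_feats, y, x, win_cells)
```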
Abstract:
The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up times, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade automatic neurological event detection and classification performance. This thesis therefore contributes to the further improvement of automated neurological event detection systems, by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that the EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, and utilises supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head-movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so, the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world clinical domain one step closer.
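A schematic of the supervised artefact-detection setup described in the thesis summary above might look like the following, where the feature set, classifier settings, and concatenation-based fusion of EEG and gyroscope channels are illustrative assumptions rather than the thesis' actual design: extract simple per-epoch features from both signal types, concatenate them, and train an SVM to label epochs as head-movement artefact or clean.

```python
# Schematic EEG + gyroscope head-movement artefact classifier.
# Features and parameters are illustrative placeholders, not the thesis' design.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def epoch_features(eeg_epoch, gyro_epoch):
    """eeg_epoch: (n_channels, n_samples); gyro_epoch: (3, n_samples)."""
    feats = [np.log(np.var(eeg_epoch, axis=1) + 1e-12),        # per-channel EEG power
             np.ptp(eeg_epoch, axis=1),                         # peak-to-peak amplitude
             np.sqrt(np.mean(gyro_epoch ** 2, axis=1))]         # gyro RMS per axis
    return np.concatenate(feats)

def train_artefact_detector(epochs, labels):
    """epochs: list of (eeg_epoch, gyro_epoch) pairs; labels: 1 = artefact, 0 = clean."""
    X = np.vstack([epoch_features(e, g) for e, g in epochs])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    return clf.fit(X, np.asarray(labels))
```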