992 results for Traffic signals
Abstract:
We consider the problem of signal estimation where the observed time series is modeled as y(i) = x(i) + s(i), with {x(i)} being an orbit of a chaotic self-map on a compact subset of R^d and {s(i)} a sequence in R^d converging to zero. This model is motivated by experimental results in the literature where ocean ambient noise and ocean clutter are found to be chaotic. Making use of observations up to time n, we propose an estimate of s(i) for i < n and show that it approaches s(i) as n -> infinity for typical asymptotic behaviors of orbits.
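The observation model can be sketched as follows; the logistic map and the geometrically decaying perturbation below are illustrative assumptions only, not the map or sequence studied in the paper.

```python
import numpy as np

# Minimal sketch of the model y(i) = x(i) + s(i):
# {x(i)} is an orbit of a chaotic self-map (the logistic map here, chosen
# purely for illustration) and {s(i)} is a sequence converging to zero
# (a geometric decay here, also an assumption).
n = 1000
x = np.empty(n)
x[0] = 0.4
for i in range(n - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])   # chaotic orbit on [0, 1]

s = 0.5 * 0.99 ** np.arange(n)             # s(i) -> 0 as i -> infinity
y = x + s                                  # observed time series
```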
Abstract:
Non-Gaussianity of signals/noise often results in significant performance degradation for systems designed under the Gaussian assumption, so non-Gaussian signals/noise require a different modelling and processing approach. In this paper, we discuss a new Bayesian estimation technique for non-Gaussian signals corrupted by colored non-Gaussian noise. The method is based on using zero-mean finite Gaussian Mixture Models (GMMs) for the signal and the noise. The estimation is done using an adaptive non-causal nonlinear filtering technique. The method involves deriving an estimator in terms of the GMM parameters, which are in turn estimated using the EM algorithm. The proposed filter is of finite length and is computationally feasible. The simulations show that the proposed method gives a significant improvement over the linear filter for a wide variety of noise conditions, including impulsive noise. We also claim that estimating the signal using its correlation with both past and future samples leads to a lower mean squared error than estimation based on past samples only.
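A minimal sketch of the GMM/EM ingredient is shown below using scikit-learn's GaussianMixture (an assumed convenience; the paper derives its own estimator directly from the fitted GMM parameters), with invented two-component impulsive-noise data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: fit a finite Gaussian mixture to noise samples with the EM-based
# GaussianMixture.  The data below are invented for illustration, and the
# means are not constrained to zero here (unlike the paper's zero-mean GMMs).
rng = np.random.default_rng(0)
noise = np.concatenate([rng.normal(0.0, 0.2, 5000),   # nominal component
                        rng.normal(0.0, 2.0, 500)])   # impulsive component

gmm = GaussianMixture(n_components=2, covariance_type='full')
gmm.fit(noise.reshape(-1, 1))
print(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel())
```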
Abstract:
This paper describes a method of automated segmentation of speech that assumes the signal is continuously time-varying, rather than adopting the traditional short-time stationary model. It is shown that this representation gives comparable, if not marginally better, results than other techniques for automated segmentation. A formulation of the 'Bach' (musical semitone) frequency scale filter bank is proposed. A comparative study has been made of the performance using Mel, Bark and Bach scale filter banks under this model. The preliminary results show up to 80% matches within 20 ms of the manually segmented data, without any information about the content of the text and without any language dependence. The 'Bach' filters are seen to marginally outperform the other filters.
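The semitone spacing underlying such a 'Bach' scale filter bank can be sketched as below; the reference frequency and the number of bands are assumptions for illustration, not the paper's design values.

```python
import numpy as np

# Sketch of semitone ('Bach' scale) filter-bank centre frequencies:
# successive centres are one musical semitone apart, i.e. separated by a
# ratio of 2**(1/12).
f_ref = 110.0                                   # assumed reference frequency (Hz)
n_bands = 60                                    # assumed number of channels
centres = f_ref * 2.0 ** (np.arange(n_bands) / 12.0)
print(centres[:5], centres[-1])                 # ~110 Hz up to roughly 3.3 kHz
```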
Abstract:
Homomorphic analysis and pole-zero modeling of electrocardiogram (ECG) signals are presented in this paper. Four typical ECG signals are considered and deconvolved into their minimum and maximum phase components through cepstral filtering, with a view to studying the possibility of more efficient feature selection from the component signals for diagnostic purposes. The complex cepstra of the signals are linearly filtered to extract the basic wavelet and the excitation function. The ECG signals are, in general, mixed phase; hence, exponential weighting is applied to aid deconvolution of the signals. The basic wavelet for a normal ECG approximates the action potential of the muscle fiber of the heart, and the excitation function corresponds to the excitation pattern of the heart muscles during a cardiac cycle. The ECG signals and their components are pole-zero modeled, and the pole-zero pattern of the models can give a clue for classifying normal and abnormal signals. Moreover, storing only the parameters of the model can result in a data reduction of more than 3:1 for normal signals sampled at a moderate 128 samples/s.
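As a rough sketch of the cepstral step, the complex cepstrum can be computed as below (the exponential weighting and linear-phase handling used for mixed-phase signals are omitted; the function name is ours):

```python
import numpy as np

def complex_cepstrum(x, n_fft=None):
    """Minimal complex-cepstrum sketch: complex log of the spectrum (log
    magnitude plus unwrapped phase) transformed back to the time domain."""
    n_fft = n_fft or len(x)
    X = np.fft.fft(x, n_fft)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

# Liftering (linear filtering of the cepstrum) would then separate the
# slowly varying basic wavelet from the excitation function.
```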
Abstract:
Modelling of city traffic involves capturing all the dynamics that exist in real-time traffic. Probabilistic models and queuing theory have been used for mathematical representation of the traffic system. This paper proposes the concept of modelling the traffic system using bond graphs, wherein traffic flow is based on energy conservation. The proposed modelling approach uses switched junctions to model complex traffic networks. This paper presents the modelling, simulation and experimental validation aspects.
Abstract:
We consider the possibility of fingerprinting the presence of heavy additional Z' bosons that arise naturally in extensions of the standard model, such as E_6 models and left-right symmetric models, through their mixing with the standard model Z boson. By considering a class of observables including total cross sections, energy distributions and angular distributions of decay leptons, we find significant deviations from the standard model predictions for these quantities with right-handed electrons and left-handed positrons at √s = 800 GeV. The deviations are less pronounced at smaller centre-of-mass energies, as the models are already tightly constrained. Our work suggests that the ILC should have a strong beam polarization physics program, particularly with these configurations. On the other hand, a forward-backward asymmetry and the lepton fraction in the backward direction are more sensitive to new physics with realistic polarization, due to an interesting interplay with the neutrino t-channel diagram. This process complements the study of fermion pair production processes that have been considered for discrimination between these models.
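For reference, the forward-backward asymmetry mentioned here is conventionally defined from the decay lepton's polar angle θ as follows (a standard definition, not a formula quoted from this paper):

```latex
A_{FB} \;=\; \frac{\sigma(\cos\theta > 0) - \sigma(\cos\theta < 0)}
                  {\sigma(\cos\theta > 0) + \sigma(\cos\theta < 0)}
```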
Abstract:
The EEG time series has been subjected to various formalisms of analysis to extract meaningful information regarding the underlying neural events. In this paper, the linear prediction (LP) method is used for analysis and presentation of spectral array data for better visualisation of background EEG activity. It is also used for signal generation, efficient data storage and transmission of EEG. The LP method is compared with the standard Fourier method of compressed spectral array (CSA) for multichannel EEG data. The autocorrelation method of autoregressive (AR) modelling is used for obtaining the LP coefficients, with a model order of 15. While the Fourier method reduces the data only by half, the LP method requires the storage of only the signal variance and the LP coefficients. The signal generated using white Gaussian noise as the input to the LP filter has a high correlation coefficient of 0.97 with the original signal, thus making LP a useful tool for the storage and transmission of EEG. The biological significance of the Fourier method and the LP method with respect to the microstructure of neuronal events in the generation of EEG is discussed.
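A minimal sketch of the autocorrelation (Yule-Walker) route to the LP coefficients is given below; the order-15 value is taken from the abstract, while the numpy/scipy implementation details are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lp_coefficients(x, order=15):
    """Sketch of the autocorrelation method of linear prediction: solve the
    Yule-Walker (normal) equations R a = r for an AR model of the given order."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode='full')[len(x) - 1:]    # autocorrelation lags
    a = solve_toeplitz(r[:order], r[1:order + 1])       # symmetric Toeplitz solve
    return a, r[0] / len(x)                             # coefficients, variance estimate

# For reconstruction, white Gaussian noise scaled by the stored variance
# would drive the all-pole synthesis filter 1 / A(z).
```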
Abstract:
We propose the Frobenius norm (F-norm) of the cross-correlation part of the array covariance matrix as a measure of the correlation between the impinging signals, and study the performance of different decorrelation methods in the broadband case using this measure. We first show that the dimensionality of the composite signal subspace, defined as the number of significant eigenvectors of the source sample covariance matrix, collapses in the presence of multipath, and that spatial smoothing recovers this dimensionality. Using an upper bound on the proposed measure, we then study the decorrelation of broadband signals with spatial smoothing and the effect of the spacing and directions of the sources on the rate of decorrelation with progressive smoothing. Next, we introduce a weighted smoothing method based on Toeplitz-block-Toeplitz (TBT) structuring of the data covariance matrix, which decorrelates the signals much faster than spatial smoothing. Computer simulations are included to demonstrate the performance of the two methods.
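For context, plain forward spatial smoothing (the baseline against which the TBT weighted method is compared) can be sketched as below; this is the standard subarray-averaging scheme, not the weighted TBT variant.

```python
import numpy as np

def spatially_smoothed_covariance(R, subarray_size):
    """Sketch of forward spatial smoothing: average the covariance matrices
    of overlapping subarrays of an N-element uniform linear array."""
    N = R.shape[0]
    L = N - subarray_size + 1                  # number of overlapping subarrays
    Rs = np.zeros((subarray_size, subarray_size), dtype=complex)
    for k in range(L):
        Rs += R[k:k + subarray_size, k:k + subarray_size]
    return Rs / L
```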
Abstract:
In this letter, we propose a method for blind separation of d co-channel BPSK signals arriving at an antenna array. Our method involves two steps. In the first step, the received data vectors at the output of the array are grouped into 2^d clusters. In the second step, we assign the 2^d d-tuples with ±1 elements to these clusters in a consistent fashion. From the knowledge of the cluster to which a data vector belongs, we estimate the bits transmitted at that instant. Computer simulations are used to study the performance of our method.
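The first (clustering) step can be sketched as below; the number of sources, the real-valued array response, the noise level and the use of k-means are illustrative assumptions, since the abstract does not specify the clustering algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: snapshots from d co-channel BPSK sources ideally concentrate around
# the 2**d points A @ b, one per bit pattern b in {+1, -1}^d.
rng = np.random.default_rng(1)
d, m, n = 2, 4, 2000                              # sources, sensors, snapshots
A = rng.standard_normal((m, d))                   # assumed array response
bits = rng.choice([-1.0, 1.0], size=(d, n))
Y = A @ bits + 0.05 * rng.standard_normal((m, n)) # noisy received vectors

labels = KMeans(n_clusters=2 ** d, n_init=10).fit_predict(Y.T)
# The second step of the method then assigns a consistent ±1 d-tuple to each
# cluster, from which the transmitted bits at each instant are read off.
```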
Abstract:
In this paper, we develop a method to compute the fractal dimension (FD) of discrete-time signals, in the time domain, by modifying the box-counting method. The size of the box is dependent on the sampling frequency of the signal. The number of boxes required to completely cover the signal is obtained at multiple time resolutions. The time resolutions are made coarser by decimating the signal. The log-log plot of the total number of boxes required to cover the curve versus the size of the box used appears to be a straight line, whose slope is taken as an estimate of the FD of the signal. Results are provided to demonstrate the performance of the proposed method using parametric fractal signals. The estimation accuracy of the method is compared with that of the Katz, Sevcik, and Higuchi methods. In addition, some properties of the FD are discussed.
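A rough sketch of plain box counting on a sampled signal is shown below (the decimation-based multi-resolution refinement described in the abstract is not reproduced, and the grid scales are arbitrary choices):

```python
import numpy as np

def box_counting_fd(x, scales=(2, 4, 8, 16, 32)):
    """Rough box-counting FD sketch: normalise the curve to the unit square,
    count the boxes touched by the samples at several grid resolutions, and
    take the slope of log(count) versus log(scale).
    (Only the sample points are counted, not the segments between them.)"""
    x = np.asarray(x, dtype=float)
    t = np.linspace(0.0, 1.0, len(x))
    y = (x - x.min()) / (x.max() - x.min() + 1e-12)
    counts = []
    for k in scales:                               # box side = 1/k
        boxes = set(zip((t * k).astype(int), (y * k).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope
```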
Abstract:
This paper reports new results concerning the capabilities of a family of service disciplines aimed at providing per-connection end-to-end delay (and throughput) guarantees in high-speed networks. This family consists of the class of rate-controlled service disciplines, in which traffic from a connection is reshaped to conform to specific traffic characteristics at every hop on its path. When used together with a scheduling policy at each node, this reshaping enables the network to provide end-to-end delay guarantees to individual connections. The main advantages of this family of service disciplines are their implementation simplicity and flexibility. On the other hand, because the delay guarantees provided are based on summing worst-case delays at each node, it has also been argued that the resulting bounds are very conservative, which may more than offset the benefits. In particular, other service disciplines, such as those based on Fair Queueing or Generalized Processor Sharing (GPS), have been shown to provide much tighter delay bounds. As a result, these disciplines, although more complex from an implementation point of view, have been considered for the purpose of providing end-to-end guarantees in high-speed networks. In this paper, we show that through 'proper' selection of the reshaping to which we subject the traffic of a connection, the penalty incurred by computing end-to-end delay bounds based on worst cases at each node can be alleviated. Specifically, we show how rate-controlled service disciplines can be designed to outperform the Rate Proportional Processor Sharing (RPPS) service discipline. Based on these findings, we believe that rate-controlled service disciplines provide a very powerful and practical solution to the problem of providing end-to-end guarantees in high-speed networks.
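The end-to-end bound referred to above is obtained by summing the per-hop worst-case delays along an H-hop path, i.e. (notation ours):

```latex
D_{\text{e2e}} \;\le\; \sum_{h=1}^{H} d_h^{\max}
```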
Abstract:
We provide a comparative performance evaluation of packet queuing and link admission strategies for low-speed wide area network (WAN) links (e.g. 9600 bps, 64 kbps) that interconnect relatively high-speed, connectionless local area networks (e.g. 10 Mbps). In particular, we are concerned with the problem of providing differential quality of service to inter-LAN remote terminal and file transfer sessions, and throughput fairness between inter-LAN file transfer sessions. We use analytical and simulation models to study a variety of strategies. Our work also serves to address the performance comparison of connectionless vs. connection-oriented interconnection of CLNS LANs. When provision of priority at the physical transmission level is not feasible, we show, for low-speed WAN links (e.g. 9600 bps), the superiority of connection-oriented interconnection of connectionless LANs, with segregation of traffic streams with different QoS requirements into different window-flow-controlled connections. Such an implementation can easily be obtained by transporting IP packets over an X.25 WAN. For 64 kbps WAN links, there is a drop in file transfer throughputs owing to connection overheads, but the other advantages are retained. The same solution also helps to provide throughput fairness between inter-LAN file transfer sessions. We also provide a corroboration of some of our modelling results with results from an experimental test-bed.
Abstract:
In this paper we consider an N × N non-blocking, space-division ATM switch with input cell queueing. At each input, the cell arrival process comprises geometrically distributed bursts of consecutive cells for the various outputs. Motivated by the fact that some input links may be connected to metropolitan area networks, and others directly to B-ISDN terminals, we study the situation where there are two classes of inputs with different values of mean burst length. We show that when inputs contend for an output, giving priority to an input with smaller expected burst length yields a saturation throughput larger than if the reverse priority is given. Further, giving priority to less bursty traffic can give better throughput than if all the inputs were occupied by this less bursty traffic. We derive the asymptotic (as N → ∞) saturation throughputs for each priority class.
Abstract:
Development of preimplantation embryos and blastocyst implantation are critical early events in the establishment of pregnancy. In primates, embryonic signals secreted during the peri-implantation period are believed to play a major role in the regulation of embryonic differentiation and implantation. However, only limited progress has been made in the molecular and functional characterization of embryonic signals, partly due to the severe paucity of primate embryos and the lack of optimal culture conditions to obtain viable embryo development. Two embryonic (endocrine) secretions, i.e. chorionic gonadotrophin (CG) and gonadotrophin-releasing hormone (GnRH), are being studied. This article reviews the current status of knowledge on the recovery and culture of embryos, their secretion of CG, GnRH and other potential endocrine signals, and their regulation and physiological role(s) during the peri-implantation period in primates, including humans.
Abstract:
One of the main disturbances in EEG signals is the EMG artefact generated by muscle movements. In this paper, the use of a linear-phase FIR digital low-pass filter with finite-wordlength-precision coefficients, designed using the compensation procedure, is proposed to minimise EMG artefacts in contaminated EEG signals. To make the filtering more effective, different structures are used, i.e. cascading, twicing and sharpening (apart from simple low-pass filtering) of the designed FIR filter. Modifications are proposed to the twicing and sharpening structures to regain the linear-phase characteristics that are lost in conventional twicing and sharpening operations. The efficacy of all these transformed filters in minimising EMG artefacts is studied, using SNR improvement as a performance measure for simulated signals. Time plots of the signals are also compared. The studies show that the modified sharpening structure is superior in performance to all the other proposed methods. These algorithms have also been applied to a real, recorded EMG-contaminated EEG signal. Comparison of time plots, and also of the output SNR, shows that the proposed modified sharpened structure works better in minimising EMG artefacts than the other methods considered.
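The conventional twicing and sharpening structures (without the linear-phase modifications this paper proposes) can be sketched as repeated passes of one prototype FIR filter; the filter length and cut-off below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import firwin, lfilter

h = firwin(numtaps=51, cutoff=0.1)     # assumed prototype low-pass FIR filter

def twice(x):
    # Conventional twicing: overall transfer function 2*H(z) - H(z)**2
    y1 = lfilter(h, 1.0, x)
    y2 = lfilter(h, 1.0, y1)
    return 2.0 * y1 - y2

def sharpen(x):
    # Conventional sharpening: overall transfer function 3*H(z)**2 - 2*H(z)**3
    y1 = lfilter(h, 1.0, x)
    y2 = lfilter(h, 1.0, y1)
    y3 = lfilter(h, 1.0, y2)
    return 3.0 * y2 - 2.0 * y3
```

The paper's modified structures additionally compensate the branch delays so that the composite filter retains a linear phase; that compensation is omitted from this sketch.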