853 results for Packet Filtering
Abstract:
Merton's model views equity as a call option on the assets of the firm; the assets are thus only partially observed, through the equity. Using nonlinear filtering, an explicit expression for the likelihood ratio of the underlying parameters is obtained in terms of the nonlinear filter. Since the evolution of the filter itself depends on the parameters in question, this does not permit direct maximum-likelihood estimation, but it does pave the way for the Expectation-Maximization (EM) method of estimating those parameters. (C) 2010 Elsevier B.V. All rights reserved.
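As a minimal sketch of the relation this abstract builds on, the following computes equity value as a Black-Scholes call on the firm's assets. The notation (asset value V, debt face value D, maturity T, rate r, asset volatility sigma_V) is ours, and the filtering/EM machinery of the paper is not reproduced.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(V, D, T, r, sigma_V):
    """Equity as a European call on firm assets V, struck at the face
    value of debt D maturing at T (Black-Scholes call formula)."""
    d1 = (math.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * math.sqrt(T))
    d2 = d1 - sigma_V * math.sqrt(T)
    return V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)

# Illustrative numbers: assets 120, debt 100 due in 1 year, r = 5%, sigma_V = 25%
print(merton_equity(120.0, 100.0, 1.0, 0.05, 0.25))
```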
Abstract:
A complete solution to the fundamental problem of delineation of an ECG signal into its component waves by filtering the discrete Fourier transform of the signal is presented. The set of samples in a component wave is transformed into a complex sequence with a distinct frequency band. The filter characteristics are determined from the time signal itself. Multiplication of the transformed signal with a complex sinusoidal function allows the use of a bank of low-pass filters for the delineation of all component waves. Data from about 300 beats have been analysed and the results are highly satisfactory both qualitatively and quantitatively.
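A hedged sketch of the modulate-then-low-pass idea described above: shift an assumed band of interest down to DC with a complex sinusoid, then apply a low-pass mask. The sampling rate, band centre, and cutoff below are illustrative assumptions, not the paper's data-driven filter design.

```python
import numpy as np

def extract_band(x, fs, f_center, f_cut):
    """Heterodyne the band around f_center down to DC by multiplying with
    a complex sinusoid, then low-pass filter via an ideal FFT mask."""
    n = np.arange(len(x))
    shifted = x * np.exp(-2j * np.pi * f_center * n / fs)  # shift band to DC
    X = np.fft.fft(shifted)
    freqs = np.fft.fftfreq(len(x), d=1.0 / fs)
    X[np.abs(freqs) > f_cut] = 0.0                         # ideal low-pass mask
    return np.fft.ifft(X)                                  # complex envelope of the band

# Example: isolate an assumed 5-15 Hz band (QRS-like energy) at fs = 360 Hz
fs = 360.0
t = np.arange(0, 2.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 1 * t)
qrs_band = extract_band(ecg, fs, f_center=10.0, f_cut=5.0)
```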
Abstract:
In correlation filtering we attempt to remove the component of the aeromagnetic field that is closely related to the topography. The magnetization vector is assumed to be spatially variable, but it can be successively estimated under the additional assumption that the magnetic component due to topography is uncorrelated with the magnetic signal of deeper origin. The correlation filtering was tested on a synthetic example; the filtered field compares very well with the known signal of deeper origin. We have also applied the method to real data from the south Indian shield. The performance of correlation filtering is shown to be superior in situations where the direction of magnetization is variable, for example where the remanent magnetization is dominant.
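In its simplest least-squares form, correlation filtering amounts to projecting out the part of the observed field correlated with a forward-modelled topographic effect. The single scale factor below is a simplification of our own; the paper estimates a spatially variable magnetization successively.

```python
import numpy as np

def correlation_filter(field, topo_effect):
    """Remove the component of the observed aeromagnetic field correlated
    with the modelled magnetic effect of topography (least-squares fit of
    one scale factor); the residual approximates the deeper-origin signal."""
    a = np.dot(field, topo_effect) / np.dot(topo_effect, topo_effect)
    return field - a * topo_effect
```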
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points, while a soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs): a failure is declared when a counter driven by the occurrence of SESs reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection-reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
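The SES-driven counter can be sketched as a leaky-bucket detector. The leak-down behaviour and parameter names below are illustrative assumptions; choosing the threshold to trade false-alarm probability against detection delay is the design question the paper analyses via the Markov model.

```python
def ses_failure_detector(ses_stream, threshold, leak=1):
    """Counter-based detector: count up on each severely errored second
    (SES), leak down otherwise; declare failure when the counter reaches
    `threshold`. Returns the second at which failure is declared, or None."""
    counter = 0
    for t, ses in enumerate(ses_stream):
        counter = counter + 1 if ses else max(0, counter - leak)
        if counter >= threshold:
            return t
    return None

# Example: a burst of 5 consecutive SESs trips a threshold of 4
print(ses_failure_detector([0, 0, 1, 1, 1, 1, 1, 0], threshold=4))
```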
Abstract:
We present an experimental investigation of a new reconstruction method for off-axis digital holographic microscopy (DHM). The method effectively suppresses the object autocorrelation, commonly called the zero-order term, from holographic measurements, thereby removing from the reconstructed complex wavefield the artifacts generated by the intensities of the two beams employed for interference. The algorithm is based on nonlinear filtering and can be applied to standard DHM setups under realistic recording conditions. We study the applicability of the technique in different experimental configurations, such as topographic imaging of microscopic specimens and speckle holograms.
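For context, the conventional linear reconstruction that such nonlinear methods improve upon can be sketched as a Fourier-domain mask around the +1 order. This is only the standard baseline, not the paper's nonlinear filtering algorithm, and the carrier offset and mask radius here are assumptions.

```python
import numpy as np

def reconstruct_offaxis(hologram, carrier):
    """Baseline linear off-axis reconstruction: isolate the +1 order with a
    circular Fourier-domain mask. `carrier` is the assumed (dy, dx) pixel
    offset of the +1 order from the spectrum centre."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    cy, cx = carrier
    r = min(nx, ny) // 8                     # mask radius: an arbitrary choice
    yy, xx = np.ogrid[:ny, :nx]
    mask = np.zeros_like(H)
    mask[(yy - (ny // 2 + cy))**2 + (xx - (nx // 2 + cx))**2 <= r**2] = 1
    return np.fft.ifft2(np.fft.ifftshift(H * mask))   # complex wavefield
```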
Abstract:
We formulate a two-stage iterative Wiener filtering (IWF) approach to speech enhancement that improves on the performance of the constrained IWF reported in the literature. The codebook-constrained IWF (CCIWF) has been shown to be effective in achieving convergence of IWF in the presence of both stationary and non-stationary noise. To this we add a second stage of unconstrained IWF and show that speech enhancement performance improves in terms of average segmental SNR (SSNR), Itakura-Saito (IS) distance, and Linear Prediction Coefficient (LPC) parameter coincidence. We also explore the tradeoff between the number of CCIWF iterations and the number of second-stage IWF iterations.
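A frame-level sketch of the unconstrained second stage: re-estimate the speech power spectrum and reapply the Wiener gain each pass. Real (CC)IWF re-fits an LPC model each iteration, and CCIWF additionally constrains it to a trained codebook; neither is reproduced here, and `noise_psd` is assumed known.

```python
import numpy as np

def iterative_wiener(noisy, noise_psd, iters=3):
    """Unconstrained IWF sketch on one frame: spectral-subtraction initial
    speech PSD, then repeated Wiener-gain filtering and PSD refinement.
    `noise_psd` must be an array matching the rfft bins of `noisy`."""
    Y = np.fft.rfft(noisy)
    speech_psd = np.maximum(np.abs(Y)**2 - noise_psd, 1e-12)  # initial estimate
    for _ in range(iters):
        gain = speech_psd / (speech_psd + noise_psd)   # Wiener gain
        S = gain * Y                                   # filtered spectrum
        speech_psd = np.abs(S)**2                      # refined speech PSD
    return np.fft.irfft(S, n=len(noisy))
```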
Abstract:
In this paper, expressions for the convolution-multiplication properties of the MDCT are derived, starting from the equivalent DFT representations. Using these expressions, methods for implementing linear filtering through block convolution in the MDCT domain are presented. For rectangular-window MDCT, the implementation is exact for symmetric filters and approximate for non-symmetric filters. For a general MDCT window function, the filtering is done on the windowed segments, and hence the convolution is approximate for symmetric as well as non-symmetric filters. This approximation error is shown to be perceptually insignificant for symmetric-impulse-response filters. Moreover, the inherent 50% overlap between adjacent frames used in MDCT computation reduces this approximation error, much as it smooths other block-processing errors. The presented techniques are useful for compressed-domain processing of audio signals.
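The direct-form MDCT below is the textbook definition, useful for experimenting with the convolution-multiplication identities discussed; it is O(N^2) and makes no attempt at the fast DFT-based factorization the derivations start from.

```python
import numpy as np

def mdct(frame, window):
    """Direct MDCT of one 2N-sample windowed frame (N output coefficients):
    X[k] = sum_n w[n] x[n] cos(pi/N (n + 1/2 + N/2)(k + 1/2))."""
    N = len(frame) // 2
    n = np.arange(2 * N)
    k = np.arange(N)[:, None]
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return basis @ (window * frame)

# Example: sine window, as commonly used with 50% overlapped MDCT frames
N = 8
win = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
coeffs = mdct(np.random.randn(2 * N), win)
```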
Abstract:
Network processors today consist of multiple parallel processors (microengines) with support for multiple threads, to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for an IPv4 forwarding application. Our study indicates that, in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit-buffer allocation schemes, namely contiguous, strided, local, and global, which reduce the packet retransmission rate to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the in-built packet ordering schemes in the IXP processor by up to 35%.
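The idea behind a packet-sort stage can be sketched as a sequence-number reorder buffer in front of the transmit queue. The class below is a generic sketch with names of our own choosing, not the IXP 2400 microcode.

```python
import heapq

class ReorderBuffer:
    """Release packets strictly in sequence-number order, buffering any
    that arrive early from the parallel microengines."""
    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.heap = []                      # min-heap of (seq, packet)

    def push(self, seq, packet):
        heapq.heappush(self.heap, (seq, packet))
        ready = []
        while self.heap and self.heap[0][0] == self.next_seq:
            ready.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return ready                        # packets now safe to transmit

# Example: packet 1 arrives before packet 0 and is held back
rb = ReorderBuffer()
print(rb.push(1, "pkt1"))                   # [] (held)
print(rb.push(0, "pkt0"))                   # ['pkt0', 'pkt1']
```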
Abstract:
The paper presents an adaptive Fourier filtering technique, and a relaying scheme based on a digital band-pass filter combined with a three-sample algorithm, for applications in high-speed numerical distance protection. To enhance the performance of these techniques, a high-speed fault detector is used. MATLAB-based simulation studies show that the adaptive Fourier filtering technique provides fast tripping for near faults and security for farther faults. The relaying scheme based on the digital band-pass filter with the three-sample data-window algorithm also provides accurate, high-speed detection of faults. The paper further proposes a hardware scheme, built around a high-performance 16-bit fixed-point DSP (Texas Instruments TMS320LF2407A), suitable for implementing the above techniques. To evaluate the performance of the proposed relaying scheme under steady-state and transient conditions, PC-based, menu-driven relay test procedures are developed using National Instruments LabVIEW software, with the test signals generated in real time using LabVIEW-compatible analog output modules. Results from both the simulation studies and the hardware implementation are presented.
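The filtering core of numerical distance relays is the full-cycle DFT phasor estimate sketched below. The paper's adaptive variant changes the filtering with fault conditions, which this fixed-window version does not attempt; the sample count and phase angle in the example are illustrative.

```python
import numpy as np

def fourier_phasor(samples, n_per_cycle):
    """Full-cycle DFT phasor (peak value) of the fundamental component,
    computed from exactly one cycle of samples."""
    n = np.arange(n_per_cycle)
    re = (2.0 / n_per_cycle) * np.sum(samples * np.cos(2 * np.pi * n / n_per_cycle))
    im = (2.0 / n_per_cycle) * np.sum(samples * np.sin(2 * np.pi * n / n_per_cycle))
    return complex(re, -im)

# Example: a 100 V-peak signal sampled 16 times per cycle, phase 0.3 rad
theta = 2 * np.pi * np.arange(16) / 16
v = 100 * np.cos(theta + 0.3)
print(abs(fourier_phasor(v, 16)))   # ~100.0
```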
Abstract:
Two models for large eddy simulation of turbulent reacting flow in homogeneous turbulence were studied. The sub-grid stresses arising out of the non-linearities of the Navier-Stokes equations were modeled using an explicit filtering approach, and a filtered mass density function (FMDF) approach was used for closure of the sub-grid scalar fluctuations. A posteriori calculations, when compared with results from direct numerical simulation, indicate that explicit filtering adequately represents the effect of the sub-grid stress on the filtered velocity field in the absence of reaction. Discrepancies arise when reactions occur, but the FMDF approach accurately accounts for sub-grid-scale fluctuations of the reacting scalars.
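Explicit filtering means applying a chosen filter kernel to the resolved field directly; a periodic top-hat (box) filter is one common choice and is sketched below. The stencil width and periodic boundary treatment are assumptions, and the paper's specific filter and FMDF closure are not reproduced.

```python
import numpy as np

def box_filter_3d(u, width=3):
    """Explicit top-hat (box) filter of a periodic 3-D field; `width` is
    the (odd) stencil size in cells along each axis."""
    assert width % 2 == 1
    half = width // 2
    for axis in range(3):
        # circular moving average along this axis (periodic domain)
        u = sum(np.roll(u, s, axis=axis) for s in range(-half, half + 1)) / width
    return u

# Example: filter a random 32^3 velocity component with a 3-cell box
u_filtered = box_filter_3d(np.random.randn(32, 32, 32), width=3)
```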
Abstract:
We recast the reconstruction problem of diffuse optical tomography (DOT) in a pseudo-dynamical framework and develop a method to recover the optical parameters using particle filters, i.e., stochastic filters based on Monte Carlo simulations. In particular, we have implemented two such filters, viz. the bootstrap (BS) filter and the Gaussian-sum (GS) filter, and employed them to recover the optical absorption coefficient distribution from both numerically simulated and experimentally generated photon fluence data. Using either indicator functions or compactly supported continuous kernels to represent the unknown property distribution within the inhomogeneous inclusions, we drastically reduce the number of parameters to be recovered and thus bring the overall computation time within reasonable limits. Although the GS filter outperformed the BS filter in terms of reconstruction accuracy, both gave fairly accurate recovery of the height, radius, and location of the inclusions. Since the present filtering algorithms do not use derivatives, we could demonstrate accurate contrast recovery even in the middle of the object, where the usual deterministic algorithms perform poorly owing to poor measurement sensitivity to the parameters. Consistent with the fact that DOT recovery, being ill posed, admits multiple solutions, both filters gave solutions that were verified to be admissible by the closeness of the data computed through them to the data used in the filtering step (either numerically simulated or experimentally generated). (C) 2011 Optical Society of America
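To fix ideas, a generic bootstrap-filter sweep is sketched below: predict, weight, resample. The `propagate` and `likelihood` callables stand in for the pseudo-dynamical parameter evolution and the DOT forward-model likelihood, which are the substantive parts of the paper and are not reproduced here.

```python
import numpy as np

def bootstrap_filter(particles, propagate, likelihood, data, seed=0):
    """One generic bootstrap (sequential importance resampling) filter.
    `particles`: (n_particles, n_params) ndarray of parameter samples;
    `propagate(particles)`: stochastic state-evolution step;
    `likelihood(y, particles)`: per-particle likelihood of measurement y."""
    rng = np.random.default_rng(seed)
    for y in data:
        particles = propagate(particles)             # predict step
        w = likelihood(y, particles)                 # weight by fit to data
        w = w / w.sum()                              # normalize weights
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]                   # multinomial resampling
    return particles   # approximate posterior samples of the parameters
```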