886 results for Noisy 3D data
Abstract:
Classification of large datasets is a challenging task in Data Mining. In the current work, we propose a novel method that compresses the data and classifies the test data directly in its compressed form. The work forms a hybrid learning approach integrating data abstraction, frequent-item generation, compression, classification, and the use of rough sets.
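A minimal sketch of the compressed-classification idea, assuming binary feature records and using frequent feature pairs as the compressed per-class summaries; all data, names and thresholds below are hypothetical, and the paper's rough-set machinery is not reproduced:

```python
from collections import Counter
from itertools import combinations

# Toy illustration: summarize each class by its frequent feature pairs
# ("itemsets"), then classify a test record against the compressed
# summaries instead of the raw training data.

def frequent_pairs(records, min_support):
    """Count co-occurring feature pairs across binary-feature records."""
    counts = Counter()
    for rec in records:
        for pair in combinations(sorted(rec), 2):
            counts[pair] += 1
    threshold = min_support * len(records)
    return {pair for pair, c in counts.items() if c >= threshold}

def classify(test_rec, class_summaries):
    """Pick the class whose frequent-pair summary best overlaps the record."""
    test_pairs = set(combinations(sorted(test_rec), 2))
    return max(class_summaries,
               key=lambda cls: len(class_summaries[cls] & test_pairs))

# Hypothetical data: each record is a set of discrete feature tokens.
train = {
    "spam": [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}],
    "ham":  [{"c", "d", "e"}, {"d", "e"}, {"c", "e"}],
}
summaries = {cls: frequent_pairs(recs, 0.6) for cls, recs in train.items()}
print(classify({"a", "b", "e"}, summaries))  # -> "spam"
```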
Abstract:
Automatic identification of software faults has enormous practical significance. This requires characterizing program execution behavior and the use of appropriate data mining techniques on the chosen representation. In this paper, we use the sequence of system calls to characterize program execution. The data mining tasks addressed are learning to map system call streams to fault labels and automatic identification of fault causes. Spectrum kernels and SVMs are used for the former, while latent semantic analysis is used for the latter. The techniques are demonstrated for the intrusion dataset containing system call traces. The results show that kernel techniques are as accurate as the best available results but are faster by orders of magnitude. We also show that latent semantic indexing is capable of revealing fault-specific features.
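A sketch of spectrum-kernel classification of system-call traces: a linear kernel on k-mer count vectors is exactly the k-spectrum kernel. The traces and labels below are invented placeholders, not the intrusion dataset used in the paper:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# Hypothetical system-call traces; real ones would come from audit logs.
traces = [
    "open read read write close",
    "open read write write close",
    "fork exec open read close",
    "fork exec exec open close",
]
labels = [0, 0, 1, 1]  # 0 = normal, 1 = faulty/intrusive

# Treat each system call as a token and count k-grams of calls (k = 2);
# a linear SVM on these counts implements the k-spectrum kernel.
vec = CountVectorizer(analyzer="word", ngram_range=(2, 2))
X = vec.fit_transform(traces)

clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(vec.transform(["open read write close close"])))
```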
Abstract:
Orthogonal Frequency Division Multiplexing (OFDM) is a form of multi-carrier modulation in which the data stream is transmitted over a number of carriers that are orthogonal to each other, i.e., the carrier spacing is selected such that each carrier is located at the zeroes of all other carriers in the spectral domain. This paper proposes a novel sampling offset estimation algorithm for an OFDM system, in order to receive the OFDM data symbols error-free over a noisy channel at the receiver and to achieve fine timing synchronization between the transmitter and the receiver. The performance of this algorithm has been studied successfully in AWGN, ADSL and SUI channels.
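For intuition, a toy estimator (not necessarily the paper's algorithm): a fractional sampling offset of eps samples rotates subcarrier k by exp(j*2*pi*k*eps/N), so the offset can be read off the phase ramp across known pilot subcarriers:

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 64, 3.7  # FFT size and true fractional offset in samples

X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)  # known QPSK pilots
k = np.arange(N)
Y = X * np.exp(2j * np.pi * k * eps / N)               # offset-induced phase ramp
Y += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Average the phase increment between adjacent equalized subcarriers.
Z = Y / X
inc = np.angle(np.sum(Z[1:] * np.conj(Z[:-1])))
eps_hat = inc * N / (2 * np.pi)
print(f"true offset {eps}, estimate {eps_hat:.2f}")
```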
Abstract:
A common and practical paradigm in cooperative communications is the use of a dynamically selected 'best' relay to decode and forward information from a source to a destination. Such a system consists of two core phases: a relay selection phase, in which the system expends resources to select the best relay, and a data transmission phase, in which it uses the selected relay to forward data to the destination. In this paper, we study and optimize the trade-off between the selection and data transmission phase durations. We derive closed-form expressions for the overall throughput of a non-adaptive system that includes the selection phase overhead, and then optimize the selection and data transmission phase durations. Corresponding results are also derived for an adaptive system in which the relays can vary their transmission rates. Our results show that the optimal selection phase overhead can be significant even for fast selection algorithms. Furthermore, the optimal selection phase duration depends on the number of relays and whether adaptation is used.
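A toy Monte Carlo model of this tradeoff (not the paper's closed-form expressions): each extra selection slot improves the chance of resolving the genuinely best relay while shrinking the data phase of a fixed-length frame. The per-slot success probability is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n_relays, frame_len, trials = 8, 100, 20000
p_slot = 0.3  # assumed per-slot probability of resolving the best relay

for n_slots in (1, 2, 5, 10, 20):
    gains = rng.exponential(1.0, size=(trials, n_relays))  # Rayleigh power gains
    # Selection succeeds if any of the n_slots contention slots resolves it;
    # otherwise an arbitrary relay is used.
    found = rng.random(trials) < 1 - (1 - p_slot) ** n_slots
    gain = np.where(found, gains.max(axis=1), gains[:, 0])
    rate = np.log2(1 + gain).mean()
    tput = (1 - n_slots / frame_len) * rate  # selection slots eat into the frame
    print(f"{n_slots:2d} selection slots -> throughput {tput:.3f} bits/s/Hz")
```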
Abstract:
In linear elastic fracture mechanics (LEFM), Irwin's crack closure integral (CCI) is one of the significant concepts for the estimation of strain energy release rates (SERR) G, in individual as well as mixed-mode configurations. For effective utilization of this concept in conjunction with the finite element method (FEM), Rybicki and Kanninen [Engng Fracture Mech. 9, 931-938 (1977)] have proposed simple and direct estimations of the CCI in terms of nodal forces and displacements in the elements forming the crack tip, obtained from a single finite element analysis instead of the conventional two-configuration analyses. These modified CCI (MCCI) expressions are basically element-dependent. A systematic derivation of these expressions using element stress and displacement distributions is required. In the present work, a general procedure is given for the derivation of MCCI expressions in 3D problems with cracks. Further, a concept of sub-area integration is proposed which facilitates evaluation of SERR at a large number of points along the crack front without refining the finite element mesh. Numerical data are presented for two standard problems: a thick centre-cracked tension specimen and a semi-elliptical surface crack in a thick slab. Estimates for the stress intensity factor based on MCCI expressions corresponding to eight-noded brick elements are obtained and compared with available results in the literature.
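For reference, the classical mode-I forms involved here (notation: sigma_yy, crack-plane stress ahead of the tip; delta_y, crack-opening displacement; F_y, crack-tip nodal force; Delta v, relative opening of the node pair behind the tip; t, thickness); the paper's element-specific MCCI expressions generalize the second form:

```latex
% Irwin's crack closure integral (mode I):
G_I = \lim_{\Delta a \to 0} \frac{1}{2\,\Delta a}
      \int_0^{\Delta a} \sigma_{yy}(r)\, \delta_y(\Delta a - r)\, \mathrm{d}r

% Rybicki-Kanninen MCCI estimate from a single analysis:
G_I \approx \frac{F_y \, \Delta v}{2\, \Delta a\, t}
```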
Abstract:
The impulse response of a typical wireless multipath channel can be modeled as a tapped delay line filter whose non-zero components are sparse relative to the channel delay spread. In this paper, a novel method of estimating such sparse multipath fading channels for OFDM systems is explored. In particular, Sparse Bayesian Learning (SBL) techniques are applied to jointly estimate the sparse channel and its second order statistics, and a new Bayesian Cramer-Rao bound is derived for the SBL algorithm. Further, in the context of OFDM channel estimation, an enhancement to the SBL algorithm is proposed, which uses an Expectation Maximization (EM) framework to jointly estimate the sparse channel, unknown data symbols and the second order statistics of the channel. The EM-SBL algorithm is able to recover the support as well as the channel taps more efficiently, and/or using fewer pilot symbols, than the SBL algorithm. To further improve the performance of the EM-SBL, a threshold-based pruning of the estimated second order statistics that are input to the algorithm is proposed, and its mean square error and symbol error rate performance is illustrated through Monte-Carlo simulations. Thus, the algorithms proposed in this paper are capable of obtaining efficient sparse channel estimates even in the presence of a small number of pilots.
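A minimal EM-flavored SBL iteration for the pilot model y = Phi h + n, with per-tap prior variances gamma re-estimated from the posterior; dimensions, noise level and tap positions are illustrative, and the paper's joint data-symbol estimation and pruning threshold are omitted:

```python
import numpy as np

# Sparse Bayesian Learning sketch: prior h_i ~ CN(0, gamma_i); small
# gamma_i effectively prunes tap i, yielding a sparse channel estimate.
rng = np.random.default_rng(2)
n_pilots, n_taps, sigma2 = 32, 64, 1e-3

h = np.zeros(n_taps, complex)
h[[3, 17, 40]] = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # sparse taps
Phi = (rng.standard_normal((n_pilots, n_taps))
       + 1j * rng.standard_normal((n_pilots, n_taps))) / np.sqrt(2)
y = Phi @ h + np.sqrt(sigma2 / 2) * (rng.standard_normal(n_pilots)
                                     + 1j * rng.standard_normal(n_pilots))

gamma = np.ones(n_taps)
for _ in range(50):
    # E-step: posterior mean/covariance of h given the current gamma.
    Sigma = np.linalg.inv(Phi.conj().T @ Phi / sigma2 + np.diag(1 / gamma))
    mu = Sigma @ Phi.conj().T @ y / sigma2
    # M-step: gamma_i = E[|h_i|^2] under the posterior.
    gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))

print("recovered support:", np.flatnonzero(gamma > 1e-2))
```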
Abstract:
In a mobile ad-hoc network scenario, where communication nodes are mounted on moving platforms (such as jeeps, trucks and tanks), the use of V-BLAST requires that the number of receive antennas at a given node be greater than or equal to the sum of the numbers of transmit antennas of all its neighbor nodes. This limits the achievable spatial multiplexing gain (data rate) for a given node. In such a scenario, we propose to achieve high data rates per node through multicode direct-sequence spread spectrum techniques in conjunction with V-BLAST. In the considered multicode V-BLAST system, the receiver experiences code-domain interference (CDI) in frequency-selective fading, in addition to the space-domain interference (SDI) experienced in conventional V-BLAST systems. We propose two interference-cancelling receivers that employ a linear parallel interference cancellation approach to handle the CDI, followed by a conventional V-BLAST detector to handle the SDI, and evaluate their bit error rates.
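A sketch of one linear parallel interference cancellation (PIC) stage for the code-domain interference, assuming matched-filter outputs z = R b + n with a hypothetical code correlation matrix R; the space-domain V-BLAST stage that would follow is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 4  # number of parallel spreading codes (illustrative)

# Hypothetical non-orthogonal code correlation matrix (unit diagonal).
R = np.eye(K) + 0.25 * (np.ones((K, K)) - np.eye(K))
b = rng.choice([-1.0, 1.0], K)                # transmitted BPSK symbols
z = R @ b + 0.05 * rng.standard_normal(K)     # matched-filter outputs

b_hat = z.copy()                              # stage-0 estimate
for _ in range(3):                            # a few PIC stages
    # Subtract the cross-code interference reconstructed from the
    # previous stage's estimates, in parallel for all codes.
    b_hat = z - (R - np.eye(K)) @ b_hat
print("decisions:", np.sign(b_hat), "truth:", b)
```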
Abstract:
This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess Mean-Squared Error (MSE) distortion caused by channel errors. It is assumed that the source encoder is channel-agnostic, i.e., that a Vector Quantization (VQ) based compression designed for a noiseless channel is employed. The index output by the source encoder is sent over a noisy memoryless discrete symmetric channel, and the possibly incorrect received index is decoded by the corresponding VQ decoder. The output of the VQ decoder is processed by a receive filter to obtain an estimate of the source instantiation. In the sequel, the optimum linear receive filter structure that minimizes the overall MSE is derived, and is shown to have a minimum mean-squared error receiver-type structure. Further, expressions are derived for the resulting high-rate MSE performance. The performance is compared with the MSE obtained using conventional VQ as well as channel-optimized VQ. The accuracy of the expressions is demonstrated through Monte Carlo simulations.
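A toy scalar version of the receive-filter idea: a channel-agnostic quantizer index crosses a binary symmetric channel, and the linear coefficient a = E[s g]/E[g^2] applied to the decoder output g minimizes the end-to-end MSE among linear estimators. The codebook, rate and crossover probability are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200000, 0.1                           # samples, BSC crossover probability
codebook = np.array([-1.5, -0.5, 0.5, 1.5])  # 2-bit uniform quantizer

s = rng.standard_normal(n)
idx = np.abs(s[:, None] - codebook).argmin(axis=1)  # nearest-neighbor encoding

# Flip each of the 2 index bits independently with probability p.
flips = (rng.random((n, 2)) < p).astype(int)
rx_idx = idx ^ (flips[:, 0] | (flips[:, 1] << 1))
g = codebook[rx_idx]                          # VQ decoder output (possibly wrong)

a = np.mean(s * g) / np.mean(g * g)           # linear MMSE receive filter
print("MSE without filter:", np.mean((s - g) ** 2))
print("MSE with filter:   ", np.mean((s - a * g) ** 2))
```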
Abstract:
This paper describes an algorithm for constructing the solid model (boundary representation) from point data measured from the faces of the object. The point data is assumed to be clustered for each face. This algorithm does not require any computer model of the part to exist and does not require any topological information about the part to be input by the user. The property that a convex solid can be constructed uniquely from geometric input alone is utilized in the current work. Any object can be represented as a combination of convex solids. The proposed algorithm attempts to construct convex polyhedra from the given input. The polyhedra so obtained are then checked against the input data for containment, and those polyhedra that satisfy this check are combined (using a boolean union operation) to realise the solid model. Results of implementation are presented.
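A sketch of the convex-building-block step using a generic convex hull (not the paper's face-cluster construction) plus a containment check; the point cloud is synthetic:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(5)
points = rng.random((200, 3))            # hypothetical measured 3D points

# Fit a convex polyhedron to the points.
hull = ConvexHull(points)
print("hull facets:", len(hull.simplices), "volume:", round(hull.volume, 3))

# Containment test: a point lies in the hull iff it falls inside some
# simplex of the Delaunay triangulation of the hull vertices.
tri = Delaunay(points[hull.vertices])
queries = np.array([[0.5, 0.5, 0.5], [2.0, 2.0, 2.0]])
print("inside hull:", tri.find_simplex(queries) >= 0)
```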
Abstract:
In this work, a procedure is presented for the reconstruction of biological organs from image sequences obtained through CT scans. Although commercial software that can accomplish this task is readily available, the procedure presented here needs only free software. The procedure has been applied to reconstruct a liver from scan data available in the literature. 3D biological organs obtained this way can be used for the finite element analysis of biological organs, and this has been demonstrated by carrying out an FE analysis on the reconstructed liver.
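A minimal sketch of the free-software route using scikit-image: threshold a volume to segment the organ, then extract a triangulated surface with marching cubes. The synthetic sphere stands in for real CT data, and the paper's actual toolchain is not reproduced here:

```python
import numpy as np
from skimage import measure  # scikit-image, a free toolchain

# Fake "organ": a binary sphere standing in for a segmented CT volume.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x**2 + y**2 + z**2 < 25**2).astype(float)

# Extract a triangulated isosurface suitable for downstream FE meshing.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
# verts/faces can be exported (e.g. to STL) and meshed for FE analysis.
```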
Abstract:
Two methods based on wavelet/wavelet-packet expansion to denoise and compress optical tomography data containing scattered noise are presented. In the first, the wavelet expansion coefficients of the noisy data are shrunk using a soft threshold. In the second, the data are expanded into a wavelet packet tree, on which a best-basis search is performed. The resulting coefficients are truncated on the basis of energy content. The first method results in efficient denoising of experimental data when the scattering particle density in the medium surrounding the object is up to 12.0 x 10^6 per cm^3. This method achieves a compression ratio of approximately 8:1. The wavelet-packet-based method results in a compression of up to 11:1 and also exhibits reasonable noise reduction capability. Tomographic reconstructions obtained from denoised data are presented. (C) 1999 Published by Elsevier Science B.V. All rights reserved.
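A sketch of the first method with PyWavelets: expand the noisy signal, soft-threshold the detail coefficients with the universal threshold, and reconstruct. The test signal and tuning are illustrative, not the optical tomography data:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + (t > 0.5)      # smooth signal plus an edge
noisy = clean + 0.2 * rng.standard_normal(t.size)

# Wavelet expansion, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise estimate
thresh = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, "soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```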
Abstract:
Filtering methods are explored for removing noise from data while preserving sharp edges that may indicate a trend shift in gas turbine measurements. Linear filters are found to have problems removing noise while preserving features in the signal. The nonlinear hybrid median filter is found to accurately reproduce the root signal from noisy data. Simulated faulty data and fault-free gas path measurement data are passed through median filters, and health residuals for the data set are created. The health residual is a scalar norm of the gas path measurement deltas and is used to partition the faulty engine from the healthy engine using fuzzy sets. The fuzzy detection system is developed and tested with noisy data and with filtered data. Tests with simulated fault-free and faulty data show that fuzzy trend-shift detection based on filtered data is very accurate, with no false alarms and negligible missed alarms.
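A sketch of the filtering step, with a plain 1-D median filter standing in for the hybrid median filter and a simple residual threshold standing in for the fuzzy detector; the step signal is synthetic:

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(7)
signal = np.where(np.arange(500) < 300, 0.0, 1.0)  # step = trend shift
noisy = signal + 0.3 * rng.standard_normal(500)

# Median filtering removes the noise while preserving the sharp edge.
smoothed = medfilt(noisy, kernel_size=11)

# Residual vs. a healthy baseline; a threshold flags the shift.
residual = np.abs(smoothed - smoothed[:100].mean())
print("shift detected at sample:", int(np.argmax(residual > 0.5)))
```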
Abstract:
We propose a method to encode 3D magnetic resonance image data, together with a decoder, in such a way that fast access to any 2D image is possible by decoding only the corresponding information from each subband image, thus minimizing decoding time. This will be of immense use to the medical community, because most PET and MRI data are volumetric. Preprocessing is carried out at every level before the wavelet transformation, to enable easier identification of coefficients from each subband image. Inclusion of special characters in the bit stream facilitates access to the corresponding information in the encoded data. Results are obtained by performing Daub4 along the x (row) and y (column) directions and Haar along the z (slice) direction. Comparable results are achieved with respect to the existing technique; in addition, decoding time is reduced by a factor of 1.98. Arithmetic coding is used to encode the corresponding information independently.
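A sketch of the per-axis transform with PyWavelets, whose dwtn/idwtn accept one wavelet per axis: Daub4 along x and y, Haar along z, as in the abstract. The volume is random placeholder data:

```python
import numpy as np
import pywt  # PyWavelets

vol = np.random.default_rng(8).random((64, 64, 16))  # hypothetical MR volume

# One decomposition level; subband keys ('aaa', 'aad', ...) identify the
# coefficients needed to rebuild any one slice cheaply.
subbands = pywt.dwtn(vol, wavelet=("db4", "db4", "haar"))
for key, band in sorted(subbands.items()):
    print(key, band.shape)

# Perfect-reconstruction check.
rec = pywt.idwtn(subbands, wavelet=("db4", "db4", "haar"))
print("max reconstruction error:", np.abs(rec - vol).max())
```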