12 results for preprocessing
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Modern Multiple-Input Multiple-Output (MIMO) communication systems place huge demands on embedded processing resources in terms of throughput, latency and resource utilization. State-of-the-art MIMO detector algorithms, such as Fixed-Complexity Sphere Decoding (FSD), rely on efficient channel preprocessing involving numerous calculations of the pseudo-inverse of the channel matrix by QR Decomposition (QRD) and ordering. These highly complex operations can quickly become the critical bottleneck for real-time MIMO detection, a problem that is exacerbated as the number of antennas in a MIMO detector increases. This paper describes a sorted QR decomposition (SQRD) algorithm extended for FSD, which significantly reduces the complexity and latency of this preprocessing step and increases the throughput of MIMO detection. It merges the QRD and ordering operations to avoid multiple iterations of QRD. Specifically, it shows that SQRD reduces computational complexity by 60-70% compared with conventional MIMO preprocessing algorithms, while in 4x4 to 7x7 MIMO cases the approach suffers only a 0.16-0.2 dB degradation in Bit Error Rate (BER) performance.
Abstract:
This paper compares the complexity of the sphere decoder (SD) and a previously proposed detection scheme, denoted here as block SD (BSD), when they are applied to the detection of multiple-input multiple-output (MIMO) systems in frequency-selective channels. The complexity of both algorithms depends on their preprocessing and tree search stages. Although the BSD was proposed as a means of greatly reducing the complexity of the preprocessing stage of the SD, no study had examined how that reduced preprocessing stage affects the complexity of the tree search stage. This paper shows, both analytically and through simulation, that the reduction in preprocessing complexity provided by the BSD has the side effect of increasing the complexity of its tree search stage compared with that of the SD, independently of the signal-to-noise ratio (SNR). In addition, this paper shows that sorting the columns of the frequency-selective channel matrix in the SD does not reduce the complexity of the tree search stage, contrary to what occurs in frequency-flat channels.
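For context, the following is a textbook depth-first sphere decoder sketch in NumPy (not the BSD variant studied in the paper): the preprocessing stage is the QR decomposition, and the tree search prunes candidate branches against the best metric found so far.

```python
import numpy as np

def sphere_decode(y, H, symbols):
    """Minimal depth-first sphere decoder (hard-output ML detection).

    y       : received vector (m,)
    H       : channel matrix (m, n)
    symbols : 1-D array of constellation points, e.g. [-1, 1] for BPSK

    Preprocessing: QR-decompose H so the metric becomes
    ||q - R s||^2 with R upper triangular, letting the tree search
    prune whole subtrees against the current best radius.
    """
    Q, R = np.linalg.qr(H)
    q = Q.conj().T @ y
    n = H.shape[1]
    best = {'d': np.inf, 's': None}

    def search(level, s, partial_dist):
        if partial_dist >= best['d']:          # prune this subtree
            return
        if level < 0:                          # leaf: full candidate found
            best['d'], best['s'] = partial_dist, s.copy()
            return
        for x in symbols:                      # expand child nodes
            s[level] = x
            inc = abs(q[level] - R[level, level:] @ s[level:]) ** 2
            search(level - 1, s, partial_dist + inc)

    search(n - 1, np.zeros(n, dtype=complex), 0.0)
    return best['s']
```

The trade-off the paper analyses lives in exactly these two stages: a cheaper preprocessing step can leave R less favourably conditioned or ordered, so the tree search visits more nodes.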
Abstract:
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples taken both on admission and at a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
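A minimal scikit-learn sketch of the winning configuration (FDA preprocessing followed by logistic regression) is shown below. The data, labels and thresholds are synthetic stand-ins fabricated for illustration only; the five feature columns merely mimic the marker panel.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss, accuracy_score

# X: marker concentrations (cTnI, CKMB, myoglobin, FABP, GPBB); y: AMI label
rng = np.random.default_rng(0)
X = rng.lognormal(size=(400, 5))              # synthetic stand-in data
y = (X[:, 0] + X[:, 3] > 2.5).astype(int)     # synthetic AMI labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# FDA projection as a preprocessing step, then logistic regression
clf = make_pipeline(LinearDiscriminantAnalysis(), LogisticRegression())
clf.fit(X_tr, y_tr)

p = clf.predict_proba(X_te)[:, 1]             # estimated P(AMI)
print("accuracy:", accuracy_score(y_te, (p > 0.5).astype(int)))
print("Brier score:", brier_score_loss(y_te, p))  # reliability of P(AMI)
```

The Brier score directly measures the quality of the probability estimates rather than just the hard classifications, which is why the abstract stresses reliability alongside discriminatory ability.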
Abstract:
Recently, a single-symbol decodable transmit strategy based on preprocessing at the transmitter has been introduced to decouple quasi-orthogonal space-time block codes (QOSTBC) with reduced complexity at the receiver [9]. Unfortunately, it does not achieve full diversity and thus suffers from significant performance loss. To tackle this problem, we propose a full-diversity scheme with four transmit antennas in this letter. The proposed code is based on a class of restricted full-rank single-symbol decodable designs (RFSDD) and shares many characteristics with the coordinate interleaved orthogonal designs (CIODs), but with a lower peak-to-average ratio (PAR).
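For background, the classic "ABBA" quasi-orthogonal STBC codeword for four transmit antennas can be sketched as follows. This is the standard QOSTBC construction the letter builds on, not the proposed RFSDD code itself.

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti block: rows are time slots, columns are antennas."""
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

def qostbc_abba(s1, s2, s3, s4):
    """4x4 'ABBA' quasi-orthogonal STBC codeword for 4 tx antennas.

    Stacks two Alamouti blocks A(s1, s2) and B(s3, s4) as
    [[A, B], [B, A]]; symbol pairs remain only quasi-orthogonal,
    which is what transmitter preprocessing and coordinate-interleaving
    schemes attempt to decouple at the receiver.
    """
    A = alamouti(s1, s2)
    B = alamouti(s3, s4)
    return np.block([[A, B], [B, A]])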
Abstract:
Latent semantic indexing (LSI) is a technique used for intelligent information retrieval (IR). It can be used as an alternative to traditional keyword-matching IR and is attractive in this respect because of its ability to overcome problems with synonymy and polysemy. This study investigates two aspects of LSI: the effect of the Haar wavelet transform (HWT) as a preprocessing step for the singular value decomposition (SVD) in the key stage of the LSI process, and the effect of different threshold types in the HWT on the search results. The developed method allows the term-document matrix generated in the LSI process to be visualised and processed using the HWT. The results show that precision can be increased by applying the HWT as a preprocessing step, with better results for hard thresholding than for soft thresholding, whereas standard SVD-based LSI remains the most effective way of searching in terms of recall.
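A minimal NumPy sketch of the preprocessing idea: apply a one-level Haar wavelet transform to each column of the term-document matrix, hard-threshold the detail coefficients, invert the transform, and then run the usual truncated-SVD LSI step. The threshold value and matrix sizes below are arbitrary illustrations.

```python
import numpy as np

def haar_hard_threshold(A, thresh):
    """One-level Haar transform of each column, hard-threshold the
    detail coefficients, then invert. A's row count must be even."""
    avg = (A[0::2] + A[1::2]) / np.sqrt(2)    # approximation coefficients
    det = (A[0::2] - A[1::2]) / np.sqrt(2)    # detail coefficients
    det[np.abs(det) < thresh] = 0.0           # hard thresholding
    out = np.empty_like(A)
    out[0::2] = (avg + det) / np.sqrt(2)      # inverse Haar step
    out[1::2] = (avg - det) / np.sqrt(2)
    return out

def lsi(A, k):
    """Rank-k LSI: truncated SVD of the term-document matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k]

# term-document matrix (terms x documents), denoised before the SVD
A = np.random.rand(128, 40)
U, s, Vt = lsi(haar_hard_threshold(A, thresh=0.05), k=10)
```

Hard thresholding (zeroing small coefficients outright, as above) differs from soft thresholding, which also shrinks the surviving coefficients towards zero; the study found the former to give better precision.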
Abstract:
This paper reports image analysis methods that have been developed to study the microstructural changes of non-wovens made by the hydroentanglement process. The validity of the image processing techniques has been ascertained by applying them to test images with known properties. The parameters used in preprocessing the scanning electron microscope (SEM) images have been tested and optimized. The fibre orientation distribution is estimated using fast Fourier transform (FFT) and Hough transform (HT) methods, and the results obtained by the two methods are in good agreement. The HT method is computationally more demanding than the FFT method; its advantage, however, is that the actual orientation of the lines can be read directly from the result of the transform without any further computation. The distribution of the lengths of the straight fibre segments of the fabrics is evaluated by the HT method, and the effect of fibre curl on this evaluation is shown.
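As a sketch of an HT-based orientation estimate, the snippet below uses scikit-image's straight-line Hough transform on a binarised edge image (e.g. from an edge detector applied to the SEM micrograph); the angle resolution and normalisation are illustrative choices.

```python
import numpy as np
from skimage.transform import hough_line

def fibre_orientation_hist(edges, n_angles=180):
    """Estimate a fibre orientation distribution from a binary edge
    image using the straight-line Hough transform.

    The column-wise maxima of the accumulator give, for each candidate
    angle, the strongest straight-line evidence at that orientation;
    normalising them yields an orientation distribution directly, with
    no further computation (the advantage noted for HT over the FFT).
    """
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    accumulator, thetas, dists = hough_line(edges, theta=angles)
    strength = accumulator.max(axis=0).astype(float)  # best line per angle
    return thetas, strength / strength.sum()
```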
Abstract:
Feature analysis is an important task that can significantly affect the performance of automatic bacteria colony picking, and unstructured environments further complicate automatic colony screening. This paper presents a novel approach to adaptive colony segmentation in unstructured environments that treats the detected peaks of intensity histograms as a morphological feature of the images. To suppress spurious peaks, an entropy-based mean shift filter is introduced to smooth the images as a preprocessing step. The relevance and importance of these features are then determined in an improved support vector machine classifier using unascertained least squares estimation. Experimental results show that the proposed unascertained least squares support vector machine (ULSSVM) achieves better recognition accuracy than other state-of-the-art techniques, and its training takes less time than most of the traditional approaches considered in this paper.
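A rough sketch of the segmentation pipeline follows, using OpenCV's pyramid mean-shift filter as a stand-in for the paper's entropy-based mean shift smoothing, with a threshold placed at the valley between the two dominant histogram peaks; the ULSSVM classification stage is omitted, and the parameters are illustrative.

```python
import cv2
import numpy as np
from scipy.signal import find_peaks

def segment_colonies(bgr_image):
    """Smooth with (pyramid) mean-shift filtering, then threshold at the
    valley between the two dominant intensity-histogram peaks
    (background vs. colonies; flip the polarity for dark colonies)."""
    # edge-preserving smoothing suppresses spurious histogram peaks
    smoothed = cv2.pyrMeanShiftFiltering(bgr_image, 15, 30)  # sp, sr
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)

    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    peaks, _ = find_peaks(hist, prominence=hist.max() * 0.05)
    if len(peaks) < 2:                        # fall back to Otsu
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask
    p1, p2 = sorted(peaks[np.argsort(hist[peaks])[-2:]])
    valley = p1 + np.argmin(hist[p1:p2 + 1])  # threshold at the valley
    return (gray > valley).astype(np.uint8) * 255
```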
Abstract:
This paper presents a machine learning approach to sarcasm detection on Twitter in two languages, English and Czech. Although there has been some research on sarcasm detection in languages other than English (e.g., Dutch, Italian, and Brazilian Portuguese), our work is the first attempt at sarcasm detection in the Czech language. We created a large Czech Twitter corpus consisting of 7,000 manually labeled tweets and provide it to the community. We evaluate two classifiers with various combinations of features on both the Czech and English datasets. Furthermore, we tackle the issues posed by rich Czech morphology by examining different preprocessing techniques. Experiments show that our language-independent approach significantly outperforms adapted state-of-the-art methods in English (F-measure 0.947) and also represents a strong baseline for further research in Czech (F-measure 0.582).
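For illustration, here is a minimal scikit-learn pipeline in the spirit of a language-independent approach: character n-grams avoid language-specific stemming, which is one common way of sidestepping rich morphology. The example tweets, features and classifier below are illustrative assumptions, not the paper's exact configuration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical labelled tweets (1 = sarcastic, 0 = literal)
tweets = ["I just love being stuck in traffic", "Nice weather today"]
labels = [1, 0]

# Character n-grams need no language-specific preprocessing, keeping
# the pipeline language-independent across English and Czech.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LinearSVC(),
)
clf.fit(tweets, labels)
print(clf.predict(["Oh great, another Monday"]))
```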
Abstract:
Numerical methods have enabled the simulation of complex problems in offshore and marine engineering. A significant challenge in these simulations is the creation of a realistic wave field: a good numerical wave tank must be able to generate and absorb waves at various locations. Several numerical wavemakers with these capabilities have been presented in the past. This paper reviews four different wavemaker methods and discusses their limitations, computational efficiency, mesh and preprocessing requirements, and complexity of implementation.
Abstract:
We describe a pre-processing correlation attack on an FPGA implementation of AES protected with a random clocking countermeasure, which exhibits complex variations in both the location and amplitude of the power consumption patterns of the AES rounds. We demonstrate that the merged round patterns can be pre-processed to identify and extract the individual round amplitudes, enabling a successful power analysis attack. We also show that the countermeasure's requirement to provide a varying execution time between processing rounds can be exploited to select a subset of data where sufficient current decay has occurred, further improving the attack. Whereas the countermeasure's security had been estimated at 3 million traces against an integration attack, we show that, through application of our proposed techniques, it can be broken with as few as 13,000 traces.
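For context, the correlation stage that consumes such pre-processed traces can be sketched as below. A real attack on AES would use a leakage model targeting the S-box output (e.g. leak = lambda v: hamming_weight(SBOX[v])); the 256-entry S-box table is omitted here to keep the sketch self-contained, so the leakage model is passed in as a parameter.

```python
import numpy as np

def hamming_weight(x):
    """Number of set bits in a byte value."""
    return int(np.unpackbits(np.atleast_1d(x).astype(np.uint8)).sum())

def cpa(traces, plaintexts, leak):
    """Correlation power analysis: for each key-byte guess, correlate
    the hypothetical leakage with every trace sample, keep the best.

    traces     : (n_traces, n_samples) pre-processed power measurements
    plaintexts : (n_traces,) one plaintext byte per trace
    leak       : leakage model mapping an intermediate byte to a number
    """
    t = traces - traces.mean(axis=0)          # centre each sample index
    scores = np.zeros(256)
    for guess in range(256):
        hyp = np.array([leak(p ^ guess) for p in plaintexts], float)
        hyp -= hyp.mean()
        # Pearson correlation of the hypothesis against each sample
        corr = hyp @ t / (np.linalg.norm(hyp) * np.linalg.norm(t, axis=0))
        scores[guess] = np.abs(corr).max()
    return int(np.argmax(scores))             # most likely key byte
```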
Abstract:
As cryptographic implementations are increasingly subsumed as functional blocks within larger systems-on-chip, it becomes more difficult to identify the power consumption signatures of cryptographic operations amongst other unrelated processing activities. In addition, at higher clock frequencies the current decay between successive processing rounds is only partial, making it harder to apply existing pattern matching techniques in side-channel analysis. We show, however, that through the use of a phase-sensitive detector, power traces can be pre-processed to generate a filtered output with an enhanced round pattern, enabling the identification of locations on a device where encryption operations are occurring and assisting with the re-alignment of power traces for side-channel attacks.
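A digital phase-sensitive (lock-in) detector of the kind described can be sketched in a few lines of NumPy/SciPy: mix the trace with quadrature references at the frequency of interest, low-pass filter both products, and take the magnitude. The filter order and bandwidth below are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def phase_sensitive_envelope(trace, fs, f_ref, bw=0.1):
    """Digital phase-sensitive (lock-in) detection of a power trace.

    Mixes the trace with quadrature references at the target frequency
    f_ref (e.g. the round/clock rate) and low-pass filters both
    products; the magnitude is an envelope highlighting activity at
    f_ref regardless of its phase, which enhances the round pattern
    used to locate and re-align encryption operations.
    """
    t = np.arange(len(trace)) / fs
    i = trace * np.cos(2 * np.pi * f_ref * t)   # in-phase mixing
    q = trace * np.sin(2 * np.pi * f_ref * t)   # quadrature mixing
    b, a = butter(4, bw * f_ref / (fs / 2))     # narrow low-pass filter
    return np.hypot(filtfilt(b, a, i), filtfilt(b, a, q))
```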
Abstract:
Pre-processing (PP) of the received symbol vector and channel matrices is an essential prerequisite operation for Sphere Decoder (SD)-based detection in Multiple-Input Multiple-Output (MIMO) wireless systems. PP is a highly complex operation, yet it represents only a small fraction of the overall computational cost of detecting an OFDM MIMO frame in standards such as 802.11n. Despite this, real-time PP architectures are highly inefficient and dominate the resource cost of real-time SD architectures. This paper resolves this issue. By reorganising the ordering and QR decomposition sub-operations of PP, we describe a Field Programmable Gate Array (FPGA)-based PP architecture for the Fixed-Complexity Sphere Decoder (FSD) applied to 4 × 4 802.11n MIMO which reduces resource cost by 50% compared with state-of-the-art solutions whilst maintaining real-time performance.