52 results for pre-processing quality
at Universidad Politécnica de Madrid
Abstract:
Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western world. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing stage to ensure data quality (missing value imputation, probe quality filtering, data smoothing and intraclass variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes. Two of them are directly involved in cancer progression, and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new, unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values ranging from 0.955 to 0.997).
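A minimal sketch of this kind of supervised pipeline (missing value imputation, variability filtering, a Bayesian-flavoured classifier evaluated by cross-validated AUC), using scikit-learn on toy data; the thresholds, classifier choice and data shapes are illustrative assumptions, not the study's actual settings:

# Minimal sketch of a microarray-style pre-processing + classification pipeline.
# Thresholds, classifier choice and data shapes are illustrative assumptions only.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import VarianceThreshold
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 500))          # 64 samples x 500 probes (toy stand-in)
X[rng.random(X.shape) < 0.02] = np.nan  # simulate missing expression values
y = rng.integers(0, 2, size=64)         # tumoral (1) vs non-tumoral (0) labels

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),      # missing value imputation
    ("filter", VarianceThreshold(threshold=0.5)),    # drop low-variability probes
    ("clf", GaussianNB()),                           # simple Bayesian classifier
])

auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print("mean cross-validated AUC:", auc.mean())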
Abstract:
Energy efficiency is a major design issue in the context of Wireless Sensor Networks (WSN). If data is to be sent to a far-away base station, collaborative beamforming by the sensors may help to distribute the load among the nodes and reduce fast battery depletion. However, collaborative beamforming techniques are far from optimality and in many cases may be wasting more power than required. In this contribution we consider the issue of energy efficiency in beamforming applications. Using a convex optimization framework, we propose the design of a virtual beamformer that maximizes the network's lifetime while satisfying a pre-specified Quality of Service (QoS) requirement. A distributed consensus-based algorithm for the computation of the optimal beamformer is also provided.
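As a rough illustration of a convex, QoS-constrained beamformer design of this flavour, the sketch below uses the cvxpy package to balance per-node power (a common proxy for extending network lifetime) under a received-signal constraint; the channel values, QoS level and min-max criterion are illustrative assumptions, not the paper's formulation:

# Sketch: balance per-node transmit power (a proxy for network lifetime)
# subject to a QoS constraint on the received signal at the base station.
# Channel vector, node count and QoS level are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 8                                             # number of sensor nodes
h = rng.normal(size=n) + 1j * rng.normal(size=n)  # node-to-base-station channels
qos = 1.0                                         # required received amplitude

w = cp.Variable(n, complex=True)                  # beamforming weights
objective = cp.Minimize(cp.max(cp.abs(w)))        # minimise the largest node power
constraints = [cp.real(h.conj() @ w) >= qos,      # QoS: coherent gain at the receiver
               cp.imag(h.conj() @ w) == 0]
cp.Problem(objective, constraints).solve()
print("per-node amplitudes:", np.abs(w.value))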
Abstract:
Remote sensing information from spaceborne and airborne platforms continues to provide valuable data for different environmental monitoring applications. In this sense, high spatial resolution imagery is an important source of information for land cover mapping. For the processing of high spatial resolution images, the object-based methodology is one of the most commonly used strategies, since conventional pixel-based methods, which only use spectral information for land cover classification, are inadequate for classifying this type of image. This research presents a methodology to characterise Mediterranean land covers in high resolution aerial images by means of an object-oriented approach. It uses a self-calibrating multi-band region growing approach optimised by pre-processing the image with bilateral filtering. The obtained results show promise in terms of both segmentation quality and computational efficiency.
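A minimal sketch of the bilateral-filtering pre-processing step, using OpenCV on a synthetic tile; the filter parameters are illustrative assumptions rather than the values used in the study:

# Sketch: bilateral filtering as an edge-preserving smoothing step before
# region-growing segmentation. The synthetic tile and filter parameters
# are illustrative assumptions.
import numpy as np
import cv2

rng = np.random.default_rng(0)
tile = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)   # stand-in aerial tile

# d: pixel neighbourhood diameter; sigmaColor / sigmaSpace control how much
# dissimilar colours and distant pixels contribute, so edges are preserved
# while homogeneous regions are smoothed (which helps region growing).
smoothed = cv2.bilateralFilter(tile, d=9, sigmaColor=75, sigmaSpace=75)
print(smoothed.shape, smoothed.dtype)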
Abstract:
Recent advances in non-destructive imaging techniques, such as X-ray computed tomography (CT), make it possible to analyse pore space features from the direct visualisation of soil structures. A quantitative characterisation of the three-dimensional solid-pore architecture is important for understanding soil mechanics, as it relates to the control of biological, chemical, and physical processes across scales. This analysis technique therefore offers an opportunity to better interpret soil strata, as new and relevant information can be obtained. In this work, we propose an approach to automatically identify the pore structure of a set of 200 2D images that represent slices of an original 3D CT image of a soil sample, which is accomplished through non-linear enhancement of the pixel grey levels and an image segmentation based on a PFCM (Possibilistic Fuzzy C-Means) algorithm. Once the solids and pore spaces have been identified, the set of 200 2D images is used to reconstruct an approximation of the soil sample by projecting only the pore spaces. This reconstruction shows the structure of the soil and its pores, which become more bounded, less bounded, or unbounded with changes in depth. If the soil sample image quality is sufficiently favourable in terms of contrast, noise and sharpness, pore identification is less complicated and the PFCM clustering algorithm can be used without additional processing; otherwise, the images require pre-processing before using this algorithm. Promising results were obtained with four soil samples: the first was used to show the validity of the algorithm and the other three were used to demonstrate the robustness of our proposal. The methodology we present here can better detect the solid soil and pore spaces in CT images, enabling the generation of better 2D/3D representations of pore structures from segmented 2D images.
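The following sketch clusters pixel grey levels with a standard fuzzy c-means, as a simplified stand-in for the possibilistic variant (PFCM) used in the work; the toy slice, number of clusters and fuzzifier are assumptions:

# Sketch: standard fuzzy c-means on pixel grey levels, as a simplified
# stand-in for the possibilistic variant (PFCM) used in the work.
# The toy "slice", number of clusters and fuzzifier m are assumptions.
import numpy as np

def fuzzy_cmeans(values, c=2, m=2.0, n_iter=100, seed=0):
    """Cluster 1-D grey levels into c fuzzy clusters; return memberships and centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centres = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(values[:, None] - centres[None, :]) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))        # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centres

# Toy slice: dark pores vs bright solids, flattened to a 1-D array of grey levels.
rng = np.random.default_rng(1)
slice_ = np.concatenate([rng.normal(40, 5, 500), rng.normal(200, 10, 500)])
memberships, centres = fuzzy_cmeans(slice_)
pores = memberships[:, np.argmin(centres)] > 0.5  # pixels assigned to the dark cluster
print("estimated centres:", np.sort(centres), "pore fraction:", pores.mean())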
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and then the error between the predictions and the actual values is logarithmically quantised. The main advantage of LHE is that, although it is capable of achieving low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and of both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
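As a toy illustration of logarithmic quantisation of prediction errors (the core idea behind LHE), the sketch below predicts each pixel from its left neighbour and snaps the error to the nearest value in an assumed hop set; the predictor and hop values are not LHE's actual tables:

# Sketch: logarithmic quantisation of per-pixel prediction errors, in the
# spirit of the Weber-Fechner law. The predictor (previous pixel) and the
# hop values below are illustrative assumptions, not LHE's actual tables.
import numpy as np

hops = np.array([-64, -32, -16, -8, -4, 0, 4, 8, 16, 32, 64])  # assumed hop set

def encode_row(row):
    """Encode one image row as indices into the hop set."""
    codes, prediction = [], int(row[0])
    for value in row[1:]:
        error = int(value) - prediction
        idx = int(np.argmin(np.abs(hops - error)))   # nearest logarithmic hop
        codes.append(idx)
        prediction = int(np.clip(prediction + hops[idx], 0, 255))  # decoder-side value
    return codes

row = np.array([100, 104, 120, 119, 60, 58, 200], dtype=np.uint8)
print(encode_row(row))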
Abstract:
PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ System is an integrated, embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing needed to obtain SHM maps. PAMELA devices consist of hardware based on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, PAMELA devices, in addition to being able to perform tests and transmit the collected data to the controller, are capable of local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download their own algorithm code and add the new data processing algorithm to the device. The development of the SMA is done in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed to perform the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
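A minimal sketch of delay-and-sum imaging of the kind mentioned as an example above; the geometry, wave speed and synthetic residual signals are illustrative assumptions, not PAMELA's actual configuration:

# Sketch: delay-and-sum imaging for a guided-wave SHM sensor network.
# Geometry, wave speed and the synthetic signals are illustrative assumptions.
import numpy as np

fs = 1_000_000                      # sampling rate [Hz]
c = 5000.0                          # assumed group velocity [m/s]
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])  # actuator at sensors[0]

def damage_map(signals, grid_x, grid_y):
    """Sum the residual-signal amplitude at the actuator->pixel->sensor time of flight."""
    image = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            pixel = np.array([x, y])
            for s, sig in enumerate(signals):
                tof = (np.linalg.norm(pixel - sensors[0]) +
                       np.linalg.norm(pixel - sensors[s])) / c
                idx = int(tof * fs)
                if idx < len(sig):
                    image[iy, ix] += abs(sig[idx])
    return image

# Toy residual signals (baseline-subtracted), one per sensor.
signals = [np.random.default_rng(s).normal(0, 0.01, 2000) for s in range(len(sensors))]
grid = np.linspace(0.0, 0.5, 25)
print(damage_map(signals, grid, grid).shape)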
Abstract:
This Thesis presents a specific methodology for the characterization of acoustic transmission systems based on the parametric array phenomenon. These structures are well-known representatives of the nonlinear acoustics field and offer large technological opportunities. Parametric arrays exploit the nonlinear behaviour of air to obtain, at the receiver's side, sonic signals that were generated within the ultrasonic range. The underlying physical process results in a complex relationship between the transmitted and received signals, which includes both a strong equalization and a distortion that is appreciable to a human listener. High-fidelity acoustic equipment based on this phenomenon is therefore difficult to design. Until recently, efforts devoted to this goal have focused on fidelity enhancement based on physically informed pre-processing schemes, derived directly from the nonlinear form of the wave equation; however, only limited enhancement has been achieved. In this Thesis we propose a novel approach: the evaluation of a complete representation of the system through its projection onto the Volterra series, which allows the posterior inference of a computationally light and reliable compensation scheme. The main difficulty in deriving such a representation stems from the need for a complete identification methodology suitable for this particular type of structure. For example, whenever identification techniques are involved, preliminary estimates of certain parameters are required to correctly parameterize the system. In this Thesis we propose a methodology to derive such initial values from simple measurements. Once this information is available, a complete identification scheme for nonlinear systems based on pseudorandom signals is applied. These signals contribute to the robustness and fidelity of the resulting model, and facilitate both the inference of the underlying structure, which we subdivide into a simple block-oriented construction, and the design of the corresponding compensation structure. In a scenario such as this, where modulation processes occur, one must control exogenous factors such as the device's operating point and the physical properties of the transducer, as these may conflict with the principles behind standard identification procedures, including the proposed one. With this idea in mind, the Thesis includes a series of novel correction algorithms that facilitate the application of the characterization results to the system compensation. The proposed algorithms are tested on a prototype that was designed and built for this purpose. The methodology and instrumentation required for its design, the identification of the overall acoustic system and its correction are all based on signal processing techniques applied at the system front end, i.e. prior to transduction. Results are evaluated in terms of input-output modelling error, comparing the output of the real system with the output synthesized from the estimated model. This criterion ensures that compensation techniques can actually be introduced, since these are highly sensitive to estimation errors in both the envelope and the phase of the signals involved. Finally, the quality of the overall system is evaluated in terms of phase, spectral colour and nonlinear distortion, by means of a test protocol specifically devised for this Thesis, as a prior step towards a future subjective quality evaluation.
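As a small illustration of Volterra-series identification from pseudorandom excitation, the sketch below fits a short-memory, second-order Volterra model by least squares; the memory length and the toy "true" system are assumptions, and the real methodology in the Thesis is considerably more involved:

# Sketch: least-squares identification of a short-memory, second-order
# Volterra model y[n] ~ sum_i h1[i] x[n-i] + sum_{i<=j} h2[i,j] x[n-i] x[n-j].
# The memory length and the toy "true" system are illustrative assumptions.
import numpy as np
from itertools import combinations_with_replacement

def volterra_features(x, memory):
    """Build the linear + quadratic regressor matrix for each sample."""
    lagged = np.column_stack([np.roll(x, i) for i in range(memory)])
    lagged[:memory, :] = 0                      # zero the invalid initial samples
    quad = np.column_stack([lagged[:, i] * lagged[:, j]
                            for i, j in combinations_with_replacement(range(memory), 2)])
    return np.hstack([lagged, quad])

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 4000)                    # pseudorandom excitation
y = 0.8 * x + 0.3 * np.roll(x, 1) + 0.2 * x * np.roll(x, 1)   # toy nonlinear system
y[:2] = 0

Phi = volterra_features(x, memory=3)
kernels, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated coefficients:", np.round(kernels, 3))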
Abstract:
The present work describes a new methodology for the automatic detection of the glottal space in laryngeal images taken from 15 videos recorded with videostroboscopic equipment by the ENT service of the Gregorio Marañón Hospital in Madrid. The system is based on active contour models (snakes). For the pre-processing, the algorithm combines some traditional techniques (thresholding and median filtering) with more sophisticated techniques such as anisotropic filtering. In this way, an image appropriate for the use of snakes is obtained. The value selected for the threshold is 85% of the maximum peak of the image histogram; above this value the pixel information is not relevant. The anisotropic filter makes it possible to distinguish two intensity levels, one being the background and the other the glottis. The initialization is based on the magnitude of the GVF field; in this way an automatic process for the selection of the initial contour is ensured. The performance of the algorithm is validated using the Pratt coefficient and compared against a manual segmentation and another automatic method based on the watershed transform.
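A minimal sketch of the pre-processing stage (median filtering followed by thresholding), under one possible reading of the "85% of the histogram peak" rule; the synthetic frame and filter size are illustrative assumptions:

# Sketch: median filtering plus thresholding at 85% of the histogram peak,
# as a pre-processing step before active-contour (snake) segmentation.
# The synthetic frame, filter size and the reading of the threshold rule
# are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
frame = rng.normal(180, 10, (256, 256))                  # bright tissue background
frame[100:160, 110:130] = rng.normal(40, 5, (60, 20))    # dark glottal slit
frame = np.clip(frame, 0, 255)

smoothed = median_filter(frame, size=5)                  # remove impulsive noise
counts, bin_edges = np.histogram(smoothed, bins=256)
peak_level = bin_edges[np.argmax(counts)]                # grey level where the histogram peaks
threshold = 0.85 * peak_level                            # 85% of that peak level
mask = smoothed < threshold                              # keep only the darker glottal candidates
print("candidate glottal pixels:", int(mask.sum()))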
Abstract:
Different procedures for monitoring the evolution of leafy vegetables under plastic covers during cold storage have been studied. Fifteen spinach leaves were put inside Petri dishes covered with three different plastic films and stored at 4 °C for 21 days. Hyperspectral images were taken during this storage. A radiometric correction is proposed in order to compensate for the variation over time in the transmittance of the plastic films in the hyperspectral images. Afterwards, three spectral pre-processing procedures (no pre-processing, Savitzky–Golay and Standard Normal Variate, each combined with Principal Component Analysis) were applied to obtain different models. The corresponding artificial score images were studied by means of Analysis of Variance to compare their ability to sense the aging of the leaves. All models were able to monitor aging throughout storage. The radiometric correction seemed to work properly and could allow the supervision of shelf-life in leafy vegetables through commercial transparent films.
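A minimal sketch of this kind of spectral pre-processing chain (Savitzky–Golay smoothing, Standard Normal Variate, then PCA) on toy spectra; the window length, polynomial order and number of components are illustrative assumptions:

# Sketch: spectral pre-processing (Savitzky-Golay smoothing, Standard Normal
# Variate) followed by PCA, applied to toy pixel spectra. Window length,
# polynomial order and component count are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.normal(0.5, 0.05, (300, 120))      # 300 pixels x 120 wavelengths (toy)

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)

# Standard Normal Variate: centre and scale each spectrum individually.
snv = (smoothed - smoothed.mean(axis=1, keepdims=True)) / smoothed.std(axis=1, keepdims=True)

scores = PCA(n_components=3).fit_transform(snv)  # score "images" come from these values
print("scores shape:", scores.shape)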
Abstract:
We address a cognitive radio scenario where a number of secondary users perform identification of which primary user, if any, is transmitting, in a distributed way and using limited location information. We propose two fully distributed algorithms: the first is a direct identification scheme, while in the other a distributed sub-optimal detection stage, based on a simplified Neyman-Pearson energy detector, precedes the identification scheme. Both algorithms are studied analytically in a realistic transmission scenario, and the advantage obtained by detection pre-processing is also verified via simulation. Finally, we give details of their fully distributed implementation via consensus averaging algorithms.
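A minimal sketch of consensus averaging, the building block mentioned for the distributed implementation; the communication graph, Metropolis weights and local measurements are illustrative assumptions:

# Sketch: distributed consensus averaging over a fixed communication graph,
# using Metropolis weights. The graph and the local measurements are toy
# assumptions; every node converges to the network-wide average.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]    # assumed communication links
n = 4
degree = np.zeros(n)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1

W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1.0 + max(degree[i], degree[j]))   # Metropolis weights
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

x = np.array([3.1, 0.2, 2.7, 0.4])                  # local measurements (e.g. energies)
average = x.mean()
for _ in range(100):
    x = W @ x                                       # each node mixes with its neighbours
print("consensus state:", x, "true average:", average)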
Abstract:
A basic requirement of the data acquisition systems used in long pulse fusion experiments is the real-time detection of physical events in signals. Developing such applications is usually a complex task, so it is necessary to develop a set of hardware and software tools that simplify their implementation. This type of application can be implemented in ITER using fast controllers. ITER is standardizing the architectures to be used for fast controller implementation; until now, the standards chosen are PXIe architectures (based on PCIe) for the hardware and EPICS middleware for the software. This work presents the methodology for implementing data acquisition and pre-processing using FPGA-based DAQ cards and how to integrate them into fast controllers using EPICS.
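As a generic, host-side illustration of event detection in an acquired signal (not the actual FPGA/EPICS implementation described above), a simple threshold-crossing detector with a hold-off period might look as follows; the signal, threshold and hold-off are toy assumptions:

# Sketch: a generic threshold-crossing event detector of the kind that
# could run as a pre-processing step on an acquired waveform. The signal,
# threshold and hold-off are toy assumptions.
import numpy as np

def detect_events(signal, threshold, holdoff):
    """Return sample indices where the signal rises above the threshold."""
    above = signal > threshold
    rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    events, last = [], -holdoff
    for idx in rising:
        if idx - last >= holdoff:                # ignore re-triggers within hold-off
            events.append(int(idx))
            last = idx
    return events

rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, 5000)
signal[[1200, 1210, 3500]] += 2.0                # injected "physical events"
print(detect_events(signal, threshold=1.0, holdoff=50))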
Abstract:
Thermorheological changes in high hydrostatic pressure (HHP)-treated chickpea flour (CF) slurries were studied as a function of pressure level (0.1, 150, 300, 400, and 600 MPa) and slurry concentration (1:5, 1:4, 1:3, and 1:2 flour-to-water ratios). HHP-treated slurries were subsequently analyzed for changes in properties produced by heating, under both isothermal and non-isothermal processes. The elasticity (G′) of the pressurized slurry increased with the pressure applied and with concentration. Conversely, the heat-induced CF paste gradually transformed from solid-like to liquid-like behavior as a function of moisture content and pressure level. The G′ and enthalpy of the CF paste decreased with increasing pressure level, in proportion to the extent of HHP-induced starch gelatinization. At 25 °C and 15 min, HHP treatment at 450 and 600 MPa was sufficient to complete gelatinization of the CF slurry at the lowest concentration (1:5), while more concentrated slurries would require higher pressures and temperatures during treatment, or longer holding times. Industrial relevance: Demand for chickpea gel has increased considerably in the health and food industries because of its many beneficial effects. However, its use is limited by its very difficult handling. Judicious application of high hydrostatic pressure (HHP) at appropriate levels, adopted as a pre-processing step in combination with heating processes, is presented as an innovative technology to produce a remarkable decrease in the thermo-hardening of heat-induced chickpea flour paste, permitting the development of new chickpea-based products with desirable handling properties and sensory attributes.
Abstract:
Aerodynamic design influences several aspects of high-speed train performance to a very significant degree. Considering also that new aerodynamic problems have arisen with the increase in cruise speed and the reduced weight of the vehicle, the interest of an optimization study concerning train aerodynamics is evident. Thus, the aerodynamic optimization of the nose shape of a high-speed train is presented in this thesis. The optimization is based on advanced optimization methods; among these, genetic algorithms and the adjoint method have been selected. A theoretical description of their basis, characteristics and implementation is detailed throughout the thesis, explaining the reasons for their selection and the advantages and drawbacks of their application. Genetic algorithms require a geometrical parameterization of every optimal candidate and the generation of a metamodel, or surrogate model, that complements the optimization process. These points are addressed with special attention in the first block of the thesis, focused on the methodology followed in this study. The second block concerns the application of these methods to the optimization of the aerodynamic performance of a high-speed train in several scenarios. These scenarios cover the most representative operating conditions of high-speed trains, as well as some of the most demanding aerodynamic problems they face: front-wind and cross-wind situations in open air, and the entrance of a high-speed train into a tunnel. Both genetic algorithms and the adjoint method have been applied to the minimization of the aerodynamic drag on the train under front wind in open air. The comparison of these methods makes it possible to evaluate the methodology and computational cost of each one, as well as the resulting reduction in aerodynamic drag. Simplicity and robustness, the straightforward treatment of multi-objective optimization, and the capability of finding a global optimum in a multi-modal design space are the main attributes of genetic algorithms. However, the requirement to geometrically parameterize every optimal candidate is a significant drawback that is avoided with the adjoint method; this independence from the number of design variables leads to a relevant reduction of the pre-processing and computational cost. Considering cross-wind stability, both methods are used again for the minimization of the side force. In this case, a simplified geometric parameterization of the train nose is adopted, which dramatically reduces the computational cost of the optimization process while still describing the most relevant geometrical characteristics of a high-speed train. This analysis identifies and quantifies the influence of each design variable on the side force on the train. It is observed that the design of the windward A-pillar roundness is fundamental, with a more important effect than the nose length or the train cross-section area. Finally, a third scenario is considered in order to validate these methods and their capability of finding a global optimum. The entrance of a train into a tunnel is one of the most demanding aerodynamic problems for a high-speed train, because of the pressure peak generated, which affects passenger comfort, vehicle stability and the surroundings of the tunnel exit. In addition to this problem, a second objective to be minimized is the aerodynamic drag, which is notably higher than in open air. The aerodynamic consequences of a high-speed train running in a tunnel can thus be summarized in two correlated phenomena: the generation of pressure waves and an increase in aerodynamic drag. This multi-objective optimization problem is solved with genetic algorithms, yielding a Pareto front that contains the set of optimal solutions minimizing both objectives.
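A small sketch of how a Pareto front (the non-dominated set) can be extracted from a population evaluated on two objectives to be minimized; the candidate values are random toy data, not CFD results:

# Sketch: extracting the Pareto front (non-dominated set) from a population
# evaluated on two objectives to be minimized, e.g. a pressure-peak proxy
# and a drag proxy. The candidate values are random toy data.
import numpy as np

def pareto_front(costs):
    """Return a boolean mask of non-dominated rows for minimization."""
    n = costs.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not non_dominated[i]:
            continue
        dominated = np.all(costs >= costs[i], axis=1) & np.any(costs > costs[i], axis=1)
        non_dominated[dominated] = False
    return non_dominated

rng = np.random.default_rng(0)
population = rng.random((50, 2))       # columns: [pressure-peak proxy, drag proxy]
mask = pareto_front(population)
print("Pareto-optimal candidates:", np.flatnonzero(mask))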
Abstract:
It has been demonstrated that rating the trust and reputation of individual nodes is an effective approach in distributed environments to improve security, support decision-making and promote node collaboration. Nevertheless, these systems are vulnerable to deliberate false or unfair testimonies. In one scenario, the attackers collude to give negative feedback on the victim in order to lower or destroy its reputation; this attack is known as a bad-mouthing attack. In another scenario, a number of entities agree to give positive feedback on an entity (often with adversarial intentions); this attack is known as ballot stuffing. Both attack types can significantly deteriorate the performance of the network. The existing solutions for coping with these attacks are mainly concentrated on prevention techniques. In this work, we propose a solution that detects and isolates the abovementioned attackers, thus preventing them from further spreading their malicious activity. The approach is based on detecting outliers using clustering, in this case self-organizing maps. An important advantage of this approach is that there are no restrictions on the training data, and thus no need for any data pre-processing. Testing results demonstrate the capability of the approach to detect both bad-mouthing and ballot-stuffing attacks in various scenarios.
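A minimal sketch of one simple variant of SOM-based outlier detection, flagging feedback profiles that lie far from their best-matching unit; it uses the third-party minisom package, and the feature construction, map size, training set and percentile cut-off are illustrative assumptions rather than the scheme proposed in the work:

# Sketch: outlier detection with a self-organizing map, flagging feedback
# profiles whose distance to their best-matching unit is unusually large.
# Uses the third-party minisom package; features, map size, training set
# and the 95th-percentile cut-off are illustrative assumptions.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
honest = rng.normal(0.8, 0.05, (200, 5))        # typical feedback profiles
liars = rng.normal(0.1, 0.05, (10, 5))          # colluding bad-mouthing profiles
ratings = np.vstack([honest, liars])

som = MiniSom(6, 6, 5, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(honest, 2000)                  # train on a reference set of behaviour

weights = som.get_weights()
errors = np.array([np.linalg.norm(x - weights[som.winner(x)]) for x in ratings])
outliers = np.flatnonzero(errors > np.quantile(errors, 0.95))
print("flagged profiles:", outliers)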