918 results for Image pre-processing


Relevance: 80.00%

Abstract:

Skull-stripping (or brain extraction) is an important pre-processing step in neuroimage analysis. This document describes a skull-stripping filter, itk::StripTsImageFilter, implemented using the Insight Toolkit (ITK). It is a composite filter built from existing ITK classes and was designed with usability, robustness, speed and versatility in mind, rather than accuracy, which makes it useful for many pre-processing tasks in neuroimage analysis. The paper is accompanied by the source code, input data and a testing environment.
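The general idea of robustness-oriented brain extraction can be illustrated with a crude intensity-and-morphology pipeline. The sketch below is a simplified NumPy/SciPy stand-in, not the actual itk::StripTsImageFilter algorithm; the default threshold and the structuring choices are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def rough_brain_mask(volume, threshold=None):
    """Illustrative skull-stripping sketch: threshold the volume, keep the
    largest connected component, then close small holes. This mimics the
    general idea of fast, robust brain extraction, not the actual
    itk::StripTsImageFilter algorithm."""
    if threshold is None:
        threshold = volume.mean()          # crude global intensity threshold
    fg = volume > threshold
    labels, n = ndimage.label(fg)          # connected components
    if n == 0:
        return np.zeros_like(fg)
    sizes = ndimage.sum(fg, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    return ndimage.binary_closing(largest, iterations=2)
```

A real filter would add atlas registration or adaptive thresholding; this version only shows why "largest bright component" is a reasonable first approximation of the brain.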

Relevance: 80.00%

Abstract:

Manual counting of bacterial colony-forming units (CFUs) on agar plates is laborious and error-prone. We therefore implemented a colony counting system with a novel segmentation algorithm to discriminate bacterial colonies from blood and other agar plates. Colony counter hardware was designed, and a novel segmentation algorithm was written in MATLAB. In brief, pre-processing with top-hat filtering to obtain a uniform background was followed by the segmentation step, during which the colony images were extracted from the blood agar and individual colonies were separated. A Bayes classifier was then applied to count the final number of bacterial colonies, since some colonies could still be merged into larger groups. To assess the accuracy and performance of the colony counter, we tested automated counting on different agar plates with known CFU numbers of S. pneumoniae, P. aeruginosa and M. catarrhalis and found excellent performance.
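The pre-processing and counting steps described above (top-hat background flattening, thresholding, connected-component labelling) can be sketched as follows. This is an illustrative Python version, not the authors' MATLAB implementation, and it omits the Bayes classifier used to split merged colonies; the `size` and `threshold` parameters are assumed values.

```python
import numpy as np
from scipy import ndimage

def count_colonies(image, size=5, threshold=0.5):
    """Count bright blobs on an uneven background:
    1. white top-hat filtering removes the slowly varying background,
    2. thresholding segments candidate colonies,
    3. connected-component labelling counts them."""
    flat = ndimage.white_tophat(image, size=size)  # uniform background
    mask = flat > threshold                        # segment colonies
    _, n = ndimage.label(mask)                     # count components
    return n
```

Merged colonies would appear as one component here, which is exactly why the authors add a classifier on top of the labelling step.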

Relevance: 80.00%

Abstract:

Lake water temperature (LWT) is an important driver of lake ecosystems and has been identified as an indicator of climate change. Consequently, the Global Climate Observing System (GCOS) lists LWT as an essential climate variable. Although long in situ LWT time series exist for some European lakes, many lakes are observed only irregularly or not at all, making the available observations insufficient for climate monitoring. Satellite data can provide the information needed; however, only a few satellite sensors offer the possibility to analyse time series covering 25 years or more. The Advanced Very High Resolution Radiometer (AVHRR) is among these, having flown as a heritage instrument for almost 35 years, and it will continue to fly for at least ten more years, offering a unique opportunity for satellite-based climate studies. Here we present a satellite-based lake surface water temperature (LSWT) data set for European water bodies in or near the Alps, based on the extensive AVHRR 1 km data record (1989–2013) of the Remote Sensing Research Group at the University of Bern. It has been compiled from AVHRR/2 (NOAA-07, -09, -11, -14) and AVHRR/3 (NOAA-16, -17, -18, -19 and MetOp-A) data. The high accuracy needed for climate-related studies requires careful pre-processing and consideration of the atmospheric state. The LSWT retrieval is based on a simulation-based scheme using the Radiative Transfer for TOVS (RTTOV) Version 10 together with ERA-Interim reanalysis data from the European Centre for Medium-range Weather Forecasts. The resulting LSWTs were extensively compared with in situ measurements from lakes of various sizes between 14 and 580 km², and the resulting biases and RMSEs were found to be within the ranges of −0.5 to 0.6 K and 1.0 to 1.6 K, respectively.
The upper limits of the reported errors can be attributed more to uncertainties in the comparison between in situ and satellite observations than to inaccuracies of the satellite retrieval. An inter-comparison with the standard Moderate-resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature product exhibits RMSEs and biases in the ranges of 0.6 to 0.9 K and −0.5 to 0.2 K, respectively. The cross-platform consistency of the retrieval was found to be within ~0.3 K. For one lake, the satellite-derived trend was compared with the trend of the in situ measurements and both were found to be similar; orbital drift therefore does not introduce artificial temperature trends into the data set. A comparison with LSWT derived using global sea surface temperature (SST) algorithms shows lower RMSEs and biases for the simulation-based approach. An ongoing project will apply the developed method to retrieve LSWT for all of Europe to derive the climate signal of the last 30 years. The data are available at doi:10.1594/PANGAEA.831007.
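As background, the classic split-window idea behind many AVHRR surface-temperature retrievals combines the 11 and 12 µm brightness temperatures, using their difference as a proxy for atmospheric water-vapour absorption. The sketch below shows only this generic form with purely illustrative coefficients; the study above instead derives its retrieval from RTTOV radiative-transfer simulations and ERA-Interim atmospheric profiles.

```python
def split_window_lswt(t11, t12, a0=1.5, a1=1.0, a2=2.0):
    """Generic split-window surface-temperature estimate (in K) from the
    AVHRR 11 um (t11) and 12 um (t12) brightness temperatures. The
    coefficients a0, a1, a2 are purely illustrative placeholders, not
    values from the study above."""
    return a0 + a1 * t11 + a2 * (t11 - t12)
```

For example, brightness temperatures of 290.0 K and 288.5 K yield an estimate of 294.5 K with these placeholder coefficients.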

Relevance: 80.00%

Abstract:

BACKGROUND: Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, such as a social security number, is not available, or non-unique person-identifiable information, such as names, is privacy-protected and cannot be accessed. One solution for protecting privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, the encrypted hash codes of two names differ completely even if the plain names differ by only a single character, so standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS: The P3RL method applies a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. It consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of the variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. the data structure) needed to create these templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with the encrypted person-identifiable information and the plain non-sensitive variables.
RESULTS: In this paper we describe step by step how to link existing health-related data using encryption methods that preserve the privacy of the persons in the study. CONCLUSION: Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. The method is suitable not only for epidemiological research but for any setting with similar challenges.
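The Bloom-filter encoding step can be illustrated as follows: each name is decomposed into character bigrams, each bigram sets a few bits, and a Dice coefficient over the bit sets approximates the similarity of the underlying plain names. Similar names share bigrams and therefore overlapping bits, which is why similarity survives encryption. The filter length, number of hash functions and salting scheme below are assumptions for illustration, not the P3RL parameters.

```python
import hashlib

def bloom_encode(name, m=100, k=4):
    """Encode a string as the set of Bloom-filter bit positions set by its
    character bigrams. m is the filter length, k the number of (salted)
    hash functions per bigram; both are illustrative choices."""
    bits = set()
    padded = f"_{name.lower()}_"            # pad so edge characters count
    for i in range(len(padded) - 1):
        bigram = padded[i:i + 2]
        for j in range(k):                  # k hash functions via salting
            h = hashlib.sha256(f"{j}:{bigram}".encode()).hexdigest()
            bits.add(int(h, 16) % m)
    return bits

def dice_similarity(a, b):
    """Dice coefficient of two Bloom filters: 2*|A & B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))
```

A single-character difference ("miller" vs "muller") still leaves most bigrams, and hence most bits, in common, whereas two unrelated names share almost none.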

Relevance: 80.00%

Abstract:

In the framework of the ACTRIS (Aerosols, Clouds, and Trace Gases Research Infrastructure Network) summer 2012 measurement campaign (8 June–17 July 2012), EARLINET organized and performed a controlled feasibility exercise to demonstrate its potential to perform operational, coordinated measurements and to deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later, on 12 July at 06:00 UT. For the first time, the Single Calculus Chain (SCC), the common processing chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products, was used. All stations sent measurements of 1 h duration to the SCC server in real time, in a predefined NetCDF file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98% and 79% of the files sent to the SCC were successfully pre-processed and processed, respectively; these percentages are quite high considering that no cloud screening was performed on the lidar data. The paper draws present and future SCC users' attention to the most critical parameters of the SCC product configuration and their possible optimal values, but also to the limitations inherent in the raw data. The continuous use of SCC direct and derived products in heterogeneous conditions is used to demonstrate two potential applications of the EARLINET infrastructure: the monitoring of a Saharan dust intrusion event and the evaluation of two dust transport models. The efforts made to define the measurement protocol and to configure the SCC properly pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modeling, climate research, and the calibration/validation activities of spaceborne observations.

Relevance: 80.00%

Abstract:

IBAMar (http://www.ba.ieo.es/ibamar) is a regional database that brings together all the physical and biochemical data obtained by multiparametric probes (CTDs equipped with different sensors) during the cruises managed by the Balearic Center of the Spanish Institute of Oceanography (COB-IEO). It has recently been extended to include data obtained from classical hydro casts using oceanographic Niskin or Nansen bottles. The result is a database with a main core of hydrographic data: temperature (T), salinity (S), dissolved oxygen (DO), fluorescence and turbidity, complemented by biochemical data: dissolved inorganic nutrients (phosphate, nitrate, nitrite and silicate) and chlorophyll-a. Different technologies and methodologies were used by different teams over the four decades of data sampling at the COB-IEO. Despite this, the data have been reprocessed using the same protocols, and a standard quality control (QC) has been applied to each variable, so the database provides homogeneous, good-quality regional data. Data acquisition and QC: 94% of the data come from Sbe911 and Sbe25 CTDs. S and DO were calibrated on board using water samples whenever a rosette sampler was available (70% of the cases). All data from Seabird CTDs were reviewed and post-processed with the software provided by Sea-Bird Electronics, and averaged to a 1 dbar vertical resolution. The general sampling methodology and pre-processing are described at https://ibamardatabase.wordpress.com/home/. Manual QC includes visual checks of metadata, duplicate data and outliers. Automatic QC includes a range check on each variable by area (north of the Balearic Islands, south of the BI, and the Alboran Sea) and depth (27 standard levels), a check for spikes and a check for density inversions. Nutrient QC includes a preliminary control and a range check on the observed levels to detect outliers around objectively analyzed data fields.
A quality flag is assigned as an integer number, depending on the result of each QC check.
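Automatic checks of the kind listed above can be sketched as simple vectorized tests on a profile. The flag convention (1 = good, 4 = bad), the neighbour-mean spike test and the thresholds below are illustrative assumptions, not the database's actual QC procedure.

```python
import numpy as np

def qc_flags(values, valid_range, spike_threshold=3.0):
    """Assign integer QC flags to a 1-D profile:
    - range check: values outside the regional limits are flagged,
    - spike check: interior points far from the mean of their two
      neighbours are flagged.
    Flag convention assumed here: 1 = good, 4 = bad."""
    v = np.asarray(values, dtype=float)
    flags = np.ones(v.size, dtype=int)
    lo, hi = valid_range
    flags[(v < lo) | (v > hi)] = 4                     # range check
    interior = v[1:-1]
    neighbour_mean = 0.5 * (v[:-2] + v[2:])
    spikes = np.abs(interior - neighbour_mean) > spike_threshold
    flags[1:-1][spikes] = 4                            # spike check
    return flags
```

A density-inversion check would work the same way, flagging levels where computed density decreases with depth.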

Relevance: 80.00%

Abstract:

In this study, retrievals of medium resolution imaging spectrometer (MERIS) reflectances and water quality products using four different, freely available coastal processing algorithms are assessed by comparison against sea-truthing data. The study is based on a pair-wise comparison using processor-dependent quality flags for the retrieval of valid common macro-pixels. This assessment is required in order to ensure the reliability of monitoring systems based on MERIS data, such as the Swedish coastal and lake monitoring system (http.vattenkvalitet.se). The results show that pre-processing with the Improved Contrast between Ocean and Land (ICOL) processor, which corrects for adjacency effects, improves the retrieval of spectral reflectance for all processors; it is therefore recommended that the ICOL processor be applied when Baltic coastal waters are investigated. Chlorophyll was retrieved best with the FUB (Free University of Berlin) processing algorithm, although overestimations in the range of 18-26.5%, depending on the compared pairs, were obtained. At low chlorophyll concentrations (< 2.5 mg/m³), random errors dominated the retrievals with the MEGS (MERIS ground segment) processor. The lowest bias and random errors were obtained with MEGS for suspended particulate matter, for which overestimations in the range of 8-16% were found. Only the FUB-retrieved CDOM (Coloured Dissolved Organic Matter) correlates with in situ values; however, a large systematic underestimation appears in the estimates, which may nevertheless be corrected by using a local correction factor. MEGS has the potential to be used as an operational processing algorithm for the Himmerfjärden bay and adjacent areas, but it requires further improvement of the atmospheric correction for the blue bands and better definition at relatively low chlorophyll concentrations in the presence of high CDOM attenuation.
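Pair-wise assessments like this one rest on two matchup statistics, mean bias and RMSE, computed over the valid common macro-pixels. A minimal sketch:

```python
import numpy as np

def bias_and_rmse(retrieved, in_situ):
    """Matchup statistics for a retrieval validation: mean bias
    (retrieved minus in situ) and root-mean-square error."""
    d = np.asarray(retrieved, dtype=float) - np.asarray(in_situ, dtype=float)
    return d.mean(), np.sqrt((d ** 2).mean())
```

Bias captures systematic over- or underestimation (e.g. the FUB chlorophyll overestimation above), while RMSE also absorbs the random errors that dominate MEGS retrievals at low concentrations.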

Relevance: 80.00%

Abstract:

Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western world. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was performed by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays, and the results were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing step to ensure data quality (missing-value imputation, probe quality checks, data smoothing and intraclass variability filtering), the final dataset comprised a total of 8104 probes. Next, a supervised classification approach was applied to obtain the most relevant genes, two of which are directly involved in cancer progression, and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values between 0.955 and 0.997).
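The AUC values reported above can be computed directly from classifier scores with the Mann-Whitney rank formulation: the AUC is the probability that a randomly chosen positive (tumoral) sample receives a higher score than a randomly chosen negative one. The sketch below is a generic implementation, not the classifiers used in the study.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic.
    labels: 1 for positive (tumoral), 0 for negative (non-tumoral).
    Ties among scores are handled with midranks."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):            # midranks for tied scores
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    rank_sum = ranks[labels == 1].sum()
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 1.0 means perfect separation of the two groups; 0.5 means the scores are uninformative.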


Relevance: 80.00%

Abstract:

We address a cognitive radio scenario in which a number of secondary users identify, in a distributed way and using limited location information, which primary user, if any, is transmitting. We propose two fully distributed algorithms: the first is a direct identification scheme, while in the other a distributed sub-optimal detection step, based on a simplified Neyman-Pearson energy detector, precedes the identification scheme. Both algorithms are studied analytically in a realistic transmission scenario, and the advantage obtained by the detection pre-processing is also verified via simulation. Finally, we give details of their fully distributed implementation via consensus averaging algorithms.
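The consensus averaging building block mentioned last can be sketched as follows: each node repeatedly moves its local estimate toward those of its neighbours, and on a connected graph all estimates converge to the network-wide average without any central coordinator. The step size and example graph below are illustrative choices.

```python
import numpy as np

def consensus_average(values, neighbours, steps=200, eps=0.2):
    """Distributed consensus averaging: at each step every node i
    updates x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i).
    On a connected graph with a suitable eps, all x_i converge to the
    average of the initial values."""
    x = np.asarray(values, dtype=float).copy()
    for _ in range(steps):
        new_x = x.copy()
        for i, nbrs in neighbours.items():
            new_x[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
        x = new_x
    return x
```

In the detection setting above, the values being averaged would be local energy measurements, so every node ends up with the network-wide statistic.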

Relevance: 80.00%

Abstract:

A basic requirement of the data acquisition systems used in long-pulse fusion experiments is real-time detection of physical events in signals. Developing such applications is usually a complex task, so it is necessary to develop a set of hardware and software tools that simplify their implementation. In ITER, this type of application can be implemented using fast controllers, and ITER is standardizing the architectures to be used for fast controller implementation. So far, the chosen standards are PXIe architectures (based on PCIe) for the hardware and the EPICS middleware for the software. This work presents a methodology for implementing data acquisition and pre-processing using FPGA-based DAQ cards, and for integrating these in fast controllers using EPICS.
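A minimal form of the event detection such pre-processing performs is upward threshold crossing. The Python sketch below is only a behavioural illustration; in the fast-controller architecture described above, this logic would run in the FPGA data path rather than in software.

```python
import numpy as np

def detect_events(signal, threshold):
    """Report the sample indices where the signal crosses the threshold
    upwards, i.e. where sample k is at or above the threshold while
    sample k-1 was below it."""
    s = np.asarray(signal, dtype=float)
    above = s >= threshold
    rising = above[1:] & ~above[:-1]       # below -> at/above transitions
    return (np.nonzero(rising)[0] + 1).tolist()
```

Real detectors add debouncing, hysteresis and timestamping, but the comparator-plus-edge structure is the same.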

Relevance: 80.00%

Abstract:

The aim of this project is the development of a MIDI interface, based on digital image processing techniques, capable of controlling several parameters of an audio software package using gestural information: the movement of the hands. The image is captured by a commercial Kinect camera and the resulting data are processed in real time. The goal is to convert the positions of several control points on the body into MIDI musical control information. The interface was developed in the Processing programming language and environment, which is Java-based, freely available and easy to use. The audio software selected is Ableton Live, version 8.2.2, chosen because it is useful both for music composition and for live performance, the latter being the main intended use of the interface. The project is divided into two main blocks: the graphic design of the controller, and the management of the musical information. The first block justifies the design of the controller, which consists of virtual buttons, explaining how it works and, briefly, the function of each button; the latter topic is covered in detail in Annex II: user manual. The second block explains the path the MIDI information takes from the gestural processor to the musical synthesizer. This path begins in Processing, from which messages are sent that are later interpreted by the selected sequencer, Ableton Live. After a detailed account of the development of the project, the author's conclusions are presented, including the pros and cons to take into account in order to make the most of the controller, together with possible future lines of work. A budget, broken down into material and personnel costs, is also provided.
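The MIDI messages travelling from the gestural processor to the sequencer are just short byte sequences. As an illustration of the wire format (not the project's actual Processing code), a Control Change message, the kind typically used to map a virtual button or hand position to a Live parameter, can be assembled as:

```python
def midi_control_change(channel, controller, value):
    """Build the three raw bytes of a MIDI Control Change message.
    Status byte 0xB0 selects Control Change; its low nibble carries the
    channel (0-15). Controller number and value are 7-bit (0-127)."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("out-of-range MIDI field")
    return bytes([0xB0 | channel, controller, value])
```

For example, sending controller 7 (conventionally volume) with value 100 on channel 0 produces the bytes B0 07 64.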

Relevance: 80.00%

Abstract:

Thermorheological changes in high hydrostatic pressure (HHP)-treated chickpea flour (CF) slurries were studied as a function of pressure level (0.1, 150, 300, 400, and 600 MPa) and slurry concentration (1:5, 1:4, 1:3, and 1:2 flour-to-water ratios). HHP-treated slurries were subsequently analyzed for changes in properties produced by heating, under both isothermal and non-isothermal processes. The elasticity (G′) of the pressurized slurry increased with the pressure applied and with concentration. Conversely, the heat-induced CF paste gradually transformed from solid-like to liquid-like behavior as a function of moisture content and pressure level. The G′ and enthalpy of the CF paste decreased with increasing pressure level, in proportion to the extent of HHP-induced starch gelatinization. At 25 °C and 15 min, HHP treatment at 450 and 600 MPa was sufficient to complete gelatinization of the CF slurry at the lowest concentration (1:5), while more concentrated slurries would require higher pressures and temperatures during treatment, or longer holding times. Industrial relevance: Demand for chickpea gel has increased considerably in the health and food industries because of its many beneficial effects; however, its use is hampered by its very difficult handling. Judicious application of HHP at appropriate levels, adopted as a pre-processing step in combination with heating processes, is presented as an innovative technology to produce a remarkable decrease in the thermo-hardening of heat-induced chickpea flour paste, permitting the development of new chickpea-based products with desirable handling properties and sensory attributes.

Relevance: 80.00%

Abstract:

Aerodynamic design influences several aspects of high-speed train performance to a very significant degree. Considering also that new aerodynamic problems have arisen with the increase of cruise speed and the reduction of vehicle weight, an optimization study addressing these points is clearly of interest. In this context, this thesis presents the aerodynamic optimization of the nose shape of a high-speed train, carried out using advanced optimization methods. Among these methods, genetic algorithms and the adjoint method were selected as the optimization tools. Their conceptual basis, characteristics and implementation are detailed throughout the thesis, explaining the reasons for their selection and the advantages and drawbacks each of them implies. Genetic algorithms in turn require a geometric parameterization of the optimal candidates and the generation of a surrogate model that complements the optimization method. These points are addressed in the first block of the thesis, which focuses on the methodology followed in this study. The second block concerns the application of these methods to optimize the aerodynamic performance of the train in several scenarios. These scenarios cover the most representative operating conditions of high-speed trains, as well as some of the most demanding aerodynamic problems they face: front-wind and crosswind situations in open air, and the entrance of the train into a tunnel. For the front-wind case in open air, both methods were applied to minimize the aerodynamic drag, allowing a comparison of the methodologies, of the computational cost associated with each one, and of the drag reduction achieved. Simplicity and robustness, the straightforward handling of multi-objective problems, and the capability of finding a global optimum in a multi-modal design space are the main attributes of genetic algorithms.
However, the requirement of geometrically parameterizing every optimal candidate is a significant drawback that is avoided with the adjoint method, whose independence of the number of design variables leads to a relevant reduction of the pre-processing effort and computational cost. For crosswind stability, both methods were used again to minimize the side force. In this case, a simplified geometric parameterization of the train nose was adopted, which dramatically reduces the computational cost of the optimization process while still capturing the most relevant geometric characteristics of a high-speed train. This analysis identified and quantified the influence of each design variable on the side force, showing that the design of the upper windward edge (the A-pillar roundness) is the most influential parameter, with a greater effect than the nose length or the train cross-section area. Finally, a third scenario was considered to validate these methods and their capability of finding a global optimum. The entrance of a train into a tunnel is one of the most demanding aerodynamic problems a high-speed train faces: it generates a pressure peak that affects passenger comfort, vehicle stability and the surroundings of the tunnel exit, and the aerodynamic drag is considerably higher than in open air. This multi-objective optimization problem, minimizing both the pressure wave and the drag, is solved with genetic algorithms. The result is a Pareto front containing the set of optimal solutions that minimize both objectives.
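A bare-bones version of the genetic algorithm used as one of the two optimization tools can be sketched as follows. The operators (tournament selection, blend crossover, Gaussian mutation), population size and the cheap analytic test objective are illustrative choices; the thesis couples the GA with CFD evaluations and a surrogate model rather than an analytic function.

```python
import numpy as np

def genetic_minimize(f, bounds, pop_size=30, generations=60, seed=0):
    """Real-coded genetic algorithm minimizing a scalar objective f over
    box bounds [lo, hi]: tournament selection of parents, blend
    crossover, Gaussian mutation, clipping to the design space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        children = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)     # tournament 1
            a = pop[i] if fitness[i] < fitness[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)     # tournament 2
            b = pop[i] if fitness[i] < fitness[j] else pop[j]
            w = rng.uniform(size=lo.size)             # blend crossover
            child = w * a + (1 - w) * b
            child += rng.normal(0.0, 0.1, size=lo.size)  # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    fitness = np.array([f(ind) for ind in pop])
    return pop[fitness.argmin()]
```

In the multi-objective tunnel-entry case, the scalar comparison would be replaced by Pareto dominance, yielding a front of trade-off solutions instead of a single optimum.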

Relevance: 80.00%

Abstract:

It has been demonstrated that rating the trust and reputation of individual nodes is an effective approach for improving security, supporting decision-making and promoting node collaboration in distributed environments. Nevertheless, these systems are vulnerable to deliberate false or unfair testimonies. In one scenario, attackers collude to give negative feedback on a victim in order to lower or destroy its reputation; this attack is known as bad-mouthing. In another scenario, a number of entities agree to give positive feedback on an entity, often with adversarial intentions; this attack is known as ballot stuffing. Both attack types can significantly degrade the performance of the network. Existing solutions for coping with these attacks concentrate mainly on prevention techniques. In this work, we propose a solution that detects and isolates such attackers, thereby preventing them from spreading their malicious activity further. The approach is based on detecting outliers using clustering, in this case self-organizing maps. An important advantage of this approach is that it imposes no restrictions on the training data, so no data pre-processing is needed. Testing results demonstrate the capability of the approach to detect both bad-mouthing and ballot-stuffing attacks in various scenarios.
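The outlier-detection idea can be sketched with a tiny self-organizing map: units are trained on feedback patterns, and a sample whose quantization error (distance to its best-matching unit) is unusually large is flagged as an outlier, e.g. a colluding bad-mouthing rater. The map size, training schedule and deterministic initialization below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(data, n_units=4, epochs=100, lr=0.5):
    """Minimal one-dimensional self-organizing map. Each sample pulls its
    best-matching unit (and, early in training, that unit's immediate
    neighbours on the 1-D lattice) toward itself, with a decaying
    learning rate. Units are initialized on the first samples."""
    units = data[:n_units].astype(float).copy()
    for epoch in range(epochs):
        radius = 1 if epoch < epochs // 2 else 0   # shrinking neighbourhood
        alpha = lr * (1 - epoch / epochs)          # decaying learning rate
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(units - x, axis=1)))
            for u in range(len(units)):
                if abs(u - bmu) <= radius:
                    units[u] += alpha * (x - units[u])
    return units

def quantization_error(units, x):
    """Distance from a sample to its best-matching unit; unusually large
    values indicate outliers."""
    return float(np.min(np.linalg.norm(units - x, axis=1)))
```

Training on normal feedback only, a sample from either normal cluster sits close to some unit, while a fabricated rating far from all learned prototypes produces a large quantization error and is flagged.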