13 results for principal components analysis (PCA) algorithm

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

FBGs are excellent strain sensors because of their small size and multiplexing capability. Tens to hundreds of sensors may be embedded into a structure, as has already been demonstrated. Nevertheless, they only afford strain measurements at local points, so unless the damage affects the strain readings in a distinguishable manner, it will go undetected. This paper shows the experimental results obtained on the wing of a UAV, instrumented with 32 FBGs, before and after small damages were introduced. The PCA algorithm was able to distinguish the damage cases, even for small cracks. Principal Component Analysis (PCA) is a multivariable analysis technique that reduces a complex data set to a lower dimension and reveals the hidden patterns that underlie it.
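The abstract does not specify the damage index used, but a standard way to turn PCA into a damage detector is the residual (Q/SPE) statistic: fit PCA on healthy strain data and flag measurements whose residual outside the retained subspace is abnormally large. A minimal sketch on synthetic strain data (sensor count from the paper; everything else invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strain data: rows are load cases, columns the 32 FBG readings.
# Healthy strains are correlated (a few load patterns drive all sensors).
loads = rng.normal(size=(200, 3))
patterns = rng.normal(size=(3, 32))
baseline = loads @ patterns + 0.1 * rng.normal(size=(200, 32))

damaged = baseline[:20].copy()
damaged[:, 5] += 1.0                  # damage perturbs the strain at one sensor

# Fit PCA on the healthy baseline (mean-centre, then SVD).
mean = baseline.mean(axis=0)
_, _, Vt = np.linalg.svd(baseline - mean, full_matrices=False)
P = Vt[:3].T                          # retained loadings, shape (32, 3)

def q_statistic(samples):
    """Squared residual (SPE/Q) after projecting onto the retained PCs."""
    Xc = samples - mean
    residual = Xc - Xc @ P @ P.T
    return np.sum(residual ** 2, axis=1)

# Damage breaks the healthy correlation structure, inflating Q.
threshold = np.percentile(q_statistic(baseline), 99)
flagged = np.mean(q_statistic(damaged) > threshold)
print(flagged)
```

The key design choice is that the detector is trained only on healthy data, so no damage examples are needed in advance.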

Relevance:

100.00%

Publisher:

Abstract:

The use of a common environment for processing different powder foods in industry has increased the risk of finding peanut traces in powder foods. The analytical methods commonly used for peanut detection, such as enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (RT-PCR), offer high specificity and sensitivity but are destructive and time-consuming, and require highly skilled experimenters. The feasibility of NIR hyperspectral imaging (HSI) is studied for the detection of peanut traces down to 0.01% by weight. A principal component analysis (PCA) was carried out on a data set of peanut and flour spectra, and the obtained loadings were applied to HSI images of wheat flour samples adulterated with peanut traces. As a result, the HSI images were reduced to score images with enhanced contrast between peanut and flour particles. Finally, a threshold was fixed on the score images to obtain a binary classification image, and the percentage of peanut adulteration was compared with the percentage of pixels identified as peanut particles. This study allowed the detection of peanut traces down to 0.01% and the quantification of peanut adulteration from 10% down to 0.1% with a coefficient of determination (r²) of 0.946. These results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures, such as RT-PCR and ELISA, to facilitate enhanced quality-control surveillance on food-product processing lines.
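The loadings-to-score-image-to-threshold pipeline can be sketched on synthetic spectra. All spectra, band counts and noise levels below are invented stand-ins for the measured NIR data, but the steps mirror the abstract: PCA loadings from pure spectra, projection of every pixel, then a binary classification image:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands = 50

# Hypothetical pure NIR spectra for flour and peanut (stand-ins for measured data).
flour_mean = np.linspace(0.2, 0.8, n_bands)
peanut_mean = flour_mean + 0.3 * np.sin(np.linspace(0, 3 * np.pi, n_bands))

pure = np.vstack([
    flour_mean + 0.02 * rng.normal(size=(100, n_bands)),
    peanut_mean + 0.02 * rng.normal(size=(100, n_bands)),
])

# First PCA loading of the pure-spectra data set.
center = pure.mean(axis=0)
_, _, Vt = np.linalg.svd(pure - center, full_matrices=False)
pc1 = Vt[0]
if (peanut_mean - flour_mean) @ pc1 < 0:    # fix the arbitrary sign of the loading
    pc1 = -pc1

# Fake flattened 100x100 "image": flour pixels with 1% peanut pixels mixed in.
pixels = np.tile(flour_mean, (10000, 1)) + 0.02 * rng.normal(size=(10000, n_bands))
peanut_idx = rng.choice(10000, size=100, replace=False)
pixels[peanut_idx] = peanut_mean + 0.02 * rng.normal(size=(100, n_bands))

# Score image: project every pixel onto the loading, then threshold midway
# between the two projected class means to get a binary classification image.
scores = (pixels - center) @ pc1
threshold = 0.5 * ((flour_mean - center) @ pc1 + (peanut_mean - center) @ pc1)
binary = scores > threshold
print(binary.sum())
```

The fraction of pixels above the threshold is then compared with the known adulteration percentage, as in the abstract.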

Relevance:

100.00%

Publisher:

Abstract:

In recent years, Independent Component Analysis (ICA) has proven to be a powerful signal-processing technique for solving Blind Source Separation (BSS) problems in different scientific domains. In the present work, an application of ICA to the processing of NIR hyperspectral images for detecting traces of peanut in wheat flour is presented. Processing was performed without a priori knowledge of the chemical composition of the two food materials. The aim was to extract the source signals of the different chemical components from the initial data set and to use them to determine the distribution of peanut traces in the hyperspectral images. To determine the optimal number of independent components (ICs) to be extracted, the Random ICA by blocks method was used. This method is based on the repeated calculation of several models with an increasing number of independent components, after randomly segmenting the data matrix into two blocks, and on the correlations between the signals extracted from the two blocks. The extracted ICA signals were interpreted and their ability to classify peanut and wheat flour was studied. Finally, all the extracted ICs were used to construct a single synthetic signal that could be applied directly to the hyperspectral images to enhance the contrast between peanut and wheat flour in a real multi-use industrial environment. Furthermore, feature extraction methods (a connected-components labelling algorithm followed by a flood-fill method to extract object contours) were applied to target the spatial location of the peanut traces. A good visualization of the distribution of peanut traces was thus obtained.
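The core BSS step can be illustrated in a few lines: mix two known source signals with an unknown matrix, then recover them blindly with FastICA. The signals and mixing matrix below are synthetic stand-ins for the chemical-component sources in the abstract:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 8, 2000)

# Two hypothetical source signals standing in for the signatures
# of the two chemical components.
S = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)]

# Observations are unknown linear mixtures of the sources plus noise.
A = np.array([[1.0, 0.5], [0.4, 1.2]])
X = S @ A.T + 0.02 * rng.normal(size=(2000, 2))

# Blind source separation with FastICA: no prior knowledge of A or S is used.
S_est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Up to order and sign, each estimated component matches one true source.
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
print(corr.round(2))
```

ICA recovers sources only up to permutation and sign, which is why correlations are compared in absolute value.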

Relevance:

100.00%

Publisher:

Abstract:

The application of the Electro-Mechanical Impedance (EMI) method for damage detection in Structural Health Monitoring has noticeably increased in recent years. The EMI method utilizes piezoelectric transducers to directly measure the mechanical properties of the host structure, obtaining the so-called impedance measurement, which is highly influenced by variations in the dynamic parameters of the structure. These measurements usually contain a large number of frequency points, and therefore a high number of dimensions, since each swept frequency can be considered an independent variable. This makes such data hard to handle, increasing computational cost and processing time. For that reason, Principal Component Analysis (PCA)-based data compression has been employed in this work to enhance the analysis capability of the raw data. Furthermore, a Support Vector Machine (SVM), widely used in the machine learning and pattern recognition fields, has been applied to model any existing pattern in the PCA-compressed data, using only the first two principal components. Known measurements of an experimentally tested beam, in both damaged and non-damaged states, were used as training data for the SVM algorithm; the same number of measurements, taken on beams with unknown structural health conditions, was used as test data. Thus, the purpose of this work is to demonstrate how, with a few impedance measurements of a beam as raw data, its health status can be determined by pattern recognition procedures.
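The PCA-plus-SVM pipeline can be sketched on synthetic impedance-like data. The signature shape, sample sizes and the way damage perturbs the signature are invented for illustration; the pipeline (compress to two principal components, then classify) follows the abstract:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Hypothetical impedance signatures: 500 frequency points per measurement.
# Damage slightly shifts the resonant peaks of the baseline signature.
freqs = np.linspace(0.0, 1.0, 500)
healthy = np.sin(40 * freqs) + 0.05 * rng.normal(size=(40, 500))
damaged = np.sin(40 * freqs + 0.15) + 0.05 * rng.normal(size=(40, 500))

X = np.vstack([healthy, damaged])
y = np.array([0] * 40 + [1] * 40)

# Compress each measurement to its first two principal components, then
# model the damage pattern with a support vector machine.
clf = make_pipeline(PCA(n_components=2), SVC(kernel="rbf"))
clf.fit(X[::2], y[::2])                      # known (labelled) measurements
print(clf.score(X[1::2], y[1::2]))           # beams of "unknown" condition
```

The pipeline object keeps the PCA fitted on training data, so test measurements are projected with the same loadings before classification.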

Relevance:

100.00%

Publisher:

Abstract:

Independent Component Analysis (ICA) is a Blind Source Separation method that aims to find the pure source signals mixed together in unknown proportions in the observed signals under study. It does this by searching for factors that are mutually statistically independent, so it can be classified among the latent-variable-based methods. As with other such methods, a careful investigation has to be carried out to determine which factors are significant and which are not, so it is important to have a validation procedure for deciding on the optimal number of independent components (ICs) to include in the final model. This can be complicated by the fact that two consecutive models may differ in the order and signs of similarly indexed ICs, and that the structure of the extracted sources can change as a function of the number of factors calculated. Two methods for determining the optimal number of ICs are proposed in this article and applied to simulated and real datasets to demonstrate their performance.
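One plausible reading of a block-based validation (the article's exact procedure may differ) is: randomly split the observed channels into two blocks, extract the same number of ICs from each, and match components across blocks by absolute correlation. Genuine components reproduce in both blocks; once too many components are requested, the extra ones are noise-driven and fail to match. A sketch on synthetic data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 8, 400)

# Two hypothetical true sources observed through 30 mixed channels.
S = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)]
A = rng.normal(size=(30, 2))
X = S @ A.T + 0.05 * rng.normal(size=(400, 30))

def matched_correlations(X, k):
    """Randomly split channels into two blocks, extract k ICs from each block
    and return, for every IC of block A, the best absolute correlation with
    any IC of block B (sorted, highest first)."""
    cols = rng.permutation(X.shape[1])
    half = X.shape[1] // 2
    Sa = FastICA(n_components=k, random_state=0).fit_transform(X[:, cols[:half]])
    Sb = FastICA(n_components=k, random_state=1).fit_transform(X[:, cols[half:]])
    C = np.abs(np.corrcoef(Sa.T, Sb.T)[:k, k:])
    return np.sort(C.max(axis=1))[::-1]

# Real components reproduce across blocks; spurious ones do not.
print(matched_correlations(X, 2))
print(matched_correlations(X, 3))
```

Plotting the smallest matched correlation against k gives an elbow at the true number of sources.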

Relevance:

100.00%

Publisher:

Abstract:

Video-based vehicle detection is the focus of increasing interest due to its potential for collision avoidance. In particular, vehicle verification is especially challenging due to the enormous variability of vehicles in size, color, pose, etc. In this paper, a new approach based on supervised learning using Principal Component Analysis (PCA) is proposed that addresses the main limitations of existing methods. In contrast to classical approaches, which train a single classifier regardless of the relative position of the candidate (thus ignoring valuable pose information), a region-dependent analysis is performed by considering four different areas. In addition, a study of how classification performance evolves with the dimensionality of the principal subspace is carried out using PCA features within an SVM-based classification scheme. The experiments performed on a publicly available database show that PCA dimensionality requirements are region-dependent. Hence, in this work, the optimal configuration is adapted to each region, rendering very good vehicle verification results.
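The dimensionality study can be sketched with a cross-validated accuracy sweep over the number of retained components. The data below are synthetic (invented latent structure, not the paper's image descriptors), constructed so that the discriminative direction has low variance and is therefore missed when too few components are kept:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n, d, latent = 400, 256, 20

# Hypothetical candidate descriptors for one image region: appearance varies
# in a 20-dimensional latent space, and the vehicle/background difference
# sits in a LOW-variance latent direction, so the first PCs alone miss it.
y = rng.integers(0, 2, size=n)
variances = np.linspace(2.0, 0.5, latent)
Z = rng.normal(size=(n, latent)) * np.sqrt(variances)
Z[y == 1, -1] += 1.5                     # class shift in the weakest direction
W = rng.normal(size=(latent, d))
X = Z @ W + 0.1 * rng.normal(size=(n, d))

accs = {}
for k in (2, 20, 50):
    clf = make_pipeline(PCA(n_components=k), SVC(kernel="linear"))
    accs[k] = cross_val_score(clf, X, y, cv=5).mean()
print(accs)                              # accuracy depends strongly on k
```

Running such a sweep separately per candidate region is what reveals the region-dependent dimensionality requirement reported in the abstract.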

Relevance:

100.00%

Publisher:

Abstract:

Data from an attitudinal survey and a stated-preference ranking experiment conducted at two urban European interchanges (i.e. City-HUBs) in Madrid (Spain) and Thessaloniki (Greece) show that the importance City-HUB users attach to the intermodal infrastructure varies strongly as a function of their perceptions of the time spent in the interchange (i.e. intermodal transfer and waiting time). A principal components analysis allocates respondents (i.e. City-HUB users) to two classes with substantially different perceptions of the time saved when making a transfer and of the use of their waiting time.
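Allocating survey respondents to classes via PCA can be sketched as: project the rating matrix onto its leading components, then split the scores into two groups. The survey items, group structure and clustering step below are invented for illustration (the study's actual item set and class-allocation rule are not given in the abstract):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)

# Hypothetical survey: 200 respondents rating 6 time-perception items (1-5).
# Two latent attitude groups answer systematically differently.
group = rng.integers(0, 2, size=200)
base = np.where(group[:, None] == 0, 2.0, 4.0)      # group-dependent mean rating
ratings = np.clip(base + rng.normal(0, 0.7, size=(200, 6)), 1, 5)

# Project onto the leading principal components and split into two classes.
scores = PCA(n_components=2).fit_transform(ratings)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# Agreement with the underlying groups (up to label permutation).
agree = max((labels == group).mean(), (labels != group).mean())
print(agree)
```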

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a solution to the problem of recognizing the gender of a human face from an image. We adopt a holistic approach, using the cropped and normalized texture of the face as input to a Naïve Bayes classifier. The Class-Conditional Probabilistic Principal Component Analysis (CC-PPCA) technique is introduced to reduce the dimensionality of the classification feature vectors and enforce the independence assumption of the classifier. This new approach has the desirable property of a simple parametric model for the marginals; moreover, this model can be estimated with very few data. In the experiments conducted, we show that CC-PPCA achieves 90% classification accuracy, a result very similar to the best reported in the literature. The proposed method is very simple to train and implement.
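The class-conditional idea (one probabilistic PCA model per class, classification by the larger class-conditional likelihood) can be sketched without reproducing the paper's exact CC-PPCA formulation. Note sklearn's `PCA.score_samples` returns the log-likelihood under the probabilistic-PCA density, which makes the sketch short; the face-texture data here are synthetic stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n, d, k = 300, 64, 5

# Hypothetical stand-in for normalized face textures: each class varies in
# its own low-dimensional subspace, with a small mean offset between classes.
B0 = rng.normal(size=(k, d))
B1 = rng.normal(size=(k, d))
X0 = rng.normal(size=(n, k)) @ B0 + 0.1 * rng.normal(size=(n, d))
X1 = 0.3 + rng.normal(size=(n, k)) @ B1 + 0.1 * rng.normal(size=(n, d))

# One probabilistic PCA model per class; classify by the larger likelihood.
m0 = PCA(n_components=k).fit(X0[:200])
m1 = PCA(n_components=k).fit(X1[:200])

test = np.vstack([X0[200:], X1[200:]])
truth = np.array([0] * 100 + [1] * 100)
pred = (m1.score_samples(test) > m0.score_samples(test)).astype(int)
acc = (pred == truth).mean()
print(acc)
```

With equal class priors this likelihood comparison is a Bayes decision rule over the two PPCA densities.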

Relevance:

100.00%

Publisher:

Abstract:

One of the main drawbacks of wind energy is its intermittent generation, which depends greatly on environmental conditions. Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective: system operators and energy traders benefit from forecasting techniques, because reducing the inherent uncertainty of wind power allows them to adopt optimal decisions. Wind power integration imposes new challenges as higher penetration levels are attained, and wind power ramp forecasting is one such recent topic of interest. The term ramp refers to a large and rapid variation (1-4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be caused by a broad number of meteorological processes occurring at different temporal and spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. Ramp events may also be conditioned by features of the wind-to-power conversion process, such as yaw misalignment, wind turbine shut-down, and the aerodynamic interaction between the turbines of a wind farm (wake effect). This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level, within a point-forecasting framework. Time-series-based models were implemented for very short-term prediction, characterized by horizons of up to six hours ahead. As a first step, a methodology to characterize ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index of ramp intensity at each time step; the underlying idea is that ramps are characterized by high power-output gradients evaluated over a range of time scales. A number of state-of-the-art time-series models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs), giving insight into how model complexity contributes to the accuracy of wind power time-series modelling. The models were trained by minimizing the mean squared error, and the final set-up of each model was determined through cross-validation. To investigate the contribution of the global scale to wind power ramp forecasting, a methodology was proposed to identify features in raw atmospheric data that are relevant for explaining ramp events. It is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. The methodology was applied to reanalysis data generated with a general circulation model (GCM), yielding explanatory variables meaningful for ramp forecasting that were used as exogenous inputs to the forecasting models. The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations. Adding atmospheric information had a noticeable impact on forecasting performance, especially during ramp-down events. The results also suggested different levels of connection between ramp occurrence at the wind farm level and the global scale.
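The ramp function described in the abstract is defined via the wavelet transform; as a rough illustrative stand-in (not the thesis's exact definition), a multi-scale power-gradient index captures the same underlying idea, i.e. that ramps show high gradients across a range of time scales:

```python
import numpy as np

def ramp_function(p, scales=(1, 2, 3)):
    """Hypothetical ramp index: mean absolute power gradient over several
    time scales (a Haar-wavelet-style stand-in for the thesis's definition)."""
    n = len(p)
    out = np.zeros(n)
    for s in scales:
        grad = np.zeros(n)
        grad[s:] = p[s:] - p[:-s]        # power change over a lag of s steps
        out += np.abs(grad) / s          # normalize each scale by its lag
    return out / len(scales)

# Synthetic normalized wind power: flat, then a steep ramp-up, then flat.
power = np.r_[np.full(20, 0.2), np.linspace(0.2, 0.9, 6), np.full(20, 0.9)]
r = ramp_function(power)
print(r.argmax())                        # the index peaks inside the ramp event
```

Thresholding such an index is one way to label ramp/non-ramp periods before training the forecasting models.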

Relevance:

100.00%

Publisher:

Abstract:

The main goal of this project is to develop a facial recognition system. To meet this end, a series of specific objectives had to be accomplished: researching the existing face recognition techniques, choosing an application where face recognition might be useful, designing and developing a face recognition system in MATLAB, and measuring the performance of the implemented system. This document is divided into four parts: INTRODUCTION, THEORETICAL FRAMEWORK, IMPLEMENTATION, and RESULTS, CONCLUSIONS AND FUTURE RESEARCH. In the first part, an introduction to the current state of facial recognition is given, together with the techniques used to develop a biometric system of this kind, and the techniques chosen for the implementation are justified. In the second part, the general structure and the two basic operating modes of a biometric system are explained, as well as the error rates used to evaluate and compare its performance; a more detailed description of the concepts and methods used to perform the face detection and recognition of the third part is also given. The third part covers the design, characteristics and application of the proposed solution: a MATLAB program with a graphical user interface that uses four face recognition systems, each based on a different technique: Principal Component Analysis (PCA), Fisher's Linear Discriminant (FLD), Gabor wavelets, and Elastic Graph Matching (EGM). The program also makes it possible to create and edit one's own tagged database, giving it a direct application to the topic at hand. A set of additional features is proposed to extend and improve the program's functionality, three of which deserve mention: a hybrid verification mode applicable to any branch of biometrics; an evaluation program capable of measuring, plotting and comparing the configurations of each implemented recognition system; and a tool for creating personalized graphs and models (a tagged graph associated with an image of a person), applicable to object recognition in general. In the fourth and last part, the results of the comparisons between the different configurations of the implemented systems are presented for three databases (one of them built from images acquired under non-controlled conditions), and the error rates of the proposed hybrid verification mode are measured. Finally, conclusions are drawn and future lines of research are proposed.
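Of the four techniques listed, the PCA-based one (eigenfaces) is the simplest to sketch. The toy example below (synthetic "faces", illustrative sizes, Python rather than the project's MATLAB) shows the essential steps: PCA on a gallery, projection onto the top eigenfaces, and nearest-neighbour identification in that subspace:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical gallery: 5 identities x 4 images, flattened 32x32 "faces".
ids = np.repeat(np.arange(5), 4)
prototypes = rng.normal(size=(5, 1024))
gallery = prototypes[ids] + 0.2 * rng.normal(size=(20, 1024))

# Eigenfaces: PCA on the gallery, keeping a handful of components.
mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
E = Vt[:10]                                   # top 10 eigenfaces
g_proj = (gallery - mean) @ E.T               # gallery in eigenface space

def identify(face):
    """Nearest-neighbour match in eigenface space."""
    w = (face - mean) @ E.T
    return ids[np.argmin(np.linalg.norm(g_proj - w, axis=1))]

probe = prototypes[3] + 0.2 * rng.normal(size=1024)   # new image of identity 3
print(identify(probe))
```

FLD, Gabor-wavelet and EGM pipelines replace the projection step with their own feature extraction but keep the same gallery-matching structure.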

Relevance:

100.00%

Publisher:

Abstract:

En la actualidad las industrias químicas, farmacéuticas y clínicas, originan contaminantes en aguas superficiales, aguas subterráneas y suelos de nuestro país, como es el caso del fenol, contaminante orgánico común y altamente dañino para los organismos, incluso a bajas concentraciones. Existen en el mercado diferentes metodologías para minimizar la contaminación pero muchos de estos procesos tienen un alto coste, generación de contaminantes, etc. La adsorción de contaminantes por medio de arcillas es un método ampliamente utilizado, encontrándose eficaz y económico. Pero la dificultad de adsorber un contaminante orgánico como el fenol motiva la creación de un material llamado organoarcillas. Las organoarcillas son arcillas modificadas con un surfactante, a su vez, los surfactantes son moléculas orgánicas que confieren a la superficie de la arcilla carga catiónica en lugar de aniónica, haciendo más fácil la adsorción de fenol. Para esta tesis se ha elegido el caolín como material adsorbente, fácilmente disponible y relativamente de bajo coste. Se ha trabajado con: arenas de caolín, material directo de la extracción, y caolín lavado, originado del proceso de lavado de las arenas de caolín. Ambos grupos se diferencian fundamentalmente por su contenido en cuarzo, ampliamente mayor en las arenas de caolín. Con el objetivo de desarrollar un material a partir del caolín y arenas de éste con capacidad de retención de contaminates, en concreto, fenol, se procedió a modificar los materiales de partida mediante tratamientos térmicos, mecánicos y/o químicos, dando lugar a compuestos con mayor superficie química reactiva. Para ello se sometió el caolín y las arenas caoliníferas a temperaturas de 750ºC durante 3h, a moliendas hasta alcanzar su amorfización, y/o a activaciones con HCl 6M o con NaOH 5M durante 3h a 90ºC. 
En total se obtuvieron 18 muestras, en las que se estudiaron las características físico-químicas, mineralógicas y morfológicas de cada una de ellas con el fin de caracterizarlas después de haber sufrido los tratamientos y/o activaciones químicas. Los cambios producidos fueron estudiados mediante pH, capacidad de intercambio catiónico (CEC), capacidad de adsorción de agua (WCU y CWC), distribución de tamaño de partícula (PSD), área de superficie específica (SBET), difracción de rayos X (XRD), espectroscopía infrarroja por transformada de Fourier (FTIR), métodos térmicos (TG, DTG y DTA), y microscopía electrónica de transmisión y barrido (SEM y TEM). Además se analizó los cambios producidos por los tratamientos en función de las pérdidas de Al y Si que acontece en las 18 muestras. Los resultados para los materiales derivados de la arenas caoliníferas fueron similares a los obtenidos para los caolines lavados, la diferencia radica en la cantidad de contenido de caolinita en los diferente grupos de muestras. Apoyándonos en las técnicas de caracterización se puede observar que los tratamientos térmico y molienda produce materiales amorfos, este cambio en la estructura inicial sumado a las activaciones ácida y alcalina dan lugar a pérdidas de Si y Al, ocasionando que sus propiedades físico-químicas, mineralógicas y morfológicas se vean alteradas. Un fuerte aumento es observado en las áreas superficiales y en la CEC en determinadas muestras, además entre los cambios producidos se encuentra la producción de diferentes zeolitas en porcentajes distintos con el tratamiento alcalino. Para la obtención de las organoarcillas, las 18 muestras se sometieron a la surfactación con hexadeciltrimetil amonio (HDTMA) 20 mM durante 24h a 60ºC, esta concentración de tensioactivo fue más alta que la CEC de cada muestra. 
Los camext bios anteriormente producidos por los tratamientos y activaciones, afectan de forma diferente en la adsorción de HDTMA, variando por tanto la adsorción del surfactante en la superficie de las muestras. Se determinó el tensioactivo en superficie por FTIR, además se realizó un análisis de componentes principales (PCA) para examinar la dependencia entre las relaciones Si/Al de las muestras en la capacidad de adsorción de tensioactivo, y para el estudio de la adsorción de HDTMA en las muestras se realizaron además del análisis termogravimétrico, aproximaciones con los modelos de Freundllich y Langmuir. Se persigue conocer las diferentes formas y maneras que tiene el tensioactivo de fijarse en la superficie de las muestras. En las organoarcillas resultantes se cuantificó el fenol adsorbido cuando éstas fueron puestas en contacto con diferentes concentraciones de fenol: 50, 500, 1000, 2000, y 2500 mg/l durante 24h. El contaminante sorbido se calculó por medio de cromatografía de gases, y se realizaron aproximaciones con los modelos de Freundllich y Langmuir. El comportamiento de adsorción de fenol en arcillas orgánicas es regido por las características de las muestras. De forma general se puede decir que las muestras de caolines lavados tienen más capacidad de adsorción de fenol que las muestras de arenas de caolín y que la activación alcalina ha proporcionado una mejora en la adsorción de fenol en los dos grupos. En consecuencia se han obtenido materiales adsorbentes heterogéneos y por tanto, con propiedades diferentes. Se ha evaluado el comportamiento global de las arenas de caolín por un lado y del caolín lavado por otro. Las arenas de caolín presentan altos niveles de cuarzo y su uso para ciertos tipos de industrias no son recomendados en ocasiones por el alto costo que el proceso de limpieza y purificación implicaría. Por ello es importante reseñar en este proyecto las aplicaciones que ofrecen algunas muestras de este grupo. 
Los ensayos acontecidos en esta tesis han dado lugar a las siguientes publicaciones: • Pérdida de Al y Si en caolines modificados térmica- o mecánicamente y activados por tratamientos químicos. A. G. San Cristóbal, C Vizcayno, R. Castelló. Macla 9, 113-114. (2008). • Acid activation of mechanically and thermally modfied kaolins. A. G. San Cristóbal, R. Castelló, M. A. Martín Luengo, C Vizcayno. Mater. Res. Bull. 44 (2009) 2103-2111. • Zeolites prepared from calcined and mechanically modified kaolins. A comparative study. A. G San Cristóbal, R. Castelló, M. A. Martín Luengo, C Vizcayno. Applied Clay Science 49 (2010) 239-246. • Study comparative of the sorption of HDTMA on natural and modified kaolin. A. G San Cristóbal, R. Castelló, J. M. Castillejo, C Vizcayno. Aceptada en Clays and Clay minerals. • Capacity of modified kaolin sand and washed kaolin to adsorb phenol. A. G San Cristóbal, R. Castelló, C Vizcayno. Envío a revista sujeto a la publicación del artículo anterior. ABSTRACT Today’s chemical, pharmaceutical and clinical industries generate pollutants that affect the soils and surface and ground waters of our country. Among these, phenol is a common organic pollutant that is extremely harmful to living organisms, even at low concentrations. Several protocols exist to minimize the effects of pollutants, but most are costly procedures or even generate other pollutants. The adsorption of hazardous materials onto clays is perhaps the most used, efficient and cost-saving method available. However, organic compounds such as phenol are difficult to adsorb and this has led to the development of materials known as organoclays, which are much better at remediating organic compounds. Organoclays are clays that have been modified using a surfactant. In turn, surfactants are organic molecules that confer a cationic rather than anionic charge to the clay surface, improving it’s capacity to adsorb phenol. 
For this doctorate project, kaolin was selected as an adsorbent material for the removal of phenol given its easy sourcing and relatively low cost. The materials investigated were kaolin sand, a directly extracted material, and washed kaolin, which is the byproduct of the kaolin sand washing process. The main difference between the materials is their quartz content, which is much higher in the kaolin sands. To generate a product from kaolin or kaolin sand capable of retaining organic pollutants such as phenol, both materials were subjected to several heat, chemical and/or mechanical treatments to give rise to compounds with a greater reactive surface area. To this end the two starting materials underwent heating at 750ºC for 3 h, grinding to the point of amorphization and/or activation with HCl 6M or NaOH 5M for 3 h at 90ºC. These treatments gave rise to 18 processed samples, which were characterized in terms of their morphological, mineralogical, and physical-chemical properties. The behaviour of these new materials was examined in terms of their pH, cation exchange capacity (CEC), water adsorption capacity (WCU and WCC), particle size distribution (PSD), specific surface area (SBET), and their X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), thermal (DTG, DTA) and scanning and transmission electron microscopy (SEM and TEM) properties. The changes conferred by the different treatments were also examined in terms of Al and Si losses. Results for the materials derived from kaolin sands and washed kaolin were similar, with differences attributable to the kaolinite contents of the samples. The treatments heat and grinding produced amorphous materials, which when subjected to acid or alkali activation gave rise to Si and Al losses. This in turn led to a change in physico- chemical, mineralogical and morphological properties. Some samples showed a highly increased surface area and CEC. 
Further, among the changes produced, alkali treatment led to the generation of zeolites in different proportions depending on the sample. To produce the organoclays, the 18 samples were treated with 20 mM hexadecyltrimethylammonium (HDTMA) for 24 h at 60ºC; this surfactant concentration is higher than the CEC of each sample. The amount of HDTMA adsorbed onto the surface of each sample, determined by FTIR, varied according to treatment. A principal component analysis (PCA) was performed to examine correlations between sample Si/Al ratios and surfactant adsorption capacity. In addition, to explore HDTMA adsorption by the samples, DTG and DTA data were fitted to the Freundlich and Langmuir models. The mechanisms of surfactant attachment to the sample surface were also addressed. The amount of phenol adsorbed by the resultant organoclays was determined on exposure to different phenol concentrations (50, 500, 1000, 2000 and 2500 mg/l) for 24 h. The quantity of adsorbed pollutant was estimated by gas chromatography and the data were fitted to the Freundlich and Langmuir models. The results indicate that the phenol adsorption capacity of the surfactant-treated samples depends on each sample’s characteristics. In general, the washed kaolin samples showed a greater phenol adsorption capacity than the kaolin sands, and alkali activation improved this capacity in both types of sample. In conclusion, the treatments used gave rise to adsorbent materials with varying properties. Kaolin sands show high quartz levels, and their use in some industries is not recommended because of the costs involved in their washing and purification. The applications suggested by the data obtained for some of the kaolin sand samples indicate the added value of this industrial by-product. The results of this research project have led to the following publications: • Pérdida de Al y Si en caolines modificados térmica- o mecánicamente y activados por tratamientos químicos. A. G. San Cristóbal, C. Vizcayno, R. Castelló. Macla 9, 113-114 (2008). • Acid activation of mechanically and thermally modified kaolins. A. G. San Cristóbal, R. Castelló, M. A. Martín Luengo, C. Vizcayno. Mater. Res. Bull. 44 (2009) 2103-2111. • Zeolites prepared from calcined and mechanically modified kaolins. A comparative study. A. G. San Cristóbal, R. Castelló, M. A. Martín Luengo, C. Vizcayno. Applied Clay Science 49 (2010) 239-246. • Comparative study of the sorption of HDTMA on natural and modified kaolin. A. G. San Cristóbal, R. Castelló, J. M. Castillejo, C. Vizcayno. Accepted in Clays and Clay Minerals. • Capacity of modified kaolin sand and washed kaolin to adsorb phenol. A. G. San Cristóbal, R. Castelló, C. Vizcayno. Submission pending publication of the preceding article.
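As a concrete illustration of the isotherm-fitting step mentioned above, the sketch below fits the classical linearized Langmuir and Freundlich models to synthetic equilibrium data. The parameter values (q_max = 120 mg/g, K = 0.004 l/mg) are invented for illustration and are not results from the thesis; only the phenol concentration levels mirror those tested in the study.

```python
# Hedged sketch (not the thesis code): fitting the linearized Langmuir
# and Freundlich isotherms used in the study to model adsorption.
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def fit_langmuir(C, q):
    """Linearized Langmuir: C/q = C/q_max + 1/(K*q_max)."""
    slope, intercept = linear_fit(C, [c / qi for c, qi in zip(C, q)])
    return 1.0 / slope, slope / intercept        # q_max, K

def fit_freundlich(C, q):
    """Linearized Freundlich: log q = log K_F + (1/n) log C."""
    slope, intercept = linear_fit([math.log(c) for c in C],
                                  [math.log(qi) for qi in q])
    return math.exp(intercept), 1.0 / slope      # K_F, n

# Synthetic equilibrium data generated from a known Langmuir isotherm
# (q_max = 120 mg/g, K = 0.004 l/mg); the concentration levels (mg/l)
# mirror the phenol exposures tested in the study.
C = [50.0, 500.0, 1000.0, 2000.0, 2500.0]
q = [120.0 * 0.004 * c / (1.0 + 0.004 * c) for c in C]
q_max, K = fit_langmuir(C, q)                    # recovers 120, 0.004
```

Since the synthetic data are exactly Langmuir-shaped, the linearized regression recovers the generating parameters; with real gas-chromatography data the two models would give different goodness-of-fit, which is what the comparison in the thesis exploits.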

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Video analytics plays a critical role in recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported on image-based vehicle verification uses supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, a comparison between methods on a common body of work has not been addressed; second, no study of the combination potential of popular features for vehicle classification has been reported. In this study, the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, also taking the vehicle pose into account. The study unveils the limitations of single-feature-based classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification.
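The classifier-fusion idea can be sketched with a simple score-level combination rule: each single-feature classifier outputs a posterior probability and the fused decision combines them. The probabilities, weights and 0.5 decision threshold below are hypothetical, not values from the study.

```python
# Hedged sketch of score-level fusion: each single-feature classifier
# (e.g. HOG-, PCA- or Gabor-based) yields a posterior probability that
# a candidate region is a vehicle; the fused score combines them.
# All numeric values here are hypothetical.

def fuse_scores(scores, weights=None, rule="sum"):
    """Combine per-classifier vehicle probabilities into one score."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    if rule == "sum":                  # weighted-sum rule
        return sum(w * s for w, s in zip(weights, scores))
    if rule == "product":              # product rule, assumes independence
        p = 1.0
        for s in scores:
            p *= s
        return p
    raise ValueError("unknown rule: " + rule)

# Hypothetical posteriors from three single-feature classifiers.
hog_p, pca_p, gabor_p = 0.9, 0.6, 0.8
fused = fuse_scores([hog_p, pca_p, gabor_p])     # equal-weight sum rule
is_vehicle = fused >= 0.5
```

The weighted-sum rule is forgiving of one weak classifier, while the product rule vetoes a detection if any single classifier assigns a low probability; which behaves better depends on how correlated the feature-specific errors are.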

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Being able to accurately classify the application or program that generates the flows making up the Internet traffic within a network gives both companies and public bodies a useful tool for managing their network resources, as well as the possibility of establishing policies to block or prioritize specific traffic. The proliferation of new applications and new techniques has made it difficult to rely on the well-known application port values assigned by the IANA (Internet Assigned Numbers Authority) to detect those applications. P2P (peer-to-peer) networks, the use of unknown or random ports, and the masquerading of many applications' traffic as HTTP and HTTPS in order to traverse firewalls and NATs (Network Address Translation), among other factors, create the need for new traffic detection methods. The aim of this study is to develop a set of practices that accomplish this task through techniques that go beyond observing ports and other well-known values. Several methodologies exist: Deep Packet Inspection (DPI), based on searching for signatures, i.e. patterns formed by the contents of the packets, including the payload, that characterize each application; machine learning of flow parameters, which determines through statistical analysis which application given flows may belong to; and, finally, more heuristic techniques based on intuition or the analyst's own knowledge of network traffic. Specifically, we propose using some of the above techniques together with data mining techniques, namely Principal Component Analysis (PCA) and clustering of statistics extracted from the flows contained in network traffic capture files. 
This will involve configuring several parameters through an iterative trial-and-error process until a reliable traffic classification is reached. The ideal result would be one in which each application present in the traffic is identified in a distinct cluster, or in clusters that group applications of a similar nature. To this end, traffic captures will be created in a controlled environment, identifying each trace with its corresponding application, and the flows will then be extracted from those captures. Next, selected parameters of the packets belonging to those flows will be obtained, such as the arrival timestamp or the length in octets of the IP packet. These parameters will be loaded into a MySQL database and used to obtain statistics that help, in a following step, to classify the flows through data mining. Specifically, PCA and clustering will be applied using the RapidMiner software. Finally, the results will be laid out in a confusion matrix so that they can be properly assessed. ABSTRACT. Being able to classify the applications that generate the traffic flows in an Internet network allows companies and organizations to implement efficient resource management policies, such as prohibition of specific applications or prioritization of certain application traffic, in search of an optimal use of the available bandwidth. The proliferation of new applications and new techniques in recent years has made it more difficult to use well-known values assigned by the IANA (Internet Assigned Numbers Authority), like UDP and TCP ports, to identify the traffic. In addition, P2P networks and the encapsulation of data in HTTP and HTTPS traffic have increased the need to improve these traffic analysis techniques. 
The aim of this project is to develop a set of techniques that enable us to classify traffic using more than simple observation of well-known ports. Several proposals have been created to cover this need. Deep Packet Inspection (DPI) searches the information contained in the packets, including the payload, for signatures, i.e. patterns that characterize the applications to which the traffic belongs. Machine learning procedures work with statistical analysis of the flows, trying to build an automatic process that learns from those statistical parameters and calculates the likelihood of a flow belonging to a certain application. Heuristic techniques, finally, are based on the intuition or knowledge of the researcher about the traffic being analyzed, which can help characterize it. Specifically, we propose using some of the techniques mentioned above in combination with data mining techniques, such as Principal Component Analysis (PCA) and clustering (grouping), applied to the flows extracted from network traffic captures. An iterative trial-and-error process will be needed to configure these data mining techniques in search of a reliable traffic classification. The perfect result would be one in which the traffic flows of each application are grouped correctly in their own cluster, or in clusters that contain groups of applications of a similar nature. To do this, network traffic captures will be created in a controlled environment in which every capture is known to belong to a specific application. Then, for each capture, all the flows will be extracted. These flows will be used to extract information such as arrival timestamps or the IP length of the packets inside them. 
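As a rough sketch of the dimensionality-reduction step described above, the stdlib-only Python below computes the first principal component of per-flow statistics via power iteration and projects each flow onto it. The flow features (mean packet length, mean inter-arrival time) and their values are invented for illustration; the thesis itself performs this step in RapidMiner.

```python
# Illustrative sketch of the PCA step: project each flow's statistics
# onto the first principal component, computed by power iteration.
# Feature choice and values are hypothetical, not the thesis data.

def first_principal_component(rows, iters=200):
    """Dominant eigenvector of the sample covariance of `rows`."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[i] for r in rows) / n for i in range(d)]
    centered = [[r[i] - means[i] for i in range(d)] for r in rows]
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                       # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

# Four hypothetical flows: two bulk transfers, two interactive flows,
# described as (mean packet length in bytes, mean inter-arrival in s).
flows = [(1400.0, 0.01), (1380.0, 0.012), (80.0, 0.20), (90.0, 0.21)]
v, means = first_principal_component(flows)
# Score of each flow on the first component; the sign already separates
# the two kinds of flow, which is what the later clustering exploits.
scores = [sum((f[i] - means[i]) * v[i] for i in range(2)) for f in flows]
```

In practice the features would be standardized first, since here the packet-length axis dominates the variance; the sketch keeps raw values only to make the separation visible.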
This information will then be loaded into a MySQL database, where all the packets defining a flow will be grouped and each flow will be assigned to its specific application. The information obtained from the packets will be used to generate statistical parameters that describe each flow as accurately as possible. After that, the data mining techniques mentioned above (PCA and clustering) will be applied to these parameters using the RapidMiner software. Finally, the results obtained from the data mining will be compared with the real classification of the flows, available from the database. A confusion matrix will be used for the comparison, allowing us to measure the accuracy of the developed classification process.
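The final evaluation step, comparing the cluster each flow landed in against its true application, might look like the minimal sketch below. The flows, labels, cluster assignments and the purity measure are illustrative only, not the actual matrix produced in the study.

```python
# Minimal sketch of the evaluation step: a confusion matrix between the
# true application of each flow and its assigned cluster, plus a simple
# purity score. All data here are invented for illustration.
from collections import Counter

def confusion_matrix(true_labels, clusters):
    """Counts of (application, cluster) pairs across all flows."""
    return Counter(zip(true_labels, clusters))

def cluster_purity(true_labels, clusters):
    """Fraction of flows whose cluster's majority application matches
    their own label: a rough reliability measure for the grouping."""
    by_cluster = {}
    for (app, cl), n in confusion_matrix(true_labels, clusters).items():
        by_cluster.setdefault(cl, Counter())[app] += n
    correct = sum(c.most_common(1)[0][1] for c in by_cluster.values())
    return correct / len(true_labels)

# Six hypothetical flows from two applications, grouped in two clusters.
apps = ["http", "http", "http", "p2p", "p2p", "p2p"]
clusters = [0, 0, 1, 1, 1, 1]
cm = confusion_matrix(apps, clusters)     # e.g. cm[("http", 0)] == 2
purity = cluster_purity(apps, clusters)   # 5 of 6 flows -> 5/6
```

Here one http flow falls into the p2p-dominated cluster, so the purity is 5/6; a perfect clustering, the "ideal result" described above, would score 1.0.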