851 results for statistical methods
Abstract:
The present study focused on the quality of rainwater at various land-use locations and its variation on interaction with various domestic rainwater harvesting systems. Sampling sites were selected based upon the land-use pattern of the locations and were classified as rural, urban, industrial and suburban. Rainwater samples were collected from the southwest monsoon of May 2007 to the northeast monsoon of October 2008 at four sampling sites, namely Kothamangalam, Ernakulam, Eloor and Kalamassery, in Ernakulam district of the State of Kerala, which characterized typical rural, urban, industrial and suburban locations respectively. Rainwater samples at various stages of harvesting were also collected. The samples were analyzed according to standard procedures and their physico-chemical and microbiological parameters were determined. The variation in the chemical composition of the collected rainwater was studied using statistical methods. It was observed that 17.5%, 30%, 45.8% and 12.1% of the rainwater samples collected at the rural, urban, industrial and suburban locations respectively had a pH of less than 5.6, which is considered the pH of cloud water at equilibrium with atmospheric CO2. Nearly 46% of the rainwater samples were in the acidic range at the industrial location, compared with only 17% at the rural location. Multivariate statistical analysis was performed using Principal Component Analysis, and the sources that influence the composition of rainwater at each location were identified, which clearly indicated that the quality of rainwater is site specific and represents the atmospheric characteristics of the free fall. The quality of harvested rainwater showed significant variations at different stages of harvesting due to deposition of dust from the roof catchment surface, leaching of cement constituents, etc. Except for its microbiological quality, the harvested rainwater satisfied the Indian Standard guidelines for drinking water. Studies conducted on the leaching of cement constituents in water concluded that tanks made with ordinary Portland cement and Portland pozzolana cement could be safely used for the storage of rainwater.
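As an illustration of the multivariate step described above, the following is a minimal sketch of source identification with PCA; the input file, ion names and interpretation comments are assumptions for illustration, not the data of this study.

```python
# Hedged sketch: PCA on standardized ion concentrations to suggest sources.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

data = pd.read_csv("rainwater_ions.csv")          # hypothetical columns: SO4, NO3, Cl, Ca, Mg, Na, K
X = StandardScaler().fit_transform(data.values)   # standardize so each ion contributes equally

pca = PCA(n_components=3)
scores = pca.fit_transform(X)                     # per-sample scores on the retained components

# Loadings link each component to the original ions; e.g. high SO4/NO3 loadings
# could point to an anthropogenic source, high Na/Cl loadings to a marine one.
loadings = pd.DataFrame(pca.components_.T, index=data.columns, columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
```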
Abstract:
In the present thesis, entitled "Implications of Hydrobiology and Nutrient Dynamics on Trophic Structure and Interactions in Cochin Backwaters", an attempt has been made to assess the influence of general hydrography, nutrients and other environmental factors on abundance, distribution and trophic interactions in the Cochin backwater system. The study was based on five seasonal sampling campaigns carried out at 15 stations spread along the Cochin backwater system. The thesis is presented in the following 5 chapters. Salient features of each chapter are summarized below: Chapter 1 - General Introduction: Provides information on the topic of study, environmental factors, background information, the significance, review of literature, and the aim, scope and objectives of the present study. Chapter 2 - Materials and Methods: This chapter deals with the description of the study area and the methodology adopted for sample collection and analysis. Chapter 3 - General Hydrography and Sediment Characteristics: Describes the environmental setting of the study area, explaining the seasonal variation in physicochemical parameters of the water column and sediment characteristics. Data on hydrographical parameters, nitrogen fractionation, phosphorus fractionation and the biochemical composition of the sediment samples were assessed to evaluate the trophic status. Chapter 4 - Nutrient Dynamics on Trophic Structure and Interactions: Describes primary, secondary and tertiary production in the Cochin backwater system. Primary production relates to cell abundance, the seasonally varying diversity of phytoplankton, the concentration of various pigments and primary productivity. Secondary production refers to the seasonal abundance of zooplankton, especially copepod abundance, and tertiary production deals with seasonal fish landings, gut content analysis and the proximate composition of dominant fish species. The spatiotemporal variation, interrelationships and trophic interactions were evaluated by statistical methods. Chapter 5 - Summary: The results and findings of the study are summarized in the fifth chapter of the thesis.
Abstract:
Pollution of water with pesticides has become a threat to man, materials and the environment. Pesticides released to the environment reach water bodies through run-off. Industrial wastewater from pesticide manufacturing industries contains pesticides at higher concentrations and hence is a major source of water pollution. Pesticides create many health and environmental hazards, including diseases such as cancer, liver and kidney disorders, reproductive disorders, foetal death, birth defects, etc. Conventional wastewater treatment plants based on biological treatment are not efficient enough to remove these compounds to the desired level. Most pesticides are phyto-toxic, i.e., they kill the microorganisms responsible for their degradation, and are recalcitrant in nature. Advanced oxidation processes (AOPs) are a class of oxidation techniques in which hydroxyl radicals are employed for the oxidation of pollutants. AOPs have the ability to totally mineralise organic pollutants to CO2 and water. Different methods are employed for the generation of hydroxyl radicals in AOP systems. Acetamiprid is a neonicotinoid insecticide widely used to control sucking-type insects on crops such as leafy vegetables, citrus fruits, pome fruits, grapes, cotton and ornamental flowers. It is now recommended as a substitute for organophosphorous pesticides. Since its use is increasing, its presence is increasingly found in the environment. It has high water solubility and is not easily biodegradable, so it has the potential to pollute surface and ground waters. Here, the use of AOPs for the removal of acetamiprid from wastewater has been investigated. Five methods were selected for the study based on a literature survey and preliminary experiments: the Fenton process, UV treatment, the UV/H2O2 process, photo-Fenton, and photocatalysis using TiO2. Undoped TiO2 and TiO2 doped with Cu and Fe were prepared by the sol-gel method. The prepared catalysts were characterised by X-ray diffraction, scanning electron microscopy, differential thermal analysis and thermogravimetric analysis. The influence of the major operating parameters on the removal of acetamiprid was investigated. All the experiments were designed using the central composite design (CCD) of response surface methodology (RSM). Model equations were developed for Fenton, UV/H2O2, photo-Fenton and photocatalysis to predict acetamiprid removal and total organic carbon (TOC) removal under different operating conditions. The quality of the models was analysed by statistical methods. Experimental validations were also performed to confirm the quality of the models. The optimum conditions obtained by experiment were verified against those obtained using a response optimiser. The Fenton process is the simplest and oldest AOP, in which hydrogen peroxide and iron are employed for the generation of hydroxyl radicals. The influence of H2O2 and Fe2+ on acetamiprid removal and TOC removal by the Fenton process was investigated, and it was found that removal increases with increasing H2O2 and Fe2+ concentration. At an initial acetamiprid concentration of 50 mg/L, 200 mg/L H2O2 and 20 mg/L Fe2+ at pH 3 were found to be optimum for acetamiprid removal. For UV treatment the effect of pH was studied, and it was found that pH does not have much effect on the removal rate. Addition of H2O2 to the UV process increased the removal rate because of the hydroxyl radicals formed by the photolysis of H2O2. An H2O2 concentration of 110 mg/L at pH 6 was found to be optimum for acetamiprid removal.
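To make the response-surface modelling step concrete, here is a minimal sketch of fitting a second-order model of the kind produced with a central composite design; the run table, factor names and removal values are hypothetical, not the thesis data.

```python
# Hedged sketch: quadratic response-surface fit of removal versus two factors.
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.DataFrame({
    "h2o2":    [100, 200, 300, 100, 200, 300, 200, 200, 200],   # mg/L (hypothetical design points)
    "fe":      [10, 10, 10, 30, 30, 30, 20, 20, 20],            # mg/L (hypothetical design points)
    "removal": [62, 75, 80, 70, 88, 91, 85, 84, 86],            # % removal (hypothetical responses)
})

# Full second-order model: linear, interaction and squared terms.
model = smf.ols("removal ~ h2o2 + fe + h2o2:fe + I(h2o2**2) + I(fe**2)", data=runs).fit()
print(model.summary())                                           # R^2 and p-values assess model quality
print(model.predict(pd.DataFrame({"h2o2": [200], "fe": [20]})))  # predicted removal at a chosen setting
```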
With photo-Fenton, a drastic reduction in treatment time was observed, together with a tenfold reduction in the amount of reagents required. An H2O2 concentration of 20 mg/L and an Fe2+ concentration of 2 mg/L were found to be optimum at pH 3. With TiO2 photocatalysis, an improvement in the removal rate was noticed compared to UV treatment. The effect of Cu and Fe doping on the photocatalytic activity under UV light was studied, and it was observed that Cu doping enhanced the removal rate slightly while Fe doping decreased it. Maximum acetamiprid removal was observed for an optimum catalyst loading of 1000 mg/L and a Cu concentration of 1 wt%. It was noticed that the mineralisation efficiency of the processes is low compared to the acetamiprid removal efficiency; this may be due to stable intermediate compounds formed during degradation. Kinetic studies were conducted for all the treatment processes, and it was found that all processes follow pseudo-first-order kinetics. Kinetic constants were determined from the experimental data for all the processes and half-lives were calculated. The rate of reaction was in the order photo-Fenton > UV/H2O2 > Fenton > TiO2 photocatalysis > UV. The operating cost was calculated for the processes, and it was found that photo-Fenton removes acetamiprid at the lowest operating cost and in the least time. A kinetic model was developed for the photo-Fenton process using elementary reaction data and mass balance equations for the species involved in the process. The variation of acetamiprid concentration with time for different H2O2 and Fe2+ concentrations at pH 3 can be predicted using this model. The model was validated by comparing the simulated concentration profiles with those obtained from experiments. This study established the viability of the selected AOPs for the removal of acetamiprid from wastewater. Of the AOPs studied, photo-Fenton gives the highest removal efficiency at the lowest operating cost within the shortest time.
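The pseudo-first-order treatment mentioned above reduces to fitting C(t) = C0·exp(-kt) and reporting t1/2 = ln 2 / k; a minimal sketch with hypothetical time-concentration data follows.

```python
# Hedged sketch: pseudo-first-order rate constant and half-life from decay data.
import numpy as np

t = np.array([0, 5, 10, 20, 30, 45, 60])          # min (hypothetical)
c = np.array([50, 38, 29, 17, 10, 4.5, 2.1])      # mg/L acetamiprid remaining (hypothetical)

# ln(C0/C) versus t is linear for pseudo-first-order kinetics; the slope is k.
k, intercept = np.polyfit(t, np.log(c[0] / c), 1)
half_life = np.log(2) / k

print(f"k = {k:.3f} 1/min, t1/2 = {half_life:.1f} min")
```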
Abstract:
The statistical analysis of compositional data is commonly used in geological studies. As is well known, compositions should be treated using logratios of parts, which are difficult to use correctly in standard statistical packages. In this paper we describe the new features of our freeware package, named CoDaPack, which implements most of the basic statistical methods suitable for compositional data. An example using real data is presented to illustrate the use of the package.
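As background to the logratio approach the package implements, a minimal sketch of the centred logratio (clr) transform is given below; the composition values are illustrative and the snippet is not part of CoDaPack itself.

```python
# Hedged sketch: centred logratio (clr) transform of a composition matrix.
import numpy as np

def clr(parts):
    """Log of each part divided by the geometric mean of its row."""
    parts = np.asarray(parts, dtype=float)
    gmean = np.exp(np.mean(np.log(parts), axis=1, keepdims=True))
    return np.log(parts / gmean)

composition = np.array([[60.0, 30.0, 10.0],       # e.g. sand/silt/clay percentages (illustrative)
                        [20.0, 50.0, 30.0]])
print(clr(composition))                           # each row sums to zero; standard statistics now apply
```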
Abstract:
The statistical analysis of compositional data should be carried out using logratios of parts, which are difficult to use correctly in standard statistical packages. For this reason a freeware package, named CoDaPack, was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with a minimum knowledge of computers, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D. These outputs can be zoomed and, in 3D, rotated. A customization menu is included and outputs can be saved in JPEG format. This version also includes interactive help, and all dialog windows have been improved in order to facilitate their use. To use CoDaPack one has to open Excel© and enter the data in a standard spreadsheet. These should be organized as a matrix where Excel© rows correspond to observations and columns to parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results, new variables and descriptive statistics, and both appear on the same sheet. Graphical output appears in independent windows. In the present version there are 8 menus, with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input the variables and further parameters needed, as well as where to put the results. The web site http://ima.udg.es/CoDaPack contains this freeware package; only Microsoft Excel© under Microsoft Windows© is required to run the software. Key words: Compositional Data Analysis, Software
Abstract:
In an earlier investigation (Burger et al., 2000) five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied by applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used with the cosine-theta coefficient as the similarity measure. During the last decade considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components, and visualizing the factor scores in a spatial context: the compositional factors are plotted versus the depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep sea sediments
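To indicate what the ilr step looks like in practice, here is a minimal sketch using pivot (sequential binary partition) coordinates; the three-part compositions and depths are hypothetical, not the core data.

```python
# Hedged sketch: isometric logratio (ilr) coordinates inspected against core depth.
import numpy as np

def ilr(parts):
    """Pivot-coordinate ilr: balance of the first i parts against part i+1."""
    parts = np.asarray(parts, dtype=float)
    logp = np.log(parts)
    d = parts.shape[1]
    coords = []
    for i in range(1, d):
        scale = np.sqrt(i / (i + 1.0))
        coords.append(scale * (logp[:, :i].mean(axis=1) - logp[:, i]))
    return np.column_stack(coords)

depth = np.array([0.1, 0.5, 1.0, 2.0])                                   # m below sea floor (hypothetical)
comp = np.array([[40, 35, 25], [45, 30, 25], [55, 25, 20], [60, 25, 15]], dtype=float)
print(ilr(comp))   # ilr scores versus depth can be plotted to trace sedimentary sources
```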
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the results obtained is assessed by means of several indices.
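A minimal sketch of the smoothing and dissimilarity steps is given below, on an ordinary (non-compositional) simulated trajectory; in the compositional setting the curve would first be mapped to logratio coordinates.

```python
# Hedged sketch: spline smoothing of a noisy trajectory and an L2 dissimilarity.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)                                   # domain of the trajectory
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)      # smooth signal plus measurement noise

spline = UnivariateSpline(x, y, s=1.0)                      # s controls the amount of smoothing
smooth = spline(x)

def l2_distance(f, g, x):
    """Distance accounting for shape and level; subtract each curve's mean for shape only."""
    return np.sqrt(np.trapz((f - g) ** 2, x))

print(l2_distance(smooth, np.sin(2 * np.pi * x), x))        # smoothed curve vs. the true signal
```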
Abstract:
In this article we compare regression models obtained to predict PhD students' academic performance in the universities of Girona (Spain) and Slovenia. The explanatory variables are characteristics of the PhD student's research group understood as an egocentered social network, background and attitudinal characteristics of the PhD students, and some characteristics of the supervisors. Academic performance was measured by the weighted number of publications. Two web questionnaires were designed, one for PhD students and one for their supervisors and other research group members. Most of the variables were easily comparable across universities thanks to a careful translation procedure and pre-tests. When direct comparison was not possible we created comparable indicators. We used a regression model in which the country was introduced as a dummy-coded variable including all possible interaction effects. The optimal transformations of the main and interaction variables are discussed. Some differences between the Slovenian and Girona universities emerge. Some variables, such as the supervisor's performance and the student's motivation for autonomy prior to starting the PhD, have the same positive effect on the PhD student's performance in both countries. On the other hand, variables such as too close supervision by the supervisor and having children have a negative influence in both countries. However, we find differences between countries when we observe the motivation for research prior to starting the PhD, which increases performance in Slovenia but not in Girona. As regards network variables, the frequency of supervisor advice increases performance in Slovenia and decreases it in Girona. The negative effect in Girona could be explained by the fact that additional contacts of the PhD student with his/her supervisor might indicate a higher workload in addition to, or instead of, better advice about the dissertation. The number of the student's external advice relationships and the mean contact intensity of social support are not significant in Girona, but they have a negative effect in Slovenia. We might explain the negative effect of external advice relationships in Slovenia by saying that a lot of external advice may actually result from a lack of the more relevant internal advice.
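The model described above amounts to an OLS regression with a country dummy interacting with every predictor; a minimal sketch follows, in which the data file and variable names are illustrative, not the survey variables themselves.

```python
# Hedged sketch: regression with a dummy-coded country variable and all interactions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("phd_survey.csv")   # hypothetical file, one row per PhD student
# 'country' takes the values "Girona" or "Slovenia"; 'publications' is the weighted count.
model = smf.ols(
    "publications ~ C(country) * (supervisor_performance + motivation_autonomy"
    " + close_supervision + has_children + supervisor_advice_freq)",
    data=df,
).fit()
print(model.summary())   # significant interaction terms flag effects that differ between countries
```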
Abstract:
Objectives: Mediastinitis occurs in up to 4% of patients undergoing myocardial revascularization, with a reported in-hospital mortality of 14 to 47%, generating increased costs of care and a deterioration in the patient's quality of life and long-term survival; its aetiology is multifactorial. The objective of this study was to determine which of the patient's clinical antecedents and which factors related to the surgical procedure are associated with the appearance of mediastinitis. Methods: Case-control design nested in a historical cohort of patients undergoing myocardial revascularization in the period from January 2005 to July 2011. Patients with mediastinitis were compared with a control group without mediastinitis taken from the same risk group in a 1:4 ratio and matched by date of surgery. The diagnosis of mediastinitis was made using clinical and laboratory criteria and surgical findings. Results: Thirty cases were identified in that period. The factors associated with the appearance of the event were: diabetes mellitus, OR 2.3 (1.1-4.9); use of extracorporeal circulation, OR 2.4 (1.1-5.5); perfusion time, OR 1.1 (1.1-1.3); and age over 70 years, OR 1.1 (1.2-1.4). Conclusions: Mediastinitis remains a low-prevalence complication with devastating consequences. The clinical and economic impact of this complication should compel surgical teams to create prevention strategies based on knowledge of the risk factors of their population.
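For reference, odds ratios of the kind reported above are obtained from a 2x2 exposure-by-outcome table; a minimal sketch with hypothetical counts follows.

```python
# Hedged sketch: odds ratio and Woolf 95% confidence interval from a 2x2 table.
import numpy as np
from scipy.stats import norm

a, b = 12, 18    # exposed: cases, controls (hypothetical counts)
c, d = 18, 102   # unexposed: cases, controls (hypothetical counts)

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # standard error of ln(OR)
z = norm.ppf(0.975)
ci = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci[0]:.2f}-{ci[1]:.2f})")
```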
Abstract:
Introduction: Reduced flow in the coronary vessels in the absence of occlusion is known as the no-reflow phenomenon. It is observed after reperfusion and its frequency ranges between 5% and 50% depending on the population and the diagnostic criteria. It carries a poor prognosis, increases the risk of death in the first 30 days after angioplasty (RR 2.1, p 0.038), and is associated with heart failure and arrhythmias; identifying its associated factors would therefore allow preventive therapies to be implemented. Methodology: Case-control study, matched by the physician who assessed the event to guarantee that there were no inter-observer variations, with a 1:4 ratio (18:72), carried out to identify factors associated with the presence of no-reflow in patients undergoing angioplasty between November 2010 and May 2014 at the Clínica San Rafael in Bogotá, D.C. Results: The frequency of no-reflow was 2.89%. ST-elevation acute myocardial infarction (STEMI) was the only variable that showed a statistically significant association with this event, p value 0.002, OR 8.7, 95% CI (2.0-36.7). Discussion: The no-reflow phenomenon in this population behaved in a manner similar to that described in the literature, with STEMI being a strongly associated factor.
Abstract:
Since the adoption of a comprehensive meaning of health by the World Health Organization (WHO), where it is defined as "...a state of complete physical, mental and social well-being, and not merely the absence of disease... 1948", it has been fundamental to understand the collective and individual motivations involved as determinants of the process of well-being and disease. These turn the state of health into a complex symphony of dynamic variables that change from place to place and from individual to individual. From there, the environment, in all its aspects, has shown great importance, imprinting patterns on shared and individual behaviours that are ultimately reflected in the individual.
Abstract:
ABSTRACT Introduction: The role of new echocardiographic techniques in the diagnosis of acute myocardial infarction is under development, and the assessment of left ventricular mechanics could suggest the presence of haemodynamically significant coronary artery disease. Objectives: To determine whether, in patients with acute myocardial infarction, measurement of global and regional longitudinal strain can predict the presence of significant coronary artery disease. Methods: This is a diagnostic test study in which the operating characteristics of left ventricular mechanics for the detection of significant coronary artery disease were evaluated against cardiac catheterization, considered the gold standard. Fifty-four patients with acute myocardial infarction taken to cardiac catheterization were analysed; all underwent transthoracic echocardiography with measurement of global and regional longitudinal strain. Results: Of the 54 patients analysed, 83% had significant coronary artery disease. A global longitudinal strain < -17.5 had a sensitivity of 85% and a specificity of 78% for predicting the presence of coronary artery disease; for the left anterior descending artery a regional longitudinal strain < -17.4 had a sensitivity of 82% and a specificity of 44%, for the circumflex artery a sensitivity of 87% and a specificity of 37%, and for the right coronary artery a sensitivity of 73% and a specificity of 32%. Conclusions: Echocardiography with ventricular mechanics in patients with acute myocardial infarction is useful for predicting the presence of haemodynamically significant coronary artery disease.
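The operating characteristics quoted above follow from the usual 2x2 comparison against the gold standard; a minimal sketch with a hypothetical split of patients is shown below.

```python
# Hedged sketch: sensitivity and specificity of a strain threshold vs. catheterization.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split: 45 patients with significant disease, 9 without.
sens, spec = sensitivity_specificity(tp=38, fn=7, tn=7, fp=2)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```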
Abstract:
The purpose of the present work is the development of an application that makes it possible to carry out on-line multivariate statistical control of an SBR plant. This tool must allow a complete multivariate statistical analysis of the batch in progress, of the last completed batch and of all the other batches processed at the plant. The application has to be implemented in the LabVIEW environment. The choice of this software is determined by the update of the plant monitoring module, which is being developed in this same environment.
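Although the application itself is to be built in LabVIEW, the core of on-line multivariate statistical control can be sketched in a few lines; the sketch below uses the Hotelling T² statistic on simulated data and is only an assumption about the kind of monitoring intended.

```python
# Hedged sketch: Hotelling T^2 monitoring statistic for a new multivariate observation.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(size=(200, 4))                       # in-control history, 4 process variables
mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def hotelling_t2(x):
    d = x - mean
    return float(d @ cov_inv @ d)

new_obs = rng.normal(size=4) + np.array([0.0, 0.0, 2.5, 0.0])   # a disturbed measurement
print("T2 =", round(hotelling_t2(new_obs), 2))                   # compare against a control limit
```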
Abstract:
One of the possible courses of action for the management of municipal solid waste is energy valorization, that is, incineration with energy recovery. However, it is very important to control the incineration process adequately to avoid, as far as possible, the release of pollutants into the atmosphere that could cause industrial pollution problems. Ensuring that both the incineration process and the treatment of the gases are carried out under optimum conditions presupposes a good knowledge of the dependencies between the process variables. Adequate methods are needed for measuring the most important variables, and the measured values must be processed with suitable models to transform them into control quantities. A classical control model seems unpromising in this case because of the complexity of the processes, the lack of a quantitative description, and the need to perform the calculations in real time. This can only be achieved with the help of modern data processing techniques and computational methods, such as simulation techniques, mathematical models, knowledge-based systems and intelligent interfaces. In [Ono, 1989] a control system based on fuzzy logic applied to the field of municipal waste incineration is described. At the FZK research centre in Karlsruhe, applications combining fuzzy logic with neural networks [Jaeschke, Keller, 1994] are being developed for the control of the TAMARA pilot waste incineration plant. This thesis proposes the application of a knowledge acquisition method for the control of complex systems inspired by human behaviour. When we face an unknown situation, at first we do not know how to act, except by extrapolating from previous experiences that may be useful. By applying trial-and-error procedures, reinforcement of hypotheses, etc., we acquire and refine knowledge and build a mental model. We can design an analogous method, which can be implemented in a computer system, using Artificial Intelligence techniques. Thus, in a complex process we often have a set of process data that, a priori, does not give us information structured enough to be useful. Knowledge acquisition goes through a series of stages: - We make a first selection of the variables we are interested in knowing. - State of the system. First, we can apply classification techniques (unsupervised learning) to group the data and obtain a representation of the state of the plant. It is possible to establish a classification, but normally almost all the data fall into a single class, which corresponds to normal operation. Having done this, and to refine the knowledge, we use classical statistical methods to look for correlations between variables (principal component analysis) and thereby simplify and reduce the list of variables. - Signal analysis. To analyse and classify the signals (for example the furnace temperature) it is possible to use methods better able to describe the non-linear behaviour of the system, such as neural networks. A further step consists of establishing causal relationships between the variables, for which analytical models are helpful. - As the final result of the process, we move on to the design of the knowledge-based system.
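As an illustration of the first two stages listed above (grouping of operating states, then variable reduction through correlations), here is a minimal sketch; the log file and variable names are hypothetical, not the plant data.

```python
# Hedged sketch: unsupervised grouping of plant states and PCA-based variable reduction.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

data = pd.read_csv("incinerator_log.csv")        # hypothetical columns: furnace_temp, o2, co, steam_flow
X = StandardScaler().fit_transform(data.values)

# Stage 1: cluster operating states; most samples usually fall in one "normal operation" class.
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Stage 2: principal components reveal correlated variables and how far the list can be shortened.
pca = PCA().fit(X)
print(pca.explained_variance_ratio_.cumsum().round(2))
```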
The main objective is to apply the method to the specific case of controlling a municipal solid waste treatment plant with energy recovery. First, Chapter 2, Municipal solid waste, addresses the overall problem of waste management, giving an overview of the different existing alternatives and of the current national and international situation. The problems of waste incineration are analysed in greater detail, paying particular attention to those characteristics of the waste that are most important for the combustion process. Chapter 3, Description of the process, gives a general description of the incineration process and of the various elements of an incineration plant: from the reception and storage of the waste, through the different types of furnaces and the requirements of the codes of good combustion practice, to the combustion air system and the flue-gas system. The various flue-gas cleaning systems are also presented, and finally the ash and slag removal system. Chapter 4, The municipal solid waste treatment plant of Girona, describes the main systems of the Girona incineration plant: the waste feed, the type of furnace, the energy recovery system and the flue-gas cleaning system. It also describes the control system, the operation, the plant's operating data, the instrumentation and the variables of interest for controlling the combustion process. Chapter 5, Techniques used, provides an overview of knowledge-based systems and expert systems. The different techniques used are explained: neural networks, classification systems, qualitative models and expert systems, illustrated with some application examples. With respect to knowledge-based systems, the conditions for their applicability and the forms of knowledge representation are analysed first. The different forms of reasoning are then described, namely neural networks, expert systems and fuzzy logic, and a comparison is made between them. An application of neural networks to the analysis of temperature time series is presented. The problem of analysing operating data using statistical techniques and classification techniques is also addressed. Another section is devoted to the different types of models, including a discussion of qualitative models. The computer-aided design system for the design of supervision systems, CASSD, used in this thesis is described, together with the analysis tools for obtaining qualitative information on the behaviour of the process: Abstractors and ALCMEN. An example of the application of these techniques to find the relationships between the temperature and the operator's actions is included. Finally, the main characteristics of expert systems in general, and of the CEES 2.0 expert system that also forms part of the CASSD system used, are analysed. Chapter 6, Results, shows the results obtained by applying the different techniques: neural networks, classification, the development of the combustion process model, and rule generation.
Within the data analysis section, a neural network is used to classify a temperature signal. The use of the LINNEO+ method for classifying the plant's operating states is also described. In the section devoted to modelling, a combustion model is developed that serves as a basis for analysing the behaviour of the furnace under steady-state and dynamic conditions. A parameter, the flame surface, related to the extent of the fire on the grate, is defined. The dynamic response of the incineration process is analysed by means of a linearized model. Next come the definition of qualitative relationships between the variables, which are used in building a qualitative model, and the development of a new qualitative model based on the analytical dynamic model. Finally, the development of the knowledge base of the expert system, through rule generation, is addressed. Chapter 7, Control system of an incineration plant, analyses the objectives of a control system for an incineration plant, and its design and implementation. The basic objectives of the combustion control system, its configuration and its implementation in Matlab/Simulink using the various tools developed in the previous chapter are described. Lastly, to show how the different methods developed in this thesis can be applied, an expert system is built to keep the furnace temperature constant by acting on the waste feed. Finally, the Conclusions chapter presents the conclusions and results of this thesis.
Abstract:
The North Atlantic Oscillation (NAO) is an important large-scale atmospheric circulation pattern that influences the climate of European countries. This study evaluated the impact of the NAO on air quality in the Porto Metropolitan Area (PMA), Portugal, for the period 2002-2006. NAO, air pollutant and meteorological data were statistically analyzed. All data were obtained from the PMA Weather Station, the PMA Air Quality Stations and NOAA analyses. Two statistical methods were applied at different time scales: principal component analysis and the correlation coefficient. At the annual time scale, multivariate analysis (PCA, principal component analysis) was applied in order to identify positive and significant associations between air pollutants such as PM10, PM2.5, CO, NO and NO2 and the NAO. The correlation coefficient was also applied to the same data at the seasonal time scale. The PCA results show a generally negative and significant association between total precipitation and the NAO in Factors 1 and 2 (explaining around 70% of the variance) in the years 2002, 2004 and 2005. During the same years, some air pollutants (such as PM10, PM2.5, SO2, NOx and CO) also show a positive association with the NAO. O3 likewise shows a positive association with the NAO during 2002 and 2004, in the 2nd Factor, explaining 30% of the variance. The seasonal analysis using the correlation coefficient found significant correlations of PM10 (0.72, p<0.05, in 2002), PM2.5 (0.74, p<0.05, in 2004) and SO2 (0.78, p<0.01, in 2002) with the NAO during the March-December (non-winter) period. Significant associations between air pollutants and the NAO were also verified in the winter period (December to April), mainly with ozone (2005, r=-0.55, p<0.01). Since human health and hospital morbidity may be affected by air pollution, the results suggest that NAO forecasting can be an important tool for prevention in the Iberian Peninsula, and especially Portugal.
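The seasonal correlation step reduces to a Pearson correlation between a monthly NAO index and a monthly pollutant series; a minimal sketch on simulated values (not the PMA measurements) follows.

```python
# Hedged sketch: Pearson correlation between a NAO index and a pollutant series.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
nao = rng.normal(size=10)                        # monthly NAO index, March-December (simulated)
pm10 = 30 + 5 * nao + rng.normal(0, 2, 10)       # monthly mean PM10, positively coupled by construction

r, p = pearsonr(nao, pm10)
print(f"r = {r:.2f}, p = {p:.3f}")
```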