823 results for convolutional neural network

Relevance: 80.00%

Abstract:

Introduction: Exposure to high levels of coal dust in underground mines is associated with pulmonary disease. Objective: To determine the prevalence of pneumoconiosis, the industrial hygiene and safety measures in place, and their relationship with environmental coal dust levels among underground coal mine workers in Cundinamarca. Materials and methods: Cross-sectional study of 215 workers selected by stratified probability sampling with proportional allocation. Environmental monitoring, chest radiographs and surveys covering sociodemographic and occupational variables were carried out. Measures of central tendency and dispersion were used, together with Pearson's chi-square test of independence or exact tests, to establish associations. Results: 99.5% of the population was male, 36.7% were between 41 and 50 years old, and the mean length of employment was 21.70 ± 9.99 years. The prevalence of pneumoconiosis was 42.3%, and the median bituminous coal dust concentration was 2.329670 mg/m3. The coal dust risk index showed significant differences between its low (p=0.0001) and medium (p=0.0186) categories with respect to the prevalence of pneumoconiosis. 84.2% reported not wearing a dust mask. No differences were found between coal dust levels (p=0.194) and the prevalence of pneumoconiosis. Conclusions: A pneumoconiosis prevalence of 42.3% was found in Cundinamarca. Effective industrial hygiene and safety measures are needed to control the risk to which coal miners are exposed through the inhalation of coal dust.
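To illustrate the association analysis named above, here is a minimal sketch of Pearson's chi-square test of independence; the contingency counts are hypothetical, not the study's data.

```python
# Minimal sketch of Pearson's chi-square test of independence, as used in the
# study to relate dust-risk categories to pneumoconiosis. Counts are made up.
from scipy.stats import chi2_contingency

# Rows: dust risk index (low, medium); columns: pneumoconiosis (yes, no).
table = [[12, 58],
         [35, 40]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```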

Relevance: 80.00%

Abstract:

One option for the management of municipal solid waste is energy recovery, that is, incineration with heat recovery. It is very important, however, to control the incineration process properly, so as to avoid as far as possible the release of pollutants into the atmosphere that could cause industrial pollution problems. Ensuring that both the incineration process and the flue gas treatment run under optimal conditions presupposes a good knowledge of the dependencies between the process variables. Suitable methods are needed to measure the most important variables, and the measured values must be processed with appropriate models to turn them into control quantities. A classical control model looks unpromising here because of the complexity of the processes, the lack of a quantitative description, and the need to perform the calculations in real time. This can only be achieved with the help of modern data-processing techniques and computational methods, such as simulation techniques, mathematical models, knowledge-based systems and intelligent interfaces. [Ono, 1989] describes a fuzzy-logic control system applied to municipal waste incineration. At the FZK research centre in Karlsruhe, applications combining fuzzy logic with neural networks [Jaeschke, Keller, 1994] are being developed for the control of the TAMARA waste incineration pilot plant. This thesis proposes a knowledge-acquisition method for the control of complex systems inspired by human behaviour. When we face an unknown situation, at first we do not know how to act, except by extrapolating from previous experiences that may be useful. By applying trial and error, reinforcement of hypotheses, and so on, we acquire and refine knowledge and build a mental model. An analogous method, implementable in a computer system, can be designed using Artificial Intelligence techniques. In a complex process we often have a set of process data that, a priori, is not structured enough to be useful. Knowledge acquisition then proceeds through a series of stages. First, an initial selection is made of the variables of interest. Second, the state of the system: classification techniques (unsupervised learning) can be applied to group the data and obtain a representation of the state of the plant; a classification can usually be established, but almost all the data tend to fall into a single class corresponding to normal operation. To refine this knowledge, classical statistical methods are then used to look for correlations between variables (principal component analysis) in order to simplify and shorten the list of variables. Third, signal analysis: to analyse and classify signals such as the furnace temperature, methods better suited to the non-linear behaviour of the system, such as neural networks, can be used; a further step is to establish causal relations between the variables, for which analytical models are helpful. The final result of the process is the design of the knowledge-based system.
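As a minimal sketch of the first two stages described above (grouping plant states by unsupervised clustering, then principal component analysis to shorten the variable list), assuming synthetic data and an arbitrary cluster count:

```python
# Sketch of the knowledge-acquisition pipeline's first stages: cluster the
# operating states, then use PCA to find correlated variables. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))            # 500 snapshots of 12 process variables

# Most samples typically fall into one "normal operation" cluster.
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

pca = PCA(n_components=0.95)              # keep 95% of the variance
X_reduced = pca.fit_transform(X)
print(np.bincount(states), X_reduced.shape)
```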
The main objective is to apply the method to the specific case of controlling a municipal solid waste treatment plant with energy recovery. First, Chapter 2, Municipal solid waste, deals with the global problem of waste management, giving an overview of the existing alternatives and of the current national and international situation. The problem of waste incineration is analysed in more detail, paying special attention to those characteristics of the waste that matter most for the combustion process. Chapter 3, Description of the process, gives a general description of the incineration process and of the elements of an incineration plant: from the reception and storage of the waste, through the different types of furnaces and the requirements of good combustion practice codes, to the combustion air system and the flue gas system. The flue gas cleaning systems are also presented, and finally the ash and slag removal system. Chapter 4, The Girona municipal solid waste treatment plant, describes the main systems of the Girona incineration plant: waste feeding, furnace type, energy recovery system, and flue gas cleaning system. It also describes the control system, the operation, the plant performance data, the instrumentation and the variables of interest for combustion control. Chapter 5, Techniques used, provides an overview of knowledge-based systems and expert systems. The techniques used are explained: neural networks, classification systems, qualitative models and expert systems, illustrated with application examples. With regard to knowledge-based systems, the conditions for their applicability and the forms of knowledge representation are analysed first. The different forms of reasoning are then described: neural networks, expert systems and fuzzy logic, and a comparison is made between them. An application of neural networks to the analysis of temperature time series is presented. The analysis of operating data using statistical techniques and classification techniques is also addressed. Another section is devoted to the different types of models, including a discussion of qualitative models. The computer-aided design system for supervision systems, CASSD, used in this thesis is described, together with the analysis tools for obtaining qualitative information about the behaviour of the process: Abstractors and ALCMEN. An example of applying these techniques to find the relations between temperature and operator actions is included. Finally, the main characteristics of expert systems in general, and of the expert system CEES 2.0, which is also part of the CASSD system used, are analysed. Chapter 6, Results, presents the results obtained by applying the different techniques: neural networks, classification, the development of the combustion process model, and rule generation.
Within the data analysis section, a neural network is used to classify a temperature signal. The use of the LINNEO+ method for classifying the plant's operating states is also described. In the modelling section, a combustion model is developed which serves as a basis for analysing the behaviour of the furnace in steady-state and dynamic regimes. A parameter, the flame surface, related to the extent of the fire on the grate, is defined. Using a linearized model, the dynamic response of the incineration process is analysed. Qualitative relations between the variables are then defined and used to build a qualitative model. Next, a new qualitative model is developed, based on the analytical dynamic model. Finally, the development of the knowledge base of the expert system, through rule generation, is addressed. Chapter 7, Control system of an incineration plant, analyses the objectives of a control system for an incineration plant, and its design and implementation. The basic objectives of the combustion control system, its configuration and its implementation in Matlab/Simulink using the tools developed in the previous chapter are described. Finally, to show how the methods developed in this thesis can be applied, an expert system is built to keep the furnace temperature constant by acting on the waste feed. The final chapter, Conclusions, presents the conclusions and results of this thesis.

Relevance: 80.00%

Abstract:

During normal operation, safety control for preserving the structural integrity of dams is an activity that hinges essentially on inspections of the structure and on the data from periodic monitoring of the works, supported by models of the dam's behaviour. Accordingly, the analysis of emergency situations generally requires the attention of a dam safety specialist who, given the available monitoring results and the application of models of the structure's behaviour, can identify the alert level appropriate to the situation at the dam. This traditional safety-control approach is effective, but it has the disadvantage that a significant period of time may elapse between the identification of an anomalous process and the definition of its severity. The use of new decision-support technologies and emergency planning can help mitigate this disadvantage. The present work consists of the development of a model for assessing the behaviour of a dam by applying multilayer perceptron neural networks to the monitoring results of an embankment dam, in order to identify behavioural anomalies and quantify the corresponding alert level. The thesis is essentially divided into two parts. The first part addresses aspects related to embankment dams, namely defining the most common structural solutions and identifying the main types of deterioration that can arise in these structures. Issues related to safety control and emergency planning in embankment dams are also addressed. The second part of the work deals with the neural network model developed in the Java programming language: the ALBATROZ model. This model defines the alert level as a function of the reservoir water level, the pressure recorded at four piezometers located in the dam body and foundation, and the flow seeping through the dam and its foundation. In this part, the work draws on the monitoring results of the Valtorno/Mourão dam and uses the results of a finite element model (developed at the Laboratório Nacional de Engenharia Civil as part of the dam monitoring plan) to simulate the dam's behaviour and provide data for training the neural network. This work concluded that the development of neural networks relating the values recorded for some of the quantities monitored by the observation system to the alert level associated with an anomalous situation at the dam can contribute to the rapid identification of emergency situations and allow timely action in their resolution. This makes neural networks an important element in dam emergency planning and, equally, an instrument supporting dam safety control.
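As a rough Python sketch of the mapping ALBATROZ implements (the real model is written in Java and trained on finite element results; the data and alert rule below are synthetic stand-ins):

```python
# Toy version of the ALBATROZ mapping: reservoir level, four piezometer
# pressures and seepage flow in, alert level (0-3) out. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Columns: water level, piezometers P1..P4, seepage flow.
X = rng.uniform(0.0, 1.0, size=(1000, 6))
# Hypothetical rule: higher combined readings mean a higher alert level.
y = np.digitize(X.sum(axis=1), bins=[2.4, 3.0, 3.6])

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X[:5]))
```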

Relevance: 80.00%

Abstract:

Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling of cyanobacteria in freshwaters is an important tool for understanding their population dynamics and predicting bloom occurrence in lakes and rivers. In this paper, existing key models of cyanobacteria are reviewed, evaluated and classified. Two major groups emerge: deterministic mathematical models and artificial neural network models. Mathematical models can be further subcategorized into those concerned with impounded water bodies and those concerned with rivers. Most existing models focus on a single aspect such as growth or transport mechanisms, but a few models couple both.

Relevance: 80.00%

Abstract:

Chemical and meteorological parameters measured on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe 146 Atmospheric Research Aircraft during the African Monsoon Multidisciplinary Analysis (AMMA) campaign are presented to show the impact of NOx emissions from recently wetted soils in West Africa. NO emissions from soils have previously been observed in many geographical areas with different types of soil/vegetation cover during small-scale studies, and have been inferred at large scales from satellite measurements of NOx. This study is the first dedicated to showing the emissions of NOx at an intermediate scale between local surface sites and continental satellite measurements. The measurements reveal pronounced mesoscale variations in NOx concentrations closely linked to spatial patterns of antecedent rainfall. Fluxes required to maintain the NOx concentrations observed by the BAe-146 in a number of case studies, and for a range of assumed OH concentrations (1×10^6 to 1×10^7 molecules cm^-3), are calculated to be in the range 8.4 to 36.1 ng N m^-2 s^-1. These values are comparable to the range of fluxes, 0.5 to 28 ng N m^-2 s^-1, reported from small-scale field studies in a variety of non-nutrient-rich tropical and sub-tropical locations in the review of Davidson and Kingerlee (1997). The fluxes calculated in the present study have been scaled up to cover the area of the Sahel bounded by 10° to 20° N and 10° E to 20° W, giving an estimated emission of 0.03 to 0.30 Tg N from this area for July and August 2006. The observed chemical data also suggest that the NOx emitted from soils takes part in ozone formation, as ozone concentrations exhibit similar fine-scale structure to the NOx, with enhancements over the wet soils. Such variability cannot be explained on the basis of transport from other areas. Delon et al. (2008) is a companion paper which models the impact of soil NOx emissions on the NOx and ozone concentrations over West Africa during AMMA. It employs an artificial neural network to define the emissions of NOx from soils, integrated into a coupled chemistry-dynamics model. The results are compared to the observed data presented in this paper. Here we compare fluxes deduced from the observed data with the model-derived values from Delon et al. (2008).
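The scale-up step is essentially flux × area × time; the sketch below uses an order-of-magnitude box area and the two endpoint fluxes, so it will not exactly reproduce the paper's 0.03 to 0.30 Tg N, which rests on assumptions (such as the fraction of soil actively emitting) not given here.

```python
# Back-of-envelope integration of a per-area NO flux over a Sahel box for
# July-August 2006. Area and duration are rough assumptions, not the paper's.
SECONDS = 62 * 24 * 3600        # July + August
AREA_M2 = 3.5e12                # ~10 deg x 30 deg box at ~15 N

for flux in (8.4, 36.1):        # ng N m^-2 s^-1, the study's flux range
    grams = flux * 1e-9 * AREA_M2 * SECONDS
    print(f"{flux:5.1f} ng N m-2 s-1 -> {grams / 1e12:.2f} Tg N")
```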

Relevance: 80.00%

Abstract:

Nitrogen oxide biogenic emissions from soils are driven by soil and environmental parameters. The relationship between these parameters and NO fluxes is highly non-linear. A new algorithm, based on a neural network calculation, is used to reproduce the biogenic NO emissions linked to precipitation in the Sahel on 6 August 2006 during the AMMA campaign. This algorithm has been coupled into the surface scheme of a coupled chemistry-dynamics model (MesoNH Chemistry) to estimate the impact of the NO emissions on NOx and O3 formation in the lower troposphere for this particular episode. Four different simulations on the same domain and over the same period are compared: one with anthropogenic emissions only, one with soil NO emissions from a static inventory at low time and space resolution, one with NO emissions from the neural network, and one with NO from the neural network plus lightning NOx. The influence of NOx from lightning is limited to the upper troposphere. The NO emission from soils calculated with the neural network responds to changes in soil moisture, giving enhanced emissions over the wetted soil, as observed by aircraft measurements after the passage of a convective system. The subsequent enhancement of NOx and ozone is limited to the lowest layers of the atmosphere in the model, whereas measurements show higher concentrations above 1000 m. The neural network algorithm, applied in the Sahel region for one particular day of the wet season, allows an immediate response of fluxes to environmental parameters, unlike static emission inventories. Stewart et al. (2008) is a companion paper which looks at NOx and ozone concentrations in the boundary layer as measured on a research aircraft, examines how they vary with respect to soil moisture, as indicated by surface temperature anomalies, and deduces NOx fluxes. In the current paper the model-derived results are compared to the observations and calculated fluxes presented by Stewart et al. (2008).
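The advantage over a static inventory can be caricatured with a toy parameterization in which the flux pulses after a rain event and decays as the soil dries; the pulse shape and constants are hypothetical, not the neural network scheme itself.

```python
# Toy contrast between a static inventory flux and a moisture-responsive one.
# Constants and pulse shape are invented for illustration only.
import numpy as np

hours = np.arange(72.0)                        # hours since a rain event
static_flux = np.full(72, 5.0)                 # fixed inventory value

soil_moisture = np.exp(-hours / 24.0)          # soil drying after the rain
pulse_flux = 5.0 + 30.0 * soil_moisture * np.exp(-hours / 12.0)

print(f"static peak {static_flux.max():.1f}, "
      f"responsive peak {pulse_flux.max():.1f} ng N m-2 s-1")
```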

Relevance: 80.00%

Abstract:

In the past decade the amount of data in the biological field has grown dramatically; new techniques for the analysis of biological data have been developed and new tools have been introduced. Several computational methods are based on unsupervised neural network algorithms that are widely used for multiple purposes, including clustering and visualization, e.g. the Self-Organizing Map (SOM). Unfortunately, even though this method is unsupervised, its performance in terms of result quality and learning speed depends strongly on the initialization of the neuron weights. In this paper we present a new initialization technique based on a fully connected undirected graph that captures relations among interesting features of the input data. Results of experimental tests, in which the proposed algorithm is compared to the original initialization techniques, show that our technique assures faster learning and better performance in terms of quantization error.
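The abstract does not spell out the graph-based scheme itself, so the sketch below only shows the general pattern it belongs to: non-random, data-driven seeding of the SOM weight grid, here from the two leading principal components.

```python
# Data-driven SOM weight initialization (a stand-in for the paper's
# graph-based technique): spread a 10x10 grid over the two leading PCs.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))                  # 300 samples, 5 features

X_c = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)

grid = np.linspace(-2, 2, 10)
gx, gy = np.meshgrid(grid, grid)
weights = gx[..., None] * Vt[0] + gy[..., None] * Vt[1] + X.mean(axis=0)
print(weights.shape)                           # (10, 10, 5) initial weight grid
```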

Relevance: 80.00%

Abstract:

Background: Selecting the highest-quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest-accuracy models from the lowest. In this paper, a number of the top-performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post-filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top-performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post-filters for re-ranking the few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
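A hedged sketch of the consensus idea behind ModFOLD, feeding several per-model MQAP scores into a small neural network that outputs a combined quality estimate; the scores, targets and network shape below are hypothetical, not ModFOLD's actual setup.

```python
# Toy consensus MQAP: combine three per-model scores with a small regressor.
# The "true" quality values are simulated for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
scores = rng.uniform(size=(500, 3))            # 3 MQAP scores per candidate model
weights_true = np.array([0.5, 0.3, 0.2])       # pretend ground-truth weighting
quality = scores @ weights_true + rng.normal(0, 0.05, 500)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=3)
net.fit(scores, quality)
print("top-ranked model:", int(np.argmax(net.predict(scores))))
```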

Relevance: 80.00%

Abstract:

When people monitor a visual stream of rapidly presented stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset: the attentional blink (AB). We provide an overview of recent neuroscientific studies devoted to analyzing the neural processes underlying the AB and their temporal dynamics. The available evidence points to an attentional network involving temporal, right-parietal and frontal cortex, and suggests that the components of this neural network interact by means of synchronization and stimulus-induced desynchronization in the beta frequency range. We set up a neurocognitive scenario describing how the AB might emerge and why it depends on the presence of masks and on the other event(s) in which the targets are embedded. The scenario supports the idea that the AB arises from "biased competition", with the top-down bias being generated by parietal-frontal interactions and the competition taking place between stimulus codes in temporal cortex.

Relevance: 80.00%

Abstract:

A technique is presented for locating and tracking objects in cluttered environments. Agents are randomly distributed across the image, and subsequently grouped around targets. Each agent uses a weightless neural network and a histogram intersection technique to score its location. The system has been used to locate and track a head in 320×240-resolution video at up to 15 fps.
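The histogram intersection score each agent uses fits in a few lines; the histograms below are random stand-ins for a target model and an image patch.

```python
# Histogram intersection: sum of bin-wise minima, normalised by the model
# histogram. Both histograms here are random placeholders.
import numpy as np

def intersection(patch_hist, model_hist):
    return np.minimum(patch_hist, model_hist).sum() / model_hist.sum()

rng = np.random.default_rng(4)
model = rng.random(32); model /= model.sum()   # target colour histogram
patch = rng.random(32); patch /= patch.sum()   # histogram at an agent's location
print(f"score = {intersection(patch, model):.3f}")
```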

Relevance: 80.00%

Abstract:

The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for data represented in multidimensional input spaces. In this paper, we describe the Fast Learning SOM (FLSOM), which adopts a learning algorithm that improves the performance of the standard SOM with respect to the convergence time of the training phase. We show that FLSOM also improves the quality of the map by providing better clustering quality and topology preservation of multidimensional input data. Several tests have been carried out on different multidimensional datasets, which demonstrate the better performance of the algorithm in comparison with the original SOM.
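Quantization error, the map-quality metric referred to above, is the mean distance from each input to the weight vector of its best-matching unit; a minimal sketch with placeholder weights:

```python
# Quantization error of a SOM: mean distance from each sample to its
# best-matching unit. Weights here are random placeholders, not a trained map.
import numpy as np

def quantization_error(X, weights):
    W = weights.reshape(-1, X.shape[1])              # flatten the neuron grid
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return d.min(axis=1).mean()                      # distance to nearest unit

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))
weights = rng.normal(size=(8, 8, 4))                 # 8x8 map, 4-dim inputs
print(f"QE = {quantization_error(X, weights):.3f}")
```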

Relevance: 80.00%

Abstract:

This work compares and contrasts results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input and 95.9% accuracy when the signals presented to its input were decomposed using db6 wavelets. The linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised and 95.5% with db6 wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients, obtained through an optimised discrete wavelet transform, considerably facilitates the task of classifying non-stationary time-variant signals. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network.
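A sketch of the signal-preparation step, a two-level discrete wavelet transform with a db6 mother wavelet keeping only the approximation band as features; the "beat" below is synthetic, and the paper's optimised wavelets achieve a stronger reduction than this toy does.

```python
# Two-level db6 wavelet decomposition of a synthetic beat; keeping only the
# level-2 approximation band yields a much shorter feature vector.
import numpy as np
import pywt

rng = np.random.default_rng(6)
beat = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.normal(size=256)

cA2, cD2, cD1 = pywt.wavedec(beat, "db6", level=2)
features = cA2                                   # discard the detail bands here
print(f"{beat.size} samples -> {features.size} coefficients")
```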

Relevance: 80.00%

Abstract:

In this study a minimum-variance neuro self-tuning proportional-integral-derivative (PID) controller is designed for complex multiple-input multiple-output (MIMO) dynamic systems. An approximation model is constructed, which consists of two functional blocks. The first block uses a linear submodel to approximate the dominant system dynamics around a selected number of operating points. The second block is used as an error agent, implemented by a neural network, to accommodate the inaccuracy possibly introduced by the linear submodel approximation, the various complexities/uncertainties, and the complicated coupling effects frequently exhibited in non-linear MIMO dynamic systems. With the proposed model structure, the controller design of a MIMO plant with n inputs and n outputs can, for example, be decomposed into n independent single-input single-output (SISO) subsystem designs. The effectiveness of the controller design procedure is initially verified through simulations of industrial examples.
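The two-block approximation model can be sketched as a linear fit for the dominant dynamics plus a neural "error agent" trained on the residual; the plant below is a synthetic placeholder, not the paper's system.

```python
# Two-block approximation: linear submodel + neural error agent on the residual.
# The "plant" is a made-up static nonlinearity standing in for real dynamics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
u = rng.uniform(-1, 1, size=(2000, 2))             # two plant inputs
y = 0.8 * u[:, 0] - 0.5 * u[:, 1] + 0.3 * np.sin(3 * u[:, 0] * u[:, 1])

linear = LinearRegression().fit(u, y)              # block 1: dominant dynamics
residual = y - linear.predict(u)
agent = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                     random_state=7).fit(u, residual)   # block 2: error agent

y_hat = linear.predict(u) + agent.predict(u)
print(f"rms error: {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")
```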

Relevance: 80.00%

Abstract:

New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches are based on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed on the basis of an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in a manner in which the system dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show marked improvement under the D-optimality design criterion, demonstrating that there is still considerable room for improvement in modelling via the popular RBF neural network.
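A much-simplified sketch of the forward-selection idea, greedily growing an RBF model one centre at a time; the orthogonalization and the A-/D-optimality terms of the paper's cost functions are omitted.

```python
# Greedy forward growth of an RBF model: repeatedly add the candidate centre
# whose basis function best matches the residual. Simplified; no orthogonal
# decomposition or experimental-design terms.
import numpy as np

rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

def rbf(X, centre, width=1.0):
    return np.exp(-np.sum((X - centre) ** 2, axis=1) / (2 * width ** 2))

residual = y.copy()
centres = []
for _ in range(6):
    scores = [abs(rbf(X, X[i]) @ residual) for i in range(len(X))]
    c = X[int(np.argmax(scores))]
    phi = rbf(X, c)
    residual -= (phi @ residual / (phi @ phi)) * phi   # project out this term
    centres.append(c)

print(f"residual rms after 6 centres: {np.sqrt(np.mean(residual ** 2)):.3f}")
```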

Relevance: 80.00%

Abstract:

The Synapsing Variable-Length Crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche formation techniques to be used with them. Unlike other variable-length crossover techniques, which treat genomes as rigid, inflexible arrays and select some or all of the crossover points at random, the SVLC algorithm treats genomes as flexible and chooses non-random crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogeneous genetic subsequences together. This is done in such a way that common parental sequences are automatically preserved in the offspring, with only the genetic differences being exchanged or removed, independent of the length of those differences. In a variable-length test problem the SVLC algorithm is shown to outperform current variable-length crossover techniques. The SVLC algorithm is also shown to work in a more realistic robot neural network controller evolution application.
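A hedged sketch of the underlying idea, preserving the parents' common subsequences and exchanging only the differing stretches, using difflib's sequence alignment as a stand-in for the paper's synapsing step:

```python
# SVLC-like crossover sketch: common ("synapsed") regions are kept in both
# offspring, differing regions are exchanged. difflib stands in for the
# paper's own alignment procedure.
import difflib

def svlc_like_crossover(a, b):
    child1, child2 = [], []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if op == "equal":                 # common subsequence: preserve
            child1 += a[i1:i2]; child2 += b[j1:j2]
        else:                             # genetic difference: exchange
            child1 += b[j1:j2]; child2 += a[i1:i2]
    return child1, child2

p1, p2 = list("AACGGTTAC"), list("AAGGTCCCTAC")
c1, c2 = svlc_like_crossover(p1, p2)
print("".join(c1), "".join(c2))
```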