982 results for Modeling problems
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words: most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; the classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
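As a concrete illustration of the automatic mapping idea: a GRNN is essentially Nadaraya-Watson kernel regression whose single bandwidth parameter can be tuned by leave-one-out cross-validation, which is what makes the mapping automatic. A minimal sketch follows; the coordinates, measurements and bandwidth grid are synthetic assumptions, not the SIC 2004 data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training outputs."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def loo_mse(X, y, sigma):
    """Leave-one-out error: zero each point's self-weight, predict from the rest."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    return np.mean((y - (w @ y) / w.sum(axis=1)) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 2))                   # monitoring-site coordinates
y = np.sin(X[:, 0] / 15.0) + 0.1 * rng.normal(size=200)  # synthetic measurements

sigmas = np.linspace(2.0, 30.0, 15)                      # candidate bandwidths
best = min(sigmas, key=lambda s: loo_mse(X, y, s))       # automatic tuning step
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
z = grnn_predict(X, y, np.column_stack([gx.ravel(), gy.ravel()]), best)
print(f"best sigma = {best:.1f}, mapped surface mean = {z.mean():.3f}")
```

The leave-one-out step is the design point: with only one free parameter, the whole fit-and-map cycle can run without an operator, which is what emergency mapping requires.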
Abstract:
The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long been struggling to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a time period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three were running freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert, and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well relative to previous model evaluations. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), but with overestimates as well as underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia, where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations. This suggests an underestimate of BC sources in Russia in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding, whereas sulfate and BC are weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others, and differences are independent of model resolution.
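The seasonal-bias metric quoted above (percentage under- or overestimation of the multi-model mean for January–March and July–September) can be illustrated with a short sketch; the monthly concentration series below are synthetic stand-ins, not the study's data.

```python
import numpy as np

months = np.arange(1, 13)
winter, summer = np.isin(months, [1, 2, 3]), np.isin(months, [7, 8, 9])
obs = np.where(winter, 80.0, 20.0)          # observed eBC with strong Haze seasonality (ng/m3)
models = {                                  # hypothetical monthly model means (ng/m3)
    "CTM-A": np.where(winter, 35.0, 30.0),
    "CCM-B": np.where(winter, 30.0, 45.0),
}

for label, mask in [("Jan-Mar", winter), ("Jul-Sep", summer)]:
    o = obs[mask].mean()
    ens = np.mean([m[mask].mean() for m in models.values()])  # multi-model mean
    print(f"{label}: model-mean bias = {100.0 * (ens - o) / o:+.0f}%")
```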
Abstract:
We study a one-dimensional nonlinear reaction-diffusion system coupled on the boundary. Such a system arises from modeling the temperature distribution in two bars of the same length, joined together, with different diffusion coefficients. We prove the transversality property of unstable and stable manifolds, assuming all equilibrium points are hyperbolic. To this end, we write the system as an equation with a discontinuous diffusion coefficient. We then study the nonincreasing property of the number of zeros of a linearized nonautonomous equation as well as the Sturm-Liouville properties of the solutions of a linear elliptic problem.
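For concreteness, one plausible form of such a boundary-coupled system is sketched below (an illustrative assumption; the paper's exact formulation and scaling may differ):

```latex
% Two bars on (-1,0) and (0,1) with diffusion coefficients a_1 != a_2,
% coupled at x = 0 by continuity of temperature and of heat flux:
\begin{aligned}
  u_t &= a_1\,u_{xx} + f(u), && x \in (-1,0),\\
  v_t &= a_2\,v_{xx} + g(v), && x \in (0,1),\\
  u(0,t) &= v(0,t), \quad a_1 u_x(0,t) = a_2 v_x(0,t), && \text{(joint)}\\
  u_x(-1,t) &= 0, \quad v_x(1,t) = 0. && \text{(outer ends)}
\end{aligned}
```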
Abstract:
Bit performance prediction has been a challenging problem for the petroleum industry. It is essential for cost reduction in well planning and drilling performance prediction, especially when rig leasing rates tend to follow project demand and rises in the barrel price. A methodology to model and predict one of the drill bit performance indicators, the Rate of Penetration (ROP), is presented herein. As the parameters affecting the ROP are complex and their relationships are not easily modeled, the application of a neural network is suggested. In the present work, a dynamic neural network, based on the Auto-Regressive with Extra Input Signals (ARX) model, is used to approach the ROP modeling problem. The network was applied to a real offshore oil field data set consisting of information from seven wells drilled with an equal-diameter bit.
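A minimal sketch of the ARX-style network idea follows: the regressor vector combines lagged ROP values (the autoregressive part) with exogenous drilling inputs. The input names, lag orders and synthetic data are illustrative assumptions, not the field data set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 500
wob = rng.uniform(5, 25, n)      # weight on bit (exogenous input, hypothetical units)
rpm = rng.uniform(80, 180, n)    # rotary speed (exogenous input)
rop = np.zeros(n)
for t in range(1, n):            # synthetic ROP series with dynamic memory
    rop[t] = 0.7 * rop[t - 1] + 0.05 * wob[t] + 0.01 * rpm[t] + rng.normal(0, 0.2)

# ARX regressors: two lagged ROP values plus the current exogenous inputs.
X = np.array([[rop[t - 1], rop[t - 2], wob[t], rpm[t]] for t in range(2, n)])
y = rop[2:]

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X[:400], y[:400])        # train on the earlier part of the series
print("held-out R^2:", round(net.score(X[400:], y[400:]), 3))
```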
Abstract:
The use of externally bonded (EB) fibre-reinforced polymer (FRP) composites has gained acceptance during the last two decades in the construction engineering community, particularly in the rehabilitation of reinforced concrete (RC) structures. Currently, to increase the shear resistance of RC beams, FRP sheets are externally bonded (EB-FRP) and applied on the external side surfaces of the beams to be strengthened, in different configurations. Of more recent application, the near-surface mounted FRP bar (NSM-FRP) method is another technique successfully used to increase the shear resistance of RC beams. In the NSM method, FRP rods are embedded into grooves intentionally prepared in the concrete cover of the side faces of RC beams. While flexural strengthening has been widely developed and studied so far, the same does not hold for shear strengthening, mainly because of its great complexity. Nevertheless, more research should be devoted to this kind of strengthening if the design criteria for RC structures, which are based on avoiding shear failure and its catastrophic consequences, are to be preserved. In spite of this, accurately calculating the shear capacity of FRP shear-strengthened RC beams remains a complex challenge that has not yet been fully resolved, due to the numerous variables involved. The objective of this thesis is to develop methodologies to evaluate the capacity of FRP shear-strengthened RC beams by approaching the problem from a point of view different from numerical modeling, using artificial intelligence techniques. With this purpose, two different approaches have been developed: one concerned with the use of artificial neural networks (ANNs), and the other based on an optimization approach developed jointly with ANNs and solved with genetic algorithms (GAs). With these approaches, some of the difficulties of numerical modeling can be overcome. As an alternative to conventional numerical techniques, neural networks do not provide closed-form solutions for modeling problems but do offer a complex and accurate solution based on a representative set of historical examples of the relationship; furthermore, they can adapt solutions over time to include new data. As a second proposal, an optimization approach has also been developed to derive simple yet accurate shear design equations for this kind of strengthening. This approach is developed in a multi-objective framework by considering experimental results of RC beams with and without NSM-FRP. The results obtained with the previous ANN-based scheme are also used as a filter to choose the parameters to include in the design equations. Genetic algorithms are used to solve the optimization problem, since they are especially suitable for multi-objective problems compared to standard optimization methods. The key features of the two proposed procedures are outlined, and their performance in predicting the capacity of NSM-FRP shear-strengthened RC beams is evaluated by comparison with results from experimental tests and with predictions obtained using a simplified numerical model. A sensitivity study of the predictions of both models with respect to the input parameters is also carried out.
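The optimization idea can be sketched as follows: a real-coded genetic algorithm fits the coefficients of a candidate power-law design equation to test results. The thesis solves a multi-objective problem with NSGA-II; this single-objective sketch (accuracy only), the equation form and the synthetic data are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
E_rho = rng.uniform(0.5, 5.0, 60)      # stand-in for an FRP stiffness parameter
d = rng.uniform(200.0, 600.0, 60)      # effective beam depth (mm)
V_obs = 0.8 * E_rho**0.6 * d**0.9 * (1 + 0.05 * rng.normal(size=60))  # synthetic tests

def mse(c):
    """Error of the candidate design equation V = a * E_rho^b * d^e."""
    a, b, e = c
    return np.mean((a * E_rho**b * d**e - V_obs) ** 2)

pop = rng.uniform(0.0, 2.0, size=(50, 3))          # real-coded population (a, b, e)
for _ in range(200):
    fitness = np.array([mse(c) for c in pop])
    parents = pop[np.argsort(fitness)[:25]]        # truncation selection
    children = parents[rng.integers(0, 25, 50)] + rng.normal(0.0, 0.05, (50, 3))
    children = np.clip(children, 0.0, 2.0)         # mutation within box constraints
    children[:25] = parents                        # elitism keeps the best half
    pop = children
best = pop[np.argmin([mse(c) for c in pop])]
print("fitted coefficients (a, b, e):", np.round(best, 3))
```

In the multi-objective setting, a second fitness term penalizing equation complexity would replace the single MSE, and NSGA-II would return a Pareto front of accuracy-versus-simplicity trade-offs rather than one equation.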
Abstract:
The trading of electricity from renewable sources is ordinarily an activity in which operations are structured under conditions of uncertainty, for example with respect to the spot price in the short-term market and the energy generation of the projects. Hence agents seek to formulate strategies and to use tools that support their decision making, aiming not only at financial return but also at mitigating the risks involved. Investment analyses for renewable sources share similar challenges. In the literature, decision making considered optimal under uncertainty is studied through stochastic programming techniques, which make it possible to model problems with random variables and to obtain rational solutions of interest to the investor. These models allow the incorporation of risk metrics, such as the Conditional Value-at-Risk (CVaR), in order to obtain optimal solutions that weigh the expected financial result against the associated risk of the operation, where the agent's risk aversion becomes a fundamental conditioning factor. The main objective of the thesis, from the perspective of generation, consumer and trading agents, is: (i) to develop and implement stochastic linear programming optimization models with an associated CVaR metric, customized for each of these agents; and (ii) to apply them to the strategic analysis of operations, as a way of presenting feasible alternatives for managing the activities of these agents and of contributing a conceptually robust, user-friendly instrument for use by companies. In this context, as emphasized above, emphasis is placed on analyzing the financial risk of these operations through the application of CVaR, based on the agent's risk aversion. Hydro and wind renewable sources are considered as generation asset options, in order to study the complementarity effect between different sources and between different sites of the same source, evaluating the impacts on operations.
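The scenario-based linear formulation of CVaR due to Rockafellar and Uryasev makes such models ordinary linear programs. A minimal sketch follows for a generator choosing a forward-contracted quantity; all prices, scenarios and parameter values are illustrative assumptions, not the thesis model.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
S = 200                                     # equiprobable scenarios
spot = rng.lognormal(np.log(150), 0.4, S)   # spot price scenarios (R$/MWh)
gen = rng.uniform(30, 70, S)                # generation scenarios (MWh)
Pc, lam, alpha, q_max = 160.0, 0.5, 0.95, 100.0  # contract price, risk weight, CVaR level

# Profit_s = Pc*q + spot_s*(gen_s - q); loss_s = -profit_s.
# Variables: v = [q, zeta, z_1..z_S], with z_s >= loss_s - zeta and z_s >= 0.
# Objective: minimize -(1-lam)*E[profit] + lam*(zeta + mean(z)/(1-alpha)).
c = np.concatenate(([-(1 - lam) * (Pc - spot.mean()), lam],
                    np.full(S, lam / ((1 - alpha) * S))))
A_ub = np.zeros((S, S + 2))
A_ub[:, 0] = -(Pc - spot)                   # loss coefficient on q
A_ub[:, 1] = -1.0                           # -zeta
A_ub[np.arange(S), np.arange(S) + 2] = -1.0 # -z_s
b_ub = spot * gen                           # moves -spot_s*gen_s to the right-hand side
bounds = [(0, q_max), (None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"optimal contracted quantity: {res.x[0]:.1f} MWh")
```

Sweeping the risk-aversion weight `lam` from 0 to 1 traces the efficient frontier between expected profit and CVaR, which is how the agent's risk preference enters the decision.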
Abstract:
In this study, transmission-line modeling (TLM) applied to bio-thermal problems was improved by incorporating several novel computational techniques, including graded meshes, which ran 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes in analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that takes thermal properties into account, resulting in more realistic modeling of complex problems, is introduced, along with a new way of calculating an error parameter. The temperatures calculated between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer modeling of biological systems.
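The graded-mesh idea can be illustrated with a short sketch. Note this is a plain finite-difference analogue, not the TLM formulation of the study: cell widths grow geometrically away from the heated boundary, concentrating nodes where gradients are steep, so far fewer nodes are needed than with a regular mesh. All parameter values are illustrative.

```python
import numpy as np

r, n = 1.1, 30                                # grading ratio and number of cells
dx = 0.0005 * r ** np.arange(n)               # geometrically graded cell widths (m)
x = np.concatenate(([0.0], np.cumsum(dx)))    # node positions
k, rho_c = 0.5, 4.0e6                         # conductivity (W/m/K), vol. heat capacity (J/m3/K)
T = np.full(n + 1, 37.0)                      # initial tissue temperature (C)
T[0] = 45.0                                   # heated-surface boundary condition

dt = 0.4 * rho_c * dx.min() ** 2 / k          # explicit step sized by the smallest cell
for _ in range(20000):
    flux = k * np.diff(T) / dx                # heat flux across each cell face
    T[1:-1] += dt / rho_c * np.diff(flux) / ((dx[:-1] + dx[1:]) / 2.0)

i = np.searchsorted(x, 0.01)                  # node nearest 10 mm depth
print(f"{n + 1} graded nodes span {1000 * x[-1]:.0f} mm; T at ~10 mm = {T[i]:.2f} C")
```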
Abstract:
This work presents an analysis of the wavelet-Galerkin method for one-dimensional elastoplastic-damage problems. A time-stepping algorithm for non-linear dynamics is presented. Numerical treatment of the constitutive models is developed through a return-mapping algorithm. For spatial discretization, the wavelet-Galerkin method can be used instead of the standard finite element method; this approach makes it possible to localize singularities. The discrete formulation developed can be applied to the simulation of one-dimensional problems for elastic-plastic-damage models.
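A minimal sketch of the return-mapping update for 1-D rate-independent plasticity with linear isotropic hardening follows; the damage coupling of the paper's model is omitted, and the material parameters are illustrative assumptions.

```python
import numpy as np

E, H, sigma_y = 200e3, 10e3, 250.0        # Young's modulus, hardening modulus, yield stress (MPa)

def return_map(sigma_n, alpha_n, d_eps):
    """One strain-driven step: elastic predictor, plastic corrector."""
    sigma_tr = sigma_n + E * d_eps        # elastic trial stress
    f_tr = abs(sigma_tr) - (sigma_y + H * alpha_n)
    if f_tr <= 0.0:
        return sigma_tr, alpha_n          # step stays elastic
    d_gamma = f_tr / (E + H)              # plastic multiplier enforcing f = 0
    sigma = sigma_tr - E * d_gamma * np.sign(sigma_tr)
    return sigma, alpha_n + d_gamma       # corrected stress, updated hardening variable

# Drive a strain cycle and trace the resulting elastoplastic response.
eps_path = np.concatenate([np.linspace(0, 0.004, 50), np.linspace(0.004, -0.004, 100)])
sigma, alpha = 0.0, 0.0
for d_eps in np.diff(eps_path):
    sigma, alpha = return_map(sigma, alpha, d_eps)
print(f"final stress = {sigma:.1f} MPa, accumulated plastic strain = {alpha:.5f}")
```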
Abstract:
In order to use the finite element method effectively and efficiently for solving fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins, we have presented, in this paper, new concepts and numerical algorithms to deal with the fundamental issues associated with fluid-rock interaction problems, issues that are often overlooked by purely numerical modelers. (1) Since the fluid-rock interaction problem involves heterogeneous chemical reactions between reactive aqueous chemical species in the pore-fluid and solid minerals in the rock masses, it is necessary to develop the new concept of the generalized concentration of a solid mineral, so that two types of reactive mass transport equations, namely the conventional mass transport equation for the aqueous chemical species in the pore-fluid and the degenerated mass transport equation for the solid minerals in the rock mass, can be solved simultaneously in computation. (2) Since the reaction area between the pore-fluid and mineral surfaces is basically a function of the generalized concentration of the solid mineral, there is a definite need to appropriately consider the dependence of the dissolution rate of a dissolving mineral on its generalized concentration in the numerical analysis. (3) Considering the direct consequence of the porosity evolution with time in the transient analysis of fluid-rock interaction problems, we have proposed the term-splitting algorithm and the concept of equivalent source/sink terms in mass transport equations, so that the problem of variable mesh Peclet and Courant numbers is converted into a problem with constant mesh Peclet and Courant numbers. The numerical results from an application example demonstrate the usefulness of the proposed concepts and the robustness of the proposed numerical algorithms in dealing with fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins.
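Schematically, the two types of transport equations and the mesh numbers involved can be written as follows (an illustration in common notation, not the paper's exact equations; the mesh Peclet number is shown in one common convention):

```latex
% Aqueous species C_i: advection-dispersion-reaction; solid mineral C_m
% (generalized concentration): a degenerate equation with no transport terms.
% phi = porosity, u = pore-fluid velocity, D = dispersion/diffusion coefficient.
\begin{aligned}
  \frac{\partial(\phi C_i)}{\partial t} + \nabla\!\cdot(\phi\,\mathbf{u}\,C_i)
    &= \nabla\!\cdot(\phi D\,\nabla C_i) + R_i(C_i, C_m),\\[2pt]
  \frac{\partial C_m}{\partial t} &= R_m(C_i, C_m),
\end{aligned}
\qquad
\mathrm{Pe}_{\mathrm{mesh}} = \frac{\lVert\mathbf{u}\rVert\,\Delta x}{2D},
\qquad
\mathrm{Cr} = \frac{\lVert\mathbf{u}\rVert\,\Delta t}{\Delta x}.
```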
Abstract:
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
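In common notation (not necessarily the authors'), the Amari field equation and the Tikhonov-regularized kernel construction sketched above read:

```latex
% u(x,t): field activation; w: synaptic kernel; f: sigmoidal firing rate.
% The second line finds the kernel by Tikhonov-regularized least squares
% along a prescribed trajectory u(x,t), with regularization weight lambda.
\begin{aligned}
  \tau\,\frac{\partial u(x,t)}{\partial t}
    &= -u(x,t) + \int_\Omega w(x,y)\, f\bigl(u(y,t)\bigr)\,\mathrm{d}y,\\[2pt]
  w^\ast &= \arg\min_{w}\;
    \Bigl\lVert \tau\,\partial_t u + u - \int_\Omega w\, f(u)\,\mathrm{d}y \Bigr\rVert^2
    + \lambda\,\lVert w\rVert^2 .
\end{aligned}
```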