791 results for artificial neural network (ANN)


Relevance:

100.00%

Publisher:

Abstract:

This work introduces the concepts associated with neural networks and their application to system control, in this case in the field of autonomous robotics. An AGV was used to experimentally test control through an artificial neural network. The great advantage of artificial neural networks is that they can be taught to behave as intended. Based on this characteristic, two approaches were taken in the implementation on the available AGV. The first approach taught the neural network to reproduce the fuzzy-logic controller that had been implemented on the AGV during its development. The second approach taught the artificial neural network to operate from data collected from a simple remote control implemented on the AGV. Both approaches were first implemented and simulated in MATLAB before being deployed on the AGV. MATLAB is used to train the multilayer feedforward neural networks with the Levenberg-Marquardt backpropagation training algorithm. In the first approach, the artificial neural network was implemented in three stages: MATLAB, then the C programming language on a computer, and finally a PIC microcontroller on the AGV, making it possible to compare the development of these techniques across platforms. During the development of the second approach, an Android application was built that allows the AGV to be monitored and controlled remotely. The results obtained by the neural network implementations derived from the fuzzy controller and from the remote control were satisfactory, as the AGV followed the test tracks correctly in both cases. It was concluded that applying neural networks to the control of an AGV is viable. Moreover, the system developed can be used to implement and test new ANNs.
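
The core idea here is behavioral cloning: train a small network on input-output pairs logged from an existing controller. The sketch below is a minimal illustration in Python with plain gradient descent rather than the Levenberg-Marquardt training the thesis used in MATLAB; the sensor inputs, wheel-speed targets, and network size are all illustrative assumptions.

```python
# Minimal sketch: teach a small feedforward network to imitate an existing
# controller (behavioral cloning). Plain gradient descent is used for
# brevity; the thesis trained with MATLAB's Levenberg-Marquardt instead.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged data: 3 distance sensors -> 2 wheel-speed commands,
# recorded while the original fuzzy (or remote) controller drove the AGV.
X = rng.uniform(0.0, 1.0, size=(500, 3))        # normalised sensor readings
diff = X[:, 0] - X[:, 2]                        # left/right sensor difference
Y = np.stack([0.5 + 0.4 * diff, 0.5 - 0.4 * diff], axis=1)  # stand-in targets

n_in, n_hid, n_out = 3, 8, 2
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden layer
    P = H @ W2 + b2                     # linear output layer
    err = P - Y                         # prediction error
    # Backpropagation of the mean squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)    # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err ** 2).mean()))
```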

Relevance:

100.00%

Publisher:

Abstract:

Load forecasting has gradually become a major field of research in the electricity industry. It is extremely important for the electric sector in a deregulated environment, as it provides useful support to power system management. Accurate load forecasting models are required for the operation and planning of a utility company, and they have received increasing attention from researchers in this field. Many mathematical methods have been developed for load forecasting. This work aims to develop and implement a method for short-term load forecasting (STLF) based on Holt-Winters exponential smoothing and an artificial neural network (ANN). One of the main contributions of this paper is the application of the Holt-Winters exponential smoothing approach to the forecasting problem; in addition, building on past forecasting work, data mining techniques are also applied to short-term load forecasting. Both the ANN and Holt-Winters exponential smoothing approaches are compared and evaluated.
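
The Holt-Winters method named in the abstract can be made concrete by its additive update equations for level, trend, and seasonal indices. The sketch below is a minimal Python illustration with a daily season of 24 hours; the smoothing constants and the synthetic load series are assumptions, not values from the paper.

```python
# Minimal sketch of additive Holt-Winters (triple exponential smoothing)
# for an hourly load series with a daily season (m = 24). Smoothing
# constants are illustrative, not the values used in the paper.
import numpy as np

def holt_winters_additive(y, m=24, alpha=0.3, beta=0.05, gamma=0.2, horizon=24):
    level, trend = y[:m].mean(), 0.0
    season = list(y[:m] - y[:m].mean())        # initial seasonal indices
    for t in range(m, len(y)):
        s = season[t % m]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    # h-step-ahead forecasts reuse the last level/trend and seasonal indices
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(horizon)]

t = np.arange(24 * 14)                          # two weeks of synthetic load
load = 100 + 0.01 * t + 15 * np.sin(2 * np.pi * t / 24)
print(holt_winters_additive(load)[:4])          # next four hourly forecasts
```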

Relevance:

100.00%

Publisher:

Abstract:

Wind speed forecasting has become an important field of research supporting the electricity industry, mainly due to the increasing use of distributed energy sources, largely based on renewables. This type of electricity generation is highly dependent on weather variability, particularly the variability of the wind speed. Therefore, accurate wind power forecasting models are required for the operation and planning of wind plants and power systems. A Support Vector Machine (SVM) model for short-term wind speed forecasting is proposed, and its performance is evaluated and compared with several artificial neural network (ANN) based approaches. A case study based on a real database covering 3 years, predicting wind speed at 5-minute intervals, is presented.
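
A common way to set up such a model is to regress the next wind-speed value on a window of lagged observations. The sketch below shows this with scikit-learn's support vector regression at the paper's 5-minute resolution; the synthetic series, lag window, and hyperparameters are assumptions for illustration only.

```python
# Minimal sketch: support vector regression on lagged wind-speed values,
# mirroring a 5-minute-interval prediction setup. Data and hyperparameters
# are synthetic/illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
t = np.arange(5000)
speed = 8 + 2 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 0.5, t.size)

n_lags = 12                                  # one hour of 5-minute history
X = np.stack([speed[i:i - n_lags] for i in range(n_lags)], axis=1)
y = speed[n_lags:]

split = 4000                                 # chronological train/test split
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X[:split], y[:split])
print("test MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```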

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

100.00%

Publisher:

Abstract:

The use of Geographic Information Systems has revolutionized the handling and visualization of geo-referenced data and has underlined the critical role of spatial analysis. The usual tools for this purpose are geostatistics, which are widely used in Earth science. Geostatistics rely on several hypotheses which are not always verified in practice. Artificial Neural Networks (ANN), on the other hand, can a priori be used without special assumptions and are known to be flexible. This paper discusses the application of ANN to the interpolation of a geo-referenced variable.
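
The assumption-light alternative the paper describes amounts to regressing the variable of interest directly on its coordinates. The sketch below illustrates this with an MLP on a synthetic surface; the sampled surface, network size, and evaluation grid are all assumptions, and a real study would compare this against kriging.

```python
# Minimal sketch: interpolate a geo-referenced variable by regressing it on
# the (x, y) coordinates with an MLP, as an assumption-light alternative to
# kriging. Surface and network size are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(300, 2))                 # sampled locations
z = np.sin(xy[:, 0]) + 0.5 * np.cos(xy[:, 1]) + rng.normal(0, 0.05, 300)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(xy, z)

grid = np.array([[x, y] for x in np.linspace(0, 10, 5)
                        for y in np.linspace(0, 10, 5)])
print(net.predict(grid).round(2))                      # interpolated surface
```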

Relevance:

100.00%

Publisher:

Abstract:

The present research project was designed to identify the typical Iowa material input values that are required by the Mechanistic-Empirical Pavement Design Guide (MEPDG) for Level 3 concrete pavement design. It was also designed to investigate the existing equations that might be used to predict Iowa pavement concrete properties for Level 2 pavement design. In this project, over 20,000 data points were collected from the Iowa Department of Transportation (DOT) and other sources. These data, most of which were concrete compressive strength, slump, air content, and unit weight measurements, were synthesized and their statistical parameters (such as mean values and standard deviations) were analyzed. Based on the analyses, the typical input values of Iowa pavement concrete, such as 28-day compressive strength (f'c), splitting tensile strength (fsp), elastic modulus (Ec), and modulus of rupture (MOR), were evaluated. The study indicates that the 28-day MOR of Iowa concrete is 646 ± 51 psi, very close to the MEPDG default value (650 psi). The 28-day Ec of Iowa concrete (based on only two available data points from the Iowa Curling and Warping project) is 4.82 ± 0.28 × 10^6 psi, which is quite different from the MEPDG default value (3.93 × 10^6 psi); therefore, the researchers recommend re-evaluating it after more Iowa test data become available. The drying shrinkage (εc) of a typical Iowa concrete (C-3WR-C20 mix) was tested at the Concrete Technology Laboratory (CTL). The test results show that the ultimate shrinkage of the concrete is about 454 microstrain and that the concrete reaches 50% of ultimate shrinkage at 32 days; both values are very close to the MEPDG defaults. The comparison of the Iowa test data with the MEPDG default values, as well as the recommended input values to be used in the MEPDG for Iowa PCC pavement design, are summarized in Table 20 of this report. The available equations for predicting the above-mentioned concrete properties were also assembled, and their validity for Iowa concrete materials was examined. Multiple-parameter nonlinear regression analyses, along with the artificial neural network (ANN) method, were employed to investigate the relationships among Iowa concrete material properties and to modify the existing equations to suit Iowa concrete materials. However, due to the lack of the necessary data sets, the relationships between Iowa concrete properties were established based only on the limited data from the CP Tech Center's projects and ISU classes. The researchers suggest that the resulting relationships be used by Iowa pavement design engineers as references only. The study furthermore indicates that appropriately documenting concrete properties, including flexural strength, elastic modulus, and concrete mix design information, is essential for updating the typical Iowa material input values and providing rational prediction equations for concrete pavement design in the future.
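
The equations the report examines relate these properties to compressive strength. As a hedged illustration, the sketch below evaluates the generic ACI-type square-root correlations in psi units; these are standard textbook forms, not the Iowa-specific regressions developed in the report itself.

```python
# Minimal sketch: the kind of strength-based correlations the report
# examines. These are generic ACI-type equations (psi units), not the
# Iowa-specific regressions developed in the report.
import math

fc = 5000.0                                   # 28-day compressive strength, psi

Ec = 57000.0 * math.sqrt(fc)                  # elastic modulus estimate, psi
MOR = 9.5 * math.sqrt(fc)                     # modulus of rupture estimate, psi
fsp = 6.7 * math.sqrt(fc)                     # splitting tensile estimate, psi

print(f"Ec  ~ {Ec / 1e6:.2f} x 10^6 psi")     # ~4.03 x 10^6 psi
print(f"MOR ~ {MOR:.0f} psi")                 # ~672 psi
print(f"fsp ~ {fsp:.0f} psi")                 # ~474 psi
```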

Relevance:

100.00%

Publisher:

Abstract:

In this work we present a simulation of a recognition process for simple plant leaves, using perimeter characterization as the sole discriminating parameter. Data coding that allows for independence from leaf size and orientation may penalize recognition performance for some varieties. Border-description sequences are used to characterize the leaves. Independent Component Analysis (ICA) is then applied in order to study the best number of components to be considered for the classification task, which is implemented by means of an Artificial Neural Network (ANN). The results obtained with ICA as a pre-processing tool are satisfactory; compared with some references, our system improves recognition success to up to 80.8%, depending on the number of independent components considered.
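
The pipeline in the abstract is ICA for dimensionality reduction followed by an ANN classifier. The sketch below mirrors that structure with scikit-learn; the random feature matrix stands in for the real border-description sequences, and the number of components and labels are assumptions.

```python
# Minimal sketch: ICA as a pre-processing step before an ANN classifier,
# echoing the paper's pipeline. Random data stand in for the real
# border-description sequences of the leaves.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 64))       # stand-in perimeter descriptors
y = rng.integers(0, 4, size=200)      # stand-in leaf-variety labels

n_components = 10                     # the paper tunes this number
ica = FastICA(n_components=n_components, random_state=0)
X_ic = ica.fit_transform(X)           # independent components as features

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_ic, y)
print("training accuracy:", clf.score(X_ic, y))
```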

Relevance:

100.00%

Publisher:

Abstract:

Mountain regions worldwide are particularly sensitive to ongoing climate change. Specifically in the Swiss Alps, temperature has increased twice as fast as in the rest of the Northern Hemisphere. Water temperature closely follows the annual air temperature cycle, severely impacting streams and freshwater ecosystems. In the last 20 years, brown trout (Salmo trutta L.) catch has declined by approximately 40-50% in many rivers in Switzerland. Increasing water temperature has been suggested as one of the most likely causes of this decline. Temperature has a direct effect on trout population dynamics through development and disease control, but it can also impact dynamics indirectly via food-web interactions such as resource availability. We developed a spatially explicit modelling framework that allows spatial and temporal projections of trout biomass, using the Aare river catchment as a model system, in order to assess the spatial and seasonal patterns of trout biomass variation. Given that biomass varies seasonally depending on trout life-history stage, we developed seasonal biomass-variation models for three periods of the year (autumn-winter, spring, and summer). Because stream water temperature is a critical parameter for brown trout development, we first calibrated a model to predict water temperature as a function of air temperature, so that climate change scenarios could be applied later. We then built a model of trout biomass variation by linking water temperature to trout biomass measurements collected by electro-fishing at 21 stations from 2009 to 2011. The different components of our framework had good overall predictive ability, and we could show a seasonal effect of water temperature on trout biomass variation. Our statistical framework uses a minimal set of input variables, which makes it easily transferable to other study areas or fish species, but it could be improved by including effects of the biotic environment and the evolution of demographic parameters over time. Nevertheless, our framework remains informative for spatially highlighting where potential changes in water temperature could affect trout biomass. (C) 2015 Elsevier B.V. All rights reserved.
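
The abstract does not specify the form of the air-to-water temperature model; a widely used choice in the stream-temperature literature is a logistic (Mohseni-type) curve, shown below as an assumption rather than the study's actual formulation. The synthetic record and starting parameters are illustrative.

```python
# Minimal sketch: predicting stream water temperature from air temperature.
# The logistic (Mohseni-type) form is a common choice in the literature;
# the study may have used a different formulation.
import numpy as np
from scipy.optimize import curve_fit

def mohseni(t_air, mu, alpha, beta, gamma):
    # S-shaped response bounded between mu (cold limit) and alpha (warm limit)
    return mu + (alpha - mu) / (1 + np.exp(gamma * (beta - t_air)))

t_air = np.linspace(-5, 30, 100)                  # synthetic air temperatures
noise = np.random.default_rng(4).normal(0, 0.3, 100)
t_water = mohseni(t_air, 1.0, 22.0, 13.0, 0.25) + noise

params, _ = curve_fit(mohseni, t_air, t_water, p0=[0.0, 20.0, 12.0, 0.2])
print("fitted (mu, alpha, beta, gamma):", params.round(2))
```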

Relevance:

100.00%

Publisher:

Abstract:

Multivariate models were developed using an Artificial Neural Network (ANN) and Least Squares Support Vector Machines (LS-SVM) for estimating the lignin syringyl/guaiacyl ratio and the contents of cellulose, hemicelluloses, and lignin in eucalyptus wood by pyrolysis coupled with gas chromatography and mass spectrometry (Py-GC/MS). The results obtained by the two calibration methods were in agreement with those of the reference methods. However, a comparison indicated that the LS-SVM model had better predictive capacity for the cellulose and lignin contents, while the ANN model was more adequate for estimating the hemicelluloses content and the lignin syringyl/guaiacyl ratio.
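
A comparison of this kind can be set up as two regressors cross-validated on the same calibration matrix. In the sketch below, kernel ridge regression is used as a close stand-in for LS-SVM (both solve least-squares problems in a kernel feature space), and random data stand in for the Py-GC/MS peak areas; everything here is an illustrative assumption.

```python
# Minimal sketch: comparing an ANN with a least-squares kernel model for
# multivariate calibration. Kernel ridge regression stands in for LS-SVM;
# random data stand in for the Py-GC/MS features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(size=(120, 30))                       # stand-in pyrogram features
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 120)    # stand-in cellulose content

for name, model in [("ANN", MLPRegressor(hidden_layer_sizes=(16,),
                                          max_iter=5000, random_state=0)),
                    ("KRR (LS-SVM-like)", KernelRidge(kernel="rbf", alpha=0.1))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```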

Relevance:

100.00%

Publisher:

Abstract:

The quantitative structure-property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled using multivariate linear regression (MLR) and an artificial neural network (ANN). Leave-one-out cross-validation and external validation were carried out to assess the prediction performance of the models developed. For the MLR method, the prediction root mean square relative error (RMSRE) of leave-one-out cross-validation and external validation was 1.77 and 1.23, respectively. For the ANN method, the corresponding values were 1.65 and 1.16. A quantitative relationship between the MDEV index and the Tb of PCDD/Fs was thus demonstrated, and both MLR and ANN are practicable for modeling it. The MLR and ANN models developed can be used to predict the Tb of PCDD/Fs; accordingly, the Tb of each PCDD/F was predicted by the developed models.
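
The validation scheme in the abstract, leave-one-out cross-validation scored by RMSRE, is easy to show concretely. The sketch below applies it to an MLR model; the two-column descriptor matrix and the synthetic boiling points are stand-ins for the MDEV index and the real data.

```python
# Minimal sketch: leave-one-out cross-validation of an MLR model scored by
# the root mean square relative error (RMSRE) named in the abstract. The
# descriptor values are random stand-ins for the MDEV index.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(6)
X = rng.uniform(1, 10, size=(60, 2))           # stand-in MDEV descriptors
Tb = 300 + 25 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 5, 60)  # boiling pt, K

rel_errs = []
for train, test in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train], Tb[train])
    pred = model.predict(X[test])[0]
    rel_errs.append((pred - Tb[test][0]) / Tb[test][0])

rmsre = 100 * np.sqrt(np.mean(np.square(rel_errs)))   # in percent
print(f"LOO RMSRE = {rmsre:.2f}%")
```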

Relevance:

100.00%

Publisher:

Abstract:

The objective of this work is to demonstrate the efficient use of Principal Component Analysis (PCA) as a method to pre-process the original multivariate data, that is, to rewrite them as a new matrix of principal components sorted by their accumulated variance. An Artificial Neural Network (ANN) with the backpropagation algorithm is trained using this pre-processed data set, derived from the PCA method and representing 90.02% of the accumulated variance of the original data, as input. The training goal is to model dissolved oxygen using information from other physical and chemical parameters. The water samples used in the experiments were gathered from the Paraíba do Sul River in São Paulo State, Brazil. The smallest Mean Square Error (MSE) is used to compare the results of the different architectures and choose the best one. This method allowed a reduction of more than 20% in the input data, which directly shortened the training time and reduced the computational effort of the ANN.
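
The abstract's pipeline, PCA retaining roughly 90% of the variance followed by a backpropagation network, maps directly onto standard library calls. The sketch below is a minimal version with random stand-in data for the water-quality variables; all sizes and targets are assumptions.

```python
# Minimal sketch of the abstract's pipeline: keep enough principal
# components to explain ~90% of variance, then train a backpropagation
# network on them. Random data stand in for the water-quality measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(150, 10))                  # stand-in physico-chemical vars
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 150)  # stand-in dissolved O2

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.90)                    # smallest set reaching 90%
X_pc = pca.fit_transform(Xs)
print("components kept:", pca.n_components_)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X_pc, y)
print("R^2 on training data:", round(net.score(X_pc, y), 2))
```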

Relevance:

100.00%

Publisher:

Abstract:

Advances in the field of artificial intelligence allow computer systems to solve increasingly complex tasks related, for example, to vision, the understanding of audio signals, or language processing. Among existing models are Artificial Neural Networks (ANN), whose popularity took a great leap forward with the discovery by Hinton et al. [22] of the use of Restricted Boltzmann Machines (RBM) for unsupervised layer-by-layer pre-training, which greatly facilitates the supervised training of networks with several hidden layers (DBN), a training task that had until then proven very difficult. Since this discovery, researchers have studied the effectiveness of new pre-training strategies, such as stacking traditional autoencoders (SAE) [5, 38] and stacking denoising autoencoders (SDAE) [44]. It is in this context that the present study began. After a brief review of the basics of machine learning and of the pre-training methods used so far with RBM, AE, and DAE modules, we deepened our understanding of SDAE pre-training, explored its various properties, and studied SDAE variants as initialization strategies for deep architectures. Among other things, we were able to highlight the influence of the noise level, the number of layers, and the number of hidden units on the generalization error of the SDAE. We observed an improvement in performance on the supervised task when using salt-and-pepper (PS) and Gaussian (GS) noise, both of which prove better justified than the zero-masking noise (MN) used until now. Moreover, we showed that performance benefits from an emphasis placed on the reconstruction of the corrupted inputs during the training of the individual DAEs. Our work also revealed that, on natural images, the DAE is able to learn filters similar to those found in the V1 cells of the visual cortex, namely edge detectors. We were furthermore able to show that the representations learned by the SDAE, composed of the features thus extracted, are very useful for training a linear or Gaussian-kernel Support Vector Machine (SVM), greatly improving its generalization performance. We also observed that, like the DBN and unlike the SAE, the SDAE has good capabilities as a generative model. Finally, we opened the door to new pre-training strategies and uncovered the potential of one of them, the stacking of renoising autoencoders (SRAE).
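
The building block of the SDAE is a single denoising autoencoder: corrupt the input, encode it, and train the decoder to reconstruct the clean input; stacking repeats this on each layer's hidden code. The sketch below shows one layer with the zero-masking (MN) corruption the thesis compares against; tied weights, squared-error loss, and plain SGD are simplifications assumed here for brevity.

```python
# Minimal sketch of one denoising autoencoder (DAE) layer with zero-masking
# noise; a stacked DAE repeats this, training each layer on the previous
# layer's hidden code. Tied weights, squared error and plain SGD are
# simplifications.
import numpy as np

rng = np.random.default_rng(8)
X = (rng.uniform(size=(500, 64)) < 0.3).astype(float)   # stand-in binary data

n_vis, n_hid, lr, noise = 64, 32, 0.1, 0.25
W = rng.normal(0, 0.1, (n_vis, n_hid))
bh, bv = np.zeros(n_hid), np.zeros(n_vis)
sigmoid = lambda a: 1 / (1 + np.exp(-a))

for epoch in range(200):
    X_tilde = X * (rng.uniform(size=X.shape) > noise)   # zero-mask corruption
    H = sigmoid(X_tilde @ W + bh)                       # encode corrupted input
    R = sigmoid(H @ W.T + bv)                           # decode (tied weights)
    dR = (R - X) * R * (1 - R)        # gradient: reconstruct the CLEAN input
    dH = (dR @ W) * H * (1 - H)
    gW = (X_tilde.T @ dH + dR.T @ H) / len(X)           # both uses of tied W
    W -= lr * gW; bv -= lr * dR.mean(0); bh -= lr * dH.mean(0)

print("reconstruction MSE:", float(((R - X) ** 2).mean()))
```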

Relevance:

100.00%

Publisher:

Abstract:

Short-term load forecasting is one of the key inputs for optimizing the management of a power system. Almost 60-65% of the revenue expenditure of a distribution company goes toward power purchase, and the cost of power depends on its source. Hence any optimization strategy involves optimizing the scheduling of power from the various sources. As scheduling involves many technical and commercial considerations and constraints, its efficiency depends on the accuracy of the load forecast. Load forecasting is a much-visited research topic, and a number of papers using different techniques have already been presented. The accuracy of a forecast used for merit-order dispatch decisions depends on the extent of the permissible variation in generation limits. For a system with a low load factor, the peak and the off-peak trough are prominent, and the forecast should identify these points accurately rather than merely minimize the error in the energy content. In this paper an attempt is made to apply an Artificial Neural Network (ANN) with a supervised-learning-based approach to short-term load forecasting for a power system with a comparatively low load factor. Such power systems are usual in tropical areas with a concentrated rainy season for a considerable period of the year.
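
The abstract's point that peak and trough accuracy matters more than average energy error suggests evaluating forecasts with an error metric that up-weights those hours. The sketch below is one hypothetical way to do that; the weighting scheme and synthetic low-load-factor day are assumptions, not the paper's method.

```python
# Minimal sketch: an evaluation metric that penalises errors at the daily
# peak and trough more heavily than elsewhere, reflecting the point that
# merit-order decisions hinge on those hours. Weights are illustrative.
import numpy as np

def peak_weighted_mape(actual, forecast, peak_weight=3.0):
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    w = np.ones_like(actual)
    w[actual.argmax()] = peak_weight            # daily peak hour
    w[actual.argmin()] = peak_weight            # daily off-peak trough
    ape = np.abs(forecast - actual) / actual
    return 100 * float((w * ape).sum() / w.sum())

hours = np.arange(24)
actual = 60 + 35 * np.sin(2 * np.pi * (hours - 8) / 24).clip(min=0)  # low LF
forecast = actual + np.random.default_rng(9).normal(0, 2, 24)
print(f"peak-weighted MAPE: {peak_weighted_mape(actual, forecast):.2f}%")
```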

Relevance:

100.00%

Publisher:

Abstract:

Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs called siRNA. This gene-silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially in AIDS, neurodegenerative diseases, cholesterol regulation, and cancer in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand gene silencing mediated by exogenous siRNA. Designing efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. Target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So, before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods that consider both the inhibition efficiency and the off-target possibility of siRNA against a gene have already been developed, but only a few achieve good inhibition efficiency, specificity, and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs, which may be useful in the areas of gene silencing and drug design for tumors. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in building an improved solution. The approaches proposed in this study thus extend some of the best-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches, together with thermodynamic features such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity, and specificity. Here, we propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes, with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences. The validations are conducted using the Pearson Correlation Coefficient, the Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity, and specificity. It is found that OpsiD is capable of predicting the inhibition capacity of siRNA against a target mRNA with improved results over the state-of-the-art techniques, and the work clarifies the influence of whole stacking energy on siRNA efficiency.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity, and sensitivity. Since efforts were taken to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects during gene silencing in various bioinformatics applications can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis, and therapy by gene silencing. The approach may prove useful for designing exogenous siRNA for therapeutic applications and gene-silencing techniques in different areas of bioinformatics.
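
The SVM stage of such a pipeline can be sketched as sequence featurization plus binary classification. Below is a toy version: the siRNA sequences, the simple composition features, and the "efficient" labels are all illustrative stand-ins, not the thesis's feature set or data.

```python
# Minimal sketch of the SVM stage: represent each 19-nt siRNA guide strand
# by simple composition features and classify it as efficient/inefficient.
# Sequences, labels and the feature set are illustrative stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
bases = np.array(list("AUGC"))
seqs = ["".join(rng.choice(bases, 19)) for _ in range(200)]

def features(s):
    gc = (s.count("G") + s.count("C")) / len(s)     # GC content
    return [gc,
            float(s[0] in "AU"),                    # 5' A/U (thermodynamic bias)
            float(s[-1] in "GC"),                   # 3' G/C
            s.count("UU") / (len(s) - 1)]           # simple dinucleotide feature

X = np.array([features(s) for s in seqs])
y = (X[:, 0] < 0.55).astype(int)                    # toy "efficient" label

clf = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```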

Relevance:

100.00%

Publisher:

Abstract:

Numerous studies have demonstrated an effect of probable climate change on the hydrosphere's different subsystems. In the 21st century, a global and regional redistribution of water has to be expected, and it is very likely that extreme weather phenomena will occur more frequently. From a global view, the flood situation will exacerbate. In contrast to these findings, the classical approach of flood frequency analysis provides terms like "mean flood recurrence interval". For this analysis to be valid, the distribution parameters must be stationary, which implies that the flood frequencies are constant in time. Newer approaches take into account extreme value distributions with time-dependent parameters, but this implies discarding the traditional terminology used to date in engineering hydrology. On the regional scale, climate change affects the hydrosphere in various ways. The question is therefore whether, in central Europe, the classical approach of flood frequency analysis is still usable and whether the traditional terminology should be renewed. In the present case study, hydro-meteorological time series of the Fulda catchment area (6930 km²), upstream of the gauging station Bonaforth, are analyzed for the period 1960 to 2100. First, a distributed catchment model (SWAT2005) is built up, calibrated, and finally validated. The Edertal reservoir is also regulated by feedback control of the catchment's output in case of low water. Due to this intricacy, a special modeling strategy was necessary: the study area is divided into three SWAT basin models, and an additional physically based reservoir model is developed. To further improve the streamflow predictions of the SWAT model, a correction by an artificial neural network (ANN) was tested successfully, which opens a new way to improve hydrological models. With this extension, the calibration and validation of the SWAT model for the Fulda catchment area improved significantly. After calibrating the model against the observed 20th-century streamflow, the SWAT model is driven by high-resolution climate data of the regional model REMO using the IPCC scenarios A1B, A2, and B1, to generate future runoff time series for the 21st century for the various sub-basins in the study area. In a second step, flood time series HQ(a) are derived from the 21st-century runoff time series (scenarios A1B, A2, and B1). These flood projections are then extensively tested with regard to stationarity, homogeneity, and statistical independence. All these tests indicate that the SWAT-predicted 21st-century trends in the flood regime are not significant: within the projected period, the members of the flood time series prove to be stationary and independent events. Hence, the classical stationary approach of flood frequency analysis can still be used within the Fulda catchment area, notwithstanding the fact that some regional climate change has been predicted under the IPCC scenarios. It should be noted, however, that the present results are not transferable to other catchment areas. Finally, a new method is presented that enables the calculation of extreme flood statistics even if the flood time series is non-stationary and even if it exhibits short- and long-term persistence. This method, called Flood Series Maximum Analysis here, enables the calculation of maximum design floods for a given risk or safety level and time period.
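
The classical stationary approach the study retains amounts to fitting an extreme-value distribution to the annual-maximum flood series and reading off the T-year design flood. The sketch below does this with a Gumbel (EV1) distribution in scipy; the flood series is synthetic, and the Gumbel choice is one common option rather than the study's specific distribution.

```python
# Minimal sketch of classical stationary flood frequency analysis: fit a
# Gumbel (EV1) distribution to an annual-maximum flood series and compute
# the T-year design flood. The series here is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
hq = stats.gumbel_r.rvs(loc=350, scale=90, size=60, random_state=rng)  # m^3/s

loc, scale = stats.gumbel_r.fit(hq)                 # maximum-likelihood fit
for T in (10, 50, 100):
    q = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)   # non-exceedance F = 1 - 1/T
    print(f"HQ{T}: {q:.0f} m^3/s")
```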