790 results for ARTIFICIAL NEURAL NETWORK


Relevance: 100.00%

Abstract:

Regional climate models (RCMs) provide reliable climate predictions for the next 90 years with high horizontal and temporal resolution. In the 21st century, a northward latitudinal and upward altitudinal shift in the distribution of plant species and phytogeographical units is expected. It is discussed how modeling a phytogeographical unit can be reduced to modeling plant distributions. The predicted shift of the Moesz line is studied as a case study (with three different modeling approaches) using 36 parameters of the REMO regional climate data set, the ArcGIS geographic information software, and the periods 1961-1990 (reference period), 2011-2040, and 2041-2070. The disadvantages of this relatively simple climate envelope modeling (CEM) approach are then discussed, and several ways of improving the model are suggested. Some statistical and artificial intelligence (AI) methods (logistic regression, cluster analysis and other clustering methods, decision trees, evolutionary algorithms, artificial neural networks) can support this development. Among them, the artificial neural network (ANN) seems to be the most suitable algorithm for this purpose, providing a black-box method for distribution modeling.
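
To make the CEM idea concrete, here is a minimal sketch (not the study's code; the data, grid size, and network shape are invented): an MLP classifier learns presence/absence from reference-period climate parameters and is then applied to shifted future-period grids.

```python
# Minimal climate-envelope sketch: an MLP learns presence/absence of a
# species from climate parameters in a reference period, then scores
# future-period climate grids. All data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_cells = 2000                                  # grid cells in the study area
X_ref = rng.normal(size=(n_cells, 36))          # 36 climate parameters (REMO-like)
y_ref = (X_ref[:, 0] + 0.5 * X_ref[:, 1] > 0).astype(int)  # toy presence/absence

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(X_ref, y_ref)

X_future = X_ref + rng.normal(0.3, 0.1, size=X_ref.shape)  # shifted future climate
p_future = model.predict_proba(X_future)[:, 1]  # occurrence probability per cell
print("cells predicted suitable in the future:", int((p_future > 0.5).sum()))
```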

Relevance: 100.00%

Abstract:

This paper describes the development and application of an ESRI ArcGIS tool that implements a multi-layer, feed-forward artificial neural network (ANN) to study the climate envelope of species. Supervised learning is achieved with the backpropagation algorithm. Based on the species distribution and the climate (and edaphic) grids of the reference and future periods, the tool predicts the future potential distribution of the studied species. The trained network can be saved and loaded. A modeling result based on the distribution of European larch (Larix decidua Mill.) is presented as a case study.
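
A minimal illustration of the kind of network the tool wraps, assuming nothing about its internals beyond what the abstract states (one hidden layer of sigmoid units, plain backpropagation, invented presence data):

```python
# One-hidden-layer feed-forward network trained by backpropagation
# (illustrative only; the ArcGIS tool's internals are not published here).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                   # climate/edaphic predictors
y = (X[:, 0] - X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy presence target

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # output delta (squared-error loss)
    d_h = (d_out @ W2.T) * h * (1 - h)          # hidden delta via the chain rule
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);  b1 -= lr * d_h.mean(0)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```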

Relevance: 100.00%

Abstract:

Traffic incidents are a major source of congestion on freeways. Diverting freeway traffic to pre-planned alternate routes has been used as a strategy to reduce delays caused by major incidents. However, it is not always beneficial to divert traffic when an incident occurs: diversion may adversely affect traffic on the alternate routes and may not yield an overall benefit. This dissertation research applies Artificial Neural Network (ANN) and Support Vector Regression (SVR) techniques to predict the percent delay reduction from route diversion, to help determine whether traffic should be diverted under given conditions. The DYNASMART-P mesoscopic traffic simulation model was used to generate the data from which the ANN and SVR models were developed. A sample network included with the DYNASMART-P package served as the base simulation network. Combinations of different levels of incident duration, capacity loss, percent of drivers diverted, VMS (variable message sign) messaging duration, and network congestion were simulated to represent different incident scenarios. The resulting percent delay reduction, average speed, and queue length for each scenario were extracted from the simulation output. The ANN and SVR models were then calibrated for percent delay reduction as a function of all the simulated input and output variables. The results show that both calibrated models, when applied to the location used to generate the calibration data, predicted delay reduction with relatively high accuracy in terms of mean square error (MSE) and regression correlation, with the ANN model outperforming the SVR model. Moreover, when the models were applied to a new location, only the ANN model produced comparably good delay-reduction predictions under high network congestion levels.
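
A hedged sketch of the model comparison described above, with invented features and targets standing in for the DYNASMART-P scenario variables:

```python
# Fit an MLP and an SVR to predict percent delay reduction from
# incident/diversion features, then compare test MSE. Feature names and
# data are illustrative, not the DYNASMART-P outputs.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([
    rng.uniform(10, 120, n),    # incident duration (min)
    rng.uniform(0, 1, n),       # fraction of capacity lost
    rng.uniform(0, 0.5, n),     # fraction of drivers diverted
    rng.uniform(0, 1, n),       # network congestion level
])
y = 20 * X[:, 2] * X[:, 1] - 5 * X[:, 3] + rng.normal(0, 1, n)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [
    ("ANN", make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                                       random_state=0))),
    ("SVR", make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```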

Relevance: 100.00%

Abstract:

Most research on stock prices is based on the present value model or the more general consumption-based model. When applied to real economic data, both are found unable to account for either the stock price level or its volatility. Three essays here attempt both to build a more realistic model and to check whether there is still room for bubbles in explaining fluctuations in stock prices. In the second chapter, several innovations are simultaneously incorporated into the traditional present value model to produce more accurate model-based fundamental prices. These innovations comprise replacing the narrower traditional dividend measure with broad dividends, a nonlinear artificial neural network (ANN) forecasting procedure for these broad dividends in place of the more common linear forecasts of traditional dividends, and a stochastic discount rate in place of a constant discount rate. Empirical results show that this model predicts fundamental prices better than alternative models using a linear forecasting process, narrow dividends, or a constant discount factor. Nonetheless, actual prices remain largely detached from fundamental prices, and the bubble-like deviations are found to coincide with business cycles. The third chapter examines possible cointegration of stock prices with fundamentals and non-fundamentals. The output gap is introduced to form the non-fundamental part of stock prices. I use a trivariate Vector Autoregression (TVAR) model and a single-equation model to run cointegration tests between these three variables. Neither cointegration test shows strong evidence of explosive behavior in the DJIA and S&P 500 data. I then apply a sup augmented Dickey-Fuller test to check for periodically collapsing bubbles in stock prices; such bubbles are found in the S&P data during the late 1990s. Employing the econometric tests from the third chapter, the fourth chapter examines whether bubbles exist in the stock prices of conventional economic sectors on the New York Stock Exchange. The ‘old economy’ as a whole is not found to have bubbles, but periodically collapsing bubbles are found in the Material and Telecommunication Services sectors and the Real Estate industry group.
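
For reference, the present value relation underlying this fundamental-price exercise can be rendered as follows; the notation is ours, with the stochastic discount rate and broad dividends slotted into the standard formula (a sketch, not the dissertation's exact specification):

$$ P_t^{f} = E_t \sum_{j=1}^{\infty} \left( \prod_{i=1}^{j} \frac{1}{1 + r_{t+i}} \right) D_{t+j}, \qquad B_t = P_t - P_t^{f}, $$

where $D_{t+j}$ denotes broad dividends (forecast here by the ANN), $r_{t+i}$ the stochastic discount rate, and $B_t$ the bubble-like deviation of the actual price $P_t$ from the fundamental price $P_t^{f}$.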

Relevance: 100.00%

Abstract:

Flow cytometry analyzers have become trusted companions due to their ability to perform fast and accurate analyses of human blood. The aim of these analyses is to determine the possible existence of abnormalities in the blood that have been correlated with serious disease states, such as infectious mononucleosis, leukemia, and various cancers. Though these analyzers provide important feedback, it is always desirable to improve the accuracy of the results, as evidenced by the misclassifications reported by some users of these devices. It would therefore be advantageous to provide a pattern-interpretation framework with better classification ability than is currently available. Toward this end, the purpose of this dissertation was to establish a feature extraction and pattern classification framework capable of providing improved accuracy for detecting specific hematological abnormalities in flow cytometric blood data. This involved extracting a unique and powerful set of shift-invariant statistical features from the multi-dimensional flow cytometry data and then using these features as inputs to a pattern classification engine composed of an artificial neural network (ANN). The contribution of this method consisted of developing a descriptor matrix that can be used to reliably assess whether a donor's blood pattern exhibits a clinically abnormal level of variant lymphocytes, blood cells that are potentially indicative of disorders such as leukemia and infectious mononucleosis. This study showed that the set of shift- and rotation-invariant statistical features extracted from the eigensystem of the flow cytometric data pattern performs better than other commonly used features in this type of disease detection, exhibiting an accuracy of 80.7%, a sensitivity of 72.3%, and a specificity of 89.2%. This performance represents a major improvement for this type of hematological classifier, which has historically been plagued by poor performance, with accuracies as low as 60% in some cases. This research ultimately shows that an improved feature space was developed that can deliver better performance for the detection of variant lymphocytes in human blood, providing significant utility in the realm of suspect-flagging algorithms for the detection of blood-related diseases.
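
One plausible reading of the eigensystem features (our sketch, not the dissertation's descriptor matrix): the eigenvalues of an event cloud's covariance matrix are invariant to shifts and rotations of the cloud, and can feed an ANN that flags abnormal samples. Sensitivity and specificity are computed as in the abstract; all data here are synthetic.

```python
# Covariance eigenvalues of a multi-parameter cytometry point cloud are
# shift- and rotation-invariant; use them as ANN inputs to flag samples.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def eigen_features(events):
    """Eigenvalues of the event covariance: invariant to shifts/rotations."""
    cov = np.cov(events, rowvar=False)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def sample(abnormal):
    spread = 2.0 if abnormal else 1.0           # abnormal donors: wider scatter
    return rng.normal(0, spread, size=(500, 4)) # 500 events x 4 channels

labels = rng.integers(0, 2, 400)                # 400 synthetic donors
X = np.array([eigen_features(sample(bool(lbl))) for lbl in labels])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:300], labels[:300])
tn, fp, fn, tp = confusion_matrix(labels[300:], clf.predict(X[300:])).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```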

Relevance: 100.00%

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying service-level agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold: cloud users can size their VMs appropriately and pay only for the resources they need, and service providers can offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified the resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, the Artificial Neural Network and the Support Vector Machine, for accurately modeling the performance of virtualized applications; we also suggested and evaluated modeling optimizations necessary to improve prediction accuracy with these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
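
The sizing step can be sketched as follows under stated assumptions (toy workload, invented SLA bound and cost proxy; an SVR stands in for the thesis's two modeling tools): learn performance as a function of resource caps, then choose the cheapest allocation whose predicted performance meets the SLA.

```python
# Performance-model-driven VM sizing: fit performance vs. resource caps,
# then search a grid of allocations for the cheapest SLA-safe one.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
caps = rng.uniform(0.1, 1.0, size=(600, 3))               # CPU, memory, I/O shares
resp = 1.0 / caps.min(axis=1) + rng.normal(0, 0.2, 600)   # toy response time (s)

model = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(caps, resp)

grid = np.array(np.meshgrid(*[np.linspace(0.1, 1.0, 10)] * 3)).reshape(3, -1).T
ok = model.predict(grid) <= 2.0                           # SLA: response <= 2 s
cost = grid.sum(axis=1)                                   # proxy for allocation cost
best = grid[ok][cost[ok].argmin()]
print("cheapest predicted-SLA-safe allocation (cpu, mem, io):", best)
```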

Relevance: 100.00%

Abstract:

The purpose of this research was to investigate the influence of elevation and other terrain characteristics on the spatial and temporal distribution of rainfall. A comparative analysis of several spatial interpolation methods was conducted on mean monthly precipitation values in order to select the best one. Building on those results, an Artificial Neural Network model was fitted for the interpolation of monthly precipitation values over a 20-year period, with inputs such as longitude, latitude, elevation, and four geomorphologic characteristics, anchored by seven weather stations; it reached a high correlation coefficient (r = 0.85). This research demonstrated a strong influence of elevation and other geomorphologic variables on the spatial distribution of precipitation and confirmed that the relationships involved are nonlinear. The model will be used to fill gaps in monthly precipitation time series and to generate maps of the spatial distribution of monthly precipitation at a resolution of 1 km².
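
A minimal sketch of such an interpolation model on invented data (coordinates, covariates, and the precipitation signal are all synthetic; r is the Pearson correlation on held-out points):

```python
# An MLP interpolates monthly precipitation from longitude, latitude,
# elevation, and four terrain covariates, mirroring the input set above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 800
lon, lat = rng.uniform(-8, -6, n), rng.uniform(39, 41, n)
elev = rng.uniform(0, 1500, n)
terrain = rng.normal(size=(n, 4))               # four geomorphologic covariates
X = np.column_stack([lon, lat, elev, terrain])
precip = 50 + 0.05 * elev + 10 * terrain[:, 0] + rng.normal(0, 10, n)  # toy mm

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                   random_state=0))
model.fit(X[:600], precip[:600])
r = np.corrcoef(precip[600:], model.predict(X[600:]))[0, 1]
print(f"hold-out correlation r = {r:.2f}")
```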

Relevance: 100.00%

Abstract:

A uniform chronology for foraminifera-based sea surface temperature records has been established for more than 120 sediment cores obtained from the equatorial and eastern Atlantic up to the Arctic Ocean. The chronostratigraphy of the last 30,000 years is mainly based on published δ18O records and accelerator mass spectrometry 14C ages converted into calendar-year ages. This high-precision age control provides the database necessary for the uniform reconstruction of the Last Glacial Maximum climate interval within the GLAMAP-2000 project.

Relevance: 100.00%

Abstract:

Valve stiction, or static friction, in control loops is a common problem in modern industrial processes. Many studies have recently been devoted to understanding, reproducing, and detecting this problem, but quantification remains a challenge. Since the valve position (mv) is normally unknown in an industrial process, the main challenge is to diagnose stiction knowing only the process output signal (pv) and the control signal (op). This paper presents an Artificial Neural Network approach to detect and quantify the amount of static friction using only the pv and op information. Different methods for preprocessing the network's training set are presented, based on the calculation of the centroid and the Fourier transform. The proposal is validated on a simulated process, and the results show a satisfactory measurement of stiction.
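
A plausible rendering of the preprocessing stage, with both methods mentioned above collapsed into one feature vector (our sketch; the signal shapes and harmonic count are assumptions, not the paper's exact procedure):

```python
# Summarize a pv-op loop by its centroid and a few Fourier magnitudes,
# producing a fixed-length feature vector that an ANN regressor could map
# to a stiction estimate.
import numpy as np

def loop_features(pv, op, n_harmonics=5):
    """Centroid of the pv-op trajectory plus leading FFT magnitudes of pv."""
    centroid = np.array([op.mean(), pv.mean()])
    spectrum = np.abs(np.fft.rfft(pv - pv.mean()))
    harmonics = spectrum[1:1 + n_harmonics] / len(pv)
    return np.concatenate([centroid, harmonics])

t = np.linspace(0, 10, 1000)
op = np.sin(2 * np.pi * 0.5 * t)                # oscillating controller output
pv = np.sign(op) * 0.8 + 0.05 * np.random.default_rng(6).normal(size=t.size)
# square-ish pv: a typical signature of a sticky valve in an oscillating loop

print(loop_features(pv, op))                    # feature vector fed to the ANN
```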

Relevance: 100.00%

Abstract:

This work consists essentially in the development of an Artificial Neural Network (ANN) to model the behavior of composite materials under fatigue loading. The proposal is to develop and present a mixed model that couples an analytical equation (the Adam equation) to the structure of the ANN. Given that composites often show similar behavior under fluctuating loads, this equation establishes a predefined comparison pattern for a generic material, so that the ANN only has to fit the behavior of a particular composite to that pattern. In this way, the ANN does not need to learn the full behavior of a given material, because the Adam equation does most of the work. The model was used in two different network architectures, modular and perceptron, in order to analyze its efficiency in distinct structures. Beyond the different architectures, the responses generated from two different data sets (with three and with two S-N curves) were analyzed. The model was also compared with results from the specialized literature, which use a conventional ANN structure. The results analyze and compare characteristics such as generalization capacity, robustness, and the Goodman diagrams produced by the networks.
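
The hybrid idea can be sketched as follows; since the exact form of the Adam equation is not reproduced in the abstract, a generic power-law S-N baseline stands in for it, and the ANN learns only the material-specific residual around that baseline (all data synthetic):

```python
# Hybrid analytical + ANN fatigue model: an analytical baseline supplies the
# generic S-N shape; the network corrects it for a specific material.
import numpy as np
from sklearn.neural_network import MLPRegressor

def baseline_logN(stress_amp):
    """Stand-in analytical S-N baseline (NOT the Adam equation itself)."""
    return 7.0 * (1.0 - stress_amp) ** 1.5      # log10(cycles to failure)

rng = np.random.default_rng(7)
s = rng.uniform(0.3, 0.9, 300)                  # normalized stress amplitude
logN = baseline_logN(s) + 0.4 * np.sin(6 * s) + rng.normal(0, 0.1, 300)

residual_net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                            random_state=0)
residual_net.fit(s.reshape(-1, 1), logN - baseline_logN(s))  # residual only

pred = baseline_logN(0.5) + residual_net.predict(np.array([[0.5]]))[0]
print(f"predicted log10(life) at s = 0.5: {pred:.2f}")
```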

Relevance: 100.00%

Abstract:

Artificial Neural Networks (ANNs), one of the branches of Artificial Intelligence (AI), are being employed as a solution to many complex problems in several areas, and solving these problems efficiently often requires implementing the networks in hardware. Among the design decisions involved in implementing ANNs in hardware, the connections between neurons are the ones that need the most attention. ANNs have recently been implemented both in Application-Specific Integrated Circuits (ASICs) and in user-configurable devices such as Field Programmable Gate Arrays (FPGAs), which can be partially rewritten at runtime, thus forming a partially reconfigurable system whose use provides several advantages, such as implementation flexibility and cost reduction. A considerable increase in the use of FPGAs for implementing ANNs has been noted. Given the above, this work proposes implementing an array of reconfigurable neurons for multilayer perceptron (MLP) topologies in an FPGA, in order to encourage feedback and reuse of the neural processors (perceptrons) placed in the same area of the circuit. A communication network capable of supporting this reuse of artificial neurons is also proposed. The architecture of the proposed system configures various MLP topologies through partial reconfiguration of the FPGA. To allow this flexibility in configuring ANNs, a set of digital components (a datapath) and a controller were developed to execute instructions that define each MLP topology.
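
A software sketch of the reuse principle, not HDL: a fixed pool of "hardware" perceptrons is time-multiplexed layer by layer, the way a controller could schedule an arbitrary MLP topology onto one reconfigurable region (pool size, topology, and weights are invented):

```python
# Simulate neuron reuse: any MLP topology is executed POOL_SIZE neurons at
# a time through the same physical units, trading latency for area.
import numpy as np

POOL_SIZE = 4                                   # physical neuron units available

def perceptron(x, w, b):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # one block of sigmoid neurons

def run_mlp(x, layers):
    """layers: list of (W, b); each layer is scheduled onto the neuron pool."""
    for W, b in layers:
        outputs = []
        for start in range(0, W.shape[1], POOL_SIZE):   # reuse the same pool
            block = slice(start, start + POOL_SIZE)
            outputs.append(perceptron(x, W[:, block], b[block]))
        x = np.concatenate(outputs)
    return x

rng = np.random.default_rng(8)
topology = [(rng.normal(size=(3, 6)), rng.normal(size=6)),   # layer 3 -> 6
            (rng.normal(size=(6, 2)), rng.normal(size=2))]   # layer 6 -> 2
print(run_mlp(rng.normal(size=3), topology))
```

Time-multiplexing a small pool mirrors the motivation above: the same circuit area serves arbitrarily wide layers at the cost of extra passes.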

Relevance: 100.00%

Abstract:

The objective of this work is to use algorithms known as Boltzmann Machines to reconstruct and classify patterns such as images. This algorithm has a structure similar to that of an Artificial Neural Network, but its network nodes make stochastic, probabilistic decisions. This work presents the theoretical framework of the main Artificial Neural Networks, the General Boltzmann Machine algorithm, and a variation of this algorithm known as the Restricted Boltzmann Machine. Computer simulations compare the backpropagation-trained Artificial Neural Network with the General Boltzmann Machine and the Restricted Boltzmann Machine, analyzing the execution times of the different algorithms and the bit-hit percentage of trained patterns that are later reconstructed. Finally, binary images with and without noise were used to train the Restricted Boltzmann Machine; these images are reconstructed and classified according to the bit-hit percentage of the reconstruction. The Boltzmann Machine algorithms were able to classify the trained patterns and showed excellent results in pattern reconstruction, with faster runtimes, and can thus be used in applications such as image recognition.
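
A compact sketch of the Restricted Boltzmann Machine procedure described above, trained with one-step contrastive divergence (CD-1) on synthetic binary patterns and scored by the same bit-hit percentage (learning rate, sizes, and data are invented):

```python
# Minimal RBM trained with CD-1 on binary patterns, then used to
# reconstruct them; the bit-hit percentage measures reconstruction quality.
import numpy as np

rng = np.random.default_rng(9)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 16, 8, 0.1
W = rng.normal(0, 0.1, (n_vis, n_hid))
a, b = np.zeros(n_vis), np.zeros(n_hid)         # visible / hidden biases

patterns = rng.integers(0, 2, size=(6, n_vis)).astype(float)  # training set

for epoch in range(2000):
    v0 = patterns
    ph0 = sigmoid(v0 @ W + b)                   # hidden probabilities (data)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                 # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b)                  # hidden probabilities (model)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)   # CD-1 weight update
    a += lr * (v0 - pv1).mean(0)
    b += lr * (ph0 - ph1).mean(0)

h = (sigmoid(patterns @ W + b) > 0.5).astype(float)
recon = (sigmoid(h @ W.T + a) > 0.5).astype(float)
bit_hit = (recon == patterns).mean()            # bit-hit percentage, as above
print(f"bit-hit percentage: {100 * bit_hit:.1f}%")
```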