878 results for Prediction by neural networks


Relevância: 100.00%

Resumo:

Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
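The quantitative-matrix method mentioned above can be sketched with a toy position-specific scoring matrix; the residue scores and peptides below are hypothetical, not taken from any published allele model:

```python
# Sketch of quantitative-matrix (PSSM) scoring for MHC-binding
# prediction. All matrix values and peptides are hypothetical.

# Position-specific scores for a 3-mer motif: PSSM[position][residue]
PSSM = [
    {"A": 1.2, "L": 0.8, "K": -0.5},
    {"A": 0.1, "L": 1.5, "K": 0.3},
    {"A": -0.2, "L": 0.4, "K": 1.1},
]

def score_peptide(peptide, pssm):
    """Sum position-specific contributions; higher = stronger predicted binding."""
    return sum(pssm[i].get(res, 0.0) for i, res in enumerate(peptide))

# Rank candidate peptides by predicted binding score.
candidates = ["ALK", "KLA", "AAA"]
ranked = sorted(candidates, key=lambda p: score_peptide(p, PSSM), reverse=True)
print(ranked[0])  # peptide with the highest matrix score
```

Binding motifs, neural networks, and hidden Markov models replace the additive matrix score with more expressive scoring functions, but the ranking workflow is the same.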

Relevância: 100.00%

Resumo:

Ancillary services represent a good business opportunity that must be considered by market players. This paper presents a new methodology for ancillary services market dispatch. The method considers the bids submitted to the market and includes a market clearing mechanism based on deterministic optimization. An artificial neural network is used for day-ahead prediction of Regulation Down, Regulation Up, Spin Reserve, and Non-Spin Reserve requirements. Two test cases based on California Independent System Operator data concerning dispatch of Regulation Down, Regulation Up, Spin Reserve, and Non-Spin Reserve services are included in this paper to illustrate the application of the proposed method: (1) dispatch considering simple bids; (2) dispatch considering complex bids.
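The simple-bid clearing mechanism can be sketched as a greedy merit-order dispatch for a single reserve product; the bid data are hypothetical, and the full method also handles complex bids and ANN-predicted requirements:

```python
def clear_simple_bids(bids, requirement):
    """Greedy merit-order clearing: accept the cheapest bids first until
    the reserve requirement (MW) is met. Each bid is (name, capacity_mw,
    price). Returns (accepted list, marginal clearing price)."""
    accepted, remaining, price = [], requirement, 0.0
    for name, cap, p in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(cap, remaining)       # accept only what is still needed
        accepted.append((name, take, p))
        remaining -= take
        price = p                        # last accepted bid sets the price
    return accepted, price

# Hypothetical Regulation Up bids: (bidder, MW offered, $/MW)
bids = [("G1", 50, 12.0), ("G2", 30, 8.0), ("G3", 40, 10.0)]
accepted, price = clear_simple_bids(bids, requirement=80)
print(accepted, price)  # G2 and G3 fully accepted, G1 partially; price 12.0
```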

Relevância: 100.00%

Resumo:

Customer lifetime value (LTV) enables using client characteristics, such as recency, frequency and monetary (RFM) value, to describe the value of a client through time in terms of profitability. We present the concept of LTV applied to telemarketing for improving the return on investment, using a recent (from 2008 to 2013) and real case study of bank campaigns to sell long-term deposits. The goal was to benefit from past contact history to extract additional knowledge. A total of twelve LTV input variables were tested, under a forward selection method and using a realistic rolling windows scheme, highlighting the validity of five new LTV features. The results achieved by our LTV data-driven approach using neural networks allowed an improvement of up to 4 pp in the Lift cumulative curve for targeting the deposit subscribers when compared with a baseline model (with no history data). Explanatory knowledge was also extracted from the proposed model, revealing two highly relevant LTV features: the last result of the previous campaign to sell the same product, and the frequency of past client successes. The obtained results are particularly valuable for contact center companies, which can improve predictive performance without even having to ask for more information from the companies they serve.
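The Lift cumulative curve used to evaluate targeting can be sketched in a few lines; the scores and subscription outcomes below are hypothetical:

```python
def cumulative_lift(scores, labels, fraction):
    """Lift of the top `fraction` of clients ranked by model score:
    response rate in that slice divided by the overall response rate."""
    ranked = [y for _, y in sorted(zip(scores, labels), key=lambda t: -t[0])]
    k = max(1, int(len(ranked) * fraction))
    top_rate = sum(ranked[:k]) / k
    base_rate = sum(labels) / len(labels)
    return top_rate / base_rate

# Hypothetical model scores and outcomes (1 = subscribed a deposit)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
print(cumulative_lift(scores, labels, 0.25))  # → 2.0
```

A lift of 2.0 means the top quartile selected by the model contains twice the proportion of subscribers found in the whole client base; an improvement of 4 pp on this curve directly reduces wasted calls.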

Relevância: 100.00%

Resumo:

The females of the two species of the Lutzomyia intermedia complex can be easily distinguished, but the males of each species are quite similar. The ratios between the extra-genital and the genital structures of L. neivai are larger than those of L. intermedia s. s., according to ANOVA. An artificial neural network was trained with a set of 300 examples, randomly taken from a sample of 358 individuals. The input vectors consisted of several ratios between some structures of each insect. The model was tested on the remaining 58 insects, 56 of which (96.6%) were correctly identified. This success rate can be considered remarkable if one takes into account the difficulty of attaining comparable results using traditional statistical techniques.
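As a rough illustration of the train/holdout workflow described above (300 of 358 individuals for training, the rest for testing), here is a minimal stand-in that uses a nearest-centroid classifier on hypothetical morphometric ratios rather than the paper's trained ANN:

```python
import random

def nearest_centroid_fit(X, y):
    """Per-class mean of the ratio feature vectors."""
    cents = {}
    for cls in set(y):
        rows = [x for x, lab in zip(X, y) if lab == cls]
        cents[cls] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def predict(cents, x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda c: dist(cents[c], x))

# Hypothetical morphometric ratios: L. neivai tends toward larger values.
random.seed(0)
data = [([random.gauss(1.3, 0.1), random.gauss(0.9, 0.05)], "neivai")
        for _ in range(50)] + \
       [([random.gauss(1.0, 0.1), random.gauss(0.7, 0.05)], "intermedia")
        for _ in range(50)]
random.shuffle(data)
train, test = data[:80], data[80:]          # hold out 20% for validation
cents = nearest_centroid_fit([x for x, _ in train], [y for _, y in train])
acc = sum(predict(cents, x) == y for x, y in test) / len(test)
print(round(acc, 3))
```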

Relevância: 100.00%

Resumo:

The control and prediction of wastewater treatment plants poses an important goal: to avoid breaking the environmental balance by always keeping the system in stable operating conditions. It is known that qualitative information — coming from microscopic examinations and subjective remarks — has a deep influence on the activated sludge process; in particular, on the total amount of effluent suspended solids, one of the measures of overall plant performance. The search for an input–output model of this variable and the prediction of sudden increases (bulking episodes) is thus a central concern to ensure the fulfillment of current discharge limitations. Unfortunately, the strong interrelation between variables, their heterogeneity and the very high amount of missing information makes the use of traditional techniques difficult, or even impossible. Through the combined use of several methods — rough set theory and artificial neural networks, mainly — reasonable prediction models are found, which also serve to show the different importance of variables and provide insight into the process dynamics.

Relevância: 100.00%

Resumo:

PURPOSE: To explore whether triaxial accelerometric measurements can be utilized to accurately assess speed and incline of running in free-living conditions. METHODS: Body accelerations during running were recorded at the lower back and at the heel by a portable data logger in 20 human subjects, 10 men and 10 women. After parameterizing body accelerations, two neural networks were designed to recognize each running pattern and calculate speed and incline. Each subject ran 18 times on outdoor roads at various speeds and inclines; 12 runs were used to calibrate the neural networks, whereas the 6 other runs were used to validate the model. RESULTS: A small difference between the estimated and the actual values was observed: the root mean square error (RMSE) was 0.12 m x s(-1) for speed and 0.014 radian (rad) (or 1.4% in absolute value) for incline. Multiple regression analysis allowed accurate prediction of speed (RMSE = 0.14 m x s(-1)) but not of incline (RMSE = 0.026 rad or 2.6% slope). CONCLUSION: Triaxial accelerometric measurements allow an accurate estimation of running speed and terrain incline (the latter with more uncertainty). This will permit the validation of the energetic results generated on the treadmill as applied to more physiological, unconstrained running conditions.
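The error measure reported above (RMSE) takes a few lines to compute; the speed values below are hypothetical:

```python
import math

def rmse(estimated, actual):
    """Root mean square error between model estimates and reference values."""
    return math.sqrt(sum((e - a) ** 2 for e, a in zip(estimated, actual))
                     / len(actual))

# Hypothetical speed estimates (m/s) against actual measured speeds
estimated = [3.1, 4.0, 5.2, 2.9]
actual    = [3.0, 4.1, 5.0, 3.0]
print(round(rmse(estimated, actual), 3))  # → 0.132
```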

Relevância: 100.00%

Resumo:

Artificial Neural Networks (ANNs) are mathematical models capable of estimating non-linear response surfaces, and they can capture responses that conventional statistical models cannot. The objective of this study was therefore to develop and test ANNs for estimating the rainfall erosivity index (EI30) as a function of geographical location for the state of Rio de Janeiro, Brazil, and to generate a thematic visualization map. Using latitude, longitude, and altitude as inputs, the ANNs estimated EI30 acceptably and allowed visualization of its spatial variability. ANNs are thus a potential option for estimating climatic variables, in substitution of traditional interpolation methods.

Relevância: 100.00%

Resumo:

In this study, the effects of hot-air drying conditions on color, water holding capacity, and total phenolic content of dried apple were investigated using an artificial neural network as an intelligent modeling system. A genetic algorithm was then used to optimize the drying conditions. Apples were dried at different temperatures (40, 60, and 80 °C) and at three air flow rates (0.5, 1, and 1.5 m/s). Applying the leave-one-out cross validation methodology, simulated and experimental data were in good agreement, presenting an error below 2.4 %. Optimal quality-index values were found at 62.9 °C and 1.0 m/s using the genetic algorithm.
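The genetic-algorithm optimization step can be sketched as below; the quality-index function here is a hypothetical smooth stand-in peaking near the reported optimum (62.9 °C, 1.0 m/s), not the trained ANN from the study:

```python
import random

random.seed(42)

def quality(temp, flow):
    """Hypothetical quality index peaking near 62.9 degC and 1.0 m/s,
    standing in for the trained ANN used in the study."""
    return -((temp - 62.9) / 20) ** 2 - ((flow - 1.0) / 0.5) ** 2

def genetic_optimize(pop_size=30, generations=60):
    # Random initial population within the experimental ranges.
    pop = [(random.uniform(40, 80), random.uniform(0.5, 1.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: quality(*ind), reverse=True)
        parents = pop[: pop_size // 2]        # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            t = (a[0] + b[0]) / 2 + random.gauss(0, 0.5)   # crossover + mutation
            f = (a[1] + b[1]) / 2 + random.gauss(0, 0.02)
            children.append((min(max(t, 40), 80), min(max(f, 0.5), 1.5)))
        pop = parents + children
    return max(pop, key=lambda ind: quality(*ind))

temp, flow = genetic_optimize()
print(round(temp, 1), round(flow, 2))
```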

Relevância: 100.00%

Resumo:

Sigmoid-type belief networks, a class of probabilistic neural networks, provide a natural framework for compactly representing probabilistic information in a variety of unsupervised and supervised learning problems. Often the parameters used in these networks need to be learned from examples. Unfortunately, estimating the parameters via exact probabilistic calculations (i.e., the EM algorithm) is intractable even for networks with fairly small numbers of hidden units. We propose to avoid the infeasibility of the E step by bounding likelihoods instead of computing them exactly. We introduce extended and complementary representations for these networks and show that the estimation of the network parameters can be made fast (reduced to quadratic optimization) by performing the estimation in either of the alternative domains. The complementary networks can be used for continuous density estimation as well.
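The bounding step rests on the standard variational (Jensen) lower bound on the log-likelihood; the paper's extended and complementary representations refine this generic inequality, which is not reproduced here:

```latex
% For visible units v, hidden units h, and any distribution q(h),
% Jensen's inequality gives a tractable lower bound on the likelihood:
\log p(v) \;=\; \log \sum_{h} p(v, h)
\;\ge\; \sum_{h} q(h) \,\log \frac{p(v, h)}{q(h)}
```

Maximizing the right-hand side over the model parameters (with q chosen to keep the bound tight) replaces the intractable exact E step.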

Relevância: 100.00%

Resumo:

This work analyzes the use of linear discriminant models, multi-layer perceptron neural networks and wavelet networks for corporate financial distress prediction. Although simple and easy to interpret, linear models require statistical assumptions that may be unrealistic. Neural networks are able to discriminate patterns that are not linearly separable, but the large number of parameters involved in a neural model often causes generalization problems. Wavelet networks are classification models that implement nonlinear discriminant surfaces as the superposition of dilated and translated versions of a single "mother wavelet" function. In this paper, an algorithm is proposed to select dilation and translation parameters that yield a wavelet network classifier with good parsimony characteristics. The models are compared in a case study involving failed and continuing British firms in the period 1997-2000. Problems associated with over-parameterized neural networks are illustrated and the Optimal Brain Damage pruning technique is employed to obtain a parsimonious neural model. The results, supported by a re-sampling study, show that both neural and wavelet networks may be a valid alternative to classical linear discriminant models.
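The Optimal Brain Damage pruning used to obtain a parsimonious model can be sketched as ranking weights by saliency and zeroing the least salient; the weights and Hessian diagonal below are hypothetical:

```python
def obd_prune(weights, hessian_diag, fraction):
    """Optimal Brain Damage step: rank weights by saliency
    s_i = h_ii * w_i**2 / 2 and zero out the lowest-saliency fraction."""
    saliency = [h * w * w / 2 for w, h in zip(weights, hessian_diag)]
    k = int(len(weights) * fraction)
    cut = sorted(range(len(weights)), key=lambda i: saliency[i])[:k]
    return [0.0 if i in cut else w for i, w in enumerate(weights)]

# Hypothetical weights and diagonal Hessian entries of a trained network
w = [0.8, -0.05, 1.2, 0.02, -0.6]
h = [1.0, 1.0, 0.5, 2.0, 1.0]
print(obd_prune(w, h, fraction=0.4))  # the two near-zero weights are removed
```

In practice the network is retrained after each pruning round, and rounds are repeated while generalization on a validation set improves.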

Relevância: 100.00%

Resumo:

This work analyzes the use of linear and neural network models for financial distress classification, with emphasis on the issues of input variable selection and model pruning. A data-driven method for selecting input variables (financial ratios, in this case) is proposed. A case study involving 60 British firms in the period 1997-2000 is used for illustration. It is shown that the use of the Optimal Brain Damage pruning technique can considerably improve the generalization ability of a neural model. Moreover, the set of financial ratios obtained with the proposed selection procedure is shown to be an appropriate alternative to the ratios usually employed by practitioners.

Relevância: 100.00%

Resumo:

The evolution of commodity computing led to the possibility of efficient usage of interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution, considering the network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction, considering chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of the process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online predictions due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.

Relevância: 100.00%

Resumo:

This work aims to develop an intelligent system for detecting workpiece burn in the surface (tangential plane) grinding process, using a multilayer perceptron neural network trained to generalize the process and, consequently, obtain the burning threshold. In general, the occurrence of burn in the grinding process can be detected by the DPO and FKS parameters; however, these parameters are not effective under the machining conditions used in this work. The acoustic emission signal and the electric power of the grinding wheel drive motor are the input variables, and the output variable is the occurrence of burn. In the experimental work, one type of steel (quenched ABNT 1045) and one type of grinding wheel, designated TARGA, model ART 3TG80.3 NVHB, were employed.

Relevância: 100.00%

Resumo:

The objective of this work is to develop a methodology for electric load forecasting based on a neural network. Here, the backpropagation algorithm is used with an adaptive process based on fuzzy logic, employing a decaying exponential function to avoid instability in the convergence process. This methodology results in faster training than the conventional formulation of the backpropagation algorithm. Results are presented using data from a Brazilian electric company and show very good performance for the proposed objective.
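A decaying exponential step size of the kind described can be sketched as follows; the exact schedule and the fuzzy adaptation used in the paper are not reproduced here, so the form below (and the 1-D quadratic error standing in for the network's error surface) is an assumption:

```python
import math

def learning_rate(epoch, eta0=0.4, tau=20.0):
    """Decaying-exponential schedule: large early steps for fast training,
    small late steps to avoid oscillation near convergence."""
    return eta0 * math.exp(-epoch / tau)

# Gradient descent on a 1-D quadratic error E(w) = (w - 3)**2,
# a hypothetical stand-in for the network's error surface.
w = 0.0
for epoch in range(100):
    grad = 2 * (w - 3)
    w -= learning_rate(epoch) * grad
print(round(w, 3))  # → 3.0
```

With a fixed large step size the same descent would oscillate; the decay damps the updates as the minimum is approached, which is the stabilizing effect the methodology exploits.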