15 results for "Statistical models of Box-Jenkins. Artificial neural networks (ANN). Oil flow curve"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
This paper presents a multilayer perceptron neural network combined with the Nelder-Mead simplex method to detect damage in beams with multiple supports. The input parameters are based on natural frequencies and modal flexibility. It was assumed that only a limited number of modes were available and that only vertical degrees of freedom were measured. The reliability of the proposed methodology is assessed through the generation of random damage scenarios and the definition of three types of errors that can occur during the damage-identification process. Results show that the methodology can reliably determine the damage scenarios, although its application to large beams may be limited by the high computational cost of training the neural network.
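To illustrate the kind of hybrid scheme described above, here is a minimal Python sketch in which an MLP provides an initial damage estimate that a Nelder-Mead search then refines. The forward model beam_frequencies, the damage parameterization and all numbers are hypothetical stand-ins, not the paper's actual beam model.

import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

def beam_frequencies(damage):
    # Hypothetical forward model: maps per-segment stiffness reductions to
    # the first three natural frequencies; a real model would come from a
    # finite-element formulation of the multi-support beam.
    base = np.array([12.0, 48.0, 108.0])  # undamaged frequencies, Hz
    return base * np.sqrt(1.0 - 0.5 * np.mean(damage))

# Train the MLP on randomly generated damage scenarios (inverse mapping:
# measured frequencies -> damage vector).
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 0.4, size=(500, 4))
F = np.array([beam_frequencies(d) for d in D])
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                   random_state=0).fit(F, D)

# Use the MLP output as a starting point and refine it with Nelder-Mead by
# matching the measured frequencies.
measured = beam_frequencies(np.array([0.3, 0.0, 0.1, 0.0]))
x0 = mlp.predict(measured.reshape(1, -1))[0]
result = minimize(lambda d: np.sum((beam_frequencies(d) - measured) ** 2),
                  x0, method='Nelder-Mead')
print(result.x)  # refined damage estimate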
Abstract:
In this study, an effective microbial consortium for the biodegradation of phenol was grown under different operational conditions, and the effects of phosphate concentration (1.4, 2.8 and 4.2 g L⁻¹), temperature (25, 30 and 35 °C), agitation (150, 200 and 250 rpm) and pH (6, 7 and 8) on phenol degradation were investigated; an artificial neural network (ANN) model was then developed to predict degradation. The learning, recall and generalization characteristics of neural networks were studied using data from the phenol-degradation system. The efficiency of the model generated by the ANN was then tested against the experimental results. In both cases, the results corroborate the idea that aeration and temperature are crucial to increasing the efficiency of biodegradation.
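As a rough illustration of the modelling step, the following Python sketch trains an MLP regressor on a full factorial of the four operational variables listed above. The response values are synthetic stand-ins for the laboratory measurements, and the network size is an arbitrary choice, not the architecture used in the study.

import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Full factorial of the operational conditions listed in the abstract:
# phosphate (g/L), temperature (deg C), agitation (rpm) and pH.
levels = [(1.4, 2.8, 4.2), (25, 30, 35), (150, 200, 250), (6, 7, 8)]
X = np.array(list(itertools.product(*levels)), dtype=float)

# Synthetic degradation efficiencies (%) standing in for lab data.
rng = np.random.default_rng(1)
y = (50.0 + 1.5 * (X[:, 1] - 25) + 0.02 * X[:, 2]
     - 4.0 * np.abs(X[:, 3] - 7) + rng.normal(0.0, 1.0, len(X)))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[2.8, 30, 200, 7]]))  # predicted % degradation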
Abstract:
Background: Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease-control efforts. The present study compared the performance of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria.
Methods: The study was divided into two parts. For the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performance of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data was compared.
Results: Nested PCR was taken as the gold standard for the diagnosis of both symptomatic and asymptomatic malaria because it detected the largest number of cases and presented the highest specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia, although it could not discriminate the Plasmodium species in 12 cases of mixed infection (Plasmodium vivax + Plasmodium falciparum). Microscopy also performed poorly in the detection of asymptomatic cases (61.25% correct diagnoses). The MalDANN system using epidemiological data alone performed worse than light microscopy (56% correct diagnoses); however, when plasma levels of interleukin-10 and interferon-gamma were provided as inputs, its performance increased considerably (80% correct diagnoses).
Conclusions: An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using epidemiological data alone, an approach based on neural networks may be feasible where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
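The abstract does not describe MalDANN's internals, but a minimal sketch of a neural-network classifier fed with the two cytokine levels mentioned (IL-10 and IFN-gamma) might look as follows in Python. The data are synthetic and the architecture is a guess, not the actual MalDANN system.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic plasma cytokine levels (IL-10, IFN-gamma) for uninfected
# subjects and asymptomatic carriers; the group separations are invented.
rng = np.random.default_rng(2)
uninfected = rng.normal(loc=[20.0, 15.0], scale=5.0, size=(100, 2))
carriers = rng.normal(loc=[60.0, 40.0], scale=10.0, size=(100, 2))
X = np.vstack([uninfected, carriers])
y = np.array([0] * 100 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # fraction correctly classified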
Abstract:
A new series of Nb-stabilized austenitic stainless steels, without Mo additions, not susceptible to delta-ferrite formation, devoid of intermetallic phases (sigma and chi) and free of deformation-induced martensite, is being developed for high-temperature applications as well as for corrosive environments. The base composition is a 15Cr-15Ni steel with Nb additions of 0.5, 1.0 and 2 wt%. Mechanical properties and oxidation and corrosion resistance have already been investigated in previous papers. In this paper, the effects of Nb on the stacking fault energy (SFE), strain hardening and recrystallization resistance are evaluated with the help of artificial neural networks (ANN).
Abstract:
This paper addresses the problem of water-demand forecasting for the real-time operation of water supply systems. The study was conducted to identify the best-fit model using hourly consumption data from the water supply system of Araraquara, São Paulo, Brazil. Artificial neural networks (ANNs) were used in view of their capability to match or even improve on regression-model forecasts. The ANNs used were the multilayer perceptron with the back-propagation algorithm (MLP-BP), the dynamic neural network (DAN2), and two hybrid ANNs. The hybrid models used the error produced by the Fourier series forecast as input to the MLP-BP and DAN2, called ANN-H and DAN2-H, respectively. The candidate inputs for the neural networks were selected from the literature and from correlation analysis. The results from the hybrid models were promising, with DAN2 performing better than the tested MLP-BP models. DAN2-H, identified as the best model, produced a mean absolute error (MAE) of 3.3 L/s and 2.8 L/s on the training and test sets, respectively, for the prediction of the next hour, which represented about 12% of the average consumption. The best forecasting model for the next 24 hours was again DAN2-H, which outperformed the other compared models and produced an MAE of 3.1 L/s and 3.0 L/s on the training and test sets, respectively, again about 12% of average consumption. DOI: 10.1061/(ASCE)WR.1943-5452.0000177.
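A minimal Python sketch of the hybrid idea, i.e., a Fourier-series fit whose residuals are forecast by an MLP (the ANN-H variant; DAN2 is not sketched here), is shown below. The synthetic demand series, the number of harmonics and the lag window are illustrative assumptions, not the models calibrated in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic hourly demand with a daily cycle plus slower structure that
# the Fourier fit will miss.
rng = np.random.default_rng(3)
t = np.arange(24 * 60, dtype=float)  # 60 days of hourly data
demand = (25.0 + 8.0 * np.sin(2 * np.pi * t / 24)
          + 2.0 * np.sin(2 * np.pi * t / (24 * 7))
          + rng.normal(0.0, 0.5, t.size))

# 1) Fourier-series fit with K daily harmonics, by linear least squares.
K = 3
def basis(tt):
    cols = [np.ones_like(tt)]
    for k in range(1, K + 1):
        cols += [np.sin(2 * np.pi * k * tt / 24),
                 np.cos(2 * np.pi * k * tt / 24)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(basis(t), demand, rcond=None)
resid = demand - basis(t) @ coef

# 2) An MLP forecasts the next residual from the previous 24 residuals.
lag = 24
X = np.array([resid[i:i + lag] for i in range(resid.size - lag)])
y = resid[lag:]
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0).fit(X, y)

# Hybrid one-hour-ahead forecast = Fourier term + predicted residual.
next_hour = basis(np.array([t.size], dtype=float)) @ coef
print(next_hour[0] + mlp.predict(resid[-lag:].reshape(1, -1))[0])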
Abstract:
Background: The evolutionary advantages of selective attention are unclear. Since the study of selective attention began, it has been suggested that the nervous system only processes the most relevant stimuli because of its limited capacity [1]. An alternative proposal is that action planning requires the inhibition of irrelevant stimuli, which forces the nervous system to limit its processing [2]. An evolutionary approach might provide additional clues to clarify the role of selective attention.
Methods: We developed Artificial Life simulations in which animals were repeatedly presented with two objects, "left" and "right", each of which could be "food" or "non-food". The animals' neural networks (multilayer perceptrons) had two input nodes, one for each object, and two output nodes that determined whether the animal ate each of the objects. The networks also had a variable number of hidden nodes, which determined whether they had enough capacity to process both stimuli (Table 1). The evolutionary relevance of the left and right food objects could also vary, depending on how much the animal's fitness increased when it ingested them (Table 1). We compared sensory processing in animals with or without limited capacity, which evolved in simulations in which the objects had the same or different relevances.
Table 1. Nine sets of simulations were performed, varying the values of food objects and the number of hidden nodes in the neural networks. The values of left and right food were swapped during the second half of the simulations. Non-food objects were always worth -3.
The evolution of the neural networks was simulated by a simple genetic algorithm. Fitness was a function of the number of food and non-food objects each animal ate, and the chromosomes determined the node biases and synaptic weights. In each simulation, 10 populations of 20 individuals each evolved in parallel for 20,000 generations; the relevance of the food objects was then swapped and the simulation ran for another 20,000 generations. The neural networks were evaluated by their ability to identify the two objects correctly. The detectability (d') of the left and right objects was calculated using Signal Detection Theory [3].
Results and conclusion: When both stimuli were equally relevant, networks with two hidden nodes processed only one stimulus and ignored the other; with four or eight hidden nodes, they could correctly identify both stimuli. When the stimuli had different relevances, the d' for the most relevant stimulus was higher than the d' for the least relevant one, even when the networks had four or eight hidden nodes. We conclude that selection mechanisms arose in our simulations depending not only on the size of the neural networks but also on the stimuli's relevance for action.
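For reference, the detectability measure from Signal Detection Theory [3] is d' = z(hit rate) - z(false-alarm rate). A small Python helper follows, with a standard correction for extreme rates (an assumption, since the abstract does not say how edge cases were handled):

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # d' = z(hit rate) - z(false-alarm rate); the +0.5/+1 correction
    # keeps the z-scores finite when a rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: an animal that ate 90 of 100 food objects ("hits") and 20 of
# 100 non-food objects ("false alarms"):
print(d_prime(90, 10, 20, 80))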
Abstract:
Hierarchical multi-label classification is a complex classification task in which the classes are hierarchically structured and each example may simultaneously belong to more than one class at each level of the hierarchy. In this paper, we extend our previous work, in which we investigated a local-based classification method that incrementally trains a multilayer perceptron for each level of the classification hierarchy. Predictions made by the neural network at a given level are used as inputs to the neural network responsible for prediction at the next level. We compare the proposed method with one state-of-the-art global method and two decision-tree induction methods on several hierarchical multi-label classification datasets. A thorough experimental analysis shows that our method obtains results competitive with those of a robust global method with respect to both precision and recall.
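A minimal Python sketch of the local, per-level scheme described above: one MLP per hierarchical level, with the previous level's predicted probabilities appended to the next level's inputs. The synthetic data, the choice to keep the original features alongside the predictions, and the network sizes are assumptions, not the paper's exact setup.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic two-level hierarchy: a coarse label at level 1 and a finer
# label at level 2 that depends on more of the feature space.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))
y1 = (X[:, 0] > 0).astype(int)              # level-1 classes
y2 = (X[:, 0] + X[:, 1] > 0).astype(int)    # level-2 classes

# Level 1: a plain MLP on the original features.
clf1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, y1)

# Level 2: the level-1 predicted probabilities are appended to the inputs.
X2 = np.hstack([X, clf1.predict_proba(X)])
clf2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X2, y2)
print(clf2.score(X2, y2))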
Abstract:
Studies of consumer-resource interactions suggest that individual diet specialisation is empirically widespread and theoretically important to the organisation and dynamics of populations and communities. We used weighted networks to analyse resource use by sea otters, testing three alternative models for how individual diet specialisation may arise. As expected, individual specialisation was absent when otter density was low, but increased at high otter density. The high-density emergence of nested resource-use networks was consistent with the model assuming that individuals share preference ranks. However, the density-dependent emergence of a non-nested, modular network for core resources was more consistent with the competitive refuge model. Individuals from different diet modules showed predictable variation in rank-order prey preferences and in handling times of core resources, further supporting the competitive refuge model. Our findings support a hierarchical organisation of diet specialisation and suggest that individual use of core and marginal resources may be driven by different selective pressures.
Abstract:
The use of statistical methods to analyze large databases of text has been useful in unveiling patterns of human behavior and establishing historical links between cultures and languages. In this study, we identified literary movements by treating books published from 1590 to 1922 as complex networks, whose metrics were analyzed with multivariate techniques to generate six clusters of books. These clusters correspond to time periods coinciding with relevant literary movements over the last five centuries. The most important factor contributing to the distinctions between literary styles was the average shortest path length, in particular the asymmetry of its distribution. Furthermore, there has emerged over time a trend toward larger average shortest path lengths, which is correlated with increased syntactic complexity, and a more uniform use of words, reflected in a smaller power-law coefficient for the distribution of word frequency. Changes in literary style were also found to be driven by opposition to earlier writing styles, as revealed by the analysis performed with geometrical concepts. The approaches adopted here are generic and may be extended to analyze a number of features of languages and cultures.
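For readers wanting to reproduce the basic measurement, the following Python sketch builds a word-adjacency network from a toy text with networkx and computes its average shortest path length, the metric reported above as most discriminative. Real analyses would of course use whole books and additional preprocessing.

import networkx as nx

# Word-adjacency network: consecutive words become linked nodes.
text = "the quick brown fox jumps over the lazy dog while the quick dog sleeps"
words = text.split()
G = nx.Graph()
G.add_edges_from(zip(words, words[1:]))

if nx.is_connected(G):  # the metric is defined only on connected graphs
    print(nx.average_shortest_path_length(G))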
Abstract:
The fact that there is complex and bidirectional communication between the immune and nervous systems has been well demonstrated. Lipopolysaccharide (LPS), a component of gram-negative bacteria, is widely used to systemically stimulate the immune system and generate profound physiological and behavioural changes, known as sickness behaviour (e.g. anhedonia, lethargy, loss of appetite, anxiety, sleepiness). Different ethological tools have been used to analyse the behavioural modifications induced by LPS; however, many researchers have analysed only individual tests, a single LPS dose or a single ethological parameter, leading to disagreements over the data. In the present study, we investigated the effects of different doses of LPS (10, 50, 200 and 500 µg/kg, i.p.) in young male Wistar rats (weighing 180-200 g; 8-9 weeks old) on the ethological and spatiotemporal parameters of the elevated plus-maze, light-dark box, elevated T-maze and open-field tests, and on the emission of ultrasonic vocalizations. LPS caused a dose-dependent increase in anxiety-like behaviours, forming an inverted-U curve that peaked at the 200 µg/kg dose. However, these anxiety-like behaviours were detected only by complementary ethological analysis (stretching, grooming, immobility responses and alarm calls), and these reactions seem to be a very sensitive tool for assessing the first signs of sickness behaviour. In summary, the present work clearly showed that there are resting and alertness reactions induced by opposing neuroimmune mechanisms (a neuroimmune bias) that can lead to anxiety behaviours, suggesting that data may be misinterpreted when only a few ethological variables or a single dose of LPS are analysed. Finally, we hypothesize that this bias is an evolutionary tool that increases an animal's security while the body recovers from a systemic infection.
Abstract:
Shared attention is a very important type of communication among human beings. It is sometimes described as the most complex form of communication, constituted by a sequence of four steps: mutual gaze, gaze following, imperative pointing and declarative pointing. Several approaches have been proposed in the Human-Robot Interaction area to solve part of the shared-attention process; most of them address only the first two steps. Models based on temporal difference, neural networks, probabilistic methods and reinforcement learning have been used in several works. In this article, we present a robotic architecture that gives a robot or agent the capacity to learn mutual gaze, gaze following and declarative pointing using a robotic head interacting with a caregiver. Three learning methods have been incorporated into this architecture, and their performance has been compared to find the most suitable one for real experiments. The learning capabilities of the architecture were analyzed by observing the robot interacting with a human in a controlled environment. The experimental results show that the robotic head is able to produce appropriate behaviour and to learn from social interaction.
Abstract:
Competitive learning is an important machine-learning approach that is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible while attempting to reject intruder particles. The particles' walking rule is a stochastic combination of random and preferential movements. The model has been applied to community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, with low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters, using an evaluator index that monitors the information generated by the competition process itself. We hope this paper provides an alternative approach to the study of competitive learning.
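A simplified Python sketch of the walking rule follows: with some probability a particle moves preferentially toward neighbours it already dominates, and otherwise it moves to a uniformly random neighbour. The domination-level update and all constants are illustrative assumptions rather than the paper's exact equations.

import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
G = nx.karate_club_graph()
n_particles, n_nodes = 2, G.number_of_nodes()

# domination[i, v]: how strongly particle i currently holds node v.
domination = np.full((n_particles, n_nodes), 1.0 / n_particles)
position = rng.choice(n_nodes, size=n_particles, replace=False)
p_pref = 0.6  # probability of a preferential (vs. random) move

for _ in range(20000):
    for i in range(n_particles):
        nbrs = np.array(list(G.neighbors(position[i])))
        if rng.random() < p_pref:
            w = domination[i, nbrs]        # preferential: favour own turf
            nxt = rng.choice(nbrs, p=w / w.sum())
        else:
            nxt = rng.choice(nbrs)         # uniformly random neighbour
        domination[:, nxt] *= 0.9          # the visit weakens all holds...
        domination[i, nxt] += 0.1          # ...and reinforces the visitor's
        position[i] = nxt

print(domination.argmax(axis=0))  # crude community label for each node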
Abstract:
Solar reactors can be attractive for photodegradation processes owing to their lower electrical energy demand. The performance of a solar reactor under two flow configurations, plug flow and mixed flow, is compared based on experimental results from a pilot-scale solar reactor. Aqueous solutions of phenol were used as a model for industrial wastewater containing organic contaminants. Batch experiments were carried out under clear sky, resulting in removal rates in the range of 96-100%. The dissolved-organic-carbon removal rate was simulated by an empirical model based on neural networks, which was fitted to the experimental data with a correlation coefficient of 0.9856. This approach made it possible to estimate the effects of process variables that could not be evaluated experimentally. Simulations with different reactor configurations indicated relevant aspects for the design of solar reactors.
Abstract:
Biological membranes are composed of lipid bilayers and proteins. Investigation of protein-membrane interactions, which are essential for the biological function of cells, must rest upon solid knowledge of lipid-bilayer behaviour. Thus, extensive studies of an experimental model for membranes, lipid bilayers in aqueous solution, have been undertaken in recent decades. These systems present structural, thermal and electrical properties that depend on temperature, ionic strength and concentration. In this talk, we discuss statistical models for lipid bilayers, as well as the relation between their properties and results for the properties of lipid dispersions investigated by the laboratories supervised by Teresa Lamy (IF-USP) and Amando Ito (FFCL-USP).
Abstract:
While statistical-physics methods have been useful for unveiling many patterns in large corpora, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., one written in an unknown alphabet) is compatible with a natural language and, if so, to which language it could belong. The approach is based on three types of statistical measurements: first-order statistics of word properties in a text, the topology of complex networks representing texts, and intermittency concepts in which the text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese in order to quantify the dependence of the different measurements on the language and on the story told in each book. The metrics found to be informative in distinguishing real texts from their shuffled versions include assortativity, degree and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts, and we obtain candidate keywords for the Voynich Manuscript that could assist in the effort to decipher it. Because we were able to identify statistical measurements that depend more on the syntax than on the semantics, the framework may also serve for text analysis in language-dependent applications.
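As a small illustration of the shuffling comparison, the Python sketch below builds word-adjacency networks for a text and for its shuffled version and compares their degree assortativity, one of the informative metrics listed above. The toy sentence merely stands in for a full corpus.

import random
import networkx as nx

def adjacency_network(words):
    # Consecutive words become linked nodes.
    G = nx.Graph()
    G.add_edges_from(zip(words, words[1:]))
    return G

words = ("in the beginning god created the heaven and the earth and the "
         "earth was without form and void and darkness was upon the deep").split()
shuffled = words[:]
random.seed(0)
random.shuffle(shuffled)

for label, w in (("original", words), ("shuffled", shuffled)):
    print(label, nx.degree_assortativity_coefficient(adjacency_network(w)))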