907 results for Artificial Neuronal Networks
Abstract:
Complex biological systems require a sophisticated analytical approach, since variables with distinct measurement levels must be analyzed simultaneously. Mouse assisted reproduction, e.g. superovulation and viable embryo production, demands multidisciplinary control of the environment, of the animals' endocrine and physiological status, of stress factors, and of the conditions favorable to copulation and subsequent oocyte fertilization. In the past, analyses taking a simplified approach to these variables did not succeed in predicting the situations in which viable embryos were obtained in mice. We therefore suggest a more complex approach, combining Cluster Analysis and an Artificial Neural Network, to predict embryo production in superovulated mice. A robust prediction could avoid the needless death of animals and would allow their ethical management in experiments requiring mouse embryos.
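The combination described above lends itself to a compact sketch. Below is a minimal, hypothetical illustration (not the study's actual pipeline or variables): mixed animal and environment measurements are standardized and clustered, and the cluster membership is fed together with the raw features into a small neural network that predicts whether viable embryos are obtained. All feature names, labels, and data are synthetic.

```python
# Hypothetical sketch: Cluster Analysis + ANN for embryo-yield prediction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # e.g. hormone dose, age, weight, stress score, light hours (assumed)
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # 1 = viable embryos obtained (synthetic label)

X_std = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

# Append the cluster label as an additional predictor for the network.
X_aug = np.column_stack([X_std, clusters])
model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_aug, y)
print("training accuracy:", model.score(X_aug, y))
```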
Abstract:
Grinding is a parts-finishing process for advanced products and surfaces. However, continuous friction between the workpiece and the grinding wheel causes the latter to lose its sharpness, impairing the grinding results. This is when the dressing process is required, which consists of sharpening the worn grains of the grinding wheel. The dressing conditions strongly affect the performance of the grinding operation; hence, monitoring them throughout the process can increase its efficiency. The objective of this study was to estimate the wear of a single-point dresser using intelligent systems whose inputs were obtained by digital processing of acoustic emission signals. Two intelligent systems, the multilayer perceptron and the Kohonen neural network, were compared in terms of their classification ability. The harmonic content of the acoustic emission signal was found to be influenced by the condition of the dresser, and when this content was used as input to the neural networks, the condition of the tool under study could be classified.
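As a rough illustration of the kind of pipeline the abstract describes (assumed, not the authors' code), the sketch below derives harmonic-content features from a synthetic acoustic emission signal via an FFT and trains a multilayer perceptron to separate "sharp" from "worn" dresser conditions.

```python
# Assumed pipeline: FFT-based harmonic features -> MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def harmonic_features(signal, fs, fundamental, n_harmonics=5):
    """Spectrum magnitude at the first n harmonics of `fundamental`."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[np.argmin(np.abs(freqs - k * fundamental))]
                     for k in range(1, n_harmonics + 1)])

fs, f0, n = 20_000, 500.0, 2048
t = np.arange(n) / fs
X, y = [], []
for label, harmonic_gain in [(0, 0.2), (1, 1.0)]:      # 0 = sharp, 1 = worn (assumed)
    for _ in range(50):
        sig = np.sin(2 * np.pi * f0 * t)
        sig += harmonic_gain * np.sin(2 * np.pi * 3 * f0 * t)  # wear boosts harmonics
        sig += 0.3 * rng.normal(size=n)
        X.append(harmonic_features(sig, fs, f0))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```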
Abstract:
Two of the most important activities in well-log interpretation for the evaluation of hydrocarbon reservoirs are log zonation and the calculation of the effective porosity of the rocks traversed by the well. Zonation is the visual interpretation of the log to identify the reservoir layers and, consequently, their vertical limits; that is, it is the formal separation of the log into reservoir rocks and seal rocks. The entire zonation procedure is performed manually, drawing on the interpreter's geological-geophysical knowledge and experience in the visual evaluation of the patterns (characteristics of the log curve representing a geological event) corresponding to each specific lithological type. The calculation of effective porosity combines a visual activity, identifying the points on the log representative of a particular reservoir rock, with the appropriate choice of the petrophysical equation relating the measured physical properties of the rock to its porosity. From the porosity, the volume possibly occupied by hydrocarbon is established. This activity, essential for reservoir qualification, demands much of the log interpreter's knowledge and experience for the effective evaluation of effective porosity, that is, the porosity of the reservoir rock free from the effect of clay on the measurement of its physical properties. An efficient way to automate these procedures and assist the well geophysicist in these particularly time-consuming activities is presented in this dissertation, in the form of a new log, derived from the traditional porosity logs, that directly presents the zonation. This new log highlights the depths of the top and base of the reservoir and seal rocks, scaled in terms of effective porosity, and is called the zoned effective porosity log. The zoned effective porosity log is obtained through the design and execution of several feedforward artificial neural network architectures, with unsupervised training and a single layer of competitive artificial neurons. These architectures are designed to simulate the behavior of the log interpreter when using the density-neutron crossplot in situations where the sandstone-shale model applies. The applicability and limitations of this methodology are evaluated directly on real data from the Lake Maracaibo basin (Venezuela).
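A minimal sketch of the core mechanism, under the assumption of a single winner-take-all competitive layer trained without supervision on (density, neutron porosity) pairs; the synthetic log values and two-neuron setup are illustrative only, not the dissertation's architectures.

```python
# Assumed sketch: one competitive layer separating reservoir-like from
# seal-like points in density-neutron space via winner-take-all updates.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic (density, neutron porosity) samples: two lithology clusters.
sand = rng.normal([2.25, 0.25], 0.03, size=(300, 2))   # reservoir-like (made up)
shale = rng.normal([2.55, 0.40], 0.03, size=(300, 2))  # seal-like (made up)
X = rng.permutation(np.vstack([sand, shale]))

# Two competitive neurons, one prototype each.
W = rng.normal(X.mean(axis=0), 0.1, size=(2, 2))
lr = 0.05
for epoch in range(20):
    for x in X:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        W[winner] += lr * (x - W[winner])   # move only the winner toward the sample

zones = np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X])
print("prototypes:\n", W)
print("points per zone:", np.bincount(zones))
```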
Abstract:
Semi-supervised learning is one of the important topics in machine learning, concerned with pattern classification when only a small subset of the data is labeled. In this paper, a new network-based (or graph-based) semi-supervised classification model is proposed. It employs a combined random-greedy walk of particles, with competition and cooperation mechanisms, to propagate class labels to the whole network. Due to the competition mechanism, the proposed model has a local label-spreading fashion: each particle only visits a portion of the nodes potentially belonging to it, and it is not allowed to visit nodes definitely occupied by particles of other classes. In this way, a "divide-and-conquer" effect is naturally embedded in the model. As a result, the proposed model achieves a good classification rate while exhibiting low computational complexity compared to other network-based semi-supervised algorithms. Computer simulations carried out on synthetic and real-world data sets provide a numerical quantification of the method's performance.
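The sketch below is a heavily simplified, assumed rendering of the particle idea: one particle per labeled node alternates greedy and random moves and raises its class's domination level on the nodes it visits. The real model runs particles concurrently, with resetting and exhaustion rules omitted here; the walk length and greedy probability are arbitrary choices.

```python
# Simplified assumed sketch of particle competition on a graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
G = nx.karate_club_graph()
labeled = {0: 0, 33: 1}                      # two seed nodes with known classes
n, k = G.number_of_nodes(), 2
domination = np.full((n, k), 1.0 / k)        # per-node domination levels

for node, cls in labeled.items():
    domination[node] = np.eye(k)[cls]        # seed nodes are fully dominated
    pos = node
    for _ in range(2000):
        nbrs = list(G.neighbors(pos))
        if rng.random() < 0.6:               # greedy move: prefer own territory
            weights = domination[nbrs, cls]
            pos = nbrs[np.argmax(weights + 1e-9 * rng.random(len(nbrs)))]
        else:                                # random move: explore
            pos = nbrs[rng.integers(len(nbrs))]
        if pos not in labeled:
            domination[pos, cls] += 0.1      # strengthen own class at this node
            domination[pos] /= domination[pos].sum()

print("predicted classes:", domination.argmax(axis=1))
```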
Abstract:
We report a morphology-based approach for the automatic identification of outlier neurons, as well as its application to the NeuroMorpho.org database, comprising more than 5,000 neurons. Each neuron in a given analysis is represented by a feature vector composed of 20 measurements, which is then projected into a two-dimensional space by principal component analysis. Bivariate kernel density estimation is then used to obtain the probability distribution for the group of cells, so that the cells with the highest probabilities are understood as archetypes while those with the lowest probabilities are classified as outliers. The potential of the methodology is illustrated in several cases involving uniform cell types as well as cell types for specific animal species. The results provide insights into the distribution of cells, yielding single and multivariate clusters, and they suggest that outlier cells tend to be more planar and tortuous. The proposed methodology can be used in several situations involving one or more categories of cells, as well as for the detection of new categories and possible artifacts.
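This pipeline maps quite directly onto standard tools. A minimal sketch with synthetic stand-ins for the 20 morphological measurements follows; the 2% outlier cutoff is an illustrative choice, not the paper's.

```python
# Sketch: PCA to 2-D, bivariate KDE, lowest-density cells flagged as outliers.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 20))               # 500 "neurons" x 20 measurements (synthetic)
X[:5] += 6.0                                 # plant a few obvious outliers

proj = PCA(n_components=2).fit_transform(X)
density = gaussian_kde(proj.T)(proj.T)       # bivariate KDE evaluated at each cell

cutoff = np.quantile(density, 0.02)          # illustrative threshold
outliers = np.where(density < cutoff)[0]
archetypes = np.argsort(density)[-5:]        # highest-probability cells
print("outlier indices:", outliers)
print("archetype indices:", archetypes)
```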
Abstract:
Semi-supervised learning techniques have gained increasing attention in the machine learning community as a result of two main factors: (1) the available data is increasing exponentially; (2) the task of data labeling is cumbersome and expensive, involving human experts in the process. In this paper, we propose a network-based semi-supervised learning method inspired by the modularity greedy algorithm, which was originally applied to unsupervised learning. Changes have been made to the modularity maximization process so as to adapt the model to propagate labels throughout the network. Furthermore, a network reduction technique is introduced, along with an extensive analysis of its impact on the network. Computer simulations are performed on artificial and real-world databases, providing a quantitative basis for the performance of the proposed method.
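A much-simplified sketch of the idea, assuming we only reuse the output of an off-the-shelf greedy modularity algorithm rather than adapting its maximization step as the paper does: each detected community inherits the majority label of its seed nodes.

```python
# Assumed simplification: greedy modularity communities + majority seed label.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
seeds = {0: "A", 33: "B"}                    # sparse known labels

labels = {}
for community in greedy_modularity_communities(G):
    known = [seeds[n] for n in community if n in seeds]
    # Communities without seeds stay unlabeled (None) in this toy version.
    majority = max(set(known), key=known.count) if known else None
    for n in community:
        labels[n] = seeds.get(n, majority)

print(labels)
```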
Abstract:
In this work, we study the performance evaluation of resource-aware business process models. We define a new framework that allows the generation of analytical models for performance evaluation from business process models annotated with resource management information. This framework is composed of a new notation that allows the specification of resource management constraints and a method to convert a business process specification and its resource constraints into Stochastic Automata Networks (SANs). We show that the analysis of the generated SAN model provides several performance indices, such as average system throughput, average waiting time, average queue size, and resource utilization rate. Using the BP2SAN tool - our implementation of the proposed framework - and a SAN solver (such as the PEPS tool), we show through a simple use case how a business specialist with no skills in stochastic modeling can easily obtain performance indices that, in turn, can help to identify bottlenecks in the model, perform workload characterization, define the provisioning of resources, and study other performance-related aspects of the business process.
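The framework itself generates SAN models, which are beyond a short snippet; as a stand-in for the kind of indices such an analysis yields, the sketch below computes utilization, average queue size, average waiting time, and throughput for a single resource modeled as a textbook M/M/1 queue, with made-up rates.

```python
# Stand-in example (not BP2SAN): the same index types from an M/M/1 queue.
arrival_rate = 4.0          # cases per hour entering the activity (made up)
service_rate = 5.0          # cases per hour one resource can handle (made up)

rho = arrival_rate / service_rate            # utilization of the resource
L = rho / (1 - rho)                          # average number of cases in system
W = L / arrival_rate                         # average time in system (Little's law)
Lq = rho**2 / (1 - rho)                      # average queue size
Wq = Lq / arrival_rate                       # average waiting time in queue

print(f"utilization={rho:.2f}  L={L:.2f}  W={W:.2f} h  Lq={Lq:.2f}  Wq={Wq:.2f} h")
print(f"throughput={arrival_rate:.2f} cases/h (stable since rho < 1)")
```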
Abstract:
Texture image analysis is an important field of investigation that has attracted attention from the computer vision community in recent decades. In this paper, a novel approach for texture image analysis is proposed using a combination of graph theory and partially self-avoiding deterministic walks. From the image, we build a regular graph in which each vertex represents a pixel and is connected to neighboring pixels (pixels whose spatial distance is less than a given radius). Transformations on the regular graph are applied to emphasize different image features. To characterize the transformed graphs, partially self-avoiding deterministic walks are performed to compose the feature vector. Experimental results on three databases indicate that the proposed method significantly improves the correct classification rate compared to the state of the art, e.g. from 89.37% (original tourist walk) to 94.32% on the Brodatz database, from 84.86% (Gabor filter) to 85.07% on the Vistex database, and from 92.60% (original tourist walk) to 98.00% on the plant leaves database. In view of these results, the method is expected to provide good results in other applications, such as texture synthesis and texture segmentation.
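A toy sketch of the two ingredients named above: a pixel graph connecting vertices within a given radius, and a deterministic partially self-avoiding ("tourist") walk of memory mu driven by intensity similarity. The stopping rule is simplified to a step cap (the original walk ends upon entering a cycle), and the image, radius, and memory are arbitrary.

```python
# Toy sketch: pixel graph + partially self-avoiding deterministic walk.
import numpy as np

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(16, 16))    # synthetic texture patch
radius, mu = 1.5, 2
coords = [(i, j) for i in range(16) for j in range(16)]

def neighbors(p):
    i, j = p
    return [q for q in coords
            if q != p and (q[0] - i) ** 2 + (q[1] - j) ** 2 <= radius ** 2]

def tourist_walk(start, max_steps=50):
    # Move to the neighbor with the most similar intensity that was not
    # visited within the last mu steps; stop when trapped or at the cap.
    path, pos = [start], start
    for _ in range(max_steps):
        recent = set(path[-mu:])
        options = [q for q in neighbors(pos) if q not in recent]
        if not options:
            break
        pos = min(options, key=lambda q: abs(int(img[q]) - int(img[pos])))
        path.append(pos)
    return len(path)

lengths = [tourist_walk(p) for p in coords]  # walk-length statistics feed the feature vector
print("mean walk length over all starting pixels:", float(np.mean(lengths)))
```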
Abstract:
Fraud is a global problem that has demanded more attention due to the accelerated expansion of modern technology and communication. When statistical techniques are used to detect fraud, whether the fraud detection model is accurate enough to correctly classify a case as fraudulent or legitimate is a critical factor. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining predicted values from models fitted to several replicated datasets and then combining them into a single predictive classification in order to improve classification accuracy. In this paper, we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors for classification. Through a large simulation study and various real datasets, we found that probabilistic networks are a strong modeling option with high predictive capacity, showing a large improvement from the bagging procedure when compared to traditional techniques.
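Bagging itself is standard and easy to sketch. In the illustration below a decision tree stands in for the k-dependence probabilistic networks studied in the paper; data and labels are synthetic.

```python
# Sketch of bootstrap aggregating for a fraud-style binary classification task.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 8))                                       # transaction features (synthetic)
y = (X[:, 0] * X[:, 1] + rng.normal(0, 0.5, 1000) > 0).astype(int)   # 1 = fraud (synthetic)

# 100 trees fitted to bootstrap replicates, combined by majority vote.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                        oob_score=True, random_state=0)
bag.fit(X, y)
print("out-of-bag accuracy:", bag.oob_score_)
```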
Abstract:
Background
The evolutionary advantages of selective attention are unclear. Since the study of selective attention began, it has been suggested that the nervous system only processes the most relevant stimuli because of its limited capacity [1]. An alternative proposal is that action planning requires the inhibition of irrelevant stimuli, which forces the nervous system to limit its processing [2]. An evolutionary approach might provide additional clues to clarify the role of selective attention.

Methods
We developed Artificial Life simulations wherein animals were repeatedly presented with two objects, "left" and "right", each of which could be "food" or "non-food". The animals' neural networks (multilayer perceptrons) had two input nodes, one for each object, and two output nodes that determined whether the animal ate each of the objects. The neural networks also had a variable number of hidden nodes, which determined whether or not they had enough capacity to process both stimuli (Table 1). The evolutionary relevance of the left and right food objects could also vary, depending on how much the animal's fitness increased when ingesting them (Table 1). We compared sensory processing in animals with or without limited capacity that evolved in simulations in which the objects had the same or different relevances.

Table 1. Nine sets of simulations were performed, varying the values of the food objects and the number of hidden nodes in the neural networks. The values of the left and right food objects were swapped during the second half of the simulations. Non-food objects were always worth -3.

The evolution of the neural networks was simulated by a simple genetic algorithm. Fitness was a function of the number of food and non-food objects each animal ate, and the chromosomes determined the node biases and synaptic weights. During each simulation, 10 populations of 20 individuals each evolved in parallel for 20,000 generations; then the relevance of the food objects was swapped and the simulation was run for another 20,000 generations. The neural networks were evaluated by their ability to identify the two objects correctly. The detectability (d') of the left and right objects was calculated using Signal Detection Theory [3].

Results and conclusion
When both stimuli were equally relevant, networks with two hidden nodes processed only one stimulus and ignored the other. With four or eight hidden nodes, they could correctly identify both stimuli. When the stimuli had different relevances, the d' for the most relevant stimulus was higher than the d' for the least relevant stimulus, even when the networks had four or eight hidden nodes. We conclude that selection mechanisms arose in our simulations depending not only on the size of the neural networks but also on the stimuli's relevance for action.
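The detectability measure used here is the standard Signal Detection Theory d': the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch with made-up response counts for one simulated animal:

```python
# Standard SDT detectability; counts below are made up for illustration.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Ate food on 45 of 50 food trials, ate non-food on 5 of 50 non-food trials.
print(d_prime(45, 5, 5, 45))   # approx 2.56
```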
Abstract:
Background
The organization of the connectivity between mammalian cortical areas has become a major subject of study because of its important role in scaffolding the macroscopic aspects of animal behavior and intelligence. In this study we present a computational reconstruction approach to the problem of network organization, considering the topological and spatial features of each area in the primate cerebral cortex as subsidy for the reconstruction of the global cortical network connectivity. Starting with all areas disconnected, pairs of areas with similar sets of features are linked together in an attempt to recover the original network structure.

Results
Inferring primate cortical connectivity from the properties of the nodes yielded remarkably good reconstructions of the global network organization, with the topological features allowing slightly superior accuracy to the spatial ones. Analogous reconstruction attempts for the C. elegans neuronal network resulted in substantially poorer recovery, indicating that cortical area interconnections are more strongly related to the considered topological and spatial properties than neuronal projections in the nematode are.

Conclusion
The close relationship between area-based features and global connectivity may hint at developmental rules and constraints for cortical networks. In particular, the differences between the predictions from topological and spatial properties, together with the poorer recovery resulting from the spatial properties, indicate that the organization of cortical networks is not entirely determined by spatial constraints.
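A toy sketch of the reconstruction step under simple assumptions: starting from disconnected nodes, the most feature-similar pairs are linked until the true edge count is reached, and the recovered edges are scored against the original graph. Random features are used here, so the overlap stays near chance; in the study, informative topological and spatial features are what drive recovery above it.

```python
# Toy sketch: link most feature-similar pairs, score against the true graph.
import numpy as np
import networkx as nx
from itertools import combinations

rng = np.random.default_rng(7)
G_true = nx.erdos_renyi_graph(30, 0.15, seed=7)
features = {n: rng.normal(size=4) for n in G_true.nodes}  # random stand-in features

# Rank all node pairs by feature-vector similarity (smallest distance first).
pairs = sorted(combinations(G_true.nodes, 2),
               key=lambda p: np.linalg.norm(features[p[0]] - features[p[1]]))
recovered = set(pairs[:G_true.number_of_edges()])

true_edges = {tuple(sorted(e)) for e in G_true.edges}
overlap = len(recovered & true_edges) / len(true_edges)
print(f"fraction of true edges recovered: {overlap:.2f}")
```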
Abstract:
Background
Recently, it was realized that the functional connectivity networks estimated from actual brain-imaging technologies (MEG, fMRI and EEG) can be analyzed by means of graph theory, i.e., a mathematical representation of a network that is essentially reduced to nodes and the connections between them.

Methods
We used high-resolution EEG technology to enhance the poor spatial resolution of scalp EEG activity, yielding a measure of the electrical activity on the cortical surface. We then used the Directed Transfer Function (DTF), a multivariate spectral measure that estimates the directional influences between any given pair of channels in a multivariate dataset. Finally, a graph theoretical approach was used to model the brain networks as graphs. These methods were used to analyze the structure of cortical connectivity during the attempt to move a paralyzed limb in a group (N=5) of spinal cord injured (SCI) patients and during movement execution in a group (N=5) of healthy subjects.

Results
Analysis of the cortical networks estimated from the healthy group and the SCI patients revealed that both groups present a few nodes with a high out-degree value (i.e. outgoing links). This property holds in the networks estimated for all the frequency bands investigated. In particular, the cingulate motor area (CMA) ROIs act as "hubs" for the outflow of information in both groups, SCI and healthy. The results also suggest that spinal cord injuries affect the functional architecture of the cortical network sub-serving the volition of motor acts mainly in its local properties. In particular, a higher local efficiency El can be observed in the SCI patients for three frequency bands: theta (3-6 Hz), alpha (7-12 Hz) and beta (13-29 Hz). By taking into account all the possible pathways between different ROI pairs, we were able to clearly separate the network properties of the SCI group from those of the CTRL group. In particular, we report a sort of compensatory mechanism in the SCI patients for the theta (3-6 Hz) frequency band, indicating a higher level of "activation" Ω within the cortical network during the motor task. The activation index is directly related to diffusion, a type of dynamics that underlies several biological systems, including the possible spreading of neuronal activation across several cortical regions.

Conclusions
The present study demonstrates possible applications of graph theoretical approaches in the analysis of brain functional connectivity from EEG signals. In particular, the methodological aspects of (i) estimating cortical activity from scalp EEG signals, (ii) estimating functional connectivity, and (iii) computing graph theoretical indices are emphasized to show their impact in a real application.
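Two of the graph indices mentioned above are easy to reproduce with networkx on a toy directed network standing in for a DTF-estimated cortical network: out-degree (to spot outflow "hubs") and local efficiency.

```python
# Toy illustration of out-degree hubs and local efficiency with networkx.
import networkx as nx

G = nx.gnp_random_graph(20, 0.2, seed=8, directed=True)  # stand-in network

out_deg = dict(G.out_degree())
hubs = sorted(out_deg, key=out_deg.get, reverse=True)[:3]
print("candidate outflow hubs:", hubs)

# networkx's local_efficiency is defined for undirected graphs, so the
# directed network is symmetrized first (one common convention).
print("local efficiency:", nx.local_efficiency(G.to_undirected()))
```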
Abstract:
Background
A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to analyze gene regulatory interactions using the Boolean network model and time-series data. The Boolean network considered is restricted in the sense that only a subset of all possible Boolean functions is allowed. We explore some mathematical properties of these restricted Boolean networks in order to avoid a full search. The problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it.

Results
We applied the proposed algorithm to two data sets. First, we used an artificial dataset obtained from a model of the budding yeast cell cycle. The second data set is derived from experiments performed using HeLa cells. The results show that some interactions can be fully or at least partially determined under the Boolean model considered.

Conclusions
The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series gene expression data, and this inference process can be aided by a priori knowledge.
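A toy sketch of the consistency-checking idea: given a Boolean time series, keep only the candidate regulatory functions that agree with every observed transition. The function class here (a gene copies or negates a single regulator) is far more restricted than the paper's, purely for illustration.

```python
# Toy sketch: filter candidate Boolean functions against time-series transitions.
from itertools import product

# Synthetic time series for 3 genes: each row is the network state at time t.
series = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0)]
n = 3
transitions = list(zip(series, series[1:]))

def consistent(target, fn):
    """True if fn predicts the target gene's next value for every transition."""
    return all(fn(s) == t[target] for s, t in transitions)

# Restricted class: the target copies or negates a single regulator gene.
for target in range(n):
    for reg, neg in product(range(n), (False, True)):
        fn = lambda s, r=reg, ng=neg: int(s[r]) ^ ng
        if consistent(target, fn):
            print(f"gene {target} <- {'not ' if neg else ''}gene {reg}")
```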