849 results for Neural networks model


Relevance:

90.00%

Publisher:

Abstract:

This book combines geostatistics and global mapping systems to present an up-to-the-minute study of environmental data. Featuring numerous case studies, the reference covers model-dependent (geostatistics) and data-driven (machine learning) analysis techniques such as risk mapping, conditional stochastic simulations, descriptions of spatial uncertainty and variability, artificial neural networks (ANN) for spatial data, Bayesian maximum entropy (BME), and more.

Relevance:

90.00%

Publisher:

Abstract:

Summary: Division of labour is one of the most fascinating aspects of social insects. The efficient allocation of individuals to a multitude of different tasks requires a dynamic adjustment in response to the demands of a changing environment. A considerable number of theoretical models have focussed on identifying the mechanisms allowing colonies to perform efficient task allocation. The large majority of these models are built on the observation that individuals in a colony vary in their propensity (response threshold) to perform different tasks. Since individuals with a low threshold for a given task stimulus are more likely to perform that task than individuals with a high threshold, infra-colony variation in individual thresholds results in colony division of labour. These theoretical models suggest that variation in individual thresholds is affected by the within-colony genetic diversity. However, the models have not considered the genetic architecture underlying the individual response thresholds. This is important because a better understanding of division of labour requires determining how genotypic variation relates to differences in infra-colony response threshold distributions. In this thesis, we investigated the combined influence on task allocation efficiency of both the within-colony genetic variability (stemming from variation in the number of matings by queens) and the number of genes underlying the response thresholds. We used an agent-based simulator to model a situation where workers in a colony had to perform either a regulatory task (where the amount of a given food item in the colony had to be maintained within predefined bounds) or a foraging task (where the quantity of a second type of food item collected had to be as high as possible). The performance of colonies was a function of workers being able to perform both tasks efficiently.
To study the effect of within-colony genetic diversity, we compared the performance of colonies with queens mated with a varying number of males. The influence of genetic architecture, in turn, was investigated by varying the number of loci underlying the response threshold of the foraging and regulatory tasks. Artificial evolution was used to evolve the allelic values underlying the task thresholds. The results revealed that multiple matings always translated into higher colony performance, whatever the number of loci encoding the thresholds of the regulatory and foraging tasks. However, the beneficial effect of additional matings was particularly important when the genetic architecture of queens comprised one or few genes for the foraging task's threshold. By contrast, a higher number of genes encoding the foraging task threshold reduced colony performance, with the detrimental effect being stronger when queens had mated with several males. Finally, the number of genes determining the threshold for the regulatory task had only a minor but incremental effect on colony performance. Overall, our numerical experiments indicate the importance of considering the effects of queen mating frequency, the genetic architecture underlying task thresholds and the type of task performed when investigating the factors regulating the efficiency of division of labour in social insects. In this thesis we also investigate the task allocation efficiency of response threshold models and compare them with neural networks. While response threshold models are widely used amongst theoretical biologists interested in division of labour in social insects, our simulation reveals that they perform poorly compared to a neural network model. A major shortcoming of response thresholds is that they fail at one of the most crucial requirements of division of labour, the ability of individuals in a colony to efficiently switch between tasks under varying environmental conditions.
Moreover, the intrinsic properties of the threshold models are that they lead to a large proportion of idle workers. Our results highlight these limitations of the response threshold models and provide an adequate substitute. Altogether, the experiments presented in this thesis provide novel contributions to the understanding of how division of labour in social insects is influenced by queen mating frequency and the genetic architecture underlying worker task thresholds. Moreover, the thesis also provides a novel model of the mechanisms underlying worker task allocation that may be more generally applicable than the widely used response threshold models.
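The fixed-threshold mechanism this summary builds on can be sketched in a few lines. This is an illustrative toy only, not the thesis's simulator: each worker carries a response threshold t, engages the task with the standard response probability s²/(s² + t²), and working reduces the stimulus while demand replenishes it. Colony size, threshold range and demand/work rates are all hypothetical.

```python
import random

def step(thresholds, stimulus, demand=1.0, work=0.1):
    """One tick of a fixed-threshold model: a worker with threshold t
    engages the task with probability s^2 / (s^2 + t^2)."""
    stimulus += demand                          # unattended demand grows
    for t in thresholds:
        p = stimulus ** 2 / (stimulus ** 2 + t ** 2)
        if random.random() < p:                 # worker engages the task
            stimulus = max(0.0, stimulus - work)
    return stimulus

random.seed(1)
colony = [random.uniform(0.1, 10.0) for _ in range(50)]  # diverse thresholds
s = 5.0
for _ in range(200):
    s = step(colony, s)
```

Because low-threshold workers respond first and high-threshold workers join only when the stimulus rises, the stimulus settles at a level where only part of the colony is active, which is exactly the idle-worker property criticized above.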

Relevance:

90.00%

Publisher:

Abstract:

In occupational exposure assessment of airborne contaminants, exposure levels can be estimated through repeated measurements of the pollutant concentration in air, through expert judgment, or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model, and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
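The iterative updating step can be illustrated with a heavily simplified conjugate sketch (not the report's model): exposure is lognormal, the geometric standard deviation is treated as known, and the model-based prior on the log geometric mean is updated with a few measurements. All numbers are invented for illustration.

```python
import math

def update_lognormal(prior_mu, prior_sd, measurements, log_gsd):
    """Conjugate normal update of the log geometric mean of a lognormal
    exposure profile, assuming a known log-scale spread (log_gsd)."""
    logs = [math.log(x) for x in measurements]
    n = len(logs)
    var_obs = log_gsd ** 2
    post_var = 1.0 / (1.0 / prior_sd ** 2 + n / var_obs)
    post_mu = post_var * (prior_mu / prior_sd ** 2 + sum(logs) / var_obs)
    return post_mu, math.sqrt(post_var)

# Prior from a modelled predictive distribution (illustrative numbers),
# updated with three personal measurements in mg/m3.
mu, sd = update_lognormal(math.log(0.5), 1.0, [0.8, 1.1, 0.6], math.log(2.5))
gm = math.exp(mu)   # posterior estimate of the long-term geometric mean
```

The posterior geometric mean lands between the model-based prior (0.5) and the geometric mean of the data, with the measurements pulling harder as more of them accumulate.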

Relevance:

90.00%

Publisher:

Abstract:

Self-organizing maps (Kohonen 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm uses the Euclidean metric in the adaptation of the model vectors, which in theory renders the whole methodology incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem: 1. whether the conventional approach using the Euclidean metric can yield valid results with compositional data; 2. whether a modification of the conventional approach replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering) can converge to an adequate solution. Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data is pathological. Moreover, the conventional approach converges faster to a solution when the data is "well-behaved". Key words: Self-Organizing Map; Artificial Neural Networks; Compositional Data
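The two simplex operators mentioned above are simple to state. The sketch below defines perturbation and powering and uses them in a SOM-style update of a model vector toward a data point, replacing the Euclidean rule m + lr·(x − m); the learning-rate value and the update form are illustrative, not the paper's exact algorithm.

```python
def closure(x):
    """Rescale a composition so its parts sum to one."""
    s = sum(x)
    return [xi / s for xi in x]

def perturb(x, y):
    """Perturbation: the simplex analogue of vector addition."""
    return closure([xi * yi for xi, yi in zip(x, y)])

def power(x, a):
    """Powering: the simplex analogue of scalar multiplication."""
    return closure([xi ** a for xi in x])

def som_update(m, x, lr):
    """Move model vector m toward data point x by a fraction lr,
    entirely inside the simplex."""
    diff = perturb(x, [1.0 / mi for mi in m])   # x "minus" m
    return perturb(m, power(diff, lr))
```

With lr = 1 the model vector lands exactly on the data point, and with lr = 0 it stays put, mirroring the behavior of the Euclidean update.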

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this paper is to propose a Neural-Q_learning approach designed for online learning of simple and reactive robot behaviors. In this approach, the Q_function is generalized by a multi-layer neural network allowing the use of continuous states and actions. The algorithm uses a database of the most recent learning samples to accelerate and guarantee convergence. Each Neural-Q_learning function represents an independent, reactive and adaptive behavior which maps sensorial states to robot control actions. A group of these behaviors constitutes a reactive control scheme designed to fulfill simple missions. The paper centers on the description of the Neural-Q_learning-based behaviors, showing their performance with an underwater robot in a target-following task. Real experiments demonstrate the convergence and stability of the learning system, pointing out its suitability for online robot learning. Advantages and limitations are discussed.
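The ingredients of the approach (a generalized Q_function plus a database of recent samples replayed at every step) can be sketched on a toy 1-D target-following task. This is not the paper's system: the state is a signed distance to the target, the actions are moves of -1/0/+1, and Q is generalized by a linear approximator per action instead of a multi-layer network, purely to keep the sketch short.

```python
import random

ACTIONS = (-1, 0, 1)

def q(w, s, a):
    """Linear Q approximator per action: Q(s, a) = w0 + w1 * s."""
    return w[a][0] + w[a][1] * s

def train(episodes=300, lr=0.05, gamma=0.9, eps=0.2):
    random.seed(0)
    w = {a: [0.0, 0.0] for a in ACTIONS}
    replay = []                                  # database of samples
    for _ in range(episodes):
        s = random.uniform(-5.0, 5.0)
        for _ in range(20):
            if random.random() < eps:            # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q(w, s, b))
            s2 = s + a
            r = -abs(s2)                         # reward: stay near target
            replay.append((s, a, r, s2))
            for s_, a_, r_, s2_ in random.sample(replay, min(8, len(replay))):
                target = r_ + gamma * max(q(w, s2_, b) for b in ACTIONS)
                err = target - q(w, s_, a_)      # replayed TD update
                w[a_][0] += lr * err
                w[a_][1] += lr * err * s_
            s = s2
    return w

w = train()
```

After training, the greedy policy moves toward the target from either side, which is the "target following" behavior in miniature; the replay database is what lets each transition be reused many times.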

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND The study of the attentional system remains a challenge for current neuroscience. The "Attention Network Test" (ANT) was designed to study simultaneously three different attentional networks (alerting, orienting, and executive) based on the subtraction of different experimental conditions. However, some studies recommend caution with these calculations due to the interactions between the attentional networks. This is highly relevant because several interpretations about attentional impairment in diverse pathologies have arisen from these calculations. Event related potentials (ERPs) and neural source analysis can be applied to disentangle the relationships between these attentional networks not specifically shown by behavioral measures. RESULTS This study shows that there is a basic level of alerting (tonic alerting) in the no cue (NC) condition, represented by a slow negative trend in the ERP trace prior to the onset of the target stimuli. A progressive increase in the CNV amplitude related to the amount of information provided by the cue conditions is also shown. Neural source analysis reveals specific modulations of the CNV related to a task-related expectancy present in the NC condition; a late modulation triggered by the central cue (CC) condition and probably representing a generic motor preparation; and an early and late modulation for the spatial cue (SC) condition suggesting specific motor and sensory preactivation. Finally, the first component in the information processing of the target stimuli modulated by the interaction between the orienting network and the executive system can be represented by N1. CONCLUSIONS The ANT is useful as a paradigm to study specific attentional mechanisms and their interactions. However, the calculation of network effects is based on subtractions of non-comparable experimental conditions, as evidenced by the present data, which can induce misinterpretations in the study of attentional capacity in human subjects.
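The network scores whose subtraction logic the conclusions caution against are plain reaction-time differences. A sketch with invented, illustrative mean RTs (the standard ANT formulas, not this study's data):

```python
def ant_scores(mean_rt):
    """Classic ANT subtraction scores (ms). Each score assumes the two
    conditions differ only in the network of interest, which is exactly
    the assumption the study questions."""
    alerting = mean_rt["no_cue"] - mean_rt["double_cue"]
    orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]
    executive = mean_rt["incongruent"] - mean_rt["congruent"]
    return alerting, orienting, executive

scores = ant_scores({"no_cue": 590, "double_cue": 550,
                     "center_cue": 560, "spatial_cue": 520,
                     "incongruent": 640, "congruent": 540})
```

If, as the ERP data suggest, the no-cue condition already carries tonic alerting and expectancy, the first subtraction no longer isolates the alerting network.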

Relevance:

90.00%

Publisher:

Abstract:

I use a multi-layer feedforward perceptron, with backpropagation learning implemented via stochastic gradient descent, to extrapolate the volatility smile of Euribor derivatives over low strikes by training the network on parametric prices.

Relevance:

90.00%

Publisher:

Abstract:

In recent years there has been an explosive growth in the development of adaptive and data-driven methods. One of the efficient data-driven approaches is based on statistical learning theory (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error (to fit the available data with a model) but also to reduce the complexity of the model and thus the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and forecasting. SVM are extremely good when the input space is high dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimization, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
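The SRM trade-off described above (fit the data while penalizing model complexity) is visible in the SVM objective itself. A minimal illustration, not the reference's implementation: a linear soft-margin SVM trained by subgradient descent on the regularized hinge loss lam·||w||² + max(0, 1 − y·(w·x + b)), on a tiny invented 2-D dataset.

```python
def svm_train(points, labels, lam=0.01, lr=0.01, epochs=200):
    """Linear soft-margin SVM via subgradient descent: the hinge term
    fits the data, the lam*||w||^2 term controls model complexity."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:                       # sample inside the margin
                w = [wi - lr * (2 * lam * wi - y * xi)
                     for wi, xi in zip(w, x)]
                b += lr * y
            else:                                # only regularization acts
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

pts = [(1.0, 1.0), (2.0, 2.0), (2.0, 0.5),
       (-1.0, -1.0), (-2.0, -2.0), (-1.5, -0.5)]
ys = [1, 1, 1, -1, -1, -1]
w, b = svm_train(pts, ys)
```

Only the points whose margin keeps falling below 1 drive the solution, which is the support-vector property the abstract exploits for sampling optimization.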

Relevance:

90.00%

Publisher:

Abstract:

Counterfeit pharmaceutical products have become a widespread problem in the last decade. Various analytical techniques have been applied to discriminate between genuine and counterfeit products. Among these, near-infrared (NIR) and Raman spectroscopy provided promising results. The present study offers a methodology providing more valuable information for organisations engaged in the fight against the counterfeiting of medicines. A database was established by analyzing counterfeits of a particular pharmaceutical product using NIR and Raman spectroscopy. Unsupervised chemometric techniques (i.e. principal component analysis, PCA, and hierarchical cluster analysis, HCA) were implemented to identify the classes within the datasets. Gas chromatography coupled to mass spectrometry (GC-MS) and Fourier transform infrared spectroscopy (FT-IR) were used to determine the number of different chemical profiles within the counterfeits. A comparison with the classes established by NIR and Raman spectroscopy made it possible to evaluate the discriminating power provided by these techniques. Supervised classifiers (i.e. k-nearest neighbors, partial least squares discriminant analysis, probabilistic neural networks and counterpropagation artificial neural networks) were applied to the acquired NIR and Raman spectra, and the results were compared to those provided by the unsupervised classifiers. The retained strategy for routine applications, founded on the classes identified by NIR and Raman spectroscopy, uses a classification algorithm based on distance measures and receiver operating characteristic (ROC) curves. The model is able to compare the spectrum of a new counterfeit with those of previously analyzed products and to determine whether a new specimen belongs to one of the existing classes, consequently making it possible to establish a link with other counterfeits in the database.
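The simplest of the distance-based supervised classifiers mentioned above, k-nearest neighbors, can be sketched directly. The "spectra" below are invented three-channel vectors, not real NIR or Raman data, and the study's ROC-based membership thresholds are not reproduced.

```python
import math

def knn_predict(spectra, labels, x, k=3):
    """Assign x to the class voted by its k nearest training spectra
    (Euclidean distance between spectral vectors)."""
    order = sorted(range(len(spectra)),
                   key=lambda i: math.dist(spectra[i], x))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Two invented counterfeit classes with three-channel "spectra".
train = [(1.0, 0.1, 0.0), (0.9, 0.2, 0.1), (1.1, 0.0, 0.1),
         (0.1, 1.0, 0.2), (0.0, 0.9, 0.1), (0.2, 1.1, 0.0)]
classes = ["A", "A", "A", "B", "B", "B"]
```

In the routine strategy described above, a distance threshold (tuned from ROC curves) would additionally decide whether the new spectrum belongs to any existing class at all, rather than forcing an assignment.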

Relevance:

90.00%

Publisher:

Abstract:

The control and prediction of wastewater treatment plants poses an important goal: to avoid breaking the environmental balance by always keeping the system in stable operating conditions. It is known that qualitative information (coming from microscopic examinations and subjective remarks) has a deep influence on the activated sludge process, in particular on the total amount of effluent suspended solids, one of the measures of overall plant performance. The search for an input-output model of this variable and the prediction of sudden increases (bulking episodes) is thus a central concern to ensure the fulfillment of current discharge limitations. Unfortunately, the strong interrelation between variables, their heterogeneity and the very high amount of missing information makes the use of traditional techniques difficult, or even impossible. Through the combined use of several methods (mainly rough set theory and artificial neural networks), reasonable prediction models are found, which also serve to show the different importance of variables and provide insight into the process dynamics.

Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: To explore whether triaxial accelerometric measurements can be used to accurately assess speed and incline of running in free-living conditions. METHODS: Body accelerations during running were recorded at the lower back and at the heel by a portable data logger in 20 human subjects, 10 men and 10 women. After parameterizing body accelerations, two neural networks were designed to recognize each running pattern and calculate speed and incline. Each subject ran 18 times on outdoor roads at various speeds and inclines; 12 runs were used to calibrate the neural networks whereas the 6 other runs were used to validate the model. RESULTS: A small difference between the estimated and the actual values was observed: the root mean square error (RMSE) was 0.12 m x s(-1) for speed and 0.014 radian (rad) (or 1.4% in absolute value) for incline. Multiple regression analysis allowed accurate prediction of speed (RMSE = 0.14 m x s(-1)) but not of incline (RMSE = 0.026 rad or 2.6% slope). CONCLUSION: Triaxial accelerometric measurements allow an accurate estimation of the speed of running and the incline of the terrain (the latter with more uncertainty). This will permit the validation of the energetic results generated on the treadmill as applied to more physiological, unconstrained running conditions.
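The validation statistic quoted in the results, the root mean square error, is a one-liner. The speeds below are made-up illustrative values, not the study's data.

```python
import math

def rmse(pred, actual):
    """Root mean square error between predicted and reference values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual))
                     / len(pred))

est_speed = [3.1, 3.5, 2.9, 4.2]   # m/s, hypothetical network output
ref_speed = [3.0, 3.6, 3.0, 4.0]   # m/s, hypothetical reference runs
err = rmse(est_speed, ref_speed)
```

Comparing such an RMSE (here about 0.13 m/s) against the spread of the reference values is how the study judges the 0.12 m/s network error to be small.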

Relevance:

90.00%

Publisher:

Abstract:

The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbor algorithm is considered. PNN is a neural network reformulation of the well-known nonparametric principles of probability density modeling using kernel density estimators and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that they can be easily used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently they were successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
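The PNN idea stated above (kernel density estimation per class combined with a Bayesian decision rule) fits in a few lines. A minimal sketch with an isotropic Gaussian kernel and invented 2-D points; the bandwidth sigma and priors are hypothetical.

```python
import math

def pnn_classify(train, labels, x, sigma=0.5, priors=None):
    """Probabilistic neural network: one Gaussian kernel per training
    sample; the class score is prior * average kernel density, and the
    maximum a posteriori class wins."""
    classes = sorted(set(labels))
    if priors is None:
        priors = {c: 1.0 / len(classes) for c in classes}
    scores = {}
    for c in classes:
        pts = [p for p, l in zip(train, labels) if l == c]
        density = sum(math.exp(-math.dist(p, x) ** 2 / (2 * sigma ** 2))
                      for p in pts) / len(pts)
        scores[c] = priors[c] * density
    return max(scores, key=scores.get), scores

train = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.2, 0.9)]
labels = ["a", "a", "b", "b"]
cls, scores = pnn_classify(train, labels, (0.1, 0.0))
```

Because the scores are (unnormalized) posterior probabilities, the same machinery yields the accuracy quantification and prior integration highlighted in the abstract, not just the predicted label.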

Relevance:

90.00%

Publisher:

Abstract:

The present research deals with an application of artificial neural networks for multitask learning from spatial environmental data. The real case study (sediment contamination of Lake Geneva) consists of 8 pollutants. There are different relationships between these variables, from linear correlations to strong nonlinear dependencies. The main idea is to construct subsets of pollutants which can be efficiently modeled together within the multitask framework. The proposed two-step approach is based on: 1) the criterion of nonlinear predictability of each variable "k", assessed by analyzing all possible models composed from the rest of the variables using a General Regression Neural Network (GRNN) as a model; 2) multitask learning of the best model using a multilayer perceptron, together with spatial predictions. The results of the study are analyzed using both machine learning and geostatistical tools.
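The GRNN used in step 1 is a kernel-weighted average of training targets (the Nadaraya-Watson form), so the predictability screen amounts to evaluating this estimator with different input subsets. A minimal sketch with invented 1-D data and a hypothetical bandwidth sigma:

```python
import math

def grnn_predict(xs, ys, x, sigma=0.3):
    """General Regression Neural Network: prediction is the Gaussian
    kernel-weighted average of the training targets ys."""
    w = [math.exp(-math.dist(xi, x) ** 2 / (2 * sigma ** 2)) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Invented training data: inputs are 1-tuples, targets are scalars.
xs = [(0.0,), (1.0,), (2.0,)]
ys = [0.0, 1.0, 4.0]
```

With a small bandwidth the estimator reproduces the training targets; with a very large one it collapses to the global mean, so cross-validating sigma per candidate input subset gives the nonlinear predictability criterion of step 1.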

Relevance:

90.00%

Publisher:

Abstract:

We study the relationship between topological scales and dynamic time scales in complex networks. The analysis is based on the full dynamics towards synchronization of a system of coupled oscillators. In the synchronization process, modular structures corresponding to well-defined communities of nodes emerge in different time scales, ordered in a hierarchical way. The analysis also provides a useful connection between synchronization dynamics, complex networks topology, and spectral graph analysis.

Relevance:

90.00%

Publisher:

Abstract:

We study a Kuramoto model in which the oscillators are associated with the nodes of a complex network and the interactions include a phase frustration, thus preventing full synchronization. The system organizes into a regime of remote synchronization where pairs of nodes with the same network symmetry are fully synchronized, despite their distance on the graph. We provide analytical arguments to explain this result, and we show how the frustration parameter affects the distribution of phases. An application to brain networks suggests that anatomical symmetry plays a role in neural synchronization by determining correlated functional modules across distant locations.
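The model class studied here, Kuramoto oscillators with a phase frustration, can be illustrated with a toy integration. This is not the paper's network simulation: the sketch uses all-to-all coupling, identical natural frequencies, plain Euler steps, and the usual order parameter r = |mean(e^{i theta})| to measure synchrony; alpha is the frustration parameter.

```python
import math, random

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]: 1 = full phase synchrony."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

def simulate(n=10, k=2.0, alpha=0.0, steps=2000, dt=0.01, seed=0):
    """Euler integration of d(theta_i)/dt =
    (k/n) * sum_j sin(theta_j - theta_i - alpha)."""
    rnd = random.Random(seed)
    theta = [rnd.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        theta = [theta[i] + dt * (k / n) *
                 sum(math.sin(tj - theta[i] - alpha) for tj in theta)
                 for i in range(n)]
        theta = [t % (2.0 * math.pi) for t in theta]
    return order_parameter(theta)

r_plain = simulate(alpha=0.0)        # converges toward full synchrony
r_frustrated = simulate(alpha=2.0)   # strong frustration destabilizes it
```

In the paper the frustration acts on a complex network rather than all-to-all coupling, which is what lets symmetric but distant pairs of nodes lock while the system as a whole stays away from full synchronization.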