786 results for Associative Memory
Abstract:
The goal of this project has been the development of biologically inspired algorithms for artificial olfaction. To achieve it, we relied on the support vector machine paradigm. We built algorithms that mimic the computational processes of the different systems making up the insect olfactory system, in particular that of the locust Schistocerca gregaria. We focused on the antennal lobes and on the mushroom body. The former is considered an odour-coding device which, from the temporal response of the olfactory receptors on the antennae, generates a spatio-temporal activation pattern. The mushroom body, in turn, is thought to act as a memory for odours as well as a centre for multi-sensory integration. The first step was the construction of detailed models of both systems. Next, we used these models to process different types of signals, with the aim of abstracting the underlying computational principles. Finally, we evaluated the capabilities of these abstract models and used them to process data from gas sensors. The results show that the abstract models behave better in the presence of noise and have a larger memory storage capacity than more classical models, such as Hopfield associative memories, and, under certain circumstances, even than Support Vector Machines themselves.
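Since this abstract benchmarks the proposed models against Hopfield associative memories, a minimal textbook Hopfield network is sketched below for orientation; the Hebbian storage rule, asynchronous sign updates, pattern size and noise level are generic illustrative assumptions and are not taken from the thesis itself.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: patterns is an (n_patterns, n_units) array of +/-1 values."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n      # outer-product (Hebbian) rule
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def recall(W, probe, n_steps=10):
    """Asynchronous sign updates from a noisy probe towards a stored pattern."""
    state = probe.copy()
    for _ in range(n_steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(3, 100))                   # three random bipolar patterns
W = train_hopfield(stored)
noisy = stored[0] * rng.choice([1, -1], 100, p=[0.9, 0.1])    # flip roughly 10% of the bits
print(np.mean(recall(W, noisy) == stored[0]))                 # fraction of bits recovered
```

The capacity and noise-tolerance comparisons reported in the thesis refer to this kind of baseline, whose storage capacity is known to be limited to roughly 0.14 patterns per unit.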
Abstract:
Learning is the ability of an organism to adapt to changes in its environment in response to its past experience. It is a widespread ability in the animal kingdom, but its evolutionary aspects are poorly known. Learning ability is supposedly advantageous under some conditions, when the environment is neither too stable, because in that case there is no need to learn to predict any event, nor changing too fast, because then environmental cues are not reliable enough to be used. Nevertheless, learning ability is also known to be costly in terms of the energy needed for neuronal synthesis and memory formation, and in terms of initial mistakes. During my PhD, I focused on the genetic variability of learning ability in natural populations. Genetic variability is the basis on which natural selection and genetic drift can act. How does learning ability vary in nature? What are the roles of additive genetic variation and maternal effects in this variation? Is it involved in evolutionary trade-offs with other fitness-related traits?
I investigated a natural population of the fruit fly Drosophila melanogaster, a model organism whose learning ability is easy to measure with associative memory tests. I used two research tools: multiple inbred lines and isofemale lines derived from a natural population as a representative sample. My work was divided into three parts.
First, I investigated the effects of inbreeding on aversive learning (avoidance of an odor previously associated with a mechanical shock). While the inbred lines consistently showed a 28% reduction in egg-to-adult viability, the reduction in learning performance due to inbreeding was 18% and varied among assays, with a trend to be most pronounced at intermediate conditioning intensity. Variation among inbred lines indicates that ample genetic variance for learning was segregating in the base population, and suggests that the inbreeding depression observed in learning performance was mostly due to dominance rather than overdominance. Across the inbred lines, learning performance was positively correlated with egg-to-adult viability. This positive genetic correlation contradicts previous studies, which observed a trade-off between learning ability and lifespan or larval competitive ability. It suggests that much of the genetic variation for learning is due to pleiotropic effects of genes affecting other functions related to survival. Together with the overall mild effects of inbreeding on learning performance, this suggests that genetic variation specifically affecting learning is either very low, or is due to alleles with mostly additive (semi-dominant) effects. It also suggests that alleles reducing learning performance are on average partially recessive, because their effect does not appear in the outbred base population. Moreover, overdominance seems unlikely as a major cause of the inbreeding depression, because even though the overall mean of the inbred lines is lower than that of the outbred base population, some of the inbred lines show the same learning score as the outbred base population. If overdominance played an important part in inbreeding depression, then all the homozygous lines should show lower learning ability than the outbred base population.
In the second part of my project, I sampled the same natural population again and derived isofemale lines (F = 0.25), which are less adapted to laboratory conditions and therefore more representative of the variance of the natural population.
They also showed genetic variability for learning and for three other fitness-related traits possibly related to learning: resistance to bacterial infection, egg-to-adult viability and developmental time. Nevertheless, the genetic variance of learning ability did not appear to be smaller than the variance of the other traits. The positive correlation previously observed between learning ability and egg-to-adult viability did not appear in the isofemale lines (nor did a negative one). This suggests that there was still genetic variability within the isofemale lines and that they had not fixed the highly deleterious pleiotropic alleles possibly responsible for the previous correlation.
In order to investigate the relative amounts of the nuclear (additive and non-additive effects) and extra-nuclear (maternal and paternal effects) components of variance in learning ability and other fitness-related traits among the inbred lines tested in part one, I performed a diallel cross between them. The nuclear additive genetic variance was higher than the other components for learning ability and resistance to bacterial infection; in contrast, maternal effects were more variable than the other effects for developmental traits. This suggests that maternal effects, which reflect effects of mitochondrial DNA, epigenetic effects, or the amount of nutrients invested by the mother in the egg, are more important early in life and less so at the adult stage. There was no additive genetic correlation between learning ability and the other traits, indicating that the correlation between learning ability and egg-to-adult viability observed in the first part of my project was mostly due to recessive genes.
Finally, my results showed that learning ability is genetically variable. The diallel experiment showed that additive genetic variance was the most important component of the total variance. Moreover, every inbred or isofemale line showed some learning ability. This suggests that alleles impairing learning ability are eliminated by selection, and therefore that learning ability is under strong selection in natural populations of Drosophila. My results alone cannot explain the maintenance of the observed genetic variation. Even if I cannot eliminate the hypothesis of pleiotropy between learning ability and the other fitness-related traits I measured, there is no evidence for any trade-off between these traits and learning ability. This contradicts what has been observed between learning ability and other traits such as lifespan and larval competitive ability.
Abstract:
The main focus of the present thesis was on verbal episodic memory processes that are particularly vulnerable to preclinical and clinical Alzheimer's disease (AD). These processes were studied with a word-learning paradigm, cutting across the domains of memory and language-learning research. Moreover, the differentiation between normal aging, mild cognitive impairment (MCI) and AD was studied with the cognitive screening test CERAD. In study I, the aim was to examine how patients with amnestic MCI differ from healthy controls in the different CERAD subtests. The sensitivity and specificity of the CERAD screening test for MCI and AD were also examined, as previous studies on the sensitivity and specificity of the CERAD had not included MCI patients. The results indicated that MCI is characterized by an encoding deficit, as shown by the overall worse performance on the CERAD Wordlist learning test compared with controls. As a screening test, CERAD was not very sensitive to MCI. In study II, verbal learning and forgetting in amnestic MCI, AD and healthy elderly controls were investigated with an experimental word-learning paradigm in which the names of 40 unfamiliar objects (mainly archaic tools) were trained with or without semantic support. The object names were trained over a 4-day period, and follow-ups were conducted one week, four weeks and eight weeks after the training period. Manipulation of semantic support was included in the paradigm because it was hypothesized that semantic support might benefit performance in the present learning task especially in the MCI group, as semantic memory is quite well preserved in MCI in contrast to episodic memory. We found that word learning was significantly impaired in MCI and AD patients, whereas forgetting patterns were similar across groups. Semantic support showed a beneficial effect on object-name retrieval in the MCI group eight weeks after training, indicating that the MCI patients' preserved semantic memory abilities compensated for their impaired episodic memory. The MCI group performed as well as the controls in the tasks tapping incidental learning and recognition memory, whereas the AD group showed impairment. Both the MCI and the AD group benefited less from phonological cueing than the controls. Our findings indicate that acquisition is compromised in both MCI and AD, whereas long-term retention is not affected to the same extent. Incidental learning and recognition memory seem to be well preserved in MCI. In studies III and IV, the neural correlates of naming newly learned objects were examined in healthy elderly subjects and in amnestic MCI patients by means of positron emission tomography (PET) right after the training period. The naming of newly learned objects by healthy elderly subjects recruited a left-lateralized network, including frontotemporal regions and the cerebellum, that was more extensive than the one related to the naming of familiar objects (study III). Semantic support showed no effects on the PET results for the healthy subjects. The observed activation increases may reflect lexical-semantic and lexical-phonological retrieval, as well as more general associative memory mechanisms. In study IV, compared to the controls, the MCI patients showed increased anterior cingulate activation when naming newly learned objects that had been learned without semantic support. This suggests the recruitment of additional executive and attentional resources in the MCI group.
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor, indeed a memory resistor, whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
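As a concrete illustration of the programmable-resistor behaviour described above, the following is a minimal numerical sketch of the widely used linear ion-drift memristor model associated with the 2008 HP Labs result; the parameter values (R_on, R_off, D, mu_v) and the sinusoidal drive are arbitrary assumptions chosen only to reproduce the characteristic pinched hysteresis, not values taken from the thesis.

```python
import numpy as np

# Linear ion-drift memristor model:
#   v(t) = M(w) * i(t),  M(w) = R_on * w/D + R_off * (1 - w/D)
#   dw/dt = mu_v * (R_on / D) * i(t),  with the state w clipped to [0, D]
R_on, R_off = 100.0, 16e3            # ohm (assumed)
D, mu_v = 10e-9, 1e-14               # device thickness (m) and ion mobility (m^2 V^-1 s^-1), assumed

dt, T = 1e-4, 0.1                    # time step and total simulated time (s)
t = np.arange(0.0, T, dt)
v = 1.0 * np.sin(2 * np.pi * 50 * t) # 50 Hz, 1 V sinusoidal drive (assumed)

w = 0.5 * D                          # initial internal state
i = np.zeros_like(t)
for k, vk in enumerate(v):
    M = R_on * (w / D) + R_off * (1 - w / D)          # instantaneous memristance
    i[k] = vk / M
    w = np.clip(w + mu_v * (R_on / D) * i[k] * dt, 0.0, D)

# Plotting i against v would show the pinched hysteresis loop typical of memristive devices.
```

The key point for the computing applications discussed in the thesis is that the state w, and hence the memristance, persists when the drive is removed, which is what makes in-memory logic and crossbar storage possible.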
Abstract:
Self-organizing neural networks have been implemented in a wide range of application areas such as speech processing, image processing, optimization and robotics. Recent variations to the basic model proposed by the authors enable it to order state space using a subset of the input vector and to apply a local adaptation procedure that does not rely on a predefined test duration limit. Both these variations have been incorporated into a new feature map architecture that forms an integral part of a Hybrid Learning System (HLS) based on a genetic-based classifier system. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset targets set against a subset of their features. The system's objective is to achieve these targets by evolving a behavioural repertoire that efficiently explores and exploits the problem environment. Feature maps encode two types of knowledge within HLS: long-term memory traces of useful regularities within the environment, and classifier performance data calibrated against an object's feature states and targets. Self-organization of these networks constitutes non-genetic (experience-driven) learning within HLS. This paper presents a description of the HLS architecture and an analysis of the modified feature map implementing associative memory. Initial results are presented that demonstrate the behaviour of the system on a simple control task.
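To make the self-organizing feature map component concrete, here is a minimal, generic Kohonen-style sketch (best-matching-unit search followed by neighbourhood-weighted weight updates); the map size, learning rate and neighbourhood schedule are illustrative assumptions, and this is the basic model rather than the modified architecture used in HLS.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a basic 2-D self-organizing feature map on `data` (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates of every unit, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)               # linearly decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.5   # shrinking neighbourhood radius
            # Best-matching unit: the unit whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood centred on the BMU, applied to all units.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights

# Example: ordering a map over random 3-D inputs (e.g. colours).
som = train_som(np.random.default_rng(1).random((500, 3)))
```

The topology-preserving ordering produced by such updates is what lets the feature map act as the long-term associative store described in the abstract.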
Abstract:
A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. The two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy on some difficult data-based modelling problems.
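For orientation, the generic mixture-of-experts prediction underlying such constructions can be written as below; this is the textbook softmax-gated form, and the gate parameters $\mathbf{v}_k$ are a generic assumption rather than the expert selection criterion or function selection manager proposed in the paper.

$$
\hat{y}(\mathbf{x}) \;=\; \sum_{k=1}^{K} g_k(\mathbf{x})\, f_k(\mathbf{x}),
\qquad
g_k(\mathbf{x}) \;=\; \frac{\exp\!\big(\mathbf{v}_k^{\top}\mathbf{x}\big)}{\sum_{j=1}^{K}\exp\!\big(\mathbf{v}_j^{\top}\mathbf{x}\big)},
$$

where $f_k$ is the $k$-th expert (here, a fuzzy rule or local model) and $g_k$ weights its contribution; letting each expert see only a low-dimensional regression vector is what mitigates the curse of dimensionality.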
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is first derived and applied to reduce the rule base, followed by a fine model detection process on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalising the model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
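For reference, the standard A-optimality measure for a linear-in-the-parameters model $\mathbf{y} = \Phi\boldsymbol{\theta} + \mathbf{e}$ is the trace of $(\Phi^{\top}\Phi)^{-1}$, which is proportional to the average parameter variance. A composite cost of the kind described could be sketched as below; the weighting $\lambda$ and the exact combination are assumptions for illustration, not details given in the abstract.

$$
J_A(\Phi) \;=\; \operatorname{tr}\!\left[(\Phi^{\top}\Phi)^{-1}\right],
\qquad
J(\boldsymbol{\theta},\Phi) \;=\; \underbrace{\lVert \mathbf{y} - \Phi\boldsymbol{\theta} \rVert^{2}}_{\text{prediction error}}
\;+\; \lambda\, \underbrace{\operatorname{tr}\!\left[(\Phi^{\top}\Phi)^{-1}\right]}_{\text{parameter variance penalty}}
$$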
Abstract:
Associative memory networks such as radial basis function, neurofuzzy and fuzzy logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, both to overcome the COD and to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture of experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
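As a small illustration of the partitioning idea only (not of the paper's optimisation over partitions or its convex-set decision rule), a Delaunay triangulation of a set of operating points and the lookup of the local region containing a query input can be obtained with scipy; the points and the query below are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
# Hypothetical operating points in a 2-D input space; the unit-square corners are added
# so that any query inside [0, 1]^2 falls within the triangulation.
points = np.vstack([rng.random((12, 2)), [[0, 0], [0, 1], [1, 0], [1, 1]]])
tri = Delaunay(points)                  # Delaunay partition of the input space into simplices

x = np.array([[0.4, 0.6]])              # a query input
k = int(tri.find_simplex(x)[0])         # index of the simplex (local region) containing x
print(k, tri.simplices[k])              # vertices defining that local linear model's region
```

Each simplex would then carry its own locally linear model, which is what makes the overall scheme directly usable by linear control and estimation methods.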
Abstract:
The modelling of nonlinear stochastic dynamical processes from data involves solving the problems of data gathering, preprocessing, model architecture selection, learning or adaptation, parametric evaluation and model validation. For a given model architecture such as associative memory networks, a common problem in nonlinear modelling is the curse of dimensionality. A series of complementary data-based constructive identification schemes, mainly based on, but not limited to, operating-point-dependent fuzzy models, is introduced in this paper with the aim of overcoming the curse of dimensionality. These include (i) a mixture of experts algorithm based on a forward constrained regression algorithm; (ii) an inherently parsimonious Delaunay-input-space-partition-based piecewise locally linear modelling concept; (iii) a neurofuzzy model constructive approach based on forward orthogonal least squares and optimal experimental design; and finally (iv) a neurofuzzy model construction algorithm based on Bézier-Bernstein polynomial basis functions and the additive decomposition. Illustrative examples demonstrate their applicability, showing that the final major hurdle in data-based modelling has almost been removed.
Abstract:
Objective. The main purpose of the study was to examine whether emotion impairs associative memory for previously seen items in older adults, as previously observed in younger adults. Method. Thirty-two younger adults and 32 older adults participated. The experiment consisted of 2 parts. In Part 1, participants learned picture–object associations for negative and neutral pictures. In Part 2, they learned picture–location associations for negative and neutral pictures; half of these pictures were seen in Part 1 whereas the other half were new. The dependent measure was how many locations of negative versus neutral items in the new versus old categories participants remembered in Part 2. Results. Both groups had more difficulty learning the locations of old negative pictures than of new negative pictures. However, this pattern was not observed for neutral items. Discussion. Despite the fact that older adults showed overall decline in associative memory, the impairing effect of emotion on updating associative memory was similar between younger and older adults.
Abstract:
The hippocampus receives input from upper levels of the association cortex and is implicated in many mnemonic processes, but the exact mechanisms by which it codes and stores information remain unresolved. This work examines the flow of information through the hippocampal formation while attempting to determine the computations that each of the hippocampal subfields performs in learning and memory. The formation, storage, and recall of hippocampal-dependent memories theoretically utilize an autoassociative attractor network that functions by implementing two competitive, yet complementary, processes. Pattern separation, hypothesized to occur in the dentate gyrus (DG), refers to the ability to decrease the similarity among incoming information by producing output patterns that overlap less than the inputs. In contrast, pattern completion, hypothesized to occur in the CA3 region, refers to the ability to reproduce a previously stored output pattern from a partial or degraded input pattern. Prior to addressing the functional role of the DG and CA3 subfields, the spatial firing properties of neurons in the dentate gyrus were examined. The principal cell of the dentate gyrus, the granule cell, has spatially selective place fields; however, the behavioral correlates of another excitatory cell, the mossy cell of the dentate polymorphic layer, are unknown. This report shows that putative mossy cells have spatially selective firing that consists of multiple fields, similar to previously reported properties of granule cells. Other cells recorded from the DG had single place fields. Compared to cells with multiple fields, cells with single fields fired at a lower rate during sleep, were less likely to burst, and were more likely to be recorded simultaneously with a large population of neurons that were active during sleep and silent during behavior. These data suggest that single-field and multiple-field cells constitute at least two distinct cell classes in the DG. Based on these characteristics, we propose that putative mossy cells tend to fire in multiple, distinct locations in an environment, whereas putative granule cells tend to fire in single locations, similar to place fields of the CA1 and CA3 regions. Experimental evidence supporting the theories of pattern separation and pattern completion comes from both behavioral and electrophysiological tests. These studies specifically focused on the function of each subregion and made implicit assumptions about how environmental manipulations changed the representations encoded by the hippocampal inputs. However, the cell populations that provided these inputs were in most cases not directly examined. We conducted a series of studies to investigate the neural activity in the entorhinal cortex, dentate gyrus, and CA3 under the same experimental conditions, which allowed a direct comparison between the input and output representations. The results show that the dentate gyrus representation changes between the familiar and cue-altered environments more than its input representations, whereas the CA3 representation changes less than its input representations. These findings are consistent with longstanding computational models proposing that (1) CA3 is an associative memory system performing pattern completion in order to recall previous memories from partial inputs, and (2) the dentate gyrus performs pattern separation to help store different memories in ways that reduce interference when the memories are subsequently recalled.
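As a toy illustration of the two operations contrasted above, the sketch below quantifies pattern overlap with a simple pairwise correlation; calling a stage "separating" when output overlap falls below input overlap, and "completing" when it rises, follows the common computational definition rather than the specific analyses of this thesis, and the patterns themselves are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pairwise_corr(patterns):
    """Mean pairwise correlation across a set of activity patterns (one pattern per row)."""
    c = np.corrcoef(patterns)
    return c[np.triu_indices_from(c, k=1)].mean()

# Five input patterns that overlap heavily (noisy variants of one cue).
cue = rng.normal(size=200)
inputs = np.array([cue + 0.3 * rng.normal(size=200) for _ in range(5)])

# A separating stage would make outputs overlap less (e.g. sparse, decorrelated recoding) ...
separated = np.array([rng.normal(size=200) * (rng.random(200) < 0.1) for _ in range(5)])

# ... whereas a completing stage would map all cues back towards the stored pattern.
completed = np.array([cue + 0.05 * rng.normal(size=200) for _ in range(5)])

print("inputs   :", round(mean_pairwise_corr(inputs), 2))
print("separated:", round(mean_pairwise_corr(separated), 2))
print("completed:", round(mean_pairwise_corr(completed), 2))
```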
Abstract:
Through the present research, the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms using pseudo-inverse techniques is explored. The design of these algorithms has been carried out taking into account their possible implementation on specific-purpose processors of low complexity. In the first chapter, the techniques for the detection and measurement of gamma radiation employed to construct the spectra used throughout the research are reviewed. The basic concepts related to the nature and properties of hard electromagnetic radiation are also re-examined, together with the physical and electronic processes involved in the detection of this kind of radiation, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, which is considered as a classification of the number of individual photon detections as a function of the energy associated with each photon. To this end, a brief description of the most important matter-energy interaction phenomena conditioning the detection and spectrum formation processes is given.
The radiation detector is considered the most critical element of the measurement system, as this device strongly conditions the detection process. For this reason, the characteristics of the most common detectors are re-examined, with special emphasis on those of semiconductor type, as these are the most frequently employed nowadays. Finally, the fundamental electronic subsystems for preconditioning and treating the signal delivered by the detector, classically referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem most relevant to the scope of the present research is the so-called multichannel analyzer, which is devoted to the qualitative treatment of the signal, building up a histogram of radiation intensity over the range of energies to which the detector is sensitive. The resulting N-dimensional vector is what is generally known as the radiation spectrum. The different radionuclides contributing to a composite source leave their fingerprint in the resulting spectrum. In the second chapter, the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for quantifying their relative contributions are exhaustively reviewed. One of the most popular is multiple linear regression, which is proposed as the approach best suited to the constraints and restrictions of the problem: the need to treat low-resolution spectra, the absence of control by a human operator (unsupervised operation), and the possibility of implementation as low-complexity algorithms amenable to being supported by specific-purpose VLSI processors. The analysis problem is formally stated in the third chapter, following the guidelines mentioned above, and it is shown that the problem admits a solution within the theory of linear associative memories. An operator based on this kind of structure can provide the solution to the desired spectral decomposition problem. In the same context, a pair of complementary adaptive algorithms for the construction of this operator are proposed, which share arithmetic characteristics that make them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives the operator great flexibility for the progressive incorporation of new information into the knowledge base. The fourth chapter deals with an additional, highly complex problem: the treatment of the deformations introduced into the spectrum by instrumental drifts in both the detecting device and the preconditioning electronics. These deformations render the proposed linear regression model almost useless for describing the resulting spectrum. A new model including the drifts is therefore derived as an extension of the individual contributions to the composite spectrum, which implies a simple extension of the associative memory that can tolerate the drifts in the problem mixture and carry out a robust analysis of contributions. The extension method is based on the low-amplitude perturbation hypothesis.
Experimental practice shows that, in certain cases, instrumental drifts may provoke severe distortions in the spectrum that cannot be treated under the previous hypothesis. To cover these less frequent cases, the fifth chapter addresses the problem of measurements affected by strong drifts from the point of view of nonlinear optimization theory. This reformulation leads to recursive algorithms inspired by the Gauss-Newton method, which allow the introduction of feedback linear memories: computing elements with a noticeably improved capability to decompose spectra affected by strong drifts, without the excessive computational load of classical nonlinear optimization algorithms. The work concludes with a discussion of the results obtained at the three main levels of study addressed in the third, fourth and fifth chapters, together with a review of the main conclusions derived from the study and an outline of possible lines of continuation of the present work.
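A minimal numerical sketch of the kind of linear decomposition described above, posed as a least-squares / pseudo-inverse problem against a library of reference spectra, is given below; the reference spectra here are synthetic Gaussian photopeaks invented purely for illustration, not measured nuclide signatures, and the drift-tolerant and feedback extensions of the thesis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
channels = np.arange(512)

def peak(center, width, amplitude=1.0):
    """Synthetic photopeak used as a stand-in for a measured reference spectrum."""
    return amplitude * np.exp(-0.5 * ((channels - center) / width) ** 2)

# Library matrix A: one column per (hypothetical) radionuclide reference spectrum.
A = np.stack([peak(100, 6), peak(230, 8), peak(400, 10)], axis=1)

true_activities = np.array([3.0, 1.5, 0.7])
measured = A @ true_activities + rng.normal(0, 0.05, size=channels.size)  # composite + noise

# Pseudo-inverse (least-squares) estimate of the relative activities.
estimated = np.linalg.pinv(A) @ measured
print(np.round(estimated, 2))   # should be close to [3.0, 1.5, 0.7]
```

The adaptive associative-memory operators developed in the thesis can be read as ways of building and updating the role of `pinv(A)` incrementally, rather than recomputing it from scratch whenever new reference information is added.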
Abstract:
Geospatio-temporal conceptual models provide a mechanism to explicitly represent geospatial and temporal aspects of applications. Such models, which focus on both what and when/where, need to be more expressive than conventional conceptual models (e.g., the ER model), which primarily focus on what is important for a given application. In this study, we view conceptual schema comprehension of geospatio-temporal data semantics in terms of matching the external problem representation (that is, the conceptual schema) to the problem-solving task (that is, syntactic and semantic comprehension tasks), an argument based on the theory of cognitive fit. Our theory suggests that an external problem representation that matches the problem solver's internal task representation will enhance performance, for example, in comprehending such schemas. To assess performance on geospatio-temporal schema comprehension tasks, we conducted a laboratory experiment using two semantically identical conceptual schemas, one of which mapped closely to the internal task representation while the other did not. As expected, we found that the geospatio-temporal conceptual schema that corresponded to the internal representation of the task enhanced the accuracy of schema comprehension; comprehension time was equivalent for both. Cognitive fit between the internal representation of the task and conceptual schemas with geospatio-temporal annotations was, therefore, manifested in accuracy of schema comprehension and not in time for problem solution. Our findings suggest that the annotated schemas facilitate understanding of data semantics represented on the schema.
Abstract:
The Thouless-Anderson-Palmer (TAP) approach was originally developed for analysing the Sherrington-Kirkpatrick model in the study of spin glasses and has since been employed mainly in the context of extensively connected systems, where each dynamical variable interacts weakly with the others. Recently, we extended this method to handle general intensively connected systems, where each variable has only O(1) connections characterised by strong couplings. However, the new formulation looks quite different from existing analyses, and it is natural to ask whether it actually reproduces known results for systems of extensive connectivity. In this chapter, we apply our formulation of the TAP approach to an extensively connected system, the Hopfield associative memory model, and show that it produces results identical to those obtained by the conventional formulation.
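For reference, the conventional TAP mean-field equations for an extensively connected system with couplings $J_{ij}$ and external fields $h_i$ take the familiar form below, the last term being the Onsager reaction correction; for the Hopfield model the standard Hebbian couplings $J_{ij} = \frac{1}{N}\sum_{\mu}\xi_i^{\mu}\xi_j^{\mu}$ would be substituted. This is the textbook weak-coupling form, not the chapter's intensive-connectivity formulation.

$$
m_i \;=\; \tanh\!\Big[\beta\Big(h_i + \sum_{j\neq i} J_{ij}\, m_j
\;-\; \beta\, m_i \sum_{j\neq i} J_{ij}^{2}\,\big(1-m_j^{2}\big)\Big)\Big]
$$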
Abstract:
This thesis initially presents an 'assay' of the literature pertaining to individual differences in human-computer interaction. A series of experiments is then reported, designed to investigate the association between a variety of individual characteristics and various computer task and interface factors. Predictor variables included age, computer expertise, and psychometric tests of spatial visualisation, spatial memory, logical reasoning, associative memory, and verbal ability. These were studied in relation to a variety of computer-based tasks, including: (i) word processing and its component elements; (ii) the location of target words within passages of text; (iii) the navigation of networks and menus; (iv) command generation using menus and command line interfaces; (v) the search and selection of icons and text labels; and (vi) information retrieval. A measure of self-reported workload was also included in several of these experiments. The main experimental findings included: (i) an interaction between spatial ability and the manipulation of semantic but not spatial interface content; (ii) verbal ability being predictive of only certain task components of word processing; (iii) age differences in word processing and information retrieval speed but not accuracy; (iv) evidence of compensatory strategies being employed by older subjects; (v) evidence of performance strategy differences which disadvantaged high-spatial subjects in conditions of low spatial information content; (vi) interactive effects of associative memory, expertise and command strategy; (vii) an association between logical reasoning and word processing but not information retrieval; (viii) an interaction between expertise and cognitive demand; and (ix) a stronger association between cognitive ability and novice performance than expert performance.