937 results for temporal-logic model
Abstract:
In recent years an ever-increasing degree of automation has been observed in most industrial processes. This increase is driven by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend towards complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably packaging machine manufacturers; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS, from a functional and behavioural point of view, is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical and electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time the diagnostic information that supports the maintenance of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, by contrast, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way: no clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems has been observed. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety. In other words, the control system should deal not only with the nominal behaviour but also with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that in its final implementation it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and eventually reconfigure the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
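The thesis leans on Discrete Event Systems (DES) theory for formal verification and online diagnosis. As a rough, self-contained illustration of the kind of question a DES diagnoser answers (and not the component models actually used in the thesis), the sketch below encodes a tiny plant as a finite automaton with an unobservable fault event and checks whether every run consistent with an observed event sequence must contain the fault. The automaton, the event names and the bounds are invented for this example.

```python
# Hypothetical toy automaton (not the thesis' component models):
# 'fault' is an unobservable event; only 'start' and 'stop' are observed.
TRANS = {
    ("idle", "start"): "moving",
    ("moving", "stop"): "idle",
    ("moving", "fault"): "stuck",   # unobservable failure
    ("stuck", "stop"): "stuck",     # keeps reporting 'stop' but never recovers
}
OBSERVABLE = {"start", "stop"}

def consistent_runs(obs, state, trace=()):
    """All event traces starting in `state` whose observable projection is `obs`."""
    runs = [trace] if not obs else []
    for (s, e), nxt in TRANS.items():
        if s != state:
            continue
        if e in OBSERVABLE:
            if obs and obs[0] == e:
                runs += consistent_runs(obs[1:], nxt, trace + (e,))
        elif len(trace) < 10:       # crude bound on unobservable recursion
            runs += consistent_runs(obs, nxt, trace + (e,))
    return runs

observed = ("start", "stop", "stop")   # two consecutive stops cannot happen fault-free
runs = consistent_runs(observed, "idle")
print("consistent runs:", runs)
print("fault certainly occurred:", bool(runs) and all("fault" in r for r in runs))
```

In a real diagnoser the brute-force enumeration is replaced by an observer automaton built offline, but the underlying question, whether the fault is certain given the observations, is the same.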
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from the morphogenetic processes, where autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the systems dynamics at different levels of such a hierarchy might be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed in modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reaction addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real data of gene expression with spatial and temporal resolution, acquired from free online sources.
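The simulation engine is described as an optimised many-species/many-channels version of Gillespie's direct method. The sketch below is a minimal, unoptimised direct-method loop on an invented two-compartment toy network (one internal degradation channel per compartment plus one membrane-transfer channel); the species, rates and propensities are illustrative and are not taken from MS-BioNET.

```python
import random

# toy state: counts of species A in two compartments (invented, not MS-BioNET data)
state = {"A_c1": 100, "A_c2": 0}

# reaction channels: (rate constant, propensity function, state update)
channels = [
    (0.01, lambda s: s["A_c1"], {"A_c1": -1}),                 # degradation in c1
    (0.05, lambda s: s["A_c1"], {"A_c1": -1, "A_c2": +1}),     # transfer c1 -> c2
    (0.02, lambda s: s["A_c2"], {"A_c2": -1}),                 # degradation in c2
]

def gillespie_direct(state, channels, t_end=50.0, seed=0):
    rng = random.Random(seed)
    t = 0.0
    trajectory = [(t, dict(state))]
    while t < t_end:
        props = [k * f(state) for k, f, _ in channels]
        a0 = sum(props)
        if a0 == 0:                       # nothing can fire any more
            break
        t += rng.expovariate(a0)          # time to the next event
        r = rng.uniform(0, a0)            # choose which channel fires
        acc = 0.0
        for (k, f, update), a in zip(channels, props):
            acc += a
            if r <= acc:
                for species, delta in update.items():
                    state[species] += delta
                break
        trajectory.append((t, dict(state)))
    return trajectory

if __name__ == "__main__":
    for t, s in gillespie_direct(state, channels)[-3:]:
        print(f"t={t:6.2f}  {s}")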
Abstract:
A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated as well as heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first-order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
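The abstract refers to a sedimentation scheme based on first-order approximations of vertical mixing ratio profiles. The snippet below is not that scheme; it is a generic, mass-conserving first-order upwind sedimentation step for a single model column, shown only to make the type of calculation concrete. All layer values, densities and the settling velocity are placeholder numbers.

```python
import numpy as np

def sediment_step(q, rho, dz, v_sed, dt):
    """One explicit upwind sedimentation step for a mixing-ratio profile.

    q     : mixing ratio per layer (index 0 = top of the column)
    rho   : air density per layer [kg m-3]
    dz    : layer thickness per layer [m]
    v_sed : downward settling velocity per layer [m s-1]
    dt    : time step [s]

    Mass leaving layer k falls into layer k+1; mass leaving the bottom
    layer is removed from the column (deposition).  This is a generic
    illustration, not the ECHAM5/MESSy scheme of the thesis.
    """
    mass = q * rho * dz                                 # column mass per layer
    frac = np.minimum(v_sed * dt / dz, 1.0)             # fraction falling out
    flux = frac * mass
    mass_new = mass - flux
    mass_new[1:] += flux[:-1]                            # arrives in layer below
    return mass_new / (rho * dz)

# toy column: 10 layers, particles only in the middle of the profile
q = np.zeros(10); q[3:6] = 1e-9
rho = np.full(10, 0.05)
dz = np.full(10, 1000.0)
v_sed = np.full(10, 0.01)                                # placeholder value
print(sediment_step(q, rho, dz, v_sed, 600.0))
```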
Towards model driven software development for Arduino platforms: a DSL and automatic code generation
Abstract:
The thesis explores the production of software systems for embedded systems by means of techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model capturing the fundamental concepts of embedded systems. The model abstracts from the particular platform in use and identifies the abstractions that characterize embedded systems in general; the meta-model is therefore platform-independent. For automatic code generation a reference platform was adopted, namely Arduino. Arduino is an embedded system that is becoming increasingly popular because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, and can therefore be defined by a set of EBNF rules. The technology used to define the meta-model is Xtext: a framework that allows EBNF rules to be written and automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide a syntax-driven editor for the language defined by the meta-model. Automatic code generation was implemented with the Xtend2 language, which makes it possible to traverse the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files. The generated code provides essentially the whole schematic part of the application, while leaving the development of the business logic to the application designer. After the definition of the meta-model of a single embedded system, the level of abstraction was raised towards the part of the meta-model concerning the interaction of an embedded system with other systems, moving to the perspective of a System understood as a set of individual interacting systems; this definition is given from the point of view of the system whose model is being defined. The thesis also introduces a case study that, although fairly simple, provides an example and a tutorial for developing applications with the meta-model, and shows how the task of the application designer becomes rather simple and immediate, provided it is based on a good analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
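As a language-agnostic illustration of the model-to-text step described above (the thesis itself uses an Xtext grammar and Xtend2 templates, not Python), the sketch below turns a tiny, hypothetical platform-independent model of sensors and actuators into an Arduino-flavoured skeleton, leaving the business logic to the application designer.

```python
# Hypothetical, simplified illustration of model-to-text generation; the
# model structure, names and pins are invented and not taken from the thesis.
model = {
    "name": "GreenhouseNode",
    "sensors":   [{"name": "temp", "pin": "A0"}],
    "actuators": [{"name": "fan",  "pin": 9}],
}

def generate_sketch(m):
    lines = [f"// generated from model '{m['name']}'"]
    for a in m["actuators"]:
        lines.append(f"const int {a['name']}Pin = {a['pin']};")
    lines.append("void setup() {")
    for a in m["actuators"]:
        lines.append(f"  pinMode({a['name']}Pin, OUTPUT);")
    lines.append("}")
    lines.append("void loop() {")
    for s in m["sensors"]:
        lines.append(f"  int {s['name']} = analogRead({s['pin']});")
    lines.append("  // business logic written by the application designer")
    lines.append("}")
    return "\n".join(lines)

print(generate_sketch(model))
```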
Abstract:
The instability of river banks can result in considerable human and land losses. The Po River is the most important river in Italy, characterized by main embankments of significant and constantly increasing height. This study uses multilayer perceptron artificial neural networks (ANNs) to construct prediction models for the stability analysis of river banks along the Po River under various river and groundwater boundary conditions. For this aim, a number of networks of threshold logic units are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influencing geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed from stability analyses performed with a finite element code over different representative sections of river embankments. To verify their validity, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and that they notably outperform the derived multiple linear regression models.
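A minimal sketch of the kind of MLP regression described, using synthetic placeholder data: in the study the inputs are geometrical and geotechnical parameters of the embankment sections and the target FS values come from finite element stability analyses, neither of which is reproduced here.

```python
# Synthetic stand-in for the FS regression task (not the study's data or model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# placeholder inputs, e.g. bank height, slope angle, river level, cone resistance
X = rng.uniform(0.0, 1.0, size=(300, 4))
# placeholder target: a smooth nonlinear "factor of safety" surface plus noise
y = 1.2 + 0.8 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(mlp.score(X_te, y_te), 3))
```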
Abstract:
The umbellifers (Apiaceae), including the largest, worldwide-distributed subfamily Apioideae, display very uniform characters in their inflorescences. The 'double umbels' are composed of small white or yellow flowers and are visited by many unspecialized insects. The uniform impression this creates may be one reason why the underlying morphology has so far received little attention. The subject of this dissertation is therefore the 'hidden diversity' in the inflorescences of the Apiaceae-Apioideae, with the aim of determining how plant architecture influences the way flowers are presented in space and time, and thereby the reproductive system of a species. In the first chapter, a comparison of nine selected species shows that in these self-fertile and unspecifically pollinated plants, synchronization and rhythmicity in flower presentation promote outcrossing. Either the plants pass through just one separate male and one female flowering phase (Xanthoselinum alsaticum), or the modular construction of the plants leads to a sequence of male and female flowering phases (multicyclic dichogamy). In this context, dioecy in Trinia glauca can be seen as a separation of the flowering phases onto individuals of different sexes. For the andromonoecious species it is shown that the proportion of functionally male flowers does not uniformly increase or decrease with increasing umbel order. As a result, the plants act at different times, and with different intensity, more as pollen recipients or more as pollen donors. It becomes clear that the 'uniform pattern' of the Apioideae, with umbels of different orders, dichogamous flowers and their various sexual expressions, constitutes a complex spatio-temporal arrangement that optimizes the reproductive system. The second chapter presents the results of manipulation experiments (hand pollination, pollinator exclusion, removal of lower-order umbels) on Chaerophyllum bulbosum, which show that the spatio-temporal arrangement of flower presentation allows the species to respond flexibly to environmental influences. It turns out that mechanical damage has hardly any influence on the degree of andromonoecy and the percentage of fruit set of individuals. The prerequisite for this responsiveness is, again, the modular construction of the plants. Together with the andromonoecy-based reservoir of sexually flexible male flowers in the later-formed umbels, it allows the plants to compensate for missing fruit set in the flowers of early-flowering umbels. The third chapter presents a comparative character analysis of 255 Apioideae species representing all relationship groups, growth forms and distribution areas of the group. The aim of the analysis was to identify character syndromes that would clarify the relationship between architecture and reproductive system. Interestingly, the only characters that co-occur are protogyny and the gradual decrease of male flowers with increasing umbel order. All other characters vary independently of one another and generate the same functional pattern again and again in many different ways, a pattern that can be interpreted as the 'breeding syndrome' of the Apioideae.
The work makes an important contribution to the understanding of the inflorescences of the Apiaceae and, beyond that, to morphological variation in 'unspecialized' reproductive systems. Evidently, in the Apioideae the selection pressure lies on the maintenance of generalist pollination and overrides all morphological-phylogenetic character variants.
Abstract:
In multivariate time series analysis, the equal-time cross-correlation is a classic and computationally efficient measure for quantifying linear interrelations between data channels. When the cross-correlation coefficient is estimated using a finite amount of data points, its non-random part may be strongly contaminated by a sizable random contribution, such that no reliable conclusion can be drawn about genuine mutual interdependencies. The random correlations are determined by the signals' frequency content and the amount of data points used. Here, we introduce adjusted correlation matrices that can be employed to disentangle random from non-random contributions to each matrix element independently of the signal frequencies. Extending our previous work, these matrices allow analyzing spatial patterns of genuine cross-correlation in multivariate data regardless of confounding influences. The performance is illustrated using model systems with known interdependence patterns. Finally, we apply the methods to electroencephalographic (EEG) data with epileptic seizure activity.
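The adjusted correlation matrices themselves are the paper's contribution and are not reproduced here; the sketch below only computes the plain equal-time cross-correlation matrix and uses a simple circular-shift surrogate to gauge how large purely random correlations can be for signals of a given length, which is the problem the adjusted matrices address.

```python
import numpy as np

def equal_time_corr(data):
    """Equal-time cross-correlation matrix of (channels x samples) data."""
    return np.corrcoef(data)

def surrogate_corr_level(data, n_surr=200, seed=0):
    """Rough estimate of the magnitude of purely random correlations for
    signals of this length, obtained by circularly shifting each channel
    independently (a naive surrogate, not the adjusted-matrix method of
    the paper)."""
    rng = np.random.default_rng(seed)
    n_ch, n_s = data.shape
    vals = []
    for _ in range(n_surr):
        shifted = np.array([np.roll(ch, rng.integers(1, n_s)) for ch in data])
        c = np.corrcoef(shifted)
        vals.append(np.abs(c[np.triu_indices(n_ch, k=1)]))
    return np.mean(vals)

# two coupled channels plus one independent one, short recording (toy data)
rng = np.random.default_rng(1)
x = rng.normal(size=500)
data = np.vstack([x, 0.7 * x + 0.7 * rng.normal(size=500), rng.normal(size=500)])
print(equal_time_corr(data).round(2))
print("typical random |corr| at this data length:",
      round(float(surrogate_corr_level(data)), 3))
```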
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with non-related stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields a learning behavior which is consistent with behavioral data from humans and monkeys, revealing properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
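A schematic sketch, not the published rule: the toy population below uses stochastic binary "spikes", an exponentially decaying eligibility trace that correlates the input with the deviation of spiking from its expectation, and a weight update gated by a reward delivered only at the end of the trial. The population feedback signal, the precise spike-time dependence and the biophysical variables of the actual model are omitted, and all sizes and rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

N, D = 20, 5                 # population size, input dimension (arbitrary)
w = np.zeros((N, D))
lr, trace_decay = 0.1, 0.8   # learning rate, eligibility-trace decay per step

def trial(stimulus, target, steps=5):
    """One trial: the stimulus is shown for a few steps, the population votes,
    and reward arrives only at the end (delayed reinforcement)."""
    global w
    trace = np.zeros_like(w)
    votes = 0
    for _ in range(steps):
        p = sigmoid(w @ stimulus)                    # spiking probabilities
        spikes = (rng.random(N) < p).astype(float)   # stochastic spikes
        votes += spikes.sum()
        # eligibility: correlate 'spike minus expectation' with the input
        trace = trace_decay * trace + np.outer(spikes - p, stimulus)
    decision = 1 if votes > 0.5 * N * steps else 0
    reward = 1.0 if decision == target else -1.0
    w += lr * reward * trace                         # reward-modulated update
    return reward

stimuli = [(np.array([1., 0, 0, 1, 0]), 1), (np.array([0., 1, 1, 0, 0]), 0)]
for epoch in range(200):
    r = np.mean([trial(s, t) for s, t in stimuli])
print("average reward in the last epoch:", r)
```

Because the trace accumulates the spike-minus-expectation correlations and the reward multiplies it only at the trial's end, the update tends to increase the average reward even though the reinforcement is delayed, which is the qualitative point of the abstract.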
Abstract:
Learning by reinforcement is important in shaping animal behavior. But behavioral decision making is likely to involve the integration of many synaptic events in space and time. So when a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but by a population feedback signal as well. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task the reward is delayed beyond the last action with non-related stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It only has a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
Abstract:
We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In the cascade of these so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness, not of the immediately preceding response, but of a decision made earlier on in the stimulus-decision sequence. So the proposed model does not rely on the temporal contiguity between decision and pertinent reward and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks such as sequential decision making serve to highlight the robustness of the proposed scheme and, further, contrast its performance to that of temporal difference based approaches to reinforcement learning.
Abstract:
This paper aims at the development and evaluation of a personalized insulin infusion advisory system (IIAS), able to provide real-time estimations of the appropriate insulin infusion rate for type 1 diabetes mellitus (T1DM) patients using continuous glucose monitors and insulin pumps. The system is based on a nonlinear model-predictive controller (NMPC) that uses a personalized glucose-insulin metabolism model, consisting of two compartmental models and a recurrent neural network. The model takes as input the patient's information regarding meal intake, glucose measurements, and insulin infusion rates, and provides glucose predictions. The predictions are fed to the NMPC, in order for the latter to estimate the optimum insulin infusion rates. An algorithm based on fuzzy logic has been developed for the on-line adaptation of the NMPC control parameters. The IIAS has been evaluated in silico using an appropriate simulation environment (the UVa T1DM simulator). The IIAS was able to handle various meal profiles, fasting conditions, interpatient variability, intraday variation in physiological parameters, and errors in meal amount estimations.
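A minimal closed-loop sketch of the predictive-control idea, under heavy simplifying assumptions: the glucose model is a toy linear surrogate (not the personalised compartmental-plus-recurrent-network model of the paper), the optimisation is a brute-force search over a few constant infusion rates, and the fuzzy-logic parameter adaptation is omitted. All constants are invented.

```python
import numpy as np

# Toy linear surrogate of glucose dynamics (NOT the paper's personalised model):
# g[k+1] = g[k] + dt * (meal - k_ins * insulin - k_dec * (g[k] - g_b))
dt, g_b = 5.0, 100.0                 # minutes, basal glucose [mg/dl] (illustrative)
k_ins, k_dec = 2.0, 0.02

def simulate(g, insulin_seq, meals):
    for u, m in zip(insulin_seq, meals):
        g = g + dt * (m - k_ins * u - k_dec * (g - g_b))
    return g

def mpc_insulin(g, meals_ahead, horizon=6, target=110.0):
    """Brute-force predictive control: try a few constant infusion rates over
    the horizon and keep the one whose predicted end-of-horizon glucose is
    closest to target (a stand-in for the nonlinear optimisation in the NMPC)."""
    candidates = np.linspace(0.0, 2.0, 21)          # illustrative range
    cost = [abs(simulate(g, [u] * horizon, meals_ahead) - target)
            for u in candidates]
    return candidates[int(np.argmin(cost))]

g, meals = 180.0, [0.5] * 3 + [0.0] * 20            # a meal disturbance, then fasting
for k in range(20):
    u = mpc_insulin(g, meals[k:k + 6])
    g = g + dt * (meals[k] - k_ins * u - k_dec * (g - g_b))
print("glucose after closed-loop control:", round(g, 1))
```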
Abstract:
In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
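The following is a schematic rendering of the cascade idea, not the published equations: a first trace correlates presynaptic input with postsynaptic spiking, a second trace is tagged with the first at the moment the behavioural decision is taken, and the weight change correlates the second trace with a reward delivered only at the end of the episode. Network size, decay constants and the decision rule are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

N, D = 10, 4
w = np.zeros((N, D))
lam1, lam2, lr = 0.7, 0.95, 0.05    # decays of the two trace stages, learning rate

def episode(stimulus, target, steps=8, decision_step=3):
    """Schematic cascade of eligibility traces (not the published model)."""
    global w
    e1 = np.zeros_like(w)     # stage 1: pre-post trace
    e2 = np.zeros_like(w)     # stage 2: decision-gated trace
    decision = None
    for t in range(steps):
        p = sigmoid(w @ stimulus)
        spikes = (rng.random(N) < p).astype(float)
        e1 = lam1 * e1 + np.outer(spikes - p, stimulus)
        if t == decision_step:                    # decision taken mid-episode
            decision = 1 if spikes.mean() > 0.5 else 0
            e2 = e2 + e1                          # tag the trace at decision time
        e2 = lam2 * e2
    reward = 1.0 if decision == target else -1.0  # delayed reinforcement
    w += lr * reward * e2                         # stage 3: reward-gated update
    return reward

tasks = [(np.array([1., 0, 1, 0]), 1), (np.array([0., 1, 0, 1]), 0)]
for _ in range(500):
    r = np.mean([episode(s, t) for s, t in tasks])
print("mean reward after training:", r)
```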
Abstract:
Clinical and experimental evidence indicates that inflammatory processes contribute to the pathophysiology of epilepsy, but underlying mechanisms remain mostly unknown. Using immunohistochemistry for CD45 (common leukocyte antigen) and CD3 (T-lymphocytes), we show here microglial activation and infiltration of leukocytes in sclerotic tissue from patients with mesial temporal lobe epilepsy (TLE), as well as in a model of TLE (intrahippocampal kainic acid injection), characterized by spontaneous, nonconvulsive focal seizures. Using specific markers of lymphocytes, microglia, macrophages, and neutrophils in kainate-treated mice, we investigated with pharmacological and genetic approaches the contribution of innate and adaptive immunity to kainate-induced inflammation and neurodegeneration. Furthermore, we used EEG analysis in mutant mice lacking specific subsets of lymphocytes to explore the significance of inflammatory processes for epileptogenesis. Blood-brain barrier disruption and neurodegeneration in the kainate-lesioned hippocampus were accompanied by sustained ICAM-1 upregulation, microglial cell activation, and infiltration of CD3(+) T-cells. Moreover, macrophage infiltration was observed, selectively in the dentate gyrus where prominent granule cell dispersion was evident. Unexpectedly, depletion of peripheral macrophages by systemic clodronate liposome administration affected granule cell survival. Neurodegeneration was aggravated in kainate-lesioned mice lacking T- and B-cells (RAG1-knock-out), because of delayed invasion by Gr-1(+) neutrophils. Most strikingly, these mutant mice exhibited early onset of spontaneous recurrent seizures, suggesting a strong impact of immune-mediated responses on network excitability. Together, the concerted action of adaptive and innate immunity triggered locally by intrahippocampal kainate injection contributes seizure-suppressant and neuroprotective effects, shedding new light on neuroimmune interactions in temporal lobe epilepsy.
Abstract:
Stimulation of human epileptic tissue can induce rhythmic, self-terminating responses on the EEG or ECoG. These responses play a potentially important role in localising tissue involved in the generation of seizure activity, yet the underlying mechanisms are unknown. However, in vitro evidence suggests that self-terminating oscillations in nervous tissue are underpinned by non-trivial spatio-temporal dynamics in an excitable medium. In this study, we investigate this hypothesis in spatial extensions to a neural mass model for epileptiform dynamics. We demonstrate that spatial extensions to this model in one and two dimensions display propagating travelling waves but also more complex transient dynamics in response to local perturbations. The neural mass formulation with local excitatory and inhibitory circuits, allows the direct incorporation of spatially distributed, functional heterogeneities into the model. We show that such heterogeneities can lead to prolonged reverberating responses to a single pulse perturbation, depending upon the location at which the stimulus is delivered. This leads to the hypothesis that prolonged rhythmic responses to local stimulation in epileptogenic tissue result from repeated self-excitation of regions of tissue with diminished inhibitory capabilities. Combined with previous models of the dynamics of focal seizures this macroscopic framework is a first step towards an explicit spatial formulation of the concept of the epileptogenic zone. Ultimately, an improved understanding of the pathophysiologic mechanisms of the epileptogenic zone will help to improve diagnostic and therapeutic measures for treating epilepsy.
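To make the ingredients concrete (local excitatory/inhibitory circuits, nearest-neighbour coupling, a spatially localised heterogeneity with diminished inhibition, a single pulse perturbation), here is a deliberately crude 1-D rate-model lattice. It is not the neural mass formulation of the paper and its parameters are arbitrary; it only illustrates the kind of spatial simulation involved.

```python
import numpy as np

# Generic 1-D lattice of coupled excitatory/inhibitory rate units
# (a much simpler stand-in for the paper's spatially extended neural mass model),
# with a patch of weakened inhibition in the middle.
n, dt, steps = 100, 0.1, 400
f = lambda x: 1.0 / (1.0 + np.exp(-10 * (x - 0.5)))   # sigmoid firing-rate function

E = np.zeros(n)
I = np.zeros(n)
w_ei = np.full(n, 2.5)        # inhibitory feedback strength
w_ei[45:55] = 1.0             # local heterogeneity: diminished inhibition

def laplacian(x):
    # nearest-neighbour coupling with periodic boundaries, for simplicity
    return np.roll(x, 1) - 2 * x + np.roll(x, -1)

E[10] = 1.0                    # local pulse perturbation
for t in range(steps):
    exc_input = 3.0 * E - w_ei * I + 1.5 * laplacian(E)
    E += dt * (-E + f(exc_input))
    I += dt * 0.5 * (-I + f(2.0 * E))

print("coarse activity profile:", np.round(E[::10], 2))
```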
Abstract:
The means through which the nervous system perceives its environment is one of the most fascinating questions in contemporary science. Our endeavors to comprehend the principles of neural science provide an instance of how biological processes may inspire novel methods in mathematical modeling and engineering. The application of mathematical models towards understanding neural signals and systems represents a vibrant field of research that has spanned over half a century. During this period, multiple approaches to neuronal modeling have been adopted, and each approach is adept at elucidating a specific aspect of nervous system function. Thus while biophysical models have strived to comprehend the dynamics of actual physical processes occurring within a nerve cell, the phenomenological approach has conceived models that relate the ionic properties of nerve cells to transitions in neural activity. Furthermore, the field of neural networks has endeavored to explore how distributed parallel processing systems may become capable of storing memory. Through this project, we strive to explore how some of the insights gained from biophysical neuronal modeling may be incorporated within the field of neural networks. We specifically study the capabilities of a simple neural model, the Resonate-and-Fire (RAF) neuron, whose derivation is inspired by biophysical neural modeling. While reflecting further biological plausibility, the RAF neuron is also analytically tractable, and thus may be implemented within neural networks. In the following thesis, we provide a brief overview of the different approaches that have been adopted towards comprehending the properties of nerve cells, along with the framework under which our specific neuron model relates to the field of neuronal modeling. Subsequently, we explore some of the time-dependent neurocomputational capabilities of the RAF neuron, and we utilize the model to classify logic gates and solve the classic XOR problem. Finally we explore how the resonate-and-fire neuron may be implemented within neural networks, and how such a network could be adapted through the temporal backpropagation algorithm.
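A small sketch of the resonate-and-fire idea (Izhikevich-style subthreshold dynamics; the parameter values are illustrative, not those used in the thesis): the neuron's complex state rotates and decays, so whether two input pulses summate to threshold depends on their relative timing. This timing sensitivity is the kind of behaviour exploited for temporal logic-gate tasks such as XOR.

```python
import numpy as np

def raf_response(pulse_times, b=-0.1, omega=2 * np.pi, thresh=1.0,
                 dt=0.001, t_end=3.0, kick=0.6):
    """Resonate-and-fire neuron: the state is a complex number z with damped
    oscillatory subthreshold dynamics dz/dt = (b + i*omega) * z, input pulses
    kick the imaginary part, and a spike is emitted when Im(z) exceeds the
    threshold.  All parameter values here are illustrative."""
    z = 0.0 + 0.0j
    pulses = set(int(round(t / dt)) for t in pulse_times)
    for step in range(int(t_end / dt)):
        z += dt * (b + 1j * omega) * z        # forward-Euler integration
        if step in pulses:
            z += 1j * kick                    # input pulse
        if z.imag > thresh:
            return True                       # spike
    return False

period = 1.0                                  # = 2*pi / omega
print("pulses one period apart  ->", raf_response([0.5, 0.5 + period]))
print("pulses half a period apart ->", raf_response([0.5, 0.5 + period / 2]))
```

With these illustrative numbers, two pulses one subthreshold-oscillation period apart add constructively and produce a spike, while the same pulses half a period apart cancel and do not, which is the resonance property the abstract refers to.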