55 results for Spatio-temporal analysis

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

In multivariate time series analysis, the equal-time cross-correlation is a classic and computationally efficient measure for quantifying linear interrelations between data channels. When the cross-correlation coefficient is estimated from a finite number of data points, its non-random part may be strongly contaminated by a sizable random contribution, such that no reliable conclusion can be drawn about genuine mutual interdependencies. The random correlations are determined by the signals' frequency content and the number of data points used. Here, we introduce adjusted correlation matrices that can be employed to disentangle random from non-random contributions to each matrix element, independently of the signal frequencies. Extending our previous work, these matrices allow analyzing spatial patterns of genuine cross-correlation in multivariate data regardless of confounding influences. The performance is illustrated using model systems with known interdependence patterns. Finally, we apply the methods to electroencephalographic (EEG) data with epileptic seizure activity.
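As a rough illustration of the kind of analysis described above, the sketch below estimates the equal-time cross-correlation matrix of a multichannel signal and, as a stand-in for the adjusted correlation matrices (whose exact construction is not given here), compares each entry against a surrogate-based estimate of the chance-correlation level obtained by circularly shifting the channels, which preserves each signal's frequency content while destroying genuine interrelations. All function names and parameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): equal-time cross-correlation
# matrix plus a surrogate-based estimate of the purely random contribution.
import numpy as np

def equal_time_correlation(x):
    """x: array of shape (channels, samples); returns the correlation matrix."""
    return np.corrcoef(x)

def random_correlation_level(x, n_surrogates=100, seed=None):
    """Estimate typical chance-correlation magnitudes by applying independent
    random circular shifts to each channel (frequency content is preserved,
    genuine interdependencies are destroyed)."""
    rng = np.random.default_rng(seed)
    n_ch, n_s = x.shape
    levels = np.zeros((n_surrogates, n_ch, n_ch))
    for k in range(n_surrogates):
        shifted = np.vstack([np.roll(x[i], rng.integers(n_s)) for i in range(n_ch)])
        levels[k] = np.abs(np.corrcoef(shifted))
    return levels.mean(axis=0)

# Example: two weakly coupled noisy channels and one independent channel.
rng = np.random.default_rng(0)
common = rng.standard_normal(2000)
data = np.vstack([common + rng.standard_normal(2000),
                  common + rng.standard_normal(2000),
                  rng.standard_normal(2000)])
print(np.round(equal_time_correlation(data), 2))
print(np.round(random_correlation_level(data), 2))  # entries well above this level suggest genuine coupling
```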

Relevance:

100.00%

Publisher:

Abstract:

Brain electric activity is viewed as sequences of momentary maps of potential distribution. Frequency-domain source modeling, estimation of the complexity of the trajectory of the mapped brain field distributions in state space, and microstate parsing were used as analysis tools. Both input-presentation and task-free (spontaneous thought) data collection paradigms were employed. We found: Alpha EEG field strength is more affected by visualizing mentation than by abstract mentation, both input-driven and self-generated. The electric generators of different temporal frequencies of the brain field involve different neuronal populations and brain locations. Different alpha frequencies execute different brain functions, as revealed by canonical correlations with mentation profiles. Different modes of mentation engage the same temporal frequencies at different brain locations. The basic structure of alpha electric fields implies inhomogeneity over time: alpha consists of concatenated global microstates in the sub-second range, characterized by quasi-stable field topographies and rapid transitions between them. In general, brain activity is strongly discontinuous, indicating that parsing into field landscape-defined microstates is appropriate. Different modes of spontaneous and induced mentation are associated with different brain electric microstates; these are proposed as candidates for psychophysiological "atoms of thought".
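For readers unfamiliar with microstate parsing, the sketch below illustrates one common approach, not necessarily the authors' exact pipeline, which is not specified here: compute the global field power (GFP) of the multichannel EEG, cluster the map topographies at GFP peaks into a few template maps, and label every time point with the best-fitting template. The use of k-means and of four templates is an illustrative assumption.

```python
# Minimal sketch of GFP-based microstate parsing (illustrative, not the authors' method).
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def global_field_power(eeg):
    """eeg: (channels, samples); GFP is the spatial standard deviation per sample."""
    return eeg.std(axis=0)

def parse_microstates(eeg, n_states=4):
    gfp = global_field_power(eeg)
    peaks, _ = find_peaks(gfp)                    # moments of maximal field strength
    maps = eeg[:, peaks].T                        # one topography per GFP peak
    maps /= np.linalg.norm(maps, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(maps)
    templates = km.cluster_centers_
    # Assign every sample to the template with the highest absolute spatial correlation.
    norm_eeg = eeg / np.linalg.norm(eeg, axis=0, keepdims=True)
    labels = np.abs(templates @ norm_eeg).argmax(axis=0)
    return labels, templates

# Demo on random data just to show the shapes involved.
labels, templates = parse_microstates(np.random.default_rng(1).standard_normal((32, 5000)))
print(labels[:20], templates.shape)
```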

Relevance:

100.00%

Publisher:

Abstract:

Modern mixed alluvial-bedrock channels in mountainous areas provide natural laboratories for understanding the time scales over which coarse-grained material has been entrained and transported from its sources to the adjacent sedimentary sink, where these deposits are preserved as conglomerates. This article assesses the shear stress conditions needed for the entrainment of the coarse-bed particles in the Glogn River, which drains the 400 km² Val Lumnezia basin, eastern Swiss Alps. In addition, quantitative data are presented on sediment transport patterns in this stream. The longitudinal profile of this river is characterized by three ca 500 m long knickzones where channel gradients range from 0.02 to 0.2 m m⁻¹, and where the valley bottom is confined to a <10 m wide gorge. Downstream of these knickzones, the stream is flat, with gradients <0.01 m m⁻¹ and widths ≥30 m. Measurements of the grain-size distribution along the trunk stream yield a mean D84 value of ca 270 mm, whereas the mean D50 is ca 100 mm. The consequences of the channel morphology and the grain-size distribution for the time scales of sediment transport were explored using a one-dimensional step-backwater hydraulic model (Hydrologic Engineering Centre – River Analysis System). The results reveal that, along the entire trunk stream, a flood event with a two to ten year return period is capable of mobilizing both the D50 and D84 fractions where the Shields stress exceeds the critical Shields stress for the initiation of particle motion. These return periods, however, varied substantially depending on the channel geometry and the pebble/boulder size distribution of the supplied material. Accordingly, the stream exhibits highly dynamic boulder cover behaviour. These time scales are likely to have been at work when coarse-grained conglomerates were constructed in the geological past.
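The entrainment criterion referred to above can be made concrete with a short sketch: compute the dimensionless Shields stress from the depth-slope product and compare it against a critical value. The water depth, gradient, and critical Shields stress of 0.047 used below are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of the Shields entrainment criterion (illustrative numbers only).
RHO_W = 1000.0    # water density (kg/m^3)
RHO_S = 2650.0    # sediment density (kg/m^3)
G = 9.81          # gravitational acceleration (m/s^2)

def shields_stress(depth_m, slope, grain_d_m):
    """Dimensionless Shields stress from the depth-slope product tau = rho*g*h*S."""
    tau = RHO_W * G * depth_m * slope
    return tau / ((RHO_S - RHO_W) * G * grain_d_m)

def is_mobile(depth_m, slope, grain_d_m, critical=0.047):
    """A grain fraction is taken as mobile where the Shields stress exceeds a
    critical value (0.047 is a common literature value, assumed here)."""
    return shields_stress(depth_m, slope, grain_d_m) > critical

# e.g. a 2 m deep flow on a 0.01 gradient acting on grains of ~0.10 m and ~0.27 m
for d in (0.10, 0.27):
    print(d, round(shields_stress(2.0, 0.01, d), 3), is_mobile(2.0, 0.01, d))
```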

Relevance:

100.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game studied in neuroeconomics, in which an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior that is consistent with behavioral data from humans and monkeys and reveals properties of a mixed Nash equilibrium. These examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
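A highly simplified sketch of this style of learning rule is given below. It keeps a per-synapse eligibility trace of pre/post spike coincidences and applies a weight change only when the (possibly delayed) reward arrives, gated by a population feedback term. The specific trace dynamics, constants, and the form of the feedback signal are illustrative assumptions, not the authors' equations.

```python
# Toy sketch of reward-modulated plasticity with eligibility traces and
# population feedback (illustrative only, not the paper's learning rule).
import numpy as np

def update_traces(traces, pre_spikes, post_spikes, decay=0.9):
    """Decay existing traces and add coincident pre/post activity (outer product)."""
    return decay * traces + np.outer(post_spikes, pre_spikes)

def apply_reward(weights, traces, reward, population_feedback, lr=0.01):
    """At reward delivery, change weights in proportion to the stored traces,
    modulated by the reward and the population feedback signal."""
    return weights + lr * reward * population_feedback * traces

rng = np.random.default_rng(0)
n_post, n_pre = 5, 20
weights = rng.normal(0, 0.1, (n_post, n_pre))
traces = np.zeros_like(weights)
for t in range(50):                              # one trial of 50 time steps
    pre = (rng.random(n_pre) < 0.1).astype(float)
    post = (rng.random(n_post) < 0.1).astype(float)
    traces = update_traces(traces, pre, post)
reward = 1.0                                     # delivered only at the end of the trial
population_feedback = post.mean() - 0.1          # deviation from the expected population activity (assumed form)
weights = apply_reward(weights, traces, reward, population_feedback)
```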

Relevance:

100.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior, but behavioral decision making is likely to involve the integration of many synaptic events in space and time. When a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem therefore arises: different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It has only a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
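Since the inspection game is characterized above as having only a mixed Nash equilibrium, the sketch below computes that equilibrium for a generic 2x2 inspection game, where each player mixes so as to make the opponent indifferent between actions. The payoff values are illustrative assumptions, not those used in the study.

```python
# Mixed Nash equilibrium of a 2x2 game via the indifference conditions
# (illustrative payoffs, not the study's parameterization).
import numpy as np

def mixed_nash_2x2(A, B):
    """A: row player's payoffs, B: column player's payoffs (both 2x2).
    Returns (p, q): probability of the first row / first column at equilibrium."""
    # The row player's mix p makes the column player indifferent between columns.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    # The column player's mix q makes the row player indifferent between rows.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    return p, q

# Worker (rows: work, shirk) vs inspector (columns: inspect, no inspect).
worker    = np.array([[ 2.0,  2.0],    # working pays the wage minus effort either way
                      [ 0.0,  3.0]])   # shirking pays off only if not inspected
inspector = np.array([[-1.0,  0.0],    # inspecting is costly
                      [ 1.0, -2.0]])   # catching a shirker pays; missing one is worst
print(mixed_nash_2x2(worker, inspector))  # no pure equilibrium exists for these payoffs
```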

Relevance:

100.00%

Publisher:

Abstract:

We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In the cascade of these so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions, and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness, not of the immediately preceding response, but of a decision made earlier in the stimulus-decision sequence. The proposed model therefore does not rely on temporal contiguity between decision and pertinent reward and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks, such as sequential decision making, serve to highlight the robustness of the proposed scheme and, further, to contrast its performance with that of temporal difference based approaches to reinforcement learning.
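The cascade idea can be sketched as three chained traces per synapse: the first tracks pre/post coincidences, the second is gated by the behavioral decision, and the third, gated by the reinforcement, drives the weight change. The code below is an illustrative toy version of such a scheme, not the authors' equations; all constants and gating signals are assumptions.

```python
# Toy cascade of synaptic eligibility traces (illustrative only).
import numpy as np

class CascadeSynapses:
    def __init__(self, n_post, n_pre, decays=(0.9, 0.95, 0.99), lr=0.01):
        self.w = np.zeros((n_post, n_pre))
        self.e1 = np.zeros_like(self.w)   # stage 1: pre/post correlation trace
        self.e2 = np.zeros_like(self.w)   # stage 2: decision-gated trace
        self.e3 = np.zeros_like(self.w)   # stage 3: reinforcement-gated trace
        self.decays, self.lr = decays, lr

    def step(self, pre, post, decision_signal, reward):
        d1, d2, d3 = self.decays
        self.e1 = d1 * self.e1 + np.outer(post, pre)        # correlate pre with post
        self.e2 = d2 * self.e2 + decision_signal * self.e1  # correlate with the decision
        self.e3 = d3 * self.e3 + reward * self.e2           # correlate with reinforcement
        self.w += self.lr * self.e3                         # plasticity follows the last stage

rng = np.random.default_rng(0)
syn = CascadeSynapses(4, 16)
for t in range(200):
    pre = (rng.random(16) < 0.1).astype(float)
    post = (rng.random(4) < 0.1).astype(float)
    decision = float(t % 20 == 10)          # a behavioral decision every 20 steps
    reward = float(t % 20 == 19)            # reward delivered 9 steps after the decision
    syn.step(pre, post, decision, reward)
```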

Relevance:

100.00%

Publisher:

Abstract:

In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions, and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme with that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.