Abstract:
Background: Among grape skin polyphenols, trans-resveratrol (RES) has been reported to slow the development of cardiac fibrosis and to affect myofibroblast (MFB) differentiation. Because MFBs induce slow conduction and ectopic activity following heterocellular gap junctional coupling to cardiomyocytes, we investigated whether RES and its main metabolites affect arrhythmogenic cardiomyocyte-MFB interactions. Methods: Experiments were performed with patterned growth strands of neonatal rat ventricular cardiomyocytes coated with cardiac MFBs. Impulse propagation characteristics were measured optically using voltage-sensitive dyes. Long-term video recordings served to characterize drug-related effects on ectopic activity. Data are given as means ± S.D. (n = 4–20). Results: Exposure of pure cardiomyocyte strands to RES at concentrations up to 10 µmol/L had no significant effects on impulse conduction velocity (θ) and maximal action potential upstroke velocities (dV/dtmax). By contrast, in MFB-coated strands exhibiting slow conduction, RES enhanced θ with an EC50 of ~10 nmol/L from 226 ± 38 to 344 ± 24 mm/s and dV/dtmax from 48 ± 7 to 69 ± 2%APA/ms, i.e., to values of pure cardiomyocyte strands (347 ± 33 mm/s; 75 ± 4%APA/ms). Moreover, RES led to a reduction of ectopic activity over the course of several hours in heterocellular preparations. RES is metabolized quickly in the body; therefore, we tested the main known metabolites for functional effects and found them similarly effective in normalizing conduction with EC50s of ~10 nmol/L (3-OH-RES), ~20 nmol/L (RES-3-O-β-glucuronide) and ~10 nmol/L (RES-sulfate), respectively. At these concentrations, neither RES nor its metabolites had any effects on MFB morphology and α-smooth muscle actin expression. This suggests that the antiarrhythmic effects observed were based on mechanisms different from a change in MFB phenotype. Conclusions: The results demonstrate that RES counteracts MFB-dependent arrhythmogenic slow conduction and ectopic activity at physiologically relevant concentrations. Because RES is rapidly metabolized following intestinal absorption, the finding of equal antiarrhythmic effectiveness of the main RES metabolites warrants their inclusion in future studies of potentially beneficial effects of these substances on the heart.
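The EC50 values reported above are the kind of estimate obtained by fitting a Hill-type concentration-response curve to a measured quantity such as conduction velocity. The sketch below shows such a fit in Python; the concentrations, velocities, and starting values are illustrative assumptions, not data or analysis code from the study.

```python
# A minimal sketch, assuming made-up data: fitting a Hill-type
# concentration-response curve to recover an EC50, as reported above for
# the effect of RES on conduction velocity (theta).
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    """Sigmoidal concentration-response curve (Hill equation)."""
    return bottom + (top - bottom) * c**n / (ec50**n + c**n)

# Hypothetical concentrations (nmol/L) and conduction velocities (mm/s),
# chosen only to span the reported range of ~226 mm/s (MFB-coated,
# untreated) to ~344 mm/s (treated); NOT the study's measurements.
conc = np.array([0.1, 1.0, 3.0, 10.0, 30.0, 100.0, 1000.0])
theta = np.array([228.0, 245.0, 280.0, 300.0, 325.0, 338.0, 343.0])

params, _ = curve_fit(hill, conc, theta, p0=[226.0, 344.0, 10.0, 1.0])
print(f"estimated EC50 ~ {params[2]:.1f} nmol/L")
```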
Abstract:
Background: A controlled, gradual distraction of the periosteum is expected to result in the formation of new bone. Purpose: This study was designed to evaluate the possibility of new bone formation by periosteal distraction in a rat calvarium model. Material and Methods: Sixteen animals were subjected to a 7-day latency period and a distraction rate of 0.4 mm/24 hours for 10 days. Two experimental groups of seven rats each were killed after 10 and 20 days of consolidation, respectively, and analyzed by means of microcomputed tomography, histology, and histomorphometry. Results: In the central regions underneath the disc device, signs of both bone apposition and bone resorption were observed. Peripheral to the disc, new bone was consistently observed; this new bone was up to two and three times thicker than the original bone after the 10- and 20-day consolidation periods, respectively. Signs of ongoing woven bone formation indicated that the stimulus for new bone formation was still present. There were no statistically significant differences in bone density, bone volume, or total bone height between the two groups. Conclusion: Periosteal distraction in the rat calvarium can stimulate the formation of considerable amounts of new bone.
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, with reward delivered only at the last action, as is the case in any board game. The third task is the inspection game studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior which is consistent with behavioral data from humans and monkeys and which itself reveals properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
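The following is a minimal sketch, in Python, of the class of rule described above: a reward-modulated, spike-dependent update in which synaptic eligibility traces handle the temporal credit assignment and a shared population signal the spatial one. The network size, time constants, toy reward, and the exact form of the population term are assumptions of the sketch, not the authors' derivation.

```python
# Sketch only: reward-modulated plasticity with eligibility traces and a
# population feedback signal. All constants and the toy task are invented.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_inputs = 50, 20
w = rng.normal(0.0, 0.1, (n_neurons, n_inputs))
elig = np.zeros_like(w)            # synaptic eligibility traces
tau_e, eta = 20.0, 0.05            # trace time constant, learning rate
pop_avg = 0.0                      # running average of population activity

for t in range(2000):
    x = (rng.random(n_inputs) < 0.1).astype(float)   # presynaptic spikes
    p = 1.0 / (1.0 + np.exp(-(w @ x)))               # escape-noise spike prob.
    y = (rng.random(n_neurons) < p).astype(float)    # postsynaptic spikes
    pop = y.mean()                                   # population feedback signal
    pop_avg += 0.01 * (pop - pop_avg)
    # Eligibility: decaying memory of (spike - expected spike) x input,
    # the stochastic-gradient term of the spike probability.
    elig = (1.0 - 1.0 / tau_e) * elig + np.outer(y - p, x)
    # Toy reward: paid only when the population is sufficiently active.
    r = 1.0 if pop > 0.25 else 0.0
    # Plasticity gated jointly by the reward and by the deviation of the
    # population activity from its running mean (the shared feedback).
    w += eta * r * (pop - pop_avg) * elig
```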
Abstract:
Learning by reinforcement is important in shaping animal behavior, but behavioral decision making is likely to involve the integration of many synaptic events in space and time. Thus, when a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem arises: different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second involves an action sequence which is itself extended in time, with reward delivered only at the last action, as is the case in any board game. The third is the inspection game that has been studied in neuroeconomics. It has only a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and with the learning of mixed strategies.
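To make the inspection-game example concrete: in a 2x2 inspection game each player's best response cycles against the other's pure strategies, so the only equilibrium is mixed, with each player randomizing so as to leave the opponent indifferent. The payoffs below are illustrative assumptions, not those used in the paper.

```python
# Illustrative 2x2 inspection game and its mixed Nash equilibrium,
# computed from the standard indifference conditions. Payoffs are invented.
import numpy as np

# Rows: worker action (0 = work, 1 = shirk);
# columns: inspector action (0 = inspect, 1 = no inspection).
worker = np.array([[2.0, 2.0],     # work: wage minus effort either way
                   [0.0, 4.0]])    # shirk: caught vs. wage without effort
inspector = np.array([[1.0, 3.0],  # inspect: costly, but catches shirking
                      [2.0, 0.0]])

# Worker mixes with P(work) = q so the inspector is indifferent between
# inspecting and not inspecting.
q = (inspector[1, 1] - inspector[1, 0]) / (
    inspector[0, 0] - inspector[0, 1] - inspector[1, 0] + inspector[1, 1])
# Inspector mixes with P(inspect) = p so the worker is indifferent between
# working and shirking.
p = (worker[1, 1] - worker[0, 1]) / (
    worker[0, 0] - worker[0, 1] - worker[1, 0] + worker[1, 1])
print(f"mixed NE: P(work) = {q:.2f}, P(inspect) = {p:.2f}")
```

With these illustrative payoffs both probabilities come out at 0.5; a learning rule of the kind described above would be expected to hover around such values rather than converge to a pure strategy.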
Abstract:
We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In the cascade of these so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions, and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness not of the immediately preceding response, but of a decision made earlier on in the stimulus-decision sequence. The proposed model therefore does not rely on temporal contiguity between a decision and the pertinent reward and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks, such as sequential decision making, serve to highlight the robustness of the proposed scheme and, further, to contrast its performance with that of temporal difference based approaches to reinforcement learning.
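A minimal sketch of the trace cascade for a single readout neuron: presynaptic input is first correlated with the postsynaptic spike (stage 1), that trace is then correlated with the behavioral decision (stage 2), and the resulting trace is finally read out by the delayed reinforcement. The time constants, decision schedule, and toy reward below are assumptions, not values from the paper.

```python
# Sketch of a two-stage eligibility-trace cascade; all constants invented.
import numpy as np

rng = np.random.default_rng(2)
n_in = 30
w = rng.normal(0.0, 0.1, n_in)
e1 = np.zeros(n_in)        # stage 1: pre x post correlation
e2 = np.zeros(n_in)        # stage 2: stage-1 trace x behavioral decision
tau1, tau2, eta = 10.0, 200.0, 0.05
decision = 0.0

for t in range(5000):
    x = (rng.random(n_in) < 0.1).astype(float)       # presynaptic spikes
    p = 1.0 / (1.0 + np.exp(-(w @ x)))               # postsyn. spike prob.
    y = float(rng.random() < p)                      # postsynaptic spike
    e1 = (1 - 1 / tau1) * e1 + (y - p) * x           # correlate pre and post
    if t % 25 == 0:                                  # a decision is taken
        decision = 1.0 if y > 0 else -1.0            # toy binary decision
        e2 = (1 - 1 / tau2) * e2 + decision * e1     # correlate with decision
    if t % 100 == 99:                                # delayed reinforcement
        r = 1.0 if decision > 0 else 0.0             # toy reward rule
        w += eta * r * e2                            # read out by the reward
```

Because the reward reads out e2 rather than the instantaneous activity, the update does not depend on temporal contiguity between the rewarded decision and the reward itself, which is the point the abstract makes.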
Abstract:
The discovery of binary dendritic events such as local NMDA spikes in dendritic subbranches led to the suggestion that dendritic trees could be computationally equivalent to a 2-layer network of point neurons, with a single output unit represented by the soma, and input units represented by the dendritic branches. Although this interpretation endows a neuron with a high computational power, it is functionally not clear why nature would have preferred the dendritic solution with a single but complex neuron, as opposed to the network solution with many but simple units. We show that the dendritic solution has a distinct advantage over the network solution when considering different learning tasks. Its key property is that the dendritic branches receive an immediate feedback from the somatic output spike, while in the corresponding network architecture the feedback would require additional backpropagating connections to the input units. Assuming a reinforcement learning scenario, we formally derive a learning rule for the synaptic contacts on the individual dendritic trees which depends on the presynaptic activity, the local NMDA spikes, the somatic action potential, and a delayed reinforcement signal. We test the model in two scenarios: the learning of binary classifications and of precise spike timings. We show that the immediate feedback represented by the backpropagating action potential supplies the individual dendritic branches with enough information to efficiently adapt their synapses and to speed up the learning process.
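A hedged sketch of the two-layer picture described above: each dendritic branch passes its synaptic input through a sigmoidal "NMDA-spike" nonlinearity, the soma combines the branch outputs, and each branch updates its synapses from its presynaptic input, its local NMDA spike, the immediately backpropagated somatic spike, and a delayed reward. Sizes, nonlinearities, and the toy task are assumptions of this sketch, not the paper's exact model or derived rule.

```python
# Sketch of a stochastic two-layer "dendritic" neuron with a branch-local,
# reward-gated plasticity rule; all parameters and the task are invented.
import numpy as np

rng = np.random.default_rng(3)
n_branches, n_syn = 5, 8
w = rng.normal(0.0, 0.3, (n_branches, n_syn))   # synapses on each branch
v = np.ones(n_branches)                          # fixed branch-soma couplings
eta = 0.1

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for trial in range(3000):
    x = (rng.random((n_branches, n_syn)) < 0.3).astype(float)  # inputs
    b = sigmoid((w * x).sum(axis=1) - 1.0)       # branch NMDA-spike prob.
    nmda = (rng.random(n_branches) < b).astype(float)  # local NMDA spikes
    p_soma = sigmoid(v @ nmda - 2.0)             # somatic spike probability
    y = float(rng.random() < p_soma)             # somatic action potential
    target = float(x.mean() > 0.3)               # toy binary classification
    r = 1.0 if y == target else -1.0             # delayed reward signal
    # Branch-local update: presynaptic input x local NMDA-spike "surprise",
    # gated by the somatic spike (the immediate feedback) and the reward.
    w += eta * r * (y - p_soma) * (nmda - b)[:, None] * x
```

The design point matches the abstract: the term (y - p_soma) is available to every branch as soon as the soma fires, whereas a network of separate point neurons would need explicit feedback connections to deliver the same information to its input units.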