Abstract:
In ovariectomized rats, administration of estradiol, or of selective estrogen receptor agonists that activate either the alpha or beta isoform, has been shown to enhance spatial cognition on a variety of learning and memory tasks, including those that capitalize on the preference of rats to seek out novelty. Although the effects of the putative estrogen G-protein-coupled receptor 30 (GPR30) on hippocampus-based tasks have been reported using food-motivated tasks, the effects of activating GPR30 receptors on tasks that depend on the preference of rats to seek out spatial novelty remain to be determined. Therefore, the aim of the current study was to determine whether short-term treatment of ovariectomized rats with G-1, an agonist for GPR30, would mimic the effects on spatial recognition memory observed following short-term estradiol treatment. In Experiment 1, ovariectomized rats treated with a low dose (1 μg) of estradiol 48 h and 24 h prior to the information trial of a Y-maze task exhibited a preference for the arm associated with the novel environment on the retention trial conducted 48 h later. In Experiment 2, treatment of ovariectomized rats with G-1 (25 μg) 48 h and 24 h prior to the information trial of a Y-maze task resulted in a greater preference for the arm associated with the novel environment on the retention trial. Collectively, the results indicate that short-term treatment of ovariectomized rats with a GPR30 agonist was sufficient to enhance spatial recognition memory, an effect that also occurred following short-term treatment with a low dose of estradiol.
Abstract:
The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by imposing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which each weighting factor is set to the inverse of the magnitude of the corresponding parameter estimate, a quantity readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, as it exploits known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
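The core idea above — approximating the l1 penalty by an adaptively reweighted l2 penalty so that each update has a closed-form ridge-like solution — can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact ZA-RLS-I/II recursions: the function name `za_rls_estimate`, the regularization constant `gamma`, the smoothing term `eps`, and the direct matrix solve (instead of the efficient recursive inverse update) are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def za_rls_estimate(X, d, lam=0.99, gamma=0.01, eps=1e-4):
    """Sparse parameter estimation via exponentially weighted least squares
    with a zero-attracting, adaptively reweighted l2 penalty.

    The l1 norm ||theta||_1 is approximated by theta^T D theta with
    D = diag(1 / (|theta_i| + eps)), so each step solves a linear system
    in closed form (a simplified stand-in for the ZA-RLS recursions).
    """
    n, p = X.shape
    theta = np.zeros(p)
    R = 1e-3 * np.eye(p)   # regularized autocorrelation matrix
    r = np.zeros(p)        # cross-correlation vector
    for k in range(n):
        x = X[k]
        R = lam * R + np.outer(x, x)      # forgetting-factor update
        r = lam * r + d[k] * x
        # Adaptive weights: inverse magnitudes of the current estimates,
        # so near-zero taps are strongly attracted to zero.
        D = np.diag(1.0 / (np.abs(theta) + eps))
        theta = np.linalg.solve(R + gamma * D, r)
    return theta

# Sparse channel estimation demo: 16 taps, only 3 nonzero.
p = 16
h = np.zeros(p)
h[[2, 7, 11]] = [1.0, -0.5, 0.3]
X = rng.standard_normal((400, p))
d = X @ h + 0.01 * rng.standard_normal(400)

h_hat = za_rls_estimate(X, d)
print(np.round(h_hat, 2))
```

Because the weights 1/(|theta_i| + eps) are large for small coefficients, the penalty shrinks inactive taps toward zero while leaving the dominant taps essentially unbiased, which is what makes the scheme suitable for sparse channel estimation.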