873 results for psychostimulant agent
Abstract:
Fencamfamine (FCF) is a psychostimulant classified as an indirect dopamine agonist. The conditioned place preference (CPP) paradigm was used to investigate the reinforcing properties of FCF. After initial preferences had been determined, animals were conditioned with FCF (1.75, 3.5, or 7.0 mg/kg; IP). Only at the dose of 3.5 mg/kg did FCF produce a significant place preference. Pretreatment with SCH23390 (0.05 mg/kg; SC) or naloxone (1.0 mg/kg; SC) 10 min before FCF (3.5 mg/kg; IP) blocked both FCF-induced hyperactivity and CPP. Pretreatment with metoclopramide (10.0 mg/kg; IP) or pimozide (1.0 mg/kg; IP), 30 min or 4 h before FCF (3.5 mg/kg; IP), respectively, blocked FCF-induced locomotor activity but failed to influence place conditioning produced by FCF. In conclusion, the present study suggests that dopamine D1 and opioid receptors are involved in the reinforcing effect of FCF, whereas blockade of dopamine D2 receptors was ineffective in modifying FCF-induced CPP.
Abstract:
The New Zealand green-lipped mussel preparation Lyprinol is available without a prescription from supermarkets, pharmacies, or the Web. The Food and Drug Administration has recently warned Lyprinol USA about the extravagant anti-inflammatory claims for Lyprinol appearing on the Web. These claims are reviewed thoroughly here. Lyprinol does have anti-inflammatory mechanisms and has anti-inflammatory effects in some animal models of inflammation. Lyprinol may have benefits in dogs with arthritis. There are design problems with the clinical trials of Lyprinol in humans as an anti-inflammatory agent in osteoarthritis and rheumatoid arthritis, making it difficult to give a definite answer as to how effective Lyprinol is in these conditions, but any benefit is small. Lyprinol also has a small benefit in atopic allergy. As anti-inflammatory agents, there is little to choose between Lyprinol and fish oil. No adverse effects have been reported with Lyprinol. Thus, although it is difficult to conclude whether Lyprinol does much good, it can be concluded that Lyprinol probably does no major harm.
Abstract:
The load–frequency control (LFC) problem has long been one of the major subjects in power system operation. In practice, LFC systems use proportional–integral (PI) controllers. However, since these controllers are designed using a linear model, the non-linearities of the system are not accounted for, and they cannot achieve good dynamic performance over a wide range of operating conditions in a multi-area power system. A strategy for solving this problem, motivated by the distributed nature of a multi-area power system, is presented using a multi-agent reinforcement learning (MARL) approach. It consists of two agents in each power area: the estimator agent provides the area control error (ACE) signal based on frequency bias estimation, and the controller agent uses reinforcement learning to control the power system, with genetic algorithm optimisation used to tune its parameters. This method does not depend on any knowledge of the system and admits considerable flexibility in defining the control objective. Moreover, computing the ACE signal from the estimated frequency bias improves LFC performance, and using MARL enables parallel computation, leading to a high degree of scalability. Here, to illustrate the accuracy of the proposed approach, a three-area power system example is given with two scenarios.
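The two-agent structure described in this abstract can be sketched in Python. The class names, state discretisation, and reward are illustrative assumptions, not the paper's implementation; only the ACE formula (tie-line power deviation plus frequency bias times frequency deviation) and the use of tabular reinforcement learning follow the text.

```python
# Hedged sketch of a per-area estimator/controller agent pair; details
# beyond ACE = dP_tie + B * df and Q-learning are invented for illustration.
import random

class EstimatorAgent:
    """Produces the area control error (ACE) from an estimated frequency bias."""
    def __init__(self, bias_estimate):
        self.beta = bias_estimate  # estimated frequency bias B (MW/Hz)

    def ace(self, delta_f, delta_p_tie):
        # Standard ACE definition: tie-line deviation plus B * frequency deviation
        return delta_p_tie + self.beta * delta_f

class ControllerAgent:
    """Tabular Q-learning controller acting on a discretised ACE signal."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}              # Q-values keyed by (state, action)
        self.actions = actions   # candidate setpoint adjustments
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def discretise(self, ace, step=0.05):
        # Map the continuous ACE signal onto integer state bins
        return round(ace / step)

    def choose(self, state):
        # Epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In the paper's scheme the Q-learning hyperparameters would be tuned by a genetic algorithm rather than fixed as above, and one such agent pair would run in each of the three areas.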
Abstract:
An adaptive agent improves its performance by learning from experience. This paper describes an approach to adaptation based on modelling dynamic elements of the environment in order to predict likely future states. This approach is akin to an elite sports player being able to "read the play", allowing decisions to be made based on predictions of likely future outcomes. Modelling of the agent's likely future state is performed using Markov chains and a technique called "Motion and Occupancy Grids". The experiments in this paper compare the performance of the planning system with and without this predictive model. The results of the study demonstrate a surprising decrease in performance when the predictions of agent occupancy are used. The results are derived from statistical analysis of the agent's performance in a high-fidelity simulation of a world-leading real robot soccer team.
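The Markov-chain prediction idea in this abstract can be illustrated with a minimal sketch: a probability distribution over grid cells is propagated forward through a transition matrix to predict where the opponent is likely to be. The 3-cell world and the transition probabilities below are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of Markov-chain occupancy prediction over grid cells;
# the world and transition matrix are illustrative assumptions.
def predict_occupancy(occupancy, transition, steps=1):
    """Propagate a cell-occupancy distribution forward through a Markov chain.

    occupancy: list of probabilities, one per grid cell (sums to 1).
    transition[i][j]: probability the tracked agent moves from cell i to cell j.
    """
    for _ in range(steps):
        occupancy = [
            sum(occupancy[i] * transition[i][j] for i in range(len(occupancy)))
            for j in range(len(transition[0]))
        ]
    return occupancy

# Agent currently known to be in cell 0; it tends to move rightwards.
P = [[0.2, 0.8, 0.0],
     [0.0, 0.2, 0.8],
     [0.0, 0.0, 1.0]]
one_step = predict_occupancy([1.0, 0.0, 0.0], P)      # [0.2, 0.8, 0.0]
two_step = predict_occupancy([1.0, 0.0, 0.0], P, 2)   # ~[0.04, 0.32, 0.64]
```

A planner would then treat cells with high predicted occupancy as likely to be contested, which is the "read the play" intuition; the paper's surprising finding is that using such predictions actually reduced performance in its simulated soccer setting.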