32 results for Olympic games.
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Objectives: Despite many reports on best practices regarding onsite psychological services, little research has systematically explored the frequency, issues, nature and client groups of onsite sport psychology consultancy at the Olympic Games. The present paper fills this gap through a systematic analysis of the sport psychology consultancy of the Swiss team at the Olympic Games of 2006 in Turin, 2008 in Beijing and 2010 in Vancouver.
Design: Descriptive research design.
Methods: The day reports of the official sport psychologist were analysed. Intervention issues were labelled using categories derived from previous research and divided into four intervention-issue dimensions: "general performance", "specific Olympic performance", "organisational" and "personal" issues. Data were analysed using descriptive statistics, chi-square statistics and odds ratios.
Results: Across the Olympic Games, between 11% and 25% of the Swiss delegation used the sport psychology services. On average, the sport psychologist provided between 2.1 and 4.6 interventions per day. Around 50% of the interventions were informal, and around 30% of the clients were coaches. The most commonly addressed issues were performance related. An association was observed between previous collaboration, intervention likelihood and intervention theme.
Conclusions: Sport psychologists working at the Olympic Games are fully engaged with daily interventions and should ideally have developed long-term relationships with clients in order to truly help athletes with general performance issues. Critical incidents, working with coaches, brief contact interventions and team conflicts are specific features of onsite consultancy. Practitioners should be trained to deal with these sorts of challenges.
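The methods section mentions chi-square statistics and odds ratios for association tests. A minimal sketch of both computations on a 2x2 contingency table — the counts below are purely hypothetical for illustration, not the paper's data:

```python
# Hypothetical 2x2 table (illustrative counts, not from the study):
# rows: previous collaboration yes/no; columns: used onsite service yes/no
a, b = 30, 10   # prior collaboration: used / did not use
c, d = 15, 25   # no prior collaboration: used / did not use

# Odds ratio: odds of service use with vs. without prior collaboration
odds_ratio = (a * d) / (b * c)  # (30*25)/(10*15) = 5.0

# Pearson chi-square statistic for the same table
n = a + b + c + d
observed = [[a, b], [c, d]]
expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
            [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))
```

An odds ratio above 1 here would indicate that clients with prior collaboration were more likely to use the service, which is the kind of association the Results paragraph reports.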
Abstract:
Humans and animals face decision tasks in uncertain multi-agent environments, where an agent's strategy may change over time as others' strategies co-adapt. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
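The spiking-population model itself is beyond a short snippet, but the score-function (REINFORCE-style) policy-gradient rule that such approaches follow can be sketched in plain Python. This is a minimal non-spiking sketch for a zero-sum two-player game (matching pennies), whose only Nash equilibrium is the mixed strategy of playing each action with probability 0.5; the payoffs and parameter names are illustrative assumptions, not the authors' model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def play_matching_pennies(steps=20000, lr=0.01, seed=0):
    """Two score-function policy-gradient learners in matching pennies.

    Each agent keeps one logit theta with P(play 1) = sigmoid(theta).
    The update theta += lr * reward * (action - p) is an unbiased sample
    of the reward gradient, since d/dtheta log pi(a) = a - p for a
    Bernoulli policy. Returns agent A's empirical frequency of action 1.
    """
    rng = random.Random(seed)
    theta_a = theta_b = 0.0
    ones_a = 0
    for _ in range(steps):
        p_a, p_b = sigmoid(theta_a), sigmoid(theta_b)
        a = 1 if rng.random() < p_a else 0
        b = 1 if rng.random() < p_b else 0
        r_a = 1.0 if a == b else -1.0    # A is rewarded for matching
        r_b = -r_a                       # B for mismatching (zero-sum)
        theta_a += lr * r_a * (a - p_a)  # REINFORCE update
        theta_b += lr * r_b * (b - p_b)
        ones_a += a
    return ones_a / steps

freq = play_matching_pennies()
# Empirical play hovers around the mixed Nash equilibrium of 0.5
```

Because the expected gradient vanishes exactly at the mixed equilibrium, stochastic gradient play circles around it rather than locking onto a deterministic action — the property that, per the abstract, separates gradient-following learners from methods that fail on stochastic strategies.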