17 results for Humans rights
in Cambridge University Engineering Department Publications Database
Abstract:
The nature of the relationship between information technology (IT) and organisations has been a long-standing debate in the Information Systems literature. Does IT shape organisations, or do people in organisations control how IT is used? To formulate the question a little differently: does agency (the capacity to make a difference) lie predominantly with machines (computer systems) or humans (organisational actors)? Many proposals for a middle way between the extremes of technological and social determinism have been put forward; in recent years researchers oriented towards social theories have focused on structuration theory and (lately) actor network theory. These two theories, however, adopt different and incompatible views of agency. Thus, structuration theory sees agency as exclusively a property of humans, whereas the principle of general symmetry in actor network theory implies that machines may also be agents. Drawing on critiques of both structuration theory and actor network theory, this paper develops a theoretical account of the interaction between human and machine agency: the double dance of agency. The account seeks to contribute to theorisation of the relationship between technology and organisation by recognising both the different character of human and machine agency, and the emergent properties of their interplay.
Abstract:
This study investigated the neuromuscular mechanisms underlying the initial stage of adaptation to novel dynamics. A destabilizing velocity-dependent force field (VF) was introduced for sets of three consecutive trials. Between sets a random number of 4-8 null field trials were interposed, where the VF was inactivated. This prevented subjects from learning the novel dynamics, making it possible to repeatedly recreate the initial adaptive response. We were able to investigate detailed changes in neural control between the first, second and third VF trials. We identified two feedforward control mechanisms, which were initiated on the second VF trial and resulted in a 50% reduction in the hand path error. Responses to disturbances encountered on the first VF trial were feedback in nature, i.e. reflexes and voluntary correction of errors. However, on the second VF trial, muscle activation patterns were modified in anticipation of the effects of the force field. Feedforward cocontraction of all muscles was used to increase the viscoelastic impedance of the arm. While stiffening the arm, subjects also exerted a lateral force to counteract the perturbing effect of the force field. These anticipatory actions indicate that the central nervous system responds rapidly to counteract hitherto unfamiliar disturbances by a combination of increased viscoelastic impedance and formation of a crude internal dynamics model.
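For orientation, here is a minimal sketch of a velocity-dependent force field of the general kind described above, in which the perturbing force is a linear function of hand velocity (F = B·v). The gain matrix B is an illustrative assumption, not the field used in the study.

```python
import numpy as np

# Illustrative velocity-dependent force field (VF): force = B @ velocity.
# The gain matrix below is hypothetical, not the one applied in the experiment.
B = np.array([[0.0, 13.0],
              [-13.0, 0.0]])        # N*s/m, assumed curl-type gains

def vf_force(hand_velocity):
    """Perturbing force (N) for a given hand velocity (m/s)."""
    return B @ hand_velocity

# A forward reach at 0.5 m/s is pushed sideways, destabilising the movement.
print(vf_force(np.array([0.0, 0.5])))   # -> [6.5 0.]
```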
Abstract:
The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences, we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making.
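A minimal sketch of the hybrid valuation idea the abstract describes, assuming choice values are a weighted mixture of model-based and model-free estimates passed through a softmax; the weight and values below are illustrative placeholders, not the parameters estimated in the study.

```python
import numpy as np

def hybrid_values(q_mb, q_mf, w=0.5):
    """Weighted mixture of model-based (q_mb) and model-free (q_mf) values."""
    return w * np.asarray(q_mb) + (1.0 - w) * np.asarray(q_mf)

def softmax(q, beta=3.0):
    """Choice probabilities with inverse temperature beta."""
    z = beta * np.asarray(q)
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Illustrative values only; w would normally be fitted to choice behaviour.
q = hybrid_values([0.6, 0.4], [0.2, 0.7], w=0.5)
print(softmax(q))
```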
Abstract:
The desire to seek new and unfamiliar experiences is a fundamental behavioral tendency in humans and other species. In economic decision making, novelty seeking is often rational, insofar as uncertain options may prove valuable and advantageous in the long run. Here, we show that, even when the degree of perceptual familiarity of an option is unrelated to choice outcome, novelty nevertheless drives choice behavior. Using functional magnetic resonance imaging (fMRI), we show that this behavior is specifically associated with striatal activity, in a manner consistent with computational accounts of decision making under uncertainty. Furthermore, this activity predicts interindividual differences in susceptibility to novelty. These data indicate that the brain uses perceptual novelty to approximate choice uncertainty in decision making, which in certain contexts gives rise to a newly identified and quantifiable source of human irrationality.
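One simple way to make the "novelty drives choice" idea concrete is an exploration bonus that decays with familiarity. The bonus form and parameter below are assumptions for illustration, not the computational model used in the paper.

```python
def effective_value(learned_value, times_seen, bonus_weight=0.3):
    """Learned value plus a novelty bonus that fades as an option becomes familiar."""
    novelty_bonus = bonus_weight / (1.0 + times_seen)
    return learned_value + novelty_bonus

print(effective_value(0.5, times_seen=0))    # novel option: boosted value
print(effective_value(0.5, times_seen=20))   # familiar option: almost no boost
```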
Abstract:
Humans, like other animals, alter their behavior depending on whether a threat is close or distant. We investigated spatial imminence of threat by developing an active avoidance paradigm in which volunteers were pursued through a maze by a virtual predator endowed with an ability to chase, capture, and inflict pain. Using functional magnetic resonance imaging, we found that as the virtual predator grew closer, brain activity shifted from the ventromedial prefrontal cortex to the periaqueductal gray. This shift showed maximal expression when a high degree of pain was anticipated. Moreover, imminence-driven periaqueductal gray activity correlated with increased subjective degree of dread and decreased confidence of escape. Our findings cast light on the neural dynamics of threat anticipation and have implications for the neurobiology of human anxiety-related disorders.
Abstract:
Theories of instrumental learning are centred on understanding how success and failure are used to improve future decisions. These theories highlight a central role for reward prediction errors in updating the values associated with available actions. In animals, substantial evidence indicates that the neurotransmitter dopamine might have a key function in this type of learning, through its ability to modulate cortico-striatal synaptic efficacy. However, no direct evidence links dopamine, striatal activity and behavioural choice in humans. Here we show that, during instrumental learning, the magnitude of reward prediction error expressed in the striatum is modulated by the administration of drugs enhancing (3,4-dihydroxy-L-phenylalanine; L-DOPA) or reducing (haloperidol) dopaminergic function. Accordingly, subjects treated with L-DOPA have a greater propensity to choose the most rewarding action relative to subjects treated with haloperidol. Furthermore, incorporating the magnitude of the prediction errors into a standard action-value learning algorithm accurately reproduced subjects' behavioural choices under the different drug conditions. We conclude that dopamine-dependent modulation of striatal activity can account for how the human brain uses reward prediction errors to improve future decisions.
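A minimal sketch of the kind of standard action-value learning algorithm the abstract refers to: a prediction-error update with softmax choice, with an extra gain on positive prediction errors standing in schematically for the drug manipulation. All parameter values are illustrative, not fitted to the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_q(q, action, reward, alpha=0.3, pe_gain=1.0):
    """Prediction-error update; pe_gain scales positive errors (schematic drug effect)."""
    pe = reward - q[action]
    if pe > 0:
        pe *= pe_gain                 # >1 mimics enhanced, <1 reduced dopaminergic gain
    q[action] += alpha * pe
    return q

def softmax_choice(q, beta=3.0):
    """Sample an action with softmax probabilities over values q."""
    z = beta * q - np.max(beta * q)
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(q), p=p)

q = np.zeros(2)
q = update_q(q, action=0, reward=1.0, pe_gain=1.5)   # larger effective prediction error
print(q, softmax_choice(q))
```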
Abstract:
Decision making in an uncertain environment poses a conflict between the opposing demands of gathering and exploiting information. In a classic illustration of this 'exploration-exploitation' dilemma, a gambler choosing between multiple slot machines balances the desire to select what seems, on the basis of accumulated experience, the richest option, against the desire to choose a less familiar option that might turn out more advantageous (and thereby provide information for improving future decisions). Far from representing idle curiosity, such exploration is often critical for organisms to discover how best to harvest resources such as food and water. In appetitive choice, substantial experimental evidence, underpinned by computational reinforcement learning (RL) theory, indicates that a dopaminergic, striatal and medial prefrontal network mediates learning to exploit. In contrast, although exploration has been well studied from both theoretical and ethological perspectives, its neural substrates are much less clear. Here we show, in a gambling task, that human subjects' choices can be characterized by a computationally well-regarded strategy for addressing the explore/exploit dilemma. Furthermore, using this characterization to classify decisions as exploratory or exploitative, we employ functional magnetic resonance imaging to show that the frontopolar cortex and intraparietal sulcus are preferentially active during exploratory decisions. In contrast, regions of striatum and ventromedial prefrontal cortex exhibit activity characteristic of an involvement in value-based exploitative decision making. The results suggest a model of action selection under uncertainty that involves switching between exploratory and exploitative behavioural modes, and provide a computationally precise characterization of the contribution of key decision-related brain systems to each of these functions.
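A minimal sketch of the trial-classification step described above, assuming that a choice of the currently highest-valued option counts as exploitative and any other choice as exploratory; the value estimates here are placeholders, whereas the study obtained them from a fitted learning model.

```python
import numpy as np

def classify_choice(value_estimates, chosen):
    """Label a choice exploitative if it picks the highest-valued option, else exploratory."""
    greedy = int(np.argmax(value_estimates))
    return "exploit" if chosen == greedy else "explore"

values = np.array([0.45, 0.30, 0.60, 0.20])   # illustrative slot-machine value estimates
print(classify_choice(values, chosen=2))       # -> exploit
print(classify_choice(values, chosen=1))       # -> explore
```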
Abstract:
The ability to use environmental stimuli to predict impending harm is critical for survival. Such predictions should be available as early as they are reliable. In pavlovian conditioning, chains of successively earlier predictors are studied in terms of higher-order relationships, and have inspired computational theories such as temporal difference learning. However, there is at present no adequate neurobiological account of how this learning occurs. Here, in a functional magnetic resonance imaging (fMRI) study of higher-order aversive conditioning, we describe a key computational strategy that humans use to learn predictions about pain. We show that neural activity in the ventral striatum and the anterior insula displays a marked correspondence to the signals for sequential learning predicted by temporal difference models. This result reveals a flexible aversive learning process ideally suited to the changing and uncertain nature of real-world environments. Taken with existing data on reward learning, our results suggest a critical role for the ventral striatum in integrating complex appetitive and aversive predictions to coordinate behaviour.
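A minimal sketch of temporal difference learning over a two-cue chain, showing how the prediction error delta = r + V(next) - V(current) propagates an aversive prediction back to the earlier cue, as in higher-order conditioning. The chain and parameters are illustrative, not the experimental design or fitted model.

```python
import numpy as np

alpha = 0.5                     # learning rate (illustrative)
V = np.zeros(2)                 # predictions attached to an early and a late cue

for episode in range(50):
    # early cue -> late cue (no outcome yet)
    delta_early = 0.0 + V[1] - V[0]
    V[0] += alpha * delta_early
    # late cue -> painful outcome (r = -1), end of episode
    delta_late = -1.0 + 0.0 - V[1]
    V[1] += alpha * delta_late

print(V)   # both cues approach -1; the early cue learns via the later predictor
```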