927 results for Chance constrained programming
Abstract:
This paper demonstrates that recent influential contributions to monetary policy imply an emerging consensus whereby neither rigid rules nor complete discretion is found optimal. Instead, middle-ground monetary regimes based on rules (operative under 'normal' circumstances) to anchor inflation expectations over the long run, but designed with enough flexibility to mitigate the short-run effects of shocks (with communicated discretion in 'exceptional' circumstances temporarily overriding these rules), are gaining support in theoretical models and in policy formulation and implementation. The opposition of 'rules versus discretion' has thus reappeared as the synthesis of 'rules cum discretion', in essence as inflation-forecast targeting. But such a synthesis is not without major theoretical problems, as we argue in this contribution. Furthermore, very recent real-world events have made it obvious that the inflation-targeting strategy of monetary policy, which rests upon the new consensus paradigm in modern macroeconomics, is at best a 'fair weather' model. In today's turbulent economic climate of highly unstable inflation, deep financial crisis, and abrupt worldwide economic slowdown, this approach needs serious rethinking, to say the least, if not abandonment altogether.
Abstract:
A key reason for pessimism about greenhouse gas emissions reduction is the 'motivation problem': those who could make the biggest difference prima facie have the least incentive to act, because they are best able to adapt. How can we motivate such people (and thereby everyone else) to accept, indeed to initiate, the lifestyle changes required for effective emissions reductions? This paper offers a Rawls-inspired account of the good of membership in an 'intergenerational cooperative union' to achieve justice, which provides a solution to the motivation problem.
Abstract:
This paper, one of a simultaneously published set, describes the establishment in 1990 of the UK standards project for the Pop programming language, and the progress of the project to the end of 1993.
Abstract:
In 1989, the computer programming language POP-11 turned 21 years old. This book looks at the reasons behind its invention and traces its rise from an experimental language to a major AI language, playing a major part in many innovative projects. There is a chapter on the inventor of the language, Robin Popplestone, and a discussion of the applications of POP-11 in a variety of areas. The efficiency of AI programming is covered, along with a comparison between POP-11 and other programming languages. The book concludes by reviewing the standardization of POP-11 into POP91.
Abstract:
The blind minimum output energy (MOE) adaptive detector for code division multiple access (CDMA) signals requires exact knowledge of the received spreading code of the desired user. This requirement can be relaxed by constraining the so-called surplus energy of the adaptive tap-weight vector, but the ideal constraint value is not easily obtained in practice. An algorithm is proposed to adaptively track this value and hence to approach the best possible performance for this class of CDMA detector.
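Since the abstract does not spell out the proposed tracking algorithm, the following is only a minimal sketch of the underlying surplus-energy-constrained MOE update, assuming the canonical decomposition w = s + x (with s the unit-norm spreading code of the desired user and x its orthogonal adaptive part) and a fixed constraint value chi; the variable names and the scaling step are illustrative, not taken from the paper.

```python
import numpy as np

def moe_update(x, s, r, mu, chi):
    """One stochastic-gradient update of a blind MOE detector whose
    tap-weight vector is w = s + x, with s the (unit-norm) spreading
    code of the desired user and x its orthogonal adaptive part.
    The 'surplus energy' ||x||^2 is capped at the constraint value chi.
    """
    w = s + x
    y = w @ r                       # detector output for chip vector r
    # gradient of y^2 w.r.t. x, projected orthogonal to s so that
    # the constraint w @ s = 1 is preserved
    grad = y * (r - (s @ r) * s)
    x = x - mu * grad
    # enforce the surplus-energy constraint ||x||^2 <= chi by scaling
    energy = x @ x
    if energy > chi:
        x *= np.sqrt(chi / energy)
    return x
```

In this formulation, the paper's contribution would correspond to replacing the fixed chi with an adaptively tracked estimate of the ideal constraint value.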
Abstract:
Across the world there are many bodies currently involved in researching the design of autonomous guided vehicles (AGVs). One of the greatest problems at present, however, is that much of the research work is being conducted in isolated groups, with the resulting AGV sensor/control/command systems being almost completely non-transferable to other AGV designs. This paper describes a new modular method for robot design which, when applied to AGVs, overcomes the above problems. The method is explained here with respect to all forms of robotics, but the examples have been specifically chosen to reflect typical AGV systems.
Abstract:
The premotor theory of attention claims that attentional shifts are triggered during response programming, regardless of which response modality is involved. To investigate this claim, event-related brain potentials (ERPs) were recorded while participants covertly prepared a left or right response, as indicated by a precue presented at the beginning of each trial. Cues signalled a left or right eye movement in the saccade task, and a left or right manual response in the manual task. The cued response had to be executed or withheld following the presentation of a Go/Nogo stimulus. Although there were systematic differences between ERPs triggered during covert manual and saccade preparation, lateralised ERP components sensitive to the direction of a cued response were very similar for both tasks, and also similar to the components previously found during cued shifts of endogenous spatial attention. This is consistent with the claim that the control of attention and of covert response preparation are closely linked. N1 components triggered by task-irrelevant visual probes presented during the covert response preparation interval were enhanced when these probes were presented close to the cued response hand in the manual task, and at the saccade target location in the saccade task. This demonstrates that both manual and saccade preparation result in spatially specific modulations of visual processing, in line with the predictions of the premotor theory.
Abstract:
The family of theories dubbed 'luck egalitarianism' represents an attempt to infuse egalitarian thinking with a concern for personal responsibility, arguing that inequalities are just when they result from, or to the extent that they result from, choice, but are unjust when they result from, or to the extent that they result from, luck. In this essay I argue that luck egalitarians should sometimes seek to limit inequalities even when they have a fully choice-based pedigree (i.e., result only from the choices of agents). I grant that the broad approach is correct, but argue that the temporal standpoint from which we judge whether the person can be held responsible, or the extent to which they can be held responsible, should be radically altered. Instead of asking, as Standard (or Static) Luck Egalitarianism seems to, whether or not, or to what extent, a person was responsible for the choice at the time of choosing, and asking the question of responsibility only once, we should ask whether, or to what extent, they are responsible for the choice at the point at which we are seeking to discover whether, or to what extent, the inequality is just; the question of responsibility is thus not settled but constantly under review. Such an approach will differ from Standard Luck Egalitarianism only if responsibility for a choice is not set in stone – if responsibility can weaken, then we should not see the boundary between luck and responsibility within a particular action as static. Drawing on Derek Parfit's illuminating discussions of personal identity, and on the contemporary literature on moral responsibility, I suggest there are good reasons to think that responsibility can weaken – that we are not necessarily fully responsible for a choice for ever, even if we were fully responsible at the time of choosing. I call the variant of luck egalitarianism that recognises this shift in temporal standpoint, and that responsibility can weaken, Dynamic Luck Egalitarianism (DLE). In conclusion I offer a preliminary discussion of what kind of policies DLE would support.
Abstract:
This paper describes an experimental application of constrained predictive control and feedback linearisation based on dynamic neural networks. It also experimentally verifies a method for handling input constraints, which are transformed by the feedback linearisation mappings. A performance comparison with a PID controller is also provided. The experimental system consists of a laboratory-based single-link manipulator arm, which is controlled in real time using MATLAB/SIMULINK together with data acquisition equipment.
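As a hedged illustration of the constraint-transformation step (the paper's actual mappings, built from dynamic neural networks, are not given in the abstract), here is a minimal sketch assuming a single-input linearising law u = alpha(x) + beta(x)·v, under which fixed actuator limits on u become state-dependent bounds on the virtual input v passed to the predictive controller; the function and bound names are illustrative assumptions.

```python
def virtual_input_bounds(alpha, beta, u_min, u_max):
    """Map actuator limits u_min <= u <= u_max through the feedback
    linearisation law u = alpha(x) + beta(x) * v into bounds on the
    virtual input v. alpha and beta are the law's terms evaluated at
    the current state; beta is assumed non-zero.
    """
    lo = (u_min - alpha) / beta
    hi = (u_max - alpha) / beta
    if beta < 0:              # a negative gain flips the inequality
        lo, hi = hi, lo
    return lo, hi

# Example: with alpha = 1.5, beta = -2.0 and torque limits [-5, 5],
# the admissible virtual-input interval is [-1.75, 3.25].
print(virtual_input_bounds(1.5, -2.0, -5.0, 5.0))
```

Because alpha and beta vary with the state, these bounds must be recomputed at every sampling instant before the constrained predictive optimisation is solved.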