913 results for targeted-agents
Abstract:
The work reported in this paper proposes ‘Intelligent Agents’, a Swarm-Array computing approach focused on applying autonomic computing concepts to parallel computing systems and building reliable systems for space applications. Swarm-array computing is a novel computing approach inspired by swarm robotics, considered a path to achieving autonomy in parallel computing systems. In the intelligent agent approach, a task to be executed on parallel computing cores is considered a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The approach is validated on a multi-agent simulator.
Abstract:
Travellers’ diarrhoea (TD) is the most common gastrointestinal illness to affect athletes competing abroad. Consequences of this debilitating condition include difficulties with training and/or participating in competitions which the athlete may have spent several years preparing for. Currently, there are no targeted strategies to reduce TD incidence in athletes. General methods used to reduce TD risk, such as avoidance of contaminated foods, chemoprophylactics and immunoprophylactics, have disadvantages. Since most causative agents of TD are microbial, strategies to minimise TD risks may be better focused on the gut microbiota. Prebiotics and probiotics can fortify the gut microbial balance, thus potentially aiding the fight against TD-associated microorganisms. Specific probiotics have shown promising actions against TD-associated microorganisms through antimicrobial activities. Use of prebiotics has led to an improved intestinal microbial balance which may be better equipped to combat TD-associated microorganisms. Both approaches have shown promising results in general travelling populations; therefore, a targeted approach for athletes has the potential to provide a competitive advantage.
Abstract:
The work reported in this paper is motivated towards handling single-node failures for parallel summation algorithms in computer clusters. An agent-based approach is proposed in which a task to be executed is decomposed into sub-tasks and mapped onto agents that traverse computing nodes. The agents intercommunicate across computing nodes to share information in the event of a predicted node failure. Two single-node failure scenarios are considered. The Message Passing Interface is employed for implementing the proposed approach. Quantitative results obtained from experiments reveal that the agent-based approach can handle failures more efficiently than traditional failure-handling approaches.
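The agent-based failure handling described in these abstracts can be sketched in plain Python. This is a hypothetical simulation, not the papers' MPI implementation; the `Agent`, `Node`, `migrate` and `reduce_sum` names are illustrative. A summation task is decomposed into sub-tasks mapped onto agents, and an agent migrates off a node whose failure is predicted, so the final reduction loses no partial result.

```python
# Minimal sketch (assumed, not from the papers): sub-tasks of a parallel
# summation are mapped onto agents; when a node failure is predicted, its
# agents migrate to a healthy node before the final reduction.

class Agent:
    """Carries one sub-task (a slice of the data) and its partial sum."""
    def __init__(self, data):
        self.partial = sum(data)

class Node:
    """A simulated computing node hosting zero or more agents."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.agents = []
        self.failing = False  # set True when a failure is predicted

def migrate(failing_node, healthy_node):
    """Move all agents off a node whose failure is predicted."""
    healthy_node.agents.extend(failing_node.agents)
    failing_node.agents = []

def reduce_sum(nodes):
    """Combine the agents' partial sums (the final reduction step)."""
    return sum(a.partial for n in nodes for a in n.agents if not n.failing)

# Example: sum 0..99 across four nodes; node 2's failure is predicted mid-run.
data = list(range(100))
nodes = [Node(i) for i in range(4)]
for i, node in enumerate(nodes):
    node.agents.append(Agent(data[i * 25:(i + 1) * 25]))

nodes[2].failing = True
migrate(nodes[2], nodes[3])            # agents escape the failing node
assert reduce_sum(nodes) == sum(data)  # 4950: no partial result is lost
```

In the papers' setting the migration and reduction would be realised with MPI message passing between real processes rather than in-process object moves.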
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated towards bridging this gap by proposing a swarm-array computing approach based on ‘Intelligent Agents’ to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve to achieve the self-ware objectives of autonomic computing, hence applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
Abstract:
A theory-based healthy-eating leaflet was evaluated against an existing publicly available standard leaflet. The intervention leaflet was designed to encourage healthy eating in 18-30 year olds and was developed by modifying an existing British Nutrition Foundation leaflet. The intervention leaflet targeted attitudes and self-efficacy. Participants (n=104) were randomly assigned to the Intervention, Foundation or a local food leaflet control condition. Cognitions were measured pre-intervention, immediately after reading the corresponding leaflet, and again at a two-week follow-up. Critically, intentions to eat healthily were significantly greater at follow-up in the Intervention group than in the other two groups, with the intervention leaflet also being perceived as more persuasive. The Intervention group also showed evidence of healthier eating at two weeks compared with the other two groups. Collectively, the results illustrate the utility of a targeted, theory-based approach.
Abstract:
For a targeted observations case, the dependence of the size of the forecast impact on the targeted dropsonde observation error in the data assimilation is assessed. The targeted observations were made in the lee of Greenland; the dependence of the impact on the proximity of the observations to the Greenland coast is also investigated. Experiments were conducted using the Met Office Unified Model (MetUM), over a limited-area domain at 24-km grid spacing, with a four-dimensional variational data assimilation (4D-Var) scheme. Reducing the operational dropsonde observation errors by one-half increases the maximum forecast improvement from 5% to 7%–10%, measured in terms of total energy. However, the largest impact is seen by replacing two dropsondes on the Greenland coast with two farther from the steep orography; this increases the maximum forecast improvement from 5% to 18% for an 18-h forecast (using operational observation errors). Forecast degradation caused by two dropsonde observations on the Greenland coast is shown to arise from spreading of data by the background errors up the steep slope of Greenland. Removing boundary layer data from these dropsondes reduces the forecast degradation, but it is only a partial solution to this problem. Although only from one case study, these results suggest that observations positioned within a correlation length scale of steep orography may degrade the forecast through the anomalous upslope spreading of analysis increments along terrain-following model levels.
Abstract:
This paper explores principal‐agent issues in the stock selection processes of institutional property investors. Drawing upon an interview survey of fund managers and acquisition professionals, it focuses on the relationships between principals and external agents as they engage in property transactions. The research investigated the extent to which the presence of outcome‐based remuneration structures could lead to biased advice, overbidding and/or poor asset selection. It is concluded that institutional property buyers are aware of incentives for opportunistic behaviour by external agents, often have sufficient expertise to robustly evaluate agents’ advice and that these incentives are counter‐balanced by a number of important controls on potential opportunistic behaviour. There are strong counter‐incentives in the need for the agents to establish personal relationships and trust between themselves and institutional buyers, to generate repeat and related business and to preserve or generate a good reputation in the market.
Abstract:
The ‘action observation network’ (AON), which is thought to translate observed actions into motor codes required for their execution, is biologically tuned: it responds more to observation of human, than non-human, movement. This biological specificity has been taken to support the hypothesis that the AON underlies various social functions, such as theory of mind and action understanding, and that, when it is active during observation of non-human agents like humanoid robots, it is a sign of ascription of human mental states to these agents. This review will outline evidence for biological tuning in the AON, examining the features which generate it, and concluding that there is evidence for tuning to both the form and kinematic profile of observed movements, and little evidence for tuning to belief about stimulus identity. It will propose that a likely reason for biological tuning is that human actions, relative to non-biological movements, have been observed more frequently while executing corresponding actions. If the associative hypothesis of the AON is correct, and the network indeed supports social functioning, sensorimotor experience with non-human agents may help us to predict, and therefore interpret, their movements.
Abstract:
In negotiating commercial leases, many landlords and tenants employ property agents (brokers) to act on their behalf; typically these people are chartered surveyors. The aim of this paper is to explore the role that these brokers play in the shaping of commercial leases in the context of the current debate in the UK on upward-only rent reviews (UORRs). This role can be described using agency theory and the theories of professionalism. These provide expectations of behaviour which show inherent tensions between the role of agent and professional, particularly regarding the use of knowledge, autonomy and the obligation to the public interest. The parties to eleven recent lease transactions were interviewed to see whether the brokers conformed to the expectations of agency theory or professionalism. Brokers that acted for industrial and office tenants behaved as professionals in using their expertise to determine lease structures. However, those acting for landlords and retail tenants simply followed instructions and behaved as conduits for their clients, a role more usually associated with that of an agent within the principal-agent relationship. None of the landlords’ brokers saw themselves as having responsibilities beyond their clients, and so they were not promoting the discussion of alternatives to the UORR. The evidence from these case studies suggests that agents are not professionals; to behave entirely as an agent is to contradict the essential characteristics of a professional. While brokers cannot be held entirely responsible for the lack of movement on the UORR, by adopting predominantly agent roles they must take some of the blame. Behind this, however, may be a much larger issue that needs to be explored: the institutional pressures that lead professionals to behave in this way.