951 results for Significance-driven computing
Abstract:
A Blueprint for Affective Computing: A sourcebook and manual is the first attempt to ground affective computing within the disciplines of psychology, affective neuroscience, and philosophy. This book illustrates the contributions of each of these disciplines to the development of the ever-growing field of affective computing. In addition, it presents practical examples of cross-fertilization between disciplines to highlight the need to integrate computer science, engineering, and the affective sciences.
Abstract:
Cheese currently suffers from an adverse nutritional image, largely due to a perceived association between the saturated fatty acid, cholesterol, and salt content of cheese and cardiovascular disease. However, cheese is also a rich source of essential nutrients such as proteins, lipids, vitamins, and minerals, which form an integral part of a healthy diet. This review outlines the composition, structure, and physiological characteristics of the nutritionally significant components of cheese, whilst presenting some of the controversies that surround the role of cheese in dietary guidelines and the potential cheese has to improve health in the UK population.
Abstract:
Dissolved organic carbon (DOC) concentrations have been rising in streams and lakes draining catchments with organic soils across Northern Europe. These increases have shown a correlation with decreased sulphate and chloride concentrations. One hypothesis to explain this phenomenon is that these relationships arise from increased DOC release from soils to freshwaters, caused by a decline in pollutant sulphur and sea-salt deposition. We carried out controlled deposition experiments in the laboratory on intact peat and organomineral O-horizon cores to test this hypothesis. Preliminary data showed a clear correlation between the change in soil water pH and the change in DOC concentration; however, it remains uncertain whether this is driven by changes in biological activity or in chemical solubility.
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper aims to bridge this gap by proposing a swarm-array computing approach based on ‘Intelligent Agents’ to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve to achieve the self-ware objectives of autonomic computing, hence applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
Abstract:
Recent research in multi-agent systems incorporates fault-tolerance concepts. However, this research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely ‘Intelligent Agents’. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The agents thus contribute to fault tolerance and to building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
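The carrier-agent mechanism described in the two abstracts above can be sketched in a few lines: sub-tasks are wrapped in agents that hop to a healthy core when their current core is predicted to fail. All names here (`Core`, `CarrierAgent`, `predict_failure`) are hypothetical illustrations under stated assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the carrier-agent idea: each agent carries one sub-task
# and migrates to a healthy core before executing if its current core is
# predicted to fail. Illustrative only; names are invented for this sketch.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.healthy = True

class CarrierAgent:
    def __init__(self, subtask, core):
        self.subtask = subtask      # a callable representing the sub-task
        self.core = core

    def migrate(self, cores):
        # Move to the first healthy core other than the current one.
        for core in cores:
            if core.healthy and core is not self.core:
                self.core = core
                return True
        return False

    def step(self, cores, predict_failure):
        # If the current core is predicted to fail, migrate before executing.
        if predict_failure(self.core):
            self.migrate(cores)
        return self.subtask()

def predict_failure(core):
    # Stand-in failure predictor: reports cores already flagged unhealthy.
    return not core.healthy

cores = [Core(i) for i in range(4)]
agents = [CarrierAgent(lambda i=i: i * i, cores[i]) for i in range(4)]

cores[2].healthy = False            # simulate a predicted core failure
results = [a.step(cores, predict_failure) for a in agents]
print(results)                      # all sub-tasks complete despite the failure
print(agents[2].core.core_id)       # agent 2 has migrated off core 2
```

A real system would replace the stand-in predictor with hardware health monitoring and the in-process "migration" with message passing between nodes, as in the MPI experiments the abstract mentions.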
Abstract:
This study uses meta-analysis to address the extent to which insecure and disorganized attachments increase risk for externalizing problems. Across 69 samples (N = 5,947), the association between insecurity and externalizing problems was significant, d = 0.31 (95% CI: 0.23, 0.40). Larger effects were found for boys (d = 0.35), for clinical samples (d = 0.49), and for observation-based outcome assessments (d = 0.58). Larger effects were also found for attachment assessments other than the Strange Situation. Overall, disorganized children appeared at elevated risk (d = 0.34, 95% CI: 0.18, 0.50), with weaker effects for avoidance (d = 0.12, 95% CI: 0.03, 0.21) and resistance (d = 0.11, 95% CI: −0.04, 0.26). The results are discussed in terms of the potential significance of attachment for mental health.
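The confidence intervals quoted above follow the standard relation CI = d ± 1.96 × SE. As an illustration only (not a re-analysis of the study), the implied standard error can be recovered from the reported overall result, d = 0.31 with 95% CI (0.23, 0.40):

```python
# Recover the implied standard error from a reported 95% CI, and check
# whether the interval excludes zero (significance at p < .05).
# Numbers are taken from the abstract; this is arithmetic, not a re-analysis.

lo, hi = 0.23, 0.40
se = (hi - lo) / (2 * 1.96)         # implied standard error, about 0.043

# An interval excluding zero corresponds to p < .05, which is why the
# resistance effect (95% CI: -0.04, 0.26) is not significant.
significant = lo > 0 or hi < 0
print(round(se, 3), significant)
```

The same check applied to the resistance interval (−0.04, 0.26) yields `significant == False`, matching the abstract's pattern of results.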
Abstract:
A theory-based healthy-eating leaflet was evaluated against an existing publicly available standard leaflet. The intervention leaflet was designed to encourage healthy eating in 18-30 year olds and was developed by modifying an existing British Nutrition Foundation leaflet. The intervention leaflet targeted attitudes and self-efficacy. Participants (n=104) were randomly assigned to the intervention leaflet, the Foundation leaflet, or a local food leaflet (control) condition. Cognitions were measured pre-intervention, immediately after reading the corresponding leaflet, and again at two-week follow-up. Critically, intentions to eat healthily were significantly greater at follow-up in the intervention group than in the other two groups, and the intervention leaflet was also perceived as more persuasive. The intervention group also showed evidence of healthier eating at two weeks compared to the other two groups. Collectively, the results illustrate the utility of a targeted, theory-based approach.
Abstract:
Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. A user can thus obtain, in effect, the performance of a supercomputer by using the spare cycles of other workstations.